diff --git "a/deduped/dedup_0516.jsonl" "b/deduped/dedup_0516.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0516.jsonl" @@ -0,0 +1,44 @@ +{"text": "The primate-specific Alu elements, which originated 65 million years ago, exist in over a million copies in the human genome. These elements have been involved in genome shuffling and various diseases not only through retrotransposition but also through large scale Alu-Alu mediated recombination. Only a few subfamilies of Alus are currently retropositionally active and show insertion/deletion polymorphisms with associated phenotypes. Retroposition occurs by means of RNA intermediates synthesised by a RNA polymerase III promoter residing in the A-Box and B-Box in these elements. Alus have also been shown to harbour a number of transcription factor binding sites, as well as hormone responsive elements. The distribution of Alus has been shown to be non-random in the human genome and these elements are increasingly being implicated in diverse functions such as transcription, translation, response to stress, nucleosome positioning and imprinting.We conducted a retrospective analysis of putative functional sites, such as the RNA pol III promoter elements, pol II regulatory elements like hormone responsive elements and ligand-activated receptor binding sites, in Alus of various evolutionary ages. We observe a progressive loss of the RNA pol III transcriptional potential with concomitant accumulation of RNA pol II regulatory sites. We also observe a significant over-representation of Alus harboring these sites in promoter regions of signaling and metabolism genes of chromosome 22, when compared to genes of information pathway components, structural and transport proteins. This difference is not so significant between functional categories in the intronic regions of the same genes.Our study clearly suggests that Alu elements, through retrotransposition, could distribute functional and regulatable promoter elements, which in the course of subsequent selection might be stabilized in the genome. Exaptation of regulatory elements in the preexisting genes through Alus could thus have contributed to evolution of novel regulatory networks in the primate genomes. With such a wide spectrum of regulatory sites present in Alus, it also becomes imperative to screen for variations in these sites in candidate genes, which are otherwise repeat-masked in studies pertaining to identification of predisposition markers. In the post genome sequence era, repetitive sequences, erstwhile considered junk and devoid of function, are increasingly being implicated in many cellular functions, genome organization and diseases -8. Alu rAlus have been shown to harbor a number of regulatory sites like hormone response element (HRE), and a couple of ligand activated transcription factor binding sites -24. ThesAlus originally demonstrated to have non uniform distribution on the chromosomes through banding studies ,34 have Identification and analysis of various permutations and combinations of these regulatory elements in otherwise conserved repetitive Alus are mostly excluded from genetic analysis. Since, Alus occupy a tenth of the human genome, it is imperative to identify those, which might assume function in the proper context. 
Our primary aim in this analysis is to find out if any bias exists in the distribution of transcriptional regulatory sites in Alus of various evolutionary ages and their distribution with respect to the functional classes of genes.As a first step toward examining the role of these regulatory sites, we mapped their most probable positions on Alus, using in house developed algorithms Figure . This waNearly all the analyzed regulatory sites for RNA polymerase II (RNA pol II) are distributed in the region between A- Box and B-Box with more clustering near the B-Box region Figure . There iin vitro as well as in vivo studies in the 'B' Box that 'G' and 'T' residues at the 1st and 3rd positions respectively are very critical for it's functioning [th position in this case is mutated to \"T\" in the older families. The Yb8 family that has been reported to be transcriptionally and retropositionally active amongst the younger subfamilies, retains the B'-Box element in a significant fraction. This suggests that even though retropositionally competent younger Alus are hypothesized to be transcriptionally active, only a minority retains consensus B'-Box. It is possible that the enhancing activity of the A Box is sufficient to drive transcription from the weaker B'- Box in the younger subfamilies. Our findings corroborates well with an earlier study in which presence of all subfamilies in the RNA polymerase III driven Alu transcript pool was reported [Majority of Alu retroposition has ceased at least 30 million years ago and only a few Alu subfamilies are still active ,17,41. Tctioning . Our anactioning for the reported . Additioreported . Additioreported ,49, tranAlus have been demonstrated to exert effects at transcription, post-transcription as well as at the translation level. In an earlier study on complete chromosomes 21 and 22, we have demonstrated that the Alu elements are clustered in genes of signaling, metabolic and transport proteins and rarely present in the structural and information proteins . This clGene inversions, duplications and formation of pseudogenes have been extensively reported to be mediated both through retrotransposition as well as recombination of Alus. This, in many cases, has also been associated with aberrant gene expression. For instance, presence of AML sites in an Alu upstream of MPO gene, has been first demonstrated to be associated with Acute Myelocytic Leukemia . This isAnalysis of regulatory sites within Alus suggests that a polymorphic Alu has the potential to transpose and recombine which allows it to integrate at random sites in the genome. They also harbour potential regulatory sites, which could evolve to become accessory sites for RNA pol II transcription as revealed by their clustering in older subfamilies. Further, the Alu sequence due to acquisition of novel functions could form a part of the transcription repertoire involved in the regulation of the downstream /associated genes and create novel regulatory networks Figure . These rComparison of sequences in the regulatory regions of many homologous genes in human have shown accumulation of Alus, not only post divergence from non-human primates but also during primate evolution . PerhapsCurrently, Alus are repeat-masked in all studies pertaining to identification of predisposition markers in complex disorders. 
With such wide spectrum of nuclear receptors, which play a major role in maintaining normal physiological state and affect as diverse processes as development, reproduction, general metabolism, residing in Alus, it therefore becomes imperative to screen for variations in these sites. This might have important consequences in the candidate genes for those complex diseases that are triggered in response to hormonal imbalances as well as other environmental cues.126 polymorphic Alu sequences cited in literature ,40 were Information about the regulatory sites and their sequences was collected from various literature sources Table . CharactTwo different programs were written in order to locate the most probable biologically significant regions. A local alignment based program, Xalign, was implemented in C++, Red Hat 7.3 based Linux. This program finds the probable sites by aligning the consensus of regulatory site with the query sequence. Multiple queries with a size upto 600 nucleotides can be taken at a time. Another program, Promotif, was implemented in C++, Red Hat 7.3 based Linux, using the probabilistic modeling approach. It uses the position weight matrix, normalization of the positions with conservation index , and inter-nucleotide dependence in terms of transition matrix to find out the sites. Position weight matrices were generated using Gibbs Motif Sampler, for every site included in the program. The sequences for position weight matrix generation were carefully selected based on the sequence and length reported for each binding site. The final length for search was fixed at the lowest length observed. This provides element specific matrix with lesser chance for the selection on non-RE regions. For the sites analyzed, it had an in built transition matrix, position weight matrix and conservation index. Batch analysis of over a thousand Alu sequences can be performed with this program.Using the annotated sequences from literature as well as from NCBI web page, training set for the probabilistic model was created. Training was done for approximately 70% sequences and rest of the sequences were taken as test set. Details of the program along with the equations used are available on request.About 126 recently integrated Alus from younger subfamilies were searched in the human genome using BLASTn at NCBI server and regulatory sites were mapped in these regions using the programs discussed above.Alus in the promoter regions and intronic regions of functionally classified genes of chromRS developed the algorithms and programs for identifying regulatory and significant regions, carried out the analysis of distribution of these sites in Alu subfamilies, association analysis and drafted the manuscript. DG was involved in chromosome 22 analyses. SKB participated in the design of the study. MM conceived of the study, participated in its design, analysis, coordination and manuscript preparation. All authors read and approved the final manuscript.The analysis over the promoter and intronic regions has been performed through the data given in the supplementary table file, supplementary table 3_ravishankar et al. Format: .xls. For human chromosome 22, the data contains the accession number, associated Alu family, the respective positions, functional class of the region and further details, for each associated regulatory element found within the Alu repeats in the 5' flanking promoter and intronic regions. The zipped file name is supplementary 1.zip. 
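The scoring scheme described above for Promotif (a position weight matrix, a per-position conservation index, and a transition matrix capturing inter-nucleotide dependence) can be made concrete with a short Python illustration. This is not the authors' C++ implementation; the function names, data shapes and threshold handling are assumptions made only for the example.

```python
import math

def score_site(seq, pwm, conservation, transition, background=0.25):
    """Log-odds score for one candidate site.

    pwm[i][b]        -- probability of base b at position i
    conservation[i]  -- weight in [0, 1] for how conserved position i is
    transition[a][b] -- probability of base b directly following base a
    """
    score = 0.0
    for i, base in enumerate(seq):
        p = max(pwm[i][base], 1e-6)
        score += conservation[i] * math.log2(p / background)
        if i > 0:  # first-order inter-nucleotide dependence
            t = max(transition[seq[i - 1]][base], 1e-6)
            score += math.log2(t / background)
    return score

def scan(promoter, pwm, conservation, transition, threshold):
    """Slide the model along a promoter and keep windows scoring above threshold."""
    width = len(pwm)
    hits = []
    for start in range(len(promoter) - width + 1):
        window = promoter[start:start + width]
        if set(window) <= set("ACGT"):
            s = score_site(window, pwm, conservation, transition)
            if s >= threshold:
                hits.append((start, window, round(s, 2)))
    return hits
```

In the tool itself the matrices were generated with the Gibbs Motif Sampler from curated binding-site alignments; any dictionaries with the shapes described in the docstring would work with this sketch.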
Details about programs used are on request for academic users.Click here for file"} +{"text": "In the current era of high throughput genomics a major challenge is the genome-wide identification of target genes for specific transcription factors. Chromatin immunoprecipitation (ChIP) allows the isolation of in vivo binding sites of transcription factors and provides a powerful tool for examining gene regulation. Crosslinked chromatin is immunoprecipitated with antibodies against specific transcription factors, thus enriching for sequences bound in vivo by these factors in the immunoprecipitated DNA. Cloning and sequencing the immunoprecipitated sequences allows identification of transcription factor target genes. Routinely, thousands of such sequenced clones are used in BLAST searches to map their exact location in the genome and the genes located in the vicinity. These genes represent potential targets of the transcription factor of interest. Such bioinformatics analysis is very laborious if performed manually and for this reason there is a need for developing bioinformatic tools to automate and facilitate it.Transcription Factor Target Mapper). TF Target Mapper is a BLAST search tool allowing rapid extraction of annotated information on genes around each hit. It combines sequence cleaning/filtering, pattern searching and BLAST searches with extraction of information on genes located around each BLAST hit and comparisons of the output list of genes or gene ontology IDs with user-implemented lists. We successfully applied and tested TF Target Mapper to analyse sequences bound in vivo by the transcription factor GATA-1. We show that TF Target Mapper efficiently extracted information on genes around ChIPed sequences, thus identifying known (e.g. \u03b1-globin and \u03b6-globin) and potentially novel GATA-1 gene targets.In order to facilitate this analysis we generated TF Target Mapper (TF Target Mapper is a very efficient BLAST search tool that allows the rapid extraction of annotated information on the genes around each hit. It can contribute to the comprehensive bioinformatic transcriptome/regulome analysis, by providing insight into the mechanisms of action of specific transcription factors, thus helping to elucidate the pathways these factors regulate. In the current era of high throughput genomics there is a need for bioinformatic tools that are able to: 1. Automate and facilitate the storage and handling of large numbers of sequences and 2. Mine and decipher information contained therein. The interpretation of such data can provide new insight into sequence-function relationships and transcriptional/post-transcriptional regulatory mechanisms. A major challenge today is the genome-wide identification of target genes/regulatory elements for specific transcription factors. Chromatin immunoprecipitation (ChIP) allows the isolation of in vivo binding sites of transcription factors and is a powerful tool for examining gene regulation [The web front-end is programmed in PHP v4.3) running runningTranscription Factor Target Mapper). This entails five functions .Cleaning allows the user to strip the submitted sequences of vector sequence contamination and repetitive elements. Since cloned chromatin immunoprecipitated DNA fragments are usually small in size, vector sequences might be present on both sides of the inserts/submitted sequences and should be stripped before the BLAST searches. 
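In the actual pipeline this stripping is done by Cross_Match; the fragment below is only a naive stand-in that shows the idea of clipping flanking vector sequence and enforcing a minimum insert length. The two thresholds loosely mirror the "vector clipping minimum match" and "insert length threshold" parameters mentioned below; everything else is an illustrative assumption.

```python
def strip_vector(read, vector, min_match=12, min_insert=40):
    """Naively clip vector sequence from both ends of a cloned read.

    Uses longest exact end matches against the vector; real tools such as
    Cross_Match use scored Smith-Waterman alignments instead.
    """
    start, end = 0, len(read)
    # 5' end: longest read prefix that occurs in the vector
    for k in range(len(read), min_match - 1, -1):
        if read[:k] in vector:
            start = k
            break
    # 3' end: longest remaining read suffix that occurs in the vector
    for k in range(len(read) - start, min_match - 1, -1):
        if read[end - k:] in vector:
            end -= k
            break
    insert = read[start:end]
    return insert if len(insert) >= min_insert else None
```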
The user can upload specific vector sequences and set various parameters like vector clipping minimum match and score and insert length threshold. The stripping of the vector sequences is implemented by using the Cross_Match program . Most clPattern recognition allows the user to identify specific combinations of transcription factor binding sites in the cleaned input sequences. The user can upload transcription factors of interest as a file with TRANSFAC Matrix entries from the TRANSFAC database . TF TargBLAST searches allow the user to identify the exact location of the sequence in the genome ,13. CleaThe output list of genes can be compared to a list of known target genes for the specific transcription factor, if available. This allows the user to perform a quick comparison of his/her findings with what is already published or obtained from other sources, such as array analyses. Such comparisons provide bioinformatic validation of the ChIP experiment. A second comparison involves Gene Ontology (GO) IDs corresponding to the output list of genes. This list can be compared to a user's implemented list of GO IDs. This feature identifies genes associated with specific functions, processes, pathways or cellular components and allows extraction of specific genes from the TF Target Mapper list related to a specific function of interest. Gene and GO ID lists of interest can be uploaded using the parameters settings page.\u03b1-globin and \u03b6-globin . To assess if these sequences were real targets of GATA-1 , we thenAn increasing number of genomic ChIP approaches rely on the high throughput sequencing of sequence tags from cloned ChIPed DNA . We therTF Target Mapper facilitates the bioinformatic analysis of libraries generated by cloning chromatin immunoprecipitated DNA. Whilst essentially developed for this purpose, TF Target Mapper is a tool of general utility that can be used with any set of sequences that require the extraction of specific information in a window around a BLAST hit against a known genome. A useful feature is that it allows the user to easily repeat the BLAST searches when a new genome version is released and to compare the results on the annotated information around each hit in between versions.ChIP assays result in high background due to non-specific binding of DNA. Whereas recent experimental approaches have been developed aimed at reducing the background prior to cloning the ChIPed DNA (e.g. ), a usefTF Target Mapper was mainly used and tested with the mouse genome and we are presently expanding it for the human genome. It can also be expanded to include any of the other genomes in the Ensembl database. The utility of this tool will extend to the analysis of clusters of transcription factor binding sites in the wider area around each BLAST hit and implementation of other databases (e.g. microarray expression data), allowing for better prediction of real target genes.We devised TF Target Mapper, a BLAST search tool for the automatic extraction of annotated information on genes around chromatin immunoprecipitated sequences. We tested and demonstrated the efficiency of this tool with sequences bound in vivo by the hematopoietic transcription factor GATA-1. We anticipate that TF Target Mapper will contribute to the comprehensive bioinformatic transcriptome/regulome analysis aimed at investigating gene regulation. 
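Concretely, the extraction step, collecting annotated genes that lie within a window around each BLAST hit and intersecting them with user-supplied gene and GO ID lists, can be sketched as below. The real tool performs this through the Ensembl/BioPerl APIs against a MySQL database; the window size and the dictionary fields used here are illustrative assumptions.

```python
def genes_near_hits(hits, genes, window=10_000):
    """hits  : iterable of (chromosome, position) for BLAST hits
    genes : iterable of dicts with 'name', 'chrom', 'start', 'end', 'go_ids'
    Returns genes whose annotated span lies within `window` bp of a hit."""
    nearby = []
    for chrom, pos in hits:
        for g in genes:
            if g["chrom"] == chrom and g["start"] - window <= pos <= g["end"] + window:
                nearby.append(g)
    return nearby

def compare_with_lists(candidates, known_targets, go_ids_of_interest):
    """Intersect candidate genes with known targets and with GO IDs of interest."""
    names = {g["name"] for g in candidates}
    confirmed = sorted(names & set(known_targets))
    matching_go = sorted(g["name"] for g in candidates
                         if set(g["go_ids"]) & set(go_ids_of_interest))
    return confirmed, matching_go
```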
It can provide insights into the mechanisms of action of specific transcription factors and help elucidate the metabolic and developmental pathways these factors regulate.TF Target Mapper.For use: Standard WWW browser (Mozilla/Firefox/I.E.); For server: GNU\\Linux or Irix (tm SGI).PHP, SQL, Perl, BioPerl.Ensembl & Bio Perl APIs, Perl, RepeatMasker, Cross_Match, MySQL database server, PHP-enabled Web server (e.g. Apache), NCBI Blast. Locally available NCBI formatted Mouse Genome sequence.ErasmusMC license is needed for people that wish to obtain the code.License needed.TF: Transcription Factor.ChIP: Chromatin Immunoprecipitation.BLAST: Basic Local Alignment Search Tool.GO ID: Gene Ontology Identity.HSP: High-scoring segment pair.SH generated the code, the web interface and tested TF Target Mapper. MJM worked on the visualisation of hits on the chromosome ideograms and on the help pages, made contributions with ideas and was involved in critically correcting the manuscript. VCLdJ provided a template for the web interface, offered support concerning computer system maintenance and made contributions with ideas. PvdS has given support and guidance for the bioinformatic part of the project. FG and JS have made contributions with ideas to the project and were involved in revising and critically correcting the manuscript. EZK carried out the experiments for generating the sequences analysed, designed and supervised the project, tested TF Target Mapper and wrote the manuscript. All authors read and approved the final manuscript.TF Target Mapper application analytical flowchart : Analytical flowchart of the TF Target Mapper application including all its functions .Click here for fileChromatin immunoprecipitation (ChIP) : Description of the ChIP method.Click here for fileChromatin immunoprecipitation (ChIP) to confirm sequences analysed as GATA-1 targets : Chromatin immunoprecipitation (ChIP) experiments with GATA-1 antibodies to confirm sequences analyzed by TF Target Mapper as GATA-1 targets. Semi-quantitative PCR was used with primers specific for sequences that were found by TF Target Mapper analysis to contain binding sites for hematopoietic transcription factors. The control experiments refer to ChIP performed with rat IgG, whereas GATA-1 ChIP assays were performed with the GATA-1 N6 rat monoclonal antibody. Input refers to DNA from formaldehyde crosslinked sonicated chromatin. It can be seen that most of the sequences tested (with the only exception of the sequence G) were enriched by the GATA-1 antibody compared to the control. The chromosomes where the sequences map are also depicted (chr: chromosome).Click here for file"} +{"text": "Cis-regulatory modules are combinations of regulatory elements occurring in close proximity to each other that control the spatial and temporal expression of genes. The ability to identify them in a genome-wide manner depends on the availability of accurate models and of search methods able to detect putative regulatory elements with enhanced sensitivity and specificity.We describe the implementation of a search method for putative transcription factor binding sites (TFBSs) based on hidden Markov models built from alignments of known sites. We built 1,079 models of TFBSs using experimentally determined sequence alignments of sites provided by the TRANSFAC and JASPAR databases and used them to scan sequences of the human, mouse, fly, worm and yeast genomes. 
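MAPPER builds full profile hidden Markov models from such alignments; the simplified sketch below only illustrates the general workflow of turning an alignment of known sites into a position-specific scoring model and scanning a sequence with it, without the insertion and deletion states an HMM adds. The toy alignment and pseudocount are assumptions for the example.

```python
import math
from collections import Counter

def build_model(aligned_sites, pseudocount=0.5):
    """Per-position log-odds scores from an ungapped alignment of known sites."""
    model = []
    for i in range(len(aligned_sites[0])):
        counts = Counter(site[i] for site in aligned_sites)
        total = sum(counts[b] + pseudocount for b in "ACGT")
        model.append({b: math.log2((counts[b] + pseudocount) / total / 0.25)
                      for b in "ACGT"})
    return model

def best_hit(sequence, model):
    """Return (score, offset) of the best-scoring window in the sequence."""
    width = len(model)
    best = (float("-inf"), -1)
    for start in range(len(sequence) - width + 1):
        window = sequence[start:start + width]
        if set(window) <= set("ACGT"):
            score = sum(model[i][b] for i, b in enumerate(window))
            best = max(best, (score, start))
    return best

# toy alignment of GATA-like sites, purely illustrative
sites = ["AGATAA", "TGATAA", "AGATAG", "CGATAA"]
print(best_hit("TTACCAGATAAGGC", build_model(sites)))
```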
In several cases tested the method identified correctly experimentally characterized sites, with better specificity and sensitivity than other similar computational methods. Moreover, a large-scale comparison using synthetic data showed that in the majority of cases our method performed significantly better than a nucleotide weight matrix-based method., allows the identification, visualization and selection of putative TFBSs occurring in the promoter or other regions of a gene from the human, mouse, fly, worm and yeast genomes. In addition it allows the user to upload a sequence to query and to build a model by supplying a multiple sequence alignment of binding sites for a transcription factor of interest. Due to its extensive database of models, powerful search engine and flexible interface, MAPPER represents an effective resource for the large-scale computational analysis of transcriptional regulation.The search engine, available at Their presence suggests the existence of a combinatorial code for transcriptional regulation , that was generated using a methodology similar to the one described in this paper. The application is written in Common Lisp and relies on a development environment for web-based applications developed by the authors. Genomic annotations from the UCSC Genome Browser, TRANSFAC, JASPAR and HomoloGene information were used to build a MySQL relational database storing data about genes, transcription factors, and their binding sites. We implemented a web-based system, accessible at HMM \u2013 hidden Markov model; NWM \u2013 nucleotide weight matrix; ROC curve \u2013 receiver operating characteristic curve; TF \u2013 transcription factor; TFBS \u2013 transcription factor binding site; TPP test \u2013 true positive proportion test.Word file containing links to the factors table and the results of the small-scale and large-scale evaluations.Click here for fileExcel file containing detailed results of the small-scale evaluation.Click here for file"} +{"text": "Understanding the regulatory processes that coordinate the cascade of gene expression leading to male gamete development has proven challenging. Research has been hindered in part by an incomplete picture of the regulatory elements that are both characteristic of and distinctive to the broad population of spermatogenically expressed genes.K-SPMM, a database of murine Spermatogenic Promoters Modules and Motifs, has been developed as a web-based resource for the comparative analysis of promoter regions and their constituent elements in developing male germ cells. The system contains data on 7,551 genes and 11,715 putative promoter regions in Sertoli cells, spermatogonia, spermatocytes and spermatids. K-SPMM provides a detailed portrait of promoter site components, ranging from broad distributions of transcription factor binding sites to graphical illustrations of dimeric modules with respect to individual transcription start sites. Binding sites are identified through their similarities to position weight matrices catalogued in either the JASPAR or the TRANSFAC transcription factor archives. A flexible search function allows sub-populations of promoters to be identified on the basis of their presence in any of the four cell-types, their association with a list of genes or their component transcription-factor families.This system can now be used independently or in conjunction with other databases of gene expression as a powerful aid to research networks of co-regulation. 
We illustrate this with respect to the spermiogenically active protamine locus in which binding sites are predicted that align well with biologically foot-printed protein binding domains. K-Means and hierarchical clustering analyses are increasingly used in microarray studies to reveal correlated expression between groups of genes. Through time-series experiments, regimes of co-expression and regulatory cascades have been described. Nonetheless, determining the mechanistic relationship underlying co-regulation has not been trivial. The subtle interplay of systems controlling expression makes hidden variable models attractive to the analyst but ultimately problematic for the biologist seeking verifiable pathways. Studies of co-regulation have been most effective in bridging this gap when gene expression data has been used in conjunction with data describing transcription factor specificity. The approach allows the agents and outcomes of regulation to be explicitly connected .Transcriptional mechanisms regulating expression are currently thought to include binary differentiating systems that potentiate chromatin for transcription as well as a scalar mechanism that determines the extent and products of expression. The binding of transcription factors within critically defined promoter regions is thought to be a class of scalar regulation that initiates transcription only after binary control mechanisms have potentiated the chromatin locus . ReverseKrawetz-Lab database of Spermatogenic Promoters Modules &Motifs), provides online access to a suite of promoter structure-based analytical tools. This employs a database of known transcriptional control elements as an in-silico discovery tool that is targeted to the promoter regions of a set of testes expressed genes that regulate male germ cell differentiation.The differentiation of cells in the testes occurs continuously in adult mice through the serial interplay of gene expression that affects approximately one third of the genome. This includes an estimated 4% of genes that are uniquely expressed during spermatogenesis . Spermatth, 2005. The libraries, representing four major cell-types found within the testes, were selected as follows: Sertoli , spermatogonia , spermatocytes and spermatids (lib#-6786). Their respective promoter sequences were downloaded from mm5 genome build of DBTSS, the DataBase of Transcription Start Sites [K-SPMM databases. These databases describe murine promoter location, Transcription Factor Binding Site (TFBS) distribution and the location of putative homo or heterodymeric transcription-factor modules. This data was enhanced with a per-base conservation score relative to four vertebrate genomes hg17, rn3, canFam1 and galGal2 obtained from the UCSC archive of phastCons scores [A dataset of spermatogenically active genes was gathered from nine NCBI published cDNA libraries and acceThe promoter location database contains the many-to-one mapping of 11,715 potential promoter regions with the 7,551 genes in the cDNA libraries. Each DBTSS promoter sequence contains a 1 kb upstream sequence from each Transcription Start Site (TSS) described. Analysis of the 200 bp sequence downstream from TSS is available as an optional element. Annotation of the genes associated with each promoter was extracted from NIH DAVID 2.1 . The TFBTranscription factor binding sites were then refined and combined on the basis of distance metrics . 
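The pairing of individual binding sites into putative homo- or heterodimeric modules on the basis of inter-site distance can be illustrated roughly as follows. This is not the K-SPMM implementation; the gap cutoff and example coordinates are assumptions, with factor names borrowed from the discussion below.

```python
from itertools import combinations

def dimeric_modules(sites, max_gap=50):
    """sites: list of (factor, start, end) on one promoter, coordinates in bp.
    Returns pairs of sites whose edge-to-edge distance is at most max_gap."""
    modules = []
    ordered = sorted(sites, key=lambda s: s[1])
    for (f1, s1, e1), (f2, s2, e2) in combinations(ordered, 2):
        gap = s2 - e1  # bases between the end of one site and the start of the next
        if 0 <= gap <= max_gap:
            modules.append(((f1, s1, e1), (f2, s2, e2), gap))
    return modules

promoter_sites = [("CREM", 100, 107), ("YY1", 130, 141), ("YY1", 160, 171)]
for a, b, gap in dimeric_modules(promoter_sites):
    print(f"{a[0]}-{b[0]} module, gap {gap} bp")
```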
This idThe system is executed as a JSP application within a Jakarta Tomcat framework with SQL queries directed to a local MySQL database.As shown in Fig. The response of the genome to spermatogenic differentiation is global, affecting the expression of approximately one third of its genes. Many of these genes are expressed as tissue specific isoforms or are drm1 & Prm2) required for the successful repackaging of nuclear DNA into the spermatozoon nucleus as well as one of the condensation enabling genes (Tnp2). The coordinate regulation of this locus has been widely investigated [The protamine locus provides a key example of a gene cluster that is active in the latter spermiogenic phase of spermatogenesis. It contains both protamine genes overlapping on opposite strands and paired with a third YY1 site located 20 bp further upstream. This places all three YY1 elements within the 113 bp upstream region required for Prm1 expression [In the 200 bp upstream region of Ppression , with thpression . Transfa117 and interaction with CBP, the ubiquitous CREB-Binding Protein co-activator. By contrast, the transcriptional activity of CREM in testes is controlled through its interaction with ACT, the tissue-specific Activator of CREM in Testis [CREM, the cAMP Response Element Modulator that directly binds to CRE, has been widely implicated for its role in spermiogenesis. CREM deficient mice arrest spermatogenesis at the early round spermatid stage , with thn Testis ,37 that n Testis . Interesn Testis and in cn Testis but thesTogether these results show that the use of transcription factor colocalization in conjunction with conservation as implemented in the K-SPMM promoter discovery tool yield potential sites of transcription factor binding that are biologically well validated. In testing the system, we noted the presence of the YY1 response element in the upstream regions of all three genes in the protamine domain that has been associated with a sterol response element binding protein that regulates proacrosin, another haploid expressed gene. . The devDatabase of Transcription Start Sites [DBTSS The rt Sites JASPAR An open source database of transcription factor DNA-binding preferences Krawetz-Lab database of Spermatogenic Promoters Modules &Motifs [K-SPMM &Motifs NIH DAVID National Institute of Health DAVID Gene Annotation system Position Weight MatrixPWM Transcription Factor Binding SiteTFBS Transcription Factor Database [Transfac The Database Transcription Start SiteTSS UCSC Genome Browser University of California Genome Browser YL developed the final version of the JSP codebase and constructed the SQL database. AEP coordinated the bioinformatic investigation leading to the data used in the system. GCO provided biological context and guidance during the initial phase of system development. SAK originated the concept and supervised its design and implementation. The manuscript was drafted by AEP and SAK with the input and approval of all authors."} +{"text": "GENEACT, a new software suite for the detection of evolutionarily conserved transcription factor binding sites or microRNAs from differentially expressed genes from DNA microarray data, is described. cis-acting regulatory elements. We present a suite of web-based bioinformatics tools, called GeneACT , that can rapidly detect evolutionarily conserved transcription factor binding sites or microRNA target sites that are either unique or over-represented in differentially expressed genes from DNA microarray data. 
GeneACT provides graphic visualization and extraction of common regulatory sequence elements in the promoters and 3'-untranslated regions that are conserved across multiple mammalian species.Deciphering gene regulatory networks requires the systematic identification of functional These cis-regulatory elements are often recognized in a sequence-specific manner by regulatory proteins or nucleic acids, which regulate the expression of the corresponding gene. In particular, activation and repression of gene transcription typically involves the binding of transcription factors to their cognate binding sites. The levels of mRNA transcript can also be modulated by microRNAs (miRNA), which tend to bind specific sequences in the 3'-untranslated region (UTR) of the transcript. Identification and characterization of cis-regulatory sequence elements that control gene expression are crucial to our understanding of the molecular basis of cell proliferation and differentiation.Cell type and tissue specific gene expression patterns are primarily governed by the cis-regulatory sequences was conducted experimentally on an individual gene basis, using time-consuming procedures such as promoter cloning, chromatin immunoprecipitation (ChIP) assays, and reporter gene assays using truncated and/or mutated DNA sequences. Given that hundreds of transcription factors regulate the expression of thousands of genes in the human genome, more high-throughput procedures are desired. The sequencing of several genomes, DNA microarray assays, and the rise of bioinformatics represent major steps forward in this regard.Until recently, identification of Sequencing of the human, mouse, and rat genomes has made it possible to perform genome-wide analyses of regulatory sequence motifs across these species. Such a comparative genomics analysis is powerful because functional transcription factor binding sites are likely to be under stronger evolutionary constraints than random DNA sequences. Therefore, reliable and effective identification of regulatory elements could be achieved using interspecies sequence alignments of orthologous genes ,2. Indeecis-acting regulatory elements, which suggests that such elements are likely to be over-represented in co-regulated genes more than would be expected by random chance. Flanking sequences for each gene are known from sequencing efforts, and many of the sequences to which individual transcription factors tend to bind have been determined experimentally and catalogued in databases such as the Transcription Factor Database (TFD) [cis-regulatory mechanisms important in a given biologic context is now possible. Indeed, a number of computational programs have been developed to reveal transcription factor binding sites that are statistically over-represented in co-regulated genes [DNA microarray technology is used to profile relative mRNA transcript levels between samples exposed to different experimental conditions. DNA microarrays represent a high-throughput, genome-wide experimental platform that enables analyses of differential gene expression. Differences in transcript levels could be caused by several mechanisms, most notably the differential activities of transcription factors and miRNA. The interpretation of DNA microarray results requires deciphering which transcription factors and/or miRNA are likely to mediate the observed changes in transcript levels. We expect that co-expressed genes may share similar se (TFD) and TRANse (TFD) ; therefoed genes -15.cis-regulatory elements. 
Most importantly, there is no program currently available that incorporates search tools for both transcription factor and miRNA binding sites. Recent studies with miRNA suggest that differential miRNA expression could be responsible for differential mRNA expression observed by DNA microarray data [cis-acting element browser for rapid identification of over-represented potential transcription factor binding sites and putative miRNA target sites has yet to be developed. The lack of an easy-to-navigate graphical web interface has hindered verification of computational predictions by experimental biologists who may be less comfortable with less accessible interfaces.Several deficiencies exist in currently available software for predicting ray data ,17. Thercis-acting elements that are evolutionarily conserved across species for a specified set of genes, which can be used to unravel transcriptional regulatory networks that are likely to be involved in differential gene expression.In this report we describe a suite of web-based, open source bioinformatics software tools (GeneACT) that graphically display transcription factor binding sites and microRNA target sites in the regulatory regions of human, mouse, and rat genomes. In addition, we present a unique method to identify quickly transcription factor binding sites or miRNA target elements that are over-represented in differentially expressed genes based on DNA microarray data. Thus, GeneACT enables the identification of putative GeneACT, an overview of which is given in Figure Detailed documentation of each of the tools in GeneACT can be found on the GeneACT website . GeneACTPre-processing of sequence data underlying the GeneACT tools was carried out as follows. DBSS, the interface of which is shown in Figure The second option for searchable region is 'downstream of stop codon'. Similar pre-processing was done for the downstream region from -2000 to +100 (2000 bp downstream of the transcript end) with respect to the stop codon. All incidences of transcription factor binding sites spanning all three species were also stored for this region. Finally, we offer a search option dedicated to detecting the occurrences of miRNA binding sites. In this case, the 3'-UTRs, defined as the region between the stop codon and the polyA signal, were extracted from the genome assemblies, and we employed miRanda , which i3'-UTRs from all three mammalian genomes are extracted and individually searched for potential miRNA target sites. Using the approach developed by Enright and coworkers , we pre-CDC2 (cell division cycle 2) is shown in Figure In order to display the presence of consensus transcription factor binding site sequences on a promoter that spans multiple species, we developed a novel Scalable Vector Graphic (SVG)-based graphical interface to display this information in a promoter-oriented way. Using the PBSS, regulatory regions of genes in multiple species along with the consensus TFD binding site information can be quickly visualized. The interface of PBSS is shown in Figure CDC2 motifs are conserved around the -150 bp region, of which two of the binding sites are elongation factor-2s (E2Fs). In Figure The benefits of the SVG graphical display of the regulatory regions of genes, presented in a regulatory motif-oriented fashion for each species, are numerous Figure . One majGeneACT also provides other tools to make promoter analysis easier. 
The genomic sequence retrieval tool allows the user to retrieve genomic sequences in a FASTA format using relative position with respect to the transcription start site, start codon, or stop codon. When the input has more than one gene name or gene ID, sequences are returned in a concatenated FASTA file. Information about the sequence such as the chromosomal location, gene name, synonyms, and gene ID are printed in the header of the FASTA file. For the genes that are annotated to be on the reverse complement strand, this tool returns the sequence on the reverse complement strand.TFD search can be used to perform a query in the TFD dataset for binding site sequence or transcription factor name Figure . Other tcis-regulatory elements that could mediate the differential gene expression patterns, we developed the DBSS tool to explore the distributions of regulatory sequence elements between the differentially expressed genes compared with those of the control genes. A corollary to the importance of cis-acting regulatory elements to generating differential gene expression patterns is that some of the co-expressed genes may share a common subset of these elements, and the observed frequency of these elements in the upregulated or downregulated gene set should be greater than in the unchanged gene set.The use of microarrays to elucidate genome-wide gene expression patterns is now standard practice. These microarray experiments generate large sets of differentially expressed genes, but the actual mechanism that controls the differential gene expression cannot readily be deduced using this technique alone. To ascertain the cis-acting elements conserved in human, mouse, and rat in a given set of genes and reveals the over-represented cis-acting elements in comparison with a control gene set. DBSS takes as input two sets of genes: a control set and a regulated set. For the purposes of identifying over-represented transcription factor binding sites in the regulated set, the regulatory regions of each gene in both sets are searched for transcription factor binding sites that are conserved across each genome. At present, we have pre-processed each gene that contains ortholog information in NCBI HomoloGene for the -10,000 bp to +100 bp region centered on the start codon and the -2000 bp to +100 bp region centered on the stop codon for the purposes of looking for enriched transcription factor binding sites. Restricting the binding sites solely to those that span multiple genomes is intended to reduce background noise. However, certain short degenerate binding site sequences may still appear as false positives. Thus, we use the control set of genes to reduce further the false-positive rate because these types of binding sites are also expected to appear with high frequency in this dataset as well.DBSS tracks the frequencies of Specifically, the DBSS calculates the frequency at which each binding site occurs in genes from both the regulated set and control set. The fold change in frequency of each binding site between the regulated and control gene sets is calculated in order to find binding sites that are enriched in the regulated set. For binding sites that do not contribute to the regulation of a particular gene, we expect there to be no relative change in frequency. These genes are then filtered from the results by specifying a lower bound for the 'binding site ratio' option on the search interface. 
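A bare-bones version of this frequency comparison might look like the sketch below; it is illustrative Python only, whereas DBSS itself works on pre-computed, cross-species-conserved sites stored in its database. The input layout and the default ratio are assumptions.

```python
def enriched_sites(regulated, control, min_ratio=3.0):
    """regulated, control: dicts mapping gene -> set of conserved binding sites
    found in its promoter.  Returns sites whose frequency in the regulated set
    is at least min_ratio times their frequency in the control set."""
    def frequencies(gene_sets):
        counts = {}
        for sites in gene_sets.values():
            for site in sites:
                counts[site] = counts.get(site, 0) + 1
        return {s: c / len(gene_sets) for s, c in counts.items()}

    f_reg, f_ctl = frequencies(regulated), frequencies(control)
    result = {}
    for site, f in f_reg.items():
        baseline = f_ctl.get(site, 1.0 / len(control))  # avoid division by zero
        if f / baseline >= min_ratio:
            result[site] = round(f / baseline, 2)
    return result
```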
For example, to keep only the binding sites that have three times the frequency in the regulated set versus the control set, one would specify a lower bound of three. By looking at the binding sites that have a large ratio (fold change) between the regulated set genes and control set genes, the binding site sequences that are potentially important to the regulation of a given system under specific conditions or treatments can quickly be determined. In this way, the regulatory mechanism of how the transcription factors regulate a given system can be inferred from the enriched binding site sequences.t-tests for each gene in this dataset and set our threshold at P < 0.05 to define genes that were differentially expressed; there were a total of 670 genes in this regulated gene set. We chose the genes that had P > 0.7 as our controls; there were a total of 612 genes in this control gene set. The actual P values for individual genes are reported in Additional data file 1. Using the DBSS, we analyzed the promoter regions of these genes in the -10,000 bp to +100 bp region relative to the start codon and filtered the results to those binding sites with a threefold change in frequency. As shown in Table To test whether mining of DNA microarray datasets using DBSS can generate novel insights into the key transcription factors operating in differential gene expression, we downloaded a microarray dataset GSE1692) deposited in the NCBI Gene Expression Omnibus database692 deposin vivo, we conducted a ChIP assay. We used E2F1 and E2F4 antibodies to analyze the occupancies of these two transcription factors on five different promoters in both synchronized and quiescent T98G cells. A brief description of our ChIP methodology is as follows. Approximately 1 \u00d7 107 T98G cells were fixed with formaldehyde at room temperature for 10 min. Fixation was stopped by the addition of glycine for 5 min. Cells were washed once with ice-cold phosphate-buffered saline supplemented with protease inhibitors . Cells were scraped and pelleted in the same buffer. Cell pellets were lysed in 0.5 ml lysis buffer . Soluble chromatin was prepared by sonication of the cell lysates. Subsequent immunopreciptation and analysis were performed essentially according to the method proposed by Lambert and coworkers [To demonstrate independently that some of the genes appearing in our list predicted to contain over-represented E2F binding sites are indeed bound by E2F1 or E2F4 oworkers , except DHFR, CDC6, CDC25A, and MCM3 are consistent with published results, binding of E2F1 and E2F4 to DUSP4 is a novel finding. Thus, based on the results of DBSS, we can gain biological insights similar to those obtained by ChIP-chip analysis.As shown in Figure 1 from G0. Indeed, one of the differentially expressed genes that contributes to the SRF ranking, namely EGR1, has been independently shown to be activated by SRF [MCM5 , whose binding sites were highly enriched in the regulated gene set. The increased presence of SRF binding sites implies that genes containing this site might be regulated by SRF when cells enter Gd by SRF . Genes t5 Figure and DHFRR Figure are showIf the abundance of mRNA is regulated by miRNA, then we would expect that expression levels of miRNAs and their authentic targets should be anti-correlated. 
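When matched miRNA and mRNA profiles are available, this expectation can be checked directly; the few lines below are a hedged sketch with hypothetical numbers, not data from the studies discussed here.

```python
from statistics import correlation  # Python 3.10+

# hypothetical expression values measured in the same four samples
mir_133a    = [1.0, 2.5, 6.0, 9.5]   # miRNA rising during differentiation
target_mrna = [8.2, 6.0, 3.1, 1.4]   # candidate target transcript falling

r = correlation(mir_133a, target_mrna)
print(f"Pearson r = {r:.2f}")        # strongly negative, consistent with targeting
```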
Accordingly, computational identification of over-represented miRNA target sites shared among co-regulated genes from DNA microarray data in theory should provide valuable leads to uncover the biologically relevant miRNAs responsible for differential gene expression. To test this hypothesis in a well characterized system, we downloaded and analyzed the dataset created by Lim and coworkers . This inThe results are summarized in Table in vitro model for studying skeletal muscle differentiation because these cells are able to differentiate terminally into myotubes when serum is withdrawn from the culture medium [Myogenic differentiation is a process that leads to the fusion of muscle precursor cells (myoblasts) into multinucleated myofibers in the animal. The C2C12 myoblast cell line serves as a good e medium ,39. To ue medium . Our conin silico analysis of the C2C12 microarray gene expression profile using DBSS implied that at least 14 miRNA target sites are over-represented in downregulated mRNAs during myogenic differentiation in C2C12 cells, suggesting that some of these microRNAs may be differentially expressed during myogenic differentiation and contribute to the mRNA expression profile. Recently, Chen and colleagues [in silico predictions with their experimental results, we found that our analysis recaptured miR-133a, miR-206, and miR-130a target sites as the most enriched in differentially expressed genes. Therefore, a differential miRNA target site search can generate predictions consistent with experimental results in this system.The result is summarized in Table lleagues investigin vitro that more than two miRNA target sites in a given 3'-UTR seem to boost the efficacy of miRNA-mediated gene repression [It has previously been demonstrated pression . To testcis-acting elements that are evolutionarily conserved across species for all orthologous genes. A comparative, online, web-based, graphically oriented promoter browser was developed for the public domain. Using the DBSS, insights can be gained into a particular system in which transcription factors might be involved. GeneACT enables integration of cis-regulatory sequences identified by a comparative genomics approach with microarray expression profiling data to explore the underlying gene expression regulatory networks.GeneACT was developed to display and analyze regulatory regions across human, mouse and rat genomes, and it enables identification of putative To illustrate the uniqueness of GeneACT, we compared GeneACT with different existing software. The comparison is summarized in Table in silico annotation or prediction of potential transcription factor binding sites. Virtually all other programs make use of the position weight matrix (PWM)-based TRANSFAC [in silico prediction of prokaryotic transcription factor binding sites [in silico analysis provides an alternative and perhaps more relevant approach to identification of putative transcription factor binding sites in the flanking regions of genes of interest. Given the findings that no single transcription factor binding site discovery program is superior from a number of comparative studies and that using multiple independent programs improves the performance of prediction [Second, GeneACT employs the TFD database and pattern matching for TRANSFAC and relaTRANSFAC . Becauseng sites ,44. Howeng sites ,46. A PWng sites ,48. 
The ediction , GeneACTThe third and final distinct feature that separates GeneACT from other related programs is that the output of GeneACT is geared toward easy visualization and pattern recognition. It is designed to be a simple, freely available tool for experimental biologists to navigate promoter regions and discover the significance of a given DNA sequence based on comparative genomic analysis and DNA microarray data. Extensive tutorials and help documents are available on our website help page to guide users through different tools on this site. A major feature of GeneACT is the miRNA target site search capability. This is crucial, given that up to one-third of human genes could be targeted for regulation by miRNA , in addicis-regulatory elements involved in differential gene expression depends heavily on the reliability of transcription factor recognition and miRNA target site prediction. Accurate computational prediction of miRNA target sites is still a very challenging task because of insufficient experimental data [The quality of predictions of critical tal data . For exaGeneACT is open source online software and is relative easy to upgrade. We expect DBSS will improve significantly as miRNA target site prediction and transcription factor binding site recognition becomes more reliable. Moreover, in the future we plan to add additional genomes to GeneACT as they become available. Even so, it is possible for researchers interested in other species to use GeneACT by taking advantage of the input sequence feature and/or input binding site feature of PBSS. In this way, we expect researchers from different and diverse fields to find a valuable resource in GeneACT.The following additional data are available with the online version of this paper. Additional data file Table containing the original DNA microarray data generated by Cam and coworkers used forClick here for fileTable containing the full list that is summarized in Table Click here for fileTable containing cell cycle regulated genes containing E2F or SRF binding sites.Click here for fileTable containing the full list that is summarized in Table Click here for fileTable containing the original DNA microarray data generated by Tomczak and coworkers used forClick here for fileTable containing the full lists summarized in Table Click here for file"} +{"text": "A. thaliana, O. sativa and Z. mays. A variety of bioinformatic servers or databases of plant promoters have been established, although most have been focused only on annotating transcription factor binding sites in a single gene and have neglected some important regulatory elements (tandem repeats and CpG/CpNpG islands) in promoter regions. Additionally, the combinatorial interaction of transcription factors (TFs) is important in regulating the gene group that is associated with the same expression pattern. Therefore, a tool for detecting the co-regulation of transcription factors in a group of gene promoters is required.The elucidation of transcriptional regulation in plant genes is important area of research for plant scientists, following the mapping of various plant genomes, such as cis-regulatory elements with a distance constraint in sets of plant genes. 
The system collects the plant transcription factor binding profiles from PLACE, TRANSFAC (public release 7.0), AGRIS, and JASPER databases and allows users to input a group of gene IDs or promoter sequences, enabling the co-occurrence of combinatorial transcription factor binding sites (TFBSs) within a defined distance (20 bp to 200 bp) to be identified. Furthermore, the new resource enables other regulatory features in a plant promoter, such as CpG/CpNpG islands and tandem repeats, to be displayed. The regulatory elements in the conserved regions of the promoters across homologous genes are detected and presented.This study develops a database-assisted system, PlantPAN , for recognizing combinatorial .In addition to providing a user-friendly input/output interface, PlantPAN has numerous advantages in the analysis of a plant promoter. Several case studies have established the effectiveness of PlantPAN. This novel analytical resource is now freely available at However, defining all functional binding sites within an identified promoter is difficult, and the existence of some additional binding sites should be assumed [cis-regulatory elements in co-regulated genes are identified by exporting sets of genes to AthaMap. The study describes an effective resource, PlantPAN , for identifying the co-occurrence of transcription factor binding sites (TFBSs) in a group of gene promoters with distance constraint between two TFBSs, and presents graphically the transcription factor binding sites in specific gene promoter regions of interest. With the advent of microarray technology, Arabidopsis co-expression tool (ACT) [cis-regulatory elements in the 200 bp region upstream of the transcription start site. Recently, Chawade et al. proposed putative cold acclimation networks by combining data from microarrays, promoter sequences and known promoter binding sites [The appropriate regulation of gene expression is essential for all cellular processes, in which transcriptional control is primarily concerned with improved survival. In animals and plants, transcription factors are key regulators of gene expression and play a critical role in the life cycle . Investi assumed . Further assumed . Some co assumed . Accordi assumed ,6 identi assumed web toolol (ACT) was deveol (ACT) providesng sites . AccordiArabidopsis promoter sequences and consensus sequences for 105 previously characterized transcription factor binding sites (TFBSs) and provides analysis on over-represented TFBSs occurring in multiple promoters. PlnTFDB [cis- and trans- acting regulatory DNA elements, described in earlier studies[Arabidopsis thaliana transcription factor database (AtTFDB) consisting of approximately 1,770 Arabidopsis TFs and their sequences (protein and DNA) grouped into around 50 families with information on available mutants in the corresponding genes. AGRIS [Arabidopsis. JASPAR [Arabidopsis transcription factors. PlantCARE [cis-acting regulatory elements and a portal to tools for the in silico analysis of promoter sequences. AthaMap [cis-regulatory elements in Arabidopsis. Notwithstanding the recent development of the above resources, advances in plant science require a more detailed analysis of plant promoters. For example, CpG islands in the genome are important because of their strong correlation with gene regulation. 
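PlantPAN, as noted above, also annotates CpG and CpNpG islands along each promoter. Its exact island-calling parameters are not given here, so the following Python test only illustrates a commonly used rule of thumb based on length, GC fraction and the observed/expected CpG ratio; the thresholds are assumptions.

```python
def is_cpg_island(seq, min_len=200, min_gc=0.5, min_obs_exp=0.6):
    """Rule-of-thumb CpG island test on an uppercase DNA string."""
    if len(seq) < min_len:
        return False
    g, c = seq.count("G"), seq.count("C")
    gc_fraction = (g + c) / len(seq)
    expected_cpg = (c * g) / len(seq) if c and g else 0
    obs_exp = seq.count("CG") / expected_cpg if expected_cpg else 0
    return gc_fraction >= min_gc and obs_exp >= min_obs_exp
```

A CpNpG variant of the same test would count C-N-G trinucleotides instead of CG dinucleotides.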
CpG-rich regions are methylated and are associated with inactive DNA often linked to heterochromatin, gene silencing, and pathogen control [Oryza sativa [Arabidopsis, gene expression is up-regulated when gene promoters were enriched in GGCCCAWW and AAACCCTA repeat sequence; gene expression is down regulated when gene promoters were enriched with TTATCC motif repeat [Many databases harbor collections of numerous transcription factors and are useful for the prediction of transcription factor binding sites in the promoter regions of plants. For instance, TRANSFAC -13 is a PlnTFDB is an in PlnTFDB is a datr studies. AGRIS [r studies containss. AGRIS integrat. JASPAR ,19 is an. JASPAR stores ilantCARE is a dat control -25. In p control -28. Ther control -28. Rece control and CpG control were dev control -33. For a sativa . Moreovef repeat was devecis-regulatory elements within the conserved regions of homologous genes. Moreover, the combinatorial transcription factor binding sites with distance constraint can be identified in a group of gene promoter sequences. The detailed methods are illustrated as follows.PlantPAN is a web-based system which is running on an Apache web server on a Linux operation system. The content of the integrated databases including gene information, gene ontology (GO), gene sequence, promoter sequence, transcription factor binding sites, CpNpG islands and tandem repeat regions are stored in a MySQL relational database system, and all tables are connected by means of Gene ID , Oryza (O. sativa) and maize (Z. mays) was obtained from TAIR (TAIR6_genome_release) [Arabidopsis, Oryza, and Zea are 35,351, 62,827 and 29,759, respectively. Users are allowed to input the gene IDs [Gene information of release) , TIGR (orelease) and ZmGDrelease) , respectrelease) . The numgene IDs , locus nAfter the promoter region had been determined, the regulatory elements, such as transcription factor binding sites (TFBSs), CpG/CpNpG islands, and tandem repeats were annotated. Table Arabidopsis or locus name for Oryza) or a group of promoter sequences is allowed for input to the system. In the second step, the system calculates the GO terms related to the input genes. The genes involved in different GO terms are tabulated. Users can choose all genes or genes in a particular GO term for further analysis. In the third step, the promoter sequence is extracted from the PlantPAN promoter database. However, if users input a group of promoter sequences in step one, then the system will skip steps two and three. In the fourth step, users can select transcription factors binding profiles from different species and scan TFBSs in the promoter regions. The thresholds of the core similarity and the matrix similarity should be set in this step; the default values are 1.0 and 0.75, respectively.The \"Gene group analysis\" function of PlantPAN system, which comprises seven analytic steps Fig. , is utilApriori is a program that is implemented to mine association rules for a group of input data [Apriori was used to discover the co-occurrence of transcription factor binding sites (TFBSs) and combinatorial TFBSs in a group of gene promoters [In step five, a figure depicts all detected TFBSs in every promoter. Consequently, put data ,44. A seK is the number of background gene promoters used and T is the number of observed gene promoters that are input by users, k is the number of promoters have the combination in the background gene set and t is the number of promoters have the combination in the observed gene set. 
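With the quantities defined above, the p-value described below is presumably the standard hypergeometric upper-tail probability; the formula is reconstructed here rather than quoted from the original:

P = \sum_{i=t}^{\min(T,\,k)} \frac{\binom{k}{i}\binom{K-k}{T-i}}{\binom{K}{T}}

In Python this is what `scipy.stats.hypergeom.sf(t - 1, K, k, T)` computes.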
P-value is calculated for each combination based on the hypermetric equation; smaller the p-value is, more statistically significant the combination is. A smaller p-value of a combination corresponds to greater statistical significance.where et al. found that 75% of the interacting transcription factors were occurred within the characteristic distances which are smaller than 166 bp in yeast [One TFBS which co-occur in a group of gene promoters could be identified in sixth step. Additionally, the fact that target genes with characteristic distances show significantly higher co-expression than those without preferred distances provides evidence for the biological relevance of the observed characteristic distances . Yu et ain yeast . In thisArabidopsis and Oryza in the cross-species analysis of promoter sequences of homologous genes, were extracted from Gramene [The paralogous and orthologous genes among Gramene . Followi Gramene , was app Gramene program.cis-regulatory elements are also revealed graphically to improve presentation.The regulatory features discovered in the promoters are presented graphically or tabulated. A graphical interface is implemented using the GD library of a PHP programming language. Once the analysis has been completed, numerous regulatory characteristics, including transcription factor binding sites, CpG/CpNpG islands, and repeat regions, are shown in an overview. The regulatory features are then presented in more detail if users click the regulatory elements figured in the graph or the label, \"View in Table.\" Moreover, the regulatory elements in the conserved regions and the co-occurrence of PlantPAN has two main functions. Firstly, it applies \"Gene group analysis\" to identify the co-occurrence of transcription factor binding sites in a group of gene promoters. Combinatorial regulation by transcription factor complexes is an important characteristic of eukaryotic gene regulation ,4,45. Twet al. [et al. predicted that DOF and AP2 could co-regulate At4g37150.1 and At1g20440.1 in this cold regulatory network [In a previous study, Chawade et al. construcet al. . Moreove network , LFY (At5g61850.1), FUL (At5g60910.1), AGL24 (At4g24540.1), and PI (At5g20240.1), which participated importantly in flower development Fig. , and theze) Fig. . SeveralArabidopsis thaliana rbcS-1A (At1g67090.1) promoter has been defined from -320 bp to -125 bp; a binding site is present for the GBF (G-box binding factor) transcription factor binding[Arabidopsis rbcS-1A gene ID for a search, one GBF binding site was identified between -241 bp and -230 bp is one of the putative genes whose promoter contains Up1 and Up2 [Previous investigations have revealed that the gene expression can be up-regulated when the promoter that contains Up1 (GGCCCAWW) or Up2 (AAACCCTA) repeats . Arabido and Up2 . These r and Up2 , which cNevertheless, users can input a novel promoter sequence to analyze the above four regulatory features. After the annotation tools were employed, the selected features, such as TFBSs, CpG/CpNpG islands and tandem repeats, were represented in the graph and table , as predicted in the conserved regions between -58 bp and -48 bp and between -78 bp and -88 bp in Arabidopsis (AT1G48990) and Oryza (LOC_Os05g50110), respectively . Additionally, the transcription factors will be enlarged by taking into account more experimental matrices from different plants. 
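A distance-constrained co-occurrence search of the kind described above can be sketched in a few lines. The factor names echo the DOF/AP2 example discussed earlier, but the site positions and the 166 bp cut-off (borrowed from the yeast observation quoted above) are illustrative assumptions rather than PlantPAN output.

```python
from itertools import product

def co_occurring_pairs(sites_a, sites_b, max_distance=166):
    """Pairs of site start positions (bp, relative to the TSS) that lie
    within max_distance of each other on the same promoter."""
    return [(a, b) for a, b in product(sites_a, sites_b)
            if abs(a - b) <= max_distance]

# Hypothetical predicted positions for two factors on one promoter
dof_sites = [-420, -215, -90]
ap2_sites = [-400, -150]
print(co_occurring_pairs(dof_sites, ap2_sites))
# [(-420, -400), (-215, -150), (-90, -150)]
```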
The authors will in the near future be energetically connecting transcription factors to other proteins using protein-protein interaction databases. Furthermore, the plant microarray data will be integrated into \"Gene group analysis\" of PlantPAN.The number of sequenced and annotated plant genomes is rapidly increasing. The PlantPAN database is currently being expanded to cover species other than PlantPAN provides a \"Gene group analysis\" function for analyzing the co-occurrence of combinatorial TFBSs with a distance constraint in sets of plant genes. This function extends a good platform to examine the co-expression genes of microarray data in transcriptional regulation networks. Furthermore, the PlantPAN web server not only provides a user-friendly input/output interface, but also offers numerous advantages in plant promoter analysis over currently available tools for annotating plant promoters and table (S1). The data provided represent six supplementary figures and one supplementary table in this study.Click here for file"} +{"text": "It is known that transcription factors frequently act together to regulate gene expression in eukaryotes. In this paper we describe a computational analysis of transcription factor site dependencies in human, mouse and rat genomes.Our approach for quantifying tendencies of transcription factor binding sites to co-occur is based on a binding site scoring function which incorporates dependencies between positions, the use of information about the structural class of each transcription factor (major/minor groove binder), and also considered the possible implications of varying GC content of the sequences. Significant tendencies (dependencies) have been detected by non-parametric statistical methodology (permutation tests). Evaluation of obtained results has been performed in several ways: reports from literature ; dependencies between transcription factors are not biased due to similarities in their DNA-binding sites; the number of dependent transcription factors that belong to the same functional and structural class is significantly higher than would be expected by chance; supporting evidence from GO clustering of targeting genes. Based on dependencies between two transcription factor binding sites (second-order dependencies), it is possible to construct higher-order dependencies (networks). Moreover results about transcription factor binding sites dependencies can be used for prediction of groups of dependent transcription factors on a given promoter sequence. Our results, as well as a scanning tool for predicting groups of dependent transcription factors binding sites are available on the Internet.We show that the computational analysis of transcription factor site dependencies is a valuable complement to experimental approaches for discovering transcription regulatory interactions and networks. Scanning promoter sequences with dependent groups of transcription factor binding sites improve the quality of transcription factor predictions. Transcription factors (TFs) are a major class of DNA-binding proteins and are a crucial element in the regulation of gene expression. It is well established that many transcription factors act together to regulate gene expression in eukaryotes . For exa. 
This tool can help in predicting transcription factor binding sites in promoter analysis with relatively high sensitivity and modest specificity (which is still higher in comparison to single site prediction tools (such as [Based on dependencies between two transcription factor binding sites (second-order dependencies), it is possible to construct higher-order dependencies (networks). Obtained results about dependencies among transcription factor binding sites have been further used for development of a web-based tool that allows scanning of promoter sequences for groups of dependent transcription factor binding sites (such as ).From the JASPAR database, we selected all vertebrate transcription factors and made all the possible 2-order combinations , -0.27 and -0.39 for human, rat and mouse, respectively. These results indicate that there might be reduced statistical power for factors with many predicted sites (correlation coefficient significantly different from zero in the case of rat and mouse), potentially because their lower site information content could give rise to more noise in the site predictions. However, weak correlation coefficients imply small influence of such noise on obtained results.Similarly, we investigated the influence of binding site length on the number of dependent mates. Short binding sequences could increase the frequency of detected binding sites. We have therefore performed a correlation analysis between the length of binding sites and the number of dependent mates for each transcription factor. The Pearson's correlation coefficients were -0.30 , -0.17 and -0.06 for human, rat and mouse, respectively. These results indicate that at least for the analysis in human, shorter binding sites tend to give rise to more dependent pairs. We cannot rule out that this is due to a higher number of false positive predictions associated to TFs with short binding sites. Yet, the observed correlation coefficients are weak, and for mouse and rat not significantly different from zero. This indicates that the resulting bias is weak and does not dominate our results.Another potential source of bias could be the sequence composition of the promoters and binding motifs. For example, a GC-rich promoter sequence would be more likely to contain predicted sites for GC-rich binding motifs, and detection of dependencies between corresponding factors could be biased. The stratification according to GC-content used by our resampling approach should control for the GC-content, but other compositional biases might exist that we did not account for. To investigate this issue, we performed a clustering of transcription factors based on the similarity between their binding sites [see Additional file Next, we investigated how many dependent pairs contain transcription factors that belong to the same structural class, using the classification from JASPAR . It has . The predictions of dependent transcription factor binding sites are more likely to be true if they are supported by multiple lines of evidence. Figure Finding groups of genes that are correlated throughout a set of experiments leads to the hypothesis that these genes are involved in common functions . Further where users can search by transcription factor name and retrieve our results on dependencies . For stringent searching, users can require the transcription factor network to be fully connected and represents exactly the results which would be obtained via direct enumeration. Partial connectivity is less stringent (e.g. 
for third-order only two combinations are necessary to be dependent) and represents a less stringent approximation of the full enumeration results. Information obtained in this way can be useful for designing biological experiments where information about transcription factors that may cooperate is useful (design of regulatory gene networks for various processes). In addition, the results obtained about dependencies are potentially useful for better understanding transcriptional networks in human, mouse and rat genomes.It is likely that some protein-DNA complexes not only contain two, but three or more cooperating transcription factors. In order to identify such groups of more than two dependent sites, one could apply the same method as for pairs. In practise however, it is not feasible to enumerate and analyze all combinations of three or more transcription factor binding sites . Instead, we used the results on significantly associated pairs for extrapolation. Starting from dependencies of order two, we analyzed the dependencies of higher orders as fully or partially connected transcription factor networks. To make all results easily accessible, we have provided a web-based tool, freely accessible from NM_184041, NM_001927, NM_002479, NM_079422, NM_003281, NM_000257, NM_002471, NM_001100 and NM_005159) is regulated by combinatorial interactions between the transcription factors listed above [Results from descriptive data-mining about dependencies between transcription factor binding sites can be used for the computational prediction of modules of dependent binding sites. In order to evaluate the proposed tool, we used experimentally verified data from ,41. FromOnly module MEF2-SRF was not detected in all sequences, however there are other combinations that include one of these two transcription factors detected in more sequences. This is not a surprise because not only these 5 transcription factors are involved in the regulation of skeletal muscle genes.In order to further demonstrate the practical application of the proposed tool, we can simulate the following scenario: if we know that one specific transcription factor is involved in the regulation of a set of genes, and we would like to know which other possible transcription factors might be involved, then we could use the proposed tool to create a list of candidates. Specifically, using the set of nine genes that showed skeletal muscle expression we could start from the any of the 5 mentioned transcription factors and then find the factors that might interact with it in the regulation of these nine genes. Using the proposed tool, we were able to predict all the other known transcription factors reported to be involved in the regulation of these genes (true positives). However, we also determined another set of transcription factors for which no experimental support exists .In order to perform more detailed validation test, we used transcription factors that were predicted and experimentally identified as true positives, transcription factors that were not predicted but experimentally reported for a given promoter as false negatives, transcription factors that were neither predicted nor experimentally reported as true negatives and transcription factors that are predicted but not experimentally reported are false positives Table . We notiin vitro or in vivo and have been reported in the literature: these represent partial validation of our approach . 
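The extrapolation from significant pairs to higher-order groups, including the distinction between fully and partially connected networks, can be illustrated as follows; the pair list is hypothetical and merely mimics the skeletal-muscle factors discussed above.

```python
from itertools import combinations

def third_order_groups(tfs, significant_pairs, fully_connected=True):
    """Candidate third-order dependencies assembled from significant pairs.

    fully_connected=True  -> all three pairs must be significant (a triangle)
    fully_connected=False -> at least two of the three pairs significant
    """
    sig = {frozenset(p) for p in significant_pairs}
    groups = []
    for trio in combinations(sorted(tfs), 3):
        n_edges = sum(frozenset(pair) in sig
                      for pair in combinations(trio, 2))
        if n_edges >= (3 if fully_connected else 2):
            groups.append(trio)
    return groups

# Hypothetical significant pairs among four factors
pairs = [("MEF2", "SRF"), ("SRF", "SP1"), ("MEF2", "SP1"), ("SP1", "TEF")]
print(third_order_groups({"MEF2", "SRF", "SP1", "TEF"}, pairs, True))
print(third_order_groups({"MEF2", "SRF", "SP1", "TEF"}, pairs, False))
```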
Dependencies between transcription factors are not biased by similarities in their DNA-binding sites. The distribution of transcription factors, whose binding sites are dependent, according to their functional classification shows that they tend to be involved in same biological process. Genes that are involved in common functions tend to have similar sets of dependent transcription factor binding sites. Knowing these sets may further our understanding of gene regulation networks. This is why we provided distributions of dependent transcription factor binding sites in GO ontology classes of target genes whose promoters we used in the study and these results are available from . Starting from the dependencies of order 2, it is possible to construct higher order dependencies (networks). All results can be obtained via the web tool . This information may help others in their investigation of transcriptional processes in human, mouse and rat. In addition, we demonstrated how the information obtained about dependencies could be used for the computational prediction of modules of dependent transcription factor binding sites . We validated the tool using experimentally verified data set of transcription factors involved in the regulation of skeletal muscle expression. We also demonstrated how the proposed tool might be applied. Computational analysis of transcription factor site dependencies is a complement to experimental approaches for discovering transcription regulatory interactions and networks.In this paper we describe a data-mining study to identify transcription factor site dependencies in the human, mouse and rat genomes. Many of the predicted dependent transcription factors had been confirmed previously The dataset used in this study comprised promoter sequences (1500 bp upstream to 200 bp downstream of annotated transcription start sites) of 18,799 human , 17,954 mouse and 6,723 rat genes taken from the cisRED database, August 2007 . The setIn order to detect transcription factor site dependencies, we first enumerated all second-order combinations of transcription factors. Then, using the new scoring function introduced in our previous work , we predFor each promoter sequence we calculated the CG context (%G + %C). Histogram distributions of GC content are given in Additional file wherei = 1 means that sequence i has binding sites of transcription factor A, Ai = 0 means that sequence i has no binding sites of transcription factor A, and similar for Bi.and n is the total number of sequences, Ai <-> Bj).Then, in a series of R replicates, we performed a permutation of the initial table [see Additional file In order to define the term \"similar GC content between sequences\" we could have used equal intervals of GC content. However, we noticed that this would result in a smaller number of sequences for permutation in high and low GC bins. To correct for this, we produced 50 bins with a fixed number of promoters per bin [see Additional file whereand R is the resample size (number of replicates), and adding 1 is the pseudocount that prevents us from underestimating the p-value when it is low or zero. We used an adjusted p-value (with Bonferroni's correction) to correct for multiple testing errors. Dependencies were declared significant if the computed p-value was smaller than 0.05/k (where k is the number of multiple tests). 
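A compact sketch of the GC-stratified permutation scheme described in this section is given below: promoters are ordered by GC content and split into bins of nearly equal size, the presence/absence labels of one factor are shuffled within each bin, and an empirical p-value with a pseudocount is returned. This is one way to realize the procedure, not the authors' original code; the bin count, replicate number and pseudocount form follow the values quoted above.

```python
import numpy as np

def stratified_permutation_pvalue(a, b, gc, n_bins=50, R=15000, seed=0):
    """Empirical p-value for co-occurrence of sites for factors A and B.

    a, b : 0/1 arrays, a[i] = 1 if promoter i carries a predicted A site
    gc   : per-promoter GC content, used only for stratification
    """
    rng = np.random.default_rng(seed)
    a, b, gc = (np.asarray(x) for x in (a, b, gc))
    observed = int(np.sum(a & b))

    # Bins of (nearly) fixed promoter count, ordered by GC content
    bins = np.array_split(np.argsort(gc), n_bins)

    exceed = 0
    for _ in range(R):
        a_perm = a.copy()
        for idx in bins:               # permute A labels within each GC bin
            a_perm[idx] = rng.permutation(a_perm[idx])
        if np.sum(a_perm & b) >= observed:
            exceed += 1

    return (exceed + 1) / (R + 1)      # pseudocount keeps p > 0
```

A pair would then be declared dependent only if this p-value falls below the Bonferroni-adjusted threshold 0.05/k.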
We determine the number of re-sampling runs using the following formula:where P_threshold is the significance p-value threshold selected which, in our case, corresponded to P_threshold = 0.05/k where k = 75. We therefore selected R = 15,000 as a compromise between accuracy in p-value estimation and calculation time (R>>k/0.05 = 1500).Starting from dependencies of order two, we constructed dependencies of higher orders in the following way: if transcription factors A-B, B-C and A-C are all dependent, then we can claim that there is an order three dependency between transcription factors A, B and C. (Note: it is not true if only A-B and B-C are dependent pairs but A-C is not). Third-order dependencies between the transcription factors A, B and C can be represented as fully connected graph as shown in Additional file . Different cut-off values in the range between 0.8 and 0.9 only had a minor influence on the results in Table The computational prediction of cis regulatory motifs of dependent transcription factors in scanning form can be performed using information about dependencies between transcription factor binding sites using the scoring function which we introduced in a previous paper and, in AT designed the study, performed computational analysis, created supported web tool and drafted the manuscript. MS and EJO participated in the design of the study, discussion of the results and drafting of the manuscript. All authors read and approved the final manuscript.Distribution of dependencies of order 2 in the human, mouse and rat genomes using real promoters sequences and background sequences.Click here for fileDistributions of number of dependent mates in human, mouse and rat genome. File containing 3 histograms of number of dependent mates for each transcription factor in human, mouse and rat genome.Click here for fileDistribution of dependent mates for each transcription factor in human, mouse and rat genome, including cluster information about similarity between binding sites.Click here for fileDistribution of GC content in the human, mouse and rat promoters. File containing 3 histograms and corresponding fitted normal distributions.Click here for fileScanning promoter sequences. File containing a table that represents a general form of output after scanning promoter sequences for the given combination of transcription factors A and B.Click here for fileDistributions of GC content in human promoters, represented by a histogram of 50 bins. File containing 3 histograms of 50 bins each.Click here for fileRepresentation of higher order dependencies between transcription factors A, B and C. File containing fully connected graph (represents full 3-order dependencies) and not fully connected graph .Click here for file"} +{"text": "The aim of this work was to evaluate the role of low vision aids in improving visual performance and response in children with low vision.Prospective clinical case series.This study was conducted on 50 patients that met the international criteria for a diagnosis of low vision. Their ages ranged from 5 to 15 years. Assessment of low vision included distance and near visual acuity assessment, color vision and contrast sensitivity function. Low vision aids were prescribed based on initial evaluation and the patient's visual needs. 
Patients were followed up for 1 year using the tests done at the initial examination and a visual function assessment questionnaire.The duration of visual impairment ranged from 1 to 10 years, with mean duration \u00b1 SD being 4.6\u00b1 2.3299. The near visual acuities ranged from A10 to A20, with mean near acuity \u00b1 SD being A13.632 \u00b1 3.17171. Far visual acuities ranged from 6/60 (0.06) to 6/24 (0.25), with mean far visual acuity \u00b1 SD being 0.122 \u00b1 0.1191. All patients had impaired contrast sensitivity function as tested using the vision contrast testing system (VCTS) chart for all spatial frequencies. Distance and near vision aids were prescribed according to the visual acuity and the visual needs of every patient. All patients in the age group 5-7 years could be integrated in mainstream schools. The remaining patients that were already integrated in schools demonstrated greater independency regarding reading books and copying from blackboards.Our study confirmed that low vision aids could play an effective role in minimizing the impact of low vision and improving the visual performance of children with low vision, leading to maximizing their social and educational integration. Low vision means visual abilities that are less than needed by the patient for the performance of their essential daily activities. In 1992, WHO defined a person with low vision as the one who has impairment of the visual function even after treatment and/ or standard refractive correction, and one who has a visual acuity of less than 6/18 to light perception, or a visual field of less than 10 degrees from the point of fixation, but the person uses or is potentially able to use vision for planning or execution of a task.Socially, children with visual impairment have limitations in interacting with the environment, as they cannot see the facial expressions of parents and teachers; cannot perceive social behaviors; and sometimes, are unaware of the presence of others unless a sound is made.6The aim of this work was to evaluate the role of low vision aids in improving visual performance and response in children with low vision.Fifty patients, 27 males and 23 females, with ages ranging from 5 to 15 years that were diagnosed with low visionDistance Telescopes: The initial magnification power used for testing was predicted from the ratio of the denominator of the measured visual acuity to the denominator of the desired visual acuity.The eye with better contrast acuity or visual field was preferentially fitted. Binocular telescopes were used for children who exhibited binocular vision. Also the data collected from the questionnaire was used to guide the prescription of the desired magnification. After predicting the suitable magnification required for the patient, a series of suitable telescopes which gave the desired magnification were put in a suitable frame, with the pupillary inlet coinciding with the visual axis of the patient's eye. The visual acuities of the patient with different telescopes were recorded, and then the suitable one was chosen.Near Vision Aids: The required starting addition was determined using the pre-calculated magnification values printed in Keeler's chart. The starting addition required for near vision was determined using the Kestenbaum's role . This addition power was then refined by asking the patient to read a continuous text (school books), and the power was adjusted accordingly. 
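The two starting-point rules described above reduce to simple arithmetic on the Snellen fraction. The sketch below assumes acuities written as numerator/denominator strings and, as in the study, treats the results only as initial estimates to be refined during the in-office trials.

```python
def predicted_telescope_power(measured="6/60", desired="6/12"):
    """Initial telescope magnification: ratio of the measured to the
    desired Snellen denominator (6/60 aiming for 6/12 -> about 5x)."""
    return int(measured.split("/")[1]) / int(desired.split("/")[1])

def kestenbaum_starting_add(distance_va="6/60"):
    """Kestenbaum's rule: the starting near addition (diopters) is roughly
    the reciprocal of the distance acuity written as a Snellen fraction."""
    num, den = (int(x) for x in distance_va.split("/"))
    return den / num

print(predicted_telescope_power("6/60", "6/12"))   # 5.0 (x)
print(kestenbaum_starting_add("6/60"))             # 10.0 (D)
```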
After choosing the appropriate reading aid, the patient's speed of reading was measured to be used as a baseline value to assess the improvement of the child's reading abilities in the following visits.All children received in-office training sessions to familiarize them with the uses and limitations of the optical systems prescribed until the child demonstrated adequate skill, not necessarily proficiency, in the use of the device before taking it home. Then, the patients were instructed about the methods of care, cleaning and maintenance of the optical device.The patients were examined after 1 month, 2 months, 6 months and 1 year from the time of finishing their training sessions. During each follow-up visit, the visual function of the patient was evaluated, and the assessment questionnaire was repeated. The patient's performance of different tasks using the aid was discussed with the parents, and all aspects of difficulty in performance were noted and worked upon in the following visit.Descriptive analysis was used to interpret the results.There were 27 males and 23 females with ages ranging from 5 to 15 years, with the mean age \u00b1 SD being 11.04 \u00b1 2.579. The age of onset of visual impairment ranged from 1 to 12 years, with the mean age \u00b1 SD being 6.44\u00b1 2.8078. The duration of visual impairment ranged from 1 to 10 years, with the mean duration \u00b1 SD being 4.6\u00b1 2.3299. Patients were subjected to complete visual function evaluation. Following a complete clinical examination, fluorescein angiography and electrophysiological tests the following diagnoses were made. Twenty two (44%) patients had familial dominant drusen or other types of hereditary maculopathies, 11 (22%) patients had retinitis pigmentosa, 9 (18%) patients had optic atrophy and 8 (16%) patients had congenital high myopia and other congenital anomalies, namely, microphthalmia and iris and optic disc coloboma. The near visual acuities ranged from A10 to A20, with the mean near acuity \u00b1 SD being A13.632\u00b1 3.17171. Far visual acuities ranged from 4/60 (0.06) to 6/24 (0.25), with mean distance visual acuity \u00b1 SD being 0.122\u00b1 0.1191. Interpretation of Ishihara's color plates revealed that 31 (62%) patients were color blind, while 12 (24%) patients had impaired color perception, especially either for red or green; the remaining 7 (14%) patients had normal color prescription. All patients had impaired contrast sensitivity function when tested with the VCTS chart for all spatial frequencies. Testing for contrast sensitivity function demonstrated that for high spatial frequencies , there were 18 (36%) patients with severe impairment and 5 (10%) patients with mild impairment, while the remaining 27 (54%) patients had moderate impairment of the contrast sensitivity function. The contrast sensitivity function for mid-spatial frequencies , was severely impaired in 11 (22%) patients, moderately impaired in 36 (72%) patients and mildly impaired in 3 (6%) patients.Far vision aids were prescribed according to the visual acuity and the visual needs of every patient . 
The mosRegarding near vision aids , 8-dioptOf the patients who received distance vision aids, 5 (10%) patients achieved corrected distance visual acuity of 6/9, 11 (22%) patients achieved aided distance visual acuity of 6/12, 14 (28%) patients achieved aided distance visual acuity of 6/18, 12 (24%) patients achieved aided far visual acuity of 6/24, 6 (12%) patients achieved aided far visual acuity of 6/36, and 2 (4%) patients achieved aided far visual acuity of 6/60.The effect of near vision aids on the near visual acuity was as follows: 33 (66%) patients achieved aided near visual acuity of A10, 6 (12%) patients achieved aided near visual acuity of A12, 4 (8%) patients achieved aided near visual acuity of A11, 4 (8%) patients achieved aided near visual acuity of A13, and 3 (6%) patients achieved aided near visual acuity of A14.All patients and parents responded to the questionnaire at the initial visit to assess the degree of impairment of visual performance, and then the questionnaire was administered after 1, 2, 6 months and 1 year. Analysis of the effect of the visual aids in improving the visual performance suggested that the number of patients watching television increased from 18 (36%) to 32 (64%) with the use of the aids; the number of patients who could copy text from the blackboard increased from 4 (8%) to 12 (24%). However, the aids did not appear to have an effect on outdoor and leisure activities in that the number of patients who participated in leisure activities or who could navigate alone did not increase. The ability of the patients to copy from books was improved, as the number of patients who could copy from books increased from 21 (42%) to 37 (74%) with the use of near vision aids. The reading speed improved after using the reading aids in 42 (84%) patients; 24 (48%) of them achieved aided reading speed of more than 60 words per minute .Patient compliance with the visual aids was assessed after 1 year of follow-up. Forty four patients remained users of their near vision aids; 38 (76%) of them used the near aid for reading both at home and in the classroom, while 6 (12%) patients used the aid for reading at home only, and the remaining 6 (12%) patients stopped using the aid.After 1 year of follow-up, 38 (76%) patients were remained users of their aids, while 12 (24%) stopped using it without any influence of the age. Ten (20%) patients were found to be using the far vision aid daily for more than 1 hour per day, while 16 (32%) patients were found to be using the aid daily for less than 1 hour per day; and 10 (20%) patients are still using the aid but not everyday.In 1992, American Academy of Ophthalmology (AAO) defined low vision as an impairment of visual acuity of less than 6/18 or as restriction of visual field to less than 10 degrees from the point of fixation.8The aim of our work was to assess the different aspects of visual function impairment in children with low vision and to evaluate the role of visual aids in improving their visual performance and in keeping them socially as well as educationally integrated.Our study showed that all children with low vision had impaired contrast sensitivity function for all spatial frequencies; and in particular for mid-spatial frequencies, which are considered the accurate indicators for visual performance. 
Patients with severely impaired contrast sensitivity function showed lower reading speeds than those with the same near visual acuity but better contrast sensitivity function.9Our results are in agreement with those reported in other studies, wherein it was concluded that many patients respond well to low vision aids while others do not. These differences may be due to the variations in contrast sensitivity function, and so appropriate diagnostic use of contrast sensitivity function can explain the failure of low vision aids in some patients.12On the other hand, no significant correlation between contrast sensitivity and reading performance in children was found.13In regard to total for visual acuity improvement, 42 (84%) patients could achieve aided visual acuity of 6/24-6/9. These results matched the results of two previous studies concerning the efficiency of low vision aids in improving visual acuity. They studied visual rehabilitation for 96 patients of different age groups with advanced stages of glaucoma, optic atrophy, myopia and retinitis pigmentosa and found that 100% of patients showed improvement in visual acuity for both far vision and near vision with the use of low vision aids.15In this study improvement in near vision was seen in 43 (86%) patients who after receiving visual aids could read the print size of school books (A12-A10). These results are in accordance with previous work the showed that low vision aids were very effective in helping 90% of the low vision patients to read normal size prints.16Distance telescopes in this study were helpful in allowing children to function independently especially at school. It was also found that telescopes had no effect on mobility performance, as there was no remarkable increase in the number of patients who could walk alone or share during sports and games with the use of the telescope. These results matched the data obtained in a survey that assessed the user success for distance telescopes in 142 patients using various types of telescopes and found that telescopes were effective in improving visual performance both outdoors and indoors.18Patient's compliance with the telescopic aid, after 1 year, in this report was similar to that achieved by Lowe and Rubinstein,To summarize, low vision is a problem which has a wide-ranging impact on the behavior of children and adolescents in the social and educational spheres of life. Vision rehabilitation with the use of optical vision aids was found to be very helpful in minimizing the impact of low vision and in improving daily performance of the visually impaired patients."} +{"text": "To evaluate a low vision rehabilitation service implemented for heterogeneously diverse group of Egyptianpatients with vision loss in terms of improving their visual performance and fulfilling their visual needs.Fifty patients with low vision were included in a prospective study. History taking, ophthalmic examinationand evaluation of the visual functions were performed for all patients. The required magnification was calculated, andsubsequently a low vision aid was chosen after counseling with patients. Low vision aids were tried in office, followedby a period of training before patients received their own low vision aids. Follow up was done for 6 months.All patients who were referred to the low vision unit were not satisfied with their current spectacles or lowvision aids. 
After training and prescription of suitable LVAs, the improvement in distance and near visual acuity wasstatistically significant (p<0.001). Fifty-six per cent of the patients (n=28) showed improvement in distance visualacuity of 5 lines or more, and 57% of the patients (n=27) could discern N8 print size or better. The most commonlyused aids were high powered near adds. Despite the complaints about the appearance and use of LVAs, 76% of thepatients reported being moderately to highly-satisfied with their aids.The significant improvement in the visual performance of patients with low vision after the prescriptionand training on the use of LVAs, associated with patients' satisfaction, confirms the importance of expanding lowvision rehabilitative services and increasing the public awareness of its existence and benefits. The increasing numbers of patients who are old or visually impaired and who can no longer be helped by conventional optical, medical or surgical methods, represent a challenge to optometrists and ophthalmologists both in developed and developing countries.235The most effective way to reduce the degree of handicap associated with visual impairment is to provide low vision aids (LVAs) as a part of a comprehensive low vision rehabilitative service.368In a developing country like Egypt, provision of low vision services represents a challenge due to the lack of knowledge of some of the health care providers of the existence of such services. Furthermore, Egypt lacks an effective national health insurance program that can cover the relatively high cost of LVAs.The aim of this study was to evaluate the effectiveness of LVAs in improving both distance and near vision among 50 Egyptian patients of diversified etiology for low vision. We further aimed at evaluating the level of patients' satisfaction as well as at identifying the common complaints reported after use of LVAs.Patients included in this study were selected at random from patients attending the low vision clinic of Mansoura Ophthalmic Center, Mansoura University, Egypt. Patients were included in the study if they had a best corrected visual acuity (BCVA) of less than 6/18 in the better eye; in accordance with WHO definition of low vision.9Exclusion criteria were age less than 6 years, mental handicap, media opacity, illiteracy or visual acuity better than 6/18 or worse than 1/60. An informed consent was obtained from adults or parents of children enrolled in the study after detailed explanation of the nature and possible outcome of the study. The study conformed to the Declaration of Helsinki and was approved by the Research Ethical Committee of Mansoura University. The age of the 50 patients enrolled in this study ranged from 6 years to 88 years. Thirty-four patients were males (68%) while 16 were females (32%). Demographic data are summarized in All patients underwent full history taking including patient's visual requirements and previous low vision evaluation or use of LVAs. Full ophthalmic examination was performed including visual acuity (VA) testing. Distant VA was measured unilaterally then bilaterally at 3 meters and near visual acuity was then tested . Near visual acuity was measured binocularly at the patient's preferred distance and then at 25 cm using a +4.00 D reading add. Refraction was measured using streak retinoscopy when possible; otherwise a bracketing technique in the form of a trial of high powered spherical and cylindrical lenses was adopted. 
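Gains such as the "5 lines or more" criterion quoted above are usually counted on a logMAR scale, where one chart line corresponds to 0.1 log units. The short sketch below uses that common convention as an illustration and is not necessarily the exact procedure followed in this study.

```python
import math

def snellen_to_logmar(va="6/60"):
    """logMAR = -log10(Snellen fraction); 6/6 -> 0.0, 6/60 -> 1.0."""
    num, den = (float(x) for x in va.split("/"))
    return -math.log10(num / den)

def lines_gained(before="6/60", after="6/18"):
    """Approximate chart lines gained, assuming 0.1 logMAR per line."""
    return round((snellen_to_logmar(before) - snellen_to_logmar(after)) / 0.1)

print(lines_gained("6/60", "6/18"))   # about 5 lines
```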
Central visual field was tested using Amsler Grid.Before proceeding to the choice and training in the use of LVAs, a thorough discussion with the patient was performed to assess the patient's visual needs, to describe the nature of visual impairment and to explain its influence on visual performance, including limitations even after use of LVAs.n (n= the number of steps of improvement required). When the N-point notation was used, the magnification was calculated as: Magnification required= Present VA/ Required VA.According to patient's needs, the required magnification was calculated. Magnification for distance was calculated using the formula: Magnification required= Required VA/ Present VA. For the near magnification, when Keeler A system was used, magnification was calculated using the formula: Magnification= 1.25In-office trials of variable LVAs were then started. For distant tasks, the available low vision aids were hand-held or spectacle-mounted telescopes, either in a fixed focus or variable focus form. For near tasks, microscopes, hand-held or stand magnifiers were offered to the patients. Non-optical aids such as reading stands, typoscopes, direct illumination or large print material were recommended according to each individual case. Patients were advised on how to use the aids and were individually trained on using different techniques such as steady eye strategy, eccentric fixation, focusing and tracking. After allowing patients to try variable aids, counselling to determine the suitable aid for each patient was performed, considering the needs, visual impairment status, and any other variables such as socio-economical factors. After the initial visit, 3 more in-office training sessions were performed. Each session was almost 30 minutes long. Patients were then allowed to purchase their own aids. The optical low vision aids used in this study were:Keeler Vision Enhancement Assessment Set.Schweizer Optik hand-held aspheric magnifiers, series 1840, Germany.Raylite, Coil illuminated stand-magnifiers, series 2, England.Coil half-eye microscopes with a built in base-in prism, England.In-office follow up visits were planned up to 6 months, at the end of which an interview questionnaire was performed by the LVA therapist. Patients were asked about the frequency of use of LVAs, the duration of use each time, how difficult was it to use the aid after the in-office training, and the kind of complaints patients had while using the LVAs. Patients were also asked to rate their level of satisfaction with their LVAs and with the rehabilitation service in general.According to etiology of low vision, patients fell into 4 groups: Group A: patients with low vision attributed to a macular lesion; group B: patients with low vision attributed to optic atrophy; group C: patients who had both macular and optic nerve disease and group D: patients with low vision due to other causes. The etiology of low vision among patients enrolled in this study is summarized in At the time of presentation all patients were no longer satisfied with their present spectacles or LVAs if they were using any. 
The refractive errors of patients are represented as spherical equivalent and summarized in In accordance with the WHO categories of visual loss, thirty-two patients (64%) were visually impaired-BCVA worse than 6/18, but better than 6/60-, 8 patients (16%) were severely visually impaired-BCVA worse than 6/60, but better than 3/60-, while 10 patients (20%) were legally blind (BCVA worse than 3/60).Differences in near VA between Keeler system and point system were observed, so we chose to report the results in Point system as it was in Arabic language and as a continuous text, while Keeler A system was in the form of Landolt's broken rings and as isolated symbols which might cause false high results. We did not include the results of 3 children who were considered non-proficient readers. At time of presentation only 3 patients (6%) could discern N8 print, 8 patients (17%) could discern N10 print, 5 patients (11%) could discern N24 print, 8 patients (17%) could discern N32 print, and 23 patients (49%) could only discern N48 print or even larger fonts.Improvement of distance VA using telescopes showed statistical significance . Twenty-eight patients (56%) showed improvement of 5 lines or more. Nineteen patients (38%) showed improvement of 3-4 lines and 3 patients (6%) showed mild improvement of 1-2 lines. The improvement in the groups according to the etiology is described in After provision of low vision aids, there was a significant increase in the number of patients who could discern N8 print and better. Twenty seven patients (57%) could discern N8 print and better. Thirty-one patients (66%) at presentation could only discern N32 or larger print, this number markedly decreased to only 2 patients (4%) after use of LVAs. Results of improvement in near visual acuity are detailed in A correlation between improvement in near VA and the pre-correction level of distance VA was observed. Using LVAs, twenty-two patients (68.75%) of the visual impairment group could read print size N8 or better; 2 patients (25%) of the severe visual impairment group could discern N8, while three patients (33.3%) of the blind group could discern N8 print. Therefore the best improvement was achieved in the group of patients that were in the visually impaired group.Twenty-seven patients (54%) asked for an aid to help them in near tasks, 17 patients (34%) asked for aids to help in both near and distance tasks, while only 6 patients (12%) needed aids to help in distance tasks only. Patients asking for distance tasks only were children.The magnification level of prescribed aids ranged between 2X and 10.1X. More than half of the patients (58%) used aids with a range of power between 2X and 5X, while 42% of the patients used aids ranging from 5.4X to 10.1X. High powered reading aids (microscopes) were the most commonly used near aid (54%), followed by hand-held magnifiers (24%). Non-optical aids were prescribed for 32 patients (64%). The most common were the direct illumination and reading stands. 
Large print text was used to assist 3 patients (6%) who only needed to read The Holy Books which were the only commercially available texts in large print in Egypt.Figures The findings of our study confirm that the provision of low vision aids is associated with a statistically significant improvement in both near and distance visual acuities and with patients' satisfaction.Despite the fact that provision of low vision services prove to be associated with improved functional status and quality of life of patients with visual impairment,21112135124Interestingly, unlike the epidemiological results of many studies,819In accordance with this was the etiology of visual impairment among our sample. Sixty-eight per cent of the patients in this study had macular diseases, yet only 6% of those were due to age-related causes such as AMD, while the rest were mostly congenital in nature. This represents another point of difference compared to studies reported elsewhere.192022We observed that the improvement in distance visual acuity was not dependent on the underlying pathology, since the etiology profile of the patients showing improvement in distance visual acuity to 5 lines or more was almost identical to the etiology profile of the whole patients' sample. Similar findings were previously reported.Analysis of the complaints of the patients after the use of aids in this study revealed that the clumsy appearance was the main complaint, especially when the patient started to use the aid in front of relatives, work or class mates. Another reason was the need to adopt new techniques for reading or using the LVAs with a sense of permanent loss of pre-visual impairment reading abilities, which was perceived as a declaration of patient's permanent handicap. Patients reporting to be frustrated about the service were mainly those who had unrealistic expectations even after counseling and discussion about the limitations of LVAs in terms of its functional as well as cosmetic aspects.One limitation to the accurate assessment of the visual performance of patients in this study was the use of only the ability to read small print, without assessing neither the speed nor the duration of reading or performing the visual tasks. Another limitation was the inability of our institute to provide patients with trial closed-circuit televisions due to financial restrictions, as well as our assumption that our patients would not be able to afford such aids even if they prove effective.This study is the first in Egypt to report the outcome of a low vision rehabilitation service. Relative to the costs of visual impairment, the provision of low vision rehabilitation services seem to be quite low. In a developing country such as Egypt, increased awareness of the public and the medical health providers of the availability and the benefits of such services is expected to help improve the quality of life of patients who are visually impaired. The concept that nothing further could be done for individuals who are visually impaired might be changed and perhaps health authorities might eventually be encouraged to finance such services."} +{"text": "CdZnTe detectors have been under development for the past two decades, providing good stopping power for gamma rays, lightweight camera heads and improved energy resolution. However, the performance of this type of detector is limited primarily by incomplete charge collection problems resulting from charge carriers trapping. 
This paper is a review of the progress in the development of CdZnTe unipolar detectors with some data correction techniques for improving performance of the detectors. We will first briefly review the relevant theories. Thereafter, two aspects of the techniques for overcoming the hole trapping issue are summarized, including irradiation direction configuration and pulse shape correction methods. CdZnTe detectors of different geometries are discussed in detail, covering the principal of the electrode geometry design, the design and performance characteristics, some detector prototypes development and special correction techniques to improve the energy resolution. Finally, the state of art development of 3-D position sensing and Compton imaging technique are also discussed. Spectroscopic performance of CdZnTe semiconductor detector will be greatly improved even to approach the statistical limit on energy resolution with the combination of some of these techniques. A major characteristic of this type of detector is the capability of converting \u03b3-rays directly into electronic signals. In comparison to scintillators, semiconductor detectors avoid the random effects associated with scintillation light production, propagation and conversion to electrical signal in such a way that they represent the main alternative to scintillator-based single photon imaging systems. Compared to established use of Si and Ge, cadmium zinc telluride (CdZnTe) is the most promising material for radiation detectors with high atomic number (good stopping power), large band-gap (room-temperature operation), and the absence of significant polarization effects ,2. The i2.The collection efficiency of charge carriers is a crucial property that affects the energy resolution of semiconductor detectors. This efficiency is always reduced by charge carriers trapping that results from crystal defects and the poor charge transport properties of charge carriers. For example, grain boundaries that are generated during crystal growth can seriously trap charge carriers . It has 2.1.L\u0394Q) that is achieved by a moving charge q from interaction position ix to fx and induced on the electrode (L), can be calculated according to ix to fx is the initial and final position of q, and 0E and 0\u03c6 correspond to the weighting electric field and weighting potential respectively. Weighting potential (electric field) is defined as the potential (electric field) that would exist in the detector when the collecting electrode is biased at unit potential and all other electrodes are held grounded. It does not really exist inside the detector but is only for calculation convenience. Note that the induced charge is independent of the applied bias voltage on the electrodes. That voltage only determines the trajectories of charge carriers.As already stated, the charge carriers that are generated by \u03b3-photon energy deposit drift towards the corresponding electrodes. Shockley and Ramo proposed a method in 1940s to calculate the induced charge by introducing a concept of \u201cweighting potential\u201d \u201310. The 2.2.totalQ) that is generated on the feedback capacitor consists of two parts\u2014free electrons that are collected directly by the electrodes and the charges induced by trapped carriers within the detector (hiQ and eiQ are charges that are induced by trapped holes and electrons respectively on the feedback capacitor and efQ is the free electrons collected on the anode and conducted onto the feedback capacitor. 
Suppose that the free charge carriers that are created by photons absorbtion are +0Q (holes) and -0Q (electrons) and that the interaction position is at a distance of 0x from the cathode. When these carriers drift towards the respective electrodes with initial velocities of hE\u03bc (holes) and eE\u03bc (electrons), the number of them is decreased due to charge trapping within the detector. Here, E is the applied uniform electric field, h\u03bc the hole mobility and e\u03bc the electrons mobility. Moreover, there is an assumption that a loss of charge caused by charge trapping proceeds exponentially with time (efQ) and holes (hfQ) with the change of the position x in the drift path, can be then obtained as:Another approach to calculate the output charge on the electrode, called \u201cstatic charge analysis and capacitance coupling method\u201d, was introduced by Lingren and Butler . What isdetector :(2)Qtotith time and detref(x)dQ and dhf(x)Q, is equal to the infinitesimal charge multiplied by a weighting factor. This weighting factor is a ratio of the capacitance from the interaction point to the collecting anode to the total capacitance from that point to all electrodes. Thus the infinitesimal induced charges (ei(x)dQ and hi(x)dQ) on the anode can be given as:a(x)C is the capacitance from the interaction point x to the collecting anode, t(x)C is the total capacitance from that point to all electrodes. Here we use an example of a planar detector to verify Lingren's method. For a planar detector, the weighting factor a(x)/ Ct(x)C is equal to x/L [L is the detector's thickness. Then the total induced charge (iQ) on anode is obtained as:totalQ can be obtained by pluging ef(L)Q (i(x)Q and Qi(x) (Qi(x) into Equ) (Qi(x) is ident3.Energy resolution is one of the main performance parameters for gamma ray detectors. As seen from 3.1.As mentioned previously, hole trapping limits detector performance. This is because a long tail is produced in the measured spectrum due to incomplete charge collection. It was observed that irradiation from cathode side can contribute to reducing this effect because Another irradiation configuration, in which the irradiation direction is orthogonal to the applied electric field, was being considered as a way of overcoming the compromise between good spectroscopy and acceptable detection efficiency ,15. In c3.2.Electronic methods have also been used to improve the spectrometric performance of CdZnTe detectors, such as pulse shape discrimination (PSD) \u201319 and pet al. [A novel algorithm, used for rejecting incomplete charge collection (ICC) events in CdZnTe detectors, was proposed by Bolotnikov et al. . This me4.To some extent, both of the methods mentioned in Sections 3.1 and 3.2 can reduce hole trapping effect on the charges that are collected by electrodes; these methods still remaining insufficient to obtain a good quality energy resolution. In addition, drastic losses in detection efficiency are caused with a limited improvement in energy resolution, especially with thicker detectors. Therefore, the approaches mentioned above could be adjunct methods to obtain better energy resolution or furthermore could be used in certain occasions where efficiency is a less important parameter. Unipolar detector designs, however, have been developed to overcome the deleterious effects of hole trapping problem. 
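The depth dependence derived in Section 2.2, exponential trapping of drifting carriers combined with the linear weighting potential x/L of a planar device, is captured by the familiar Hecht relation, sketched below for a 1 cm thick planar detector. The mobility-lifetime products and field strength are typical illustrative values for CdZnTe, not figures taken from this review.

```python
import math

def hecht_cce(x0, L=1.0, E=1000.0, mu_tau_e=3e-3, mu_tau_h=5e-5):
    """Charge collection efficiency Q/Q0 of a planar detector for an
    interaction at depth x0 (cm, measured from the cathode).

    L        : detector thickness [cm]
    E        : applied electric field [V/cm]
    mu_tau_e : electron mobility-lifetime product [cm^2/V]
    mu_tau_h : hole mobility-lifetime product [cm^2/V]
    """
    lam_e = mu_tau_e * E     # mean electron drift length before trapping
    lam_h = mu_tau_h * E     # mean hole drift length before trapping
    q_e = (lam_e / L) * (1.0 - math.exp(-(L - x0) / lam_e))
    q_h = (lam_h / L) * (1.0 - math.exp(-x0 / lam_h))
    return q_e + q_h

for depth in (0.0, 0.5, 0.9):    # near cathode, mid-volume, near anode
    print(f"x0 = {depth:.1f} cm -> CCE = {hecht_cce(depth):.2f}")
```

The steep fall-off for interactions near the anode, where holes must traverse most of the detector, is the quantitative reason why cathode-side irradiation (Section 3.1) and the single-polarity electrode designs discussed next improve spectroscopy.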
The most effective and successful prototype models, listed and mentioned below, include the Frisch-grid device, pixelate detectors, coplanar grid detectors, hemispherical electrodes and strip detectors.4.1.Frisch-grid-based design that was introduced by Frisch was origet al. [The most studied structures for such semiconductor detectors include the Frisch strip detector ,26, trapet al. ,26 with et al. . Main chet al. .Both of the two structures mentioned above suffer from severe surface leakage currents between the grid and anode, especially under conditions of higher applied voltage. This problem is definitely a limitation for detectors when trying to gain better performance. The capacitive Frisch grid detector ,43 and UOne drawback of pixelate detectors is that the pixelate devices suffer from charge sharing problem among pixels. The electronics to solve this problem can be challenging. A data correction method was repo4.3.The coplanar-grid electrode concept, first reported by Luke ,51 and ba(x)/Ct(x)C is equal to x/(2L) since the area of collecting electrodes is just one-half of that of the similar planar detector. With this version of detector, an energy resolution of 3.1% FWHM at 662 keV was obtained initially [The weighting potential distribution that is obtained using finite element analysis is shown in nitially ,51 and tnitially . Howeveret al. verified this point and identified the problems of non-symmetric effect that causes detector performance degradation. By adding a boundary electrode and then adjusting the strip width of the outermost grids [The key for coplanar grid design is that the weighting potential of the two anode grids are almost equal inside most of the detector volume except the vicinity of the anode. Therefore the subtraction of the two anode grids signal achieves a near-zero weighting potential inside most of the volume. As a result the subtraction signal is only sensitive to the motion of electrons in the vicinity of the anode. Several factors resulting in performance degradation have been studied ,50,53,54st grids , the detA drawback of coplanar-grid detector is that it requires two sets of output readout electronics, which inevitably will import more electronic noises. In addition, there is a trade-off between the excessive noise and collection efficiency of the coplanar anodes. The bias voltage should be sufficient to collect enough carriers but cannot reach a very high level before electric noises and leakage currents begin to overwhelm the effective signals.4.4.The basic concept of hemispherical electrodes is to increase the electric field in the region of the detector where carrier trapping is more frequent, thus attaining a uniform charge collection across the whole area of the detector \u201359. As s3[et al. [3 with energy resolution of less than 1.9% FWHM at 662 keV achieved with the optimal configuration.Such a design has two primary effects: one is that a higher electric field near the anode can sweep holes more effectively; the other effect is that more charge carriers are generated near the cathode and the holes concentration near the anode is very small. The combination of these two effects renders the hemispherical electrodes as a single-charge-sensing electrode. With this configuration, energy resolution of 6% FWHM at 662 keV was obtained using a detector with dimensions of 10 \u00d7 10 \u00d7 5 mm3. This ki3[et al. 
with a f 4.5. A research group at the University of New Hampshire Space Science Center developed the earliest prototype of a CdZnTe strip detector \u201364. Thiset al. [The new concept of a \u201cthree-electrode model\u201d or \u201ccoplanar pixel and control electrode model\u201d, which was reported in 1997 by Lingren et al. and Butlet al. , aimed aet al. .2 pixels but only need 2N electronics, which greatly reduces the power requirement and complexity of the device electronics. As for the orthogonal strip detector, a bias voltage difference is required to be applied between the anode and the control electrode. This difference value depends on its geometric design [Two kinds of strip detector , an orthc design ,71. A hic design .Compared to an orthogonal strip detector, the advantages of the charge-sharing strip detector include: (1) the electronics are simplified due to the fact that the row and column electrodes are identical and therefore their output signals have the same shape; (2) the grid electrodes have a larger area and thus can provide a more effective non-collecting signal than that provided by the individual strip electrodes in an orthogonal strip detector. The disadvantage, however, is that electronic noise is generated by the many pads and also by summing the signals of these pads. Further design details for the two versions of detectors, including the analog processing circuit design , edge anet al. [2 and fabricated a detector prototype that achieved an energy resolution of 0.9% at 662 keV.Another strip detector concept, called the drift strip detector, was first introduced in 1987 and developed using silicon material . Then Paet al. applied The first CdZnTe drift detector was developed at the Danish Space Research Institute ,80 and iA summary of the features and performances of CdZnTe detectors with the different geometries mentioned above is shown in  5. et al. [Although hole trapping is the key factor in degrading spectroscopic performance, electron trapping does exist in practice. It was observed experimentally that around 5\u201310% of the electrons generated by \u03b3-ray interactions are trapped in a 1 cm thick CdZnTe detector . The eleet al. using thet al. [The first prototype of a 3-D position-sensing spectrometer was developed and introduced by He et al. ,84 basedet al. . Anotheret al. , 2005 [8et al. ,88, 2007et al. and 2012et al. with dif3. Having a total volume of approximately 700 cm3 and weighing under 900 g, the IPRL system , signal processing methods and Application Specific Integrated Circuit (ASIC) with low electronic noise and leakage current. The combination of these techniques will produce a gamma ray detector with good energy resolution and detection efficiency."} +{"text": "Conclusions regarding disturbance effects in high elevation or high latitude ecosystems based solely on infrequent, long-term sampling may be misleading, because the long winters may erase severe, short-term impacts at the height of the abbreviated growing season. We separated a) long-term effects of pack stock grazing, manifested in early season prior to stock arrival, from b) additional pack stock grazing effects that might become apparent during annual stock grazing, by use of paired grazed and control wet meadows that we sampled at the beginning and end of subalpine growing seasons. Control meadows had been closed to grazing for at least two decades, and meadow pairs were distributed across Sequoia National Park, California, USA. 
The study was thus effectively a landscape-scale, long-term manipulation of wetland grazing. We sampled arthropods at these remote sites and collected data on associated vegetation structure. Litter cover and depth, percent bare ground, and soil strength had negative responses to grazing. In contrast, fauna showed little response to grazing, and there were overall negative effects for only three arthropod families. Mid-season and long-term results were generally congruent, and the only indications of lower faunal diversity on mid-season grazed wetlands were trends of lower abundance across morphospecies and lower diversity for canopy fauna across assemblage metrics. Treatment x Season interactions almost absent. Thus impacts on vegetation structure only minimally cascaded into the arthropod assemblage and were not greatly intensified during the annual growing season. Differences between years, which were likely a response to divergent snowfall patterns, were more important than differences between early and mid-season. Reliance on either vegetation or faunal metrics exclusively would have yielded different conclusions; using both flora and fauna served to provide a more integrative view of ecosystem response. Comparisons of persisting versus shorter-term effects of a given long-term disturbance are less common than might be expected; grazing management has provided a good laboratory for such studies, because of detailed, long-term stock use records, the presence of de facto long-term exclosures, and an understanding among managers that long- and short-term grazing effects may differ Absence of apparent long-term disturbance effects does not render shorter-term effects trivial. Invertebrates may be particularly susceptible to such additional short-term effects, but these impacts may not be easily ascertained, because a) invertebrates have been under-investigated in ecosystem studies in general, and b) short-term effects on invertebrates may not be detected by long-term sampling as a result of masking by dispersal and/or recolonization High elevation and high latitude wetlands are valued ecosystem components e.g., that havIn an initial, one-year study, we sought to determine if these grazing patterns caused lasting effects on terrestrial, epigeal arthropods and associated wetland vegetation, or if the long winters without stock allowed an annual recovery of assemblages from any impacts that occur during summer usage Calamagrostis muiriana B.L. Wilson and S. Gray; see Wet meadows are saturated with water during much of the year Pack stock grazing and associated management practices in Sequoia National Park present an ideal scenario for examination of long-term grazing effects. This work was facilitated by a) the presence of many wet meadows that had been closed to stock for decades that could be paired for contrast with grazed wet meadows with known usage patterns, and b) a controlled opening date for grazing on each wet meadow, so we could sample immediately after greenup, i.e., after there was high quality arthropod habitat, but just before stock grazing. The grazing patterns and management regime enabled us to design what was in essence a subsequent long-term and large-scale experiment see also . We inveThe study was cast as a 2\u00d72\u00d72 blocked design using ten pairs of control and grazed subalpine wet meadows. Each study site had two randomly-selected subsample locations, with two additional randomly-selected subsamples nested within each of the first pair of subsamples. 
Vegetation in these wetlands typically begins to senesce in mid- to late September We sampled all sites four days or less before grazed sites were opened to pack stock in early season and similarly just before vegetation senescence toward the end of mid-season conditions in both 2010 and 2011. The two years captured varying antecedent conditions, because the winter preceding 2011 sampling produced greater snow water equivalent than the winter preceding 2010 sampling (SWE \u200a=\u200a89 cm). Stock opening date at individual meadows is determined by the Park based on soil saturation and vegetation characteristics, so we sampled under similar phenological conditions in each year. Sampling in 2010 began in early July and concluded in early September, whereas 2011 sampling ran from early August through mid-September because of late snowmelt. See A Scientific Research and Collecting permit was obtained from the US National Park Service for work in Sequoia National Park for each year of the study. No protected species were sampled.We sampled the wetland canopy assemblage with sweep nets and secondarily targeted ground-dwelling fauna, especially ants, by baiting. Sweep nets are likely the most frequently used device for sampling epigaeic arthropods, can detect sparsely distributed taxa 2. The collapsible net had a 30.5 cm aperture and mesh size of 0.5\u00d70.75 mm (BioQuip #7112CP). We collected sweep samples prior to the disruption associated with other data collection at the sites, and samples were killed with 99% ethyl acetate 2 portions of honey or tuna that were placed on green construction paper cards and weighted with rocks. After 30 minutes, ants, mites, and other arthropods were removed with forceps and placed in a vial containing 70% ethanol , and litter cover, as well as canopy height and litter depth on each of the subsample locations. Such coarse vegetation parameters are effective in detecting pack stock impacts on vegetation assemblages Air temperature and average wind speed were recorded midway between the two subsample locations with a Kestrel 3000 digital meter in order to verify that similar meteorological conditions obtained between paired grazed and control wetlands. We used a pocket penetrometer (Ben Meadows) to estimate soil strength at each of the locations used for canopy height and litter depth measurement, and the average of these four estimates was the site mean.E(18S); E(S) is valuable, because samples with larger numbers of individuals will tend to have more species, even if all samples represent equal effort and are collected from the same assemblage. We calculated E(18S), and PIE using the application Diversity. Metrics that showed departures from normality 0.5) of proportional data and log transformations (log (y+1)) of all other data such that parametric assumptions were met. Substitutions were not made for cells with missing values. We used G*Power We examined the influences of grazing, season, and year on invertebrate assemblages and vegetation structure with both uni- and multivariate approaches. Univariate analyses were 2\u00d72\u00d72 blocked ANCOVAs (df \u200a=\u200a49) using a general linear model in SYSTAT 12. 
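As a rough illustration of the univariate analysis just described, the sketch below fits one 2\u00d72\u00d72 blocked ANCOVA in Python with statsmodels. It assumes a long-format table with hypothetical column names (abundance, treatment, season, year, pair, elevation); it is not the SYSTAT 12 procedure actually used, and the file name is a placeholder.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per site x season x year, with columns
# abundance, treatment (grazed/control), season (early/mid), year (2010/2011),
# pair (block id), elevation.
df = pd.read_csv("meadow_arthropods.csv")       # placeholder file name
df["log_abund"] = np.log(df["abundance"] + 1)   # log(y + 1) transform, as described

# 2 x 2 x 2 factorial with the meadow pair as the block and elevation as covariate.
model = smf.ols(
    "log_abund ~ C(treatment) * C(season) * C(year) + C(pair) + elevation",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))          # Type II ANCOVA table
```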
Analysis of site elevation as a covariate was necessary because elevation differed by treatment but could not be affected by the treatment We also used multivariate analyses in order to detect patterns as a function of study factors, across both family and morphospecies matrices, that might not emerge via univariate tests of individual taxa or assemblage metrics . Clearly, idiosyncrasies of the year or years selected for study can influence conclusions regarding assemblage structure and response to grazing disturbance see also , as has It appears that current management of pack stock in the Park has produced moderate negative effects on coarse vegetation structure, but only minimal effects on the arthropod assemblage. This study, however, did capture some minor grazing effects at mid-season that were not apparent from early season sampling that targeted persisting effects only Figure S1Agglomerative cluster analysis of site family data with overlay by grazing treatment and season.(PPT)Click here for additional data file.Figure S2Agglomerative cluster analysis of site morphospecies data with overlay by grazing treatment and season.(PPT)Click here for additional data file.Table S1Mean relative abundance/50 sweeps (standard error), both years combined, as a function of grazing and season; zeros are omitted for clarity.(DOC)Click here for additional data file.Table S2Abundance means (standard errors) for orders and ten most abundant families as a function of Treatment , Season , and Year and results of 2\u00d72\u00d72 blocked ANCOVAs with elevation as a covariate.(DOC)Click here for additional data file."} +{"text": "Brain structural alterations and neuropsychiatric symptoms have been described repeatedly in Fabry disease, yet cognitive deficits have been shown to be only mild. Here, we aimed to investigate neuropsychiatric symptoms and brain structure longitudinally. We expected no clinically relevant increase of neuropsychiatric symptoms in parallel to increased brain structural alterations. We assessed 14 Fabry patients (46.1 \u00b1 10.8 years) who had participated in our investigation eight years ago. Patients engaged in neuropsychiatric testing, as well as structural magnetic resonance imaging and angiography to determine white matter lesions, hippocampal volume, and the diameter of the larger intracranial arteries. While Fabry patients did not differ on cognitive performance, they showed progressive and significant hippocampal volume loss over the 8-year observation period. White matter lesions were associated with older age and higher white matter lesion load at baseline, but did not reach statistical significance when comparing baseline to follow-up. Likewise, intracranial artery diameters did not increase significantly. None of the imaging parameters were associated with the neuropsychiatric parameters. Depression frequency reduced from 50% at baseline to 21% at follow-up, but it did not reach significance. This investigation demonstrates clinical stability in cognitive function, while pronounced hippocampal atrophy is apparent throughout the 8 years. Our middle-aged Fabry patients appeared to compensate successfully for progressive hippocampal volume loss. The hippocampal volume decline indicates brain regional neuronal involvement in Fabry disease. Fabry disease (FD) is a rare hereditary x-linked lysosomal storage disorder that results from a deficient activity of the enzyme \u03b1-galactosidase A. 
Consequent lipid accumulation results in multiorgan pathology that predominantly affects tissues of cardiac or renal systems, and the central nervous system (CNS) . CNS invExisting studies have only focused on neuropsychiatric and neurological FD symptoms cross-sectionally. However, longitudinal designs are necessary to determine the relationship between neuropsychiatric and neurological symptoms in FD. In line with our baseline investigation where FD patients and healthy controls only differed slightly in their cognitive performance , we inteThis longitudinal cohort study was approved by the local ethics committee of the Landes\u00e4rztekammer Rheinland-Pfalz in Mainz and all patients gave their written informed consent. Participants were enrolled at the Children\u2019s Hospital, University Medical Center of Mainz. Baseline assessment was performed from 2003\u20132005 and follow-up assessment took place 8 years after baseline assessment from 2011\u20132012 . At baseReasons for patient dropout included: pregnancy, mortality, lack of contact information, and loss of interest . Both deTo assess learning and long term memory we used the German version of the Rey Auditory Verbal Learning Task (AVLT ) in formBaseline, as well as follow-up data was obtained from a 1.5 T Magnetom Sonata system . Standard 3D T1 Magnetization Prepared Rapid Gradient Echo (MP-RAGE)-weighted sequence was used for hippocampal volume (HV) analysis, FLAIR-weighted sequence was performed for determination of white matter lesions (WMLs), magnetic resonance angiography (MRA) time-of-flight (ToF)-sequence was assessed for means of measuring arterial diameters, and PD/T2 sequence to exclude further brain abnormalities.3 as obtained from BET.For hippocampus measurement Analyze\u00ae Software was used. Hippocampi were manually traced slice-by-slice on the default coronal view of MP-RAGE sequences for each hemisphere according to the Pruessner standardized protocol . An expeWMLs were determined on the transversal FLAIR-sequences using the Analyze\u00ae 8.1 Software. WML boundaries were manually traced slice-by-slice by an experienced rater (A.B.) and were defined as bright lesions (>2mm) of the white matter or basal ganglia. Slice volumes were summed (ml) for every participant, and the relative ratio with BET was calculated (as previously described).Diameters of the larger cerebral arteries were measured manually by an experienced rater (I.L) on the sagittal ToF sequence using the Sectra Workstation IDS7 . Diameters were measured perpendicular to the vessel . The folFor statistical analysis we used IBM SPSS statistics 22.0 software . All statistical analyses were performed with gender as a covariate, except otherwise specified, as it has been found to have significant influence on FD development . AnalyseDescriptive data and group comparisons of the neuropsychiatric parameters for baseline and follow-up are described in The following results have been controlled for gender, unless stated otherwise. p = .025; left: r = .683, p = .007). Furthermore, WML difference correlates with difference in performance on TMT\u2013B . Partial correlations after controlling for gender showed no significant correlations, but showed tendencies (r >. 4) towards the previously-described associations before controlling for gender. 
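The partial correlations reported here (associations between volume change and memory change after controlling for gender) can in principle be reproduced by residualizing both variables on the covariate and correlating the residuals. The sketch below does this with synthetic data and hypothetical variable names; it is an illustration of the statistic, not the SPSS procedure used in the study.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Pearson correlation between x and y after removing the linear effect
    of a single covariate (e.g., gender coded 0/1)."""
    Z = np.column_stack([np.ones_like(covariate, dtype=float), covariate])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

rng = np.random.default_rng(0)
n = 14                                    # cohort size at follow-up
gender = rng.integers(0, 2, n).astype(float)
hv_change = rng.normal(0.3, 0.1, n)       # hypothetical HV difference (baseline - follow-up)
recog_change = 2.0 * hv_change + rng.normal(0, 0.1, n)  # hypothetical recognition change

r, p = partial_corr(hv_change, recog_change, gender)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```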
Brain structural parameters showed no association with depression severity or frequency, with occurrence of pain as measured with the brief pain inventory, with renal involvement (creatinine clearance), with cerebrovascular events or cardiovascular disease (cardiomyopathy and/or arrhythmias) [Spearman correlations showed significant associations between HV (left and right) difference (baseline minus follow-up) and difference of recognition, as measured with the memory test AVLT .Our analyses revealed no differences in cognitive performance between baseline and follow-up. In a previous publication, there were no clinically relevant cognitive deficits apparent at baseline, compared to controls . Slight Recent literature has shown that dolichoectasia of the larger intracranial arteries, especially the basilar artery, can be the earliest marker of cerebrovascular involvement and might therefore be a potential screening tool in FD , 4. HoweA limitation to our findings is the significant dropout rate of 44%, which decreased our FD cohort to 14 participants. The majority of the dropouts can be explained by the emergence of new FD centers in several locations throughout Germany during our follow-up interval of 8 years. Because we assessed patients from all over Germany at baseline, motivation for travelling to a more distant Fabry center at follow-up was most likely low. Mortality of the 25 patients included at baseline only accounted for 4% of the dropouts and these patients were not more severely affected than the mean severity at baseline, which suggests no FD severity bias in our results. Considering that our sample was rather small (n = 14), results might be susceptible to type II errors. Although we analyzed the relationship between hippocampal atrophy and several factors known to possibly alter HV , we cannot rule out that other factors such as diabetes, obesity, obstructive sleep apnea, vitamin b12 deficiency etc. might have influenced the results . HoweverThis investigation demonstrates clinical stability in cognitive function, while pronounced hippocampal atrophy is apparent throughout the 8 years. Our middle-aged FD patients seem to compensate successfully for progressive HV loss. However, since hippocampal atrophy was 11% over eight years, we expect FD patients to show further hippocampal atrophy, eventually passing a threshold of cognitive decline much earlier than the healthy population. Notably, marked hippocampal atrophy clearly exceeding age-associated volume decline provided further evidence of regional neuronal involvement in FD. The heterogeneous WML increases were associated with older age and higher WML-load at baseline, but not with HV, suggesting that WML involvement and HV decline are independent processes occurring in FD.S1 Table(DOCX)Click here for additional data file."} +{"text": "The dramatic rise in Noncommunicable Diseases (NCD) in the oil-producing countries of the Arabian Peninsula is driven in part by insufficient physical activity, one of the five main contributors to health risk in the region. The aim of this paper is to review the available evidence on physical activity and sedentary behaviour for this region. Based on the findings, we prioritize an agenda for research that could inform policy initiatives with regional relevance.We reviewed regional evidence on physical activity and sedentary behaviour to identify the needs for prevention and policy-related research. 
A literature search of peer-reviewed publications in the English language was conducted in May 2016 using PubMed, Web of Science and Google Scholar. 100 studies were identified and classified using the Behavioural Epidemiology Framework.Review findings demonstrate that research relevant to NCD prevention is underdeveloped in the region. A majority of the studies were epidemiological in approach with few being large-scale population-based studies using standardised measures. Correlates demonstrated expected associations with health outcomes, low levels of physical activity (particularly among young people), high levels of sedentary behaviour (particularly among men and young people) and expected associations of known correlates . Very few studies offered recommendations for translating research findings into practice.Further research on the determinants of physical activity and sedentary behaviour in the Arabian Peninsula using standard assessment tools is urgently needed. Priority research includes examining these behaviours across the four domains . Intervention research focusing on the sectors of education, health and sports sectors is recommended. Furthermore, adapting and testing international examples to the local context would help identify culturally relevant policy and programmatic interventions for the region.The online version of this article (doi:10.1186/s12889-016-3642-4) contains supplementary material, which is available to authorized users. Noncommunicable disease (NCD) accounts for a large portion of mortality and morbidity in the oil-producing countries of the Arabian Peninsula , 2. A laInsufficient physical activity is one of the main contributors to health risk globally . SedentaThe rapid socio-economic development of the region has contributed to a rise in urbanization, motorisation, trade liberalization and \u201cwestern\u201d dietary patterns , 11 whicPhase 1. Identifying relationships of physical activity and sedentary behaviour with health outcomesPhase 2. Measuring physical activity and sedentary behaviourPhase 3. Characterizing prevalence and variations of physical activity and sedentary behaviour in populationsPhase 4. Identifying the determinants of physical activity and sedentary behaviourPhase 5. Developing and testing interventions to influence physical activity and sedentary behaviourPhase 6. Using evidence to inform public health guidelines and policyResearch establishing patterns of physical activity and sedentary behaviour is well-documented in most other regions globally , 19\u201321. As a research framework, it helps identify research gaps and systemizes the development of a research agenda to inform and guide public health policy and practice. To be effective, regional evidence is needed to understand the contextual determinants of these behaviours and introduce regionally relevant policies to address them , 24. We A literature search was conducted in May 2016 with PubMed, Web of Science and Google Scholar using the following search terms: active living; exercise; lifestyle; physical activity; walking; screen time; sedentary; sitting or television viewing; and the name of each country in the Region or Arab. The search was limited to peer-reviewed publications in the English language from any time period through April 2016. All articles were imported in an Endnote file to facilitate deduplication, screening and selection.The initial search produced 3,560 articles, after deduplication. 
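For reproducibility, the listed behaviour terms and country terms are typically combined into a single Boolean query. The snippet below assembles one plausible PubMed-style string from the terms given above; the exact syntax used by the authors is not reported, so this is only an illustrative reconstruction.

```python
behaviour_terms = [
    "active living", "exercise", "lifestyle", "physical activity", "walking",
    "screen time", "sedentary", "sitting", "television viewing",
]
country_terms = [
    "Bahrain", "Kuwait", "Oman", "Qatar", "Saudi Arabia",
    "United Arab Emirates", "Arab",
]

def or_block(terms):
    # Quote multi-word terms so they are searched as phrases.
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = f"{or_block(behaviour_terms)} AND {or_block(country_terms)}"
print(query)
```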
All articles were screened independently by two authors (RMM and MJK). Screening was conducted in two steps. In the first step, original English language articles on related disciplines published in peer-reviewed journals through April 2016 were included by judging from the title and source of articles. Publications in other languages, conference proceedings and theses, as well as articles in unrelated disciplines, were removed. By the end of this step, 347 articles remained.Phase 1: Cross-sectional studies used a clearly described measure for physical activity/sedentary behaviour and prospective studies involved a physical activity interventionPhase 3: Studies clearly defined physical activity as meeting the recommendation of 150\u00a0min/week for adults or 60\u00a0min/day for children/adolescents.Phase 4: For demographic correlates, studies used a clearly described measure for physical activity/sedentary behaviour. The secondary inclusion criteria were not used for the studies examining the non-demographic correlates, to ensure a comprehensive review of available research in the region.In the second step, the abstracts and full texts were examined. The primary inclusion criteria were country-specific studies which gathered original data and fit into any phase of the Behavioural Epidemiology framework , 23, andThis resulted in a total of 100 articles. The flow diagram for article inclusion following PRISMA guidelines can be seen in Fig.\u00a0Once the list of selected studies was identified, RMM extracted and MJK cross-checked the following for each: authors, country in which the study was conducted, sample characteristics , and physical activity/sedentary behaviour measurement tools. Key findings of each study were extracted and organized according to the Behavioural Epidemiology Framework. Differences in opinion about the extracted data and their placement within the framework were discussed to reach consensus. An ecological model that helps to classify potential multiple levels of influence on physical activity and sedentary behaviours \u2013 intrapersonal, social cultural, environmental , 27, wasTwo authors (RMM and MJK) independently assessed the quality of studies included in the review. Studies were assessed for risk of bias using criteria adapted from the Cochrane risk of bias tool and a toFourteen prospective studies and 29 cross-sectional studies utilizing a clearly defined measure of physical activity and sedentary behaviour published in 2013 or later. Examining physical activity and/or sedentary behaviour was explicitly mentioned in the objectives of half (55) of the articles; the remaining focused more broadly on \u201crisk factors\u201d or \u201clifestyles\u201d. Over half of the studies focused on populations in Saudi Arabia (57) and the UAE (16), with 8 or fewer articles about populations in each of the other countries of the region ; the target populations were citizens of each country except for one study where the sample was South Asian immigrants .A majority (86) focused on physical activity with only a few reporting on domain-specific physical activity; work (2) , 36, traThe review identified 14 prospective studies involving a physical activity intervention and 29 cross-sectional population-based studies utilizing a clear definition of physical activity to examine the association of physical activity with a health outcome examined self-report derived measures of total physical activity. 
TV/computer use and/or screen time was the most common proxy measure for sedentary behaviour (32); only six studies reported total sitting time. Some studies used reliable and validated instruments like the IPAQ/IPAQ-short examinedOf the 27 studies reporting on the prevalence of physical activity Table\u00a0, 11 wereSixteen country-specific studies from all six countries except Qatar reported on the prevalence of physical activity in the adolescent population. Eight school-based studies utilized the ATLS; among them, the lowest and highest reported prevalence of physical activity were 43.8 to 70.5\u00a0% for boys and 4 to 39.2\u00a0% for girls in Saudi Arabia and Kuwait respectively , 77\u201382. Sedentary behaviour was reported across 18 studies. Only three were national surveys among adult populations in Oman and Qatar; each study presented their data differently. Two studies were secondary analyses of the same survey conducted in Oman; one reported that a quarter of adults (23.7\u00a0%) sat 6 or more hours/day and the All 15 studies conducted with child and adolescents reported on computer, TV and/or total screen time. Like the adult studies, data were presented differently: mean TV and/or computer time or a prevalence of computer and/or TV of greater than 2 or 3\u00a0hours per day. Two studies reported mean computer/TV times with higher rates in girls than boys , 82. FouThirty-five studies examined the correlates of physical activity Table\u00a0. PopulatIn addition to the demographic correlates, the studies explored other factors associated with participation in physical activity, including intrapersonal, social/cultural, physical environment and policy level correlates. The most frequently identified barriers (negative association) of physical activity identified included: time, self-motivation, perceived health, norms limiting women\u2019s mobility or prioritizing her care-taking role, social support, availability of facilities, limited capacity within health institutions and weather. Positive support for participation in physical activity mentioned in more than one study was the knowledge that physical activity is important , 93 and Only one study examined the correlates of sedentary behaviour . It repoOnly six studies reported on interventions conducted in Bahrain, Saudi Arabia and UAE; three reported increases in physical activity \u201398 Tabl. The durn\u2009=\u200994), the mean score was 3.6 with only a quarter (25.5\u00a0%) rated 5, the highest score , prevalence or identifying the correlates of physical activity and sedentary behaviours (phase 4 n\u2009=\u200935).The findings of this review have identified relevant evidence and some of the limitations in understanding physical activity, sedentary behaviours and public health, an emerging area of knowledge in the Arabian Peninsula. Although 100 publications were identified since 2000, over half of these were published since 2013. This research was spread unevenly across the behavioural epidemiology phases used to structure our review of the evidence , 23. Then\u2009=\u20093) [n\u2009=\u20096). Publications were found from all six countries in the study area, although were mostly focused on adults rather than on children. 
The sedentary behaviour research identified in this review was much more limited than that related to physical activity and covered only the first three of the five phases of the Behavioural Epidemiology Framework.Far fewer published studies addressed the measurement of physical activity and sedentary behaviours \u201334 or thThe findings point towards the need for more and higher-quality research. The following paragraphs describe the research required, closely following the Behavioural Epidemiology Framework. Overall, the body of evidence included only a small number of prospective and cross-sectional studies, which reported generally consistent associations between physical activity and sedentary behaviours and various health outcomes . Globally, there is extensive evidence on physical activity and life expectancy, cardiovascular disease, diabetes, cancer, mental health and bone health, but it largely originates from countries outside this study region , 103. ExStudies found in our review revealed overall low levels of participation in physical activity (particularly among young people), and high levels of sedentary behaviour . Although the prevalence of physical activity among adolescents was generally higher than in adults, a large percentage of both adults and adolescents did not engage in sufficient amounts. Similar findings were observed in a global study of 34 countries ; howeverStudies exploring factors associated with physical activity reported consistent associations with known correlates. Gender and age were consistently associated with physical activity. Men were found to be more active than women and younger people more active than older people, which is consistent with other countries , 107. OnFew studies assessed the physical and policy environments across the four domains of active living . EvidencOnly six studies reported the testing of population-based physical activity interventions \u2013101 . This review revealed that many of the studies to date have employed a narrow understanding of physical activity behaviour, with their focus on \u201cexercise\u201d as a formal and structured activity. This is in contrast to the broader field of physical activity and public health, which has adopted a wider view and includes all types of physical movement consistent with the WHO Global Recommendations .Of particular concern, this review revealed wide variability in the quality of the measurement instruments used and in the presentation of exposure variables, which severely limits within- and between-country comparisons. Except for those studies reporting use of two well-established international measures (IPAQ and GPAQ), there was limited adoption of other valid and reliable tools to assess physical activity and sedentary behaviours, measures of the physical environment, self-reported cognitive and psychosocial measures, and domain-specific measures . There wAlthough there were no studies that fit into the policy-related phase of the Behavioural Epidemiology Framework, we propose policy-relevant research via a critical review of our findings through the lens of international guidelines. To guide international efforts, recommendations on effective and feasible interventions for physical activity have been provided by the WHO in the Global Action Plan 2013\u20132020 . ConsistA quarter of the population in these six countries is under 25\u00a0years of age , 126. 
InLimited capacity within health services to promote physical activity was identified as a key barrier , 128. TwInternationally there is increasing focus on the role of the physical environment \u2013133 and Initiating research to examine the impact of urban planning and transport policy and practice in countries in the Arabian Peninsula is of importance. Research from elsewhere has identified that patterns of land-use, population density as well as the provision of adequate infrastructure to support \u2018active transport\u2019 and optimal green and nature spaces are associated with higher levels of physical activity , 135. GiThe study of sedentary behaviour, relatively new globally, is only now beginning to receive the attention of researchers in the countries of the Arabian Peninsula. Only a third examined sedentary behaviour and the research was limited to phases 1, 2 and 3 of the Behavioural Epidemiology Framework. The proposed research agenda would be similar to that outlined globally; an ecological model of four domains of sedentary behaviour focusing specifically on domestic screen time, extended sitting time in workplaces and schools, and time spent sitting in cars -- not only to better understand their determinants but also in designing appropriate interventions .This is the first systematic review of physical activity and sedentary behaviour for this region and complements an earlier review of the prevalence of physical activity . AdherenThe epidemiological transition, including increasing life expectancy and changing mortality patterns, in the oil-producing countries of the Arabian Peninsula has taken only 50\u00a0years; a timeframe much more rapid than for many other high-income countries. The rapidly rising prevalence of NCDs and increased susceptibility of the population to these diseases have dire consequences to future generations. The predicted trends and future burden on health care systems demands that public health action be more interventionist than those in developed countries . Given the low levels of physical activity in the Arabian Peninsula and high levels of sedentary behaviour, a much stronger evidence base is needed to guide action than is currently available.Policy relevant research should be undertaken by interdisciplinary teams of policy makers and researchers , 108. Gu"} +{"text": "Pseudomonas fluorescens strain EK007-RG4, which was isolated from the phylloplane of a pear tree. P.\u00a0fluorescens EK007-RG4 displays strong antagonism against Erwinia amylovora, the causal agent for fire blight disease, in addition to several other pathogenic and non-pathogenic bacteria.Here, we report the first draft whole-genome sequence of Pseudomonas fluorescens is a Gram negative, rod-shaped bacterium that is widely distributed in various environments . The RAST and SEED (Genomic DNA was extracted using the DNA blood and tissue kit from Qiagen. The whole-genome sequencing library was prepared using the Nextera XT DNA library preparation kit (Illumina) and quantified by a fragment analyzer . Sequencing was completed with 2 \u00d7 250-bp paired-end reads using the Illumina MiSeq platform (Illumina). Standard protocols were used for all of the above kits, as provided by the manufacturers. The reads were cleaned and trimmed using CLC Genomics Workbench version 7 (CLC bio). 
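As an illustration of what the read-cleaning step involves (the study used CLC Genomics Workbench, whose internal algorithm is not described here), the sketch below shows a generic 3'-end sliding-window quality trim of the kind most trimming tools apply; the window size and quality threshold are arbitrary examples.

```python
def quality_trim_3prime(seq, quals, window=4, min_q=20):
    """Trim a read from the 3' end until the mean Phred quality of the
    trailing window reaches min_q. Generic illustration only."""
    end = len(seq)
    while end >= window:
        if sum(quals[end - window:end]) / window >= min_q:
            break
        end -= 1
    return seq[:end], quals[:end]

read = "ACGTACGTACGTAAGG"
phred = [38, 37, 36, 35, 34, 33, 30, 28, 26, 22, 18, 12, 9, 7, 5, 3]
trimmed_seq, trimmed_quals = quality_trim_3prime(read, phred)
print(trimmed_seq, trimmed_quals)
```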
Next, quality-filtered reads were assembled into contigs -Leu-Asp-Thr-Ile-Leu-Ser-Leu-Ser-Ile using antiSMASH (Genome comparison showed closest similarity between EK007-RG4 and the biocontrol agents ntiSMASH . The potMRST00000000. The version described in this paper is the first version, MRST01000000.This whole-genome shotgun project has been deposited in DDBJ/ENA/GenBank under the accession number"} +{"text": "The interactions between two or more molecules or colloidal particles can be used to obtain a variety of self-assembled systems called supramolecules or supracolloids. There is a clear, but neglected, convergence between these two fields. Indeed, the packing of molecules into colloidal or supracolloidal particles emerges as a smart solution to build an infinite variety of reversible systems with predictable properties. In this respect, the molecular building blocks are called \u201ctectons\u201d whereas \u201ccolloidal tectonics\u201d describes the spontaneous formation of (supra)colloidal structures using tectonic subunits. As a consequence, a bottom-up edification is allowed from tectons into (supra)colloidal particles with higher degrees of organization . These ( Therefore, molecular tectonics is \u201cthe art and science of supramolecular construction using tectonic subunits\u201d developed the principles of supramolecular chemistry based on the molecular subunits self-assembly through non-covalent interactions James, . This secolloidal molecules\u201d has been introduced to describe structures made by the self-assembly of particles under the effect of attractive forces and/or external environmental effects colloidal particles. By analogy with the molecular tectonics definition, I propose to use \u201ccolloidal tectonics\u201d to define the art and science of supramolecular formation of (supra)colloidal structures using tectonic subunits (molecular building blocks). Consequently, a bottom-up construction of large colloidal systems is allowed from tectons colloidal structures using tectonic subunits, it is necessary to keep in mind that the total interaction energy can be considered to be the sum of hydrophobic attraction, steric and electrostatic repulsions. To achieve colloidal structures stabilized by hydrophobic/hydrophilic intermolecular forces, it is necessary to use building blocks (tectons) with precise and scalable algorithm. A simple strategy is to self-assemble two tectons with opposite polarities using complementary binding sites leading to stable discrete supramolecular clusters. Next, the hydrophobic effect will be responsible for the clusters self-assembly into (supra)colloidal structures Figures , 2.The first well-investigated system was published in 2012. This system results from ionic metathesis between anionic polyoxometalates (POMs) and cationic surfactants (\u201chydrophilic\u201d and \u201chydrophobic\u201d tectons) leading to the formation of uncharged clusters . Although the stabilizing effect of CDs on biphasic oil/water systems is known since the 1990s colloidal systems are obtained.Throughout the section above, I have presented colloidal tectonics that enables the smart design of nanoparticles with tunable amphiphilic properties that can promote the catalytic activity of multiphasic systems 12O40 nanoparticles to perform olefin epoxidation in eco-friendly solvents , good yields (>95%) and high selectivity (>99%). 
It is noteworthy that the catalytic activities are directly correlated to the dispersion stability in a given solvent: the better the stability the higher the activity. Moreover, the self-assembled nanoparticles are much more active than the native POM (TOF0 \u00d7 10). This effect can be clearly related to the localization of the catalyst in the interfacial layer as well as the accommodation of substrates inside the porous nanoparticles. Such catalytic systems clearly combine the advantages of homogeneous and heterogeneous catalysis: high activity and selectivity, ease of phase separation, re-use of catalyst (after filtration and distillation).In 2014, our group reported the use of amphiphilic catalytic dodecyltrimethylammonium/PWThese systems are actually the so-called Pickering emulsions which are good platforms to enhance mass transfer between substrates with opposite polarity (see above). Two kind of processes can be used: \u201cPickering-Assisted Catalysis\u201d (PAC) and \u201cPickering Interfacial Catalysis\u201d provide a smaller catalytic activity than the system made with the preformed nanoparticles (TOF0 divided by a factor three). This behavior can be ascribed to a partial ionic exchange providing a mixture of supramolecular clusters that contribute to the apparition of defects in the lamellar packing limiting the particle growth (see the previous section). In these conditions, the emulsions are rapidly destabilized into biphasic media leading to inefficient catalytic systems. Therefore, the neat correlation between the catalytic performance and emulsification of biphasic systems results clearly from a PIC effect providing a much larger water/oil interfacial area where the catalytic nanoparticles are localized. Moreover, it is relevant to note that these catalytic performances are obtained under mild conditions (65\u00b0C) and without stirring (except during the emulsification) indicating that the process is only driven by the catalytic cycle.In 2012, the dodecyltrimethylammonium/PW3[PW12O40] as water-soluble catalyst and H2O2 as oxidant , good yield (>99% in 30 min) and high selectivity (>99%) due to the promoted interfacial contact between the substrate and the catalyst. Such catalytic emulsion allows straightforward separation of the phases by centrifugation or by heating. In addition, these systems can be used without any organic solvents for liquid substrates. The other uses the formation of inclusion complexes between CDs and polyethylene glycol (PEG) leading to the formation of hydrogels that providing a good platform to obtain Pickering emulsions (see above). Rhodium-catalyzed hydroformylation of higher olefins, at commercially competitive rates, can be successfully performed in these emulsions . One is based on the extemporaneously formation and adsorption of CD/oil insoluble complexes at the water/oil interface without any synthesis (see above). This system has been used for the oxidation of olefins, organosulfurs, and alcohols using [Na]colloidal tectonics.\u201d The foundations, design, synthesis, and structure of these systems are illustrated with the aid of recent literature. The unifying goal is to learn how to use attractive forces to control molecular self-assembly and produce new colloidal systems with predetermined functions and/or properties closely related to the desired applications. This approach is clearly highly interdisciplinary as the scope of this research is well beyond the traditional frontiers of organic chemistry. 
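The turnover-frequency comparisons quoted in this section (an initial TOF roughly ten times that of the native POM, and a three-fold lower TOF when the nanoparticles are formed in situ) follow from the usual definition TOF = moles of product per mole of catalyst per unit time; the numbers below are purely hypothetical and only illustrate the arithmetic.

```python
def tof(mol_product, mol_catalyst, hours):
    """Turnover frequency in h^-1: product turnovers per catalytic site per hour."""
    return mol_product / (mol_catalyst * hours)

# Hypothetical epoxidation runs: 10 umol of POM catalyst, 1 h reaction time.
native_pom     = tof(mol_product=0.5e-3, mol_catalyst=10e-6, hours=1.0)  # 50 h^-1
self_assembled = tof(mol_product=5.0e-3, mol_catalyst=10e-6, hours=1.0)  # 500 h^-1 (~x10)
print(native_pom, self_assembled, self_assembled / native_pom)
```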
This methodology covers a wide field of investigation which could be applied in many domains such as cosmetics, pharmaceutical formulations, nanomaterials, catalysis, etc. In the case of the rational design of catalytic systems, the colloidal tectonics approach is highly compatible with some concepts of \u201cgreen chemistry\u201d (Figure The present manuscript develops a new approach that lies at the crossroads between supramolecular and colloidal chemistry which is called \u201c\u201d Figure . Moreove\u201d Figure . On the \u201d Figure .It is noteworthy that the colloidal tectonics approach is closely related to the phase-boundary catalysis (Nur et al., The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Molecular communications provide an attractive opportunity to precisely regulate biological signaling in nano-medicine applications of body area networks. In this paper, we utilize molecular communication tools to interpret how neural signals are generated in response to external stimuli. First, we propose a chain model of molecular communication system by considering three types of biological signaling through different communication media. Second, communication models of hormonal signaling, Ca Rapid progress of nano-technology enables the manufacturing of nano-machines for medical applications of body area network (BAN) . Thus faMC could be divided into wireless and wired types based on different properties of biological mechanisms. Existing research efforts generally focus on a single MC type and lack the comprehensive understanding of the complex biological activities in human bodies. Biologically speaking, the human body is a complex system composed of various biological substances including organs, tissues, cells, molecules, etc. It is widely accepted that, even for a simple biological activity, different substances interact with each other through exchange of power and substance. Moreover, processes of biological activities usually involve multiple types of biological signaling, which should be investigated jointly rather than independently. From the aspect of communication, proper integration of bio-systems promote the implementations of complex functionalities. For example, neurons and blood vessels form neurovascular unit, which performs more complex functions via communication pathways, including maintaining the normal activities of neural system, repairing the damaged neurons with the nutrition from the blood, and regulating the vasodilatation of blood vessels .In this paper, we develop a chain model of MC system, based on biological cells, and motivated by different chemical stimuli methods . The modytes see , and neuytes see .This paper is developed based on our earlier work , where aFirst, we develop a chain model of molecular communication based on biological signaling. The proposed model considers the biological interactions among hormone, Camodel of .Second, we propose an implementable amplify-and-forward relaying mechanism instead of decode-and-forward relaying as in . In addiThird, based on the work in , we examWe examined the literature on MC related to this work. Diffusion-based MC has attracted great attention due to its universality in biology. In , a matheAnother potential type of MC is Caling. In , a lineayzed. In , Ca2+ sidels. 
In , a channNeural signaling has been abundantly studied based on the knowledge of neural science. What concerns us most is the communication model in neural networks, and there are innumerable studies focusing on this area. In , a multiThe proposed MC system is illustrated in The first type of communication medium is the fluid, where hormones propagate around the astrocytes. There are two types of propagation mechanisms, determined by types and properties of the hormones and fluid medium. First, hormones perform approximative random walk towards any direction with low energy consumption in the passive diffusion mechanism. For example, steroid hormones propagate in the brain via lipid mediator and passive diffusion . Second,The second type of communication medium is astrocytes, which are specific cells connected with neurons. Astrocytes are closely related to neural activities, such as physically supporting structures of neurons, providing nutrition for neurons, and promoting the growth of neurons . Astrocy signals . Based oThe third type of communication medium is neuron, which processes and transmits information through the human body. Neural communication is an efficient and reliable biological signaling, exploiting both electrical and chemical methods. Different types of neurons respond to different external stimuli .As shown in The communication process is described as follows. Initially, Tx releases hormones in a controlled pattern, responding to external stimuli. Then, hormones diffuse in the fluid medium, some of which are absorbed by the astrocytes, inducing the oscillation of CaT. It is found that secretion pattern of hormones is usually with burst and not sustaining [T, a certain number of hormone molecules ith slot, the expression of OOK coding is given by,t. Hormone molecules propagate with random or directed walks, motivated by the passive or active diffusion mentioned in In Tx, a list of binary information is encoded with the number of released hormone molecules per time slot. One bit is transmitted in a symbol period, denoted by staining . Accordiioned in dCHdhormones , expressgiven by ,(4)un=\u222bAstrocytes are special star-shaped glial cells, physically close to neurons, and play a crucial role in regulation of neural activities. It is observed that astrocytes propagate Caignaling . In thisWe introduce a mathematical model of Carvations . The dynIn the above equations, x inside an astrocyte, the dynamic of the CaOn the spatial scale, the propagation of Ca of Ca2+ . ER is mquations \u201313). Eq. Eq2+ iossed by,dCCYNeural communication is a fast biological signaling, and its velocity varies due to different neuron types. Neural communication is also biologically reliable, as the neurons are able to eliminate errors in communications. In this section, we introduce a signaling model across two neighboring neurons, including the processes of neural firing, axonal transmission, gap junctional transmission, and postsynaptic response.One neuron is normally composed of different number of dendrites, axons, somas, and axonal terminals. In a communication, neural signals are first generated by ions through neural firing process in soma, transmitted along the axon fiber in the pattern of electrical impulses. Then, the electrical impulses trigger the release of neurotransmitters from the vesicles located in axon terminals. The neurotransmitters propagate to the neighboring neurons, until they are absorbed by the dendrites. 
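Before turning to the postsynaptic response, the OOK hormone-release step described above can be made concrete with a small numerical sketch: a slot of duration T either releases Q molecules (bit 1) or none (bit 0), and the fraction arriving at an absorbing receiver a distance d away within each slot is taken from the first-passage (inverse-Gaussian) distribution for one-dimensional diffusion with drift. This is a generic molecular-communication channel model with illustrative parameter values, not the paper's exact equations.

```python
import numpy as np
from scipy.stats import norm

def first_arrival_cdf(t, d, D, v):
    """P(first arrival <= t) for 1-D diffusion (coefficient D) with drift v
    towards an absorbing receiver at distance d (inverse-Gaussian CDF)."""
    if t <= 0:
        return 0.0
    s = np.sqrt(2.0 * D * t)
    return norm.cdf((v * t - d) / s) + np.exp(v * d / D) * norm.cdf(-(v * t + d) / s)

# Illustrative parameters (not taken from the paper).
Q, T = 1000, 1.0               # molecules per '1' slot, slot duration (s)
d, D, v = 10e-6, 1e-10, 5e-6   # distance (m), diffusion coeff (m^2/s), drift (m/s)
bits = [1, 0, 1, 1]

# Expected molecules absorbed in each slot, including inter-symbol interference
# from molecules released in earlier slots that arrive late.
received = np.zeros(len(bits))
for k, b in enumerate(bits):
    if not b:
        continue
    for j in range(k, len(bits)):
        p = first_arrival_cdf((j - k + 1) * T, d, D, v) - first_arrival_cdf((j - k) * T, d, D, v)
        received[j] += Q * p
print(np.round(received, 1))
```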
Finally, the postsynaptic response is induced that might trigger firing of neighboring neurons.The generation of neural signals is called neural firing, which is an all-or-none process. Various types of ions including CaNeural firing is a relaying process, in which information carrier varies from Caications . We assuith astrocyte, injected into the neuron at time To generate neural signals, there must be adequate Caey model , given b2+ flows . We emplt and location x. The boundary condition of the axonal transmission is given by,In the process of axonal transmission, action potential spikes transmit along the axon until they arrive in the axon terminals. During the process, various ions \u03b3m\u03b3The above equations can be solved using the Fourier method , namely Equation , and canWe show the process of gap junctional transmission in Let The release probability of a vesicle is related to ith slot. Equations kiPe_neuronThe neurotransmitters diffuse in the gap junctions until absorbed by the dendrite receptors of neighboring neurons. Due to the stochastic behavior of the vesicle release process, the absorbed time of neurotransmitters is random. The diffusion of neurotransmitters is slow; this process is normally ignored since gap junctions are extremely short . However, neurotransmitters can be absorbed with delay due to the limited absorbing ability of dendrites . GeneralK denotes the capacity of the receptor. G denotes absorbing time k customers. The blocking phenomenon is more serious for bigger t.In our model, the neurotransmitters are defined as the batched customers, which are independent with identical distributions. We adopt the esult of , the bloThe process of postsynaptic response includes the formation of postsynaptic potential and the decoding of neural signals.The neurotransmitters are absorbed by dendrites of neighboring neurons via various receptors, such as the AMPA and NMDA receptors. Neurotransmitters accumulate on the dendrite membrane, namely postsynaptic terminals. During this process, the postsynaptic potential of dendrite membrane is elevated to generate the waveform of output signals, which could be expressed by an alpha function ,(27)\u03dd estimation of the action potential spikes; and (2) extraction of the binary information.q denotes a stochastic variable in the signal amplification, following the Gamma distribution [In the first step, a waveform of output signal is denoted by ribution . To extrT. We say K denotes the Kolmogorov distribution, and ith slot. We obtain ith bit, if the condition in Equation , , and 32)32), the In this section, we deduce the expressions of channel capacity and transmission delay for the proposed communication system.Let p. According to the conclusions in [In the above equations, sions in , the expiven by,1Pi(1)=\u220fiquations and 34)Pi aith time slot, the mutual information is given by,We assume that, under the condition of Y, X is uncorrelated to Z. During the The mutual information between X and Y is further calculated as,Equation is furthTransmission delay of the proposed communication system is denoted by The estimation of x during a time period, ith connected astrocyte with the neuron, and The estimation of n et al. 
, the vary of \u03a9Ca , i.e.,.Simulation parameters were grouped according to hormonal, CaT varied from 1 s to 20 s, molecule life expectancy p of transmitting bit \u201c1\u201d varied between 0 and 1.Parameters of the hormonal signaling are listed in ,31,44: dT varied from 1 to 20 s.Parameters of the Caisted in : ratio oT varied from 1 to 20 s.Parameters of neural signaling is listed in ,36,38: fIn the simulation, we present how various signals behave in 40 s to transmit 4 bits of binary information. We analyzed the chain models by comparing a group of different signals. For example, the signals in We analyzed the behaviors of various signals by sending different bits. Comparing Channel capacity of the proposed communication system is jointly determined by the three types of signaling. Our target was to check the relations between channel parameters and the channel capacity.C changes with different p and C is elevated by increasing number of released molecules C reaches its peak if p is closer to 0.5. By comparing C, because a larger C changes with different p, p of maximizing C is not 0.5, which is different with traditional wireless communication. Thereby, the source coding of MC should be particularly designed. Increasing the number of astrocytes C, since the relaying capability of CaC is larger with a larger readily release pool T and T results in the growing T and a fixed number of molecules T is large. Therefore, we deduce that it is important to choose a proper T in the proposed model. Besides, we note that We checked the relations between various channel parameters and the transmission delay. T and T signifies the increased T impacts the hormonal signaling and CaIn this paper, we design a chain model for molecular communication to understand and interpret neural signaling. The proposed model contains three types of signals, where CaOur work provides a philosophy of exploiting complex biological reactions for communication engineering in body area network. Future work will address the expansion of chain modeling and molecular communications in more complex networks, and explore more practical and complex biological activities."} +{"text": "The mol\u00adecular conformations of the racemic title mol\u00adecules are almost identical. Each crystal structure features a short C\u2014H\u22efO hydrogen bond arising from the chiral carbon atom, which generates monochiral chains, although the overall structures are centrosymmetric. 10H8Cl3NOS), 1 and 3-(4-chloro\u00adphen\u00adyl)-2-tri\u00adchloro\u00admethyl-1,3-thia\u00adzolidin-4-one (C10H7Cl4NOS) 2, are structurally related with one atom substitution difference in the para position of the benzene ring. In both structures, the thia\u00adzolidinone ring adopts an envelope conformation with the S atom as the flap. The dihedral angles between the rings [48.72\u2005(11) in 1 and 48.42\u2005(9)\u00b0 in 2] are very similar and the mol\u00adecules are almost superimposable. In both crystal structures, C\u2014H\u22efO \u2018head-to-tail\u2019 inter\u00adactions between the chiral carbon atoms and the thia\u00adzolidinone oxygen atoms result in infinite monochiral chains along the direction of the shortest unit-cell parameter, namely a in 1 and b in 2. 
C\u2014H\u22ef\u03c0 inter\u00adactions between the thia\u00adzolidinone carbon atom at the 4-position and the phenyl ring of the neighboring enanti\u00adomer also help to stabilize the packing in each case, although the crystals are not isostructural.The title compounds 2-tri\u00adchloro\u00admethyl-3-phenyl-1,3-thia\u00adzolidin-4-one (C Their synthesis was first reported as two of only three known 2-alkyl thia\u00adzolidin-4-one compounds with thio\u00adglycolic acid and with a mechanism to remove the water byproduct \u2005\u00c5 in both structures. The dihedral angles between the thia\u00adzolidinone and phenyl rings are 48.72\u2005(11) in 1 and 48.42\u2005(9)\u00b0 in 2. The C1\u2014N1 and C1\u2014S1 bond lengths are 1.445\u2005(2)\u2005\u00c5 and 1.816\u2005(2)\u2005\u00c5, respectively, for structure 1 and 1.4471\u2005(18)\u2005\u00c5 and 1.8181\u2005(16)\u2005\u00c5, respectively, for structure 2. The N\u2014C\u2014S bond angle is found to be 106.52\u2005(12)\u00b0 in structure 1 and 106.23\u2005(10)\u00b0 in structure 2. Overall, the molecular structures of both are almost exactly superimposable of the thia\u00adzolidinone ring and the phenyl ring of the symmetry-related enanti\u00adomer are also observed in both structures (Tables 11 is triclinic and 2 is monoclinic).Both extended structures exhibit C\u2014H\u22efO \u2018head-to-tail\u2019 inter\u00admolecular inter\u00adactions between the chiral carbon atom C1 and the thia\u00adzolidinone oxygen atom Tables 1 and 5 \u25b8 et al., 2016et al., 2014et al., 2013To date, there have been no reported X-ray structures of substituted 2-tri\u00adchloro\u00admethyl-3-phenyl-1,3-thia\u00adzolidin-4-ones or the unsubstituted parent compound. However, there are a number of studies for structures containing aromatic moieties at the 2- and 3-positions of the thia\u00adzolidin-4-one ring : Yield 43%; m.p. 447\u2013448\u2005K; IR: 1687\u2005cm\u22121; 1H NMR: \u03b4 7.1\u20137.5 , 5.72 , 3.77\u20133.96 ; 13C NMR: \u03b4 171.65 (C=O), 138.45 (N\u2014Ar), 129.17, 127.98, 126.98, 103.18 (CC13), 77.69 (C2), 33.08 (C5). Analysis calculated for C10H8NOSC13: C, 40.40; H, 2.72; N, 4.72; Cl, 35.86. Found: C, 40.60, H, 2.74; N, 4.60; Cl, 35.44.2-Tri\u00adchloro\u00admethyl-3-phenyl-1,3-thia\u00adzolidin-4-one (2): Yield 20%; mp 456\u2013458\u2005K; IR: 1685\u2005cm\u22121; 1H NMR: \u03b4 7.11\u20137.50 , 6.04 , 3.80\u20133.92 ; 13C NMR: \u03b4 171.61 (C=O), 136.96 (N\u2014Ar), 133.78 (C\u2014CI), 129.46, 127.92, 103.06 (CCI3), 77.51 (C2), 32.65 (C5). Analysis calculated for C10H7NOSC14: C, 36.47; H, 2.13; N, 4.25. Found: C, 36.65; H, 2.12; N, 4.04.2-Tri\u00adchloro\u00admethyl-3-(4-chloro\u00adphen\u00adyl)-1,3-thia\u00adzolidin-4-one = 1.5Ueq(C) for the methyl group and Uiso(H) = 1.2Ueq(C) for the remaining H atoms.Crystal data, data collection and structure refinement details are summarized in Table\u00a0310.1107/S2056989018013257/hb7769sup1.cifCrystal structure: contains datablock(s) I, 2, 1. DOI: Click here for additional data file.10.1107/S2056989018013257/hb77691sup2.cmlSupporting information file. DOI: Click here for additional data file.10.1107/S2056989018013257/hb77692sup3.cmlSupporting information file. DOI: 1868299, 1868298CCDC references: crystallographic information; 3D view; checkCIF reportAdditional supporting information:"} +{"text": "Holding a low social position among peers has been widely demonstrated to be associated with the development of depressive and aggressive symptoms in children. 
However, little is known about potential protective factors in this association. The present study examined whether increases in children\u2019s prosocial behavior can buffer the association between their low social preference among peers and the development of depressive and aggressive symptoms in the first few school years. We followed 324 children over 1.5\u00a0years with three assessments across kindergarten and first grade elementary school. Children rated the (dis)likability of each of their classroom peers and teachers rated each child\u2019s prosocial behavior, depressive and aggressive symptoms. Results showed that low social preference at the start of kindergarten predicted persistent low social preference at the start of first grade in elementary school, which in turn predicted increases in both depressive and aggressive symptoms at the end of first grade. However, the indirect pathways were moderated by change in prosocial behavior. Specifically, for children whose prosocial behavior increased during kindergarten, low social preference in first grade elementary school no longer predicted increases in depressive and aggressive symptoms. In contrast, for children whose prosocial behavior did not increase, their low social preference in first grade elementary school continued to predict increases in both depressive and aggressive symptoms. These results suggest that improving prosocial behavior in children with low social preference as early as kindergarten may reduce subsequent risk of developing depressive and aggressive symptom. To check whether our hypothesized effects apply equally to boys and girls, sex differences in the tested associations were also examined.N\u2009=\u2009324, 54% boys; Mage\u2009=\u20095.10, SD\u2009=\u20090.37 at baseline). The majority (95.2%) of children were Dutch/Caucasian, 3.1% were Turkish, 0.3% were Surinamese, and 1.4% belonged to other ethnic groups.Data were collected from 18 schools in the north and east of the Netherlands as part of a longitudinal project on children\u2019s social and emotional development. Schools were recruited through the Dutch Municipal Health Service (MHS), and the first 18 schools willing to join were included in the longitudinal project. This study included all children who were in kindergarten at the beginning of the project . The second and third assessments were conducted in the fall (T2) and spring (T3) of the first grade in elementary school. Before each assessment, parents were informed about the procedure and measurements, and were given the opportunity to decline their children\u2019s participation in the study. Children were also informed in class and had the opportunity to decline their participation at any time during the study. Almost all invited children (99.9%) participated. After each assessment, a small gift was given to the children as a token of appreciation for their participation. The study proceedings were approved by the Medical Ethical Review Board of the VU Medical Centre.2 \u2009=\u20090.59, p\u2009=\u20090.44). However, dropout children had higher ratings of depressive and aggressive symptoms at baseline, compared to children with complete data.Due to illness, grade retention, and moving, 26 children (8.02%) were absent at the time of follow-up assessments. 
Compared to children who stayed in, those who dropped out did not differ in sex , approximately half of the children received a preventive intervention program (Promoting Alternative Thinking Strategies (PATHS), Kusch\u00e9 & Greenberg, Low Social Preference was assessed with peer nominations. At each time of assessment, children were asked to nominate an unlimited number of classmates whom they liked most and whom the liked least . For each child, the rating at T1 was done by the kindergarten teacher and the ratings at T2 and T3 were done by the first grade elementary school teacher. Assessments at T1 and T3, but not T2, were included in model analyses in the present study, to prevent associations due to shared-method effects (ratings at T2 and T3 share the same rater) and thus increase the validity of the results by using different raters for the repeated assessments of depression and aggression. Depressive and aggressive symptoms were rated on a 5-point scale ranging from never applicable to often applicable. For depressive symptoms, three items were measured: \u201cIs unhappy or depressed\u201d, \u201cIs indifferent, apathetic and unmotivated\u201d, and \u201cDoes not take pleasure in activities\u201d. Cronbach\u2019s alpha for the depressive symptoms scale was 0.73 at T1, and 0.81 at T3. For aggressive behavior, five items were measured: \u201cThreatens other people\u201d, \u201cStarts fights\u201d, \u201cPushes other children or puts them at risk\u201d, \u201cBullies, or is mean to others\u201d, and \u201cAttacks others physically\u201d. Cronbach\u2019s alpha for the aggressive behavior scale was 0.92 at T1 and 0.91 at T3. For both variables, high scores indicate high levels of symptoms. Latent constructs were used for both depressive symptoms and aggressive behavior in the structural equation model to account for potential measurement error and improve model fit.Change in prosocial behavior from T1 to T2 was used as a buffering variable in the model analyses of the present study. This variable was calculated based on the level scores of prosocial behavior at T1 and T2. The levels of prosocial behavior at T1 and T2 were measured by the prosocial behavior scale from the Social Experiences Questionnaire-Teacher Report obtained by regressing the level score of prosocial behavior at T2 on its level score at T1. The URS not only adjusts for measurement errors , Tucker-Lewis index and prosocial behavior (\u03b72\u2009=\u20090.03), and higher levels of aggressive behavior (\u03b72\u2009=\u20090.06) than girls. There were no gender differences on depressive symptoms and change in prosocial behavior. Repeated ANOVAs showed no effect of intervention on low social preference, depressive symptoms and aggressive behavior (in Table d\u2009=\u20090.59).The means and standard deviations of all variables are presented in Table\u00a0in Table . HoweverBivariate correlations of all studied variables are presented in Table\u00a0B\u2009=\u20090.08, SE\u2009=\u20090.03, \u03b2\u2009=\u20090.15, p\u2009=\u20090.01) and aggressive behavior at T3 were both significant. Also, significant indirect pathways were found from low social preference at T1 to depressive symptoms at T3 and aggressive behavior at T3 via low social preference at T2. 
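The change score used as the moderator is an unstandardized residual score (URS): the T2 prosocial rating is regressed on the T1 rating and the residual is retained, so that positive values indicate a larger-than-expected increase in prosocial behavior. A minimal sketch of that computation, using simulated ratings rather than the study's teacher-report data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 324                                     # sample size as in the study
prosocial_t1 = rng.normal(3.0, 0.6, n)      # simulated teacher ratings at T1
prosocial_t2 = 0.6 * prosocial_t1 + rng.normal(1.2, 0.5, n)   # simulated T2

# Regress the T2 level on the T1 level (ordinary least squares with intercept).
X = np.column_stack([np.ones(n), prosocial_t1])
beta, *_ = np.linalg.lstsq(X, prosocial_t2, rcond=None)
predicted_t2 = X @ beta

# Unstandardized residual score: positive values mean the child's prosocial
# behavior increased more than expected from the T1 level.
urs_change = prosocial_t2 - predicted_t2
print(f"mean URS = {urs_change.mean():.3f} (approx 0 by construction)")
print(f"SD of URS = {urs_change.std(ddof=1):.3f}")
```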
The results indicate that children\u2019s low social preference at the beginning of kindergarten predicted their low social preference one year later in the first grade, which consequently predicted increases in symptoms of depression and aggression from kindergarten to the end of first grade elementary school.The baseline model Fig. a containB\u2009=\u20090.16, SE\u2009=\u20090.10, \u03b2\u2009=\u20090.11, p\u2009=\u20090.10) or intervention effect on this path.To test our hypotheses, we included change in prosocial behavior from T1 to T2 in the model. Corresponding to the two hypotheses, the change in prosocial behavior score was added as a predictor of low social preference at T2, and also as a moderator on the paths from low social preference at T2 to depressive and aggressive symptoms at T3, respectively. With respect to the first hypothesis, results nor aggressive symptoms . Also, no significant intervention effect was found in the pathway toward depressive symptoms . However, the pathway toward aggression was different for intervention and control group children . A breakdown of this effect showed that for control group children, there was no significant modifying effect of change in prosocial behavior in the link between low social preference at T2 and aggressive behavior at T3. However, for children in the intervention group, change in prosocial behavior significantly modified the link from low social preference at T2 to aggressive behavior at T3 .To test our second hypothesis, change in prosocial behavior from T1 to T2 was added as the moderator of the paths from low social preference at T2 to depressive and aggressive symptoms at T3, respectively. Results indicated that change in prosocial behavior from T1 to T2 significantly modified the link from low social preference at T2 to the development of depressive symptoms as well as of aggressive behavior from T1 to T3. Further analysis showed no significant sex difference in these effects on the link from peer preference to depressive (SD above the mean), average increase (mean) and below-average increase or decrease in prosocial behavior (1 SD below the mean), respectively, in the prediction of low social preference at T2 on the development of depressive and aggressive symptoms and who followed the average trend (mean), but not significant for those who had a high score (+1 SD).The present study examined the effect of change in prosocial behavior on the link between low social preference and the development of depressive and aggressive symptoms in a community sample of 324 children from kindergarten to the end of first grade elementary school. In line with our hypotheses, we found that poor peer appraisal in kindergarten predicted poor peer appraisal in early elementary school, which in turn predicted increase in symptoms of depression and aggression. We also found that increase in prosocial behavior during kindergarten improved children\u2019s appraisal among peers at the beginning of the first grade. Also, change in prosocial behavior during kindergarten modified the effect of low social preference on the development of depressive and aggressive symptoms from kindergarten to the end of the first grade in elementary school. Children\u2019s low social preference at the beginning of first grade elementary school was not associated with the development of depression and aggression when children\u2019s prosocial behavior increased more than average level . 
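The moderation effects reported above are probed with simple slopes: the association between low social preference at T2 and later symptoms is re-estimated at the mean of the change-in-prosocial-behavior score and at one SD above and below it. The sketch below illustrates that logic for an ordinary regression with an interaction term; the study itself fitted a structural equation model with latent outcomes, and the data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 324
low_pref = rng.normal(0, 1, n)              # low social preference at T2 (centred)
change_prosocial = rng.normal(0, 1, n)      # URS change in prosocial behavior
# Simulated outcome: the preference effect weakens as prosocial change grows.
symptoms = (0.3 * low_pref - 0.1 * change_prosocial
            - 0.2 * low_pref * change_prosocial + rng.normal(0, 1, n))

# Fit y = b0 + b1*pref + b2*change + b3*pref*change by OLS.
X = np.column_stack([np.ones(n), low_pref, change_prosocial,
                     low_pref * change_prosocial])
b, *_ = np.linalg.lstsq(X, symptoms, rcond=None)

sd = change_prosocial.std(ddof=1)
for label, m in [("+1 SD", sd), ("mean", 0.0), ("-1 SD", -sd)]:
    # Simple slope of low social preference at this level of the moderator.
    slope = b[1] + b[3] * m
    print(f"prosocial change {label:>5}: slope of low preference = {slope:.3f}")
```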
However, when children with a low social position failed to increase their prosocial behavior during kindergarten, their prolonged low social preference predicted increases in depressive symptoms and aggressive behavior from kindergarten to the first grade in elementary school.Based on our findings, we suggest that the protective effect of increase in prosocial behavior functions in two parts. First, increase in prosocial behavior seems to improve children\u2019s general social position among peers. Our results showed that increase in prosocial behavior during kindergarten were linked to increases in social preference at the beginning of the first grade. This finding indicates that behaving in an extra prosocial manner increased children\u2019s social preference level. This is in line with previous findings showing that prosocial behavior helped in improving children\u2019s interpersonal relationships (Caputi et al. Moreover, our findings show that increasing prosocial behavior prevents children staying at a low sociometric position among peers, which in turn helps to reduce their risk of developing depressive and aggressive symptoms. Previous studies showed that prolonged poor peer preference predicted depressive and aggressive symptoms (Burks et al. We also examined whether boys and girls experienced the same positive effect from increase in prosocial behavior in terms of the development of psychopathologic symptoms. Results showed no significant sex differences on either of our hypothesized effects. This suggests that the buffering effect of increase in prosocial behavior is not gender-specific. Despite the sex differences in levels of peer problems and psychopathologic symptoms, no differences were found in the effect from change in prosocial behavior to social preference and the link from social preference to depressive and aggressive symptoms. This is in line with several previous studies that failed to find significant sex differences in the longitudinal associations among these structures (Dodge et al. The present study found a significant buffering effect of change in prosocial behavior on the pathway to aggression only among children who were in the PATHS intervention condition. We have to be cautious in our interpretation of this effect because the implementation fidelity of the program was fairly low. In fact, it would need to be replicated to draw stronger conclusions on effects of the intervention. However, PATHS might provide a possible explanation as to why change in prosocial behavior prevented social preference to link to aggression among children in the intervention group. The PATHS program aims at teaching children skills to regulate their behavior (Kusch\u00e9 & Greenberg, This study has important theoretical implications. The protective effect of increase in prosocial behavior and its mechanism were proposed in the ostracism theory Williams . PreviouFindings of the present study also have implications for future researches and preventive interventions. In view of the buffering effect on the links from peer problems to psychopathology, our findings offer support for the importance of improving prosocial behavior among children who are at risk in the peer context. Identifying characteristics of children, or of the children\u2019s environment that make them less capable of changing their prosocial behavior is warranted. 
Future studies could add relevant information by identifying child characteristics including social cognitive features such as social beliefs (Chen et al. The present study has a number of limitations. First, the majority of the children came from a Dutch/Caucasian ethnic background, which questions whether our findings can be generated to groups with more culturally diverse populations. Prosocial behavior is seen as a personal decision in western culture, while in collectivistic cultures it is usually interpreted as an obligatory choice Chen . CulturaIn conclusion, this study, with a longitudinal design in a real social interaction context, found that increase in prosocial behavior as early as in the kindergarten can protect children from developing prolonged low sociometric appraisal among peers, which further protects them from developing depressive and aggressive symptoms. Meanwhile, stronger prosocial adjustments might be required to prevent aggression compared to depression. The findings offer support for the importance of improving prosocial behavior in peer context in terms of a buffering effect on the links from peer problems to psychopathology."} +{"text": "Distal phalanx fractures of the toes are common injuries. The majority of them are treated conservatively with good outcome. We present the case of a painful non-union fracture of the distal phalanx of the 4th toe in a 60-year-old female patient with symphalangism of the 4th and 5th toes. She underwent surgical fixation of the fracture with concomitant inter-phalangeal joint (IPJ) arthrodesis for better stability. A transverse dorsal incision was made just distal to the IPJ to allow preparation of both the fracture site and IPJ. Fibrous tissue at the fracture non-union site was removed and the opposing surfaces drilled with a 0.88mm K-wire. Cartilaginous tissue at the IPJ was removed and similarly drilled with the 0.88mm K-wire. Stabilisation was achieved with a percutaneous headless compression screw. Radiographic union was achieved and the patient had resolution of symptoms 16 weeks after the surgery. The patient continued to be symptom-free at one year follow-up. This is the first case report of a surgically treated symptomatic non-union of distal phalanx fracture of a lesser toe in the literature. The vast majority of these fractures heal with non-operative management, typically with good outcomes. We were unable to find any previous report on the surgical management of such fractures that had gone into symptomatic non-union. We present a case of a painful non-union fracture of a 4th toe distal phalanx in a patient with symphalangism of the 4th and 5th toes which was stabilised with a headless compression screw.Distal phalanx fractures of the toes are common injuries, forming about 9% of fractures treated in the primary care settingOur patient was a 60-year-old female who sustained the injury when a trolley ran over her right 4th toe. It was a closed injury with no significant deformity. Radiographs showed a distal phalanx oblique fracture . IncidenHer past medical history included hyperlipidaemia and lumpectomy for breast cancer. She was also a hepatitis C carrier. She was on regular medication with Rosuvastatin 5mg nightly for the management of hyperlipidaemia. She was a non-smoker.As the toe alignment and soft tissue condition were satisfactory, the patient was initially treated conservatively with buddy splinting and advised to mainly weight-bear on the heel to avoid loading the forefoot. 
Serial radiographs at subsequent reviews did not show any signs of union, even after conversion to a short walker boot after three months. At five months after the initial injury, the patient was still symptomatic, especially with prolonged ambulation. The patient worked as a nurse and spent a significant amount of time on her feet at work. Outside of work, the patient was an avid trekker. Having failed conservative management and given her high functional demands, she was counselled regarding open reduction and internal fixation of the fracture. In view of the short proximal fragment, stable fixation of the distal phalanx alone was anticipated to be technically difficult, and she was advised on the option of fusion across the inter-phalangeal joint (IPJ) to improve stability, to which she consented.Subsequently, the patient underwent surgical fixation of the 4th toe distal phalanx fracture with concomitant arthrodesis of the IPJ. A transverse dorsal incision was made just distal to the IPJ to allow preparation of both the fracture site and IPJ surfaces . The fraThe toe was reduced with manual axial compression and stabilised with 2 x 0.88mm K-wires. A stab incision was made at the tip of the toe to allow passage of the K-wires and screw. The K-wires were first inserted in a retrograde fashion via the fracture site out through the tip of the toe, and then passed antegrade through the IPJ. Compression was achieved with a cannulated 20mm percutaneous headless compression screw . The screw had a diameter of 2.5mm at the tip and 2.8mm at the tail. There was already partial resorption of the proximal fragment of the distal phalanx, resulting in a triangular configuration in the axial plane. Due to this configuration, the overall toe alignment was in slight valgus but there was no impingement against the adjacent toes in both flexion and extension.Skin closure was with synthetic, non-absorbable monofilament 4-0 suture. The patient was kept on heel weight bearing with a forefoot-offloading shoe. Her surgical site sutures were removed at two weeks and the wound healed without any complication. She was taken off the forefoot off-loading shoe at ten weeks and allowed progressive weight-bearing as tolerated on her forefoot. At review six months after surgery, she reported that there was no pain in her toe. She was coping well at work, and had returned to her trekking activities without any problems. The patient also reported no limitations in her choice of footwear. Radiographs of her toe showed union of the fracture . At finaThe majority of distal phalanx fractures of the toes are treated conservatively. The threshold for satisfactory alignment and conservative management includes: angulation of less than 20 degrees in the sagittal plane, less than 10 degrees in the axial plane and less than 20 degrees in the coronal plane .2. These fractures usually heal with good outcomes. In rare cases where healing does not occur and the fracture goes into non-union, surgery can be offered to patients who are symptomatic.Fractures with satisfactory alignment can be stabilised with buddy splints and immobilised with rigid-sole shoes3. Due to the small size of the distal phalanx fragments and narrow soft tissue envelope, fixation options were limited to K-wires (0.8 to 1.2mm), cortical screws (1.3 to 1.5mm) and headless compression screws (2.5mm). 
Cortical screw fixation was associated with greater IPJ range of motion compared with K-wire fixation, but required implant removal in half of the cases due to the prominence of the screw head3. There are also newer expandable implants such as the X-Fuse\u00ae and Smart Toe\u00ae which have shown promising results in recent reports5.There is a paucity of literature on the surgical management of this condition in the toes. We tapped on the literature involving distal phalangeal fractures of the fingers to guide our surgical planningWe decided on the headless compression screw as we could obtain adequate bone stock by performing a concomitant IPJ fusion which would provide the most stable fixation in our high-activity patient, compared to K-wires and cortical screws. The headless compression screw provided double compression across both the fracture and fusion sites. The cannulated system also allowed easy instrumentation and passage of the screw. This surgical approach enabled us to achieve successful fracture union and IPJ arthrodesis, with satisfactory functional outcome for our patient.This case report illustrates the successful treatment of a non-united and painful fracture of the distal phalanx of the toe, using a headless compression screw to achieve stable fracture fixation with fusion across the IPJ."} +{"text": "Introduction: Thompson and Austin Moore prostheses have been commonly used in hemiarthroplasties for displaced femoral neck fractures. There has been considerable debate about which of these prostheses is preferred. The purpose of this meta-analysis was to compare historical data for clinical outcomes of cemented Thompson and uncemented Austin Moore hemiarthroplasty in displaced femoral neck fractures.Methods: We searched Medline via PubMed, Cochrane Central, Scopus, Ovid and Web of Science for relevant articles up to February 2019. The included outcomes measured were hip function, hip pain, implant-related complications, surgical complications, reoperation rate and hospital stay. The data were pooled as risk ratio (RR) or mean difference (MD) with 95% confidence interval (CI) between the two compared groups in a meta-analysis model.Results: Ten studies with a total of 4378 patients were included in the final analysis. The pooled RR showed that the Thompson group was associated with a lower incidence of postoperative hip pain , lesser reoperation rate , lesser intraoperative fractures , but a longer operative time in comparison to the Austin Moore group. The effect estimate did not favour either group in terms of hip function, periprosthetic fractures, prosthetic dislocations, wound infection, mortality and hospital stay.Conclusion: Evidence shows that Thompson hemiarthroplasty is better than Austin Moore hemiarthroplasty in terms of hip pain, reoperation rate and intraoperative fractures. Whereas the postoperative hip function is equivalent, these results could be considered when assessing the outcomes in modern hips. Femoral neck fractures are among the most serious and frequently occurring injuries in the elderly population, with a high risk of mortality and associated complications . HemiartIn 1940, Austin Moore implanted the first vitallium prosthesis to replace the proximal femur, then changed to a straight-stemmed prosthesis in 1950 , but oveDespite unsatisfactory clinical results, Thompson and Austin Moore undoubtedly have played an important role and remain in regular use within developed countries \u20139. 
A conWe designed the current systematic review and meta-analysis to evaluate the clinical outcomes between cemented Thompson and uncemented Austin Moore hemiarthroplasties for displaced femoral neck fractures in the elderly patient population to resolve this controversy.All steps of this systematic review were performed in accordance with the Cochrane handbook of systematic reviews and meta-analysis .: \u201cHemiarthroplasty\u201d, \u201carthroplasty\u201d, \u201cfemoral neck fractures\u201d, \u201cintracapsular hip fractures\u201d, \u201ccemented\u201d, \u201cuncemented\u201d and \u201ccementless\u201d. No restrictions by language, country, or publication date were employed. We also searched the bibliography of eligible studies for relevant articles.We searched Medline via PubMed, Scopus, EBSCO, Cochrane library and Web of Science for relevant articles, using the following keywordsWe included studies that compared patients with displaced intracapsular femoral neck fractures fixed using cemented Thompson hemiarthroplasty or uncemented Austin Moore hemiarthroplasty. We excluded studies that used prostheses other than Thompson or Austin Moore implants. Studies that involved patients with previous fractures of the same hip or pathological fracture were also excluded. Non-competitive studies, animal studies, duplicate references, case reports, conference abstracts and studies from which data could not be reliably extracted were excluded. We conducted eligibility screening in two steps: step (1) title and abstract screening for matching to the inclusion criteria and step (2) full-text screening for eligibility for meta-analysis. Disagreements were resolved through consensus after discussion.We included studies that reported the following outcomes: postoperative hip function, postoperative pain, reoperation and revision rate, implant-related complications , surgical complications (including postoperative fractures and postoperative infection), operative details (including operative duration and intraoperative blood loss), hospital stay, medical complications and mortality.Three independent reviewers extracted the author name, year of publication, study design, number of participants in each group, age, gender, type of intervention (including the type of prosthesis), study period, follow-up period and relevant outcome data. Another reviewer resolved disagreements.For RCTs, two independent reviewers used the Cochrane risk of bias (ROB) assessment tool of the Cochrane handbook of systematic reviews of interventions 5.1.0 . For obsp\u00a0<\u00a00.05 was considered statistically significant. Missing standard deviation (SD) data were calculated from the equations provided by Altman [We calculated the risk ratio (RR) and 95% confidence intervals (CI) for dichotomous data, and mean difference (MD) or standardized mean difference (SMD) and 95% CI for continuous data. A value of y Altman . Data anI2 statistic. Significant statistical heterogeneity was indicated by Q statistic p-value less than 0.1 or by I2 more than 50%. In case of significant heterogeneity, a random effects model was employed. Otherwise, the fixed effects model was used. We conducted subgroup and sensitivity analyses.Heterogeneity was evaluated by the forest plot methods and measured by Q statistic and According to Egger\u2019s and colleagues , 21, pubOur search retrieved 1166 unique citations. Fifty-one articles were retrieved and screened for eligibility to the meta-analysis. 
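As a compact illustration of the pooling strategy described above, log risk ratios can be combined with inverse-variance weights, Cochran's Q and the I² statistic quantify heterogeneity, and a random-effects model is adopted when I² exceeds 50% or the Q p-value falls below 0.1. The 2x2 counts below are hypothetical and are not the extracted trial data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study counts: .
studies = [(12, 150, 25, 148), (8, 210, 20, 205), (5, 90, 9, 95)]

log_rr, var = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    log_rr.append(np.log(rr))
    var.append(1/a - 1/n1 + 1/c - 1/n2)       # variance of the log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                   # fixed-effect (inverse-variance) weights
pooled = np.sum(w * log_rr) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])

q = np.sum(w * (log_rr - pooled) ** 2)        # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100
p_q = 1 - stats.chi2.cdf(q, df)

print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
print(f"Q = {q:.2f} (p = {p_q:.2f}), I^2 = {i2:.0f}% -> "
      f"{'random' if (i2 > 50 or p_q < 0.1) else 'fixed'}-effects model")
```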
Of them, 41 articles were excluded and 10 articles were included in the present meta-analysis. Ten studies , 22\u201327 (p\u00a0=\u00a00.9). No substantial evidence of heterogeneity was noted , Three studies , 10, 26 p\u00a0<\u00a00.0001). There was no significant heterogeneity , Three studies , 10, 23 p\u00a0=\u00a00.02). Pooled studies were homogenous , Three studies , 23, 25 p\u00a0<\u00a00.0001, I2\u00a0=\u00a035,60%; p\u00a0=\u00a00.18), while the two groups were comparable in terms of periprosthetic fractures , dislocations of prosthesis and wound infection , Supplementary material.The pooled RR showed that the Thompson group (1323 patients) was related to a lower incidence of intraoperative fractures than the Austin Moore group (1366 patients) . Pooled studies were homogenous , Supplementary material.Two studies , 24 provp\u00a0=\u00a00.02, I2\u00a0=\u00a090.6%; p\u00a0=\u00a00). The MD showed no statistically significant difference between the Thompson and Austin Moore prosthesis groups in terms of hospital stay , Supplementary material.The MD showed that the Thompson group (327 patients) had a longer operative time than the Austin Moore group (326 patients) . Combined studies were homogenous , Supplementary material.Two studies reported on medical complications , enrolling 337 patients in the Thompson group and 445 patients in the Austin Moore group. The pooled RR did not favour either group . Combined studies were homogenous , Supplementary material.Four studies , 23, 25 Femoral neck fracture is one of the leading causes of mortality in the elderly . While hOur study showed that no significant difference existed in hip function between the two groups. Besides, our analysis did not favour Thompson or Austin Moore hemiarthroplasty with regard to mortality figures or medical complications. Our results came in line with another systematic review that compared cemented versus uncemented prosthesis , 31. TheWith regard to operative outcomes, our results indicated that the Thompson group had a lower incidence of surgical complications; however, this was not statistically significant. These findings are in agreement with two previous systematic reviews that compared cemented versus uncemented prosthesis , 35. OneConcerning postoperative outcomes, our results showed that the Austin Moore had a higher reoperation rate than the Thompson technique. This was affected by several factors including but not limited to: postoperative pain (which we found to be higher in the Austin Moore group) and prosthetic loosening . There wAccordingly, we conclude that the cemented Thompson group was associated with a lower incidence of hip pain and reoperation rate. However, no evidence for a decisive detrimental effect exists. This was in agreement with eight of our studies \u201325, 27, We conducted a comprehensive database search that yielded a great number of high-quality RCTs and observational studies, used a rigorous screening process that allowed us to focus only on the studies that met our selection criteria and were appropriate to our research question. The large sample size (4378 patients) may allow for data generalization application. This is due to our inclusion of observational studies as well as RCTs. Some of our results showed significant heterogeneity, which was best resolved using subgroup and sensitivity analyses. We used the Cochrane Collaboration tool to assess the risk of bias of the included RCTs. For observational studies, we used the Newcastle Ottawa scale. 
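For the continuous outcomes above (operative time and hospital stay), the mean difference and its 95% confidence interval are obtained from each group's mean, SD and sample size. A brief sketch with hypothetical operative-time figures, not values extracted from the included studies:

```python
import numpy as np

def mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Mean difference (group 1 minus group 2) with a 95% CI,
    using the normal approximation common in meta-analysis software."""
    md = m1 - m2
    se = np.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return md, (md - 1.96 * se, md + 1.96 * se)

# Hypothetical operative times (minutes) for a single study:
md, (lo, hi) = mean_difference(m1=62.0, sd1=14.0, n1=160,   # cemented Thompson
                               m2=48.0, sd2=12.0, n2=165)   # uncemented Austin Moore
print(f"MD = {md:.1f} min, 95% CI {lo:.1f} to {hi:.1f}")
```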
The results of this study are subject to limitations inherent to any meta-analysis based on pooling of data from different trials with various study protocols, different baseline patient characteristics and definitions for efficacy/safety outcomes. The number of studies in each outcome was low, and this could have an impact on the interpretation of the results. Further, only published data were used.Available evidence demonstrates that Thompson hemiarthroplasty is better than Austin Moore hemiarthroplasty in terms of hip pain, reoperation rate and intraoperative fractures. In institutions where these prostheses are still used, our results recommend the utilization of Thompson hemiarthroplasty.Figure A.1. Forest Plot of risk ratio (RR) of intraoperative fractures with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.2. Forest Plot of risk ratio (RR) of periprosthetic fractures with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.3. Forest Plot of risk ratio (RR) of prosthetic dislocations with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.4. Forest Plot of risk ratio (RR) of wound infection with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.5. Forest Plot of risk ratio (RR) of surgical complications with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.6. Forest Plot of mean difference (MD) of operative time with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.7. Forest Plot of mean difference (MD) of hospital stay with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.8. Forest Plot of risk ratio (RR) of medical complications with 95% confidence interval, comparing between Thompson and Austin Moore groups.Figure A.9. Forest Plot of risk ratio (RR) of mortality with 95% confidence interval, comparing between Thompson and Austin Moore groups.https://www.sicot-j.org/10.1051/sicotj/2019031/olmSupplementary material is available at The authors have no conflicts of interest to declare."} +{"text": "The public goods game is a famous example illustrating the tragedy of the commons . In this game cooperating individuals contribute to a pool, which in turn is distributed to all members of the group, including defectors who reap the same rewards as cooperators without having made a contribution before. The question is now, how to incentivize group members to all cooperate as it maximizes the common good. While costly punishment presents one such method, the cost of punishment still reduces the common good. The selfishness of the group members favors defectors. Here we show that including other members of the groups and sharing rewards with them can be another incentive for cooperation, avoiding the cost required for punishment. Further, we show how punishment and this form of inclusiveness interact. This work suggests that a redistribution similar to a basic income that is coupled to the economic success of the entire group could overcome the tragedy of the commons. Individuals either chose to contribute to the common good (cooperate) with a single payment, or withhold their investment (defect) of said investment. The common good can experience a growth in value due to synergy, which consequently benefits everyone, also the defectors. 
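To make the free-rider logic above concrete, each cooperator's payoff is the equal share of the multiplied pool minus the one-unit contribution, while a defector receives the share without having paid. The short sketch below uses an illustrative group size and synergy factor to show that defectors always out-earn cooperators within a group, even though the group total is largest under full cooperation.

```python
def pgg_payoffs(n_cooperators, group_size, r):
    """Per-player payoffs in a one-shot public goods game.
    Each cooperator contributes 1 unit; the pool is multiplied by the
    synergy factor r and shared equally among all group members."""
    share = r * n_cooperators / group_size
    payoff_cooperator = share - 1      # cooperators pay their contribution
    payoff_defector = share            # defectors keep their unit
    return payoff_cooperator, payoff_defector

k, r = 5, 3.0                          # illustrative group size and synergy factor
for n in range(k + 1):
    pc, pd = pgg_payoffs(n, k, r)
    total = n * pc + (k - n) * pd
    print(f"{n} cooperators: cooperator {pc:+.2f}, defector {pd:+.2f}, "
          f"group total {total:+.2f}")
# Defectors always out-earn cooperators in the same group, yet the group
# total is maximized when everyone cooperates (here 5 * (3 - 1) = +10).
```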
In the end, tragically, defectors will always receive a higher reward than the cooperators, even though a higher total gain could be achieved if everyone would cooperate in the first place.The 3, green beard effects4, or costly punishment of defectors7. Similarly, we know that in games played spatially cooperation often dominates2 compared to well mixed situations. Besides spatial play, the easiest method to incentivize cooperation in humans8 seems to be punishment (for a more detailed description of this rather wide term see Raihani and Bshary7). Punishment, which in its ability to drive cooperation by direct or indirect reciprocity9, can take many forms10 and differs between humans and other organisms11. Here we will consider costly punishment, which not only imposes a cost on the defector, but also requires the punishing agent to come up for the cost of punishment. As this form of punishment is an established form of driving cooperation it serves as the basis for further comparisons.As such, this model has been extensively studied to describe social systems, in which for example taxes represent the common good, and tax evaders would be defectors of that game. Obviously, we are interested in methods which encourage everyone to cooperate, overcoming the individual benefit gained from defecting. Many different solutions have been identified which promote cooperation, such as reciprocityIndividuals engaging in the public goods game selfishly optimize their own rewards, while neglecting the common good. One concept, also derived from nature, that might be able to overcome this issue is group-level selection, where the payoff of the individual is not only dependent on its own choices, but also of that of the group. Evolution is normally selecting the individuals of a population according how well fit they are to their environment, as they produce the most viable offspring. However, organisms often form groups to take advantage of mutual benefits that such behavior grants. Fighting off enemies by swarming, division of labor, or other forms of collaboration come to mind. If not only individuals experience the benefit, but the group as a whole enjoys reproductive success over another group, we speak of group-level selection.Dictyostelium discoideum12 and its life cycle illustrates the difference between individual and group-level selection. In its amoeba stage, cells can replicate individually, and evolution occurs on the level of the individual. When food becomes sparse, cells aggregate and first form a mobile slug which later culminates into a fruiting body. The group of cells forming the fruiting body can now experience the rewards of group-level selection when wind disperses the spoors. The individual spores in turn become amoebas again, and so forth. Group-level selection in the strictest sense requires all members to be selected and being allowed to propagate offspring into the next generation.The slime mold 13, the situation becomes more complicated. The group receives a benefit driving cooperation14, however, the group does not strictly reproduce as a whole. Mechanisms like kin selection, multilevel selection, and inclusive fitness come into play15. While these are distinct concepts, they are often used interchangeably. Kin selection would require the members of a group to also be selected by their genetic distance, which we do not consider here. Inclusive fitness on the other hand refers to a much larger concept. 
In predator prey dynamics the fitness of the prey is dependent on the fitness of the predator leading to the \u201cfit when rare\u201d phenomena for example. Meanwhile, multilevel selection refers to situations where selection occurs on a much higher level than the individual, which has been identified as one reason for the evolution of multicellularity16. Regardless, all these mechanisms in one way or the other affect cooperation3 but remain highly debated concepts17.When organisms receive benefits from hunting together, for example hyenas18. The degree of selfishness, or its opposite inclusiveness, can define how much of the rewards are distributed equally. The pooling of resources to allow for group-level selection in the public goods game has been introduced earlier19 but not its fractional redistribution. Further, groups here still do not reproduce as a group but remain individual replicators. The fraction of payoffs that can be pooled or remain at the individual can be dialed. We call this fraction the degree of selfishness inclusiveness and consequently groups that are inclusive will be called inclusive groups.Here specifically, we are interested in how humans might be able to overcome the tragedy of the commons. Even when working in groups, and payoffs are dependent on synergistic behavior of the group, individuals still reproduce individually, precluding a group-level mechanism from taking effect. However, resources within the group can be redistributed. Instead of assuming individuals to be selfish, we can assume or force the group members to be inclusive, and thus have group members share rewards with each other. A fully inclusive group would pool all rewards individually obtained, and then shares them equally amongst its members. A group of only selfish members would not pool their rewards. It is easy to imagine a mixed model between those extremesconditional basic income. While it might be intuitive that sharing the rewards and costs drives cooperation, we need to first confirm the intuition, and secondly there might be a critical point at which the system swings from defection to cooperation. Thus, we will show how different degrees of inclusiveness lead to cooperation, and show what role punishment plays in this context.We think that the hypothetical mechanism of introducing inclusiveness into the PGG could be implemented in social systems as well. Instead of an unconditional basic income, one could offer a basic income linked to the economic success of the social group: 6. Each individual in a group of k participants, can either cooperate by making a contribution of 1 unit to a common pool or defect and withhold that contribution. The sum of all contributions in the common pool is multiplied by a synergy factor r and and then divided equally among all participants, i.e.\u00a0cooperators and defectors alike. In the case 2, i.e.\u00a0providing each player with the option to impose a punishment fine 6We analyze the public goods game following Hintze 2015In our approach we additionally introduce the parameter n cooperators generates a group-level payoff, i.e.\u00a0net earnings, ofWe first analyze a public goods game with our parameter In this setting the payoff of a collaborator isEquations and 2) 2) can b20 and6. 
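Because the payoff equations are not reproduced legibly here, the following sketch encodes one plausible reading of the redistribution scheme described above: a fraction of every member's public-goods payoff, equal to the degree of inclusiveness, is pooled and shared equally, while the remainder stays with the individual. The group size, synergy factor and inclusiveness values are illustrative only; the point is that the defector's advantage shrinks as inclusiveness grows and vanishes under full pooling.

```python
def redistributed_payoffs(n_coop, k, r, inclusiveness):
    """Payoffs after partial pooling of rewards within the group.
    A fraction `inclusiveness` of every member's PGG payoff is pooled and
    shared equally; the rest stays with the individual. (A plausible
    reading of the redistribution scheme, not the paper's exact equations.)"""
    share = r * n_coop / k
    pc, pd = share - 1, share                      # raw PGG payoffs
    group_mean = (n_coop * pc + (k - n_coop) * pd) / k
    pc_final = (1 - inclusiveness) * pc + inclusiveness * group_mean
    pd_final = (1 - inclusiveness) * pd + inclusiveness * group_mean
    return pc_final, pd_final

k, r, n_coop = 5, 3.0, 4                           # one defector in the group
for g in (0.0, 0.5, 1.0):
    pc, pd = redistributed_payoffs(n_coop, k, r, g)
    print(f"inclusiveness {g:.1f}: cooperator {pc:.2f}, defector {pd:.2f}, "
          f"gap {pd - pc:.2f}")
# The defector's advantage shrinks linearly with inclusiveness and disappears
# entirely when all payoffs are pooled (full group-level sharing).
```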
Any defecting individual within a group of We now extend our analyses to the case of punishment following6 we abbreviate 6) whereas we obtain our original expression as well as the level of individual payoff 22 was reconstructed, the final 100 generations from all replicate runs were averaged to determine the point of convergence. We find the predictions about the critical points without punishment confirmed (see Fig. r. As such, cooperation has it easier to evolve the more inclusive groups become. The gene for punishment, as it is neither costly nor rewarding (To model a game played without punishment we set mentclass2pt{minimTo confirm the effects of punishment and inclusiveness in the public goods game, six different combinations of cost and fine were tested. Again 100 replicate evolutionary experiments per parameter combination were run and analyzed as before.r to be necessary for cooperation to evolve. Similarly, we also observe that evolved strategies do not punish when they also do not cooperate. Consequently, when they do cooperate, the punishment gene starts to drift (6. When all strategies become cooperators and no one punishes, punishment does not happen, and thus no cost is applied, explaining why the punishment gene drifts under those conditions.As predicted by the mathematical model, we find the critical point at which cooperation starts to emerge to be dependent on the degree of inclusiveness entclass1pt{minimaIn the case where punishment is not costly anymore (r and degrees of selfishness Further inspection of the factors determining the critical point see Eq. suggest see Fig. . For selWe introduced a new way to redistribute the payoff in a group of players participating in the public goods game. The degree to which the resources are distributed depend on the degree of inclusiveness of the group. In the case of purely selfish group members, if the synergy between the players is low, defection becomes the optimal strategy. In the case of fully inclusive groups on the other hand, the total payoff the group receives dictates cooperative behavior. The important question answered here is whether resources can be distributed differently and in such a way that individual actions still affect the payoff of the individual while simultaneously coupling the payoff of the individual to the accomplishments of the group. The redistribution of resources according to the degree of individualism Costly punishment has been identified as an alternative factor that also promotes cooperation. We found that to be true, and also that costly punishment has a synergistic effect when combined with higher levels of group-level selection. However, we also found that the degree of selfishness seems to be a much better way to promote cooperation. When tragedy of the commons can already be achieved at 23. As such, our basic income concept is much more a conditional basic income, as it is conditional on the success of the whole.A similar argument is believed to be an important driver for the economy: people are most motivated when their efforts translate into individual gains. Here we showed, that full cooperation, and thus the remedy to the In case a higher degree of individualism is desired, punishment can be used as well to promote cooperation, and thus higher total payoffs. Interestingly, as soon as In conclusion, redistribution of resources in such a way that all group members benefit from the success of the group directly can make the tragedy of the commons obsolete. 
While there are many ways to facilitate this, a conditional basic income that is coupled to the gross domestic product could be one way, even though many other mechanisms of redistribution can be imagined.Supplementary Information."} +{"text": "Purpose: The aim of the study was to investigate perceptions of staff about the promotion of physical activity (PA) in selected group residences of Hong Kong (HK), some of which had experienced a multi-component PA program. Method: Focus group interviews with nineteen staff members from four group homes (two of which received the program) were conducted. Findings: A SWOT analysis provided important insights into residential staff views about key influences on the quality of PA programs for residents with intellectual disabilities (ID). Positive (strengths and opportunities) and negative (weaknesses and threats) influences were identified. They were associated with characteristics of residents, staff, and group residence. Increasing age and low motivation are impediments to PA engagement of adults with ID. Staff competence and prior unsuccessful experience in promoting PA are also implicated. Conclusion: The PA program quality is mediated by the quality of staff interpersonal interactions with their clients and their commitment in encouraging such adults with ID to join and persistent in PA as well as staff seeking external resources and support as well as using initiative to adapt PA promotion activities in their specific group residential context. Hong Kong (HK) is a highly urbanized modern city, easily recognized for the density of high-rise buildings that occupy only 25% of the land mass . The opeEvidence shows that adults with ID have generally increased vulnerability to poor health and physical illness, such as obesity and chronic diseases . To combn = 7444) of which 31 are for mild and moderate ID persons PA program, we need our staff to lead, guide, and motivate the residents to do the activities. To give exaggerated praises, not saving [with-holding] verbal cheers, we hope our residents will be focused into doing the exercise.(GHD)Social influence and positive reinforcement were also seen as means to help residents become more active. In regard to the social interaction between staff and residents, the executive officer of the GHD stated:Staff also recognized the potential contribution of daily routines and tasks to PA engagement. As embedded in the residents\u2019 daily life, adults with ID had to walk up and down stairs and do some simple household chores. As commented by a staffer (GHB), \u201cIn general, they can do [physical activities]; our principle is to allow them to do as much as possible, like washing dishes, folding clothes\u201d.The residential context has potential in either facilitating or impeding the PA levels of the residents. Having sufficient space to incorporate an outdoor area was a group home strength. At NHs, there are outdoor physical activities such as basketball (GHC) and jogging training for GHD\u2019s younger-age residents within its outdoor sport facility. In addition, residential policy to offer a range of physical activities had positive influences on PA promotion. The offered activities included daily morning exercise, regular walking trips to supermarket or community, occasional weekend hiking trips, short-term group programs . Daily after-breakfast video-exercise is offered. [It\u2019s] 20 min in duration and is mandatary because dormitory rooms are locked. The majority \u2026 move [with] the video. 
This policy was a contrast to their past failed experience to organize a walking program between bath time and dinner time. But residents hid themselves in their dormitory rooms.(GHB)We have organized an exercise video-session on Wednesday night for all residents for a month now, and [it] will be run as a regular event. We hope this session (involving group dancing) \u2026 [will be a] social influence so that they will move more.(GHD)At GHD, those residents who were usually PA refusers were encouraged \u201cto walk to supermarket because walking is still a type of exercise. It\u2019s once a week and will give priority to those who stay at dormitory during weekends\u201d (GHD).The two and a half hours between dinner and bedtime was another opportunity to offer physical activities. This practice was specific to GHD where residents were younger (20\u201330 years old); activities included Taekwondo (particularly well received by residents who reportedly liked Kung-fu drama) and a group fitness training program. GHD purposely selected a public swimming pool because it required 40 min travel time and gave an opportunity for their residents to get into a different community on the weekend. There was seasonal variation in planning weekend activities: swimming was offered in summer and hiking in autumn.Establishing institutional policy about daily routines was seen as essential in promoting residents\u2019 habitual PA. In order to increase PA, GHC required their residents to use stairs to access the dinning floor. GHA\u2019s daily morning exercise was mandatary, held at a time when dormitory bedrooms were locked. In addition, institutional review provided opportunity for reflection on the effectiveness of current PA programs and for future planning (GHD): Staff recommended a weekly \u201cexercise for all\u201d program prompted by a movement video.Staff from GHA and GHB participated in a multi-component PA program led by the authors. The program was viewed as having fun with participants (residents and staff) enjoying it. When asked about the effectiveness of the program, a staffer commented, \u201cI feel that it can build up [residents] habit of being physically active\u201d. Another reflected, \u201cI can say they felt novel and freshness towards physical exercise. Also having music in the lessons made them more immersed into the exercise\u201d. The professional development program had broadened staff knowledge about instructing a group exercise class. The use of a warm-up routine, music, and toys was considered instrumental in prompting movement among the participants. Staff also perceived three information sessions as being useful because they observed that those residents of higher cognitive ability had openly enjoying them and appropriately responded to instructions. Moreover, issuing stickers as a reward for correct answers encouraged the residents. In terms of one-hour staff training, personnel indicated that it was useful professional development especially in providing ideas for motivating games and using equipment in muscle strengthening exercise.Unfortunately, the staff perceived many drawbacks to their residents being physically active.\u201cMost of them just like sitting and watching TV\u201d. However, \u201cSome are willing to do physical exercise, [but] one-third won\u2019t, even when forced\u201d (GHA). 
Other groups variously reported about client disinterest in PA and their habitual physical inactivity; for example, (a) \u201cThose who don\u2019t like to walk won\u2019t go out no matter what. They would [rather] spend time watching TV in their rooms or playing card games\u201d. (GHC with outdoor facilities); (b) \u201cThey love to watch TV. It\u2019s not about games [offered] being uninteresting; it\u2019s just that they choose to watch TV. It\u2019s just their ingrained habit and it takes time to change\u201d (GHD); (c) \u201cIt\u2019s hard to ask those who are physically inactive to join a sport program; they will only do indoor physical activities with air-conditioning. They are not interested in outdoor ones\u201d (GHD); and (d) \u201cTen residents joined gateball training for 8 weeks. Afterwards, most of them said it was hard. Therefore, the program was discontinued\u201d (GHD).The most frequently cited reasons for resident non-participation in PA were their apparent disinterest in getting moving and low self-efficacy in physical exertion. When mandatory PA sessions were imposed, disinterested residents would refuse. They preferred sedentary activities; the compulsory PA was perceived as \u201chard to do\u201d and \u201cgetting tired\u201d. Staff members from GHB and GHD specifically noted that their residents only wanted to watch TV after work. The following comment from all group homes captures clients\u2019 embedded attitudes towards PA/inactivity: Another issue related to the characteristic of the residents was a concern of their advancing age, a major threat towards PA promotion as shared by those three group homes that had been longer in operation. Aging residents have multi-morbidities such as reduced mobility or increased joint pain, and these health conditions would possibly have led to further immobility and could further impede the rehabilitative progress from diseases.In a follow-up to the PA intervention, staff were asked about such disinterest and if people with ID and their cognizance about the personal benefits of a physical active lifestyle and if they could apply it their daily life. One staff member (GHC) said that \u201cthey could not . We asked them the benefits of doing physical exercises, no one replied\u201d. Similarly, when asked about using peer support in prompting PA, staff (GHC) commented that people with ID could not comprehend the meaning of peer encouragement in PA participation, and \u201cthey might perceive verbal persuasion among peers as a type of reprimand\u201d.Failed experiences of staff in promoting PA could be an impediment to future health promoting action. For example, two staff at GHB trialed a PA program for six obese residents. Feedback from activity leaders was that it was extremely difficult to operate; the program was subsequently terminated. Involved staff were disheartened with the failed experience, as inferred in the statement:Previously we had a yearly plan to set for residents to do physical exercise after work, say for around 30 min. But the outcome was that we didn\u2019t see anyone showing up [for the activity].(GHB)The residential setting itself was often perceived as a drawback in PA promotion. Although lack of physical space of OH settings could be solved by taking residents outside; positive intentions by staff were hampered by restricted staff numbers with implications for resident safety. At GHA, there was \u201climited space for PA. 
Although there is a basketball court outside the dormitory, we don’t have the manpower [sic] to take them outside. We worry that they may run away. Therefore, we cannot take too many people there”. Similarly, staff at GHC reported “not daring to organize hiking; safety is the first priority”. Opportunities are external factors having a positive impact on PA promotion; these included parental support, external funding, and external support. Group homes sought parental support by organizing parental educational talks. As commented by staff (GHB), the group home had increased the priority of PA promotion because of parental advocacy, which emerged from their program review. Although generally anxious about safety, parents at GHD also agreed to their resident child joining PA programs. External funding came from two sources: government subsidy and charity funds. Since 2015, residential homes have been given an additional government subsidy to hire program/activity workers for rehabilitative purposes in order to combat health issues associated with the aging residential population. All four homes were linked to the sport-related department of the host university through a student internship program whereby exercise-leading trainees offered programs to ID residents and available staff. As indicated by GHB, they planned to apply to the host university for funding to conduct staff training, with these stated goals: “I think my staff would like to know how to set a program, how to play or have the experience of playing”. In contrast to the identified weaknesses that could be resolved within the institution, threats were those elements impacting on PA promotion which might be viewed as external to the home. These included the availability of qualified staff, aging residents with older-age parents, the parental-home environment, and the surrounding physical environment of the group home. Specific only to GHA was the manager mentioning difficulty in hiring staff for care-taking roles because caring for aging residents involved physically demanding work. Along with their residents, the current caretakers are also aging. As staff members retired, management had difficulty in getting personnel to take up these duties. Indeed, with the aging population of HK, hiring appropriate staff could become a wider issue. For GHA, GHB, and GHC, having been longer in operation and having more older-age residents means that “their parents are in more advanced-age, hence parents don’t join any leisure activities” (GHC). On the other hand, when residents go back to their parental home on weekends, “their parents will give them a lot of foods, perceived that residents received inadequate foods at group home” (GHD). Hence, staff commented that individualized exercise programs offered to the obese residents might not be successful in weight reduction because of the influence of parental feeding practices. Obesity is prevalent among aging residents with ID, and individualized or group PA programs were offered to the residents. Group home staff members considered that aging residents tend to spend time “sitting” because of their declining health conditions and aging-related diseases. GHB was located in a densely populated urban area and had no public recreation park or activity center within a 250 m radius. Generally, open space for hiking was over 3 km away. This was a hindrance for this group home in using neighborhood physical activity facilities. 
Another major hindrance for GHB was that their physical activity room had been converted to additional dormitory accommodation to cater for new admissions.From the analysis of staff perceptions of PA among their clients, it was evident that staffers were aware of positive (strengths and opportunities) and negative (weaknesses and threats) influences on PA engagement. These influences are associated with characteristics of residents, staff, and residential home.Motivation, obesity, age, and interests were interactive influences on PA involvement of these HK residents. Consistent with previous research, resident low motivation and low self-efficacy towards PA, preference of sedentary lifestyle studies, and feeling of exhaustion after PA ,37,38,39Unsuccessful prior experience of staff in conducting PA programs was implicated as a significant barrier to PA promotion where a small group exercise training program could not be sustained because of reported participants\u2019 low motivation and other perceived obstacles. Professional knowledge of, and personal competence, in activity skills can be either a strength or a weakness in staff promotion of PA. Thus, some staffers when frustrated gave up and no longer offered PA opportunities. As congruent with a previous study others, During the PA program implementation at intervention sites (GHA and GHB), staff of these homes had first-hand observation of how to lead a group exercise class incorporating music, toys, and slow/fast tempo of movements. These strategies heightened participants\u2019 interest. This experience could have strengthened professional knowledge in PA program content selection. However, the withdrawal of the intervention program has the potential to degrade this strength to a weakness through possible re-emergence of staff feelings of incompetence to take up the instructional role of conducting group exercise classes. Thus, the intervention program may be unsustainable upon the withdrawal of the intervention program team. (See As with western-based literature ,11, our In terms of professional practice, these HK findings indicate specific health behavioral techniques that can be adopted to break the residents\u2019 habit of sedentariness, low self-efficacy, and low motivation and to eliminate weaknesses to PA engagement, which are generally similar to Western findings. As evidenced from our study, we recommend the following strategies to consolidate strengths and capitalize opportunities for quality PA program outcomes. Firstly, use the reward of social recognition to increase the residents\u2019 extrinsic motive; for example, set-up an honors board depicting photos of the most improved residents based on their weekly behavioral assessment or visual reward such as a badge. Secondly, use music and rhythmic movements in heightening participants\u2019 interest in doing PA. Exercises comprising physical movements can be adapted from diversionary therapy , which hA SWOT analysis provided important insights into influences on the quality of PA programs for residents with ID, included characteristics of resident, staff, residential institution, and external resources. Their increasing age and low motivation are impediments to PA engagement of adults with ID. Staff competence and prior unsuccessful experience in promoting PA are also implicated. 
The PA program quality is mediated by staff quality of personal interaction with their clients, their commitment in encouraging adults with ID to join, and persistence in exercise engagement as well as staff seeking external resources and support. These results have elucidated best practice in PA promotion among Chinese residents with ID in the east Asian context of HK. In order to promote active lifestyle of such residents, removing or minimizing weaknesses and threats concerning personal and environmental barriers and capitalizing strengths and opportunities for enabling environments are essential in the provision of quality PA programs. Future PA intervention programs should be conducted to evaluate the effectiveness of exercises, incorporated with strategies of enhanced motivation by rewarding participation, provision of active staff support, and varied game-like creative movement.One methodological strength of this study was the use of a SWOT analysis to determine salient factors influencing the provision of PA programs for adults with ID. Although different studies have identified barriers and facilitators affecting low level of PA in people with ID , to our"} +{"text": "Whilst the issues around early termination of randomised controlled trials (RCTs) are well documented in the literature, trials can also be temporarily suspended with the real prospect that they may subsequently restart. There is little guidance in the literature as to how to manage such a temporary suspension. In this paper, we describe the temporary suspension of a trial within our clinical trials unit because of concerns over the safety of transvaginal synthetic mesh implants. We also describe the challenges, considerations, and lessons learnt during the suspension that we are now applying in the current COVID-19 pandemic which has led to activities in many RCTs across the world undergoing a temporary suspension.There were three key phases within the temporary suspension: the decision to suspend, implementation of the suspension, and restarting. Each of these phases presented individual challenges which are discussed within this paper, along with the lessons learnt. There were obvious challenges around recruitment, delivery of the intervention, and follow-up. Additional challenges included communication between stakeholders, evolving risk assessment, updates to trial protocol and associated paperwork, maintaining site engagement, data-analysis, and workload within the trial team and Sponsor organisation.Based on our experience of managing a temporary suspension, we developed an action plan and guidance for managing a significant trial event, such as a temporary suspension. We have used this document to help us manage the suspension of activities within our portfolio of trials during the current COVID-19 pandemic. The early termination of randomised controlled trials (RCTs) for planned reasons such as statistically based stopping rules, loss of funding, poor recruitment, or futility is well documented \u20136. HowevSeveral factors can trigger such a suspension: the most common being around a safety concern, or a perceived shift in the risk/benefit balance . Internawww.ukcrc-ctu.org.uk) , Health Services Research Unit, University of Aberdeen) of a temporary trial suspension.A search of the literature revealed very little guidance on how to manage a temporary suspension. We are aware of some accounts through \u2018word of mouth\u2019 from other trialists. 
In this paper, we aim to document the experiences and lessons learnt within our UK Clinical Research Collaboration (UKCRC) Clinical Trials Unit surgical RCT, VUE , 10 asked all Scottish NHS Health Boards to consider suspending the use of TV synthetic mesh implants (mode=pdf ). This wVUE incorporates two parallel RCTs evaluating the surgical options for either uterine or vault prolapse which involved or potentially involved the use of TV synthetic mesh implants as part of the surgical intervention Table\u00a0. In VUE,VUE was actively recruiting participants and had full approval from an NHS Research Ethics Committee (REC) when the temporary trial suspension occurred. At the time of the suspension in 2014, there was no new emerging evidence within scientific journals or medical literature nor updates or changes to the UK NICE clinical guidelines ).https://www.independent.co.uk/news/uk/home-news/vaginal-transvaginal-tvt-sling-the-mesh-scandal-nice-guidelines-health-watchdog-nhs-sui-incontinence-a8111721.html [The Scottish Government\u2019s request to consider suspending the use of TV synthetic mesh in incontinence and POP surgery in Scotland was initially widely interpreted as \u2018a ban\u2019 (721.html ), causinThe temporary suspension involved three Scottish trial sites and incorporated suspending recruitment of any new potential participants, as well as suspending randomisation of eligible participants who had already consented to take part. The randomisation line was closed for the Scottish sites, and for trial participants about to receive surgery , we advised the local clinical team and local R&Ds to adhere to their local NHS governance decision on the use of TV synthetic mesh .The trial team ensured that the key stakeholders , Trial Steering Committee (TSC), and independent Data Monitoring Committee (iDMC)) together with the local clinical team were notified and kept up to date throughout the suspension period. Emergency trial oversight committee meetings of the iDMC and TSC were organised to discuss the impact or potential impact on the trial, and the ongoing need for robust evidence. The Sponsor also revised their risk assessment.It is worth noting that the suspension occurred over a relatively short period of time from the initial Scottish Government request to the lifting of the suspension.Sponsor made the decision to lift the suspension and restart trial activity in Scotland once they were satisfied that all recommendations, questions, or concerns from the key stakeholders had been met. The Sponsor also confirmed their clinical trials insurance continued to cover activity within the trial.The Scottish sites were made aware of the outcome of the suspension. For these sites, this also involved the need to explain that the lifting of the temporary suspension related only to the research and not to the Scottish Government\u2019s request to suspend the use of TV synthetic mesh in all SUI and POP surgeries . This meant that within VUE, the Scottish sites could perform POP surgery using TV synthetic mesh, provided fully informed consent was obtained.Finally, the randomisation line was reopened on 3 July 2014 for those Scottish sites impacted by the temporary suspension, and the key stakeholders informed that the suspension had been lifted.An event that triggers a temporary suspension can happen at any phase of the trial . 
Therefore, consideration needs to be given as to who and what is affected by a suspension; be it screening and recruitment ; randomisation; delivery of the intervention or the length, type, or frequency of participant follow-up; scheduled meeting of study oversight committees (TSC and iDMC); progress reports to the funder; and also scheduled monitoring and site visits for quality assurance purposes. Indeed, pretty much any of these trial activities, that constitute a multicentre pragmatic trial, can be adversely influenced by a suspension.We identified a number of key challenges associated with managing a temporary suspension of unknown duration. These challenges, together with considerations and insights as to how these may be overcome, are described below:The first task identified in managing an event such as a trial suspension is to establish a task force and nominate a \u2018Significant Event Lead\u2019. In addition to the Lead, the key team members of the task force included the chief investigator, trial manager, quality assurance manager, and CTU director. Identifying the key stakeholders and creation of distribution lists was also important, as well as prioritising who would require initial contact and standardising the communication outputs.As this was our first experience of a temporary suspension, we had no formal process to guide us. We therefore created a \u2018guidance for significant major events\u2019 document (including delegation of specific actions) as well as an \u2018events and actions timeline\u2019 template queries . The \u2018events and actions timeline\u2019 evolved in real time but was also updated and renewed over time to ensure it was an accurate reflection of what happened, and when.We highly recommend documenting all the information related to a \u2018major event\u2019 (the temporary suspension) in such an \u2018events and action timeline\u2019, which could include any immediate corrective/preventive action, and define responsibilities. This timeline document can then evolve over the lifetime of the event and be updated accordingly.Bringing the task force together following such an event is extremely important to ensure consistency and accuracy in documenting the lessons learnt, as well as updating or making changes to processes (if required) or to ensure preparation for any future similar events.Maintaining an accurate record of all events is extremely important. We set up a dedicated (and secure) folder to provide a suitable facility for the retention of all relevant documents that staff involved could access. These records could be held electronically, as hardcopy or as a hybrid system. A hard copy or screenshot of any relevant web articles should be retained as these may subsequently become unavailable online over time.Timely, consistent, and accurate communication is key, both within the trial team and to stakeholders. This can be particularly important (as was the case for us) given the potential speed of development of ongoing events and decisions, to ensure everyone could confidently rely on the information given out by the trial office as being correct and authoritative.The communication needs of all stakeholders had to be met. The stakeholders included the Sponsor, Insurer, Funder, REC, oversight committees, sites, clinical staff, and participants. Careful consideration of what should be shared was necessary. 
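As an aside, the "events and actions timeline" template recommended above is not reproduced in this paper. The snippet below is only a hypothetical illustration of the kind of fields such a log might hold; the column names, dates and entries are invented for the example and are not the authors' template.

```python
import csv

# Hypothetical fields for an events-and-actions timeline; adapt to the trial's own needs.
fields = ["date", "event", "immediate_action", "responsible", "stakeholders_informed", "status"]
rows = [
    {"date": "2014-06-XX", "event": "External safety announcement identified",
     "immediate_action": "Task force convened; Significant Event Lead nominated",
     "responsible": "Trial manager", "stakeholders_informed": "Sponsor; CI", "status": "closed"},
    {"date": "2014-06-XX", "event": "Randomisation line closed for affected sites",
     "immediate_action": "Sites notified by email and telephone",
     "responsible": "Trial office", "stakeholders_informed": "Sites; TSC; iDMC", "status": "closed"},
]

# Write the evolving log to a shared, access-controlled location so it can be updated in real time.
with open("events_actions_timeline.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```

A simple tabular log of this kind can later be sorted by date to reconstruct exactly what was decided, when, and by whom.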
Updates needed to be consistent, and the amount and complexity of detail varied depending on the stakeholder in order not to overwhelm them with too much detail . These communications were coordinated by the \u2018Significant Event Lead\u2019.The mode of communication also required consideration. We used a variety of communication methods to convey the necessary information to all stakeholders such as email, telephone, and meetings as well as use of the site specific (non-public) trial websites (where logins/access could be monitored to verify access to the changing information).Differing interpretations of the Scottish Government\u2019s request to suspend the use of TV synthetic mesh, as well as media reporting of the event, were particularly challenging, as many Scottish Health Boards interpreted the request differently. For example, it was unclear from the initial report if it was only for incontinence or POP or if it was specifically related to the TV use of mesh in any procedure.In addition, we realised different stakeholders also had different interpretations of what a temporary suspension actually meant , and risk aversion varied amongst these stakeholders.In order to communicate clearly, we needed to clarify (to ourselves and ultimately to the sites) what a temporary suspension actually meant; in this case, no recruitment or randomisations could go ahead. We prioritised closing the randomisation line ahead of informing sites.The initial steps in restarting the trial, after lifting the suspension, were primarily to ensure the various stakeholders were happy to continue their involvement. The Sponsor, iDMC, TSC, and Funder needed to be kept fully informed throughout, particularly with any potential impact on the study processes. Agreement from all stakeholders was necessary before the decision to lift the temporary suspension was made.The media and patient groups can also generate FOI requests Table\u00a0 to the tThe risk assessment of the trial may need to be reviewed to consider if any changes are required, for example, updates to the safety reporting processes. Uncertainty around the suspension may also result in the risk being reclassified by the Sponsor, potentially involving increased reporting or monitoring requirements.Our trial underwent an updated risk assessment and was also reclassified by our Sponsor from a moderate to a high risk trial, resulting in increased monitoring and revision of serious adverse event reporting to that in line with drug trials.The trial protocol or paperwork may also need revised/amended depending on the reason for a temporary suspension. To make this process as smooth as possible, we recommend any changes to the study protocol or paperwork involve the key stakeholders to ensure everyone has an opportunity to discuss and agree to any required changes.The reason(s) and length of a suspension are likely to influence site engagement. For example, if the suspension is due to safety concerns, this could lead to a bigger impact than say drug supply issues.Our experience taught us that some sites were happy to reopen and recommence all trial activities whilst others were more reserved. Some sites will reopen but not continue to recruit participants due to reticence on the part of the principal investigator (PI)/surgeon and/or the hospital clinical director. 
Others will open but remain inactive and not recruit any further participants due to a decline in potential and/or willing participants, or sites may remain closed whilst waiting for further guidance/updates, etc.A temporary trial suspension may also impact potential new sites and sites in set up. Potential sites may then decline to take part in the trial or delay their participation indefinitely.To maintain engagement of site staff, we prioritised other trial-related activities during the suspension, such as data checks, ongoing training, and regular communication.Engaging the local research networks to ensure site staff are not moved on to other projects may also be important in retaining the sites\u2019 engagement.Further training for site staff or trial marketing should be considered if relevant, depending on the reason and length of suspension.A temporary trial suspension may impact recruitment of trial participants. It may be that some sites recruit more slowly as they either no longer prioritise the trial locally or continue to have ongoing concerns following the suspension.Once the suspension was lifted, recruitment was closely monitored to establish if it had slowed down. The impact of the suspension and reopening of the trial varied across the three sites; this was related to site and team engagement (as described above). The impact on recruitment on our trial was difficult to evaluate.The randomisation system we use was designed such that randomisation could be stopped immediately in individual or all sites. When the suspension was lifted, the randomisation line was reopened, and we recommend it is re-tested to ensure there are no problems. There may also be an opportunity to add key information to the randomisation system alerting sites to the suspension, particularly if it is an international trial and time zones make it difficult to communicate with individual sites in a timely manner.For our suspension, we deferred to the local NHS policy for treatment using TV synthetic mesh implants. During this suspension, that meant no participants received their randomised surgical procedure until after the suspension was lifted. Considerations for the delivery of the trial intervention are essential. In non-surgical studies, decisions around the ongoing delivery of the intervention may be more complex\u2014for example, if trial participants are taking study drugs in the community.Retention and follow-up of trial participants may also be impacted. Depending on the disease area/trial intervention, one impact of a suspension may be that more frequent participant follow-up is implemented. An increase or decrease on participant questionnaire response rates may also be experienced as heightened awareness may influence whether participants choose to engage. Again, this was difficult to evaluate in our trial.The temporary suspension may have an impact on trial data (amount being collected/integrity/changes prior to the suspension). Consideration should be made to how this will be handled in the data analysis. Differences may be observed in baseline characteristics pre- and post-suspension, and/or there may be an effect on the outcome data. It may be appropriate to do sensitivity analyses to evaluate the data before and after the suspension to reassure the findings are robust. 
If this is done, it is also important to update the statistics analysis plan (SAP).Not surprisingly, the impact on the trial office will likely involve an increase in workload, possibly resulting in a shift in the types of tasks that require to be undertaken, along with the usual day-to-day work.This increased workload may stem from addressing participant queries, Sponsor concerns, and changes to study documentation/paperwork and addressing concerns and queries from sites, key stakeholders, and media (through FOI requests, Table\u00a0An unanticipated impact on the trial office workload came from the media and patient groups. This was in the form of increased queries and the need to be more vigilant and consistent in our response.Whilst a trial suspension may be short in duration, the impact on the study should not be underestimated. VUE was temporarily suspended for a relatively short period, but the impact continues. Given the recent COVID-19 pandemic, most trials will be experiencing a suspension of some or all of their activities.In this paper, we have described our experiences of a temporary trial suspension and highlighted the need for further guidance of such a significant trial event. In order to provide some guidance for other trialists who may experience a temporary suspension of their trial, we have detailed the key challenges we experienced, together with insights as to how these may be overcome.Given our previous experience of a temporary suspension, we were well placed to deal with the suspension of trial activity as a result of the current COVID-19 pandemic. Within our own Unit, 17 trials had recruitment and/or follow-up suspended, or aspects of their intervention delivery or follow-up altered, to accommodate the impact of COVID-19. All these trials successfully used the guidance developed for significant major events and populated the events and actions timeline template (Additional file As a Unit, we have developed and tested our significant major events and timeline documents in two very different scenarios and plan to continue to use them for any future events.Additional file 1."} +{"text": "Previous studies have confirmed that miR\u2010195 expression is increased in cardiac hypertrophy, and the bioinformatics website predicted by Targetscan software shows that miR\u2010195 can directly target CACNB1, KCNJ2 and KCND3 to regulate Cav\u03b21, Kir2.1 and Kv4.3 proteins expression. The purpose of this study is to confirm the role of miR\u2010195 in arrhythmia caused by cardiac hypertrophy. The protein levels of Cav\u03b21, Kir2.1 and Kv4.3 in myocardium of HF mice were decreased. After miR\u2010195 was overexpressed in neonatal mice cardiomyocytes, the expression of ANP, BNP and \u03b2\u2010MHC was up\u2010regulated, and miR\u2010195 inhibitor reversed this phenomenon. Overexpression of miR\u2010195 reduced the estimated cardiac function of EF% and FS% in wild\u2010type (WT) mice. Transmission electron microscopy showed that the ultrastructure of cardiac tissues was damaged after miR\u2010195 overexpression by lentivirus in mice. miR\u2010195 overexpression increased the likelihood of arrhythmia induction and duration of arrhythmia in WT mice. Lenti\u2010miR\u2010195 inhibitor carried by lentivirus can reverse the decreased EF% and FS%, the increased incidence of arrhythmia and prolonged duration of arrhythmia induced by TAC in mice. After miR\u2010195 treatment, the protein expressions of Cav\u03b21, Kir2.1 and Kv4.3 were decreased in mice. 
The results were consistent at the animal and cellular levels, respectively. Luciferase assay results showed that miR-195 may directly target CACNB1, KCNJ2 and KCND3 to regulate the expression of Cavβ1, Kir2.1 and Kv4.3 proteins. MiR-195 is involved in arrhythmia caused by cardiac hypertrophy by inhibiting Cavβ1, Kir2.1 and Kv4.3. In cardiac hypertrophy and HF models, K+ and Ca2+ currents are down-regulated and APD is prolonged, indicating significant electrophysiological remodelling. The Ito and IK1 are central regulators of arrhythmia and may be promising targets for anti-arrhythmic approaches. However, the exact mechanisms underlying the decreased potassium and calcium channels in hypertrophy need further study. Cardiac hypertrophy can easily trigger atrial and ventricular arrhythmias, increase the risk of morbidity and mortality, and lead to sudden cardiac death in patients. In addition, many studies have shown that miRNAs play a key role in the regulation of ion channels. The objective of this study is to explore the mechanistic basis underlying miR-195 dysregulation in electrical remodelling and to propose a possible novel interaction between miR-195 and calcium and potassium channels. 2.1 In this study, miR-195 was constructed using the BLOCK-iT polII miR-RNAi expression vector and EmGFP kit, and construction of the vector was applied after the plasmid sequence was analysed (Invitrogen). The final concentration of the constructed lentivirus was 1.0 × 10^9 transducing U/mL for the miR-195 overexpression lentivirus vector. A lentivirus vector carrying the miR-195 inhibitor was constructed using the GV280 expression vector by Shanghai GeneChem Co., Ltd. The final concentration of the constructed lentivirus was 1.0 × 10^9 transducing U/mL for the miR-195 inhibitor lentivirus vector. Virus suspensions were stored at −80°C, and mixed and centrifuged on ice before use. 2.2 The mice were weighed and anaesthetized with 2% avertin solution. The mice were then fixed on the operating table, the muscles were separated, the chest was opened between the third and fourth ribs on the left, the aorta of the heart was exposed, and the aorta was clamped with the artery clip; the ventricular cavity was intraluminally injected with 70 μL of lentivirus containing a final concentration of 10^8 transducing U/mL of lenti-miR-195, a negative control or lenti-AMO-miR-195, and the arterial clip was removed and the chest sutured. All animal experiments and the animal welfare involved were approved by the Institutional Animal Care and Use Committee of Harbin Medical University, College of Pharmacy (No. IRB3004619). One week after injection of the lenti-miR-195 inhibitor, the mice underwent TAC surgery; 8 weeks later, the mice were anaesthetized to assess cardiac function and record the electrocardiogram. 2.3 The TAC method was used to establish a model of cardiac hypertrophy in mice. The TAC model can simulate haemodynamic overload to cause left ventricular hypertrophy and pathological remodelling. Male C57BL/6 mice weighing 22-26 g were anaesthetized with 2% avertin by intraperitoneal injection. The mouse was fixed on the operating table in a supine position, the debris in the oral cavity of the mouse was cleaned, the tracheal tube was inserted into the trachea from the oral cavity, and the small animal ventilator was connected. 
Make a small incision near the end of the sternum, the muscular tissue and glands were carefully separated, use a 26 G cushion needle to gently bypass the 5\u20100 ligature line, pass the dead knot, narrow the aortic arch, draw out the cushion needle, and the sternum and skin were immediately sutured with the 6\u20100 ligation thread. After the operation, the mice were returned to the animal room for 8\u00a0weeks to ensure that the model mouse and the sham operation mice were kept in the same condition. Echocardiography was used to detect whether the model of myocardial hypertrophy was successfully established. In the sham operation group, all treatment methods are the same as the TAC model group except for the aortic arch narrowing operation.2.4The mice were firstly weighed, then they were generally anaesthetized by intraperitoneal injection of 2% avertin solution at a volume of 0.l\u00a0mL/10\u00a0g of body weight. The mice were then fixed on the supine position on the testing pad. The Echocardiography was used to detect the changes of cardiac function before and after the establishment of HF model in mice, as described previously.2.5The mice were generally anaesthetized with avertin solution. An 8\u2010electrode catheter was inserted into the right ventricle, and this procedure has been applied in our previous study .2.6The ventricular tissue was removed from the mouse heart and fixed in glutaraldehyde and maintained for two hours. Sodium cacodylate buffer was used to wash the tissue slices for three times by for 10\u00a0minutes each. The ventricular tissue slices were fixed with 1% osmium tetroxide for 1\u00a0hours. Then ventricular tissues were dehydrated in 50%, 70%, 90%, 100% ethanol; we use uranyl acetate solution and lead citrate solution to stain primary ventricular tissue slices; and the pathological changes were examined by a JEOL TEM .2.73, add 0.25% trypsin for heart tissue digestion. Digestion steps were repeated until the tissues disappeared, then collected suspension cells by centrifugation at 2000\u00a0g for 5\u00a0minutes. DMEM medium was used to culture cells , which was added with 10% foetal bovine serum and supplemented with penicillin (100\u00a0U/mL)/streptomycin ; the cells were cultured at 37\u2103. After fibroblast showed adherence after 120\u00a0minutes, the cell suspension which mainly included cardiomyocytes was plated at 3\u00a0~\u00a05\u00d7105 cells per well using DMEM. Add 5\u2010bromo\u20102\u2010deoxyuridine (10\u00a0nM) into the DMEM medium to inhibit fibroblasts proliferation.Ventricular myocytes were isolated from the hearts of 1\u2010day\u2010old neonatal mice (C57BL/6) and were differentially plated to remove fibroblasts, as described previously.2.8miR\u2010195 with or without AMO\u2010miR\u2010195, or negative control (NC) siRNAs at a concentration of 100\u00a0pmol/mL were transfected into neonatal primary mouse ventricular myocytes using X\u2010treme GENE siRNA transfection reagent . miR\u2010195 mimics sequences were shown as following: sense: 5'\u2010UAGCAGCACAGAAAUAUUGGC\u20103'; antisense: 5'\u2010 CAAUAUUUCUGUGCUGCUAU6 infectious titre miR\u2010195 lentivirus vector was also transfected it into the cultured neonatal mouse ventricular myocytes. 
The miR-195 lentivirus vector was designed to carry green fluorescent protein so that the efficiency of miR-195 lentivirus infection could be estimated, and images of the cardiomyocytes were obtained by microscopy 48 h after transfection of the miR-195 lentivirus vector into cardiomyocytes. The sequences of miR-195, AMO (anti-microRNA antisense oligodeoxyribonucleotide)-miR-195 and NC are shown in Table, and the miR-195 inhibitor sequence was as follows: 5'-GCCAAUAUUUCUGUGCUGCUA-3'. 48 hours after transfection, cardiomyocytes were collected for total RNA extraction or were used for protein extraction. 2.9 Total RNA was extracted using the phenol-chloroform method. Left ventricular (LV) tissues of C57BL/6 mice or primary neonatal cardiomyocytes were washed with diethylpyrocarbonate (DEPC) water-treated phosphate-buffered saline (PBS) buffer and then processed with Trizol reagent (Invitrogen). The quality of the extracted RNA samples was confirmed by denaturing gel, and those with clear 28s and 18s bands could be used for subsequent experiments. 2.10 Complementary DNA was synthesized using random primers as shown in the manufacturer's instructions. LightCycler 480 SYBR Green I Master was used for real-time PCR. The target genes were quantified on the ABI 7500 fast Real-Time PCR system (Applied Biosystems). The melting curve of the target genes was used to estimate the specificity of the amplified product. The Primer Premier 6.0 program was used to design all PCR primers. The comparative cycle threshold (Ct) method (2^-ΔΔCt) was applied to calculate the relative expression of mRNAs or miR-195. Each data point of each sample was then normalized to GAPDH or U6, which were used as internal references. The primer sets used in our analyses are shown in Table. 2.11 The homogenate was centrifuged at 12 000 g for 30 minutes and the supernatants (containing cytosolic and membrane fractions) were collected for protein concentration detection, using the BCA kit with the TECAN Infinite 200 PRO NanoQuant detection system. Protein samples were fractionated by SDS-PAGE (10% polyacrylamide gels for connexin43) and then transferred to a nitrocellulose blotting membrane. The primary anti-Cavβ1, anti-Kir2.1 and anti-Kv4.3 antibodies were used. GAPDH was selected as an internal control for proteins. Western blot bands were imaged on the Odyssey Infrared Imaging System (LI-COR Biosciences). The band intensity (area × OD) was measured in each group and normalized to GAPDH with Odyssey v1.2 software. 2.12 Cultured NMVCs were incubated with antibodies against Cavβ1, Kir2.1, Kv4.3 and α-actinin at 4°C overnight. The cells were washed with PBS buffer and incubated with the secondary antibodies (1 hour at room temperature) conjugated to Alexa Fluor 488 or Alexa Fluor 594 (Molecular Probes). The preparations were then examined under an immunofluorescence microscope. 2.13 HEK293T cells were transfected with 0.5 μg psi-CHECK™-2-target DNA using lipofectamine 2000 (Invitrogen) and supplemented with 20 μmol/L miR-195, AMO-miR-195, or NC. 
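Before moving on, a minimal worked example of the comparative Ct (2^-ΔΔCt) calculation described in section 2.10 above may help. The Ct values below are invented purely for illustration and are not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative Ct (2^-ddCt): target normalized to a reference gene and a calibrator sample."""
    dct = ct_target - ct_ref              # normalize to the internal reference (GAPDH or U6)
    dct_cal = ct_target_cal - ct_ref_cal  # same normalization for the calibrator (e.g. sham or NC)
    ddct = dct - dct_cal
    return 2.0 ** (-ddct)

# Invented Ct values: miR-195 vs. U6 in a TAC sample, relative to a sham sample.
fold_change = relative_expression(22.1, 18.0, 24.3, 18.1)
print(round(fold_change, 2))              # ~4.29, i.e. roughly 4.3-fold up-regulation
```

A fold change above 1 indicates higher expression in the test sample than in the calibrator after normalization; values below 1 indicate down-regulation.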
After transfection for 48 h, firefly and renilla luciferase activities were determined with luciferase assay kits, expressed as relative luminescence units (RLU), and recorded on a luminometer according to the manufacturer's instructions. We cloned fragments from the CACNB1 3'UTR region for analysis of luciferase activity; position 943-949 on the 3'UTR of CACNB1 contained the predicted putative binding sequence for miR-195. We also cloned the KCNJ2 3'UTR region, which has miR-195 putative binding sequences (positions 395-401 and 2183-2190 on the 3'UTR of KCNJ2), and the KCND3 3'UTR region, containing the predicted miR-195 binding sequence (position 173-179 on the 3'UTR of KCND3); these fragments were then amplified by PCR and the products were cloned into the pSICHECK-2-control vector. Mutant nucleotides were also designed at the different binding sites. The sequence of the miR-195 mimic is 5'-UAGCAGCACAGAAAUAUUGGC-3' (synthesized based on the sequence of mmu-miR-195, miRBase Accession No. MIMAT0000225); that of NC is 5'-UUCUCCGAACGUGUCACGUAA-3'; and the sequence of the antisense 2'-O-methyl (2'-O-Me) oligonucleotide for miR-195 is 5'-GCCAAUAUUUCUGUGCUGCUA-3'. 2.14 All experimental data are described as mean ± SEM. The two-tailed Student's t test was applied for comparisons between two groups. One-way ANOVA was used for multi-group comparisons, followed by multiple pairwise comparisons. The χ2 test was used for comparisons of nonparametric data sets. SPSS 19.0 software was used for all statistical analyses. P < .05 was considered statistically significant. 3.1 The cardiac hypertrophy mouse model was first established by transverse aortic constriction (TAC). Eight weeks later, echocardiography was performed on the mouse heart (Figure). Compared with the sham group, the echocardiographic parameters were significantly altered (*P < .05 vs. Sham), which indicated that the mouse cardiac hypertrophy model was successfully established. The miR-195 level was detected by real-time PCR. Compared with the sham operation group, miR-195 showed a significant increase in the myocardial tissue of the TAC-induced hypertrophy model group. Compared with the sham group, the protein expressions of Cavβ1, Kir2.1 and Kv4.3 were significantly decreased in the myocardium of cardiac hypertrophy induced by TAC. 3.2 Compared with the NC group, the miR-195 level was significantly increased after lenti-miR-195 transduction (P < .001 vs. NC). Because the miR-195 lentivirus vector was designed to carry green fluorescent protein so that the efficiency of miR-195 lentivirus infection in cardiomyocytes could be estimated, an image of cardiomyocytes transfected with the miR-195 lentivirus vector is shown in Figure S1. These data further confirmed the successful transduction of miR-195 into cardiomyocytes. miR-195 overexpression impaired cardiac function in mice. Compared with the NC group, the values of EF (%) and FS (%) in the lenti-miR-195 injection group were significantly decreased. The other cardiac functional parameters, such as LVID;d, LVID;s, LVAW;d, LVAW;s, LVPW;d and LVPW;s, are shown in Table (*P < .05, **P < .01 vs. NC). 
These data indicate that cardiac\u2010specific overexpression of miR\u2010195 in mice induces heart failure. The effect of miR\u2010195 on the ultrastructure of myocardium was detected by transmission electron microscopy. The results showed that in the NC group, we could not observe any morphological changes, the nucleus was intact, the mitochondria were average cross\u2010sectionally arranged in a compact state, and the myofilament was intact. Compared with the NC group, the nucleus and mitochondria were swollen and deformed, and the mitochondria were paralysed in miR\u2010195 overexpression group. There is a fuzzy dissolution phenomenon; the myofilament connection is disordered or the fracture is increased; and the disc is cracked in miR\u2010195 over expression group . These data indicate that the likelihood of arrhythmia induction was increased by miR\u2010195 overexpression in normal mice.In our previous study, the incidence of ventricular tachycardia (VT) and prolongation of VT duration were significantly increased induced by programmed left ventricular tachypacing in HF mice.3.4P\u00a0<\u00a0.05 vs. Sham), which were reversed by\u00a0+\u00a0Lenti\u2010mir\u2010195 inhibitor (#P\u00a0<\u00a0.05 vs. +Lenti\u2010NC). The other cardiac functional parameters such as LVID;d, LVID;s, LVAW;d, LVAW;s, LVPW;d, LVPW;s were recorded in our study as shown in Table\u00a0P\u00a0<\u00a0.05 vs. Sham). Compared with mice in the\u00a0+\u00a0Lenti\u2010NC group, the incidence of arrhythmias and the induction duration of arrhythmia in the\u00a0+\u00a0Lenti\u2010mir\u2010195 inhibitor group were reduced .One week after injection with Lenti\u2010miR\u2010195 inhibitor or negative control lentivirus, the mice were divided into Sham group, TAC group, TAC\u00a0+\u00a0miR\u2010195 inhibitor group (+Lenti\u2010mir\u2010195 inhibitor) and TAC\u00a0+\u00a0NC (+Lenti\u2010NC) group. After 8\u00a0weeks, the ultrasound ejection fraction (EF%) and short axis shortening rate (FS%) of each group of mice were detected by echocardiography Figure\u00a0. Compare3.5P\u00a0<\u00a0.01 vs. NC), Kir2.1 and Kv4.3 were down\u2010regulated in miR\u2010195 overexpression group , and the expression levels of ANP , BNP and \u03b2\u2010MHC were increased in miR\u2010195\u2010overexpressing cardiomyocytes. miR\u2010195 inhibitor reversed the increased expression levels of miR\u2010195 and cardiac hypertrophy\u2010related indicators, ANP , BNP , \u03b2\u2010MHC levels.The primary neonatal cardiomyocytes of mice were cultured. After 48h, the cultured cardiomyocytes were transfected with miR\u2010195 mimics, miR\u2010195 inhibitor or negative control group. After transfection for 48\u00a0hours, the RNA was extracted to detect miR\u2010195 level and the related hypertrophy\u2010related indicators. The efficacy of miR\u2010195, AMO\u2010195 transfection in altering miR\u2010195, was detected and miR\u2010195 expression was detected by real\u2010time PCR experiments Figure\u00a0. Compare3.7P\u00a0<\u00a0.05 vs. NC), Kir2.1 (*P\u00a0<\u00a0.05 vs. NC) and Kv4.3 (*P\u00a0<\u00a0.05 vs. NC) protein levels were significantly decreased in miR\u2010195\u2010overexpressing cardiomyocytes compared with NC\u2010treated cell. After the addition of the miR\u2010195 inhibitor, the down\u2010regulation of Cav\u03b21 (#P\u00a0<\u00a0.05 vs. miR\u2010195 mimics), Kir2.1 (#P\u00a0<\u00a0.05 vs. miR\u2010195) and Kv4.3 (#P\u00a0<\u00a0.05 vs. 
miR\u2010195) was restored to normal level , which were improved by miR\u2010195 inhibitor .To investigate the regulatory role of miR\u2010195 on Kir2.1 and Kv4.3 protein expression in vitro, the protein levels of Kir2.1 and Kv4.3 were detected by Western blot in neonatal cultured cardiomyocytes treated with miR\u2010195 mimics, with or without miR\u2010195 inhibitor. The results showed that Cav\u03b21 . However, after mutation binding sites, miR\u2010195 restored the luciferase activity to normal level . After mutating a single site, the luciferase activity in the miR\u2010195 group was still significantly decreased . Luciferase activity was restored after mutation of both binding sites . However, after mutation binding sites, luciferase activity shows similar in NC group the expression of miR\u2010195 was significantly increased in the myocardium of HF mice. The protein expressions of Cav\u03b21, Kir2.1 and Kv4.3 were decreased in the myocardium of HF mice compared with sham group. (2) Overexpression of miR\u2010195 by Lenti\u2010miR\u2010195 decreased the cardiac function in WT mice and increased the likelihood of arrhythmia induction and duration of arrhythmia in normal mice. (3) Lenti\u2010miR\u2010195 inhibitor can reverse the decreased cardiac function, the increased incidence of arrhythmia and prolonged duration of arrhythmia induced by TAC in mice. (4) The protein expressions of Cav\u03b21, Kir2.1 and Kv4.3 were decreased in mice after miR\u2010195 treatment. (5) Luciferase assay results showed direct interaction between miR\u2010195 and Cav\u03b21, Kir2.1 and Kv4.3.The reason why miR\u2010195 was selected for our study has been documented to play key role in pathogenesis and progression of cardiac hypertrophy.to), current density decline, is the most stable electrical remodelling feature.K1) is decreased, the delayed rectifier potassium current (IKs) is down\u2010regulated, and the weakening of the potassium current causes repolarization abnormalities of the cardiomyocytes, which may cause or aggravate the occurrence of malignant arrhythmia.In cardiac pathological hypertrophy, with the increasing degree of hypertrophy, the incidence of malignant arrhythmia is significantly increased, and cardiac electrophysiological remodelling is the main cause of malignant arrhythmia.Upon bioinformatics, we predicted that miR\u2010195 is complementary to the CACNB1 gene encoding the Cav\u03b21 ion channel protein, the KCNJ2 gene encoding the Kir2.1 ion channel protein, and the KCND3 gene encoding the Kv4.3 ion channel protein. In fact, recent studies have confirmed that many microRNAs can participate in heart rhythms by acting on different ion channel proteins.We first constructed cardiac hypertrophy by TAC in mice and confirmed by Western expression that the decreased expressions of Cav\u03b21, Kir2.1 and Kv4.3 in the TAC group. After transfection of miR\u2010195 mimics and inhibitors, our results confirmed that miR\u2010195 overexpression inhibited Cav\u03b21, Kir2.1 and Kv4.3 protein expression; after adding miR\u2010195 inhibitor, the down\u2010regulated proteins were reversed. After confirming that miR\u2010195 has a significant inhibitory effect on Cav\u03b21, Kir2.1 and Kv4.3 ion channel proteins, we use luciferase reporter analysis to confirm the direct regulation between Cav\u03b21, Kir2.1, Kv4.3 and miR\u2010195. 
Therefore, to some extent, we have confirmed the hypothesis that the expression of miR\u2010195 is increased in cardiac hypertrophy, which inhibits Cav\u03b21, Kir2.1 and Kv4.3 ion channel protein, and accelerates the occurrence and development of arrhythmia.4.1Our results indicate that miR\u2010195 is elevated in cardiac hypertrophy, which in turn targets ion channels in cardiomyocytes and promotes cardiac arrhythmias. Knocking down of miR\u2010195 may affect the expression of relative ion channel protein Cav\u03b21, Kir2.1 and Kv4.3, which means that miR\u2010195 is expected to provide a therapeutic target to treat cardiac hypertrophy, arrhythmia and delay of sudden cardiac death. In our study, miR\u2010195 inhibitors may have been developed and applied in cardiac hypertrophy, which is expected to become a cardiac arrhythmia\u2010induced arrhythmia treatment.4.2Firstly, we have only studied miR\u2010195 in animal level and cellular level, and we have explored the knockdown of miR\u2010195 on cardiac function and heart rhythm in both in cardiac hypertrophy; in future study, relevant electrophysiology experiments should be detected. We did not use patch clamp techniques to study the ion channel function and kinetic changes of the target ion channel. Even so, however, it still provides a perspective for electrophysiological remodelling and arrhythmia after cardiac hypertrophy, that is, miR\u2010195 may be a potential target to treat arrhythmia.5miR\u2010195 was increased in cardiac hypertrophy induced by TAC, and miR\u2010195 overexpression plays key roles in triggering cardiac hypertrophy in heart which resulted in increased likelihood of arrhythmia induction in normal mice. miR\u2010195 inhibited the expression of Cav\u03b21, Kir2.1 and Kv4.3, which may contribute to the cardiac arrhythmias induced by cardiac hypertrophy. Together, our studies uncover a novel mechanisms that miR\u2010195 modulates cardiac hypertrophy by regulating electrical remodelling.The authors declare no conflict of interest.Lina Xuan: Conceptualization ; Data curation (lead); Formal analysis ; Funding acquisition ; Investigation ; Methodology (lead); Project administration ; Resources ; Software ; Supervision ; Validation ; Visualization ; Writing\u2010original draft (supporting); Writing\u2010review & editing (supporting). Yanmeng Zhu: Conceptualization ; Data curation ; Formal analysis ; Investigation ; Methodology ; Project administration ; Resources ; Software ; Supervision ; Validation ; Visualization ; Writing\u2010original draft (supporting); Writing\u2010review & editing (supporting). Yunqi Liu: Conceptualization (supporting); Data curation (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Resources ; Software ; Supervision ; Validation (supporting); Visualization (supporting); Writing\u2010original draft (supporting); Writing\u2010review & editing (supporting). Hua Yang: Conceptualization (supporting); Data curation (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Resources (supporting); Software (supporting); Supervision (supporting); Validation (supporting); Visualization ; Writing\u2010original draft (supporting). Shengjie Wang: Conceptualization (supporting); Data curation (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Resources (supporting); Software (supporting); Supervision (supporting); Validation (supporting); Visualization . 
Qingqi Li: Data curation (supporting); Formal analysis (supporting); Methodology (supporting); Software (supporting). Chao Yang: Data curation (supporting); Formal analysis (supporting); Investigation (supporting); Supervision (supporting). Lei Jiao: Conceptualization (supporting); Methodology (supporting); Software (supporting); Supervision (supporting). Ying Zhang: Formal analysis ; Investigation (supporting); Supervision (supporting). Baofeng Yang: Conceptualization (lead); Data curation ; Formal analysis (supporting); Funding acquisition ; Investigation ; Methodology (supporting); Project administration ; Resources ; Software (supporting); Supervision ; Validation ; Visualization (supporting); Writing\u2010original draft (supporting); Writing\u2010review & editing (supporting). Lihua Sun: Conceptualization (lead); Data curation ; Formal analysis ; Funding acquisition (lead); Investigation ; Methodology ; Project administration (lead); Resources ; Software ; Supervision ; Validation ; Visualization ; Writing\u2010original draft (lead); Writing\u2010review & editing (lead). Lina Xuan, Yanmeng Zhu, Baofeng Yang and Lihua Sun designed, performed study, supervised all aspects of the research and analysis. Lina Xuan, Yanmeng Zhu and Lihua Sun finalized the manuscript. Lina Xuan, Yanmeng Zhu completed the animal experiments and molecular targets detect. Yunqi Liu, Hua Yang, Shengjie Wang are responsible for in vitro experiments. Chao Yang, Qingqi Li, Lei Jiao, Ying Zhang assisted in research, data analysis and interpretation.Fig S1Click here for additional data file."} +{"text": "Abnormal sALFF and dALFF values were correlated with clinical features of patients. Compared with healthy controls (HC), DNP group demonstrated alterations of sALFF and/or dALFF in medial prefrontal cortex (MPFC), supplementary motor areas (SMA), cerebellum, hippocampus, pallidum and cingulate cortex, in which the values were close to normal in DRP. Notably, sALFF and dALFF showed specific sensitivity in detecting abnormalities in basal ganglia and cerebellum. Additionally, DRP showed additional changes in precuneus, inferior temporal gyrus, superior frontal gyrus and occipital visual cortex. Compared with HC, the DNP showed increased FC in default network and motion-related networks, and the DRP showed decreased FC in default network. The MPFC, hippocampus, SMA, basal ganglia and cerebellum are indicated to be intrinsically affected regions and effective therapeutic targets. And the FC profiles of default and motion-related networks might be potential core indicators for clinical treatment. This study revealed potential neuromodulatory targets and helped understand pathomechanism of BECTS. Static and dynamic analyses should be combined to investigate neuropsychiatric disorders.The present study aims to investigate intrinsic abnormalities of brain and the effect of antiepileptic treatment on brain activity in Benign childhood epilepsy with centrotemporal spikes (BECTS). Twenty-six drug-na\u00efve patients (DNP) and 22 drug-receiving patients (DRP) with BECTS were collected in this study. Static amplitude of low frequency fluctuation (sALFF) and dynamic ALFF (dALFF) were applied to resting-state fMRI data. Functional connectivity (FC) analysis was further performed for affected regions identified by static and dynamic analysis. 
One-way analysis of variance and post hoc comparisons were then used to assess group differences. Benign childhood epilepsy with centrotemporal spikes (BECTS), also called Rolandic epilepsy, is the most common type of focal epilepsy in children. As a data-driven method, the amplitude of low-frequency fluctuation (ALFF) measures the magnitude of spontaneous blood oxygenation level-dependent (BOLD) activity, which depicts the energy intensity of brain activity over a period of time. Several questions need to be taken into consideration for the clinical treatment of patients with BECTS. First, what are the intrinsic abnormalities of patients with BECTS? Second, what happens to the brain activity of patients with BECTS receiving AEDs? Investigating which brain regions positively respond to the drugs would provide valuable information to help clinical neuromodulation and drug development. In the present study, drug-naïve patients (DNP) and drug-receiving patients (DRP) with BECTS were collected and compared with matched healthy controls (HC). Notably, the DNP were newly diagnosed patients who had not received antiepileptic drugs. Data-driven methods were applied to investigate abnormal brain activity in patients in drug-naïve and drug-receiving conditions, attempting to provide some evidence to answer the above questions. Fifty-two patients with benign epilepsy with centrotemporal spikes were recruited at the Affiliated Hospital of North Sichuan Medical College. Twenty-eight patients were drug-naïve, and 24 patients were receiving antiepileptic drugs with good seizure control. All of the patients underwent a comprehensive clinical evaluation for the diagnosis of BECTS according to the epilepsy classification of the International League Against Epilepsy. All subjects underwent MRI scanning in a 3T GE scanner with an eight-channel phased-array head coil in the Affiliated Hospital of North Sichuan Medical College. Resting-state functional data were collected using an echo-planar imaging sequence with the following parameters: repetition time (TR) = 2000 ms, echo time (TE) = 30 ms, flip angle (FA) = 90°, slice thickness = 4 mm (no gap), data matrix = 64 × 64, field of view = 24 cm × 24 cm, voxel resolution = 3.75 mm × 3.75 mm × 4 mm, and 32 axial slices in each volume. Two hundred volumes were acquired in each scan, lasting 6 min and 40 s. Axial anatomical T1-weighted images were acquired using a 3-dimensional fast spoiled gradient echo sequence. The parameters were as follows: thickness = 1 mm (no gap), TR = 8.2 ms, TE = 3.2 ms, field of view = 25.6 cm × 25.6 cm, flip angle = 12°, data matrix = 256 × 256. There were 136 axial slices for each subject. All subjects were instructed to close their eyes and relax without falling asleep during the scan. A simple oral questionnaire was performed for each subject to ensure an awake state during the scan. Head motion was summarized as the mean framewise displacement (mFD), computed from the head position parameters at each time point in the x, y, and z directions. For each voxel in the whole brain, a fast Fourier transform was first performed to convert the time series to the frequency domain. Then, the averaged square root of the power spectrum across the low-frequency band was calculated as the ALFF value. In the present study, a 100 s window length (L = 50 TR) and 2 s steps (S = 1 TR) were used, considering that the window length should be in line with the commonly identified slowest frequency of the BOLD signal. The full scanning time series (F = 195 TR) was segmented at each time point, obtaining 146 (W = F − L + 1) sequential time windows. 
The sALFF was calculated within each segmented window, thus generating 146 sALFF values for every voxel. Then, the standard deviation (SD) across 146 continuous sALFF values was calculated to represent the temporal variety of brain activity. The same calculation steps were performed for every voxel in the whole brain to acquire the dALFF maps. Finally, the SD maps were z-standardized across all the voxels for the following statistical analysis. A rough illustration of sALFF and dALFF was shown in For the dynamic feature, a sliding-window approach was adopted in the present study to examine the temporal variability in ALFF over the duration of the scan. In the present study, a 100 s window length . Then, one-way analysis of variance (ANOVA) was used to detect the differences among the three groups, with age, gender, and mFD values of head motion as nuisance variables. Tukey-kramer post hoc analysis was performed to investigate pairwise between-group differences. Furthermore, we also performed partial correlation analysis to detect the relationship between the value of sALFF and dALFF in brain regions with significant between-group differences and the clinical features, including onset age, and illness duration, controlling for the effects of gender. ANOVA and post hoc statistical analyses were also conducted for the FC profiles.First, one-sample Here, because head motion could have a significant effect on dynamic features, to validate the findings, the sALFF and dALFF analyses were replicated in a subgroup with a stricter requirement of head motion (mFD < 0.2), consisting of 18 DRP, 19 DNP and 15 HC. Detailed demographic data are illustrated in p = 0.04), between DNP and HC (p = 0.02). Besides, the mFD values between DNP and HC showed difference (p = 0.03). In addition, the onset age between DNP and DRP did not show a significant between-groups difference . The DRP demonstrated a significantly longer illness duration and lower seizure frequency than DNP (p < 0.001). Detailed clinical and demographic information are shown in Four of the 52 recruited patients with BECTS were excluded from the sALFF and dALFF analyses because of excessive head motion, including two DRP and two DNP. There were no significant differences among the three groups for age. However, the differences of gender were observed between DRP and DNP (t-test) for each group is shown in post hoc analyses, with a significance threshold of p < 0.005 with voxel number >100. Compared with HC, some alterations were only observed in the DNP group . Compareer >100) . Notablyer >100) . Notablyer >100) . A straipost hoc statistical maps between two datasets reached 0.84, 0.83, and 0.90 for DRP-DNP, DNP-HC and DNP-HC comparisons, respectively. The significant F maps of the two datasets were overlapped to illustrate high spatial similarity between the two datasets and with the dALFF in the SMA . In addition, the onset age of DNP was positively related to the sALFF and the dALFF in the vMPFC (The duration of the illness in the DRP group showed a positive correlation with the sALFF in TMG_L (he vMPFC .p < 0.001) alteration was found in current study. In the voxel-wise FC analysis, increased FC within vMPFC was found in DNP relative to HC (p < 0.001). Besides, decreased FC between vMPFC and posterior DMN regions was observed in DRP compared with HC (For the cross correlation between of pairs of affected regions, no significant ( with HC . Moreove with HC . 
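The sliding-window dALFF computation described at the start of this passage can be sketched the same way: ALFF is recomputed in 100 s windows (L = 50 TR) advanced in 1 TR steps, giving W = 195 - 50 + 1 = 146 windows for 195 retained volumes, and the standard deviation of the windowed values is taken as dALFF before z-standardization across voxels. The helper below is illustrative only and is not the authors' pipeline.

```python
import numpy as np

def alff(ts, tr=2.0, band=(0.01, 0.08)):
    """Static ALFF of one voxel time series (see previous sketch)."""
    ts = np.asarray(ts, dtype=float) - np.mean(ts)
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    power = np.abs(np.fft.rfft(ts)) ** 2 / ts.size
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return np.sqrt(power[sel]).mean()

def dalff(ts, tr=2.0, win=50, step=1):
    """Temporal variability of ALFF: SD over sliding windows (L = 50 TR, S = 1 TR)."""
    ts = np.asarray(ts, dtype=float)
    n_win = (ts.size - win) // step + 1      # 195 - 50 + 1 = 146 windows, as in the text
    starts = range(0, n_win * step, step)
    return np.std([alff(ts[i:i + win], tr) for i in starts])

# dALFF maps are then z-standardized across all voxels before group statistics:
# z_map = (dalff_map - dalff_map.mean()) / dalff_map.std()
```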
DetaileThis study investigated spontaneous neural activity from static and dynamic aspects in BECTS patients with and without drug treatment. First, na\u00efve patients showed abnormal activity in MPFC, hippocampus, SMA, pallidum and cerebellum, and drug-treated patients demonstrated a normalization effect on these regions. These findings suggested potential intrinsically affected regions in BECTS and indicated that AEDs could have a positive therapeutic effect by affecting the activity of the abovementioned regions. In addition, disrupted activity in precuneus, occipital visual cortex and lateral temporal regions was only observed in DRP, which might imply specific synergistic normalization effects related to AEDs. Moreover, specific disruption in DNP suggested that the abnormal FC in default and motion-related networks might be disease-intrinsic impairment. Distinct FC patterns in DRP further implied a potential therapeutic effect on these networks. In summary, the present study investigated intrinsic dysfunctions related to BECTS and the effects of AEDs on the brain activity of patients, providing insights into the pathomechanism of the disease. Finally, this study also indicated the necessary combination of static and dynamic activity in studying epilepsy.The DNP showed altered sALFF and dALFF in the vMPFC, hippocampus and SMA, and the DRP demonstrated values similar to those of HC in this study. The vMPFC and hippocampus are the core nodes of the well-known DMN, which has been widely reported to be associated with various epilepsies . The DMNIn the present study, increased sALFF in the left cerebellum and decreased dALFF in the left pallidum were also observed in the DNP, and no difference was observed in the two regions between the DRP and HC. This finding showed that sALFF and dALFF had different sensitivities in detecting abnormal activities in different brain regions. Increased sALFF in cerebellum in DNP implied that excessive cerebellar activity might be related to seizures. Besides, decreased nodal efficiency and regional homogeneity and increased gray matter volume of cerebellum have been reported in patients with BECTS , indicatIn the present study, additional abnormalities in the occipital visual cortex, lateral temporal regions and the right precuneus were shown in the DRP, which might be interpreted in two possible ways. On the one hand, it was recognized that the long-term use of AEDs might cause some chronic brain damage , and morIn addition, there are some limitations in the present study. First, the best experimental design for drug efficacy studies should be a longitudinal study, but this study is a cross-sectional study. Recruiting two independent patient groups with and without drug treatment to investigate the therapy effect made it difficult to exclude the confounding effects related to variability between subjects. Therefore, to address these issues, a cohort study needs to be conducted in the future. Second, despite widely reported cognitive impairment in BECT in previous studies, the lack of cognition assessment weaken the clinical meaning of the present study to some extent. Third, the gender was different between groups. In the present study, the gender was regressed out as a nuisance variate in the linear regression model of statistic analysis. 
Still, we can\u2019t completely rule out the potential impact of gender differences on current results.The present study adopted sALFF and dALFF to characterize abnormal brain activities in the DNP and DRP groups compared with the HC group. Low-frequency neural oscillations in the MPFC, hippocampus, SMA, basal ganglia and cerebellum were only altered in drug-na\u00efve patients, not in the treated group; thus, these alterations were inferred to be induced by the disease itself. FC profiles further suggested the crucial role of DMN and motion-related regions for the treatment of epilepsy. This finding revealed effective therapeutic targets and provided additional information for understanding the pathomechanism underlying BECTS. Notably, sALFF and dALFF demonstrated specific sensitivity in detecting abnormal activity in the cerebellum and basal ganglia, respectively, indicating the necessity of combining the two methods in epilepsy research.The datasets generated for this study are available on request to the corresponding author.The studies involving human participants were reviewed and approved by Affiliated Hospital of North Sichuan Medical College. Written informed consent to participate in this study was provided by the participants\u2019 legal guardian/next of kin.CL was responsible for study design. YH, YC, and ZL collected the data. SJ, HP, and PW performed data analysis and article writing. XL provided the methodological advice. CL and DY supervised the conduct of the study. SJ wrote the manuscript. XW proofread the manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Triple-negative breast cancer (TNBC) is an aggressive breast type of cancer with no expression of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor-2 (HER2). It is a highly metastasized, heterogeneous disease that accounts for 10\u201315% of total breast cancer cases with a poor prognosis and high relapse rate within five years after treatment compared to non-TNBC cases. The diagnostic and subtyping of TNBC tumors are essential to determine the treatment alternatives and establish personalized, targeted medications for every TNBC individual. Currently, TNBC is diagnosed via a two-step procedure of imaging and immunohistochemistry (IHC), which are operator-dependent and potentially time-consuming. Therefore, there is a crucial need for the development of rapid and advanced technologies to enhance the diagnostic efficiency of TNBC. This review discusses the overview of breast cancer with emphasis on TNBC subtypes and the current diagnostic approaches of TNBC along with its challenges. Most importantly, we have presented several promising strategies that can be utilized as future TNBC diagnostic modalities and simultaneously enhance the efficacy of TNBC diagnostic. Breast cancer is a group of cancer cells that starts in the breast cells and grows out of control. All breast cancer tumor diagnosis starts with the detection of estrogen (ER), progesterone (PR), and human epidermal growth factor receptor-2 (HER2) receptors using immunohistochemistry (IHC) to differentiate the type of breast cancer ,2,3. In The term \u201cluminal\u201d is used because this type of breast cancer is present at the luminal (inner) epithelial cells of the breast ,10. 
In aHER2 enriched breast cancer subtype is ER and PR negative but HER2 positive. This subtype is known to have faster growth and worse prognosis than the luminal subtype . HoweverThe basal-like subtype is ER, PR, and HER2 negative, known as triple-negative breast cancer (TNBC). The term basal-like is contributed by the similarity in the expression of epidermal growth factor receptor (EGFR), CK5/6, CK14, and CK17 ,16,17. TClaudin-low breast cancer subtype is another intrinsic type identified by their gene expression profiling and is also known as triple-negative breast cancer ,24. ThesTNBC is known as a heterogeneous type of cancer that is categorized into six subtypes. The subtypes are immunomodulatory (IM), luminal androgen receptor (LAR), basal-like 1 (BL-1), basal-like 2 (BL-2), mesenchymal (M), and mesenchymal stem-like (MSL) as shown in Subtyping TNBC tumors is vital in identifying the treatment alternatives and establishing personalized, targeted medications for every TNBC individual. A two-step procedure typically employed to diagnose TNBC is imaging and immunohistochemistry (IHC) . ImagingDiagnosis via ultrasound is performed when a lump or swelling is not detected in a mammogram but still can be felt and serve as the primary approach to distinguish between breast cysts (fluid-filled sac) and tumors if sample collection is carried out in the right area and tested for cancer . What diBreast cancer diagnosis by MRI, on the other hand, is opted when a patient is categorized as high risk (family history/BRCA gene mutation) and to determine the severity of the carcinoma due to the efficiency of MRI to detect the early formation of breast cancer in comparison to breast ultrasound and mammogram ,48. The Ideally, IHC is required for breast carcinoma typing performed by cell staining with biomarkers such as hormone receptor (progesterone receptor (PR) and estrogen receptor (ER)) as well as human epidermal growth factor receptor two (HER2) markers . In ordeBlood-based liquid biopsy is a non-invasive diagnostic method that can be utilized for future TNBC diagnosis. Liquid biopsy captures the information of a tumor through blood specimen, which is analyzed for the presence of circulating tumor cells (CTCs), tumor-derived extracellular vesicles (exosomes), and circulating tumor nucleic acids (ctNAs), which include circulating tumor DNA (ctDNA) and microRNAs (miRNAs) ,56. BaseAnalysis of ctNAs include circulating tumor DNA (ctDNA), microRNA (miRNA), and cell-free RNA (cfRNA) . CtDNAs MicroRNAs (miRNAs) are short ribonucleic acids (RNAs) made up of approximately 22 nucleotides that regulate thousands of genes via binding to target messenger RNAs (mRNAs) . miRNAs A study by Thakur et al. indicated a high expression of miR-21, miR-220, and miR-221 in TNBC Indian women , which rReported initially by Pan and Johnstone in 1983, exosomes are extracellular, membrane-bound vesicles that are secreted by many cells under normal and abnormal circumstances . The exoDuring carcinogenesis, exosomes from the cancer cells were found to trigger cancer cell proliferation and stage immune defense escape ultimately promoting cancer progression and metastasis ,96. In aPositron emission tomography, also known as PET scan, is a medical imaging approach that utilizes a radioactive element/drug to analyze the organ and tissue functionality and is well-known for its capability to detect a particular disease even before detection by other imaging methods . 
In thisBased on a similar approach, immune-PET imaging utilizes the integration of the PET system along with monoclonal antibodies (mAbs) to improve the efficacy of tumor characterization diagnosis and aid in selecting suitable targeted mAb-based therapy . In thisA biosensor is a tool comprised of bioreceptor, detector, and the signal transducer, utilized for the identification and analysis of a wide range of biological specimen, including enzymes, immune components (antigen and antibodies), nucleic acid components , and other biological components present in humans . BioreceRecognition begins when the bioreceptor binds to a distinctive biological analyte, which generates measurable binding signals by signal transducer and finally detected by the detector for data analysis . NanobioIn terms of TNBC cell detection, several nanobiosensors have been developed in the past. The zinc oxide (ZnO)-choline oxidase (ChOx) nanobiosensor generated in 2016 was able to identify the presence of choline in TNBC samples . In anot\u00ae Breast Cancer 360\u2122 Panel initiated in April 2018 is an analytical data tool comprising approximately 770 genes to aid in breast carcinoma classification based on molecular subtyping [TM panel assay before performing specimen and data analysis using the Nanostring nCouter\u00ae system [\u00ae was evident in Phase I clinical trial evaluating Eribulin and Everolimus in TNBC candidates whereby the panel was capable of disclosing the diversity of breast cancer and its microenvironment [\u00aeBC360 panel aided in distinguishing the intrinsic breast carcinoma subtypes and subsequently evaluated endocrine therapy effectiveness for stage I luminal breast cancer [\u00ae in determining breast cancer subtype was recently proven to draw a parallel with the traditional immunohistochemistry method [\u00ae panel will be utilized for breast cancer diagnostics in the future.The nCounterubtyping . In thisWA, USA) . The sysWA, USA) ,123. Thiironment . In anott cancer . In addiy method . In geneIntroduced by Vogelstein and Kinzler in 1999, digital PCR is a method that segregates the samples into multiple wells before the amplification process . Figure The pros of dPCR compared to a conventional quantitative polymerase chain reaction (qPCR) are that there is no requirement for a standard curve for analysis, it is able to tolerate any PCR inhibitors , able toIn general, all three diagnostic methods discussed above are based on the presence and expression of specific genes by the cancer cells. Hence, a summary of TNBC classification based on gene expression profiling would prTriple-negative breast cancer (TNBC) is an aggressive type of cancer but lacks targeted therapy methods such as hormone therapy due to the low expression of three primary receptors . Therefore, novel methods that can detect TNBC in real-time, accurate, and minimally invasive ways are urgently needed. This ensures that proper treatment can be provided in the early stages of cancer, and the treatment\u2019s efficiency can be monitored."} +{"text": "Prunus mume blossom is an edible flower that has been used in traditional Chinese medicine for thousands of years. Flavonoids are one of the most active substances in Prunus mume blossoms. The optimal ultrasonic-assisted enzymatic extraction of flavonoids from Prunus mume blossom (FPMB), the components of FPMB, and its protective effect on injured cardiomyocytes were investigated in this study. 
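Returning to the digital PCR approach described earlier in this passage, its standard-curve-free quantification rests on Poisson statistics over the partitions: the fraction of positive wells alone yields the absolute target concentration. A minimal sketch of that calculation follows; the partition volume and the example counts are hypothetical and only illustrate the principle.

```python
import math

def dpcr_copies_per_ul(positive, total, partition_volume_nl=0.85):
    """Absolute quantification from partition counts via Poisson correction."""
    p = positive / total                     # fraction of positive partitions
    lam = -math.log(1.0 - p)                 # mean copies per partition (Poisson)
    copies_per_nl = lam / partition_volume_nl
    return copies_per_nl * 1000.0            # copies per microlitre of reaction

# example: 4,500 positive partitions out of 18,000 (invented numbers)
print(round(dpcr_copies_per_ul(4500, 18000), 1))
```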
According to our results, the optimal extraction process for FPMB is as follows: cellulase at 2.0%, ultrasonic power at 300 W, ultrasonic enzymolysis for 30 min, and an enzymolysis temperature of 40 \u00b0C. FPMB significantly promoted the survival rate of cardiomyocytes and reduced the concentration of reactive oxygen species (ROS). FPMB also improved the activities of proteases caspase-3, caspase-8, and caspase-9 in cardiomyocytes. The cardiomyocyte apoptosis rate in mice was significantly reduced by exposure to FPMB. These results suggest that the extraction rate of FPMB may be improved by an ultrasonic-assisted enzymatic method. FPMB has a protective effect on the injured cardiomyocytes. Prunus mume is an edible flower used in traditional Chinese medicine. It is commonly used to prevent and treat various infections and inflammation, having antibacterial and anti-inflammatory effects [Prunus mume blossoms have high medicinal value and health care functions. Flavonoids are some of the most active substances in Prunus mume blossoms. As a natural product mainly found in plants, flavonoids have many important physiological and biochemical effects due to their unique chemical structure [ effects . Prunus tructure . The acttructure ,4. Flavotructure ,6,7.Prunus mume blossom [Prunus mume blossom are of great significance. An enzyme solution was used to damage the cell walls of the plant tissue when the flavonoids were extracted, thereby improving the effectiveness of the extraction [Prunus mume blossom powder. Enzymic extraction provides the advantages of simple operation and mild reaction conditions. However, the sheer force extracted by the enzyme method is insufficient to destroy the plant cell wall, causing a low extraction rate [China is one of the major countries in the production and utilization of blossom . The exttraction ,10,11. Tion rate ,13. An uion rate ,15,16.Prunus mume blossom (FPMB). The components of FPMB were identified by high-performance liquid chromatography (HPLC). Furthermore, we studied the effects of FPMB on cardiomyocyte activity; the concentrations of reactive oxygen species (ROS); the concentrations of proteases caspase-3, caspase-8, and caspase-9; and the cardiomyocyte apoptosis of mice. These results provide a reference for the extraction and biological activity of FPMB.Therefore, the purpose of this study was to use ultrasound to enhance the enzymatic extraction effect of flavonoids from Prunus mume blossom fully dissolves when the cellulase mass percentage reaches 2.0%. The extraction rate of FPMB was the highest at the temperature of 40 \u00b0C. The extraction rate of flavonoids did not significantly improve when the temperature continued to increase. The reason for this phenomenon may be that the effect of 40 \u00b0C on the extraction rate of flavonoids reached the maximum.At a temperature of 40 \u00b0C, ultrasonic power of 300 W, and ultrasonic time of 40 min, cellulase with the mass percentage of 0.5%, 1.0%, 1.5%, 2.0%, 2.5%, and 3.0% was added to conduct the extraction experiment of FPMB. The results are shown in Under the conditions of 2.0% cellulase, 40 \u00b0C hydrolysis temperature, and 40 min of ultrasonic time, the FPMB extraction experiments were conducted at the ultrasonic power of 200, 250, 300, 350, 400, and 450 W. The results are shown in According to the single-factor test results, cellulase concentration, enzymolysis temperature, ultrasonic power, and ultrasonic-assisted enzymolysis time were considered as factors. 
Each factor was designed with three levels. The concentration of the cellulase was 1.5%, 2.0%, or 2.5%. The enzymolysis temperature was 35, 40, or 45 \u00b0C. The ultrasonic power was 250, 300, or 350 W. The ultrasonic-assisted enzymolysis time was 30, 40, or 50 min, respectively. The test results are shown in 3B3C3D2. Further analysis is needed to determine whether single factors have a significant influence on the extraction rate of FPMB.As seen in 1 was selected as the enzymolysis temperature and D1 as the ultrasonic-assisted enzymolysis time. Therefore, the optimal process for the ultrasonic-assisted enzymolysis extraction of FPMB determined by orthogonal experiments was A3B1C3D1; namely, the cellulase concentration was 2.0%, the temperature of enzymolysis was 40 \u00b0C, the ultrasonic power was 300 W, and the time of ultrasonic-assisted enzymolysis hydrolysis was 30 min.As shown in Prunus Mume blossoms may lead to different types and mass percentage of FPMB. The separation and identification of these flavonoids are difficult and must be improved in the future.FPMB is a kind of new flavonoid. There were several peaks in the HPLC chromatogram of the FPMB samples. The identification of flavonoid types in FPMB should be made using more specific techniques. In addition, the different habitats of 2O2. The survival rate of cardiomyocytes in the model group was significantly lower than that in the control group (p < 0.05). However, the survival rate of cardiomyocytes in the three groups with FPMB was significantly higher than in the model group (p < 0.05). The survival rate of cardiomyocytes also increased significantly (p < 0.05) with the increase in FPMB dose. However, the survival rate of cardiomyocytes in the FPMB-H group did not return to the control group.2O2 significantly reduced the viability of cardiomyocytes (p < 0.05). After the exposure of FPMB, the cardiomyocyte activity increased significantly, trending with the increase in FPMB dose. The results mentioned above indicated that FPMB has a protective effect on the cardiomyocytes injured by H2O2.Cardiomyocytes are also known as cardiac fibers. All types of cardiomyocytes work together to maintain the complete function of the heart . The mea2O2 induction, the balance of ROS production and clearance in cardiomyocytes was disturbed. ROS accumulated in large quantities in the cardiomyocytes. The ROS content of the cardiomyocytes gradually recovered after exposure FPMB. In the FPMB-H group, ROS contents of cardiomyocytes were reduced to 30% compared with the H2O2 group. This result indicated that FPMB effectively reduces the ROS content of cardiomyocytes when consumed.2O2-induced oxidative stress injury of cardiomyocytes.ROS refers to a class of chemically active compounds containing oxygen groups with a strong oxidation ability that play an important role in cell signaling and homeostasis . Under np < 0.05). The activities of caspase-3, caspase-8, and caspase-9 in the FPMB-L group were not significantly different from those in the model group. This result showed that FPMB had no significant effect on the activities of cardiomyocytes caspase-3, caspase-8, and caspase-9 at this concentration. The activity of cardiomyocytes caspase-3, caspase-8, and caspase-9 was significantly lower than the model group (p < 0.05) when the concentration of FPMB increased; thus, a higher concentration of FPMB has a protective effect on injured cardiomyocytes induced by H2O2.In 1994, Prins et al. found th2O2 model group reached 61.23%. 
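The orthogonal design referred to above can be written out explicitly. The sketch below pairs a standard nine-run, four-factor, three-level orthogonal array with the factor levels stated in the text; the assignment of factors to columns and the ordering of levels within each factor are assumptions for illustration, not necessarily the authors' exact coding.

```python
# Standard L9(3^4) orthogonal array: rows are runs, columns are factors, entries are levels 1-3.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Factor levels as given in the text (level ordering assumed).
levels = {
    "cellulase (%)":      [1.5, 2.0, 2.5],
    "temperature (degC)": [35, 40, 45],
    "power (W)":          [250, 300, 350],
    "time (min)":         [30, 40, 50],
}

factors = list(levels)
for run, row in enumerate(L9, start=1):
    settings = {f: levels[f][lvl - 1] for f, lvl in zip(factors, row)}
    print(f"run {run}: {settings}")
```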
Therefore, the oxidative damage model induced by H2O2 was successful. The apoptosis rate of cardiomyocytes decreased significantly after being treated with different concentrations of FPMB. The cardiomyocytes apoptosis rate decreased by 57% in the FPMB-H group compared with the H2O2 model group.p < 0.05) compared with the H2O2 model group. The apoptosis rate of cardiomyocytes was close to that of the control group with the increase in FPMB dose. These results showed that FPMB has a restorative effect on the apoptosis of cardiomyocytes.Apoptosis is a programmed and active mode of death regulated by genes under a normal physiological or pathological environment . When exPrunus mume blossoms were purchased from Zhongshan Pharmacy in Wuhu, Anhui Province, China. The clean grade male mice (12 weeks old) were purchased from the Changzhou Cavins Laboratory Animal Co., Ltd. . Acidic cellulase was purchased from Anhui Yinqiao Biotechnology Co., Ltd. . Caspase-3, caspase-8, and caspase-9 kits were provided by Seymour Fly Biochemistry Products Co., Ltd. . Hydrogen peroxide (H2O2), dimethyl sulfoxide (DMSO), Dulbecco\u2032s modified eagle medium (DMEM), 3--2,5-diphenyltetrazolium bromide (MTT), and fluorescent dye 2\u2032,7\u2032-dichlorodihydrofluorescein diacetate (DCFH-DA) were purchased from Bomer Biotechnology Co., Ltd. . All other reagents were analytically pure.Prunus mume blossom was dried using a microwave and ground to powder. Then, 10.0 g of Prunus mume blossom powder was accurately weighed for each group. Distilled water (200 mL) was added to each group. The FPMB was extracted with different cellulase mass percentages, enzymolysis temperatures, ultrasonic power, and ultrasonic enzymolysis time. The extract was vacuum lyophilized into powder. The freeze-dried powder was then configured into solutions of 10, 20, and 40 \u00b5g/mL for the cardiomyocytes protective effect test.The 2 = 0.9928), where y represents the OD value and X represents the concentration of rutin standard solution (mg/mL).The standard curve was drawn according to the method of Chen et al. . When thPrunus mume blossom.In this test, sodium nitrite-aluminum nitrate colorimetry measured the flavonoids content. The sample liquid (5 mL) was absorbed by a pipette in each group, added into a 50 mL volumetric flask treated according to the above method, and stabilized at 50 mL with 75% ethanol. The absorbance of each test tube was measured at a wavelength of 510 nm. The content of flavonoids can be obtained from the regression equation .(1)ExtrPrunus mume blossom powder and 200 mL of deionized water. The single-factor test was conducted using cellulase mass percentage , enzymolysis temperatures , ultrasonic power , and ultrasonic enzymolysis time .Each group of single-factor experiments used 10 g of 9(34) orthogonal experiment was designed with the extraction rate of FPMB powder as the index to optimize the technological conditions of ultrasonic-assisted enzymatic extraction.According to the results of the single-factor experiment, three levels were designed for each factor, including cellulase concentration, enzymolysis temperature, ultrasonic power, and ultrasonic enzymolysis time. An LThe FPMB lyophilized powder (5 mg) was accurately weighed. A sample solution of 1.0 mg/mL was prepared from the lyophilized powder of FPMB in a 10 mL volumetric flask. The sample solution and standard solution were diluted with 30% methanol 5 times. The diluted solution was filtered with an organic membrane of 0.5 \u00b5m. 
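The flavonoid quantification and the extraction rate of Eq. (1) reduce to simple arithmetic: the absorbance at 510 nm is converted to a rutin-equivalent concentration through the standard curve (reported above with R2 = 0.9928; the fitted coefficients are not reproduced in the text), corrected for the 5 mL to 50 mL dilution, and the recovered flavonoid mass is expressed relative to the 10 g of blossom powder extracted in 200 mL of water. The slope and intercept in the sketch below are placeholders, and the extraction-rate definition is the usual mass-ratio form assumed from context.

```python
# Hypothetical standard-curve coefficients (absorbance = SLOPE * C_rutin + INTERCEPT);
# the real values come from the rutin calibration described in the text.
SLOPE, INTERCEPT = 8.5, 0.012            # assumed, for illustration only

def flavonoid_conc_mg_per_ml(od_510nm, dilution_factor=10.0):
    """Concentration in the extract, back-calculated from the 510 nm absorbance.
    The default dilution factor reflects the 5 mL -> 50 mL step described above."""
    c_measured = (od_510nm - INTERCEPT) / SLOPE
    return c_measured * dilution_factor

def extraction_rate_percent(od_510nm, extract_volume_ml=200.0, powder_mass_g=10.0):
    """Assumed form of Eq. (1): flavonoid mass in the extract / mass of blossom powder."""
    conc = flavonoid_conc_mg_per_ml(od_510nm)
    flavonoid_mass_g = conc * extract_volume_ml / 1000.0   # mg/mL * mL -> g
    return 100.0 * flavonoid_mass_g / powder_mass_g

print(round(extraction_rate_percent(0.45), 2))
```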
The filtered solution was then analyzed by HPLC . The type of chromatographic column was ALPHA1-2 LD plus . The chromatographic conditions included the mobile phase: methanol (A), ultra-pure water (B), and 1% acetic acid solution (C): flow rate: 1.0 mL/min; injection volume: 10 \u03bcL; elution conditions: 0\u201330 min, 30\u201380% A, 1% acetic acid solution (C) always maintained at 10%; detection wavelength: 350 nm; and column temperature: 30 \u00b0C.3 in a petri dish. Afterward, 1 mL of trypsin (0.1%) was poured into the petri dish and digested for 10 min in a 37 \u00b0C water bath. After filtration with 200 mesh sieves, the mixture was centrifuged at 424\u00d7 g for 8 min. The cardiomyocytes precipitation was added to DMEM and cultured in a CO2 incubator for 2 h. The supernatant was obtained for counting. The cell density was diluted to 2 \u00d7 105 mL\u22121. The cardiomyocyte suspension was inoculated on 96-well plates (100 \u00b5L per well) in a 37 \u00b0C CO2 incubator, and the culture medium was changed every 24 h. The cardiomyocytes in suitable growth conditions were selected for tests after 72 h.Under aseptic conditions, mice were euthanized, soaked in 75% ethanol for 30 s, their chest opened with a scalpel, and their ventricles removed and rinsed with phosphate-buffered saline (PBS) 2\u20133 times . The ven2O2 model group, low-dose group (FPMB-L), medium-dose group (FPMB-M), and high-dose group (FPMB-H), with 10 replicates for each group. The control group was supplemented with 100 \u00b5L DMEM. The H2O2 model group was supplemented with 100 \u00b5L H2O2 (200 \u00b5mol/L). The FPMB-L, FPMB-M, and FPMB-H groups were supplemented with 100 \u00b5L DMEM medium and H2O2 (200 \u00b5mol/L). After incubation for 12 h, cardiomyocytes\u2019 activity, ROS content, and the concentration of proteases caspase-3, caspase-8, and caspase-9 were measured.The well-grown cardiomyocytes were randomly divided into the control group, H2 incubator at 37 \u00b0C for 4 h, 100 \u00b5L DMSO was added to each well. The 96-well plates were shaken for 15 min. The optical density (OD) of cardiomyocyte samples was measured at 570 nm [Cardiomyocytes were inoculated into 96-well plates at 100 \u00b5L per well, and 20 \u00b5L MTT (5 mg/mL) was poured into each well. After culturing in a COt 570 nm . Cardiom2) for 20 min. After cultivation, the cardiomyocytes were washed with DMEM (excluding FBS) 3 times to fully remove the DCFH-DA that did not enter the cells. Flow cytometry was used to detect ROS in the cardiomyocytes [The cell culture medium was removed from the 96-well plates. Then, 500 \u00b5L DCFH-DA (10 \u00b5mol/mL) was added to each well. The 96-well plates were placed in an incubator at 37 \u00b0C (containing 5% COmyocytes .g for 10 min after trypsin digestion (0.1%). Cell lysis buffer (30 \u00b5L) was then added to each well and placed in ice water for 3 min. The mixtures were centrifuged at 6797\u00d7 g for 10 min. The supernatant was collected and operated according to the kit\u2019s instructions to determine the activities of caspase-3, caspase-8, and caspase-9 in cardiomyocytes [405 nm/OD595 nm.The cardiomyocytes were centrifuged at 106\u00d7 myocytes . Caspaseg for 8 min. Cardiomyocyte precipitation was then configured with PBS to form mice cardiomyocytes suspension at a concentration of 2 \u00d7 105 mL\u22121. The cardiomyocytes suspension (100 \u00b5L) was added to a Falcon test tube. 
Then, 500 \u00b5L of binding buffer, 5 \u00b5L of annexin V-FITC, and 5 \u00b5L of propidium iodide (PI) were added to the tube. The Falcon test tubes were mixed and stored in a dark place at 25 \u00b0C for 10 min. The cardiomyocyte apoptosis of mice was determined with 400 \u00b5L of PBS buffer in Falcon test tubes using flow cytometry.Flow cytometry was used to measure the cardiomyocyte apoptosis of mice . The carp < 0.05).The software SPSS 20.0 was used for the ANOVA analysis of the samples in our study. Significant differences were determined by Duncan\u2019s multiple comparison test (2O2-induced injured cardiomyocytes. In the ultrasonic-assisted enzymolysis extraction of the FPMB test, the influence degree of every single factor on the extraction rate was successive: cellulase concentration > ultrasonic power > enzymolysis temperature = ultrasonic-assisted enzymolysis time. In the range of every single factor, the concentration of cellulase and ultrasonic power significantly influenced the extraction rate of FPMB. The effects of enzymolysis temperature and ultrasonic-assisted enzymolysis time on the extraction rate of FPMB were not significant.In this study, the ultrasonic-assisted enzymolysis extraction of FPMB was investigated along with the protective effect of FPMB on H2O2.By comparing the peak retention time of the FPMB sample with that of the mixed standard HPLC chromatograms, we found that the main flavonoids in FPMB are rutin, cynarin, and luteolin. The balance of ROS production is disrupted when cardiomyocytes are subjected to emergency oxidative damage. The proteases caspase-3, caspase-8, and caspase-9 are closely related to apoptosis and produced in large quantities. The activity of mice cardiomyocytes increased significantly after FPMB exposure, whereas the ROS content and caspase-3, caspase-8, and caspase-9 activities of mice cardiomyocytes were significantly reduced. The apoptosis rate of mice cardiomyocytes also decreased gradually; however, with the increase in FPMB dose, the apoptosis rate for cardiomyocytes was close to that for the control group. These experimental results showed that FPMB has a protective effect on the injured cardiomyocytes induced by H"} +{"text": "Background: People experiencing homelessness and mental illness have poorer service engagement and health-related outcomes compared to the general population. Financial incentives have been associated with increased service engagement, but evidence of effectiveness is limited. This protocol evaluates the acceptability and impact of financial incentives on service engagement among adults experiencing homelessness and mental illness in Toronto, Canada.Methods: This study protocol uses a pragmatic field trial design and mixed methods . Study participants were recruited from a brief multidisciplinary case management program for adults experiencing homelessness and mental illness following hospital discharge, and were randomly assigned to usual care or a financial incentives arm offering $20 for each week they attended meetings with a program provider. The primary outcome of effectiveness is service engagement, measured by the count of participant-provider health-care contacts over the 6-month period post-randomization. Secondary health, health service use, quality of life, and housing outcomes were measured at baseline and at 6-month follow-up. Quantitative data will be analyzed using descriptive statistics and inferential modeling including Poisson regression and generalized estimating equations. 
A subset of study participants and other key informants participated in interviews, and program staff in focus groups, to explore experiences with and perspectives regarding financial incentives. Qualitative data will be rigorously coded and thematically analyzed.Conclusions: Findings from this study will contribute high quality evidence to an underdeveloped literature base on the effectiveness and acceptability of financial incentives to improve service engagement and health-related outcomes among adults experiencing homelessness and mental illness. People experiencing homelessness and mental illness have significantly poorer mental and physical health and quality of life relative to the general population \u20133. In adFor adults experiencing homelessness and mental illness, the transition from hospital to community settings has been associated with an increased risk of homelessness and worsAn individual's decision to engage in and adhere to treatment is influenced by a wide range of factors. Behavioral economics principles in health care suggest that individuals have a tendency to make health decisions that are biased toward the present and immediate rewards vs. future outcomes and delayed gratification , 17. TheFinancial incentives have indeed been shown to influence health behaviors for a range of health conditions, including increasing smoking cessation rates , 26, weiAlthough existing literature has highlighted that financial incentives may be an effective service engagement strategy for underserved populations, particularly when implemented in the context of a short-term intervention , 33, 38,To address these knowledge gaps, this article describes an evaluation protocol for a study using mixed methods to investigate the effectiveness of and experiences with using financial incentives to increase engagement of homeless adults with mental illness with a brief case management intervention following hospital discharge in Toronto, Canada. In addition to service engagement, using a pragmatic field trial design, this study will investigate the impact of financial incentives on secondary health, health service use, quality of life, and housing outcomes. Qualitative data, exploring the acceptability, and perceived positive and negative impacts of financial incentives, will be integrated in study findings to support a comprehensive and nuanced understanding of the potential role of financial incentives in supporting service engagement in this underserved population.The Coordinated Access to Care for the Homeless (CATCH) program is a multidisciplinary brief case management program for individuals experiencing homelessness and mental illness being discharged from hospital in Toronto, Canada. Informed by the Critical Time Intervention model, the program was launched in 2010 and has been described extensively elsewhere , 45\u201348. ClinicalTrials.gov (Identifier: NCT03770221).Using a community-based, participatory research framework, CATCH-Financial Incentives (CATCH-FI) is a pragmatic field trial using mixed methods to evaluate the impact and acceptability of financial incentives in promoting service engagement of homeless adults with mental illness following hospital discharge. This study was launched in December 2018. 
Study recruitment is complete; data collection, however, is ongoing, and the trial is registered with ClinicalTrials.gov (NCT03770221). CATCH program participants enrolled in the CATCH-FI trial and randomly assigned to the intervention arm receive $20 for every week they remain meaningfully engaged with program service providers over 6 months of follow-up, or until they are successfully transitioned to longer-term supports. Study participants can earn up to $80 CAN per month by attending meetings with their program service provider by phone, text, email, or in-person, as per their care plan. Participants randomly assigned to the control arm receive usual CATCH care, which does not include a financial incentive for attending meetings with their program service provider.The impact of financial incentives on participants' level of service engagement is measured by program attendance, evaluated as the number of \u201chealth-care contacts\u201d a participant makes with CATCH service providers over a 6-month follow-up period. Of note, participant contacts with service providers are counted as \u201chealth-care contacts\u201d if they relate to participants' care plans. As a low-barrier program, CATCH service providers meet program participants in person, via phone, email, or texts, as per program participants' needs and preferences. Social or trivial contacts of program participants with service providers are not considered \u201chealth-care contacts\u201d and are not being measured or documented in program records or reports to program funders.Secondary health, health service use, quality of life, and housing outcomes (secondary outcomes) are also being collected over the study period. The study hypothesis is that participants receiving financial incentives will have higher levels of service engagement and consequently better health, health service use, quality of life, and housing outcomes compared to participants receiving usual care.This study additionally uses qualitative methods to investigate stakeholder perspectives and experiences using financial incentives. In-depth qualitative interviews and focus groups were conducted with study participants, program service providers, and other key informants to explore experiences of and perspectives on the acceptability of using financial incentives to support service engagement in this population.The research questions guiding this study are:What are the levels of service engagement and health, health service use, quality of life, and housing outcomes for homeless adults with mental health needs receiving financial incentives vs. usual care over a 6-month period following hospital discharge?What are key stakeholder perspectives and experiences using financial incentives, including their acceptability, feasibility, as well as potential drawbacks?Within a community-based, participatory research framework, a convergent mixed methods design is used to evaluate experiences with, perspectives on, and impact of financial incentives on service engagement of an underserved population. With the qualitative sample drawn from the larger study sample, and inclusive of additional key stakeholders, qualitative and quantitative data collection take place in parallel , 50. The CATCH program receives 450\u2013600 referrals of homeless adults per year, from hospitals or community agencies, prior to or shortly after hospital discharge for a mental health condition. Study participants were recruited among successive new CATCH program participants. 
Referrals to the study were made by CATCH staff during program intake meetings. Program participants expressing an interest in receiving information about the study were contacted by research staff to confirm interest, eligibility, to obtain informed consent, and to conduct a baseline assessment.Study participants meet both CATCH program and CATCH-FI study eligibility criteria. Program eligibility criteria include: (1) current homelessness (defined as having no fixed place to stay for at least the past seven nights with little likelihood of finding a place in the upcoming month) or precarious housing ; (2) service provider-determined unmet mental health needs; (3) service user-determined unmet support needs; and (4) age 18 years or older. Excluded from the program are individuals with recent aggressive behavior requiring a higher intensity of support, or individuals whose illness severity necessitates residential care. To be eligible for the current study, in addition to meeting program eligibility requirements, participants must have been new referrals to CATCH, recently admitted or readmitted to hospital services, and have had at least one contact with the CATCH team.Participants were randomized following the baseline interview using block randomization , which rA subset of participants completing qualitative interviews were purposefully recruited from the larger randomized sample to participate in in-depth semi-structured qualitative interviews. Qualitative study participants were purposefully selected to reflect a diversity of perspectives based on gender, ethnicity, study arm, and service engagement. Study staff invited individuals with the ability to reflect on their experiences, a strategy that has previously been used with success by our team in studies of adults experiencing homelessness and mental illness , 45\u201348. n = 22), service providers (n = 12), and other key informants (n = 6) was estimated a priori as adequate to achieve saturation of qualitative findings and triangulation of data sources.Previous research by our group estimateBaseline data collection occurred between November 2018 and September 2020; follow-up data collection will be completed in July 2021. All data collection is conducted by trained research assistants from the Survey Research Unit at the Centre for Urban Health Solutions at St. Michael's Hospital. Quantitative surveys lasting 1\u20132 hwere administered at baseline (up to within 6 weeks of enrollment) and are offered at 6 months post-enrollment (between 6 weeks prior to and up to 16 weeks after that) to all study participants. Survey data are collected using SNAP professional software, and data are held on an internally owned and operated secure sever. All program participants received honoraria paid in-person, by check or email money transfer for each completed interview ($30 for the baseline interview and $60 for the 6-month follow-up interview), in addition to public transportation fare.Qualitative data collection occurred between April 2019 and December 2020. In-depth, 45\u201360-min semi-structured interviews were conducted during the study period with program participants and other key informants, and focus groups were conducted with CATCH service providers. Qualitative interview service user participants received an honorarium of $30 and public transportation fare.This study uses several evidence-based follow-up and study retention strategies for this population , 58. 
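Block randomization, mentioned above, keeps the two arms balanced as enrolment proceeds by allocating within small shuffled blocks. The sketch below illustrates the idea only; the block size of 4, the 1:1 allocation ratio and the fixed seed are assumptions, since the protocol does not state them.

```python
import random

def block_randomization(n_participants, block_size=4, arms=("CATCH-FI", "CATCH-UC"), seed=42):
    """Permuted-block allocation: each block contains an equal number of each arm."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)               # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomization(10))
```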
To A schedule of enrolment, intervention, and assessment is detailed in The primary outcome is service engagement, or program attendance, measured as a count of health-care contacts with CATCH service providers per month over the 6-month period (or until discharge from the program). CATCH service providers record program participant attendance in health care appointments and care planning meetings lasting at least 5 min in participant health records. Eligible health-care contacts include in-person and virtual appointments, as well as care planning conversations through email or text. This definition is consistent with the current program practices and ensures only eligible health-care contacts are recorded and reported to the funding agency. Data on program attendance will be captured at study end through chart reviews by a blinded study staff.Demographic data and other participant characteristics including residential status and income sources were collected by self-report at baseline. Secondary outcome measures of mental and physical health status, health service use, quality of life, and housing are being collected at baseline and 6 months and a measure of perceived therapeutic working alliance is collected at 6 months. Health service use will also be evaluated using data linkage of administrative health records, conducted at ICES, which holds population-based health and health service use information at the patient level for all Ontarians with health insurance.See Interviews and focus groups explored the experiences of using financial incentives from both program participant and provider perspectives. Topic guides were developed and iteratively refined by the PI and study staff, with input from people with lived experience of homelessness and mental health challenges. Topics included perceptions of facilitators and barriers to service engagement during care transitions; factors affecting health decision-making in this population; and the perceived risks, barriers, and expected or experienced impact of financial incentives during care transitions. Given ethical concerns and underdeveloped literature on the use of financial incentives, topic guides specifically probed stakeholders to comment on the acceptability of using financial incentives, facilitators of ethical implementation, and potential negative or unintended consequences.Interviewers' extensive experience with the study population, rigorous interviewer training for this study, and early and ongoing review of transcripts by the PI and study staff helped to ensure consistency across interviews. Investigator triangulation and member checking will help to validate the findings.Exploratory analyses will calculate descriptive statistics , construct graphs , and estimate correlations between selected participants' characteristics and longitudinal outcomes.Since program duration is customized for each participant, and may last between 1 and 6 months, we will calculate participants' person-months of program participation. This will allow us to estimate the rate ratio comparing the intervention and usual care groups with respect to the number of contacts with CATCH service providers per month. Therefore, for each participant, the total number of months in the program before discharge and the total number of contacts over the number of months in the program will be calculated. A Poisson regression model (PROC GENMOD) with total contacts as the dependent variable, group (CATCH-FI vs. 
CATCH-UC) as the covariate and an offset equal to the log (number of months spent on the program) will estimate the rate ratio and 95% confidence intervals, and the mean number of contacts per person-months and 95% confidence intervals in each group.For continuous outcomes , we will define change from baseline to 6-month follow-up as scores at 6 months minus scores at baseline. We will conduct analyses of covariance (ANCOVA) to compare changes from baseline between CATCH-FI and CATCH-UC, adjusting for baseline scores as a covariate.For count outcomes we will model the baseline and 6-month outcomes using generalized estimating equations (GEE) assuming the Poisson distribution or the negative binomial distribution, if over-dispersion is suggested by the data. The models will include the main effects of group (CATCH-FI vs. CATCH-UC) and time (6 months vs. baseline), and the interaction of group by time. A significant interaction will indicate that change from baseline is different between the groups. Rate ratios and 95% confidence intervals will be estimated.The analysis of administrative data is similar to that of count outcomes, except that the period of consideration will be 12 months instead of 6 months pre and post-randomization.For analyzing the number of days stably housed in the past 6 months, we will consider GEE with a Poisson or negative binomial distribution. The model will include the main effects of group and time, an interaction between group and time, selected covariates, and an offset represented by the natural log of residence days accounted during the 6 months interval. Rate ratios and 95% confidence intervals will be estimated to compare the rate of days stably housed per person-months.t-test or the Wilcoxon rank-sum test if extreme outliers are present. The correlation between WAI-SR and other outcomes at 6 months will be explored by estimating the Pearson or Spearman correlation coefficients, overall and by group.For the WAI-SR, evaluated at the 6-month follow-up only, total scale, and sub-scales scores will be calculated and compared between the groups using the two-sample p-value of 0.05 or less will indicate statistical significance. There are no plans for interim analyses.SAS 9.4 will be used for all analyses and all analyses will use the intention-to-treat principle. We will consider multiple imputation to handle missing data. All statistical tests will be two-sided and a All interviews and focus groups were audio-recorded and transcribed verbatim. Grounded theory , 77 and Coding will be completed by a team of coders including the PI, study co-Investigators, and study staff, using a structured approach to maximize rigor \u201381. FirsAll study participants have access to the CATCH program throughout the trial period. Participants in the intervention group experience the direct benefit of receiving a financial incentive. Participants in both groups may indirectly benefit from sharing their experiences with study staff and by contributing to knowledge creation that may inform strategies to more effectively support this population. Involvement in this intervention poses minimal risk to the safety of participants, and no anticipated harms.A key criticism of the use of financial incentives is the risk of coercion. The study strives to minimize this risk directly, by recruiting participants from a program providing comprehensive support to homeless adults with mental health challenges, irrespective of study participation. 
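The primary analysis described above is specified in SAS (PROC GENMOD); purely as an illustrative analogue, the sketch below fits the same kind of model in Python with statsmodels: a Poisson regression of total contacts on study arm with log person-months as an offset, so that the exponentiated coefficient is the rate ratio of contacts per month between CATCH-FI and CATCH-UC. The data frame is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# toy data: total contacts, months in the program, and arm indicator (1 = CATCH-FI)
df = pd.DataFrame({
    "contacts": [10, 4, 14, 6, 9, 3, 12, 5],
    "months":   [5, 4, 6, 3, 6, 4, 5, 3],
    "fi_arm":   [1, 0, 1, 0, 1, 0, 1, 0],
})

X = sm.add_constant(df[["fi_arm"]])
model = sm.GLM(df["contacts"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["months"]))     # log person-months as the offset
fit = model.fit()

rate_ratio = np.exp(fit.params["fi_arm"])       # contacts per month, FI vs usual care
ci_low, ci_high = np.exp(fit.conf_int().loc["fi_arm"])
print(round(rate_ratio, 2), (round(ci_low, 2), round(ci_high, 2)))

# For the repeated-measures count outcomes, the protocol specifies GEE; statsmodels
# offers sm.GEE with Poisson or negative binomial families for that purpose.
```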
In addition, the study uses a modest financial incentive and a rigorous informed consent process. Furthermore, the study strives to minimize the risk of coercion indirectly, by aiming to better understand this potential risk, how to minimize it in practice and how to identify strategies to better engage this underserved population.It is possible that some participants may find certain survey or interview questions uncomfortable. Participation however is voluntary and individuals may choose not to answer or withdraw from the study at any point in time without penalty. All interventions involving financial incentives include the risk of creating a differential effect on those with varying levels of financial need, but this study's exclusive focus on people experiencing homelessness and mental health challenges minimizes this risk.ClinicalTrials.gov on December 10, 2018 (NCT number: NCT03770221). Research Ethics Board (REB)-approved protocol amendments will be posted on the site. The PI and study team will meet regularly to review data, data confidentiality, any adverse events, adherence to protocol design, recruitment and retention. In addition, the study team meet will meet regularly throughout the trial period, and collect and report to the REB any reported adverse events or other unintended effects of the intervention as per institutional policies. Important protocol modifications will also be reported to the REB and trial registry. This pragmatic field trial is subject to audits by the host institution.The study protocol was registered with The PI and study team bring extensive experience in the design, implementation, and evaluation of interventions for the target population , 45\u201348, This article describes a pragmatic field trial protocol aiming to evaluate the acceptability and impact of financial incentives on service engagement of homeless adults with mental health challenges following hospital discharge. Service engagement of this population in traditional health services remains low, given their multiple competing priorities of securing shelter, basic income, access to health and social services, and other needed supports. A pragmatic randomized field trial and in-depth qualitative interviews and focus groups will contribute high quality evidence to an underdeveloped literature on the effectiveness and acceptability of financial incentives in supporting service engagement of this population, at high risk of poor outcomes.Using a participatory framework, the study aims to include the voices of all relevant stakeholders in data collection, analysis and interpretation, including affected individuals, direct service providers, program administrators, and other key informants. The protocol is strengthened by the use of mixed methods, to provide a nuanced understanding of the acceptability, risks, and impact of financial incentives, including ethical and pragmatic considerations associated with their use. Ultimately, the study aims to inform local health solutions to supporting service engagement of this population.Results and lessons learned will be useful to other populations or jurisdictions interested in implementing financial incentives or seeking to improve service engagement of underserved populations, a priority in many settings aspiring to promote health equity. 
Future research should investigate the role of additional strategies to promoting service engagement, including flexible drop-in appointments, using peers, and proactive outreach, in efforts to understand what service engagement strategies work best, for who, in diverse service contexts.Promoting service engagement of homeless adults with mental illness following hospital discharge is urgently needed. Study findings will contribute to growing literature on strategies to support service engagement and improve health outcomes among disadvantaged and underserved populations.This study protocol was approved by the Unity Health Toronto Research Ethics Board and the Centre for Addiction and Mental Health Research Ethics Board . All participants provided either written or verbal informed consent to participate. To facilitate and confirm participants' understanding, access to a professional interpreter and a capacity-to-consent questionnaire were used as needed.Trial findings will be communicated to study participants, funders and research audiences through briefing notes, presentations, and publications. There are no publication restrictions. Authorship membership is limited to study co-investigators and ICJME criteria for authorship will apply. Study datasets will be available through the corresponding author, as per Unity Health Institutional policies.NR led drafting of this manuscript. VS is the study's Principal Investigator and supervised the drafting of this manuscript. RN participated in the study design and implementation, led data analysis, and participated in the editing of this manuscript. SH, AD, and NK participated in study design and implementation and the editing of this manuscript. RW participated in data analysis and editing of this manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The experimental solubility data of SIM was regressed using van\u2019t Hoff and Apelblat models. The solubility of SIM (mole fraction) was recorded highest in M59 (1.54 x 10\u22122) followed by M52 (6.56 x 10\u22123), B58 (5.52 x 10\u22123), B35 (3.97 x 10\u22123), T80 (1.68 x 10\u22123), T20 (1.16 x 10\u22123) [the concentration of surfactants was 20 mM in H2O in all cases] and H2O (1.94 x 10\u22126) at T = 320.2 K. The same results were also recorded at each temperature and each micellar concentration of T80, T20, M52, M59, B35 and B58. \u201cApparent thermodynamic analysis\u201d showed endothermic and entropy-driven dissolution/solubilization of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58.The aim of this work was to solubilize simvastatin (SIM) using different micellar solutions of various non-ionic surfactants such as Tween-80 (T80), Tween-20 (T20), Myrj-52 (M52), Myrj-59 (M59), Brij-35 (B35) and Brij-58 (B58). The solubility of SIM in water (H The same results were also obtained at each temperature and four different micellar solutions of T80, T20, M52, M59, B35 and B58. 
The xe values of SIM were much higher in M59 in comparison with H2O. The maximum xe values of SIM in M59 might be possible due to similar polarity of SIM and M59. Due to the highest solubility of SIM in 20 mM M59, it can be used as a solubilizer in liquid formulation design of SIM.The influence of molar concentrations of various non-ionic surfactants on logarithmic solubilities of SIM at three different temperatures is presented in \u03b4) for SIM, H2O, T80, T20, M52, M59, B35 and B58 was obtained using Eq \u201d of each component using \u201cHSPiP software (version 4.1.07)\u201d The SMILES of each compound is easily available in the compound database. The calculated values of \u03b4, \u03b4d, \u03b4p and \u03b4h are presented in \u03b4 for SIM was obtained as 18.70 MPa1/2 which suggesting that SIM had lower polarity. The \u03b4 value for three different non-ionic surfactants i.e. M52, M59 and B58 was recorded as 18.70 MPa1/2. However, the value of \u03b4 for T80, T20, B35 and H2O was obtained as 21.30, 22.10, 18.90 and 47.80 MPa1/2, respectively. The xe values of SIM were obtained higher in M59, M52 and B35 which was possible due to same \u03b4 values for SIM, M59, M52 and B58 of H2O. Overall, the results of Hansen solubility parameters suggested good agreement of experimental solubility data of SIM with their polarities/solubility parameters.In which, the symbol and B58 . HoweverSc) using Eq uSt is the measured SIM solubility in the presence of surfactants, Sw is the intrinsic water solubility of SIM, Cs is the molar surfactant concentration and CMC is the critical micelle concentration of surfactant. The values of solubilization capacity for SIM in different micellar solutions of various non-ionic surfactants were determined at \u201cT = 300.2 K\u201d and results are presented in x = 174.0) was found in 10 mM micellar solution of M52.In which, xidl) was obtained using Eq R represents the universal gas constant and \u0394Cp represents the differential molar heat capacity of solute/SIM [In which, lute/SIM \u201345. OtheTfus, \u0394Hfus and \u0394Cp for solute/SIM were obtained as 412.95 K, 28.38 kJ mol-1 and 68.72 J mol-1 K-1, respectively from DSC/thermal analysis of SIM. The xidl values for solute/SIM were obtained using Eq of T80, T20, M52, M59, B35 and B58 at each temperature investigated. Theoretical/ideal solubility of SIM was also recorded as increasing significantly with increase in temperature, suggesting the dissolution behavior of SIM was endothermic process [The values of xApl) of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 was calculated using of Eq and R2. RMSD values between experimental and Apelblat solubilities of SIM were obtained using Eq and logarithmic Apelblat solubilities (ln xApl) of SIM in H2O and 1 mM and 5 mM micellar solution of T80, T20, M52, M59, B35 and B58 against reciprocal of absolute temperature (1/T) is presented in In which, xe and ln xApl of SIM in H2O and 10 mM and 20 mM micellar solution of T80, T20, M52, M59, B35 and B58 against 1/T is presented in xe and ln xApl values of SIM in H2O and different micellar solutions of T80, T20, M52, M59, B35 and B58. The resulting data of this correlation/fitting are listed in RMSD values for SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were obtained as (0.16 to 5.84) %. An average RMSD for this correlation was found to be 0.60%. 
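As a small aside on the quantities defined in this passage, the ideal (theoretical) solubility computed from the fusion data of SIM and the molar solubilization capacity Sc can be illustrated with a short calculation. The sketch below is not the authors' code: it assumes the commonly used heat-capacity-corrected form of the ideal-solubility equation and the Sc definition quoted above, and it plugs in the DSC values reported for SIM in this passage.

```python
# Illustrative sketch only (not the authors' code): ideal solubility of SIM from
# its fusion data, and the molar solubilization capacity Sc defined above.
import numpy as np

R = 8.314          # J mol-1 K-1, universal gas constant
T_fus = 412.95     # K, melting temperature of SIM (DSC value quoted above)
dH_fus = 28.38e3   # J mol-1, fusion enthalpy of SIM
dCp = 68.72        # J mol-1 K-1, differential molar heat capacity

def x_ideal(T):
    """Ideal mole-fraction solubility, assuming the usual heat-capacity-corrected form."""
    term1 = -dH_fus * (T_fus - T) / (R * T_fus * T)
    term2 = (dCp / R) * ((T_fus - T) / T + np.log(T / T_fus))
    return np.exp(term1 + term2)

def solubilization_capacity(S_t, S_w, C_s, cmc):
    """Sc = (St - Sw) / (Cs - CMC): drug solubilized per mole of micellar surfactant."""
    return (S_t - S_w) / (C_s - cmc)

for T in (300.2, 310.2, 320.2):
    print(f"T = {T} K  ->  ideal x = {x_ideal(T):.3f}")
```

With these inputs the ideal solubility rises with temperature, consistent with the endothermic dissolution behaviour described above.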
The R2 values for SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were obtained in the range of 0.9957 to 0.9999. Taken together, the RMSD and R2 values suggested good correlation of the experimental solubility data of SIM with the Apelblat model. The experimental solubilities of SIM were also curve fitted with the van\u2019t Hoff solubilities (ln xvan\u2019t) of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58, obtained using the van\u2019t Hoff equation. An average RMSD for this correlation was found to be 0.78%. The R2 values for SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were recorded as 0.9944 to 1.0000. The RMSD and R2 values again suggested good correlation of the experimental data of SIM with the van\u2019t Hoff model. The apparent thermodynamic parameters of SIM dissolution/solubilization in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were determined by applying \u201capparent thermodynamic analysis\u201d to the solubilities (mole fraction) of SIM. Accordingly, three different thermodynamic parameters, namely the \u201capparent standard dissolution enthalpy (\u0394solH0), apparent standard Gibbs free energy (\u0394solG0) and apparent standard dissolution entropy (\u0394solS0)\u201d, were determined for SIM dissolution/solubilization using this analysis. The \u0394solH0 values for SIM dissolution/solubilization in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were determined at the mean harmonic temperature (Thm) by applying van\u2019t Hoff analysis and were obtained in the range of (11.62 to 36.64) kJ mol-1. The \u0394solH0 value for SIM dissolution was recorded highest in H2O (36.64 kJ mol-1). However, the lowest \u0394solH0 value (11.62 kJ mol-1) for SIM solubilization was obtained in the 20 mM micellar concentration of M59. Overall, low values of \u0394solH0 were obtained at each micellar concentration of M59 investigated. The average value of \u0394solH0 for SIM dissolution/solubilization was found to be 20.94 kJ mol-1 with an uncertainty of 0.30. The lowest \u0394solH0 value for SIM solubilization in the 20 mM micellar concentration of M59 is consistent with the highest solubility (mole fraction) of SIM in the 20 mM micellar concentration of M59, while the highest \u0394solH0 value for SIM dissolution in H2O was attributed to the lowest solubility of SIM in H2O. The \u0394solG0 values for SIM dissolution/solubilization in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 were recorded as (11.13 to 35.03) kJ mol-1. The \u0394solG0 value for SIM dissolution was also recorded highest in H2O (35.03 kJ mol-1). However, the lowest \u0394solG0 value (11.13 kJ mol-1) for SIM solubilization was obtained in the 20 mM micellar concentration of M59. Overall, low values of \u0394solG0 were also obtained at each micellar concentration of M59 investigated. The average value of \u0394solG0 for SIM dissolution/solubilization was found to be 18.68 kJ mol-1 with an uncertainty of 0.26. The comparatively low values of \u0394solH0 and \u0394solG0 obtained in the 20 mM micellar concentration of M59 indicate that minimum energy is required for the solubilization of SIM in M59. The results of the enthalpy and Gibbs free energy measurements were in accordance with the solubility data of SIM in H2O and various micellar solutions of the different non-ionic surfactants.
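To make the regression and the "apparent thermodynamic analysis" described above concrete, the sketch below shows one way to fit the van't Hoff and Apelblat models and to derive the apparent dissolution enthalpy, Gibbs energy, and entropy at the mean harmonic temperature. It is a generic illustration: apart from the 320.2 K value for 20 mM M59 quoted earlier, the solubility numbers are placeholders, and the authors' exact RMSD definition and regression procedure may differ.

```python
# Generic illustration (not the authors' code) of the van't Hoff / Apelblat
# fitting and the "apparent thermodynamic analysis" described above.
import numpy as np

T = np.array([300.2, 310.2, 320.2])        # K, study temperatures
x = np.array([9.0e-3, 1.2e-2, 1.54e-2])    # mole fraction; first two values are placeholders
lnx = np.log(x)

# Apelblat model: ln x = A + B/T + C ln T  (linear in A, B, C; with only three
# temperatures this three-parameter model fits the data exactly)
M_apl = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
A, B, C = np.linalg.lstsq(M_apl, lnx, rcond=None)[0]
print("Apelblat coefficients A, B, C:", A, B, C)

# van't Hoff model: ln x = a + b/T
M_vh = np.column_stack([np.ones_like(T), 1.0 / T])
a, b = np.linalg.lstsq(M_vh, lnx, rcond=None)[0]

def rmsd_percent(obs, calc):
    # one common definition of the percentage RMSD; the paper's may differ slightly
    return 100.0 * np.sqrt(np.mean(((obs - calc) / obs) ** 2))

print("van't Hoff RMSD %:", rmsd_percent(x, np.exp(M_vh @ np.array([a, b]))))

# Apparent thermodynamic parameters at the mean harmonic temperature Thm
R = 8.314
T_hm = len(T) / np.sum(1.0 / T)
slope, intercept = np.polyfit(1.0 / T - 1.0 / T_hm, lnx, 1)
dH = -R * slope                # apparent standard dissolution enthalpy, J mol-1
dG = -R * T_hm * intercept     # apparent standard Gibbs energy, J mol-1
dS = (dH - dG) / T_hm          # apparent standard dissolution entropy, J mol-1 K-1
print(f"dH = {dH/1e3:.1f} kJ/mol, dG = {dG/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With the placeholder data the enthalpy and Gibbs energy come out in the low tens of kJ per mole, the same order as the values reported above, which is the expected behaviour for an endothermic, entropy-driven dissolution.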
The positive values of apparent standard enthalpy (\u0394solH0 > 0) and apparent standard Gibbs energy (\u0394solG0 > 0) in all samples suggested an endothermic dissolution/solubilization behavior of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 [solH0 and \u0394solG0 might be due to the formation of new bond energy of attraction between the drug and solvent molecules [solS0 values for SIM dissolution/solubilization in H2O and different micellar solutions of T80, T20, M52, M59, B35 and B58 were also recorded as positive values in the range of (0.39 to 48.55) J mol-1 K-1. The average \u0394solS0 value for SIM dissolution/solubilization was recorded as 7.28 J mol-1 K-1 with uncertainty of 1.40. The positive \u0394solS0 values for SIM showed an entropy-driven dissolution/solubilization behavior of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 [2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 [The \u0394 and B58 , 51. Theolecules . The \u0394so11.62 to .64 kJ mo and B58 , 51.2O and various micellar solutions of T80, T20, M52, M59, B35 and B58 was determined at three different temperatures i.e. T = 300.2 K, 310.2 K and 320.2 K under atmospheric pressure. The results of DSC and PXRD analysis suggested crystalline nature of SIM before and after equilibrium. The solubilities (mole fraction) of SIM were regressed well with van\u2019t Hoff and Apelblat equations. With increase in temperature, the solubility of SIM was found to be enhanced significantly in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58. The solubility of SIM (mole fraction) was recorded highest in M59 (20 mM) followed by M52 (20 mM), B58 (20 mM), B35 (20 mM), T80 (20 mM), T20 (20 mM) and H2O at T = 320.2 K. The same results were also recorded at each temperature and four different micellar solutions of T80, T20, M52, M59, B35 and B58. The results of \u201capparent thermodynamic analysis\u201d showed an endothermic and entropy-driven dissolution/solubilization of SIM in H2O and various micellar solutions of T80, T20, M52, M59, B35 and B58. Overall, these results suggested that various micellar solution of non-ionic surfactants could be successfully used in solubilization of poorly water soluble drugs such as SIM.The objective of this work was to solubilize SIM using different micellar solutions of various non-ionic surfactants including T80, T20, M52, M59, B35 and B58. The solubility (mole fraction) of SIM in HS1 Fig(DOCX)Click here for additional data file.S2 Fig(DOCX)Click here for additional data file.S3 Fig(DOCX)Click here for additional data file.S4 Fig(DOCX)Click here for additional data file."} +{"text": "Context: A comprehensive plan has been launched by the Korean government to expand hospice and palliative care from hospital-based inpatient units to other services, such as palliative care at home, palliative consultation, and palliative care at a nursing home. Objective: To examine the illnesses and symptoms at the end of life associated with the place of death among older Korean adults. Methods: This secondary data analysis included a stratified random sample of 281 adults identified from the exit survey of the Korean Longitudinal Study of Aging aged \u226565 years and who died in 2017\u20132018. Results: Overall, 69% of the patients died at hospitals, 13% died at long-term care facilities (LTCF), and 18% died at home. 
In the multinomial logistic regression analysis adjusting for age, sex, and marital status, older adults who died in the hospital had higher odds (2.02\u20134.43 times) of having limitations in activities of daily living (ADL) as well as symptoms of anorexia, depression, weakness, dyspnea, and periodic confusion 1 month before death than those who died at home. Older adults who died in an LTCF were more likely to have limitations in ADL and instrumental ADL as well as a higher likelihood (2\u20135 times) of experiencing pain, anorexia, fatigue, depression, weakness, dyspnea, incontinence, periodic confusion, and loss of consciousness than those who died at home. Conclusion: Since the majority of subjects died either in a hospital or an LCTF, and this proportion is expected to increase, policy planning should focus on improving the palliative case in these settings. Future policies and clinical practices should consider the illness and symptoms of older patients at the end of life across different care settings. The number of older adults dying of chronic illnesses, such as cardiovascular diseases, chronic obstructive pulmonary disease, diabetes, cancer, and dementia across long-term disease trajectories, is increasing worldwide . AccordiAlthough most people wish to spend their last moments at home, hospitals remain a common place of death in many countries . Recent To develop successful palliative care services for older adults in diverse settings, it is essential to identify the common illnesses and symptoms experienced by these patients at the end of life in these care settings. In terms of place of care in Korea, hospitals have remained the most common place of death over the last several decades, with an increasing proportion of patients dying at long-term care facilities (LTCFs). Conversely, the proportion of patients dying at home is decreasing .According to a conceptual model by Gomes and Higgins in 2006, the place of death is significantly associated with illness factors, besides the patients\u2019 demographic and environmental factors ,10. The The existing literature describes inconsistent findings regarding the illness factors associated with the place of death. In a systematic review about terminally ill patients with cancer, death at home was associated with a long duration of disease and low functional status, while disease symptoms and pain were not associated with the place of death . AnotherAlthough most older adults suffer from multiple illnesses and symptoms at the end of life, there is limited data regarding the prevalence of symptoms and illnesses in older adults with multiple comorbidities at death in common care settings ,9. Previn = 55), those who received hospice care service at any time before death (n = 14), or were uncertain of this (n = 3), 281 decedents were included in this study. The decedents who received hospice care at any time before death were excluded from this analysis because hospice care service was discontinued at the time of death. The institutional review board of the Kyungpook National University approved this study .Secondary data analysis was performed using the data obtained from the Wave 7 exit survey of the Korean Longitudinal Study of Aging (KLoSA), collected after the death of the subjects in the study cohort; the subjects were selected via stratified random sampling from the national census and followed every 2 years since 2005 . 
The KLoWe assessed the decedents\u2019 place of death from the interviews with surrogate respondents with the question \u201cWhere did the decedent die?\u201d. The answers were categorized into their own homes or their offspring/relatives, hospitals, and LTCFs . The resDescriptive statistics were used to summarize the data. We used the chi-square test, analysis of variance, and multinomial logistic regression analysis to examine the factors associated with the place of death. The category of home death was used as a reference to identify the odds of hospital death and LTCF death. The demographic variables, which were significantly different among the places of death, such as age, gender, and marital status, were adjusted as covariates for the multinomial logistic regression. SPSS version 25.0 was used for analysis.Among the 281 decedents, 194 (69.0%) died in a hospital, 37 (13.2%) died at LTCFs, and 50 (17.8%) died at home. At the time of death, the decedents had a mean age of 81.61 \u00b1 7.73 years: 53.4% were female and 66.8% were married. About 68.7% had education less than elementary school and 89.3% had national health insurance. Before their death, 42.8% lived in metropolitan areas, 30.3% lived in rural areas, and 26.9% lived in small cities.p = 0.008). Males and married subjects had a significantly higher proportion of death at hospitals and a lower proportion of death at LTCFs than females and unmarried subjects, respectively . No other differences were observed in the demographic variables with respect to the place of death.On comparing the demographic characteristics, we observed that those who died in LTCFs had a significantly higher mean age than the patients who died in hospitals or at home of suffering from hypertension and experiencing ADL (1.55 times) and IADL limitations (1.40 times) than those who died at home. The presence of other illnesses, including diabetes mellitus, cancer, heart disease, osteoarthritis, dementia, cerebral infarction, lung disease, depression, and fracture, was not significantly associated with increased odds of death at LTCFs compared to deaths at home. Furthermore, older adults who died at LTCFs had higher odds (2.67\u20135.47 times) of having pain, anorexia, fatigue, depression, weakness/paralysis, dyspnea, incontinence, periodic confusion, and loss of consciousness.Korea has a high incidence of hospital deaths 69%) but a low proportion of home deaths (17.8%). These numbers are in stark contrast to the USA, where hospital deaths constitute only 19.8% of the total deaths, and 40.1% of the deaths occurred at home . These r9% but a However, those who died at a hospital had significantly greater odds (2 times) of experiencing anorexia, depression, weakness or paralysis, dyspnea, and confusion before death than those who died at home. Interestingly, we observed that there were no significant associations of the presence of cancer and experience of pain with deaths occurring at hospitals compared to deaths at home. Rather, older adults who died at hospitals had a higher likelihood of experiencing ADL limitation (1.14 times) and several debilitating symptoms (2.02\u20134.43 times) than those who died at home. This finding was not in line with the previous studies, which suggested that patients with advanced cancer who had higher functional status and greater pain intensity were more likely to be hospitalized at the end of life . 
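The regression described above (multinomial logistic regression with home death as the reference category, adjusted for age, sex, and marital status) was run in SPSS 25. Purely as an illustration of the model structure, an equivalent specification in Python/statsmodels might look like the sketch below; the data file and variable names are hypothetical.

```python
# Illustrative only: the study used SPSS 25. This sketch reproduces the model
# structure (multinomial logit, home death as reference, adjusted for age, sex,
# and marital status) with hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("klosa_exit_survey.csv")   # hypothetical file

# 0 = home (reference category), 1 = hospital, 2 = long-term care facility
df["place_code"] = df["place_of_death"].map({"home": 0, "hospital": 1, "ltcf": 2})

fit = smf.mnlogit("place_code ~ dyspnea + age + C(sex) + C(married)", data=df).fit()
print(fit.summary())
print(np.exp(fit.params))   # odds ratios of hospital/LTCF death vs. death at home
```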
Conversely, the proportion of deaths occurring in LTCFs in Korea (13.2%) was lower than that reported in the USA (24.9%). This study also highlights the notable illnesses and symptoms encountered at the end of life across different care settings in Korea, which must be accounted for during policymaking and program development for palliative care. The place of death has mostly been discussed as an indicator of the quality of end-of-life care, but some determinants of the place of care are not modifiable. This study had a few limitations. We used retrospective data from a secondary source reported by surrogate respondents without including various factors that affect the place of death. Moreover, since older adults under hospice services were excluded, we could not compare our study results to those of patients under palliative care services. Since data of the original study were collected from retrospective interviews with surrogates, the possibility of recall bias on symptoms experienced by older adults before death was not ruled out. Further prospective research is needed to identify comprehensive factors regarding the place of death, including patient, environmental, and healthcare service factors. Deaths among older adults in Korea occur mostly at hospitals, and the proportion of older adults approaching LTCFs at their end of life is expected to increase. Although the prevalence of illness among older adults did not differ significantly between those who died at hospitals and at LTCFs and those at home, the likelihoods of experiencing anorexia, depression, weakness/paralysis, dyspnea, and periodic confusion were higher among those who died at hospitals than those at home. Additionally, older adults who died at LTCFs were 2\u20135 times as likely to experience functional limitations, pain, anorexia, fatigue, depression, weakness/paralysis, dyspnea, incontinence, periodic confusion, and loss of consciousness at the end of life as those who died at home. This study indicates that illness factors should be accounted for while developing advanced palliative care facilities in diverse care settings for end-of-life care in Korea. The findings regarding the illnesses and symptoms in patients across different places of death suggest the need to create specifically tailored services that cater to the needs of older adults at the end of life according to their place of care. This will contribute to the quality of end-of-life care among older adults. Future policies and clinical practices should consider the illness and symptom burden in this population across different care settings."} +{"text": "Background: While airborne pollen is widely recognized as a seasonal cause of sneezing and itchy eyes, its effects on pulmonary function, cardiovascular health, sleep quality, and cognitive performance are less well-established. It is likely that the public health impact of pollen may increase in the future due to a higher population prevalence of pollen sensitization as well as earlier, longer, and more intense pollen seasons, trends attributed to climate change. The effects of pollen on health outcomes have previously been studied through cross-sectional design or at two time points, namely preceding and within the period of pollen exposure. We are not aware of any observational study in adults that has analyzed the dose-response relationship between daily ambient pollen concentration and cardiovascular, pulmonary, cognitive, sleep, or quality of life outcomes.
Many studies have relied on self-reported pollen allergy status rather than objectively confirming pollen sensitization. In addition, many studies lacked statistical power due to small sample sizes or were highly restrictive with their inclusion criteria, making the findings less transferable to the \u201creal world.\u201dMethods: The EPOCHAL study is an observational panel study which aims to relate ambient pollen concentration to six specific health domains: (1) pulmonary function and inflammation; (2) cardiovascular outcomes (blood pressure and heart rate variability); (3) cognitive performance; (4) sleep; (5) health-related quality of life (HRQoL); and (6) allergic rhinitis symptom severity. Our goal is to enroll 400 individuals with diverse allergen sensitization profiles. The six health domains will be assessed while ambient exposure to pollen of different plants naturally varies. Health data will be collected through six home nurse visits as well as 10 days of independent tracking of blood pressure, sleep, cognitive performance, HRQoL, and symptom severity by participants. Through repeated health assessments, we aim to uncover and characterize dose-response relationships between exposure to different species of pollen and numerous acute health effects, considering (non-)linearity, thresholds, plateaus and slopes.Conclusion: A gain of knowledge in pollen-health outcome relationships is critical to inform future public health policies and will ultimately lead toward better symptom forecasts and improved personalized prevention and treatment. Climate change has greatly impacted the onset, duration and intensity of the pollen season in recent decades, leading to an increase in exposure to some allergenic pollen species such as birch, hazel, oak, beech, and nettle and hemp families in Switzerland , 2. Simiintermittent allergic rhinitis (IAR) or colloquially as hay fever, are easily recognized but sometimes trivialized by patients pulmonary function and inflammation; (2) cardiovascular outcomes (blood pressure and heart rate variability); (3) cognitive performance; (4) sleep; (5) health-related quality of life (HRQoL); and (6) allergic rhinitis symptom severity. We highlight limitations of previous studies and identify gaps in knowledge, thereby providing the rationale for the EPOCHAL study . Secondly, this paper describes the design of the EPOCHAL panel study, which aims to quantify and characterize how ambient pollen concentration affects the aforementioned six health domains. Dose-outcome relationships in our study population will be investigated, looking specifically at (non-)linearity, thresholds, and plateaus.In addition, we aim to study:How does sensitization to at least one plant pollen, demonstrated by positive skin prick test (SPT), affect the outcomes within the six health domains?How do sensitizations to particular plants differentially affect the six health outcomes?In what ways are health outcomes measurably different in pollen monosensitized vs. polysensitized individuals?How is an increasing number of plant pollen sensitizations on the SPT related to the health outcomes?Do individuals with higher self-reported severity of allergic rhinitis symptoms manifest variant health outcomes in the other five health domains?Are there subgroups with distinct dose-response relationships between pollen exposure and health outcomes? 
Are there synergies between pollen intensity and other exposure variables in their effect on the health outcomes?Is there a measurable effect of variable pollen exposure on the six health outcomes among individuals without a pollen sensitization?Individuals with allergic rhinitis (AR) frequently have co-existing asthma (estimated at 10\u201340%) , while aTwo European cohort studies have noted a modest but significant elevation in systolic BP (3\u20136 mm Hg) in adults with AR vs. controls , 31. HowHeart rate variability (HRV) describes the changeability of time intervals between two heart beats and reflects a dynamic autonomic nervous system balance that is influenced by sympathetic and parasympathetic nervous system activity . HRV is Cognition could be modulated in individuals with AR via sleep impairment, medication side effects, disrupted mood and/or the actions of pro-inflammatory cytokines , 52 and Three mechanisms that could contribute to the influence of AR on sleep are: direct effect of inflammatory mediators such as histamine and cytoThe inflammatory process in AR involves cytokines, messenger molecules that can interact with the brain and cause changes in mood , anxiety3) \u201373 but a3) , 77. Morx) and ozone (O3) and sulfur dioxide (SO2) have been shown to interact with pollen in aggravating symptoms of asthma . Furtherf asthma , 80. Morpression , whereaspression .Beyond air pollution, weather can modulate pollen allergy symptoms. Thunderstorms with co-occurring extreme grass pollen concentrations have been associated with an escalation of asthma- and respiratory-related hospital admissions of individuals who were highly sensitized to grass pollen , 84. TheIn summary, among the six outcomes of interest, we identified the following research gaps:There is a paucity of prospective observational studies which collect health outcome data at more than 2 time points;Previous studies largely considered environmental exposure to pollen in a dichotomous manner (\u201cin\u201d vs. \u201cout\u201d of pollen season) rather than a continuous variable, which does not allow for dose-response analyses between pollen concentration and health outcomes;Most studies do not consider personal pollen sensitization profile on SPT, but instead rely on self-reported pollen allergy;Many studies have a lack of statistical power due to their small sample sizes;The inclusion criteria of many studies are very restrictive, limiting the generalizability of results to a \u201creal world\u201d population, particularly for allergy medication users, pollen polysensitized individuals, or adults with both AR and asthma;Other environmental pollutants are rarely considered as confounders;Little is known about the health effects of pollen on individuals without AR.The EPOCHAL study is an observational and longitudinal panel study conducted in Basel, Switzerland, with two recruitment periods, from February to end of August in 2021 and the same months in 2022. The chosen months of data collection cover the most typical pollen seasons for trees, grasses, and weeds in the Basel region. We aim to include 400 participants overall. The duration of study enrolment per individual will be approximately 6 weeks.This panel study will include adults who are between 18 and 65 years old and live within a 40-min commute from Basel-Stadt. 
The study panel will be sex-balanced and reflect the full pollen allergy spectrum, including individuals who are non-sensitized; sensitized but asymptomatic; monosensitized with symptomatic IAR; and polysensitized with symptomatic IAR. We aim to include ~300 adults with a health history of pollen-related allergic rhinitis and 100 individuals without pollen symptomatology. EPOCHAL is a real-world observational study which restricts participants minimally in their use of allergy and non-allergy medications. Nevertheless, one important exclusion to study participation is receipt of pollen immunotherapy within the previous 5 years. Study participants must also agree to short-term abstinence of oral antihistamines for 7 days prior to the SPT. These two restrictions on prior/current allergy treatment are meant to ensure the validity and reliability of data.Adults with asthma or preexisting high blood pressure are welcome to participate. However, we will exclude individuals with major, pre-existing cardiac and pulmonary conditions as well as epilepsy. Visual or hearing loss and restricted ability to complete the cognitive tests independently will also lead to exclusion. Furthermore, persons who are pregnant, regular users of medications which suppress the immune system and people who cannot refrain from psychoactive drug use for the duration of the study will not be enrolled.Recruitment channels will include the Division of Allergy of the University Hospital Basel; newsletters of the aha! Swiss Allergy Center; advertisements in newspapers; student, Swiss TPH and general websites; and social media posts and stories . Direct personal contact between the study nurses and participants from the start of recruitment and flexible, at-home health assessment scheduling will reduce the burden on participants and decrease the likelihood of loss to follow-up. Following completion of their involvement, participants will be remunerated with a CHF 40 grocery shopping voucher and receive their lung function, FeNO and blood pressure results.The EPOCHAL study consists of 6 weeks of active data collection per participant, starting with an initial study nurse visit at the participant's home. During this 90-min visit, informed consent is given, and an intake questionnaire, including medical history and personal habits, is completed by participants. The first pulmonary and cardiac assessments will be conducted by the study nurse. The participant will then be scheduled for SPT at the Division of Allergy. This will involve a determination of sensitization to 17 pollen extracts , althougEach participant will have five subsequent 60-min home visits by a study nurse. Data collected will include: pulmonary (PFT and FeNO) and cardiac (BP and HRV) assessments as well as self-reported HRQoL, mood, and allergic symptom severity collected via an electronic questionnaire. These nurse visits will repeat at approximately the same time of day and will be spaced weekly, as it is assumed that participants will be naturally exposed to varying concentrations of ambient pollen.During an overlapping 2-week period, participants will self-collect 10 days of data, to include: wear of a fitness/sleep tracker (periodic synchronization/transfer of data); participation in game-like assessments of cognitive performance; three consecutive measurements of BP using a device approved for home self-measurement; and completion of a brief online questionnaire regarding HRQoL and symptom severity. 
We estimate these items to collectively require 20 min of participant time per day.The dates for the nurse home visits as well as the 10-day data collection period will be based on self-reported typical months of allergy symptomatology for individuals with IAR. For individuals without pollen allergy symptoms, data collection will occur during any 6-week block during the months of study eligibility (February through August), with an effort to have non-symptomatic participation spread approximately equally over the 7 months. Throughout the period of participant data collection (February\u2013August 2021/2022), ambient pollen concentrations for the 17 chosen plant species will be furnished by the Federal Office of Meteorology and Climatology MeteoSwiss.A typical participation timeline is presented in Pollen seasons differ in their temporality see . Within It is understood that the published pollen data from the MeteoSwiss Basel monitoring station will not precisely match the unique pollen exposure of any particular participant, which can be locally influenced by green space, topography, altitude, and local plant density and variety. The EPOCHAL study will take into account the temporal change in pollen concentration over an approximate 6-week period for each participant, which will allow for enough pollen concentration variability to understand any potential pollen dose-response relationships. The EPOCHAL study will evaluate the confounding properties of both air pollution and weather on the six health domains . This goes well beyond what prior epidemiologic research investigated.The planned analysis will study the effect of MeteoSwiss pollen concentrations on the six major health outcomes . We will analyze how these outcomes vary between pollen-sensitized and non-sensitized individuals as well as how these outcomes vary in IAR individuals at varying levels of pollen exposure. For all primary outcomes, the null hypotheses will be that:There is no difference in outcome X between individuals with or without pollen sensitization(s).There is no difference in outcome X for pollen-sensitized individuals at varying levels of pollen exposure.The alternative hypotheses will be that there are significant differences for outcome X. The statistical significance level will be set as two-sided \u03b1 = 0.05. Analyses will be conducted using the lme4 R package for mixed models, considering that repeated measurements will be available for the same individual under different conditions.Pulmonary, cardiac, cognitive, sleep, HRQoL, and symptom severity outcomes will be analyzed as a function of same-day pollen exposure as well as multi-day lag (up to 7 days). In addition, we will correct for the following potential confounders:- Specifically for the cognitive testing, a potential learning effect that repetition in cognitive testing can have on individual participants' scores.- For cognitive and HRQoL outcomes, the previous night's sleep quality.- Class(es) of allergy medication used by participants in the preceding 24-h period.- Effect of different outdoor exposure periods on health outcomes.- Environmental confounders such as weather conditions and air pollution.We will use a generalized linear mixed effects regression model with a random intercept to account for individual baseline differences in health outcomes. 
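The protocol names the lme4 package in R for these mixed models. Purely to illustrate the random-intercept structure described above, an analogous specification is sketched below in Python/statsmodels; the data file and all column names are hypothetical, and the outcome and covariates shown are only examples of the variables listed earlier.

```python
# Illustration of the random-intercept model structure described above; the
# actual analysis is planned in R with lme4. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epochal_panel_long.csv")   # hypothetical long-format panel data

# e.g. FEV1 as a function of same-day and 1-day-lagged pollen concentration,
# adjusted for weather, air pollution and antihistamine use, with a random
# intercept per participant to absorb baseline differences between individuals.
fit = smf.mixedlm(
    "fev1 ~ pollen_same_day + pollen_lag1 + temperature + pm25 + antihistamine",
    data=df,
    groups=df["participant_id"],
).fit()
print(fit.summary())

# Roughly equivalent lme4 call in R:
#   lmer(fev1 ~ pollen_same_day + pollen_lag1 + temperature + pm25 + antihistamine
#        + (1 | participant_id), data = df)
```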
If a dose-response effect is found between pollen exposure and any health outcome, we will investigate whether there is a pollen concentration above which no further health outcome effect is noted or a pollen threshold below which there is no indication of such an effect. We will analyze potential differences in the primary outcomes related to plant pollen mono- vs. polysensitization. Further, we will look at effect modifiers to ascertain if dose-response relationships are different between age and socioeconomic status strata, for men and women, for smokers vs. non-smokers, and for asthmatics vs. non-asthmatics.To perform the sample size calculation, the power two means command was used in Stata with the inputs listed in For sample size estimation for cardiovascular outcomes, a rather conservative high estimate for the intra-class correlation coefficient (rho) on the individual level was used. It was derived from a dataset that included three repeated HRV and BP assessments within the same individual (6 months apart) . Rho andTwo group means are compared since this is the distinction made in the available literature. The group means are used to approximate the values of a high-pollen day when an allergic person would be symptomatic vs. a low to no-pollen day when the same person is not symptomatic.As noted in Ethical clearance has been obtained from the Ethics Committee for North-Western and Central Switzerland (EKNZ number 2021-00151). This study is a research project involving human subjects with the exception of clinical trials and falls under the Risk Category A in the Human Research Ordinance of the Human Research Act . InformeData collected in this panel study is subject to compliance with the Federal Data Protection Act (DSG) as well as the EU General Data Protection Regulation (GDPR). All personal data will be coded by assigning a unique study ID to each participant. This study ID will be used for all digitally collected questionnaire data as well as data input into EasyOne and Actiheart software for spirometry and HRV assessment, respectively. The data gathered by Cambridge Brain Sciences (CBS), the software provider of the daily cognitive games, and the Fitbit wristband tracker are stored on the device as well as servers belonging to these companies. Platform-specific IDs will be generated for each participant of the study, which the companies cannot trace to the individual person or their coded study ID. Through these platform-specific IDs, the companies will be aware that a single participant made repeated measurements , but they will not be able to trace who this person is. All data is saved on a secure internal server that can only be accessed by study team members and will be stored for 10 years.We will take special care to limit data sharing and data use to the minimum necessary information needed to conduct the study. In case of CBS, the company will not obtain any data other than the CBS-specific ID and the test results obtained through the platform itself. In the case of Fitbit, besides a Fitbit-specific ID, the device requires input of sex, height, weight and date of birth in order to function. Sex, height, and weight cannot be used to uniquely identify people, and we will use a default date of birth as 1 July of the actual year of birth to minimize any possibility of identification through this data point. 
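As a brief illustration of the sample size calculation mentioned above (performed with Stata's power twomeans command in the protocol), the underlying two-sample normal-approximation formula is easy to reproduce. The effect size and standard deviation below are placeholders, not the values from the protocol's inputs.

```python
# Normal-approximation sample size for comparing two independent means, roughly
# the calculation behind Stata's "power twomeans"; delta and sd are placeholders.
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Required n per group for a two-sided comparison of two independent means."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

# e.g. detecting a 3 mm Hg difference in systolic BP with a common SD of 10 mm Hg
print(math.ceil(n_per_group(delta=3.0, sd=10.0)))   # ~175 per group
```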
The study nurse will inform participants about these specific data safety measures taken during the informed consent process.https://www.fitbit.com/global/us/legal/privacy-policy#how-info-is-shared and CBS: https://www.cambridgebrainsciences.com/privacy-policy). We will make all participants aware of the implications of sharing data with the aforementioned companies. It will be explicitly discussed that CBS and Fitbit have their own data privacy policies, which are beyond the control and responsibility of Swiss TPH and EPOCHAL study staff. Participants will be directed to read these policies should they have questions about the hosting of their personal data, including whether the data may be potentially hosted abroad.It is possible that coded data collected by CBS and Fitbit is used for other purposes and self-reported symptom severity. The relationship between HRV as well as BP, AR, and pollen exposure is not well-defined and clarifying this association is a particularly important aspect of EPOCHAL. Furthermore, understanding whether these pollen-related changes are also manifested in a non-sensitized adult population makes the EPOCHAL project highly relevant for the general population and public health system.There are enormous direct and indirect health costs associated with each of the aforementioned health outcomes, and quantitatively measuring pollen-triggered hazards to population health is the fundamental driver behind this project. This project aims to understand Inflammation related to AR extends beyond the upper respiratory tract , 92. SysWhereas previous studies have been limited by cross-sectional or two time point designs, the EPOCHAL study will collect pollen and health data at 16 time points within a period of relevant and abundant pollen exposure in Basel, Switzerland (February through August). The serial measurements uniquely proposed by EPOCHAL will allow for outcome investigation along a continuum of differential pollen exposure. Statistical analysis will specifically consider sensitive subgroups which may show variable results, for example: younger vs. older age strata; asthmatics vs. non-asthmatics; and pollen mono- vs. polysensitized.A strength of our design is the consideration of same-day as well as lag (1\u20137 day) pollen exposure. This feature takes into account that some systemic inflammatory effects will not manifest immediately. Another strength is the inclusion of up to 17 relevant, allergenic pollens. This will allow for discovery of dose-response relationships, plateaus, and thresholds between pollen concentration and health parameters that may be limited to specific plants. The EPOCHAL study will also consider important confounders, which have been minimally explored in previous research, such as: air pollution exposure, weather, sleep quality and quantity from preceding night (for cognitive and HRQoL outcomes), caffeine intake, and use of allergy medications .One limitation of previous studies which we overcome is objective confirmation of pollen sensitization through SPT rather than reliance on self-report. In conjunction with self-reported symptom severity data, we can determine how the SPT profile is associated with the intensity or significance of systemic health effects. If such a relationship exists, this has the potential to greatly impact IAR medical management at the time of pollen allergy diagnosis.Another strength of the EPOCHAL study is the minimization of exclusion parameters in order to best approximate \u201creal world\u201d conditions. 
Whereas, some prior studies have disallowed allergy medication use and excluded asthmatics and individuals with hypertension, we believe our more inclusive approach to participant enrollment will generate results that are more relevant and applicable. One interesting feature of the EPOCHAL design is the targeted inclusion of ~25% non-allergic adults. Our rationale extends beyond the need for data comparison between sensitized and non-sensitized participants. To our knowledge, there is a paucity of research on systemic health effects of pollen exposure on non-sensitized adults. Given the global trends toward longer and more intense pollen seasons, this knowledge is important for the medical and public health communities.The EPOCHAL project aims to inform public health policies, particularly those which mitigate risk for the most vulnerable groups; shape environmental policies aimed at minimizing exposure to particularly allergenic plant species; and decrease the economic burden of IAR. Furthermore, with this work, we will make a significant step forward in providing personalized prevention recommendations that could greatly improve the quality of life of the pollen-allergic population. The health outcome information is crucially needed for accurate timing of population health alerts. The EPOCHAL study, while focused on an adult population in the Basel, Switzerland region, is widely generalizable to the wider European and global communities.The EPOCHAL study involves human participants and was reviewed and approved by the Ethics Committee for North-Western and Central Switzerland (EKNZ number 2021-00151). The participants will provide their written informed consent prior to participation in this study.AB, SG, KH, and ME contributed to conception and design of the study. AB wrote the first draft of the manuscript and created the figures and tables. AB and SG wrote sections of the manuscript. All authors read and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "An intriguing hypothesis proposes that hydrogen peroxide (H2O2) once acted as the electron donor prior to the evolution of oxygenic photosynthesis, but its abundance during the Archean would have been limited. Here, we report a previously unrecognized abiotic pathway for Archean H2O2 production that involves the abrasion of quartz surfaces and the subsequent generation of surface-bound radicals that can efficiently oxidize H2O to H2O2 and O2. We propose that in turbulent subaqueous environments, such as rivers, estuaries and deltas, this process could have provided a sufficient H2O2 source that led to the generation of biogenic O2, creating an evolutionary impetus for the origin of oxygenic photosynthesis.The evolution of oxygenic photosynthesis is a pivotal event in Earth\u2019s history because the O 2O2) has been proposed as an electron donor for photosynthesis before water, however, the amount of H2O2 available on early Earth was thought to be limited. Here the authors propose a new abiotic pathway wherein abrasion of quartz surfaces would have provided enough H2O2.Hydrogen peroxide (H Oxygenic photosynthesis operates by a four-electron reaction (R1) process associated with chlorophyll-a and the water-oxidizing complex (WOC)2. 
However, it would have been difficult for the ancestor of modern cyanobacteria to extract four electrons from water, a very stable compound, before the development of chlorophyll-a and the WOC4. Thus, it has been proposed that there might have been a transitional electron donor prior to H2O for the evolution of oxygenic photosynthesis on the early Earth3.The evolution of oxygenic photosynthesis was a critical biological innovation that allowed water to be used as an electron source and dioxygen gas , divalent manganese (Mn2+), and bicarbonate (HCO3\u2212) satisfy the multi-electron chemistry requirements. As an early intermediate candidate prior to H2O, H2O2 is plausible for two reasons. First, it can be oxidized at electrochemical potentials that are accessible to existing anoxygenic phototrophs1, meaning that if photosynthesis evolved as an anoxygenic process, as is generally accepted9, the photosynthetic machinery might already have been in place to handle H2O2 redox transformations. Second, H2O2 and the by-product (O2) would have consumed the surrounding reductants used by anoxygenic photosynthesizers , creating evolutionary pressure that forced existing photoautotrophs to adapt to locally oxidized environments and to utilize new electron donors6, assuming a large sustained H2O2 source. Blankenship and Hartman1 further developed a hypothesis that involves H2O2 as a transitional electron donor in a two-electron reaction releasing oxygen within a primitive system composed of an anoxygenic photosynthetic reaction center and a binuclear Mn-catalase enzyme (R2).Many intermediates have been proposed10, electrons in H2O2 can also be extracted by the Mn cluster that might directly evolve from inorganic Mn complexes11, such as manganese bicarbonate12 and a variety of MnO2 minerals13. Thus, H2O2 could have played a key role in the developmental stage of oxygenic photosynthesis, but for that process to endure, it would have depended on the continuous availability of an exogenous source of H2O2.Regardless of the disputes on the structural homology between the binuclear Mn-catalase and four Mn-containing core of the WOC2O2 were proposed to have existed on the early Earth18. For instance, Kasting19 proposed a photochemical model in which H2O2 was produced from the photolysis of H2O in the early stratosphere and reached the ground via precipitation. The flux of H2O2 on the surface of the early Earth was calculated to be ~106 molecules cm\u22122\u2009s\u22121 based on a 0.2-bar CO2 photochemical model20, resulting in a dissolved O2 concentration maximum of 0.08\u2009nM in the surface water when all H2O2 decomposed to O2 and H2O. Pecoits et al.21 similarly calculated a rate of ~106 molecules cm\u22122\u2009s\u22121 assuming lower pCO2 of 0.01\u20130.1\u2009bar. These theoretical calculations21 collectively suggest that the ground level of photochemically produced H2O2 is very low because of the low atmospheric H2O vapor and the short atmospheric lifetime of H2O2 against ultraviolet photolysis in the stratosphere. Importantly, the trace levels of H2O2 by photochemical process alone would be insufficient to fuel the respiration of the smallest cells (3\u2009nM dissolved O2), yet high enough for developing a defense against oxygen toxicity22. 
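The numbered reactions R1 and R2 cited in this passage are not reproduced in this extract. Based on the surrounding description (a four-electron oxidation of water for R1 and a two-electron oxidation of H2O2 for R2), they are presumably the standard half-reactions shown below.

```latex
% Presumed forms of the half-reactions referred to as R1 and R2 above; the
% numbered equations themselves are not shown in this extract.
\begin{align}
  2\,\mathrm{H_2O} &\;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \tag{R1}\\
  \mathrm{H_2O_2}  &\;\longrightarrow\; \mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \tag{R2}
\end{align}
```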
Although it has been proposed that substantial photochemically produced H2O2 stored in glaciers might have been released to the oceans following a Snowball Earth event23, the occurrence of possibly the earliest extensive glaciation (the Pongola glaciations at ~2.9\u2009Gyr)24 is still much later than the development time of oxygen-utilizing enzymes (~3.1\u2009Gyr)25.Several abiotic geochemical sources of H2O2 and \u2022OH) involving water reacting with newly abraded surfaces of quartz (SiO2). Specifically, O2 and ROS are produced by reactions between water and the surface-bound radicals (SBRs), such as \u2261SiO\u2022 and \u2261SiOO\u2022, under strictly anoxic conditions. These SBRs are highly reactive surface defects on quartz and various other types of silicate minerals 27 that can be created by the mechanical homolysis of Si\u2013O bonds28 through hydrological processes that induce quartz abrasion . We hypothesize that the persistent generation of H2O2 at quartz-water interfaces in rivers, estuaries, and deltas could have provided a source of H2O2 that stimulated the emergence of oxygenic photosynthesis from anaerobic predecessors on the early Earth6.Here we report experimental evidence for an overlooked mechanism for producing a stable abiotic source of reactive oxygen species 31 which can split water molecules and produce ROS. Previous studies investigated the formation of radicals on the surface of pulverized minerals and their role in pathogenicity37, which suggest that SBRs and the ROS produced are sensitive to the presence of atmospheric O2 and H2O. To avoid the false positive results induced by pre-existing O2, we performed ball-milling of quartz sands (0.25\u20130.6\u2009mm) in an O2-free, N2 atmosphere (O2\u2009<\u20090.1\u2009ppm) and obtained a fine powder of quartz with a median particle size of 0.002\u2009mm and SBRs with unpaired electrons through homolysis 40 on the quartz surfaces species. By contrast, there is no SBR on the aged surface of intact quartz. The SBRs were derived from the homolysis of \u2261Si\u2013O\u2013Si\u2261\u2009(intrinsic bonds in quartz) and \u2261Si\u2013O\u2013O\u2013Si\u2009\u2261\u2009.We investigated the feasibility of the formation of SBRs on the surface of the abraded quartz on Earth before its atmosphere accumulated free oxygen, e.g., the Great Oxidation Event (GOE), which occurred at ~2.45 to 2.3\u2009Gyr agoces Fig.\u00a0. In turn41. Correspondingly, transmission electron microscopy (TEM) observations demonstrated that the amorphous layer that formed on the surface of the abraded quartz increased with a longer time of ball-milling analyses were performed to further examine whether the generation of SBRs were related to structural changes during quartz abrasion. The XRD patterns show that the abraded quartz has distinctive variations in its quintuple lines that are consistent with the presence of crystal defects Fig.\u00a0, i.e., t2O2 and H2O2 Fig.\u00a0, but als2O2 Fig.\u00a0. The acc2O2 Fig.\u00a0.Fig. 2Ti2 release under anoxic conditions at the abraded quartz-water interfaces showed three stages. During the first stage (0\u20135\u2009min), the reaction between SBRs and H2O led to a rapid increase in the concentrations of \u2022OH, H2O2, and dissolved O2 in the suspension, resulting from reactions R6\u2013R10.Measurements of the kinetics of ROS and O2 were observed . 
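The reactions referred to as R6-R10 above are likewise not reproduced in this extract. As a schematic reading only, the silica surface-radical literature generally describes the species named here reacting with water along the lines below; these generic reactions are offered for orientation and are not necessarily the paper's numbered equations.

```latex
% Schematic surface-radical/water chemistry consistent with the description
% above; generic literature reactions, not necessarily the paper's R6-R13.
\begin{align*}
  \equiv\!\mathrm{SiO^{\bullet}} + \mathrm{H_2O} &\;\longrightarrow\; \equiv\!\mathrm{SiOH} + {}^{\bullet}\mathrm{OH}\\
  \equiv\!\mathrm{Si^{\bullet}}  + \mathrm{H_2O} &\;\longrightarrow\; \equiv\!\mathrm{SiOH} + \mathrm{H^{\bullet}}\\
  2\,{}^{\bullet}\mathrm{OH} &\;\longrightarrow\; \mathrm{H_2O_2}\\
  2\,\mathrm{HO_2^{\bullet}} &\;\longrightarrow\; \mathrm{H_2O_2} + \mathrm{O_2}\\
  \mathrm{O_2^{\bullet -}} + \mathrm{H_2O_2} &\;\longrightarrow\; {}^{\bullet}\mathrm{OH} + \mathrm{OH^-} + \mathrm{O_2} \quad \text{(Haber-Weiss)}
\end{align*}
```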
Moreover, both the decrease of SBR exposure (R8\u2013R9) and consumption of H2O2 by other radicals via R11\u00a0\u2013\u00a0the Haber-Weiss reaction28\u00a0\u2013\u00a0resulted in a decreased [H2O2]. The subsequent increase in [H2O2] was negatively correlated to the concentration of the dissolved O2. This is due to the dissolved O2 produced in the intermediate reactions (R10\u2013R11) reacted with H\u2022 to form HO2\u2022 (R12)42, in which the H\u2022 formed via the reaction between H2O and \u2261Si\u2022 (R13)43. The continuously generated HO2\u2022 then gradually transformed to H2O2 in half-quantity cycling between R10 and R12, in which the dissolved O2 was reduced to half of its initial amount in each cycle, concomitant with the generation of an equivalent molar amount of H2O2. Meanwhile, as predicted by Henry\u2019s law, ~15% dissolved O2 might escape from the quartz-water interface to the headspace of the oxygen-free atmosphere (See\u00a0During the second stage (5\u2013120\u2009min), fluctuations in the concentrations of ROS and dissolved Oved Fig.\u00a0. The firhere See\u00a0.11\\docum2O2 approached steady values, and the concentration of the dissolved O2 decreased to <0.01\u2009mg\u2009L\u22121. This indicates that most SBRs were consumed in the quartz-water interface reactions. The production of both \u2022OH and H2O2 was restricted by the quantity of SBRs as the concentration of the ROS was linearly correlated to the particle loading , the concentrations of H0\u2009min, th2 in the reaction system could convert to ROS via the following two conditions. When quartz was ground in the presence of O2, both the yielding of \u2022OH and H2O2 in the suspensions under various pH values were greater than those of quartz ground in an O2-free atmosphere and 1O2 (singlet oxygen) during the time-course of quartz abrasion and benthic cyanobacterial mats . In the case of the former, our calculations suggest that in such coastal environments, atmospheric photochemical processes could yield 5\u2009\u00d7\u2009105 molecules O2\u2009cm\u22122\u2009s\u22121) (based on R2). By contrast, a literature survey of 84 in-situ measurements of oxygenic photosynthesis in benthic microbial ecosystems showed that depth-integrated net rates (O2 produced \u2013 O2 immediately consumed) are log-normally distributed and generally fall within two orders of magnitude (100\u201310\u22122\u2009nmol\u2009cm\u22122\u2009s\u22121) regardless of the benthic environment45. With this range in mind, those authors adapted the median net O2 production rate of 0.16\u2009nmol\u2009cm\u22122\u2009s\u22121 or 9.63\u2009\u00d7\u20091013 molecules\u2009cm\u22122\u2009s\u22121. This rate of O2 production from modern cyanobacterial mats is ~400 times higher than the in-situ flux of O2 at the Archean delta/shore . However, it clearly demonstrates that the abrasion of quartz sands in hydrodynamic processes can act as a geologically significant oxygen-producing pathway.We extrapolated our experimental results to depositional scale and calculated the H\u2009s\u22121 See\u00a0. To furt2 would eventually have expanded into the water column and readily eliminated the reductants necessary for anaerobic photosynthesis by planktonic communities . We modeled how this O2 first effected the photosynthetic communities growing within a mat, and then extrapolated to the water column above the continental shelf. 
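The unit conversions quoted in this passage are easy to verify. The short check below reproduces the 9.63 x 10^13 molecules cm-2 s-1 figure from the 0.16 nmol cm-2 s-1 median mat rate and back-calculates the interface flux implied by the "~400 times" comparison; only numbers stated in the text are used.

```python
# Quick check of the unit conversions quoted above. The Avogadro conversion
# reproduces the 9.63e13 molecules cm-2 s-1 figure; the Archean quartz-abrasion
# O2 flux is then back-calculated from the "~400 times" ratio.
N_A = 6.022e23                      # molecules per mole

median_mat_rate = 0.16e-9           # mol O2 cm-2 s-1, median modern mat rate quoted above
mat_flux = median_mat_rate * N_A
print(f"modern mat flux      ~ {mat_flux:.2e} molecules cm-2 s-1")      # ~9.6e13

archean_flux = mat_flux / 400       # implied interface flux at the Archean delta/shore
print(f"implied Archean flux ~ {archean_flux:.2e} molecules cm-2 s-1")  # ~2.4e11

photochemical = 5e5                 # molecules O2 cm-2 s-1, atmospheric photochemical yield quoted above
print(f"ratio to photochemical source ~ {archean_flux / photochemical:.0e}")
```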
In terms of the microbial mats, O2 generation would have presented a challenge to microbial metabolism that used electron donors that are readily oxidized. Most cyanobacteria are sensitive to sulfide as it directly inhibits the water-splitting reaction of Photosystem II (PSII)46. Even in metabolically versatile cyanobacteria mats, with a H2S concentration as low as 1\u2009\u03bcM, the oxygenic photosynthesis is severely inhibited and replaced by sulfide-oxidizing anoxygenic photosynthesis48. With the depletion of sulfide by diffusing O2, the onset of oxygenic photosynthesis can occur, then the competition with the two metabolisms (anoxygenic photosynthesis and chemosynthesis) is possible.Such high yielding of oxidants could have created locally oxidized environments that not only were detrimental to the early anaerobic photoautotrophs growing within the microbial mats, but the O50. These bacteria might have grown throughout the photic zone prior to cyanobacterial evolution, but thereafter were progressively marginalized to greater depths as the photic zone became more oxygenated51. Using a quantitative model modified from McKay and Hartman6, we performed the time evolution of H2O2 concentration in the oxic zone and the length of the pathway to which oxic conditions extend into the Archean shallow seawater near deltas and shores based on the two H2O2 flux calculated above as an example, it has been proposed that the oldest banded iron formations (BIF) were formed via the activity of anoxygenic photosynthetic bacteria, the photoferrotrophs2 levels within the oxic zone are several orders higher than the threshold stimulating the development of oxygen tolerance and aerobic respiration (dissolved O2\u2009>\u20093\u2009nM) in the anoxygenic phototrophs53. Furthermore, it provides the indispensable oxidizing species for the key evolution of oxygenic photosynthesis. Due to the catalysation of aerobic cyclase enzyme in the organisms, the mechanical/chemically produced O2 could have served as the earliest external substrate for the O2-dependent biosynthesis of chlorophyll-a precursor Mg-divinyl chlorophyllide55. In summary, the primitive PSII composed of an intermediate pigment of chlorophyll-a and an original Mn2-cluster would have allowed the ancestral cyanobacteria to extract two electrons from H2O2 at a time, liberating the earliest biogenic O2 (R2)56.The O57, well before the geochemical evidence of the rise of oxygen-evolving photosynthetic cyanobacteria60. These ROS-scavenging enzymes might have emerged in the initial adaptation to the weakly oxic environments and crucially protected the earliest cyanobacteria from the toxicity of ROS61. Notably, the rise in both oxygenases and other oxidoreductases suggests a wider availability of trace oxygen ~3.1\u2009Gyr25. Following on from this, we propose that the early evolution of oxygenic photosynthesis might have originated as benthic microbial communities at river or coastal setting56. In such high-energy hydrodynamic environments, the continuous abrasion of quartz-bearing sediment provided a high yield of oxidants despite the contemporary atmosphere being anoxic (O2\u2009<\u200910\u22127\u2009atm62). This process would have operated independent of atmospheric conditions, similar to the benthic pre-GOE oxygen oases previously proposed45. 
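The oxic-zone model mentioned above (modified from McKay and Hartman) is not reproduced in this extract. Purely as an illustrative stand-in for how a time evolution of H2O2 concentration in a mixed layer can be set up, the sketch below integrates a single well-mixed box that receives an interface flux and loses H2O2 by first-order consumption; every number in it is a placeholder rather than a value from the paper's model.

```python
# Purely illustrative stand-in for the oxic-zone calculation mentioned above:
# a single well-mixed box of depth h receiving an H2O2 flux F at the sediment-
# water interface and losing H2O2 by first-order consumption (rate constant k).
# All numbers are placeholders, not values from the paper's model.
F = 2.4e11 / 6.022e23 * 1e4      # mol m-2 s-1 (flux per cm2 converted to per m2)
h = 1.0                          # m, depth of the mixed layer above the mat
k = 1e-5                         # s-1, first-order H2O2 consumption rate (placeholder)

dt, t_end = 3600.0, 30 * 24 * 3600.0   # 1 h steps over 30 days
C = 0.0                                 # mol m-3, H2O2 concentration in the box
for _ in range(int(t_end / dt)):
    C += dt * (F / h - k * C)           # explicit Euler integration

print(f"steady state ~ {F / (h * k) * 1e6:.1f} umol m-3; after 30 d: {C * 1e6:.1f} umol m-3")
```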
Moreover, in the absence of plant root systems, Archean surface terrains would have been subject to rapid migration of riverbeds and a predisposition for wide, braided streams to transport and disperse the large volumes of supermature quartz-rich sands weathered from the source terrains63. Subjected to much more intense tidal erosion than today64, the Archean land surface would have experienced enhanced wetting, conditions favouring colonization by microbial mats. With the mechanically resistant and sandy biolaminae shielding the harmful ultraviolet light, the ancestral cyanobacteria could perform photosynthesis via the H2O2 delivered by currents waves and tides associated with these high-energy environments placed an increasing evolutionary pressure on the anaerobic microbial world. Eventually, as the ancestral cyanobacteria developed oxygenic photosynthesis and gained the ability for extracting electrons from H2O70, they would have outcompeted the anaerobes that were constrained by a limited geological supply of reductant electron donors71. This shift in photosynthesis ultimately changed the structure of the Archean Earth\u2019s biosphere and initiated oxidative weathering on land72. Our study does not constrain the time when oxygenic photosynthesis successfully evolved but it does provide a plausible scenario for the initial development of oxygenation on the early Earth. Significantly, from a geochemical perspective, this means that some existing Archean proxies of O2 are not necessarily tied to early oxygenic photosynthesis.Our hypothesis neatly ties together a significant source of abiotic oxygen on the Archean Earth and the subsequent evolution of oxygenic photosynthesis. As more quartz-rich rocks emerged in the continental landmass of the Archean69 Fig.\u00a0, the qua2\u2009>\u200999\u2009wt.%). The quartz raw material was mesh-sieved and followed by a hydrofluoric acid (10\u2009wt.%) washing for 8\u2009h to remove surface impurities and produce the surface without dangling bonds (QW)28. The 0.25\u20130.6\u2009mm quartz particles were chosen and washed with distilled deionized water for a few times until the pH of the cleaning solution reached neutral, then dried in oven at 110\u2009\u00b0C for 24\u2009h and stored in a glovebox for further experiments .Quartz sands were purchased from Richjoint for grinding at 350\u2009rpm for 1 and 5\u2009h, respectively. After grinding, the jar was moved back to the glovebox, and the abraded quartz samples (QGN-1h and QGN-5h) were collected and sealed in glass bottles in the glovebox to protect the surface reactive sites.2 atmosphere for 5\u2009h with the following procedure: first, quartz powders were loaded into the jar in the glovebox and the jar was sealed tightly with the aforementioned procedure. Then, the jar was moved out of the glovebox and vacuumed through one inlet with a mechanical pump, and the jar was filled with pure oxygen (>99.99%) for grinding by the aforementioned procedure. Lastly, the sample was transferred to glass bottles in a glass desiccator full-filled with ultrapure N2. This sample was used in the according experiments within 48\u2009h after grinding.A control sample was ground in a pure O\u22121 at 25\u2009\u00b0C. The waters with a range of dissolved oxygen contents were prepared by diluting the high DO (9.00\u2009mg\u2009L\u22121) water with low DO (<0.01\u2009mg\u2009L\u22121) water. 
Potassium phosphate buffer solutions of various pH values were prepared by mixing 1/15\u2009M KH2PO4 with 1/15\u2009M K2HPO4 solutions with according ratios.All waters used in the experiments were deionized to 18.2\u2009M\u03a9\u2009cm\u22121) with 10\u2009mM benzoic acid ) was prepared and transferred to a three-neck flask with continuous magnetic stirring. Probes were then put into the solution through the flask necks to measure the pH values (Mettler-Toledo FiveEasy PlusTM), DO, and redox potentiality , respectively. When 20.00 g of QGN-5h (the specific surface area is 5.21\u2009m2\u2009g\u22121) was quickly plunged into the solution, the reaction started and lasted for 4\u2009h. In all, 4.00\u2009mL of suspension was extracted and filtered through a 0.22-\u03bcm membrane (Jinteng) at each sampling time . These subsamples were used for measuring the concentrations of hydroxyl radical and hydrogen peroxide.Measurements of the reaction kinetics of ROS generation were conducted in the glovebox at room temperature. A 250-mL solution (DO\u2009<\u20090.01\u2009mg\u2009L2. The quartz samples ground under different headspace gases (N2 or O2) were then mixed with 5.00\u2009mL of potassium phosphate buffer solutions that were previously prepared under various pH conditions (DO\u2009<\u20090.01\u2009mg\u2009L\u22121). The filtered liquid samples were collected for measuring the concentrations of hydroxyl radical and hydrogen peroxide.To examine the effects of atmosphere and aquatic chemistry on the production of ROS, suspensions were prepared by submerging the abraded quartz sample (1.00\u2009g) into the deionized water (5.00\u2009mL) with magnetic stirring. The QGN-5h sample was added into the water with various concentrations of dissolved O\u22121)) was set at 0.10, 0.20, 0.40, 0.60, 0.80, and 1.00\u2009g\u2009mL\u22121, respectively. All filtered liquid samples from the batch experiments were collected for ROS measurements.To further examine the effect of specific surface area on the production of ROS, the loading of quartz . In total, 1.00\u2009g of the abraded quartz sample (QGN-5h) was plunged into 5.00\u2009mL of the solution (DO\u2009<\u20090.01\u2009mg\u2009L\u22121) with 2\u2009mM the ROS scavenger (methanol or benzoquinone) with magnetic stirring. All filtered liquid samples were collected to measure the concentrations of H2O2.To confirm the presence of \u2261SiOO\u2022 on the surface of the abraded quartz produced in anoxic conditions, we conducted control experiments by scavenging specific radicals. Methanol was chosen as the \u2022OH scavenger, while benzoquinone was used to scavenge \u2022O75. The solid concentration (quartz sands with a size of ca. 2\u2009mm) is set to 250\u2009kg\u2009m\u22123, and the tumbling barrel was rolling at 1\u2009m\u2009s\u22121. About 10\u2009mL of the suspension was sampled at 24, 48, 72, 96, and 120\u2009h, respectively. After centrifugation and drying, the quartz powders were collected for surface area measurements.To experimentally test if ROS could be continuously produced in the natural erosion of quartz by waves and tides, a tumbling barrel was used to grind quartz , superoxide radical (\u2022O2\u2212), and singlet oxygen (1O2) were trapped by DMPO , DMPO/DMSO (dimethyl sulfoxide), and TEMP , respectively, to form more stable spin adducts for quantification. In all, 1.00\u2009g of quartz sand was ground in 2.00\u2009mL of solutions containing the trapping agent (10\u2009mM) by vibrating ball mill (Pulverisette 23 Mini-Mill). 
After shaking at 50\u2009Hz frequency for certain times , 0.50\u2009mL of the suspension containing spin adducts was sampled and filtered for EPR measurements.EPR spin trapping techniques were used to examine the transient radicals generated during the dynamic fracturing process of quartz. Probes with specific spin were selected to react with transient free radicals\u22121) were loaded into the jar in the glovebox. The sealed jar filled with the same atmosphere as in the glovebox was taken out, and transferred to a planetary ball mill for grinding at 350\u2009rpm for certain times . In total, 1.50\u2009mL gas in the headspace of the jar was extracted with a syringe from the valve with a thick silicone pad for gas content measurement in each interval.The headspace gas composition in the jar during the quartz grinding process was monitored. In total, 30.00\u2009g of quartz sands and 10.00\u2009mL water .XRD patterns were obtained over the 22 technique. Nitrogen adsorption-desorption measurements were conducted at 77\u2009K using an ASAP 2020 Surface Area & Pore Size Analyzer (Micromeritics Instrument Corporation). Prior to the measurement, samples were degassed in vacuum at 200\u2009\u00b0C for 12\u2009h.Surface area measurements were performed with the BET NThe particle size of the quartz powder (QGN-5h) was measured with a laser particle analyser . The morphology of the quartz powder (QGN-5h) was observed with a scanning electron microscope .Surface-bound radicals formed via grinding were measured by EPR on a Bruker A300-10-12 spectrometer. The settings for the EPR measurements were as follows: center field, 3320G; sweep width, 500G; microwave frequency, 9.297\u2009GHz; modulation frequency, 100\u2009kHz; power, 1.0\u2009mW; and temperature, 77\u2009K.The gas products in the headspace of the sealed jar \u2013 after the quartz sands were ground in water \u2013 was determined by gas chromatography . Each gas sample was injected into the GC inlet connected with a vacuum glass system. The gas analyser included two thermal conductivity detectors for the analysis of permanent gases and a flame ionization detector for the analysis of hydrocarbon gases, as well as five rotary valves and seven columns. This enabled the analysis of all gaseous components with a single injection.p-Hydroxybenzoic acid, p-HBA) of benzoic acid (BA)76. For the determination of p-HBA, 1.00\u2009mL of a filtered liquid sample was rapidly mixed with 1.00\u2009mL methanol to quench further oxidation caused by \u2022OH. The p-HBA concentration was measured by a high-performance liquid chromatography equipped with a UV detector and an Inter Sustain C18 column (4.6\u2009\u00d7\u2009250\u2009mm). The mobile phase was a mixture of 0.1% trifluoroacetic acid aqueous solution and acetonitrile at a flow rate of 1.00\u2009mL\u2009min\u22121. The volume of sample injection was 20\u2009\u03bcL, the column temperature was 30\u2009\u00b0C, and the detection wavelength of UV was set at 255\u2009nm. The p-HBA retention time was 3.2\u2009\u00b1\u20090.2\u2009min. The detection limit of p-HBA was optimized to 0.01\u2009\u03bcM, which corresponds to 0.059\u2009\u03bcM \u2022OH.All experiments were carried out at 25\u2009\u00b1\u20092\u2009\u00b0C in the dark. Quartz samples were mixed with the solutions containing the \u2022OH trapping agent , then filtered and sampled for \u2022OH measurements by determining the concentrations of the oxidative product technique optimized for UV-Vis measurements77. 
LCV can be oxidized by \u2022OH generated from H2O2 with the catalysis of the enzyme horseradish peroxidase . The oxidizing product of LCV is crystal violet cation (CV+), which has a spectrophotometric absorbance maximum at 590\u2009nm. The absorbance of CV+ was measured on a Shunyuhengping UV2400 spectrometer.The H2PO4 (Aladdin) as pH buffer, 50\u2009\u00b5L 41\u2009mM LCV (dissolved with HCl), and 50\u2009\u00b5L 0.5\u2009mg\u2009mL\u22121 HRP . After shaking to homogenize the samples, they were kept in dark at room temperature (25\u2009\u00b1\u20092\u2009\u00b0C) for 30\u2009min to stabilize the absorbance. Absorbance measurements were taken in 1\u2009cm path-length cuvettes.Reagents were added to 1.70\u2009mL filtered liquid sample in the following order for a total volume of 2\u2009mL: 200\u2009\u00b5L 100\u2009mM KHSupplementary InformationPeer Review FileDescription of Additional Supplementary FilesSupplementary Video"} +{"text": "Salmonella (NTS) is classified among the MDR pathogens of international concern. To predict their MDR potentials, 23 assembled genomes of NTS from live cattle (n = 1), beef carcass (n = 19), butchers\u2019 hands (n = 1) and beef processing environments (n = 2) isolated from 830 wet swabs at the Yaounde abattoir between December 2014 and November 2015 were explored using whole-genome sequencing. Phenotypically, while 22% (n = 5) of Salmonella isolates were streptomycin-resistant, 13% (n = 3) were MDR. Genotypically, all the Salmonella isolates possessed high MDR potentials against several classes of antibiotics including critically important drugs . Moreover, >31% of NTS exhibited resistance potentials to polymyxin, considered as the last resort drug. Additionally, \u226480% of isolates harbored \u201csilent resistant genes\u201d as a potential reservoir of drug resistance. Our isolates showed a high degree of pathogenicity and possessed key virulence factors to establish infection even in humans. Whole-genome sequencing unveiled both broader antimicrobial resistance (AMR) profiles and inference of pathogen characteristics. This study calls for the prudent use of antibiotics and constant monitoring of AMR of NTS.One of the crucial public health problems today is the emerging and re-emerging of multidrug-resistant (MDR) bacteria coupled with a decline in the development of new antimicrobials. Non-typhoidal Today\u2019s world is experiencing an antibiotic resistance pandemic due to growing bacterial resistance to a broad range of drugs in animals and clinical settings ,2,3. MicSalmonella (NTS). For instance, while 16% of NTS isolates exhibited resistance to at least one essential antibiotic, as high as 2% of them were resistant to at least three different classes of antibiotics in the US [Salmonella (iNTS) lineage with increased multidrug resistance (MDR) potential playing a considerable role in outbreaks, thereby threatening the global market and tourism [MDR has also been reported in non-typhoidal n the US . In Euron the US . In Afri tourism ,5. Moreo tourism ,8,9.Salmonella serovars and their antibiotic resistance profile is essential to inform policy and guide treatment strategies for appropriate therapy and the development of new antimicrobials [Salmonella [Salmonella isolates [Salmonella, thereby representing an important risk factor for beef carcass contamination during processing at the abattoirs such as the Yaound\u00e9 abattoir [Current knowledge of the type of crobials ,2; thus,lmonella . Given tisolates . 
Healthyabattoir ,12.Salmonella isolates at the Yaounde abattoir using molecular techniques such as whole-genome sequencing (WGS).The Yaound\u00e9 abattoir where more than 6000 animals are slaughtered every week, is one of the major slaughterhouses in Cameroon that has the capacity to supply meat to three regions of Cameroon and neighboring countries . Following the WHO recommendation and given the use of antibiotics for disease prevention or animal growth promotion, it seems crucial to monitor the antimicrobial resistance profile of Salmonella organisms. This study was aimed at predicting the MDR, pathogenicity, and virulence potentials of Salmonella isolated at the Yaounde abattoir using WGS.Unlike traditional antimicrobial susceptibility testing, whole-genome sequencing (WGS) can give information on the presence of MDR genes and pathTwenty-three genomes of 38 identified Salmonella isolates were successfully sequenced. Only 19 sequenced genomes were thoroughly exploitable for the required bioinformatics analyses. The genomic profile of NTS isolates and their GenBank accession numbers are outlined in Isolates 8ev, 20de, 22sa, 34de, 60sa, and evjul were resistant to streptomycin, whereas between 18 and 20 isolates were highly susceptible to ampicillin, chloramphenicol, and tetracycline . InteresSalmonella isolates is summarized in aadA, aadA1, and aadA2 were, respectively, present in 15.8%, 26.3%, and 21% of Salmonella isolates. Moreover, chloramphenicol-resistance was found in 26.3%, 15.8%, 10.5, 10.5% of the isolates carrying cat, cat1, cat2, and cat3 genes, separately while 10.5% of isolates harbored cmlA1, cml5, and cml6 genes, respectively. The tetracycline-resistance genes, tetA, tetB, tetC and tetR were present in 5.26% 26.3%, 26.3%, 31.6% and 84.2% of isolates, respectively. Ampicillin-resistance genes TEM-1 and TEM-163 were identified in 15.8% and 21% of NTS isolates, respectively. However, close to 80% of Salmonella isolates harbored at least one false-negative result and 81.2% (for ampicillin).The true positive and true negatives cases represented perfect matching between their phenotypic and genotypic antibiotic resistance . The senp = 0.46).Generally, results indicate that whole-genome sequencing predicted four times the antimicrobial resistance for NST than the traditional susceptibility testing complex was also present alongside gene golS in isolates that exhibited resistance potential against phenicols. The E. coli soxS and soxR genes were detected in 78.94 to 100% of Salmonella isolates.Eighteen genes (in purple) involved in the efflux, transport, and reduced permeability of antimicrobials were identified in isolates . Except mdtk that promotes resistance solely against fluoroquinolone was present in 100% Salmonella isolates. The AcraB regulator gene sdiA was detected in all the isolates. Furthermore, sulfonamide-resistance genes sul1 and sul2 and the gene CTX-M-14 that regulates resistance against third-generation cephalosporin were present in 15.8 and 5.26% of Salmonella isolates, respectively. Then, isolates 8ev, 22sa and 34de harbored cephalosporin-resistance genes OXA-1, OXA-2, and OXA-7. Additionally, the fluoroquinolone-resistance gene qnrB1 was reported in isolates 8ev, 20de, 34de, and 60sa. The gene macA that mediates efflux of macrolide and secretion of enterotoxin ST11 was detected in 52.63% of NTS isolates. 
Moreover, 47.36% of Salmonella isolates hosted the gene marA, which exports antibiotics and disinfectants out of bacteria.Furthermore, the gene Mutations of the PmrAB system were detected in more than 31% of isolates .p > 0.05) was found among different Salmonella serovars in pathogenicity (p = 0.95) and least (p = 0.93) human pathogens, respectively.Our isolates had a mean probability of 94% to cause diseases in humans, though no significant difference , and IncH) and four bacterial toxins were detected in some Salmonella isolates.Of the 11 identified SPIs, only C63PI was present in all the isolates . The funSalmonella strains isolated at the Yaounde abattoir were sensitive to tetracycline, chloramphenicol, and ampicillin , the majority of picillin . These rlmonella .Salmonella isolates constitutes a serious health concern. Similarly, MDR was recently reported to be observed among Salmonella isolated in an Ethiopian abattoir on similar drugs [Findings from the present study indicate that tetracycline, chloramphenicol, and ampicillin are still effective antibiotics in Cameroon contrary to the situation in many countries where these drugs are no longer appropriate for the treatment of invasive salmonellosis. Notwithstanding, the presence of 13% of MDR ar drugs . In the Salmonella isolates confirms their phenotypic antimicrobial resistance status described in Furthermore, the detection of the respective resistance genes to streptomycin, chloramphenicol, tetracycline, and ampicillin in the p = 0.46). However, the value of greater than 1.5 of the sum of sensitivity and specificity reflects the usefulness of the WGS prediction ; (ii) those that reduce membrane permeability to drugs ; and (iii) those that alter antibiotic target configuration [E. coli sox R, E. coli soxS, and ramA overlap (Data not shown).In addition, WGS is also useful in predicting the multiple drug resistance of NTS. Based on their mode of action, the eighteen MDR-promoting genes detected in this study are grouped into: (i) those that export drugs out of the bacterial cells (nd ramA) . NeverthTolC [mdsA, mdsB, mdsC, acrA, acrB, and mdtk, which generally work in synergy either with transcriptional activators or promotors such as TolC and golS [TolC only in MDR isolates justifies the critical role it plays in synergy with RND efflux systems in exporting a range of antimicrobials. The golS gene promotes the MdsABC complex to disseminate resistance against a variety of drugs and toxins and confers virulence and pathogenicity potentials to Salmonella [golS. The gene sdiA, a regulator for AcraB, a multi-drug resistance pump [All the genes involved in drug efflux are part of the resistance nodulation cell division (RND) efflux systems and effectively perform their duty in synergy with TolC . Moreoveand golS . 
The prelmonella ; thus, tnce pump detectedmdtK and qnrB1 in our isolates is extremely crucial because they could synergistically offer Salmonella an advantage to develop resistance via a plasmid-mediated or efflux pump mechanism against fluoroquinolone [ramR, soxS and marA could equally induce resistance against fluoroquinolone via overexpression of acrAB-TolC efflux pump in Salmonella [OXA-1, OXA-2, OXA-7, CTX-M-14, and qnrB in some isolates is critical because they, respectively, confer resistance against third-generation cephalosporin and fluoroquinolone, all considered as the WHOs highest priority drugs [OXA-1 and OXA-2 genes exhibit broad-spectrum cephalosporin-hydrolyzing and carbapenem-hydrolyzing activities, respectively [The presence of uinolone ,23. Howelmonella . The detty drugs ,26. Partectively ,28. FurtSalmonella resides in the modification of lipid A, via mutations on the PmrA/PmrB system causing overexpression of LPS-modified genes [Salmonella isolates to polymyxin in this study unveils both clinical and veterinary importance [Polymyxin is a bactericidal polypeptide, which disrupts lipid A subunit of the LPS outer membrane of Gram-negative bacteria causing cell lysis and their eventual irreversible death. One of the key resistance mechanisms to polymyxin adapted by ed genes . The resportance . Not onlSalmonella Enteritidis has been the most prevalent world foodborne pathogen after S. Typhimurium based on its involvement in disease outbreaks [In fact, their belonging to large pathogenic families confirms the zoonotic status of NTS and their ability to exhibit broad-host adaptation. It is not surprising that serovar Enteritidis scored the highest probability to cause disease in humans. Previously, utbreaks . Curiousutbreaks . DespiteSalmonella pathogenicity islands described in Salmonella infections. The ubiquity of C63PI may explain its role in Salmonella survival during iron uptake, thus its conservation among Salmonella species [Salmonella pathogenesis [Salmonella uses SPI-5 to induce a pro-inflammatory immune response sometimes resulting in diarrhea [The species ,32. Out ogenesis . While Sogenesis . Additioogenesis . Furtherdiarrhea . Howeverdiarrhea . Conversdiarrhea following ISO 6579 [Salmonella isolates were confirmed using API-20 E kit and a qualitative real-time PCR assay [This study was a cross-sectional study. Prior to slaughter, five cattle were randomly chosen per week for every sampling session following the Meat Industry Guide describing sampling frequency for red meat carcasses . MoreoveISO 6579 . FinallyCR assay .Salmonella concentration (1.5 \u00d7 108 CFU/mL) and those of controls (Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 43300) were spread onto the surface of Mueller\u2013Hinton agar to which the antibiotic disks were placed and incubated for 18 to 24 h. The diameter of the zones of inhibition around each antibiotic disk was measured with a ruler and recorded to the nearest millimeter and isolates were classified as resistant, susceptible, or intermediate [rmediate . The antSalmonella overnight culture using Quick-DNA\u2122 Miniprep Plus Kit following the manufacturer\u2019s instructions. The purified DNA was quantified using a NanoDrop 2000c spectrophotometer and stored at \u221220 \u00b0C until use.Total DNA was extracted from Paired-end libraries were constructed with 0.2 ng/\u00b5L of purified DNA using the Nextera XT DNA Library Prep Kit as recommended by the manufacturer and were quantified using a Qubit fluorometer . 
WGS was performed on Illumina NextSeq platform using NextSeq 500/550 high-output kit v2 (300 cycles) at Murdoch University and on Illumina Miseq platform using pair-ended MiSeq reagent v3 kit (2 \u00d7 201 bp) at the BecA-ILRI Hub following the manufacturers\u2019 guidelines.Salmonella pathogenicity islands were detected using SPIFinder 1.0 [The qualities of the raw sequences were checked with FASTQC and trimmed using Trimmomatic 2.6 at Q score below 20. The trimmed data were assembled using SPAdes version 3.11, and genomes were annotated with the NCBI Prokaryotic Genome Annotation Pipeline . The antnder 1.0 at \u226595% nder 1.0 while vinder 1.0 with thep \u2264 0.05, using the methods of Duncan [Statistical significance for all tests was set at the level of f Duncan , and desf Duncan . Sensitif Duncan . The odd"} +{"text": "The rapid expansion of the number of adult patients with inherited metabolic diseases (IMDs) has created demand for physicians with expertise in the field of adult metabolic medicine (AMM). Unfortunately, existing accredited training programs in this field are rare, and training programs in pediatric metabolic medicine cannot fully meet the needs of AMM physicians as the types of patients and the problems they face are different in the adult setting. We surveyed a group of working practitioners in AMM for input on what medical expert competencies they feel should be included as part of training programs in AMM. Through a modified Delphi process, 66 physicians from six continents reached consensus on a comprehensive list of training competencies in AMM. This list includes competencies from the fields of adult internal medicine, neurology, medical genetics, and pediatric metabolic medicine but also includes competencies not found in any of those programs, leading to the conclusion that the training needs for specialists in AMM cannot be met from any of these existing programs. We propose that AMM be considered a subspecialty separate from pediatric metabolic medicine and that accredited training programs in AMM be created using these medical expert competencies as part of a broader program design. We have developed a list of medical expert competencies for specialists in adult metabolic medicine using input from working clinicians in this field.1Adult metabolic medicine (AMM) is a relatively new discipline focusing on the care of adult patients with inherited metabolic diseases (IMDs). Clinical care of adults with IMDs has emerged as a new challenging reality.2We consolidated training competencies for Canadian programs in medical genetics,The consolidated list of medical expert competencies then underwent a multistage process Figure\u00a0 to reach2.1The list was reviewed by a core group of highly experienced AMM physicians from large AMM centers from five different countries. Reviewers were asked which competencies they felt were of little relevance and could be dropped from the list as well as for suggestions for medical expert topics they felt were not included in the consolidated list. These suggestions were incorporated into the first revision of the list (shown in the Appendix\u00a02.2An online questionnaire was developed and distributed by email to members of the AMM list serve which is an email group affiliated with the Society for the Study of Inborn Errors of Metabolism (SSIEM). 
Respondents were asked to rank the competencies from the first revised list as to their relevance using a 4\u2010point rating scale , and were also able to add new training competencies if they felt these had been missed from the list.2.3Those topics rated as \u201cimportant\u201d or \u201cvery important\u201d by fewer than 70% of the survey respondents in Stage 2 were dropped from the list. Those topics meeting the 70% threshold were then subdivided into competencies viewed as \u201cmandatory\u201d (ranked as \u201cvery important\u201d by 50% or more of respondents) and \u201crecommended\u201d (ranked as \u201cimportant\u201d or \u201cvery important\u201d by >70% but as \u201cvery important\u201d by <50% of survey respondents). These designations were then reviewed again by the core group of experienced physicians who could recommend changes in the designations from \u201crecommended\u201d to \u201cmandatory,\u201d leading to the creation of the second revision of the competency document.2.4The second revision of the competency document was circulated by email to the full AMM list serve and respondents were asked if they did or did not support this list of training competencies using an on\u2010line voting tool.2.5A final consensus document with the list of mandatory and elective competencies for specialists in adult metabolic medicine was developed (see Appendix\u00a03N\u00a0=\u00a018), medical genetics (N\u00a0=\u00a014) and metabolic medicine (N\u00a0=\u00a010). Some respondents reported more than one area of specialty training. These respondents were highly experienced, with 60% having 10 or more years in practice and almost half of respondents were in settings which followed 500 or more patients.The consolidated list of competencies included competencies in 10 medical expert domains including basic science in cellular biology and genetics, consultative expertise in AMM, clinical expertise in IMDs in adults, appropriate use of laboratory testing, longitudinal care of adults with IMDs, treatment, management of contraception, pregnancy and lactation, transition management, management of complications, and critical appraisal. Forty\u2010nine working AMM physicians participated in Stage 2 and 35 physicians in Stage 4. Not all physicians participated in both Stages 2 and 4 so, in total, 66 working AMM physicians (55% of the 119 physicians who participate in the SSIEM email list group) participated in one or more of the phases of this project. Demographic details and clinical experience of the respondents in Stage 2 are included in Table\u00a0There was widespread agreement among respondents with 95.2% of the 166 competencies identified in first revision ranked as important or very important by 70% or more of respondents. Table\u00a0Table\u00a0In Stage 4, the competency document with revisions from Stage 3 was again circulated to the list\u2010serve and participants were asked to vote on the document using an on\u2010line voting tool. All 35 respondents in Stage 4 voted to approve the content of the document. Again, the respondents were from a very broad geographic distribution and a wide variety of clinical backgrounds. The final approved competency document is included in Appendix\u00a04We engaged working physicians in AMM to develop competencies for dedicated training in AMM. These competencies are available for use by any country involved in setting up accredited training in this field. 
The strengths of our project are the broad consultative input (66 working physicians from 6 continents) and the high level of clinical experience of our respondents who have real world experience in the field. Our project is limited though by focusing on medical expert competencies only so complete objectives of training which specify the other physician roles will need to be developed in individual countries. Also, our study has underrepresentation of some areas of the world, notably the United States. We did approach the Society for Inherited Metabolic Disorders (SIMD) which is a US based organization dedicated to the study of IMDs similar to the SSIEM. However, the SIMD does not have a specific adult subsection so we were unable to distribute our survey to clinicians who may be members of the SIMD but not the SSIEM.Although not intended as a needs assessment for clinicians with expertise in AMM, it is notable that >40% of the respondents to Stage 2 of our survey were 50\u2009years of age and over. This fact, when combined with the increased numbers of adult patients with IMDs who require expert care, underscores the need to rapidly expand the availability of accredited training programs in this area.The competencies prioritized by our experienced clinicians are derived from multiple other disciplines. We can infer from this that none of the existing training programs are sufficient to provide the training thought necessary for consultants in AMM. Our responders put a high priority on training in critical appraisal of drugs for rare diseases. Such training would have to be a dedicated feature of training programs in AMM as the tools for appraisal of rare disease drugs differ from those for common drugs and therefore are not taught in internal medicine, general pediatrics, or medical genetics training programs. Such training would also be relevant to accredited training programs in pediatric metabolic medicine. Tools to facilitate training in this area could be developed by specialty organizations such as the SSIEM and made available to sites training in both pediatric and AMM.Given the need for diverse training derived from multiple other specialties as well as specific training in areas relevant specifically to rare diseases which is not available in any other specialty, we propose that adult metabolic medicine should be recognized as a distinct subspecialty of medicine which can be reached through multiple specialty paths such as internal medicine or medical genetics. Subspecialty training programs in AMM require flexible curricula to accommodate the different specialty training backgrounds of residents. For example, all trainees would need to have training in the competencies specifically related to adult IMDs whereas trainees from medical genetics may need additional internal medicine training and trainees from internal medicine would need medical genetics training . We suggest that any accredited training program in AMM should be located in a facility able to address the needs of trainees from different backgrounds. This specific subspecialty designation and flexible routes of entry are already available in the United Kingdom. The updated list of competencies specified by the existing UK AMM programIn conclusion, we have developed a consensus document on medical expert training competencies in AMM using a consultative process that drew on the experience of working physicians in this field. 
We hope these competencies can be used to stimulate the development of more accredited training programs in AMM.Sandra Sirrs, Elisa Fabbro, and Annalisa Sechi participated in all aspects of study design, protocol development, data analysis and manuscript development. Members of the SSIEM Adult Metabolic Medicine Training Competencies Working Group all participated in the validation of the training competencies and approved the final competency document which is included as supplementary material to the manuscript.No funding was received for this study.Sandra Sirrs, Elisa Fabbro, and Annalisa Sechi declare no conflicts of interest.No ethics approval was required for this study.This article does not contain any studies with human or animal subjects performed by the any of the authors.Appendix S1 First revision of the list at completion of stage 1Click here for additional data file.Appendix S2 Adult Metabolic Medicine Competency document \u2010 the approved version (Stage 4) including the full list of competencies voted on by survey participantsClick here for additional data file.Table S1 Characteristics of survey respondents in Stage 2.Click here for additional data file."} +{"text": "This study aimed to estimate annual health care and lost productivity costs associated with excess weight among the adult population in Belgium, using national health data.2), normal weight (18.5\u2009\u2264\u2009BMI\u2009<\u200925\u00a0kg/m2), overweight (25\u2009\u2264\u2009BMI\u2009<\u200930\u00a0kg/m2) and obesity (BMI\u2009\u2265\u200930\u00a0kg/m2). Health care costs were also analysed by type of cost . The cost attributable to excess weight and the contribution of various other chronic conditions to the incremental cost of excess weight were estimated using the method of recycled prediction (a.k.a. standardisation).Health care costs and costs of absenteeism were estimated using data from the Belgian national health interview survey (BHIS) 2013 linked with individual health insurance data (2013\u20132017). Average yearly health care costs and costs of absenteeism were assessed by body mass index (BMI) categories \u2013 i.e., underweight compared to the normal weight population: \u20ac2,015 per capita. The annual total incremental costs due to absenteeism of the population affected by overweight and obesity was estimated at \u20ac1,209,552,137. Arthritis, including rheumatoid arthritis and osteoarthritis, was the most important driver of the incremental cost of absenteeism in individuals with overweight and obesity, followed by hypertension and low back pain.The adjusted incremental annual health care cost of excess weight in Belgium was estimated at \u20ac3,329,206,657 . The comorbidities identified to be the main drivers for these incremental health care costs were hypertension, high cholesterol, serious gloom and depression. Mean annual incremental cost of absenteeism for overweight accounted for \u20ac242 per capita but was not statistically significant, people with obesity showed a significantly higher cost (The mean annual incremental cost of excess weight in Belgium is of concern and stresses the need for policy actions aiming to reduce excess body weight. This study can be used as a baseline to evaluate the potential savings and health benefits of obesity prevention interventions.The online version contains supplementary material available at 10.1186/s12889-022-14105-9. 
The sustained global increase of overweight and obesity over the last 40\u00a0years puts a heavy burden on the health system worldwide . In 2019Despite the disturbing figures in the global obesity prevalence and the related costs, no country or subpopulation was able yet to reverse the upward trend of obesity , 6. AddrIn Belgium, as in many high-income countries, average body mass index (BMI) has increased over the past decades among both children and adults. According to the Belgian health examination survey (BHES), in 2018, more than half of the adult population was affected by overweight and 16% was affected by obesity . In addiConsidering the importance of this risk factor and the need for updated evidence, this study investigates the burden of excess weight including overweight and obesity among the adult population on annual health care costs and lost productivity costs in Belgium, and investigates to what extent differences in expenditures by BMI differ by socio-demographic characteristics and comorbidity burden.N\u2009=\u200910,828) and comprises data on health status and related health behaviour and determinants. Respondents were recruited following a multistage sampling design, as described in detail elsewhere [https://www.riziv.fgov.be/nl/themas/kost-terugbetaling/door-ziekenfonds/verzorging-ziekenhuizen/Paginas/verpleegdagprijzen-ziekenhuizen.aspx). Finally, we summed up the estimated fixed costs with the available variable hospital costs resulting in the total hospitalization costs used in this analysis.Individual participant health care costs related to obesity and overweight were obtained by linking two national databases, i.e. the Belgian Health Interview Survey (BHIS) 2013 and the national health insurance data compiled by the Intermutualistic Agency (IMA) 2013\u20132017. Linkage was performed by means of a National Registry Number. The BHIS was conducted between January and December 2013 among a representative sample of the Belgian population who reported weight and height and for whom linkage with health insurance data was possible and were continuously insured from 2013\u20132017 (latest linkage available). People who deceased during the study period (from their participation to the BHIS until 31/12/2017) were excluded. The final study sample comprised 7,633 participants , normal weight (18.5\u2009\u2264\u2009BMI\u2009\u2264\u200924.99\u00a0kg/m2), overweight (25\u2009\u2264\u2009BMI\u2009<\u200930\u00a0kg/m2) and obesity (BMI\u2009\u2265\u200930\u00a0kg/m2) [Health care costs were analysed by BMI category calculated from self-reported weight and height obtained from the BHIS using the classification recommended by the World Health Organization, i.e., underweight (BMI\u2009<\u200918.5\u00a0kg/m0\u00a0kg/m2) . Socio-dN\u2009=\u20093,857) \u2013 individuals that stated to have a paid job at the moment of the interview.Absenteeism was reported in the BHIS as days absent from work during the 12\u00a0months prior to the BHIS interview queried by the following question: \u201cHave you been absent from work during the past 12\u00a0months due to health problems? In doing so, take into account any conditions, injuries or other health problems you may have had and which resulted in an absence from work\u201d. Followed by the question: \u201cHow many days in total have you been absent from work for the past 12\u00a0months due to health problems? If you are unable to indicate this number of days correctly, please give an estimate.\u201d. 
The question was asked to working individuals only minus 10 public holidays) . Howeverst of January 2018 [N\u2009=\u20093,906,170) [The final regression model allowed to estimate the adjusted attributable costs and associated uncertainties of overweight and obesity compared to normal weight. Incremental costs were estimated at the individual level using the method of recycled predictions that allows to estimate the marginal effect from overweight and obesity on health care costs , 17. TheTo investigate the relative importance of chronic conditions contributing to differences in health expenditure of persons with obesity and overweight compared to normal weight individuals, we evaluated how much of the attributable cost of excess weight can be attributed to each of 23 diseases. For this, we 1) extended the regression model for health care costs to also include the considered disease along with the covariates significant in a model with the disease as dependent variable; 2) used the model to predict health care cost assuming all respondents had a normal BMI, keeping all other characteristics (including disease status) as observed; 3) subtracted the predictions obtained in step 2 of the previous section from the obtained predictions; and 4) calculated how much of the attributable cost of obesity is due to the considered disease as the population survey-weighted average of the individual incremental cost obtained in the previous step, and dividing this by the average incremental cost of excess weight. For these analyses, the underweight population was omitted, considering that diseases related to underweight are commonly different from those related to excess weight. This method allowed to rank the diseases by their relative contribution to the incremental cost of obesity.Table Table p\u2009<\u20090.001) compared to normal weight . In addition, the multinomial regression of BMI categories as function of the candidate covariates revealed that the same predictors were significant (see Appendix Table N\u2009=\u2009222) was excluded from further analysis.Since increased health expenses in individuals with overweight and obesity compared to individuals with normal weight are also related to socio-demographic factors and chronic health conditions, results presented in Table N\u2009=\u20094,624; 60.6%). Since lack of physical activity is an important behavioural risk factor for chronic diseases, this indicator was kept as possible confounder in the multivariable model even if its inclusion would lead to a reduced sample size. In addition, sociodemographic characteristics of the reduced sample did not differ much from the original sample \u2013 see Appendix Table 3,857 individuals were identified as the adult working population and included within the analysis . The mean incremental cost of absenteeism in individuals with underweight and overweight was \u20ac360 and \u20ac242 per capita respectively but did not differ significantly from zero than average costs among individuals with normal weight. When adjusting for age, gender, household educational level and the lack of physical activity, the cost gap was reduced to 24% and 36% for respectively the population with overweight and obesity. Regarding the costs of absenteeism, individuals with obesity had a significantly higher cost compared to people with a normal weight (87% higher). Our results showed that in Belgium approximately \u20ac3.3 billion is spent yearly on average for direct healthcare costs due to excess body weight. 
It represents approximately 13.5% of the total yearly healthcare costs in Belgium and 10% of the yearly budget reserved to healthcare . Yearly In line with our estimates, OECD showed that the average healthcare expenditure for a person affected by obesity is 25% higher than for someone of normal weight . MoreoveConsidering that high BMI is associated with increased comorbidity, contributing to an increase in costs, we also investigated the relative contribution of different chronic diseases to the cost attributable to excessive body weight. In our study, hypertension constitutes by far the major contributor to incremental costs due to excess weight, followed by high cholesterol and serious gloom or depression. Different type of arthritis formed the main comorbidity driving the costs related to absenteeism, followed by hypertension and low back pain.In a study conducted in the US looking at electronic medical records and claims, hypertensive diseases, dyslipidaemia, and osteoarthritis were the three most expensive obesity-related comorbidities at the population level; each responsible for $18 million annually. Moreover, it was found that hypertension and osteoarthritis were much more costly among individuals with obesity than those without obesity . In PaduOur study provides valuable information on the extent of the societal impact that excessive weight status has in Belgium. The approach of recycled predictions has allowed us to compare direct and indirect healthcare costs among different BMI categories while adjusting for confounding by including important sociodemographic and health status covariates in the models. Our findings are also important from a health policy perspective, in the planning of strategies for health care cost containment. From a public health perspective, a sustainable approach towards effective prevention of the most impactful diseases is a more affordable strategy . Public We acknowledge some limitations within our study. First, there are some limitations that are intrinsic of the nature of our data sources. Self-reported data, deriving from national surveys, is subject to recalling and social desirability biases. This might have influenced primarily the reporting of height and weight, known to be a source of underestimation within the BHIS , as wellConsidering that there is currently no national nutrition and physical activity health plan in Belgium , our estBased on national health and financial estimates, we found that high BMI has a substantial societal economic burden in Belgium. We estimated that every year at least \u20ac4.5 billion are spent to cover the direct and indirect costs related to overweight and obesity. Policies and interventions are urgently needed to reduce the prevalence of overweight and obesity thereby decreasing these substantial costs.Additional file 1: Table 6. Multinomial regression of body mass index classes in function of age, gender, educational level and lack of physical activity, Belgian population \u2265 18 years \u2013 underweight population was excluded, BHIS2013-IMA2013-2017 \u2013 the model includes the confounders that were significant after backwards stepwise elimination . Table 7. Socio-demographic characteristics by body mass index category, Belgian population \u226518 years, health interview survey 2013 for population included in the multivariate regression . Table 8. 
Chronic disease in function of available confounders \u2013 coefficient (standard error), Belgian population \u2265 18 years \u2013 underweight population was excluded, BHIS 2013-IMA2013-2017 \u2013 the model includes the confounders that were significant after backwards stepwise elimination. Table 9. Cost of absenteeism in function of body mass index classes adjusted for age, gender, educational level, nationality, lack of physical activity, tobacco use and daily intake of sugared drinks - Belgian population \u2265 18 years, BHIS 2013-IMA2013-2017 . Table 10. Relative contribution of chronic disease to the direct and indirect costs (in percentage).Below is the link to the electronic supplementary material."} +{"text": "Regarding the overall depiction rate, the standard mode was able to reconstruct 96.9% of the planes properly, whereas the static mode showed 95.2% of the planes (p = 0.0098). Moreover, there was no significant difference between the automatic measurement of the cardiac axis . (4) Conclusions: Based on our results, the FINE static mode technique is a reliable method. It provides similar information of the cardiac anatomy compared to conventional STIC volumes assessed by the FINE method. The FINE static mode has the potential to minimize the influence of motion artifacts during volume acquisition and might therefore be helpful concerning volumetric cardiac assessment in daily routine.(1) Objective: To scrutinize the reliability and the clinical value of routinely used fetal intelligent navigation echocardiography (FINE) static mode (5DHeartStatic\u2122) for accelerated semiautomatic volumetric assessment of the normal fetal heart. (2) Methods: In this study, a total of 296 second and third trimester fetuses were examined by targeted ultrasound. Spatiotemporal image correlation (STIC) volumes of the fetal heart were acquired for further volumetric assessment. In addition, all fetal hearts were scanned by a fast acquisition time volume (1 s). The volumes were analyzed using the FINE software. The data were investigated regarding the number of properly reconstructed planes and cardiac axis. (3) Results: A total of 257 volumes were included for final analysis. The mean gestational age (GA) was 23.9 weeks (14.3 to 37.7 weeks). In 96.9 and 94.2% at least seven planes were reconstructed properly ( Congenital heart disease (CHD) is the most common birth defect ,2,3,4. CStill, the prenatal detection rates of CHD by conventional screening methods remain low, as shown by detection rates ranging from 15 to 39% ,13. VariNew solution approaches are needed in order to solve these problems. Several new methods have therefore been proposed. In addition to expensive, time-consuming and relatively insufficient training programs, technical solutions might be useful. Specifically, four-dimensional 4D) ultrasound with spatiotemporal image correlation (STIC) has been shown to overcome some of the mentioned difficulties D ultraso,16,17,18Nevertheless, the analysis of a STIC volume remains relatively difficult . A STIC In the recent past, a semiautomatic algorithm including artificial intelligence has been introduced to face the challenges of STIC volume analysis . The ideTM). In cases of CHD, FINE was able to show a sensitivity of 98% and a specificity of 93% [FINE has proven to be able to generate the nine fetal echocardiography views in up to 100% of STIC volumes derived from normal cases ,24,25,30y of 93% . Therefoy of 93% . 
Recent y of 93% ,33.Currently, our group was able to show the effectiveness of this technique in unexperienced hands as well The quality of FINE is based on the quality of the STIC volumes used for analysis . In partAll women undergoing second and thirdtrimester ultrasound at the Women\u2019s University Hospital of Schleswig-Holstein, Campus Luebeck, are routinely investigated by additional 3D and 4D echocardiographic examination with STIC volume acquisition compared to conventional 2D fetal echocardiography. Those volumes are stored and used for both onsite processing and future offsite analysis by FINE. The evaluation was approved by the local ethics committee. The volumes used for this study were acquired between September 2019 and January 2022. We examined a total of 296 fetuses during second and thirdtrimester targeted ultrasound. For this study, we excluded abnormal hearts. The included volumes were obtained by two expert investigators (J.W. and M.G.) and had to match certain quality requirements . The quality was judged by one expert investigator (J.W.).All volumes included in this study were acquired by two physicians, being experts in fetal echocardiography, using a Samsung Hera W10 device . The volumes were recorded starting from the four-chamber view using a mechanical convex transducer by performing automatic transverse sweeps through the fetal chest. The acquisition time for the conventional STIC volumes ranged from 9 to 12 s. In addition, all fetal hearts were scanned by a fast acquisition time volume (acquisition time 1 s). The acquisition angles for both kinds of volumes ranged from 15 to 35\u00b0, depending on gestational age.The analysis of the acquired volumes took place on the ultrasound machine using the installed FINE software by the same investigators. FINE semiautomatically generates nine standard fetal echocardiography planes ((1) four-chamber view; (2) five-chamber view; (3) left ventricular outflow tract; (4) short-axis view of great vessels/right ventricular outflow tract; (5) three-vessel\u2013trachea view; (6) abdomen/stomach; (7) ductal arch; (8) aortic arch; and (9) superior and inferior vena cava). In order to generate the planes, the operator was instructed by the software to mark seven anatomical landmarks of the fetal heart ((1) cross-section of the aorta at the level of the stomach; (2) cross-section of the aorta at the level of the four-chamber view; (3) crux; (4) right atrial wall; (5) pulmonary valve; (6) cross-section of the superior vena cava; and (7) transverse aortic arch), resulting in a complete reconstruction of the nine diagnostic views of the fetal heart. This marking is guided by the software .The process of analyzing the fast-acquired volumes works exactly the same way. The only difference is that the volumes contain no movement of the fetal heart, in contrast to the normal volumes, which are in motion. The process of generating the nine diagnostic planes using FINE and FINE static mode is illustrated in the In addition, both modes automatically measure and display the angle of the fetal cardiac axis.VIS-Assistance allows user-independent sonographic exploration of the surrounding structures in each of the nine cardiac diagnostic views . The volAll volumes were analyzed using the FINE software and rated by the expert panel regarding the number of those properly reconstructed. These were subsequently compared to those derived from the static approach. 
For each case, the overall image quality was assessed from \u201cvery good\u201d, \u201cgood\u201d and \u201cmoderate\u201d to \u201cpoor\u201d.t-tests and McNemar tests were applied. A statistical level of p < 0.05 was assumed to be significant.The data were investigated regarding the number of properly reconstructed planes and cardiac axis. GraphPad Prism 9 for Mac , GraphPad QuickCalcs , and Microsoft Excel 2016 for Mac were used. Descriptive statistics, 2 (18.6 to 65.3 kg/m2). The cases were rated regarding image quality, of which 10.1% (n = 26) were rated \u201cvery good\u201d, 50.2% (n = 129) were rated \u201cgood\u201d, 37.7% (n = 97) showed \u201cmoderate\u201d, and 2.0% showed (n = 5) \u201cpoor\u201d image quality.In total, 296 fetuses were investigated by standard and fast acquisition time STIC. We excluded 39 cases, of which 22 had abnormal hearts, 7 were in the first trimester of pregnancy, and 10 had incomplete data. A total of 257 volumes were included for final analysis. The mean gestational age (GA) was 23.9 weeks (14.3 to 37.7 weeks) and the mean BMI at scanning date was 27.5 kg/mp = 0.0961, not significant). Regarding the overall depiction rate, the standard mode was able to reconstruct 96.9% of the planes properly, whereas the fast mode showed 95.2% of the planes (p = 0.0098). The depiction rates for both modes are shown in In 96.9 and 94.2% at least seven planes were reconstructed properly , 7.4 and 5.8% for the aortic arch, and 4.3 and 6.2% for the superior and inferior vena cava, respectively. In addition, the static mode showed a high drop-out rate for the five-chamber view (7.0%). The drop-out rates for all planes and both modes are shown in p = 0.8827, not significant). Mean and standard deviation were not significantly different, and the results of both modes passed the normality test. The distribution of the cardiac axis angles is shown in Moreover, no significant difference according to the automatic measurement of the cardiac axis could be detected between the two different modes and 95.2% (static mode) showed a statistically significant difference, the difference is small enough to be rated as not clinically relevant.FINE static mode might be able to facilitate volume acquisition and plane reconstruction facing very active fetuses, saving time and without losing accuracy. This could be useful regarding the intention to establish FINE as a screening tool in daily routine. FINE static mode might need a lower number of sweeps to acquire a sufficient volume in comparison to the standard mode. We include FINE and FINE static mode planes derived from moving fetuses in the We think that the implementation of automatization and artificial intelligence in diagnostic processes such as fetal echocardiography is very promising. As mentioned above, the prenatal detection rates of CHD by conventional screening methods are low, as shown by detection rates from 15 to 39% ,13. The Furthermore, FINE demonstrated its abilities to work well with different features and in different situations. For example, Yeo et al. added color and power Doppler to FINE, which showed promising results . The ratIn one of our previous works, we investigated the use of FINE in different states of experience in fetal echocardiography and showed that experts, advanced practitioners and beginners in fetal echocardiography were able to adequately perform FINE in a short period of time (21\u201374 s per investigation), starting from an existing STIC volume . 
In addiAmong other authors, the value of 4D fetal echocardiography is often doubted. Some of them criticize a high user dependency and the absence of standardization. Novaes et al. showed that STIC volume acquisition was successful in 97.3% of patients, but all planes required for optimal fetal heart screening were seen in only 49% of their volumes . The conRoberts proposed two possible ways that4D echocardiography can be used to improve the detection rates of CHD on a large scale: (i) acquisition of STIC volumes locally at the screening center with subsequent analysis by a remote expert in fetal echocardiography; and (ii) storage and analysis of the volumes by the same examiner at a later time . RobertsIn our opinion, the ability to obtain a sufficient STIC volume and analyze it with FINE might be learned easier than performing accurate 2D fetal echocardiography. As we have shown, FINE works well in unexperienced hands . By compThe potential use of 4D echocardiography is not accepted by everybody. We think that new technical solutions might be very helpful, because the detection rates of CHD vary broadly and remain low in a general setting. According to the ISUOG guidelines, some of these variations can be conducted to different levels of examiner expertise . IntenseOur study has strengths and limitations. On the one hand, the relatively large sample size is a strength, especially because the volumes have been obtained under real-life conditions and might therefore be representative of other facilities. Our work is the first to show the practicability of FINE static mode in a routine setting. The volumes and corresponding generated planes of FINE static mode are not moving. As we have seen, there is no clinically relevant difference in the depiction rate compared to conventional STIC volumes processed by FINE. Nevertheless, it is conceivable that the detection or correct diagnosis of CHD might be limited because moving structures provide more information. In addition, FINE static mode cannot be combined with color Doppler. FINE static mode must prove its usefulness in detecting CHD in future studies, because we did not include cases of CHD. Another limitation is the expert acquisition of the volumes. In order to be used as a screening tool, the acquisition of STIC volumes must be reliable in non-expert hands as well. As we have demonstrated, the FINE postprocessing itself works in the hands of a beginner . In addiTo conclude, based on our results, FINE static mode is a reliable method. It provides similar depiction rates of the cardiac anatomy compared to conventional STIC volumes assessed by conventional FINE. FINE static mode has the potential to minimize the influence of motion artifacts during volume acquisition and might therefore be helpful when concerning volumetric cardiac assessment in daily routine. Evaluation of the fetal heart by FINE might be useful in facilitating the detection of fetal cardiac anomalies during general screening and, therefore, raise the detection rates of CHD. 
Future studies should aim to demonstrate the feasibility and validity of the complete workflow from volume acquisition to postprocessing via FINE and FINE static mode in less experienced hands, first-trimester fetuses and fetuses with CHD to prove the use of FINE as a screening tool."} +{"text": "Using a customized sound system, we fixed the SF at 120 steps\u22c5min\u20131 with SL variation (0.83\u20130.41 m) (SFfix) or fixed the SL at 0.7 m with SF variation (143\u201371 steps\u22c5min\u20131) (SLfix) during the subjects\u2019 sinusoidal walking. Both the subjects\u2019 preferred locomotion pattern without a sound system (Free) and the unprompted spontaneous locomotor pattern for each subject (Free) served as the control condition. We measured breath-by-breath ventilation [tidal volume (VT) and breathing frequency (Bf)] and gas exchange . The amplitude (Amp) and the phase shift (PS) of the fundamental component of the ventilatory and gas exchange variables were calculated. The results revealed that the SFfix condition decreased the Amp of the Bf response compared with SLfix and Free conditions. Notably, the Amp of the Bf response under SFfix was reduced by less than one breath at the periods of 5 and 10 min. In contrast, the SLfix condition resulted in larger Amps of Bf and E responses as well as Free. We thus speculate that the steeper slope of the E-2 relationship observed under the SLfix might be attributable to the central feed-forward command or upward information from afferent neural activity by sinusoidal locomotive cadence. The PSs of the E, 2, and 2 responses were unaffected by any locomotion patterns. Such a sinusoidal wave manipulation of locomotion variables may offer new insights into the dynamics of exercise hyperpnea.We tested the hypothesis that restricting either step frequency (SF) or stride length (SL) causes a decrease in ventilatory response with limited breath frequency during sinusoidal walking. In this study, 13 healthy male and female volunteers participated. The walking speed was sinusoidally changed between 50 and 100 m\u22c5min Humans\u2019 sinusoidal exercising is clearly voluntary rhythmic movement in response to the stress of a varying work rate or speed. The resulting exercise-induced hyperpnea is expected to be integrated with chemical feedback from both central and periMx), an amplitude (Amp), and a phase shift (PS) on gas exchange kinetics (E) were observed when the limb-movement frequency was varied sinusoidally by alterations in cycling work rates would remain constant.The synchronization of limb movement and breathing rhythms has been observed in locomoting animals as locomf due to a respiratory-locomotor network the locomotive cadence may be a significant factor to cause exercise-induced hyperpnea, possibly involving muscle reflex drives from type III and IV afferents or the c network . To testn = 7) and female (n = 6) volunteers who were not taking any medication that could affect cardiovascular responses. The subjects were fully informed of any risks and discomforts associated with these experiments before giving their written, informed consent to participate in the study, which was approved by the ethics committees of the Institutional Review Board of Doshisha University (no. 1045).The subjects were 13 healthy young male (\u20131 at periods (T) of 10, 5, 2, and 1 min. A warm-up session consisted of steady-state walking for 4 or 6 min, which preceded each recording sinusoidal exercise session. 
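The protocol above varies treadmill speed sinusoidally between 50 and 100 m/min at periods of 10, 5, 2 and 1 min. A minimal sketch of such a speed command is given below; the 1-s sampling interval and zero starting phase are assumptions for illustration, and only the 50-100 m/min range and the periods come from the text.

```python
# Sinusoidal treadmill-speed profile between 50 and 100 m/min, as described in
# the protocol above.  The sampling interval and the starting phase are
# assumptions for illustration.
import numpy as np

def speed_profile(period_min: float, n_cycles: int, dt_s: float = 1.0) -> np.ndarray:
    """Return the commanded speed (m/min) sampled every dt_s seconds."""
    t = np.arange(0.0, n_cycles * period_min * 60.0, dt_s)
    mean, amplitude = 75.0, 25.0          # midpoint and half-range of 50-100 m/min
    return mean + amplitude * np.sin(2.0 * np.pi * t / (period_min * 60.0))

speeds = speed_profile(period_min=5.0, n_cycles=3)
print(speeds.min(), speeds.max())         # approximately 50 and 100 m/min
```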
In Protocol I, after 50, 100, and 75 m\u22c5min\u20131 for 5, 3, and 4 min of warm-up walking, the sinusoidal loading was repeated for five cycles at 1-min periods, followed by three cycles at 2-min periods , and all subjects wore underwear, shorts, and a T-shirt, as well as socks and shoes. The protocols used herein are based on our previous work . The tre periods . In Prot periods . A microSLfix) condition, the SL was fixed at 0.7 m and the SF varied between 143 and 71 steps\u22c5min\u20131 in coordination with the sinusoidal changes in treadmill speed (SFfix) condition, the SF was set at 120 steps\u22c5min\u20131 with SL variation (0.83\u20130.41 m) during sinusoidal walking (Free) , we set two locomotion patterns. In the fixed SL . The subE) values by integrating the tidal volume and breathing frequency . The end-tidal oxygen pressure and carbon dioxide pressure were determined using mass spectrometry from a sample drawn continuously from the inside of the face mask at 1 ml\u22c5s\u20131. This loss of gas volume was not examined in this study, because the loss of 1 ml\u22c5s\u20131 was much smaller than the inspired and expired airflows. Three reference gases of known concentrations and room air were used to calibrate the mass spectrometer.A mass-flow sensor was fit to the expiratory port of the valve of the face mask worn by the subject to continuously record the subject\u2019s expiratory airflow, which was calibrated before each measurement with a 3-L syringe at three different flow rates. We calculated the ventilation (2), and partial pressure of oxygen (PO2) at the subject\u2019s mouth were recorded in real time with a 50-Hz sampling frequency using a computerized online breath-by-breath system from the time-aligned gas volume and concentration signals. Breath-by-breath E (BTPS), 2 (STPD), and 2 (STPD), VT, Bf, PETCO2, and PETO2 were determined. An electrocardiogram (ECG) was recorded using a bioamplifier . Heart rate (HR) was measured by beat-by-beat counting from the R spike of the ECG. The signals from the treadmill were fed into a data acquisition system and temporally aligned to the ventilatory and ECG data.The volumes, flows, partial pressure of carbon dioxide and the PS of the fundamental component (the same frequency as the input function) of the E, 2, 2, HR, and end-tidal PCO2 (PETCO2) responses as well as the locomotion responses (locomotion SF and SL) were computed as follows:All the cardiorespiratory and locomotive data were analyzed using a Fourier analysis as previously reported . The breandRe and Im are the real and imaginary components; these were calculated as follows. The larger the PS, the slower the response. The larger the Amp, the higher the responsiveness.where the andx(t) is the response value at time t (in s), Mx is the mean value of x for an integer number of cycles (N), T is the period of the input signal (in s), and f ( = 1/T) is its frequency in cycles per second.where Amp of the respiratory and locomotion variables against sinusoidal change in walking speed by dividing the magnitude of variables from 50 to 100 m min\u20131 during each steady-state exercise, and the results are presented as the Amp ratio (%) .The R-R intervals during sinusoidal work were calculated beat-by-beat by the computer, and 1-s interval HR data were measured from the calculated R-R intervals (R-R) and converted as HR values (60/R-R). 
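The expressions for Mx, Amp and PS referred to above ("were computed as follows") did not survive extraction. The sketch below is a standard single-frequency Fourier analysis of a sampled response x(t) over an integer number of cycles; it is a reconstruction consistent with the variable definitions given (Mx as the mean over whole cycles, Re/Im as the real and imaginary components at f = 1/T), not the authors' exact code, and the sign/phase convention is an assumption.

```python
# Fundamental-component (single-bin Fourier) analysis of a sampled response.
# This is a standard implementation, not the authors' exact formulas; the
# phase convention (delay of a cosine-shaped input) is an assumption.
import numpy as np

def fundamental_component(x, dt, period):
    """Mean (Mx), amplitude (Amp) and phase shift (PS, in seconds) of the
    component of x(t) at the input frequency f = 1/period, evaluated over an
    integer number of cycles (trailing partial cycles are discarded)."""
    samples_per_cycle = int(round(period / dt))
    n_cycles = len(x) // samples_per_cycle
    x = np.asarray(x[: n_cycles * samples_per_cycle], dtype=float)

    t = np.arange(len(x)) * dt
    f = 1.0 / period
    mx = x.mean()
    re = np.sum((x - mx) * np.cos(2.0 * np.pi * f * t))
    im = np.sum((x - mx) * np.sin(2.0 * np.pi * f * t))

    amp = 2.0 * np.hypot(re, im) / len(x)        # amplitude of the fitted sinusoid
    phase_rad = np.arctan2(im, re)               # lag relative to a cosine input
    ps = (phase_rad / (2.0 * np.pi)) * period    # phase shift expressed in seconds
    return mx, amp, ps
```

The Amp ratio (%) described above would then follow by dividing Amp by the difference between the steady-state values measured at 50 and 100 m/min.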
The subject\u2019s locomotion SF and SL were measured with a switch activated by stepping on a sensor on the sole of the right foot in each protocol .f, 2, 2, and HR) was determined by a two-way repeated-measures analysis of variance (ANOVA) in the comparison of the three locomotion patterns \u00d7 oscillation frequency period . Bonferroni\u2019s test was applied for the appropriate datasets if a significant F-value was obtained. We compared the regression coefficients of the independent variables of E of the three locomotion patterns . The level of significance was set at p < 0.05.All values are presented as mean \u00b1 SD. The significance of differences in each variable are given in Free locomotion, the Amp of the SF under the SLfix condition was significantly greater at all periods (PS for SF was not significantly different between the Free and SLfix conditions except at the 5-min period (p < 0.01) (The = 0.983) . The Mx = 0.255) . The PS < 0.01) .Amp of the SL under the SFfix condition was significantly larger than that under Free at all periods (PS for the SL at any period (The = 0.965) . In cont= 0.877) . There wy period .f showed markedly different responses. Specifically, under the SLfix condition, the PS for the Bf response was preceded by a delay in VT , whereas under the SFfix condition, the PS for the VT response was preceded by a delay in the Bf .SFfix condition induced significantly lower Amps in E and PETCO2 at allmetricconverterProductID1 A of the periods compared with the SLfix condition, with similar values between the SLfix and Free conditions (Amp response for the Bf remained unchanged (<1.0 breath\u22c5min\u20131) under SFfix . The Amp of the VT response tended to be larger under SLfix compared with SFfix and Free at all periods .The nditions . The Amper SFfix , with a = 0.204) . With re= 0.275) . A BonfePS in E, Bf, or PETCO2metricconverterProductID1 C, with period effects for E and PETCO2 , with a significant difference between the Free and SLfix conditions at only the 5-min period (p = 0.015). There were no significant main effects of the locomotion patterns or frequency period, and no interaction effect in the Mx for VT, Bf, and PETCO2 was closely related to the Amp ratio in the 2 when the data from all periods were pooled (E\u22122 relationship was steeper under SLfix (s: 1.19) than under SFfix (s: 0.70) and Free conditions (s: 0.97).The < 0.01) . The sloSLfix locomotion pattern increased the Amp of E ; (ii) the Amp of the Bf under the SFfix locomotion pattern remained unchanged ; and (iii) when the slope of the E\u22122 relationship under the Free condition was used as the reference (1.0), the slope under SLfix was steeper than that under Free, and the slope under SFfix was lower than that under Free. These phenomena may be explained as follows: afferent feedback from the limb is important for locomotor-respiratory entrainment, whereby the discharge rhythm of sensory inputs can entrain a central respiratory pattern generator (The three major findings of this study are as follows: (i) the p of V.E and metaenerator .\u20131. We thus considered the following possibilities: locomotor-respiratory entrainment forcing the synchronization of step movement and breathing rhythms is more likely to occur when the sinusoidal change in speed is synchronized with the sinusoidal change in SF . 
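The statistics paragraph above specifies a two-way repeated-measures ANOVA over the three locomotion patterns and the four oscillation periods, followed by Bonferroni tests. A generic sketch using statsmodels' AnovaRM is given below; the data frame is a synthetic placeholder, AnovaRM is only one possible tool rather than the software actually used in the study, and the Bonferroni step is not shown.

```python
# Two-way repeated-measures ANOVA (locomotion pattern x oscillation period).
# The data frame below is synthetic: one placeholder Amp value per subject and
# condition.  This is an illustrative setup, not the study's analysis script.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = range(1, 14)                       # 13 volunteers
patterns = ["Free", "SLfix", "SFfix"]
periods = ["10min", "5min", "2min", "1min"]

rows = [{"subject": s, "pattern": p, "period": T, "amp": rng.normal(5.0, 1.0)}
        for s in subjects for p in patterns for T in periods]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="amp", subject="subject",
              within=["pattern", "period"]).fit()
print(res)
```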
According to our hypothesis, the SFfix condition provided an unchanged Bf , with the Mx for the Bf approximately 24 breaths\u22c5min\u20131 at all periods larger than that under the control Free condition in all of the periods used herein, the SFfix locomotion induced the sluggish dynamics of the Amp of E, 2, and 2. Thus, the contribution of sinusoidal variation in SL to the adjustment in E would be less than that in a Free condition.Apparently, the constrained Bd PETCO2 , despitenditions . Therefoenerator . It was enerator . Even thE dynamics under different experimental conditions during leg cycling, namely, between sinusoidal cadence with a constant pedal force and sinusoidal pedal force with a constant pedal cadence compared with the Free condition , the slope under SLfix (1.2) was steeper than that under Free, and the slope under SFfix was gentler than that in Free .When we used the slope of the in Free . The gair center . In othe command or upwarFree locomotion, our subjects\u2019 preferred SL was likely to have a lower metabolic demand was lower than that under Free even though the Mx of E, 2, and 2 were not significantly different between the SLfix and SFfix conditions to emphasize the faster E response against locomotion. Contrary to their observations, the Amp values of E in this study were tightly coupled to those of 2 during sinusoidal walking. The contribution of limb movement to exercise hyperpnea has thus been a matter of debate , which has been treated in leg cycling . FurtherE\u22122 relationship under SLfix locomotion. However, we were unable to differentiate the involvements of peripheral afferent feedback from central command without the direct measures. The individual contribution of both neural factors to the ventilatory response in humans cannot be evidently stated from the experimental results in this study.It remains difficult to determine the relative contributions of the mechanoreflex vs. the metaboreflex to ventilatory control in humans , 2014. TSLfix locomotion pattern enlarged the Amp of E and metabolic responses compared with the SFfix pattern . Moreover, the Amp of Bf remained unchanged (<1.0 beats/min) under SFfix. The slope of the E\u22122 relationship was steeper by 1.23 times under SLfix and was gentler by 0.72 times under SFfix when the slope under the control (Free) condition was used as the reference. These results are explained as follows: afferent feedback from the limb is important in locomotor-respiratory entrainment, whereby the discharge rhythm of sensory inputs can entrain central respiratory-pattern generation. The PSs of E, 2, and 2 responses were unaffected at any of the locomotion patterns. Such sinusoidal wave manipulation of locomotion variables may offer new insights into the dynamics of exercise hyperpnea.In summary, the The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The studies involving human participants were reviewed and approved by the Ethics Committees of the Institutional Review Board of Doshisha University (no. 1045). The patients/participants provided their written informed consent to participate in this study.MF, TA, and YF conceived and design of the study. MF, TA, and KK collected the data. MF, MH, KK, and YF interpreted of the data. MF, MH, and YF wrote, reviewed, and approved the final manuscript. 
All authors contributed to the collection of data.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Pomacea poeyana, showed an exceptional ability to specifically inhibit biofilm formation of the laboratory strain ATCC 90028 as a model strain of the pathogenic yeast Candida albicans. In follow-up, here, we demonstrate that the derivatives Pom-1A to Pom-1F are also active against biofilms of invasive clinical C. albicans isolates, including strains resistant against fluconazole and/or amphotericin B. However, efficacy varied strongly between the isolates, as indicated by large deviations in the experiments. This lack of robustness could be efficiently bypassed by using mixtures of all peptides. These mixed peptide preparations were active against biofilm formation of all the isolates with uniform efficacies, and the total peptide concentration could be halved compared to the original MIC of the individual peptides (2.5 \u00b5g/mL). Moreover, mixing the individual peptides restored the antifungal effect of fluconazole against fluconazole-resistant isolates even at 50% of the standard therapeutic concentration. Without having elucidated the reason for these synergistic effects of the peptides yet, both the gain of efficacy and the considerable increase in efficiency by combining the peptides indicate that Pom-1 and its derivatives in suitable formulations may play an important role as new antibiofilm antimycotics in the fight against invasive clinical infections with (multi-) resistant C. albicans.In previous studies, derivatives of the peptide Pom-1, which was originally extracted from the freshwater mollusk Candida spp. with a mortality rate of up to 70% [Candida are commonly present on human skin and in the gut microbiome and can be detected in 60% of healthy individuals [C. auris [Candida albicans [Candida spp. from mucocutaneous sites into the bloodstream [C. albicans to react to its environment with morphological changes (switching between unicellular cells to pseudohyphae and hyphae) represents a further challenge to the host defense mechanism, as the different morphotypes also have different surface compositions [Candida spp. to form biofilms on biotic and abiotic surfaces [Candida spp. infections by modifying the fungal cell membrane, but the yeast cells protect themselves by forming biofilms and can thus acquire higher-level resistance [Invasive candidiasis is a fungal infectious disease referred to bloodstream and deep-seated infections caused by various p to 70% . Yeasts ividuals . These pividuals . Antifunividuals ,5. Flucoividuals . Althougividuals . Drug reividuals ,11,12,13C. auris . Thus, iC. auris . In intelbicans) ,17. Invalbicans) . In addiositions ,19,20. Ssurfaces ,22. Thersurfaces ,24. The surfaces ,26,27. Asurfaces . ConventPseudomonas aeruginosa by a factor of 13 and 155 [A promising class of drug molecules with a wide range of activity against viruses, bacteria, fungi, and parasites are natural and synthetic antimicrobial peptides (AMPs), also known as host defense peptides (HDPs) ,30. 
Thes and 155 . Due to and 155 . However and 155 .Pomacea poeyana and then chemically resynthesized. These peptides showed antimicrobial activity not only against the bacteria Pseudomonas aeruginosa, Klebsiella pneumoniae, and Listeria monocytogenes, but also against planktonic cells and biofilm formation of various Candida species, and in addition, low cytotoxicity against human macrophages was observed [Candida species and no significant toxicity towards human cells has been observed so far [Mollusks represent interesting organisms for the identification of AMPs as they protect themselves exclusively with their innate immune system as they do not possess an adaptive immune system and thus have a wide range of AMPs ,60. The observed ,61. Baseobserved ,62,63,64d so far .P. poeyana, we previously showed that these six derivatives, designated Pom-1A to Pom-1F, significantly increased the antibiofilm activity against C. albicans, with Pom-1B, Pom-1C, and Pom-1D showing the highest improvement compared to Pom-1 as the lead structure (Pom-1D > Pom-1B > Pom-1C) [C. albicans ATCC 90028 as the model pathogen. The aim of this study was to demonstrate that this activity is also present against clinical isolates of C. albicans collected from patients suffering from invasive infections. Laboratory strains can be expected to differ from invasive isolates obtained from patients in clinical environments concerning biofilm formation and resistance against antifungal drugs. In this study, we showed as a follow-up that the peptides in fact also show remarkable activity against invasive clinical isolates, including strains with a strong resistance against fluconazole and/or amphotericin B. Application in mixtures increased both the efficacy and the efficiency of the peptides. The preparations were active against biofilm formation of all isolates with uniform efficacy, and the total peptide concentration could be halved compared to the original MIC of the individual peptides. Interestingly, low concentrations of the peptides were found to be active in combination with 50% of the standard therapeutic fluconazole concentration for fluconazole-resistant isolates as well, suggesting a synergistic effect of the peptides and fluconazole. Without having elucidated the reason for these synergistic effects of the peptides so far, both the gain of efficacy and the increase in efficiency by combining the peptides lead us to believe that Pom-1 and its derivatives in suitable formulations may play an important role as new antibiofilm antimycotics in the fight against invasive clinical infections with (multi-) resistant C. albicans.Based on the original study of the Pom peptides from Pom-1C) . HoweverN-morpholino)propanesulfonic acid (MOPS), peptone, and yeast extract were obtained from Carl Roth GmbH . RPMI-1640 medium supplemented with L-glutamine was purchased from Thermo Fisher Scientific . Fluconazole was obtained from Merck KgaA , amphotericin B\u2014from Carl Roth GmbH . Each of Dulbecco\u2019s modified Eagle\u2019s medium (DMEM), fetal bovine serum (FBS) (10% (w/v)), and penicillin\u2013streptomycin ), as well as Accutase\u00ae and Eagle\u2019s minimum essential medium non-essential amino acids (MEM NEAAs) were obtained from Life Technologies . Phosphate-buffered saline (DPBS) was also sourced from Life Technologies .Acetic acid, agar-agar, crystal violet, 3- as laboratory strain was purchased from the IPK Laboratory of Medical Mycology. Clinical C. 
albicans isolates were provided from the patient samples sent to the Microbiology Department for diagnostic purposes. Strains were collected anonymously, and it is therefore not possible to assign the strains to patients. The accreditation number of the Microbiology Department is DIN EN ISO15189:2014 (DAkks). They were all cultured on Sabouraud dextrose agar . For suspension cultures, individual colonies were inoculated in test tubes in 5 mL of RPMI-1640 supplemented with L-glutamine and grown at 37 \u00b0C with orbital shaking at 150 rpm for 16 h.The derivatives Pom-1A to Pom-1F were designed by the Core Facility Functional Peptidomics of Ulm University led by PD Dr. Ludger St\u00e4ndker as described before .Candida spp. biofilm formation can be determined following the Clinical and Laboratory Standard Institute guidelines (M27-A3) [3 yeast cells were incubated in 200 \u00b5L of RPMI-1640 medium supplemented with L-glutamine, fluconazole, amphotericin B, and Pom-1A to Pom-1F. Incubation was performed on flat-bottomed polystyrene microplates with 96 wells for 24 h at 37 \u00b0C without shaking. The subsequent treatment with crystal violet was originally developed by George O\u2019Toole for bacteria and adapted to Candida biofilms [w/v) crystal violet solution for 15 min. After removing the solution, they were washed again twice with 200 \u00b5L demineralized water, and the microtiter plates were dried for at least 24 h at 25 \u00b0C. The stain was dissolved with 200 \u00b5L of 30% acetic acid and transferred to a new plate after 15 min. The absorbance at 560 nm was measured using a Tecan Infinite F200 microplate reader . The resulting data were evaluated against the untreated controls so that the efficacy of the agents could be determined.The antifungal effect of Pom-1A to Pom-1F on (M27-A3) . For thibiofilms . For thiw/v), 15% (w/v)), MEM NEAAs (1% (w/v)), and penicillin\u2013streptomycin ) at 37 \u00b0C in an incubator containing 5% CO2.For the experiments, adenocarcinomic human alveolar basal epithelial cells A549 and humaw/v) FBS for A549, DMEM supplemented with 15% (w/v) FBS for HDFs) was preheated to 37 \u00b0C before passaging. The medium was removed from the culture flask, and 3 mL of Accutase\u00ae were added. The cells with Accutase\u00ae were incubated for 5\u201310 min until the cells acquired a round shape. To ensure complete cell detachment, the culture flask was slapped against the back of the hand. The desired number of cells was aliquoted into a new culture flask with the medium already provided. The cells were then incubated at 37 \u00b0C with 5% CO2.An appropriate medium were added. After incubation for 24 h at 37 \u00b0C and 5% CO2, 20 \u00b5L of a resazurin solution (0.15 mg/mL) were added into each well, and the cells were incubated again for 24 h at 37 \u00b0C and 5% CO2. Fluorescence measurement of the resulting resorufin was then performed using a Tecan Infinite F200 microplate reader .A resazurin assay was used to detect the viability of the cells. Therefore, 2 \u00d7 10The peptide properties were determined using the ProtParam analysis tool (ExPASy) . The calAmphiphilic index determination was carried out by the addition of the mole percentages (X) of the amino acids alanine (Ala), valine , isoleucine (Ile), and leucine (Leu) considering the relative volume of valine side chains (a = 2.9) and Leu/Ile side chains (b = 3.9) of Ala.Pseudomonas biofilms [Candida biofilms [C. 
albicans isolates formed biofilms complying with the threshold that biofilm formation is regarded as significant when the biomass deposited on the plastic substratum of the microtiter plate exceeds 15% of the reference biofilms formed by the laboratory strain ATCC 90028 (w/v) and 2 \u00b5g/mL (w/v), respectively), and four isolates were found fluconazole-resistant, six\u2014amphotericin B-resistant. Strains were considered resistant when the biofilm was detectable and inhibition of the antimycotics was lower than 100% of 2.5 \u00b5g/mL (w/v), whereas planktonic growth was only inhibited moderately, and this inhibition was not improved by concentrations higher than 15 \u00b5g/mL (w/v) for the laboratory reference strain [C. albicans-inhibiting concentrations of 2.5 \u00b5g/mL (w/v), 15 \u00b5g/mL (w/v), and 25 \u00b5g/mL (w/v) . The original Pom-1 peptide failed to inhibit 70% of the biofilms of these isolates at the MBIC (2.5 \u00b5g/mL (w/v)), whereas the number of strains affected by the peptides in biofilm formation was considerably increased with the derivatives (w/v)), which also had a higher efficacy, with Pom-1C in particular being the best derivative, with a fourfold average increase in efficacy (threefold for Pom-1B). However, even the best peptides resulted in an inhibition of less than 50%. The tenfold increase in peptide concentration (25 \u00b5g/mL (w/v)) only led to a nonproportional 10% increase in efficacy, indicating that this slight improvement was due to a drastic decrease in efficiency (w/v) MBIC. At the highest concentration of 25 \u00b5g/mL (w/v), all the peptides except Pom-1B and the original Pom-1 were 100% active against several individual strains , which already disqualified this concentration for the development of a therapy, it led to considerable cytotoxicity towards HDF cells for Pom-1E and the original Pom-1 peptide, whereas no significant effects on the viability of A549 cancer cells were observed. In contrast, at 2.5 \u00b5g/mL (w/v) (MBIC), none of the peptides, as well as fluconazole, were toxic neither to HDF nor A549 cells, whereas Triton X-100 as a known cell-lysing and thus toxic control perfectly worked to reduce cell viability to zero (Apart from the lower efficiency of the peptides at 25 \u00b5g/mL ( to zero .w/v) per peptide) to give again the MBIC of 2.5 \u00b5g/mL (w/v) as an equally composed working solution. For this mixture, biofilm inhibition was considerably higher and resulted in an increase in affected isolates with a minimum inhibition of >70% with an average of >90% of all the strains tested, including the fluconazole- and amphotericin B-resistant candidates 6, 8, 9, 11, 13, 14, 17, and 18, demonstrating that this combination of peptides is also effective against multi-resistant C. albicans (w/v)) (w/v) and 0.625 \u00b5g/mL (w/v), representing 0.5\u00d7 MBIC and 0.25\u00d7 MBIC, respectively. The 0.5\u00d7 MBIC experiments demonstrated that the efficacy was stable with all the isolates affected >70%, and the average efficacy was still >90% (w/v)), again suggesting a substantial gain in efficacy as well as in efficiency when using the individual peptides in mixtures. Finally, we tested the opportunity to use the Pom-1 derivatives as agents that synergistically improve or restore the activity of fluconazole against the resistant isolates which lost the sensitivity against this classic antifungal compound. 
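The crystal-violet readout described in the methods above is evaluated against untreated controls, and isolates are regarded as significant biofilm formers when their biomass exceeds 15% of the ATCC 90028 reference. A minimal sketch of both calculations is given below; the blank correction and the exact formulas are assumptions made for illustration.

```python
# Evaluation of crystal-violet absorbance readings (A560): percent biofilm
# inhibition relative to the untreated control and the 15%-of-reference
# criterion for calling an isolate a biofilm former.  The blank correction and
# the exact formulas are assumptions, not taken verbatim from the study.
import numpy as np

def percent_inhibition(a560_treated, a560_untreated, a560_blank=0.0):
    """Biofilm inhibition (%) of a treated sample vs. the untreated control."""
    treated = np.mean(a560_treated) - a560_blank
    control = np.mean(a560_untreated) - a560_blank
    return 100.0 * (1.0 - treated / control)

def is_biofilm_former(a560_isolate, a560_atcc90028, threshold=0.15):
    """True if the isolate's biomass exceeds 15% of the ATCC 90028 reference."""
    return np.mean(a560_isolate) >= threshold * np.mean(a560_atcc90028)

print(percent_inhibition([0.21, 0.19, 0.22], [0.80, 0.85, 0.78], a560_blank=0.05))
```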
In this part of our study, the biofilm formation of the fluconazole-resistant isolates 6, 8, and 13 was tested in the presence of both fluconazole and the Pom-1A to Pom-1F peptides. Based on the 8 \u00b5g/mL (w/v) standard therapeutic concentration of fluconazole, which had turned out to be sub-inhibitory in our biofilm assay, and the information that 0.25\u00d7 MBIC (0.625 \u00b5g/mL (w/v)) of the individual peptides) had failed, we tested whether this peptide concentration with a 50% concentration (4 \u00b5g/mL (w/v)) of fluconazole as the second antifungal compound would gain activity. Interestingly, we in fact achieved remarkable effects for all the peptides against biofilms of all the three isolates ) D. To estill >90% B, C. Thiisolates E. BiofilC. albicans is the most prominent and prevalent pathogenic yeast in this context, since up to 90% of Candida infections originate from this microorganism [Candida infections are associated with respective biofilms, which contribute to the high mortality rates and qualify this microbial life form as one of the main virulence traits associated with full pathogenesis of candidiasis [C. albicans isolates were obtained from invasive infections, four of which were found to be resistant to fluconazole, six\u2014to amphotericin B, and two\u2014to both classic antifungal compounds. The emergence of resistance has been recognized as a significant potential threat, particularly occurring in patients with AIDS and cancer after long-term treatment with immunosuppressive medications [Candida infections has already been challenging due to the limited number of available antifungal drugs, the emergence of (multi-)resistant C. albicans poses a significant additional urgency for the development of alternative treatment options and potent novel antimycotics. AMPs have emerged as a promising class of alternative antifungal compounds and have been described to possess different modes of action with classic pore-forming peptides being the best established and best characterized subgroup. However, cytotoxicity is often a severe limitation of these peptides, and AMPs dedicated to inhibition of biofilm growth have been described [Candida biofilms while being only slightly active against planktonic cells [C. albicans. These were expected to differ in terms of their robustness, their capability to form biofilms, and, more importantly, could also be resistant to fluconazole or amphotericin B. Pom-1A to Pom-1F were active against most of the isolates, but the efficacy never exceeded 50%, accompanied by large standard deviations resulting from extremely different magnitudes of efficacy for individual isolates, with strains reacting perfectly to one of the peptides and not reacting to others. The peptide mixtures were not cytotoxic for HDF and A549 human cells at concentrations of 25 \u00b5g/mL as well. The mixture of Pom-1A to Pom-1F with the final concentrations equal to those in the single peptide analyses increased both the efficiency and the efficacy of the peptides since the concentration required to inhibit biofilm formation could be reduced to 50% while not only maintaining the efficiency, but also increasing the average inhibition as well as robustness of the activity, with more isolates being inhibited. 
Pom-1 and its derivatives were found not to belong to the group of AMPs with the classic mode of action and have been suspected to attack biofilm-relevant cell functions on the cell surface [Candida species have been found to use adhesins localized on the cell surface to form amyloidal fibril structures, which could be linked to the ability of pathogenic yeasts to form biofilms [w/v)) also regained activity against the fluconazole-resistant Candida strains (at a concentration of 8 \u00b5g/mL (w/v) fluconazole) by mixing it with the 0.25\u00d7 MBIC of the individual peptides Pom-1A to Pom-1F. The aim of a future combination therapy based on these results would, besides simply achieving synergism, also lie in avoiding the development of resistance. Such synergism has already been demonstrated with a combination of amphotericin B with the nucleobase derivative flucytosine in Candida, even in flucytosine-resistant strains [Candida-dedicated antibiofilm drugs.One of the most common causes of hospital-acquired infections is candidiasis by different species, with alarmingly high mortality rates when these infections become systemic or even reach internal organs like liver, kidneys, or stomach . C. albiorganism . It is wdidiasis . In a saications ,75,76. Sescribed ,63,65. Iescribed ), Pom-1 ic cells , but comic cells ,65, thus surface . The obs surface , or most surface ,81,82. A surface ,85,86,87biofilms . Withoutbiofilms . Often, biofilms ,92. Anot strains ,94. We bC. albicans in earlier studies, were dramatically less effective against some resistant invasive clinical isolates of the same yeast species. However, combining all the six derivatives, the formation of their biofilms could be prevented efficiently. Moreover, the combination of fluconazole with the individual Pom-1 derivatives remarkably regained the sensitivity of invasive fluconazole-resistant isolates to a conventional fungicide.Dedicated antibiofilm agents are an important issue, especially in the age of increasing microbial resistance development, not only to prevent biofilm formation, but also to possibly regain the activity of the standard antimicrobial therapeutics affected by the occurrence of resistance by posing additional selective pressure on the cells in the presence of the new compounds. The Pom-1 derivatives A\u2013F, which had already shown strong antibiofilm activity against the laboratory strain ATCC 90028 of the pathogenic yeast"} +{"text": "The continuing decline in the birth rate has led to a series of problems, such as the disproportion of population structure and severe aging population, which have restricted the country\u2019s economic development. To have a deeper understanding of the geographical differences and influencing factors of the birth rate, this paper collects and organizes the birth population data of 31 provinces in mainland China from 2011 to 2019. The national region is divided into seven natural geographical regions to obtain the spatial hierarchy, and a hierarchical Bayesian Spatio-temporal model is established. The INLA algorithm estimates the model parameters. The results show significant spatial and temporal differences in birth rates in mainland China, which are reflected mainly in the combination of spatial, temporal, and Spatio-temporal interaction effects. In the spatial dimension, the northeast is low, the northwest and southwest are high, and the birth rate has an upward trend from east to west. 
These trends are caused by unbalanced economic development, different fertility attitudes and differences in fertility security, reflecting regional differences in spatial effects. From 2011 to 2019, China\u2019s birth rate showed an overall downward trend in the time dimension. However, all regions except the northeast saw a significant but temporary increase in birth rates in 2016 and 2017, reflecting the temporal effect difference in birth rates. The world\u2019s major economies are facing the problem of population aging, and the global fertility decline has become an inevitable trend5. As the world\u2019s largest developing country, although China\u2019s total population is still rising, the birth rate has declined, and the problem of population aging is becoming more and more severe7. China has made significant adjustments to population policy to supply the population with more suitable for the demand of steady economic growth. For example, China began implementing the \u201duniversal two-child \u201d policy in January 2016, which allowed all families to have two children9. Actually, China\u2019s birth rate increased in 2016 and 2017, then reached 13.57% and 12.64% respectively, but then entered a period of decline. In the long run, the continued decline in the birth rate will bring a series of negative impacts, such as the imbalance of the population structure, the increased retirement burden of the working population, the deepening of social conflicts, and the economy development will be restricted in the future and so on. Therefore, the birth rate problem has become a hot topic in the current society and a critical problem that the Chinese government urgently needs to solve.The population is the foundation of sustainable development. When an economy grows to a certain magnitude, population aging is inevitable due to declining fertility, increasing life expectancy, and population migration12. With the increasingly significant regional differences in population, scholars from various countries began to analyze population data from a spatial perspective. For example, Velarde et al.5 analyzed the fertility trend and influencing factors of young women in Chile from 1992 to 2012. Their study found that the average fertility rate of adolescents has declined by 25% over the past 20 years, the affluent areas are lower than the poor areas, and fertility rates varied considerably between regions. Nandi et al.4 analyzed the trends and influencing factors of the total birth rate and repeat birth rate among Georgia adolescents from 2008 to 2016. Their study found that adolescents\u2019 overall fertility and repeated fertility rates have decreased significantly since 2008, especially in areas with poor reproductive health care conditions. With the deepening of China\u2019s reform and opening up, the regional development differences problem has become increasingly prominent, which has become an essential factor affecting social harmony16. Hu17 described the uneven geographical distribution of China\u2019s population and the economy as early as the 1930s. He divided China into two parts of similar size, east and west, with the western part accounting for 3.7% of the country\u2019s total population. In contrast, the eastern part accounted for 96.3% of the country\u2019s total population. Zhang et al.3 used descriptive statistics and binary logistic regression analysis methods to analyze the fertility willingness and influencing factors of the population in mainland China. 
Their study found that fertility intentions will inevitably decline due to the lack of a good fertility environment, rising education levels and increasing monthly household income. Wu et al.18 used a spatial econometric model to analyze the spatial pattern characteristics and driving factors of population aging in China. Their study found that uneven economic development is the main reason for China\u2019s aging population, which is high in the east and low in the west. Wei et al.19 analyzed the distribution characteristics and dynamic laws of the Chinese population using the spline-based method. Their study found that the economic development of China\u2019s provinces is uneven, the growth rate of the resident population varies significantly, the total population growth rate is declining rapidly, and the problem of population aging is prominent. Chen et al.20 analyzed the regional differences and influencing factors of population aging in China from 1995 to 2011 using the Theil index method. Their study found that the regional differences in population aging in China are apparent, fluctuating repeatedly and increasing gradually.The problem of regional differences has always been one of the most common and essential problems all around the world23 (INLA) algorithm . This study reveals the regional differences and temporal development trend of birth rate by analyzing the birth population and influencing factors in the Chinese Mainland. In addition, this study can not only serve as a reference for studying regional differences in birth rate but also put forward relevant suggestions and practical measures.At present, China\u2019s population structure is undergoing significant changes. With the birth rate becoming the focus of sociology, the regional difference in population birth rate has become a new hotspot in geographical research. However, studies that analyze the population structure at multiple spatial levels are still rare, and our research has just made up for this gap. In this article, the spatial regions are stratified by natural geographic regions, and a hierarchical Bayesian Spatio-temporal model is established. The model parameters are estimated by the integrated nested Laplace approximationsFrom 2011 to 2019, the birth rate of 31 provinces in mainland China is shown in Fig.\u00a0The geographic variation of birth rates in mainland China is mainly reflected in two levels, level 1 and level 2. The spatial effect on the birth rate is influenced by the spatial effect on two levels, that is, under the influence of the spatial effect on the level 2, it is affected by the spatial effect on the level 1. where the relative risk of spatial effects on level 1 During the period 2011 to 2019, the birth rate in mainland China is subject to a combination of temporal structural and temporal unstructured effects. The relative risk of temporal structure effect is Figure\u00a0In this paper, we establish a hierarchical Bayesian Spatio-temporal model and estimate the model parameters by the INLA algorithm, to explore geographic differences and temporal trends of the birth rates in mainland China from 2011 to 2019. The results show that the economic development is unbalanced, the birth rate is generally declining, the population growth rate is slowing down, and the population aging is severe in the Chinese Mainland. The birth rate has apparent spatial heterogeneity, and the difference between provincial regions is more significant than in natural geographical regions. 
The spatial dimension mainly shows that the northeast is low, the northwest and southwest are high, and the birth rate has an upward trend from east to west. These trends are caused by unbalanced economic development, different fertility attitudes, and differences in fertility security, reflecting regional differences in spatial effects. Regarding the time dimension, the changing birth rate trend is mainly affected by the overall economy and fertility policy that year, reflecting the temporal effect difference in birth rates. From 2011 to 2019, the birth rate mainly showed a downward trend. Although in 2016 and 2017, the birth rate in most parts of China established a significant increase, the rise gradually declined after 2018. For example, the birth rate growth from 2015 to 2017 is related to the \u201cuniversal two-child\u201d policy in 2016. However, due to the lack of a relevant security system after the universal two-child policy, residents\u2019 growth level of per capita disposable income could not meet the burdens caused by the second child\u2019s birth, so the birth rate began to decline again after 2017. It shows that the country can temporarily affect its birth rate through policies like allowing all families to have two children. However, to achieve stable growth in the birth rate, it is still necessary to establish full fertility protection measures in the follow-up.Based on the results of this study, this paper makes the following recommendations: first, it is necessary to speed up the process of economic construction in the western region, further reduce the problem of unbalanced and insufficient development among various areas in mainland China, and promote long-term balanced population development. Second, we must raise the fertility awareness of all residents, especially in economically developed areas such as North China and South China. Residents\u2019GDP and average disposable income are high, but the birth rate is low, and the birth rate has excellent room for improvement. Third, we must protect women\u2019s legitimate rights and interests, establish a comprehensive assistance and security system for families with two or three children, and further implement maternity security and supporting measures for residents in various regions.http://www.stats.gov.cn/tjsj/).Considering data acquisition\u2019s feasibility and completeness, this paper selects the birth population data of 31 provinces in mainland China from 2011 to 2019. To fully reflect the trend of geographical differences and try to minimize the interpretation of other influencing factors, this article interprets the influencing factors in the space as a spatial effect, analyzes the influencing factors in time as the time effect, and interprets the remaining effects as an inseparable Spatio-temporal interaction effect. Since the seven natural geographical divisions are based on science and follow the relevant zoning principles, the seven natural geographical divisions are regarded as large-scale, and the provincial regions are considered small-scale spatial stratification. Because the detailed data of the east, central and west of Inner Mongolia and the south-central and northeast of Hebei are not available, those are divided into north China in this study. The seven natural geographical regions are coded as 24:i and j represent the observation indices at the provincial and natural geographical regions scales, respectively. t represents the time observation index, ith region in year t. ith region in year t. 
25 are shown in Table Construction of a two-level Poisson spatial multiscale model based on hierarchical Bayesian Spatio-temporal model21 for latent Gaussian models that satisfy the Gaussian Markov random field conditions. In the INLA algorithm, the latent random field of model (1) is 27 (ICAR). Let n represents the number of regions, c adjacent to the region d. c. The spatial correlation between regions is defined by their spatial adjacency:The INLA algorithm is a fast calculation method proposed by Rue et al.26 (RW1), whose conditional distribution is:25, set by the interaction type in Table\u00a0The temporal structure effect Model (1) has four Spatio-temporal interaction types at level 1 and level 2, respectively. However, due to the superposition of interaction types at two levels, model (1) has 16 choices in Spatio-temporal interaction. In order to find the most appropriate interaction type for model (1) at both levels, we first disregard the spatial and Spatio-temporal interaction effects at level 2. At this point, we can get the model (2):29, a statistical indicator to compare Bayesian models\u2019 fitting effect and complexity. WAIC represents the widely applicable informationcite31, which is not affected by parameterization and is close to the Bayesian cross-validation results. It can be concluded from Table\u00a0The selection information of four spatiotemporal interaction types on level 1 of the model (2) is shown in Table"} +{"text": "Helicobacter pylori infection and the implementation of tobacco-restricting policies may have contributed to this decrease.Background: Peptic ulcer disease (PUD) is a common disease worldwide, especially in developing countries. China, Brazil, and India are among the world\u2019s fastest-growing emerging economies. This study aimed to assess long-term trends in PUD mortality and explore the effects of age, period, and cohort in China, Brazil, and India. Methods: We collected data from the 2019 Global Burden of Disease Study and used an age\u2013period\u2013cohort (APC) model to estimate the effects of age, period, and cohort. We also obtained net drift, local drift, longitudinal age curve, and period/cohort rate ratios using the APC model. Results: Between 1990 and 2019, the age-standardized mortality rates (ASMRs) of PUD and PUD attributable to smoking showed a downward trend in all countries and both sexes. The local drift values for both sexes of all ages were below zero, and there were obvious sex differences in net drifts between China and India. India had a more pronounced upward trend in the age effects than other countries. The period and cohort effects had a similar declining trend in all countries and both sexes. Conclusions: China, Brazil, and India had an inspiring decrease in the ASMRs of PUD and PUD attributable to smoking and to period and cohort effects during 1990\u20132019. The decreasing rates of Peptic ulcer disease (PUD) is defined as damage to the digestive tract with mucosal ruptures greater than 3\u20135 mm making the submucosa visible ,2. PUD iBoth Helicobacter pylori (HP) infection and the use of non-steroidal anti-inflammatory drugs (NSAIDs) are major aetiological factors for PUD and peptic ulcer complications ,10. The China, Brazil, and India share similar characteristics, including vast territories, dense populations, and abundant resources. As these three fast-growing developing countries are similar, they launched BRICS with Russia and South Africa, which aims to enhance cooperation between these countries. 
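Returning to the hierarchical Bayesian spatio-temporal model described above: the ICAR adjacency definition and the RW1 conditional distribution were lost in extraction. The sketch below builds the standard structure matrices behind these two priors in a Gaussian Markov random field/INLA setting; it reflects the usual textbook forms (ICAR precision Q = D - W, RW1 precision from first differences), not necessarily the paper's exact notation.

```python
# Standard structure matrices for the ICAR spatial prior and the RW1 temporal
# prior mentioned above.  These are the usual GMRF forms, given for
# orientation only; the paper's exact notation is not reproduced here.
import numpy as np

def icar_structure(adjacency):
    """ICAR structure matrix Q = D - W from a symmetric 0/1 adjacency matrix W,
    where D holds the neighbour counts on its diagonal.  Under the ICAR prior,
    each region's effect is conditionally normal around the mean of its
    neighbours, with variance inversely proportional to the neighbour count."""
    w = np.asarray(adjacency, dtype=float)
    return np.diag(w.sum(axis=1)) - w

def rw1_structure(n_periods):
    """First-order random-walk (RW1) structure matrix R = D1.T @ D1, where D1
    is the first-difference operator, i.e. x_t - x_{t-1} ~ N(0, sigma^2)."""
    d1 = np.diff(np.eye(n_periods), axis=0)
    return d1.T @ d1

# Toy example: three regions arranged in a line (1-2-3) and nine annual points.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
Q_space = icar_structure(W)
R_time = rw1_structure(9)
print(Q_space)
print(R_time.shape)
```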
PUD is a common concern in China, Brazil, and India. The prevalence of endoscopically confirmed PUD is higher in the population of Shanghai 17.2%) than in Europe (4\u20136%) . The pre.2% than BRICS accounts for about 25% of the world\u2019s gross national income, more than 40% of the global population and nearly 40% of the global illness burden ,20. AddiWe aim to analyze a 30-year trend in PUD and PUD attributable to smoking mortality by exploring the effects of age, period, and cohort in China, Brazil, and India. This analysis can assist governments in formulating targeted measures to meet the needs of their respective populations. Analyzing data from China, Brazil, and India can help us achieve the 2030 goal of reducing non-communicable diseases (NCDs) early. The age, period, and cohort (APC) model has been widely used to analyze the mortality trends of NCDs because it can demonstrate the effects of age, period, and cohort. We used the 2019 Global Burden of Disease Study (GBD) data to conduct an APC analysis of PUD in China, Brazil, and India. This is the first comprehensive report on an APC analysis of PUD in China, Brazil, and India in 30 years.Based on systematic and standardized estimations of 369 diseases and injuries from 1990 to 2019, GBD 2019 provides data from 204 countries and territories . GBD 201The population attributable fraction (PAF) is defined as the proportion of related diseases or deaths in a population that would be reduced if exposure to a certain risk factor was lowered to its theoretical minimum exposure level . In the smoking . The ASMThe cause of death ensemble model (CODEm) is a cause of death combinatorial modeling tool that can estimate cause-specific mortality in different places, including age and sex . DisMod-There are some ways to obtain original death data in China, Brazil, and India: for China, this mostly includes the Cause of Death Reporting System, Disease Surveillance Points ; for Brahttp://analy-sistools.nci.nih.gov/apc/( accessed on 26 October 2022).The APC Web Tool is used for parameter estimation, along with associated statistical hypothesis tests. We adopted this tool, which can be accessed here: Because those who died from PUD aged <15 years were few, they were not considered in this study. From 1990 to 2019, a series of consecutive 5-year periods were considered appropriate for classifying the ASMR of PUD. So, we obtained 5-year age groups from 15\u201319 years to 80\u201384 years, and the corresponding consecutive 19 birth cohorts . In addition, we referred to a central age group (45\u201349 years old), the 2000\u20132004 period, and the cohort of 1955\u20131959.Since birth cohorts were calculated according to the death time period and death age of the individual , we used the APC model to assess a relationship between age, period, and birth cohort and PUD mortality . BecausePeriod effects are changes with the passage of time that affect each age group at the same time and may be due to changes in the social, cultural, economic, or natural environment. Cohort effects are associated with changes between groups of persons born in the same year. Net drift shows the holistic annual percentage change in the expected age-adjusted ratio over time, expressed as an integral log-linear trend between period and cohort. Local drift represents the annual percentage change in the expected age-adjusted ratio over time for each group, indicating a partial log-linear trend between period and cohort. 
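The APC setup described above uses a series of 5-year periods and fourteen 5-year age groups (15-19 through 80-84), from which the 19 consecutive birth cohorts follow via cohort = period - age. The sketch below reproduces only that bookkeeping; the labelling convention (period start minus age-group start) is an assumption chosen to match the cohort bounds of 1910-1914 and 2000-2004 reported in the text.

```python
# Construction of the synthetic birth cohorts used in the APC analysis: with
# six 5-year periods (1990-1994 ... 2015-2019) and fourteen 5-year age groups
# (15-19 ... 80-84), cohort = period - age yields 19 consecutive cohorts from
# 1910-1914 to 2000-2004.  The labelling convention is an assumption for
# illustration.
period_starts = range(1990, 2020, 5)      # 1990, 1995, ..., 2015
age_starts = range(15, 85, 5)             # 15, 20, ..., 80

cohort_starts = sorted({p - a for p in period_starts for a in age_starts})
cohorts = [f"{c}-{c + 4}" for c in cohort_starts]

print(len(cohorts))               # 19
print(cohorts[0], cohorts[-1])    # 1910-1914 2000-2004
```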
The longitudinal age curve reflects the fitted age-specific rates in the reference cohort adjusted for period effects. The cross-sectional age curves depict the predicted age-specific rates in the reference period after accounting for the cohort influence. Period and cohort relative risk are the ratio of the age-specific ratio for each corresponding period and cohort relative to the reference. The Wald Chi-square test was performed to verify the importance of estimable parameters and functions . We usedThe longitudinal form and cross-sectional form of the APC model could be expressed as:http://ghdx.healthdata.org/GBD-resultstool (accessed on 26 October 2022). Therefore, ethical approval is not required for our study.Our study was based on a publicly available GBD database. No patients, public, or animals were involved in the design, conduct, reporting, or dissemination plans of our study. The data mentioned above were available here: p < 0.01). The absolute number of deaths from PUD decreased in all countries for both sexes, except for Brazilian and Indian women . Compared to 1990, the relative proportions of all-cause deaths due to PUD in China, Brazil, and India decreased to varying degrees for all sexes. Despite the population increasing globally, China\u2019s percentage of all-cause deaths due to PUD compared to the global population decreased for both sexes between 1990 and 2019 .The trends in PUD mortality across China, Brazil, and India are presented in From 1990 to 2019, India had the largest decrease in PUD ASMRs, from 20.17/100,000 in 1990 to 6.70/100,000 in 2019 (\u221266.78%), with the fastest annual percentage change (\u22124.34%). For Indian men, PUD ASMRs decreased by 69.84%, and the annual percentage change was up to \u22124.67%, while in 2019, the ASMRs of PUD in India were still much higher than in Brazil and China for both sexes. The relative proportions of all-cause deaths due to PUD in China represented the most significant decreases for both sexes .The ASMRs of PUD attributable to smoking by sex across China, Brazil, and India during 1990\u20132019 is illustrated in p < 0.01 for all).The net drifts and local drifts for PUD mortality in China, Brazil, and India by sex from 1990 to 2019 are shown in During 1990\u22122019, all local drift values were less than zero for all age groups (15\u201385) in all countries and both sexes, suggesting significant decreases in PUD mortality . In the The longitudinal age curves of sex-specific PUD mortality are depicted in As shown in p < 0.01 for all). India has the most significant reduction in mortality. After controlling for cohort effects, cross-sectional age curves show the anticipated age-specific rates during the reference period, i.e., 2000 to 2004. .We divided the mortality and population data into five-year periods, ranging from 1990 to 1994 and from 2015 to 2019 . Additionally, we obtained 19 successive cohorts, ranging from 1910 to 1914 and from 2000 to 2004 . The age-specific mortality rates of PUD by period and sex in China, Brazil, and India during 1990\u20132019 are illustrated in p < 0.01 for all). As in the example of Brazilian women, the PUD mortality of those who were 80 to 84 years of age gradually increased within their birth cohort .The cohort\u2013specific mortality rate of PUD by age group among China, Brazil, and India during 1990\u20132019 is presented in HP infection, NSAID use, smoking, and age play major roles in the pathogenesis of peptic ulcerations . 
The priDuring 1990\u20132019, there were significant improvements in PUD mortality in China, Brazil, and India. India had the greatest degree of reduction in the ASMRs of PUD (\u221267%), especially in men, who showed the most obvious decrease (\u221270%). Additionally, the relative rates of all-cause deaths due to PUD fell by more than 25% for all three countries and both sexes, except for Brazilian and Indian women . In general, China, Brazil, and India show a continuous rising age effect as well as similar decreasing period and cohort effects. Both improvements in the period and cohort effects are likely due to the contributions of improved living conditions, socioeconomic status, and better hygiene to the decrease in the prevalence of HP infection ,52, withIndia presented the most obvious decrease in PUD ASMRs among the three countries. These improvements were due to both period and cohort influences, especially for those born after 1955. The epidemiology of PUD in India changed from 1992 to 2012, including a decrease in PUD incidence frequency . The preChina has also seen a significant downward trend in PUD ASMRs, and these gains were the result of both period and cohort effects. The prevalence of HP among mainland Chinese adults was 49.6% , and it decreased by \u22120.9% per year, with an annual percentage change of \u22121.0% for men and \u22121.3% for women from 1983 to 2018 . NotablyBrazil had the most minor improvement in PUD mortality and a slightly increased number of PUD deaths. In addition, Brazil also had the smallest decrease in period and cohort effects; one reason may be that the prevalence of HP in Brazil was not effectively controlled. The time trends in HP prevalence in Brazil were 68.2% from 1970 to 1999 and 71.3% from 2000 to 2016 . Brazil\u2019There are certain limitations to our study. Firstly, while we have evaluated period and cohort influences, the GBD data were not from a cohort study. Additionally, the APC analysis originated from estimated GBD cross-sectional data during 1990\u20132019. Large cohort studies need to be conducted in different countries to determine the relative risks of a particular location and time. Secondly, statistical objects do not include data on people under the age of 15. This is because there are few deaths from PUD under the age of 15 in China, Brazil, and India. Data on NCDs are frequently scarce and incomplete in low- and middle-income nations. Additionally, when gastrointestinal diseases are confirmed as the cause of death, they are usually not believed to be the root cause . ThirdlyAlthough the mortality rates of PUD have decreased significantly in China, Brazil, and India, the burden of PUD is still heavy. During 1990\u20132019, China, Brazil, and India presented a declining trend in the ASMRs of PUD and the ASMRs of PUD attributable to smoking in both sexes, while men were at a higher risk than women. Between 1990 and 2019, the period and cohort effects decreased in all countries and both sexes. Combining the age effect, in general, men and the elderly were high-risk populations for PUD mortality. In China, Brazil, and India, there were significant reductions in PUD mortality across all age groups in both sexes over time. Additionally, in each cohort, the older age groups had higher PUD mortality among the three countries and both sexes.India stands out for its improvements in the ASMRs of PUD and the ASMRs of PUD attributable to smoking, which may be associated with effectively reducing the prevalence of smoking tobacco. 
China and Brazil also had significant reductions in the ASMRs of PUD and the ASMR of PUD attributable to smoking; the implementation of tobacco-restricting policies could be one of the reasons. For China, another reason for the improvements in the ASMRs of PUD may be due to the excellent control of the prevalence of HP. The relatively minor improvements in the cohort effects in Brazil in both sexes for all age groups may be due to Brazil\u2019s extremely high HP prevalence. These examples illustrate that it is necessary to control the rates of HP infection and the prevalence of tobacco. Additionally, Brazilian and Chinese policymakers should pay more attention to the implementation of tobacco-restricting policies and the reduction of the prevalence of HP."} +{"text": "The inverse design method based on a generative adversarial network (GAN) combined with a simulation neural network (sim-NN) and the self-attention mechanism is proposed in order to improve the efficiency of GAN for designing nanophotonic devices. The sim-NN can guide the model to produce more accurate device designs via the spectrum comparison, whereas the self-attention mechanism can help to extract detailed features of the spectrum by exploring their global interconnections. The nanopatterned power splitter with a 2 \u03bcm \u00d7 2 \u03bcm interference region is designed as an example to obtain the average high transmission (>94%) and low back-reflection (<0.5%) over the broad wavelength range of 1200~1650 nm. As compared to other models, this method can produce larger proportions of high figure-of-merit devices with various desired power-splitting ratios. With the improvement of nanofabrication technology ,2 and thThe traditional inverse design methods, such as topology optimization , are basConditional deep convolutional GANs (cDCGAN), as a variant of GANs, can produce specified objects in more detail with the help of conditional variables and convolution layers. It is used to design free-form nanostructures, such as silver antenna , diffracIn this study, we improve the WGAN model on two levels: First, we add a sim-NN after thT1 and T2 represent transmissions of the two ports, respectively, and R is the reflection back to the input side. For a broad wavelength range (1200~1650 nm), each spectral response has 91 data points in T and R. The height of the silicon core layer for the splitter is 220 nm and it is covered by the silica cladding on the SOI substrate to be compatible with the conventional CMOS process.Before describing the method in detail, we first introduce the target device, which is the integrated MMI power splitter with a 2 \u03bcm \u00d7 2 \u03bcm interference region, as shown in 0 mode excitation at wavelength 1550 nm. The interference region is uniformly divided into a 20 \u00d7 20 grid matrix, with each grid size of 100 nm \u00d7 100 nm. The holes to be etched at any grid points have diameters in the range of 20 to 80 nm, and these diameters are normalized by 80 nm to 0.25~1 to form the 20 \u00d7 20 hole matrix (HM). If the normalized diameter value is lower than 0.25 (corresponding 20 nm), no hole will be etched here.We use the Lumerical FDTD software to modelT1, T2 and R data as the dataset, by using direct binary search (DBS) [s , which has a total of 273 (=91 \u00d7 3) sampling points. Here, 90% of the samples will be used for training and 10% for validation.We prepare 10,000 MMI structures with ch (DBS) ,36 algorG is enclosed by the red dashed line and the discriminator D by the green-dotted one. 
The numbers at the top and right of the convolution kernels represent the channel and sizes of the output features for each layer, respectively. The target response s is used by the generator to produce device structures according to the desired spectrum, whereas the variable z is used to construct a latent space so that the structural parameter and response s can be mapped to it. By altering the values of latent variable z, the generator can produce a variety of devices that can have the target response s. During the training of G, the latent variable z with dimensions of 100 \u00d7 1 is sampled from the Gaussian distribution and then expanded into a vector of dimensions 512 \u00d7 1 \u00d7 1 (by the expansion layer as marked by the colored circles). Meanwhile, the target response s of dimensions 273 \u00d7 1 is prepared as a conditional vector and is expanded for the next step. Then, z and s are stacked together to pass a series of deconvolution, normalization and activation layers to obtain the generated HM. During training, the fake and real HMs are fed to D to discern their differences iteratively, after which, the HM structure and its corresponding target response s are padded and stacked for further processing. The convolution, normalization and activation layers are used by D to reach a final decision for each input.For the WGAN model, we can describe it schematically as in G is calculated by Equation (1) as minus the expectation value for the Wasserstein loss is to calculate the Wasserstein distance between GP and dataP, such that D can be guided by D [dataGP can be obtained by interpolation between dataP and GP, with the weighting factor \u03b5 randomly selected from 0 to 1.The loss function for int on D . The newG, as the WGAN-sim network, to predict the response s\u2019 of the fake HM as shown schematically in s\u2019 and the target response s. Additionally, here, the residual NN based on Resnet-18 [However, during the training process of WGAN, the mapping of the structure response in latent space may still lack strict restrictions on the response spectra of the generated devices. The discriminator can discern real and fake HMs to train the generator, which can gradually produce more similar structure distributions as compared to the real ones. However, neither the generator nor the discriminator can compare the spectrum responses of the generated devices and the targets, so the generator cannot receive feedback on the spectrum discrepancies during training. In order to avoid this issue, we concatenate a pre-trained simulation NN after esnet-18 is used s\u2032 and the EM-simulated one During the sim-NN training process, its loss function is defined by the mean squared error (MSE) between the NN-predicted response \u22123. G and D.The loss evolution can be shown in s and the sim-NN predicted one s\u2032 for the generative device. The discrepancy of these responses can be calculated by \u03b2 is the weight to balance D involved in the WGAN-sim training process remains the same as in Equation (2).The WGAN-sim loss function s and z are fed into G for inverse design, where the generated structures are verified by the EM simulations. As shown in For the simulation efficiency consideration, we randomly select 1000 samples from the validation set, as the mini-validation set, to evaluate the training performance of the above two models. 
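The training objective described above can be sketched in PyTorch-style pseudocode as follows. This is a minimal illustration assuming a critic that takes the hole matrix together with the conditional spectrum; the function names, the discriminator signature, and the gradient-penalty weight of 10 are assumptions made for the sketch, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(discriminator, real_hm, fake_hm, spectrum, device):
    """WGAN-GP term: penalize deviation of the critic's gradient norm from 1
    on random interpolations between real and generated hole matrices,
    with the mixing factor eps drawn uniformly from [0, 1) as described above."""
    eps = torch.rand(real_hm.size(0), 1, 1, 1, device=device)
    interp = (eps * real_hm + (1.0 - eps) * fake_hm).requires_grad_(True)
    d_out = discriminator(interp, spectrum)
    grads = torch.autograd.grad(outputs=d_out, inputs=interp,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grads = grads.reshape(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(d_real, d_fake, gp, lambda_gp=10.0):
    """Critic maximizes D(real) - D(fake); written here as a loss to minimize.
    The lambda_gp value of 10 is a common choice, assumed for illustration."""
    return d_fake.mean() - d_real.mean() + lambda_gp * gp

def generator_loss(d_fake, s_pred, s_target, beta):
    """Wasserstein term plus the sim-NN spectrum-consistency term (MSE between the
    sim-NN-predicted response of the generated device and the target), weighted by beta."""
    return -d_fake.mean() + beta * F.mse_loss(s_pred, s_target)
```

The key design point illustrated here is that the spectrum-consistency term only back-propagates through the (frozen) sim-NN into the generator, so the generator receives feedback on spectral discrepancies without requiring an electromagnetic simulation inside the training loop.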
For every 500 training epochs, 1T(\u03bb) = 2T(\u03bb) = {0.5, 0.5, \u2026, 0.5} and R(\u03bb) = {0, 0, \u2026, 0}, where the FOM parameter as in Equation (7) is used to indicate the quality of the device response spectrum as compared to the desired one.We further test the performances of the two models to design devices for five different power ratios. For example, in the 5:5 MMI power splitter, we can set the target response to be n is 91 and i is its index. T1\u2019 and T2\u2019 represent transmissions of the two output ports in the generated devices from EM simulation, and R\u2019 is the total back-reflection.The number of wavelength sampling points The comparisons of the WGAN and WGAN-sim models for the inverse design of devices with power-splitting ratios of 5:5, 6:4, 7:3, 8:2, and 9:1 are shown in With the help of sim-NN, the performance of the generated device has a significant improvement, where the proportion of devices with an FOM over 0.8 can increase by 24.92%. For the case of the 8:2 power ratio, devices with an FOM over 0.7 can reach 97.75%, which is increased by 21.25% as compared to the WGAN-generated ones.Among the 2000 samples generated by WGAN and WGAN-sim for each of the five different ratios, we select the best configuration in terms of FOM as in In the above models, high-level features , such asIn order to further improve the ability of the model in focusing more precisely on the local features, we introduce the self-attention layers into WGAN-sim, as the SAGAN-sim model, to generate better devices. The schematic structure of the SA layer is shown in G, as well as the seventh and eighth ones in D, respectively, such that features are more likely to be extracted with high fidelity as the number of (de)convolution layers increases [As shown in ncreases .To test the effectiveness of the model, the trained SAGAN-sim is used to generate 2000 devices for each of the desired power-splitting ratios. The proportion statistics of FOM for the SAGAN-sim generated devices can be seen to further improve as shown in Here, the SAGAN-sim-designed optimal MMI devices and their spectrum responses for the five ratios are shown in The programming language and deep learning framework we used to build the NN model are Python 3.7.1 and PyTorch 1.11.0, respectively. For all the models as mentioned in the paper, NVIDIA GeForce RTX 3090 GPUs are used for each GAN and sim-NN training, which take around 20 and 8.5 h, respectively, and the addition of the SA layers will not significantly affect the training time. During the training process, we manually tuned the hyperparameters of these models to ensure the optimized network structure and learning rate, etc. It takes only 6 to 9 s to generate 2000 different high-FOM devices with one single running of the generator. Since the generation process is parallelly conducted for the model, the number of devices generated per inverse design can be set according to the actual demand by controlling the number of target responses and latent variables. The EM simulation of the device is carried out by Lumerical FDTD software and is about 10 s for each structure. We can also use the trained sim-NN instead of the EM simulation software to accelerate the verification process for the generated devices at equivalent accuracy.In order to improve the inverse design method based on the generative neural network, the WGAN-sim and SAGAN-sim models are proposed to design nanopatterned MMI power splitters in the photonic integrated circuit. 
By exploring the global structural parameters in more detail, the SAGAN-sim model can enjoy high accuracy from the self-attention mechanism and the sim-NN to improve the FOM of the generated devices. Compared to the WGAN model, the average FOM for the SAGAN-sim-generated devices increases by 11.86%, whereas the proportion of devices with an FOM over 0.8 is improved by 51.29%. Across the wavelength range from 1200 to 1650 nm, the total transmission of the optimal devices can be over 94% and the reflection below 0.5%. As far as we know, this is the first time that the self-attention mechanism has been used for the inverse design of nanophotonic devices.Here, we only consider the structural parameters of devices in the two-dimensional cross-section, but this model can be readily applied to more complex nanophotonic devices with more parameters in higher dimensions. In addition to the transmittance and reflectance, the target response can also be the phase spectrum, electric/magnetic field distribution, etc. The NN-based method can also help us reduce the dependency on prior knowledge of the target device. Moreover, the GAN model can generate device structures according to the responses, even if the target response has not appeared in the training process, which indicates that the model can provide intrinsic connections between the device structures and corresponding responses. The method can also be extended to material science, biology, chemistry and other research fields to single out the optimal design according to their desired target properties. The focus of our future study will be on the NN algorithm to train the model with smaller datasets but better accuracy."} +{"text": "However, the characteristics of the data varies with their operating conditions. This article presents the time-series dataset, including vibration, acoustic, temperature, and driving current data of rotating machines under varying operating conditions. The dataset was acquired using four ceramic shear ICP based accelerometers, one microphone, two thermocouples, and three current transformer (CT) based on the international organization for standardization (ISO) standard. The conditions of the rotating machine consisted of normal, bearing faults (inner and outer races), shaft misalignment, and rotor unbalance with three different torque load conditions . This article also reports the vibration and driving current dataset of a rolling element bearing under varying speed conditions (680 RPM to 2460 RPM). The established dataset can be used to verify newly developed state-of-the-art methods for fault diagnosis of rotating machines. Mendeley Data. DOI: Specifications TableValue of the Data\u2022This article consists of two parts: varying load conditions, and varying speed conditions. In part 1, this dataset contains data related to most of the major faults that can occur in rotating machines. Therefore, this dataset can be used to verify the performance of the newly developed rotating machine fault diagnosis methods based on rotor dynamics theories.\u2022In particular, by securing the dataset according to various load conditions, it is possible to observe the change in the fault features according to the load variation. This provides a practical dataset to consider the load fluctuation conditions in the actual field.\u2022Recently, many fault diagnosis researches using non-contact sensors instead of vibration sensors are being conducted due to the problem of sensor installation and cost in the actual field ,2. 
In th\u2022In part 2, this dataset was acquired from rolling element bearing under varying speed conditions (680 RPM to 2460 RPM). Three different types of faults, including inner race fault, outer race fault, and ball fault, were seeded. This data consists of vibration data (in the x- and y-directions of the bearing), and current data.\u2022\u00b7Most of the fault diagnosis methods are proposed for extracting fault features with steady speed and cannot be directly used with varying speed conditions \u2022This dataset can be used to develop a learning-based fault diagnosis methodology despite varying speed conditions 1In part 1, this dataset was established for deep learning based rotating machine fault diagnosis research. Unlike other researches, it is very difficult to obtain data in the fault diagnosis research field because it is difficult to apply an actual failure to make training of deep learning algorithms challenging. To solve this problem, we simulated bearing faults, unbalance faults, and misalignment faults that may occurred dominantly in rotating machine. We collected vibration, acoustic, temperature and driving current data under different load conditions . This dataset is measured based on mechanical engineering knowledge in accordance with ISO international standards. This dataset can be used for the verification of newly-developed learning-based fault diagnosis methods.In part 2, this dataset was established for learning-based ball bearing fault diagnosis research. Unlike other researches with constant speed, it is very difficult to obtain data under the varying speed condition. In contrast, we obtained faulty vibration and driving current data under varying speed conditions (680 RPM to 2460 RPM). This dataset is measured based on mechanical engineering knowledge in accordance with ISO international standards. This dataset can be used for verification of the learning-based fault diagnosis method.2This article presents two varying operating condition including varying load condition and varying speed condition. First dataset consists of vibration, acoustic, temperature and driving current data under varying load conditions. Vibration, temperature, motor current, and acoustic data are collected under 3 different load conditions . The load conditions are controlled by hysteresis brake with air cooling method. The main motor rotates at a rated rotating speed of 3010 RPM.Vibration data were measured using four accelerometers (PCB352C34) at two bearing housings in the x-direction and y-direction, simultaneously. An acoustic microphone (PCB378B02) was located nearby the bearing housing (A). Temperature and driving current data were measured using two thermocouples (K-type) and three CT sensors (Hioki CT6700). Siemens SCADAS Mobile 5PM50 was used for collecting vibration and acoustic data. NI9211, and NI9775 modules were used for collecting temperature, and driving current data, respectively. Vibration, temperature, driving current data were collected at a sampling frequency of 25.6 kHz. This dataset was collected for 120 seconds in normal state, and for 60 seconds in faulty state. Lastly, acoustic data were collected with a sampling frequency of 51.2 kHz and only acquire bearing fault data under no-load conditions to avoid the noise from air-cooled brakes.2). The acoustic data file contains two columns, namely \u2018Time Stamp\u2019, and \u2018values\u2019. The unit of the acoustic data is \u2018Pascal (Pa)\u2019. 
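Before the per-file listing that follows, a minimal Python sketch of how the three file formats described above can be read is given here. The example file names are taken from the listing, but the internal variable and group names of the MAT and TDMS containers are not specified in the text, so they are inspected rather than assumed.

```python
import scipy.io                 # MAT files (vibration, acoustic)
import pandas as pd             # CSV files (part-2 vibration/current/rpm data)
from nptdms import TdmsFile     # TDMS files (temperature, current); pip install npTDMS

# Vibration/acoustic data (.mat): list the stored variables before indexing.
# If the files were saved in MAT v7.3 (HDF5) format, h5py would be needed instead.
mat = scipy.io.loadmat("0Nm_Normal.mat")
print([key for key in mat if not key.startswith("__")])

# Temperature and driving-current data (.tdms): iterate groups and channels generically.
tdms = TdmsFile.read("0Nm_Normal.tdms")
for group in tdms.groups():
    for channel in group.channels():
        print(group.name, channel.name, len(channel[:]))

# Part-2 vibration data (.csv): 'Time Stamp' plus the four housing/direction columns.
vibration = pd.read_csv("vibration_normal_constant.csv")
print(vibration.columns.tolist())
```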
The description of the vibration and acoustic files as per operating and health conditions of the rotating machine is as follows:The collected vibration and acoustic data are stored in binary MATLAB (MAT) files ,8. The v(1)0Nm_Normal.mat: This file includes the vibration data in the x and y directions of two housings of healthy bearing under the 0 Nm load condition.(2)0Nm_BPFI_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm inner race fault under the 0 Nm load condition.(3)0Nm_BPFI_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm inner race fault under the 0 Nm load condition.(4)0Nm_BPFI_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3.0 mm inner race fault under the 0 Nm load condition.(5)0Nm_BPFO_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm outer race fault under the 0 Nm load condition.(6)0Nm_BPFO_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm outer race fault under the 0 Nm load condition.(7)0Nm_BPFO_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3.0 mm outer race fault under the 0 Nm load condition.(8)0Nm_Misalign_01.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.1 mm misalignment fault under the 0 Nm load condition.(9)0Nm_Misalign_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm misalignment fault under the 0 Nm load condition.(10)0Nm_Misalign_05.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.5 mm misalignment fault under the 0 Nm load condition.(11)0Nm_Unbalance_0583mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 583 mg unbalance fault under the 0 Nm load condition.(12)0Nm_Unbalance_1169mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1169 mg unbalance fault under the 0 Nm load condition.(13)0Nm_Unbalance_1751mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1751 mg unbalance fault under the 0 Nm load condition.(14)0Nm_Unbalance_2239mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 2239 mg unbalance fault under the 0 Nm load condition.(15)0Nm_Unbalance_3318mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3318 mg unbalance fault under the 0 Nm load condition.(16)2Nm_Normal.mat: This file includes the vibration data in the x and y directions of two housings of healthy bearing under the 2 Nm load condition.(17)2Nm_BPFI_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm inner race fault under the 2 Nm load condition.(18)2Nm_BPFI_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm inner race fault under the 2 Nm load condition.(19)2Nm_BPFI_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which 
has a 3.0 mm inner race fault under the 2 Nm load condition.(20)2Nm_BPFO_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm outer race fault under the 2 Nm load condition.(21)2Nm_BPFO_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm outer race fault under the 2 Nm load condition.(22)2Nm_BPFO_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3.0 mm outer race fault under the 2 Nm load condition.(23)2Nm_Misalign_01.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.1 mm misalignment fault under the 2 Nm load condition.(24)2Nm_Misalign_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm misalignment fault under the 2 Nm load condition.(25)2Nm_Misalign_05.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.5 mm misalignment fault under the 2 Nm load condition.(26)2Nm_Unbalance_0583mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 583 mg unbalance fault under the 2 Nm load condition.(27)2Nm_Unbalance_1169mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1169 mg unbalance fault under the 2 Nm load condition.(28)2Nm_Unbalance_1751mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1751 mg unbalance fault under the 2 Nm load condition.(29)2Nm_Unbalance_2239mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 2239 mg unbalance fault under the 2 Nm load condition.(30)2Nm_Unbalance_3318mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3318 mg unbalance fault under the 2 Nm load condition.(31)4Nm_Normal.mat: This file includes the vibration data in the x and y directions of two housings of healthy bearing under the 4 Nm load condition.(32)4Nm_BPFI_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm inner race fault under the 4 Nm load condition.(33)4Nm_BPFI_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm inner race fault under the 4 Nm load condition.(34)4Nm_BPFI_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3.0 mm inner race fault under the 4 Nm load condition.(35)4Nm_BPFO_03.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm outer race fault under the 4 Nm load condition.(36)4Nm_BPFO_10.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1.0 mm outer race fault under the 4 Nm load condition.(37)4Nm_BPFO_30.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3.0 mm outer race fault under the 4 Nm load condition.(38)4Nm_Misalign_01.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.1 mm misalignment fault under the 4 Nm load condition.(39)4Nm_Misalign_03.mat: This file includes the 
vibration data in the x and y directions of two housings of bearing, which has a 0.3 mm misalignment fault under the 4 Nm load condition.(40)4Nm_Misalign_05.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 0.5 mm misalignment fault under the 4 Nm load condition.(41)4Nm_Unbalance_0583mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 583 mg unbalance fault under the 4 Nm load condition.(42)4Nm_Unbalance_1169mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1169 mg unbalance fault under the 4 Nm load condition.(43)4Nm_Unbalance_1751mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 1751 mg unbalance fault under the 4 Nm load condition.(44)4Nm_Unbalance_2239mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 2239 mg unbalance fault under the 4 Nm load condition.(45)4Nm_Unbalance_3318mg.mat: This file includes the vibration data in the x and y directions of two housings of bearing, which has a 3318 mg unbalance fault under the 4 Nm load condition.(1)0Nm_Normal.mat: This file includes the acoustic data of healthy bearing under the 0 Nm load condition.(2)0Nm_BPFI_03.mat: This file includes the acoustic data of bearing which has a 0.3 mm inner race fault under the 0 Nm load condition.(3)0Nm_BPFI_10.mat: This file includes the acoustic data of bearing which has a 1.0 mm inner race fault under the 0 Nm load condition.(4)0Nm_BPFO_03.mat: This file includes the acoustic data of bearing which has a 0.3 mm outer race fault under the 0 Nm load condition.(5)0Nm_BPFO_10.mat: This file includes the acoustic data of bearing which has a 1.0 mm outer race fault under the 0 Nm load condition.The collected temperature and driving current data are stored in technical data management streaming (TDMS) files ,10. 
Temp(1)0Nm_Normal.tdms: This file includes the temperature data in two housings and the current data of three phases of healthy bearing under the 0 Nm load condition.(2)0Nm_BPFI_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm inner race fault under the 0 Nm load condition.(3)0Nm_BPFI_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm inner race fault under the 0 Nm load condition.(4)0Nm_BPFI_30.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3.0 mm inner race fault under the 0 Nm load condition.(5)0Nm_BPFO_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm outer race fault under the 0 Nm load condition.(6)0Nm_BPFO_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm outer race fault under the 0 Nm load condition.(7)0Nm_BPFO_30.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3.0 mm outer race fault under the 0 Nm load condition.(8)0Nm_Misalign_01.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.1 mm misalignment fault under the 0 Nm load condition.(9)0Nm_Misalign_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm misalignment fault under the 0 Nm load condition.(10)0Nm_Misalign_05.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.5 mm misalignment fault under the 0 Nm load condition.(11)0Nm_Unbalance_0583mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 583 mg unbalance fault under the 0 Nm load condition.(12)0Nm_Unbalance_1169mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1169 mg unbalance fault under the 0 Nm load condition.(13)0Nm_Unbalance_1751mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1751 mg unbalance fault under the 0 Nm load condition.(14)0Nm_Unbalance_2239mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 2239 mg unbalance fault under the 0 Nm load condition.(15)0Nm_Unbalance_3318mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3318 mg unbalance fault under the 0 Nm load condition.(16)2Nm_Normal.tdms: This file includes the temperature data in two housings and the current data of three phases of healthy bearing under the 2 Nm load condition.(17)2Nm_BPFI_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm inner race fault under the 2 Nm load condition.(18)2Nm_BPFI_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm inner race fault under the 2 Nm load condition.(19)2Nm_BPFI_30.tdms: This file includes the temperature data in two housings and 
the current data of three phases of bearing, which has a 3.0 mm inner race fault under the 2 Nm load condition.(20)2Nm_BPFO_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm outer race fault under the 2 Nm load condition.(21)2Nm_BPFO_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm outer race fault under the 2 Nm load condition.(22)2Nm_BPFO_30.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3.0 mm outer race fault under the 2 Nm load condition.(23)2Nm_Misalign_01.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.1 mm misalignment fault under the 2 Nm load condition.(24)2Nm_Misalign_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm misalignment fault under the 2 Nm load condition.(25)2Nm_Misalign_05.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.5 mm misalignment fault under the 2 Nm load condition.(26)2Nm_Unbalance_0583mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 583 mg unbalance fault under the 2 Nm load condition.(27)2Nm_Unbalance_1169mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1169 mg unbalance fault under the 2 Nm load condition.(28)2Nm_Unbalance_1751mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1751 mg unbalance fault under the 2 Nm load condition.(29)2Nm_Unbalance_2239mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 2239 mg unbalance fault under the 2 Nm load condition.(30)2Nm_Unbalance_3318mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3318 mg unbalance fault under the 2 Nm load condition.(31)4Nm_Normal.tdms: This file includes the temperature data in two housings and the current data of three phases of healthy bearing under the 4 Nm load condition.(32)4Nm_BPFI_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm inner race fault under the 4 Nm load condition.(33)4Nm_BPFI_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm inner race fault under the 4 Nm load condition.(34)4Nm_BPFI_30.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3.0 mm inner race fault under the 4 Nm load condition.(35)4Nm_BPFO_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm outer race fault under the 4 Nm load condition.(36)4Nm_BPFO_10.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1.0 mm outer race fault under the 4 Nm load condition.(37)4Nm_BPFO_30.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, 
which has a 3.0 mm outer race fault under the 4 Nm load condition.(38)4Nm_Misalign_01.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.1 mm misalignment fault under the 4 Nm load condition.(39)4Nm_Misalign_03.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.3 mm misalignment fault under the 4 Nm load condition.(40)4Nm_Misalign_05.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 0.5 mm misalignment fault under the 4 Nm load condition.(41)4Nm_Unbalance_0583mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 583 mg unbalance fault under the 4 Nm load condition.(42)4Nm_Unbalance_1169mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1169 mg unbalance fault under the 4 Nm load condition.(43)4Nm_Unbalance_1751mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 1751 mg unbalance fault under the 4 Nm load condition.(44)4Nm_Unbalance_2239mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 2239 mg unbalance fault under the 4 Nm load condition.(45)4Nm_Unbalance_3318mg.tdms: This file includes the temperature data in two housings and the current data of three phases of bearing, which has a 3318 mg unbalance fault under the 4 Nm load condition.Second, the collected dataset consists of vibration and current data acquired from the ball bearing with different fault types of inner race faults, outer race faults, and ball faults, according to changes in motor speed conditions (680 RPM and 2460 RPM).Vibration data were measured using four accelerometers (PCB352C34) at the two bearing housing A and B in the x-direction and y-direction. Current data were measured using three CT current sensors (Hioki CT6700). Vibration data were acquired by a Siemens SCADAS Mobile 5PM50 with sampling frequency of 25.6 kHz, and current data were acquired by NI9775 with sampling frequency of 100 kHz. This dataset was collected for 600 seconds at constant speed, and for 2,100 seconds at varying speed conditions (680 RPM and 2460 RPM).2). The motor current data file contains five columns, namely \u2018Time Stamp\u2019, \u2018R_phase\u2019, \u2018S_phase\u2019 and \u2018T_phase\u2019. The unit of the motor current is \u2018Ampere (A)\u2019. To support more details in this dataset, synchronized speed data are also provided. Sample raw data and their time-frequency analysis of each state are shown in The vibration data file contains five columns, namely \u2018Time Stamp\u2019, \u2018x_direction_housing_A\u2019, \u2018y_direction_housing_A\u2019, \u2018x_direction_housing_B\u2019, and \u2018y_direction_housing_B\u2019. 
The unit of the vibration data is \u2018gravitational constant (g)\u2019 (1g\u00a0=\u00a09.80665 m/s(46)vibration_normal_constant.csv: This file includes the 600 seconds length of vibration data of the x and y directions of two bearing housings of normal under the constant speed condition at 3010 RPM.(47)vibration_inner/outer/ball_constant.csv: This file includes the 600 seconds length of vibration data of the x and y directions of two bearing housings of inner/outer/ball fault under the constant speed condition at 3010 RPM. Bearing with ball fault is installed in bearing housing A.(48)vibration_normal_0.csv \u223c vibration_normal_6.csv: Each file includes the 300 seconds length of vibration data of the x and y directions of two bearing housings under the varying speed condition.(49)vibration_inner_0.csv \u223c vibration_inner_6.csv: Each file includes the 300 seconds length of vibration data of the x and y directions of two bearing housings under the varying speed condition. Bearing with inner fault is installed in bearing housing B.(50)vibration_outer_0.csv \u223c vibration_outer_6.csv: Each file includes the 300 seconds length of vibration data of the x and y directions of two bearing housings under the varying speed condition. Bearing with outer fault is installed in bearing housing B.(51)vibration_ball_0.csv \u223c vibration_ball_6.csv: Each file includes the 300 seconds length of vibration data of the x and y directions of two bearing housings under the varying speed condition. Bearing with ball fault is installed in bearing housing B.(1)current_normal_0.csv \u223c current_normal_6.csv: Each file includes the 300 seconds length of current data of the R-, S-, and T-phase of main motor under the varying speed condition.(2)current_inner_0.csv \u223c current_inner_6.csv: Each file includes the 300 seconds length of current data of the R-, S-, and T-phase of main motor under the varying speed condition. Bearing with inner fault is installed in bearing housing B.(3)current_outer_0.csv \u223c current_outer_6.csv: Each file includes the 300 seconds length of current data of the R-, S-, and T-phase of main motor under the varying speed condition. Bearing with outer fault is installed in bearing housing B.(4)current_ball_0.csv \u223c current_ball_6.csv: Each file includes the 300 seconds length of current data of the R-, S-, and T-phase of main motor under the varying speed condition. Bearing with ball fault is installed in bearing housing B.(1)rpm_normal_0.csv \u223c rpm_normal_6.csv: Each file includes the 300 seconds length of rotating speed of main motor under the varying speed condition.(2)rpm_inner_0.csv \u223c rpm_inner_6.csv: Each file includes the 300 seconds length of rotating speed of main motor under the varying speed condition. Bearing with inner fault is installed in bearing housing B.(3)rpm_outer_0.csv \u223c rpm_outer_6.csv: Each file includes the 300 seconds length of rotating speed of main motor under the varying speed condition. Bearing with outer fault is installed in bearing housing B.(4)rpm_ball_0.csv \u223c rpm_ball_6.csv: Each file includes the 300 seconds length of rotating speed of main motor under the varying speed condition. 
Bearing with ball fault is installed in bearing housing B.33.1The rotating machine testbed consists of three-phase induction motor, torque meter, gearbox, bearing housing A, bearing housing B, rotors and hysteresis brake as shown in A total of four accelerometers (PCB35234) were installed in the x- and y-directions of bearing housings A and B, based on the vibration installation guide (ISO 10816-1:1995). A microphone (PCB378B02) was located nearby bearing housing A based on the microphone installation guide (ISO 8528-10). Two thermocouples (K-type) were installed in each bearing housing to measure the bearing temperature. To measure the three-phase motor current, three CT sensors (Hioki CT6700) were used. CT sensors were installed on the U-phase, V-phase, and W-phase of the three-phase motor.3.2Rolling element bearings are composed of two concentric rings called races and rolling elements such as balls or rollers between the races. The inner and outer raceway are grooved. To assemble a ball bearing, the balls are inserted in between the inner race and the outer race. The inner race is snapped to a position concentric with the outer race. The balls are separated uniformly between the races, and a riveted cage is inserted to maintain the separation.d) of 7.90 mm, a pitch diameter (D) of 38.5 mm, contact degree angle (\u03b8) of zero degrees, and the number of balls (N) is 9. Therefore, the shaft frequency (sf) is 50.17 Hz, fundamental train frequency (FTF) is 19.94 Hz, ball pass frequency inner (BPFI) is 272.07 Hz, ball pass frequency outer (BPFO) is 179.43 Hz, and ball spin frequency (BSF) is 234.19 Hz.In varying load condition test, the bearing faults, including inner race fault and outer race fault were simulated according to the crack sizes as shown in Shaft fault is a parallel misalignment that moves the shaft in bearing housing A as shown in In varying speed condition test, type 6205 steel NSK ball bearing were used for testing. Four different state of the ball bearing were emulated as shown in Human Lab., Center for Noise and Vibration Control Plus, Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea has given the consent that the datasets may be publicly-released as part of this publication.Wonho Jung: Conceptualization, Methodology, Software, Validation, Visualization, Writing \u2013 original draft. Seong-Hu Kim: Data curation, Validation, Investigation. Sung-Hyun Yun: Data curation, Validation, Investigation. Jaewoong Bae: Data curation, Validation, Investigation. Yong-Hwa Park: Funding acquisition, Writing \u2013 review & editing, Supervision.The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "An additional homogeneous epitaxy MgO (epi-MgO) layer, which was used to improve the biaxial texture in the IBAD-MgO layer, was deposited on the IBAD-MgO layer by electron-beam evaporation. The effects of growth temperature, film thickness, deposition rate, and oxygen pressure on the texture and morphology of the epi-MgO film were systematically studied. The best full width at half maximum (FWHM) values were 2.2\u00b0 for the out-of-plane texture and 4.8\u00b0 for the in-plane texture for epi-MgO films, respectively. Subsequently, the LaMnO3 cap layer and YBa2Cu3O7-x (YBCO) functional layer were deposited on the epi-MgO layer to test the quality of the MgO layer. 
Finally, the critical current density of the YBCO films was 6 MA/cm2 , indicating that this research provides a high-quality MgO substrate for the YBCO layer.Ion beam-assisted deposition (IBAD) has been proposed as a promising texturing technology that uses the film epitaxy method to obtain biaxial texture on a non-textured metal or compound substrate. Magnesium oxide (MgO) is the most well explored texturing material. In order to obtain the optimal biaxial texture, the actual thickness of the IBAD-MgO film must be controlled within 12nm. Due to the bombardment of ion beams, IBAD-MgO has large lattice deformation, poor texture, and many defects in the films. In this work, the solution deposition planarization (SDP) method was used to deposit oxide amorphous Y In lf-field . Howeverlf-field . In addilf-field . Therefolf-field ,12,13,14The functions of the buffer layers can be described as follows. First, the mutual diffusion of elements is blocked between the metal substrate and the REBCO layer. The flat and dense oxide buffer films can prevent direct contact between the superconducting layer and the REBCO layer, effectively blocking the mutual diffusion of elements ,16,17. SAt present, there are three well-explored technologies for obtaining biaxial texture, i.e., rolling-assisted biaxial texture (RABiTS), inclined substrate deposition (ISD), and ion beam-assisted deposition (IBAD) . The bia2\u2013Y2O3 stabilized) is one of the earliest materials to be used as a biaxially textured template for REBCO. YSZ is cubic and has a lattice constant of 5.14 A\u00b0. In 1992, Iijima et al. reported that a sharp biaxially aligned structure of YSZ films grown on alloy as a substrate could be developed by IBAD technology [Yttria Stabilized Zirconia (YSZ) /epitaxial-MgO/IBAD-MgO/ amorphous-Y2O3/amorphous-Al2O3 five-layer stack. In our previous study, the solution deposition planarization (SDP) method, which reduced the preparation cost and process complexity of the REBCO superconducting strip, was used to deposit oxide amorphous films on the surface of Hastelloy C276 tapes [2O3 layer can replace the three processes, including electrochemical polishing, sputtering-Al2O3, and sputtering-Y2O3 in the commercialized buffer layer. The biaxial texture formation of magnesium oxide (MgO) grown by IBAD was first reported by the researchers from Stanford University . In cont76 tapes . The SDP3 (LMO)/IBAD-MgO/SDP-Y2O3 has been used to sever the buffer template for YBCO films [\u03a6-scan of LMO (222) peak) in the buffer layer, respectively. It should be noted that sufficient critical currents can hardly be obtained with the in-plane texture over 5\u00b0 [2 on IBAD-MgO to improve the biaxial texture. However, the best in-plane FWHM value is 6\u00b0 in their research [In our previous research, LaMnOCO films . However over 5\u00b0 . Thereforesearch .In this paper, we prepared epi-MgO films using electron beam evaporation and systematically investigated the influence of the growth temperature, film thickness, deposition rate, and oxygen pressure on the texture and morphology of the MgO films. It is necessary to obtain the MgO film with the best biaxial texture, which would provide an excellent growth template for YBCO superconducting thin films. Finally, the LMO cap layer and YBCO superconductor films are fabricated on the MgO films to testify the function of the epi-MgO films.\u22125 Pa using a molecular pump. 
Subsequently, the growth temperature (150~500 \u00b0C) required for the experiment was controlled by a home-made heating device. A reel-to-reel system enabled the continuous preparation of long tapes for the dynamic deposition of the epi-MgO films. When the metal oxide films were prepared using electron beam evaporation, the ionic bonds of the materials could be easily disrupted by the electron beam, potentially causing the composition of the film material to deviate and affecting the microstructure of the film. Therefore, it was necessary to introduce oxygen to maintain the proper proportion of MgO film composition during the epi-MgO deposition process. Oxygen was introduced to the evaporation chamber through an oxygen valve. The quartz crystal microbalance (QCM) rate monitor and ion probe were used to measure the electron beam evaporation rate and the ion beam current density, respectively. The QCM was installed at the location of the substrate. Subsequently, the LMO cap layer was deposited on the epi-MgO by the DC reactive sputtering technology. Finally, a home-made metal organic chemical vapor deposition system was used to deposit YBCO films on the LMO/epi-MgO/IBAD-MgO/SDP-Y2O3 buffer layer. The detailed YBCO deposition process can be seen elsewhere [In this work, a home-made electron beam evaporation, which offers the ability to evaporate metal oxide materials with a high melting point, was used to deposit epi-MgO films on 10 nm-thick IBAD-MgO films. In this experiment, the vacuum chamber was pumped down to 10lsewhere .The biaxial texture was characterized in situ using high-energy electron diffraction (RHEED) equipment. The picture exhibited regular diffraction spots array with biaxial textured MgO films. RHEED involves emitting a beam of high-energy electrons (5~100 keV) from an electron gun, which is incident on the sample surface at a small grazing angle (1~5\u00b0) to generate an electron diffraction beam. The crystal structure was then displayed by collecting the signal on a fluorescent screen. The RHEED came in at a shallow incident angle which was perpendicular to the plane of the ion gun. The diffracted pattern was incident on a phosphorescent screen and the resulting image was then collected by a charge coupled device (CCD) camera, which was interfaced to a computer. The electron beam evaporator was placed off-center to allow the ion gun full angular range. The substrate was rotatable azimuthally about the substrate normal. \u03b8 and 2\u03b8 were locked and changed in a ratio of 1:2, respectively. The magnitude of q varied as the angles changed. For the \u03b8\u20132\u03b8 scan geometry, the change in angle corresponded to a change in the lattice spacing and both the orientation and phase of a sample can be observed in the collected scan. The detailed description of the \u03b8\u20132\u03b8 scan can be seen elsewhere [X-ray diffraction (XRD) (Bede D1) was used to probe the orientation and microstructure of the MgO films. We used symmetric scan geometry for the majority of the XRD measurements. The term symmetric scan kept the orientation of the scattering vector (q) fixed relative to the sample. In more practical terms, the inclined angle lsewhere . \u03c9-scan and \u03a6-scan to obtain the texturing information, such as out-of-plane texture and in-plane texture. \u03c9-scan is mainly used to determine the degree of crystallinity ordering in the films. 
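To make the symmetric scan geometry concrete, the expected position of the MgO (002) reflection can be estimated from Bragg's law. The MgO lattice constant and the Cu Kα wavelength used below are textbook values assumed for illustration, since neither is stated in the text:
\[
2\, d_{002}\sin\theta = \lambda,\qquad d_{002}=\frac{a_{\mathrm{MgO}}}{2}\approx \frac{4.212\ \text{\AA}}{2}=2.106\ \text{\AA},
\]
\[
\sin\theta=\frac{1.5406\ \text{\AA}}{2\times 2.106\ \text{\AA}}\approx 0.366 \;\Rightarrow\; \theta\approx 21.5^{\circ},\qquad 2\theta\approx 42.9^{\circ}.
\]
In a θ–2θ scan, a (002)-oriented epi-MgO film is therefore expected to show its diffraction peak near 2θ ≈ 42.9°, and the ω-scan and Φ-scan described next quantify the out-of-plane and in-plane spread of grains about this orientation.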
During the \u03c9-scan, the receiver was fixed at the 2\u03b8 position of the desired crystal face of the film, such as the (002) diffraction peak position of MgO. Subsequently, the sample stage rotated around an angle as the central angle, testing the angle range. Through computer fitting of the \u03c9-scan curve, the half-maximum width (FWHM) of the out-of-plane texture was obtained. The lower the FWHM value, the more ordered the crystal grains of this orientation were arranged. It is also necessary to utilize \u03a6-scan was mainly used to determine the degree of ordering of epitaxial thin films in the a-b plane. Before conducting \u03a6-scan, the material\u2019s strongest relative peak intensity surface (002) was selected from the PDF card, assuming that the angle between this surface and the sample surface was \u03c6. During testing, the sample rotated to an angle \u03a6, and then the sample stage and receiver were fixed at the \u03b8 and 2\u03b8 positions of the desired crystal face. The sample rotated around its normal direction, and through fitting of the \u03a6-scan curve, the half-maximum width of the in-plane texture was obtained. For example, a (220) phi scan of a single crystal of MgO rotated about the (200) surface normal would have four peaks from 0\u00b0 to 360\u00b0 of phi angle. This scan gave the distribution of orientations of crystallites relative to one another in-plane. The FWHM value can be used as a measure and its value was used to characterize the goodness of the in-plane texture. Contributions to the width of a high-quality single crystal were the result of the instrument broadening as these widths should be very near 0\u00b0. Finally, the surface morphologies of the epi-MgO films were analyzed using the SPM 300HV scanning probe platform from Seiko and the INSPECT F50 SEM platform.\u03b8\u20132\u03b8 diffraction patterns of MgO films, which were deposited at different temperatures with a film thickness of 250 nm, a deposition rate of 1.2 nm/s, and oxygen pressure of 10\u22122 Pa. The intensity of the MgO (002) diffraction peak was weak at the deposition temperature of 150 \u00b0C, indicating that this temperature failed to provide sufficient migration energy for the Epi-MgO films to grow along the c-axis orientation. The MgO migration energy increased with the deposition temperature increasing, resulting in a gradual increase in the peak intensity of the (002) orientation of the film. However, the maximum temperature that the heater could reach was 500 \u00b0C in the electron beam evaporation system. Therefore, the research was stopped at 500 \u00b0C.The growth temperature plays a crucial role in the epitaxial growth of epi-MgO films, influencing both the structure and surface morphology of the films. As the deposition temperature gradually increased, both the out-of-plane and in-plane textures were gradually optimized. When the deposition temperature reached 500\u00b0C, the biaxial texture of the epi-MgO films was \u0394\u03c9 = 2.2\u00b0 and \u0394\u03a6 = 4.8\u00b0, respectively. It has been proposed that the grain size of MgO increases with the deposition temperature . The larIn order to investigate the effect of oxygen pressures on the MgO films, the epi-MgO films were prepared under different oxygen pressures while keeping other parameters constant with a deposition thickness of 150 nm, the growth temperature of 300 \u00b0C, and deposition rate of 1 nm/s. 
The variation in the root-mean-square (RMS) roughness of the film surface with the oxygen flow rate shows that the roughness decreased as the oxygen pressure was raised toward 10⁻² Pa, indicating that as the oxygen flow rate increased, the oxygen defects in the film decreased and the surface morphology improved.

The deposition rate of the film also had a significant impact on the quality of the epi-MgO films. By adjusting the beam current of the electron beam evaporator, the deposition rate was varied while the film thickness, deposition temperature, and oxygen flow rate were kept constant. The deposition rate had little effect on the biaxial texture. The surface morphology of these samples was subsequently characterized by SEM.

Thickness is one of the key growth parameters affecting the grain size, crystallinity, and surface morphology of MgO. θ–2θ scans were collected for epi-MgO films with thicknesses ranging from 54 to 720 nm. The film thickness was controlled by the carrier frequency, while the other deposition parameters were maintained at a deposition temperature of 450 °C, a deposition rate of 1.2 nm/s, and an oxygen pressure of 1.6 × 10⁻² Pa. It should be pointed out that the IBAD-MgO films are extremely thin (10 nm); researchers at LANL found that an epi-MgO layer improves the in-plane texture significantly over IBAD-MgO alone. The variation in the FWHM values of the MgO films with thickness was analyzed by ω-scan and Φ-scan.

θ–2θ scans were then performed on the YBCO films deposited on the LMO/epi-MgO/IBAD-MgO/Y2O3 buffer layers by metal-organic chemical vapor deposition. The diffraction pattern shows that the YBCO grew along the pure c-axis, with a strong (001)-oriented peak intensity. Nevertheless, a weak Y2O3 (004) peak appeared, indicating that the Y2O3 crystallized within the buffer stack during the YBCO process; in the as-deposited buffer layers the Y2O3 was completely amorphous, with no Y2O3 diffraction peaks visible in the θ–2θ scans. It should be mentioned that the deposition temperature of YBCO reached 700 °C, as described in our previous research, whereas the crystallization temperature of Y2O3 is about 500 °C; the Y2O3 films therefore re-crystallized during YBCO deposition.

Since the lattice mismatch between MgO and YBCO is relatively high (~8.6%), it is necessary to deposit a template layer between MgO and YBCO. The lattice mismatch between LMO and YBCO is low (~0.8%), which reduces the influence of the buffer layer on YBCO growth during epitaxy. The YBCO functional layer was therefore deposited on the LMO layer. The Ic value of the full 10 mm-wide YBCO tape could not be measured directly because of the limited current-carrying capacity of our current source. A silver array was deposited on the YBCO tape as conducting electrodes, with a spacing of 0.3 mm between adjacent electrodes, and the critical current was tested by mounting the probes on adjacent silver electrodes. Assuming the YBCO film is uniformly distributed over the 10 mm-wide tape, the performance of the full tape can be estimated by scaling the Ic measured on a 0.3 mm-wide strip. The YBCO thickness was 1 µm and Ic was measured under self-field conditions; the critical current along the silver electrode was 9.27 A per 0.3 mm-wide strip.
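As a quick check of the lattice-mismatch argument above, the sketch below evaluates the mismatch from nominal literature lattice parameters; the numerical values and the simple mismatch definition used here are assumptions for illustration and only approximately reproduce the ~8.6% and ~0.8% figures quoted in the text.

```python
# Minimal sketch of the lattice-mismatch argument for the LMO template layer.
# The lattice parameters below are nominal literature values (assumptions, not
# taken from this work): MgO a = 4.212 A; YBCO a = 3.82 A, b = 3.89 A; LaMnO3
# (LMO) pseudo-cubic a = 3.88 A.
def mismatch(a_substrate: float, a_film: float) -> float:
    """Relative lattice mismatch between a substrate and an epitaxial film."""
    return (a_substrate - a_film) / a_substrate

a_mgo = 4.212
a_ybco_ab = (3.82 + 3.89) / 2.0   # average in-plane YBCO lattice parameter
a_lmo = 3.88

print(f"MgO / YBCO mismatch: {100 * mismatch(a_mgo, a_ybco_ab):.1f} %")   # roughly 8-9 %
print(f"LMO / YBCO mismatch: {100 * mismatch(a_lmo, a_ybco_ab):.1f} %")   # well under 1 %
```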
The above research shows that high-quality YBCO thin films can be epitaxially grown on the LMO/epi-MgO/IBAD-MgO template. We used the four-probe method to test the superconductivity of the YBCO films, and the measured critical current density indicates that this work provides a high-quality substrate for the YBCO layer. In summary, this paper investigates the influence of fabrication parameters, such as deposition temperature, film thickness, deposition rate, and oxygen partial pressure, on the structure, texture, and surface morphology of epi-MgO films prepared on IBAD-MgO substrates. The crystalline quality of the epi-MgO films improved as the deposition temperature was increased from 150 °C to 500 °C: the out-of-plane FWHM was reduced from 4.2° to 2.2°, and the in-plane FWHM decreased from 8° to 4.8°. As the film thickness increased from 54 nm to 720 nm, the RMS roughness rose rapidly from 1.6 nm to 25 nm while the in-plane texture improved from 5.4° to 2.2°. The oxygen pressure and deposition rate have minimal impact on the biaxial texture but significantly affect the surface morphology. The critical current density of the YBCO films deposited on the LMO/epi-MgO/IBAD-MgO template was 6 MA/cm².

Strong light-matter interactions in localized nano-emitters placed near metallic mirrors have been widely reported via spectroscopic studies in the optical far-field. Here, we report a near-field nano-spectroscopic study of localized nanoscale emitters on a flat Au substrate. Using quasi-2-dimensional CdSe/CdxZn1−xS nanoplatelets, we observe periodic fringe patterns in the emission. These fringe patterns were confirmed via extensive electromagnetic wave simulations to be standing waves formed between the tip and the edge-up assembled nano-emitters on the substrate plane. We further report that both light confinement and in-plane emission can be engineered by tuning the surrounding dielectric environment of the nanoplatelets. Our results lead to a renewed understanding of in-plane, near-field electromagnetic signal transduction from localized nano-emitters, with profound implications for nano- and quantum photonics as well as resonant optoelectronics. In brief, the authors investigate quasi-2D nanoscale emitters on different substrates with tapping-mode tip-enhanced spectroscopy, visualizing in-plane near-field and radiative energy propagation via surface plasmon polaritons launched by the nanoscale emitters on dielectric/Au or SiO2/Si substrates.

Different types of dipoles form different types of polaritons, including plasmon-polaritons in metals, exciton-polaritons in semiconductors, and phonon-polaritons in dielectrics in the IR range [5]. Such strong light-matter coupling requires confinement of light in a low-dimensional material or at an interface so that it can efficiently interact with the dipoles. In the case of surface plasmon polaritons (SPPs), light is confined to a dielectric/metal interface and forms a propagating electromagnetic wave along the surface [6]. Conversely, the formation of exciton-polaritons requires an intrinsic optical resonance of the medium to overlap with trapped light wave-packets in an optical cavity, such as a Bragg-mirror dielectric microcavity or a plasmonic cavity [5]. Propagating modes of exciton-polaritons have also been observed with the help of scattering-type nanoprobes [8] and microcavities [10]. Understanding of light-matter interactions in materials with strongly resonant properties and deep-subwavelength dimensions is important for both basic science and nano-opto-electronic applications.
In most cases, light does not modify the electronic dispersions of materials, because either the material dimensions are much greater than the wavelength λ or the material lacks an electronic dipole resonance at λ, resulting in weak coupling. When the coupling between light and the dipoles in matter becomes stronger, the rapid exchange of energy between photon states and electric dipole resonances leads to the formation of part-light, part-matter quasiparticle states called polaritons [2].

In reduced-dimensional materials, the exciton induced by light excitation interacts strongly with the surrounding medium owing to the lack of dielectric screening. The exciton binding energies of 2-dimensional (2D) WSe2 (0.78 eV) [11], 1-dimensional (1D) single-walled carbon nanotubes (0.3–0.4 eV) [12], and 0-dimensional (0D) CdSe quantum dots (0.2–0.8 eV) [13] are significantly larger than the room-temperature thermal energy (0.025 eV). The large exciton binding energy in these nanomaterials makes excitons, not free carriers, the dominant excited species, resulting in stronger light-matter interaction. Strong light-matter coupling in excitonic nanomaterials has been investigated in many ways, such as exciton-polaritons in 2D MoS2 placed in an optical cavity [14], exciton-plasmon polaritons of 2D WSe2 [15], 0D CdSe/ZnS quantum dots placed in plasmonic cavities [17], and surface plasmon polaritons at 2D MoS2/Al2O3/Au interfaces [18]. Most of these studies have been conducted either in diffraction-limited optical setups [16] or via non-optical excitation techniques such as electron energy loss spectroscopy [19]. However, imaging of strong light-matter coupling in nanoscale materials excited in the near-field at optical frequencies [20] has scarcely been investigated. Further, the impact of the nano-probe and of complex nano-optical fields on dipole interactions, as well as on energy guiding and transduction at these deep sub-wavelength scales, remains largely unexplored.

Tip-enhanced nano-spectroscopy has paved the way for direct nano-resolution spatio-spectral imaging of the emission of nanomaterials at optical frequencies [21]. By taking advantage of the plasmonic gap mode confined in the nano-gap between the plasmonic tip and the substrate, this technique has enabled the visualization of optical responses from sub-wavelength semiconductor structures, such as fluorescence/radiation patterns of quantum dots [20], strain-induced Raman and fluorescence shifts [23], lateral heterostructures of van der Waals semiconductors [25], and even localized excitonic emission from nanobubbles in 2D semiconductors [26]. Most emission studies using tip-enhanced nano-spectroscopy are performed in contact mode to maximize the plasmonic gap-mode confinement, resulting in strong light-matter coupling along the direction normal to the surface. For example, strong light-matter interaction in CdSe/ZnS quantum dots mediated by the plasmonic gap mode has been achieved, leading to exciton-plasmon polariton formation [27]. In addition, brightening of the dark exciton in monolayer transition-metal dichalcogenides via the Purcell effect has also been observed [28]. Yet, investigations of inelastic emission or scattering in the tapping-mode configuration have been limited [30]. Since the AFM tip oscillates in tapping mode, it remains close to the surface, so the collected signal can still be considered near-field. Recently, it was reported that tapping-mode tip-enhanced Raman maintains sub-wavelength resolution and is even beneficial as a charging-free measurement tool by preventing hot-carrier injection [32].
However, the role of the tapping-mode tip in emission remains elusive.

Herein, we report in-plane light-matter interactions of quasi-2D emitters on dielectric/Au or SiO2/Si substrates studied with tapping-mode tip-enhanced spectroscopy. Using this approach, we visualize in-plane near-field radiation and radiative energy propagation via SPPs launched by the emitters on dielectric/Au or SiO2/Si substrates. By placing nanoscale emitters, such as CdSe/CdxZn1−xS nanoplatelets and WSe2 nanobubbles, on dielectric/Au interfaces, we observe radiative fringe patterns that are indicative of sub-wavelength energy transfer from the nanoscale excitonic emitters to the plasmonic Au substrate. We further observe that the dielectric permittivity and thickness are key parameters that control the observed fringe patterns and the corresponding energy transfer. Our results facilitate a deeper understanding of near-field radiation from low-dimensional and hetero-dimensional excitonic systems.

We assembled CdSe/CdxZn1−xS core-shell nanoplatelets (NPs) on an ultrasmooth Au substrate. Using TEM, we measure the size of the NPs to be 40.2 ± 2.9 nm (length) × 16.1 ± 1.7 nm (width) × 2.8 ± 0.5 nm (thickness). To collect the in-plane near-field signal from the NPs, we obtained hyperspectral maps using tapping-mode operation. The characteristic timescale of the emission [39] is comparable to the dephasing time of the SPP at the Al2O3/Au interface (~10 fs) [40]; as a result, the tip reflects the SPPs back to the emitter, forming a standing wave. Standing-wave formation has been observed for different types of polaritons via scattering-mode near-field optical microscopy, e.g., near-IR SPPs in graphene [41], exciton-polaritons in bulk WSe2 [35], and phonon-polaritons in hBN metasurfaces [42]. In those previous reports the tip, in proximity to the specimen, launches the polariton; in our present work, by contrast, both the NPs and the tip can launch SPPs along the dielectric/Au interface. To evaluate the contribution of each case, we simulated the cross-sectional E-field profile of a dielectric/NP/Au structure in the vicinity of an Au tip engaged in tapping mode, i.e., 20 nm away from the Au surface, to visualize the generation of these standing waves. The Ez field maps computed with and without the tip demonstrate that the surface-confined electric field of the SPPs persists regardless of the tip emission.

We then examined an Al2O3 (5 nm)/NP/Au system. The topography and the corresponding hyperspectral TEPL maps were obtained in tapping mode during the same scan. Region A was determined to be in a face-down configuration from its thickness (12 nm), which corresponds to two NPs surrounded by a 2 nm oleate ligand layer, whereas cluster B was characterized as an edge-up configuration of two NPs with ligands, consistent with its 40 nm thickness. Near-field TEPL of the two regions shows spatially localized emission at 664 nm. To examine the influence of the dielectric environment, we also compared dielectric layers of TiO2 (ε = 5.02 at 630 nm) and monolayer WSe2 (ε = 15 at 630 nm); a 0.7 nm-thick Al2O3 sample was prepared for comparison with monolayer WSe2. A larger difference in ε between the dielectric material and the metal creates stronger confinement of the electromagnetic waves at the interface.
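The connection between the emission wavelength, the dielectric environment, and the expected fringe period can be estimated from the textbook SPP dispersion relation at a dielectric/metal interface. The following sketch is not the authors' electromagnetic simulation; it assumes an idealized flat Au surface, a literature permittivity value for Au near the NP emission wavelength, and an effective superstrate permittivity, and it reads the fringe period as half the SPP wavelength.

```python
import cmath

# Illustrative estimate of the SPP wavelength and the expected standing-wave
# fringe period at a dielectric/Au interface. Assumptions: flat interface, a
# literature value for the Au permittivity near 664 nm, and a simple effective
# superstrate permittivity; none of these numbers come from the paper.
lambda0 = 664e-9            # NP emission wavelength (m)
eps_au = -13.7 + 1.04j      # Au permittivity near 660 nm (literature value, approx.)
eps_d = 1.0                 # effective permittivity above the Au (thin film plus air)

k0 = 2 * cmath.pi / lambda0
k_spp = k0 * cmath.sqrt(eps_au * eps_d / (eps_au + eps_d))   # SPP dispersion relation
lambda_spp = 2 * cmath.pi / k_spp.real

print(f"SPP wavelength:         {lambda_spp * 1e9:.0f} nm")
print(f"expected fringe period: {lambda_spp / 2 * 1e9:.0f} nm (half the SPP wavelength)")
# With these inputs the period comes out near 320 nm, in the same range as the
# measured periods quoted for the thin-dielectric/NP/Au samples.
```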
To generalize the dielectric effects on the fringe periods, we simulated fringe patterns as a function of dielectric constant and thickness, summarized in a 2D map. To verify the validity of the simulations experimentally, Al2O3, TiO2, and monolayer WSe2 were deposited on as-prepared NPs on the Au substrate. Note that the thickness of the NP cluster is ~30 nm for all samples and that the emission spectrum peaks at ~1.85 eV in every case. Similar boomerang-shaped patterns were observed for the Al2O3 (0.7 nm), WSe2 (0.7 nm), and TiO2 (5 nm)/NP/Au samples, with fringe periods of 329 ± 135 nm for Al2O3 (0.7 nm), 325 ± 119 nm for WSe2 (0.7 nm), and 313 ± 92 nm for TiO2 (5 nm)/NP/Au, respectively. Once again, the large error bars are due to lossy plasmonic propagation. The E-K diagram of the SPPs generated at the dielectric/Au interface was simulated by adopting the Lorentz-Drude model for Au [49], and the experimental values are in close agreement with the simulated dispersion. The Al2O3 (0.7 nm)/Au interface deviates from the calculated E-K diagram, possibly because the <1 nm-thin ALD-grown alumina film forms a non-uniform dielectric medium. The strong agreement between calculations and experiments reveals that small variations of the SPP caused by a change of dielectric can be detected by the near-field scanning-probe technique. It also excludes the possibility that the fringes result from diffraction, since the diffraction of free-space light cannot be affected by a dielectric layer of deep-sub-wavelength thickness. The SPP decay constant, which directly correlates with energy transfer, shows the opposite trend with the dielectric constant of the medium, i.e., a larger ε results in a longer decay constant. Quantitatively, high-dielectric materials such as TiO2 and WSe2 showed decay constants of 468 ± 0.16 nm and 417 ± 0.12 nm, respectively, showing that a thicker, higher-ε dielectric medium is more effective for in-plane light propagation. WSe2 bubbles, which confine excitons via strain-induced spatial modulation of the band structure, also act as localized emitters [50]; they emit at 850 nm and launch fringes with a period of 357 ± 163 nm. The excitation wavelength was also varied, including excitation at 785 nm (1.58 eV): the fringe period under 594 nm excitation was similar to that obtained with 633 nm excitation, while the fringe period under 785 nm excitation was longer.

To investigate in-plane guided-mode light propagation in a dielectric layer, we fabricated TiO2-NP structures on SiO2 (50 nm)/Si substrates rather than on plasmonic Au substrates; the refractive index of TiO2 is higher than that of SiO2 (1.46), ensuring larger light confinement in the dielectric layer. We observed a similar fringe pattern of the excitonic emission at 663 nm on the 50 nm-thick SiO2 substrates. In this case the emitted light is expected to be confined in a waveguide mode that propagates and is reflected back from the Au tip, forming fringe patterns that represent standing waves.
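As an illustration of how fringe periods and decay constants such as those quoted above could be extracted from a measured emission line profile, the sketch below fits an exponentially damped cosine to synthetic data; the functional form and all numbers are assumptions for illustration rather than the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model for a fringe line profile: a standing-wave modulation with
# period P, damped by lossy SPP propagation with decay constant L, on a background.
def fringe_profile(x, amp, period, decay, phase, offset):
    return amp * np.exp(-x / decay) * np.cos(2 * np.pi * x / period + phase) + offset

x = np.linspace(0, 1700, 200)                     # distance from the emitter (nm)
rng = np.random.default_rng(1)
data = fringe_profile(x, 1.0, 320.0, 450.0, 0.3, 2.0) + rng.normal(0, 0.05, x.size)

p0 = (1.0, 300.0, 400.0, 0.0, 2.0)                # rough initial guesses
popt, pcov = curve_fit(fringe_profile, x, data, p0=p0)
period, decay = popt[1], popt[2]
period_err, decay_err = np.sqrt(np.diag(pcov))[[1, 2]]

print(f"fringe period : {period:.0f} +/- {period_err:.0f} nm")
print(f"decay constant: {decay:.0f} +/- {decay_err:.0f} nm")
```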
In summary, we report a comprehensive study of the near-field interaction of a plasmonic tip with localized nanoscale excitonic emitters by spatially imaging their emission patterns using tip-enhanced nano-spectroscopy. Taking advantage of the nanoscale spatial resolution of the tip, sub-wavelength interference of in-plane propagating electromagnetic modes can be analyzed under tapping-mode operation. Hyperspectral maps clearly illustrate that photons from localized emitters can be emitted in-plane and visualized ~1.7 microns away from the emitting source by virtue of standing-wave formation. The interference period and the corresponding signal decay rate are governed by the dielectric layer thickness and permittivity, which also dictate the fraction of photons radiated in-plane and the degree of confinement. The exciton transition-dipole orientation and the presence of the tip determine the shape and directionality of the standing wave. Strain-induced formation of localized emitters in a 2D dielectric medium is favorable in terms of in-plane radiation coupling efficiency. Finally, the difference between the excitation energy and the exciton emission energy is an important metric that dictates the mechanism involved in fringe formation. Our work shows that near-field scanning-probe microscopy with metallic tips is a useful tool for imaging and analyzing nanoscale excitonic emitters, including their radiation patterns and dipole orientations. In addition, it helps in understanding energy-transfer mechanisms and the dynamics of excited-state phenomena in emitters at deep sub-wavelength scales, with both spectral and spatial information. The technique could therefore serve as a useful tool for imaging, identifying, and manipulating the dipoles of even quantum emitters, opening new avenues in classical and quantum nanophotonics.

Cadmium myristate precursor is prepared by following the literature [51]. Colloidal, rectangular CdSe nanoplatelets with a thickness of 4.5 monolayers are synthesized following the literature [52] with slight modifications [53].

340 mg of finely ground cadmium myristate and 28 mL of 1-octadecene (ODE) are added to a 100 mL three-necked round-bottom flask with a 1-inch octagonal stir bar. The central neck is connected to the Schlenk line through a 100 mL bump trap, one of the side necks is equipped with a thermocouple adapter and thermocouple, and the other is fitted with a rubber stopper. With a heating mantle, the flask is degassed at 100 °C for 30 min. Meanwhile, a dispersion of 0.15 M selenium in ODE is prepared by sonication for at least 20 min. After switching the atmosphere of the flask to nitrogen, the temperature of the reaction is increased to 220 °C. 2 mL of the 0.15 M Se/ODE dispersion are quickly injected using a 22 mL plastic syringe equipped with a 16 G needle. After 20 s, 120 mg of finely ground cadmium acetate are added to the flask by temporarily removing the stopper. The flask is carefully rocked to ensure that the cadmium acetate powder does not stick to the side walls of the flask. The reaction is kept at 220 °C for 14 min and then rapidly cooled with a water bath. 12 mL of oleic acid and 22 mL of hexane are added when the temperature reaches 160 °C and 70 °C, respectively.

The nanoplatelets are washed by following a procedure reported in the literature with modifications [54]. The mixture is first centrifuged at 8586 g for 10 min. The precipitate is then redispersed in 10 mL of hexane. The suspension is left undisturbed for 1 h and then centrifuged at 6574 g for 7 min. The precipitate is discarded, as it contains undesired 3.5 monolayer nanoplatelets. The supernatant is retained and transferred to a new centrifuge tube. 10 mL of methyl acetate are added to the supernatant, followed by centrifugation at 5668 g for 10 min. 6 mL of hexane is used to redisperse the precipitate.
Measuring the optical absorption spectrum is useful to confirm the removal of the unwanted 3.5 monolayer nanoplatelets, which are characterized by a lowest-energy absorption peak at 462 nm, while the 4.5 monolayer nanoplatelets are characterized by a lowest-energy absorption peak at 512 nm. If 3.5 monolayer nanoplatelets are still present in the dispersion, they can be removed by titrating methyl acetate and centrifuging until all 3.5 monolayer nanoplatelets are successfully removed. The final dispersion is stored in a glass vial in the dark.

Cadmium oleate (Cd(Ol)2) and zinc oleate (Zn(Ol)2) are synthesized according to the literature [55]. The growth of the CdxZn1−xS shell on the CdSe nanoplatelets is performed by following the literature [54] with minor modifications. 10 mL of ODE, 0.4 mL of OA, 90 mg of cadmium oleate, 167.5 mg of zinc oleate, and an amount of 4.5 monolayer CdSe nanoplatelets in hexane equivalent to 1 mL with an optical density of 120/cm at the lowest-energy absorption peak are added to a 100 mL three-necked round-bottom flask with a 1-inch octagonal stir bar. The central neck is connected to the Schlenk line through a 100 mL bump trap, one of the side necks is equipped with a thermocouple adapter and thermocouple, and the other is fitted with a rubber stopper. The mixture is degassed for 35 min at room temperature and for 15 min at 80 °C. Meanwhile, a solution of 83 μL of 1-octanethiol (OT) in 7 mL of degassed ODE and 2 mL of degassed OA is prepared in the glovebox and loaded into a plastic syringe. 2 mL of degassed oleylamine (OAm) are added to a second plastic syringe. The two syringes are removed from the glovebox. Afterward, the atmosphere of the reaction flask is switched to nitrogen and the 2 mL of OAm are injected. Using a heating mantle, the temperature of the reaction flask is increased to 300 °C. At 165 °C, the solution of OT in ODE and OA is injected at a rate of 4.5 mL/h. After complete injection, the reaction temperature is maintained for an additional 40 min. The reaction mixture is cooled down to 240 °C using an air gun, and then to room temperature using a water bath. At 40 °C, 5 mL of hexane are added.

The reaction mixture is centrifuged at 6000 g for 6 min. The precipitate is redispersed in 5 mL of hexane while the supernatant is discarded. Methyl acetate is added to the dispersion until the mixture turns turbid, followed by centrifugation at 6000 g for 10 min. This process is repeated. The precipitate is redispersed in 3 mL of hexane and centrifuged at 6000 g for 7 min. The precipitate, which contains aggregated nanoplatelets, is discarded. The supernatant is retained and filtered through a 0.2 µm PVDF or PTFE syringe filter. The final dispersion is stored in a glass vial under ambient conditions in the dark.

For low-resolution TEM, a JEOL 1400 microscope was operated at 120 kV. For higher-resolution TEM, a JEOL F200 microscope was operated at 200 kV. During imaging, the magnification, focus, and tilt angle were varied to yield information about the crystal structure and superstructure of the particle systems.
To prepare the dispersed nanocrystals for imaging, we drop-cast 10 μL of a dilute (~0.1 mg/mL) dispersion of nanocrystals in hexane on a carbon-coated TEM grid (EMS). The grid was dried under vacuum for 1 h prior to imaging.

Absorption spectra of nanocrystal dispersions in toluene were measured using a Cary 5000 UV-Vis-NIR spectrophotometer. Photoluminescence quantum yield (PLQY) measurements were performed using the integrating-sphere module of an Edinburgh FLS1000 photoluminescence spectrometer. The NCs were dispersed at a concentration corresponding to an absorbance of 0.1 at the excitation wavelength.

The diluted CdSe/CdxZn1−xS nanoplatelet dispersion (0.001 mg/mL in toluene) was spin-coated on a template-stripped Au substrate. Template-stripped Au was used for its exceptionally low RMS roughness (0.5 nm) [56]. Dielectric layers were deposited by atomic layer deposition (Cambridge Nanotech S200 ALD), and the refractive index of the dielectric layers was measured by ellipsometry (Woollam VAS ellipsometer).

A LabRam-EVO Raman/far-field PL spectrometer (Horiba Scientific) coupled with an AFM setup was used to conduct the tip-enhanced photoluminescence measurements. After the 633 nm laser was aligned to the apex of the tip, the sample was engaged with the frequency-modulated feedback loop to measure the topography. The corresponding TEPL spectra were obtained in both the contact and the tapping mode during the same measurement. Each pixel in the hyperspectral map spans 30 × 30 nm² and signals were collected for 100 ms. The near-field TEPL map and spectrum were extracted by subtracting the contact-mode TEPL from the tapping-mode TEPL.

Further information on research design is available in the Supplementary Information.
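A minimal post-processing sketch of the subtraction step described above follows; the array names, shapes, and random stand-in data are hypothetical, and the snippet only illustrates the per-pixel spectral subtraction and band integration, not the actual acquisition software.

```python
import numpy as np

# Illustrative post-processing sketch (not the authors' code): extracting a
# near-field TEPL map from two co-registered hyperspectral cubes, following the
# subtraction described above. Array names, shapes, and data are hypothetical.
ny, nx, nbands = 64, 64, 512                 # map pixels (30 x 30 nm each) and spectral bins
rng = np.random.default_rng(0)
tapping_cube = rng.random((ny, nx, nbands))  # stand-in for the tapping-mode TEPL cube
contact_cube = rng.random((ny, nx, nbands))  # stand-in for the contact-mode TEPL cube

near_field_cube = tapping_cube - contact_cube    # per-pixel spectral difference
near_field_map = near_field_cube.sum(axis=-1)    # integrate over the emission band

peak_idx = np.unravel_index(np.argmax(near_field_map), near_field_map.shape)
print("brightest near-field pixel:", peak_idx)
print("its spectrum has", near_field_cube[peak_idx].shape[0], "spectral points")
```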