{"text": "An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), \"Digital Northern\" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error.We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools (\"pseudo-libraries\") and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it.Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site. A much more usual approach is to assign an index that measures the confidence/significance of the hypothesis and let the biologists themselves to establish a cutoff of what they call significant.An important challenge in Serial Analysis of Gene Expression (SAGE) analysisThis necessity arises because counting sequenced SAGE tags is a process prone to random and systematic errors that affect gene expression abundance estimates. Systematic errors may come from various sources such as GC content bias , sequencIf an Expressed Sequence Tag (EST) library is non-normalized, its counting data, also known as \"Digital-Northern\", reflects the abundance of genes. 
Likewise, the Massively Parallel Signature Sequencing (MPSS) technique also yields counting data that reflect gene abundance. Nowadays, the variability in SAGE abundance data is modeled only as due to sampling from sequencing, since almost all statistical procedures are performed after aggregation of observations from various libraries of the same class, creating a \"pseudo-library\". Here we propose a Bayesian model of mixtures to account for within-class variability as a generalization of the Beta-Binomial model. The i-th library is often modeled as a Bernoulli Process, and a fixed unknown tag abundance πi is implicitly assumed. The pdf of the random variable of interest, \"expression abundance\" π ∈ [0, 1], among all n libraries is unknown, thus each library could be regarded as being created by a realization of π. These features lead naturally to mixture models. Using f(·) as a Dirac's Delta in Eq. 1 turns it into the familiar and commonly used binomial distribution (see derivation in the Methods section). The common procedure of merging all observations from libraries of the same class, constructing a \"pseudo-library\" before statistical inference, is recognized as a particular case of this mixture model: just assume that all libraries have strictly the same abundance, with no biological variability. Mathematically, this is a function with infinite probability density over one single abundance value, Dirac's Delta. Using f(·) as a Beta in Eq. 1 yields the so-called Beta-Binomial model (see derivation in the Methods section). We believe that Dirac's Delta is a naive description of real-life SAGE libraries. The Beta distribution is an alternative with non-zero within-class variance to account for intuitively expected biological differences among them. Using the parameter vector θ that describes the random variable π of some fixed gene G, we must decide if there is a difference between the A and B classes. We propose to consider genes as being differentially expressed based on non-superposition of the predictive Beta pdfs of both A and B classes. 
By \"predictive\" we mean that we use the a posteriori mode in the Beta pdfs. The \"non-superposition\" intuitive feature is mathematically written as the Bayes Error Rate E of some gene G is indexed in a model family by means of a parameter vector θ. Therefore, following the usual Bayesian framework, the a posteriori pdf that describes the class is:To start generically, suppose that the probability density function (pdf) for the random variable of interest \"expression abundance\" X = is the vector of counts in all n libraries of same class, M = is the vector of total observations in all n libraries of same class, g(·) is the a priori pdf, and L is the likelihood of each i-th observation. Note that the product of all likelihood functions over all observations is the so-called Likelihood Function.where: posteriori expression anyway.The counting process from automatic sequencing is often modeled as a Binomial. Since the sample size and the stopping rule are not known in advance the model is not strictly Binomial. We do not need the combinatorial constant in the model, but we write it just because it is commonly used and will vanish in f(·) as a Dirac's Delta in Eq.1:Merging all observations from the same class libraries and constructing \"pseudo-libraries\", with the sum of their components, is the standard procedure to use replicates. Our general model is reduced to this one if one uses 1{·} is the indicator function.where: Using Eq.4 in Eq.3 yield:θ) = 1, the non-informative uniform a priori distribution.where: g, and the sum of observations is the mathematical translation of \"pseudo-libraries\" construction.The expert recognizes that f(·) as a Beta in Eq.1 we get the Beta-Binomial model as a particular case of general model:The only published solution that allows non-zero within-class variance in SAGE analysis is the Beta-Binomial model . Using fB(·) is the beta special function, and:where: θ = as the mean and standard deviation (stdv) of a Beta random variable. 
We prefer this parameterization of Beta distributions instead of the common one because: (i) it is much more intuitive to biologists to deal with mean and stdv than with abstract α and β; and (ii) as α, β > 0, the domain Θ = {(θ1, θ2): 0 ≤ θ1 ≤ 1, 0 ≤ θ2² < θ1(1 − θ1) ≤ 1/4} is bounded and much more amenable to the necessary numerical computations. Using Eq. 6 in Eq. 3 yields Eq. 7, where g is the a priori pdf. We use a uniform distribution over Θ. On the other hand, we know in advance that the variance of this model cannot be smaller than the variance eventually obtained if we do not consider within-class variability. Even if the within-class variability is very small, it cannot be estimated as being smaller than the simple sampling error because they are inseparable, and sampling error is the lower limit. Although the integration is formally over the whole support, the relevant contribution for this integral is concentrated in a much smaller region. Integrating over the formal limits would cause serious numerical errors, and to avoid this problem we approximate our integration region by an interval delimited by the 0.005 and 0.995 quantiles of each Beta pdf, since the relevant density lies in there. The credibility intervals (\"error bars\") for the expression ratio of interesting tags were obtained as described in our recent work. We have also developed an easy-to-use web-based service that performs all calculations at our server and provides password-protected results. Although desirable for the sake of automatic web hyperlinking with the SAGE Genie database, it is not necessary to explicitly identify the tags analyzed; rather, any (custom) i.d. string can be used. This could increase privacy or make our web interface useful for \"Digital Northern\", MPSS or any mathematically related problem of mixtures from binomial sampling. For our aims, it is sufficient to focus the analysis at the tag level. 
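The (mean, stdv) parameterization of θ and the resulting Beta-Binomial likelihood are easy to sketch numerically. A minimal illustration in Python (the function names are ours; the authors' actual scripts are in R and are not reproduced here):

```python
from math import lgamma

def beta_params_from_moments(mean, sd):
    """Map theta = (mean, sd) to the usual (alpha, beta) of a Beta pdf.
    Valid only inside the bounded domain 0 <= sd^2 < mean * (1 - mean)."""
    var = sd * sd
    if not (0.0 < mean < 1.0 and 0.0 < var < mean * (1.0 - mean)):
        raise ValueError("(mean, sd) outside the Beta domain")
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def log_betabinom(y, n, alpha, beta):
    """log P(Y = y | n, alpha, beta) for the Beta-Binomial mixture:
    Binomial sampling with a Beta-distributed abundance pi."""
    lchoose = lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
    lbeta = lambda a, b: lgamma(a) + lgamma(b) - lgamma(a + b)
    return lchoose + lbeta(y + alpha, n - y + beta) - lbeta(alpha, beta)
```

For example, mean 0.5 with variance 0.05 recovers the symmetric Beta(2, 2), and with alpha = beta = 1 (uniform mixing density) the Beta-Binomial puts equal mass 1/(n + 1) on every count from 0 to n.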
Thus, we process the tag counts and leave the identification of each tag's best gene match as a posterior question, to be carefully addressed only for the really interesting tags. We choose not to process tags whose counts appear only in libraries of one class. It is important to note that all libraries are from bulk material, without cell lines, and came from patients with similar disease descriptions. The normal libraries came from different normal regions of the brain. We think that this data set is very illustrative since there are biological replicates in the tumor class, allowing clear verification of within-class biological variability. On the other hand, taking only one kind of disease, astrocytoma grade III, instead of all brain tumors in the database, leads one to believe that the within-class variability is in fact due to biological diversity of the patients and not due to the very distinct molecular profiles of distinct brain tumors stored in SAGE Genie's database. Therefore, we believe that this in silico comparison is well-suited to demonstrate the necessity of dealing with the within-class effect, although it is not our aim here to make a detailed biological analysis of brain tumor data. For comparison, we also applied the χ2 proportion test and the Bayesian Audic-Claverie method. All these tests were performed using the easy-to-use web interface IDEG6. To fit all the parameters pA, pB, VA and VB, the authors of the Beta-Binomial approach used the computationally practical Method of Moments. Once pA, pB, VA and VB are found for classes A and B, these authors test whether the proportions are significantly different, proposing a tw statistic that follows a Student's t pdf with df degrees of freedom. The max(·) function assures that V is not unrealistically small when Vu is unrealistically small. SAGE: Serial Analysis of Gene Expression. MPSS: Massively Parallel Signature Sequencing. EST: Expressed Sequence Tag. pdf: probability density function. GEO: Gene Expression Omnibus. RV conceived and executed this work. HB helped with all biological issues. 
DFCP helped in differential expression detection methods and implemented the on-line web-based tool. CABP helped with Bayesian statistics and proposed the mixture ideas. Results for all evidence measures. This file allows the user to interactively define significance cutoffs for ranked tags. The ranks are based on evidence measures against the \"no differential expression\" hypothesis, i.e., evidence values closer to 0 (zero) denote higher confidence in differential expression and values closer to 1 (one) denote no evidence of differential expression."} {"text": "Serial Analysis of Gene Expression (SAGE) is a method of large-scale gene expression analysis that has the potential to generate the full list of mRNAs present within a cell population at a given time and their frequency. An essential step in SAGE library analysis is the unambiguous assignment of each 14 bp tag to the transcript from which it was derived. This process, called tag-to-gene mapping, represents a step that has to be improved in the analysis of SAGE libraries. Indeed, the existing web sites providing correspondence between tags and transcripts do not cover all species for which numerous ESTs and cDNAs have already been sequenced. This is the reason why we designed and implemented a freely available tool called Identitag for tag identification that can be used in any species for which transcript sequences are available. Identitag is based on a relational database structure in order to allow rapid and easy storage and updating of data and, most importantly, in order to be able to precisely define identification parameters. This structure can be seen as three interconnected modules: the first one stores virtual tags extracted from a given list of transcript sequences, the second stores experimental tags observed in SAGE experiments, and the third allows the annotation of the transcript sequences used for virtual tag extraction. 
It therefore connects an observed tag to a virtual tag and to the sequence it comes from, and then to its functional annotation when available. Databases made from different species can be connected according to orthology relationships, thus allowing the comparison of SAGE libraries between species. We successfully used Identitag to identify tags from our chicken SAGE libraries and for chicken-to-human SAGE tag interspecies comparison. Identitag sources are freely available. Identitag is a flexible and powerful tool for tag identification in any single species and for interspecies comparison of SAGE libraries. It opens the way to comparative transcriptomic analysis, an emerging branch of biology. In order to characterize the molecular basis underlying the self-renewal versus differentiation decision-making process, we investigated the transcriptomic changes of various states related to this process in two model systems: one derived from chicken and the other from human cells. We decided to use Serial Analysis of Gene Expression (SAGE) to attain this goal. Serial Analysis of Gene Expression is a comprehensive method for analyzing transcriptomes without any a priori regarding the genes to be studied. It can be used with mRNAs derived from cells of any eukaryotic species. SAGE is based on the isolation of a unique sequence tag from each individual transcript and on serial concatenation of several tags into long DNA molecules. Sequencing of concatemer clones reveals individual tags and allows quantification and identification of transcripts. Tag counts are digitally archived and statistically significant comparisons of expression levels can be made between tag counts derived from different populations of cells. An essential step in SAGE library analysis is the unambiguous assignment of each 14 bp tag to the transcript from which it was derived. This process, called tag-to-gene mapping, represents a step that has yet to be completed in the analysis of SAGE libraries. 
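The "virtual tag" extraction that tag-to-gene mapping relies on can be sketched directly. The snippet below assumes the standard SAGE protocol, in which the tag is the anchoring-enzyme site (CATG for NlaIII) closest to the 3' end plus the 10 bp that follow; the function name is ours, not from Identitag:

```python
def extract_virtual_tag(transcript, anchor="CATG", tag_len=10):
    """Return the 14 bp virtual tag (anchor + tag_len downstream bases)
    taken at the 3'-most anchor site, or None if the site is absent
    or too close to the 3' end to yield a full tag."""
    pos = transcript.rfind(anchor)
    if pos < 0 or pos + len(anchor) + tag_len > len(transcript):
        return None
    return transcript[pos:pos + len(anchor) + tag_len]
```

A real pipeline would run this over every transcript or EST cluster sequence and store the results in the first database module, keyed to the source sequence.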
The automated version of this process mostly involves extracting \"virtual tags\" from sequence databanks: these virtual tags are predictions of the 14 bp sequences that might be produced by a SAGE experiment. The quality of the databanks from which the virtual tags are extracted represents a limiting step in this process. Ideally, the databanks should represent the complete collection of each and every transcript, fully sequenced and annotated. This clearly has yet to be achieved for most species, and therefore one must use the available information that comes mainly from large EST (Expressed Sequence Tags) projects. Existing resources provide tag-to-UniGene mapping for several species (Arabidopsis thaliana, Bos taurus, Homo sapiens, Medicago truncatula, Meleagris gallopavo, Mus musculus, Pinus taeda, Rattus norvegicus, Sus scrofa, Triticum aestivum and Vitis vinifera). However, this site does not include tag-to-UniGene mapping for several other species for which numerous ESTs and cDNAs have already been sequenced. This is the reason why we designed and implemented a freely available tool for tag identification that can be used in any species for which transcript sequences are available. It can include both complete cDNAs and EST cluster sequences, and allows the database to be interrogated according to the source of data, to assess the quality of virtual tags derived from different transcript sequences. In this paper we describe the use of this tool for the chicken, where a large EST sequencing effort was completed. Identitag is a flexible and powerful tool for tag identification in any single species and for interspecies comparison of SAGE libraries. It opens the way to comparative transcriptomic analysis, an emerging branch of biology. Project name: Identitag • Project home page: • Operating system(s): SUN, Linux, Mac OS X • Programming languages: Perl, Bourne Shell, MySQL • License: GNU GPL • CK participated in the design of Identitag, implemented Identitag, and participated in the biological validation of Identitag with SAGE libraries. 
FD constructed the first two SAGE libraries with which Identitag was tested and participated in the biological validation of Identitag with these data. LD and DM brought their expertise in the orthology area, in order to design the orthology relationship. OG supervised this work. All authors participated in the writing of the manuscript, and read and approved the final manuscript."} {"text": "Learning the exact demographic characteristics of the neighborhood that a public library serves assists the collection development librarian in building an appropriate collection. Gathering that demographic information can be a lengthy process, and formatting the information for the neighborhood in question becomes arduous. As society ages and the methods for health care evolve, people may take charge of their own health. With this in mind, public libraries should consider creating a consumer health collection to assist the public in their health care needs. Using neighborhood demographic information can inform the collection development librarians as to the dominant age groups, sexes, and races within the neighborhood. With this information, appropriate consumer health materials may be assembled in the public library. In order to visualize the demographics of a neighborhood, the computer program ArcView GIS (geographic information systems) was used to create maps for specified areas. The neighborhood data was taken from the U.S. Census Department's annual census and library addresses were accumulated through a free database. After downloading the census block information, the data was manipulated with ArcView GIS and queried to produce maps displaying the requested neighborhood demographics with respect to libraries. ArcView GIS produced maps displaying public libraries and requested demographics. 
After viewing the maps, the collection development librarian can see exactly what populations are served by the library and adjust the library's collection accordingly. ArcView GIS can be used to produce maps displaying the communities that libraries serve, spot boundaries, man-made or natural, that inhibit customer service, and assist collection development librarians in justifying their purchases for a dedicated consumer health collection or resources in general. Libraries have the objective of building collections that support the communities they serve. To build a viable collection, the collection development librarian, outreach/marketing librarian, and others must determine exactly what populations reside in the neighborhood, with respect to race, spoken language, educational level, and age groups. One method collection development librarians use to gather demographic information is to physically visit the communities and integrate into the neighborhoods. They may attend events within the community in order to analyze the attendees or walk around the neighborhood to get a feel for the community. Another method for collection development is to perform an informal survey with people who visit the library and learn about their preferences and/or what they like to read or look for on the Internet. Using census information can be a third way to gather community/neighborhood information. The US Census Bureau takes a census of the United States every 10 years and publishes the results on the Internet. Taking the Census information and transforming it into a graphical format provides an objective view of the communities surrounding a library. One way to visualize the census data is through the use of GIS, Geographic Information Systems. 
GIS are databases arranged by spatial coordinates that, when programmed, can produce maps. Many disciplines have used GIS, and many large academic libraries even support GIS by establishing a GIS department and employing a librarian to specialize in GIS information. In the field of library science, articles have been published describing the establishment of GIS departments in libraries and what GIS librarians do, but little has been published about using GIS for collection development. Governments use GIS to visualize land use planning, tax appraisal, utility and infrastructure planning and more. Businesses use GIS as well. Building a collection for a library has become a sophisticated art. Johnson states that \"collection building consists of four steps: identifying the relevant items, assessing the item to decide if it is appropriate for the collection and evaluating its quality, deciding to purchase, and preparing an order.\" This paper reports the steps taken to use GIS in assisting a large metropolitan public library system in visualizing neighborhoods that surround branch libraries in order to make an educated decision on whether a dedicated consumer health collection should be established to support the community. The objective of the project was to determine if GIS could be used to improve collection development. After comparing large library systems in major metropolitan areas, Chicago was selected as a convenient sample to be mapped with GIS because of its size, number of public libraries, their geographic distribution within the urban area, and ease of access to data. For comparison, the New York Public Library system has 86 libraries and Los Angeles has 67 public libraries. With the selection of the city of Chicago, the next step was to create a map of the metropolitan area. ESRI ArcGIS v9.0 geographic information software (GIS) was selected. In order to create the city map of Chicago, data files were obtained from the U.S. Census Bureau website. 
The Census Bureau produces TIGER files, available free to the public; these files were used to build the base map. The next step required mapping distances between neighborhoods and neighborhood demographics. In order to achieve this, demographic files were obtained from the U.S. Census Bureau website. The final piece of information needed to build a map showing neighborhoods with respect to public library locations was to map the public libraries within Chicago. Because the base map of Cook County was a street map, addresses of public libraries were entered into ArcView to mark all the public libraries within the city of Chicago. The state of Illinois provides a database available on the Internet that permits searching types of libraries and locations of libraries. The output of this search was imported into Excel, cleaned, and converted into a dBase file. The dBase file was imported into ArcView to coordinate the library addresses to the street addresses of the base map. This enabled the software to place an image for each public library in the city map of Chicago. With the city of Chicago and its public libraries now in a GIS application, the location of libraries was displayed with respect to neighborhoods and distances from library to library. The age ranges presented in the maps varied by query and census information. The groups analyzed were: women, men, white women (group age 40–49), African American women (group age 40–49), and Asian women. The color schemes used represent the number of people that live within the census block according to the specified GIS query. In this project ArcGIS was used to build maps of neighborhoods in the city of Chicago, display public libraries and spatially represent distances from libraries to populations. The US Census demographic information was incorporated into ArcGIS and queried to produce 15 different maps. Queries were built to display neighborhood demographics through colors showing the percentages of the population(s). 
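Conceptually, each of these queries filters census blocks on a demographic attribute and aggregates counts around a library location. A simplified, software-agnostic Python sketch (the data layout, field names and distance rule are illustrative assumptions, not taken from the project or from ArcGIS):

```python
# Each census block: centroid coordinates plus demographic counts (toy data).
blocks = [
    {"x": 1.0, "y": 2.0, "women_40_49": 120, "total": 900},
    {"x": 1.2, "y": 2.1, "women_40_49": 45,  "total": 600},
    {"x": 5.0, "y": 5.0, "women_40_49": 300, "total": 800},
]

def population_near(library_xy, blocks, field, radius):
    """Sum a demographic field over blocks whose centroid lies within
    `radius` of the library (straight-line distance; a real GIS would
    use projected coordinates and polygon containment)."""
    lx, ly = library_xy
    return sum(b[field] for b in blocks
               if ((b["x"] - lx) ** 2 + (b["y"] - ly) ** 2) ** 0.5 <= radius)

# e.g. women aged 40-49 within 1 unit of a branch at (1.1, 2.0)
nearby = population_near((1.1, 2.0), blocks, "women_40_49", 1.0)
```

In ArcGIS the same question is posed as an attribute query plus a spatial selection, with the result rendered as a choropleth layer rather than a number.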
Each query produced a map for a specific age or race. Once the map was produced, the public libraries were added to the map and analysis was made regarding collection needs according to the neighborhood population surrounding the library. To decide if a public library should establish a consumer health collection, a map showing the breakdown of women in their 40s was built. By creating maps showing the demographic breakdown of the city, a public library can build, adjust, or verify its collection with respect to the neighborhoods it supports. Many studies have been conducted analyzing what people search for on the Internet and who is searching. Studies have found that women in their 40s search for and utilize health information from the Internet more than men. Using ArcGIS can assist in defining the exact make-up of the population that a library serves. By turning the census information into a graphic, the library is provided a chance to see how far its services may reach. Library systems that have branch libraries may have one collection development librarian who buys for each library. By using ArcGIS the central library can analyze the communities surrounding each library to purchase materials appropriately without having to spend time in each library learning the environment. Purchase decisions may now be justified with hard data from ArcGIS. For example, if ArcGIS shows that the largest population surrounding a library speaks a minority language, materials should be purchased in the dominant language to best meet the needs of the community. There is a fairly high learning curve to use ESRI ArcView GIS. There are five programs in ESRI ArcView and each program has separate features that must be used within the specific program to be imported into the primary map. Learning what each program does and then how to incorporate the data into the map requires hours of reading. 
Acquiring the necessary data to build a map requires background study of the geographic area in order to ensure that the map is configured correctly. Upon attaining all necessary data for the map, one must build queries to manipulate the data according to the information need, and then configure the map accordingly. While GIS presents a new technique to use in performing collection development, there was not a way to test the differences between a collection built using GIS information and one built with the current methods of collection development. In addition, research has discovered that knowing the service area radius of the library does not mean that people within the service area will actually use or belong to the neighborhood library. While GIS can take the census information and demonstrate the population surrounding a library, the census does not describe the specific users of the library. Budget constraints will add a need for justification of purchases for a library's collection. Using GIS may provide one method to justify additions to collections. As health care changes and individuals take more control of their health, libraries need to have materials that provide valid information in an age- and language-appropriate format and are culturally sensitive. Branch libraries respond to the demands of the neighborhood. GIS = geographical information systems"} {"text": "Two major identifiable sources of variation in data derived from the Serial Analysis of Gene Expression (SAGE) are within-library sampling variability and between-library heterogeneity within a group. Most published methods for identifying differential expression focus on just the sampling variability. When the number of groups involved is more than two, however, a more general approach is needed. 
In recent work, the problem of assessing differential expression between two groups of SAGE libraries has been addressed by introducing a beta-binomial hierarchical model that explicitly deals with both of the above sources of variation. This model leads to a test statistic analogous to a weighted two-sample t-test. We describe how logistic regression with overdispersion supplies this generalization, carrying with it the framework for incorporating other covariates into the model as a byproduct. This approach has the advantage that logistic regression routines are available in several common statistical packages. The described method provides an easily implemented tool for analyzing SAGE data that correctly handles multiple types of variation and allows for more flexible modelling. The Serial Analysis of Gene Expression (SAGE) methodology introduced by Velculescu et al. is a sequencing-based method for assaying gene expression. Briefly, mRNA transcripts are converted to cDNA and then processed so as to isolate a specific subsequence; starting from the poly-A tail, the subsequence is the 10 or 14 (long SAGE) bp immediately preceding the first occurrence of a cleavage site for a common restriction enzyme. Ideally, this subsequence, or \"tag\", is sufficiently specific to uniquely identify the mRNA from which it was derived. Tags are sampled, concatenated and sequenced, and a table consisting of the tag sequences and their frequency of occurrence is assembled. The complete table derived from a given biological sample is referred to as a SAGE \"library\". As most tags are sparse within the entire sample, most libraries contain numbers of tags in the tens of thousands to allow the expression levels to be estimated. 
Due to the current costs of sequencing, however, the total number of libraries assembled for a given experiment is typically small: often in the single digits and occasionally in the tens. While the type of information, gene expression, being investigated in a SAGE experiment is the same as that in a cDNA or oligonucleotide microarray experiment, there are some qualitative differences in the approaches. First, SAGE uses sequencing as opposed to competitive hybridization. Second, while the expression value reported for an array experiment is a measure of fluorescence and is loosely continuous, SAGE supplies data on gene expression in the form of counts, potentially allowing for a different type of \"quantitative\" comparison. Third, SAGE is an \"open\" technology in that it can provide information about all of the genes in the sample. Microarrays, by contrast, are \"closed\" in that we will only get information about the genes that have been printed on the array. Mathematically, the information pertaining to the abundance of a particular tag in a sample is summarized in two numbers: Y, the number of counts of that tag in the library, and n, the total number of tags in the library. In analyzing SAGE data across a series of libraries, interest typically centers on assessing how the underlying true level of gene expression is changing as we move from one library to the next. When surveyed across a series of libraries, the sufficient statistics containing all of the information about the change in expression for a single tag are the set of counts {Yi} and the set of library sizes {ni}, where the subscript i denotes the specific library. Unless otherwise specified, we will restrict our assessment of differential expression to the case of a single tag. This approach is common to all of the procedures described below. In a real analysis the chosen test is applied to all tags individually and a list of those tags showing differential expression is reported. 
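The per-tag workflow just described — apply a chosen test to each tag individually, then report the tags called significant — can be sketched as below, with a plain two-proportion z-test standing in for the chosen test (a placeholder for illustration, not one of the SAGE-specific tests discussed in this article):

```python
from math import sqrt, erfc

def two_prop_pvalue(y1, n1, y2, n2):
    """Two-sided normal-approximation test of equal proportions
    for one tag counted y1/n1 and y2/n2 times in two pooled groups."""
    p = (y1 + y2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error under H0
    if se == 0:
        return 1.0
    z = (y1 / n1 - y2 / n2) / se
    return erfc(abs(z) / sqrt(2))                # = 2 * (1 - Phi(|z|))

def screen_tags(counts, n1, n2, alpha=0.01):
    """counts: {tag: (y_group1, y_group2)}; return tags below alpha."""
    return [tag for tag, (y1, y2) in counts.items()
            if two_prop_pvalue(y1, n1, y2, n2) < alpha]
```

Because this placeholder ignores between-library heterogeneity, it is exactly the kind of test the overdispersion machinery below improves upon.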
Different tests will provide altered assessments of significance for individual tags, and hence the list provided will depend on the test employed. In most problems of interest, there is also covariate information Xi describing properties of library i. The most common case involves comparing two groups of libraries, such as cancer and control. In this case the information Xi simply defines which group library i belongs to. If there are more than two groups, Xi can have more levels or can even be vector valued, but as before interest centers on assessing how and whether the expected proportion changes with X. Much work has been done on the problem of comparing expression between two groups. Most of the approaches deal with this two-group setting. The t-test used to compare two groups of samples further assumes that the library sizes ni are the same. While approximate equality may suffice, even this assumption may be questionable for SAGE data, particularly if some of the libraries are drawn from experiments conducted at different times. The overdispersion approach of Williams should be more than adequate. Second, the presence of overdispersion means that pooling the samples underemphasizes the evidence of a small proportion being supplied by the zero variance of the observed proportions. While we could pursue a more optimal proportion, we choose in this case to simply use the simplistic bound noted above. Here, as the library sizes in the first group are 49610 and 48479, the proportion is 1/(49610 + 48479) and the faked counts are 0.506 = 49610/(49610 + 48479) and 0.494, respectively. 
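The zero-replacement arithmetic quoted above is easy to reproduce from the two library sizes given in the text:

```python
n1, n2 = 49610, 48479      # sizes of the two libraries in the all-zero group
p_min = 1 / (n1 + n2)      # smallest distinguishable proportion: 1/98089
fake1 = n1 / (n1 + n2)     # library-size-weighted "faked" count, ~0.506
fake2 = n2 / (n1 + n2)     # ~0.494; the two faked counts sum to exactly 1
```

Splitting the single pseudo-count in proportion to library size keeps the implied proportion at p_min in both libraries, so the replacement does not manufacture spurious between-library variability.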
Some reformatted results from this fit are shown in Table . The second run of the fitting procedure takes the overdispersion parameter as given, and fits the data after replacing the zero proportions in a group with the same small nonzero proportion, giving us a hopefully conservative estimate of the fold change. This type of replacement is commonly used, and is most often justified via the assumption of a vague prior distribution for the proportions, with the point estimate being derived as the posterior mean or mode. A common assumption for a prior in dealing with proportions is the uniform distribution; the posterior mean after 0 successes are observed out of n trials is then 1/(n + 2). The results for this fit are ridiculously "insignificant". The problem lies in the fact that the use of a t-value relies on the approximate normality of the likelihood function in the vicinity of the maximum, and this shape assumption breaks down severely if the number of counts in one group is small. Tests based on changes in the scaled deviance, corresponding to likelihood ratio tests, are better. The third run of the fitting procedure fits a simpler submodel, in this case a single proportion for all eight libraries, using the same overdispersion estimate so as to measure the change in deviance. The results of this fit are shown in Table . Here, we cannot conclude (given the level of overdispersion) that the difference is real. Note that the degrees of freedom used in the denominator is 5; this follows from the fact that only 6 libraries were used to estimate the overdispersion parameter, and one of those 6 degrees of freedom was needed to estimate the proportion. In general, when any of the groups has very small counts, checking the change in deviance is a good idea. Logistic regression with overdispersion addresses three issues with SAGE data: simultaneously modelling multiple types of variance, dealing with multiple groups at once, and allowing for the incorporation of covariates.
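The uniform-prior justification for the zero replacement mentioned above can be verified numerically. This Python sketch (generic, not the paper's code) integrates the posterior proportional to (1 - p)^n that results from observing 0 successes in n trials under a uniform prior, and confirms the posterior mean 1/(n + 2):

```python
# Under a uniform Beta(1, 1) prior, observing 0 successes in n trials
# gives a Beta(1, n + 1) posterior, whose mean is 1/(n + 2): a natural
# nonzero point estimate replacing an observed proportion of zero.
def posterior_mean(n, steps=200000):
    # midpoint-rule integration of the unnormalized posterior (1 - p)^n
    h = 1.0 / steps
    num = den = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        w = (1.0 - p) ** n
        num += p * w
        den += w
    return num / den

assert abs(posterior_mean(10) - 1.0 / 12.0) < 1e-4
assert abs(posterior_mean(100) - 1.0 / 102.0) < 1e-5
```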
This procedure is widely implemented in available software. Further, and most importantly, viewing SAGE data in the logistic regression setting supplies the framework for thinking of models that describe such data. The regression setting carries with it other benefits, such as a well-developed body of work regarding model checking, residual analysis, and detection of outliers. For example, the influence of any given library tag count on the overall analysis can be assessed, and methods can be made more robust by bounding these functions so that no single library drives the results. Dealing with multiple types of variance yields significance estimates we believe to be superior to those derived from pooled counts or from t-tests. There are some areas in which we can identify difficulties and see room for improvement. First, the model that we are using for the error may be improved. For SAGE data, the proportion associated with a specific tag is rarely on the order of a percent, so logit(pi) = log(pi/(1 - pi)) ≈ log(pi). This leads to a mixed model of the form logit(pi) = β0 + β1xi + εi, where "mixed" refers to the fact that we have both fixed effects of interest, the changes with the covariates, and random shocks whose variance needs to be estimated and allowed for (Williams ).
p.values <- 2 * pt(-abs(t.values), fit3$df.residual);
}
##########################################
# Next, we deal with three groups
##########################################
if(0){
# We begin by focusing on gains available when multiple groups are
# present, even if the other groups are not directly part of the
# contrast of interest, due to the additional information that the
# added groups can provide about the scale of the overdispersion.
# Here, we use the data from the tag TGCTGCCTGT, and this time we
# note that there are 3 groups of libraries: normals (libraries 1–2),
# primary tumors (libraries 3–4), and cell lines (libraries 5–8).
# If we are interested in the contrast between normals and primary
# tumors, we can fit this using only the data from those two groups,
# or using the data from all three.
# First, fit the model as if there were just two groups present.
y <- c(...);    # tag counts (numeric values elided in this copy)
n <- c(...);    # library sizes
x <- c(...);    # group indicator
fit1 <- glm(cbind(y, n - y) ~ x, family = binomial);
fit2 <- glm.binomial.disp(fit1);   # Williams' method, 'dispmod' package
# get the correct p-values
fit2.t.values <- summary(fit2)$coefficients;
fit2.p.values <- 2 * pt(-abs(fit2.t.values), fit2$df.residual);
# Next, fit the model assuming that there are three groups. In this
# case, we cannot use a single covariate vector x, as this is not
# suited to indicating 3 or more groups in an unordered fashion.
# In general, if we have k groups, we need to use k-1 covariate
# vectors. Here, we use
# x1 <- c(...);
# x2 <- c(...);
# The set of all 0s corresponds to the first group, here the normals,
# and the other groups are defined by which one of the other
# covariates is nonzero: Group 2 (primaries), Group 3 (cell lines).
y <- c(...);
n <- c(...);
x1 <- c(...);
x2 <- c(...);
fit3 <- glm(cbind(y, n - y) ~ x1 + x2, family = binomial);
fit4 <- glm.binomial.disp(fit3);
# get the correct p-values
fit4.t.values <- summary(fit4)$coefficients;
fit4.p.values <- 2 * pt(-abs(fit4.t.values), fit4$df.residual);
# The above approach has fit the model with all of the covariates
# available, but in order to perform an analysis of deviance we want
# to fit various submodels using the same estimate of overdispersion
# as found here. In this case, there are 3 submodels:
fit5 <- glm(cbind(y, n - y) ~ x1, family = binomial, weights = fit4$disp.weights);
fit6 <- glm(cbind(y, n - y) ~ x2, family = binomial, weights = fit4$disp.weights);
fit7 <- glm(cbind(y, n - y) ~ 1, family = binomial, weights = fit4$disp.weights);
# alternatively, the anova function can be used, but this only
# considers the submodels obtained by adding terms sequentially.
# Thus, we get the deviances for beta_0 (the null model),
# beta_0 + beta_1 (adding the x1 covariate only), and
# beta_0 + beta_1 + beta_2;
}
##########################################
# Next, we deal with the case of other
# covariates, possibly continuous.
##########################################
if(0){
# Here, we are using the counts from the GCGAAACCCT tag, but we are
# treating the 8 libraries as coming from tissue type 1 (libraries 1–4)
# and tissue type 2 (libraries 5–8), with normal tissue of both types
# and primary tumor of both types. In this hypothetical example, we
# are able to partition the changes into effects associated with
# normal/primary differences (x1) or tissue 1/tissue 2 differences (x2).
y <- c(...);
n <- c(...);
x1 <- c(...);
x2 <- c(...);
fit1 <- glm(cbind(y, n - y) ~ x1 + x2, family = binomial);
fit2 <- glm.binomial.disp(fit1);
# get the correct p-values
fit2.t.values <- summary(fit2)$coefficients;
fit2.p.values <- 2 * pt(-abs(fit2.t.values), fit2$df.residual);
# Next, again using the tag as above, we posit that we also have
# access to the levels of a biomarker potentially predictive of
# survival, supplied as the levels of another covariate x3. The
# values supplied here were generated as random draws from a
# uniform distribution.
x3 <- c(...);
fit3 <- glm(cbind(y, n - y) ~ x1 + x2 + x3, family = binomial);
fit4 <- glm.binomial.disp(fit3);
# get the correct p-values
fit4.t.values <- summary(fit4)$coefficients;
fit4.p.values <- 2 * pt(-abs(fit4.t.values), fit4$df.residual);
}
KAB, LD and JSM developed the main ideas and the methodology; LD did most of the coding. CMA supplied SAGE data and provided practical feedback on aspects of earlier approaches found to be wanting, thus guiding further development."} {"text": "Serial Analysis of Gene Expression (SAGE) produces gene expression measurements on a discrete scale, due to the finite number of molecules in the sample.
This means that part of the variance in SAGE data should be understood as the sampling error in a binomial or Poisson distribution, whereas other variance sources, in particular biological variance, should be modeled using a continuous distribution function, i.e. a prior on the intensity of the Poisson distribution. One challenge is that such a model predicts a large number of genes with zero counts, which cannot be observed.We present a hierarchical Poisson model with a gamma prior and three different algorithms for estimating the parameters in the model. It turns out that the rate parameter in the gamma distribution can be estimated on the basis of a single SAGE library, whereas the estimate of the shape parameter becomes unstable. This means that the number of zero counts cannot be estimated reliably. When a bivariate model is applied to two SAGE libraries, however, the number of predicted zero counts becomes more stable and in approximate agreement with the number of transcripts observed across a large number of experiments. In all the libraries we analyzed there was a small population of very highly expressed tags, typically 1% of the tags, that could not be accounted for by the model. To handle those tags we chose to augment our model with a non-parametric component. We also show some results based on a log-normal distribution instead of the gamma distribution.By modeling SAGE data with a hierarchical Poisson model it is possible to separate the sampling variance from the variance in gene expression. If expression levels are reported at the gene level rather than at the tag level, genes mapped to multiple tags must be kept separate, since their expression levels show a different statistical behavior. A log-normal prior provided a better fit to our data than the gamma prior, but except for a small subpopulation of tags with very high counts, the two priors are similar. 
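The variance decomposition underlying this model can be illustrated by simulation. The Python sketch below (standard library only; the parameter values are arbitrary illustrations, not estimates from the paper) draws intensities from a Gamma(α, rate β) prior and counts from a Poisson, and checks that the marginal variance α/β + α/β² exceeds the pure Poisson variance α/β by the biological-variance term:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for small intensities
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

rng = random.Random(0)
alpha, beta = 2.0, 1.0                        # gamma shape and rate (arbitrary)
ys = []
for _ in range(200000):
    lam = rng.gammavariate(alpha, 1.0 / beta)  # biological variation in expression
    ys.append(poisson(lam, rng))               # sampling (sequencing) variation

mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
# marginal mean alpha/beta = 2; marginal variance alpha/beta + alpha/beta**2 = 4
assert abs(mean - 2.0) < 0.05
assert abs(var - 4.0) < 0.3
```

The excess of the variance over the mean is exactly the overdispersion that a plain Poisson model cannot capture.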
In Serial Analysis of Gene Expression (SAGE), mRNA is extracted from a tissue sample and converted to cDNA, from which oligonucleotides at specific locations in the cDNA fragments are extracted and amplified using PCR. Those tags are either ten or seventeen bases long, depending on the experimental protocol. Sequencing the PCR product, it is possible to establish the number of copies of each tag extracted. The count of a tag that constitutes a fraction p of a library of size nt can be regarded as Poisson distributed with intensity λt = p·nt; the exact magnitude of p is unknown but is certainly much smaller than 1. Some tags with small λ-values will just happen not to be counted, and those cannot be distinguished from tags that do not exist at all or are never transcribed. The problem of estimating the total number of expressible tags (the size of the transcriptome) was studied by Stern . The model is

Yt ~ Poisson(λt), λt ~ f(λ; θ)     (1)

where Yt is the observed count for tag t, λt is the "true" expression level of tag t and θ is some parameter in the model. For the prior f we tried a number of candidates. The gamma prior turned out to provide a good fit to the distributions of the tag counts for counts lower than a certain threshold, typically the 98th or 99th percentile. Attempts to model the tag counts above that threshold with a second gamma-distributed component failed, not surprisingly since the number of tags in that range was too small to support meaningful estimates. When λt ~ Gamma(α, β), where α and β are known, the posterior distribution of λt given yt is distributed as Gamma(α + yt, β + 1). This is convenient because the posterior distribution of λt represents our knowledge of the true gene expression after the SAGE count has been observed. Also, since 1/β is a scale parameter in the Gamma distribution, libraries of different size can be compared. Other things being equal, we expect the estimated value of β to be inversely proportional to the library size. For the purpose of this paper we choose the gamma distribution, whose parameters were estimated with an empirical Bayesian approach .
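The conjugate update behind this choice (a Gamma(α, β) prior combined with a Poisson likelihood yields a Gamma(α + y, β + 1) posterior) can be verified numerically. This is a generic check in Python, not code from the paper; the parameter values are arbitrary:

```python
import math

# Posterior mean check: with lambda ~ Gamma(alpha, rate beta) and
# y | lambda ~ Poisson(lambda), the posterior is Gamma(alpha + y, beta + 1),
# so the posterior mean is (alpha + y) / (beta + 1).
def posterior_mean(alpha, beta, y, steps=200000, upper=60.0):
    h = upper / steps
    num = den = 0.0
    for i in range(steps):
        lam = (i + 0.5) * h
        # prior density times Poisson likelihood, up to constants
        w = lam ** (alpha - 1.0) * math.exp(-beta * lam) \
            * lam ** y * math.exp(-lam)
        num += lam * w
        den += w
    return num / den

alpha, beta, y = 1.5, 0.8, 7      # illustrative values only
assert abs(posterior_mean(alpha, beta, y) - (alpha + y) / (beta + 1.0)) < 1e-3
```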
The choY becomes a negative binomial distribution:The marginal distribution of and in particularSince the zero counts are not recorded, the counts of the recorded tags follow a zero-truncated negative binomial distributionα < 1 heuristic methods do not work and ML estimation should be used instead.The zero-truncated negative binomial distribution has been studied by several authors, mainly for modeling group sizes. See Johnson for an oλt given Yt, one can construct a variance-stabilizing transform for a SAGE library. Also, in order to assess the sensitivity of the SAGE technology with respect to genes with low expression levels, one needs to know the distribution of λ. The idea of applying Poisson models to SAGE data is not new. Cai ) bλ given a count above the threshold. Fortunately, this is not so critical because for a high count the posterior mean will be close to the observed count.As a consequence of this choice, the model does not provide a posterior distribution of λ ~ Gamma and correlation coefficient R. If the total count for the gene is Poisson distributed with intensity λ* ~ Gamma, we haveAnother issue relates to genes mapped to multiple tags. It is reasonable to assume some correlation between two tags representing the same gene. In the human transcriptome map, the counts for those genes were reported as the sum of the tag counts. Suppose a gene is represented by two tags, the count of both being Poisson distributed with intensity and thusα for genes mapped to two or three tags are proportional to the estimated values of α for the genes mapped to a single tag. This suggests that the tag counts have the same distribution, whether they share the gene with other tags or not. The proportionality constant of 2.2 for genes mapped to two tags and 3.5 for genes mapped to three tags correspond to a correlation coefficient of approximately -0.05. 
This is a surprising result, since we found positive correlation between tags mapping to the same gene in the data used by Cai . This shows that if the counts per gene, rather than the counts per tag, are reported in a data set, genes with different numbers of representing tags should be kept separate. By modeling two SAGE libraries with a bivariate truncated negative binomial model, it was possible to achieve a more stable estimate of α. More important, the bivariate model has a useful interpretation: the shared gamma process Z is the main effect (gene effect) while the independent gamma process X is the interaction effect. A further generalization of the bivariate model will be to incorporate multiple interaction effects in a multivariate model, for example a third gamma process related to treatment groups. We have made the distinction between truncated models (only positive counts considered) and untruncated models (the set of expressible tags assumed to be known). It could be argued, however, that even when zero counts are reported, the untruncated models should allow for an unknown number of non-recorded tags. Such models are called zero-deflated Poisson models. In the analysis of the HTM data, we have ignored the issue of sequencing errors. SAGE data appear, when sequencing errors are handled properly, to follow a Poisson mixture with a log-normal prior on the Poisson parameter. The gamma prior provides a good approximation for low counts . Using a bivariate gamma-Poisson model, the transcriptome size can be estimated from the data; alternatively, the list of expressible tags from the Human Transcriptome Map can be used.
Whether one prefers the mathematically convenient gamma prior or a log-normal prior traditionally applied to microarray data, and whether one prefers a parametric or non-parametric model for the high-expressed tags, we believe that the Poisson model is useful for analyzing SAGE data because it separates the sample variance in the Poisson process from the biological variance. Related approaches have been taken by Vencio and Cai . We used 72 libraries from the Human Transcriptome Map (HTM) , and we also used two of the short-tag libraries from SAGE-genie . We augmented this model with a non-parametric component for the high-expressed genes in the same way as the untruncated model described above. We maximized the log-likelihood using a quasi-Newton-Raphson method (S-PLUS; see Venables ), and made use of the S-PLUS option of computing the gradient and Hessian using the double dogleg step (Venables ). For the threshold k for the non-parametric component, the value found by the method of moments (described above) was used. For the prior distribution of the parameters, we used α ~ exp(1) and β ~ gamma. In order to quantify the uncertainty on the parameter estimates, we also computed the Bayesian a posteriori distribution of the parameters in the truncated model with non-parametric component using a Markov Chain Monte Carlo (MCMC) algorithm. The MCMC simulations were carried out with WinBugs . To model the expression levels (λ) in two libraries (1 and 2), we assumed the trivariate reduction model:

(λ1, λ2) = (μ1 + τ, μ2 + τ), μ1, μ2 ~ Gamma, τ ~ Gamma     (13)

X1 ~ negbinom, X2 ~ negbinom     (15)

The correlation between λ1 and λ2 is VAR(τ)/VAR(λ.), where

VAR(λ.) = VAR(μ.) + VAR(τ)     (20)

and the variances of μ1, μ2 and τ are derived from the gamma distribution. For each pair of libraries from the 19 largest libraries from the Human Transcriptome Map, we estimated the parameters in this model.
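The shared-component construction in Eq. (13) can be checked by simulation. In this Python sketch (illustrative shapes a and r with a common rate b; not the authors' S-PLUS code), λi = μi + τ with independent gamma components, and the empirical correlation matches VAR(τ)/(VAR(μ) + VAR(τ)) = r/(a + r):

```python
import random

rng = random.Random(1)
a, r, b = 2.0, 1.0, 1.0              # shapes of mu and tau, common rate
l1, l2 = [], []
for _ in range(100000):
    tau = rng.gammavariate(r, 1.0 / b)            # shared (gene) effect
    l1.append(rng.gammavariate(a, 1.0 / b) + tau)  # library 1 intensity
    l2.append(rng.gammavariate(a, 1.0 / b) + tau)  # library 2 intensity

n = len(l1)
m1, m2 = sum(l1) / n, sum(l2) / n
cov = sum((x - m1) * (y - m2) for x, y in zip(l1, l2)) / n
v1 = sum((x - m1) ** 2 for x in l1) / n
v2 = sum((y - m2) ** 2 for y in l2) / n
corr = cov / (v1 * v2) ** 0.5
assert abs(corr - r / (a + r)) < 0.02   # expected correlation 1/3
```

Because the gamma family is closed under addition at a fixed rate, each λi is still gamma distributed, which is what makes the marginal counts negative binomial as in Eq. (15).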
Anticipating that the model would not fit the frequencies of counts above the threshold found in the univariate model, which varied between 9 and 22, we restricted the analysis to tags with counts below 15 in both libraries. We also fitted an augmented model in which X1 and X2 were allowed to have different values of α. In that model, the marginal distributions of Y1 and Y2 are negative binomial with parameters and , respectively. Note that it is not expected that α1 is characteristic for library 1 when library 1 is modeled together with different libraries: if libraries 1 and 2 show a high degree of correlation, ρ will be larger at the expense of α1 and α2. Raw SAGE data contain a high number of tags with a count of one, many of which, presumably, are artifacts such as sequencing errors. Beissbarth modeled the rate of such sequencing errors . HTM: Human Transcriptome Map; SAGE: Serial Analysis of Gene Expression; MCMC: Markov Chain Monte Carlo. Both authors were involved in the development of the models. HT implemented the models and performed the computations. Both authors contributed to the manuscript."} {"text": "Five species of the genus Schistosoma, a parasitic trematode flatworm, are causative agents of schistosomiasis, a disease that is endemic in a large number of developing countries, affecting millions of patients around the world. By using SAGE we describe here the first large-scale quantitative analysis of the Schistosoma mansoni transcriptome, one of the most epidemiologically relevant species of this genus. After extracting mRNA from pooled male and female adult worms, a SAGE library was constructed and sequenced, generating 68,238 tags that covered more than 6,000 genes expressed in this developmental stage. An analysis of the ordered tag list shows the genes of the F10 eggshell protein, a pol-polyprotein, HSP86, 14-3-3 and a transcript yet to be identified to be the five most abundant genes in pooled adult worms.
Whereas only 8% of the 100 most abundant tags found in adult worms of S. mansoni could not be assigned to transcripts of this parasite, 46.9% of the total distinct tags could not be mapped, demonstrating that the 3' sequence of most of the rarest transcripts is still to be identified. Mapping of our SAGE tags to S. mansoni genes suggested the occurrence of alternative polyadenylation in at least 13 gene transcripts. Most of these events seem to shorten the 3' UTR of the mRNAs, which may have consequences for their stability and regulation. SAGE revealed the frequency of expression of the majority of the S. mansoni genes. The transcriptome data suggest that alternative polyadenylation is likely to be used in the control of mRNA stability in this organism. When the transcriptome was compared with the proteomic data available, we observed a correlation of about 50%, suggesting that both transcriptional and post-transcriptional regulation are important for determining protein abundance in S. mansoni. The generation of SAGE tags from other life-cycle stages should contribute to revealing the dynamics of gene expression in this important parasite. Quantitative and qualitative transcriptome analyses reveal some of the most important biological aspects of an organism. Transcriptome examination is crucial for the understanding of significant biological processes, allowing the study of transcription/translation relationships, the dynamics of gene expression and, an important feature in parasites, a quantitative evaluation of the expression of genes that are potential targets for drugs or vaccines across diverse life-cycle or developmental stages. Large-scale transcriptome analysis of S. mansoni has been mainly performed by the partial sequencing of cDNA clones derived from libraries prepared with RNA from diverse life-cycle stages of the parasite .
However, Serial Analysis of Gene Expression is one such approach, and has been applied to organisms such as Rattus norvegicus , Saccharomyces cerevisiae , Homo sapiens , Mus musculus , Caenorhabditis elegans , Drosophila melanogaster , Cryptococcus neoformans and many others, including Plasmodium falciparum , Giardia lamblia and Toxoplasma gondii . Here we apply it to Schistosoma mansoni. Pooled adult worms from the BH isolate of S. mansoni were maintained in the laboratory by routine passage through mice and snails and recovered from the porto-mesenteric system by perfusion, after 7 to 8 weeks of infection. Worms were washed in saline solution and stored at -20°C in RNAlater (Ambion) prior to mRNA extraction. Poly-A mRNA was isolated with a MACS kit , eluted in 200 μL of DEPC-treated water and treated twice with Promega RQ1 RNAse-free DNAse (1 U/10 μL) for 30 min at 37°C. DNAse was inactivated at 65°C for 10 min. mRNA purity and integrity were checked by RT-PCR using appropriate primer pairs of known genes and also negative controls, as described in Verjovski-Almeida et al. . Sequences from cloning vectors were trimmed and tags were extracted from high-quality segments using Phred . A second list, containing putative SAGE tags of S. mansoni genes, was generated in silico after mapping the NlaIII restriction sites (CATG) to the complete set of full-length cDNA sequences from S. mansoni available from GenBank, from the TIGR tentative consensus and the complete set of clusters and singlets generated by our group as part of the S. mansoni transcriptome project. The sequence adjacent to each NlaIII restriction site in the transcripts dataset was extracted, thus generating a list of putative S. mansoni tags. These tags were annotated according to the information available for the transcripts from which they were derived. Top priority annotation was given to full-length genes, followed by TIGR consensus and our S.
mansoni transcriptome project . Full-length S. mansoni transcripts were also screened for putative alternative poly-adenylation sites using SAGE data. For this purpose, the list containing all putative SAGE tags (adjacent to NlaIII sites) from S. mansoni full-length genes available in GenBank was cross-referenced with the tag list, and the putative tags were ranked according to their position in relation to the 3' end. The most 3' tags, which are more likely to be bona fide tags for the canonical transcripts, were ranked as zero, and the remaining tags were organized in ascending order from 3' to 5'. Tags that have rank > 0, a number of counts > 1, and were not followed by a putative site of internal binding of an oligo-dT primer (at least 8 adenines in a window of 10 bases) were considered as indicative of alternative poly-adenylation events. Evaluation of the positional distribution of SAGE tags and ESTs over S. mansoni full-length cDNAs was carried out over a set of 208 genes that were tagged by at least two SAGE tags. Blast analyses showed 26,888 ESTs and 9,589 SAGE tags mapping to these genes, allowing the identification of gene regions covered by these sequences. The mapped coordinates were normalized in terms of relative position of the EST over the mRNA, and relative coverage over all genes was calculated. This positional distribution was plotted together with the distribution of the SAGE tags over the same gene set, where 0% and 100% are equivalent to the 5' and 3' positions of the mRNAs, respectively. Functional classification of S. mansoni transcripts was undertaken using the Gene Ontology database. For this, blast analyses of the genes mapped by our SAGE tags were performed against 2,413,334 protein sequences available from the Gene Ontology database (02/2007). All ontologies associated to the first hit matched by the query sequence were recovered, and it was then assumed that the S. mansoni gene would have the same functional annotation.
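The in-silico tag extraction and 3'-ranking described above can be sketched in a few lines of Python. The toy transcript and the 10-base tag length are hypothetical illustrations of the standard short-tag protocol; this is not the project's pipeline code:

```python
def putative_tags(transcript):
    # For each NlaIII site (CATG), take the adjacent 10 bases as a
    # putative tag; order from the 3' end, so that index 0 is the
    # most 3' tag (rank 0, expected for the canonical transcript).
    found = []
    pos = transcript.find("CATG")
    while pos != -1:
        tag = transcript[pos + 4: pos + 14]
        if len(tag) == 10:
            found.append((pos, tag))
        pos = transcript.find("CATG", pos + 1)
    found.sort(key=lambda item: -item[0])   # most 3' site first
    return [tag for _, tag in found]

# hypothetical transcript with two NlaIII sites
seq = "AAACATGTTTTTTTTTTGGGCATGCCCCCCCCCCAA"
ranked = putative_tags(seq)
assert ranked[0] == "CCCCCCCCCC"   # rank 0: most 3' tag
assert ranked[1] == "TTTTTTTTTT"   # rank 1: candidate alternative site
```

A rank > 0 tag observed more than once, and not explained by an internal oligo-dT priming site, is then treated as a candidate alternative poly-adenylation event, as described in the text.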
Evaluations of function were performed for 3 different classes of abundance: abundant (represented by more than 500 tags), intermediate (499 to 100 tags) and less abundant (lower than 100 tags). After sequencing and evaluating 5,626 clones of the SAGE library, 4,752 reads (84%) containing 998,200 nucleotides were accepted with the quality criteria adopted. The need for further sequencing was determined by evaluating the frequency of tags that appeared at least twice as a function of total tags sequenced. This curve reached a plateau close to 60,000 tags and suggested coverage of the majority of genes expressed in this developmental stage of S. mansoni adult worms. The most informative tags are those that appeared at least twice (less likely to contain sequencing artifacts) in the final tag list. These comprised a total of 6,263 distinct tags, which should approximate the total number of genes expressed in this developmental stage. In fact, 2,886 of these tags found matches in the Schistosome gene index or in the list of S. mansoni transcripts identified by Verjovski-Almeida et al. . The most frequent transcript encodes the F10 eggshell protein, followed by a pol-polyprotein transcript, heat shock protein 86 and a 14-3-3 protein homolog (see Table ). When only tags that appeared at least twice were considered, 3,347 (53.4%) matched S. mansoni gene fragments (contigs and singlets). A complete list of all tags, together with their frequency, tag sequences, and respective gene assignments can be found in the supplementary table that accompanies this paper . Tag-to-gene assignment gave priority to full-length sequences, followed by TIGR consensus sequences and a clustering of the sequences produced by the project . In order to evaluate the functional categories most abundantly represented in the transcriptome of S. mansoni, blast analyses were performed against 2,413,334 protein sequences available from the Gene Ontology database (Feb/2007).
Genes mapped by more than 3 SAGE tags were used as queries. All ontologies associated to the first hit matched by the query sequences were recovered and their functional annotations were given to the respective schistosome gene. In this process, ontologies were assigned to 2,933 genes. Functional classification was then investigated for transcripts distributed in expression classes, according to their tag abundance. We considered as abundant those functional categories containing genes with more than 500 tags, followed by the intermediate (499 to 100 tags) and less abundant classes (lower than 100 tags). This allowed us to describe the most abundant functional classes among the highly expressed, intermediate and lower expressed genes. It can be observed in Figure that, for some S. mansoni transcripts, two or more distinct SAGE tags have been sequenced. These distinct tags were used to investigate alternative poly-adenylation events that may occur in these transcripts. After the analysis of these events, using the criteria described in Materials and Methods, at least 16 alternative poly-adenylation events could be identified in 13 full-length S. mansoni mRNAs. Among the full-length mRNAs examined, no tag from our list could be found in 35 (9.6%) of them; an extrapolation of this would suggest that the frequency of expression of 90% of the S. mansoni genes expressed in adults could be evaluated by the SAGE approach employed here. The SAGE technique involves generation and sequencing of large numbers of short tags, defined by the occurrence of a recognition site for a type II restriction enzyme in the mRNA . Ideally, each tag would identify a single transcript. On the other hand, when 8,669 S. mansoni Unigene cluster sequences were evaluated, we observed that 2,193 clusters contained ESTs derived from adult worms. Only 169 of these clusters contained full-length sequences.
When tags (rank 0 and rank 1) of these 169 clusters were considered, we observed that 132 (78%) were represented in our SAGE tag list. So, this alternative estimate shows that the coverage of our SAGE tags was about 78% of the genes expressed in adult worms. We also noted that 39 UniGene clusters, with no adult-worm derived ESTs in the cluster composition, had their expression confirmed in this stage by our SAGE data. This also points to a reduced overlap of the SAGE and available EST data, which will result in a poor coverage of low expressed genes by non-normalized 3' UTR ESTs and in the failure of SAGE-to-transcript assignment, and to the necessity of generating more S. mansoni ESTs from the 3' end of the transcripts, for a more complete knowledge of the schistosome transcriptome. To establish how the transcriptomes derived from SAGE and ESTs can be compared to each other, we evaluated the relative distribution of SAGE and EST sequences over a set of 208 worm full-length mRNA sequences available in GenBank. The 208 full-length transcripts are covered by 26,888 ESTs and 9,589 SAGE tags. As expected, 42% of the SAGE tags that map to the set of 208 full-length genes are positioned in the last 20% of the transcripts. On the other hand, only 17% of the ESTs mapped to these genes cover this same 3' portion of the transcripts; this data clearly demonstrates that more 3' sequences from normalized cDNA libraries are required for deciphering the transcriptome of this parasite. Indeed, from the total of 6,263 tags with frequency higher than one, 2,916 (46.6%) found no matches on the transcript databases used. As expected, this failure in finding the correspondent gene for a specific tag was found to be directly related to the low expression of the corresponding transcript, and its reduced coverage by ESTs. In fact this can be used as an indirect measurement of correlation of SAGE and EST coverage.
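The positional comparison above (42% of SAGE tags versus 17% of ESTs falling in the final 20% of the transcript) amounts to mapping each sequence to a relative coordinate and counting. A minimal Python sketch, with hypothetical positions and lengths:

```python
# Map each mapped position to a relative coordinate (0.0 = 5' end,
# 1.0 = 3' end) and measure the fraction falling in the last 20%
# of the transcript. Positions and the length below are hypothetical.
def fraction_in_last_20pct(positions, transcript_length):
    rel = [p / transcript_length for p in positions]
    return sum(1 for r in rel if r >= 0.8) / len(rel)

tag_positions = [950, 980, 700, 990]   # 3'-biased, as SAGE tags are
assert abs(fraction_in_last_20pct(tag_positions, 1000) - 0.75) < 1e-12
```

Averaging this fraction over a panel of genes gives the kind of positional-coverage profile reported for the 208 full-length transcripts.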
Whereas 96% of the 50 most frequent tags, or 92% of the top 100 tags, could be identified in a transcript, only 53% of all distinct tags, or only 40% of all 15,655 tags, could be assigned to their correspondent genes. While the same tag can be mapped to many transcripts (indicating conservation of a nucleotide motif), we also see that a single transcript might sometimes generate various different tags. This parallels what happens in proteomic studies, where the same protein sometimes generates different spots in a gel. The occurrence of multiple tags deriving from the same transcript could be caused by methodological problems or by biological features, such as splicing variants in the transcript region containing the most 3' tag, or the use of multiple poly-adenylation sites. Whereas the use of SAGE tags to evaluate alternative splicing is more difficult, the occurrence of alternative poly-adenylation events could be evaluated with fewer assumptions. In order to reduce the impact of methodological aspects over the determination of alternative poly-adenylation events, we have not considered tags sequenced only once, ambiguous tags (those that could be mapped to different transcripts) or internal tags that appeared before long stretches of A's in the transcript, which could have been used as false polyA tails during the cDNA synthesis step. The occurrence of alternative poly-adenylation in S. mansoni suggests that this parasite employs this mechanism for regulating mRNA stability. We should note that the occurrence of partial digestion with NlaIII seems to be rare here, as in our list of 15,655 distinct tags, not a single CATG (the restriction site for NlaIII) could be found. After using the above described filters, consistent events of multiple tags in a single transcript were identified in 13 full-length genes.
Alternative poly-adenylation events cause a reduction in transcript size, removing portions of the 3' region, including the most 3' restriction site of the enzyme used for constructing the SAGE library. The reduction of the 3' UTR observed here, caused by alternative poly-adenylation, was usually accompanied by the removal of a significant portion of the putative repertoire of AREs (Adenosine- and Uridine-Rich Elements) in the transcript. A proteomic analysis of S. mansoni (Curwen et al.) became available recently. mRNA analysis by SAGE enabled the evaluation of genes coding for proteins whose physical-chemical properties impaired their analysis by 2D gel electrophoresis, and of transcripts that could not be evaluated by proteomic analysis. An example is the determination of the transcript abundance of priority vaccine candidates of the World Health Organization, P19 (202nd, with 42 tags) and P48 (356th, with 26 tags), which supports their importance in the early stages of eggshell formation. We should observe that no tags could be identified for egg-secreted proteins (such as ESP3-6 and ESP15), suggesting their expression only in later stages of eggshell development. The high expression of actin and myosin (heavy and light chains) was also observed, with their respective genes and gene-paralogs identified among the top 100 transcripts, reflecting the musculature as one of the major worm tissues. Among the 50 top transcripts we observed, as expected, the high abundance of 12 ribosomal-protein genes, as well as genes encoding proteins involved in protein and carbohydrate metabolism. It is also interesting to note the high abundance of the gene coding for a protein similar to thymosin beta (17th most abundant transcript in adult worms), especially due to its involvement in wound healing. The most abundant tag identified here is 'ACTATTCGGG', a sequence tag that matches diverse isoforms of the gene encoding SmP14, of the F10 eggshell protein family.
The frequency of this tag strongly suggests that this is the most abundant mRNA species found in adult worms. This abundance is highly significant, especially if we consider the larger biomass of male worms as well as the male bias found in the sex ratio of infections. One of the most notable strengths of the SAGE method is that results from any new experiments are directly comparable to existing databases. SAGE data represent absolute expression levels, based on the digital enumeration of transcript tags in the total transcriptome. This allows the expression level of any gene to be compared with that of any other gene, among many libraries of different sources and sizes. The author(s) declare that they have no competing interests. EPBO constructed the SAGE library presented here; RDM was responsible for RNA extraction and PSLO coordinated the bioinformatics analysis of SAGE data. EPBO, AP, RDM, SPG, KAA, CFMM, LCCL, SVA and EDN participated in the sequencing of the library and in the analysis and interpretation of the data; EPBO, PSLO, AP, RDM, DNN, SVA and EDN performed bioinformatics analysis. EPBO, PSLO, SVA and EDN conceived the study and participated in its design and coordination. All authors contributed to the writing of this manuscript and approved its final form. Analysis of positional distribution of ESTs and SAGE tags for a set of 208 full-length S. mansoni genes: the positional distribution of all ESTs available in GenBank, as well as all SAGE tags from our study, was evaluated over a panel of 208 full-length S. mansoni genes. Only 17% of the ESTs mapped to the 208 full-length transcripts cover the final 20% of the transcripts, while 42% of the generated SAGE tags cover this same region. This reduced overlap of SAGE and ESTs suggests the necessity of generating more S.
mansoni ESTs, especially from the 3' end of the transcripts, for a better knowledge of the schistosome transcriptome. Complete list of Schistosoma mansoni SAGE tags: contains all 15,655 distinct SAGE tags sequenced, together with their frequency (tag count), the accession numbers of the corresponding genes, their relative position on the mRNA, their tag rank and the annotation of the respective gene. Functional classification of the intermediate and less abundant schistosome transcripts based on Gene Ontology analysis; functional classification of the most abundant S. mansoni transcripts based on Gene Ontology analysis. Protein and gene rank: comparison of the top 10 proteins ranked by proteome analysis with the expression rank obtained by SAGE.

SAGE has been used widely to study the expression of known transcripts, but much less to annotate new transcribed regions. LongSAGE produces tags that are sufficiently long to be reliably mapped to a whole-genome sequence. Here we used this property to study the position of human LongSAGE tags obtained from all public libraries. We focused mainly on tags that do not map to known transcripts. Using a published error rate in SAGE libraries, we first removed the tags likely to result from sequencing errors. We then observed that an unexpectedly large number of the remaining tags still did not match the genome sequence. Some of these correspond to parts of human mRNAs, such as polyA tails, junctions between two exons and polymorphic regions of transcripts. Another non-negligible proportion can be attributed to contamination by murine transcripts and to residual sequencing errors. After filtering our data with these screens to ensure that our dataset is highly reliable, we studied the tags that map once to the genome. 31% of these tags correspond to unannotated transcripts.
The others map to known transcribed regions, but many of them are located either in antisense or in new variants of these known transcripts. We performed a comprehensive study of all publicly available human LongSAGE tags, and carefully verified the reliability of these data. We found the potential origin of many tags that did not match the human genome sequence. The properties of the remaining tags imply that the level of sequencing error may have been under-estimated. The frequency of tags matching the genome sequence once but not in an annotated exon suggests that the human transcriptome is much more complex than shown by the current human genome annotations, with many new splicing variants and antisense transcripts. SAGE data are appropriate for mapping new transcripts to the genome, as demonstrated by the high rate of cross-validation of the corresponding tags using other methods. Serial Analysis of Gene Expression (SAGE) is a widely used method for transcriptome analysis. For a 14 bp SAGE tag, the probability of a spurious match to the genome is p = 1 - (1 - (1/4)^L)^N, where L = 14 is the tag length and N = 3,272,204,263 represents the sum of the lengths of the mitochondrial and nuclear genomes; therefore p ≈ 0.99. For a LongSAGE tag of 21 bp, this probability of a spurious match is much smaller (p = 0.000744). Therefore LongSAGE tags are much more specific than 14 bp SAGE tags, even if the specificity of LongSAGE tags is not as high as these theoretical calculations suggest, as nucleotides are not randomly and equally distributed along the genome sequence, and the genome contains many repetitive sequences. The SAGE method consists of sequencing small tags derived from the 3' ends of mRNAs. A crucial step in SAGE analysis is tag identification. A systematic annotation of new transcripts by mapping a library containing 28,000 of these LongSAGE tags to the human genome sequence revealed 15,000 exons that are not currently described, at least half of which belong to novel genes.
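The spurious-match probabilities quoted above can be checked numerically. A minimal sketch, assuming a uniform, independent base composition (the text itself notes that real genomes violate this assumption):

```python
# Probability that a random tag of length L matches by chance somewhere in a
# genome of total length N (N from the text: nuclear + mitochondrial genomes).
def p_spurious(L, N=3_272_204_263):
    """Chance that an L bp tag has at least one spurious match in N bases."""
    return 1 - (1 - 0.25 ** L) ** N

p14 = p_spurious(14)  # close to 1 (the text reports ~0.99) for classic SAGE tags
p21 = p_spurious(21)  # ~0.000744 for 21 bp LongSAGE tags
```

This makes explicit why 21 bp LongSAGE tags, unlike 14 bp tags, can be mapped directly to the genome.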
We made a comprehensive study of all tags from all publicly available human LongSAGE libraries deposited in the public Gene Expression Omnibus databank. Most of the studies using 14 bp SAGE tags have focused on the expression of known genes. By contrast, here we concentrated on the tags that have not been generated by known transcripts. Because the main difficulty in estimating the amount of transcription in the human genome seems to be the false positive rate of detection, we propose to study this question across the whole human genome using an independent method. We exploited the advantages of the LongSAGE method to study the transcriptome without a priori knowing the transcribed sequences. We used all the tags available in the public human LongSAGE libraries of the Gene Expression Omnibus database. To be able to predict with sufficient confidence which regions of the genome have generated these SAGE tags, we selected a reliable set of tags from this total dataset. For this purpose, we first considered the tags present only once in our dataset, that have therefore been observed only once in a single SAGE library. Some of these infrequent tags correspond to very weakly expressed transcripts. Others, however, may be incorrect because they have undergone sequencing error(s) during the construction of SAGE libraries. Tags occurring only once represent 13% of the total dataset, and a large proportion (75%) of the different tags. This proportion is not negligible, but as some of these tags may be incorrect, we checked the reliability of this set of tags before including it in our analysis. As mentioned above, the probability that a 21 bp sequence spuriously matches the human genome sequence is very small: if the subset of tags occurring only once in the total dataset contained many incorrect tags, it should therefore be enriched in unmapped tags.
We therefore mapped each tag to the human nuclear and mitochondrial genome sequence, and compared the tags occurring only once in the SAGE libraries with the tags occurring more than once. Among the subset of tags occurring only once, 73% are unmapped. In contrast, in the other pool of tags, significantly fewer tags (39%) have not been localized. Each transcript generates several tags that can be either correct or incorrect after sequencing: a large majority of these tags are correct, but a small number are incorrect tags containing one or more sequencing errors. For each incorrect tag present in our dataset, it should be possible to recover somewhere else in our dataset the corresponding correct tag, without sequencing error. Thus, for each tag which is present only once and does not match the genome sequence, we checked whether we could find in our dataset another tag matching the genome sequence and identical to this tag apart from one or two base pairs. For 69% of the unmapped tags occurring only once, we found at least one mapped variant. This frequency drops to 33% in the subset of unmapped tags occurring more than once in our dataset. In conclusion, these results suggest that the subset of tags occurring only once is particularly enriched in incorrect tags resulting from sequencing errors. We have therefore chosen not to include these tags in our analysis. Excluding tags that are present only once does not eliminate all tags containing sequencing errors. Indeed, the same error could occur several times. We therefore tried to eliminate incorrect tags that occur more than once in the libraries. For a given tag t, we can evaluate the number of variants derived from this tag t by sequencing errors. We identified the set of tags corresponding to all the variants of t (differing by at most two base pairs because of substitution(s), insertion(s) or deletion(s)), and determined for each variant whether it was rare enough to be only due to sequencing error(s).
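The variant lookup described above can be sketched as follows. The tags are hypothetical, and for brevity only single-base substitutions are generated (the study also allowed two-base differences, insertions and deletions):

```python
BASES = "ACGT"

def one_base_variants(tag):
    """All tags differing from `tag` by exactly one substitution (indels omitted)."""
    return {tag[:i] + b + tag[i + 1:]
            for i in range(len(tag)) for b in BASES if b != tag[i]}

# Hypothetical example: one tag known to map to the genome, one unmapped tag.
mapped = {"ACGTACGTACGTACGTACGTA"}
unmapped_tag = "ACGTACGTACGTACGTACGTT"  # differs from the mapped tag by one base

has_mapped_variant = bool(one_base_variants(unmapped_tag) & mapped)  # True
```

An unmapped singleton tag with a mapped variant is the signature of a sequencing error, which is exactly what the 69% figure above measures.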
If so, the variant was discarded from the dataset. For this purpose, we implemented the algorithm proposed by Colinge and Feger. Tags that do not map to the genome could correspond to tags overlapping two exons. We computed the expected proportion of such tags using a set of transcripts with reliably annotated exons; in these sequences, 3% of the tags overlap two exons. Among the tags from our dataset that do not map to the genome, we found that 1885 different tags overlap two exons, using Ensembl annotations. These tags correspond to 1% of our initial set of tags. This proportion is slightly lower than the expected value, no doubt because the quality of annotations for all transcripts is not as high as in the set of Refseq transcripts. Tags containing part of the polyA tail also cannot be mapped to the genome sequence. We computed the expected proportion of such tags using a set of transcripts for which the polyA tail is known. For this, we extracted in silico the LongSAGE tag from 12,418 Refseq transcripts: 6% of these tags extend into the polyA tail. To estimate the frequency of such tags in our dataset, we mapped these tags to all human ESTs available in dbEST. Unmapped tags may also be due to the presence of a polymorphic region of the genome (Single Nucleotide Polymorphism: SNP), if the allele sequenced in the genome project differs from the allele of the individual used to construct the SAGE library. It has previously been estimated that any two copies of the human genome differ from one another at approximately 0.1% of nucleotide sites. Therefore the probability p that a given tag contains no SNP is p = (1 - 1/1000)^21, and the expected proportion of tags with at least one polymorphic site is roughly 2% (1 - p). We searched for the presence of such tags among our set of unmapped tags. For this purpose, we used a dataset computed using a previously published method.
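The SNP calculation above is easy to reproduce:

```python
# Expected fraction of 21 bp tags containing at least one SNP, assuming
# ~0.1% of nucleotide sites differ between two genome copies (from the text).
p_no_snp = (1 - 1 / 1000) ** 21
p_snp = 1 - p_no_snp  # ~0.021, i.e. roughly the 2% quoted in the text
```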
Some tags may derive from the murine embryonic fibroblasts (MEF) on which human embryonic stem cells are propagated. We considered a tag as a contaminant if it did not map to the human genome, nor to a human mature transcript, it occurred only in embryonic stem cell libraries propagated on MEF, and it mapped to the mouse genome. These tags represent a non-negligible proportion (13%) of the tags that do not match the human genome sequence. The percentage of tags that map to the mouse genome clearly varies between embryonic stem cell libraries, revealing different degrees of exclusion of MEF. Our results show that even when libraries have been constructed from carefully dissected material, it is always necessary to filter the tags to exclude tags generated by transcripts present in the remaining MEF. In total, the origin of 42% of the unmapped tags was explained by one of the situations previously described (29% correspond to a human transcript and 13% correspond to mouse contaminants). However, we could not explain the origin of the remaining unmapped tags. These tags do not belong to any library in particular. The large majority (91%) of these tags correspond to sequences varying by one base from another tag that maps to the genome, and some of these tags could therefore correspond to rare polymorphisms that are not represented by an EST. This is possible, but unlikely to be the main explanation because we studied twice as many ESTs (6.2 × 10^6) as SAGE tags (3.1 × 10^6). We also tested whether these tags could come from edited mRNAs that are not represented among EST sequences. For this purpose, we examined the transition frequencies when comparing genomic and tag sequences, since the two known families of RNA-editing enzymes in humans perform adenosine-to-inosine or cytosine-to-uracil modifications. Because of this, and because our screens exclude many other possible explanations, we think that the most parsimonious explanation for the presence of these tags is that they contain sequencing error(s).
We initially used a previously published error rate (17.3% of LongSAGE tags contain at least one error) to remove the tags likely to result from sequencing errors. Tags mapping once to the human genome represent nearly half (49%) of the set of different tags. We studied the localization of these tags with respect to known transcripts, to evaluate the amount of transcription inside and outside annotated transcripts. We first studied the tags that are located inside known transcripts, using Ensembl annotations extended at their 3' ends. Among the tags mapping once to the genome, 69% are located in such "extended" transcripts. A more precise description of the tag positions in different parts of these transcripts is displayed in the Figure. Even among the tags mapping to annotated transcripts, a non-negligible proportion (32%) maps in antisense compared to the annotated transcribed strand (p < 10^-16). Such tags have already been highlighted by several previous studies. This means that a large proportion of the genome is transcribed from both strands of the DNA, especially at the 3' end of the transcripts, confirming previous expectations; antisense transcription has also been reported in Arabidopsis thaliana. For each tag from our dataset that maps to the genome (once or more than once), the probability that it is erroneous can be estimated by Bayes' rule: p(erroneous|match) = p(match|erroneous) · p(erroneous) / p(match). 31% of tags mapping once to the genome do not correspond to a known transcript. We have already discarded the tags that could have been generated by sequencing errors, using a published sequencing error rate in SAGE libraries. However, as we previously mentioned, it is possible that the error rate is higher than anticipated, and that some tags from our dataset still contain sequencing errors. We thus estimated the probability that such tags containing a sequencing error match the human genome sequence.
For this purpose, we randomly selected 100,000 tags that match the human genome sequence, and we modified them by introducing "sequencing errors". These errors were randomly attributed, using the percentage of error for each base that we calculated (see Methods). We found that 7.6% of these modified tags matched the human genome sequence: this is an estimate of the probability that an erroneous tag maps to the genome (p(match|erroneous) = 0.076). With a sequencing error rate p(erroneous) = 0.173, we obtain p(erroneous|match) = 0.020; even with a higher error rate (p(erroneous) = 0.253), p(erroneous|match) = 0.030 is still low. Because the probability that a mapped tag is erroneous is very small, the majority of the tags mapping once to the genome and outside annotated transcripts should come from unknown transcribed regions. However, it is possible that some of these tags do not belong to new transcripts, because the real 3'UTR may be longer than annotated (even after our extension). We therefore calculated the distance between these tags and the 3' end of the nearest transcript. We observed that very few tags are located in an incompletely annotated 3'UTR (less than 2% of tags that do not correspond to a known transcript are closer than 1000 bp to the nearest transcript). Because the vast majority of tags that do not correspond to an annotated transcript do not originate from an incompletely annotated 3'UTR, we searched for other evidence of transcription in the regions from which these tags originate. As we mentioned in the introduction, most of the recent work on finding new transcripts in the human genome has been performed using tiling microarrays. We thus compared our SAGE tags (mapping once to the genome but not on known transcripts) with transcribed regions predicted using tiling microarrays. For this comparison, we used recently published transfrags. Conversely, only 0.39% of the transfrags contain one or more of our tags. We propose several explanations for this observation.
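The step from p(match|erroneous) to p(erroneous|match) is an application of Bayes' rule. A minimal numerical sketch, assuming correct tags essentially always map, which is why the result comes out slightly below the published 0.020:

```python
# Bayes estimate of the probability that a genome-matching tag is erroneous,
# using the quantities reported in the text (simplifying assumption:
# p(match|correct) = 1.0, so the value differs slightly from the published one).
def p_err_given_match(p_err, p_match_given_err, p_match_given_ok=1.0):
    # Total probability of a match, over erroneous and correct tags.
    p_match = p_err * p_match_given_err + (1 - p_err) * p_match_given_ok
    return p_err * p_match_given_err / p_match

estimate = p_err_given_match(0.173, 0.076)  # ~0.016, same order as the reported 0.020
```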
Nearly half of the transfrags correspond to non-polyadenylated transcripts, which are not sampled by SAGE because library construction relies on oligo-d(T) priming of polyadenylated mRNAs. Using the SAGE method, it is possible to study the transcriptome without any a priori knowledge of expressed genes. We used all the human LongSAGE libraries available, filtered them to remove tags containing sequencing errors, and systematically mapped these tags to the genome. We particularly concentrated on unexpected localizations, either because the tags did not match the genome sequence, or because they mapped outside known transcripts. We then proposed explanations or hypotheses for the origin of these tags. More than one third of the different tags do not map to the human genome. Among them, 42% are part of mRNA sequences but are not found on the human genome because they correspond to polyA tails, junctions between exons, polymorphic sites or contaminant murine transcripts. The other tags are probably due to sequencing error(s). Consequently, the sequencing error rate in these public libraries is probably higher than previously estimated. Half of the different tags map once to the genome, and one quarter of these tags match outside annotated transcripts. This suggests that many transcripts are still to be annotated in the human genome. Because many tags mapping to known transcripts belong either to introns or are aligned in antisense, we suggest that they belong to new variants or antisense mRNAs of these transcripts. Consequently, the human transcriptome seems to be more complex than shown by the current genome annotations, and LongSAGE analysis should help to improve the annotation process. SAGE libraries were downloaded from the NCBI website in July 2005. The tags were localized on the human nuclear genome and mitochondrial genome (Refseq sequence NC_001807) using Megablast. To discard tags that are likely to have been generated by sequencing errors, we implemented Colinge and Feger's method.
This method first requires the probability for one particular tag to be sequenced with an error. It has previously been estimated that the error rate is 17.3% in LongSAGE libraries. From this we can derive x, the error rate per base: 0.173 = 1 - (1 - x)^17. We obtain x = 0.0111, and therefore the probabilities of finding exactly one error in one tag (p1 = 17x(1 - x)^16) and exactly two errors in the same tag (p2 = C(17,2) x^2 (1 - x)^15). The probability of finding one or two errors (p1 + p2 = 17.21%) is much greater than the probability of finding more than two errors (p3 = 0.083%). We therefore ignore p3. Let L be the set of tags in a given library. We can define for each tag t ∈ L one set of tags V1(t) that contains the tags q ∈ L that can be obtained by changing one base of the tag t. Likewise, we define V2(t) as the set of tags q ∈ L that vary from t by two changes. As proposed by Colinge and Feger, we compute for each tag t the average contribution of its neighbors q to the number of occurrences of the tag t. This contribution, ν(t), can be calculated using the following equation:

ν(t) = Σ_{i∈{1,2}} Σ_{q∈Vi(t)} occ(q) · pi / #Vi(q)

where occ(q) is the number of occurrences of the tag q, and #Vi(q) is the cardinality of Vi(q). The term pi/#Vi(q) corresponds to the average contribution of q to each of its neighbors. In other words, each tag contributes equally to each of its neighbors to increase their number of occurrences. In each SAGE library, we eliminated all the tags for which ν(t) ≥ occ(t), because these tags may be due to sequencing errors. If we have not discarded all tags containing sequencing error(s), it is possible that some tags containing sequencing error(s) match the genome sequence. We therefore measured by simulation the probability that these tags containing sequencing error(s) map to the genome. For this purpose, we picked tags that map once to the genome, and we modified them by introducing "sequencing errors".
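The neighbor-contribution filter described above can be sketched as follows. This is a minimal version assuming substitution-only neighborhoods (the published method also counts insertions and deletions) and hypothetical toy occurrence counts:

```python
from itertools import combinations

BASES = "ACGT"
# Per-tag probabilities of exactly one or two errors, derived from x = 0.0111
P1, P2 = 0.158, 0.014

def variants(tag, k):
    """All sequences differing from `tag` by exactly k substitutions."""
    out = set()
    for positions in combinations(range(len(tag)), k):
        partial = [tag]
        for p in positions:
            partial = [t[:p] + b + t[p + 1:]
                       for t in partial for b in BASES if b != tag[p]]
        out.update(partial)
    return out

def nu(tag, occ):
    """Average contribution of neighboring tags' occurrences to `tag`."""
    total = 0.0
    for k, pk in ((1, P1), (2, P2)):
        for q in variants(tag, k):
            if q in occ:
                # q spreads pk of its counts equally over its own k-neighborhood
                total += occ[q] * pk / len(variants(q, k))
    return total

# Hypothetical toy library: 'AAAA' is abundant, 'AAAT' was seen once.
occ = {"AAAA": 1000, "AAAT": 1}
suspect = nu("AAAT", occ) >= occ["AAAT"]  # True: 'AAAT' is likely an error of 'AAAA'
```

A rare tag sitting in the error neighborhood of an abundant tag is thus eliminated, exactly the ν(t) ≥ occ(t) criterion above.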
To obtain tags that could have plausibly been created by sequencing errors, we need to know the probability of finding each base instead of each given base because of sequencing error (e.g. A changed to a T). By comparing each correct tag (matching the genome sequence once) with incorrect variants of this tag, we obtained a matrix with the relative frequencies of each of the 12 possible substitutions (plus the frequencies of deletions and insertions). We then applied one modification per tag, according to this matrix. Then, we checked whether these modified tags mapped to the genome. Finally, we obtained an estimate of the frequency of tags with a sequencing error that map to the genome. CK and MS designed and performed this study, and wrote the manuscript. LD, OG and DM provided guidance with comments on the study and on the manuscript. All authors read and approved the final manuscript. Characteristics of the LongSAGE libraries analyzed: their identification number in the Gene Expression Omnibus database (GSM), their total number of tags and their title as provided in the Gene Expression Omnibus database. Different origins of LongSAGE tags: classification of the different tags by our filtering process. Frequencies of base changes between a mapped LongSAGE tag and its variants: since the vast majority of the unmapped tags whose origin could not be explained correspond to sequences varying by one base from another tag that maps to the genome, we investigated whether these tags could come from edited mRNA. There are two known families of RNA-editing enzymes in humans: the adenosine deaminases acting on RNA (ADAR), which perform adenosine-to-inosine (A-to-I) modifications, and the apoB mRNA-editing catalytic peptide (APOBEC), which induces cytosine-to-uracil (C-to-U) transformations.
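The error-introduction step can be sketched as follows. The substitution-frequency matrix shown is hypothetical, standing in for the matrix estimated from the data (deletions and insertions are omitted for brevity):

```python
import random

# Hypothetical substitution-frequency matrix: for each original base, the
# relative frequency of each erroneous replacement (rows sum to 1).
SUB_FREQ = {
    "A": {"C": 0.2, "G": 0.5, "T": 0.3},
    "C": {"A": 0.3, "G": 0.2, "T": 0.5},
    "G": {"A": 0.5, "C": 0.2, "T": 0.3},
    "T": {"A": 0.3, "C": 0.5, "G": 0.2},
}

def introduce_error(tag, rng=random):
    """Apply one substitution at a random position, drawn from the matrix."""
    pos = rng.randrange(len(tag))
    alts, weights = zip(*SUB_FREQ[tag[pos]].items())
    new_base = rng.choices(alts, weights=weights)[0]
    return tag[:pos] + new_base + tag[pos + 1:]

mutated = introduce_error("ACGTACGTACGTACGTACGTA")  # differs by exactly one base
```

Remapping many such mutated tags to the genome yields the 7.6% match-rate estimate for erroneous tags.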
In this study, we present a robust and reliable computational method for tag-to-gene assignment in serial analysis of gene expression (SAGE). The method relies on current genome information and annotation, incorporation of several new features, and key improvements over alternative methods, all of which are important to determine gene expression levels more accurately. The method provides a complete annotation of potential virtual SAGE tags within a genome, along with an estimation of their confidence for experimental observation that ranks tags presenting multiple matches in the genome. We applied this method to the Saccharomyces cerevisiae genome, producing the most thorough and accurate annotation of potential virtual SAGE tags that is available today for this organism. The usefulness of this method is exemplified by the significant reduction of ambiguous cases in existing experimental SAGE data. In addition, we report new insights from the analysis of existing SAGE data. First, we found that experimental SAGE tags mapping onto introns, intron-exon boundaries, and non-coding RNA elements are observed in all available SAGE data. Second, a significant fraction of experimental SAGE tags was found to map onto genomic regions currently annotated as intergenic. Third, a significant number of existing experimental SAGE tags for yeast has been derived from truncated cDNAs, which are synthesized through oligo-d(T) priming to internal poly-(A) regions during reverse transcription. We conclude that an accurate and unambiguous tag mapping process is essential to increase the quality and the amount of information that can be extracted from SAGE experiments. This is supported by the results obtained here and also by the large impact that the erroneous interpretation of these data could have on downstream applications. Serial Analysis of Gene Expression (SAGE) technology has been widely used.
A critical step in the SAGE methodology is the tag mapping process, which refers to the unambiguous assignment of an experimentally measured tag to a given transcript. Currently, the tag mapping process frequently involves searching for the observed tag sequences within the known transcriptome. Commonly employed databases available for tag mapping use UniGene, i.e. determining the UniGene cluster that most likely represents the gene from which the experimental SAGE tag was derived. Each UniGene cluster contains a collection of expressed sequences, consisting of well-characterized mRNA/cDNA sequences and expressed sequence tags (ESTs), that might represent a unique transcript. Unfortunately, this strategy allows only for the partial assignment of tags to transcripts, because the current resources for transcriptome data are incomplete for most species and organisms. Therefore, a significant fraction of the experimentally measured tags remains unidentified. In addition, there are several drawbacks of using this strategy for the mapping of SAGE tags to transcripts. First, a single gene may be represented in several clusters, resulting in ambiguous assignments. Second, EST sequences, which are the major components of the UniGene clusters, have an error rate estimated at about 1% (1 in 100 nts), resulting in a tag error assignment rate close to 10%. Third, UniGene clusters do not cover all genes (i.e. hypothetical and unknown genes). For example, SAGE studies in human have shown that 60% of the 14 bp tags do not have any match to sequences in the UniGene clusters. SAGE can be very efficient for gene discovery and annotation. In this work, we designed a bioinformatic method that gives different confidence values to each of the multiple hits in the genome for a tag sequence. Our method allows us to fully exploit the abovementioned benefits while using genomic sequences for the tag mapping process in SAGE.
The confidence values were assigned according to several parameters that were obtained by the analysis of experimental SAGE tags from previous studies in yeast. In this work, we describe a new method, HGA, for tag mapping in SAGE. The method combines existing knowledge of a genome sequence and its current annotation, along with known data from previous SAGE experiments, to increase the accuracy and reduce the ambiguity of the tag mapping process. Though this methodology can be applied to any organism, we describe it here with some parameter values that have been specifically tuned for Saccharomyces cerevisiae. Some of these parameters are highly specific to yeast and may not be as crucial for other organisms, and vice versa. HGA consists of four main steps, which are described below. A detailed flowchart of the HGA method is illustrated in the Figure. Although the complete genomic sequence of Saccharomyces cerevisiae is available, its protein tables only specify the coding regions of each gene and do not contain the assignment of the untranslated regions (UTRs) at the 5' and 3' ends. Therefore it was necessary to assign them. The precise assignment of these regions is particularly relevant in the case of the 3'-UTRs, because it is expected that a significant fraction of experimental SAGE-tags will be obtained from these regions. With better knowledge of the transcriptome, a larger fraction of the UTRs can be accurately assigned. For most model organisms, a large number of expressed sequence tags (ESTs) are available even though only a small fraction of full-length cDNAs is known. Therefore, the precise assignment of UTRs for most of the coding genes is not possible. For yeast, about half of the known genes have a predicted 3'-UTR with high confidence, mainly due to the identification of downstream polyadenylation signals. The complete genomic sequence of an organism is first searched for occurrences of the recognition site of the anchoring enzyme used in SAGE.
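This genome scan can be sketched as follows, assuming NlaIII (CATG) as the anchoring enzyme and 17 extracted bases per tag; the mini-genome is hypothetical, and the reverse strand and most-3'-site selection are omitted for brevity:

```python
import re
from collections import Counter

ANCHOR = "CATG"   # NlaIII recognition site (the anchoring enzyme)
TAG_LEN = 17      # bases extracted downstream of the anchor

def virtual_tags(genome):
    """Extract every potential tag downstream of an anchoring-enzyme site
    and count its frequency of occurrence (forward strand only)."""
    tags = [genome[m.end():m.end() + TAG_LEN]
            for m in re.finditer(ANCHOR, genome)
            if len(genome) - m.end() >= TAG_LEN]
    return Counter(tags)

# Hypothetical mini-genome with two anchoring sites
genome = "AA" + "CATG" + "T" * 17 + "GG" + "CATG" + "A" * 17 + "CC"
counts = virtual_tags(genome)  # each of the two virtual tags occurs once
```

The resulting frequency counts are what allow tags with multiple genomic matches to be identified and ranked.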
The virtual potential SAGE-tags are then extracted by combination with a given tagging enzyme. These potential tags are then compared all-against-all in a pairwise fashion, and the frequency of occurrence of each of the potential tags in the genome is determined. Genes without a confidently predicted 3'-UTR were assigned a default 3'-UTR of less than 100 nts. After the assignment of complete coding transcripts to the genome, the RNA tables are used to map and assign the non-mRNA transcripts (see Methods). This feature of the HGA method is new, because previous works in SAGE have not explicitly used the non-coding transcripts to map experimental tags. Though most non-coding transcripts do not contain poly(A) tails, and thus should not be observed in SAGE experiments, a recent study has shown that some ribosomal RNAs are polyadenylated in yeast, even in the absence of a canonical polyadenylation signal. Once all transcripts have been assigned, the remaining intergenic regions of the genome are categorized into two types, depending on whether an annotated transcript is present in the complementary strand or not. The structured genome information generated above is crossed against all the potential tags, generating a genome-based annotation of virtual SAGE-tags. The resulting virtual tags are categorized into one of seven classes, depending on the genomic position, annotation and frequency of occurrence of each virtual SAGE-tag in the genome (Figure, center). Tags spanning exon-exon boundaries, which cannot be found in the genome sequence de novo, are labelled 'spliced-tags'; the remaining tags are labelled 'non-spliced-tags'. As an important complement to this new tag classification scheme, the HGA method also incorporates two additional tag features, which are intended to reduce some potential distortions that can affect the interpretation of SAGE results. First, all continuous stretches of eight or more adenines within each annotated transcript are recorded, to account for oligo-dT priming to internal poly(A) regions of RNA molecules during reverse transcription.
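Recording internal poly(A) stretches can be sketched with a simple regular expression; the transcript fragment is hypothetical:

```python
import re

# Record internal runs of eight or more adenines in a transcript; tags upstream
# of such runs may derive from truncated cDNAs (internal oligo-d(T) priming).
INTERNAL_POLYA = re.compile(r"A{8,}")

def internal_polya_spans(transcript):
    """(start, end) coordinates of every run of >= 8 consecutive adenines."""
    return [(m.start(), m.end()) for m in INTERNAL_POLYA.finditer(transcript)]

transcript = "CCGG" + "A" * 9 + "TTGCGC"  # hypothetical transcript fragment
spans = internal_polya_spans(transcript)   # one run: positions 4 to 13
```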
It has been demonstrated that this process occurs at a high frequency, causing about 12% of ESTs to be truncated due to internal poly(A) priming . Therefo
The resulting tag classification, along with the above-mentioned additional tag features, is used to select particular tags from the genome, the occurrences of which are searched for among known experimental SAGE-tags obtained previously and described in the literature for the studied organism. Table 
To summarize, all the potential tags are assigned a value according to the genomic regions they map to and to their specific features. For example, the tags mapping into a certain transcript will have different values according to the transcript position and the proximity of internal poly(A) sequences, whereas the tags mapping to intergenic regions have a single value. (i.e. among all occurrences of a tag sequence that is observed multiple times in the genome, there is no case where a particular virtual tag always has an odds ratio equal to or higher than five when compared against all other instances). In these cases, the tags could still be ranked based on the odds ratios that they exhibit, which is provided by the annotation generated by the HGA method. Some examples illustrating how the tag confidence assignment process is carried out by the HGA method are shown in Figure 
The estimated probability functions described above are then crossed against all virtual genomic tags, to obtain a tag confidence assignment for each potential virtual SAGE-tag in the genome Figure . The odd
Saccharomyces cerevisiae , or on the 3'-UTRs (15%) and a small fraction of the tags were found at the 5'-UTRs (5%).
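The internal poly(A) screen described above (runs of eight or more adenines within an annotated transcript, recorded as potential internal oligo-dT priming sites) can be sketched in a few lines. This is an illustrative simplification; the function and variable names are ours, not the authors' tools:

```python
import re

def internal_polya_sites(transcript_seq, min_len=8):
    """Return start positions of runs of >= min_len consecutive adenines.

    Such runs can act as internal oligo-dT priming sites during reverse
    transcription, truncating the cDNA upstream of the true 3' end; tags
    located upstream of such a run may therefore be observed even though
    they are not the 3'-most tag of the transcript.
    """
    pattern = re.compile("A{%d,}" % min_len)
    return [m.start() for m in pattern.finditer(transcript_seq.upper())]
```

Tags found upstream of any returned position would then be flagged during classification.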
As should be expected, these figures correlate with the observed lengths of these elements. The total number of virtual tags shows an inverse linear relationship to position within the transcript, as expected, based on the fact that position number correlates with distance from the 3'-UTR end, which is directly related to the probability of finding a downstream recognition site for the anchoring enzyme used in SAGE. Only a small fraction of the virtual tags map onto annotated introns (1%) and non-coding RNAs (1%). Very few tags map onto exon-intron boundaries (0.02%), accounting for a total of 13 new tag sequences generated by splicing. When we considered potential unique virtual tag sequences within the genome, most of the results described above remain unchanged region within a transcript are observed in all SAGE experiments reported. Third, in all experiments, SAGE-tags mapping onto non-coding RNAs are observed. Almost all these cases consist of tags belonging to the class 'non-poly(A)-next' and mapping to the first position within the transcripts, suggesting that typical polyadenylation occurs at the 3' end of these transcripts . Fourth, analogous to what was observed for introns, spliced-tags are observed in all SAGE experiments. This is the first time that experimental SAGE-tags are mapped onto virtual and potential spliced-tags from a genome. Fifth, a significant fraction of experimental SAGE-tags map onto regions in the genome that currently are annotated as intergenic. Though this has already been observed, it must be mentioned that this is the first time that this analysis has been carried out considering the confidence of the assigned tags, and thus the figures obtained here should be more accurate. These intergenic tags could represent new genes not yet described in yeast.
Using the HGA-based annotation they can now be easily ranked according to their estimated confidence, which will facilitate and optimize the experimental planning of the gene discovery process. Finally, a large fraction of the experimental tags that map onto an intergenic region has an annotated transcript on the opposite strand. These tags could correspond either to new genes or to new regulatory elements such as antisense RNA ,23. The i.e. those with several assignments by other authors), independently for each tag class, the obtained figures are highly significant aSeveral new features that improve the accuracy and completeness of the tag mapping process in SAGE have been incorporated by the HGA method, and are detailed below.First, instead of using only the coding regions of known and hypothetical genes, we assigned, as precisely as possible, the 3' and 5' UTRs, thus generating more accurate putative transcripts. Mature and immature transcripts were generated, by considering exon-intron boundaries, thus keeping and using all the relevant available genomic information and annotation. When no information about UTRs was available for a given gene, we used a fixed maximal length estimated from experimental data. It is noteworthy that most conflicting assignments of unique tags observed between HGA and other authors' assignments resulted from the large length of 3'-UTRs previously used by these authors -17 Tabl. The preSecond, non coding RNAs, in addition to known and hypothetical genes, were also included in the genomic annotation. Though the amount of tags mapping to these transcripts is low Table and mostS. cerevisiae constitutes one of the best annotated genomes available today. If EST data were used to map experimental SAGE-tags, this information would not be obtained. Hence, a method that considers these elements explicitly in the annotation process would accelerate the discovery of new regulatory elements. 
The identification of regulatory elements of this kind is important for a complete and accurate interpretation of the gene expression patterns.Third, tags mapping to intergenic regions in the genome, where an annotated transcript is found in the opposite strand, were also considered in the HGA method. These supposedly intergenic tags, if experimentally observed, could account for unknown elements, such as antisense RNA. We showed that a significant fraction of these tags were observed in SAGE experiments with yeast Table , even thFourth, by using genomic information in the tag mapping process, the HGA method identifies tags mapping onto regions where no gene annotations exist in either of the DNA strands. In this work, we demonstrated that a significant fraction of these tags were observed in SAGE experiments with yeasts Table . It is aFifth, the combined use of genomic information along with the generation of new putative splicing tags not explicitly available in the genome sequence, allows a more accurate estimation of tag uniqueness and, therefore, of potentially ambiguous mappings.Sixth, the inclusion of internal poly(A) regions within annotated transcripts as possible reverse transcription initiation sources is another important feature of the HGA method. This was included because recent EST data analyses have shown that a significant fraction of the reverse transcription processes are initiated at internal poly(A) regions of more than 8 consecutive adenines . The resSeventh, the new definition of tag classes considered by the HGA method Table facilitaFinally, the calculation of tag probabilities from experimental data based on the new tag classification, along with other tag features, allows the HGA method to get the odds or confidence that a tag would be experimentally observed when several instances of a tag sequence are present in the genome. 
This constitutes the core of the HGA method and one of the most important contributions of this work to reduce the number of ambiguous tag assignments in SAGE. In addition, we also demonstrated that about 20% of the experimental tags mapping onto a transcript are located at the second tag position or beyond. If this information is not considered in the tag-to-gene mapping process, a substantial fraction of the experimental tags will be missed. Finally, it is important to note that even in those cases where the ambiguity could not be completely removed, the HGA method could reduce the number of possible assignments, thus reducing the overall ambiguity for a particular tag with multiple occurrences in the genome.
The score of intergenic tags is strongly dependent on the quality of the genome annotation. In poorly annotated genomes, intergenic tags will have a higher probability of being observed by the HGA methodology. This is a desirable feature for tag probability estimation in the discovery of new genes. In yeast, 11.3% of all experimental SAGE-tags obtained to date and searched against the current annotation of the yeast genome map into an intergenic region, suggesting that new coding or non-coding transcripts are still to be discovered. This figure will be even larger for poorly annotated genomes.
Xenopus genome). Therefore, the main problem of using genome sequences for tag-to-gene assignment in long genomes is that with their increased size and complexity tag uniqueness and unambiguous tag-mapping become increasingly difficult. It is in these cases that HGA would be most useful, because it will significantly reduce ambiguous tag mappings. In this work, we achieved an 8–10% increase in unambiguous tag assignments when considering all experimental yeast SAGE-tags Table . This im
Long-SAGE has been proposed to reduce the ambiguity of tag mapping for large genomes ,25. Howe
Xenopus tropicalis genome .
This genome has 1.5 billion base pairs, half the size of the human genome and 125 times larger than the yeast genome. Our preliminary analysis of all potential tags for X. tropicalis genome showed that tag uniqueness for short (14 nts) virtual tags is around 9.1%, small compared to the 80.6 % of uniqueness for the long (21 nts) virtual tags. When a histogram of occurrence for each tag sequence was constructed, we found that 60% of the virtual short tags have less than 7 matches to the genome and 90% of the tags have less than 20 matches. This low number of genome hits per tag for a significant amount of the potential short tags suggests that the use of HGA tag-mapping should allow proper tag-to-gene assignment when typical SAGE technology with tags of 14 nts is used. More importantly, when we built a virtual reference database for SAGE-tags from X. tropicalis genome using only some of the parameters involved in the HGA method described here, we have found that 40% of the short virtual tags are included in the high confidence class. This indicates that after using the HGA method, a significant fraction of the short tags will be unambiguously mapped with a high confidence to the genome and represents a total potential gain of unambiguous assignments of about 31%. This figure is significantly higher than the 8–10% obtained for yeast, though its estimation for X. 
tropicalis was based on the virtual genomic tags instead of the experimental SAGE-tags, where this figure will increase, as it was shown for yeast.
1) A significant increase of unambiguous assignments for experimental SAGE-tags in yeast is achieved when using this method Figure .
2) Using a genome-based annotation of virtual SAGE-tags like the one described here shows that a significant fraction of experimental SAGE-tags comes from intergenic regions, from partially digested cDNA, from the opposite strand of annotated transcripts, and from truncated cDNAs Figure .
3) In all SAGE experiments reported for yeast, tags map onto introns, exon-intron boundaries and onto non-coding RNAs Table .
4) In all SAGE experiments reported for yeast, it was observed that the largest fraction of tags map to the coding regions of transcripts and not to the 3' UTR elements Table .
The full genome sequence of the Saccharomyces cerevisiae organism was obtained from the July 2005 release available at the Saccharomyces genome database (SGD) web site . The July 26th 2005 release of the genomic annotation available at the SGD web site was used .
This process was carried out for the forward and reverse DNA strands, and the results concatenated into a single file. Second, the 14 bp sequences were filtered, selecting only those that matched the pattern CATG at their 5' end. This procedure resulted in a total of 76,516 tags that represent the theoretical product of a complete genome digestion by using a combination of the NlaIII and BsmFI restriction enzymes . In this process, the position and strand where each tag was found in the genome were stored. For this purpose, we set up a new computer program called pattern.
Several computer programs in C++ and ANSI C languages were written to perform specific tasks. First, the full DNA sequence of each chromosome in the genome was fragmented into all possible overlapping oligonucleotide sequences of a length of 14 base pairs (bp) using the computer program pattern. Finally, an all-against-all pairwise tag comparison without mismatches was performed and the frequency of occurrence for each tag sequence at the different genomic positions on both strands was stored with another computer program, freqtag. All these computer programs for the LINUX operating system are freely available as supplemental material at our web site .
Only the records containing the 'ORF' word in the feature type field identifier from the filtered genome annotation table were considered for the annotation and prediction of 5' and 3' UTRs. This restriction yielded a total of 6,591 ORF candidates for the 3' and 5' UTR assignments. The 3'-UTR ends were first assigned to all those cases that cross-matched the previously described annotation . In some cases the 3'-UTR was defined with a length of zero nts; its end position was therefore assigned to the same position as the end of the annotated ORF. A total of 78 3'-UTRs with a length of zero nts were assigned.
After all 3'-UTR assignments were completed, the 5'-UTRs were assigned as follows: as the 5' UTRs of most yeast ORFs are unknown, a fixed 5'-UTR length of 100 nts was assigned for all those ORFs where the previous annotated transcript was located at a distance larger than 100 nts. In those cases where an ORF was located upstream, the previously assigned 3'-UTR end position was considered as the end position of the upstream ORF. In some cases the 5'-UTR was defined with a length of zero nts, and therefore its initial position was assigned to the first nucleotide corresponding to the first codon of the annotated ORF; 265 5'-UTRs with a length of zero nts were assigned.
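A minimal sketch of the extraction and counting steps performed by the pattern and freqtag programs might look as follows. This is our simplification for illustration, not the published code; it slides a 14-bp window over both strands, keeps windows anchored by the NlaIII site CATG, and counts the genomic frequency of each tag sequence:

```python
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def virtual_tags(chrom_seq, tag_len=14, anchor="CATG"):
    """Extract candidate SAGE tags: every tag_len-bp window starting with
    the anchoring-enzyme recognition site, on the forward and reverse
    strands, together with each tag's frequency of occurrence."""
    forward = chrom_seq.upper()
    reverse = forward.translate(COMPLEMENT)[::-1]  # reverse complement
    tags = Counter()
    for strand in (forward, reverse):
        for i in range(len(strand) - tag_len + 1):
            window = strand[i:i + tag_len]
            if window.startswith(anchor):
                tags[window] += 1
    return tags
```

In the full method the genomic position and strand of each occurrence are stored as well, not just the counts.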
100 nts was chosen because more than 95% of the experimental tags that map into the 5'-UTR are observed at an upstream distance from the initial codon of the ORF of less than 100 nts . A totalIn addition to the putative coding transcripts, all non-coding RNAs available in the genomic annotation table were selected. Then, all the virtual genomic SAGE tags were mapped against these annotated elements, based on their genomic positions. Both complete and partial tag matches to a transcript were recorded. A complete match was defined when the virtual tag was totally contained within the transcript. A partial tag match was defined only if the previous condition was not fulfilled and if the most 5' nucleotide was contained within the transcript; otherwise the virtual tag was defined as intergenic. The same criterion described above was used to define those tags mapping to introns. A total of 775 introns are currently annotated within known transcripts in the yeast genome. The virtual tags partially mapping an exon-intron boundary were annotated as such, but in these cases the potential new tags not present in the genomic sequence that could be obtained by the splicing process were also generated and stored into the database. Only 13 potential new tags fulfilling these conditions were generated.All virtual tags defined as intergenic in the previous step were assessed for their occurrence in the opposite strand of an annotated transcript. All virtual tags located within an intergenic region but fully contained in the opposite strand of an annotated transcript were also annotated as such. These tags could be important for the discovery of new interference RNA elements .Virtual tags located within a transcript, but not at the most 3' end position, were observed with a high frequency experimentally due to internal poly(A) priming of oligo-d(T) primer during reverse transcription . 
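The complete/partial/intergenic rule described above can be expressed as a small classifier. Coordinate conventions and names here are our assumptions for illustration (inclusive coordinates, tag and transcript on the same strand), not the authors' implementation:

```python
def classify_tag(tag_start, tag_end, tx_start, tx_end):
    """Classify a virtual tag against one annotated transcript.

    'complete'   : the tag is totally contained within the transcript.
    'partial'    : only the tag's 5'-most nucleotide falls inside it.
    'intergenic' : neither condition holds.
    tag_start is taken to be the 5'-most nucleotide of the tag.
    """
    if tx_start <= tag_start and tag_end <= tx_end:
        return "complete"
    if tx_start <= tag_start <= tx_end:
        return "partial"
    return "intergenic"
```

Tags labelled intergenic would then be checked against the opposite strand of annotated transcripts, as described above.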
TherefoThe class of virtual tags was defined based on three characteristics: 1) the frequency of occurrence of the tag sequence in the genome (whether it is unique or not), 2) the number of annotations for the tag, and 3) the type of annotations of the tag. Using these three distinct features we have defined seven different quality classes for any virtual tag occurring in the genome. The naming and definition of tag classes are described in Table The confidence of virtual tags was defined based on a combination of the tag class, the frequency of occurrence in the genome and the position of mapping within the transcript. All potential tags in the genome were classified as high, low, or undefined confidence. High confidence tags correspond to those that can be unambiguously assigned to a single known gene or to a single intergenic position in the genome. Low confidence tags correspond to those that should not be visible by experiment, because another tag of identical sequence but mapping at a different location in the genome should be. Undefined confidence tags correspond to those that are fuzzy and cannot be assigned clearly to a single gene or intergenic region in the genome. The procedure that we used to define tag confidence is as follows:First, the probability of observation by experiment was calculated for those tags belonging to the class platinum, copper and iron. In the case of the class platinum, the tags were subdivided into two groups depending if the tag was or was not located upstream from an internal poly(A) region. In the case of those tags that were not defined as next to an internal poly(A) site we calculated their frequency of occurrence at different positions in the transcript from the 3'-UTR end, using the currently known experimental SAGE data Figure . To miniie. tags belonging to the silver class), an assessment of the confidence was carried out based on some tag features and on the previously obtained probabilities. 
The probability of occurrence is obtained from the previous data for all the instances of a particular tag sequence. If an instance of a tag maps to an intergenic region in the genome, then the probability of observing copper and iron tag sequences is used. If an instance of a tag maps into a transcript, then it is first evaluated whether it is located upstream from an internal poly(A) region or not. In the case of the former, the probability of a tag mapping upstream from an internal poly(A) region is used; otherwise, the probability is obtained based on the position within the transcript at which the tag is found. Once the individual probabilities are obtained for all the instances of the tag sequence, a pairwise comparison table is built, which contains all-against-all odds ratios among the instances. Then, if a single instance of a tag has odds ratios higher than 5.0 against all other instances, this tag is defined as a high confidence tag, and all other instances as low confidence. Otherwise, all tag instances are assigned an undefined confidence.
Second, all tags belonging to the classes platinum and copper were defined as high confidence. In the case of non-unique tags in the genome, or in the case of unique tags with multiple annotations .
We compiled all experimental information available from SAGE experiments in yeast.
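The odds-ratio-based confidence assignment could be sketched as follows. This is a simplified rendering of the rule in the text; the 5.0 threshold follows the text, while the function name and the assumption that all instance probabilities are positive are ours:

```python
def assign_confidence(probs, threshold=5.0):
    """Label the genomic instances of one tag sequence.

    probs: observation probabilities for each instance (all > 0).
    If one instance has an odds ratio above the threshold against every
    other instance, it becomes 'high' confidence and the rest 'low';
    otherwise all instances remain 'undefined'.
    """
    n = len(probs)
    for i in range(n):
        if all(probs[i] / probs[j] > threshold for j in range(n) if j != i):
            return ["high" if k == i else "low" for k in range(n)]
    return ["undefined"] * n
```

When no instance dominates, the odds ratios themselves can still be used to rank the instances, as noted above.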
These include three independent works -17, acco
The mapping process of experimental SAGE-tags against the virtual library was performed by assigning to the experimental tag the annotation of the high confidence virtual tag, when possible; otherwise, the experimental tag was assigned to multiple transcripts and/or intergenic regions with an undefined confidence.
A web server that uses the HGA-based annotation described in this manuscript for the genomic mapping of experimental SAGE tags from yeast or for the exploration of the virtual SAGE-tags on this organism has been implemented and is freely accessible.
Project home page: 
Operating systems: MacOSX, Linux, Windows
Programming languages: C++, ANSI C, PERL, PHP, MySQL
Other requirements: none
License: none
Any restrictions to use by non-academics: none

During gene expression analysis by Serial Analysis of Gene Expression (SAGE), duplicate ditags are routinely removed from the data analysis, because they are suspected to stem from artifacts during SAGE library construction. As a consequence, naturally occurring duplicate ditags are also removed from the analysis, leading to an error of measurement.
An algorithm was developed to analyze the differential occurrence of SAGE tags in different ditag combinations. Analysis of a pancreatic acinar cell LongSAGE library showed no sign of a general amplification bias that justified the removal of all duplicate ditags. Extending the analysis to 10 additional LongSAGE libraries showed no justification for removal of all duplicate ditags either. On the contrary, while the error introduced in original SAGE by removal of naturally occurring duplicate ditags is insignificant, it leads to an error of up to 3-fold in LongSAGE.
However, the algorithm developed for the analysis of duplicate ditags was able to identify individual artifact ditags that originated from rare nucleotide variations of tags and vector contamination.
The removal of all duplicate ditags was unfounded for the datasets analyzed and led to large errors. This may also be the case for other LongSAGE datasets already present in databases. Analysis of the ditag population, however, can identify artifact tags that should be removed from analysis or have their tag count adjusted. Serial Analysis of Gene Expression (SAGE) is a global and digital gene expression profiling method ,2. It re
However, duplicate ditags will be encountered naturally with a certain frequency, depending on the abundance of the two transcripts from which the ditag is derived ,9. For e
However, recent developments in SAGE technology have accentuated the problem of discarding duplicate ditags. First, there has been a drive towards using smaller samples for construction of SAGE libraries, facilitating the analysis of cells with specialized functions such as pancreatic cells obtained from biopsies . Such sa
A prediction of the number of duplicate ditags as a function of the abundance of the two monotags in SAGE and LongSAGE is shown in figure As can be seen in figure However, the assumption of equal proportions of compatible overhangs in LongSAGE is unrealistic. The genome sequence is not a random distribution of nucleotides and furt
et al. have argued that up to 5% false ditags, so-called quasi-ditags, should be removed from SAGE analyses . The quasi-ditag error described by these authors exclusively affects rare tags, whereas we are concerned with an error that increasingly affects tags the more abundant they get. The experimental dataset derived from RNA isolated from pancreatic acinar cells by the aRNA-LongSAGE procedure was therefore analyzed in greater detail . It cont
In our The corrective measures suggested by Welle and Snyde In this study, an algorithm implemented in Perl was developed which extracts both monotags and ditags from phred or fasta formatted sequence files, defines the two nt overhang of tag pairs in the ditags, and counts and sorts these ditags into compatible overlapping classes. Of the 44,276 tags in total, 34,464 were seen twice or more. Considering these tags only (thus excluding most tags originating from sequencing error), 12,408 (36%) were present in duplicate ditags. A major complication of the analysis is the presence of the most abundant tags in several forms differing in length by one or rarely by two nucleotides. Thus a single tag may be split into two or more compatible overlapping classes. For this analysis, only ditags between 40 and 42 nt were considered . These accounted for 98.6% of all ditags. Standardized residuals were calculated using equation 3 for each ditag. Assuming normal distribution of the standardized residuals, the standard deviation is calculated for all ditags observed more than once. An observation is classified as an outlier if the standardized residual is larger than three standard deviations (99% confidence). The standardized residuals are plotted in figure found in the TIGR Human Gene Index and five other libraries derived from pancreatic tissue . The ana
Also, it is important to consider how the removal of duplicate ditags influences the initial identification of a gene as regulated in a comparison of two transcript profiles. To assess whether this changes by the exclusion of duplicate ditags, we compared the pancreatic acinar library with one derived from pancreatic ductal cells with and without the inclusion of duplicate ditags. Excluding duplicate ditags, 122 tags were identified as statistically significantly regulated (P < 0.05 with Bonferroni correction).
Including duplicates yielded 56 new tags, while three fell below the statistical cut-off. Assuming the observed tag counts are representative of the actual distribution of tag molecules, the expected occurrence of a duplicate ditag AB in SAGE can be approximated by an expression where D is the number of ditags, P the probability, and T the number of monotags observed. The expected occurrence of a duplicate ditag AB in LongSAGE, assuming even distribution of compatible overlapping classes, is then obtained analogously (including duplicate ditags). The expected occurrence of a duplicate ditag AB in LongSAGE, using dataset-specific distributions of compatible overlapping classes, can be approximated by an expression where TPPT is the sum of all possible partner tags. Standardized residuals were calculated as follows.
A SAGE experiment is performed by digesting cDNA with the frequent cutting restriction enzyme NlaIII, isolating the most 3' fragment and ligating a linker containing the sequence TCCGAC, which is recognized by the restriction enzyme MmeI. Tags are generated by MmeI, which cleaves the DNA strand 20/18 nt or 21/19 nt downstream of this sequence. Ligated ditags have the general structure CATGXXXXXXXXXXXXXXXX(X)(Y)YYYYYYYYYYYYYYYYCATG, where X denotes tag A and Y denotes the reverse complement of tag B. The parentheses indicate that most tags exist in both a short and a long form. Hence, the ditag AB can have the length 40, 41 or 42 nucleotides. Two central base pairs are common to both tag A and tag B and originate from the overlap used during ligation. The analysis is performed by the Perl script LongSAGEbias.pl [additional file ]. In the case of 41 nucleotide ditags, the ditag AB is first analyzed. Since A can exist in a 41 nt ditag both in the long and a short form, two predictions are made and the one closest to the observed is chosen. Then, the ditag BA is considered in an identical manner. The standardized residuals are calculated and the results are written to tabulator-separated files easily imported into any spreadsheet for further analysis.
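The exact equations did not survive into this text; as a rough guide only, one standard approximation for the expected count of a duplicate ditag under random pairing of monotags (our reconstruction, not necessarily the authors' exact Equation 1) is:

```latex
% With T monotags paired at random into D = T/2 ditags, a given ditag is
% AB (in either orientation) with probability 2 (T_A/T)(T_B/T), hence
E[D_{AB}] \;\approx\; D \cdot 2\,\frac{T_A}{T}\cdot\frac{T_B}{T}
          \;=\; \frac{T_A\,T_B}{2D}, \qquad T = 2D,
% where T_A and T_B are the observed counts of monotags A and B.
```

In LongSAGE this expectation would additionally be scaled by the probability that the two tags fall into compatible overlapping classes.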
Assuming the ditag counts are Poisson distributed, the mean can be estimated as the observed count and the standard deviation as the square root of the observed. The confidence interval of ditag counts can thus be estimated as mean ± 2*standard deviation. For small ditag counts this confidence interval extends below zero. Consequently, the standard deviation of the standardized residuals is calculated from ditags observed four or more times only (4-2*√4 = 0).The algorithm can be set to include all duplicate ditags, remove all duplicate ditags and adjust the observed ditag counts that fall outside the confidence interval to the prediction value.In sum, libraries derived from pancreatic acinar cells, ductal cells, and four libraries from different grades of pancreatic intraepithelial neoplasia were analyzed from pancreas. In addition, 5 potato tuber libraries derived from 6 week old minitubers, at harvest, two libraries from 60 days post harvest dormant tubers, and from tuber tissue excised from under an emerging sprout.LongSAGE tags of 17 nt + CATG were extracted from the human RefSeq v. 16 fasta file and the dinucleotide overlap distributions determined using the PERL script dinuccount.pl .LongSAGE tags from libraries generated from all potato and pancreatic tissue were extracted using the Perl script sage-phred.pl. For pancreatic acinar and ductal cells the tags were mapped to the Human Gene Index using saJE, AMH and KLN have designed the analysis of duplicate ditags. JE have produced scripts and performed the analysis. SAH and AMH have performed the LongSAGE studies on pancreatic tissue. ALH carried out the LongSAGE analyses on the potato. KLN drafted the manuscript, which was extensively discussed and modified by KLN, AMH, JE and KGW. Finally, all authors read and approved the final manuscript.Pancreatic acinar LongSAGE. Additional file Click here for fileObserved and predicted LongSAGE ditags of pancreatic acinar cells. 
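The outlier-flagging procedure might be sketched as follows. The residual form (O − E)/√E is our assumption for the elided Equation 3, chosen to be consistent with the Poisson model described above; restricting the spread estimate to ditags seen at least four times follows the 4 − 2·√4 = 0 rationale in the text:

```python
import math

def standardized_residual(observed, expected):
    """(O - E) / sqrt(E): standardized residual under a Poisson model,
    where the variance equals the mean (our assumed form of Equation 3)."""
    return (observed - expected) / math.sqrt(expected)

def flag_outliers(ditags, n_sd=3.0, min_count=4):
    """ditags: dict mapping ditag -> (observed, expected) counts.

    The residual spread is estimated only from ditags observed at least
    min_count times (so that mean - 2*sqrt(mean) stays non-negative),
    treating the residuals as centred on zero; a ditag is flagged when
    its residual exceeds n_sd spreads."""
    res = {d: standardized_residual(o, e) for d, (o, e) in ditags.items()}
    trusted = [z for d, z in res.items() if ditags[d][0] >= min_count]
    sd = math.sqrt(sum(z * z for z in trusted) / len(trusted))
    return [d for d, z in res.items() if abs(z) > n_sd * sd]
```

Flagged ditags would then either be removed or have their counts adjusted to the predicted value, as described above.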
Additional file: Most abundant transcripts with and without the removal of duplicate ditags.
Additional file: longsagebias.pl. This file contains the PERL script that performs the ditag analysis described.

The growth of sequencing-based Chromatin Immuno-Precipitation studies calls for a more in-depth understanding of the nature of the technology and of the resultant data to reduce false positives and false negatives. Control libraries are typically constructed to complement such studies in order to mitigate the effect of systematic biases that might be present in the data. In this study, we explored multiple control libraries to obtain a better understanding of what they truly represent.
First, we analyzed the genome-wide profiles of various sequencing-based libraries at a low resolution of 1 Mbp, and compared them with each other as well as against aCGH data. We found that copy number has a major influence in both ChIP-enriched as well as control libraries. Following that, we inspected the repeat regions to assess the extent of mapping bias. Next, significantly tag-rich 5 kbp regions were identified and they were associated with various genomic landmarks. For instance, we discovered that gene boundaries were surprisingly enriched with sequenced tags. Further, profiles between different cell types were noticeably distinct although the cell types were somewhat related and similar.
We found that control libraries bear traces of systematic biases. The biases can be attributed to genomic copy number, inherent sequencing bias, plausible mapping ambiguity, and cell-type specific chromatin structure. Our results suggest careful analysis of control libraries can reveal promising biological insights. Sequencing-based Chromatin-Immunoprecipitation (ChIP) studies have been rapidly gaining traction.
Introduced around late 2004 with ChIP-SACO Many interesting techniques proposed thus far have been successfully applied to a host of high-throughput sequencing ChIP (htsChIP) data. We can loosely classify these techniques into (i) those that use a single htsChIP library solely (a) genomic copy number variations, (b) mapping bias, (c) sequencing bias, and (d) chromatin and/or experimental bias. This study intends to explore the extent of these systematic biases.
To investigate how much genomic copy number influences the control library, in-house array CGH data (unpublished data – N.P.) of the MCF-7 cells was used as the benchmark for copy number variations in MCF-7. A whole cell extract library was also generated from MCF-7 and followed by direct ultra high-throughput sequencing using the Solexa Genome Analyzer platform. Using Equation (1) and a 1 Mbp sliding window see , we esti
et al. Similar analyses were also performed using three mouse WCEseq libraries published by Mikkelsen Another likely source for systematic bias lies in the mapping procedures. For the purpose of assessing this bias, we used the repeat regions as a surrogate for heavily biased regions. We found that a number of repeat classes were significantly enriched (p<1e-3) for WCEseq tags, while some were unexpectedly depleted of tags . The dep
Next, we examined the tag density distribution across the genome. From this analysis, we noticed that some of the spikes did not fall into any repeat regions. This led us to ask the following questions: How many significantly deviating spikes are there in a typical WCEseq library? Could they all be explained by Satellite or other repeats? Or are they coming from other genomic features? To answer this, we took the mouse WCEseq libraries and analyzed them at 5 kbp resolution. For each 5 kbp non-overlapping window, a p-value was computed for tag enrichment within the 5 kbp window assuming random uniform distribution of tags found in the overarching 1.5 Mbp region.
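Under the uniform-distribution assumption just described, the window test reduces to an upper-tail binomial probability. The following is a sketch under that assumption; the exact test used by the authors is not specified here, so names and defaults are illustrative:

```python
from math import comb

def window_enrichment_pvalue(k, n, window=5000, region=1500000):
    """P(X >= k) for X ~ Binomial(n, window/region): the chance of seeing
    at least k of the region's n tags inside one window if tags fall
    uniformly at random over the overarching region."""
    p = window / region
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

The resulting p-values would then be adjusted for multiple testing (e.g. FDR) across all windows.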
Even after FDR-adjustment for multiple hypotheses, a substantial number of 5 kbp regions remained significantly tag-rich. In all the mouse WCEseq libraries used in this study, the TSS was correlated with a sharp spike in tag population. The dense 5 kbp regions were also pervasive among intragenic regions: around 88.26% of the significantly dense 5 kbp regions of the NP WCEseq library were found to be associated with intragenic regions. Using the accompanying expression data, we related tag density to gene expression levels. It has been reported that the sequencing efficiency of next generation sequencers is influenced by the nucleic acid composition of the DNA fragment being sequenced. We started our analyses by comparing genome-wide profiles of various libraries at low 1 Mbp resolution. The fact that we could reasonably estimate the copy number using fragment density at this low resolution supports the assumption that a significant proportion of the fragments are random noise from the genome, and that this random noise is predominantly influenced by the underlying genomic copy number. Consequently, this also supports the notion that a WCEseq library should be able to negate bias from underlying chromosomal abundance (copy number). Having said that, copy number did not appear to be the sole influence on the genome-wide profiles of WCEseq. When comparing three WCEseq libraries drawn from very similar and relatively normal genomes, we saw that they were not extraordinarily correlated even at low resolution. This observation suggested the presence of other biases, which was further confirmed by analyses at higher resolution, in which we found that tag-rich 5 kbp regions were non-randomly associated with repeats and gene boundaries (TSS and TES). From our analyses of localized spikes and dips around the TSS and TES, one might suspect that these features are primarily due to mapping bias. If this were the case, the three mouse WCEseq libraries should have roughly the same profile.
However, we instead observed clearly distinct shapes of tag density at the TSS. Furthermore, the consistent phased profiles of sense and antisense tags suggested well-positioned fragments rather than pure mapping noise. All the evidence gathered thus far strongly suggests that the WCEseq profile is cell-type specific. Since sequencing and mapping biases are expected to be similar among libraries of the same species, the cell-type specific signals should be coming from the other two sources of bias (i.e. copy number or chromatin/experiment bias), although it has to be noted that the degree of tag enrichment or scarcity in repeat regions (the archetypic regions with mapping bias) was not completely uniform among the mouse WCEseq libraries. Obviously, WCEseq profiles will be different if the different cell types have distinct copy number profiles. However, chromatin bias was apparent in WCEseq from ES, NP, and MEF cells, which are expected to be normal and non-amplified. Tag densities near gene boundaries were distinct in the three libraries and were correlated with the genes' expression levels. For example, only 8.63% of the significantly dense 5 kbp regions found in the NP WCEseq library were also found to be significantly dense in the MEF WCEseq library. The mouse WCEseq libraries were obtained from a published work. An array comparative genomic hybridization readout was also obtained to measure the genomic copy number of MCF-7. The following method was used to generate a genomic copy number estimate from a WCEseq library: c is the estimated copy number, d is the number of tag counts within the region, w is the length of the region, and λ is the expected number of tags per base pair, computed as the total number of tags in the library divided by the total gap-less genome length.
With the assumption that other biases are minimal and should not greatly affect the distribution of the tags, the genomic copy number of a given region can be estimated as c = d/(wλ) (Equation 1). Genomic copy number estimation from ChIP-PET data requires two fundamental steps. First, as the library contains both signal and noise fragments, we need to be able to extract the noise part; for this we consider only singleton PETs. The relationship between the singleton count d and the copy number c can be described using Equation 2. The first term of Equation 2 denotes the number of singletons expected had there been no overlapping of random PETs in a region, where λ is the expected number of tags per base pair computed locally for each region being considered. The second term denotes the fraction of random PETs expected not to overlap with other fragments. In our analysis, we used sliding windows to compute the average copy number from MCF-7 ER ChIP-PET, MCF-7 WCEseq, as well as from the three mouse WCEseq libraries. The same sliding windows were used in averaging the copy number readouts from the MCF-7 aCGH data, which was used as the benchmark in the MCF-7 study. Pearson's correlation was employed to assess the signal concordance within these windows between every pair of libraries. Comparison of MCF-7 aCGH and MCF-7 ER ChIP-PET was done based on hg17. To compare the aCGH data to the WCEseq estimate, we first converted the aCGH data into the hg18 assembly using the liftOver tool of the UCSC Genome Browser. Tag densities computed in our study were based on 50 bp averaging and normalized against the total number of regions inspected. Tags mapped to the sense strand and tags mapped to the antisense strand were considered separately. As a proxy for mapping bias, we looked for irregularities in the number of tags mapped to different repeat classes.
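The window-level estimate c = d/(wλ) can be written as a one-liner; this is a minimal sketch under the stated assumptions (other biases negligible), with the function name mine and the sliding-window and singleton machinery omitted.

```python
def estimate_copy_number(region_tag_count, region_length_bp,
                         library_total_tags, gapless_genome_bp):
    """Equation (1)-style estimate: c = d / (w * lambda), where lambda is the
    library-wide expected tag density (total tags / gap-less genome length).
    A value of 1.0 means the region matches the genome-average density."""
    lam = library_total_tags / gapless_genome_bp
    return region_tag_count / (region_length_bp * lam)
```

A 1 kbp region holding twice the genome-average number of tags yields an estimate of 2.0, i.e. a plausible amplification relative to the baseline.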
Repeat annotations were taken from the UCSC Genome Browser database. Having observed that copy number variation explains the coarse-scale profile of WCEseq libraries, we asked whether there exist finer-scale irregularities beyond what can be explained by copy number. To do this, we divided the genome into 5 kbp non-overlapping windows and assessed overabundance of tags while taking into account the local tag density within a 1.5 Mbp window. For each window, we computed a p-value of tag overabundance using a Poisson distribution as the null hypothesis, with the expected rate of tags based on the 1.5 Mbp window. After calculating the p-values for all 5 kbp windows, the p-values were corrected for multiple hypotheses using the FDR method. The identified tag-rich 5 kbp regions were then associated with gene regions and boundaries. We also considered, for each tag in a given library, whether it could be uniquely mapped and whether it could be mapped back with confidence to its source location. We sought to roughly measure the bias that is correlated with CG content. As a null hypothesis, libraries of simulated tags were constructed for tags of length 26 bp, 27 bp, and 29 bp, through random sampling of the genome sequences. Tags from H3K4me3 ChIPseq libraries were used as a positive control. Comparing the resultant cumulative distributions, the WCEseq libraries were found to be relatively closer to the random tags than the H3K4me3 libraries. Given a hts library, we can measure the CG-content distribution of the DNA fragments associated with the observed tags; the goal is to infer the first factor, i.e. the underlying fragment distribution. Table S1. Testing a model of gene profile based on WCEseq tag density.
A proxy test for the tag density model around genes (Supplementary). Table S2. Sequencing depth of the libraries analyzed in this study. Figure S1. Comparative density profiles of tags mapped to the forward strand (black lines) and reverse strand (blue lines) in a 5 kbp window centered on the middle of Satellite repeats. As the enrichment of tags in Satellite repeats likely resulted from mapping issues and other random noise, no well-positioned fragments were expected, resulting in closely correlated density profiles of forward and reverse tags. Figure S2. Comparison of 5 kbp tag-rich regions across WCEseq libraries. (a) A Venn diagram showing the tag-rich regions from the three WCEseq libraries. Regions from the ES WCEseq library are negligible due to its shallow sequencing depth. Only 374 dense regions were common to the NP and MEF sets, representing only 8.63% and 26.7% of tag-rich regions from the NP and MEF libraries respectively. (b) Comparison of tag-rich regions associated with the TSS. 296 TSS-associated tag-rich regions were common, representing 20.6% and 28.6% of the total TSS-associated tag-rich regions found in the NP and MEF libraries. Common tag-rich regions of NP and MEF were mostly TSS-associated. Figure S3. A schematic model of the distribution of WCEseq fragments across a typical gene, based on observations in this study. Figure S4. Cumulative distributions of tags based on their C+G content. Distributions of WCEseq tags (red curves) were relatively close to simulated tags, indicating that sequence composition bias is relatively mild.
As a comparison, similar curves generated from H3K4me3 ChIPseq tags were also drawn (green curves). Figure S5. Tag density (50 bp average) profiles after CG-content normalization. The normalization assumed that each tag represents a 150 bp fragment, taking into account the tag direction. Each tag was reweighted such that the CG-content distribution of the fragments matched that of randomly sampled uniquely-mapped simulated tags. Shown are profiles around transcription start sites (TSS) and transcription end sites (TES) across three mouse WCEseq libraries. The black and blue curves denote the density of tags mapped on the sense and antisense strands respectively. Figure S6. Expression levels of genes were correlated with CG-content normalized tag density in WCEseq libraries. Density profiles (50 bp average) of tags around the TSS and TES of highly expressed (red) and lowly expressed (green) genes. The curves show the combined density of sense- and antisense-mapped tags. Tags were reweighted based on the CG content of the corresponding 150 bp fragments.

Tag-based techniques, such as SAGE, are commonly used to sample the mRNA pool of an organism's transcriptome. Incomplete digestion during the tag formation process may allow multiple tags to be generated from a given mRNA transcript. The probability of forming a tag varies with its relative location. As a result, the observed tag counts represent a biased sample of the actual transcript pool. In SAGE this bias can be avoided by ignoring all but the 3'-most tag, but this discards a large fraction of the observed data.
Taking this bias into account should allow more of the available data to be used, leading to increased statistical power. Three new hierarchical models, which directly embed a model for the variation in tag formation probability, are proposed and their associated Bayesian inference algorithms are developed. These models may be applied to libraries at both the tag and aggregate level. Simulation experiments and analysis of real data are used to contrast the accuracy of the various methods. The consequences of tag formation bias are discussed in the context of testing differential expression, and a description is given of how these algorithms can be applied in that context. Several Bayesian inference algorithms that account for tag formation effects are compared, with the DPB algorithm providing clear evidence of superior performance. The accuracy of inferences when using a particular non-informative prior is found to depend on the expression level of a given gene. The multivariate nature of the approach easily allows both univariate and joint tests of differential expression. Calculations demonstrate the potential for false positive and false negative findings due to variation in tag formation probabilities across samples when testing for differential expression. Tag-based transcriptome sequencing libraries consist of a collection of short sequences of DNA called tags, along with tabulated counts of the number of times each tag is observed in a sample. These observed tag counts represent a sample from a much larger pool of mRNA tags in a tissue or organism. In the past, Serial Analysis of Gene Expression (SAGE) was the most commonly used technology for generating libraries of tag counts. SAGE libraries have been used to address a number of biological questions, including estimating transcriptome size and estimating the density of relative expression levels.
Most frequently, SAGE was used to assess differential expression across cells from different tissues or strains, or cells grown under different experimental conditions. Next-generation methods such as Digital Gene Expression (DGE) tag profiling now provide similar tag-count data. As has been repeatedly noted in both SAGE and DGE studies, multiple tags may be generated from a single mRNA transcript. While it is intuitively sensible to analyze the prevalence of an mRNA by aggregating multiple tags derived from it, such tags are often analyzed separately. The goal of a SAGE experiment is to sample the mRNA pool of the transcriptome. In both technologies the restriction enzymes used to create the tags cut at very specific sites within the cDNA; the anchoring enzyme (AE) cleaves at its specific recognition site. While some genes will have no AE sites, many genes have multiple sites at which the AE could cleave the cDNA. Given that the AE is expected to act in a site-independent manner, a single cDNA molecule can be cut by the AE in multiple places. However, because only the fragment of cDNA that is attached to a streptavidin bead is retained during the experimental process, the site closest to the bead (i.e. the 3'-most site) that is actually cleaved is the only site that can lead to an observable tag. The probability that the tag derives from the j-th AE site, counted from the 3' end, is (1 - p)^(j-1) p, where p is the cleavage probability. This corresponds to the probability of no cleaving at sites 1 to j - 1, and a cleaving at site j. Note that this probability is independent of what happens at the AE sites 5' of the j-th site. The fact that the expected distribution of tags varies with the AE cleavage probability p can be used to estimate p from the library of tag counts. It follows that the index of the site generating a tag from the i-th category of mRNA follows a geometric distribution, and the probability of generating any of the possible tags from the i-th category is obtained by summing (1 - p)^(j-1) p over all non-ambiguous tagging sites (Eq. 1).
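The geometric site probability and its sum over unambiguous sites can be sketched directly; this is an illustrative rendering of Eq. (1) as described in the text, with function names my own.

```python
def site_tag_probability(j, p):
    """Probability that the observed tag comes from the j-th AE site counted
    from the 3' end (j = 1 is the 3'-most site): no cut at sites 1..j-1,
    then a cut at site j, i.e. (1 - p)**(j - 1) * p."""
    return (1 - p) ** (j - 1) * p

def tag_formation_probability(unambiguous_sites, p):
    """phi_i: total probability that transcript i yields any observable,
    uniquely attributable tag; the sum runs over its non-ambiguous site
    indices (counted from the 3' end)."""
    return sum(site_tag_probability(j, p) for j in unambiguous_sites)
```

With p = 0.5 and unambiguous sites at positions 1 and 2, phi is 0.5 + 0.25 = 0.75; a quarter of this transcript's copies produce no usable tag at all.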
We refer to this quantity as the p is known to a very high precision. The variation in the tag formation probability between genes stems from variation in the number of AE sites as well as the number of ambiguous tagging sites. Given p, the only uncertainty in the estimate of ϕi results from ambiguous tags; this problem may be partially eliminated through techniques that generate longer tags; see [ϕi to be known constants allows us to accurately estimate this sampling bias and correct the inferences made from tag libraries.t = represent the aggregated observed tag counts for individual genes. Here l is the number of genes that contain at least one unambiguous AE site and ti is the total number of observed tags that can uniquely be attributed to an AE site in gene i. .) It is natural to view this vector as a sample from a multinomial distribution with l categories, t ~Multi where θ = , θi represents the frequency of tags from gene i in the tag pool, and ϕi distorts the abundances of observed tags, it is clear that θi represents a biased estimate of the true proportion of mRNA from gene i in the overall pool. Unfortunately, this true proportion of mRNA, denoted mi, is clearly the quantity of interest. [θi to mi with the following simple expression:Before the tag counts are considered, we assume that the data is processed to retain only informative tags, i.e. all ambiguous or orphan tags are removed, as is standard practice. Let the vector nterest. relates ϕi is the known tag formation probability based on Eq. (1) and the proportions mi are positive and sum to one. The bias corrected likelihood model is then,where mi is straightforward due to the fact that the likelihood function is maximized at the same location irrespective of the way the model is parameterized. Consider the observed sample proportion ti/ttot. The fact that the mi are positive and must sum to one along with Eq. 
(2) force the maximum likelihood estimates of mi to be proportional to ti/ϕi, renormalized to sum to one. While computation of the maximum likelihood estimates (MLE's) is relatively straightforward, constructing confidence intervals for m is difficult. The existence of the normalization term Σj mj ϕj in the denominator of the left hand side of Eq. (4) makes computation of Fisher's information matrix taxing, particularly for the large dimensional vectors encountered when working with SAGE data (i.e. l is of the order 10^3 to 10^4). In addition, because the number of categories is generally within an order of magnitude of, or possibly even greater than, the number of tags sampled, most categories have either zero or only a few observations. These relatively small counts call into question inferences such as chi-squared tests, which are based on asymptotic approximations. As an alternative, we consider a Bayesian approach to the problem. The constraints lead naturally to the assumption of a Dirichlet prior on m, where αi is the parameter describing any prior information we have on the value of mi. Combining this prior distribution with the likelihood function given in Eq. (3) leads to a posterior distribution proportional to their product. However, a number of difficulties severely limit the range of prior parameters that can be used analytically, i.e. those priors with appreciable weight relative to the observed sample sizes. Given the bias in the observed data, the Gibbs sampling based approaches for inference on the posterior of the mRNA proportions introduced here are more flexible, and require fewer assumptions than analytic techniques. The methods are also highly computationally efficient. While the assumption of independent cleaving resulted in a geometric model, it is important to note that alternative models of tag formation can be substituted into the Bayesian algorithms without making any further adjustments to the model.
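Since the observed tag frequencies are distorted by ϕi (θi proportional to mi ϕi), the corrected MLE simply divides each count by its tag formation probability and renormalizes. A minimal sketch, assuming the ϕi are known constants as in the text; the function name is mine.

```python
def corrected_mle(tag_counts, phi):
    """Bias-corrected MLE of the mRNA proportions m: because the observed
    tag frequency satisfies theta_i = m_i*phi_i / sum_j m_j*phi_j, the MLE
    is proportional to t_i / phi_i, renormalized to sum to one."""
    raw = [t / f for t, f in zip(tag_counts, phi)]
    total = sum(raw)
    return [r / total for r in raw]
```

Two genes with identical observed counts but ϕ of 1.0 and 0.5 receive corrected proportions of 1/3 and 2/3: the harder-to-tag gene is inferred to be twice as abundant.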
Joint estimators of the relative proportions of individual tags in SAGE experiments, based on posterior means from a conjugate Dirichlet-Multinomial model, were explored previously. Using flat priors, it was noted that while the posterior mean provides improved average accuracy with respect to sums of squared errors across all categories, estimates for categories where large counts are observed were shrunk excessively in order to boost the estimated probabilities of cells with few or no counts. Due to the massive number of categories and the extreme imbalance in observed frequencies, the estimates for frequently observed genes tend to consistently underestimate the true proportions, while the reverse is true of genes with very low expression proportions. This observation highlights the main weakness of shrinkage estimators such as the posterior mean: while they perform better on average across categories, they may perform poorly on particular categories that may be of primary interest. Three hierarchical approaches are proposed to model aggregate tag counts while taking into account the tag formation probability. The Dirichlet-Poisson-Binomial model (DPB) extends an earlier method. In the MD model, if the number of unconverted transcripts r were known, the data could be modeled as a standard Dirichlet-Multinomial model; by proposing a Poisson distribution for r, posterior inferences can be made. Detailed descriptions of these algorithms are given in the Methods section. Gibbs samplers based on these algorithms all produce posterior samples of m, the vector of relative proportions of each gene in the mRNA population. In addition, DPB produces samples representing N, the hypothetical total number of mRNA transcripts in the population, while MD produces posterior samples of r, the number of unconverted transcripts. These quantities may be relevant in studies of the number of unique transcripts. A Dirichlet prior distribution was assumed for the proportion vector m in all three algorithms. Two parameter vectors α = (α1, ..., αl) were tested.
The flat prior sets αi = 1 for all i while the tub prior sets αi = 1/l. The flat prior effectively assumes that a hypothetical previous experiment produced 1 tag for each category. As the name suggests, the flat prior represents a uniform density over the parameter space. The tub prior assumes that the prior information is equivalent to that of a single tag in total. Its density takes a bowl or tub shape, and the preponderance of mass along the edges forces the estimates of mi to be small unless significant counts are observed for that gene. To analyze performance, we applied each algorithm to data collected from the log-phase growth of S. cerevisiae. Here, l = 6,178 is the number of genes in the sample. For each of the three methods, we compared posterior means from the Gibbs sampler to unadjusted multinomial MLE's, corrected MLE's based on Eq. (4), and the analytically derived joint posterior mode. Estimates under the flat prior are plotted versus rank for the 20 genes with the largest tag counts. Equations (3) and (5) dictate that the corrected MLE and the posterior mode coincide perfectly in this case. Both posterior means and posterior modes based on the DPB bias correction deviate as much as 40% from the unadjusted MLE for many of the most frequently tagged genes, indicating the importance of the correction. Posterior modes and means follow nearly identical trends, but with the modal estimates being uniformly larger. A second figure shows the tub prior in the DPB case. Here one can see the corrected MLE and posterior mean now coincide up to simulation precision (≈ 1%). As with the flat prior, all estimators generated using the tub prior deviate significantly from the unadjusted MLE's. Graphical displays based on the results of the DMB and MD algorithms had very similar appearances and so are not shown.
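The effect of the two priors on shrinkage can be seen in a toy Dirichlet-multinomial posterior mean (the ϕ correction is deliberately omitted here, purely to isolate the prior's influence); the example counts and names are my own illustration.

```python
def dirichlet_posterior_mean(tag_counts, alpha):
    """Posterior mean of a Dirichlet(alpha)-multinomial model:
    (t_i + alpha_i) / (t_tot + sum(alpha))."""
    total = sum(tag_counts) + sum(alpha)
    return [(t + a) / total for t, a in zip(tag_counts, alpha)]

l = 4
counts = [100, 5, 0, 0]
mle = [t / sum(counts) for t in counts]
flat = dirichlet_posterior_mean(counts, [1.0] * l)     # alpha_i = 1
tub = dirichlet_posterior_mean(counts, [1.0 / l] * l)  # alpha_i = 1/l
# The flat prior shrinks the abundant category toward the zero-count
# categories far more than the tub prior does.
```

Here the abundant gene's MLE is about 0.952, the flat-prior posterior mean drops to about 0.927, and the tub-prior mean stays near 0.946, illustrating why the tub prior is preferred for large-count categories.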
However, meaningful differences in posterior means generated by the methods and across the two priors do exist and are discussed in the following section. In order to ensure that autocorrelation did not adversely affect parameter estimates, convergence analysis was performed. Posterior samples for the proportion m3, which corresponds to the open reading frame YAL003W, were investigated for all approaches. This gene was observed in 32 of the 14,285 tags and was selected randomly among the set of genes with medium to large tag counts. Autocorrelations of the sample sequences were tested at various lags. The results suggest that the flat prior may generate slightly larger correlations than the tub, but in both cases autocorrelations are very low between samples with lags as small as 10. We also examined the total transcript count N in the DPB algorithm and the number of unconverted transcripts r from the MD method. When using the flat prior, high correlation among posterior simulations of N exists at all tested lags, leading to unreliable inferences. Interestingly, the tub prior leads to negligible correlations among samples at all measured lags. Using the MD method, the autocorrelation in samples of r decreases to 0 after 50 simulations when using the flat prior; samples are uncorrelated at lags as small as 10 when using the tub prior. In terms of computational efficiency, with l = 6,178, 5 million samples from the DPB algorithm required 21.7 and 19.3 minutes with the flat and tub priors respectively. The DMB was slightly slower, taking 30.7 and 25.2 minutes to run. Finally, the MD method completed 5 million samples in 20.2 and 19.3 minutes, respectively. Because the vector m is usually the quantity of interest, far fewer samples would be needed in practice, which would significantly reduce the computation times for the algorithm. In order to avoid any effects due to autocorrelation, our real data experiments stored only the last of every hundred samples for analysis.
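The lag-based diagnostic and the thinning scheme described above can be sketched as follows; the toy chain and function name are my own illustration, not the authors' diagnostics code.

```python
def autocorrelation(samples, lag):
    """Sample autocorrelation of a scalar MCMC chain at the given lag."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    if var == 0 or lag >= n:
        return 0.0
    cov = sum((samples[i] - mean) * (samples[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

chain = [1.0, -1.0] * 50   # toy chain with strong negative lag-1 dependence
thinned = chain[::100]     # keep every 100th draw, as in the text
```

On a well-mixing chain the autocorrelation should be near zero by lag 10, which is what motivates keeping only every hundredth draw for the real-data analyses.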
Inference for means was based upon the final 1000 of these stored samples, drawn after an extensive burn-in period of 400,000 simulations. To assess inferential quality, marginal 95% posterior intervals were constructed for the mi from simulated libraries. The proportion of times the computed interval covers the true known simulation proportion mi is then recorded. Accuracy is assessed by how close the observed coverage percentage is to the desired level of 95%. The average length of the 95% posterior intervals was considered as a second criterion of inferential accuracy, but no meaningful differences in average interval length were observed among the three methods. Although all methods produce similar trends in their estimates, quantitative differences do exist. It is also important to quantify the effects of the prior distributions on the analysis. Because inferential quality is often more relevant in scientific investigations of relative and differential expression than point estimates, we contrast the inferential quality of the three methods by examining coverage across the range of mi values for library sizes l = 6,178 and l = 1,000. One clear conclusion is that under the simulation protocol and the priors considered, the DPB method has the best general performance across the range of mi values for both priors. In contrast, the DMB and MD methods, which have very similar performance with almost identical coverage values across all categories and priors, offer poor coverage for medium and large categories under the flat prior. All methods provide accurate coverage under the flat prior for intermediate values of mi (i.e. 0.0000167 to 0.000912), which represent the bulk of the genes used in our simulations. Below this range, all three methods have a tendency to overestimate the true mi. This is due to the fact that the magnitude of αi for the flat prior is on the same order as the observed tag counts, and leads to poor coverage performance.
In contrast, all three methods tend to underestimate the true mi value in the abundant categories. This shrinkage effect was discussed earlier; see the table. Shrinkage effects become more pronounced as the number of genes l increases, further degrading the quality of the estimates. This shrinkage is most acute for the DMB and MD methods applied to larger library sizes, which perform poorly for the most abundant segments. As noted above, the flat prior works well across a wide range of values but performs poorly in the extremes. The figures show that the tub prior works well for the majority of genes where the flat prior fails. For example, the tub works well in large mi categories because it shrinks the estimates very little. Its weakness: it adds almost no mass to categories with small positive values of mi and, if no counts are observed, intervals are produced with both endpoints near 0. The figures illustrate this for genes with small mi. Focusing on the left panel, flat-prior estimates of a small mi systematically underestimate its true value, leading to upper interval endpoints that are systematically too small; the tub prior on the right corrects this phenomenon well. For genes with no observed counts, the flat prior leads to estimates of mi being drawn well above their actual values by reversing the shrinkage for small mi categories, which leads to reasonably accurate coverage. In the right panel we see that the tub prior fails miserably in this case: intervals for genes with 0 counts collapse close to zero, systematically underestimating the true mi values. Because these methods are very efficient to compute, both can be applied to analyze a single library, with the tub prior being relied upon for larger count categories while the flat prior is used for the remaining categories. Tag-based transcriptome sequencing such as SAGE is most commonly used to identify differential expression of genes across groups of libraries.
It is important to point out the consequences of differences in tag formation probabilities in such studies, along with the advantages of the proposed methods in this context. Because the anchoring enzyme (AE) cutting probability p is experiment dependent, it is sensible to compare two experiments A and B. Letting pa and mia represent the cutting probability and relative proportion of mRNA for the ith tag in experiment A, and similarly for B, the theoretical odds ratio for the tag proportions across libraries A and B is approximately Ω multiplied by the ratio of tag formation probabilities ϕja/ϕjb, where ϕja = pa(1 - pa)^kj for tag j in experiment A, Ω is the true odds ratio, and kj is the number of AE positions upstream from which the tag was derived. The approximation holds when both pa and pb are large (i.e. > 0.5) and the ith expression level mi is small in an absolute sense for both experiments, conditions that are almost universally met in SAGE studies. If kj = 0, so that ϕja = pa, then the odds simply reduce to (mia(1 - mib))/(mib(1 - mia)), the exact odds ratio desired. However, suppose all tags are allowed and the experimenter tests differential expression for the ith tag, which derives from an AE site 1 position upstream from the 3' end. The tag formation probability now takes the form p(1 - p). Assuming cutting probabilities pa = .92 and pb = .96, we see that our estimate of the odds ratio is overestimated by a factor of ~1.9, potentially leading to a false positive diagnosis. In fact, this phenomenon could result in a sizeable proportion of both false positive and false negative conclusions. If SAGE analysis were restricted to 3'-most AE sites and all data from upstream tags were eliminated from the library, ϕi would be constant and sampling bias would be eliminated by analysis using odds ratios (logistic regression coefficients are estimated log-odds ratios). Once again, the caveat to this approach is that a significant proportion of the observed tags may be discarded.
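The ~1.9 inflation factor in the text's example can be verified directly from the tag formation probabilities; this is a worked check of the stated numbers, with the function name my own.

```python
def tag_formation_prob(k, p):
    """phi for a tag derived k AE positions upstream of the 3'-most site:
    p when k = 0, p*(1 - p) when k = 1, and so on."""
    return p * (1 - p) ** k

pa, pb = 0.92, 0.96   # cutting probabilities from the text's example
inflation = tag_formation_prob(1, pa) / tag_formation_prob(1, pb)
# 0.92*0.08 / (0.96*0.04) = 0.0736 / 0.0384, i.e. roughly 1.9
```

For a tag at k = 0 the ratio would be pa/pb, close to one; it is the (1 - p) factor for upstream tags that magnifies small differences in cutting probability into a near-twofold distortion of the odds ratio.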
The need for the adjustments we propose stems from an effort to accurately combine multiple tags arising from the same mRNA, and offers the ability to utilize a much larger fraction of the data collected, leading to more power for differential testing. If only data derived from the 3'-most tagging site of each gene is analyzed, ϕi is constant and no correction is needed. Monte-Carlo sampling mechanisms such as DPB can easily be applied to evaluate differential expression across groups of libraries. In the case of a comparison for gene i across two libraries, individual Monte Carlo samples from each library can be differenced, mi1 - mi2, and the distribution of these differences used for inference. Importantly, our method requires no asymptotic conditions to be met in order to guarantee the validity of the results. The Monte-Carlo sampling nature of the algorithms presented here and their computational efficiency lead to several advantages over recent methods. This work focuses on a general biasing mechanism that may have a significant impact on the interpretation of libraries generated by SAGE and closely related methods. Beyond drawing attention to the consequences of incomplete digestion, two main contributions are presented. First, we derive and analyze three Bayesian algorithms that provide corrected posterior inference for relative proportions of genes or tags in the overall population. Second, through calculations we deduce the consequences of tag formation bias on testing of differential expression and discuss how the algorithms derived here can be used to correctly assess differential expression. This work complements the earlier work by providing bias-corrected inference. Of the two priors tested, the flat prior was effective over a wide range of categories but resulted in excessive shrinkage for the most abundant categories. For a fixed number of sampled tags, this shrinkage effect becomes more severe as one increases the number of categories.
Hence, aggregating tags provides a second advantage, since it greatly reduces the number of categories used in an analysis. The proposed tub prior provides little shrinkage, and posterior intervals based on this prior are very accurate for abundant categories. In practice, both priors should be applied and inference should be based on the samples with the larger average. The proposed methods are appropriate when incomplete digestion is the main source of multiple tag generation. They may be applied both to tag level data and to inference at the gene level based on aggregated counts. If possible, aggregating tags will provide better inferences due to larger available counts. Of the three approaches, the DPB method provided the most accurate confidence intervals over the widest range of relative proportions and we recommend its exclusive use. In addition, the choice of prior also plays an important role in the effectiveness of the algorithm. We note that the sampling algorithms presented are independent of the model for tag formation probability, and models other than the geometric model discussed can easily be substituted. It should also be possible to extend the proposed methods using a mixture methodology. Correction of tag formation bias may play a significant role in improving power in differential expression studies based on SAGE or related tag libraries. To ensure valid testing, one must either include bias correction or eliminate non-3' tag counts from the analysis. Furthermore, the ability to effectively test for changes in groups of genes (gene networks) is now possible due to the multivariate nature of the posterior distribution. This advantage may become significantly more important as more elaborate studies and techniques are developed in the future. Due to the overall simplicity of genome architecture, alternative splicing is generally not a problem when analyzing gene expression in
Due to the overall simplicity of their genome architecture, alternative splicing is generally not a problem when analyzing gene expression in S. cerevisiae and other microorganisms. However, alternative splicing is widespread in most multicellular organisms. Conceptually, the problems caused by alternative splicing are similar to those caused by tag ambiguity between different genes. For example, in the case of alternative splicing, the challenge is in determining the mRNA isoform from which each tag originates. Similarly, in the case of ambiguous tags, the challenge is in determining the gene from which each tag originates. Thus it is plausible that data-interpretation problems caused by both alternative splicing and ambiguous tags could be overcome by extending our model to include multiple sources of a single tag. Extending our model in such a way should allow us to make inferences about the distribution of mRNA isoforms produced by a gene based on the distribution of tags experimentally observed. We explicitly consider three strategies to simulate from the posterior distribution given by Eq. (5). The algorithms discussed are tested for inferential accuracy using simulated data as well as being applied to a published SAGE data set from the yeast S. cerevisiae. All simulations described below were computed using the R-environment (Version 2.6.2) on an 8-core Intel Xeon 2.66 GHz desktop server running Linux Ubuntu 8.04. R-scripts that implement the Gibbs sampling algorithms described below are provided (see Additional file). A population of N mRNA transcripts exists within the sampled cells. The total size N is random across samples and follows a gamma distribution that is rounded down to the nearest integer. The choice of gamma here is convenient due to its role as a conjugate prior to the Poisson distribution. Given N, the relative proportions of the categories of mRNA, mi, are unknown and assumed to follow a Dirichlet distribution with prior parameters (α1, ..., αl).
The αi will typically be identical, resulting in objective inference. The DPB approach, inspired by Casella and George, assumes, given N and mi, that the actual number of mRNAs of a certain type, gi, i = 1, ..., l, extracted from the group of cells satisfies a Poisson distribution with mean μ = N·mi. Finally, the restriction-enzyme process generates a tag count ti for a particular mRNA transcript in a binomial fashion based upon the tag-formation probability ϕi. Hence, the hierarchical data-generating mechanism follows Ti ~ Binomial(gi, ϕi), gi ~ Poisson(N·mi), m ~ Dirichlet(α1, ..., αl). We refer to this model as the Dirichlet-Poisson-Binomial (DPB) model. Because cells contain mRNA transcripts from thousands of different genes, the probability of seeing any particular type is low; it is therefore logical to assume, given N and mi, a Poisson count for each category. A first weakness of this model is that the sum of all counts ∑gi may not add up to the total count N. However, this approach allows us to infer the total population size N. Together, the elements described above give a joint posterior distribution. From this expression, we can deduce the set of full conditional distributions, where δ denotes the collection of unknowns. Gibbs sampling can be implemented by iteratively simulating from each conditional after replacing any unknown random quantities in the conditioning set with simulation values from the previous iteration. The hierarchical model considered here is not identical to Eq. (3), but depends upon prior parameters (γ1, γ2), which affect the mode of the posterior distribution. One possibility for choosing values of these parameters is to minimize the distance between the approximate analytical mode discussed earlier and the mode of this posterior. In our simulations we used a shape parameter of 16438.81; this was chosen intentionally to determine whether the posterior would converge to a reasonable estimate of N. For the next model, the total number of mRNA in the sample, N, is assumed to follow a Poisson distribution with mean λ.
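A minimal Gibbs sampler for the DPB model can be written directly from the conjugacy structure above. This Python sketch is not the authors' R implementation (SAGEGibbs.R); the full conditionals it uses (a shifted Poisson for gi by binomial thinning, a Dirichlet for m, and a Gamma for N under a Gamma(a0, b0) prior) follow from standard conjugacy arguments, and the example data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def dpb_gibbs(t, phi, alpha, a0, b0, n_iter=2000, burn=500):
    """Gibbs sampler sketch for the Dirichlet-Poisson-Binomial model:
    t_i ~ Binomial(g_i, phi_i), g_i ~ Poisson(N*m_i), m ~ Dirichlet(alpha),
    with a Gamma(a0, b0) prior on the population size N.
    Full conditionals (standard thinning/conjugacy):
      g_i - t_i | . ~ Poisson(N * m_i * (1 - phi_i))
      m | .         ~ Dirichlet(alpha + g)
      N | .         ~ Gamma(a0 + sum(g), rate = b0 + 1)
    """
    t = np.asarray(t, dtype=float)
    l = len(t)
    m = np.full(l, 1.0 / l)      # initial proportions
    N = a0 / b0                  # initialize N at its prior mean
    keep = []
    for it in range(n_iter):
        g = t + rng.poisson(N * m * (1.0 - phi))        # latent pre-tag counts
        m = rng.dirichlet(alpha + g)                    # proportions
        N = rng.gamma(a0 + g.sum(), 1.0 / (b0 + 1.0))   # population size
        if it >= burn:
            keep.append(m)
    return np.array(keep)

# Tiny synthetic example (values are illustrative, not from the paper).
true_m = np.array([0.5, 0.3, 0.2])
phi = np.array([0.6, 0.3, 0.9])            # tag formation probabilities
N_true = 20000
g = rng.poisson(N_true * true_m)
t = rng.binomial(g, phi)                   # observed tag counts
samples = dpb_gibbs(t, phi, alpha=np.ones(3), a0=2.0, b0=2.0 / N_true)
print("posterior means:", samples.mean(axis=0).round(3))
```

The posterior means should land close to the true proportions (0.5, 0.3, 0.2) despite the unequal tag-formation probabilities.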
The Dirichlet-Multinomial-Binomial (DMB) approach is arguably more faithful to the sampling mechanism inherent in SAGE than DPB. Given the mRNA population size N, the vector g of counts for each category of mRNA prior to tag formation follows a multinomial distribution whose probabilities are the mRNA proportions mi, which in turn are assumed to follow a Dirichlet distribution. With ϕ = (ϕ1, ϕ2, ..., ϕl) the vector of tag-formation probabilities, and letting δ denote the collection of unknowns, the full conditional distributions can again be derived. This formulation is more natural in the sense that it restricts the pre-tagging counts to sum to the population total N. One drawback is that the observed data provide no information for posterior inference about the population size N, and iterative maximization for mode calculation is not viable due to the discrete nature of the multinomial conditional distribution. Instead of re-normalizing the probabilities as θi = (miϕi)/(∑miϕi) and basing inference on the posterior of Eq. (5), a more straightforward approach uses the concept of missing data, which is often associated with the EM algorithm. We augment t with an extra category that represents any cDNA that is not converted to a tag. This count, r, is not observed, but if a distribution such as the Poisson is proposed, we may consider a Bayesian approach. Given the value of r, and assuming a Dirichlet prior on the unknown initial proportions m ~ Dirichlet(α1, ..., αl), the resulting posterior distribution for m is a Dirichlet distribution with l + 1 categories, compared with the l categories in the prior distribution. Exact computation of the conditional expectations E(mi|r) is now straightforward using the formula for the mean of a Dirichlet distribution.
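The re-normalization relation θi = miϕi/∑mjϕj mentioned above can be inverted to give a quick plug-in correction for tag-formation bias: divide the observed tag proportions by ϕi and renormalize. A hedged sketch (illustrative values only; the samplers in the text give full posteriors rather than this point estimate):

```python
import numpy as np

def debias_proportions(t, phi):
    """Invert theta_i = m_i*phi_i / sum_j(m_j*phi_j).
    Observed tag proportions over-represent categories with high tag
    formation probability; dividing by phi_i and renormalizing recovers
    the underlying mRNA proportions (a plug-in point estimate only).
    """
    t = np.asarray(t, dtype=float)
    m = (t / t.sum()) / np.asarray(phi, dtype=float)
    return m / m.sum()

# Round-trip check with made-up numbers: bias m, then undo the bias.
m_true = np.array([0.5, 0.3, 0.2])
phi = np.array([0.9, 0.3, 0.6])
theta = m_true * phi / (m_true * phi).sum()   # expected tag proportions
t = np.round(theta * 1_000_000)               # idealized noise-free counts
print(debias_proportions(t, phi))             # ~ [0.5, 0.3, 0.2]
```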
Hence, the marginal expectation of mi is obtained by summing the expectation over values of r. If r ~ Poisson(μ), then the posterior distribution can be written in closed form. Gibbs sampling is also available for this problem and extends inference to Bayesian interval estimates, with full conditional distributions that are straightforward to sample. In order to apply the Gibbs sampler, a value for the prior mean μ of r must be selected. A logical mean for the Poisson is ttot·∑imi(1 - ϕi)/∑imiϕi, since ∑mi(1 - ϕi) = 1 - ∑miϕi is the probability that an mRNA is not converted into a tag. Equation (4) provides a useful estimate of ∑miϕi in the case of a flat prior. Like the DPB algorithm above, the missing-data algorithm also admits an iterative optimization procedure to compute the posterior mode. The mode of the Dirichlet conditional is (ti + αi - 1)/(∑i(αi + ti) + r - l), while the mode for the Poisson is the mean rounded down to the nearest smaller integer. For example, considering a flat prior for m along with a prior mean of μ = 10, 100 for r, the posterior mode of the missing-data model is nearly identical to the exact analytical mode given earlier. Simulation of libraries was based upon the proportions estimated from the aggregated data, with a geometric model used to compute the individual probabilities ϕi. The number of cleaving sites followed a Poisson distribution with mean 2, with 1 count added to each category; this ensured that each "gene" has at least one cleaving site. The cleavage probability was set at p = 0.55 based on published estimates. For each combination of library size l and prior, 1,000 libraries tj, each containing n = 15,000 tags, were simulated. For each simulated library, each of the three proposed algorithms was used to generate 20,500 posterior sample vectors mjk, k ∈ 1, ..., 20,500. Every 20th sample was collected after a burn-in period of 500 samples, resulting in 1,000 Monte Carlo samples of m for each method. Samples were then used to generate mean, median and 95% marginal posterior intervals for each of the l gene categories.
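The simulation protocol can be paraphrased in a few lines. This Python sketch makes two assumptions not spelled out in the text: the gene-level ϕi is taken as 1 - (1 - p)^k under the geometric model (probability that at least one of k sites is cleaved), and a simple de-biased Dirichlet posterior stands in for the DPB/DMB samplers; library and sample counts are reduced for speed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Geometric tag-formation model: with per-site cleavage probability p, the
# j-th most-3' site yields the tag with probability p*(1-p)**(j-1), so an
# mRNA with k sites yields some tag with probability 1 - (1-p)**k.
l, p = 20, 0.55
k = rng.poisson(2.0, size=l) + 1          # cleaving sites per gene (>= 1)
phi = 1.0 - (1.0 - p) ** k                # gene-level tag formation prob.
m_true = rng.dirichlet(np.ones(l))        # true mRNA proportions

n_lib, n_tags = 200, 15_000
hits = np.zeros(l)
for _ in range(n_lib):
    theta = m_true * phi / (m_true * phi).sum()      # biased tag proportions
    t = rng.multinomial(n_tags, theta)               # one simulated library
    # Stand-in posterior: flat Dirichlet on tag proportions, de-biased by
    # phi (the paper's DPB/DMB samplers would be used here instead).
    post = rng.dirichlet(1.0 + t, size=1000) / phi
    post /= post.sum(axis=1, keepdims=True)
    lo = np.percentile(post, 2.5, axis=0)
    hi = np.percentile(post, 97.5, axis=0)
    hits += (lo <= m_true) & (m_true <= hi)

coverage = hits / n_lib          # per-gene coverage of the 95% intervals
print("mean 95% interval coverage:", round(coverage.mean(), 3))
```

Coverage near the nominal 95% indicates well-calibrated intervals; systematic under-coverage would flag the kind of shrinkage problems discussed for the flat prior.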
Coverage percentages for the ith gene category are computed by finding the proportion of times that the interval for category i from a particular method contains the true proportion mi across the 1,000 simulated libraries tj (Figure). Using the above protocol, coverage was evaluated for each combination of library size and prior. RLZ devised and implemented the algorithms, computed the experiments and wrote the majority of the manuscript. MAG developed the tag-formation model, prepared the real data, wrote parts of the manuscript, and provided subject-matter expertise and computational resources. WMB helped edit the manuscript and construct graphical displays. AA assisted in development of the algorithms. All authors have read and approved this manuscript. SAGEGibbs.R: this file contains a script which implements the Gibbs samplers described in the paper; the script is written in the R-environment."} {"text": "Extramedullary hematopoiesis (EMH) is defined as the presence of hematopoietic stem cells, such as cells of the erythroid and myeloid lineages plus megakaryocytes, in extramedullary sites like liver, spleen and lymph nodes, and is usually associated with either bone marrow or hematological disorders. Mammary EMH is a rare condition in both human and veterinary medicine and can be associated with benign mixed mammary tumors, similar to that described in this case. Hematopoietic stem cells were found in a benign mixed mammary tumor of a 7-year-old female mongrel dog presenting with a nodule in the left inguinal mammary gland. The patient did not have any hematological abnormalities. Cytological evaluation demonstrated two distinct cell populations, composed of either epithelial or mesenchymal cells, sometimes associated with a fibrillar acidophilic matrix, together with megakaryocytes, osteoclasts, metarubricytes, prorubricytes, rubricytes, rubriblasts, promyelocytes and myeloblasts.
Histological examination confirmed the presence of an active hematopoietic bone marrow within the bone tissue of a benign mammary mixed tumor. EMH is a rare condition described in veterinary medicine that can be associated with mammary mixed tumors. Its detection can be associated with several neoplastic and non-neoplastic mammary lesions, e.g. osteosarcomas, mixed tumors and bone metaplasia. Hematopoiesis in the adult animal is restricted to the marrow cavity of flat bones and the epiphyses of long bones; the term extramedullary hematopoiesis therefore refers to hematopoietic activity outside these sites. Mammary EMH is a rare condition and is generally associated with non-neoplastic hematopoietic masses in both women and bitches. However, the presence of hematopoietic activity can also be seen as an incidental finding associated with mammary neoplasia. Cytological examination has been used since the 1960s to investigate mammary lesions in women, and it is nowadays largely accepted as a screening test for such lesions in veterinary medicine. It is a minimally invasive and low-cost diagnostic method which allows differentiation between non-neoplastic and neoplastic mammary lesions. In addition, it can accurately predict the malignant potential of mammary tumors if performed by an experienced pathologist. Based upon prevailing cytological characteristics, canine mammary tumors can be classified as epithelial, mesenchymal, or mixed type according to their origin. Proliferation of epithelial cells from ducts and/or lobes, myoepithelial cells, and mesenchymal cells, in addition to cartilaginous, bone or myxoid fibrous tissue, is sometimes seen in a variety of mammary neoplasms in dogs. These compound tumors are classified under the designation of mixed tumors. Even though tumor cytological examination may show considerable tissue heterogeneity, hematopoietic cells are an unexpected finding. A 7-year-old intact female non-purebred dog weighing 8.0 kg was admitted at the School of Veterinary Medicine and Animal Science of the Univ. Estadual Paulista (UNESP), Botucatu, São Paulo, Brazil.
On clinical examination, a subcutaneous mass measuring 0.7 × 0.9 cm was noted in the left inguinal mammary gland, with a 24-week clinical evolution. The lesion was firm with a smooth surface, painless, and showed neither ulceration nor deep muscle adhesion. Thoracic radiography, hematological and blood biochemistry analyses, and cytological evaluation of the lesion were performed in order to achieve a diagnosis. For cytological analysis, samples were obtained from two different areas by fine-needle aspiration using a 10 ml syringe. The collected material was spread on five histological slides and then fixed with methanol P.A. and ethanol 95%. Three slides were stained with Giemsa and two with Papanicolaou stain. Samples were examined under light microscopy and classified according to Allen et al. Hematological parameters and urea, creatinine, alkaline phosphatase, aspartate aminotransferase and alanine aminotransferase values were within the reference intervals considered normal for the species (Table). Thoracic radiographs, the cytological evaluation (Figures 1 and 2) and histopathology (Figure) were assessed; histopathology revealed an encapsulated mass. Spontaneous mammary tumors in bitches are very similar to those in women, which makes them a good model for comparative studies. The mixed type, either benign or malignant, is one of the most common mammary gland tumors in bitches. Cytological examination usually shows that most pleomorphic adenomas present groups of epithelial, myoepithelial, and bone tissue cells, as well as a chondromyxoid stroma; similar findings were observed in the present case, according to the criteria of Allen et al. The presence of one of the three lineages of hematopoietic cells outside the bone marrow is sufficient to characterize EMH.
In this context, EMH is generally seen in patients with bone marrow and hematological diseases such as chronic anemia, hematopoietic neoplasia, medullary hyperplasia, suppurative bacterial infections, or cardiorespiratory conditions. Several hypotheses have been suggested to explain mammary EMH. Functional disruptions of bone marrow, e.g. by drug therapy or myelofibrosis, stimulate circulating stem cells to find a favorable environment and differentiate into hematopoietic cells. In this case, no biochemical or hematologic changes that could characterize a functional failure of bone marrow were detected, which could otherwise result in the production of chemical mediators capable of promoting a favorable environment for stem cell implantation. The patient did not suffer from any other disease and was not under current drug therapy, and no previous surgical manipulation of the mass had been attempted. Thus, the pathogenesis of this case is probably related to intrinsic factors of the neoplastic process. As far as we know, there are no reports in the human medicine literature regarding the presence of erythrocytes, myeloid cells, and megakaryocytes in mixed or metaplastic breast tumors and pleomorphic adenomas. In veterinary medicine there is only one report, from Fernandes et al., describing a similar finding. Despite the existence of one previous report, the importance of these findings relies on the possible relationship with advanced medullary disorders. Moreover, mammary EMH is generally asymptomatic and clinically mimics a neoplasia. Sampling multiple different sites of the same tumor mass increases its representativeness and the possibility of an accurate diagnosis, since there is a heterogeneous tissue composition within the same tumor. Hence, the EMH cytological diagnosis is straightforward when hematopoietic cells are present. However, it is important to emphasize that the simple fact of finding these cell types does not allow a final diagnosis, since other neoplastic and non-neoplastic conditions, e.g.
osteosarcomas and bone metaplasia, may present similarly. Despite all that, histopathological examination is always recommended. FG performed the analysis and interpretation of the cytologic and histologic findings, photographed the images and helped to write the manuscript. MMC participated in the clinical research and sampling and helped to write the manuscript. LNM contributed to the analysis and interpretation of the cytology and histology and to the preparation of the manuscript. JRVPL conducted the clinical research and was responsible for collecting samples. NSR is the supervisor responsible for the case report and the review of the manuscript. All authors read and approved the final manuscript."}
In the mouse, the heart arises from two populations of cells referred to as the first and second heart field, respectively; the second heart field corresponds to the Islet-1 (Isl1)- and T-box factor 1 (Tbx1)-expressing cardiac progenitor cell (CPC) population. Cardiac development in Xenopus starts at the onset of gastrulation with the induction of two bilaterally located cardiac progenitor cell populations on either side of the Spemann Organizer. Cells of the Xenopus FHF already express genes indicating cardiac differentiation, such as cardiac troponin (tnni3) and myosin heavy chain (myh6). Upon further development, these cells will later contribute to forming the single ventricle, the two atria and the inflow tract. Cells of the SHF express isl1 and tbx1 as well as bone morphogenetic protein 4 (bmp4) and have been shown to develop into the outflow tract. Mef2 (myocyte enhancer factor 2) proteins belong to the MADS family of transcription factors. Members of the Mef2 family contain a common MADS-box and a Mef2-type domain at their N-terminus. The MADS-box serves as a minimal DNA-binding domain that requires the adjacent Mef2-type domain for dimerization and high-affinity DNA binding. All four vertebrate members, Mef2a, b, c and d, are expressed in precursors of the three muscle lineages as well as in neurons. Each Mef2 gene gives rise to multiple isoforms through alternative splicing that are expressed in different embryonic and adult tissues. Alternative splicing affects four regions of the Mef2 genes: the two mutually exclusive exons a1 and a2, and the short exons β and γ. The function of these isoforms depends on the exon composition after splicing; in particular, the amino acids coded by the γ exon have been shown to function in gene repression. In Xenopus mef2c, three exons are alternatively spliced: two exons corresponding to the mammalian β and γ exons, as well as the so-called δ exon.
No mef2d splice variants were detected in Xenopus. At stage 20, mef2c transcripts were barely detectable in the Mef2-expressing CPC population, whereas mef2d transcripts were strongly detected in CPCs; mef2c is expressed in the myocardium (arrowhead) as well as in the endocardium (arrow) during tailbud stages. For loss-of-function studies we relied on antisense morpholino oligonucleotide (MO) strategies. To test the binding efficiency and functionality of the MOs used, the MO binding sites of Xenopus mef2c and Xenopus mef2d (xmef2d) were cloned into the pCS2+ expression vector in front of and in frame with the GFP gene. The RNAs of these constructs were bilaterally co-injected with either Control MO, Mef2c MO or Mef2d MO into two-cell stage embryos, and GFP expression was monitored at stage 20. Murine Mef2c (mMef2c) and human MEF2D (hMEF2D) are not targeted by the MOs, either because of sequence divergence or due to introduced silent point mutations in the coding sequence (hMEF2D); again, the efficiency was tested as described above. The expression of mMef2c-GFP or hMEF2D-GFP was not blocked by Mef2c MO, and the Mef2d MO did not have an inhibitory effect on hMEF2D-GFP or mMef2c-GFP. Taken together, (I) both MOs block translation from their Xenopus target sequences, (II) the Mef2c MO neither inhibits mMef2c nor hMEF2D translation, and (III) the Mef2d MO does not interfere with mMef2c or hMEF2D expression. In conclusion, RNA coding for murine Mef2c and human MEF2D can be used as gene-specific rescue constructs. MOs were injected into Xenopus embryos to target cardiac tissue; GFP RNA was co-injected as a lineage tracer and to identify correctly injected embryos. Mef2c and Mef2d morphant embryos revealed abnormalities during cardiogenesis, whereas Control MO-injected siblings showed normal cardiac development. Marker gene analyses revealed down-regulation of cardiac genes in morphants, suggesting that their regulation by Mef2 factors is also conserved in Xenopus. In the case of myh6, a recent study identified a conserved Mef2 binding site in its promoter region. For nkx2-5, we did not detect altered expression in the CPC population at stage 20, but we have seen a down-regulation at stage 28.
This might suggest that nkx2-5 is a target gene of Mef2 proteins only at later stages of development, or this observation might point to diverse functions of Mef2 proteins, as discussed in the next section. Transcriptional targets of Mef2 factors that have been identified in different species include alpha cardiac myosin heavy chain, alpha cardiac actin and cardiac troponin I. The γ domain of mMef2c has been shown to function as a transcriptional repressor, affecting Nkx2-5 during early phases of cardiogenesis, whereas this construct inhibits cardiogenesis at later time points. Xenopus cardiogenesis may also be useful as another model suitable to gain further insights into the function of Mef2d during cardiac remodeling in mice, particularly in the reactivation of a fetal gene expression program. Of note, our experiments have additional implications, as Mef2c was used as one of three components in direct reprogramming experiments to generate cardiomyocytes out of cardiac fibroblasts. Xenopus laevis embryos were obtained by in vitro fertilization, cultured and staged according to Nieuwkoop and Faber. The mMef2c and hMEF2D constructs were purchased from ImaGenes GmbH and the open reading frames were subcloned into pCS2+; the MO binding site was altered in hMEF2D by mutagenesis using the QuickChange II Site-Directed Mutagenesis Kit (Stratagene). All constructs were verified by sequencing. The mMef2cγ- deletion construct was generated by inverse PCR using the mMef2cγ+/pCS2+ rescue construct as template and the proof-reading Phusion DNA polymerase (Finnzyme), followed by re-ligation. The primers used for the inverse PCR were: for: 5′-GAC CGT ACC ACC ACC CCT TCG A-3′; rev: 5′-GCT GAG GCT TTG AGT AGA AGG CAG G-3′. For cloning full-length mef2c from heart-enriched explants the following primers were used: Mef2c cloning for: 5′-GTT GGA GCA GAG GGG AAA AT-3′; Mef2c cloning rev: 5′-GGT ATA AGC ACA CAC ACA CTG CA-3′.
Mef2c MO: 5′-CCA TAG TCC CCG TTT TTC TGT CTT C-3′; Mef2d MO: 5′-AAT CTG GAT CTT TTT TCT GCC CAT G-3′. All MOs were purchased from Gene Tools, LLC, OR, USA, resuspended in DEPC-H2O and stored as aliquots at −20°C. For knock-down approaches, we injected the MOs (10 ng) into both dorso-vegetal blastomeres of eight-cell embryos to target the presumptive heart region; GFP RNA was co-injected, and only correctly injected embryos were considered for the experiments. For control injection experiments, the standard Control MO of Gene Tools was used. Control MO: 5′-CCT CTT ACC TCA GTT ACA ATT TAT A-3′. For rescue experiments, mRNA (0.5 ng) was injected together with MO. The binding specificity of the MOs was tested in vivo: the MO binding sites were cloned in frame with and in front of the GFP open reading frame in pCS2+, the indicated RNA and MO were co-injected bilaterally into two-cell stage embryos, and GFP translation was monitored at stage 20 using a fluorescence microscope. Probes for mef2c and mef2d were used as previously described. For whole-mount in situ hybridization (WMISH), the uninjected side served as internal control. Wild-type or injected embryos were fixed at +4°C in MEMFA at the indicated stages. WMISH experiments were performed according to a standard protocol as previously described, with bleaching in H2O2. For sections, stained embryos were embedded in gelatine/albumine overnight at +4°C, sectioned using a vibratome at a thickness of 25 μm, coverslipped, and imaged with an Olympus BX60 microscope. Total RNA was isolated from whole Xenopus embryos at different stages with peqGOLD RNAPure (peqLab) according to the manufacturer's protocol. To analyze mef2c variants in Xenopus cardiac tissue, the anterior-ventral parts of stage 24, 28, and 32 wild-type embryos (heart-enriched regions posterior to the cement gland) were dissected and total RNA was isolated using peqGOLD RNAPure (peqLab). cDNA was synthesized using random hexamers and SuperScript II reverse transcriptase (Invitrogen). PCR was performed with the Phire Hot Start II DNA Polymerase (Thermo Scientific).
Primers for amplification were: gapdh for: 5′-GCC GTG TAT GTG GAA TCT-3′; gapdh rev: 5′-AAG TTG TCG TTG ATG ACC TTT GC-3′; H4 for: 5′-CGG GAT AAC ATT CAG GGT ATC ACT-3′; H4 rev: 5′-ATC CAT GGC GGT AAC TGT CTT CCT-3′; Xenopus mef2c for: 5′-AGA GCG CAC GGA CTA CTG AT-3′; Xenopus mef2c rev: 5′-TCA CCT GTC GGT TAC GTT CA-3′; Xenopus mef2d for: 5′-GCA GCT TTA AAT TCC GCA AG-3′; Xenopus mef2d rev: 5′-CGG TGT CAC TTG GCC TTT AT-3′. Quantitative PCR was performed on RNA isolated from heart-enriched regions using SYBR Green Master Mix (Fermentas) and a Roche LightCycler 1.5. gapdh was used as housekeeping gene, and each sample was analyzed in triplicate. Primer pairs used were: Xenopus gapdh for: 5′-GCC GTG TAT GTG GAA TCT-3′; Xenopus gapdh rev: 5′-AAG TTG TCG TTG ATG ACC TTT GC-3′; Xenopus mef2cγ+ for: 5′-GCACAATATGCCTCATTCAGCC-3′; Xenopus mef2cγ+ rev: 5′-GGAGGAGAAACAGGTTCTGACTTG-3′; Xenopus mef2cγ- for: 5′-TGGCTCAGTTACTGGCTGGCAGC-3′; Xenopus mef2cγ- rev: 5′-TAGTACGGTCTCCCAGCTGGCTGAG-3′; Xenopus mef2d for: 5′-AGA CCT GGC ATC CCT CTC TA-3′; Xenopus mef2d rev: 5′-TTG CGG TTG GTT ATG TTG TT-3′. For protein analysis, 30 embryos injected with RNA were homogenized in 300 µl lysis buffer and incubated on ice for 5 min. Protein samples were cleared at 13,000 rpm at 4°C for 5 min. For removal of lipids, the supernatant was mixed with 50 µl Freon followed by centrifugation at 13,000 rpm at 4°C for 5 min, and the upper, protein-containing phase was collected. The concentration of the protein samples was determined by Bradford assay with BSA as standard. Western blotting was performed according to standard procedures and proteins were visualized using a LI-COR ODYSSEY Imager. Primary antibodies were purchased from Abcam, Santa Cruz and Serotec; secondary antibodies used were IRDye conjugates from LI-COR. The genomic structure and chromosomal organization of Mef2a, b, c and d in human, mouse and X.
tropicalis were compared using NCBI and Xenbase GBrowse for synteny analysis. Data were obtained from at least three independent experiments and analyzed with the statistical program GraphPad Prism; a p value of ≤0.05 was considered to be significant. The number of embryos (N) and the number of independent experiments (n) performed for each experiment is indicated in the corresponding figures. For rescue experiments, embryos of the same batch were evaluated upon injection with either MO alone or MO along with the RNA of interest. The nonparametric Mann-Whitney rank sum test was used to determine statistical differences. Figure S1: Spatio-temporal expression of mef2c and mef2d in Xenopus. A. Temporal expression of mef2c. mef2c is maternally supplied; embryonic expression starts at stage 9 and increases until stage 40. gapdh was used as loading control; –RT serves as negative control. B. Temporal expression of mef2d. mef2d is maternally supplied; embryonic expression starts at stage 12. H4 was used as loading control; –RT serves as negative control. C–j. Spatial expression of mef2c. C. Anterior view with the dorsal side to the top. c. Sagittal section. D, F, H, J. Lateral views with anterior to the right. E, G, I, K. Ventral views with anterior to the top. Black arrowheads indicate the expression in the FHF; the red arrowhead highlights mef2c transcripts at the lateral sides of the SHF. c. Parasagittal section. d, f, h, j. Transverse sections. Black arrowheads indicate cardiac expression; the arrowhead in h shows mef2c expression in the endocardium, the black arrow in the myocardium. L–s. Spatial expression of mef2d. L. Anterior view with the dorsal side to the top. l. Sagittal section. M, O, Q, S. Lateral views with anterior to the right. N, P, R, T. Ventral views with anterior to the top. m, o, q, s. Transverse sections. White arrows indicate mef2d expression in cardiac progenitor cells.
The white arrowhead indicates cardiac cells with low mef2d expression. Black arrows indicate mef2d expression in the myocardium; black arrowheads show mef2c expression in the first heart field (FHF). St: stage. (TIF) Figure S2: In vivo MO specificity test. Two-cell stage embryos were bilaterally injected and GFP fluorescence was monitored at stage 20. MO binding sites of Xenopus, mouse and human are indicated. Red letters indicate different bases in the MO binding sites; green letters indicate the ATG start codon. Upper panels show the light view, lower panels provide the fluorescent view. A. GFP fluorescence was observed upon injection of mef2c-GFP together with Control MO but not with Mef2c MO. Neither mMef2c-GFP nor hMEF2D-GFP was targeted by Mef2c MO. B. GFP expression was observed after the injection of Control MO. Co-injection of xmef2d-GFP and Mef2d MO led to an inhibition of GFP expression. Neither the expression of hMEF2D-GFP nor that of mMef2c-GFP was influenced by Mef2d MO. (TIF) Table S1: Gene abbreviations used. (PDF)"} {"text": "Celecoxib (CXB) is a widely prescribed COX-2 inhibitor used clinically to treat pain and inflammation. Recently, COX-2-independent mechanisms have been described as targets of CXB. For instance, ion channels such as the voltage-gated sodium channel, the L-type calcium channel, and the Kv2.1, Kv1.5, Kv4.3 and HERG potassium channels were all reported to be inhibited by CXB. Our recent study revealed that CXB is a potent activator of Kv7/M channels. M currents expressed in dorsal root ganglia play an important role in nociception. Our study was aimed at establishing the role of COX-2-independent M current activation in the analgesic action of CXB. We compared the effects of CXB and its two structural analogues, unmethylated CXB (UMC) and 2,5-dimethyl-CXB (DMC), on Kv7/M currents and pain behavior in animal models.
UMC is a more potent inhibitor of COX-2 than CXB, while DMC has no COX-2-inhibiting activity. We found that CXB, UMC and DMC concentration-dependently activated Kv7.2/7.3 channels expressed in HEK293 cells and the M-type current in dorsal root ganglia (DRG) neurons, and negatively shifted the I–V curve of Kv7.2/7.3 channels, with a potency and efficacy inverse to their COX-2-inhibitory potential. Furthermore, CXB, UMC and DMC greatly reduced inflammatory pain behavior induced by bradykinin, mechanical pain behavior induced by stimulation with von Frey filaments, and thermal pain behavior in the Hargreaves test. CXB and DMC also significantly attenuated hyperalgesia in chronic constriction injury neuropathic pain. CXB, DMC and UMC are openers of Kv7/M K+ channels, with effects independent of COX-2 inhibition. The analgesic effects of the CXB analogues on pain behaviors, especially those of DMC, suggest that activation of Kv7/M K+ channels may play an important role in the analgesic action of CXB. This study strengthens the notion that Kv7/M K+ channels are a potential target for pain treatment. In clinical practice, non-steroidal anti-inflammatory drugs (NSAIDs) are the most frequently used pain-relief drugs. It is believed that NSAIDs mainly relieve pain by suppressing the activity of cyclooxygenase (COX). COX-1 is a ubiquitous constitutive form of the enzyme that is involved in the regulation of various physiological processes such as platelet aggregation and gastrointestinal-tract and kidney homeostasis. COX-2 is an inducible isozyme mainly observed during pathological processes such as inflammation and cancer. Recently, COX-2-independent mechanisms have been described as targets of CXB. For instance, several non-COX-2 components of the cell, such as the sarcoplasmic/endoplasmic reticulum calcium ATPase (SERCA), have been identified as CXB targets. Certain ion channels were also recently described as additional targets of CXB.
For example, the voltage-gated sodium channel in rat retinal neurons is inhibited by CXB. It is well established that Kv7.2 and Kv7.3 constitute the molecular basis of the neuronal M-type potassium channel. In our previous work, CXB activated M-type K+ currents and modulated Ca2+-activated Cl− currents in DRG neurons. The strong modulation of Kv7.2/7.3 by CXB found in our previous work led us to think that activation of K+ channels may also contribute to the analgesic action of CXB. In light of the findings mentioned above, we propose that, in addition to reducing the generation of PGs by inhibiting COX-2, activation of M currents in DRG neurons may also be involved in the NSAID analgesic action of CXB. In the present study, we compared the effects of CXB and its two structural analogues, unmethylated CXB (UMC) and 2,5-dimethyl-CXB (DMC), on Kv7/M currents and pain behavior in animal models. As UMC is a more potent inhibitor of COX-2 than CXB and DMC has no COX-2-inhibiting activity, the role of COX-2-independent M current activation in the analgesic action of CXB can be assessed by comparing the effects of these three CXB analogues.

UMC and DMC are structural analogues of CXB. The Kv7.2/7.3 currents were recorded using the protocol shown in the right panel of the corresponding figure. We first examined the concentration-response relationships for the UMC, CXB and DMC activation of Kv7.2/7.3 channels. The EC50 values for Kv7.2/7.3 current activation were 22.7 ± 3.0 µM, 4.5 ± 0.7 µM, and 2.5 ± 0.2 µM for UMC, CXB and DMC, respectively. Thus, the order of both efficacy and potency of these CXBs was UMC < CXB < DMC, which is opposite to the order of their COX-2 inhibition activity (UMC > CXB > DMC). These results strongly suggest that the modulation of Kv7.2/7.3 currents by CXB analogues does not depend on their COX-inhibitory activity.
The values of V1/2 obtained from fitting with the Boltzmann function (see Methods) showed that CXB, DMC and UMC shifted the voltage-dependent activation of Kv7.2/7.3 to more negative potentials to different degrees. M-type K+ currents in DRG neurons are an important modulator of nociception. In this study, we explored the effects of UMC, CXB and DMC on M-type K+ currents from rat DRG neurons. Our previous work showed that CXB and DMC not only activated Kv7.2/7.3 currents expressed in cell lines but also activated the native M-type K+ currents in DRG neurons. M-type K+ currents were recorded from small-diameter DRG neurons by amphotericin B perforated patch clamp using the voltage protocol depicted in the corresponding figure.

Pain behavior was assessed following the hind paw injection of 50 µl of saline containing the relevant compounds. Intraplantar injection of BK (200 µM) into the hind paw produced strong nocifensive behavior. The compounds were injected into the plantar surface of the rat hind paw and the response to mechanical stimuli was measured 8 min later. We used the Hargreaves test to assess thermal pain behavior. The neuropathic pain model of chronic constriction injury (CCI) to the sciatic nerve was used in this part of the study. Nocifensive response changes to mechanical or thermal stimuli after surgery were monitored using von Frey filaments or the Hargreaves test as discussed above. On the first day after surgery, the mechanical pain threshold and thermal pain threshold were significantly decreased. From the first day after the surgery, the rats were divided into four groups, with each group receiving intragastric administration of either CXB, DMC, RTG or solvent in a volume of 1 ml, twice a day (for details see Methods). On the 14th day after surgery, the mechanical pain threshold in the solvent group was further reduced to 4.3 ± 0.3 g, and all three drug treatments significantly increased the threshold compared with the solvent control.
On the other hand, drug treatment did not affect the reduced thermal pain thresholds. On the 5th day after surgery, the mechanical pain thresholds in the drug groups were significantly increased compared with the solvent group. For the thermal pain threshold, only CXB treatment significantly increased the threshold. On the 10th day after surgery, only CXB treatment significantly increased the mechanical threshold compared with the solvent treatment. For the thermal pain threshold, none of the treatments significantly affected the thermal latencies (control, 16.1 ± 0.9 s; CXB, 19.8 ± 1.3 s; DMC, 19.8 ± 1.2 s; RTG, 19.1 ± 0.7 s).

The aim of this study was to establish whether activation of Kv7/M K+ conductance plays a role in the analgesic action of CXB. We found that all three CXB analogues concentration-dependently activate Kv7/M channels, with both potency and efficacy of the stimulatory effects inversely related to their COX-2 inhibitory activity: DMC showed the greatest effect while UMC showed the weakest effect in activating Kv7/M channels. Furthermore, the CXB analogues showed a similar order of potency in negatively shifting the I–V curve of the Kv7 channel. These results support our previous study showing that CXB modulates the Kv7/M channel in a COX-2-independent manner. Similar effects on M-type K+ currents from DRG neurons were also observed, except that higher drug concentrations were required in DRG neurons.

Growing evidence suggests that functional Kv7/M channels are expressed in peripheral sensory neurons and fibers and that their activity strongly contributes to fiber excitability. This led us to hypothesize that activation of Kv7/M K+ currents contributes to the analgesic actions of CXB. Thus, if all three CXB analogues with different potencies of COX-2 inhibition relieve pain similarly, it would strongly suggest that K+ channel activation is a target for the analgesic action of CXB. In this regard, DMC is particularly valuable, given that DMC, like CXB, activates Kv7/M channels but lacks COX-2 inhibitory activity. We found that CXB, UMC and DMC could attenuate inflammatory pain induced by BK.
CXB and UMC, both inhibitors of COX-2, seemed more effective than DMC. DMC (100 mM) and RTG (100 mM) were dissolved in DMSO and stored at −20°C. The other chemicals were all purchased from Sigma. The use of animals in this study was approved by the Animal Care and Ethical Committee of Hebei Medical University under the International Association for the Study of Pain (IASP) guidelines for animal use. All surgery was performed under sodium pentobarbital anesthesia and all efforts were made to minimize suffering.

The HEK293 cell line was purchased from American Type Culture Collection. HEK293 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and antibiotics in a humidified incubator at 37°C (5% CO2). The cells were seeded on glass coverslips in a 24-multiwell plate and transfected when 60–70% confluence was reached. For transfection of six wells of cells, a mixture of 3 µg Kv7.2 and Kv7.3 in pcDNA3 (1.5 µg for each), pEGFP-N1 cDNAs and 3 µl Lipofectamine 2000 reagent was prepared in 1.2 ml DMEM and incubated for 20 min according to the manufacturer's instructions. The mixture was then applied to the cell culture wells and incubated for 4–6 h. Recordings were made 24 h after transfection and the cells were used within 48 h.

The external (bath) solution contained (in mM): 160 NaCl, 2.5 KCl, 5 CaCl2, 1 MgCl2, 10 HEPES, 8 glucose, pH 7.4. A low-profile perfusion chamber fed by a gravity perfusion system was used for solution exchange. Patch electrodes were pulled from borosilicate glass and fire-polished to a final resistance of 1–2 MΩ when filled with internal solution. The internal pipette solution contained (in mM): 150 KCl, 5 MgCl2. An Axon 700B patch clamp amplifier was used for voltage clamp experiments. All recordings were performed using the amphotericin B perforated patch technique. Male Sprague Dawley rats (180–220 g) were randomly grouped and allowed to acclimatize for at least 20 min to the environment prior to the experiment.
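The six-well transfection recipe above scales linearly with the number of wells. The helper below is not part of the original protocol; the per-well arithmetic is simply the stated recipe (3 µg DNA, 3 µl Lipofectamine 2000, 1.2 ml DMEM per six wells) divided by six:

```python
# Helper for scaling the transfection master mix. NOTE: illustrative only;
# the per-well amounts are inferred from the six-well recipe in the text.
def transfection_mix(n_wells):
    """Return per-well and total amounts for an n_wells transfection."""
    per_well = {
        "dna_ug": 3.0 / 6,            # 0.5 µg plasmid DNA per well
        "lipofectamine_ul": 3.0 / 6,  # 0.5 µl Lipofectamine 2000 per well
        "dmem_ml": 1.2 / 6,           # 0.2 ml DMEM per well
    }
    mix = {k + "_per_well": v for k, v in per_well.items()}
    mix.update({k + "_total": v * n_wells for k, v in per_well.items()})
    return mix
```

For example, `transfection_mix(24)` scales the mix to a full 24-multiwell plate (12 µg DNA in total).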
All experimenters were blinded to the treatment allocation and were only unblinded once the study had finished.

The right hind paw of the animal received an intraplantar injection (50 µl) of BK and the nocifensive response was recorded using a video camera for 30 min. To study the effects of drugs on BK-induced nociceptive behavior, animals were pre-injected with CXB analogues or RTG. After 5 min, BK and the drug were co-injected into the same site of the hind paw. Control animals were injected with solvent instead of the tested drugs. All drugs were diluted in saline from stock solution and applied in a volume of 50 µl at a concentration of 100 µM.

Mechanical withdrawal thresholds were measured using calibrated von Frey filaments (a set of monofilaments made from nylon filaments of varying diameter) applied to the plantar surface of the paw. Testing was initiated with an Evaluator Size 5.07 (10 g) filament. If the animal withdrew the paw, the next weaker hair was applied. In the case of no withdrawal, the next stronger hair was applied. The cut-off was Evaluator Size 6.10 (100 g).

To test for thermal hyperalgesia, radiant heat was applied to the plantar surface of a hind paw from underneath a glass floor using a ray of light from a high-intensity lamp bulb. The paw withdrawal latency was recorded automatically when the paw was withdrawn from the light.

CCI was used as a model of neuropathic pain. Animals were randomly divided into 4 groups that received either CXB, DMC, RTG or solvent treatment. After one day of environment acclimatization, basic mechanical and thermal withdrawal thresholds were assessed. The surgeries were performed one day later. The rats were anesthetized with an intraperitoneal injection of sodium pentobarbital (10–20 mg/kg). The left hind leg was shaved and cleaned using 70% ethanol. The sciatic nerve was exposed by blunt preparation of connective tissue at the mid-thigh level, proximal to the sciatic trifurcation.
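The simplified up-down filament procedure described above (step down after a withdrawal, step up after no response, stop at the 100 g cut-off) can be sketched as follows; the filament force ladder is an illustrative assumption, not the exact set used in the study:

```python
# Sketch of the simplified up-down von Frey procedure from the text: start
# at the 10 g filament, move to the next weaker filament after a withdrawal,
# to the next stronger one after no response, and stop at the 100 g cut-off.
# The force ladder below is illustrative; real von Frey sets are log-spaced.
FILAMENTS_G = [0.4, 0.6, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 26.0, 60.0, 100.0]

def run_updown(responses, start_g=10.0, cutoff_g=100.0):
    """responses: booleans (True = paw withdrawal), consumed in order.
    Returns the sequence of filament forces (in grams) actually applied."""
    i = FILAMENTS_G.index(start_g)
    applied = []
    for withdrew in responses:
        applied.append(FILAMENTS_G[i])
        if withdrew:
            if i == 0:          # weakest filament still evokes withdrawal
                break
            i -= 1
        else:
            if FILAMENTS_G[i] >= cutoff_g:  # cut-off reached, stop testing
                break
            i += 1
    return applied
```

For instance, `run_updown([False, False, True, True])` applies 10 g, 15 g and 26 g, then steps back down to 15 g after the first withdrawal.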
Four non-absorbable sterile surgical sutures (0.1 mm) were loosely tied around the sciatic nerve, 1–1.5 mm apart. The skin was sutured and the animal was transferred to a recovery cage. CCI rats received the vehicle (0.5% sodium carboxymethyl cellulose) or CXB, DMC or RTG treatment (30 mg/kg/day) by intragastric administration twice a day in a volume of 1 ml from 1 day to 14 days after the surgery. Mechanical and thermal withdrawals were tested at 1, 5, 10 and 14 days after surgery using the methods described above.

The concentration-response curve was fitted by the logistic equation: y = A2 + (A1 − A2)/(1 + (x/x0)^p), where x is the drug concentration, x0 is the half-maximal effective concentration (EC50), and p is the Hill coefficient. The current activation curves were generated by plotting the normalized tail current amplitudes against the step potentials and were fitted with a Boltzmann equation: y = A/{1 + exp[(Vh − Vm)/k]}, where A is the amplitude of the relationship, Vh is the voltage for half-maximal activation, Vm is the test potential, and k is the slope factor of the curve. All data are reported as the mean ± standard error of the mean (SEM). Differences between groups were assessed by Student's t-test or one-way analysis of variance (ANOVA) followed by Bonferroni's post-hoc test. Differences were considered significant if P ≤ 0.05.
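As a rough illustration of the fitting described above, the logistic (Hill) and Boltzmann forms can be implemented directly. The synthetic data and coarse grid search below stand in for a proper least-squares fit (e.g. `scipy.optimize.curve_fit`); the 4.5 µM "true" EC50 is borrowed from the value reported for CXB in the text:

```python
# Dependency-free illustration of the curve fitting described above. The
# logistic (Hill) and Boltzmann forms follow the equations in the text;
# the synthetic data and the coarse grid search are stand-ins for a real
# least-squares fit (e.g. scipy.optimize.curve_fit).
import math

def logistic(x, a1, a2, x0, p):
    """y = A2 + (A1 - A2) / (1 + (x/x0)**p); x0 is the EC50."""
    return a2 + (a1 - a2) / (1.0 + (x / x0) ** p)

def boltzmann(vm, a, vh, k):
    """y = A / (1 + exp((Vh - Vm) / k)); Vh is the half-activation voltage."""
    return a / (1.0 + math.exp((vh - vm) / k))

def grid_fit_ec50(xs, ys, a1, a2, p, candidates):
    """Return the EC50 candidate minimising the sum of squared residuals."""
    def sse(x0):
        return sum((logistic(x, a1, a2, x0, p) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=sse)

# Noise-free synthetic concentration-response data (concentrations in µM)
xs = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
ys = [logistic(x, 0.0, 1.0, 4.5, 1.3) for x in xs]
ec50 = grid_fit_ec50(xs, ys, 0.0, 1.0, 1.3, [n / 10 for n in range(10, 101)])
```

By construction the recovered `ec50` is 4.5 µM, and at x = x0 the logistic form returns exactly the midpoint between A1 and A2, mirroring the half-maximal-activation meaning of Vh in the Boltzmann fit.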
Pluripotent stem cells (PSCs) are a promising cell source due to their ability to self-renew indefinitely and their potential to differentiate into almost any cell type. Great strides have been taken to parse the physiological mechanisms by which PSCs respond to their microenvironment and commit to a specific lineage. The combination of physical cues and chemical factors is thought to have the most profound influence on stem cell behavior; therefore, a major focus of tissue engineering strategies is scaffold design to incorporate these signals.

In recent years, tissue engineering has emerged as a potential method for treating numerous diseases and regenerating damaged cells. By applying engineering approaches to knowledge of biological systems, a tissue engineered substitute can be generated to restore, replace, or maintain partial if not entire organ function. Towards this effort of fully directing cell behavior, various biomaterials as well as different cell types and signaling molecules have been investigated. Recent studies have examined the ability of mechanical signals to influence stem cell lineage commitment, since cells in the body reside in different tissue niches with variable mechanical properties that affect cellular function. From this information, biomaterials have been designed to harness tissue-specific mechanical properties to guide stem cells to the targeted cell type. Specifically, nanoscale platforms are utilized since they offer a unique ability to mimic the physical cues that cells receive from their microenvironment. Researchers have developed materials with tunable matrix elasticities, nanotopographies, and nanoscale patterns that have the ability to manipulate cell phenotype. This review explores current developments in the use of nanotechnology to drive cell function determination.
The following section briefly describes common cell types used in tissue engineering applications, then discusses the interactions between cells and different biomaterials with nanoscale features.

Major sources of cells for tissue engineering include adult stem cells, progenitor cells, embryonic stem cells (hESCs), and induced pluripotent stem cells (iPSCs). Adult stem cells, or mesenchymal stem cells (hMSCs), are multipotent and generally derived from adipose tissue (AD-hMSCs) or the bone marrow (BM-hMSCs). When exposed to the correct chemical signals, AD-hMSCs can differentiate towards osteogenic, chondrogenic, adipogenic, myogenic, and hepatic lineages, as well as become endothelial cells. Progenitor cells are lineage-committed and maintain the tissue in which they reside.

hESCs and iPSCs overcome limitations of hMSCs and progenitor cells by offering the ability to self-renew indefinitely as well as differentiate into any cell type of the body. The differences between in vivo and in vitro tissue development from hESCs can be overcome by engineering a three-dimensional (3D) microenvironment from which the undifferentiated cells can receive cues and thus differentiate towards specific lineages. iPSCs were first created by introducing four pluripotency transcription factors into a mouse fibroblast cell, after which the fibroblast exhibited properties of undifferentiated hESCs. One barrier to using hESCs and iPSCs in regenerative medicine is that teratoma formation in implanted tissue can occur when cells have not fully and uniformly differentiated into the target tissue. Although hESCs and iPSCs are promising cell sources for tissue engineering applications and invaluable tools for studying developmental biology, there are still many fundamental aspects of PSC biology that are unknown.
Specifically, researchers are striving to understand and deconstruct the mechanisms by which the microenvironment affects lineage determination, as well as cell phenotype and function.

The native microenvironment is composed of the extracellular matrix (ECM), which is a network of proteins that provides physical and chemical cues determining cell behavior. Past biomaterial design has focused on microscale technologies to drive stem cell lineage commitment, but the in vivo tissue structure provides cues to cells at a nanoscale. Furthermore, cells tend to respond to microscale fiber scaffolds the same way that they do when cultured on a 2D polystyrene cell culture plate: cell morphology becomes flat, which causes a lopsided attachment of focal adhesions. The in vivo microenvironment is composed of channels, pores, and ridges that provide physical cues to cells at a nano level. The following sections describe techniques for fabricating such nanoscale features. Electrospinning, for example, can form a network of polymer fibers down to the size of 10 nm. The general method of soft lithography uses elastomeric stamps to print nanoscale polymers on a surface. Hydrogels are a popular tissue engineering scaffold with proven success in medicine and biological research due to their tunable tissue-like properties. Diffusive transport similar to that of the in vivo microenvironment also occurs in hydrogel cell culture platforms, and hydrogels can be designed to incorporate cell adhesion ligands and other biologically relevant components.
Carbon nanotubes (CNTs) possess ideal qualities for tissue regeneration strategies such as tunable chemical and mechanical properties, electrical conductivity, cytocompatibility, and nanoscale dimensions that serve as topographical cues. CNTs can be incorporated into the in vitro 3D cell microenvironment to add roughness. A study of CNTs functionalized with poly(m-aminobenzene sulfonic acid) and polyethylene glycol found that neurons exposed to more positively charged groups exhibited greater neurite length and had additional growth cones.

Microfluidic platforms have been used extensively to study cell biology, specifically cellular adhesion forces and the cytoskeleton. This technique allows for precise regulation of fluid flow and microenvironmental geometry, usually in the form of channels with similar dimensions to that of the cell type under investigation. Volumes can easily be controlled to levels as small as 10^-18 liters, and the in vitro microenvironment is optimized.

As described in the previous section, several methods for tuning biophysical signals for influencing stem cell behavior have been explored. Nanoscale signals such as shear stress, strain, material elasticity, topographical variation, and cell shape all affect cellular function and lineage specification. This section discusses how each of these factors governs PSC behavior, specifically cell adhesion, proliferation, alignment, and differentiation. In the in vivo microenvironment, muscle contraction and relaxation, bone compression and decompression, cell migration, fluid flow, and tissue regeneration all cause variations of mechanical forces in the body. The ECM also has a range of elastic moduli that generate physical stimuli for attached cells through focal adhesions.
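For scale, the tiny working volumes quoted for microfluidic channels follow directly from channel geometry. The dimensions below are illustrative assumptions, not values from the review:

```python
# Back-of-the-envelope check of microfluidic working volumes. The channel
# dimensions used here (100 µm x 50 µm cross-section, 1 cm long) are
# illustrative assumptions only.
def channel_volume_liters(width_m, height_m, length_m):
    """Volume of a rectangular channel in liters (1 m^3 = 1000 L)."""
    return width_m * height_m * length_m * 1000.0

v = channel_volume_liters(100e-6, 50e-6, 1e-2)  # 5e-8 L, i.e. 50 nanoliters
```

Even a centimetre-long channel holds only about 50 nL, and compartments approaching 100 nm on a side reach the 10^-18 L (attoliter) scale mentioned above.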
Cells experience mechanical cues such as stress and strain in the in vivo microenvironment. These mechanical cues are transduced through focal adhesion kinase (FAK) and Src family signaling. Such forces can be applied in vitro with hopes of determining what physical forces, separately or in combination, control lineage commitment. The effect of mechanical cues on cell function has been studied for many cell types, and has established general knowledge regarding cell behavioral responses. Shear flow has also been investigated as a mechanical cue for PSCs since it is a dynamic stress found in vivo, most commonly exerted on cells in the circulatory system.

A recent study demonstrated that matrix elasticities of 1 kPa, 8 kPa, and 25 kPa lead hMSCs respectively towards neurogenic, myogenic, and osteogenic lineages. ESCs are generally cultured on stiff 2D cell culture plates. Studies have shown that cell traction and colony stiffness increase when ESCs are grown on traditional rigid substrates, which also correlates with the downregulation of Oct3/4 in mouse ESCs.

Stem cell shape regulates physiology, controls proliferation, and ultimately governs lineage specification. One of the first experiments demonstrating the impact of cell size on behavior used 20 μm² and 75 μm² fibronectin islands to show that size directly controls apoptosis and proliferation, respectively.

Topography plays a key role in cell maintenance and function. Nanoscale architecture has grooves, ridges, pits, and pores, as found in vivo; for example, proteins in the ECM are generally arranged in a fibrous manner with these topographical properties. These fibrillar networks are approximately 10-100 nanometers but can be several microns in size.
Topography is a powerful tool since not only is cytoskeleton tension altered, as in cell shape experiments, but the entire molecular arrangement and dynamic organization of cellular adhesion mechanisms are also affected. Another study used UV-assisted capillary force lithography to create 350 nm ridge/groove pattern arrays, then demonstrated the ability of the surface topography to direct hESCs towards a neuronal lineage in the absence of differentiation-inducing soluble factors.

MWNT films were employed to investigate the response of hESCs to surface roughness. hESC colonies favored rougher surfaces for attachment, exhibited flattened morphology with standard colony size, and retained pluripotency when cultured on MWNT films. In another study, the surface nanoroughness of silica-based glass wafers was altered and hESCs were placed on the various substrates as single cells. hESCs on the control glass surface demonstrated highly branched morphology with many cytoplasmic extensions, while cells on the nanorough glass were compact with few, short filopodia. Cells on a rough surface patterned with square-shaped smooth islands favored attachment to the smooth glass instead of the nanorough areas and expressed the pluripotency marker Oct3/4, and hESCs placed on an exclusively rough surface spontaneously differentiated. Proliferation of hESC colonies was determined by placing cells on smooth glass and nanorough substrates, and it was determined that the doubling time of hESCs on the control surface was 41 hours compared to a slower 71 hour doubling time of colonies on the rough surface.

Researchers have incorporated in vivo microenvironmental cues in differentiation studies to parse which factors control lineage commitment. Through carefully designed scaffolds and substrates, scientists have advanced the understanding of how nano-microenvironmental cues define cell behavior.
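The reported doubling times translate into very different expansion rates under simple exponential growth, N(t) = N0 · 2^(t/Td). This back-of-the-envelope check uses only the 41 h and 71 h values quoted above:

```python
# Back-of-the-envelope comparison of the doubling times reported above,
# assuming simple exponential growth N(t) = N0 * 2**(t / Td).
def fold_expansion(hours, doubling_time_h):
    """Fold increase in cell number after `hours` of growth."""
    return 2.0 ** (hours / doubling_time_h)

week = 168  # hours
smooth = fold_expansion(week, 41)  # control (smooth) surface
rough = fold_expansion(week, 71)   # nanorough surface
```

Over one week, colonies on the smooth control surface expand roughly 17-fold versus roughly 5-fold on the nanorough substrate, a greater-than-threefold difference arising from a seemingly modest change in doubling time.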
By gaining this knowledge, stem cell differentiation can be further specified by combining nano-architecture and insoluble factors with other important biochemical cues. Tissue engineering has demonstrated the ability to generate desired cell types by combining the knowledge of biomaterials, stem cells, and signaling factors. The use of hPSCs in regenerative medicine has immense potential for treating numerous ailments, but experimental methods for solely and completely creating the desired tissues are necessary to avoid teratoma formation. Towards this goal, researchers have focused on mimicking the native cellular microenvironment.

In legume nodules, symbiosomes containing endosymbiotic rhizobial bacteria act as temporary plant organelles that are responsible for nitrogen fixation; these bacteria develop a mutual metabolic dependence with the host legume. In most legumes, the rhizobia infect post-mitotic cells that have lost their ability to divide, although in some nodules cells do maintain their mitotic capacity after infection. Here, we review what is currently known about legume symbiosomes from an evolutionary and developmental perspective, and in the context of the different interactions between diazotroph bacteria and eukaryotes. As a result, it can be concluded that the symbiosome possesses organelle-like characteristics due to its metabolic behavior, the composite origin and differentiation of its membrane, the retargeting of host cell proteins, the control of microsymbiont proliferation and differentiation by the host legume, and the cytoskeletal dynamics and symbiosome segregation during the division of rhizobia-infected cells. Different degrees of symbiosome evolution can be defined, specifically in relation to rhizobial infection and to the different types of nodule.
Thus, our current understanding of the symbiosome suggests that it might be considered a nitrogen-fixing link in organelle evolution and that the distinct types of legume symbiosomes could represent different evolutionary stages toward the generation of a nitrogen-fixing organelle.

Symbiosis between different organisms has played a key role in evolution and, in fact, the term "symbiogenesis" is an evolutionary concept that refers to "the appearance of new physiologies, tissues, organs, and even new species as a direct consequence of symbiosis". During rhizobial infection, an infection thread (IT) forms. The IT grows inwardly until it reaches the nodule primordium cells. The intracellular mode of infection occurs in most of the rhizobia-legume symbioses studied and it is tightly controlled by the host. Intercellular infection may take place via natural wounds, where lateral roots emerge through epidermal breaks (crack infection), or it may occur directly between epidermal cells or between an epidermal cell and an adjacent root hair.

Ancestral bacterial protein transport routes coexist with the evolving mitochondrial protein import machinery in R. americana. Accordingly, Reclinomonas mitochondria may represent a "connecting link" between the metazoan mitochondria and their ancestral bacterial progenitors. As in the case of rhizobia-legume symbioses, the host and microsymbiont are strictly separated by a host-derived membrane in these species. The interactions between diazotroph bacteria and eukaryotes differ in the extra- or intracellular location of the microsymbiont and in the presence or absence of segregation to daughter cells and of vertical transmission. In the fern Azolla, the diazotroph microsymbiont resides extracellularly in a mucilaginous sheath in the dorsal cavities of Azolla leaves. The cyanobacteria's filaments enter into the fern's sexual megaspore, allowing the microsymbiont to be transferred vertically to the next plant generation.
While it retains its photosynthetic capacity, it seems that these diazotroph cyanobacteria have lost their capacity to survive as free-living organisms. Frankia can infect its hosts intracellularly or intercellularly, depending on the host plant species. Frankia induces the formation of multi-lobed, indeterminate nodules, which are modified adventitious secondary roots formed from the root pericycle. Nodule infected cells become full of branching Frankia hyphae surrounded by a perimicrobial membrane of host origin, forming vesicles in which nitrogen fixation takes place. The legume nodule, in contrast, does not derive from the pericycle but rather arises from unique zones of cell division in the root cortex. Parasponia is the only non-legume plant that can establish effective nodule symbiosis with rhizobia. This symbiosis is a case of convergent evolution and it occurred more recently than that of legumes. From a phylogenetic and taxonomic point of view, Parasponia is closer to some actinorhizal plants that belong to the Rhamnaceae, Elaeagnaceae, and Rosaceae families than to legumes. Infection in Parasponia does not involve root hairs but rather crack entry or root erosion and an intercellular IT. This IT protrudes into the host plant cell by plant membrane invagination, forming the so-called fixation thread. Fixation threads, which remain in contact with the plasma membrane, are the equivalent of symbiosomes in legumes and of arbuscules in arbuscular mycorrhizal (AM) roots. Recently, Neorhizobium and Pararhizobium have been proposed as new genera. Similarly, the most recent genus Ensifer diverged about 200 Mya. Analysis of genes I and II allowed the time of divergence among the α-proteobacteria genera of rhizobia to be estimated.
Rhizobia are among the α-proteobacteria with the largest genomes and, in fact, lateral gene transfer is the primary source of genetic diversity in rhizobia. The largest of all rhizobia can perform photosynthesis and fix nitrogen in symbiosis or in free-living conditions. Legumes are included in the Rosid I clade and began their spread about 39 Mya, and it is the genistoid and dalbergioid lineages that have the oldest origin within the papilionoids.

Host proteins are targeted to the symbiosome membrane and the peribacteroid space, including an intrinsic tonoplast protein of the Nod26 group and a nucleotide translocator, as well as mitochondrial processing peptidases, probably located in the symbiosome lumen. A protein that is expressed in a nodule-specific manner in Lotus japonicus is essential for nodule symbiosis. Bacteroids in the indeterminate nodules of alfalfa (E. meliloti), and in the determinate nodules of soybean (Bradyrhizobium japonicum), display metabolic dependence on the host for branched-chain amino acids. In the inverted repeat-lacking clade (IRLC), a legume family, nodule-specific cysteine-rich (NCR) peptides are targeted to the endosymbiotic bacteria. These peptides are responsible for the bacteroid differentiation that involves the induction of endopolyploidy, cell cycle arrest, terminal differentiation, and a loss of bacterial viability. It was recently demonstrated that a nodule-specific thioredoxin (Trx s1) is targeted to the bacteroid, controlling NCR activity and bacteroid terminal differentiation. Some legumes outside the IRLC, all of which form indeterminate nodules, carry polyploid and enlarged bacteroids, and these plants also express NCR peptides. However, these peptides are not homologous to NCR peptides from IRLC legumes, suggesting an independent evolutionary origin. Other species harbor non-swollen bacteroids, suggesting that the effects on bacteroid differentiation might have changed during the evolution of the Lupinus genus. Interestingly, this occurs in species within the genus Lupinus and in Genista tinctoria (genistoid legumes), and also in certain dalbergioid legumes. All these legumes are infected by Bradyrhizobium spp.
through epidermal infection or crack infection, and the infected zone of their nodules has no uninfected cells. These nodules also contain dividing infected cells (Vega-Hernández et al.). Infected nodule cells are usually post-mitotic and do not divide further. However, one of the most interesting and quite unusual traits for eukaryotic cells is found in certain legume nodules, whose host cells can divide after being infected by rhizobia. This is the case in L. albus (Fedorova et al.). In L. albus nodules, symbiosomes are segregated equally between the two daughter cells when the host plant cell divides, just like other cell organelles, e.g., mitochondria (González-Sama et al.). Mitochondria and plastids divide in the plant cytoplasm, and cytoskeletal elements not only secure their distribution and movement but also their correct partitioning between the daughter cells at cytokinesis (King). Symbiosomes are similarly partitioned during the division of infected cells.

It has been established that a key event in the evolution from free-living bacteria to an organelle is the loss of bacterial genes and their transfer to the nucleus of the host, a fate that occurred during mitochondrial and chloroplast evolution (Douglas and Raven). In the case of nitrogen-fixing rhizobia, the absence of gene transfer to the nucleus may be due to the low oxygen concentrations required by the nitrogenase enzyme, which would result in low ROS production and low mutation rates (Allen and Raven). No gene loss or genome reduction has been observed in viable symbiotic rhizobia. Symbiotic rhizobia that do not undergo terminal differentiation are still capable of existing as free-living bacteria. Accordingly, they must be equipped with a number of genes to survive in different environments and to compete with other microorganisms.
Moreover, in nodules containing swollen terminally differentiated bacteroids, some non-differentiated bacteria inhabit the apoplastic space and consequently, all the genes necessary for independent life are still retained (Stêpkowski and Legocki). An evolutionary pathway has been proposed in symbiotic systems to shift from free-living organisms to facultative symbiosis and to ecologically obligatory symbiosis, usually involving genome expansion (Provorov et al.). While it has been postulated that organelle development cannot occur in differentiated multicellular organisms (McKay and Navarro-González), the legume symbiosome suggests that this may not be an absolute constraint.

TC, EF, JP, and ML wrote the manuscript. All the authors read and approved the final version of the manuscript.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer OS and handling Editor declared their shared affiliation.
In particular, based on the strong differences between the structural properties of the two systems, our results suggest that the assignment of some dynamical properties to the tetrahedral character of the water structure should be questioned. We finally highlight the similarities between the characteristic decay times of the time correlation function, as obtained from our data, and the mean hydrogen-bond lifetime reported in the literature. It is not common to find in Nature phenomena more intriguing than those related to the hydrogen bond interaction. Indeed, this interaction dictates not only the overall behaviour of organic molecules such as DNA or proteins, but also that of systems as different as aqueous solutions and alcohols, which are undoubtedly the two classes of liquids having the largest impact on life and daily human activity and development, with innumerable applications in today's life. The accurate knowledge of their properties is crucial to establish a firmer ground for the development of more refined liquid state theories, but it also has much broader societal implications. The full understanding of their microscopic behaviour is, however, far from being achieved even for the two most studied members of these families, namely water and methanol. This is mainly due to the presence of the hydrogen bond, which induces strongly directional intermolecular interactions absent in other non-associated liquids. Comparing water to methanol at mesoscopic scales, as done in this paper, is rather natural and common. Methanol is indeed the alcohol having most analogies with water, their difference simply consisting in the replacement of a proton by a methyl group, which bestows on alcohols their amphiphilic character and different solvent properties. 
The continuous hydrogen bond (HB) breaking and formation is supposed on the one hand to determine the overall behaviour of both liquids, and on the other to be at the root of their different structural, dynamical, and solvating properties. Liquid water is a system intensively studied in the search for a rationale for its unique properties and many unsolved anomalies. Hydrogen-bond lifetimes τHB ranging from 1 to 2 ps, and values of τHB ~ 1 ps, have been estimated for RT water by incoherent neutron scattering experiments. However, the accepted and sharp structural distinction between these two liquids appears to fade according to the findings of recent experiments on water. The investigation of the wavevector dependence of the THz excitations in liquid water, and of their possible dispersive nature, can be readily performed by inelastic neutron scattering (INS). Unfortunately, THz spectroscopic methods couple at low Q primarily with longitudinal movements only, therefore not allowing us to directly examine the transverse character of a given mode. Consequently, any experimental signature of a shear wave propagation reflects a longitudinal-transverse (L-T) mixing. The dynamic structure factor S may then be computed and analysed in a much broader exchanged wavevector Q and energy E = ħω range, where ħ and ω represent the reduced Planck constant and the angular frequency, respectively. Within this approach, the neutron data are thus essential to benchmark, a posteriori, the ability of simulation, which in principle deals with a model system, to provide a realistic description of the dynamics in the actual system under study through the use of a reliable interaction potential. In order to perform a reliable characterization of the dynamical response of the sample, the synergy between numerical and experimental techniques is essential. 
Once the simulation results for the spectrum of density fluctuations are validated by the experimental ones, the dynamic structure factor ω(Q) has often been performed using the neutron Brillouin scattering technique. This technique has to satisfy two main requirements. (i) Low scattering angles: because of long range disorder, collective phenomena show up indeed in these systems only at very low Q, in the so called pseudo first Brillouin zone which extends from the position of the main peak of the static structure factor down to the lowest possible Q, ideally to Q = 0. (ii) High energy incident neutron beam: since the collective excitations typically travel at velocities of the order of few thousands of m/s in liquids, the use of high energy (~10–100 meV) thermal neutrons is mandatory. Satisfying both conditions is extremely demanding at experimental level, because it implies the detection of high energy neutrons scattered from the sample at low angles: the contamination of the direct beam is unavoidable, and tight collimations must be used at the price of a strong flux decrease.The study of the collective dynamics in amorphous systems and the determination of the dispersion law BRIllouin SPectrometer BRISPT = 298 K. Measured spectra were reduced, with standard procedures, to the intensity, Iexp scattered by the sample, which is a quantity that contains, in addition to the sought-for single-scattering contribution, also a non negligible component due to multiple scattering (MS).The neutron Q, E) domain than in the experimental case and quantities that cannot be measured directly, such as: (i) the center-of-mass (CM) dynamic structure factor and (ii) the longitudinal and transverse currentsQħ.The MD simulations allow us to access a broader once the MS and instrumental resolution are taken into account, and emphasizes the overall accuracy of both measurements and calculations. 
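Requirement (ii) above can be made concrete with a short numerical sketch (not taken from the paper; it uses textbook constants plus the methanol sound speed quoted in the text): the de Broglie relation gives the speed and kinetic energy of incident neutrons of a given wavelength, confirming that 1–2 Å neutrons travel well above the adiabatic sound speed and carry energies in the ~10–100 meV thermal range.

```python
# Sketch: de Broglie speed and kinetic energy of an incident neutron of
# wavelength lambda0, compared with the adiabatic sound speed in liquid
# methanol quoted in the text (cs = 1100 m/s).
H = 6.62607015e-34           # Planck constant, J s
M_N = 1.67492749804e-27      # neutron mass, kg
EV = 1.602176634e-19         # J per eV

def neutron_speed(lambda0_m):
    """de Broglie relation: v = h / (m_n * lambda)."""
    return H / (M_N * lambda0_m)

def neutron_energy_meV(lambda0_m):
    """Kinetic energy E = m v^2 / 2, expressed in meV."""
    v = neutron_speed(lambda0_m)
    return 0.5 * M_N * v**2 / EV * 1e3

cs = 1100.0  # m/s, adiabatic sound speed of methanol (from the text)
for lam in (1e-10, 2e-10):   # the two wavelengths used on BRISP
    v = neutron_speed(lam)
    print(f"lambda = {lam*1e10:.0f} A: v = {v:.0f} m/s "
          f"({v/cs:.1f} x cs), E = {neutron_energy_meV(lam):.1f} meV")
```

For λ0 = 1 Å the neutron travels at roughly 3.6 times the sound speed, so the kinematic condition for probing the acoustic branch is comfortably met.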
More details on the experimental and calculated quantities can be found in the Methods section and Supplementary Information. The agreement holds over the whole (Q, E) range, the more so if one realises that measurements with different neutron wavelengths and resolutions amount to independent determinations. This provides a solid ground to model the spectral response of the sample by the CM dynamic structure factor SCM determined from the simulations. Complying with standard notation for the spectral variable, we now switch from the energy E to the angular frequency ω. SCM is taken as the correct quantity representing the collective translational dynamics, which, however, is not unaffected by the molecular asymmetry and the anisotropy of the interaction, also due to the HB. The computed SCM are plotted at several Q values. It is worth noticing that the highest-frequency mode, visible in the simulated spectra, was out of the experimental energy window and could not be covered by the measured INS spectra. As in previous works on molecular fluids, the model for SCM comprises a central line at ω = 0 describing the quasi-elastic response arising from the combined effects of thermal and viscous relaxations, while the frequency of the two other lines is related to the frequency ωVE of the propagating sound wave. As to the DHO components, each of them contributes to SCM through a further pair of Brillouin lines located at ωD1,2. For the explicit definitions and detailed properties of the two models, the reader is referred to the Methods section. The VE model was used in the intermediate Q range, while a model with only a DHO function was preferred at the lower or larger Qs. The best-fit acoustic dispersion ωs(Q) spans the intermediate energy values and exhibits the typical trend expected for a sound mode. 
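Since the explicit fit formulas are not reproduced in this excerpt, the sketch below uses the standard damped-harmonic-oscillator (DHO) line shape in a common normalisation (an assumption on my part, not a formula quoted from the paper). Each DHO contributes the pair of broadened Brillouin lines described above, at ±Ω0; the numerical check confirms that this normalisation integrates to unity and that the peak sits slightly below Ω0 for finite damping.

```python
import numpy as np

def dho(omega, omega0, gamma):
    """Damped-harmonic-oscillator spectral line shape, normalised to 1:
    S(w) = (1/pi) * gamma * omega0^2 / ((w^2 - omega0^2)^2 + (gamma*w)^2).
    Its two broadened Brillouin peaks sit near +/- omega0."""
    return (gamma * omega0**2 / np.pi) / (
        (omega**2 - omega0**2)**2 + (gamma * omega)**2)

w = np.linspace(-400.0, 400.0, 200001)   # rad/ps grid
s = dho(w, omega0=30.0, gamma=10.0)      # underdamped example parameters

# Trapezoidal integral: should be ~1 for the chosen normalisation.
dw = w[1] - w[0]
area = float(np.sum(0.5 * (s[1:] + s[:-1])) * dw)

# Peak frequency: analytically sqrt(omega0^2 - gamma^2/2) for a DHO.
peak = abs(w[np.argmax(s)])
```

The parameters (Ω0 = 30 rad/ps, Γ = 10 rad/ps) are purely illustrative, chosen in the same rad/ps units the paper uses for its excitation frequencies.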
The excitation propagates with an apparent propagation velocity of ~2750 m/s, much larger than the adiabatic sound velocity (cs = 1100 m/s), followed by overdamping and bending down to a vanishing frequency around the position of the first maximum of the static structure factor, Qp ≈ 17 nm−1. Conversely, both DHO components of SCM display a softer, not clearly dispersive mode, which remains underdamped in the whole Q range. Examples of best fits are shown for JL and JT, respectively. The study of JL = (ω/Q)2SCM represents an alternative to the inspection of SCM, since it is a positive variable vanishing both at infinite and zero frequencies and, unless null everywhere, it must reach at least one maximum in between. In the low-Q limit, the position of this maximum tends towards the frequency of the dominant acoustic mode, and it is customarily assumed to provide a reasonable estimate of this frequency even at finite Q values. Conversely, although JT cannot be rigorously approximated by any known analytic function, the general features of its shape provide a meaningful characterisation of the transverse nature of the dynamics and the frequency of the corresponding modes. Prior to a deeper discussion of these results, it is extremely useful to inspect the MD outputs: by increasing Q, we observe a gradual merging of the two spectra, which become coincident at Q ≥ 15 nm−1, except for the intrinsic difference in the ω → 0 limit. The evidence of such a progressive merging of the two polarisation states has been, to the best of our knowledge, never shown before, and it can presumably be seen as the effect on the currents of the transition from the continuum to the single particle regime: at low Q, when the system is probed over mesoscopic (few tens of Å) distances including many neighbours, the two polarisation states fully probe the anisotropic nature of the dynamics. 
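The definition JL = (ω/Q)²SCM can be illustrated with a minimal numerical sketch (model parameters are invented, not the paper's fit results): JL vanishes at ω = 0 and ω → ∞, so it must peak in between, and the peak position estimates the acoustic-mode frequency. For a pure DHO stand-in for SCM, the maximum of JL falls exactly at the DHO frequency Ω0.

```python
import numpy as np

def dho(w, omega0, gamma):
    # Damped-harmonic-oscillator line shape, used here as a stand-in
    # for the inelastic part of S_CM(Q, w).
    return (gamma * omega0**2 / np.pi) / (
        (w**2 - omega0**2)**2 + (gamma * w)**2)

Q = 4.0                               # nm^-1, illustrative wavevector
w = np.linspace(0.0, 200.0, 100001)   # rad/ps
s_cm = dho(w, omega0=11.0, gamma=4.0) # invented model parameters

# Longitudinal current spectrum: J_L(Q, w) = (w/Q)^2 * S_CM(Q, w).
j_l = (w / Q)**2 * s_cm

# The position of the J_L maximum is the usual estimator of the
# acoustic-mode frequency; for a DHO it lands exactly at omega0.
w_max = w[np.argmax(j_l)]
```

Differentiating (ω/Q)² dho(ω) shows the stationary point sits at ω² = Ω0² regardless of the damping, which is why the JL maximum is a robust mode-frequency estimator even when the SCM peak itself is shifted by damping.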
Upon increasing Q and approaching, although still far from reaching, the single particle limit, this asymmetry gradually disappears together with the signs of any distinction between longitudinal and transverse dynamics. JT clearly shows, at all the Q investigated, the presence of bumps related to excitations of a transverse nature. The frequencies corresponding to these local JT maxima, ωTl(Q), ωTi(Q), ωTh(Q), for the low, intermediate and high energy mode respectively, are compared to the frequencies obtained from SCM. ωTl(Q) has a slight, though evident, Q dependence, with a slope close to the methanol adiabatic sound velocity, but with the important difference that ωTl(Q) does not show a positive dispersion as ωs(Q) does. Therefore, the analysis of JT allows us to attribute to the lower energy mode, ωD1(Q), a transverse and acoustic nature. The two transverse excitations at higher energy, in particular ωTi(Q), are substantially Q-independent. Both are detectable at almost all Qs, and follow the typical behaviour of optic modes, as revealed by the non-vanishing Q = 0 extrapolation of their characteristic frequency. The optic-like mode at ωTi(Q) ~ 20 rad ps−1 is present in both JL and JT, but not resolved in SCM, probably because the corresponding frequency was too close to that of the longitudinal acoustic mode, and therefore hard to separate from it. The values obtained for ωTh(Q) confirm the presence of a third mode at high energy, equivalent to ωD2(Q). Its persistence in a broad Q domain is clearly inferred when the currents are analysed. The merging of the two polarisation states is even more evident at Q ≥ 15 nm−1. This seems consistent with the idea that, the movements being essentially localised and non-propagating, their dominant frequency becomes independent of both the amplitude and the direction of the exchanged momentum. 
This makes the mere concept of a mode polarisation ill-defined. The inelastic components of SCM of methanol are even harder to detect in any spectroscopic method probing the total S, as they are often hidden by intramolecular dynamics. In x-ray spectroscopy, the difficulty is further compounded by the typical long-tailed energy resolution function. In the IXS study of ref. , the S spectra were analysed with a model fit function that intrinsically excludes the presence of two excitations and reduces to a simplified model composed of a single DHO supplemented by a single central Lorentzian line. The need to access SCM through MD computations was correctly recognized in ref. The ensemble of these findings reverses the interpretation of the collective dynamics of methanol. The dispersion curves of both liquids bend down around their respective Qp values: this suggests a universal nature of dispersion in HB liquid dynamics, although sound propagation persists to larger Q's in water, probably owing to its larger Qp. In addition to this, we find the important result that methanol, like water, exhibits a second excitation at lower frequencies. This mode is only slightly dispersive but has a possible transverse acoustic origin, which also in water has been suggested on the ground of a low-Q linear dispersion with a slope smaller than, yet somewhat close to, the adiabatic sound velocity of water, in the Q ≥ 4 nm−1 range. 
This mode has been frequently related to a coupling between longitudinal and transverse dynamics arising in systems, like water, having a tetrahedral coordination. Our findings suggest some similarities of the methanol dynamics with the long-debated case of liquid water. A similar mode at Q > 5 nm−1, as in methanol but at higher energies, was also observed in the recent simulation of supercooled TIP4P/2005 water. Even closer resemblances can be found in supercooled water, which shows a similar triple-mode structure, as reported in both experimental and simulation studies. Analogies have been found also in the normal-mode analysis of a few hydrogen-bonded liquids, whose Q-constant frequency modes are to be connected to the symmetric and asymmetric stretching of the HBs connecting the molecules. Based upon these results, JL at low Qs rapidly vanishes, being gradually taken over by a low-frequency transverse mode at high Qs. A similar trend was previously observed by IXS measurements in water, which exhibits in the whole Q range a viscoelastic behaviour characterised by propagating sound modes that are increasingly damped with growing Q, until they become overdamped in a rather narrow range around the position of the main peak in the static structure factor Qp, where the sound propagation is arrested. This behaviour, already known to be a common property of a large variety of liquids, is also shared by methanol. In fact, if the acoustic mode dispersion ωs(Q) of methanol is compared to that of its non-hydrogen-bonded analog, i.e. methane, one sees that the two curves have substantially the same shape, differing only by a constant factor. Moreover, we notice analogies between the longitudinal and transverse waves propagating in water and methanol. The main conclusion of our work is thus represented by the discovery of a methanol collective dynamics much richer and more complex than known so far, displaying features that bear evident similarities to those found in liquid water. 
Our data indirectly suggest that relevant dynamical features typically attributed to the tetrahedral coordination in water survive also in a non-tetrahedral liquid, and it seems tempting to interpret the overall dynamics found for methanol as a sort of distinctive behaviour of HB systems. At a more speculative level, we may try to provide additional insight into the dynamics of methanol by recalling the strong asymmetry introduced by the HB, and assuming that its lifetime affects the decay time of all vibrational excitations involving HBs (such as, e.g., the bending of O-O-O triplets). We recall indeed that our model is the frequency counterpart of time correlation functions having decay constants τ (defined in Methods) that reflect the damping factors of both the VE and DHO contributions in Eq. 1. The lifetimes corresponding to the inelastic excitations are τ1 = 1/z1 ~ 1 ps and τ2 = 1/z2 ~ 2–3 ps, respectively, at Q ≤ 15 nm−1. The τ's reach an almost constant level at Q ~ 15 nm−1, which implies that localisation of all the vibrations occurs at the same Q where the longitudinal and transverse modes become almost indistinguishable. At Q ≤ 15 nm−1, we find that the two additional VE decay times, τ1 and τ2, have values similar to the HB mean lifetime obtained from simulations on methanol, at variance with the τ1 and τ2 that we have found in parallel simulations run with the methanol HB interaction turned off. We finally remark that the dynamic scenario emerging from the present results seems compatible with the one observed in the Fast Infrared Spectroscopy study of the water OH stretching dynamics, where the coexistence of a fast HB local oscillation (~200 fs) with a slower (~1 ps) correlation due to collective motions was demonstrated. The measurements were performed at T = 298 K. A deuterated sample (CD3OD) was chosen in order to maximise coherent scattering and the visibility of collective dynamics. 
The measurements were carried out with two values of the incident-neutron wavelength (λ0 = 1 and 2 Å) for a more efficient coverage of the kinematic region, while keeping the neutron speed sufficiently larger than the adiabatic sound speed in methanol (cs = 1100 m/s). The instrumental resolution function Gexp(E) had a measured full width at half maximum of 3.0 and 0.7 meV for the two neutron wavelengths, respectively. The scattering angle range (1° ≤ θ ≤ 14°) allowed by the BRISP setup made possible the exploration of dynamical excitations at wave vectors between Q = 2 and 16 nm−1, the upper bound being close to the position of the main peak in the static structure factor, Qp = 17 nm−1. In the simulations, the intramolecular bond lengths (rCH = 1.09 Å, rCO = 1.41 Å, rOH = 0.945 Å) were kept fixed by means of the SHAKE algorithm, while the remaining intramolecular motions, such as bond-angle bending and rotation of the methyl group around the axis of the C-O bond, were treated using a harmonic and a cosine potential, respectively, with the corresponding OPLS-AA parameters, which are reported in the Supplementary Information. This model has proven able to reproduce in a satisfactory way the density and heat of vaporization of liquid methanol. MD simulations have been performed using the DL POLY 2.20 package, giving access to Q < 24 nm−1 and ω up to ~350 rad ps−1. It is also worth noting that MD simulations have also been carried out with the H1 potential of ref. The VE contribution comprises (i) a pair of inelastic Lorentzian lines, made asymmetric with respect to the frequency of the sound excitations ωVE by the parameter bVE, with a damping zVE, and (ii) two Lorentzian quasi-elastic terms with dampings z1,2. The DHO model is expressed by a stripped-down version of the same spectral profile, where only the two inelastic terms survive, ΩD1,2 and ΓD1,2 being the frequency and damping of the excitations, respectively. The characteristic lifetime τ1,2 of a given excitation is simply obtained as the reciprocal of the respective damping. How to cite this article: Bellissima, S. et al. 
The hydrogen-bond collective dynamics in liquid methanol. Sci. Rep. 6, 39533; doi: 10.1038/srep39533 (2016). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Parasponia represents five fast-growing tropical tree species in the Cannabaceae and is the only plant lineage besides legumes that can establish nitrogen-fixing nodules with rhizobium. Comparative analyses between legumes and Parasponia allow identification of conserved genetic networks controlling this symbiosis. However, such studies are hampered by the absence of powerful reverse genetic tools for Parasponia. Here, we present a fast and efficient protocol for Agrobacterium tumefaciens-mediated transformation and CRISPR/Cas9 mutagenesis of Parasponia andersonii. Using this protocol, knockout mutants are obtained within 3 months. Due to efficient micro-propagation, bi-allelic mutants can be studied in the T0 generation, allowing phenotypic evaluation within 6 months after transformation. We mutated four genes – PanHK4, PanEIN2, PanNSP1, and PanNSP2 – that control cytokinin, ethylene, or strigolactone hormonal networks and that in legumes fulfil essential symbiotic functions. Knockout mutants in Panhk4 and Panein2 displayed developmental phenotypes, namely reduced procambium activity in Panhk4 and disturbed sex differentiation in Panein2 mutants. The symbiotic phenotypes of Panhk4 and Panein2 mutant lines differ from those in legumes. In contrast, PanNSP1 and PanNSP2 are essential for nodule formation, a phenotype similar to that reported for legumes. This indicates a conserved role for these GRAS-type transcriptional regulators in rhizobium symbiosis, illustrating the value of Parasponia trees as a research model for reverse genetic studies. 
Parasponia are tropical tree species belonging to the Cannabis family (Cannabaceae) and are known as the only non-legume plants that can establish a nitrogen-fixing endosymbiosis with rhizobium diverged about a 100 million years ago signals and CALCIUM AND CALMODULIN-DEPENDENT PROTEIN KINASE (PanCCaMK) – commit conserved functions in the Parasponia and legume LCO signaling pathways expression (HISTIDINE KINASE 4 (HK4) that in legumes is essential for nodule organogenesis (ETHYLENE INSENSITIVE 2 (EIN2) that is a negative regulator of nodulation in legumes . Pots were half-filled with agraperlite and watered with modified EKM medium (Mesorhizobium plurifarium BOR2 (OD600 = 0.025) (All experiments were conducted using thereof . P. andeNa2MoO4] and plac= 0.025) .NPTII) and an Arabidopsis thaliana codon-optimized variant of Cas9 (M. truncatula (Mt4.0v1) (Populus trichocarpa (v3.0) (1. Protein sequences of P. andersonii (PanWU01x14_asm01_ann01) and Trema orientalis (TorRG33x02_asm01_ann01) were obtained from www.parasponia.org , M. trunMt4.0v1) and Popua (v3.0) were obtonia.org . These sTAIR102) and M. tT v7.017 implemenT v7.017 implemenP. andersonii was performed using A. tumefaciens strain AGL1 of A. tumefaciens were used. Bacteria were scraped from plate and resuspended in 25 ml of infiltration medium , chosen based on previous study (RNA was isolated from snap-frozen root tips ∼2–3 cm) as described by us study . All pri–3 cm as post hoc tests. Statistical analyses were performed using IBM SPSS Statistics 23.0 .Statistical differences were determined based on one-way ANOVA and Tukey P. andersonii, we first determined the most optimal conditions for regeneration of non-transgenic tissue. We compared regeneration efficiencies of nine tissue explant types in combination with 11 different media, including the propagation and root-inducing media previously used for P. andersonii . 
When 2 mm in size, transgenic calli were separated from tissue explants, which stimulated shoot formation . Two to three months after the start of transformation, a single shoot was selected from each explant to ensure that the transgenic lines represent independent transformation events. These shoots can be genotyped and vegetatively propagated targeting PanHK4 and PanNSP2 and single sgRNAs targeting PanEIN2 and PanNSP1 were placed under an A. thaliana AtU6 small RNA promoter . Genotyping of regenerated shoots showed that >85% contained the Cas9 gene, indicating successful transformation. Potential mutations at any of the target sites were identified through PCR amplification and subsequent sequencing of the PCR product. This revealed mutations at the target site in about half of the transgenic shoots examined, of which the majority were bi-allelic (Table 1). Most mutations represent small insertions and deletions but also larger deletions and inversions were identified, some of which occur in between two target sites , cotton (Gossypium hirsutum), and citrus (Citrus clementina) P. andersonii as well as control transgenic lines . In contrast, leaf abscission was not observed on Panein2 mutant trees . This demonstrates that Panein2 mutants are indeed ethylene insensitive.To characterize the resulting mentina) . We explPanein2 mutant trees revealed an additional non-symbiotic phenotype. These trees form bisexual flowers containing both male and female reproductive organs . In contrast, WT P. andersonii trees form unisexual flowers that contain either stamens or carpels . This suggests that ethylene is involved in the regulation of Parasponia sex type.Inspection of carpels . Therefore, we conclude that PanHK4-mediated cytokinin signaling is required for regulation of P. andersonii secondary growth.Cytokinins are important regulators of cambial activity, as shown in uloides) . To deteM. 
truncatula previously identified a set of genes downregulated in roots of Mtnsp1 and Mtnsp2 mutants and MORE AXILLARY BRANCHING 1 that are putatively involved in strigolactone biosynthesis . We noted that Pannsp1 mutant lines differ in the level of PanD27 and PanMAX1 expression. Both genes have an intermediate expression level in Pannsp1–6 and Pannsp1–13, compared to Pannsp1–39 and Pannsp2 mutants . The three Pannsp1 mutant lines differ from each other in the type of mutations that were created. Pannsp1–6 and Pannsp1–13 contain a 1 bp insertion and 5 bp deletion close to the 5′-end of the coding region, respectively. These mutations are immediately followed by a second in-frame ATG that in WT PanNSP1 encodes a methionine at position 16. In contrast, Pannsp1–39 contains a large 232 bp deletion that removes this in-frame ATG . This suggests impaired nodule development in P. andersonii ein2 mutants.The phenotype of the root . Panein2Panein2, Panhk4, and Pannsp1–6/Pannsp1–13 mutant nodules, we sectioned ∼10 nodules for each mutant line and studied these by light microscopy. Wild-type P. andersonii nodules harbor an apical meristem, followed by several cell layers that contain infection threads . These cells are followed by cells that are filled with fixation threads . The general cytoarchitecture of Panhk4 and Pannsp1–6/Pannsp1–13 mutant nodules does not differ from that of WT or transgenic control nodules , suggesting that these are functional. In contrast, in Panein2 mutant nodules intracellular infection is hampered . Most (>75%) Panein2 mutant nodules harbor only infection threads as well as large apoplastic colonies . Some mutant nodules, harbor cells that contain fixation threads. However, even in the best nodules, fixation thread formation is severely delayed and many cells in the fixation zone still show vacuolar fragmentation . This shows that ethylene signaling is required for efficient fixation thread formation in P. 
andersonii nodules.To determine the cytoarchitecture of gure 7A) . Below tnsp1, nsp2 and ein2, whereas no effect on nodule formation was found by knocking out hk4 in P. andersonii. Interestingly, we uncovered a novel role for the ethylene signaling component EIN2 in intracellular infection of P. andersonii nodules.Taken together, these data reveal symbiotic mutant phenotypes for Parasponia can provide insights into ‘core’ genetic networks underlying rhizobium symbiosis , making generative propagation of multiple mutant lines logistically somewhat challenging. An alternative to generative propagation is in vitro maintenance of transgenic lines. Additionally, the fast and efficient transformation procedure presented here will allow recreation of a particular mutant in less than 6 months.An advantage of the pagation . This alpagation . Most mupagation . Since cmpatible . Howeverllinated . This coPanhk4 and Panein2 showed symbiotic phenotypes that differ from corresponding legume mutants. P. andersonii Panhk4 mutants form nodules with a WT cytoarchitecture, indicating that these nodules are most likely functional. Analysis of stem cross-sections showed that Panhk4 mutants possess a reduced procambial activity. Similar phenotypes are observed in homologous mutants in A. thaliana (arabidopsis histidine kinase 4 (ahk4), whereas it is completely abolished in the ahk2 ahk3 ahk4 triple mutant and melon (Cucumis melo) (Parasponia species.is melo) . Moleculis melo) , 2015. IEIN2 knockout mutations result in different phenotypes between Parasponia and legumes. In legumes, ethylene negatively regulates rhizobial infection and root nodule formation (M. truncatula ein2 mutant (named sickle) that forms extensive epidermal infection threads and clusters of small nodules . However, all three mutants are affected in transcriptional regulation of strigolactone biosynthesis genes PanD27 and PanMAX1 . 
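The interpretation of the Pannsp1 alleles rests on whether a lesion near the 5′ end leaves intact the second in-frame ATG (Met16 of WT PanNSP1), from which translation could in principle reinitiate. A toy sketch of that reasoning, with an invented sequence (deliberately NOT the real PanNSP1 gene):

```python
# Toy CDS: a start codon, 14 filler codons, the "Met16" ATG, then more
# filler. The sequence is invented purely to illustrate the logic.
WT = "ATG" + "GCT" * 14 + "ATG" + "GCA" * 30
ATG2 = 45  # 0-based nucleotide position of codon 16 (the second ATG)
assert WT[ATG2:ATG2 + 3] == "ATG"  # sanity check on the toy layout

def second_atg_survives(del_pos, del_len, atg_pos=ATG2):
    """True if a deletion spanning [del_pos, del_pos + del_len) misses
    the second ATG codon entirely, leaving it available as an
    alternative translation start."""
    return del_pos + del_len <= atg_pos or del_pos >= atg_pos + 3

small = second_atg_survives(6, 5)    # 5-bp deletion near the 5' end
large = second_atg_survives(6, 232)  # large deletion spanning the ATG
```

With these toy coordinates, the small 5-bp lesion leaves the downstream ATG intact (mirroring the Pannsp1-6/Pannsp1-13 situation), while a 232-bp deletion starting at the same point removes it (mirroring Pannsp1-39); the positions are illustrative, not measured from the actual gene.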
The three Pannsp1 mutant lines differ from each other in the type of mutations that were created. Pannsp1–6 and Pannsp1–13 contain small deletions that are immediately followed by a second in-frame ATG that in WT PanNSP1 encodes a methionine at position 16. In contrast, Pannsp1–39 contains a larger deletion that removes this in-frame ATG. Taken together, these tools make it possible to study P. andersonii symbiosis genes to determine to what extent legumes and Parasponia use a similar mechanism to establish a nitrogen-fixing symbiosis with rhizobium. Gene identifiers for all P. andersonii genes used in this study can be found in the Supplementary Table and at www.parasponia.org. All datasets analyzed for this study are included in the manuscript and the supplementary files. Conceptualization, AvZ and RG; Methodology, AvZ, MH, SL, and WK; Investigation, AvZ, TW, MSK, LR, FB, MH, SL, EF, and WK; Formal analysis, AvZ, TW, and EF; Visualization, AvZ; Writing – original draft, AvZ; Writing – review and editing, AvZ and RG; Funding acquisition, TB and RG; Supervision, RG. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The dairy branch had a combined 35% share of the above consumption. As shown by data obtained from the Polish Central Statistical Office, the majority of dairy plants use their own source of water, so this branch is also an important water producer in Poland. Water used in the dairy industry should meet at least drinking-water quality requirements, so the factories need to treat the water. This paper analyses the correlations between selected technical processes, equipment profiles, water quality, and water consumption in two types of dairy factories (DF). The first one, DF-1, processes approx. 50,000 L of milk per day, and the second, DF-2, approx. 330,000 L. 
The water taken from the wells needs to be pre-treated because of its iron and manganese concentrations and due to specific requirements of various industrial processes. As a result of this work, we have managed to propose technological solutions in the context of water consumption rationalization. The proposed solutions aim at improving water and wastewater management by reducing the amount of water consumed by industry.

The food industry is one of the most important and fastest-growing sectors of the economy in Poland. This sector is also characterized by a high demand for resources, particularly for water. Polish food industrial plants consumed 793 hm3 of water. At the same time, in 2014, the Polish industrial sector generated more than 7876 hm3 of sewage, of which 734.5 hm3 falls in the dairy industry, as analyzed further in this article. However, given the fact that a number of chemicals in concentrations normally found in water can cause negative health effects with long-term exposure, toxic substances having the ability to accumulate in the body, as well as substances with carcinogenic properties, are also subject to strict rigor and are eliminated from water intended for human and animal consumption. This is due to sanitary standards on water quality, among which the most important legislation in the European Union is the Directive of the Parliament and the EU Council on the quality of water intended for human consumption, specifying the standard chemical, physical, and biological properties which must be met by water supplied and used in food plant production, and parameters such as color, turbidity, total number of bacteria, total organic carbon, taste, and smell. Although these parameters have no direct impact on consumer health, the aim is to determine the effectiveness and quality of the water treatment process, which is referred to as an auxiliary indicator parameter. 
It should be noted that the standards for a number of indicators applicable under the Directive are more restrictive than the WHO recommendations. The scope of the study covered the flow and load of the sewage; existing practices and procedures for water management; and analysis of the obtained results. The procedure for water quality sampling and the appliances used for water quality parameter measurements were determined according to the recommendations of the Polish State Sanitary Inspection. A spectrophotometer manufactured by HACH Co. (model DR 2000) was applied for analysis of ammonium, nitrate, nitrite, iron, manganese, turbidity, and color. The hydrogen ion concentration (pH) was measured with a CP-411 pH meter, and the electrical conductivity was measured using a CC-411 conductivity meter. The hardness level was determined by titration following an accredited testing procedure. The total count of microorganisms was determined as per PN-EN ISO 6222:2004. The measurement of smell was conducted via a simplified method based on the norm PN-EN 1622. Some data were taken from direct interviews with the employees of the companies. Furthermore, the correlations between selected technical process and production factors, equipment profiles, water quality, and consumption are also presented. The collected data were compared with the data available in the literature. It should be noted that the analyzed companies had diverse production profiles and management procedures. The average daily water demand is Qd = 127 m3/day, the average hourly demand is Qh = 5 m3/h, whereas the maximum is Qmaxd = 140 m3/day. Because of increased color and iron concentration, the water is subject to a pre-treatment process. Following the treatment, it is transported into a clean water tank with a capacity of V = 48 m3, and through the use of a pressure tank, the water is directed to the factory water supply system. 
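The DF-1 figures quoted above permit a quick sanity check of the demand pattern and tank sizing. The derived ratios below are my own illustrative arithmetic based on the values stated in the text, not results reported by the study.

```python
# DF-1 figures as quoted in the text; the derived ratios are illustrative.
Q_D = 127.0      # average daily demand, m^3/day
Q_H = 5.0        # average hourly demand, m^3/h
Q_MAX_D = 140.0  # maximum daily demand, m^3/day
V_TANK = 48.0    # clean-water tank volume, m^3

implied_hourly = Q_D / 24.0          # ~5.3 m^3/h, consistent with Q_H
peak_day_factor = Q_MAX_D / Q_D      # ~1.10 peak-to-average day ratio
buffer_hours = V_TANK / Q_H          # ~9.6 h of average demand in the tank

print(f"implied hourly demand: {implied_hourly:.2f} m3/h")
print(f"peak-day factor:       {peak_day_factor:.2f}")
print(f"tank buffer:           {buffer_hours:.1f} h")
```

The implied hourly demand (127/24 ≈ 5.3 m³/h) agrees with the stated Qh = 5 m³/h, and the 48 m³ tank holds roughly nine to ten hours of average demand, a plausible buffer against supply interruptions.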
The wastewater produced in the process of filter backwashing is discharged into the industrial treatment plant. The first factory (DF-1) processes approx. 50,000 L of milk per day. DF-1 has its own drilled well and an additional connection to the municipal water supply network, so that in case of an accident or increased production, water can be drawn from the municipal supply. The presented dairy cooperative uses water for technological purposes, washing of the plant and equipment, steam generation and heating, as well as partly for social purposes. BOD5 and suspended solids are the main pollutants, arising from leaks, spills, and the removal of adhering materials during cleaning and sanitizing of equipment, from cleaning and sanitizing solutions washed out with water, and from cooling water.

DF-1 holds a permit for the uptake of groundwater. The permissible amounts of individual pollutants in the wastewater discharged into the sanitary sewage system are determined by a separate water permit held by the facility. At the production facility, there are three types of sewer network, namely the sanitary sewage system, the storm water network, and the technological wastewater network. The storm water system is used to drain rainwater from the site. Dairy effluent is pre-treated in an industrial wastewater treatment plant. Dairy processing discharges wastewater characterized by a high organic load due to the presence of milk components. The wastewater is also characterized by large variations in temperature and pH. Other parameters of importance for dairy effluent are high COD, TSS, N, and P. As indicated also by Borbon et al. , high BOD is likewise characteristic of dairy effluents. The consumption amounts are as follows: the daily average Qavd = 2,160 m3/day, the hourly average Qavh = 90 m3/h, and the hourly maximum Qmaxh = 180 m3/h. Raw water from the wells is transported under pressure by pumps to the drinking water treatment plant, where it is subject to processes of iron and manganese removal and chlorination.
Then, it is pumped into two underground reservoirs with a capacity of 500 m3 each and subsequently distributed throughout the whole plant as treated drinking water. Water used for the supply and charging of boilers requires additional preparation, including processes such as filtration, softening, and reverse osmosis. The DF-2 factory makes use of CIP systems, which limits the consumption of raw water by reusing the water from the last rinse of equipment in the first rinse cycle. DF-2 also holds a water permit for underground water uptake and a wastewater permit to discharge wastewater. If whey from the cheese-making process is not used as a by-product and is discharged along with other wastewaters, the organic load of the resulting effluent is further increased. The DF-2 factory processes approx. 330,000 L of milk per day. At DF-2, the water supply system is based on its own underground water intake. The water intakes are located approximately 0.75–1.00 km from the plant. Noteworthy is the fact that a small amount of the water produced can be sold to third parties. The water intake, as in the case of DF-1, includes a defined zone of primary and intermediate protection. DF-2 owns a water permit for groundwater intake from its own sources serving production and social purposes, as issued by the head of the district.

The dairy processing plants manufacture various dairy products in which the primary ingredient is raw milk. In this paper, the presented case studies covered DF-1 (milk production) and DF-2 (cheese production line). The processes taking place at the milk plant include receipt and filtration of raw milk; separation of all or part of the milk fat for standardization of market milk; pasteurization and homogenization, if required; followed by packaging and storage, including refrigerated storage.
In the cheese production line, milk is separated into skim milk and cream, then pasteurized, followed by specific processes depending on the desired product. The product is packaged and stored before being distributed. Selected processes employed for the manufacturing of various products are indicated in Fig. .

The basic element of proper water management in the food industry is to ensure adequate water quality. This is a prerequisite for the prevention of incidents arising from security threats. Risk prevention consists of detection and identification (recognition). Risk control in both factories is carried out by specially developed programs to monitor the quality of raw water and treated water. Such programs provide appropriate steps throughout the whole chain: supply, production, and distribution of water. The most important parameter requiring constant monitoring, though insignificant from the point of view of the analysis of sanitary water security threats, is iron. Iron, as a component of water-bearing rocks, occurs in the waters of almost 80% of underground sources. Although harmless from the point of view of health, it causes significant trouble in production processes, particularly affecting negatively the quality of dairy products. The most important parameters of the quality of raw water at the individual plants, with reference to the current standards, are presented in Table . Since the problem of excessive iron concentration in raw water has been recognized at both plants, both analyzed plants are equipped with iron removal systems. The iron removal systems operate on the basis of the same unit processes, namely aeration, which aims to oxidize ferrous iron, and rapid filtration, which retains the precipitated iron compounds. In order to ensure microbiological safety after the iron removal process, at both plants the water is disinfected with chlorine.
Analysis of archival data and interviews with employees at the plants leads to the conclusion that the operating treatment system ensures the required security level, both in terms of quantity and quality of the water used in the manufacturing process. As previously mentioned, process water which may have contact with food must meet strict quality requirements, which prevents it from being reused. Nonetheless, the water used for washing machines and halls, especially when used for the first washing, is not subject to such restrictive requirements. This approach was adopted at DF-2, where the CIP system has been implemented. After completion of the technological process, when the machines and equipment have been cleared of product and detached from the supply tanks, rinsing is conducted in a closed circuit. The process of cleaning the line begins with a pre-wash, which removes the remaining product from the washed surfaces. The cleaning agent, containing many contaminants, is removed from the system; this heavily contaminated preliminary water is transported directly to the treatment plant. The initial washing is followed by a proper wash with the use of detergents. The solution remaining from the last washing is passed to the pre-wash tank at the washing station. The process ends with rinsing with clean water. Considering that such water may have contact with food production, it must meet the requirements for drinking water.

Other requirements are set for the water used for steam production. In the dairy industry, technological steam is used not only as a heat carrier but also, and perhaps primarily, as a disinfecting agent in the processes of sterilization and pasteurization. Some processes require direct contact of steam with the product.
If the steam used in the production process has direct contact with food products, the parameters of the water used for steam production should correspond to the parameters of drinking water, including the boiler water used for steam boilers. Such water must meet the requirements defined by the manufacturers of the boilers, which usually relate to water hardness. The requirements depend on the design of a boiler and increase accordingly with its operating pressure. At both study sites, there are water softener stations which effectively remove hardness to levels below 0.01 mval/l, supporting the operation of the boilers. In addition, water in the boiler system is conditioned through the use of chemicals that do not have toxic, carcinogenic, mutagenic, or harmful properties and are thermostable under normal operating conditions of the boiler. These substances are approved by the National Institute of Health for contact with food. Both the basic ingredients of these formulations and the complementary stabilizing additives belong to the category of chemicals listed as food additives. This approach gives both companies confidence in safety, from the point of view of both boiler construction and steam quality, and thus the quality of the dairy products.

Another problem regarding water management at the companies is ensuring the supply of water to maintain green areas in the summer, and the provision of sufficient quantities of water in case of fire. In these cases, the requirements for water quality are not as restrictive, since this part of water management depends on the quantity of water available. At both plants during the summer months, the maintenance of green areas was carried out with raw water and additionally with treated wastewater whose quality allowed such use (Zahuta ).
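The softener target quoted above is expressed in mval/l (milliequivalents per litre). For comparison with hardness limits more commonly quoted as CaCO3, the standard water-chemistry equivalence 1 mval/l ≈ 50 mg/l as CaCO3 converts it directly; a minimal sketch (the helper name is ours):

```python
def mval_per_l_as_mg_caco3(hardness_mval_per_l):
    # Standard equivalence: 1 mval/l (= 1 meq/l) of hardness
    # corresponds to 50 mg/l expressed as CaCO3.
    return hardness_mval_per_l * 50.0

# The softener target of < 0.01 mval/l reported above:
print(mval_per_l_as_mg_caco3(0.01))  # → 0.5 mg/l as CaCO3
```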
Dairy processors are aggressively challenged to conserve water, which necessitates not only reducing water consumption but also employing measures for the recovery and recycling of process water without compromising the hygienic quality and safety of the products. In Poland, there are norms of water consumption specified by the Minister of Infrastructure Regulation of 14 January 2002 on determining the average water consumption standards. Specific standards of average water consumption for dairy products are presented in Table . In the literature, such consumption figures are reported by Flemmer , Perry and Stei .

The respective processes have been analyzed at the plants in order to evaluate water consumption. Water consumption indicators broken down by technological processes in both analyzed companies are presented in Tables . The diversity of indicators at the individual factories was found to result from the diversity in the type of production. When compared with the literature data presented in the introduction and in Table , the data indicate a high potential of this industry to use water reuse systems. According to literature data, recovery of used water can save up to 20–40% of the total costs associated with the production of water (Milani et al. ). This is an essential fact because Poland is in the process of major legislative changes and the introduction of a significant increase in fees for the use of water from primary sources. Therefore, any change will translate into tangible economic savings and improved enterprise competitiveness. Undoubtedly, water consumption is also influenced by the nature of dairy production. Most water is required at plants that produce milk powder and cheese; a little less is consumed at plants producing drinking milk.
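The indicator comparison discussed in this section reduces to simple arithmetic: water drawn per unit of product, and the relative change between reporting periods. A minimal sketch, using hypothetical figures chosen only to illustrate a reduction of the order achieved at DF-1 (they are not the plants' actual data):

```python
def consumption_indicator(water_m3, product_litres):
    # Water-consumption indicator: m3 of water per 1000 L of product.
    return water_m3 / (product_litres / 1000.0)

def percent_reduction(before, after):
    # Relative improvement between two reporting periods.
    return 100.0 * (before - after) / before

# Hypothetical daily figures for a DF-1-sized plant (50,000 L of milk/day):
before = consumption_indicator(180.0, 50_000)   # 3.6 m3 per 1000 L
after = consumption_indicator(162.0, 50_000)    # 3.24 m3 per 1000 L
print(round(percent_reduction(before, after)))  # → 10
```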
In order to compare the levels of water consumption at both analyzed plants, average water consumption indicators have been calculated: the average for the years 2011–2014, and for 2015 only, after the introduction of the water management changes. The data are presented in Table . When taking into account the year-by-year data, we may observe a reduction in water consumption per liter of product. These improvements are attributed to developments in process control and cleaning practices. During the research period, a system of simple changes lowering water consumption in the washing process was implemented at the presented case-study plants. Additionally, a percentage of the wastewater was used for green area irrigation, and all leaks were monitored. At DF-1, a 10% decrease in the water consumption indicator was achieved; at DF-2, almost 15%, presumably because DF-2 uses CIP, and even a slight modification to the CIP resulted in a large reduction in fresh water use. The significant impact of controlling and optimizing cleaning parameters on water consumption was also described by Wojdalski et al. . The above data allow us to draw very optimistic forecasts and conclusions, namely that the existing measures aimed at reducing water consumption from primary sources do bring the expected results.

The problems of environmental protection in the industrial sectors are becoming more and more relevant, with strict legal requirements that imply considerable investments. This encourages researchers to look for new systemic solutions and methodologies to improve the efficiency of water management. Industrial plants are specific in terms of the quantity and quality of treated water, the applied technologies and technical solutions, and their specific operational regime.
Thus, the decision about selecting the most appropriate type of technology is very individual and should be tailored to the selected industrial plant. Management strategies for improving the water productivity of dairy production have to start at the source of the water. Since water treatment technology significantly influences the total consumption of water at industrial plants, the source of water and the unit processes should be properly selected, so that water consumption for technological purposes is as low as possible. In the literature, we could find water consumption rates even of 1.3–2.5 m3 of water/m3 of milk intake (Wojdalski et al. ). Further technologies used in production require large quantities of water, used either for washing machines, cooling systems, or product processing. The possible ways these operations can be modified, or employee practices changed, to reduce water use are identified and discussed. The role of management in process water and waste control is an important factor for rational water management in the industry.

Congenital adrenal hyperplasia (CAH) comprises a group of autosomal recessive inherited disorders that arise due to defects in one of the enzymes of the steroidogenesis pathway in the adrenal glands. Ninety-five percent of cases occur due to deficiency of 21-hydroxylase (21-OH). Clinically, CAH due to 21-OH deficiency presents in two distinct forms, classic CAH and non-classic CAH. Females with the classical form present with genital ambiguity, while the presentation in males is more subtle, with severe electrolyte disturbances being the initial manifestation in many cases. Arrhythmias are a rare manifestation of CAH. We report the case of an 18-day-old male child who presented with pulseless ventricular tachycardia and was later diagnosed with congenital adrenal hyperplasia based on the laboratory finding of elevated 17-hydroxyprogesterone (17-OHP) levels.
Our case reveals that fatal arrhythmias such as pulseless ventricular tachycardia can be the primary manifestation of the adrenal insufficiency of CAH even in the absence of any physical findings, and hence clinicians should always maintain a strong suspicion for CAH in any child presenting with an unexplained arrhythmia. Furthermore, this case also highlights the need for CAH screening in neonates so that appropriate hormone replacement can be initiated before the development of a life-threatening adrenal crisis.

Congenital adrenal hyperplasia (CAH) comprises a group of autosomal recessive inherited disorders that arise due to defects in one of the enzymes of the steroidogenesis pathway in the adrenal glands. More than 95% of cases of CAH are due to the deficiency of 21-hydroxylase, the enzyme responsible for the conversion of progesterone into deoxycorticosterone and of 17-hydroxyprogesterone to 11-deoxycortisol .

In males with classic CAH, the electrolyte imbalances may present before the recognition of genital abnormalities. The subtle virilizing features in males make CAH difficult to diagnose until the electrolyte imbalance is severe. Patients with CAH are prone to develop cardiac arrhythmias due to electrolyte imbalances, particularly hyperkalemia. Some rare cases of CAH have presented in the form of cardiac arrest and ventricular arrhythmias .

An 18-day-old male baby, the first product of a non-consanguineous marriage, born at full term through normal vaginal delivery, was brought to the emergency department in an unresponsive state. According to the parents, the child had been vomiting and eating poorly for the past two days. Birth history was unremarkable with no antenatal or postnatal complications.

On admission, his blood pressure and peripheral pulses were undetectable. He was bradycardic (heart rate 40/minute) and moderately dehydrated. He was unresponsive, with shallow breathing at a respiratory rate of 33 breaths per minute.
His oxygen saturation was 92% and his temperature was 37°C. Capillary refill time was four seconds. Random blood sugar was 32 mg/dl. Cardiovascular examination revealed muffled heart sounds but no murmurs. Central nervous system examination revealed normal tone, reactive pupils and normal fontanelles. The remainder of the systemic examination was also unremarkable.

An electrocardiogram (ECG) was instantly obtained, which revealed ventricular tachycardia (Figure ). Laboratory test results showed hemoglobin 15.1 g/dl, mean cell volume 90 fL, total leukocyte count 26,600/mm3 with neutrophils 56.3% and lymphocytes 31.8%, platelets 652,000/mm3, and C-reactive protein (CRP) 1.9.

The electrolyte report revealed sodium 123 mEq/l, potassium 6.0 mEq/l, chloride 80 mEq/l, calcium 9.6 mg/dl and magnesium 1.9 mg/dl, while creatinine and blood urea nitrogen (BUN) were 1.4 mg/dl and 37 mg/dl, respectively. His labs revealed significant hyperkalemia, which was most likely the underlying cause of the ventricular tachycardia. The high potassium and low sodium levels made congenital adrenal hyperplasia a plausible diagnosis.

A laboratory test for the detection of 17-hydroxyprogesterone was sent, and it revealed high levels of 17-hydroxyprogesterone (320 ng/ml). Renal ultrasound was done to check for adrenal hyperplasia, but it came out to be normal. The genital examination was unremarkable with no ambiguity, the penis was of normal length, and no skin hyperpigmentation was noted on the axilla, neck, or genitals. The child was diagnosed with CAH based on the laboratory results of increased levels of 17-hydroxyprogesterone. To determine the type of CAH, tests for plasma renin and aldosterone were also performed. Plasma renin came out to be elevated , and serum aldosterone was also high . The infant was discharged on hydrocortisone (dosed per mg/m2/24 hours, in three divided doses) and fludrocortisone (0.2 mg daily in two divided doses), along with supplementary NaCl (8 mmol/kg).
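The biochemical pattern that pointed towards CAH here (hyponatremia with hyperkalemia and hypoglycemia in a neonate) can be expressed as a simple screening rule. The sketch below is illustrative only: the sodium and glucose cut-offs are our assumptions, the potassium cut-off follows the >5.5 mEq/l definition of hyperkalemia used in this report, and none of it is clinical guidance:

```python
def adrenal_crisis_pattern(na_meq_l, k_meq_l, glucose_mg_dl):
    # Illustrative thresholds only: Na < 135 and glucose < 45 are our
    # assumptions; K > 5.5 mEq/l matches the hyperkalemia definition
    # cited in this report. Not clinical guidance.
    return na_meq_l < 135 and k_meq_l > 5.5 and glucose_mg_dl < 45

# Values reported for this patient: Na 123, K 6.0, glucose 32 mg/dl
print(adrenal_crisis_pattern(123, 6.0, 32))  # → True
```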
Parents were advised to consult the doctor in case the child fell ill, as stress requires an increment in the dose of glucocorticoids to prevent adrenal crisis. The dose was also increased prior to circumcision. Follow-ups in the outpatient department have shown normal electrolytes and ECG, and optimal growth and development of the child.

The simple virilizing form has no features of salt-losing crisis and usually manifests late in childhood with precocious puberty or with clitoral or penile enlargement secondary to excess adrenal androgens . Our patient presented with a life-threatening salt crisis and electrolyte imbalances, features of the classic salt-wasting form of CAH. Patients with classical CAH are prone to develop cardiac arrhythmias secondary to electrolyte imbalances, particularly hyperkalemia. However, in most cases of CAH, hyperkalemia has resulted in sudden cardiac death, and arrhythmias have rarely been documented .

Hyperkalemia is defined as a potassium level of >5.5 mEq/l. The ECG findings of hyperkalemia are progressive: from tall peaked T waves and a shortened QT interval, to prolongation of the PR interval and loss of P waves, and then to diffuse widening of the QRS complex and, if untreated, death .

Shockable rhythms, like pulseless ventricular tachycardia (PVT), are a rare event in neonates -13. Deterioration of VT into pulseless ventricular tachycardia and ventricular fibrillation may occur. In pulseless VT, the patient is apneic and unresponsive with undetectable peripheral pulses, as was documented in our case. The management of ventricular tachycardia and pulseless ventricular tachycardia is different: pulseless VT is a shockable rhythm, and early recognition is necessary for prompt management, as was seen in our case . A similar case of ventricular tachycardia in a 20-day-old male child with CAH was reported by Virdi et al. . The salt-wasting form of classical CAH is characterized by low levels of serum aldosterone .
Surprisingly, serum aldosterone in our patient was elevated. The adrenal ultrasound of this child was also unexpectedly normal; although unusual, this does not rule out the diagnosis of CAH, and a normal adrenal ultrasound in untreated children with CAH has been documented by Al-Alwan et al. . 21-hydroxylase deficiency, in its classic SW form, is characterized by markedly elevated levels of 17-OHP, usually >600 nmol/ml or >2000 ng/ml. Corticosteroids are the mainstay of treatment of classical CAH . Newborn screening programs using 17-OHP assays have been initiated in many countries, resulting in a considerable decrease in the morbidity and mortality that was previously attributable to severe adrenal crisis in the salt-wasting forms of CAH .

Our case reveals that rare fatal arrhythmias, such as a pulseless ventricular tachycardia, can be the primary manifestation of the adrenal insufficiency of CAH in a neonate even in the absence of any physical findings. Electrolyte testing should be sought in any infant who presents with significant unexplained arrhythmias without any evidence of congenital heart disease. Moreover, our case also highlights the need for CAH screening in newborns in developing countries like Pakistan, so that affected neonates can be identified before the development of the life-threatening salt-losing crisis and appropriate hormone replacement can be initiated.

Fast food restaurant-fried potato chip serving (FFRPCS) aldehyde contents were also monitored. Substantially lower levels of aldehydes were generated in the MRAFO product than those observed in PUFA-richer oils during LSSFEs. Toxicologically-significant concentrations of aldehydes were detected in FFRPCSs, and in potato chips exposed to DBRDFEs when using a PUFA-laden sunflower oil frying medium: these contents increased with augmented deep-frying episode repetition. FFRPCS aldehyde contents were 10–25 ppm for each class monitored.
In conclusion, the MRAFO product generated markedly lower levels of food-penetrative, toxic aldehydes than PUFA-rich ones during LSSFEs. Since FFRPCS and DBRDFE potato chip aldehydes are predominantly frying oil-derived, PUFA-deplete MRAFOs potentially offer health-friendly advantages.

Human ingestion of cytotoxic and genotoxic aldehydes potentially induces deleterious health effects, and high concentrations of these secondary lipid oxidation products (LOPs) are generated in polyunsaturated fatty acid (PUFA)-rich culinary oils during high temperature frying practices. Here, we explored the peroxidative resistance of a novel monounsaturate-rich algae frying oil (MRAFO) during laboratory-simulated shallow- and domestically-based repetitive deep-frying episodes (LSSFEs and DBRDFEs respectively), the latter featuring potato chip fryings. Culinary frying oils underwent LSSFEs at 180 °C, and DBRDFEs at 170 °C; aldehydes were determined by 1H NMR analysis.

Mechanisms available for this process primarily involve the oxidative conversion of such UFAs to primary lipid oxidation products (LOPs), commonly described as lipid hydroperoxides (those arising from MUFA and PUFA sources being abbreviated HPMs and CHPDs respectively), a process sequentially followed by their fragmentation to secondary ones, the latter including extremely toxic aldehydes in particular [2]. Further HPM and CHPD degradation products include epoxy acids, alcohols, ketones, oxoacids, alkanes and alkenes, in addition to further toxic oxidation and fragmentation products [5]. The peroxidation of unsaturated fatty acids (UFAs) occurs at temperatures commonly used for standard frying or cooking episodes; monounsaturated fatty acids (MUFAs), however, are much more resistant to oxidation than polyunsaturated ones (PUFAs), and hence they give rise to lower levels of only particular LOPs when heated in this manner, and generally only after exposure to prolonged thermal stressing episodes at standard frying temperatures.
Therefore, the order and extent of toxic LOP production in culinary oils is PUFAs > MUFAs >>> saturated fatty acids (SFAs), and the relative oxidative susceptibilities of 18-carbon chain length fatty acids (FAs) containing 0, 1, 2 and 3 carbon-carbon double bonds (i.e. >C=C< functions) are 1:100:1,200:2,500 respectively [8]; these results have been available to the scientific, food and public health research communities since 1994 [6]. Indeed, samples of repeatedly-used oils collected from domestic kitchens, fast-food retail outlets and restaurants have confirmed the generation of these aldehydes at high concentrations during 'on-site' frying practices. Such results have been repeated, replicated and ratified by many other research laboratories worldwide (most notably [9]). We can also employ these NMR techniques to monitor the corresponding degradation of culinary oil PUFAs and MUFAs during such standard frying/cooking practices [7], and also to detect and quantify a range of further LOPs, i.e. CHPDs and HPMs, ketones and alcohols, together with toxic epoxy acids, the latter including leukotoxin and its derivatives such as isoleukotoxin and leukotoxindiol [9].

Previous NMR-based investigations, focused on the peroxidative degradation of culinary oil UFAs during standard frying practices or corresponding laboratory-simulated thermal stressing episodes, have demonstrated the thermally-promoted generation of very high levels of highly toxic aldehydes and their hydroperoxide precursors in such products (particularly those rich in PUFAs). These toxins are available in vivo [10] following oral ingestion, where they have access to, and may cause damage to, cells, tissues and essential organs.
Indeed, these agents have been demonstrated to promote a broad spectrum of concentration-dependent cellular stresses, and their adverse health properties include effects on critical metabolic pathways ; the promotion and perpetuation of atherosclerosis and cardiovascular diseases [14]; mutagenic and carcinogenic properties [19]; teratogenic actions ; the exertion of striking pro-inflammatory effects [22]; the induction of gastropathic properties (peptic ulcers) following dietary ingestion [23]; neurotoxic actions, particularly for 4-hydroxy-trans-2-nonenal (HNE) and 4-hydroxy-trans-2-hexenal (HHE) [24]; and impaired vasorelaxation coupled with the adverse stimulation of significant increases in systolic blood pressure [25]. Further deleterious health effects include chromosomal aberrations, which are reflective of their clastogenic potential, sister chromatid exchanges and point mutations, in addition to cell damage and death [27].

Of critical importance to their public health risks as food-borne toxins, typical chemically-reactive α,β-unsaturated aldehydes produced during the thermal stressing of culinary oils according to standard frying practices are absorbed from the gut into the systemic circulation [18]. Moreover, these secondary LOPs are also able to form covalent adducts with many proteins via Schiff base or Michael addition reactions [10], and these can induce significant structural and conformational changes in these macromolecules, which serve to impair their biocatalytic functions. However, until recently, such toxicological considerations have generally continued to elude interest or focus from many food industry and public health researchers.

The toxicity of these aldehydes, particularly the α,β-unsaturated ones, is ascribable to their aggressive chemical reactivity.
Indeed, they cause damage to critical biomolecules such as DNA: since they are powerful electrophilic alkylating agents, α,β-unsaturated aldehydes readily alkylate DNA bases to form adducts, and this serves to explain their mutagenic and carcinogenic properties.

The recent development of algae-derived cooking oils has provided much scope and benefits regarding the effective bioengineering of their triacylglycerol FA contents, in particular for their valuable uses as cooking oils. Indeed, the novel MUFA-rich algae frying oil (MRAFO) tested in this work represents the first ever such algal product available to consumers in the USA. In addition to its potential resistance to thermally-induced, O2-fueled peroxidation during common frying cycles, this predominantly MUFA-containing [i.e. >90% (w/w)] culinary oil offers further major advantages, including a very high smoke-point of 252 °C, and also a neutral taste contribution.

We employed 1H NMR analysis to determine the concentrations of a series of highly toxic classes of aldehydic LOPs therein as a function of heating time, i.e. from 0–90 min. for LSSFEs, and 8 × 10 min. DBRDFEs (the latter featuring a 30 min. oil cooling rest period between each frying cycle). The time-dependent production of epoxy acid LOP toxins was simultaneously monitored in all oils investigated. Such experiments serve to provide valuable information and insights regarding the possible health-threatening effects of these aldehydes when ingested in human diets featuring fried food sources of these toxins, e.g.
potato chips, fish fillets, battered chicken, chicken strips, etc., and here we have also demonstrated, for the first time, the availability for human consumption of high, toxicologically-significant (up to 25 ppm) levels of two major classes of α,β-unsaturated, and one major class of saturated, aldehydes in servings of fried foods collected directly from fast food retail outlets/restaurants, and also in potato chips subjected to DBRDFEs when PUFA-rich sunflower oil is used as the frying medium. The potential deleterious health effects presented by these oils when employed as frying media, particularly those associated with PUFA-rich frying oil sources of dietary aldehydes, are discussed in detail.

Therefore, in view of the much lower susceptibility of MUFAs to oxidation relative to PUFAs, in this study we have explored the oxidative resistance of the above high-stability MRAFO product during laboratory-simulated standard shallow frying practices, i.e. one of its major culinary applications, and also during DBRDFEs. For this purpose, we exposed this MRAFO, in addition to commonly-utilised sunflower, corn, canola and extra-virgin olive oils, to these episodes at 180 °C and 170 °C respectively, and employed multicomponent high-resolution 1H NMR analysis. This analysis demonstrated the thermally-induced generation of aldehydic LOPs in all oils investigated, and Fig.  shows 1H NMR profiles of the time-dependent production of -CHO function resonances ascribable to a range of these toxins when culinary oil products were heated according to our LSSFEs.
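Quantification of -CHO resonances by 1H NMR follows the standard quantitative-NMR relation: concentration scales with the peak integral per contributing proton, referenced to a resonance of known concentration. A minimal sketch of that arithmetic (the function and the example numbers are ours, not values from this study):

```python
def conc_from_nmr(i_analyte, n_analyte, i_std, n_std, c_std):
    # Quantitative 1H NMR: concentration is proportional to the peak
    # integral per contributing proton, referenced against a standard
    # resonance of known concentration c_std.
    return (i_analyte / n_analyte) / (i_std / n_std) * c_std

# A -CHO resonance (1 proton) integrating to 0.02 against a standard
# resonance (2 protons) integrating to 1.0 at 10 mmol/kg:
print(conc_from_nmr(0.02, 1, 1.0, 2, 10.0))  # → 0.4 mmol/kg
```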
Spectra of heated sunflower, corn and canola oils also contained resonances assignable to aldehydic precursors, in particular cis,trans- and trans,trans-CHPDs , and cis,trans-conjugated hydroxydienes (δ = 5.40–6.50 ppm range), as previously reported8; broad -OOH resonances were also visible in spectra acquired on thermally-stressed extra-virgin olive oil and MRAFO products, although presumably they largely arise from HPMs rather than CHPDs in these cases. The CHPDs detectable are produced during recycling O2- and heat-stimulated peroxidative bursts throughout the whole simulated frying process, especially since these LOPs remain detectable in spectra acquired on the PUFA-rich oils after a 90 min. LSSFE period. Moreover, relatively low concentrations of these aldehydes and their CHPD precursors were also detectable in unheated sunflower and corn oil products [Fig. cts Fig. .Figure 1trans-2-alkenals [(E)-2-alkenals], trans,trans- and cis,trans-alka-2,4-dienals , along with 4-hydroperoxy-/4-hydroxy-, and 4,5-epoxy-trans-2-alkenals [the latter three all substituted (E)-2-alkenal derivatives] confirmed that it was assignable to an α,β-unsaturated aldehyde, since it was directly connected to two vinylic proton multiplet signals located at δ = 6.59 and 5.95 ppm -2-butenal , which has corresponding signals at δ = 10.10 ppm , 6.70 (dq) and 5.97 ppm (ddq) .Also notable was the diminishing intensity of the omega-3 FA (linolenoylglycerol) 1H NMR spectra acquired on potato chip samples purchased from fast-food restaurants (data not shown).The above epoxy acid resonances were also observed in 10, where they have the potential to exert a wide range of adverse health effects in humans. In particular, HNE and HHE represent highly toxic and carcinogenic secondary LOPs derived from the thermally-induced degradation of culinary oil PUFAs29; indeed, HNE is also a toxic second messenger30. 
However, since these α,β-unsaturated aldehydes predominantly arise from glycerol-bonded linoleate (HNE) and omega-3 FAs (HHE), little or none of them were detectable in thermally-stressed MRAFO and olive oil products (or other PUFA-deplete ones), since these contain only 4 and 5–10% (w/w) linoleoylglycerols respectively, and both products contain ≤1% (w/w) linolenoylglycerols. This also explains why much lower levels of similarly toxic trans,trans-alka-2,4-dienals are generated in such MUFA-rich oils when subjected to thermal stressing episodes. Heating of linoleoylglycerol- and linolenoylglycerol-rich culinary oil products according to LSSFEs generates very high levels of a range of extremely toxic aldehydic LOPs. These established toxins have been proven to be absorbed from the gut into the systemic circulation following their dietary ingestion. Fragmentation of MUFA-derived HPMs yields only a limited range of aldehydes (n-alkanals and trans-2-alkenals, the former of which are arguably of a lower toxicity than the latter), whereas a much broader pattern of these agents is produced from PUFA-derived CHPDs9. Intriguingly, the total unsaturated aldehyde concentration determined in PUFA-rich sunflower oil heated for a period of 90 min. according to our LSSFEs was very close to or exceeded a staggeringly high figure of 0.05 mol.kg−1. Although significant amounts of aldehydic LOPs also arise from MUFAs, these were only generated at prolonged heating times, i.e. significant lag phases preceded their evolution (Figs.). Moreover, the MRAFO product proved more resistant to O2-driven peroxidation processes occurring during standard frying practices than frequently employed PUFA-laden ones. Indeed, significantly lower levels of trans-2-alkenals and n-alkanals were generated in this product at each sampling time-point following exposure to LSSFEs at 180 °C. Furthermore, markedly lower concentrations of PUFA-derived aldehydes, such as trans,trans-alka-2,4-dienals and 4,5-epoxy-trans-2-alkenals, were detected in this product when thermally-stressed in this manner, as expected.
Indeed, samples of this oil collected at the 10 and (extreme pan-frying) 20 min. LSSFE time-points contained little or no toxic aldehydes, whereas substantially higher concentrations of these toxins were found in corresponding samples of PUFA-rich corn and sunflower oils. Our results therefore clearly demonstrate that the predominant, >90% (w/w) MUFA source of the MRAFO product renders it much more resistant to thermally-induced, autocatalytic, O2-driven lipid peroxidation. Therefore, when used as a medium for shallow frying purposes, this MUFA-rich algae oil serves to offer a high level of protection to human consumers, i.e. it is anticipated that much lower quantities of toxic aldehydes will permeate into food matrices during frying episodes performed with it. These findings complement our previous 1H NMR investigations, which compared the extent of aldehyde generation within fixed volumes of oils heated according to standard frying practices in vessels of increasing diameter7. Concentrations of trans-2-alkenals, trans,trans-alka-2,4-dienals and n-alkanals for the LSSFEs performed were found to be ca. 6, 3.5 and 3 mmol./mol. FA respectively for sunflower oil at the 60 min. heating time-point, whereas these values were only ca. 0.13, 1.0 and 0.5 mmol./mol. FA respectively following the 6th repetitive frying episode (i.e. 6 × 10 min. sessions with 30 min. cooling periods between each one) performed with this oil according to our domestic deep-frying protocol. Moreover, at the 60 min. heating time-point, although LSSFE levels of trans-2-alkenals, trans,trans-alka-2,4-dienals and n-alkanals were ca. 5, 1.7 and 2 mmol./mol. FA respectively for extra-virgin olive oil, and ca.
1, 0.3 and 0.5 respectively for the MRAFO product, little or none of these were found in either of these oils when exposed to our DBRDFEs; low concentrations were only detectable in these oils at the later DBRDFE episodes. The higher levels of aldehydes found in sunflower, extra virgin olive and MRAFO oils when exposed to our LSSFEs than those found in the ‘at-home’ domestic frying ones (DBRDFEs) are predominantly explicable by the fact that the former processes were performed under shallow frying conditions, whereas the latter domestic ones involved deep-frying, albeit repetitive sessions. Indeed, these results are fully consistent with our previously conducted investigations7. Specifically, these results arise from the greater surface area of the frying oil medium during shallow frying practices, and hence a greater exposure of it to atmospheric O2 required for the peroxidation process, and also the subsequent dilution of surface-formed aldehydic LOPs into the larger volume frying medium available in the deep-frying experiments conducted. A further, albeit less significant, explanation is that chain-breaking antioxidants such as tocopherols in the oil products tested are likely to be more effective at suppressing the peroxidation process under deep- rather than shallow-frying conditions in view of the much lower levels of aldehydic LOPs formed during the former process, i.e. such antioxidants will have an enhanced ability to compete for lipid peroxyl radicals. Additionally, conceivably there will be less volatilisation and thermally-induced degradation of these antioxidants at the lower temperature employed for these domestic deep-frying experiments. However, these results are, of course, also explicable by the lower temperature employed for the domestic deep-frying experiments, i.e. 170 rather than 180 °C used for the LSSFEs.
A temperature of 170 °C was used for the former experiments since the method of Boskou et al.31 was followed. However, the lower levels of aldehydes found in culinary oils exposed to deep-frying processes may, at least in part, be compromised by a greater extent of oil absorption by the fried food available for human consumption. The marked susceptibility of omega-3 FAs to thermo-oxidation is also very likely to exert a major effect on the omega-6 to omega-3 FA concentration ratio health indices of the cooking oils evaluated here, and this is discussed in more detail in section S6. In view of the very high susceptibility of omega-3 FAs to thermoxidative deterioration, both Belgium and France have sensibly adopted regulations which limit the linolenate content of frying oils to 2% (w/w)32. Information regarding the toxicological concerns and actions of inhaled or ingested trans,trans-deca-2,4-dienal is provided in section S7. Intriguingly, the above concentrations of aldehydes arising from the thermal stressing of commercially-available culinary oils represent only the fraction remaining therein8. Indeed, a large number of these secondary LOPs generated are volatilised at standard frying temperatures, and this also presents austere health hazards in view of their inhalation by humans, especially those working in fast-food retail outlets or restaurants with insufficient or inadequate ventilation precautions. This is especially the case for secondary LOPs arising from the oxidation of linoleoyl- and linolenoylglycerols. Indeed, a substantial fraction of such aldehydes have boiling-points (b.pts) < 180 °C, notable examples being trans-2-heptenal, trans,trans-deca-2,4-dienal and n-hexanal from peroxidation of the former source33, and acrolein, trans,trans-2,4-heptadienal and propanal from peroxidation of the latter acylglycerol.
Contrastingly, major aldehydes derived from the peroxidation of oleoylglycerols include nonanal, decanal, trans-2-undecenal and trans-2-decenal, which have b.pt values of 194, 213, 234 and 230 °C respectively33, and hence this consideration amply, albeit indirectly, serves to reinforce the hypothesis that MUFA-rich cooking oils are much less susceptible to thermally-induced oxidation than PUFA-rich ones, since despite their low b.pts and hence greater volatilities, residual oil trans-2-alkenal and n-alkanal concentrations in post-heated PUFA-rich oils are always greater or much greater than those observed in corresponding MUFA-rich ones such as MRAFO and, to a lesser extent, canola and olive oils. However, the heating/frying time-dependence of these differences observed is also a critical factor for consideration. Furthermore, an additional aldehyde arising from the fragmentation of HPMs is n-octanal, which has a b.pt of 173 °C33; hence, presumably somewhat higher concentrations of this LOP would be expected to be found in the gaseous (volatile emission) phase during standard frying practices, whereas higher levels of the above alternative HPM-sourced aldehydes may be anticipated to remain in the thermally-stressed oil medium, together with accessible foods fried therein. Previous investigators33 found that the predominant trans-2-alkenals and n-alkanals detected in the headspace of extra-virgin olive oil heated at a temperature of 190 °C for 20 hr. in a stainless steel frying tank were n-nonanal, and a combination of trans-2-decenal and -undecenal, respectively. Similar results were found by Fullana et al.
in 200434, who also observed much lower headspace concentrations of peroxidised linoleoylglycerol-derived trans,trans-alka-2,4-dienals than those of trans-2-alkenals and n-alkanals for olive oil when thermally-stressed at 180 °C for a period of 15 hr. in a closed Pyrex Instatherm reaction flask and head. However, Guillen and Uriarte (2012) […]. Here, we have also demonstrated, for the first time, the 1H NMR detection of cis-2-alkenals in thermally-stressed culinary oils (section S1). Such cis- isomers, including cis-2-butenal, are also known to be secondary LOPs which arise from the peroxidation of PUFA sources4. However, it is conceivable that they may also arise from the thermally-induced isomerisation of their corresponding trans-2-alkenals, and this may explain their generation at only the later LSSFE time-points. Indeed, cis-2-heptenal may arise from cis-trans isomerism of its trans-isomer, which is a β-homolysis product of linoleate-12-hydroperoxide35. PCA of our oil dataset confirmed a correlation between cis- and trans-2-alkenal resonances, i.e. they were both found to load significantly and positively on the second orthogonal PC37. However, we found no major multivariate statistical evidence for associations between the cis-2-alkenal 1H NMR resonance intensities and those of either cis,trans- or trans,trans-alka-2,4-dienals. Indeed, this possible precursor (linoleate-12-hydroperoxide) may also decompose to 2,3- or 4,5-epoxyaldehydes, which are then further degraded to combinations of either isomeric 2-octenals and acetaldehyde, or glyoxal and 2-octene38. Although these mechanistic pathways appear to be consistent with Boskou et al.’s results31, our observations revealed that oil trans,trans-alka-2,4-dienal concentrations continued to increase from the 60 to 90 min. time-points for all products explored, with the exception of those for extra virgin olive oil in which it saturated at 60 min.
Strong positive correlations were observed between cis-2-alkenal levels and those of all other aldehydic LOPs (r = 0.86–0.93); the most significant one was that with trans-2-alkenals. Although the lipid content of fried products is dependent on the types of food, class of frying episode, frying time and frying temperature, these values broadly range from 6–38% (w/w)41. Moreover, Naseri et al.42 have reported that the deep frying of fish (silver carp) gave rise to a substantial exchange of acylglycerol (predominantly triacylglycerol) FAs between the food and the culinary oils employed for this purpose, and as expected, the frying oil FA composition substantially altered that of these silver carp fillets on completion of these frying episodes. Comparable results have been observed in similar investigations focused on lipid uptake by potato chips during standard frying practices. For example, the total lipid content of different varieties of fresh, unfried potato tubers is only ca. 0.10% (w/w), of which the total PUFA content is 70–76%44, but escalates to values exceeding 30% (w/w) in chipped potatoes after frying45. Hence, frying oil acylglycerol-normalised (proportionate) concentrations of LOPs will also be expected to migrate into foods fried in such media, and in 2012 Csallany et al.46 found that HNE was readily detectable in French fry samples collected from n = 6 fast-food restaurants at concentrations of 8–32 µg/100 g portion. These observations are consistent with previous investigations33, in which HNE was detectable in sunflower oil at concentrations of ca. 350 and 430 μmol.L−1 when thermally-stressed at 190 °C for prolonged 17.5 and 20.0 hr. episodes respectively, whereas neither of these LOPs were observed in extra-virgin olive oil at either of these time-points. Similarly, levels of trans-4,5-epoxy-trans-2-decenal and 4-oxo-trans-2-nonenal were found to be much greater in heated, PUFA-rich sunflower oil than those in a correspondingly-heated extra-virgin olive oil product tested, as expected29.
Further information regarding the levels of HNE and other hydroxyaldehydes detectable in thermally-stressed culinary oils, and their evaporative loss therefrom, is available in section S8, along with that relating to their availability for human consumption in fried food sources. However, PUFA-derived HNE is always detectable in thermally-stressed PUFA-containing oils at much lower levels than those of other similarly health-threatening aldehydic LOPs. Our 1H NMR analysis results clearly demonstrate that much greater levels of trans-2-alkenals, trans,trans-alka-2,4-dienals, and arguably somewhat less toxic n-alkanals, are present in FFRPCSs purchased from fast food retail outlets. Indeed, for peroxidised linoleoylglycerols, the predominant compounds featured within the three major aldehydic LOP classes detectable are trans-2-octenal, trans,trans-deca-2,4-dienal and n-hexanal respectively33, and assuming that these represent 100% of the above 3 classes of aldehydes, our estimated mean μmol.kg−1 values would constitute as much as 1.53, 2.44 and 1.25 mg aldehyde/100 g portions of FFRPCSs, equivalent to 1.1, 1.7 and 0.9 mg per small (71 g), and 2.4, 3.8 and 1.9 mg per large (154 g) servings, respectively. Also notable are the higher mean molar percentage levels of n-alkanals in these FFRPCSs than those of the oils in which they are fried (ca. ≤ 25%). This may reflect the lower reactivities of n-alkanals than α,β-unsaturated aldehydes towards free, and/or protein-incorporated, amino acids with selected ‘target’ side-chain amino or thiol functions, processes involving Maillard and/or Michael addition reactions47 (section S9). This alone is a critical toxicological concern, especially since we have found here, for the first time, substantially greater contents of trans-2-alkenals, trans,trans-alka-2,4-dienals and n-alkanals present in FFRPCSs available to consumers for purchase in fast food restaurants48.
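The serving-mass estimates above follow from a simple unit conversion. A minimal Python sketch, assuming (as the text does) that trans,trans-deca-2,4-dienal (molar mass ≈ 152.23 g/mol) represents its whole aldehyde class; the function name is illustrative only:

```python
def mg_per_serving(conc_umol_per_kg: float, molar_mass: float, serving_g: float) -> float:
    """Convert an aldehyde concentration (µmol per kg of food) to mg per serving."""
    mol_per_serving = conc_umol_per_kg * 1e-6 * (serving_g / 1000.0)  # mol in the serving
    return mol_per_serving * molar_mass * 1000.0  # g -> mg

# Estimated total trans,trans-alka-2,4-dienal content of FFRPCSs: 157 µmol/kg;
# a 154 g 'large' serving then carries roughly 3.7 mg of this aldehyde class,
# and a 71 g 'small' serving roughly 1.7 mg, in line with the figures quoted.
large_serving_mg = mg_per_serving(157, 152.23, 154)
small_serving_mg = mg_per_serving(157, 152.23, 71)
```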
Indeed, assuming that linoleoylglycerol CHPD-derived trans-2-octenal represents the total trans-2-alkenal content therein, our estimate for a 154 g ‘large’ serving portion of this fried food (ca. 2.4 mg) is 68-fold greater than the acceptable daily intake limit specified for its lower homologue acrolein (see below). Strikingly, this 2.4 mg value is that of only one of the 3 major classes of such cytotoxic/genotoxic aldehydes detectable at similar levels, together with at least several minor ones. Corresponding estimates for the major trans,trans-alka-2,4-dienal and n-alkanal species arising from linoleoylglycerol peroxidation were > 3.7 and 1.9 mg, respectively, per 154 g serving. Moreover, these estimates correspond to only one potato chip portion of a single fried meal! In view of these observations, very recently the Australian Government Department of Health (AGDH) specified the acceptable daily intake of the similarly-toxic, simplest α,β-unsaturated aldehyde acrolein, i.e. that which is considered to be a level of intake of this molecule that can be ingested daily over an entire lifetime without any appreciable risk to health, to be only 0.5 µg per kg of body weight, i.e. a total of only 35 μg for an assumed (average) human body weight of 70 kg49. Moreover, the estimated mean molar % of this aldehyde class was only an estimated 30% of the total aldehyde concentration found in FFRPCSs; that for similarly-toxic trans,trans-alka-2,4-dienals was 39 molar %.
The health-threatening significance of these estimated LOP intake values is further exemplified by the World Health Organisation (WHO)’s tolerable intake level of acrolein, which in 2002 was specified as a higher 7.5 μg (0.13 μmol) per day per kg of body weight, i.e. a total of ca. 525 µg per day for a 70 kg human. This tolerable intake level is ca. 4.5-fold less than that estimated for only the total trans-2-alkenal content of the above 154 g single FFRPCS, although this is only one of the 7 or more classes of aldehydes detected and monitored in this work. Allowing for the greater molecular mass of trans-2-octenal (126.20 g.mol−1) over that of acrolein (56.06 g.mol−1), this estimated tolerable daily intake level for an average human would be 1.18 mg, and therefore the trans-2-alkenal content of a large FFRPCS serving still exceeds this tolerable intake value > 2-fold. Moreover, the ‘acrolein-mass equivalent’ content of total α,β-unsaturated (i.e. both mono- and di-unsaturated) aldehydes would be approximately 3-fold higher than this maximum tolerable intake value. Our estimated mean level of total trans,trans-deca-2,4-dienals present in FFRPCS samples pre-fried in a hypothetical oil containing a 100% (w/w) linoleic acid content (24 ppm) is comparable to that determined in French fries exposed to repeated frying episodes using sunflower oil in a domestic deep-fryer (up to 11 ppm after 3–4 fryings)31. As expected, similar levels of this α,β-unsaturated aldehyde were found when this food was fried in PUFA-rich vegetable shortening, but intermediate ones were observed when cottonseed oil was employed in place of sunflower oil, and lower values still were found for palm and olive oils, with olive oil giving rise to the lowest ones. Our higher estimated total trans,trans-alka-2,4-dienal level of 157 ± 43 µmol.kg−1 (mean ± SEM) is not dissimilar to Boskou et al.’s31 maximal deep-frying value of ca. 65 µmol.kg−1 (10–11 ppm) for trans,trans-deca-2,4-dienal in French fries. However, our higher estimated mean value will, of course, also include contributions from alternative trans,trans-alka-2,4-dienals such as linolenate hydroperoxide-derived trans,trans-nona-2,4-dienal, and trans,trans-hepta-2,4-dienal (the latter arising from scission of linolenate’s 12-hydroperoxide), in the FFRPCS samples investigated, although the linolenate content of sunflower oil is negligibly small. Moreover, this difference observed may also reflect continued sequential reuse of UFA-rich frying oils in the fast-food outlets from which they were purchased.
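The fold-comparisons against the AGDH and WHO acrolein limits reduce to the following arithmetic. This is a sketch using the 70 kg body weight and the molar masses quoted in the text; variable names are illustrative:

```python
M_ACROLEIN = 56.06       # g/mol
M_T2_OCTENAL = 126.20    # g/mol; representative trans-2-alkenal, as assumed in the text
BODY_WT = 70.0           # kg, assumed average human body weight

agdh_ug = 0.5 * BODY_WT  # AGDH limit: 0.5 µg/kg/day -> 35 µg/day
who_ug = 7.5 * BODY_WT   # WHO (2002) limit: 7.5 µg/kg/day -> 525 µg/day

serving_mg = 2.4         # estimated trans-2-alkenals per 154 g FFRPCS serving

fold_agdh = serving_mg * 1000 / agdh_ug   # ~68-fold over the AGDH limit
fold_who = serving_mg * 1000 / who_ug     # ~4.6-fold over the WHO limit

# WHO acrolein limit re-expressed as an equimolar mass of trans-2-octenal (~1.18 mg):
who_as_octenal_mg = who_ug / M_ACROLEIN * M_T2_OCTENAL / 1000
```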
Our estimated FFRPCS total mean trans,trans-alka-2,4-dienal content value is also similar to that found in a further report50. Results acquired from our domestic deep-frying experiments clearly show that only PUFA-laden sunflower oil gave rise to the availability of significant levels of aldehydes for human consumption in repeatedly-fried potato chips, and these levels were similar to those found for 2,4-decadienal by Boskou et al.31, i.e. 61–88 µg/g of absorbed oil from the 2nd to the 8th frying sessions, equivalent to 401–578 µmol.kg−1 oil; assuming an overall 15% (w/w) absorption uptake of sunflower oil frying medium into this fried food31 would yield potato chip mass-normalised contents of 60–87 µmol.kg−1. Our results also confirmed that MUFA-rich oils such as extra virgin olive and especially MRAFO oils offer little or no toxicological threats to human health when employed for such deep-frying practices. The oil reuse lag periods observed (Fig. …) were … for trans-2-alkenals, 1-2 for trans,trans-alka-2,4-dienals, and 2 for n-alkanals respectively, and hence for each of these aldehyde classes, these values provide an indication of the maximum levels of repetitive use for sunflower oil when used for frying purposes according to our domestic deep-frying protocol. However, it appears that the use of sunflower oil for a maximum of 2 episodes under our deep-frying experimental conditions would not give rise to any significant levels of each aldehyde in fried potato chips, and therefore would not present any adverse toxicological or health concerns. Nevertheless, our data provide evidence that the additional reuse of this and perhaps other PUFA-rich oils does indeed give rise to toxicologically-relevant concentrations of aldehydes in this commonly consumed fried food.
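Boskou et al.'s µg/g values map onto the µmol.kg−1 figures quoted via the molar mass of the dienal. A hedged sketch, assuming trans,trans-deca-2,4-dienal (≈152.23 g/mol) and the 15% (w/w) oil uptake stated; function names are illustrative:

```python
M_DECADIENAL = 152.23  # g/mol, trans,trans-deca-2,4-dienal

def oil_umol_per_kg(ug_per_g_oil: float) -> float:
    """µg aldehyde per g of oil -> µmol per kg of oil."""
    return ug_per_g_oil * 1000.0 / M_DECADIENAL

def chip_umol_per_kg(ug_per_g_oil: float, oil_uptake: float = 0.15) -> float:
    """Potato-chip mass-normalised content, assuming fractional oil uptake by the food."""
    return oil_umol_per_kg(ug_per_g_oil) * oil_uptake

# 61–88 µg/g of absorbed oil correspond to ~401–578 µmol/kg oil,
# and to ~60–87 µmol/kg on a potato chip mass basis at 15% uptake.
oil_low, oil_high = oil_umol_per_kg(61), oil_umol_per_kg(88)
chip_low, chip_high = chip_umol_per_kg(61), chip_umol_per_kg(88)
```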
Although significant levels of lipid hydroperoxides were found to be generated in this culinary oil, which sequentially increased with increasing number of repetitive frying episodes performed, they were not detectable in any of the DBRDFE potato chip samples collected, an observation which indicates that they are rapidly degraded to secondary LOPs such as aldehydes and/or epoxy fatty acids, or further products, when uptaken by this food during the frying practice employed. Indeed, such decomposition is likely to be promoted by the availability of catalytic trace levels of transition metal ions51 in potatoes. The total FA contents of French fries varies widely, and generally ranges from 5 to >15% by weight; a large serving portion of such fries contains ca. 25 g of total fat, of which 3 g represents SFAs. Therefore, 4 portions of these fries consumed per week (mean 0.57 portion per day) would constitute an overall fried food fat intake of ca. 19% of the reported 77 g mean total daily human UK fat consumption54 (14.3 g/day), which would predominantly comprise thermally-peroxidisable UFAs for this particular food service outlet, i.e. ca. 16%, corresponding to 12.6 g/day. Kaliora et al.55 found that CHD patients reported a much elevated, highly statistically significant daily consumption of both deep- and shallow-fried foods (15 ± 25 and 24 ± 60 g respectively) relative to that of an age-matched healthy control group (1 ± 5 and 3 ± 17 g respectively). Strikingly, according to Panwar et al.56, ingestion of a 150 g serving of French fries which has been deep-fried in sunflower oil, and which therefore contains a maximal aldehyde amount of 1.65 mg, is sufficient to induce 97% oxidative conversion of LDL in vitro. Previously reported epidemiological, meta-analysis, animal model and laboratory experimental investigations which connect the ingestion of fried foods and/or, more specifically, aldehydes themselves to the pathogenesis and/or incidence of further human diseases are outlined in section S10 of the Supplementary Materials section.
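The dietary-share estimate above can be reproduced as follows. The ca. 25 g total fat per portion used here is an assumption back-calculated from the 14.3 g/day and 0.57 portions/day figures in the text:

```python
portions_per_day = 4 / 7       # 4 portions consumed per week
fat_per_portion_g = 25.0       # assumed total fat per large portion (of which 3 g SFAs)
uk_daily_fat_g = 77.0          # reported UK mean total daily fat intake

fried_fat_per_day = portions_per_day * fat_per_portion_g       # ~14.3 g/day
share_total = 100 * fried_fat_per_day / uk_daily_fat_g         # ~19% of daily fat intake
# UFA fraction only (total fat minus the 3 g of SFAs per portion): ~16%
ufa_share = 100 * portions_per_day * (fat_per_portion_g - 3.0) / uk_daily_fat_g
```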
This section includes information relating to the mechanisms of the toxicities of aldehydes and/or lipid oxidation products (LOPs) present in pre-heated frying oils. Our results further demonstrate that the shallow- or deep-frying of foods in MUFA-rich, PUFA-deplete cooking oils such as MRAFO, which generate much lower thermally-inducible aldehyde levels in frying oils than those produced in PUFA-rich ones, gives rise to the passage of proportionately much lower concentrations of these toxins into fried food matrices available for human consumption, and therefore such servings offer less potential adverse dietary threats to human health. However, as noted above, a further major factor for consideration is that the b.pts of aldehydes derived from linoleoylglycerol CHPD fragmentation are predominantly lower than those arising from the scission of oleoylglycerol HPMs33. Intriguingly, the above FFRPCS quantities of aldehydes available for human ingestion are not dissimilar to those arising from the smoking of a mean daily allocation of 25 cigarettes, i.e. mg quantities of crotonaldehyde (1.8–5.7 mg), butyraldehyde (2.2–23.2 mg), n-hexanal (2.5–9.5 mg) and malondialdehyde (0.24–0.66 mg)57. Moreover, the acrolein content of such a cigarette allocation to humans has been estimated to be 0.62–3.5 mg58. Of course, PUFAs localised within or originating from the food sources themselves will also be expected to undergo thermally-induced oxidative deterioration during frying practices. The conceivable consumption of aldehydic LOPs by amino acids and proteins present in fried foods is outlined in section S11.
An additional consideration is that the concentrations of natural or oil-supplemented lipid-soluble, chain-breaking dietary antioxidants such as α-tocopherol (vitamin E) and DTBHQ, molecules which are known to terminate the autocatalytic lipid peroxidation process, unfortunately appear to be only poorly effective at suppressing the adverse generation of toxic LOPs produced during standard frying practices10. This is attributable to the poor capacities of such low antioxidant concentrations to combat the aggressive, recycling autocatalytic oxidative assaults upon highly-susceptible PUFAs induced by their exposure to such high temperatures. Along with their chemical consumption by thermally-inducible lipid peroxyl radicals during frying practices, the loss of these antioxidants during such episodes is also ascribable to (1) their volatilisation at such temperatures, and (2) their thermal instability when exposed to these temperatures (details available in supplementary section S11). The possible therapeutic intervention of L-cysteine, especially in relation to attenuating acetaldehyde toxicity, in both humans and experimental animals, is available in section S12. Further previously documented interventional and preventative/prophylactic strategies for guarding against oxidative stress induced by the consumption of diets containing peroxidised culinary oils are also documented in this Supplementary Materials section. 1H NMR signals assignable to a range of toxic epoxy acid LOPs were also detectable in samples of culinary oils exposed to LSSFEs. As expected, a multicomponent pattern of these resonances was observed in PUFA-rich sunflower oil when heated according to our LSSFEs at 180 °C for ≥30 min. periods, and these included those assigned to leukotoxin, isoleukotoxin and leukotoxindiol. However, in view of its predominant (>90%) MUFA content, the only 1H NMR-detectable epoxy acids found in the MRAFO product evaluated were trans- and cis-9,10-epoxystearates, although these only evolved at the 60 and 90 min. heating time-points, which are very lengthy and hence irrelevant to those of shallow frying practices.
Leukotoxin and its diol derivative, which can also be generated in vivo, are known to give rise to the degeneration and necrosis of leukocytes, and have been implicated in the pathogenesis of multiple organ failure, breast cancer, and perturbations to the reproductive functions of rats59. Leukotoxins also exert disruptive effects on cell proliferation and the respiratory burst of neutrophils in vitro60. Documented evidence which relates trans-fatty acid (TFA) intake to coronary heart diseases (CHDs) remains widespread, and their potential health risks in this context are currently considered to be greater than those presented by SFAs62 (section S13). However, in view of these estimates, it should be stressed that, on a mole-for-mole basis, aldehydes arising from PUFA and MUFA peroxidation are clearly very much more toxic than TFAs, although estimated human intakes of the latter are, of course, much greater than those of the former. Moreover, such investigations focused on the roles of TFAs in promoting CHDs, e.g.62, have failed to also consider the myriad of adverse health effects presented by aldehydes ingested in fried food sources, which include atherosclerosis and its pathological sequelae14. Therefore, without any efficient control for such potentially confounding effects, and those also offered by further toxic LOPs, along with the quantities of each of these LOP toxins available in human diets, such public health studies targeting TFAs as CHD ‘malefactor’ molecules may indeed be compromised. Moreover, in principle TFAs may themselves also be susceptible to peroxidative damage, followed by the possible sequential fragmentation of their corresponding hydroperoxides to toxic secondary LOPs.
Despite some major conjecture in the available literature, the heating of oils according to frying practices does not appear to transform natural cis-configuration FAs to their corresponding TFA derivatives, although one study has reported a marginal increase in levels of the latter in corn oil following its exposure to stir-frying episodes63. In addition to their greater resistance than PUFAs to thermo-oxidative damage, particularly that induced by high temperature frying practices, dietary MUFAs offer many additional further potential health benefits65 (further details are available in section S14). Unless exposed to such frying practices (single or repeated), or alternatively stored and/or exposed to light for prolonged periods of time at ambient temperature, the authors accept that PUFA-rich culinary oils offer little or no threats to human health. Indeed, unperoxidised, intact essential FAs therein such as linoleoyl- and especially α-linolenoylglycerols offer valuable protective health benefits. However, the presence of only trace concentrations of LOP processing contaminants, aldehydic or otherwise, in these products may substantially negate such benefits. Therefore, a full investigation of all factors exerting an influence on the nature and levels of LOP toxins available in fried foods, and hence their roles in the development of NCDs, is required; such factors particularly include fried food and CO types used for frying, frying practices, temperatures and durations, and oil reuse status. Further considerations should include the extent of fried food consumption prepared at home or at commercial food service outlets, and also the overall dietary patterns of populations surveyed.
Exposure of PUFA-rich culinary oils to LSSFEs for periods of up to 90 min. generates extremely high levels of hazardous aldehydic LOPs, which may present both serious and chronic threats to human health. Contrastingly, results acquired here also clearly demonstrated that the predominantly MUFA-containing, PUFA-deplete MRAFO oil explored was particularly resistant to LSSFE-induced thermo-oxidation, i.e. much more so than PUFA-rich sunflower and corn oils, and also more so than other MUFA-rich oils tested; the PSI value and [MUFA]:[PUFA] % content ratio of this oil were significantly lower and greater, respectively, than those of the other MUFA-rich oils investigated here. Indeed, little or no toxic aldehydes, nor epoxystearoyl species, were generated in the MRAFO oil at recommended shallow-frying time-points of 5–20 min. Since we have also, for the first time, demonstrated the availability of potentially health-threatening levels of cytotoxic and genotoxic trans-2-alkenals, trans,trans-alka-2,4-dienals and n-alkanals in FFRPCSs, in principle this MUFA-laden algae oil should present a lower level of health hazards to human consumers than those associated with PUFA-rich oils when employed for this purpose. Indeed, experiments involving the analysis of fried potato chip samples collected during repetitive domestic deep-frying episodes clearly demonstrated that the use of PUFA-rich sunflower oil gave rise to significant reuse-dependent levels of each class of these aldehydes in this regularly consumed food source, whereas only negligible amounts were found in these when MUFA-rich extra virgin olive and MRAFO oils were employed as frying media. Clearly, these results have a high level of public health significance in view of a wealth of evidence available for a myriad of toxicological effects exerted by these secondary LOPs.
In this manner, the molar percentage omega-3 FA (predominantly linolenic acid) contents of these oils was found to be 0.20, 1.89, 10.61, 1.37 and 0.92 molar % for the sunflower, corn, canola, extra-virgin olive and MRAFO oil products tested, respectively. Where appropriate, correction was made for the interfering 13C satellite resonance of the major terminal-CH3 one, i.e. especially for COs with low or very low omega-3 PUFA FA contents.Sunflower, corn, canola, extra-virgin olive and MRAFO oils were all purchased from UK or USA retail stores. Each oil was then de-identified in the laboratory via its transference to coded but unlabelled storage containers. The specified SFA, MUFA and PUFA contents of these oils were 11.0, 28.0 and 61.0% for sunflower oil; 14.4, 23.3 and 61.4% for corn oil; 7.0, 64.4 and 28.5% for canola oil; 13.0, 77.4 and 9.4% for extra-virgin olive oil; and 4.0, 91.2 and 4.2% (w/w) respectively for MRAFO. [MUFA]:[PUFA] % content ratios for these oils were 0.46, 0.38, 2.26, 8.23 and 21.71 for sunflower, corn, canola, extra-virgin olive and the MRAFO products investigated respectively. The molar percentage of omega-3 FAs in these samples was estimated by a previously reported −1) of the antioxidant product Fortium® brand MT70 IP liquid, which contained a mixture of α-, β-, γ- and δ-tocopherols in sunflower oil .The MRAFO oil purchased was supplemented with 1,000 ppm (1.00 g.kgca. 0.25 ml) of oil samples were collected at the 0, 5, 10, 20, 30, 60 and 90 min. heating time-points for 1H NMR analysis. Immediately following collection, the lipid-soluble chain-terminating antioxidant 2,5-di-tert-butylhydroquinone (DTBHQ) was added to each oil sample in order to block or retard the further generation of aldehydes and their CHPD and HPM precursors during periods of storage and sample preparation at ambient temperature. Samples were prepared for 1H NMR analysis within 2 hr. 
after collection, and were stored in sealed containers within a light-excluded zone whilst awaiting analysis. All oils were heated at 180 °C for periods of up to 90 min. according to a LSSFE, and these experiments were conducted by a 'blinded' laboratory researcher. Each 90 min. heating cycle was completed in n = 6 replicated sessions for all oils investigated. This shallow-frying simulation involved the heating of a 6.00 ml volume of culinary oil in an air-dried 250 ml glass beaker within a thermostated silicone oil bath maintained at a temperature of 180 °C throughout the total heating period. 1H NMR measurements were performed on a spectrometer operating at a frequency of 399.94 MHz and a probe temperature of 293 K. All spectra were acquired as described in8. Typically, a 0.20 ml aliquot of each oil sample was diluted to a final volume of 0.60 ml with deuterated chloroform (C2HCl3) containing 3.67 mmol.L−1 tetramethylsilane (TMS) and 15.00 mmol.L−1 1,3,5-trichlorobenzene (1,3,5-TCB): the C2HCl3 diluent provided a field frequency lock, the TMS acted as an internal chemical shift reference (δ = 0.00 ppm), and the 1,3,5-TCB served as an internal concentration reference standard. These solutions were then placed in 5-mm diameter NMR tubes. Typical pulsing conditions were: 128 or 256 free induction decays (FIDs) using 65,536 data points and a 4.5 s pulse repetition rate, the latter to allow full spin-lattice (T1) relaxation of protons in the samples investigated. Resonances present in each spectrum were routinely assigned by a consideration of chemical shifts, coupling patterns and coupling constants. One- and two-dimensional COSY and TOCSY spectra were acquired to confirm 1H NMR assignments as previously described8. Preprocessing of these 1H NMR spectral profiles for the determination of seven classes of aldehyde, and selected epoxy acids, is described in section S15 of the supplementary material section.
From the FA compositions of these oils, their intrinsic peroxidative susceptibility indices (PSIs) were computed as previously described67, i.e. PSI = [0.025(% monoenoic FA)] + [1.00(% dienoic FA)] + [2.00(% trienoic FA)] + [4.00(% tetraenoic FA)] + [6.00(% pentaenoic FA)] + [8.00(% hexaenoic FA)]. However, for all oils investigated here, contributions to the PSI from tetraenoic, pentaenoic and hexaenoic FA sources were negligible. Details regarding the quality assurance/quality control monitoring of our 1H NMR analyses of culinary oil and potato chip determinations of aldehydic LOPs are provided in section S12 of the supplementary materials section. These include evaluations of the accuracies and precision of each of these assay determinations, together with corresponding lower limits of detection (LLOD) and quantification (LLOQ) values for trans-2-alkenals, trans,trans-alka-2,4-dienals and n-alkanals. These parameters were evaluated in both neat C2HCl3 solutions, and also in unheated (control) culinary oils either 'spiked' with trans-2-octenal and n-hexanal, or unspiked. 'Between-frying cycle' and repeat determination coefficients of variation (CV) for these LOP assays are also provided. Calibration curves for typical trans-2-alkenals and n-alkanals (0–600 µmol.L−1 and 0.20–50.00 mmol.L−1 ranges) were linear, with R2 values ≥ 0.993 for neat C2HCl3 solutions, and ≥ 0.987 for aldehyde-'spiked' C2HCl3-diluted oil media prepared as described above. To the weighed food samples, C2HCl3 was added (predominantly ca. 1.0 ml per g), and these mixtures were then mechanically homogenised using an electric pestle rotor. Subsequently, the homogenates were centrifuged at 10,000 × g for a period of 10.0 min. at 4 °C.
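The PSI formula above maps directly onto a small helper; a sketch in which the function name and the example composition are hypothetical, not one of the study's oils:

```python
# Weighting coefficients for each FA unsaturation class, per the PSI formula above.
PSI_WEIGHTS = {
    "monoenoic": 0.025,
    "dienoic": 1.00,
    "trienoic": 2.00,
    "tetraenoic": 4.00,
    "pentaenoic": 6.00,
    "hexaenoic": 8.00,
}

def peroxidative_susceptibility_index(fa_percent: dict) -> float:
    """Compute PSI from a mapping of FA class -> % content; absent classes count as 0."""
    return sum(w * fa_percent.get(cls, 0.0) for cls, w in PSI_WEIGHTS.items())

# Illustrative composition only:
print(peroxidative_susceptibility_index({"monoenoic": 60.0, "dienoic": 30.0}))
```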
Exactly 0.60 ml volumes of each clear supernatant were removed and then treated with either 0.06 or 0.12 ml aliquots of a 1.00 × 10−2 mol.L−1 stock solution of the 1,3,5-TCB internal standard in C2HCl3, and 0.06 ml of a 10.0 mmol.L−1 solution of the chain-breaking antioxidant DTBHQ, also in C2HCl3; the mixtures were thoroughly rotamixed, and then transferred to 5-mm diameter NMR tubes for analysis. C2HCl3 extractions of FFRPCSs were performed either singly, or as 3 or 4 replicates of these samples, as indicated in the corresponding Table; 1H NMR spectra of these extracts were acquired in duplicate. FFRPCSs were purchased from a total of 12 local fast-food restaurants. To accurately weighed quantities of each sample (1.00–6.30 g), 1.00–8.00 ml volumes of C2HCl3 were added. 1H NMR detection of the DTBHQ antioxidant in spectra of these FFRPCS sample extracts confirmed that it had been predominantly retained and not consumed via its chain-terminating antioxidant actions during the sample preparation stages of our experiments. The SFA, MUFA and PUFA contents of these fried food samples were estimated via integration of ISBs featuring their acylglycerol bis-allylic-CH2-, -CH2-CH=CH- and α-CH2-CO2- function resonances, i.e. an adaption of the method described in9. The deep-frying facility employed was a domestic model equipped with a variable thermostat and an inert cross-lined steel mesh for the purpose of lowering the chips into the oil without contacting the fryer's inner surface. This deep fryer was filled with 3.00 litres of oil according to the manufacturer's instructions, and 400 ± 10 g of potato chips were then deep-fried at a temperature of 170 °C for a period of 10.0 min. Eight batches of hand-cut chips of lengths and widths of 87.00 ± 1.15 and 12.70 ± 0.36 mm (mean ± SEM) respectively were consecutively fried 8 times using sunflower, extra virgin olive and the MRAFO oils.
For this purpose, we used a modification of the approach utilised by Boscou et al.31. The % (w/w) total PUFA, MUFA and SFA contents of the frying oils employed for these deep-frying studies were 61.0, 28.0 and 11.0% respectively for sunflower oil, and 30.8, 62.3 and 6.9% respectively for extra-virgin olive oil. Each frying oil used was allowed to cool for a period of exactly 30 min. between each repetitive frying episode. Following each 10 min. frying episode, chips were thoroughly shaken in their wire basket for 15 s, and then allowed to drain therein for 30 s in order to remove excess oil. Subsequently, these chips were transferred to a steel mesh draining board. For each of the 8 consecutive frying episodes, 2 randomly-selected samples of chips were transferred to plastic-stoppered sample tubes and immediately frozen at a temperature of −20 °C until transported to the laboratory, where they were then stored at −80 °C for a maximum duration of 18 hr. prior to 1H NMR analysis. Two samples of the unfried (raw) potatoes were also collected and subjected to storage in this manner. On completion of each frying episode, duplicate samples of each oil were also collected for analysis, and these were also stored prior to analysis in the same manner as the potato chip samples, as were duplicate unheated (control) frying oil samples. Further samples of these oils were collected at a time-point of 10.0 hr. following completion of each repeated frying session. Authentic aldehydes employed for 1H NMR reference purposes, including n-hexanal, n-octanal, trans-2-octenal, trans-2-nonenal, trans,trans-deca-2,4-dienal, etc., were obtained from the Sigma-Aldrich Chemical Co. (UK).
Crotonaldehyde was also purchased from Sigma-Aldrich as a 20:1 molar ratio of its trans-(E-):cis-(Z-) isomers in order to permit assignments of resonances for cis-2-alkenal LOPs in thermally-stressed culinary oils. For experiments featuring the exposure of culinary oils to LSSFEs, the experimental design for univariate analysis of the total acylglycerol-normalised 1H NMR aldehydic class ISB intensity datasets involved an analysis of covariance (ANCOVA) model, which incorporated 2 prime factors and a total of 3 sources of variation: (1) the 'between-culinary oils' qualitative fixed effect (Oi); (2) the 'between-sampling time-points' quantitative fixed effect 'nested' within 'culinary oils' (Tj); and (3) the culinary oil × time-point first-order interaction effect (OTij). This experimental design is represented by the equation Yijk = μ + Oi + Tj + OTij + eijk, in which Yijk represents the (univariate) aldehyde ISB dependent variable values observed, μ its overall population mean value in the absence of any significant, influential sources of variation, and eijk the unexplained error contribution. Agglomerative hierarchical clustering (AHC), principal component analysis (PCA) and Pearson (linear) correlation analysis of the extensive culinary oil NMR-based aldehyde concentration dataset were performed using XLSTAT2016 software. Datasets were generalised logarithmically (glog)-transformed, mean-centred and standardised prior to analysis in order to satisfy assumptions of normality and homoscedasticity. Post-hoc analysis of significant differences observed between individual culinary oils and sampling time-points was performed using the Bonferroni method, which corrected for false discovery rates. ANCOVA was conducted with MetaboAnalyst 3.0 software module options. Datasets were analysed either untransformed and unscaled, or alternatively after cube root- or glog-transformations, and Pareto scaling. AHC dendrograms were generated employing Euclidean distance and Ward's linkage clustering algorithm. Heatmap and correlation feature diagrams were also obtained using this software module. For experiments featuring the domestic deep-frying of potato chips in sunflower oil only, statistical analysis of experimental data was performed according to the ANOVA mathematical model described by the equation Yijk = μ + Si + Fj + eijk, in which Yijk represents FA content-normalised sample aldehyde dependent variable values, Si and Fj the 'between-sample' (i.e. potato chip vs. oil levels) and 'between-sequential frying episode number' sources of variation (both fixed effects), μ the overall population mean value in the absence of any significant, influential sources of variation, and eijk the unexplained error contribution. SUPPLEMENTARY INFORMATION"} {"text": "The Internet of Things (IoT) is an emerging paradigm that proposes the connection of objects to exchange information in order to reach a common objective. In IoT networks, it is expected that the nodes will exchange data between each other and with external Internet services. However, due to deployment costs, not all the network devices are able to communicate with the Internet directly. Thus, other network nodes should use Internet-connected nodes as a gateway to forward messages to Internet services. Considering the fact that the main routing protocols for low-power networks are not able to reach suitable performance in the described IoT environment, this work presents an enhancement to the Lightweight On-demand Ad hoc Distance-vector Routing Protocol—Next Generation (LOADng) for IoT scenarios.
The proposal, named LOADng-IoT, is based on three improvements that allow the nodes to find Internet-connected nodes autonomously and dynamically, decrease the control message overhead required for route construction, and reduce the loss of data messages directed to the Internet. Based on the performed assessment study, which considered various numbers of nodes in dense, sparse, and mobility scenarios, the proposed approach is able to present significant results in metrics related to quality-of-service, reliability, and energy efficiency. The Internet of Things (IoT) is a wide concept that has attracted attention from the research community in recent years. IoT devices form, in general, a low power and lossy network (LLN), composed of many nodes with strong restrictions on memory, processing capacity, and, in some cases, energy. Depending on the application, the nodes in an IoT network can have different hardware capacities and application objectives. In the context of low-power IoT networks, the routing protocols can be grouped into two types based on the route creation principle: proactive and reactive. Proactive protocols construct and maintain routes in advance, while reactive protocols create routes on demand. Although it uses the most suitable route creation principle for the IoT scenario considered in this study, LOADng can present several problems, such as the necessity of previous and static definitions of the nodes responsible for providing the Internet connection to other network devices. Also, the need to run several route creation processes to construct on-demand routes for P2P traffic can provoke a high control message overhead. Thus, the main objective of this work is to create an enhancement for LOADng to allow the protocol to better discover and maintain routes for traffic directed to the Internet in IoT networks formed by devices with different capacities. The proposed approach, LOADng-IoT, is composed of three improvements that are able to boost the process of route discovery, reduce the overhead of control messages, and improve the network's quality-of-service (QoS). In summary, the proposal allows the nodes to find Internet-connected nodes without the prior definition of a gateway. This behavior allows nodes without an Internet connection to forward their data packets to external Internet services with much greater reliability and lower latency. Also, the proposed solution presents a cache system for routes to Internet nodes to reduce the control message overhead required in the process of route discovery. Finally, the solution introduces a new error code that should be used to avoid the loss of data packets to the Internet. Thus, the main contributions of the proposal presented in this work (LOADng-IoT) are as follows: LOADng-IoT improves network QoS and reliability by increasing the packet delivery ratio and reducing the end-to-end latency for the different message types exchanged by nodes in both dense and sparse IoT scenarios. It reduces the number of control messages required to construct routes among nodes, contributing to a more efficient network with lower overhead. LOADng-IoT reduces the amount of energy required to both build paths and route data messages, making the network more power efficient. It dispenses with the use of predefined Internet gateways, since the Internet-connected nodes are sought on demand and can change according to their connection availability. This feature also removes the existence of a single point of failure (SPOF) for the connection of IoT devices with external Internet services. It presents a flexible solution, whereby parts of the proposal can be adopted according to the hardware capacities of the nodes. The remainder of this document is organized as follows.
This work proposes a new enhancement to aid the LOADng route discovery process in IoT scenarios composed of nodes with different capacities and variable message traffic. In the current literature, several studies focus on performance and propose improvements for LOADng. However, to the best of the authors' knowledge, the current related literature does not propose improvements to the route discovery process of Internet nodes for LOADng in IoT scenarios. Based on the limited performance of LOADng in MP2P scenarios, Yi and Clausen proposed an enhancement to address this issue. A new composite routing metric for LOADng is proposed in the related literature, as is a multipath improvement for LOADng. Araújo et al. propose the weakLinks mechanism to identify links with low quality across a path. In the route creation process, paths with a high number of weakLinks are avoided. The choice of the best route also considers the residual energy of the nodes and hop count. In all the studied scenarios, REL was able to outperform AODV regarding latency, packet delivery ratio, and network lifetime. A routing protocol for IoT networks based on the composition of routing metrics is also presented in the related literature. Considering the limitations of the current literature and the requirements of IoT low-power networks, this work proposes a new mechanism for LOADng that will allow it to search for Internet-connected nodes in a dynamic and on-demand manner. Moreover, the proposed approach can improve normal data traffic among the nodes, enabling the network to become more energy efficient and reliable. The LOADng routing protocol proposes a simplification of AODV, a well-known reactive routing protocol. The following subsections present an overview of the LOADng protocol. These explanations are necessary to understand the approach proposed in this work. LOADng is a reactive routing protocol based on route discovery using route request and route reply messages.
Thus, when a node wants to send a data message and the route to the destination is unknown, it should begin a new route discovery process. To this end, the node broadcasts a route request (RREQ) message to search for a route to the desired destination. Each node that receives an RREQ should perform message processing and consider the message to be forwarded. This process continues until the RREQ reaches the sought destination. The destination should then generate a route reply (RREP) message to answer the received RREQ. The RREP is forwarded in unicast to the RREQ originator, constructing a route between the two nodes interested in the message exchange. Finally, the RREP is received by the RREQ originator, which should begin to send data messages using the path created by the route discovery process. The process of route discovery is performed with the use of control messages inspired by AODV. RREQ messages are always used to request the creation of a route to a destination when a node needs to send a data message and the path to the destination is unknown. RREP messages are used by the destination that receives the RREQ as an answer to the request for route creation. An RREP message may, optionally, require an acknowledgment. In this case, the route reply acknowledgment (RREP_ACK) message is used to answer a received RREP. When a node fails at the moment of data message forwarding, a route error (RERR) message can be used to inform the data message originator of the problem detected. The RERR can also be used when the data message destination is unknown by the intermediate node. In the process of route discovery, control messages are used in conjunction with an Information Base maintained by each network node. According to the content of the control messages, the Information Sets of the nodes are fed and updated.
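The request-and-reply cycle just described can be sketched with a toy in-memory network; everything here (the graph representation, function name, and example topology) is illustrative rather than part of the LOADng specification:

```python
from collections import deque

# Toy LOADng-style route discovery over an adjacency map (illustrative only).
def discover_route(graph, origin, destination):
    """Flood an RREQ from origin; each receiver records the previous hop
    (a reverse route), then the destination answers with a unicast RREP
    that installs the forward route hop by hop."""
    prev_hop = {origin: None}             # reverse routes learned from the RREQ flood
    queue = deque([origin])
    while queue:                          # broadcast propagation of the RREQ
        node = queue.popleft()
        if node == destination:
            break
        for neigh in graph[node]:
            if neigh not in prev_hop:     # first copy wins; duplicates are dropped
                prev_hop[neigh] = node
                queue.append(neigh)
    if destination not in prev_hop:
        return None                       # no RREP will ever come back
    # The RREP travels origin-ward along the reverse route, installing
    # next-hop entries toward the destination in each node's Routing Set.
    routing_set = {}                      # node -> next hop toward destination
    node, path = destination, [destination]
    while prev_hop[node] is not None:
        routing_set[prev_hop[node]] = node
        node = prev_hop[node]
        path.append(node)
    return list(reversed(path)), routing_set

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
path, routes = discover_route(graph, "A", "C")
print(path)          # -> ['A', 'B', 'C']
print(routes["A"])   # -> B
```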
The main elements of the Information Base are the following: Routing Set, Blacklisted Neighbor Set, and Pending Acknowledgment Set. The Routing Set is composed of route tuple entries that store data about the neighbor nodes. Based on the Routing Set, a node can verify the existence of a path to a destination or the necessity of starting a new route discovery process. The Blacklisted Neighbor Set is responsible for storing the addresses of nodes with possible communication inconsistencies that make the bidirectional linkage unavailable. The Pending Acknowledgment Set records information about the RREP messages sent with the field ackrequired defined as true. When a node wants to send a data message, it should look for a route to the message destination on its Routing Set. If the path is found, the node should forward the message to its destination through the next hop node. The message forwarding process is described in detail further below. In the routing discovery process, the node generates a new RREQ message, defining itself as originator and the address of the desired destination in the destination field. It should also set a unique seq-num to the RREQ and define the other message fields. Then, the node should broadcast the generated RREQ to its neighbors. Upon receiving an RREQ, a node first obtains the hop-count, hop-limit, and route-metric from the message. In sequence, the node should search for a route entry for the message originator on its Routing Set. If the route is not found, a new route entry for the message originator is created. Then, the created or found route entry is compared with the fields of the received message to verify whether or not the message can be used to update the route entry. If the message is valid, the route entry is updated, the common processing is finished, and the message returns to its specific processing. In contrast, if the message is not used to update the route entry, the node should verify the message type, send an RREP_ACK if required, and drop the message.
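The three Information Base elements can be pictured as plain data structures; field names such as R_dest_addr follow the text where given, while the class layout and the lookup helper are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class RouteTuple:
    """One Routing Set entry: a known path toward R_dest_addr."""
    R_dest_addr: str
    R_next_addr: str
    R_metric: float
    R_valid_time: float              # entry expires once this time is reached
    R_Internet_conn: bool = False

@dataclass
class InformationBase:
    """Per-node state consulted while processing LOADng control messages."""
    routing_set: dict = field(default_factory=dict)          # dest addr -> RouteTuple
    blacklisted_neighbors: set = field(default_factory=set)  # suspected one-way links
    pending_acks: dict = field(default_factory=dict)         # (dest, seq-num) -> RREP sent

    def lookup(self, dest, now):
        """Return a still-valid route, or None to signal a new route discovery."""
        route = self.routing_set.get(dest)
        if route is None or route.R_valid_time < now:
            return None
        return route

ib = InformationBase()
ib.routing_set["C"] = RouteTuple("C", "B", 2.0, R_valid_time=100.0)
print(ib.lookup("C", now=50.0).R_next_addr)  # -> B
print(ib.lookup("C", now=150.0))             # -> None
```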
Each receiver of the RREQ message should execute its processing according to the corresponding flowchart. When the message returns to the specific RREQ processing, the node should check whether it is the message destination. If negative, the node should verify whether the message is valid to be forwarded (checking the hop-count and hop-limit), update its fields, and forward it using broadcast. Otherwise, if the node is the RREQ destination, it should generate an RREP message to answer the request from the RREQ originator. The generated RREP message should have the address of the RREQ originator as destination, the address of its originator in the originator field, and a unique seq-num. After being generated, the RREP should be sent in unicast to its destination. Thus, the RREP originator should look for a route entry to the destination on its Routing Set and forward the message to the R_next_hop node. Note that the route entry should be found, since it was created when the RREQ message was received. A node that receives the RREP message should perform its processing as described in the corresponding flowchart: if the node is not the RREP destination, it should forward the message to the R_next_hop node using unicast. Otherwise, if the node is the RREP destination, the route discovery process is completed, and the data message can be sent using the constructed path.
If an intermediate node does not find a route entry that matches the message destination, it should perform a new route discovery process to recover the broken path. If the path recovery does not succeed, the node should generate an RERR message to inform the data message originator of the impossibility of delivering the message successfully. To reduce the number of control messages exchanged during the route discovery process, the SmartRREQ enhancement was proposed for LOADng. With this enhancement, the RREQ is sent with the smart-rreq flag set as true. Every node that receives a SmartRREQ (an RREQ message with smart-rreq true) should perform additional processing in the RREQ message handling. After executing all the initial processing, and after verifying whether the message is valid for forwarding, the node should perform the specific processing of SmartRREQ. Thus, the node checks whether it owns a route entry on its Routing Set to the message destination with an R_next_hop that is different from the previous hop of the received SmartRREQ. If this condition is satisfied, the node should transmit the SmartRREQ message in unicast to the next address found. The next hop that receives the SmartRREQ message should perform the same processing until the message reaches the final destination. If a node does not find a route entry to the SmartRREQ destination, the message should be forwarded using broadcast. The destination of a SmartRREQ should answer the request by generating a normal RREP. Hence, the SmartRREQ enhancement can reduce the number of broadcast transmissions, thereby contributing to reducing the control message overhead required to discover a new route and decreasing the network energy consumption. This work proposes an enhancement for the LOADng protocol in IoT networks composed of devices with different capabilities. The proposed LOADng-IoT introduces a new route discovery process dedicated to finding devices with the capacity to forward special messages from other nodes.
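The SmartRREQ forwarding decision described above reduces to a small rule; a minimal sketch, with illustrative names and a simplified Routing Set mapping destinations to next hops:

```python
def smartrreq_forward(node_routes, rreq_dest, prev_hop):
    """Decide how an intermediate node relays a SmartRREQ: unicast toward a
    known route to the destination when its next hop differs from the hop the
    message arrived from; otherwise fall back to broadcast flooding."""
    route = node_routes.get(rreq_dest)           # Routing Set lookup (dest -> next hop)
    if route is not None and route != prev_hop:
        return ("unicast", route)
    return ("broadcast", None)

routes_of_b = {"D": "C"}   # node B knows that D is reachable via C
print(smartrreq_forward(routes_of_b, "D", prev_hop="A"))  # -> ('unicast', 'C')
print(smartrreq_forward(routes_of_b, "E", prev_hop="A"))  # -> ('broadcast', None)
```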
The following subsection explains the IoT application scenario in which the proposed approach can be applied. Subsequently, LOADng-IoT is fully described, including its features, requirements, and operation. This work considers an IoT network composed of simple nodes and Internet-connected nodes (INs). A smart home (SH) IoT application can be used to exemplify the use of the presented network model. In an SH, the smart objects with a low necessity of consulting external Internet services can be represented by simple nodes. In contrast, smart objects that require a continuous Internet connection can be represented by INs linked to the Internet using a cellular network or optical fiber. Thus, as an example, a smart window can occasionally consult an external Internet service to verify the weather forecast. To access the service, the smart window should use an IN as its gateway to the Internet. Moreover, in the context of a local message exchange, an air conditioner, when activated, can send a local message to close the smart windows without the necessity of using the Internet to perform this communication. Current routing solutions can address the presented network model when the simple nodes have been previously configured with a default gateway to forward their Internet messages. Thus, prior knowledge of the nodes with Internet connection capacity is required in order to define a gateway for each simple node. This approach, although functional, can give rise to several issues.
The obstacles that can occur using this simplistic approach are identified as: (i) a high deployment time is required to define the gateway of each simple node; (ii) Internet-connected nodes can be overloaded with Internet messages from simple nodes; (iii) bad deployment can make simple nodes create long paths to their gateways; and (iv) simple nodes can become unable to send Internet messages if their gateways lose Internet connectivity. Based on these constraints, and seeking to better address the requirements of the described IoT network scenario, this work proposes a new enhancement for the LOADng protocol that is able to optimize the route discovery process for INs and improve the network performance. The proposed LOADng-IoT can simplify the discovery of Internet routes, avoiding the necessity of the prior definition of gateways for Internet messages. In addition, the proposal reduces the number of control messages required to construct paths to INs and makes the data message forwarding process more reliable. The following subsections present a detailed description of the proposed LOADng-IoT. The proposed LOADng-IoT is composed of three components: the Internet route discovery process, the Internet Route Cache (IRC) system, and a new error code for RERR messages. The first component is responsible for finding IN nodes without the requirement of previous knowledge of their addresses in the local network. Thus, a node that wants to send a message to the Internet should start an Internet route discovery process by broadcasting a special RREQ, named the RREQ-IoT. The message has the objective of seeking an IN that can be used as a gateway for the RREQ-IoT originator. To reduce the number of broadcast transmissions, an intermediate node that knows a route to an IN can forward the RREQ-IoT message to it using unicast transmission (in the same way as SmartRREQ). When an IN receives an RREQ-IoT message, it should generate a special RREP to answer the request.
This message, named the RREP-IoT, is forwarded in unicast via the opposite route created by the RREQ-IoT. Each node that receives an RREP-IoT should create an entry on its Routing Set with the information that the message originator has an Internet connection. When the RREP-IoT reaches its destination, the node should immediately start to send the Internet data messages. The proposed Internet route discovery process of LOADng-IoT is fully described later in this work. The second component is responsible for storing the Internet routes (routes to Internet-connected nodes) removed from the Routing Set. During the Internet route discovery process, the nodes create entries on the Routing Set to the INs. These entries, which have a valid time, can expire and be removed from the Routing Set when not used. Thus, to reduce the number of transmissions in a new Internet route discovery and to allow the nodes to follow a previously known Internet route, these entries, when removed, have some of their information inserted in a new data structure, the IRC. The IRC should always be consulted when a new Internet route discovery process is started and should, when possible, indicate a previously known Internet route to direct the discovery process. The IRC is optional and should be adopted according to the hardware capacity of the network devices. A complete description of the proposed IRC is presented later in this work. The third component is a new error code for RERR messages. The INTERNET_CONN_LOST error code should be used to indicate to an Internet data message originator that it was not possible to forward the message to the Internet. Thus, the receiver of the RERR should start a new Internet route discovery process to find a new IN to transmit its Internet messages. The generation, processing, and functioning of the proposed new error code for RERR messages are also detailed later in this work. The proposed LOADng-IoT requires several increments in the existing default LOADng structure.
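The IRC behaves like a small cache fed by Routing Set evictions and consulted before flooding a new RREQ-IoT; a minimal sketch in which the capacity, class name, and method names are illustrative assumptions, not part of the protocol definition:

```python
from collections import OrderedDict

class InternetRouteCache:
    """Keeps the most recently evicted Internet routes (R_Internet_conn TRUE),
    so a later Internet route discovery can be directed in unicast toward a
    previously known IN instead of starting with a blind broadcast."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()          # R_dest_addr -> R_next_addr

    def on_route_evicted(self, dest_addr, next_addr, internet_conn):
        if not internet_conn:                 # only Internet routes are cached
            return
        self.entries[dest_addr] = next_addr
        self.entries.move_to_end(dest_addr)   # most recent entry goes last
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # drop the oldest cached route

    def suggest(self):
        """Return (dest, next hop) of the most recently cached Internet route."""
        if not self.entries:
            return None
        dest = next(reversed(self.entries))
        return (dest, self.entries[dest])

irc = InternetRouteCache(capacity=2)
irc.on_route_evicted("IN1", "B", internet_conn=True)
irc.on_route_evicted("X", "Y", internet_conn=False)   # ignored: not an Internet route
irc.on_route_evicted("IN2", "C", internet_conn=True)
print(irc.suggest())  # -> ('IN2', 'C')
```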
In addition to the increments previously presented, the LOADng-IoT also presents several new features required to improve the performance of the studied IoT networks. The Internet route discovery should always be initiated when a node does not find a route entry to an IN on its Routing Set. Thus, the node should create an RREQ with the flag iot set as TRUE. Then, the node defines its own address as the RREQ-IoT originator and destination, and sets a new unique seq-num. In the following, the node should consult its IRC (if in use) to verify the existence of a previously known Internet route. At this point, the IRC is considered to be empty, and will be explored in the next subsection. Thus, the node should transmit the generated RREQ-IoT in broadcast to its neighbors. Route entries created during this process have R_Internet_conn set as FALSE; in addition, if the received RREQ-IoT is used to update the information on an existing route entry, the R_Internet_conn should never be changed to false. If the common processing is concluded without the message being dropped, the message handling proceeds to the next step. Thus, in the specific RREQ-IoT processing, the node should check whether it has an active Internet connection. If the node identifies an active Internet connection, it should reply to the RREQ-IoT by generating and transmitting an RREP-IoT message to the request originator. However, if the node does not have an Internet connection, it should check whether the message is valid for forwarding. If true, the node searches for an Internet route on its Routing Set. Thus, if it is found, the node should use a mechanism similar to SmartRREQ to assist the path creation. Hence, the node changes the RREQ-IoT destination to the found R_destination, updates the other message fields and sends the message in unicast to the R_next_hop. If more than one Internet route exists in the Routing Set, the node should select the one that best addresses the used routing metric.
However, if an Internet route entry is not found in the Routing Set, the node should use the IRC mechanism, if it is activated. Considering that the node does not use the IRC, it should update the RREQ-IoT message fields and transmit the message in broadcast. Independent of the transmission mode of the RREQ-IoT (unicast or broadcast), the next receiver of the message should perform the process previously described. Each node that receives an RREQ-IoT should perform its processing according to the corresponding flowchart. As an example, suppose node A needs to send an Internet message and has begun an Internet route discovery process. Node B has received the RREQ-IoT from A and begun its processing. B does not have an Internet connection but finds an Internet route to node C on its Routing Set. Thus, B changes the RREQ-IoT destination to node C and forwards the message in unicast to C. Thus, inspired by the SmartRREQ, the Internet route discovery process is optimized to reduce the number of broadcast transmissions, thereby contributing to the reduction of energy consumption. Please note that unlike the normal route discovery process, the Internet route discovery process searches for any IN rather than a defined destination address. During the RREQ-IoT message processing, the node does not verify whether it is the message destination, but whether it has an active Internet connection. In addition, it is possible that an intermediate node that knows an Internet route will change the RREQ-IoT destination and redirect the Internet route discovery. As previously explained, a node with an Internet connection that receives an RREQ-IoT should generate a reply message to answer the request. Thus, the node generates an RREP-IoT, defines its own address as originator, and the address of the RREQ-IoT originator as destination. The message should also have a new unique seq-num. Then, the node transmits the RREP-IoT message in unicast to the next address in the path to the destination.
In addition, the R_valid_time of an Internet route should be, by default, two times greater than the valid time of regular routes. Since only Internet-connected nodes can generate RREP-IoT messages, the entries for intermediate nodes that merely forward RREP-IoT messages should not have R_Internet_conn set as TRUE. Thus, the RREP-IoT message performs the maintenance of routes and creates the path to the Internet nodes. At the end of the common processing, the node can generate an RREP_ACK message, if necessary; in LOADng-IoT, RREP_ACK generation, transmission, and processing are equal to those of the default LOADng. In sequence, the RREP-IoT receiver verifies whether it is the message destination. If not, the node should check whether the message is valid for forwarding, update the message, and transmit it toward its destination. Otherwise, if the node is the message destination, the Internet route discovery process is completed, and the node can begin sending Internet messages. The node that receives an RREP-IoT should process it in a way very similar to the normal RREP processing, following the flowchart in the corresponding figure. In the described Internet route discovery process, every Internet-connected node that receives an RREQ-IoT replies with an RREP-IoT. Thus, it is possible for more than one RREP-IoT to be received by the request originator. This behavior makes the construction of several Internet routes possible, one for each different IN. Hence, a node that intends to send an Internet message should look up its Routing Set and find the best path among those available. The selection of the best Internet route should be made based on the route metric in use or the lowest number of hops. The process of Internet message sending and forwarding is described in detail below. In the course of the network functioning, all route entries of the Routing Set can be removed when their valid time expires.
This process occurs to allow the nodes to reduce memory usage and to make the creation of paths to other nodes possible. In LOADng-IoT, the valid times of regular routes and Internet routes can be different and should be adjusted according to the expected traffic in the network. However, considering a scenario where both message types are equally generated, an Internet route tends to be used more, since it represents a path for all Internet messages. Thus, by default, the authors suggest that Internet routes have a valid time two times greater than that of regular routes. Even with a higher valid time, Internet routes that are not used can expire and be removed from the Routing Set of the nodes. Thus, when a node needs to send a new Internet message, the whole Internet route discovery process must be completed again, transmitting several control messages and expending more network resources. To reduce this problem, LOADng-IoT offers an optional improvement that is able to minimize the control message overhead during the construction of Internet routes. This mechanism, the IRC, stores the most relevant information about the last Internet route entries removed from the Routing Set. Then, when it is necessary to perform an Internet route discovery, the node should check its IRC to verify the existence of a previously known Internet route. If one exists, the node can direct the Internet route discovery to the destination of the entry found in the IRC. An entry can be removed from the Routing Set due to valid time expiration or to lack of memory for the insertion of a new entry. With the use of the IRC, if the removed entry is an Internet route (i.e., its R_Internet_conn is TRUE), its R_next_addr and R_dest_addr are used to create a new route cache entry that is inserted in the IRC set. As presented in the corresponding figure, node B receives the RREQ-IoT from A and performs its processing normally. However, B, at the moment of checking its Routing Set, finds an Internet route to node C.
Thus, B changes the received RREQ-IoT destination to C and forwards the message in unicast. Node C then receives the request from A and sends a reply offering the required Internet route. This behavior is acceptable because the intention of an RREQ-IoT is to reach a route to the Internet, independent of which IN provides the connection. In addition, this redirection of the RREQ-IoT ensures that the Internet route discovery process continues with the most recent information (considering that the information provided by the Routing Set is frequently newer than the information provided by the IRC). However, if an Internet route is not found in the Routing Set of the intermediate node, the node should verify its IRC, change the message destination (if necessary), and then forward the message in unicast to an IN that is able to provide an Internet connection to the RREQ-IoT originator. Finally, if the IRC set is empty, the node should change the destination address of the RREQ-IoT to the same address as its originator and send the message in broadcast. This process "converts" an RREQ-IoT received in unicast into a normal RREQ-IoT to be broadcast, so that the Internet route discovery process can continue until an IN is reached. The use of the IRC mechanism allows the nodes to reduce the number of control messages required to construct an Internet route. The cache mechanism is used to direct RREQ-IoTs to a previously known IN. Thus, when adopted, the IRC contributes to the reduction of the number of packet collisions, minimizes energy consumption, and improves network efficiency. As explained, entries in the IRC set are only removed due to lack of memory or the reception of an RERR message; this process is discussed in the description of the new error code proposed in this work.
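The fallback order described above (Routing Set first, then the IRC, then conversion back into a broadcast RREQ-IoT) can be sketched as below. The dictionary-based message and the (destination, next hop) tuples are illustrative assumptions, not structures from the protocol specification.

```python
# Sketch of the IRC fallback for a received unicast RREQ-IoT, per the text.
# routing_set and irc are lists of (dest_addr, next_hop) Internet entries;
# these structures are assumptions for illustration only.

def forward_rreq_iot(msg: dict, routing_set: list, irc: list) -> tuple:
    if routing_set:
        # Routing Set information is usually fresher than the IRC.
        dest, next_hop = routing_set[0]
        msg["destination"] = dest
        return ("unicast", next_hop)
    if irc:
        # Direct the discovery to a previously known Internet node.
        dest, next_hop = irc[0]
        msg["destination"] = dest
        return ("unicast", next_hop)
    # Empty IRC: "convert" the unicast request back into a normal
    # broadcast RREQ-IoT by making the destination equal to the originator.
    msg["destination"] = msg["originator"]
    return ("broadcast", None)
```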
Due to unexpected situations, Internet nodes can sometimes lose their Internet connection. To avoid these nodes receiving Internet messages when they have no connection, this paper proposes a new error code whose function is to advise the neighbor nodes that the Internet connection has been lost. The error code, described as "Internet connection lost" (INTERNET_CONN_LOST), is defined by code 253. The code number used is included in the range of codes reserved for experimental use according to the most recent LOADng specification. To send an Internet message, the node should select the best path based on the selected routing metric, since the Internet route discovery process can create several routes to different INs; thus, it is possible to choose the best path among those available. The nodes that receive a data message (both simple and Internet) should consult their Routing Set to find the next hop to the destination node. This process continues until the data message is delivered. Independent of the message type, it is possible for a broken route to occur during the message forwarding process. In this case, the intermediate node that was not able to forward the message detects the broken path, queues the data message, and starts a new route discovery process according to the message type. If the message is simple, the node should begin a normal route discovery using the standard LOADng procedure. However, if the message is directed to the Internet, the node should start an Internet route discovery following the process described above.

This section presents the performance assessment carried out to evaluate the behavior of the proposed solution. The Cooja simulator/emulator, which is part of the Contiki O.S., was used. The proposed LOADng-IoT was compared with the most recent version of LOADng and with LOADng-SmartRREQ. The objective was to analyze the behavior of the proposed solution against the other proposals in different scenarios. Thus, situations were created with three topology organizations: grid sparse, random dense, and mobility dense. For all the topologies, the number of nodes in the network varied from 16 to 64. This range was chosen because it can represent the majority of existing small-scale IoT application scenarios, mainly in smart homes. The following itemization presents more details on the grid, random, and mobility topologies used.

Grid Sparse Scenarios: the network nodes were organized in linear grids of n×n nodes. The simulated area grew together with the number of nodes. Thus, a fixed network density was maintained where the nodes had between two and four neighbors.

Random Dense Scenarios: the different quantities of nodes were randomly deployed in an area of 200 square meters just once; thus, the random deployments were the same for all compared proposals. The simulated area was the same for the different quantities of nodes. Hence, the network density grew with the increase in the number of nodes.

Mobility Dense Scenarios: the nodes were deployed in the same positions as in the random dense scenarios, in an area of 200 square meters. However, the nodes with an Internet connection were able to move within the whole area of the studied environment.

For all scenarios and the different quantities of nodes, the simulation time was 600 s. In the application, all network nodes generated and sent data messages at variable intervals of between 10 and 15 s. The minimum data message interval was defined as 10 s to avoid the nodes being overloaded with several data messages while still performing a route discovery process. This measure was required because the nodes do not implement a significant buffer for data messages in the routing layer; thus, all data messages generated during a route discovery process are lost.
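The traffic model above (each node generating one data message every 10 to 15 s over a 600 s run) can be reproduced with a simple scheduler; the uniform draw is an assumption, since the text does not state the interval distribution.

```python
import random

def message_schedule(sim_time: float = 600.0, rng=random) -> list:
    """Generation timestamps for one node: one message every 10-15 s."""
    times, t = [], 0.0
    while True:
        t += rng.uniform(10.0, 15.0)  # assumed uniform inter-message gap
        if t > sim_time:
            return times
        times.append(t)
```

With these bounds, a node generates between 40 and 60 data messages per 600 s run, which gives a sense of the offered load per node.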
The data message generation should be able to address the requirement of Equation (1). Four metrics were studied: the packet delivery ratio (PDR), the energy spent per delivered data bit (AES), the control message overhead per delivered data message (CMO), and the percentage of packets with low latency (PLL). For all the studied metrics, the simulations were executed 30 times, and the results are presented with a confidence interval of 95%. The PDR metric represents the number of data messages that were successfully delivered to the destination node. Thus, a high PDR represents an efficient network that is able to deliver the generated data messages with high reliability. This metric is constantly affected by the quality of the links among the nodes, the radio interference provoked by neighbor devices, and collisions with other data and control messages. In this paper, Internet messages delivered to a node without an Internet connection were considered lost. The PDR value of the network was obtained according to Equation (2):

(2) PDR = (Σ_{i=1}^{N} Ri / Σ_{i=1}^{N} Si) × 100

where N is the number of nodes in the network, Ri is the number of data messages received by node i, and Si is the number of data messages sent by node i. The average energy spent per delivered data bit (AES) metric represents the amount of energy spent by the network to successfully deliver each data bit to its destination. Thus, the less energy spent to deliver the data successfully, the higher the power efficiency of the network. The results obtained for this metric are affected by the energy consumption of the nodes and by the packet delivery ratio. The metric is computed using Equation (3):

(3) AES = Σ_{i=1}^{N} Ei / Σ_{i=1}^{N} Bi

where N is the number of nodes in the network, Ei is the energy consumed by node i, and Bi is the number of data bits delivered by node i. The control message overhead per delivered data message (CMO) metric shows the number of control message transmissions required to deliver each data message successfully. As in the previously presented metric, the results of the CMO are directly related to the packet delivery efficiency.
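For example, the two ratios defined so far could be computed from per-node counters as below; the counter names (sent and received messages, consumed energy, delivered bits) are assumptions, since the extraction lost the original equation bodies.

```python
# Sketch of Equations (2) and (3) as described above; names are assumed.

def pdr(sent: list, received: list) -> float:
    """Packet delivery ratio (%) over all N nodes."""
    return 100.0 * sum(received) / sum(sent)

def aes(energy: list, delivered_bits: list) -> float:
    """Average energy spent per successfully delivered data bit."""
    return sum(energy) / sum(delivered_bits)
```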
Although control messages are not used during the forwarding of data packets, it is possible to count the overhead generated by the nodes to construct the path used to deliver the data messages. Thus, by calculating the ratio between the number of control message transmissions used to discover the routes and the number of data packets delivered, it is possible to obtain the mean number of control message transmissions required to deliver each data message. This metric is calculated according to Equation (4):

(4) CMO = Σ_{i=1}^{N} Ci / Σ_{i=1}^{N} Di

where N is the number of nodes in the network, Ci is the number of control message transmissions performed by node i, and Di is the number of data messages delivered by node i. The results obtained for the CMO metric are presented in the corresponding figure. The percentage of packets with low latency (PLL) metric gives the percentage of data packets delivered with a latency that is considered low and acceptable for low-power devices in IoT applications. Several aspects can affect the results of this metric, among them: the length of the path constructed during the route discovery process; the quality of the links among the nodes; the radio duty cycle frequency; the medium access control protocol; and the density of nodes. The PLL metric is computed according to Equation (5):

(5) PLL = (Σ_{i=1}^{N} Li / Σ_{i=1}^{N} Di) × 100

where Li is the number of data packets delivered with low latency by node i and Di is the number of data packets delivered by node i.

This work presented a new improvement to the LOADng routing protocol for IoT scenarios in which the network devices have different capacities and use different message types. The proposal was compared with the default implementation of LOADng and with LOADng-SmartRREQ through simulations using Cooja. Four different metrics were studied to expose the network performance in terms of reliability, QoS, and power efficiency. For all the considered metrics, LOADng-IoT demonstrated better performance for sparse, dense, and mobile networks. These significant results were obtained due to the set of improvements provided by the proposed approach. Unlike other approaches, the proposed LOADng-IoT does not require a previous definition of gateways and, hence, it can find the most appropriate Internet node to forward messages.
This feature provides a self-adaptation capability to the network nodes and reduces the necessity of human intervention in both network deployment and execution. Optionally, LOADng-IoT allows the use of a cache system dedicated to storing Internet routes, enabling the nodes to direct the route discovery to INs that had previously been used as gateways. Together, these approaches permit LOADng-IoT to reduce the control message overhead required to find the Internet routes, contributing to the reduction of power consumption and improving the packet delivery ratio. Finally, this work proposed a new error code that makes it possible for Internet nodes to advise the other network nodes about their temporary Internet connection loss. Thus, devices that intend to send Internet messages can find new gateways to forward messages, increasing the chances of successful delivery. It was also noted that LOADng-IoT performance was, in general, less variable than that of the other studied proposals. This behavior is justified because LOADng-IoT requires fewer radio transmissions to perform data packet delivery and route discovery; thus, LOADng-IoT is less affected by the interference and collisions that commonly make network performance unstable. In conclusion, the authors find that, together, all of the proposed mechanisms that comprise LOADng-IoT provide a significant new enhancement for IoT networks, allowing them to attain better QoS, efficiency, and reliability. For future work, the authors suggest that experiments in real IoT environments should be conducted to test the results obtained using computational simulation. Moreover, the source code of the proposed solution should be improved, documented, and disseminated to the scientific community.

In sub-Saharan Africa, one of the key challenges in assessment using neuropsychological tools has been the lack of adequately validated and easily implementable measures.
This study will translate into English, adapt, and standardize the Computerized Battery for Neuropsychological Evaluation of Children (BENCI). The BENCI battery will be adapted using a back-translation design and comprehensive cultural adaptation, and standardized in a case–control study involving two groups of children: HIV-infected children and HIV-unexposed, uninfected children. The content adaptation will be iteratively carried out using knowledge of English and feedback from pilot testing with children. The proposed study will first involve the cultural adaptation of the BENCI. It will then recruit 544 children aged 8–11 years, with half of them being HIV+ and the other half HIV-unexposed, uninfected. Test–retest reliability will be analyzed using Pearson's correlation, while ANOVA and correlational analyses will be used to calculate discriminant, convergent, and construct validity. This study will result in an open-access, adequately adapted, and standardized measure of neuropsychological functioning for use with children in East Africa. This protocol paper provides an opportunity to share the planned methods and approaches.

Children growing up in low- and middle-income countries (LAMICs) are at a significant risk of experiencing neurocognitive impairment due to exposure to multiple risk factors. Recent efforts indicate that using a systematic adaptation process can contribute to the development of neuropsychological measures that can be adequately used in LAMIC settings. One attractive feature of the BENCI is its computerized nature, which makes it relatively easy to administer and to obtain results readily and paper-free. The BENCI has good psychometric properties in terms of validity and reliability.
The general objective of this study is to establish the reliability and validity of the BENCI and its utility in monitoring outcomes among HIV-positive school-going children. The specific objectives are: (1) to evaluate the internal consistency and re-test reliability of the BENCI; (2) to evaluate the construct and criterion validity of these measures; and (3) to evaluate the discriminative validity of the measures by comparing the performance of HIV-infected and HIV-exposed, uninfected school-going children. A three-phased approach will be carried out. In the first phase, the linguistic and semantic equivalence of the BENCI content will be ensured through a back-translation design from Spanish to English by two translators. An evaluation of the tool's structure and appropriateness will also be done, where psychologists will check the appropriateness of the pictures and other materials. In the second phase, a pilot study among 10 children will evaluate the appropriateness of the items, including pictures and instructions.
In the third phase, the psychometric properties of the English version of the BENCI with regard to the Kenyan population of HIV+ children will be evaluated in a case–control study at an HIV programme and three public schools. One of the study sites will be a county-level HIV programme, an outpatient programme catering for HIV-infected individuals and their families from culturally diverse backgrounds within resource-poor settings. The normative data will be collected from three primary schools in a resource-poor setting in Nairobi County. The two settings are in the Kenyan capital city of Nairobi, which has a high level of literacy (87.1%), with English being the primary language of instruction in the schools. A socio-demographic questionnaire will incorporate elements such as age, gender, and education background, among other variables. A breakdown of the measures within each cognitive domain of the BENCI is available in the corresponding table. The Kilifi Neuropsychological Tool Kit comprises a set of measures adapted and standardized from published measures. The tools have good psychometric properties, with split-half reliability between .70 and .84 and internal consistency ≥ .70. The Guidelines for Translating and Adapting Tests developed by the International Test Commission will be used in adapting and standardizing the tool. A pretest of the BENCI will be carried out in order to identify elements that may not be well understood by respondents and problems that may be encountered during the main study. The piloting will be carried out among 10 randomly selected children from a community-based HIV programme. The randomization will be carried out among 8- to 10-year-olds who are living with HIV. They will be randomly selected as they come into the clinic for their usual appointments and requested to enroll in the pilot study.
The piloting will aid in adapting the BENCI in terms of modifying item formats that may not be recognized by respondents and eliminating translation bias, among other modifications. In order to improve content validity, an inter-rater reliability analysis will be carried out in which two raters will review the results of the pretest: one rater will administer the tool among the pilot sample, while the other rater will review how the tool is administered and how the respondent responds. This work will be qualitative in nature, aiming to identify and refine the items and their relevance within the BENCI. HIV-exposed and unexposed children will be considered as comparative groups, potentially matched on background characteristics (age and gender). The two comparable groups will be of equal sample sizes, calculated using a formula cited in Wittes. The sample size computation is based on data from earlier studies in Africa, which yield a pooled standard deviation of 66.3. Together with a significance level of 5% and a power of 80%, this results in a total sample size of 544 respondents, 272 in each study arm. Thus, the study will need to enroll 272 HIV-exposed children and 272 unexposed children. Respondents will be recruited using stratified random sampling by sex and age. A list of all respondents aged 8–11 years in the HIV programme and in the primary schools, according to gender, will be extracted from the children's database in the institutions. The children's caregivers will then be requested to give consent on behalf of the children in the schools and the HIV programme. A list of all the children with parental consent will then be compiled in readiness for the data collection day. On the data collection day, the respondents who agree to participate will be shown the room with the neuropsychological assessment tools and will proceed with the demographic questionnaire and later the neuropsychological tests.
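The sample-size computation above can be reproduced with the standard two-means formula; the 5% significance level and 80% power follow the text, but the detectable mean difference in any concrete call is an illustrative assumption, since the protocol reports only the resulting n = 272 per arm.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd: float, delta: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm n for comparing two means: 2 * (sd/delta)^2 * (z_a + z_b)^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_b = z.inv_cdf(power)          # desired power
    return ceil(2 * (sd / delta) ** 2 * (z_a + z_b) ** 2)
```

With sd = 66.3 and a hypothetical mean difference of 16 score points, this formula yields roughly 270 per arm, of the same order as the 272 reported.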
The process will take around 90–120 min per child. Since the data generated by the BENCI will be collected in computerized format, they will automatically be numerically coded with identifiers attached to the different respondents, and Excel sheets will be generated as programmed in the tool. The Kilifi Toolkit subtest data will be manually keyed into Excel sheets after being numerically coded with identifiers attached to differentiate the respondents. Descriptive statistics, as well as frequency distributions, will be used to analyze the demographic traits, among other characteristics, of the HIV-uninfected, unexposed (control) and HIV-infected groups with the aid of SPSS. Intraclass correlation, in the same software, will be used to calculate test–retest reliability. To examine convergent validity, the raw scores of the BENCI subtests will be compared to the raw scores of the subtests in the Kilifi Toolkit. A confirmatory factor analysis will be used to assess construct validity; the validity indicators will be several alternative fit statistics, as recommended by Hu and Bentler, including the Chi square. The study will be conducted in a community setting; hence, the findings may not be replicable within a clinical setting.

Epidemiological evidence suggests that cadmium (Cd) is one of the causative factors of prostate cancer, but the effect of Cd on benign prostatic hyperplasia (BPH) remains unclear. This study aimed to determine whether Cd exposure could malignantly transform BPH1 cells and, if so, to dissect the mechanism of action. We deciphered the molecular signaling responsible for BPH1 transformation via RNA-sequencing and determined that Cd induced the expression of zinc finger of the cerebellum 2 (ZIC2) in BPH1 cells. We noted that Cd exposure increased ZIC2 expression in the Cd-transformed BPH1 cells, which in turn promoted anchorage-independent spheroids and increased expression of stem cell drivers, indicating their role in stem cell renewal.
Subsequent silencing of ZIC2 expression in transformed cells inhibited spheroid formation, stem cell marker expression, and tumor growth in nude mice. At the molecular level, ZIC2 interacts with glioma-associated oncogene family (GLI) zinc finger 1 (GLI1), which activates prosurvival factor, as well as X-linked inhibitor of apoptosis protein (XIAP), signaling in Cd-exposed BPH1 cells. Conversely, overexpression of ZIC2 in BPH1 cells caused spheroid formation, confirming the oncogenic function of ZIC2. ZIC2 activation and GLI1 signaling induction by Cd exposure in primary BPH cells confirmed the clinical significance of this oncogenic function. Finally, human BPH specimens had increased ZIC2 versus adjacent healthy tissues. Thus, we report direct evidence that Cd exposure induces malignant transformation of BPH via activation of ZIC2 and GLI1 signaling.

Benign prostatic hyperplasia (BPH) is a chronic urological disorder characterized by noncancerous enlargement of the prostate gland. Both BPH and prostate cancer (CaP) share similar etiology and pathophysiological factors. Although BPH-to-CaP conversion is controversial, the published evidence suggests that BPH patients have a two- to threefold increased risk for CaP and a two- to eightfold increased risk for CaP-associated mortality. More recently, a comprehensive review reported an association between BPH and CaP, suggesting that BPH is a risk factor for CaP. While the possibility exists that BPH patients have an increased risk of developing CaP and CaP-associated mortality, it remains unclear how and why BPH patients are at risk for developing CaP. Epidemiological studies have reported that Cd could be a potent prostate carcinogen because it is found at significantly higher levels in tissues and plasma of CaP patients than of healthy controls.
Cadmium (Cd) is a known metal carcinogen and is one of the most abundant occupational and environmental pollutants, found in air, soil, water, dietary products, and tobacco smoke. Cd is an endocrine disruptor in experimental models, supporting the hypothesis that this metal carcinogen can potentially induce the development of hormone-dependent tumors in humans, including those of the breast and uterus. However, its molecular effects on the malignant transformation of BPH cells remain elusive.

ZIC2 plays a regulatory function in the pluripotency and self-renewal of cancer stem cells (CSCs) and is also highly expressed and involved in the tumorigenesis of various cancer types, including CaP. As a zinc finger transcription factor, ZIC2 binds the zinc finger domains of other protein families, including the glioma-associated oncogene (GLI) family, in either a synergistic or antagonistic manner. GLI proteins are also downstream targets of the Sonic hedgehog (Shh) pathway, an important therapeutic target for CaP treatment. Several studies have demonstrated growth arrest in CaP cell lines and xenografted tumors in mice following the inhibition of Shh signaling. The Shh-associated secretory protein binds and inactivates patched1 (PTCH1), resulting in the release of smoothened protein and activation of GLI1, GLI2, and GLI3. GLI1 predominantly functions as a transcription activator, while GLI2 and GLI3 function as either activators or repressors. GLI1 activation initiates the expression of downstream target genes involved in proliferation (cyclin D1), survival, metastasis (Snail), and stem cell activation (Nanog and SOX2).
Activated GLI1 and GLI2 proteins can also directly promote the expression of a group of genes involved in the process of epithelial–mesenchymal transition (EMT). Deregulation of transcription factors, such as the zinc finger of cerebellum 2 (ZIC2), has been linked to heavy metal exposure and metal-induced carcinogenesis. Here, for the first time, we report direct evidence that Cd exposure induces malignant transformation of BPH1 cells, which exhibit an aggressive phenotype similar to that of CaP. In addition, our results suggest that chronic exposure of BPH1 cells to Cd was responsible for the stem cell renewal, proliferation, and tumorigenesis of the transformed cells.

The BPH1 cell line was a kind gift from Dr. Simon W. Hayward. The cells were authenticated by Genetical Cell Line Testing. The cells were treated with 10 µM Cd for 1 year and transformed into a malignant phenotype; the transformed cells were named Cd-transformed BPH1 (CTBPH1). Human normal prostate epithelial cells (RWPE-1) were purchased from the American Type Culture Collection. BPH1 and CTBPH1 cells were cultured in RPMI medium supplemented with 10% fetal bovine serum and 1% antibiotic and antimycotic solution, in a humidified atmosphere of 5% CO2 at 37 °C in an incubator. The RWPE-1 cells were cultured in keratinocyte serum-free medium containing L-glutamine, Epidermal Growth Factor (EGF), and bovine pituitary extract (BPE). Cell viability assays were performed using the trypan blue exclusion method or the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, as described previously, on BPH1 cells following exposure to 10 µM Cd for 24, 48, and 72 h.
Colony formation assays were performed on Cd-exposed BPH1 and CTBPH1 cells to monitor anchorage-independent growth via the CytoSelect™ 96-well in vitro tumor sensitivity assay kit. A group of at least 50 cells was considered a colony, and those colonies were counted. Invasion assays were performed on BPH1, 6-month Cd-exposed BPH1, CTBPH1 (12-month Cd-transformed cells), ZIC2+, and ZIC2− cells using Boyden chambers equipped with polyethylene terephthalate membranes with 8-μm pores (BD Biosciences), as described previously. Similarly, migration assays were performed for Cd-exposed BPH1 cells in six-well plates, as described previously. Spheroids of ZIC2+ and ZIC2− cells were generated by the forced-floating method using 96-well round-bottom Ultra Low Attachment plates. A single-cell suspension of ZIC2+ and ZIC2− cells at a density of 2 × 10^3 cells in 200 μl of the respective culture media supplemented with Matrigel™ was loaded into each well. Then, the morphology and growth of the spheroids were characterized over a 7-day culture period in triplicate. BPH1 cells were seeded in six-well plates at a density of 3 × 10^5 cells/well. After 24 h, the cells were transiently transfected with short-interfering RNA (siRNA) specific for ZIC2 or GLI1, or control siRNA, or an overexpression plasmid specific for ZIC2, for 48 h; Lipofectamine® 2000 was used as the transfection reagent. After 48 h, lysates were prepared and western blotting was performed using specific antibodies against Bcl-xL (Santa Cruz 7B2.5), X-linked inhibitor of apoptosis protein, cleaved caspase-3, PTCH1 (ab53281), NOTCH1, SOX2, E-cadherin, N-cadherin, Slug, and β-catenin. Protein–antibody complexes were visualized using enhanced chemiluminescence, as previously described.
For immunoprecipitation experiments, the protein samples were immunoprecipitated with ZIC2 antibody at 4 °C under agitation overnight following protein extraction using radioimmunoprecipitation assay (RIPA) buffer; the immunoprecipitated protein was then pulled down with protein A-agarose beads at 4 °C under rotary agitation for 3 h. Immunoprecipitates were washed three times with RIPA buffer. After centrifugation, the pellets were resuspended in sample buffer and heated for 5 min at 95 °C for sodium dodecyl sulfate–polyacrylamide gel electrophoresis, followed by immunoblot analysis. BPH1, transforming cells, and CTBPH1 cells were seeded in six-well plates and incubated for 24 h. Cell lysates were prepared, and western blotting was performed using specific antibodies against ZIC2 (ab150404), GLI1 (ab151796), Shh (ab53281), NANOG, ZIC2 for immunohistochemistry (Sigma AU35821), CD44, Poly(ADP-ribose) polymerase (PARP), NFκB-p65, and Bcl2. Total RNA was extracted from BPH1, Cd-exposed BPH1, CTBPH1, and ZIC2 siRNA-treated CTBPH1 cells, and RT-qPCR was performed for ZIC2, SOX2, Nanog, CD44, and Notch1 expression, as described previously. BPH tissue microarrays and xenograft tumor tissues were examined for ZIC2, GLI1, and p65 expression using immunohistochemistry analysis. Immunofluorescence for ZIC2 and GLI1 expression was performed on Cd-exposed BPH1 and CTBPH1 cells, as described previously. Cells were incubated with 2.0 μl of fluorochrome-conjugated ZIC2 antibody before being sorted using fluorescence-activated cell sorting (FACS). Cells were analyzed on a Cytopeia Influx FACS using Spigot. Cell subpopulations were separated based on the surface antibody labeling and collected by discriminatory gating. After selecting for ZIC2+, cell suspensions were sorted from the BPH cell population. For the xenograft experiments, mice were subcutaneously injected with Cd-exposed BPH1 or CTBPH1 cells in 50 μl Matrigel (Corning). The mice were monitored twice weekly, and the tumor volumes were measured once per week.
At the end of the experiments, the mice were euthanized via CO2 asphyxiation; the tumors were removed and fixed in 10% formalin for histopathological studies. Five-week-old male BALB/c (nu/nu) athymic mice were purchased from The Jackson Laboratory and housed in the University of Louisville vivarium under pathogen-free conditions. All experimental animals were maintained in accordance with Institutional Animal Care and Use Committee approval, which was obtained from the ethical committee of the University of Louisville, KY. The mice were randomly divided into four groups of eight each. At 7 weeks of age, the mice were subcutaneously injected with BPH1, CTBPH1, ZIC2+, or ZIC2− cells. All statistical analyses used GraphPad Prism 8.0a software. An unpaired two-tailed Student's t-test was performed for two-group comparisons, and one-way analysis of variance was performed for multiple-group comparisons. Statistical significance was set at P < 0.05, and values are presented as means ± SD. Cd exposure upregulated the prosurvival proteins Bcl2, Bcl-xL, and XIAP but did not significantly alter the expression of the proapoptotic proteins cleaved caspase-3 and PARP. Western blots and qRT-PCR confirmed that ZIC2 and Shh pathway markers were upregulated, validating our transcriptomic analysis. The results confirmed the activation of Shh signaling in Cd-exposed BPH1 cells, as determined by a decrease in PTCH1 expression and corresponding upregulation of Shh, GLI1, and ZIC2 (Fig.). To further confirm that this activation is specific to Cd-exposed BPH1 cells, we assessed ZIC2 levels in RWPE-1 cells following acute Cd exposure. No changes in ZIC2 mRNA and protein expression were noted in Cd-exposed RWPE-1 cells (Fig.), suggesting that ZIC2 induction is specific to Cd-exposed BPH1 cells. Although the use of immortalized cell lines is imperative for metal carcinogenesis studies, especially for transformation assays, the effect of the immortalization process itself on the molecular signature of metal carcinogen-induced transformation remains a concern for researchers.
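The two-group and multiple-group comparisons described above (unpaired two-tailed Student's t-test, one-way ANOVA, significance at P < 0.05) can be sketched with SciPy in place of GraphPad Prism; the measurement values below are hypothetical illustrations, not data from the study.

```python
# Sketch of the statistical tests named in the methods, using SciPy.
# All numeric values are made-up example measurements.
import numpy as np
from scipy import stats

control = np.array([1.00, 1.10, 0.95, 1.05])  # hypothetical normalized values
treated = np.array([1.80, 1.95, 1.70, 1.90])
group3 = np.array([2.50, 2.40, 2.60, 2.55])

# Two-group comparison: unpaired two-tailed Student's t-test
t_stat, p_two_group = stats.ttest_ind(control, treated)

# Multiple-group comparison: one-way analysis of variance
f_stat, p_anova = stats.f_oneway(control, treated, group3)

print(f"t-test p = {p_two_group:.2e}, ANOVA p = {p_anova:.2e}")
print("significant" if p_two_group < 0.05 else "not significant")  # alpha = 0.05
```

Note that `ttest_ind` assumes equal variances by default; passing `equal_var=False` gives Welch's t-test when group variances differ.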
To address this, we compared the effects of Cd exposure on cell viability and ZIC2/GLI1 expression in primary human BPH cells. Cd exposure reduced cell viability (not significantly) in primary human BPH cells and increased the expression of ZIC2 and GLI1 (Fig.). To develop an in vitro model of Cd-induced malignant transformation, we chronically exposed BPH1 cells to 10 μM Cd for up to 16 months. We periodically performed anchorage-independent growth assays to assess the malignant transformation of Cd-exposed cells. BPH1 cells began forming colonies after 6 months of Cd exposure, and the number of colonies increased with exposure time. This confirmed that Cd exposure induced the malignant transformation of BPH1 cells (Fig. 3). We next performed western blot analysis to determine whether the molecular signature of chronic Cd exposure was similar to that of acute exposure in BPH1 cells. The results showed that, similar to acute exposure, chronic Cd exposure also increased the expression of ZIC2, Shh, and GLI1. Upregulation of the prosurvival proteins p65, XIAP, and Bcl2 and the stem cell markers Nanog and CD44 was also observed (Fig.). Although the BPH1 cells were chronically exposed to Cd for up to 16 months to induce malignant transformation, we did not observe any significant phenotypic changes in cells exposed beyond 12 months. Thus, we considered the cells exposed to Cd for 12 months to be completely transformed and refer to them as CTBPH1 below. We used FACS to sort the ZIC2-negative and -positive cell populations at specific time points to determine whether Cd-mediated ZIC2 induction confers CSC properties on the transformed BPH1 cells. A gradual increase in the percentage of ZIC2+ cells was observed in the transforming cells; the 12-month timepoint had the highest fraction of positive cells (45–50%; Fig.). ZIC2+ cells formed spheres, whereas ZIC2− cells did not form spheroids (Fig.).
More mice injected with ZIC2+ cells developed tumors than mice injected with ZIC2− cells (Fig.). Increased MMP9, Slug, and β-catenin expression, together with downregulation of E-cadherin, was noted in the ZIC2+ cells as compared with ZIC2− CTBPH1 cells (Fig.). Self-renewal, higher tumorigenicity, sphere formation, and metastatic ability are characteristic properties of CSCs. Furthermore, invasion and migration assays demonstrated that ZIC2+ cells showed increased invasive and migratory properties versus ZIC2− cells (Fig.). These results suggest that ZIC2 may aid in the transformation of Cd-exposed BPH1 cells by inducing a mesenchymal phenotype and promoting stem cell renewal. Considering that Cd-mediated ZIC2 activation promoted the self-renewal properties of the CTBPH1 cells, we inhibited ZIC2 expression in these cells to determine whether this altered their self-renewal potency. Spheroid assays showed that silencing ZIC2 expression in CTBPH1 cells completely abrogated the formation of spheroids versus cells transfected with scrambled vectors (Fig.). As shown above, Cd exposure induced ZIC2- and Shh-mediated GLI1 signaling in the CTBPH1 cells. We postulated that the induction of prosurvival signaling or inhibition of proapoptotic signaling may be regulated by activation of GLI1. Interestingly, cell viability assays showed that inhibition of GLI1 inhibited the growth of CTBPH1 cells (Fig.). GLI1 is known to interact with ZIC231; thus, we performed immunoprecipitations to confirm whether Cd-induced BPH carcinogenesis occurs through the interaction between these two proteins. Unlike the vehicle-treated BPH1 cells, which showed a complete absence of GLI1/ZIC2 binding, strong binding was observed in CTBPH1 cells (Fig.). These results indicate that both ZIC2 and Shh/GLI1 signaling may be necessary for Cd-induced transformation of BPH1 cells.
To delineate how GLI1 and ZIC2 molecularly interact and coordinate their functions during Cd-mediated transformation, we first investigated the localization of these two transcription factors in the CTBPH1 cells. Western blot and immunofluorescence analyses revealed that Cd exposure caused induction and nuclear accumulation of ZIC2 and GLI1 versus vehicle-treated control cells (Fig.). Next, we investigated whether CTBPH1 cells could induce tumor formation in nude mice. We observed a progressive increase in tumor growth in mice injected with CTBPH1 cells versus mice injected with vehicle-control cells (Fig.) over a 4-week period. Immunohistochemistry of BPH tissue microarrays comprising tissue sections from 80 BPH patients showed that ZIC2 immunostaining was predominant in BPH specimens versus normal prostate tissue (Fig.). Although Cd is a well-known human prostatic carcinogen32, the underlying mechanisms involved in Cd carcinogenesis remain unclear. Here, we demonstrated that long-term chronic exposure of BPH1 cells to Cd induced transcriptional changes responsible for stem cell renewal, proliferation, and tumorigenesis of the transformed CTBPH1 cells. Cd-mediated ZIC2 activation initiated stem cell renewal and activation of GLI1; both of these steps promoted proliferation of the transformed cells. Thus, we confirmed that the dynamic interaction of ZIC2 and GLI1 is necessary for Cd-induced malignant transformation of BPH cells, a phenomenon not observed in normal prostate epithelial cells. Similarly, this study showed that chronic exposure to Cd resulted in malignant transformation of BPH1 cells34. We previously showed that defective autophagy was responsible for Cd-induced transformation of RWPE-1 cells35.
Here, ZIC2 and GLI1 were responsible for Cd-induced transformation in BPH1 cells, and this effect appears to be specific to BPH1 cells and not to normal prostate epithelial cells. We previously demonstrated that implanted Cd-transformed RWPE-1 cells formed tumors in xenograft models36. Similarly, the stem cell renewal function of ZIC2 has been established37, with stem cell transcription factors such as Oct4, Sox2, and Nanog identified as binding sites38. We also found that Nanog and CD44 were upregulated in CTBPH1 cells as compared with vehicle-treated cells. Previous studies demonstrated that CSCs may be responsible for tumor initiation, invasion, distant metastasis, chemo-resistance, self-renewal, and differentiation potential39. Thus, we set out to isolate ZIC2+ cells from CTBPH1 cells and characterize ZIC2 function. CSCs are believed to be able to form spheres in culture that exhibit extensive similarities to endogenous CSCs in human tumor tissues40. Here, sphere formation assays using ZIC2+ and ZIC2− cells revealed an increased ability of ZIC2+ cells to form spheroids, suggesting that ZIC2 activation imparts CSC properties to Cd-exposed BPH1 cells. Interestingly, upon testing the in vivo potency of ZIC2+ versus ZIC2− cells in athymic mice, we observed tumor formation in mice injected with ZIC2+ cells but not in those injected with ZIC2− cells. However, our results also demonstrated that inhibition of ZIC2 expression did not alter the expression of cell survival and proapoptotic markers, suggesting activation of an alternative machinery in Cd-exposed cells. Earlier studies established the oncogenic role of ZIC2 in pancreatic ductal adenocarcinoma cells, in which it activates the expression of FGFR3 and ANXA8. One study14 demonstrated that overexpression of ZIC2 induced nuclear retention of GLI1, the downstream effector of oncogenic Shh signaling. Other studies have suggested that GLI1 physically interacts with the ZIC2 protein via a zinc finger domain42.
We found that Cd induced Shh ligands, which in turn inhibited PTCH1 expression, leading to the activation of GLI1 and its downstream prosurvival targets, such as Bcl2, Bcl-xL, and XIAP. This strongly suggested that activation of Shh signaling initiated the prosurvival machinery and promoted cell proliferation in Cd-exposed BPH1 cells; this was later corroborated when we found that silencing GLI1 expression inhibited cell proliferation in CTBPH1 cells. Interestingly, we also noted that the interaction of ZIC2 and GLI1 was stronger in CTBPH1 cells than in RWPE-1 and BPH1 cells. This suggests that the oncogenic effects of ZIC2 could be due to the interaction between these two proteins, which could in turn aid the tumorigenicity of the CTBPH1 cells. Recently, Chan et al.44 reported related findings. Wei and Shaikh45 reported that prolonged Cd treatment of triple-negative breast cancer cells stimulates cell proliferation, adhesion, cytoskeleton reorganization, as well as migration and invasion. Here, we observed that both ZIC2+ cells and cells chronically exposed to Cd showed induced expression of the mesenchymal marker β-catenin. The Wnt/β-catenin pathway plays a pivotal role in multiple malignancies, regulating cell proliferation, EMT, and migratory processes47. Moreover, Wnt/β-catenin signaling can also regulate Zic gene expression48. Thus, our results strongly suggest that chronic Cd exposure promotes invasion, migration, and EMT in BPH1 cells via activation of the Wnt/β-catenin signaling pathway. Recent studies have demonstrated that Shh activation plays a central role in EMT regulation, involving the loss of cell–cell adhesion, changes in cell morphology, and the propensity to migrate and invade in CaP. In summary, we found that Cd exposure of BPH1 cells causes malignant transformation and tumorigenesis via upregulation of Shh/GLI signaling as well as ZIC2 activation.
We also found that ZIC2 and GLI1 function as complementary signals, initiating stem cell renewal and driving invasion and migration, respectively. Thus, they are necessary components of Cd-induced prostate carcinogenesis.

Primary neurons from rodent brain hippocampus and cortex have served as important tools in biomedical research over the years. However, protocols for the preparation of primary neurons vary, which often leads to conflicting results. This report provides a robust and reliable protocol for the production of primary neuronal cultures from the cortex and hippocampus with minimal contribution of non-neuronal cells. The neurons were grown in serum-free media and maintained for several weeks without any additional feeder cells. The neuronal cultures maintained according to this protocol differentiate and, by 3 weeks, develop extensive axonal and dendritic branching. The cultures produced by this method show excellent reproducibility and can be used for histological, molecular and biochemical methods. A primary neuron culture from embryonic rodent hippocampus or cortex has been one of the most fundamental methodologies of modern neurobiology. Primary neurons can be easily cultured and, over a few days or weeks, differentiate into neurons with clearly separable axons, dendrites, dendritic spines and synapses. By modifying the culture medium and conditions, numerous factors responsible for directing different aspects of neuronal survival, differentiation and phenotype have been revealed. One of the disadvantages of primary cultures is that the cells do not divide in culture and need to be generated from embryonic or early postnatal brains every time. Moreover, successful dissection and preparation of cultures require substantial skill and experience. Over several decades, cell lines have been discovered and created that mimic many or most of the features of primary neurons12.
Neuronal cultures, however, vary vastly depending on source, age of derivation and culture conditions. Results obtained with a culture protocol used in one lab may not be reproducible in another lab, which adds to the ongoing discussion about the reproducibility crisis. Over more than a decade, we have developed and refined a culture protocol for primary neurons derived from E17–18 rat embryos, which has been successfully used in several publications16. Pregnant female Wistar rats were obtained from Envigo. The plug date of the female rats was marked E0. All embryos staged at E17–18 from the female rats were used in the experiments. The embryos were staged according to the Witschi Standard Rat Stages. The average litter size was 9 pups per female rat. Animals were kept in standard conditions. All experiments were performed at the Animal Unit, Biomedicum, University of Helsinki. The procedures followed institutional guidelines under University of Helsinki internal license number KEK17-016.
Food and water were available ad libitum.

Phosphate-buffered saline (PBS) buffer, pH 7.4:
8.0 g NaCl,
0.2 g KCl,
1.44 g Na2HPO4 × 2H2O,
0.24 g KH2PO4.
Make up to 1 l with Milli-Q H2O and autoclave at 121°C for 20 min.

Preparation medium, pH 7.2:
HBSS,
1 mM sodium pyruvate,
10 mM HEPES.

Dulbecco's modified Eagle's medium (DMEM)++:
DMEM,
10% fetal bovine serum,
1% l-glutamine,
1% penicillin–streptomycin.

Papain buffer (stock, stored at −20°C):
1 mg DL-cysteine HCl,
1 mg bovine serum albumin (BSA),
25 mg glucose in 5 ml PBS.

Papain solution:
0.5 mg papain,
10 μg DNase I in 5 ml papain buffer.

Trituration medium:
10 μg DNase I in 10 ml preparation medium as above.

Growing medium:
Neurobasal medium,
2% B27 supplement,
1% l-glutamine,
1% penicillin–streptomycin.

Poly-l-lysine working solution: a 1:10 dilution of stock poly-l-lysine in Milli-Q H2O.

4% paraformaldehyde (PFA): 40 g PFA in 1 l PBS. The final solution was filtered with a Whatman filter paper (pore size 45 mm).

PBST: 0.3% Triton X-100 in PBS.

Blocking buffer:
1% BSA,
4% normal goat serum,
0.3% Triton X-100 in PBS.

Primary antibodies used were against glial fibrillary acidic protein (GFAP), neuronal nuclei (NeuN) and microtubule-associated protein 2 (Map2), at 1:100. Secondary antibodies were goat-anti-rabbit 647, goat-anti-mouse 568 and goat-anti-chicken 488, diluted 1:1000 in blocking buffer.

The pregnant female Wistar rats were terminally anesthetized with carbon dioxide (CO2) in a CO2 euthanasia chamber. Abdominal skin was washed with 70% ethanol followed by an incision to cut and open the peritoneal cavity (Supplementary video S1). Amniotic sacs were exposed with fine scissors and embryos were taken out from the uterus. These embryos were transferred to a 50-ml polypropylene Falcon tube with 30 ml of PBS on ice. The embryos were transferred to a sterile laminar hood where further processing took place. The heads were decapitated by scissors and immediately transferred to ice-cold PBS stored on ice.
Brain dissection was performed in 10 ml of preparation medium. The dissected hippocampi were transferred into 10 ml of preparation medium on a 35-mm dish. The hippocampi were then transferred using a fire-polished glass Pasteur pipette into a 15-ml polypropylene Falcon tube with 5 ml of papain solution. The tissue was incubated for 10 min at 37°C. After the tissue had sunk to the bottom, excess papain solution was discarded using a glass pipette. Three millilitres of trituration medium was pipetted into the tube, and the cells were triturated with a fire-polished glass Pasteur pipette; the trituration was repeated three times. The pooled supernatant with the dissociated tissue from each trituration step (approximately 9 ml) was centrifuged for 5 min at 900 rpm (154×g) in a Centrifuge 5810 (Eppendorf) at RT. The clear supernatant was discarded. The remaining cell pellet was re-suspended in 1 ml of fresh growing medium at RT. The hippocampal cells were diluted 1:10 (5 μl of cells + 45 μl growing medium). This gentle dissociation method results in hippocampal cells with very few dead cells or debris; therefore, the diluted cells were directly used for cell counting. Ten microliters of the diluted cells were transferred into a disposable hemocytometer by a pipette. For cell counting, cells in four fields of the hemocytometer were counted using a manual cell counter under a Leica stereo microscope and averaged. Hippocampi from ten embryos typically yield 5–6 million neurons. The cells were grown at 37°C in a 5% CO2 humidified incubator for the indicated times. For cortical cells, pipette 7.5 ml of DMEM++ onto uncoated 90-mm Petri dishes. The plates were incubated at 37°C in a 5% CO2 humidified incubator for 30 min. During this incubation, glial cells and other unwanted debris adhere to the bottom of the plate, and the cortical cells also recover from the trituration.
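The hemocytometer step above converts an averaged field count into a concentration. A minimal sketch of that arithmetic, assuming the standard hemocytometer conversion (cells/ml = mean count per large square × dilution factor × 10⁴); the four field counts are hypothetical:

```python
# Hemocytometer arithmetic for the hippocampal counting step.
# Field counts are made-up illustrations; the protocol's dilution is 1:10
# (5 ul cells + 45 ul growing medium).
field_counts = [32, 28, 35, 30]   # cells counted in four hemocytometer fields
dilution = 10
mean_count = sum(field_counts) / len(field_counts)   # average of the four fields
cells_per_ml = mean_count * dilution * 1e4           # standard hemocytometer factor
total_cells = cells_per_ml * 1.0                     # pellet resuspended in 1 ml
print(f"{cells_per_ml:.3e} cells/ml, {total_cells:.3e} cells total")
```

With real counts, a ten-embryo hippocampal prep would be expected to land in the 5–6 million range quoted in the text.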
Additionally, pipette 2.5 ml of the supernatant from the previous step (representing five brains) onto each new Petri dish. The supernatant was then carefully removed into a new 15-ml Falcon tube using a pipette, without disturbing the attached glial cells and unwanted cell debris at the bottom of the plate. The collected supernatant was centrifuged for 5 min at 900 rpm (154×g) at RT. After centrifugation, the supernatant was discarded and the pellet was re-suspended in 1.5 ml of growing medium per dish. The cell suspension was transferred into a new 15-ml Falcon tube using a pipette. The cortical cells obtained by this method contain a lot of cell debris; therefore, cell counting for the cortical cells was performed with 0.4% Trypan Blue dye. The cortical cells were used at a 1:20 dilution (5 μl of cell suspension + 75 μl growing medium + 20 μl Trypan Blue) for cell counting. The cell counting was done in the same way as for the hippocampal cells. Cortices from ten embryos typically yield 50–60 million neurons. If the number of embryos is low, the relative yield is typically lower than when the cells are prepared from a large number of embryos. The multi-well plates or round coverslips (for histochemistry) within a 4- or 24-well plate were pre-coated for 18 h with 500 µl of 10 μg/ml poly-l-lysine. After at least 18 h (overnight) of incubation and before plating the cells, the plates were rinsed twice with 500 µl of PBS. The cells were grown at 37°C, 5% CO2 in a humidified incubator; half of the growing medium was changed once a week. The number of cells plated per well varies depending on the downstream processing of the samples; representative examples of plating densities for different cell types and well sizes are documented.
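The protocol quotes the spin as "900 rpm/154×g". The two are related by the standard conversion RCF = 1.118 × 10⁻⁵ × r(cm) × rpm², which lets the step be reproduced on a different rotor; the rotor radius of ~17 cm used below is back-calculated from the quoted pair, not stated in the protocol.

```python
# Converting between rpm and relative centrifugal force (x g).
# Standard formula assumed: RCF = 1.118e-5 * radius_cm * rpm^2.
# The 17 cm radius is an assumption back-solved from 900 rpm ~ 154 x g.
def rcf(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force (x g) at a given speed and rotor radius."""
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_for_rcf(target_g: float, radius_cm: float) -> float:
    """Speed needed to reach a target RCF on a rotor of the given radius."""
    return (target_g / (1.118e-5 * radius_cm)) ** 0.5

print(round(rcf(900, 17.0)))          # ~154 x g, matching the protocol
print(round(rpm_for_rcf(154, 17.0)))  # ~900 rpm
```

Matching the RCF, not the rpm, is what matters when substituting a rotor with a different radius.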
The cell plating density always depended on the purpose of the experiment. For immunostaining, we used 26,000 cells/cm2 on the coverslip in a four-well plate, and the cells were grown at 37°C, 5% CO2 in a humidified incubator. The four-well plates were washed three times for 5 min with 200 µl of PBST. The fixed cells were pre-blocked with 200 µl of blocking buffer for 1 h. After blocking, 150 µl of primary antibody diluted 1:100 in blocking buffer was applied. After incubation with the primary antibody, the buffer was removed and the cells were washed 3 × 10 min with 200 µl of PBST. The secondary antibody was diluted 1:1000 in blocking buffer and applied. After the incubation with the secondary antibody, the coverslips were washed for 10 min in 200 µl of PBST followed by two washes of 10 min with 200 µl of PBS. The coverslips were held with fine forceps and dipped into 50-ml Falcon tubes containing Milli-Q H2O to rinse off the PBS. Before mounting, coverslips were gently dried by touching the edge onto a tissue paper. The cells attached to the coverslips were mounted on Superfrost slides with 10 µl of mounting media, two coverslips per slide. The slides were stored in the dark, protected from light, at 4°C until imaging. Whole-slide imaging was performed using a 20× objective in a Histoscanner at the genome biology unit, Biomedicum Helsinki. The images were analyzed using panoramic viewer software. Four coverslips for each stage were scanned. Cell counting was performed using ImageJ software (https://imagej.nih.gov). Cells from a single focal plane were analyzed over an area of 4 mm2 per image per coverslip. We analyzed five images per coverslip, and four coverslips for each time point were used for imaging; thus, 20 images for each time point were analyzed. The groups were compared using one-way ANOVA. All the data are represented as means ± SEM. The method for culturing primary neurons described here has been developed and used in our lab for over two decades. The quality of neurons has been high and reproducible over this period. The overall procedure is outlined for both hippocampal and cortical cells.
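The density above (26,000 cells/cm²) combines with the well's growth area and the counted suspension concentration to give a seeding volume. A sketch of that arithmetic, where the well areas and the suspension concentration are illustrative assumptions, not values from the protocol:

```python
# Seeding-volume arithmetic for plating at a target density.
# 26,000 cells/cm^2 is from the text; the growth areas and the
# 3e6 cells/ml suspension are assumed example values.
WELL_AREA_CM2 = {"4-well": 1.9, "24-well": 1.9, "6-well": 9.6}  # typical, assumed

def seeding_volume_ul(density_per_cm2: float, well: str, cells_per_ml: float) -> float:
    """Volume of cell suspension (ul) to plate one well at the target density."""
    cells_needed = density_per_cm2 * WELL_AREA_CM2[well]
    return cells_needed / cells_per_ml * 1000.0  # ml -> ul

vol = seeding_volume_ul(26_000, "4-well", cells_per_ml=3.0e6)
print(f"seed {vol:.1f} ul per well, then top up with growing medium")
```

The same function covers the other plate formats by swapping the well key, which is why the densities in the text are quoted per cm² rather than per well.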
In the cortical cell preparations, no GFAP-positive cells were detected at 7 and 14 days in vitro (DIV). At 21 DIV, GFAP-positive cells formed 4% of the total cell population in the primary cortical cultures. Cultures were stained for NeuN, GFAP and Map2 at three different time points. The protocol described here is originally based on the protocol described by Brewer and Cotman, with some modifications. The cells were plated on poly-l-lysine-coated plates; for histochemical staining and live imaging, the cells were grown on glass coverslips. The primary neuronal cell culture is a standard system for the investigation of neuronal structure and function at high resolution. The current protocol generates relatively pure neuronal cultures with high reproducibility and minimal contribution of glial cells. In this method, we have cultured the neurons for 3 weeks without any additional feeder cells. For cell dissociation, we used papain rather than trypsin, and only for hippocampal cells, while cortical cells were dissociated by trituration without any prior enzymatic digestion. It has been observed that trypsin digestion of tissue leads to RNA degradation. Especially for biochemical assays, such as transcriptomic and proteomic analyses, it is important to control and minimize the number of other cell types in the culture, particularly the number of glial cells. In cortical cultures, we found virtually no glial cells at all during the first 2 weeks in culture and less than 5% at 21 DIV; in hippocampal cultures as well, the number of glial cells was low until DIV 14 but started to increase thereafter. This is comparable with reports using related protocols, which have reported less than 5% glial contamination or virtually none. Primary cultures are among the most important in vitro models for biomedical research on post-mitotic neuronal cells. The primary cells are genetically more stable than neuronal cell lines, and they maintain in culture many crucial markers and functions seen in vivo.
Thus, they complement in vivo experiments, allowing for more controlled manipulation of cellular functions and processes. Once neurons are cultured, advanced molecular and biochemical studies can be easily performed. Cell lines have been the largest source of material for medical research due to their immortal nature. However, these immortal cell lines have produced variable results arising, among others, from different passage times leading to genetic drift and selection for the faster-growing cells. Primary cultures avoid these problems, and, for example, successful CRISPR-Cas9 gene editing has been achieved using primary neuronal cultures.

The preparation of cells should be performed within no more than 2–3 h after dissecting the pups.

Tissues should be preserved on ice all the time to prevent degradation.

Petri dishes are a common source of problems in primary cell culture. We have reported here the manufacturers and plate types that have worked in our hands. Our previous experience with other manufacturers has resulted in low yield and bad quality of neurons with excessive glial contamination.

Trituration of cortical and hippocampal tissues should be gentle, without any force. Higher force results in a large number of dead cells detected in the hemocytometer counting. If the neuron quality is poor, or the yield is half of the average we typically obtain using this method, these neurons should not be used for any downstream processing.

Background: The present study was designed to explore the underlying role of hypoxia-inducible factor 1α (HIF-1α) in reactive oxygen species (ROS) formation and apoptosis in osteosarcoma (OS) cells induced by hypoxia. Methods: In OS cells, ROS accumulated and apoptosis increased within 24 h after exposure to low HIF-1α expression levels. A co-expression analysis showed that HIF was positively correlated with Forkhead box class O1 (FoxO1) expression and negatively correlated with CYP-related genes from the National Center for Biotechnology Information's Gene Expression Omnibus (NCBI GEO) datasets. Hypoxia also considerably increased HIF-1α and FoxO1 expression. Moreover, the promoter region of FoxO1 was directly regulated by HIF-1α. We inhibited HIF-1α via siRNA and found that the ROS accumulation and apoptosis induced by hypoxia in OS cells decreased. In this study, a murine xenograft model of BALB/c nude mice was adopted to test tumour growth and measure the efficacy of 2-ME + As2O3. Results: Knockdown of HIF-1α also inhibited manganese-dependent superoxide dismutase (MnSOD), catalase and sestrin 3 (Sesn3) expression in OS cells. Furthermore, hypoxia-induced ROS formation and apoptosis in OS cells were associated with CYP450 protein interference and were ablated by HIF-1α silencing via siRNA. Conclusions: Our data reveal that HIF-1α inhibits ROS accumulation by directly regulating FoxO1 in OS cells, which induces MnSOD, catalase and Sesn3 interference, thus resulting in anti-oxidation effects. The combination of an HIF-1α inhibitor and ROS inducer can prohibit proliferation and migration and promote apoptosis in MG63 cells in vitro while inhibiting tumour growth in vivo.

Osteosarcoma (OS) is the most common primary bone cancer and one of the leading causes of cancer-related mortality in paediatric patients. The subgroup of Forkhead box class O (FoxO) transcription factors contains important regulators of the genome; they are characterized by the structural feature of a winged helix in their DNA-binding domain. Mitochondria are centres of cellular bioenergetic activity and important sources of ROS. Several transcription factors, including those characterized in Saccharomyces cerevisiae and mitochondrial transcription specificity factors, act in up- and downstream signalling pathways.
All of the above transcription factors bind to, and are further regulated by, nuclear respiratory factors and PGC-1 family coactivators21. The genome within the mitochondria encodes 13 proteins closely associated with biogenetic activities. We have previously reported that FoxO1 can promote the expression of antioxidant proteins such as MnSOD, catalase and Sesn3. Here, we present a comprehensive analysis of the transcriptional response to HIF-1α, revealing the repression of numerous nuclear-encoded mitochondrial genes through the regulation of FoxO1 function. We demonstrate that through this signalling arm, HIF-1α reduces cellular ROS production, independent of MnSOD, catalase and Sesn3 activation. Regulation of mitochondrial structure and function could be an important role for HIF-1α factors in regulating ROS production, and these processes can affect cellular adaptation to hypoxia. Through in vitro drug experiments, we found that 2-ME combined with As2O3 can inhibit MG63 cell proliferation and migration while promoting MG63 cell apoptosis and intracellular ROS accumulation. To further examine the effect of 2-ME + As2O3, a xenograft murine model of OS in BALB/c nude mice was used to test its efficacy. In an in vivo drug-sensitivity test, the combination of 2-ME and As2O3 achieved anti-tumour effects without obvious adverse reactions. We retrieved microarray data for normal tissues and human osteosarcoma tissues from the National Center for Biotechnology Information's Gene Expression Omnibus (NCBI GEO) datasets, for a total of eight samples. Ethical approval: This study was approved by the Ethics Committee of Fudan University Shanghai Cancer Center. In all, 29 paired osteosarcoma specimens and adjacent normal bone tissues, confirmed as primary malignant bone cancer by trained pathologists, were collected from the Department of Musculoskeletal Oncology of the Fudan University Cancer Hospital in 2017–2018.
Part of the samples was immediately snap-frozen in liquid nitrogen; the other tissues were formalin-fixed and paraffin-embedded. Paraffin-embedded blocks were cut into 4-μm-thick sections, dewaxed and hydrated. The slices were then immersed twice in distilled water containing 3% hydrogen peroxide to reduce endogenous oxidase activity. Afterwards, the tissue sections were incubated with primary antibodies for 2 h at room temperature, and a secondary antibody was subsequently applied at room temperature for 40 min. Staining was developed with diaminobenzidine (DAB) chromogen. Subsequently, the tissues were dehydrated and sealed with gum. Five random fields of view (100×) were captured with a camera and a microscope. Two human OS cell lines (U2OS and MG63) were purchased from the American Type Culture Collection (ATCC) and cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% foetal bovine serum, 100 U/mL penicillin and 100 mg/mL streptomycin (Thermo Fisher Scientific). Normal osteoblast cells (hFOB1.19), used as a control, were acquired from the Chinese Cell Bank of the Chinese Academy of Sciences and cultured in Ham's F12/DMEM supplemented with 10% FBS, 100 U/mL penicillin and 100 mg/mL streptomycin. The cultures were maintained at 37 °C in a humidified CO2 (5%) atmosphere. Total RNA was extracted from cells and tissues with Trizol reagent according to the protocol. All mRNA was subjected to reverse transcription and quantitative PCR according to the protocols for the PrimeScript® RT Master Mix Perfect Real-Time Kit and SYBR Green Master Mix. qPCR was performed on an Applied Biosystems 7900HT Real-Time System. Collected cells were lysed with RIPA protein extraction reagent containing a protease inhibitor cocktail.
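Relative expression from the qPCR step above is typically computed with the standard 2^−ΔΔCt (Livak) method, which is how the ΔΔCt values reported in the results are obtained. A minimal sketch, with hypothetical Ct values; the choice of GAPDH as the reference gene here is an illustrative assumption:

```python
# 2^-ddCt fold-change calculation (standard Livak method).
# Ct values and the GAPDH reference are made-up illustrations.
def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # relative to control condition
    return 2.0 ** (-dd_ct)

# e.g. FoxO1 Ct 24.0 vs GAPDH 18.0 in tumour; FoxO1 26.0 vs GAPDH 18.0 in normal
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0 -> four-fold higher in tumour
```

A lower Ct means earlier amplification, so a ΔΔCt of −2 corresponds to a four-fold increase.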
The lysates were then loaded onto sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) gels for separation, transferred to polyvinylidene fluoride (PVDF) membranes and blocked in 5% milk prior to incubation with the indicated primary and secondary antibodies. Blots were quantified through densitometry, and GAPDH was used as a control. The antibodies against HIF-1α, FoxO1, MnSOD, catalase and Sesn3 were purchased from Cell Signaling Technology. U2OS and MG63 cell lines were transiently transfected with siRNAs after being cultured in six-well plates overnight. A negative control, a plasmid overexpressing FoxO1 and a blank vector were used as well, with Lipofectamine 2000 (Thermo Fisher Scientific) and FuGENE® HD transfection reagents according to the manufacturers' instructions, respectively. The cells were collected 48 h after transfection to assess knockdown or overexpression efficiency via qRT-PCR. Two distinct siRNAs against FoxO1 were designed and synthesized by GenePharma. The si-FoxO1 sequences and the synthetic FoxO1 sequence (3099 bp) are described in previous research (15). The siRNA sequence targeting FoxO1 (si-FoxO1) was 5′-GCTCAACGAGTGCTTCATCAAGCTACCCA-3′. Cell viability was determined with a CCK-8 assay. First, 1 × 103 cells were seeded in quadruplicate for each group in a 96-well plate. The cells were incubated with 10% CCK-8 reagent diluted in regular culture medium at 37 °C until optical colour conversion occurred. Proliferation rates were measured at 24, 48 and 72 h after transfection. The absorbance of each well was determined with a microplate reader at 450 nm. Four- to six-week-old female nude mice were purchased from Vital River Laboratory Animal Technology.
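The CCK-8 readout above is usually reduced to percent viability by blank-correcting the A450 values and normalizing to the untreated control. A sketch of that reduction, with all OD values hypothetical and quadruplicate wells as in the protocol:

```python
# Percent viability from CCK-8 absorbance readings (A450).
# All OD values are made-up illustrations; wells are in quadruplicate.
def viability_percent(od_treated, od_control, od_blank: float) -> float:
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(od_treated) - od_blank) / (mean(od_control) - od_blank) * 100.0

control = [1.20, 1.25, 1.18, 1.22]   # untreated wells
treated = [0.70, 0.72, 0.68, 0.71]   # drug-treated wells
print(f"{viability_percent(treated, control, od_blank=0.10):.1f}% viability")  # 54.2% viability
```

Repeating the calculation at 24, 48 and 72 h gives the proliferation curves referenced in the text.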
All animals were housed in individual ventilated cages, provided sterilized water and food ad libitum, and handled under specific pathogen-free conditions in the Institute’s animal care facilities, which meet international standards. The mice were checked for their health status, animal welfare supervision was provided, and experimental protocols and procedures were reviewed by a certified veterinarian. All animal experiments were carried out in accordance with the Chinese governing law on the use of medical laboratory animals.

In total, 32 female mice were divided randomly into four groups. The average weight of the animals was 12 grams. The administered drugs were dissolved in dimethyl sulfoxide (DMSO). For the control group, the animals were injected with only DMSO intraabdominally. The other three groups were injected with a 2-ME/DMSO solution, an As2O3/DMSO solution and a 2-ME + As2O3/DMSO solution (2-ME at 5 mg/kg and As2O3 at 5 mg/kg). 2-ME was administered once every two weeks, and As2O3 was administered for five subsequent days and then every two days. The length of the treatment course was three weeks. Food and water were supplied ad libitum after treatment. The health status was monitored by a specific veterinarian. After two courses of treatment, all animals were euthanized by cervical dislocation, and the tumours were removed and weighed.

All statistical analyses were executed with SPSS 22.0 software and GraphPad Prism 5.0. Differences between groups were analysed utilizing Student’s t-test or one-way analysis of variance (ANOVA). Recurrence-free survival and total survival were determined by Kaplan–Meier survival analysis and compared via log-rank test. For this study, p-values <0.05 were considered statistically significant. Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was performed via the Database for Annotation, Visualization and Integrated Discovery (DAVID) program.

The ΔΔCt analysis showed that HIF-1α and FoxO1 were significantly increased in bone cancer tissues relative to normal adjacent tissues (p < 0.05). Osteosarcoma tissues from NCBI GEO datasets were analysed through the R packages Limma and Affy; in all, 17 genes were found to be significantly differentially expressed in these datasets.

Pyruvate dehydrogenase (PDH) activity is inhibited through HIF-1α by upregulation of the target molecule, pyruvate dehydrogenase kinase (PDK-1). HIF-1α blocks pyruvate entry into the tricarboxylic acid cycle, thereby inhibiting mitochondrial oxidative phosphorylation. Because mitochondrial respiration is the main source of ROS, we hypothesized that HIF-1α could reduce ROS production. Intracellular ROS levels were significantly lower in MG63 cells treated with HIF-1α siRNA than in control cells.

To further explore the role of HIF-1α in FoxO1-induced ROS changes, the effects of HIF-1α on FoxO1 expression were examined. Two hypoxia-responsive elements within the promoter region of FoxO1 were identified, indicating that FoxO1 was a potential HIF-1α target; the target sites lie in the Homo sapiens FoxO1 5′-UTR. The antioxidant proteins MnSOD, catalase and Sesn3 have been reported as primary downstream messengers of Akt/FoxO1 signalling. Genes participating in ROS metabolism (Cyp2c38, Ptgs2 and Alox12) were significantly increased in HIF-1α-silenced OS cells. Migration and proliferation (CCK8) assays were performed on MG63 cells. A Transwell assay revealed that the migratory capabilities of MG63 cells were greatly decreased when they were treated with 2-ME + As2O3, which then leads to OS cell apoptosis. This double consequence of FoxO1 appears to be crucial in hypoxic injury regulation in OS cells.

In their meta-analysis of observational studies, Low et al.
showed a high prevalence of burnout syndrome (BOS) among medical and surgical residents across the globe, with an aggregate prevalence of burnout of 51.0% (CI: 45.0–57%). However, the sample size in many of the included studies was quite low (only 26 out of 47 included studies had a sample size of more than 100 participants), and almost all of the 47 studies reported a rate of respondents of less than 80%. Furthermore, in many of them, the rate of respondents was unknown (5 out of 47) or less than 50% of eligible persons (23 out of 47 studies). As BOS is a self-reported syndrome, the healthcare professionals who decided to participate in those studies were probably those affected by BOS, making the percentage of respondents potentially overstated due to nonresponse bias. Policy decision-making in public health relies on evidence-based research; therefore, quality evaluation of the studies included in a meta-analysis is essential to draw useful data for policymakers.

Burnout syndrome (BOS) is a worrying phenomenon among clinicians and residents, not only in the United States but across the entire globe. In every systematic review and meta-analysis, however, the quality evaluation of included studies is essential to drawing useful data to be translated by policymakers into public health decisions. We have noted that in this meta-analysis Low et al. adopted the National Institute of Health’s Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (NIH-QAT). As BOS is a self-reported syndrome, participants who responded were probably also those affected by BOS, potentially making the percentage of respondents higher due to nonresponse bias; that is, the error resulting from distinct differences between the people who responded to a survey versus the people who did not respond.
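To make the nonresponse-bias argument concrete, the following sketch shows how the prevalence observed among respondents overstates the true prevalence when affected residents respond more often. All numbers are invented for illustration and are not data from Low et al.

```python
# Hypothetical illustration of nonresponse bias in a self-reported survey.
# All numbers below are invented for illustration only.

def observed_prevalence(true_prev, resp_rate_affected, resp_rate_unaffected):
    """Prevalence seen among respondents when response rates differ by status."""
    affected = true_prev * resp_rate_affected
    unaffected = (1 - true_prev) * resp_rate_unaffected
    return affected / (affected + unaffected)

# Assume a true burnout prevalence of 40%, but burned-out residents respond
# twice as often as others (60% vs 30% response rate):
p = observed_prevalence(0.40, 0.60, 0.30)
print(f"observed prevalence: {p:.1%}")  # 57.1%, well above the true 40%
```

With equal response rates the estimate is unbiased; any differential response by burnout status inflates it, which is exactly the concern raised about surveys with low or unknown response rates.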
Over the last years, fiber optic sensors have been increasingly applied in environments with a high level of radiation as an alternative to electrical sensors, due to their high immunity, high multiplexing and long-distance monitoring capabilities. In order to assess the feasibility of their use, investigations on optical materials and fiber optic sensors have been focusing on their response depending on radiation type, absorbed dose, dose rate, temperature and so on. In this context, this paper presents a comprehensive review of the results achieved over the last twenty years concerning the irradiation of in-fiber Long Period Gratings (LPGs). The topic is approached from the point of view of the optical engineers engaged in the design, development and testing of these devices, by focusing the attention on the fiber type, grating fabrication technique and properties, irradiation parameters and performed analysis. The aim is to provide a detailed review concerning the state of the art and to outline the future research trends.

Beyond their typical applications for communications and sensing, optical fibers and fiber sensors have found wide interest in radiation related scenarios, due to their several advantages, such as high sensitivity and resolution measurements, low cost implementation, immunity to electromagnetic interferences, chemical inertness, long distance monitoring and high multiplexing capability. In such environments, particle fluences can reach values of the order of 10¹⁵–10¹⁸ cm−2 (and up to 10²⁰ cm−2 in the most extreme cases), whereas the gamma dose can accumulate up to 10 MGy. Moreover, fiber components are also important in space applications, where the main sources of energetic particles are protons and electrons in the Van Allen belts, heavy ions in the magnetosphere, cosmic ray protons and heavy ions, and protons and heavy ions from solar flares.
Here, the doses are usually lower than 10 kGy.

Radiation can interact with materials in different forms: one can distinguish between purely ionizing radiations, such as gamma- and X-rays, and particle radiations, such as protons, neutrons and heavy ions. The first kind delivers energy mainly through the creation of secondary electrons (and positrons), whereas the second one interacts with materials both through ionization and non-ionizing energy loss, the latter being associated, for example, with the displacement or the vibration of an atom.

Three main physical effects can influence the working conditions of fiber-based devices when subjected to irradiation: the radiation induced attenuation (RIA), the radiation induced emission, and the radiation induced compaction, the latter leading to density and refractive index changes.

Until now, most of the studies have targeted optical fibers, mainly focusing on their radiation induced attenuation. Different fiber compositions were considered in order to assess the dependence of the response upon the dopant ions, and tested under different radiation conditions and types.

Finally, despite the great interest evinced by the scientific community in the Long Period Grating (LPG) as a sensing platform, comparatively few works have addressed its behaviour under irradiation.

LPG sensors are fabricated by inducing a periodic perturbation in the refractive index and/or geometry of an optical fiber. The period of the perturbation Λ typically ranges from 100–1000 µm and promotes the power coupling between the core mode and several co-propagating cladding modes. As a result, a set of rejection bands appears in the fiber transmission spectrum at the resonance wavelengths given by the phase-matching condition

λres = (nco − ncl,m)·Λ,    (1)

where nco and ncl,m are the effective refractive indices of the core mode and of the m-th cladding mode, respectively; the depth of each band depends on the coupling strength and on the grating length L. The properties of the LPG rejection bands are dependent on fiber parameters and grating properties, as well as on external conditions of temperature, strain, bending and surrounding refractive index. Hence, LPGs have been widely exploited as sensing platforms.

Different techniques are available for their fabrication, the most important being: UV-radiation, CO2 laser writing, IR femtosecond lasers, mechanical pressure and electric arc discharge.
In this section, the analysis concerning the state of the art about radiation effects on LPGs is reported, taking into consideration gratings fabricated in different fibers and with different techniques. The results are primarily presented in chronological order and are grouped based on the outcomes of the main research groups working in this field.

The first report about Long Period Gratings under gamma radiation was provided by Vasiliev et al. in 1998. They considered a UV-induced LPG in a Ge-doped fiber (coupling with the 19th-order cladding mode) and a thermo-induced LPG written in an N-doped fiber by CO-laser. The gratings were exposed to a 60Co gamma source at a dose rate ranging from 5.4–6.6 Gy/s up to a total dose of 1.47 MGy, at 40 °C temperature. Mach–Zehnder interferometers (MZI) and FBGs were also considered for comparison.

The authors stated that the LPG in the N-doped fiber did not show any change in the resonance wavelength after the irradiation, to within an experimental error of ±0.3 nm; the MZI response was also negligible. The same happened for the LPG in the Ge-doped fiber; differently, significant phase shifts were observed in the MZI in this case. The authors justified the apparent stability of the LPG response with the elimination of the precursors of gamma radiation induced color centers in the process of grating writing through the UV laser. Induced refractive index changes in the core region up to 2.8∙10−5 after a 100 kGy dose were also reported, probably due to radiation induced absorption bands of atoms in the UV region affecting the RI via the Kramers–Krönig formula. Finally, the authors stated they found some inconsistencies in their results, probably due to some experimental errors. We would like to add that the response of the LPG in the Ge-doped fiber found in this work was unexpected.

Subsequently, Henschel et al. irradiated LPGs fabricated in several different fibers with a 60Co gamma source; two dose rates were considered for comparison: 0.87 Gy/s up to a total dose of 100 kGy and 0.1 Gy/s up to a 20 kGy dose. Each grating was inserted into a thin quartz capillary, without fixing the fiber, in order to have a strain-free state, and mounted onto an aluminum plate. The attention was mainly focused on the fiber RIA and the grating wavelength shift as a consequence of irradiation.

Another aspect was related to the trends of wavelength shift and RIA as a function of radiation dose. In particular, for low RIA fibers the wavelength shift saturated as the dose approached 10 kGy, while the RIA still continued to increase up to 100 kGy. Differently, for high RIA fibers the behavior was the opposite: RIA saturated after 20 kGy and the wavelength shift increased up to 100 kGy. Finally, for Nufern 1 (fiber 6) both RIA and wavelength shift showed a steady increase up to 100 kGy, while for Alcatel (fiber 1) they both saturated at 20 kGy.

Concerning the comparison between the time dynamics of RIA and wavelength shift recovery, the authors concluded that RIA and wavelength shift show similarities but also differences. This is not surprising, since the RIA of most single mode fibers is primarily due to an attenuation increase of the core material, whereas the wavelength shift depends on radiation induced changes of both core and cladding materials (changing the core and cladding effective refractive indices) through the phase-matching condition of Equation (1); compaction (affecting the grating period) may also have an influence.

Another interesting outcome of this research is related to the dependence of the wavelength shift upon the dose rate: it was found that for a dose rate of 0.87 Gy/s the shift was around 1.1 to 1.2 times higher than at 0.1 Gy/s. Finally, concerning the temperature sensitivity of the CPLPGs, the authors stated that it did not change after the irradiation up to 100 kGy.

Kher et al. then fabricated turn-around-point (TAP) LPGs by CO2 laser in a commercial photosensitive B/Ge codoped PS980 fiber by Fibercore UK.
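As a rough numerical illustration of the phase-matching condition of Equation (1), the sketch below estimates an LPG resonance wavelength and the band shift produced by a radiation-induced core-index change. The effective-index values and the index change are illustrative assumptions, not data from the reviewed papers, and the simple model deliberately ignores waveguide dispersion (near the TAP region the real shifts are much larger).

```python
# Illustrative sketch of the LPG phase-matching condition:
#   lambda_res = (n_co - n_cl_m) * period
# Effective-index values below are assumed for illustration only.

def lpg_resonance(n_co: float, n_cl_m: float, period_um: float) -> float:
    """Resonance wavelength (in µm) for coupling to the m-th cladding mode."""
    return (n_co - n_cl_m) * period_um

period = 450.0                  # grating period in µm
n_co, n_cl = 1.4504, 1.4470    # assumed effective indices (core, cladding mode)

lam0 = lpg_resonance(n_co, n_cl, period)        # ≈ 1.53 µm
# An assumed radiation-induced core-index increase of 2e-5 shifts the band:
lam1 = lpg_resonance(n_co + 2e-5, n_cl, period)
shift_nm = (lam1 - lam0) * 1e3
print(f"resonance: {lam0 * 1e3:.1f} nm, shift: {shift_nm:+.1f} nm")
```

The sketch reproduces the qualitative behaviour discussed in the text: a core-index increase red-shifts the resonance, in proportion to the grating period.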
The period of the gratings were selected in the range 206–208 µm to achieve the coupling with 11th order cladding mode and different working points in the TAP region. The LPGs were irradiated by a 60Co gamma source at 1.3 kGy/h dose rate and up to a total dose of 65 kGy and measured off-line. The grating was fixed on a metal plate during irradiation.In ,67 Kher −5.The wavelength shifts observed in this case were the highest reported so far due to the TAP operation: each peak of the double resonance 11th mode experienced a shift (that is positive for left peak and negative for right peak) of about 35 nm after a 6 kGy dose that increased up to 80 nm when the dose reached 65 kGy. The authors attributed the wavelength shift to an increase of the core refractive index in the B/Ge codoped fiber by about 10−4 after a 1.54 MGy dose. This RI change recovered by less than 10−5 after 67 h of room temperature annealing. Finally, the authors predicted a saturation of the RI change at 8.2·10−4 after a 15 MGy dose.The same group deepen their study in by corre2 laser based setup, with a period of 450 µm and presented high strain sensitivity and negligible temperature response in comparison to LPGs in the standard fiber [Kher et al. ,70 also rd fiber .The device was tested under gamma irradiation up to 75 kGy and no significant changes were observed in spectral properties and sensing characteristics. 
Such fibers, along with air guiding PCFs (or photonic bandgap fibers-PBGs) have recently attracted interest in this field, as the all silica structure can lead to an improved radiation hardened response ,28,29.2 laser assisted by a micro flame.In ,73,74 Sp02) [60Co industrial gamma source at “Horia Hulubei” institute , resulting in a dose rate of 0.2 kGy/h and up to a total dose of 45 kGy.The F-doped model was a single mode optical fiber with 8.5 µm core diameter and fluorine concentration of 0.2 wt.% in the core and 1.8 wt.% in the cladding; the grating was fabricated by iXblue, France using a 740 µm period which resulted in the coupling with 1st order cladding mode (LP02) . The graA blue shift of 0.7 nm was measured in the resonance wavelength after the irradiation, whereas the recovery was about 0.6 nm after 120 h at room temperature (6.7 pm/h rate). The temperature sensitivity of the grating before and after irradiation was also evaluated: it raised from 27.7 pm/°C to 29.3 pm/°C suggesting some radiation dependence of the response at high total doses.A similar procedure was adopted in ,74 for tIn the first case, the final wavelength shift reached 3.3 nm after 34 kGy, however, the response saturated after the 10 kGy dose as shown in 60Co gamma source at the Radio Isotope Test Arrangement (RITA) irradiation facility at 37.4 °C temperature, using a rate of 1 kGy/h and up to a total dose of 560 kGy. The gratings were placed into stainless-steel capillary tubes with the fiber fixed at both ends with wax. The authors did not report about wavelength shift explicitly, but they considered changes in amplitude. The experiment showed that the transmission spectra of the gratings written in both fibers remained almost unchanged. Moreover, the temperature and strain sensitivities of LPGs written in the Oxford fiber were also not affected by radiation.One of the first reports about arc-induced LPGs under gamma irradiation was provided by Rego et al. 
in 2005.03 or LP04 at 1560 nm. The irradiations were performed at room temperature at the “Horia Hulubei” institute at 0.2 kGy/h dose rate, whereas the final doses ranged from 26.6–35 kGy. The gratings were kept in plastic frames to fix their strain state during irradiation and put in a thermally insulated box.Over the last years, the authors of this review have performed a systematic investigation about the effects of gamma radiation on arc-induced LPG in standard and different radiation hardened optical fibers ,77,78,79The attention was focused on the real-time resonance wavelength shift and optical transmission of the fiber, as reported in −5 for SMF28, 2.3∙10−5 for Fiber-A and Nufern in the fiber core at the end of the irradiation, whereas for Fiber-B the changes were one order of magnitude lower.By combining the experimental results with full spectrum numerical simulations ,83, the −8 °C−1, whereas for the other fibers it was lower than 10−8 °C−1.Finally, the temperature sensitivity of the gratings was also compared before and after the irradiation. For SMF28 LPG it increased from 50.5 to 53.8 pm/°C (around 6.5% change), whereas smaller variations were observed for Nufern and Fiber-A gratings, where it changed from 49.3 to 49.6 pm/°C and from 48.9 to 49.3 pm/°C, respectively. For Fiber-B LPG, the value of 22.8 pm/°C was not modified. The numerical model was also applied to estimate a change in the core thermo-optic coefficient of the SMF28 fiber of 1.5∙1012 n/(cm2∙s) resulting in a neutron fluence of 9.18∙1015 n/cm2. The gratings were kept in plastic frames to fix their strain state during irradiation and placed on an aluminum alloy plate.The same authors also irradiated a similar set of gratings under a mixed neutron-gamma field by using a TRIGA research nuclear reactor at the Nuclear Research Institute ICN ,85. The 15 n/cm2 neutron fluence. 
The wavelength shifts recorded at the end of the irradiation were the following: 6.4 nm for LPG in SMF28, 9.0 nm for Fiber-A, 11.8 nm for Nufern and −0.4 nm for Draka. These trends were in agreement with those observed during gamma irradiation of similar samples reported in [The real-time wavelength shift of the gratings during the irradiation is reported in orted in .−5 for SMF28, 3.5∙10−5 for Fiber-A, 4.1∙10−5 for Nufern in the core of these fibers, whereas in the cladding of Draka the change was equal to 0.2∙10−5.By combining the experimental results with numerical modeling, the following RI changes were estimated by the end of irradiation: 2.6∙10−8 °C−1, for the Nufern fiber it was around 3∙10−8 °C−1, finally it was 6∙10−8 °C−1 in the cladding region of Draka.Finally, the temperature sensitivity of the gratings was also compared before and after the irradiation. For SMF28 and Fiber-A it increased 2% maximum, passing from the value of 50.0 and 50.8 pm/°C before the irradiation to 51.2 and 51.6 pm/°C, respectively, after the irradiation. Concerning the Nufern fiber, a greater increase in the thermal sensitivity was recorded, changing from 49.5 to 57.7 pm/°C (17% increase), whereas for Draka fiber a 10% decrease was found from 29.6 to 26.5 pm/°C. The changes in thermo-optic coefficients were also estimated by numerical analysis: in the core of SMF28 and Fiber-A they were lower than 1008) at 1540 nm. The irradiation was performed at the CERN proton facility named IRRAD, where the grating was exposed to a proton fluence of 4.4∙1015 p/cm2 for about 6 days up to a total high dose of 1.16 MGy (dose rate of 2.36 Gy/s). The grating was mounted on a support to fix the strain conditions during the experiment.Recently, in ,87 the gAs reported in −4 in the core effective refractive index was found at the end of irradiation , while a decrease of 0.93∙10−4 in the grating RI modulation was estimated as well. 
Finally, no significant change in the temperature sensitivity of the grating was found after the irradiation.Subsequently, the authors combined the experimental results with numerical modeling in order to estimate the changes of the main parameters affecting the grating response during the irradiation. In particular, a variation of about 1.61∙102) layer of about 100 nm. Such a material was selected due to its hygroscopic properties, moreover having a refractive index (n = 1.96) higher than cladding it was used to induce the mode transition phenomenon and enhance the LPG sensitivity [In ,89, the sitivity ,91,92.60Co source up to a dose of 10 kGy. The preliminary results reported a wavelength shift of about 4.4 nm (at 30% RH), whereas the sensor response towards humidity exhibited the same shape (except for some variations around 16%) as a consequence of irradiation.The performance of the sensor was measured in the range 0–75% RH and at −10, 0, 10, 25 °C, to replicate the working conditions required for CERN experiments. The sensor exhibited an exponential-like response with sensitivity changing from 1.4 to 0.11 nm/%RH for humidity levels in the range 0–10% RH (at room temperature) as reported in Figure 17 from . The coaIn this work, we conducted a thorough review of the state of the art concerning the irradiations performed on LPGs over the last twenty years. We considered all the contributions to the topic by the research groups working in this field.2-written LPGs in standard and radiation hardened fibers have been reported in [The first report about gamma irradiation of LPGs can be dated back to 1998 . Since torted in ,73. Thenorted in and, fororted in . The attorted in .For a comparative analysis, it is worth noting that most of the works attributed the shift in LPG resonance wavelengths to radiation induced refractive index changes, even under different kind of radiations and grating fabrication techniques. 
The shift is typically towards higher wavelengths, which can be attributed to an increase in the core RI. The amount of shift is, of course, dependent on the radiation type, dose rate and final dose, as well as on fiber and grating parameters. Concerning the attenuation band depth, it shows negligible changes in most experiments, except for a few cases; for instance, a UV-written LPG showed a significant decrease of the band depth under a proton beam. Future work should focus on the investigation of LPGs fabricated in innovative optical fibers that have never been tested before in this field, exhibiting unconventional dopants, refractive index profiles and glass structures. In this context, the advancements in grating fabrication techniques could give a further push. Despite the huge efforts made so far and the sustained irradiation costs, there is still demand for a full understanding of the influence of dose rate and irradiation temperature. Moreover, as most of the literature is focused on gamma radiation (with single reports on neutrons and protons), other kinds of radiation should also be considered. Finally, as LPGs can be integrated with sensitive overlays, the mentioned studies could be extended to coated gratings as well, for the development of different sensing devices to be used in radiation environments. There is a clear need for innovative LPG sensors comprising new features which can provide further insights into radiation effects.
The possible use of the PAA-co-IA/NaOH/B-MWCNT hydrogel as an electrode modifier and pre-concentration agent for Cd(II) sensing purposes was then evaluated using carbon paste electrodes via differential pulse voltammetry. The presence of the B-MWCNTs in the hydrogel matrix decreased its degree of swelling, stabilized the structure of the swollen gel, and favored the detection of 3 ppb Cd(II), which is comparable to the World Health Organization’s allowable maximum value in drinking water. A calibration curve was obtained in the concentration range of 2.67 × 10−8 to 6.23 × 10−7 M.
The surface was renewed before each experiment by pushing an excess of paste out of the tube and polishing the new surface with filter paper.The CPEs were prepared according to the methodology used by Bejarano-Jimenez et al. . The dri at pH 6 . The cycThe electrochemical analysis was performed in a three-electrode electrochemical cell connected to a potentiostat/galvanostat model VMP3 Bio-Logic SAS coupled to EC-Lab software version 10.23. The working electrode consisted of a carbon paste modified with either PAA-co-IA/NaOH or PAA-co-IA/NaOH/B-MWCNT hydrogel. The Ag/AgCl/KCl (sat.) system was used as a reference electrode and a glassy carbon rod as the auxiliary electrode.−3 M Cd(II) and 0.1 M KNO3 electrolytic solution at pH 5. The solutions were deoxygenated with argon for 10 min before each test. The detection procedure was performed following the next stages: (1) The surface of the modified carbon paste electrode was pre-treated before use in the detection of Cd(II) by applying a cathodic potential of −1.0 V for 20 s followed by an anodic potential scan from −1.0 to 1.3 V at a scan rate of 100 mV s−1 in 0.1 M KNO3 solution; (2) the uptake of Cd(II) was tested at different times at open circuit potential, under constant stirring in 1 × 10−3 M Cd(II) + 0.1 M KNO3 solution; (3) the electrode with pre-concentrated Cd(II) was transferred to a 0.1 M KNO3 solution free of Cd(II) to perform the DPV, which consisted of reducing the pre-concentrated Cd(II) to Cd(0) on the electrode surface at −0.95 V for 40 s, followed by anodic stripping (oxidation of Cd(0) to Cd(II)) from −0.95 to 0.6 V with a 33.3 mV s−1 scan rate, 40 mV pulse height, 75 ms pulse width, and 5 mV step height. 
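The detection and quantification limits quoted later in the text are derived from the calibration regression, with m the slope and s the standard error of the regression. A minimal sketch of this common computation (LOD = 3·s/m, LOQ = 10·s/m), using invented calibration data rather than the paper's measurements, is:

```python
# Sketch of LOD/LOQ estimation from a linear calibration, using the common
# criteria LOD = 3*s/m and LOQ = 10*s/m, where m is the slope and s the
# standard error of the regression. The calibration data below are invented.
import math

conc = [10, 20, 40, 80, 160]           # concentration (ppb)
charge = [1.6, 3.1, 6.0, 12.1, 23.9]   # anodic stripping charge (µC)

n = len(conc)
mean_x, mean_y = sum(conc) / n, sum(charge) / n
sxx = sum((x - mean_x) ** 2 for x in conc)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, charge))
m = sxy / sxx                           # slope (µC per ppb)
b = mean_y - m * mean_x                 # intercept
residuals = [y - (m * x + b) for x, y in zip(conc, charge)]
s = math.sqrt(sum(r * r for r in residuals) / (n - 2))  # std. error of regression

lod, loq = 3 * s / m, 10 * s / m
print(f"slope m = {m:.4f} µC/ppb, LOD = {lod:.1f} ppb, LOQ = {loq:.1f} ppb")
```

By construction LOQ/LOD = 10/3, so a method with a small regression scatter relative to its slope reaches a proportionally lower detection limit.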
The DPV conditions were optimized according to the voltametric response of the re-dissolved Cd(II) on a modified CPE electrode, which is observed in s is the standard error of the linear regression and m is the slope.The electroanalytical performance of the PAA-co-IA/NaOH/MWCNT hydrogel was studied using differential pulse voltammetry (DPV) in 1 × 10−1. The polymer spectra show the band associated with the O–H stretching at 3450 cm−1, which is typical for PAA and similar polymers [−1 is consistent with the extension links –CH2– in the polymer chains, as well as the bands at 1394 and 795 cm−1, which indicate C–H bond deformations. On the other hand, it is possible to distinguish bands corresponding to the harmonics and combinations of bands near 1413 and 1248 cm−1 augmented by Fermi resonance [−1, as well as to identify C=O and C–O stretching bonds from carboxylic acid groups at 1697 and 1160 cm−1, respectively. The band at 1540 cm−1 corresponds to carboxylate groups. The spectrum of PAA-co-IA/NaOH/B-MWCNTs shows an additional band located at 1641 cm−1 associated with the C=C bond present in B-MWCNTs. These results confirm the effectiveness of the synthesis process. Additionally, the band around 2400 cm−1 corresponds to the stretching of carbon dioxide (O=C=O), suggesting its presence in the large quantity of pores in hydrogels.polymers . A band esonance with theThe appearance of the dry and swollen PAA-co-IA/NaOH/B-MWCNT composite hydrogel is shown in The swelling capacity of pure PAA-co-IA/NaOH and modified PAA-co-IA/NaOH/B-MWCNT hydrogels was determined at pH 5 and 6.6. It is interesting to point out that the hydrogels developed in this work have shown a greater degree of swelling in comparison to some PAA/graphene oxide hydrogels developed by Bejarano-Jimenez et al. , which wThe thermal stability of the copolymer was determined through the derivative thermogravimetry (DTG) of individual components of hydrogels. 
Thermograms of PAA-co-IA/NaOH and PAA-co-IA/NaOH/MWCNT hydrogels, as well as a PAA-co-IA hydrogel without NaOH, are presented in −3 M Cd2+ and 0.1 M KNO3 solution at pH 5. 2+ such as Zn(II), Hg(II), and Ni(II) [The capacity of the carbon paste electrode modified with the PAA-co-IA/NaOH/B-MWCNT hydrogel to pre-concentrate the analyte was studied in a 1 × 10d Ni(II) , and pred Ni(II) , which c−3 M Cd(II) . A longer immersion time would result in an increase of the anodic charge (anodic current) until a maximum was reached. This maximum can be apparently explained by the equilibrium achieved between the Cd(II) bound to the modifier and the one, which is solubilized in solution [The effect of accumulation time of Cd(II) on the surface of CPEs modified with either the PAA-co-IA/NaOH or PAA-co-IA/NaOH/B-MWCNT hydrogel was studied to assess the role of the B-MWCNTs in the polymeric network. The swelling capacity of the hydrogels on the CPEs was expected to have a significant influence on the voltametric response. Therefore, a study was carried out to establish the relationship between the voltametric signal and the accumulation time at the surface of the carbon paste electrode. solution . The elesolution , the incsolution ,46. Thes2+ due to the higher mechanical stability of this hydrogel, which contributed to better reproducibility of the voltametric signal. Therefore, the maximum value of time used for Cd(II) accumulation was 10 min to avoid the excessive swelling. An increase in the size of hydrogel particles on the electrode surface would cause hydrogel fragmentation, which may decrease the overall conductivity of the electrode surface and produce a degradation of Cd signals. 
These results confirm that accumulation time played a major role in the detection of Cd(II) when these hydrogels were used as modifying agents.On the other hand, at times longer than 12 min, both hydrogels began to fragment at the electrode surface, a situation that was undesirable since it may lead to poor reproducibility of readings. The swelling of the hydrogel causes the formation of structures like a cauliflower that can be seen with the naked eye on the surface of the electrode. Fragmentation was favored due to agitation during the pre-concentration of Cd(II) at open circuit potential and the fragments were seen at the bottom of the cell. In this test, the electrode modified with PAA-co-IA/NaOH/B-MWCNTs presented a significant reduction of the uncertainty of the quantification of CdConcerning the B-MWCNTs, Pérez et al. reported2+. The calibration plot was found to be linear between 2.67 × 10−8 and 6.23 × 10−7 M with a slope of 0.15 µC ppb−1 L (R2 = 0.99) and an LOD and LOQ of 19.2 and 58.3 ppb, respectively. The time needed for preconcentration of Cd(II) in this case was longer, 10 min at OCP, since in this case the concentrations were lower than those used to obtain the voltametric responses shown in −3 and 0.13 × 10−3 M with a slope of 0.02 µC ppb−1 (R2 = 0.99) and a LOD and LOQ of 1800 and 5500 ppb, respectively. Both curves with a slope of 0.002 µC ppb−1 L (R2 = 0.99) see , showing−3 and 0.9 × 10−3 M with a slope of 0.001 µC ppb−1 L (R2 = 0.99), compared with the linear regression for other ranges of concentrations, meaning that this electrode was less sensitive and had less reproducibility than the electrode containing the PAA-co-IA/NaOH/B-MWCNT hydrogel. 
The interference with the voltametric response of Cd(II) was also studied in the presence of other metal cations, such as Pb(II), Zn(II), Hg(II), and Cu(II), which could be adsorbed simultaneously under the conditions used for the adsorption of Cd(II). There was evidently a competitive adsorption mechanism during the pre-concentration process, causing a recovery of only 35% of the voltametric response of Cd(II) in the mixture of all the metal cations. It has been reported that the metal ion binding of linear hydrosoluble poly(acrylic acid) with an average molecular weight of 3 × 106 increases in the following order: Ni(II) < Cd(II) < Cu(II) < Pb(II). Furthermore, other studies have proven that PAA composite films have a great capacity to sense Cu(II) and Pb(II). Further studies will consider the preparation of polymers of specific molecular weights to improve the degree of uncertainty.

The polymeric material containing itaconic acid possessed an increased swelling capacity due to the presence of many carboxylate groups. The swelling capacity was affected by the presence of B-MWCNTs, since they acted as crosslinking agents, decreasing the swelling capacity but providing greater mechanical resistance. The electrochemical detection of cadmium by the carbon paste electrodes modified with the PAA-co-IA/NaOH/B-MWCNT hydrogel confirmed that this material can pre-concentrate Cd(II) and therefore provide electroanalytical signals derived from its re-oxidation by anodic stripping voltammetry. The electrodes were able to detect 3 ppb of Cd(II), a level comparable to the maximum permissible limit in water for human use and consumption (3 to 5 ppb).
The modified electrode was able to detect Cd(II) in the presence of other metal cations; further studies to better understand the behavior of the composite in the presence of these metals are being carried out, which is very important for future applications. This composite material can potentially be applied for the electrochemical detection of other water contaminants, such as disinfection by-products and algae.

(“I think I am … underweight, normal weight, overweight, obese”). We used the following statistical analyses: paired-sample t-tests, a Bland–Altman plot, kappa statistics, chi-squared tests, and logistic regression. Results: The mean difference between BMI calculated from self-reported and measured data was 0.06 in men and 0.16 in women, with four participants being outliers of the 95% limits of agreement (Bland–Altman plot). Allowing a difference of 0.5 kg between self-reported and measured weight, we found that 16% reported their weight correctly, 31.2% underreported (−1.89 ± 1.59 kg), and 52.8% overreported (1.85 ± 1.23 kg), with no sex differences (p = 0.870). Further, our results suggest that both sexes may have difficulty recognizing overweight/obesity in themselves, and men in particular are likely to underreport their perceived weight group compared with women. More than half (53.3%) of the overweight men perceived themselves to be normal weight (women: 14%), and only 33.3% of obese men and women correctly classified themselves as obese. We did not find any difference between participants correctly or incorrectly classifying their weight group and fitness club attendance (≥2 times a week) at three months follow-up. Conclusion: Both sexes reported body weight and height reasonably accurately, and BMI based on self-report appears to be a valid measure.
Still, a large proportion of novice exercisers do not recognise their own overweight or obesity status, which may in part explain why public health campaigns do not reach risk populations.

Background: Data from the research project “Fitness clubs—a venue for public health?” provided an opportunity to evaluate the accuracy of self-reported body weight and height, and the subsequent Body Mass Index (BMI), as well as the “trueness” of novice exercisers’ perception of weight status category, which has not been examined in this population. The aims were to examine self-reported body weight, height, and calculated BMI data from an online survey compared with measured data at fitness club start-up, to investigate how accurately novice exercisers place themselves within a self-classified weight group, and to compare this with fitness club attendance at three months follow-up. Methods: Prior to anthropometric measurements, 62 men and 63 women responded to an online questionnaire, including body weight and height, and self-classified weight group (“I think I am … underweight, normal weight, overweight, obese”).

Body Mass Index (BMI) has gradually increased over the past three decades, with 39% and 13% of adults worldwide being overweight (BMI ≥ 25) or obese (BMI ≥ 30). Systematic literature reviews have examined the validity of self-reported body weight and height in different adult populations. The aims of the present study were therefore to:

(1) Examine self-reported weight, height, and calculated BMI data from an online survey compared with measured data in men and women starting a fitness club membership.

(2) Investigate how accurately new members place themselves within a self-classified weight group.

Since the 1990s, the number of fitness and health clubs has increased, reflecting a growing interest in health among the general adult population. To date, this industry has about 185 million members worldwide, representing a 54% increase over the last decade.
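The Bland–Altman agreement analysis named in the Methods reduces to a mean difference (bias) plus 95% limits of agreement at bias ± 1.96 SD of the paired differences. A minimal sketch with illustrative self-reported vs. measured weights (not the study's data):

```python
# Sketch of a Bland-Altman agreement check between self-reported and
# measured values: bias = mean difference, 95% limits of agreement (LoA)
# = bias +/- 1.96 * SD of the differences. Data below are illustrative.
from statistics import mean, stdev

def bland_altman(self_reported, measured):
    diffs = [s - m for s, m in zip(self_reported, measured)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

reported = [70.0, 82.5, 65.0, 90.0, 58.5, 77.0]  # kg, hypothetical
measured = [71.2, 82.0, 66.5, 91.0, 59.0, 78.4]
bias, (lo, hi) = bland_altman(reported, measured)
outliers = sum(1 for s, m in zip(reported, measured)
               if not lo <= s - m <= hi)
print(f"bias = {bias:.2f} kg, 95% LoA = ({lo:.2f}, {hi:.2f}), outliers = {outliers}")
```

Participants whose difference falls outside the limits of agreement correspond to the "outliers of the 95% limits of agreement" reported in the Results.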
This is a secondary analysis of data collected as part of a prospective study investigating contributing factors that influence exercise involvement, attendance, and drop-out in a fitness club setting.

The research project was reviewed by the Regional Committee for Medical and Health Research Ethics (REK 2015/1443 A), which concluded that, according to the act on medical and health research, the study did not require full review by REK. The procedures followed the World Medical Association Declaration of Helsinki, were approved by the Norwegian Social Science Data Service (NSD 44135), and were financed and conducted at the Norwegian School of Sport Sciences (NSSS) (October 2015–April 2017). No economic compensation was given to the participants.

New members at 25 fitness clubs in Oslo, Norway were contacted by an e-mail invitation from the fitness club chain (SATS). In this email, the aims and implications of the study were explained. Among those who expressed interest in participating, the eligibility criteria were checked in a follow-up email from our research fellow. The participants had to be healthy novice exercisers (≥18 years). A total of 270 candidates were excluded on eligibility grounds, and a further 8 had cardiovascular disease, hypertension, or asthma. In addition, 148 individuals did not respond after the first e-mail, leaving 250 in the original study. Of these, a subgroup of 62 men and 63 women completed anthropometric measurements at the university laboratory. More details of the research project are published elsewhere.

A total of 200 patients who underwent post-PCI evaluation with invasive FFR were screened. Residual ‘virtual stenting’ vFFR and post-PCI vFFR computation were subsequently performed in 81 eligible individuals. Patients were excluded for views with 30 degrees or substantial overlap/foreshortening in pre-PCI (n = 19) or post-PCI (n = 21) angiograms, inadequate pressure waveforms (n = 10), or assessment of bypass grafts (n = 9). Baseline clinical characteristics are summarized in the corresponding table.
Key reasons for screening failure included presentation with STEMI. Diabetes was present in 20 (24.7%) of the patients. A history of previous myocardial infarction (MI) or prior PCI was present in 18.5% and 30.9% of the patients, respectively. In 49.4% of the patients, the FFR measurement was performed in the left anterior descending artery. Mean 3D QCA-based diameter stenosis pre-PCI was 53 ± 15%, with a reference vessel diameter of 2.90 ± 0.65 mm.

No differences were found between estimated residual vFFR (0.91 ± 0.06), post-PCI vFFR (0.92 ± 0.05), and the actual post-PCI FFR value (0.91 ± 0.06). A good linear correlation was observed between residual vFFR and post-PCI FFR (p < 0.001), as well as between residual vFFR and post-PCI vFFR (p < 0.001). There were 31 lesions (38.5%) identified with post-PCI FFR < 0.90. Residual vFFR showed a good accuracy in the identification of lesions with post-PCI FFR < 0.90 (0.93, 95% CI: 0.86–0.99; p < 0.001).

The present study is the first to evaluate the feasibility of vFFR estimation of post-PCI functional outcome. Residual vFFR calculated from pre-PCI angiograms, simulating the effects of coronary stent implantation, correlates well with both post-PCI FFR and post-PCI vFFR values. Moreover, the discriminative ability for post-PCI FFR < 0.9 was good, without the need for an invasive pressure wire or microcatheter and pharmacological induction of hyperemia before actual stent implantation in patients presenting with either chronic coronary syndrome or non-ST-ACS. Indeed, a considerable proportion of patients after successful PCI still suffer from angina.

The presented findings, although preliminary, constitute another step forward in the development of virtual PCI planning tools.
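Discriminative ability for a binary outcome such as post-PCI FFR < 0.90 is conventionally summarized as the area under the ROC curve, which equals the probability that a randomly chosen positive case scores below a randomly chosen negative one (the rank-based, Mann–Whitney formulation). A small sketch with hypothetical vFFR values, not the study's data:

```python
# Sketch: ROC AUC for a continuous marker (e.g. residual vFFR)
# discriminating a binary outcome (post-PCI FFR < 0.90). Here
# "positive" = FFR < 0.90, and a LOWER vFFR should indicate a worse
# result, so AUC = P(positive score < negative score). Ties count 0.5.

def auc_lower_score_is_positive(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p < n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical residual vFFR values:
ffr_below_090 = [0.84, 0.86, 0.88, 0.89, 0.91]          # post-PCI FFR < 0.90
ffr_at_least_090 = [0.90, 0.92, 0.93, 0.94, 0.95, 0.96]  # post-PCI FFR >= 0.90
auc = auc_lower_score_is_positive(ffr_below_090, ffr_at_least_090)
print(f"AUC = {auc:.2f}")
```

An AUC near 1.0 corresponds to near-perfect separation; the 0.93 reported above indicates that residual vFFR ranks most lesions correctly.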
Our study extends prior observations from computed tomography coronary angiography (CCTA)-based FFRCT PCI planning software and other approaches. Importantly, virtual stenting vFFR cannot account for heavy calcifications or stent underexpansion; the software used in this study assumes an almost perfect PCI result. Virtual stenting vFFR predicts the physiological response to PCI and is not intended to be a replacement for optical coherence tomography or intravascular ultrasonography in determining procedural success, which is dependent on several procedural factors.

The following limitations of this study need to be noted while interpreting its results. This is a retrospective cohort study with a relatively small sample size. The residual vFFR was compared with post-PCI FFR analyzed with a dedicated microcatheter known to slightly overestimate FFR values.

Pre-PCI estimation of ‘residual’ vFFR based upon invasive angiographic imaging is feasible, correlates well with post-PCI invasive FFR and vFFR measurements, and can predict the physiological response to stenting with high accuracy. Further studies are needed to evaluate the efficacy and safety of 3D-QCA-based FFR-guided coronary interventions.

In this review, we discuss the role of transforming growth factor-beta (TGF-β) in the development of pulmonary vascular disease (PVD), both pulmonary arteriovenous malformations (AVM) and pulmonary hypertension (PH), in hereditary hemorrhagic telangiectasia (HHT). HHT, or Rendu-Osler-Weber disease, is an autosomal dominant genetic disorder with an estimated prevalence of 1 in 5000 persons, characterized by epistaxis, telangiectasia, and AVMs in more than 80% of cases. HHT is caused by a mutation in the ENG gene on chromosome 9, encoding the protein endoglin, or in the activin receptor-like kinase 1 (ACVRL1) gene on chromosome 12, encoding the protein ALK-1, resulting in HHT type 1 or HHT type 2, respectively.
A third disease-causing mutation has been found in the SMAD-4 gene, causing a combination of HHT and juvenile polyposis coli. All three genes play a role in the TGF-β signaling pathway, which is essential in angiogenesis, where it plays a pivotal role in neoangiogenesis, vessel maturation, and stabilization. PH is characterized by elevated mean pulmonary arterial pressure caused by a variety of different underlying pathologies. HHT carries an additional increased risk of PH because of high cardiac output as a result of anemia and shunting through hepatic AVMs, or the development of pulmonary arterial hypertension due to interference with the TGF-β pathway. HHT in combination with PH is associated with a worse prognosis due to right-sided cardiac failure. The treatment of PVD in HHT includes medical or interventional therapy.

Hereditary hemorrhagic telangiectasia (HHT), also known as Rendu-Osler-Weber disease, is an autosomal-dominant inherited disease with an estimated prevalence of 1 in 5000 individuals, and higher in certain regions. Diagnosing HHT can be done through genetic testing or by use of the clinical Curaçao criteria framework. The Curaçao diagnostic criteria for HHT consist of the following:

Frequent and recurrent epistaxis, which may be mild to severe

Multiple telangiectases on characteristic sites: lips, oral cavity, fingers, and nose

AVMs or telangiectases in one or more of the internal organs

A 1st-degree relative with HHT

A diagnosis of HHT is considered confirmed if at least three criteria are present, and possible with two criteria, as listed above.

Currently, there are five different mutations known to cause HHT; this has led to the subdivision of HHT into five subtypes. HHT1 is caused by mutations in the ENG gene.
HHT type 1 is characterized by a higher prevalence of pulmonary and cerebral AVMs, mucocutaneous telangiectasia, and epistaxis compared to HHT type 2. HHT type 2 is caused by a mutation in the ACVRL1 gene and has a higher prevalence of hepatic AVMs compared to HHT type 1. HHT type 3 and HHT type 4 are linked with mutations on chromosomes 5 and 7, respectively; however, the exact genes remain unknown. HHT type 5 is caused by a mutation in the Growth Differentiation Factor 2 gene (GDF-2) that codes for Bone Morphogenetic Protein 9 (BMP9) (OMIM 615506), which expresses an HHT-like phenotype and is therefore classified as HHT type 5. Mutations in the SMAD4 gene can cause a rare syndrome that is a combination of juvenile polyposis and HHT; this mutation is found in only 1–2% of HHT patients. The various types of HHT can thus be subdivided based on the genetic mutation in the TGF-β signaling pathway. Approximately 80% of HHT patients have mutations in the ENG or ACVRL1 gene.

Excessive TGF-β activation contributes to the development of a variety of diseases, including cancer, autoimmune disease, vascular disease, and progressive multi-organ fibrosis. The TGF-β signaling pathway is involved in many cellular processes, including cell growth, cell differentiation, apoptosis, cellular homeostasis, and others. In endothelial cells (ECs), TGF-β can signal through two type-1 receptors: the ALK-5 pathway, in which SMAD 2 and 3 are activated, and the ALK-1 pathway, in which SMAD 1, 5, and 8 are activated.

Several studies show a biphasic effect of TGF-β in ECs. Endoglin is upregulated by ALK-1 and is an accessory receptor in the TGF-β signaling pathway that is specifically expressed in proliferating ECs and has an opposite effect to ALK-5. BMP-9 is a ligand involved in the TGF-β/ALK-1 complex; high concentrations of BMP-9 in vitro and ex vivo inhibit the proliferation and migration of ECs.
ECs deficient in endoglin cannot mature because the balance between the ALK-1 pathway and the ALK-5 pathway is disrupted and ALK-5 predominates. Mutations in the ENG and ACVRL1 genes alter the ligand–receptor interaction, creating an imbalance in Vascular Endothelial Growth Factor (VEGF) signaling; VEGF is stimulated by the ALK-5 pathway and inhibited by the ALK-1 pathway. This allows for the creation of a thin-walled arteriovenous complex that is exposed to increased arterial blood flow and increased pressure.

Although ENG and ACVRL1 mutations cause HHT-1 and HHT-2, it is noteworthy that HHT vascular lesions only occur in certain organs and are not expressed throughout the body. In resting ECs, endoglin is present at low concentrations, but when cells are actively proliferating, or during angiogenesis and embryogenesis, the endoglin concentration is increased. Thus, in the haploinsufficient HHT setting where a second hit occurs, endoglin and ALK-1 do not reach the minimum concentration necessary to perform their roles in vascular damage. Further genome analysis of HHT families with phenotype variability, as well as families with HHT whose genetic causes are unknown, may be useful to identify new genes that may explain the heterogenic spectrum.

Pulmonary AVMs (PAVMs) are a direct connection between a pulmonary artery and a pulmonary vein without the interposition of the pulmonary capillary bed. This results in an intrapulmonary right-to-left shunt with no gas exchange and a reduced filtering capacity of the pulmonary capillary bed. PAVMs are frequently underdiagnosed and asymptomatic. The estimated prevalence of PAVMs is 38 in 100,000 individuals.

The gold standard for screening for PAVMs is transthoracic contrast echocardiography (TTCE) using agitated saline.
TTCE has a sensitivity of 95–100% and can therefore be used to exclude a PAVM. The degree of pulmonary shunt can be graded by the number of microbubbles found in the left heart. The presence of a moderate or large shunt is an independent predictor of cerebrovascular events and brain abscesses. Screening for PAVMs in (asymptomatic) HHT patients is justified given the good treatment options and the non-invasive examination, reducing the risk of serious complications.

PH is a condition of increased blood pressure within the pulmonary arteries. PH has been defined as a mean pulmonary arterial pressure (PAP) ≥25 mmHg at rest, as assessed by right-heart catheterization. PH is classified into five groups:

Group 1—Pulmonary arterial hypertension (PAH)

Group 2—PH caused by left heart disease

Group 3—PH caused by lung diseases and hypoxia

Group 4—Chronic thrombo-embolic PH and other pulmonary arterial obstructions

Group 5—PH with unclear or multifactorial mechanisms

There are no data describing the prevalence of PH per group in HHT. In an echocardiographic study, the prevalence of PH (estimated pulmonary artery systolic pressure >40 mmHg) was 11%, with 79% of these patients suffering from left heart disease and 10% from lung diseases, respectively.

Diagnosing PH in HHT can be challenging. Symptoms of HHT, such as fatigue, dyspnea, and exercise intolerance, resemble those of PH due to anemia, hypoxemia associated with PAVMs, inadequate sleep due to epistaxis, and the psychological burden of a chronic illness. A transthoracic echocardiogram (TTE) should always be performed when PH is suspected. TTE provides different echocardiographic variables, such as an estimation of the PAP and secondary signs, to assess the probability of PH. One of the subgroups of PAH, a disease with a pre-capillary hemodynamic profile and an increased pulmonary vascular resistance (PVR > 3 Wood units), is heritable PAH (HPAH).
Less than 1% of patients with HHT suffer from HPAH caused by a mutation in the ACVRL1 gene. Research by Vorselaars et al. showed that very few cases of the combination of PAH and HHT are known in the literature. However, a majority of family members of patients with HPAH in combination with HHT do not develop HPAH, indicating that other genetic or environmental factors are required to develop an HPAH phenotype. Symptomatic HPAH patients with ACVRL1 mutations, frequently without HHT, are more likely to present with symptoms than patients with a BMPR2 mutation or idiopathic PAH. Post-capillary PH arises from a hyperdynamic state caused by an increased cardiac output (CO), which can cause heart failure in the long term.

Treatment of PAVMs in HHT is recommended to prevent severe complications—in particular, the development of brain abscesses and cerebral ischemic events—and is therefore justified even for asymptomatic patients. In addition, symptoms of hypoxia and dyspnea can be reduced by PAVM treatment. Sometimes the PAVMs are complex and diffuse, involving pulmonary arteries from different segments. This group is difficult to treat; surgery might be an alternative to percutaneous treatment. However, lung transplantation might be the only option left.

Despite the fact that local treatment of telangiectases and PAVMs continues to improve, no ideal systemic therapy is available to date. Various studies and trials have attempted to find new drugs and have investigated the possibilities for repurposing existing drugs. Currently, anti-angiogenic drugs used in cancer treatment (anti-VEGF antibodies and tyrosine kinase inhibitors) are under investigation with the aim of inhibiting the pro-angiogenic processes in HHT. VEGF plays a role in the development of AVMs, and anti-VEGF therapy has been shown to be effective in the treatment of other AVMs.
Several case reports have described treatment of diffuse PAVMs with bevacizumab, in which respiratory symptoms improved and epistaxis decreased, without the formation of new AVMs on chest CT during follow-up.

Treatment differs for the different types of PH in HHT. There have not yet been randomized controlled trials to contribute to a guideline for PAH-specific therapy in HHT. Treatment of PAH consists of lifestyle advice and drug therapy. Lifestyle advice includes avoiding pregnancy and infections, having elective surgery performed in specialized centers with experience in PH, genetic testing of family members, oxygen, psychological assistance, and water and salt reduction.

Different drugs are currently under investigation for the treatment of PAH. Tacrolimus is a drug used to prevent rejection after allogenic organ transplantation. A few case reports describe the use of tacrolimus in PAH with promising results. A recent case report has shown that low-dose tacrolimus treatment improved HHT-related epistaxis but had no effect on PH progression in HHT patients.

PH due to left heart disease can be treated by means of salt reduction and diuretics. Embolization of liver AVMs can cause serious complications, such as biliary ischemia; the treatment of choice might instead be intravenous bevacizumab, recently recommended in the international guideline for HHT. Tacrolimus has been demonstrated to be a potent ALK-1 signaling mimetic, downregulating the ALK-1 loss-of-function transcription response; tacrolimus is therefore an interesting option for the treatment of hepatic AVMs in HHT2 with high-cardiac-output PH. Secondly, if anemia is present, the underlying etiology should be treated to reduce the high CO.

In this review, we discussed the pathophysiology, screening, and treatment of PVD, both PAVM and PH, in HHT. Research into the pathophysiology of these mutations has led to potential targets for therapy, such as tacrolimus and bevacizumab.
Although case reports show promising results, the scientific evidence is still insufficient to use these therapies in daily practice. Further research is required, and it is reasonable to assume that clinical trials will follow.

The purpose of this study was to evaluate retinal and choroidal microvascular alterations with optical coherence tomography angiography (OCTA) in COVID-19 patients hospitalized because of bilateral pneumonia caused by SARS-CoV-2. The vessel density (VD) and foveal avascular zone (FAZ) of 63 patients with SARS-CoV-2 pneumonia, who had positive polymerase chain reaction (PCR) tests and who recovered after receiving treatment, and of 45 healthy age- and gender-matched controls were evaluated and compared using OCTA in the superficial capillary plexus (SCP) and deep capillary plexus (DCP). The VD was also estimated in both groups in the choriocapillaris (CC). In COVID-19 patients, there was a statistically significant difference from the control group in both the superficial (FAZs) and deep (FAZd) avascular zone. The VD was significantly lower in the foveal area of the choriocapillaris (p = 0.046). There were no statistically significant changes in the VD in the superior, inferior, nasal, and temporal quadrants in the superficial and deep plexus, or in the choriocapillaris, and the VD was not significantly lower in the foveal area in the superficial or deep plexus. COVID-19 may affect the retinal vasculature, causing ischemia, enlargement of the FAZ, and lowering of the VD in the choriocapillaris area. Routine ophthalmic examination after SARS-CoV-2 infection should be considered in the course of post-infectious rehabilitation.

Coronavirus disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a new beta coronavirus that was first identified in December 2019 in Wuhan, China. Since then, SARS-CoV-2 has spread as a pandemic, announced by the World Health Organization (WHO) on 11 March 2020.
Transmission of the virus occurs primarily by droplets, through exposure to respiratory secretions containing the pathogen, but infection through contact with contaminated objects or surfaces has also been reported. The clinical picture of the disease ranges from asymptomatic through mild and moderate respiratory infection to life-threatening viral pneumonia with acute respiratory distress syndrome, septic shock, and multiple organ dysfunction with a very high mortality rate. SARS-CoV-2 uses a spike protein that binds directly to human angiotensin-converting enzyme 2 (ACE2) in order to infect human cells. The virus has been detected in the tears of symptomatic and asymptomatic COVID-19 patients, as well as in the retina of human cadavers, using a real-time polymerase chain reaction (PCR).

The vascular endothelium plays a role in the control of vascular tone and the preservation of the blood–retinal barrier. Dysfunction of the endothelium can lead to microvascular ischemia through mechanisms including vasoconstriction and inflammation.

Our study aimed to assess the macular anatomy, macular vessel density, and the foveal avascular zone in patients who recovered after hospitalization due to COVID-19.

This was a cross-sectional, consecutive, and prospective case-control series. COVID-19 patients, confirmed with at least one PCR test, were selected from a group of cases with bilateral pneumonia. Patients were admitted to the Department of Infectious Diseases of WSZ in Kielce during March, April, May, and June 2021. The project was approved by the Bioethics Committee of Collegium Medicum of Jan Kochanowski University in Kielce (study code 54, approved 1 July 2021). Written informed consent was obtained from all patients. All patients were examined 8 weeks after hospital discharge (group 1). This group consisted of 63 subjects who agreed to participate, with a mean age of 51.33 ± 1.45 years. At the moment of ocular examination, all patients were already asymptomatic.
Nineteen patients (30.16%) suffered from hypertension and three patients (4.76%) suffered from dyslipidemia. Twenty-two (34.92%) patients received oxygen therapy during hospitalization.

The control group (group 2) included healthy subjects who attended the ophthalmology department for a routine eye examination. This group consisted of 45 subjects with a mean age of 47.76 ± 1.38 years. Written informed consent was obtained from all participants. Inclusion criteria for this group were as follows: age of 30–70 years; negative laboratory tests for SARS-CoV-2 infection; absence of COVID-19 symptoms in the past or close contact with COVID-19 patients within the 14 days before the examination; and absence of concomitant eye diseases. Demographic characteristics of both groups are presented in the corresponding table. A flow chart following the consolidated standards for reporting trials, describing included and excluded eyes, is also provided.

Exclusion criteria related to eye diseases for both groups were myopia >3 diopters, hyperopia >3 diopters, retinal vascular disease, macular and optic nerve disease, previous ocular surgery (including cataract or glaucoma surgery), uveitis, ocular trauma, age-related macular degeneration, other retinal degenerations and media opacity affecting the OCTA scan or image quality, and diabetes mellitus.

Both groups underwent complete ophthalmic examination, including a best corrected visual acuity (BCVA) test measured on a logMAR scale, intraocular pressure (IOP) measurement, slit lamp examination, OCT of the macula and optic nerve, and OCT angiography (OCTA). All scans were acquired with swept-source angio-OCT. OCT protocols included 3D macula 7 × 7 mm scanning protocols and 3D 6 × 6 mm disc scanning protocols. OCTA images were captured using the 4.5 × 4.5 mm and the 6 × 6 mm scanning protocols.
All scans eligible for the study reached an image quality of at least 65%.

Structural OCT macular parameters were measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) grid centered on the fovea by manual fixation. Three areas of interest were defined: the fovea, the inner ring (IR), and the outer ring (OR). The retinal strata and parameters analyzed were the total retina (TR), the retinal nerve fiber layer (RNFL), the ganglion cell layer (GCL), and choroid thickness, delineated automatically by the built-in segmentation software. Optic nerve head RNFL thickness was measured in each of four quadrants using a radial scan centered on the optic nerve head and presented as mean RNFL.

The OCTA parameters evaluated were vessel density (VD) in three different plexi: the superficial capillary plexus (SCP), the deep capillary plexus (DCP), and the choriocapillaris (CC), using the ETDRS grid subfields to define the areas of interest. The mean VD was calculated as the average value obtained in the parafoveal area, defined as the area formed by the inner superior (IS), inner nasal (IN), inner inferior (II), and inner temporal (IT) ETDRS subfields centered on the macula by fixation. The foveal avascular zone (FAZ) area was manually delineated on the SCP and the DCP by two independent graders, encompassing the central fovea where no clear vessels were seen in the image.

A p-value of less than 0.05 was regarded as statistically significant. All statistical analyses were performed using Statistica 13.3. Clinical demographics and imaging data were analyzed with frequency and descriptive statistics. The description of quantitative variables was performed using the mean (M), standard error of the mean (SEM), median (Me), and quartiles (IQR). Differences between the COVID-19 and control groups were tested using the Mann–Whitney U test. Clinical variables, structural OCT, and OCTA parameters were compared between cases and controls using Student’s t-test.
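The Mann–Whitney U comparison named above can be sketched in a self-contained way using the normal approximation for the U statistic; the data below are illustrative, not the study's measurements, and a statistics package would normally be used instead:

```python
# Sketch of a two-sided Mann-Whitney U test with a normal approximation
# (no tie-variance correction), as used for group comparisons of OCTA
# parameters such as the FAZ area. Data are illustrative only.
import math

def mann_whitney_u(a, b):
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):          # assign midranks to tied values
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])              # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

faz_covid = [0.38, 0.41, 0.35, 0.44, 0.40, 0.39]  # mm^2, hypothetical
faz_ctrl = [0.30, 0.33, 0.29, 0.34, 0.31, 0.32]
u, p = mann_whitney_u(faz_covid, faz_ctrl)
print(f"U = {u}, p = {p:.4f}")
```

With such small samples an exact test would be preferable; the normal approximation is shown only to make the mechanics of the rank-based comparison explicit.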
Categorical variables were presented as percentages, and comparisons between groups were performed with the chi-square test. Pearson (r) or Spearman’s (rho) correlation coefficients were used to assess associations between SCP (%), DCP (%), CC (%), retinal RNFL, GCL, BMCSI, and optic nerve RNFL.

A total of 203 eyes were included in the study: 120 eyes of COVID-19-affected patients and 83 eyes in the control group. In all, 43 men and 20 women with COVID-19 bilateral pneumonia participated in the analysis. The control group consisted of 28 men and 17 women. The mean age of the study group was 51.33 years (SEM = 1.45), while the mean age of the control group was 47.76 years (SEM = 1.38).

Significant differences in retinal thickness between COVID-19 patients and controls were observed in several ETDRS subfields, including the inner nasal ring and the inferior sector; detailed information about the retinal thickness measurements can be found in the supplementary tables.

The FAZ area was significantly larger in COVID-19 patients than in controls, both for the SCP (p = 0.000) and for the DCP. The VD was significantly lower in the foveal area of the CC (p = 0.046). There were no statistically significant differences in the superior, inferior, temporal, or nasal areas in the SCP, DCP, or CC between COVID-19 patients and the control group; detailed information can be found in the supplementary tables.

We observed that the FAZ area was significantly larger in female COVID-19 patients than in male COVID-19 patients in the SCP, while the VD was significantly higher in women than in men in the inferior area of the SCP.
The VD in the SCP was significantly lower in women than in men among COVID-19 patients in the foveal area, the nasal area, and the temporal area. The VD in the DCP was significantly lower in women than in men in the superior area, the inner nasal ring, the inner inferior ring, the inner temporal ring, and the outer temporal ring.

In the structural OCT analysis, a statistically significantly thicker retinal RNFL was observed in men than in women in COVID-19 cases in the foveal area (p = 0.002), the inner temporal ring, and the outer temporal ring. A significantly thicker GCL was observed in men than in women in COVID-19 patients in the foveal area, the inner superior ring, the inner nasal ring, the inner inferior ring (318.10 ± 1.67 vs. 309.19 ± 2.72, p = 0.002), the inner temporal ring, the outer inferior ring, and the outer temporal ring. The retinal thickness was significantly greater in men than in women in COVID-19 patients in the foveal area.

Immunofluorescence microscopy confirmed two essential proteins of the SARS-CoV-2 viral particles: the S protein, exposed on the virus surface, and the nucleocapsid protein, located within the virus. The nucleocapsid protein was visualized in the ganglion cell layer (GCL).

Below we discuss the most important findings of our work regarding the changes in the retinal microvasculature of COVID-19-affected patients. One of the most prominent features in our study was the enlargement of the FAZ area in the SCP. There are conflicting data in the literature, as some authors confirm this finding while others do not.

Enlargement of the FAZ can be a consequence of retinal ischemia due to endothelial dysfunction, vasoconstriction, and procoagulant activity.
It can also reflect a perfusion deficit in the foveal area caused by general hypoxia and inflammation. Postmortem studies confirmed the presence of microvascular thrombosis, endotheliitis, viral elements, apoptotic bodies, and inflammatory cells within the endothelium of small vessels in many organs. Ischemic processes can enlarge the FAZ area, as observed in many vascular diseases such as diabetes mellitus or retinal vascular occlusion [29].

The ganglion cell layer (GCL) and the inner plexiform layer (IPL) are supplied with blood by the SCP, while the DCP supplies the outer plexiform layer (OPL) adjacent to the outer nuclear layer (ONL). The OPL includes oxygen-dependent synapses of photoreceptors, bipolar cells, and horizontal cells. The SCP and the DCP are the final branches of the central retinal artery. We can therefore expect ischemia and secondary atrophy of the inner retina due to endotheliitis and microthrombotic processes within these small vessels.

In our group, we did not find a difference between COVID-19 patients and the control group in the foveal GCL, but we confirmed a significantly thicker GCL in the inner superior ring, the inner nasal ring, the inner temporal ring, and the outer inferior ring. Our results are in accordance with the study performed by Burgos-Blasco et al. on children between 6 and 18 years of age who recovered from COVID-19; they observed increased macular GCL in the nasal outer and temporal inner sectors.

The RNFL and the GCL are the neuroretinal layers. John Dowling referred to the human retina as “the approachable part of the brain”, which can be clinically visualized with proper tools. In our study, the foveal RNFL was not statistically different between COVID-19 patients and the control group, but a significantly thicker retinal RNFL was observed in COVID-19 cases in the inner superior ring and inner nasal ring.
There were no signs of choroidal neovascularization or any abnormalities in the RPE in these regions, and it is difficult to state the origin of this change. Most authors also did not find differences in the foveal RNFL [26,35].

Contrary to our analysis, Burgos-Blasco et al. confirmed decreased macular RNFL in the nasal outer and temporal inner sectors, but also an increase in global peripapillary RNFL in the temporal superior and temporal sectors.

Savastano et al. described decreased radial peripapillary capillary plexus (RPCP) perfusion density in post-COVID-19 patients. The RNFL thickness was linearly correlated with the RPCP flow index and perfusion density in this study group. Impairment of the blood supply to the optic nerve may result in peripapillary RNFL thinning.

Marinho et al. reported the presence of hyperreflective lesions at the GCL and inner plexiform layer (IPL) in adults 11–33 days after the onset of COVID-19 symptoms. Subtle cotton wool spots and microhemorrhages were present along the retinal branches.

In our study, we observed significant thickening of the RNFL in the optic nerve head. This is in line with the findings of Gonzalez-Zamora et al., who also noticed it in the superior and inferior sectors of the optic nerve head in COVID-19 patients.

Another parameter analyzed in this study was the VD. We did not notice statistically significant differences between groups in VD in the foveal or the parafoveal area in either the SCP or the DCP. Our results are consistent with those reported by Szkodny et al., but most authors reported decreased VD, similar to Hazar et al.’s estimations, which confirmed significantly lower VD in the SCP and DCP one month after patients were discharged following recovery.

Another parameter analyzed in this study was VD in the CC plexus. It was significantly lower in the foveal area in COVID-19 patients than in controls, but not in the parafoveal area. Most studies did not estimate VD in the CC plexus.
In the study performed by Gonzalez-Zamora et al., no alterations in VD in the CC plexus were observed. As all the above-mentioned studies vary significantly in the time of examination, this probably explains the differences in the results obtained. There is no consensus yet on when a reliable measurement should be taken, or on the mechanisms by which COVID-19 can affect vessel density.

In our earlier study, we emphasized the role of the choroid in the nourishment of the outer layers of the retina, part of the optic nerve, and the retinal pigment epithelium. The choroid is the only metabolic source in the avascular zone of the macula. Retinal oxygenation is also provided by the choroid, which is one of the most vascularized tissues in the human body. It plays a role in the pathophysiology of many ocular and systemic diseases such as anemia or carotid artery stenosis.

We suspect that reduced VD in the choriocapillaris of COVID-19 patients can affect the retina. Long-term follow-up is needed to estimate the functional and anatomical consequences for the general and local condition.

Our study has several limitations. It could not be performed during the symptomatic acute phase of COVID-19 due to the emergency conditions and risk of contagion. Nineteen COVID-19 patients suffered from hypertension and three from dyslipidemia, so we cannot exclude that part of our findings were related to the general condition. We have no baseline OCT results from before the disease, so we cannot make that comparison. A strong point of the study is a large group consisting of selected patients without the influence of vascular diseases such as diabetes mellitus, examined at the same time point after discharge from the hospital.

Here we report significant changes in the retinal microvasculature of COVID-19 patients, especially in the most metabolically active regions of the retina. These findings confirm the microvascular involvement of SARS-CoV-2 infection and its possible vascular sequelae.
We cannot yet state whether these changes are permanent or transient, but even a minor decrease in perfusion in the macular area can result in slow degradation of its function. Long-term follow-up studies are required to further evaluate these findings. OCT can be a useful tool to assess the severity of the disease and to determine the effectiveness of COVID-19 treatment.

Statistically significant thickening of the RNFL at the optic disc and thinning of the retinal RNFL in some macular areas were reported in patients with SpO2 ≤ 90%. The FAZ area in the SCP and the vessel density in some areas of the SCP, DCP, and CC were significantly greater in patients with SpO2 ≤ 90% (p = 0.025). Baseline oxygen saturation ≤90% was found to influence the ocular OCT parameters in COVID-19 patients. We noticed a widened FAZ in the SCP and increased VD in some regions of the retina and choroid as a response to systemic hypoxia.

The aim of the study was to evaluate changes in the retinal thickness and microvasculature based on optical coherence tomography (OCT) depending on baseline oxygen saturation (SpO2) in patients hospitalized due to COVID-19 bilateral pneumonia. The prospective study was carried out among 62 patients with COVID-19 pneumonia who underwent ophthalmic examination after hospital discharge. They were divided into three groups depending on the oxygen saturation (SpO2) on admission: ≤90% (group 1), >90% and ≤95% (group 2), and >95% (group 3). The following parameters were assessed in the ophthalmological examination and correlated with the baseline SpO2: the ganglion cell layer (GCL), the retinal nerve fiber layer (RNFL) in the macular area, the RNFL in the peripapillary area, the foveal avascular zone (FAZ) in the superficial capillary plexus (SCP) and deep capillary plexus (DCP), and vessel density (VD) in the SCP, the DCP, and the choriocapillaris plexus (CC).
Baseline saturation ≤90% in COVID-19 patients was associated with a decrease of VD in some areas of the SCP and DCP and an increase in the FAZ area in the SCP and DCP. In the group of patients with SpO2 ≤ 90%, statistically significant thinning of the retina in the inner superior ring (ISR) was observed.

Since the end of 2019, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has spread worldwide, resulting in the pandemic announced by the World Health Organization (WHO) on 11 March 2020. SARS-CoV-2 is transmitted by respiratory droplets, close contact with an infected person, or aerosols. The disease it causes, termed coronavirus disease 2019 (COVID-19), is mild or even asymptomatic in most of the patients affected, while about a fifth of them experience a severe course [4].

Treatment methods for patients with COVID-19-associated pneumonia include oxygen therapy and antiviral, anti-inflammatory, and immunosuppressive drugs depending on the phase of the disease, while low-molecular-weight heparin is recommended regardless of the stage of the illness as prophylaxis of thromboembolism [11,12].

The presence of the genetic material of SARS-CoV-2 in the retinal tissue has been confirmed postmortem by PCR. The most current method to visualize macular microvessels is non-invasive optical coherence tomography angiography (OCTA). OCTA can successfully visualize the microvascular network of the retina and its blood flow in numerous eye disorders and general diseases.

The current study aimed to evaluate the microvascular changes in the retina based on OCTA in COVID-19 patients hospitalized due to bilateral pneumonia caused by SARS-CoV-2. The impact of comorbidities and the received treatment was also analyzed.

A cross-sectional, consecutive, prospective case-control series analysis was carried out.
Cases were selected from a population of COVID-19 patients with bilateral pneumonia hospitalized in the Department of Infectious Diseases of the Municipal Hospital in Kielce during the spring pandemic wave caused by the B.1.1.7 variant of SARS-CoV-2, from March to May 2021. The project was approved by the Bioethics Committee of Collegium Medicum of Jan Kochanowski University in Kielce (study code 54, approved on 1 July 2021).

Exclusion criteria related to eye diseases were as follows: myopia > 3 diopters, hyperopia > 3 diopters, retinal vascular disease, macular and optic nerve disease, previous ocular surgery (including cataract surgery, glaucoma surgery, and other types of eye surgery), uveitis, ocular trauma, age-related macular degeneration and other macular degenerations, media opacity affecting OCTA scan or image quality, and diabetes mellitus. All examinations were performed by a single non-masked investigator, with a standard protocol applied to each patient.

All patients signed written informed consent to participate in the current study and underwent ophthalmological evaluation eight weeks after hospital discharge.

Thirty-nine patients suffered from comorbidities, the most common being arterial hypertension (20 patients), followed by other cardiovascular diseases, including ischemic and valvular heart disease and cardiac rhythm disturbances, which affected six patients. Five individuals had liver steatosis, and five had a history of (inactive) malignant neoplasm. Data on the course of hospitalization were obtained retrospectively from hospital records. At the time of admission to the hospital due to COVID-19, all patients were symptomatic. In all patients, the clinical diagnosis of COVID-19 was confirmed by a positive result of the real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test from nasopharyngeal swabs.
The diagnosis of bilateral pneumonia was supported by typical chest computed tomography scan changes, described in 26 patients. The analyzed group consisted of 62 patients with male predominance; the mean (M) ± standard error of the mean (SEM) age was 51.3 ± 1.4 years, and the BMI (M ± SEM) was 28.5 ± 0.5 kg/m2.

Twenty-two patients were classified at baseline as stable with oxygen saturation (OS) >95%, twenty-nine were unstable with OS 91–95%, and the remaining eleven patients were assessed as unstable with OS ≤ 90%. The need for continuous low-flow oxygen therapy was documented in 23 patients, for an average of 5 (4–10) days. The most frequent drug used for the in-hospital treatment of COVID-19 was low-molecular-weight heparin in a prophylactic dose, which was administered in fifty-nine patients for an average of 9 (6.25–12) days; two individuals received a therapeutic dose. The antiviral agent remdesivir was used in 26 patients. Immunosuppressive treatment with dexamethasone was administered to 22 patients, while 3 received tocilizumab in a single dose of 600–800 mg, depending on the patient’s weight.
The COVID-19 patients underwent a complete ophthalmic examination, including a best corrected visual acuity (BCVA) test on a LogMAR scale, intraocular pressure (IOP) measurement, a slit-lamp examination, OCT of the macula and optic disc, and OCT angiography (OCTA). The mean LogMAR BCVA was 0.0, the mean LogMAR Reading Vision (RV) was 0.3, the spherical equivalent (SE) was 0.13 (0.13) D, and the mean axial length was 23.55 (0.08) mm.

All scans were acquired with a Swept Source DRI-OCT Triton SS-OCT Angio device. OCT protocols included a 3D macula 7 × 7 mm scanning protocol and a 3D disc 6 × 6 mm scanning protocol, and OCTA images were captured using the 4.5 × 4.5 mm and the 6 × 6 mm scanning protocols.

Structural OCT macular parameters were measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) grid, centered on the fovea by manual fixation. Three areas of interest were defined: the fovea, the inner ring (IR), and the outer ring (OR). The IR and OR include superior (S), inferior (I), nasal (N), and temporal (T) areas. The retinal strata and parameters analyzed were the total retina, the retinal nerve fiber layer (RNFL), the ganglion cell layer (GCL), and choroid thickness, as delineated by the boundaries automatically defined by the built-in segmentation software. Optic nerve head RNFL thickness was measured in each quadrant using a radial scan centered on the optic nerve head and presented as mean RNFL.

The evaluated OCTA parameters were vessel density (VD) in the three different plexi: the superficial capillary plexus (SCP), the deep capillary plexus (DCP), and the choriocapillaris (CC), using the ETDRS grid subfields to define the areas of interest. Mean vessel density (VD) was calculated as the average value obtained in the parafoveal area, defined as the area formed by the inner superior (S), inner nasal (N), inner inferior (I), and inner temporal (T) ETDRS subfields centered on the macula by fixation.
The foveal avascular zone (FAZ) area was manually delineated on the SCP and the DCP by two independent graders, encompassing the central fovea where no clear and demarcated vessels were seen on the OCTA.

Differences with p ≤ 0.05 were considered significant. The statistical analysis was performed using the STATISTICA 13.3 statistical package. Counts (n) and percentages (%) were calculated for all qualitative parameters in the COVID-19 patient group. Three groups of patients were distinguished using the following saturation cut-off values: ≤90%, >90% and ≤95%, and >95%. In the distinguished groups, the distributions of the quantitative variables were checked with the Shapiro–Wilk test.

The clinical course of SARS-CoV-2 disease was assessed with an ordinal scale based on the WHO recommendation, modified to an 8-score version to fit the specificity of the Polish healthcare system and used in previous SARSTer studies [15,16]. The COVID-19 patients were divided into three groups depending on oxygen saturation: ≤90% (group 1), >90% and ≤95% (group 2), and >95% (group 3).

Statistically significant correlations were found between retinal thickness and SpO2 in some areas of the macula in the group of patients with SpO2 equal to or lower than 90% [p = 0.029 in the inner superior ring (ISR); r = 0.49, p = 0.034 in the inner temporal ring (ITR); r = 0.56, p = 0.012 in the outer superior ring (OSR); r = 0.062, p = 0.004 in the outer temporal ring (OTR)]. A statistically significant correlation was also found between choroidal thickness (BMCSI) and SpO2: for SpO2 equal to or lower than 90%, r = 0.52, p = 0.021 in the outer nasal ring (ONR), and for an oxygen saturation of 90–95%, r = −0.38, p = 0.007 in the outer nasal ring (ONR).

There were no statistically significant differences between the different OCT parameters in COVID-19 patients according to oxygen saturation. Group 1 consisted of 10 patients (19 eyes), group 2 consisted of 29 patients (56 eyes), and group 3 consisted of 23 patients (44 eyes).
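The normality check and correlation analysis described above (Shapiro–Wilk deciding between Pearson's r and Spearman's rho for SpO2 versus an OCT parameter) can be sketched as follows; the SpO2 and RNFL values are hypothetical, not study data.

```python
# Sketch of the correlation analysis described above: Shapiro-Wilk normality
# check, then Pearson's r for normal data or Spearman's rho otherwise.
# SpO2 and RNFL values are hypothetical, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
spo2 = rng.uniform(82.0, 90.0, 19)                  # hypothetical SpO2 (%), 19 group-1 eyes
rnfl = 280.0 - 2.0 * spo2 + rng.normal(0, 2, 19)    # hypothetical RNFL thickness (um)

both_normal = (stats.shapiro(spo2).pvalue > 0.05
               and stats.shapiro(rnfl).pvalue > 0.05)
r, p = stats.pearsonr(spo2, rnfl) if both_normal else stats.spearmanr(spo2, rnfl)
print(f"{'Pearson r' if both_normal else 'Spearman rho'} = {r:.2f}, p = {p:.3f}")
```

The same pattern would be repeated per saturation group and per ETDRS subfield to produce the r/p pairs reported in the text.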
A statistically significant negative correlation was reported between SpO2 ≤ 90% and RNFL optic disc thickness (p = 0.005 in the superior RNFL of the optic disc; r = −0.60, p = 0.012 in the temporal RNFL of the optic disc). A decrease in retinal RNFL thickness was observed in some areas of the central retina in COVID-19 patients with SpO2 ≤ 90%. At the SpO2 ≤ 90% level, a decrease in macular GCL thickness was observed (r = 0.46, p = 0.047 in the GCL inner inferior ring (IIR)). There was a positive, statistically significant correlation between the GCL in the outer inferior ring (OIR) and SpO2 of 90–95%. As the SpO2 decreased, the FAZ area in the SCP statistically significantly increased. There was a statistically significant correlation between mean VD in the DCP and SpO2 ≤ 90%, and between the temporal area of VD in the DCP and SpO2 ≤ 90%. There was also a statistically significant correlation between foveal VD (vessel density) in the SCP and SpO2.

In this study, we present the correlations between blood saturation and ocular parameters based on changes in blood flow observed in OCTA examination in COVID-19 patients. This is intended to clarify the role of hypoxia in the metabolism and microvasculature of the retina and choroid.

Hypoxia and inflammation are linked at the molecular, cellular, and clinical levels. Factors that induce acute hypoxemia, such as SARS-CoV-2 disease, enhance various cytotoxic functions of neutrophils and may stimulate hyperinflammation. Animal models showed that exposure to low oxygen concentrations results in increased vascular permeability, accumulation of inflammatory cells, and increased serum cytokine levels. Therefore, hypoxia is not only a consequence of respiratory disease but also contributes significantly to progressive lung damage and failure of other organs [23].

OCTA has revolutionized ophthalmic clinical practice. It produces high-contrast images of the retinal blood flow, with sufficiently high resolution to show the location of individual capillaries in the retina.
OCTA can differentiate the SCP from the DCP and show how each plexus is affected in retinal vascular disease.

We observed significant thinning of the retina in some macular areas in patients with SpO2 equal to or lower than 90%. A statistically significant correlation was found between choroidal thickness (BMCSI) and SpO2: we reported significant thinning of the choroid in patients with SpO2 equal to or lower than 90%, whereas choroidal thickness correlated significantly negatively with SpO2 in patients with SpO2 of 90–95%.

The reported OCT features were only significant in the inner retina, in the GCC (p < 0.0001), but not in the outer retina. Thinning of the GCC on OCT may reflect the selective vulnerability of the inner retina to hypoxia and secondary peripapillary edema development, likely as a result of post-hypoxic inflammation.

We also observed significant thickening of the RNFL at the optic disc and significant thinning of the RNFL in some macular areas in patients with SpO2 equal to or lower than 90%. In a pig model of acute respiratory distress syndrome, RNFL thickness was increased and there was immunostaining for reactive oxygen species, HIF-1 alpha, and VEGF-A in retinal arterioles, suggestive of increased retinal vascular permeability and endothelial dysfunction.

We observed a significant increase in the size of the FAZ area in patients with lower values of SpO2. This area allows the most distinct vision because of the high cone density and the absence of blood vessels. The circulation is particularly vulnerable in the FAZ, as retinal blood vessels are absent in this region. Therefore, the retinal cones of the FAZ are completely dependent upon oxygen and nutrient delivery from the underlying choriocapillaris. The FAZ is therefore highly sensitive to ischemic events and, because of this, can act as an indicator of several pathological processes [35].
In our study, we observed significantly greater vessel density in some areas of the SCP, DCP, and CC in patients with SpO2 equal to or lower than 90%. The inner retina is supplied by the central retinal artery, but the outer, avascular retina, which consists entirely of photoreceptors and Müller cell processes, is supplied and nourished by the choroid.

The pO2 in the outer and inner retina behaves differently as well. The choroidal circulation does not seem to adapt to the metabolic needs of the outer retina, and its resistance is not considered to change during hypoxia, although there are not many studies on the subject. Choroidal blood flow is likely to increase as a result of increased arterial pressure during hypoxia, but the choroidal circulation is regulated to some extent during moderate pressure changes, so this is not a major effect. As a result, when PaO2 falls, there is no mechanism to compensate and transport more blood to the choroid. In contrast to the choroid, the retinal circulation does regulate in response to metabolic demand, and blood flow goes up during hypoxia.

It is sensible to ask whether hypoxia itself can result in neovascularization in the retina. In young animals, the retina shows elevated HIF-1 alpha and VEGF before the development of the retinal vasculature, and this is most probably the main stimulating factor for the normal development of the retinal circulation. The elimination of HIF-1 alpha and VEGF in astrocytes does not change this; therefore, other cells or other compensatory processes must be involved. In the hypoxic conditions of oxygen-induced retinopathy, HIF-1 alpha and VEGF are also vital to abnormal neovascularization. Several hours of hypoxia in rats led to the upregulation of HIF-1 alpha protein and VEGF expression.
Many researchers have suggested that the neovascularization in ischemic diseases such as diabetes or vein occlusion, which do not destroy the whole inner retina, may be caused by hypoxia, but retinal disease always involves a complex mixture of events besides hypoxia, so the role of hypoxia itself is rarely certain. The influence of hypoxia on retinal and choroidal microcirculation has been widely studied previously [39].

In an OCT examination of diabetic patients without clinical retinopathy (NoDR), significantly higher perfused capillary density (PCD) was noted compared to the healthy control group. The PCD was more sensitive than FAZ metrics for detecting a difference between diabetic patients and the healthy group. The diabetic patients with NPDR (non-proliferative diabetic retinopathy) and PDR (proliferative DR) had progressively decreased PCD.

Relative tissue hypoxia can be an early trigger in the pathogenesis of diabetic microvascular disease. Direct evidence of inner retinal hypoxia in diabetic cats without retinopathy has been observed using an intraretinal electrode. The tissue demand for oxygen is increased due to the need to accommodate increased levels of glucose. Relative hypoxia can occur as a result of reduced oxygen extraction from blood vessels by retinal tissues.

Another mechanism was proposed for increased retinal flow in well-controlled type 1 diabetics without retinopathy: a study reported the absence of retinal arterial constriction in response to a pressure stimulus, attributing it to a defect in the myogenic response of the smooth muscle cells of retinal arterioles in the setting of diabetes.

In our study, twenty-three patients were treated with continuous oxygen therapy for an average of 5 (4–10) days. The influence of hyperoxia on retinal and choroidal microvasculature has also been studied previously. When the arterial pO2 (oxygen tension) increases as a result of increased inspired O2, the changes in pO2 are opposite to those in hypoxia, which has been observed in several studies in various animal models. The amount of O2 transported by the choroid does not change much beyond the point of full hemoglobin saturation, but the increase in PC is quite essential because this is the driving force for O2 diffusion into the retina. If the increase in PC is large enough, O2 will diffuse to the inner retina. The value of PC will increase still further with hyperbaric oxygen. Therefore, hyperoxia and hyperbaric treatment for vascular occlusive disease have been suggested, but there is a misconception that hyperbaric O2 would be much better than 100% O2 at atmospheric pressure. Because the influence of hyperoxia on PC is large, the PO2 in the outer retina increases dramatically. The retinal circulation vessels constrict during hyperoxia. This tends to keep inner retinal PO2 close to levels during air-breathing, but the regulation is not perfect, because the diffusion of O2 from the choroid is unavoidable. The rise in inner retinal PO2 in animal models is considerably less than the increase in PO2 in the outer retina. Hypercapnia superimposed on hyperoxia can reduce or eliminate the hyperoxic vasoconstriction of the retinal circulation and lead to much larger increases in inner retinal PO2.

Significantly decreased SpO2 ≤ 90% was one of the indications for the implementation of tocilizumab (TOC), apart from elevated interleukin 6 (IL-6) levels > 100 pg/mL and the need for oxygen supplementation. In these indications, the effectiveness of TOC was the greatest. TOC was used in three patients in our study group. This medication is aimed at blocking the IL-6 proinflammatory pathway.
Despite the direct viral effect, the pathogenesis of COVID-19 includes an overproduction of cytokines.

Our study has several limitations. It could not be performed during the symptomatic, acute phase of COVID-19 due to the emergency conditions and risk of contagion. Patients were burdened with additional diseases such as hypertension or dyslipidemia, so we cannot exclude the effect of these diseases. We have no baseline OCT results in COVID-19 patients from before the disease, so we cannot make that comparison. The number of patients with SpO2 equal to or lower than 90% was small, because some patients died or had a comorbid vascular disease that excluded them from the study. Most patients in our group had SpO2 above 90%.

A strong point is a group consisting of selected patients without the influence of vascular diseases such as diabetes mellitus. Only hospitalized COVID-19 patients were included in the study, forming a significant number of COVID-19 patients examined at the same time point after discharge from the hospital. This is the first study to describe the correlations between individual OCT parameters and SpO2 and to explain the effect of general hypoxia on ocular vascularization.

Our study demonstrated the effect of systemic hypoxia due to bilateral SARS-CoV-2 pneumonia on ocular parameters based on OCTA examination. Further follow-up of these patients is needed for long-term evaluation of the retina, choroid, and optic nerve for the development of degenerative changes resulting from systemic hypoxia.

In addition to being highly digestible, lupin and chickpea beverages have anti-inflammatory and anti-carcinogenic potential, evaluated through the inhibition of metalloproteinase MMP-9. There is a strong demand for plant-based milk substitutes, which are often low in protein content (<1.5% w/v). Protein-rich pulse seeds and the right processing technologies make it possible to make relevant choices.
The major objective of this study was to assess the impact of processing on the nutritional characteristics of beverages with a high impact on health, in particular on digestibility and specific bioactivities. The results suggest that pulse beverages are as high in protein content (3.24% w/v for chickpea and 4.05% w/v for lupin) as cow’s milk. The anti-nutritional factor levels characteristic of pulses were considerably reduced by strategic processing. However, when present in small quantities, some of these anti-nutritional factors may have health benefits. Controlling the processing conditions plays a crucial role in this fine balance as a tool to take advantage of their health benefits. There is also evidence of protein hydrolysis during digestion.

Two batches per pulse beverage were produced, and this procedure was repeated three times to verify repeatability. Briefly, 150 g of dried seeds, previously soaked in water (1:3 w/v) for 16 h with two changes of water in the same proportion, were cooked for 30 min in a pressure cooker in 1.5 L of tap water. The cooked pulse seeds were drained and 1.5 L of fresh tap water was added. The milling step included a food processor at 20,500 rpm for 4 min, followed by colloid milling simulated by a mortar grinder at lab scale at 70 rpm for 45 min, at room temperature. Both chickpea and lupin beverages were sieved with a strainer; their particle diameters were determined in a previous study.

For the oral phase of the in vitro digestion, samples were incubated for 2 min at 37°C under continuous agitation in an overhead rotator. Simulated gastric fluid (SGF) with porcine pepsin was added to dilute the oral bolus 1:1 (v/v), and the pH was adjusted to 3 with 10 M HCl when necessary. The mixture was incubated at 37°C with agitation for 2 h. The gastric chyme was then diluted 1:1 (v/v) with simulated intestinal fluid (SIF), and bile salts (10 mM) and pancreatin (100 UI/mL of trypsin activity) were added, followed by adjustment to pH 7 (with 1 M NaOH).
Incubation at 37°C under agitation was stopped after 2 h with Pefabloc, a serine protease inhibitor (5 mM). An enzyme-blank tube was included in these trials, in which the 2 mL of pulse beverage was replaced by demineralized water. To obtain the soluble and insoluble fractions after in vitro digestion, the whole digesta was centrifuged at 6,000 × g for 10 min at 4°C. Chickpea and lupin beverages were thus subjected to a static in vitro digestion method. Bioaccessibility refers to the fraction of nutrients released during in vitro digestion that presumably becomes accessible (available) for absorption through the small intestine walls. Bioaccessibility should be distinguished from bioavailability, which is defined as the fraction of nutrients or food components that are efficiently digested in vivo, assimilated, and then absorbed by the body.

Protein profiles were analyzed in 17.5% (w/v) polyacrylamide gels, and the correction factor used to convert nitrogen to crude protein was 5.4. The total starch analysis was performed according to the Megazyme Total Starch Assay Procedure (K-TSTA), based on the AOAC official method 996.11 (2005), following the specific procedures for samples in which the starch is present in a soluble or suspended form. The analyses were carried out in duplicate and expressed in grams of starch per 100 mL of sample. D-glucose content was obtained by high-performance liquid chromatography (HPLC). Briefly, samples were centrifuged for 10 min and 500 μL of supernatant was collected. After dilution in H2SO4 (50 mM) (1:1 v/v), the samples were centrifuged to discard the precipitated protein and filtered under vacuum through a 0.20 μm-pore-size filter. Glucose was quantified in an HPLC system equipped with a refractive index detector (Waters 2414) and a Rezex™ ROA Organic Acid H+ (8%) column, at 65°C. Sulfuric acid (5 mM) was used as the mobile phase at 0.5 mL min–1. The analysis was performed in triplicate and the results are expressed in grams of glucose per 100 mL of sample.
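The bioaccessibility concept described here reduces to a simple ratio between the amount recovered in the soluble digesta and the total amount present. A minimal sketch of that calculation (the function name and the example amounts are illustrative, not taken from the study):

```python
def bioaccessibility_pct(soluble_amount: float, total_amount: float) -> float:
    """Bioaccessibility (%) = amount in the soluble digesta / total amount x 100."""
    if total_amount <= 0:
        raise ValueError("total amount must be positive")
    return 100.0 * soluble_amount / total_amount

# e.g., 0.42 mg of a mineral recovered in the soluble fraction out of 1.2 mg total
print(round(bioaccessibility_pct(0.42, 1.2), 1))  # 35.0
```

The same ratio applies to any nutrient quantified in both the soluble fraction and the whole beverage, provided the two amounts are in the same units.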
Total carbohydrate content was determined in triplicate according to Dubois’ method. The ash content was determined gravimetrically by incineration of triplicates at 550°C in a muffle furnace using the AOAC 923.03 method. Dry matter was also determined gravimetrically by drying at 105°C in a forced-air oven to constant weight, according to the AOAC 934.01 method. The energy value of each pulse beverage was calculated considering the conversion factors for protein, carbohydrates, and fat.

The minerals’ profile was evaluated by inductively coupled plasma (ICP) optical emission spectrometry based on the AOAC 984.27 method. Briefly, samples were digested with HNO3 (65%) at 15 min/45°C, 15 min/80°C, and 60 min/105°C. After cooling, distilled water was added up to 50 mL and the solution was left to settle. Finally, the clear supernatant was used for the ICP analysis. Eleven elements were quantified in triplicate. Results are expressed in milligrams of mineral element per 100 mL of sample.

The phytic acid content of the beverages was determined in triplicate using Gao et al.’s method. Briefly, samples were centrifuged for 20 min at 10°C in a Beckman Coulter™ Allegra™ 25R centrifuge until clear supernatants were obtained. Supernatants were collected for color development. A calibration curve was previously obtained from a 25 mg/mL phytic acid solution, varying from 0 to 3.5 mg/mL. The modified Wade reagent [0.03% (w/v) FeCl3⋅6H2O + 0.3% (w/v) sulfosalicylic acid] was added to sample volumes in a proportion of 3:1 (v/v), thoroughly mixed on a vortex, and centrifuged at 1,000 × g for 10 min at 10°C in a Himac CT15RE centrifuge. The absorbance was measured at 500 nm using a Synergy HT spectrophotometer (Bio-TEK). Final results are expressed in grams of phytic acid per 100 mL of beverage and in mg of phytic acid per g of beverage.
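The energy calculation mentioned above is a weighted sum of the macronutrient contents. The excerpt does not show the exact factors used, so this sketch assumes the standard values (protein 4 kcal/g, carbohydrate 4 kcal/g, fat 9 kcal/g); the carbohydrate and fat contents in the example are hypothetical:

```python
# Standard energy conversion factors (kcal per gram); assumed, not quoted from the paper.
FACTORS_KCAL_PER_G = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}

def energy_kcal_per_100ml(protein: float, carbohydrate: float, fat: float) -> float:
    """Energy value (kcal/100 mL) from macronutrient contents given in g/100 mL."""
    grams = {"protein": protein, "carbohydrate": carbohydrate, "fat": fat}
    return sum(FACTORS_KCAL_PER_G[name] * g for name, g in grams.items())

# e.g., lupin beverage: 4.05 g protein plus hypothetical 1.0 g carbohydrate, 0.5 g fat
print(round(energy_kcal_per_100ml(4.05, 1.0, 0.5), 2))  # 24.7
```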
Lectin (hemagglutination) activity of sample protein extracts was measured according to the method described by Ribeiro et al. Briefly, protein extracts, including those from the digesta, were first desalted through PD-10 columns previously equilibrated in saline solution (0.9% w/v NaCl), followed by ultrafiltration of all samples at 1,400 × g and 4°C, in order to wash them with saline and reduce them to volumes containing 50 and 100 μg of protein. The last centrifugation was carried out with a saline solution containing 2 mM CaCl2 and 2 mM MgCl2. For the hemagglutination activity, 5 mL of rabbit erythrocytes was washed three times in saline solution by centrifugation and then incubated with trypsin, at a final concentration of 0.1% (w/v) in saline solution, with shaking at 120 rpm for 1 h at 37°C. The suspension of 4% (v/v) trypsinized erythrocytes was stored at 4°C and used for the hemagglutination activity assays. Hemagglutination measurements required serial dilution (1:2) of the protein samples in a 96-well microplate. The erythrocyte suspension (50–70 μL) was then added and the microplate was incubated for 30 min at 37°C before visual evaluation. Both positive (Con-A lectin at 0.5 mg/mL) and negative controls were prepared. One H.U. (Hemagglutination Unit) is defined as the minimal protein concentration that induces erythrocyte agglutination.

The human colon adenocarcinoma cell line HT29 (ECACC 85061109), obtained from a 44-year-old Caucasian female, was used. HT29 cells were maintained according to Lima et al. Cells were seeded in 24-well plates and allowed to reach 80% confluence. Each well was subsequently supplemented with fresh medium containing the water-soluble protein extracts of both beverages at a concentration of 100 μg mL–1.
The invaded area after 48 h was calculated for each treatment and compared to the initial area at 0 h. For cell migration analysis, the wound healing assay was performed according to Lima and coworkers, with HT29 cells seeded as described above. For the proliferation assay, HT29 cells were seeded in 96-well plates (2 × 10⁴ cells/well), the soluble protein extracts of both beverages were added at a concentration of 100 μg mL–1 as described above, and the cells were incubated for 48 h. The extracellular medium was then collected, the wells were washed with phosphate-buffered saline (PBS) to remove unattached cells, and cell growth and viability were determined using the MTT assay as described in an earlier study.

MMP-9 gelatinolytic activities in the culture media after exposure to both beverages for 48 h were determined using the DQ (dye-quenched) gelatin assay, as described by Lima et al. The digesta soluble fractions were then subjected to the DQ gelatinolytic activity assay to test their inhibitory activity upon MMP-9. Briefly, the fluorogenic substrate DQ-gelatin was acquired from Invitrogen and dissolved in water at 1 mg/mL. All solutions and dilutions were prepared in assay buffer. A 96-well micro-assay plate was used. Each well was loaded with 0.1 mM MMP-9 (Sigma), to which 100 μg/mL of total protein fraction from each legume beverage, and also their digesta, was added, and the plate was incubated for 1 h at 37°C. Subsequently, DQ-gelatin was added to each well and the plate was left to incubate for another 1 h. Fluorescence levels were measured (ex. 485 nm/em. 530 nm). In each experiment, positive (no protein fraction) and negative (no enzyme) controls were included for all samples to correct for possible proteolytic activities present in the protein samples. All data were corrected by subtracting the corresponding negative controls.
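The control scheme in the DQ-gelatin assay implies a standard percent-inhibition calculation: subtract the matching no-enzyme (negative) control from each reading, then compare against the no-inhibitor (positive) control. A minimal sketch with hypothetical fluorescence readings (the function name and numbers are illustrative):

```python
def mmp9_inhibition_pct(sample_fluor: float, sample_neg: float,
                        positive_fluor: float, positive_neg: float) -> float:
    """Percent inhibition of MMP-9 in a DQ-gelatin assay.

    Each reading is blank-corrected by subtracting its no-enzyme (negative)
    control, then the residual activity is compared with the positive control
    (enzyme without inhibitor).
    """
    corrected_sample = sample_fluor - sample_neg
    corrected_positive = positive_fluor - positive_neg
    if corrected_positive <= 0:
        raise ValueError("positive control must show measurable activity")
    return 100.0 * (1.0 - corrected_sample / corrected_positive)

# Hypothetical readings in arbitrary fluorescence units
print(round(mmp9_inhibition_pct(220, 20, 1020, 20), 1))  # 80.0
```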
Triplicates were used for each sample. Lupin and chickpea beverages, as well as their respective digesta, were also tested for lectin activity, mineral content, bioaccessibility estimation, and inhibitory activity on MMP-9 (four levels), with at least three replicates per assay. A completely randomized design (CRD) with two factors (beverage and digestion) was used, given the low variability in beverage production and digestion. The two pulse beverages were subjected to thirteen different analytical procedures (13 levels), yielding nine physicochemical results, a bioaccessibility value, the inhibitory activity on MMP-9, and the evaluation of cancer cell migration and proliferation. All statistical processing was carried out using SPSS Statistics. One-way analysis of variance (ANOVA) was used to assess significant differences between samples at a 95% significance level (p < 0.05), and multiple comparisons were performed with the Tukey HSD test.

Evaluation of the nutritional composition revealed that lupin and chickpea beverages are important sources of protein, with the lupin beverage showing a significantly higher value (4.05% w/v) than the chickpea beverage (3.24% w/v). The significantly higher starch content of the chickpea beverage (1.391 g/100 mL) compared with the lupin beverage (0.008 g/100 mL) was expected from the known starch content of these pulses: 10 g of dried seeds (in 100 mL) contain 4.5 g of starch in chickpea and 0.7 g in lupin. After in vitro digestion of both beverages, bioaccessibility was evaluated for the majority of the mineral elements. In SDS-PAGE, the lupin beverage presented a broader polypeptide profile, with more representative molecular weights, than the chickpea beverage (C), while the digesta showed polypeptides with molecular weights under 50 kDa mixed with the added commercial enzymes.
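The statistical pipeline described here (one-way ANOVA at p < 0.05 followed by Tukey HSD multiple comparisons) was run in SPSS; an equivalent sketch in Python, using made-up triplicate protein values for illustration, could look like this:

```python
from scipy import stats

# Hypothetical triplicate protein measurements (g/100 mL) for three samples
lupin    = [4.01, 4.07, 4.08]
chickpea = [3.20, 3.25, 3.27]
cow_milk = [3.30, 3.36, 3.33]

# One-way ANOVA: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(lupin, chickpea, cow_milk)
print(p_value < 0.05)  # True: at least one group mean differs

# Tukey HSD identifies which specific pairs differ at the 95% level
result = stats.tukey_hsd(lupin, chickpea, cow_milk)
print(result.pvalue[0, 1] < 0.05)  # True: lupin vs chickpea differ
```

`scipy.stats.tukey_hsd` returns a full pairwise p-value matrix, mirroring the SPSS post-hoc table.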
Comparing the digesta and the enzyme-blank profiles, the digesta exhibited some low-molecular-weight peptides (<10 kDa) and several polypeptides under 25 kDa that are distinct from the blank (B). This reveals the high degree of digestion undergone by the beverage proteins as a result of the enzymes used. Protein digestion by this static in vitro method has also been reported previously. The electrophoretic profile of the enzyme control (B) matches that of the beverage digesta, evidencing all the enzymes used during the protocol, as expected. The α-amylase used showed a molecular weight of around 58–62 kDa, the pepsin 36 kDa, and pancreatin comprises a group of several enzymes with molecular weights from 13 to 64 kDa.

To understand whether the beverage processing conditions preserved the anti-inflammatory and anti-cancer potential of both pulse seeds, particularly their inhibitory ability against gelatinase MMP-9, a matrix metalloproteinase related to inflammation and cancer disease, we set out to test the effects of both beverages on MMP-9 activity, on cancer HT-29 cell migration using the wound healing assay, and on cancer HT-29 cell proliferation using the MTT assay. Both beverages were able to significantly (p < 0.05) reduce MMP-9 activity and HT-29 cell migration, while not reducing HT-29 cell proliferation in a significant manner.
Although both beverages present similar results for MMP-9 activity and cell proliferation, the reduction in cell migration was significantly greater with the lupin beverage than with the chickpea beverage (p < 0.05). Overall, both beverages presented very significant inhibitory activities on commercial MMP-9, which were considerably higher after in vitro digestion, particularly for the lupin beverage, with a 96% reduction in MMP-9 activity, as opposed to the chickpea beverage, for which a 48% inhibition was obtained (p < 0.05).

Mineral contents were evaluated against the dietary reference intakes (DRIs) established for adults. The pulse beverages obtained in this study would not qualify for a nutrition claim for minerals, except for manganese (DRI > 7.5%). Comparable in vitro digestion and mineral bioaccessibility evaluations of cow milk and soy milk should be used in further studies to compare accurately the differences between these two pulse-based beverages. The possible nutritional claims that could be used for both pulse beverages are as follows: “with no added sugar,” “fat-free,” “very low sodium,” and “source of manganese”; for the lupin beverage itself, one more specific claim is possible, “source of protein,” which also makes it of interest as an alternative in cases of dairy protein intolerance. Furthermore, the studied anti-nutritional compounds (phytic acid and lectins), naturally present in pulse seeds, are highly reduced through beverage processing and are not expected to hinder mineral bioavailability or cause intestinal malabsorption events.

However, the mineral content of these pulse beverages, especially calcium, magnesium, and phosphorus, is much lower compared with milk, as mentioned above; to obtain a sustainable vegetable alternative to milk using a clean-label approach, algae such as Laminaria ochroleuca seaweed could be added to increase the mineral content.
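The claim screening implied above reduces to checking each mineral’s contribution per 100 mL against the threshold quoted for manganese (DRI > 7.5%). A minimal sketch; the DRI value and contents used in the example are illustrative, not the study’s measured figures:

```python
def pct_dri(content_mg_per_100ml: float, dri_mg: float) -> float:
    """Percentage of the daily reference intake supplied by 100 mL of beverage."""
    return 100.0 * content_mg_per_100ml / dri_mg

def qualifies_for_claim(content_mg_per_100ml: float, dri_mg: float,
                        threshold_pct: float = 7.5) -> bool:
    """True when 100 mL supplies more than the threshold share of the DRI."""
    return pct_dri(content_mg_per_100ml, dri_mg) > threshold_pct

# e.g., hypothetical 0.20 mg manganese per 100 mL against a 2.3 mg/day DRI (~8.7%)
print(qualifies_for_claim(0.20, 2.3))   # True
print(qualifies_for_claim(0.10, 2.3))   # False (~4.3%)
```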
Several studies, including Fradinho et al.’s, have confirmed this approach. In addition to being highly digestible and nutritious, lupin and chickpea beverages showed specific bioactivities of MMP-9 inhibition, as well as a reduction in the migration of colon cancer cells. Furthermore, the MMP-9 inhibitory activity was resistant to the digestion process and was even significantly enhanced by it, suggesting strong potential as a functional food for effective preventive diets against inflammatory and cancer diseases, especially those related to the digestive system.

The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

IS, ARa, and RF: conceptualization. IS, ARa, AL, and ARi: methodology. CD, MN, ARi, JM, and AL: validation. CD, JM, AL, and MN: formal analysis. CD, JM, RA, CM, and ARi: investigation. IS, RF, RA, and CM: resources. CD and AL: writing—original draft preparation. CD, ARi, AL, IS, RF, RA, CM, ARa, and MN: writing—review and editing. CD and AL: visualization. IS and RF: supervision. IS: project administration and funding acquisition. All authors have read and agreed to the published version of the manuscript.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers.
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

This Frontiers in Veterinary Science Women in Veterinary Neurology and Neurosurgery collection of scientific articles presents the innovative work of female neurologists working in the UK, Europe, and North America. The series comprises 13 articles showcasing original research and case reports in various fields of Veterinary Neurology and Neurosurgery, including advances in diagnostic imaging techniques and novel neurosurgical procedures.

Diffusion MRI is a specific sequence that detects and quantifies water diffusivity, the molecular motion (Brownian movement) of water, which represents an intrinsic feature of tissues. Diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) sequences are routinely used in the diagnostic investigation of suspected cerebrovascular accidents (CVA) in people. Boudreau et al. describe the DWI MRI findings of spontaneous canine CVA in relation to the time of clinical onset, the DWI type (EPI vs. non-EPI), and the presence or absence of a haemorrhagic component of the lesion. The results of this study help to inform the appropriate clinical interpretation of these sequences by veterinary neurologists and neuroradiologists.

Remelli et al. highlight the usefulness of low- and high-field MRI in complementing clinical and laboratory findings in the diagnosis of SRMA. In this retrospective study, including 70 dogs with SRMA, MRI showed abnormalities in 98.6% of dogs, with the majority (87.1%) being MRI features suggestive of meningeal inflammation. T1W FAT-SAT sequences were particularly useful in detecting meningeal enhancement.
In addition, contrast enhancement of the synovium of the cervical articular facets and of the epaxial muscles was detected in 48.6% of dogs. Steroid-responsive meningitis-arteritis (SRMA) is an inflammatory disorder of probable immune-mediated origin, commonly recognized in dogs. Laws et al. describe the clinical and diagnostic findings, treatment, and outcomes of dogs with SEE that were presented to five referral hospitals in the UK. This study provides detailed information on the presenting clinical signs, MRI findings, laboratory investigation results, treatment, and long-term outcomes. The results of this study inform client communication and clinical decisions on canine SEE management. Spinal epidural empyema (SEE) is characterized by the “accumulation of purulent material in the epidural space of the vertebral canal”. Goffart et al. show that end-on fluoroscopy, with or without inversion, is a highly accurate technique for the intraoperative evaluation of the position of bicortically placed Steinmann pins in the canine thoracolumbar vertebral column. Three-dimensionally printed patient-specific drill guides have been used to improve the accuracy of implant placement in the canine spine. Together, the collection covers disorders of the central and peripheral nervous system. The author confirms being the sole contributor of this work and has approved it for publication.

Osteoarthritis (OA) is an immensely pervasive joint disorder—typically concerning large weight-bearing joints—affecting over 30 million people in the United States, with this number predicted to reach 67 million by 2030.
Recently, various molecular targets, such as interleukin-1 (IL-1), transforming growth factor-β (TGF-β), and matrix metalloproteinases (MMPs), have been reported to be linked to the etiopathogenesis of OA. Over the last decade, there has been heightened interest in the use of biologics for regenerative medicine applications, specifically for musculoskeletal disorders, including autologous biologics such as platelet-rich plasma (PRP), bone marrow concentrate, and adipose tissue, and allogeneic biologics, such as perinatal tissue.

In this Editorial, I will focus on a recently published clinical trial by Natali et al. of intra-articular amniotic suspension allograft (ASA; homogenized amniotic membrane suspended in physiological solution). These patients were assessed at baseline (prior to injection) and at 3, 6, and 12 months post-injection using the International Knee Documentation Committee (IKDC) and Visual Analogue Scale (VAS) scores. No severe adverse events were reported throughout the duration of the study. Statistically significant improvements (p < 0.05) were observed for both IKDC and VAS at all follow-up time points compared with the baseline. Interestingly, both IKDC and VAS scores regressed by 6 months, indicating a lack of long-lasting effect of ASA; however, at the 12-month follow-up, both scores indicated significant improvement compared to the baseline. In spite of this, the results from this study indicated that a single intra-articular injection of ASA is safe and showed positive clinical outcomes, which is in accordance with other published clinical trials utilizing ASA for the treatment of knee OA.
In addition to the aforementioned limitations, one concern, not limited to this study, is the lack of consistency in the composition of similarly named biologics. For instance, this study used the term ASA and defined it as a ‘homogenized amniotic membrane suspended in physiological solution’, whereas previously published studies described an ‘amniotic suspension allograft that contains human amniotic membrane and human amniotic fluid-derived cells’, with no description of the formulation protocol. Thus, I believe it is essential to maintain uniformity in the composition of similarly named biologics and to describe the formulation protocol, to allow the repeatability and reproducibility of the results of prospective trials assessing the safety and efficacy of these biologics throughout the world, in order to ultimately justify their clinical usage.

In summary, despite limitations, I applaud the efforts of the authors, as this study positively adds to the current literature suggesting that the administration of amniotic tissue, including ASA, is safe, and it justifies the need for a high-powered, prospective, multi-center, double-blind, randomized controlled trial with a longer follow-up duration to further establish the efficacy of ASA in alleviating symptoms associated with knee OA, thereby possibly providing a new minimally invasive therapeutic alternative for patients suffering from knee OA. As of 13 October 2022, there are three ongoing clinical trials registered on clinicaltrials.gov; these trials are summarized in the original publication.

Although the dominant view in the literature suggests that work-related anxiety experienced by employees affects their behavior and performance, little research has focused on how and when leaders’ workplace anxiety affects their followers’ job performance. Drawing from Emotions as Social Information (EASI) theory, we propose dual mechanisms of cognitive interference and emotional exhaustion to explain the relationship between leader workplace anxiety and subordinate job performance.
Specifically, cognitive interference is the mechanism that best explains the link between leader workplace anxiety and follower task performance, while emotional exhaustion is the mechanism that best explains the link between leader workplace anxiety and follower contextual performance. Additionally, we examine how follower epistemic motivation serves as a boundary condition for the effect of leader anxiety on follower performance outcomes. Results from a 2-wave study of 228 leader-follower dyads in a high-tech company mostly supported our theoretical model. We conclude the study with a discussion of the theoretical and practical implications of our findings. In the current “age of anxiety,” the dual influences of unfavorable environmental factors and individual psychological characteristics cause many people to experience varying degrees of anxiety in the workplace . WorkplaConsidering the harmful consequences of workplace anxiety on employee attitudes and behaviors, it is not surprising that a large number of articles have focused on the negative effects of workplace anxiety . DespiteSecond, prior anxiety research has focused almost exclusively on the prediction of in-role behaviors (task performance) that reflect formal job expectations. However, the effect of anxiety on extra-role behaviors beyond the job description has been overlooked . Task peAccordingly, our study aims to better understand how and when leader workplace anxiety influences subordinates’ task and contextual performance. We draw on the Emotions as Social Information (EASI) model to examiOur research makes three main contributions to the literature. First, we advance the anxiety literature by deriving a conceptual framework from past theory and research to describe the mechanisms by which leader workplace anxiety affects follower performance. 
Based on the Emotion-as-Social-Information theory, we tested how follower emotional exhaustion and cognitive interference mediate the relationship between leader anxiety and employee job outcomes. Second, we extend the anxiety literature by simultaneously considering the influence of workplace anxiety on subordinates’ task performance and contextual performance. Because previous studies have found that task and contextual performance involve different behavior patterns and have unique antecedents , consideAnxiety is defined as the tendency to experience tension and worry with regard to the appraisal of threatening situations . SpielbeIn this study, we focus on leader workplace anxiety for several reasons. First, leaders not only need to complete their work tasks but also undertake important aspects of people management such as developing the relationship with subordinates or even handling complex relationships between multiple members of the organization at different levels of the hierarchy . Second,via two distinct routes: inferential processes and emotional reactions. Inferential processes refer to an emotional observer’s inference about the emotion expressor’s true emotions and intentions theory provides a comprehensive framework to understand how emotions are interpreted and used by those who perceive them . The EAStentions . These ptentions . In thisEmotional reactions are the processes through which emotions expressed by one individual influence the emotions of another individual who observes them . Unlike Drawing on the EASI model, the emotional expressions of others can elicit the cognitive processes of observers, which may subsequently affect the observers’ behavior . As emotWe expect that leader workplace anxiety engages followers’ cognitive processes and interferes with their ability to process immediate events, thereby reducing their job performance. 
High job performance requires sustained effort over extended periods, requiring employees to mobilize cognitive resources such as high levels of attention and focus. This means that any kind of cognitive interference may divert employees’ attention from their current job task, resulting in lower task performance. At the same time, cognitive interference also exhausts a number of personal resources, including time, energy, and effort . This caHypothesis 1a: Leader workplace anxiety has a negative indirect effect on followers’ (a) task performance and (b) contextual performance via follower cognitive interference. Specifically, leader workplace anxiety is positively related to follower cognitive interference, which is negatively related to task performance and contextual performance.Emotion expression can also affect the behavior patterns of observers by triggering their emotional reactions, and then exerting interpersonal influence in the organization . When leHypothesis 1b: Leader workplace anxiety has a negative indirect effect on follower (a) task performance and (b) contextual performance via follower emotional exhaustion. Leader workplace anxiety is positively related to follower emotional exhaustion, which is negatively related to task performance and contextual performance.Although the inferential processes and emotional reactions described above can lead to lower employee performance, it is important to determine when each process is likely to produce negative work outcomes. According to the EASI framework, the interpersonal effects of emotional expression in the work environment depend on the observer’s motivation and ability to process the information conveyed by these emotions, also known as epistemic motivation . EpistemSpecifically, individuals with high epistemic motivation were more likely to view their immediate emotional responses to expressed emotion as inaccurate or irrelevant . 
Given these findings, when followers experience cognitive interference caused by their leader’s expression of anxiety, we expect that the effect of such cognitive interference on task performance depends on each follower’s level of epistemic motivation. Employees with high epistemic motivation will be more likely to reflect on other people’s emotions and engage in deeper analysis and information processing of the root causes of the leader’s anxiety.

Hypothesis 2a: Follower epistemic motivation will moderate the negative relationship between cognitive interference and task performance, such that this negative relationship is weaker when epistemic motivation is higher rather than lower.

We also expect that epistemic motivation will change the relationship between the emotional exhaustion felt by an employee and her contextual performance. The EASI theory posits that the interpersonal effects of emotional expressions depend on the observer’s ability to process and interpret the information conveyed by these expressions. The shallower the epistemic motivation, the stronger the emotional reaction an observer will have to others’ displays of emotion. Emotional exhaustion depletes an individual’s resources, such as energy, optimism, and self-efficacy.

Hypothesis 2b: Follower epistemic motivation will moderate the negative relationship between emotional exhaustion and contextual performance, such that this negative relationship is weaker when epistemic motivation is higher rather than lower.

In the above sections, we have proposed that employees’ epistemic motivation serves as a moderator affecting the two mediation processes that link leader anxiety to subordinate job performance. Our model predicts that in leader-follower interactions, workplace anxiety displayed by leaders can reduce contextual performance and task performance through emotional responses and cognitive processes.
We predict that followers’ epistemic motivation levels are a boundary condition on this indirect effect. Employees with high epistemic motivation rely less on emotional states and instead process information more deliberately and systematically to guide their behavior. When they experience cognitive interference due to the leader’s expression of workplace anxiety, they will interpret this information as a signal that leaders are dissatisfied with their work results and will work harder to improve task performance.In contrast, employees with low epistemic motivation tend to process information in a fast, relaxed, and heuristic manner. They respond to displays of emotion with their own emotions. Therefore, when they experience emotional exhaustion due to the leader’s expression of workplace anxiety, individuals with low epistemic motivation further amplify the panic and stress conveyed by their leader. This emotional exhaustion causes employees to be self-critical and deplHypothesis 3a: Follower epistemic motivation will moderate the indirect effect of leader workplace anxiety on follower task performance via follower cognitive interference, such that this indirect effect will be weaker when epistemic motivation is high than when it is low.Hypothesis 3b: Follower epistemic motivation will moderate the indirect effect of leader workplace anxiety on follower contextual performance via follower emotional exhaustion, such that this indirect effect will be weaker when epistemic motivation is high than when it is low.We collected matched supervisor-subordinate dyadic data at two time points from several high-tech enterprises located in northern China. The authors contacted each company’s human resource (HR) director and assisted the HR department with organizing the participants. Each company’s HR department helped us randomly invite dyads with direct supervisor-subordinate relationships to participate in our research. 
Participants were informed that we were interested in their true feelings and behaviors at work to diagnose current issues within the company and that they could quit at any time. In total, we successfully solicited 279 supervisor-subordinate dyads to participate.Before participants filled out the questionnaire, we assured them that their answers would be kept confidential and used only for research purposes. Surveys were coded before distribution and were distributed to project team leaders and members. We collected time-lagged data at two different points to alleviate common method bias.At time 1, we asked 279 team leaders to rate their workplace anxiety and subordinates to rate their emotional exhaustion and cognitive interference. In addition, we asked supervisors and subordinates to report their demographic information including gender, age, education, and supervisor-subordinate tenure. In total, we received 256 pairs of supervisor-subordinate dyadic data at the first time point, giving a response rate of 92%.SD = 8.35), and 156 subordinates (68%) held an undergraduate or graduate university degree. The average supervisor-subordinate tenure was 1.81 years (SD = 1.60). Following the procedures recommended by At Time 2 (1 month later), the 256 subordinate members who participated in the Time 1 survey assessed their task performance, contextual performance and epistemic motivation. This time, we received a total of 234 responses, for a response rate of 91%. After matching the superior-subordinate data we obtained a final sample of 228 supervisor-subordinate pairs. In the final sample, 128 subordinates (56%) were male. 
Their average age was 32.46 years old. All measures were rated on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). We followed … (α = 0.933). We adapted the eight-item scale from … (α = 0.93). Using an eight-item scale from … (α = 0.89). Cognitive interference was assessed with the five-item subscale of the Cognitive Interference Questionnaire developed by … (α = 0.82). We employed the eleven-item scale developed by … (α = 0.97). We utilized the fourteen-item scale developed by … (α = 0.86). Task performance was assessed with the ten-item scale developed by … We controlled for the gender of the participants because previous studies have found that men and women may experience different levels of workplace anxiety for several reasons. First, biological factors such as genetic predisposition and hormonal influences may predispose women to experience higher levels of anxiety in different workplace contexts. Second, … To investigate the discriminant validity of focal variables, we performed a series of confirmatory factor analyses with all items as indicators. Fit indices for a six-factor model including workplace anxiety, emotional exhaustion, cognitive interference, epistemic motivation, task performance, and contextual performance were satisfactory: χ2 = 1939.82, p < 0.01, Comparative Fit Index = 0.91, Root Mean Square Error of Approximation = 0.05, Standardized Root Mean Square Residual = 0.07. Alternative models that combined cognitive interference and emotional exhaustion, as well as contextual performance and task performance, did not improve the fit compared to the six-factor model. Therefore, the six-factor model was retained for hypothesis testing. Leader workplace anxiety was significantly and positively correlated with follower cognitive interference (r = 0.240, p < 0.01) and emotional exhaustion. Follower cognitive interference was significantly and negatively correlated with task performance and contextual performance. Emotional exhaustion was also negatively correlated with task performance and contextual performance. We first adopted hierarchical regression to test the hypothesized effects.
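The reliabilities reported above are Cronbach's α coefficients. For reference, α can be computed from an item-score matrix as in the sketch below; the response data and the `cronbach_alpha` helper name are illustrative assumptions, not taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses (5 respondents x 4 items)
scores = np.array([
    [5, 6, 5, 6],
    [2, 3, 2, 2],
    [6, 7, 6, 7],
    [3, 3, 4, 3],
    [4, 5, 4, 5],
])
print(round(cronbach_alpha(scores), 3))
```

Values near 0.9 and above, like those reported for the scales in this section, indicate high internal consistency.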
Hypothesis 1a proposes that leader workplace anxiety indirectly affects follower task and contextual performance via cognitive interference. As shown in …, leader workplace anxiety was positively related to follower cognitive interference (b = 0.159, p < 0.001), which was negatively related to task performance and contextual performance. As reflected in …, the indirect effect of leader workplace anxiety on task performance through cognitive interference was significant (b = –0.064, 95% CI …). However, the indirect effect of leader workplace anxiety on contextual performance through cognitive interference was not significant (b = –0.004, 95% CI …). Thus, H1a was only supported for task performance. Hypothesis 1b predicts that leader workplace anxiety indirectly affects follower task and contextual performance via emotional exhaustion. In the parallel mediation model, leader workplace anxiety was positively related to follower emotional exhaustion, which was negatively related to follower task performance and contextual performance. Similarly, the bootstrap analysis showed that the indirect effect of workplace anxiety on follower contextual performance through emotional exhaustion was significant (b = –0.056, 95% CI …). However, the specific indirect effect of leader workplace anxiety on follower task performance through emotional exhaustion was not significant (b = –0.007, 95% CI …). Thus, H1b was only supported for contextual performance. Hypothesis 2a predicts that follower epistemic motivation will moderate the relationship between cognitive interference and task performance, such that this negative relationship is weaker when follower epistemic motivation is higher rather than lower. The interaction was significant (b = 0.104, s.e. = 0.029, p < 0.01). Simple slope tests and the interaction plot depicted in … (simple slope = –0.642, p < 0.001); when epistemic motivation was low (1 SD below the mean), the negative relationship between followers’ cognitive interference and task performance was stronger. Although cognitive interference was negatively related to task performance in the case of both high and low epistemic motivation, the relationship was stronger when epistemic motivation was low. The corresponding interaction for emotional exhaustion was not significant (b = 0.009, s.e. = 0.029, ns).
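The indirect effects reported above are evaluated with bootstrapped confidence intervals. A minimal sketch of the product-of-coefficients (a × b) estimate with a percentile bootstrap is shown below on simulated data; the variable names and coefficients are assumptions for illustration, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Simulated X -> M -> Y chain (illustrative): X = leader anxiety,
# M = cognitive interference, Y = task performance.
X = rng.normal(size=n)
M = 0.4 * X + rng.normal(size=n)
Y = -0.5 * M + 0.1 * X + rng.normal(size=n)

def ols(columns, y):
    """OLS coefficients of y on the given predictor columns (plus intercept)."""
    A = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(A, y, rcond=None)[0]

def indirect_effect(X, M, Y):
    a = ols([X], M)[1]        # path a: X -> M
    b = ols([X, M], Y)[2]     # path b: M -> Y controlling for X
    return a * b

# Percentile bootstrap 95% CI for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)   # resample dyads with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"a*b = {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes zero corresponds to the "significant indirect effect" reading used in the text.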
Hypothesis 2b proposes that epistemic motivation will moderate the relationship between follower emotional exhaustion and contextual performance, such that this negative relationship is weaker when epistemic motivation is higher rather than lower. Results presented in … did not support this prediction; thus, Hypothesis 2b was not supported. Hypotheses 3a and 3b propose indirect effects which are moderated by epistemic motivation. As shown in …, we found … (b = –0.023, 95% CI = […]) and lower levels of epistemic motivation. Using a directional index of moderated mediation test, we found … or lower levels of epistemic motivation. Thus, our results support Hypothesis 3a, but not 3b. Drawing on the Emotion-as-Social-Information theory, our study investigated how leader workplace anxiety affects employee task performance and contextual performance through two pathways: emotional exhaustion and cognitive interference. We also found that employee epistemic motivation is a key boundary condition in this indirect relationship. Our findings offer implications about how leader anxiety can affect subordinate performance. First, we extend the job performance literature by examining the differential influence mechanisms of leader anxiety on employee performance. Previous research on the impact of anxiety on performance has mainly studied performance as a one-dimensional construct, with few studies suggesting that anxiety might have different effects on different dimensions of performance. Our research showed that anxiety affects different types of performance through different mechanisms. While cognitive interference mediated the effect of leader anxiety on task performance, emotional exhaustion mediated the effect of leader anxiety on contextual performance. Second, we respond to calls for a fine-tuned framework to explore the connection between leader anxiety and follower behaviors. … Third, our study extends EASI theory in several important ways.
Specifically, the EASI framework provides a mechanism to explain how leader emotions affect follower behaviors and introduces the social function view of emotions into the leadership field. We apply EASI theory to an organizational setting and show how its predictions apply to the effect of managers’ emotional displays on their subordinates. Specifically, we found that leader anxiety can affect subordinates’ job performance by triggering both their inferential processes and emotional reactions. These findings help to generalize the principles of EASI theory to the ongoing leader-follower relationships that occur in actual organizations. Furthermore, we contribute to the literature on the EASI model by examining the effect of anxiety, a discrete emotion universally experienced in the workplace, on individual behavior. Previous studies using this model have examined the effects of discrete emotions such as anger, disappointment, … Our findings also offer insights into managerial practice. First, employee performance is not only affected by the employee’s own anxiety, but also by their leader’s anxiety. Therefore, leaders need to be aware that their emotional displays may have a considerable impact on their followers’ behaviors. More specifically, we suggest leaders should learn to adjust their anxiety in the workplace and try to avoid showing excessive anxiety in front of subordinates. To alleviate the anxiety of team leaders and reduce the impact of leader anxiety on employees’ work results, organizations should reduce the work pressure of leaders and guide them to effectively use their rest time to adjust their emotions. Second, … Despite the theoretical contributions and practical implications discussed above, our research is not without limitations, which provide avenues for future work. First, all variables we measured were evaluated by a survey, which may lead to a certain degree of common method variance.
To alle… Second, our study only explored how subordinates’ epistemic motivations act as boundary conditions affecting the relationship between leader workplace anxiety and subordinate job performance. Therefore, future research could continue to explore other possible contextual factors, such as leaders’ emotion regulation strategies, employees’ personal traits, the quality of leader-member exchanges, and organizational climate. For example, subordinates who are more professionally adaptable and resilient are likely to be less susceptible to the severe consequences of leader workplace anxiety, and leaders who are adept at employing emotion regulation strategies can reduce the impact of their anxiety on their employees. Third, considering the widespread use of work groups in organizations, another important direction for future research is to consider the anxiety levels of groups. In work teams, individual employees may transmit their anxiety to other team members through emotional contagion. Fourth, we were only able to collect empirical data from China, a culture that features high power distance and a highly collectivistic orientation. Although the theoretical arguments discussed in our study are not culturally bound, we encourage future research to use cross-cultural data to demonstrate the generalizability of our findings. Today, workplace anxiety is more prominent than ever before, leading to noticeable consequences for employees and organizations. This study developed and tested a model showing that leader workplace anxiety affects subordinate performance through both emotional and cognitive mechanisms. Meanwhile, our study also contends that employee epistemic motivation acts as an antidote to the influence of leader anxiety on subordinate job performance.
Overall, these findings provide further insights for researchers and practitioners to understand the consequences of workplace anxiety. The datasets generated for this study are available on request to the corresponding author. Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the patients/participants or the patients’/participants’ legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements. SZ contributed to the data curation, formal analysis, and original draft and revision of the manuscript. LC contributed to the conceptualization and revision of the manuscript. LZ contributed to the supervision and guidelines. AS contributed to the review and editing of the manuscript. All authors contributed to the article and approved the submitted version."} {"text": "Cough during therapeutic thoracentesis (TT) is considered an adverse effect. The study was aimed to evaluate the relationship between cough during TT and pleural pressure (Ppl) changes (∆P). Instantaneous Ppl was measured after withdrawal of predetermined volumes of pleural fluid. Fluid withdrawal (FW) and Ppl measurement (PplM) periods were analyzed separately using the two-sample Kolmogorov–Smirnov test and the nonparametric skew to assess differences between ∆P distributions in periods with and without cough. The study involved 59 patients, median age 66 years, median withdrawn fluid volume 1800 mL (1330–2400 mL). Although cough was more frequent in 46 patients with normal pleural elastance (p < 0.0001), it was associated with significantly higher ∆P in patients with elevated elastance. Cough during TT is associated with a small but beneficial trend in Ppl changes, particularly in patients with elevated pleural elastance, and should not be considered solely as an adverse event.
In total, 1265 cough episodes were recorded in 52 patients, in 24% of FW and 19% of PplM periods, respectively. Cough was associated with significant changes in the ∆P distribution. Although, in general, therapeutic thoracentesis (TT) is thought to be a safe procedure, the withdrawal of a large pleural fluid volume can be associated with some complications, including chest discomfort, pain, pneumothorax and re-expansion pulmonary edema; some of these side effects are at least partially related to the pleural pressure (Ppl) fall caused by fluid withdrawal6. In patients with normal pleuro-pulmonary mechanics, a gentle slope of the withdrawn pleural fluid volume–Ppl curve reflects the replacement of the fluid by the expanding lung. However, if lung expandability is limited by, for example, visceral pleural thickening, lung scars, fibrosis, or airway collapse, even a small amount of withdrawn pleural fluid may result in a significant Ppl decline, which in turn may produce symptoms such as vague chest discomfort or cough7. Although significant chest discomfort is believed to be an indication for TT termination because it may suggest a potentially unsafe decline of Ppl8, the significance of cough seems to be controversial. Jones et al. recorded cough in less than 1% of patients undergoing ultrasound-guided TT and it was not related to post-procedure pneumothorax9; however, these authors referred to other studies showing a significantly higher cough incidence (9–24%) and stated that cough in the late phase of TT should be regarded as an indication to stop the procedure.
On the other hand, cough may be associated with lung re-expansion and thus, if not accompanied by other symptoms, it should not be considered as an indication for TT termination10. Pleural effusion affects approximately 1.5 million patients per year in the United States, with the annual number of thoracenteses reported between 127,000 and 173,000. Pleural manometry is a key tool to study different aspects of pleural pathophysiology in patients with pleural effusion. Access to pleural manometers, whether water or electronic, enables Ppl monitoring during TT and provides new insight into processes occurring during pleural fluid withdrawal, including a better understanding of complications and symptoms reported by the patients11, although its routine use in daily clinical practice was recently put into question12. Some previous observations led to the intriguing conclusion that cough during TT may favorably impact Ppl, allowing to avoid an excessive Ppl decline10. Although that observation was based on three patients only, it suggested that cough during TT need not necessarily be considered as a predictor of forthcoming complications, but may also be viewed as a protective phenomenon during the procedure. In general, we hypothesized that: (a) cough is linked with a subsequent Ppl increase or its less significant decrease, (b) cough can be related to an increased recruitment rate of atelectatic lung regions. Verification of the first hypothesis was the main purpose of this study. Data of 63 patients who underwent TT with pleural manometry between January 2015 and January 2019 were reviewed. Exclusions were due to the presence of fibrin membranes and loculations (n = 2) and low-quality records (n = 1). One patient was excluded due to unclear and uncertain cough markings in the study documentation. Thus, the records of 59 patients were included in the final analysis. Patients' characteristics are presented in Table .
Records of 3 patients were excluded due to a questionable reliability of Ppl measurement. Cough during TT occurred in 52 patients. The total number of cough episodes was 1265, and the median and maximal numbers of episodes in individual patients were 11 and 104, respectively. Cough was found in 172 of 926 analyzed PplM periods and in 222 of 927 analyzed FW periods. Characteristics of the changes in Ppl (∆P) distributions for PplM and FW, with and without cough, are presented in Table . Histograms of ∆P during PplM showed that cough was associated with an increase of the right tail of the ∆P distribution. The median number of coughs in the normal and elevated Pel groups was 22 and 1, respectively; ΔP was greater in patients with increased Pel, particularly during PplM with cough. Two patterns could be expected: (1) an increase of Ppl due to lung re-expansion, and (2) no significant changes if cough was not associated with additional lung re-expansion. It can be supposed that the withdrawal of the first several portions of pleural fluid in patients with large volume pleural effusion may not significantly reduce lung compression. Therefore cough in the initial phase of pleural fluid withdrawal may not generate significant Ppl changes until the space for lung re-expansion appears. Then, the increase in Ppl may be observed if cough helps to open the compressed alveoli and fill them with air. The above hypothesis seems to be confirmed by our results. We found that the cough-associated increase in Ppl was more pronounced in measurements performed after withdrawal of 1 L of pleural fluid (Fig. ). Although cough is a symptom presented by 9–24% of patients undergoing TT (88% in our study), its mechanism remains unclear. It has been documented that the main receptors responsible for cough are present in the larynx, trachea and main bronchi19.
The vagal afferents are also present in small bronchi and lung parenchyma (juxtapulmonary receptors), and the new ERS chronic cough guidelines mention the potential existence of cough receptors also in the alveolar septa and parenchyma of the lungs20. However, there is no proof that their irritation results in cough, despite the fact that cough is one of the symptoms in patients with heart failure, pulmonary edema or altitude sickness. It is supposed that cough in those cases appears only when sputum moves to larger bronchi and irritates the cough receptors or when there is bronchial compression21. The presence of cough receptors in the pleura is also considered4. Some authors suggested that negative Ppl caused by fluid withdrawal stimulates cough receptors on the visceral pleura, particularly in patients with nonexpandable lung22. The results of our study do not seem to support these opinions, since cough was significantly less common in patients with elevated Pel. Importantly, although less common, cough in patients with high Pel resulted in a higher increase in Ppl than in patients with normal Pel (Table ). Perhaps two situations should be distinguished: (a) sustainably elevated Pel, and (b) elevated Pel which can be overcome by additional maneuvers, e.g. cough or CPAP. Thus, …25. The results of our study do not support this opinion. This is because our study showed that in some patients a large fluid volume could have been withdrawn without a significant Ppl fall despite episodes of cough which appeared even in the early phase of the procedure (data not published). Therefore, we agree with the opinion of Feller-Kopman et al. that TT should not be terminated solely because of cough5. Moreover, our findings may even support a hypothesis that cough can be a beneficial factor preventing an excessive Ppl drop during TT in some cases.
To confirm this view and to evaluate the clinical application of voluntary cough during thoracentesis, further well-designed prospective studies are mandatory. It also seems necessary in the future to have a closer look at the characteristics of cough episodes in terms of Ppl increase, namely to check whether all types and episodes of cough can be construed as beneficial, especially cough attacks or severe excessive cough during the procedure. Some authors believe that cough is a criterion for TT termination as it is considered a sign of complete or near-complete drainage25. … the maximal Ppl for each breath was used as a surrogate of Ppl at FRC, whereas this value depends on the Ppl at FRC and possible intrinsic positive end-expiratory pressure. The study was supported by … and by the Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences (the IBBE PAS own research fund). The study protocol was approved by the Institutional Review Board of Medical University of Warsaw (KB 105/2012) and registered at ClinicalTrials.gov (NCT02192138) on 16/07/2014. The study conformed to the standards set by the Declaration of Helsinki. All medical procedures were performed in patients hospitalized in the Department of Internal Medicine, Pulmonary Diseases and Allergy, Medical University of Warsaw, and all patients signed an informed consent to participate in the study. Sixty-three patients who underwent TT with pleural manometry between January 2015 and January 2019 were included in the analysis; consecutive subjects were enrolled to avoid selection bias. The inclusion criteria were as follows: (1) age 18–85 years; (2) symptomatic pleural effusion occupying at least 1/3 of the hemithorax (in posteroanterior chest radiograph); (3) symptoms severity (dyspnea) warranting TT; (4) no contraindication for TT; (5) signed informed consent for participation in the study.
The exclusion criteria were: (1) poor performance status requiring maximal shortening of the procedure; (2) unstable hemodynamic or respiratory status unrelated to pleural effusion; (3) respiratory failure requiring mechanical ventilation. Pleural fluid was evacuated through a small-bore pleural catheter29. Pleural pressure was measured with a digital pleural manometer and recorded for one minute directly after catheter insertion and then during the procedure. There were interchanging periods of Ppl measurement (PplM) and periods of fluid withdrawal (FW); Ppl was measured …, and then after the withdrawal of each 100 mL (frequent Ppl measurement phase). The procedure was terminated when no more fluid could be aspirated, a significant pleural pressure decline was observed or chest pain occurred. Ppl was displayed on a monitor during the procedure and its instantaneous values were recorded on a portable computer for further analysis. Each cough episode, both during PplM and FW, was marked in the computer record at the corresponding place on the time vector and noted in the patient's individual study documentation. Ppl values at the functional residual capacity (FRC) should ideally be used for analysis. This is because at FRC respiratory muscles are fully relaxed and Ppl depends entirely on the volume of pleural fluid and the relationship between the outward pull of the thoracic cavity and the inward elastic recoil of the lung. Since, however, the exact time points of expiration ends were impossible to determine without airflow measurement, the maximal value of Ppl (Pplmax) in a breathing cycle was assumed to reflect Ppl at FRC. The median values of Pplmax at the beginning (PplmaxB) and the end (PplmaxE) of PplM were used to quantitatively characterize Ppl changes during PplM and FW.
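The Pplmax bookkeeping described above, together with the ΔP indices, the Pel classification and the statistical comparisons defined later in the methods, can be sketched numerically. Every number below is hypothetical (not patient data); the 14.5 cm H2O/L Pel cutoff is the one cited in the text, and NumPy and SciPy are assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical median Pplmax (cm H2O) at the beginning (PplmaxB) and end
# (PplmaxE) of four consecutive PplM periods.
pplmax_b = np.array([-5.0, -7.2, -9.1, -11.8])
pplmax_e = np.array([-5.4, -7.9, -9.6, -12.5])

# Delta-P during a PplM: PplmaxE minus PplmaxB of the same period.
dP_pplm = pplmax_e - pplmax_b
# Delta-P during a FW: PplmaxB of the next PplM minus PplmaxE of the previous.
dP_fw = pplmax_b[1:] - pplmax_e[:-1]

# Total pleural elastance: total Ppl fall over total withdrawn volume (L),
# classified against the 14.5 cm H2O/L threshold.
total_volume_l = 1.8
pel = (pplmax_b[0] - pplmax_e[-1]) / total_volume_l
group = "normal" if pel < 14.5 else "elevated"

# Two-sample Kolmogorov-Smirnov test on Delta-P with vs without cough,
# plus the nonparametric skew (mean - median) / std of one sample.
dP_with_cough = np.array([0.8, 1.1, -0.2, 1.5, 0.4])
dP_no_cough = np.array([-0.6, -1.0, -0.3, -1.4, -0.8])
ks = stats.ks_2samp(dP_with_cough, dP_no_cough)
npskew = (dP_with_cough.mean() - np.median(dP_with_cough)) / dP_with_cough.std(ddof=1)
print(group, round(pel, 2), round(ks.statistic, 2))
```

The KS statistic measures the maximal distance between the two empirical ∆P distribution functions, which is how the with-cough vs without-cough comparison in this study is framed.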
Thus, ∆P equal to the difference between PplmaxE and PplmaxB (PplmaxE–PplmaxB) was considered as the index characterizing the Ppl change during the corresponding PplM, whereas ∆P equal to the difference between PplmaxB for the subsequent PplM and PplmaxE for the previous PplM was used to quantify the Ppl change during FW (Fig. ). In each patient, the intervals used to determine the medians had to reconcile two opposite requirements: (a) the number of Pplmax values used in determination of the median should be as large as possible to avoid random errors, and thus the interval for analysis should be relatively long; (b) PplmaxB and PplmaxE should reflect the Pplmax values at the actual beginning and end of each PplM, and thus the intervals should be short and limited to the extremes of the PplM. Additionally, to examine whether a possible link between cough and ΔP depends on lung expandability, all patients were classified according to the total pleural elastance (Pel), i.e. the ratio of the Ppl fall during the whole procedure to the total volume of withdrawn fluid. The first group included patients with normal Pel (< 14.5 cm H2O/L)30, while the second included those with elevated elastance (≥ 14.5 cm H2O/L). Data were presented as median and quartiles. Since the majority of analyzed variables had non-normal distributions, non-parametric statistical tests were used. The differences between groups were tested with the Mann–Whitney U-test. To analyze the statistical significance of cough influence on the ΔP distribution shape, the two-sample Kolmogorov–Smirnov test was used. The nonparametric skew was used to determine the nature of this influence."} {"text": "The effectiveness of Bradyrhizobium japonicum and Bacillus megaterium (newly isolated strains) as a single inoculant and co-inoculant during seed bio-priming to improve seed germination and initial seedling growth of two soybean cultivars was evaluated. The treated seeds were subjected to germination test (GT), cold test (CT) and accelerated aging test (AAT). B.
megaterium significantly improved all parameters in GT and CT, and final germination, shoot length, root length, root dry weight, and seedling vigor index in AAT, as compared to the control. In addition, co-inoculation significantly increased all parameters except shoot dry weight in GT; all parameters in CT; and germination energy, shoot length, root length, and seedling vigor index in AAT, in comparison to the control. Moreover, Br. japonicum significantly improved the germination energy, shoot length, shoot dry weight, root dry weight, and seedling vigor index in GT; all parameters in CT; and shoot length, root length, and seedling vigor index in AAT, compared with non-primed seeds. Thus, B. megaterium strains could be used in soybean bio-priming as a potential single inoculant and co-inoculant, following proper field evaluation. Bio-priming is a new technique of seed treatment that improves seed germination, vigor, crop growth and yield. The objective of this study was to evaluate the effectiveness of … Soybean (Glycine max (L.) Merrill) is one of the most important leguminous crops, with the highest protein content (around 40%) and the second highest oil content (around 20%). The B. megaterium strains were isolated and identified based on 16S rDNA sequencing. Multiple plant growth-promoting (PGP) properties of the B. megaterium strains are presented in … The Bacillus strains were able to produce indole-3-acetic acid (IAA) in medium with L-tryptophan (8.00–15.65 µg mL−1). Among the seven B. megaterium strains, five were able to solubilize phosphorus, seven were positive for phosphate mineralization and able to grow on N-free medium, and six strains were capable of producing siderophores. Similarly, Haque et al. reported that B. megaterium expressed production of IAA, solubilization of nutrients, production of siderophores, and other PGP activities. Moreover, Nascimento et al. found genes in B. megaterium that participate in the biosynthesis of auxins and cytokinins, stress resistance, antagonistic activities, and other PGP traits.
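The seedling vigor index reported above is commonly computed, following Abdul-Baki and Anderson's formulation, as germination percentage multiplied by mean seedling (shoot + root) length; whether this exact formulation was used here is an assumption, and all numbers in the sketch below are hypothetical rather than the study's data.

```python
def vigor_index(germination_pct: float, shoot_cm: float, root_cm: float) -> float:
    """Vigor index = germination (%) x mean seedling length (shoot + root, cm)."""
    return germination_pct * (shoot_cm + root_cm)

def pct_change(treated: float, control: float) -> float:
    """Percent change of a treated parameter relative to the control."""
    return 100.0 * (treated - control) / control

control = vigor_index(90.0, 10.0, 12.0)   # non-primed seeds (hypothetical)
treated = vigor_index(93.0, 10.8, 13.1)   # bio-primed seeds (hypothetical)
print(round(pct_change(treated, control), 1))
```

Percent changes of this kind are how the improvements over the non-primed control (e.g. the seedling vigor index increases) are expressed in the text.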
Although the newly-isolated bacterial strains tested in this study generally had good PGP abilities, four strains of B. megaterium (B8, B12, B15 and B17) were selected due to the highest potential for plant growth promotion. Dual culture assay demonstrated that the chosen B. megaterium strains were compatible with each other. Furthermore, compatibility assay did not show any antagonistic actions among Br. japonicum and B. megaterium strains. Seed germination, together with seed vigor, represents a key factor in determining crop yield. Seed qu… differences between single B. megaterium or co-inoculation treatments and the control were noted in final germination, abnormal seedlings, and root length, while single treatments had a significantly higher effect compared to the control on shoot dry weight. Bacillus treatment led to the highest increase in final germination (3.5%), root length (8.9%), shoot dry weight (18.9%), and seedling vigor index (12.5%) compared to the control. The highest increase in germination energy (3.4%) and shoot length (8.2%) in comparison to the control was recorded with Br. japonicum. However, Br. japonicum did not improve final germination and abnormal seedlings. The highest decrease in abnormal seedlings (−2.8%) and increase in root dry weight (70.2%) relative to the control was observed after co-inoculation. Nevertheless, the treatments that showed the highest and significant effect compared to the control did not differ significantly in relation to certain individual treatments. Only Bradyrhizobium had a significantly lower effect compared to Bacillus and co-inoculation on root length and seedling vigor index, and single treatments had a significantly higher effect compared to co-inoculation on shoot dry weight. … The highest decrease in abnormal seedlings (−4.8%) in comparison to the control was obtained from inoculation with Br. japonicum, while the highest increase in root length (43.7%) and root dry weight (49.1%) was observed after inoculation with B.
megaterium. A combined treatment with Br. japonicum and B. megaterium led to the highest improvement in shoot length (35.1%), shoot dry weight (9.3%), and seedling vigor index (48.5%) compared to the control. … The highest increase in root length (43.8%), root dry weight (17.2%), and seedling vigor index (32.4%) was obtained from inoculation with B. megaterium, while co-inoculation had the highest impact on germination energy (3.1%) and shoot length (19.8%) in comparison to the control. Stressful conditions, such as high or low temperatures and high humidity, can decrease germination and extend germination time, primarily due to cell membrane damage and electrolyte leakage. Low temperatures, especially during the first few days of the germination period, have a major effect on seed germination and initial growth, which is reflected later in soybean development and yield. According to Szczerba et al., low tem… Furthermore, the association between bacterial treatments and cultivars in optimal and adverse conditions is illustrated by the biplot of the principal component analysis (PCA). In optimal conditions, … the Bradyrhizobium treatment in cv. Teona, due to its lower effect on the examined soybean parameters. Additionally, PCA showed that cultivars were separated from each other within the same group. Hence, it was confirmed that Bacillus and co-inoculation treatments had a better effect than Bradyrhizobium on both cultivars. According to the PCA for bacterial treatments and cultivars under cold stress, … Bradyrhizobium was clearly separated from Bacillus and co-inoculation, being the optimal treatment in cv. Teona. Based on the PCA for bacterial treatments and cultivars under double stress, … B. megaterium strains have the ability to grow at a temperature range of 3–45 °C, whereas Br. japonicum strains tolerate 15–20 °C and below well, although less frequently. The better performance of B. megaterium as a single and/or co-inoculant compared to the sole application of Br.
japonicum, especially under high temperature and humidity conditions, can be attributed to their ability to produce spores, grow and remain active under stress. Concurrently, the survival and efficacy of bacterial strains, particularly on aged seeds, indicates the possibility of prolonged storage of bio-primed seeds. The results emphasized the role of inoculants in establishing soybean tolerance to stressful conditions such as low or high temperature and high humidity. However, the different response of cultivars to inoculation … Additionally, correlation analysis confirmed positive effects of seed bio-priming treatments under both optimal and unfavorable conditions, with the largest impact under cold stress. Overall, a positive interrelationship was established between germination and other parameters, with the exception of abnormal seedlings. … Br. japonicum, in order to improve plant growth under adverse conditions. Namely, in a meta-analysis of studies from 1987 to 2018, Zeffa et al. found that co-inoculation of soybean with Bradyrhizobium and PGPR improves plant development, increases nodulation, and alleviates nutritional limitations and potential stresses during plant growth. Moreover, co-inoculation of soybean with Br. japonicum and Bacillus strains, which were shown to have PGP activity, increased nodule number, total biomass, total nitrogen, and yield under the conditions of low root zone temperatures. The Bacillus strains tested in this study produce IAA. It was found that PGPR use this hormone to interact with plants as part of their colonization strategy. Being the first contact between the bacteria and the seed, IAA penetrates the seed coat with water, promotes seed germination, enhances and regulates vegetative growth and development, and affects the biosynthesis of various metabolites, resistance to stressful conditions, photosynthesis, etc.
Bacillus strains improved biomass and stress tolerance of soybean plants, while modulating abscisic acid (ABA) and salicylic acid (SA) levels. Additionally, PGPR have the ability to solubilize/mineralize phosphorus and produce siderophores, which was also confirmed for the B. megaterium strains used in this study. This study confirmed that single inoculation and co-inoculation of soybean with B. megaterium improved seed germination and initial seedling growth, probably due to the production of IAA and other PGP activities. Similarly, Bacillus velezensis increased germination, root length and root surface of soybean compared to the control, while strain genome analysis revealed the presence of genes linked to root colonization and PGP ability. Co-inoculation with Br. japonicum and Bacillus amyloliquefaciens promoted early seedling growth and significantly improved nodulation, probably due to the production of high levels of auxin, gibberellins and salicylic acid by the Bacillus strains. The current study, as well as several other studies, reported the positive effects of using PGPR as co-inoculants of soybean. This study used Br. japonicum (BJ) and B. megaterium (BM) strains from the collection of the Laboratory for Microbiological Research of the Institute of Field and Vegetable Crops, Novi Sad (IFVCNS), Serbia. Strains of Br. japonicum are commercially used as microbiological fertilizer in the production of soybean in Serbia. Strains of B. megaterium were isolated in 2015–2016 from soil samples collected from different locations in northern Serbia. Soil samples included agricultural soil (rhizosphere and bulk soil), as well as non-agricultural soil, which differed in chemical and physical properties. Isolation was performed using serial dilution and streak plate methods on nutrient agar (NA).
Bacillus strains were morphologically characterized and identified by 16S rDNA sequencing, while the amplification of 16S rDNA gene fragments of Bacillus strains was performed using primers 27F (AGAGTTTGATCMTGGCTCAG) and 1492R (TACGGYTACCTTGTTACGACTT). Bradyrhizobium strains were cultured in yeast extract mannitol broth (YEMB) and incubated at 28 ± 2 °C and 120 rpm for 72 h. Bacillus strains were grown in nutrient broth (NB) and incubated at 30 ± 2 °C and 120 rpm for 24 h. Each strain of Bradyrhizobium and Bacillus was grown individually, and then mixed in equal proportions to form the mixtures for further examination. A culture suspension of each mixture was adjusted to have a final concentration of 10⁹ colony forming units per mL (CFUs/mL). For the IAA production assay, 1 mL of overnight grown culture of each Bacillus strain was inoculated in NB supplemented with 500 µg/mL of L-tryptophan and incubated at 30 ± 2 °C for 24 h. The supernatant was mixed with Salkowski reagent (FeCl3-HClO4: 1 mL of 0.5 M ferric chloride solution in 49 mL of 35% perchloric acid) in a ratio of 1:2 (supernatant/reagent v/v). Development of a pink color after 20 min at room temperature indicated the production of IAA. Indole production was measured by spectrophotometric absorption at 530 nm. The compatibility of Br. japonicum vs. B. megaterium, and/or of B. megaterium strains with each other, was tested by dual culture assay on yeast extract mannitol agar (YMA) and/or NA plates. All determinations were performed in triplicate. Seeds of two cultivars of soybean (Glycine max L.), namely Atlas and Teona, developed at the Soybean Department, IFVCNS, were used in the present study. Seeds were surface disinfected in 2% sodium hypochlorite, washed with sterile distilled water four times, and then dried back on sterile filter paper under aseptic conditions to their original weight.
In addition, seeds were soaked in 150 mL of bacterial culture suspension per each test and treatment (4 × 100 = 400 seeds) for the germination and accelerated aging tests, and 75 mL per treatment (4 × 50 = 200 seeds) for the cold test. Three bacterial treatments were tested: Br. japonicum (BJ), B. megaterium (BM), and Br. japonicum + B. megaterium (BJ + BM), with an inoculum rate of 2 × 10⁹ CFUs/g. The ratio of the strains in their mixtures in the single inoculation treatments, as well as the ratio of the Bradyrhizobium and Bacillus mixtures in the co-inoculation treatment, was 1:1 (75 mL:75 mL per 150 mL and 37.5 mL:37.5 mL per 75 mL). The Bradyrhizobium mixture contained 6 strains of Br. japonicum: the amount of each strain in single and co-inoculation treatments was 25 mL and 12.5 mL, respectively, per 150 mL (germination and accelerated aging tests), and half these volumes per 75 mL (cold test). The Bacillus mixture contained 4 strains of B. megaterium: the amount of each strain in single and co-inoculation treatments was 37.5 mL and 18.75 mL, respectively, per 150 mL (germination and accelerated aging tests), and half these volumes per 75 mL (cold test). Non-primed seeds were used as the control. Seed bio-priming was conducted using the aforementioned liquid bacterial suspensions at 25 °C for 5 h, under dark conditions. After priming, seeds were rinsed thoroughly with distilled water and dried on sterile filter paper at room temperature for 72 h. Samples consisted of 25 randomly selected soybean seeds per replicate to measure shoot and root length. Samples were germinated in rolled filter paper for eight days in the germination chamber. Afterwards, 10 normal seedlings per replication were randomly selected for further assessment. In the cold test, samples were first exposed to a low temperature (10 °C) for seven days and then transferred into a germination chamber for six days.
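The per-strain volume breakdown above follows directly from the 1:1 mixing scheme; a minimal sketch of the arithmetic (the function name is hypothetical, the totals and strain counts are from the text):

```python
# Sketch of the per-strain volume arithmetic described above (illustrative only;
# the function is hypothetical, totals and strain counts come from the text).

def per_strain_volume(total_ml, n_strains, co_inoculation=False):
    """Volume of each strain in a mixture; in co-inoculation the mixture's
    share is halved because the two mixtures are combined 1:1."""
    share = total_ml / 2 if co_inoculation else total_ml
    return share / n_strains

# Germination and accelerated aging tests use 150 mL per treatment:
print(per_strain_volume(150, 6))                        # Br. japonicum, single: 25.0 mL
print(per_strain_volume(150, 6, co_inoculation=True))   # Br. japonicum, co: 12.5 mL
print(per_strain_volume(150, 4))                        # B. megaterium, single: 37.5 mL
print(per_strain_volume(150, 4, co_inoculation=True))   # B. megaterium, co: 18.75 mL
```

For the cold test (75 mL per treatment), the same function returns exactly half of each value.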
In the accelerated aging test, seeds were exposed to a high temperature and humidity for 72 h and then placed into a rolled filter paper for eight days in the germination chamber at 25 °C. The length of the shoot and roots was measured using a ruler on the day of germination. The fresh shoot and root weight of 10 seedlings was determined using an analytical balance on the day of seed germination. For determination of dry weight, samples were oven dried at 80 °C for 24 h. The seedling vigor index was determined according to Abdul-Baki and Anderson, using the germination percentage and seedling length. The relationship between germination and other parameters was determined by Pearson’s correlation analysis. Principal component analysis (PCA) was performed to determine the effect of seed bio-priming treatments on the examined parameters of soybean cultivars. The data were statistically processed by the STATISTICA 10 programme. Four treatments were arranged in a complete randomized design (CRD) for laboratory tests, with four replications. The obtained data were processed statistically, using analysis of variance (ANOVA), followed by mean separation according to Tukey’s HSD test (p ≤ 0.05). Bacillus and co-inoculation improved the majority of the investigated soybean parameters as compared to Bradyrhizobium. The greatest improvement was obtained for the shoot length, root length, and seedling vigor index in cold-treated seeds, followed by aged and normal seeds, as well as for shoot and root weight recorded in the germination, cold and accelerated aging tests, respectively. Bio-priming treatments led to the highest increase in final germination in cold-treated seeds, whereas the effect was lower in normal and aged seeds. The present study represents the first experimental evidence of using newly-isolated, indigenous B. megaterium strains from soil for seed bio-priming of soybean under different conditions of laboratory tests.
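The Abdul-Baki and Anderson vigor index mentioned above is commonly computed as germination percentage multiplied by mean seedling length; a minimal sketch with made-up numbers (not data from this study):

```python
# Hedged sketch of the seedling vigor index as commonly computed following
# Abdul-Baki and Anderson: vigor index = germination (%) * mean seedling length (cm).
# The numbers below are illustrative only.

def seedling_vigor_index(germination_pct, mean_seedling_length_cm):
    return germination_pct * mean_seedling_length_cm

print(seedling_vigor_index(90, 20.0))  # 1800.0
```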
This technique can be recommended for priming seeds prior to field planting as an environmentally friendly strategy to improve seed germination and initial seedling growth. Extensive field trials under different environmental conditions, using several seed lots of each soybean cultivar, will be necessary in order to establish the efficiency of these strains as potential bio-priming seed treatments for improving soybean productivity."} {"text": "Food selectivity is among the most common problems for children with Autism Spectrum Disorder (ASD). The present study aims to validate the Brief Autism Mealtime Behavior Inventory (BAMBI) in an Italian population of children with ASD. BAMBI was translated and cross-culturally adapted following international guidelines; we then investigated internal consistency, as measured by Cronbach’s alpha, and test–retest reliability, as measured by the Intraclass Correlation Coefficient (ICC), in a sample of both children with ASD and with typical development (TD). A total of 131 children were recruited in a clinical and community sample. Internal consistency revealed significant data for both TD and ASD children, with a Cronbach’s alpha of 0.86 and 0.71, respectively. Test–retest reliability showed excellent values for each item of the BAMBI (range 0.83–1.00). Furthermore, we investigated differences in gender and body mass index; however, no significant differences were found among groups. In conclusion, the Italian version of the BAMBI showed good internal consistency and test–retest reliability, and it can be used for clinical and research purposes. Autism Spectrum Disorder (ASD) is an early onset neurodevelopmental disorder primarily affecting two main areas: ‘social communication’ and ‘restricted, repetitive and/or sensory behaviors or interests’. Feeding and eating problems are common problems that affect persons with ASD across all ages and cognitive abilities.
It is well known that children with typical development, especially at preschool age, show an attitude of preference or rejection toward some foods; in these cases, children are referred to as “picky eaters”. Scientific literature points out that this behavior declines around the age of 6 years, when children have more opportunities to eat outside the family context and are exposed to a greater variety of foods, which promotes the extinction of dietary restrictions. Therefore, the assessment of feeding problems in ASD should be included in routine screenings, and it would be appropriate for health professionals to have a greater awareness of this topic. The need to have validated tools to measure a specific construct is well noted. Considering the lack of specific assessment tools on this topic, no measurement tools have been developed or adapted to measure the mealtime and feeding problems of individuals with ASD or other neurodevelopmental disorders in Italy; therefore, the present investigation aims to translate the BAMBI into Italian and to validate its psychometric properties in an Italian population of children with ASD. The study was conducted from June 2021 until September 2021. The present investigation was carried out by a research group of Italian rehabilitation and healthcare professionals within the Child Neuropsychiatry Unit of the Department of Human Neurosciences of Sapienza University of Rome, together with the collaboration of a non-profit organization, R.O.M.A.—Rehabilitation and Outcome Measures Assessment. The research group has great experience with validating outcome measures in children. BAMBI is an assessment tool of mealtime behavior problems in children with ASD. Following the recommendations of Beaton and colleagues, the translation process included forward translation, synthesis, back translation, expert committee review, and pre-testing. An expert committee, comprising all translators and the research group, reviewed the translated versions and discussed discrepancies and difficulties during the process.
This led towards a pre-final version of the BAMBI that was administered to the first five participants as a pre-test. They were interviewed and answered questions concerning the wording and comprehensibility of the tool. Consensus was achieved and the final Italian version of the BAMBI was produced. We recruited children aged between 6 and 10 years, with parents who demonstrated good ability to communicate in Italian. For the objective of the present study, we opted to include a convenience sample of both children with typical development and children with ASD. Children with a certified diagnosis of ASD were recruited in the outpatient clinic of the Institute of Child Neuropsychiatry of the Policlinico Umberto I University Hospital in Rome, while children with typical development were recruited in a primary school near the aforementioned institute. Participants with other neurodevelopmental disorders and genetic syndromes were excluded. Before starting the study, the research group participated in an internal training course to increase confidence in administering BAMBI. A speech therapist with significant experience using the assessment tools led this training. The recruitment period lasted four months, from June to September 2021. Before administering the BAMBI, parents were asked about the socio-demographic information of the family and information on the child, in particular age, weight, height, and the presence of gastric disorders or eating problems. Data for children with ASD were obtained from clinical records and parents’ interviews, while for children with typical development, information was collected by two speech and language therapists. Once inclusion and exclusion criteria were verified, the research team administered the BAMBI. Data were summarized and analyzed using frequency tables, means, and standard deviations. We used Cronbach’s Alpha to measure and investigate internal consistency.
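The alpha coefficient just mentioned can be sketched as α = k/(k−1)·(1 − Σ item variances / variance of total scores); a minimal pure-Python version with made-up scores (not BAMBI data):

```python
# Minimal sketch of Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# Scores below are illustrative; a real analysis would use the BAMBI item scores.
from statistics import variance  # sample variance (n-1 denominator)

def cronbach_alpha(item_scores):
    """item_scores: one inner list of respondent scores per item."""
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]
    item_var_sum = sum(variance(vals) for vals in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly correlated items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```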
Cronbach’s Alpha expresses the correlations between different items on the same tool; a coefficient greater than 0.7 indicates acceptable internal consistency. We recruited 47 individuals with ASD and 90 individuals with typical development. Of these, six were excluded due to lack of consent to data processing. Therefore, the final sample was composed of 84 children with typical development with a mean age of 6.73 (SD 2.5) and 47 children with a diagnosis of ASD with a mean age of 7.45 (SD 1.8). Sample characteristics are summarized in the table. Internal consistency analysis showed statistical significance for both children with typical development and children with ASD, with a Cronbach’s Alpha of 0.86 and 0.71, respectively. To better understand differences and how items work in both groups, we reported the data of item–total statistics. The findings are summarized in the table. For test–retest reliability, we reported a descriptive analysis of mean (SD) scores of the two repeated measurements and ICC values with 95% CI, resulting in a range of 0.83–1.00. We also investigated differences in mean scores by gender and BMI. From the analysis, we can assert that there are no differences by gender and no correlation between BMI and total BAMBI score. Data are reported in the table. The present study aims to preliminarily validate the Italian version of the BAMBI in a sample of children with both ASD and typical development. Recently, the DSM-V included diagnostic criteria for Avoidant and Restrictive Food Intake Disorder (ARFID), characterized by a persistent failure to meet appropriate nutritional and/or energy needs associated with avoidance or restriction of food intake.
The BAMBI is a simple and quick-to-use Observer Reported Outcome, which is a measurement based on an observation by someone other than the patient or a health professional in general. The original version of the BAMBI was translated into Italian following international guidelines. Equivalence between the Italian version of the BAMBI and the original was investigated on the semantic domain. Only a few modifications were suggested; for example, Item 8 was literally translated using the Italian verb “chiudere” (“close” in English), but the research group opted to insert the synonym verb “serrare” because it indicates closing the mouth as a refusal behavior. Also, Item 18 was modified because the original version reported as an example “fried foods, cold cereals, raw vegetables”, while the research group opted to insert “pasta” because it is more appropriate for the Italian culture and eating habits of families; therefore, the examples of Item 18 were modified to “cibi fritti, pasta fredda e verdure crude” (fried foods, cold pasta and raw vegetables). These changes were also discussed with participants during the pre-test phase. Participants’ observations allowed us to gain cross-cultural validity and proved to be strictly related to the meaning of the original items. For internal consistency, as measured with Cronbach’s coefficient alpha, our findings showed significant values for both children with typical development (0.86) and children with ASD (0.71), in line with the Turkish version (α 0.79) and the Vietnamese version. Despite these encouraging results, several limitations can be acknowledged. First of all, we did not investigate differences in the ASD population according to severity of symptoms, while emerging evidence suggests an overlap between the severity of ASD symptoms and eating problems, like food selectivity.
Since food and, in general, mealtime is a convivial event with high social and cultural value, it would be interesting to assess the correlation that may exist between food selectivity and the development of communication and social skills of people with ASD, and how this characteristic is delineated in different cultures. Finally, it would be desirable to train rehabilitation professionals who can develop individualized programs and strategies that consider the specific difficulties of these children to reduce and extinguish food selectivity, thereby promoting the well-being of the affected individuals and the entire family unit. In conclusion, the BAMBI has proven to be a reliable, quick, and easy-to-use tool and can be used for clinical and research purposes. The flexibility and specificity of the scale identified in this study also allow it to be used in various clinical and research settings depending on the purpose and participants."} {"text": "Macrophages are multifunctional immune system cells that are essential for the mechanical stimulation-induced control of metabolism. Piezo1 is a non-selective calcium channel expressed in multifarious tissues to convey mechanical signals. Here, a cellular model of tension was used to study the effect of mechanical stretch on the phenotypic transformation of macrophages and its mechanism. An indirect co-culture system was used to explore the effect of macrophage activation on bone marrow mesenchymal stem cells (BMSCs), and a treadmill running model was used to validate in vivo the mechanism identified in vitro. p53 was acetylated and deacetylated in macrophages as a result of mechanical strain being detected by Piezo1. This process is able to polarize macrophages towards M2 and secretes transforming growth factor-beta (TGF-β1), which subsequently stimulates BMSC migration, proliferation and osteogenic differentiation.
Knockdown of Piezo1 inhibits the conversion of macrophages to the reparative phenotype, thereby affecting bone remodelling. Blockade of TGF-β type I and II receptors and of Piezo1 significantly reduced exercise-increased bone mass in mice. In conclusion, we showed that mechanical tension causes calcium influx, p53 deacetylation, macrophage polarization towards M2 and TGF-β1 release through Piezo1. These events support BMSC osteogenesis. Piezo1-mediated mechanical tension stimulates macrophage polarization and secretion of transforming growth factor-beta1, promoting osteogenic differentiation. Acetylation and deacetylation of p53 play a major role in this process. Piezo1, a non-selective Ca2+ channel, is found in a variety of tissues, including macrophages. Previous research has shown that stretching can influence macrophage function and polarization towards M2. It has been demonstrated that 10% stretch intensity is optimal for activating M2-type macrophages, strongly boosting suture stem cell (SuSC) osteogenic differentiation. Piezo1 mediated the influx of Ca2+ and controlled the acetylation or deacetylation of p53 in response to mechanical stress, which caused macrophages to switch from a pro-inflammatory to an anti-inflammatory phenotype. M2-type macrophages produced TGF-β1, which induced BMSC migration, proliferation and osteogenic differentiation. Running mice were given TGF-β receptor inhibitors or Piezo1 inhibitors, which decreased the increase in bone mass brought on by exercise. In summary, we found that Piezo1-mediated changes in calcium inward flow, as well as p53 acetylation and deacetylation, caused macrophages to polarize towards M2, secrete TGF-β1 and promote BMSC osteogenesis. Despite numerous studies showing the osteogenic effects of macrophages on mesenchymal stem cells (MSCs), there is no agreement on which macrophage phenotype is favourable for MSCs under mechanical stretch.
Here, we created a cellular model of tension to examine how mechanical stretch affected the phenotypic transformation of macrophages. We revealed that the optimal mechanical stretch exhibited strong immunomodulatory effects and encouraged M2 macrophage polarization, the latter of which was a key regulator driving bone marrow mesenchymal stem cells' (BMSCs) osteogenic differentiation in an indirect co-culture system. Mechanistically, we identified Piezo1 ion channels as tensile stress sensors in macrophages. Piezo1 triggered inflammatory responses and activated Ca2+ influx. The inhibitors were dissolved in H2O and then administered to mice as a 10 μL (5 μM) intraperitoneal injection. Male C57BL/6 wild-type mice aged 6 weeks were divided into five groups, each with five animals: the blank control group, the simple running group, the negative control group, and the remaining two groups, which were administered the TGF-β receptor I, II inhibitor or the Piezo1 inhibitor. Each mouse was raised in a separate cage to prevent fights that might mask the benefits of exercise. We used operating conditions from other labs as references. After 3 days of incubation, the culture media was changed to remove non-adherent cells. The adhering cells were washed with PBS when the confluence reached 80%–90%, and the medium was changed every 3 days while they were being grown. The cells were then cultured once more after being passaged at a ratio of 1:3. The cells from passage 3 were employed in the subsequent studies. The murine-derived macrophage cell line RAW264.7 was purchased from the Cells Resource Center of Shanghai Institutes for Biological Sciences, the Chinese Academy of Science. It was then cultured in Dulbecco's modified Eagle's medium containing 10% FBS, 100 IU/mL penicillin and 100 μg/mL streptomycin at 37°C in a humid atmosphere with 5% CO2. Male C57BL/6 wild-type mice aged 3–4 weeks had their femurs and tibias thoroughly separated and washed with phosphate-buffered saline (PBS).
Following the removal of the bone marrow from these bones' cavities, cells were cultured in alpha minimal essential medium with 10% foetal bovine serum and 1% penicillin–streptomycin. The cells were kept alive at 37°C in a humidified incubator with 5% CO2. To apply mechanical stimulation to macrophages, RAW264.7 cells were uniformly seeded into six-well collagen I-coated BioFlex culture plates with flexible silicon membrane bottoms. A Flexcell® FX-5000™ Tension System was used to apply cyclic sinusoidal continuous tensile strain for 0, 1, 2, 4 and 6 h. This optimal mechanical stretching was applied to cells throughout the in vitro series. Cells cultivated in the same plates but not stretched served as controls. In accordance with the manufacturer's recommendations, RAW264.7 cells were transiently transfected in six-well plates with Piezo1 siRNA at a final concentration of 100 nM using Opti-MEM (Gibco) supplemented with Lipofectamine 2000 Reagent (Invitrogen). We refer to the sequences of other subject groups. Using Trizol reagent, total RNA from RAW264.7 cells was extracted for quantitative real-time PCR analysis. HiScript III Q RT SuperMix (Vazyme) was used to reverse-transcribe RNA. Next, qPCR was performed on an ABI QuantStudio7 (Applied Biosystems) using ChamQ Universal SYBR qPCR Master Mix (Vazyme). The relative gene expression was quantified using the 2^−ΔΔCt method, and messenger RNA (mRNA) expression levels were normalized to β-actin. The primers used are listed in the table.
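The 2^−ΔΔCt relative quantification can be sketched in a few lines; the Ct values below are illustrative, not measured data, and β-actin stands in as the reference gene named in the text:

```python
# Sketch of 2^-ddCt relative quantification. Ct values are made up for
# illustration; the reference gene in the text is beta-actin.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # compare to control condition
    return 2 ** (-dd_ct)

# Target amplifies one cycle earlier in the sample -> twice the expression:
print(fold_change(25.0, 20.0, 26.0, 20.0))  # 2.0
```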
After blocking with 5% skimmed milk or BSA blocking solution in Tris‐buffered saline containing 0.05% Tween 20 (TBST), the membranes were incubated overnight with the following primary antibodies: anti‐Runx2, anti‐OSX, anti‐OPN, anti‐Col1, anti‐iNOS, anti‐Arg‐1, anti‐p53, anti‐ac‐p53, anti‐β‐actin and anti‐GAPDH. After primary antibody incubation, membranes were washed, incubated with a secondary anti‐rabbit or anti‐mouse antibody for 1 h at room temperature and then visualized using ECL detection kits . The dilutions and suppliers of antibodies used in this study are listed in Table 2.7Flow cytometry was used to analyse the expression of the M1 marker CD86 and the M2 marker CD206 in order to describe the M1 and M2 macrophage phenotypes. 0.25% trypsin in ethylenediaminetetraacetic acid was used to separate the cells, and they were then thrice washed with PBS. The cells were incubated in PBS with APC‐F4/80, PE‐CD86 antibody and FITC‐CD206 antibody for 60 min at 4°C in the dark. The labelled cells were measured using a flow cytometer . The negative control cells were those with no additional antibodies.2.8Upon the cell density reached roughly 70%, we washed the cells three times in PBS before fixing them for 30 min in 4% paraformaldehyde. We used goat serum to block the samples for 30 min at room temperature. After that, we chose the primary antibodies against F4/80, CD206, Ac‐p53 and Piezo1 for overnight incubation at 4°C. On the second day, the samples were washed in PBS three times. Subsequently, the species‐matched secondary antibodies were applied, and the nucleus was stained with 4′,6‐diamidino‐2‐phenylindole as a counterstain . Cells were similarly fixed in 4% paraformaldehyde for 15 min and then soaked in 0.5% Triton‐X for 5 min prior to phalloidin staining. According to the manufacturer's instructions, DAPI staining and phalloidin staining were carried out. For 5 or 30 min, cells were fixed in DAPI and phalloidin staining solution. 
The images were captured using an immunofluorescence microscope. Cell migration was assessed in 6.5 mm Transwell® chambers with 8.0 μm pore polycarbonate membrane inserts. The system's lower chamber contained 600 μL of conditioned medium, while the upper chamber contained 200 μL of BMSCs suspended in serum-free medium. The transwell filter system containing the BMSCs was subsequently placed in a humidified incubator. After 12 h of incubation at 37°C, BMSCs were fixed with 4% paraformaldehyde, and the non-traversed cells that remained on the filter's upper surface were carefully removed with a cotton swab. Migrated (traversed) cells on the lower side of the filter were stained with crystal violet for 30 min, and then counted in five randomly selected fields under an inverted microscope. For the CCK8 proliferation assay, BMSCs (3000 cells per well) were seeded in 96-well plates and incubated at 37°C with 5% CO2. The plates were subsequently processed with Cell Counting Kit 8 (CCK8) solution for specified durations at 37°C. Optical density was measured at 450 nm on a microplate reader to calculate the percentage of cell viability. The proliferation of 5 × 10⁴ BMSCs was evaluated using 5-ethynyl-2′-deoxyuridine (EdU) incorporation assay reagent in a 96-well plate. Simply put, the cells were exposed to the EdU reagent for 4 h once cell growth in the plate reached 80% confluence. Cells were fixed, permeabilized and stained with 4′,6-diamidino-2-phenylindole (DAPI) solution before being observed under a fluorescent microscope (Leica). For the scratch wound-healing assay, cells were seeded at a density of 1 × 10⁶ cells/well in six-well plates, and a scratch was formed on the cell monolayer 6 h later. The cells were placed in a humidified incubator with 5% CO2 at 37°C after being washed three times with serum-free media. They were then monitored for a further 24 h.
New scratch width/original scratch width was used as the unit of measurement for cell migration. For ALP staining, the cells were PBS-washed before being fixed in 4% paraformaldehyde at room temperature for 30 min. Following that, the samples were stained with the BCIP/NBT Alkaline Phosphatase Color Development Kit according to the manufacturer's instructions. The images were captured with a scanner (GE Image Scanner III). The staining was simultaneously viewed under a microscope. We measured the ALP enzyme in three randomly chosen fields using Image-J software. BMSCs were cultured in plates with different conditioned mediums for 21 days. Cells were rinsed three times in PBS after the 21-day osteogenic induction, and then paraformaldehyde was used to fix the cells for 30 min at room temperature. The samples were then washed and stained at room temperature with Alizarin Red S. The images were captured with a scanner (GE Image Scanner III). The staining was simultaneously viewed under a microscope, and after that, hexadecylpyridinium chloride monohydrate was employed for quantification. RAW264.7 cells were cultured in BioFlex culture plates after Piezo1 silencing or Yoda1 administration. Cells were then loaded with Fluo-3 AM and simultaneously subjected to mechanical stretch for 0–6 h. The macrophages were washed with PBS three times after mechanical stretch. Ca2+ fluorescence images were captured with a fluorescent microscope (Leica). TGF-β1 expression levels were quantified for inflammatory cytokine measurement. The supernatant collected after the macrophages had been stretched was used for detection. ELISA was performed as directed by the manufacturer's instructions. Tibia bones were extracted, fixed in 4% paraformaldehyde solution and scanned with a high-resolution micro-CT at a resolution of 15.6 μm.
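The scratch-assay migration measure described above (new scratch width divided by original width, so smaller values mean more migration) can be sketched with illustrative widths; the helper names are hypothetical:

```python
# Sketch of the scratch-assay migration measure from the text:
# new scratch width / original scratch width (smaller = more migration).
# Widths below are illustrative numbers, not measured data.

def migration_ratio(new_width, original_width):
    return new_width / original_width

def percent_closure(new_width, original_width):
    return 100.0 * (1 - migration_ratio(new_width, original_width))

print(migration_ratio(0.4, 0.8))   # 0.5
print(percent_closure(0.4, 0.8))   # 50.0
```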
Three-dimensional images were reconstructed for analysis, and cross-sectional images of the distal tibia were employed, based on the selection of the volume of interest (VOI), which was generated in accordance with a previous report. After being fixed in paraformaldehyde overnight, the samples were decalcified in 14% EDTA and embedded in paraffin. The tibias were cut into 4-μm-thick sections. The slices were stained with haematoxylin and eosin (H&E). The trabecular structure and osteocyte lacunae were examined to assess bone alterations. Femurs were first deparaffinized and heated to facilitate antigen retrieval for immunohistochemical tests. Goat serum was used to block tissue slices for 30 min at room temperature. TGF-β1 and F4/80 primary antibodies were added and incubated overnight at 4°C. The sections were then treated with fluorescein Cy3-conjugated secondary antibody (Beyotime) and fluorescein CoraLite488-conjugated secondary antibody (Proteintech) on the second day. Nuclei were stained with DAPI. A fluorescent microscope was used to obtain the images (Carl Zeiss). p-Values were calculated using a two-way analysis of variance (ANOVA) with Bonferroni's correction or multiple t-tests. A p-value of <0.05 was judged statistically significant. To limit experimental error, all tests were repeated three times. The data were recorded as mean ± SD. GraphPad Prism 8.0 was used for statistical analysis. Mechanical stretch increased the proportion of CD206+ cells in RAW264.7 cells, further demonstrating that stress alters the phenotype of macrophages. Piezo1 inhibitor (GsMTx-4) or TGF-β receptor I, II inhibitor (LY-364947) was injected. Running significantly increased bone mass in mice, but we could see that mechanical force transmission was also stopped when Piezo1 was inhibited, and bone mass in the GsMTx-4-injected group was almost the same as in the control group.
TGF‐β1 released by macrophages failed to function after TGF‐β receptor I, II was blocked, resulting in severely impaired bone repair in the tibia of mice , we found that the p53 axis is important in the mechanical force‐dependent Piezo1 signalling that leads to macrophage polarization.+F4/80+ macrophages sense mechanical signals via Piezo1 and secrete TGF‐β, which encourages the production of periosteal bone. Recent research has also demonstrated the advantages of exercise on M2 macrophage polarization and energy homeostasis.2+ inward influx mediated by macrophage‐sensitive ion channels. Bone consists of cortical bone and bone trabeculae. Although the current work confirmed that Piezo1 mediates mechanical signalling in macrophages in vitro and in vivo, numerous issues remain. Actually, detected TGF‐β1 may also originate from MSCs. In conclusion, we found that mechanical forces mediated changes in calcium inward flow, p53 acetylation and deacetylation through Piezo1, inducing macrophages to polarize towards M2, secreting TGF‐β1 and promoting BMSC osteogenesis Figure . Conceived and designed the experiments: Hua Wang and Wei‐Bing Zhang. Performed the experiments: Guanhui Cai, Yahui Lu, Weijie Zhong, Ting Wang, Yingyi Li and Xiaolei Ruan. Analysed the data: Hongyu Chen, Lian Sun, Zhaolan Guan, Gen Li, Hengwei Zhang, Wen Sun and Minglong Chen. Wrote the paper: Guanhui Cai and Yahui Lu. The authors declare no conflict of interest. Figure S1. Morphology of macrophages before and after mechanical stretching. (A) Phalloidin staining (red) of RAW264.7 cells under the tension of 0–6 h. Blue indicates DAPI staining of nuclei. Scale bar, 10 μm. Figure S2. Effect of different concentrations of conditioned medium on osteogenesis of BMSCs. (A) After 7 days, bone marrow mesenchymal stem cells treated with conditioned media of macrophages under 2 h of tension were induced into osteogenesis in different concentrations of osteogenic differentiation medium. 
ALP expression in BMSCs was measured by the ALP staining method. The top are gross scanning images, and the lower are enlarged images. (B) Cell Counting Kit 8 (CCK8) was performed to assess BMSC proliferation following treatment with different concentrations of conditioned medium. Figure S3. Expression changes of Piezo1 and calcium influx under mechanical tension. (A) The intensity of Ca2+ fluorescence marked by Fluo‐3 AM probe under a fluorescence microscope after 0–6 h of tension. (B) Immunofluorescent staining of Piezo1 (red) in RAW264.7 cells under the tension of 0–6 h. Blue indicates DAPI staining of nuclei. Three random fields from each time period of the slides of cells were examined. Scale bar, 10 μm. Piezo1+ areas were quantified as area values of overlapping fields. (C) mRNA expression of Piezo1 in RAW264.7 cells after tension. Figure S4. Drug stimulation of Yoda1. Piezo1 expression in RAW264.7 cells treated with different concentrations of Yoda1 was measured by real‐time RT‐PCR. GAPDH was used for normalization. Data are presented as three biological replicates from three independent experiments. Table S1. List of antibodies used in this research, along with their dilutions and suppliers. Table S2. List of primers used for this study."} {"text": "In audio transduction applications, virtualization can be defined as the task of digitally altering the acoustic behavior of an audio sensor or actuator with the aim of mimicking that of a target transducer. Recently, a digital signal preprocessing method for the virtualization of loudspeakers based on inverse equivalent circuit modeling has been proposed. The method applies Leuciuc’s inversion theorem to obtain the inverse circuital model of the physical actuator, which is then exploited to impose a target behavior through the so-called Direct–Inverse–Direct Chain. 
The inverse model is designed by properly augmenting the direct model with a theoretical two-port circuit element called nullor. Drawing on these promising results, in this manuscript, we aim to describe the virtualization task in a broader sense, including both actuator and sensor virtualizations. We provide ready-to-use schemes and block diagrams which apply to all the possible combinations of input and output variables. We then analyze and formalize different versions of the Direct–Inverse–Direct Chain, describing how the method changes when applied to sensors and actuators. Finally, we provide examples of applications considering the virtualization of a capacitive microphone and a nonlinear compression driver. The transduction process that characterizes such devices involves different physical domains , which are not only affected by different nonlinear behaviors but also interact in a nonlinear fashion. For instance, piezoelectric loudspeakers are impaired by hysteretic phenomena which increase the Total Harmonic Distortion (THD) incident matrices ,17, it hcompendium for the design of inverse circuital models of audio transducers in different virtualization scenarios. In fact, we describe both the case of actuator virtualization and of sensor virtualization, making appropriate adjustments to the employed Direct–Inverse–Direct Chain [In this paper, we discuss the task of audio transducer virtualization from a general theoretical perspective, by analyzing different scenarios and combinations of input/output signals. Our aim is to provide a ct Chain ,26. In dct Chain ,27. The manuscript is organized as follows. 
In this section, we first provide background knowledge on nullors, and, then, we present the four major classes of inversion scenarios, supplementing the overview that is available in the literature, which only comprises two cases out of four. Nullors are theoretical two-port elements composed of two other theoretical one-ports: a nullator, which has both port voltage and port current equal to zero, and a norator, which is characterized by unconstrained port variables ,29,30. The nullor is the fundamental element for carrying out the inversion of circuits according to Leuciuc’s theorem ,18,19. In the following subsections, we will reword the original theorem considering the four possible combinations of input and output signals of the system to be inverted, providing proofs and a complete overview of the method. Before presenting the nullor-based inversion theorem, let us consider the two linear-time-invariant (LTI) non-autonomous circuits shown in Direct System, are Inverse System, are Direct System, we can write down the following system of equations v1 = z11 i1 + z12 i2, v2 = z21 i1 + z22 i2. In Equation , v^x and x of the network shown in Hereafter, we first introduce a reworded version of Leuciuc’s theorem which also generalizes the above approach for the inversion of linear circuits to the case of nonlinear circuits, and then we go through possible inversion scenarios characterized by pairs of input/output variables of different kinds. Theorem 1. Let us consider a nonlinear non-autonomous circuit containing at least one nullor, as the one shown in Proof. v as the output, we can describe such a system according to the state-space formalism as follows U and u and t. If we re-introduce the nullator, we constrain voltage v to be zero, leading the system into an unphysical condition. 
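The one-port constraints that define the nullor, together with the two-port impedance description used for the LTI circuits above, can be summarized as follows. This is a sketch in standard z-parameter notation; the symbols follow common circuit-theory conventions rather than the paper's figures:

```latex
% Nullator: both port variables are forced to zero
v_{\mathrm{null}} = 0, \qquad i_{\mathrm{null}} = 0
% Norator: both port variables are left unconstrained
v_{\mathrm{nor}} \in \mathbb{R}, \qquad i_{\mathrm{nor}} \in \mathbb{R}
% Impedance (z-parameter) description of an LTI two-port
v_1 = z_{11}\, i_1 + z_{12}\, i_2, \qquad
v_2 = z_{21}\, i_1 + z_{22}\, i_2
```

A nullator in parallel (or series) with a norator is equivalent to a short (or open) circuit, which is why a circuit without nullors can be augmented with such a pair without altering its behavior.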
In order to avoid such an unphysical state, one of the two sources u and u on the entire state space, we can write y is a bounded solution for Equation driven by the output voltage of the Direct System. The inverse of Network A is Network B of Let us consider Network A of theorem , and lat theorem ,17 for dDirect System driven by the output current of the Direct System. The inverse of Network C is Network D of Let us consider Network C of uciuc in . Let us Direct System, this can be augmented with the parallel connection of a nullator and a norator. In fact, this adjunct does not modify the behavior of the circuit given that a nullator and a norator in parallel are equivalent to a short circuit [Inverse System will be similar to the one shown in In the case in which no nullors are present into the circuit . Such a circuit . The resDirect System features as input a current signal and as output a voltage signal, namely, Direct System. For this particular case, the Inverse System is obtained by replacing the input current source of the Direct System with the norator and the norator with a VCVS driven by the output voltage of the Direct System. The inverse of Network E is Network F of Let us consider Network E of Direct System, this can be augmented with the series connection of a nullator and a norator. As for the VIVO case, such a series connection must be inserted between the very same nodes where the output voltage is taken. The result will be a circuit similar to the one shown in In the case in which no nullors are present into the Direct System features both as input and output a current signal, namely, Direct System. For this particular case, the Inverse System is obtained by replacing the input current source of the Direct System with the norator and the norator with a CCCS driven by the output current of the Direct System. 
The inverse of Network G is Network H of Let us consider Network G of Direct System ∑n=1∑n=1adjoiPassive elements are kept without any changes.Nullators are replaced with norators, while norators with nullators.The input voltage is replaced with a short circuit . The output of the adjoint circuit will be then the current flowing through such a short, where the positive direction follows the element convention, i.e., from the positive to the negative terminal.A current source is connected to the output port. This will be the input of the adjoint circuit. In this case, the direction of the current follows the source convention, i.e., from the negative to the positive terminal.Controlled sources are replaced with their dual .The procedure for deriving the adjoint of a given circuit, whose input is a voltage signal, can be summarized as follows:Similar considerations can be drawn for current inputs. The interested reader is referred to ,33 for aDirect System or directly on the Inverse System. For example, Inverse System of Direct System shown in Direct System and applying then the rules presented in the previous subsections for deriving the Inverse System.The adjoint transformation can be carried out either on the In the next section, we will present the inversion-based virtualization algorithm providing details on how to apply it for both the cases of actuator and sensor virtualization.Direct–Inverse–Direct Chain (DIDC) and was proposed in [Direct Systems, and one Inverse System. The Inverse System is always implemented in the digital domain, whereas, according to the considered actuator or sensor application, only the first or the last Direct System is implemented in the digital domain since the other is the actual physical transducer. Moreover, in real scenarios, amplifiers could be present in-between blocks. 
Hence, gains should be considered at different stages of the processing chain for the algorithm to properly work.We now present a general block chain to perform virtualization of transducers. Such a chain, shown in posed in for addrInverse System and the Physical Direct System is equivalent to the identity. This means that the digital processing chain allows us to somehow cancel out the behavior of the transducer such that the target behavior, i.e., the behavior of the digital Direct System, can be imposed. Hence, the proposed processing chain can be employed to accomplish the task of transducer virtualization .The DIDC working principle is based on the assumption that the cascade of the In the following two subsections, we will present application-specific DIDCs targeting both the cases of actuator and sensor virtualization.Target Direct System is the digital implementation of the actuator circuital model which we would like to obtain, whereas the Inverse System is the inverse circuital model of the Physical Direct System, which is the transducer itself. Hence, given that we are considering actuation, we may call this chain Target-Inverse-Physical Chain (TIPC), since the target behavior must be imposed in a pre-processing phase and thus before driving the actuator, i.e., the Physical Direct System.Let us consider the particular DIDC shown in Target Direct System) can be a linearized or equalized version of A, or a different speaker. It follows that the more accurate the considered electrical models, the higher the performance of the algorithm. Nonetheless, in [For the case of loudspeaker virtualization, the input ectively ,16,17,27Direct System, and the derivative of the displacement in the Inverse System.It is worth adding that other variables aside currents and voltages could be taken into account to obtain the inverse of a given circuit. 
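As a concrete illustration of the cancellation assumption behind the DIDC, the sketch below stands in for the three blocks with simple first-order digital filters: the "physical" stage is cancelled by its exact inverse, so the full cascade reproduces the target response. All coefficients and the target filter are illustrative and not taken from the manuscript's circuital models.

```python
# Minimal sketch of the Direct-Inverse-Direct Chain (DIDC) principle with
# first-order IIR filters standing in for the circuital models.

def iir1(b0, b1, a1, x):
    """First-order IIR filter H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1)."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x_prev - a1 * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# "Physical" direct system and its exact inverse 1/H(z) = (1 + a1 z^-1)/(b0 + b1 z^-1)
b0, b1, a1 = 0.9, 0.2, -0.5
direct = lambda x: iir1(b0, b1, a1, x)
inverse = lambda x: iir1(1.0 / b0, a1 / b0, b1 / b0, x)

# Target behavior to impose (here a mild low-pass, again purely illustrative)
target = lambda x: iir1(0.5, 0.5, 0.0, x)

x = [1.0] + [0.0] * 63                      # impulse input
chain = direct(inverse(target(x)))          # Target -> Inverse -> Physical
rmse = (sum((c - t) ** 2 for c, t in zip(chain, target(x))) / len(x)) ** 0.5
print(rmse)  # inverse cancels the physical stage, so the chain matches the target
```

In the actuator case the target is imposed before the inverse and physical stages, as above; for a sensor the physical stage comes first and the target response is applied last.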
For example, for the case of loudspeakers, it might be convenient to consider the displacement of the diaphragm as the output variable. In this case, another stage should be inserted into the processing chain for performing the integral of the velocity in the Target Direct System). In this case, the purpose of virtualization is to improve the performance of the transducers by reducing the Total Harmonic Distortion (THD) imposing somehow the acoustic behavior of an ideal version of the transducer under consideration. Such a discussion is also valid for the specific DIDC that we introduce in the next subsection.Finally, note that according to the type of virtualization, the blocks could be either linear or nonlinear. For example, all the three blocks could be nonlinear if the chain is exploited for imposing the nonlinear sonic behavior of a target transducer. Instead, if linearization is envisaged, one out of the three blocks will be linear , since first the audio signal is acquired by means of the sensor, and then, after compensating for the physical behavior by means of the Inverse System, the signal is processed to impose the target acoustic response. For the case of microphone virtualization, the input signal Let us now consider the DIDC shown in Once again, input and output variable different from voltages and currents can be considered by introducing integrators and derivators into the green blocks of the processing chain.In this section, we provide an example of sensor virtualization by taking into account a capacitive microphone as a case study. For this application, we employ a linear model, while an example of transducer virtualization based on nonlinear models will be provided in the next section. 
The microphone is described by means of the circuit shown in The two transformers model the transduction between the physical domains, where The implementation of the microphone circuital model in the digital domain can be carried out by employing different techniques ,36,37. IIn order to test the accuracy of the WD implementation, we compare the Discrete Fourier Transform (DFT) of the impulse response obtained by simulating the reference circuit in the WD domain with the DFT of the impulse response obtained by simulating the same circuit in Mathworks Simscape (SSC), for both BK4134 and BK4146 microphones. The curves are then normalized with respect tot the pressure at 1 kHz as it is typically done to describe the microphone sensitivity. The results are shown in Direct System. By applying the theorem presented in Inverse System circuit shown in Inverse System, once augmented the Direct System with a parallel connection of a nullator and a norator as explained in Inverse System can be implemented in the WD domain in a fully explicit fashion. In order to validate the Inverse System implementation, we consider the processing chain in Direct System and the Inverse System, and we verify that the output of the cascade is equal to the input of the same cascade, i.e., Direct System to be an impulse and we compute the response of the microphone, which is shown in Inverse System with the obtained voltage signal and we compare the output Inverse System is indeed an impulse. In order to further remark the accuracy of the Inverse System implementation, we compute the Root Mean Square Error (RMSE) between the input and output of the processing chain, obtaining a result below the machine precision. 
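The normalization step used above for the sensitivity curves can be sketched as follows: take the DFT of the simulated impulse response and normalize the magnitude response to its value at 1 kHz. The impulse response below is a made-up decaying exponential rather than a simulated microphone, and the sample rate is an assumption.

```python
import cmath, math

fs = 48000                                   # sample rate (assumed)
h = [0.8 ** n for n in range(256)]           # placeholder impulse response
N = len(h)

def dft_mag(h, k):
    """Magnitude of the k-th DFT bin of h."""
    return abs(sum(h[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))

k_ref = round(1000 * N / fs)                 # DFT bin closest to the 1 kHz reference
ref = dft_mag(h, k_ref)
resp_db = [20 * math.log10(dft_mag(h, k) / ref) for k in range(N // 2)]
print(resp_db[k_ref])  # -> 0.0 dB at the reference bin, by construction
```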
Finally, a similar test is carried out considering the circuit equivalent parameters of microphone BK4146; even in this case, the RMSE is numerically zero. In this subsection, we refer to the digital implementation of the microphone BK4134 equivalent circuit as Physical Direct System is microphone BK4134 and that we would like to obtain a voltage signal Target Direct System. It follows that the Inverse System is the circuital inverse model of microphone BK4134. The circuit parameters of both BK4134 and BK4146 are, once again, those listed in k the sample index, and Physical Direct System when the PITC-based algorithm is not active, the dashed red curve represents the target behavior that we would like to obtain, while the continuous blue curve represents the output of the PITC, i.e., the output of the system when the algorithm is active. The overlap between the blue and red curves is perfect given that the RMSE is below machine precision. The algorithm is thus able to impose the response of microphone BK4146 even if the pressure signal is acquired by means of BK4134, being characterized at the same time by real-time capabilities. In fact, the algorithm implemented in a MATLAB script is able to process, on average, one sample in Physical Direct System is simulated by means of the WDF shown in In this subsection, we provide an example of sensor virtualization. In particular, we employ the PITC-based algorithm presented in Finally, we would like to stress the fact that the circuital model shown in x is the displacement of the diaphragm in millimeters obtained integrating velocity We now consider a compression driver as an example of audio actuator. 
Such a transducer can be described by means of the circuit shown in follows 15)Bl nicely disappear, leading to a perfect match between With the purpose of validating the WD implementation, we consider a processing chain similar to that shown in Target Direct System is the linear version of the circuit shown in Target Direct System, the Inverse System, and the Physical Direct System. Contrary to what done in the microphone case, the desired behavior is imposed at the beginning of the processing chain since the physical transducer is an actuator. In order to test the chain, we set the input A is the amplitude, k is the sample index, Direct System output (“Non Compensated”) and of the TIPC output (“Compensated”), together with the values of THD. Physical Target System is simulated but, in real scenarios, it represents the actual physical transducer.In a last application scenario, we show how the transducer linearization task can be accomplished as a particular case of the proposed virtualization algorithms. We aim, in fact, at eliminating the distortion effect introduced by the nonlinear behavior of the loudspeaker SEAS. In order to reach this goal, we employ the TIPC-based algorithm presented in n Bl=Bl0 . The parThe TIPC-based algorithm can be thus promising for improving the acoustic response of the loudspeaker on the fly by pre-processing the electrical signal driving the loudspeaker itself. 
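The THD figure used above to compare the compensated and non-compensated outputs can be computed from the output spectrum as the ratio between the RMS of the harmonics and that of the fundamental. The sketch below uses a synthetic signal with one artificial second harmonic at 10% of the fundamental, so the expected THD is 0.1 by construction; all signal parameters are illustrative.

```python
import math, cmath

fs, f0, N = 48000, 1000, 480                 # chosen so f0 falls exactly on a bin
y = [math.sin(2 * math.pi * f0 * n / fs)
     + 0.1 * math.sin(2 * math.pi * 2 * f0 * n / fs) for n in range(N)]

def bin_mag(y, k):
    """Magnitude of the k-th DFT bin of y."""
    M = len(y)
    return abs(sum(y[n] * cmath.exp(-2j * math.pi * k * n / M) for n in range(M)))

k0 = round(f0 * N / fs)                      # fundamental bin (here k0 = 10)
fund = bin_mag(y, k0)
harm = math.sqrt(sum(bin_mag(y, m * k0) ** 2 for m in range(2, 6)))
thd = harm / fund
print(round(thd, 3))  # -> 0.1
```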
Finally, it is worth stressing that the tested virtualization algorithm can be exploited not only to accomplish linearization but also to impose a desired nonlinear behavior, similarly to what was already shown in .Direct System with a theoretical two-port called nullor, exploiting nullor equivalent models of short and open circuits [Physical Direct System, which is responsible for the actual transduction process, an Inverse System, which is the circuital inverse of the Physical Direct System, and a Target Direct System, which is the transducer characterized by the behavior that we would like to obtain. We exploited WDF principles to implement the digital blocks of such processing chains in a fully explicit fashion, i.e., without resorting to iterative solvers. Finally, we tested both the PITC-based and the TIPC-based algorithms for addressing microphone virtualization and linearization of a loudspeaker system with a nonlinear compression driver. In this paper, we described a general approach for the virtualization of audio transducers applicable to both sensors, like microphones, and actuators, like loudspeakers. We defined virtualization as the task of altering the acquired/reproduced signal by making it sound as if acquired/reproduced by another ideal or real audio sensor/actuator. In order to accomplish such a task, we started by reformulating Leuciuc’s theorem and proof for circuit inversion, providing case-specific guidelines on how to derive the inverse circuital model for all combinations of input and output variables. In particular, circuital inversion is achieved by augmenting the circuits . 
Future work may concern first the extension of circuital inversion theory to the Multiple Input Multiple Output (MIMO) case, and then, by exploiting this new theory, the development of refined DIDC-based algorithms for addressing the case of array virtualization, both in sensing and actuation scenarios."} {"text": "Oryza rufipogon species complex (ORSC), the wild progenitor of cultivated Asian rice, to identify genomic regions associated with environmental adaptation characterized by variation in bioclimatic and soil variables. We further examine regions for colocalizations with phenotypic associations within the same collection. EAA results indicate that significant regions tend to associate with single environmental variables, although 2 significant loci on chromosomes 3 and 5 are detected as common across multiple variable types . Distributions of allele frequencies at significant loci across subpopulations of cultivated Oryza sativa indicate that, in some cases, adaptive variation may already be present among cultivars, although evaluation in cultivated populations is needed to empirically test this. This work has implications for the potential utility of wild genetic resources in pre-breeding efforts for rice improvement.Crop wild relatives host unique adaptation strategies that enable them to thrive across a wide range of habitats. As pressures from a changing climate mount, a more complete understanding of the genetic variation that underlies this adaptation could enable broader utilization of wild materials for crop improvement . 
HoweverIn recent years, there has been increased efforts in using ex situ germplasm collections as sources of traits to help adapt crops to climate change , soybeanOryza rufipogon species complex (ORSC) is the wild progenitor of cultivated Asian rice, Oryza sativa, a staple crop particularly important for feeding the world's poorest populations. The ORSC's geographic range is distributed widely across South, Southeast, and Eastern Asia, and the species occupies diverse tropical and subtropical habitats often found near or within human-disturbed areas such as cultivated fields, ditches, and irrigation channels in the Philippines and the National Institute of Genetics (n = 3) in Japan. Environmental data were sourced from WorldClim for precipitation and temperature variables (2) and the 10 soil variables at a resolution of 2.5 arcmin (approximately 12 km2). The soil data were available at 6 depths: 0–5, 5–15, 15–30, 30–60, 60–100, and 100–200 cm. These data were aggregated to 2 layers (topsoil and subsoil) using a weighted mean approach, whereby the weights were proportional to the size of the original layer depths (n = 240) was recently phenotyped by All data analyzed in this study were retrieved from public sources. Genotype data on a panel of 286 ORSC accessions were originally published by ariables and Inter = 0.3. Communities of highly connected clusters of environmental and phenotypic variables were detected using group_infomap in the R package ggraph on the 16 phenotypic variables that clustered with bioclimatic and biophysical variables, first standardized to a mean of 0 and standard deviation of 1, was conducted using the R library FactoMineR (K) using the R package rrBLUP v. 4.6.1 and minor allele frequency (≥0.05) using Plink1.9 . GWA was also carried out using scores from the first 5 components (n = 5 variables) that resulted from the phenotype PCA. 
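The depth-weighted aggregation described for the soil variables (layer weights proportional to layer thickness) can be sketched as follows. The 0–30 cm topsoil / 30–200 cm subsoil split is an assumption (the text lists only the six source depths), and the layer values below are made up.

```python
# Depth-weighted mean: each aggregate is the mean of layer values weighted by
# layer thickness. Keys are (top, bottom) depths in cm; values are illustrative
# (e.g. soil organic carbon in g/kg).
layers = {(0, 5): 21.0, (5, 15): 19.0, (15, 30): 18.0,
          (30, 60): 15.0, (60, 100): 12.0, (100, 200): 10.0}

def depth_weighted_mean(layers, lo, hi):
    picked = {d: v for d, v in layers.items() if d[0] >= lo and d[1] <= hi}
    total = sum(d[1] - d[0] for d in picked)
    return sum((d[1] - d[0]) * v for d, v in picked.items()) / total

topsoil = depth_weighted_mean(layers, 0, 30)    # (5*21 + 10*19 + 15*18) / 30
subsoil = depth_weighted_mean(layers, 30, 200)  # (30*15 + 40*12 + 100*10) / 170
print(round(topsoil, 2), round(subsoil, 2))
```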
Using the P-value inflation factor (lambda), we initially compared 3 GWA models: naïve (no polygenic background effect or adjustment for population structure), K (polygenic background effect only), and K + Q . The K + Q model with 2 PCs was identified as the best model based on having a high median lambda value but low variance; this model was used for all subsequent analyses (Genomewide association (GWA) analyses were run using the GWAS function in analyses . For envhttps://ftp.ensemblgenomes.ebi.ac.uk/pub/plants/release-55/gff3/oryza_sativa/; last accessed 2022 December 12). We limited our search to those predicted genes within 150 kb of a given associated SNP. This threshold was based on the average distance at which linkage disequilibrium (LD) decayed to 0.2 across the 6 identified ORSC subpopulations , which houses data on 3,000 cultivated rice genomes . Out of the 41 environmental variables (19 bioclimatic and 10 soil × 2 depths) explored from these georeferenced locations of the ORSC panel, we found that soil organic carbon and precipitation of the driest month/quarter (bio14/bio17) had the greatest dispersion based on the coefficient of variation, while soil bulk density, mean temperature of the warmest quarter (bio10), and mean temperature of the wettest quarter (bio8) were found to be the least variable . In geneO. sativa (n = 259), a medium panel , and a small panel . Overall, we found that the medium panel yielded the most GWA runs that passed QC, resulting in 37 significant SNPs detected across 11 variables , followed by the small panel, which yielded 16 significant SNPs detected across 2 temperature and 5 soil variables. 
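The inflation-factor criterion used above to compare GWA models can be sketched as the classic genomic-control lambda: the median of the observed 1-df chi-square association statistics divided by its null expectation (~0.455). The p-values below are invented; a lambda near 1 indicates a well-calibrated model, while the toy set here is deliberately enriched for signal and thus inflated.

```python
from statistics import NormalDist, median

pvals = [0.9, 0.7, 0.5, 0.3, 0.12, 0.04, 1e-4, 1e-7]    # toy GWA p-values
z = NormalDist()
# Convert two-sided p-values to 1-df chi-square statistics: chi2 = z_{1-p/2}^2
chi2 = [z.inv_cdf(1 - p / 2) ** 2 for p in pvals]
lam = median(chi2) / 0.4549                              # median of chi-square(1)
print(round(lam, 2))
```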
The large panel, despite having the greatest number of individuals, only yielded 9 significant SNPs detected across 4 soil variables , we carried out genome-wide association analyses on 19 bioclimatic variables and 10 soil variables, each at 2 depths, leveraging published genotype data and available georeference information on the panel. Previously, subpopulation-specific GWAS has been carried out with success in the highly stratified subpopulations of cultivated Asian rice, ariables . This maariables . BecauseMaterials and Methods). Chromosomes 7 and 12 harbored 3 significant loci each, chromosomes 1–6 and 9 contained 2 significant loci each, while chromosomes 8 and 11 had one each previously shown to be inducible by abiotic stress in rice that encode for MT-like type 1 proteins. MTs in plants play known roles in the scavenging of reactive oxygen species (ROS). In rice, dehydration has been demonstrated to induce expression of an MT type-1 gene, and overexpression of the same gene improved drought tolerance as well as a superoxide dismutase (SODCC1), whose expression has been reported to be inducible by cold temperatures and ABA . In addition to 10 SNP colocalizations identified between highly similar environmental variables [e.g. clay in topsoil with clay in subsoil or precipitation of the driest month (bio14) with precipitation of the driest quarter (bio17)] that would be expected, we detected 2 SNPs that colocalized across precipitation, temperature, and/or soil variables. One was located on chromosome 3 at 13,114,933 bp and was shared between one temperature variable [mean temperature of the warmest quarter (bio10)] and 2 related soil variables (pH of the topsoil and pH of the subsoil) . Within and ABA . As soilLSY1; RSR1; OsRAD51B involved in DNA repair). Also among the named genes were 3 ATP-binding cassette (ABC) proteins: one was a cloned gene involved in aluminum tolerance . 
Found in both prokaryotes and eukaryotes, ABC proteins are primarily transmembrane transporters that also contain a conserved nucleotide binding domain. In plants, these transporters are documented to play varied and critical roles in cellular detoxification , growth and development (e.g. hormone transport and cuticle development), and pathogen defense (e.g. via secretion of plant secondary metabolites) and various phytohormones such as abscisic acid (down-regulation in root), gibberellic acid (down-regulation in root), and jasmonic acid (up-regulation in both shoot and root) (−1), and had higher diurnal temperature fluctuations relative to annual temperatures [average of 72.4 vs 49.9% for isothermality (bio3)] ], one soil variable (nitrogen in the topsoil), and 2 precipitation variables [precipitation of the driest month (bio14) and precipitation of the driest quarter (bio17)] . These c (bio3)] . Based oTo investigate whether loci significant for environmental variables also colocalized with regions of the genome associated with plant phenotypes, we leveraged trait data published recently on this panel by To reduce dimensionality of the phenotype dataset, a principal components analysis was carried out on the 16 phenotypes that did cluster with environmental variables. The first 5 principal components explained 78% of the total variation, and scores of these PCs were used as composite traits in conjunction with the SNP data to carry out genome-wide scans for associations. Results from GWA on scores from 2 of the components, PC2 and PC4, passed QC and yielded significantly associated SNPs . We founO. sativa. Except for the W5 subpopulation for the QTL on chromosome 3, the 2 minor alleles were rare in the wild populations (0–0.2). In contrast, we observed very high frequencies in the cultivated populations based on a database of 3,000 re-sequenced rice genomes . In the cultivated species, O. 
sativa, the frequency of the minor allele for the QTL on chromosome 5 was 0.67 in the tropical japonica subpopulation and fixed or nearly fixed in all other subpopulations while the minor allele of the chromosome 3 QTL was fixed across cultivated subpopulations , which were geographically associated (indica (indica-3 (at 0.77); this population was localized to South Asia . We note, however, that the presence of these alleles in cultivated populations does not necessarily indicate they have the same effects as in wild populations. More experimental work in O. sativa is needed to test this.We next cataloged the frequency of the minor alleles at the QTLs identified on chromosomes 3 and 5 across the W1, W2, W4, W5, and W6 subpopulations of ORSC as well as in its cultivated relative, ulations . Previou . Nonetheless, back-introgression may still be the most plausible explanation. The W5 subpopulation is small, geographically constrained in the mountainous region of Nepal, and highly differentiated at a whole-genome level from the O. sativa detected in wild rice and discuss potential candidate genes within LD of their significant SNPs. Documentation of allele frequencies at these SNPs across the subpopulations of cultivated jkad128_Supplementary_DataClick here for additional data file."} {"text": "Background: To estimate associations of sulfur-containing amino acids (SAAs) in the early trimester of pregnancy and gestational diabetes mellitus (GDM) and estimate associations of maternal SAAs with adverse growth patterns in offspring. Methods: We established a 1:1 matched case-control study (n = 486) from our cohort of pregnant women, and 401 children were followed up at ages 1 to 8 years. We conducted binary conditional logistic regression to estimate the risk associations of serum SAAs with GDM. Multinomial logistic regression was implemented to explore associations of maternal SAAs with adverse growth patterns in the offspring. 
Results: High serum methionine and cystine were independently associated with increased GDM risk. Conversely, a low level of serum taurine was independently associated with increased GDM risk. Maternal high cystine and low taurine were also associated with an increased risk of persistent obesity growth pattern (POGP) in offspring, and the effect was largely independent of GDM. Conclusions: High serum methionine and cystine and low serum taurine in the early trimester of pregnancy were associated with a greatly increased risk of GDM. Maternal high cystine and low taurine were associated with elevated risk of offspring POGP, largely independent of GDM. Gestational diabetes mellitus (GDM) is one of the most common complications of pregnancy and affected approximately 14.0% of pregnant women globally in 2021, according to an estimate by the International Diabetes Federation. Sulfur-containing amino acids (SAAs), including methionine, cysteine, cystine and taurine, play major parts in the transmethylation and transsulfuration pathways. Childhood obesity is one of the concerns in GDM. Undue insulin resistance, hyperglycemia during pregnancy and/or DNA methylation in the fetus are potential risk factors for obesity during childhood. A small study including 64 prepubertal children (5–9 years old) showed that the level of plasma cystine was elevated in healthy obese children, while plasma cysteine was not different compared to healthy lean children.
We established a case-control study from our cohort of pregnant women and its follow-up data to examine (1) associations between serum SAAs concentrations in the early trimester of pregnancy and subsequent GDM risk; (2) associations between maternal SAAs levels in the early trimester of pregnancy and obesity-related growth patterns, if any, in offspring at 1 to 8 years of age; and (3) whether GDM mediates the risk associations between maternal SAAs concentrations in the early trimester of pregnancy and adverse growth patterns in offspring. The design and methods of this study have been reported previously. At 24–28 weeks of pregnancy, all pregnant women took a 50 g 1 h glucose challenge test (GCT). Thereafter, pregnant women with a positive GCT were transferred to the GDM center, where they underwent a standard 75 g 2 h oral glucose tolerance test (OGTT) for diagnosis of GDM. GDM was diagnosed based on the International Association of Diabetes and Pregnancy Study Groups’ (IADPSG) criteria. During the fieldwork, 2991 pregnant women out of 22,302 participants were invited and agreed to provide overnight fasting blood samples in the first trimester of pregnancy. Among them, we excluded 227 pregnant women who did not have GCT results or whose GCT results were positive but who did not have OGTT results. Finally, we included 243 women with GDM and 243 women free of GDM, matched by maternal age (±1 year), in the current analysis. After delivery, the 486 children of the included women were asked to attend the follow-up study and were offered health examinations, including measurement of body height and weight, each year from 1 to 8 years.
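The IADPSG one-step rule applied to the 75 g OGTT is simple to state programmatically. The sketch below (Python; the function name is ours, the mmol/L thresholds are the published IADPSG cutoffs) flags GDM when any single OGTT value meets or exceeds its threshold:

```python
def gdm_by_iadpsg(fasting, one_hour, two_hour):
    """Apply the IADPSG one-step criteria to a 75 g 2 h OGTT.

    Plasma glucose values are in mmol/L. GDM is diagnosed when ANY
    single threshold is met or exceeded:
    fasting >= 5.1, 1 h >= 10.0, or 2 h >= 8.5 mmol/L.
    """
    return fasting >= 5.1 or one_hour >= 10.0 or two_hour >= 8.5
```

Note that, unlike older two-step criteria, a single abnormal value is sufficient for diagnosis, so the rule is a plain logical OR over the three measurements.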
Finally, 401 children (a response rate of 82.5%) participated in the postnatal follow-up and completed body weight and height measurements at 1 to 8 years of age. Nurses or obstetricians with uniform training measured maternal weight, height, and systolic/diastolic blood pressure (SBP/DBP) at the first antenatal appointment, and offspring height and weight at each follow-up visit at ages 1 to 8 years. We also collected other demographic information on maternal age, parity, ethnicity, family history of diabetes, smoking and drinking habits, and child gender. We calculated BMI by dividing weight (kg) by squared height (m). Measurement of serum sulfur-containing amino acids was previously described. We performed all statistical analyses using SAS version 9.4. Quantitative data were expressed as mean ± standard deviation (SD) or median (interquartile range) where appropriate. Differences in quantitative data between the two groups were tested using a paired Student t-test if normally distributed, or a Wilcoxon signed-rank test otherwise. Categorical data were evaluated by the McNemar test or Fisher’s exact test. Binary conditional logistic regression was conducted to obtain odds ratios (OR) and their 95% confidence intervals (CI) of serum SAAs for GDM, with covariates retained at p < 0.05 at entry and exit. Restricted cubic spline (RCS) analysis was employed to check the linearity of the associations between SAAs concentrations and GDM risk, as before. A group-based trajectory modelling method applied to the longitudinal BMI data was performed to identify distinct BMI growth patterns from 1 to 8 years of age. The baseline characteristics of the participating pregnant women are shown in the corresponding table. Serum methionine and cystine in the early trimester of pregnancy were positively associated with GDM risk in a nonlinear manner, while serum cysteine and taurine were negatively associated with GDM risk in a nonlinear manner.
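The study fits its conditional logistic regressions in SAS. As an illustration of why 1:1 matching leads to this model, the Python sketch below exploits the fact that, with exactly one case and one control per matched pair, the conditional likelihood collapses to an intercept-free logistic model on the within-pair covariate differences. The data and the "cystine" covariate here are toy assumptions, not the study's data:

```python
import numpy as np

def conditional_logit_1to1(x_case, x_control, iters=500, lr=0.5):
    """Fit conditional logistic regression for 1:1 matched pairs.

    With one case and one control per stratum, the conditional likelihood
    is prod_i sigma(beta . d_i) with d_i = x_case_i - x_control_i, i.e. an
    intercept-free logistic model on within-pair differences. Plain
    gradient ascent on the log-likelihood suffices for this sketch."""
    d = np.asarray(x_case, float) - np.asarray(x_control, float)
    beta = np.zeros(d.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(d @ beta)))     # sigma(beta . d_i)
        beta += lr * d.T @ (1.0 - p) / len(d)     # score of the log-likelihood
    return beta

# Toy matched pairs: cases run higher on a single standardized covariate
# (a hypothetical "serum cystine") than their age-matched controls.
rng = np.random.default_rng(2)
n_pairs = 400
x_control = rng.normal(0.0, 1.0, (n_pairs, 1))
x_case = x_control + rng.normal(0.6, 1.0, (n_pairs, 1))
beta = conditional_logit_1to1(x_case, x_control)
odds_ratio = float(np.exp(beta[0]))   # > 1: higher values raise the odds of being the case
```

Because the matching variable (maternal age) is constant within each pair, it cancels out of the pairwise likelihood, which is exactly why matched factors cannot be estimated in a conditional model.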
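Group-based trajectory modelling fits a finite mixture of polynomial BMI curves (the study used a dedicated implementation). As a rough, self-contained illustration of the core idea of assigning each child's BMI curve to one growth pattern, the following Python sketch clusters simulated age-1-to-8 BMI curves with plain k-means; the four simulated groups only loosely mimic the PLGP/NGP/POGP/LOGP shapes and are not the study's estimates:

```python
import numpy as np

def _farthest_point_init(B, k):
    """Deterministic seeding: start from curve 0, then repeatedly add the
    curve farthest (in squared Euclidean distance) from the chosen set."""
    idx = [0]
    for _ in range(k - 1):
        dist = ((B[:, None, :] - B[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(dist.argmax()))
    return B[idx].astype(float).copy()

def cluster_bmi_trajectories(B, k=4, iters=50):
    """Assign each BMI curve (one row per child, one column per age)
    to one of k patterns with Lloyd's k-means algorithm."""
    centers = _farthest_point_init(B, k)
    for _ in range(iters):
        labels = ((B[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = B[labels == j].mean(0)
    return labels, centers

# Simulated curves, ages 1-8, loosely mimicking the four reported patterns
ages = np.arange(1, 9)
shapes = [
    13.0 + 0.0 * ages,                                # persistent lean (PLGP-like)
    16.0 + 0.1 * ages,                                # normal (NGP-like)
    20.0 + 0.2 * ages,                                # persistent obesity (POGP-like)
    16.0 + np.where(ages > 5, 2.5 * (ages - 5), 0.0), # late obesity (LOGP-like)
]
rng = np.random.default_rng(0)
B = np.vstack([s + rng.normal(0, 0.1, (50, 8)) for s in shapes])
labels, centers = cluster_bmi_trajectories(B, k=4)
```

The real method additionally yields posterior membership probabilities and lets model-fit criteria choose the number of groups; k-means fixes k and makes hard assignments, so this is only a didactic stand-in.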
Based on the group-based trajectory modelling approach, four distinct BMI growth patterns were observed in the 401 children who participated in the follow-up visits: (1) a persistent lean growth pattern (PLGP) characterized by a persistently low BMI over time; (2) a normal growth pattern (NGP) characterized by a middle, “normal” BMI over time; (3) a persistent obesity growth pattern (POGP) characterized by a high and persistently increasing BMI over time; and (4) a late obesity growth pattern (LOGP) characterized by a normal BMI before 5 years of age and a rapidly increasing BMI thereafter. Maternal cystine ≥ 150 nmol/mL was associated with a markedly increased risk of POGP in the offspring in univariate and multivariate analyses. Further adjustment for GDM slightly attenuated the OR of high cystine for POGP, but its statistical significance persisted. However, maternal high cystine was not significantly associated with LOGP risk in offspring. Similarly, maternal taurine ≤ 21.9 nmol/mL was associated with an elevated risk of POGP, while low taurine was not associated with LOGP risk in offspring. Further adjustment for GDM slightly attenuated the OR of low taurine for POGP, but the statistical significance remained. On the other hand, maternal methionine and cysteine were not significantly associated with the risks of POGP and LOGP in offspring. Our study explored and confirmed that (1) serum methionine and cystine in early pregnancy were positively associated with GDM risk, while serum taurine was negatively associated with GDM risk in the Chinese population; and (2) maternal high serum cystine and low serum taurine in the early trimester of pregnancy were also associated with increased POGP risk in offspring, largely independent of the occurrence of GDM. In recent years, a few animal and human studies have surveyed the risk associations of SAAs concentrations with insulin resistance and T2D, but their findings were inconsistent.
For instance, an animal study found that plasma levels of methionine and cysteine were markedly decreased, and hepatic methionine, cysteine and taurine were also decreased, in Zucker diabetic fatty rats. Biological links between SAAs and GDM remain unclear. SAAs play a major part in the transsulfuration pathway, which can protect against uncontrolled oxidative stress and inflammation through sulfur’s role in redox biochemistry. In recent years, much research has surveyed BMI growth patterns in children. A large longitudinal cohort study implemented in the USA identified three distinct BMI growth patterns in children aged 2–6 years using the group-based trajectory modelling method. The associations of maternal SAAs with offspring growth patterns had not been explored. A small study implemented in the USA showed that the concentration of plasma cystine was elevated in healthy obese children, while plasma cysteine was not different compared to healthy lean children. It is biologically plausible that maternal high serum cystine and low serum taurine were associated with markedly increased POGP risk in offspring. There is growing evidence that epigenetic changes are a potential mechanistic link between exposure to the uterine environment and childhood obesity in the offspring. GDM can increase the risks of maternal diabetes and cardiovascular disease in later life and the risk of childhood obesity in offspring. There were some limitations in our study. First, the findings of this study were obtained from a case-control study, which was from a single cohort of pregnant women and their offspring in Tianjin. These findings, therefore, need to be verified in other cohorts. Second, the levels of serum SAAs were related to dietary intake, but we did not obtain detailed diet information from the mothers. Third, homocysteine is an important sulfur-containing amino acid and may play a part in GDM.
However, in the targeted LC-MS/MS assay, homocysteine was not specifically included in the measurement scheme. We cannot exclude the possibility that it may have confounded the associations between other sulfur-containing amino acids and GDM in this report. Fourth, we noticed that the sample size of the POGP group was small, and the 95% CI of the risk association between maternal low taurine and offspring POGP was wide. Further replications in other cohorts are certainly needed to reduce the risk of type II error. In summary, our study found that (1) high concentrations of serum methionine and cystine and low concentrations of serum taurine in the early trimester of pregnancy were independently associated with greatly increased GDM risk, and (2) maternal high concentrations of serum cystine and low concentrations of serum taurine were also associated with greatly elevated POGP risk in offspring, largely independent of GDM. Further studies are warranted to verify our observations, and the biological mechanisms underlying these intriguing findings also need to be explored for a better comprehension of the pathophysiology of GDM and of the biological links from early-life SAA exposure to adverse childhood growth patterns."}