{"text": "In a previous paper, we introduced MUSCLE, a new program for creating multiple alignments of protein sequences, giving a brief summary of the algorithm and showing MUSCLE to achieve the highest scores reported to date on four alignment accuracy benchmarks. Here we present a more complete discussion of the algorithm, describing several previously unpublished techniques that improve biological accuracy and / or computational complexity. We introduce a new option, MUSCLE-fast, designed for high-throughput applications. We also describe a new protocol for evaluating objective functions that align two profiles.We compare the speed and accuracy of MUSCLE with CLUSTALW, Progressive POA and the MAFFT script FFTNS1, the fastest previously published program known to the author. Accuracy is measured using four benchmarks: BAliBASE, PREFAB, SABmark and SMART. We test three variants that offer highest accuracy (MUSCLE with default settings), highest speed (MUSCLE-fast), and a carefully chosen compromise between the two (MUSCLE-prog). We find MUSCLE-fast to be the fastest algorithm on all test sets, achieving average alignment accuracy similar to CLUSTALW in times that are typically two to three orders of magnitude less. MUSCLE-fast is able to align 1,000 sequences of average length 282 in 21 seconds on a current desktop computer..MUSCLE offers a range of options that provide improved speed and / or alignment accuracy compared with currently available programs. MUSCLE is freely available at Multiple alignments of protein sequences are important in many applications, including phylogenetic tree estimation, secondary structure prediction and critical residue identification. Many multiple sequence alignment (MSA) algorithms have been proposed; for a recent review, see . Two attLN) in the sequence length L and number of sequences N / [min - k + 1 ]. \u00a0\u00a0\u00a0 (1)\u03c4 is a k-mer, LX, LY are the sequence lengths, and nX(\u03c4) and nY(\u03c4) are the number of times \u03c4 occurs in X and Y respectively. This definition can be motivated by considering an alignment of X to Y and defining the similarity to be the fraction of k-mers that are conserved between the two sequences. The denominator of F is the maximum number of k-mers that could be aligned. Note that if a given k-mer occurs more often in one sequence than the other, the excess cannot be conserved, hence the minimum in the numerator. The definition of F is an approximation in which it is assumed that (after correcting for excesses) common k-mers are always alignable to each other. MUSCLE also implements a binary approximation FBinary, so-called because it reduces the k-mer count to a present / absent bit:Here FBinary = \u03a3\u03c4 \u03b4XY(\u03c4) / [min - k + 1 ]. \u00a0\u00a0\u00a0 (2)\u03b4XY(\u03c4) is 1 if \u03c4 is present in both sequences, 0 otherwise. As multiple instances of a given k-mer in one sequence are relatively rare, this is often a good approximation to F. The binary approximation enables a significant speed improvement as the size of the count vector for a given sequence can be reduced by an order of magnitude. This allows the count vector for every sequence to be retained in memory, and pairs of vectors to be compared efficiently using bit-wise instructions. When using an integer count, there may be insufficient memory to store all count vectors, making it necessary to re-compute counts several times for a given sequence.Here, mutation distance, i.e. 
the number of mutations that occurred on the historical path between the sequences. The historical path through the phylogenetic tree extends from one sequence to the other via their most recent common ancestor. The mutation distance is trivially additive. The fractional identity D is often used as a similarity measure; for closely related sequences 1 - D is a good approximation to a mutation distance . As sequences diverge, there is an increasing probability of multiple mutations at a single site. To correct for this, we use the following distance estimate . \u00a0\u00a0\u00a0 (6)Following MAFFT, we also implemented a weighted mixture of minimum and average linkage:dMixPC = (1 - s) dMinPC + s dAvgPC, \u00a0\u00a0\u00a0 (7)s is a parameter set to 0.1 by default. Clustering produces a pseudo-root (the last node created). We implemented two other methods for determining a root: minimizing the average branch weight . \u00a0\u00a0\u00a0 (15)Dk is the last position in X that is aligned to a letter in Y. Extract the special case of a gap of length 1:Here, xy = max { max(ki S + 1/2 \u03a3x \u03a3i (ni2 - ni) S. \u00a0\u00a0\u00a0 (24)SPFrequencies are computed as:f xi = ni [x]/ N. \u00a0\u00a0\u00a0 (25)Using frequencies,N2LP) but (25) and (26) are O(NLP). For N >> 20, this is a substantial improvement. Let SPg be the contribution of gap penalties to SP, so SP = SPa + SPg. It is natural to seek an O(NLP) expression for SPg analogous to (26), but to the best of our knowledge no solution is known. Note that in MUSCLE refinement, the absolute value of the SP score is not needed; rather, it suffices to determine the difference in the SP scores before and after re-aligning a pair of profiles. Let SP be the contribution to the SP score from a pair of sequences s and t, so SP = \u03a3s \u03a3t>s SP, and denote the two profiles by X and Y. Then we can decompose SP into intra-and inter-profile terms as follows:For simplicity, we have neglected sequence weighting; it is straightforward to show that (26) applies unchanged if weighting is used. Note that (23) is O + \u03a3s\u2208Y \u03a3t\u2208Y:t>s SP + \u03a3s\u2208X \u03a3t\u2208Y SP \u00a0\u00a0\u00a0 (27)SP = \u03a3Note that the intra-profile terms are unchanged in any alignment that preserves the columns of the profile intact, which is true by definition in profile-profile alignment. This follows by noting that any indels added to align the profiles are guaranteed to be external gaps with respect to any pair of sequences in the same profile. It therefore suffices to compute the change in the inter-profile term:XY = \u03a3s\u2208X \u03a3t\u2208Y SP. \u00a0\u00a0\u00a0 (28)SP\u03c0- be the alignment path before re-alignment and \u03c0+ the path after re-alignment. The change in alignment can be specified as the set of edges in \u03c0- or \u03c0+, but not both; i.e., by considering a path to be a set of edges and taking the set symmetric difference \u0394\u03c0 = (\u03c0- \u222a \u03c0+) - (\u03c0- \u2229 \u03c0+). The path \u03c0+ after re-alignment is available from the dynamic programming traceback. The path \u03c0- before re-alignment can be efficiently computed in O(LP) time. Note that in order to construct the profile of a subset of sequences extracted from a multiple alignment, those columns that contain only indels in that subset must be deleted. The set of such columns in both profiles is therefore available as a side effect of profile construction, and this set immediately implies \u03c0-. 
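As a small, self-contained illustration of the bookkeeping just described, the sketch below treats an alignment path as a set of edges and obtains Δπ as the set symmetric difference of the paths before and after re-alignment. The concrete edge encoding used here (the profile coordinates reached plus the move type) is an assumption made only for this example.

```python
def delta_pi(pi_before: set, pi_after: set) -> set:
    """Edges present in exactly one of the two alignment paths
    (the set symmetric difference described in the text)."""
    return pi_before ^ pi_after

# Example: encode a path edge as (x, y, move), where x and y are the profile
# coordinates reached and move is 'M' (match column), 'X' or 'Y' (gap column).
before = {(1, 1, 'M'), (2, 2, 'M'), (3, 2, 'X'), (3, 3, 'M')}
after  = {(1, 1, 'M'), (2, 2, 'M'), (2, 3, 'Y'), (3, 3, 'M')}
changed = delta_pi(before, after)   # {(3, 2, 'X'), (2, 3, 'Y')}
```

Only the columns corresponding to edges in `changed` need to be revisited when updating the score, which is exactly the saving exploited below.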
It is a simple O(LP) procedure to compute \u0394\u03c0 from \u03c0- and \u03c0+. Note that SPa is a sum over columns, and there is a one-to-one correspondence between columns and edges in \u03c0. The change in SPa can therefore be computed as a sum over columns in \u0394\u03c0, with a negative sign for edges from \u03c0-, reducing the time complexity from O(NLP) to O(N|\u0394\u03c0|). We now turn our attention to SPg. We say that a gap G intersects \u0394\u03c0 if and only if any indel in G is in a column in \u0394\u03c0, and denote by \u0393 the set of gaps that intersect \u0394\u03c0. If a gap does not intersect \u0394\u03c0, i.e. does not have an indel in a changed column, its contribution to SPg is unchanged. It therefore suffices to consider penalties for gaps in \u0393, again with negative signs for edges from \u03c0-. The construction of \u0393 is straightforward in O(NLP) time. Finally, a sum over pairs in \u0393 is needed, reducing the O(N2) component to the smallest possible set of terms.This reduces the average time by a factor of about two. We can further improve on this by noting that in the typical case, there are few or no changes to the alignment. This suggests computing the change in SP score by looking only at differences between the two alignments. Let NLP) time. Define a two-symbol alphabet {X, -} in which X represents any amino acid and - is the indel symbol. There are four dimers in this alphabet: XX, X-, -X and --, which denote by no-gap, gap-open, gap-close and gap-extend respectively. Re-write a multiple alignment in terms of these dimers, adopting the convention that dimer ab composed of symbol a in column x-1 and symbol b in column x is written in column x. Now consider the contribution to SPg of an aligned pair of dimers, written as ab\u2194cd. Clearly XX\u2194X- adds a gap-open penalty; XX\u2194-X adds a gap-close , and compute the change in SP by considering only those columns in \u0394\u03c0. We find use of the dimer approximation to marginally reduce benchmark scores. By default, MUSCLE therefore uses the exact SP score for N \u2264 100 and the dimer approximation for N > 100, where the higher time complexity of the exact score becomes more noticeable.We next describe an approximation to SP that can be computed in O against the number of false pairs with score \u2264 S (y axis); we call the resulting graph a discrimination plot. Ideally, all true pairs would score higher than all false pairs, in which case the profile function would be a perfect discriminator and would always produce perfect alignments. A function that perfectly discriminates will appear as a \u0393-shaped plot; a function that has no ability to discriminate will appear as a diagonal plot along the line x = y. If a function F has a discrimination plot that is always above another function G > DG(x) \u2200 x, where DF is the discriminator plot for F as a function of x), then F has a superior ability to discriminate true from false pairs compared with G. If the plots intersect, the situation is ambiguous and neither function is clearly superior. We used sets of structural alignments from [m(F) where m(x) is a monotonically increasing function). However, a monotonic transformation may change the alignments produced by a profile function, so we can regard high discrimination as a necessary but not sufficient condition for a good profile function. 
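A short sketch of how such a discrimination plot can be tabulated. Because the definition of the x axis is cut off above, it is assumed here to be the number of true pairs with score ≤ S; under that assumption the construction reproduces the behaviour described, with a perfect discriminator first climbing the y axis and then running across the top (the Γ shape), and a non-discriminating function giving the diagonal once both counts are normalised.

```python
import numpy as np

def discrimination_plot(true_scores, false_scores):
    """For each threshold S (swept over all observed scores), return the number
    of true pairs with score <= S (x) and of false pairs with score <= S (y)."""
    thresholds = np.sort(np.concatenate([true_scores, false_scores]))
    x = np.searchsorted(np.sort(true_scores), thresholds, side="right")
    y = np.searchsorted(np.sort(false_scores), thresholds, side="right")
    return x, y
```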
One can turn this into a virtue by noting that the discrimination plot allows the relative probability of true versus false to be determined from a score. It is therefore possible to numerically determine a log-odds function from the discrimination plot, which can be evaluated by table look-up. Using discrimination plots for PP2, we found the optimal transformation for LE to be close to linear, in contrast to other functions we tried, including PSP (results not shown). This observation further encouraged us to explore the performance of LE in an MSA algorithm. Testing on multiple alignment benchmarks we find LE to give superior results on BAliBASE, but statistically indistinguishable results on other databases (results not shown). MUSCLE therefore uses LE as the default choice as it sometimes gives better results but has not been observed to give lower average accuracy on any of our tests. It is also useful to introduce a method with a distinctively different scoring scheme as an alternative that may give better results on some input data and may provide unique features for incorporation into jury or consensus systems. One drawback of LE is its relatively slow performance due to the need to compute a logarithm for each cell of the dynamic programming matrix.We have previously attempted a systematic comparison of profile functions . The metnts from (PP) andnts from (PP2). Pnts from with \u2264 3nts from , LAMA [4nts from . Using PLP = O(L + N), the e-string construction for the root alignment, and a fixed number of refinement iterations.The complexity of MUSCLE is summarized in Table FBinary is used as a distance measure (Equation 2), the PSP profile function is used, and diagonal finding is enabled.MUSCLE offers a variety of options that offer different trade-offs between speed and accuracy. In the following, we report speed and accuracy results for three sets of options: (1) the full MUSCLE algorithm including Stages 1, 2 and 3 with default options; (2) Stages 1 and 2 only, using default options (MUSCLE-prog); and (3) Stage 1 only using the fastest possible options (MUSCLE-fast), which are as follows: Q, the number of residue pairs correctly aligned divided by the length of the reference alignment. For more discussion of the reference data, assessment methodology and a comparison of MUSCLE with T-Coffee and NWNSI, the most accurate MAFFT script, see [In Tables ipt, see .LP is O(L). Results are shown in Figure To compare speeds for a larger number of sequences, we created a test set by using PSI-BLAST to search the NCBI non-redundant protein sequence database for hits to dienoyl-coa isomerase (1dci in the Protein Data Bank ), selectMUSCLE demonstrates improvements in accuracy and reductions in computational complexity by exploiting a range of existing and new algorithmic techniques. While the design\u2013typically for practical multiple sequence alignment tools\u2013arguably lacks elegance and theoretical coherence, useful improvements were achieved through a number of factors. Most important of these were selection of heuristics, close attention to details of the implementation, and careful evaluation of the impact of different elements of the algorithm on speed and accuracy. MUSCLE enables high-throughput applications to achieve average accuracy comparable to the most accurate tools previously available, which we expect to be increasingly important in view of the continuing rapid growth in sequence data..MUSCLE is a command-line program written in a conservative subset of C++. 
At the time of writing, MUSCLE has been successfully ported to 32-bit Windows, 32-bit Intel architecture Linux, Solaris, Macintosh OSX and the 64-bit HP Alpha Tru64 platform. MUSCLE is donated to the public domain. Source code and executable files are freely available at"} {"text": "The hit criterion is a key component of heuristic local alignment algorithms. It specifies a class of patterns assumed to witness a potential similarity, and this choice is decisive for the selectivity and sensitivity of the whole method.group criterion combining the advantages of the single-seed and double-seed approaches used in existing algorithms. Second, we introduce transition-constrained seeds that extend spaced seeds by the possibility of distinguishing transition and transversion mismatches. We provide analytical data as well as experimental results, obtained with the YASS software, supporting both improvements.In this paper, we propose two ways to improve the hit criterion. First, we define the .Proposed algorithmic ideas allow to obtain a significant gain in sensitivity of similarity search without increase in execution time. The method has been implemented in YASS software available at The central problem is therefore to improve the trade-off between those opposite requirements.Sequence alignment is a fundamental problem in Bioinformatics. Despite of a big amount of efforts spent by researchers on designing efficient alignment methods, improving the alignment efficiency remains of primary importance. This is due to the continuously increasing amount of nucleotide sequence data, such as EST and newly sequenced genomic sequences, that need to be compared in order to detect similar regions occurring in them. Those comparisons are done routinely, and therefore need to be done very fast, preferably instantaneously on commonly used computers. On the other hand, they need to be precise, i.e. should report all, or at least a vast majority of interesting similarities that could be relevant in the underlying biological study. The latter requirement for the alignment method, called the hit) a potential similarity. Those patterns are formed by seeds which are small strings that appear in both sequences. FASTA = S2[j..j + k - 1] for some i \u2264 m and j \u2264 n, then these two k-words form a seed denoted . Two functions on seeds are considered: For a seed , the seed diagonal d is m + j - i. It can be seen as the distance between the k-words S1[i..i + k - 1] and S2[j..j + k - 1] if S2 is concatenated to S1, For two seeds and , where i1 and have a probability (1 - \u03b5) to belong to the same similarity iffWe first introduce some notations used in this section. Let D \u2264 \u03c1, \u00a0\u00a0\u00a0 (1)d - d| \u2264 \u03b4. \u00a0\u00a0\u00a0 (2)|k-words occur at close diagonals.The first inter-seed condition insures that the seeds are close enough to each other. The second seed diagonal condition requires that in both seeds, the two \u03c1 and \u03b4.We now describe statistical models used to compute parameters p for a match and (1 - p) for a mismatch. To estimate the inter-seed shift Dk, we have to estimate the distance between the starts of two successive runs of at least k matches in the Bernoulli sequence. It obeys the geometric distribution of order k called the Waiting time distribution [Consider two homologous DNA sequences that stem from a duplication of a common ancestor sequence, followed by independent individual substitution events. 
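To make the hit criterion concrete, here is a small sketch of the two conditions above. The seed-diagonal computation follows the definition given earlier (d = m + j - i, with m the length of the first sequence); the precise definition of the inter-seed shift D is truncated in the text, so the larger of the two start-to-start distances along the sequences is used as a stand-in — treat that choice as an assumption, not as the paper's definition.

```python
def seed_diagonal(seed, len_s1):
    """d = m + j - i for a seed (i, j, k), with m the length of the first sequence."""
    i, j, _ = seed
    return len_s1 + j - i

def same_group(seed_a, seed_b, len_s1, rho, delta):
    """Conditions (1) and (2): the inter-seed shift is bounded by rho and the
    seed diagonals differ by at most delta."""
    (i1, j1, _), (i2, j2, _) = seed_a, seed_b
    D = max(abs(i2 - i1), abs(j2 - j1))          # assumed inter-seed shift
    close_enough = D <= rho                                        # condition (1)
    same_diag = abs(seed_diagonal(seed_a, len_s1)
                    - seed_diagonal(seed_b, len_s1)) <= delta      # condition (2)
    return close_enough and same_diag
```

The thresholds ρ and δ are in turn derived from the statistical model just introduced, in which two homologous DNA sequences arise from a duplication of a common ancestor followed by independent individual substitution events.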
Under this assumption, the two sequences have an equal length and their alignment is a sequence of matched and mismatched pairs of nucleotides. We model this alignment by a Bernoulli sequence with the probability ribution ,26:\u03c1 such that the probability is (1 - \u03b5) for some small \u03b5.Using this formula, we compute k inside a Bernoulli sequence of length x. In a Bernoulli sequence of length x, the probability of the event Ip,x,r of having exactly r non-overlapping runs of matches of length at least k is given by the following recursive formula:Note that the Waiting time distribution allows us to estimate another useful parameter: the number of runs of matches of length at least r non-overlapping seeds of length at least k inside a repeat of size x. The recurrence starts with r = 0, in which case and is computed through the Waiting time distribution.This gives the probability of having exactly allows us to infer a lower bound on the number of non-overlapping seeds expected to be found inside a similarity region. In particular, we will use this bound as a first estimate of the group criterion introduced later.The distribution Indels (nucleotide insertions/deletions) are responsible for a diagonal shift of seeds viewed on a dotplot matrix. In other words, they introduce a possible difference between d and d. To estimate a typical shift size, we use a method similar to the one proposed in [posed in for the q at each of l nucleotides separating two consecutive seeds. Under this assumption, estimating the diagonal shift produced by indels is done through a discrete one-dimensional random walk model, where the probability of moving left or right is equal to q, and the probability of staying in place is 1 - 2q. Our goal is to bound, with a given probability, the deviation from the starting point.Assume that an indel of an individual nucleotide occurs with an equal probability i after l steps is given by the following sum:The probability of ending the random walk at position and consider the power Pl(x) = al.xl +\u2026+ al-.xl-. Then the coefficient ai computes precisely the above formula, and therefore gives the probability of ending the random walk at position i after l steps. We then have to sum up coefficients ai for i = 0,1, -1, 2, -2,..., l, -l until we reach a given threshold probability (1 - \u03b5). The obtained value l is then taken as the parameter \u03b4 used to bound the maximal diagonal shift between two seeds.A direct computation of multi-monomial coefficients quickly leads to a memory overflow, and to circumvent this, we use a technique based on generating functions. Consider the function"} {"text": "Multiple genome alignment is an important problem in bioinformatics. An important subproblem used by many multiple alignment approaches is that of aligning two multiple alignments. Many popular alignment algorithms for DNA use the sum-of-pairs heuristic, where the score of a multiple alignment is the sum of its induced pairwise alignment scores. However, the biological meaning of the sum-of-pairs of pairs heuristic is not obvious. Additionally, many algorithms based on the sum-of-pairs heuristic are complicated and slow, compared to pairwise alignment algorithms.An alternative approach to aligning alignments is to first infer ancestral sequences for each alignment, and then align the two ancestral sequences. 
In addition to being fast, this method has a clear biological basis that takes into account the evolution implied by an underlying phylogenetic tree.In this study we explore the accuracy of aligning alignments by ancestral sequence alignment. We examine the use of both maximum likelihood and parsimony to infer ancestral sequences. Additionally, we investigate the effect on accuracy of allowing ambiguity in our ancestral sequences.We use synthetic sequence data that we generate by simulating evolution on a phylogenetic tree. We use two different types of phylogenetic trees: trees with a period of rapid growth followed by a period of slow growth, and trees with a period of slow growth followed by a period of rapid growth.We examine the alignment accuracy of four ancestral sequence reconstruction and alignment methods: parsimony, maximum likelihood, ambiguous parsimony, and ambiguous maximum likelihood. Additionally, we compare against the alignment accuracy of two sum-of-pairs algorithms: ClustalW and the heuristic of Ma, Zhang, and Wang.We find that allowing ambiguity in ancestral sequences does not lead to better multiple alignments. Regardless of whether we use parsimony or maximum likelihood, the success of aligning ancestral sequences containing ambiguity is very sensitive to the choice of gap open cost. Surprisingly, we find that using maximum likelihood to infer ancestral sequences results in less accurate alignments than when using parsimony to infer ancestral sequences. Finally, we find that the sum-of-pairs methods produce better alignments than all of the ancestral alignment methods. Multiple genome alignment is an important problem in bioinformatics. It is used in comparative studies to help find new genomic features such as genes and regulatory elements. Current multiple genome alignment programs ,2 use prThe primary operation of progressive alignment is the alignment of two multiple alignments. Most genome aligners have two main phases: anchoring and aligning between the anchors. Here, we focus on the algorithms used to align between anchors. Many popular alignment algorithms for DNA use the sum-of-pairs heuristic, where the score of a multiple alignment is the sum of the induced pairwise alignment scores. However, Just has showThe biological meaning of the sum-of-pairs of pairs heuristic is not obvious. Additionally, many heuristic algorithms are complicated and slow, compared to pairwise alignment algorithms -7. An alBray and Pachter use this approach to align alignments in MAVID . MAVID uIn this study, we explore this idea as well as other aspects of aligning alignments by ancestral sequence inference. We compare four ancestral alignment methods with two sum-of-pairs alignment algorithms. We infer ancestral sequences using parsimony and maximum likelihood, and study the effect of allowing ambiguity in these sequences. Since we are interested in the performance of these methods under optimal conditions, we use data generated by a very simple evolution simulation. For aligning full alignments with the sum-of-pairs heuristic, we use ClustalW and a neWe find that alignment algorithms based on the sum-of-pairs heuristic are more accurate than all of our methods based on ancestral sequence alignment. However for alignment of inferred ancestral sequences, parsimony outperforms maximum likelihood in this application. Using maximum likelihood to infer ancestral sequences results in final alignment accuracies that are more unpredictable. 
Also, computing log-odds for ancestral sequences inferred with maximum likelihood is far more computationally intensive than computing log-odds scores for ancestral sequences inferred with parsimony. Finally, we find that allowing ancestral sequences to have ambiguity does not result in more accurate final alignments.To determine whether using ambiguous symbols in ancestral sequences inference improves multiple alignment, we have performed experiments on simulated sequences. We propose five hypotheses, explain our experimental method, and finally discuss results and give conclusions.The first hypothesis is that by using ancestral sequences with ambiguity, we obtain more accurate multiple alignments. Ambiguous symbols may allow us to retain more information about the underlying multiple alignments, which may make it easier to identify matching positions. Combined with an appropriate log-odds scoring system, this extra information may allow for more accurate alignment of ancestral sequences, and by extension, for more accurate multiple alignments.Our second hypothesis is that alignment of ancestral sequences is more sensitive to gap open costs than alignment of alignments using the sum-of-pairs heuristic. When aligning ancestral sequences, existing gaps in the underlying alignment are not considered when inserting a new gap, so the first position of a new gap always costs the gap open cost. Incorrect gap penalties may cause too many gaps to be inserted between ancestral sequences. During progressive alignment, errors at each step propagate leading to an incorrect final alignment. In contrast, when aligning alignments using the sum-of-pairs heuristic, the cost of adding a new gap depends on all underlying gaps as well as the gap open cost. An incorrect gap open cost affects the cost of gaps less and new gaps may still be correctly inserted based on the structure of the existing gaps. Our third hypothesis is that the function used to estimate the gap open cost during progressive alignment is important to alignment accuracy when aligning ancestral sequences. When aligning ancestral sequences, the frequencies of gaps in the ancestral sequences depends on the amount of mutation between the sequences. Therefore, it is important to modify the gap open cost based on the distance between the ancestral sequences being aligned.Our fourth hypothesis is that we expect that aligning alignments using the sum-of-pairs heuristics gives more accurate multiple alignments than aligning inferred ancestral sequences. There are two reasons for this. First, as stated in hypothesis two, using an incorrect gap open cost affects the sum-of-pairs heuristic less than it affects the alignment of ancestral sequences. Since choosing the correct gap open cost can be difficult in practice, we expect that aligning alignments using the sum-of-pairs heuristic results in a more accurate final alignment because it is less sensitive to this parameter. Additionally, the ancestral sequences we infer are not completely accurate, which compounds the errors made in the process of progressive alignment. Thus, the much slower run times of algorithms based on the sum-of-pairs heuristic are acceptable.Finally, we expect that the maximum likelihood methods result in better multiple alignments than the parsimony methods. Unlike parsimony, maximum likelihood uses the edge distances on the phylogenetic tree. 
Thus we expect maximum likelihood to better infer ancestral sequences. We use synthetic data in order to have correct alignments to test our methods against. Additionally, by generating our own data we ensure that the data is generated from the same model of evolution that is required by the alignment algorithm. Therefore, we consider the performance of the algorithms on this data to be the best possible for algorithms of their type. Despite our use of synthetic data, we want our data to mimic the basic properties of real biological sequence. Thus we generate random trees that resemble real trees, and assign mutation rates based on analysis of real sequences. Specifically, we are interested in algorithm performance on two different types of random trees: trees with a period of heavy growth followed by a period of no growth, and trees with a period of light growth followed by a period of heavy growth. The first type of tree, which we refer to as early growth, is similar to the tree of placental mammals from Eizirik, Murphy, and O'Brien. We generate these trees in two steps. First, we generate one large tree of each type using a random birth-death process implemented in Phyl-O-Gen v1.2. From each final tree, using the method of Kearney, Munro, and Phillips, we randomly select eight-taxon trees. For each of the eight-taxon trees in our two data sets, we generate twenty random sequence sets by simulating evolution over the tree. We use a program written by us, but similar to ROSE, to generate these sequences. Since we wanted our mutations, insertions, and deletions to be as close to real sequences as possible, we calibrated our simulator with parameters estimated through analysis of homologous human and baboon sequences. This gives us two sets of 400 random input sequences, one for each set of trees. We chose the CFTR region in human and baboon for this parameter estimation, so that we could ensure that our alignment is mostly correct. We obtained human and baboon sequences with repeats masked out from the NISC Comparative Vertebrate Sequencing project, aligned them, and estimated the substitution parameters from the alignment; in the simulator a site mutates with probability Pr[mutation] = 3/4 (1 - e^(-rαt)).    (1) The tables report the resulting alignment accuracy. It is not clear that including ambiguity in ancestral sequences improves alignment. Our experiment confirmed our hypothesis that the gap cost scaling function is very important to the resulting alignment accuracies: when changing from scaling based on the largest value in the ancestral sequence scoring matrix to the expected cost for related positions, we see a significant increase in alignment accuracy on all data sets and all methods. Surprisingly, the maximum likelihood methods performed worse than parsimony methods in the context of ancestral alignment. We have tested four ancestral alignment methods as well as two sum-of-pairs alignment methods on simulated data. The data mimics evolution on two types of evolutionary trees: trees with a period of rapid growth followed by a period of slow growth, and trees with a period of slow growth followed by a period of rapid growth. The four ancestral alignment methods we have tested are unambiguous parsimony, ambiguous parsimony, unambiguous maximum likelihood, and ambiguous maximum likelihood. The sum-of-pairs alignment methods we have tested are the ClustalW algorithm and the algorithm of Ma, Wang, and Zhang. We have found that, contrary to our hypotheses, allowing ambiguity in ancestral sequences does not lead to better alignments.
When we use ambiguous ancestral sequences, we find that the multiple alignment is more sensitive to our choice in gap costs than to the form of ancestral sequence chosen. Reinforcing this conclusion, we find that the gap open cost scaling function is also extremely important to obtaining good scores when aligning ancestral sequences. Finally, to our surprise, using maximum likelihood to infer ancestral sequences resulting in less accurate alignments than using parsimony. The reason for this is that the maximum likelihood method is far more sensitive to the underlying data and therefore resulting in alignments accuracies that have a large amount of variation. Also, on the data set generated from the tree that has a small amount of growth followed by a large amount of growth, the maximum likelihood based methods did particularly poorly compared to the parsimony based methods.Finally, both the sum-of-pairs approaches did better than all the ancestral alignment methods, as expected. Additionally, we found that Ma, Wang, and Zhang's algorithm outperfoOur multiple alignment framework uses progressive alignment up a specified phylogenetic tree. At each internal node we perform an alignment of two multiple alignments. We test six different algorithms for aligning alignments: ClustalW, the recent algorithm of Ma, Wang, and Zhang, and four algorithms that align inferred ancestral sequences. Our four ancestral alignment algorithms explore the use of both parsimony and maximum likelihood to infer ancestral sequences, and also allow the use of both ambiguous and unambiguous ancestral sequences. We include ClustalW, as it is widely used in practice, and the algorithm of Ma, Wang, and Zhang, whose output more accurately approximates the optimal alignment under sum-of-pairs scoring.In this section, we describe how we align two alignments using ancestral sequence inference, as well as our four ancestral sequence inference techniques and associated log-odds scoring frameworks.n alignment of k sequences, we can infer an ancestral sequence in time \u0398(kn). To align two length n alignments, one with k sequences and with \u2113 sequences, requires \u0398((k + \u2113)n + n2) time; this contrasts with the much larger run time of O (n2 (k + \u2113)) required by Ma, Wang and Zhang's algorithm of seeing the observed symbols d1 and d2 in ancestral sequence positions related by evolution according to the given tree, and Pr, the probability of seeing observed symbols d1 and d2 in ancestral sequences positions unrelated by evolution. Here, d1 and d2 are either from the ambiguous or unambiguous alphabet depending on the type of ancestral sequence we use.We compute two probabilities: the probability PrCv \u00a0\u00a0\u00a0 (3)cv at node v given the true value is bv, times the probability that the true value at node v is bv given that the true value at node u is b. We compute Pr[Tv(bv)|Tu(b)] using the length of the edge and the probabilistic mutation model. We compute Cv for each node of the tree, and obtain Cz in O(m) time, where m is the number of leaves. 
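A pruning-style reading of the recursion sketched above, offered as a hedged illustration: Equation (3) is only partially legible here, so the code below computes, for each candidate base b at the parent u, a quantity C_v(b) as the sum over true bases b_v at v of the probability of the observed leaf symbols below v given b_v, times Pr[T_v(b_v) | T_u(b)]. The tuple encoding of tree nodes and the Jukes-Cantor form of the edge transition probabilities are assumptions made for this sketch, not the paper's stated model.

```python
import math

BASES = "ACGT"

def p_edge(a: str, b: str, t: float) -> float:
    """Assumed Jukes-Cantor transition probability Pr[T_v(b) | T_u(a)] for an
    edge of length t (the text says only that edge length and a probabilistic
    mutation model are used)."""
    same = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    return same if a == b else (1.0 - same) / 3.0

def below(node, column):
    """Pr[observed leaf symbols below v | true base at v is b], for every b.
    node = (name, edge_length, children); column maps leaf names to bases."""
    name, _, children = node
    if not children:                       # leaf: indicator on the observed base
        return {b: 1.0 if column[name] == b else 0.0 for b in BASES}
    tables = [(child, below(child, column)) for child in children]
    out = {}
    for b in BASES:
        prod = 1.0
        for (cname, ct, cch), table in tables:
            prod *= sum(p_edge(b, bc, ct) * table[bc] for bc in BASES)
        out[b] = prod
    return out

def C_v(node, column, parent_base):
    """C_v(b): sum over true bases b_v at v of Pr[observations below v | b_v]
    times Pr[T_v(b_v) | T_u(b)], with b the candidate base at the parent u."""
    name, t, _ = node
    lv = below(node, column)
    return sum(p_edge(parent_base, bv, t) * lv[bv] for bv in BASES)
```

For instance, with node = ("x", 0.1, [("a", 0.2, []), ("b", 0.3, [])]) and column = {"a": "A", "b": "G"}, C_v(node, column, "A") gives this subtree's contribution when the parent base is A.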
\u00a0\u00a0\u00a0 \u25a1is the the probability of seeing c1 and c2 at a positions arising from a common ancestor isWhen we use ambiguous ancestral sequences, the probability of seeing consensus letters x, y, and z, we compute the conditional probabilities of seeing consensus pair at positions related by ancestry.That is, over all choices of the true value at c1 and c2 isAt positions unrelated by ancestry, we assume that the true value for the ancestral sequence is equally likely to be any of the four DNA bases, though this model can be made more complex to model more realistic sequences. Therefore, the probability of seeing consensus letters c1 and c2 isFinally, the log-odds score for aligning consensus symbols S = log2 , \u00a0\u00a0\u00a0 (6)in bits.V be the probability that we randomly choose base b from the set of bases represented by consensus symbol c \u220a \u0393. That is, if c is a symbol corresponding to a set of k symbols from \u03a3, and b is in this set, then V = 1/k, otherwise it is zero. Then,If we use unambiguous ancestral sequences, the situation is similar. Let b1 at x and DNA base b2 at y given that we are looking at positions arising from a common ancestor.is the probability that we see DNA base b1 and b2 isAt positions unrelated by ancestry, we assume that the true value for the ancestral sequence is equally likely to be any of the four DNA bases. Therefore, the probability of seeing DNA base b1 and b2 isThe log-odds score for DNA symbols S = log2 , \u00a0\u00a0\u00a0 (9)in bits.For a given alignment column, we compute the most likely ancestral DNA base as in Felsenstein , but modW represents an A or a T. We define associated vector , scaled to a probability distribution, where the numbers in the vector refer to A,T,C, and G, in that order. We then map the likelihood vector, also scaled to a probability vector , to an IUPAC symbol, choosing the IUPAC symbol with associated vector that has the closest euclidean distance to the likelihood vector scaled to a probability distribution.Upon completion of the basic inference algorithm, we have a vector at the root that gives, for each position and each DNA base, the likelihood of that base. Assuming independence, the likelihood of a given ancestral sequence is the product of these. We obtain an unambiguous ancestral sequence from this by taking the base with maximum likelihood, randomly choosing between ties. To obtain an ambiguous maximum likelihood, which in effect is an approximation of the posterior Bayesian distribution of the ancestral symbol at that site, where we assume a uniform prior distribution over all alphabet symbols, we map the vector to an IUPAC symbol as follows. For each IUPAC symbol, we define a vector over the DNA alphabet where we have a one for each DNA symbol described by the IUPAC symbol, and a zero for all other DNA symbols. For example, the IUPAC symbol We desire a log-odds scoring framework similar to that developed for parsimonious ancestral sequences. While an appropriate scoring matrix can be obtained by sampling, we want to eliminate any sampling error from our study and so choose to compute the log-odds scoring framework directly.z with children x and y in a tree rooted at node r. First, consider the sub tree rooted at x. Let G to the leaves of the sub tree rooted at x such that maximum likelihood infers symbol d at x.Assume we are at node Ox(a) be the probability that a particular assignment a of bases to the leaves of sub tree x occurs by evolution. 
We compute the probability that the true DNA base at node x is b by considering the probability all possible bases for the root of the tree, and simulating evolution down to b.Let d1 and d2 refer to symbols from \u03a3G. To compute scores for ambiguous ancestral sequences, d1 and d2 refer to symbols from \u0393. We compute Pr, the probability of seeing symbols d1 and d2 in positions related by a common ancestor, asIn the following, as we compute scores for unambiguous ancestral sequences, z is b aswhere we compute the probability that the true value at node r, we compute the probability that the value mutates to b on the path from r to z.That is, for each possible true value at root d1,d2|unrelated], the probability that we see symbols d1 and d2 in positions unrelated by a common ancestor. First, let MLx(d) be the event that maximum likelihood infers symbol d at node x. We compute MLx(d) asWe now compute Pr[Tx(b)] in the same way as Pr[Tz(b)] previously. We compute Pr aswhere we compute Pr = Pr[MLx(d1)] Pr[MLy(d2)]. \u00a0\u00a0\u00a0 (13)Pr[d1 and d1 isFinally, the log-odds score of symbols S = log2 . \u00a0\u00a0\u00a0 (14)k be the maximum of the number of taxa below the left child of the root and the right child of the root. We require time O(|\u03a3G|k) to compute the log-odds scores for this tree.To compute the above scores, we must examine every possible assignment of bases to leaves for both the left and right children of the root. For a particular tree, let When aligning alignments with ancestral sequences, gap open costs play a major role. In a tree with differing edge lengths, the gap open cost should also be able to vary. Since existing gaps are not considered when inserting a new gap in ancestral alignments, the gap open cost has a large influence over the quality of the resulting alignment. It is therefore important that we estimate an appropriate gap open cost for each node of the tree.z with children x and y. We scale the gap open cost according to the distance between the nodes x and y. We test two slightly different scaling functions. The first function multiplies the gap open cost by the largest score value in the log-odds scoring matrix for node z. We call this the Max method. The second function computes the expected score of two symbols from unrelated positions and uses this value to scale the gap open cost. We call this the Expected method.Consider creating alignment for node Both AH and DB developed the fundamental ideas and hypotheses and the mathematical framework for deriving the alignment scores. AH developed and ran the experiments and implemented the alignment algorithms."} {"text": "Topological descriptors, other graph measures, and in a broader sense, graph-theoretical methods, have been proven as powerful tools to perform biological network analysis. However, the majority of the developed descriptors and graph-theoretical methods does not have the ability to take vertex- and edge-labels into account, e.g., atom- and bond-types when considering molecular graphs. Indeed, this feature is important to characterize biological networks more meaningfully instead of only considering pure topological information.In this paper, we put the emphasis on analyzing a special type of biological networks, namely bio-chemical structures. First, we derive entropic measures to calculate the information content of vertex- and edge-labeled graphs and investigate some useful properties thereof. 
Second, we apply the mentioned measures combined with other well-known descriptors to supervised machine learning methods for predicting Ames mutagenicity. Moreover, we investigate the influence of our topological descriptors - measures for only unlabeled vs. measures for labeled graphs - on the prediction performance of the underlying graph classification problem. Our study demonstrates that the application of entropic measures to molecules representing graphs is useful to characterize such structures meaningfully. For instance, we have found that if one extends the measures for determining the structural information content of unlabeled graphs to labeled graphs, the uniqueness of the resulting indices is higher. Because measures to structurally characterize labeled graphs are clearly underrepresented so far, the further development of such methods might be valuable and fruitful for solving problems within biological network analysis. Taking into account that a large number of graph-theoretical methods have been developed so far, approaches to process and meaningfully analyze labeled graphs are clearly underrepresented in the scientific literature. In particular, this holds for chemical graph analysis, where various graph-theoretical methods and topological indices have been used intensely; only a few recent contributions address labeled graphs directly, and the state of the art has been reviewed elsewhere. In this paper, we restrict our analysis to a set of bio-chemical graphs which have already been used for predicting Ames mutagenicity. As already mentioned, topological indices have been proven to be powerful tools in drug design, chemometrics, bioinformatics, and mathematical and medicinal chemistry. A prominent line of work uses SHANNON's entropy to characterize graphs by determining their structural information content. Further, topological descriptors have often been combined with other techniques from statistical data analysis, e.g., clustering methods. The contribution of our paper is twofold: First, we develop some novel information-theoretic descriptors having the ability to incorporate vertex- and edge-labels when measuring the information content of a chemical structure. Because we already mentioned that there is a lack of graph measures which can process vertex- and edge-labeled graphs meaningfully, such descriptors need to be further developed. In terms of analyzing chemical structures, that means they can only be adequately represented by graphs if different types of atoms (vertices) and different types of bonds (edges) are considered. Hence, there is a strong need to explore such labeled networks. Besides developing the novel information-theoretic measures for vertex- and edge-labeled graphs, we will investigate some of their properties (see section 'Properties of the Novel Information-Theoretic Descriptors'). Second, we apply the developed descriptors, combined with other well-known measures, to supervised machine learning methods, namely random forests (RF) and support vector machines (SVM), in order to predict Ames mutagenicity and to examine the influence of the descriptors on classification performance. For example, DESHPANDE et al. and MAHÉ et al., among others, developed machine learning approaches that deal with related chemical classification problems; the last of these approaches was validated in several independent studies
,71-74.To present the novel information-theoretic measures for labeled (weighted) graphs, we express some graph-theoretical preliminaries ,57,75-77Definition 1 is a finite, undirected graph. In this paper, we always assume that the considered graphs are connected and do not have loops.Definition 2 Let G be a finite and undirected graph. \u03b4(v) is called the degree of a vertex v \u2208 V and equals the number of edges e \u2208 E which are incident with v.Definition 30 d stands for the distance between u \u2208 V and v \u2208 V expressed as the minimum length of a path between u,v. Further, the quantity \u03c3(v) = maxu\u2208V d is called the eccentricity of v \u2208 V. \u03c1(G) = maxv\u2208V \u03c3(v) is called the diameter of G.Definition 4 We callthe j-sphere of a vertex vi regarding G.Definition 5 Letandbe unique (finite) vertex and edge alphabets, respectively. and are the corresponding edge and vertex labeling functions. G := is called a finite, labeled graph.Definition 6 LetClearly, vi, are equal to j and possess the vertex label denotes the cardinality of the set of vertices whose distances, starting from G. In the following, we will use this definition to derive an advanced information functional for incorporating edge- and vertex-labels when measuring the structural information content of a labeled network.To finalize this section, we repeat the definition of a so-Definition 7 Let G = be an undirected graph. For a vertex vi \u2208 V, we calculate and the induced shortest paths,kj stands for the number of shortest paths of length j. Their edge sets are defined byFurther, letandThe local information graph \u2112G of G regarding vi is finally defined byFig. EHMER et al. [f E (see next section), we also deal with certain graph partitions for quantifying the information content of a vertex- and edge-labeled graph because we have to compute all local information graphs . But nonetheless, the construction of our information measures basically differs from the ones mentioned in [As already outlined, the majority of classical information measures for graphs are based on determining partitions by using an arbitrary graph invariant and an equivalence criterion, see, e.g., ,48,53,54R et al. ,62 recenR et al. ,62. Thisioned in be an arbitrary finite graph. The vertex probabilities for each v i \u2208 V are defined by the quantitiesf represents an arbitrary information functional.Definition 9 Let G = be an arbitrary finite graph. Then, the entropy of G is defined byNow, we repeat the definition of an information functional for quantifying the structural complexity of unlabeled and unweighted chemical graphs . GeneralDefinition 10 Let G = be an undirected finite graph. For a vertex v i \u2208 V, the information functional f V is defined asRemark 1 We want to point out that further information functionals have been developed so far [The appropriateness of such a functional that captures structural information of a graph strongly depends on the graph class and on the specific problem under consideration.d so far . The appAnother measure to determine the structural information content is the following one. Until now, it has been used to perfoDefinition 11 Let G = be an undirected finite graph. We define the family of information measureswhereis a scaling constant.\u03bb > 0 In this section, we present novel information measures to quantify structural information of labeled (weighted) chemical structures by adapting the just shown approach. 
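A minimal sketch of the partition-independent construction just recalled (Definitions 8-11), using networkx. The probability and entropy formulas follow the standard form p(vi) = f(vi) / Σj f(vj) and If(G) = -Σi p(vi) log2 p(vi); the exponential shape of fV and the linearly decreasing coefficients cj are assumptions made for this sketch, since the exact expressions are not reproduced above.

```python
import math
import networkx as nx

def f_v(G: nx.Graph, v, alpha: float = 2.0, coeffs=None) -> float:
    """Assumed exponential form of the functional f^V built from the j-sphere
    cardinalities |S_j(v)|; the linearly decreasing coefficients c_j are one
    possible choice, not a mandatory setting."""
    dist = nx.single_source_shortest_path_length(G, v)
    rho = nx.diameter(G)                                   # graphs are assumed connected
    spheres = [sum(1 for d in dist.values() if d == j) for j in range(1, rho + 1)]
    if coeffs is None:
        coeffs = [rho - j for j in range(rho)]             # c_1 > c_2 > ... > c_rho > 0
    return alpha ** sum(c * s for c, s in zip(coeffs, spheres))

def graph_entropy(G: nx.Graph, functional=f_v) -> float:
    """I_f(G) = -sum_i p(v_i) log2 p(v_i), with p(v_i) = f(v_i) / sum_j f(v_j)."""
    values = [functional(G, v) for v in G]
    total = sum(values)
    return -sum((x / total) * math.log2(x / total) for x in values)

# Example: entropy of a small unlabeled skeleton.
print(graph_entropy(nx.path_graph(5)))
```

The label-aware functionals introduced next refine this scheme by weighting, within each j-sphere, the vertices of each atom type differently.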
Because the majority of the developed topological indices is only defined for the underlying skeleton of a chemical structure, the further development of descriptors for processing chemical graphs containing heteroatoms and multiple bonds is generally of great importance. Before we start expressing the new definitions, we first point out some related work in this area.VANCIUC et al. [IENER index [U, V, X, Y [VANCIUC et al. [Note that earlier contributions to infer measures for labeled graphs are often based on special distance matrices and polynomial methods -80. AnotC et al. where thC et al. . For exaC et al. . Then, sER index for vertER index . Further V, X, Y have bee V, X, Y . As a reC et al. ,84 obtaij-spheres) for all involved atoms (vertices) of the molecule. By now considering labeled graphs, our first attempt results in an information functional with the property that every vertex in each j-sphere possessing a certain vertex label (atom type) will be weighted differently.We now start by stating the novel partition-independent information-based descriptors to determine the information content of vertex- and edge-labeled graphs. The first definition represents an information functional to account for vertex labels of a chemical structure. For this, we adapt the idea ,62 of deDefinition 12 Let G = be an undirected finite vertex-labeled graph, We defineExample 2 To demonstrate the calculation of exemplarily, we consider Fig. 1and set O, C and N denote the atom types of the molecule. The edge type s represents a single bond whereas d represents a double bond within the chemical structure. For example, if we now calculate for G shown in Fig. we yieldBecause it is not always clear how to choose the involved parameter in practice, we further derive an information functional to overcome this problem.Definition 13 Let G = be an undirected finite vertex-labeled graph, If we determine all local information graphs \u2112G of G for the vertices vi \u2208 V, we then define the quantitiesThis quantity denotes the number of vertices of \u2112G possessing vertex label Definition 14 Let G = be an undirected finite vertex-labeled graph, We define the information functionalwhere Remark 3 We note thatThe expressionquantifies the number of occurrences of vertex label inExample 4 Fig. 2shows the calculated local information graphs of G regarding v3. For example, this leads toBy determining all local information graphs for the remaining vertices of G, the just shown calculation can be performed analogously.G into account. The main idea is to use weighted paths which can be directly determined by calculating the local information graphs.Next, we are able to derive an information functional that takes the edge labels of a graph Definition 15 Let G = be an undirected finite edge-labeled graph, , and assume that there exists a correspondence between the edge labels and numerical values. We definewhereandNow, we present an example how to apply this definition to the local information graphs shown in Fig. Example 5 We exemplarily apply the information functional f E to G and v3 as the starting vertex and recall that s = 1, d = 2. The edge labeled local information graphs for this vertex are depicted in Fig. 
We yield,andThus,In order to incorporate both edge and vertex labels when determining the topological entropy of a labeled graph, we also deriveDefinition 16Finally, we obtain the following entropy measures for measuring the structural information content of labeled graphs.Definition 17 Let G = be an undirected finite labeled graph, We now straightforwardly define the information-theoretic descriptors (graph entropy measures) as follows:Remark 6 We emphasize that according to the above stated definition and the definitions of the underlying information functionals, the resulting information measures are obviously parametric. This property generalizes classical information measures which have often been used in mathematical chemistry, see, e.g., [As already pointed out in [such measures establish a link to machine learning because the parameters could be learned using appropriate datasets. However, we won't study this problem in the present paper.e, e.g., ,29,53,83d out in , such meThis section aims to evaluate the just presented (see previous section) information measures for labeled graphs numerically. Also, we will calculate some known information indices to tackle the second part of our study when applying these measures to machine learning algorithms. Our study will be twofold: First, we examine some properties of the measures for labeled graphs when applying them to a large set of real chemical structures. Second, we analyze a QSAR problem by applying supervised machine learning methods ,85 usingV| \u2264 109; 1 \u2264 \u03c1(G) \u2264 47 \u2200 G \u2208 AG 3982. To evaluate the novel descriptors for labeled graphs, we then considered these structures as vertex- and edge-labeled graphs. Evidently, for calculating the descriptors of the unlabeled graph versions (skeletons), the corresponding descriptors were used which take only topological information into account.We created the database AG 3982 from the benchmark database called Ames mutagenicity ,47 origiTo generate and process the underlying graph structures, we used the known Molfile format . The graBefore starting to evaluate our novel molecular descriptors, we define some concrete information measures by choosing special weighting schemes for the coefficients.Definition 18 We define a special weighting scheme for the coefficients to determine as follows: Starting fromwhere ma denotes the atomic mass of the atom a (in the i-th sphere), we also defineThe scheme starts with the lightest element Hydrogen (H) and ends with the heaviest one, namely Uranium (U). If the underlying ci will be chosen byand by using Definition (11) and Definition (17), the concrete information-theoretic descriptors are called and If the underlying ci will be chosen bythe measures and follow correspondingly. Further, if the underlying ci will be chosen linearly or exponentially decreasing ; see also that the measures and follow correspondingly (Equation (50), (35), (52), (53)).Definition 19 Let G = be an undirected finite labeled graph, If we choose the coefficients of information functional (see Equation (29)) linearly or exponentially decreasing, we call the resulting information measures and \u03bb = 1000 to perform the entire numerical calculations in this paper. In order to interpret some of these measures, we consider Fig. k-regular graphs, the measure Ifv always leads to maximum entropy. By definition, it then follows that G0, G3 and G6, all three measures vanish. 
Because the graphs G1, G2 and G4, G5 have different label configurations - based on the different weighting schemes - and, therefore, the line between these points is not exactly horizontal as shown by the zoomed region depicted in Fig. Ifv and the higher the value of G0, G3 and G6 leads to descriptor values equal to zero. Again, we obtain maximal values for the calculated indices when applying them to G7 because the edge and vertex configurations are most disordered.Note, that we set j-spheres. To determine the corresponding descriptor values, we first considered the graphs of AG 3982 as only vertex-labeled graphs into account. If we use the information functional j-sphere cardinalities , the resulting measure captures nearly the same structural information than fV, that is, we only considered the skeleton versions. The plot shows that in this case, Another problem we want to investigate relates to determine the information loss when computing the structural information content by truncating the cardinalities of the see Fig. . The notEHMER et al. [ONSTANTINOVA et al. [In order to evaluate the uniqueness oR et al. utilizedI. In general, I. In Table Iorb denotes the well-known topological information content developed by RASHEVSKY[W is the WIENER index [to evaluate the discrimination power of an index RASHEVSKY that is RASHEVSKY,53. W isER index and [55,ER index (55)(56) based oNow, before discussing the classification results, we first state some definitions.Definition 20 Let I1,..., Im be topological indices. The superindex of these measures is defined as [fined as (61)Definition 21 Let G = be an undirected finite labeled graph, Then, each graph will be represented bySI such that it consists of the twelve indices from Table m = 16. The measure IU is defined as [To perform the graph classification, we chose fined as (63)and by Equation (58). Further, we state the definitions for(64)(whereandSI-representation (see Equation (62)) of a chemical graph, we tackle the mentioned graph classification problem using RF and SVM. The main steps were as follows:Now, based on the \u2022 We performed 10-fold crossvalidation for both classification methods.\u2022 When doing cross validation, we did a parameter optimization on the corresponding training sets. By using different kernels like linear polynomials, polynomials of higher degree etc., we found that the RBF kernel give the best results.\u2022 The random forest was composed by fifty different trees.\u2022 We performed the classification both with all features (information measures) and with only seven features gression .The classification results are shown in Table Taking into account that we classified only with (i) sixteen and (ii) seven information measures, we consider the classification results as feasible. One clearly sees that for both classifiers, the Precision and Sensitivity values - which are important quantities to evaluate the performance of the classification - are relatively high. Precision is the probability that the cases classified as positives are correctly identified where Sensitivity is the probability of positive examples which were correctly identified as such. The F-Measure defined as the harmonic mean of Precision and Sensitivity represents a single measure to evaluate the performance of the classifiers. By definition, the F-Measure varies between zero and one whereas one would represent the perfect and zero the worst classification result. 
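A minimal sketch of the evaluation protocol described above - 10-fold cross-validation of a 50-tree random forest and an RBF-kernel SVM on superindex feature vectors - using scikit-learn. The feature matrix X (one descriptor vector per molecule) and the binary Ames labels y are assumed to be precomputed, and the SVM parameter grid shown here is illustrative, standing in for the per-training-set optimisation mentioned in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X, y):
    """10-fold cross-validation of the two classifiers, reporting the averaged
    Precision, Sensitivity and F-Measure used in the tables."""
    scoring = {"precision": "precision", "sensitivity": "recall", "f_measure": "f1"}

    rf = RandomForestClassifier(n_estimators=50, random_state=0)   # fifty trees

    svm = GridSearchCV(                                            # RBF-kernel SVM
        make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
        cv=3,
    )

    results = {}
    for name, clf in [("RF", rf), ("SVM", svm)]:
        cv = cross_validate(clf, X, y, cv=10, scoring=scoring)
        results[name] = {m: np.mean(cv[f"test_{m}"]) for m in scoring}
    return results
```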
We clearly see that by using SVM's, we reached values of F-Measure of over seventy percent which are the highest among all calculated ones. In order to examine the influence of incorporating vertex- and edge-labeled graphs on the prediction performance, we first present the following procedure and, then, the obtained results, see Table \u2022 Note that in our previously presented classification, we used eleven indices for unlabeled graphs and five for vertex- and edge-labeled graphs. From this feature set, we generated ten subsets composed of seven randomly selected measures for unlabeled graphs (among the eleven), and ten subsets composed of five randomly selected measures for unlabeled graphs and two measures for vertex- and edge-labeled graphs (among five available).\u2022 Based on these sets, we again performed 10-fold cross validation with RF and SVM and averaged the classification results.As a result, Table To finalize our numerical section, we also present results when choosing a different representation model of the graphs. In the following, we do not characterize a graph by its structural information content and by its superindex. In contrast, we now represent every graph by a vector that indicates if the given graphs contains certain substructures. To achieve this, we used a database of 1365 By looking at the performance evaluation in Table This paper dealt with investigating several aspects of information-theoretic measures for vertex- and edge-labeled chemical structures. We now summarize the main results of the paper as follows:\u2022 We already mentioned that the majority of the topological indices which have been developed so far are only suitable to characterize unlabeled graphs. By adapting the approach of deriving partition-independent information measures, we developed families of information-theoretic descriptors to incorporate vertex- and edge labels when measuring the structural information content of graphs. First, we did this by calculating spherical neighborhoods and distinguishing atom types for every sphere. For the resulting measures, we presented a weighting scheme for the vertices which takes chemical information of the graphs into account. Second, to reduce the number of parameters, we developed a simplified version based on the so-called local information graphs. Generally, these graphs are induced by shortest paths and provide information about the local information spread in a network. We here assume that information spreads out via shortest paths in the network . By usin\u2022 Using the benchmark database AG 3982, we evaluated the novel information-theoretic descriptors to see how they capture structural information of the chemical graphs. Based on some characteristic properties of the m\u2022 Another aim was to predict Ames mutagenicity when using supervised machine learning methods (RF and SVM) and representing the graphs by a vector consisting of topological descriptors (superindex). First, we performed the graph classification based on 10-fold crossvalidation and evaluated the quality of the learned models. Taking into account that we only used (i) 16 and (ii) 7 information measures for classifying the graphs, we obtained feasible results . However, another goal was to examine the influence of incorporating vertex- and edge-labels when measuring the prediction performance of the underlying graph classification problem. 
Here, we obtained the result that the prediction performance was very similar to the one we obtained by only measuring skeletal information. From this, interesting future work arises as follows: because of the obtained results, it would be important to explore the developed measures for determining the structural information content of the underlying vertex- and edge-labeled graphs in depth. This aims to investigate the measures such that the prediction performance could be significantly improved when applying them to the machine learning methods we have used in this paper. Another reason for the results shown in Table \u2022 As already mentioned (see section 'Introduction'), labeled graphs play an important role when analyzing biological networks. But because the theory of labeled graphs is not yet well developed (compared with the contributions that have been made for unlabeled graphs), see, e.g., , a thorough treatment of such measures is still lacking. Inspired by this study, we think that especially the development of further measures for labeled graphs can be an interesting and valuable endeavor, not merely for analyzing QSPR/QSAR problems. Besides applying these measures to machine learning methods, we believe that the measures themselves might be valuable for those who investigate biological networks, see, e.g., . \u2022 As a conclusive remark, we argue from a mathematical point of view that a further development of the theory of labeled graphs will surely help to develop more sophisticated methods for analyzing biological networks, see, e.g., [22,38,39]. All authors contributed equally to all aspects of the article. All authors read and approved the final manuscript."} {"text": "This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation; for example, we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large-scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, high uniqueness is an important and desirable property when designing novel topological descriptors that have the potential to be applied to large chemical databases. The problem of quantifying the complexity of a network appears in various scientific disciplines, and many of the proposed measures are based on Shannon's entropy. Before sketching the aims of our paper, we start with a brief review of classical and more recent approaches to measure the complexity of networks. 
However, for performing the numerical results, we mainly restrict our analysis to information-theoretic measures which are based on Scomplexity and, even, structural complexity is generally not uniquely defined because it is in the eye of a beholder onstantineuknainoliolmogorovolmogorov-complexity of labeled and unlabeled graphs were obtained in In general, it seems clear that Size of the giant connected component Degree distributions Exponent of degree distributions Total number of vertices and edges Path-based quantities Distance-based quantities, e.g., Degree, degree statistics and edge density Clustering coefficient, modularity and network motifs Eigenvector measures im et al. da Costa et al. laussenim et al. im et al. Further, various measures have been developed to characterize the complexity of networks where many of the recent ones were summarized by KIn this paper, we investigate information-theoretic network complexity measures which are particularly relevant for enhancing empirical QSAR/QSPR models partition-based measure that relies on symmetry with respect to topologically equivalent vertices having the same degrees. The latter is a partition-independent information measure that is based on using a special information functional capturing structural features of the networks. In order to perform this study, we evaluate these measures numerically by using several large datasets containing real and synthetic chemical graphs. To our best knowledge, such a large scale analysis involving the classical topological information content has not been done so far. Note that in this study, we only consider skeletons of the chemical structures, that is, all atoms are equal and all bonds are equal. Another problem we want to address in this paper is to investigate the uniqueness of complexity measures. This relates to examine their discrimination power, that means, their ability to discriminate non-isomorphic graphs as unique as possible. For this, we also use the mentioned databases - real and synthetic chemical structures - and calculate a special sensitivity measure To tackle this problem, we select a few measures from two different paradigms for inferring such indices: The so-called topological information content This section aims to present the information-theoretic topological descriptors we want to investigate in this paper. In the following, we briefly shed light on the two main procedures (resulting in partition-based and partition-independent measures) to infer information-theoretic complexity measures for characterizing chemical network structures. Afterwards, we express their concrete definitions for performing our numerical analysis.ruccoashevskyowshowitzApplying information-theoretic methods for exploring complex networks is a still challenging and ongoing problem ruccoashevskyashevskyhannon's entropy formulas More precisely, TowshowitzEquation (2) is the total information content of Now, we give a sketch of the second procedure for inferring graph entropy measures that results in obtaining partition-independent measures As follows, we start with the definition of some concrete partition-based entropy measures to be applied to real and synthetic chemical structures. Note that in this paper, we only evaluate the mean information contents. For the sake of simplicity, we write Let be a graph.is called topological information content of . Here, denotes the number of topologically equivalent vertices in the -th vertex orbit of where is the number of different orbits.Let be a graph. 
We recall the definition for two vertices being topologically equivalent: For each -th neighboring vertex of there exists an -th neighboring vertex of which possesses the same degree. A vertex orbit is a set of vertices that only contains topologically equivalent vertices.Let be a graph.whereiener index is called the W and denotes the shortest distance between . and are so-called magnitude-based information indices, see . It is assumed that the distance of a value in the distance matrix appears times. stands for the diameter of a graph .Let be a graph.whereSee . equals the number of vertices having distance starting from . Also, equals the corresponding -sphere cardinality. is the eccentricity of . denotes the cyclomatic number, see .Let be a graph.where is a local vertex entropy . Finally, the entropy of can be defined byIn particular, we define special information measures for characterizing graphs by choosing concrete coefficients Let be a graph. We definewhereFinally,whereehmer et al. To finalize this section, we now express the definitions of some partition-independent entropy measures for graphs introduced by DLet be a graph. The following partition-independent entropy measures based on a special information functional were defined as where is a scaling constant.are vertex probabilities. The special information functional was defined as Here, denotes the -sphere of a vertex , that is, the set of vertices having shortest distance starting from . are positive coefficients for emphasizing certain structural of a graph, e.g., high vertex degrees, also see, .To perform the numerical calculations in this paper, we set .Let be a graph. The measure becomes to by choosing the coefficients according to Equation (19), i.e., linearly decreasing. Correspondingly, becomes to when choosing the coefficients according to Equation (21), i.e., exponentially decreasing.iener index is In the following, we briefly comment on the computational complexity of the discussed information measures without giving proofs. Obviously, the measures whose definitions are based on calculating matrices can be often computed in polynomial time . For instance, it has been proven As stated in the Let be a graph and let be the vertex-vertex link correlation matrix, see . denotes the number of all neighbors with degree of all vertices with degree . stands for the maximum degree of . The normalized version of can be defined as whereandLet be a graph and let be the largest eigenvalue computed from its adjacency matrix.whereBefore discussing numerical results, we describe the databases and our developed software in brief.MS 2265: This database has been extracted by own software from the commercially available mass spectral database NIST AG 3982: The original freely available database called Ames Genetoxicity contains 6512 chemical compounds, see APL 91075: The ASINEX Platinum Collection is a freely available, in-house designed and synthesized collection of 126615 drug-like compounds CCCIn order to generate and process our chemical graphs, we used the known Molfile format We implemented all used topological measures in Python using freely available libraries like Networkx, Openbabel and Pybel packages In this section, we will apply the complexity measures presented in the previous section. 
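As a concrete illustration of the two families of descriptors just defined, the sketch below evaluates a partition-based entropy in the spirit of the topological information content, together with the Wiener index, on a small skeleton graph using NetworkX. True vertex orbits require an automorphism computation; the neighbour-degree signature used here follows the equivalence criterion quoted above and is only an approximation.

```python
import math
from collections import Counter
import networkx as nx

def orbit_like_partition(G):
    """Group vertices by a simple equivalence signature.

    Signature of v: for every shortest-path distance d, the sorted degrees of
    the vertices at distance d from v.  Exact orbits of the automorphism group
    would need an isomorphism package, so this is only an approximation.
    """
    sig = {}
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v)
        by_d = {}
        for u, d in dist.items():
            if d > 0:
                by_d.setdefault(d, []).append(G.degree(u))
        sig[v] = tuple((d, tuple(sorted(ds))) for d, ds in sorted(by_d.items()))
    return Counter(sig.values())          # equivalence class -> class size

def topological_information_content(G):
    n = G.number_of_nodes()
    return -sum((k / n) * math.log2(k / n) for k in orbit_like_partition(G).values())

if __name__ == "__main__":
    # Skeleton of 2-methylbutane as a small example structure.
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])
    print("partition-based entropy ~", round(topological_information_content(G), 4))
    print("Wiener index:", nx.wiener_index(G))
```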
As stated before, we mainly put the emphasis on exploring the relatedness between the topological information content In the following, we discuss and interpret numerical results when applying the selected descriptors to sets containing real chemical structures. Our study involves calculating and interpreting dependency plots, cumulative entropy distributions, and the so-called uniqueness of the used topological indices We start to examine how the entropies lots see for explIf is vertex transitive , then If is -regular , then and, hence, .The graph vertices . BecauseThe interrelation between the entropies ): The main idea is to partition the vertex set in equivalence classes according to the criterion that each such class contains topologically equivalent vertices iener index is also very low We now start discussing the results shown in ower see .To interpret the sensitivity values when applying our information measures to synthetical chemical graphs, we look at However for the tree class, our The cumulative entropy distributions are illustrated by We start by observing that about 80% of the graphs of MS 2265 possess relatively small entropy values when evaluating Equally, the cumulative entropy distributions of AG 3982 are depicted in In particular, we have found that for all three chemical databases, the evaluation of the topological information content (see Equation (4)) and the partition-independent measures (see Equation (23)) led to clearly different cumulative entropy distributions that is obviously in accordance with the results of the preceding sections.In the present paper, we studied interrelations between classical and novel entropy measures to quantify the structural information content of networks. Here, these measures served as graph complexity measures which take certain structural features of the networks under consideration into account. In the following, we express the main findings of the paper in brief:We explored the relatedness between information measures for graphs. In particular, we examined the correlation between the topological information content is large , the invAnother important aspect of our numerical study was to examine the discrimination power of the used network measures. We found that the topological information content For the real chemical databases, the cumulative entropy distributions of some measures were calculated. This approach can be considered as an important preprocessing step to learn how the measures capture structural information of networks. Particularly, it is suitable to explore certain correlations between the measures and, finally, to learn whether the complexity indices capture structural information differently or similarly.As a conclusive remark, we emphasize that the presented information-theoretic methods to analyze complex networks bear a considerable potential. Our study aimed to get a better understanding towards the problem of characterizing chemical graphs using information-theoretic complexity measures. In this paper, we put the emphasis on such measures which have already been applied in the context of mathematical chemistry and drug design. We think that our results can help to apply the measures to more complex network classes and to interpret the results more adequately than before.In the future, we want to extend our measures for determining the structural complexity of weighted chemical graphs and test their ability to tackle QSAR/QSPR problems. 
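For completeness, the uniqueness (discrimination power) evaluation and the cumulative entropy distributions described in this section can be sketched as follows. The convention used here, counting the fraction of graphs whose descriptor value is not shared with any other graph in the set, is one common formulation and may differ in detail from the sensitivity measure cited in the paper; the input values are placeholders rather than values computed from MS 2265, AG 3982 or APL 91075.

```python
import numpy as np

def uniqueness(values, decimals=10):
    """Fraction of graphs whose descriptor value is unique within the set
    (one common way to quantify discrimination power)."""
    rounded = np.round(np.asarray(values, dtype=float), decimals)
    _, counts = np.unique(rounded, return_counts=True)
    n_unique = counts[counts == 1].sum()
    return n_unique / len(rounded)

def cumulative_distribution(values):
    """(x, F(x)) pairs of the empirical cumulative distribution of an index."""
    x = np.sort(np.asarray(values, dtype=float))
    F = np.arange(1, len(x) + 1) / len(x)
    return x, F

if __name__ == "__main__":
    # Placeholder descriptor values; in the study these would be the entropies
    # computed for every graph of a database.
    vals = [2.31, 2.31, 2.58, 3.10, 3.10, 3.10, 3.42, 3.77]
    print("uniqueness:", uniqueness(vals))
    x, F = cumulative_distribution(vals)
    print(list(zip(x.tolist(), F.round(3).tolist())))
```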
Further, we would like to test novel information indices by combining existing ones and evaluate their discrimination power. Moreover, an interesting task would be to classify molecules by using this approach and to apply it to special problems in drug design."} {"text": "Helicobacter pylori, a gram-negative bacterial pathogen that expresses a strong urease activity, is associated with the development of gastroduodenal disease. Urease B subunit, one of the two structural subunits of urease, was expressed in E. coli BL21 (DE3) strain. The objective of this study was to evaluate the effects of Helicobacter pylori urease B subunit on the immune responses in mice by subcutaneous immunization.Helicobacter pylori urease B subunit antigen subcutaneously three times with 2-wk intervals between the immunizations and boosters. The mice in the control group were immunized with PBS. The adjuvant group received PBS containing complete/incomplete freund\u2019s adjuvant identical to antigen group without Helicobacter pylori urease B subunit antigen. Four weeks after the final booster, all the mice were sacrificed. Blood was collected on d 0, 14, 28 and 56 before immunization, booster and sacrifice, respectively. Immediately after sacrifice, gastric liquid and spleen were collected for antibody and cytokine analyses.The mice were immunized and boosted with P\u2009<\u20090.05).Urease B subunit increased the concentrations of serum and gastric anti-urease B antigen specific IgG, and the levels of interleukin-4 and interferon-\u03b3 in splenocytes of the mice (Helicobacter pylori.This study demonstrated that recombinant urease B subunit can induce systemic and local immune responses in mice by subcutaneous immunization, which might be used as the effective component of vaccine against Helicobacter pylori, a gram-negative bacterial pathogen that expresses a strong urease activity, is associated with the development of chronic gastritis, peptic ulceration and gastric carcinoma [H. pylori infection, negative effects concerning antibiotic-resistant strains always limit the treatment [H. pylori from the upper gastrointestinal tract of the patients, utilization of a safe and effective vaccine might be the ideal strategy to immunologically prevent H. pylori infection [arcinoma . It is oreatment -4. In canfection -7.H. pylori, is expressed broadly on the surface of H. pylori and contributes a lot to colonization of H. pylori[Urease, a recognized virulence factor of H. pylori,9. The mH. pylori.H. pylori or other bacteria containing urease or its subunits antigens via oral, nasal, rectal or other routes has previously been reported as effective ways to protect human or animals from H. pylori infection [H. pylori infection as effectively as oral immunization because of the stimulation of relatively higher specific antibody concentrations [Mucosal immunization with the attenuated nfection -13. Howetrations .H. pylori urease B subunit was obtained after plasmid-encoded urease B from H. pylori was expressed in E. coli BL21 (DE3) strain. To evaluate the effects of H. pylori urease B subunit on the immune responses by subcutaneous immunization in mice, the indices of specific antibodies in serum and stomach as well as the splenocyte-secreted cytokines were determined in this study.Here E. coli SE 5000 containing the urease expression genes used in this study were kindly provided by H. T. Mobley from Department of Biology, Kilgore College, Kilgore, TX, USA. E. 
coli BL21 (DE3) strain was used as a recipient for the recombinant urease plasmid constructs that expressed urease B from the plasmid in which urease B genes from H. pylori had been cloned. After the E. coli BL21 (DE3) strain containing the urease B genes was activated overnight at 37\u00b0C, the bacteria were inoculated and cultured in Luria broth medium. Then the culture was induced by isopropyl-\u03b2-D-thiogalactopyranoside and centrifuged at 6,000\u2009\u00d7\u2009g for 15\u00a0min. Cells were harvested and catabolizated by lysozyme and nuclease. After sonication, the cell lysate was centrifuged at 12,000\u2009\u00d7\u2009g for 10\u00a0min. Urease B was expressed and obtained after washing with Buffer B and resolving in urea-Tris\u2013HCl solution. Urease B antigen was purified by using a Ni-NTA kit according to manufacturer\u2019s instructions. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was applied to identify the expressed protein and the purity of urease B was measured by Bradford method .ad libitum under pathogen-free conditions.All the mice used in this experiment were maintained in accordance with the principles of Chinese Academy of Agricultural Sciences Animal Care and Use Committee. Fifteen specific-pathogen-free, six-week-old female Kunming mice weighing 17\u201320\u00a0g were purchased from the Beijing Laboratory Animal Research Centre and divided into three treatments at random. Each treatment had five replicates with one mouse per replicate. All animals were housed in plastic cages in a mechanically ventilated nursery room where 12\u00a0h light: 12\u00a0h dark was set, and constant temperature remained at 23\u201325\u00b0C and relative humidity at 50-60%. All the mice had sterilized commercial chow and water H. pylori urease B subunit antigen was mixed with complete/incomplete freund\u2019s adjuvant , and 40\u00a0\u03bcg of H. pylori urease B subunit antigen in a volume of 100\u00a0\u03bcL of emulsion was injected into the lower back of the antigenic mice on d 0 and 14. The mice in the control group were immunized with PBS. The adjuvant group received PBS containing complete/incomplete freund\u2019s adjuvant identical to antigen group without H. pylori urease B subunit antigen. The mice were boosted with 80\u00a0\u03bcg of urease B antigen and incomplete freund\u2019s adjuvant on d 28. Blood was collected retro-orbitally before immunization and booster on d 0, 14 and 28, respectively.Three groups of 5 mice were used as control, adjuvant and antigen. After seven days of adaptation, mice were immunized and boosted subcutaneously three times with 2-wk intervals between the immunizations and booster. Four wk after the final booster, all the mice were sacrificed after final blood collection from the heart. Mice were immersed into 75% ethanol for 5\u00a0min immediately after sacrifice. Thereafter, the peritoneal cavity was opened, and the spleen was removed from each mouse followed by the recovery of gastric fluid flushed with 1\u00a0mL of phosphate buffered saline (PBS) containing protease inhibitor.6 cells per well with or without 10\u00a0\u03bcg/mL\u2009H. pylori lysate for 48\u00a0h at 37\u00b0C and 5% CO2. The supernatant was harvested on d 0, 7 and 14 and stored at \u221270\u00b0C for cytokine assay.Immediately after sacrifice, the spleen was aseptically removed from mice and the tissue was minced by syringe and washed twice with RPMI 1640 containing 10% fecal bovine serum (HyClone Laboratories Inc. 
Logan UT), 10\u00a0mmol/L Hepes, 100\u00a0\u03bcg/mL penicillin and 100\u00a0\u03bcg/mL streptomycin . After erythrocyte lysis, splenocytes were separated and cultured in 24-well plates at a density of 1\u2009\u00d7\u200910g at 4\u00b0C for 10\u00a0min, and the supernatant was obtained for IgA and IgG determination. The measurement of serum anti- H. pylori urease B specific IgA, IgG, IgE, gastric IgA and IgG levels was performed by enzyme-linked immunosorbent assay (ELISA) using an indirect ELISA as described by Weltzin et al. [3/Na2CO3, pH\u00a09.6) and then free binding sites were blocked with PBS containing 1% bovine serum albumin (BSA) and 0.1% Tween 20 (ELISA buffer) for 1\u00a0h at 37\u00b0C. Duplicate serum and gastric fluid samples were diluted in ELISA buffer and incubated for 2\u00a0h at 37\u00b0C. The plates were then incubated with ELISA buffer containing IgA, IgG or IgE antibodies conjugated with horseradish peroxidase for 1\u00a0h at 37\u00b0C. Samples were washed five times with PBS containing 0.1% Tween 20 between each incubation step. Then the plates were developed with tetramethylbenzidine (TMB) to measure the absorbance at 450\u00a0nm. The data were expressed as optical density (OD) units.The gastric fluid sample was centrifuged at 10,000\u2009\u00d7\u2009n et al. with somin vitro with or without H. pylori lysate at the concentration of 10\u00a0\u03bcg/mL in a total volume of 1\u00a0mL. The supernatants were collected on d 0, 7 and 14, respectively. Interleukin-4 (IL-4) and interferon-\u03b3 (IFN-\u03b3) concentrations in the cultured splenocyte supernatant were measured by the mouse ELISA Kit following the manufacturer\u2019s instructions.Single splenic cell suspension was inoculated P values less than 0.05 were considered statistically significant.All data were analyzed using the ANOVA procedure of SAS system . E. coli BL21 (DE3) strain was assayed using SDS-polyacrylamide gels for expression of urease B. As shown in Figure\u00a0H. pylori can be obtained through mostly expression in E. coli without negative effects and is at least suitable for animal protection experiments in further studies.E. coli was capable of inducing specific antibody responses and what type of antibody production against urease B was stimulated by systemic immunization, specific IgA, IgE and IgG in the serum and IgA and IgG in gastric liquid were measured by ELISA. It is reported that parenteral immunization tends to induce low levels of secretary IgA but often increase a low level IgE production in most mice [Mice were immunized and boosted with recombinant urease B subunit and freund\u2019s adjuvant by subcutaneous route. To investigate whether urease B subunit expressed by ost mice . As presost mice . Serum IP\u2009<\u20090.05). Furthermore, gastric IgG to urease B subunit antigen was statistically higher (P\u2009<\u20090.05) in urease B antigen-immunized mice than in the control and adjuvant-injected mice . Simultaneously, the production of IFN-\u03b3 was also enhanced by splenic cells while the concentrations of IFN-\u03b3 secteted by splenocytes from control mice or adjuvant mice were very low and nearly zero cells, Th2 cells and Th17 cells are different types of helper T cells resulting in the secretion of different patterns of cytokines [It has been well documented that T lymphocytes can be classified as CD4 markers . Recentl markers ,30. CD4+ytokines ,29,31. Tytokines ,16. 
Th 1Helicobacter antigens and Freund\u2019s adjuvant induced protective anti-Helicobacter immunity resulting in production of IFN-\u03b3. The analysis for cytokines in the cultured splenocyte supernatant also showed that concentrations of IL-4 and IFN-\u03b3 was significantly elevated in the supernatant of spleen cell culture of urease B immunized mice compared to the PBS control and adjuvant control groups, which suggested that Th1/Th2 type responses were both stimulated by the recombinant urease B subunit antigen. Similar results were also obtained by Guy et al. [H. pylori infection than that of a predominantly Th2 type response.Previous studies have proven that IL-4 is a Th2-type representative cytokine which plays a key role in allergic inflammation . ProductH. pylori.The current study demonstrates that recombinant urease B subunit induced higher concentrations of serum and gastric IgG as well as an increase of IL-4 and IFN-\u03b3 in splenocytes of the immunized mice by subcutaneous immunization. The application of urease B subunit in parenteral inoculation strategies might enlighten us to use it as the effective component of vaccine against The authors declare that they have no competing interests in relation to this study.PS carried out the experiment and drafted the manuscript. JQW conceived the study, participated in its design and coordination, and helped draft the manuscript. YTZ performed the animal experiments. All authors read and approved the final manuscript."} {"text": "Musa (banana and plantains). The expression stability of six candidate reference genes was tested on six different sample sets, and the results were analyzed using the publicly available algorithms geNorm and NormFinder. Our results show that variety, plant material, primer set, and gene identity can all influence the robustness and outcome of RT-qPCR analysis. In the case of Musa, a combination of three reference genes can be used for normalization of gene expression data from greenhouse leaf samples. In the case of shoot meristem cultures, numerous combinations can be used because the investigated reference genes exhibited limited variability. In contrast, variability in expression of the reference genes was much larger among leaf samples from plants grown in vitro, for which the best combination of reference genes (L2 and ACT genes) is still suboptimal. Overall, our data confirm that the stability of candidate reference genes should be thoroughly investigated for each experimental condition under investigation.Gene expression analysis by reverse transcriptase real-time or quantitative polymerase chain reaction (RT-qPCR) is becoming widely used for non-model plant species. Given the high sensitivity of this method, normalization using multiple housekeeping or reference genes is critical, and careful selection of these reference genes is one of the most important steps to obtain reliable results. In this study, reference genes commonly used for other plant species were investigated to identify genes displaying highly uniform expression patterns in different varieties, tissues, developmental stages, fungal infection, and osmotic stress conditions for the non-model crop The online version of this article (doi:10.1007/s11032-012-9711-1) contains supplementary material, which is available to authorized users. 
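The normalization strategy referred to in the abstract above, dividing the expression of a gene of interest by the geometric mean of the relative quantities of several reference genes, can be written down in a few lines. The sketch below is illustrative only: the gene combination follows the EF1/TUB/ACT set recommended later for greenhouse leaf samples, but the quantities themselves are hypothetical and this is not the authors' code.

```python
from statistics import geometric_mean

def normalization_factor(ref_quantities):
    """Normalization factor for one sample: geometric mean of the relative
    quantities of the selected reference genes (geNorm-style approach)."""
    return geometric_mean(ref_quantities)

def normalize(target_quantity, ref_quantities):
    """Normalized expression of a gene of interest in one sample."""
    return target_quantity / normalization_factor(ref_quantities)

if __name__ == "__main__":
    # Hypothetical relative quantities for one greenhouse leaf sample,
    # using a three-gene combination such as EF1/TUB/ACT.
    refs = {"EF1": 0.82, "TUB": 1.10, "ACT": 0.95}
    target = 0.40   # relative quantity of a gene of interest
    print("NF =", round(normalization_factor(refs.values()), 3))
    print("normalized expression =", round(normalize(target, refs.values()), 3))
```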
Arabidopsis thaliana is a routinely used technique for gene expression analysis because of its main advantages of relatively low cost, good speed, a wide dynamic range, and feasibility in non-model organisms species provide a staple food in many developing countries and with an annual production of more than 130 million tons per year it is the fourth most important food crop worldwide (FAO wide FAO . Diseasewide FAO as well wide FAO are amonMusa Genome Consortium (GMGC) sequences available. The Global A summary of the different types of cultures, tissues, and varieties used is provided in Table\u00a0\u20131 ascorbic acid, 0.09\u00a0M sucrose, and 3\u00a0g\u00a0l\u22121 Gelrite\u00ae] at 26\u00a0\u00b1\u00a02\u00a0\u00b0C under a 16-h photoperiod with a photosynthetic photon flux density of 50\u00a0\u03bcE\u00a0m\u22122\u00a0s\u22121 provided by Cool White fluorescent lamps . After 5.5\u00a0weeks of growth, the plants were transferred to a liquid REG medium. After 2\u00a0months, fresh liquid REG medium was added, and to half of the plants, acetone was supplemented to a final concentration of 0.5\u00a0% (v/v). Acetone treatment was tested since acetone is used to dissolve certain biologically active compounds in the author\u2019s laboratory. Leaves were harvested 2\u00a0days after the addition of acetone from six and seven plants grown on the REG medium without and with 0.5\u00a0% (v/v) acetone, respectively.Plants of the variety Grand Nain , were grown on semi-solid regeneration medium ] Pfaffl (Table\u00a02M), defined as the average pair-wise variation between a particular reference gene and all of the other candidate reference genes, was determined using geNorm v3.4 . ANOVA was used to determine whether differences in the Ct levels between the different experimental treatments within each experiment were significant.The Ct values were converted into relative quantities or expression levels according to the data obtained for the samples of the dilutions series, which are used to create standard curves. Next, the reference gene stability factor (Musa (banana and plantains). Nine genes from different functional groups were chosen: 18S rRNA, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), elongation factor-1\u03b1 (EF1), polyubiquitin, actin11 (ACT11), \u03b1-tubulin, \u03b2-tubulin (TUB), cyclophilin, and ribosomal protein L2 (L2) genes. Banana genes and EST fragments belonging to these gene families were identified by conducting similarity searches (BlastX). The identity of the coding sequence between Arabidopsis or rice and Musa varied between 80 and 97\u00a0% . At the time this study was performed, no orthologous Musa sequences of sufficient length could be identified for GAPDH, 18S rRNA, and \u03b1-tubulin.Reference genes commonly used for other plant species were investigated to identify genes displaying highly uniform expression patterns in different varieties, tissues, developmental stages, and stress conditions for the non-model crop ACT11, cyclophilin, EF1, L2, TUB, and polyubiquitin genes . Using data from other plant species, a primer pair spanning an intron was designed for ACT11; this was not possible for the other genes. To ensure that each primer pair resulted in the production of a single PCR product, gradient PCR was performed on genomic DNA (gDNA) and on cDNA from leaves. 
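A minimal sketch of the two analysis steps described above is given below: converting Ct values into relative quantities (here assuming a perfect amplification efficiency of 2 rather than an efficiency fitted from the dilution-series standard curves) and computing a geNorm-style stability value M as the average standard deviation of pairwise log2 expression ratios. The Ct values shown are hypothetical.

```python
import numpy as np

def relative_quantities(ct, efficiency=2.0):
    """Convert Ct values (one gene, many samples) into relative quantities,
    taking the sample with the lowest Ct as calibrator: Q = E^(Ct_min - Ct)."""
    ct = np.asarray(ct, dtype=float)
    return efficiency ** (ct.min() - ct)

def genorm_m(quantities):
    """geNorm-style stability M for each candidate reference gene.

    `quantities`: 2-D array, rows = samples, columns = genes (relative
    quantities).  M_j is the mean, over all other genes k, of the standard
    deviation of log2(Q_j / Q_k) across samples; lower M = more stable.
    """
    q = np.asarray(quantities, dtype=float)
    logs = np.log2(q)
    n_genes = q.shape[1]
    m = []
    for j in range(n_genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m.append(np.mean(sds))
    return np.array(m)

if __name__ == "__main__":
    # Hypothetical Ct values: 5 samples x 3 candidate genes.
    ct = np.array([[20.1, 22.3, 25.0],
                   [20.4, 22.8, 25.9],
                   [20.0, 22.1, 24.7],
                   [20.6, 22.9, 26.3],
                   [20.2, 22.5, 25.2]])
    q = np.column_stack([relative_quantities(ct[:, g]) for g in range(ct.shape[1])])
    print("M values:", genorm_m(q).round(3))
```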
For ACT11, EF1, L2, and TUB, a suitable primer pair was identified (Table\u00a025S) and actin (ACT) genes that have been previously used in other banana gene expression studies .The leaves of each of the six and seven plants grown on the REG medium without and with 0.5\u00a0% (v/v) acetone, respectively, were pooled and used for RNA isolation and cDNA synthesis. The Ct values of the six different candidate reference genes exhibited broad variability between the samples, irrespective of the acetone treatment . This result was confirmed by the Ct values obtained, as ACT11 (\u0394Ct\u00a0=\u00a03.6) showed larger variability than ACT (\u0394Ct\u00a0=\u00a02.6), and by Normfinder analysis , a measure that is used to determine how many additional reference genes should be included in the calculation of the normalization factor for gene expression. A cut-off V-value of 0.15, below which the inclusion of additional reference genes is not required, has been proposed by Vandesompele et al. between leaf samples harvested at different time points, although the differences between the average Ct values of the different groups were smaller than 1.05 Ct except for ACT with a difference of 1.4 between time points Ta and Tc. In contrast, the Ct values for 25S, L2, and TUB showed significant differences between the two varieties and the Ct differences between the average values for L2, 25S, and TUB in the two different varieties is 4.3, 4.1, and 2.0 Ct\u2019s, respectively Fig.\u00a0b. AdditiACT is the most stable reference gene, and subsequently 25S, EF1, TUB, L2 and EF1, TUB, L2,25S with and without grouping function, respectively , thus excluding the other reference genes from further analysis. For the greenhouse development experiment, the most stable reference genes were TUB and EF1 and the least stable genes were 25S and L2 was optimal as the V-value of 0.15 was obtained was sufficient as the combination yielded a V-value of 0.09 was possible as this combination yielded a V-value of 0.14 and the least variation for ACT although the differences between the average Ct values were less than 1 Ct. For the variety experiment, five meristems of Cachaco, Mbwazirume, and Williams grown under standard conditions on the P4 medium were harvested. The Ct value showed the maximum variation for ACT (\u0394Ct\u00a0=\u00a02.2) and the least variation for EF1 (\u0394Ct\u00a0=\u00a00.7) over all three varieties experiment, RNA was isolated from five meristems grown either on control medium or on a medium containing a higher sucrose concentration. Additionally, a third group of meristems was harvested 24\u00a0h after subculturing to investigate the effect of wounding associated with the cutting and subculturing process. The Ct values of the six different reference genes exhibited the largest variation for TUB was the most stable reference gene followed by L2,ACT11, EF1, and ACT, although the order interchanged depending on the grouping factor. 25S was the least stable reference gene and 0.09 (ACT11 and TUB) was obtained for the samples of the sucrose and varieties experiments, respectively as well as two previously reported reference genes in Musa (ACT and 25S RNA). Recently, Chen et al. resulted in even more unacceptable reference gene combinations for normalization, indicating that suitable reference genes for in vitro gene expression studies are scarce. These results also suggest that plants grown in vitro might be stressed and show variable expression levels of genes involved in basic biological processes. 
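The pairwise variation statistic used above to decide how many reference genes to include can be computed as sketched below: V(n/n+1) is the standard deviation, across samples, of the log2 ratio of normalization factors built from the n and n+1 most stable genes, with values below roughly 0.15 taken to mean that the extra gene is not required. The example data and gene ranking are placeholders.

```python
import numpy as np

def pairwise_variation(quantities, ranked_genes):
    """geNorm-style pairwise variation V(n/n+1).

    `quantities`: 2-D array (samples x genes) of relative quantities.
    `ranked_genes`: column indices ordered from most to least stable.
    Returns V values for n = 2, 3, ...
    """
    q = np.asarray(quantities, dtype=float)

    def nf(n):
        cols = q[:, ranked_genes[:n]]
        return np.exp(np.mean(np.log(cols), axis=1))   # per-sample geometric mean

    v = []
    for n in range(2, len(ranked_genes)):
        ratio = np.log2(nf(n) / nf(n + 1))
        v.append(np.std(ratio, ddof=1))
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical relative quantities for 10 samples and 5 candidate genes.
    q = np.exp(rng.normal(0.0, 0.2, size=(10, 5)))
    ranking = [0, 1, 2, 3, 4]        # e.g., output of a stability ranking
    for n, v in enumerate(pairwise_variation(q, ranking), start=2):
        print(f"V({n}/{n+1}) = {v:.3f}")
```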
Analysis of the expression of genes of interest in such samples is thus difficult and requires careful examination of candidate reference genes prior to any analysis.Our study showed that the expression levels of all reference genes investigated exhibited high Ct variability in the leaf samples of the EF1/TUB/ACT and L2/ACT, respectively, allow reliable normalization despite the occurrence of significant Ct differences between different sample groups in the former for all but one (L2) reference gene. NormFinder analyses resulted in similar results and indicated that the combinations ACT/EF1 and ACT/TUB are optimal for normalization of leaf samples at different developmental stages and leaf discs, respectively. For leaf samples from different varieties the ANOVA indicated significant differences for L2, 25S, and TUB with large differences (\u0394Ct\u00a0>\u00a04.0) in average Ct\u2019s for L2 and 25S, which were both excluded from the geNorm analysis, resulting in the inability to identify a suitable combination of reference genes. NormFinder identified ACT as most stable and L2 and 25S as least stable reference genes, but surprisingly indicated L2 and 25S as the most suitable reference gene pair. A glance at the raw Ct\u2019s shows that these genes are relatively stable within each variety, but the level of L2 and 25S is \u00b14 Ct\u2019s higher and \u00b14 Ct\u2019s lower, respectively, in Km5 than in TG . In our study, ACT was found to be one of the most stable reference genes whereas Chen et al. Below is the link to the electronic supplementary material."} {"text": "The induced favorable environment for bio-cathode formation might be the main reason for this improvement since the content of total extracellular polymeric substances (TEPS) of the substrate in the cathode area almost doubled (from 44.59\u2009\u03bcg/g wet sludge to 87.70\u2009\u03bcg/g wet sludge) as the percentage of PAC increased to 10%. This work provides another potential usage of PAC in CW-MFCs with a higher wastewater treatment efficiency and energy recovery.MFC centered hybrid technologies have attracted attention during the last few years due to their compatibility and dual advantages of energy recovery and wastewater treatment. In this study, a MFC was integrated into a dewatered alum sludge (DAS)- based vertical upflow constructed wetland (CW). Powder activate carbon (PAC) was used in the anode area in varied percentage with DAS to explore its influences on the performance of the CW-MFC system. The trial has demonstrated that the inclusion of PAC improved the removal efficiencies of COD, TN and RP. More significantly, increasing the proportion of PAC from 2% to 10% can significantly enhance the maximum power densities from 36.58\u2009mW/m Natural resources for freshwater production and energy generation are depleting at an unprecedented rate. 
It was estimated that two-thirds of the global population will face water quality problems by 2025 while the demand for water consumption will increase more than 40% by 2050Most recently, a new hybrid technology based on the principle of MFCs was developed by embedding MFCs into constructed wetlands (CWs), giving the name CW-MFCs3453 under certain circumstances3 when the pure MFC\u2019s volume increases to over 2\u2009L2) at presentSignificant improvements of pure MFCs were achieved during last several years, with the highest power density reaching as high as 2870\u2009W/mPowder activate carbon (PAC), a well-known cost-effective material with high specific area, has been widely applied as an adsorbent in various wastewater treatment processes for different pollutants removal. It has also been used to provide sufficient adhesive surface for microorganism growth, while these attached bacterial can also utilize the adsorbed and/or surrounding organic pollutants to maintain their metabolism121416171820In this study, five CW-MFC systems (four for testing and one for control) were set up, which employed dewatered alum sludge (DAS) as the wetland substrate while PAC was adopted to modify the DAS in the anode area to explore the enhanced performance on electricity generation and wastewater treatment. Emphasis was placed on the role of PAC in reducing the internal resistance (activation and ohmic losses). The influences of the percentage of PAC versus DAS on the performance of the CW-MFC were examined under continuous operation of the system in vertical flow mode.3-N and NO2-N. However, it is interesting to find out that the integration slightly improved the NH4-N and TN removal, which means the improvements in both nitrification and denitrification process since NO3-N in the effluent didn\u2019t show the corresponding changes. Many previous studies have showed the possibility by using MFC for nitrogen removal through SND (simultaneously nitrification and denitrification) processDiluted swine water with the designed concentration was continuously fed to five parallel CW-MFC systems. Effluent quality and the related parameters of each system were examined throughout the stable operation period (about 4 months) and the results are shown in More specifically to those operated under close circuit, COD was significantly removed in all systems. Based on our previous studies426With regard to phosphate removal, it has already been shown that DAS has excellent phosphate adsorption ability due to the strong adsorption affinity between Al in DAS and P in the influent2831et al.et al.35373936Nitrobacter can directly oxidize nitrite into nitrate owing to the sufficient DO provision in the upper layer of the CW-MFC system In terms of nitrogen removal, ammonia-N removal efficiency of the control system is lower than 40% (from 26.40\u2009\u00b1\u20094.2\u2009mg/L to 15.45\u2009\u00b1\u20093.3\u2009mg/L). This is mainly due to the insufficient oxygen provision for the nitrification process, since without an artificial aeration system, nitrification can only happen near the surface of the system where the oxygen dissolved form air can be utilized (with DO of around 2.0\u2009mg/L). It is also noted in At the end of the trial, 9 substrate samples from the top, middle and bottom layers of each CW-MFC system were sampled and the LOI of each sample was measured. 
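Two of the bookkeeping quantities used in this section, percent removal between influent and effluent and loss on ignition (LOI) of the substrate samples, reduce to simple ratios; a small sketch is given below with hypothetical effluent and sample-mass values (only the roughly 500 mg/L influent COD is taken from the text).

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a pollutant between influent and effluent."""
    return 100.0 * (c_in - c_out) / c_in

def loss_on_ignition(dry_mass_g, ash_mass_g):
    """Loss on ignition (%) of a dried substrate sample: the mass fraction
    lost after combustion, a rough proxy for organic content."""
    return 100.0 * (dry_mass_g - ash_mass_g) / dry_mass_g

if __name__ == "__main__":
    # Illustrative numbers only.
    print("COD removal: %.1f %%" % removal_efficiency(500.0, 90.0))
    print("LOI: %.1f %%" % loss_on_ignition(2.000, 1.720))
```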
Differences in LOI caused by different PAC additions can be observed from In order to explore the impact of PAC additions to the colonization of microbes in the system, TEPS of the substrates taken from top to bottom of each system were measured. It should be clear that more microbes refer to more secretion of metabolites to extracellular space, resulting in a higher concentration of EPS, which mainly consists of polysaccharide and protein45472 to 36.58\u2009mW/m2 was observed. In contrast, when the percentage of PAC increased to 2%, MPD almost doubled (from 36.582\u2009mW/m2 to 73.8\u2009mW/m2). With further increase of PAC percentage to 10%, MPD increased to 87.79\u2009mW/m2. From the polarization curves, it is also obvious that the open circuit voltage was significantly improved by PAC additions when its content is higher than 2% (from about 500\u2009mV to over 700\u2009mV).The main purpose of this study lies in exploring the role of PAC addition on the electrical performance of the CW-MFC systems. To this end, polarization and power density curves were generated at the eIn order to explore the reasons behind those differences, electrode potentials of each system were measured . From thRecalling the content of TEPS within each system, the addition of PAC in the anode chamber will suppress the growth of microbes which might be the main reason for the decrease of the anode potential. Some previous studies found that when granular activated carbon was used as exoelectrogenic bacteria support material, it can function as an electron capacitor. With intermittent contact of the anode electrode, the fluidized particles are more efficient than packed-bed styleet al.As indicated by the changing trends of the amount of TEPS in substrates taken from each reactor, PAC additions can favor the growth of microbes near the surface of the reactor, which may contribute to the formation of a bio-cathode. Unlike abiotic cathodes, bio-cathodes can use bacteria as biocatalysts to intrinsically decrease the activation losses of cathode reactions, which can facilitate the oxygen reduction reaction in air-cathode MFCsint) can be regarded as the sum of anodic resistance (Ra), cathodic resistance (Rc), membrane resistance (Rm), and electrolyte resistance (Re). Here, the relationship between external voltage (E) and current (I) can be derived from the equation proposed by Fan et al.Internal resistance can be used to indicate the amount of energy lost during electricity production. The information about the changes of the internal resistances brought by the addition of PAC will help to explore the role of PAC in CW-MFC systems. Therefore, the distribution of internal resistance in each system was evaluated. For near linear polarization, the internal resistance (Rb (V) is the linear extrapolation open circuit voltage (LE-OCV); r (\u03a9 m2) is the area-specific resistance (ASR) of respective electrode; S (m2) is the projected area and Sr is cross section area of the reactor; A is decided by the electrodes distance and electrolyte concentration.where, EEb, ra, rc and A are determined using the SOLVER function in Microsoft excel with a best fit of the experimental date with et al.The values of 6H12O6) as an example, with 1\u2009mol of glucose oxidation, 24\u2009mol electrons will be released which can provides enough electrons for 6\u2009mol of oxygen reduction. 
Considering the slow catalytic kinetics of oxygen reduction, the electrons produced is abundant for cathode reactions if only oxygen was considered as the electron acceptor. Thus future work should focus on the improvement of cathode performance, rather than the anode compartment. Methods such as utilizing materials to promote the catalytic property of the electrode or adopting other electron acceptors warrant further investigation. Since most of the internal resistance comes from electrolyte resistance which strongly relates to the electrode spacing, reducing the spacing and keeping respective electrodes working under its favorable conditions is the most effective way to minimize the energy losses from ohmic resistance.Based on the influences of PAC additions on power output of CW-MFC described above, it seems that though the amount of microorganism around the anode area decreased, the performance of the anode electrode was not influenced too much. In comparison with the anode, the performance of the cathode was greatly enhanced due to the bio-cathode formation. This reveals the fact that the anode is not the primary factor in controlling how much electricity can be generated in the CW-MFC. Taking glucose , which reflects the ability of MFC to convert organics in wastewater into electricity. Based on previous studies, the theoretical highest NER of pure MFCs is about 3.86\u2009KW/Kg COD, which is considerably higher than most of the studies of MFCs at present (less than 1.0\u2009KW/Kg COD)et al.Another key parameter used to assess the performance of MFCs is the NER respectively. Among each CW-MFC, cathode and anode were buried close to the surface and bottom of the system, respectively, which resulted in an electrode spacing of about 200\u2009mm. The cathode compartment was filled with a layer of granular graphite (GG) and located at the air\u2013water interface. The anode compartment consisted of the PAC modified DAS (DAS/PAC) with different percentages of PAC ranging from 1%\u2009wt (PAC/DAS ratio) to 10%\u2009wt. This configuration of CW-MFCs set up remains the same for four systems while one system had no PAC, and served as a control. Cathode and anode were connected by insulated titanium wire through an external circuit with a load of 950\u2009\u03a9, which was chosen based on our previous workFive lab-scale vertical flow CW-MFCs, with identical dimensions (\u03a6 0.15\u2009m\u2009\u00d7\u20090.32\u2009m), were set up with their configuration shown in 4+-N) and reactive phosphorous (RP) concentrations of 40.5\u2009\u00b1\u20095.7\u2009mg/L, 26.4\u2009\u00b1\u20094.2\u2009mg/L and 9.8\u2009\u00b1\u20091.3\u2009mg/L, respectively. The prepared wastewater was then continuously pumped into each CW-MFC system (from the same influent tank) at the bottom, passing through the anode compartment, cathode compartment and finally left the system from the upper outlet. Influent flow rate was controlled through peristaltic pumps, giving an average hydraulic retention time (HRT) of about 1 day for all systems. All the experiments were conducted at room temperature (15\u2009\u00b1\u20095\u2009\u00b0C) during the whole operation period of 4 months.Swine wastewater was collected weekly from a local agriculture research farm. 
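Tying together the electrical quantities discussed in this section, the sketch below computes areal power density from the voltage drop over the external resistor, fits a simplified lumped version of the near-linear polarization model (the study itself fits separate area-specific resistances for each electrode with Excel's SOLVER), and evaluates coulombic efficiency for a continuously fed reactor using the standard value of Faraday's constant, 96,485 C/mol. All numerical inputs are hypothetical; the electrode area is back-calculated from the stated reactor diameter.

```python
import numpy as np

CATHODE_AREA_M2 = 0.0177      # ~pi * (0.15 m / 2)^2, assumed from reactor diameter
FARADAY = 96485.0             # C per mol of electrons

def power_density(u_volts, r_ohms, area_m2=CATHODE_AREA_M2):
    """Areal power density in mW/m^2 from the voltage over the external
    resistor: P = U^2 / (R * S)."""
    return 1000.0 * u_volts ** 2 / (r_ohms * area_m2)

def fit_linear_polarization(current_a, voltage_v):
    """Least-squares fit of the near-linear polarization region,
    U = E_b - R_int * I, returning the extrapolated open-circuit voltage E_b
    and a lumped internal resistance R_int (anode + cathode + electrolyte)."""
    slope, intercept = np.polyfit(current_a, voltage_v, 1)
    return intercept, -slope          # E_b, R_int

def coulombic_efficiency(current_a, flow_l_s, delta_cod_g_l):
    """CE = (M * I) / (F * b * q * dCOD) for a continuously fed system,
    with M = 32 g O2/mol and b = 4 electrons per mol O2."""
    return 32.0 * current_a / (FARADAY * 4.0 * flow_l_s * delta_cod_g_l)

if __name__ == "__main__":
    # Hypothetical steady-state readings from an external-resistance sweep.
    current = np.array([0.2e-3, 0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])   # A
    voltage = np.array([0.66, 0.60, 0.49, 0.39, 0.28])             # V
    e_b, r_int = fit_linear_polarization(current, voltage)
    print("LE-OCV ~ %.2f V, internal resistance ~ %.0f ohm" % (e_b, r_int))
    print("peak power density ~ %.1f mW/m^2" %
          max(power_density(u, u / i) for u, i in zip(voltage, current)))
    print("CE ~ %.3f" % coulombic_efficiency(1.0e-3, 2.4e-5, 0.41))
```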
The influent wastewater was diluted with tap water to obtain a COD concentration of about 500\u2009mg/L, which resulted in average total nitrogen (TN), ammonium ; DO was determined through a microprocessor oximeter ; COD, NH4+-N, NO3\u2212-N, NO2\u2212-N and RP were analysed using Hach DR/2400 spectrophotometer according to its standard operating procedures. TN was determined with persulfate methodAfter a period of steady operation, the performance of CW-MFCs in wastewater treatment was investigated via pH, DO, COD, TN, NHet al.LOI is usually used to provide a rough approximation of the total organic matter (TOC) presented in the solid fraction of the sample2/RS, mW/m2) were determined through basic electrical calculations, where U is the voltage (V), R is the resistance (\u03a9) and S is the surface area of the cathode (m2). To obtain the polarization curves, external resistance was varied over a range from 580,000\u2009\u2126 to 20\u2009\u2126 and the steady-state voltage across the resistors was measured. The electrode potentials were determined against a saturated Ag/AgCl electrode (Mettler Toledo).The electricity generation was monitored through either a digital multimeter or a data logger in terms of the voltage drop (V) across the external resistor. Power densities (the fraction of electrons used for electricity generation versus the electrons in the starting organic matter) was calculate through the formula shown in 2 (g O2/mol O2), which is 32. I is current (A) and F is Faraday\u2019s constant (C/mol), which is 94,685. q is flow rate (L/s) while b is number of electrons donated per mole O2 (mol e\u2212/mol O2), which is 4. Finally, \u0394COD represents the change in COD between influent and effluent (g/L).where, M is molecular mass of OHow to cite this article: Xu, L. et al. Promoting the bio-cathode formation of a constructed wetland-microbial fuel cell by using powder activated carbon modified alum sludge in anode chamber. Sci. Rep. 6, 26514; doi: 10.1038/srep26514 (2016)."} {"text": "Culex flavivirus. The potential implications of this relationship and the possible uses of these and other arbovirus-related insect-specific flaviviruses are reviewed.Three novel insect-specific flaviviruses, isolated from mosquitoes collected in Peru, Malaysia (Sarawak), and the United States, are characterized. 
The new viruses, designated La Tina, Kampung Karu, and Long Pine Key, respectively, are antigenically and phylogenetically more similar to the mosquito-borne flavivirus pathogens, than to the classical insect-specific viruses like cell fusing agent and Many of the new ISVs appear to be members of the family Flaviviridae, genus Flavivirus, and are common in insect populations in nature, with a worldwide geographic distribution.During the past two decades, there has been a dramatic increase in the discovery and characterization of novel insect-specific viruses (ISVs).Diptera and that replicate in mosquito cells in vitro, but do not replicate in vertebrate cells or infect humans or other vertebrates.1 This is in contrast to the classical arthropod-borne viruses (arboviruses) that are maintained principally, or to an important extent, through biological transmission between susceptible vertebrate hosts by hematophagous arthropods.4 The arboviruses are dual host (vertebrate and arthropod) viruses, whereas the ISVs appear to involve only hematophagous insects.The terms \u201cinsect-specific\u201d or \u201cinsect-restricted\u201d viruses in current usage generally refer to viruses that naturally infect hematophagous 12 The ISFs can be separated into two distinct groups, based on their phylogenetic and antigenic relationships (Culex flavivirus (CxFV), and Kamiti River (KRV) viruses. The cISFs constitute a separate clade distinct from the vertebrate pathogenic flaviviruses. The second ISF group consists of the arbovirus-related or dual host affiliated insect-specific flaviviruses (dISFs).3 The dISFs are phylogenetically more similar to the flavivirus vertebrate pathogens than to the cISFs. Furthermore, as shown in this report, the dISFs are also closely related antigenically to some of the flavivirus pathogens, like West Nile (WNV), Zika, and dengue viruses. These similarities raise the possibility that some of the dISFs might modulate arbovirus infection and transmission in a dually infected mosquito host or that they could be useful in developing potential flavivirus vaccines or reagents.3To date (December 2016), approximately 35 insect-specific flaviviruses (ISFs) have been described.ionships . The firA total of 31 ISFs were included in this study. Their names, strain designations, GenBank numbers, host source, and geographic origin are given in Aedes scapularis mosquitoes collected on August 22, 1996, in a horse-baited Shannon trap by M.R. Mendez. The collection site was surrounded by irrigated rice fields near the village of La Tina, Piura Province, Peru . After isolation at the NAMRU-6 facility, the virus was subsequently sent to the University of Texas Medical Branch (UTMB) for further study and characterization.strain 49 LT96, was isolated in C6/36 cells at the U.S. Naval Medical Research Unit Number 6 (NAMRU-6) in Lima, Peru, from a pool of female Anopheles crucians collected on July 24, 2013, was designated as the prototype.A total of eight isolates of LPKV were made in C6/36 cells at UTMB from mosquitoes collected in CDC-type light traps placed in various habitats at Long Pine Key within Everglades National Park in southern Florida. Mosquito collections were made between June 13 and July 25, 2013, by a team from Yale University studying the distribution, abundance, and species composition of mosquitoes and mosquito-borne viruses occurring in the Florida Everglades. Mosquito collections were approved by the U.S. National Park Service under Collecting Permit EVER-2013-SCI-0032. 
Anopheles tesselatus mosquito collected in a gravid mosquito trap (Bioquip 2800) on October 16, 2013, in the village of Kampang Karu, Kuching District, Sarawak, Malaysia . KPKV was originally isolated in C6/36 cell cultures at the University Malaysia Sarawak and was sent to UTMB for further study and characterization.strain SWK P44, was isolated from a single female 2 were added. The C6/36 monolayers were washed in 0.1 M cacodylate buffer, cells were scraped off and processed further as a pellet. The pellets were postfixed in 1% OsO4 in 0.1 M cacodylate buffer pH 7.3 for 1 hour, washed with distilled water and en bloc stained with 2% aqueous uranyl acetate for 20 minutes at 60\u00b0C. The pellets were dehydrated in ethanol, processed through propylene oxide, and embedded in Poly/Bed 812 . Ultrathin sections were cut on Leica EM UC7 \u03bcLtramicrotome , stained with lead citrate and examined in Philips 201 transmission electron microscope at 60 kV.For ultrastructural analysis in ultrathin sections infected cells were fixed for at least 1 hour in a mixture of 2.5% formaldehyde prepared from paraformaldehyde powder, and 0.1% glutaraldehyde in 0.5 M cacodylate buffer pH 7.3 to which 0.01% picric acid and 0.03% CaCl13 All animal work and preparation of murine antibodies was covered by an approved UTMB Institutional Animal Care and Use Committee protocol (number 9505045).Since the ISFs by definition do not infect vertebrates or vertebrate cells, we were unable to produce ISF-specific antibodies. Attempts to immunize mice with ISF antigens produced from infected C6/36 cells inevitably resulted in antibodies that reacted with uninfected C6/36 cell controls. Attempts to absorb out the mosquito cell antibodies were unsuccessful; consequently, we used heterologous mouse hyperimmune ascitic fluids (MIAFs), prepared with infected mouse brain antigens of selected flavivirus pathogens, such as WNV, dengue type-2, Zika, yellow fever, and Japanese encephalitis viruses, in serologic tests. These antibodies were obtained from the World Reference Center for Emerging Viruses and Arboviruses; their homologous titers, as determined by hemagglutination-inhibition (HI) tests, are given in 14 using the heterologous mouse hyperimmune polyclonal antibodies described above at a 1:20 dilution.The antigens used in immunofluorescent studies were ISF-infected C6/36 cells. Six or 7 days after virus inoculation, the infected cells were scraped from the surface of the culture flask and spotted onto Cel-Line 12-well glass slides for examination by indirect fluorescent antibody test (IFAT),15 we found that it was possible to prepare serologic antigens for some ISFs, using infected C6/36 cells. Selected ISFs were inoculated into flask cultures of C6/36 cells maintained at 28\u00b0C, as noted above. When most of the cells showed viral cytopathic effect (CPE), the entire flask, containing medium and cells, was frozen at \u221280\u00b0C. For preparation of an acetone-extracted antigen, the flask contents were thawed and dropped through a 26 gauge needle into 20 volumes of chilled acetone. Within 5 minutes, this mixture was centrifuged at 1,000 rpm for 2 minutes and the supernatant fluid discarded. The sediment was resuspended in another 20 volumes of chilled acetone, shaken, and held for 1 hour at 4\u00b0C. The mixture was then centrifuged at 1,600 rpm for 5 minutes, the acetone decanted and the sediment dried for 1 hour by vacuum. 
The dried sediment was rehydrated in a volume of borate saline solution pH 9.0 equal to the original fluid used for extraction and was stored frozen at \u221280\u00b0C until used.In preliminary studies,16 Nonspecific inhibitors in the antisera were acetone extracted by the method of Clarke and Casals.17 The ISF antigens (infected C6/36 cells) were described above. Antibodies (MIAFs prepared to various flavivirus pathogens) were tested at serial 2-fold dilutions from 1:20 to 1:5120 at pH 6.0\u20136.2 with 4 units of antigen and a 1:200 dilution of goose erythrocytes, following established protocols.16A standard HI test was done in microtiter plates, according to methods described previously.Aedes albopictus, clone C6/36 (CRL-1660); baby hamster kidney, BHK-21 (CCL-10); and African green monkey kidney, Vero E6 (CRL-1586). The three cell lines were originally obtained from the American Type Culture Collection (ATCC), Manassas, VA. The mosquito cells were maintained at 28\u00b0C and the vertebrate cells at 37\u00b0C in 12.5 cm2 tissue culture flasks with 5 mL of medium recommended in the ATCC specification sheets. When a confluent monolayer of cells was present, 200 uL of a stock of each virus were inoculated into flasks of C6/36, BHK-21, and Vero E6 cells. After incubation for 2 hours, each flask was washed three times with 5 mL of maintenance medium, with aspiration to remove all remaining medium between washes. The medium was replaced and a sample (500 uL) taken as a day 0 control. Each day thereafter, for 7 consecutive days, the medium was completely removed, sampled, and fresh medium added to each flask, as described above. The daily samples (day 0\u20137) were subsequently assayed by reverse transcription-polymerase chain reaction (RT-PCR). Primer sequences available upon request.To determine the host range of the three new ISFs, their replication was studied in the following cell lines: N = 10) were inoculated intracranially with approximately 15 uL of C6/36 culture fluid from stocks of each of the three ISVs . After inoculation, the pups were returned to their dams and were examined daily for 14 days for signs of illness or death. Mice were purchased from Harlan Sprague\u2013Dawley ; this animal work at UTMB was carried out under IACUC-approval protocol number 9505045.One litter of 1- to 2-day-old Institute for Cancer Research (ICR) mouse pups once CPE was advanced. One milliliter of clarified supernatant from each virus was treated with a cocktail of DNases for 1 hour at 37\u00b0C. Viral RNA was then extracted using Trizol and resuspended in 50 \u03bcL RNase/DNase and protease-free water .For all ISVs, fluid supernatant from cultures of infected C6/36 cells were used for RNA extraction and sequencing. Supernatants were harvested and clarified by low speed centrifugation , \u223c340,000 (3.37%), \u223c350,000 (6.0%), \u223c3,100,000 (36.6%), \u223c3,300,000 (29.0%), \u223c7,870,000 (82.6%), \u223c460,000 (4.1%), \u223c3,770,000 (22.8%), \u223c2,780,000 (25.3%), \u223c2,150,800 (10.8%), and \u223c6,400,000 (30.5%), respectively.Viral RNA (\u223c0.9 \u00b5g) was fragmented by incubation at 94\u00b0C for 8 minutes in 19.5 \u03bcL of fragmentation buffer (Illumina 15016648). A sequencing library was prepared from the sample RNA using an Illumina TruSeq RNA v2 kit following the manufacturer\u2019s protocol. The sample was sequenced on a HiSeq 1500 using the 2 \u00d7 50 paired-end protocol. 
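As a purely illustrative aid (not part of the original protocol), the short Python sketch below lays out the serial two-fold MIAF dilution series used in the HI test described earlier in this section (1:20 through 1:5120) and shows how an endpoint well would be read as a titer; the endpoint in the example is hypothetical.

```python
# Illustrative sketch of the serial two-fold antibody dilution series used in
# the HI test (1:20 through 1:5120, as stated in the text). The endpoint used
# at the bottom is a hypothetical example, not a measured result.

def twofold_series(start_dilution=20, end_dilution=5120):
    """Return reciprocal dilutions 20, 40, 80, ... up to end_dilution."""
    series = []
    d = start_dilution
    while d <= end_dilution:
        series.append(d)
        d *= 2
    return series

if __name__ == "__main__":
    dilutions = twofold_series()
    print("Wells per antibody:", len(dilutions))                 # 9 wells
    print("Dilutions:", ", ".join(f"1:{d}" for d in dilutions))
    # HI titers are conventionally reported as the reciprocal of the last
    # dilution that still inhibits hemagglutination (assumed convention here).
    endpoint_index = 5                                            # hypothetical
    print(f"Example titer: 1:{dilutions[endpoint_index]}")
```

The series from 1:20 to 1:5120 spans nine wells per antibody, which matches the reported homologous titer range.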
Reads in fastq format were quality-filtered, and any adapter sequences were removed, using Trimmomatic software.22The evolutionary history was inferred by using the maximum likelihood method based on the General Time Reversible model. The tree with the highest log likelihood (\u2212366425.2857) is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using the maximum composite likelihood approach, and then selecting the topology with superior log likelihood value. A discrete gamma distribution was used to model evolutionary rate differences among sites . The rate variation model allowed for some sites to be evolutionarily invariable . The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analyses involved 93 nucleotide sequences. Codon positions included were 1st + 2nd + 3rd + Noncoding. All positions containing gaps and missing data were eliminated. There were a total of 5,981 positions in the final dataset. Evolutionary analyses were conducted in MEGA7.LTNV, LPKV, and KPKV were each initially isolated in cultures of C6/36 cells. In this cell line, the three viruses produced a similar CPE (rounding and detaching of cells from flask surface) 6 or 7 days after inoculation. Subsequent inoculation of the C6/36 culture fluid into BHK-21 or Vero E6 cells failed to produce detectable CPE. Likewise, intracranial inoculation of the infected C6/36 culture fluid into newborn ICR mice failed to produce illness or death in the animals.To determine if LTNV, LPKV, and KPKV replication occurred in vertebrate cells without producing CPE, additional experiments were carried out in C6/36, Vero, and BHK cell cultures to assay for virus replication by RT-PCR. Samples of medium from the three ISV-inoculated cell lines were collected from day 0 to day 7, as described in the Methods section. After RNA extraction, a partial region of the following genes of each virus was amplified and run on gels: LTNV, a partial region covering part of the NS1 and NS2B genes with expected band size between 350 and 400 nucleotides (nt) long; KPKV, a partial region between the NS1 and NS2A genes with expected band size 500\u2013600 nt long; and LPKV, a partial region of the NS3 gene with expected band size between 450 and 500 nt long (primer sequences available upon request). RNA extracted from culture fluid from the C6/36 cells infected with LTNV, LPKV, and KPKV from day 0 to day 7 postinoculation (dpi) displayed strong bands on all days (data not shown). In contrast, extracted and amplified viral RNA from the Vero and BHK cells inoculated with the three viruses showed decreasing intensity of the RNA bands from day 0 to day 7, indicating that the three new flaviviruses did not replicate in the two vertebrate cell lines (data not shown).In ultrathin sections of infected C6/36 cells, the three viruses had typical flavivirus morphology with intracellular localization and formation of smooth membrane structures (SMS) . VirionsIn preliminary studies, we observed that polyclonal MIAFs prepared against selected flavivirus pathogens reacted in HI tests with some of the dISFs, using antigens prepared from infected C6/36 cell cultures. However, for most of the dISFs, we were unable to prepare reactive hemagglutinins. 
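The maximum likelihood analysis described above was run in MEGA7 under a General Time Reversible model with gamma-distributed rates and invariant sites; a full re-implementation is beyond a short example, but the hedged Python sketch below illustrates only the "complete deletion" step (removing every column with a gap or missing character) and a simple pairwise p-distance calculation on a toy alignment. The sequences are invented, and p-distances are a crude stand-in for the model-based estimates actually used.

```python
# Hedged stand-in for the distance step of a phylogenetic analysis: remove all
# columns containing gaps or missing data (as stated in the text), then compute
# pairwise p-distances. Toy sequences only; not the GTR/ML analysis from MEGA7.

TOY_ALIGNMENT = {
    "virus_A": "ATGCT-ACGTA",
    "virus_B": "ATGCTTACGTA",
    "virus_C": "ATGAT?ACGAA",
}

def usable_columns(seqs, missing="-?N"):
    """Indices of columns with no gap/missing character in any sequence."""
    length = len(next(iter(seqs.values())))
    return [i for i in range(length)
            if all(s[i] not in missing for s in seqs.values())]

def p_distance(a, b, cols):
    """Proportion of retained columns at which the two sequences differ."""
    diffs = sum(1 for i in cols if a[i] != b[i])
    return diffs / len(cols)

cols = usable_columns(TOY_ALIGNMENT)
names = sorted(TOY_ALIGNMENT)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        d = p_distance(TOY_ALIGNMENT[x], TOY_ALIGNMENT[y], cols)
        print(f"{x} vs {y}: p-distance = {d:.3f}")
```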
In the case of the three new ISFs , LTNV was the only virus that produced a reactive hemagglutinin.Flavivirus genus sequences were aligned to a dataset of sequences representing 93 flavivirus species. A consensus tree was obtained based on maximum-likelihood with bootstrap resampling of 1,000 replicates used to obtain confidence limits on individual branches. As expected the cISFs, represented by cell fusing virus (CFAV), KRV, CxFV, and others, clustered in a clade basal to all other member species of the us genus . On the The size of the positive sense, near complete single-strand genomes of the three identified viruses are 10,859, 10,968, and 10,882 nucleotides (nt) long, for LPKV (KY290256), KPKV (KY320648), and LTNV (KY320649), respectively. A single ORF of 10,365 , 10,311 , and 10,356 nt for LPKV, KPKV, and LTNV, respectively, are flanked by untranslated regions at the 5\u2032 and 3\u2032 ends .3 The first and most numerous group is the cISVs as well as other agents such as NHUV, Marisma mosquito, and Donggang viruses . The dISWhat are the possible implications of this relationship between the ISFs and important mosquito-borne flavivirus pathogens? One possible implication is that the ISFs, and the dISFs in particular, may alter the vector competence of their dipteran hosts. Since most of the ISFs described to date have been associated with mosquitoes, the following discussion will focus on the mosquito-specific flaviviruses.24 In addition, flavivirus-derived endogenous elements (EVEs) or nonretroviral integrated RNA viruses have been reported in the genomes of several mosquito species.28 ISFs and EVEs are part of the microbiome of many species and genera of mosquitoes, but they appear to have no obvious deleterious effect on their natural insect hosts.3 The available evidence suggests that the ISFs are maintained in mosquito populations by vertical transmission, and that they do not use vertebrate hosts as part of their life cycle, like most of the classical arthropod-borne viruses of vertebrates (arboviruses).29The available data on the ISFs indicate that they are relatively common in mosquito populations in nature and that some, like CFAV and CxFV, have a wide geographic distribution.35 Some studies have suggested that coinfection reduces vector competence35 whereas others have found that it had no effect.31There has been considerable speculation, but conflicting experimental evidence, that infection of a mosquito with an ISF can alter the insect\u2019s vector competence for certain mosquito-borne flavivirus pathogens, due to heterologous interference. But most of the experimental studies of this phenomenon to date have used cISFs, such as CxFV or Palm Creek viruses; and the results have been mixed. 36 so in vitro results of dual infection may not be indicative of what actually occurs in a live mosquito with a functional RNAi response (in vivo). One study34 with Culex quinquefasciatus infected with Nhumirim virus and then challenged with WNV indicated that the dually infected mosquitoes were less competent vectors of WNV than control mosquitoes infected with WNV alone, but additional studies of dual infection with dISF and related flavivirus pathogens are needed in live mosquitoes to clarify this possibility. 
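The genome and ORF lengths reported above lend themselves to a quick arithmetic cross-check; the sketch below computes the combined 5' + 3' UTR length and the approximate polyprotein size for each virus, under my assumption (not stated in the text) that the quoted ORF length includes the stop codon.

```python
# Quick arithmetic check on the genome figures reported above: combined 5'+3'
# UTR length and predicted polyprotein size for each virus. The assumption
# that the stated ORF length includes the stop codon is mine, not the paper's.

GENOMES = {           # (genome length nt, ORF length nt), from the text
    "LPKV": (10_859, 10_365),
    "KPKV": (10_968, 10_311),
    "LTNV": (10_882, 10_356),
}

for name, (genome_nt, orf_nt) in GENOMES.items():
    assert orf_nt % 3 == 0, f"{name}: ORF length is not a multiple of 3"
    utr_nt = genome_nt - orf_nt              # combined 5' + 3' UTR
    polyprotein_aa = orf_nt // 3 - 1         # minus the stop codon (assumed)
    print(f"{name}: UTRs = {utr_nt} nt, polyprotein ~ {polyprotein_aa} aa")
```

All three ORF lengths are exact multiples of three, consistent with a single uninterrupted polyprotein ORF flanked by untranslated regions.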
If a dISF reduced the vector competence of Aedes aegypti for Zika or dengue viruses, for example, potentially it might be used as a disease control agent, as some strains of Wolbachia have been used.37 If the candidate dISF was also vertically transmitted in mosquitoes, then theoretically it would be maintained in the vector population.A second problem is that most of the experimental studies have been done in vitro, using the C6/36 mosquito cell line. However, the C6/36 cell line has a dysfunctional antiviral RNA interference (RNAi) response,1 The insect-specific alphavirus, Eilat,38 is defective for vertebrate cell infection39 and has been exploited by using recombinant DNA technology to generate Eilat chimeras, where the structural polyprotein ORF was swapped with that of a vertebrate-pathogenic alphavirus to generate a chimera that is structurally indistinguishable from the pathogenic virus.1 Eilat-based alphavirus chimeras have been developed as vaccines for chikungunya and Venezuelan equine encephalitis viruses.1 Chimeras between Eliat and chikungunya viruses have also been shown to serve as high-quality antigens for enzyme-linked immunosorbent assays.40Another potential application for the dISVs could be to use them as platforms for development of vaccines or diagnostics.35 it appears that some of these ISFs have an effect on the vector competence of their mosquito hosts for related mosquito-transmitted flavivirus pathogens. Although the frequency of ISF infections in field populations of mosquitoes is currently unknown, it is probably relatively low; furthermore, the infection rate likely varies by locality and species. But the variety and number of ISFs (and EVEs) serves as yet another example that all Ae. aegypti or Culex quinquefasciatus do not have the same vector potential. Other recent studies have demonstrated that the gut microbiota of mosquitoes can also change the insect\u2019s vector potential for some arboviruses by altering the mosquito\u2019s basal innate immunity or by directly inhibiting the virus through bacterial metabolites.42The discovery of three new mosquito-specific flaviviruses brings the total number of these agents to 38 . This ilAe. aegypti population may be of equal importance in determining the character of an anticipated dengue or Zika outbreak, as the herd immunity of the local human population. Given the complexity of factors affecting vector competence of mosquitoes and other hematophagous insects, further research in this area is needed if we want to understand and control the transmission of vector-borne viral diseases.Although the effects of an insect\u2019s microbiome on its vector competence are now being recognized by microbiologists and vector biologists, they apparently are not by most public health officials, epidemiologists, and modelers of arboviral diseases. It appears that the latter groups assume that all mosquitoes of a given species have the same vector potential and that it is possible to predict the risk, intensity of transmission, and spread of a disease caused by a mosquito-borne pathogen, such as dengue or Zika virus, based solely on climatic data, estimates of vector distribution and density, and susceptibility (immune status) of the local human population. But this approach is simplistic and misleading. 
The microbiome and resulting vector competence of a local 44 have shown that flavivirus-like and other negative-sense RNA viruses are much more numerous and diverse in invertebrates than in vertebrates, suggesting that the flavivirus pathogens may have evolved from earlier arthropod viruses. Thus it seems possible that an ISF could emerge as a vertebrate pathogen, although at present it is impossible to know how or when this might occur. However, this is another reason to discover, characterize, and monitor novel ISFs.In view of the similarity of some of the ISFs with the mosquito-borne flavivirus pathogens of vertebrates, one final consideration is whether one or more of the ISFs could evolve or mutate and acquire the ability to infect vertebrates. In other words, could an ISF like Marisma mosquito virus emerge sometime in the future as a human or animal pathogen? This is not a frivolous question. Recent metagenomics studies"} {"text": "Macrobrachium rosenbergii) is one of the most farmed freshwater crustaceans in the world. Its global production has been stalling in the past decade due to the inconsistent quality of broodstock and hatchery-produced seeds. A better understanding of the role of nutrition in maturation diets will help overcome some of the production challenges. Arachidonic acid is a fatty acid precursor of signaling molecules important for crustacean reproduction, prostaglandins E and F of the series II (PGE2 and PGF2\u03b1), and is often lacking in maturation diets of shrimp and prawns. We examined the effects of ARA in a combination of different fish oil (FO) and soybean oil (SO) blends on females\u2019 reproductive performance and larval quality. Adult females (15.22 \u00b1 0.13 g and 11.12 \u00b1 0.09\u00a0cm) were fed six isonitrogenous and isolipidic diets containing one of two different base compositions (A or B), supplemented with one of three levels of Mortierella alpine-derived ARA (containing 40% active ARA): 0, 1 or 2% by ingredient weight. The two base diets differed in the percentages of and docosahexaenoic acid ). After the eight-week experiment, prawns fed diet B with 1 and 2% ARA supplement (B1 and B2) exhibited the highest gonadosomatic index (GSI), hepatosomatic index (HSI), egg clutch weight, fecundity, hatching rate, number of larvae, and reproductive effort compared to those fed other diets (p\u00a0\u2264\u00a00.05). Larvae from these two dietary treatments also had higher tolerance to low salinity (2 ppt). The maturation period was not significantly different among most treatments (p\u00a0\u2265\u00a00.05). ARA supplementation, regardless of the base diet, significantly improved GSI, HSI, egg clutch weight and fecundity. However, the diets with an enhanced ARA and LOA (B1 and B2) resulted in the best reproductive performance, egg hatchability and larval tolerance to low salinity. These dietary treatments also allow for effective accumulation of ARA and an n-3 lcPUFA, DHA in eggs and larvae.The giant river prawn ( Macrobrachium rosenbergii, is an economically important species cultured in the Indo-Pacific region and linoleic acid and has -6, LOA) . In crus-6, LOA) and infl-6, LOA) . In gravy acids) , althoug\u22121 diet ingredients) can enhance reproduction of tank-domesticated P. monodon (M. rosenbergii reproduction (\u22121 DW) enhanced fecundity of females compared to those containing a low n-6 fatty acid level (approximately 4 mg g\u22121) improved hatching rates and larval ammonia tolerance. 
For this current study, we built on this maturation diet and a diet formulation recommended by the Thai Department of Fisheries based on locally available diet ingredients (Schizochytrium sp. yielded the best growth performance in juvenile M. rosenbergii (unpublished data). Some individuals in this dietary treatment reached earlier maturation compared to the control fish oil (FO) diet.ARA is often inadequate in pelleted maturation diets because of the naturally low base level of ARA derived from fish oil . Moreoveredients . Our preM. rosenbergii female broodstock and larval quality, namely the tolerance to low salinity. We examined the effects of ARA supplementation in two base diets varying in percentages of FO and SO . We hypothesized that increased dietary ARA along with a higher LOA level would improve reproductive performance of female M. rosenbergii. Our results provided insights to the understanding of these fatty acids in facilitating the prawn maturation process, and in larval development. Our findings also help fine tune the formulation of optimal maturation diets based on locally available material that will improve larval quality.The present study investigated the effect of ARA and LOA given adequate n-3 lcPUFA on the reproductive performance of We used a 6 \u00d7 3 completely randomized design (CRD) in this experiment, with six treatments performed in triplicate. Each experimental unit (a cage)consisted of a group of 15 females receiving each of the six diet formulations. Each cage was sub-divided into fifteen 18 \u00d7\u00a013 \u00d7\u00a012\u00a0cm compartments, each of which held one individual female. To have a manageable number of individuals per tank, to ensure optimal water quality for growth and reproduction, and to have an adequate number of individuals for all analyses, we created the entire experimental setting in two 3 \u00d7\u00a03 \u00d7\u00a01.2\u00a0m recirculating concrete tanks. Each tank contained 270 individual female prawns. Individuals raised in one tank were used for the analyses of females\u2019 reproductive performance and fatty acids of eggs and larvae. Those raised in the second tank were used for determining fatty acid compositions in muscle and stage II\u2013V ovaries, as well as for estimating Gonadosomatic index (GSI) and Hepatosomatic index (HSI). Because the hepatopancreas is a major lipid storage organ in crustaceans, HSI is an indication of the status of lipid and nutrient storage. The experiment lasted eight weeks.M. rosenbergii adults were obtained from a commercial prawn farm in Chachoengchao Province, Thailand. Individuals were acclimatized to 28\u00a0\u00b0C in a freshwater recirculation system at Rajamangala University of Technology Tawan-ok and fed a diet for seven days.Approximately four-month-old At the start of the experiment, females and males were randomly selected and stocked into the experimental set-ups. The initial weights and total lengths of females and males were 15.22 \u00b1 0.13 g and 11.12 \u00b1 0.09\u00a0cm, and 17.68 \u00b1 0.22 g and 11.44 \u00b1 0.09\u00a0cm, respectively. Males and females were kept separately until mating. The males used in the experiments were raised together in another two 1 \u00d7\u00a03 \u00d7\u00a01\u00a0m communal concrete tanks, each containing 250 individuals. Only females received the experimental diets; the males were fed a commercially formulated diet . 
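As a sketch only, the allocation of females to the 6 x 3 completely randomized design described above (six diets, three replicate cages per diet, fifteen individually housed females per cage, 270 females per tank) could be generated as follows; the diet codes match those used later in the text, while the random seed and animal numbering are arbitrary.

```python
# Minimal sketch of the 6 x 3 completely randomized design described above:
# six diets, three replicate cages per diet, 15 individually housed females per
# cage (270 females per tank). Seed, animal IDs and cage labels are illustrative.
import random

DIETS = ["A0", "A1", "A2", "B0", "B1", "B2"]   # diet codes used in the study
REPLICATES = 3
FEMALES_PER_CAGE = 15

def randomize_crd(seed=1):
    rng = random.Random(seed)
    females = list(range(1, len(DIETS) * REPLICATES * FEMALES_PER_CAGE + 1))
    rng.shuffle(females)                        # random allocation to units
    layout, idx = {}, 0
    for diet in DIETS:
        for rep in range(1, REPLICATES + 1):
            cage = f"{diet}-rep{rep}"
            layout[cage] = females[idx:idx + FEMALES_PER_CAGE]
            idx += FEMALES_PER_CAGE
    return layout

layout = randomize_crd()
print(len(layout), "cages,", sum(len(v) for v in layout.values()), "females")
print("A1-rep1 compartments:", layout["A1-rep1"])
```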
Experimental animals were fed twice daily (07.00 and 18.00) at approximately 5% of body weight.\u22121, pH from 8.19\u20138.38, alkalinity from 178.42\u2013242.33\u00a0mg L\u22121, hardness from 129.42\u2013146.33 mg L\u22121. Nitrogenous compounds, namely ammonia nitrogen, NO2-N and NO3-N were 0.01\u20130.06, 0.01\u20130.04 and 0.05\u20130.08\u00a0mg L\u22121, respectively. Temperature, pH and dissolved oxygen were monitored daily. Alkalinity contained 2% FO and 2% SO by ingredient weight . Compared to the HH diet, base diet A had a comparable DHA content while diet B had comparable LOA contents using standard procedures . Moisturr lipids .For female reproductive performance, we determined gonadosomatic index (GSI), hepatosomatic index (HSI), incubation period , fecundity, reproductive effort, hatching rate, larval length and larval tolerance to low salinity. For fatty acid analysis, we determined fatty acid profiles of muscle tissue , stage II\u2013V ovaries, two stages of eggs (OE and BE stages) and larvae. For all analyses, we collected the data in triplicate and nine individuals per treatment were analyzed. With the exception of fatty acid analysis, all other analyses obtained nine independent values per replication. For fatty acid analysis of all tissue types, we prepared a homogenate for each replication from a pool of three individuals; only three values per treatment were obtained. We started to observe stage II ovaries at approximately one month after beginning the experiment. Stages III and IV developed one to two weeks after the prior stage. By the end of the experiment, most experimental animals developed stage V ovaries. The tissue samples were kept at \u221240\u00a0\u00b0C for fatty acid analysis.\u22121. The temperature program used was an initial 150\u00a0\u00b0C for 0.5\u00a0min, increasing to 170\u00a0\u00b0C at a rate of 5\u00a0\u00b0C min\u22121, held at 170\u00a0\u00b0C for 10\u00a0min, then increasing to 190\u00a0\u00b0C at a rate of 3\u00a0\u00b0C min\u22121, and then held at 190\u00a0\u00b0C for 28\u00a0min. Temperatures at the injection and detection ports were 230\u00a0\u00b0C and 250\u00a0\u00b0C, respectively. The fatty acid analyses were performed at the Institute of Marine Science, Burapha University, Thailand.Total lipids from experimental diets and various tissues were extracted by homogenizing each sample in 20 ml ice-cold chloroform:methanol containing 0.1% butylated hydroxytoluene for 20\u00a0min before the liquid fraction was transferred to a separating funnel. The residual matter was then subjected to a second round of extraction, after which the liquid portion was transferred to a separating funnel. To separate the non-lipid phase, 0.88% (w/w) KCl was then added to the separating funnel, agitated to mix the contents, and then left until the solution separated into two layers. Total lipid was obtained by filtering the lower layer through anhydrous sodium sulfate before evaporating the collected fraction . Fatty aMolting and ovarian development of each female were monitored daily. Maturation stages of the ovary were classified into one of five stages, based on the ovarian size and color observed through the carapace, following the criteria detailed in We paired a mature female to a male at a 1:1 ratio; the entire experiment required a total of 270 mating pairs. Each gravid female (with a stage V ovary) was mated in a compartment to a mature male immediately after molting. 
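The GC oven program quoted earlier in this section (an initial hold at 150 °C, two ramps and two further holds) implies a fixed run time that can be recovered from the stated rates; the sketch below only adds that arithmetic, and all temperatures, holds and ramp rates are taken from the text.

```python
# Sketch that lays out the GC oven temperature program quoted above and
# computes the total run time from the stated holds and ramp rates; only the
# arithmetic is added here, all temperatures and rates come from the text.

SEGMENTS = [
    # ("hold", (temperature C, minutes)) or ("ramp", (start C, target C, C/min))
    ("hold", (150, 0.5)),
    ("ramp", (150, 170, 5.0)),
    ("hold", (170, 10.0)),
    ("ramp", (170, 190, 3.0)),
    ("hold", (190, 28.0)),
]

total = 0.0
for kind, value in SEGMENTS:
    if kind == "hold":
        temp, minutes = value
        total += minutes
        print(f"hold {temp} C for {minutes:g} min")
    else:
        start, target, rate = value
        minutes = (target - start) / rate
        total += minutes
        print(f"ramp {start}->{target} C at {rate:g} C/min ({minutes:.2f} min)")
print(f"total oven program: {total:.1f} min")
```

Summing the holds and ramps gives a total oven program of roughly 49 min per injection.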
We manually placed a mature male, kept in one of the \u2018male\u2019 tanks (see above), in the compartment. After the eggs were fertilized (about 6\u20138\u00a0h after mating), the male was removed from the compartment to another tank without returning to the original male tank; each male was used only once. Approximately 8\u00a0h after mating, freshly fertilized eggs migrated to the female\u2019s brood chamber in the abdominal area and became visible.The female prawns in individual compartments were observed daily to visually determine embryonic development stage, based on the egg color . As embrA 10-day-old egg clutch (brownish) (after OE become apparent), scraped from the brood chamber of each of three females per treatment was weighed to the nearest 0.01 g after removing any excess water by repeated blotting. Three independent samples of equal weight (0.2 g) from each clutch were counted under a microscope to estimate the total number of eggs per clutch. The averaged egg numbers did not differ statistically among treatments. We therefore extrapolated the averaged egg number per 0.2 g of 1,450 \u00b1 72.13 eggs to an actual egg clutch weight for fecundity estimates for all treatments.An egg clutch weight was the difference between a gravid female\u2019s weight before and after hatching. Gravid females bearing brownish eggs were weighed and transferred into an aerated 20-L tank containing 14 ppt water. After spawning, the females were weighed and transferred back into the original compartment, while the spawned eggs remained in the hatching tank and then were used for the subsequent experimental steps. Fecundity was estimated for each individual female as the total number of eggs ) per body weight . The reproductive effort for each individual female was estimated as the percentage of egg weight to body weight. We estimated the number of newly hatched larvae to a female\u2019s weight by counting the number of individuals in a sample of 100 ml water from a well-mixed tank and then extrapolating to 20 L. Hatching rates were determined from the number of larvae per total number of brownish eggs.We tested the tolerance to low salinity of newly hatched larvae of the experimental females. The larvae from each treatment were exposed to a low salinity level of 2 ppt for 24\u00a0h. Each experimental unit contained 30 individual larvae. A group raised in 14 ppt water were treated as a control. At the end of the test, we recorded the survival for each treatment.p value < 0.05. All statistical analyses were executed with SPSS version 17 for Windows . For the data that were not normally distributed, we performed either square root arcsine or arcsine transformation before ANOVA. However, only non-transformed means are presented in the table.We determined the differences in female reproductive performance, larval quality and fatty acid compositions in diets and various tissues among dietary treatments using one-way analysis of variance (ANOVA). The significance of the differences in means was determined by Duncan\u2019s new multiple range test at a To examine the variation of fatty acid profiles among various tissue types and treatments, we analyzed principal components from seven quantitative variables: percentages of five major fatty acids and total values of two fatty acid classes detected in our study . 
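Before turning to the multivariate analysis, the reproductive-performance estimators defined above (eggs per clutch from a 0.2 g subsample, fecundity per g body weight, reproductive effort, larval numbers from a 100 mL count in the 20 L hatching tank, and hatching rate) can be written down compactly; the worked female below is hypothetical, and only the formulas and the mean subsample count follow the text.

```python
# Minimal sketch of the reproductive-performance estimators described above.
# The example female at the bottom is hypothetical; only the formulas and the
# average count of 1,450 eggs per 0.2 g subsample come from the text.

EGGS_PER_SUBSAMPLE = 1450      # mean eggs per 0.2 g subsample (from the text)
SUBSAMPLE_WEIGHT_G = 0.2
TANK_VOLUME_ML = 20_000
COUNT_VOLUME_ML = 100

def total_eggs(clutch_weight_g):
    return clutch_weight_g / SUBSAMPLE_WEIGHT_G * EGGS_PER_SUBSAMPLE

def fecundity(clutch_weight_g, body_weight_g):
    return total_eggs(clutch_weight_g) / body_weight_g       # eggs per g BW

def reproductive_effort(clutch_weight_g, body_weight_g):
    return 100.0 * clutch_weight_g / body_weight_g           # percent

def larvae_in_tank(larvae_in_count_sample):
    return larvae_in_count_sample * TANK_VOLUME_ML / COUNT_VOLUME_ML

def hatching_rate(larvae_total, eggs_total):
    return 100.0 * larvae_total / eggs_total                  # percent

# Hypothetical female: 2.9 g clutch, 17 g body weight, 95 larvae per 100 mL.
eggs = total_eggs(2.9)
larvae = larvae_in_tank(95)
print(f"eggs/clutch ~ {eggs:.0f}, fecundity ~ {fecundity(2.9, 17.0):.0f} eggs/g BW")
print(f"effort ~ {reproductive_effort(2.9, 17.0):.1f} %, "
      f"hatching ~ {hatching_rate(larvae, eggs):.0f} %")
```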
Total n3-lcPUFA was treated as a supplementary variable on a principal component analysis (PCA) biplot for a better understanding of the ordination based on the seven active quantitative variables; it did not interfere with the ordination. This multivariate approach utilized all information available for each individual while at the same time reduced the dimensionalities of the data (from seven dimensions to two dimensions in our case). The first few principal components typically captured most of the variation in the data set. PCA was performed using algorithms implemented in the FactoMineR package and the \u22121.The proximate compositions of the experimental diets were similar . Each exThe fatty acid composition of each diet generally reflects the level of ARA supplementation and different proportions of lipid sources in two base compositions (A and B) . The proFor n-6 PUFA, all diets contained LOA as a major component with B2 containing the highest level of total n-6 PUFA and A0 containing the lowest level. For total n-3 PUFA, LNA, EPA and DHA were primary components. All diets contained approximately 6\u20137% LNA, 4\u20135% EPA and 6\u201310% DHA of total fatty acids. The total n-3 PUFA and n-3 lcPUFA were highest in diet A1. In diets A0-A3, LNA contents were slightly higher than EPA but lower than DHA contents. In contrast, LNA contents were higher than DHA in diets B0\u2013B2. In all diets, dietary EPA levels were lower than DHA; EPA contents were similar among all diets . EPA to DHA ratios were similar among experimental diets, approximately 0.5 (0.43\u20130.72).p\u00a0<\u00a00.05; p\u00a0<\u00a00.05). Females fed diets B1 and B2 had GSI of 8.59 \u00b1 0.53 and 8.73 \u00b1 0.51% and HSI of 4.61 \u00b1 0.57 and 4.85 \u00b1 0.43%, respectively. The average egg clutch weights for these groups were 2.89 \u00b1 0.38 and 2.91 \u00b1 0.11 g. Their average fecundities were 1,399.82 \u00b1 187.96 and 1,354.62 \u00b1 129.93 eggs per female; reproductive effort was 16.61 \u00b1 1.78 and 16.79 \u00b1 1.03 g of eggs per g BW female. Hatching rates were higher than 90% for these groups. Female fed diets A1 and A2 had HSI of 3.81 \u00b1 0.34 and 4.05 \u00b1 0.42%. The average egg clutch weights for these groups were 2.50 \u00b1 0.24 and 2.49 \u00b1 0.22 g, average fecundities were 1,191.15 \u00b1 73.29 and 1,268.33 \u00b1 128.62 eggs per female, for A1 and A2, respectively. Larval lengths and egg incubation periods (OE to hatching) of all dietary treatments were not statistically different.Females fed diets containing 0.4% and 0.8% ARA supplementation had higher reproductive performance than those fed on diets without ARA addition (A0 and B0) . Variation of muscle ARA percentages to total fatty acids corresponded to the percentages of ARA added to diets and source of fatty acid in each base diet. Groups fed diets A2 had the highest proportions of muscle ARA followed by B2, A1and B1 while those fed diet A0 and B0 had the lowest muscle ARA level (p\u00a0<\u00a00.05). For n-3 PUFA, muscle EPA was highest in the A0 group (p\u00a0<\u00a00.05) while the remaining treatments showed similar values (p\u00a0>\u00a00.05). The total n-3 PUFA in muscle was highest in prawns fed diets A0 and A1, while the remaining dietary treatments exhibited similar values.Among experimental treatments, muscle total saturates and total monoenes were not statistically different, but some PUFAs, namely LOA, ARA, EPA and DHA, showed some significant variation. 
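The ordination itself was run in R with the FactoMineR package; purely as a hedged stand-in, the Python sketch below reproduces the same idea with scikit-learn: seven active fatty-acid variables are standardized and reduced to two components, and total n-3 lcPUFA is handled as a supplementary variable by correlating it with the retained components after the fit. The data are random placeholders, not the measured profiles.

```python
# Hedged Python stand-in for the FactoMineR PCA described above: seven active
# fatty-acid variables, two retained components, and total n-3 lcPUFA treated
# as a supplementary variable (correlated afterwards, not used in the fit).
# All values below are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 24                                # e.g. tissue pools across diets
active = rng.normal(size=(n_samples, 7))      # 7 active fatty-acid variables
supplementary = active[:, 4] + active[:, 5] + rng.normal(scale=0.3, size=n_samples)

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(active))
print("variance explained by PC1, PC2:", np.round(pca.explained_variance_ratio_, 2))

# Supplementary variable: correlated with each retained component only.
for k in range(2):
    r = np.corrcoef(scores[:, k], supplementary)[0, 1]
    print(f"corr(total n-3 lcPUFA, PC{k + 1}) = {r:+.2f}")
```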
The differences in LOA and DHA in muscle were consistent with the differences of these two fatty acids between the two base diets. Dietary treatment B had a higher muscle LOA level while dietary treatment A had a higher muscle DHA level . OvarianFor each ovarian maturation stage, females fed different experimental diets varied in their ovarian fatty acid compositions , especiaThe variation of ovarian ARA within each ovarian maturation stage corresponded to the dietary ARA supplementation. Compared to dietary treatments without ARA supplementation, groups fed diets with ARA supplementation had higher ovarian ARA levels in all ovarian stages, especially those fed diets B1 and B2 (from 4.47\u20135.65% in stage II ovary to 6.17\u20137.05% in stage IV). In all ovarian stages, females fed B2 and B1 typically had the highest ovarian ARA proportional contents, followed by those fed A2 and A1; this relationship between the base diet and ARA retention was opposite to that observed in muscle and diet where treatments A2 and A1 had the highest ARA percentages.Egg and larval fatty acid profiles were more similar to the ovarian profiles, especially at stage V ovaries, than to those of muscle . Howeverp\u00a0<\u00a00.05). Egg ARA was highest in treatments A2, B1, and B2 followed by A1 (p\u00a0<\u00a00.05). For n-3 PUFA, EPA was highest in treatments A0, A1, A2 and B0, but LNA and DHA were highest in B1 and B2 (p\u00a0<\u00a00.05).At the OE stage, the differences in dietary fatty acid levels in the two base diets were consistent only with the differences in total n-3 PUFA and total n-3 lcPUFA . Groups p\u00a0<\u00a00.05). For n-6 PUFA, LOA contents were similar across treatments (18.05\u201319.84%) although treatment B2 showed the highest level at 19.84 \u00b1 0.24%, followed by A0, B0 and B1 (18.92\u201319.03%). Similar to OE, ARA was highest in treatment B2 followed by A2 and A1/B1 (p\u00a0<\u00a00.05). The variation of total n-6 PUFA among treatments was therefore similar to that of ARA and LOA, with treatment B2 showing the highest values, followed by treatments A2 and B1 (p\u00a0<\u00a00.05). For n-3 PUFA, EPA and DHA percentages were similar across treatments, with A1 showing the highest values , and B0 showing the lowest values .At the BE stage, groups fed diet A had higher LNA, EPA, total n-3PUFA, and total n-3 lcPUFA than those fed diet B (p\u00a0<\u00a00.05). For n-6 PUFA, LOA was highest in treatment B2 and lowest in A1. Similar to egg tissues, larval ARA was highest in treatment B2 followed by A2/B1 (p\u00a0<\u00a00.05). For n-3 PUFA, the total n-3 PUFA and total n-3 lcPUFA were higher in treatments A0, B0, B1 and B2 than those in treatment A2; an opposite pattern was observed in egg tissue. This n-3 PUFA variation was due to high levels of EPA detected in B0 and B1 and high DHA levels detected in A0, B1 and B2.In larvae, groups fed the base diet B had higher total saturates in tissue than those fed diet A, with the percentage being highest in treatment B1 to 9.43 \u00b1 0.28% (A2) in diets. The basal dietary ARA levels were approximately 3% in treatments without ARA supplement. The enhanced reproduction in M. rosenbergii was similar to that of tank domesticated P. monodon broodstock fed ARA-supplemented diet . These positive effects of dietary ARA supplementation on M. 
rosenbergii reproduction and embryonic development may link to an ARA role as a precursor to prostaglandins (PGs).Our research highlighted the importance of ARA in females\u2019 reproduction, especially for traits relevant to ovarian maturation and oogenesis, in ted diet . The exp\u03b1 in the cyclooxygenase pathway. PGE2 and PGF2\u03b1 are important signaling molecules in several reproductive functions in crustaceans. The roles of PGE2 have been more extensively examined and demonstrated in several shrimp species, including M. rosenbergii. PGE2 is associated with oocyte maturation were comparable to those detected in mature ovaries and eggs of wild P. monodon and our base diets (diets A0 and B0). Our findings have an important implication for maturation diets for domesticated broodstock of M. rosenbergii in that the broodstock of this species requires ARA levels as high as marine shrimp even though their natural diets are derived mainly from freshwater systems, which may be lacking ARA. Formulating maturation diets from observations in the wild may not optimize the prawn\u2019s ARA requirements. We did not detect negative effects of supplementing ARA at 0.8% (9.43 and 8.33%) of total fatty acids in diets A2 and B2, respectively).We detected much higher levels of ovarian and egg ARA in the females fed diets supplementing ARA than those detected in mature ovaries and eggs of wild y acids, ). This dM. rosenbergii than treatments A1 and A2, containing 6.17\u20139.43% ARA and 14.85\u201313.35% LOA. Positive effects of dietary LOA levels on reproductive performance of broodstock and larval quality observed in this study were consistent with those observed by M. rosenbergii and M. amazonicum. \u22121 DW LOA could improve fecundity of mature female M. amazonicum.In addition to ARA, we detected a positive synergistic effect of enhanced dietary ARA and LOA on females\u2019 reproductive performance, egg hatchability and larval quality . The dietary treatments B1 and B2, containing 5.7\u20138.33% ARA and 21.78\u201322.49 % LOA of total fatty acid, led to higher reproductive performance of Macrobrachium species, M. borllii, Although some authors speculated that LOA may serve as a raw material for the synthesis of ARA , some emM. rosenbergii. When the dietary LOA was present at a higher percentage (as in diets B1 and B2), LOA was utilized during muscle tissue formation, ovarian maturation and early embryogenesis. On the other hand, dietary LOA available at a slightly lower level (as in diets A1 and A2) was mostly retained and accumulated in muscle and ovarian tissues. In addition, females fed diets with higher LOA tended to retain higher proportions of LOA in ovaries and larvae. LOA may also have played a role in hatching as proportional contents of LOA in larvae were slightly lower than that of mid-stage embryos.LOA contributed to a large percentage of total fatty acids in all tissue types, suggesting its importance as an energy source and as cellular structural components for Eggs of females fed diets with enhanced dietary ARA and LOA also had higher hatchability and resulted in higher numbers of larvae than those of other dietary treatments. Also, larvae from these treatments were more tolerant to low salinity stress. The results suggested the importance of both fatty acids in yolk deposition, embryogenesis and possibly larval osmoregulation. 
A combination of enriched ARA and LOA in diets may have facilitated the accumulations of other important fatty acids, especially DHA, in eggs and larvae despite the lower n-3 lcPUFA available in diets. Having adequate energy required by these energetically expensive life stages, especially from fertilized eggs (containing mainly yolk) to advanced embryo and larvae (involving organ formation) may alloM. rosenbergii . The dominance of these fatty acid classes in muscle, ovarian and egg tissues were similar to those observed in wild and farmed r growth , ovarianr growth , embryogr growth and earlr growth as they r growth .M. rosenbergii. Compared to marine shrimp species, M. rosenbergii tends to accumulate higher LOA, comparable EPA, but less DHA in the muscle tissue while it accumulates comparable LOA, but less n-3 lcPUFA in reproductive tissues. Tissues of M. rosenbergii adults from natural environments tended to contain higher amounts of LOA compared to marine shrimp species .The pattern of LOA, EPA and DHA accumulation and utilization observed in our study may be reflective of fatty acid requirements specific to this diadromous tissue) . Marine tissue) compared10.7717/peerj.2735/supp-1Data S1Click here for additional data file.10.7717/peerj.2735/supp-2Data S2Click here for additional data file."} {"text": "Betula platyphylla) ectopic overexpressing a late embryogenesis abundant (LEA) gene and a basic leucine zipper (bZIP) gene from the salt-tolerant genus Tamarix show increased tolerance to salt (NaCl) stress. Co-transfer of TaLEA and ThbZIP in birch under the control of two independent CaMV 35S promoters significantly enhanced salt stress. PCR and northern blot analyses indicated that the two genes were ectopically overexpressed in several dual-gene transgenic birch lines. We compared the effects of salt stress among three transgenic birch lines and wild type (WT). In all lines, the net photosynthesis values were higher before salt stress treatment than afterwards. After the salt stress treatment, the transgenic lines L-4 and L-8 showed higher values for photosynthetic traits, chlorophyll fluorescence, peroxidase and superoxide dismutase activities, and lower malondialdehyde and Na+ contents, compared with those in WT and L-5. These different responses to salt stress suggested that the transcriptional level of the TaLEA and ThbZIP genes differed among the transgenic lines, resulting in a variety of genetic and phenotypic effects. The results of this research can provide a theoretical basis for the genetic engineering of salt-tolerant trees.The aim of this study was to determine whether transgenic birch ( Betula Platyphylla) is one of the most extensively distributed broadleaf species in the northern and southwestern forested areas of China [Birch seeds [LEA genes were subsequently found to be one of the most important stress-associated gene families. Many studies have demonstrated that LEA genes are associated with tolerance against salt and other stresses [Late embryogenesis abundant (LEA) proteins were first discovered in germinating cotton (m) seeds , and LEAstresses \u201319. Basistresses . bZIP trstresses . One of stresses .TaLEA and ThbZIP genes from Tamarix were transformed into birch and then subjected to salt stress treatments. The aim of these experiments was to determine whether these genes affected salt tolerance, and to detect the variation in physiological characters among the different transgenic lines. 
This information will provide a theoretical basis for molecular breeding in birch.Although birch has a strong cold resistance, it is weak in salt tolerance, which limits the popularization and application of birch in saline soils. In order to gain transgenic birch with salt tolerance, both TaLEA and ThbZIP genes were firstly cloned from Tamarix as described by Wang [TaLEA-Tnos) amplified from a pROK2-TaLEA vector was inserted into the pROK2-ThbZIP vector, concrete steps of which were described in detail in our previous research plantlets were obtained by Agrobacterium-mediated transformation. To prepare the infection liquid, an Agrobacterium culture was incubated until the OD600 was 0.6\u20130.8, then centrifuged at 3000 r min\u22121 for 5 min, finally, the collected pellets were diluted with sterile water to a final concentration of OD600 = 0.1. Leaves from 60-d-old clones were cut in half and the pieces were gently shaken in the infection liquid for 5 min. Then, the leaves were removed and excess liquid was absorbed with sterile filter paper. The infected birch leaves were then co-cultured on antibiotic-free differentiation medium at 25 \u00b1 2\u00b0C in the dark for 2 d. To eliminate bacteria, the co-cultured leaves were washed with 200 mg L\u22121 cephalosporin solution for 3\u20135 min. The excess liquid was absorbed with sterile filter paper, and then leaves were cultured in new differentiation medium . In the first week, bacteria elimination was performed every 2 d; subsequently, bacteria were eliminated every 7 d until small buds differentiated. The resistant buds grew into leaves, which were cut off and cultured on a differentiation medium containing antibiotics. Finally, adventitious shoots were transferred onto medium to allow shoot growth for 2 weeks. To induce rooting from the shoots, 2-cm shoot cuttings were transferred to rooting medium .To study the functions of the NptII (Kanamycin resistance gene), TaLEA and ThbZIP genes according to the manufacturer\u2019s protocol. The cycling conditions were as follows: pre-denaturing for 3 min at 94\u00b0C followed by 35 cycles of denaturing at 94\u00b0C for 30 s, annealing at 58\u00b0C for 30 s, and extension at 72\u00b0C for 1 min, and a final extension at 72\u00b0C for 7 min. The PCR products were detected on 1.0% agarose gels.Total DNA was extracted from all transgenic and WT birch lines using a modified CTAB method . Using tIP genes . The vecTaLEA and ThbZIP in birch, total RNA from WT and transgenic birch clones was isolated as described by Qu [+ nylon membrane, and fixed with UV cross-linking for the northern blot analysis. The membrane was hybridized with full-length TaLEA and ThbZIP genes labeled with DIG-dUTP. Hybridization and detection were using a DIG Northern starter Kit,.To detect ed by Qu . Subsequ\u22121, 4 g L\u22121 or 6 g L\u22121 NaCl. Moreover, the wild-type and the transgenic plants exhibiting similar height (about 3 cm in length) were grown on 1/2 MS medium supplemented with 4 g L\u22121 or 6 g L\u22121 NaCl for rooting. The growth condition was controlled at 25\u00b0C in a 16 h light/8 h dark photoperiod at an intensity of ~2000 lux. 
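For readers more used to molar units, the NaCl levels used in the in vitro screening above (2, 4 and 6 g per litre of medium) convert as in the short sketch below; the molar mass of NaCl is the only value added beyond the text.

```python
# Convert the NaCl levels used in the differentiation/rooting media (2, 4 and
# 6 g/L, from the text) into molar concentrations. The molar mass of NaCl
# (58.44 g/mol) is the only figure not taken from the text.

NACL_MOLAR_MASS = 58.44          # g/mol

for grams_per_litre in (2, 4, 6):
    millimolar = grams_per_litre / NACL_MOLAR_MASS * 1000
    print(f"{grams_per_litre} g/L NaCl ~ {millimolar:.0f} mM")
```

The working stress level of 4 g L-1 therefore corresponds to roughly 68 mM NaCl.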
The phenotypes of seedlings were photographed and measured after 20 d of growth.To test the salt stress tolerance of transgenic birchs, the shoot of tissue cultured seedlings was cut into 1- cm pieces and cultured on WPM differentiation medium containing 2 g L\u22122 s\u22121.For growth comparison of plants in soil, three transgenic birch lines and one wild type line (WT) were used in this study. In April 2014, the four lines were propagated and grown in separate pots in a greenhouse. After 60 d, 200 healthy plants in each line were selected as the experimental materials (about 40 cm in height). The greenhouse was controlled at a relative humidity of 65\u201375% with an average temperature of 27 \u00b1 2\u00b0C. Cool white fluorescent lights supplied photons at 200 \u03bcmol m\u22121). NaCl solutions were applied at 18:00\u201319:00 every 2 d for 16 d. The phenotypes of each group were observed and instantaneous net photosynthesis rate (Pn) and chlorophyll fluorescence parameters (Fv/Fm) were measured at 0, 2, 4, 6, 8, 10, and 12 d during the stress treatment. Pn values were measured from 8:30 a.m. to 11:30 a.m. with a Lico-6400 portable photosynthesis measuring system on the third to fifth fully expanded leaves of each plant. The conditions during photosynthetic trait measurements were as follows: leaf temperature, 28\u00b0C; PPFD, 1400 \u03bcmol m-2 s-1; relative humidity, 60%; ambient CO2 concentration, 400 \u03bcmol mol-1. Chlorophyll fluorescence parameters were measured with the same leaves using a pulse amplitude modulation chlorophyll fluorometer MINI-PAM2500 . Minimal fluorescence, F0, was measured in 30-min dark-adapted leaves using weak modulated light of < 0.15 \u03bcmol m-2 s-1. Maximal fluorescence, Fm, was measured after an 0.8-s saturating white light pulse (6000 \u03bcmol m-2 s-1) in the same leaf with 2.9. Maximal variable fluorescence (Fv = Fm\u2013F0) and the photochemical efficiency of PSII (Fv/Fm) for dark-adapted leaves were calculated.Thirty uniform WT seedlings were selected and divided into five group as the preliminary material, One group was designated as the control group and the remaining four groups were treated with NaCl at various concentrations were selected as the experimental materials. All plants were watered with a 4 g L\u22121 NaCl solution every 2 d, and the Pn-photosynthetic photon flux density (PPFD) curves and Pn-CO2 concentration in air (Ca) curves were measured and analysesed by the method of Zhao [2 saturation point (CSP) and CO2 compensation point (CCP) were evaluated by fitting the data to the model function as follows:Y is the Pn value, X is the PPFD (or Ca), b0 is a constant, and b1 and b2 are coefficients.We selected 4 g L of Zhao on 8 d o2 saturation point (CSP) and CO2 compensation point (CCP) were evaluated by fitting the data to the model function.The CO+ contents were conducted on 0, 4, 8, 12, and 16 d of the salt stress treatment, using 10 plants from each line. Also the third to fifth fully expanded leaves of each plant were used for photosynthetic parameters, chlorophyll fluorescence parameters, antioxidant enzyme activity and Na+ contents assays. Photosynthetic parameters , stomatal conductance (Gs), and transpiration rate (Tr)) were measured by lico-6400, chlorophyll fluorescence parameters (Fv/Fm) were measured by MINI-PAM2500, the methods were the same with as ahead. 
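The regression model used for the Pn-PPFD and Pn-Ca curves is not reproduced legibly above (only a constant b0 and coefficients b1 and b2 are named), so the sketch below assumes the common quadratic form Pn = b0 + b1*X + b2*X^2 and shows how a saturation point (the curve vertex) and a compensation point (the lower root) would then follow from the fitted coefficients; the data points are synthetic placeholders.

```python
# ASSUMED quadratic light/CO2 response model, Pn = b0 + b1*X + b2*X**2, used
# here only to illustrate how LSP/CSP (vertex) and LCP/CCP (lower root) can be
# derived from fitted coefficients. Synthetic data, not measurements.
import numpy as np
from scipy.optimize import curve_fit

def quadratic(x, b0, b1, b2):
    return b0 + b1 * x + b2 * x ** 2

ppfd = np.linspace(0, 1800, 13)                       # umol m-2 s-1
rng = np.random.default_rng(2)
pn = quadratic(ppfd, -1.5, 0.021, -7e-6) + rng.normal(scale=0.2, size=ppfd.size)

(b0, b1, b2), _ = curve_fit(quadratic, ppfd, pn)
saturation_point = -b1 / (2 * b2)                     # vertex of the parabola
max_pn = quadratic(saturation_point, b0, b1, b2)
compensation_point = (-b1 + np.sqrt(b1 ** 2 - 4 * b2 * b0)) / (2 * b2)  # lower root

print(f"LSP ~ {saturation_point:.0f} umol m-2 s-1, Pn(LSP) ~ {max_pn:.1f}")
print(f"LCP ~ {compensation_point:.0f} umol m-2 s-1")
```

The same fitting step applies to the Pn-Ca curves, with Ca in place of PPFD and the vertex and lower root read as CSP and CCP.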
The total superoxide dismutase (SOD) activity was assayed as described by Giannopolitis [+ concentration was determined using atomic absorption spectroscopy as described by Chen [Measurements of photosynthetic parameters, antioxidant enzyme activity, malondialdehyde (MDA) and Naopolitis , total popolitis , Malondiopolitis and Na+ by Chen .F-tests. Variation among lines in different time was analyzed by ANOVA according to Hansen and Roulund [ij is the performance of an individual of line i within time j, \u03bc is the overall mean, Li is the line effect , Tj is the time effect and \u03b5ij is the random error.Statistical analyses were carried out using the Statistical Product and Service Solutions (SPSS 19.0) software. All the parameters were compared using analysis of variance; the significance of fixed effects was tested with Roulund .yij=\u03bc+LTaLEA gene (GenBank accession NO.: DQ663481) was isolated from T. androssowii, which belongs to late embryogenesis abundant 3 superfamily protein. The ThbZIP gene (NO.: FJ752700) was isolated from T. hispida, a member of basic leucine zipper superfamily of plant G-box binding factor 1 (GBF1)-like transcription factors, which are involved in developmental and physiological processes in response to stimuli such as light, hormones or stress. To investigate the physiological functions of TaLEA and ThbZIP genes in birch, we produced transgenic birch lines ectopically overexpressing these two genes and five contained both genes, as confirmed by PCR with specific TaLEA and ThbZIP primers and co-transfer (TaLEA and ThbZIP) both can enhance the salt and osmosis tolerance of transgenic tobacco plants [Abiotic stresses in plants involve a series of physiological and biochemical responses. A great many genes are associated with the abiotic stress-tolerance trait of plants, and multiple transcription factors were activated in different signal transduction pathways to respond single stress. As a result of the limited contribution of single gene to stresses, transfer of multiple transcription factors have been reported to produce additive or synergistic effects on stress tolerance in plants . LEA proo plants \u201325.Crossbreeding has been one of the most important traditional approaches to obtain new materials for tree breeding. However, this method is very time-consuming, because forest trees have long life cycles with extended vegetative phases ranging from one to many decades . Genetic\u22121 was determined to be appropriate; this is the same concentration that was used in a study on poplar [Determining the appropriate concentration of salt was the most important factor in the design of these experiments. Salt concentrations that were too high led to a rapid decrease in the photosynthetic index, which was not conducive to observation, and those that were too low did not affect the photosynthetic index sufficiently, or required a very long experimental period to detect any effects. In this research, the NaCl concentration of 4 g Ln poplar .\u22122 s\u22121, indicating that birch can adapt to strong illumination intensity. The LSPs of transgenic lines were higher than that of WT, suggesting that the plants harboring TaLEA-ThbZIP showed greater resistance to high illumination intensity. After 8 d of salt stress, the Pn of the different lines increased slowly with increasing PPFD, and the LSPs and maximum Pns at the LSP were lower than those before the stress treatment. 
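The fixed-effects model quoted above, y_ij = mu + L_i + T_j + e_ij (line and sampling-time effects plus random error), was fitted in SPSS; as a hedged illustration only, the sketch below fits the same line-plus-time model to a toy data set with statsmodels and prints the two-way ANOVA table.

```python
# Hedged stand-in for the SPSS analysis: two-way fixed-effects ANOVA of a trait
# (here an invented SOD-like response) on transgenic line and sampling day,
# matching the model y_ij = mu + L_i + T_j + e_ij quoted in the text.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
lines = ["WT", "L-4", "L-5", "L-8"]
days = [0, 4, 8, 12, 16]
rows = []
for line in lines:
    for day in days:
        for _ in range(3):                       # 3 replicate measurements
            effect = 1.0 if line in ("L-4", "L-8") and day >= 8 else 0.0
            rows.append({"line": line, "day": day,
                         "sod": 10 + 0.2 * day + effect + rng.normal(scale=0.5)})
df = pd.DataFrame(rows)

model = smf.ols("sod ~ C(line) + C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```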
The results also showed that salt stress affected antioxidant enzyme activity and the Na+ content in plant cells, which ultimately affected photosynthesis, consistent with the results reported by Deng [Measurement of the Pn\u2013PPFD curve is an important method to analyze the ability of plants to adapt to high or low light conditions . In this by Deng .2 concentrations are projected to double from the current concentration of 350 \u03bcmol mol\u22121 to 700 \u03bcmol mol\u22121. This increase will further stimulate plant growth and result in ecosystem changes [2 is the most important substrate for photosynthesis. Within a certain concentration range, enhanced CO2 concentrations can promote the instantaneous photosynthetic rate [2 concentration [\u22121. Below the CSP, the Pn of the four lines ranged from 11.23 to 14.52 \u03bcmol m\u22122 s\u22121, similar to the Pn values reported for poplar seedlings [2. The instantaneous Pn under CSP was lower at 8 d of the salt stress treatment than before the stress treatment, indicating that salt stress affected photosynthesis in birch. The instantaneous Pn values were higher in L-4 and L-8 than in WT and L-5, indicating that the new genes had affected the physiology of the transgenic lines.In the next 80 years, atmospheric COtic rate . There intration . In thiseedlings . These rv/Fm values differed significantly (P < 0.01) among the four lines and the five time points. The Fv/Fm values of L-4 and L-8 were higher than those in L-5 and WT after 8 d of salt stress, indicating that the new genes had altered the efficiency of energy transfer from photosystem II, and increased the salt resistance of these lines.Chlorophyll fluorescence measurements are easy and rapid to conduct, do not damage plants, and sensitively reflect the relationship between the physiological status of the plant and the environment . In receGmbZIP78 leads to not only reducing salt resistance, will also affecting plant growth [bZIP genes in Arabidopsis thaliana leads to slow growth, dwarf, and abnormal phenotype [bZIP may exist the most appropriate concentration range in vivo, too high may lead to the imbalance of transcriptional regulation.During plant growth and development, many metabolic pathways produce reactive oxygen species, which can damage cell membranes and lead to cell death . During t growth . Overexphenotype . That is+ content in leaves indirectly reflects the amount of Na+ absorbed by the roots, and is an indicator of the degree of stress. In this study, as the duration of the salt stress treatment extended, the Na+ content in leaves of all lines increased to different degrees, indicating that salt stress affected the growth of the lines differently. The average Na+ content showed little change after 4 d of salt stress, but had increased markedly after 8 d of salt stress. The Na+ contents were lower in Lines L-4 and L-8 than in WT and L-5, indicating that the former two lines were more salt-resistant than the latter two lines.The ability of a plant to tolerate salinity is related to its ability to maintain ion homeostasis in cells. Many different soluble salts can reduce the osmotic potential of plant rhizosphere, making it more difficult for the plant to absorb water, leading to physiological drought . The Na+TaLEA and ThbZIP genes were transformed into birch. Because the transcriptional level of these exogenous genes differed among the lines, the various transgenic lines showed different physiological properties under salt stress. 
Lines L-4 and L-8 were selected as excellent lines because of their strong salt resistance. Further research should characterize their growth and stem traits, and the regulation mechanism of the exogenous genes. Also, we should fully consider the safety of releasing and cultivating transgenic plants in the field.With the development of science and technology, increasing numbers of transgenic varieties of various crop species have been produced . In foreS1 Table(DOCX)Click here for additional data file."} {"text": "Pinna nobilis was detected across a wide geographical area of the Spanish Mediterranean Sea and linked to a haplosporidian parasite. In 2017\u20132018, mass mortality events affecting the pen shell Pinna nobilis were recorded in two different regions of Italy, Campania and Sicily, in the Tyrrhenian Sea (Mediterranean Sea). Histopathological and molecular examinations of specimens showed the presence of Haplosporidium sp. in only one specimen in one area. Conversely, in all of the surveyed moribund animals, strong inflammatory lesions at the level of connective tissue surrounding the digestive system and gonads and linked to the presence of intracellular Zhiel-Neelsen-positive bacteria were observed. Molecular analysis of all of the diseased specimens (13) confirmed the presence of a Mycobacterium. Blast analysis of the sequences from all of the areas revealed that they were grouped together with the human mycobacterium M. sherrisii close to the group including M. shigaense, M. lentiflavum and M. simiae. Based on pathological and molecular findings, it is proposed that a mycobacterial disease is associated with the mortality episodes of Pinna nobilis, indicating that, at this time, Haplosporidium sp. is not responsible for these events in Campanian and Sicilian waters.Disease is an increasing threat for marine bivalves worldwide. Recently, a mass mortality event (MME) impacting the bivalve Pinna nobilis is an endemic Mediterranean species and among the largest bivalves worldwide, playing an important ecological role for soft bottom communities and contributing to the increase in local biodiversity. This species can reach up to 120\u2009cm in length and a maximum reported age of 27 years2. The species commonly lives in seagrass fields of Posidonia oceanica or Cymodocea nodosa2 but also in non-vegetated estuarine areas and the soft bottoms of marine lakes3. The family of Pinnidae includes two genera (Pinna and Atrina) with 61 species described worldwide4. Different anthropogenic factors have adverse effects on the pen shell lifecycle and distribution, bringing structural and functional alterations in habitats and species physiology. Currently, P. nobilis has become a threatened and vulnerable species and is legally protected under Annex II of the Barcelona Convention (SPA/BD Protocol 1995), Annex IV of the EU Habitats Directive (EU Habitats Directive 2007), and the Spanish Catalogue of Threatened Species .The bivalve pen shell 7 and have also been described in the Mediterranean Sea10. Evidence suggests that a combination of predisposing and necessary biotic factors is involved in the disease causation, resulting in a scenario of new emerging complex multifactorial diseases11. In fact, these diseases have been often linked to changes in host/pathogen interactions and have been associated with increased water temperature, pathogen distribution and virulence, host reduced immune competence and growth13. 
The spread of infectious diseases of molluscs due to pathogens of different types has been intensively described in the last years, especially in farmed species15, with new outbreaks continuing to be recorded globally, representing a large limitation for the aquaculture industry. Conversely, in wild animal populations, only a few descriptions are present, and these outbreaks can involve keystone species, with consequences for the whole ecosystem.To date, disease conditions and mortality outbreaks have been reported in marine benthic population worldwide, such as corals, sea urchins, molluscs, sea turtles and marine mammalsPinna nobilis were reported over hundreds of kilometres of the western Mediterranean coast of Spain, except for Catalonia, due to a haplosporidian parasite17.Recently, mass mortality events of the pen shell P. nobilis populations were detected in Campania and Sicily, affecting animals of all sizes and affecting 85\u2013100% in prevalence. The aims of this paper are to unravel the bivalve health status and to define the possible causes of the mortality events involving the pen shell P. nobilis in these two geographical areas in late 2017 through May 2018.In Italy, since 2016, mortality episodes of the pen shell have been described by local SCUBA divers and scientists along the Tyrrhenian coastline and encompassing different regions. In early 2017, mass mortality episodes of Following mortality episodes from different areas of the regions of Campania and Sicily, samples were collected and provided to our laboratory by divers in the Area Marina Protetta (AMP) Punta Campanella from the north-southeast part of the Campania region from December 2017 to May 2018. At the same time, samples were also collected by local fisherman in Sicily (Messina) in May 2018 were negative for all of the diagnostic analyses.Details about animal dimensions, macroscopic lesions and disease diagnosis are reported in Table\u00a0P. rudis was healthy and negative for all of the diagnostic analyses performed for the different pathogens.The pen shells did not show any sign of illness, and the only collected individual of Regarding diseased animals, during collection, anomalous behavioural signs were demonstrated in debilitated specimens, showing difficulty in closing the valves (gaping) or slow responses to touching. The animals also showed retracted mantles or mantles with watery cysts.Pontonia pinnophylax; in 7 cases (53%), there were two per bivalve.On laboratory examination, on the external valves, the animals presented with attached epibionts of different types; among them, polychaetes, bryozoans, red and brown algae, ascidians, sponges and small bivalves were present. All of the specimens contained in the valves the Palaemonidae shrimps In 9 of the exanimated specimens (69%), gross examination of the bivalves revealed diffuse tissue oedema, mainly visible at the level of the gill and mantle Table\u00a0, Fig.\u00a03A18 as both hyalinocytes and granulocytes. The inflammatory condition was characterized by large nodular aggregates of the above immune cells, which were filled with long, slightly shaped, acid-fast positive bacteria , both cytology and histopathology showed the presence of different phases of development of a haplosporidian parasite in the digestive tissue. Large numbers of multinucleate stages were disseminated in the digestive tubule epithelia Fig.\u00a0. SporogoElectron microscopy (EM) was used to further characterize these bacteria in the absence of a cultured strain. 
Transmission electron microscopy (TEM) performed on three infected individuals was used to assess the features of the bacteria Fig.\u00a0, allowinMycobacterium sp., three different PCR sets were run for different pathogens: the Herpes virus OsHv-119; haplosporidian parasites20; and Mycobacterium sp21 detection.Considering the pathogens responsible for mortality episodes reported in Italian bivalves and considering the observed P. nobilis examined apart from the non-moribund specimens from south of Campania ; moreover, it showed 92% similarity to the 18S small subunit ribosomal RNA of uncultured Haplosporidium sp. from environmental samples but was not grouped with any definite haplosporidian species (sequence accession number: MH572222) of sessile benthic invertebrates, including ascidians, sponges, anthozoans and bivalves, have been reported in the aquatic environment and many indeed in the Mediterranean Sea. Several factors have contributed to mass-mortality episodes. Many papers have agreed that global warming might be linked to the occurrence of such catastrophic events in the Mediterranean Sea, which could alter the host/pathogen range, due to alteration in host immunocompetence/pathogen virulence and thus modify pathogen transmission rates. In this scenario, opportunistic pathogens are suspected to play an important role, with a modified condition of virulence and pathogenicity25. With the exception of Mycobacterium leprae, the species responsible for tuberculosis in human and animals consists of a group of highly related mycobacterial lineages collectively known as the mycobacterium tuberculosis complex (MTBC)25. Finally, MTBC is distinguished from nontuberculous mycobacteria (NTM): free-living organisms that are ubiquitous in the environment and can cause a wide range of mycobacterial infections in humans and animals (more frequently pulmonary infections)26.Mycobacteriosis is a serious and generally lethal infectious disease, affecting a wide range of species from human to animals, both in farmed and wild conditions. The Mycobacteriaceae family includes 128 validly published species30. In particular, in marine environments, three species have been frequently cited as the main causative agents of infections in fish M. marinum, M. fortuitum and M. chelonae but also several other mycobacterial species, including a number of novel species, have been reported33. Thus far, mycobacteria in molluscs have only been detected in different gastropod species from the freshwater environment, while in bivalves, frequently oysters of the genus Crassostrea have been reported to be vector of infection involved in human disease, and one case was instead reported in the pecten Placopecten magellanicus with no human infection24 system. In the case of M. ulcerans, Mycobacteria are primary intracellular parasites of phagocytes. Phagosomes containing mycobacteria are believed to resist the normal processes of acidification and phago-lysosomal fusion42, thus promoting bacterial survival45. However, according to electron microscopy results, the Mycobacterium detected in our cases were located in the cytoplasm of immune cells without evidence of phagosome membranes around them. This finding seems to be in accordance with early evidence, suggesting that mycobacteria might eventually escape from phagosomes by translocating to the cytosol, as in the cases of M. tuberculosis and M. marinum46. 
It has been suggested that cytosolic translocation might reflect a strain-dependent virulence mechanism of pathogenic mycobacteria, conferring to cytosol-preferring strains a gain in virulence function by acquiring resistance to autophagy48. In this context, Mycobacterium sherrisii has been so far associated with several human diseases49 and only a few cases have been reported in animals, all of them in mammals50. To our knowledge, this report is the first concerning evidence for the involvement of M. sherrisii in disease outbreaks of aquatic organisms and in a bivalve mollusc. Over thousands of years, mycobacteria have undergone extensive specialization, particularly with vertebrate hosts or specific environmental ecosystems retaining the flexibility to occupy new niches by continually infecting different animals as primary pathogens or opportunists. In our cases, the zoonotic potential of the observed Mycobacterium should be considered and clarified.With the exception of 28, surviving and replicating within various hosts51, so vectors are potentially present throughout the food web. In many cases, the modality of transmission is not clear. Water and associated biofilms are natural habitats for Mycobacterium spp. including M. marinum, M. fortuitum, and M. chelonae, so waterborne transmission seems likely. In our case, the origin and transmission of the Mycobacterium remain unknown, but aquatic environments constitute the natural habitat of M. simiae complex.Mycobacteria are known to infect a number of aquatic organisms other than fish11. In this context, it is important to emphasize that the boundaries between pathogen and symbiont can be, in some cases, unfixed, with an alteration in the association, even in evolutionary short periods of time52. About haplosporidians, the group includes more than 50 described species, many of them responsible for important diseases in aquatic invertebrates53. An haplosporidian parasite was detected across a wide geographic area of the Spanish Mediterranean Sea (western Mediterranean Sea) in early autumn 2016 and was recognized as possibly responsible for the MME of P. nobilis population in the area17. A previous hypothesis reported that the observed Haplosporidium was instead a symbiont that modified its relationship with the host following environmental changes, finally leading to the mortality outbreak17.Disease is the outcome of complex interactions among the host, causative agent(s) and environment. Many causes frequently cooperate to induce diseases (complex of causes), and some of them are necessary (their absence prevents the onset of the effect), while others are predisposing (preparing the ground for the action of the necessary cause)P. nobilis, in which two possibly opportunistic pathogens are involved in two different areas of the Mediterranean Sea, suggesting that both are activated by further possible common unidentified causes. Based on our pathological and molecular findings, we are more likely to consider that a mycobacterial disease is associated with the observed mortality episodes, indicating that, at this time, Haplosporidium sp. is not responsible for the events in Campanian and Sicilian waters. Further knowledge of the modality of transmission, distribution and source of this pathogens is key to devising methods for the identification and understanding of disease pathogenesis. 
In our cases, the lack of genomic and proteomic data does not allow us to go beyond the consideration that further studies are needed to clarify the pathogenicity and potential virulence for the reported mycobacteriosis other than its origin and evolution. The present study is especially relevant since it uncovers additional information about the complexity in understanding the multifactorial diseases that we have experienced in recent years in the aquatic environment. Future collaboration among wildlife ecologists, environmental biologists, animal pathologists and related disciplines should be conducted to deeply explore emerging diseases in marine wildlife.In such a scenario, we consider that data reported in our study drive a hypothesis of a more complex disease pathogenesis involved in the disease outbreak of the bivalve P. nobilis from 5 sites: in December 2017 in Massa Lubrense 1 and Massa Lubrense 2 (40\u00b034\u203257.5\u2033N 14\u00b021\u203222.6\u2033E); Ischia island, AMP Regno di Nettuno and Positano in April 2018 ; and in May 2018 Cilento 1 and Cilento 2 (40\u00b010\u203223.9\u2033N 15\u00b005\u203208.4\u2033E). In Sicily, there were two places: Torre Faro and Paradiso . During the macroscopic evaluation of the individuals, a description of each specimen\u2019s condition was conducted, and the presence of epibionts and valve length were recorded. Animal valves were opened with the help of a blade, and the flesh observed was examined macroscopically, checking for eventual external signs. Impression smears of the digestive glands, gills, mantle and ovaries were obtained, air dried, fixed in absolute ethanol, and stained with May\u2013Grunwald\u2013Giemsa quick stain for cytological examination. Samples from the digestive glands, mantle, labial palps, gills, gonads and adductor and retractor muscles were fixed in Davidson\u2019s solution for 48\u2009h at room temperature. Pieces of digestive gland and mantle were also preserved in 2.5% glutaraldehyde for TEM examination. Fragments of bivalve tissues and shrimp P. pinnophylax were fixed in absolute ethanol for DNA isolation.The study area in Campania covers a large geographical scale including the main habitat for P. nobilis and P. pinnophylax tissue samples were embedded in paraffin blocks and cut to 5 \u03bcm with a rotary microtome . Tissue sections were deparaffinized, stained with Carazzi haematoxylin and eosin and examined by light microscopy (Zeiss Axioscope A1). Additional staining techniques were also performed: Gram, Mallory Trichrome, Ziehl-Neelsen and PAS-BA (periodic acid Shiff)54.Five animals from different areas were processed for TEM observation, searching for pathogens of a different nature. From each animal, pieces of digestive gland tissue were placed in 2.5% glutaraldehyde, post-fixed in 2% OsO4, and embedded in Epon. Ultra-thin sections were stained with uranyl acetate and lead citrate and were examined in a JEOL JEM 1010 transmission electron microscope at 80\u2009kV.DNA was isolated from pieces of different tissues using the Qiagen Blood and Tissue Kit (Qiagen). DNA quality and quantity were checked with a Nanodrop ND-1000 spectrophotometer .19, generic haplosporidian primers (HAPF1- HAPR3)20 and primers derived from the 16S rRNA sequence of mycobacteria, mycgen-f (5\u2032-AGAGTTTGATCCTGGCTCAG-3\u2032) mycgen-r (5\u2032-TGCACACAGGCCACAAGGGA-3\u2032), as described by21. A positive control was used for OsHv-1 PCR reaction. 
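Before running the mycobacterial 16S PCR described above, the quoted primers (mycgen-f/mycgen-r) can be sanity-checked for length, GC content and approximate melting temperature. The snippet below is a minimal sketch using only the primer sequences given in the text; the Wallace-rule Tm estimate (2 °C per A/T and 4 °C per G/C) is a rough convention, not a value reported by the authors.

```python
# Quick sanity check of the 16S rRNA primers quoted in the methods.
PRIMERS = {
    "mycgen-f": "AGAGTTTGATCCTGGCTCAG",
    "mycgen-r": "TGCACACAGGCCACAAGGGA",
}

def gc_percent(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

def tm_wallace(seq: str) -> float:
    """Rough melting temperature: 2 C per A/T plus 4 C per G/C (Wallace rule)."""
    seq = seq.upper()
    return 2.0 * sum(seq.count(base) for base in "AT") + 4.0 * sum(seq.count(base) for base in "GC")

for name, seq in PRIMERS.items():
    print(f"{name}: {len(seq)} nt, GC = {gc_percent(seq):.0f}%, Tm ~ {tm_wallace(seq):.0f} C")
```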
The PCR was performed in 25\u2009\u03bcl of reaction volume containing 1\u2009\u03bcl of genomic DNA, 12.5\u2009\u03bcl of GoTaq MasterMIX (Promega) at 1x concentration, 6.5\u2009\u03bcl of water and 2.5\u2009\u03bcl of each primer (10\u2009\u03bcM). PCR products were electrophoresed on 2% agarose gels in 1x TAE buffer. The amplified fragments were gel eluted and directly sequenced. Negative controls were included in the PCR reaction.DNA was amplified by PCR with OsHv-1 primers (OsHVDPFor/OsHVDPRev) byBLASTN analysis was conducted using the nucleotide sequences obtained in the present study. The sequences were then submitted to GenBank (the accession numbers are listed in Figs\u00a0Mycobacterium sp. obtained in the present study and those of different Mycobacterium species present in GenBank, selected on the basis of the highest BLASTN score. The analysis also included the 16S sequence of Mycobacterium sp. KR822678 isolated from sea scallops24 and of M. marinum and M. ulcerans, which are able to infect marine molluscs23. The nucleotide alignment was constructed using ClustalW and the neighbour-joining tree was obtained using the Maximum Composite Likelihood model implemented in MEGA X61 with 1000 bootstrap replicates.The pairwise p-distances were calculated among the 16S nucleotide sequences of"} {"text": "CHD). Animal studies have suggested that hypoxia results in cortical dysmaturation at the cellular level. New magnetic resonance imaging techniques offer the potential to investigate the relationship between cerebral oxygen delivery and cortical microstructural development in newborn infants with CHD.Abnormal macrostructural development of the cerebral cortex has been associated with hypoxia in infants with congenital heart disease . Regions of reduced cortical orientation dispersion index in infants with CHD were related to impaired cerebral oxygen delivery . 
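The phylogenetic comparison above was performed with a ClustalW alignment and a neighbour-joining tree built in MEGA X with 1000 bootstrap replicates. Purely as an illustration of the same kind of workflow (not the authors' pipeline), the sketch below builds a neighbour-joining tree from a toy alignment with Biopython; the sequence names and sequences are placeholders, and the 'identity' distance corresponds to a simple p-distance.

```python
from io import StringIO
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Toy aligned 16S-like fragments (placeholders); in practice this would be the
# ClustalW alignment of the study sequences plus the GenBank reference sequences.
FASTA = """>isolate_1
ACGTACGTACGTACGTACGT
>M_sherrisii_like
ACGTACGAACGTACGTACGT
>M_marinum_like
ACGAACGAACGTACGAACGT
>outgroup
TCGAACGATCGTACGAACGA
"""

alignment = AlignIO.read(StringIO(FASTA), "fasta")

# Pairwise distances ('identity' = fraction of mismatched columns, i.e. a p-distance).
distance_matrix = DistanceCalculator("identity").get_distance(alignment)

# Neighbour-joining tree from the distance matrix.
nj_tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(nj_tree)
```

Bootstrap support, as used in the study, would additionally require resampling the alignment columns and rebuilding the tree for each replicate.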
Cortical orientation dispersion index was associated with the gyrification index .We measured cortical macrostructural and microstructural properties in 48 newborn infants with serious or critical CHD is impaired dendritic arborization, which may underlie abnormal macrostructural findings reported in this population, and that the degree of impairment is related to reduced cerebral oxygen delivery.This study suggests that the primary component of cerebral cortex dysmaturation in Newborn infants with serious or critical congenital heart disease demonstrate abnormal development of the cerebral cortex at the microstructural level, assessed using diffusion magnetic resonance imaging.This study suggests that the primary component of cerebral cortex dysmaturation in congenital heart disease is impaired dendritic arborization.Impairment of cortical microstructural development was associated with reduced cerebral oxygen delivery measured in the newborn period.Identification of abnormal development of the cerebral cortex in this population provides insight into the mechanisms that may underlie poorer neurodevelopmental outcomes in congenital heart disease.The relationship between impaired cerebral oxygen delivery and abnormal cortical microstructure corroborates recent animal models investigating the role of oxygen tension in cortical development.Methods to quantitatively assess impairment of cortical development may enable more rapid assessment and iteration of novel interventions to improve the trajectory of brain development in congenital heart disease during pregnancy.Congenital heart disease (CHD) is the most common congenital abnormality, affecting almost 1% of newborns.The detrimental effect of CHD on early brain development can be observed via a faltering trajectory of brain growth in the third trimester of pregnancyLinking physiological changes in CHD to brain development is assisted by 4 recent findings. First, oxygen tension has been shown to regulate development of human cortical radial glial cells, with hypoxia exerting negative effects on gliogenesis by reducing the number of preoligodendrocytes while increasing the number of reactive astrocytes.In this study, we aimed to use high\u2010angular\u2010resolution diffusion imaging and NODDI to test the hypothesis that reduced cerebral oxygen delivery in CHD is associated with impaired cortical microstructural development. We predicted that infants with CHD would exhibit higher cortical FA and lower ODI when compared with a group of healthy matched controls, and that infants with the lowest cerebral oxygen delivery would exhibit the most severe impairment of cortical microstructural development.The project was approved by the National Research Ethics Service West London committee and informed written parental consent was obtained before imaging. All methods and experiments were performed in accordance with relevant guidelines and regulations. The data, analytic methods, and study materials will be available to other researchers for purposes of reproducing the results or replicating the procedure on reasonable request.A prospective cohort of 54 infants with serious or critical CHDWe therefore studied 48 infants with CHD, born at a median gestational age (GA) of 38.8\u00a0weeks . A control group of 48 healthy infants was retrospectively matched to the CHD group by GA at birth and scan, born at a median GA of 38.5\u00a0weeks (38.1\u201338.9). 
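The control group above was retrospectively matched to the CHD group by gestational age at birth and at scan; the study used an R implementation for this step. The following is only a greedy nearest-neighbour sketch on a single covariate with hypothetical values, intended to illustrate the idea rather than reproduce the actual matching algorithm.

```python
import numpy as np

def greedy_match(cases: np.ndarray, pool: np.ndarray) -> list:
    """Greedy 1:1 nearest-neighbour matching on a single covariate (e.g. GA at birth).

    Returns, for each case, the index of the selected control in `pool`;
    each control is used at most once.
    """
    available = list(range(len(pool)))
    chosen = []
    for value in cases:
        best = min(available, key=lambda i: abs(pool[i] - value))
        available.remove(best)
        chosen.append(best)
    return chosen

# Hypothetical gestational ages (weeks) for illustration only.
ga_chd = np.array([38.8, 38.2, 39.1, 37.9])
ga_controls = np.array([38.5, 40.0, 38.1, 39.2, 37.8, 38.9])
print(greedy_match(ga_chd, ga_controls))
```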
Healthy infants were recruited contemporaneously from the postnatal ward at St Thomas\u2019 Hospital as part of the Developing Human Connectome Project.T1\u2010weighted (T1w), T2\u2010weighted (T2w), diffusion\u2010weighted imaging (DWI), and phase contrast angiography magnetic resonance imaging was performed on a Philips Achieva 3\u00a0Tesla system with a 32\u2010channel neonatal head coil and neonatal positioning device,T2w images were acquired using a multislice turbo spin echo sequence, acquired in 2 stacks of 2\u2010dimensional slices , using parameters: repetition time: 12\u00a0seconds; echo time: 156\u00a0milliseconds, flip angle: 90\u00b0, slice thickness: 1.6\u00a0mm acquired with an overlap of 0.8\u00a0mm; in\u2010plane resolution: 0.8\u00d70.8\u00a0mm, scan time: 3:12\u00a0minutes per stack. The T1w volumetric magnetization prepared rapid acquisition gradient echo acquisition parameters were as follows: repetition time: 11\u00a0milliseconds, echo time: 4.6\u00a0milliseconds, TI: 714\u00a0milliseconds, flip angle:\u20099\u00b0, acquired voxel size: 0.8\u00d70.8\u00d70.8\u2009mm, field of view: 145\u00d7145\u00d7108\u00a0mm, sensitivity encoding factor: 1.2, scan time: 4:35\u00a0minutes. DWI with 300 directions was acquired using parameters: repetition time: 3.8\u00a0seconds, echo time: 90\u00a0milliseconds, multiband: 4; sensitivity encoding E: 1.2; resolution: 1.5\u00d71.5\u00d73\u00a0mm with 1.5\u00a0mm slice overlap, diffusion gradient encoding: b=0\u00a0s/mm (n=20), b=400\u00a0s/mm (n=64), b=1000\u00a0s/mm (n=88), b=2600\u00a0s/mm (n=128) with interleaved phase encoding.T2w images were reconstructed using a dedicated neonatal motion correction algorithm. Retrospective motion\u2010corrected reconstructionMotion\u2010corrected T2w images were segmented into tissue type using an automated, neonatal\u2010specific pipeline,max=0, 4, 6, 8 for respective shells), with registration operating at a reduced rank=15.High\u2010angular\u2010resolution diffusion\u2010weighted imaging data were reconstructed using a slice\u2010to\u2010volume motion correction technique that uses a bespoke spherical harmonics and radial decomposition of multishell diffusion data, together with outlier rejection, distortion, and slice profile correction.\u22123\u00a0mm2\u00a0s\u22121. This is consistent with previous NODDI studies in neonates,\u22123\u00a0mm2\u00a0s\u22121) likely reflecting the higher water content of the neonatal brain.Nonbrain tissue was removed using FSL BET (Brain Extraction Tool).A multivariate group template was generated from both T1w and T2w images, using symmetric diffeomorphic normalization for multivariate neuroanatomy and a cross\u2010correlation similarity metric.We used an approach for aligning cortical data from multiple subjects into a common space to provide voxel\u2010wise spatial characterization of FA, MD, NDI, and ODI, as previously described.2 was measured at the time of scan using a Masimo Radical\u20107 monitor applied to the right hand.For infants with CHD, we calculated their cerebral blood flow using a previously described method.2) was calculated using the following formula:Cerebral oxygen delivery . All analyses were subject to family\u2010wise error correction for multiple comparisons, and thresholding for all analyses was at P<0.05. Linear regression was used to investigate the association between GI and diffusion metrics. 
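The formula for cerebral oxygen delivery is garbled in this copy of the text. A commonly used formulation, assumed here purely for illustration, multiplies cerebral blood flow by arterial oxygen content, approximating the latter from haemoglobin concentration and preductal oxygen saturation; the Hüfner constant, units, and example values below are assumptions and not necessarily those used in the study.

```python
def arterial_o2_content(hb_g_per_dl: float, sao2_fraction: float,
                        hufner_ml_per_g: float = 1.36) -> float:
    """Arterial O2 content (mL O2 per dL blood), ignoring dissolved O2."""
    return hufner_ml_per_g * hb_g_per_dl * sao2_fraction

def cerebral_o2_delivery(cbf_ml_per_100g_min: float, hb_g_per_dl: float,
                         sao2_fraction: float) -> float:
    """Assumed CDO2 (mL O2 / 100 g brain / min) = CBF x CaO2, with CaO2 per mL of blood."""
    cao2_per_ml_blood = arterial_o2_content(hb_g_per_dl, sao2_fraction) / 100.0
    return cbf_ml_per_100g_min * cao2_per_ml_blood

# Illustrative values only: CBF 20 mL/100 g/min, Hb 15 g/dL, preductal SaO2 85%.
print(f"CDO2 ~ {cerebral_o2_delivery(20.0, 15.0, 0.85):.2f} mL O2/100 g/min")
```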
To assess the relationship of GI and ODI independently of advancing brain maturity, GA at scan was included as a variable in the multiple linear regression model.The control group was retrospectively matched to the CHD group by GA at birth and at scan using an R implementationU test. All analyses of clinical variables were performed using SPSS V24 .Categorical clinical variables were compared using Fisher's exact tests. For continuous clinical variables, we determined medians and interquartile ranges, and compared groups using the Mann\u2013Whitney The analysis included 96 newborn infants: 48 infants with confirmed serious or critical CHD scanned before surgery without evidence of arterial ischemic stroke, and 48 age\u2010matched healthy infants. Clinical characteristics of both groups are shown in Table\u00a0Infants with CHD demonstrated widespread changes in cortical ODI, with the most significant reductions observed posteriorly in the posterior parietal cortex, insula cortex, cingulate cortex, primary motor cortex, supplementary motor area, and occipital regions Figure\u00a0.Cortical FA was higher in infants with CHD, with effects seen in predominantly midline cortical structures at time of scan was positively associated with cortical ODI across many regions of the cortex , with the most significant associations found in the bilateral temporal lobes, occipital lobes, cingulate cortex, and right insula cortex. To demonstrate this linear relationship, mean ODI data were extracted for each subject from significant voxels in the gray matter skeleton and plotted against CDO2 at the time of scan and negatively with FA but not for cortical FA . Cortical gray matter volume was significantly positively correlated with cortical ODI but not with cortical FA . Results are summarized in Table\u00a0P=0.005), and regional brain volumes were significantly smaller in those with CHD across all regions of the brain Figure\u00a0. The lin2. We speculate that hindered microstructural development underlies the abnormal macrostructural changes in brain development that have been observed through reduced birth head circumference,Long\u2010term neurodevelopmental impairment is a major remaining challenge for infants with congenital heart disease, yet our understanding of the underlying biological substrate remains limited. Our study suggests that the microstructural development of the cerebral cortex in infants with CHD is abnormal in the newborn period compared with healthy controls and, importantly, that the degree of impairment is related to reduced CDOWe found that cortical ODI was widely reduced in the CHD group, with associated but more sparsely distributed areas of higher FA. These findings suggest a hindered trajectory of normal brain development, with increased sensitivity to tissue changes using the more advanced NODDI model. As the brain matures in utero, cortical neurons migrate outward toward the pial surface, populating the cortexWhile reduced cortical folding complexity in newborns with CHD has previously been reported in our cohortChanges in cortical orientation dispersion were more pronounced posteriorly than frontally, which is consistent with a previously described sequence of cortical development maturing earlier in the occipital cortex and completing later in frontal regions.2 on development of cortical ODI. 
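To illustrate the regression step described above (gyrification index against ODI with gestational age at scan as a covariate), the following is a minimal sketch using statsmodels on hypothetical arrays; the study's own analysis used imaging-derived values and different software, so this is not the original analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical per-infant values for illustration only.
n = 96
ga_at_scan = rng.uniform(37.0, 42.0, n)                       # weeks
odi = 0.02 * ga_at_scan + rng.normal(0, 0.01, n)              # cortical ODI
gi = 0.5 * odi + 0.05 * ga_at_scan + rng.normal(0, 0.05, n)   # gyrification index

# Multiple linear regression: GI ~ ODI + GA at scan.
X = sm.add_constant(np.column_stack([odi, ga_at_scan]))
model = sm.OLS(gi, X).fit()
print(model.summary())
```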
There was a widespread positive relationship between ODI and CDO2, supporting the hypothesis that impaired oxygen delivery to the developing brain may be associated with delayed cortical microstructural\u00a0development. There was no relationship between ODI and either cerebral blood flow or preductal arterial saturation when considered individually, suggesting that both components of CDO2 are required to estimate oxygen delivery to the brain, and that when considered alone, neither component explains enough variance of ODI to achieve statistical significance. In the case of cerebral blood flow, this may additionally suggest that alternative proposed metabolic substratesHaving established group differences between infants with CHD and healthy controls, we investigated the effect of CDO2 was also measured in the postnatal period, while the most influential period on brain growth would have been in utero, and particularly during the third trimester. Despite this, we feel that postnatal CDO2 remains a useful surrogate for severity of cardiac circulatory compromise to date, taking into account both measures of cerebral blood flow and degree of hypoxia as a result of structural changes in congenital heart disease. Third, the underlying genetic basis of CHD is becoming increasingly better understood2 but also to intrinsic abnormalities in microstructural development of the brain. Future work in larger, more homogenous cohorts to characterize in utero flows, oxygen saturations, and CDO2 will allow further correlation of CDO2 or lesion type with measures of fetal brain development.There were limitations to our study. First, quantitative estimates obtained from microstructural studies are invariably model dependent, exhibiting biases and limitations that are related to model assumptions. Despite this, NODDI indices have been shown to correlate with histological changes in neurite geometric configuration2 is associated with impaired cortical maturation in this population supports the development of strategies to optimize fetal CDO2. The provision of supplemental oxygen to mothers during pregnancy may enable restoration of fetal cerebral oxygen tension to levels required to prevent or reverse abnormal corticogenesis.2 is most likely. In addition, the use of newer microstructural measures such as ODI may provide a crucial leading indicator in the postnatal period to assess the impact of novel interventions on cortical development before child neurodevelopmental outcomes can be assessed at a later age.There are currently no validated neuroprotective therapies available for infants with CHD. Our demonstration that reduced CDOThis research was funded by the British Heart Foundation (FS/15/55/31649) and Medical Research Council UK (MR/L011530/1). This work received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/20072013)/ERC grant agreement no. 319456 (dHCP project), and was supported by the Wellcome Engineering and Physical Sciences Research Council Centre for Medical Engineering at Kings College London (WT 203148/Z/16/Z), MRC strategic grant MR/K006355/1, Medical Research Council Centre grant MR/N026063/1, and by the National Institute for Health Research Biomedical Research Centre based at Guy's and St Thomas\u2019 NHS Foundation Trust and Kings College London. Dr O'Muircheartaigh is supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (206675/Z/17/Z). 
The views expressed are those of the authors and not necessarily those of the NHS, the National Institute for Health Research, or the Department of Health.None.Table\u00a0S1. Differences in Mean Cortical Microstructure and Cerebral Oxygen Delivery Between Those With and Without Punctate White Matter LesionsTable\u00a0S2. Differences in Mean Brain Volume, Regional Brain Volumes, and Mean Diffusion Measures, Between Those With Congenital Heart Disease (CHD) and Age\u2010Matched ControlsFigure\u00a0S1. A, Mean orientation dispersion index (ODI) and (B) fractional anisotropy (FA) from significant cortical regions plotted against gestational age at scan, for both congenital heart disease and control groups.Figure\u00a0S2. Infants with congenital heart disease exhibit impaired orientation dispersion index compared with healthy age\u2010matched controls (n=37), overlaid on the mean orientation dispersion index (ODI) template.Click here for additional data file."} {"text": "Myrmica rubra ant workers (i) detect and avoid fungus-infected substrates and (ii) excavate nest patterns that minimize their exposure to entomopathogenic spores. Small groups of M. rubra workers were allowed to dig their nest in a two-dimensional sand plate of which one half of the substrate contained fungal spores of Metarhizium brunneum, while the other half was spore-free. We found that the overall digging dynamics of M. rubra nests was not altered by the presence of fungus spores. By contrast, the shape of the excavated areas markedly differed: control nests showed rather isotropic patterns, whereas nests that were partially dug into a fungus-contaminated substrate markedly deviated from a circular shape. This demonstrates that the sanitary risks associated with a digging substrate are key factors in nest morphogenesis. We also found that M. rubra colonies were able to discriminate between the two substrates (fungus-infected or not). Furthermore, some colonies unexpectedly showed a high consistency in excavating mainly the infected substrate. This seemingly suboptimal preference for a contaminated soil suggests that non-lethal doses of fungal spores could help ant colonies to trigger \u2018immune priming\u2019. The presence of fungi may also indicate favourable ecological conditions, such as humid and humus-rich soil, that ants use as a cue for selecting suitable nesting sites.As entomopathogens are detrimental to the development or even survival of insect societies, ant colonies should avoid digging into a substrate that is contaminated by fungal spores. Here, we test the hypotheses that These patterns of interactions are shaped, at least partially, by the spatial structure of the nest. Nest patterns result from stigmergic processes in which the built structure acts as a feedback on the digging behaviour of individuals, leading to adaptive, self-organized patterns, without the need for any template, centralized control or even direct communication between nest-mates \u20133. AlthoCoptotermes lacteus termites display an avoidance response or dig out shorter tunnels into substrates infected by Metarhizium brunneum fungus [Solenopsis invicta ant workers selectively avoid building their nest in nematode-infected soils [Formica selysi queens [Monomorium pharaonis ant workers [The topological features of a nest reflect both the intrinsic features of the colony and the characteristics of its environment. In the case of subterranean nests built by social insects, colony size ,4,12,13 m fungus . Likewised soils . 
Howeveri queens and Mono workers show a sMyrmica rubra ant colonies with soil patches infected by Metarhizium brunneum spores. We investigated whether the choice of the substrate, the digging dynamics, as well as the size and topology of the nest were influenced by the presence of potentially harmful pathogens. For this, we tested small groups of 50 M. rubra workers in a two-dimensional digging set-up [Metarhizium brunneum, while the other was spore-free. This allowed us to assess whether a contaminated substrate leads to a decrease of excavated soil as well as to a nest topology that minimizes the level of ants' exposure to fungus spores.In the present study, we challenged g set-up ,16 in wh2.2.1.M. rubra ants containing one queen, 200\u2013300 workers and brood were used for the experiments. In the laboratory, each colony was reared in a plastic tray (Janet type: 47\u2009\u00d7\u200929\u2009cm) in which the floor was covered with plaster and the borders were coated with polytetrafluoroethylene to prevent ants from escaping. A square 10\u2009cm wide glass plate, placed 3\u2009mm above the ground and covered with a red filter, was used as a nest ceiling. Each colony was fed with one mealworm (Tenebrio molitor) three times per week, while water and sucrose solution (0.3\u2009M) were provided ad libitum. Laboratory conditions were kept at a 21\u2009\u00b1\u20091\u00b0C and 50\u2009\u00b1\u20095% humidity rate, with a constant photoperiod of 12\u2009h per day.Eleven colonies of 2.2.Metarhizium brunneum fungus (Strain F52 from Novozymes) that is produced in the form of barley grains coated with fungal spores. This generalist entomopathogen fungus is known to kill more than 200 insect species [M. rubra ants [6\u2009spores\u2009ml\u22121. In addition, the viability of conidia was determined by placing 5\u2009ml of the final solution of spores on a thin layer of potato dextrose agar and by incubating it at 25\u00b0C for 4 days.We used a commercial strain of species and to pbra ants ,23. Fourbra ants . We esti2.3.6\u2009spores\u2009ml\u22121) per 100\u2009g of sand. As a control, spore-free substrate was made by adding 25\u2009ml of solution containing 0.05% Tween 20 and 0.05% Triton-X per 100\u2009g of sterilized sand. The used level of Metarhizium spores (25\u2009\u00d7\u2009104\u2009spores per g\u2009soil) was of the same order of magnitude as the natural density of Metarhizium detected in soils . By cho [et al. , the dig2.4.N\u2009=\u200922) and the third one was used as control . The replication of the experimental condition allowed us to assess the effect of the mother colony on the digging response of ants to the contaminated substrate. Each group of 50 workers was dropped into a circular arena (55\u2009mm diameter) to be tested 2\u2009h later and was not fed until the end of the experiment to prevent them from being engaged in other tasks than nest-excavating ones. This starvation did not reduce the ants\u2019 survival, because less than 2% of the ants died at the end of the experiment. The nest sand plates were randomly placed in groups of four in a closed wooden box to simulate the darkness of natural nests. We started the experiment by connecting the circular arena hosting the group of 50 tested ants to a central hole made on the upper glass plate covering the digging area. The connection was made with a vertical plastic tube (3.5\u2009cm) that was filled with clean sand to encourage ants to start digging. 
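The contaminated substrate described above combines a spore suspension of known concentration with a known mass of sand, so the resulting spore load per gram of soil follows from simple arithmetic. The sketch below only restates that calculation: 10^6 spores per ml and 25 ml per 100 g of sand give 25 × 10^4 spores per g, the level quoted in the text.

```python
def spores_per_gram(suspension_conc_per_ml: float, volume_ml: float, sand_mass_g: float) -> float:
    """Spore load of the digging substrate, in spores per gram of sand."""
    return suspension_conc_per_ml * volume_ml / sand_mass_g

load = spores_per_gram(1e6, 25.0, 100.0)
print(f"{load:.0f} spores per g of sand")   # 250000, i.e. 25 x 10^4 spores/g
```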
The digging process was followed for 40\u2009h once the first ant reached the central hole of the nest sand plate. Snapshots of the digging area were taken under red light every 5\u2009h using a Logitech camera (HD Pro C920) placed 20\u2009cm below the glass plates. Image J software was used to automatically compute both the dug area (A) and the perimeter (P) of the nest for each snapshot. This allowed us to quantify the dynamics of digging activity and to compare nest patterns between the two halves of each nest plate.From each of the 11 colonies , we randomly sampled three groups of 50 workers: two groups were assigned to the experimental condition . Non-parametric tests with a significance level of \u03b1\u2009=\u20090.05 were used because all data did not meet the normality assumption. With regard to the digging activity, a generalized linear mixed-effects model (GLMM) was used to investigate the effect of treatment , colony and time on the area excavated by colonies. Colony and treatment were treated as categorical variables, whereas time was considered as a continuous variable. Moreover, time and treatment were specified as fixed effects, colony as a random effect and replicates as a nested random factor within the colony to account for the repeated measurements performed on mother colonies [U test to compare the final excavated volumes between control and experimental nests.Statistical analyses were performed by using Scolonies . Full moWithin each type of nest, we also used GLMM analyses to test for the effect of the side , colony and time on the area excavated by ants. Wilcoxon matched-pairs tests were used to compare the final excavated areas between the two sides in the experimental or in the control nests. To provide evidence of ants' preference for a spore-free substrate, we tested whether the number of ant colonies for which the most dug part was the clean half of the set-up differed from random by using a binomial test.A) and the perimeter (P) of a nest can be described by the linear equation log(P)\u2009=\u2009log(\u03bc)\u2009+\u2009\u03c9 log(A), where the parameters\u2019 values are \u03bc\u2009=\u20092\u221a\u03c0 and \u03c9\u2009=\u20090.5. For both the control and the experimental nests, the values of \u03bc and \u03c9 were estimated from the intercept and the slope of regression lines that best fitted log-transformed values of final perimeters as a function of final areas. The slopes and intercepts of these linear fittings were compared between experimental and control nests by using F-tests.We characterized nest patterns by their level of digitation as well as by the anisotropy of their shape. As regards the level of digitation in control or in experimental nests, we assessed whether the perimeters of the final excavated areas significantly deviated from those expected from a circular shape, by using the Wilcoxon matched-pair test. In the case of a perfect circle, the relationship between the area . However, for both control and experimental nests, there was a highly significant effect of time on the excavated area. Ants were the most active in digging during the 10 first hours with around half of the final total area being excavated . From 30\u2009h onwards, the digging activity nearly ceased, and the excavated areas increased by only 9.9% for the control nests and by 7.5% for experimental ones during the last 10\u2009h. 
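The digitation analysis above compares each nest's perimeter with that of a circle of equal area through the relation log(P) = log(μ) + ω log(A), with μ = 2√π and ω = 0.5 for a perfect circle. The sketch below estimates μ and ω by least squares on hypothetical area and perimeter values; it mirrors the described procedure but is not the authors' ImageJ/statistics workflow.

```python
import numpy as np

# Hypothetical final nest measurements (area in mm^2, perimeter in mm).
area = np.array([320.0, 410.0, 385.0, 510.0, 450.0, 600.0])
perimeter = np.array([75.0, 90.0, 84.0, 118.0, 102.0, 140.0])

# For a perfect circle P = 2*sqrt(pi*A), i.e. log P = log(2*sqrt(pi)) + 0.5*log A.
circle_perimeter = 2.0 * np.sqrt(np.pi * area)

# Least-squares fit of log(P) = log(mu) + omega*log(A) for the observed nests.
omega, log_mu = np.polyfit(np.log(area), np.log(perimeter), 1)
print(f"fitted omega = {omega:.2f} (circle: 0.50), "
      f"mu = {np.exp(log_mu):.2f} (circle: {2 * np.sqrt(np.pi):.2f})")

# Digitation: how much longer the real perimeter is than the equivalent circle's.
print("perimeter / circle perimeter:", np.round(perimeter / circle_perimeter, 2))
```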
Ultimately, a similar excavated area was reached in the control and experimental nests .We found no significant effect of treatment or time by treatment interaction on nest growth dynamics and led to the same total area excavated .In control nests, the left and the right side of the sand plates were dug with similar growth dynamics , and reached similar final excavated areas .Similarly, the growths of dug areas over time in the infected and the clean sides of the experimental nests were not different (GLMM: time by side interaction: N\u2009=\u200911) of control nests did not differ from random . This confirmed that there was no bias due to external stimuli or substrate heterogeneities in our sand plates. The most dug part of control nests grew at a rate that was only slightly faster and not significantly different from the least dug one . The findings were quite different for the experimental nests. When considering the growth dynamics in the most dug part, the excavation increased at a significantly higher rate than the least dug part of the set-up . As a result, from 10\u2009h onwards after the start of the digging activity, the excavated volume became significantly larger in the most dug part of the set-up compared to the other side . Unexpectedly, not all the ant colonies preferred to dig into the spore-free side of the experimental nests. Indeed, the proportion of colonies that had mostly dug the clean half of the set-up did not differ from random . Furthermore, differences in the growth dynamics between the most and the least dug side were of the same magnitude, regardless of whether the most dug part was the clean side or the infected side .With regard to the ants' preference for digging into one side of the set-up, the percentage of colonies that mostly dug in the left side , while ants' digging activity seemed more directional in experimental nests, leading to the emergence of long galleries extending preferentially in one side of the set-up .The anisotropy of nest patterns was estimated by the aspect ratio, i.e. the maximum over the minimum Feret values. These ratios were significantly higher for the experimental nests than for the control ones . Indeed, the final excavated areas were highly correlated between the two experimental groups that originated from the same mother colony . Surprisingly, a colonial effect was also observed in the ants' preference for a given type of digging substrate. In most cases (nine out of 11 colonies), each pair of experimental groups that came from the same mother colony chose to focus the main part of their digging activity in the same type of substrate . Indeed, the percentages of the total area that were dug into the infected side of the experimental nests were highly correlated between the two replicates are less abundant inside nests, whereas others (Beauveria brongniartii) are more frequent inside than outside ant nests [Metarhizium brunneum fungus is known to be efficient at killing M. rubra workers [Pathogen avoidance is considered a first line of disease defence in animals. In the case of social immunity, insect societies should reduce exposure to sanitary risks by avoiding digging their nest in contaminated areas. However, pathogen prevalence is quite variable inside nt nests . In shar workers \u201335. Thes workers \u201341, poss workers ,39.Atta sexdens ants [Macrotermes michaelseni termites [Mo. pharaonis ant colonies that display a clear preference for infected sites when they migrate to a new nest [F. 
selysi are attracted to nest sites contaminated with Beauveria and Metarhizium pathogens [Previous studies on ens ants and on Mtermites showed tnew nest . Similarathogens , althougathogens .. However, host manipulation often results from a process of coevolution between the host and highly specialized parasites. This is not the case with Metarhizium fungus, which targets a broad spectrum of insect hosts [. Second, while being potentially a sanitary challenge for the ants, the presence of fungi may also be a cue associated with suitable nesting sites, indicating favourable ecological conditions, such as humid and humus-rich soil. Finally, regardless of substrate contamination, the similar death rates observed in all colonies after 40\u2009h of digging indicate that the amount of conidia present in the soil was not a lethal threat for the ants. Previous studies found that contacts with a pathogen at non-lethal doses reduce the susceptibility of individuals to later exposure to the same pathogen [M. rubra ants and Metarhizium brunneum fungus naturally occur in the same habitats [From a functional perspective, the seemingly suboptimal preference shown by some colonies for a substrate containing live entomopathogenic fungus may be explained in several ways. First, the fungal pathogen may have manipulated the ants by luring them with odour cues in order to increase its probability to contaminate the whole ant colonyct hosts . Togethect hosts , a fungupathogen \u201344 or otpathogen . Althougpathogen ,47, thispathogen ,49. By ehabitats ,23.M. rubra colonies are able to discriminate between substrates on the basis of their pathogenicity, displaying either avoidance or attraction to the contaminated substrate. Mechanisms that underlie such discrimination remain unclear but lead to anisotropic nest patterns, thereby demonstrating the key role of soil biotic factors in nest morphogenesis. The preference for fungal-infected soils seems to be a colonial trait and may be associated with factors that are beneficial to the colony. Further investigations are still needed to understand whether these two distinct nesting strategies are based on genetic factors, are due to different features of ants' nesting biotopes or the outcome from differences in life-history traits of ant colonies such as their previous exposure to entomopathogenic fungi.Overall, we showed that To conclude, we found that the pathogen load of a digging substrate is a key factor of nest morphogenesis in ant societies. The presence of entomopathogenic spores in the soil does not alter the growth dynamics of excavated nests but makes their shape less isotropic with a few long galleries extending in the substrate. Quite unexpectedly, pathogen avoidance was not systematic as some colonies even showed the opposite preference of fungus-contaminated substrate. The relevance of this seemingly suboptimal preference remains to be investigated. The present study is a first report of pathogen-induced changes in collectively built nests, and more work is needed to understand this relatively unexplored area of disease defence in social insects."} {"text": "Seizures occur in a recurrent manner with intermittent states of interictal and ictal discharges (IIDs and IDs). The transitions to and from IDs are determined by a set of processes, including synaptic interaction and ionic dynamics. Although mathematical models of separate types of epileptic discharges have been developed, modeling the transitions between states remains a challenge. 
A simple generic mathematical model of seizure dynamics (Epileptor) has recently been proposed by Jirsa et al. (2014); however, it is formulated in terms of abstract variables. In this paper, a minimal population-type model of IIDs and IDs is proposed that is as simple to use as the Epileptor, but the suggested model attributes physical meaning to the variables. The model is expressed in ordinary differential equations for the extracellular potassium and intracellular sodium concentrations, the membrane potential, and a short-term synaptic depression variable. A quadratic integrate-and-fire model driven by the population input current is used to reproduce spike trains in a representative neuron. In simulations, potassium accumulation governs the transition from the silent state to the state of an ID. Each ID is composed of clustered IID-like events. Sodium accumulates during the discharge and activates the sodium-potassium pump, which terminates the ID by restoring the potassium gradient and thus repolarizing the neuronal membranes. Whole-cell and cell-attached recordings in a 4-AP-based in vitro model of epilepsy confirmed the primary model assumptions and predictions. The mathematical analysis revealed that the IID-like events are large-amplitude stochastic oscillations, which in the case of ID generation are controlled by slow oscillations of the ionic concentrations. The IDs originate in conditions of elevated potassium concentration in the bath solution via a saddle-node-on-invariant-circle-like bifurcation for a non-smooth dynamical system. By providing a minimal biophysical description of ionic dynamics and network interactions, the model may serve as a hierarchical base for progressing from simple to more complex modeling of seizures. In the pathological conditions of epilepsy, the functioning of the neural network crucially depends on the ionic concentrations inside and outside neurons. The number of factors that affect neuronal activity is large, which is why a minimal model that reproduces typical seizures would help to structure further experimental and analytical studies of the pathological mechanisms. Here, on the basis of known biophysical models, we present a simple population-type model that includes only four principal variables: the extracellular potassium concentration, the intracellular sodium concentration, the membrane potential, and the synaptic resource, which diminishes due to short-term synaptic depression. A simple modeled neuron is used as an observer of the population activity. We validate the model assumptions with in vitro experiments. Our model reproduces ictal and interictal events, where the latter result in bursts of spikes in single neurons and the former represent clusters of spike bursts. Mathematical analysis reveals that the bursts are spontaneous large-amplitude oscillations, which may cluster after a saddle-node-on-invariant-circle bifurcation in pro-epileptic conditions. Our consideration has significant bearing on the understanding of pathological neuronal network dynamics. A simple canonical mathematical model of epileptic discharges, the Epileptor, has been proposed by V. Jirsa et al. While formulating the model assumptions, special attention was focused on two specific experimental observations. First, in a number of experiments, IDs were formed from a clustered number of short discharges .
TheseThe proposed model consists of three subsystems that describe: (i) the ionic dynamics, (ii) the neuronal excitability, and (iii) a neuron-observer .The proposed population model consists of the following equations:K]o and [Na]i represent extracellular potassium and intraneuronal sodium concentrations, respectively; V(t) is the membrane depolarization; xD(t) is the synaptic resource; \u03bd(t) is the firing rate of an excitatory population; and an inhibitory population firing rate is assumed to be proportional to \u03bd(t). The dynamics described by these equations is driven \u03bd(t), which is calculated with a sigmoidal input-output function:x]+ is equal to x for the positive argument and 0 otherwise. The input current u(t) includes the potassium depolarizing current, the synaptic drive, and the noise \u03be(t), respectively:All the terms of these equations are explained in the next section. Here, only the main variables are introduced, which are as follows: o = 0.02mM, \u03b4[Na]i = 0.03mM, \u03b4xD = 0.01; the Gaussian white noise \u03be(t) has zero mean and unity dispersion, \u27e8\u03be(t)\u03be(t')\u27e9 = \u03c4m\u03b4(t\u2212t'); the noise amplitude is \u03c3/gL = 25mV; the maximum pump flux is \u03c1 = 0.2mM/s; the volume ration is \u03b3 = 10 ; the poU(t) are as follows:gU = 0.4nS/mV, CU = 200pF, VT = 25mV, Vreset = \u221250mV, U1 = \u221260mV, U2 = \u221240mV; an initial condition is U = \u221270mV.A representative neuron was modeled with a quadratic integrate-and-fire neuron , 19. The The model states that the elevation of the extracellular potassium concentration [K]o plays a primary role in self-regenerating IDs. Before each ID, the extracellular potassium concentration accumulates after a series of IIDs. Each IID involves the activation of interneurons o tends to return to the bath concentration due to diffusion and glial buffering. These processes are expressed by the first term in the right-hand side (r.h.s) of \u03c4K. The potassium is pumped into the neurons by the ATP-dependent Na-K pump. In K]o through the second term in the r.h.s.Between IIDs, the potassium concentration [ The intracellular sodium concentration [Na]i increases due to the firing activity \u03bd(t), as expressed by the final term in \u03bd(t). The sodium is pumped from the neurons by the ATP-dependent Na-K pump. In Na]i through the second term in the r.h.s. The leakage and intracellular diffusion are taken into account by the first term in the r.h.s. of \u03c4Na. Describes a mean membrane depolarization due to the input u(t) using a single-compartment leaky neuron model. Not taking into account a spike generation, the voltage V(t) reflects a nominal, extreme level of membrane polarization. Together with \u03c4m bath = 8.5mM increase. The generation of SB begins when the input current u(t) leads the depolarization V(t) close to the threshold VthgL. According to u(t) in accordance to xD(t) begins to vanish, the current u(t) decreases, and SB terminates. During each SB, a representative neuron generates a few spikes (u(t).The mechanism of the IIDs is shown in w spikes . The spiK]o and [Na]i increase at each SB and then relax during the interburst intervals (K]o relaxation (\u03c4K = 10s) compared with the characteristic interburst interval. This fast relaxation prevents any significant potassium accumulation during the train of SBs.The ionic concentrations [ntervals . The meaRegime with ictal discharges. The simulations of the model based on Eqs (K]bath = 8.5mM. 
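The equations of the population model are garbled in this copy of the text, so the sketch below should be read only as a minimal illustration of the described architecture: slow potassium and sodium dynamics coupled through a Na/K pump, a depressing recurrent drive, a noisy mean membrane potential, and a quadratic integrate-and-fire observer neuron. Parameter values quoted in the text are used where available; all functional forms and the remaining constants are assumptions and are not tuned to reproduce the published simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters quoted in the text.
K_BATH = 8.5                          # mM, bath potassium (ID regime)
TAU_K, TAU_NA = 100.0, 20.0           # s, ionic relaxation times (ID regime)
D_K, D_NA, D_XD = 0.02, 0.03, 0.01    # increments per unit of population rate
RHO, GAMMA, SIGMA = 0.2, 10.0, 25.0   # pump flux (mM/s), volume ratio, noise (mV)
GU, CU = 0.4, 200.0                   # QIF observer: nS/mV and pF
VT, VRESET, U1, U2 = 25.0, -50.0, -60.0, -40.0  # mV

# Assumed values (not recoverable from this copy of the text).
TAU_M, TAU_D = 0.01, 2.0              # s, membrane and depression time constants
NU_MAX, V_HALF, K_V = 100.0, 25.0, 5.0   # sigmoidal rate function (1/s, mV, mV)
G_K, G_SYN, W_OBS = 0.5, 1.0, 20.0    # coupling gains (toy values)
K_REST, NA_REST = 3.0, 10.0           # mM, resting concentrations

def pump(k_o, na_i):
    """Assumed sigmoidal Na/K-pump activation, growing with [K]o and [Na]i."""
    return RHO / ((1.0 + np.exp(3.5 - k_o)) * (1.0 + np.exp((25.0 - na_i) / 3.0)))

def rate(v):
    """Assumed sigmoidal population input-output function."""
    return NU_MAX / (1.0 + np.exp(-(v - V_HALF) / K_V))

k_o, na_i, v, x_d, u_obs = K_REST, NA_REST, 0.0, 1.0, -70.0
dt, t_end = 1e-3, 60.0                # s
spike_times = []

for step in range(int(t_end / dt)):
    i_pump = pump(k_o, na_i)
    nu = rate(v)
    # Population input (mV): K+ depolarization plus depressing recurrent drive.
    u_in = G_K * (k_o - K_REST) + G_SYN * x_d * nu

    # Slow ionic and synaptic-resource dynamics (Euler step).
    k_o += dt * ((K_BATH - k_o) / TAU_K - 2.0 * GAMMA * i_pump + D_K * nu)
    na_i += dt * ((NA_REST - na_i) / TAU_NA - 3.0 * i_pump + D_NA * nu)
    x_d += dt * ((1.0 - x_d) / TAU_D - D_XD * x_d * nu)

    # Fast mean membrane potential with white noise (Euler-Maruyama step).
    v += dt * (u_in - v) / TAU_M + SIGMA * np.sqrt(dt / TAU_M) * rng.normal()

    # Quadratic integrate-and-fire observer driven by the population input.
    i_obs = W_OBS * u_in                                                 # pA (assumed coupling)
    u_obs += 1e3 * dt * (GU * (u_obs - U1) * (u_obs - U2) + i_obs) / CU  # mV
    if u_obs >= VT:
        spike_times.append(step * dt)
        u_obs = VRESET

print(f"observer spikes in {t_end:.0f} s: {len(spike_times)}")
```

A simple forward Euler / Euler-Maruyama step with dt = 1 ms is sufficient here because the fastest time constant in the sketch is 10 ms; a smaller step or a dedicated SDE integrator would be needed if faster membrane dynamics were modeled.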
Each ID is characterized by a high rate of activity (black line) for 30 seconds. It consists of SBs resembling IIDs compared with the characteristic interburst interval. This slow relaxation leads to potassium accumulation at some point of the train of SBs. At some critical level of [K]o, depolarization due to the shift of VK(t) (term 1 in xD(t). It begins an ID via the positive feedback provided by the depolarizing effect of the potassium accumulation.The regime with IDs is obtained with relatively slow o and [Cl]in. K]o and [Cl]in during ID are similar. Presumably, this proportionality is explained by the activity of K-Cl-cotransporters, which evoke a potassium outflux in response to chloride accumulation within the neurons. seizure . The mostrations . Changesntenance . The cenK]o during a single SB results in moderate depolarization on about 1 mV according to \u03b4[K]o provides the same level of potassium increase after each SB; the parameter \u03c4K defines the speed of potassium level relaxation, and the parameter gK,leak controls the level of polarization dependent on [K]o.The extracellular potassium concentration increases during a single SB up to 1\u20132 mM according to data from and V(t) oscillate faster than the dynamics of [K]O and [Na]i, producing the SBs. Thus, the subsystem of Eqs Based on the simulations of the system of Eqs \u03c3 = 0) are shown in the phase space that has the only attractor of the deterministic system was examined. A small amplitude noise that leads to small voltage fluctuations near 0 cannot excite the system. Indeed, small deviations from the equilibrium result in the monotonic convergence to the equilibrium; however, deviations larger than the threshold result in a large excursion before returning to the resting state. This implies a stochastic excitability. As the noise intensity increases, the time between two successive activations decreases. A random trajectory beginning from the stable equilibrium and declined by the noise to the zone of excitation is plotted in the phase plane along wiK]O determines an additive current in K]O leads to an inward current. The inward current shifts the U-shaped nullcline left and down and shifts the straight line nullcline to the right than Eqs \u03c4m and \u03c4D. These observations allowed for reducing the full model to a slow subsystem that is based on Eqs K]o. On the other hand, Eqs \u03bd(t). In the simulations, the slow high-amplitude oscillations of [K]o and [Na]i did not follow the fast fluctuations of \u03bd but were controlled by a slow component of \u03bd. Therefore, to extract a slow subsystem, \u03bd(t) was averaged, and the dependence of this average variable K]o was evaluated. This was done with the integration of Eqs K]bath (from 3 to 22 mM during 200s) with the other parameters taken from the basic parameter set. The obtained dependence was approximated as follows:K]o < 20 mM.To analyze an appearance of IDs, the full model was reduced to a slow subsystem. In the simulations , the dyn\u03bd by K]bath = 8.5 mM, \u03c4K = 100 s, \u03c4Na = 20 s, \u03b4[K]o = 0.02 mM, \u03b4[Na]i = 0.03 mM, \u03b3 = 10, \u03c1 = 0.2 mM/s, as in the basic parameter set.After the substitution of The reduced model based on Eqs K]bath = 3 mM, the system converges to a fixed point, which is a stable node. The other two equilibrium points are an unstable focus and a saddle. At some critical value of K]bath = 8.5 mM, as shown in K]o is observed, which controls the generation of SBs during IDs. 
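The reduction described above replaces the fast firing rate by its average as a function of [K]o and then examines the equilibria of the remaining slow ([K]o, [Na]i) system as [K]bath is varied. Since neither the averaged rate function nor the pump expression is recoverable from this copy, the sketch below uses assumed sigmoidal forms purely to illustrate how such a scan can be carried out numerically, by root finding followed by classification through the Jacobian eigenvalues.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed constants and functional forms (placeholders, not the published expressions).
TAU_K, TAU_NA = 100.0, 20.0
D_K, D_NA, RHO, GAMMA = 0.02, 0.03, 0.2, 10.0
K_REST, NA_REST = 3.0, 10.0

def nu_bar(k_o):
    """Assumed time-averaged population rate as a function of [K]o."""
    return 100.0 / (1.0 + np.exp(-(k_o - 7.0) / 0.5))

def pump(k_o, na_i):
    """Assumed sigmoidal Na/K-pump activation."""
    return RHO / ((1.0 + np.exp(3.5 - k_o)) * (1.0 + np.exp((25.0 - na_i) / 3.0)))

def slow_rhs(state, k_bath):
    """Right-hand side of the reduced ([K]o, [Na]i) system."""
    k_o, na_i = state
    nu = nu_bar(k_o)
    ip = pump(k_o, na_i)
    d_k = (k_bath - k_o) / TAU_K - 2.0 * GAMMA * ip + D_K * nu
    d_na = (NA_REST - na_i) / TAU_NA - 3.0 * ip + D_NA * nu
    return np.array([d_k, d_na])

def eigenvalues_at(state, k_bath, eps=1e-5):
    """Eigenvalues of a central-difference Jacobian, used to classify an equilibrium."""
    jac = np.zeros((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        jac[:, j] = (slow_rhs(state + d, k_bath) - slow_rhs(state - d, k_bath)) / (2.0 * eps)
    return np.linalg.eigvals(jac)

# Scan two bath concentrations; different initial guesses may converge to the
# same equilibrium, so duplicate lines can appear in the output.
for k_bath in (3.0, 8.5):
    for guess in ([3.0, 10.0], [7.0, 20.0], [10.0, 30.0]):
        eq = fsolve(slow_rhs, guess, args=(k_bath,))
        if np.allclose(slow_rhs(eq, k_bath), 0.0, atol=1e-6):
            print(f"[K]bath = {k_bath} mM: equilibrium at {np.round(eq, 2)}, "
                  f"eigenvalues {np.round(eigenvalues_at(eq, k_bath), 4)}")
```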
The slow dynamical system was then analyzed. Overall, the analysis explains the dynamics of the regime with ID generation as arising from oscillations of the potassium and sodium concentrations, with each ID composed of clustered SBs. In the present work, a mathematical model of IDs and IIDs is proposed based on a consideration of only a few dominating processes that underlie the generation of the discharges. These processes have been described in the form of a simple mathematical model consisting of four ordinary differential equations written in terms of physically meaningful variables, referred to as Epileptor-2, as an alternative to the well-known but more abstract model Epileptor. The first assumption has been partially validated by experimental observations from the literature. The second assumption seems too strong if the sometimes crucial role of GABAergic interneurons in interictal discharges is taken into account [31,38,39]. The third assumption postulates an essential role of the extracellular potassium concentration; it corresponds to the Fertziger and Ranck hypothesis. The fourth assumption has been validated based on the experimental data; more precise matching would require explicit consideration of the synaptic components of the excitatory and inhibitory populations in order to fit experimental estimates of cumulative excitatory and inhibitory synaptic conductances [46]. The fifth assumption is based on experimental evidence of short-term depression of glutamatergic synapses [48,49]. In summary, the assumptions of the Epileptor-2 model are in line with a wide range of experimental evidence obtained in different models of epilepsy. Based on the simulations, both ictal and interictal discharges consist of elementary events, SBs. To the best of the authors' knowledge, the IDs have not been modeled before as bursts of spike bursts. The SBs have been mathematically analyzed and found to be large-amplitude stochastic oscillations. The second-order system of differential equations is probably the simplest model of these kernel elements of pathological discharges. This statement is an important prediction, which opposes the inherently stochastic mechanism of discharge generation to an oscillatory mechanism. The latter explains a stochastic sequence of the discharges by noise in a deterministic periodic process. In contrast, the former implies that the discharges are impossible without noisy fluctuations. Similar behaviors have been studied for a neuronal model. The original Epileptor model explains epileptic discharges in terms of bifurcations. In the present model, [K]o serves as the control variable of the fast oscillations, the SBs. The variable [K]o shows slow oscillations, with pulses during which the amplitude of the fast oscillations is large. The analysis explains the scenario of the origination of a regime of ID generation during a change in an external factor, an increase of [K]bath, similar to previous studies conducted with a single neuron model [10]. Two predictions follow: (i) the regime of IDs appears at [K]bath just above the critical value, and (ii) the amplitude of the [K]o oscillations is always finite, which indicates a similar shape of IDs at different [K]bath. Both predictions are more or less consistent with the experimental observations. The whole system of the Epileptor is of fold/homoclinic type. The Epileptor-2 cannot be classified in the same way. Instead, its fast and slow dynamics are to be considered separately.
In contrast to the Epileptor, the Epileptor-2 is a non-smooth, stochastic dynamical system. The non-smoothness is due to the kink in the rectified input-output function. Several mechanisms of seizure recruitment have been tested in computational models with varying levels of mathematical abstraction [52,53]. In this paper, a simple model of epileptic discharges is proposed, and the fundamental mechanisms described by this model are illustrated. The model is qualitatively, and in a sense quantitatively, compared to a wide range of experimental recordings. Similar to the known Epileptor model, this model can be applied to investigate pathological states in a network of coupled oscillators. All animal procedures followed the guidelines of the European Community Council Directive 86/609/EEC and were approved by the Animal Care and Use Committee of the Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences. Experimental procedures were described previously in detail in our recent paper. Briefly, the artificial cerebrospinal fluid (ACSF) contained NaH2PO4, 1 mM MgSO4, 2 mM CaCl2, 24 mM NaHCO3, and 10 mM dextrose. The ACSF was aerated with carbogen (95% O2/5% CO2). Recordings were made at 30˚C. Pyramidal neurons in deep layers of the entorhinal cortex were visualized using a Zeiss Axioscop 2 microscope equipped with a video camera and differential interference contrast optics. Patch electrodes (3–5 MΩ) were pulled from borosilicate filamented glass capillaries on a P-1000 Micropipette Puller. For current-clamp recordings, a potassium-gluconate-based filling solution was used with the following composition (in mM): 135 K-gluconate, 10 NaCl, 5 EGTA, 10 HEPES, 4 ATP-Mg, and 0.3 GTP (pH adjusted to 7.25 with KOH). For voltage-clamp recordings, a solution based on cesium methanesulfonate (CsMeS) was used with the following composition (in mM): 127 CsMeS, 10 NaCl, 5 EGTA, 10 HEPES, 6 QX314, 4 ATP-Mg, and 0.3 GTP (pH adjusted to 7.25 with CsOH). For cell-attached voltage-clamp recordings, a sodium-chloride-based filling solution was used with the following composition (in mM): 138.5 NaCl, 8.5 KCl, 10 HEPES, 5 EGTA (pH adjusted to 7.25 with NaOH). Cell-attached voltage-clamp recording of neuronal firing activity was performed as described previously. Recordings were performed with two Model 2400 patch-clamp amplifiers and an NI USB-6343 A/D converter, using WinWCP 5 software. The data were filtered at 10 kHz and sampled at 20 kHz. After formation of the whole-cell configuration, access resistance was less than 15 MΩ and remained stable (<30% increase) during the experiments in all cells included. Epileptiform activity was induced with a pro-epileptic solution containing the following (in mM): 120 NaCl, 8.5 KCl, 1.25 NaH2PO4, 0.25 MgSO4, 2 CaCl2, 24 NaHCO3, 10 dextrose, and 0.05 4-AP. The flow rate in the perfusion chamber was 5–6 ml/min. The liquid junction potentials were measured as described previously. The new model is based on previous biophysically detailed considerations of ionic dynamics during pathological states of brain activity [17,30,42]. The numerical realizations of the model are available online: the code in Wolfram Mathematica is at https://yadi.sk/d/927UjbS-3QQhMW; the code and executable file in Delphi-Pascal are at https://drive.google.com/file/d/1AJhAFKLOjvgauBF_6SQzvol8zzmfMaDV/view?usp=sharing and https://drive.google.com/open?id=10ij-Nt780jROcMv9qniUm4rAJNr8WD2j, respectively. The simulations were performed in the Delphi-7 environment. The mathematical analysis of the stochastic oscillations was performed using Wolfram Mathematica 10.
The Euler-Maruyama explicit numerical scheme was applied for the integration of the stochastic ordinary differential equations. The typical value of the time step was 0.5 ms. The results depended on this numerical parameter to a similar extent as on different realizations of the noise. S1 Text: Disinhibition as a model of epilepsy. Depolarization block. Alternative model of a neuron-observer. (PDF)"} {"text": "We correct the following error in our publication. The primary focus of the work is on the description of antiviral resistance-associated markers in the influenza PA endonuclease domain, which is responsible for the endonuclease activity of the virus. Figure S1 presents data from minigenome assays that measure cumulative viral polymerase activity with and without the endonuclease inhibitor RO-7. The construct used in these assays contained the I38T resistance substitution in the PA plasmid. After further sequencing of this construct, two additional unexpected changes, P224S and P295L, both outside the endonuclease domain, were detected. Neither mutation was described in the original manuscript (volume 9, issue 2, e00430-18, 2018). In a summary of four independent experiments, we have subsequently determined that P224S/P295L in combination with I38T lower polymerase activity approximately 1.7-fold with the CA/04 virus. When only I38T is present, a statistically insignificant decrease was observed, suggesting that P224S and P295L do negatively impact polymerase activity, albeit mildly. P224S and P295L were not present in the PR/8 PA plasmid. Regardless of the impact on output from minigenome assays, the main conclusion of the work, that I38T is a critical determinant of endonuclease inhibitor resistance, remains unaltered. Importantly, the presence of P224S/P295L in conjunction with I38T does not impact resistance to RO-7. In the presence of 250 nM RO-7, the CA/04 constructs containing I38T, P224S, and P295L retain 71.6% of normalized polymerase activity, compared to 72.6% for the I38T-only plasmid. Further, we have confirmed that the P224S/P295L substitutions are not present in either the WT or the reverse genetics CA/04 or PR/8 viruses generated in this study and used to evaluate RO-7-resistant viruses (listed in Table 1). Figure S1 and its legend have been replaced online."} {"text": "This data note describes a unique two-step methodology to construct six linked datasets covering the sequencing of the Saccharomyces cerevisiae, Homo sapiens, and Sus scrofa genomes. The datasets were used as evidence in a project that investigated the history of genomic science. To design the datasets, we first retrieved all sequence submission data from the European Nucleotide Archive (ENA), including accession numbers associated with each of our three species. Second, we used these accession numbers to construct queries to retrieve peer-reviewed scientific publications that first described these sequence submissions in the scientific literature. For each species, this resulted in two associated datasets: 1) A .csv file documenting the PMID of each article describing new sequences, all paper authors, all institutional affiliations of each author, countries of institution, year of first submission to the ENA (when available), and the year of article publication, and 2) A .csv file documenting all institutions submitting to the ENA, number of nucleotides sequenced and years of submission to the database.
We utilised these datasets to understand how institutional collaboration shaped sequencing efforts, and to systematically identify important institutions and changes in the structure of research communities throughout the history of genomics and across our three target species. This data note, therefore, should aid researchers who would like to use these data for future analyses by making the methodology that underpins them transparent. Further, by detailing our methodology, researchers may be able to utilise our approach to construct similar datasets in the future. This data note describes the methodology used to construct six novel datasets for the European Research Council funded project, Medical Translation in the History of Modern Genomics (TRANSGENE); a project that explored the history of scientific collaboration around DNA sequencing. By investigating the interactions between different institutions in the determination and description of new DNA sequences, this project showed changing and varied configurations of genomic science between the 1980s and 2010s. These historical configurations and their dynamics were shaped by the objectives, organisation and development of research communities and their distinct target species: the yeast Saccharomyces cerevisiae; a farm animal that has been the object of agricultural genetics, immunogenetic research and commercial breeding programmes, the pig Sus scrofa; and Homo sapiens (subsequently referred to as 'human'). To document these configurations, we constructed six datasets, deposited in the data repository at the University of Edinburgh, by: 1. Extracting data on sequence submissions to the European Nucleotide Archive (ENA) via automated routines and Application Programme Interfaces (APIs); and 2. Linking particular sequence submissions to peer-reviewed publications that first described these in the literature via API queries, which utilised sequence accession numbers to mine Europe PubMed Central and SCOPUS. We then discuss our approach to re-structuring and cleaning these data and offer a description of the content of each dataset. Finally, we reflect on the strengths and weaknesses of these datasets and methods. This project entailed a large and unique data collection exercise of over 13 million records, which were retrieved via 30 million API queries to three different databases. This involved a two-step process. First, we retrieved all sequence submission data from the ENA, including accession numbers associated with particular sequence lengths. Second, we used these accession numbers to construct API queries to retrieve peer-reviewed scientific publications that first described and linked to these sequence submissions in the scientific literature. We retrieved sequence submission data from the ENA for each of the three species over defined periods – S. cerevisiae (1980–2000), H. sapiens (1985–2005), and S. scrofa (1990–2015). The date ranges for each species were selected based on the history of science objectives underlying our project. The purpose was to capture submissions before, during, and after the completion of concerted efforts to comprehensively sequence the genome of each of the species. The search was conducted by making a series of calls to ENA's API for each species and each year our project investigated. The query was constructed by specifying the taxon's number (tax_eq) in the ENA index and the sequence release date (first_public) to filter records that were released within a certain year.
The search parameter of first_public was specified as "greater than or equal to" 1st January and "less than or equal to" 31st December of the year. Additional parameters were used to specify the search for sequence release records (result=sequence_release) and to download the data in XML format (display=xml). In cases where records per year exceeded the ENA's limit of 100,000 records per API call, the pagination function (offset) was deployed; the routines used for these calls are listed under Software availability. This procedure allowed us to mine the ENA database based on the species and years relevant to our study and extract data on: 1) the number of nucleotides submitted for each of these species; 2) all accession numbers associated with these sequence lengths; 3) the date of submission; 4) the name of the submitting individual and/or their institutional affiliation (if available); and 5) papers in the scientific literature associated with each accession number (if specified by the submitter). This linkage allowed us to identify a list of PubMed IDs (PMIDs) of the publications linked to these accession numbers, with each query capped at 1000 publication records. We deployed a routine to automate the search for each accession number in our dataset. The routine's procedure to compose and make an API call to Europe PMC using a list of accession numbers has been made available in an online repository. Publication records were then retrieved from the SCOPUS API by using the ENA accession number as a parameter to search for associated publications (PMID) and utilising other default parameters such as apikey, apart from view=complete to specify the return of full metadata. The routines and R scripts used are also available. This correspondence between submission and first publication was not universal. Once collected, researchers in our team cleaned these datasets via VantagePoint (2017) v.10 by using a combination of fuzzy logic algorithms available in the software (i.e. "Fuzzy word matching" to make word comparisons at 95% or lower) and manual cleaning to standardise institution, author and country names according to a pre-specified protocol. The protocol ensured consistency in name conventions, fully spelling out acronyms and abbreviations, removing articles and legal entities, using proper case conventions, removing white spaces and ineligible characters, removing duplicates, and keeping school and department data if it appeared more than 50 times in the dataset. Our entire publication dataset and the main submitting institutions, as documented by the volume of DNA nucleotides registered in the ENA, were cleaned according to this protocol. Missing data, particularly regarding institutional affiliation, were filled manually by scrutinising the record on SCOPUS' web front-end. To replicate this cleaning process, other open source software, such as OpenRefine, may also be used as an alternative. In total, each species has two associated datasets: 1) A .csv file documenting the PMID of each article describing new sequences, all paper authors, all institutional affiliations of each author, countries of institution, year of first submission to the ENA (when available), and the year of article publication, and 2) A .csv file documenting all institutions submitting to the ENA, number of nucleotides sequenced and years of submission to the database.
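The retrieval procedure just described can be sketched in code as follows. This is a sketch under stated assumptions rather than the project's own routines (which were written partly in R): the ENA base URL below reflects the older warehouse search endpoint and may have changed, the exact query and field syntax should be checked against current ENA and Europe PMC documentation, the page-size parameter name is assumed, and the taxon ID and accession number in the usage example are illustrative.

```python
# Sketch of the two retrieval steps: ENA sequence-release search, then a
# Europe PMC lookup of an accession number.  URLs and some parameter names
# are assumptions, as noted in the comments.
import requests

ENA_SEARCH_URL = "https://www.ebi.ac.uk/ena/data/warehouse/search"       # assumed (older) endpoint
EPMC_SEARCH_URL = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def fetch_ena_year(taxon_id: str, year: int, page_size: int = 100_000):
    """Download sequence-release records for one taxon and one year, paginating with offset."""
    query = (
        f"tax_eq({taxon_id}) AND "
        f"first_public>={year}-01-01 AND first_public<={year}-12-31"
    )
    offset, pages = 0, []
    while True:
        params = {
            "query": query,
            "result": "sequence_release",   # sequence release records, as in the text
            "display": "xml",               # XML output, as in the text
            "offset": offset,               # pagination, as in the text
            "length": page_size,            # page size (assumed parameter name)
        }
        resp = requests.get(ENA_SEARCH_URL, params=params, timeout=120)
        resp.raise_for_status()
        pages.append(resp.text)
        if "<entry" not in resp.text:       # crude stop condition for this sketch
            break
        offset += page_size
    return pages

def first_publication_for_accession(accession: str):
    """Ask Europe PMC for publications mentioning an accession number (free-text query)."""
    params = {"query": accession, "format": "json", "pageSize": 25}
    resp = requests.get(EPMC_SEARCH_URL, params=params, timeout=60)
    resp.raise_for_status()
    hits = resp.json().get("resultList", {}).get("result", [])
    return hits[0].get("pmid") if hits else None

# Usage (hypothetical values): yeast taxon 4932, submissions first released in 1996.
# pages = fetch_ena_year("4932", 1996)
# pmid = first_publication_for_accession("X12345")
```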
While the data about yeast submissions is provided sequence per sequence with full dates and information about both submitting individuals and institutions, the pig and human submission datasets offer aggregate figures of both number of sequence submissions to the database and number of nucleotides sequenced per institution per year.Our corpus of publications includes all data necessary to construct co-authorship networks of collaboration between individuals, institutions and countries that were involved in the sequencing efforts.Our study reflects that the growing capacity in data infrastructure and the development of bioinformatics offers new opportunities not only for life scientists and molecular biology but also for social scientists and historians of science. The method outlined in this paper provides a novel source of evidence to evaluate the development and growth of collaboration in DNA sequencing and genomics research. It is also able to avoid placing a narrow focus on a number of key players based on previous studies or historical accounts. Our datasets show a diversity of countries and institutions involved in the sequencing of the human, yeast and pig genomes. Thus, they enable us to complement previous historical studies that have been focused on a limited number of large-scale sequencing centres e.g..This analysis is, however, limited by the data infrastructure that we have used. Its organisation and, especially, its absences can indeed shape and affect how and what we can know about the past; how and what information is being recorded, what is missing, what can and cannot be automatically retrieved, what is considered important (or not), and for what questions the information was expected to provide answers to. These processes, including storage and curation, were built into the databases and can have significant impacts on what we know and what we can study about collaboration in genomic sequencing. For instance, as noted above, a substantial proportion of accession numbers in the ENA did not have any further information about submitters. We need to consider these absences, along with their underlying meanings and power dynamics more carefully, especially when we use digital research methods and online data .For this reason, we argue that qualitative work should accompany digital research methods. In our project, we developed a mixed methods approach based on constant, bi-directional interactions between quantitative data and other qualitative evidence, such as documents stored in archives . This ap I would like to thank the authors for this read. It was a thorough and clear explanation of valuable methods for linking institutional and authorship metadata with submitted sequence data to the European Nucleotide Archive. Good clean metadata is the dream of all bioinformaticians, and public archives often fall short. This work is a nice illustration of linking together multiple public archives with a lot of manual effort to build a valuable metadata dataset for an interesting historical project.This is valuable work and I feel the abstract sells it short. Firstly, I would recommend a clearer problem statement or sentence about the larger project before diving into the methods. Also, you don't make enough of why you need to link in EuropePMC and SCOPUS. Some quick statistics on the number of ENA accessions missing the needed metadata would really illustrate why the link to the literature is such a good idea. i.e. 
50% of Homo sapiens accessions are missing both submitted and publication metadataI assume it's due to the scope of the larger project, but a sentence mentioning why the choice of the three species would be nice.I'd just like to applaud the sharing of the code. Yay for open science.My one request that might take a few hours of work but would be a valuable addition: Your method uses the first publication that is linked to an accession as likely being the original authors. That is probably right. But a small validation would be really useful here. You could pick 10 (or more) random accession and see if that strategy worked. It doesn't have to be perfect, but good to see if it works okay. I have some minor comments and one request that I hope will not be too laborious.Are sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:Bioinformatics & biomedical machine learningI confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. on data curation as it addresses many of the relevant issues raised in the limitations relating to curation and completeness of data specifically in the context of modern biology.This article describes methodologies used to construct datasets on sequence submissions and co-authorship relationships relating to genomic sequencing of three major organisms. The methodology is clearly described, as are its limitations and prospects for use by other scholars. I particularly appreciated the careful reflections on strengths and weaknesses of the approaches taken, and agree that these approaches have clear prospects for enriching our historical/sociological accounts given tendencies to focus on the strongest (or loudest!) research centres to the neglect of other participants particularly in genomic sequencing efforts. Would strongly suggest citing Leonelli's bookAre sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:History/philosophy of contemporary biological sciences I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. Thank you for your positive comments and excellent feedback. We are grateful for your time and appreciate your comments on the originality, significance, and timeliness of this paper. We have adopted your suggestion and cited Leonelli's (2016) seminal work in this area too."} {"text": "Methylmalonic acidaemia with homocystinuria type C (cblC defect) is an inherited error of cobalamin metabolism. Cobalamin deficient processing results in high levels of methylmalonic acid and homocysteine. The latter is considered to be a risk factor for multiple sclerosis (MS). We report on the first case of a patient with comorbid cblC defect and MS.MMACHC and another splicing variant in PRDX1 (c.1-515G\u00a0>\u00a0T) that cause the silencing of the wild-type MMACHC allele, so confirming the diagnosis of cblC defect. 
Although cblC treatment was effective, at the age of 17 he experienced a relapse of neurological symptoms. Further imaging and laboratory studies eventually supported the diagnosis of MS. While the comorbid association of MS and cblC in our patient may remain anecdotal, we suggest measuring Hcy and MMA levels in young patients with a relapsing-remitting demyelinating disorder, in order not to miss a cblC defect, which requires a specific and effective treatment. While the majority of patients become symptomatic during the first year of life, adolescent and adult patients mainly present with psychiatric symptoms, dementia, myelopathy, peripheral neuropathy and thrombosis. This 17-year-old male is the first child of nonconsanguineous healthy parents. His younger sister is healthy. He was born after a pregnancy complicated by rubeola infection: at birth, anophthalmia of the right eye was detected and brain imaging showed a congenital arachnoid cyst in the left temporal lobe. Family history revealed a father's first cousin diagnosed with MS in adulthood. At the age of 14, the patient complained of a decline in school performance and a loss of strength in the lower limbs. A year later, he was admitted to a Neurology Unit due to sudden visual loss (3/10 in the left eye) and the subacute emergence of a spastic paraparesis. Brain magnetic resonance imaging (MRI) showed multiple subcortical and periventricular white matter lesions suggesting the diagnosis of MS. Treatment with steroids improved visual acuity (8/10) but was ineffective on the lower limb spasticity. Subsequently, treatment with interferon beta-1b was started. On follow-up, a brain MRI showed complete resolution of the white matter lesions. However, at the age of 16, he suffered a sudden episode of lethargy, loss of ambulation, bilateral upper limb dystonia and urinary incontinence. Brain and spinal MRI were normal. A partial spontaneous recovery was observed during the following weeks. A comprehensive clinical evaluation revealed a hypertrophic cardiomyopathy and proteinuria. A few months later, on examination he showed spastic paraparesis, a pale optic disc, and a large-amplitude horizontal left-beating nystagmus. A further metabolic work-up revealed high Hcy and low methionine plasma levels and a marked increase of urinary MMA. Molecular analysis of the MMACHC gene (NM_015506) revealed a heterozygous variant in exon 4, inherited from his father. The lack of other pathogenetic variants in the MMACHC gene prompted us to sequence the PRDX1 gene (NM_001202431), which revealed a c.515-1G > T splicing variant, inherited from the mother, and confirmed the diagnosis of cobalamin disorder type C.
The treatment with betaine (6 g/day), carnitine (6 g/day), hydroxocobalamin, and folinic acid (15 mg/day) resulted in a remarkable lowering of plasma Hcy (27 μmol/L) and urine MMA (47 mmol/mol creatinine), as well as in an improvement of the spastic paresis with restoration of autonomous gait. Unexpectedly, at the age of 17, he complained of an acute loss of strength and paresthesia in the right arm. Brain and spinal MRI showed asymmetrical and multifocal cerebral white matter lesions, some of them with open-ring enhancement, and a C2-C3 medullary lesion. Oligoclonal bands were detected in the cerebrospinal fluid, and the patient carried an HLA-DRB1*13–15 haplotype. The diagnosis of relapsing-remitting MS was definitively made, in comorbidity with the cblC defect. The patient has received treatment with natalizumab for 10 months in addition to the specific treatment for cblC. During the follow-up we observed no relapses or new lesions on MRI and no progression of the cardiac or renal dysfunction. Written informed consent of the patient and his parents was obtained. The patient reported here suffered from a relapsing-remitting demyelinating disorder resulting from early-onset MS and a concomitant alteration of cobalamin metabolism. He showed a complete remission of white matter lesions under steroid treatment, which is a relatively common feature in pediatric-onset MS. The MMACHC gene product is involved in the regeneration of S-adenosylmethionine, the most important methyl donor, from Hcy. Several MS risk loci have been found in this metabolic pathway, such as SLC19A1, SHMT, MTHFR and CBS, suggesting a major role of methylation processes in MS pathogenesis. Recently, a variant in the MMACHC-adjacent gene PRDX1 was demonstrated to cause MMACHC gene silencing by methylation of the promoter, through a mechanism called "epi-cblC". Notably, a) PRDX1 encodes peroxiredoxin-1, an enzyme induced by oxidative stress to protect the integrity of the blood-brain barrier; and b) vascular peroxiredoxin-1 immunoreactivity is upregulated in active MS lesions in brain tissues. While the comorbid association of MS and cblC in our patient may remain an anecdotal observation, given the higher prevalence of MS than of intracellular cobalamin defects in this age group, we suggest measuring Hcy and MMA levels in young patients with a relapsing-remitting demyelinating disorder, in order not to miss a cblC defect, which requires a specific and effective treatment. None of the authors have any disclosure to declare. Data availability: not applicable. Written informed parental consent was obtained for all the reported investigations and for the publication of the results."} {"text": "The global health system is currently facing the new SARS-CoV-2 pandemic. This exceptional situation requires our African health systems to reorganize and adapt the usual protocols where they were already in place before the crisis, and/or to implement them urgently where they were not. As imaging is one of the pillars of the diagnosis of infection with this emerging virus, it was essential to rethink the imaging department organization so as to dedicate a unit to COVID-19 activity while maintaining the usual emergency activity within the Ibn Sina university hospital in Rabat. The protection of exposed personnel and the bio-cleaning of radiology equipment and rooms also became an obvious necessity. The active involvement of the administration, the Clinical Pharmacy Department and the Nosocomial Infections Control Committee is key to the success of this reorganization. The global health system is currently facing the new SARS-CoV-2 pandemic. This exceptional situation requires our African health systems to reorganize and readapt the usual protocols and to implement them urgently.
As imaging is one of the pillars of the diagnosis of infection with this emerging virus, it was essential to rethink the imaging department organization so as to dedicate a unit to COVID-19 activity while maintaining the usual emergency activity within the Ibn Sina university hospital in Rabat. The protection of exposed staff and the bio-cleaning of radiology equipment and rooms also became an obvious necessity. The active involvement of the administration, the Clinical Pharmacy Department and the Nosocomial Infections Control Committee is key to the success of this reorganization. In recent weeks, the imaging team of Ibn Sina University Hospital in Rabat (Morocco) has been confronted with a new activity, the COVID-19 activity. Organizational component: Ibn Sina University Hospital in Rabat normally has 800 hospital beds. The non-urgent activity was suspended and the hospital was geographically divided into two activities with separate circuits: a dedicated "Covid" activity located in the basement and on the ground floor, which includes the emergency reception service, the emergency radiology service, a second so-called central radiology service and several hospital units; and a non-Covid activity located on the upper floors and reserved for urgent non-Covid care. The imaging activity, like all hospital activity, has been divided into: a Covid activity, taken care of by the emergency radiology service, which performs computed tomography (CT) and chest x-rays for suspected or confirmed Covid-19 patients arriving from the emergency reception service or other hospital units; and a non-Covid activity carried out in the central radiology department, which is responsible for imaging the other hospital patients. Any respiratory symptomatology that is not clinically proven is considered suspicious. A clinical information collection form has been developed to facilitate the referral of patients receiving Covid imaging to the chest x-ray or CT rooms. The indications for CT have been widened due to its high sensitivity, although its specificity is debated. Bio-cleaning, disinfection and protection of personnel: bio-cleaning and protection of caregivers working in radiology were not formalized in our hospital before the health crisis. In the urgency of the situation, the clinical pharmacy department, the nosocomial infection control committee and the radiology department set to work. In addition to a review of the recent literature on the contamination and contagiousness of SARS-CoV-2, the effectiveness of the implementation of the protocols was conditioned by three essential factors: 1) the involvement of the personnel; 2) the effective application of the protocols, which was verified either by simulations or by their implementation when a Covid patient arrived in radiology (the operational hygiene team of the hospital, reporting to the Nosocomial Infection Control Committee (NICC) and the Nursing Service, made an undeniable contribution); and 3) the total acceptance by the administrative staff and the hospital pharmacy of all our requests in terms of organization and supplies of hygiene products and protective equipment. The available data in the literature prompted us to opt for draconian disinfection measures in an imaging environment considered to be high risk, among other things because of the confined nature of the radiology rooms.
Studies conclude that SARS-CoV-2 persists on different surfaces and reveal that plastic and stainless steel offer greater stability to the virus [6]. SARS-CoV-2 being an enveloped virus, its inactivation can be effectively carried out by surface disinfection procedures with solutions containing 62-71% ethanol, 0.5% hydrogen peroxide or 0.1% sodium hypochlorite, with a minimum contact time of 15 minutes. It should be noted that after more than a month of Covid activity, 260 CT scans and 180 chest x-rays, none of the emergency radiology department staff had been infected. The protection of the radiology staff in contact with the patients is achieved by the provision of a specific protective kit and by short training on its use, especially on donning and doffing. Ultrasound in suspected or confirmed COVID-19 patients is considered in two distinct situations: 1) pulmonary ultrasound as part of screening or monitoring of lung lesions; it currently remains in the research domain, and it is recommended that CT replace it whenever possible. Nevertheless, where CT is unavailable, as may be the case in some of our settings, and where expertise in pulmonary ultrasound is available, the latter is probably an interesting alternative. 2) Ultrasound other than pulmonary, for example in the context of digestive, renal, vascular or other symptomatology associated with the classic symptomatology of Covid-19 infection, or in the context of a complication. Rigorous protective measures must be implemented during an ultrasound examination because of the extended contact time [16]. We wanted to share our experience in the reorganization of hospitals in general, and of the radiology department in particular, while facing the Covid-19 pandemic. The aim is to facilitate and inspire our colleagues, brothers and sisters from the rest of our continent, in the organization of their own units if they have not already been set up. Every cloud has a silver lining, as they say. This exceptional situation forced us to improve hygiene practices in our hospitals and to implement protocols that normally should have existed before.
The primary outcome is physical function (using the Roland and Morris Disability Questionnaire) over 12\u2009months (repeated measures design). Secondary outcomes include pain intensity, troublesome days in pain over the last month, pain self-efficacy, catastrophising, kinesophobia, health-related quality of life and cost-related measures for a full health economic analysis. A full mixed-methods process evaluation will be conducted.This trial has been approved by a National Health Service Research Ethics Committee (REC Ref: 18/SC/0388). Results will be disseminated through peer-reviewed journals, conferences, communication with practices and patient groups. Patient representatives will support the implementation of our full dissemination strategy.ISRCTN14736486. The SupportBack 2 trial is a large multicentre randomised trial that will determine the additional benefit, over usual primary care, of an internet-based approach that supports self-management of patients with low back pain (LBP) in UK primary care.The trial is designed to investigate the effectiveness of an internet intervention in addition to usual primary care, both with and without telephone physiotherapist support.A full mixed-methods process evaluation will be carried out to inform a logic model and \u2018theory of change\u2019 for the interventions.Inclusion is limited to those with LBP who have access to the internet and are able to communicate in English without assistance.Low back pain (LBP) has a lifetime prevalence of up to 85%Internet interventions are typically automated, interactive, tailored interventions that make use of multimedia formats to deliver behavioural change strategies online.et alSupportBack is an internet intervention designed to support patients to self-manage their LBP following consultation in primary care.The aim of the present full randomised controlled trial (RCT) is to determine the clinical and cost-effectiveness of the SupportBack internet intervention, delivered in addition to usual care with and without physiotherapist telephone support, in reducing LBP-related physical disability in UK primary care.A three-parallel arm (1:1:1), multicentre RCT is being conducted to determine the clinical and cost-effectiveness of an internet intervention for patients with LBP in primary care. Participants will be followed up at 6\u2009weeks, 3, 6 and 12 months.The trial is being carried out with patients from 140 to 180 general practices across the UK. Patients access the intervention through their own devices with internet access at a location that is convenient for them . If allocated to receive telephone physiotherapist support, this support is delivered wherever is convenient for the patient. 
A list of patient identification centres is available from the trial team on request.Aged 18 and above.Current LBP (have experienced pain in the last week) with or without sciatica.Access to the internet and an active email address.Ability to read/understand English without assistance.Ability to provide informed consent.Signs and symptoms in a patient with LBP that indicate potential serious spinal pathology such as infection, malignancy, fracture, inflammatory back pain, progressive neurology and/or cauda equina.Have had spinal surgery in the past 6\u2009months.Pregnancy.Taken part in the prior SupportBack feasibility study.Two recruiting centres, Southampton and Keele (each with a team of telephone support physiotherapists) are working with National Institute for Health Research Clinical Research Networks to facilitate the recruitment of general practices. Potentially eligible participants will be identified in one of two ways:Patients who have consulted with LBP in the last 2\u2009months will be identified by general practice staff from computerised records of consultations. Practices will be asked to repeat the searches approximately three times, or until the target number of patients per practice has been reached. Resulting lists of patients identified by the search will be screened by a practice GP who will rule out patients based on aspects of the eligibility criteria that can be determined from patient notes.During a patient consultation and on entering a relevant diagnostic or symptom Read code into the patient electronic medical record, GPs will be prompted about the trial and patient eligibility by an automated \u2018pop-up\u2019 screen activated by the Read code. GPs will then screen for eligibility (using the inclusion/exclusion criteria listed) and patients identified as suitable will have their medical record electronically tagged. A download of \u2018tagged\u2019 patients will occur regularly, anticipated to be every 2\u2009weeks. This method will be used in practices where possible. Participating general practices not implementing the \u2018pop up\u2019 Read code method can identify potential patients during consultation. Having considered eligibility the GP or nurse practitioner will provide the patient with an invitation pack.Patients identified either by a medical records review or general practice consultation are mailed a study pack including an invitation letter from the GP, participant information sheet, reply slip, screening questions and prepaid envelope. Interested patients return the reply slip and screening questions using the prepaid envelope to the research team. Screening consists of two questions regarding current LBP and access to the internet, followed by three safety questions listing symptoms that may indicate serious spinal pathology. Patients who answer \u2018yes\u2019 to the first two questions, and \u2018no\u2019 to all safety questions, are considered eligible. For those who complete the screening questions and fail safety screening, a physiotherapist contacts the patient to make an appropriate clinical recommendation on hearing a further description of the symptoms. Those who fail the screening are documented on a screening log maintained by the research team. All patients considered eligible for the trial are assigned a unique participant identification number and sent a link to the study website, to complete consent, baseline questionnaires and be randomised. 
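The postal screening rule described above reduces to a simple decision: "yes" to both inclusion questions and "no" to all three safety questions. A minimal sketch of that rule follows; the field names and question paraphrases are illustrative, and in the trial itself a failed safety screen triggers a physiotherapist call rather than an automated rejection.

```python
# Minimal sketch of the two-part screening decision (illustrative field names).
from dataclasses import dataclass

@dataclass
class ScreeningReply:
    current_lbp: bool          # "Do you currently have low back pain?" (paraphrased)
    internet_access: bool      # "Do you have access to the internet?" (paraphrased)
    safety_answers: tuple      # three yes/no safety questions, True = "yes"

def is_eligible(reply: ScreeningReply) -> bool:
    """Return True if the reply passes the paper screen described in the protocol."""
    passes_inclusion = reply.current_lbp and reply.internet_access
    passes_safety = not any(reply.safety_answers)   # any "yes" prompts physiotherapist review
    return passes_inclusion and passes_safety

# Example: eligible respondent
print(is_eligible(ScreeningReply(True, True, (False, False, False))))   # True
# Example: failed safety screen -> routed to a physiotherapist call, not the trial
print(is_eligible(ScreeningReply(True, True, (False, True, False))))    # False
```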
Recruitment opened in November 2018 and is expected to close in December 2020, with data collection completing approximately 12 months later in December 2021.The randomisation process for this trial is fully automated. The intervention and data collection software automatically generates the randomisation sequence, and a computer-generated algorithm block randomises participants to the trial groups. Participants are being stratified by trial recruiting centre and level of physical function: a score of less than four on the Roland Morris Disability Questionnaire RMDQ is beingParticipants randomised to this arm will continue to receive unrestricted usual primary care for LBP. Current National Institute for Health and Care Excellence recommendations for primary care management of LBP suggest assessment to rule out specific spinal pathology and use of risk stratification tools has been extensively described elsewhere.16Practically, patients can access SupportBack from any device with an internet connection from wherever is most convenient for them. SupportBack consists of six sessions, and patients are encouraged to log in and use one session per week. Automated reminders adhere to this schedule. The first session highlights the centrality of PA in managing LBP, and supports patients to set goals to either walk more, or engage with a range of gentle back exercises of their choice. Goal options are tailored and are based the extent that patients report their LBP obstructs their day-to-day activities. The further sessions feature self-monitoring and feedback regarding their progress with walking or exercise goals, combined with encouragement from SupportBack to continue. After the first session, patients can unlock one further module per week on topics such as sleep, mood and work. These build into a personal repository, that alongside weekly goals, can be accessed at any time. If engaged with weekly, the tailored, interactive part of the intervention will last 6\u2009weeks. Following completion of all the sessions, SupportBack converts into static resource where all activities/exercises and modules can be accessed for the duration of the trial.Participants randomised to this arm will also continue to receive unrestricted usual primary care, with access to the SupportBack internet intervention. In addition, these participants will also receive up to 1\u2009hour of physiotherapist support over the telephone . At both centres (Southampton and Keele) support is provided by MSK physiotherapists working in the National Health Service (NHS).The objectives of the telephone contact are to encourage the use of the SupportBack intervention, provide reassurance regarding LBP and encourage adherence to PA goals. The physiotherapists are asked to closely adhere to a standardised content checklist for each phone call. The checklist follows the Congratulate, Ask, Reassure, Encourage approach,All measures and time points for collection are listed in The primary outcome in this trial is LBP-related physical function measured with the RMDQ.Demographic data are being collected at baseline including age, sex, educational attainment, marital and occupational status. A range of secondary measures are being collected including pain intensity,33To support the health economic analysis health-related quality of life is being measured with the 5-level EQ-5D (EQ-5D-5L)The internet intervention software automatically collects data on number of logins, page and module views and time spent in each login. 
This data will be used to explore adherence and user engagement to the digital component of the intervention.The reported minimally clinical important difference (MCID) between groups for the RMDQ varies. A between group MCID of 2 or 3 points is commonly reported.Data are primarily being collected online. The LifeGuide intervention and data system collects consent, baseline data including demographics and follow-up data across the four time points . When first sent a link to the system following screening, if patients do not log on within a week, they are emailed to check that they received the link and advised to look in their spam mail. If there is no response, one telephone call is attempted by the research team.With regard to follow up protocol, where there is no response to the online follow-up questionnaire emails, two reminder emails and text messages will be sent. Following continued non-response, a paper questionnaire pack with a prepaid envelope will be sent 1\u2009week after the last email/text reminder. If the paper questionnaires are not returned within 2\u2009weeks of being sent, a blinded research assistant will call the participant to complete the primary outcome measure (RMDQ), quality of life questionnaire (EQ-5D-5L) and pain severity. If the participant is happy to continue, further measures from the questionnaire battery at the respective follow-up point will be collected in this manner. The full follow-up protocol with the telephone calls will be implemented at 6\u2009weeks and 12\u2009months follow-up points. These two follow-up points are considered most important, capturing initial and long-term response. Calling at all time points may lead to increased dropout at later time points. Follow-up at 3 and 6\u2009months will include all the above steps except for the phone calls. All participants will receive a \u00a35 voucher when asked to complete questionnaires at the more distant time points of 6 and 12 months. Examples of data collection forms can be provided by the trial team on request.Quantitative analysis will begin following cleaning and inspection of the data. Descriptive analysis will be conducted to determine outliers and distributions of the data. Where necessary, if data are not normally distributed, transformations will be applied or another appropriate distribution used. The primary analysis for the RMDQ score will be performed using a multilevel mixed model framework with observations at 6\u2009weeks, 3, 6 and 12 months (level 1) nested within participants (level 2). Results will be reported adjusting for baseline severity in function, stratification factors and any prespecified confounders. The model will use all the observed data and makes the assumption that missing RMDQ scores are missing at random given the observed data.As there may not be a constant treatment effect over time, a treatment/time interaction will be modelled and included if significant (at the 5% level), with time treated as a random effect. An unstructured covariance matrix will be used.Analysis of secondary outcomes will also be conducted using linear regression for continuous outcomes and logistic regression for dichotomous outcomes, again controlling for baseline symptom severity, stratification factors and any potential confounders. The structure and pattern of missing data will be examined, if appropriate, and a sensitivity analysis based on data imputed using a multiple imputation model presented. Data will be analysed on an intention-to-treat basis . 
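The primary analysis described above (repeated RMDQ measurements nested within participants, adjusted for baseline severity and stratification factors) can be sketched with a linear mixed model, as below. The simulated data, variable names and effect sizes are purely illustrative; the trial's statistical analysis plan governs the real model, including any treatment-by-time interaction, the covariance structure and the handling of missing data.

```python
# Illustrative repeated-measures mixed model: RMDQ observations (level 1)
# nested within participants (level 2), random intercept per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
timepoints = ["6wk", "3mo", "6mo", "12mo"]

rows = []
for pid in range(300):                                   # 300 simulated participants
    arm = rng.choice(["usual_care", "internet", "internet_plus_phone"])
    centre = rng.choice(["Southampton", "Keele"])
    baseline = int(rng.integers(4, 20))                  # baseline RMDQ severity
    person_effect = rng.normal(0, 2)                     # participant-level random effect
    arm_effect = {"usual_care": 0.0, "internet": -1.0, "internet_plus_phone": -1.5}[arm]
    for t, time in enumerate(timepoints):
        rmdq = baseline - 1.5 * (t + 1) + arm_effect + person_effect + rng.normal(0, 2)
        rows.append(dict(pid=pid, arm=arm, centre=centre, time=time,
                         baseline_rmdq=baseline, rmdq=max(rmdq, 0.0)))

df = pd.DataFrame(rows)

# Fixed effects: baseline severity, arm, follow-up time and stratification factor;
# random intercept for participant.
model = smf.mixedlm("rmdq ~ baseline_rmdq + C(arm) + C(time) + C(centre)",
                    data=df, groups=df["pid"])
result = model.fit()
print(result.summary())
```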
We will also undertake a complier-average causal effect analysis,It is not anticipated that there will be significant practice level (cluster) effects but this assumption will be tested by comparing a fixed effect model to a random effects model. If there are significant practice level effects then, the model will include a random effect for practice (random intercept) and participant (random intercept and slope on time) to allow for between participant and practice differences at baseline and between participant differences in the rate of change over time (if significant at the 5% level), and fixed effects for baseline covariates.https://www.southampton.ac.uk/medicine/academic_units/projects/supportback2.page).No interim analyses are planned. Full details of the analyses to be undertaken will be set out in the statistical analysis plan and approved by the trial steering committee (TSC). Our full statistical analysis plan will be published on the trial website in due course , obtained from the EQ-5D-5L instrument using the published UK value set. In addition, a cost-effectiveness analysis will be carried out using the study primary outcome measure, that is, the cost per point change in back-related physical function measured using the RMDQ will be estimated. Both costs and effects will be estimated using multiple regression, to allow for potential confounders, such as baseline scores for EQ-5D-5L and RMDQ. Standard practice will be followed to calculate incremental cost-effectiveness ratios (ICERs), and present ICER(s) where any one option has both higher costs and increased effects compared with another. ICERs will show incremental cost per QALY or incremental cost per point improvement in RMDQ. Bootstrapping will be used to calculate cost-effectiveness acceptability curves. These will illustrate the effect of uncertainty on study results. Major assumptions made in the analysis will be tested by means of sensitivity analysis. In particular, assumptions made during the costing of the intervention such as the number of individuals who will be using the website will be explored. Similar methods to the main clinical analysis will be used to handle missing data, that is, analysis of patterns of missing data with multiple imputation methods employed if deemed appropriate. The proposed health economics analysis will be detailed in a health economics analysis plan (HEAP) which will be completed before analysis commences. The HEAP will be circulated for comment prior to the health economics analysis. Any digressions from the HEAP will be documented and justified in the final health economics report.A process evaluation will be carried out following Medical Research Council guidelines on process evaluations of complex interventions.Quantitative data describing trial implementation will be presented including number of practices recruited, patient eligibility and recruitment rates. The number of withdrawals from the trial per arm will be presented, along with numbers/percentages of drop-outs from the intervention who do not respond to follow up. Use of the internet intervention will be described by presenting automated data collected on number of logins and modules accessed for both the internet intervention and the intervention plus telephone physiotherapist support arm. 
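The health economic analysis described above can be illustrated with a small numerical sketch: an incremental cost-effectiveness ratio (ICER) followed by a non-parametric bootstrap used to trace out a cost-effectiveness acceptability curve. All numbers are simulated and the willingness-to-pay thresholds are illustrative; the trial's health economics analysis plan (HEAP) defines the real analysis, perspective, covariate adjustment and handling of missing data.

```python
# Illustrative ICER and bootstrap cost-effectiveness acceptability curve (CEAC).
import numpy as np

rng = np.random.default_rng(7)
n = 200                                    # participants per arm (illustrative)

cost_control = rng.normal(600, 150, n)     # simulated costs in GBP
cost_interv  = rng.normal(680, 150, n)     # intervention assumed slightly dearer
qaly_control = rng.normal(0.70, 0.10, n)   # simulated QALYs over follow-up
qaly_interv  = rng.normal(0.73, 0.10, n)

delta_cost = cost_interv.mean() - cost_control.mean()
delta_qaly = qaly_interv.mean() - qaly_control.mean()
print(f"ICER = {delta_cost / delta_qaly:.0f} GBP per QALY gained")

# Non-parametric bootstrap of the joint uncertainty in incremental cost and effect
B = 5000
boot_dc, boot_dq = np.empty(B), np.empty(B)
for b in range(B):
    idx_c = rng.integers(0, n, n)
    idx_i = rng.integers(0, n, n)
    boot_dc[b] = cost_interv[idx_i].mean() - cost_control[idx_c].mean()
    boot_dq[b] = qaly_interv[idx_i].mean() - qaly_control[idx_c].mean()

# CEAC: probability of a positive incremental net monetary benefit at each threshold
for wtp in (10_000, 20_000, 30_000):
    prob = np.mean(wtp * boot_dq - boot_dc > 0)
    print(f"P(cost-effective) at £{wtp:,}/QALY: {prob:.2f}")
```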
With regard to the internet intervention plus telephone physiotherapist support arm, the number of support calls successfully made (and attempts contact the patient), along with the mean number per participant in this arm will be described.Qualitative interviews will be conducted with up to 45 trial participants following the 3, 6 and 12 months follow-up points. Different participants will be interviewed at each time point, enabling us to explore how time since accessing the tailored weekly component of the intervention effects how suggestions are used and implemented in daily life. Interviews will also be conducted with the trial physiotherapists. Participants will be purposively sampled to ensure diversity in terms of age, sex and symptom severity . Participants will also be sampled based on high and low usage of the internet intervention and high and low engagement with the telephone physiotherapist support. For participants, questions will focus on their experience of using the intervention, including telephone physiotherapist support and usual care. Interviews with the trial support physiotherapists will be designed to explore their experience of delivering the intervention, with a particular focus on barriers and facilitators, and determinants of successful exchanges.A logic model of proposed mechanisms affecting LBP-related physical disability and pain outcomes for the SupportBack intervention has been developed see ref.. This mo40 41Questions will be included in the qualitative interviews focusing on participants\u2019 perceptions of how use of the SupportBack intervention and/or telephone support affected their LBP. This will enable the inductive exploration of participants\u2019 views and triangulation of qualitatively derived theory on mechanism with our quantitative analysis. Similar questions will also be explored in the usual care arm, focusing on how elements of their usual care may have led to improvements in their LBP.The relationship between elements of participants\u2019 context (moderators) and the effect of the interventions across the 12\u2009months follow-up period will be explored. This will include variables such as LBP severity and duration at baseline, age, educational level and occupation status. Following the analysis of mechanisms, correlations and multiple regression (linear and logistic) will be used to explore relationships between moderating variables and LBP-related physical function and pain intensity. Qualitatively, the above aspects of participants\u2019 context, including their own descriptions of their LBP history, will feed into analysis when exploring themes regarding participants use of the intervention and their perceptions of benefit.42Interview data collected regarding implementation, mechanisms and context will be transcribed verbatim, coded and analysed using an inductive thematic analytic approach.The SupportBack 2 trial has a data monitoring and ethics committee (DMEC) composed of a statistician (chair) and two academic clinicians (Professor in Primary Care Research and Professor of Physiotherapy respectively). The DMEC reports to the TSC and is fully independent from the trial Sponsor with no competing interests. Interim descriptive analyses are prepared for the DMEC. The DMEC charter can be obtained from the research team on request.All serious adverse events (SAEs) are reported to the lead clinical trial unit. The assessment of seriousness will be made by the participants GP or delegate. 
Assessment of causality will be made by the GP or delegate, and related or unrelated status will be determined. As the SupportBack intervention provides reassurance and encourages gentle activity within the participants' own limits, there are no 'expected' SAEs documented. All patient data are kept in strict confidence and managed in accordance with the Data Protection Act 2018 and General Data Protection Regulation (2018) legislation. The University of Southampton policy on archiving will be followed; the data will be stored for 10 years following the end of the study, after which time they will be disposed of securely. Following completion of the trial, a cleaned, anonymised data set will be shared on request. Patient representatives have been involved with the SupportBack trials from the outset. The idea for the trials and their subsequent design was informed by the local branch of the national charity BackCare. From this group, LL joined the research team and contributed to funding applications for both the feasibility and main trials. SupportBack 2 has a panel of three patient and public involvement (PPI) representatives who are part of the trial management group, advising on patient-facing materials and contributing to discussions of trial-related issues as they arise. PPI representatives will play a key role in the dissemination of trial findings and the interpretation of qualitative data. The SupportBack 2 trial has received full ethical approval from a local review board (REC Ref: 18/SC/0388). All potentially eligible patients receive a patient information sheet. This information emphasises that participation in the trial is voluntary and that the participant may withdraw from the trial at any time for any reason. The participants are given the opportunity to ask any questions that may arise by speaking with the trial team, and time to consider the information fully prior to agreeing to participate. The findings of this trial will be published in peer-reviewed journals and presented at international conferences. We will develop press releases in order to disseminate the findings to the general public, and work closely with our PPI collaborators to ensure dissemination to patient and other special interest groups. A summary of the findings will be sent to all included general practices and those patients who request this information. If the intervention is shown to be effective, we will work with developers to rapidly develop a version for widescale dissemination and implementation."} {"text": "First Person is a series of interviews with the first authors of a selection of papers published in Disease Models & Mechanisms, helping early-career researchers promote themselves alongside their papers. Paco López-Cuevas is first author on the paper featured here. How would you explain the main findings of your paper to non-scientific family and friends? Chordoma is a rare type of bone cancer, affecting approximately one individual per million every year. This cancer can compromise different locations of the spine, but often appears in the base of the skull, where surgeries are challenging. Chordomas originate from internal cells of the spine known as notochord cells, which form part of the notochord. The notochord plays an important role in the formation of the spine, but when notochord cells get transformed into cancerous cells, they grow and divide uncontrollably, and this can result in chordomas.
To study the impact of notochord cancer cells in spine formation and bone quality, we used a zebrafish line that induces cancer in the notochord cells. Interestingly, we saw that the notochord cells of these fish divide faster than normal, and they induce a local wound-like response, characterised by increased inflammation and alterations in the sheath layer of collagen that wraps the notochord. These changes have an impact on how the spine is formed in these fish, resulting in vertebral defects that we term butterfly vertebrae, because they look like they have \u2018wings\u2019. We also saw an imbalance between the cells that form bone (osteoblasts) and those that destroy bone (osteoclasts), resulting in poor-quality bones. The coolest observation was that when we get rid of inflammatory cells, the notochord cancer cells stop dividing uncontrollably, and fish have fewer butterfly vertebrae. Eureka! We might have found a solution for chordoma in our fish model by playing with the number of inflammatory cells, and this could be a potential way to treat chordomas in human.\u201c[\u2026] when we get rid of inflammatory cells, the notochord cancer cells stop dividing uncontrollably, and fish have fewer butterfly vertebrae.\u201dWhat are the potential implications of these results for your field of research?Our results suggest that the zebrafish chordoma model used here could be an important tool to help us better understand chordoma biology and how transformed notochord cells and inflammation could be involved in the development of spine malformations and malignancy. Our findings on the role of inflammatory innate immune cells in the development of notochord lesions and subsequent vertebral defects suggest that inflammation could be a promising pharmacological target for chordoma. Overall, our study supports parallels between chordoma and wound-triggered inflammation.What are the main advantages and drawbacks of the model system you have used as it relates to the disease you are investigating?We used a zebrafish cancer model in our studies. Zebrafish are teleost fish that have emerged as advantageous animal models to study a variety of human diseases, including bone diseases and cancers, due to several reasons such as their fast development, translucency and feasible genetic manipulation. Their translucency allows time-lapse confocal imaging to study cell behaviour in real time, in a non-invasive way and without the need to sacrifice animals, something that would not be possible using other animal models. In addition, during the first weeks of development, zebrafish do not have a functional adaptive immune system, allowing the investigation of the innate immune response, on its own. All of these advantages, together with the availability of reporter lines labelling specific cell types, the feasibility of generating mutants for different genes and their skeletal similarities to humans, led us to choose zebrafish to study chordoma. More specifically, a major advantage of our zebrafish chordoma model is the fish survival to adulthood. While previously reported zebrafish chordoma models were lethal at larval stages, our model allowed us to analyse changes from developmental stages to bone homeostasis in adult zebrafish, using techniques such as micro-computed tomography (CT) and histological analysis. 
Bone changes are characteristic of human chordomas; therefore, our model would be suitable to study aspects of chordoma not possible before, adding the possibility of testing new drugs for therapeutic applications in chordoma development and bone maintenance.A limitation of our model is that transformation of notochord cells is achieved by expression of a mutated RAS gene, which is not commonly observed in chordoma patients. However, our RAS model mimics the downstream signalling driven by the activation of epidermal growth factor receptor (EGFR), which is well known to be highly expressed in human chordomas. Another drawback of our model is the fact that transformation is not only induced in notochord cells, but also in melanoblasts, resulting in melanoma development. Fortunately, this can be avoided by combination with available pigment-free fish lines.What has surprised you the most while conducting your research?During the development of this research, we read in the literature that the progression of several cancers could be mediated by an inflammatory response, and that inhibition of inflammatory cells might be a desirable treatment to prevent cancer growth. We were really surprised when we found out in our own hands that chordoma is a type of cancer that could be treated by depletion of inflammatory cells. This finding was a very striking result that really motivated us to carry on with our investigation. Another surprising moment was when we realized that our zebrafish chordoma model had extensively been used by others to study melanoma for many years , but no one had previously paid attention to their skeletal changes.Describe what you think is the most significant challenge impacting your research at this time and how will this be addressed over the next 10\u2005years?Chordomas are highly resistant to current mainstream cancer treatments, including chemotherapy and radiotherapy, which is the reason why surgery is the main strategy to treat this type of cancer. However, due to the proximity of chordomas to vital structures (e.g. nerves or blood vessels), surgery remains challenging and often fails to achieve a complete removal of the tumour, increasing the probability of cancer recurrence. Therefore, alternative interventions are needed for chordoma patients. In our study, we report that immunotherapy, in particular the suppression of innate inflammatory cells, could be a suitable therapeutic approach for chordoma. Due to its success, this type of treatment modality has increased in popularity over the last years, especially among those cancers where inflammation clearly benefits cancer growth. While immunotherapy is very promising, further research will be required, and animal models like the zebrafish will undoubtedly be valuable to extrapolate results to the clinic.What changes do you think could improve the professional lives of early-career scientists?In my case, after I finished my undergraduate studies, I had the opportunity to conduct a research internship abroad. This helped me to network and establish new collaborations, allowing me to grow scientifically and ultimately facilitating the possibility to start my PhD studies. Being exposed to a completely different laboratory environment outside my country and make new connections was definitely key in the beginning of my scientific career. Therefore, I believe that early professionals would benefit from more funding schemes that are specifically dedicated to this matter. 
In addition, the election of the laboratory where you would like to carry out your PhD is a fundamental element that can strongly influence your future scientific career. In this line, although I am aware that several universities have begun to do this, I would encourage the implementation of PhD programmes with rotations or short internships in different laboratories at the beginning of these studies and prior to the final laboratory election. This would allow PhD students to get to know their potential PhD supervisor and work with laboratory members, helping them in their decision before this long journey starts.During my PhD, I have been very fortunate to have Prof. Paul Martin as my PhD supervisor. He has always encouraged me to establish collaborations with other scientists outside our laboratory. For instance, this study has been the result of a collaborative work with Dr Erika Kague, who has also been a great mentor for me and has taught me a lot about the bone field. Therefore, in my view, scientists should collaborate from very early in their careers, not only because they could learn from other mentors different from their principal investigators, but it could also increase their chances to publish in scientific journals.Some early-career scientists choose to leave academia due to the uncertainty of this field. Offering permanent postdoctoral positions to those who are not interested in becoming a principal investigator or lecturer, but they rather prefer to perform experiments in the laboratory, would increase the number of scientists remaining in their research jobs.\u201c[\u2026] the election of the laboratory where you would like to carry out your PhD is a fundamental element that can strongly influence your future scientific career.\u201dWhat motivated you to pursue a scientific career and what are your next steps?Since I started my undergraduate studies, I have been interested in research, so I exposed myself to different laboratories where I enjoyed designing and executing experimental approaches to try and give an answer to biological questions, particularly those that were associated with human pathologies. However, it was not until my first research internship abroad mentioned above that I could contextualize the social, clinical and economic impact that my role as a biomedical researcher could have. My goal during this internship was to investigate new approaches for tuberculosis treatment using the zebrafish model. Seeing that results of my own experiments could have a clinical application was very motivating and made me realise the personal and professional satisfaction that research can create on me. That was the turning point when I decided that I want to pursue a biomedical scientific career, with the hope that I can contribute to our current knowledge of human diseases and help to develop new treatment strategies. Because I am particularly interested in the study of cancer and inflammation, after completion of my PhD I would like to combine all my previous expertise to continue my research in this field and start working as a postdoctoral fellow."} {"text": "Ixodes ricinus tick reproduction host in this system, and (iii) a warmer climate, concurring with our current knowledge of how temperature affects tick activity and development rates. The implications for policy include adopting increased disease management and awareness in high risk habitats and in the presence of alternative LIV hosts and tick hosts . 
These results can also inform deer management policy, especially where there may be conflict between contrasting upland management objectives, for example, revenue from deer hunting vs. sheep farmers.Identifying the risk factors for disease is crucial for developing policy and strategies for controlling exposure to pathogens. However, this is often challenging, especially in complex disease systems, such as vector-borne diseases with multiple hosts and other environmental drivers. Here we combine seroprevalence data with GIS-based environmental variables to identify the environmental risk factors associated with an endemic tick-borne pathogen\u2014louping ill virus\u2014in sheep in Scotland. Higher seroprevalences were associated with (i) upland/moorland habitats, in accordance with what we predicted from the habitat preferences of alternative LIV transmission hosts (such as red grouse), (ii) areas of higher deer density, which supports predictions from previous theoretical models, since deer are the key Mycobacterium bovis the causative agent of bovine tuberculosis in domestic cattle, which can have wildlife reservoir hosts including red deer Cervus elaphus, wild boar Sus scrofa, and European badgers Meles meles (Fasciola hepatica infection (fasciolosis) in livestock via their effect on the vector, the mud snail Galba truncatula . In Europe these pathogens are vectored primarily by the most ubiquitous tick in Europe, Ixodes ricinus, which is a generalist, parasitizing almost all terrestrial vertebrates. It spends the vast majority of its lifecycle away from its hosts, so its survival and activity is influenced by a multitude of environmental factors [e.g., . In addition, this ensured we excluded pet sheep and small-holdings or \u201chobby-farms,\u201d which tend to have different management.We conducted a national survey, using a stratified random sampling design based on Scottish Agricultural Census data to ensure random and representative sampling of sheep flocks for all regions over Scotland. Only flocks with at least 50 breeding ewes were included and, on farms that had multiple flocks, only one flock was used. Breeding ewes were chosen to get a representative sample for a location. Younger animals may not have yet seroconverted to endemic pathogens. Tups are likely to have been purchased from elsewhere, so that any seroconversion may have been due to infection picked up in a different location. The inclusion criterion of 50 sheep was chosen because holdings of <50 sheep often do not have the number of breeding ewes we required for sampling . This sample size allows a 95% confidence interval of <5% for estimating LIV sero-prevalence . FarmersHemagglutination-inhibiting antibody (HIA) tests were undertaken on sheep blood sera using chick red blood cells as described by Clarke and Casals . The LIVFrom GIS databases we extracted data on climate, habitat and tick hosts for the locations of each farm. Climate variables from a GIS database included variables relating to temperature and precipitation on a 1 km or 5 km grid . All theHabitat data were derived from the UK Land Cover Map (2000) in a 50 m grid. 
They were split into the following categories according to those most commonly occurring around the farms: bracken, blanket bog, heathland , improved grassland, rough grassland, montane, broadleaf woodland, coniferous woodland, mixed woodland and, in addition, we created a generic \u201cwoodland\u201d category ; 2 grid resolution.Sheep and cattle density data were obtained from the national agricultural census data (AgCensus), available at the Parish level, at a 2 kmApproximate red deer densities were derived from Deer Commission for Scotland count data, based on dedicated observer counts of individual deer from the ground or air, Krigged to a 2 km grid. These data are the best (indeed only) quantitative deer data available, but have several caveats. For example, red deer counts were conducted where the 44 Deer Management Groups areas are, but these cover only around 75\u201380% of Scotland. Furthermore, the counts for different areas were not always conducted at the same time but, instead, staggered between 2000 and 2006. Therefore, given the level of error in these data, we consider any positive results linking deer density to LIV seroprevalence in sheep to be highly conservative.Intersect Point tool was used in ArcMap v9.3 to extract the values of all environmental parameters at the locations of each of the 125 sheep farms from a set of raster and vector maps of environmental data.To test for environmental variables associated with the sero-prevalence of louping ill virus among sheep farms we used general linear mixed models using the glimmix procedure in SAS Version 9.1. The response variable was seroprevalence for each farm expressed as the number of positive serum samples divided by the number of samples assayed for each farm. This is more powerful than using merely a single figure for the proportion of positives because it allows the model to take into account the number of samples taken, which varied from 26 to 29. A binomial distribution was specified. The data distribution was over-dispersed and zero-inflated , which is commonly found with disease prevalence data such as these. Therefore, each data point, i.e., individual farm, was entered as a random effect in the model as a wayP-values for the variable and model AIC. We also then entered all climate variables within a category into the model simultaneously to identify which had the strongest overall effect in terms of F and P-values and change in AIC. This variable selection procedure selected annual growing degree days from the temperature-related variables, and the number of dry days from the precipitation-related variables.Because of the large number of potential climate-related explanatory variables and becaThus, we entered the following selected explanatory variables as fixed effects into the model: easting and northing , time of year that bloods were sampled , year of blood sampling, estimated deer density, estimated sheep density, growing degree days, dry days, and the proportion of land cover that was each habitat category listed in We had expected deer density to covary positively with the proportion of heathland. Also, because areas with more heathland have less improved grassland , we expep > 0.1, and then checked that removal did not adversely affect model fit (increase AIC). If removal had increased AIC, the variable would have been kept in the model, but this did not occur in our procedure. 
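The climate-variable screening and backwards elimination described above lend themselves to a compact sketch. The snippet below is a simplified, hypothetical analogue of the published analysis: it uses a fixed-effects binomial GLM rather than the SAS glimmix mixed model with an observation-level random effect for overdispersion, and the file name and column names are invented for illustration rather than taken from the real dataset.

```python
# Sketch of the single-variable screening step: each candidate climate
# variable is entered on its own in a binomial GLM for farm seroprevalence
# (positives out of samples tested) and ranked by AIC and p-value, mirroring
# the selection logic described in the text. Names below are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

farms = pd.read_csv("liv_farms.csv")                 # hypothetical file: one row per farm
farms["n_neg"] = farms["n_tested"] - farms["n_pos"]  # failures for the binomial response

candidates = ["growing_degree_days", "mean_summer_temp",
              "annual_rainfall", "dry_days"]          # illustrative variable names

results = []
for var in candidates:
    fit = smf.glm(f"n_pos + n_neg ~ {var}", data=farms,
                  family=sm.families.Binomial()).fit()
    results.append((var, fit.aic, fit.pvalues[var]))

# Rank candidates; the text retained growing degree days (temperature group)
# and number of dry days (precipitation group) by this kind of comparison.
for var, aic, p in sorted(results, key=lambda r: r[1]):
    print(f"{var:>22s}  AIC={aic:8.1f}  p={p:.3g}")
```

The backwards stepwise stage then repeatedly drops the least significant remaining term (p > 0.1) and confirms that the AIC does not increase, as stated above.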
Because such terms were eliminated from the models, we present test statistics for only those fixed effects that remained in the final model.We conducted a backwards stepwise procedure, whereby we sequentially removed from the model all explanatory variables that did not improve the model: we first removed each variable that had very low significance, i.e., Of the 125 sheep farms sampled, 28 (22.4%) farms contained LIV seropositive sheep . The proportion of positive farms varied greatly over different areas of Scotland , with a The national average seroprevalence, including farms that did not have LIV, was 6.39% . If only sero-positive farms are considered, the average seroprevalence was 28.52% . The frequency distribution of within-farm LIV seroprevalence exhibited a negative binomial distribution typical of disease and count data, with most farms having no or low infection, and a small number having very high infection rates .Sheep farms had higher within-farm LIV seroprevalences (% of ewes within a farm that tested positive) if they were in areas with a warmer climate (more growing degree days per annum), a higher proportion of land that is heath-dominated moorland, a lower proportion of land under improved (smooth) grassland and areas with higher deer densities . There wWe aimed to test the hypothesis that LIV prevalence in sheep farms is influenced by environmental factors, especially those associated with tick abundance and LIV transmission hosts.B. burgdorferi prevalence: . Higher temperatures increase tick interstadial development rate, oviposition rate, egg development rates and tick activity \u201337, and valence: , 40. WhiAs predicted, there were higher LIV seroprevalences among sheep farms in heather moorland which is the characteristic habitat in upland UK and is the habitat most frequented by wildlife hosts that are competent LIV transmitters: red grouse and mountain hares. Upland areas with more heather moorland had less improved grassland . ImproveI. ricinus ticks in Scotland in both upland heather moorland and forest habitats at a range of spatial scales between woodlands and adjacent open habitats which were often only 50\u2013100 m apart, which suggests that any link between woodlands and tick or tick-borne risk incidence in adjacent open habitats probably operates at a much finer spatial scale than we had access to in this study.Against our predictions, we did not find a significant association between LIV seroprevalence and the proportion of land cover that was woodland. However, a previous more detailed study of ticks (not LIV) found that (i) distance to woodland and (ii) the proportion of sheep pasture that had tree cover were strong predictors of tick burdens on lambs and tick densities in sheep pastures in Norway . One reaI. ricinus tick densities in most areas of these islands (Gilbert unpublished data). This is likely due to a combination of the lack of deer and the colder climate which inhibits tick activity and development. The eastern regions of Scotland had intermediate seroprevalence. These areas, especially Grampian Region, Speyside and Perthshire, have a particularly heterogeneous landscape, from high quality improved grassland for cattle up to high altitude montane habitats, with extensive forested areas and heather moorland in between. Some of these heathlands have the highest deer, red grouse and mountain hare densities in Scotland. 
Here, therefore, we would expect a wide spectrum of LIV seroprevalences, which is reflected by the overall intermediate values over the whole region. South of the Central Belt of Scotland there were even lower seroprevalence than the Northern Isles, even with a good sample size of 36 farms. The main habitats are upland rough grasslands and commercial coniferous forests, with some improved grassland for high density livestock grazing. There are deer, and some mountain hares and red grouse present, although not at the densities found in the East region of Scotland. We would therefore expect lower LIV infection rates than in the East region, but it is not clear why the seroprevalence is as extremely low as it is. This could be due to unconsidered factors such as historical movements of infected sheep and warrants further research.Geographically, the proportion of farms testing seropositive to LIV was much higher along the West and North coasts of Scotland than in other areas. This might be expected given the warm, humid climate which aids tick survival, activity and development. There was low LIV seroprevalence in the Northern Isles (Shetland and Orkney) which is most likely attributed to very low Although exposure of sheep to ticks can be mitigated by acaricide application to the animals alternatThe datasets generated for this study are available on request to the corresponding author.The study design was reviewed and approved by Moredun Research Institute.CC and FB initiated and designed the blood collecting sampling strategy as part of a separate study on Ovine pulmonary adenocarcinoma . FB organized farm visits and collected blood samples and KW conducted the LIV serology tests. LG wrote the LIV proposal to Scottish Government, assimilated the separate data sets, analyzed the data and wrote the paper. CC, FB, and KW commented on the manuscript drafts. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Saccharomyces cerevisiae, expressing different toxins, to independently control the rate at which they released their toxins. We developed mathematical models that predict the experimental dynamics of competition between toxin-producing strains in both well-mixed and spatially structured populations. In both situations, we experimentally verified theory\u2019s prediction that a stronger antagonist can invade a weaker one only if the initial invading population exceeds a critical frequency or size. Finally, we found that toxin-resistant cells and weaker killers arose in spatially structured competitions between toxin-producing strains, suggesting that adaptive evolution can affect the outcome of microbial antagonism in spatial settings.Antagonistic interactions are widespread in the microbial world and affect microbial evolutionary dynamics. Natural microbial communities often display spatial structure, which affects biological interactions, but much of what we know about microbial antagonism comes from laboratory studies of well-mixed communities. To overcome this limitation, we manipulated two killer strains of the budding yeast Microbes affect nearly every aspect of life on Earth, from carbon fixation to humanVibrio cholerae strains to kill each other, using the type VI secretion system, coarsens the single-strain domains of populations that were initially well mixed . 
A formally equivalent model that included spatial diffusion and noise due to number fluctuations was studied theoretically in where concentration rather than the actual physical size discussed later in this paper for spatially structured communities on surfaces. Nevertheless, when number fluctuations are included in the dynamics, there is an interesting analogy with escape over a barrier problems in statistical mechanics highlighted with a gray arrow and gray lines in o showed that their average fluorescent intensity was reduced with respect to the ancestral K1 population, and also compared to K1 cells sampled from a non-outlier population (K1s) that successfully expanded in the spatially structured experiments rule out this hypothesis, given that they followed the same dynamics of competition assays between the original K2b stock and K1 re-invaded the K1 population (magenta) after the halo had formed. By competing K2b cells from the sub-population highlighted with the green arrow (K2bg) in We tested the hypothesis that the outlier population K1stor K2b , panel Ak and K1 . Given t the top , howeverKm, for the local nutrient concentration, and a death term proportional to the local concentration of the toxins. Cells diffuse locally on the surface of the agar via a growth-dependent diffusion term reflecting the fact that cells push each other around as they interact mechanically with other cells during their growth and division , using as backbone a pFA6a-prACT1-ymCherry-KanMX6 plasmid (pAG3) linearized with the restriction enzymes EcoRI and EcoRV. The segments containing CUP1P, K2, and CYC1T were assembled via Gibson assembly (plasmid pAG14), using as backbone a pFA6a-prACT1-ymCitrine-KanMX6 plasmid (pAG5) linearized with the restriction enzymes EcoRI and EcoRV. These plasmids were linearized at the CYC1T locus using the restriction enzyme PpuMI, and their integration at the CYC1T locus was verified using colony PCR using primers oAG5/oAG46 and oAG44/oAG45 (strain K1), and oAG8/oAG46 and oAG44/oAG45 (strains K2 and K2b).The killer toxin and fluorescent protein genes used in this study were cloned into integrative plasmids. The K1 killer toxin gene was PCR-amplified from the plasmid YES2.1/V5-HIS-TOPO-K1 pptox , which cK2 toxin , and for plasmid . The proS. cerevisiae strain yJHK234 derived from the W303 genetic background. This strain was constructed as described in GAL1P occurs in a titratable, unimodal way in response to changes in the extracellular concentration of galactose, because galactose has been turned into a gratuitous, non-metabolizable inducer by deleting the genes GAL1 and GAL10 from its genome, and the bistability in galactose induction has been removed by placing the GAL3 gene under the constitutive promoter PACT1 methylene blue (an indicator for cell death), incubated at 25\u00b0C for 24 hr and isolated blue colonies (clones cured of the virus being killed by surrounding non-cured colonies). We tested that the isolated clone yAG74 was sensitive to toxins secreted by both the K1 and K2 reference strains. To prevent catabolite repression, which would prevent expression of the K1 toxin from GAL1P in the presence of glucose, we deleted the hexokinase two gene, HXK2 was obtained by transforming the linearized plasmid pAG11 into strain yAG75. The K2 (yAG83) and K2b (yAG82) killer strains were obtained by transforming the linearized plasmid pAG14 into strain yAG75. 
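Returning to the well-mixed frequency dynamics discussed earlier in this section: the prediction that an invading killer strain only takes over if it starts above an equilibrium frequency can be illustrated with a toy integration. The ODE below is a generic bistable mutual-antagonism form chosen only to reproduce that qualitative behaviour; it is not the paper's fitted frequency model, and the values of the critical frequency and rate constant are arbitrary.

```python
# Minimal sketch of bistable invasion dynamics: the invader frequency f grows
# to fixation only if it starts above an unstable equilibrium f_eq, otherwise
# it declines to extinction. Generic illustrative model and parameters only.
f_eq = 0.3          # assumed unstable (critical) equilibrium frequency
rate = 2.0          # assumed overall antagonism strength, 1/h
dt, t_end = 0.01, 48.0

def simulate(f0):
    """Forward-Euler integration of df/dt = rate * f * (1 - f) * (f - f_eq)."""
    f = f0
    for _ in range(int(t_end / dt)):
        f += dt * rate * f * (1.0 - f) * (f - f_eq)
    return f

for f0 in (0.10, 0.25, 0.35, 0.60):
    outcome = "invades (f -> 1)" if simulate(f0) > 0.99 else "dies out (f -> 0)"
    print(f"initial frequency {f0:.2f}: {outcome}")
```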
The transformant clones yAG94 and yAG83 were chosen for the experiments because of the bright fluorescence signal of the transformed cells observed at the flow cytometer and at the stereomicroscope compared to other transformants, which suggests multiple integrations of the plasmid into the genome. Conversely, yAG82 was selected because of the weaker fluorescent signal of the transformed cells compared to yAG83, suggesting a single integration of the plasmid. The sensitive, nonkiller strains S1 (yAG96) and S2 (yAG99) were obtained by transforming strain yAG75 with the linearized plasmids pAG3 and pAG5 digested with the restriction enzyme AgeI (which cleaves DNA in ACT1P), respectively.The strains used here were derivatives of the er PACT1 . By comp Ram\u00edrez , we founne, HXK2 from yAGN,N,N\u2032,N\u2032-tetraacetic acid (EGTA), a chelating agent that we used to reduce the baseline expression of CUP1P. The medium was prepared by mixing 990 mL of Millipore-purified water, 20 g of BD Bacto Peptone, 10 g of BD Bacto Yeast Extract, 10 mL of a 1% (w/v) solution of adenine and tryptophan, 1.04 g of NaOH and 9.51 g of EGTA. Then, 2 g of NaOH were added to bring the EGTA into solution. Then, we added 11.2 g of succinic acid and brought the pH to 4.5 by adding approximately further 2 g of NaOH. The solution was then filter-sterilized. Agarose medium was prepared following the same procedure but using 590 mL of water instead of 990 mL. Separately, 400 mL of Millipore water were mixed with 20 g of BD Bacto Agar and microwaved for 2 min. The two solutions were then combined and used to fill Petri dishes. Solutions of copper(II) sulfate and of galactose at different concentrations were added to the media in different volumes according to the desired final concentration of the two inducers All experiments were performed using YPD buffered at pH 4.5 and supplemented with adenine, tryptophan, and ethylene glycol-bis(\u03b2-aminoethyl ether)-b were mixed with strains S2, S1, and S1, respectively (so that the two strains expressed different fluorescent proteins). Cell suspensions containing different strains were then mixed at the desired relative frequencies, and 40 \u00b5L of these were then diluted in 10 mL YPD buffered at pH 4.5 with EGTA. Each replica in the competition assays consisted of 500 \u00b5L of this solution placed in deep-welled (capacity 2 mL/well), 96-well, round-bottomed plates, taped to a roller drum rotating at a frequency of 1 rotation per second and placed in a room kept at 25\u00b0C. Technical replicates were assigned to random positions on the 96-well plate, irrespective of the treatment they belonged to. At regular time intervals, small samples (\u226410 \u00b5L) were taken from each well, diluted in 50 mM Tris-HCl, pH 7.8, and the relative frequencies of the two strains were measured by flow cytometry. Flow cytometry data was performed using the Python package FlowCytometryTools and custom Python and Mathematica scripts. Occasionally, during measurement with the flow cytometer, some wells were not measured due to the aspiration of bubbles by the robotic liquid handler that automatically measured the 96-well plates. Due to the temporal sensitivity of the assay, relative frequency data from those replicates could not be recovered, and thus we excluded those technical replicates from the analysis. Competition assays with the unusual cells sampled from the experiments of Competition assays in liquid media were performed as follows. 
Strains were plated from the glycerol stock 4 days prior to the start of the experiment and grown for 2 days at 30\u00b0C in YPD plates. One day before the start of the competition assays, overnight cultures were started by transferring cells from the plates to a tube containing 2 mL YPD buffered at pH 4.5, which was placed in a rotating roller drum at 30\u00b0C. At the start of the competition assays, 200 \u00b5L from the overnight cultures were centrifuged, the supernatant was removed, and cells were then resuspended in 2 mL autoclaved water. The centrifugation and resuspension were repeated twice to dilute away any toxins produced overnight. For killer-versus-nonkiller competition assays, the strains K1, K2, and K2Media and growth conditions) were added to 100 mm diameter Petri dishes 2 days before the start of the experiment, along with appropriate amounts of a 5 mM solution of copper(II) sulfate or a 50 mM solution of galactose to reach the desired target concentration of inducer on the plates. The day before the experiment we inoculated overnight cultures of strains K1, K2, S1, and S2 in 2 mL YPD culture tubes with pH 4.5 and 25 mM EGTA and grew them at 30\u00b0C on a rotating roller drum. At the start of the experiment, we centrifuged and resuspended 200 \u00b5L of the overnight cultures in 2 mL autoclaved Millipore-purified water, repeating the centrifugation and resuspension twice to remove toxins from the overnight cultures. We mixed strains K1 and K2 with K1 frequencies b, S1, and S2 in 200 \u00b5L autoclaved Millipore water, repeating the centrifugation and resuspension twice to remove toxins from the overnight cultures. For the experiments of b culture were spread on the surface of the agar using an inoculating loop. Then, using a micropipette, we deposited droplets of the resuspended overnight K1 culture on top of the K2b lawn, at random locations on a regular lattice, well separated from each other. We deposited droplets of six different volumes (0.5\u20133 \u00b5L with 0.5 \u00b5L increments), with seven replicates per volume, across multiple plates. Similarly for the experiments shown in The experiments with spatially well-mixed populations on surfaces shown in n effect . In thisThe starting point for the derivation of the frequency model where which is in fact with \u20131, \u20131 and \u20131(cell/mL)\u20131 (see next section for its estimate), and thus the condition is satisfied for \u20131, it takes about 17 hr for the condition which show that the toxin concentrations evolve with the characteristic time scale the data and 3A. Our starting point for the investigation of antagonistic population dynamics in spatially structured populations was the following spatial generalization of 2), which is related to the well-mixed carrying capacity via the relationship Parameter fitting). We found that the non-spatial version of this model could indeed fit the antagonistic dynamics between toxin-producing and nonkiller strains, and that it could predict the dynamics of antagonistic competition between the two toxin-producing strains K1 and K2 (and K1 versus K2b) in well-mixed media . These S. cerevisiae on glucose, where where b) in well-mixed media.Here, lgorithm and the The parameters of the frequency model were obtained by least-squares fitting of the equation:where The parameters of where The parameters of The toxin concentration and toxin production rate were rescaled as described previously. The initial concentration of glucose was set to the experimental value terature . 
The valConfidence intervals for the model predictions in 2/hr , the critical inoculum size is about 20 mm2. In The diffusion coefficient of glucose in s factor . Additio 1/a1-a2 . In Figu2 to account for the increased killer strength of copper-induced K2 killer strains on agar plates, compared to liquid cultures. This value was calculated as follows. Upon interpolating between the last two data points in the lower-right panel of 2 for strain K2 grown on plates with 0 \u00b5M copper and 350 \u00b5M galactose. Given that K2b was a weaker killer than K2 by a factor 2. The initial condition for the radial density profiles of the two strains was set to reproduce the experiments as closely as possible. To this end, we first reproduced experimentally the initial conditions of the experiments of b using the same protocol as for the experiments of b occupied the region b was set to zero for b. The dynamics of this simpler, but less accurate, simulation .Thank you for submitting your article \"Antagonism between killer yeast strains as an experimental model for biological nucleation dynamics\" for consideration by The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.As you will see below, all reviewers found the paper interesting and important. A series of concerns me questions raised by all reviewers concerns the nutrient depletion explanation of the halos. The specific assumption made in the model (and not confirmed by the experiment) may turn out to be inevitable. Nevertheless, please address the concerns regarding the assumptions on which the model based, as well as all other comments, as specified by the detailed reviews below.Reviewer #1:The authors present a beautifully clean model system to study mutual antagonism in spatial and well-mixed populations which they use to test theoretical predictions from models developed previously (and adapted here). In particular, the authors verify the prediction that in a spatial setting a stronger antagonist can invade a population of a weaker antagonist only if the former is present at a large enough initial population size. The paper is well written and very accessible, and I have only two substantive concerns:1. Given the focus on minimum population size for invasion in the introduction I expected the authors to present the same verification also for well-mixed populations. While Figure 3B seems to indicate that in well-mixed populations the relative abundance of the stronger antagonist can indeed decrease if its initial abundance is below a threshold value I would have liked to see this shown (or at least stated in the text) more clearly.2. In the later portions of the manuscript, the authors infer from a variety of models they tested that the halo separating the two genotypes stems from nutrient depletion. To make their conclusions about nutrient depletion more comprehensible it would be helpful to show the results of models of intermediate complexity and their lack of a halo effect. The authors then go on to suggest (l. 495) that resistant mutants can re-invade the invading strains by presumably crossing the halo into the invader's territory; but how do the mutants cross the nutrient-depleted halo?Reviewer #2:In this manuscript the authors report interesting results on the population dynamics of a yeast population containing two antagonistic strains. 
The authors designed the yeast strains to produce toxins in variable amounts, which are regulated by inducible promoters. This enables them to systematically study the role of antagonistic interactions under controlled experimental conditions in different environments, including well mixed liquid cultures and agar surfaces. Specifically, they are investigating under which conditions the \"stronger\" of the two strains is able to invade a population of the \"weaker\" strain. Their main result is that the invasion requires a threshold population size (nucleation threshold) of the stronger strain. While the existence of such a threshold is less surprising (due to the state-dependence of the competition), the quantification of the effect in well-mixed and spatially extended systems is interesting. In particular, the authors give a fairly quantitative mathematical description of population dynamics using a rate equation approach for the well-mixed system and a reaction-diffusion model for the spatially extended system. This mathematical analysis provided further insights into the nature of competition. This includes that it is possible to describe the well mixed system in the form of a frequency model using interaction parameters determined from the interaction of the toxin-producing strains with sensitive strains. Furthermore, the reaction-diffusion equations, which explicitly consider both the toxin and the nutrients, seem to capture the formation of a depletion zone between the two antagonistic strains at the start of their interaction.The paper is well written and interesting to read. In principle I am in favour of recommending the manuscript for publication. However, I have a major point of criticism that the authors should consider before I can make final recommendations.When investigating the nature and role of the different mutants found in their studies, the authors do not discuss the possibility of changes in growth rates. While the engineered original strains show the same growth rates, this might not be the case for the mutant strains. This could have a major impact on the dynamics, since, for example, the validity of a pure frequency model depends on identical growth rates. I would ask the authors to measure these growth rates in their mutant strains. Furthermore, I think it should be straightforward to test possible effects of a change in growth rates with their reaction-diffusion model.Reviewer #3:In addition to the technical questions raised above I have a few quibbles about the presentation.\u2013 The authors start using the term \"stronger antagonist\" already in the abstract without giving it even any definition, so the reader has to guess the meaning the best she/he can. Definition comes on line 118, page 8\u2026\u2013 It would be useful to have a more explicit layout for the paper early on. This reader was wondering about effects of toxin stability and diffusion all through the well-mixed section, without knowing that it is coming later.eLife after suitable revisions/clarifications.Overall, I think the manuscript is suitable for As you will see below, all reviewers found the paper interesting and important. A series of concerns me questions raised by all reviewers concerns the nutrient depletion explanation of the halos. The specific assumption made in the model (and not confirmed by the experiment) may turn out to be inevitable. 
Nevertheless, please address the concerns regarding the assumptions on which the model based, as well as all other comments, as specified by the detailed reviews below.and of the depletion of nutrients. However, we cannot come up with a simple way of testing whether the toxin activity alone is sufficient to cause the formation of the halo. One could imagine a setup in which nutrients are continuously provided to the agar, but this would cause an indefinite growth of the two populations, which would lead to a very thick lawn on top of the agar, and the physiological state and physical environment of cells within such lawn would be very different from that of our experiment. Most likely, such indefinite growth would lead to the halo being filled with cells, as it is hard to imagine that a \u2018canyon\u2019 would persist with no \u2018landslides\u2019 in such configuration. Alternatively, one could envision doing an experiment in a sophisticated setup using microfluidic chambers that would only allow a monolayer of cell to grow within them, but the pressure buildup within the chamber would also most likely lead to the halo being filled with cells. In both cases, the physical environment would be very different from the one experienced by the cells in our experiments. For these reasons, we rely on modeling to explain the halo formation dynamics, using a combination of experimentally measured and physically realistic parameters. Incidentally, the increased density of cells at the two sides of the halo oC to 32oC for further 48h , a temperature at which both the K1 and K2 toxins are unstable and fail to inhibit the growth of susceptible strains .\u201d\u201cIf the halo was caused by the presence of the toxins alone, and not by the combined effect of the toxins and the diffusion of nutrients away from the agar underneath the halo, one would expect that inhibition of the toxin would allow cells to re-invade the halo region. To test this, we experimentally verified that no further growth in the halo region is observed after transferring populations that competed for 48h at 25Although we failed to comment on this in the previous version of the manuscript, there is a simple explanation for the failure of the model with logistic growth term to produce the halo, and we have now added a discussion on this point in the revised version of the manuscript (lines 837-849). We thank you and the reviewers for pointing out the missing explanation. The rationale is as follows.2 on the agar can support a given number of cells. In such a model, nutrients located in a given region of space cannot diffuse to nearby regions and thus can only support cells locally. With toxin production rates representative of our experiments, the toxin produced by an antagonist strain is not sufficient to completely halt the growth of the other antagonist. In the model with logistic growth, therefore, the two populations are able to grow at the interface between the two antagonist strains, even if at a slower pace compared to other regions of space, eventually filling the halo region with cells in the limit of large times. 
When nutrients can diffuse, however, nutrients move to other regions of space before cells at the interface between the two antagonists are able to grow, leading to the depletion region that we referred to as the \u2018halo\u2019.The logistic growth term assumes that every cmReviewer #1:The authors present a beautifully clean model system to study mutual antagonism in spatial and well-mixed populations which they use to test theoretical predictions from models developed previously (and adapted here). In particular, the authors verify the prediction that in a spatial setting a stronger antagonist can invade a population of a weaker antagonist only if the former is present at a large enough initial population size. The paper is well written and very accessible, and I have only two substantive concerns:1. Given the focus on minimum population size for invasion in the introduction I expected the authors to present the same verification also for well-mixed populations. While Figure 3B seems to indicate that in well-mixed populations the relative abundance of the stronger antagonist can indeed decrease if its initial abundance is below a threshold value I would have liked to see this shown (or at least stated in the text) more clearly.Thank you for this comment, we have now highlighted this point more clearly in the main text at lines 192-198:eq thus represents a critical inoculum size below which the invasion of a stronger antagonist is predicted to fail in well-mixed settings. Note that this particular \u201csize\u201d relates to an inoculum concentration rather than the actual physical size discussed later in this paper for spatially structured communities on surfaces. Nevertheless, when number fluctuations are included in the dynamics, there is an interesting analogy with escape over a barrier problems in statistical mechanics .\u201dThe equilibrium frequencyf and at lines 248-251:feq, is required for a stronger antagonist to invade a resident, antagonist population.\u201d\u201cOverall, the experimental results from well-mixed experiments confirm the theoretical prediction that a critical starting frequency, the equilibrium frequency 2. In the later portions of the manuscript, the authors infer from a variety of models they tested that the halo separating the two genotypes stems from nutrient depletion. To make their conclusions about nutrient depletion more comprehensible it would be helpful to show the results of models of intermediate complexity and their lack of a halo effect. The authors then go on to suggest (l. 495) that resistant mutants can re-invade the invading strains by presumably crossing the halo into the invader's territory; but how do the mutants cross the nutrient-depleted halo?Thank you for this comment, which helped us realize that we hadn\u2019t given sufficient explanation for the failure of the logistic growth model to reproduce the formation of the halo. We have now included the following discussion at lines 836-849 explaining why the models with logistic growth fail to reproduce the formation of the halo:2 on the agar can support Kspatial cells. In such a model, nutrients located in a given region of space cannot diffuse to nearby regions and thus can only support the growth of cells locally. 
With toxin production rates representative of our experiments, the toxin produced by an antagonist strain is not sufficient to completely halt the growth of the other antagonist, as shown by the fact that the absolute number of cells of both antagonists grew in all our well-mixed competition experiments, even if the relative frequency of one of the strains declined with time. In the model with logistic growth, the two populations are thus able to grow at the interface between the two antagonist strains, even if at a slower pace compared to other regions of space, eventually almost completely filling the halo region with cells . When nutrients can diffuse, however, nutrients move to other regions of space before cells at the interface between the two antagonists are able to grow, leading to the depletion region that we referred to as the \u2018halo\u2019.\u201d\u201cThe failure of such model to reproduce the halo can be explained as follows. The logistic growth term in Equations 6 assumes that every cmWe have also now included in Figure 7 \u2013 supplement 1 plots of numerical simulations of the intermediate models, showing that they fail to reproduce the formation of the halo.Concerning the ability of the mutants to cross the nutrient-depleted halo, it is important to note that, right after a dilution, nutrients are abundant everywhere on the plate, including in the area corresponding to the halo. In the absence of toxin-resistant mutations, the growth rate of cells is reduced in that area due to the presence of the toxin, so nutrients that are below the area corresponding to the halo can diffuse away from that region of space and can be taken up by nearby cells at the edge of the resident and invader populations. In the presence of a resistant mutant, however, the growth rate of resistant cells at the edge of the halo is not reduced by the presence of the toxin, and thus these cells are able to take up nutrients before they diffuse away from the halo. This early access to nutrients allows these cells to cross the halo. We have now added a sentence to clarify this point at lines 550-554:;We believe that resistant cells were able to cross the nutrient-depleted region of the halo because, right after the populations were diluted by replica plating, resistant cells could grow and divide despite the presence of the invader\u2019s strain toxin, and they could thus take up nutrients located in that region of space before those nutrients diffused away.\u201dReviewer #2:In this manuscript the authors report interesting results on the population dynamics of a yeast population containing two antagonistic strains. The authors designed the yeast strains to produce toxins in variable amounts, which are regulated by inducible promoters. This enables them to systematically study the role of antagonistic interactions under controlled experimental conditions in different environments, including well mixed liquid cultures and agar surfaces. Specifically, they are investigating under which conditions the \"stronger\" of the two strains is able to invade a population of the \"weaker\" strain. Their main result is that the invasion requires a threshold population size (nucleation threshold) of the stronger strain. While the existence of such a threshold is less surprising (due to the state-dependence of the competition), the quantification of the effect in well-mixed and spatially extended systems is interesting. 
In particular, the authors give a fairly quantitative mathematical description of population dynamics using a rate equation approach for the well-mixed system and a reaction-diffusion model for the spatially extended system. This mathematical analysis provided further insights into the nature of competition. This includes that it is possible to describe the well mixed system in the form of a frequency model using interaction parameters determined from the interaction of the toxin-producing strains with sensitive strains. Furthermore, the reaction-diffusion equations, which explicitly consider both the toxin and the nutrients, seem to capture the formation of a depletion zone between the two antagonistic strains at the start of their interaction.The paper is well written and interesting to read. In principle I am in favour of recommending the manuscript for publication. However, I have a major point of criticism that the authors should consider before I can make final recommendations.When investigating the nature and role of the different mutants found in their studies, the authors do not discuss the possibility of changes in growth rates. While the engineered original strains show the same growth rates, this might not be the case for the mutant strains. This could have a major impact on the dynamics, since, for example, the validity of a pure frequency model depends on identical growth rates. I would ask the authors to measure these growth rates in their mutant strains. Furthermore, I think it should be straightforward to test possible effects of a change in growth rates with their reaction-diffusion model.Thank you for this interesting observation. We have now measured the growth rates of mutants in two ways:oC using a stage-top incubator. The toxin diffusion coefficient is estimated at 0.003 cm2/h, and thus the production of the K2 toxin by strain K2b and its mutants does not affect cells of the of the K1 strain and its mutants at a distance of 1 cm on the same plate (the average distance traveled by a toxin molecule over such time is ). During the 6 h, cells formed micro-colonies of up to 32 cells, with the exception of a few non-dividing or very slow dividing cells that we excluded from the analysis, but whose growth rates are shown in the Figure 6 \u2013 figure supplement 2. T-tests performed between all pairs of strains/mutants gave no statistically significant differences between their growth rates .1. By spreading single cells of each mutant and ancestor cells on a YPD agar plate identical to the ones used in the spatial experiments , with no inducers added. Droplets of each strain/mutant were deposited on the plate on a grid at distances of 1 cm from each other and the location of each strain/mutant within the grid was randomized. Cells on the plate were imaged using an inverted microscope with 50X objective every 20 min for 6 h, during which the plate was kept at 25GAL1P versus strain S2 can be used to directly detect differences in reproductive fitness, because no toxin is produced in the absence of inducer and thus the relative frequency of the two strains varies due to differences in reproductive fitness alone. Mixed competition assays between strains carrying the K2 killer toxin gene induced by the promoter CUP1P versus strain S1, instead, can only detect the joint effect of differences in reproductive fitness and killer activity, given that the promoter CUP1P has non-zero expression even in the absence of its inducer (copper). 
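The plate-based growth-rate measurement described above relies on toxin secreted by one micro-colony not reaching colonies spotted 1 cm away during the 6 h of imaging; the distance estimate is truncated in the text, but it can be recovered from a standard diffusion-length argument. The short calculation below assumes the conventional √(4Dt) estimate, which may differ from the authors' exact prefactor.

```python
# Rough check that toxin from one micro-colony cannot reach a neighbouring
# colony 1 cm away within the 6 h imaging window. sqrt(4*D*t) is used as a
# generic two-dimensional diffusion-length estimate; the authors' exact
# expression is not shown in the extracted text.
from math import sqrt

D = 0.003   # toxin diffusion coefficient, cm^2/h (value quoted above)
t = 6.0     # duration of the time-lapse imaging, h

ell = sqrt(4 * D * t)
print(f"diffusion length ~ {ell:.2f} cm  (well below the 1 cm colony spacing)")
```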
T-tests performed between all pairs of competitions between killer strains/mutants of the same type and the corresponding sensitive strains give no statistically significant differences between the rates at which the relative frequency of killer strains in competition assays vary with time . Although the curves on the left plot are not linear, we note that the curves for strains K2b, K2bo and K2bs overlap. The curve for the resistant strain K2br does not overlap with the others, which may be attributable to a difference in the initial relative frequency of K2br and the sensitive strain S1.2. By growing liquid cultures of each mutant and ancestor strain mixed with a strain sensitive to the toxins produced by the former and expressing a fluorescent protein of the opposite color, measuring the relative density of each strain in pairwise competitions, and diluting daily. These competition assays performed over multiple days and generations can detect smaller fitness differences than the experiments reported above, by measuring the relative frequency of the two strains at different times over multiple generations. Mixed competition assays performed with strains carrying the K1 killer toxin gene induced by the promoter Taken together, these results show that no changes in growth rates are detectable between the strains and mutants isolated at the end of the experiment, and between such strains and their ancestors. We have now included these plots in the revised version of the manuscript as Figure 6 \u2014figure supplements 2 and 3. The fact that no differences in growth rate could be detected between any pair of strains is now mentioned in the main text at lines 439-442:\u201cFigure 6 \u2014figure supplements 2 and 3 reveal that we could not detect any differences in growth rate between any pairs of strains, ruling out the possibility that the altered outcomes of competition observed in Figure 6 could be due to changes in cell division times during the experiment.\u201dReviewer #3:In addition to the technical questions raised above I have a few quibbles about the presentation.\u2013 The authors start using the term \"stronger antagonist\" already in the abstract without giving it even any definition, so the reader has to guess the meaning the best she/he can. Definition comes on line 118, page 8\u2026The definition has now been anticipated to lines 61-63, as there was no space to define it in the abstract:\u201cWe refer to the strain that survives in a 1:1, well-mixed culture as the stronger antagonist and the one that goes extinct as the weaker antagonist. Models based on generalizations of the Lotka-Volterra equations predict that being a stronger antagonist is a necessary, but not a sufficient condition for an invading strain to replace a resident, antagonist population\u201dIt is of course difficult to give a precise definition before having introduced the details of the experimental system, hopefully the adjectives \u2018strong\u2019 and \u2018weak\u2019 in the abstract make at least intuitive sense.\u2013 It would be useful to have a more explicit layout for the paper early on. This reader was wondering about effects of toxin stability and diffusion all through the well-mixed section, without knowing that it is coming later.We have now added an explicit layout at lines 98-105. 
Together with the paragraph at lines 8597 and Figure 1, the layout of the paper should now be clearer."} {"text": "Betsch and Sprengholz address In this study , severalhttps://pollenscience.eu/, using \u201callergy-relevant\u201d pollen thresholds (30 and 100 m\u22123), whose own creator has highlighted that such thresholds may be disputable (https://pollenscience.eu/).Third, based on their data and analysis code (/j5g7n/) , the var/j5g7n/) , as wellhttps://www.augsburg.de/umwelt-soziales/gesundheit/coronavirus/fallzahlen; last access: June 17, 2021), as well as pollen measurements . We found that while the number of tests per week remained relatively stable, the percentage of positive tests peaked between weeks 13 and 19 , with a minor decrease in R2 = 0.16.We do acknowledge with Betsch and Sprengholz that it 3 and 19 . The peaOverall, we agree that the testing is a significant parameter to be taken into account when modeling the spreading of COVID-19, but when integrated into other more significant variables, like airborne pollen concentrations, the magnitude of difference in the explained variability is minimal, only 1%."} {"text": "Dear Editor,We read with great interest the recent article by Moleski et al. , discussn\u2009=\u2009205) and with a new presentation of IBS \u2018type\u2019 symptoms (n\u2009=\u200974). Patients were divided into two groups, group one consisted of individuals presenting with self-reported NCGS. In contrast, group two consisted of individuals with a new presentation of IBS \u2018type\u2019 symptoms. Individuals were characterised according to demographics, in addition to investigative outcomes.It is important to highlight to readers that patients who present with either self-reported NCGS or irritable bowel syndrome (IBS) \u2018type\u2019 symptoms should be investigated for other causes for their gastrointestinal symptoms. In view of this, we reviewed the case notes of patients with self-reported NCGS (n\u2009=\u2009172) and 62% (n\u2009=\u200946) respectively. There was no significant difference in the presenting age between patients . Following investigation, 11.7% (n\u2009=\u200924) of NCGS and 17.6% (n\u2009=\u200913) of patients with IBS \u2018type\u2019 symptoms were found to have other gastrointestinal diagnosis (Tables n\u2009=\u200918) and by comparison for patients with IBS \u2018type\u2019 symptoms it was bile acid diarrhoea (BAD) .We found that patients presenting with NCGS and IBS \u2018type\u2019 symptoms were predominantly female, at 84% n\u2009=\u200972 and 62We conclude that the demographics of patients with NCGS and IBS are similar, with a young female presentation. We found a significant minority of these patients have other GI diagnoses. The data supports the recommendation that patients presenting with self-reported NCGS should only be confirmed as having NCGS after excluding other gastrointestinal disorders. We have also demonstrated from our data that a similar approach is required in patients with IBS \u2018type\u2019 symptoms. One strategy that may help would be the utilisation of the Salerno experts\u2019 criteria in reaching a firm and positive diagnosis of NCGS which would in turn improve the validity of the results published . Whilst"} {"text": "This presentation will describe the creation and findings from an interprofessional curriculum in behavioral health developed by social work faculty for medical students. 
Training in behavioral health is needed more than ever during a time of increased isolation and fear during the COVID pandemic. Older adults with untreated behavioral health concerns are a vulnerable population, which can result in negative effects, including emotional distress, reduced physical health, increased mortality, and suicide . Healthcare is increasingly complex with a need to focus on the physical, social, and behavioral aspects of daily living, and providers are realizing the importance of interprofessional collaboration. Towards that aim, I created a module for 4th year medical students in mental health and older adults, which is now part of their medical education curriculum. I will present outcomes in: (1) satisfaction; (2) acquired knowledge and skills (post-test); (3) application of knowledge and skills (pre-post competency assessment and comfort around asking about depression); and (4) patient outcomes . Feedback from the 143 medical students is positive with 95% strongly agreeing or agreeing that this expanded their knowledge and understanding in mental health issues among older adults. At baseline, 17% of medical students were moderately to very comfortable in asking questions on the GDS compared to 42% at post-assessment. After completing the course, almost 25% of medical students made a referral to social work during their rotation. This collaboration resulted in curriculum that is both rigorous and impactful."} {"text": "High volatility and inertness were predicted for Fl due to the strong relativistic stabilization of the closed 7p1/2 sub-shell, which originates from a large spin-orbit splitting between the 7p1/2 and 7p3/2 orbitals. One unpaired electron in the outermost 7p1/2 sub-shell in Nh is expected to give rise to a higher chemical reactivity. Theoretical predictions of Nh reactivity are discussed, along with results of the first experimental attempts to study Nh chemistry in the gas phase. The experimental observations verify a higher chemical reactivity of Nh atoms compared to its neighbor Fl and call for the development of advanced setups. First tests of a newly developed detection device miniCOMPACT with highly reactive Fr isotopes assure that effective chemical studies of Nh are within reach.Nihonium and flerovium are the first superheavy elements in which the Z \u2265 104 with atomic number Z \u2265 104 . GeneralZ \u2265 104 . As SHE or less . Chemica or less . These ement 118 .7s2 and 7p1/22, respectively and Fl (element 114) due to a large relativistic stabilization of the outermost spherical orbitals ectively . Locatedgroup 13 . The lar -0.7\u00a0eV . Adsorpt -0.7\u00a0eV .2 and Au. Relying on the expected high volatility and weak chemical reactivity, the gas-solid chromatography method was used for Cn and Fl adsorption studies, mainly on Au surface shells in Cn and Fl and Ts (element 117) isotopes, respectively 288Mc has a higher cross section of about 10\u00a0pb at the GSI Helmholtzzentrum f\u00fcr Schwerionenforschung (GSI) in Darmstadt, Germany. The results of all previous studies on Nh chemistry call for further optimization of the existing techniques to facilitate future gas-chromatography experiments on Nh and, possibly, Mc.Nh isotopes for chemistry studies can be produced as decay products following nuclear fusion reactions between ectively . The nucut 10\u00a0pb . The sec studies . Followi243Am or 249Bk targets were irradiated with 48Ca beams. 
Because of the limited availability of the 249Bk target material, most experiments were performed by irradiation of 243Am targets. The main product of those irradiations was 288Mc, produced in the 3n-evaporation channel following the complete fusion reaction. The SHEs of interest were thermalized in a He/Ar gas mixture directly behind the target and transported by the gas flow to a detection setup. The transport line from the chamber, where the ions recoiling from the target were thermalized, to a detection setup was 4\u00a0m or longer in all experiments performed at FLNR. Since the half-life of 288Mc (T1/2 \u2248 170\u00a0ms) is too short for the applied technique, the chemical experiments were focused on adsorption studies of its longer-lived \u03b1-decay product, 284Nh (T1/2 \u2248 1\u00a0s), on a Au surface. The Au-covered PIN photodiodes formed a detection channel, consisting of one or two detector arrays kept either at constant ambient or lower temperatures, or with a negative longitudinal temperature gradient. In all these experiments without pre-separation, an insert made of quartz was installed in the recoil chamber to prevent collisions of thermalized products with the metallic walls. In addition, a hot quartz-wool filter was installed to avoid the transport of non-volatile species by aerosol particles, which can be formed by the intense beam interacting with the recoil chamber back wall.In the past, a series of chemical experiments with Nh were conducted by a collaboration led by FLNR . In tota48Ca ion beam and 243Am (249Bk) recoiling from the target, i.e., primarily 288Mc (294Ts), were thermalized inside a recoil chamber (1) placed directly behind the target or in a Recoil Transfer Chamber (RTC) at the focal plane of the Dubna Gas-Filled Recoil Separator (DGFRS) or of TASCA. Quartz was inserted inside the recoil chamber in the experiments without pre-separation (A), where the beam traversed through the gas before it was stopped in a beam stop (2). To prevent the break-through of aerosol particles, a quartz wool plug heated to 600\u00b0C (3) was installed at the exit of the recoil chamber. In experiments behind a preseparator, 288Mc ions penetrated a window separating the RTC volume from the separator volume. Two detector arrays, (4) and (5), placed in a row were used in the experiments . The thickness of the SiO2 and Au layers on the detector surface was 30\u201350\u00a0nm. The recoil chamber (or RTC) was connected to the detection setup by PTFE capillaries of different lengths kept at different temperatures, as is indicated in The products from the fusion-evaporation reaction between the ents see : A) a Auents see ; B) a Auents see ; C) two 284Nh were found in the first three runs at Dubna using 243Am targets although observation of 10\u201320 decay chains from 284Nh had been expected based on the known efficiency and a total beam integral of 1.35.1019 particles .\u03b1-decay spectra from volatile byproducts of multi-nucleon transfer reactions, the data quality in the experiments without pre-separation hampered the safe identification of Nh and its daughters > 45\u00a0kJ/mol was concluded based on the non-observation simulations using the mobile adsorption mechanism . The time elapsing between the start of each beam pulse and the observed decay-in-flight of Hg isotopes over the first 16 SiO2-covered detector pairs was registered. This measurement allowed determination of the flush-out time distribution. 
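Since the chemistry is carried out with 284Nh (T1/2 ≈ 1 s), the fraction of atoms surviving the flush-out from the recoil chamber follows directly from the flush-out time distribution. The short sketch below illustrates this bookkeeping; the exponential flush-out distribution with a 0.5 s mean is an assumed stand-in for illustration, not the measured distribution.

```python
import numpy as np

def surviving_fraction(flush_times_s, half_life_s):
    """Average of exp(-ln2 * t / T_half) over a sample of flush-out times."""
    lam = np.log(2.0) / half_life_s
    return np.exp(-lam * np.asarray(flush_times_s)).mean()

rng = np.random.default_rng(0)
# Assumed stand-in for the measured distribution: exponential flush-out
# with a mean of 0.5 s (illustrative value only).
flush_times = rng.exponential(scale=0.5, size=100_000)

print("284Nh (T1/2 ~ 1 s)   :", round(surviving_fraction(flush_times, 1.0), 2))
print("288Mc (T1/2 ~ 0.17 s):", round(surviving_fraction(flush_times, 0.17), 2))
```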
The measured flush-out time distribution and the integrated fraction of flushed-out 182,183Hg atoms as a function of time are depicted in 284Nh (T1/2 \u2248 1\u00a0s) due to decay during their transfer from the RTC volume into COMPACT were estimated not to exceed 50%.Besides losses due to irreversible adsorption before reaching the detectors, the rate of observed atoms may be reduced due to nuclear decay during transport. These losses depend critically on the time required for transporting the atoms from the production site to the detector; in our case, the COMPACT arrays. Transit times through TASCA are on the order of 1\u00a0\u03bcs and are negligible compared to the time needed for flushing out the atoms from the RTC to COMPACT. The transport time to the detection setup was measured with short-lived 2 chromatography channel. The travelling time of the weakly-interacting Hg along the 16\u00a0cm-long SiO2-covered detector array was estimated to be 50\u00a0ms at ambient temperature.The measurement of decay-in-flight of Hg isotopes in the first detector array with time and position resolution enabled us to determine the retention time of Hg in the SiO2 and Au surfaces was next targeted. The first attempt at adsorption studies of Nh at TASCA was performed in 2016. A pulsed 48Ca+10 beam from the UNILAC accelerator with a beam energy of 5.47\u00a0MeV/amu (before the Ti target substrate) bombarded a 243Am target wheel, which rotated synchronously with the pulsed beam (243Am target segments were deposited on 2.2(1)\u00a0\u03bcm Ti foils by molecular plating (243Am segments was 0.80(1)\u00a0mg/cm2 of 243Am (isotopic enrichment: 99.7%). A beam integral of 4.4(1)\u00b71018 was collected during the 20-days long irradiation. TASCA was filled with He at a pressure of pHe = 0.8\u00a0mbar and set to a magnetic rigidity of B\u00b7\u03c1 = 2.21\u00a0Tm to center 288Mc recoils in the TASCA focal plane , made of 3.6\u00a0\u03bcm Mylar\u2122 film on a stainless-steel supporting grid with 80% transparency. Recoiling EVRs were then thermalized in a He/Ar (1/1) gas mixture at 1\u00a0bar inside the 48-cm3 large RTC volume and were flushed out to COMPACT detector arrays at a gas flow rate of 2\u00a0L/min. The inner RTC wall was covered with a Teflon\u2122 layer. A 6-cm long PTFE tube (i.d. 4\u00a0mm) connected the RTC volume to the detection setup, so that atoms thermalized inside the RTC encountered only non-metallic surfaces before they entered COMPACT.Building upon successful Fl chemistry experiments at the gas-filled separator TASCA , the gassed beam . Four 24 plating . The aveal plane . The nom2 followed by 16 Au-covered detector pairs. The second COMPACT array consisted of 32 Au-covered detector pairs terminated by a spontaneous fission (SF) event, were registered. This indicates a reactivity of Nh that is higher than that of Fl on a confidence level of >95% for small numbers. Only one SF event consisting of two coincident fission fragments but without \u03b1-decay precursors was detected in the fifth detector pair of the first COMPACT array, on the SiO2 surface. The probability for the detection of a single \u03b1 decay is about 80%. Thus, the probability that the SF event originated from the long decay chain beginning in 288Mc or 284Nh is almost negligible. 
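The statement that the SF event is very unlikely to terminate a chain starting from 288Mc or 284Nh follows from the roughly 80% per-α detection probability quoted above: every preceding α decay would have had to be missed. The chain lengths used below are an assumption, chosen only to show how quickly this probability falls.

```python
# Probability that an SF event belongs to a chain whose preceding alpha decays
# were all missed, given ~80% detection probability per alpha (from the text).
p_detect_alpha = 0.80

def prob_all_alphas_missed(n_alphas):
    return (1.0 - p_detect_alpha) ** n_alphas

# Illustrative (assumed) numbers of preceding alpha decays in the chain.
for n in (1, 2, 3, 4):
    print(f"{n} missed alpha(s): {prob_all_alphas_missed(n):.4f}")
```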
However, it could originate from shorter decay chains, which end by the spontaneous fission of Nh or Rg isotopes; according to the nuclear decay properties given in was determined (2 and H2O) were kept at a level of a few ppm, yet gas-phase reactions between Nh atoms and gas impurities cannot be completely excluded. In that case, the formation of NhOH could be expected, which is predicted to be non-volatile of Nh (7.306\u00a0eV) (IP(Mc) = 5.574\u00a0eV is large5.574\u00a0eV .2-covered detector channel. PTFE is known as a very inert hydrophobic material, and the adhesion of different chemical species on the PTFE surface is very low \u00a0\u03bcm Ti foils and had thicknesses of about 0.5\u00a0mg/cm2. The inner volume of the RTC was separated from the TASCA volume by a 6-\u03bcm thick Mylar\u00ae window. The reaction products recoiling from the target were physically separated using TASCA, and thermalized in a He/Ar (1/1) gas mixture inside the RTC. The measured flush-out efficiency of the reactive alkali metal Fr was compared to that of the volatile and relatively inert metal Hg. While the efficiency for Hg isotopes was expectedly higher (\u223c58%), a substantial efficiency for Fr isotopes (\u223c30%) was determined. The distributions along the detector channel were similar for both elements following the diffusion-controlled deposition on the Au surface. The lower transport efficiency of Fr can be explained by its higher chemical reactivity, resulting in larger adsorption losses on the inner walls of the RTC and in the connecting slit.The transport efficiency from the RTC into the miniCOMPACT was measured with short-lived 284Nh. The non-observation of decay chains from 284Nh in the experiment behind the DGFRS (284Nh or its progenies. These results called for the development of an advanced setup. We therefore developed the new miniCOMPACT detector array, which does not require any transport line between the RTC and the detection setup. Initial tests of this approach with highly reactive Fr isotopes gave an encouraging value of 30% for the absolute transport yield suggesting that a chemical study of Nh and Mc is now within experimental reach.Elucidating the adsorption behaviour of Nh on different surfaces is currently one of the hottest topics in the field of superheavy element chemistry. Nh has one unpaired electron in the valence shell and is, therefore, expected to be chemically reactive. To date, several attempts to study the interaction of Nh with Au were reported from FLNR, Dubna. The chemical results of the first studies without physical pre-separation sufferedhe DGFRS , and in"} {"text": "Hibiscus sabdariffa calyx (HS) water decoction extract is a commonly consumed beverage with various pharmacological properties. This systematic review examines the possible effect of HS intake on immune mediators. The Scopus and PUBMED databases were searched for all human and animal studies that investigated the effect of HS administration on immune related biomarkers. For each of the immune biomarkers, the mean, standard deviation and number of subjects were extracted for both the HS treated and untreated group. These values were used in the computation of standardized mean difference (SMD). Statistical analysis and forest plot were done with R statistical software (version 3.6.1). Twenty seven (27) studies met the eligibility criteria. Twenty two (22) of the studies were used for the meta-analysis which included a total of 1211 subjects. 
The meta-analysis showed that HS administration significantly lowered the levels of TNF-\u03b1 , IL-6 , IL-1\u03b2 , Edema formation , Monocyte Chemoattractant Protein -1 and Angiotensin converting enzyme cascade . The levels of IL-10 , Interleukin 8 , iNOS and C- Reactive Protein , were not significantly changed by HS administration. Some of the results had high statistical heterogeneity. HS may be promising in the management of disorders involving hyperactive immune system or chronic inflammation. Hibiscus sabdariffa (HS) is a shrub that belongs to the Malvaceae family .Hibiscus sabdariffa\u201d OR roselle). The title and abstract of all the resultant search results were then screened for eligibility.Online databases (Pubmed and Scopus) were systematically searched by two researchers reviewing independently (UF and NC) to find articles that documented the impact of HS on immune biomarkers. Date restriction was not applied in the search. The search was conducted between March 20, 2020, and June 5, 2020. The databases were searched with the keywords or other species of Hibiscus were excluded.The following were inputted in the inclusion criteria; (1) original articles published in the English language. (2) Human and animal studies that used HS as an intervention agent in a healthy or disease model. (3) Studies that reported any immune or inflammatory related biomarkers such as Interleukins, Chemokines, Toll-like receptors (TLRs), Prostaglandins, Tumor Necrosis Factor-\u03b1 (TNF-\u03b1), NF-\u03baB, Monocyte chemoattractant protein-1 (MCP-1/CCL2), Inducible Nitric Oxide synthase (iNOS), components of the cyclooxygenase and lipoxygenases pathways, components of the ACE cascade , ACE), inflammatory endpoints such as edema formation, and inflammatory diseases, etc. (4) Studies done using any extract from the calyces of HS. (5) Only studies that included both a control and an HS-intervention group were considered. Data obtained from Duplicate records were deleted, after which two authors reviewed the collated articles to ascertain their suitability for inclusion. Where the two authors had different opinions on the inclusion of an article, the other researchers voted and a consensus was reached. Titles and abstracts of articles were first screened, and publications that did not meet the written criteria were removed. Five independent researchers assessed the full texts of the remaining articles for eligibility. For each of the eligible studies, relevant data were extracted independently by each of the reviewers , into a predesigned data extraction table which consisted of the first author\u2019s last name, year of study, experimental model, HS formulation, dosage/route of administration, analytical method, the inflammatory biomarker investigated, tissue investigated, study sample size and summary of major findings. The data extraction table generated by each reviewer was subsequently compared and collated.Missing or incomplete data was estimated with Cochrane\u2019s review manager software (RevMan 5.3). Statistics and meta-analysis were done with R statistical software (version 3.6.1). A random effect model was assumed and meta-analysis was computed using the DerSimonian-Laird tau estimator. The forest plot was plotted with the \u201cmeta-forest\u201d statistical package using R \u201335. The www.arulerforwindows.com.The forest plot is essential for analyzing the pooled estimate of the mean differences between the control and the HS intervention group across different studies. 
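Although the published analysis was carried out in R with the DerSimonian-Laird tau estimator as stated above, the underlying computation is compact. The fragment below is an illustrative Python sketch of that stated approach, not the authors' code, and the per-study summary numbers in it are placeholders: a Hedges'-g style SMD and its variance are computed for each study and then pooled under a random-effects model.

```python
import numpy as np

def smd(m_trt, sd_trt, n_trt, m_ctl, sd_ctl, n_ctl):
    """Standardized mean difference (Hedges' g) and its variance for one study."""
    sp = np.sqrt(((n_trt - 1) * sd_trt**2 + (n_ctl - 1) * sd_ctl**2)
                 / (n_trt + n_ctl - 2))
    d = (m_trt - m_ctl) / sp
    j = 1.0 - 3.0 / (4.0 * (n_trt + n_ctl) - 9.0)   # small-sample correction
    g = j * d
    var = (n_trt + n_ctl) / (n_trt * n_ctl) + g**2 / (2.0 * (n_trt + n_ctl))
    return g, var

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird tau^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study summaries (mean, SD, n) for HS-treated vs control.
# (If a study reports SEM rather than SD, SD = SEM * sqrt(n).)
studies = [(3.1, 1.0, 10, 5.0, 1.2, 10),
           (2.4, 0.8, 12, 4.1, 0.9, 12),
           (3.8, 1.1,  8, 4.6, 1.0,  8)]
g_and_v = [smd(*s) for s in studies]
pooled, ci = dersimonian_laird([g for g, _ in g_and_v], [v for _, v in g_and_v])
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```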
The overall standardized mean difference and the pooled 95% confidence interval were presented for each biomarker analyzed. A forest plot was carried out on a particular immune-related biomarker only if more than one study reported complete data for that particular biomarker. The effect of HS on the levels of inflammatory biomarkers across healthy and diseased subgroups of the experimental models, as well as across human and rodent models were examined using subgroup analysis. Raw data for mean and standard deviation (SD) were extracted from chart images with the aid of a digital pixel ruler downloaded from For studies that reported standard error of mean (SEM) values, the SD was calculated using SD = (SEM\u2217\u221an).In estimating the pooled effect of HS administration on ACE signaling, data from different members of the pathways were combined in one analysis. For example the Ang II, ACE1, and Ang I receptors were all analyzed under ACE signaling. In studies where more than one dose of the HS extract was given, the dose that elicited the most effect was reported and used for the analyses. Some of the studies recorded some missing data. Such missing data (when possible), was estimated using RevMan 5.3. In some cases, the published data were not reported in forms that can be subjected to analysis with other data. Such data were transformed appropriately before analysis.The total search hits from both databases were 3718 papers (1943 from Scopus and 1775 from PubMed). A total of 78 relevant articles were downloaded after screening the title and abstract of the search results against the eligibility criteria. Duplicates articles (n=19) were removed and the remaining unique articles (n=59) were subjected to full-text screening. One article was excluded on account of being inaccessible. Twenty seven articlesAdministration of HS calyces extract significantly decreased the levels of different pro-inflammatory biomarkers such as TNF-\u03b1 , 40\u201343, HS administration lowered the MCP-1 levels when compared to untreated groups in the studies examined. The pooled SMD showed a significant decrease in MCP-1 Figure 2The meta-analysis 11 showeWe also attempted to compare the immunosuppressive potential of HS in different animal disease models using TNF-\u03b1, IL-6, IL-1\u03b2, MCP-1, and ACE as biomarkers of its flavonoids and anthocyanin contents are rapidly absorbed and metabolized before appearing in the blood plasma as flavonoid conjugates and hippuric acid respectively , 51. AntAs discussed previously, the presence of different bioactive agents in extracts of HS, suggests the possible involvement of several mechanisms in the suppression of immune and inflammatory responses. Suggested mechanisms include the suppression of cellular stress, inhibition of enzymes that can trigger inflammatory reactions, and down-regulation of pro-inflammatory genes.An important mechanism involved in the anti-inflammatory activities of HS is its ability to suppress the generation of oxidative stress and cellular damage in cells. Several pathological conditions are associated with oxidative stress. Increased oxidative stress can cause cellular damages leading to the release of stress factors such as heat shock protein, reactive oxygen species (ROS), necrotic, and apoptotic factors, etc. These factors are capable of activating TLR proteins in immune cells leading to downstream expression of several inflammatory genes . 
Oxidati2O2)-stimulated rat dermal fibroblast , which functions in an opposing manner to both increase and decrease vasoconstriction respectively . ACE1 caACE2, a homologue and regulatory arm of ACE 1, diverts some of the Ang II generated by ACE1 into the production of a potent vasodilatory growth inhibitory peptide, angiotensin , which aInhibition of components of the angiotensin converting enzyme (ACE) cascades by HS extract have been reported in both human and animal studies , 25. TheHS has also been reported to inhibit lipoxygenase and cyclooxygenase enzyme systems , 59. ThiDown-regulation of pro-inflammatory genes and factors is another potential mechanism involved in HS anti-inflammatory activities. Different in-vivo and in-vitro studies have demonstrated the ability of HS or known components of HS to downregulate the expression of pro-inflammatory genes such as NF-\u03baB, TNF-\u03b1, interleukins, iNOS , 31, 44.Microarray gene profiling studies by Chou et\u00a0al. , showed Treatment with HS extract has been reported to down-regulate the activity of NF-\u03baB (p65) subunit in different animal and cell line models , 18, 31.The results from the meta-analysis suggest that HS administration suppressed the levels of IL-1\u03b2 across different experimental models Figure 6The result from the meta-analysis indicates that HS supplementation significantly suppressed the levels of TNF-\u03b1 across the different studies Figure 8Evidence from this meta-analysis shows that HS treatment can significantly suppress the levels of MCP-1 Figure 2The meta-analysis showed that HS administration significantly suppressed the cellular levels of IL-6 in different disease models Figure 6IL-10 is an anti-inflammatory cytokine produced by virtually all immune cells. Its main function is to prevent immunopathology during inflammatory responses . The resInducible nitric oxide synthase (iNOS) synthesizes nitric oxide a regulator of immune responses. Overexpression of iNOS and the consequent over production of nitric oxide can lead to cellular injury including DNA damage . FindingThe suggested mechanism of HS anti-inflammatory activities is outlined in Components of HS extracts can suppress TLR mediated expression of inflammatory mediators by reducing the levels of DAMP molecules and suppressing the expression of TLR. HS extract also suppresses the levels of MCP-1, TNF-\u03b1, IL-1\u03b2, IL-6 resulting in a reduction in the overall inflammatory/immune responses. HS extract can inhibit ACE1 activity as well as reduce the levels of ACE1 and Ang II proteins hereby suppressing downstream activation of AT1 receptor-mediated expression of inflammatory cytokines. HS can also interfere with components of NF-\u03baB cascade thereby further suppressing the expression of inflammatory genes.The mechanism involved in the anti-inflammatory effect of HS extract in different disease models is multifaceted and involves the interaction of different constituents of HS with different cellular targets. It appears that the mechanism that predominates depends on the causative factors of each disease model. Diseases linked to dysfunctional oxidative stress most likely could involve the antioxidant capacity enhancement potentials of HS to restore normal antioxidant levels, bringing about the cessation of the observed inflammation. In pathological conditions caused by localized inflammation, the MCP-1, COX2, and lipoxygenase suppressing properties of HS might come into play to reduce inflammation. 
Reduced expression of MCP-1 will suppress the infiltration of circulating monocytes into inflammatory sites, reducing the extent of the innate immune response at such sites. The ability to inhibit ACE suppresses the possibility of having an enhanced Ang II/AT1 receptor-mediated downstream oxidative dysfunction and inflammatory response. A combination of all these effects could be beneficial for suppressing immune responses in conditions of chronic systemic inflammatory disorders.Apart from anti-inflammatory effects, other beneficial health effects have also been observed following HS supplementation. Using a combination of transcriptomics and metabolomics, Beltr\u00e1n-Deb\u00f3n et\u00a0al. showed tThis systematic review is affected by the various limitations that are characteristics of systematic reviews. Some of such limitations include the fact that only published data that were identified using the search strategies and databases specified in our method section were accounted for in the data synthesis. The possible omission of other relevant studies either because they were not published or because they not were indexed in the databases we consulted cannot be ruled out. The data used in this review were obtained from in-vivo studies in a limited disease model. Furthermore, there were few studies per disease model and in some cases, conclusions were drawn using models from just two studies. Only two disease models were reported for humans (i.e hypertensive and metabolic syndrome), as such, the possible impact of the human disease model on HS efficacy could not be reliably ascertained from this study. The small number of studies (per each biomarker) made it impossible to identify the possible causes of the heterogeneity observed in some of the pooled estimates. The number of studies used to generate the pooled estimate might not be sufficient to entirely reflect the real situation. Finally, most of the data used for this systematic review and meta-analysis were obtained from studies that exposed experimental models to chronic or sub-chronic levels of HS. The instantaneous efficacy of HS administration could not be thus ascertained from this systematic review.This systematic review and meta-analysis revealed that HS extracts possess immunosuppressive capabilities. This finding suggests that HS extracts might be beneficial in the treatment/management of pathological conditions associated with a hyperactive immune system. Further studies should verify the efficacy of HS in suppressing immune responses in autoimmune or chronic inflammatory disease conditions.The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.FU conducted the search, participated in data extraction, ran the statistical and forest plot analyses, designed the graphic abstract, and contributed to the introduction and discussion sections. JU and GB wrote the introduction and materials and methods and undertook data extraction. BE-E participated in data extraction, contributed to the discussion section, and collated the work. NC searched the databases and participated in data extraction. OO mentored the authors, reviewed the draft manuscript, and certified the final manuscript. 
All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "San-Huang-Yi-Shen capsule (SHYS) has been used in the treatment of diabetic kidney disease (DKD) in clinics. However, the mechanism of SHYS on DKD remains unclear. In this study, we used a high-fat diet combined with streptozocin (STZ) injection to establish a rat model of DKD, and different doses of SHYS were given by oral gavage to determine the therapeutic effects of SHYS on DKD. Then, we studied the effects of SHYS on PINK1/Parkin-mediated mitophagy and the activation of NLRP3 inflammasome to study the possible mechanisms of SHYS on DKD. Our result showed that SHYS could alleviate DKD through reducing the body weight loss, decreasing the levels of fasting blood glucose (FBG), and improving the renal function, insulin resistance (IR), and inhibiting inflammatory response and oxidative stress in the kidney. Moreover, transmission electron microscopy showed SHYS treatment improved the morphology of mitochondria in the kidney. In addition, western blot and immunoflourescence staining showed that SHYS treatment induced the PINK1/Parkin-mediated mitophagy and inhibited the activation of NLRP3 signaling pathway. In conclusion, our study demonstrated the therapeutic effects of SHYS on DKD. Additionally, our results indicated that SHYS promoted PINK1/Parkin-mediated mitophagy and inhibited NLRP3 inflammasome activation to improve mitochondrial injury and inflammatory responses. Diabetic kidney disease (DKD), one of the leading diabetic complications, is a chronic structural and functional kidney disorder . Early D\u03b1 axis in renal tissues, thereby regulating mitochondrial biosynthesis and alleviating renal impairment in a mouse DKD model was given by intraperitoneal injection. Animals were fasted for 12\u2009h before the injection but were allowed to drink water. After 72\u2009h, blood was collected from the tail vein to check if blood glucose was \u226516.7\u2009mmol/L, which was used to confirm that the diabetic rat model was successfully constructed. Diabetic rats were housed for another 1 week. After 1 week, the rats were housed in metabolic cages and we performed 24\u2009h of urine protein quantitation. Blood glucose level \u2265\u200916.7\u2009mmol/L and urine protein \u226520\u2009mg/24\u2009h were used as criteria for the successful induction of the DKD animal model [DKD rats were divided into the model group, the SHYS low-dose group, and the SHYS high-dose group, with 10 rats per group. The control group and the model group were given an equal volume of distilled water via intragastric administration. Meanwhile, 0.81\u2009g/kg and 1.62\u2009g/kg of SHYS were given via intragastric administration for 8 continuous weeks to the SHYS low-dose group and the SHYS high-dose group, respectively. The dosages of SHYS were calculated based on human\u2013rat body surface area conversion. During the study, rats were given ad libitum access to water and a standard diet. Fasting blood glucose (FBG) and body weight was measured every two weeks during SHYS treatment.After 8 weeks of SHYS intervention, metabolic cages were used to collect urine output from the groups, which was then centrifuged at 3000\u2009rpm for 10 minutes. 
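The rat doses are stated above to have been obtained by human–rat body-surface-area conversion. A widely used convention for such conversions scales the per-kilogram human dose by the ratio of standard km factors (human ≈ 37, rat ≈ 6, i.e. roughly a 6.2-fold scale-up); the sketch below implements that convention only as an illustration, and the human reference dose in it is a placeholder rather than a value taken from the study.

```python
# Sketch of a body-surface-area dose conversion using standard km factors
# (human km ~ 37, rat km ~ 6).  The human reference dose is a placeholder.
KM_HUMAN, KM_RAT = 37.0, 6.0

def human_to_rat_dose(dose_per_kg):
    """Scale a per-kg human dose to a per-kg rat dose (~6.2-fold)."""
    return dose_per_kg * KM_HUMAN / KM_RAT

human_dose = 0.13  # g/kg per day, hypothetical human dose of the formulation
print(f"rat-equivalent dose ~ {human_to_rat_dose(human_dose):.2f} g/kg")
```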
The urine protein content was measured according to the manufacturer's instructions.Besides, sodium pentobarbital (50\u2009mg/kg) was administered by intraperitoneal injection to anesthetize the rat, and inner canthus blood was collected. The blood was centrifuged at 3,000\u2009rpm for 15 minutes to collect serum. A fully automatic biochemical analyzer measured serum Cr and BUN levels in the various groups.\u03bcL of normal saline and homogenized on ice. Then, the homogenized renal tissue mixture was centrifuged at 3,000\u2009rpm for 15\u2009min and the supernatant was collected to obtain the tissue homogenate. The activities of SOD and GSH-Px and the level of MDA were investigated following the established protocol of test kits. The total protein contents in tissue homogenates were quantified using BCA protein concentration test kits.In addition, 100\u2009mg of renal tissues were weighed and immersed into 900\u2009\u03b2, IL-6 and TNF-\u03b1) in tissue homogenates were measured using ELISA based on the established protocol of the kits. The IR was evaluated based on homeostatic model assessment of IR (HOMA-IR) using the following formula: HOMA \u2212 IR = FBG \u00d7 FINS/22.5. Besides, the total protein content in tissue homogenates were quantified using BCA protein concentration test kits.The fasting insulin (FINS) level in serum and the levels of proinflammatory cytokines (IL-1\u03bcm sections were cut and used for the hematoxylin and eosin (HE), Masson, and Sirius red staining. Pathological changes in rat renal tissues were observed under a microscope. The pathological changes in HE staining were evaluated based on the HE staining score as described previously [Renal tissues from the groups were fixed with a 10% formalin solution for 24\u2009h, washed with water for 20 minutes, followed by dehydration using an alcohol gradient, xylene cleared, and then paraffin-embedded. Five eviously , LAMP2 (1\u2009:\u2009500), Parkin (1\u2009:\u2009500), and VDAC1 (1\u2009:\u2009500) were incubated at 4\u00b0C overnight. The corresponding secondary antibodies were added after washing with PBS three times, followed by DAPI counterstaining for 10 minutes. The sections were then washed with PBS three times and sealed with an antiquenching agent. Images were taken using an immunofluorescence microscope. Image Pro Plus 6.0 was used to quantitate the fluorescence intensity, total area, and mean optical density. The positive area was calculated based on the ratio between mean optical density and total area.Renal tissues were fixed with formalin solution, embedded in paraffin, and cut into 3\u2009\u03bcg sample was added to a 10% SDS-polyacrylamide gel for electrophoresis. The target proteins resolved in the gel were transferred to methanol-activated PVDF membranes. Membranes were blocked in 5% skim milk at room temperature for 2 hours before the antibodies, anti-VDAC1 , TOM20 , COXIV , LC3 , p62 , PINK1 (1\u2009:\u2009500), Parkin , NLRP3 , IL-1\u03b2 , caspase-1 , ASC (1\u2009:\u2009500), IL-18 (1\u2009:\u2009500), and \u03b2-actin , were added and incubated at 4\u00b0C overnight. On Day 2, 1 \u00d7 TBST buffer was used to wash the membrane three times for 10 minutes each. The corresponding secondary antibodies were added, and the membrane was incubated at room temperature for 2\u2009h. 1 \u00d7 TBST buffer was used to wash the membrane three times, 5 minutes each time. The enhanced chemiluminescence method was used to visualize the bands, and \u03b2-actin was the internal control. 
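The HOMA-IR index used above, HOMA-IR = FBG × FINS/22.5, is a one-line computation; the sketch below assumes the usual units behind the 22.5 denominator (glucose in mmol/L, insulin in μIU/mL), and the example values are placeholders rather than measurements from this study.

```python
def homa_ir(fbg_mmol_per_l, fins_uiu_per_ml):
    """Homeostatic model assessment of insulin resistance (HOMA-IR)."""
    return fbg_mmol_per_l * fins_uiu_per_ml / 22.5

# Placeholder values for illustration only (not data from the study).
print(round(homa_ir(fbg_mmol_per_l=18.0, fins_uiu_per_ml=12.0), 2))  # -> 9.6
```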
A gel imaging system was used for imaging. Image J software was used to analyze the gray values of each band to calculate the relative protein expression.Rat renal tissues were lysed on ice for 30 minutes with RIPA lysis buffer. Samples were centrifuged at 12,000\u2009rpm at 4\u00b0C for 10 minutes, and the supernatant was transferred to fresh EP tubes. BCA protein concentration test kit was used to measure the protein concentration in the supernatant. A 20\u2009t-test was used for comparison. One-way ANOVA followed by Tukey's post-hoc test was used for multigroup comparison of means. A difference of P < 0.05 was considered to be statistically significant.The SPSS Statistics 20.0 statistical software was used to analyze the experimental results. Quantitative data are expressed as mean \u00b1 standard deviation, and a P < 0.01, respectively). Low-dose and high-dose of SHYS treatment reduced the body-weight loss and decreased the FBG level in DKD model rats whereas the Cr level was decreased in SHYS low-dose and SHYS high-dose groups and the level of BUN was lower in the SHYS high-dose group (P < 0.05) compared with the model group , low-dose and high-dose of SHYS treatment lowered the HOMA-IR in rats with DKD compared with the control group in renal tissue homogenate were detected using ELISA to evaluate the anti-inflammatory effects of SHYS on DKD model rats. Compared to the control group, the levels of IL-1\u03b2, IL-6, and TNF-\u03b1 were significantly increased in the model group . High-dose of SHYS treatment decreased the levels of IL-1\u03b2, IL-6, and TNF-\u03b1 in the DKD model rats . The activity of SOD was increased in SHYS high-dose group compared to the model group (P < 0.05). Besides, low-dose and high dose of SHYS treatment decreased the MDA level and increased the GSH-Px activity in the DKD model rats , whereas low-dose and high-dose of SHYS treatment decreased the proportion of fragmented mitochondria in rats with DKD . The protein levels of VDAC1 , TOM20 , and COXIV were lower in the SHYS low-dose and SHYS high-dose groups as compared with those in the model group . Compared with the model group, low-dose and high-dose of SHYS treatment increased the protein levels of PINK1 and Parkin in the kidney was decreased and the protein level of p62 (P < 0.05) was increased in the model group compared with the control group. The protein level of LC3-II was higher in the SHYS low-dose and SHYS high-dose groups as compared to the model group and the protein level of p62 was lower in the SHYS high-dose group compared with the model group in the kidney were increased in DKD model rats compared to the rats in the control group . The coexpression of Parkin and VDAC1 , ASC (P < 0.05), cleaved caspase-1 (P < 0.01), mature IL-1\u03b2 (P < 0.05), and mature IL-18 (P < 0.01) were increased in the model group. The protein levels of NLRP3, ASC, and mature IL-1\u03b2 were lower in the SHYS high-dose group as compared to the model group . The protein levels of cleaved caspase-1 and mature IL-18 were lower in the SHYS high-dose group as compared to the model group to Ub substrate or mitochondrial outer membrane proteins. The ubiquitin chains formed can be selectively recognized by the autophagy receptor p62 . p62 is \u03b2, and IL-18 expression in DKD rat renal tissues. Under physiological conditions, NLRP3 within cells is in an autoinhibitory state. Hyperglycemia can cause NLRP3 to recruit ASC and bind to pro-caspase-1 to form an inflammasome and activate caspase-1. 
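The group comparisons described above (independent-samples t-test for two groups, one-way ANOVA followed by Tukey's post-hoc test for several groups) can be reproduced with standard statistical libraries; the sketch below uses simulated placeholder data only and needs SciPy ≥ 1.8 for tukey_hsd.

```python
import numpy as np
from scipy import stats  # tukey_hsd requires SciPy >= 1.8

rng = np.random.default_rng(1)
# Simulated biomarker values (placeholder data, n = 10 rats per group).
control   = rng.normal(1.0, 0.15, 10)
model     = rng.normal(2.0, 0.25, 10)
shys_low  = rng.normal(1.7, 0.25, 10)
shys_high = rng.normal(1.3, 0.20, 10)

# Two-group comparison: independent-samples t-test.
print("t-test, control vs model:", stats.ttest_ind(control, model))

# Multi-group comparison: one-way ANOVA followed by Tukey's post-hoc test.
print("one-way ANOVA:", stats.f_oneway(control, model, shys_low, shys_high))
print(stats.tukey_hsd(control, model, shys_low, shys_high))
```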
Activated caspase-1 can cleave pro-IL-1\u03b2 and pro-IL-18 to produce active IL-1\u03b2 and IL-18, respectively, releasing from cells and worsening renal tissue inflammation.A recent study found that blockade of PINK1/Parkin-mediated mitophagy can induce NLRP3 inflammasome activation, and the activation of PINK1/Parkin-mediated mitophagy can block NLRP3 inflammasome activation . Our resIn summary, our study demonstrated the therapeutic effects of SHYS on DKD. Additionally, our results indicated that SHYS promotes PINK1/Parkin-mediated mitophagy in renal tissues and inhibits NLRP3 inflammasome activation to improve mitochondrial injury and inflammatory responses."} {"text": "In this paper, a deep learning algorithm was proposed to ensure the voice call quality of the cellular communication networks. This proposed model was consecutively monitoring the voice data packets and ensuring the proper message between the transmitter and receiver. The phone sends its unique identification code to the station. The telephone and station maintain a constant radio connection and exchange packets from time to time. The phone can communicate with the station via analog protocol (NMT-450) or digital . Cellular networks may have base stations of different standards, which allow you to improve network performance and improve its coverage. Cellular networks are different operators connected to each other, as well as a fixed telephone network that allows subscribers of one operator to another to make calls from mobile phones to landlines and from landlines to mobiles. The simulation is conducted in Matlab against different performance metrics, that is, related to the quality of service metric. The results of the simulation show that the proposed method has a higher QoS rate than the existing method over an average of 97.35%. The first cellular networks were developed using analog first generation 1G) standards. The most common are NMT and AMPS. Usually, next to the standard name, they write the frequency in megahertz, followed by the frequency range for basic station communication with mobile phones. For example, the basic stations on NMT-450 networks communicate with cell phones on the 450\u2009MHz bandwidth. Working simultaneously on analog standards to ensure that multiple mobile phones in a cell, as well as the base stations of different cells, used multiple frequency multiplexing , which would work in one of the conditions of scarcity of free frequencies G standar. This enCDMA (code segment multi-access) makes extensive use of sophisticated methods of separating radio air between different mobile phones. Also, no matter how many different phones are in one cell, and no matter how many base stations are in the neighborhood, each mobile phone uses the CDMA 2000 1x standard of 1.25\u2009MHz to receive and transmit the entire frequency band (channel) of relatively large width . To distLater, with the advancement of mobile phones and the development of computerization, technologies for GSM networks such as computer data transfer and Internet access were introduced (cm Internet) . The firThe most common data transfer technology is GPRS . GPRS allows multiple mobile phones to use dedicated time intervals simultaneously, using different algorithms for different quality of communication with BS and different workloads of BS . 
Each phInitially, these technologies were used on mobile phones to access the Internet using personal computers, and only later, did further development of mobile phones provide direct Internet access from mobile phones. To obtain information on a mobile phone, WAP technology , which has relatively small requirements for technical specifications , pages cHowever, with the advancement of mobile phones, things changed quickly. First, the need for an intermediate server has disappeared\u2014now the browsers of modern mobile phones do their work independently. Second, the specialized language WML is replaced by the XHTML standard, which differs from the language commonly used on the Internet . It is oThe authors devised a deep learning algorithm was proposed to ensure the voice call quality of the cellular communication networks.The deep learning model consecutively monitors the voice data packets and ensures the proper message between the transmitter and receiver.The main contribution of the paper includes:Carvalho et al. discusseCells that use the same radio channel group can avoid interruptions if they are spaced apart. In this case, frequency reuse is observed. Cheng et al. expresseAccording to Hsu et al. , the covHussien and Sadi explaineEzaki et al. discusseFrom the existing models, it is found that most of the communications are not analyzed at the micro-level and even the micro-level , dynamic channel assignment algorithm (DCAA), integrated structured cabling system (ISCS), and fuzzy probabilistic based semi-Markov model (FPSMM)) solutions provide a computational burden to the entire system. This leads to increased computational cost and communication cost in estimating the solution related to voice call QoS.The UMTS standard is based on W-CDMA technology , somewhat compatible with GSM. The speed of data reception and transmission reaches 1920 kilobits per second.1xEV technology for CDMA2000 networks. Data reception speeds reach 3.1 megabits per second, and transfer speeds reach 1.8 megabits per second.Technologies like TD-SCMA, HSDPA, and HSUPA. It allows you to reach even higher speeds. Until 2006, W-CDMA technologies mostly provided HSDPA support. TD-SCMA is in development.The data transfer rates on second-generation networks are not sufficient to carry out many new functions of mobile communications, especially high-quality video in real-time (videophone), modern optical computer games on the Internet, and others. New standards and protocols have been developed to ensure the required speed:In this way, modern technologies such as mobile communication are not mobile phone technologies like global communication technologies. Microcells are commonly used in densely populated areas. Due to their small range, microcells are less susceptible to transfer degradation effects such as reflection and signal delays. A macrocell integrates with a group of microcells. The microcell serves slow-moving mobile devices and the macrocell serves fast-moving devices. The mobile device can determine the speed of its movement. This makes it possible to reduce the number of hops from one cell to another and adjust the location data.N\u2009=\u2009the number of complete dual cellular channels available in the cluster; A\u2009=\u2009the number of channels in the cell; and D\u2009=\u2009the number of cells in a cluster.When the distance between the mobile device and the base station of the microcell is small, the transition mechanism from one cell to another can be changed. 
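The capacity bookkeeping implied by the definitions above (and by the quantities M, k, and S introduced just below) follows standard hexagonal-cluster planning relations. Because the equation bodies themselves did not survive extraction, the sketch below implements the usual textbook forms, cluster size S = z² + zx + x², channels per cluster N = A·D, and total regional capacity M = k·A·D, as a hedged reconstruction rather than the paper's exact formulas.

```python
# Standard hexagonal-cell planning relations (textbook forms, used here as a
# hedged reconstruction of the garbled equations in the text).
def cluster_size(z, x):
    """Allowed hexagonal cluster sizes: S = z^2 + z*x + x^2 (non-negative z, x)."""
    return z * z + z * x + x * x

def channels_per_cluster(A, D):
    """N = A * D: channels per cell times cells per cluster."""
    return A * D

def total_channels(k, A, D):
    """M = k * A * D: capacity grows with the number of cluster repetitions k."""
    return k * channels_per_cluster(A, D)

# Example: a 7-cell cluster (z = 2, x = 1), 10 channels per cell, repeated 15 times.
D = cluster_size(2, 1)                                        # -> 7
print("cells per cluster :", D)
print("channels/cluster  :", channels_per_cluster(10, D))     # -> 70
print("total channels    :", total_channels(15, 10, D))       # -> 1050
```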
When planning systems using hexagonal cells, the base station transmitters are located at the center of the cell, at the edge of the cell, or at the top of the cell. Cells with a transmitter at the center typically use omnidirectional antennas, and cells with transmitters at the edge or apex typically use sector directional antennas. Consider a system with a fixed number of full-duplex channels in an area. Each service area is split up into groups called \u201cclusters.\u201d Each cluster gets a group of channels that are spread out among the cells of the cluster in different ways.M\u2009=\u2009total number of channels in a given region; A\u2009=\u2009the number of channels in the cell; D\u2009=\u2009the number of cells in a cluster; and k\u2009=\u2009the number of clusters in a given zone.If the cluster is duplicated within the specified service area, then the total number of full-duplex channels is :From the exposures, we can see that the total number of channels in the cellular telephone system is directly proportional to the number of back clusters in a given service area. If the cluster size decreases but the cell size does not change, more clusters will be needed to cover a given service area, and the total number of channels in the system will increase. Due to the absence of neighboring cells in a small service area , the number of subscribers, who can use the same frequencies (channels) simultaneously depends on the total number of cells in the area. The average number of such subscribers is four, but in densely populated areas, the number is much higher.With the planned increase in cellular transport, the increased demand for service is met by reducing the size of the cell, which divides it into several cells, each with its own base station. If the cells are not very small, efficient cell division allows the system to handle more calls. If the cell diameter is less than 460\u2009m, the base stations of neighboring cells will affect each other. The criterion for determining the relationship between frequency reuse and cluster size is the cellular structure as the subscriber density increases.S\u2009=\u2009the number of cells in the S-cluster; z and x\u2009=\u2009non-negative integers ;Execution: direct transfer of control from one base station to another;Shutdown: unwanted network resources are released and available to other mobile devices.The transfer process from one cell to another, i.e., when the mobile device moves from the base station 1 to base station 2, has four main steps shown in Q\u2013the cellular utilization point, R\u2013the range cellular cluster, t\u2013the quantity of signal, uee) was compared with the existing optimal call admission control (OCAC), dynamic channel assignment algorithm (DCAA), integrated structured cabling system (ISCS), and fuzzy probabilistic based semi-Markov model (FPSMM). The entire simulation is conducted in a Matlab environment that runs on a high-end computing system with 8\u2009GB of RAM and an i5 computing processor.The moving vehicle's location is found by the automatic telephone transmission, while the call is sent to the communication channel by the circuit controller. In the cutoff region of the channel location area, all the existing models achieved 63.05%, 71.32%, 63.36%, and 87.24% of channel allocations. But the proposed model achieved 92.08% of the channel allocation. This is possible because the proposed model already predicted the upcoming signal location. 
Hence, this achieves a high allocation rate.When the vehicle leaves the area of the remote base station, the driver cannot use cellular communications. If the call is made on the way to the boundary of the zone, the signal will weaken and eventually disappear completely. Where the cutoff region of the base station management for all the existing models was achieved at 64.90%, 65.48%, 58.43%, and 79.86% of base station management. But the proposed model achieved 93.88% of base station management. This is possible because the proposed model was already predicted for the base station. It is useful for the users to periodically modify the base stations while traveling. Hence, this achieves high base station management. In most communication networks, the transmitter transmits the higher data rate messages through the air medium. This was always at a higher level because, based on the modulation and filter quality, this may be affected. So the data rate from the transmitter was always huge. From the receiver end, the same data rate was received, and then the demodulation process was performed with the help of different filters. Where the cutoff region of the data transfer rate for all the existing models was achieved, 56.23%, 68.98%, 57.61%, and 79.87% of transfer communication between the transmitter and receiver. But the proposed model achieved 92.36% of the channel allocation. This is possible because the proposed model already found the shortest path between two endpoints. Hence, this achieves a high data transfer rate.In general, the communication mediums delay the process. The delay from the transmitter is called the transmission delay, and the delay from the receiving medium is called the receiver delay. So the speed of the transmission and receiving medium are always high for cellular communication. In the cutoff region of the communication speed, all the existing models achieved 63.09%, 64.79%, 54.57%, and 75.54% of channel allocations. But the proposed model achieved 96.70% of communication speed. This is possible because the proposed model already predicted the upcoming destination. Hence, this achieves a high speed.In general, the quality of service in cellular communication is very essential because this is the parameter used to evaluate the service range of the cellular base station. These cellular communication instruments are performed well to increase the quality of service of the network. A modern MTA can be automatically switched to Dictaphone mode by a signal or a given program without its owner's approval. It is not true that every MDA records the owner's speech and voice and then sends the information, but such an opportunity is technically provided in every modern MDA. The proposed LWDLA was compared with the existing OCAC, DCAA, ISCS, and FPSMM. If an action takes place during a show in a theater, it is almost obvious that the gun will explode by the end of the show. So, in this case, MTA has the ability to record and transfer information, and this factor should be taken into account when using your mobile phone. The MTA receives information from the station closest to the cell. In it, information spreads in the air. The MTA interacts with the station at bursts of digital pulse signals called \u201ctime intervals.\u201d The duration of a service contact session can range from fractions of a second to several seconds. 
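The cutoff-region percentages quoted above can be collected into a small comparison; the sketch below simply reuses the reported numbers and averages them per method. The assignment of the four existing-model values to OCAC, DCAA, ISCS, and FPSMM follows the order in which those models are listed in the text and is therefore an assumption.

```python
# Cutoff-region percentages as quoted in the text; column order per metric:
# channel allocation, base-station management, data transfer rate, speed.
results = {
    "OCAC":             [63.05, 64.90, 56.23, 63.09],
    "DCAA":             [71.32, 65.48, 68.98, 64.79],
    "ISCS":             [63.36, 58.43, 57.61, 54.57],
    "FPSMM":            [87.24, 79.86, 79.87, 75.54],
    "LWDLA (proposed)": [92.08, 93.88, 92.36, 96.70],
}
for name, vals in results.items():
    print(f"{name:<17s} mean = {sum(vals) / len(vals):5.2f}%")
```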
In the future, the solutions can be improved by increasing the rate of QoS of voice calls or HD voice calling using artificial intelligence approaches."} {"text": "The model group and the drug group are given intraperitoneal injections of vitamins, and the model group and the drug group are given a high-fat diet. Rats in the low-dose group and high-dose group are given low-dose and high-dose Dahuang Zhechong pill lavage solution, respectively. Besides, the control group is given simvastatin solution by gavage, and intervention is performed once a day for 12 weeks. Ubiquitin (Ub) protein expression, ubiquitin activase (UBE1), nuclear factor-\u03baB, nuclear inhibitory factor-\u03baB (I\u03baB) gene expression, total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL-C), and serum tumor necrosis factor-\u03b1 (TNF-\u03b1) are compared. The experimental result shows that Dahuang Zhechong pills can reduce inflammation and prevent and treat AS by blocking the activation of the UPP/NF-\u03baB signaling pathway and can be used as a proteasome inhibitor in the clinical treatment of AS.In order to investigate the effects of different doses of Dahuang Zhechong pills on the ubiquitin proteasome pathway/nuclear factor- The ubiquitin proteasome pathway (UPP) is a pathway involved in the NF-\u03baB activated signal pathway that plays an important role in regulating the expression of inflammatory factors. Therefore, it is of important reference value to clarify the change characteristics and mechanism of the UPP-NF-\u03baB signaling pathway in AS for the optimization of subsequent clinical targeted therapy [With the change of people's lifestyle and the aggravation of population aging, the incidence rate of cardiovascular disease is increasing year by year. As one of the pathological bases of various cardiovascular and cerebrovascular diseases, the incidence rate of atherosclerosis (AS) has increased significantly. Effective prevention and treatment of AS has important guiding significance for disease control, improving quality of life and reducing the risk of other cardiovascular diseases . The cli therapy . At pres therapy . In rece therapy .\u03baB signaling pathway and its mechanism, aiming to provide data support for subsequent application of paternal treatment in the clinical prevention and treatment of AS.Dahuang Zhechong pill is a prescription from the ancient book Synopsis of the Golden Chamber by Zhang Zhongjing, which can play a good role in removing blood stasis, generating new vitality, and nourishing deficiency. Previous studies have confirmed that Dahuang Zhechong pills can inhibit the activity of and factor signaling pathway, reduce inflammation, and play a certain role in anti-AS, but the specific mechanism has not been clarified in relevant studies. Therefore, this study further explored the effect of different doses of paternal treatment on the UPP-NF-This paper is organized as follows: \u03b1 is one of the polytropic inflammatory factors in AS, which can activate cyclic adenosine monophosphate (cAMP) before AS [\u03b1 is reversely upregulated and activated to promote the capture of a large number of granulocytes in the blood stream, thus inducing a vicious cycle of calcification and inflammation mutually promoting [Oxidative stress, high glucose, and inflammation play an important role in the formation of atherosclerotic calcification. TNF-efore AS . The cAMromoting .\u03baB signaling pathway in several ways. 
For example, the degradation of NF-\u03baB precursor protein requires UPP to generate p50 and p52. The NF-\u03baB signaling pathway can be activated only after ubiquitination and degradation of I\u03baB protein, leading to the high expression of related inflammatory factors and genes. Therefore, inhibition of ubiquitination and degradation of I\u03baB can indirectly inhibit the activation and transcription of the NF-\u03baB signaling pathway, thereby inhibiting inflammatory response and improving arteriosclerosis [As a nonlysosomal protein degradation pathway, UPP is widely present in eukaryotic cells and has a certain dependence on adenosine triphosphate (ATP), participating in and playing a decisive role in the activation of nuclear factor signaling pathways . UPP regclerosis .\u03baB promotes the inflammatory signaling pathway and increases IL-1\u03b2 and macrophage content, thereby increasing serum IL-6 and TNF-\u03b1 concentration, aggravating inflammatory response, and aggravating AS. Inhibition of the NF-\u03baB signaling pathway can reverse the downregulation of contractile protein in some smooth muscle cells and inhibit cell proliferation and secretion. NF-\u03baB may be a key signaling pathway node in the phenotypic transformation of smooth muscle cells and participates in the occurrence and development of AS by regulating the phenotypic transformation of vascular smooth muscle cells [\u03baB was highly activated. Activation of NF-\u03baB and high expression of inflammatory genes were important factors leading to the occurrence and development of AS. In addition, the UPP pathway was highly activated in the aorta tissues of AS rats. It indicates that inhibition of I\u03baB degradation and activation of the NF-\u03baB signaling pathway play an important role in improving AS [. As a statin, simvastatin can inhibit cholesterol synthesis and promote LDL metabolism by upregulating LDL receptors on the cell surface, reducing TC and LDL, and thus achieving anti-AS effects [Scutellaria baicalensis, peach kernel, almond, Rehmannia glutinosa, grubs, Tabanidae, insects, leeks, peony, dried lacquer, and licorice. In terms of pharmacology, they can reduce the plaque area of AS, inhibit the proliferation of smooth muscle cells and collagen fiber proliferation, and then inhibit intima thickening and foam cell formation. Clinical studies showed that Dahuang Zhechong pills can play an anti-inflammatory role and inhibit the expression of TNF-\u03b1 and NF-\u03baB in the AS lesion area. The insects and leeches in the prescription can, respectively, play the roles in reducing blood viscosity, antiplatelet aggregation, and blood lipid [\u03baB in rats with atherosclerosis.NF-le cells , 11. Abnle cells . Yang etle cells showed toving AS . As a st effects . Dahuangod lipid . It is i58-week-old male clean grade Wistar rats were purchased from the Animal Experiment Center of Hebei Medical University for the experiment, weighing 180\u2013200\u2009g, with Certificate No. 1305143 and License No. SCXK(Ji)2016-1-004. 
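The allocation of animals into the five equal groups described next (normal, model, matched/control, low-dose, and high-dose, with 10 rats each) is a simple randomization step. The sketch below shows one reproducible way to perform such an allocation; the animal IDs and the random seed are hypothetical and are not taken from the study.

```python
# Minimal sketch of randomly allocating 50 animals into the five groups
# described next (10 per group). Animal IDs and the seed are hypothetical.

import random

groups = ["normal", "model", "matched", "low-dose", "high-dose"]
animal_ids = [f"rat-{i:02d}" for i in range(1, 51)]

rng = random.Random(42)        # fixed seed so the allocation is reproducible
rng.shuffle(animal_ids)

allocation = {g: animal_ids[i * 10:(i + 1) * 10] for i, g in enumerate(groups)}
for group, members in allocation.items():
    print(group, members[:3], "...")   # show the first few IDs per group
```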
After adaptive feeding for a week, the rats are divided into 5 groups, including the normal group, model set, matched group, LDG, and HDG, with 10 rats in each group.Before the start of the experimental rats, in addition to the normal group, the rest of the group by intraperitoneal injection of rats are given vitamin D360 unit/kg, the experimental group; after the start of normal rats using normal feed, the rest of the group of rats were with 10.2% lard propylthiouracil, 3% cholesterol, 0.5% sodium cholic acid, 5% sugar, and high-fat feed, continuous feeding for 12 weeks.\u22121\u00b7d\u22121 distilled water intragastric administration. The low-dose group and high-dose group are given 0.7\u2009g\u00b7kg\u22121\u00b7d\u22121 and 1.4\u2009g\u00b7kg\u22121\u00b7d\u22121 Dahuang Zhechong pills, liquid intragastric administration, respectively; the control group is given 5\u2009mg\u00b7kg\u22121\u00b7d\u22121 simvastatin liquid intragastric administration, once at 9 am every day. The intervention lasted for 12 weeks.At the beginning of modeling, rats in each group are given corresponding drugs by gavage intervention. The normal group and the model group are given 10\u2009ml\u00b7kgThe expression level of Ub protein is detected by immunohistochemistry. The rat thoracic aorta tissue of 1-2\u2009cm is prepared into wax blocks, and the slices are dewaxed into water. The antigen is repaired by microwave for 10\u2009min and washed repeatedly with phosphate buffered saline (PBS) for 3 times. After 20\u2009min, goat blood is added, cleaned, and sealed for 30\u2009min in a constant temperature environment of 37\u00b0C. After washing, rabbit anti-ub antibody (1\u2009:\u2009200) is added and stayed overnight at 4\u00b0C. After washing with PBS, the secondary antibody working solution is dropped, respectively, and incubated for 30\u2009min in a warm box. After dropping horseradish peroxidase-labeled chain enzyme ovalbumin, the reaction is continued to be incubated for 30\u2009min, and the reaction is terminated by DAB. Known Ub-positive is used as a positive control, and PBS is used as a negative control instead of the primary antibody. Two experienced pathologists independently read and interpreted the radiographs. Five high magnification fields (\u00d7400) are randomly selected. The percentage of Ub-positive cells is calculated according to the brownish yellow particles in the cytoplasm and nucleus, of which 26%\u201350% is 1 point, 51%\u201375% is 2 points, and more than 75% is 3 points. The staining intensity score is 0 for no color, 1 for light brown or yellow mark, 2 for tan, and 3 for tan. Add the two score values, and \u2265 3 is positive and < 3 is negative. Pathologists who read these films selected typical pigmentation sites, excluded background pigmentation and nonspecific pigmentation, and then performed micrographs. Image Pro Plus software is used for gray scale scanning of photos. The gray value is proportional to the protein expression intensity.\u03bcL reverse transcription reaction system is composed of 3.0\u2009\u03bcg total RNA, 0.5\u2009\u03bcg oligo dT, 1.3\u2009\u03bcl dNTP, 40U RNasin, 200U MMLV-RT, 5\u2009\u03bcL5\u2009\u00d7\u2009loading buffer, and appropriate DEPC water. After kept in a waterbath at 37\u00b0C for 1 hour, the cDNA is inactivated at 70\u00b0C for 5\u2009min. The cDNA is stored at\u221220\u00b0C. 
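The composite Ub immunostaining score described above adds a percentage-of-positive-cells score to a staining-intensity score and reads a sum of 3 or more as positive. A minimal sketch of that rule is given below; the only assumption is that fractions of positive cells below the first reported band (26%) score 0, since the text does not state this case explicitly.

```python
# Sketch of the composite Ub immunostaining score described above.
# Assumption: <=25% positive cells scores 0 (the text lists bands from 26%).

def percent_positive_score(pct_positive_cells: float) -> int:
    """Score the fraction of Ub-positive cells (brownish-yellow granules)."""
    if pct_positive_cells > 75:
        return 3
    if pct_positive_cells > 50:
        return 2
    if pct_positive_cells >= 26:
        return 1
    return 0   # assumed: below the lowest band reported in the text

def composite_ub_score(pct_positive_cells: float, intensity_score: int) -> str:
    """Add percentage score and intensity score (0-3); a sum >= 3 is positive."""
    total = percent_positive_score(pct_positive_cells) + intensity_score
    return "positive" if total >= 3 else "negative"

print(composite_ub_score(60, 2))   # positive (2 + 2 = 4)
print(composite_ub_score(30, 1))   # negative (1 + 1 = 2)
```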
The 50\u2009\u03bcL PCR amplification reaction system is composed of 5\u2009\u03bcL cDNA, 1\u2009\u03bcL dNTP, 5\u2009\u03bcL10\u2009\u00d7\u2009loading buffer, 20\u2009\u03bcmol primers, 25\u2009\u03bcL MgCl, 1\u2009\u03bcL Taq enzyme, and DEPC water. UBE1 mRNA, NF-\u03baB mRNA, and I\u03baB mRNA are pre-denatured at 50\u00b0C for 2\u2009min, pre-denatured at 95\u00b0C for 10\u2009min, and then annealed at 60\u00b0C for 60 seconds. The operation is repeated 40 times, with U6 as an internal reference. 10\u2009\u03bcL PCR product is extracted and electrophoretic is performed with 20\u2009gl\u22121 agarose gel. The results are observed by the UV lamp. The absorbance ratio of UBE1 mRNA, NF-\u03baB mRNA, and I\u03baB mRNA is calculated by semiquantitative detection of the optical density scanner. All samples are repeated twice, and the average value is taken.Total RNA is extracted by a one-step method using the Invitrogen TRIzol kit. 25\u2009\u03b1 detection kit .After 12 weeks of administration, the head is cut off and 2\u2009ml blood samples are collected by two test tubes, one of which is centrifuged at 800r/min for 10\u2009min. Plasma is separated by a Roche automatic biochemical analyzer . Measured total cholesterol (TC), triglyceride (TG), and low-density lipoprotein cholesterol (LDL-C). serum is separated by centrifugation at 3500\u2009r/min for 10\u2009min in another test tube, and serum is determined by enzyme linked immunosorbent assay (ELISA) in strict accordance with the TNF-For each rat aorta tissue, made paraffin section, dewaxing in normal operation to the water, with hematoxylin staining for 10 minutes, immersed in water ish, appropriate differentiation is accomplished with 1% volume fraction of hydrochloric acid alcohol and ishing in water, which lasts for 5 minutes with eosin stain, also ish in flowing water for five minutes. The 70%, 80%, 95%, and 100%gradient alcohols are dehydrated, according to the normal steps, and neutral gum is used to seal the tablets. Images are taken under an Eclipse TE 2000-S immunofluorescence microscope purchased from Japan, and morphological changes in rat aorta are observed.x\u2009\u00b1\u2009s) and the t-test is adopted. P < 0.05 indicated that the data are statistically significant.The software that effectively processed the data in the study is SPSS 22.0, which was used to test the normality of the measurement data. The normal distribution of the measurement data is presented in the form of , as shown in \u2217\u201d means prompt comparison model set, P < 0.05.Compared with the model set, the normal group, the matched group, and the HDG had lower lipid levels, and there are statistical differences between the groups , as shown in Compared with the model set, TNF-P < 0.05), as shown in Compared with the model set, Ub protein in the normal group, matched group, and HDG is lower, and there is a statistical difference between groups , as shown in Table 4.Compared with the model set, the mRNA levels of UBE1, NF-\u03baB in rats with AS are investigated. The experimental results demonstrate that Dahuang Zhechong pills can reduce the expression of inflammatory factors and improve AS by blocking the activation process of the UPP/NF-\u03baB signaling pathway. 
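The between-group comparisons reported above are stated as mean ± s with a t-test at P < 0.05. The sketch below reproduces that style of comparison with standard tools on hypothetical total-cholesterol values for two groups of ten rats; the numbers are illustrative only and are not the study's data.

```python
# Illustration of the two-group comparison (mean ± s, t-test, P < 0.05)
# used for the biochemical indices. All values below are hypothetical.

import numpy as np
from scipy import stats

model_group_tc = np.array([3.9, 4.2, 4.5, 4.1, 4.4, 4.0, 4.3, 4.6, 4.2, 4.1])
high_dose_tc   = np.array([2.8, 3.1, 2.9, 3.3, 3.0, 2.7, 3.2, 3.0, 2.9, 3.1])

t_stat, p_value = stats.ttest_ind(model_group_tc, high_dose_tc)
print(f"model set:  {model_group_tc.mean():.2f} ± {model_group_tc.std(ddof=1):.2f}")
print(f"high dose:  {high_dose_tc.mean():.2f} ± {high_dose_tc.std(ddof=1):.2f}")
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"t = {t_stat:.2f}, P = {p_value:.4f} -> {verdict}")
```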
It is recommended that high-dose Dahuang Zhechong pills be promoted as a proteasome inhibitor in the clinical treatment of AS [In this study, the effects of different doses of Dahuang Zhechong pills on the UPP-NF-nt of AS ."} {"text": "Guidelines for management and prevention of contrast media extravasation have not been updated recently. In view of emerging research and changing working practices, this review aims to inform update on the current guidelines.In this paper, we review the literature pertaining to the pathophysiology, diagnosis, risk factors and treatments of contrast media extravasation. A suggested protocol and guidelines are recommended based upon the available literature.\u2022 Risk of extravasation is dependent on scanning technique and patient risk factors.\u2022 Diagnosis is mostly clinical, and outcomes are mostly favourable.\u2022 Referral to surgery should be based on clinical severity rather than extravasated volume.The online version contains supplementary material available at 10.1007/s00330-021-08433-4. Contrast media extravasation (CMEX) is a complication where there is leakage of intravenously administered contrast agents (either iodine or gadolinium-based), into the surrounding soft-tissues . This caCMEXs are thought to be one of the most frequent adverse events in radiology but are much less studied than others such as contrast-associated acute kidney injury \u201310. WhilPrevious guidelines around CMEX from the Contrast Media Safety Committee (CMSC) of the European Society of Urogenital Radiology (ESUR) published in 2002 related to older contrast injection protocols . Since tAuthors prepared seven clinical questions in Patient\u2013Intervention\u2013Comparison\u2013Outcome (PICO) format . A systeRisk of bias of each study included was graded according to National Institutes of Health (NIH) study quality assessment tools. The strength of recommendation of different risk factors, diagnosis, detection and management of CMEX were graded according to OCEBM Levels of Evidence Working Group \u201cThe Oxford 2011 Level of Evidence\u201d plus location of the cannula; more damage occurs with involvement of tight sub-fascial compartments compared to looser subcutaneous layers . An acutConsequently, much research has been based on preventing cannula dislodgement and exploring the impact of variables that increase risk of leakage from the puncture site.With CMEX contrast escapes and infiltrates the interstitium during injection. This is mostly a clinical diagnosis and routine use of imaging is not indicated for its detection.Non-physicians are usually first responders to CMEX, i.e. radiology technicians/radiographers and healthcare support workers. Hence, it is recommended that imaging departments follow a protocol which allows identification of at-risk patients, easy detection of CMEX, awareness of monitoring needs and effective management of extravasation by a wide range of staff . As an eWe summarise three main ways of detecting CMEX in Table Many vendors have developed Extravasation Detection Accessories (EDA) which are sensors that allow detection and interruption of automated injection in the event of an extravasation. Dykes et al. demonstrated the use of EDA resulted in smaller volumes of CMEX when they occurred compared to when no EDA was utilised . HoweverOne quality improvement project and three case-series have investigated EDA use; each assessed a different device \u201327. 
Two Proper documentation of CMEX is crucial ; at the With severe extravasations, imaging documentation may be helpful . Plain rSix studies evaluated the influence of the physical properties of cannulae; none were prospective randomised trials. A pseudo-randomised trial compared between a fenestrated 20G and a non-fenestrated 18G cannula which showed no effect on CMEX rates or volume . FurtherCentral venous catheters (CVC\u2014tunnelled or non-tunnelled), totally implantable vascular access devices, haemodialysis catheters and peripherally inserted central catheters (PICCs) are increasingly used for patients in critical care, on chemotherapy or long-term antibiotics. Of course these patients often require regular cross-sectional imaging with IV contrast. Power injector compatible versions of these have been shown to be safe , with a Prior to the advent of power-injectable CVCs, the Medical and Healthcare Products Regulatory Agency (MHRA) in the UK published recommendations regarding rates and volumes for CM injections. However, these have been withdrawn and current advice is to follow specific manufacturer guidance. Plumb and Murphy recommended a maximum flow rate of 2.0\u00a0ml/s for CVC and to use the distal lumen if a multi-lumen CVC is in place . A CT toSinan et al. did not find any significant difference in extravasation rates in patients receiving CM via power injection vs. manual injection . A studySix studies have assessed rate and volume impact on CMEX. A randomised controlled trial by Kok et al. demonstrated no significant difference in CMEX when assessing different flow rates . Howeverp\u2009=\u20090.0005) [A non-randomised retrospective study found higher rates of extravasation in patients who had ultrasound-guided cannula insertion (3.6%) vs. standard insertion (0.3%). However, the numbers of patients in each group were drastically different\u2014364 in the ultrasound-guided cannula inserted group and 896 in the standard cannula inserted group . The difference in results in this study is likely due to confounding variables; i.e. prior failure of standard insertion and with deeper veins there is potentially a shorter length of intravascular cannula hence greater potential for dislodgement upon injection . The typ\u20090.0005) , despiteThree main contrast media properties have been studied in relation to CMEX: osmolarity, charge and viscosity.2O) and cellular lysis which is thought to be a factor in the degree of tissue damage caused by extravasation. This has only been studied in animal models where old ionic CM was used [Osmolarity\u2014A relationship has been demonstrated between higher osmolarity CM are well tolerated in humans , 50. On Viscosity\u2014This is thought to influence the likelihood of extravasation occurring. A computational fluid dynamics study by Sakellariou et al. demonstrated the potential for increased incidence of extravasation with more viscous CM, especially in CT angiography when performed with smaller peripheral cannulas . A viscoOverall, this suggests that using CM with lower viscosity and/or warming CM and/or diluting/mixing CM with saline has a preventative role in CMEX.There is much less data on rates of CMEX for MRI; the incidence of CMEX is reported as approximately 0.06% with no serious complications described\u2014likely due to low infusion rates and lower CM volumes compared to IBCM uses. 
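The comparison of extravasation rates between ultrasound-guided and standard cannula insertion cited earlier in this section (3.6% of 364 versus 0.3% of 896) reduces to a 2 × 2 contingency test. The sketch below runs such a test with event counts back-calculated from those percentages, so the counts are approximate and the result is only an illustration of the size of the difference, not a re-analysis of the original study.

```python
# Rough check of the cited comparison: 3.6% of 364 ultrasound-guided
# insertions vs. 0.3% of 896 standard insertions. Event counts are
# back-calculated from the percentages, so they are approximate.

from scipy.stats import chi2_contingency, fisher_exact

table = [[13, 364 - 13],   # ultrasound-guided: events, non-events
         [3,  896 - 3]]    # standard insertion: events, non-events

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.2e}, Fisher exact p = {p_fisher:.2e}, OR ≈ {odds_ratio:.1f}")
```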
Theoretically, the extravasation of GBCAs could lead to oedema, necrosis or haemorrhage, potentially exacerbated due to their ionicity and higher osmolarity when compared to IBCM, although this does not seem to be borne out in practice , 48.Based on a meta-analysis of 356,582 patients by Ding et al., females and older patients (>\u200960\u00a0years) are at greater risk of developing CMEX; however, gender and age have no impact on the volume of CM extravasation when it does occur . In-patiIt is suggested based on the risk factors discussed in Table The untreated sequelae of severe CMEX are as follows: increased intra-compartmental pressure (compartment syndrome), with subsequent risk of ischemia due to venous congestion and low arterial gradient causing disproportionate necrosis, severe neurovascular compromise or even limb loss. Treatments to avoid these serious complications can be divided into passive conservative measures and more active therapies Table .Table 5Studies assessing conservative methodologies to treat extravasation events are few and not of robust quality in terms of applicability for radiology, i.e. laboratory-based studies or investigations of extravasation of cytotoxic drugs rather than contrast media. Most recommendations are based on \u201cgood clinical practice\u201d. The aim of conservative measures is to reduce the morbidity associated with CMEX. The evidence base for use of invasive treatments is limited with mostly retrospective studies, small case numbers, lack of control groups, data based on older CM or even not CMEX.Increased swelling or painIncreased rednessChange in sensation in the affected limbSkin ulceration or blisteringIt is important that clear instructions to the patient who has suffered CMEX are given regarding when to seek additional medical care if there are worsening symptoms. A patient information leaflet is recommended, available in the languages appropriate for the institution. Patients should be warned about red flag signs and symptoms which are as follows:In Table Compartment syndrome is the most serious among the consequences of CMEX. It is extremely rare with less than 12 cases reported in a recent literature review however must be suspected if patient complains of severe pain and/or neurovascular compromise . Signs wOverall, the decision to refer for surgical intervention should be a clinical one\u2014i.e. based on red flag signs and symptoms, rather than arbitrary CMEX volumes.This review highlights the key up-to-date evidence pertaining to CMEX summarising the important risk factors and a systematic approach to management. Whilst this review has been all encompassing, there are some limitations. Heterogeneity of the studies included in the paper have made performing a meta-analysis tricky and often difficult to compare data across different types of studies. Most of the studies are retrospective and with rates of CMEX being generally low, there are inadequately statically powered studies. There have been technological changes in CT and MRI, especially with use of non-ionic CM, increased understanding of the risk factors and use of EDAs as well as systems which halt injection if problems are encountered during infusion, all of which inform update on the previous guidelines. However, there remain important areas where further research would be merited. There is no data available on patient experience of CMEX and this is an important impact to explore as the patient may refuse to have CM in the future. 
Long-term follow-up of patients after CMEX and prospective trials with CMEX interventions are also not well-researched. Important questions to ask are appropriate time interval when to re-scan after a CMEX, impact on workflow and cost implications.Supplementary file1 (DOCX 27.1 KB)Supplementary file2 (DOCX 27.3 KB)Supplementary file3 (DOCX 27.5 KB)Supplementary file4 (DOCX 90.8 KB)Supplementary file5 (DOCX 33.3 KB)Below is the link to the electronic supplementary material."} {"text": "Significant evidence links white matter (WM) microstructural abnormalities to cognitive impairment in schizophrenia (SZ), but the relationship of these abnormalities with functional outcome remains unclear.n\u2009=\u200925; SZ-HCP-C2, n\u2009=\u200924) and patients with lower cognitive performance . Healthy controls (HC) were included in both cohorts . We compared fractional anisotropy (FA) of the whole-brain WM skeleton between the three groups by a whole-brain exploratory approach and an atlas-defined WM regions-of-interest approach via tract-based spatial statistics. In addition, we explored whether FA values were associated with Global Assessment of Functioning (GAF) scores in the SZ groups.In two independent cohorts , patients with SZ were divided into two subgroups: patients with higher cognitive performance was positively correlated with GAF score, in C2 the FA of the temporal part of the left IFOF was positively correlated with GAF score.We provide robust evidence for WM microstructural abnormalities in SZ. These abnormalities are more prominent in patients with low cognitive performance and are associated with the level of functioning.The online version contains supplementary material available at 10.1007/s00406-021-01363-8. Diagnostic and Statistical Manual of Mental Disorders (DSM-V) describes a broad range of severities of cognitive impairment in SZ, ranging from intact to severe [Schizophrenia (SZ) is a severe neuropsychiatric disorder that is associated with poor social \u20133 and oco severe . HoweverThe disconnection hypothesis in SZ has been put forward in the form of various disconnection theories, such as disconnection of fronto-temporal regions , 12, forTherefore, we performed a replication study that applied a whole-brain exploratory approach and atlas-defined WM regions-of-interest approach via tract-based spatial statistics (TBSS) in two different cohorts. In the present study, we aimed to robustly identify microstructural differences in WM in patients with SZ subdivided into groups with good and poor cognitive performance. Moreover, we aimed to investigate the relationship between regions-of-interest (ROIs) selected on the basis of exploratory findings and functional outcomes in SZ. We hypothesized that brain WM microstructure in SZ is more severely disturbed in patients with poor cognitive performance and poor functioning.International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10) [Participants in cohort (1) were recruited from the University Hospital, LMU Munich, Germany. This study was approved by the local ethics committee of the University Hospital, LMU Munich (project number: 17\u201313), and written informed consent was obtained from all participants. The patients were diagnosed by two independent, experienced psychiatrists using the criteria of the (ICD-10) . Individ(ICD-10) ) were re(ICD-10) and addi(ICD-10) and leve(ICD-10) . GAF sco(ICD-10) \u201340. 
Part(ICD-10) , 42.n\u2009=\u200925) and LCP group , and cohort (2) comprised 27 HC and 48 patients with SZ subdivided into an HCP group and LCP group .The demographic and clinical characteristics of the study participants in cohorts 1) and (2) are shown in Table and (2) We used neurocognitive instruments that are related to functional deficits in SZ , 30, 43.b values (b\u2009=\u20091000\u00a0s/mm2 and b\u2009=\u20090).All magnetic resonance imaging (MRI) examinations were performed with a 3.0\u00a0T MR scanner with a standard 20-channel phased-array head coil. DTI was performed with 64 non-collinear diffusion-encoding directions and the following parameters: repetition time, 9600\u00a0ms; echo time, 95\u00a0ms; field of view, 244\u00a0mm; voxel size, 2.0\u2009\u00d7\u20092.0\u2009\u00d7\u20092.0\u00a0mm; slice thickness, 2.0\u00a0mm; 65 slices; and multiple diffusion weighting b values (b\u2009=\u20091000\u00a0s/mm2 and b\u2009=\u20090).All MRI examinations were performed with a 3.0\u00a0T MR scanner with a standard 8-channel phased-array head coil. DTI was performed with 12 non-collinear diffusion-encoding directions and the following parameters: repetition time, 6500\u00a0ms; echo time, 96\u00a0ms; field of view, 256\u00a0mm; voxel size, 2.0\u2009\u00d7\u20092.0\u2009\u00d7\u20092.0\u00a0mm; slice thickness, 2.0\u00a0mm; 49 slices; and multiple diffusion weighting DTI data were processed with TBSS programs in the Ft tests for continuous variables and chi-square tests for categorical variables (sex distribution and hand preference), with a significance level of \u03b1\u2009<\u20090.05. Differences in demographic and clinical characteristics between the SZ-HCP and SZ-LCP groups in each cohort were analyzed with independent samples t tests for continuous variables and chi-square tests for the number of patients taking medication, with a significance level of \u03b1\u2009<\u20090.05. Differences in demographic and clinical characteristics between the HC, SZ-HCP, and SZ-LCP groups in each cohort were analyzed by analysis of variance and Bonferroni\u2019s post hoc test for continuous variables and chi-square tests for categorical variables (sex distribution and hand preference), with a significance level of \u03b1\u2009<\u20090.05. Group differences in demographic and clinical characteristics between the SZ groups in cohorts (1) and (2) were analyzed by independent samples t tests for continuous variables (age and duration of school education) and chi-square tests for categorical variables (sex distribution and hand preference), with a significance level of \u03b1\u2009<\u20090.05. All statistical analyses were performed with IBM SPSS statistics 20.In each cohort, differences in demographic and clinical characteristics between the HC and SZ groups were analyzed by independent samples \u03b1\u2009<\u20090.05.Voxel-wise statistics of the skeletonized FA data were applied using randomize in FSL, version 6.0.0. The HC and SZ groups were compared by an analysis of covariance design, with age and sex as nuisance covariates. We randomly performed permutation-based testing with 5000 permutations and inference by threshold-free cluster enhancement (TFCE) with a threshold of less than 0.05. The mean FA values of the whole skeleton in the HC, SZ-HCP, and SZ-LCP groups were examined for differences by analysis of variance and Bonferroni\u2019s post hoc test, with age and sex as covariates, with a significance level of t tests, with age and sex as covariates, with significance set at p\u2009<\u20090.00125. 
(=\u20090.05/40 WM tracts because 20 WM tracts were examined in each cohort). The mean FA values of 20 ROIs in the HC, SZ-HCP, and SZ-LCP groups were examined for differences by analysis of variance and Bonferroni\u2019s post hoc test, with age and sex as covariates and significance set at p\u2009<\u20090.00125 (=\u20090.05/40 WM tracts because 20 WM tracts were examined in each cohort). In the voxels with a statistical difference in the mean FA values of 20 ROIs, voxel-wise multiple regression analyses were performed with TBSS to examine the relationship between FA values and demean GAF scores. We used 5000 permutations to calculate FA values using age and sex as covariates. Spearman's rank correlation test was carried out between the demeaned GAF scores and mean FA values of the voxels that were statistically significant in the voxel-wise multiple regression analysis.For atlas-based segmentation, all extracted skeletons were overlaid with the Johns Hopkins University DTI-based WM Atlas in FSL , 50. Difp\u2009<\u20090.01).Table p\u2009<\u20090.001). No significant differences were found between the SZ-HCP and SZ-LCP groups with regard to the duration of illness, PANSS subscales, GAF scores, chlorpromazine daily dose equivalents, and the number of patients taking an antidepressant or benzodiazepine.In cohort (2), no differences in age, sex, hand preference, or duration of school education were observed between the HC and SZ groups. Age, sex, and hand preference were not different between the HC, SZ-HCP, and SZ-LCP groups, but duration of school education was significantly lower in the SZ-LCP group than in the SZ-HCP group (p\u2009<\u20090.05).Further analyses of demographic and clinical characteristics between the SZ groups in cohorts (1) and (2) are shown in Supplementary Table 1. The SZ group in cohort (2) had significantly higher PANSS negative, general, and total scores than the SZ group in cohort (1) , the FA values in the SZ group were significantly lower than those in the HC group in the left temporal basal areas , the FA values in the SZ group were significantly lower than those in the HC group in widespread regions , FA values in the frontal part of the left IFOF were significantly associated with the demean GAF scores . Second, in the SZ group we found a significant positive relationship between cognition-related FA values in the fronto-temporal part of the left IFOF and the GAF score.Our results are consistent with previous findings which revealed that WM volumes were significantly smaller in patients with SZ with cognitive impairment than in healthy individuals but WM volumes in patients without cognitive impairment were not . MoreoveIn this study, we could replicate in two independent cohorts of patients with SZ, that especially patients with poor cognitive performance were affected by WM microstructural abnormalities. Moreover, cognition-related WM abnormalities, which were mainly found in fronto-temporal parts of the left IFOF, were related to general, social, and occupational functioning.It is worth emphasizing here that in our study FA was measured with different MRI scan parameters in two independent cohorts. A clinical application of structural MRI in clinical trials requires a certain robustness of readouts across different scanners and protocols, despite all biological and technical variability.The advantage of the current study is the replication of our findings in two independent cohorts and thus increases generalizability. 
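The ROI-level statistics described in the Methods reduce to a familiar recipe: compare mean FA per atlas tract between groups at a Bonferroni-corrected threshold of 0.05/40 and correlate cognition-related FA with demeaned GAF scores. The sketch below illustrates that recipe on synthetic FA and GAF values; it is not the TBSS/randomise pipeline used in the study, the covariate adjustment for age and sex is omitted, and the group sizes and effect sizes are invented.

```python
# Sketch of the ROI-level statistics described above: Bonferroni-corrected
# group comparison of mean FA per tract and a Spearman correlation between
# FA and demeaned GAF scores. All values below are synthetic.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tracts = 20                    # 20 JHU atlas tracts per cohort
alpha_corrected = 0.05 / 40      # 40 tracts across the two cohorts

fa_hc  = rng.normal(0.46, 0.02, size=(27, n_tracts))   # healthy controls
fa_lcp = rng.normal(0.43, 0.02, size=(24, n_tracts))   # low-cognition SZ group

p_values = [stats.ttest_ind(fa_hc[:, t], fa_lcp[:, t]).pvalue
            for t in range(n_tracts)]
significant = [t for t, p in enumerate(p_values) if p < alpha_corrected]
print("tracts below the corrected threshold:", significant)

# FA vs. demeaned GAF in the patient group (synthetic association)
gaf = rng.normal(55, 10, size=24)
fa_ifof = fa_lcp[:, 0] + 0.001 * (gaf - gaf.mean())
rho, p_rho = stats.spearmanr(fa_ifof, gaf - gaf.mean())
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```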
Therefore, our results provide robust evidence that emphasizes the relevance of large reductions in FA as indicating a neurobiological mechanism of SZ in patients with severe cognitive impairment. Beyond the group comparison, the future goal is to find a framework for individual patients to allowIn its atlas-based analysis, the current study found a significant positive relationship in both cohorts between cognition-related FA, mainly in the fronto-temporal part of the left IFOF, and demean GAF scores in patients. The IFOF is the longest associative bundle and connects the occipital cortex, superior parietal lobule, and temporal basal areas to the frontal lobe , 56. MorAn interesting finding from our analysis of the whole skeleton is that the FA values in widespread regions were significantly lower in the SZ group than in the HC group in cohort (2) but not in cohort (1). Another interesting finding from our atlas-based analysis is that we found a significant positive relation between the FA values and the demean GAF scores in the temporal part of the left IFOF in cohort (2) but in the frontal part of the left IFOF in cohort (1), i.e., the local regions in the left IFOF could not be replicated in both cohorts. This discrepancy between the independent cohorts might be explained, at least in part, by some factors affecting FA, such as differences in symptom severity for SZ , the useSome limitations of this study must be noted. First, the patients were taking a variety of pharmacological agents not only at the time of scanning, but prior to the scanning . Second, they were evenly divided with regard to the cognitive composite score and could not be classified according to unique score criteria , though previous findings show that approximately one quarter of schizophrenia have similar cognitive performance as healthy . Future The main strength of this study is the replication approach that used independent samples with different MRI parameters. The results provide a foundation for developing the neurobiological basis of cognitive function, which could serve as a functional proxy in SZ and other psychiatric disorders. Of particular importance will be the design of an approach to transform individual changes of the structural and functional connectome towards a clinical application. Here, cognition and functionality for psychiatric disorders will also play an important role across diagnoses.Supplementary file1 (DOCX 28 KB)Below is the link to the electronic supplementary material."} {"text": "The identification of novel molecular systems with high fluorescence and significant non-linear optical (NLO) properties is a hot topic in the continuous search for new emissive probes. Here, the photobehavior of three two-arm bis[(dimethylamino)styryl]benzene derivatives, where the central benzene was replaced by pyridine, furan, or thiophene, was studied by stationary and time-resolved spectroscopic techniques with ns and fs resolution. The three molecules under investigation all showed positive fluorosolvatochromism, due to intramolecular charge-transfer (ICT) dynamics from the electron-donor dimethylamino groups, and significant fluorescence quantum yields, because of the population of a planar and emissive ICT state stabilized by intramolecular hydrogen-bond-like interactions. The NLO properties (hyperpolarizability coefficient and TPA cross-section) were also measured. The obtained results allowed the role of the central heteroaromatic ring to be disclosed. 
In particular, the introduction of the thiophene ring guarantees high fluorescent quantum yields irrespective of the polarity of the medium, and the largest hyperpolarizability coefficient because of the increased conjugation. An important and structure-dependent involvement of the triplet state was also highlighted, with the intersystem crossing being competitive with fluorescence, especially in the thiophene derivative, where the triplet was found to significantly sensitize molecular oxygen even in polar environment, leading to possible applications in photodynamic therapy. In our long-term research project on the push-pull behavior of stilbenoid compounds bearing donor\u2013acceptor (D/A) groups linked by a conjugated bridge (D-\u03c0-A systems) ,3,4,5,6,These push-pull systems show large charge displacement during the excitation and the presence of intramolecular charge transfer (ICT) in the excited state particularly favored in polar solvents.Recently we have studied the photobehavior of distyrylbenzene analogs in which the central benzene was replaced by a heteroaromatic ring , and the lateral benzenes functionalized with strong electron-withdrawing nitro groups in the para position ,26.3 states able to generate strong spin-orbit coupling benzene derivatives spectroscopic techniques coupled with theoretical calculations at the TD\u2013DFT level was performed. The combined experimental and theoretical study allowed the competitive processes deactivating the excited electronic states and their mechanisms to be pointed out, highlighting the effect given by the central heteroaromatic ring . The spectral characterization of the three compounds in solvents of different polarity to derive their hyperpolarizability coefficient through a solvatochromic method is also reported, together with their two-photon absorption (TPA) cross-sections measured by the two-photon excited fluorescence technique.Particular attention was paid to the comparison of the photobehavior of the compounds investigated of the present study, with the symmetric dinitro-derivative analogs and the asymmetric ones that carry both the dimethylamino group on one edge and the nitro group on the opposite side, with the intent to investigate how the replacement of strong electron-withdrawing nitro substituents with strong electron-donor dimethylamino ones can tune the photobehavior and NLO properties in these symmetrical systems.\u22121 cm\u22121. The absorption spectrum shifted to the red at about 60 to 70 nm when replacing the \u03c0-deficient pyridine unit with the \u03c0-rich furan and thiophene rings, respectively, due to the increased degree of conjugation , an, an49], f DMA-QT . TD\u2013DFT f DMA-QT managed tals see showed atals see , a consiCT, between the first excited singlet state and the ground state was determined. The Stokes shift recorded in EtOH was excluded from the plot for both DMA-QP and DMA-QF because of the intramolecular H-type bonds with the solvent affecting spectral positions beside solvatochromism. The \u0394\u03bcCT values match the fluorosolvatochromic behavior of the DMA derivatives; the largest difference was found for the furan-derivative, followed by the pyridine- and thiophene-substituted compounds. 
This information was then manipulated using Oudar\u2019s formula (Equation (2)) for the pyridine and furan derivatives and in the modified equation (Equation (3)) for the quadrupolar DMA-QT compound, to give the dynamic hyperpolarizability coefficients (\u03b2CT) in Tol were found to be comparable to other NLO materials and their trend reproduces the increased conjugation when replacing the central pyridine with the furan and, particularly, with the thienyl unit, as found for other push-pull molecules benzene derivatives under investigation proved very informative about the role of the central heteroatoms in tuning the spectral, photophysical, and NLO properties of this molecular series. In fact, the \u03c0-rich furan and thiophene rings endow the system with an increased degree of conjugation, shifting the absorption spectra towards longer wavelengths as opposed to the absorption of DMA-QP, which instead features a \u03c0-deficient pyridine ring. Conjugation also plays a leading role in determining the entity of the hyperpolarizability coefficients, which were found to be large and comparable when dealing with DMA-QF and DMA-QT, due to the great electron-transport ability of furan and thiophene , and abomophores ,51, as wmophores ,57, analTPA were, however, modest and the TPE spectra did not overlap the OPE profiles. This finding could be explained by taking into account the conformational equilibria between compressed and elongated forms. On the one hand, the compressed and bent configuration is favored because of intramolecular hydrogen-like bonds; on the other hand, the elongated form is expected to show a higher TPA cross-section because of the larger transition dipole moment associated with a straight configuration, as already reported for other systems [Conversely, the ICT character is fundamental for enhancing third order NLO properties, such as TPA abilities, with DMA-QF exhibiting higher TPA cross-sections relative to DMA-QT. The \u03c3 systems , thus coF = 0.20, 0.37, and 0.53 for DMA-QP, DMA-QF, and DMA-QT, respectively, vs. \u03a6F = 0.004, 0.10, and 0.21 for the corresponding dinitro compounds) [T = 0.43 and very significant production of singlet oxygen by sensitization from the triplet. These properties make the dimethylamino-thiophene derivative even more attractive than its nitro-substituted counterpart, where both fluorescence and triplet production were found to reduce with polarity [The presence of ICT dynamics for the three molecules was evidenced by the important fluorosolvatochromism, especially marked for DMA-QF, and the significant spectral evolution probed by fs-resolved ultrafast spectroscopy. However, the lack of excited-state lifetime quenching with solvent polarity, which was instead found to increase for DMA-QP and DMA-QF when passing from a scarcely to a highly polar medium because of the competition with intersystem crossing, points to a planar ICT state which remains significantly fluorescent. The emissive capability in a polar environment differed radically from what has been previously observed for asymmetric styrylbenzene analogs, whose highly dipolar push-pull A-\u03c0-D structure leads to the formation of an ICT state that becomes non-emissive upon increasing polarity . 
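The solvatochromic estimate of Δμ_CT discussed above amounts to fitting the Stokes shift against the solvent polarity function Δf = (ε − 1)/(2ε + 1) − (n² − 1)/(2n² + 1) and reading the dipole-moment change from the slope; β_CT then follows from the two-level (Oudar) expressions quoted in the Methods. The sketch below shows only the Δμ_CT step, written in the usual Lippert–Mataga form and fed with hypothetical Stokes shifts and cavity radius, so the printed value is purely illustrative.

```python
# Sketch of the solvatochromic estimate of the dipole-moment change.
# Usual Lippert-Mataga form: stokes_shift = 2*dmu^2/(h*c*a^3)*delta_f + const.
# Stokes shifts, solvents chosen, and the cavity radius are hypothetical.

import numpy as np

h = 6.626e-27    # erg s (CGS units keep dmu in esu*cm)
c = 2.998e10     # cm/s

def delta_f(eps, n):
    """Solvent polarity function (eps-1)/(2*eps+1) - (n^2-1)/(2*n^2+1)."""
    return (eps - 1) / (2 * eps + 1) - (n**2 - 1) / (2 * n**2 + 1)

solvents = {"toluene": (2.38, 1.497),    # (dielectric constant, refractive index)
            "THF":     (7.58, 1.407),
            "DMF":     (36.7, 1.430)}
stokes_cm = np.array([3400.0, 4600.0, 5600.0])       # cm^-1, illustrative only

df = np.array([delta_f(e, n) for e, n in solvents.values()])
slope, _ = np.polyfit(df, stokes_cm, 1)               # cm^-1 per unit delta_f

a_cm = 5.0e-8                                         # cavity radius, hypothetical 5 Å
dmu_esu = np.sqrt(slope * h * c * a_cm**3 / 2.0)      # esu*cm
print(f"slope ≈ {slope:.0f} cm^-1, Δμ_CT ≈ {dmu_esu / 1e-18:.1f} D")
```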
As for mpounds) , making polarity .DMA-QT, therefore, features a very interesting photobehavior in a polar environment, where its excited state deactivates entirely by fluorescence and intersystem crossing (followed by molecular oxygen sensitization) in an almost 50/50 ratio, making the molecule promising for applications as both an imaging probe and a PDT agent.The synthesis procedure of the DMA-QP compound has been already described in a previous paper , while tE,1\u2032E)-furan-2,5-diylbis)bis.DMA-QF , p-dimethylaminobenzaldehyde , KOAc (1 eq), and acetic anhydride (3 eq) were added. In addition, a catalytic amount of I2 was added and the reaction mixture was left on reflux for 24 h. The reaction mixture was poured into the ice and saturated with aqueous NaOH. The precipitate was formed, filtered, and washed with water. After removal of the solvent, the residue was worked up with water and toluene and dried over MgSO4. The crude reaction product was chromatographed and the pure E,E-isomer of DMA-furan was obtained in the last fractions using petroleum ether\u2013diethylether (5%) mixture as eluent.DMA-QF: yellow solid; isolated yield 21% (32 mg); fR = 0.35 (petroleum ether\u2013diethylether (5%)); m.p. 119\u2013122 \u00b0C; 1H NMR \u03b4/ppm of 7.37 , 6.93 , 6.72 , 6.68 , 6.09 , 2.97 ; 13C NMR \u03b4/ppm of 152.1 (s), 150.8 (s), 149.4 (s), 128.8 (2d), 125.4 (2d), 112.1 (d), 107.5 (d), 107.1 (d), 39.9 (2q); HRMS for C24H26N2O: M+calcd 358.2045; M+found 358.2049.E,1\u2032E)-thiophene-2,5-diylbis)bis).DMA-QT and azobisisobutyronitrile (0.1 eq) were added. The reaction mixture was heated on reflux and irradiated with a halogen lamp (75 W) overnight. After cooling down to RT, the reaction mixture was filtrated to remove succinimide and evaporated. After the removal of the solvent, an extraction with dichloromethane and water was carried out. The extract was dried and concentrated. The crude product was dissolved in benzene and triphenylphosphine (PPh3) in benzene was added. After stirring overnight at RT the precipitate was filtered off and used in the next step after drying. To a stirred solution of obtained phosphonium salt (0.45 mmol) and p-dimethylaminobenzaldehyde (0.45 mmol) in p.a. ethanol, sodium ethoxide was dropwise added . Stirring continued for 4 h at RT. After removal of the solvent, the residue was worked up with water and toluene and dried over MgSO4. The crude reaction product was chromatographed and the pure E,E-isomer of DMA-thiophene was obtained in the last fractions by repeated column and thin-layer chromatography using petroleum ether\u2013diethylether mixture as eluent (5%).DMA-QT: yellow solid; isolated yield 14% (25 mg); fR = 0.30 (petroleum ether\u2013diethylether (5%)); m.p. 128\u2013130 \u00b0C; 1H NMR \u03b4/ppm of 7.36 , 6.99 , 6.84 , 6.82 , 6.70 , 2.99 ; 13C NMR \u03b4/ppm of 146.8 (s), 141.8 (s), 137.9 (s), 127.4 (d), 127.2 (2d), 125.6 (2d), 124.8 (d), 112.5 (d), 40.4 (2q); HRMS for C24H26N2S: M+calcd 374.1817; M+found 374.1814.1H and 13C NMR spectra were recorded on a spectrometer at 600 MHz (3 using tetramethylsilane as a reference. High-resolution mass spectra (HRMS) were obtained on a matrix-assisted laser desorption/ionization time-of-flight MALDI-TOF/TOF mass spectrometer equipped with Nd:YAG laser operating at 355 nm with a firing rate of 200 Hz in the positive ion reflector mode, described in detail in Ref. [The 600 MHz . All NMR in Ref. . 
A totalThe structures of the molecules under investigation are presented in F, experimental error \u00b1 10%) of dilute solutions (A at \u03bbexc < 0.1) were obtained by exciting each sample at the relative maximum absorption wavelength by employing 2-(1-naphthyl)-5-phenyl-1,3,4-oxadiazole (\u03b1-NPD), 9,10-diphenylanthracene and tetracene (\u03a6F = 0.70 [Absorption spectra were recorded with a Cary 4E (Varian) spectrophotometer. Fluorescence emission and excitation spectra were detected using a FluoroMax-4P (HORIBA Scientific) spectrofluorimeter and analyzed by FluorEssence software with appropriate instrumental response correction files. The fluorescence quantum yields were measured in air-equilibrated Tol and DMF solutions using the DMA derivatives as sensitizers. The 1O2 phosphorescence spectra were detected using a spectrofluorimeter FS5 (Edinburgh Instrument) equipped with an InGaAs detector. Phenalenone was employed as a reference compound for comparison purposes [Singlet oxygen quantum yields or HRS (hyper\u2013Rayleigh scattering) techniques, with the advantages of simplicity and the application of conventional steady-state stationary spectroscopy only. As for the case of the EFISH method, the solvatochromic method provides the \u03b2CT dominant contribution, which corresponds to the \u03b2XXX component of the \u03b2 tensor describing the CT transition and being referred to as the frequency of the exciting laser of EFISH allows for direct comparison and good agreement between the results of the two approaches.Even though approximated, the solvatochromic method managed to give a valid estimation of the second order hyperpolarizability coefficients of \u03b2n) expressed as CT = \u03bcE \u2212 \u03bcG) according to Equation (1):\u22121), a is the cavity radius within Onsager\u2019s model, taken as 60% of the calculated diameter along the CT direction resulting from the optimized geometry [h is the Planck constant and c is the speed of light in a vacuum.This method is based on the results of the fluorosolvatochromic behavior. The dependence of the Stokes shift as follows:d and thus \u0394QCT = 4d2(\u0394\u03bcCT)2. In this case, the quadrupole is considered as two opposite dipoles, separated by a distance d between their barycenter, sharing the central heteroatomic ring.In the case of DMA-QT, as also reported for other quadrupolar push-pull systems ,25, anotf) obtained by the integrated absorption band as \u22121) of the bathochromic CT transition, and 0 has been calculated considering Equation (4) [The dynamic hyperpolarizability coefficient was then derived through Oudar\u2019s formula (3)\u03b2CT=\u03b29\u222b\u03b5(v)dv or predi\u22123 J/cm2 of energy fluence and under the magic angle condition, stirring the solution in a 2 mm cuvette (0.5 < A < 1.0 at \u03bbexc = 400 nm) during the experiments to avoid photoproduct interferences. Photodegradation was checked by recording the absorption spectra before and after the time-resolved measurement, where no significant change was observed. The experimental data matrixes were first analyzed using the Surface Xplorer PRO (Ultrafast Systems) software, where it was possible to perform SVD of the 3D matrix to derive the principal components (spectra and kinetics). 
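The singular-value decomposition step mentioned above, used to extract principal spectra and kinetics from the transient-absorption data matrix before global analysis, can be reproduced with standard linear algebra. The sketch below builds a synthetic ΔA(wavelength, time) matrix from two made-up components solely to show how the dominant spectral and temporal profiles are recovered; it does not reproduce the Surface Xplorer or GloTarAn analysis itself.

```python
# Minimal sketch of SVD-based inspection of a transient-absorption matrix
# delta_A(wavelength, time), analogous to the principal-component step that
# precedes global analysis. The data below are synthetic.

import numpy as np

wavelengths = np.linspace(450, 750, 301)     # nm
times = np.linspace(0.1, 1500, 400)          # ps

# two synthetic species: Gaussian spectra and exponential kinetics
spec1 = np.exp(-((wavelengths - 520) / 30) ** 2)
spec2 = np.exp(-((wavelengths - 640) / 40) ** 2)
kin1 = np.exp(-times / 5.0)                                  # fast decay
kin2 = (1 - np.exp(-times / 5.0)) * np.exp(-times / 400.0)   # rise, then decay

delta_A = np.outer(spec1, kin1) + np.outer(spec2, kin2)
delta_A += 0.01 * np.random.default_rng(1).normal(size=delta_A.shape)

U, S, Vt = np.linalg.svd(delta_A, full_matrices=False)
print("leading singular values:", np.round(S[:5], 2))
# U[:, :2]  -> spectral shapes of the two significant components
# Vt[:2, :] -> their time profiles, to be passed to a kinetic (global) fit
```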
Successively, a global analysis using GloTarAn software was performed in order to obtain the lifetimes and the evolution-associated spectra (EAS) of the detected transient.The experimental setup for the femtosecond transient absorption and fluorescence up-conversion measurements have been widely described elsewhere ,24,27. B\u22121) laser flash photolysis (Edinburgh LP980) with a pump pulse centered at 355 nm , coupled with a PMT for signal detection. A pulsed xenon lamp was then used to probe the absorption properties of the produced excited states. The setup was calibrated using an optically matched solution of Benzophenone in AcCN (\u03a6T = 1.0 and \u03b5T = 5200 M\u22121 cm\u22121) [T) were measured by energy transfer experiments from dithienylketone to the DMA-QP and DMA-QF compounds, and from DMA-QF and DMA-QT to the all-trans-\u03b1, \u03c9-di(2-thienyl)octatetraene [T = 0.66 [T = 30,000 M\u22121 cm\u22121 at \u03bbT = 630 nm) [T = 0.71 and \u03b5T = 45,500 M\u22121 cm\u22121 at \u03bbT = 422 nm) [T). All measurements were performed by bubbling the sample with pure N2.Triplet formation quantum yields, transient absorption, and triplet lifetimes were measured using nanosecond (pulse width 7 ns and laser energy < 1 mJ pulse\u22121 cm\u22121) . Triplet 465 nm) . The triT = 0.66 and \u03b5T = 630 nm) and Anthsignal between 410 and 750 nm; idler between 820 and 2200 nm). The fluorescence intensity is detected by a PTM tube (Hamamatsu R2257-Y001), powered by a high-voltage power supply and given in mV by an oscilloscope LeCroy-Wave Runner-LT322 . Solutions of known concentration (about 1 \u00d7 10\u22125 M) in DMF were prepared together with fluorescein in buffered water at pH 11 (\u03c3 = 26 GM at 930 nm) [The same Nd:YAG pump coupled with an optical parametric oscillator was used to perform the two-photon excited fluorescence experiments. The OPO can be tuned to produce radiation in the 410\u20132200 nm range ( 930 nm) as referQuantum mechanical calculations were performed by using the Gaussian 16 package . DFT wit"} {"text": "Organic cation transporter 1 primarily governs the action of metformin in the liver. There are considerable inter-individual variations in metformin response. In light of this, it is crucial to obtain a greater understanding of the influence of OCT1 expression or polymorphism in the context of variable responses elicited by metformin treatment.We observed that the variable response to metformin in the responders and non-responders is independent of isoform variation and mRNA expression of OCT-1. We also observed an insignificant difference in the serum metformin levels of the patient groups. Further, molecular docking provided us with an insight into the hotspot regions of OCT-1 for metformin binding. Genotyping of these regions revealed\u00a0SNPs 156T>C and 1222A>G in both the groups, while as 181C>T and 1201G>A were found only in non-responders. The 181T>C and 1222A>G\u00a0changes were further found to alter OCT-1 structure in silico and affect metformin transport in vitro which was illustrated by their effect on the activation of AMPK, the marker for metformin activity.Taken together, our results corroborate the role of OCT-1 in the transport of metformin and also point at OCT1 genetic variations possibly affecting the transport of metformin into the cells and hence its subsequent action in responders and non-responders.The online version contains supplementary material available at 10.1186/s12902-022-01033-3. 
SLC22A1/OCT-1 is one of the main hepatic-uptake transporters positioned on the sinusoidal membrane of hepatocytes. The human OCT-1 gene has 11 exons and 10 introns. OCT-1 has four isoforms, a long-form that has been reported to be functionally active and three shorter forms that are splice variants of the gene \u20135. PolymIn general, the hypothesis laid down for this study is based on the consequential variability of drug response apropos of polymorphism and gene expression/isoform variation in the OCT1 gene. This study was aimed at identifying the variability in the OCT1 gene and the interindividual differences in response to the drug. This study could provide us with a basic picture of the responder, non-responder ratio in our population as no such study has been done in our region to date. Also, this was a pilot study carried out on newly diagnosed type 2 diabetes patients that could be recruited for over 1\u00a0year only. It was observed that the response to metformin in these patients was independent of isoform variation and mRNA expression of OCT-1, however, some changes in exon 1 and exon 7 were found to have a profound effect on metformin activity.For PCR-based work, materials like PCR buffer and Taq polymerase were purchased from Sigma Aldrich, USA, and dNTPs were from Fermentas, USA. For expression studies, materials like cDNA synthesis kit, SYBR green, and ROX solution were purchased from Fermentas, USA. RNA later was purchased from Sigma Aldrich, USA. For cell culture studies, DMEM, FBS, and metformin were purchased from Sigma Aldrich, USA. LB agar and LB broth were purchased from Himedia; DPnI from Thermo scientific; A769962 from Abcam; Q5 polymerase from New England Biolabs and Plasmid extraction kit from Qiagen. For western blot analysis, phospho-AMPK rabbit monoclonal antibody, total AMPK rabbit monoclonal antibody, and beta-actin rabbit monoclonal antibody were purchased from CST, USA. The anti-rabbit IR DYE 800 antibody was purchased from Li-COR Biosciences, USA.A total of 41 patients clinically diagnosed with type 2 diabetes mellitus were recruited for this study. They were treated with metformin (500-1000\u00a0mg/day) for 3\u00a0months in the Department of Endocrinology, Sher-I-Kashmir Institute of Medical Sciences, Soura, Srinagar, Kashmir. Based on the response to metformin, patients were classified into two groups: responder group (decrease in HbA1c levels by more than 1% from the baseline i.e. 5.7%) and non-responder group (decrease in HbA1c levels less than 1% from the baseline). Ten liver tissues and their respective blood samples were obtained from the Department of Surgical Gastroenterology, Sher-I-Kashmir Institute of Medical Sciences, Soura, Srinagar, Kashmir.5\u00a0ml peripheral blood samples were collected from responders and non-responders. Out of this quantity, 3\u00a0ml was transferred to EDTA vials for RNA and DNA extraction. The rest of the 2\u00a0ml blood was used for serum separation and the serum was stored at -80\u00a0\u00b0C for further analysis. Liver tissues were stored in RNA later at 4\u00a0\u00b0C overnight and then transferred to -80\u00a0\u00b0C till further use.Isoform analysis of OCT-1 was carried out using reverse transcription PCR. 1\u00a0\u00b5g of RNA (extracted by TRIzol method) was reverse transcribed to cDNA using a first-strand cDNA synthesis kit (Thermo scientific). 
The isoform analysis for the liver tissue and their corresponding blood samples as well as across the patient groups was carried out by setting a PCR reaction using 10X PCR buffer, 10\u00a0mM dNTPs, 5U/\u00b5l of Taq polymerase, and 10\u00a0\u00b5M primers for wild OCT-1, isoform-1, isoform-2 and isoform-3 respectively (Table \u2212\u2206\u2206CT). For comparison, OCT-1 expression in blood was calculated using responder sample expression as a reference control. Primers used for qRT-PCR are listed in Table For the expression analysis of OCT-1 mRNA between patient groups (responders and non-responders), real-time quantitative PCR was utilized. The real-time PCR was performed with a light cycler PCR instrument (Applied Biosystems) in a 96 well plate in triplicates using SYBR Green master mix (Fermentas). The ct values of each patient were normalized to GAPDH using the SYBR Green-based comparative CT method 2\u2212\u2206\u2206CT. FoMolecular docking was carried out to analyze the interaction of metformin with OCT-1. OCT-1 protein structure was not available and therefore was generated by template modeling using Swiss Model online software and by generating the best fit structure using iTasser. The structure of metformin was acquired from PubChem. SwissDock and Autodock4 were employed for docking metformin on the OCT1 structure. Lamarckian genetic algorithm with local search (GALS) was used as a search engine, with a total of 100 runs. The dock run was programmed to score for blind docking with a minimum forced planner. Cluster analysis was performed on the docked results using an RMS tolerance of 2.0\u00a0\u00c5. Finally, the more energetically favorable cluster poses were evaluated by using USCF Chimera software.The exons 1 and 7 of the OCT-1 gene were selected for polymorphic/mutational analysis after carrying out docking analysis. Primers were designed against these exons Table S and a PMolecular Dynamics simulations (MDS) were performed on the wild-type protein and mutants using the GROMACS 4.6 platform and the GROMOS CHARMM force field . Before The sequence analysis of the clinical samples was followed by PCR-based mutagenesis in cell lines. This was followed by the transfection of mutant constructs into a suitable model cell line (HepG2). Functional assays were carried out on transfected cells that include the downstream effector signal molecule analysis viz quantification of activated AMPK by western blot.5 polymerase was used to minimize the incorporation of wrong bases. The PCR product was run on 1% agarose gel to confirm the successful amplification. The mutant constructs of OCT-1 were transformed into DH5\u03b1 cells separately. The overnight cultured broth was subjected to plasmid extraction using the QIA PrepR Spin miniprep kit from QIAGEN.We received the OCT-1 wild plasmid as a kind gift from Kathleen M. Giacomini, Department of Biopharmaceutical Sciences, University of California, San Francisco. The concentration of the plasmid was 140.1\u00a0ng/ul and the reference vector/backbone was pcDNA5. The variants R61C (181C\u2009>\u2009T), G401S (1201G\u2009>\u2009A), and M408V (1222A\u2009>\u2009G) were constructed by site-directed mutagenesis using the wild plasmid as a template and setting up PCR reactions using primers listed in Table HepG2 liver carcinoma cells were cultured in Dulbecco\u2019s modified Eagles medium (DMEM) supplemented with 10% (v/v) fetal bovine serum, 50\u00a0\u03bcg /ml of penicillin, and 0.1\u00a0mg/ml streptomycin. 
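As a concrete illustration of the comparative CT (2^−ΔΔCT) calculation described above for the OCT-1 expression analysis, the short sketch below normalizes each Ct value to GAPDH and expresses the result as a fold change relative to a responder reference sample. It is a generic implementation of the standard formula, not the study's own code, and the Ct values are invented for demonstration.

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    # Normalize the target gene to the housekeeping gene in each sample, then
    # compare against the reference sample: 2^-(dCt_sample - dCt_reference).
    delta_ct_sample = ct_target - ct_gapdh
    delta_ct_ref = ct_target_ref - ct_gapdh_ref
    return 2 ** (-(delta_ct_sample - delta_ct_ref))

# Example with made-up Ct values: a non-responder sample against a responder reference.
print(fold_change_ddct(ct_target=26.1, ct_gapdh=18.0,
                       ct_target_ref=25.4, ct_gapdh_ref=17.9))
# ~0.66-fold relative OCT-1 expression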
The cells were kept in a humidified atmosphere of 5% CO2/95% air at 37\u00a0\u00b0C. Further, the HEPG2 cells were transiently transfected with the vector, wild, and mutant OCT-1 constructs using polyethyleneimine (PEI). They were then treated with 20\u00a0\u00b5M of metformin and 100\u00a0\u00b5M A769962. Finally, the cells were processed for RNA extraction, and protein extraction, while the expression studies were monitored by western blotting.Western blotting was used for analyzing pAMPK expression in metformin-treated and untreated HEPG2 cells expressing vehicle control, reference OCT-1 and OCT-1 mutant constructs. Briefly, cells were harvested, and lysed in RIPA buffer for protein extraction. The protein concentrations were determined by the Bradford method. Equal amounts of protein from whole-cell lysates were separated on SDS\u2013polyacrylamide gel (10\u201312%) and then electrophoretically transferred onto the PVDF membrane. Blots were incubated with primary antibodies against phospho AMPK/pAMPK (CST), total AMPK/tAMPK (CST), and \u03b2-actin (CST) at 4\u00a0\u00b0C overnight The dilutions used were 1:1000 for pAMPK, 1:1000 for tAMPK and 1:5000 for \u03b2-actin respectively. After washing, the blots were incubated with appropriate IR-tagged secondary antibodies (Licor Biosciences) used at a dilution of 1:10,000 for 1\u00a0h at room temperature. The blots were then scanned in an infrared image scanner (Licor Biosciences). .The samples used for the analysis were the serum samples obtained from patients (10 responders and 10 non-responders) who were on 1000\u00a0mg of metformin. The analysis was performed on a Shimadzu UFLC chromatographic system equipped with an LC-20AD solvent delivery pump, PDA SPD M-20A detector, DGU-20A degasser, SL-20AHT autosampler, and CTO-10AS column oven. A Merck high resolution chromolith RP-18e column was used and the mobile phase consisted of 80% acetonitrile: 20% water containing 0.1% formic acid. Analyses were run at a flow rate of 0.5\u00a0ml/min. The peak height ratios of six calibration standards (0.02\u20130.8\u00a0\u00b5g/\u00b5l for responders and 0.01\u20130.4\u00a0\u00b5g/\u00b5l for non-responders) were used to establish the standard curves and the standard curves were linear was used for statistical analysis. Student-t test, direct gene count method, Chi-square test, Fischer's exact test, and multivariate analysis were used wherever applicable. Differences were considered statistically significant when the 'n\u2009=\u200925) and non-responders (n\u2009=\u200916). The groups did not differ significantly in age . Of all the participants, 15 were males and 10 were females in responders whereas 8 were male and 8 were female in non-responders. Values of the study parameters based on responders and non-responders are presented in Table In this study, subjects were divided into two groups: responders n\u2009=\u20092 and non-ers n\u2009=\u20091. The grop\u2014value being\u2009>\u20090.05 whereas the genotype 1222 GG was found to be significantly associated with the responders (p\u2009<\u20090.05). As far as allele frequencies are concerned, 181\u00a0T allele and 1201A were found to be significantly associated with the non-responders (p\u2009<\u20090.001 and p\u2009<\u20090.05 respectively). However, the 1222 G allele was found to be significantly associated with responders (p\u2009<\u20090.05).Molecular docking revealed the hotspot regions for metformin binding on OCT-1 protein and these regions majorly spanned exon 1 and exon 7. 
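Fisher's exact test is listed among the statistical tests used in this study; a minimal sketch of how it would be applied to the allele-frequency comparisons reported above (for example, the 181 T allele in responders versus non-responders) is shown below. The allele counts are hypothetical, chosen only so that each of the 25 responders and 16 non-responders contributes two alleles.

from scipy.stats import fisher_exact

# Hypothetical 2x2 table of allele counts (rows: patient group, columns: allele).
#                 181 T   181 C
table = [[ 2, 48],        # responders:      2 x 25 = 50 alleles in total
         [12, 20]]        # non-responders:  2 x 16 = 32 alleles in total

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4g}")
# An odds ratio well below 1 here means the T allele is under-represented in
# responders, i.e. associated with non-response, mirroring the direction reported above.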
Sequencing of the exon-1 and exon-7 PCR products Figs. S and 10STo assess the effect of the reported SNPs on the OCT-1 protein structure and conformation, molecular dynamic simulation of 181C\u2009>\u2009T (p.Arg61Cys) and 1201G\u2009>\u2009A (p.Gly401Ser) variants identified in non-responders leading to Arginine-Cystine and Glycine-Serine substitution was carried out. The energy-minimized structures were subjected to two independent 20\u00a0ns runs under the GROMOS CHARMM force field variant showed the highest SASA followed by the (p.Gly401Ser) variant Fig.\u00a0. The wilThe wild-type OCT-1 gene was subjected to site-directed mutagenesis to create the mutant constructs having changes that we obtained after genotyping of patient samples. These mutant constructs generated were confirmed by sequencing of the PCR products Fig. S, 13S.AMPK functions as a ser/thr kinase whose activity is known to be altered by metformin. Owing to this fact, we choose to study the transport of metformin via OCT-1 into the cells and considered activation of AMPK as an indicator of metformin activity. Therefore, protein expression of phospho-AMPK and total AMPK was checked to analyze the effect of the mutations in OCT-1 on the transport of metformin. As metformin is known to activate AMPK, therefore, we carried out proteomic studies for the same. We observed that mutant 181C\u2009>\u2009T and mutant 1201G\u2009>\u2009A had a profound effect on AMPK activation/ phosphorylation. The activation of AMPK was indicated by increased expression of phospho-AMPK and vice versa. We found that variants 181 C\u2009>\u2009T and 1201 G\u2009>\u2009A show decreased expression of phospho-AMPK when compared to the wild OCT-1 counterpart. However, the expression of total AMPK was not affected by any of these changes in OCT-1. Before the metformin treatment and subsequent experiments, the effect of the AMPK activator A769962) was also observed in these cells, to confirm that the effects observed for metformin-treated cells were exclusively due to metformin only .We performed HPLC analysis to evaluate the levels of metformin in the serum of patients from both groups. The mean\u2009\u00b1\u2009SD values for responders and non-responders were 0.11\u2009\u00b1\u20090.02 and 0.14\u2009\u00b1\u20090.06 respectively Table S. Althou. Further, we also found a higher number of people taking PPI in the non-responder group (p\u2009<\u20090.05). Our observation is further supported by the study that identified PPIs as an important drug class inhibiting OCT-mediated metformin transport [p\u2009=\u20090.001). This further clarifies that non-responders do not sufficiently respond to metformin even at higher doses of metformin. There is an increase in HbA1c levels of non-responders as compared to responders, which also comes as a part of studies elsewhere wherein it has been reported that higher doses of metformin lead to better glycemic control in patients with diabetes [Our study was a pilot study, that focussed on patients suffering from diabetes to find the underlying difference between the cohort of patients that were put on metformin medication but were unable to respond to the treatment viz a viz patients who responded to metformin therapy. These patients were divided into responders and non-responders based on their response to metformin which was assessed by the decrease in HbA1c levels in them. HbA1c level has been assessed as a marker of treatment response in patients with diabetes in many studies \u201312. In transport . 
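The serum metformin concentrations compared above are obtained from the linear peak-height-ratio calibration described in the HPLC methods. A minimal sketch of that calibration step, assuming simple unweighted least squares, is given below; the ratio values are invented, and only the 0.02–0.8 µg/µl standard range comes from the text.

import numpy as np

conc = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])       # standards, ug/ul
ratio = np.array([0.09, 0.21, 0.42, 0.83, 1.65, 3.32])  # mock peak height ratios

slope, intercept = np.polyfit(conc, ratio, deg=1)        # linear standard curve
r_squared = np.corrcoef(conc, ratio)[0, 1] ** 2          # check linearity

def to_concentration(peak_ratio):
    # Back-calculate an unknown serum concentration from its peak height ratio.
    return (peak_ratio - intercept) / slope

print(f"r^2 = {r_squared:.4f}, unknown sample = {to_concentration(0.55):.3f} ug/ul")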
In our diabetes , 19. To Further, we studied the isoform pattern between responders and non-responders. We hypothesized that the difference in the response between responders and non-responders might be due to the difference in the type of isoforms between the two groups that could have led to variation in the response due to differential uptake of metformin. Since the four isoforms of OCT-1 were reported from liver tissue [invitro [P\u2009=\u20090.020) in the Han Chinese population [To get a clearer picture of varying metformin responses between the two groups of patients we further moved on to analyze the hotspot domains of OCT-1 protein, likely responsible for metformin binding and transport across the cell. We found that among the other metformin binding regions predicted by docking it on OCT-1 protein model, three regions showed a high docking score Table S. Out ofinvitro . A studyinvitro . The varinvitro , 22. By invitro , Indian invitro , Caucasiinvitro and Japainvitro , 26 popuinvitro . When repulation . We furtp\u2009>\u20090.05). We next moved on to performing the invitro functional assays to examine the effects of the variations reported in our study, on metformin transport. We generated 181 C\u2009>\u2009T, 1201 G\u2009>\u2009A, and 1222 A\u2009>\u2009G mutants using site-directed mutagenesis. These were transfected into HepG2 cells and experiments were set up using metformin and A769962 (AMPK activator). We assessed the effect of metformin in these mutants by looking at the phosphorylation status of pAMPK, which is known to be a functional reporter or marker for metformin activity. While mutant 1222 A\u2009>\u2009G did not affect the pAMPK levels, 181 C\u2009>\u2009T and 1201 G\u2009>\u2009A led to a decrease in pAMPK expression as compared to reference wild-type OCT-1. Concomitant to our results, Shu et al. also reported seven SNPs in OCT-1 including 181 C\u2009>\u2009T and 1201 G\u2009>\u2009A variants that exhibit reduced transport of metformin [We also carried out an estimation of metformin levels in serum by HPLC to find the difference in metformin levels between responders and non-responders. The serum metformin levels were slightly higher in non-responders than responders although the association was statistically insignificant (Our study points out the importance of OCT-1 and its variants in the transport of metformin into the cells and hence its subsequent action. Molecular docking analysis revealed the role played by exon-1 and exon-7 in the binding of metformin to OCT-1 The SNPs associated with these hot spot binding regions of OCT-1 were found to affect the transport of metformin in\u00a0vitro. Our study also concluded that the variation in the metformin response between the two groups of patients i.e. responders and non-responders is independent of isoform variation and mRNA expression of OCT-1 further hinting that the changes in the OCT-1 gene lead to this varied response to metformin in our patient groups. Furthermore, variations in non-responders were observed to oppose the structural compactness of the protein, thus leading to its compromised and aberrated conformation that may impede the OCT-1 function and hence metformin uptake. However, this work was a pilot study and therefore further studies are needed to be carried out on a larger cohort of patients to get a clearer picture of the OCT-1 and metformin relation. 
Also, further studies are warranted to examine the effects of the reported SNPs on the protein folding, substrate binding, and substrate selectivity of the OCT-1 transporter.Additional file 1.Additional file 2.Additional file 3."} {"text": "Streptococcus pyogenes (Sp) Cas9 revealed through these techniques.Cas9 is an RNA-guided endonuclease from the type II CRISPR-Cas system that employs RNA\u2013DNA base pairing to target and cleave foreign DNA in bacteria. Due to its robust and programmable activity, Cas9 has been repurposed as a revolutionary technology for wide-ranging biological and medical applications. A comprehensive understanding of Cas9 mechanisms at the molecular level would aid in its better usage as a genome tool. Over the past few years, single-molecule techniques, such as fluorescence resonance energy transfer, DNA curtains, magnetic tweezers, and optical tweezers, have been extensively applied to characterize the detailed molecular mechanisms of Cas9 proteins. These techniques allow researchers to monitor molecular dynamics and conformational changes, probe essential DNA\u2013protein interactions, detect intermediate states, and distinguish heterogeneity along the reaction pathway, thus providing enriched functional and mechanistic perspectives. This review outlines the single-molecule techniques that have been utilized for the investigation of Cas9 proteins and discusses insights into the mechanisms of the widely used By measuring the resonance energy transfer efficiency studies the surface structure and properties of samples by detecting the extremely weak interatomic interaction between the sample surface and a miniature force-sensitive sensor , whereas PAM-distal mismatches still allow for the stable binding of the complex to DNA targets. Specifically, 9\u201310 PAM-proximal matches are sufficient for ultrastable SpCas9\u2013gRNA binding. Moreover, as the dwell-time analysis shows two characteristic binding times, a two-step mechanism of Cas9\u2013RNA binding involving PAM surveillance and RNA-DNA heteroduplex formation (see the next section) was proposed . Fully unwound protospacer DNA coupled with full R-loop formation possibly drives the docking of the HNH domain, thus licensing cleavage-competent SpCas9 (see the following section \u201cDNA dissociation after cleavage\u201d). Modifications of gRNA or the engineering of SpCas9 could rebalance the unwinding-rewinding equilibrium and make it stricter to reach the cleavage-competent state, thus minimizing off-target effects. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al.SpCas9 is a multidomain DNA endonuclease. Structures of SpCas9 showed two distinct lobes, the alpha-helical recognition (REC) lobe and the nuclease lobe (NUS), as well as the more variable C-terminal domain (CTD) , the 3\u2019 flap generated by the cleaved NTS is possibly exposed and can be digested by exonucleases ( et al.One distinguished characteristic of SpCas9 is its stable binding to the on-target site after cleavage. Both ucleases (Wang et et al.. Therefo et al.in vitro, could facilitate the dissociation of DNA-cleaved SpCas9 from DNA. Zhang et al. used optical tweezers to examine the consequence of encountering a BLM helicase with a DNA-bound dSpCas9 from both sides ( et al. et al. et al. et al.The long lifetime of the SpCas9\u2013gRNA\u2013DNA complex limits the efficient usage of each SpCas9 protein and impairs the repair of DSBs (Clarketh sides . 
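The dwell-time analysis mentioned earlier in this review, which revealed two characteristic binding times for the SpCas9–gRNA complex, is typically done by fitting a double exponential to the empirical dwell-time survival curve. The sketch below illustrates that idea on synthetic data; it is a generic analysis rather than the original authors' code, and the time constants are arbitrary.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Synthetic dwell times from two populations: 70% short-lived, 30% long-lived.
dwells = np.concatenate([rng.exponential(2.0, 700), rng.exponential(40.0, 300)])

t = np.sort(dwells)
survival = 1.0 - np.arange(1, t.size + 1) / t.size       # empirical survival S(t)

def double_exp(t, a, tau_fast, tau_slow):
    return a * np.exp(-t / tau_fast) + (1 - a) * np.exp(-t / tau_slow)

(a, tau_fast, tau_slow), _ = curve_fit(double_exp, t, survival, p0=[0.5, 1.0, 20.0])
print(f"fast fraction {a:.2f}: tau_fast {tau_fast:.1f} s, tau_slow {tau_slow:.1f} s")
# Two well-separated time constants indicate two characteristic binding modes.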
They prAs evident from this review, single-molecule studies provide not only a fundamental understanding of Cas9 mechanisms but also a framework for rational design aiming at improving Cas9 efficiency and minimizing off-target effects. Based on these studies, a detailed dynamic picture of DNA interrogation and cleavage of SpCas9 has been generated . Upon coQian Zhang, Ziting Chen and Bo Sun declare that they have no conflict of interest."} {"text": "Animal behavior can be difficult, time-consuming, and costly to observe in the field directly. Innovative modeling methods, such as hidden Markov models (HMMs), allow researchers to infer unobserved animal behaviors from movement data, and implementations often assume that transitions between states occur multiple times. However, some behavioral shifts of interest, such as parturition, migration initiation, and juvenile dispersal, may only occur once during an observation period, and HMMs may not be the best approach to identify these changes. We present two change-point models for identifying single transitions in movement behavior: a location-based change-point model and a movement metric-based change-point model. We first conducted a simulation study to determine the ability of these models to detect a behavioral transition given different amounts of data and the degree of behavioral shifts. We then applied our models to two ungulate species in central Pennsylvania that were fitted with global positioning system collars and vaginal implant transmitters to test hypotheses related to parturition behavior. We fit these models in a Bayesian framework and directly compared the ability of each model to describe the parturition behavior across species. Our simulation study demonstrated that successful change point estimation using either model was possible given at least 12\u00a0h of post-change observations and 15\u00a0min fix interval. However, our models received mixed support among deer and elk in Pennsylvania due to behavioral variation between species and among individuals. Our results demonstrate that when the behavior follows the dynamics proposed by the two models, researchers can identify the timing of a behavioral change. Although we refer to detecting parturition events, our results can be applied to any behavior that results in a single change in time.The online version contains supplementary material available at 10.1186/s40462-023-00430-0. Knowing the vital rates for wildlife populations of management or conservation concern is critical for determining the best management actions and assessing their outcomes. For example, vital rates can inform quotas and license sales , 40 and Parahyaena brunnea) isolating themselves from the group to give birth in natal dens [Megaptera novaeangliae) moving away from the pod during the birthing process [Rangifer tarandus) are lower in females that gave birth compared to those that did not [Parturition is accompanied not only by morphological , physioltal dens , 49 and process . Other s did not , 43.Given the large number of wildlife studies using satellite telemetry devices to monitor individual movement , 42, thePrevious work has demonstrated that fine-scale location data can identify behavioral shifts related to parturition in a single individual . 
HoweverMany studies in the last decade have developed various methods to identify parturient individuals and parturition events using movement metrics derived from location data , 48, 62.To address these hypotheses, we developed two change-point models that capture different parturition-related changes in movement behavior: a location-based model and a movement metric-based model to identify the timing of these events. Comparing the two manifestations of behavioral shifts is important as a change in physical locations does not necessarily result in a change in movement metrics, and vice versa. For example, relocating a core area can be achieved without a detectable change in movement metrics, such as step lengths (the straight-line distance between two points) and turning angles (the change of direction between three successive steps). Meanwhile, the geographic location of the core-use area may not change, but if the core area becomes smaller post-parturition, then quantities such as step length and turning angle must necessarily change.Odocoileus virginianus) and Rocky Mountain elk (Cervus canadensis nelsoni), to test proposed hypotheses about parturition-related behavior in individual ungulates with known timing of parturition events. Species-specific behavior may result in different models being better at detecting parturition-related changes in movement [To test the ability of our change-point models to detect single behavioral shifts, such as parturition, we conducted a simulation study in which we varied the sampling effort (duration and frequency of observations). We used the simulation study to determine the optimal sampling efforts for detecting behavioral events that manifest as either a change in home-range location or derived movement metrics. We then applied our models to a case study of two ungulate species, white-tailed deer is based on the 2-dimensional spatial location of the individual at time step 7), which did not vary depending on the state. To model the expected location at each time step, we used an autoregressive model of order one (AR(1)) where the location at time t, The multivariate normal distributions were parameterized by their time-varying expected location in space , becausmentclass2pt{minimmentclass2pt{minimPreliminary results indicated that the autocorrelation parameter did not vary between states; therefore, we used a single autocorrelation parameter, \u03c1, arising from a Uniform prior, which allowed for positive autocorrelation while enforcing some dependence on a geographic centroid.t asIn the Movement Metric-based Change-Point Model (MMCPM), the observations of interest were turning angles and step lengths. Turning angles and step lengths are metrics widely used to characterize behavioral states in animal movement modeling , 47. TurThe wrapped Cauchy distribution was parameterized using location . To assess the effect of fix interval, we again simulated fifty complete datasets that contained locations 48\u00a0h prior and 24\u00a0h post-parturition with a 15\u00a0min fix interval. We thinned each complete dataset to both 30\u00a0 and 60\u00a0min fix intervals.A larger variation in observations and a smaller difference between the two states may make it more challenging to determine when the event of interest occurred. 
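To make the location-based model concrete, the sketch below implements a deliberately simplified, non-Bayesian analogue of the LCPM described above: the expected location at time t is an AR(1) pull toward a state-specific centroid, E[z_t] = ρ·z_{t−1} + (1−ρ)·μ_state, and the change point is taken as the split that maximizes the Gaussian likelihood of the residuals. The published model places priors on ρ and the centroids and is fitted in a Bayesian framework; here ρ and the observation scale are fixed and the centroids are plug-in means, purely for illustration.

import numpy as np

def lcpm_profile_scan(z, rho=0.8, sigma=30.0):
    # z: (T, 2) array of locations. Return the candidate change point that
    # maximizes the summed Gaussian log-likelihood of AR(1) residuals under
    # separate pre- and post-change centroids.
    T = z.shape[0]
    best_cp, best_ll = None, -np.inf
    for cp in range(5, T - 5):                        # keep a few fixes per state
        ll = 0.0
        for lo, hi in [(1, cp), (cp, T)]:
            mu = z[lo:hi].mean(axis=0)                # plug-in centroid
            pred = rho * z[lo - 1:hi - 1] + (1 - rho) * mu
            ll += -0.5 * np.sum((z[lo:hi] - pred) ** 2) / sigma ** 2
        if ll > best_ll:
            best_cp, best_ll = cp, ll
    return best_cp

# Toy track generated from the same AR(1) dynamics, centroid shifting at t = 200.
rng = np.random.default_rng(1)
rho, sigma = 0.8, 30.0
mu = np.r_[np.tile([0.0, 0.0], (200, 1)), np.tile([400.0, 250.0], (100, 1))]
z = np.zeros_like(mu)
z[0] = mu[0]
for t in range(1, len(mu)):
    z[t] = rho * z[t - 1] + (1 - rho) * mu[t] + rng.normal(scale=sigma, size=2)
print(lcpm_profile_scan(z, rho, sigma))               # expect a value near 200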
For the MMCPM, we determined four ways in which a difference in behavior might manifest: one with no change in step lengths or turning angles between pre-and post-parturition, one with a change in turning angles but not step lengths, one with a change in step lengths but not turning angles, and one with a change in both step lengths and turning angles. We simulated fifty datasets for each of these movement scenarios. For the LCPMWe first assessed if either LCPM or MMCPM detected a change point by calculating the mode of the posterior distribution of The Pennsylvania Game Commission and the Pennsylvania State University captured 17 deer from January to April 2015\u20132017 and 37 elk from January to April 2020 in north-central Pennsylvania. Study area details can be found in Additional file Our focus was on parturition events; therefore, we extracted individual location data from 3\u00a0days prior to and 4\u00a0days following the known parturition event . We explored other pre- and post-parturition durations and found comparable results across durations , 48. BecThe LCPM estimated a change point according to a Level 1 success for all datasets for each variation of post-parturition observation duration Fig.\u00a0, Table 1When we compared fix intervals, the LCPM estimated the change point according to Level 1 success for all the datasets when the fix interval was 15\u00a0min. However, when the datasets were thinned to 30\u00a0min and 60\u00a0min intervals, the LCPM did not estimate the change point as accurately and a Level 3 success for one and in 24 individuals the 95% credible intervals of the estimated change point fell within the 24\u00a0h prior to the known parturition event and the duration of observation. The balance between fix interval and the duration of observation is essential when considering movement behavior because quantities, such as mean, maximum, and total distance moved, will vary as a function of fix interval . AlthougGiven these potential sources of variation in the detection of a change point, our simulation study evaluated the ability of our two models to detect a given change under varying post-event sampling durations, fix intervals, and magnitude of behavioral changes. We determined successful detection of a change point was possible given a 15\u00a0min fix interval, at least 3\u00a0h of observation following a change for LCPM and at least 12\u00a0h for MMCPM. Moreover, for the MMCPM, successful detection of a behavioral change can occur\u00a0with a 60\u00a0min fix rate and at least 24\u00a0h of observation following a change. The success of our simulation study\u00a0demonstrates that when behavior follows the dynamics proposed by the two models, researchers can detect the timing of true behavioral change with current technology.When we applied our change-point models to two ungulate species, not all individuals within a species exhibited movement behaviors captured by the models. This could be due to large behavioral variation across species and individuals. For deer, the LCPM failed to detect a change in a majority of the individuals and the ability to estimate a change was not consistent among individuals. These results indicate deer are not consistently changing their locations prior to or during parturition. In over 50% of the elk, however, the LCPM consistently identified and estimated a change within 12\u201336\u00a0h of parturition (Fig.\u00a0Much like the LCPM, the MMCPM could not consistently or accurately estimate parturition events in deer. 
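A minimal sketch of the simulated-track design used above (48 h of pre-change and 24 h of post-change locations at a 15-min fix interval, later thinned to 30- and 60-min intervals) is given below. It uses a correlated random walk with wrapped-Cauchy turning angles, concentrated before the behavioural switch and nearly uniform after it, and gamma-distributed step lengths; these distributional choices are standard for movement data but are assumptions here rather than the paper's exact specification.

import numpy as np
from scipy.stats import wrapcauchy

rng = np.random.default_rng(7)
n_pre, n_post = 48 * 4, 24 * 4    # 15-min fixes: 192 before, 96 after the change

def simulate_segment(n, mean_step, angle_conc, heading0=0.0, start=(0.0, 0.0)):
    steps = rng.gamma(shape=2.0, scale=mean_step / 2.0, size=n)
    # Wrapped-Cauchy turning angles centred on zero, mapped from [0, 2*pi) to (-pi, pi].
    turns = wrapcauchy.rvs(angle_conc, size=n, random_state=rng)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi
    headings = heading0 + np.cumsum(turns)
    xy = np.asarray(start) + np.cumsum(
        np.column_stack([steps * np.cos(headings), steps * np.sin(headings)]), axis=0)
    return xy, headings[-1]

pre, h = simulate_segment(n_pre, mean_step=120.0, angle_conc=0.8)        # directed
post, _ = simulate_segment(n_post, mean_step=20.0, angle_conc=0.2,
                           heading0=h, start=pre[-1])                    # tortuous
track_15min = np.vstack([pre, post])

# Thinning by dropping fixes; movement metrics are then re-derived from the thinned track.
track_30min = track_15min[::2]
track_60min = track_15min[::4]

def step_lengths(xy):
    return np.linalg.norm(np.diff(xy, axis=0), axis=1)

print(round(step_lengths(track_15min).mean(), 1), round(step_lengths(track_60min).mean(), 1))

Re-deriving step lengths and turning angles from the thinned track shows why the fix interval matters: coarser sampling changes the apparent movement-metric distributions and can blur the contrast between the two behavioural states.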
The failure of the MMCPM to capture parturition behavior of deer could be due to brief behavioral changes that are not able to be detected by the MMCPM. For example, females have been observed moving their fawns to a second location within 3 to 24\u00a0h after parturition and will only re-visit these secondary locations briefly at dawn and dusk to nurse . While eBased on change point analysis of telemetry data from the mother, locating neonates in real-time is unlikely to be successful given individual variation and the It is important to be cautious when assigning behaviors or events of interest from the change point estimated by a model . The chaHowever, when a behavioral change event is known to occur, our two change-point models successfully identified it under different monitoring and ecological scenarios. Therefore, these models could be used to identify the timing of parturition events, but only if the methods have been validated a priori. These methods and guidance can be applied in the future to other systems where single behavioral change occurs, such as migration, natal dispersal, or survival of offspring. Our change-point models provide a valuable tool for wildlife managers and researchers to monitor vital rates for populations of management and conservation interest.Additional file 1. Supplementary information on modeling framework, study area, and data collection and processing.Additional file 2. Supplementary figures and tables for simulation and case study."} {"text": "The purpose of this study is to synthesize evidence on risk factors associated with newborn 31-day unplanned hospital readmissions (UHRs). A systematic review was conducted searching CINAHL, EMBASE (Ovid), and MEDLINE from January 1st 2000 to 30th June 2021. Studies examining unplanned readmissions of newborns within 31\u00a0days of discharge following the initial hospitalization at the time of their birth were included. Characteristics of the included studies examined variables and statistically significant risk factors were extracted from the inclusion studies. Extracted risk factors could not be pooled statistically due to the heterogeneity of the included studies. Data were synthesized using content analysis and presented in narrative and tabular form. Twenty-eight studies met the eligibility criteria, and 17 significant risk factors were extracted from the included studies. The most frequently cited risk factors associated with newborn readmissions were gestational age, postnatal length of stay, neonatal comorbidity, and feeding methods. The most frequently cited maternal-related risk factors which contributed to newborn readmissions were parity, race/ethnicity, and complications in pregnancy and/or perinatal period.Conclusion: This systematic review identified a complex and diverse range of risk factors associated with 31-day UHR in newborn. Six of the 17 extracted risk factors were consistently cited by studies. Four factors were maternal , and two factors were neonatal . Implementation of evidence-based clinical practice guidelines for inpatient care and individualized hospital-to-home transition plans, including transition checklists and discharge readiness assessments, are recommended to reduce newborn UHRs.The online version contains supplementary material available at 10.1007/s00431-023-04819-2. Newborn unplanned hospital readmission (UHR) is defined as an unexpected hospital readmission within a specified time period following discharge from the initial hospitalization at the time of birth , 2. 
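Operationally, the 31-day window in this definition reduces to simple date arithmetic; a minimal illustration is given below, with function name and dates that are purely hypothetical.

from datetime import date

def is_31day_unplanned_readmission(discharge, readmission, planned=False):
    # Counts as a UHR if the encounter is unplanned and begins within 31 days
    # of discharge from the birth hospitalization.
    days = (readmission - discharge).days
    return (not planned) and 0 < days <= 31

print(is_31day_unplanned_readmission(date(2021, 3, 2), date(2021, 3, 20)))  # True
print(is_31day_unplanned_readmission(date(2021, 3, 2), date(2021, 4, 12)))  # False, 41 days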
NewbIdentifying risk factors associated with UHRs of the newborn can assist in reducing readmission rates through improvements in clinical practice, policy development, and the use of maternal-child healthcare services. While studies have examined causes associated with neonatal morbidity and mortality , 10, theThe systematic review followed the 2009 PRISMA Statement .An electronic database search was carried out using the CINAHL, EMBASE (Ovid), MEDLINE from 1st January 2000 to 30th June 2021 with key search terms and (newborn* or new born* or newly born or baby* or babies or premature or prematurity or preterm or pre term or preemie* or premie* or low birth weight or low birthweight (LBW) or very low birthweight (VLBW) or extremely low birth weight (ELBW or infant* or infancy or neonat*) (A complete search strategy is provided in Appendix Four inclusion criteria for this review were (1) primary research studies, (2) UHRs assessment/measurements within 31\u00a0days, (3) study design stated clearly and reported statistical analysis procedure/s, and (4) published in peer-reviewed journals and the English language with full text available. Studies were excluded when mixed adverse outcomes, including complications and emergency department (ED) visits post-hospital discharge or readmission were measured more than once. Conference abstract-only references were also excluded.Two reviewers initially read all titles and abstracts independently to assess potential inclusion. Included full-text articles were then assessed against the inclusion criteria. Disagreements between the reviewers on potential articles for inclusion were resolved through discussion. Reference lists of all included articles were screened to identify additional articles.Data extraction included study characteristics, examined variables, and statistically significant risk factors. Study characteristics included study setting, population, sample size, the timing of data collection, study design, data source, readmission rate, and statistical analysis test/s used to identify risk factors as per Table The methodological quality of included studies was assessed independently by two reviewers using a standardised set of predefined criteria in six dimensions . The evaluation results of each item were rated as Yes/Partly/No/Unsure. The potential bias of each study was evaluated by overall risk \u201clow\u201d or \u201chigh\u201d , 13.Pooling extracted risk factors is not possible due to the heterogeneity of included studies such as diagnosis, examined variables, or follow-up period to identify readmissions. Therefore, content analysis was used to synthesize the extracted risk factors, and the results are presented narratively . Due to n\u2009=\u20092); (2) readmissions were measured more than once [n\u2009=\u20091). Two additional articles [A total of 6783 records were initially identified, after removing 1771 duplicates, 5012 records remained and were screened through titles and abstracts. Of these, 4979 records were excluded due to irrelevance and 33 relevant references were considered eligible for potential inclusion. A further 4 were excluded as they were conference abstracts only. A total of 29 references were retrieved as full text. 
Three studies were further excluded for the following reasons: (1) Outcome measures included unplanned ED visits , 15 (n\u2009=han once (n\u2009=\u20091).articles were ideOverall, the risk of potential bias for the 28 included studies was low against the six predefined dimensions of potential bias , 13. KeyCharacteristics of the 28 included studies are summarized in Table n\u2009=\u200918) recorded the age of patients using gestational age (GA), while nine studies referred to the newborn without specific GA. The four main types of the population involved in the 28 included studies were health-term newborns (n\u2009=\u20092), all live newborns (n\u2009=\u20099), late-preterm newborns with various health condition focus (n\u2009=\u20095), and newborn with varieties of health issues (n\u2009=\u200912).Twenty-two included studies retrieved data from multiple sites, while 6 from a single center. Samples sizes varied from 58 to 4,667,827 and UHRs rates varied from 0.2 to 39% . The majThe time span for the retrieved data varied from 2\u00a0months to 12\u00a0yeVariables or confounding factors differed across the 28 included studies. The number of examined variables for each study ranged from one , 25, 26 Seventeen of the 28 included studies identified maternal variables contributing to 31-day newborn UHRs. The three most frequently cited risk factors were maternal parity, pre-existing or perinatal complications, and race/ethnicity. Nine studies , 38, 39 Nine studies reported that mothers with pre-existing or perinatal complications increased the probability of newborn readmission following discharge , 35, 39.In five studies, health care utilization and family resources, including uninsured health care status, unstable family income and inadequate support for the mother following the discharge, were identified as increasing the risk of newborn readmission , 39, 40.The geographic location of both the hospital where the birth occurred and the residential address of parents was cited as risk factors by five differing studies , 40, 41.The maternal age was cited as a risk factor by five studies , 38, 41.Eleven significant risk factors pertaining to the newborns were extracted. The most frequently cited risk factors were gestation age, neonatal comorbidity, postnatal length of stay (LOS), and feeding methods.Gestational age was the most frequently cited significant predictor of unplanned readmission for newborns with OR range from 1.18 to 9.43 \u201338, 42. Newborns who either had a medical condition at birth or developed medical conditions following their birth were associated with an increased risk of UHRs , 36, 40.P\u2009<\u20090.05) [Eleven included studies , 38, 39 \u2009<\u20090.05) .Two studies , 33 repoFeeding methods and feeding problems were identified in nine studies , 35, 38.Gender was examined and reported consistently across seven differing studies. Compared to females, male newborns experienced a higher risk of unplanned readmission after birth , 38\u201340.Three studies , 32, 35 The birth weight of newborns was also identified as a statistically significant factor in two studies. The measurement of birth weight, however, was inconsistent amongst the studies. One study reportedTwo studies cited newborns\u2019 weight at discharge as risk factors. One study reportedThis systematic review synthesized risk factors associated with newborn 31-day unplanned hospital readmissions following discharge from the hospital where the birth occurred. 
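The odds ratios quoted in these results (for example, the gestational-age ORs ranging from 1.18 to 9.43) are usually reported with 95% confidence intervals. The sketch below computes a crude odds ratio and a Woolf (log-OR) confidence interval from a 2×2 table of hypothetical counts; published estimates are typically adjusted in a logistic regression, which this simple calculation does not reproduce.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: readmitted / not readmitted among exposed (e.g. late-preterm) newborns;
    # c, d: readmitted / not readmitted among unexposed (e.g. term) newborns.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

print(odds_ratio_ci(a=60, b=940, c=150, d=8850))
# ~ (3.77, 2.77, 5.12): about 3.8 times the odds of readmission in the exposed group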
Twenty-eight studies were reviewed, and 17 significant risk factors were extracted. These included six maternal and 11 newborn-related variables. Of the 17 predictors, six were consistently cited. Four factors were maternal , and two factors were neonatal . The remaining risk factors were inconsistent across the included studies.Newborns of mothers under 20 or over the age of 35, especially primiparous, were at greater risk of unplanned hospital readmission. This is consistent with evidence on the adverse outcomes of pregnancies conceived at extreme maternal age . Adversen\u2009=\u200925) in this review were conducted in western developed countries such as the USA, Canada, UK, Australia, and France with extensive multicultural backgrounds. Mothers of Asian ethnicity experience language and cultural barriers during hospitalization impacting their health literacy and comprehension of discharge information on caring for newborns and themselves [Newborns of Asian mothers were found in this review to have up to a 3 times greater likelihood than other ethnicities of being readmitted. It is noted that almost 90% of the included studies , who are physiologically immature, were often treated the same as a full-term newborn and experienced higher readmission rates , 44. TheReadmission rates associated with the LOS for newborns after their birth were inconsistent and varied from 1.3 to 6.6\u00a0days across 92 countries . Since tThis review found that vaginal or assisted vaginal deliveries significantly increased the risk of unplanned newborn readmissions compared with cesarean section. This is opposite to evidence promoting the advantages of vaginal delivery. Compared to newborns delivered by cesarean section, those delivered vaginally were found to have an increased probability of newborn hyperbilirubinemia and jaundice , 58, whiSix included studies cited exclusive breastfeeding as a predictor of newborn readmission, which conflicts with the evidence citing the advantages of breastfeeding. Notably, most studies citing breastfeeding as a risk factor were related to newborn readmissions with jaundice , 33, 38.This systematic review has certain limitations. Firstly, only English language papers with full-text access were considered. The majority of the included studies were conducted in the North America, Europe, and Australia; therefore, generalization of this review\u2019s results should be made with caution considering the characteristic of the healthcare settings. In addition, a meta-analysis was not performed to synthesize the extracted risk factors due to the\u00a0heterogeneity in the 28 included studies. The studies\u2019 heterogeneity included newborns\u2019 characters, examined variables, time period associated with UHRs, and outcomes coherence. This systematic review did not restrict newborn\u2019s gestational age and comorbidities, which might contribute to the large variation of UHR rate of 0.2% to 39%.This systematic review confirms the diverse and complex nature of risk factors associated with newborn 31-day UHRs. Six consistently cited predictors include 4 maternal factors and 2 neonatal factors . There is a need to promote healthcare providers\u2019 awareness of risk factors then develop and implement comprehensive individualized hospital-to-home transition plans from the time of admission for the birth through to discharge home to reduce unplanned neonatal readmissions . 
TransitApplying identified predictive risk factors assists healthcare providers to recognize newborns at higher risk of readmission and implement preventative strategies, for example, individualized discharge planning .\u00a0Future Supplementary file1 (DOCX 14 KB)Below is the link to the electronic supplementary material."} {"text": "Owing to the high radiative forcing and short atmospheric residence time of methane, abatement of methane emissions offers a crucial opportunity for effective, rapid slowing of climate change. Here, we report on a colloquium jointly sponsored by the American Society for Microbiology and the American Geophysical Union, where 35 national and international experts from academia, the private sector, and government met to review understanding of the microbial processes of methanogenesis and methanotrophy. The colloquium addressed how advanced knowledge of the microbiology of methane production and consumption could inform waste management, including landfills and composts, and three areas of agricultural management: enteric emissions from ruminant livestock, manure management, and rice cultivation. Support for both basic and applied research in microbiology and its applications is urgently needed to accelerate the realization of the large potential for these near-term solutions to counteract climate change. Methane is a potent greenhouse gas, contributing to about one-fourth of current positive radiative forcing and already having caused about 0.5\u00b0C warming since the beginning of the industrial revolution . FortunaThe American Society for Microbiology and the American Geophysical Union, two of the largest scientific societies in the world, jointly convened a colloquium on \u201cThe Role of Microbes in Mediating Methane Emissions\u201d on 31 May to 1 June 2023 to explore opportunities to improve methane mitigation options through improved understanding of the microbial processes of methane production and consumption. A group of 35 national and international experts from academia, the private sector, and government met to review the understanding of the microbial processes of methanogenesis and methanotrophy in waste management, including landfills and composts, and in three areas of agricultural management: enteric emissions from ruminant livestock , manure management, and rice cultivation. Presentations on the state of the science in each of these topics revealed that much is already known about the biology of methanogens and methanotrophs 5, but iThe colloquium participants found good reasons for optimism that microbial science is on the verge of facilitating significantly improved abatement of methane emissions from agriculture and waste management systems, but more basic and applied research is still urgently needed to accelerate the realization of the large potential for near-term solutions to climate change from methane emission abatement. Looking forward, the colloquium aims to articulate insights in the form of a roadmap to benchmark progress and to facilitate coordination of the scientific community toward a meaningful goal of methane emission reduction. A full report of the colloquium will be published by ASM in the autumn of 2023. In parallel, the National Academy of Sciences is also collecting information and preparing a report on the removal of atmospheric methane . 
Meanwhi"} {"text": "This review is an outlook on CAR-T development up to the beginning of 2023, with a special focus on the European landscape and its regulatory field, highlighting the main features and limitations affecting this innovative therapy in cancer treatment. We analysed the current state of the art in the EU and outline the field's likely advances in the coming years. For this analysis, the data used came from the available scientific literature as well as from the European Medicines Agency and from clinical trial databases; the latter were queried for active CAR-T studies relevant to this review. As of this writing, CAR-Ts have started to move past the “ceiling” of third-line treatment, with positive results in comparison trials against the Standard of Care (SoC). One such example is the trial Zuma-7 (NCT03391466), which resulted in approval of CAR-T products (Yescarta™) for second-line treatment, a crucial achievement for the field that is likely to broaden the use of this type of therapy. Despite exciting results in clinical trials, many limitations remain, concerning access, production, duration of response, resistance, safety, overall efficacy, and cost mitigation strategies. Nonetheless, CAR-T constructs are becoming more diverse, and the technology is starting to produce some remarkable results in treating diseases other than cancer.

Chimeric Antigen Receptor (CAR)-based therapies represent a significant development in immunotherapy, since they have the potential to be effective in relapsed and refractory (r/r) disease, where the efficacy of other therapies is lower. They could, in principle, also be safer and more effective than traditional chemotherapy, although at the moment they present several unsolved limitations. CAR-T cell therapies are ATMPs, and all those currently authorised are orphan drugs. This means they are complex medicinal products, made of biological materials engineered with cutting-edge technologies and currently indicated only for rare diseases, and are therefore subject to a composite regulatory framework.

Looking at the European pharmaceutical regulatory landscape, CAR-engineered medicinal products are specifically regulated through the following: the Advanced Therapies Regulation (EC) No. 1394/2007, which was introduced to regulate Advanced Therapy Medicinal Products (ATMPs) [ Other pieces of legislation relevant to this review are those respectively establishing the Hospital Exemption (HE) and Compassionate Use (CU) in Europe.
The former describes a regulatory option, foreseen by Article 28 of Regulation (EC) No 1394/2007 amending Article 3 of Directive 2001/83/EC, based on which any ATMP can be prepared as an individual medical prescription for an individual patient (named-based) on a non-routine basis, under the exclusive professional responsibility of a medical practitioner for the treatment of severe, disabling, or life-threatening conditions [In order to provide an updated picture of CAR-Ts state of the art, together with developments most likely to be achieved in the near- and mid-term, as well as insights on what is currently possible and how the regulatory sector is enabling these products to reach the clinical setting, several sources and existing projects and initiatives have been considered for this review, namely: The EU Clinical Trials Register , supplemented by the US database of privately and publicly funded clinical studies ; the EMA website\u2019s EPAR Repository, to obtain the latest updates on information and indications of EU-authorised CAR-T medicinal products; the lists of medicinal products awarded with the PRIME scheme, as well as relevant examples of HE and CU to look at possible approaches for future Marketing Authorization Applications (MAAs). From a structural perspective, a CAR is an artificial protein expressed on the surface of an immune cell, encoded by a transgene introduced via a variety of methods, most commonly transfection via viral vectors. The majority of immune cells presently engineered belong to the cellular component of the adaptive immune system (CD4+ and/or CD8+ T cells), but engineering of other families of immune cells, especially those belonging to the innate immune system like Natural-Killer (NK) and MacrAll approved CAR-T products bear a chimeric antigen receptor composed of an antigen-binding domain typical of antibodies, a co-stimulatory factor from the T cell\u2019s surface, and a signalling domain found in T-cell receptors (TCR), all deriving from different sources, hence the name \u2018chimeric\u2019. In a process of innovation already visible today and reported by other authors , this stUp to the first promising results from clinical trials in 2014, the participation in CAR-T research was scarce, but in recent years, Chimeric Antigen Receptor T cells have demonstrated to be an efficacious means to achieve enhanced survival in haematological malignancies and presented enough benefit to warrant marketing authorisations for several of the investigated medicinal products. These drugs are showing a very positive risk/benefit balance, with benefits continuing to improve as safety is enhanced by new construct designs and protocols to deal with treatment complications.The general principle of CAR-T-cell therapy is summarised in In CAR-T design, domains are sometimes referred to as modules . This emhttps://www.ema.europa.eu/en/medicines/whatwe-publish-when/european-public-assessment-reports-background-context, accessed 23 May 2023) was consulted. EPARs publicly report the positive opinion from EMA\u2019s CHMP on a medicinal product, that subsequently led to its authorisation by the European Commission. Over the past few years, European marketing authorisations have significantly increased in terms of number, and the time to complete the process has been shortened, closely following US FDA (Food and Drug Administration) approvals. In addition, increase in indications for several medicinal products signals the growing use and interest in this therapeutic approach. 
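The modular view of a CAR described above (an antibody-derived binding module, a hinge/transmembrane region, one or more co-stimulatory modules, and a CD3ζ signalling domain) can be captured in a small data model. The sketch below is purely illustrative, not a schema used by the EMA or any bioinformatics standard; the generation rule reflects the conventional nomenclature in which one co-stimulatory domain defines a second-generation CAR and two define a third-generation CAR.

from dataclasses import dataclass, field

@dataclass
class CARConstruct:
    target_antigen: str                          # e.g. "CD19", "BCMA"
    binder: str = "scFv"                         # antibody-derived binding module
    hinge_tm: str = "CD8a"                       # hinge / transmembrane region
    costim_domains: list = field(default_factory=list)  # e.g. ["4-1BB"] or ["CD28"]
    signalling: str = "CD3zeta"                  # TCR-derived signalling domain

    @property
    def generation(self):
        # 1st generation: CD3zeta only; 2nd: one co-stimulatory module; 3rd: two or more.
        return 1 + min(len(self.costim_domains), 2)

anti_cd19 = CARConstruct("CD19", costim_domains=["CD28"])
print(anti_cd19.generation)   # 2 -> an illustrative second-generation anti-CD19 design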
Currently, six products have been authorised in the EU and thirteen different indications were evaluated by the CHMP (To describe the regulatory activity conducted by the EMA (European Medicines Agency) in terms of marketing authorisations, the online repository for EPARs (European Public Assessment Reports) . The scheme provides the ground for an accelerated assessment by the EMA, as well as regulatory support to the sponsor in overcoming hurdles encountered during clinical trials. PRIME is an acronym standing for \u201cPriority Medicines\u201d. It is a scheme first introduced in 2016 to specifically \u201cenhance support for the development of medicines that target an unmet medical need\u201d were CAR-Ts and theiIt is important to note how, among the drugs in this list, some present constructs are not \u201ctypical\u201d, in that they do not adhere to the description previously given and use an antibody-derived antigen-binding domain.These \u201catypical\u201d designs, specifically MB-CART2019.1 and ADP-A2M4, feature a dual-CAR and a SPOf note, given the successful experience with PRIME in improving and accelerating access to innovative medical products, this scheme is considered one of the key regulatory tools to be strengthened in the new proposal of the European pharmaceutical legislation .Other regulatory paths where CAR-T candidates for future MAAs may be found are those that enable real-world use of the product in a clinical setting before approval. The major options in this regard in the EU are Hospital Exemption and Compassionate Use programs. In Italy, the hospital exemption is implemented through the authorisation procedure for medicines prepared on a non-repetitive basis, regulated through the Ministry of Health Decree of 16 January 2015. By querying the Italian Hospital Exemption database (ISS-AIFA) starting from 2021, twenty-six (26) CAR-T cell-based treatments were retrieved that have been authorised for patients with B precursor acute lymphoblastic leukaemia, neuroblastoma, and lupus erythematosus. Among the authorised treatments, twenty (20) involve paediatric patients (<18 years old) as shown in The products were almost all allogeneic CAR-Ts obtained from HLA-identical donors, being in some cases members of the patients\u2019 family. Fifteen (15) out of the twenty-six (26) cases used reference protocols similar to those used in phase I and I/II clinical trials, meaning they were already approved for experimental use.Of note, despite the presence on the market of licensed products, HE has been successfully used to target patients otherwise excluded from any type of treatment.In the case of Kymriah\u2122, which was suitable for patients with relapsed disease and/or refractory to conventional treatments, some common limitations to its use led to the demand for allogeneic CD19-CAR-T cells for relapsed and/or refractory B-cell acute lymphoblastic leukaemia (ALL) under HE. 
The most frequent cases submitted under HE included relapsed patients already undergoing heavily lymphodepleting chemotherapy cycles or treated with HSCT.Similarly, with reference to the application of CAR-T cells in the neuroblastoma indication, an innovative product that uses an inducible iC9 caspase to control the possible CRS due to CAR-T therapy was used on twenty-seven (27) patients aged 1 to 25 years and the extremely promising results were published in April 2023 [As a general limitation of this analysis, it has to be underlined that products administered throughout Hospital Exemption in EU are under the supervision of the corresponding national competent authority of each Member State (MS), after careful risk\u2013benefit analysis of the medicinal product. The lack of a centralised register for HE in EU makes it difficult to retrieve information from each MS. However, reports may be found by looking at the available literature for studies on CAR-T HE use. As an example, the product ARI-0001 or CART19-BE-01, currently included in the list of PRIME medicines, is an anti-CD19 CAR against B-cell malignancies, which was approved by the Spanish Agency of Medicines and Medical Devices (AEMPS) under HE to treat adult patients (>25 years old) with r/r ALL in 2021.Safety and efficacy of ARI-0001 have been assessed in a study conducted together with the paediatric hospital Sant Joan de D\u00e9u (Barcelona), with similar results with respect to those of other products ,22. ARI-Also, in this case, HE constituted a valid alternative path foreseen by the European ATMP Regulation to enhanCompassionate Use programsThe study, whose interventions were reported to the relevant legal authorities , demonstrated that MB-CART19.1 cell treatment did in fact led to depletion of B cells and disease remission, which notably did not require maintenance therapy afterwards.On the other hand, it also showed that the CD19-targeted CAR T-cell approach may not be generalised to other autoimmune diseases: this approach requires diseases that not only are B cell-driven, but also develop on B-cell activation. Other autoimmune diseases exist that, despite resulting from B cells, are caused by long-lived plasma cells, which are usually CD19-negative , but sucThis could be the start of an important change in the treatment of this rare disease: the clinical effect of CAR-T-cell treatment is associated with resolution of the SLE patients\u2019 autoimmunity, persisting even after about 100 days, when patients reconstituted their B cells. The likeliest development in the near future regarding CAR-Ts clinical adoption and use is the combination of CAR-T cells with other drugs, already approved, mainly to enhance safety . In the A major objective of the drug combination approach is to try to curb a crucial side effect of CAR-Ts, namely the Cytokine Release Syndrome (CRS).From murine models replicating CRS, cytokines implicated in its development were identified, the main ones being GM-CSF, IL-1, IL-6.Since the Granulocyte Monocyte Colony Stimulating Factor (GM-CSF) is a myeloid cytokine, it was hypothesised that CRS could be prevented from occurring by reducRecently, Yi and colleagues used GM-CSF knock-out CAR Ts at the clinical level , treatinInterleukin-1 (IL-1) has an important role in severe systemic inflammation, and its association with CAR-T toxicities has been widely described. 
IL-1 was targeted in CAR-T therapy by administering anakinra, an IL-1R antagonist already in clinical use. Another therapeutic approach under investigation to control CRS consists of administering CAR-T cells in combination with dasatinib at a critical time during the onset of CRS; dasatinib is a tyrosine kinase inhibitor that can reversibly suppress T-cell activation, effectively acting as a pharmacological “off switch” for CAR-T cells.

From clinical trial results in liquid (haematologic) tumours, it is known that the majority of patients treated with CAR-Ts achieve important responses, yet most of these responses are still not durable. This is not the case in solid tumours, where current CAR-Ts are not significantly effective, making them an area of unmet need despite being the focus of much of the current research in the field. CAR-T cells are also associated with significant toxicities of an inflammatory nature: not only the cited Cytokine Release Syndrome (CRS) but also the Immune effector Cell-Associated Neurotoxicity Syndrome (ICANS), as well as cytopenias, which may in turn lead to opportunistic infections. On the other hand, the development of allogeneic CAR-engineered cells, including non-T immune cells such as Natural Killer (NK) cells and macrophages, likely represents the largest gap in knowledge in the field today and will require substantial investment to reach the clinic, although several reports of CAR-NK clinical trials are already available. The present limitations of CAR-T-cell therapy are discussed in the following paragraphs.

Target choice can be considered a key element in the development of CARs: by choosing an ideal target, the safety of these products can be raised to high standards. The importance of a CAR’s target cannot be overstated, as it ties in directly to both the efficacy and the safety of these treatments. Even though there is a vast and growing body of preclinical and clinical research dealing with the discovery and testing of novel targets for CAR constructs, the identification of truly ideal targets remains an open challenge.

While some of the toxicities associated with CAR-T-cell therapy were anticipated, the two adverse events that have become distinctive of CAR-T-cell therapy, CRS and ICANS, were not anticipated from the early murine studies. CRS usually develops 3–5 days after infusion, while ICANS starts 5–7 days after infusion and is likely connected to CRS development. CRS is caused by extremely high levels of cytokines due to a strong immune response. Typical CRS manifestations include sustained high fevers, hypotension, and circulatory failure, which may require airway protection, admission to the Intensive Care Unit (ICU), and mechanical ventilation. This implies that a small clinical centre, not equipped with advanced facilities, will not be authorised to use CAR-Ts, thus contributing to reduced access to this therapeutic opportunity.

Recent studies on cytokine involvement in CAR-T-related CRS show that many cytokines derive from the CAR-T treatment itself: the engineered cells proliferate in vivo when activated by contact with tumour cells and produce a dramatic increase in cytokines. Moreover, some key cytokines mediating CRS, such as IL-6, are produced by Tumour-Associated Macrophages (TAMs), myeloid cells found in the patient’s tumour. A direct proportionality between the tumour burden, the scale of T-cell activation, and the severity of the syndrome has been observed.
As a result, the most fragile patients, those with the most severe disease, tend to suffer the most severe CRS. Despite incremental improvements in the ability to recognise Cytokine Release Syndrome and early treatment with monoclonal antibodies that induce IL-6 blockade, deaths linked to this side effect continue to be observed. Thus, a deeper understanding of CRS, its causes and mechanisms of action, is needed to control, treat, or reduce its occurrence with future CAR-T medicinal products.

ICANS appears to be due to the expression of CD19 on some neurons and on the cells lining the blood–brain barrier. Its main symptoms are confusion, aphasia, seizures, and encephalopathy. Even if, according to MRI observations during neurotoxicity, these symptoms tend to be transient, ICANS-induced encephalopathy can be deadly if associated with brain oedema, and high-grade ICANS can result in the destruction of the blood–brain barrier (BBB). GM-CSF seems to play a central role in CAR-T-mediated neurotoxicity, as demonstrated by the analysis of some of the 2017 pivotal clinical trials. In addition, the infiltration into the CSF of CD14-positive cells and monocytes, like Tumour-Associated Macrophages (TAMs), has been associated with Grade ≥ 3 neurotoxicity, suggesting a role for these cells as a target for preventing ICANS. Currently, the treatment of ICANS consists solely of corticosteroids and supportive care. Moreover, since ICANS often occurs together with CRS, its resolution is intertwined with proper CRS treatment; spontaneous resolution, however, can only occur if ICANS is low-grade, while severe ICANS (grade ≥ 3) remains a critical condition, highlighting a gap in knowledge and the need for new and resolutive treatments. ICANS and neurotoxicity are obviously crucial concerns for the application of CAR-Ts in the neurological field. Through the observation of haematologic patients with known CNS involvement, the presence of MRI changes or baseline neurological disorders has been linked to an increased risk of ICANS.

Resistance to CAR-T activity is a multifactorial outcome, mainly related to the patient’s cells used to generate the CAR-Ts, T-cell exhaustion, and the tumour microenvironment (TME). Regarding the influence of starting materials in autologous treatments, the patient’s T cells may be functionally defective even before transduction with the CAR. This seems to be the case in heavily pre-treated patients, who approach CAR-Ts as a third line of treatment, and it is one of the reasons for the interest in allogeneic CAR-Ts. For similar reasons, these therapies have been moved to earlier lines of treatment, as in the case of the ZUMA-7 trial (NCT03391466), a comparison trial which allowed Yescarta® to become the first CAR-T ever available as a second-line treatment, for HGBL and DLBCL. Defective T-cell function may also appear as a result of genetic modification. This specific form of functional decline is known as ‘T-cell exhaustion’, a poorly understood continuous differentiation process in which T cells are transformed from precursors into terminally differentiated T cells, thereby losing cellular functionality.

The tumour microenvironment (TME) plays a critical role in the response to tumour treatments. Suppression exerted by the TME can down-regulate the activity of both CAR-Ts and physiological T cells.
CAR-T inhibition mediated by the TME in solid tumours is a major area of study. Among the approaches to TME-mediated resistance, dual-targeting CAR-Ts may be used to stop Cancer-Associated Fibroblasts (CAFs) from inhibiting the CAR-Ts; CAFs are a cellular component of the TME, well documented, for example, in multiple myeloma, where they can inhibit BCMA-targeted CAR-T cells. Another approach to overcoming the TME lies in the CAR design called TRUCK CAR-Ts, used against inhibitory extracellular vesicles; such vesicles are part of the milieu of the TME and have been documented to carry inhibitory molecules. Lastly, another component of the TME for which potential solutions have been investigated is the cytokine Transforming Growth Factor-β (TGF-β), which has a prominent immunosuppressive role in the TME. Novel designs, such as peptide-centric (PC) CAR-Ts, are also being explored as possible solutions.

The number of patients having access to CAR-T-cell treatment is currently quite limited, despite the high and increasing level of interest, investment, and research in the field. According to the EBMT registry, only around 2500 patients have been treated. Although the actual price of CAR-T-based treatment can vary significantly among EU member states, it may indicatively be considered to be around EUR 350,000, with possible variations due to the different agreements stipulated between the Marketing Authorization Holders (MAHs) and member states’ NHSs. This very high price tag, while in principle making CAR-Ts profitable products, undermines their ability to become fully part of mainstream cancer treatment options. Two potential solutions to lower costs have been discussed in the available literature: (i) increasing production output, and (ii) manufacturing allogeneic products. The first is reasonably the near-term solution that can be expected in the coming years, while the second constitutes a long-term objective. Allogeneic products would allow a virtually unlimited quantity of medicinal product, with lower costs and the possibility of increasing the complexity, and thus hopefully the effectiveness, of CARs by introducing multiple transgenes and/or mutations into the engineered cell.

When analysing and comparing the efforts to address the cost and complexity of CAR-T-cell therapy, a recurrent theme across both the literature and the scientific institutions developing this technology is the need to move from a centralised to a decentralised production process. In a centralised model, the key to production is the company and its GMP facilities. Current challenges in access to therapy partly stem from the complex nature of the process, which can take from 3–4 up to 6–8 weeks from the time leukapheresis is performed and the patient’s cells are shipped to the company until the product is returned to the hospital for reinfusion. The extremely expensive nature of this kind of medicinal product, due to this complex production process as well as the low scale of production, means that only major academic centres can afford this therapeutic option, further decreasing access, similarly to the established third-line treatment alternative to CAR-Ts (bone marrow transplantation), which is not available in small clinical centres.
In addition, looking at clinical trial results obtained with the centralised model, it was observed that some patients were invariably lost because the time needed to obtain the medicinal product was too long for the patient to wait and the bridging therapy was not sufficient. The decentralised production model, on the other hand, aims at significantly reducing the overall time to reach the patient by cutting storage and transportation times through the use of mobile cell factories. The possibility of administering CAR-T infusions as an outpatient therapy, an increase in the number of centres that can adopt CAR-T-cell therapies, and a shorter time for potentially life-saving innovations to reach the clinical setting are some of the potential advantages of this approach. A nice example of a decentralised and flexible approach was tested at St. James’s Hospital and the Trinity College Dublin Clinical Research Facility in Dublin (July 2022), moving a cell factory closer to patients by means of a mobile GMP facility for CAR-T production.

Further investigations into the use of more automated manufacturing platforms and co-located GMP-compliant facilities are needed. Examples of currently ongoing innovative developments in the field are being carried out by several players, such as Lonza Pharmaceuticals®, Miltenyi®, and aCG®. The establishment of GMP manufacturing quality standards is considered an important challenge with regulatory implications: specific guidelines do not exist yet, and coordination between regulators and manufacturers is needed to bridge this gap in the near future.

Another sprawling field of interest addressing the limitations of CAR-T-cell therapy is that of “off-the-shelf” CAR-Ts obtained from healthy donors, which can provide large amounts of fully functional cells and allow multiple CAR-T-cell products to be generated. Such an approach may increase patients’ access to therapy, also reducing the delivery time of products that can be stored like conventional biologics. Their major limitation is the need for an additional editing step to prevent Graft-versus-Host Disease (GvHD)-type rejection. Another promising approach for the generation of allogeneic CAR-T products foresees the use of in vitro-induced pluripotent stem cells (iPSCs) instead of cells collected from healthy donors. The promise of these engineered stem cells is fascinating: with a potentially perpetual supply of virtually unlimitedly editable cells, the versatility of iPSC-derived CAR-T cells (iCAR-Ts) has garnered increasing investment and interest. iPSC technology is, however, currently in its “first wave” and suffers from several limitations, such as the absence of clear regulation for product quality assessment and release for clinical use, the limited acceptance of iPSC technology by key stakeholders, especially patients, and the unavailability of robust and scalable manufacturing protocols for clinical-grade iCAR-T cells. Other gaps in knowledge that are likely to affect access to iCAR-T therapies in the long term are the suboptimal function and developmental maturity of iPSC-derived CAR-Ts in comparison to “primary”, autologous CAR-Ts. So far, studies conducted on T cells derived from iPSCs (TiPSCs) have only managed to produce CD8αα CAR-T cells with a low activity profile, similar to innate T cells.
Currently, to use TiPSCs as the raw material to engineer CAR-Ts, a master iPSC cell bank must be created via gene editing and subcloning, specifically dedicated to the disease under study. This is a long, technically challenging, and substantially expensive process, which can generate genotoxic products, and it prevents the introduction of more complex CAR designs. Overcoming this technological challenge appears to be crucial for the ability to scale up the quantities of CAR-Ts produced from iPSCs and for the potential of iPSCs to compete with cells from healthy donors.

Gene therapies hold great promise for oncologic patients as well as for rare-disease patients. The areas for improvement analysed here, which prevent a faster uptake and consolidation of such therapies and would require an update of the regulatory approaches, concern three main aspects: (i) the regulatory approval landscape, where regulatory assessment based on real-world data (RWD) could accelerate approvals; (ii) the manufacture and release model, which could be improved by introducing decentralised (bedside) manufacture to reduce production time and, in the case of automated equipment, eliminate human variability; and (iii) the reimbursement model, which can be approached using Quality-Adjusted Life Years (QALYs) tailored to CAR-T patients (a simple illustrative calculation is sketched below). These fields currently operate on conventional pharmaceutical models, which slow down access to ATMPs.

In the long term, the roadblocks to be overcome to achieve widespread cell therapy adoption primarily include standardising cell therapy production and removing transportation hurdles, such as the default freezing/thawing and transportation of human cells, which introduce risks to cell viability and potency as well as treatment delays. Working directly with innovators is needed to develop next-generation cell therapies in-hospital, for example through the so-called “GMP-in-a-POD” platforms. Innovations addressing the high cost and long manufacturing duration of CAR-T-cell therapy will have a tremendous societal and economic impact, and developing the procedures and tools to efficiently produce off-the-shelf, TiPSC-derived CAR-Ts is key to both endeavours.

All approved CAR-T-cell products are orphan medicines, authorised for rare diseases. They present several recurrent challenges regarding efficacy, safety, and access that have been only partially solved. Among the challenges identified by this review, the low efficacy in solid tumours, runaway inflammatory side effects, and difficult and expensive manufacturing are considered priorities in the field and must be addressed by regulators and scientists. From the regulatory point of view, this review proposes that the EU regulatory framework will require updates to properly oversee such innovative therapies and the challenges they pose; however, the analysis of the framework as it stands already shows several possible routes to allow access for patients suffering from rare diseases. From the scientific point of view, CAR technology holds great promise. This is attested by the recent expansions of the indications of the approved CAR-T products, by the encouraging results in the clinical setting, and, finally, by a technological basis that is growing in the direction of increasing production while lowering costs, which constitutes a critical need.
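As a purely illustrative aside on the QALY-based reimbursement approach mentioned in point (iii) above: in health-technology assessment, cost-effectiveness is commonly summarised by the incremental cost-effectiveness ratio (ICER), i.e., the extra cost per QALY gained relative to the standard of care. The sketch below uses the indicative EUR 350,000 price cited earlier in this review; the comparator cost and the QALY figures are hypothetical placeholders chosen only to show the arithmetic, not reported data.

\[
\mathrm{ICER} \;=\; \frac{C_{\text{CAR-T}} - C_{\text{comparator}}}{\mathrm{QALY}_{\text{CAR-T}} - \mathrm{QALY}_{\text{comparator}}}
\;=\; \frac{350{,}000 - 100{,}000\ \text{EUR}}{4.0 - 2.0\ \text{QALYs}}
\;=\; 125{,}000\ \text{EUR per QALY gained (hypothetical figures).}
\]

A payer would then compare such a ratio against its willingness-to-pay threshold, possibly adjusted for the rarity and severity of the indication, which is the sense in which QALY-based models would need to be shaped for CAR-T patients.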
Ongoing research in the field of CAR-T-cell therapy, such as work on the sustainable and safe use of advances in gene editing to achieve multiplex editing (introducing multiple mutations into the engineered cells), points to potential future breakthroughs in clinical applications and in the creation of novel sources of starting cellular material. At the same time, even though the gene editing of CAR-T cells is limited to somatic cells, ethical aspects regarding the modification of primary lineages and off-target editing will need to be addressed from a scientific and regulatory point of view before this technique can be fully transferred to patients. The resulting production of more innovative, as well as allogeneic, products able to avoid graft-versus-host disease could reshape the treatment of several oncologic indications and even offer hope for developing treatments for many diseases that are still incurable today.