diff --git "a/deduped/dedup_0431.jsonl" "b/deduped/dedup_0431.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0431.jsonl" @@ -0,0 +1,55 @@ +{"text": "Selenium is an important component of several enzymes, replacing sulfur in cysteine residues. Its discovery and significance are described in this primer. Escherichia coli supported synthesis of the enzyme formate dehydrogenase [In retrospect, the history of selenium biochemistry does not differ greatly from the study of a number of other trace elements in that it occurred over many years, progressing from periods of little general interest to widespread concern regarding toxicity problems and eventually to recognition of selenium as an essential nutrient for many forms of life. My involvement in studies on selenium metabolism is a classic example of serendipity. I was studying an interesting enzyme from an anaerobic bacterium that utilized glycine as substrate (glycine reductase), but the amounts of pure protein I could isolate were very limited because it seemed to be produced only during the very early stages of bacterial cell growth. The rich culture medium supported continued luxuriant growth of the bacterium, but the level of the desired enzyme in the cells merely underwent dilution during the process. Finally, after testing many known growth stimulatory supplements to no avail, my colleagues and I tried two inorganic nutrients, molybdate and selenite. This approach was prompted by the report that addition of these inorganic compounds to a medium used for anaerobic growth of rogenase . Much toAstragalus accumulated extremely high levels of selenium, as much as several thousand parts per million. During periods of drought, when appreciable amounts of these selenium accumulator plants were consumed in spite of their unpleasant odor, animals exhibited symptoms of acute poisoning, such as \u201cblind staggers,\u201d loss of appetite, paralysis, and finally death. In the arid regions of the western United States, selenium accumulation in soils is much higher than in areas that have normal rainfall, and as a result even ordinary plants, such as cereal grains, from this area contain unusually high levels of selenium. From 1930 to the mid-1950s, many investigators attempted to determine the chemical form(s) of the selenium present in the toxic plants, and animal nutritionists tested the effects of various inorganic and a few organic selenium compounds administered to animals. However, beyond the observation that much of the selenium in plant materials was protein-bound, actual identification of the toxic selenium compounds present was not accomplished [The element selenium was discovered by the Swedish chemist Berzelius in 1817, and the few organic selenium-containing compounds that were prepared during the subsequent 100 years were considered primarily as chemical curiosities. Finally, in the 1930s, selenium was recognized as the potent toxic substance present in various types of plants that, when ingested by grazing animals, caused chronic symptoms of poisoning. Determination of the selenium contents of several native plants from Wyoming and South Dakota revealed that members of the genus mplished . 
In 1957, Klaus Schwarz, a German scientist working at the US National Institutes of Health in Bethesda, Maryland, reported that selenium was the essential component of a dietary preparation termed Factor 3 that prevented severe liver necrosis in rats. Finally, in 1973, it was reported that the catalytic activities of two different enzymes depended on the presence of selenium in these proteins. One of these enzymes was my favorite glycine reductase from anaerobic bacteria, and the other was formate dehydrogenase. When radioactive selenium (75Se) was provided, it was incorporated into both proteins during in vivo synthesis. After much effort spent in learning how to handle oxygen-sensitive selenium compounds, we finally could identify the selenium compound in our glycine reductase protein as selenocysteine, an analog of the sulfur-containing amino acid cysteine. A new paper in this issue, by Kim and Gladyshev, examines the catalytic mechanism of the mammalian selenoenzyme MsrB1, which contains selenocysteine at the active site, and compares it to the catalytic mechanism of the two other forms of the mammalian enzyme (MsrB2 and MsrB3) in which cysteine is found instead. Reduction of the methionine sulfoxide substrate to methionine by the active-site cysteine residue in MsrB2 or MsrB3 converts the cysteine thiol to a sulfenic acid derivative (S-OH), which can be reduced by thioredoxin to regenerate cysteine. In contrast, reduction of the sulfoxide substrate by MsrB1 converts the ionized selenocysteine residue to a selenenic acid derivative (Se-OH), which is not reduced directly by thioredoxin. An impressive number of mutant forms of the genes corresponding to the three MsrB enzymes were constructed, cloned, and expressed in E. coli, and the catalytic activities and substrate affinities of the purified gene products were determined in detail by the authors. The native MsrB2 and MsrB3 enzymes contain three highly conserved amino acid residues—namely, His-77, Val- or Ile-81, and Asn-97—that are part of the active sites of these enzymes. Completely different amino acid residues, Gly-77, Glu-81, and Phe-97, are found at these positions in the fully active form of MsrB1. A series of mutant constructs, in which the amino acids at positions 77, 81, and 97 were switched individually or in groups between the native selenocysteine and cysteine forms of MsrB, and also between selenocysteine- or cysteine-substituted enzymes, was analyzed in detail. Several of these mutants exhibited marked changes in the ability to utilize the normal electron donor, thioredoxin, for enzyme regeneration. Replacement of the active-site selenocysteine in MsrB1 with cysteine resulted in greatly decreased catalytic activity, which could be partially restored by introduction of His-77 and Asn-97 at the active site. Mutants that were modified extensively to allow insertion of selenocysteine at the active site in place of cysteine were also generated, and their characteristics were determined. In general, the replacement of cysteine with selenocysteine frequently resulted in increased activity with dithiothreitol as reductant, but regeneration of active enzyme in these constructs by the natural electron donor, thioredoxin, was not possible. Based on the reported findings, it is clear that different critical amino acids in the active sites of the selenocysteine and cysteine enzymes are required for their maximal catalytic activities, and these also determine electron-donor specificity for enzyme turnover. The last few years have seen the emergence of numerous important physiological roles for the trace element selenium. In contrast, it took many years from Berzelius's discovery of this new element in 1817, which he named after Selene, the goddess of the moon in ancient Greece, until it attracted growing interest, whereupon it gained a bad reputation as a toxic substance. Even after its recognition as a required nutrient for mammals, the role of selenium as an essential component of important antioxidant enzymes synthesized in our cells is not widely appreciated. Further studies of the unique properties of selenium will help us to understand the selective advantage imparted to cells by their investment in selenoprotein biosynthesis and its retention during evolution."}
+{"text": "Enzymes have been extensively used in organic solvents to catalyze a variety of transformations of biological and industrial significance. It has been generally accepted that in dry aprotic organic solvents, enzymes are kinetically trapped in their conformation due to the high-energy barrier needed for them to unfold, suggesting that in such media they should remain catalytically active for long periods. However, recent studies on a variety of enzymes demonstrate that their initial high activity is severely reduced after exposure to organic solvents for several hours. It has been speculated that this could be due to structural perturbations, changes of the enzyme's pH memory, enzyme aggregation, or dehydration due to water removal by the solvents. Herein, we systematically study the possible causes for this undesirable activity loss in 1,4-dioxane. As model enzyme, we employed the protease subtilisin Carlsberg, prepared by lyophilization and co-lyophilization with the additive methyl-β-cyclodextrin (MβCD). Our results exclude a mechanism involving a change in the ionization state of the enzyme, since the enzyme activity shows a similar pH dependence before and after incubation for 5 days in 1,4-dioxane. No apparent secondary or tertiary structural perturbations resulting from prolonged exposure to this solvent were detected. Furthermore, active-site titration revealed that the number of active sites remained constant during incubation. Additionally, the hydration level of the enzyme does not seem to affect its stability. Electron paramagnetic resonance spectroscopy studies revealed no substantial increase in the rotational freedom of a paramagnetic nitroxide inhibitor bound to the active site (a spin label) during incubation in neat 1,4-dioxane when the water activity was kept constant using BaBr2 hydrated salts. Incubation was, however, accompanied by a substantial decrease in Vmax/KM. These results exclude some of the most obvious causes for the observed low enzyme storage stability in 1,4-dioxane, mainly structural, dynamic, and ionization-state changes. The most likely explanation is a possible rearrangement of water molecules within the enzyme that could affect its dielectric environment.
However, other mechanisms, such as small distortions around the active site or rearrangement of counter-ions, cannot be excluded at this time. Enzymes have been successfully employed to catalyze a number of transformations and chiral resolutions of biological and industrial importance in organic solvents. We have previously reported that the activity of the serine protease subtilisin Carlsberg is significantly reduced upon storage in several organic solvents, irrespective of its preparation, hydration, the hydrophobicity of the solvent, the reaction temperature, and the substrates used. Those findings outline a clear drawback in the use of biocatalysts for applications that require prolonged exposure to organic solvents, and they show the need for understanding the mechanism of enzyme inactivation during storage in these media. Herein we study this detrimental effect in detail using two different preparations of the model enzyme subtilisin Carlsberg. The results previously reported from this and other laboratories demonstrate that most enzymes lose their initial high activity exponentially during the first hours of incubation in a variety of organic solvents until a constant activity value is reached. So far, all of the enzymes studied show some residual activity after a long incubation period, which appears to persist indefinitely. First, we decided to study the enzyme-powder morphology after suspension in 1,4-dioxane, since it has been suggested that incubation could result in larger and more compact aggregates, which could reduce the enzyme activity. Since most of the enzyme preparations studied were suspensions, and the substrates therefore have to diffuse through solution and through the pores of solid particles to reach their corresponding active sites, it is conceivable that particle aggregation or morphology changes (such as shrinking or powder deformation) could have an adverse effect on the substrate's diffusion to the enzyme's active site. However, scanning electron microscopy (SEM) of the suspended particles of the co-lyophilized enzyme in 1,4-dioxane showed that no apparent morphological changes or aggregation occurred during incubation. Next, as a likely cause for this effect, we decided to determine whether the fraction of active enzyme molecules decreases during incubation. Active-site titration, employing the method described by Wangikar et al., showed that the number of active sites remained essentially constant during incubation. The apparent KM calculated for the co-lyophilized preparation, however, increased substantially after the incubation period, so it seems that the observed drop in activity is due to a deficiency in the catalytic efficiency of the enzyme. The increase in the KM value could indicate subtle structural changes of surface amino groups involved in the substrate-binding process. Such subtle (reversible) structural changes of active-site residues could also explain the decreased catalytic efficiency of the catalyst. It is well documented that the dehydration step required before introducing an enzyme into the organic medium induces perturbations in the enzyme's secondary structure. To further probe changes in structure, including the tertiary structure, we used circular dichroism (CD) spectroscopy. Spectra were recorded for both the lyophilized and the co-lyophilized powders before and after incubation in 1,4-dioxane for 4 days; neither spectral region revealed significant structural perturbations. Hydrolases have been shown to exhibit optimum catalytic activity at a specific pH, at which the enzyme's catalytic-triad residues acquire the most favorable ionization state for catalysis.
This \"tuning up\" of an enzyme's pH is a particularly important step in non aqueous enzymology, and it is often accomplished while the enzyme is dissolved in a buffer prior to dehydration before suspension in the organic solvent of choice . It is ac ~10-11 to 10-9 s) the apparent rotational correlation time of spin labels can be obtained from line width and the line amplitude measured from the EPR spectra in a given solvent. Therefore, it is possible that the reduced activity observed after incubation is due to changes of the enzyme's dynamics, occurring either by solvent insertion into the enzyme, water removal or over relaxation of the active-site. Site directed spin labeling (SDSL) is emerging as a new tool for determination of structure and conformational dynamics of proteins . The basa Figure , using Euation 1 .Equation 1: \u03c4(s) = 6.5 \u00d7 10-10 \u0394H0 [(h0/h-1)1/2-1]-9 s), the line positions depend on the rate as well as the amplitude of motion. Hence, the parameter \u0394H0 can be used as a convenient empirical measure of dynamics, including both amplitude and rate of motion (a decrease in \u0394H0 indicates increased degree of freedom). In addition, the parameters hi and ha indicates higher degree of freedom). For our studies, lyophilized subtilisin Carlsberg was spin-labeled at the active site with a classic inhibitor for serine proteases: 4-ethoxyfluorophosphinyloxy-TEMPO show that the mobility of the spin label increases after a 4-day incubation, suggesting increased flexibility of the enzyme or \"over relaxation\" of the active site. Besides the activity loss observed, increased mobility of the spin label seems to be the only consequence of incubation in 1,4-dioxane, and it could therefore be interpreted as the cause for the low enzyme stability. If, however, one argues that increased mobility of the spin-label relates to increased enzyme flexibility, then these results seem to be contradictory to the general belief that increasing the flexibility should increase the enzyme's activity, unless the enzyme's flexibility was already at its optimum before incubation, and increased flexibility would only cause a decrease in activity. But perhaps the most plausible explanation is that the enzyme's active-site is becoming distorted or \"over relaxed\" during incubation, and this results in a decrease in Vmax/KM, in initial activity and an increase in apparent flexibility of the spin label is observed. Since added water has been reported to increase an enzyme's flexibility /VS[R] .2\u20221H2O/BaBr2 was prepared by placing the anhydrous salt (5 g/16.8 mmol) over pure water (75.8 mg/4.2 mmol) in a sealed vessel saturated with water vapor measured for the model transesterification reaction between phenyl alcohol and vinyl butyrate. The alcohol concentration was increased from 1 to 100 mM, while the activated ester concentration was kept constant at 200 mM. The active enzyme concentration was determined following a published procedure [The apparent Michaelis constants and Vrocedure .The enzyme was lyophilized and co-lyophilized from 20 mM potassium phosphate buffers of varying pH, adjusted as needed with dilute solutions of NaOH or HCl. The initial activities at day 0 and after a 7-day incubation period were measured for each of the enzymes (prepared from buffers of different pH).-1 resolution using Happ-Ganzel apodization were averaged to obtain each spectrum. Aqueous spectra were measured using spacers of <15 \u03bcm thickness. 
The BaBr2·1H2O/BaBr2 salt pair, used to fix the water activity, was prepared by placing the anhydrous salt (5 g/16.8 mmol) over pure water (75.8 mg/4.2 mmol) in a sealed vessel saturated with water vapor. The apparent Michaelis constants and Vmax values were measured for the model transesterification reaction between phenyl alcohol and vinyl butyrate. The alcohol concentration was increased from 1 to 100 mM, while the activated ester concentration was kept constant at 200 mM. The active enzyme concentration was determined following a published procedure. The enzyme was lyophilized and co-lyophilized from 20 mM potassium phosphate buffers of varying pH, adjusted as needed with dilute solutions of NaOH or HCl. The initial activities at day 0 and after a 7-day incubation period were measured for each of the enzymes (prepared from buffers of different pH). FTIR studies were conducted with a Nicolet Magna-IR System 560 optical bench as described. Multiple scans, recorded with Happ-Genzel apodization, were averaged to obtain each spectrum. Aqueous spectra were measured using spacers of <15 μm thickness. Lyophilized protein powders (lyophilized and co-lyophilized) were measured as KBr pellets (1 mg of protein per 200 mg of KBr). Pellets were produced using a Spectra Tech Macro-Micro KBr Die Kit and a Carver 12-ton hydraulic press. Enzyme powders in organic solvents were prepared by sonication for 2 min in a sonication bath and measured in an FTIR cell equipped with CaF2 windows and a 100 μm thick spacer. When necessary, spectra were corrected for the solvent background and water-vapor contributions in an interactive manner using the Nicolet OMNIC 3.1 software to obtain the protein's vibration spectra. Each protein sample was measured at least five times. All spectra were analyzed by second derivatization in the amide I region; the second-derivative spectra were obtained with the derivative function of the OMNIC 3.1 software (Nicolet), and the final protein spectra were smoothed with an 11-point smoothing function (10.6 cm−1). Amide I second-derivative spectra were also used to calculate the spectral correlation coefficient (SCC). SCC values, which quantify procedure-induced protein structural perturbations, were calculated from the amide I second-derivative spectra as described in detail by Griebenow et al.: the reference spectrum and the spectrum obtained under the varied condition (e.g., after suspension in organic solvent) were imported into the program SigmaPlot and the correlation coefficient was calculated. Far- and near-UV circular dichroism (CD) spectra of lyophilized and co-lyophilized subtilisin Carlsberg were recorded in 1,4-dioxane, before and after incubation for 4 days, as described. CD spectra were measured at room temperature in a Jasco 810 spectropolarimeter fitted with a rotating sample-cell holder; full details are described in Ganesan et al. (2006). Far-UV and near-UV CD spectra were measured using cells of path length 0.01 cm and 0.5 cm, respectively. The spectra were an average of 5 scans collected at a scan speed of 50 nm/min and a bandwidth of 1 nm for far UV and 1.5 nm for near UV. After each measurement was completed, the content of the sample cell was collected (with rinsing), air dried, and dissolved in deionized water, and the protein concentration was estimated from UV absorption at 280 nm. Subtilisin Carlsberg was spin-labeled at the active-site Ser-221 with 4-ethoxyfluorophosphinyloxy-TEMPO by the method described above. BC carried out the FTIR and incubation studies, and helped draft the manuscript. VB carried out the EPR studies. AG completed the CD studies. PH helped design the study and draft the manuscript. FS carried out some of the preliminary FTIR experiments and helped draft the manuscript. AF completed the pH profile and the active-site titration study. KG and GB conceived the study, participated in its design and coordination, and drafted the manuscript. All authors read and approved the final manuscript."}
+{"text": "The human cytosolic sulfotransferases (hSULTs) comprise a family of 12 phase II enzymes involved in the metabolism of drugs and hormones, the bioactivation of carcinogens, and the detoxification of xenobiotics. Knowledge of the structural and mechanistic basis of substrate specificity and activity is crucial for understanding steroid and hormone metabolism, drug sensitivity, pharmacogenomics, and response to environmental toxins.
We have determined the crystal structures of five hSULTs for which structural information was lacking, and screened nine of the 12 hSULTs for binding and activity toward a panel of potential substrates and inhibitors, revealing unique "chemical fingerprints" for each protein. The family-wide analysis of the screening and structural data provides a comprehensive, high-level view of the determinants of substrate binding, the mechanisms of inhibition by substrates and environmental toxins, and the functions of the orphan family members SULT1C3 and SULT4A1. Evidence is provided for structural "priming" of the enzyme active site by cofactor binding, which influences the spectrum of small molecules that can bind to each enzyme. The data help explain substrate promiscuity in this family and, at the same time, reveal new similarities between hSULT family members that were previously unrecognized by sequence or structure comparison alone. We metabolize many hormones, drugs, and bioactive chemicals and toxins from the environment. One family of enzymes that participates in the metabolic process consists of the cytosolic sulfotransferases, or SULTs. SULTs have a variety of mechanisms of action: sometimes they inactivate the biological activity of the chemical; at other times, the enzymes make the chemical more toxic. Humans have 12 distinct SULT enzymes. Determining how each of these human enzymes recognizes and distinguishes between the thousands of chemicals we confront each day is essential for understanding hormone regulation, assessing environmental risk, and eventually developing better, more-effective drugs. We have studied the human SULT family of enzymes to profile which small molecules are recognized by each enzyme. We also visualized and compared the detailed structural features that determine which enzyme interacts with which molecule. By studying the entire family, we discovered new ways in which chemicals interact with each enzyme. Furthermore, we identified new inhibitors and inhibitory mechanisms. Finally, we discovered functions for many of the human enzymes that were previously uncharacterized. Structural genomics and substrate screening provide "chemical fingerprints" and insights into substrate promiscuity for the human family of drug- and hormone-metabolizing cytosolic sulfotransferase enzymes. Cytosolic sulfotransferases (SULTs) comprise a family of enzymes that catalyze the transfer of a sulfonate group from 3′-phosphoadenosine 5′-phosphosulfate (PAPS) to an acceptor group of the substrate. To date, 13 human cytosolic sulfotransferase (hSULT) genes have been identified; they partition into four families (SULT1, SULT2, SULT4, and SULT6). Recent progress in the structural biology and characterization of the catalytic mechanism of hSULTs has established that many family members have distinct, but overlapping, substrate specificities and that the enzymes have a sequential catalytic mechanism that is susceptible to substrate inhibition. Nevertheless, structural information has been lacking for several family members. The crystal structures of SULT1C3 bound to PAP, apo SULT1C2, a ternary complex of SULT1C2 bound to PAP and the environmental toxin pentachlorophenol (PCP), and SULT4A1 were solved at 3.2, 2.0, 1.8, and 2.2 Å resolution, respectively. It is generally agreed that sulfonation takes place via a sequential mechanism in which a ternary enzyme complex is first formed, followed by reaction and release of products.
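As background for the substrate-inhibition behaviour mentioned above (a textbook kinetic form, not a result from this paper), partial substrate inhibition is commonly modelled by adding an inhibitory binding term to the Michaelis-Menten rate law:

```latex
v = \frac{V_{\max}\,[S]}{K_M + [S]\left(1 + [S]/K_i\right)}
```

At low [S] this reduces to ordinary saturation kinetics, while at high [S] the [S]²/Kᵢ term, corresponding for example to a second substrate molecule lodging in the active site, depresses the observed rate.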
On the other hand, the structure of SULT2A1 bound to androsterone hints at an alternative interpretation. The family-wide structural comparison also suggests an additional or alternative explanation for the well-documented substrate-inhibitory effect. Previously reported cases of substrate inhibition have been attributed, for example, to two substrate molecules occupying the active site at the same time. In order to predict and understand the fate of xenobiotics and drug candidates in humans, it is essential to better understand the selectivity and specificity of binding and activity within the hSULT family. Although detailed analyses of individual structures have been very informative in this regard, a family-wide view has been missing. Based on these binding results, a set of 20 compounds that bound to at least one hSULT, plus 11 additional related compounds or known substrates, were used as a pool of potential substrates for assays of the enzymatic activity of hSULTs 1A1, 1A3, 1B1, 1C1, 1C2, 1C3, 1E1, and 2A1. We monitored the activity of each enzyme toward this panel. The combined ligand-binding and activity screens revealed a unique "chemical fingerprint" for each hSULT. SULT1C3 was active toward the phenolic compounds p-nitrophenol, 1-naphthol, 2-ethylphenol, 2-n-propylphenol, and 2-sec-butylphenol, as well as the steroid-related compounds α-zearalenol and lithocholic acid. SULT1C3 appeared to be most active with α-zearalenol (4.1 nmol/min/mg) and 2-ethylphenol (2.2 nmol/min/mg). These data suggest SULT1C3 may contribute to the metabolism of steroid and phenolic compounds. Finally, SULTs 2A1 and 1E1 are reported to metabolize steroids. The Protein Data Bank accession codes cited include a complex with ethane sulfonic acid (1Q1Q) and SULT2B1b with PAP and DHEA (1Q22)."}
+{"text": "At the heart of every reaction of every cell lies an enzyme, a protein catalyst. At its active site—a special pocket on its surface—it binds reactants (substrates) and rearranges their chemical bonds, before releasing them as useful products. Rearranging some bonds may require help from certain chemical elements that are present in trace amounts. Many enzymes place these elements at the center of their active sites to do the most critical job. Selenium is one such element. In large quantities, selenium is toxic, but, in trace amounts, it is absolutely essential for life in many organisms, including humans. Selenium is present in proteins in the form of selenocysteine, a rare amino acid that helps promote antioxidant reactions. These selenocysteine-containing proteins are called selenoproteins. One important selenoprotein is the enzyme methionine-R-sulfoxide reductase (MsrB) 1, whose job is to repair proteins injured by oxidative damage caused by sunlight, toxic chemicals, or a variety of other insults. In mammals, there are two other forms of MsrB, which also can efficiently perform this task, but use the abundant amino acid cysteine instead of selenocysteine. So why do cells go to the trouble and metabolic expense of acquiring selenium from the environment? In this issue, Hwa-Young Kim and Vadim Gladyshev explore the details of active-site chemistry of these three related enzymes, and show that the selenoprotein form employs a different catalytic mechanism. The authors began by identifying three key amino acids in the active site of the cysteine-containing forms, which did not occur in the selenoprotein MsrB1. When any of these amino acids were mutated, the activity of the cysteine-containing enzymes was greatly diminished.
This result indicates that these amino acids likely play a role at the active site, a supposition supported by previous work on related enzymes in bacteria. Kim and Gladyshev next systematically mutated MsrB1 to include one, two, or all three of these amino acids, and discovered that inclusion of one or any combination of them diminished activity of the selenocysteine-containing enzyme. This suggested that while these amino acids support the mechanism of the cysteine-containing forms, they interfere with the mechanism of the selenoprotein. Not surprisingly, when the selenium was removed from MsrB1, the enzyme was significantly impaired. But when the three amino acids were added to this crippled enzyme, they restored some of the diminished activity, probably by carrying out the same mechanism they do in the cysteine-containing enzymes. The authors then inserted a selenium atom into each of the cysteine-containing enzymes, in the same spot in the active site where it sits in MsrB1. They found that the initial activity of each enzyme was increased over 100-fold, indicating the inherent capacity of selenium to promote catalytic activity. These souped-up enzymes were unable to complete the reaction, however, because they lacked other features of MsrB1's active site. Further scrutiny of the enzymes revealed these critical features, and inserting them allowed the artificial selenoproteins to carry out the entire reaction. The authors suggest the explanation for these findings relates to a difference in the catalytic mechanism of selenocysteine- and cysteine-containing enzymes. The substrate for both enzyme types, methionine-R-sulfoxide, is found within oxidized proteins. The job of both enzymes is to reduce this compound back to the amino acid methionine. Both do so by accepting an oxygen atom from the sulfoxide. In the presence of selenium, the oxygen temporarily binds to the selenium. The selenium's electrons then shift to bond with a sulfur on a neighboring cysteine amino acid, kicking out the oxygen as part of a water molecule. Finally, the selenium-sulfur bond is broken and the enzyme is restored to its original state by the intervention of thioredoxin, a ubiquitous cell molecule whose job is to undo just such temporary linkages in a wide variety of enzymes. Without selenium, the oxygen binds directly to sulfur, and thioredoxin intervenes to form the water and restore the sulfur. This reaction occurs in fewer steps, but is slower. The authors propose that the evolution of selenium-containing MsrB1 from cysteine-containing forms was likely favored by the higher rate of reaction it offered, although this trend is likely limited by the requirement for changes in other portions of the enzyme to accommodate the trace element. The authors suggest that selenium provides inherent catalytic advantages to certain types of enzymatic reactions, even though utilization of these advantages is sometimes tricky. If so, manipulation of related enzymes by insertion of selenium may increase their catalytic efficiency, perhaps much above that designed by nature. This may offer advantages for some biotechnology and biomedical applications that depend on antioxidants. —Richard Robinson"}
+{"text": "The sugar on your table and the oxygen in the air don't spontaneously ignite, but why not?
The answer is that the conversion from reactants—sugar plus oxygen—to products—carbon dioxide plus water—requires the reactants to first adopt an extremely unstable configuration, called the transition state, in which their bonds are weakened, but newer, stronger ones have not yet formed. The "energy hill" that separates reactants from the transition state is just too high, so your sugar remains stable at room temperature. Not so inside a cell, where enzymes catalyze thousands of different reactions that would take days, or millennia, without them. There, a reactant—called a substrate—fits into the enzyme's active site, a pocket or groove on its surface. The active site is lined with chemical groups whose shape and charge complement the shape and charge of the substrate, positive meeting negative, bump nestling into hole. But while the reactant fits in nicely, much of the catalytic power of the enzyme has been thought to be derived from making an even better fit with the transition state. To do this, the enzyme first forms weak, temporary bonds with the reactant. The shape and charge of the active site are such that, as the reactant deforms into the transition-state configuration, those bonds become stronger. Thus, the enzyme can stabilize the transition state, lowering the height of the energy hill and thereby increasing the probability that the reactants will convert into products. Enzymes typically speed up a reaction by many orders of magnitude—a rate increase of a trillion-fold is routine for enzymes. Shape and charge complementarity between enzyme and substrate have been proposed as keys to enzyme function, but are both equally important? That question is devilishly hard to answer, for the most fundamental of reasons: shape and charge are interdependent in most cases, and altering a molecule's shape also changes its charge distribution. In a new study, Daniel Kraut, Daniel Herschlag, and colleagues separate the two effects and show that, for at least this one enzyme, charge makes only a modest contribution to catalytic power. The enzyme ketosteroid isomerase (KSI) rearranges the bonds within its substrate, a multi-ring steroid molecule, by shifting a hydrogen ion from one carbon to another. One step in this process is the formation of two weak, temporary bonds, called hydrogen bonds, between KSI and an oxygen atom on the substrate. As the substrate deforms into the transition state, this oxygen becomes partially negatively charged, and the hydrogen bonds become stronger. KSI can bind other molecules that fit the active site, including one called a phenolate anion. This compound has an oxygen atom in the same position as the steroid oxygen, but phenolate's oxygen is negatively charged, mimicking the transition state for the steroid. That charge can be made weaker or stronger by adding different chemical groups to the far end of the phenolate. Because these additions are made away from the active site, the shape of the molecule within the active site doesn't change, and the authors could evaluate charge independent of shape. The authors did not measure reaction rate directly, but instead measured a key factor that determines reaction rate, the strength of binding interactions formed to the variably charged phenolate anion—a simple-enough sounding procedure that nonetheless drew on the full range of tools in the modern chemist's toolbox, from NMR spectroscopy to calorimetry to X-ray crystallography.
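As an aside, the "energy hill" picture above can be made quantitative with the standard transition-state relation (a generic textbook estimate, not a figure from Kraut and Herschlag's study): the rate constant falls exponentially with the barrier height,

```latex
k \propto e^{-\Delta G^{\ddagger}/RT}
\qquad\Longrightarrow\qquad
\frac{k_{\mathrm{cat}}}{k_{\mathrm{uncat}}} = e^{\Delta\Delta G^{\ddagger}/RT}
```

so a trillion-fold (10¹²) rate enhancement at 298 K corresponds to transition-state stabilization of ΔΔG‡ = RT ln 10¹² ≈ 68 kJ/mol, or about 16 kcal/mol.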
Over the entire range of compounds tested, they found a difference in binding strength of only 1.5-fold, corresponding to an estimated change of at most 300-fold in the reaction rate. The authors propose that several other factors, including shape, each contribute modestly to catalysis. While these results are directly applicable to only KSI, they provide a window onto the factors affecting catalysis in many other enzymes. Calculations based on these results may allow estimation of the effects of charge in other enzymes that cannot be manipulated in this same way. The complementary experiment—altering shape while keeping charge constant—may be even harder, and remains to be done."}
+{"text": "Evolutionarily unrelated proteins that catalyze the same biochemical reactions are often referred to as analogous - as opposed to homologous - enzymes. The existence of numerous alternative, non-homologous enzyme isoforms presents an interesting evolutionary problem; it also complicates genome-based reconstruction of the metabolic pathways in a variety of organisms. In 1998, a systematic search for analogous enzymes resulted in the identification of 105 Enzyme Commission (EC) numbers that included two or more proteins without detectable sequence similarity to each other, including 34 EC nodes where proteins were known (or predicted) to have distinct structural folds, indicating independent evolutionary origins. In the past 12 years, many putative non-homologous isofunctional enzymes were identified in newly sequenced genomes. In addition, efforts in structural genomics resulted in a vastly improved structural coverage of proteomes, providing for definitive assessment of (non)homologous relationships between proteins. We report the results of a comprehensive search for non-homologous isofunctional enzymes (NISE) that yielded 185 EC nodes with two or more experimentally characterized - or predicted - structurally unrelated proteins. Of these NISE sets, only 74 were from the original 1998 list. Structural assignments of the NISE show over-representation of proteins with the TIM barrel fold and the nucleotide-binding Rossmann fold. From the functional perspective, the set of NISE is enriched in hydrolases, particularly carbohydrate hydrolases, and in enzymes involved in defense against oxidative stress. These results indicate that at least some of the non-homologous isofunctional enzymes were recruited relatively recently from enzyme families that are active against related substrates and are sufficiently flexible to accommodate changes in substrate specificity. This article was reviewed by Andrei Osterman, Keith F. Tipton (nominated by Martijn Huynen) and Igor B. Zhulin. For the full reviews, go to the Reviewers' comments section. The recent efforts in genome sequencing of organisms that inhabit a variety of environments, from deep-sea hydrothermal vents to Antarctic ice, revealed a surprising biochemical unity of these organisms, that is, the uniformity of the key gene expression mechanisms and metabolic pathways, and the enzymes that catalyze them. However, in certain cases, the same biochemical reaction is known to be catalyzed by two or more enzymes that share no detectable sequence similarity with each other.
In a previous study, in the early days of genome sequencing, we performed a systematic search for potential NISE by identifying all protein sequences listed in GenBank that, although assigned the same 4-digit Enzyme Commission (EC) numbers, had no detectable sequence similarity with each other. Our 1998 study suggested that NISE could be far more widespread than previously thought and paved the way to further recognition of candidate NISE catalyzing a variety of metabolic reactions. The goal of the present study was to make use of the vastly expanded sequence and structural data that are currently available, to generate a comprehensive list of NISE and to obtain insights into the evolution of alternative solutions for the same reaction through comparison of the phyletic patterns of their distribution. In the years elapsed since our 1998 analysis, several studies explored various groups of alternative enzymes catalyzing the same biochemical reaction, including those belonging to the same protein (super)families. Of the previously reported 16 EC nodes corresponding to apparent NISE where the three-dimensional (3D) structures were available for both forms, one case, β-galactosidase (EC 3.2.1.23), proved to be in error, as the catalytic domains of the two purportedly unrelated forms belong to the same fold and even the same structural superfamily. Among the 18 additional cases of candidate NISE, prediction of distinct structural folds for the two enzyme forms proved correct for 15 pairs. In two instances, the two isoforms turned out to possess the same fold, and one case had to be eliminated because the light-dependent and light-independent forms of this enzyme employ different electron donors and so, technically, catalyze different reactions. Many of the bona fide NISE that were represented by two or more versions with distinct folds had the TIM-barrel [(βα)8] fold. Two other enzymes, although initially assigned to the same EC 3.8.1.2, exhibit different stereo-specificities, which prompted assignment of the latter form to the new EC 3.8.1.9. Likewise, two ubiquitin thiolesterases, represented by UBP5_HUMAN and STALP_HUMAN, despite having the same EC 3.1.2.15, actually possess distinct activities, cleaving polyubiquitin chains linked, respectively, through Lys-48 and Lys-63 residues of ubiquitin. One reviewer raised the issue of 'moonlighting' proteins. Authors' response: Moonlighting is a very interesting phenomenon that is, however, only tangentially related to the issue of NISE. The very definition of "moonlighting proteins" as those that "have two different functions within a single polypeptide chain" refers primarily to enzymes having additional non-enzymatic functions. The above-cited reviews mention a single example of an enzyme with two entirely different enzymatic activities, the monomer of glyceraldehyde-3-phosphate dehydrogenase supposedly acting as uracil-DNA glycosylase, which still remains controversial. In contrast, multifunctional enzymes usually turn out to consist of two or more different domains. In all these examples of "multitasking", the evolutionary constraints are very different from those encountered by non-homologous enzymes that evolved to catalyze the same biochemical reaction.
Reviewer's comment to the authors' response: The problem of 'moonlighting' is surely that the evolutionary pressures on the alternative, non-enzymic, function(s) may be different from those on the catalytic function and thus cannot be ignored when considering the pressure on the catalytic function. Of course, much of the literature assumes that the catalytic function is the main one, but in some cases this may be doubtful. Reviewer 2: There are also cases of catalytic promiscuity, where an enzyme catalyses distinct types of reaction. Authors' response: Catalytic promiscuity, when alternative chemical reactions take place in essentially the same active site, is an important factor in enzyme evolution (see the references cited in the text). As discussed above, we believe it to be a major source of NISE. Reviewer 2: The suggestion that alternative solutions be designated within the EC system as "class I, class II, and so on" might be clarified, since it would constitute a departure from the strict reaction-catalysed criterion and could risk detracting from its present utility. The authors should clarify what "classes" they propose should be included; would it be all analogous and homologous enzymes encompassed by each EC number? In some cases such material may be dealt with, more adequately, by complementary databases, which rely on the EC system. For enzymes that have different mechanisms of action, the problem might best be resolved through systems such as the MaCiE database. Authors' response: Adding the notion of a "class" to the EC system is only one of a number of possible ways to deal with NISE. Having supplementary specialized databases of enzyme mechanisms, such as MACiE, or sequence-based profiles, such as PRIAM, would be less intrusive but would force the users to rely on those outside sources for important information on the diversity of the enzymes in each EC node. This work identified NISE for almost 8% of all EC nodes, and many more EC nodes include divergent enzyme isoforms that still belong to the same superfamilies. Given the scope of the problem, we felt that it should be brought to the attention of Prof. Tipton and other members of the Enzyme Commission. Reviewer 2: The authors refer to the "strict reliance on substrate specificity" being "a cause for certain confusion when the EC numbers are applied to mapping reactions on the metabolic map" and give the example of the enzymes that could catalyse the oxidation of D-glucose. It is not clear why they regard this as a problem. Surely it is beneficial to be able to find all the enzymes that may contribute to a metabolic process, as, for example, in the approach adopted by Reaction Explorer? Authors' response: Although we agree in principle, the decision on whether a certain pathway is operational in a certain organism often hinges on the presence or absence of a small group of pathway-specific enzymes. In such cases, non-critical application of EC numbers may lead researchers to an erroneous assertion of the presence - or absence - of a given reaction (and hence the whole pathway) in the given genome. Reviewer 2: A problem, which the authors touch upon, is that of broad-specificity enzymes, such as alcohol dehydrogenase (EC 1.1.1.1) and monoamine oxidase (EC 1.4.3.4), where the reaction is described in general terms, with little or no indication of all the substrates that may be involved. Such information, where known, can be found in the BRENDA database.
Authors' response: Although we agree, we have to note that this arrangement makes the BRENDA database the sole provider of this critically important information. In our opinion, the EC system might benefit from inclusion of this type of data. Reviewer's comment to the authors' response: BRENDA is not the only source of specificity data and I did not intend to imply that it was. KEGG also gives such information. We collaborate closely with both databases, and take the view that if they are doing a good job, why should we want to duplicate them? I am still not clear what you may have in mind by 'adding a class'. We have received many suggestions in the past for an additional EC digit to cover several diverse areas, including mechanism, medically-relevant enzymes, enzymes from different species, isoenzymes etc. So far we have decided that this would not be helpful. The alternative might be adding a 'NISE' field to each entry but, as mentioned above, a direct link to the corresponding PRIAM page might be more helpful. This paper extends the authors' previous work identifying analogous enzymes more than a decade ago. The authors expanded their search methods by utilizing both the Swiss-Prot database and the KEGG database to better associate proteins with enzymatic activity. By the authors' own admission, no strong trends were observed in the dataset, but they were able to identify a few very interesting patterns, including enrichment of analogous enzymes among glycoside hydrolases, enzymes involved in oxidative stress relief, and among the TIM-barrel and NAD(P)-binding Rossmann structural folds. As expected, the authors find that the number of analogous enzymes scales with increasing genome size. The authors discuss the evolutionary origins of some of the trends noted above, as well as the limitations imposed on their identification schemes by the EC numbering system itself. Overall, I do like this paper a lot, especially because in my lab we have recently become interested in one particular family of analogous enzymes. So, I enjoyed looking at a bigger picture, while picturing our own work in its context. The analysis scheme employed is straightforward and utilizes proven bioinformatic methodology. The authors appear to utilize conservative criteria for inclusion of data for the analysis, so the results are likely to under-predict rather than over-predict analogous enzymes. The results greatly expand the listing of analogous enzymes, and the extensive supplementary material provides useful information for specialists interested in any particular family of enzymes. The inclusion of numerous genomes through the use of the KEGG database allowed analysis of analogous enzymes to be conducted on a sufficient scale to give a fairly good approximation of their relative abundance and the importance of analogous inventions during evolution. The coverage of structural, sequence, and biological information seems to be such that the boundaries for the proportion of analogous enzymes (~10% of the EC nodes) seem unlikely to change significantly with future genome sequencing. Lack of true novelty in this analysis is a minor quibble, as it generated a useful resource in and of itself, and the specific cases highlighted are of interest in a number of fields. The use of EC number annotations may be suspect in some cases where the traditional sequence-similarity-based annotation methods are unreliable or where the EC definitions are inadequate.
I can offer a couple of examples, where we happened to dig around a little bit. For instance, Table S1 lists a couple of cellulases (entries #78 and #91) in glycoside hydrolase families 10 and 11. It appears that there are no experimentally defined cellulases in these families, and the enzymes shown are putative xylanases. It also might be just a matter of semantics, since these enzymes are likely to be hemicellulases. Anyway, the authors are fully aware of the limitations imposed, and there is no way to verify the available experimental evidence for each and every entry in such a large-scale effort. The vast majority of the enzymes included in the study are readily identified by sequence-similarity-based annotations, so the conclusions as a whole are sound. Authors' response: We fully agree with these comments. Supplementary Table S1: an update to the 1998 listing of analogous enzymes. Predicted analogous enzyme pairs from the 1998 list that have been removed from the new list are highlighted in yellow. The EC numbers are hyperlinked to the ENZYME database entries, examples are linked to UniProt, structures to PDB, folds to SCOP, families to Pfam, and references to PubMed. Supplementary Table S2: a new listing of analogous enzymes. The EC numbers are hyperlinked to the ENZYME database entries, protein entries are linked to the NCBI protein database and UniProt, PDB entries to PDB, SCOP folds to SCOP, protein superfamilies to SUPERFAMILY, protein families to Pfam, and references to PubMed. Supplementary Tables S3-S9."}
+{"text": "A direct link between the names and structures of compounds and the functional groups contained within them is important, not only because biochemists frequently rely on literature that uses a free-text format to describe functional groups, but also because metabolic models depend upon the connections between enzymes and substrates being known and appropriately stored in databases. We have developed a database named "Biochemical Substructure Search Catalogue" (BiSSCat), which contains 489 functional groups, >200,000 compounds and >1,000,000 different computationally constructed substructures, to allow identification of chemical compounds of biological interest. This database and its associated web-based search program (http://bisscat.org/) can be used to find compounds containing selected combinations of substructures and functional groups. It can be used to determine possible additional substrates for known enzymes and for putative enzymes found in genome projects. Its applications to enzyme inhibitor design are also discussed. Nomenclature is of fundamental importance in science. Missing connections between metabolites are a major problem in metabolic modelling. Just as gene-sequence studies have revealed many putative enzymes with unknown substrates (orphan enzymes), metabolomic studies are revealing a plethora of orphan substrates, which makes the need for rational approaches to identifying the enzymes involved in their formation and breakdown a pressing concern. In this context, orphan substrates may be defined in different ways; Poolman et al., for example, have proposed one such definition. The relationship between a chemical structure and its reactivity has been well investigated in pharmacology, the first step of which is pharmacophore searching prior to more detailed molecular analysis. There are a variety of tools for substructure searching, but their main purpose is drug design rather than novel pathway discovery.
It is also hoped that BiSSCat will be useful for preliminary screening prior to more detailed molecular modelling studies and QSAR analysis. In the field of organic chemistry, functional groups have been defined as atoms or atom groups that show relatively constant characteristics even when connected to different structures. The most reliable clue for guessing the function of putative genes is protein sequence similarity to well-investigated gene products, but such annotations have to be interpreted with caution. This is because they inevitably include uncertainty associated with each of the steps from enzyme studies to genome annotation. Most enzyme-specificity studies are not exhaustive, because experimentalists are generally interested in identifying the presumed physiological substrate(s) and inhibitor(s), or artificial substrates that make enzyme assays easier to perform. Substitution of even a single amino-acid residue can cause changes in substrate specificity or reactivity. The label of being "similar to" well-investigated genes provides a suggestion about function, but does not necessarily describe functional identity, which further increases the uncertainty associated with annotations. Although some enzymes have very narrow substrate specificity, others are known to display wider substrate specificity. Metabolome analyses have uncovered many secondary metabolites that appear to be species specific, and it has been suggested that broad substrate specificity may contribute to metabolome diversity. We propose that studies on enzymes or compounds that have been less thoroughly investigated should be made without making any assumptions about enzyme specificity. This provides a starting point for the consideration of possible combinations of recognized and putative enzymes (gene products) and their functions (enzyme reactions) in an expanding set of gene products and metabolites. Substrate specificity is generally described using a free-text description of the functional groups involved, the generic names of compounds, or one or more equations that describe the reaction(s) catalysed. These are subsequently used in genome annotations. Enzymes and their substrates are sometimes identified by class names. For example, the names alcohol dehydrogenase (EC 1.1.1.1) and amine oxidase (EC 1.4.3.4) give no indication of the breadth of the specificities of these enzymes. Indeed, it is likely that several possible substrates for such enzymes are not registered as substrates in reaction databases, because they have not been studied. Such a lack of precision highlights the need to make the relationship among compounds' names, class names, substructures and functional groups clear. In this paper, we have defined substructures that include known functional groups, and made it possible to obtain chemical compounds from biochemical databases. We have also provided a web-based tool (http://bisscat.org/) for searching defined substructures and obtaining a list of compounds containing them. One can combine a number of defined substructures to produce more complicated substructures, and can search for enzymes based on functional groups. As an example of what can be achieved using BiSSCat, we have determined which substructures are commonly used by a particular group of enzymes, and then proposed some possible candidate compounds that could act as substrates of those enzymes. Since substructure and location are important for all ligand-binding processes, this approach should also be of wider value. Furthermore, it should help to connect nomenclature and machine-readable expressions of chemical compounds, and to fill in the gaps in our knowledge of genomic and metabolomic relationships. The merit of having this sub-database is that one can search for any substructure using a number of names without bothering about the definition of SUBSTRUCTURE entries unless one has a very complicated query. Most functional groups referred to in the IUBMB Enzyme List are covered, so the selection of FGROUP entries is currently biased for use with enzymatic reactions. For instance, many organic functional groups, such as alcohols, are further divided into their subgroups, whereas inorganic functional groups are not so detailed. It is hoped that BiSSCat users will give us feedback on any omissions. The database is designed so that a group of substructures can share one FGROUP, and a single substructure can belong to two or more FGROUPs. This rule might seem complicated, but it reflects the situation found in nature. For example, aldehyde, carboxylate, and amide groups belong to the carbonyl functional group, whereas the N-formyl group belongs to both the aldehyde and amide functional groups. Enzymes and other proteins often recognize more of a substructure than just the functional group(s), and the threshold for distinguishing between these is not always obvious. Therefore, FGROUP assigns names not only to functional groups but also to some larger substructures, such as sugars, which are specifically recognized by glycosyltransferases, glycosidases and other enzymes. The database currently comprises 241,709 chemical compounds whose non-hydrogen atoms are classified into 2,736 different ATOM entries. Each ATOM entry is given an ID number (ATOM0001–ATOM2736) based solely on its order of inclusion in the BiSSCat database. There are also 1,857,839 SUBSTRUCTURE entries in the database. Serial ID numbers are also assigned to these SUBSTRUCTURE entries (S0000001–S1857839) and, as discussed below, the IDs bear no relation to substructure type. 489 FGROUP entries were assigned for the current release; the full list is available at http://bisscat.org/fgroup.html. ID numbers are given to FGROUP entries in such a way that they approximate to a hierarchical classification, although the FGROUP list does not strictly reflect a classification of physicochemical or biochemical characteristics. Since the classification of some functional groups can be based on a number of different aspects, it is impossible to describe the classification of functional groups in a simple tree structure. There are 2,357 instances in the database where all atoms in a functional group are part of those in another functional group, and 8,625 cases where two functional groups share some atoms. The FGROUP list can be expanded to accommodate newly defined functional groups or substructures in the future. These FGROUP entries correspond to 660,946 recognized SUBSTRUCTURE entries and to 4,964,487 non-hydrogen-atom locations among the KEGG compounds. The web-based search tools are available at http://bisscat.org/, and further details are provided on the website's help page. The user must install an Adobe SVG plug-in (http://www.adobe.com/svg/) and enable cookies in order to use these tools. Screenshots of the web pages are shown in the accompanying figures. BiSSCat provides a number of alternative ways of looking up chemical compounds or biochemical substructures; the many-to-many FGROUP/SUBSTRUCTURE relationship is sketched below.
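Before walking through the interface, the many-to-many rule between SUBSTRUCTURE and FGROUP entries described above can be pictured as a small mapping; the entries below are hypothetical placeholders, not actual BiSSCat records.

```python
# One substructure may belong to several FGROUPs, and several substructures may
# share one FGROUP: N-formyl is both an aldehyde and an amide, and aldehydes,
# carboxylates and amides are all carbonyl groups (as stated in the text).
substructure_to_fgroups = {
    "S_aldehyde":    {"aldehyde", "carbonyl"},
    "S_carboxylate": {"carboxylate", "carbonyl"},
    "S_amide":       {"amide", "carbonyl"},
    "S_N_formyl":    {"aldehyde", "amide", "carbonyl"},  # belongs to two FGROUPs plus carbonyl
}

def members_of(fgroup):
    """Invert the mapping: all substructures assigned to a given FGROUP."""
    return {s for s, groups in substructure_to_fgroups.items() if fgroup in groups}

print(members_of("carbonyl"))                # all four substructures
print(substructure_to_fgroups["S_N_formyl"]) # {'aldehyde', 'amide', 'carbonyl'}
```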
Here we give an outline of the web-based program. The main object types, i.e. functional group, compound and enzyme, can each be searched in three different ways. The first way is via an alphabetically ordered list of the objects' names. The second way is to use the hierarchical classification tree. The difference between an FGROUP and a compound classification can be explained using "hexopyranose" as an example: hexopyranose is a word used to describe a class of compounds (hexose sugars in the six-membered pyranose ring form) rather than a single structure, so it is handled differently in the compound classification and in the FGROUP hierarchy.
The third way of searching the database, the structure search option, needs further explanation. Searches of FGROUP, SUBSTRUCTURE and compound entries can be based on elements, electrostatic and physicochemical properties, and graph topology. For example, aryl carboxylate contains C2O2, with the central carbon atom being a carboxylate carbon (cx), the other carbon being an aromatic carbon (ar), and the two oxygens being oxygen anions (ep2). There are 17,925 SUBSTRUCTURE entries containing C2O2, which include many FGROUP entries that are not carboxylates. Among them, 2,988 entries have one carboxylate carbon and two negative oxygen atoms, and these belong to six FGROUP entries that have "carboxylate" in their name; 305 SUBSTRUCTURE entries are obtained when "aromatic" is added to the search condition.
Another option is to search for compounds based on structural information. Using the "Multiple Substructure Search" option, one can find compounds based on the presence or absence of substructures or functional groups. This can greatly increase the specificity of the search and reduce the number of compounds to consider. For example, there are 55 compounds in the database that have "carboxylate" in their name, but 22,160 compounds that contain the carboxylate structure. There are 685 compounds containing adenine in the database, but only 62 compounds contain both carboxylate and adenine; of these, 28 do not contain a thioester group.
FGROUP entries in reaction equations can also be searched to find enzymes. For example, reaction equations that include generic names, such as "alcohol + NAD = aldehyde + NADH", can be searched. Partial equations, such as "alcohol = aldehyde" or "amine = aldehyde", can also be used. There are currently more than 4,030 enzymes with assigned EC numbers (http://www.enzyme-database.org/), many of which use a class name in the reaction equation. As an example, EC 1.1.1.1 comprises enzymes that oxidize alcohols with the concomitant reduction of NAD+.
When more than one specific compound is named as a substrate/product, it is possible to deduce substructures that are common to each substrate and/or product. For example, EC 2.1.1.50 (loganate O-methyltransferase) acts on two compounds, loganate and secologanate; the structural difference between these two substrates is, therefore, not sufficient to prevent recognition by the enzyme. Substructures were divided into two groups: those containing reaction-centre atoms, and other substructures that might be recognized by the enzyme. A compound that has both of these attributes may be considered a possible candidate substrate for that enzyme.
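A presence/absence filter like the "Multiple Substructure Search" described above can be mimicked with RDKit; the SMARTS patterns below are rough stand-ins for the database's own substructure definitions, not the BiSSCat entries themselves.

from rdkit import Chem

carboxylate = Chem.MolFromSmarts("[CX3](=O)[OX1-]")
thioester   = Chem.MolFromSmarts("[CX3](=O)[SX2]")
adenine     = Chem.MolFromSmiles("Nc1ncnc2[nH]cnc12")   # query given as a molecule

def keep(smiles, required, forbidden):
    # Keep a compound only if all required substructures are present
    # and none of the forbidden ones is.
    mol = Chem.MolFromSmiles(smiles)
    return (all(mol.HasSubstructMatch(q) for q in required)
            and not any(mol.HasSubstructMatch(q) for q in forbidden))

print(keep("Nc1ncnc2[nH]cnc12", [adenine], [thioester]))   # True: adenine, no thioester
print(keep("CC(=O)SC", [adenine], [thioester]))            # False: thioester, no adenine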
In a preliminary analysis, candidate substrates were defined as those compounds having one substructure involving a reaction centre and at least three substructures found in a reported substrate for a given enzyme. Application of these criteria to the compounds in the BiSSCat database showed that 1,912 known substrates have more than 10 related structures that were, therefore, candidate substrates; 1,166 known substrates had between 1 and 10 other candidate substrates; and 934 had no alternative candidate substrates.
The BRENDA database (http://www.brenda.uni-koeln.de/) provides additional data on the specificities of many enzymes. In a situation where no information other than the reaction equation is available, the best one can do is to find compounds with the same types of atoms or functional group(s). Substructure searches of the BiSSCat database can be used to find atoms in the same environment. Among the compounds that are not currently known to be associated with any enzyme reaction, 62,402 compounds have the same type of atoms as those involved in reported enzyme reactions, and 2,182 of these are from the KEGG database.
In cases where only a single specific reaction is provided, it is not possible to determine commonly used substructures, as there is no means of making a comparison. Some of the enzymes in the IUBMB Enzyme List appear to have narrow substrate specificities, so there might seem to be little need to predict other possible substrates; however, this may simply reflect a lack of knowledge, and such information would be valuable when the function of a corresponding orthologous gene product needs to be found. Reaction centres can be defined as in the RPAIR database.
One example is a group of compounds that include the 5-methylcytidine residue SUBSTRUCTURE entry (S0265987), i.e., deoxy-5-methylcytidine (1), DNA 5-methylcytosine (2), 5-methyldeoxycytidine diphosphate (3) and 5-methyl-2′-deoxycytidine (4). Deoxy-5-methylcytidine can be balanced in metabolic modelling, as it is known to be involved in two enzyme reactions (EC 2.1.1.54 and EC 2.7.4.19). DNA 5-methylcytosine and 5-methyldeoxycytidine diphosphate are involved in only one reaction each, and are examples of "orphan metabolites", as defined by Poolman et al.
Although the approach taken in this study cannot ensure that a compound is truly a substrate for a given enzyme, it should help to minimize the number of candidate enzymes and compounds for experimental investigation. Further analysis of substructure changes during a reaction using RPAIR revealed that there were sometimes no corresponding products for the proposed substrates. A solution to this problem might be the addition of potential products to compound databases; however, it would first be preferable to confirm the existence of the predicted substrates/products experimentally, to avoid the inclusion of misleading information.
The BiSSCat substructure searching method is applicable to finding possible substrates having binding groups as well as a reaction centre. The process could also be applied to identifying compounds that are unlikely to be substrates, or that might be inhibitors of a given enzyme. For example, EC 1.4.3.4 (monoamine oxidase) acts on many compounds that contain a primary amine group. If such a compound also contains a carboxy group, this can prevent the compound from binding to the enzyme. The presence of an alpha-methyl group does not prevent binding of the substrate to the enzyme, but it does block conversion of the substrate into the product. If information about binding groups and blocking groups is already known, BiSSCat can be used as an aid to the design of inhibitors.
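The candidate-substrate criterion stated at the start of this passage (one reaction-centre substructure plus at least three shared binding substructures) reduces to simple set operations once each compound is described by its substructure IDs; the IDs below are placeholders, not real BiSSCat entries.

def candidates(compounds, reaction_centres, binding_subs, k=3):
    # compounds: {name: set of substructure IDs}
    return [name for name, subs in compounds.items()
            if subs & reaction_centres and len(subs & binding_subs) >= k]

known_substrate = {"RC1", "S1", "S2", "S3", "S4"}   # substructures of a reported substrate
pool = {
    "cpd_A": {"RC1", "S1", "S2", "S3"},   # qualifies
    "cpd_B": {"RC1", "S1"},               # too few shared binding substructures
    "cpd_C": {"S1", "S2", "S3", "S4"},    # lacks the reaction centre
}
print(candidates(pool, {"RC1"}, known_substrate - {"RC1"}))   # ['cpd_A']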
Such data are, in many instances, not presented explicitly in extant databases, although related information is collected in resources such as ERGO (http://www.ergo-light.com/) and UMBBD. It is intended to further enrich BiSSCat with data about interactions between proteins and small compounds from the existing literature that are not in the present source databases, and to incorporate the results of future experiments. Several newer techniques, such as text mining of the enzyme-assay literature, may assist in this.
Given that the objectives of searching complete chemical structures and substructures are usually different, the search methods used are closely related to how the structures are represented. The first step of our method is to divide a chemical compound into its inherent substructures, which is similar to the first step in obtaining a systematic nomenclature for chemical compounds, as with the IUPAC rules, and in a variety of linear representations of chemical compounds, such as WLN.
Our method takes advantage of a pre-computed and assigned set of substructures, making the search faster and interpretation easier. The manual assignment of FGROUP was the most time-consuming process in the construction of the BiSSCat database, but it was an important step, as it provides a direct correspondence between the generic names described in the IUBMB Enzyme List and the concrete substructures found in chemical-compound databases. This should make it easier for computer algorithms to distinguish between generic names and specific names. More importantly, it also makes it easier to understand the meaning of substructures found in computational analysis, which could improve our understanding of the structure-function relationships of ligand-binding processes, including those of enzymes.
Both the database and the search program have scope for further development, for instance by allowing the user to define distances between substructures, to input substructures in SMILES or SMARTS format, or to use a structure-drawing tool. These aspects will be addressed in future releases. We believe that our method should be of value in gene-product identification and in increasing our understanding of previously unknown metabolic pathways or drug-selection processes.
The SUBSTRUCTURE database was constructed using data on the structures of 10,046 and 247,617 chemical compounds, the smaller set derived from the KEGG database. In order for a reaction to be catalysed, a chemical compound has to contain the appropriate functional groups, also referred to as the reaction centre. The KEGG/RPAIR database describes which atom in a substrate corresponds to which atom in a product in each enzyme reaction. The RPAIR database also defines reaction-centre atoms, which undergo significantly more changes than the other atoms in the reactant pair during a reaction. These reaction-centre atoms are utilized in this study.
Biochemical substructures are computationally defined using seven attributes: atom (ATOM), vicinity (VICI), bond (BOND), skeleton (SKEL), ring (RING), fragment (FRAG) and conjugate (CONJ). Every substructure is represented as a graph object, with non-hydrogen atoms and bonds described as nodes and edges, respectively. Each substructure is distinguished in terms of its elements, its electrostatic and physicochemical properties (for example logP, the octanol/water partition coefficient), and its topology. Detailed definitions of the substructure types are provided below.
ATOM entries are distinguished by their elements and by their electrostatic and physicochemical properties, which are calculated for each non-hydrogen atom of each compound.
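The graph representation just described (non-hydrogen atoms as nodes, bonds as edges) is easy to make concrete. The sketch below reduces atom typing to bare element symbols for brevity, and its vicinity function anticipates the VICI definition given next.

# Acetamide, CC(=O)N, with hydrogens folded into their heavy atoms.
atoms = {0: "C", 1: "C", 2: "O", 3: "N"}
bonds = {(0, 1): 1, (1, 2): 2, (1, 3): 1}    # (i, j) -> bond order

def vicinity(centre):
    # A VICI-style substructure: a central atom plus the atoms attached to it.
    neighbours = [j for (i, j) in bonds if i == centre] + \
                 [i for (i, j) in bonds if j == centre]
    return {idx: atoms[idx] for idx in [centre] + neighbours}

print(vicinity(1))   # the amide carbonyl centre with its O, N and methyl neighbours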
Hydrogen atoms are not assigned individual ATOM entries, but are included with their adjacent non-hydrogen atoms.
VICI entries are defined in terms of ATOM entries, and the other substructure types are in turn defined in terms of VICI entries. A VICI entry is defined as a central atom together with the atoms attached to it. Many functional groups correspond to VICI entries, e.g., carbamate, N-acetyl and phosphate. A BOND entry is defined as a central bond between a pair of atoms, such as an amide bond. A SKEL entry is defined as a carbon skeleton/backbone; examples include alkyl and aryl groups.
A RING entry is defined as a cyclic substructure, covering 3-, 4-, 5- and 6-membered, or larger, rings. Common examples are the phenyl, imidazole and pyrrole rings. Ring properties are also added to each ATOM entry if the atom is part of a 3-, 4-, 5- or 6-membered ring. These additional properties were added because 3- and 4-membered rings have especially strong ring strain, which gives rise to their specific reactivities, while 5- and 6-membered rings are ubiquitous substructures, found in many sugars and other compounds, and many reactions are known to produce 5- and 6-membered rings. Larger cyclic substructures are not described in ATOM entries but are included in RING entries.
A FRAG entry is defined as a fragment obtained when all rotatable bonds are cut. A rotatable bond is defined as follows: only a single (saturated) bond that is not included in any ring substructure can be rotated. Amide bonds are not rotatable, as they are known to have an energy barrier that prevents rotation. Two cases that remain to be incorporated are those where steric hindrance prevents rotation and those where an enzymatic reaction assists rotation (cis-trans-isomerases). A bond consisting of one hydrogen atom and one non-hydrogen atom is also excluded. Using this definition, many biologically important polycyclic structures, such as purines, pyrimidines, hemes or sterols, are obtained. Considering rotatable bonds should also be helpful in understanding the conformational changes that occur when a chemical compound is accepted by an enzyme; in pharmacology, an important step in drug design is determining the number of rotatable bonds of candidate medicinal compounds.
A CONJ entry is defined by the conjugated double or triple bond (conj) or aromatic ring (ar) property. The delocalization of electrons is known to lead to unique physicochemical characteristics and reactivities. CONJ indeed includes many important substructures, such as 2-oxo carboxylate and triphosphate, which are found widely in biochemistry, and carotenoids and pheophytins, which are also found in pigments.
Substructures may be derived from more than one substructure type, which is why the IDs bear no relation to the type of substructure. For example, a phenyl ring is derived not only from the definition of RING entries, but also from that of FRAG entries (and of CONJ entries in most cases). When a phenyl ring is connected to a heteroatom, the ring will also have a SKEL entry."}
+{"text": "Functionalizing single molecules on surfaces has shown great potential for the bottom-up construction of complex devices on a molecular scale. We discuss the growth mechanism of the initial layers of polycyclic aromatic hydrocarbons on metal surfaces, review our recent progress on molecular machines, and present a molecular rotor with a fixed off-center axis formed by chemical bonding. These results represent important advances in molecular-based nanotechnology.
Perylene and tert-butyl zinc phthalocyanine [(t-Bu)4-ZnPc] molecules are used to discuss these two issues, respectively. Our discussion is mainly based on results obtained by scanning tunneling microscopy (STM); for the ordered perylene monolayer, a periodicity of about 0.6 nm was resolved along the [1 10] direction of the substrate.
Symmetry determines the number of domains of the molecular superstructure on a substrate. Generally, molecules and substrates are chosen to meet the requirements for epitaxial growth with a single domain orientation. For perylene on Ag(110), the two-fold symmetry of the Ag(110) surface leads to two domain orientations mirrored at a crystal plane of the Ag(110) substrate. In contrast, for perylene on Au(111), a single domain orientation was observed, because both the Au(111) surface and the molecular superstructure show a six-fold symmetry. Single domain orientation is desirable for organic thin films applied in devices.
When the deposition rate was increased from ~1 × 10−3 ML/min, the rate giving the uniform monolayer, to ~1 × 10−2 ML/min, many metastable structures appeared. In addition, Seidel et al. reported some non-commensurate superstructures for perylene on Ag(110) obtained under slightly different growth conditions.
The structure of the second layer can also be determined. When additional molecules are evaporated onto the first layer, two-dimensional molecular islands with an ordered arrangement form. The molecules form an oblique two-dimensional Bravais lattice with two molecules at each Bravais lattice point. We conclude that the molecules in the second layer adopt a tilted configuration, instead of a flat-lying one, based on the measured height of the islands and the molecular density derived from the STM measurements; the arrangement resembles that in the ab-plane of the monoclinic α-perylene single crystal. The growth of the second layer is thus completely different from that of the first layer. Some previous studies show that perylene molecules assemble with the π-plane oriented almost or completely parallel to the substrate in the multilayer region.
The difference in growth mode between the first two layers is ascribed to their different environments with respect to the competition between molecule-molecule and molecule-substrate interactions. The dominant force in the growth of the first layer is the molecule-substrate interaction, whereas the growth of the second layer is dominated by molecule-molecule interactions, which results in the growth of ordered molecular islands. The existence of the first layer leads to a remarkable decrease of the interaction between the substrate and the molecules of the second layer. We have observed a similar growth phenomenon for iron phthalocyanine molecules on the Au(111) surface.
The motion of single atoms or molecules plays an important role in nanoscale engineering at the single atomic or molecular scale. We found that (t-Bu)4-ZnPc molecules on the reconstructed Au(111) surface possess a well-defined rotation axis fixed on the surface, and that these single-molecule rotors form large-scale ordered arrays due to the reconstruction of the gold surface. Gold adatoms at the surface function as the stable contact of the molecule to the surface: an off-center rotation axis is formed by a chemical bond between a nitrogen atom of the molecule and a gold adatom, which provides a well-defined contact while the molecules can adopt rotation-favorable configurations.
We deposited (t-Bu)4-ZnPc molecules on Au(111); the molecules adsorb predominantly at the elbow positions of the surface reconstruction, typical for Au(111) surfaces.
High-resolution STM images of the (t-Bu)4-ZnPc molecules on the Au(111) surface reveal a characteristic "folding-fan" feature.
To verify that the "folding-fan" is caused by molecular instability with respect to the substrate surface, we monitored the tunneling current versus time with the STM tip located at a fixed point on the "folding-fan" feature.
Only one (t-Bu)4-ZnPc molecule is involved in each "folding-fan" feature. In our experiments, stationary dimers, trimers, tetramers and larger clusters of (t-Bu)4-ZnPc can be observed at 78 K. In contrast, a stationary single (t-Bu)4-ZnPc molecule, whose STM image should be composed of four lobes, cannot be observed at 78 K. This indicates that single molecules are not stationary but unstable on the surface at this temperature. In addition, we observed the transition from the stationary to the unstable state for a single (t-Bu)4-ZnPc molecule, and found that a single molecule remained stationary when attached to a cluster, but became unstable, showing the "folding-fan" feature, as soon as it was isolated.
We propose that the "folding-fan" feature is induced by the rotation of a single (t-Bu)4-ZnPc molecule. The existence of a rotation center is the prerequisite for rotation, rather than lateral diffusion along the surface at elevated temperatures. Since the center of the STM image of the molecular rotors is dark, the rotation center cannot be at the position of the tert-butyl groups, which appear as bright protrusions in STM measurements. Our STM observations, combined with first-principles calculations, reveal that the most likely rotation center is a gold adatom on the surface.
First-principles calculations were carried out to clarify the role of the gold adatom in the molecular adsorption. The calculations were based on density functional theory (DFT), using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) for the exchange-correlation energy and projector augmented-wave potentials, as implemented in the Vienna ab-initio simulation package (VASP).
For a (t-Bu)4-ZnPc molecule adsorbed on a gold adatom, our calculations show that the distance between the zinc atom and its nearest neighboring gold atom is 4.60 Å. For comparison, calculations for the (t-Bu)4-ZnPc molecule adsorbed directly on Au(111) show that the distance between the zinc atom and its nearest-neighbor gold atom is 4.35 Å and the distance between the bottom nitrogen atom and its nearest-neighbor gold atom is 4.40 Å; the adsorption energy of this configuration is 219 meV. The gold adatom obviously enhances the molecular bonding significantly, most likely due to the surface dipole originating from the smeared-out electron charge at the position of the adatom. This stronger bonding anchors the (t-Bu)4-ZnPc molecules at 78 K.
We observed that the molecular rotation depends to a large extent on the surface atomic arrangement. It is well known that the Au(111) surface reconstructs into a herringbone structure, dividing the surface into four types of regions with different arrangements of surface atoms. Based on these observations, we propose a model for the rotation of a single (t-Bu)4-ZnPc molecule. The rotation center is the gold adatom, chemically bonded to a nitrogen atom of the molecule. On the flat Au(111) surface, there are twelve stable adsorption configurations, 30 degrees apart from each other, which can be interpreted as intermediate states of the molecular rotation.
The differences in adsorption energies between these stable configurations are only tens of meV, so the molecule switches between them with high frequency under thermal excitation. Since the tert-butyl groups are imaged as bright lobes in STM measurements, the resulting time-averaged STM image is the "flower" pattern, and the proposed STM image for a 360° rotation is in good agreement with the experimental observations. The behaviour of the (t-Bu)4-ZnPc molecule at the elbow sites can also be interpreted with the model for the rotation in the fcc region: the corrugation ridges form barriers for the molecular rotation at the elbow sites, and due to the bending of the corrugation lines the rotation is limited to an angle of 120°, which leads to the "folding-fan" feature. The proposed STM image for a 120° rotation is likewise in good agreement with the experiment.
The model also agrees quantitatively with the measurements. The experimentally measured distance between the rotor center and the bright lobes on the outer torus is 1.3~1.4 nm, in reasonable agreement with the distance between the nitrogen atom and the tert-butyl groups (1.10 ± 0.05 nm), considering that the rotation center is the gold adatom, which is not located exactly under the nitrogen atom.
4. Exploring the self-assembly mechanism of PAHs on metal surfaces at the single-molecule level by STM provides versatile and valuable information for the development of molecular devices. It is of fundamental importance to obtain a full understanding of the self-assembly of nanostructures and of the interface properties between nanostructure and substrate through a pertinent combination of advanced experimental techniques and first-principles calculations. It is inspiring that functionality has already been observed for single molecules, such as the single-molecule rotors discussed in this review. We observed single-molecule rotation with an off-center rotation axis fixed to the surface, and achieved self-assembly of molecular rotors into large-scale arrays. By designing the chemical bond between an atom of the molecule and an adatom on the surface, we not only achieved a fixed rotation axis at the surface, but also a spin of the molecule around an off-center axis. The gold adatom, which provides the fixed rotation axis, can be used as an atomically well-defined electrode. A fixed off-center rotation axis is an important step towards the eventual fabrication of molecular motors or generators."}
+{"text": "Enzymes still comprise a major part of ethanol production costs from lignocellulose raw materials. Irreversible binding of enzymes to the residual substrate prevents their reuse, and no efficient methods for recycling of enzymes have so far been presented. Cellulases without a carbohydrate-binding module (CBM) have been found to act efficiently at high substrate consistencies and to remain non-bound after the hydrolysis. High hydrolysis yields could be obtained with thermostable enzymes of Thermoascus aurantiacus containing only two main cellulases: cellobiohydrolase I (CBH I, Cel7A) and endoglucanase II (EG II, Cel5A). The yields were decreased by only about 10% when using these cellulases without CBM. A major part of the enzymes lacking CBM was non-bound during the most active stage of hydrolysis and, in spite of this, produced high sugar yields. Complementation of the two cellulases lacking CBM with CBH II (CtCel6A) improved the hydrolysis. Cellulases without CBM were more sensitive during exposure to high ethanol concentration than the enzymes containing CBM.
Enzymes lacking CBM could be efficiently reused, leading to a sugar yield of 90% of that obtained with fresh enzymes. The applicability of cellulases without CBM was confirmed under industrial ethanol production conditions at high (25% dry matter (DM)) consistency. The results clearly show that cellulases without CBM can be successfully used in the hydrolysis of lignocellulose at high consistency, and that this approach could provide new means for better recyclability of enzymes. This paper provides new insight into the efficient action of CBM-lacking cellulases. The relationship between binding and action of cellulases without CBM at high DM consistency should, however, be studied in more detail.
Efficient enzymatic hydrolysis of lignocellulosic plant cell walls to platform sugars is a key process in all future biotechnical biomass conversion processes to fuels or chemicals. Although a significant reduction in enzyme production costs has been reported by the major enzyme-producing companies during the last decade, enzymes still make up at least 15% of ethanol production costs.
Due to the complex structure of lignocellulosic biomass, the action of cellulolytic enzymes, including cellobiohydrolases (CBHs), endoglucanases (EGs), lytic polysaccharide monooxygenases (LPMOs) and β-glucosidases (BGs), as well as various hemicellulases, especially xylanases (XYLs), is required for efficient saccharification of lignocellulosic biomass. TrCel7A (CBH I) is the major enzyme secreted by the well-studied mesophilic fungus Trichoderma reesei, forming approximately 80% of the total secreted proteins.
In general, glycoside hydrolases (GHs) degrading insoluble polysaccharides have a bidomain structure, with a carbohydrate-binding module (CBM) attached to the catalytic core domain by a flexible, glycosylated linker. CBMs are generally considered to increase the concentration of the enzyme on the substrate surface.
We have recently shown that the CBMs are not needed for efficient hydrolysis if the concentration of the solid substrate is increased to at least about 10% dry matter (DM), based on a comparison of cellulases with (T. reesei) or without (Thermoascus aurantiacus) a CBM. At elevated substrate consistencies, the T. aurantiacus cellulases reached about the same hydrolysis yields as the corresponding enzymes linked to family 1 CBMs.
A feasible industrial process for production of bioethanol from lignocellulosic biomass requires a high DM content throughout the process. The DM content of the biomass fed to pretreatment can be as high as 40%, and the DM content during hydrolysis approximately 25%.
Thermostable CBHs from several fungi have recently been characterized, and thermostable cellulases of T. aurantiacus, naturally secreted without CBM, were found to act efficiently especially on natural cellulosic substrates. The aim of this work was to study the performance and recyclability of these CBM-lacking cellulases under industrially relevant, high-consistency conditions.
Previously, we have shown that the cellulases TaCel7A and TaCel5A of T. aurantiacus, naturally lacking the carbohydrate-binding domains, were able to hydrolyze pretreated wheat straw to the same extent as the corresponding enzyme constructs provided with CBMs (TaCel7A + TrCBM and TaCel5A + CtCBM).
Conversion of pretreated wheat straw was studied with the thermostable enzymes during prolonged hydrolysis (up to 120 hours) at high, 20% DM, consistency in order to reach a high yield for tests at larger scale. The enzyme mixtures were composed of equimolar amounts of CBH I (TaCel7A) and EG II (TaCel5A) with or without the CBM, to allow direct comparison of the enzymes, supplemented with BG and XYL, and used at two enzyme dosages: low CBM-containing 14.3 mg/g DM, low CBM-lacking 12.7 mg/g DM, high CBM-containing 28.7 mg/g DM and high CBM-lacking 25.5 mg/g DM.
In an attempt to increase the hydrolysis yields, the addition of CBHs carrying CBMs was studied; thus, 25% of the CBH I without the CBM was replaced by the CBM-containing CBH I.
The enzyme preparations produced in T. reesei contained minor EGs, xylanolytic enzymes and other side activities, eventually improving the hydrolysis yields. The hydrolysis temperature was 50°C, where most of these minor activities were detectable at least for a short period. In order to retain these positive activities in the larger-scale experiments, enzyme preparations without heat treatment were studied. The results showed that at a high enzyme dosage of 28.8 mg/g DM, the glucose yield reached 93% and 82% of the theoretical yield with the enzyme mixtures with and without the CBMs, respectively.
Prior to the pilot experiments, hydrolysis experiments were carried out varying the amount of enzymes with or without the CBM and omitting the heat treatment. Although fairly efficient mixing could be achieved even in the small-scale experiments, results at pilot scale provide a more reliable comparative basis.
The concentration and activities of the free enzymes with and without CBM during the larger-scale hydrolysis and fermentation experiments (at the Inbicon pilot unit) were also measured. Initially, after mixing the enzymes and substrate, about 55% of the enzymes lacking the CBM and 73% of the enzymes with CBM were bound to the substrate. The results clearly show that the CBM-lacking enzymes, in spite of being less bound to the substrate throughout the hydrolysis, were able to produce almost the same level of hydrolysis. During the course of hydrolysis, both the MUL activity and the amount of free protein were followed.
After 70 hours of hydrolysis, the temperature was decreased and yeast was inoculated together with addition of yeast extract, the total added amount corresponding to 1 g/l. The addition is clearly seen as a peak in the amount of free protein. Ethanol is known to inactivate the enzymes of T. reesei progressively up to an ethanol concentration of about 7%.
The fermentation time was originally long to ensure complete hydrolysis and fermentation, although it could have been shorter to recover the more sensitive CBM-lacking enzymes. To date, no systematic study has been reported on the effect of the CBM on the stability of cellulases in the presence of ethanol, although the CBM of TrCel7A is known to stabilize the enzyme thermally, increasing its overall melting point.
Preliminary experiments on the recyclability potential of the CBM-lacking enzymes were carried out using enzyme solutions concentrated after the primary hydrolysis experiments at small and large scale.
The ultrafiltration step after the large-scale experiments also removed the sugars and ethanol from the enzyme solutions, which may otherwise inhibit hydrolysis. The recovered enzymes were tested and compared with fresh enzymes in small-scale experiments, with the recycled and fresh enzymes applied at the same protein dosage (10 mg/g) in the recyclability tests. Enzymes recycled from the small laboratory-scale experiments using heat-treated enzyme preparations (without fermentation) produced almost equal sugar yields, 93% of the yield with the fresh enzymes: the recycled CBM-lacking mixture yielded 38.9% of total carbohydrates, as compared to 41.9% obtained with the fresh CBM-lacking enzyme mixture.
In this article, we showed that enzyme mixtures lacking the CBM could be successfully used at larger scale at high DM consistency using efficient mixing, resulting in cellulose conversion yields above 80% of theoretical. Based on preliminary experiments, an enzyme mixture composed of enzymes without CBM could also be efficiently recycled. The activity and amount of recoverable enzymes after the hydrolysis were significantly higher for the enzymes without the binding domains. The hydrolysis yields obtained with the two cellulases lacking CBM could be improved by prolonging the hydrolysis time and by partially replacing the GH family 7 (GH7) CBH with a GH family 6 (GH6) CBH. The action mechanisms of the CBM-lacking, mostly non-adsorbed enzymes should be further studied to gain deeper understanding of the binding/desorption mechanisms of cellulases, and to design optimal hydrolysis and recycling processes.
The substrate was wheat straw produced by hydrothermal steam pretreatment without addition of chemicals. All chemicals were of analytical grade.
For the hydrolysis experiments, thermostable enzymes with and without CBM from T. aurantiacus were used: the intact CBH I without CBM (TaCel7A) and fused with the T. reesei CBM (TaCel7A + TrCBM), and the native EG II without CBM (TaCel5A) and fused to the linker and CBM of C. thermophilum Cel7A (TaCel5A + CtCBM). In addition, the XYL TaXyn10A and BG TaCel3A from T. aurantiacus were added in all experiments. The CBH II from C. thermophilum (CtCel6A), naturally carrying a CBM, was added to replace part of the T. aurantiacus CBH I in some experiments.
All thermostable enzymes, including the CBH II from C. thermophilum (CtCel6A), were expressed in T. reesei as described previously. The expression hosts were T. reesei industrial production strains that lack the genes cbh1, cbh2, egl1 and egl2, encoding CBH I (TrCel7A), CBH II (TrCel6A), EG I (TrCel7B) and EG II (TrCel5A), respectively. The XYL TaXyn10A and BG TaCel3A were produced accordingly. For the small-scale experiments, the preparations were heat treated to remove the less thermostable background T. reesei enzymes; for the large-scale experiments in industrial conditions, the preparations were used as such.
The activity of CBH I was measured using the soluble MUL (Sigma-Aldrich) substrate with 4-methylumbelliferone (Sigma-Aldrich) as standard, as described previously.
The substrate was hydrolyzed with two dosages (low and high) of enzyme mixtures (CBH I TaCel7A, EG II TaCel5A, XYL TaXyn10A and BG TaCel3A), using equimolar amounts of the cellulases with or without CBM at a TaCel7A to TaCel5A molar ratio of 4:1. The low dosages of TaCel7A (CBM+) and TaCel7A (CBM-) were 10.6 and 9.4 mg protein/g DM, and those of TaCel5A (CBM+) and TaCel5A (CBM-) were 2.1 and 1.7 mg protein/g DM, respectively. The high dosages of TaCel7A (CBM+) and TaCel7A (CBM-) were 21.2 and 18.8 mg protein/g DM, and those of TaCel5A (CBM+) and TaCel5A (CBM-) were 4.2 and 3.4 mg protein/g DM, respectively.
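Because the cellulases were dosed on an equimolar basis, the mg/g DM loads of the CBM-lacking variants are lower simply because the truncated proteins are lighter. A small sketch with assumed molecular weights (illustrative values only; the text does not state them here):

def mg_per_g_dm(nmol_per_g_dm, mw_kda):
    # 1 nmol of a 1 kDa protein weighs 1 microgram, i.e. 0.001 mg.
    return nmol_per_g_dm * mw_kda / 1000.0

MW = {"TaCel7A_CBM+": 65.0, "TaCel7A_CBM-": 57.6}   # kDa, hypothetical values
dose_nmol = 163.0                                    # same molar load for both forms
for name, mw in MW.items():
    print(name, round(mg_per_g_dm(dose_nmol, mw), 1), "mg/g DM")
# The CBM+ form comes out near 10.6 and the CBM- form near 9.4 mg/g DM,
# matching the ratio of the low dosages given above.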
To compare the role of the CBHs, TaCel7A (CBM-) was either partially replaced (25% on a molar basis) with TaCel7A (CBM+) or with CtCel6A. For comparison, TaCel7A (CBM-) was also totally replaced by TaCel7A (CBM+). In these experiments, the higher dosage level was used.
The substrate was hydrolyzed at high DM consistency in 0.05 M Na-citrate buffer, pH 5, in a volume of 2 ml (10-ml tubes) at 50°C. The samples were mixed by combined gravity and vortex mixing for 24 to 120 hours using an Intelli-Mixer RM-2 in u2 mode at 35 rpm, exerting continuous variable-intensity, small-amplitude vortexing with second-speed counter-clockwise rotation. PEG was also added in these experiments.
After hydrolysis at small scale, samples withdrawn from experiments at different DM concentrations were cooled and diluted with buffer to a final substrate concentration of 5% DM. The samples were centrifuged to separate the solid and liquid phases. Part of each supernatant was frozen for protein measurements, and the remaining supernatant was boiled for 10 minutes for determination of glucose. The hydrolysis yields were calculated as the percentage of the theoretical maximum conversion of total glucose (or total carbohydrates) in the substrate.
Hydrolysis and fermentation experiments at pilot scale using high DM consistency were carried out at the Inbicon pilot plant of DONG Energy in a specially designed six-chamber reactor with a minimum working volume of 10 kg, employing the principle of free-fall mixing as described earlier. The pH was adjusted with Na2CO3, and PEG with an average molecular weight of 6,000 g/mol was added before the addition of enzyme. The cellulases TaCel7A and TaCel5A (with or without CBM), as well as the XYL (TaXyn10A) and BG (TaCel3A), were dosed on a protein basis at a ratio of 15:3:2.5:1, respectively; the total amount of loaded protein was 17.2 mg/g DM. In the large-scale experiments the enzymes were not heat treated. The biomass was hydrolyzed for 70 hours at 50°C before it was cooled to 33°C; the yeast (Thermosacc Dry) and yeast extract were then added, and a sample was collected at the start of the fermentation. The fermentation was continued for 96 hours. The experiments with both types of enzymes, CBM-containing and CBM-lacking, were carried out in duplicate. Samples withdrawn during these experiments were not diluted prior to separation of solids.
Enzyme recycling experiments were carried out at laboratory scale using proteins collected from the small- and large-scale experiments. At small scale, samples were collected after 72 hours of hydrolysis at 50°C, and at large scale after hydrolysis and extended fermentation. Supernatants containing the CBM-lacking enzymes from both experiments were concentrated with 10 kDa Amicon membranes (Merck) by centrifugation at 4°C and 3,000 rpm. The amount of protein in each filtrate was determined, and the recycled and fresh enzyme mixtures were dosed at the same level (10 mg/g DM) and compared in hydrolysis experiments carried out as described above. PEG was not added to the recycling experiments.
Monosaccharides were analyzed by HPLC using H2SO4 as eluent at a flow rate of 0.6 ml/min; D-glucose and D-xylose (Merck) were used as external standards.
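These sugar and ethanol analyses feed the cellulose conversion calculation (Equation 1, described in the text that follows). The equation itself did not survive extraction, so the sketch below assumes the standard anhydro corrections (0.90 for glucose, 0.95 for cellobiose) and the theoretical ethanol yield of 0.511 g per g glucose; the paper's exact form may differ.

def cellulose_conversion(c_cel, c_glc, c_etoh, v_liquid, m_biomass, dm, wt_cel):
    # Concentrations in g/l, liquid volume in l, biomass in g; dm and wt_cel
    # are the DM% and WT%Cel fractions. Returns percent of theoretical.
    glucan_equiv = (0.95 * c_cel + 0.90 * c_glc + 0.90 * c_etoh / 0.511) * v_liquid
    return 100.0 * glucan_equiv / (m_biomass * dm * wt_cel)

# 250 g of pretreated straw at 25% DM with 55% cellulose, yielding 30 g/l glucose:
print(round(cellulose_conversion(1.0, 30.0, 0.0, 1.0, 250.0, 0.25, 0.55), 1))  # 81.3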
The amount of reducing sugars was determined using the dinitrosalicylic acid (DNS) method with glucose as standard. The conversion of cellulose was calculated from the cellulose content of the pretreated material (WT%Cel), the amount of pretreated biomass (m_biomass), the DM content (DM%), and the concentrations of cellobiose (C_Cel), glucose (C_Glc) and ethanol (C_EtOH), according to Equation 1.
Proteins in the hydrolysis supernatants were measured by two methods, the Lowry and ninhydrin methods. For the Lowry method, samples were diluted in alkaline solution (Na2CO3 and 4 g/l NaOH) and quantified using BSA (Sigma-Aldrich) as standard. For the ninhydrin assay, supernatants were centrifuged for 10 minutes and diluted approximately eight times to obtain protein concentrations below 800 μg/ml. All samples were analyzed in triplicate.
BG: β-glucosidase; BSA: bovine serum albumin; CBH: cellobiohydrolase; CBM: carbohydrate-binding module; DM: dry matter; DNS: dinitrosalicylic acid; EG: endoglucanase; GH: glycoside hydrolase; HPAEC-PAD: high performance anion exchange chromatography with pulsed amperometric detection; HPLC: high performance liquid chromatography; koff: dissociation rate constant; LPMO: lytic polysaccharide monooxygenase; MUL: 4-methylumbelliferyl-β-D-lactoside; PEG: polyethylene glycol; SSF: simultaneous saccharification and fermentation; XYL: xylanase.
The authors declare that they have no competing interests.
AP, MØH and DTD performed data collection and analysis, and undertook manuscript writing. TP and LV conceived and designed the study, and critically revised the manuscript. AV conceived and designed the study, and undertook manuscript writing. All authors read and approved the final manuscript."}
+{"text": "The enzymatic hydrolysis step converting lignocellulosic materials into fermentable sugars is recognized as one of the major limiting steps in the biomass-to-ethanol process due to the low efficiency of enzymes and their cost. Xylanases have been found to be important in the improvement of the hydrolysis of cellulose due to the close interaction of cellulose and xylan. In this work, the effects of the carbohydrate-binding module (CBM family II) of xylanase 11 from Nonomuraea flexuosa (Nf Xyn11) on the adsorption and hydrolytic efficiency toward isolated xylan and lignocellulosic materials were investigated. The intact family 11 xylanase of N. flexuosa clearly adsorbed on wheat straw and lignin, following a Langmuir-type isotherm. The presence of the CBM in the xylanase increased the adsorption and hydrolytic efficiency on insoluble oat spelt xylan, but it did not increase adsorption on pretreated wheat straw or isolated lignin. On the contrary, the CBM decreased the adsorption of the core protein to lignin-containing substrates, indicating that the CBM of the N. flexuosa xylanase did not contribute to non-productive adsorption. The CBM of the N. flexuosa xylanase was shown to be a xylan-binding module with low affinity for cellulose. It reduced the non-specific adsorption of the core protein to lignin and showed potential for improving the hydrolysis of lignocellulosic materials to platform sugars. Utilization of lignocellulosic materials offers great potential to reduce our dependence on fossil fuels.
The enzymatic hydrolysis step converting lignocellulosic materials into fermentable sugars is recognized as one of the major limiting steps in the biomass-to-ethanol process due to the recalcitrant and complex structure of the lignocellulosic substrate and the relatively high cost of enzymes. The fermentable sugars are derived from the cellulose and hemicelluloses in lignocellulosic materials. To convert cellulose into glucose, three major cellulase groups are required: endoglucanases, cellobiohydrolases and β-glucosidases, which hydrolyze cellulose synergistically.
Xylans in annual plants consist of a linear backbone of β-(1→4)-D-xylopyranosyl residues, substituted by α-L-arabinofuranosyl units in the 2-O and/or 3-O positions, by 4-O-methyl-glucopyranosyl uronic acid in the 2-O position, and/or by acetyl groups in the 2-O and/or 3-O positions.
Most fungal cellulases and some hemicellulases studied so far have a complex modular architecture comprising a catalytic domain (CD) usually connected to a non-catalytic carbohydrate-binding module (CBM) via a flexible linker rich in proline, threonine and/or serine residues. The CBMs are classified into families in the CAZy database (http://www.cazy.org). Furthermore, these families have been categorized into three types based on their structure, function and ligand specificities: surface-binding CBMs (type A), glycan-chain-binding CBMs (type B) and small-sugar-binding CBMs (type C). Type A CBMs include members of families 1, 2a, 3, 5 and 10 and are recognized to bind to insoluble, highly crystalline cellulose and/or chitin.
It has been reported that CBMs play an important role in the improvement of enzymatic hydrolysis by cellulases. The impacts of cellulose-binding modules in hemicellulases on the adsorption and hydrolysis of hemicelluloses have also been reported; evidently, the close presence of hemicelluloses and cellulose in the substrates results in improved hydrolytic efficiency on hemicelluloses by enzymes containing a cellulose-binding module. Thus, the cellulose-binding module of a Trichoderma reesei mannanase did not bind to mannan but increased the hydrolysis rate of insoluble mannan-cellulose complexes. Fusion of a mannanase from Aspergillus aculeatus with a family I CBM from the A. niger cellobiohydrolase B also improved the hydrolysis of NaOH-pretreated softwood pulp. The adsorption of two xylanases from Clostridium stercorarium was found to be increased by the presence of two family 6 and one family 9 cellulose-binding modules, respectively. Fusion of a CBM from a C. stercorarium xylanase to a Bacillus halodurans xylanase also resulted in increased adsorption on cellulose and insoluble xylan. The CBMs of Thermotoga neapolitana xylanase A, in contrast, were found to bind on xylan but not on cellulose; the fusion of these CBMs with a family 10 xylanase from Bacillus halodurans increased the adsorption on insoluble xylan and improved the hydrolytic efficiency on insoluble xylan but not on soluble xylan. The CBM of Cellulomonas fimi xylanase D has likewise been shown to bind on xylan but not on cellulose. A CBM from Streptomyces thermoviolaceus increased the catalytic activity of a xylanase from Thermotoga maritima on soluble xylan, but not on insoluble xylan.
The family 1 cellulose-binding modules of Cel7A and Cel5A of T. reesei have been shown to be mainly responsible for the non-specific binding of these enzymes on lignins. The intact T. reesei Cel7A and Cel5A enzymes were found to bind more on isolated lignins than the corresponding core domains. The β-glucosidase from T.
reesei lacking a CBM was, however, found to bind strongly on lignin-rich residues but much less on Avicel and steam-pretreated spruce. Limited information is thus available on the role of CBMs in the binding of xylanases on lignin.
The hydrolytic pattern of the core domain of the thermostable Xyn11 of Nonomuraea flexuosa has been characterized previously on isolated xylans and lignocellulosic substrates. Based on its sequence, the xylanase of N. flexuosa contains a family II CBM. In this work, the intact xylanase of N. flexuosa was characterized with respect to its adsorption on insoluble xylan, lignin and pretreated wheat straw. The role of the CBM of the N. flexuosa xylanase in the hydrolysis of isolated xylan was evaluated, and the effect of the CBM on non-productive adsorption on lignin was investigated. The main objective of this work was to understand the impact of the CBM of N. flexuosa xylanases on the total hydrolysis of lignocellulosic materials to platform sugars.
Most of the less thermostable enzymes in the preparation of the intact xylanase with the CBM (Nf xylF), expressed in the host strain T. reesei, were removed already by the heat treatment for 2 hours. After the heat treatment, the protein concentration decreased from 10.0 to 2.2 mg/ml, indicating that most of the T. reesei proteins in the preparation were less thermostable and were precipitated by the heat treatment. The decrease of the specific activities of β-xylosidase, β-glucosidase, endoglucanase and FPA indicated that most of these activities were removed (results not shown). As expected, the specific activity of xylanase increased clearly, which also indicated that the Nf xylF xylanase was thermostable.
The proteins were further purified by ion exchange chromatography, and the purity of Nf xylF was confirmed by sodium dodecyl sulfate polyacrylamide gel electrophoresis. The core xylanase preparation without the CBM (Nf xylC) had been purified and characterized previously.
Clear adsorption on insoluble oat spelt xylan was observed for Nf xylF, whereas the CBM-less Nf xylC adsorbed significantly less. Roughly the same amount of reducing sugars was released from soluble oat spelt xylan by both enzymes, indicating that the CBM did not improve the hydrolysis of the soluble substrate. This behaviour resembles that of the corresponding module of C. fimi, which bound on both soluble and insoluble xylans but did not bind on cellulose. Adsorption experiments showed that both Nf xylC and Nf xylF had low affinity for Avicel and hardly adsorbed on it.
The maximum adsorption capacity (Pads,m), affinity constant (Kp) and strength of binding (A) were calculated, with R2 > 0.85 for the fits. The maximum adsorption capacities of the CBM-less Nf xylC on lignin (135.8 mg/g substrate) and wheat straw (138.1 mg/g substrate) were higher than those of the intact Nf xylF, surprisingly indicating that Nf xylF bound less on lignin than Nf xylC. The maximum adsorption capacities of Nf xylC and Nf xylF on wheat straw were about equal to those on lignin, which could be due to the high content of lignin (42.7%) in the pretreated wheat straw. The presence of the CBM of the xylanase from N. flexuosa thus decreased the adsorption on lignin.
The lignin preparation was produced by extensive enzymatic removal of carbohydrates from thermochemically pretreated spruce, followed by a protease treatment to remove the bound enzymes. Hydrothermally pretreated wheat straw was used as the lignocellulosic substrate. Soluble and insoluble oat spelt arabinoxylans were prepared using a modified method of Ryan et al.
The xylanases with and without CBM from N. flexuosa were heterologously produced in a T.
reesei strain in which the genes cbh1, cbh2, egl1 and egl2, encoding cellobiohydrolase I, cellobiohydrolase II, endoglucanase I and endoglucanase II, respectively, had been deleted according to the method described elsewhere.
To remove the less thermostable enzymes produced by the host strain T. reesei, the two xylanase preparations with and without CBM were adjusted to pH 6.0 and treated at 60°C for 2 hours. In the heat treatment, the less thermostable enzymes in the xylanase preparations were removed.
For further purification of Nf xylF, the same method and system were used, but the pH of the buffer was adjusted to pH 9.1, because at pH 8.0 the Nf xylF did not bind to the column. During the purification, two main peaks were obtained, and the peak with high xylanase activity was collected for the adsorption and hydrolysis experiments. After purification by ion-exchange chromatography, hydrophobic interaction chromatography with Phenyl Sepharose Fast Flow was applied for further purification, but the band of approximately 82 kDa could not be removed.
Xylanase activity was assayed using 1% (w/v) birchwood xylan as substrate in 50 mM sodium citrate buffer, according to the method of Bailey et al. One katal (kat) is the amount of enzyme activity that converts one mole of substrate per second.
Protein was quantified by the Lowry method, using bovine serum albumin as standard. SDS-PAGE was used to check the purity of the preparations.
The insoluble oat spelt xylan was used for the xylanase adsorption experiments. The xylanase preparations (2 mg protein/g DM) were incubated with 1% (w/v) xylan at 4°C for 1 h. After centrifugation, the residual xylanase activity in the supernatant was measured, and the amount of enzyme bound to xylan was estimated from the difference between the xylanase activities in the supernatant before and after incubation. In addition to xylan, Avicel, lignin prepared from spruce and hydrothermally pretreated wheat straw were also used for the adsorption studies. These experiments were carried out in 50 mM sodium citrate buffer (pH 5.0) at 1% Avicel, lignin or wheat straw consistency in a volume of 1.5 ml. The samples were incubated with 10-400 mg/g DM of xylanase preparation for 90 min at 4°C with magnetic stirring, after which the solids and liquids were separated by centrifugation at 4°C. The adsorbed protein was obtained by subtracting the protein in the supernatant from the total protein loaded. All adsorption experiments were done in triplicate, and average values are presented.
Adsorption parameters were calculated according to the reported method using the Langmuir isotherm, Pads = Pads,m Kp P / (1 + Kp P), where Pads is the amount of adsorbed enzyme (mg enzyme/g substrate), P is the amount of free enzyme (mg enzyme/ml), Pads,m is the maximum adsorption capacity (mg enzyme/g solid) and Kp is the adsorption equilibrium constant (ml/mg enzyme), a measure of the adsorption affinity. Pads,m and Kp were calculated from plots of P/Pads versus P, which gave fairly good straight lines. The adsorption strength of the enzyme, A, is calculated from Pads,m and Kp.
The hydrolysis of soluble and insoluble oat spelt xylan (2.5 mg/ml) was carried out in test tubes with a working volume of 2 ml. The enzyme dosage was 10 nmol/g DM, based on the molecular weights of the cloned enzymes. The hydrolysis of the xylan substrates was carried out in 50 mM sodium citrate buffer at pH 5.0 and 50°C. Aliquots were removed periodically at different time intervals and boiled for 10 minutes to stop the enzymatic hydrolysis. Two replicates were carried out, and average values of reducing sugars are presented.
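The Langmuir parameters can be estimated either from the linearized P/Pads versus P plot mentioned above or by a direct nonlinear fit. Below is a sketch of the latter with invented data points, taking the binding strength as A = Pads,m x Kp (an assumption; the text says only that A is calculated from the two parameters).

import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, Pads_m, Kp):
    # P: free enzyme (mg/ml); returns adsorbed enzyme (mg/g substrate).
    return Pads_m * Kp * P / (1.0 + Kp * P)

P    = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])         # hypothetical isotherm data
Pads = np.array([35.0, 55.0, 79.0, 105.0, 118.0, 126.0])

(Pads_m, Kp), _ = curve_fit(langmuir, P, Pads, p0=(130.0, 5.0))
A = Pads_m * Kp    # assumed definition of the binding strength
print(round(Pads_m, 1), round(Kp, 2), round(A, 1))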
CBM: carbohydrate-binding module; CD: catalytic domain; DM: dry matter; Nf xylC: xylanase without CBM from N. flexuosa; Nf xylF: xylanase with CBM from N. flexuosa; SDS-PAGE: sodium dodecyl sulfate polyacrylamide gel electrophoresis.
The authors declare that they have no competing interests.
JZ carried out the experimental enzyme work, analyzed the results and drafted the manuscript. UM prepared the lignin and participated in the planning of the adsorption experiments and in the preparation of the manuscript. MT reviewed the paper. LV designed and coordinated the overall study and finalized the paper. All authors approved the final manuscript."}
+{"text": "The aim of this study was to evaluate the acetylcholinesterase inhibitory activity of some commonly used herbal medicines in Iran, to introduce a new source for the management of Alzheimer's disease. A total of 18 aqueous-methanolic extracts from the following plants were prepared and screened for their acetylcholinesterase inhibitory activity using the in vitro Ellman spectrophotometric method: Brassica alba, Brassica nigra, Camellia sinensis, Cinchona officinalis, Citrus aurantifolia, Citrus x aurantium, Ferula assafoetida, Humulus lupulus, Juglans regia, Juniperus sabina, Myristica fragrans, Pelargonium graveolens, Pistacia vera, Punica granatum, Rheum officinale, Rosa damascena, Salix alba, and Zizyphus vulgaris. According to the obtained results, the order of inhibitory activity (IC50 values, μg/ml) of the extracts from highest to lowest was: C. sinensis (5.96), C. aurantifolia (19.57), Z. vulgaris (24.37), B. nigra (84.30) and R. damascena (93.1). C. sinensis showed the highest activity in inhibition of acetylcholinesterase. However, further investigations on the identification of the active components in the extracts are needed. The results support the traditional use of these herbs for the management of central nervous system disorders.
Alzheimer's disease (AD) is an age-related neurodegenerative disorder whose clinical characteristics and pathological features are associated with loss of neurons in certain brain areas, leading to impairment of memory, cognitive dysfunction, behavioral disturbances and deficits in activities of daily living, and eventually to death.
Although the underlying pathophysiological mechanisms are not clear, AD is firmly associated with impairment of the cholinergic pathway, which results in decreased levels of acetylcholine in certain areas of the brain.
The management of AD focuses on slowing disease progression, symptomatic treatment, maintaining functional status, improving quality of life and decreasing caregiver stress. Acetylcholinesterase (AChE) inhibitors form the mainstay of this symptomatic treatment.
The treatment of AD has progressed and shifted since the late 1970s towards a transmitter replacement strategy. Elevation of acetylcholine levels in the brain through the use of AChE inhibitors has been accepted as the most effective treatment strategy against AD. However, studies investigating the use of medications for AD have not been consistently supportive.
Considering the importance of plant-derived compounds in drug discovery, the present study was undertaken to evaluate the anticholinesterase activity of a number of selected medicinal plants with various ethnobotanical uses, aiming to discover new candidates with anticholinesterase activity for use in the management of AD.
Eighteen medicinal plants, which are listed in the table, were studied. Acetylthiocholine iodide (ATCI), 5,5′-dithio-bis-2-nitrobenzoic acid (DTNB), bovine serum albumin (BSA) and electric eel AChE were purchased from Sigma. Physostigmine was used as the standard drug. Buffers and other chemicals were of extra-pure analytical grade. The following buffers were used: buffer A, 50 mM Tris-HCl, pH 8, containing 0.1% BSA; buffer B, 50 mM Tris-HCl, pH 8, containing 0.1 M NaCl and 0.02 M MgCl2·6H2O.
Each plant sample was individually powdered, and 1 g of each sample was extracted by maceration under shaking at room temperature with aqueous methanol for 24 h. After filtration, the organic layer was distilled under reduced pressure at 25°C and then freeze-dried to dryness. The crude extracts were stored at -20°C until analysis. Extracts were tested at 100 μg/ml dissolved in aqueous methanol, and the fifty percent inhibitory concentration (IC50) was calculated according to the Michaelis-Menten model using the EZ-Fit enzyme inhibition kinetic analysis program.
Briefly, 25 μl of 15 mM ATCI (43 mg/10 ml in Millipore water), 125 μl of 3 mM DTNB (11.9 mg/10 ml buffer B), 50 μl of buffer A and 20 μl of plant extract at a concentration of 100 μg/ml were added to 96-well plates, and the absorbance was measured at 412 nm every 13 s, five times. After adding 25 μl of 0.22 U/ml enzyme (0.34 mg AChE dissolved in 100 ml buffer A), the absorbance was read again every 13 seconds, five times, using a Synergy H4 Hybrid Multi-Mode Microplate Reader. The percentage of inhibition was calculated by comparing the rates of the sample with those of the blank (aqueous methanol); control samples contained all components except the tested extract, and physostigmine was used as positive control. The mean of three measurements was determined for each concentration (n = 3), and after calculating the mean ± SD, the results were compared using Student's t-test.
Eighteen plant species belonging to 16 plant families (Anacardiaceae, Apiaceae, Brassicaceae, Cannabaceae, Cupressaceae, Geraniaceae, Juglandaceae, Lythraceae, Myristicaceae, Polygonaceae, Rhamnaceae, Rosaceae, Rubiaceae, Rutaceae, Salicaceae and Theaceae) were obtained from the Tehran local herbal market, and a total of 18 extracts were screened for AChE inhibitory activity using Ellman's spectrophotometric method in 96-well microplates. The IC50 of physostigmine (positive control) was estimated to be 0.093 μM against AChE; at the test extract concentration (100 μg/ml), physostigmine showed complete inhibition of enzyme activity.
In this study, the inhibitory concentration (IC50) of the leaves of C. sinensis (5.96 μg/ml) was lower than the inhibitory concentrations of the other tested extracts. A report by Chaiyana and Okonogi is consistent with the activity observed here for C. aurantifolia.
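The rate comparison and IC50 estimation described above can be reproduced in outline. A four-parameter logistic fit is used here as a generic stand-in for the EZ-Fit Michaelis-Menten analysis named in the text, with made-up data.

import numpy as np
from scipy.optimize import curve_fit

def percent_inhibition(rate_sample, rate_control):
    # Rates are the slopes of absorbance at 412 nm versus time (13 s reads).
    return 100.0 * (1.0 - rate_sample / rate_control)

def ic50(conc, inh):
    def logistic(c, top, bottom, half, hill):
        return bottom + (top - bottom) / (1.0 + (half / c) ** hill)
    popt, _ = curve_fit(logistic, conc, inh, p0=(100.0, 0.0, 20.0, 1.0))
    return popt[2]

conc = np.array([1.0, 5.0, 20.0, 50.0, 100.0])    # ug/ml, hypothetical extract
inh  = np.array([12.0, 31.0, 50.0, 68.0, 84.0])   # percent inhibition, hypothetical
print(round(ic50(conc, inh), 1))                  # close to 20 ug/ml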
Eighteen plant species belonging to 16 plant families (Anacardiaceae, Apiaceae, Brassicaceae, Cannabaceae, Cupressaceae, Geraniaceae, Juglandaceae, Lythraceae, Myristicaceae, Polygonaceae, Rhamnaceae, Rosaceae, Rubiaceae, Rutaceae, Salicaceae, and Theaceae) were obtained from the Tehran local herbal market, and a total of 18 extracts were screened for AChE inhibitory activity using Ellman\u2019s spectrophotometric method in a 96-well microplate. Table\u00a0 shows the inhibitory concentrations (IC50). The IC50 of physostigmine (positive control) was estimated at 0.093\u00a0\u03bcM against AChE. At the test extract concentration (100\u00a0\u03bcg/ml), physostigmine showed complete inhibition of enzyme activity.The inhibitory concentration (IC50) of the leaves of C. sinensis (5.96\u00a0\u03bcg/ml) was lower than the inhibitory concentration (IC50) of the other tested extracts. A report by Chaiyana and Okonogi also described AChE inhibitory activity for C. aurantifolia. Phytochemical investigations of C. aurantifolia revealed the occurrence of limonene, l-camphor, citronellol, o-cymene and 1,8-cineole as the major constituents.The extract from Z. vulgaris fruit showed moderate AChE inhibitory activity (IC50\u2009=\u200924.37\u00a0\u03bcg/ml) in this study, which is similar to the results of previous studies ,34. The activity of Z. vulgaris can be explained by the presence of alkaloids, saponins, and flavonoids in the extract.We found AChE inhibitory activity for the R. damascena floret extract (IC50\u2009=\u200993.10\u00a0\u03bcg/ml), which is not supported by previous investigations. R. damascena floret jam is used traditionally as a sweetener in tea to increase memory and wellness.There is limited data on seeds of B. nigra. This study shows that seeds of B. nigra present AChE inhibitory activity (IC50\u2009=\u200984.30\u00a0\u03bcg/ml), suggesting a moderate inhibitory activity for B. nigra seeds, though further investigation of the active components of the extract is needed.The promising finding of the study is that most of the plant extracts screened had some degree of inhibitory activity against AChE (Table\u00a0), but five (C. sinensis, C. aurantifolia, Z. vulgaris, B. nigra, and R. damascena) had the lowest inhibitory concentrations, below 100\u00a0\u03bcg/ml, ranging between 5.96 and 93.10\u00a0\u03bcg/ml against electric eel AChE. The extracts of these herbs, alone or in combination with other herbal preparations such as essential oils, could be considered in herbal remedies for AD management. Since the strongest synthetic or natural-product-derived AChE inhibitors are known to contain nitrogen, the promising activity of the reported medicinal plants could be due to their high alkaloid contents -40.A search performed using the Chemical Abstracts, Biological Abstracts and Scopus databases shows that AChE inhibitory activity is not limited to alkaloids; other compounds, such as flavonoids, coumarins and essential oils, are also reported to have AChE inhibitory activity -41.A primary screening process was run to investigate the AChE inhibitory properties of medicinal plants of Iran. The primary findings of this study suggest that all herbs used in this study exhibited some degree of AChE inhibitory activity. Among the selected plants of this report, C. sinensis had the most active components with inhibitory properties on AChE. Further research should investigate the chemical composition and mechanisms of action of these herbal extracts, including in vitro and in vivo studies.AChE: Acetylcholinesterase; AD: Alzheimer\u2019s disease; BuChE: Butyrylcholinesterase.The authors declare that they have no competing interests.All authors contributed to the concept and design, making and analysis of data, drafting, revising and final approval. MA and PP are responsible for the study registration. SBJ, AA and NG carried out plant extraction and enzymatic tests and drafted the manuscript. SBJ, PP and MA participated in collection and/or assembly of data, data analysis, interpretation and manuscript writing. All authors read and approved the final manuscript."} +{"text": "In most cases of aspiration pneumonia in children, the disease is specific to this age group. Clinical and radiological correlation is essential for the diagnosis. The present pictorial essay is aimed at showing typical images of the most common etiologies. 
Aspiration pneumonias result from passage of oropharyngeal, esophageal or stomach contents into the lower respiratory tract. The resulting compromise of the lungs depends on the nature and amount of aspirated material. In the pediatric group, aspiration occurs most frequently because of deglutition abnormality, congenital malformations and gastroesophageal reflux. Lipoid pneumonia is more rarely observed and is always iatrogenic-17. Chest radiography, sometimes supplemented by computed tomography and esophagogastroduodenal seriography (EGDS), is almost always enough to make the diagnosis,18.Recently, the Brazilian radiological literature has paid considerable attention to the relevance of imaging methods in the improvement of diagnosis in pediatrics,19.The function of conducting food from the mouth to the stomach involves a joint action of the muscles innervated by the IX, X, XI and XII cranial pairs. Due to immaturity, central nervous system injuries or drug effects, this mechanism may be disturbed, and part of the food is diverted into the airways -20.Any stasis resulting from narrowing of the esophageal lumen may lead to aspiration. Usually, this does not occur in cases of acquired achalasia and stenosis, because children frequently adapt themselves to such conditions. Esophageal atresia usually is detected and surgically corrected before causing significant aspiration,19. Amongst the cases of compression by anomalous vessels, compression by a double aortic arch is the one that most frequently causes symptoms,21.Respiratory manifestations stand out in the wide spectrum of gastroesophageal reflux disease-21. More than highlighting the presence of reflux - whose diagnosis is essentially clinical -, EGDS plays a relevant role in the demonstration of either normal or pathological anatomy-21. In the absence of anatomical alterations, reflux is considered to be primary, resulting from generally transient immaturity of the distal esophageal high pressure zone,20.Lipoid pneumonia is not related to anatomical or functional anomalies,15. Aspiration occurs because of the use of mineral oil in the treatment of intestinal constipation or as an agent against Ascaris lumbricoides. The oil inhibits the cough reflex and ciliary motion, and silently reaches the alveoli. Because of the difficulty in removing the oil from the lungs, such pneumonias present a slow evolution pattern,15.Aspiration pneumonias involve the alveoli,20,21. The literature reports the most frequent involvement of the posterior segments of the upper lobes and the upper segments of the lower lobes,13,18. This happens as aspiration occurs with the child in dorsal decubitus, as in most gastroesophageal reflux and vomiting episodes,13. In other situations, such as tracheoesophageal fistula and lack of motor coordination, other pulmonary segments may be affected-21. Aspiration may result in atelectasis or pneumonia, the latter with or without an atelectatic component. The absence of fever suggests pure atelectasis."} +{"text": "There is a paucity of home advantage research set in the context of para-sport events. It is this gap in the knowledge that this paper addresses by investigating the prevalence and size of home advantage in the Summer Paralympic Games.Using a standardised measure of success, we compared the performances of nations when competing at home with their own performances away from home in the competition between 1960 and 2016. 
Both country-level and individual sport-level analyses were conducted for this time frame. A Wilcoxon signed rank test was used to determine whether there was a genuine difference in nations\u2019 performance under host and non-host conditions. Spearman\u2019s rank-order correlation was run to assess the relationship between nation quality and home advantage.p\u00a0<\u00a00.01). When examining individual sports, only athletics, table tennis, and wheelchair fencing returned a significant home advantage effect (p\u00a0<\u00a00.05). Possible explanations for these findings are discussed. The size of the home advantage effect was not significantly correlated with the quality or strength of the host nation (p\u00a0>\u00a00.10).Strong evidence of a home advantage effect in the Summer Paralympic Games was found at country level (While our results confirm that home advantage is prevalent in the Summer Paralympic Games at an overall country level and within specific sports, they do not explain fully why such an effect does exist. Future studies should investigate the causes of home advantage in the competition and also draw comparisons with the Summer Olympic Games to explore any differences between para-sport events and able-bodied events. There is a paucity of research on home advantage in para-sport events targeted at elite athletes with a disability. To date, there has been a solitary study that has attempted to investigate its prevalence in a para-sport competition. Wilson and Ramchandani analysedThe prevalence of home advantage is well documented in professional team sports that are played on a balanced home and away schedule . On the p\u00a0>\u00a00.10) as \u201ca consequence of a small sample size and a lack of statistical power\u201d (p. 8). Recently, Franchini and Takito [The prevalence of home advantage in the Olympic Games has subsequently been verified by other researchers. Balmer, Nevill, and Williams observedd Takito providedd Takito , they coThere have been 15 editions of the Summer Paralympic Games between 1960 and 2016 and 14 different nations have hosted the competition, as shown in Table\u00a0Six sports have been contested in every edition of the competition: archery, athletics, swimming, table tennis, wheelchair basketball, and wheelchair fencing. The number of events contested in these sports is presented in Table\u00a0https://www.paralympic.org/results/historical) and recorded in SPSS (version 24). Our approach to define nations\u2019 performance and calculate home advantage in this study was compliant with that used by Wilson and Ramchandani [The results of each edition of the Summer Paralympic Games between 1960 and 2016 were sourced from the historical results archive of the International Paralympic Committee and 2004 (post-home) was 7.27% and 6.15%, respectively, an average of 6.71%. Therefore, its performance at home in 2000 was 2.79 percentage points better than its average pre/post-home performance . Computing home advantage scores in this way ensured that less successful host countries were not unfairly compared with more successful host countries and avoided biased estimates of home advantage. Consistent with previous research on multi-sport events , countrip\u00a0<\u00a00.05). For this reason, and taking into account the small sample size (n\u00a0=\u00a016), a Wilcoxon signed rank test was used to determine whether there was a genuine difference in nations\u2019 performance under host and non-host conditions. 
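The home-advantage score and the Wilcoxon signed rank test described above can be sketched in a few lines of Python. This is a minimal illustration in which the helper name and all rows except the Australia 2000 figures are hypothetical:

```python
# Minimal sketch of the home-advantage score and the country-level Wilcoxon
# test described above; all rows except Australia 2000 are hypothetical.
from scipy.stats import wilcoxon

def home_advantage(home_share, pre_share, post_share):
    """Home market share minus the mean of the pre- and post-home shares,
    in percentage points, e.g. Australia 2000: 9.50 - (7.27 + 6.15) / 2 = 2.79."""
    return home_share - (pre_share + post_share) / 2.0

# (home, pre-home, post-home) market shares, one row per host edition.
editions = [(9.50, 7.27, 6.15),  # Australia 2000
            (5.20, 4.90, 4.10),  # hypothetical host
            (3.10, 3.40, 2.90),  # hypothetical host
            (8.00, 7.10, 7.60)]  # hypothetical host

scores = [home_advantage(h, pre, post) for h, pre, post in editions]

# One-sided (right-tailed) Wilcoxon signed rank test: do nations perform
# better under host than under non-host conditions?
stat, p = wilcoxon(scores, alternative="greater")
```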
Spearman\u2019s rank-order correlation was run to assess the relationship between team quality and home advantage.For the sport-specific analysis, archery, athletics, swimming, table tennis, wheelchair basketball, and wheelchair fencing were included, because these were the six sports that have been contested in every edition of the Summer Paralympics and they also account for the vast majority of events contested in the competition since 1960 .Table\u00a0rs\u00a0=\u00a00.141, p\u00a0=\u00a00.602).Figure\u00a0Z\u00a0=\u00a02.792, p\u00a0=\u00a00.005), table tennis , and wheelchair fencing returned statistically significant differences between home and away performances.The prevalence of home advantage was found to vary according to sport. In two sports, athletics and table tennis, a home advantage effect appeared to be present on 13 of the 16 occasions (81.25%). Both archery and wheelchair fencing had a prevalence rate of 50%, whereas the corresponding scores for swimming and wheelchair basketball were 37.5% and 18.8%, respectively. The differences between nations\u2019 home market shares and their corresponding average pre/post-home market shares for each sport are shown in Table\u00a0The Spearman rank correlation coefficient for each sport showing the nature of the relationship between the size of the home advantage in a given sport and the strength (away performance) of nations in that sport is shown in Fig.\u00a0Previous home advantage research , 5 suggep\u00a0<\u00a00.01). In other words, strong evidence of a home advantage effect at overall country level was identified. This finding is consistent with what is known about home advantage in the Winter Paralympic Games [Our results show that host nations in the Summer Paralympic Games generally performed better at home than away from home and that the difference between home and away performances was statistically significant p\u00a0<\u00a00.0. In otheic Games , 14\u201316.p\u00a0<\u00a00.05). Evidence of home advantage in archery, swimming, and wheelchair basketball was either weak or inconclusive. Variations in the prevalence and size of the home advantage effect between different sports have also been observed in other studies of multi-sport events [Our results also point to sport-specific variations in home advantage in the Summer Paralympic Games. Across the six sports to be held in every edition of the competition to date, only athletics, table tennis, and wheelchair fencing exhibited a significant home advantage effect .Wheelchair fencing is a combat sport and as such may require some subjective input from judges, which might explain some of the observed home advantage effect in our study. The prevalence of home advantage in combat sports (including fencing) has previously been documented during the Olympic Games . PreviouHome competitors\u2019 familiarity with local conditions or the venue is a game location factor that is sometimes associated with home advantage. For example, Bray and Carron acknowlehttp://www.uksport.gov.uk/our-work/investing-in-sport/historical-funding-figures).From a strategic point of view, there is evidence that countries increase their level of investment in elite sport prior to hosting the Olympic and Paralympic Games . Indeed,Despite this considerable growth in elite sport funding, there was only a marginal improvement in Great Britain\u2019s market share in the Summer Paralympic Games between its pre-home edition in 2008 (7.55%) and its home edition in 2012 (7.62%). 
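Returning to the correlation analysis above, the nation-quality check can be sketched with scipy. The two input arrays here are hypothetical stand-ins, not the study's data (which gave rs = 0.141, p = 0.602 at country level):

```python
# Sketch of the Spearman rank-order check described above; the arrays are
# hypothetical stand-ins for nation strength (away market share) and
# home-advantage size.
from scipy.stats import spearmanr

away_strength = [7.3, 4.1, 2.8, 6.0, 1.5]   # average away market share (%)
ha_size       = [2.1, 0.4, -0.3, 1.8, 0.2]  # home-advantage score (pp)

rho, p = spearmanr(away_strength, ha_size)
```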
Nevertheless, it is still conceivable that increased financial support can contribute to home advantage, particularly when considering that Great Britain\u2019s market shares in the sports of athletics and table tennis at its home edition in 2012 improved by around three percentage points each in comparison with 2008.Building on a recent study , this re"} +{"text": "Gobiosoma\u00a0bosc, a widespread and phenotypically invariable intertidal fish found along the Atlantic Coast of North America. Using DNA sequence from 218 individuals sampled at 15 localities, we document marked intraspecific genetic structure in mitochondrial and nuclear genes at three main geographic scales: (i) between Gulf of Mexico and Atlantic Coast, (ii) between the west coast of the Florida peninsula and adjacent Gulf of Mexico across the Apalachicola Bay, and (iii) at local scales of a few hundred kilometers. Clades on either side of Florida diverged about 8 million years ago, whereas some populations along the East Cost show divergent phylogroups that have differentiated within the last 200,000\u00a0years. The absence of noticeable phenotypic or ecological differentiation among lineages suggests the role of stabilizing selection on ancestral phenotypes, together with isolation in allopatry due to reduced dispersal and restricted gene flow, as the most likely explanation for their divergence. Haplotype phylogenies and spatial patterns of genetic diversity reveal frequent population bottlenecks followed by rapid population growth, particularly along the Gulf of Mexico. The magnitude of the genetic divergence among intraspecific lineages suggests the existence of cryptic species within Gobiosoma and indicates that modes of speciation can vary among lineages within Gobiidae.The adaptive radiation of the seven\u2010spined gobies (Gobiidae: Gobiosomatini) represents a classic example of how ecological specialization and larval retention can drive speciation through local adaptation. However, geographically widespread and phenotypically uniform species also do occur within Gobiosomatini. This lack of phenotypic variation across large geographic areas could be due to recent colonization, widespread gene flow, or stabilizing selection acting across environmental gradients. We use a phylogeographic approach to test these alternative hypotheses in the naked goby This apparent lack of geographically structured phenotypic variability can take place even at oceanic or continental scales and could be due to three main causes: (i) recent colonization across large areas, with insufficient time for lineage sorting and divergence; (ii) ongoing gene flow at regional or oceanic scales associated with pelagic larval stages, which may homogenize gene pools and prevent differentiation; and (iii) strong balancing selection on ancestral phenotypic traits despite local differences in ecological conditions.However, even within groups like the seven\u2010spined gobies, characterized by highly ecologically specialized species assemblages at small geographic scales, some species stand out for showing no apparent phenotypic differentiation across large geographic ranges. Such is the case of the naked goby region encompassing a fragment of the ATP\u2010synthase 6 gene and the entire ATP\u2010synthase 8 gene (hereafter referred to as ATPase) using primers H9236 and L8331 gene using primers Fish1F and Fish1R , and variable sites were checked visually for accuracy. 
Sequences were unambiguously translated into their amino acid sequence, and no double peaks were observed in the chromatographs of the mitochondrial sequences, suggesting sequences were of mitochondrial origin and not nuclear copies. All sequences have been deposited in GenBank under the following accessions: MF168974\u2010MF169100 and MF182408\u2010182410.Fish were collected in the field at 15 different localities throughout the species distribution , Prochi and Orti), and ge and Orti). We set and Orti. We set arlequin 3.5 indicate an excess of recent mutations and reject population stasis took place about 8.07 MY ago 95% HPD: 3.74\u201313.12), and the separation between West Florida and the Gulf Coast across the Apalachicola break took place about 4.58 MY ago (95% HPD: 2.06\u20137.51). In contrast, most of the genetic structure within each of these three main clades originated relatively recently, within the last 500,000\u00a0years , and West Florida , and with localities such as Cedar Key (CKFL) and Apalachicola (APFL) representing clear examples. Diversity patterns in East Florida reject population stasis as well , yet the rest of the East Coast shows a nonsignificant regional value of Fs , even though the South Carolina locality does show clear signs of expansion and 9.51% (glaucofraenum\u2010venezuelae), although the complex is restricted to the temperate zone. The lack of phenotypic differentiation among naked goby populations remains an extreme case of cryptic variation given the large geographic range and pronounced ecological gradient, from subtropical to temperate latitudes.The lack of phenotypic divergence associated with the formation of independent lineages in Thacker document4.2G.\u00a0bosc complex corresponds to the Florida peninsula, a well\u2010known biogeographic landmark where congruence in contact zones between divergent lineages from the Atlantic and Gulf coasts has been documented for a number of marine organisms, from mollusks to mammals , was found to show a break there as well (Herke & Foltz, The main phylogenetic split in the The factors that caused the Apalachicola break in naked gobies remains unclear. This is partly because the bathymetry of the region over the Pleistocene was very dynamic, leading to dramatically changing habitat configurations for estuarine species over time (Bagley et\u00a0al., Numerous naked goby populations were characterized by star\u2010shaped phylogenies of mitochondrial haplotypes, particularly in the eastern Gulf of Mexico and western coast of Florida, indicating sudden population expansions there. Climatic and bathymetric oscillations during the Pleistocene could have caused sudden reductions in estuarine habitats, reducing population sizes and resulting in population bottlenecks that erased genetic diversity through drift. Marked effects of drift have been reported in gobiids at small geographic scales in California (McCraney, Goldsmith, Jacobs, & Kinziger, 4.3Cryptic taxa represent a challenge for the discovery and quantification of biodiversity, as their detection requires intensive sampling and costly methods such as phylogenetic analysis or molecular barcoding (Bernardi, 5Contrasting with the general pattern observed in seven\u2010spined gobies as a group, where ecology was seen to play a more important role than biogeography in species diversification, our results suggest that in some taxa like the naked goby, geography and drift were more important than ecology and selection in differentiating populations. 
The lack of phenotypic divergence despite marked genetic structure in neutral markers at different spatial scales suggests that stabilizing selection has prevented the ancestral phenotype from differentiating despite the broad environmental and latitudinal range occupied by the species.None declared."} +{"text": "PDA) is associated with an immunosuppressive tumor\u2010microenvironment (TME) that supports the growth of tumors and mediates tumors enabling evasion of the immune system. Expression of programmed cell death ligand 1 (PD\u2010L1) and loss of human leukocyte antigen (HLA) class I on tumor cells are methods by which tumors escape immunosurveillance. We examined immune cell infiltration, the expression of PD\u2010L1 and HLA class I by PDA cells, and the correlation between these immunological factors and clinical prognosis. PDA samples from 36 patients were analyzed for HLA class I, HLA\u2010DR, PD\u2010L1, PD\u20101, CD4, CD8, CD56, CD68, and FoxP3 expression by immunohistochemistry. The correlations between the expression of HLA class I, HLA\u2010DR, PD\u2010L1 or PD\u20101 and the pattern of tumor infiltrating immune cells or the patients\u2019 prognosis were assessed. PD\u2010L1 expression correlated with tumor infiltration by CD68+ and FoxP3+ cells. Low HLA class I expression was an only risk factor for poor survival. PD\u2010L1 negative and HLA class I high\u2010expressing PDA was significantly associated with higher numbers of infiltrating CD8+ T cells in the TME, and a better prognosis. Evaluation of both PD\u2010L1 and HLA class I expression by PDA may be a good predictor of prognosis for patients. HLA class I expression by tumor cells should be evaluated when selecting PDA patients who may be eligible for treatment with PD\u20101/PD\u2010L1 immune checkpoint blockade therapies.Pancreatic ductal adenocarcinoma ( Pancreatic ductal adenocarcinoma (PDA) is one of the most lethal human malignancies and the fourth leading cause of mortality in Japan reg) PDA is characterized by the presence of a dense desmoplastic stroma infiltrated with immunosuppressive myeloid\u2010derived suppressor cells, macrophages, fibroblasts, and regulatory T cells class I by tumor cells are crucial factors for the tumor development process In this study, we characterized PDA infiltrating hematopoietic cells and determined the expression of PD\u2010L1 and HLA by PDA tumor cells to determine the immunological status of the PDA TME, its immune escape systems, and the impact of these factors on the patients\u2019 prognosis.Tumor samples were obtained from 36 patients (mean age 68.2\u00a0years) who had undergone pancreatic resection for PDA at the Department of Surgery and Science, Kyushu University Hospital, between February 1998 and December 2013. Tumors were diagnosed histologically based on the General Rules for the Study of Pancreatic Cancer by Japan Pancreas society (2009). All patients provided full written informed consent, and the study was approved by the Ethics Committee of Kyushu university (ID number: 27\u201048). The baseline characteristics of patients were shown in Table\u00a0Formalin\u2010fixed, paraffin\u2010embedded tumor sections were assessed immunohistochemically using monoclonal antibodies against HLA class I , HLA DR , PD\u2010L1 , PD\u20101 , CD4 , CD8 , CD56 , CD68 , and FoxP3 , and the streptavidin\u2013biotin\u2010peroxidase complex method. The staining with those monoclonal antibodies is described in Data U\u2010tests. 
U\u2010tests. Categorical variables were compared using chi\u2010square tests. Overall and recurrence\u2010free survival rates were calculated with the Kaplan\u2013Meier method, with between\u2010group differences compared using the log\u2010rank test. In the risk factor analysis, propensity score matching analysis was performed to reduce confounding. After balancing the two groups based on propensity scores, which were calculated with Cox regression analysis for age, sex, and UICC stage, the risk factors for patient survival were evaluated using the Cox proportional hazards model. A P\u00a0<\u00a00.05 was considered statistically significant.The infiltration of CD4+, CD8+, CD56+, CD68+, and FoxP3+ cells was determined in histological sections; 19 samples (52.7%) were high, and 17 (47.3%) were low, for HLA class I expression. Although HLA\u2010DR was not expressed on normal pancreatic ductal cells, PD\u2010L1 positive samples showed infiltration by CD8+, CD68+, and FoxP3+ cells, but not CD56+ cells Fig.\u00a0. A higher histological grade (P\u00a0=\u00a00.006) and a more advanced clinical stage (P\u00a0=\u00a00.018) were more frequently observed in HLA class I low PDA samples than in HLA class I high samples (Table ). PD\u2010L1 positive tumors were significantly associated with a higher histological grade than PD\u2010L1 negative tumors (P\u00a0=\u00a00.001). High HLA class I expression combined with negative PD\u2010L1 was associated with a significantly better recurrence\u2010free survival than HLA class I low and PD\u2010L1 negative (n\u00a0=\u00a013) or HLA class I low and PD\u2010L1 positive (n\u00a0=\u00a06) tumors. In this analysis, we also investigated infiltration of CD8 lymphocytes between the groups. The number of infiltrating CD8+ lymphocytes was significantly higher in tumors that were PD\u2010L1 negative and HLA class I high than in tumors with different PD\u2010L1 and HLA class I expression. Similarly, membranous PD\u2010L1 negativity combined with high HLA class I expression was associated with a significantly better recurrence\u2010free survival than HLA class I low and PD\u2010L1 negative (n\u00a0=\u00a016) or HLA class I low and PD\u2010L1 positive (n\u00a0=\u00a02) tumors. The number of infiltrating CD8+ lymphocytes was significantly higher in tumors that were membranous PD\u2010L1 negative and HLA class I high than in tumors with different PD\u2010L1 and HLA class I expression. Membranous PD\u2010L1\u2010positive tumors were significantly associated with a higher histological grade than negative tumors. Therapeutic approaches, including CD40 agonistic antibodies and colony\u2010stimulating factor 1 receptor blockade, have been developed to transform the immunosuppressive pancreatic TME into conditions that can empower an anticancer immune response. In conclusion, the TME of PDA may enable the evasion of the immune system by upregulating PD\u2010L1. Evaluation of both PD\u2010L1 and HLA class I expression on PDA cells may be a good predictor of prognosis for PDA patients. HLA class I expression by tumor cells should be evaluated when selecting PDA patients who may be eligible for treatment with PD\u20101/PD\u2010L1 immune checkpoint blockade therapies.The authors declare no potential conflicts of interest.Figure S1. Enhanced PD\u2010L1 expression by PDA cells in areas of CD4+ or CD8+ T\u2010cell infiltration.Figure S2. HLA\u2010DR expression in normal pancreatic ductal cells.
Figure S3. Enhanced PD\u2010L1 expression by PDA cells in areas of CD68+ cell infiltration.Figure S4. The association between PD\u20101 expression and immune cell infiltrates in primary PDA lesions.Figure S5. HLA\u2010DR or PD\u20101 expression and patient survival.Figure S6. The association between membranous PD\u2010L1 expression and immune cell infiltrates in primary PDA lesions.Figure S7. Membranous PD\u2010L1 expression and patient survival.Figure S8. The expression patterns of HLA class I and membranous PD\u2010L1 and the survival outcomes of PDA patients.Table S1. Baseline characteristics of PDA patients with high or low HLA class I expression who underwent pancreatic resection.Table S2. Baseline characteristics of PDA patients with high or low HLA class I expression who underwent pancreatic resection, after matching.Table S3. Baseline characteristics of PDA patients with negative or positive PD\u2010L1 expression who underwent pancreatic resection.Table S4. Baseline characteristics of PDA patients with negative or positive membranous PD\u2010L1 expression who underwent pancreatic resection.Data S1. Immunohistochemical staining."} +{"text": "The purpose of this study was to evaluate the prognostic impact of major histocompatibility complex (MHC) class I expression and programmed death-ligand 1 (PD-L1) expression in patients with head and neck squamous cell carcinoma (HNSCC). A total of 158 patients with HNSCC were evaluated retrospectively. The expression of MHC class I and PD-L1 was analyzed in tumor specimens using immunohistochemistry. The association between MHC class I/PD-L1 expression and clinical outcome was evaluated by Kaplan-Meier and Cox regression analyses. Among 158 patients, 103 (65.2%) showed positive PD-L1 expression, and 20 (12.7%) showed no detectable expression of MHC class I. The frequency of PD-L1 positive expression with concomitant MHC class I loss was 7.0%. In the PD-L1-positive group, MHC class I loss was associated with a significantly worse survival compared with MHC class I positivity (median overall survival 39.3 months vs. not reached; P\u2009=\u20090.005), whereas MHC class I status provided no prognostic impact in the PD-L1 negative group. Neither PD-L1 nor MHC class I alone showed a significant difference in overall survival. The loss of MHC class I expression in PD-L1-positive HNSCC was associated with a poor clinical outcome. This suggested that MHC class I expression status might be useful for the prognosis of tumor progression in HNSCC when combined with PD-L1 expression status. External validation with a sufficient number of participants in this subgroup is needed. Head and neck squamous cell carcinoma (HNSCC) is the sixth most common malignancy in the world. Recently, programmed death-1 (PD-1)/programmed death-ligand 1 (PD-L1) blockade has produced remarkably durable clinical responses in HNSCC, with an objective response rate (ORR) of 13.3%, median progression-free survival (PFS) of 2.0 months, and median overall survival (OS) of 7.5 months2. 
PD-1/PD-L1 interaction normally regulates the activation and termination of immune response, but some tumors express PD-L1 and use this interaction as one of the major immune escape mechanisms. Thus, this interaction has become an immunologic therapeutic target in various malignancies5. Furthermore, PD-L1 expression was found to be positively correlated with ORR and PFS, suggesting that PD-L1 expression might be a positive predictive marker in some tumors7. Despite these encouraging results, a significant portion of PD-L1-positive patients with advanced HNSCC do not respond to these immune checkpoint inhibitors (ICIs)8. This suggests that there exist additional escape mechanisms that allow tumor cells blocked by ICIs to avoid attack by cytotoxic T lymphocytes.PD-1 is a transmembrane immune inhibitory receptor that plays the role of self-tolerance in immune response by repressing the effector functions of T cells within the tumor microenvironment. The PD-1 ligand, PD-L1, delivers its inhibitory signals to PD-1-positive T-cells to suppress their cytotoxic activity by cell-intrinsic inhibition of antigen-specific signaling9. The main function of MHC class I molecules is to display peptides, including tumor-associated antigens, to cytotoxic CD8\u2009+\u2009T cells. Defects in MHC class I molecules lead to impaired T-cell mediated cytotoxicity against tumor cells. The deregulated expression of MHC class I genes has been demonstrated in various tumor types13. Particularly, the total loss of MHC class I expression in primary and metastatic HNSCC lesions occurred in approximately 15% and 40%, respectively14. However, although some studies have revealed that MHC class I down-regulation might be a poor prognostic factor in HNSCC17, the prognostic relevance of MHC class I loss remains unclear.It is well established that tumor cells often harbor a loss or down-regulation of the major histocompatibility complex (MHC) class I molecules on their surface, and this is considered an immune escape mechanism of the tumorAlthough MHC class I loss is a frequent event and is thought to confer a tumor escape function, little is known about its clinical significance in PD-L1-positive patients with HNSCC. In the present study, we first analyzed the associative expression patterns of MHC I and PD-L1 in patients with locally advanced HNSCC. We then sought to explore the prognostic significance of impaired antigen presentation caused by MHC I loss combined with positive PD-L1 expression.The baseline characteristics of the patients and their tumors are described in Table\u00a0All patients were analyzed for MHC class I and PD-L1 expression, as shown in Table\u00a0The associations of MHC class I and PD-L1 expression with other clinico-pathological factors were also analyzed. No significant association was detected between PD-L1 or MHC class I expression and other clinico-pathological factors (Supplementary Tables\u00a0P\u2009=\u20090.143). Likewise, MHC class I status did not make a significant difference in OS in HNSCC . The Kaplan-Meier estimates of survival for PD-L1 negative and positive patients are shown in Fig.\u00a0SCC Fig.\u00a0. This suP\u2009=\u20090.005), with a median OS of 39.3 months compared with MHC class I-positive patients . 
Loss of p16 expression, advanced T classification (T3-T4 versus T1-T2), and receipt of (chemo)radiotherapy as definitive treatment were also associated with poor OS (Table\u00a0P\u2009=\u20090.008), as well as loss of p16 expression was associated with a worse survival .According to a Cox univariate proportional hazards analysis, negative MHC class I expression was significantly associated with reduced OS in the PD-L1-positive group , but it trended toward a worse survival without prognostic significance .Meanwhile, MHC class I loss in the PD-L1-negative setting did not retain any prognostic significance. Poor performance status was significantly associated with poor survival in the PD-L1-negative setting in univariate analysis 25, non-small cell lung cancer (17%)26, and hepatocellular carcinoma (17%)27. This discrepancy might be due to a cutoff point of the high/low expression of MHC class I, or differences in tumor types. For example, in the former case27, MHC class I expression is classified as low expression and high expression groups by using Allred scoring, not as absent or present. On the other hand, various results might result from the small number of participants in studies examining the proportion of MHC class I loss and PD-L1 expression including ours. Nevertheless, we found that some HNSCC patients lose MHC class I expression even with a high level of PD-L1 expression; this pattern is also seen in patients with other tumor types27.Although MHC class I loss has been reported in several types of tumors ranging from 15% to 96%30. This discrepancy of the results might be explained by our findings in HNSCC that the loss of MHC class I on tumors is a prognostic marker of poor survival only when PD-L1 is concomitantly expressed. According to Perea et al.26, non-small cell lung cancer patients with decreased MHC class I and high PD-L1 expression have larger tumor sizes that show a more aggressive phenotype, similar to esophageal cancer patients25. This association is not observed in patients with only MHC class I downregulation or PD-L1 upregulation. Although there was a lack of information on survival outcomes in Perea et al. that could limit the interpretation, the obtained results could explain the poor survival in patients with MHC class I loss in PD-L1-positive tumors as shown in our study. In addition, the most striking finding from the study by Perea et al.26 is that the proportion of CD8-positive T cell infiltration was 100% in PD-L1-positive patients with high MHC class I expression compared to 37% in PD-L1-positive patients with MHC class I loss, which indicates that T-cell-mediated adaptive immune response could not be activated without the presence of MHC class I expression. MHC class I might play a crucial role in an immune escape mechanism of the adaptive immune response exerted by CD8\u2009+\u2009T cells, which induces the up-regulation of PD-L1 through IFN-gamma3. Based on this assumption, we cautiously postulate that patients with MHC class I loss would have low expression of T cell infiltration, induce little adaptive immune response, and show poor clinical outcomes in PD-L1 positive setting despite lack of information on CD8-positive T cell infiltration in our study.The down-regulation of MHC class I alone has been controversial as a prognostic marker. 
For example, some studies demonstrated that MHC class I down-regulation was associated with improved survival in lung cancer and breast cancer, whereas others reported the opposite outcome in the same types of cancers31. Our result is slightly different from a previous study of classical Hodgkin\u2019s lymphoma in which patients with decreased/absent MHC I expression had worse survival regardless of PD-L1 positivity30. This inconsistent result emphasizes the heterogeneity of cancer. PD-L1 expression was also constitutive in the previous study of Hodgkin lymphoma rather than regulated by an adaptive immune response32. Furthermore, our findings are not in line with results from studies on esophageal cancer and hepatocellular carcinoma27. According to Ito et al.25, esophageal cancer patients with both high expression of PD-L1 and MHC class I seem to have a worse prognosis than patients with high expression of PD-L1 and decreased MHC class I expression. On the other hand, in patients with hepatocellular carcinoma27, MHC class I downregulation appears to be associated with poor prognosis among the low PD-L1 group, whereas no significant difference in survival is observed according to MHC class I expression among the high PD-L1 group. These two studies do not focus on the prognostic role of MHC class I loss in regard to PD-L1 upregulation, but focus on PD-L1\u2019s role in the presence of MHC class I expression. Despite the lack of explanation regarding these findings due to differences among the study aims, we assume that the impact of MHC class I expression in correlation with PD-L1 expression on survival might be different between cancer types. Every tumor type has a different escape mechanism, and therefore it is unique that we found differences in prognosis by MHC class I loss and concomitant PD-L1 expression in HNSCC patients. Yet, the speculation related to the association of CD8-positive T cell infiltration with MHC class I expression remains to be validated.However, Perea and colleagues present some differences in T cell infiltration by MHC class I expression in PD-L1-negative patients, although the magnitude of differences is smaller than those in PD-L1-positive patients. We suggest that a different mechanism than T cell infiltration might compensate for the differences in intra-tumoral T cells and promote tumor progression in PD-L1-negative tumors34. Another group showed that PD-L1 expressed on the immune cell surface is associated with a good prognosis35. In addition, a recent study suggested that PD-L1 on tumor cells is closely associated with a longer disease-free survival36. The results were inconsistent and conflicting. This inconsistency reflects that the host immune surveillance is acting through multiple pathways4. For example, in addition to cell-intrinsic inhibition of antigen-specific signaling mediated by PD-1/PD-L1 interaction, CTLA-4 suppresses CD28-mediated T cell activation by competitive binding with CD80 and CD8637. Furthermore, there are several immune inhibitory molecules such as CTLA-4, LAG-3, TIM-3, and TIGIT with unique functions and different tissue sites38. Besides, metabolic reprogramming by tumor microenvironment39 and regulation of other anabolic and catabolic pathway by mitochondria40 also influences tumor-immune response as well as epigenetic modification of T cells41. 
Therefore, the prognostic impact of PD-L1 might be complicated by these various factors.PD-L1 expression has been reported in several malignancies including HNSCC, implying its vital role in the process of tumor escape through PD-1 and PD-L1 interaction. Two study groups reported that PD-L1 positivity is a poor prognostic factor with a shortened OS in HNSCC12, discriminating the type of MHC class I by mechanisms is important. However, there is a discrepancy between losses of MHC class I expression detected by immunohistochemistry (IHC) and commonly characterized genetic alterations of MHC class I gene or antigen presenting machinery pathway such loss of heterozygosity (LOH) or mutations. LOH in chromosome 6 (human leukocyte antigen) or 15 (beta-2-microglobulin) is the most commonly found mechanism of MHC class I alteration13. However, some cases with loss of MHC class I expression present with other complex mechanisms and different phenotypes can occur42. As selective loss or allelic loss of MHC class I molecule might not induce the loss of expression and not be detected by IHC, they could be the reason for underestimating MHC class I loss. To minimize underestimation, more comprehensive diagnostic methods are needed including not only IHC, but also microsatellite analysis to detect LOH and sequencing to analyze copy number alterations and mutations. Furthermore, it is also important to distinguish genetic aberrations of MHC class I and its polymorphisms.MHC class I might be important in immune surveillance only in tumors in which the adaptive immune response plays a role. In this context, immune checkpoint inhibitors might not be effective in PD-L1-positive tumors harboring MHC class I loss. Further study on a larger population size is needed to draw conclusions on the clear role of MHC class I in the prognosis of PD-L1-positive HNSCC patients. On the other hand, since treatment strategy may vary depending on the concept of \u2018hard loss\u2019 and \u2018soft loss\u201944. The aforementioned drawback associated with this antibody may be one of the limitations in our study. However, although various companion diagnostics for PD-L1 expression have been recently commercially available, in the early days those tests were not set up and E1L3N was widely used for research purposes. Furthermore, E1L3N showed good agreement rate with other antibodies45 in some studies performed in head and neck cancer patients44. Besides, we evaluated PD-L1 expression only on tumor cells, but not on immune cells. There is a report that the prognostic implication of surface marker expression might differ depending on the cell type on which the marker is expressed35. Thus, PD-L1 expressed on immune cells might affect the survival outcome in a different way from PD-L1 expressed on tumor cells. Third, we did not perform IHC of CD8-positive T cells, which is a key component for activated adaptive immune response. Although our speculation was based on evidence that CD8-positive T cell infiltration is positively correlated with MHC class I expression in several cancer types26 including HNSCC45, it is difficult to judge whether the adaptive immune response is indeed activated in PD-L1-positive tumors. Further studies are strongly needed to verify this association and survival differences. Finally, because our study did not evaluate the presence of LOH but examined MHC class I loss in terms of cell surface protein expression, we were unable to discriminate \u2018hard loss\u2019 and \u2018soft loss\u2019. 
Regarding this limitation, tests for identifying genetic alterations such as microsatellite markers has to be investigated in further analyses. Future studies to demonstrate the mechanism of MHC class I loss in HNSCC patients are also needed. Regardless of this limitation, our study has its own valuable point as a hypothesis generating study rather than confirmatory study, which shows the different prognostic role of MHC class I loss according to PD-L1 expression.Our study has several limitations. First, this was a single-institutional retrospective study with a small population. The small size of subgroup with MHC class I loss and PD-L1 expression may limit the interpretation of our findings. Because further study with larger sample size can show different results, external validation with enough numbers of participants in such subgroup must be performed to validate our conclusion. Moreover, there could be a selection bias since the tissue microarray was made only from patients with sufficient tissue samples. Second, there are some issues about immunohistochemical detection of PD-L1. We used E1L3N, the rabbit monoclonal antibody for PD-L1 expression, which has been regarded as having relatively lower concordance rate than other antibodies for PD-L1 such as SP142 or SP263The current study demonstrates that loss of MHC class I expression is associated with a poor prognosis in PD-L1-positive HNSCC. This suggests that the combination of MHC class I and PD-L1 might be useful to better predict the clinical course of the disease. However, external validation should be needed to verify our conclusion. Moreover, these biomarkers require further validation, especially in HNSCC patients treated with ICIs.We retrospectively reviewed the medical records of patients diagnosed with locally advanced HNSCC and treated at Seoul National University Hospital (SNUH) between December 2004 and December 2014. The eligibility criteria were as follows: HNSCC tumors were histologically confirmed by a pathologist according to the seventh edition of the American Joint Committee on Cancer; patients had to be 19 years or older; patients were treated initially with induction chemotherapy and/or concurrent chemoradiotherapy or radical surgery; sufficient formalin-fixed, paraffin-embedded tumor samples IHC were obtained. A total of 158 patients were enrolled. Baseline patient characteristics , and treatment factors (including types of definite and adjuvant treatments) were retrospectively obtained from medical records. The study was approved by the Institutional Review Boards of SNUH and was conducted in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.46. Those with a membranous staining score for PD-L1 more than 1 were regarded as positive for PD-L1. MHC class I expression was assessed by IHC with anti-HLA class I A, B, and C antibody . Based on the percentage of MHC class I-positive tumor cells, each patient was classified according to the criteria established by the Human Leukocyte Antigen and Cancer component of the 12th International Histocompatibility Workshop47. 
When the cell membrane was stained as strong as stromal lymphocytes or endothelial cells in more than 75% of the tumor cells, expression levels were defined as strong (2); If heterogeneous membrane staining was found in more than 25% of the tumor cells, it was defined as weak (1); If less than 75% of the tumor cells lacked membrane staining, it was defined as none (0). p16 was evaluated using mouse anti-p16 (E6H4) monoclonal antibody (mAb) and was considered positive in case of diffuse and strong nuclear and cytoplasmic staining in 70% or more of tumor cells48. All the IHC results were assessed by expert pathologists at SNUH who were blinded to the clinical and pathological data.We used TMA of the patients included in the study for IHC staining. From representative tumor regions identified by H&E stained sections, tissue cylinders were obtained and arrayed into a TMA block. Immunohistochemical staining was conducted using serial sections from this TMA block using the following antibodies. In cases with information of PD-L1 expression by IHC, rabbit anti-PD-L1 (E1L3N) XP\u00ae mAb was utilized with the Ventana Benchmark XT system at the Department of Pathology at SNUH. PD-L1 IHC based on the intensity and proportion of membrane staining in tumor cells was scored as follows: 0, less than 5% of tumor cells; 1, weak in \u22655% of tumor cells; 2, moderate in \u22655% of tumor cells; and 3, strong in \u22655% of tumor cellsP\u2009<\u20090.10). Although MHC class I loss was insignificant in univariate analysis, we included it in the multivariable analysis because we regarded it as a clinically essential variable. Statistical significance was defined as P\u2009<\u20090.05. All statistical tests were two-sided and were performed using STATA, version 15 software .The descriptive data of the patients are presented as numbers with percentages or mean values with standard deviations. The association of PD-L1 positivity and MHC class I loss with demographic and clinico-pathological variables was analyzed using Student\u2019s t-test or chi-square tests, where appropriate. The outcome of interest was OS, which was defined as the time from the date of diagnosis to the date of death or the date of the last follow-up if censored. Survival outcome was analyzed using Kaplan-Meier estimation. We performed the log-rank test to compare OS according to PD-L1 and MHC class I. To identify the impact of MHC class I on OS by PD-L1 expression, we performed subgroup analyses by stratification. In addition, factors associated with OS were analyzed using univariate Cox regression analyses separately for PD-L1-positive and PD-L1-negative patients. We performed multivariable Cox regression analysis employing statistically significant variables from the univariate analysis (Supplementary Table S1, Supplementary Table S2"} +{"text": "As an excellent model organism, zebrafish have been widely applied in many fields. The accurate identification and tracking of individuals are crucial for zebrafish shoaling behaviour analysis. However, multi-zebrafish tracking still faces many challenges. It is difficult to keep identified for a long time due to fish overlapping caused by the crossings. Here we proposed an improved Histogram of Oriented Gradient (HOG) algorithm to calculate the stable back texture feature map of zebrafish, then tracked multi-zebrafish in a fully automated fashion with low sample size, high tracking accuracy and wide applicability. 
The performance of the tracking algorithm was evaluated in 11 videos with different numbers and different sizes of zebrafish. In a right-tailed Wilcoxon hypothesis test, our method performed better than idTracker, with significantly higher tracking accuracy. Throughout the video of 16 zebrafish, the training sample of each fish had only 200\u2013500 image samples, one-fifth of idTracker\u2019s sample size. Furthermore, we applied the tracking algorithm to analyse the depression and hypoactivity behaviour of zebrafish shoaling. We achieved correct identification of depressed zebrafish among the fish shoal based on the accurate tracking results, which could not be identified by a human. The notable shoaling behaviour of zebrafish has garnered widespread interest among scientists. It is crucial to achieve accurate and rapid identification and tracking of individuals for relevant mechanistic analyses of and predictions for ecosystems.Zebrafish have been widely applied in many fields as an excellent model organism, for example in biological experiments6. However, multi-zebrafish tracking still faces many challenges, such as similar shape, movement and frequent occlusion7. A series of computer vision tracking methods were proposed to solve these problems. These methods generally fall into two categories: tracking methods based on detection-data association and methods based on identification.In the first category, researchers extracted the objects from videos first and then associated the positions of the objects continuously to achieve multi-object tracking. In 2008, Nevatia proposed a multi-level data association tracking framework. This system worked well in numerous fields. Similar methods also include an extended Kalman filter8. Dicle et al. tracked multiple objects with similar appearances by evaluating the rank similarity of the matrix, which was formed by the trajectory data6. However, this type of tracking method is more suitable for tracking objects with immutable movement. Thus, these methods cannot be effectively adapted to zebrafish with uncertain mobility.In the second category, the main idea is to correctly identify individuals and then perform multi-object tracking based on the identification. Researchers have manually labelled the targets, such as with colour labels9 and visible implantable elastomer (VIE) tags10, to achieve identification in the early stage. However, the process of manual marking is very complicated, and the marks may even affect the social behaviour of the objects. In 2014, Alfonso P\u00e9rez-Escudero et al. proposed idTracker, which identifies individuals based on fingerprinting features on the back texture of animals11. This fully automated tracking method has achieved good tracking performance in many fields. However, the method has high requirements regarding the quality and quantity of samples, given that the fingerprinting feature is not stable. Experimental videos must be sufficiently long to ensure each zebrafish has approximately 2500 image samples, which results in a slow processing speed, so the method cannot be widely used in practice. Therefore, it is necessary to design new image features to realize a multi-zebrafish tracking method.The Histogram of Oriented Gradient (HOG) algorithm is an image descriptor proposed by Dalal et al. to detect humans12. The HOG algorithm calculates the histogram of gradient directions or edge orientations in local regions and is stable when tracking humans. However, the algorithm is orientation-sensitive to the input images and cannot be applied to zebrafish tracking directly. 
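For readers unfamiliar with the descriptor, a standard HOG computation looks roughly like this. This is a minimal sketch using scikit-image, with an illustrative ROI size and parameter choices rather than the authors' settings:

```python
# Sketch of the standard HOG descriptor discussed above (scikit-image);
# the ROI size and HOG parameters here are illustrative choices.
import numpy as np
from skimage.feature import hog

roi = np.random.rand(64, 32)  # stand-in for a grey-scale zebrafish head ROI

features = hog(
    roi,
    orientations=9,          # gradient-direction bins per cell
    pixels_per_cell=(8, 8),  # non-overlapping cells
    cells_per_block=(2, 2),  # overlapping blocks used for normalization
)

rotated_features = hog(np.rot90(roi), orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2))
# The two descriptors differ, which is the orientation sensitivity that
# motivates the rotation-stabilized variant proposed in this paper.
```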
To improve the orientation-sensitive nature of the algorithm and increase stability, we proposed an improved HOG algorithm to calculate the stable back texture feature map of the zebrafish and track multi-zebrafish in a fully automated fashion.In this paper, we classified the HOG feature blocks based on the correlation of the zebrafish\u2019s back texture and then outputted the relevant back texture feature blocks spirally from the centre. In addition, the training samples were automatically accumulated based on the trajectory analysis and the improved HOG\u2009+\u2009SVM (Support Vector Machine) classification mechanism. Finally, we established the data association strategies based on the identification of multiple zebrafish and obtained the final whole trajectories.To verify the validity of the improved HOG features, we compared the proposed improved HOG algorithm with the previous HOG method by identifying multiple zebrafish. Then we applied the tracking algorithm to identify 3 types of animal models, and all of them were successfully identified. Multiple zebrafish were also identified across a growth period of more than a month. Furthermore, eleven videos of different numbers and different sizes of zebrafish were processed. The tracking results showed that the accuracy rates were considerably increased compared with state-of-the-art tracking methods. Statistics of 8 tracking videos revealed that the average accuracy rate of the proposed method is 99.27%, 4.33% higher than idTracker11. In a right-tailed Wilcoxon hypothesis test, our method performed better than idTracker, with significantly higher tracking accuracy. Throughout the video of 16 zebrafish, the training sample of each fish had only 200\u2013500 image samples, one-fifth of idTracker\u2019s sample size. The proposed method can also calculate the stable feature of zebrafish larvae and achieves good tracking performance. Finally, we applied the tracking algorithm to analyse the depression and hypoactivity behaviour of zebrafish shoaling. We achieved correct identification of depressed zebrafish among the fish shoal based on accurate tracking results that could not be identified by a human. In summary, the proposed method based on improved HOG features tracks multiple zebrafish in a fully automated fashion with low sample size, high tracking accuracy and wide applicability.The tracking algorithm consists of three modules: the Preprocessing module, the Feature extracting module, and the Tracklets classification and matching module (Fig.\u00a0). The Preprocessing module contains two parts: zebrafish ROI image sample collection and initial tracklet acquisition. On one hand, we built the background model and intercepted the zebrafish ROIs as image samples. On the other hand, we matched the zebrafish positions into initial tracklets, one for each individual, as long as possible.The first f frames were used to calculate the intensity of the background model: \(B_m(x,y)=\frac{1}{f}\sum_{t=1}^{f}G_t(x,y)\), where \(G_t(x,y)\) is the intensity at point (x,y) on frame t. We extracted the whole zebrafish region by applying the background subtraction method. Then we obtained the minimum enclosing rectangles of the zebrafish in the whole zebrafish region by using an elliptic fitting algorithm, and calculated the centre coordinate and the tilt angle \u03b8 of the enclosing rectangle. 
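The preprocessing steps just described (mean background model, background subtraction, and ellipse fitting for the tilt angle) can be sketched as follows. This is an assumption-laden illustration with OpenCV; the threshold values are placeholders rather than the paper's calibrated Thrg and Thrs:

```python
# Sketch of the preprocessing step above: mean background over the first f
# frames, background subtraction, and ellipse fitting for the tilt angle.
# Threshold values and function names are assumptions, not the paper's code.
import cv2
import numpy as np

def build_background(frames):
    """B_m(x, y) = mean of G_t(x, y) over the first f grey-scale frames."""
    return np.mean(np.stack(frames), axis=0).astype(np.float32)

def fish_rectangles(frame, background, thr_g=30, thr_s=50):
    """Return (cx, cy, w, h, tilt angle) for each candidate fish region."""
    fg = (np.abs(frame.astype(np.float32) - background) > thr_g).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) > thr_s and len(c) >= 5:  # area threshold, Thr_s
            (cx, cy), (w, h), angle = cv2.fitEllipse(c)  # centre and tilt angle
            boxes.append((cx, cy, w, h, angle))
    return boxes
```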
The initial value of θ ranges from −90° to 90°. The flow chart of zebrafish ROI collection is presented in Fig. . We intercepted the head region of the zebrafish as the zebrafish ROI, given that the whole zebrafish is a non-rigid object whose shape may change as it swims. To obtain a stable HOG feature, we set a threshold Thre on the aspect ratio of the enclosing rectangle to exclude the zebrafish ROIs in which the fish bodies are significantly deformed or overlapping occurs, shown as Figs . Let D(Oi,t−1, Oj,t) denote the Euclidean distance between object Oi,t−1 on frame t−1 and object Oj,t on frame t; the matching results are then given by equation . Both Oi,t−1 and Ok,t−1 on frame t−1 may be matched with the same object Oj,t on frame t according to the equation, which may be caused by a stationary fish or a special matching distance. In that case we matched Oi,t−1 with Oj,t if the difference between D(Oi,t−1, Oj,t) and D(Ok,t−1, Oj,t) is greater than the threshold Thrd. Additionally, the number of detected objects changes when zebrafish are crossing, so the crossing determination criterion is defined by equation . We discarded the short tracklets (<10 frames) caused by crossings to avoid introducing possible overlapping errors. In the Feature extracting module, we proposed the improved HOG algorithm to calculate the stable back texture feature map of zebrafish, which greatly increases the certainty of identification. Then we extracted the improved HOG features of the zebrafish image samples. The basic idea of the HOG algorithm is that local object appearance and shape can often be characterized rather well by the distribution of local intensity gradients or edge directions [12]. It is implemented by dividing the binary image into small spatial regions ("cells") and, for each cell, accumulating a local 1-D histogram of gradient directions or edge orientations over the pixels of the cell. There is no overlapping between the cells. For better invariance to illumination, shadowing, etc., a certain number of cells are grouped to form larger spatial regions ("blocks"). A measure of local histogram "energy" is then computed over each block, and the result is used to normalize all of the cells in the block; the blocks overlap one another. Finally, the feature vectors are output by sliding windows (this cell/block scheme is sketched below). However, the algorithm is orientation-sensitive to the input images. The zebrafish ROIs are scaled to the specified size Pr as the image samples. Let Bit(x, y) be the binary value of point (x, y) on frame t, obtained by thresholding the difference between frame t and the background model Bm of the equation above. For the tracklet classification, we first extracted the improved HOG features in the two groups of tracklets before and after a crossing. Then we compared the lengths of the shortest tracklets of the two groups to obtain the group with the longer tracklets. Third, we used the group Tt with the longer tracklets to train the SVM classifier. Finally, we classified the tracklets in the other group with the classifier and associated the tracklets with the same identities. As shown in Fig. , min(NF)A > min(NF)B; we then stitched the initial tracklets with the same identity. In this manner, the samples were accumulated over the lengthened tracklets, which greatly increases the certainty of identification. For training the second type of classifier, many tracklet groups are waiting to be matched, as shown in Fig. . We first found the shortest tracklet length min(NFt) in each group, and then compared these shortest tracklets across groups to obtain the longest of them, with a length of max(min(NFt)).
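For concreteness, here is a minimal sketch of the standard Dalal-Triggs-style cell/block computation summarized above. It is not the authors' improved variant (their correlation-based block selection and spiral output order are not reproduced), and the cell size, block size, and bin count are assumed values.

```python
import numpy as np

def hog_descriptor(img, cell=8, block=2, nbins=9):
    """Minimal HOG: per-cell orientation histograms, block-normalized.

    img: 2-D grayscale array whose sides are multiples of `cell`.
    Returns a 1-D feature vector (blocks slide by one cell, so they overlap).
    """
    gx = np.zeros_like(img, dtype=float); gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]           # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation

    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, nbins))
    bins = (ang / (180.0 / nbins)).astype(int) % nbins
    for i in range(ch):                              # non-overlapping cells
        for j in range(cw):
            b = bins[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist[i, j] = np.bincount(b, weights=m, minlength=nbins)

    feats = []
    for i in range(ch - block + 1):                  # overlapping blocks
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))   # L2 block "energy" norm
    return np.concatenate(feats)

demo = np.random.default_rng(1).random((32, 32))
print(hog_descriptor(demo).shape)   # 3*3 blocks * 2*2 cells * 9 bins -> (324,)
```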
Finally, the tracklet group Tt that contained the longest tracklet was selected to train the classifier. Let N be the number of zebrafish tracked in the video and pij be the predicted probability of tracklet Tit being classified as class Tjt+n. The N × N probability matrix is noted in equation , and the pij in the equation are calculated as follows. We also provided a manual error correction function to correct tracking errors and improve the accuracy of the tracking results. To reduce manual checking, possible tracking errors are located in the image sequence automatically, and the error correction is accomplished by convenient human-computer interaction. We assume that a matching error may occur in a tracklet when its matching probability value is less than a threshold Thrp; the error correction function automatically locates these points. The user can then view the tracking results of adjacent frames and correct wrong matching results easily by clicking the mouse. This function is detailed in Supplement . Thrs and Thrg are the area threshold and grey threshold, respectively, applied when using the background subtraction method to extract the entire zebrafish region; Thrs is determined by the size of the fish body, and Thrg is set based on the grey-scale difference between the foreground and background. Thre is the enclosing-rectangle aspect ratio threshold of the zebrafish ROI, which is used to remove samples with serious body deformation or overlapping. Pr is the scale parameter used when scaling the zebrafish ROI to the specified scale as the final image sample. Both values are set based on zebrafish size. Thrd is the distance threshold of equation . For evaluation we used the Identification Accuracy (IA), which denotes the probability that the target is correctly identified, and the Classification Accuracy (CA), which denotes the probability that the correct labels are predicted by the algorithm. In addition, to quantify the occurrence of crossings in the tracking dataset, we defined that a crossing occurs in the current frame if the detected fish number is less than the total fish number; the CrossFrequency denotes the frequency of crossings. Finally, we set the evaluation metrics calculated as follows to evaluate the performance of the tracking system. All of the experimental protocols and procedures involving zebrafish were approved by the Committee for Animal Experimentation of the College of Life Science at Nankai University (no. 2008) and were performed in accordance with the NIH Guide for the Care and Use of Laboratory Animals. The datasets generated during and analyzed in this research are available on GitHub: https://github.com/deitybyx/hog_ZebraTracker. The automatic zebrafish tracking system was implemented in MATLAB (version 2015b). The codes were run on a platform with a quad-core Intel Core i5-6500, 3.20 GHz CPU and 8 GB RAM. We utilized LIBSVM [15] for the SVM classifier; see Fig. . The experiment was performed in an in-house developed observation system [16]. In the observation system, an industrial camera with a resolution of 1280 × 960 was used to capture the videos of zebrafish from the top view.
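A minimal sketch of the HOG + SVM identification step described above, using scikit-learn's SVC as a stand-in for the LIBSVM toolbox actually used by the authors; the feature vectors and fish identities below are synthetic. Averaging the per-sample class probabilities over all samples of a tracklet would give the pij entries of the probability matrix.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical data: HOG feature vectors collected from the tracklets of the
# longer group (labels = fish identities), used to classify the other group.
rng = np.random.default_rng(2)
n_fish, dim = 4, 324
centers = rng.normal(0, 1, size=(n_fish, dim))
X_train = np.vstack([c + 0.3 * rng.normal(size=(200, dim)) for c in centers])
y_train = np.repeat(np.arange(n_fish), 200)

clf = SVC(kernel="rbf", probability=True)    # probability=True yields p_ij-style scores
clf.fit(X_train, y_train)

X_query = centers + 0.3 * rng.normal(size=(n_fish, dim))   # one sample per unknown tracklet
proba = clf.predict_proba(X_query)           # rows ~ tracklets, cols ~ identities
print(np.round(proba, 2))
print("assigned identities:", proba.argmax(axis=1))
```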
A 25 cm × 15 cm × 15 cm tank was used for 10 or fewer zebrafish, and a 60 cm × 45 cm × 15 cm tank was used for more than 10 zebrafish, with a water depth of 10 cm; the vertical distances between the camera and the bottom of the tank were 50 cm and 100 cm, respectively, which ensures that the images captured by the camera are clear and legible. In the experiments, we identified 9 zebrafish to compare the improved HOG algorithm and the previous HOG method. In addition, we used 3 types of animal models and 11 videos with different numbers of zebrafish to evaluate the performance of the tracking algorithm. Furthermore, we applied the tracking system to analyse the depression and hypoactivity behaviour in zebrafish shoaling. We applied the proposed improved HOG algorithm and the previous HOG method to process the same 9 zebrafish. Then, we compared the Classification Accuracy (CA) rate of each zebrafish to evaluate the performance of the two methods. The 9 zebrafish were imaged separately. Each fish video contained approximately 1500 frames; the first 1000 frames in each video were gathered to train the classifier and the last 500 frames were used for testing. Category represents the ground-truth label of the targets, as shown in Fig. . In the multi-object identification experiment, we captured the sample images of each model animal separately. The number of zebrafish (AB strain) was 30. Each fish video contained approximately 1500 frames; the first 1000 frames in each video were gathered to train the classifier and the last 500 frames were used for testing. The experimental results are presented in Table . Based on the results, the identification accuracy (IA) rate was 100%, and the average classification accuracy (CA) rate was 86.7%. In addition, we also identified two other animal models: drosophila (ISO4) and black mouse (C57 B6). Both of these animals were correctly identified; the results are detailed in Supplement s1 Tables . To further verify the stability of the improved HOG feature, we identified 30 zebrafish (10 months old) over a growth cycle of greater than one month. In the experiment, the same 30 zebrafish were tested every week and imaged separately. The sample images of the 30 zebrafish collected in the first week were used as the training set for the classifier. The classifier was then applied to identify the zebrafish at later points in the growth cycle. The results are presented in Fig. . In the multi-object tracking experiment, we compared the proposed method and idTracker by tracking 11 videos with different numbers of zebrafish. D9 is the experimental video in [11], D10 was chosen from [17], and D11 is the zebrafish larvae image sequence of dataset 01 in [14]. Dataset information is presented in Table . The p-value of the right-tailed Wilcoxon test is 0.0078 < 0.05 and the result of the hypothesis test is h = 1, which indicates that the median AccuracyRate of our method is significantly greater than the median AccuracyRate of idTracker (a sketch of this test is given below). The proposed method can also calculate stable features of zebrafish larvae, with good tracking results in D11, better than idTracker. In the proposed method, the training sample of each fish in D7 included only 200–500 image samples, one-fifth of idTracker's sample size, but the identification still worked well, which demonstrates the stability of the improved HOG features.
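The comparison above can be reproduced in outline with SciPy's signed-rank test; the paired per-video accuracy values below are hypothetical stand-ins for the paper's table, not the published data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired accuracy rates (%) on the same 8 videos.
ours      = [99.8, 99.5, 99.1, 98.9, 99.6, 99.3, 99.0, 98.7]
idtracker = [96.2, 95.0, 94.8, 93.9, 95.5, 94.1, 93.2, 92.6]

# Right-tailed (one-sided) test: H1 says the paired differences ours - idtracker
# are shifted toward positive values, i.e. our accuracy is higher.
stat, p = wilcoxon(ours, idtracker, alternative="greater")
print(f"W = {stat}, one-sided p = {p:.4f}")   # all 8 differences positive -> p = 1/2**8
```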
Although the video quality of D9 and D10 was very poor due to video compression, good tracking results were obtained. As noted from the results of the proposed method, when the number of zebrafish and the probability of crossing significantly increase, the AccuracyRate exhibits a decreasing trend (Fig. ) and the error rate an increasing trend. The same experimental datasets were processed by idTracker, and those tracking results are also presented in Fig. . We conducted a behavioural analysis of zebrafish shoaling behaviour using the tracking system. Reserpine depletes monoamines and causes depression and hypoactivity in humans and rodents [18]. We observed whether the depression and hypoactivity introduced by reserpine in single zebrafish manifested in the fish group. A reserpine (purity ≥ 98.0%) concentration of 40 μg mL−1 was chosen in this study based on previous research concerning effective doses of reserpine on depressive behaviour in zebrafish. A total of 108 adult zebrafish were divided into 2 groups: "Reserpine" and "WT". Nine replicates were conducted for each group. Five wild-type zebrafish and 1 target zebrafish were allocated to the "Reserpine" group, and 6 wild-type zebrafish were allocated to the "WT" group. In the "Reserpine" group, the nine target zebrafish were exposed to the reserpine solution for 20 minutes, and the treated zebrafish were then maintained in system water for 7 days to induce depression and anxiety-like behaviour. After the 7 days in system water, the locomotion behaviour parameters of the 9 treated fish and of 9 random fish in the "WT" group were measured in a novel tank by the total distance travelled, average velocity, turn angle and angular velocity to assess whether the reserpine works [19]. The results are presented in Supplement s1 Fig. . To identify the treated zebrafish among the normal zebrafish in the "Reserpine" group, two types of videos were captured: video with and without the treated zebrafish. We first applied the tracking system to the 2 types of videos to obtain the final trajectories. Then, we gathered the improved HOG feature maps from the trajectories of the shoal with the treated zebrafish to train the classifier, and the feature maps obtained from the trajectories of the video without the treated zebrafish served as the test sets. Finally, we took the unmatched label as the label of the treated zebrafish based on the classifier. The results for one replicate are presented in Table . The sample images were selected every 5 s for the complete duration of the recorded trials. We calculated the nearest-neighbour distance and the average inter-individual distance of the 9 replicates in the 2 groups, and the average values are presented in ; the computation of these two distance measures is sketched below.
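A minimal sketch of the two cohesion measures named above, under the assumption that they are computed per frame from the tracked centroids and then averaged over the sampled frames; the coordinates are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def shoal_metrics(positions: np.ndarray):
    """positions: (n_fish, 2) centroid coordinates in one frame.

    Returns (mean nearest-neighbour distance, mean inter-individual distance),
    the two cohesion measures used for the shoaling analysis."""
    d = squareform(pdist(positions))   # full pairwise distance matrix
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    nnd = d.min(axis=1).mean()         # nearest-neighbour distance
    iid = pdist(positions).mean()      # average inter-individual distance
    return nnd, iid

frame = np.array([[10., 12.], [13., 15.], [40., 42.], [11., 30.], [25., 25.], [26., 27.]])
print(shoal_metrics(frame))
```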
The problem of object overlap caused by crossings is difficult to address in multi-zebrafish tracking. The method in [20] was proposed to distinguish fish of the same congener when crossing occurred. Qian Z-M et al. proposed a multiple-fish 3D tracking method that analyses the objects' motion from three directions simultaneously [21]. Both of these methods provided good tracking performance in the context of crossings. In the future, we will process the videos from multiple dimensions based on improved HOG features to overcome overlaps. In addition, deep learning is also useful for multi-object identification, since it automatically calculates the features without manual interaction. We will apply deep learning combined with the motion parameters to analyse overlapping objects, aiming at further improving the tracking accuracy. Supplement S1; Video S1; Video S2; Dataset 1."} +{"text": "Clinical trials are key to advancing evidence-based medical research. The medical research literature has identified the impact of publication bias in clinical trials. Selective publication for positive outcomes or nonpublication of negative results could misdirect subsequent research and result in literature reviews leaning toward positive outcomes. Digital health trials face specific challenges, including a high attrition rate, usability issues, and insufficient formative research. These challenges may contribute to nonpublication of the trial results. To our knowledge, no study has thus far reported the nonpublication rates of digital health trials.The primary research objective was to evaluate the nonpublication rate of digital health randomized clinical trials registered in ClinicalTrials.gov. Our secondary research objective was to determine whether industry funding contributes to nonpublication of digital health trials.To identify digital health trials, a list of 47 search terms was developed through an iterative process and applied to the "Title," "Interventions," and "Outcome Measures" fields of registered trials with completion dates between April 1, 2010, and April 1, 2013. The search was based on the full dataset exported from the ClinicalTrials.gov database, with 265,657 trials entries downloaded on February 10, 2018, to allow publication of studies within 5 years of trial completion. We identified publications related to the results of the trials through a comprehensive approach that included an automated and manual publication-identification process.In total, 6717 articles matched the a priori search terms, of which 803 trials matched our latest completion date criteria. After screening, 556 trials were included in this study. We found that 150 (27%) of all included trials remained unpublished 5 years after their completion date. In bivariate analyses, we observed statistically significant differences in trial characteristics between published and unpublished trials in terms of the intervention target condition, country, trial size, trial phases, recruitment, and prospective trial registration. In multivariate analyses, differences in trial characteristics between published and unpublished trials remained statistically significant for the intervention target condition, country, trial size, trial phases, and recruitment; the odds of publication for non-US–based trials were significant, and these trials were 3.3 (95% CI 1.845-5.964) times more likely to be published than US–based trials. We observed a trend of 1.5 times higher nonpublication rates for industry-funded trials. However, the trend was not statistically significant.In the domain of digital health, 27% of registered clinical trials results are unpublished, which is lower than nonpublication rates in other fields. There are substantial differences in nonpublication rates between trials funded by industry and nonindustry sponsors. Further research is required to define the determinants and reasons for nonpublication and, more importantly, to articulate the impact and risk of publication bias in the field of digital health trials. 
Original PaperEmpirical observations demonstrate that not all clinical studies successfully publish their results in peer-reviewed journals. Perhaps, the earliest indication of publication bias in the area of scientific research was in 1979 by Robert Rosenthal with the term \u201cfile drawer problem,\u201d acknowledging the existence of selective publication bias for studies with positive and significant results . The pheIn 2008, a study of publication rates of clinical trials supporting successful new Food and Drug Administration drug applications found that over half of all the included trials were unpublished 5 years after obtaining approval from the Food and Drug Administration . SimilarThe registration of clinical trials, first proposed by Simes in 1986 , provideSince its establishment in the year 2000, the ClinicalTrials.gov website, which is maintained by the United States National Library of Medicine at the National Institutes of Health, has become the world\u2019s largest clinical trial registry, with 286,717 registered trials, 60% of which are non-US\u2013based as of October 11, 2018 ,28-30.A number of studies have analyzed and reported the characteristics of publication rates of clinical trials registered in ClinicalTrials.gov ,9,11,31 The primary research objective was to examine the prevalence and characteristics of the nonpublication rate among digital health randomized clinical trials registered in the ClinicalTrials.gov database. The secondary research objective was to determine whether industry funding contributes to nonpublication of trials. Considering that the ClinicalTrials.gov registry is a US\u2013based registry including 60% of non-US\u2013based trials, we intended to explore differences in the nonpublication rate and trial size between US- and non-US\u2013based trials . We alsoThe ClinicalTrials.gov website provides free, global open access to the online registry database through a comprehensive website search page as well as download capabilities; for example, all registration information for a given trial can be downloaded in XML format via a Web service interface. For our study, we downloaded the entire ClinicalTrials.gov online database, with 265,657 registered clinical trials entries, on February 10, 2018.The research included all eHealth-, mHealth-, telehealth-, and digital health-related randomized clinical trials that are registered in the ClinicalTrials.gov website and include any information and communication technology component, such as cellular phones, mobile phones, smart phones; devices and computer-assisted interventions; internet, online websites, and mobile applications; blogs and social media components; and emails, messages, and texts.We also included interventional and behavioral trials with or without the results. We limited our inclusion criteria to trials with latest completion dates between April 1, 2010, and April 1, 2013. The latest date between trials\u2019 primary completion date and completion date fields was considered the latest completion date. Details regarding the evaluation of the latest completion date of trials are described in Our search allowed for almost 5 years of a \u201cpublication lag period\u201d between the stated trial completion date and the search date for published reports . This strategy allowed us to account for longer publication cycles that may take up to several years, as indicated in prior studies . 
For exaOur search excluded registered clinical trials that were not randomized or only focused on electronic record-management systems such as electronic medical records, electronic health records, and hospital information systems as well as back-end integration systems, middleware applications, and Web services. Registered clinical trials that only reported on internet, Web-based, online, and computer-based surveys as well as television or online advertisement were also excluded. In addition, the search excluded registered clinical trials that focused only on biotechnology, bioinformatics analysis, and sequencing techniques. Finally, trials on medical devices and those only related to diagnostic imaging device, computerized neuropsychological, cognition, and oxygen assessment tools were excluded.The search terms and phrases were conceptually derived from the inclusion criteria. A complete list of included search terms and phrases was developed through an iterative process of all included 556 randomized clinical trials reported results in the ClinicalTrials.Gov database.We defined a comprehensive and specific categorization of the funding sources of trials. We analyzed the content of the \u201cLead_Sponsor\u201d field, available in trials\u2019 XML files exported from ClinicalTrials.gov, which comprises information regarding the entity or individual that sponsors the clinical study . We wereWe exported all the contents of the 556 included registered randomized clinical trials from the ClinicalTrials.gov website in XML format and then identified existing publications by two processes: automated and manual identification processes. The automated identification process considered all publications referenced in the trial's registry record as well as a PubMed search according to each trial\u2019s National Clinical Trial registration number. The manual identification process was a multistep process aimed to search trial publications by key trial attributes and author details in two major bibliographic databases (PubMed and Medline) as well as the Google search engine. We only considered the results of a clinical trial to be \u201cpublished\u201d if at least one of the primary outcome measures was reported. Complete details of the publication-identification processes are described in We exported the entire ClinlicalTrials.gov database, with 265,657 registered clinical trials entries as of February 10, 2018, into a local Structured Query Language server database. The 47 indicated search terms and phrases were then applied in the Structured Query Language server database as follows:For every search term and phrase, identify matching records by the [Title] OR [Interventions] OR [Outcome Measures] fields. We identified 6717 matching trials.Apply the latest completion date criteria between April 1, 2010, and April 1, 2013. We obtained 803 matching trials.After screening against all inclusion and exclusion criteria, 247 registered clinical trials were excluded as per the following breakdown:149 trials were not randomized.52 trials had false-positive matching terms. For example, the registered clinical trial NCT01287377 examined the association between nicotine patch messaging and smoking cessation. 
The trial term \u201cmessaging\u201d was a false-positive match to one of our search terms.17 trials were only related to computerized neuropsychological, cognition, and oxygen assessment tools.11 trials focused only on internet, Web-based, online, and computer-based surveys.9 trials were limited to the phone call intervention component.5 trials were related to scanners and diagnostic imaging devices.3 trials were related to television or online advertisement.1 trial was related to electronic medical record systems.Finally, 556 studies were included after screening.A summary of the search results is presented in In summary, 406 of 556 (73%) trials were associated with identified outcome publications and 150 of 556 (27%) trials did not have any identified publications or their identified publications did not report any of their primary outcomes. Only 6 of the 556 (1.1%) published trials did not report any of the primary outcome measures indicated in the trial\u2019s registration protocols .We conducted a statistical descriptive analysis, describing and summarizing the characteristics of all the 556 included registered randomized clinical trials by the following standard data elements exported from and defined by the ClinicalTrials.gov database: age group, condition, country, gender, intervention model, lead sponsor, masking, recruitment status, start date, study arms, study results, trial phase, and trial size [We examined the relationship between trial characteristics and the nonpublication rate using bivariate and multivariate analyses. For bivariate analysis, we used the Pearson Chi-square statistical test, and for multivariate analyses, we used binary logistics regression in SPSS . The results of this analysis are depicted in P<.05) between the nonpublication rate of trials and trial characteristics including trial condition, country, prospective registration, recruitment, trial size, and trial phases. Both tests reported no significant relationships between the nonpublication rate of trials and the age group, follow-up period, gender, intervention model, latest completion date, lead sponsor, primary outcome measures, major technology, masking, start date, study arms, and updates of trials in ClinicalTrials.gov results database.The Pearson Chi-square test and binary logistic regression test results reported significant relationships (P=.005) between the nonpublication rate and the eight different condition groups. The highest nonpublication rate was 45.2% for randomized clinical trials focusing on the \u201cCancer\u201d condition. In contrast, the lowest nonpublication rate was 15.8% for randomized clinical trials focusing on \u201cSmoking, Alcohol Consumption, Substance Abuse and Addiction\u201d conditions. The binary logistic regression test results showed a significant association (P=.01) between the nonpublication rate and intervention condition groups; however, trials on cancer or addiction/smoking conditions were not a significant predictor for nonpublication .The Pearson Chi-square test results showed a significant association (P<.001) in the nonpublication rates between the United States and other countries; the highest nonpublication rate was observed for trials in the United States (32.8%) as compared to non-US trials. The binary logistic regression test results showed a significant association between the nonpublication rate between the US and non-US trials. 
The odds of publication for non-US trials were significant, and these trials were 3.3 times more likely to be published than the reference group of US-based trials (an equivalent computation is sketched at the end of this passage). The global distribution of all 556 included randomized clinical trials is depicted in . Only 38 (6.8%) of the 556 included registered randomized clinical trials were funded by industry sponsors. We observed a trend of a 1.5 times higher nonpublication rate for industry-funded trials than for non-industry-funded trials. However, this trend was not statistically significant (P=.07), which may be explained by the small sample size. We also found that the percentage of industry-funded trials in the US (12%) was five times higher than that in international non-US trials (2%). The Pearson Chi-square test results showed significant differences (P=.01) between the nonpublication rate of trials and their respective study phases. Of the 556 randomized clinical trials, 427 (76.8%) had no information reported on trial phases. Among the 129 (23.2%) randomized clinical trials that reported a study phase, phase II trials were most commonly reported and had the lowest nonpublication rate (14.3%). There were 42 phase III/IV trials , with the highest nonpublication rate of 40.5%. The binary logistic regression test results showed a significant relationship (P=.004) between the nonpublication rate and trial phases, and phase II trials were 3.9 times more likely to be published than trials of other phases. The odds of nonpublication showed a trend towards significance for phase III/IV trials , and these trials were 3.1 times more likely to be published ; however, the trend did not reach statistical significance. Our Pearson Chi-square test results showed significant differences (P=.006) between prospective trial registration and the nonpublication rates, with higher nonpublication rates for prospectively registered trials (11.3%) than for retrospectively registered trials. Our analysis also showed that only 163 (29.3%) of all our included trials were registered prospectively. We advanced our analysis to explore the impact of the 2004 ICMJE mandate and the 2008 Declaration of Helsinki on prospective trial registrations in ClinicalTrials.gov . The Pearson Chi-square test showed a significant relationship (P<.001) between prospective trial registration and the start date of trials, with a lower number of prospective registrations reported for trials that started after 2008. The Pearson Chi-square test also showed a significant relationship between the trial recruitment status and nonpublication rate; similarly, the binary logistic regression test showed a significant relationship (P<.001) between the trial recruitment status and nonpublication rate, and completed trials were 3.3 times more likely to be published . Our results also showed that discontinued trials have higher nonpublication rates than completed or active trials; we referred to trials with withdrawn, suspended, and terminated recruitment statuses as discontinued trials. We extended our analysis to explore the reasons for trial discontinuation as potential contributors to a higher nonpublication rate, and examined the reasons for discontinuation of the 31 trials with withdrawn, suspended, and terminated recruitment statuses among the included trials . Results of the Pearson Chi-square test showed no statistically significant relationship between the primary investigators who reported the results in the ClinicalTrials.gov database and the publication of trial results. A total of 148 (26.6%) trials were published in the fourth year of the trial.
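As an illustration of the bivariate test and publication odds discussed above, the sketch below builds a hypothetical US/non-US by published/unpublished table (the counts are chosen only to match the reported totals of 406 published and 150 unpublished trials; they are not the study's actual cross-tabulation) and fits an unadjusted logistic regression, so the resulting odds ratio will differ from the adjusted 3.3 obtained from the multivariate model.

```python
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Hypothetical 2x2 table: rows = US / non-US, columns = published / unpublished.
table = np.array([[200, 98],     # US-based trials (~32.9% unpublished)
                  [206, 52]])    # non-US-based trials
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Unadjusted odds of publication for non-US vs US trials via logistic regression.
published = np.concatenate([np.ones(200), np.zeros(98), np.ones(206), np.zeros(52)])
non_us    = np.concatenate([np.zeros(298), np.ones(258)])
X = sm.add_constant(non_us)
fit = sm.Logit(published, X).fit(disp=0)
or_, lo, hi = np.exp(fit.params[1]), *np.exp(fit.conf_int()[1])
print(f"OR (non-US vs US) = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```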
We also observed that half of our included trials were published between the fourth and fifth year after the trial start date. No enrollment values were identified for ten trials in the ClinicalTrials.gov database, and we could not identify any publications for these trials. We stratified all trials into four strata by size at the 5th, 50th, and 95th percentiles and found a statistically significant difference between the nonpublication rate of trials and trial size. The highest nonpublication rate was 51.7% for small trials that enrolled no more than 26 participants (at the 5th percentile), whereas the lowest nonpublication rate was 23.8% for trials that enrolled between 27 and 148 participants (between the 5th and 50th percentiles). The Pearson Chi-square test showed a statistically significant relationship between the nonpublication rate and trial size (P<.001). In addition, we found that half of the 546 randomized controlled trials that provided details of the trial size enrolled ≥148 participants . The cumulative enrolment in the 546 trials was 312,906 participants, split between 236,066 (75.44%) participants in published trials and 76,840 (24.56%) in unpublished trials. We found that the nonpublication rate for trials below the 5th trial-size percentile (≤26 participants) was twice as high as that for trials above the 5th trial-size percentile (>26 participants). Only 6 of the 556 (1.1%) published trials did not report any of the primary outcome measures indicated in the trial registration protocols. Our finding is substantially different and should not be compared to findings from other studies, which reported that 40%–62% of clinical trials had at least one change in primary outcome when comparing trial publications and protocols ,13,15. We reported a statistically significant relationship between the nonpublication rate and the eight different condition groups in the Pearson Chi-square test (P=.005) and the binary logistic regression test (P=.01). The highest nonpublication rate was 45.2% for randomized clinical trials focusing on the "Cancer" condition. This relative underreporting suggests challenges in conducting digital health oncology trials. These challenges align with and may be explained by findings from other studies that reported several barriers to traditional oncology trials, such as recruitment, eligibility, follow-up, and oncologist and patient attitudes . In our study, we reported a statistically significant relationship between the trial recruitment status and trial nonpublication rate, and completed trials were 3.3 times more likely to be published . Our analysis of 31 discontinued trials showed that enrollment and funding challenges were major contributors to the higher nonpublication rate among our included trials. This finding is in line with that of another study indicating that recruitment challenges were the most frequently reported factor contributing to discontinuation of clinical trials . We also reported a statistically significant relationship between prospectively registered trials and nonpublication rates, with a higher nonpublication rate for prospectively registered trials (11.3%). We also expected to see an incremental trend in the prospective registration of trials after 2008, when the 7th revision of the Declaration of Helsinki was adopted to raise awareness of prospective trial registration within the scholarly community . However, the Pearson Chi-square test showed a significant relationship (P<.001) between prospective trial registration and the trial start date, with a lower number of prospective registrations for trials starting after 2008 (29.6%).
This significant decline in prospective registration, compared to the influx in retrospective registration, may be explained by the general emphasis on trial registration after 2008. It is possible that the primary investigators of unregistered trials were increasingly required to register their trials retrospectively prior to publication by the editors or the submission guidelines of the scholarly journals. However, there are two major limitations to this finding in our study: the majority (74.3%) of our included trials started after 2008, and the study scope was limited to digital health trials. These two limitations can impact the internal and external validity of our analysis to evaluate the general impact of adoption of the 7th revision of the Declaration of Helsinki on the nonpublication rate of trials and prospective trial registrations.We postulate that the nonpublication rate may be higher for trials registered prospectively, as the primary investigator would register a trial before the enrollment of any participant, without knowing if the trial would be completed successfully or the results would ultimately be published. The Pearson Chi-square test showed a statistically significant relationship . We also observed that half of our included trials were published between the fourth and fifth year of the trial start date. The timelines of our findings are comparable to those of a 2007 study that analyzed time to publication of clinical trials and reported that clinical trials with statistically significant positive results were published 4-5 years after their start date, whereas trials with negative results were published in 6-8 years .When we analyzed the funding sources of trials, we found that only a small number of trials were funded by the industry. This finding is in contrast with the results of other studies, in which most included trials were funded by the industry. A study of delayed and nonpublication of randomized clinical trials on vaccines reported that 85% of their included trials were funded by the industry . AnotherWe observed a trend of 1.5 times higher nonpublication rates among industry-funded trials than among non-industry-funded trials. However, the trend was not statistically significant, which may be explained by the small sample size. We also found that the ratio of industry-funded trials in the United States is five times higher than that of international trials. Although these findings may be interpreted by the predominantly privately funded healthcare system in the United States, they could also be attributed to the scale of the digital health industry in the United States compared to the rest of the world, with US\u2013based digital health startups holding 75% of the global market shares between 2013 and 2017 -78.Despite ICMJE\u2013mandated trial registration since 2005, not all randomized trials are registered . TherefoIn this study, the ClinicalTrials.gov database was the sole data source of trial registrations. The choice was driven by feasibility challenges with limited research resources available for this study initiative and broader and global adoption of the ClinicalTrials.gov registry within the biomedical research enterprise. There are many other trials registries such as the European Clinical Trials Registry and the Our publication-identification process was conducted between June 29, 2016, and February 10, 2018, for all included 556 randomized clinical trials. Therefore, our findings did not include studies published after February 10, 2018. 
This study includes trials based on their completion date and primary completion date declared in the registry record in ClinicalTrials.gov. When not provided, we considered the latest completion date as described in From our study of 556 randomized clinical trials in the field of digital health that are registered in the ClinicalTrials.gov database, we found that nonpublication of trials is prevalent, with almost a third of all included trials remaining unpublished 5 years after their completion date. There are distinct differences in nonpublication rates between US- and non-US\u2013based trials and according to the funding sources (industry sponsors vs non-industry sponsors). Further research is required to define the rationale behind the nonpublication rates from the perspectives of primary investigators and, more importantly, to articulate the impact and risk of publication bias in the field of digital health clinical trials. Future studies could also include nonrandomized trials such as projects published in protocols (such as JMIR Research Protocols).It is not clear whether the research or technology failed, or if the results were disappointing and scholars did not write up a report, or if reports were rejected by journals; however, given the multitude of potential publication venues, and increased transparency in publishing, the former seems more likely. Scholarly communication is evolving, and short reports of failed trials may not always be published in peer-reviewed journals, but may be found in preprint servers. With the growing popularity of preprints, future analyses may also include searches for draft reports on preprint servers (such as preprints.jmir.org) to include unpublished reports, which may further shed light on why trials failed or remained unpublished. In the meantime, a general recommendation would be to conduct thorough formative research and pilot studies before conducting a full randomized controlled trial to reduce the risk of failure such as having insufficient power due to lack of participant engagement and nonuse attrition ."} +{"text": "Lactobacillus casei on glycemic control and serum sirtuin1 (SIRT1) and fetuin-A in patients with T2DM.Type 2 diabetes mellitus (T2DM) is related to the gut microbiota with numerous molecular mechanisms. Modulating the gut microbiota by probiotics could be effective in management of T2DM. The aim of the present trial was to evaluate the effect of 8 cfu of L. casei for eight weeks. The patients in placebo group took capsules containing maltodextrin for the same time duration. Anthropometric measurements, dietary intake questionnaires, and blood samples were collected, and the patients were assessed by an endocrinologist at the beginning and at the end of the trial.Forty patients with T2DM (n = 20 for each group) were divided into intervention (probiotic) and placebo groups. The intervention group received a daily capsule containing 10L. casei supplementation significantly increased SIRT1 and decreased fetuin-A levels at the end of the trial .Fasting blood sugar, insulin concentration, and insulin resistance significantly decreased in probiotic group compared with placebo group . Moreover, HbA1c reduced after intervention, but the reduction was not significant . In comparison with placebo, the L. casei supplementation affected SIRT1 and fetuin-A levels in a way that improved glycemic response in subjects with T2DM. 
Affecting the SIRT1 and fetuin-A levels introduces a newly recognized mechanism of probiotic action in diabetes management. Type 2 diabetes mellitus (T2DM), a metabolic disorder characterized by high blood glucose, is caused by the combination of insufficient secretion of insulin and insulin resistance [2]. According to a recent report by the International Diabetes Federation, 425 million adults suffer from diabetes, and 1 in 2 remains undiagnosed. The global prevalence of diabetes in adults aged 20-79 years is now 7.3% (4.8%-11.9%) and is estimated to reach 8.3% (5.6%-13.9%) by 2045. Sirtuin1 (SIRT1), a nicotinamide adenine dinucleotide-dependent deacetylase, is a principal modulator of energy metabolism and exerts positive impacts on glucose homeostasis and insulin sensitivity. SIRT1 activators improve whole-body glucose homeostasis and insulin sensitivity in adipose tissue, skeletal muscle, and liver. Evidence has revealed that the endogenous activators of SIRT1 increase after calorie restriction and weight loss. Fetuin-A, a serum protein, is expressed and secreted by adipocytes and hepatocytes, and its level is up-regulated in hepatic steatosis and other metabolic disorders [10]. The present trial was therefore designed to evaluate the effect of Lactobacillus casei supplementation on the glycemic response and SIRT1 and fetuin-A levels in patients with T2DM. An 8-week, parallel-group, randomized controlled trial was conducted in the Sheykholrayis Polyclinic of Tabriz University of Medical Sciences, Tabriz, Iran. The recruitment process of participants began in September 2016, and the intervention was carried out in January 2017. The target population of the present study was patients with T2DM. Subjects were contacted a day before commencing the supplementation, and the study was thoroughly explained to them. Volunteers were composed of 44 patients with T2DM, 30-50 years of age, with body mass index (BMI) lower than 35 kg/m2. All patients had been diagnosed with T2DM for at least one year. Exclusion criteria were smoking, the presence of kidney, liver, and/or inflammatory intestinal disease, thyroid disorders, immunodeficiency diseases, required insulin injections, use of nutritional supplements within the previous three weeks of testing, use of estrogen or progesterone, pregnancy or breast-feeding, consuming any type of antibiotics, and consuming any other probiotic products within the previous two months of testing. Primary endpoints were the promotion of SIRT1, reduction of fetuin-A levels, and control of glycemic response; the secondary endpoint was the management of dietary intake and body weight. The sample size for the current study was calculated on the basis of the FBS results reported by Ostadrahimi et al. . Of the 44 patients who had met the inclusion criteria, 4 were excluded because of their unwillingness to participate in the study . Subjects were randomly assigned to the probiotic (n = 20) and placebo (n = 20) groups, using a block randomization procedure with subjects stratified in each block based on sex and age. The allocation to the intervention or control group was concealed from the researchers, and the probiotic and placebo capsules had an identical appearance and labeled information. Therefore, neither the subjects nor the investigators were aware of the treatment assignments in this double-blinded study.
Over the eight weeks, the two groups consumed probiotic capsules containing 10^8 cfu of L. casei or placebo capsules, respectively. Considering the buffering capacity of food on the survival of probiotic microbes during gastrointestinal transit , the patients were advised to take the capsules with a meal. Arrangements were made so that the patients would receive the eight-week supply of their probiotic or placebo capsules at the beginning of the trial, and they were asked to take one capsule daily. Compliance with the capsule consumption guidelines was monitored by telephone interviews once a week. Information on demographic and anthropometric measurements and fasting blood samples were collected at the beginning and at the end of the trial. Nutrient intakes over three days were estimated using a 24-h dietary recall at the beginning, in the middle, and at the end of the study. Three-day averages of total energy intake (TEI) and macronutrient intake were analyzed by Nutritionist 4 software . The International Physical Activity Questionnaire (IPAQ) was completed . Anthropometric measurements were recorded by trained personnel. A blood sample was drawn from each patient after an overnight fast. All whole blood and serum samples were collected and kept at -70°C until assay. Blood samples were analyzed at the Drug Applied Research Center . Fasting blood glucose was measured using the standard enzymatic method with the Pars Azmun kit . Glycated hemoglobin (HbA1c) in the whole blood was measured by cation exchange chromatography with the NycoCard HbA1c kit . Insulin concentration was determined by a chemiluminescent immunoassay using a LIAISON analyzer . To measure insulin resistance, we used the homeostatic model assessment of insulin resistance index, based on the following formula: HOMA1-IR = (FPI [μU/mL] × FPG [mmol/L])/22.5. Serum fetuin-A and SIRT1 concentrations were measured by human ELISA kits . The present study was conducted in accordance with the guidelines laid down in the Declaration of Helsinki, and all procedures were approved by the Ethics Committee of Tabriz University of Medical Sciences (no. IR.TBZMED.REC.1395.402). A written informed consent was obtained from each patient. Hard yellow gelatin capsules were used as the delivery vehicle in the present study. L. casei was the active agent of the probiotic capsules, and maltodextrin was used as the excipient. The capsules were prepared using a capsule-filling device under aseptic conditions. To check the quality of the probiotic capsules and ensure that an adequate dose of the probiotic was consumed by the experimental group (at least 10^8 CFU/day), a food technologist checked the bacterial count of the capsules at the baseline, in the middle, and at the end of the trial period, by culturing the contents of three capsules at each point. The capsules were cultured on MRS agar via serial dilution and the pour plate technique. Bacterial enumeration showed that the capsules contained a minimum of 10^8 colony-forming units of L. casei throughout the study period. The placebo capsules contained only maltodextrin. Since any bacterial contamination of the excipient could confound the outcomes of the study, the powder was cultured to ensure it was free of pathogens. Capsule counts were performed by the researcher at the end of the study to evaluate compliance.
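The HOMA1-IR formula above uses the 22.5 denominator, which belongs with insulin in μU/mL and glucose in mmol/L; the source's "(mg/dl)" units appear to be a transcription slip, since with glucose in mg/dL the conventional denominator is 405. A small helper makes the unit handling explicit:

```python
def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA1-IR = (FPI [uU/mL] x FPG [mmol/L]) / 22.5.

    The 22.5 constant assumes glucose in mmol/L (divide FPG in mg/dL by 18
    to convert); with glucose in mg/dL the denominator would be 405.
    """
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

# Example: insulin 12 uU/mL, glucose 126 mg/dL (= 7.0 mmol/L)
print(round(homa_ir(12.0, 126 / 18.0), 2))   # ~3.73
```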
Statistical analysis was performed with SPSS software . The normality of the numeric variables was checked by the Kolmogorov-Smirnov test. Data were compared between the groups by t-test and/or Chi-square test where appropriate. For within-group comparisons, paired t-tests were used, comparing before- and after-intervention measurements. To assess the effect of the intervention, analysis of covariance (ANCOVA) was used to control for baseline measurements and confounders. In all analyses, p values less than 0.05 were considered statistically significant. Forty patients with T2DM were recruited in the present clinical trial (n = 20 for each group). Capsule bacterial counts showed good compliance in those participants who completed the study, and no adverse effects were reported. There were no significant differences between the two groups with regard to any of the baseline characteristics or biomedical parameters . The effect of L. casei supplementation on biochemical parameters is shown in Table . Fasting blood sugar, insulin concentration, and insulin resistance decreased significantly in the probiotic group (p = 0.002, p = 0.035, and p = 0.001, respectively). Moreover, the between-group differences for these glycemic response parameters were significant . Evaluation of HbA1c after the two-month supplementation showed no significant reduction in the probiotic group. As presented in Fig. , L. casei supplementation significantly increased the serum SIRT1 level in the probiotic group (p = 0.006). Moreover, a significant decrease in the serum fetuin-A level was found in the probiotic group (p = 0.008). The between-group changes were statistically significant . The effect of consumption of L. casei on anthropometric variables is shown in Table . Consumption of L. casei for two months significantly decreased weight, BMI, and waist circumference in the probiotic group compared with the placebo group . Moreover, the within-group changes for the three parameters were significant in the probiotic group . Although the between-group change for the waist-to-hip ratio was not significant, the within-group change was statistically significant (p = 0.001). The analysis of the dietary questionnaires, shown in Table , indicated that the intakes of total energy, carbohydrate, fat, and protein decreased significantly in the probiotic group (p = 0.003, p < 0.001, p < 0.001, and p = 0.001, respectively). The between-group changes for TEI and protein were significant in the 3rd evaluation ; moreover, the between-group changes for carbohydrate and fat intake were significant in both the 2nd and 3rd evaluations. Controlling diabetes by natural food without side effects is a challenge for the medical nutrition therapy of diabetes. This is the first study evaluating the effect of L. casei supplementation on serum SIRT1 and fetuin-A in patients with T2DM.
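A minimal sketch of the ANCOVA described above, using the statsmodels formula interface as a stand-in for SPSS; the data are simulated, not the trial's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical post-treatment FBS values adjusted for baseline (ANCOVA),
# mirroring the between-group analysis described above.
rng = np.random.default_rng(3)
n = 20
df = pd.DataFrame({
    "group":    ["probiotic"] * n + ["placebo"] * n,
    "baseline": rng.normal(160, 20, 2 * n),
})
effect = np.where(df["group"] == "probiotic", -25.0, 0.0)   # assumed treatment effect
df["post"] = df["baseline"] + effect + rng.normal(0, 10, 2 * n)

# ANCOVA: post-treatment value ~ group, controlling for the baseline measurement.
model = smf.ols("post ~ C(group, Treatment('placebo')) + baseline", data=df).fit()
print(model.summary().tables[1])
```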
have repet al.[et al.[The improved fasting insulin level found in this trial was in accord with the study conducted by Firouzi et al. in whichl.[et al., demonstet al.[L. acidophilus NCFM for four weeks improved insulin sensitivity in comparison with placebo. Similarly, Kobyliak et al.[After eight-week intervention, the insulin resistance improved in probiotic group. The result was in accordance with other similar investigations,30. Andret al. have revak et al. have fouak et al.. Specifiak et al. and reduak et al. to the sak et al.. Studiesak et al.,37. The et al.[As the results shown in et al. have fouet al.,43.et al.[et al.[et al.[et al.[Fetuin-A, a circulating glycoprotein that is secreted by the liver and adipose tissues, inhibits insulin receptor tyrosine kinase activity in animal studies. Fetuin-et al. have revl.[et al. have deml.[et al. have repl.[et al.. They hal.[et al..L. casei supplementation in subjects with T2DM. Considering the effect of calorie restriction and weight loss on SIRT1 and fetuin-A levels, it can be understood that by reducing the appetite and dietary intake and body weight, probiotics could affect the plasma level of SIRT1 and fetuin-A in patients with T2DM in present trial.As stated in L. casei supplementation for eight weeks significantly affected dietary intake and anthropometric indexes, including weight, BMI, and waist circumference in patients with T2DM. The effect of probiotics on gut microbial composition can affect appetite and food intake and also body composition and weight[et al.[Lactobacillus gasseri for three months induced a significant weight loss and a decrease in BMI, hip, and waist circumferences and body fat mass. Omar et al.[Lactobacillus decreases total body fat mass.nd weight,7. By moht[et al. in whichar et al. have sugA possible way for manipulating the mammalian eating behavior and body weight by probiotic bacteria is appetite-regulating hormones. Supplementation with VSL#3, containing Lactobacillus strains, in mice reduced appetite-inducing hormones and neuropeptide Y in the hypothalamus,52. MoreL. casei could improve glycemic response and SIRT1 and fetuin-A levels. Taking into account the metabolic impacts of SIRT1 and fetuin-A, management of their levels could be effective in diabetes control. The results of present trial help us to reveal a new mechanism of probiotics action in diabetes and its related metabolic disorders control. Besides, as shown in this study, the positive effects of probiotics on body weight could be translated into favorable metabolic effects and have beneficial effects on the homeostasis of glucose.Taken together, this study demonstrates that"} +{"text": "Parameters characteristic for determining the psychophysiological state and typological characteristics of the nervous system were analyzed with the help of computer programs for psychophysiological testing. We determined the latent time of simple and complex reactions in different testing modes. Dispersion analysis was also used. We applied single-factor multidimensional dispersion analysis: one-way analysis of variance and General Linear Model, Multivariate. The indicators of psychophysiological testing were applied as dependent variables. The values of the functional class of athletes were used as the independent variable. To study the influence of damage degree of the upper or lower extremities on psychophysiological indicators, the extremities damage degree was applied as an independent variable. 
The time in the Paralympic 6th functional class to reach the minimum signal exposure in feedback mode was significantly longer compared with the 10th Paralympic functional class (p < 0.05). Comparing psychophysiological indicators when Paralympians are divided into groups more differentiated than the functional classes , significant differences were found in all psychophysiological indicators between the athletes of different groups. The greatest impact on psychophysiological indicators came from lesions of the lower extremities. The training of Paralympians in table tennis should consider the reaction rate indicators. In addition, when improving the functional classification of Paralympians in table tennis, a more differentiated approach should be taken when considering their capabilities, including psychophysiological indicators. During training and functional classification of Paralympic athletes in table tennis, it is important to consider their functional class as well as the degree of damage to the upper and lower extremities and the level of psychophysiological functioning. The purpose of the work was to identify the influence of functional class and degree of damage to extremities on psychophysiological indicators of Paralympians. The study involved 33 elite athletes with musculoskeletal system disorders of the 6th (n = 15) and 10th (n = 18) functional classes in table tennis. Paralympic sport is becoming an increasingly significant aspect of society [2]. … The training of Paralympians has specific characteristics, and these features are specific to each kind of sport . … Muscular activity is controlled by the central nervous system . … Many studies have been devoted to the central nervous system of persons with disabilities [23,24]. … Determining the features of psychophysiological functions (reaction rate and mobility of nervous processes) is important for the implementation of an individual training program for table tennis Paralympians . … In the literature, data have been reported concerning the interaction of muscles and the nervous system [32]. … The degree of musculoskeletal system disorder is determined by different scales [34]. … One of the main psychophysiological indicators is the reaction rate in various testing modes, together with the typological features of the nervous system. Based on the analyzed literature, our hypothesis is the following: psychophysiological indicators differ in Paralympians with different levels of musculoskeletal system damage. The purpose of the work was to identify the influence of the functional class and the degree of damage to extremities on psychophysiological indicators of Paralympians. The study involved 33 elite male athletes with musculoskeletal system disorders of the 6th (n = 15) and 10th (n = 18) functional classes in table tennis, aged 21-25 years. The study was carried out in accordance with the principles of the Helsinki Declaration and approved by the Ethics Committee of the H.S. Skovoroda Kharkiv National Pedagogical University, Kharkiv, Ukraine. The BBS … The scale we used for functional capacity determination was the Functional Independence Measure (FIM) . The FIM … The scale we used for the risk of falls was the Fall Effect Scale . The Fall Effect Scale … The Ashworth scale measures resistance during passive soft-tissue stretching and is used as a simple measure of spasticity . … We also used the Dynamic Gait Index (DGI) .
As a result of our analysis of the existing assessment scales for musculoskeletal system functions applied in rehabilitation, a comprehensive scale was developed to assess the nature of the musculoskeletal system damage and the volume of muscle with impaired function. Paralympians were tested on the level of psychophysiological functioning. The obtained data were mathematically processed to identify the effect of the functional class of athletes and the nature of the musculoskeletal system disorder (the volume of muscle) on psychophysiological functions in two ways: (1) according to the influence of the functional class of athletes, and (2) according to the degree of extremity damage assessed on the developed scale. We evaluated the physical abilities of the Paralympians on this scale during the standard medical examination of athletes before international competitions. We used the data of the 2016 Paralympics. This procedure is standard for all Paralympians.

The experiment was conducted in March 2018. To determine the psychophysiological state of athletes during the first and last week of the experiment, psychophysiological indicators were recorded using the computer program Psychodiagnostics [42,43]. The following tests were used. (1) Time of a simple visual–motor reaction. Images appear on the monitor screen. The subject should click the left mouse button as soon as he sees the image. The subject performs 30 attempts. The average value of the reaction time (ms), the standard deviation (ms), and the number of errors are recorded. (2) Choice reaction time (Choice reaction 2–3). Images appear on the monitor screen. The subject must press the left mouse button as soon as he sees the image of a geometric figure and the right mouse button as soon as he sees the image of an animal. When other images appear, the subject does not need to click the mouse button. The subject performs 30 attempts. The average value of the reaction time (ms), the standard deviation (ms), and the number of errors are recorded. (3) Time of a complex visual–motor reaction in feedback mode. The subject must press the left mouse button as soon as he sees the image of a geometric figure and the right mouse button as soon as he sees the image of an animal. When other images appear, the subject does not need to click the mouse button. The faster the subject reacts, the faster the next image appears. The average value of the reaction time (ms), the standard deviation (ms), and the number of errors are recorded. In addition, the smallest time the image stays on the screen (minimum signal exposure) is fixed, and the time from the start of the test to the subject reaching the peak of reaction (time to reach the minimum exposure) is also recorded. (4) A complex of parameters of a complex visual–motor reaction that involves selecting two of the three elements in feedback mode; as the reaction time changes, the time of signal delivery changes. The "long-term variant" was used in the feedback mode, where the duration of exposure changes automatically depending on the corresponding reactions of the subject. After a correct answer, the duration of the next signal is reduced by 20 ms, and after an incorrect response, the duration increases by 20 ms. The range of the signal exposure change during the test subject's operation is within 20 to 900 ms, with a pause between exposures of 200 ms. The correct answer is to press the left (right) mouse button when a certain image is displayed, or during a pause after the current exposure.
In this test, the time to reach the minimum signal exposure and the minimum signal exposure time reflect the functional mobility of the nervous processes. The number of errors reflects the strength of the nervous processes; the lower the value, the higher the mobility and strength of the nervous system. In addition, the total time of the test reflects a combination of strength and mobility of the nervous system. The duration of the initial exposure is 900 ms, the magnitude of the change in the duration of the signals after correct or erroneous reactions is 20 ms, the pause between the presentations of signals is 200 ms, and the number of signals is 120. The following indicators are fixed: the average value of the latent period (ms), deviation (ms), number of errors, test runtime (s), minimum exposure time (ms), and time to reach the minimum exposure (s). (5) A complex of parameters of a complex visual–motor choice reaction of two of the three elements in feedback mode, that is, as the reaction time changes, the time of signal delivery changes. The "short version" was used in the feedback mode: the exposure time varies automatically depending on the corresponding reactions of the subject. After a correct answer, the duration of the next signal is reduced by 20 ms, and after a wrong one, it increases by the same amount. The range of the signal exposure change during the test subject's operation is within 20 to 900 ms, with a pause between exposures of 200 ms. The correct answer involves pressing the left (right) mouse button while a certain exposure (image) is displayed, or during a pause after the current exposure. In this test, the time to reach the minimum signal exposure (the time from the start of the test to the subject reaching the peak of reaction) and the minimum signal exposure time reflect the functional mobility of the nervous processes (the ability to respond quickly to changing situations). The number of errors reflects the strength of the nervous processes; the lower the value, the higher the mobility and strength of the nervous system. The duration of the initial exposure is 900 ms, the magnitude of the change in the duration of the signals after correct or erroneous reactions is 20 ms, the pause between the presentations of signals is 200 ms, and the number of signals is 50. The following indicators are recorded: the average value of the latent period (ms), deviation (ms), number of errors, test runtime (s), minimum exposure time (ms), and time to reach the minimum exposure (s).
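The feedback-mode rule above is fully specified (900 ms initial exposure, ±20 ms steps, a 20–900 ms range, a 200 ms pause, and 120 or 50 signals), so it can be illustrated with a short simulation. This is a sketch with a synthetic responder, not the Psychodiagnostics program itself; the hypothetical accuracy parameter stands in for a real subject's responses.

```python
import random

def feedback_mode_run(n_signals=120, p_correct=0.8, seed=1):
    """Simulate the feedback-mode staircase described above: exposure
    starts at 900 ms, shortens by 20 ms after a correct response and
    lengthens by 20 ms after an error, bounded to 20-900 ms, with a
    200 ms pause between signals. Use n_signals=50 for the short
    version of the test."""
    rng = random.Random(seed)
    exposure = 900
    min_exposure = exposure
    time_to_min = 0.0
    elapsed = 0.0
    for _ in range(n_signals):
        elapsed += (exposure + 200) / 1000.0  # signal + pause, seconds
        correct = rng.random() < p_correct
        exposure += -20 if correct else 20
        exposure = max(20, min(900, exposure))
        if exposure < min_exposure:
            min_exposure = exposure
            time_to_min = elapsed
    return min_exposure, time_to_min, elapsed

min_exp, t_min, total = feedback_mode_run()
print(f"minimum exposure: {min_exp} ms, reached at {t_min:.1f} s, "
      f"total test time: {total:.1f} s")
```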
We determined the average arithmetic value, the mean square deviation (SD), and statistical significance according to Student's t-test for each indicator. The dispersion was also analyzed. We determined the influence of the functional class of athletes on the reaction rate in various test modes. We also determined the effect of the upper and lower extremity damage degree on the reaction rate in various test regimes. The degree of influence was considered reliable at a significance level of p < 0.05. The computer programs Microsoft Excel-2016 Data Analysis and SPSS-17 were applied for statistical processing of the obtained data. Each subject corresponds to a row in the Excel file. Since it is difficult to maintain long names of indicators in the SPSS program, all indicators were abbreviated; an explanation of the abbreviations is presented in the corresponding table.

We formulated two assumptions: (1) psychophysiological indicators significantly differ in athletes of different functional classes, and (2) psychophysiological indicators vary in athletes with different degrees and patterns of lesions of the musculoskeletal system. To verify these assumptions, we used the following statistical methods: (1) comparison of indicators between athletes of different functional classes using Student's t-test; (2) analysis of the influence of the Paralympic functional class on psychophysiological indicators; (3) analysis of the reliability of differences in the indicators of groups of athletes with different levels and patterns of lesions of the musculoskeletal system — in this case, more than two independent samples were compared, and therefore analysis of variance (ANOVA) was used; and (4) analysis of the impact of the degree and nature of damage to the musculoskeletal system of the Paralympians on psychophysiological indicators, performed in SPSS-17. We applied single-factor multivariate dispersion analysis. The indicators of psychophysiological testing were used as dependent variables. The functional class of athletes was applied as the independent variable. To study the influence of the upper or lower extremity damage degree on psychophysiological indicators, we used the point value of the extremity damage degree as an independent variable.

Significant differences were accepted at p < 0.05. In athletes of the 10th functional class, the stability of the reaction is significantly higher in comparison with athletes of the 6th functional class (p < 0.05). In athletes of the 10th functional class, the time for reaching the minimum signal exposure was significantly shorter in comparison with athletes of the 6th functional class. A significant impact of the functional class of athletes was found for the indicators "Choice reaction 2–3, deviation (ms)" and "Choice reaction in feedback mode, time to minimum signal exposure, s". In the test used to determine the choice of two out of three objects, it is necessary to press the left mouse button when a geometric shape appears on the screen and the right mouse button when an image from the animal world appears. We determined the average test run time for each test participant based on 50 attempts. We also determined the number of errors and the standard deviation over 50 attempts for each test participant. Another indicator that shows the influence of the athletes' functional class on the test result is "Choice reaction in feedback mode, time to reach the minimum signal exposure, s". This test was performed in feedback mode: the faster the subject reacts to the signal, the faster the next signal appears. The faster the participant reaches the individual maximum when performing this test, the higher the mobility of the nervous processes. The obtained data confirm that athletes with minimal musculoskeletal system damage reach the maximum reaction rate faster than athletes with more serious disorders. We showed that the state of the musculoskeletal system also affects the mobility of the nervous processes, because the nervous system regulates muscle activity; therefore, muscular system disorders change the working of the nervous system. In Paralympic athletes, some disorders initially affect both the muscular system and the nervous system.
Therefore, identifying the effect of musculoskeletal system disorders on the working of the nervous system is important. However, according to the obtained data, the other studied parameters are not influenced by the athletes' functional class. Therefore, we further aimed to identify the influence of the nature of the musculoskeletal system disorder using a special scale that we developed on the basis of the scales existing in rehabilitation.

In the feedback-mode test, the shorter the participant's reaction time to the signal, the faster the signals are delivered; the faster the subject reacts, the faster the next image appears. The faster an athlete reaches their minimum signal exposure time, the higher the mobility of their nervous system. This means that in the central nervous system, the switching of work from some nerve centers to others is faster. Dispersion analysis showed that athletes of the 10th functional class respond faster compared to athletes of the 6th functional class. In addition, athletes of the 10th functional class have a higher stability of the reaction rate to visual stimuli. However, no significant effect was revealed between the athletes' functional class and reaction time, number of errors, or stability in a simple reaction to a visual signal. In addition, there was no significant effect of the athletes' functional class on the reaction time or the number of errors in the choice reaction of two objects out of three. The same was true for the test of the choice reaction in feedback mode: the reaction time, the number of errors, the stability of the reactions, and the minimum signal exposure time do not depend on the functional class of athletes.

We revealed that belonging to a certain functional class of Paralympians only affects the stability of the reaction rate and the time to reach the minimum signal exposure in the test for the choice reaction rate in feedback mode (p < 0.05). The obtained data are partially consistent with studies by Van Biesen et al. The speed of reaction to a visual signal, the number of errors during the test of reaction speed, and the mobility of nervous processes in table tennis Paralympians depend on the upper and lower extremity damage degree (p < 0.001, p < 0.05 with respect to damage of the upper and lower extremities), but practically do not depend on the functional class of Paralympians in table tennis. The worst psychophysiological results were observed in athletes with disabilities in both lower extremities. Unilateral damage to the extremities and congenital underdevelopment of the extremities had a lesser effect on the psychophysiological functions [45,46].

The obtained data are new in the study of the psychophysiological functioning of Paralympic athletes. We revealed that the damage degree of the upper and lower extremities influences the psychophysiological functions of Paralympic table tennis athletes more strongly than the functional classification does. Our results support the need to consider the characteristics of upper and lower extremity damage in the functional classification of Paralympians. In addition, the functional classification of Paralympic table tennis athletes should consider the level of psychophysiological functioning. These provisions are important for competition in Paralympic sport, in particular table tennis. The obtained data are important for structuring the training process of Paralympians.
The results showing the influence of the upper and lower extremity damage degree on psychophysiological functioning indicate the need for an individual approach to table tennis athlete training. The results show that when training Paralympic athletes in table tennis, it is important to consider not only their functional class but also the upper and lower extremity damage degree and the level of psychophysiological functioning. Our data complete the concept of an individual approach to sports with these provisions [48,49].

The obtained data contribute to the study of the relationship between motor and psychological functioning, showing that disorders of the motor apparatus are interrelated with the deterioration of the nervous system. The malfunction of the lower extremities has a more pronounced effect on the working of the nervous system compared with disorders of the upper extremities and unilateral damage to the musculoskeletal system. The results also confirm the integrity of body functioning and the connection between consciousness and motor actions.

Belonging to a certain functional class of athletes influences the stability of the reaction rate and the time to reach the minimum signal exposure in the speed test for a choice reaction with feedback. Athletes of the 10th functional class are reliably faster in the minimum signal exposure test compared with athletes of the 6th functional class. There was no significant effect of athlete functional class on the reaction time, the number of errors, or stability in a simple reaction to a visual signal. In addition, there was no significant effect of functional class on the reaction time or the number of errors in the choice reaction of two objects out of three. The same is true for the test of the choice reaction with feedback: the reaction time, the number of errors, the stability of the reactions, and the signal exposure time do not depend reliably on the functional class of athletes.

The speed of reaction to a visual signal, the number of errors during the test for reaction speed, and the mobility of nervous processes in Paralympian table tennis athletes depend on the degree of damage to the upper and lower extremities. The worst results in psychophysiological indicators were found in athletes with disabilities in both lower extremities. Unilateral damage to the extremities and congenital underdevelopment of the extremities had less effect on the psychophysiological functions. The greater influence of the upper and lower extremity damage degree on the psychophysiological functions of table tennis Paralympians, in comparison with the influence of the functional class, indicates the need to include features of the damage to the upper and lower extremities in the functional classification of these athletes. In addition, the functional classification of Paralympian table tennis athletes should consider the level of psychophysiological functioning.

When training Paralympic athletes in table tennis, it is important to consider their functional class as well as the degree of damage to the upper and lower extremities and the level of psychophysiological functioning. The results of this study apply only to athletes who specialize in table tennis. The subjects in this study were Paralympians who compete in international competitions: the World Cup and the Paralympic Games.
The results of this study do not apply to beginner athletes with musculoskeletal system disorders or to athletes without such disorders, nor do they apply to other Paralympic sports. The study of the characteristics of the reaction rate in various testing modes in athletes with musculoskeletal system disorders in other sports requires additional research."}
+{"text": "These studies have described a reduction in endothelial cell nitric oxide synthesis, the induction of inflammatory and procoagulant phenotypes, an increase in endothelial proliferation, and impairments in vascular remodeling and angiogenesis. Despite these lines of evidence, further research is required to better understand the pathophysiology of endothelial dysfunction in patients with APS. In this review, we have compared the current understanding about the mechanisms of endothelial dysfunction induced by patient-derived aPL under the two main clinical manifestations of APS: thrombosis and gestational complications, either alone or in combination. We also discuss gaps in our current knowledge regarding aPL-induced endothelial dysfunction. The endothelium is a monolayer of cells that covers the inner surface of blood vessels, and its integrity is essential for the maintenance of vascular health. Endothelial dysfunction is a key pathological component of antiphospholipid syndrome (APS). Its systemic complications include thrombotic endocarditis, valvular dysfunction, cerebrovascular occlusions, proliferative nephritis, deep vein thrombosis, and pulmonary embolism. In women, APS is also associated with pregnancy complications (obstetric APS). The conventional treatment regimens for APS are ineffective when the clinical symptoms are severe. Therefore, a better understanding of alterations in the endothelium caused by antiphospholipid antibodies (aPL) may lead to more effective therapies in patients with elevated aPL titers and severe clinical symptoms. Antiphospholipid syndrome (APS) is an autoimmune disease characterized by a persistence (≥12 weeks) of moderate to high titers of immunoglobulin isotype G (IgG) and IgM antiphospholipid antibodies (aPL) reactive against either cardiolipin (aCL) or β2 glycoprotein I (aβ2GPI), or by positive tests for lupus anticoagulant (LA). Clinically, APS is defined as either primary APS, when it occurs in the absence of any other related disease, or as secondary APS, when it is associated with other autoimmune pathologies, such as systemic lupus erythematosus (SLE). A reduction in endothelial nitric oxide (NO) synthesis has been described in patients with APS: patients with APS displaying thrombosis exhibited low plasma levels of nitrites and nitrates, which are the stable metabolites of NO breakdown. Clinically, these low levels were associated with vascular occlusions, suggesting an enhanced risk of thrombotic and inflammatory events. aPL have also been shown to activate the p38 MAPK pathway in monocytes: these aPL stimulated phosphorylation of p38 MAPK, nuclear factor κB (NF-κB) translocation to the nucleus, and the upregulation of tissue factor (TF). The aPL induction of monocyte TF could be prevented by the p38 MAPK inhibitor SB203580. In endothelial cells, aPL-induced signaling resulted in increased expression of vascular cell adhesion molecule-1 (VCAM-1) via p38 MAPK activation. Dimeric β2GPI can bind TLR2 and TLR4, which favor aPL binding. To assess activation of the mTOR pathway, the phosphorylation of ribosomal protein S6 and serine 473 of Akt in endothelial cells of the renal vasculature was evaluated.
Among the several interesting findings from this study, we highlight the following: aPL isolated from patients with APS induced mTOR activation in endothelial cells (Canaud et al.). Patients with APS are classified into different groups depending on the severity of their clinical manifestations and on their aPL titers. Endothelial dysfunction is a key component of APS. However, our understanding of the precise mechanisms by which aPL induce endothelial dysfunction remains limited, in part because patients are not always classified according to their clinical manifestations. Nonetheless, some mechanistic events do indicate a greater association with thrombosis, pregnancy complications, or both.

MV wrote the draft of the manuscript. MR, VA, CE, and ÁC critically revised the manuscript. ÁC generated the original idea and proposed topics for revision. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "PDCD4 (programmed cell death 4) is a tumor suppressor that plays a crucial role in multiple cellular functions, such as the control of protein synthesis, the transcriptional control of some genes, and the inhibition of cancer invasion and metastasis. The expression of this protein is controlled by synthesis, such as via transcription and translation, and by degradation via the ubiquitin-proteasome system. The mitogens known as tumor promotors, EGF and TPA (12-O-tetradecanoylphorbol-13-acetate), stimulate the degradation of PDCD4 protein. However, the whole picture of PDCD4 degradation mechanisms is still unclear; we therefore investigated the relationship between PDCD4 and autophagy. The proteasome inhibitor MG132 and the autophagy inhibitor bafilomycin A1 were found to upregulate the PDCD4 levels. PDCD4 protein levels increased synergistically in the presence of both inhibitors. Knockdown of p62/SQSTM1 (sequestosome-1), a polyubiquitin binding partner, also upregulated the PDCD4 levels. P62 and LC3 (microtubule-associated protein 1A/1B-light chain 3)-II were co-immunoprecipitated by an anti-PDCD4 antibody. Colocalization particles of PDCD4, p62, and the autophagosome marker LC3 were observed, and the colocalization areas increased in the presence of autophagy and/or proteasome inhibitor(s) in Huh7 cells. In ATG (autophagy related) 5-deficient Huh7 cells, in which autophagy was impaired, the PDCD4 levels were increased at the basal level and upregulated in the presence of autophagy inhibitors. Based on the above findings, we concluded that after phosphorylation in the degron and ubiquitination, PDCD4 is degraded by both the proteasome and autophagy systems. PDCD4, also known as antineoplastic factor, contains two MA3 domains homologous to the M1 domain of eukaryotic translation initiation factor 4G (eIF4G), an RNA binding domain, and nuclear localization signal (NLS) domains. PDCD4 binds to eIF4A, diminishes its helicase activity, and thus inhibits cap-dependent translation [2,3]. PDCD4 gene mutations have not been reported, and PDCD4 expression is mostly controlled post-transcriptionally at the translation and protein degradation levels. PDCD4 protein carries the βTRCP binding motif 71DSGRGD76S. As 71S and 76S in the degron are phosphorylated, PDCD4 protein is ubiquitinated by SCFβTRCP ubiquitin ligase and degraded by the proteasome system.
The phosphorylation of the upstream serine 67 (67S) triggers the phosphorylation of 71S and 76S. TPA (12-O-tetradecanoylphorbol-13-acetate), a potent stimulator of protein kinase C (PKC), also induces the phosphorylation and degradation of PDCD4 protein: the TPA-activated PKCδ and PKCε signaling pathways phosphorylate PDCD4, leading to its degradation by the proteasome system in Huh7 hepatocellular carcinoma cells. Previous studies have revealed the cell cycle inhibitory and apoptosis-inducing roles of PDCD4, so PDCD4 has been regarded as a tumor suppressor [10,11,12]. PDCD4 levels are also regulated by microRNAs [20,21,22] and by EGF [45,46].

The human hepatoma cell line Huh7 was obtained from the Japanese Cancer Research Resources Bank. Cells were cultured and maintained in Dulbecco's modified Eagle's medium (DMEM) of Sigma-Aldrich containing 10% fetal bovine serum and 100 µg/mL penicillin and streptomycin in 5% CO2 at 37 °C. A total of 3 × 10^5 cells were seeded in 35-mm dishes and cultured for 72 h, and then the culture medium was replaced with fresh medium according to the experimental purposes.

CRISPR/Cas9 plasmid construction and cell transfection: to disrupt ATG5 expression in Huh7 cells, an ATG5-specific single-guide RNA (sgRNA) was designed using the online tool CRISPR DESIGN (http://CRISPR.mit.edu). The sgRNA targeting sequence was as follows: 5′-AACTTGTTTCACGCTATATC-3′ in exon 2 of the ATG5 gene. Custom sgRNA targeting oligonucleotides were synthesized by Hokkaido System Science Co., Ltd. The CRISPR/Cas9 vector was the pRSI9 derivative, in which the PCR-cloned Cas9 open reading frame and the sgRNA sequence backbone had been inserted (Addgene plasmids #41815 and #41824). The sequencing primer (pRSI_R1) was 5′-TACAGTCCGAAACCCCAAAC-3′. To identify clones with a disrupted ATG5 exon site, an ATG5 CRISPR PCR identification primer pair was designed. The primer sequences were as follows: F 5′-CTTTGGTTGAAATAAGAATTTAGCCTG-3′, R 5′-AAGGTTAAATATCCCATTTGCCAC-3′. The PCR conditions were as follows: reaction at 94 °C for 2 min, followed by 35 cycles of denaturation at 94 °C for 30 s, annealing at 55.6 °C for 30 s, and extension at 72 °C for 1 min. The ATG5-mutant plasmid was transfected into Huh7 cells using Lipofectamine LTX of Life Technologies according to the manufacturer's instructions. Subsequently, transfected Huh7 cells were treated with puromycin (20 μg/mL). Positive clones were confirmed by Western blotting to identify the ATG5 knockout effects.

The growth factor EGF was from R&D Systems. TPA and bafilomycin A1 were purchased from Sigma-Aldrich. Rapamycin and MG132 were purchased from Calbiochem. 3-Methyladenine was the product of AdipoGen Life Sciences. Protein assay kits and SureBeads Protein A Magnetic Beads were obtained from Bio-Rad Laboratories, Inc. Magnetic racks were purchased from Invitrogen. Protease Inhibitor Cocktail Tablets (Complete Mini) were purchased from Roche Diagnostics GmbH. RNAiso Plus was obtained from Takara; the High Capacity cDNA Reverse Transcription Kit and Power Up SYBR Green Master Mix were products of Thermo Fisher Scientific.

Donkey anti-mouse IgG H&L (DyLight650) antibody (ab96878) was purchased from Abcam. Alexa Fluor 488 donkey anti-rabbit IgG (H+L) antibody was obtained from Thermo Fisher. Alexa Fluor 555-conjugated donkey anti-guinea pig IgG (H+L) antibody (bs-0358D-A555) was obtained from Bioss Antibodies Inc.
Anti-PDCD4 mouse monoclonal antibody (sc-376430) was obtained from Santa Cruz Biotechnology, Inc. The antibodies were used according to the protocols provided by the respective companies. An anti-PDCD4 antibody was also prepared by immunizing rabbits with a synthetic peptide corresponding to the N-terminal amino acid sequence.

Huh7 cells were cultured for 4 days and then transfected with PDCD4 and GFP-PDCD4 plasmids.

The collected cells were extracted by sonication in lysis buffer containing 50 mM Tris-HCl (pH 6.8), 2.3% sodium dodecyl sulphate (SDS), and 1 mM phenylmethylsulfonyl fluoride (PMSF). The cell debris was eliminated by centrifugation at 12,000× g for 10 min, and the supernatant was collected. Protein amounts were determined with a DC protein assay kit of Bio-Rad Laboratories, Inc. using bovine serum albumin as the standard by the Lowry method. Protein (15–30 µg) from each sample was mixed with SDS loading buffer, separated by SDS polyacrylamide gel electrophoresis, and transferred to a polyvinylidene difluoride (PVDF) membrane (Bio-Rad). The membrane was blocked via incubation overnight at 4 °C in phosphate-buffered saline (PBS) containing 0.1% Tween 20 and 10% skim milk and then incubated with the primary antibody with shaking for 1 h at room temperature or overnight at 4 °C. After washing five times with PBS containing 0.1% Tween 20, the specific bands were visualized by further incubation with horseradish peroxidase (HRP)-conjugated secondary antibody followed by enhanced chemiluminescence detection using the ECL system according to the manufacturer's instructions. A β-actin antibody was used as a control. The stained membrane was exposed to Fuji Medical X-ray film, and the specific protein bands were quantified with the ImageJ software program (https://imagej.nih.gov/ij/) and normalized by β-actin.

A total of 2–3 × 10^5 cells were cultured in 35-mm dishes and used for transfection experiments at 80–90% confluency. siRNA transfection was performed using Lipofectamine RNAiMAX (Life Technologies) according to the manufacturer's protocols. The SQSTM1-2 (S100057596), SQSTM1-5 (S103089023), SQSTM1-6 (S103116750), SQSTM1-7 (S103117513), and negative control (1027281) siRNAs were obtained from Qiagen. The cells were collected for Western blotting analyses after 24 h of transfection.

After 4 h of starvation in the presence of 10 µM bafilomycin A1 and 20 µM MG132, approximately 1 × 10^7 cells per 100-mm dish were lysed with 1 mL cold lysis buffer containing 1 mM Complete Mini protease inhibitor. The cell suspension in the lysis buffer was incubated at 4 °C with rotation for 1 h, briefly sonicated (15–20 s) on ice, and then centrifuged at 12,000× g for 10 min at 4 °C. The supernatant was transferred to a fresh tube, and the protein concentration was determined by protein assay. SureBeads Protein A Magnetic Beads and magnetic racks were used for the immunoprecipitation and isolation of specific protein targets. Immunoprecipitation of 500–700 µL lysate was performed using 3–5 μg of anti-PDCD4 rabbit polyclonal antibody (PD024). Elution from the beads was carried out using SDS buffer with a 10 min incubation at 70 °C. Finally, the purified target proteins were resolved by Western blotting analyses.
Total RNA from treated cells was isolated using RNAiso Plus and reverse transcribed to cDNA using a High Capacity cDNA Reverse Transcription Kit according to the manufacturer's protocol. Quantitative real-time PCR (qRT-PCR) using Power Up SYBR Green Master Mix was performed on a StepOnePlus system of Applied Biosystems-Thermo Fisher Scientific. The primers for GAPDH and PDCD4 were synthesized by Hokkaido System Science Co., Ltd. The sequences of the primers were as follows: GAPDH, forward (F) 5′-GTCTCCTCTGACTTCAACAGCG-3′ and reverse (R) 5′-ACCACCCTGTTGCTGTAGCCAA-3′; PDCD4, (F) 5′-ATGAGCAGATACTGAATGTAAAC-3′ and (R) 5′-CTTTACTTCCTCAGTCCCAGCAT-3′. Data were analyzed using the comparative CT (ΔΔCT) method, and the expression of the target gene was normalized by GAPDH. Each experiment was repeated at least three times.

For the colocalization analyses, 1.5 × 10^5 cells were seeded onto glass coverslips in 35-mm dishes and cultured in DMEM + 10% FBS medium. At 80–90% confluency, the cells were transfected with the PDCD4 plasmid and cultured for a further 24 h. For bafilomycin A1 or MG132 treatment, the cells were transfected with the PDCD4 plasmid and cultured for a further 20 h, and then the inhibitors were added at a final concentration of 10 µM bafilomycin A1 or 20 µM MG132. For control cells, the same amount of DMSO was used. For FBS-free conditions, the dishes were washed twice with DMEM before adding the new medium. At 4 h after the addition of inhibitors, the cells were fixed in 4% paraformaldehyde by incubating for 20 min at room temperature. Before fixation, the cells were washed three times with 1× PBS. The fixed cells were blocked at room temperature for 30 min with 1% bovine serum albumin and 1% donkey serum in phosphate-buffered saline. Incubation with primary antibodies was done overnight at 4 °C, followed by 1 h incubation at room temperature with donkey anti-mouse IgG H&L (DyLight650) (ab96878) (Abcam), Alexa Fluor 488 donkey anti-rabbit IgG (H+L) (Invitrogen), or Alexa Fluor 555-conjugated donkey anti-guinea pig IgG (H+L) (bs-0358D-A555) (Bioss Antibodies Inc.) secondary antibodies. One compartment from each treatment group was incubated with a secondary antibody only and considered as a blank. 4′,6-Diamidino-2-phenylindole (DAPI) dihydrochloride was used for nuclear staining. Fluorescence images were captured using a confocal microscope at 20× and 63× (oil) magnifications. The images were processed and viewed using the Zen software program. All images were taken at 22 ± 3 °C. The captured images were analyzed using the HALO-2 image analysis software program.

Differences were determined using Student's t-test, and p < 0.05 was considered significant. All of the experiments were performed at least in triplicate unless stated otherwise. Data are shown as the mean ± standard deviation (SD).

We previously showed that PDCD4 protein is phosphorylated at S67, S71, and S76 via the tumor promotor (EGF and TPA)-mediated signaling pathways and degraded by the ubiquitin-proteasome system [25,26]. Huh7 cells were treated with the potent autophagy inhibitor bafilomycin A1 [47,48]. We also assessed the effects of 3-methyladenine (3-MA), another kind of autophagy inhibitor, on PDCD4 degradation in a time-dependent manner. We found that the PDCD4 levels were upregulated in Huh7 cells treated with 3-MA compared to the control cells, but the autophagy-related proteins p62, ATG5, and LC3-II did not show significant accumulation in the cells.
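For readers unfamiliar with the comparative CT quantification used in the qRT-PCR analysis above, a worked sketch follows. The CT values here are hypothetical and for illustration only, not data from this study.

```python
# Worked illustration of the comparative CT (delta-delta CT) method:
# relative expression = 2^(-ddCT), with the target gene normalized to a
# reference gene (here GAPDH) and to the control condition.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: PDCD4 CT drops from 26.0 to 25.0 while GAPDH stays at 18.0,
# i.e. a 2-fold increase in relative PDCD4 mRNA.
print(fold_change(25.0, 18.0, 26.0, 18.0))  # -> 2.0
```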
An ATG5 mutant cell line (ATG5-16) of Huh7 cells was isolated using the CRISPR-Cas9 system, with screening for the ATG5-ATG12 protein band by Western blotting. 3-MA prevents autophagosome formation by inhibiting PI3K. The reagent is expected to inhibit PDCD4 degradation by the ubiquitin-proteasome system, because such degradation is induced by the mitogen-activated PI3K-Akt-mTOR-S6K1 signaling system [57]. As mentioned earlier [32,33], p62 is a polyubiquitin binding partner. In ATG5 mutant cells, p62 silencing also upregulated the PDCD4 protein levels. Lysates of wild-type and ATG5-16 mutant Huh7 cells were immunoprecipitated by the anti-PDCD4 antibody, and the resulting precipitates were analyzed by Western blotting. The band of p62 as well as the smear bands of ubiquitin were found in the precipitates of both wild-type and mutant cells. Autophagosome formation is generally considered to be impaired in Atg5−/− or Atg7−/− cells. However, Tsuboyama et al. conversely reported that autophagosome-like structures positive for the ER marker syntaxin-17 are formed in the absence of the ATG conjugation system. At present, it remains unclear how 3-MA upregulated the PDCD4 protein levels.

While it was initially suggested that autophagy prevents tumor initiation by maintaining cellular homoeostasis [62,63], our results indicate that autophagy can also lower the levels of a tumor suppressor. Previous results showed that EGF and TPA are able to downregulate the PDCD4 levels. EGF and TPA activate the PI3K-Akt-mTOR-S6K1 axis and PKC, respectively, to phosphorylate PDCD4 at S71 and S76 in the degron and may channel it, after ubiquitination, to proteasomal degradation [25,26].

We demonstrated for the first time that the tumor suppressor PDCD4 is degraded by the p62-mediated selective macro-autophagy system in Huh7 hepatoma cells. The autophagy system may contribute, at least partly, to suppressing the levels of PDCD4 and result in the development and progression of tumor cells. Thus, the inhibition of this pathway might be a potential target in cancer therapy."}
+{"text": "Cercopithifilaria bainae is a filarioid nematode of dogs. Infection with the parasite was not reported in the USA until 2017, when a dog with skin lesions in Florida was diagnosed. Brown dog ticks, Rhipicephalus sanguineus (sensu lato), are the purported tick vectors and are widespread in the USA. Therefore, C. bainae is likely present in additional states. Here, we tested dogs and ticks in Oklahoma for evidence of C. bainae infection. Dermal punch biopsies were opportunistically collected from municipal shelter and client-owned dogs. Multiple skin samples collected from interscapular and head regions were tested by saline sedimentation to recover live microfilariae for morphometric identification and by PCR to amplify a 330 bp region of the filarioid 12S rRNA gene. Also, ticks observed on surveyed dogs were collected, identified to species level, and tested for filarioid DNA. Cercopithifilaria bainae infections were identified in 2.6% (6/230) of shelter dogs by morphometry of microfilariae in sedimentations and/or amplification of DNA from skin. DNA sequences amplified from PCR-positive skin samples were 99–100% identical to C. bainae reported in Italy. All skin samples from client-owned dogs were negative for filarioid infection by saline sedimentation and PCR. A total of 112 ticks, comprised of four species, were collected. Two of 72 R. sanguineus (s.l.), both engorged females found attached to a C. bainae infected dog, harbored C. bainae DNA (99–100% identity). One attached R. sanguineus (s.l.)
male on the same dog harbored a filarioid DNA sequence that was difficult to interpret at numerous base-pair locations but was closest in identity (~80%) to C. bainae. A total of 496 saline sedimentations were performed on 230 shelter and 20 client-owned dogs. C. bainae is more widespread than previously known. To our knowledge, we document C. bainae infections in dogs and DNA in brown dog ticks in Oklahoma for the first time. As brown dog ticks are commonly found throughout the USA, veterinarians in this region should consider C. bainae infection as a differential diagnosis in canine patients with dermatitis or polyarthritis.

Cercopithifilaria bainae is a tick-borne filarial nematode of dogs that was first described in Brazil in 1984. C. bainae parasitize the subcutaneous tissue of canine hosts, and microfilariae remain sequestered in the dermis, making detection of the parasite in infected dogs challenging. Cercopithifilaria bainae is considered primarily non-pathogenic, but erythematous, papular and pruritic dermatitis, non-healing and ulcerative skin lesions, and subcutaneous nodules associated with infection have been reported [3–5].

Cercopithifilaria bainae infections in dogs have been documented predominantly in Mediterranean Europe and in Brazil, and DNA of the parasite has been reported in the suspected tick vector, Rhipicephalus sanguineus (sensu lato), collected in these areas. DNA of C. bainae has also been identified in R. sanguineus (s.l.) collected in other regions, including Australia, Malaysia, and South Africa [4, 6, 7]. Rhipicephalus sanguineus (s.l.), commonly called brown dog ticks, are thought to be important natural vectors of C. bainae based on the development of third-stage larvae in adult ticks that had fed as nymphs on a naturally infected dog. Although C. bainae has been molecularly detected in other ticks, including Dermacentor reticulatus and Ixodes ricinus, parasite development within these tick species has not been experimentally demonstrated [8, 9].

Despite the cosmopolitan distribution of brown dog ticks, C. bainae had not been documented in the USA until 2017. A dog from Florida with no travel history was presented with dermatitis, with plaques on the dorsal head, and alopecia, erythema, and ulceration of both medial canthi. Microfilariae isolated from skin biopsy samples via saline sedimentation were identified as C. bainae by PCR and microscopy.

Brown dog ticks are widespread in the USA, with all stages preferentially feeding on dogs, and it is likely that C. bainae is present in dogs in states in addition to Florida [10, 11]. To date, no surveys for C. bainae in dogs in the USA have been conducted. Raising awareness of the emerging parasite in the USA will assist veterinarians in diagnosing infections, which will generate further information regarding clinical manifestations and pathology in infected dogs, and lead to investigations into treatment and prevention strategies. To determine if C. bainae is present in dogs in Oklahoma, multiple dermal punch biopsy samples were evaluated by saline sedimentation and PCR. Additionally, ticks observed on dogs were tested for filarioid DNA.

Skin biopsy samples were opportunistically collected from euthanatized dogs in Oklahoma, USA, over a 10-month period (January-October 2018).
Shelter dogs were temporarily housed at animal control facilities prior to euthanasia following standard approved shelter protocols, and client-owned dogs were submitted for necropsy at the Oklahoma Animal Disease Diagnostic Laboratory. When possible, sex and estimated age were documented. Travel histories were not available for the majority of animals, nor was information regarding prior treatment with parasiticides.

Multiple skin samples were collected from individual animals using sterile 6 mm biopsy punches within hours, but sometimes up to four days, after death. Carcasses were stored at 4 °C until sample collection. A higher frequency of detection of C. bainae microfilariae in interscapular and head regions has been previously described, and these regions were therefore sampled.

Single interscapular biopsy samples were placed in microcentrifuge tubes containing phosphate-buffered saline (PBS) and transported to the laboratory for storage at −20 °C and later DNA extraction and molecular analyses. Additional biopsy samples were placed in PBS-filled, sterile 15 ml conical tubes and, upon transport to the laboratory, processed to recover microfilariae as described below. After processing, the majority of skin samples were stored at −20 °C for subsequent DNA extraction and PCR.

To detect microfilariae in skin biopsy samples, up to three skin samples from individual dogs were placed in 15 ml conical tubes containing PBS and incubated for 1–3 h at 37 °C to allow live microfilariae to migrate out of the tissue. The skin samples were then removed, and the tubes were centrifuged for 5 min to concentrate microfilariae. Supernatants were decanted and the resulting pellets were stained with 0.1% methylene blue for microscopic examination. Stained sediment was transferred to microscope slides and covered with 22 × 60 mm glass coverslips; all sedimentation material from each skin sample was scanned under 100× total magnification. When observed, microfilariae on slides were enumerated, and up to 10 microfilariae were measured (length and width) using an ocular micrometer under 400× total magnification. Microfilariae measurements were compared to those available in the literature for filarioid species, including Acanthocheilonema reconditum (215–288 × 4.5–5.8 µm), Cercopithifilaria bainae (173.8–200 × 5.6–6.9 µm), and Dirofilaria immitis (280–325 × 5–7.5 µm) [13–15].

Animals were briefly examined (approximately 1–3 min) for ticks at the time of skin biopsy collection. When present, ticks were placed in 70% ethanol and stored at −20 °C. At the time of dissection, ticks were removed from ethanol and identified to species by microscopic examination and comparison with standard keys [16].

Tick dissection, DNA extraction, PCR amplification, and amplicon purification were carried out in dedicated laboratory areas to prevent DNA contamination. Separate negative water controls were used for DNA extractions and for PCR. A sample containing DNA of D. immitis was used as a positive control. Nucleic acid was extracted from approximately 30 mg sections of skin biopsy samples using the QIAamp® Fast DNA Tissue Kit.
Refrigerated microfilariae (washed with PBS from glass microscope slides) were extracted for DNA using the Illustra blood genomicPrep Mini Spin Kit. After tissue digestion, individual tick samples were extracted for DNA using the QIAamp® DNA Blood Mini Kit. DNA extractions were carried out according to the manufacturer's instructions specific to each kit.

PCR amplifying a ~330-bp region of the filarioid 12S rRNA mitochondrial gene was performed on DNA extractions from skin, microfilariae, and ticks using the previously described primers Fila12SF and Fila12SR [17]. Additionally, PCR amplifying a 340–370-bp region of the 12S rRNA mitochondrial gene was performed on R. sanguineus (s.l.) testing positive for C. bainae, using the previously described primers 12SF and 12SR, to determine the genetic lineage of the ticks as previously described [17, 18].

Standard gel electrophoresis in a 2% agarose matrix with GelRed® staining was used to detect amplicons. Correctly sized amplicons were purified either directly from the gel using the QIAquick® Gel Extraction Kit (Qiagen) or from PCR reactions using the QIAquick® PCR Purification Kit (Qiagen). Purified amplicons were bi-directionally sequenced (Sanger method) by Eurofins Genomics or the Oklahoma State University Molecular Core Facility. Sequences were compared to those available in the National Center for Biotechnology Information database (GenBankTM) to determine filarioid species identity and R. sanguineus (s.l.) genetic lineage. Sequence alignments were constructed using ClustalW to determine percent similarities of Oklahoma filarioid 12S rRNA mitochondrial gene sequences to each other and to additional filarioid sequences previously contributed to the GenBankTM repository, as well as to determine R. sanguineus (s.l.) genetic lineage.

Surveyed shelter dogs included 56.5% (130/230) males and 43.5% (100/230) females, with reported ages ranging from two months to 14 years. A total of 496 saline sedimentations were performed on 230 shelter dogs and 20 owned dogs. Microfilariae were recovered from 8.7% (20/230) of shelter dogs; no microfilariae were recovered from skin biopsy samples collected from client-owned dogs. A total of eight microfilariae recovered from 1.3% (3/230) of dogs were consistent with C. bainae. Acanthocheilonema reconditum (215–288 × 4.5–5.8 µm) was identified in 1.3% (3/230) of dogs, and D. immitis (280–325 × 5.0–7.5 µm) was identified in 5.2% (12/230) of dogs. One dog had a single microfilaria recovered from the interscapular region that desiccated on the slide, so an accurate measurement was not possible for species determination. This dog was later confirmed as having D. immitis by PCR of the sediment washed from the slide with PBS. One microfilaria (measuring 160 × 4.5 µm) recovered from a single shelter dog did not fall into known filarioid microfilariae size ranges.

On average, the numbers of microfilariae detected for A. reconditum and D. immitis in individual skin biopsy samples were higher when compared to C. bainae, which ranged in number from one to four. Of the dogs with A. reconditum or D. immitis, 93.3% (14/15) had detectable microfilariae in interscapular regions, ranging in number from one to 68, and 80% (12/15) had detectable microfilariae in head regions, ranging in number from one to 239. DNA from microscopically identified C. bainae or A. reconditum microfilariae was not detectable in material rinsed from sedimentation slides. DNA of D. immitis microfilariae rinsed from slides was detected in 55% (11/20) of samples.
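The morphometric assignment described above amounts to checking a measured length and width against the published ranges. The following is an illustrative sketch only; the size ranges come from the literature cited in the text, while the helper function itself is ours, not the authors' workflow.

```python
# Published microfilaria size ranges, (length_um, width_um), per species.
RANGES = {
    "Cercopithifilaria bainae":     ((173.8, 200.0), (5.6, 6.9)),
    "Acanthocheilonema reconditum": ((215.0, 288.0), (4.5, 5.8)),
    "Dirofilaria immitis":          ((280.0, 325.0), (5.0, 7.5)),
}

def identify(length_um, width_um):
    """Return species whose published ranges contain the measurement."""
    matches = [
        species
        for species, ((lmin, lmax), (wmin, wmax)) in RANGES.items()
        if lmin <= length_um <= lmax and wmin <= width_um <= wmax
    ]
    return matches or ["no match (atypical measurements)"]

print(identify(190, 6.0))  # -> ['Cercopithifilaria bainae']
print(identify(160, 4.5))  # no match, cf. the 160 x 4.5 um microfilaria above
```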
Skin samples from 228 shelter dogs and eight owned dogs were tested by PCR, all of which were also tested by saline sedimentation. A total of 9.6% (22/228) of shelter dogs were positive for filarioid DNA, with 2.2% (5/228), 0.9% (2/228), and 6.6% (15/228) of dogs having DNA of C. bainae, A. reconditum, and D. immitis, respectively. When assessing dermal areas of skin biopsy collection, 8.3% (19/228) of dogs had detectable DNA in the interscapular region, including 1.8% (4/228) with C. bainae, 0.9% (2/228) with A. reconditum, and 5.7% (13/228) with D. immitis infections. In the head region, 4.8% (11/228) of dogs had detectable DNA, including 0.4% (1/228) with C. bainae and 4.4% (10/228) with D. immitis infections. Cercopithifilaria bainae sequences obtained from shelter dogs were 99–100% homologous to each other and to C. bainae reported in Italy (GenBank: KF381408). Cercopithifilaria bainae sequences obtained from dogs in this study were submitted to GenBankTM (MN814265-MN814269). Acanthocheilonema reconditum and D. immitis sequences were 99–100% homologous to GenBankTM accessions JF461460 and MH051846, respectively. None of the samples from owned dogs had detectable filarioid DNA by PCR.

A total of 112 ticks were collected from 17 dogs, including two dogs with C. bainae. A total of 110 ticks were collected from 16 shelter dogs and were comprised of Amblyomma americanum, Amblyomma maculatum, Dermacentor variabilis, and R. sanguineus (s.l.). Two partially engorged A. americanum females were collected from one client-owned dog. Two shelter dogs with C. bainae microfilariae by sedimentation were noted to have R. sanguineus (s.l.) on them at the time of skin biopsy sample collection; three attached, engorged females and two attached males were collected from one of these dogs. Two of the engorged R. sanguineus (s.l.) harbored DNA sequences that were 99% identical to each other and 99% homologous to C. bainae from Italy (KF381408); C. bainae sequences from the female R. sanguineus (s.l.) were 99–100% identical to C. bainae sequences amplified from skin of dogs in this study. One of the male R. sanguineus (s.l.) harbored a sequence that was difficult to interpret at numerous base-pair locations due to heterozygous and mis-spaced peaks, suggesting co-infection with similar organisms, but was closest in identity (~80%) to C. bainae (GenBank: KF381408). Attempts to clone amplicons from the male R. sanguineus (s.l.) into plasmid vectors to better elucidate nucleotide sequences of single gene fragments were unsuccessful. The R. sanguineus (s.l.) ticks which harbored Cercopithifilaria sp. sequences were identified as belonging to the temperate lineage. The R. sanguineus (s.l.) ticks which tested negative for Cercopithifilaria sp. DNA were not tested for genetic lineage. Both A. americanum collected from the client-owned dog were negative for filarioid DNA by PCR. In addition to the detection of Cercopithifilaria sp. DNA in ticks, DNA of D. immitis was detected in 13 ticks collected from six different dogs, including five A. americanum and eight R. sanguineus (s.l.). Two dogs with detectable D. immitis DNA in infesting ticks were positive for D. immitis by skin sedimentation and/or PCR, two dogs did not have microfilariae or detectable filarioid DNA in skin, and two dogs with D. immitis positive ticks were positive for C. bainae microfilariae by skin sedimentations and/or PCR. No A. reconditum DNA was detected in any of the ticks tested.
Cercopithifilaria bainae infections in dogs have predominantly been identified in Mediterranean Europe and Brazil, with detection in 10.5–13.9% and 1% of dogs surveyed, respectively; these are regions where researchers have been actively looking for the parasite [19]. Given the cosmopolitan distribution of R. sanguineus (s.l.), the experimentally demonstrated tick vector, it is logical to deduce that C. bainae infections in dogs are similarly distributed, as are other infections transmitted by this tick group, including Anaplasma platys, Ehrlichia canis, canine Babesia spp. and Hepatozoon canis [20–22]. Here, we document C. bainae in dogs in Oklahoma for the first time, only the second documentation of the parasite in North America. The parasite was detected in 2.6% (6/230) of shelter dogs when PCR and sedimentation results are considered together. Although PCR and sedimentation results in the present study did not always agree, discrepant results between PCR and sedimentation assays have been documented previously in C. bainae-infected dogs; in the present study as well, some dogs were positive for C. bainae by PCR of skin but not by microscopy.

It is not surprising that filarioid infections were detected less commonly in client-owned dogs than shelter dogs. Owned dogs often receive more frequent veterinary care relative to shelter dogs, and therefore are more likely to be treated with compounds effective against helminths or ectoparasites [23, 24]. The A. reconditum and D. immitis infections found in shelter dogs in this study were also not unexpected; both parasites are well-documented in the USA [25, 26]. Acanthocheilonema reconditum infections were identified in 2.2% (5/230) of shelter dogs, and were more commonly detected by skin sedimentation of the head region or PCR of skin samples collected from the interscapular region. The prevalence of A. reconditum infection in dogs in Oklahoma has not been reported. Dirofilaria immitis infections were identified in 8.3% (19/230) of shelter dogs, and were more commonly detected by PCR of interscapular skin samples rather than by detection of microfilariae by sedimentation. The overall heartworm prevalence observed in shelter dogs in this study was comparable to what has been previously reported in Oklahoma shelter dogs [27].

To the authors' knowledge, the present study is the first report of Cercopithifilaria sp. DNA in ticks in the USA, and it suggests R. sanguineus (s.l.) may serve as a vector in this region, as has been reported in other areas of the world. All three R. sanguineus (s.l.) with detectable Cercopithifilaria sp. DNA were attached to one dog that was later determined to have C. bainae microfilariae; the female ticks were engorged, but it was not apparent for how long the male tick had been attached or if a blood meal was taken. The presence of R. sanguineus (s.l.) on a dog infected with C. bainae is noteworthy, and compels the authors to suspect that the parasite is cycling between this tick group and dogs in the USA. If C. bainae microfilariae had been ingested by immature R. sanguineus (s.l.) stages, they may have gone on to develop into infective third-stage larvae within ticks during ecdysis, as has been experimentally demonstrated in this tick group in other areas of the world [8].

Alternatively, the Cercopithifilaria sp. DNA amplified from the three ticks may have occurred following incidental ingestion of dermal microfilariae from the infected dog on which they were found. This possibility is evidenced by the fact that DNA of
D. immitis was detected in 20% (5/25) of the A. americanum and 11.1% (8/72) of the R. sanguineus (s.l.) tested. As D. immitis is adapted to mosquito intermediate hosts, it is evident that the D. immitis DNA detected in ticks reflects ingested microfilariae from recent blood meals rather than parasite development. Despite the molecular detection of C. bainae in other tick species (Dermacentor reticulatus and Ixodes ricinus), R. sanguineus (s.l.) is the only tick group which has been experimentally demonstrated to host developing stages of the parasite [8, 9]. In this study, C. bainae was not detected in A. americanum, A. maculatum or D. variabilis. However, if more specimens of each of these tick species were tested, then C. bainae DNA may have been detected, especially if ticks had recently fed on infected dogs.

Cercopithifilaria bainae infections in dogs in the USA are more widespread than previously thought. Here, we document infections in dogs and DNA of the parasite in engorged R. sanguineus (s.l.), the experimentally demonstrated tick vector, in Oklahoma for the first time. Due to the ubiquity of R. sanguineus (s.l.), practicing veterinarians should consider C. bainae infection as a differential etiology when diagnosing canine dermatitis and polyarthritis, especially for those animals with known histories of brown dog tick infestations."}
+{"text": "Malignant peripheral nerve sheath tumors (MPNSTs) are rare and aggressive soft tissue sarcomas (STS) that, because of their origin, are operated by several surgical subspecialties. This may cause differences in oncologic treatment recommendations based on presentation. This study investigated these differences both within and between subspecialties. A survey was distributed among several (inter)national surgical societies. Differences within and between subspecialties were analyzed by χ2-tests. In total, 30 surgical oncologists, 30 neurosurgeons, 85 plastic surgeons, and 29 "others" filled out the survey. Annual caseload, tumor sites operated, and fellowship training differed significantly between subspecialties. While most surgeons agreed upon the preoperative use of MRI, the use of radiological staging and FDG-PET differed between subspecialties. Surgical oncologists agreed upon core needle biopsies as the ideal type of biopsy, while other subspecialties differed in opinion. On average, 53% of surgeons always consider preservation of function preoperatively, but 42% would never perform less extensive resections for function preservation. Respondents agreed that radiotherapy should be considered in tumor sizes >10 cm and in cases of microscopic and macroscopic positive margins. The preferred sequence of radiotherapy administration differed between subspecialties. There was no consensus on the indications for, and sequence of administration of, chemotherapy in localized disease. Surgical oncologists generally agree on preoperative diagnostics; other subspecialties do not. Consideration of the preservation of function differed among all subspecialties. Surgeons do agree on some indications for radiotherapy, yet the use of chemotherapy in localized MPNSTs lacks consensus. A preferred sequence of multimodal therapy differs between and within surgical subspecialties, but surgical oncologists prefer neoadjuvant radiotherapy. Malignant peripheral nerve sheath tumors (MPNSTs) are aggressive soft tissue sarcomas (STS) that can occur at any anatomical site [1]. Surgical resection is the only curative treatment option in localized MPNSTs [9, 10].
Radiotherapy, and in some cases chemotherapy, may be added as part of multimodal treatment. MPNSTs are rare tumors, and exact treatment strategies may differ between surgeons because patients can present to different surgical subspecialties due to the tumors' origin in nervous tissue and their occurrence in NF1. While surgical oncologists consider MPNSTs as part of their sarcoma population requiring radical excision [18], other subspecialties may approach these tumors differently.
A survey was constructed by two authors (EM and JHC) and tested internally with all coauthors from different surgical subspecialties. A secure electronic data capturing tool (REDCap), provided by the Dutch Plastic Surgery Society (NVPC), was used to construct the survey. This study is part of a larger survey addressing both oncological and reconstructive treatment considerations for localized MPNST. A total of 18 questions were used for this study, of which seven were for demographic purposes. The complete survey can be found in the Supplementary Materials. Approval for this study was obtained from our institutional review board.
Several national and international surgical societies were asked to distribute the survey among their members with an accompanying text explaining the purpose of the research. Surgeons involved in the surgical management of MPNSTs were asked to fill out the survey, and a reminder e-mail was sent thereafter. The survey was sent to the members of the Dutch Society of Surgical Oncology (NVCO), the Dutch Society for Surgery of the Hand (NVVH), the peripheral nerve section of the Dutch Society for Neurosurgery (NVVN), the American Society for Peripheral Nerve (ASPN), the peripheral nerve section of the European Association of Neurosurgical Societies (EANS), and the Soft Tissue and Bone Sarcoma Group of the European Organization for Research and Treatment of Cancer (EORTC). Survey responses were filled out anonymously, and no personally identifying data were collected.
Responses were summarized per surgical subspecialty: oncologic surgery, neurosurgery, plastic surgery, and other surgical subspecialties. Differences were calculated with χ²-tests for categorical data, and p values <0.05 were considered statistically significant. Statistical analyses and data visualization were conducted using R version 3.6.0.
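The authors ran these χ²-tests in R. Purely to illustrate the same kind of comparison, a minimal sketch in Python using scipy, with hypothetical counts rather than the study's actual data:

```python
# Minimal sketch of a chi-squared test of independence comparing a
# categorical answer across subspecialties. Counts are invented for
# illustration only; the study itself used R 3.6.0.
from scipy.stats import chi2_contingency

# Rows: subspecialty; columns: preferred biopsy type
# (core needle, open, no preference) -- hypothetical numbers.
table = [
    [29, 1, 0],    # surgical oncologists
    [15, 10, 5],   # neurosurgeons
    [40, 30, 15],  # plastic surgeons
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```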
In total, 174 respondents filled out the survey: 30 surgical oncologists, 30 neurosurgeons, 85 plastic surgeons, and 29 surgeons from other surgical subspecialties. Most respondents were European. Annual caseload, tumor sites operated, and fellowship training differed significantly between subspecialties (p < 0.001); surgical oncologists commonly had completed a sarcoma fellowship (85%), while other respondents more commonly had completed a fellowship in peripheral nerve surgery (32–56%). The highest caseloads were reported by surgical oncologists (p < 0.001). The majority of respondents operated on extremity site tumors, but most other tumor sites differed between surgical subspecialties.
Opinions regarding the preoperative work-up of soft tissue tumors that may originate from peripheral nerves differed between surgical subspecialties (p < 0.05). Regarding preoperative imaging studies, surgeons agreed that an MRI is necessary. FDG-PET scans, which can be used both for staging and for possible differentiation of benign and malignant lesions, were more commonly performed by neurosurgeons (67%) and surgical oncologists. Preoperative staging was carried out by 44% of respondents, most commonly by surgical oncologists. A CT-thorax was used by 25%, of which more than half would be in conjunction with an FDG-PET scan, and 10% would also carry out other radiologic diagnostics preoperatively. The preferred type of biopsy differed significantly between the surgical subspecialties (p < 0.001). Overall, a core needle biopsy was the preferred type of biopsy, especially among surgical oncologists (96%). Plastic surgeons and "other" surgeons commonly also preferred open biopsies, and plastic surgeons were the most likely not to have a preferred biopsy technique (17%). Respondents who did not regard a preoperative biopsy as necessary commonly reported that they considered the chances of tumor spread too high and would therefore proceed directly to surgery.
On average, 53% of all respondents always consider preservation of function before performing a resection; plastic surgeons did so most commonly. A minority would consider less extensive resections only in selected situations, such as in low-grade MPNSTs (n=1), in case it does not interfere with oncological resection (n=1), when multiple lesions are present (n=1), or if a main nerve bundle is separable from the tumor (n=1). Contrarily, 42% of all surgeons would never perform less extensive resections to preserve functionality and possibly compromise oncological results, and this did not differ between surgical subspecialties (p > 0.05). Others would only resect less if achieving free margins was not presumed feasible (36%), while a minority would consider it in other cases as well (22%). The majority of respondents always look for the nerve of origin preoperatively (74%). In the hypothetical situation of a microscopically completely resectable MPNST, 47% of respondents were of the opinion that resecting more of the originating nerve has a beneficial effect in decreasing local recurrence, as microscopic satellite lesions within or along the nerve may be present.
Opinions on indications for the use of radiotherapy in localized disease did not differ significantly among surgical subspecialties. The preferred sequence of radiotherapy did differ: surgical oncologists preferred neoadjuvant administration (72%), while other subspecialties either preferred adjuvant administration (36–53%) or had no preference (21–43%).
Overall, respondents felt that chemotherapy was usually not indicated in localized disease. A total of 26% of all respondents were of the opinion that chemotherapy should always be used in localized disease; this differed significantly among surgical subspecialties (p < 0.05), with neurosurgeons most commonly recommending it (47.4%). A preferred sequence of chemotherapy in localized MPNST did not differ between surgical subspecialties (p > 0.05), but no consensus was present. Overall, 24% of respondents did not see a role for chemotherapy in any localized MPNST.
In patients who are referred for soft tissue tumors that are possibly MPNSTs, the reported use of preoperative imaging studies and biopsies differs between surgical subspecialties; the vast majority of surgical oncologists routinely perform both. Some surgical considerations, such as the extent of resection margins for the preservation of function in selected cases, differ within surgical subspecialties. Moreover, assumed indications for the use of radiotherapy and chemotherapy in localized MPNST differ among surgical subspecialties, as does their ideal timing of administration.
Ideally, MPNSTs are resected with a wide margin to obtain an R0 resection; complete surgical excision with wide margins is the routine treatment of choice [21, 22]. To date, no study has demonstrated that MPNSTs should be treated differently than other high-grade STS [18]. Limitations to this study are partially inherent to the survey methodology.
Respondent bias should always be taken into account, as only interested surgeons will fill out the survey. Furthermore, selection bias may be present, as we restricted our survey distribution to a certain list of surgical societies, thereby excluding physicians who are not members of these societies. This study is, however, strengthened by the combination of respondents with experience in both sarcoma and peripheral nerve surgery. As patients will present to several surgical subspecialties, it is important that knowledge and experience are exchanged, the more so when practice variation is present. This is partly because several elements of MPNST treatment are not supported by high-level evidence, some of which will likely never be obtained because of the low incidence of these tumors. Future studies should be encouraged to combine data from several subspecialties and to further explore the ideal combination of surgical treatment and function preservation, as well as the role of multimodal treatment. Multidisciplinary approaches are essential for the optimal treatment of MPNSTs, possibly including collaboration of surgical oncologists, nerve surgeons, and reconstructive surgeons. In turn, consensus guidelines among all specialties treating MPNSTs can and should be made.
While consensus among surgical oncologists is more apparent in preoperative diagnostics, this differs between surgical subspecialties. Some disagreement also exists within subspecialties on less extensive resections in selected cases for function preservation. While surgeons agree on some indications for radiotherapy, the preferred sequence of radiotherapy differed between surgical subspecialties and within subspecialties other than oncologic surgery. Chemotherapy seems less popular in localized disease, and indications for its use lack consensus among surgeons. Differences between surgical subspecialties are likely caused by specialty bias, and combining knowledge between surgical subspecialties may further ameliorate patient outcomes."} +{"text": "The process of changing the attachment of a demolition robot is a complex operation and requires high docking accuracy, so it is hard for operators to control this process remotely through the camera's perspective. To solve this problem, this paper studies trajectory planning for changing a demolition robot attachment. The paper establishes a link parameter model of the demolition robot; the position and attitude of the attachment are obtained through a camera, the optimal docking point is calculated to minimize the distance error during angle alignment for the attachment change, the inverse kinematics of the demolition robot are solved, and the trajectory planning algorithm and visualization program are implemented, yielding the proposed trajectory planning method for changing the demolition robot attachment. The results of calculations and experiments show that the method can meet the accuracy, efficiency, and safety requirements of demolition robot attachment changing, and it has promising application prospects in the decommissioning and dismantling of nuclear facilities and other radioactive environments. The first remote-control hydraulic demolition robot designed for working in dangerous environments was developed in the 1970s, and such robots are widely applied for purposes such as nuclear accident emergency response and the decommissioning of nuclear facilities [3].
In order to make the demolition robot more suitable for operation in nuclear environments, researchers have carried out a lot of work on demolition robots. A remote-control graphic transmission system was developed with which the operator can freely adjust the camera angle to observe the situation around the demolition robot as needed.
There are two types of methods for changing the attachment of demolition robots: manual attachment changing and remote attachment changing. In the manual type, the connection mode and structure of the robot and attachment are similar to those of construction machinery such as an excavator; during attachment changing, the mechanical structure and hydraulic oil circuit of the attachment and robot must be assembled manually. This type of robot is not suitable for working in a radioactive environment because of radioactive contamination.
In the remote type, the operator does not need to touch the robot or the attachment during attachment changing. The quick-hitch equipment of the demolition robot and the attachment structure are shown in the corresponding figure. There are four procedures for remotely changing the attachment of a demolition robot: initialization, preparation, range alignment, and angle alignment. In the initialization stage, as shown in the figure, the robot is moved into a suitable region relative to the attachment. The contributions of this paper are as follows: the range of the relative distance between the robot base coordinate frame {B} and the attachment coordinate frame {T} is given, and the optimal distance interval is proposed; the optimal position of joint {4} is calculated, and the joint angles of the robot for attachment changing are solved through inverse kinematics; and a method for changing the demolition robot attachment by single joint motion is proposed, minimizing the distance error of the trajectory between {W} and {T}. To solve the issues above, the forward kinematics and inverse kinematics of the demolition robot need to be established and solved [16,17,18].
In this paper, {B} is set at the bottom-center of the robot tracked mobile platform; the X-axis of {B} is the forward direction, and the Z-axis of {B} points upwards. Joint {1} is the robot chassis rotary joint; the Z-axis of {1} points upwards and overlaps with the axis of the chassis rotary joint, and the X-axis of {1} is parallel to the X-axis of {B}. Joint {2} is the upper arm rotary joint driven by the upper arm cylinder; the X-axis of {2} points from joint {2} to joint {3} along the line connecting them, and the Z-axis of {2} overlaps with the axis of the upper arm rotary joint, pointing into the page. Joint {3} is the middle arm rotary joint driven by the middle arm cylinder; the X-axis of {3} points from joint {3} to joint {4} along the line connecting them, and the Z-axis of {3} overlaps with the axis of the middle arm rotary joint. Joint {4} is the fore arm rotary joint driven by the fore arm cylinder; the X-axis of {4} points from joint {4} to joint {5} along the line connecting them, and the Z-axis of {4} overlaps with the axis of the fore arm rotary joint. Joint {5} is the quick-hitch equipment rotary joint driven by the quick-hitch equipment cylinder; the X-axis of {5} is parallel to the X-axis of {W}, and the Z-axis of {5} overlaps with the axis of the quick-hitch equipment rotary joint. {W} is the quick-hitch docking spot coordinate frame, with axis directions determined by the structure of the quick-hitch equipment, and {T} is the attachment docking spot coordinate frame; when the attachment is connected to the quick-hitch equipment, {W} overlaps with {T}.
The Denavit–Hartenberg (D-H) parameters are described before the kinematic analysis of the demolition robot. The arm is a chain of links connected through five joints, all of which are revolute. Establishing the modified D-H parameters of the demolition robot [29,30], θ1, θ2, θ3, θ4, and θ5 are the rotation angles of joints {1} to {5}, respectively, and the link lengths between the joints are fixed by the arm geometry. The homogeneous transform from {B} to the joint {4} coordinate frame is given in Equation (1), and the transform from {B} to the robot quick-hitch equipment docking coordinate frame {W} is given in Equation (2). {R} is a reference coordinate frame installed on the quick-hitch equipment; it is introduced to compensate for measurement error. The transform from {W} to the attachment docking spot coordinate frame {T} is given in Equation (3).
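Equations (1)–(3) package compositions of per-link transforms. As a rough illustration of this chain, a minimal modified-D-H forward-kinematics sketch in Python; the D-H table values below are placeholders, not the robot's actual dimensions:

```python
# Minimal sketch of the forward kinematics described above, using
# modified D-H parameters (Craig convention). The numeric link values
# in DH_ROWS are assumed placeholders, not the paper's D-H table.
import numpy as np

def dh_modified(alpha, a, d, theta):
    """Homogeneous transform for one modified-D-H link."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

def chain(thetas, dh_rows):
    """Compose link transforms {B}->{1}->...; dh_rows holds (alpha, a, d)."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_rows, thetas):
        T = T @ dh_modified(alpha, a, d, theta)
    return T

# Placeholder (alpha, a, d) rows for the five revolute joints.
DH_ROWS = [(0.0, 0.0, 0.5), (np.pi / 2, 0.2, 0.0), (0.0, 1.6, 0.0),
           (0.0, 1.4, 0.0), (0.0, 0.9, 0.0)]

T_B_5 = chain(np.deg2rad([0, 60, -45, -70.9, 30]), DH_ROWS)
print(np.round(T_B_5, 3))  # pose of the joint-{5} frame, a stand-in for {W}
```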
In Equation (3), the relative pose between {W} and {T} at docking is specified. Geometrically, the {T} position is the center of the arc drawn with the green dot-dash line, and the distance between {4} and {W} is its radius: if the joint {4} position lies on this arc, {W} can perform an arc movement around {4} by rotating joint {4} alone, and if the {T} position lies on the resulting motion trajectory of {W}, range alignment for the attachment change can be completed. During angle alignment, {T} rotates clockwise around the supporting point of the attachment, so {W} and {T} do not overlap; however, due to the structural constraints of the quick-hitch equipment, the attachment is forced to move in the horizontal direction during angle alignment, and {W} and {T} only rotate relative to each other. In the configuration where the trajectory of {T} is tangent to the trajectory of {W}, the distance between {4} and {W} is 1.117 m, and the distance between {4} and the quick-hitch equipment edge is 1.185 m.
Algorithm 1 searches for the optimal docking point of joint {4}. Assuming the coordinates of {T} are given: (1) calculate the candidate trajectory of joint {4} according to Equation (7); (2) calculate the collision region ellipse according to Equation (8); (3) for each candidate docking point, calculate the trajectory of the quick-hitch equipment edge according to Equation (6) and check whether this arc enters the collision region; (4) among the collision-free candidates, select the docking point that minimizes the trajectory distance error between {W} and {T} during angle alignment.
If the breaking hammer is placed horizontally, the position of {T} is obtained from the camera, and the rotation angle around its Z-axis is 73.5°. According to Algorithm 1, the optimal docking point of {4} is then determined; the corresponding RZ4 is −70.9°, and the maximum trajectory distance error between {W} and {T} is 0.017 m.
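A minimal sketch of this kind of docking-point search, assuming a planar simplification, a circular {W} trajectory, and a hypothetical collision ellipse; the two radii follow the values quoted above, while the {T} position, ellipse, sweep range, and error metric are illustrative assumptions rather than the paper's Equations (6)–(8):

```python
# Illustrative planar sketch of the optimal docking-point search
# (in the spirit of Algorithm 1 above). Collision ellipse, sampling,
# and error metric are simplifying assumptions.
import numpy as np

R_4W = 1.117    # distance {4}->{W} (value quoted in the text)
R_EDGE = 1.185  # distance {4}->quick-hitch edge (value quoted in the text)
T_POS = np.array([2.8, 0.485])                 # assumed {T} position (x, z)
ELL_C, ELL_A, ELL_B = np.array([3.1, 0.3]), 0.5, 0.25  # hypothetical obstacle

def edge_collides(p4):
    """True if the arc swept by the quick-hitch edge enters the ellipse."""
    phi = np.linspace(-np.pi, 0.0, 121)
    edge = p4 + R_EDGE * np.column_stack([np.cos(phi), np.sin(phi)])
    return np.any(((edge[:, 0] - ELL_C[0]) / ELL_A) ** 2
                  + ((edge[:, 1] - ELL_C[1]) / ELL_B) ** 2 < 1.0)

def alignment_error(p4, sweep_deg=20.0):
    """Max gap between the {W} arc around p4 and the horizontal path of {T}
    over a short angle-alignment sweep (simplified error metric)."""
    phi0 = np.arctan2(T_POS[1] - p4[1], T_POS[0] - p4[0])
    phi = phi0 + np.deg2rad(np.linspace(0.0, sweep_deg, 41))
    w = p4 + R_4W * np.column_stack([np.cos(phi), np.sin(phi)])
    return np.max(np.abs(w[:, 1] - T_POS[1]))

# Candidate docking points for joint {4} lie on a circle of radius R_4W
# around {T}, so that {W} can reach {T} by rotating joint {4} alone.
best = None
for theta in np.deg2rad(np.linspace(100, 260, 161)):
    p4 = T_POS + R_4W * np.array([np.cos(theta), np.sin(theta)])
    if edge_collides(p4):
        continue
    err = alignment_error(p4)
    if best is None or err < best[0]:
        best = (err, p4)

print(f"best joint-4 point {best[1].round(3)}, max trajectory error {best[0]:.3f} m")
```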
The inverse kinematics for the docking configuration are obtained in closed form: the constant terms in Equations (11) and (12) are shifted to the left-hand side and then squared and added; substituting the constant terms into Equations (13) and (14), putting Equations (18) and (19) into Equations (16) and (17), and collecting the constant terms in Equation (23) finally yields Equation (24). In the process of attachment changing, the Z-axis of the quick-hitch equipment {W} is parallel to the Z-axis of {T}; by moving the demolition robot, the Y-axis of {B} is made parallel to the Z-axis of {T}. The distance between {B} and {T} also needs to be restricted: according to Equations (11) and (12), when the joint {4} position obtained by Algorithm 1 satisfies Equation (24), all the joints of the robot can be manipulated to the specified position.
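Since Equations (11)–(24) are not reproduced here, a rough stand-in can illustrate the step: a planar two-link geometric inverse-kinematics sketch solving joints {2} and {3} for a desired joint-{4} position. The link lengths and target are assumed placeholders, and the paper's closed-form solution differs in detail:

```python
# Planar two-link inverse kinematics for joints {2} and {3}, placing
# joint {4} at a desired point in the arm plane. Link lengths are
# assumed placeholders, not the robot's actual dimensions.
import numpy as np

A2, A3 = 1.6, 1.4  # assumed lengths of the upper and middle arm links (m)

def ik_2link(x, z, elbow_up=True):
    """Return (theta2, theta3) in radians placing joint {4} at (x, z),
    measured from the joint-{2} origin."""
    r2 = x * x + z * z
    c3 = (r2 - A2 * A2 - A3 * A3) / (2.0 * A2 * A3)  # law of cosines
    if abs(c3) > 1.0:
        raise ValueError("target out of reach")
    s3 = np.sqrt(1.0 - c3 * c3) * (1.0 if elbow_up else -1.0)
    theta3 = np.arctan2(s3, c3)
    theta2 = np.arctan2(z, x) - np.arctan2(A3 * s3, A2 + A3 * c3)
    return theta2, theta3

t2, t3 = ik_2link(2.4, 0.9, elbow_up=False)
print(np.rad2deg([t2, t3]).round(1))  # candidate joint angles in degrees
```

The elbow_up flag selects between the two geometric solutions; in practice the branch consistent with the joint angle ranges quoted below would be kept.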
The farther the distance between {B} and {T}, the farther the distance between the camera coordinate frame {C} and {T}, so the accuracy of obtaining the {T} position decreases. On the other hand, the demolition robot joints {2} to {4} have limited rotation angle ranges, so the workspace must also be restricted. If the breaking hammer is placed horizontally, the range of the {T} position along the X-axis is from 2 to 4 m, the range along the Z-axis is from 0 to 2 m, and the rotation angle around its Z-axis is 73.5°. The range of the rotation angle of joint {2} is from 30° to 140°, that of joint {3} is from −108° to −18°, and that of joint {4} is from −100° to 23°. The joint angles required for each distance between {B} and {T} are calculated according to the method introduced above, with {B} as the origin, and the attachment changing process can be completed in the colored area of the resulting map. The color represents the angle of joint {2} required for the attachment change: the dark red area indicates a joint {2} angle ≥90°, the red area indicates 80°, and dark blue indicates ≤40°. In order to ensure that the camera obtains high-precision attitude information about {T}, the docking position of joint {4} should be in the dark red and red areas. For example, if the Z-axis coordinate of {T} is 0.485 m, the demolition robot should move so that the X-axis coordinate of {T} is between 2.66 m and 2.91 m; if the Z-axis coordinate of {T} is 1 m, the X-axis coordinate of {T} should be between 2.3 m and 2.93 m.
An assistance wireframe is designed in the visualization interface to help the operator quickly move the robot to the specified position. When the robot arrives at the specified position, the initialization stage of changing the attachment is completed. Then {W} follows the optimal docking trajectory: joint {4} is manipulated to rotate counterclockwise, completing the range alignment and angle alignment stages, and the process of changing the demolition robot attachment is finished.
In this study, a trajectory planning toolkit for changing the demolition robot attachment was developed on the ROS platform; it includes a robot visualization program, a cylinder length data acquisition and joint angle conversion program, a real-time error compensation program, and the trajectory planning program described above. When the Z-axis of {W} was parallel to the Z-axis of {T}, the robot was in an appropriate docking position for the attachment change. Under manual operation, the {W} position had to be adjusted continuously to ensure that the quick-hitch equipment docked with the attachment; when {W} and {T} did not overlap, joints {4} and {5} were manipulated at the same time so that the quick-hitch equipment and attachment were assembled smoothly. With the proposed method, the distance error between {W} and {T} was at its minimum; a recording of the experiment is available at https://youtu.be/4m-wow-ABio.
In this paper, the motion trajectory for changing a nuclear demolition robot attachment was studied. By calculating the optimal docking position of joint {4}, the inverse kinematics of the demolition robot were used to solve the coordinates of each joint, and the position of the robot base frame was determined. The proposed remote-control attachment changing method with trajectory planning was investigated through experiments. Compared with the existing attachment changing method, the proposed method does not need to manipulate multiple joints at the same time to complete complex motions, which reduces the operational difficulty of the range alignment and angle alignment during attachment changing. On the other hand, optimal docking for the attachment change was achieved, which minimized the trajectory distance error between the quick-hitch equipment and the attachment during angle alignment and ensured that no collision occurred between these two parts. The experimental results show that, at the same operating level, the time consumed in changing the demolition robot attachment could be reduced by 46% by using the trajectory planning method. The method proposed in this paper therefore improves the efficiency and safety of remotely changing a demolition robot attachment.
Since commercial demolition robots do not provide remote communication protocols, they can only be controlled by manual remote operation. Our group is developing an intelligent demolition robot called the Huluwa demolition robot. The Huluwa demolition robot, with high radiation resistance, will be equipped with a hydraulic servo control system, which has higher precision than an electro-hydraulic proportional control system, and with a newly designed quick-hitch device for automatically changing the attachment. In the next step, dynamic robot modeling and hydraulic servo control research will be carried out to change the attachment of the Huluwa demolition robot automatically."} +{"text": "Total joint arthroplasty (TJA) delivers highly valuable outcomes to patients with end-stage joint diseases. However, despite the investment in stratified preventative measures, including the preoperative preparation of patients, continuous improvements in clinical settings, training of surgeons and operating room personnel, and perioperative antibiotics, prosthetic joint infection (PJI) remains the most frequent cause of early TJA failure.
In addition, a projected increase in the use of TJAs will naturally result in a related increase in the number of PJIs. For this reason, further improvement of TJA practice is absolutely essential in order to achieve the lowest possible incidence of PJI. This is not imaginable without commitment, continuous education, and tight monitoring of the outcomes generated by each department providing joint arthroplasty care, and it cannot be accomplished without quality improvement based on continuous data mining and reflection on the efficacy of the preventative, diagnostic, and therapeutic strategy in all clinical settings.
A variety of host immune and non-immune cells interact perioperatively and early postoperatively with each other, contributing to the complex process of wound healing and the integration of the implant into the host tissues.
This special issue of the Journal of Clinical Medicine is specifically dedicated to presenting current research topics on PJI. Two narrative reviews and six original articles are included.
Bozhkova and members of the World Association against Infection in Orthopaedics and Trauma (WAIOT) presented a study in which they validated previously published diagnostic criteria.
Prevention is always better than treatment. In this issue, we published an extensive review surveying the field of PJI prevention.
The identification of patients who are at risk preoperatively is part of any reasonable preventative strategy; it offers clinicians an opportunity to modify perioperative conditions in order to prevent PJI. Fujiwara et al. analysed the data of 121 patients undergoing resection of a musculoskeletal tumour of the lower limb combined with the implantation of a tumour prosthesis.
A critical step towards the prevention of PJI also lies in a detailed understanding of PJI development, from the initial step of bacterial adhesion to the building of a biofilm structure. Heim et al. examined synovial fluid and pre- and postsurgical blood samples to analyse the impact of TJA and spine surgery on peripheral blood leukocyte frequency, bactericidal activity, and the expression of inflammatory molecules.
Two studies present data related to the diagnostics of PJI. Enz et al. examined the diagnostic performance of minimally invasive biopsies in relation to the preoperative diagnosis of PJI.
Rüwald et al. examined the potential of extracellular vesicle (EV) isolation in the identification of PJI.
The two final studies of this issue are related to the treatment of PJI. Firstly, Deroche et al. examined the optimal time for the safe re-evaluation of the empirical antibiotic choice during the operative treatment of hip and knee PJI.
Secondly, Kozaily et al. examined whether there is a place for an additional spacer intervention during two-stage PJI treatment.
This special issue is intended to offer readers up-to-date and sound knowledge on a wide range of PJI topics, and we believe that the articles have largely fulfilled these expectations. The editorial board pursued this project with the hope of contributing to new research to help tackle this increasingly prevalent and disabling complication of TJA. We would like to thank all of the authors and peer-reviewers for helping us with this excellent body of work."} +{"text": "Neck pain is a common clinical disease, which seriously affects people's mental health and quality of life and results in loss of social productivity.
Improving the curative effect of neck pain treatment and reducing its recurrence rate are major medical problems. Shi's manipulation therapy has unique advantages and technical features that aid in the diagnosis and treatment of neck pain. However, compared with first-line non-steroidal anti-inflammatory drug (NSAID) treatment of neck pain, Shi's cervical manipulation lacks a research basis regarding therapeutic advantage, safety, and satisfaction for treating acute and subacute neck pain. Herein, we aim to test in a clinical trial the hypothesis that Shi's cervical manipulation is more effective, safer, and more satisfactory than NSAIDs for treating acute and subacute neck pain.
In this multicenter, positive-controlled, randomized clinical trial, a traditional analgesic drug (NSAID) is used as the comparator to evaluate whether Shi's manipulation is more effective, safe, and satisfactory for treating acute and subacute neck pain. Overall, 240 subjects are randomly divided into the trial and control groups, with both groups treated by the corresponding main intervention method for up to 12 weeks. Clinical data will be collected before the intervention and immediately after the first treatment; at 3 days and 1, 2, 4, 8, and 12 weeks after the intervention; and at 26 and 52 weeks after treatment. The clinical observation indices are as follows: (1) cervical pain is the primary observation index, measured by the Numerical Rating Scale (NRS). The secondary indices include the following: (2) cervical dysfunction, measured by patient self-evaluation using the Neck Disability Index; (3) cervical mobility, measured by the cervical vertebra mobility measurement program of an Android mobile phone; (4) overall improvement, measured by patient self-evaluation with the SF-36; and (5) treatment satisfaction, determined by patient self-evaluation.
We will discuss whether Shi's cervical manipulation has greater advantages in efficacy, safety, and satisfaction for acute and subacute neck pain than traditional NSAIDs, to provide a scientific basis for the dissemination and application of Shi's cervical manipulation. The trial is registered with the China Registered Clinical Trial Registration Center, no. ChiCTR1900021371, registered on 17 February 2019.
According to the latest surveys, 71.5% of the general population had neck pain for more than a year [1]. Manual therapy for neck pain has a long history in China; the bone-setting monograph Chinese Bone-setting Diagram records: "If the neck is injured, the head on the back cannot be lowered, or the tendon is long and bone is wrong, or the tendon is gathered, or the tendon is strong, use the second manipulation of Xiong Gu Zi's technique to lift and correct the neck." On this basis, doctors developed many cervical vertebra techniques and formed different schools with different characteristics and positive curative effects in the diagnosis and treatment of neck pain. "Shi's Traumatology," a valuable cultural heritage passed from generation to generation in Shanghai for over a hundred years, is one of the most distinctive schools of "Shanghai culture," and the cervical vertebra correction technique developed from it, "Dislocation of Bone and Malposition of Ligament," is mature and effective.
Shi's cervical correction technology can reduce the recurrence rate, improve cost-effectiveness, and offers clear clinical advantages in the diagnosis and treatment of neck pain [10]. Here, we present the protocol of a multicenter, positive-controlled, randomized clinical study investigating the efficacy and safety of Shi-style cervical manipulation therapy for treating acute and subacute neck pain. Our hypothesis is that Shi's cervical manipulation has greater advantages in efficacy, safety, and satisfaction for treating acute and subacute neck pain than traditional NSAIDs.
In this study, a multicenter, positive-controlled, randomized clinical trial design is used. Patients with acute and subacute neck pain who meet the study criteria are randomly divided evenly into two groups, a trial group and a control group, with both groups treated by the corresponding main intervention method for up to 12 weeks. Clinical data will be collected before the intervention and immediately after the first treatment; at 3 days and 1, 2, 4, 8, and 12 weeks after the intervention; and at 26 and 52 weeks after treatment.
This is an exploratory, multicenter, positive-controlled, randomized clinical study. The trial began in October 2019 and ends in October 2021. It is organized and implemented by the Department of Orthopedics and Traumatology of Shuguang Hospital Affiliated with Shanghai University of Traditional Chinese Medicine. The research is planned to be completed in Shuguang Hospital Affiliated with Shanghai University of Traditional Chinese Medicine, Shanghai Jing'an District Central Hospital, and Shanghai Xiangshan Hospital of Traditional Chinese Medicine. Shuguang Hospital is a hundred-year-old hospital in Shanghai; it is a grade III, class A comprehensive hospital of traditional Chinese medicine, a national model hospital of traditional Chinese medicine, and home to three key disciplines of the Ministry of Education. Shi's orthopedics and traumatology, one of these key disciplines, has a history of nearly 150 years and is among the first items of national intangible cultural heritage. Jing'an Central Hospital is a grade III, class B general hospital with a history of more than 70 years; it is a teaching hospital of Shanghai Second Medical University, Shanghai University of Traditional Chinese Medicine, and the Medical College of Shanghai Tongji University. Xiangshan Hospital is one of the first grade II, class A hospitals of traditional Chinese medicine in Shanghai, with a history of more than 40 years, practicing traditional Chinese medicine and integrated traditional Chinese and Western medicine. All three hospitals are responsible for the program's therapeutic interventions and are located in Shanghai, China, and Shuguang Hospital is the main responsible center of this project.
The reasons for choosing these three hospitals as the research centers of this project are as follows: on the one hand, all three hospitals are located in the center of Shanghai, with a dense population and convenient transportation, which is conducive to the treatment and follow-up of the subjects; on the other hand, their orthopedics and traumatology departments are well known in Shanghai, and Shi's orthopedics and traumatology department of Shuguang Hospital frequently cooperates with the other two hospitals in clinical diagnosis and treatment techniques and in scientific research. These conditions contribute to the smooth conduct of this research project. According to the standard implementation method of randomized multicenter clinical trials, patients with neck pain who meet the study standard are randomly divided into a trial group and a control group (1:1).
In the trial group, we use Shi's cervical manipulation as the main intervention method. Shi's manipulation intervention scheme was pre-tested in our research group's experimental study; it was effective immediately after the shortest treatment, and the longest treatment time was 12 weeks.
In the control group, we use oral administration of diclofenac sodium sustained-release capsules, the first-line non-steroidal anti-inflammatory drug for neck pain reported in the literature, as the main intervention.
The subjects at each participating research center must first meet the following diagnostic standards for acute and subacute neck pain [13]:
(1) Primary symptom of mechanical, non-specific neck pain, with a duration of <12 weeks.
(2) Pathologic and severity criteria: neck pain equivalent to grade I or II according to the Bone and Joint Decade 2000–2010 Task Force on Neck Pain and Its Associated Disorders classification.
Grade I neck pain: no signs or symptoms suggestive of major structural pathology and no or minor interference with activities of daily living; will likely respond to minimal intervention such as reassurance and pain control; does not require intensive investigations or ongoing treatment.
Grade II neck pain: no signs or symptoms of major structural pathology, but major interference with activities of daily living; requires pain relief and early activation/intervention aimed at preventing long-term disability.
Inclusion criteria:
(1) Age range from 18 to 65 years.
(2) The subject meets the above diagnostic criteria for acute and subacute neck pain.
(3) NRS score of ≥3.
(4) Axial (non-radicular) neck pain caused by changes in the facet joints, discs, muscles, and ligaments.
(5) Willingness to participate in this trial and to sign the consent letter.
Exclusion criteria:
(1) Cervical spine fracture and dislocation.
(2) Cervical vertebra, cervical soft tissue, or cervical spinal cord tumor, or tuberculosis.
(3) Cervical spine fusion, paravertebral bridging, or severe osteoporosis.
(4) A history of cervical spine surgery.
(5) A history of severe trauma to the cervical spine or neck.
(6) Skin inflammation, skin damage, or similar conditions of the neck.
(7) Serious heart, liver, kidney, or hematopoietic system diseases; digestive system ulcer, bleeding, or black stool; other serious diseases; or mental illness.
(8) Extreme physical deficiency or pregnancy.
(9) Participation in clinical trials of other drugs or treatments within the previous 3 months that would interfere with this study.
(10) Grade III or IV neck pain according to the Bone and Joint Decade 2000–2010 Task Force on Neck Pain and Its Associated Disorders classification:
Grade III neck pain: no signs or symptoms of major structural pathology, but the presence of neurologic signs such as decreased deep tendon reflexes, weakness, and/or sensory deficits; might require investigation and occasionally more invasive treatments.
Grade IV neck pain: signs or symptoms of major structural pathology, such as fracture, myelopathy, neoplasm, or systemic disease; requires prompt investigation and treatment.
Those who meet one of the above conditions cannot be included in this study.
Rejection criteria:
(1) Patients misdiagnosed as having acute or subacute neck pain.
(2) Subjects mistakenly included (meeting the exclusion criteria).
(3) Those who, after enrollment, did not strictly receive treatment and use drugs according to the regulations.
(4) No detailed treatment record.
(5) Those who did not follow the trial plan and combined it with other drugs.
(6) Those who, at the time of selection, were still receiving other related treatment that affected the evaluation of this study and could not stop it.
Those who meet one of the above conditions will be rejected.
Shedding criteria:
(1) Intolerable adverse reactions.
(2) Serious adverse reactions.
(3) Continued increase of the patient's pain, showing that trial participation was not suitable.
(4) Possible damage to the patient's health.
(5) Voluntary withdrawal or missed visits during the trial.
Those who meet one of the above conditions will be treated as shedding cases.
Information and informed consent forms have been prepared in accordance with the guidelines of the China registered clinical trial ethics review committee. Potential participants receive both forms at least 1 day before their screening visit. During this visit, a study physician explains all study procedures, and written informed consent is given only after participants have had adequate time to ask questions. We include male and female subjects with acute and subacute neck pain, and each subject must voluntarily sign an informed consent form before testing begins.
The main intervention method in the trial group is Shi's cervical manipulation therapy. The manipulation treatment team is composed of six clinicians with a minimum of 5 years' experience in manual operation, who are skilled in manual spine diagnosis and treatment and have received standard operating procedure (SOP) training for this study. Each selected subject undergoes a 3-dimensional (3D) computed tomography (CT) reconstruction of the cervical vertebrae as well as a special physical examination of the cervical spine. Each subject receives an initial diagnosis lasting 15–20 min after entering the group, including brief history collection, cervical static palpation (tenderness points and muscle spasm), cervical dynamic palpation, cervical sequence and mobility inspection, and cervical nerve reflex testing. Both history and examination results are recorded in the case report form. According to the examination results, the manipulation clinicians give a comprehensive diagnostic analysis and manipulation therapy for each subject's cervical spine. The key points of the manipulation are as follows: (1) acupoint pressing analgesia, following the treatment principle of local and remote acupoint selection for the cervical vertebrae; point rubbing and pressing are performed with the thumb pulp, with pressure going from light to heavy and then from heavy to light, and the operation is repeated.
The specific strength is to press until the thumbnail turns white, while considering patient tolerance, and each acupoint is operated on for 1 min. The specific acupoints are as follows: (a) local acupoints: Fengchi, Fengfu, Tianzhu, Wangu, Dazhui, Jianjing, Jianzhongshu, Quepen, Tianzong, and Ashi; (b) remote acupoints: Lieque, Houxi, and Hegu. (2) Bone-setting manipulation: according to the results of palpation and the 3D CT reconstruction of the patient's cervical spine, the clinician gives directional, fixed-point bone-setting manipulation to treat cervical vertebral semidislocation. The specific operation is as follows: the patient sits on the treatment chair with the head slightly bent forward, and the physician stands at the patient's side (taking the right side as an example). The physician locks the left thumb against the vertebral plate between the transverse process and the spinous process of the semidislocated cervical vertebra, with the other four fingers resting against the left side of the patient's head and neck, and the middle and upper part of the slightly bent right elbow placed under the patient's lower jaw. According to the direction of the semidislocation, the left thumb pushes the vertebral plate in the opposite direction while the right forearm rotates rapidly to lift and rotate the head with a flashing force of controlled direction and magnitude; a sliding motion and a snapping sound are felt from the vertebra under the left thumb. The whole manual treatment lasts 20 min each time and is performed twice a week, with the longest course of treatment lasting 12 weeks.
The main intervention method in the control group is oral NSAIDs. Each selected subject undergoes a 3D CT reconstruction of the cervical vertebrae as well as a special physical examination of the cervical spine. The treatment team is composed of three clinicians qualified as registered clinical physicians with senior professional titles who have received SOP training for this study. After exclusion of contraindications to diclofenac sodium, the NSAID diclofenac sodium double-release enteric-coated capsule is the main intervention. Each subject receives an initial diagnosis lasting 15–20 min after entering the group, including brief history collection, cervical static palpation (tenderness points and muscle spasm), cervical dynamic palpation, cervical sequence and mobility inspection, and cervical nerve reflex testing, with history and examination results recorded in the case report form. Each subject also receives a patient log card, kept by the patient (and handed over to the physician in charge at the end of the trial), to record truthfully every day the name, dose, and time of the drug taken, any discomfort, combined medication, and other information. The specific drug dosage is specified by the physician in charge according to the drug instructions, and the maximum duration of medication is 12 weeks.
Participants may request to leave the study, or they may be withdrawn due to study-related adverse events or the shedding criteria. If a subject is discontinued from study participation due to an adverse event, they will be evaluated by the study clinicians for the need for additional treatment for neck pain.
Safety data will be collected on any subject who is withdrawn from the study. Participants in both study groups may receive additional treatment: patients who have severe pain and cannot tolerate manipulation therapy, or whose pain cannot be improved within a short period of time and affects their sleep or daily life, should be given emergency analgesic, sedative, or muscle relaxant drugs and then be withdrawn from the trial.
To ensure retention of participants, follow-up visits will be scheduled to coincide with routine clinic appointments as far as possible. Moreover, we engage the participants through regular biweekly TCM health lectures in each clinical trial center. In addition, study staff will contact participants, either over the phone or by WeChat, before their scheduled follow-up appointments. Finally, participants will receive reimbursement for their time and transportation in the form of a gift card.
In principle, other painkillers, muscle relaxants, and similar drugs are not allowed during the trial. If in a special situation such a drug must be used, its use needs to be truthfully recorded in the medication registration form. Patients with other diseases before enrollment will continue their original treatment after enrollment, and any medication related to pain will be recorded in the medication record form. When a case drops out or the patient's compliance is poor, the reasons shall be entered in the case observation form in detail, and the patient's understanding and support shall be sought by contacting the patient as frequently as possible; the evaluation items that can be completed shall be completed, and the last treatment time shall be recorded. Dropout, compliance, and adverse reactions shall be statistically described and compared between the groups.
Once participants complete the study, they will be able to continue receiving clinical care from the respective clinical trial centers, for example, Tai Chi, Baduanjin, and characteristic TCM cervical function training guidance. Participants' study records will be reviewed if necessary. For patients who have completed the trial without remission, doctors should provide alternative treatment, such as comprehensive conservative treatment combining traction and acupuncture, as well as analgesics and muscle relaxants. When the sleep patterns and quality of life of a patient are seriously affected, the patient can be admitted for further treatment, including nerve block injection, surgery, and other treatments.
The manual intervention program was pre-tested in our research group's experimental study and took effect immediately after the shortest treatment, with the longest treatment time being 12 weeks, consistent with the reported literature on neck pain [15]. During the trial, we observe whether there are adverse reactions and adverse events and keep records.
If there are adverse events, such as adverse effects on patients' consciousness, sensation, movement, sleep, blood pressure, pulse, heart rate, respiration, or other normal physiological indicators; cervical fracture; skin damage; spinal cord or vertebral artery injury; or gastrointestinal discomfort, ulcer, bleeding, or black stool, the occurrence shall be recorded in detail on the case observation form, including the time, severity, duration, and treatment measures, and the correlation with the experimental treatment shall be analyzed with comprehensive consideration of complications and combined treatment. In case of adverse reactions, the clinical observation physician can decide whether to stop the trial according to the patient's condition. Patients who stop treatment due to adverse reactions should be followed up, and the results recorded in detail.
Participants in both groups are treated according to the main intervention methods of each group for a period of up to 12 weeks. Clinical data are collected before the intervention and immediately after the first treatment; at 3 days and 1, 2, 4, 8, and 12 weeks after the intervention; and at 26 and 52 weeks after treatment.
In this project, a trial group and a control group are set up, with the study designed around the positive control, and the expected results were summarized from previous clinical trials and the literature. The sample size was estimated with β = 0.10, a boundary value of 0.08, and a sample size ratio of k = 1 between the two groups; the PASS software yielded 102 cases per group. Considering an expected loss rate of about 10–20%, the sample size of each group was set to N = 120.
According to the sample size requirement of 120 cases in each of the trial and control groups, this project uses written and online media promotion, broadcast, and other channels to recruit subjects with neck pain to visit the orthopedic outpatient departments of Shuguang Hospital, Jing'an Central Hospital, and Xiangshan Hospital, according to the principles of convenient transportation and proximity. The subjects undergo two baseline diagnostic screenings by clinicians in each research center; the clinicians in charge of diagnosis and screening are not aware of the random grouping. During the scheduled clinical visit, the study clinicians provide a brief overview of the study and, if the participant is interested, determine eligibility according to the above criteria. If the patient is eligible and agrees to participate, the clinicians conduct the informed consent and enrollment; participants can discuss study details and ask questions before signing. Following enrollment, participants are randomized by the assistants (full-time research assistants in charge of patient grouping), undergo cervical spine and imaging examinations, and receive the assigned treatment.
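The exact settings behind the PASS estimate of 102 cases per group described above are only partially preserved here. Purely as an illustration of a two-proportion sample-size calculation of this general kind, a hedged Python sketch using statsmodels; the response proportions are assumptions, not the trial's actual parameters:

```python
# Illustrative two-sample sample-size calculation. This is not the
# trial's actual PASS computation; the proportions are placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_trial = 0.60, 0.80                 # assumed response rates
effect = proportion_effectsize(p_trial, p_control)

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.90, ratio=1.0,
    alternative="two-sided",
)
print(f"required n per group: {n_per_group:.0f}")

# With a 10-20% anticipated dropout, inflate accordingly:
n_enrolled = n_per_group / (1 - 0.15)
print(f"enroll about {n_enrolled:.0f} per group")
```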
In this study, a multicenter, positive-controlled, randomized design is adopted, and subjects who meet the study criteria are randomly divided 1:1 into a trial group and a control group. The random number table was generated by special statistical staff of Shanghai University of Traditional Chinese Medicine following strict randomization procedures using the SAS software. The randomized list is stored in a secure database (Microsoft Office Access 2007) by the data manager and is inaccessible to the case observers and other researchers. Each subject can register and be randomized only once, and no detailed subject information can be deleted from the database.
The design of the randomization scheme for this multicenter clinical trial includes (1) randomization parameter setting, (2) randomization rules, and (3) SAS randomized program design, yielding the random numbers and the randomized coding of the trial centers, trial cases, and treatment groups.
(1) Randomization parameter setting.
Seed number: the initial value of the random number; in this study it is 151427.
Total number of cases: the sample size. Block length × number of blocks = total number of cases: sample size 240 cases, block length 4, and number of blocks 60.
Stratification: the participating center is the stratification factor. Because the number of cases completed by each participating clinical trial center cannot be determined in advance, the "random center" management method is adopted; that is, when subjects are enrolled, the person in charge of each participating research center immediately requests the random number and corresponding group code from the "random center" by telephone.
Number of test groups: two, a trial group and a control group.
(2) Randomization rules.
Allocation proportion: the number of subjects in each treatment group is allocated 1:1.
Center code allocation: the study will be completed in Shuguang Hospital, Jing'an Central Hospital, and Xiangshan Hospital. The centers are sorted as Jing'an J, Shuguang S, and Xiangshan X according to the initials of the hospital abbreviations. Following the order of the "random number for center code allocation" generated by the SAS statistical software package, the random code of each center is obtained.
Trial case allocation: this study is divided into two groups; group A is defined as having a Rand of 1, 2, 3, 4, 5, or 6, and group B as having a Rand of 7, 8, 9, 10, 11, or 12.
(3) SAS random programming.
Treatment group allocation rule: if the first random number is greater than the second random number, group A is the trial group and group B is the control group; otherwise, group B is the trial group and group A is the control group.
Using SAS 9.4 and the above random parameters, the program successively receives the seed number (corresponding to "proc plan seed"), the random number for the center codes (title "center"; stratification number corresponding to "factors center" 3), the random number for the test cases (title; "factors blocks = 60, Rand = 4"), and other instructions, and the run is then completed.
In this study, the "random center" management method is adopted. The random numbers were generated by the staff in charge of project (random center) statistics at Shanghai University of Traditional Chinese Medicine using the SAS software.
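The SAS proc plan details above are only partially preserved. As an illustrative stand-in, a permuted-block randomization with block length 4, 60 blocks, and the stated seed can be sketched in Python; the trial itself generated its list in SAS, so this only mirrors the structure:

```python
# Illustrative permuted-block randomization: 60 blocks of length 4 with
# 1:1 allocation, seeded as stated above. The actual list was produced
# with SAS proc plan; this sketch only mirrors the structure.
import random

SEED = 151427   # seed quoted in the protocol
BLOCK_LEN = 4   # block ("area group") length
N_BLOCKS = 60   # number of blocks -> 240 subjects in total

rng = random.Random(SEED)
allocation = []
for _ in range(N_BLOCKS):
    block = ["trial", "trial", "control", "control"]  # 1:1 within each block
    rng.shuffle(block)
    allocation.extend(block)

print(len(allocation), allocation[:8])
```

Permuted blocks keep the group sizes balanced after every fourth enrollment, which matters when enrollment per center cannot be predicted in advance, as noted above.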
The heads of the clinical research centers, the diagnostic and intervention physicians, the researchers, and the personnel collecting symptom and sign data are unaware of the random numbers and corresponding groups. Upon participant enrollment, the diagnostic and intervention physician asks the person in charge of the participating clinical center for the random number and corresponding group information; the person in charge immediately requests the random number and group code from the "random center" by telephone and then informs the patient of the random number, group code, and intervention measures by telephone or WeChat. The intervention physician shall not disclose this information to the subject or to the physician collecting symptom and sign information.
The random grouping and intervention plan information is blinded to the subjects and the data statistical analysts. The blinding process is carried out in accordance with the blinding operation specifications for single-blind clinical trial drugs, and a blinding process record file is kept. The blinding codes are stored using a two-level method: the first level is the code corresponding to the two treatment groups (randomly designated A and B), and the second level is the code corresponding to each group code. The two levels of blinding codes are sealed separately, each in duplicate, and stored in the Institute of Orthopedics and Traumatology by full-time personnel not directly involved in the research work of this project.
There are two unblinding provisions in this study: the first unblinding takes place after the blind audit and data locking, and the second at the end of the statistical analysis. According to the emergency unblinding regulations, when a patient needs emergency treatment due to an adverse event and the treatment depends on knowing the study intervention received, the project research leader and the project supervisor can decide to perform emergency unblinding and read the corresponding emergency letters. Researchers should record the date, reason, and process of unblinding.
The data involved in this project mainly include the random grouping data, clinical baseline data, intervention symptom and sign changes, follow-up data, and statistical analysis results.
The random grouping data are provided by the researcher in charge of random grouping (random center) at Shanghai University of Traditional Chinese Medicine, and only the person in charge of each clinical research center has the right to obtain them. All clinical research centers are responsible for the diagnosis and screening of clinical baseline data, with the physicians responsible for recording and collecting the data. The person in charge of each clinical research center arranges full-time data collection personnel to collect the intervention symptom, sign, and follow-up data; these personnel shall not actively seek information on the random grouping, intervention plan, or similar details.
The statistical analysis results are collected and provided by the researcher in charge of statistical data analysis at Shanghai University of Traditional Chinese Medicine, who does not know the random grouping and clinical intervention information.
After clinical trial completion, the personnel from the \u201cstatistical center\u201d will go to each clinical research center to find the person in charge of each subcenter to coordinate and collect the case observation form, record the observation form in the database, and lock it under the supervision of the general research director of the project and the project supervisor (not directly involved in the project research).To ensure retention of participants, follow-up visits will be scheduled to coincide with routine clinic appointments as far as possible. The study staff will contact participants, either over the phone or WeChat, before their scheduled follow-up appointment, immediately after the first treatment; 3\u2009days and 1, 2, 4, 8, and 12\u2009weeks after the intervention; and at 26 and 52\u2009weeks after treatment follow-up.Before the start of the project, a complete project research operation SOP, investigator manual, and case report form should be established, and work training and summary meetings should be held before the start of the project and in the middle of the project, with detailed training and explanation of the test scheme, research operation SOP, and investigator manual contents. The case observation director of each research center should be established, and the case study work micro should be established by the person in charge of each research center credit group: case observation doctors in charge of each center should take photos of collected cases in the working group of each center, and give them to the person in charge of the subcenter who should, on the day of receipt, upload the standardized filing of the collected case report form. Consequently, Shuguang Hospital, the responsible unit of the project, should assign a special case supervisor, who should establish a WeChat working group with the person in charge of each center, and the supervisor should spot check the case report form from each center at any time to fill in the specification. After the test, the researcher in charge of project data statistics and analysis, a full-time statistician of Shanghai University of Traditional Chinese Medicine, should collect the original data from the case report form of each subcenter, and under the supervision of the chief research director of the project and the project supervisor (not directly involved in the project research), the assistant should cooperate with two people to back up and input those case reports into an Excel form, mutually proofread and correct input errors, check the accuracy with the original data in the case report form, lock the data, and then conduct statistical analysis.The medical records of the participants will be completely saved in the research center of the project. All the privacy data of the subjects are stored in encrypted protection, only to be seen by the main researchers of the project, only for the research of the project, not for other purposes.Not applicable in this study.Statistical results and data analysis are the responsibility of the researcher (Statistics Center) in charge of statistics and data analysis for the entire project, and the patients are divided into the following data sets according to different situations.Full analysis set refers to the ideal set of subjects as close as possible to the intentional analysis principle . The data set is obtained after the elimination of the smallest and reasonable method among all randomized subjects. 
To estimate missing values of the main variable, the last observation carried forward (LOCF) method is used to carry the missing part of the test data forward, so that the number of subjects in each group evaluated for efficacy at the end point is consistent with the number at the beginning of the trial.

Per-protocol set: all patients who meet the trial protocol, use 80–120% of the study drugs, show good compliance, do not use prohibited drugs during the trial, and complete the CRF requirements.

Safety analysis set: all cases that are randomized, receive the study drug or manual treatment at least once, and have at least one follow-up record constitute the safety analysis population of this study. The safety population is the main population for the safety evaluation.

The main variables and the comprehensive efficacy analysis are based on the full analysis set and the per-protocol set, respectively; the demographic and other baseline characteristics are based on the full analysis set; the safety evaluation is based on the safety set.

For continuous variables, the number of cases, mean, standard deviation, median, and minimum and maximum values will be listed; for categorical variables, frequency tables (frequency and percentage) will be given. Baseline is defined as the last observation before the first treatment.

Specific statistical analyses will be performed using the latest SPSS statistical software. First, paired t tests will be applied to each group's pre-treatment pain and to the improvement of each group's pain after the treatment intervention, and one-way (single-factor) analysis of variance will be applied to each group's pain immediately and 3 days and 1, 2, 4, and 12 weeks after the intervention. One-way ANOVA will also be used to analyze the cervical dysfunction index, cervical activity, overall improvement, and treatment satisfaction of patients before treatment; immediately after the treatment intervention; at 3 days and 1, 2, 4, and 8 weeks; and at 12 weeks. All statistical tests are two-sided, and P < 0.05 is considered statistically significant. There will be no additional analyses beyond the main analyses of the primary and secondary outcomes. (A worked sketch of these computations is given below.)

When a case drops out or a patient's compliance is poor, the reasons for the dropout and the poor compliance shall be entered in detail in the case observation form, the patient's understanding and support shall be sought by contacting the patient as frequently as possible, the evaluation items that can still be completed shall be completed, and the last treatment time shall be recorded. Dropouts, patient compliance, and adverse reactions shall be described statistically and compared between the groups.

In the middle of the project, the complete research operation SOP, investigator manual, and case report form will be reassessed and revised, and work training and summary meetings will be held with detailed training on and explanation of the trial scheme, research operation SOP, and investigator manual. A summary of the enrolment progress, treatment success proportions, adverse events, and protocol deviations will be provided to the Data Safety Monitoring Board members, who are not involved in the experimental research and treatment.

The protocol of the study is publicly available on the website of the China Registered Clinical Trial Registration Center under no. ChiCTR1900021371.
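As flagged in the statistical analysis plan above, the planned computations reduce to LOCF imputation, paired t tests and one-way ANOVA. A minimal sketch follows; the DataFrame columns, scores and two-arm layout are hypothetical, and the authors' actual SPSS procedures are not reproduced here.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide-format data: one row per subject, pain scores at
# baseline and two follow-ups; None marks a missed visit.
df = pd.DataFrame({
    "group": ["A", "A", "B", "B"],
    "base":  [7.0, 6.5, 7.2, 6.8],
    "post":  [5.0, 4.8, 5.9, None],
    "w12":   [2.0, None, 3.5, 3.0],
})

# LOCF: carry the last available observation forward along each row.
scores = ["base", "post", "w12"]
df[scores] = df[scores].ffill(axis=1)

# Paired t test: pre-treatment pain vs pain after the intervention.
t, p = stats.ttest_rel(df["base"], df["post"])

# One-way ANOVA comparing the groups at a given follow-up time.
f, p_anova = stats.f_oneway(*(g["w12"].values for _, g in df.groupby("group")))

print(f"paired t: t={t:.2f}, p={p:.3f}; ANOVA: F={f:.2f}, p={p_anova:.3f}")
```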
The datasets generated and/or analyzed during the current study are not publicly available owing to Chinese laws on privacy protection but are available from the corresponding author on reasonable request.

This project is under the direct supervision and management of the research team of Shanghai University of TCM. The research team and the special case inspector of Shuguang Hospital, the responsible unit of this project, have established a research supervision WeChat group; the supervisor of Shuguang Hospital and the person in charge of each center have established a WeChat working group, and the supervisor can spot check and audit the completion standard of the case report forms of each center at any time. The person in charge of each subcenter shall establish a WeChat group for the case study, and the case observation doctors of each center shall send real-time photos of the collected cases to the person in charge of the subcenter in the center's working group on the same day and upload the completed case report forms in the standard format.

The Data and Safety Monitoring Board (DSMB) will be composed of a physician, a medical statistician, an ethicist, an orthopedic doctor, a radiologist, and a clinical manager and will oversee the study throughout the study period. The board will review the study activities every 3 months, review the safety data and clinical efficacy reports, and determine whether it is clinically safe to continue the trial. It will report its recommendations to the primary investigator from Shanghai University of TCM.

During the trial, adverse reactions and adverse events will be monitored and recorded. If adverse events occur, such as adverse effects on patients' consciousness, sensation, movement, sleep, blood pressure, pulse, heart rate, respiration, or other normal physiological indicators, cervical fracture, skin damage, spinal cord or vertebral artery injury, gastrointestinal discomfort or ulcer, bleeding, or black stool, the adverse reactions shall be recorded in detail on the case observation form, including the time, severity, duration, and treatment measures, and their correlation with the experimental treatment shall be analyzed with comprehensive consideration of complications and combined treatment. In case of adverse reactions, the clinical observation physician can decide whether to stop the trial according to the patient's condition. Patients who stop treatment because of adverse reactions should be followed up, and the results should be recorded in detail.

After careful discussion and modification by the project team, the study plan was registered on the website of the China Registered Clinical Trial Registration Center and was further improved and modified according to the opinions provided by the expert team, with approval number ChiCTR1900021371.
SOP and investigator\u2019s manual for the research scheme of the project are formulated, and full-time researchers of each clinical research center are trained for the project. Each center is required to strictly follow the SOP and investigator\u2019s manual for the research scheme. The ethics of this trial research scheme was reviewed and approved by China registered clinical trial ethics review committee with approval number ChiECRCT20190068. It was modified and implemented according to the expert\u2019s modification opinions.We plan to disseminate the study results through peer-reviewed journal publications and conference presentations. Study findings will also be shared with relevant clinical and scientific groups.The control group is responsible for the distribution of the trial drug by the treatment physician. The patient is only given a 1-week drug quantity when entering the group. The kit is recovered by the observer during the follow-up visit, and the drug application registration form is filled in according to the judgment of the drug application situation. When the amount reaches 80\u2013100% of the required amount of drug, the next course of the drug can be given according to the pain relief situation if the patient needs to continue the treatment. At the end of each course of treatment, each patient needs to recover the kit, and fill in the daily and each course of treatment on the medication registration form.The purpose of this study is to confirm our hypothesis through clinical trials that Shi\u2019s cervical manipulation has more advantages in the efficacy, safety, and satisfaction of acute and subacute neck pain than traditional NSAIDs.To achieve the goal of the study, a multicenter, positive-controlled, randomized clinical trial, comparing traditional analgesic drugs (NSAIDs) for neck pain is used to evaluate and show that Shi\u2019s manipulation is more effective, safe, and satisfactory in the treatment of acute and subacute neck pain.According to the latest survey \u20133, 71.5%Chinese Bone-setting Diagram: \u201cIf the neck is injured, the head on the back cannot be lowered, or the tendon is long and bone is wrong, or the tendon is gathered, or the tendon is strong, use the second manipulation of Xiong Gu Zi\u2019s technique to lift and correct the neck.\u201dTai Chi and bone-setting manipulation are the essence of TCM and have played an indelible role in China\u2019s prevention and treatment of human diseases from thousands of years ago to modern times. In recent years, Tai Chi has been studied by foreign scholars, and a lot of clinical and experimental research has been invested in the application of Tai Chi in the treatment of bone and joint, insomnia, obesity, and other physical and mental diseases , 36. KeyOn this basis, doctors developed many cervical vertebra techniques and formed different schools with different characteristics and positive curative effects in the diagnosis and treatment of neck pain. \u201cShi\u2019s Traumatology,\u201d a valuable cultural heritage passed from generation to generation in Shanghai over a hundred of years, is one of the most distinctive schools of \u201cShanghai culture.\u201d And the cervical vertebra correction technique developed from it, \u201cDislocation of Bone and Malposition of Ligament,\u201d is mature and effective. 
Shi\u2019s cervical correction technology can reduce the recurrence rate, improve the efficiency of the price ratio, and support obvious clinical advantages in the diagnosis and treatment of neck pain \u201310. HoweWith reference to the clinical trial design related to acute and subacute neck pain in this study, a multicenter positive-controlled, randomized clinical trial design is used , 37, 38.Compared with some excellent multicenter clinical projects, this project has some limitations: for example, due to the limitation of research funds, this study only selected three clinical trial centers and small sample size of case observation, and only set up a control group, and only has 1\u00a0year follow-up period. Therefore, this study is only a preliminary exploration of the clinical evaluation scheme of \u201ceffectiveness and safety of Shi\u2019s manipulation in the treatment of acute and subacute neck pain\u201d. In order to further objectively and fully confirm the effectiveness and safety of Shi\u2019s manipulation in the treatment of acute and subacute neck pain in the future, these research deficiencies will be supplemented and improved in our follow-up research.The treatment protocol version number currently in use is version 1.0, which was revised on 15 October 2019. Recruitment began on 30 October 2019, and the approximate date for the completion of recruitment will be 31 December 2020."} +{"text": "Cmax, AUC(0\u2013t), and AUC(0\u2013\u221e) and shorter Tmax (for CK) than those in the RG group. These results suggest that BRG may lead to a higher absorption rate of bioactive ginsenosides. This study provides valuable information on the pharmacokinetics of various bioactive ginsenosides, which is needed to enhance the therapeutic efficacy and pharmacological activity of ginseng.Individual differences in ginsenoside pharmacokinetics following ginseng administration in humans are still unclear. We aimed to investigate the pharmacokinetic properties of various ginsenosides, including Rb1, Rg3, Rg5, Rk1, F2, and compound K (CK), after a single oral administration of red ginseng (RG) and bioconverted red ginseng extract (BRG). This was a randomized, open-label, single-dose, single-sequence crossover study with washout every 1 week, and 14 healthy Korean men were enrolled. All subjects were equally assigned to two groups and given RG or BRG capsules. The pharmacokinetic parameters of ginsenosides were measured from the plasma drug concentration\u2013time curve of individual subjects. Ginsenosides Rg3, Rk1\u2009+\u2009Rg5, F2, and CK in the BRG group showed a higher Panax ginseng C.A. Meyer is a beneficial herb that has been consumed for a long time by people in East Asian countries [ountries , 2. Consountries , 4. Geneountries \u20138. Furthountries , 10 , and CK (which contains one glucose molecule)) is strongly considered to be useful for avoiding individual variations in ginsenoside metabolism and promoting ginsenoside absorption .However, the differences in ginsenoside pharmacokinetics following RG administration in humans are still unclear. Furthermore, the pharmacokinetics of ginsenosides Rk1 and Rg5 in human plasma after oral administration remains unknown. Therefore, in this study, we aimed to elucidate the pharmacokinetic properties of PPD-type ginsenosides after oral administration of RG and bioconverted red ginseng (BRG) in healthy Korean subjects. 
The findings of this study provide valuable insights into the pharmacokinetics of various bioactive ginsenosides, thus contributing to enhancing the knowledge regarding the pharmacological activity of ginseng.Ginsenosides Rb1, Rg3, Rg5, Rk1, F2, and CK were purchased from Ambo Institute . RG and BRG extracts were provided by BTC Corporation . Digoxin was used as an internal standard (IS), and formic acid was used to provide a suitable ionization environment for the analytes; both were obtained from Sigma-Aldrich Corporation . Methanol and acetonitrile (high-performance liquid chromatography (HPLC) grade) were obtained from J.T. Baker . The HIQ I water purification system was used to prepare deionized water for HPLC analysis.\u03b3-cyclodextrin purchased from Wacker Chemie AG as an excipient. Subsequently, concentration and vacuum drying were performed. The prepared samples were sealed and stored at 25\u2009\u00b1\u20095\u00b0C until use in the experiments.Dried Korean red ginseng was purchased from Jungwon Ginseng . The raw dried red ginseng was extracted three times with 30\u2009L of 50% ethanol and once with 30\u2009L of water. The extract was then evaporated and dried using a vacuum dryer to be used as RG. To prepare BRG, RG was enzymatically treated with Sumizyme AC obtained from Shin Nihon Chemicals and mixed with \u03bcm; Sigma-Aldrich, MO, USA) was used for separation. The mobile phase consisted of water (solvent A) and acetonitrile (solvent B). The flow rate of the mobile phase was 1.6\u2009mL/min. The gradient conditions of the mobile phase were as follows: solvent A/solvent B\u2009=\u200980/20, 68/32, 60/40, 55/45, 25/75, 0/100, 0/100, 80/20, and 80/20, with run times of 0\u201310, 10\u201340, 40\u201358, 58\u201370, 70\u201393, 93\u201393.5, 93.5\u201398.5, 98.5\u2013100, and 100\u2013105\u2009min, respectively. The injection volume was 10\u2009\u03bcL. The column temperature was kept constant at 25\u00b0C, and the wavelength of the UV detector was set at 204\u2009nm.The content of Rg1, Rb1, Rg3, Rk1\u2009+\u2009Rg5, F2, and CK in the extracts of RG and BRG was analyzed using HPLC. Approximately 450\u2009mg of RG and BRG samples were weighed and dissolved in 20\u2009mL of methanol. After the samples were filtered through a nylon syringe filter, the ginsenoside content was measured using an Agilent 1200 series HPLC system with a diode-array detector . A Discovery C18 analytical column . Subjects with any significant clinical illness within 30 days before the study and blood donation within 2 weeks before the study were excluded. Subjects with alcohol abuse or those who had used any drug or food containing a large amount of saponins, such as red ginseng and ginseng, within 30 days before the study were also excluded. In addition, all subjects who experienced adverse reactions due to the ingestion of ginseng were excluded.This study was approved by the Institutional Review Board of the Bestian Clinical Trial Center . All procedures of the study were carried out in compliance with the principles of the Declaration of Helsinki and Korean Good Clinical Practice guidelines. All subjects underwent screening examinations and provided written informed consent before enrolment in the study. Fourteen physically healthy Korean men aged between 20 and 45 years were enrolled. Eligibility criteria for selecting subjects included being healthy by medical history and physical examination . The mobile phase comprised water (solvent A) and acetonitrile (solvent B) containing 0.1% formic acid. 
The flow rate of the mobile phase was 0.25\u20130.35\u2009mL/min. The initial condition of 70% solvent A was set to 5% solvent A over 15\u2009min. The content of ginsenosides in human plasma was measured using an API 5500 mass spectrometer capable of multiple reaction monitoring (MRM). The parent-daughter ion pairs monitored were 845.5\u2009\u27f6\u2009799.6 for Rg1, 1153.6\u2009\u27f6\u20091107.5 for Rb1, 829.4\u2009\u27f6\u2009783.5 for Rg3, and F2, 811.4\u2009\u27f6\u2009161.0 for Rg5 and Rk1, 667.4\u2009\u27f6\u2009161.0 for CK, and 825.4\u2009\u27f6\u2009779.3 for digoxin as an IS in the negative ion mode. Nitrogen gas was used in the nebulizer and the collision cell. The ion spray voltage was set to 5500\u2009V.All analyses were performed using an Agilent 1200 series HPLC system (Agilent) with a Luna Phenyl-Hexyl column was added to 90\u2009\u03bcL of blank human plasma. Rg3 was prepared in the concentration range of 0.2\u201340\u2009ng/mL. Standard plasma samples (100\u2009\u03bcL) were added to 200\u2009\u03bcL of methanol containing 50\u2009ng/mL digoxin, an IS. These were mixed for 1\u2009min and then centrifuged at 12,000\u2009rpm for 5\u2009min at 4\u00b0C. Then, 100\u2009\u03bcL of the supernatant was transferred to an LC vial, and 10\u2009\u03bcL was injected into the LC-MS/MS system. Plasma samples stored at \u221270\u00b0C were thawed at room temperature. Samples for LC-MS/MS analysis were pretreated in the same manner as standard plasma samples.Standard plasma samples were prepared such that each ginsenoside was in the concentration range of 0.5\u2013100\u2009ng/mL; 10\u2009y) of the analyte to IS against the spiked concentrations (x) of each analyte. Linearity was assessed by weighted (1/x2) least squares regression analysis. The LLOQ was defined as the lowest concentration on the calibration curve with an S/N ratio of 10, where the precision and accuracy bias were within \u00b120% by five replicate analyses. The intra- and interday precision and accuracy assessments were performed at four concentrations with five replicates on the same day and three consecutive days, respectively. The acceptance criterion recommended by the guideline of precision and the accuracy was \u00b115% of the nominal concentration. The recovery of the seven ginsenosides was determined at three QC levels with three replicates by comparing the peak areas of plasma extracts spiked with analytes before extraction with those of the postextraction spiked samples at the same concentration. The matrix effects were investigated by comparing the peak areas of the analytes dissolved in the pretreated blank plasma with the corresponding concentrations prepared in the reconstitution solution with three replicates. The stability of the analytes in human plasma was assessed by analyzing five replicates of plasma samples at low, medium, and high QC levels under four different conditions. Short-term stability was evaluated after the exposure of QC samples to room temperature for 19\u2009h before processing and analysis. The freeze and thaw stability was tested after three repeated cycles of thawing at room temperature and freezing at approximately \u221270\u00b0C. The long-term stability was assessed after storage at approximately \u221270\u00b0C for 50 days. 
The post-preparative stability was examined after the exposure of processed samples at ambient temperature (stored in an HPLC autosampler) for 34\u2009h.The LC-MS/MS method was validated to determine selectivity, lower limits of quantification (LLOQ), accuracy, precision, recovery, matrix effect, and stability, according to the guideline for validating the bioanalytical method , 23. TheSide effects that might have occurred during the entire study period were monitored based on volunteer reports, questionnaires, and clinical tests. Investigators evaluated all clinical side effects in terms of severity, correlation with the drug administered, duration, and symptoms.Tmax), maximum observed plasma drug concentration (Cmax), area under the plasma drug concentration-time curve from 0\u2009h to the final measured time (AUC(0\u2013t)), area under the plasma drug concentration\u2013time curve from 0\u2009h to infinity (AUC(0\u2013\u221e)), and terminal elimination half-life (t1/2).Plasma concentrations of seven ginsenosides were calculated from calibration curves by calculating the ratio of the peak area of the test substance to the peak area of the IS. The following pharmacokinetic parameters were measured using Phoenix WinNonlin 6.3 : time to reach maximum observed plasma drug concentration . Differences with p-value <0.05 were considered significant.All data are expressed as mean\u2009\u00b1\u2009standard deviation (SD). One-way ANOVA followed by Duncan's multiple comparisons test or Student's Ginsenoside content in RG and BRG was calculated by substituting into the equation of the calibration curve for each compound. The ginsenoside content of each sample is shown in 2, respectively. The demographic characteristics of all subjects are shown in Supplementary In total, 14 healthy Korean men were enrolled, but one subject voluntarily withdrew from participation in the study . The mea\u03b3-glutamyl transpeptidase (\u03b3-GTP) were measured to confirm the basal state of the subjects before administration. Blood samples were collected 24\u2009h after dosing, and the changes in AST, ALT, and \u03b3-GTP levels were confirmed through additional tests. The results of the biochemical tests in both groups are shown in In this study, no serious adverse reactions were observed in any subject before and after ginseng administration. On the day of hospitalization, the levels of aspartate transaminase (AST), alanine aminotransferase (ALT), and y\u2009=\u20090.0814x\u2009+\u20090.0665 for Rb1; y\u2009=\u20090.2863x\u2009+\u20090.0428 for Rg3; y\u2009=\u20090.0148x\u2009+\u20090.0069 for Rg5\u2009+\u2009Rk1; y\u2009=\u20090.0388x\u2009+\u20090.0067 for F2; y\u2009=\u20090.0301x\u2009+\u20090.0216 for CK. The correlation coefficients were 0.9991 for Rb1, 0.9998 for Rg3, 0.9997 for Rg5\u2009+\u2009Rk1, 1.000 for F2, and 0.9996 for CK, which exhibited good linearity. The measured LLOQ were 0.5\u2009ng/mL for Rb1, Rg5\u2009+\u2009Rk1, F2, and CK, respectively, and 0.2\u2009ng/mL for Rg3. These results were proven acceptable in analyzing PK behaviors of the six ginsenosides in RG or BRG , and AUC(0\u2013\u221e) after oral administration of BRG compared to that after administering RG. The significant increase in the AUC and Cmax of ginsenosides indicated that BRG ginsenosides were absorbed better than RG following intragastric administration, even after considering the differences in the composition ratio of ginsenosides between the two extracts. 
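The parameters above (Cmax, Tmax, AUC(0–t), AUC(0–∞), t1/2) were obtained with Phoenix WinNonlin 6.3. As a reference for how these quantities are conventionally defined, here is a minimal non-compartmental sketch using the linear trapezoidal rule and a log-linear fit of the terminal points; the concentration-time profile is invented for illustration.

```python
import numpy as np

def nca(t, c, n_tail=3):
    """Minimal non-compartmental analysis of one plasma profile:
    Cmax/Tmax by inspection, AUC(0-t) by the linear trapezoidal rule,
    lambda_z from a log-linear fit of the last n_tail points, then
    t1/2 = ln(2)/lambda_z and AUC(0-inf) = AUC(0-t) + C_last/lambda_z."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    cmax, tmax = c.max(), t[c.argmax()]
    auc_t = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))
    slope, _ = np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)
    lam_z = -slope
    return {"Cmax": cmax, "Tmax": tmax, "AUC0_t": auc_t,
            "AUC0_inf": auc_t + c[-1] / lam_z,
            "t_half": np.log(2) / lam_z}

# Hypothetical profile: time (h) vs plasma concentration (ng/mL).
print(nca([0, 0.5, 1, 2, 4, 8, 12, 24],
          [0.0, 2.1, 5.6, 8.3, 6.0, 3.1, 1.7, 0.5]))
```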
The content of ginsenoside Rk1\u2009+\u2009Rg5 in BRG was 5.73 times higher than that of RG, and the value of AUC(0\u2013t) of ginsenoside Rk1\u2009+\u2009Rg5 in the BRG group was approximately 7.92 times higher than that of all ginsenosides. Moreover, the BRG group showed a very low CK content but >28.0 times higher AUC(0\u2013t) values than the RG group. Interestingly, in the BRG group, the content of F2 was >4.42 times higher than that of CK ((0\u2013t) of CK was >5.03 times higher than that of F2. Furthermore, the mean Tmax of CK in the BRG group was 4.77\u2009\u00b1\u20093.61\u2009h, which indicates that the absorbed amount of CK in this group was greater than that in the RG group. Furthermore, additional analyses were performed using the Duncan post hoc test to observe the effects of group, timing, and change in subjects within the groups on AUC(0\u2013\u221e) and Cmax. The analysis revealed significant differences in Rg3, Rk1\u2009+\u2009Rg5, F2, and CK in the plasma samples taken at the initial time point (0\u2009h) of the second treatment period took the test extract, and 7 subjects (group 2) took the reference extract; and in the second phase, subjects in group 2 took the test extract, and group 1 took the reference extract. Furthermore, in the results of pharmacokinetic analysis obtained during the first period and the second period, we did not detect Rg3, Rk1\u2009+\u2009Rg5, F2, CK, and Rb1 ((0\u2013t) value of CK was 14.49\u2009\u00b1\u200930.45\u2009ng\u2219h/mL in the RG group, although the content of F2 was lower than the quantitative limit. The mean Tmax of CK in the RG group was 8.15\u2009\u00b1\u200910.21\u2009h, which was approximately 1.71 times lower than that in the BRG group. This indicates that other ginsenosides in the metabolic pathway, including F2, are slowly converted to CK by intestinal microbial flora. In general, the PPD-type ginsenosides Rb1, Rg2, and Rc can be biologically converted to CK, and ginsenosides Rg3, Rk1, and Rg5 can be transformed into PPD by gut microbiota. The pharmacokinetic parameters of PPD from RG in humans have not been elucidated. However, Kim et al. reported that Cmax of CK was 8.35\u2009\u00b1\u20091.97\u2009ng/mL, which was considerably higher than that of ginsenoside Rb1 (3.94\u2009\u00b1\u20091.97\u2009ng/mL), and the half-life of CK was seven times shorter than that of Rb1 [Several studies have shown that there is a large difference in the metabolic activities of gut microbiota among individuals. The AUCt of Rb1 . These rCmax, AUC(0\u2013t), and AUC(0\u2013\u221e) and shorter Tmax(for CK) after BRG administration compared to those after RG administration, suggesting that BRG may lead to a higher absorption rate of bioactive ginsenosides. Thus, consuming various bioactive ginsenosides is essential to enhance the pharmacological activities of ginseng. We showed that the pharmacokinetic properties of ginsenosides, including Rb1, Rg3, Rk1\u2009+\u2009Rg5, F2, and CK, after BRG administration may provide valuable information for future studies investigating the role of ginsenosides in the therapeutic efficacy of RG. This also indicates that the metabolism and absorption of ginsenosides in the body may be affected by the individual intestinal microbial environment. 
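A quick way to read the fold differences quoted above is to normalize exposure by administered content, under the simplifying assumption that exposure scales roughly linearly with the dose of each ginsenoside. For Rk1 + Rg5, this back-of-the-envelope ratio suggests that absorption per unit content was still higher after BRG:

```python
auc_fold = 7.92      # AUC(0-t) fold difference, BRG vs RG (Rk1 + Rg5)
content_fold = 5.73  # fold difference in administered Rk1 + Rg5 content

relative_absorption = auc_fold / content_fold
print(f"content-normalized exposure ratio ~ {relative_absorption:.2f}")  # ~1.38
```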
In conclusion, the conversion of ginsenosides into an easily absorbable form using bioconversion technology may increase the bioavailability of RG.This study revealed that ginsenosides Rg3, Rk1\u2009+\u2009Rg5, F2, and CK showed a higher"} +{"text": "We would like to thank Orlhac and Buvat for theiequally affected by the imaging protocol\u201d [The application of the ComBat method of Johnson to radiorotocol\u201d and other simple interpolation schemes, and that downsampling (harmonisation to a coarser resolution) to a common spatial resolution is superior to upsampling. These have simple and obvious physical explanations. However, interpolation to a common pixel size is also not a fix-all. Most importantly, we showed that, regardless of the method applied, a reproducibility analysis is required to select reproducible and harmonisable features.That said, the message of our paper was that ComBat harmonisation is not a fix-all. Rather, we argued that one should first apply harmonisation steps that directly address physical differences in the acquisition of the images. Fundamental imaging physics dictates that differences such as voxel size, slice thickness, mAs, dose, and kV can profoundly impact the appearance of images. At least some of these factors, for example voxel size or slice thickness, can be readily harmonised through appropriate and direct image processing, such as resampling. In our paper, we demonstrate that 3) per layer. In We have also repeated our analysis layer by layer, as recommended by Orlhac and Buvat , using bOrlhac and Buvat also state that the definition of the design matrix of covariates affects the outcome of Combat . We agreIn summary, we disagree with the statement of Orlhac and Buvat that we \u201cmisused\u201d Combat . First, reproducible features, and used to help interpret and generalise radiomic models developed with these features.Thus, the message of our study remains"} +{"text": "Background: Polycystic kidney disease (PKD) is a genetic disorder affecting millions of people worldwide that is characterized by fluid-filled cysts and leads to end-stage renal disease (ESRD). The hallmarks of PKD are proliferation and dedifferentiation of tubular epithelial cells, cellular processes known to be regulated by Notch signaling. Methods: We found increased Notch3 expression in human PKD and renal cell carcinoma biopsies. To obtain insight into the underlying mechanisms and the functional consequences of this abnormal expression, we developed a transgenic mouse model with conditional overexpression of the intracellular Notch3 (ICN3) domain specifically in renal tubules. We evaluated the alterations in renal function and structure and measured the expression of several genes involved in Notch signaling and the mechanisms of inflammation, proliferation, dedifferentiation, fibrosis, injury, apoptosis and regeneration. Results: After one month of ICN3 overexpression, kidneys were larger with tubules grossly enlarged in diameter, with cell hypertrophy and hyperplasia, exclusively in the outer stripe of the outer medulla. After three months, mice developed numerous cysts in proximal and distal tubules. The cysts had variable sizes and were lined with a single- or multilayered, flattened, cuboid or columnar epithelium. This resulted in epithelial hyperplasia, which was observed as protrusions into the cystic lumen in some of the renal cysts. 
The pre-cystic and cystic epithelium showed increased expression of cytoskeletal filaments and markers of epithelial injury and dedifferentiation. Additionally, the epithelium showed increased proliferation with an aberrant orientation of the mitotic spindle. These phenotypic tubular alterations led to progressive interstitial inflammation and fibrosis. Conclusions: In summary, Notch3 signaling promoted tubular cell proliferation, the alignment of cell division, dedifferentiation and hyperplasia, leading to cystic kidney diseases and pre-neoplastic lesions. Cystic kidney diseases are characterized by fluid-filled cysts, which compress the surrounding renal parenchyma and derogate renal function . They ocThe Notch receptor family consists of four members Notch1\u20134) and is an evolutionarily conserved intercellular signaling pathway involved in numerous biological processes including cell fate determination, cellular differentiation, proliferation, survival and apoptosis and is a.Notch3 is expressed by vascular smooth muscle cells and regulates vascular development and reactivity . We haveIn this study, we examined whether Notch3 is involved in renal epithelial cyst formation.To examine whether Notch3 plays a role in polycystic disease in humans, we analyzed the expression of Notch3 in human biopsies from patients suffering from autosomal dominant polycystic kidney disease (ADPKD) and acquired cystic kidney disease (ACKD). Notch3 was highly upregulated in all patients with ADPKD (n = 5) and ACKD (n = 5). It was mainly expressed by tubular epithelial cells lining the cysts A. Tumor-NOTCH3 mRNA was expressed most strongly in clear cell renal cell carcinomas . In all cases of RCCs, Notch3 was expressed by tubular epithelial cells B. Reanalrcinomas C.We have previously generated transgenic mice overexpressing the Notch3 intracellular domain (N3ICD) in tubular epithelial cells . To examNotch3 and Yfp (yellow fluorescent protein) expression, the latter tagging the N3ICD-overexpressing cassette . In control kidneys, the border between the cortex and medulla was clearly visible. Already one month after doxycycline treatment, tubules, especially in the OSOM, were bigger, hyperplastic and slightly dilated A. Over tCollectively, Notch3 overexpression induced distinct pathological cystic tubular alterations leading to a deterioration of tubular function with decreased ion channel expression, consistent with the development of cystic kidney disease.The water channels aquaporin 1 and 2 showed normal apical localization in the doxycycline-treated mice, suggesting that that Notch3 activation did not influence cell polarity B. AdditiHavcr1/Kim1 mRNA expression increased significantly over time A. In linver time B.Lcn2/NGAL, a widely used marker of tubular stress and injury, showed a similar expression pattern to Havcr1/Kim1, with a small increase at one month being followed by an important upregulation after three months of the proximal tubules significantly decreased from one month or epithelial-to-mesenchymal transition and tumor progression (Twist and Snail) were not significantly altered , indicating that inhibition of Jagged2 or of Notch3 reduced cyst formation ,21. KnocInterestingly, downregulation of Notch signaling in the kidney mesenchyme in mice also caused tubular cysts, with severity corresponding to the degree of inhibition of the Notch pathway . 
GeneticIn mice lacking Notch2 and one allele of Notch1, the mitotic spindle orientation of epithelial cells of pre-cystic proximal tubules was disrupted relative to the basement membrane . This waNotch signaling is required for maintenance of mature renal tubular epithelial cells ,26. We fEmerging evidence has shown that deregulated expression of Notch receptors, ligands and targets is observed in many types of tumors ,27,28. NOur results illustrate that Notch3 activation can be involved in the development and formation of cysts. Therefore, mutations of Nocth3 that lead to a gain of function and thus to a constitutive activation of Notch signaling may cause cystic kidney diseases in humans, whether acquired during kidney aging or congenital with a later expression in life. In order to investigate this possibility, it will be interesting to scan for such mutation patients with chronic kidney disease and numerous bilateral cysts.In conclusion, we found that activation of Notch3 in the renal tubular epithelium leads, over time, to the formation of renal cysts and the concomitant decline in renal function. This pathological process occurs because aberrant Notch3 signaling influences proliferation, the alignment of cell division and dedifferentiation in the proximal and distal tubules, thus favoring the progression of cystic renal diseases and eventually the development of renal cell carcinoma.The generation of N3ICD-overexpressing mice in TECs has been previously described . N3ICD eHalf kidneys from each animal were fixed in 4% formalin solution and embedded in paraffin. Then, 3 \u03bcm sections were stained with Masson\u2019s trichrome and PAS for histological evaluation of N3ICD mice. Tubular dilation and necrosis were evaluated semi-quantitatively, using the following scale: 0, no tubular damage; 1, damage in 1\u201325% of the tubules analyzed; 2, damage in 26\u201350% of the tubules analyzed; 3, damage in 51\u201375% of the tubules analyzed; 4, damage in >76% of the tubules analyzed. Scoring was performed in a masked manner on coded slides by two different investigators. Interstitial fibrosis was determined semi-quantitatively on Sirius red-stained paraffin sections at a magnification of \u00d7200 using computer-based morphometric analysis software . BUN and creatininemia levels were measured with an enzymatic method and expressed in mM and \u03bcM, respectively.Human renal tissue was obtained from nephrectomy specimens with renal cell carcinoma or polycystic kidney disease taken from the archives of the Institute of Pathology RWTH Aachen. Control kidney tissues were taken either from tumor-free tissue from RCC nephrectomies or from patients who underwent nephrectomy due to trauma, both without other known renal diseases and obvious histopathological alterations. All human samples were handled anonymously and analyzed in a retrospective manner, and the study was approved by the local review board (EK244/14 and EK042/17) and in line with the Declaration of Helsinki of 1975, as revised in 2000 and 2013.\u2212\u0394\u0394CT method. Results are expressed as the ratio of the target gene/internal control gene (HPRT). Sequences of primers used in our studies are listed in Total RNA was extracted from renal tissue using TRIzol reagent . RNA quality was checked by control of the optical density at 260 and 280 nm. Total RNA from cells was extracted using Spin Column Total RNA Mini-Preps Super Kit according to the manufacturer\u2019s instructions. 
Contaminating genomic DNA was removed by RNase-free DNAse for 30 min at 37 \u00b0C. cDNA was synthesized from 1 \u00b5g of purified RNA using oligo-dT and superscript II RT (Qiagen) for 1 h 30 min at 37 \u00b0C and 10 min at 70 \u00b0C. qPCR experiments were performed as previously described . AnalysiImmunohistochemistry was performed on 3 \u03bcm-thick paraffin-embedded tissue sections. Tissue was de-paraffinized, and 10 mM citric acid, pH 6, at 95 \u00b0C, was used for antigen retrieval. Sections were permeabilized with 0.1% triton/PBS. Antibodies against Notch3 , F4-80 , MCM2 , pax2 , PCNA , aquaporin 1 (Alpha diagnostic AQ11-A), aquaporin 2 , CD13 and Tamm\u2013Horsfall protein were used. Immunofluorescence for LC-3 was performed on 3 \u03bcm-thick paraffin-embedded tissue in a similar manner. Alexa fluor (Invitrogen) secondary antibody was used for detection. Images were obtained with an OlympusIX83 photonic microscope at \u00d7200 magnification.Apoptosis was evaluated with the DeadEndTM Fluorometric TUNEL System according to the manufacturer\u2019s instructions on 3 \u03bcm paraffin sections. Quantification was performed by expressing the number of TUNEL+ tubules in the outer and inner cortices per total number of tubules in the same areas. TUNEL+ tubules were considered tubules with at least one TUNEL+ epithelial cell."} +{"text": "Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus and Bacillus cereus. Superior antibacterial activities with maximum inhibition values of 80\u201398% were accomplished by QAMCS3 membranes compared with 57\u201372% for AMCS membrane. Minimum inhibition concentration (MIC) results denote that the antibacterial activities were significantly boosted with increasing of polymeric sample concentration from 25 to 250 \u00b5g/mL. Additionally, all membranes unveiled better biocompatibility and respectable biodegradability, suggesting their possible application for advanced wound dressing.Much attention has been paid to chitosan biopolymer for advanced wound dressing owing to its exceptional biological characteristics comprising biodegradability, biocompatibility and respectable antibacterial activity. This study intended to develop a new antibacterial membrane based on quaternized aminochitosan (QAMCS) derivative. Herein, aminochitosan (AMCS) derivative was quaternized by N-(2-Chloroethyl) dimethylamine hydrochloride with different ratios. The pre-fabricated membranes were characterized by several analysis tools. The results indicate that maximum surface potential of +42.2 mV was attained by QAMCS3 membrane compared with +33.6 mV for native AMCS membrane. Moreover, membranes displayed higher surface roughness (1.27 \u00b1 0.24 \u03bcm) and higher water uptake value (237 \u00b1 8%) for QAMCS3 compared with 0.81 \u00b1 0.08 \u03bcm and 165 \u00b1 6% for neat AMCS membranes. Furthermore, the antibacterial activities were evaluated against It has been reported that CS has the ability to induce fibroblast growth, stop bleeding and stimulate the migration of mononuclear and polymorphonuclear cells, and consequently, boost re-epithelization as well as skin regeneration [\u2212 or primary -NH2 groups [Over the last thirty years, immense interest of researchers has been directed to natural biopolymers and the possibility of expanding their application in various medical and pharmaceutical fields ,2. This neration . In addineration . It has neration . Additioneration . Compareneration . Howeverneration . 
To addrneration , crosslineration and Schineration , in addineration and quatneration , have ac2 groups .N,N,N-trimethyl chitosan (TMC) [N,N,N-trimethyl O-(2-hydroxy-3-trimethylammonium propyl) chitosans (TMHTMAPC) [Escherichia coli, Staphylococcus epidermidis and Staphylococcus aureus. The versatility and adaptability of quaternized chitosan derivatives offer a unique opportunity for the development of new antibacterial agents in addition to the preclusion of infectious diseases.The quaternization process can meaningfully overcome the poor solubility of CS at the neutral/high pHs, while conserving its positive charge and thus can widen its possible biomedical applications over the entire pH range accordingly ,35. Theran (TMC) , hydroxyan (TMC) , glycidyan (TMC) and N,N,MHTMAPC) , have beThe current study deals with the continuous development of new antibacterial agents-based chitosan biopolymer. Herein, we aimed to develop new quaternized aminochitosan (QAMCS) membranes as efficient antibacterial membranes for wound dressing applications. Th introduced quaternary ammonium salt as well as the existence of extra amine groups on the AMCS backbone were expected to improve its biological properties, specifically boosting antibacterial activity. The chemical structures, thermal properties and surface morphologies of the fabricated QAMCS membranes were investigated various characterization tools, respectively. In addition, the surface charges, roughness and mechanical properties were explored. The bio-characteristics of the developed membranes, including their antibacterial activities against four kinds of bacteria that usually provoking wound infections as well as their biodegradability and blood-compatibility, were also examined.Aminochitosan was provided by ATNMRI, SRTA-City (Egypt). N-(2-Chloroethyl) dimethylamine hydrochloride (assay 99%), N-methyl-2-pyrrolidinone (assay 98%) and sodium iodide (assay \u2265 99%) were acquired from Sigma-Aldrich Co. . Acetic acid (assay 98%), ethanol (assay 99%), glycerol (assay 99%) and sodium hydroxide (purity 98%) were brought by El-Nasr Co. .Escherichia coli and Pseudomonas aeruginosa) and two Gram-positive bacteria (Staphylococcus aureus and Bacillus cereus) were used for the antibacterial evaluation. Herein, we have selected these types of bacteria as they are considered the most common causative organisms associated with wound infections. The tested bacteria were refreshed before use through inoculating them overnight at 37 \u00b0C under constant shaking rate (150 rpm) in Luria\u2013Bertani (LB) culture medium (pH 7), which is composed of peptone (1%), yeast extract (0.5%) and sodium chloride (1%).Two Gram-negative bacteria (w/v) was dropwise added while the reaction was contained for 3 h. The obtained product was precipitated by ethanol (200 mL), centrifuged, washed using acetone and finally dried under reduced pressure. The quaternization process was conducted according to the authors\u2019 reported study , with a w/v) and distilled water (pH 7.0), respectively, to have final concentration of 2% (w/v). An accurate 20 mL of the prepared derivatives (including 0.5 mL of glycerol as a plasticizer) was poured into a clean Petri dish (diameter = 7 cm) and left for 48 h at 25 \u00b0C to evaporate the solvent. Thereafter, the dried membranes were soaked for approximately 30 s in an aqueous solution of NaOH (1 mol\u00b7L\u22121) for neutralization, followed by washing with deionized water. 
Finally, the wet membranes were rinsed out and fixed to glass plates supported with clamps and allowed to dry at 25 \u00b0C until they reached constant weights.To formulate membranes, AMCS and QAMCS derivatives were dissolved separately under stirring at 25 \u00b0C in acetic acid . TGA analysis was achieved under nitrogen atmosphere (flow rate 40 mL/min), while the temperature was raised gradually from 10 to 800 \u00b0C with a constant heating rate (20 \u00b0C/min). The morphological characteristics were investigated by a scanning electron microscope under a voltage potential of 20 kV. Prior to SEM examination, the examined samples were placed on aluminum stubs and coated with a thin layer of gold via a sputter coating system. Furthermore, the surface charges were estimated by Zeta-Sizer . Surface roughness of all membranes was measured by a surface roughness tester . To examine surface roughness, the membrane samples (4 cm \u00d7 5 cm) were fixed onto a glass slide with double-sided tap, and the obtained data are presented as the mean average of three measurements.The chemical structures of the developed membranes were investigated by Fourier transform infrared spectroscopy . An accurate quantity of tested sample (10 mg) was thoroughly mixed with KBr at 25 \u00b0C, while the absorbance of samples was scanned in the wavenumber range of 500\u20134000 cmWU; %) was estimated according to the following Equation (1) [fW and iW represent the final and initial weights of the tested samples, respectively.Additionally, a universal testing machine was employed to explore the mechanical properties of the tested membranes. The tested sample was placed between the grips of the testing machine at constant grip length (5 cm) and speed rate of testing (12.5 mm/min). Finally, the water uptake profiles of membranes were evaluated by soaking of 0.1 g of the tested sample for 24 h in distilled water at 25 \u00b0C. Subsequently, the swollen samples were gently separated from the swelling medium and placed between two filter papers to eliminate the excess of water adhering to the surface, which was followed by weighing. Water uptake (tion (1) :(1)WU\u00a0%=E. coli, P. aeruginosa, S. aureus and B. cereus were overnight cultured by incubating them in LB broth under a shaking rate of 150 rpm at 37 \u00b0C. Next, bacterial cultures were diluted 100 times using the same LB medium to gain optical densities of 0.9 via measuring the bacterial turbidity at 600 nm. Various concentrations of tested samples were added into sterile 96-well microplates containing 20 \u00b5L of the bacterial culture suspensions. The wells were completed to 200 \u00b5L with LB broth free medium and followed by mixing for 2 min at 100 rpm using a bench-shaker. Finally, the wells were left overnight for aerobic incubation at 37 \u00b0C. Additionally, the negative and positive controls were prepared separately by mixing the diluted bacterial cultures and the examined samples with the free LB medium, respectively. The microtiter plates were shaken for 30 s using a microplate reader, and the turbidity of bacterial cultures was assayed at 600 nm. The test was conducted in triplicate and the inhibition (%) of microbial growth was calculated according to the following Equation (3):aOD and bOD represent the optical density of normal and inhibited microbial growth, respectively.MIC was conducted to inspect the influence of different concentrations of the developed AMCS and its quaternized form on the growth of the studied bacteria. 
MIC experiment was performed according to the reported studies using the microtiter plate method . In brieTo investigate the bactericidal behavior of the developed materials, a microtiter plate approach was employed according to the reported method . Briefly2) was washed by a freshly prepared phosphate buffer solution (pH 7.4) to remove any attached impurities. The tested sample was then placed in a glass test tube comprising an ACD-blood mixture (prepared by mixing 9 mL of fresh human blood with 1 mL of acid citrate dextrose as an anticoagulant), and followed by incubation for 3 h at 37 \u00b0C. Concurrently, a control was performed using a free-sample mixture, while deionized water and phosphate buffer were used as positive and negative controls, respectively. After incubation, the tubes were centrifuged at 2000 rpm for 15 min. The optical densities of supernatants (OD) at 540 nm were assayed by a UV-spectrophotometer, and the hemolysis (%) was estimated based on the following Equation (4):ODs is the optical density of supernatant in the presence of a studied sample. nOD and pOD represent the optical densities of the negative and positive controls, respectively.Compatibility of the formulated AMCS and its quaternized derivative with human blood was examined according to the formerly reported procedure, with a slight modification . Informe\u22121). The tube was incubated for 24 h at 37 \u00b0C. Next, dinitrosalicylic acid reagent was added carefully to stop the activity of lysozyme enzyme. The mixture was boiled for 15 min and subsequently left to cool. The generated color obtained from the reaction between the DNS reagent with the liberated reduced sugars was analyzed via estimation of the optical density (OD) at 570 nm using a visible-spectroscopy.A biodegradability test was performed according to the reported method ,46. An en = 3), and data are presented as means and standard deviations (\u00b1SD).All experiments were accomplished in triplicates , was moved after the quaternization process to 2863\u20132870 cm\u22121. Moreover, the absorption broad bands at 1619\u20131640 cm\u22121 and 1555 cm\u22121 corresponded to the stretching vibration of C=O and N\u2013H of amide-I and amide-II, respectively. It was also noted that the N\u2013H bending (1619 cm\u22121) of the primary amine in the AMCS spectrum was significantly affected after the quaternization process since it moved to a higher wavelength (1640 cm\u22121). These observations agree with those reported by other researchers [\u22121 are associated with amide (III), which involves C\u2013N stretching and N-H of the amide linkage. The appearance of multi-peaks at 890\u20131062 cm\u22121 correspond with C-C, C-O and C-O-C glycosidic bonds. On the other hand, the observed absorption bands around 1016 cm\u22121 in the QAMCS spectra could be attributed to the stretching vibration of C\u2013N in the quaternary ammonium groups. In addition, the peaks at around 1385\u20131393 cm\u22121 are associated with the C-H symmetric bending of the methyl groups in the generated quaternary ammonium groups [The IR spectra of AMCS derivative and its quaternized forms were attained for more details regarding their chemical structures, as shown in l groups . This baearchers ,49. The m groups ,50. More50% \u00b0C) was 391.05 \u00b0C compared with 381.62, 376.69 and 347.54 \u00b0C for QAMCS1, QAMCS2 and QAMCS3, respectively. 
These observations could be attributed to the introduction of quaternary ammonium salts into the AMCS backbone, which evidently affects its thermal stability. In all cases, the results confirm that the developed membranes displayed fair thermal stability in the vicinity of human body temperature, which enhances the possibility of their use in advanced wound dressing.The thermal stabilities of the developed membranes were studied in the temperature range from 25 to 800 \u00b0C, as shown in The surface roughness of the formulated membranes was studied as depicted in 2 groups along the AMCS backbone, which can bind with water molecules. Indeed, the water uptake characteristic is considered to be one of the most important features of membrane-based wound dressings. Therefore, wound dressings with high water uptake can effectively provide a moist-wound environment and facilitate the passage of fibroblasts, keratinocytes and endothelial cells to the damaged wound area. Additionally, it can absorb the surplus wound exudates that prompt the wounds to bacterial infections, improve the hemostasis properties and hasten the healing process accordingly [ordingly . In addi2 were recorded by the QAMCS3 membrane compared with 125.5 \u00b1 2.1 N and 46.5 \u00b1 1.4 N/m2 for the neat AMCS membrane. The decent mechanical properties of these observations were associated with the strong internal attraction forces of membrane compositions, which harvest a rise in membrane rigidity [rigidity . In addiThe developed membranes were tested their compatibilities as a function of the blood hemolysis (%), and data obtained are depicted in E. coli. As expected, a significant enlargement in the inhibition (%) was noticed with the rising of the quaternized agent content from 0.267 M (QAMCS1) to 1.06 M (QAMCS3) in the feed mixture. Further introduction of permanent positively charged quaternary ammonium groups on the AMCS backbone dramatically enriched its antibacterial power from 73% to 86% (QAMCS1) to maximum values of 80% to 98% (QAMCS3). Moreover, the highest inhibition (%) values of 98, 94, 86 and 80% were recorded by QAMCS3 membrane against E. coli, P. aeruginosa, B. cereus and S. aureus, respectively, while native AMCS membrane recorded maximum inhibition values in the range of 57\u201372%. Indeed, bacterial infection causes a lot of complications, including the delays in the wound healing process, and it sometimes leads to death. The antimicrobial activities of chitosan/and its derivatives have received substantial attention.2 groups. Although the mechanism of antibacterial action of chitosan/or its derivative is not yet definitely understood, several mechanisms have been proposed to explore this aspect [2 (in case of AMCS) and the quaternary ammonium groups (in case of QAMCS). These functional groups have an affinity to interact with the negatively charged outer membranes, specifically for Gram-negative bacteria. Consequently, a leakage of the proteinaceous and other intracellular ingredients would occur as a result of the membrane disruption of the bacterial cell [Factually, AMCS has a unique self-antibacterial characteristic owing to its positively charged NHs aspect . The mosial cell ,61. In aial cell . AccordiN,N,N-diethylmethyl CS exhibits greater antibacterial action against E. 
Similar observations have been reported with other quaternized chitosan derivatives, which are consistent with the antimicrobial properties obtained in this study. It has been stated that N,N,N-diethylmethyl CS exhibits greater antibacterial action against E. coli compared with non-quaternized CS. Furthermore, N,N,N-trimethyl O-(2-hydroxy-3-trimethylammonium propyl) chitosan (TMHTMAPC) and N-benzyl chitosan (GTMAC) derivatives showed enhanced activity against Staphylococcus aureus and Staphylococcus epidermidis compared with that obtained by native CS.

This assay is essential to examine the antibacterial potency of the new QAMCS derivatives compared with neat AMCS, as well as to investigate the vulnerability of the bacteria to our developed materials. Indeed, the MIC can be considered a practical indicator of primary activity against pathogenic microorganisms. The inhibition responses increased against all tested bacteria (E. coli, P. aeruginosa, B. cereus and S. aureus) with raising the quantity of the examined sample from 25 to 250 µg/mL. The minimal concentration (25 µg/mL) of all tested polymeric samples demonstrated various inhibition responses: AMCS recorded 12.44% against E. coli, 13.95% against P. aeruginosa, 4.65% against B. cereus and 3.99% against S. aureus. The results also indicate that the inhibition percentage values against both Gram-positive and Gram-negative bacteria increased with increasing quaternization degree, as a result of the increased surface positive charge of the developed membranes. Therefore, the most effective derivative was the QAMCS3 sample, which recorded maximal values of 20.87% against E. coli, 17.5% against P. aeruginosa, 18.54% against B. cereus and 13.82% against S. aureus at the same polymer concentration (25 µg/mL). These findings agree with other studies, which reported that inhibition of bacteria increases with the increase in surface positive charge.

The developed AMCS and QAMCS derivatives were also assessed for their bactericidal activities toward the tested pathogenic bacteria over different time periods. The obtained results indicated that all tested AMCS and QAMCS derivatives offered bactericidal behavior that depended on the kind of examined polymer and the bacterial type. The AMCS sample displayed the lowest activity: its inhibition percentage declined over the first 2 h in the case of E. coli, P. aeruginosa and S. aureus, and the activity was then slowly reduced. On the other hand, in the case of B. cereus, slower activity was observed during the first 3 h, while the activity rate was faster later. The existence of protein channels within the outer membrane of Gram-negative bacteria might hamper the entrance of AMCS and QAMCS residues into the cells. The extra amine groups in AMCS and its quaternized forms carry more positive charges and consequently damage bacterial cells. Thus, quaternization of the amine groups of AMCS by N-(2-chloroethyl)dimethylamine hydrochloride generates permanent positive charges on the polymer chains and consequently promotes the activity of the new QAMCS derivatives against the tested bacteria. Similar observations have been reported, proving that the bactericidal activity of chitosan against Escherichia coli (E. coli ATCC 25925) was multiplied several times after being converted to an N-propyl-N,N-dimethyl chitosan derivative.
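The inhibition percentages above are turbidity-based. Although the text does not spell out the formula, a common definition compares treated and untreated culture optical densities; the sketch below assumes that definition, and the example readings are hypothetical:

```python
def inhibition_percent(od_control, od_treated):
    """Growth inhibition (%) from culture turbidity.

    Assumes the common turbidity-based definition:
    (OD_control - OD_treated) / OD_control * 100.
    """
    return (od_control - od_treated) / od_control * 100.0

# Hypothetical readings for AMCS at 25 ug/mL against E. coli,
# chosen so the result matches the reported 12.44%.
print(round(inhibition_percent(0.900, 0.788), 2))  # 12.44
```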
Indeed, many millions of people who experience injuries suffer from non-healing wounds. Efficient wound healing needs carefully considered treatment, often necessitating numerous clinic and hospital visits, and the ensuing costs of wound dressing materials are therefore considerable. Continuous follow-up is essential to guide proper dressing of infected wounds, and rational usage of antimicrobial materials is required to overcome the complications caused by bacterial wound infections. Since a large amount of crustacean exoskeleton is readily available as a by-product of the seafood processing industry, the raw material for chitosan production is fairly inexpensive, and thereby the manufacture of chitosan on a large scale from this renewable bio-resource is economically feasible. Another important aspect to be considered is that utilizing shellfish waste for chitin production provides a solution to a waste disposal problem and provides an alternative use for this oceanic resource. The FDA has approved chitosan for medical uses such as bandages for wound dressing and drug delivery systems. Moreover, one Norwegian company that fabricates shrimp-derived chitosan announced in 2001 that its purified chitosan product had achieved self-affirmed Generally Recognized as Safe (GRAS) status in the US market. The growing product application in the water treatment, pharmaceutical, biomedical and cosmetics industries is expected to drive market growth.

There are several challenges facing researchers regarding chitosan-based wound dressings. Among these, prospective studies are crucial in order to augment the antibacterial activities as well as to prove the application feasibility of the developed chitosan derivatives. In addition, more efforts are necessary to develop new low-cost modification techniques for the fabrication of chitosan-based wound dressings and to evaluate their efficacy on a large scale for successful field applications. Although quaternized CS derivatives have been considered effective antimicrobial agents, their modes of action need to be studied further in depth. Additionally, the effect of selective substitution, O-substitution versus N-substitution, on the antimicrobial efficiency needs further investigation. Reaching a balance between good antimicrobial activity and low mammalian toxicity on the one hand and the right degree of substitution (DS) on the other remains an open challenge. The possible induction of bacterial resistance by quaternized CS derivatives could also limit their use, so the mechanisms of action as well as extensive in vivo studies should be considered. Finally, more comprehensive economic and market examination studies are required.

Quaternized aminochitosan (QAMCS) membranes were formulated and characterized by FTIR, TGA and SEM characterization tools. The positive charges on the surface of the QAMCS membranes were augmented, reaching a maximum surface potential of +42.2 mV with an increase in the quaternizing agent ratio of up to 1.06 M, which dramatically enriched the antibacterial power from 73–86% (QAMCS1) to maximum values ranging from 80–98% (QAMCS3). Moreover, the highest inhibition percentage values of 98, 94, 86 and 80% were recorded by the QAMCS3 membrane against E. coli, P. aeruginosa, B. cereus and S. aureus, respectively, while the native AMCS membrane recorded maximum inhibition values in the range of 57–72%. The quaternization process greatly boosted the water uptake profiles compared with the native AMCS membrane: the highest water uptake (WU) value was achieved by the QAMCS3 membrane, which reached a maximum of 237 ± 8% compared with 165 ± 6% for the non-quaternized AMCS membrane. A maximum force value of 141 ± 2.6 N and a maximum stress value of 54.6 ± 1.7 N/m2 were recorded by the QAMCS3 membrane compared with 125.5 ± 2.1 N and 46.5 ± 1.4 N/m2 for the neat AMCS membrane. In addition, the membrane roughness increased gradually from 0.94 μm (QAMCS1) to 1.27 μm (QAMCS3) compared with the AMCS membrane.
Testing the blood compatibility of the QAMCS membranes showed that their hemolytic index values were within the safe level, ranging from 1.42 ± 0.75% to 1.68 ± 0.36%. These findings confirm that the developed membranes are biocompatible. Moreover, the antibacterial activities were significantly boosted after the quaternization process, since the highest inhibition (%) values were obtained by the QAMCS3 sample. MIC and bactericidal assays proved the adequate antibacterial activity of the developed membranes against all selected bacteria. Furthermore, the biodegradation profile was slightly improved with increases in the quaternization ratio. Therefore, QAMCS membranes possess the basic requirements to be potentially applied as antibacterial wound dressings for the acceleration of wound healing."} +{"text": "Cardiovascular (CV) disease is the leading cause of mortality in patients with end-stage kidney disease (ESKD). The aim of the present study was to determine whether Proprotein Convertase Subtilisin/Kexin type 9 (PCSK9) could be an independent predictor of CV events and all-cause mortality in black African haemodialysis patients. We carried out a prospective cohort study of all consecutive hemodialysis (HD) patients between August 2016 and July 2020, admitted to six hemodialysis centers of Kinshasa, Democratic Republic of Congo. Independent determinants of plasma PCSK9, measured by ELISA, were sought using multiple linear regression analysis. The Kaplan-Meier method described the incidence of CV events, while competing-risk and proportional-hazards models looked for independent risk factors for death at the 0.05 significance level. Out of 207 HD patients, 91 (43.9%) died and 116 (56.1%) survived. The PCSK9 level was significantly higher in deceased patients compared with survivors: 28.0 (24.0–31.0) ng/mL vs 9.6 (8.6–11.6) ng/mL (p < 0.001). Patients with plasma PCSK9 levels in tertile 3 had a higher incidence of CV events and mortality compared with patients with plasma PCSK9 levels in tertile 2 or tertile 1 (p < 0.001). Tertile 3 negatively influenced survival rates (26.6%) compared with tertile 2 (54.7%) and tertile 1 (85.3%). Patients in tertile 3 and tertile 2 had a 4-fold higher risk of death than patients in tertile 1. After adjustment for all parameters, competing-risk analysis showed that mortality was 2 times higher in patients with stroke; similarly, serum albumin < 3.5 g/dL or PCSK9 in tertile 3 was associated with 2 or 6 times higher rates of death, respectively. Elevated plasma PCSK9 level is an independent major predictor of incident CV events and all-cause mortality in black African HD patients. The online version contains supplementary material available at 10.1186/s12882-022-02748-0. In chronic kidney disease (CKD) patients not on dialysis, cardiovascular mortality is largely explained by high levels of low-density lipoprotein cholesterol (LDL-c).
In this present cohort study, patients receiving hemodialysis (HD) treatment between August 2016 and July 2020 in HD service providers were consecutively enrolled. Inclusion criteria were as follows: being aged at least 16 years with end-stage kidney disease (ESKD), having been on HD for at least 3 months, and receiving three HD sessions a week with each session lasting 4 h. HD patients having experienced CV events before the enrollment in the present study were excluded.

Variables of interest included: age, sex, cause of death, history of diabetes mellitus, hypertension, smoking, alcoholism, physical activity, hemodialysis vintage, and current medications (in particular statins). Physical examination was carried out 15 min before HD and focused on the following parameters: weight (kg), height (cm), blood pressure (mm Hg), waist circumference (cm), and pulse and heart rate (beats/min). CV events were ascertained by the medical team in charge of the patient during the study period and recorded on an ad hoc reporting form. The following biological parameters were recorded: hemoglobin, hematocrit, serum urea, serum creatinine, glycaemia, serum albumin, uric acid, total cholesterol (TC), low-density lipoproteins (LDL-c), high-density lipoproteins (HDL-c), triglycerides (TG), non-HDL-c, calcium, phosphorus, intact parathyroid hormone (PTHi), vitamin D, and PCSK9. Non-biological parameters encompassed the ankle-brachial index (ABI), obtained as the ratio of the systolic blood pressure (SBP) measured at the ankle to that measured at the arm, and the body composition of the study population taken a quarter of an hour before the dialysis session, which included sex, age, height, weight, body fat, muscle mass, and BMI. Body composition was determined using the Omron OM-BF 214 body fat monitor (EAN 4015672107045, 2015) with 4-sensor accuracy technology, a large LCD panel, and 4-user memory with guest mode. Overweight and obesity were defined, respectively, by a BMI ≥ 25 kg/m2 and ≥ 30 kg/m2.

Patients were stratified by plasma PCSK9 level: tertile 1, PCSK9 < 9.58 ng/mL (n = 69); tertile 2, PCSK9 of 9.58–23.0 ng/mL (n = 69); and tertile 3, PCSK9 > 23.0 ng/mL (n = 69). The lipid fractions were assayed according to the enzymatic colorimetric method on the Cobas C 311 automated analyzer (revised version 2010). Isolated dyslipidemia was defined as a total cholesterol level ≥ 200 mg/dL; HDL-c < 50 mg/dL in women and < 40 mg/dL in men; LDL-c ≥ 100 mg/dL; or TG ≥ 150 mg/dL.

The causes of death included […] (20.9%) and unknown (10.9%); the patients who died at home or between home and hospital were considered to have an unknown cause of death.

A total of 207 HD patients were included, and their detailed baseline characteristics are shown in the corresponding table; the PCSK9 level was significantly higher in deceased patients (p < 0.001). Kt/V, serum albumin, total cholesterol, triglycerides and HDL-c were independent determinants of plasma PCSK9 in multiple linear regression [R2 = 0.518]. The comparison of the type and incidence of CV events across tertiles showed that the proportion of dilated cardiomyopathy was significantly higher both in patients at tertile 2 and tertile 3 (p = 0.030); by contrast, the incidence of other CV events was not statistically different across tertiles of PCSK9 level. Ninety-eight CV events (47.3%) occurred. The overall survival was 80.2% at 6 months, 68.1% at 12 months, 59.4% at 24 months, 56% at 36 months and 56% at 60 months. Median survival was 11.0 (10.0–13.0) months.
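To make the tertile stratification and the survival description concrete, here is a minimal sketch. It assumes a data frame with hypothetical columns (pcsk9, time in months, died) and synthetic values; the qcut call reproduces a tertile split, and kaplan_meier is a bare-bones product-limit estimator (it treats tied event times naively), not the study's actual analysis code:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the 207-patient cohort.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pcsk9": rng.lognormal(2.5, 0.8, 207),   # ng/mL
    "time": rng.uniform(1, 48, 207),         # months of follow-up
    "died": rng.binomial(1, 0.44, 207),      # 1 = death observed
})

# Equal-size tertiles; in the study the data-driven cut points were ~9.58 and ~23.0 ng/mL.
df["tertile"] = pd.qcut(df["pcsk9"], 3, labels=[1, 2, 3])

def kaplan_meier(time, event):
    """Product-limit estimate: S(t) = prod over deaths of (1 - 1/n_at_risk)."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    at_risk, surv, curve = len(time), 1.0, []
    for t, e in zip(time, event):
        if e:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        curve.append((t, surv))
    return curve

for label, grp in df.groupby("tertile", observed=True):
    print(label, kaplan_meier(grp["time"], grp["died"])[-1])  # last (time, S) point
```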
The comparison of the survival curves of HD patients according to PCSK9 tertile showed a clear separation: tertile 3 negatively influenced survival (26.6%) compared with tertile 2 (54.7%) and tertile 1 (85.3%). The multivariable analyses are summarized in the corresponding table.

This prospective observational study showed for the first time that the incidence of CV events increases with the plasma PCSK9 level in black African HD patients. We found also that Kt/V, serum albumin, total cholesterol, triglyceride, and HDL-c were independent determinants of PCSK9 in HD patients without statin treatment. These parameters alone accounted for more than half of the fluctuations in PCSK9 levels. Patients with plasma PCSK9 levels at tertile 3 had a high incidence of CV events and mortality, and this risk of CV events and subsequent mortality increased linearly with the PCSK9 level. In addition, high PCSK9 levels negatively influenced the survival of the patients in the study. These observations suggest that PCSK9 may be a predictive biomarker for CV event occurrence in the hemodialysis patient population. While total cholesterol, triglyceride and HDL-c were independent determinants of PCSK9 level in this study, a partially different set of determinants was reported by Hwang HS et al. in the HD population they studied.

HD patients at tertiles 2 and 3 of PCSK9 had a higher risk of death than those in tertile 1. This association remained significant after adjusting for all risk factors for death in the patients included in this cohort. A similar association was observed at the beginning of HD in Korean patients with a history of cardiovascular disease and under lipid-lowering treatment. Yet, patients in the present study were not on statin treatment. Indeed, statins, often prescribed in CKD, increase the circulating level of PCSK9 by activating a binding protein, Sterol Regulatory Element-Binding Protein-2 (SREBP-2).

In addition, PCSK9 is synthesized as a precursor of about 72 kDa which must undergo autocatalytic cleavage between the pro-domain and the catalytic domain within the endoplasmic reticulum of the hepatocyte before being secreted into the bloodstream.

The present study showed a high incidence of CV events and mortality in haemodialysis patients with PCSK9 in tertile 3. The risk of CV events and mortality increased linearly with the PCSK9 level, while the levels of the plasma lipid fractions remained without significant differences between the survivors and the deceased patients. The current finding is consistent with previous studies showing that PCSK9 contributes to the development of CV events independently of traditional CV risk factors.

Strengths of this study include the concomitant measurement of PCSK9 and lipid fractions in HD patients attending multiple HD centers in Kinshasa and the investigation of the implications of their association for CV events as well as for the survival of these patients. Finally, several variables were included in the analysis, including laboratory parameters and HD-session-associated variables.

Limitations are related to the methodological approach. Firstly, measurement of PCSK9 and lipids was only performed once, at the beginning of the study. Secondly, PCSK9 activity would be assessed more accurately if PCSK9 had been measured bound to LDL-c receptors. Thirdly, the type of vascular access, a variable which could have some impact on survival, was not considered in the analysis. Finally, our study is also based on a relatively small sample size.

In this study, elevated plasma PCSK9 levels independently predicted incident CV events and all-cause mortality in black African HD patients.
Future studies are needed to determine the genetic mutations of PCSK9 in the black African hemodialysis patient population.

Additional file 1."} +{"text": "Visual search is a fundamental element of human behavior and is predominantly studied in a laboratory setting using static displays. However, real-life search is often an extended process taking place in dynamic environments. We have designed a dynamic-search task in order to incorporate the temporal dimension into visual search. Using this task, we tested how participants learn and utilize spatiotemporal regularities embedded within the environment to guide performance. Participants searched for eight instances of a target that faded in and out of a display containing similarly transient distractors. In each trial, four of the eight targets appeared in a temporally predictable fashion, with one target appearing in each of four spatially separated quadrants. The other four targets were spatially and temporally unpredictable. Participants' performance was significantly better for spatiotemporally predictable compared to unpredictable targets (Experiments 1–4). The effects were reliable over different patterns of spatiotemporal predictability (Experiment 2) and primarily reflected long-term learning over trials (Experiment 3), although single-trial priming effects also contributed (Experiment 4). Eye-movement recordings (Experiment 1) revealed that spatiotemporal regularities guide attention proactively and dynamically. Taken together, our results show that regularities across both space and time can guide visual search, and this guidance can primarily be attributed to robust long-term representations of these regularities.

Our environment is filled with regularities that can guide our perception and facilitate performance. It is now relatively established that various task properties can influence where or what people attend over short time scales. A clear example comes in the form of "repetition priming," whereby performance is facilitated when target items share features or locations across trials. Learning over longer time scales also contributes to the allocation of attention. For example, in probability-cuing tasks observers search for a target in an environment with high-probability target locations and low-probability target locations, and observers are faster at finding targets that occur in the high-probability locations. Taken together, the facilitation of performance based on short- and long-term representations in visual search has been conceptualized within statistical-learning and selection-history frameworks.

Visual search studies have provided some of the cornerstones of our understanding of how attention is controlled and guided. To date, most studies only investigate search within discrete perceptual events, such as a briefly presented static display or scene. However, search in the real world occurs within an unfolding temporal context; consider, for example, searching for a friend at a crowded train station. Searching for target items among dynamic sets of distractors in unfolding temporal contexts introduces additional challenges in terms of sustaining optimal performance and learning about useful regularities. Regularities between items often occur across temporal intervals that are filled with distraction. To be effective, therefore, spatial or identity-related predictions should also carry temporal information.
Furthermore, extracting regularities about the timing of relevant events occurring among distracting stimuli requires more than learning simple associations about the order or the temporal interval between individual events. Instead, the timing of target items must be abstracted across the entire duration of irrelevant distracting events if it is to aid search based on item identity or location.

Thus, investigating the guidance of attention within temporally extended contexts provides an important next step toward our understanding of the mechanisms contributing to real-life visual search. A few different experimental approaches have started to examine the temporal dimension within visual search. For example, researchers have shown that a regularly occurring sequence of events can attract attention automatically without the need of prior experiences. Other studies have suggested that learned associations about the temporal interval between events, and not merely their temporal order, can also be used to improve search performance, building on the literature on the temporal orienting of attention, or temporal expectation.

Taken together, these studies provide a promising foundation for investigating visual search in temporally extended and noisy contexts. Building on this work, we have developed a new experimental approach to ask whether and how spatiotemporal predictions about the locations and timings of relevant items can be extracted within noisy dynamic contexts to guide attention and improve search performance. Specifically, our aim was to answer two important questions within the field: (a) Does what we have learned about regularities in static visual search displays apply to dynamic displays? (b) Can we learn and use regularities regarding the temporal onsets of stimuli appearing in a context of other, temporally unpredictable distractors? Such a finding would require the ability to learn temporal markers for stimuli that cannot be reduced to simple temporal order or derived by learning the intervals between individual successive events.

In our "dynamic visual search" task, targets and distractors unfold over time at various spatial locations, allowing us to test whether learned regularities continue to guide performance in the face of intervening distracting events distributed in time and space. Benefits in performance would require abstraction of the predictable locations and timings of relevant stimuli embedded within the context of other dynamically unfolding, but irrelevant, stimuli. This approach introduces an important step change from previous studies that have considered the effects of simple associations about the serial order or timing between successive events: within an extended context with unpredictable intervening distractors, learning simple associations between successive stimuli would be insufficient for exploiting the spatial and temporal regularities in our stimuli.

Our experimental approach consists of a visual-search task in which targets and distractors appear and disappear within a noisy background over the course of several seconds. Observers are asked to find multiple instances of a target (a vertical line) and to ignore distractors (tilted lines). On each trial, half of the vertical targets appear at predictable times and locations, while the other half of the targets appear completely unpredictably.
Across four experiments, we show superior behavioral performance for identifying the targets that appear at consistent locations and times during these dynamic trials (Experiments 1–4). We show that the effects are robust across different arrangements of spatiotemporal regularities (Experiment 2). They are not dependent on the immediate repetition of spatiotemporal predictions between trials (Experiment 3), though they did benefit modestly from repeating patterns between successive trials (Experiment 4). We believe our results provide strong and reliable evidence for the ability to utilize spatiotemporal regularities to guide search in extended dynamic contexts, such as those present in our daily interactions. While far from providing a definitive account of how dynamic search happens, our experimental approach provides a flexible and promising experimental platform that can be used to investigate how regularities along multiple stimulus dimensions contribute and interact to prioritize, anticipate, and select relevant items and overcome distraction to guide adaptive behavior.

We tested 25 participants (mean age = 23.4 years; 11 females). All participants had normal or corrected-to-normal vision, provided written consent, and were compensated at a rate of £10 per hour. Eye movements were recorded with the EyeLink 1000 Plus desktop mount at 1,000 Hz. A 9-point calibration was used with an error threshold of .5° visual angle. Observers who did not meet this calibration threshold were not included in the final eye-tracking analysis, leaving 20 participants for this analysis. Drift correction was applied between blocks. The experimental script was generated using the Psychophysics Toolbox on MATLAB.

The search display consisted of a 2 × 2 array of four different 1/F static noise patches that were generated for each trial. Each quadrant extended approximately 8.3° × 5.8°. Forty distractor stimuli and eight target stimuli appeared and disappeared at different times over the course of approximately 14-second trials. These bars did not move from their locations, but rather faded slowly in and out of view. For all stimuli, the fade-in time was 2 seconds; the stimulus then remained at maximum contrast for another .8 seconds and faded out over 2 seconds, for a total stimulus duration of 4.8 seconds. For purposes of analysis, onset times were defined as the first moment at which stimuli started fading in; this differs from the time at which the stimulus becomes subjectively detectable to the participant, over which we had no control. Each stimulus was ∼.5° in length and ∼.08° in width and could appear anywhere within the boundaries of one of the four quadrants as long as it did not overlap with another stimulus. Because they were presented on 1/F noise, the visibility of items varied unpredictably; however, any difference in visibility was unrelated to the status of the item as a predictable or unpredictable target or distractor.

All experimental procedures were reviewed and approved by the Central University Research Ethics Committee of the University of Oxford. Observers were instructed to find and click on eight small gray vertical lines which appeared and disappeared over the course of a trial. Of the eight targets, four appeared at predictable times and quadrants while the other four were spatially and temporally unpredictable. Materials are available at https://osf.io/py2w4/.
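The trapezoidal contrast profile just described (2-s fade-in, .8-s hold, 2-s fade-out) can be written as a simple function of time since an item's nominal onset. This is our sketch of the Experiment 1 timing; the function and parameter names are ours:

```python
def stimulus_contrast(t, ramp=2.0, hold=0.8):
    """Contrast (0..1) of a stimulus t seconds after its nominal onset.

    Linear fade-in over `ramp` s, full contrast for `hold` s, linear
    fade-out over `ramp` s (total 4.8 s here). Zero outside that window.
    """
    if t < 0:
        return 0.0
    if t < ramp:                 # fading in
        return t / ramp
    if t < ramp + hold:          # at maximum contrast
        return 1.0
    if t < 2 * ramp + hold:      # fading out
        return 1.0 - (t - ramp - hold) / ramp
    return 0.0

print([round(stimulus_contrast(x), 2) for x in (0.5, 2.5, 4.0)])  # [0.25, 1.0, 0.4]
```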
Forty-four other items were presented during each trial. We uniformly distributed the onsets of these items across the time in the trial by pseudorandomly choosing a start time at which each item began fading in. This distribution was constrained to present 11 of the 44 items within each successive 2.5-second window. Once the 44 items were assigned a particular starting time, we chose four of those items to serve as the unpredictable targets. This assignment was not contingent on the temporal distance between the unpredictable targets or the distance to the predictable targets; the exact temporal relations between distractors and unpredictable and predictable targets were therefore not predetermined or constrained. All stimuli had the same temporal profile, and a trial ended when the last stimulus had completely faded out of the display. The spatial locations of the distractors and unpredictable targets were determined randomly: for each item we randomly chose a quadrant (1–4) and then, within that quadrant, randomly chose a spatial location, with the only constraints being that the items should not overlap or cross the quadrant border. Note that this means that the number of unpredictable targets in each quadrant was unpredictable on any given trial. The task is depicted in the corresponding figure.

Observers completed four blocks of 40 trials. Trials were terminated when the last stimulus had fully disappeared from the display; this time varied slightly across trials, as the onsets of unpredictable targets and distractors were chosen randomly. Each trial contained eight targets (4 predictable and 4 unpredictable). Overall, there were 8 × 4 × 40 = 1,280 target events per participant. After each trial, observers received feedback in the form of a number between 0 and 8 indicating how many targets they had found. They were able to proceed at their own pace. The experiment lasted approximately 45 minutes.

Data from observers whose performance was more than 4 SD from the mean performance of all subjects were discarded. For the remaining observers, responses with RTs above or below 3 SD of the mean were discarded. This resulted in an average loss of fewer than .05% of trials across all four experiments. Behavioral data were analyzed using R.

For each experiment there were two dependent variables of interest: accuracy and reaction time (RT). Accuracy was defined as the percentage of total targets found. RT was defined as the time between the target onset and the time an observer clicked on a target. These RTs are more complicated than those from a static visual search task: RT will depend on visibility, and the visibility of a specific target will depend on many factors, including the random noise background beneath and surrounding the target and where an observer happened to be fixating when the target began to appear. In addition, cursor position at the time of target detection will also contribute to the overall RT. While these factors introduce noise into the RT measure, they do not differ systematically between the conditions. As such, the extra noise within this measure does not compromise any comparisons of performance between predictable and unpredictable targets. That is, if predictable targets systematically attract attention before unpredictable targets, this will be clear from the RTs.
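The constrained onset assignment described above can be sketched as follows, assuming the Experiment 1 parameters (44 items, four successive 2.5-s windows, 11 items per window); the function and variable names are ours:

```python
import random

def assign_onsets(n_items=44, n_windows=4, window=2.5, seed=None):
    """Pseudorandom fade-in onsets, constrained so each successive
    `window`-second bin receives n_items / n_windows items (11 of 44 here)."""
    rng = random.Random(seed)
    per_bin = n_items // n_windows
    onsets = [b * window + rng.uniform(0.0, window)
              for b in range(n_windows) for _ in range(per_bin)]
    rng.shuffle(onsets)
    return onsets

onsets = assign_onsets(seed=1)
# Four of the 44 items are then simply relabeled as unpredictable targets,
# with no constraint on their temporal relations to other items.
unpredictable_target_onsets = onsets[:4]
```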
Generalized linear mixed-effects models (GLMMs) with a binomial distribution were used to analyze accuracy as a binary response, and linear mixed-effects models (LMMs) were used to analyze the reaction times for correct trials in all experiments. Each row of the model contained a single target event. With eight targets per trial, 40 trials per block, and four blocks per participant, there were 1,280 target events per participant. Because the RT analysis was restricted to correct trials, there were fewer trial events in the LMMs. These analyses were run using the lme4 package. The p-values for the binary accuracy variable are based on asymptotic Wald tests. For the LMMs, we report β with the t-statistic and apply a two-tailed criterion corresponding to a 5% error criterion for significance; these p-values were calculated with Satterthwaite's degrees-of-freedom method using the lmerTest package. The fixed effects were predictability and, within the levels of predictability, target order. Target order was centered and entered the model as a continuous predictor. The critical comparison between predictable and unpredictable targets was modeled using sum contrasts (with predictable targets coded as 1 and unpredictable targets coded as −1); as such, the grand mean of the dependent measure served as the intercept. For binary responses such as accuracy in the GLMM approach, the coefficients are represented in logits. We began each model with a maximal random-effects structure.

Of the 25 participants, 20 had usable eye-tracking data. The raw eye-position samples for these 20 participants were first converted to a data matrix using a MATLAB script. The raw data matrix contained the X-Y coordinates of gaze position (in pixel units) throughout the task. Data were then recoded into a single vector of spatial quadrant (1–4). This conversion from screen coordinates to quadrants was important because the spatial predictions formed in this experiment were at the level of the quadrant rather than at the specific position of a target. Comparisons of the gaze time courses were deemed significant when exceeding a critical t-value (p < .05) corrected for multiple comparisons based on the "t-max" method.

In the accuracy analysis, predictable targets were found more often than unpredictable targets. Additionally, we found a main effect of target order, such that observers were better at detecting earlier targets. However, there was no significant interaction between target predictability and target order, indicating the effect of predictability was present throughout the course of the trial. The individual participants' results in each of the four experiments are plotted in Figure S1 in the online supplementary materials.

Participants were also significantly faster at finding predictable targets compared with unpredictable targets, as well as significantly faster at finding targets appearing early in the trial compared with late targets. These two factors showed a small yet significant interaction; however, post hoc comparisons showed significant differences between predictable and unpredictable targets at each point. Again, individual participants' effects are included in Figure S1 in the online supplementary materials. In sum, Experiment 1 shows that observers were more accurate and faster at finding spatiotemporally predictable compared with unpredictable targets, and this was true throughout the trial.
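The "t-max" correction mentioned above can be sketched as a sign-flip permutation test over the gaze time course. This is our illustration of the general max-statistic idea applied to paired condition differences, not the authors' actual analysis code:

```python
import numpy as np

def tmax_permutation(diff, n_perm=2000, alpha=0.05, seed=0):
    """Max-statistic (t-max) corrected threshold for a time course.

    diff: array of shape (n_subjects, n_timepoints) holding per-subject
    condition differences (e.g., proportion of gaze in the predicted
    quadrant, predictable minus unpredictable). Subjects' signs are
    flipped to build the null distribution of the maximum |t|.
    """
    rng = np.random.default_rng(seed)
    n = diff.shape[0]

    def tvals(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    observed = tvals(diff)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))
        null_max[i] = np.abs(tvals(diff * flips)).max()
    threshold = np.quantile(null_max, 1.0 - alpha)
    return observed, threshold, np.abs(observed) > threshold
```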
Experiment 2 was aimed at reducing effects in Experiment 1 that could be attributed to the strictly even distribution of predictable targets. We varied the timing of predictable targets across participants, such that the timing of onsets was consistent for any given participant, but those timings were not necessarily evenly distributed across the course of a trial. If the effects of target predictability are still present, then it is clear that these results cannot be solely attributed to a rhythmic pattern of attentional allocation.

Twenty-five participants took part in Experiment 2. One participant was discarded for low performance, with an average accuracy more than 2 SD below the mean. The remaining 24 participants were between 18 and 30 years old, with an average age of 22.83; the sample contained 20 females. All participants had normal or corrected-to-normal vision, provided written consent, and were compensated at a rate of £10 per hour.

Participants completed the experiment in a group testing room with a capacity of 20 people, although no more than 12 were tested at once. Participants each sat approximately 60 cm from the monitor. The experimental script was again generated using the Psychophysics Toolbox on MATLAB.

In Experiments 2–4 the timings of the stimuli were changed slightly such that the speed of the fading was increased. Specifically, each stimulus faded in over 1.3 seconds, then stayed on the screen for another 1.3 seconds and faded out over 1.3 seconds. The search display again consisted of four unique 1/F static noise patches that were generated for each trial. Each quadrant extended approximately 12.6° × 8.84°. Each stimulus was ∼.75° in length and ∼.13° in width and could appear anywhere within the boundaries of one of the four quadrants as long as it did not overlap with another stimulus.

The procedure was nearly the same as in Experiment 1, but a few changes were introduced. In Experiment 2, six (instead of four) sequential time windows were used to determine the onset time for each stimulus and to ensure the onsets were evenly distributed throughout the trial. To determine the onsets of the predictable targets, four of the six time bins were randomly selected for each observer and the temporal midpoints of these bins were used. Thus, the appearance of targets was predictable but did not follow regular intervals. The quadrants and specific locations of the unpredictable targets were chosen randomly; unpredictable targets could appear at any moment throughout the trial, and their onsets were not constrained to occur during the time bins used for the four predictable targets. The rest of the experimental procedures were the same as in Experiment 1. The experimental task is depicted in the corresponding figure.

We followed the same analysis procedure as in Experiment 1. In Experiment 2, the random-effects structure of the GLMM contained the participant intercepts as well as by-participant slopes for predictability and target order; the full model was also optimal for the LMM.

Results are summarized in the corresponding figure. As in Experiment 1, there was a main effect of predictability on accuracy, with predictable targets being found significantly more often than unpredictable targets (SE = .02, z = 8.59, p < .001). We once again found a main effect of target order on accuracy, with higher accuracy for early targets, and the interaction was again not significant. In a replication of Experiment 1, an analysis of RTs revealed main effects of predictability as well as target order; more specifically, observers were faster for predictable targets and for targets appearing earlier.

Experiment 3 then examined the contribution of single-trial priming: if performance benefits are fully reliant on single-trial priming, then these benefits should disappear on trials immediately following a random trial.
All participants had normal or corrected-to-normal vision, provided written consent, and were compensated at a rate of \u00a310 per hour. The group of participants in this experiment also participated in Experiment 4 (see below) within a single session.Twenty-seven observers were tested. One participant had an average accuracy more than 2 Participants completed the experiment in a group testing room with a capacity of 20 people, although no more than 12 were tested at once. Participants each sat approximately 50 cm from the monitor . The experimental script was again generated using the Psychophysics Toolbox on MATLAThe stimuli were the same as in Experiment 2.The group that participated in this experiment also completed a second experiment , and the order of task administration was counterbalanced.Sixty percent of the trials were standard trials as described in Experiment 1. However, in 40% of the trials all targets were completely unpredictable. That is, in these trials, the four predictable targets appeared at an unpredictable time and quadrant. Trial order was arranged such that random trials were always followed by standard trials. This constraint enabled us to test whether the benefit of the spatiotemporal regularities was completely dependent on intertrial priming . The experimental task is depicted in In line with our experimental manipulation, we included predictability, previous trial type, and their interaction as parameters in the model. By necessity, the first trial in each block and the completely random trials were not included in the analysis. The random-effects structure for the GLMM contained the participants\u2019 intercepts as well as by-participant slopes for predictability and target order. This model was also optimal for the LMM. Significant interactions between predictability and trial type were broken down by defining difference contrasts to model the two critical comparisons .Results are summarized in SE = .02, z = 8.252, p < .001) and a significant effect of target order . Observers were more accurate for predictable and early targets. We once again did not find a significant interaction between predictability and target order . Moreover, we found no effect of the previous trial type , and the previous trial type did not interact significantly with predictability , indicating that there was no significant diminution in the predictability effect immediately following a fully random trial, in which all targets were unpredictable.We found a significant effect of predictability on accuracy and faster responses for early targets , yet no interaction . There was also no effect of the previous trial type when considering RTs , and the previous trial type did not interact significantly with the predictability effect .The equivalent analysis for RTs again showed faster responses for predictable targets appeared at the same time and quadrant as in the previous trial. If observers can benefit from a single repetition of spatiotemporal target information, we should find benefits for the normally unpredictable targets on repeat trials versus standard trials . The experimental script was again generated using the Psychophysics Toolbox on MATLAThe stimuli were the same as in Experiments 2 and 3.We tested the participants in the same room and during the same session as in Experiment 3. The order of the two tasks (3 and 4) was counterbalanced.In this experiment only 60% of the trials were standard \u201cnonrepeat trials\u201d (as described in Experiment 1). 
The remaining 40% of the trials were \u201crepeat trials,\u201d in which all of the target timings and quadrants from the previous trial repeated. That is, all eight targets had the same spatiotemporal dynamics as the previous trial. Although this was always the case for predictable targets, in repeat trials, what had been the four unpredictable targets on the previous trial now repeated their timing and quadrant from that trial. As such, performance on these one-time repetition targets would reveal if a single repetition of spatiotemporal information was enough to trigger a behavioral benefit. It is important to point out that although the repeated trials maintained the exact timing and quadrants from the previous trial, the exact location within the quadrant was random. Trial order was arranged such that repeat trials were always followed by standard trials. This ensured any effects were attributable to priming across single trials only. The experimental task outline is depicted in In Experiment 4, trial type and its interaction with predictability were included as predictors in the model. The random-effects structure for the GLMM contained the participants\u2019 intercepts as well as by-participant slopes for predictability and target order. This model was also optimal for the LMM.Results are shown in SE = .01, z = 10.96, p < .001) and a significant effect of target order, with early targets being detected more frequently . There was no interaction between predictability and target order . Although, we did not find a significant effect of the trial repetition , this factor interacted significantly with predictability . Planned comparisons revealed that the \u201cunpredictable\u201d targets were found significantly more often in the repeat trials compared to the nonrepeat, standard trials . Additionally, early targets were found faster . A significant interaction indicated a steeper slope of improvement in reaction times for predictable, compared to unpredictable targets\u2014although, numerically, predictable targets were found faster than unpredictable targets throughout the trial. We found no effect of trial repetition on RTs , and trial repetition did not interact significantly with the predictability effect .We found a main effect of predictability such that predictable targets were found faster than unpredictable targets and that a longer-term memory must be involved in order to maintain the regularities over several trials without any deficits. Even so, in Experiment 4, we found that single-trial priming effects do contribute to the overall effect since a single repeated trial was enough to trigger benefits in accuracy. Most likely, STM contributes to the learning of regularities, which, once learned, are held in a more robust longer-term store.Models of visual search have evolved in their consideration of how attention may be allocated during search. Treisman\u2019s Feature Integration Theory was one In the current task, each trial spanned several seconds and required multiple responses, allowing us to monitor the guidance of spatial selection over time. In designing the task, special consideration was given to the dynamics of the displays to minimize exogenous factors. Namely, targets appeared and disappeared slowly from the display, such that attention was not captured by the sudden onset of any event, but rather revealed the guidance by top-down predictive signals that changed dynamically over time. 
Nevertheless, interestingly, we consistently found a main effect of target order on our performance metrics. Performance was lowest when searching for the third target and improved again at the end. This U-shaped curve may reflect a combination of multiple factors linked to the dynamic and extended nature of the task. For example, performance may have benefited from slightly fewer competing distractors during the final time window, in which no new distractors could fade in. Manipulating the frequency, timing and spatial distribution of distractors within the dynamic displays should prove an interesting avenue for future experimental research into how spatiotemporal predictions help overcome competition. In addition, the extended nature of the trial may also reveal natural intrinsic fluctuations in attention or arousal over time.

In the current work we have shown that spatiotemporal information about a target can be used to guide behavior. This was reflected not only in performance measures such as accuracy and reaction times, but also in eye movements being proactively guided to predictable compared to unpredictable targets. The necessary and sufficient conditions for learning and utilizing these spatiotemporal regularities are not fully addressed by our current experimental design. For example, in future studies we intend to probe whether the effects are dependent on responding overtly to targets (action). Attention and action are tightly coupled, and it remains possible that overt responses help cement the learned regularities.

Moreover, whereas we demonstrated clear and consistent effects on accuracy and response times, we did not explore the full gamut of potential performance benefits. It remains unclear, for example, whether spatiotemporal predictions can shift observers' criterion in dynamic visual search. We relied on targets that were readily distinguished from the distractors. We do not expect that observers would have often misreported distractors, but we did not record these possible errors. One could imagine that the increase in performance related to the predictable targets is in part related to dynamic shifts in criterion over the course of the trial. If this were the case, in addition to the higher accuracy in finding predictable targets, we would also expect more false alarms at the moments near the expected target onsets. It is a limitation of the current work that we cannot speak to shifts in criterion directly, but future studies can examine these systematically, including by varying target–distractor similarity.
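If false alarms were recorded, the criterion shift discussed here could be quantified with standard signal-detection measures; a minimal sketch (the hit and false-alarm rates below are hypothetical):

```python
from statistics import NormalDist

def criterion(hit_rate, fa_rate):
    """Signal-detection criterion c = -(z(H) + z(FA)) / 2.

    Values near 0 indicate a neutral criterion; positive values indicate
    conservative responding, negative values more liberal responding.
    """
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2.0

# Same hit rate, but more false alarms near expected target onsets would
# show up as a more liberal (lower) criterion at those moments.
print(round(criterion(0.80, 0.10), 2))  # 0.22  (conservative)
print(round(criterion(0.80, 0.30), 2))  # -0.16 (more liberal)
```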
Relatedly, the single-repetition benefit in Experiment 4 emerged in accuracy; no benefit occurred for reaction times, and we believe this may point to an interesting functional dissociation between accuracy and speed, which should be examined in future studies.

Compared with many studies considering how memory guides spatial attention in static displays, there has been far less work on memory guidance in dynamic contexts. We demonstrated that spatial and temporal guidance of attention can work together to benefit performance within extended dynamic contexts. Our results extend findings showing strong interactions between spatially and temporally informative cues in guiding attention by showing that such guidance also operates within continuously unfolding search displays.

In the current work, we have introduced a new perspective for considering spatial attention in visual search by including time as an informative dimension. Through this manipulation, we have found that the spatial distribution of attention is allocated flexibly on the basis of temporal predictions. Our task can be extended in several directions to explore whether additional sources of guidance also evolve with time. For instance, it may be interesting to consider whether various features can be prioritized dynamically; for example, whether knowing that a colored target is likely to emerge at a predictable time, without knowing where, can also benefit performance. Additionally, the new experimental framework can be used to characterize the precision of temporal and spatial predictions, independently, by presenting targets across a range of moments in time or locations in space in order to manipulate degrees of temporal and spatial competition.

In designing our stimulus arrays, we made some particular choices that may have contributed to the pattern and strength of our effects. Our stimulus displays contained four spatially distinct quadrants. The distinctiveness of the quadrants may have made it easier for observers to associate time and space. In past work it has been shown that observers are able to learn regularities that exist on a quadrant level even when the quadrants were not visually obvious from the display. This suggests that visually demarcated quadrants may not be strictly necessary, although they may have facilitated learning here.

We demonstrate that spatiotemporal regularities guide attention to the right place at the right time in complex visual search tasks. Memories from multiple timescales can support attentional guidance in these more naturalistic settings. Our simple, yet powerful, experimental framework promises to further the investigation into the dynamic factors guiding attention. Moving forward, time should be considered not only as a crucial dimension for understanding natural behavior, but also as a powerful axis over which predictions may be formed.

10.1037/xge0000901.supp"} +{"text": "Most hospitals in Saudi Arabia are under-practicing AUC-guided vancomycin dosing and monitoring. No previous work has been conducted to evaluate such practice in the whole kingdom. The current study objective is to calculate the AUC0–24 using the Bayesian dosing software (PrecisePK), identify the probability of patients who receive the optimum dose of vancomycin, and evaluate the accuracy and precision of the Bayesian platform. This retrospective study was conducted at King Abdulaziz Medical City, Jeddah. All adult patients treated with vancomycin were included. Pediatric patients, critically ill patients requiring ICU admission, patients with acute renal failure or undergoing dialysis, and febrile neutropenic patients were excluded. The AUC0–24 was predicted using the PrecisePK platform based on the Bayesian principle. The two-compartment model by Rodvold et al.
implemented in this platform, together with the patients' dose data, was utilized to calculate the AUC0–24 and trough level. Among the 342 patients included in the present study, the mean estimated vancomycin AUC0–24 by the posterior model of PrecisePK was 573 ± 199.6 mg·h/L, and the model had a bias of 16.8%, whereas the precision was 2.85 mg/L. The target AUC0–24 (400 to 600 mg·h/L) and measured trough (10 to 20 mg/L) were documented in 127 (37.1%) and 185 (54%) patients, respectively. Furthermore, the results demonstrated an increase in the odds of AUC0–24 > 600 mg·h/L in the trough level 15–20 mg/L group compared with the trough level 10–14.9 mg/L group. In conclusion, the discordance between the AUC0–24 and the measured trough concentration may jeopardize patient safety, and implementation of the Bayesian approach as a workable alternative to the traditional trough method should be considered.

Vancomycin has been used as first-line therapy against methicillin-resistant Staphylococcus aureus (MRSA) infection. The 2020 guidelines identify the ratio of the AUC0–24 to the MIC (AUC0–24/MIC) as the most accurate way to track the vancomycin pharmacokinetic/pharmacodynamic (PK/PD) target. However, Cmin is not an accurate surrogate, as it underestimates the vancomycin exposure by 25%, notably in S. aureus bacteremia (SAB). Thus, AUC0–24/MIC-guided monitoring will play a crucial role in providing a rapid and accurate AUC0–24/MIC prediction and reducing nephrotoxicity compared with Cmin-guided monitoring. The desired clinical outcome is associated with an adequate AUC0–24, while a high risk of nephrotoxicity is associated with an elevated AUC0–24.

The 2020 guidelines recommend targeting a vancomycin AUC0–24 of 400 to 600 mg·h/L, on the assumption that the MIC of the MRSA is 1 mg/L, with minimum blood sampling to improve the vancomycin efficacy. The computerized Bayesian forecasting platform can be utilized to monitor vancomycin dosing. The Bayesian method uses the subject information to integrate the population PK model and specifically estimates the individual PK parameters to calculate the patient's AUC0–24 with minimum concentration data; therefore, the flexibility of sample collection and the likelihood of achieving the therapeutic target increase, while the patient burden and drug toxicity are minimized.

To accurately estimate the AUC0–24 using the Bayesian method, it is preferable to obtain two PK samples; however, relying on a population PK model integrated from large, richly sampled datasets can make a Cmin-based estimate sufficient to generate an accurate AUC0–24.
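The study itself relies on the two-compartment Rodvold model inside PrecisePK, which cannot be reproduced here. As a much cruder orientation to how a dose maps to exposure, the one-compartment steady-state relation AUC0–24 = total daily dose / clearance can be sketched. The Cockcroft-Gault formula is the one named in the Methods below; the scaling of CrCl to vancomycin clearance (0.75) is a textbook approximation we assume for illustration, not a value from this study:

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def auc24_steady_state(daily_dose_mg, clearance_l_per_h):
    """At steady state, AUC0-24 (mg*h/L) = total daily dose / clearance."""
    return daily_dose_mg / clearance_l_per_h

crcl = cockcroft_gault(55, 70, 1.0, female=False)   # ~82.6 mL/min
cl_vanco = crcl * 60.0 / 1000.0 * 0.75              # assumed CL ~ 75% of CrCl, in L/h
print(round(auc24_steady_state(2000, cl_vanco)))    # ~538 mg*h/L for 1 g q12h
```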
To our knowledge, no prior studies have been conducted to evaluate such practice in the whole kingdom. The current study objective was to calculate the AUC0–24 from a single-point concentration (the trough) using the Bayesian dosing software (PrecisePK); the accuracy and precision of this software in determining the trough level were also estimated. We also sought to identify common factors that increase the probability of being in the target AUC0–24 (400–600 mg·h/L) group compared with abnormal AUC0–24, for which targeted therapeutic drug monitoring (TDM) efforts could be implemented.

This study was carried out at King Abdulaziz Medical City, Jeddah (KAMC-J), in the inpatient setting from 1 January 2019 to 31 December 2019. This is a retrospective single-center cohort study conducted at KAMC-J, an 800-bed tertiary hospital located in Jeddah, KSA. The KAIMRC institutional review board (IRB) approved this study (approval number NRJ21J/241/10). All demographic data, including age, weight, height, serum creatinine, type of treatment, vancomycin dose, frequency, and trough level measured at a steady state (mg/L), were collected retrospectively using a validated, standardized data collection sheet reviewed by three experts in the same field. Patients with documented infection were identified by using the microbiology database for those who underwent nasal-swab MRSA PCR testing or blood and respiratory cultures for MRSA or other gram-positive organisms.

Based on the current clinical practice at KAMC-Jeddah, the initial vancomycin dose is 15–20 mg/kg every 8–12 h, calculated using the actual body weight and administered over 90 to 120 min. A serum trough is done as a routine for all patients with normal kidney function 30 min before the 4th dose, and the subsequent doses were adjusted according to the trough level. The AUC0–24 was predicted using the PrecisePK platform based on the Bayesian principle; the two-compartment model by Rodvold et al. in this platform and the patients' dose data were utilized to calculate the AUC0–24 and trough. The Cockcroft-Gault equation was computed to calculate the creatinine clearance (CrCl).

All adult patients treated with vancomycin empirically or therapeutically for documented or suspected infection were included. Additionally, study patients were only included if the dose was within the normal range (15–20 mg/kg/dose); otherwise, the patients were excluded. Pediatric patients, critically ill patients requiring ICU admission, patients with acute or chronic renal failure or undergoing dialysis, and febrile neutropenic patients were excluded.

The objectives were: (1) to estimate the vancomycin AUC0–24 using Bayesian software and determine the probability (%) of patients who achieved the targeted AUC0–24 of 400–600 mg·h/L; (2) to evaluate the accuracy and precision of the PrecisePK Bayesian platform in determining the trough level; (3) to compare the association between the attainment of AUC0–24 ≥ 600 mg·h/L and trough levels of 10–14.9 mg/L and 15–20 mg/L, respectively; and (4) to identify factors for achieving the target AUC0–24.

Factors for achieving the target AUC0–24 were determined using stepwise multiple regression analysis. Multinomial logistic regression was performed to estimate adjusted odds ratios (adj. ORs) and calculate 95% confidence intervals (CIs) for the factors for low and high AUC0–24. Data were analyzed using the Statistical Package for the Social Sciences (SPSS). Kolmogorov-Smirnov and histogram tests were performed to determine whether the data were normally distributed. Categorical variables were expressed using frequencies and percentages, while the mean ± standard deviation was used to present continuous variables.
Pearson\u2019s coefficient was used to determine the correlation between the actual vancomycin trough levels and the trough values calculated by the Bayesian software. The accuracy and precision of the Bayesian software were assessed by calculating the mean absolute percentage error (MAPE) and the root mean squared error (RMSE), respectively; MAPE was considered acceptable when \u226420%. Within each trough group, the probability (%) of patients who achieved a target AUC0\u201324 of 400\u2013600 or >600 mg\u00b7h/L was also determined. The \u03c72 test was performed to determine the association between an AUC0\u201324 of >600 mg\u00b7h/L and trough values of 10\u201314.9 mg/L vs. those of 15 mg/L or greater. All reported p-values were 2-sided, and a p-value of <0.05 was considered statistically significant.

The data of 342 patients treated between January 1st and December 31st, 2019, were collected. Patients were equally distributed between males and females. 71.3% of the patients started vancomycin empirically, and 28.7% were treated based on a documented infection. The most frequent indication for vancomycin was infection of the skin and soft tissues (30%), followed by pneumonia (14.3%) and bacteremia (11.2%). The mean (\u00b1SD) weight, age, and calculated CrCl at baseline were 68.9 \u00b1 20 kg, 57 \u00b1 19 years, and 85.6 \u00b1 42.7 mL/min, respectively. The mean dose of vancomycin was 1857 \u00b1 590 mg, with a mean measured vancomycin trough of 13.6 \u00b1 6.9 mg/L and a corresponding calculated trough of 13.38 \u00b1 7.3 mg/L.

The mean estimated vancomycin AUC0\u201324 using the PrecisePK software was 573 \u00b1 199.6 mg\u00b7h/L. The target AUC0\u201324 (400 to 600 mg\u00b7h/L) and measured trough (10 to 20 mg/L) were documented in 127 (37.1%) and 185 (54%) patients, respectively. Additionally, of the 127 patients, 55% had a trough between 10\u201314.9 mg/L, while 14 (11%) patients had a trough level of 15\u201320 mg/L. Moreover, among patients with AUC0\u201324 > 600 mg\u00b7h/L, 18% and 48% had trough levels of 10\u201314.9 mg/L and 15\u201320 mg/L, respectively.

The performance of the posterior model of PrecisePK, evaluated by comparing measured trough levels with PrecisePK-predicted values, showed a good correlation (r = 0.92), with a bias of 16.8% and a precision of 2.85 mg/L. A chi-square test of independence was used to assess the association between vancomycin trough levels of 10\u201314.9 mg/L and 15\u201320 mg/L and the attainment of AUC0\u201324 > 600 mg\u00b7h/L; the result showed a significant association between AUC0\u201324 > 600 mg\u00b7h/L and a trough level of 15\u201320 mg/L (p < 0.05). Furthermore, the results demonstrated increased odds of AUC0\u201324 > 600 mg\u00b7h/L in the trough level 15\u201320 mg/L group as compared with the trough level 10\u201314.9 mg/L group.

A stepwise regression examined the relationship between the AUC0\u201324 and the patients\u2019 demographics to find potential covariates fitting a linear regression model. Trough level, CrCl, and BMI had significant correlations with the AUC0\u201324, and the resulting equation relates the AUC0\u201324 (the area under the plasma concentration-time curve over the last 24-h dosing interval, mg\u00b7h/L) to the steady-state trough concentration (\u00b5g/mL), the total daily dose (TDD, mg/kg/day), the body mass index (BMI, kg/m2), and the creatinine clearance estimated with the Cockcroft-Gault equation (CrCl, mL/min).
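The accuracy and precision metrics used above are simple to compute; a sketch with invented trough values follows (only the definitions, MAPE and RMSE, come from the text):

import numpy as np

def mape(measured, predicted):
    # Mean absolute percentage error; reported acceptable when <= 20%.
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

def rmse(measured, predicted):
    # Root mean squared error, in the units of the measurement (mg/L).
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))

measured = np.array([13.6, 9.8, 17.2])    # observed troughs, mg/L (illustrative)
predicted = np.array([13.4, 10.9, 16.0])  # Bayesian-predicted troughs
print(mape(measured, predicted), rmse(measured, predicted))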
Furthermore, multivariate logistic regression was used to determine which factors were more likely to classify patients as normal rather than low or high AUC0\u201324.

In the current study, the mean estimated AUC0\u201324 was 573 \u00b1 199.6 mg\u00b7h/L, and the AUC0\u201324 was estimated based on a single-point concentration (the trough level). To calculate the AUC0\u201324 with Bayesian software, it is recommended to obtain two samples; trough-only estimation with the Bayesian method is possible, but further data from different patient groups are required to prove the validity of employing trough-only measurements. The revised therapeutic drug monitoring guideline states that \u201ca trough concentration alone may be sufficient to estimate the AUC0\u201324 with the Bayesian approach\u201d, provided the AUC0\u201324 is estimated from the trough concentration using a population PK model created utilizing richly sampled concentration data. Estimating the AUC0\u201324 in this way produces an accuracy in the range 0.79\u20131.03 and a bias in the range 5.1\u201321.2% across different Bayesian dose-optimizing software. A recent study also reported that the one-level Bayesian method lowers hospital expenditures compared to two-level methods; moreover, the authors reported a strong correlation between the Bayesian two-level and one-level methods (r = 0.93), with an overall 88.5% clinical decision agreement and a low mean difference (MD) between the Bayesian and linear AUC0\u201324 methods.

In the current study, there is a high degree of variability between the measured trough concentration and the AUC0\u201324 value. Comparing the target AUC0\u201324 dosing method with the trough-based method showed that only 37.1% of the included patients achieved the target AUC0\u201324 of 400\u2013600 mg\u00b7h/L, while the majority attained an AUC0\u201324 > 600 mg\u00b7h/L. It is interesting to note that the upper limits of the target trough level (15\u201320 mg/L) in this study were significantly associated with an AUC0\u201324 > 600 mg\u00b7h/L, as compared to the trough level 10\u201314.9 mg/L group. These results are in line with a previous study by Lodise et al., which noted that when the relationship between the steady-state trough concentration and the AUC0\u201324 is examined, the trough concentration explains no more than 50% of the interindividual variability in the AUC0\u201324 (r2 = 0.409). Furthermore, with vancomycin trough values between 15 and 20 mg/L, the likelihood of reaching an AUC0\u201324 of 400 mg\u00b7h/L is essentially 100%, without considering the upper range of the AUC0\u201324, which varies significantly from patient to patient.

These results provide further support for shifting the practice to dosing and monitoring of vancomycin guided by the AUC0\u201324. Two recent studies, by Neely et al. and Finch et al., found that vancomycin dosing and monitoring guided by the AUC0\u201324 are associated with less nephrotoxicity compared to trough-based monitoring. The discordance between the AUC0\u201324 and the measured trough concentration may lead to treatment failure or expose the patient to severe adverse effects, and it is highly important to consider the consequences of discordance in clinical decision classification from a safety and efficacy perspective. Two different scenarios may arise; the most important is the unmatched classification in which the AUC0\u201324 method predicts a supratherapeutic exposure while the trough method gives a reading in the normal range.
The problem is that prescribers will misclassify such a case as therapeutic, and there will be no dose adjustment, resulting in nephrotoxicity; notably, 27.8% of the normal-trough cases in our study fall under this scenario. In contrast, the other scenario may affect clinical improvement, when the AUC0\u201324 method predicts a subtherapeutic exposure while the trough value lies within the therapeutic range. The consequences of such discrepancies may mislead the physician into maintaining the dosage although the real exposure is inadequate, thus increasing the risk of treatment failure, hospital length of stay, and mortality. The trough and AUC0\u201324 values differ because the AUC0\u201324 integrates the drug exposure over the whole dosing interval, whereas the trough reflects a single point of exposure at the end of the interval. Notably, trough levels are considered a poor detector of AUC0\u201324 ratios: three recent investigations demonstrated that over 50% of patients with an AUC0\u201324 of 400\u2013600 mg\u00b7h/L had trough values < 15 mg/L.

The trough level, BMI, CrCl, and TDD were demonstrated to be predictive factors for a high AUC0\u201324. Patients with low CrCl levels tend to have a high AUC0\u201324 (adj. OR = 0.97), whereas a high trough level, BMI, and TDD were associated with a high AUC0\u201324, and vice versa for the low AUC0\u201324 group. No other predictor variables were statistically significant for the development of a low or high AUC0\u201324 in this model. These results are in agreement with those of Suzuki et al., who examined predictive factors for high trough concentrations.

Overall, the current study has some limitations. First, the retrospective nature of the data collection and the reliance on the computerized provider order entry to extract all data points from a single center limit the ability to generalize the results. Second, the AUC0\u201324 estimation was based on a single-point concentration, and there is a debate over using a single Cmin to generate accurate AUC0\u201324 estimates. In addition, the current TDM recommendations at our institution assume that roughly steady-state vancomycin concentrations are achieved by the fourth dose following initiation; as a result, some patients\u2019 trough concentrations may have been collected before the actual steady state. Finally, the AUC0\u201324:MIC ratio was not evaluated in this study, and we considered MIC = 1 mg/L for all strains.
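As a concrete illustration of the multinomial classification described in the methods, the sketch below fits a three-group (low/normal/high AUC0-24) model with statsmodels on simulated data; the variable names and all simulated values are assumptions, not the study\u2019s dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 342
df = pd.DataFrame({
    "trough": rng.normal(13.6, 6.9, n),   # mg/L
    "crcl": rng.normal(85.6, 42.7, n),    # mL/min
    "bmi": rng.normal(27, 5, n),          # kg/m2
    "tdd": rng.normal(27, 8, n),          # mg/kg/day
})
# Crude simulated outcome: 0 = low, 1 = normal, 2 = high AUC0-24.
score = 0.15 * df["trough"] + 0.05 * df["tdd"] - 0.01 * df["crcl"] \
        + rng.normal(0, 1, n)
df["auc_group"] = pd.cut(score, 3, labels=False)

X = sm.add_constant(df[["trough", "crcl", "bmi", "tdd"]])
fit = sm.MNLogit(df["auc_group"], X).fit(disp=False)
print(np.exp(fit.params))   # adjusted odds ratios vs. the reference group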
The main conclusion that can be drawn is that one-third of the patients within the targeted AUC0\u201324 goal had a trough level below the targeted goal and, dangerously, patients with normal trough levels between 15\u201320 mg/L had a 13-fold increased risk of AUC0\u201324 > 600 mg\u00b7h/L. Despite the abovementioned limitations, the current work might encourage physicians to improve their clinical practice and to implement the Bayesian approach as a workable alternative to the traditional trough method. Future investigations should be directed toward evaluating the clinical implementation of AUC0\u201324-based dosing using Bayesian software."}
+{"text": "In the Appalachian region, a variety of factors impact the ability of patients to maintain good oral health, which is essential for overall health and well-being. Oral health issues have led to high costs within the Appalachian hospital system. Dental informatics examines preventable dental conditions to understand the problem and suggest cost containment. We aimed to demonstrate the value of dental informatics in dental health care in rural Appalachia by presenting a research study that measured emergency room (ER) use for nontraumatic dental conditions (NTDCs) and the associated economic impact in a hospital system that primarily serves rural Appalachia. The Appalachian Clinical and Translational Science Institute\u2019s oral health data mart, with relevant data on patients (n=8372) with ER encounters for NTDCs between 2010 and 2018, was created using the Institute\u2019s research data warehouse. Exploratory analysis was then performed by developing an interactive Tableau dashboard. Dental informatics provided the platform whereby the overall burden of these encounters, along with disparities in burden by age group, gender, and primary payer, was assessed. Dental informatics was essential in understanding the overall problem and provided an interactive and easily comprehensible visualization of the situation. We found that ER visits for NTDCs declined by 40% from 2010 to 2018, but a higher percentage of visits required inpatient care and surgical intervention. Dental informatics can provide the necessary tools and support to health care systems and state health departments across Appalachia to address serious dental problems. In this case, informatics helped identify that although inappropriate ER use for NTDCs diminished due to ER diversion efforts, such visits remain a significant burden. Through its visualization and data extraction techniques, dental informatics can help produce policy changes by promoting models that improve access to preventive care.

Oral health is critical to overall health and well-being. Appalachia, a region with a largely rural population, spreads over 420 counties in 13 states. Rural Appalachia is medically underserved to the extent that there is an average of only 4 dentists per 100,000 individuals, compared to the United States\u2019 average of nearly 61 dentists per 100,000. There are several indicators of oral health in a community; one such indicator is a visit to an emergency room (ER) for a preventable dental condition. When people do not have access to preventive dental care for problems like gum disease and tooth decay, treatable dental issues become a much bigger problem, often causing excruciating pain and leading people to seek care in an emergency department. Data show that ERs are the first and last resort for many low-income adults nationwide to obtain emergency care for preventable dental conditions. ERs are a significant part of the health care system that should be used for emergency health care needs and should not be a place for routine dental care. Further, these settings are ineffective for treating dental problems.
The use and financial impact of dental-related trips to medical settings, such as ERs and urgent care sites, in the Huntington hospital system\u2019s catchment area in West Virginia (WV) has not been documented. We used dental informatics to develop a synopsis of dental visits to the ER for preventable dental conditions in southern WV. This paper highlights how dental informatics was used to examine preventable dental conditions in the Appalachian region of WV and suggests cost containment.

Dental informatics, a comparatively young field, can assist in patient care in new ways. Integrating dental informatics into health information systems provides an effective service: these combined systems collect data from population-based surveys, disease surveillance systems, hospital information systems, and family health surveys, and provide access to whole-patient patterns that can improve patient dental services. To understand the larger challenges within dentistry, especially concerning health in rural areas, dental informatics approaches were applied in the present study.

The Appalachian Clinical and Translational Science Institute (ACTSI) Division of Clinical Informatics has a functional multi-institutional clinical research data warehouse (CRDW) containing more than 12 years of billing and electronic medical record data. The CRDW consists of relational tables, dimensions, and fact tables that store multi-institutional medical information and provide data for operational and analytical model development (machine learning). It contains structured electronic health record data, non\u2013electronic health record survey data, and unstructured (text) information received from the Marshall Health practice plan, Cabell-Huntington Hospital, and Marshall University Joan C Edwards School of Medicine\u2019s Edwards Comprehensive Cancer Center. It uses the technological tools of information science to build a platform that can gather, analyze, and present information to address, in this case, oral health needs.

For this retrospective longitudinal study, we used the oral health data mart that was developed from the ACTSI\u2019s CRDW to understand and improve the dental health of the population in the area. Relevant clinical and financial data from the data mart over 9 years (2010-2018) were extracted, verified, and analyzed. ER encounters for preventable dental conditions were identified using the primary diagnosis codes. We calculated the overall burden of dental-related ER visits for avoidable conditions. Disparities in their burden regarding demographic variables, such as the patient\u2019s age, gender, and primary insurance for the visit, were also examined. Further, patterns and trends in ER use for such visits and the associated charges were studied. This retrospective study was conducted with the best intent and information obtained from the CRDW and was approved by the institutional review board of Marshall University, Huntington, WV (1069363-1).

Transact-SQL coding found that 8372 patients made 11,946 visits to the ER for a nontraumatic dental condition and generated US $18,303,173 worth of hospital charges over 9 years. Using the oral health data mart, we found that although the number of visits decreased yearly, the charges quadrupled, with average costs per visit increasing from US $776.64 in 2010 to US $3136.79 in 2018. Additionally, the percentage of visits resulting in an inpatient admission or requiring a surgical intervention rose yearly from 2010 to 2018.
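The aggregation behind these figures was done in Transact-SQL; an equivalent sketch in pandas is shown below. The table layout, column names, and the ICD-10 prefixes used to flag NTDCs are assumptions for illustration, not the CRDW schema.

import pandas as pd

enc = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "visit_year": [2010, 2014, 2018],
    "primary_dx": ["K04.7", "K02.9", "K12.2"],
    "charges": [776.64, 1500.00, 3136.79],
})
# Select nontraumatic dental conditions by primary diagnosis code prefix.
ntdc = enc[enc["primary_dx"].str.startswith(("K02", "K04", "K05", "K08", "K12"))]
by_year = ntdc.groupby("visit_year").agg(
    visits=("patient_id", "size"),
    patients=("patient_id", "nunique"),
    total_charges=("charges", "sum"),
)
by_year["avg_charge_per_visit"] = by_year["total_charges"] / by_year["visits"]
print(by_year)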
Most of these visits were for disorders of the teeth and supporting structures and for periapical abscesses without sinus, while the visits for cellulitis and mouth abscesses were the most expensive. This information was presented using a Tableau dashboard, which provided an interactive display that was informative but also simple to understand.

Medicaid was the primary payer for most of these visits in 2018, compared to a third of the payments in 2010. Of all the age groups, adults between the ages of 25 and 29 years had the highest visits and charges. There was no distinct gender predisposition in the number of visits and charges accrued, although female patients\u2019 ER visits for NTDCs were fewer than those by male patients. Other predictors of an ER visit for NTDCs were age and insurance status: individuals between the ages of 25 and 29 years with no insurance coverage had more ER visits, and ER visits by uninsured patients were higher than those by insured patients. The type of insurance also played a role, as Medicaid-insured patients were more likely to be in the ER for dental problems compared to Medicare-insured patients. Using interactive visualization tools, we were able to drill down to the patient level, which showed that patients returned to the ER for NTDCs multiple times; an example of such visits by one patient, along with the primary diagnosis and charges for the visits, is shown in a Tableau-derived figure.

Dental informatics combines technological tools with information science to hasten improvements in dental practice, research, education, and management. Using dental informatics, we found that ER dental visits for preventable conditions posed a significant burden of more than US $18.3 million. Further, despite a decelerating trend in the number of visits from 2010 to 2018, the average charges per patient per visit increased, and an accelerating trend was traced in the percentage of visits that required hospital admissions and dental procedures over the same period. Dental informatics was instrumental in demonstrating that patients who visited the ER for NTDCs tended to be sicker every year, accruing more charges per visit. We speculate that this occurred due to the institution of services, such as a mobile dental clinic and residency programs, diverting patients with less severe dental conditions from ER settings. In the past, several programs promoting these types of services have successfully reduced ER use for NTDCs.

This retrospective study used available data from 2010 to 2018 and consisted of those patients seen within the Marshall Health practice plan, Cabell-Huntington Hospital, and Edwards Comprehensive Cancer Center; it did not include other outside hospital systems within the local area. Dental informatics tools and approaches improve the understanding of dental practice and help assess the economic burden of dental services. Appalachian hospitals are especially concerned about costs, with affordability, equity, and nonhealth benefits factoring into decisions about health spending."}
+{"text": "Pneumoconiosis refers to a class of serious diseases threatening the health of workers exposed to coal or silica dust.
However, data on the burden of pneumoconiosis are unavailable in China. Incident cases, deaths, and disability-adjusted life years (DALYs) from pneumoconiosis and its subtypes in China were estimated from the Global Burden of Disease Study 2019 using a Bayesian meta-regression method. The trend of the burden from pneumoconiosis was analyzed using the percentage change and the annualized rate of change (ARC) during the period 1990\u20132019. The relationship between the subnational socio-demographic index (SDI) and the ARC of the age-standardised death rate was measured using Spearman\u2019s rank-order correlation. In 2019, there were 136.8 thousand new cases, 10.2 (8.1\u201313.6) thousand deaths, and 608.7 (473.6\u2013779.4) thousand DALYs from pneumoconiosis in China; more than 60% of the global burden from pneumoconiosis was in China. Both the total number of new cases and the DALYs from pneumoconiosis kept increasing from 1990 to 2019. In contrast, the age-standardised incidence, death, and DALY rates from pneumoconiosis and its subtypes, except for the age-standardised incidence rate of silicosis and the age-standardised death rate of asbestosis, experienced a significant decline during the same period. The subnational age-standardised death rates were higher in western China than in eastern China. Meanwhile, the subnational ARCs of the age-standardised death rates due to pneumoconiosis and its subtypes were significantly negatively correlated with the SDI in 2019. China suffers the largest health loss from pneumoconiosis in the world, and reducing the burden of pneumoconiosis is still an urgent task in China. The online version contains supplementary material available at 10.1186/s12889-022-13541-x.

Pneumoconiosis is a major public health concern around the world, with 0.20 (0.17\u20130.23) million new cases and 0.92 (0.76\u20131.12) million disability-adjusted life years (DALYs) in 2019. China is the world\u2019s largest labor market, with a working population of more than 775 million. In this study, temporal changes in the burden of pneumoconiosis and its four major subtypes, by age and sex, during 1990\u20132019 in China were reported based on data from the Global Burden of Disease Study 2019. The correlations between the subnational socio-demographic index (SDI) and the annualized rate of change (ARC) of the age-standardised death rate were also determined, which hopefully will serve as a reference for policymakers to improve health.

According to the 10th revision of the International Classification of Diseases (ICD-10), pneumoconiosis was categorized into asbestosis, coal workers\u2019 pneumoconiosis (CWP; codes J60-J60.0), silicosis (codes J62-J62.9), and other pneumoconiosis (codes J63-J65.0). The socio-demographic index (SDI) is a composite indicator that covers income per capita, average years of educational attainment, and the fertility rate in females under the age of 25 years. National and subnational data on the incidence, deaths, DALYs, and the corresponding age-standardised rates of pneumoconiosis during 1990\u20132019 were obtained from GBD 2019. The data used to estimate the mortality and disease burden of pneumoconiosis included vital registration, mortality surveillance data in China, and systematic reviews. The disease burden from pneumoconiosis was estimated separately for males and females, and also for different age groups (15\u201380+ years). Three levels of covariates were used in the pneumoconiosis models. The approaches and frameworks used to estimate the incidence, deaths, and DALYs for pneumoconiosis have been reported previously.
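A sketch of the trend and correlation measures named above is given below, with invented provincial values. The ARC is written here as an annualized log-ratio, which is one common definition; the exact GBD formula may differ in detail.

import numpy as np
from scipy.stats import spearmanr

def pct_change(v1990, v2019):
    return 100.0 * (v2019 - v1990) / v1990

def arc(v1990, v2019, years=29):
    # Annualized rate of change, 1990-2019, as an annualized log-ratio.
    return 100.0 * np.log(v2019 / v1990) / years

rate_1990 = np.array([4.1, 2.7, 1.9, 3.5])   # age-standardised death rates
rate_2019 = np.array([2.6, 1.2, 0.7, 2.4])
sdi_2019 = np.array([0.55, 0.68, 0.74, 0.58])
rho, p = spearmanr(sdi_2019, arc(rate_1990, rate_2019))
print(rho, p)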
Among the total global burden due to pneumoconiosis, 68.7% of new cases, 44.3% of deaths, and 66.2% of DALYs were in China. For the different subtypes, silicosis was the leading cause of the burden from pneumoconiosis, followed by CWP, other pneumoconiosis, and asbestosis. New cases, deaths, and DALYs due to pneumoconiosis in males accounted for 93.3%, 94.5%, and 95.2% of the corresponding totals in 2019. The subnational ARC of the age-standardised death rate due to pneumoconiosis from 1990 to 2019 was negatively correlated with the SDI in 2019, and significant negative correlations were also observed between the SDI and the ARCs of the age-standardised death rates due to silicosis, CWP, asbestosis, and other pneumoconiosis over the same period.

Pneumoconiosis is the leading occupational disease and the third cause of burden due to chronic respiratory diseases in China. Among the estimated 775 million workers in China, more than 20 million were exposed to occupational risk factors in 2019. Among the four subtypes of pneumoconiosis, silicosis was the leading cause of disease burden, followed by CWP, other pneumoconiosis, and asbestosis in China in 2019. As their names imply, silicosis, coal workers\u2019 pneumoconiosis, and asbestosis are caused by inhaling silica dust, coal mine dust, and asbestos fibers, respectively. As the burden of pneumoconiosis is 100% attributable to occupational exposure, our findings are consistent with previous studies showing that occupational silica is still the leading risk factor for pneumoconiosis.

The age-standardised death and DALY rates of pneumoconiosis and its subtypes decreased by more than 40%, which likely was a result of persistent efforts by the Chinese government. The greatest number of new cases of pneumoconiosis was observed in 2006, and the age-standardised incidence rate started to drop from 2006, when China began to take strong supervision measures. In 2006, China established the Network Direct Report System of Occupational Diseases to monitor occupational diseases, and a series of government documents, including the Plan for a Healthy China 2030 and the 13th Five-Year Plan for Occupational Health Hazard Prevention and Control, have been released in recent years to improve occupational health in China.

The number of new cases of asbestosis remained high in China in 2019. The risk factor for asbestosis is occupational exposure to asbestos fibers, which are listed as carcinogenic to humans (Group 1) by the International Agency for Research on Cancer. Our results showed that the new cases, deaths, and DALYs due to pneumoconiosis in males accounted for about 95% of the corresponding totals in 2019, which is consistent with previous reports. The subnational results showed that the age-standardised death rate of pneumoconiosis in western China was higher than that in the eastern coastal area, which might have been a result of the difference in economic development level.
As the SDI is a key factor affecting the socio-economic development of a specific region and was used to explain the regional variations, the relationship between the SDI and the ARC of the age-standardised death rate due to pneumoconiosis was further analyzed. The results showed significant negative correlations, indicating that regions with a lower SDI have made slower progress against pneumoconiosis.

Although China has been making ceaseless efforts to prevent and control occupational diseases, especially pneumoconiosis, the challenges remain huge, as shown in our study. In order to reduce the incidence and burden of pneumoconiosis in China, much work is needed to improve the workplace environment, provide personal protective equipment, guide employers to standardized work safety protocols, perfect pneumoconiosis surveillance systems, and increase the intensity of regulatory supervision.

Our study provides a comprehensive summary of the incidence, mortality, and DALY rates from pneumoconiosis and its subtypes from 1990 to 2019 in China and represents the most up-to-date information on the burden due to pneumoconiosis. Meanwhile, the standardized estimation method employed by GBD allows us to make subnational comparisons in China. Furthermore, the attribution of the SDI is examined to explain the variations in the annualized rate of change in the burden of pneumoconiosis. However, this study is still limited in several ways. First, national and subnational data are unavailable for the occupational exposure levels of dust and fibers in China; the main covariates in the pneumoconiosis models in GBD 2019 were asbestos consumption per capita, coal production per capita, gold production per capita, and the summary exposure values for occupational asbestos, occupational beryllium, and occupational silica.

Our findings show that two-thirds of the global health loss from pneumoconiosis occurs in China. Prevention and control of pneumoconiosis is still a serious health and social issue in China, and the heterogeneity between different areas of China highlights the urgent need for preferential policies for less developed regions.

Additional file 1: Table S1. Crude incidence, death, and DALY rates of pneumoconiosis in China, 1990-2019. Table S2. Incident cases, deaths, and DALYs of pneumoconiosis by sex in China, 2019. Figure S1. Sex- and age-specific DALY rates of pneumoconiosis and its subtypes in China, 2019. (A) Males and (B) females."}
+{"text": "A systematic investigation of the cellular uptake, intracellular dissolution, and in vitro biological effects of ultra-small (<10 nm) iron hydroxide adipate/tartrate coated nanoparticles (FeAT-NPs) was carried out in intestinal Caco-2, hepatic HepG2 and ovarian A2780 cells, and in the nucleotide excision repair (NER)-deficient GM04312 fibroblasts. Quantitative evaluation of the nanoparticles\u2019 uptake, as well as of their transformation within the cell cytosol, was performed by inductively coupled plasma mass spectrometry (ICP-MS), alone or in combination with high performance liquid chromatography (HPLC). The obtained results revealed that FeAT-NPs are effectively taken up in a cell type-dependent manner, with minimum dissolution after 3 h.
These results correlated with no effects on cell proliferation and minor effects on cell viability and reactive oxygen species (ROS) production for all the cell lines under study. Moreover, the comet assay revealed significant DNA damage only in GM04312 cells. In vivo genotoxicity was further studied in larvae of Drosophila melanogaster, using the eye-SMART test; the obtained results showed that FeAT-NPs were genotoxic only at the two highest tested concentrations (2 and 5 mmol\u00b7L\u22121 of Fe) in surface treatments. These data altogether show that these nanoparticles represent a safe alternative for anemia management, with a high uptake level and controlled iron release.

The use of iron oxide nanoparticles (IONPs) as potential therapeutic agents in the management of severe iron-deficiency anemia has become an area of increasing importance over the years. Initial experiments to evaluate the extent of intracellular release of iron ions from these ultrasmall FeAT-NPs were conducted in our research group by developing a specific chromatographic separation strategy that permitted distinguishing the nanoparticulated as well as the ionic iron forms simultaneously. The intracellular release of ferrous (Fe2+) and ferric (Fe3+) ions matters because free iron can generate reactive oxygen species via the Fenton and Haber\u2013Weiss reactions.

With respect to the induction of DNA damage, previous studies using the comet assay have observed genotoxicity in cultured cells treated with different types of IONPs. For example, silica-coated magnetite (Fe3O4) nanoparticles were genotoxic in human SH-SY5Y neuronal cells, as were Fe2O3 nanoparticles, and Fe2O3 and Fe3O4 nanoparticles in lung epithelial A549 cells, whereas Fe3O4 nanoparticle exposure of human lymphocytes and breast cancer MCF-7 cells caused no increase in DNA damage. For this reason, GM04312 cells, known to be deficient in the nucleotide excision repair (NER) system, were included among the studied cell lines. In addition, the effects of these FeAT-NPs were also evaluated in vivo, using Drosophila melanogaster as a model organism, recommended for testing the genotoxic potential of nanomaterials.

These nanoparticles, described as stable in different conditions and capable of forming small agglomerates, can be taken up by cells. To evaluate their uptake, the cells were exposed to the FeAT-NPs at 2 mmol\u00b7L\u22121 of Fe for 3 h, and the iron content of cytosolic extracts from treated cells, and from the respective non-treated control cells, was measured as described before. The differences observed in the uptake of these nanoparticles among cell lines might be attributed to the fact that they enter the cells through endocytosis.

The next step was to investigate, in vitro first, the release of iron ions from the FeAT-NPs within the cell cytosol. For this aim, the nanoparticles were incubated in different media, with two different pH values. In addition, different cytosolic compounds that might influence Fe release, like glutathione (GSH) and ascorbic acid (ASC), were also tested. After 12 h and 24 h of incubation, the different solutions were ultrafiltered and the mass balance of the retained (thus nanoparticulated) and permeated (thus soluble) fractions was obtained. To evaluate the possible biotransformation of the FeAT-NPs within the cell cytosol, a modified reversed-phase high performance liquid chromatography (HPLC) separation, using a mobile phase containing sodium dodecyl sulfate, was used in combination with ICP-MS detection. The observed retention time, according to the calibration of the column, might correspond to a size below 10 nm, in the range of 4\u20136 nm, in agreement with previous results. Finally, no remarkable changes were observed in the Fe peak at about 4.8 min, which has been ascribed to ferritin (accumulating nanoparticulated iron) but might also be related to nanoparticle aggregates.
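The ultrafiltration experiments above reduce to a simple mass balance between the retained (nanoparticulated) and permeated (soluble) iron fractions; a sketch with invented values:

fe_retained_ug = 92.0    # Fe in the retentate (nanoparticulated), micrograms
fe_permeated_ug = 6.5    # Fe in the permeate (soluble), micrograms
total = fe_retained_ug + fe_permeated_ug
soluble_pct = 100.0 * fe_permeated_ug / total
print(f"soluble fraction: {soluble_pct:.1f}%")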
To study the biological effects of the FeAT-NPs, cell viability was determined for all analyzed cell lines using the resazurin assay, after 3 h of exposure at concentrations of 0, 0.5, 1.0, 1.5 and 2 mmol\u00b7L\u22121 of Fe. The dose-response regressions showed small but significant negative slopes . Cell survival as high as 80% was observed at the highest tested concentration. Therefore, despite differences in FeAT-NPs uptake between individual cell types, no significant differences in viability reduction were detected among cell lines; moreover, the decreases in viability, although significant, were not biologically relevant for the tested FeAT-NPs concentrations.

The effect of these nanoparticles on cell proliferation was evaluated with the colony formation assay, whose results revealed that these nanoparticles were not toxic at the tested Fe concentrations and exerted no significant influence on cell proliferation, in agreement with our previous results in Caco-2 and HT-29 cells.

The ability of FeAT-NPs to induce the production of intracellular ROS in the four investigated cell lines was assessed using, as described in Materials and Methods, a 2,7-dichlorodihydrofluorescein-diacetate (DCFH-DA) fluorescent probe, with tert-butyl hydroperoxide (TBHP) as positive control. Exposure to FeAT-NPs (0\u20132 mmol\u00b7L\u22121) for 3 h did not induce significant ROS formation in Caco-2, HepG2 and GM04312 cells when compared to their respective untreated (control) cells, although in Caco-2 cells TBHP, at 200 \u03bcmol\u00b7L\u22121, induced significant ROS levels (2.259 \u00b1 0.112 fold-change over the negative control), while only a modest response was seen in GM04312 cells (1.120 \u00b1 0.063 fold-change at 200 \u03bcmol\u00b7L\u22121); these values differ somewhat from those found previously with these cells. In A2780 cells, a statistically significant increase in the level of intracellular ROS was detected with increasing concentrations of FeAT-NPs, up to 1 mmol\u00b7L\u22121 of Fe, with a significant linear regression; in these cells, TBHP was used at 50 \u03bcmol\u00b7L\u22121, a rather low concentration chosen because of this cell line\u2019s sensitivity. These results would agree with the higher intracellular iron content detected in A2780 cells compared to the other cell types. Overall, these results demonstrated that the ultrasmall FeAT-NPs, at the studied Fe concentrations, did not relevantly influence the generation of intracellular ROS in the different analyzed human cells, even in sensitive ones like A2780, correlating with the results of cell viability and clonogenic activity.

To assess the toxicity of a given nanomaterial, the induced DNA damage is an essential parameter that may be measured with the comet assay. This assay, in its alkaline version, detects both single and double strand breaks, alkali-labile sites, stalled replication forks and the activity of DNA excision repair systems. Therefore, the capacity of the ultrasmall FeAT-NPs to induce DNA damage was further assessed by the alkaline comet assay, using the same treatment conditions as for the other studied biological parameters. The highest FeAT-NPs concentration (2 mmol\u00b7L\u22121 of Fe), after 3 h of exposure, induced statistically significant DNA damage in A2780, HepG2 and Caco-2 cells, and the linear dose-response regressions presented significant slopes; nevertheless, the induced damage was rather low, never doubling the spontaneous one. Moreover, in the case of Caco-2 cells, treatments of 24 h did not show any indication of genotoxicity. With the positive control, 600 \u03bcmol\u00b7L\u22121 methyl methanesulfonate (MMS), high levels of DNA damage (measured as percentages of Tail DNA) were detected in the same experiments.
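The viability, ROS and comet analyses above all rely on simple linear dose-response regressions; a minimal sketch is given below. The response values are invented, and only the Fe concentrations match the text.

import numpy as np
from scipy.stats import linregress

conc = np.array([0, 0.5, 1.0, 1.5, 2.0])     # mmol/L of Fe
response = np.array([100, 97, 93, 89, 85])   # e.g., % viability (assumed)
fit = linregress(conc, response)
print(fit.slope, fit.pvalue)                 # compare with the b and p above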
In the case of GM04312 cells, significant DNA damage (p < 0.05) was detected at all tested concentrations when compared to control cells, with a significant linear dose-response regression slope in 3 h treatments; in 24 h treatments, the slope was not significant (p = 0.1256). Higher induced DNA damage was expected in these cells, as compared with the other cell types, and as also detected for the positive control, since GM04312 cells do not repair several types of DNA damage through the NER pathway. These results showed that there was no relationship between the intracellular iron concentrations and the DNA damage induced in the different cell types. Altogether, these results suggest that not all the FeAT-NPs taken up by cells contributed to increasing the intracellular ionic iron pool, with the subsequent consequences of DNA damage or ROS induction.

In the in vivo evaluation of FeAT-NPs effects in larvae of D. melanogaster, the iron content of the exposed larvae increased, by about three-fold in the larvae exposed to 5 mmol\u00b7L\u22121 Fe, without apparent toxicity: the number of emerged flies per bottle, which provided a semi-quantitative estimation of toxicity, showed that these nanoparticles were not toxic at any of the tested concentrations. With the positive control (MMS), the increases in mosaic eyes were larger in females than in males, indicating that MMS induced mutations but also quite high levels of recombination. That positive results were detected only with surface treatments was not unexpected, as higher doses can be used in this mode of treatment compared to the chronic one. When comparing both repair conditions, the frequencies of mosaic eyes were slightly higher in repair-deficient than in repair-efficient conditions, for both sexes and even for the negative controls; similar results have been described before.

This lack of an NER effect in vivo differs from the results obtained in vitro with the comet assay. The comet assay detects DNA damage (strand breaks), whereas the SMART assay detects the consequences of this DNA damage, that is, mutations and recombination. Thus, it is feasible that in some cells the DNA damage is repaired by systems other than NER before it becomes the origin of mutations. Furthermore, since the applied doses of nanoparticles were of the same order in both assays, the level of induced DNA damage in vivo should be lower than in vitro and, therefore, easier to repair. The analysis of the average clone size revealed that the induced damage was fixed as mutations (or recombination) quite late in larval development, as the size of most spots was rather small. This fact agrees with an easy repair of the induced DNA damage, since only the damage induced immediately before pupation seems to be detected. When compared with other iron oxide nanoparticles, or nanomaterials, whose genotoxicity has been studied in Drosophila, the effects of FeAT-NPs were detected only at the highest tested concentrations.
The ICP-MS instrument was fitted with a cyclonic spray chamber and a conventional concentric nebulizer.All ICP-MS (inductively coupled plasma mass spectrometry) experiments during this work were performed using the triple quadrupole instrument iCAP TQ ICP-MS working in the single quadrupole (SQ)-hydrogen mode . Detection of iron was performed on-line with the iCAP TQ ICP-MS instrument. The flow from the HPLC was introduced into the ICP-MS instrument via a 15 cm long polyether ketone (PEEK\u00ae) tube, which was connected to the polytetrafluoroethylene (PTFE) sample tube of the nebulizerChromatographic separations were carried out using the Agilent 1260 HPLC system equipped with a Nucleosil CFor centrifugation/ultrafiltration steps, a centrifuge Biofuge Stratos Heraeus (Thermo Fisher Scientifc) was used. Fluorescence measurements were performed using an Infinite 200 microplate reader. Flow cytometry experiments were performed using a CytoFLEX S Flow Cytometer .Nucleoids from the comet assay were photographed, for a posterior quantitative determination of DNA damage, in an Olympus BCX-61 fluorescence microscope, with an Olympus DP70 CCD-coupled camera, from the Scientific and Technical Services (SCTs) of the University of Oviedo.\u22121 sodium hydroxide (Merck) was used for the nanoparticle precipitation. Standard solutions of Fe and Ge were used for total Fe determinations by ICP-MS. Sodium dodecyl sulfate and ammonium acetate were used in the mobile phases for the chromatographic separations.All solutions were prepared using 18 M\u03a9 cm deionized water obtained from a PURELAB flex 3 . Iron (III) chloride hexahydrate was used as the precursor for the nanoparticle synthesis. Sodium tartrate dihydrate and adipic acid were solubilized in 0.9% potassium chloride solution to be used as the nanoparticle coating agents. Ammonium acetate was used for the synthesis buffer and 5 mol\u00b7LRPMI 1640 Dulbecco\u2019s culture medium, phosphate-buffered saline (PBS) and fetal bovine serum (FBS) were purchased from Gibco , and modified Eagle\u2019s medium (DMEM) and trypsin, from Biowest, were supplied by VWR-Avantor (Spain); plasmocin was obtained from InvivoGen and methyl methanesulfonate (MMS), tert-butyl hydroperoxide (TBHP), 2,7-dichlorodihydrofluorescein-diacetate (DCFH-DA) were bought from Sigma-Aldrich. Low melting point (LMP) and normal melting point (NMP) agaroses, from Invitrogen, were acquired from Sigma-Aldrich. All other chemicals used were of the highest purity and available from commercial sources.\u00ae Cell Viability Assay Kit was purchased from Promega and 30,000 Da and 3000 Da Ultra-15 MWCO centrifugal filter units were obtained from Millipore .CellTilter-Blue3 was added to a solution containing tartaric acid and adipic acid in 0.9% (w/v) of KCl to achieve a molar ratio of Fe: adipic acid: tartaric acid in the final suspension of 2:1:1. The initial pH of the mixture was always below 2.0 and the iron was fully soluble. The pH was then slowly increased by dropwise addition of a concentrated solution of NaOH (5 mol\u00b7L\u22121) until basic pH for iron precipitation. The entire mixture was then oven-dried at 45 \u00b0C for a minimum of 24 h. Purification of the synthetized FeAT-NPs was performed by two centrifugation and ultrafiltration steps using first a 30,000 Da Ultra-15 MWCO centrifugal filter and then a 3000 Da Ultra-15 MWCO centrifugal filter. 
Ultrasmall iron hydroxide adipate tartrate nanoparticles, FeAT-NPs (a 5\u201310 nm ferric oxy-hydroxide core coated with tartaric/adipic acid), were synthesized according to previous publications, and the size and shape characterization of these nanoparticles was published in previous articles of our research group. Briefly, FeCl3 was added to a solution containing tartaric acid and adipic acid in 0.9% (w/v) KCl to achieve a molar ratio of Fe:adipic acid:tartaric acid in the final suspension of 2:1:1. The initial pH of the mixture was always below 2.0 and the iron was fully soluble. The pH was then slowly increased by dropwise addition of a concentrated solution of NaOH (5 mol\u00b7L\u22121) until basic pH, for iron precipitation. The entire mixture was then oven-dried at 45 \u00b0C for a minimum of 24 h. Purification of the synthesized FeAT-NPs was performed by two centrifugation and ultrafiltration steps, using first a 30,000 Da Ultra-15 MWCO centrifugal filter and then a 3000 Da Ultra-15 MWCO centrifugal filter.

A2780 (human ovarian carcinoma), Caco-2 and HepG2 (human hepatocarcinoma) cell lines were obtained from the Biotechnological and Biomedical Assays Unit at the SCTs of the University of Oviedo, and GM04312 cells, used due to their sensitivity to ROS-inducing agents, were also employed. For the in vivo analysis of FeAT-NPs effects on larvae of Drosophila, the mus201 strains, yellow and white (mus201-y and mus201-w), were selected, as mus201 is a homologue of the XPG gene.

For the uptake experiments, cells were seeded at 106 cells per flask and, after 48 h, when all of them were at around 80% cell confluence, they were treated with the FeAT-NPs at a concentration of 2 mmol\u00b7L\u22121 of Fe for 3 h. After treatment, cells were washed three times with PBS, harvested with trypsin and counted. Cell concentrations varied among cell lines, as a result of their size and growth rate: 7.5\u20139 \u00d7 106 cells for A2780, 9\u201311 \u00d7 106 cells for Caco-2, 5\u20137 \u00d7 106 cells for HepG2 and 3.5\u20135 \u00d7 106 cells for GM04312. Cells were then precipitated by centrifugation to obtain a clean cell pellet. The experiment was performed in triplicate for each cell line, and cells not exposed to FeAT-NPs served as negative control in each experiment.

Cell pellets were lysed by the addition of 1 mL of cold ultrapure water, followed by five freeze\u2013thaw cycles using liquid nitrogen and a 60 \u00b0C water bath. After lysis, cell debris was removed by centrifugation and the supernatants were collected and analyzed for total Fe content and Fe speciation. Total Fe concentrations were determined by ICP-MS in aliquots of the cell lysate supernatants previously acidified in 0.1% HNO3.

For the determination of Fe in the larvae, phosphate buffer was used as the negative control. Seventy-two hours after the treatment, using highly concentrated sucrose solutions, larvae were collected in a glass vial, washed with ultrapure water several times, until all of them were clean and at the bottom of the vial, and then transferred to Eppendorf tubes. The larvae samples were subjected to an acid digestion with 200 \u00b5L of HNO3 (65%) for 1 h and 200 \u00b5L of H2O2 (30%) for 4 h, until no solid residues were observed. Samples were diluted for further analysis, and a Fe calibration curve was used for the quantification of the iron content in the digested samples.

For Fe speciation, the chromatographic separation used a mobile phase containing SDS (10 mmol\u00b7L\u22121) at a flow rate of 0.5 mL\u00b7min\u22121, with detection performed by on-line ICP-MS monitoring of iron.
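The cellular iron content follows from the ICP-MS reading of the lysate, the lysis volume, and the counted cells; an illustrative back-calculation (the concentration and cell numbers below are invented):

fe_ng_ml = 250.0        # ICP-MS result for the lysate supernatant, ng/mL
lysate_ml = 1.0         # lysis volume used above (1 mL of ultrapure water)
cells = 8.0e6           # counted cells in the pellet (within the A2780 range)
fg_per_cell = fe_ng_ml * lysate_ml * 1e6 / cells   # 1 ng = 1e6 fg
print(f"{fg_per_cell:.1f} fg Fe per cell")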
Cell viability was assessed with the resazurin assay, using the CellTiter-Blue\u00ae Cell Viability Assay Kit from Promega. The assay is based on the ability of living cells to convert a redox dye (resazurin) into a fluorescent end product (resorufin), with maximum excitation and emission wavelengths of 560 nm and 590 nm, respectively. In brief, cells were seeded in 96-well plates and incubated for 48 h to allow their attachment to the plate. The cells were then treated in triplicate with FeAT-NPs, at different Fe concentrations (from 0 to 2 mmol\u00b7L\u22121), for 3 h. A positive control was established using 600 \u00b5mol\u00b7L\u22121 of MMS. After the treatment, the medium was removed, and the cells were washed with PBS. Then, fresh medium containing 20 \u00b5L of the reaction mixture from the CellTiter-Blue\u00ae Cell Viability Assay Kit was added. The plate was shaken for 10 s and incubated under standard cell culture conditions for 4 h, after which fluorescence was measured using a microplate reader. The percentage of cell viability was calculated from the fluorescence emission values obtained with the microplate reader, as the ratio of the fluorescence of treated cells to that of the untreated control cells, multiplied by 100. Three independent experiments were performed per cell line.

The clonogenic activity, defined as the ability of a single cell to grow into a colony (a group of at least 50 cells), was studied with the clonogenic, or colony formation, assay, which determines cell reproductive death after treatment. To carry out this assay, 105 cells per well were seeded in a 6-well plate for 24 h and then treated for 3 h, at 37 \u00b0C, in culture medium with FeAT-NPs at concentrations of 0 and 2 mmol\u00b7L\u22121 of Fe. Immediately after treatment, 2000 cells per well were re-plated in new 6-well plates to assess the colony forming efficiency. The plates were left in the incubator for 6\u201310 days, depending on the cell line, until clones of at least 50 cells appeared. The cells were washed with PBS, fixed with methanol:acetic acid (3:1) for 5 min and stained with 0.5% crystal violet in methanol for 15 min. The dye mixture was removed, the plates were rinsed with tap water, and the colony numbers were counted after drying. Three independent experiments were performed per cell line.

Intracellular ROS was measured using the cell-permeant reagent DCFH-DA, a fluorescent dye that can detect hydrogen peroxide, hydroxyl radicals, peroxy radicals and other ROS molecules within the cell. After diffusion into the cell, DCFH-DA is deacetylated by intracellular esterases to generate a non-fluorescent compound, which is rapidly oxidized by ROS to 2,7-dichlorofluorescein (DCF); DCF is highly fluorescent, with maximum excitation and emission wavelengths at 495 nm and 529 nm, respectively. The fluorescence intensity is directly proportional to the level of ROS in the cytosol. For these experiments, cells were seeded in 6-well plates at a density of 5 \u00d7 105 cells per well, incubated for 48 h, and then treated for 3 h with FeAT-NPs at different Fe concentrations (0\u20132 mmol\u00b7L\u22121). Tert-butyl hydroperoxide (TBHP), at varying concentrations depending on the cells (from 50 to 400 \u00b5mol\u00b7L\u22121) due to their different sensitivities, was used as the positive ROS inducer. After the treatment, cells were harvested with trypsin, centrifuged at 1200 rpm for 10 min, washed with 5 mL PBS and counted. The cells were then incubated with 20 nmol\u00b7L\u22121 DCFH-DA, prepared at 2 mmol\u00b7L\u22121 in dimethyl sulfoxide (DMSO) and diluted with PBS, at a density of 106 cells per mL, for 30 min in the dark. Cells were then washed thrice with 10 mL PBS and set to a concentration of 106 cells per mL in PBS. The fluorescence was measured by flow cytometry in the FITC channel, and approximately 104 cells per condition were analyzed in each of the three independent experiments performed.

For the comet assay, MMS (250 \u00b5mol\u00b7L\u22121) was used as positive control. Treated cells were embedded in LMP agarose to a final 0.5% concentration and layered onto slides pre-coated with 0.5% NMP agarose. From this step on, the whole process was performed in darkness or under red light.
After gel solidification, the slides were immersed in cold, fresh lysis solution for 1 h at 4 \u00b0C. The slides were then placed into a horizontal electrophoresis tank and covered with cold electrophoresis buffer for 20 min at 4 \u00b0C, for DNA unwinding and conversion of alkali-labile sites to single-strand breaks. Electrophoresis was performed in the same buffer at 0.81 V/cm and 300 mA, for 20 min at 4 \u00b0C. After electrophoresis, the slides were neutralized three times for 5 min with 0.4 mol\u00b7L\u22121 Tris buffer (pH 7.5), fixed in absolute ethanol and air-dried overnight. Slides were then coded for blind analysis and stained with 40 \u00b5L of ethidium bromide (0.4 \u00b5g/mL) with 1 \u00b5L of the fluorescence protector Vectashield\u00ae. Nucleoids were visualized at 400\u00d7 magnification with an Olympus BX61 fluorescence microscope, equipped with appropriate filters, and an Olympus DP70 digital camera. Photos taken of 75 nucleoids per slide were analyzed with the interactive automated comet software program KOMET 5. The percentage of DNA in the comet tail (% Tail DNA) was the parameter used to measure DNA damage. For each cell line, two slides were analyzed per FeAT-NPs concentration in each experiment, and three independent experiments were carried out. This alkaline single-cell gel electrophoresis (comet) assay was performed to determine the DNA damage as described previously by Collins, with slight modifications.

The eye-SMART assay monitors, in wild-type eyes, the presence of white mutant spots, generated by loss of heterozygosity due to point mutations and/or deletions at the white locus, or by mitotic recombination or nondisjunction in heterozygous cells. To perform the assay, two different treatments were carried out: chronic and surface. For both of them, 50 virgin females +w/+w (yellow phenotype) were mass mated with 30 males w/Y (white phenotype) and, after 48 h, they were transferred to bottles with instant Carolina Formula 4\u201324 Drosophila Medium. In chronic treatments, this medium was hydrated with solutions of the different FeAT-NPs concentrations (0\u20132 mmol\u00b7L\u22121 of Fe) in phosphate buffer, pH = 6.8, and the flies were allowed to lay eggs for 24 h. In surface treatments, the Carolina medium was hydrated with phosphate buffer, pH = 6.8, the flies were allowed to lay eggs for 24 h and, after 60 \u00b1 12 h, 1.5 mL of the different FeAT-NPs concentrations (0\u20135 mmol\u00b7L\u22121 of Fe), in phosphate buffer pH = 6.8, was added to each bottle. In addition to the negative control, a positive control with 2.5 mmol\u00b7L\u22121 of MMS was carried out in each experiment. The eyes of the hatched females and males, submerged in solutions with ethanol, Tween-80 and water to allow a clear scoring of the ommatidia, were observed with a Leica GZ6 stereomicroscope at 45\u00d7 magnification, looking for mutant white spots. The eyes with at least one spot (mosaic eyes) were counted, as well as the number of spots per 100 eyes; in this last case, the spot size, based on the number of affected ommatidia, was also determined. At least 300 eyes per sex and tested condition, from a minimum of two independent experiments, were analyzed. Toxicity was semi-quantitatively estimated by counting the number of emerged flies per bottle in all the tested conditions.
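The mosaic-eye comparison described above amounts to a chi-square test on eyes with and without spots, treated versus negative control; a sketch with invented counts (only the 300+ eyes per condition matches the text):

from scipy.stats import chi2_contingency

treated = [34, 286]   # [mosaic eyes, non-mosaic eyes] at one concentration
control = [15, 305]
chi2, p, dof, expected = chi2_contingency([treated, control])
print(chi2, p)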
The data are presented as mean values \u00b1 standard deviation (SD), or standard error (SE). The differences between non-treated (negative control) and treated cells, in the different experiments and assays, were evaluated with paired and unpaired Student\u2019s t tests. The linearity of the dose\u2013response data was checked with linear regression analyses. For the Drosophila data, the mosaic eyes induced by each nanoparticle concentration, as well as by the positive control, were compared to the negative control with chi-square tests. In addition, the Frei\u2013W\u00fcrgler double-decision chi-square test was applied to the analysis of the number of spots in 100 eyes, with m = 2 for small, medium, and total spots and m = 5 for large spots; in this case, results were expressed as negative (\u2212), positive (+), weakly positive (w+) or inconclusive (i), based on the acceptance, or rejection, of the null (H0) or alternative (HA) hypothesis.

Some pharmacokinetic aspects of nanoparticulated iron products, with regard to their performance in humans, can be modelled with animal and cell-based models, according to the European Medicines Agency. In this work, the incorporation, biotransformation and biological effects of FeAT-NPs have been addressed in cell cultures and in the larvae of D. melanogaster. The cellular incorporation has been shown to be cell type-dependent, probably as a function of the endocytic mechanisms, with the highest incorporation in the smallest cells, that is, in the A2780 model of ovarian cancer. No precipitation of these nanoparticles was ever detected in any experiment, independently of the exposure time. The metabolic evolution within the cell cytosol revealed minimum solubilization of the incorporated particles after three hours of exposure, which did not compromise cell viability in any of the studied cell models. In agreement with the elemental speciation studies, and with the minimal production of free ionic iron within this period, a minimum ROS increase was also observed in all the cells under study. Furthermore, the induced DNA damage, detected with the comet assay, was only biologically relevant in the NER-deficient fibroblast model, at the highest exposure concentrations. Thus, although DNA oxidative damage could be occurring, the repair mechanisms of the cells seem to be efficient enough to eliminate it in the studied models. Finally, the in vivo experiments in larvae of D. melanogaster showed some evidence of genotoxicity with the two highest FeAT-NPs concentrations, in surface treatments, in both sexes, but with small increases with respect to the negative control. Overall, the synthetic FeAT-NPs are shown to be safe enough to be used, alone or as carriers of different drugs, in future experiments. Nevertheless, distribution studies in a relevant animal model are essential to evaluate the distribution, metabolism and excretion of these nanoparticles and the degree of their in vivo degradation or solubilization products."}
+{"text": "Chemotherapy is a common approach for cancer treatment, but intrinsic genetic mutations in different individuals may cause different responses to chemotherapy, resulting in unique histopathological changes. The genetic mutation, along with the distinct histopathological features, may indicate new tumor entities. BCOR-CCNB3 sarcomas are a kind of Ewing-like sarcoma (ELS) occurring mostly in bone and soft tissues.
No gene fusion other than BCOR-CCNB3 has been found in this type of tumor. We herein report the case of a 17-year-old male patient who presented with a mass on his left shoulder that was diagnosed as undifferentiated small round cell sarcoma according to core biopsy. The patient received five courses of preoperative chemotherapy, and the tumor was resected and analyzed. Primitive small round cells and larger myoid cells were observed in the resected tumor tissue but not in the biopsy, and arteriolar stenosis and occlusion were also detected, indicating a dramatic change in the histopathological features of this tumor. In addition, the immunohistochemical results showed altered staining patterns of BCOR, bcl2, CyclinD1, TLE1, AR, SMA, CD117, STAB2, CD56, and CD99 in tumor tissues after chemotherapy. Notably, RNA sequencing revealed an RNF213-SLC26A11 fusion in the tumor sample. The BCOR-CCNB3 sarcoma with RNF213-SLC26A11 fusion may indicate a subset of tumors that undergo histopathological changes in response to chemotherapy. More similar cases in the future may help to clarify the clinical meaning of the RNF213-SLC26A11 fusion in BCOR-CCNB3 sarcomas and the underlying mechanisms.

Primitive small blue round cell tumors (SBRCTs) are malignant soft tissue sarcomas common in children and adolescents. Ewing sarcoma (ES) is the prototypical SBRCT characterized by the fusion of EWSR1 and a member of the ETS family of transcription factors. With the rapid development of molecular diagnostics, new entities of SBRCTs have been recognized. Recent reports described a subset of SBRCTs, Ewing-like sarcomas (ELS), which are morphologically similar to ES. ELS contain a broad spectrum of tumors with different gene fusions, and CIC-rearranged sarcomas, the most common ELS, account for about two-thirds of ELS. BCOR-CCNB3 sarcomas are a kind of ELS that often occur in bone and soft tissues. Morphologically, BCOR-CCNB3 sarcomas are highly cellular sarcomas composed of varying spindle and ovoid cells, with monomorphic angulated nuclei, fine chromatin, indistinct nucleoli, and a prominent, delicate capillary network. The stroma shows varying amounts of myxoid and collagen material, and the morphology of BCOR-CCNB3 sarcomas is quite varied. Increased cell density and obvious pleomorphism have been observed in the recurrent and metastatic lesions of BCOR-CCNB3 sarcomas. Heterogeneity has also been described in BCOR-CCNB3 sarcomas after chemotherapy, with possible changes of immunophenotype such as decreased or lost expression of CCNB3 and SATB2 [5–8]. BCOR-CCNB3 sarcomas may have genetic changes such as HOX family and NTRK3 up-regulation, but no gene fusion other than BCOR-CCNB3 has been reported so far. The RNF213-SLC26A11 gene fusion has only been found in chronic myeloid leukemia and glioma. Here, a BCOR-CCNB3 sarcoma is described with concurrent RNF213-SLC26A11 gene fusion, showing unique morphologic features and a peculiar immunophenotype after chemotherapy.

A 17-year-old male patient presented with a mass on his left shoulder, without apparent cause, 6 months ago. This patient received no treatment until the mass enlarged with pain and discomfort.
MRI showed a huge soft tissue mass (10.0cm\u00d78.1cm\u00d76.6cm) growing around the left scapula with the imaging features of long T1, long T2, and high DWI signal slides were continuously sectioned for immunohistochemical staining of BCOR, Bcl2, CyclinD1, ERG, Fli1, TLE1, AR, Ki67, SMA, Desmin, S-100, STAB2, CD117, CD56, and CD99 with the Roche BenchMark ULTRA fully automated immunohistochemistry stainer.BCOR Break Apart FISH Probe kit and Vysis EWSR1 Break Apart FISH Probe kit . Two hundred non-overlapping intact nuclei were examined in each slide by a Leica fluorescence microscope. A specimen was considered positive when at least 20% of the nuclei showed a break-apart signal.According to the manufacturer\u2019s instruction, FISH was performed on 4-\u03bcm-thick FFPE sections (after chemotherapy) with the Dual Color Probe, using Vysis http://code.google.com/p/fusioncatcher/) [Total RNA was extracted from FFPE samples using the RNeasy FFPE Kit (QIAGEN). RNA samples were quantified with BioAnalyzer 2100 (Agilent Technologies), depleted of ribosomal RNA (rRNA) and residual genomic DNA, purified with Agencourt RNA Clean XP Beads, and subjected to the construction of the sequencing library with the KAPA Stranded RNA-Seq Library Preparation Kit. The resultant library was sequenced on Illumina HiSeq X Ten platform (Illumina) for paired-end 150bp sequencing. The outcome in FASTQ format were generated with bcl2fastq v2.16.0.10 software (Illumina), and the somatic fusion genes were identified using FusionCatcher pipeline with default settings (atcher/) . The GRCRT-PCR was performed using the primers BCOR-E15-F: 5\u2032-TCACGAACGAAATTCAGACTC-3\u2032 and CCNB3-E5-R: 5\u2032- GCTACTACTGGTGTGACTTCC-3\u2032 for detecting BCOR-CCNB3 fusion, and RNF213-E2-F: 5\u2032-AGGAGGAAACCCCCAAGTTC-3\u2032 and SLC26A11-E8-R: 5\u2032-TCGAAGGAGTACGCAACCAG-3\u2032 for detecting RNF213-SLC26A11 fusion. For Sanger sequencing, BCOR-E15-F and RNF213-E2-F were used.In the initial core biopsy specimen collected before chemotherapy, the tumor cells were arranged in solid sheets, with the cell morphology of round or oval, eosinophilic cytoplasm, fine chromatin, indistinct nucleoli, and rich capillary network Fig. C. After The immunohistochemical staining showed the diffusely and strongly expression of BCOR, bcl2, CyclinD1, TLE1, and AR in the biopsy specimen, but these markers were focal positive mainly in myoid cells in the radical specimen Dual Color), a 2-fusion signal pattern could be observed were detected. Combined with FISH analysis of EWSR1 (mentioned above), the diagnosis of Ewing\u2019s sarcoma could be excluded.BCOR-CCNB3 gene fusion in this case. This fusion joined the exons 1\u201315 of BCOR (ENST00000342274) to the exons 5\u201312 of CCNB3 (ENST00000276014) , and almost no read was observed in the first four exons of CCNB3.RNA sequencing identified a BCOR exon 15 (BCOR-E15-F) and CCNB3 exon 5 (CCNB3-E5-R) were designed and RT-PCR was performed to obtain the fusion fragment. Sanger sequencing showed that the fusion joined the regular acceptor site of CCNB3 exon 5 to the putative GGTGAG donor splice-site (just before the stop codon TGA) of BCOR, generating a BCOR-CCNB3 fusion protein. 
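Conceptually, pinpointing a fusion junction such as the one confirmed here by RT-PCR and Sanger sequencing amounts to finding the position in a chimeric read where a prefix matching the 5′ partner gives way to a suffix matching the 3′ partner. The toy Python sketch below illustrates only that idea; the sequences are invented, and this is neither the FusionCatcher algorithm nor the actual exon sequences of BCOR or CCNB3.

```python
from typing import Optional

def find_junction(read: str, gene5: str, gene3: str) -> Optional[int]:
    """Return a split point i such that read[:i] occurs in gene5 and
    read[i:] occurs in gene3, or None if no such split exists."""
    for i in range(1, len(read)):
        if read[:i] in gene5 and read[i:] in gene3:
            return i
    return None

# Invented toy sequences (not the real BCOR exon 15 / CCNB3 exon 5).
gene5 = "ATGGCTGAGGTGAG"      # stands in for the 3' end of the 5' partner
gene3 = "GTTCAAGGACCTGA"      # stands in for the start of the 3' partner
read  = "GCTGAGGTGAGGTTCAAG"  # chimeric read spanning the junction

i = find_junction(read, gene5, gene3)
if i is not None:
    print(f"junction after read position {i}: {read[:i]} | {read[i:]}")
```

Real fusion callers work on millions of reads with alignment, mismatch tolerance and splice-site models, but the underlying question they answer per read is the same one this sketch poses.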
The break-down of the BCOR gene at the genomic level was validated by FISH exons 1\u20132 were joined to the SLC26A11 (ENST00000411502) exons 8\u201318, generating an in-frame RNF213-SLC26A11 isoform to enhance BCL6-mediated transcriptional repression [BCOR gene mutated mainly through internal tandem duplications (ITD) and gene fusion, among which the BCOR-CCNB3 is the most common fusion. CCNB3, mainly expressed in the germ cells of testis, is a member of the cyclin B family [CCNB3, BCOR has been reported to be fused with ZC3H7B, CIITA, MAML3, and KMT2D [BCOR-CCNB3 has been reported in BCOR-CCNB3 sarcoma so far. The BCOR-CCNB3 sarcoma with RNF213-SLC26A11 gene fusion is the first discovered case.The pression , 4, 12. nd KMT2D . HoweverBCOR-CCNB3 sarcoma is more common in men, mainly in older children and adolescents, and it is more in bone than in soft tissue [BCOR-CCNB3 sarcoma with RNF213-SLC26A11 gene fusion is similar to regular BCOR-CCNB3 sarcomas in terms of histology. Regular BCOR-CCNB3 sarcoma is often composed of round or oval cells in different proportions, distributing in solid sheets, with less cytoplasm, fine chromatin, indistinct nucleoli, and rich capillary network [The t tissue \u20136. The c network . Other m network , 14\u201317.BCOR-CCNB3 sarcomas treated with neoadjuvant chemotherapy are similar to the untreated samples in morphology, but they display hypocellular loose fibrous tissue. Sometimes, there are spindle cells with slit-like spaces and extravasated erythrocytes, foci of extramedullary hematopoiesis, and pleomorphic tumor cells with multi-nucleated cells. Epithelioid cells appear in small clusters or cords, with focally prominent nucleoli [RNF213-SLC26A11 gene fusion) was observed in the BCOR-CCNB3 sarcoma after neoadjuvant chemotherapy. Most noticeably, two new types of cells, the small round cells similar to classic ES and the larger myoid cells resembling smooth muscle cells or myoepithelioma cells, present in the resected tumor tissue but not the biopsy. The myoid cells showed SMA staining, indicating myogenic differentiation. In addition, we observed a large number of arterioles with thickened or even occluded walls in this tumor, which has never been reported in BCOR-CCNB3 sarcomas. Our discovery may extend the morphological spectrum of BCOR-CCNB3 sarcomas after chemotherapy.nucleoli , 5\u20138, 18BCOR-CCNB3 sarcomas. The diffuse positive expression of CCNB3 can help diagnosis, but the focal or weakly positive staining of CCNB3 occurs in solitary fibrous tumor (SFT), rhabdomyosarcoma, ES, and fibrosarcoma [BCOR-CCNB3 sarcomas [BCOR-CCNB3 sarcomas. The expression of AR in sarcoma often indicates potential progression [BCOR-CCNB3 sarcomas often change after neoadjuvant chemotherapy. For example, CCNB3 and SATB2 often have weakened expression or no expression [So far, there have been no commonly accepted immunohistochemical markers for osarcoma . Althougsarcomas , its spesarcomas . Even wisarcomas , undiffesarcomas . We repogression , but it pression , 15, 18.RNF213 gene locates on chromosome 17q25.3, coding for a ring finger protein with ubiquitin ligase and ATPase activities. The RNF213 gene plays an important role in angiogenesis and inflammatory responses of endothelial cells. RNF213 is a susceptibility gene for moyamoya and it functions to maintain blood flow in the case of hypotension in the brain. 
It is also a genetic risk factor for pulmonary hypertension and systemic vascular disease. RNF213 mutations have been found in many cancers and sarcomas, and RNF213 gene translocation occurs in anaplastic large cell lymphoma and inflammatory myofibroblastic tumor. The RNF213-SLC26A11 gene fusion has only been reported in chronic myeloid leukemia and glioma, and the role of this fusion in BCOR-CCNB3 sarcomas is unknown. Because RNF213 is involved in angiogenesis and vascular-related lesions, and because the BCOR-CCNB3 sarcoma with RNF213-SLC26A11 gene fusion showed thickening of the arterial wall, which has not been described for BCOR-CCNB3 sarcomas after neoadjuvant chemotherapy, it has been speculated that the RNF213-SLC26A11 gene fusion may participate in the vascular change of the tumor.

BCOR-CCNB3 sarcomas are relatively rare tumors with no standard therapeutic schedule, but neoadjuvant chemotherapy can effectively improve the overall survival (OS) and disease-free interval (DFI) of patients. Although the RNF213-SLC26A11 fusion has been reported in chronic myeloid leukemia and glioma, the efficacy and safety of treatment directed at RNF213-SLC26A11 were still unknown in BCOR-CCNB3 sarcomas; for this reason, the therapeutic strategy was not changed. In this case, the neoadjuvant chemotherapy with oxaliplatin, pirarubicin, and ifosfamide was effective, resulting in no recurrence or metastasis of the tumor half a year after surgery. Therefore, accumulating cases with different regimens may help to evaluate the chemotherapy protocol for BCOR-CCNB3 sarcomas."} +{"text": "Scientific research in colleges and universities is of great significance to national innovation. Based on evolutionary game theory, this paper constructs a theoretical model of the state, universities, and researchers, and conducts a numerical simulation of the model. The results reveal that when the scientific researchers' success rate reaches a certain threshold, more and more researchers will choose to invest in scientific research; universities and the state will then hold a long-term incentive attitude toward scientific research and scientific innovation. The study further found that the greater the success rate of researchers, the faster universities and the state will actively encourage scientific research. Innovation promotes the development of social productive forces, changes in production relations and social systems, and the development of human thinking and culture. Talent is the core element of innovation. To give full play to the core role of talent in innovation-driven development, it is necessary to firmly establish the strategic position of talent to lead development, gather talents in all directions, and strive to consolidate the talent foundation for innovation and development. Innovation-driven development has thus become an important strategy of national development and has been placed in a position of primary importance.

In the new era of innovation-driven development, as one of the important subjects of knowledge innovation, colleges and universities play an irreplaceable role in promoting national development, social and economic progress, and industry promotion.
From the perspective of social development and economic progress, scientific research innovation in colleges and universities plays a great role in promoting the collaborative innovation of colleges and universities, scientific research institutions and industries, as well as the innovation of academic start-up enterprises, and the reform and breakthrough of the new generation of information technology industry . From thEvolutionary game theory can be traced back to Scottish moral philosophy led by Ferguson, Hume, Mondewell, and Smith. In the German historical school and Marxist economics, evolutionary thinking is widely used to analyze the changes of social and economic structures. With the further promotion of Darwinism, the evolution theory has developed more rapidly .According to evolutionary game theory, under the rule of survival of the fittest in nature, individual organisms will imitate the behavior of species whose income is above the average level in the group, adopt the survival rule that most suitable for nature, change their original living habits and behaviors, and promote the group behavior of species to reach equilibrium.Biologists try to use game theory to build various models of biological competition and evolution . CombineEvolutionary game theory combines the \u201cequilibrium view\u201d in economics with the \u201cadaptability\u201d in biology. Under the conditions of incomplete rationality, asymmetric information, and deviation from others\u2019 behavior expectations, evolutionary game describes the process and results of people\u2019s continuous response to the impact of the outside world through imitation, learning, trial and error . EvolutiThe hypothesis of human in evolutionary game theory is \u201climited and rational people.\u201d Limited rationality is expressed as a dynamic evolutionary behavior choice determined by individuals in their understanding and learning of the game environment. This kind of behavior choice is to make behavior judgments and behavior decisions for the players from the positive side, so that the players can realize the target income of individuals in the process of continuous learning and imitation, and finally achieve the dynamic equilibrium of the game group .As time goes by, evolutionary game theory has been used to analyze many social and economic problems . EvolutiEvolutionary game model must meet four main conditions to ensure that all survivors\u2019 behavior is profit maximization. The four conditions are diversity, continuity of behavior, profit driven growth and limited path dependence . DiversiWhen using evolutionary game theory to analyze social phenomena, the government plays a very important role in the game process. For example, Innovation refers to invention, innovation, imitation and transcendence. Innovation is defined as \u201cnew academic ideas, new scientific discoveries, new technological inventions, and new industrial directions\u201d . InnovatTo sum up, the scientific research innovation of enterprises and university researchers has positive externalities. This positive externality can optimize its social benefits through relevant policy adjustments. Adjustment means include certain R&D subsidies or special preferential policies given to R&D personnel by the government, universities and other departments. The influence of the state on university researchers is macro, while that of universities is micro. 
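The imitation dynamic described above, in which strategies earning more than the population average gain adherents, is commonly formalized with the replicator equation. Before the tripartite model is introduced, a minimal one-population, two-strategy sketch in Python may help fix ideas; the payoff numbers are arbitrary placeholders rather than the parameters of this paper's model.

```python
# Minimal replicator dynamics: dx/dt = x(1-x) * (payoff_A - payoff_B),
# where x is the share of players using strategy A.
def replicator_step(x: float, payoff_a: float, payoff_b: float,
                    dt: float = 0.01) -> float:
    return x + dt * x * (1.0 - x) * (payoff_a - payoff_b)

x = 0.3  # initial share of "invest in scientific research"
for _ in range(5000):
    # Illustrative payoffs: A's payoff grows as more players adopt it.
    payoff_a = 2.0 * x + 0.5  # arbitrary placeholder values
    payoff_b = 1.0
    x = replicator_step(x, payoff_a, payoff_b)

print(f"long-run share of strategy A: {x:.3f}")
```

With these placeholder payoffs the interior rest point sits at x = 0.25: populations starting above it converge to full adoption and those below it to abandonment, the same kind of threshold behaviour that the tripartite model below exhibits for the researchers' success rate.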
Therefore, this study chooses two subjects, the state and universities, to analyze the impact of national and university policies on the research behavior of university researchers.On the one hand, innovation ability is related to individual factors. Innovation ability is a process in which people create new ideas or new methods to solve problems in the process of learning, production and research, and make efforts to realize the ideas and methods . The proScientific research performance is also related to external pressure. Performance pressure has a positive and negative boundary effect on teachers\u2019 research behavior. When employees regard performance pressure as a challenge, factors such as job remodeling, job involvement, task proficiency, performance improvement and creativity will have a positive effect on performance pressure .The time input will also affect the scientific research performance of university teachers. There is an inverted U-shaped relationship between work time (work engagement) and work performance . This inThe creativity of college teachers will also affect their scientific research ability. Compared with those with low creativity, those with high creativity have higher cognitive inhibition ability, which can effectively inhibit the reaction tendency unrelated to advantages .On the other hand, innovation ability is related to external factors. Organizational management system and organizational support affect the sense of scientific research efficacy, and then have a significant positive impact on scientific research innovation . Rui et Inclusive leadership has a significant positive impact on the innovation performance of researchers, and the sense of responsibility significantly regulates the relationship between inclusive leadership and innovation performance of researchers . QingjinIn terms of innovation performance of scientific research teams in colleges and universities, In terms of industry-university cooperation and scientific research performance, Evolutionary game theory can be used to analyze teachers\u2019 scientific research. (1) The core of evolutionary game theory is man\u2019s \u201cbounded rationality.\u201d The core of evolutionary game theory is that as time goes on, individual behavior strategy changes and their final behavior choices are individual behavior strategy choices based on group income perception and comparison, and are behavior strategy choices of \u201climited and rational people.\u201d (2) The essence of university teachers\u2019 behavior strategy choice is human\u2019s \u201climited rationality.\u201d College teachers\u2019 time, energy and information are limited. The core of the theory of \u201climited rationality\u201d is that people\u2019s time, energy and information are limited.However, there are few literatures on the research innovation of university teachers using evolutionary game theory. With two players, As pointed out above, innovation has positive externalities and needs the support of the state and government. Therefore, this paper takes the country, universities and researchers as the game players. Assuming that university teachers have scientific research innovation behavior, not all scientific research innovation behaviors will be successful, so this study increases the probability of scientific research success. 
Other assumptions of this study are different from the existing literature, and the conclusions of this study are new.To sum up, from the perspective of \u201climited and rational people,\u201d this paper analyzes the dynamic influence, evolutionary process, final behavior strategy selection and dynamic stability of university researchers by using the theory of tripartite evolutionary game. The theoretical contribution of this paper is to study the incentive mechanism of scientific research innovation in universities from different research perspectives and different research methods. The practical contribution lies in providing policy reference for scientific research management in colleges and universities.H1: suppose there are three game players, namely, the state, university managers, and university researchers. As a natural person, the state has two strategic choices (high investment or low investment in scientific research innovation). University managers also have two strategic choices (positive incentive or negative incentive to university researchers). Researchers also have two strategic choices (positive responses or negative responses).H2: when the state has a high investment in scientific research innovation in colleges and universities, and scientific researchers actively participate, the state will get huge benefits H3: when colleges and universities actively encourage, and researchers respond positively\uff0ccolleges and universities will obtain great benefitsH4: if researchers actively respond to the incentive policies, they will obtain certain benefitsH5: the probability of high national investment in university scientific research innovation is When the state actively encourages scientific research, if scientific researchers also actively carry out scientific research and innovation, whether colleges and universities actively encourage scientific research or not, the state will receive corresponding returns, and its comprehensive income is When the state treats scientific research negatively, if researchers actively carry out scientific research and innovation, whether colleges and universities actively encourage scientific research or not, the state will also obtain certain benefits from teachers\u2019 active scientific research When colleges and universities actively encourage scientific research, if researchers also actively carry out scientific research and innovation, whether the state actively encourages scientific research or not, colleges and universities will pay corresponding incentive costs according to the completion of scientific research projects, and therefore obtain corresponding returns. At this time, the comprehensive income of colleges and Universities is When colleges and universities treat scientific research negatively, if scientific researchers actively carry out scientific research and innovation, at this time, whether the state actively encourages scientific research or not, colleges and universities will obtain corresponding benefits because of the achievements made by scientific researchers. At the same time, because colleges and universities treat scientific research negatively, it will have a long-term negative impact and cause losses to the future development of colleges and universities. 
At this time, the comprehensive income of colleges and universities is When teachers actively participate in scientific research and universities and the state encourages scientific research innovation, teachers will receive rewards from the state and universities according to the completion of scientific research achievements, but researchers need to pay a certain cost of time and energy for scientific research. At this time, the comprehensive income of researchers is When teachers treat scientific research negatively and actively encourage scientific research innovation in colleges and universities or the state, teachers will hurt themselves because of their negative treatment of scientific research and need to bear certain losses. At this time, the comprehensive income of scientific researchers is Assuming that the state actively encourages scientific research innovation and makes a large amount of investment, the expected return is According to Suppose that the expected income of actively encouraging scientific research in Colleges and universities is According to Suppose that the expected income of University researchers actively carrying out scientific research isAccording to Let Based on According to the equilibrium point stability judgment method proposed by According to the stability analysis of the above eight equilibrium points, there are four stable equilibrium points in the tripartite evolutionary game of the state, universities, and researchers, namely E5 , E6 , E7 , and E8 .When When When When The above research shows that when When According to the definition and assumptions of tripartite game variables ; by the Science and Technology Project of Jiangxi Provincial Department of Education, grant number (No. GJJ211929); by the Humanities and Social Sciences Key Research Base Project of Universities in Jiangxi Province, grant number (No. JD20112); by the 2022\u20132023 topic of the Professional Committee of Talent Development of the China Education Development Strategy Society \u201cResearch on the Mechanism of the Intention of High level Talents Flow in Jiangxi Universities\u201d [No. RCZWH2022021].The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "To determine if having Raynaud\u2019s phenomenon (RP) affects the work ability, job retainment, or occurrence of sick leave.Surveys on the working-age general population of northern Sweden were conducted in 2015 and 2021, gathering data on RP, occupation and sick leave. Work ability was assessed using the Work Ability Score.p\u2009=\u20090.459 for women and p\u2009=\u20090.254 for men), after adjusting for age, body mass index, physical workload, cardiovascular disease, and perceived stress. Having retained the same main livelihood since baseline was reported by 227 (58.5%) women with RP, 1,163 (51.2%) women without RP, 152 (52.6%) men with RP, and 1,075 (54.1%) men without RP (p\u2009=\u20090.002 for women and p\u2009=\u20090.127 for men). 
At follow-up, any occurrence of sick leave during the last year was reported by 80 (21.4%) women with RP, 410 (18.6%) women without RP, 48 (17.1%) men with RP, and 268 (13.7%) men without RP (p = 0.208 for women and p = 0.133 for men). Among those reporting sick leave, the mean (SD) duration in months was 2.93 (3.76) for women with RP, 3.00 (4.64) for women without RP, 2.77 (3.79) for men with RP, and 2.91 (12.45) for men without RP (p = 0.849 for women and p = 0.367 for men). The study population consisted of 2,703 women and 2,314 men, among which 390 women and 290 men reported RP at follow-up. For women, the mean [standard deviation (SD)] Work Ability Score was 8.02 (2.24) for subjects reporting RP and 7.68 (2.46) for those without RP. For men, the corresponding numbers were 7.37 (2.03) and 7.61 (2.14), respectively. Multiple linear regression did not show an association between RP status and work ability.

For neither women nor men was there a significant effect of having RP on work ability. Women with RP reported a slightly higher job retainment compared to those without the condition, while there was no difference in job retainment among men. For neither gender did the presence of RP influence the occurrence of recent sick leave, nor did it affect the length of time away from work.

The online version contains supplementary material available at 10.1186/s12995-022-00354-2.

Raynaud's phenomenon (RP) is the clinical manifestation of vasospasm affecting digital blood vessels. The concept of work ability is complex and entails the balance between physical and cognitive demands in relation to the resources of the individual, modified by the organizational context.

The primary aim of this study was to determine if having Raynaud's phenomenon affects the work ability, job retainment, or occurrence of sick leave. Secondary aims were to investigate longitudinal effects of incident or remittent Raynaud's phenomenon on work ability, and evaluate potential gender differences.

This prospective closed-cohort study was part of the Cold and Health In Northern Sweden (CHINS) research project, which was initiated in 2015 to broadly explore adverse health effects from ambient cold exposure, and has previously been described in detail. Cardiovascular diseases were asked about in the baseline survey, and included the presence of physician-diagnosed hypertension, angina pectoris, myocardial infarction, or stroke. The Mann–Whitney U test and Pearson's chi-square test were used to determine statistical differences for continuous and categorical variables, respectively. Simple and multiple linear regression were used to model the relations between the WAS and independent variables. Statistical tests were chosen based on the distribution of data, and non-parametric tests were opted for when the assumption of normal distribution was violated. A p value < 0.05 was considered statistically significant. Statistical analyses were performed using SPSS. Responses from both surveys were merged based on social security numbers. Continuous variables were described as mean values with standard deviation (SD), while categorical variables were presented as numbers and valid percentages.
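The analysis pipeline just described, Mann–Whitney U and chi-square tests for group differences plus a linear model of the Work Ability Score (WAS) on RP status and covariates, can be sketched as follows in Python. The column names, the file name, and the use of scipy/statsmodels are assumptions for illustration; the study itself used SPSS.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency
import statsmodels.formula.api as smf

# df is assumed to hold one row per subject with columns:
# was (0-10), rp (0/1), age, bmi, workload, cvd (0/1), stress.
df = pd.read_csv("chins_followup.csv")  # hypothetical file name

# Group difference in WAS by RP status (non-parametric).
u, p = mannwhitneyu(df.loc[df.rp == 1, "was"], df.loc[df.rp == 0, "was"])
print(f"Mann-Whitney U: U={u:.0f}, p={p:.3f}")

# Association between two categorical variables, e.g. RP and CVD.
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df.rp, df.cvd))
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# Multiple linear regression of WAS on RP status and covariates.
model = smf.ols("was ~ rp + age + bmi + workload + cvd + stress", data=df).fit()
print(model.summary())
```

The adjusted RP coefficient from such a model is what the abstract's p = 0.459 (women) and p = 0.254 (men) refer to.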
Subjects were defined as having RP through a positive response to a single questionnaire item that was present in both surveys: \u201cDoes one or more of your fingers turn white (as shown on picture) when exposed to moisture or cold?\u201d, and this was supported by a previously developed color chart . In the N\u2009=\u200980) and invalid social security numbers (N\u2009=\u2009111), all survey responses could not be matched to the original dataset, leaving 5,017 subjects available for analysis consisted of 12,627 subjects, of which 888 were deceased or had moved from the study region at the time of follow-up (2021). For an additional 31 subjects, the written invitation to participate in the follow-up survey could not be delivered by the postal services. There were 5,208 responses to the follow-up survey, yielding a response rate of 44.4%. Due to multiple responses and 2,314 men. Other baseline characteristics are presented in Table Among women, the mean (SD) WAS was 8.02 (2.24) for subjects reporting RP at follow-up and 7.68 (2.46) for those without RP. For men, the corresponding numbers were 7.37 (2.03) and 7.61 (2.14), respectively. In unadjusted analyses, there was a significant effect of RP status on the WAS among women, but not men (Table p\u2009=\u20090.077). The corresponding figures for men were 7.23 (2.11) and 7.63 (2.14) (p\u2009=\u20090.025). In addition, there were 104 women and 88 men that reported RP at baseline but remitted during follow-up. For women, the mean (SD) WAS was 7.09 (2.93) among remitted cases and 7.94 (2.26) among those with persistent RP (p\u2009=\u20090.032). The corresponding figures for men were 7.09 (2.25) and 7.50 (1.94) (p\u2009=\u20090.239).In longitudinal analyses, there were 96 women and 112 men who negated RP at baseline but reported RP during follow-up. For women, the mean (SD) WAS was 8.18 (2.19) among such incident cases and 7.71 (2.44) among subjects who negated RP both at baseline and follow-up (p\u2009=\u20090.002 for women and p\u2009=\u20090.127 for men). Reasons for changing livelihood are presented in Table p\u2009=\u20090.004 for women and p\u2009=\u20090.407 for men). Among currently working subjects, a high number of weekly work hours was reported by women with RP . Among those reporting sick leave, the mean (SD) duration in months was 2.93 (3.76) for women with RP, 3.00 (4.64) for women without RP, 2.77 (3.79) for men with RP, and 2.91 (12.45) for men without RP (p\u2009=\u20090.849 for women and p\u2009=\u20090.367 for men).Having retained the same main livelihood since the baseline survey 2015) was reported by 227 (58.5%) women with RP, 1,163 (51.2%) women without RP, 152 (52.6%) men with RP, and 1,075 54.1%) men without RP (.1% men w015 was rOur study did not reveal a significant effect of having Raynaud\u2019s phenomenon on work ability, when analyzed using multiple linear regression. Women with Raynaud\u2019s phenomenon reported a slightly higher job retainment compared to those without the condition, and generally long working hours. There were no statistically significant differences in sick leave occurrence or duration.The prevalence of RP in the present surveys was comparable with the roughly 12% that was reported in a Finnish population-based study . The conThe present study did not show any significant effect of RP status on work ability in the multiple linear regression models, when also using age, BMI, physical workload, cardiovascular disease, and perceived stress as covariates. 
However, in unadjusted analyses, there was a significant positive effect of RP status on work ability among women. Also, the results of longitudinal analyses suggested that men with incident RP had a slightly lower work ability than healthy subjects, while women with remitted RP reported a lower work ability than those with persistent disease. It is plausible that RP is indeed a hindrance for work, especially in manual outdoor occupations where exposure to ambient and contact cold, as well as hand-arm vibration, can trigger vasospastic attacks. This might reduce the work capacity in tasks requiring grip force and manual dexterity, and motivate the worker to seek a heated environment in order to regain full use of the hands. Such manual outdoor occupations are common among working men in northern Sweden, as evidenced both by the descriptive analyses on occupation in this study Table , as wellImportantly, most subjects reported a good work ability, regardless of having RP or not. Neither were there any significant effects on sick leave parameters. In this context, it is important to recall that most subjects reported a mild state of RP, regarding both attack frequency, distribution of paleness, and disease progression over time. Thus, it is reasonable to assume that the condition only had a minor impact on work ability. However, concern has been raised if the WAS sufficiently captures limitations in work ability for conditions that only affect the hands , since iRegarding the effects of other factors on work ability, the present study showed a significant effect of age on the WAS. A previous Swedish study on work ability among vibration-exposed workers subjects, where the prevalence of RP was 30% among men and 50% among women, reported an effect of age and distribution of neurosensory symptoms, but not vascular symptoms . The \u03b2 cThere was a large proportion of survey non-responders, and an underrepresentation of younger age groups among responders (as presented in Additional file To the authors\u2019 knowledge, this is the first population-based prospective study on work ability among subjects with RP. The study was performed in a Scandinavian setting, where the condition is quite common. The surveys were detailed and enabled longitudinal analyses with six years of follow-up time. Also, both surveys were distributed during the same late-winter season, so that cold exposure, which often serves as a trigger for vasospastic symptoms, would be comparable. Also, since previous studies have reported on differences in RP between women and men , 26, allFor neither women nor men was there a significant effect of having Raynaud\u2019s phenomenon on work ability. Women with Raynaud\u2019s phenomenon reported a slightly higher job retainment compared to those without the condition, while there was no difference in job retainment among men. For neither gender did the presence of Raynaud\u2019s phenomenon influence the occurrence of recent sick leave, nor did it affect the length of time away from work. These results suggest that having Raynaud\u2019s phenomenon does not have a profound effect on the participation in working life.Additional file 1. Distribution of age, gender, and county for the sampling frame, baseline survey responders, and follow-up survey responders. 
Analysis of responder patterns for the baseline and follow-up survey."} +{"text": "Special focus is given to studies that employ signal enhancement(hyperpolarization) methods such as dissolution dynamic nuclear polarization(dDNP) and involving nuclear spin-bearing solutes that undergo reactionsmediated by enzymes and membrane transport proteins. We extend the workgiven in a recent presentation on this topic to now include enzymes with two or more substrates andvarious enzyme reaction mechanisms as classified by Cleland, with particularreference to non-first-order processes. Using this approach, we can address some pressing questions in the field from a theoretical standpoint. Forexample, why does binding of a hyperpolarized substrate to an enzyme NMR and MRI provide a wealth of informationfrom structure elucidation, protein dynamics and metabolic profiling throughto disease diagnostics in oncology, cardiology and neurology among others.The technique's low sensitivity is one of the primary concerns in themagnetic resonance community and is often a limiting factor in experimentsfrom solid-state NMR to medical imaging. Recent work has shown that thesensitivity of NMR experiments can be improved by using non-equilibriumhyperpolarization techniques such as dissolution dynamic nuclearpolarization (dDNP) to boost signal intensities by many orders of magnitude. Suchtechniques have led to new applications and necessitated the development ofacquisition strategies to exploit the hyperpolarized magnetization in a time-efficient manner as well as new tools for signalprocessing and image reconstruction . A challenge withthe interpretation of these recordings is that, unlike radio tracers, hyperpolarized MR is a non-tracer technique requiring the injection ofphysiological or even supra-physiological concentrations of substrate.To date there have been many mathematical methods devised for analysing the kinetic time courses in dDNP NMR studies . However,until recently there has been little consensus on the best methods foranalysing and then interpreting reaction kinetics measured therein. A theoretical framework has only recently appeared to fully elucidate theunderlying mechanisms . Onechallenge is that the widely used Bloch\u2013McConnell equations describe the exchange of magnetization of only the MR active nuclei, while the reaction kinetics are subject to a plethora of molecular interactions in a(bio)chemical milieu. Furthermore, in a typical hyperpolarized MR experimentthe initial injection of a non-tracer concentration of substrate causes thereaction system to be perturbed from its equilibrium state, or quasi-steadystate, and therefore the concentrations of the reactants are time-dependent. In this regard, challenges relate to the description of non-linear kinetics,for example second-order reactions, and the involvement of unobservable (non-labelled) metabolites to the overall kinetics, e.g. enzyme cofactors, co-substrates and natural abundance Here we address these issues in a stepwise manner by developing a mechanistic approach that combines the MR interactions with the chemicaland/or enzyme-mediated reactions described by the Bloch\u2013McConnell equations. 
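For orientation, the longitudinal two-site form of the Bloch–McConnell equations can be written out as below. This is the standard textbook form with generic symbols (relaxation rates R_1A and R_1B, exchange rate constants k_AB and k_BA, equilibrium magnetizations M^0), given as a reference sketch rather than a reproduction of the paper's own numbered equations.

```latex
\frac{\mathrm{d}}{\mathrm{d}t}
\begin{pmatrix} M_{zA} \\ M_{zB} \end{pmatrix}
=
\begin{pmatrix}
-(R_{1A} + k_{AB}) & k_{BA} \\
k_{AB} & -(R_{1B} + k_{BA})
\end{pmatrix}
\begin{pmatrix} M_{zA} \\ M_{zB} \end{pmatrix}
+
\begin{pmatrix} R_{1A} M_{zA}^{0} \\ R_{1B} M_{zB}^{0} \end{pmatrix}
```

The off-diagonal rate constants carry magnetization between the exchanging sites, while the diagonal terms combine relaxation with loss to exchange; the inhomogeneous vector restores each pool toward its thermal equilibrium value.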
These equations are grounded in the concept of conservation of mass of the species responsible for the hyperpolarized signal plus itsnon-hyperpolarized counterpart and the various products; this was recentlyhighlighted where the MR-visible signal decays to produce an MR-invisible one.1.1We begin addressing the problem by defining the signal-to-noise ratio (SNR)in MR. In its most basic form, sensitivity is described by the ratio of thesignal amplitude divided by the root mean square of the amplitude of thenoise. When a signal 1.2The usual way to proceed when calculating the NMR response of a spin systemto RF pulse sequences is to solve the ordinary quantum mechanical masterequation that describes the evolution of the spin density operator. This is the Liouville\u2013von Neumann equation, which has been extended to include non-coherent interactions (predominantly relaxation phenomena) :Our aim here is to describe the kinetics of exchange between differentsolutes that contain hyperpolarized nuclei, e.g. imbalance between the populations of the The so-called Zeeman polarization term describes the sensitivity ofIn the usual quantum mechanical analysis of multiple spin systems, thedensity operator (that describes the probability density of states) isnormalized to 1, meaning that the summed probability density of allstates is 1. This is expressed mathematically as proportional to the concentration [On the other hand, equilibrium magnetization (2In the absence of intermolecular binding (however transient) or scalar couplings, the motion (time evolution) of magnetizations is described by theBloch equations. Magnetization is explicitly declared to be proportional toreactant concentrations [Equation\u00a0(11) is tedious to solve analytically, but it is readily solvednumerically . On the otherhand, by including the identity operator in the basis set and adding aconstant to the equilibrium magnetization , we obtain a much more compliant matrix equation:2.1We can extend the system of equations from describing an ensemble of singlespins to two or more exchanging spins. The system of equations now accountsfor the magnetization interaction with the lattice and exchange via theforward and reverse chemical reactions. These are the Bloch\u2013McConnell equations .principle of conservation of mass. Specifically, the sum of the rates of changein First, consider the rate expressions for a simple bi-directional chemicalreaction. The coupled differential equations describing first-order reactionkinetics of solute as well as chemical exchange. This yields the inhomogeneous form of the Bloch\u2013McConnell equations, which are written (again in matrix form) asFor the simplest case of two magnetically active solutes, each possessing asingle spin-The inhomogeneous form of the Bloch\u2013McConnell equations can similarly be modified by incorporating the equilibrium magnetization to create ahomogeneous form of this master equation:2.1.1non-hyperpolarized) sample. 
We seek the NMRspectrum that results from a two-site exchange reaction between solutes Next, consider Eq.\u00a0(19) for simulating the evolution of the MatLab with equilibrium Simulations were performed in The observable signal is proportionalto the complex signal The equilibrium constant was fixed so that 2.2We now consider the predictions made by using Eq.\u00a0(19) when simulating theevolution of the These were performed with equilibrium We next simulate the effect of applying the pulse sequence shown in Fig.\u00a02b corresponding to a time course type of experiment with multiple sampling of the magnetization and acquisition of an FID at each time point. This is representative of real experiments that have been presented in theliterature . The time delayscorrespond to a pre-scan delay The influence of this pulse sequence was then calculated, accounting formultiple sampling of the magnetization. The RF pulse was again specified bySimulations were performed with the same magnitude of noise as in Fig.\u00a01.The time evolution of the magnetization was recorded for the pulse sequenceshown in Fig.\u00a02b with sequential acquisition of 64 spectra and a repetition time of The In the previous examples in Fig.\u00a02c and\u00a0d, with a typical 3We now take a detour into relaxation theory to give an overview of thefactors that determine the values of A master equation for spin systems far from equilibrium based on a Lindbladdissipator formalism has recently been presented and shown to correctlypredict the spin dynamics of hyperpolarized systems. In brief, Eq.\u00a0(2) isonly valid for the high temperature limit and weak-order approximation of a spin system at thermal equilibrium. However, we do not pursue this line of enquiry here because for the enzymesystems studied thus far with dDNP a constant value of Dipole\u2013dipole couplings (DD). The dominant mechanism for the relaxation of nuclear spin magnetization is often the stochastic modulation of dipole\u2013dipole interactions (couplings) to other nuclei, either in the same molecule or other molecules, including thesolvent, as the molecule re-orientates in solution by tumbling.Chemical shift anisotropy (CSA). Nuclear spins resonate at different frequencies depending on the chemical shielding imposed by the local electronic environment and its orientation (a tensor property). The modulation of the chemical shift tensor bymolecular tumbling in solution has a quadratic dependence on the strength ofthe static magnetic field and therefore increases markedly with Paramagnetic sites. Dissolved paramagnetic solutes , such as radical agents thatremain in the dissolution solvent, molecular oxygen, and metal ions, whichcan be deleterious to the nuclear-spin relaxation, particularly in regionsof low magnetic field . However, allspecies can be easily scavenged by co-dissolving chelating agents in thedissolution medium .Scalar relaxation of the second kind. This mechanism operates when the nuclei of interest have scalar couplings to neighbouring nuclei that also relax rapidly . In dDNP NMR experiments this relaxationmechanism is often enhanced during sample transfer steps through areas oflow magnetic field .Spin rotation. The coupling of nuclear magnetization to that of a whole molecule or to mobile parts of a molecule, e.g. methyl groups, can act as an efficientrelaxation mechanism. This mechanism has an unusual dependence ontemperature, with the relaxation rate usually increasing at higher temperatures .Quadrupolar. 
Many molecules of interest in dDNP experiments contain either Once a sufficiently high level of nuclear spin polarization has beenachieved by implementing dDNP methodologies , and instead we focus on the main results of theiranalyses. Assuming a two-spin system composed of a The so-called spectral density function that is a function of the Larmorfrequency, 3.1It is important to note the influencethat a nearby The dependence of 3.2The majority of dDNP experiments used to study biological systems employIn a dDNP experiment the dissolution and transfer process can take as longas 15\u202fs; it depends on the distance to the point of use from the polarizingsource, and in clinical applications an additional 30\u202fs can easily be added for quality control processes. Such requirements place a bound on the usabletime in which hyperpolarized A major parameter that controls the magnitude of the rotational correlationtime of a spin-bearing molecule is its molecular weight becomes block diagonal, since transverse and longitudinalmagnetization are not interconverted. The evolution of the To reduce clutter in the equations, for all the discussions that nowfollow, we drop the subscript 4.1Confining our analysis to the physical subspace that is composed oflongitudinal magnetizations, which describe first-order kinetics of atwo-site exchange reaction of hyperpolarized substrates, Since Eqs.\u00a0(35) and\u00a0(36) describe the time evolution of the The kinetics of the non-polarized pools are described byEquations\u00a0(35)\u2013(38) can be written asWe can now appreciate the equivalence between this formalism andconventional chemical reaction kinetics written in terms of molecularconcentrations. For first-order reactions, the hyperpolarized magnetization evolves according to the Bloch\u2013McConnell equations, while the concentrations given by the sum of the \u201chot\u201d and \u201ccold\u201d pools evolve according to theconventional form of chemical reaction kinetics for a closed system.Therefore, Figure\u00a04 shows numerical simulations of the time evolution of the systemdescribed by Eq.\u00a0(39) with an initial magnetization vectorThe approach used here enables us to create systems of differential equations that satisfyconservation of mass and therefore allow a study of the influence ofnon-hyperpolarized pools of substrates on reaction kinetics. 
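To make this Eq. (39)-style bookkeeping concrete, the sketch below numerically integrates a two-pool version of the scheme in Python: a hyperpolarized ("hot") substrate/product pair relaxing with rates 1/T1 while exchanging with first-order rate constants, alongside the non-polarized ("cold") concentration pools into which the relaxed magnetization is deposited. All rate constants and initial values are invented for illustration; the conservation check at the end mirrors the mass-balance argument in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's values).
k_ab, k_ba = 0.05, 0.02    # exchange rate constants, s^-1
r1a, r1b = 1 / 40, 1 / 30  # longitudinal relaxation rates 1/T1, s^-1

def rhs(t, y):
    # y = [hot_A, hot_B, cold_A, cold_B]; hot pools relax into the cold
    # pools, and both hot and cold pools exchange chemically with the
    # same first-order rate constants.
    hot_a, hot_b, cold_a, cold_b = y
    d_hot_a = -k_ab * hot_a + k_ba * hot_b - r1a * hot_a
    d_hot_b = +k_ab * hot_a - k_ba * hot_b - r1b * hot_b
    d_cold_a = -k_ab * cold_a + k_ba * cold_b + r1a * hot_a
    d_cold_b = +k_ab * cold_a - k_ba * cold_b + r1b * hot_b
    return [d_hot_a, d_hot_b, d_cold_a, d_cold_b]

y0 = [1.0, 0.0, 0.0, 0.0]  # all signal starts as hyperpolarized A
sol = solve_ivp(rhs, (0.0, 120.0), y0, t_eval=np.linspace(0.0, 120.0, 25))

total = sol.y.sum(axis=0)  # hot + cold summed over both species
print("mass conserved:", np.allclose(total, total[0]))
```

Because every term that leaves one pool enters another, the column sums of the rate matrix vanish and the total (hot plus cold) is conserved exactly, which is the property the text exploits to recover conventional concentration kinetics from the magnetization equations.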
The approachenables more complicated reaction mechanisms to be described to allow theinclusion of MR-invisible pools of substrates such as 4.2Equation\u00a0(39) can be extended to compartmental models of arbitrarycomplexity: consider a reaction scheme involving three substrates Equations\u00a0(44)\u2013(49) can be recast in matrix form to giveWe can apply a similarity transform given byTo yield an equation of motion in the transformed basis vector given byFigure\u00a05 shows the results of numerical integration of Eq.\u00a0(50) with the initial magnetization vector 4.3We now describe hyperpolarized substrates Again, mass is conserved, as seen by the fact that Equations\u00a0(53)\u2013(58) can be written in matrix vector form asFigure\u00a06 shows numerical simulations of the time evolution of the system ofEqs.\u00a0(53)\u2013(58) with initial magnetization corresponding to the hyperpolarizedsignal 4.3.1The system of differential equations in Eq.\u00a0(59) describing a second-order reaction can be reduced to one with pseudo first-order kinetics by introducing time-dependent rate constants and \u201ccold\u201d pools.In turn, the pseudo first-order rate constants for However, we now encounter a problem. The pseudo first-order rate constants for the reactions of [5Next consider an enzyme-catalysed reaction with a hyperpolarized substrate. The simplest model involves a hyperpolarized substrate Mass is conserved as confirmed by the fact that Equations\u00a0(62)\u2013(68) can be written in matrix vector form given by Eq.\u00a0(A69); see Appendix.We can apply a similarity transform, given by Eq.\u00a0(A70) (see Appendix),to yield an equation of motion in the transformed basis vector given by Eq.\u00a0(A71); see Appendix.5.1A simplified uni-directional enzyme-catalysed reaction is described by setting the reverse rate constant Rearranging Eq.\u00a0(72) yields the Michaelis constant in terms ofhyperpolarized and non-polarized pools of substrate:Thus, using conservation of enzyme mass, the free enzyme concentration isgiven byIn other words, this is the standard form of the Michaelis\u2013Menten equation written as a function of both polarized and unpolarized pools of substrate.5.2Figure\u00a07b\u2013c show the results of numerical integration of Eqs.\u00a0(62)\u2013(68) with an initial hyperpolarized signal It is worth considering some of the consequences of Eq.\u00a0(75) when studyingenzyme-mediated reactions with hyperpolarized substrates. When the substrate concentration Further simulations were performed to explore the influence of a muchshorter value of 5.3Our formalism can be readily extended to account for the influence of aligand/solute to inhibit an enzyme. The simplest case is when a solute bindsreversibly to the free enzyme 5.3.1competitive inhibitor is structurally similar to thesubstrate and binds preferentially in the active site of the free enzyme, uncompetitive inhibitor binds only to the enzyme\u2013substrate complex and therefore causes substrate-concentration-dependent inhibition; and (iii), a non-competitive inhibitor binds toboth the free enzyme and to the enzyme\u2013substrate complex; it causes a conformational change at the active site that inhibits (or even enhances)the reaction. 
Such an effect is referred to as allosteric inhibition (oractivation).There are three commonly encountered types of reversible enzyme inhibition: (i) a Accounting for all three scenarios, the free enzyme concentration is givenbyAn additional effect that can be considered is where either the substrate ofthe reaction 6We now consider a real system that is of contemporary interest for in vivo clinical studies using dDNP. It is lactate dehydrogenase . Consider the LDH-catalysed reaction of a hyperpolarized substrate; it follows an ordered sequential reaction in which Mass is conserved, as is confirmed by the fact that Equations\u00a0(81)\u2013(89) can be written in matrix vector form, given by Eq.\u00a0(A90); see Appendix.We can apply a similarity transform, given by Eq.\u00a0(A91) (see Appendix),to yield an equation of motion in the transformed basis vector given by Eq.\u00a0(A92); see Appendix.Figure\u00a08b shows numerical simulations of the time evolution of the systemthat is described by Eqs.\u00a0(81)\u2013(89) with initial hyperpolarizedsignal/concentration (see above for a comment on this aspect) The computed time dependence of polarized and unpolarized pools Finally, we consider real case scenarios that are reported in the literature, i.e. measurement of hyperpolarized [1-7not cause an appreciable loss of the signal from the substrate or product. It was also able to answer why the concentration of an unlabelled pool of substrate, for example We have described an approach to formulating the kinetic master equationsthat describe the time evolution of hyperpolarized 10.5194/mr-2-421-2021-supplementThe supplement related to this article is available online at:\u00a0https://doi.org/10.5194/mr-2-421-2021-supplement."} +{"text": "Gout is the most common inflammatory arthritis, increasing in prevalence and burden. Of the rheumatic diseases, gout is the best-understood and potentially most manageable condition. However, it frequently remains untreated or poorly managed. The purpose of this systematic review is to identify Clinical Practice Guidelines (CPG) regarding gout management, evaluate their quality, and to provide a synthesis of consistent recommendations in the high-quality CPGs.(1) written in English and published between January 2015-February 2022; focused on adults aged\u2009\u2265\u200918 years of age; and met the criteria of a CPG as defined by the Institute of Medicine; and (2) were rated as high quality on the Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument. Gout CPGs were excluded if they required additional payment to access; only addressed recommendations for the system/organisation of care and did not include interventional management recommendations; and/or included other arthritic conditions. OvidSP MEDLINE, Cochrane, CINAHL, Embase and Physiotherapy Evidence Database (PEDro) and four online guideline repositories were searched.Gout management CPGs were eligible for inclusion if they were Six CPGs were appraised as high quality and included in the synthesis. Clinical practice guidelines consistently recommended education, commencement of non-steroidal anti-inflammatories, colchicine or corticosteroids (unless contraindicated), and assessment of cardiovascular risk factors, renal function, and co-morbid conditions for acute gout management. Consistent recommendations for chronic gout management were urate lowering therapy (ULT) and continued prophylaxis recommended based on individual patient characteristics. 
Clinical practice guideline recommendations were inconsistent on when to initiate ULT and length of ULT, vitamin C intake, and use of pegloticase, fenofibrate and losartan.Management of acute gout was consistent across CPGs. Management of chronic gout was mostly consistent although there were inconsistent recommendations regarding ULT and other pharmacological therapies. This synthesis provides clear guidance that can assist health professionals to provide standardised, evidence-based gout care.The protocol for this review was registered with Open Science Framework (DOI 10.17605/OSF.IO/UB3Y7).The online version contains supplementary material available at 10.1186/s41927-023-00335-w. Gout is the most common inflammatory arthritis, estimated to affect 1\u20134% of people worldwide . Gout isGout is both a metabolic urate accumulation disease and an autoinflammatory arthritic disease . A gout Clinical practice guidelines include recommendations intended to optimise patient care based on evidence and expert opinion . SeveralThis systematic review was registered on the Open Science Framework (DOI 10.17605/OSF.IO/UB3Y7) and followed the Preferred Reporting Items for Systematic reviews guidelines . A searcThe original protocol excluded CPGs that addressed one treatment modality only e.g., medication prescribing. However, to improve comprehensiveness of the review, CPGs addressing single interventions were included as they were considered important for gout management. This decision to widen the scope was made during the study selection phase of the review. The timeframe proposed in our original search strategy was between January 2015 and December 2020 but was extended to include published CPGs up until February 14th 2022.www.covidence.org). Titles and abstracts were screened independently by two reviewers (BC and TG or IL) and assessed for eligibility according to criteria. Full texts articles were then screened, and final inclusion of articles was agreed on by consensus between the two reviewers or where necessary, consultation with a third reviewer (SB).Search results from the databases were aggregated in Endnote\u2122 and duplicate records removed. Records were imported to the Covidence software program II instrument . The AGRDomain percentages and overall assessment rating (%) for each reviewer were independently calculated. Our criteria for acceptable inter-rater agreement were domain percentages and overall scores that were less than or equal to 20% difference between reviewers as intraclass coefficient values of 80 or above are considered either excellent or almost perfect , 24. In Data were independently extracted by the first author (BC) using a purpose-designed Excel spreadsheet, adapted from a previous musculoskeletal review . This coThe narrative summary was developed initially by one author (BC) and then reviewed and refined by four authors . The summary included a description of the number of CPGs that reported on an intervention, what the recommendations involved and highlighted areas where recommendations were consistently similar in their details or where there were inconsistencies (Appendix 7). Recommendations that were considered consistent between CPGs, were described as \u2018consensus\u2019 .Ten CPGs were identified. Six CPGs met the eligibility criteria , 31, 32 The AGREE II scores for each CPG are provided in Appendix 6. AGREE II results for included CPGs are provided in Appendix 6.1 with CPGs rated low quality on AGREE II and excluded presented in Appendix 6.2. 
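The AGREE II domain percentages referred to throughout are obtained by scaling the summed 1-7 item ratings within a domain. The short Python sketch below implements the scaled-percentage formula from the AGREE II user manual; the example ratings are invented.

```python
def agree_domain_score(ratings: list[list[int]]) -> float:
    """ratings[i][j] = appraiser i's 1-7 rating of item j in one domain.
    Returns the AGREE II scaled domain score as a percentage:
    (obtained - minimum possible) / (maximum - minimum possible) * 100."""
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(row) for row in ratings)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100.0 * (obtained - minimum) / (maximum - minimum)

# Two appraisers rating a three-item domain (invented numbers).
print(f"{agree_domain_score([[6, 5, 7], [5, 5, 6]]):.0f}%")  # -> 78%
```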
Quality was assessed across six domains: scope and purpose (range: 67–94%), stakeholder involvement (range: 50–92%), rigour of development (range: 68–82%), clarity of presentation (range: 78–94%), applicability (range: 21–92%), and editorial independence (range: 25–100%). The mean (SD) AGREE II scores for each item and domain and the overall scores across all guidelines are displayed in Appendix 5. The mean overall score for all CPGs was 77% (SD = 12.4). The domain with the lowest mean score was 'applicability', and the highest mean score was for 'clarity of presentation'. For acute gout, the following recommendations were strongly recommended by two or more CPGs: non-steroidal anti-inflammatories (NSAIDs), colchicine and/or corticosteroids should be commenced as first-line medications [31, 32]. The following were conditionally recommended by two or more CPGs, or equally conditionally and strongly recommended: cold therapies, e.g. ice packs, could be used in combination with other evidence-based therapies [31]. … The following were recommended against by two or more CPGs: IL-1 inhibitors should be avoided if the patient has a current infection. NSAIDs … For chronic gout, the following recommendations were strongly recommended by two or more CPGs: serum uric acid (sUA) levels should be monitored, and ULT should be based on a treat-to-target strategy and adjusted to achieve a sUA target < 6 mg/dL (360 µmol/L) lifelong, or sUA < 5 mg/dL (300 µmol/L) for patients with severe or tophaceous gout [29, 32]. The following were conditionally recommended by two or more CPGs, or equally conditionally and strongly recommended: febuxostat can be used as an alternative second-line xanthine oxidase inhibitor (XOI) for patients who have renal impairment and/or chronic kidney disease, where allopurinol is contraindicated or has not been effective in achieving the therapeutic sUA target [31, 32]. IL-1 inhibitors can be considered if a patient has contraindications to, or has not responded to, standard treatments for the inflammation of gout such as colchicine, NSAIDs and corticosteroids [29, 31]. The following were recommended against by two or more CPGs: a sUA level < 3 mg/dL should be avoided long-term, due to the possibility of adverse effects that may be associated with a very low sUA [31]. … Inconsistent recommendations were reported for when to initiate ULT, the duration of ULT, vitamin C intake, pegloticase, fenofibrate and losartan [31, 32]. Six of the ten CPGs identified were assessed as high quality on the AGREE II instrument. Quality scores were higher than in previous reviews [15]. … There are consistent recommendations for acute gout management: patient education, medication to treat inflammation, and screening to identify comorbid conditions such as cardiovascular and renal diseases. These anti-inflammatory medications have similarly high reported efficacy, and the choice of treatment should be based on the presence of comorbidities and patient preference [38]. … Clinical practice guidelines recommended clinicians take on an educative role with patients around pharmacological management and lifestyle modifications such as weight loss and dietary changes. However, implementation of education into practice remains an issue. People with gout report rarely receiving clear guidance on these topics, possibly due to a lack of detail and clarity in these recommendations [41].
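For readers unfamiliar with AGREE II, the domain percentages reported above come from a fixed standardisation formula. The following is a minimal sketch of that calculation, assuming AGREE II's usual 1–7 item scale; the two appraisers and their item scores are invented for illustration, and this is not the review authors' actual code. The helper at the end mirrors the review's ≤ 20% inter-rater agreement criterion.

```python
# Minimal sketch of AGREE II domain scoring, assuming the standard 1-7 item
# scale. Appraiser ratings below are hypothetical, not data from the review.

def agree_domain_score(ratings):
    """Standardised AGREE II domain score (%) for one domain.

    ratings: one list of item scores (1-7) per appraiser.
    Score = (obtained - minimum) / (maximum - minimum) * 100, where the
    minimum/maximum are what all-1s / all-7s responses would yield.
    """
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(r) for r in ratings)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100.0 * (obtained - minimum) / (maximum - minimum)

def acceptable_agreement(score_a, score_b, tolerance=20.0):
    """The review's criterion: reviewer scores within 20 percentage points."""
    return abs(score_a - score_b) <= tolerance

# Two hypothetical appraisers scoring a three-item domain:
score = agree_domain_score([[6, 5, 7], [5, 5, 6]])
print(round(score, 1))  # 77.8
```

The same formula applies to every domain regardless of its item count, which is why domain percentages from guidelines of different lengths remain comparable.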
… Urate lowering remains the mainstay of the long-term management of gout, with a lack of adherence recognised as the main cause of gout management failure [51]. … Clinician-level barriers to effective medical management include a lack of referrals between primary and specialist care, knowledge gaps, conflicting recommendations from CPGs, and misconceptions similar to those of patients, such as general practitioners treating gout as an acute condition only [43, 55]. Clinical practice guidelines were consistent in recommending ULT and prophylaxis for managing chronic gout. Allopurinol was recommended at a low starting dose of 50–100 mg daily (no greater than 100 mg) and titrated upwards [29, 31]. Clinical practice guidelines were inconsistent in their recommendation of when to commence ULT, varying from immediately following diagnosis to only being indicated in certain clinical scenarios based on an individual patient's circumstances [29, 31]. Further research is needed to understand why gout CPGs differ in their recommendations and to determine recommendations in the current areas with no consensus: when to initiate ULT and the length of ULT, vitamin C intake, pegloticase, fenofibrate and losartan. While developing high-quality CPGs is important, this alone is insufficient to improve health outcomes for people with gout. … Strengths of this systematic review include the use of the AGREE II tool as a systematic approach to synthesis, and … This synthesis of current high-quality CPGs provides guidance for health care providers on recommended gout care. Recommendations from six CPGs were that acute gout flare management should include anti-inflammatories, education, screening, and rest/elevation of the affected joints. Established gout should be managed with ULT and continued prophylaxis to prevent further gout flares and to manage tophaceous and erosive gout. CPGs disagreed on when to initiate ULT and the length of ULT, vitamin C intake, pegloticase, fenofibrate and losartan. This synthesis of recommendations is relevant to healthcare providers and can be implemented in clinical practice to standardise high-quality care and optimise patient outcomes. Below is the link to the electronic supplementary material. Supplementary Material 1. Appendix 1. Supplementary Material 2. Appendix 2. Supplementary Material 3. Appendix 3. Supplementary Material 4. Appendix 4. Supplementary Material 5. Appendix 5. Supplementary Material 6. Appendix 6. Supplementary Material 7. Appendix 7."} +{"text": "This editorial explores the transformative impact of artificial intelligence (AI) on orthopedics, with a particular focus on advancements in Asia. It delves into the integration of AI in hospitals, advanced applications in China, and future expectations. The discussion is underpinned by an examination of AI's role in assisted diagnosis, treatment planning, surgical navigation, predictive analysis, and post-operative rehabilitation monitoring. Artificial intelligence (AI) is the ability of machines or systems to perform tasks that normally require human intelligence, such as reasoning, learning, decision-making, and problem-solving. AI has been developing rapidly in recent years, thanks to advances in computing power, data availability, and algorithm design. AI has also been transforming many industries, such as manufacturing, transportation, education, entertainment, and health care. The dawn of the AI era brought significant investment and rapid advancements in language and visual model integration.
This technological evolution has sparked a global wave of innovation, with AI applications increasingly permeating various sectors, including healthcare. A notable example is the development of a robot by the Stanford team in July 2023, which integrates language and visual models to execute commands without specific training. … These advancements are regarded as the most cutting-edge and complex breakthroughs in the field of artificial intelligence, paving the way for a new era of AI applications in various sectors, including healthcare. These ground-breaking developments, along with others, have opened new possibilities in the medical field. AI's role in orthopedics is multifaceted, ranging … Pre-planning and virtually simulated surgeries are other areas where AI has made significant strides [12]. … Robot-assisted surgery is another area where AI has made significant strides. It now … AI can also help improve medical education for orthopedic surgeons and trainees by providing interactive, adaptive, and personalized learning tools. AI could … Asia is one of the fastest-growing regions in the world in terms of population, economy, and technology. Asia is also facing many challenges in terms of health care, such as an aging population, increasing chronic diseases, and limited resources. AI and orthopedic surgical robots (e.g., TINAVI and KUNWU) have great potential to address some of these challenges by improving the quality, efficiency, and accessibility of orthopedic services in China. In partnership with clinical institutes and a high-tech company, ALLinMD, an orthopedic medical platform, developed an intelligent assisted-diagnosis system for orthopedic outpatient medical records. … China has also seen the development of a real-world data platform for orthopedic surgery. This system assists in planning patients' routine treatment and rehabilitation pathways and facilitates … Moreover, the platform developed by ALLinMD allows for the automatic generation of standardized medical records, which greatly improves data quality and usability. … Recent research from NYU Langone Hospital, which used more than five million clinical records to train a specialized language model, provides a glimpse into the future of AI in orthopedics. … The value of digital orthopedics lies in its precision, evidence-based approach, efficiency, and affordability. The future trend of digital orthopedics will likely involve the establishment of specialized data platforms that integrate multiple sources and cover the entire medical process. This approach will improve the consistency of orthopedic diagnosis, treatment and education, thereby raising the overall standard of orthopedic healthcare. The application of artificial intelligence (AI) in orthopedics holds immense potential, but it also presents several risks and challenges. (1) One significant concern is the accuracy of AI algorithms. While AI can analyze vast amounts of data quickly, its predictions and analyses can be misleading or incorrect, potentially leading to misdiagnoses or inappropriate treatment plans. (2) Another challenge is the ethical implications of AI use. AI could inadvertently breach patient privacy or confidentiality if not properly managed, and there is also the question of who bears responsibility if an AI-powered system makes a mistake. (3) Additionally, there is the challenge of explainability. AI's decision-making process can be opaque, often referred to as a "black box".
This lack of transparency can make it difficult for doctors to trust and understand AI's recommendations, which is crucial in a field where decisions can have life-altering consequences. (4) Lastly, regulatory hurdles pose a significant challenge. The healthcare industry is heavily regulated, and AI applications must comply with these regulations, which may slow down implementation and innovation. The development of digital technologies will have a significant impact on orthopedic diagnosis and treatment in the near future. Healthcare professionals will need to learn quickly how to use AI-assisted techniques in daily medical practice. While AI has the potential to revolutionize orthopedics, these risks and challenges must be addressed to ensure safe, ethical, and effective use."} +{"text": "Healthcare systems are complex and challenging for all stakeholders, but artificial intelligence (AI) has transformed various fields, including healthcare, with the potential to improve patient care and quality of life. Rapid AI advancements can revolutionize healthcare when integrated into clinical practice. Reporting AI's role in clinical practice is crucial for successful implementation, by equipping healthcare providers with essential knowledge and tools. This review article provides a comprehensive and up-to-date overview of the current state of AI in clinical practice, including its potential applications in disease diagnosis, treatment recommendations, and patient engagement. It also discusses the associated challenges, covering ethical and legal considerations and the need for human expertise. By doing so, it enhances understanding of AI's significance in healthcare and supports healthcare organizations in effectively adopting AI technologies. The current investigation analyzed the use of AI in the healthcare system through a comprehensive review of relevant indexed literature, such as PubMed/Medline, Scopus, and EMBASE, with no time constraints but limited to articles published in English. The focused question explores the impact of applying AI in healthcare settings and the potential outcomes of this application. Integrating AI into healthcare holds excellent potential for improving disease diagnosis, treatment selection, and clinical laboratory testing. AI tools can leverage large datasets and identify patterns to surpass human performance in several healthcare aspects. AI offers increased accuracy, reduced costs, and time savings while minimizing human errors. It can revolutionize personalized medicine, optimize medication dosages, enhance population health management, establish guidelines, provide virtual health assistants, support mental health care, improve patient education, and influence patient–physician trust. AI can be used to diagnose diseases, develop personalized treatment plans, and assist clinicians with decision-making. Rather than simply automating tasks, AI is about developing technologies that can enhance patient care across healthcare settings. However, challenges related to data privacy, bias, and the need for human expertise must be addressed for the responsible and effective implementation of AI in healthcare. Artificial Intelligence (AI) is a rapidly evolving field of computer science that aims to create machines that can perform tasks that typically require human intelligence. AI includes various techniques such as machine learning (ML), deep learning (DL), and natural language processing (NLP).
Large Language Models (LLMs) are a type of AI algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new text-based content [–3]. LLMs … AI has evolved since the first AI program was developed in 1951 by Christopher Strachey. At that time, AI was in its infancy and was primarily an academic research topic. In 1956, John McCarthy organized the Dartmouth Conference, where he coined the term "Artificial Intelligence." This event marked the beginning of the modern AI era. In the 1960s and 1970s, AI research focused on rule-based and expert systems. However, this approach was limited by the lack of computing power and data. In the 1980s and 1990s, AI research shifted to ML and neural networks, which allowed machines to learn from data and improve their performance over time. This period saw the development of systems such as IBM's Deep Blue, which defeated world chess champion Garry Kasparov in 1997. In the 2000s, AI research continued to evolve, focusing on NLP and computer vision, which led to the development of virtual assistants, such as Apple's Siri and Amazon's Alexa, which could understand natural language and respond to user requests. … PubMed/Medline, Scopus, and EMBASE were independently searched with no time restrictions, but the searches were limited to the English language. In the review article, the authors extensively examined the use of AI in healthcare settings. The authors analyzed various combinations of keywords such as NLP in healthcare, ML in healthcare, DL in healthcare, LLM in healthcare, AI in personalized medicine, AI in patient monitoring, AI ethics in healthcare, predictive analytics in healthcare, AI in medical diagnosis, and AI applications in healthcare. Despite the language restriction, these keyword combinations allowed a comprehensive analysis of the topic. Publications were screened through a meticulous review of titles and abstracts. Only those that met the specific criteria were included. Any disagreements or concerns about the literature or methodology were discussed in detail among the authors. With all the advances in medicine, effective disease diagnosis is still considered a challenge on a global scale. The development of early diagnostic tools is an ongoing challenge due to the complexity of the various disease mechanisms and the underlying symptoms. AI can revolutionize different aspects of health care, including diagnosis. ML is an area of AI that uses data as an input resource, in which accuracy is highly dependent on the quantity as well as the quality of the input data, and it can combat some of the challenges and complexity of diagnosis. ML, in … AI is still in its early stages of being fully utilized for medical diagnosis. However, more data are emerging for the application of AI in diagnosing different diseases, such as cancer. A study was published in the UK where authors input a large dataset of mammograms into an AI system for breast cancer diagnosis. This study showed that utilizing an AI system to interpret mammograms produced an absolute reduction in false positives and false negatives of 5.7% and 9.4%, respectively. … Furthermore, a study utilized deep learning to detect skin cancer and showed that an AI using a CNN accurately diagnosed melanoma cases compared with dermatologists and recommended treatment options [14]. … AI tools can improve accuracy, reduce costs, and save time compared to traditional diagnostic methods.
Additionally, AI can reduce the risk of human errors and provide more accurate results in less time. In the future, AI technology could be used to support medical decisions by providing clinicians with real-time assistance and insights. Researchers continue exploring ways to use AI in medical diagnosis and treatment, such as analyzing medical images, X-rays, CT scans, and MRIs. By leveraging ML techniques, AI can also help identify abnormalities, detect fractures, tumors, or other conditions, and provide quantitative measurements for faster and more accurate medical diagnosis.Clinical laboratory testing provides critical information for diagnosing, treating, and monitoring diseases. It is an essential part of modern healthcare which continuously incorporates new technology to support clinical decision-making and patient safety . AI has The projected benefits of using AI in clinical laboratories include but are not limited to, increased efficacy and precision. Automated techniques in blood cultures, susceptibility testing, and molecular platforms have become standard in numerous laboratories globally, contributing significantly to laboratory efficiency , 25. AutML research in medicine has rapidly expanded, which could greatly help the healthcare providers in the emergency department (ED) as they face challenging difficulties from the rising burden of diseases, greater demand for time and health services, higher societal expectations, and increasing health expenditures . EmergenMoreover, AI-powered decision support systems can provide real-time suggestions to healthcare providers, aiding diagnosis, and treatment decisions. Patients are evaluated in the ED with little information, and physicians frequently must weigh probabilities when risk stratifying and making decisions. Faster clinical data interpretation is crucial in ED to classify the seriousness of the situation and the need for immediate intervention. The risk of misdiagnosing patients is one of the most critical problems affecting medical practitioners and healthcare systems. Diagnostic mistakes in the healthcare sector can be expensive and fatal. A study found that diagnostic errors, particularly in patients who visit the ED, directly contribute to a greater mortality rate and a more extended hospital stay . FortunaThe fusion of AI and genotype analysis holds immense promise in the realms of disease surveillance, prediction, and personalized medicine . When apML algorithms make it feasible to predict a spectrum of phenotypes ranging from simple traits like eye color to more intricate ones like the response to certain medications or disease susceptibility. A specific area where AI and ML have demonstrated significant efficacy is the identification of genetic variants associated with distinctive traits or pathologies. Examining extensive genomic datasets allows these techniques to detect intricate patterns often elusive to manual analysis. For instance, a groundbreaking study employed a deep neural network to identify genetic variants associated with autism spectrum disorder (ASD), successfully predicting ASD status by relying solely on genomic data . In the The advent of high-throughput genomic sequencing technologies, combined with advancements in AI and ML, has laid a strong foundation for accelerating personalized medicine and drug discovery . 
… Personalized treatment, also known as precision medicine, is an approach that tailors medical care to individual patients based on their unique characteristics, such as genetics, environment, lifestyle, and biomarkers. … The potential applications of AI in assisting clinicians with treatment decisions, particularly in predicting therapy response, have gained recognition. A study … AI plays a crucial role in dose optimization and adverse drug event prediction, offering significant benefits in enhancing patient safety and improving treatment outcomes. By leveraging … On the contrary, a novel dose optimization system, CURATE.AI, is an AI-derived platform for dynamically optimizing chemotherapy doses based on individual patient data. A study … Therapeutic drug monitoring (TDM) is a process used to optimize drug dosing in individual patients. It is predominantly utilized for drugs with a narrow therapeutic index, to avoid both underdosing (insufficient medication) and toxic levels. TDM aims to ensure that patients receive the right drug, at the right dose, at the right time, to achieve the desired therapeutic outcome while minimizing adverse effects. The use … One example of AI in TDM is using ML algorithms to predict drug–drug interactions. By analyzing large datasets of patient data, these algorithms can identify potential drug interactions. This can help to reduce the risk of adverse drug reactions, lower costs, and improve patient outcomes. Another … Population health management increasingly uses predictive analytics to identify and guide health initiatives. In data analytics, predictive analytics is a discipline that heavily utilizes modeling, data mining, AI, and ML; to anticipate the future, it analyzes historical and current data [62]. ML … AI can be used to optimize healthcare by improving the accuracy and efficiency of predictive models. AI algorithms can analyze large amounts of data and identify patterns and relationships that may not be obvious to human analysts; this can help improve the accuracy of predictive models and ensure that patients receive the most appropriate interventions. AI can also automate specific public health management tasks, such as patient outreach and care coordination [62]. While … Furthermore, AI is needed to address these challenges regarding vaccine production and supply chain bottlenecks. Testing algorithms on real-time vaccine supply chains can be challenging. To overcome this, investing in research and development is essential to create robust algorithms that can accurately predict and optimize vaccine supply chains. Edge analytics can also detect anomalies and predict Disease X events and associated risks to the healthcare system. From a Saudi perspective, Sehaa, a big data analytics tool in Saudi Arabia, uses Twitter data to detect diseases, and it found that dermal diseases, heart diseases, hypertension, cancer, and diabetes are the top five diseases in the country. Riyadh … AI is transforming how guidelines are established in various fields. In healthcare, guidelines usually take much time, from establishing the knowledge gap that needs to be filled to publishing and disseminating the guidelines. AI can help identify newly published data from clinical trials and real-world patient outcomes within the same area of interest, which can then facilitate the first stage of mining information.
Then, under the supervision of scientists and experts in the field, AI algorithms can analyze vast amounts of data to identify patterns and trends that can inform the development of evidence-based guidelines in real-time, which allows for a fast exchange of information with essential supervision clinicians for its clinical and ethical implications \u201373.Several professional organizations have developed frameworks for addressing concerns unique to developing, reporting, and validating AI in medicine \u201373. InstAI would propose a new support system to assist practical decision-making tools for healthcare providers. In recent years, healthcare institutions have provided a greater leveraging capacity of utilizing automation-enabled technologies to boost workflow effectiveness and reduce costs while promoting patient safety, accuracy, and efficiency . By intrWith continuously increasing demands of health care services and limited resources worldwide, finding solutions to overcome these challenges is essential . VirtualFurthermore, these tools can always be available, making it easier for patients to access healthcare when needed . AnotherAI has the potential to revolutionize mental health support by providing personalized and accessible care to individuals , 88. SevThe current published studies addressing the applicability of AI in mental health concluded that depression is the most commonly investigated mental disorder . MoreoveOne of the emerging applications of AI is patient education . AI-powePublic perception of the benefits and risks of AI in healthcare systems is a crucial factor in determining its adoption and integration. People\u2019s feelings about AI replacing or augmenting human healthcare practitioners, its role in educating and empowering patients, and its impact on the quality and efficiency of care, as well as on the well-being of healthcare workers, are all important considerations. In medicine, patients often trust medical staff unconditionally and believe that their illness will be cured due to a medical phenomenon known as the placebo effect. In other words, patient-physician trust is vital in improving patient care and the effectiveness of their treatment . For theResearch on whether people prefer AI over healthcare practitioners has shown mixed results depending on the context, type of AI system, and participants\u2019 characteristics , 108. SoAI has the potential to revolutionize clinical practice, but several challenges must be addressed to realize its full potential. Among these challenges is the lack of quality medical data, which can lead to inaccurate outcomes. Data privacy, availability, and security are also potential limitations to applying AI in clinical practice. Additionally, determining relevant clinical metrics and selecting an appropriate methodology is crucial to achieving the desired outcomes. Human contribution to the design and application of AI tools is subject to bias and could be amplified by AI if not closely monitored . The AI-Addressing these challenges and providing constructive solutions will require a multidisciplinary approach, innovative data annotation methods, and the development of more rigorous AI techniques and models. Creating practical, usable, and successfully implemented technology would be possible by ensuring appropriate cooperation between computer scientists and healthcare providers. 
By merging current best practices for ethical inclusivity, software development, implementation science, and human-computer interaction, the AI community will have the opportunity to create an integrated best practice framework for implementation and maintenance . AdditioConverting AI and big data into secure and efficient practical applications, services, and procedures in healthcare involves significant costs and risks. Consequently, safeguarding the commercial interests of AI and data-driven healthcare technologies has emerged as an increasingly crucial subject . In the One of the major causes that can compromise patient data, disrupt critical healthcare operations, and jeopardize patient safety with the use of AI in the healthcare system is increased cyberattacks , 122. PrThe integration of AI in healthcare has immense potential to revolutionize patient care and outcomes. AI-driven predictive analytics can enhance the accuracy, efficiency, and cost-effectiveness of disease diagnosis and clinical laboratory testing. Additionally, AI can aid in population health management and guideline establishment, providing real-time, accurate information and optimizing medication choices. Integrating AI in virtual health and mental health support has shown promise in improving patient care. However, it is important to address limitations such as bias and lack of personalization to ensure equitable and effective use of AI.Several measures must be taken to ensure responsible and effective implementation of AI in healthcare.Firstly, comprehensive cybersecurity strategies and robust security measures should be developed and implemented to protect patient data and critical healthcare operations. Collaboration between healthcare organizations, AI researchers, and regulatory bodies is crucial to establishing guidelines and standards for AI algorithms and their use in clinical decision-making. Investment in research and development is also necessary to advance AI technologies tailored to address healthcare challenges.AI algorithms can continuously examine factors such as population demographics, disease prevalence, and geographical distribution. This can identify patients at a higher risk of certain conditions, aiding in prevention or treatment. Edge analytics can also detect irregularities and predict potential healthcare events, ensuring that resources like vaccines are available where most needed.Public perception of AI in healthcare varies, with individuals expressing willingness to use AI for health purposes while still preferring human practitioners in complex issues. Trust-building and patient education are crucial for the successful integration of AI in healthcare practice. Overcoming challenges like data quality, privacy, bias, and the need for human expertise is essential for responsible and effective AI integration.Collaboration among stakeholders is vital for robust AI systems, ethical guidelines, and patient and provider trust. Continued research, innovation, and interdisciplinary collaboration are important to unlock the full potential of AI in healthcare. With successful integration, AI is anticipated to revolutionize healthcare, leading to improved patient outcomes, enhanced efficiency, and better access to personalized treatment and quality care."} +{"text": "Results: Overall, 68 pathologists in Poland participated in our study. Their average age and years of experience were 38.92 \u00b1 8.88 and 12.78 \u00b1 9.48 years, respectively. 
Approximately 42% had used AI or ML methods, and there was a significant difference in self-reported knowledge between those who had and those who had never used them. Additionally, users of AI had higher odds of reporting satisfaction with the speed of AI in the medical diagnosis process. Finally, significant differences (p = 0.003) were observed in determining the liability for legal issues caused by AI and ML methods. Conclusion: Most pathologists in this study did not use AI or ML models, highlighting the importance of increasing awareness and educational programs regarding applying AI and ML in medical diagnosis. Background: In the past vicennium, several artificial intelligence (AI) and machine learning (ML) models have been developed to assist in medical diagnosis, decision making, and the design of treatment protocols. The number of active pathologists in Poland is low, prolonging tumor patients' diagnosis and treatment journey. Hence, applying AI and ML may aid in this process. Therefore, our study aims to investigate pathologists' knowledge of using AI and ML methods in the clinical field in Poland. To our knowledge, no similar study has been conducted. Methods: We conducted a cross-sectional study targeting pathologists in Poland from June to July 2022. The questionnaire included self-reported information on AI or ML knowledge, experience, specialization, personal thoughts, and level of agreement with different aspects of AI and ML in medical diagnosis. Data were analyzed using IBM® SPSS® Statistics. Since the early 2000s, artificial intelligence (AI) and its advanced subtypes, machine learning (ML) and deep learning (DL), have gained significant prominence in the medical field. They are tools used to develop diagnostic algorithms, predict a patient's survival probability, guide medical diagnostic processes, and suggest the proper treatment protocol. … Nowadays, pathologists increasingly rely on AI/ML in medical diagnosis. There are several applications of AI with ML in this context. Firstly, in image analysis, AI algorithms can be trained to analyze images of tissue samples and detect patterns or features indicative of specific diseases. This capability proves particularly valuable in identifying early-stage or subtle disease signs that may elude visual observation. Secondly, … It is of utmost importance to underscore the potential drawbacks of AI/ML in the daily practice of pathologists. Firstly, from a legal standpoint, pathologists must assume responsibility for the diagnostic process. Consequently, integrating AI into the daily practice of pathologists may inadvertently prolong the diagnostic timeline, which may not be advantageous in some instances. Furthermore, … Cancer is the second leading cause of death in Poland, and the number of diagnoses will only grow with the gradually aging Polish population. … We performed a cross-sectional study using an anonymous, self-administered, and structured online survey tool through the "Microsoft Forms" platform. The inclusion criteria were applied to all individuals who agreed to participate in the study: age ≥ 28 years, living in Poland, and specialized in pathology. There were no restrictions on gender or socioeconomic level. The survey was distributed in June–July 2022 using two mailing lists: (i) an internal mailing list for Poznan University of Medical Sciences working pathologists; and (ii) a mailing list created by the Department of Bioinformatics and Computational Biology in collaboration with the University Clinical Hospital in Poznan.
These lists only included some of the pathologists in Poland, and pathologists were encouraged to share the survey in their departments with only those meeting our inclusion criteria. Questions were created based on previous literature that investigated the use of AI/ML in clinical pathology and healthcare [19, 20, 21]. We performed the statistical analysis using IBM® SPSS® Statistics v.26 and PQStat Software v.1.8.2.238. The Shapiro–Wilk test checked normality; normally distributed data were reported as mean ± standard deviation (SD), and non-normally distributed data were reported as the median and interquartile range (IQR). The Mann–Whitney U test calculated differences in agreement levels between AI/ML users and non-users. Moreover, the Chi-squared (χ²) test measured the significance of frequencies associated with AI/ML concerns. Finally, a stepwise multi-logistic regression model was used to study the predictors of AI/ML use. Logistic regression results were presented as odds ratios (ORs) with 95% confidence intervals (95% CIs). A computed p-value < 0.05 was considered statistically significant for all tests except those in Table 3, where, after applying a Bonferroni correction, a p-value below 0.00384 was considered statistically significant. Text mining in RStudio Build 351 was used to analyze the pathologists' comments and generate the word-frequency map. A total of 68 pathologists from Poland participated in the survey study. Cronbach's standardized alpha demonstrated good reliability (α = 0.79; Table 1). The median age of the participating pathologists was 37 (33–41.5) years. Approximately 53% of the physicians were female, and 37% were male. The pathologists had a median of 10 (6.75–15.25) years of medical experience. Regarding previous experience with AI/ML, 42.19% of the pathologists answered "Yes" while 57.81% answered "No". Moreover, on a scale of ten, the pathologists expressed their trust in AI/ML results with a median of 7 (5.75–8). Similarly, they rated the efficacy of AI/ML in diagnosing cancer cells with a median of 7 (6–8). A significant difference (p < 0.001) was found between the self-reported knowledge of pathologists who had used AI/ML before and those who had never used it, and a … was reported between trust in and the efficacy of AI/ML models in cancer diagnosis. The frequency of agreement with the use of AI and ML is displayed in … In stepwise multi-regression analysis, we found … (p < 0.001). Moreover, previous users of AI/ML also had higher odds of reporting satisfaction with the speed of AI/ML in the medical diagnosis process. In this section, we asked the pathologists three questions: (I) If your medical diagnosis differs from the AI diagnosis, which will you follow? (II) In which field of medicine do you think AI will be most helpful? and (III) Who do you think will be liable for legal problems caused by AI? For the third question, a significant difference (p = 0.003) was found between pathologists who had previous experience with AI/ML and those who never had. The pathologists with previous experience indicated that physicians would be liable for legal problems caused by AI (83.33%), whereas the pathologists who had never had experience with AI/ML selected the AI-developing company (55.56%).
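To make the analysis plan above concrete, here is a small sketch of a Mann–Whitney U comparison with a Bonferroni-adjusted significance threshold. The agreement scores are invented, and this is not the authors' SPSS/PQStat workflow; note only that 0.05 divided by 13 comparisons gives 0.003846…, which matches the quoted 0.00384 cut-off once truncated, so 13 tested items is the assumption used here.

```python
# Sketch of the corrected group comparison described above, with made-up
# Likert-style agreement scores for AI/ML users and non-users.
from scipy.stats import mannwhitneyu

users = [7, 8, 6, 9, 7, 8, 7]      # hypothetical scores from AI/ML users
non_users = [4, 5, 3, 6, 4, 5, 4]  # hypothetical scores from non-users

u_stat, p_value = mannwhitneyu(users, non_users, alternative="two-sided")

n_comparisons = 13            # assumed number of Bonferroni-corrected tests
alpha = 0.05 / n_comparisons  # = 0.00384..., the threshold cited above
print(f"U = {u_stat:.1f}, p = {p_value:.5f}, significant = {p_value < alpha}")
```

Dividing the family-wise alpha by the number of tests is the entire content of the Bonferroni correction; everything else in the pipeline (Shapiro–Wilk, χ², stepwise logistic regression) operates on the same per-test logic.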
Participating pathologists had the opportunity to express their opinions. Most pathologists agreed on the importance of AI/ML in medical diagnosis and on how helpful it would be in saving time in their daily work. Moreover, they believe AI/ML will provide a supportive second opinion, depending on the model's accuracy, during the treatment process and decision-making. However, the majority disagreed that AI/ML will replace their position in the near future; instead, they think it will be an assisting tool. Finally, the pathologists wish that the government could provide more resources and training related to AI/ML in medical diagnosis generally and in different areas of specialization. This study was conducted to investigate the knowledge and experience of pathologists in Poland regarding AI/ML in medical diagnosis. Our results showed that a higher proportion of the pathologists (≈58%) in this study did not use AI/ML, which limited their knowledge of its usefulness in speeding up clinical diagnosis and of the regulations and policies controlling the implementation of AI/ML in the medical field. Additionally, there was a significant difference between pathologists who had previous experience with AI/ML in medical diagnosis and those who had not: pathologists with previous experience indicated that physicians would be liable for legal problems caused by AI (83.33%), whereas pathologists who had never had experience with AI/ML selected the AI-developing company (55.56%). Trust in, and the perceived efficacy of, AI/ML models in cancer diagnosis were positively associated with previous use of AI/ML. Therefore, pathologists highlighted the importance of AI/ML in the future as assistance tools that will not replace their positions. Comparing our results with other studies, a global study by Sarwar et al. (N = 487 pathologists) showed that 75% of participants were interested in AI as a diagnostic tool to improve the quality and efficacy of daily pathology work. At the same time, almost 20% were concerned that AI/ML tools could soon replace human positions. Moreover, 48.3% of their survey participants believed the final diagnostic decision must rest with the human physician, and 25.3% thought the decision should be shared between physicians and the AI/ML model. Another … A survey of 116 pathologists by Meyer et al. found that 87% of the pathologists trust the AI diagnostic ability for cancerous tumors, and 92% of them agreed that AI algorithms play an essential role in their daily work, aiding and helping them make the most accurate decision. Similarly, … Finally, it is worth noting that the revolution in AI applications in the medical field has driven many regulatory organizations worldwide to announce regulations that can ensure the safe, ethical, and legal use of AI applications in the medical field. The US Food and Drug Administration (FDA) regulates the use of AI in the medical field through its Center for Devices and Radiological Health (CDRH). The FDA's approach to regulating AI in healthcare is risk-based, meaning that the level of oversight is commensurate with the product's potential risks. For AI-based medical devices, the FDA has issued guidance on premarket submission requirements, which outlines the information that manufacturers must provide to the agency to receive clearance or approval for their products.
These requirements include information on the device's performance, design, and intended use, and data on the device's safety and effectiveness. Moreover, … Finally, it is essential to highlight the strengths and limitations of this study. This is the first cross-sectional study of Polish pathologists' perception of AI/ML. The data yielded from this study can be considered high quality, since all participating pathologists were active and licensed. Still, our research has limitations. First, our sample size was approximately 15% to 22% of the active pathologists in Poland, which might be considered a low percentage; however, research suggests that … Our study results showed that pathologists in Poland are not fully aware of AI/ML applications in medical diagnosis due to their lack of experience using them. Additionally, the legal liability of using AI/ML in the medical field was not clear enough to them, especially since they expect a promising future for AI/ML applications as assisting tools in the medical diagnosis process. Hence, they require more training and awareness to stay up to date with AI/ML legal issues, guidelines, protocols, and applications in medical practice. Further research is required on a larger group of pathologists to develop medical education modules that fit their learning needs."} +{"text": "Artificial Intelligence (AI) is recognized by emergency physicians (EPs) as an important technology that will affect clinical practice. Several AI-tools have already been developed to aid care delivery in emergency medicine (EM). However, many EM tools appear to have been developed without a cross-disciplinary needs assessment, making it difficult to understand their broader importance to general practice. Clinician surveys about AI tools have been conducted within other medical specialties to help guide future design. This study aims to understand the needs of Canadian EPs for the apt use of AI-based tools. A national cross-sectional, two-stage, mixed-method electronic survey of Canadian EPs was conducted from January to May 2022. The survey included demographic and physician practice-pattern data, clinicians' current use and perceptions of AI, and individual rankings of which EM work-activities would most benefit from AI. The primary outcome is a ranked list of high-priority AI-tools for EM that physicians want translated into general use within the next 10 years. When ranking specific AI examples, 'automated charting/report generation', 'clinical prediction rules' and 'monitoring vitals with early-warning detection' were the top items. When ranking by physician work-activities, 'AI-tools for documentation', 'AI-tools for computer use' and 'AI-tools for triaging patients' were the top items. For secondary outcomes, EPs indicated AI was 'likely' (43.1%) or 'extremely likely' (43.7%) to be able to complete the task of 'documentation' and indicated either 'a great deal' (32.8%) or 'quite a bit' (39.7%) of potential for AI in EM. Further, EPs were either 'strongly' (48.5%) or 'somewhat' (39.8%) interested in AI for EM. Physician input on the design of AI is essential to ensure the uptake of this technology. Translation of AI-tools to facilitate documentation is considered a high priority, and respondents had high confidence that AI could facilitate this task.
This study will guide future directions regarding the use of AI for EM and help direct efforts to address prevailing technology-translation barriers, such as access to high-quality application-specific data and the development of reporting guidelines for specific AI-applications. With a prioritized list of high-need AI applications, decision-makers can develop focused strategies to address these larger obstacles. The online version contains supplementary material available at 10.1186/s12913-023-09740-w. … Over the last decade, an increasing number of original research studies and scoping reviews have been published that outline AI-tools for the ED. These articles describe multiple motivations for ED AI applications, for example: to improve patient safety through AI-enabled patient monitoring; to increase the speed and accuracy of triage, or the diagnosis and prognosis of a range of diseases or clinical syndromes; to aid in targeted medication delivery; to augment imaging interpretation; and many others [–6]. Despite the growth of clinical AI-tools, there are many obstacles to the implementation of AI technology in medicine. These include concerns about the responsibility for medical error related to AI, public perception, legal regulation, and the "black-box" phenomenon, or lack of "explainability" of how the AI reaches conclusions. Moreover, … A similar needs-analysis of AI use in EM does not exist. Several literature surveys summarize the current developments and applications of AI in EM, while commenting on its potential future benefits [15–17]. The primary aim of this study is to determine which EM work-activities are the highest priority for new AI-tool development in the near future. Secondary aims include identifying Canadian EPs' understanding of AI, gauging how AI is used in their practice, and quantifying their beliefs about the impact of AI on EM. Answering these questions will help address the need for additional user input in the development of AI for ED applications. This study is a cross-sectional mixed-method electronic survey of Canadian physicians practicing EM, conducted in the spring of 2022 using Opinio, a secure online platform. The original survey is included in Appendix A. Participants were contacted using the Canadian Association of Emergency Physicians (CAEP) research-survey distribution service and the Society of Rural Physicians of Canada (SRPC) listserv. Residents, fellows and staff physicians practicing EM in Canada were surveyed. The study aimed to target 365 respondents from an assumed total population of 3431 physicians, calculated from the Royal College (RC) medical workforce database. Assuming … A survey draft was developed from similar studies in other medical specialties [–12]. First, a list of EM "work-activities" performed by senior doctors was identified. This list was adapted from a systematic review by Abdulwahid et al. that proposes a classification system for EP work-activities. Second, a list of existing AI-tool examples was compiled [–25].
Following iterative revisions, the survey was pilot tested on ten local EPs for written feedback; the group had a balanced distribution of biological sex and of senior and junior staff. The final survey is divided into four sections: (Section-I) Demographics; (Section-II) Secondary Outcomes: Knowledge and Comfort with Technology; (Section-III) Primary Outcome: Opportunities for AI-Tools in Patient Care in EM; and (Section-IV) Secondary Outcomes: Beliefs and Opinions about AI Impact and Significance. The de-identified data were analyzed to extract summary statistics, and descriptive statistics to outline physicians' rankings. The TechPH index, a composite score describing "technophilia", was calculated from the TechPH scale included in Section-II; see Appendix B. Section-II includes both the EM "work-activities" list and the list of existing AI-tools. Here, respondents were asked to indicate their awareness and prior use of these examples. Section-III, measuring the primary outcome, asked participants to rank their top three choices from the same two lists of EM "work-activities" and existing AI-tool items; an item's total rank was calculated by weighted-sum (see Appendix B). The following covariates were chosen a priori to assess if they significantly impact rank-order preference: province of practice, hospital setting, prior educational focus (engineering or computer science versus other), TechPH index, prior clinical or research experience with AI, and years in practice. Analyses to assess rank-order preference, and co-variate analyses, were completed using the methodology outlined by Lee et al. … Responses to open-ended questions were grouped and summarized in Appendix C. The enrollment period of four months was reached before the target of 365 responses: 1844 physicians were invited to participate, 230 physicians enrolled in the survey, and 171 completed all questions. The primary outcomes of this study are the top priorities for new ED AI-tool development in the next 10 years (Figure ). Participants' comfort with technology was low-moderate, with 33.0% identifying as technology-enthusiasts and 20.3% as technology-hobbyists. 7.5% and 5.7% of respondents had previously studied computer-technology or information-technology, respectively; 4.4% had previously studied engineering and 2.2% computer science. See Appendix D, Table . The participants' mean rating of "technophilia" based on the TechPH index was a moderate 3.30 (SD 0.65) on a range of one (high technology-anxiety) to five (high technology-enthusiasm). Examples of participants' definitions of AI are summarized in Appendix C.
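The weighted-sum ranking used for Section-III can be illustrated with a short sketch. The study's exact weights are defined in its Appendix B, which is not reproduced here, so the 3/2/1 points awarded to first-, second- and third-place votes below are an assumption, and the ballots are invented.

```python
# Illustrative weighted-sum ranking of top-three ballots. The 3/2/1 weights
# and the ballots are assumptions for demonstration, not the study's data.
from collections import Counter

WEIGHTS = {1: 3, 2: 2, 3: 1}  # points for a 1st/2nd/3rd-place vote (assumed)

def weighted_totals(ballots):
    """ballots: one (first, second, third) tuple of items per respondent."""
    totals = Counter()
    for ballot in ballots:
        for position, item in enumerate(ballot, start=1):
            totals[item] += WEIGHTS[position]
    return totals.most_common()  # items sorted by total weighted score

ballots = [
    ("documentation", "computer use", "triage"),
    ("documentation", "triage", "computer use"),
    ("computer use", "documentation", "triage"),
]
print(weighted_totals(ballots))
# [('documentation', 8), ('computer use', 6), ('triage', 4)]
```

Whatever the actual weights, the scheme converts ordinal top-three ballots into a single comparable score per item, which is what allows the "top items" reported in this study to be ranked across respondents.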
Respondents' experience with AI was low-moderate; the results are in Appendix D, Table . The most common work-activities where EPs have used AI include "AI-tools for computer use" (29.1%), "AI-tools for documentation" (20.1%) and "AI-tools for administration/education/research" (16.9%). The most common work-activities where EPs have heard of AI-tools, but not necessarily used the tools, include "AI-tools for making the diagnosis/selecting investigations/risk-stratification" (42.9%), "triaging of patients" (42.9%), and "AI-tools for computer use" (40.2%). Appendix D, Table shows that the most common examples of published AI-tools that EPs have used in practice include "AI-powered clinical prediction rules" (51.8%), "AI-powered monitoring of patient vitals and early warning-systems" (29.9%), "AI-powered PoCUS" (25.4%), and "AI-powered recommendations of patient handouts/resources" (20.8%). Interestingly, the most common examples of published AI-tools that EPs have heard of, but not necessarily used themselves, are "AI-powered XRAY" (64.0%), "CT" (60.9%), "MRI" (55.8%) and "US interpretation" (54.3%). Regarding EPs' opinions about AI's impact on physicians' jobs over the next 10 years, 60.6% of respondents believe that, because of the impact of AI, "jobs will change slightly" while 35.9% believe "jobs will change substantially". Further, the responding EPs indicated a high potential for AI in EM and high personal interest in AI for ED patient care. In terms of how the job may change, respondents felt AI is most likely able to complete the following tasks: "provide documentation" and "formulate personalized medication/therapy plans". Respondents were neutral regarding AI's ability to "analyze patient information to reach a diagnosis", "reach a prognosis", "formulate personalized treatment plans" or "evaluate when to refer to a specialist". Physicians indicated it was "extremely unlikely" (45.4%) or "unlikely" (36.2%) for AI to be able to provide empathetic care. See Appendix D, Table . This study outlines the EP work-activities that are the highest priority for new AI-tool development. Survey participants were asked to consider the development of a fully translated AI-tool for patient care that would be available at most EDs in Canada in the next 10 years. To triangulate responses, participants were asked to rank a list of common ED work-activities that may benefit from AI, and a list of existing AI-tool examples. The survey sampled 5.65% of Canadian physicians practicing emergency medicine, not including residents, based on 2019 data from the RC. This estimate … Considering geography, there was a disproportionately high response from the Maritimes (29.2%), the author's practice location being Halifax. However, the remaining geographic distribution is consistent with the RC database; the other highest response rates come from Ontario (29.2%), Alberta (11.4%) and Quebec (10.0%), which contain approximately 38%, 17.5% and 16.3% of the target population, respectively. The high response result may also relate to each region having large AI institutes with provincial strategies for AI adoption. The results are also biased towards urban practitioners, with only 11.4% practicing in rural or regional centers; important input from rural physicians may have been missed. Concerning familiarity with technology in general, respondents were neutral; approximately half neither "dislike" nor "like" technology, and 9.0% indicated "no interest in technology." The average TechPH index agrees with the finding that most respondents were neutral regarding technology interest. A measure … Respondents have low overall experience with AI in their personal lives, clinical roles or work as EPs. We speculate that the "low" personal experience with AI may relate to misconceptions about the technology, as we assume that most Canadians are daily consumers of AI-enabled apps and productivity tools.
This response may also stem from the framing and interpretation of the question. When asked about their understanding of AI technology, 87.2% of respondents "agree" or "strongly agree" that they "understand what is meant by AI." However, only 23.6% of respondents had a completely correct definition of AI (see Appendix C). Few respondents have conducted any research in AI (4.5%). This result agrees with follow-up questions, where most respondents indicated "no experience at all" (71.4%) or "very little experience" (11.1%) with AI research. Again, these findings corroborate the neutral TechPH index. However, almost all respondents either "somewhat agree" (39.8%) or "strongly agree" (48.5%) that they are interested in AI for EM.
As such, we recommend that (i) ED physicians be engaged in the specification, design, evaluation, and implementation of future AI-driven tools; (ii) priority be placed on developing proof-of-concept AI-solutions for the high-yield problems identified by ED physicians; (iii) solutions embed AI tools within the ED's existing digital infrastructure and clinical workflow; and (iv) developers identify measurable and impactful outcomes for AI use, and use standardized metrics to assess these outcomes. In conclusion, AI in an ED setting can be seen as an innovation agent, as the analysis of ED data can generate new insights about the effectiveness of certain procedures/policies and lead to the optimization of ED resources. AI is not here to change ED practices; rather, it offers solutions to optimize a number of practice challenges. The survey responses clearly point to perceived value for AI in the ED; however, certain activities are more amenable to AI-driven support: for instance, automated charting (particularly using speech recognition and transcription), rapid interpretation of real-time ED data for clinical decision support, patient risk stratification, and forecasting for staffing. The opportunity to benefit from AI-based applications relies heavily on their integration within the current clinical workflows and the data sources used by ED physicians. This will ensure that ED physicians do not need to change their practice to make use of AI tools; rather, AI-driven support is seamlessly available at the point of care. Overall, AI in medicine is on the rise, and it is fair to conclude that the use of AI in the ED is in the near future. Study limitations are as follows: First, this survey reflects the Canadian perspective and may not be generalizable internationally. Further, there is a sampling bias towards physicians subscribed to CAEP and a selection bias towards physicians interested in AI, and those practicing in the Maritimes. Further, the sample size is less than the a priori target of 365 respondents for representing the Canadian Emergency Physician population; the four-month deadline and maximum allowable three survey blasts were reached before complete enrollment. The study is also limited by the confounding effects of variables not measured. Additionally, the survey's questionnaire was not previously validated, despite being carefully designed. There is no standardized classification system for AI tools in emergency medicine; as such, some of the AI examples or physician work-types may be interpreted differently by respondents. For example, "clinical prediction rules" are synonymous with clinical decision rules (CDR), which are tools used to identify patients at higher risk for disease-specific clinical conditions, or are used to prevent the overuse of specific diagnostic testing. There are many limitations in applying survey research methodologies. In addition to the known limitations of electronic surveys, specific to this study, there was confusion about the meaning of AI in general and no opportunity for participants to clarify certain applications and items in the questionnaire. In the future, alternate methodologies including focused interviews and focus groups should be employed to further explore the themes identified in this study. User-centered design is essential to technology translation. A lack of physician input into AI development is a major translation barrier for these practice-changing AI tools.
A survey of Canadian EPs has identified 'automated charting or report generation', 'clinical prediction rules' and 'monitoring of vitals with early warning detection' as high-priority areas for new development. This prioritization can aid policymakers in decision-making for AI data sharing, developing reporting guidelines and facilitating external validation studies for high-demand AI-tools. Additional file 1. Appendix A: Survey. Additional file 2. Appendix B: Calculations. Additional file 3. Appendix C: Open-Ended Responses. Additional file 4. Appendix D: Supplemental Results."}
+{"text": "To examine whether incorrect AI results impact radiologist performance, and if so, whether human factors can be optimized to reduce error. In a multi-reader design, 6 radiologists interpreted 90 identical chest radiographs (follow-up CT needed: yes/no) on four occasions (09/20–01/22). No AI result was provided for session 1. Sham AI results were provided for sessions 2–4, and AI results for 12 cases were manipulated to be incorrect (8 false positives (FP), 4 false negatives (FN)) (0.87 ROC-AUC). In the Delete AI (No Box) condition, radiologists were told AI results would not be saved for the evaluation. In Keep AI (No Box) and Keep AI (Box), radiologists were told results would be saved. In Keep AI (Box), the ostensible AI program visually outlined the region of suspicion. AI results were constant between conditions. Relative to the No AI condition, FNs and FPs were higher in the Keep AI (No Box), Delete AI (No Box), and Keep AI (Box) conditions (all ps < 0.05). FNs were higher in the Keep AI (No Box) condition (33.0%) than in the Keep AI (Box) condition (20.7%) (p = 0.04). FPs were higher in the Keep AI (No Box) (86.0%) condition than in the Delete AI (No Box) condition (80.5%) (p = 0.03). Incorrect AI causes radiologists to make incorrect follow-up decisions when they were correct without AI. This effect is mitigated when radiologists believe AI will be deleted from the patient's file or a box is provided around the region of interest. When AI is wrong, radiologists make more errors than they would have without AI. Based on human factors psychology, our manuscript provides evidence for two AI implementation strategies that reduce the deleterious effects of incorrect AI.
• When AI provided incorrect results, false negative and false positive rates among the radiologists increased.
• False positives decreased when AI results were deleted, versus kept, in the patient's record.
• False negatives and false positives decreased when AI visually outlined the region of suspicion.
The online version contains supplementary material available at 10.1007/s00330-023-09747-1.
Artificial intelligence (AI) in radiology has increased dramatically over the past decade. While the first AI system was approved in 2008, only 6 received approval through 2014, and 141 were approved from 1/1/2020 to 10/11/2022. In a review of 503 studies of AI algorithms, receiver operating characteristic area under the curve (ROC-AUC) for diagnostic imaging ranged from 0.864 to 0.937 for lung nodules or lung cancer on chest CT and chest X-ray examinations. For example, a specificity of 0.89 means that AI will return a false positive diagnosis for 11 out of 100 true negative cases. More concerning, a sensitivity of 0.87 means that AI will return a false negative diagnosis for 13 out of 100 true positive cases.
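To make the arithmetic above explicit, here is a minimal sketch (not from the paper) that converts a stated sensitivity and specificity into the expected number of missed and over-called cases; the function name and case counts are illustrative only.

```python
# Hypothetical illustration of the sensitivity/specificity arithmetic above.
def expected_errors(sensitivity: float, specificity: float,
                    n_true_pos: int, n_true_neg: int) -> tuple[int, int]:
    """Expected false negatives and false positives for a given case mix."""
    false_negatives = round((1.0 - sensitivity) * n_true_pos)  # missed positives
    false_positives = round((1.0 - specificity) * n_true_neg)  # over-called negatives
    return false_negatives, false_positives

# 0.87 sensitivity and 0.89 specificity over 100 positives and 100 negatives:
print(expected_errors(0.87, 0.89, 100, 100))  # -> (13, 11)
```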
While radiologist performance improves when they are using versus not using AI, what are the consequences of incorrect AI? Can it persuade a radiologist to make a wrong call when they would have otherwise made the right call? One study with CAD suggests this may be the case: Alberdi et al. found that incorrect CAD output could lead readers to incorrect decisions.
Another question remains: How can the deleterious effect of inaccurate feedback be prevented or mitigated by simply modifying how the AI feedback is presented? Answering this is consistent with the stated goals of the ACR's Data Science Institute (DSI) to "facilitate the development and implementation (emphasis added) of AI applications that will help radiology professionals provide improved medical care". Indeed, research examining how AI could adversely impact radiologist decision making is severely lacking (notwithstanding the aforementioned CAD study). To improve patient outcomes, the AI algorithm, as well as the implementation of the AI, should both be optimized.
One field of study that can address these questions is human factors psychology, the study of interactions between humans and systems (here, AI). A few studies have considered human factors in the context of AI implementation, although it has been broached in several theoretical papers.
The present study, to our knowledge, is the first attempt to examine two related questions: (1) Do incorrect AI results deleteriously impact radiologist performance, and if so, (2) can human factors be optimized to reduce this impact? We focus on two key human factors with real-world implications: (1) whether AI results are kept or deleted in the patient's file, and (2) whether AI does or does not visually outline the area of suspicion. Hypotheses are presented in the Table.
A fully crossed, complete, and balanced block design was used where all radiologists read the same 90 chest x-rays from 90 patients across four different conditions (No AI, AI results Keep (No Box), AI results Delete (No Box), AI results Keep (Box)) (Fig.). Radiologists were told that the goal of the study was to look at several different AI programs that have been developed for chest radiography. As such, radiologists were told they would interpret radiographs on several different trials. Radiologists were told they should indicate whether follow-up imaging was needed for a possible pulmonary lesion. Unbeknownst to the radiologists, the only pathology present was lung cancer. For one trial, radiologists were told they would not have AI (No AI) so they could serve as the control group for that AI system. For each of the other sessions, radiologists were told they would be interpreting cases with a unique AI program.
In the AI Keep (Box) condition, a box was placed when the AI result was "abnormal." No such box was used for the other conditions. Because the effect of AI feedback implementation was of interest, AI feedback needed to be controlled. Thus, no real AI systems were actually used. The AI feedback for each image was chosen by MKA. This enabled us to control the number of true positives (TPs), true negatives (TNs), FPs, and FNs, and which cases fell into each of these categories. The AI result was identical between experimental conditions; thus, only the instructions regarding keep vs. delete vs. box differed (Supplement 1).
The AI results for each image were provided in two ways. First, either "Abnormal" or "Normal" was shown visually in the top right-hand corner of the image. Second, the experimenter read the AI results aloud to reinforce the Keep vs. Delete manipulation (Supplement 1).
Radiologists were told the ROC-AUC value of each system was higher than 0.85, which approximates clinical practice.
A total of 90 frontal chest radiographs (CXR) (50% positive for lung cancer) taken from patients imaged between 2013 and 2016 were used. Positive CXRs were obtained within 2 weeks of a CT demonstrating a single dominant lung nodule/mass confirmed at biopsy to be lung cancer and NO other airspace opacities, consolidations, or macronodules detectable at CXR; negative CXRs were obtained within 2 weeks of a CT demonstrating NO nodule/mass. More information is provided in Supplement 2.
Due to COVID-19, the experimenter and radiologist communicated over videoconferencing. A convenience sample comprising six radiologists consented and participated, with 3, 1, 14, 6, 5, and 12 years of experience, respectively. At the appointed time, the radiologist and experimenter (M.H.B.) joined a videoconference. Radiologists were reminded about the ostensible goal of the experiment. Radiologists were then told that the ROC-AUC for the company's AI program in the present session was higher than 0.85 (sessions 2–4). To encourage effortful participation, radiologists were told that their performance relative to the other radiologists would be evaluated by M.H.B. and M.K.A. (Vice Chair of Research and thoracic radiologist) upon conclusion of the study. Radiologists viewed a sample evaluation sheet. Radiologists were told that, in order to fully evaluate each AI system, all images were taken from 55–75-year-old smokers who were outpatients; the dataset was therefore enriched. In actuality, as noted above, the ratio of positive and negative cases was equal, though radiologists were not told the prevalence rate, to avoid confounding their performance.
Radiologists responded to two questions for each image: (1) whether they recommend follow-up CT imaging (yes/no) and (2) their level of confidence. All responses were read aloud and transcribed by the experimenter. Follow-up imaging, as a clinical decision, was used as the primary outcome. Thus, a failure to follow up a positive is a FN, not following up a negative is a TN, following up a positive is a TP, and following up a negative is a FP.
Following the conclusion of the study, radiologists were debriefed, and all deception was explained. One radiologist was excluded from analyses, so analyses were performed on the remaining 5 radiologists. The excluded radiologist did not fully believe the cover story, and correctly thought the experimenter might have been interested in his/her behavior. He/she also reported being unaware of the keep versus delete manipulation. All procedures were approved by the Rhode Island Hospital IRB.
The AI result manipulations were introduced at three points during the experiment: (1) at the beginning of the session during preliminary instructions, (2) when the experimenter provided oral AI results, and (3) on the sample evaluation sheet. These are detailed in Supplement 1. Briefly, in AI Delete (No Box), radiologists were told AI results would not be saved. In AI Keep (No Box) and AI Keep (Box), radiologists were told the results of AI would be saved. AI Keep (Box), unlike both other AI conditions, additionally provided a box around suspicious regions when AI indicated a case was abnormal.
All analyses were conducted using SAS Software 9.4 (SAS Inc.). FPs and FNs were examined between conditions to test the hypotheses using generalized linear mixed modeling (GLMM) assuming a binary distribution. GLMM was also used to examine diagnostic confidence between conditions. Alpha was established a priori at the 0.05 level, and all interval estimates were calculated for 95% confidence. Analyses were conducted by the statistical author (G.L.B.).
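The GLMMs above were fitted in SAS 9.4; purely as a hedged illustration, the following Python sketch fits an analogous binary-outcome mixed model (random intercept per reader) with statsmodels. The data file and column names (`fn`, `condition`, `reader`) are invented for the example, and the variational Bayes fit is an approximation, not a reproduction of the SAS analysis.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per (reader, case) read, with
# fn = 1 if the read was a false negative, condition = experimental condition.
df = pd.read_csv("reads.csv")

model = BinomialBayesMixedGLM.from_formula(
    "fn ~ C(condition, Treatment('NoAI'))",  # fixed effect: condition vs. No AI
    {"reader": "0 + C(reader)"},             # random intercept for each reader
    df,
)
result = model.fit_vb()  # variational Bayes approximation to the mixed-model fit
print(result.summary())
```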
Patients were a median of 68 years old (IQR = 62–77). In total, 53.3% identified as female and 46.7% identified as male. Regarding race, 81.1% were White/Caucasian, 7.8% were Black/African American, 1.1% were Asian, 1.1% were Native Hawaiian/other Pacific Islander, and 8.9% reported their race as "other." Regarding ethnicity, 8.9% were Hispanic and 91.1% were non-Hispanic.
As anticipated in hypothesis 1, incorrect AI results stating that a true pathology-positive case was "normal" increased false negatives. Relative to the No AI condition, FNs increased to 33.0% in the AI Keep (No Box) condition (p = 0.006), increased to 26.7% in the AI Delete (No Box) condition (p = 0.009), and increased to 20.7% in the AI Keep (Box) condition (p = 0.02). As anticipated in hypothesis 2, this effect was higher in the AI Keep (No Box) versus AI Delete (No Box) condition (33.0% vs. 26.7%), though this difference failed to achieve significance (p = 0.13). As anticipated in hypothesis 5, FNs were higher in AI Keep (No Box) than in AI Keep (Box).
As anticipated in hypothesis 3, incorrect AI results stating that a true pathology-negative case was "abnormal" increased false positives. Relative to the No AI condition, FPs increased to 86.0% in the AI Keep (No Box) condition (p = 0.01), increased to 80.5% in the AI Delete (No Box) condition (p = 0.008), and increased to 80.5% in the AI Keep (Box) condition, though this did not quite achieve significance (p = 0.052). As anticipated in hypothesis 4, false positives were higher in the AI Keep (No Box) condition versus the AI Delete (No Box) condition (p = 0.03). As anticipated in hypothesis 5, false positives were higher in the AI Keep (No Box) condition versus the AI Keep (Box) condition (86.0% vs. 80.5%), though this did not achieve significance (p = 0.19), and the AI Keep (Box) value was identical to that of the Delete (No Box) condition. Full results are described in Supplement 4.
As anticipated, when AI provided correct positive and correct negative results, radiologists were more accurate in all AI conditions compared to the No AI condition. Furthermore, TPs were higher in the AI Keep (Box) condition compared to the AI Keep (No Box) condition.
The presence of a box yielded fewer false negatives (20.7% vs. 33.0%) and a non-significant trend towards fewer false positives (86.0% vs. 80.5%) than did the absence of a box. The fact that a box reduced false negatives from 33.0 to 20.7% may be driven by differences in cognitive load. When radiologists interpret positive cases with a box, they might have more cognitive resources to carefully search ostensibly negative cases and contradict AI when AI is wrong. Said differently, providing a box around suspicious regions may mitigate fatigue during a reading session by reducing the region needed to visually search an image, thereby enhancing performance even on cases where no box is provided.
Most importantly, we could not realistically emulate the real-world consequences of AI results being kept or deleted in a patient's file, thereby limiting ecological validity.
Unlike clinical practice, radiologists in this experiment knew they were participating in research without the possibility of any real legal (discoverability), financial, ethical, or psychological repercussions for making a mistake, reflecting a lack of ecological validity. Cases were read in an artificial setting with only frontal views. Participants comprised a small number of radiologists at one site. There was no AI Delete (Box) condition. Radiologists only indicated whether or not follow-up was needed. Thus, when a case was positive and a radiologist said it was positive, we technically cannot be certain they focused on the correct region. However, all positive cases contained only one lesion. Lateral views are valuable, and radiologists given them might have been more likely to be correct, and thus presumably less affected by incorrect AI. This limitation, along with some others discussed above related to the artificiality of the setting, probably blunted the true effect of our manipulations and thus shows the robustness of our results. If true, this would mean that the effects observed in this study are actually larger in clinical practice. Condition order was not fully counterbalanced. We used an enriched dataset consisting of more pathology than typically found in clinical practice, although this was practically needed to have enough positive cases and is consistent with other reader studies.
AI is often right but sometimes is wrong. Since we do not know when it is accurate, we must consider how to minimize the extent to which radiologists are influenced by incorrect results. In this study, we show that incorrect AI results can influence a radiologist to make a wrong decision. However, this effect is mitigated when radiologists are told the AI results are deleted, versus kept, in the patient's file, and when AI provides a box that visually outlines suspicious regions. In fact, AI that included a box improved radiologists' performance when AI was both correct and incorrect.
This study offers compelling initial evidence that human factors of AI can impact radiologists. To enhance patient care, radiology practices should consider how AI is implemented. Radiological societies should formulate guidelines for radiologists regarding the integration of AI results into the reporting of examinations. Moreover, radiologists should be trained in best practices for using AI tools clinically.
Below is the link to the electronic supplementary material. Supplementary file 1 (PDF 306 kb)."}
+{"text": "The rise of artificial intelligence (AI) heralds a significant revolution in healthcare, particularly in mental health. AI's potential spans diagnostic algorithms, data analysis from diverse sources, and real-time patient monitoring. It is essential for clinicians to remain informed about AI's progress and limitations. The inherent complexity of mental disorders, limited objective data, and retrospective studies pose challenges to the application of AI. Privacy concerns, bias, and the risk of AI replacing human care also loom. Regulatory oversight and physician involvement are needed for equitable AI implementation. AI integration and use in psychotherapy and other services are on the horizon. Patient trust, feasibility, clinical efficacy, and clinician acceptance are prerequisites. In the future, governing bodies must decide on AI ownership, governance, and integration approaches.
While AI can enhance clinical decision-making and efficiency, it might also exacerbate moral dilemmas, autonomy loss, and issues regarding the scope of practice. Striking a balance between AI's strengths and limitations involves utilizing AI as a validated clinical supplement under medical supervision, necessitating active clinician involvement in AI research, ethics, and regulation. AI's trajectory must align with optimizing mental health treatment and upholding compassionate care.
Recent and rapid developments in artificial intelligence (AI) place us at the precipice of perhaps the biggest revolution in medical care to date. Already, applications and advances in AI can be found peppered across all levels of healthcare. A 2021 review of AI in mental healthcare covers many of these advances, which range from applying clinical algorithms and incorporating data from multiple electronic health record (EHR) systems to utilizing neuroimaging, genetic, and speech data to comment on prognosis for depressive disorders, future substance use, suicide risk, and functional outcomes. AI also allows for the acquisition of information outside of physician-patient encounters, utilizing information from smartphones or wearable devices and offering real-world, continuous data to aid the physician in decision-making and treatment.
As with all tools, users must be aware of pitfalls and limitations. Mental disorders are complicated and heterogeneous in nature, and any practicing psychiatrist can speak on the biopsychosocial model at play with all mental health issues. Disease states within psychiatry can rarely be tracked or diagnosed with objective numerical data, unlike other medical disorders. A systematic review from 2023 that focused on studies from 2016-2021 sheds light on significant limitations in AI mental health research. First, studies are largely retrospective, without external validation and with a high risk for bias. This review also found that only 28% of studies used original data, with over 70% of studies using information from databases or secondary analyses of clinical trials that were not designed for the purpose of AI-related study.
Additionally, the use of AI in healthcare raises concerns about the privacy of health information, including tracking and misuse of information by third parties. As mentioned, AI has the risk of bias and is currently not capable of self-reflection. Moreover, AI has the risk of further entrenching existing biases. Concerns about human overreliance on AI for future therapeutic interventions have also been raised, considering constant access that is not available to human clinicians.
As we think about the future directions of AI, we must assume that AI will be applied in talk therapy. One can look to other industries to see this trend in action. Chatbots are utilized as the initial, affordable, always accessible, "low-touch-no-touch," self-service option in a tiered approach to customer support, while the "white glove" human support is left for customers with the highest needs or support tier. AI remains a potential option, as the marketplace of psychotherapy is ripe for disruption. While efficacious, quality psychotherapy is often inaccessible and expensive. Presumably, an AI therapist could provide a scalable, convenient, and affordable means to deliver basic teachings of cognitive reframing, validation, acceptance, thought defusion, and other psychological tools.
For this to take place, an AI therapy service would likely have to create patient trust, offer feasibility, demonstrate clinical efficacy, and earn clinician acceptance.
At an institutional level, healthcare systems have many decisions to make regarding stakeholders for design and implementation, governance, quality control, and long-term maintenance of AI-related tools such as clinical decision support (CDS). Already we can see a variety of approaches for who 'owns' AI integration. For instance, a survey of 34 health systems revealed a variety of organizational setups for deploying predictive models of AI: 50% utilized a decentralized translational approach that is driven primarily by research teams, while 40% utilized an AI-healthcare team-driven approach that extends the native EHR configuration. Only 10% of surveyed systems utilized an IT-department-led approach, which relies on third-party model vendors and native EHR vendors.
Importantly, there are unanswered questions regarding how AI tools will impact healthcare professionals. In an optimistic future, AI tools that ambiently listen to the interview will generate clinical notes, giving doctors more time to spend with their patients. CDS will aid physician decision-making, improving the quality of care. In a pessimistic future, AI tools will exacerbate already hot topics like moral injury, loss of autonomy, and scope of care. AI tools that increase efficiency may lead to ever-higher expectations for revenue generation and productivity. Algorithms could learn from physicians' own documentation to facilitate the replacement of physicians by non-physicians. Perhaps doctors' own documentation will be used to train AI models without physician input, consent, or remuneration. Similar issues are currently being litigated with artists whose work has been used without their consent to train generative art AI systems such as Midjourney. It stands to reason that this issue would come to medicine sooner rather than later, and it has. In mid-August of 2023, in the same week, both Zoom and Simple Practice raised alarms when updates to their privacy policies led to widespread fear that the content of virtual visits would be data-mined for AI tool development and corporate profit at the expense of patient privacy.
An ideal future is one in which AI provides a well-validated supplement to clinical care while remaining under the supervision and scrutiny of those with appropriate medical training so as to provide evidence-based, equitable care. As the technological revolution of AI races forward, we must react quickly to the benefits and pitfalls revealed in its path. This is needed to best support its optimization for treating mental health while minimizing its usurpation of the necessary human element for compassionate care. For these reasons, it is imperative for clinicians to take an active role in research, development, ethical commentary, and regulation of AI to best serve our patients."}
+{"text": "This study aims to give a comprehensive analysis of customers' acceptance and use of AI gadgets and the relevant ethical issues in the tourism and hospitality business in the era of the Internet of Things. Adopting the PRISMA methodology for Systematic Reviews and Meta-Analyses, the present research reviews how tourism and hospitality scholars have conducted research on AI technology in the field of tourism and the hospitality industry. Most of the journal articles related to AI issues published in Web of Science, ScienceDirect.com and the journal websites were considered in this review. The results of this research offer a better understanding of AI implementation with roboethics to investigate AI-related issues in the tourism and hospitality industry. In addition, it provides decision-makers in the hotel industry with practical references on service innovation, participation in the design of AI devices and AI device applications, meeting customer needs, and optimising customer experience.
The theoretical implications and practical interpretations are further identified.
Many individuals believe that Industry 4.0 might be characterized by the increased adoption of networking technologies and intelligent automation in current organizations. An innovation paradigm for the advancement of technology seems to be developing. Practitioners in the hotel business predicted that the application of AI can enhance both the quality of services provided and the experiences provided to customers. They had high hopes that the AI they had implemented would be beneficial to their management and operations. Despite the fact that a growing number of hospitality organisations have adopted AI devices, customers' acceptance of these devices cannot be taken for granted.
To minimise unnecessary AI investments and maximise the potential benefits of AI incorporation, hospitality professionals should investigate factors that influence the acceptance and use of AI devices by customers. As more and more applications are found for artificial intelligence, researchers have begun paying much more attention to AI's underlying difficulties. Initially, AI research was carried out by engineers, who mostly concentrated their efforts on AI design challenges.
By focussing on a larger number of tourism and hospitality journals, the purpose of this investigation was to find a way around the constraint previously mentioned. Preferred Reporting Items for Systematic Reviews and Meta-Analyses, or PRISMA for short, is a further addition to this research endeavour. ScienceDirect.com and the journal websites were used to search for articles published in tourism and hospitality journals of high quality that had, in their titles, abstracts, and/or keywords, terms relating to a systematic review, tourism, hospitality, AI, robot, ethic(s) and human-computer interaction.
Artificial intelligence (AI) is used to describe computer programmes that simulate human intelligence in judgment by combining complex software and hardware components with massive data. Despite the fact that conventional interactions between consumers and human employees continue to be the norm, artificial intelligence has gained prominence in recent years. Lin et al. outline such applications.
As a result of the growing prevalence of artificial intelligence (AI) technologies and artificial intelligence gadgets within the hospitality sector, the customer and the provider possess distinct points of view regarding the utilization of AI. Clients might well have conflicting opinions about the adoption and usage of AI gadgets in the hotel industry. On the one hand, some existing research suggests that AI gadgets may increase consumers' perceptions of service excellence and reliability, hence increasing their acceptance of their usage in accommodation facilities.
Moreover, the perceived human-likeness, perceived intelligence, and perceived danger, including privacy, safety, and security problems, might influence consumers' adoption and usage of AI gadgets in the hotel industry. Customers' perceptions of the adoption and utilization of AI gadgets vary from optimism over the enhancement of their experience to anxiety about an automated society. In addition, previous studies have investigated the acceptance and use of AI devices by customers in a variety of service settings.
Coronavirus disease 2019 (COVID-19) spread swiftly over the globe through human-to-human transmission. While the COVID-19 pandemic was still underway, the public was also made aware of the advantages that AI gadgets may provide in terms of facilitating the preservation of social distance and minimizing the danger of infection. During the COVID-19 public health disaster, it is probable that hotel consumers may be increasingly keen to utilize AI devices.
This study employed a systematic review following the PRISMA guidelines to provide comprehensive knowledge of AI adoption in the hospitality industry and its relevant ethical issues. According to the PRISMA guidelines, the inclusion criteria and data collection process are explained. The present research examined 89 relevant research articles from prestigious databases such as Web of Science and ScienceDirect.com, as well as journal websites. The paper presents a keyword co-occurrence map (a minimal counting sketch of such a map follows at the end of this passage) and the number of published papers per publishing year to provide an overview of the landscape of AI research papers in hospitality. The study identified six research domains related to the publication themes, highlighting the advantages and complexities of AI technology in the hospitality industry. It summarizes the applications of AI in service areas and discusses different views on AI adoption from the perspectives of service providers and customers in the tourism and hospitality industry. Furthermore, it also references various studies that have explored the ethical implications of AI in the hospitality industry. The ethical issues related to AI adoption, such as resistance by employees, competition with rivals, and legal issues, are identified; these are essential and not frequently raised in publications. The paper contributes to the existing literature by providing a comprehensive analysis of AI adoption in the hospitality industry and emphasizing the need for further research in understanding the roboethics issues for AI adoption. The insights gained from this study can help hospitality professionals make informed decisions regarding AI investments and ensure the optimal utilization of AI technologies in their operations.
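As promised above, here is an illustration of the kind of counting behind a keyword co-occurrence map. The review does not publish its code (dedicated tools such as VOSviewer are typically used for such maps), so this minimal Python sketch with invented keyword lists is an assumption-laden stand-in:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: one keyword list per reviewed article.
articles = [
    ["artificial intelligence", "hospitality", "service robot"],
    ["artificial intelligence", "ethics", "service robot"],
    ["hospitality", "ethics", "covid-19"],
]

pair_counts = Counter()
for keywords in articles:
    # Count each unordered keyword pair once per article.
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} <-> {b}: {n}")
```

The resulting pair counts are exactly the edge weights a co-occurrence mapping tool would visualize.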
The literature review included in this study suggests that tourism and hospitality-related publications have developed in terms of not only a rise in volume but also a growing diversity of topics. However, significant research gaps and under-researched areas in the tourism industry were also revealed. Future studies should investigate more complicated smart environments in which robots interact simultaneously with other robots and people, as they become more autonomous and interconnected with the Internet of Things (IoT). In addition, interdisciplinary research collaborations are required to provide more robust and widespread research on AI technology. Future studies on human concerns should include replication studies to examine the effects of robots on the tourism and hospitality experience and the attitudes, requirements, and hopes/fears of staff. The integration of robots into the behaviours of customers and service staff in the tourism and hospitality industry should be examined with respect to morality and ethics.
This research provides managers and marketers in the hotel industry with essential information to establish appropriate AI device investment and adoption strategies. It aids in increasing their understanding of consumers' motivations for utilizing AI devices, proposing business strategies for planning, operating, and marketing their businesses, and enhancing customer experience using AI devices. It also enables hotel managers to strike a balance between the increased value-added requests of consumers, the technological advancement of the business, and the high danger of disease transmission.
The present research has two limitations. First, the theoretical framework and research findings used in this study are restricted to the present era. Second, the data collection approach consisted of conducting a systematic review of a larger base of hotel research to determine acceptance and usage of AI devices based on published views. Thus, the findings may vary significantly if the sample consists of actual hotel guests who have stayed in specific hotels that offer service through AI gadgets.
As AI technology rapidly advances, customers' adoption and usage of AI products may alter drastically in the near future. Therefore, it will be important to develop a theoretical framework that encompasses the nature of AI variables in the future to predict the factors that impact consumer acceptance and use of AI devices, and the relevant ethical issues that AI creates should be stressed by future research."}
+{"text": "Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI.
Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness, and reduced willingness to use AI. Moreover, after adjusting for all other factors, perceived AI replacement and AI usefulness were shown to significantly impact the intention to use AI. This study profiles radiology residents who are accepting of AI. Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
In this digital era, artificial intelligence (AI) technology is gaining increasing importance in many medical specialties. Radiologists' perceptions may influence the actual usage and acceptance of AI technology in clinical practice, and therefore, it is important to understand the variations in perspectives. Research suggests that identifying the factors influencing users' attitudes and acceptance of AI is also important. Previous studies have shown that differences in attitudes toward AI can be ascribed to a wide variety of factors, such as demographic characteristics.
The survey was conducted from December 2020 to April 2021 in 215 cities across 31 provinces in China. To ensure the representativeness of the respondents, the CAR approached all 557 radiology residency programs, and 407 (73.1%) programs were included in the survey. All participants receiving the SRT during the survey period were invited to complete the questionnaire voluntarily and anonymously via "Wenjuanxing", a professional online survey platform. Ethical approval was obtained from the Institutional Review Board of Tsinghua University.
The measures included 6 sections, covering demographic characteristics, working status, psychosocial aspects, personal experience, SRT contextual factors, and perspectives on AI. The study outcomes were binary variables indicating whether participants agreed or disagreed with AI usefulness, AI replacement, and AI acceptance. The CAR survey included 3 items on a 7-point Likert scale for AI perception and acceptance, which have been used in prior research.
The demographic characteristics included age (≤27 or >27 years), gender, educational level, and region. Working status included eye strain symptoms, annual after-tax income, weekly working hours, and hours spent on image interpretation per day. Psychosocial aspects considered burnout symptoms and psychosocial resilience. Personal experience included the experience of working to combat COVID-19 ("yes" or "no"), the experience of making any medical error during the past year ("yes" or "no"), the experience of hearing about AI ("yes" or "no"), and the experience of using AI at work ("yes" or "no"). SRT contextual factors covered SRT training years, perceived support from the SRT, perceived stress from the SRT, and SRT hospital. These variables were derived from the CAR survey questions and responses. A detailed description of these measures is provided in the supplementary materials.
Descriptive statistical analysis was used to calculate the percentage of characteristics among all participants. Means with SDs were presented for continuous variables.
The distribution of responses regarding AI-related experience as well as AI perception and acceptance was computed, and mean scores with SDs and the proportions agreeing or disagreeing were reported. Participants were categorized into 2 groups using mean values as the cutoff. Multivariable logistic regression models were then fitted to identify factors associated with AI replacement, AI usefulness, and AI acceptance; odds ratios (ORs) and 95% CIs were reported (a minimal sketch of this kind of estimation appears after the results below). We performed all statistical analyses in Stata. Two-tailed P values <.05 were considered statistically significant.
A total of 3666 radiology residents were included in this study.
In model 1, participants showing potential burnout symptoms (P<.001) and those perceiving higher stress levels from the SRT program were more likely to express concerns about AI replacement. By contrast, older participants, those spending more time on image interpretation, individuals who had experience using AI at work, and those who perceived more support from the SRT program were less likely to believe that AI would reduce the demand for radiologists.
In model 2, factors associated with the perception of AI usefulness were examined. Participants possessing greater psychosocial resilience and those perceiving more support from the SRT were more positive about the usefulness of AI. Those who had heard about AI and those who used AI at work were more likely to believe that AI could enhance radiology diagnosis. By contrast, female residents and residents with burnout symptoms had less favorable attitudes toward AI's usefulness.
In model 3, potential predictors of the intention to use AI were a higher frequency of eye strain, increased workload, higher levels of psychosocial resilience, having heard about AI, experience in using AI at work, and a stronger perception of support from the SRT. Conversely, burnout symptoms decreased the intention to use AI. In addition, respondents with higher education levels were less inclined to express an intention to use AI, whereas those perceiving higher AI usefulness were more likely to express such an intention.
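The models above were fitted in Stata; purely as the hedged sketch promised in the methods, the following Python fragment fits a logistic regression and exponentiates the coefficients and confidence bounds to obtain adjusted ORs with 95% CIs. The file name and predictor columns (`intend_to_use`, `burnout`, `resilience`, `gender`, `age_gt27`) are invented for illustration, not taken from the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("residents.csv")  # hypothetical survey extract

# Binary outcome (intention to use AI) on a handful of illustrative predictors.
fit = smf.logit("intend_to_use ~ burnout + resilience + C(gender) + age_gt27",
                data=df).fit()

ci = fit.conf_int()  # columns 0 and 1 hold the lower/upper log-odds bounds
table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI 2.5%": np.exp(ci[0]),
    "CI 97.5%": np.exp(ci[1]),
})
print(table.round(2))
```

Exponentiating both the point estimates and the interval endpoints is what turns log-odds coefficients into the OR (95% CI) format reported throughout the paper.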
This study explored the predictors of perception and acceptance of AI technology based on a nationwide sample of radiology residents in China. We found that age, gender, education, region, eye strain status, work hours, time spent on medical images, resilience, burnout, the experience of hearing about AI, the experience of using AI, perceived SRT support, and perceived SRT stress have varying effects on diverse attitudes toward, and acceptance of, AI. Furthermore, residents with positive attitudes toward AI have higher intentions to use it, whereas those with negative attitudes show the opposite effect. Our findings provide empirical evidence for strategies to support the successful implementation of AI in health care settings.
In this study, most respondents had overall positive attitudes toward AI, which is consistent with the results from previous studies. Our results confirm earlier findings that older respondents were less likely to agree that AI would reduce the demand for radiologists, while male radiologists were more inclined to believe that AI would benefit diagnosis. Older groups have more work experience and more confidence in their job performance and thus may be less concerned about being replaced by AI.
Importantly, participants experiencing more eye strain tend to view AI positively and support its adoption in the field. This aligns with prior research showing that eye health consciousness positively influences people's perception of AI's usefulness. Radiologists experience a higher rate of eye strain symptoms due to their extended periods in front of computers, reading and analyzing medical images.
Radiology residents who frequently worked overtime showed a greater willingness to use AI. In general, the perceived benefits of AI implementation significantly promote its adoption in health care.
This study empirically supports the association between burnout and AI adoption, demonstrating that burnout is differentially related to various aspects of AI attitudes. Individuals with burnout symptoms were more likely to disagree with the usefulness of AI in health care, show less interest in its adoption, and express concerns about being replaced by AI. People experiencing burnout often report a reduced sense of personal accomplishment.
Radiology residents who have heard of AI and used AI at work tend to recognize its usefulness and be more enthusiastic about its adoption. This is consistent with existing research showing that an AI-related background is associated with positive attitudes. In line with previous research, our findings support this association. As previous studies have demonstrated, users' perception of AI significantly influences their intention to adopt the technology.
Targeted actions should be taken to promote AI adoption among radiologists. Based on our findings, we recommend specific policy and practice implications. First, considering AI's potential to reduce radiologist workload, integration of AI into routine workflows should be prioritized.
Several limitations of our study should be noted. First, despite the large sample size, the response rate was relatively low due to the voluntary nature of data collection, potentially introducing selection bias. Second, the use of self-reported data may lead to recall bias, as respondents might provide preselected answers. Third, we used AI as an umbrella term without specifying different types of AI technology in this study, while AI could be categorized based on development stages or specific applications. Fourth, due to limited data resources, we only examined 3 aspects of AI acceptance, and future research should investigate additional perceptions of and attitudes toward AI. Furthermore, we used a selection of indicators from the CAR survey to align with our research objectives. It is undeniable that better indicators exist for assessing the variables. For a more comprehensive analysis, future studies should consider expanding the survey dimensions to encompass a broader range of associated factors. For example, important work-related dimensions such as the number of cases reviewed per year and the combination of headache and eye strain symptoms should be considered. Fifth, while we used a nationally representative sample, our study primarily focused on a younger group within the medical system, and the representation of senior physicians was insufficient. Future studies should aim to include senior physicians. Finally, it is important to note that causal conclusions cannot be drawn from cross-sectional observational data.
As AI continues to integrate into health care and daily clinical practice, it is crucial to explore service users' motivation and engagement to maximize the benefits of new technologies.
Building on previous research on AI acceptance, this study provides a comprehensive and nuanced examination of the associations between various antecedents and different AI attitudes, including perceived replacement, perceived usefulness, and acceptance. Based on our nationwide survey in China, this study enhances our understanding of the current state of AI acceptance, especially among Chinese radiologists, the majority of whom are willing to embrace AI. We categorized all associated factors into 5 domains, namely, demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. We established 5 models to reveal these complex associations. Our findings suggest that medical educators, hospital managers, and policy makers should be mindful of the barriers and facilitators in promoting AI in health care and develop appropriate procedures and policies. It is essential to adopt multidimensional approaches that involve cooperation across diverse areas, including medical education, hospital management, human resources, organizational psychology, and technology management, to facilitate AI adoption among physicians."}
+{"text": "This review proposes and explores the significance of "experience-based medical education" (EXPBME) in the context of an artificial intelligence (AI)-driven society. The rapid advancements in AI, particularly driven by deep learning, have revolutionized medical practices by replicating human cognitive functions, such as image analysis and data interpretation, significantly enhancing efficiency and precision across medical settings. The integration of AI into healthcare presents substantial potential, ranging from precise diagnostics to streamlined data management. However, non-technical skills, such as situational awareness in recognizing AI's fallibility or inherent risks, are critical for future healthcare professionals. EXPBME in a clinical or simulation environment plays a vital role, allowing medical practitioners to navigate AI failures through sufficient reflection. As AI continues to evolve, aligning educational frameworks to nurture these fundamental non-technical skills is paramount to adequately prepare healthcare professionals. Learner-centered EXPBME, combined with the acquisition of AI literacy, stands as a key pillar in shaping the future of medical education.
In recent years, the remarkable advancements in the field of artificial intelligence (AI) have led to the emergence of third-generation AI, predominantly powered by deep learning. This cutting-edge form of AI is revolutionizing numerous aspects of the medical domain, showcasing immense potential for transformative applications. One of the key strengths of this AI paradigm is its ability to replicate and utilize intellectual processes akin to those of humans, thereby greatly augmenting the efficiency and effectiveness of various medical settings.
First-generation AI, often referred to as "symbolic AI" or "good old-fashioned AI," focused on rule-based systems and symbolic reasoning. Researchers programmed explicit rules and logic to represent knowledge and solve problems. These systems were rule-bound and lacked learning capabilities. Examples include expert systems and early natural language processing. Second-generation AI witnessed a shift toward machine learning and statistical methods. Rather than relying solely on handcrafted rules, AI systems were designed to learn patterns and make predictions from data.
Techniques such as neural networks, decision trees, support vector machines, and clustering algorithms became prevalent. This era saw significant advancements in supervised learning, unsupervised learning, and reinforcement learning.
In contrast to first- or second-generation AI, third-generation AI, through its sophisticated algorithms and neural networks, can engage in complex tasks comparable to human cognitive functions. Third-generation AI includes intricate processes such as pattern recognition, data analysis, discovery of hidden patterns and correlations within vast datasets, and learning from previous experiences. By effectively mimicking these higher-order cognitive functions, AI can significantly contribute to medical research, diagnosis, and treatment planning. Moreover, the integration of third-generation AI into healthcare systems has brought about a substantial transformation in the management and analysis of medical information.
While AI technology has dramatically evolved, aligning educational frameworks to nurture the fundamental non-technical skills associated with AI's fallibility or inherent risks is paramount to adequately prepare healthcare professionals. To cultivate these non-technical skills, active reflection on learners' own experience is warranted. In other words, a learner-centered approach grounded in experience is essential. In this review, we propose the significance of learner-centered "experience-based medical education" (EXPBME) and the acquisition of AI literacy as essential components in shaping the future of medical education.
Transformative potential of third-generation AI in healthcare
The ongoing advancements in third-generation AI are poised to bring about a paradigm shift in healthcare, augmenting medical practices, empowering healthcare professionals, and ultimately enhancing the quality of care provided to patients.
Looking forward, as AI technology continues to evolve and mature, it is poised to revolutionize the entire healthcare ecosystem. AI accelerates drug discovery by analyzing vast datasets to predict drug behaviors, identify potential compounds, and optimize drug properties. This expedites the research and development process, saving time and resources. AI tailors medical treatments to individuals by analyzing genetic, clinical, and lifestyle data. This enables customized therapies and medication plans, optimizing outcomes based on a person's unique characteristics and health history. AI enhances telemedicine by providing diagnostic support, remote monitoring, and personalized health recommendations. It improves accessibility to healthcare services, especially in remote or underserved areas, by leveraging technology for virtual consultations and efficient healthcare delivery. This evolution is expected to progressively shift the traditional workflows of physicians, placing a stronger emphasis on patient-centered care and multidisciplinary collaboration. The collaborative integration of AI with the medical field holds immense promise, paving the way for a future where healthcare is more personalized, efficient, and effective.
Ethical and educational imperatives in AI integration for enhanced healthcare
Striking a harmonious balance between technological advancements and ethical integrity is central to realizing the full potential of AI integration in medical practice for the betterment of healthcare outcomes and patient welfare.
The rapid proliferation of AI applications within the realms of medicine and healthcare necessitates a heightened awareness among medical educators on a global scale regarding AI and the multifaceted challenges it entails. However, in this promising landscape of AI-driven advancements in healthcare, ethical and legal concerns loom large. Issues relating to data privacy, algorithmic bias, accountability, and the potential for overreliance on AI require careful deliberation and the establishment of appropriate regulatory frameworks.
To ensure that medical students are well prepared for the evolving healthcare landscape, academic institutions are proactively reshaping their curricula to encompass AI and data science literacy. A critical aspect of this educational paradigm involves integrating competency development related to AI and its legal and ethical issues into medical education curricula.
In sum, as AI applications continue to gain prominence within the medical and healthcare domains, a proactive and informed approach to medical education becomes paramount. The evolution of educational frameworks to encompass AI and data science literacy, coupled with a strong emphasis on ethical considerations and competency development, will play a pivotal role in preparing future healthcare professionals to navigate and capitalize on the dynamic healthcare landscape effectively.
Caution against excessive digital dependency in medical education in the AI era
Digital transfer in the medical environment refers to the application of information and communication technologies (ICTs) driven by AI to support health and healthcare, including electronic medical records, videoconferencing technology, and wearable devices such as mobile applications or virtual/augmented reality. The onset of the COVID-19 pandemic served as a catalyst, accelerating the digitization of medical education through the widespread adoption of remote classes and virtual training platforms. Extensive research has shed light on the elevated levels of digital dependency observed among medical students.
A comprehensive approach could involve using interactive educational platforms for the practical application of theoretical knowledge. Combining AI resources with hands-on experiences through blended learning creates a holistic learning environment. The AI-driven digital age offers transformative potential for medical education and practice. It is vital to balance digital advancements with experiential learning to nurture healthcare professionals proficient in technology and clinical skills, optimizing healthcare outcomes in today's digital landscape.
Essential role of non-technical skills derived from experience in the AI-driven society
Both medical educators and learners should remain cognizant of the potential risks associated with an overreliance on AI-generated information and emphasize the continued importance of developing non-technical skills through experiential learning. Balancing the integration of AI-driven advancements with the essential cultivation of non-technical skills will be fundamental in preparing healthcare professionals to deliver safe, efficient, and compassionate care in the ever-evolving AI-dominated medical landscape.
In the rapidly evolving landscape of the AI era, the significance of both technical and non-technical skills in medical practice cannot be overstated, as they play pivotal roles in ensuring optimal clinical outcomes and fostering advancements in patient care and medical safety. Here, we propose the idea of EXPBME, which is grounded in structured reflection on one's own experiences. EXPBME through clinical or simulation experience stands as an efficacious instructional method for cultivating both technical prowess and the crucial non-technical crisis management skills across diverse medical settings. However, as we transition further into the AI era, a new layer of complexity is added to the acquisition of non-technical skills. Medical practitioners must now not only grapple with assimilating the vast trove of AI-generated data but also maintain their focus on developing the nuanced non-technical skills that are equally critical for effective patient care.

Significance of learner-centered EXPBME in the AI-driven society
The constantly changing field of medicine, driven by technology and breakthroughs in research, requires doctors to keep learning and improving their skills continuously. Medical students must embrace the concept of lifelong learning and commence career design strategies early in their educational journey, preferably during their initial clinical training period following the attainment of their M.D. degree. Learner-centered EXPBME, such as early exposure to diverse clinical settings or mentored research experiences, plays a crucial role in shaping their career trajectories. From the viewpoint of learner-centered EXPBME, incorporating professional development and career-oriented modules into the medical curriculum can further enhance students' readiness to design their careers effectively.

Significance of learner-centered EXPBME for acquiring non-technical skills regarding AI literacy
Experience-based learning, particularly through simulation exercises employing a rescue method, emerges as a cornerstone in equipping medical professionals with the indispensable competencies needed to mitigate and manage AI errors in real time. Emphasizing learner-centered EXPBME is crucial for developing non-technical AI skills in the medical field. It is rooted in core principles, preparing learners for AI's evolving role in healthcare education. In the AI era, EXPBME gains central importance in shaping medical education.

The ongoing transformation within healthcare, driven by the integration of AI and its allied technologies, necessitates a parallel evolution in medical education. Medical learners must develop advanced research skills that enable them to delve deeply into the intricate world of AI, not merely understanding the algorithms and mechanisms that power it but also critically evaluating and validating the outcomes that AI-powered systems produce. While AI is precise and efficient, it is crucial to acknowledge its fallibility and the potential for errors. Healthcare professionals bear the responsibility for AI-driven diagnoses or suggestions, highlighting the need for human oversight. Understanding the complex deep learning mechanisms of AI can be challenging, underscoring the importance of non-technical skills in detecting AI failures.
This underscores the necessity of integrating experiential learning methodologies, designed to simulate AI failures and immerse learners in situations that prompt swift and effective responses. Learners must not only develop AI literacy but also acquire a high level of competency in non-technical skills through practical experiences. These skills empower them to identify errors or mistakes made by AI effectively. This, in turn, enhances patient safety and ultimately leads to the delivery of optimal healthcare outcomes in an AI-powered medical landscape that is in a constant state of evolution.

Further roles of learner-centered EXPBME in the AI era
In the evolving landscape of healthcare technology, the role of learner-centered EXPBME transcends its foundational principles, taking on a multifaceted significance in the AI era. Medical education programs, deeply rooted in learner-centered approaches, can elevate AI literacy development by replicating a spectrum of scenarios where AI systems may provide incorrect diagnoses or treatment recommendations. Through hands-on experiences in managing such diverse situations, medical learners not only cultivate a keen awareness of AI errors but also acquire the ability to devise appropriate corrective actions, thus honing their problem-solving skills in a practical context.

These AI literacy programs, an integral part of learner-centered EXPBME, can be thoughtfully designed to simulate a wide array of possible AI failures. By exposing healthcare professionals to various AI failure scenarios, these programs ensure that learners are well prepared to navigate the intricacies and challenges that may emerge in a technologically advanced healthcare environment. As AI technology continues its relentless advancement and becomes increasingly interwoven into the very fabric of healthcare delivery, the timeless learning theories that form the bedrock of medical education retain their crucial relevance.

Moreover, underscoring the complementary role of non-technical skills, such as effective communication, critical thinking, and ethical decision-making, becomes paramount in the context of learner-centered medical education within clinical settings. The integration of these non-technical skills is instrumental, not only to ensure that medical professionals are proficient in leveraging AI for optimal patient care but also to equip them with the essential skills necessary to navigate the complexities and uncertainties that accompany the integration of AI in modern healthcare. In other words, learner-centered EXPBME with sufficient reflection on experience can cultivate the non-technical skills needed to oversee AI.

By embracing the principles of learner-centered EXPBME and prioritizing the holistic development of a well-rounded skill set, the medical community is poised to effectively harness the transformative potential of AI. Through this approach, the healthcare ecosystem can elevate patient care and safety to unparalleled heights, upholding the highest standards while embracing the technological wave of AI with a robust foundation of educational excellence.

Despite rapid technological advancements, the fundamental experiential learning theory has shown resilience to change. As we navigate the AI era, it becomes imperative to introduce an updated medical education curriculum that delves beyond the application and operation of AI, emphasizing a profound understanding of its principles and associated risks.
While AI technology rapidly advances, the core learning theories remain steadfast. Therefore, an essential focus should be placed on recognizing the complementary function of non-technical skills in nurturing learner-centered EXPBME combined with AI literacy, accompanied by a commitment to continuous enhancement. Considering a learner-centered approach in EXPBME, integrating modules focused on professional development and career orientation into the medical curriculum can significantly improve students' preparedness in effectively planning their future careers. Learner-centered EXPBME, built upon these fundamental principles, acts as an essential resource to equip medical learners for the complex and constantly evolving role of AI in the field of medicine."} +{"text": "The aim of our study is to evaluate artificial intelligence (AI) support in pelvic fracture diagnosis on X-rays, focusing on performance, workflow integration and radiologists' feedback in a spoke emergency hospital. Between August and November 2021, a total of 235 sites of fracture or suspected fracture were evaluated and enrolled in the prospective study. The radiologist's specificity, sensitivity, accuracy, and positive and negative predictive values were compared to those of AI. Cohen's kappa was used to calculate the agreement between AI and the radiologist. We also reviewed the AI workflow integration process, focusing on potential issues, and assessed radiologists' opinion of AI via a survey. The radiologist's performance in accuracy, sensitivity and specificity was better than AI's, but the McNemar test demonstrated no statistically significant difference between AI and radiologist performance (p = 0.32) and non-inferiority of AI to radiologist performance; the calculated Cohen's kappa was 0.64. Moreover, the commercially available AI algorithm used in our study automatically learns from data, so we expect a progressive performance improvement. AI could be considered a promising tool to rule out fractures and to prioritize positive cases, especially in increasing-workload scenarios, but further research is needed to evaluate the real impact on clinical practice.

Diagnostic delays caused by interpretative errors may lead to delayed treatments, increased surgical risks, and poor outcomes. Recent studies on patients' complaints revealed that 75% are due to interpretative mistakes and consequent incorrect diagnoses. Fracture misdiagnosis is one of the most frequent diagnostic errors and the major reason for paid malpractice claims: detecting thin fracture lines can be extremely challenging, and anatomic variants or previous traumas may be misinterpreted. In this clinical scenario, artificial intelligence (AI) solutions could have an important role in decreasing the percentage of fracture misdiagnoses. The risk of missing a subtle fracture increases with physician fatigue, even in the case of experienced radiologists. Time constraints/efficiency, error avoidance/minimization, and workflow optimization are the most significant drivers for the development of AI as a tool in the healthcare setting. The development of an effective AI system for image reporting could reduce the time spent reviewing images by 20%. This time can be spent on non-automatable tasks, such as providing personalized patient care, and on more complex tasks where human input is crucial. AI, machine learning (ML), deep learning (DL), and convolutional neural networks (CNN) are the keywords, and they are interconnected as follows.
AI is defined as computer systems able to perform tasks that mimic human intelligence. ML, a subfield of AI, allows a machine to learn and improve from experience, independently of human action. DL, a more specialized subfield of ML, analyses larger data sets, transforming algorithm inputs into outputs through computational models such as deep neural networks.

The connectivity pattern between neurons and the organization of the animal visual cortex inspired CNN development. Like the receptive field, each cortical neuron reacts to stimuli only in a restricted region of the visual field. The entire visual field is covered thanks to the partial overlap of the different neurons' receptive fields. CNNs apply relatively little pre-processing in contrast with other image classification algorithms. The network learns to optimize the filters (or kernels) through automated learning, rather than relying on hand-engineered filters as traditional algorithms do. This independence from previous knowledge and human intervention in feature extraction is one of their major advantages (a minimal illustrative sketch follows at the end of this section).

The aim of our study is to evaluate prospectively, and in a clinical environment, the AI performance in fracture diagnosis on X-rays. We decided to evaluate AI performance only in pelvic fracture diagnosis due to its important clinical impact and complexity: unstable pelvic fractures can be fatal due to pelvic haemorrhage and can require prompt management. Moreover, their diagnosis can be challenging due to overlapping structures. Our work also focuses on workflow integration issues and preliminary radiologists' feedback in a spoke emergency hospital.

2
This prospective study was approved by the institutional review board (n. 455, 17/6/21), and informed consent was waived because the analysis used anonymous data.

2.1
This was a prospective study performed at a single medical centre from August to November 2021. After institutional review board approval was obtained, patients who presented to our ED with pelvic trauma underwent pelvis X-rays and were enrolled in our study. In the first steps of our study, AI-aided diagnosis was requested by radiologists on a voluntary basis. Poor-quality exams, precluding human interpretation, were excluded from the study. A total of 223 patients were included, and a total of 235 sites of fracture or suspected fracture were evaluated; 7 patients had multiple fractures. The whole ED radiology staff (26 radiologists) participated in the study.

2.2
We used a commercially available CE class IIA AI solution: a deep CNN based on the "Detectron 2" framework, able to detect and localize fractures on native-resolution digital radiographs, integrated into radiology software as a diagnostic aid, highlighting each region of interest with a box and providing a confidence score about the presence of a fracture within it (solid line for highly suspected and dashed line for doubtful findings). Each exam was first interpreted by the ED radiologist, blinded to the AI results (unaided diagnosis). AI-processed images were subsequently retrieved for AI-aided diagnosis.

2.3
Final diagnoses were established by a senior radiologist with 10 years' experience in musculoskeletal imaging. Thirty-seven AI-radiologist discordant cases, as well as concordant negative cases with a positive/doubtful clinical examination, underwent computed tomography (CT). The scheme of AI software integration in the ED radiology workflow is shown in the corresponding figure.

2.4
Statistical analyses were performed using the MedCalc software calculator.
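For readers unfamiliar with the distinction between hand-engineered and learned filters drawn above, the following minimal NumPy sketch (illustrative only; not part of the study or of the commercial Detectron 2-based product) applies one and the same convolution operation with a fixed Sobel edge kernel and with a randomly initialized kernel of the kind a CNN would tune by backpropagation. All names and array sizes are invented for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Plain 2D cross-correlation of a grayscale image with a small kernel
    (the operation applied by one CNN filter, without padding or stride)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-engineered filter (classical image processing): a fixed Sobel edge kernel.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# CNN filter: the same operation, but the nine weights start random and are
# tuned by backpropagation against labelled radiographs instead of being fixed.
learned_kernel = 0.1 * np.random.randn(3, 3)

radiograph = np.random.rand(64, 64)               # stand-in for a pelvis X-ray
edge_map = conv2d(radiograph, sobel_x)            # fixed, hand-designed response
feature_map = conv2d(radiograph, learned_kernel)  # trainable response
```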
We compared AI and radiologist performance using the accuracy, sensitivity, specificity, and 95% confidence intervals (CIs) of each parameter. The McNemar test was used to evaluate the non-inferiority of AI to radiologists in accuracy, sensitivity, and specificity. The kappa coefficient was calculated between the AI and radiologist diagnoses.

3
The overall accuracy, sensitivity, specificity, and positive and negative likelihood ratios of AI and radiologists are shown in the corresponding table. AI and radiologists' specificities were comparable, with 16 and 6 false positives (FP), respectively; AI sensitivity was lower than the radiologists', with 10 and 9 false negatives (FN), respectively. The radiologist performance in accuracy, sensitivity and specificity was better than AI's, but the McNemar test demonstrated no statistically significant difference between AI and radiologist performance (p = 0.32). In total, 235 sites of suspected fracture were included, and 210 presented with concordant reports (agreement of 89.4%): 30 (12.8%) with fractures and 180 (76.6%) with no fractures. There were 25 (10.6%) discordant reports: 15 (6.4%) negative for AI and positive for the radiologist, and 10 (4.2%) negative for the radiologist and positive for AI. The kappa coefficient between AI and radiologist was 0.641 (95% CI from 0.512 to 0.770), which means substantial agreement (these agreement statistics are reproduced in the short script below). At the end of the study, all radiologists were asked to fill in a 5-point Likert scale survey for feedback assessment.

4
AI showed slightly lower accuracy in detecting pelvic fractures when compared to the detection accuracy of radiologists (93.62%). Our prospective study did not demonstrate a tangible improvement in patient outcomes or reporting time. However, we did establish the high NPV of AI (94.62%) and its non-inferiority to radiologists' performance. Considering that a CNN can enhance its performance over time, this type of AI software holds promise as a tool to reduce misdiagnoses. Delayed diagnosis of pelvic fractures, particularly unstable ones, can lead to a poor prognosis and increased risk of death: AI assists in promptly classifying a radiograph as positive or negative, also enabling prioritization of positive cases within the worklist. A previous prospective study on this topic was published; however, AI was not integrated into the clinical workflow. To the best of our knowledge, this is one of the first true prospective studies to apply AI in a real-time scenario.

During the early stage of our study, we encountered the following issues:
• Integration of AI hardware and software into the hospital's informatics network and our ED clinical workflow.
• Manual transmission of X-ray images to the AI server.
• Long processing time for AI (minutes).
All these issues were promptly resolved within the first week. RIS-PACS and AI engineers collaborated to achieve optimal network integration, automatic image transmission, and a drop in processing time to seconds.

Interpreting pelvis X-rays can be challenging due to artifacts caused by incorrect positioning during image acquisition, overlapping anatomical structures, and bowel meteorism. These artifacts result in interpretation difficulties for both radiologists and AI. A previous study demonstrated that AI performs better in fracture detection in anatomical areas with fewer artifacts and overlapping structures: the highest sensitivity was demonstrated on shoulder/clavicle X-rays and the lowest sensitivity on ribcage ones. Our data on pelvic fracture detection showed that radiologists performed better than AI, although the difference was not statistically significant.
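The agreement statistics reported in this section can be recomputed directly from the concordant and discordant counts given in the text. The short script below is an illustration rather than the authors' MedCalc workflow; it recovers Cohen's kappa = 0.641 and a McNemar p of about 0.32 (chi-square on the discordant pairs, without continuity correction, which matches the reported value). The CI for kappa would need a separate standard-error calculation not shown here.

```python
import math

# Paired AI / radiologist calls on the 235 sites reported in the study:
both_pos, both_neg = 30, 180                 # concordant reports
rad_pos_ai_neg, ai_pos_rad_neg = 15, 10      # discordant reports
n = both_pos + both_neg + rad_pos_ai_neg + ai_pos_rad_neg  # 235

# Cohen's kappa: observed agreement vs chance agreement.
po = (both_pos + both_neg) / n
ai_pos = (both_pos + ai_pos_rad_neg) / n
rad_pos = (both_pos + rad_pos_ai_neg) / n
pe = ai_pos * rad_pos + (1 - ai_pos) * (1 - rad_pos)
kappa = (po - pe) / (1 - pe)                 # ~0.641

# McNemar chi-square on the discordant pairs (no continuity correction);
# for 1 degree of freedom, p = erfc(sqrt(chi2 / 2)).
b, c = rad_pos_ai_neg, ai_pos_rad_neg
chi2 = (b - c) ** 2 / (b + c)
p = math.erfc(math.sqrt(chi2 / 2))           # ~0.32

print(f"kappa = {kappa:.3f}, McNemar p = {p:.2f}")
```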
AI sensitivity was lower than that of radiologists, but AI and radiologists had comparable specificity. Additionally, at least 5 of the 16 AI FP cases were easily recognized as such on radiologist review, including 3 skin fold artifacts and 2 old fractures. We expect these minor errors to be corrected by the AI's dynamic self-learning function. Our study had some limitations, such as a small patient cohort, short recruitment and observation periods, and the inapplicability of CT examination as a reference standard for all cases.

The feedback assessment from radiologists produced conflicting results. Most colleagues were likely sceptical about this new technology, and some of the older ones may have feared being replaced by AI in the future. Concerning the role of AI in malpractice risks, many radiologists perceive AI as a "black box" where inputs and outputs are clear, but the intermediate process remains unclear. This lack of transparency could lead to distrust and resistance. It has created a need for explainable AI, an emerging research topic focused on understanding how AI systems make their choices. Only a small number of colleagues currently acknowledge the usefulness and value of AI as a supportive tool in fracture detection, particularly in situations of overload, such as night shifts. These colleagues believe that AI has the potential to enhance radiologists' performance, improve patient management, streamline the medical decision-making process, and enhance the overall quality of healthcare. It is widely recognized that new technologies have the capacity to enhance the quality, efficiency, and safety of healthcare. However, introducing a new informatics tool can be a delicate process in certain healthcare settings, as it may entail new risks and elicit individual concerns.

5
To the best of our knowledge, this study represents one of the first prospective investigations to apply AI in real-time clinical practice and discuss the integration challenges within the clinical workflow. Contrary to our initial expectations, the preliminary results did not demonstrate a significant improvement in patient outcomes or reporting time. However, the study did highlight the high NPV of AI (94.62%) and its non-inferiority to radiologist performance. Furthermore, the commercially available AI algorithm utilized in our study has the capability to continuously learn from data, which suggests that its performance could progressively improve over time. AI shows promise as a tool for ruling out fractures, particularly when used as a "second reader," and for prioritizing positive cases, especially in scenarios of increased workload, such as emergency departments and night shifts. Nevertheless, further research is necessary to evaluate the actual impact of AI on clinical practice.

Institutional review board approval was obtained, and the need for written informed consent was waived because the manuscript does not contain any patient data. The authors state that this work has not received any funding. All authors contributed to data acquisition or data analysis/interpretation. Material preparation, data collection and analysis were performed by Rosa Francesca, Duccio Buccicardi, Fabio Borda and Gastaldo Alessandro. The first draft of the manuscript was written by Rosa Francesca and Duccio Buccicardi, and all authors commented on previous versions of the manuscript.
All authors read and approved the final manuscript. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."} +{"text": "Artificial intelligence (AI) has tremendous potential to change the way we train future health professionals. Although AI can provide improved realism, engagement, and personalization in nursing simulations, it is also important to address any issues associated with the technology, teaching methods, and ethical considerations of AI. In nursing simulation education, AI does not replace the valuable role of nurse educators but can enhance the educational effectiveness of simulation by promoting interdisciplinary collaboration, faculty development, and learner self-direction. We should continue to explore, innovate, and adapt our teaching methods to provide nursing students with the best possible education.

Artificial intelligence (AI) refers to the development of computer systems capable of performing tasks that have typically required human intelligence, such as visual perception, natural language processing, and decision-making. AI is currently integrated into nursing simulation education to enhance realism and interactivity and to personalize the learning experience for students. It has the potential to revolutionize the way we educate future nurses. First, AI technologies can be used to develop virtual patient models that mimic real-life clinical scenarios. These virtual patients can exhibit realistic physiological responses, such as vital sign changes, symptoms, and behaviors, based on input from the student's actions. AI algorithms enable virtual patients to adapt their responses dynamically, providing a more realistic and interactive simulation experience.

The aim of this review was to understand the potential benefits, limitations, ethical considerations, and challenges of nursing simulation education with AI. In addition, the author would like to suggest an optimal direction for nursing simulation education with AI.

The application of AI in nursing simulation education offers several benefits. These benefits met the criteria of the Healthcare Simulation Standards of Best Practice (HSSOBP™) of the International Nursing Association for Clinical Simulation and Learning (INACSL). Simulation with AI was found to be adaptable because these benefits improved the modality and fidelity, and enhanced the facilitation, of nursing simulation design, outcomes, and objectives for nursing simulation education.

There are technical challenges for nursing simulation education with AI. The first technical challenge is the integration of AI systems into existing simulation infrastructure. Adapting AI technologies to work seamlessly with current simulation equipment and software can be complex and require substantial resources. In addition to technical challenges, there are pedagogical challenges in nursing simulation education with AI. The first pedagogical challenge is designing effective AI-driven simulations. Developing high-quality AI simulations necessitates collaboration between nursing educators and AI experts to ensure clinical accuracy, realism, and alignment with learning objectives. While the advantages and development potential of nursing simulation education with AI are infinite, risk factors such as AI hallucination are also possible.
Therefore, we must set ethical principles and guidelines. Strategies to overcome the challenges in nursing simulation with AI include (1) collaboration between nursing educators and AI developers, (2) faculty development programs for AI integration, (3) rigorous evaluation and research on AI-driven simulations, and (4) engaging students in the dialogue on AI in nursing education. To live in the age of AI, we must remain competitive with AI, as suggested by Lee. The future is already here, and AI can pave the way by augmenting, not replacing, the valuable role of nursing educators. Given the challenges and ethical issues in integrating AI into education, we must continue to explore, innovate, and adapt our teaching methods to provide nursing students with the best possible education."} +{"text": "Combined with the spatial data processing capability of Geographic Information Systems (GIS), the Pan Jiazheng method is extended from two-dimensional (2D) to three-dimensional (3D), and a 3D landslide surge height calculation method is proposed based on grid column units. First, the data related to the landslide are rasterized to form grid columns, and a force analysis model of 3D landslides is established. Combining the vertical strip method with Newton's laws of motion, dynamic equilibrium equations are established to solve the surge height. Moreover, a 3D landslide surge height calculation expansion module is developed in the GIS environment, and the results are compared with those of the 2D Pan Jiazheng method. Comparisons showed that the maximum surge height obtained by the proposed method is 24.6% larger than that based on the Pan Jiazheng method. Compared with the traditional 2D method, the 3D method proposed in this paper better represents the actual spatial state of the landslide and is more suitable for risk assessment. Therefore, it is of vital significance to evaluate the surge wave hazard caused by reservoir bank landslides.

A landslide disaster refers to a disaster caused by the overall downhill slide of rock mass or soil mass under the action of gravity and is one of the main geological disasters in the world. Quick sliding speeds and long sliding distances of medium and large landslides result in massive losses of life and property every year. Among these, reservoir bank landslides produce large surges when sliding into the water, causing great harm to passing ships and surrounding buildings, and have therefore received much concern around the globe. There have been many surge events caused by landslides worldwide, such as the 100 m high surge triggered by the landslide in the Vajont Reservoir, Italy, in 1963, killing at least 2,000 people.

Calculating the surge height is one of the key indexes for evaluating the surge hazard. The methods of calculating the surge height can mainly be divided into the theoretical analysis method, the numerical simulation method, and the physical modelling method. Among them, the Pan Jiazheng method, a theoretical analysis method, is widely used in engineering applications because of its simple modelling process, which places few requirements on engineers, and its high precision. The Pan Jiazheng method originated from Noda. Noda4 proposed an approximate method to find the amplitude of the largest surge in the nonlinear region by utilizing the solutions obtained from linear theory. Since then, many scholars have conducted more in-depth research.
On this basis, Academician Pan Jiazheng5 divided the landslide body into many two-dimensional (2D) vertical strips and calculated the surge height by considering the horizontal and vertical movement of the landslide. This method is called the Pan Jiazheng method. The method has been applied and improved over the years. For example, Dai et al.6 used the Pan Jiazheng method to calculate the sliding speed of the Xiaduling landslide in the Three Gorges Reservoir area. Huang et al.7 improved the Pan Jiazheng method by considering the resistance of water and the change in the friction coefficient. Miao et al.8 proposed a sliding block model based on the 2D vertical strip method to predict the maximum surge height.

Although the Pan Jiazheng method has seen some improvements, it remains a 2D method, and its results differ depending on which 2D section is selected. However, the actual state of a landslide is three-dimensional (3D), and a 2D analysis cannot represent the real landslide state in a reasonable way. Hu18 proposed that the value obtained by 2D state analysis is about 70% of the 3D state value. Because the 2D Pan Jiazheng method cannot deliver high-precision results, extending it to a 3D state analysis is of great significance for improving its calculation accuracy.

Geographic Information Systems (GIS) are widely used in geotechnical engineering21. GIS features strong spatial analysis capabilities. It supports unified management and storage of spatial data, with functions such as spatial positioning, spatial database management, and digital elevation model establishment, and it provides users with multi-factor spatial analysis, prediction and forecasting, simulation optimization, and other methods for analyzing spatial data. Its most relevant feature here is that it can transform vector data into raster datasets based on the grid column unit model. Because of this good 3D spatial data processing capability, many scholars have added geotechnical models to their geographic information systems23. Xie's team26 built a 3D limit equilibrium method in the GIS and developed a slope stability analysis module. Building on those studies, this paper establishes a set of 3D landslide surge height calculation methods grounded in the GIS.

Making use of the spatial data processing capability of the GIS, this paper extends Pan Jiazheng's method from 2D to 3D and proposes a 3D landslide surge height calculation method based on the GIS. Firstly, considering the 3D spatial relations of grid column units, the expressions for calculating the required parameters are given. Through the force analysis of the grid column unit, combined with Newton's laws of motion and considering the action of water, the dynamic equation based on the grid column unit is established, the sliding speed of the sliding body during the sliding process is solved, and the surge height is then calculated.
At the same time, an expansion module for calculating surge height is developed based on Component Object Model (COM) technology in the ArcGIS environment, the calculation is carried out for the case of the Kaiding landslide at the Houziyan Hydropower Station, and the results are compared with those of the 2D Pan Jiazheng method, which verifies the applicability of the module. After this introduction, the second section introduces the GIS-based 3D landslide surge height calculation method, gives the specific surge height calculation formulas, and introduces the secondary development of the extension module. The third section verifies the correctness and applicability of the extension module through a case calculation. Finally, the fourth section presents the discussion and summary.

For a slope, the data are mainly represented in the form of vectors. These data include, but are not limited to, the slip surface, strata, groundwater, faults, the slip body, and other types of data. These vector data layers can be converted to raster data layers using the spatial analysis capabilities of GIS to form a grid data set. The grid data structure consists of rectangular units. Each rectangular unit has a corresponding row and column number and is assigned an attribute value that represents the grid unit31. Therefore, the slope can be divided into square columns based on the grid units to form a grid column unit model, as shown in the corresponding figure.

In the parameter definitions, θ is the dip of the grid column at the slip surface; α is the dip direction of the grid column at the slip surface; β is the sliding direction of the landslide; θr is the apparent dip in the main inclination direction of the landslide; αx is the apparent dip along the X-axis; and αy is the apparent dip along the Y-axis. Of these six parameters, α, θ, and β are known, and their calculation is described in our previous paper25. The other three parameters, αx, αy and θr, can be calculated from the spatial relationships shown in the corresponding figure (see the computational sketch after this passage). cellsize represents the side length of each grid column. The bottom area of one grid column is calculated from cellsize and θ (equation omitted in the source).

The weight W of the grid column is expressed as a sum over its strata (equation omitted in the source), where m is the number of strata, hm is the height of each stratum, and rm is the unit weight of each stratum. For grid column units above the water, rm is the natural unit weight; for grid column units under water, rm is the buoyant unit weight.

The pore water pressure is obtained as follows32 (equation omitted in the source), where R is the distance from the centre of the bottom of the grid column to the water surface.

When the sliding body enters the water, the resistance of the water is calculated as follows (equation omitted in the source), where G is the resultant force of the resistance of the water to the sliding body (mN); cw is the viscous resistance coefficient, which is 0.15 to 0.18; ρf is the buoyant density (g/m3), taken as the average over all strata; v is the velocity of the landslide (m/s); and S is the surface area of the grid column in the water (m2).

One grid column ABCDA1B1C1D1 is selected, and the force analysis is explained as follows26.
The weight of one grid column is W; its direction is the Z-axis; and the weight acts at the centroid of the grid column.
The resultant horizontal seismic force is kW, where k is the "seismic coefficient"; the direction of kW is the sliding direction of the landslide; and the resultant horizontal force acts at the centroid of the grid column.
The external loads on the ground surface are represented by P; the direction of P is the Z-axis; and these external loads act at the centre of the top of the grid column.
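The parameter equations referred to above were rendered as images in the source and are lost. As a hedged substitute, the sketch below computes the same quantities from standard relations consistent with the surrounding definitions: the apparent-dip rule tan(θa) = tan(θ)·cos(Δazimuth), the inclined-plane area cellsize²/cos(θ), the per-stratum weight sum stated in the text, and a hydrostatic pore pressure γw·R. The paper's exact expressions may differ in detail, and every numeric input here is invented.

```python
import math

GAMMA_W = 9.8  # unit weight of water, kN/m^3 (hydrostatic assumption)

def apparent_dip(dip_deg, dip_dir_deg, target_azimuth_deg):
    """Apparent dip of the slip surface along a chosen azimuth, using the
    standard relation tan(theta_a) = tan(theta) * cos(delta_azimuth)."""
    delta = math.radians(dip_dir_deg - target_azimuth_deg)
    return math.degrees(math.atan(math.tan(math.radians(dip_deg)) * math.cos(delta)))

def base_area(cellsize, dip_deg):
    """True area of the inclined slip surface under one square grid cell."""
    return cellsize ** 2 / math.cos(math.radians(dip_deg))

def column_weight(cellsize, strata):
    """Weight of one grid column; strata = [(height_m, unit_weight_kN_m3), ...].
    Use natural unit weights above water and buoyant unit weights below,
    as the text prescribes."""
    return cellsize ** 2 * sum(h * gamma for h, gamma in strata)

def pore_pressure(depth_below_water_m):
    """Hydrostatic pore water pressure at the column base (kPa)."""
    return GAMMA_W * depth_below_water_m

# One 5 m x 5 m column with two strata; base dips 23 deg toward azimuth 150,
# sliding direction beta = 140 (all values invented for illustration):
theta_r = apparent_dip(23.0, 150.0, 140.0)
A = base_area(5.0, 23.0)
W = column_weight(5.0, [(12.0, 18.84), (6.0, 19.43)])
u = pore_pressure(8.0)
```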
The external loads represent loads caused by objects on the surface of the landslide, such as buildings, trees, and so on.
The normal and shear stresses on the slip surface are represented by σ and τ, respectively. The normal stress is perpendicular to the slip surface, and the shear stress is in the sliding direction of the landslide. The normal and shear stresses act at the centre of the bottom of the grid column.
The pore water pressure on the slip surface is u; u acts along the same line as σ.
The inter-column forces are defined as follows: the horizontal tangential forces on the vertical faces at y = 0 and y = Δy are T and T + ΔT, respectively; the vertical tangential forces on the vertical faces at y = 0 and y = Δy are R and R + ΔR, respectively; the normal forces on the vertical faces at y = 0 and y = Δy are F and F + ΔF, respectively; the horizontal tangential forces on the vertical faces at x = 0 and x = Δx are E and E + ΔE, respectively; the vertical tangential forces on the vertical faces at x = 0 and x = Δx are V and V + ΔV, respectively; and the normal forces on the vertical faces at x = 0 and x = Δx are H and H + ΔH, respectively. For convenience, the resultant force between columns in the sliding direction of the landslide is defined as ΔD.

The force analysis of one grid column and the spatial relationships between the parameters at the slip surface are shown in the corresponding figures. We assume that all the grid column units move continuously, do not separate in the macroscopic dimension, and remain vertical after sliding, as also assumed by Pan Jiazheng5.

We arbitrarily selected a grid column unit (row i and column j). According to Newton's laws of motion, dynamic equilibrium equations are established in the sliding direction of the landslide and in the vertical direction (Eqs. 10 and 11; equations omitted in the source, with a schematic reconstruction below), where ax and a are the horizontal and vertical accelerations of the grid column, respectively; φ is the effective friction angle of the grid column at the slip surface; g is gravitational acceleration; and c is the effective cohesion of the grid column at the slip surface. The force analyses in the sliding direction of the landslide and in the vertical direction are shown in the corresponding figure.

For the entire sliding body, the forces between the grid columns are internal forces; that is, their resultant force is 0, yielding Eq. (12) (omitted in the source). By substituting Eq. (11) into Eq. (12), ax can be determined.

The size of the grid column unit (i, j) can be set to an arbitrary square. A partitioning line is drawn from the bottom to the top of the landslide every ΔL in the sliding direction of the landslide, and the resulting regions are numbered zone 1, zone 2, zone 3, …, zone (n−1), zone n. Each partition includes a number of grid column units, and the length of zone n is less than or equal to ΔL, as shown in the corresponding figure. Using the spatial analysis capability of GIS, the landslide body is rasterized; after the landslide slides a distance ΔL, the weight of zone (n−1) is shifted to zone n, the weight of zone (n−2) becomes the weight of zone (n−1), and so on. After ax1 is calculated, the corresponding velocity can be obtained from the kinematic relation (equation omitted in the source).
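Because Eqs. (10)-(12) themselves could not be recovered, the following is a schematic LaTeX reconstruction only, written to match the forces listed above: Newton's second law per column with a Mohr-Coulomb base resistance, water drag G for submerged columns, and an inter-column resultant ΔD that cancels on summation, which is exactly the step the text uses to obtain the equation for ax. The drive term collects the components of the weight W, the seismic force kW, and the surface load P along the sliding direction.

```latex
% Schematic only: the paper's exact Eqs. (10)-(12) were lost in extraction.
\begin{aligned}
\frac{W_{ij}}{g}\,a_x &= F^{\text{drive}}_{ij}
  - \underbrace{\left[\, c_{ij}A_{ij} + (\sigma_{ij}-u_{ij})A_{ij}\tan\varphi_{ij} \,\right]}_{\text{Mohr-Coulomb base resistance}}
  - G_{ij} + \Delta D_{ij},\\
\sum_{i,j}\Delta D_{ij} &= 0
  \quad\Longrightarrow\quad
  a_x = g\,\frac{\sum_{i,j}\bigl(F^{\text{drive}}_{ij} - F^{\text{resist}}_{ij} - G_{ij}\bigr)}{\sum_{i,j} W_{ij}}.
\end{aligned}
```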
The values of ax and vx obtained during the calculation can be plotted as curves versus the sliding time. The calculation is continued step by step; when the obtained horizontal acceleration becomes negative, the maximum velocity has been reached. The steps in calculating the landslide sliding velocity are summarized in the corresponding figure (a minimal sketch of this stepwise integration follows this passage).

The China Institute of Water Resources and Hydropower Research proposed an empirical formula for calculating the maximum surge height33 (equation omitted in the source), where ξmax is the maximum surge height (m); d is the comprehensive influence coefficient, with an average value of 0.12; vm is the maximum sliding velocity (m/s); V is the volume of the landslide body in the water (m3); and g is gravitational acceleration, which equals 9.8 m/s2. In the formula, the main factors that affect the surge height are the sliding velocity and the volume of the landslide.

The formula for calculating the surge height at different distances from the landslide body is as follows (equation omitted in the source), where ξ is the surge height at a point L metres from the landslide body (m); n is the calculation coefficient, which is 1.4; and d1 is the influence coefficient related to the distance L, which is determined by a further formula (also omitted in the source).

Combined with the surge height calculation method, an expansion module was developed based on Component Object Model (COM) technology in the ArcGIS environment26. First, the data related to the slope are rasterized in ArcGIS software to form a raster data set, and the required parameters are calculated using the extension module. Then, the sliding body is divided into units, and the sliding speed at each time point is calculated step by step with the extension module; the calculation stops when the acceleration is less than 0. Finally, the maximum sliding speed is obtained and the maximum surge height is calculated.

The Kaiding landslide is approximately 14.5 km away from the dam of the Houziyan hydropower station in Sichuan, China. The length of the landslide along the river is approximately 490 m, the top elevation is 2080 m, the bottom elevation is 1754 m, and the volume is approximately 4.5 million m3. Plan and section views are shown in the corresponding figures. The unit size of a grid column is 5 m × 5 m, and ΔL = 10 m. The internal friction angle φ at the slip surface is 22.8°, the natural unit weight is 18.84 kN/m3, the buoyant unit weight is 19.43 kN/m3, the buoyant density is 2.11 × 106 g/m3, the viscous resistance coefficient is 0.18, and the elevation of the reservoir water level is 1810.3 m. When the landslide body slides, the effective cohesion c at the slip surface decreases to 0, that is, c = 0 (following Pan Jiazheng5). Using this method and Pan Jiazheng's 2D method, the acceleration and velocity curves versus the sliding time can be obtained, as shown in the corresponding figure.
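The stepwise velocity calculation described above reduces to a simple explicit time integration. The sketch below is a minimal, assumption-laden stand-in: net_force(v, s) is a hypothetical caller-supplied function bundling the paper's summed per-zone forces (drag growing with v, zone weights shifted each time the displacement s advances by ΔL), and the toy numbers in the usage line are invented.

```python
def slide_velocity(net_force, total_mass, dt=0.01):
    """Step-by-step integration of the sliding velocity: recompute the net
    (driving minus resisting) force each step and stop as soon as the
    acceleration turns negative, at which point the velocity has just peaked.
    """
    v = s = t = 0.0
    while True:
        a = net_force(v, s) / total_mass
        if a < 0.0:
            return v, t          # maximum velocity and the time to reach it
        v += a * dt
        s += v * dt
        t += dt

# Toy usage: driving force decays with displacement, drag grows with v^2.
vmax, t_peak = slide_velocity(lambda v, s: 5.0e8 * (1 - s / 400.0) - 2.0e6 * v**2,
                              total_mass=9.0e9)
```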
The calculation results indicate that the maximum velocity obtained by the proposed method is 11.21 m/s, the starting acceleration is 1.25 m/s2, and the sliding time required to reach the maximum velocity is 8.67 s. In comparison, the maximum velocity obtained by the Pan Jiazheng method is 9.51 m/s, the starting acceleration is 0.84 m/s2, and the sliding time required to reach the maximum velocity is 9.73 s. Comparing the results of the proposed method with those of the Pan Jiazheng method, the maximum velocity of the proposed method is 15.2% higher than that calculated by the Pan Jiazheng method, the starting acceleration is 32.8% higher, and the sliding time required to reach the maximum velocity is 1.06 s shorter (verified in the snippet below).

The volume V of the landslide body under water is 340 × 104 m3. According to the empirical formulas above, the maximum surge height and the surge height at the dam site were calculated. The landslide is approximately 14.5 km from the dam, the crest elevation is 1847.02 m, and the elevation of the reservoir water level is maintained at 1810.3 m. With a surge height at the dam site of 0.56 m, water will not flow over the dam crest, and the safe operation of the dam will not be affected. The maximum surge height obtained by the proposed method is 24.6% larger than that based on the Pan Jiazheng method, and the surge height at the dam site obtained by the proposed method is 21.4% larger than that based on the Pan Jiazheng method.

The calculation results show that the difference between the results of the 2D method and the 3D method is 24.6%. Hu13 proposed that the value obtained by 2D state analysis is about 70% of the 3D state value; the result is consistent with Hu's prediction. Compared with that of the 2D method, the computational model of the 3D method better represents the actual spatial state of the landslide. Therefore, the method in this paper is more applicable than the Pan Jiazheng method in actual risk evaluation.

This paper proposes a 3D landslide surge height calculation method that divides the landslide into several grid column units. The method assumes that all the grid column units move continuously, do not separate in the macroscopic dimension, and remain vertical after sliding; that is, the column units are assumed to be rigid. In the actual sliding process, however, the column units cannot always remain vertical, especially in soil landslides, whereas the column units of a rock landslide retain better integrity during sliding, so the method in this paper is best suited to rock landslides.

Combined with the spatial data processing capability of the GIS, the Pan Jiazheng method is extended from 2D to 3D, and a 3D landslide surge height calculation method is proposed for the first time. Combined with Newton's laws of motion, the dynamic balance equation for calculating the sliding speed of a 3D sliding body is derived, and the surge height is then calculated.

This is the first time the surge height calculation model has been combined with GIS. An extension module was developed based on ArcGIS software, and the feasibility of the module was verified by a case study. The module has the advantages of a unified data format and a simple preparation process.

Because the Pan Jiazheng method is a calculation method focused on 2D sections, the calculation results will differ if different sections are selected.
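The percentage comparisons in this paragraph check out when the differences are expressed relative to the 3D values, as the following three lines of arithmetic confirm.

```python
v3d, v2d = 11.21, 9.51   # maximum sliding velocity, m/s
a3d, a2d = 1.25, 0.84    # starting acceleration, m/s^2
t3d, t2d = 8.67, 9.73    # time to reach maximum velocity, s

print(f"velocity:     {(v3d - v2d) / v3d:.1%} higher")   # -> 15.2%
print(f"acceleration: {(a3d - a2d) / a3d:.1%} higher")   # -> 32.8%
print(f"time to peak: {t2d - t3d:.2f} s shorter")        # -> 1.06 s
```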
After the 3D landslide is rasterized, the 3D calculation model based on the grid column unit is established, which overcomes the above defects and brings the calculation model close to the actual situation; therefore, the method in this paper is more applicable than the Pan Jiazheng method in actual risk evaluation. As the application of GIS in geotechnical engineering becomes increasingly extensive, the surge height calculation method established in GIS in this paper will provide a theoretical basis for scholars to add surge height calculation modules to their respective geographic information systems.

Supplementary Information."} +{"text": "In this study, we used the recombinant expression of full-length Guillardia theta ACR1 (GtACR1_full) for pH measurements in Pichia pastoris cell suspensions as an indirect method to assess its anion transport activity and for absorption spectroscopy and flash photolysis characterization of the purified protein. The results show that the CPD, which was predicted to be intrinsically disordered and possibly phosphorylated, enhanced NO3− transport compared to Cl− transport, which resulted in the preferential transport of NO3−. This correlated with the extended lifetime and large accumulation of the photocycle intermediate that is involved in the gate-open state. Considering that the depletion of a nitrogen source enhances the expression of GtACR1 in native algal cells, we suggest that NO3− transport could be the natural function of GtACR1_full in algal cells.

Transmembrane α-helical proteins play vital roles in fundamental biological processes in living organisms. They are involved in the transportation of ions and small molecules, in cellular signal transduction, in enzymatic reactions, and so on. Microbial rhodopsins are a family of such proteins, and they function in response to light. In the last two decades, a significant number of microbial rhodopsins have been discovered and, at the same time, the diversity of their light-dependent molecular functions has been clarified, such as ion pumps, ion channels, light sensors, enzymes, and so on.
Nevertheless, for CCRs that originate from the green alga ransport but was ransport . Moreove , a well-studied ACR from a cryptophyte alga Guillardia theta, using a recombinant expression system. Then, we characterized the anion transport function of GtACR1_full to identify its original function. We successfully constructed a recombinant expression system for GtACR1_full using the yeast Pichia pastoris. Using that expression system and the pH electrode method that we reported previously . However, GtACR1_full showed a significantly enhanced transport preference for NO3\u2212. This result indicated that the CPD has an inhibitory effect on the intensity of anion transport activity but contributes to the development of anion preference by some mechanism. To reveal the mechanism involved at the molecular level, we analyzed the photoreaction cycle, which is called the photocycle and is directly connected to the anion transport function. As a result, the preferential transport of NO3\u2212 by GtACR1_full was considered to result from the extended lifetime, the large accumulation of the photocycle intermediate involved in the gate-open state and the increase in specific efficiency for NO3\u2212 against Cl\u2212, which were provided by the CPD. Based on these results, we considered the biological role of GtACR1_full in native G.\u00a0theta in terms of the preferential transport of NO3\u2212. Furthermore, we hypothesized that the CPD contributes to the preferential transport of NO3\u2212 possibly through an interaction not only with NO3\u2212 but also with the rhodopsin domain. Indeed, it may be easier to conduct experiments if the CPD is deleted but this study has finally begun to show what could only be seen with full-length ACRs in which the CPD has not been deleted.What is the original function of full-length ACRs in nature? How is that achieved at the molecular level? What is the functional role of the CPD in ACRs? To answer those questions, in this study, we prepared full-length eviously , 12, we GtACR1 sequence and VChR2 , reported that in their long CPDs (450\u2013540 residues), there were three highly conserved regions, named con1, con2, and con3, respectively (GtACR1 and GtACR2 are far shorter (140\u2013150 residues) and share overall high sequential homology with each other and Asp/Glu (20.3% for GtACR1 and 20.7% for GtACR2) and therefore have positive charge at a neutral pH. In addition, Ser is the third most contained residue (11.2% for GtACR1 and 11.6% for GtACR2), which may relate to potential phosphorylation of the CPDs as described below.A previous study on CCRs derived from ectively . Howeverity 76%) . Therefoctively) . HoweverGtACR1_full monomers and dimers using AlphaFold2 (A) (GtACR1_\u0394CPD solved as dimers (CrChR2 (A and C. We then applied the amino acid sequence of GtACR1_full to prediction programs named IUPred (B). As a result, the CPD was predicted to be intrinsically disordered (the disorder probability was more than 0.5 (50%)).We constructed model structures of phaFold2 A 18) by byGtACR1s dimers , 16, 19 has multiple phosphorylation sites in its sequence both in the rhodopsin domain and in the CPD . Especially in the CPD (the region indicated by the red bar), more highly scored (more than 95%) candidates were obtained than in the rhodopsin domain (highlighted in orange). Previous study on CrChR1 identified 10 phosphorylation sites in its CPD using mass spectrometry (GtACR1 (C). 
They are roughly divided into 4 clusters and many of the candidates were predicted to exist in the regions that form secondary structures. Unfortunately, the commonality and difference in the phosphorylation between CrChR1 and GtACR1 are unclear at present. the CPD . Phospho program to identsequence C. Especitrometry . There w agar plates including 100 to 2000\u00a0\u03bcg antibiotics Zeocin. As a result, by comparing to the negative control without incorporation of the ACR gene, red-colored P.\u00a0pastoris cells were obtained, indicating that the functional expression of GtACR1_full had succeeded . However, as expected, the red color of cells expressing GtACR1_full was weaker than cells expressing GtACR1_\u0394CPD , indicating that the increased expression of GtACR1_\u0394CPD is due to deletion of the CPD.As described in the ucceeded A, top. HCR1_\u0394CPD A, middleB). As a result, GtACR1_full was detected as two bands near the 47.3\u00a0kDa and 114.0\u00a0kDa molecular markers compared to the negative control. Because the calculated molecular mass of GtACR1_full is 51.0\u00a0kDa, the smaller and the larger bands corresponded to the monomer (indicated by the black triangle) and the dimer (white triangle), respectively. On the other hand, in the case of GtACR1_\u0394CPD, which has a calculated molecular mass of 34.4\u00a0kDa, three bands were detected, which were assigned as the monomer (black triangle), the dimer (white triangle), and aggregates (asterisk). From the total band intensity, the relative protein expression levels (C) were estimated and the expression level of GtACR1_full was about 24% of GtACR1_\u0394CPD.To confirm the protein expression level and estimate that quantitatively, SDS-PAGE and Western blotting were performed B. As a rn levels C were esGtACR1-expressing P.\u00a0pastoris cells. Light activates GtACR1, which results in the influx of anions through the protein because the anion concentration was adjusted to be higher outside of the cells (300\u00a0mM) than inside the cells. This anion influx induces the penetration of H+ from outside to inside the cells to compensate for the transiently increased negative membrane potential. Therefore, the pH electrode method can indirectly detect the anion transport activity of ACRs. Here, we measured the transport activities of GtACR1_full for various anions, including F\u2212, Cl\u2212, Br\u2212, I\u2212, NO3\u2212, SO42\u2212, and aspartate (Asp-), using the pH electrode method. We also measured the transport activities of GtACR1_\u0394CPD of those anions for comparison.Previously, we measured anion transport activity using a pH electrode method . For thaA shows the time-dependent pH changes originating from the transport activities of GtACR1_full (black solid lines) and GtACR1_\u0394CPD (gray dotted lines) in the presence of various anions. The data for GtACR1_full were corrected by the protein expression level estimated from Western blotting (C). Figure\u00a04B summarizes the initial slope amplitudes calculated from the data shown in Figure\u00a04A. These results clearly show that the anion transport activities of GtACR1_full were smaller in general than those of GtACR1_\u0394CPD, except for SO42\u2212. The initial slope amplitudes of GtACR1_full for Cl\u2212, Br\u2212, and I\u2212 were decreased to nearly one-third compared to GtACR1_\u0394CPD. 
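The activity readout used in this section, the initial slope amplitude of the light-induced pH change, can be estimated, under simple assumptions, as the slope of a straight line fitted to the first seconds of the light-on trace. The sketch below is illustrative only: the synthetic trace, the 5 s window, and the function name are invented, and the excerpt does not state the authors' actual fitting window; the comment notes the ~24% relative expression level mentioned above.

```python
import numpy as np

def initial_slope(time_s, pH, window_s=5.0):
    """Initial slope of a light-induced pH trace (pH units per second),
    from a straight-line fit over the first seconds after light-on; a proxy
    for anion transport activity in the pH electrode method."""
    mask = time_s <= window_s
    slope, _intercept = np.polyfit(time_s[mask], pH[mask], 1)
    return slope

# Example with a synthetic, saturating acidification trace. For GtACR1_full,
# slopes would additionally be corrected by the ~24% relative expression
# level estimated from Western blotting, as described in the text.
t = np.linspace(0.0, 60.0, 601)
trace = -0.002 * t / (1.0 + 0.05 * t)
print(initial_slope(t, trace))
```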
However, the initial slope amplitude of GtACR1_full in the presence of NO3\u2212 was about two-thirds compared to GtACR1_\u0394CPD and therefore was more than about 2-times larger than that of GtACR1_full in the presence of Cl\u2212, Br\u2212, and I\u2212. A previous patch clamp analysis of GtACR1_\u0394CPD expressed in mammalian cells showed that its relative permeability for NO3\u2212 was higher than that for Cl\u2212, Br\u2212, and I\u2212 . Here, we set two goals, which were to reveal: (1) what the basic photochemical properties of GtACR1_full are compared with GtACR1_\u0394CPD and (2) why GtACR1_full preferentially transports NO3\u2212.To investigate the functional characteristics of \u2212, the spectra of GtACR1_full (black solid line) and GtACR1_\u0394CPD (gray dotted line) were identical in the visible region and thus exhibited the same maximum absorption wavelength (\u03bbmax) at 513\u00a0nm (A). On the other hand, in the UV region, the absorption of GtACR1_full was larger than GtACR1_\u0394CPD due to the additional CPD, which contains one Tyr and three Phe residues . As a result, anion-dependent visible spectral changes were hardly observed for either protein. Therefore, these results indicate that the initial state properties of GtACR1_full and GtACR1_\u0394CPD are identical, meaning that the CPD does not affect the initial state property.For the first goal, we measured UV-visible absorption spectra to invest 513\u00a0nm A. On theresidues and an ul anions B. As a rGtACR1_full and GtACR1_\u0394CPD in the presence of Cl\u2212 to represent the anions used in this study. A and B show the flash-induced difference absorption spectra of GtACR1_full and GtACR1_\u0394CPD, respectively. As previously reported, after excitation by laser flash, GtACR1_\u0394CPD showed absorption changes at 510\u00a0nm , 390\u00a0nm , and 590\u00a0nm (+), which are assigned as the initial state, the M-intermediate, and the K-intermediate, respectively , 390\u00a0nm (+), and 590\u00a0nm (+) were observed in the case of GtACR1_full (A), which indicates that GtACR1_full shares the same photo-intermediates with GtACR1_\u0394CPD during the photocycle. C and D represent the calculated absorption spectra of the initial state, P0, and four kinetic states, P1 \u2013 P4, in GtACR1_full and GtACR1_\u0394CPD, respectively. As a result of spectral separation . This is the same as for GtACR1_\u0394CPD (D).We then measured transient absorption changes using the flash photolysis method. That method can analyze the kinetic behavior of photo-intermediates during the photocycle, which is directly connected to the anion transport function. First, we compared the photocycles of ectively . These aCR1_full A, which ectively C. This iCR1_\u0394CPD D.Figure\u00a0GtACR1_full and GtACR1_\u0394CPD in the presence of Cl\u2212 (E), showing transient absorption changes at 510\u00a0nm , 390\u00a0nm (M-intermediate), and 590\u00a0nm (K-intermediate). The time constants analyzed by global fitting are summarized in 1 and \u03c42 were comparable between them. Especially when comparing the time constant \u03c44 for the last fourth transition, which is the process of recovery to the initial state and thus the rate-limiting step, the value for GtACR1_full (2560\u00a0ms) was about three times larger than that for GtACR1_\u0394CPD (878\u00a0ms). This result indicates that the photocycle of GtACR1_full is roughly three times slower than that of GtACR1_\u0394CPD. 
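The kinetic description above (an initial state P0 plus four sequential kinetic states P1 - P4 with time constants obtained by global fitting) is commonly extracted by fitting sums of exponentials with shared time constants across wavelengths. The following is a single-trace sketch of that idea, not the authors' analysis software; the synthetic amplitudes and time constants are invented (the slowest one loosely echoes the reported τ4).

```python
import numpy as np
from scipy.optimize import curve_fit

def multiexp(t, *p):
    """Sum of four exponentials plus an offset:
    p = (A1, tau1, A2, tau2, A3, tau3, A4, tau4, offset).
    The four time constants correspond to the sequential transitions
    P1 -> P2 -> P3 -> P4 -> P0; a true global fit shares them across
    traces recorded at several wavelengths."""
    y = np.full_like(t, p[-1])
    for A, tau in zip(p[0:-1:2], p[1:-1:2]):
        y = y + A * np.exp(-t / tau)
    return y

# Synthetic stand-in for one transient absorbance trace (e.g., 390 nm,
# the M-intermediate), spanning microseconds to tens of seconds.
t = np.logspace(-5, 1.5, 300)
dA = multiexp(t, 0.010, 2.0e-4, 0.030, 3.0e-2, -0.020, 0.90, -0.020, 2.56, 0.0)
guess = [0.01, 1e-4, 0.01, 1e-2, -0.01, 0.5, -0.01, 2.0, 0.0]
fit, _cov = curve_fit(multiexp, t, dA, p0=guess, maxfev=50000)
print(sorted(fit[1:-1:2]))   # recovered tau1..tau4 in seconds
```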
However, a difference in the photocycle kinetics was found between GtACR1_full and GtACR1_ΔCPD in the presence of Cl−; accordingly, the absorption changes of GtACR1_full (black lines) and GtACR1_ΔCPD (gray lines) did not overlap (Fig. 6E). Figure 6F summarizes the photocycle models of GtACR1_full and GtACR1_ΔCPD in the presence of Cl−, in which the difference in their photocycle kinetics is highlighted by a bold black arrow.

As shown in Figure 4, a preferential transport of NO3− was revealed for GtACR1_full. For the second goal, we therefore investigated the NO3− transport mechanism of GtACR1_full. Figure 7, A and B show the flash-induced difference absorption spectra of GtACR1_full and GtACR1_ΔCPD, respectively, in the presence of NO3−. A comparison with the difference absorption spectra in the presence of Cl− (Fig. 6, A and B) revealed a smaller accumulation of the M- (390 nm) and K- (590 nm) intermediates, which was also supported by the calculated absorption spectra of P0 to P4 (Fig. 7, C and D, and Fig. S7).

Figure 7E shows the transient absorption changes in the presence of NO3− at 510 nm (initial state), 390 nm (M-intermediate), and 590 nm (K-intermediate). We noticed that the photocycle duration in the presence of NO3− was extended in both GtACR1_full and GtACR1_ΔCPD compared with that in the presence of Cl− (Fig. 6E). Such an anion-dependent delay of the photocycle has been commonly observed in the light-driven anion pump halorhodopsins (HRs) [35, 36]. Notably, the extension for GtACR1_full in the presence of NO3− was larger than that for GtACR1_ΔCPD: judging from the time constant τ4, the photocycle of GtACR1_full (9750 ms) was about twice as long as that of GtACR1_ΔCPD (4850 ms). The photocycle models in the presence of NO3− are summarized in Figure 7F.

Comparison of the photocycle kinetics in the presence of Cl− and NO3− reveals interesting differences. Here we focused on the time constant of the second transition, τ2, which corresponds to the generation of the M-intermediate and temporally correlates with the gate-closing process [37]. In the presence of Cl−, the τ2 values for GtACR1_full and GtACR1_ΔCPD (30.4 ms) were similar, and thus the rises of the M-intermediate at 390 nm almost overlapped (Fig. 6E). On the other hand, in the presence of NO3−, the τ2 for GtACR1_full (130 ms) was more than two times larger than that for GtACR1_ΔCPD (54.7 ms), and we accordingly observed a delayed generation of the M-intermediate for GtACR1_full (Fig. 7E). This result indicates that the lifetime of the gate-open state, that is, the L-intermediate state, becomes longer for GtACR1_full when transporting NO3− (discussed below).
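If the L-to-M transition is treated as a single-exponential step, the measured τ2 values translate directly into gate-open survival curves, as the short sketch below illustrates. The value assumed for GtACR1_full with Cl− is not reported explicitly and is taken as roughly equal to that of GtACR1_ΔCPD, based on the statement that their 390-nm rises overlap.

```python
import numpy as np

# tau2 (ms) from the global fits; with a single-exponential L -> M step, the
# gate-open (L) survival is exp(-t / tau2) and the mean open time equals tau2.
tau2_ms = {
    ("full", "Cl-"): 30.0,   # assumption: ~equal to dCPD (the 390-nm rises overlap)
    ("dCPD", "Cl-"): 30.4,
    ("full", "NO3-"): 130.0,
    ("dCPD", "NO3-"): 54.7,
}

for (construct, anion), tau in tau2_ms.items():
    survival_100ms = np.exp(-100.0 / tau)   # fraction still gate-open at 100 ms
    print(f"{construct:4s} {anion:4s}: mean open time {tau:6.1f} ms, "
          f"open fraction at 100 ms {survival_100ms:.2f}")
```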
All previous research on ACRs has been conducted using CPD-deleted constructs, which still possess anion transport activities upon light illumination. However, the native functions of full-length ACRs and the role(s) of the CPD remained unknown. To resolve those issues, we used a recombinant expression system to express and purify a full-length ACR, GtACR1_full. We found that the CPD has an inhibitory effect on the intensity of anion transport activity, whereas it enhances the transport preference for NO3− by increasing the specific efficiency for NO3− against Cl− (discussed below). To the best of our knowledge, this is the first report characterizing a full-length ACR expressed in a recombinant system.

As shown in Figure 4B, the NO3− transport activity of GtACR1_full was about 2- to 4-times larger than that for Cl−, Br−, I−, and SO42− when measured using the pH electrode method. This preference is notable given the scarcity of nitrate in the ocean: the abundance of NO3− relative to Cl− (500 to 600 mM) in seawater is calculated to be approximately 0.001 to 0.1 (www.resourcewatch.org).

As described above, G. theta controls the expression level of GtACR1 in response to the concentration of the extracellular nitrogen source and utilizes up to approximately 4 mM NO3− as a nitrogen source. In addition, it is known that Proteomonas sulcata, which is also a marine cryptophyte alga and possesses ACRs, senses extracellular NO3− (less than 1 mM), even in the presence of about half the Cl− concentration of seawater, and accumulates nitrogen in the form of the protein-pigment complex phycoerythrin, which contributes to the light-harvesting function for photosynthesis [43].

If that is the case, then CCRs and ACRs have distinctly different physiological roles. CCRs play a role as phototaxis sensors triggered by the light-dependent photoreceptor current; that is, they transport cations, such as H+, Na+, and Ca2+, to induce membrane depolarization in the algal eyespot. On the other hand, ACRs could be responsible for transporting NO3− for use as a nutrient source. However, this hypothesis needs to be tested by in vivo studies.

Comparative analysis of the photocycle kinetics suggested that, in the case of GtACR1_full, the formation of the M-intermediate, which temporally correlates with the gate-closing process [37], was delayed in the presence of NO3−; hence, the duration of the gate-open state per photocycle is likely to become longer when transporting NO3−.

The change in photocycle kinetics must change the accumulation of the photo-intermediates. Therefore, we examined the accumulation of the gate-open state to elucidate the relationship between the photocycle kinetics and the anion transport activities of GtACR1_full and GtACR1_ΔCPD for Cl− and NO3−. For this purpose, we focused on the accumulation of the L-intermediate because, in the case of GtACR1, it is the only photo-intermediate involved in the gate-open state for transporting anions [37]. Using the kinetic parameters obtained from the global fits, the accumulation of the L-intermediate was estimated for each condition (Fig. 8A), although the accumulation in the presence of NO3− should also be determined experimentally in a future study. A comparison for each anion shows that, in the presence of NO3−, the accumulation was increased for GtACR1_full compared with GtACR1_ΔCPD, whereas in the presence of Cl− the accumulations of the two proteins were almost the same. These results reflect that the lifetime of the gate-open state becomes longer when transporting NO3−, as shown in Figure 7E. In addition, these results indicate that the CPD contributes to increasing the accumulation of the gate-open state, especially in the presence of NO3−. We speculate that this results in the preferential transport of NO3− by GtACR1_full (Fig. 8B).

What would happen with the increased accumulation of the gate-open state? From the data shown in Figure 8, A and B, we calculated the anion transport activity per accumulation of the L-intermediate, which can be regarded as the transport efficiency (Fig. 8C). The transport efficiency was decreased in the case of GtACR1_full and in the presence of NO3−. However, the specific efficiency for NO3− against Cl− of GtACR1_full was about 1.4-times larger than that of GtACR1_ΔCPD (Fig. 8D). These results indicate that although the CPD has an inhibitory effect on the anion transport activity (Fig. 8B), it enhances the NO3− transport. In conclusion, the preferential transport of NO3− by GtACR1_full is considered to result from the extended lifetime and the large accumulation of the gate-open state and from the increase in the specific efficiency for NO3− against Cl−.
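To make the accumulation argument concrete, the following sketch integrates the occupancy of the L state in a simple irreversible four-step photocycle after a single flash. Only τ2 and τ4 come from the fits reported above; τ1 and τ3, the strictly linear chain, and the single-flash condition are simplifying assumptions (for such a chain the integral reduces analytically to τ2, but the numerical route generalizes to the branched or reversible schemes of Figures 6F and 7F).

```python
import numpy as np
from scipy.integrate import solve_ivp

def l_accumulation(taus, t_end=60.0):
    """Time-integrated occupancy of the L state (2nd state) in an
    irreversible chain K -> L -> M -> P4 -> recovered, with dwell times
    `taus` (s), after a single flash that puts the population into K."""
    k = 1.0 / np.asarray(taus)

    def rhs(t, y):
        flow = k * y[:-1]                    # out-flux of each transient state
        dy = np.empty_like(y)
        dy[0] = -flow[0]
        dy[1:-1] = flow[:-1] - flow[1:]
        dy[-1] = flow[-1]
        return dy

    y0 = np.zeros(len(taus) + 1)
    y0[0] = 1.0
    t = np.linspace(0.0, t_end, 5000)
    sol = solve_ivp(rhs, (0.0, t_end), y0, t_eval=t, method="LSODA",
                    rtol=1e-8, atol=1e-10)
    return np.trapz(sol.y[1], t)             # seconds of L occupancy per flash

# tau1..tau4 (s): tau2 and tau4 from the reported fits; tau1 and tau3 are
# placeholders, and tau2 for full/Cl- is assumed equal to dCPD (rises overlap).
conditions = {
    ("full", "Cl-"):  [1e-4, 0.030, 0.3, 2.56],
    ("dCPD", "Cl-"):  [1e-4, 0.0304, 0.3, 0.878],
    ("full", "NO3-"): [1e-4, 0.130, 0.3, 9.75],
    ("dCPD", "NO3-"): [1e-4, 0.0547, 0.3, 4.85],
}
for cond, taus in conditions.items():
    print(cond, f"integrated L occupancy = {l_accumulation(taus):.3f} s")
# Dividing measured, expression-corrected activities (Fig. 4B) by such
# accumulations gives transport efficiencies analogous to those in Fig. 8C.
```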
To quantify more accurately the accumulation of the gate-open state, the anion transport activity, and the efficiency, electrophysiological measurements should be conducted in the future.

What would be a possible molecular mechanism for the preferential NO3− transport? How different are the transport mechanisms for NO3− and Cl−? We discuss these questions in the following sections.

Although analysis of the amino acid sequence did not reveal any known domains or conserved residues in the CPD, we showed that the anion transport activity of GtACR1 is modulated by the presence of the CPD. How does the CPD modulate the function? We hypothesize a three-step mechanism: in step (1), the CPD captures NO3−; in step (2), the NO3−-bound CPD interacts with the rhodopsin domain; and in step (3), as a result, the photocycle is modulated and the CPD thereby further facilitates the influx of NO3−.

For step (1), the CPD of GtACR1 consists of 143 amino acids containing 29 acidic residues (Asp and Glu) and 39 basic residues (Fig. 2C). We hypothesize that the CPD captures NO3− inside the algal cells after the initial uptake of NO3− and that the captured NO3− then induces and stabilizes a folded structure of the otherwise intrinsically disordered CPD. We are now conducting structural studies on the CPD to test this hypothesis. Consistent with an effect of NO3− exerted through the CPD, in GtACR1_full the formation of the M-intermediate at 390 nm was clearly delayed in the presence of NO3− (Fig. 7E), meaning that the lifetime of the gate-open (NO3−-conducting) state was extended.

For step (2), we hypothesize that after the CPD captures NO3−, the NO3−-bound CPD interacts with the cytoplasmic part of the rhodopsin domain, possibly through electrostatic interactions. In general, the cytoplasmic regions of transmembrane α-helical proteins are positively charged, following the positive-inside rule, and indeed several positively charged residues are located on the cytoplasmic part of the GtACR1 rhodopsin domain. Capture of NO3− (step (1)) may cause the negatively charged amino acids of the CPD to cluster on the protein surface, which would favor such an interaction. Alternatively, it is possible that the NO3−-bound CPD interacts with the rhodopsin domain via electrostatic interactions between Arg residues on the cytoplasmic surface of the rhodopsin domain and phosphorylated Ser or Thr residues in the CPD; such Arg-phosphate pairs are generally known as interactions having a covalent-like stability and contributing to protein–protein interactions [44]. A precedent for an interaction between a microbial rhodopsin and a soluble protein exists: Anabaena sensory rhodopsin interacts with its transducer protein, a soluble protein expressed inside Anabaena cells.
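From the residue counts quoted above, the composition metrics commonly used to characterize intrinsically disordered regions follow directly. The sketch below is generic (His is not counted as basic, and the exact residue classes used by the authors are not specified); the printed values derive from the reported counts alone.

```python
def charge_stats(n_total, n_acidic, n_basic):
    """Fraction of charged residues (FCR) and net charge per residue (NCPR),
    two standard composition metrics for intrinsically disordered regions."""
    fcr = (n_acidic + n_basic) / n_total
    ncpr = (n_basic - n_acidic) / n_total
    return fcr, ncpr

# Reported composition of the GtACR1 CPD: 143 residues, 29 acidic (Asp/Glu),
# 39 basic.
fcr, ncpr = charge_stats(143, 29, 39)
print(f"FCR  = {fcr:.2f}")    # ~0.48: highly charged
print(f"NCPR = {ncpr:+.2f}")  # ~+0.07: slight net positive charge
```

A high fraction of charged residues combined with a small net charge is the composition of a polyampholyte, which would be consistent with the anion-assisted folding and electrostatic binding hypothesized in steps (1) and (2).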
Finally, for step (3), the interaction of the NO3−-bound CPD with the GtACR1 rhodopsin domain could further facilitate the NO3− influx. As shown in Figures 6 and 7, the photocycle of GtACR1_full in the presence of NO3− is different from that in the presence of Cl− and thus appears to be modulated to achieve the preferential NO3− transport activity. If this could be experimentally proven in vivo, the preferential transport of NO3− by GtACR1_full would be under positive feedback control. In biological systems, such control is effective when the supply of a depleted biological material must be increased promptly. Therefore, we speculate that G. theta avoids the depletion of nitrogen sources by increasing not only the expression of GtACR1_full but also the influx of NO3− through the protein.

In the presence of Cl−, the photocycle kinetics of GtACR1_full and GtACR1_ΔCPD also differed, but in this case it was the decay of the M-intermediate that was delayed (Fig. 6E). This kinetic behavior is clearly different from that in the presence of NO3−, in which the formation of the M-intermediate was delayed (Fig. 7E). The former delay leads to an elongation of the photocycle duration. In addition, the smaller accumulation of the gate-open state during Cl− transport (Fig. 8A) resulted in a weaker Cl− transport activity (Fig. 8B) compared with the case of NO3−. If the disordered CPD also interacts with Cl−, the resulting folded structure is speculated to be different from that induced in the presence of NO3−, leading to the delayed decay, rather than the delayed formation, of the M-intermediate. This also needs to be proven in future research. If it should prove true, the CPD would provide a precise mechanism that controls anion transport according to physiological needs.

In fact, we tried to characterize the interaction between the CPD and the GtACR1 rhodopsin domain by monitoring the photocycle kinetics of GtACR1_ΔCPD before and after mixing with the isolated CPD in the presence of Cl−. We prepared the CPD in an Escherichia coli expression system and GtACR1_ΔCPD, carrying a His-tag at the N terminus, in the P. pastoris expression system. However, even after adding a large excess of the CPD (a 10-fold molar ratio) to GtACR1_ΔCPD, we could not observe a delayed photocycle as seen for GtACR1_full. This result indicates that GtACR1_ΔCPD and the added CPD do not interact with each other. The cause might be the lack of phosphorylation of the CPD, because the CPD prepared in the E. coli system is not post-translationally modified. The phosphorylation of the CPD might therefore be important for its interaction with the rhodopsin domain.

As described above, the photocycle kinetics of GtACR1_full and GtACR1_ΔCPD are altered in the presence of Cl− and NO3−, respectively. These results prompted us to reconsider the anion-binding ability of the rhodopsin domain of GtACR1, that is, of GtACR1_ΔCPD, in the initial state. A previous spectroscopic study concluded that GtACR1_ΔCPD does not bind anions in the initial state because no visible spectral change, that is, no color change, was observed when exchanging anions (Fig. 5B). This characteristic differs from that of the anion pump HRs, in which spectral (color) changes occur when anions bind in the vicinity of the protonated retinal Schiff base [12]. By contrast, P. sulcata ACR1 (PsuACR1), which is closely related to GtACR1, was capable of binding Cl− in the initial state, as judged from visible spectral (color) changes similar to those of HRs. Moreover, a Cl−-bound structure of GtACR1 has been reported. Unfortunately, the origin of these differences between the ACR species is unclear at present; nevertheless, the anion-dependent photocycle kinetics shown in Figures 6 and 7 suggest that the transported anions modulate the photochemistry of the rhodopsin domain.
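The spectral criterion invoked above for initial-state anion binding, namely a color change upon anion exchange, can be checked numerically from two static spectra. The Gaussian spectra and the parabolic peak-refinement helper below are illustrative stand-ins, not the authors' procedure; real inputs would be the UV-visible spectra recorded in Cl−- and NO3−-containing buffers.

```python
import numpy as np

def lambda_max(wl, absorbance):
    """Wavelength of maximal absorbance, refined with a three-point parabola
    (assumes a uniformly spaced wavelength grid)."""
    i = int(np.argmax(absorbance))
    if 0 < i < len(wl) - 1:
        y0, y1, y2 = absorbance[i - 1 : i + 2]
        denom = y0 - 2 * y1 + y2
        shift = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        return wl[i] + shift * (wl[1] - wl[0])
    return wl[i]

# Hypothetical spectra on a common wavelength axis.
wl = np.arange(350.0, 651.0, 1.0)
spec_cl = np.exp(-((wl - 513.0) / 40.0) ** 2)    # Gaussian stand-ins
spec_no3 = np.exp(-((wl - 513.0) / 40.0) ** 2)   # identical -> no color change

diff = spec_no3 - spec_cl
print("lambda_max (Cl-):", round(lambda_max(wl, spec_cl), 1), "nm")
print("max |difference absorbance|:", float(np.abs(diff).max()))
```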
We suggest that the original function of full-length GtACR1, which has been overlooked in previous studies, is to preferentially transport NO3− in nature. The preferential NO3− transport of GtACR1_full resulted from the extended lifetime and the large accumulation of the gate-open (NO3−-conducting) state. These results also revealed that the CPD has an inhibitory effect on the intensity of anion transport activity, whereas it contributes to the development of the transport preference for NO3− by increasing the specific efficiency for NO3− against Cl−. Although some hypothetical mechanisms need to be elucidated in the future, such as the mechanism of anion selection by the intrinsically disordered CPD, the role of phosphorylation of the protein, and the positive feedback control of NO3− transport in vivo, we have certainly learned some new facts thanks to the successful preparation of full-length GtACR1 as a recombinant protein. We believe that our study provides important new experimental data and insights into life activities from a molecular perspective.

The DNA and amino acid sequences of GtACR1 were taken from the JGI PhycoCosm genomic database (Protein ID: 111593). The procedures for constructing the pPICZ B vectors (Thermo Fisher Scientific) encoding GtACR1_full and GtACR1_ΔCPD for P. pastoris were the same as in our previous reports. The P. pastoris SMD1168H strain (Thermo Fisher Scientific) was used as the protein expression host. The procedures for transformation of the yeast, protein expression, and protein purification were also the same as in our previous reports.

For the transport measurements, green LED light (peak wavelength, 530 nm) was illuminated for 2 min to activate the protein. To reduce the large artifacts caused on the pH electrode by such strong light, the internal KCl solution of the electrode was replaced with 3.3 M KCl dissolved in India ink.

Static UV-visible absorption spectra were recorded at room temperature using a UV-1800 spectrophotometer (Shimadzu Corp), with the samples suspended in buffer containing 0.05% DDM (Dojindo) and the respective salts. Flash photolysis measurements of the time-dependent absorption changes were performed using a homemade, computer-controlled apparatus as described previously.

For the pH titration experiments, GtACR1_ΔCPD was suspended in a solution containing 0.05% DDM and salts. The initial pH was around 5, and the ionic strength was kept at 1 M. Small amounts of 0.1 M NaOH solution were added stepwise to the sample solution. Difference UV-visible absorption spectra were calculated by subtracting the spectrum at the initial pH from the spectra at the other pH values. The difference absorbance at 370 nm, ΔAbs370, was plotted against the measured pH. The difference absorbance is presented as a relative value, calculated by taking into account the percentage of GtACR1_ΔCPD deprotonated at alkaline pH. The data were fitted with a Henderson–Hasselbalch equation with two pKa values, of the form ΔAbs370(pH) = A1/(1 + 10^(pKa,1 − pH)) + A2/(1 + 10^(pKa,2 − pH)), where A1 and A2 are the amplitudes of the two transitions. pKa,2 corresponds to the pKa of the retinal Schiff base. Unfortunately, the origin of pKa,1, at around 8, is currently unknown.

All the data supporting the findings of this research are available within the article and its supporting information. This article contains supporting information. The authors declare that they have no conflicts of interest with the contents of this article.