diff --git "a/deduped/dedup_0881.jsonl" "b/deduped/dedup_0881.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0881.jsonl"
@@ -0,0 +1,50 @@
+{"text": "Background to the debate: Umbilical cord blood\u2014the blood that remains in the placenta after birth\u2014can be collected and stored frozen for years. A well-accepted use of cord blood is as an alternative to bone marrow as a source of hematopoietic stem cells for allogeneic transplantation to siblings or to unrelated recipients; women can donate cord blood for unrelated recipients to public banks. However, private banks are now open that offer expectant parents the option to pay a fee for the chance to store cord blood for possible future use by that same child (autologous transplantation). The practice is controversial, for scientific and ethical reasons. No one disputes the merit of public cord blood banking, in which women altruistically donate umbilical cord blood (UCB) for haemopoietic stem cell (HSC) transplantation, in a way similar to bone marrow donation. Unrelated UCB transplants have good outcomes in children and are associated with less graft-versus-host disease than adult marrow or peripheral blood stem cells. The validity of directed UCB storage in \u201clow risk\u201d families, however, has been widely challenged, following early concerns from the American Academy of Pediatrics and other professional bodies. First, UCB is very unlikely ever to be used. The probability of needing an autologous transplant is less than one in 20,000. Second, there are important moral issues. The persuasive promotional materials of commercial UCB banks target parents at a vulnerable time, urging them to take this \u201conce in a lifetime opportunity\u201d to \u201csave the key components to future medical treatment\u201d and freeze \u201ca spare immune system\u201d. Third, collection imposes a considerable logistic burden on the obstetrician or midwife. 
In addition to consent, parental blood collection, and the associated packaging and paperwork, a large volume of blood has to be collected from the umbilical vessels in utero, requiring multiple syringes under aseptic technique. This may distract professionals from their primary task of caring for the mother and baby at this risky time or, more generally, divert delivery room staff from attending others. Finally, individual UCB banks need to remain in business long term if cryopreserved stem cells are to be retrieved. The commercial attractiveness of a service paid years in advance is attested to by the burgeoning number of private providers, yet it seems unlikely that all will survive. Indeed, some US providers are already in trouble for infringing collection patents. There remain reservations about whether laboratories will meet national standards and be accredited. There is a further danger that misplaced enthusiasm for commercial auto-collection will undermine the proven utility of altruistic public cord blood banks. Notwithstanding the above, we accept that the utility of UCB storage in low-risk families is very different from the entirely speculative post-mortem cryonics industry. We acknowledge the possibility that autologously stored UCB stem cells may eventually be used, given recent research documenting the multi-potentiality of UCB mesenchymal lineages. Stem cells may potentially be used in life-saving therapies for degenerative diseases or injuries. Stem cells self-replicate and are multi-potential\u2014they can differentiate into diverse cell types. To fully realize this potential will require collection and banking of UCB cells, which are harvested without pain or trauma from placental structures that are normally discarded after birth. We realize that UCB banking (public and private) has sparked controversy. 
Critics of routine banking question its cost-to-benefit ratio, citing doubts about the clinical relevance of cord stem cells or the likelihood that they will ever be used. The \u201cstemness\u201d of UCB cells, however, is not merely theoretical. ESCs also represent an allogeneic source of cells\u2014they are derived from another individual whose tissue type does not match that of the recipient, resulting in immune rejection when transplanted. With millions of healthy babies born each year, there is potentially a large UCB supply that can be stored, tissue-typed, and made available at short notice. If saved for potential use by the donor, UCB cells become a source of perfectly matched, autologous stem cells (plus there is a 25% probability of being an exact match for a sibling). Yet the American Academy of Pediatrics came out against UCB banking, saying that the odds (with some exceptions) of a donor ever using a UCB sample were low, between 1/1,000 and 1/200,000. While the chance of a donor benefiting may presently be low, this does not automatically mean that another member of society could not benefit. For people with genetic diseases or cancers, the chances of finding an immune-tolerant donor match would obviously be increased by the expansion of cord blood sampling. Also, at the pace that stem cell research is moving, perhaps there will be new uses for UCB cells in the next decade, especially in the field of tissue engineering. Who should operate cord blood banks\u2014the private or the public sector? There are around 20 private UCB banks in the United States. They charge a collection fee, typically $1,000\u2013$1,500, which includes testing for pathogens and genotyping. Samples are maintained in a frozen state for around $100 a year. An additional $15,000\u2013$25,000 is charged if a sample is used for transplantation. 
The cost of UCB cell transplantation is significantly less than bone marrow transplantation, and the risk of graft-versus-host disease is lower. Any exploitation by companies of the vulnerabilities of expectant parents for financial gain is clearly unacceptable. Federal legislation to establish a national cord blood stem cell bank network\u2014free to all donors\u2014has been introduced in the US Senate and House of Representatives that, if approved, should diminish the risk of exploitation. But unless the network is well-designed from a sociological viewpoint, it could generate a situation where not all cultural and ethnic groups are represented or where benefits are accessible only to families with health insurance or sufficient income to afford transplants. It still remains difficult to find full matches for African, Asian, and Native Americans\u2014mostly because of an insufficient number of UCB donors and the diversity of HLA types in different ethnic groups. The real question is who should pay for UCB collection and storage\u2014the individual donor, who currently has only a small prospect of using their cord blood, or society as a whole? We believe that it is the job of government to ensure that people of all ethnic groups are informed and educated about donating UCB. Then, to facilitate UCB banking and the development of technological innovations for its storage and clinical utility, we recommend a national network that is a mixture of for-profit, non-profit, and governmental organizations. Markwald and Mironov argue that commercial UCB banking is ethically justified on the grounds that UCB transplantation is effective treatment for many haematological disorders, that autologous UCB is a useful future source of stem cells for the donor, and that there is no second chance to collect these cells. We did not dispute (indeed we acknowledged) the value of UCB HSCs for the treatment of many malignant and non-malignant haematological disorders. 
However, evidence of their value derives from allogeneic HSCs from public UCB banks. Markwald and Mironov state that the real question is who should pay for routine UCB collection and storage. However, they take no account of the considerable logistic burden this imposes, the extremely low chance that autologous cells will ever be used, and the costs of routine UCB collection. Thus the real questions are, first, why should society in general, or the government as a representative of at least a substantial proportion of society, pay for a service not shown to be of any real use? And second, why should commercial banks be allowed to continue to target vulnerable parents anxious to do the best for their children while making no mention of the low chance of use, of alternative sources of available stem cells, or of the risks of reducing stocks of allogeneic HSC in public UCB banks? We agree with Fisk and Roberts that exploiting the emotional vulnerabilities of expectant parents is unjustifiable\u2014thus we support regulation of UCB banking, monitoring, certification, and informed consent. But we disagree that there is a lack of solid scientific evidence for UCB collection and that \u201cfuture therapeutic possibilities are very hypothetical.\u201d Research on stem cells is advancing rapidly, and stem cells derived from UCB are emerging as a reasonable first choice for the field of regenerative medicine. Fisk and Roberts are inconsistent in their views. 
They claim that stem cells collected in UCB units often are not \u201cin sufficient numbers for use against degenerative conditions\u201d in adult life but then acknowledge that \u201cthe in vitro expandability of cord HSC numbers is sufficient for transplantation into an adult.\u201d They argue that private and public UCB collections create dramatically different \u201clogistic burdens,\u201d but in our experience, the syringes, paperwork, and level of personal distraction are generally the same for public or private banking. We strongly disagree that private UCB banking has no future. While we anticipate a consolidation phase for this industry, surviving companies should be eager to acquire UCB units collected from competitors. If all stem cell sources were under a state monopoly\u2014without private sector contribution\u2014there would be less incentive or opportunity for fostering innovation in long-term storage, expansion, or phenotype characterization of UCB stem cells. The growth of new biotech companies focused on regenerative medicine would be discouraged, compromised, or undermined by the absence of competition, inadequate access to venture capital, and the typical resistance of state health-care systems and their affiliated medical professionals to innovation. Fisk and Roberts create confusion by conflating \u201cspeculative cryobionic companies\u201d that promise \u201cimmortality and eternity\u201d with serious biotech companies and private UCB banks that focus on a realistic commercialisation of UCB stem cells as a platform for promoting new biotech initiatives. The collection and storage of UCB stem cells is an opportunity for society to build a representative collection of UCB units that can improve the chances of identifying suitably matched donors for transplantation. Human ESCs are mired in ethical concerns and in concerns about immunological intolerance. 
Autologous cells from the bone marrow or elsewhere lose their attractiveness if there is a genetic mutation or a progressive loss of \u201cstemness\u201d due to normal aging. UCB cells, collected at birth, are free of these age-related drawbacks."}
+{"text": "Two dimers are further linked to each other through two inter\u00admolecular C\u2014H\u22efO hydrogen bonds, forming an R 3 3(7) ring motif. The nitro groups form an intra\u00admolecular C\u2014H\u22efO hydrogen bond mimicking a five-membered ring. As a result of these hydrogen bonds, polymeric sheets are formed. The aromatic ring makes a dihedral angle of 42.84\u2005(8)\u00b0 with the carboxyl\u00adate group and an angle of 8.01\u2005(14)\u00b0 with the nitro group. There is a \u03c0-inter\u00adaction (N\u2014O\u22ef\u03c0) between the nitro group and the aromatic ring, with a distance of 3.7572\u2005(14)\u2005\u00c5 between the N atom and the centroid of the aromatic ring. Crystal data for the title compound: b = 8.1050 (5) \u00c5, c = 8.3402 (4) \u00c5, \u03b1 = 75.793 (2)\u00b0, \u03b2 = 81.835 (3)\u00b0, \u03b3 = 87.686 (2)\u00b0, V = 479.21 (4) \u00c53, Z = 2, Mo K\u03b1 radiation, \u03bc = 0.11 mm\u22121, T = 296 (2) K, crystal size 0.25 \u00d7 0.20 \u00d7 0.18 mm. Data collection: Bruker Kappa APEXII CCD diffractometer; absorption correction: multi-scan (SADABS; Bruker, 2005), T min = 0.970, T max = 0.981; 9039 measured reflections, 2518 independent reflections, 1926 reflections with I > 2\u03c3(I), R int = 0.023. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.042, wR(F 2) = 0.134, S = 1.02, 2518 reflections, 140 parameters, H atoms treated by a mixture of independent and constrained refinement, \u0394\u03c1max = 0.24 e \u00c5\u22123, \u0394\u03c1min = \u22120.21 e \u00c5\u22123. Programs used: APEX2 (Bruker, 2007) for data collection and cell refinement; SAINT (Bruker, 2007) for data reduction; SHELXS97 (Sheldrick, 2008) for structure solution; SHELXL97 (Sheldrick, 2008) for structure refinement; ORTEP-3 for Windows (Farrugia, 1997) and PLATON (Spek, 2003) for molecular graphics; WinGX (Farrugia, 1999) and PLATON for preparing material for publication. Supplementary materials: crystal structure, contains datablocks global, I, DOI: 10.1107/S1600536808024999/cs2087sup1.cif; structure factors, contains datablocks I, DOI: 10.1107/S1600536808024999/cs2087Isup2.hkl; also crystallographic information, 3D view and checkCIF report."}
+{"text": "An intra\u00admolecular O\u2014H\u22efN hydrogen bond results in the formation of a planar six-membered ring, which is oriented with respect to the adjacent aromatic ring at a dihedral angle of 3.38\u2005(11)\u00b0. Thus, the two rings are nearly coplanar. In the crystal structure, inter\u00admolecular N\u2014H\u22efO hydrogen bonds link the mol\u00adecules into centrosymmetric dimers. Crystal data for the title compound: b = 4.8125 (6) \u00c5, c = 30.838 (3) \u00c5, \u03b2 = 99.942 (9)\u00b0, V = 3079.9 (6) \u00c53, Z = 8, Mo K\u03b1 radiation, \u03bc = 0.40 mm\u22121, T = 296 (2) K, crystal size 0.22 \u00d7 0.18 \u00d7 0.14 mm. Data collection: Bruker Kappa APEXII CCD diffractometer; absorption correction: multi-scan (SADABS; Bruker, 2005), T min = 0.920, T max = 0.940; 16002 measured reflections, 4067 independent reflections, 1851 reflections with I > 3\u03c3(I), R int = 0.060. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.046, wR(F 2) = 0.185, S = 1.02, 4067 reflections, 206 parameters, H atoms treated by a mixture of independent and constrained refinement, \u0394\u03c1max = 0.36 e \u00c5\u22123, \u0394\u03c1min = \u22120.28 e \u00c5\u22123. Programs used: APEX2 (Bruker, 2007) for data collection and cell refinement; SAINT (Bruker, 2007) for data reduction; SHELXS97 (Sheldrick, 2008) for structure solution; SHELXL97 (Sheldrick, 2008) for structure refinement; ORTEP-3 for Windows (Farrugia, 1997) and PLATON (Spek, 2003) for molecular graphics; WinGX (Farrugia, 1999) and PLATON for preparing material for publication. Supplementary materials: crystal structure, contains datablocks global, I, DOI: 10.1107/S1600536808005084/hk2428sup1.cif; structure factors, contains datablocks I, DOI: 10.1107/S1600536808005084/hk2428Isup2.hkl; also crystallographic information, 3D view and checkCIF report."}
+{"text": "In the crystal structure, inter\u00admolecular O\u2014H\u22efO and N\u2014H\u22efCl hydrogen bonds link the mol\u00adecules. There is a C=O\u22ef\u03c0 contact between the carbonyl unit and the centroid of the benzene ring. Crystal data for the title compound: b = 4.7833 (8) \u00c5, c = 16.381 (3) \u00c5, \u03b2 = 91.605 (9)\u00b0, V = 1808.8 (5) \u00c53, Z = 8, Mo K\u03b1 radiation, \u03bc = 0.39 mm\u22121, T = 296 (2) K, crystal size 0.28 \u00d7 0.10 \u00d7 0.06 mm. Data collection: Bruker Kappa APEXII CCD diffractometer; absorption correction: multi-scan (SADABS; Bruker, 2005), T min = 0.975, T max = 0.985; 9747 measured reflections, 2238 independent reflections, 1749 reflections with I > 2\u03c3(I), R int = 0.025. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.036, wR(F 2) = 0.101, S = 1.05, 2238 reflections, 139 parameters, H atoms treated by a mixture of independent and constrained refinement, \u0394\u03c1max = 0.30 e \u00c5\u22123, \u0394\u03c1min = \u22120.22 e \u00c5\u22123. Programs used: APEX2 (Bruker, 2007) for data collection and cell refinement; SAINT (Bruker, 2007) for data reduction; SHELXS97 (Sheldrick, 2008) for structure solution; SHELXL97 (Sheldrick, 2008) for structure refinement; ORTEP-3 for Windows (Farrugia, 1997) and PLATON (Spek, 2003) for molecular graphics; WinGX publication routines (Farrugia, 1999) and PLATON for preparing material for publication. Supplementary materials: crystal structure, contains datablocks text, I, DOI: 10.1107/S1600536808029267/hk2527sup1.cif; structure factors, contains datablocks I, DOI: 10.1107/S1600536808029267/hk2527Isup2.hkl; also crystallographic information, 3D view and checkCIF report."}
+{"text": "The heterocyclic thia\u00adzine ring adopts a twist conformation. Adjacent mol\u00adecules are attached to each other through inter\u00admolecular C\u2014H\u22efO hydrogen bonds, forming R 2 2(8) and R 2 2(14) ring motifs. The mol\u00adecules are stabilized by intra- and inter\u00admolecular hydrogen bonds, forming a three-dimensional polymeric network. Crystal data for the title compound: b = 7.9729 (3) \u00c5, c = 10.4579 (3) \u00c5, \u03b1 = 86.767 (2)\u00b0, \u03b2 = 75.773 (1)\u00b0, \u03b3 = 66.912 (2)\u00b0, V = 573.13 (3) \u00c53, Z = 2, Mo K\u03b1 radiation, \u03bc = 3.76 mm\u22121, T = 296 (2) K, crystal size 0.28 \u00d7 0.16 \u00d7 0.12 mm. Data collection: Bruker Kappa APEXII CCD diffractometer; absorption correction: multi-scan (SADABS; Bruker, 2005), T min = 0.486, T max = 0.639; 12369 measured reflections, 2846 independent reflections, 1840 reflections with I > 2\u03c3(I), R int = 0.034. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.038, wR(F 2) = 0.095, S = 1.02, 2846 reflections, 158 parameters, H atoms treated by a mixture of independent and constrained refinement, \u0394\u03c1max = 0.91 e \u00c5\u22123, \u0394\u03c1min = \u22120.58 e \u00c5\u22123. Programs used: APEX2 (Bruker, 2007) for data collection and cell refinement; SAINT (Bruker, 2007) for data reduction; SHELXS97 (Sheldrick, 2008) for structure solution; SHELXL97 (Sheldrick, 2008) for structure refinement; ORTEP-3 for Windows (Farrugia, 1997) and PLATON (Spek, 2003) for molecular graphics; WinGX (Farrugia, 1999) and PLATON for preparing material for publication. Supplementary materials: crystal structure, contains datablocks global, I, DOI: 10.1107/S1600536809002621/at2711sup1.cif; structure factors, contains datablocks I, DOI: 10.1107/S1600536809002621/at2711Isup2.hkl; also crystallographic information, 3D view and checkCIF report."}
+{"text": "The crystal structure is stabilized by hydrogen bonding. An intra\u00admolecular N\u2014H\u22efO hydrogen bond results in a six-membered ring. Each mol\u00adecule inter\u00adacts with two others through N\u2014H\u22efO and C\u2014H\u22efO hydrogen bonding, resulting in the formation of nine-membered rings. These hydrogen bonds generate a two-dimensional polymeric network. There are also \u03c0\u2013\u03c0 inter\u00adactions between the aromatic and heterocyclic rings [centroid\u2013centroid distance 3.638\u2005(2)\u2005\u00c5]. Crystal data for the title mol\u00adecule: b = 7.8732 (8) \u00c5, c = 25.651 (2) \u00c5, V = 804.58 (13) \u00c53, Z = 4, Mo K\u03b1 radiation, \u03bc = 0.11 mm\u22121, T = 296 (2) K, crystal size 0.25 \u00d7 0.12 \u00d7 0.10 mm. Data collection: Bruker Kappa APEXII CCD diffractometer; absorption correction: multi-scan (SADABS; Bruker, 2005), T min = 0.975, T max = 0.990; 5461 measured reflections, 1254 independent reflections, 860 reflections with I > 2\u03c3(I), R int = 0.037. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.040, wR(F 2) = 0.138, S = 1.07, 1254 reflections, 124 parameters, H atoms treated by a mixture of independent and constrained refinement, \u0394\u03c1max = 0.23 e \u00c5\u22123, \u0394\u03c1min = \u22120.22 e \u00c5\u22123. Programs used: APEX2 (Bruker, 2007) for data collection and cell refinement; SAINT (Bruker, 2007) for data reduction; SHELXS97 (Sheldrick, 2008) for structure solution; SHELXL97 (Sheldrick, 2008) for structure refinement; ORTEP-3 for Windows (Farrugia, 1997) and PLATON (Spek, 2003) for molecular graphics; WinGX (Farrugia, 1999) and PLATON for preparing material for publication. Supplementary materials: crystal structure, contains datablocks global, I, DOI: 10.1107/S1600536808004923/at2545sup1.cif; structure factors, contains datablocks I, DOI: 10.1107/S1600536808004923/at2545Isup2.hkl; also crystallographic information, 3D view and checkCIF report."}
+{"text": "Highly infectious diseases (HIDs) are defined as being transmissible from person to person, causing life-threatening illnesses and presenting a serious public health hazard. The sampling, handling and transport of specimens from patients with HIDs present specific bio-safety concerns. The European Network for HID project aimed to record, in a cross-sectional study, the infection control capabilities of referral centers for HIDs across Europe and to assess their level of adherence to previously published guidelines. In this paper, we report the current diagnostic capabilities and bio-safety measures applied to diagnostic procedures in these referral centers. Overall, 48 isolation facilities in 16 European countries were evaluated. Although 81% of these referral centers are located near a biosafety level 3 laboratory, 11% and 31% of them still performed their microbiological and routine diagnostic analyses, respectively, without bio-safety measures. The discrepancies between the level of practice in the referral centers surveyed and the European Network of Infectious Diseases (EUNID) recommendations have multiple causes, among them the interest of the individuals in charge and the investment they make in preparedness for emerging outbreaks. Although the less prepared centers can improve simply by updating their practices and policies, any support to help them achieve an acceptable level of biosecurity is welcome. Examples of HIDs include severe acute respiratory syndrome (SARS) and Lassa fever. To prepare the hospital management of HIDs for possible future outbreaks, the European Union (EU) recently funded the European Network of Infectious Diseases (EUNID). The appropriate management of HID cases requires high-level diagnostic capabilities. In 2009, EUNID reached a consensus on recommended biosafety procedures for the entire diagnostic process, from specimen sampling to transport to the laboratory. 
At the beginning of the project, national health authorities in all of the European countries were asked to suggest a physician with expertise in HID management as a project partner and to identify all the isolation facilities serving as referral centers for HID in their country. To survey only isolation facilities identified by national health authorities for the referral and management of HIDs, we requested official documents in which these hospitals are clearly indicated. The data were collected during on-site visits using checklists specifically developed during the first year of the project. On-site visits were performed by the project coordinator assisted, when feasible, by the project partner of the country concerned, from February to November 2009. The objective of the project was to assess the adherence of each surveyed facility to previously published guidelines. The checklists and the evaluation form are available on the website http://www.eunid.eu after free registration. The participant selection process led to the inclusion of 48 isolation facilities identified for the referral and management of HIDs in 16 countries (Table). Throughout Europe, 26 (54%) of the surveyed isolation facilities have a biosafety level 4 (BSL-4) laboratory (lab) available. Among the isolation facilities surveyed, 48 (100%) and 47 (98%) have a BSL-3 lab. Sixteen (33%) isolation facilities have access to adequate capabilities for other routine diagnostic tests: (i) optimal use of bed-side testing inside the isolation area, (ii) use of the central hospital lab after the inactivation of samples, and (iii) use of the BSL-3 lab. Microbiological and routine diagnostic tests are performed directly inside the isolation area in 8 (17%) and 13 (27%) of the surveyed facilities, respectively. Microbiological testing in the majority of isolation facilities and routine testing in a small proportion of the facilities are carried out in a BSL-3 lab. 
For 19 (40%) and 26 (54%) of the centers, the samples for microbiological and routine testing are sent to the central laboratory, which performs the analysis in a closed-type automatic analyzer. Finally, microbiological and routine tests are performed in the central laboratory without using closed-type auto-analyzers in 5 (11%) and 15 (31%) of the surveyed facilities, respectively. Although most of the isolation facilities surveyed have appropriate diagnostic capabilities and infection control procedures for the safe handling of specimens, 31% and 11% performed their routine and microbiological diagnostic tests in the central laboratory without any measures of biosecurity and biosafety as recommended by the EUNID. The discrepancies between the level of practice in the referral centers surveyed and the EUNID recommendations have multiple causes. One main explanation is the interest of the individuals in charge and the investment they make in preparedness for emerging outbreaks. Obviously, these centers might benefit from larger funding from their national institutions, or from better allocation of their internal resources. Although the less prepared centers can improve simply by updating their practices and policies, any support to help them achieve an acceptable level of biosecurity is welcome. HID: Highly infectious disease; EU: European Union; EUNID: European Network of Infectious Diseases; EuroNHID: European Network for Highly Infectious Diseases; HCW: Health Care Worker; BSL-4: Biosafety level 4; BSL-3: Biosafety level 3. The authors declare that they have no competing interests. SDT and PB wrote the manuscript; SS, GDI, GT, HCM, RG, HRB, BB, VP, and PB made substantial contributions to the design of the study, participated in the acquisition and interpretation of data, and gave final approval of the version to be published; GI made substantial contributions to the design of the study and gave final approval of the version to be published. 
All authors read and approved the final manuscript."}
+{"text": "Alexithymia is a familiar concept in psychosomatic medicine. The aim of this study was to investigate whether there are subtypes of alexithymia associated with different traits of emotional expression and regulation among a group of healthy college students. 1788 healthy college students were administered the Chinese version of the 20-item Toronto Alexithymia Scale (TAS-20) and another set of questionnaires assessing emotion status and regulation. A hierarchical cluster analysis was conducted on the three factor scores of the TAS-20. The cluster solution was cross-validated by the corresponding emotional regulation measures. The results indicated there were four subtypes of alexithymia, namely extrovert-high alexithymia (EHA), general-high alexithymia (GHA), introvert-high alexithymia (IHA) and non-alexithymia (NA). The GHA was characterized by generally high scores on all three factors, the IHA by high scores on difficulty identifying feelings and difficulty describing feelings but a low score on externally oriented cognitive style of thinking, the EHA by a high score on externally oriented cognitive style of thinking but normal scores on the others, and the NA by low scores on all factors. The GHA and IHA were dominated by a suppressive character of emotional regulation and expression, with worse emotion status compared to the EHA and NA. The current findings suggest there are four subtypes of alexithymia characterized by different emotional regulation manifestations. Alexithymia, \u201cno words for feeling\u201d, has been a familiar concept in psychiatry and psychosomatic medicine since it was first termed by Sifneos. The purpose of this study was to examine whether there might be subtypes of alexithymia characterized by different behavioural manifestations. 
In so doing, the current study adopted a cluster-analytical approach to examine whether there were natural groupings of people characterized by different psychological features associated with alexithymia. Cluster analysis is a statistical procedure for determining whether cases can be placed into groups such that cases within a group share common properties, while cases in different clusters are as dissimilar as possible. It was hypothesized that there would be various subtypes of alexithymia characterized by different psychological features. Individuals at various levels of alexithymia would adopt different ways to express and regulate their emotions; the higher-alexithymia groups would show a more serious level of depressive or anxious emotional status and be more likely to adopt improper regulation strategies. 1788 college students (freshmen and sophomores) were recruited from three regional universities in Guangzhou, south China. 1071 were males and 616 were females, aged 20.44 \u00b1 1.40 years and 20.51 \u00b1 1.39 years respectively; 101 individuals did not report their gender or age. Economic status was also recorded by a multiple-choice question in the checklist on monthly income per person. Among all subjects, 110 individuals did not report their economic status. All subjects were informed in writing that the aim of the current study was to examine the psychological status of Chinese youngsters, and all attended voluntarily. All of them would receive feedback on the assessment results via email. This study was approved by the ethics committee of Sun Yat-Sen University. Alexithymia was assessed by the 20-item Toronto Alexithymia Scale (TAS-20). Emotion expression tendency was assessed by the Chinese version of the Emotional Expressivity Scale (EES). The Emotion Regulation Questionnaire (ERQ) was used to assess emotion regulation strategies. Depression was measured with the Beck Depression Inventory (BDI). 
Anxiety was assessed with the Chinese version of the trait portion of the State-Trait Anxiety Inventory (STAI-T). The Statistical Package for the Social Sciences (SPSS) 15.0 was used for all statistical analyses reported. Independent-sample t-tests were conducted to analyze the gender effect on the total scores of the TAS-20. ANOVA was conducted to evaluate the potential effect of economic status upon the total scores of the TAS-20, whereas correlation analyses were used to explore any association of education and age with the TAS-20 scores. Cluster analyses were conducted in two phases. First, a hierarchical cluster analysis was conducted using the 3 factor scores of the TAS-20 (difficulty identifying feelings; difficulty describing feelings; externally oriented cognitive style of thinking) as the clustering variables and the between-group linkage method with a squared Euclidean distance measure to discriminate clusters. Second, the cluster solution was validated with analysis of variance (ANOVA) on the scores of the Emotional Expressivity Scale with its subscales, the Emotion Regulation Questionnaire, the Beck Depression Inventory and the trait portion of the State-Trait Anxiety Inventory for the identified groups. No significant difference was found in the total mean TAS-20 scores between boys and girls. Age was significantly correlated with the total mean TAS-20 score, whereas there was no significant association between education and the TAS-20 total score. No significant difference in the total mean TAS-20 scores across economic status was found. Given that the effect of age upon the TAS-20 score was negligible, it was not controlled for in subsequent between-cluster comparisons. The four clusters did not differ significantly in terms of gender proportion and economic status. 
However, significant differences were found in terms of age and education. An ANOVA showed that the four subtypes of alexithymia differed significantly in terms of emotional status, emotional expression and regulation, as expected (Table). The major findings of this study showed there were four subtypes of alexithymia, consistent with previous studies such as that of Vorst and Bermond. Validation of the cluster solution suggested that these subtypes of alexithymia were characterized by different emotional expression and regulation abilities. The general-high alexithymia (GHA) and introvert-high alexithymia (IHA) groups were characterized by poorer emotional regulation and expression, with worse emotion status. In more detail, the extrovert-high alexithymia (EHA) group seemed modest in emotion status, regulating emotion more efficiently than the general-high alexithymia (GHA) group. These results suggest the potential functional outcomes of these different subtypes. It should be noted that the EHA cluster includes 77.3% of the sample. The EHA cluster shows alterations mostly on externally oriented cognitive style of thinking (EOT). Some studies showed that difficulty identifying feelings (DIF) and difficulty describing feelings (DDF) had good internal reliability but EOT did not. The current study has several limitations. First, participants were recruited from a convenience sample pool, which was the main limitation of this study. Whether these cluster groups could be discovered in a broader population needs future study adopting a more rigorous epidemiological approach to improve representativeness. Second, the findings were based on subjective self-report measures. 
More rigorous methodologies adopting experimental designs or neurophysiological approaches, such as ERP or imaging paradigms, should be employed in the near future in order to validate the potentially distinct neural bases of these subtypes of alexithymia. Finally, the current cross-sectional design could not examine the stability of the cluster solutions across different time points. Future studies should adopt a longitudinal design to test the stability of the cluster solution. The current findings suggest that there were four subtypes of alexithymia characterized by different manifestations of emotional regulation. ANOVA: analysis of variance; BDI: Beck Depression Inventory; BVAQ: Bermond-Vorst Alexithymia Questionnaire; DDF: difficulty describing feelings; DIF: difficulty identifying feelings; EES: Emotional Expressivity Scale; EHA: extrovert-high alexithymia; EOT: externally oriented cognitive style of thinking; ERP: event-related potential; ERQ: Emotion Regulation Questionnaire; FFM: Five-Factor Model; GHA: general-high alexithymia; IHA: introvert-high alexithymia; NA: non-alexithymia; SPSS: Statistical Package for Social Sciences; STAI-T: Trait portion of the State-Trait Anxiety Inventory; TAS-20: 20-item Toronto Alexithymia Scale. The authors declare that they have no competing interests. JC designed the study, collected and analyzed data, and wrote the first draft of the manuscript. RCKC conceived and designed the study, and wrote the first draft of the paper. TX analyzed data and participated in writing up the manuscript. JJ participated in writing up the manuscript. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-244X/11/33/prepub"}
+{"text": "Androgen receptor (AR) and MNK-activated eIF4E signaling promotes the development and progression of prostate cancer (PCa). In this study, we report that our novel retinamides (NRs) target both AR signaling and eIF4E translation in androgen-sensitive and castration-resistant PCa cells via enhancing AR and MNK degradation through the ubiquitin-proteasome pathway. Dual blockade of AR signaling and MNK-initiated eIF4E activation by NRs in turn induced cell cycle arrest and apoptosis, and inhibited cell proliferation. NRs also inhibited cell migration and invasion in metastatic cells. Importantly, the inhibitory effects of NRs on AR signaling, eIF4E translation initiation and the subsequent oncogenic program were more potent than those observed with clinically relevant retinoids, established MNK inhibitors, and FDA-approved PCa drugs. Our findings provide the first preclinical evidence that simultaneous inhibition of AR and eIF4E activation is a novel and efficacious therapeutic approach for PCa, and that NRs hold significant promise for treatment of advanced prostate cancer. Androgen receptor (AR), a ligand-dependent transcription factor, plays a pivotal role in the development and progression of prostate cancer (PCa), the most frequently diagnosed non-cutaneous male malignancy ,2. Besides AR signaling, hyper-activation of eukaryotic translation initiation factor 4E (eIF4E), the mRNA 5\u2032 cap-binding protein of cap-dependent translation, promotes exquisite transcript-specific translation of key mRNAs that are indispensable in PCa initiation, progression and metastases -10. Retinoic acid metabolism blocking agents (RAMBAs), a family of compounds that inhibit the P450 enzyme(s) responsible for the metabolism of all-trans-retinoic acid (ATRA), exert potent anticancer and growth inhibitory effects in human breast/prostate cancer cells and xenograft models -18.
In the current study, we explored the ability of NRs to simultaneously target AR signaling and MNK-activated eIF4E translation initiation in androgen-sensitive and CRPC cells. Our data reveal that NRs are capable of inhibiting the growth and progression of PCa by directly targeting both AR signaling and the eIF4E translational machinery via enhancing AR and MNK degradation through the ubiquitin-proteasome pathway, which in turn led to inhibition of downstream events that promote cell growth, proliferation, colony formation, apoptosis evasion, invasion and metastasis. Our findings establish for the first time that agents such as NRs, which simultaneously inhibit activation of both AR and eIF4E to suppress growth and progression in genetically diverse PCa cells at pharmacologically feasible concentrations, are novel therapeutics for treatment of both androgen-sensitive and castration-resistant PCa. Androgen-sensitive, androgen-insensitive and castration-resistant (22Rv1) human prostate cancer cells were treated with NRs , VN/14-1, ATRA, 4-HPR, Casodex, abiraterone acetate (AA) (Zytiga) and MDV3100 . We also note that Enzalutamide had no significant effect up to a concentration of 100 \u03bcmol/L . Our results revealed that NRs significantly inhibited the growth of these resistant cells, with GI50 values of 1.5 - 5.5 \u03bcmol/L (Figure ). Since AR is a major driver of proliferation in PCa , we next examined the effect of NRs on AR transcriptional activity. To further determine whether the inhibition of transcriptional activity could be translated to inhibition of protein expression, we next explored the effects of NRs on AR and its responsive protein, PSA, in DHT-induced LNCaP cells. As seen in Figure , we then compared the expression of MNK1, MNK2 and eIF4E (including its Ser209-phosphorylated form, peIF4E) in three PCa cell lines with established retinoids and known MNK inhibitors, CGP57380 and cercosporamide.
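As a rough illustration of how a GI50 value such as those reported above is obtained from viability data, one can fit a four-parameter logistic dose-response curve. The concentrations and viability values below are synthetic (planted around a GI50 of ~3 \u03bcmol/L, within the reported 1.5-5.5 \u03bcmol/L range) and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, gi50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / gi50) ** hill)

# Synthetic viability data (% of vehicle control), generated from the
# model itself with a planted GI50 of 3 umol/L.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # umol/L
viability = logistic4(conc, 5.0, 100.0, 3.0, 1.5)

params, _ = curve_fit(
    logistic4, conc, viability,
    p0=[1.0, 90.0, 1.0, 1.0],
    bounds=(0.0, [50.0, 150.0, 30.0, 10.0]),
)
gi50 = params[2]
print(round(gi50, 2))  # recovers the planted GI50 of ~3.0 umol/L
```

The bounds keep the midpoint and slope parameters positive during fitting, which is the usual convention for dose-response curves.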
We observed that 24 h treatment of PCa cells with lead NRs reduced the expression of MNK1, MNK2 and peIF4Eser209, with no notable effect on the expression of total eIF4E. After 18 h of transfection, cells co-treated with AR and MNK1 siRNA showed a remarkable decrease in the expression of both AR and MNK1 compared to scrambled siRNA-treated cells (Figure ). To further authenticate that the VNLG-152-induced loss of cell viability at the GI50 concentration was due to downregulation of AR and MNKs, we performed a dose-dependent analysis by treating LNCaP cells with different concentrations of VNLG-152 for 24 h. As seen in Figure , AR and MNK1 declined at the GI50 concentration of VNLG-152 in LNCaP cells and above. However, in the case of MNK2, though VNLG-152 exerted a dose-dependent decline, the maximal effect was observed only at a 20 \u03bcM concentration. Besides MNKs and AR, a dose-dependent decrease in the expression of the cell cycle regulator cyclin D1 and an increase in the expression of pro-apoptotic Bax and caspase were also observed in LNCaP cells at the GI50 concentration. These results support the conclusion that the VNLG-152-induced loss of cell viability is due to downregulation of AR and MNKs. Since AR and MNK protein levels were significantly reduced in response to 24 h treatment with VNLG-152 in a dose-dependent manner, we next investigated the expression of AR and MNK in LNCaP cells following treatment with cycloheximide (CHX), a protein synthesis inhibitor, to unveil whether VNLG-152-induced AR and MNK downregulation occurs at the level of protein translation. Our results showed that AR and MNK (MNK1 and MNK2) were profoundly reduced even within 12 h of VNLG-152 treatment compared to control.
However, CHX treatment failed to induce noticeable AR/MNK downregulation at the same time points relative to VNLG-152-treated cells, signifying that post-translational mechanisms are at play in VNLG-152-induced AR/MNK downregulation (Figure ). Because our results revealed that the AR and MNK protein ablation by VNLG-152 is post-translational, we asked whether it occurred via the ubiquitin\u2013proteasome system. To test this, we treated LNCaP cells with 20 \u03bcM VNLG-152 in the presence or absence of MG-132 (5 \u03bcM), a 26S proteasome inhibitor. As seen in Figures , proteasomal degradation of the AR is known to be mediated by the E3 ligases MDM2 or CHIP . Since the importance of AR signaling and MNK-mediated eIF4E activation in PCa development and progression has been reported, several strategies to target the respective signaling pathways have been developed ,20,24,25. It has been demonstrated that AR ubiquitination is impaired in MDM2-deficient mouse embryonic fibroblasts (MEFs) compared to MDM2-intact MEFs . Although our work confirms a role for MDM2 and CHIP in degrading AR, the primary E3 ligase responsible for degrading AR in response to VNLG-152, and the molecular mechanisms underlying this preference, are not fully understood. Androgen-AR signaling has a critical role in prostate cancer development and progression, in part through transcriptional regulation of AR-responsive genes . Earlier work demonstrated that eIF4E phosphorylation by MNKs supports protein synthesis, cell cycle progression and proliferation in prostate cancer. Furthermore, combined deficiency of MNK1 and MNK2 has been demonstrated to delay tumour development . Consistent with this, NRs depleted peIF4Eser209 without affecting total eIF4E expression.
Notably, the effect of NRs in depleting MNKs and peIF4E proteins was far more potent than that of the established MNK inhibitors (cercosporamide and CGP57380) and the clinically relevant retinoids ATRA and 4-HPR. In addition to AR signaling, the role of MNK-mediated cap-dependent translation in PCa development and progression has been established in recent years ,30. These observations are consistent with a previous report demonstrating that NRs induce MNK degradation and block eIF4E phosphorylation in triple-negative and Her2-overexpressing breast cancers . Dissection of the molecular mechanism underlying lead NR (VNLG-152)-mediated MNK and peIF4E down-regulation indicated that VNLG-152 stimulated ubiquitination and proteasomal degradation of MNKs, the critical activators of eIF4E. Degradation of MNKs by VNLG-152 may abrogate MNK-mediated phosphorylation of eIF4E at serine 209, which subsequently impairs its ability to enter the eIF4F initiation complex, bind the 5\u2032 mRNA cap, and activate cap-dependent translation initiation . Constitutive AR signaling and/or eIF4E-mediated translation initiation favors translation of key genes involved in oncogenesis . In summary, our study identifies for the first time novel retinamides that can simultaneously inhibit both AR signaling and MNK-mediated eIF4E activation in prostate cancer cells (Figure ). Androgen-dependent (LNCaP), androgen-independent (PC-3) and castration-resistant (C4-2B and 22Rv1) human prostate carcinoma cells were procured from the American Type Culture Collection (Manassas, VA, USA) and maintained in RPMI 1640 media supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. MR49F, Enzalutamide-resistant LNCaP cells, were maintained in RPMI 1640 supplemented with 10% fetal bovine serum, 1% penicillin/streptomycin and 10 \u03bcM Enzalutamide.
Immortalized untransformed prostate epithelial cells (PWR1E), from the American Type Culture Collection, were maintained in serum-free Keratinocyte media (Gibco) supplemented with epidermal growth factor and bovine pituitary extract. All cell lines were maintained in a 37 \u00b0C/5% CO2 humidified atmosphere. The cell proliferation assay was performed as described previously . LNCaP cells were transferred to steroid-free medium 3 days before the start of the experiment, and plated at 1 \u00d7 10^5 cells/well in steroid-free medium. The cells were dual-transfected with ARR2-Luc and the Renilla luciferase reporting vector pRL-null with LipofectAMINE 2000 transfection reagent according to the manufacturer's protocol. After a 24 h incubation period at 37 \u00b0C, the cells were incubated with fresh phenol-red-free serum-free RPMI 1640 medium and treated with DHT, ethanol vehicle and/or the specified compounds in triplicate. After a 24 h treatment period, the cells were washed twice with ice-cold DPBS and assayed using the Dual Luciferase kit (Promega) according to the manufacturer's protocol. Briefly, cells were lysed with 100 \u03bcl of luciferase lysing buffer, collected in a microcentrifuge tube, and pelleted by centrifugation. Supernatants were transferred to corresponding wells of opaque 96-well multiwell plates. Luciferin was added to each well, and the light produced during the luciferase reaction was measured in a Victor 1420 scanning multi-well spectrophotometer. After measurement, Stop and Glo reagent (Promega) was added to quench the firefly luciferase signal and initiate the Renilla luciferase luminescence. Renilla luciferase luminescence was also measured in the Victor 1420. The results are presented as the fold induction, that is, the relative luciferase activity of the treated cells divided by that of the control, normalized to that of the Renilla luciferase . Cells were harvested by trypsinization and then fixed with 70% ethanol for 24 h at 4 \u00b0C.
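The fold-induction normalization described above (firefly signal divided by Renilla signal, then expressed relative to control) can be made concrete with a small sketch. The luminescence counts and condition names below are hypothetical, not measurements from the study.

```python
# Hypothetical raw luminescence counts (arbitrary units) from a
# dual-luciferase readout; names and values are illustrative only.
firefly = {"control": 12000.0, "DHT": 95000.0, "DHT+NR": 30000.0}
renilla = {"control": 8000.0, "DHT": 9000.0, "DHT+NR": 8500.0}

def fold_induction(condition, data_f=firefly, data_r=renilla):
    """Firefly signal normalized to Renilla, relative to control."""
    norm = {k: data_f[k] / data_r[k] for k in data_f}
    return norm[condition] / norm["control"]

for cond in ("DHT", "DHT+NR"):
    print(cond, round(fold_induction(cond), 2))
```

Normalizing to Renilla corrects for well-to-well differences in cell number and transfection efficiency before comparing treatments.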
Fixed cells were stained in 1 ml of propidium iodide solution for at least 2 h at 4 \u00b0C. Stained cells were analyzed with a flow cytometer using FlowJo software, which uses the Watson algorithm to determine the peak and S-phase populations from a univariate distribution curve. Apoptosis was evaluated in PCa cells by acridine orange/ethidium bromide dual staining. Briefly, cells were seeded in a 12-well plate at 1 \u00d7 10^5 cells per well and then treated with 5 \u03bcM of the indicated compounds for 24 h. Subsequently, cells were washed once with phosphate-buffered saline and incubated with 100 \u03bcl of a 1:1 mixture of acridine orange and ethidium bromide (4 \u03bcg/ml) for 30 min. Following this, cells were immediately washed with PBS and analyzed using a Nikon TE2000 fluorescence microscope. Cytoplasmic histone-associated DNA fragments were quantified using the Cell Death Detection ELISAPLUS kit according to the manufacturer's instructions. Briefly, floating and attached cells were collected and homogenized in 400 \u03bcL of incubation buffer. The wells were coated with antihistone antibodies and incubated with the lysates, horseradish peroxidase\u2013conjugated anti-DNA antibodies, and the substrate, in that sequence. Absorbance was measured at 405 nm. For the wound-healing assay, cells were allowed to form a confluent monolayer for 24 h. Cells were made dormant by pretreating with 0.5 \u03bcmol/L mitomycin C for 2 h to ensure that wounds are filled by cell migration and not by cell proliferation. Subsequently, the monolayer was scratched with a pipette tip, washed with media to remove floating cells, and photographed (time 0 h). Cells were then treated with the indicated compounds (5 \u03bcM), and the experiment was terminated as soon as the wound was completely filled in vehicle-treated controls.
Cells were then photographed again using a Nikon TE2000 microscope at three randomly selected sites per well . For the wound-healing assay, highly metastatic PC-3 cells were plated in a 24-well plate at a seeding density of 5 \u00d7 10^5 cells per well. The invasion assay in PC-3 cells was performed using Matrigel-coated transwell cell culture chambers as described previously (43). Briefly, PC-3 cells (5 \u00d7 10^4 cells/well) were cultured in the upper chamber of the transwell insert for 24 h in serum-free RPMI-1640 medium. The cells were then treated with 5 \u03bcM of the indicated compounds for 24 h. RPMI-1640 medium containing 10% FBS was placed in the lower chamber. At the end of incubation, the non-migrated cells on the top surface were scraped gently with cotton swabs, and the cells on the lower surface of the membrane (migrated cells) were fixed for 15 min with cold methanol and stained with crystal violet. Cells that had migrated to the bottom of the membrane were visualized and counted using an inverted microscope. For each replicate (n = 3), cells in three randomly selected fields were counted and averaged. For western blotting, cells were lysed in modified RIPA lysis buffer supplemented with a protease inhibitor mix . Unless otherwise described, 30 \u03bcg protein was resolved by SDS\u2013polyacrylamide gel electrophoresis, transferred, and immunoblotted using the following antibodies: AR, Bax, caspase-3, CHIP, cyclin B, cyclin D1, E-cadherin, eIF4E, MNK1, MDM2, N-cadherin, cleaved PARP, and peIF4E. The dose of NRs used in the present study was chosen based on a dose-dependent experiment that was initially performed (data not shown). Our results revealed that NRs significantly modulated the expression of the proteins analyzed at doses starting at 5 \u03bcM, with a maximal effect at 20 \u03bcM. Hence, a 20 \u03bcM concentration was chosen for most of the analyses.
While no significant difference was observed between 5 and 20 \u03bcM concentrations of NRs in cell-based assays, 5 \u03bcM was uniformly chosen for the cell-based analyses. For siRNA transfection, 2 \u00d7 10^5 cells were seeded in a 6 cm dish for 24 h in culture medium. The cells were then transfected with 100 nM of Mnk1/AR and non-targeting siRNAs (purchased from Ambion) for 18 h using Lipofectamine\u00ae 2000 Transfection reagent (Invitrogen). Protein silencing was confirmed by immunoblot analysis. For cell growth assay experiments, transfection complexes were removed after 18 h, and cells were washed twice with phosphate-buffered saline and returned to growth medium. Twenty-four hours later, drug was added, and cells were harvested after 72 h. For transfection with MDM2 or CHIP siRNA, LNCaP cells were transfected with 100 nM of MDM2 or CHIP siRNA for 18 h; transfection complexes were then washed off and replaced with phenol-free media for 24 h. Cells were then treated with 20 \u03bcM of VNLG-152 for an additional 24 h before cell lysis in RIPA lysis buffer ,45. LNCaP cells were treated with VNLG-152 (20 \u03bcM) and MG-132 (5 \u03bcM), or the combination thereof, for 24 h, harvested and lysed in modified RIPA buffer. MG-132 was added 8 h prior to the VNLG-152. Ubiquitinated proteins were immunoprecipitated with 20 ml of protein A/G sepharose beads for 45 min and centrifuged at 13,300 rpm for 1 min. Supernatants were then incubated with 1 \u03bcg of polyclonal antibody per 500 \u03bcg of total protein in the immunoprecipitate. Protein lysate-antibody complexes were rotated for 12 h at 4 \u00b0C, and beads were added for an additional 1 h. Complexes were centrifuged at 13,300 rpm for 1 min, and the supernatant was discarded. Beads were subsequently washed with 3X IP/lysis buffer, re-suspended in 2X SDS sample loading buffer, and boiled at 99 \u00b0C for 5 min.
Samples were then resolved by SDS\u2013PAGE and immunoblotted for ubiquitin after stripping the membrane for AR/MNK . All experiments were carried out at least in triplicate, and data are expressed as mean \u00b1 S.E. where applicable. Treatments were compared to controls using Student's t-test with either GraphPad Prism or Sigma Plot. Differences between groups were considered statistically significant at P < 0.05."}
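A minimal sketch of the reported statistics (mean \u00b1 S.E. reporting and a Student's t-test against control at P < 0.05) is shown below, using made-up triplicate viability values rather than the study's data.

```python
import numpy as np
from scipy import stats

# Made-up triplicate viability readings (% of control) for
# vehicle- and drug-treated wells; illustrative only.
control = np.array([98.0, 102.0, 101.0])
treated = np.array([58.0, 63.0, 60.0])

# Student's (equal-variance) two-sample t-test, treatment vs control.
t_stat, p_value = stats.ttest_ind(control, treated)

# Mean +/- S.E. summary for the treated group, as reported in the text.
treated_mean, treated_sem = treated.mean(), stats.sem(treated)

significant = p_value < 0.05
print(round(treated_mean, 2), round(treated_sem, 2), significant)
```

With such a large separation between groups, even n = 3 per group yields a significant difference at the 0.05 level.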
+{"text": "As such, nitrogen geochemistry is fundamental to the evolution of planet Earth and the life it supports. Despite the importance of nitrogen in the Earth system, large gaps remain in our knowledge of how the surface and deep nitrogen cycles have evolved over geologic time. Here, we discuss the current understanding (or lack thereof) of how the unique interaction of biological innovation, geodynamics, and mantle petrology has acted to regulate Earth's nitrogen cycle over geologic timescales. In particular, we explore how temporal variations in the external (biosphere and atmosphere) and internal (crust and mantle) nitrogen cycles could have regulated atmospheric pN2. We consider three potential scenarios for the evolution of the geobiological nitrogen cycle over Earth's history: two in which atmospheric pN2 has changed unidirectionally (increased or decreased) over geologic time and one in which pN2 could have taken a dramatic deflection following the Great Oxidation Event. It is impossible to discriminate between these scenarios with the currently available models and datasets. However, we are optimistic that this problem can be solved through a sustained, open\u2010minded, and multidisciplinary effort between surface and deep Earth communities. Nitrogen forms an integral part of the main building blocks of life, including DNA, RNA, and proteins. Over time this biological revelation cumulatively oxygenated Earth's surface. The result is that Earth's atmosphere became highly reactive, unlike the atmospheres of Mars and Venus, which are still dominated by unreactive gases (CO2 + N2). The combination of a uniquely reactive\u2010gas\u2010rich atmosphere and hydrosphere, coupled with subduction zone plate tectonics, means that Earth injects oxidizing material into a relatively reduced mantle . N2 is the largest surficial reservoir of nitrogen and comprises around 78% of Earth's atmosphere.
Atmospheric nitrogen is incorporated into the biosphere via the process of N2 fixation, whereby specialized prokaryotes convert this inert N2 gas into biomolecules (as C\u2010NH2). These prokaryotes accomplish this feat by utilizing the nitrogenase metallo\u2010enzyme complex, which contains Fe and Mo in its most efficient form Figure . Ammonium (NH4+) is dominantly recycled by the biosphere or sequentially oxidized to nitrite (NO2\u2212) and nitrate (NO3\u2212). Ammonium oxidation (\u201cnitrification\u201d) generally requires molecular oxygen, though anaerobic ammonium oxidation also occurs . In the deep Earth, nitrogen can be hosted as ammonium , metallic nitride , and nitro\u2010carbide , with speciation controlled by redox conditions where oxygen fugacity is buffered below the quartz\u2013fayalite\u2013magnetite buffer (QFM) . Molecular N2 is highly incompatible, whereas ammonium (NH4+) is likely compatible in potassium\u2010bearing mineral phases . Nitrogen is exchanged between the surface and the interior through subduction (ingassing) and volcanism (outgassing), and this interplay ultimately controls atmospheric N2. N2 reduction to ammonium within hydrothermal systems could have supplied fixed nitrogen to an early deep biosphere to fuel this primary productivity. The evolution of oxygenic photosynthesis in the late Archean would certainly have resulted in a marked increase in global rates of primary production . Under the widespread sulfidic conditions between ~2.4 and 2.3 Ga, a switch from molybdenum\u2010limited to modern N2 fixation rates, based on the experiments of Zerkle et al. , would have changed nitrogen burial in response to these changes in the biogeochemical nitrogen cycle. Using conservative estimates for the timing of major biological and geochemical innovations, we can propose a rough timeline for changes in the burial of N into marine sediments over geologic time Figure . Our understanding of the evolution of the deep Earth nitrogen cycle and the relevant nitrogen reservoirs is even less well constrained.
One puzzle often cited is the \u201cmissing nitrogen conundrum,\u201d which follows because the abundance of nitrogen in the bulk silicate Earth (BSE) appears to be significantly lower than that of other volatile elements Figure . The most common estimates of mantle nitrogen were derived from N2/Ar ratios, an approach now known to be inaccurate for the majority of the mantle. In particular, the N2/Ar approach underestimates the true volume of the N reservoir, because some (or a lot of) N should exist as lattice\u2010bound NH4+ ions over geological time . Two issues remain unresolved: (i) a consensus for the fluxes of nitrogen into and out of the mantle, and how these relate to the surficial nitrogen cycle, and (ii) the validity of the proposed missing nitrogen conundrum. We suggest that there are three plausible scenarios to reconcile these issues within the geobiological nitrogen cycle over Earth's 4.6\u2010billion\u2010year history, notably that atmospheric pN2 over geologic time could have scaled with these evolving fluxes. If so, it could mean that abiotic sources became enhanced in the early Proterozoic, potentially alleviating some of the biological nitrogen stress proposed for the boring billion. This scenario would suggest an increase in atmospheric pN2 over geologic time Figure . A third scenario holds that the Great Oxidation Event (GOE) altered atmospheric pN2 by altering the speciation and partitioning of nitrogen in subducting sediments. This follows because the speciation and partitioning of nitrogen are redox\u2010sensitive, as discussed above. Because N2 molecules are highly volatile and highly incompatible in all mineral phases, this molecular N2 would be released as a gas\u2010phase to the atmosphere, a concept not quantitatively incorporated into recent flux models . Nitrogen is also held in crystalline rocks (predominantly as NH4+).
A second, as yet unexplored, link between the GOE and atmospheric pN2 is oxidative weathering. There are currently no quantitative estimates we are aware of for the contribution of oxidative weathering to atmospheric N2 in the modern Earth system; however, it potentially constitutes an additional N2 source that would have been absent, or nearly so, prior to global oxygenation. In summary, subduction zones inject biologically mediated material into the mantle, which is affected by Earth surface redox. This subducted material affects mantle melting and degassing, and can significantly alter Earth's atmospheric chemistry over geologic time. Atmospheric chemistry has been both a driving force for, and a response to, biological innovation and evolution. Therefore, the geological and biological nitrogen cycles are intimately linked, albeit with long lag times possibly verging on hundreds to thousands of millions of years. In the above discussion, we have highlighted some of the looming questions in nitrogen geobiology, and the problems and challenges involved in addressing them. What is painfully obvious from this discussion is that a cross\u2010disciplinary approach will be crucial in tackling these issues. This requirement goes beyond the typical marriage of low\u2010temperature geochemistry, microbiology, and sedimentology, toward a deeper connection between biogeochemistry, mantle petrology, and geodynamics. For example, additional thermodynamic and experimental constraints on N speciation under relevant deep Earth conditions are warranted, but these data are only relevant if the input (sourced primarily from biology) is considered. The geobiological nitrogen puzzle will only be solved by considering the biosphere as intimately connected with the geosphere, and vice versa. We therefore put this challenge to the Geobiology community, to don your hard hats and delve into Earth's interior (accompanied by your hard\u2010rock colleagues).
+{"text": "Visual cortical neurons are tuned to similar orientations through the two eyes. The binocularly-matched orientation preference is established during a critical period in early life, but the underlying circuit mechanisms remain unknown. Here, we optogenetically isolated the thalamocortical and intracortical excitatory inputs to individual layer 4 neurons and studied their binocular matching. In adult mice, the thalamic and cortical inputs representing the same eyes are similarly tuned and both are matched binocularly. In mice before the critical period, the thalamic input is already slightly matched, but the weak matching is not manifested due to random connections in the cortex, especially those serving the ipsilateral eye. Binocular matching is thus mediated by orientation-specific changes in intracortical connections and further improvement of thalamic matching. Together, our results suggest that the feed-forward thalamic input may play a key role in initiating and guiding the functional refinement of cortical circuits in critical period development.DOI:http://dx.doi.org/10.7554/eLife.22032.001 Two major transformations take place when visual information from the dorsal lateral geniculate nucleus (dLGN) of the thalamus reaches layer 4 of the primary visual cortex (V1). In one, V1 neurons become selective for bars and edges of certain orientations . In the other, inputs from the two eyes converge onto individual neurons . The mechanisms underlying monocular orientation selectivity in V1 have been extensively studied, providing a foundation for understanding its binocular matching. According to the \u2018feed-forward\u2019 model proposed by Hubel and Wiesel , the orientation tuning of cortical neurons is determined by their thalamic inputs. We recorded membrane potential (Vm) responses and excitatory postsynaptic currents (EPSCs) underlying orientation tuning through the two eyes. We focused our experiments on the excitatory neurons in layer 4 because the synaptic mechanisms of their orientation tuning are better understood.
We recorded at two developmental stages, P15-21 and P60-90, i.e., before and after the matching process, respectively. We have previously discovered, using extracellular recording, that individual neurons in mouse V1 are initially tuned to different orientations through the two eyes until about postnatal day 21 (P21), and the difference declines progressively and reaches the adult level by P30 . To reveal the synaptic basis of this process, we recorded Vm responses under current clamp and compared the orientation tuning of Vm between the two age groups . Layer\u00a04 neurons receive several sources of synaptic inputs, including intracortical excitation from neighboring neurons and thalamocortical excitation from\u00a0dLGN. As previously revealed in the monocular visual cortex, the orientation tuning of layer\u00a04 cells is determined by the thalamic input, which is then linearly amplified by intracortical circuits . To confirm that thalamocortical inputs could be suppressed via GABAB receptors on their axon terminals, we recorded visually\u00a0evoked local field potentials (VEPs) in the visual cortex and examined the effect of administering the GABAB receptor antagonist CGP54624. In one set of experiments, we first applied the GABAB receptor agonist Baclofen to reduce the VEP amplitude, which was then reversed by the subsequent administration of CGP . As a result, the intracortical input was completely mismatched binocularly in the young mice . Our recording time (P15-21) may contribute to this early thalamic binocular correspondence.
Alternatively, previous studies have shown that many aspects of visual system organization and function can be established by experience-independent mechanisms, such as ocular dominance columns in cat visual cortex and monocular orientation selectivity . In adult mice, the intracortical excitation that individual layer\u00a04 neurons receive prefers the same orientation as the thalamic input (Figure ), thereby preserving the binocular match. Interestingly, we found that the correlation between thalamic and cortical tuning is slightly better for the contralateral pathway in the young mice, indicating that the contralateral pathway starts to mature earlier than the ipsilateral one. A similar developmental sequence of the two pathways has been revealed in both cat and mouse visual cortices. In mouse ocular dominance plasticity, both contralateral and ipsilateral circuits respond to imbalanced visual inputs in juveniles . Finally, it is important to note that, even though it has been widely used in neuroscience research, voltage clamp is normally incomplete in whole-cell recordings due to the so-called \u2018space clamp\u2019 issue . This problem should be kept in mind when interpreting the absolute amplitudes of the recorded currents. In conclusion, we have discovered the thalamocortical and intracortical circuits underlying binocular matching and revealed their developmental profiles. Our results suggest that the feed-forward thalamocortical pathway may play an important role in the development of visual circuits, possibly through initiating and guiding the functional rewiring and refinement of cortical circuits in visual cortex. These findings provide an exciting foundation for future mechanistic studies of critical period plasticity and binocular matching. The mice used were derived from Gad2^tm2(cre)Zjh and Gt(ROSA)26Sor^tm32(CAG-COP4*H134R/EYFP)Hze homozygous mice originally from the Jackson Laboratory. Animals were raised on a 12 hr light/dark cycle, with food and water available ad libitum.
All animals were used in accordance with protocols approved by the Northwestern University Institutional Animal Care and Use Committee (IS00003509). Wild-type C57BL/6 and \u2018GAD2-ChR2\u2019 double heterozygous mice of different age groups and both sexes were used in the experiments. The double heterozygotes were obtained by crossing the two homozygous lines. A craniotomy (~2 mm^2) was made on the left hemisphere to expose binocular V1. The center of the craniotomy was 3.0 mm lateral and 0.5 mm anterior from the Lambda point. Throughout recordings, the toe-pinch reflex was monitored and additional urethane (0.2\u20130.3 g/kg) was supplemented as needed. The animal\u2019s temperature was monitored with a rectal thermoprobe and maintained at 37\u00b0C through a feedback heater . For recordings, mice were sedated with chlorprothixene and then anesthetized with urethane . Atropine and dexamethasone were administered subcutaneously, as described before . Artificial cerebrospinal fluid (containing, among other components, 3 mM MgSO4 and 1 mM NaH2PO4) was applied on top of the cortex to reduce pulsation. Signals were amplified using a MultiClamp 700B , sampled at 10 kHz, and then acquired with a System 3 workstation . Pipette capacitance and the open-tip resistance were compensated initially. After the whole-cell configuration was achieved, the membrane potential was recorded under current-clamp mode with no current injected. For recording EPSCs, the cell was voltage clamped at ~ \u221263 mV . Electrodes were inserted perpendicular to the pial surface. For single-unit recordings to confirm complete silencing, signals were recorded between 600 \u03bcm and 700 \u03bcm in depth, corresponding to deep layers (layers 5 and 6). For single-unit recordings to study the impact of callosal projections and for field recordings of visually\u00a0evoked potentials (VEPs), signals were recorded between 350 \u03bcm and 450 \u03bcm in depth, corresponding to layer 4. Electrical signals were filtered between 0.3 and 5 kHz for spikes, and between 10 and 300 Hz for VEPs, and sampled at 25 kHz using a System 3 workstation .
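The band-splitting described above (0.3-5 kHz for spikes, 10-300 Hz for VEPs, at a 25 kHz sampling rate) can be sketched with a zero-phase Butterworth filter. The filter order and the two-component synthetic trace are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy import signal

FS = 25_000  # sampling rate (Hz), as stated in the text

def bandpass(data, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter (assumed 4th order)."""
    sos = signal.butter(order, [low, high], btype="bandpass",
                        fs=fs, output="sos")
    return signal.sosfiltfilt(sos, data)

# Synthetic 1 s trace: a slow 20 Hz "VEP-like" wave plus a smaller
# fast 1 kHz "spike-band" component.
t = np.arange(FS) / FS
raw = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 1000 * t)

spikes = bandpass(raw, 300, 5000)   # 0.3-5 kHz spike band
veps = bandpass(raw, 10, 300)       # 10-300 Hz VEP band
print(spikes.shape, veps.shape)
```

Each band-passed trace retains essentially only its in-band component: the 20 Hz wave survives in `veps` and the 1 kHz component in `spikes`.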
The spike waveforms were sorted offline in OpenSorter to isolate single units, as described before. Sinusoidal gratings drifting perpendicular to their orientations were generated with the Matlab Psychophysics toolbox and presented on a display placed 25 cm in front of the animal. The direction of the gratings varied between 0\u00b0 and 330\u00b0 (12 steps at 30\u00b0 spacing) in a pseudorandom sequence. The spatial frequency of the stimuli was 0.02 cycles/degree. The temporal frequency was fixed at two cycles/s. Each stimulus was presented for 1.5 s (three cycles), with a 1.5 s inter-stimulus interval, or a 4.5 s interval when the LED was used, in order to allow the optogenetically activated inhibitory cells to recover. For photostimulation, a blue LED was placed \u223c0.5 mm above the exposed cortex; the LED power at the tip of the optic fiber was kept the same in all recordings and was confirmed to be reliably effective in silencing all excitatory neurons in visual cortex. The tip of the LED fiber was placed at a similar position in all mice. During recordings, it was buried in the agarose that was applied to reduce the pulsation of the brain and protect the tissue. To prevent direct photostimulation of the eyes by the LED light, the Metabond used for mounting the head plate was prepared with black ink, and a piece of thick black paper was carefully placed around the fiber to ensure no light could be seen from the front and lateral sides, as described before. 2.0% biocytin was added to the internal solution to label recorded cells. After in vivo whole-cell recordings, mice were overdosed with euthanasia solution and perfused with PBS and then 4% paraformaldehyde (PFA) solution. The brain was fixed in 4% PFA overnight. Coronal slices 150 \u03bcm thick were cut from the fixed brain using a vibratome. The labeled cells were revealed by visualizing biocytin with a Streptavidin-Alexa Fluor 547 conjugate. 
Fluorescence images were captured using a Zeiss LSM5 Pascal confocal microscope in z-series scanning. Whole-cell recording data were first analyzed using a custom MATLAB program. For current-clamp data, spikes were detected by calculating the first derivative of the raw voltage traces (dV/dt); the start of a spike was the time point when dV/dt reached a manually set positive threshold. Subthreshold Vm traces were extracted by removing spikes from the raw voltage traces with a 6 ms median filter. Because each stimulus trial included three cycles of drifting gratings, the subthreshold Vm traces were cycle-averaged for each stimulus condition (from 50 to 550 ms after each stimulus cycle onset). Note that only the first two cycles were used to cycle-average the responses if the cell showed adaptation in the third cycle. The averaged Vm trace for the blank stimulus (i.e., gray screen) was used to calculate the mean (Vm baseline) and standard deviation of spontaneous Vm fluctuations. The Vm baseline was then subtracted from the smoothed and cycle-averaged Vm trace for each visual stimulus condition, i.e., gratings of a certain direction, and the peak amplitude of the resulting Vm trace was used as the response magnitude for that direction. For voltage-clamp data, the Im baseline was calculated as the mean of the Im trace to the blank stimulus, separately for conditions with and without LED photoactivation (\u2018LED-on\u2019 and \u2018LED-off\u2019), and then subtracted from the cycle-averaged trace of each condition to obtain visually evoked EPSC responses. The peak EPSC amplitude was used for plotting tuning curves and subsequent analysis. Finally, the intracortical EPSC traces were generated by a point-by-point subtraction of the thalamic EPSC (LED-on) traces from the total EPSC (LED-off) traces. R(\u03b8) is the response magnitude of Vm or EPSC at grating direction \u03b8. The amplitude of the normalized vector sum of R(\u03b8) was used as a global orientation selectivity index (gOSI). 
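The vector-sum computation described above (direction angles doubled so that opposite drift directions of the same orientation add constructively; the amplitude gives the gOSI and half the complex phase gives the preferred orientation) can be sketched as follows. The tuning values are illustrative only, not data from the study:

```python
import numpy as np

# Illustrative tuning curve: response magnitudes R(theta) at the 12 grating
# directions used in the recordings (0-330 deg in 30 deg steps).
directions_deg = np.arange(0, 360, 30)
responses = np.array([1.0, 2.0, 5.0, 2.0, 1.0, 0.5,
                      1.0, 2.0, 4.5, 2.0, 1.0, 0.5])

# Normalized vector sum in orientation space (angles doubled).
theta = np.deg2rad(directions_deg)
vec = np.sum(responses * np.exp(2j * theta)) / np.sum(responses)

g_osi = np.abs(vec)                                        # amplitude -> gOSI
pref_orientation = (np.rad2deg(np.angle(vec)) / 2.0) % 180.0  # half the phase
```

With the invented tuning above (peaks at 60 and 240 degrees), the preferred orientation comes out at 60 degrees, as expected from the doubled-angle construction.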
Half of its complex phase was calculated as the preferred orientation. No statistical methods were used to predetermine sample sizes, but our sample sizes are similar to those reported in the field. In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Thank you for submitting your article \"Binocular Matching of Thalamocortical and Intracortical Circuits in the Mouse Visual Cortex\" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Eve Marder as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Sacha B Nelson (Reviewing Editor and Reviewer #1) and Huizhong Tao (Reviewer #3). The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission. Summary: Visual cortical neurons are selective for orientation and respond to stimulation through both eyes. The developmental process by which inputs through the two eyes become matched in their orientation tuning is incompletely understood. Here, the authors use optogenetics and in vivo whole cell recordings to show that this matching occurs at earlier ages for thalamic input than for cortical input. Essential revisions: 1) The authors use CGP 55845 to block GABA-B receptors and find no effect on field potentials when intracortical inputs are silenced via the LED. An important positive control for the effectiveness of the antagonist would be to show that it does have an effect when the LED is not turned on. 
This, or some other control demonstrating the drug was effective, should be included. 2) The text should be revised so that the order in which the results are discussed is the same for the older and younger animals. The organization of the figures is fine but the order in the text was confusing. The EPSC properties reported for adult mice are presented in the following order: 1) the ipsilateral and contralateral scaling factor; 2) individual eye matching of the thalamic and total EPSCs (both ipsi and contra); 3) inter-ocular difference of the total EPSC; 4) inter-ocular difference of the thalamic EPSC; 5) correlation of inter-ocular difference between total and thalamic EPSC. These properties should be presented in the exact same order also for young mice. Furthermore, the intracortical EPSC and its individual eye matching with the thalamic EPSC, as well as the inter-ocular difference of the intracortical EPSC, should be presented with the rest of the EPSC properties, in the adult first and then in the young animal. 3) These are difficult experiments and the authors should acknowledge somewhere the limitations of their approach, ideally in the Discussion. A) Voltage clamp in vivo may be incomplete due to space clamp issues. This could even have larger effects on intracortical than thalamic input since the latter are likely to occur more proximally. B) The authors should also acknowledge the limitation of linear subtraction as a method for estimating the cortical contribution. 
The interaction between thalamic and cortical input is likely not linear. C) The discussion of possible early matching mechanisms in the Discussion is fine, but the statement \"Amazingly, certain degrees of binocular correlation may indeed exist in mice even at the level of retinal waves, possibly via direct retino-retinal connections, retinopetal modulations, or light-driven activity in the intrinsic photosensitive retinal ganglion cells through the closed eye lids\" implies that all such correlation must have occurred before eye opening. However, the experience between eye opening and recording may also contribute. D) Finally, the following conclusion is not warranted, although it could be stated as a speculation: \"feed-forward thalamocortical pathway plays an important role in initiating and guiding the functional rewiring and refinement of cortical circuits in visual system development.\" Correlation does not equal causation. 4) The scale factor was calculated as the maximal thalamic EPSC amplitude of the tuning curve divided by the maximal total EPSC amplitude. Since the preferred orientation often differed between the thalamic EPSC and the total EPSC, especially for young cells, the scale factor was often calculated from two responses evoked by different stimuli. To have a more accurate measurement of the scale factor, the authors can select 4 or 5 of the best responses of the tuning curves for the thalamic EPSC and total EPSC (evoked by the same stimuli), do some averaging, and then calculate the ratio. Essential revisions: 1) The authors use CGP 55845 to block GABA-B receptors and find no effect on field potentials when intracortical inputs are silenced via the LED. An important positive control for the effectiveness of the antagonist would be to show that it does have an effect when the LED is not turned on. This, or some other control demonstrating the drug was effective, should be included. We agree with the reviewers that this is an important positive control. 
When we analyzed CGP\u2019s effect on VEP amplitude in the absence of LED, we found that it was not different between control and CGP conditions. This is in fact consistent with the literature that administration of a GABA-B receptor antagonist alone does not affect field potential amplitudes. In other words, GABA-B receptors are not normally active during our recording condition. We have therefore performed new experiments to demonstrate the effectiveness of the antagonist, following Lien and Scanziani (2013). In this set of experiments, we first applied the GABA-B receptor agonist Baclofen, which reduced the VEP amplitude. Subsequent administration of CGP was able to reverse Baclofen\u2019s effect, thus indicating the effectiveness of the antagonist. These new data are added to the revised manuscript. 2) The text should be revised so that the order in which the results are discussed is the same for the older and younger animals. The organization of the figures is fine but the order in the text was confusing. The EPSC properties reported for adult mice are presented in the following order: 1) the ipsilateral and contralateral scaling factor; 2) individual eye matching of the thalamic and total EPSCs (both ipsi and contra); 3) inter-ocular difference of the total EPSC; 4) inter-ocular difference of the thalamic EPSC; 5) correlation of inter-ocular difference between total and thalamic EPSC. These properties should be presented in the exact same order also for young mice. We have followed this suggestion and rearranged this part of the Results. Now these properties are presented in the exact same order for young mice. Furthermore, the intracortical EPSC and its individual eye matching with the thalamic EPSC, as well as the inter-ocular difference of the intracortical EPSC, should be presented with the rest of the EPSC properties, in the adult first and then in the young animal. We tried to re-organize this part following the above suggestion, but found the flow awkward. 
So we decided to keep the results on intracortical EPSCs as a separate paragraph (last paragraph of the Results). This way, the intracortical EPSC results can be more directly compared between young and adult mice, thereby highlighting one of the two developmental profiles we discovered in this study. It also avoids jumping back and forth in the text. 3) These are difficult experiments and the authors should acknowledge somewhere the limitations of their approach, ideally in the Discussion. A) Voltage clamp in vivo may be incomplete due to space clamp issues. This could even have larger effects on intracortical than thalamic input since the latter are likely to occur more proximally. We have added a new paragraph to acknowledge this issue in the Discussion. B) The authors should also acknowledge the limitation of linear subtraction as a method for estimating the cortical contribution. The interaction between thalamic and cortical input is likely not linear. We have added a new paragraph to acknowledge this issue in the Discussion. C) The discussion of possible early matching mechanisms in the Discussion is fine, but the statement \"Amazingly, certain degrees of binocular correlation may indeed exist in mice even at the level of retinal waves, possibly via direct retino-retinal connections, retinopetal modulations, or light-driven activity in the intrinsic photosensitive retinal ganglion cells through the closed eye lids\" implies that all such correlation must have occurred before eye opening. However, the experience between eye opening and recording may also contribute. We agree with this comment and added the following sentence before discussing experience-independent mechanisms. 
\u201cIt is possible that the visual experience between eye opening (P12-14) and our recording (P15-21) may contribute to this early thalamic binocular correspondence\u201d. D) Finally, the following conclusion is not warranted, although it could be stated as a speculation: \"feed-forward thalamocortical pathway plays an important role in initiating and guiding the functional rewiring and refinement of cortical circuits in visual system development.\" Correlation does not equal causation. We agree with this comment and have changed our writing to reflect this, from \u201c\u2026plays an important role\u2026\u201d to \u201c\u2026may play an important role\u2026\u201d (in the Abstract) and \u201cthe feed-forward thalamocortical pathway may play an important role in the development of visual circuits, possibly through initiating and guiding the functional rewiring and refinement of cortical circuits in visual cortex\u201d (in the last paragraph of the Discussion). 4) The scale factor was calculated as the maximal thalamic EPSC amplitude of the tuning curve divided by the maximal total EPSC amplitude. Since the preferred orientation often differed between the thalamic EPSC and the total EPSC, especially for young cells, the scale factor was often calculated from two responses evoked by different stimuli. To have a more accurate measurement of the scale factor, the authors can select 4 or 5 of the best responses of the tuning curves for the thalamic EPSC and total EPSC (evoked by the same stimuli), do some averaging, and then calculate the ratio. We have changed the way of calculating the scale factor following this suggestion. Instead of using the maximum amplitude or choosing 4\u20135 best responses, we now use the entire tuning curve. We averaged the responses across all directions for the thalamic EPSC and total EPSC and then used their mean values to calculate the scale factor. The results are updated for both the figures and the text."}
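The revised scale-factor calculation described in the response to point 4 (averaging each tuning curve across all 12 directions before taking the ratio, rather than dividing the two possibly differently tuned maxima) might be sketched as follows. The EPSC values are invented for illustration:

```python
import numpy as np

# Hypothetical peak-EPSC tuning curves (pA) for one cell across the
# 12 stimulus directions; values are illustrative only.
thalamic_epsc = np.array([20, 30, 55, 30, 20, 15, 22, 28, 50, 30, 20, 16.0])
total_epsc    = np.array([60, 90, 160, 95, 55, 40, 65, 85, 150, 90, 55, 45.0])

# Original approach: ratio of the two maxima, which may come from
# different stimulus directions.
scale_factor_max = thalamic_epsc.max() / total_epsc.max()

# Revised approach: average each tuning curve across all directions
# first, then take the ratio of the means.
scale_factor = thalamic_epsc.mean() / total_epsc.mean()

# Intracortical component estimated by point-by-point (linear) subtraction.
intracortical_epsc = total_epsc - thalamic_epsc
```

The design choice is that the mean-based ratio uses every stimulus condition of both curves, so it no longer depends on whether the thalamic and total tuning curves peak at the same direction.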
+{"text": "Globally, about 1.5 billion people are infected with at least one species of soil-transmitted helminth (STH). Soil is a critical environmental reservoir of STH, yet there is no standard method for detecting STH eggs in soil. We developed a field method for enumerating STH eggs in soil and tested the method in Bangladesh and Kenya. The US Environmental Protection Agency (EPA) method for enumerating Ascaris eggs in biosolids was modified through a series of recovery efficiency experiments; we seeded soil samples with a known number of Ascaris suum eggs and assessed the effect of protocol modifications on egg recovery. We found the use of 1% 7X as a surfactant compared to 0.1% Tween 80 significantly improved recovery efficiency, while other protocol modifications\u2014including different agitation and flotation methods\u2014did not have a significant impact. Soil texture affected the egg recovery efficiency; sandy samples resulted in higher recovery compared to loamy samples processed using the same method. We documented a recovery efficiency of 73% for the final improved method using loamy soil in the lab. To field test the improved method, we processed soil samples from 100 households in Bangladesh and 100 households in Kenya from June to November 2015. The prevalence of any STH egg in soil was 78% in Bangladesh and 37% in Kenya. The median concentration of STH eggs in soil in positive samples was 0.59 eggs/g dry soil in Bangladesh and 0.15 eggs/g dry soil in Kenya. The prevalence of STH eggs in soil was significantly higher in Bangladesh than Kenya, as was the concentration. This new method allows for detecting STH eggs in soil in low-resource settings and could be used for standardizing soil STH detection globally. Intestinal worm infections are common in populations living in tropical, low-income countries. People primarily become infected when they consume intestinal worm eggs from contaminated water, hands, and food. 
Intestinal worm eggs are transmitted from infected people and spread through the environment, particularly via soil. There is no standard laboratory method for counting intestinal worm eggs in soil, which is a major barrier to comprehensive research on the transmission of infection. We tested different laboratory protocol steps to extract soil-transmitted helminth eggs, one type of intestinal worm, from soil and propose a new, fast, and efficient field method. We tested the method in Kenya and Bangladesh and found that soil contamination with helminth eggs was prevalent in both study areas. We propose that environmental contamination be included in discussions about intestinal worm transmission, control, and elimination, especially in areas with low infection prevalence. The method we propose will help researchers assess soil contamination, which can be used to examine the effectiveness of intestinal worm transmission control measures. Ascaris and Trichuris infection is spread via an environmentally-mediated fecal-oral transmission route, through ingestion of a larvated egg that has incubated in soil. Hookworm infection is spread by larvae, hatched from eggs after incubation in the soil, penetrating the skin. One hookworm species, Ancylostoma duodenale, can also be transmitted by ingestion of larvae. The study was approved by institutional review boards (including protocol 25863 [Bangladesh]), the Kenya Medical Research Institute (KEMRI) Ethical Review Committee (SSC Number 2271), and the International Center for Diarrheal Diseases Research, Bangladesh Ethical Review Committee (PR-11063). The US EPA method enumerates Ascaris eggs in wastewater, sludge, and compost; Ascaris eggs are counted using microscopy. The main benefits of this method are that it is a standard method for biosolids in the US and the recovery efficiency is high. For example, a recent study found the efficiency of the method for recovering helminth eggs from composted feces and sugarcane husk was 71.6%. We based our initial method on the US EPA method. 
We performed experiments at Stanford University to determine the Ascaris egg recovery efficiency of protocol variations by analyzing seeded samples. We collected organic loam and sand from two different locations at Stanford University to use in the experiments. Ascaris suum eggs were purchased from Excelsior Sentinel, Inc. Eggs were collected from intestinal contents of infected pigs and preserved in 0.1 N sulfuric acid. Ascaris suum eggs have been used in other laboratory experiments as a proxy for Ascaris lumbricoides eggs because they have a lower health risk to humans, they are easily procured, and they are morphologically identical to Ascaris lumbricoides. We focused our experiments on three different aspects of the protocol, and a total of eight processing steps associated with these, to assess their impact on egg recovery: (1) egg detachment, (2) concentrating the sample, and (3) egg flotation. We also tested different soil textures, loam and sand, to determine the effect of soil type on the recovery efficiency. To evaluate the effect of egg detachment on recovery efficiency, we tested two surfactants; 1% 7X has been reported to recover more Ascaris from biosolids and hands in hand rinse samples than 0.1% Tween 80. Surfactants have also been reported to affect the recovery of Ascaris eggs more strongly for mammilated eggs than for decorticated eggs, so we used mammilated Ascaris in our recovery efficiency experiments. We tested a 400-mesh and a 500-mesh sieve (25 \u03bcm) for the final sieving step. Trichuris trichiura eggs range in size from approximately 57 x 26 \u03bcm to 78 x 30 \u03bcm. Although we did not assess Trichuris egg recovery in these experiments, we evaluated whether using a 500-mesh sieve would clutter the Sedgwick-Rafter cell with fine soil particles and make microscopic enumeration difficult. We also tested several settling volumes and times for concentrating the sample. For egg flotation, we added zinc sulfate solution, vortexed for 30 seconds, and added additional zinc sulfate solution up to the 40 mL line. We centrifuged at 1000 x g for 5 minutes and then poured the supernatant through a fine stainless steel 500-mesh sieve. 
We rinsed the contents of the sieve into a clean 50 mL centrifuge tube using distilled water. Then, we repeated this flotation step a second time using a clean sieve. We centrifuged the solution at 1000 x g for 5 minutes to settle the helminth eggs. We removed the supernatant using a clean 25 mL serological pipette until only 1 mL of solution remained. We transferred the final solution to a Sedgwick-Rafter slide using a pipettor. Based on the results from our recovery efficiency experiments, we developed an improved protocol and field-tested the method in Bangladesh and Kenya. We added a 15 g aliquot of soil to a 50 mL centrifuge tube to process soil samples for enumeration of STH eggs with the improved method. Then, we added surfactant, 1% 7X, to each sample, bringing the volume up to the 35 mL line, and vigorously shook the samples by hand for two minutes. We rinsed the sides and cap of the tube with 1% 7X, added 1% 7X to the 45 mL line on the centrifuge tube, and left the samples to soak overnight. The next morning, we hand shook each sample for one minute, vortexed for 15 seconds, and poured the sample through a stainless steel size 50-mesh sieve. We rinsed the sample through the sieve with 1% 7X and rinsed the bottom of the sieve with 1% 7X to capture any eggs stuck to the sieve. The settling volume was around 150 mL. We left the samples to settle for 30 minutes and then vacuum aspirated the supernatant. We poured the remaining sample into two 50-mL centrifuge tubes, filled the tubes to the 40 mL line with 1% 7X, and centrifuged at 1000 x g for 10 minutes. We gently poured off the supernatant without disturbing the soil pellet, added 5 mL of zinc sulfate solution, and carried out the flotation steps described above; the final slide was examined under 10x magnification to count eggs. To determine moisture content, we dried an aliquot of each soil sample. Moisture content can vary widely based on local conditions, so it is necessary to report concentrations in terms of mass of dry soil. 
A 15 g aliquot of soil (wet weight) was placed on foil and oven dried overnight for at least 16 hours at 110\u00b0C in a gravity convection oven. Samples cooled for 10 minutes on a countertop before weighing to determine dry weight. We assessed the soil texture of our samples because soil texture may influence STH egg viability and recovery efficiency. We performed soil texture characterization on aliquots of oven-dried soil rather than fresh soil samples, to reduce the risk of pathogen exposure. After determining the dry weight of soil from the oven-dried sample, we added water to the sample until the soil was just saturated but not glistening. We mixed the soil and water well and formed it into a small, thin wire. Laboratory staff identified Ascaris, Trichuris, and hookworm eggs based on the list of visual characteristics we developed. We field tested the improved protocol in Bangladesh from June to August 2015, during the rainy season, and in Kenya from August to November 2015, during the dry season and the beginning of the short rainy season. Field staff obtained written consent from all study participants on a prior visit, as well as oral consent on the day of soil collection. We collected soil samples from 100 rural households in Kakamega in western Kenya and 100 rural households in Mymensingh, Tangail, and Kishoreganj districts in central Bangladesh. We selected households based on their proximity to our field laboratories and their participation in an ongoing intervention trial. We collected approximately 50 g (wet weight) of soil from each household. Field staff transported samples at room temperature to our field laboratory and stored them in a 4\u00b0C refrigerator before laboratory processing began. Field staff in Bangladesh followed the same sample collection protocol as in Kenya, except that samples were transported on ice to the field laboratory. Laboratory staff processed all samples using the improved protocol. 
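The dry-weight normalization described above (oven-drying an aliquot to obtain the moisture content, then expressing egg counts per gram of dry soil) can be sketched as follows. All masses and counts here are invented for illustration:

```python
# Convert a raw egg count into eggs per gram of dry soil using the
# oven-drying moisture determination described in the protocol.
aliquot_wet_g = 15.0   # wet weight of the aliquot placed in the oven
aliquot_dry_g = 12.0   # weight of the same aliquot after >= 16 h at 110 C

dry_fraction = aliquot_dry_g / aliquot_wet_g      # g dry soil per g wet soil

processed_wet_g = 15.0                            # wet weight processed for eggs
processed_dry_g = processed_wet_g * dry_fraction  # dry-mass equivalent

eggs_counted = 9                                  # eggs seen on the slide
eggs_per_g_dry = eggs_counted / processed_dry_g
```

Reporting per gram of dry soil makes counts comparable across samples whose field moisture content differs.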
In Kenya, we processed 7% of samples with a laboratory replicate. We counted 44 samples both pre- and post- incubation to determine the percentage of viable eggs and to act as quality assurance and quality control. We also took photos of the first egg seen in a sample for each type of egg for additional review. To ensure consistency across countries, both laboratory teams shared and reviewed each other\u2019s egg photos. In Bangladesh, we made a few adaptations to the lab protocol used in Kenya. First, we increased the settling time from 30 minutes to at least 1 hour. Second, we oven dried 5 grams of soil instead of 15 grams to determine the moisture content. Third, we incubated all samples immediately after processing, instead of counting eggs pre- and post-incubation, due to logistical constraints. Finally, we did not characterize the soil texture of any of the Bangladeshi samples. In Bangladesh, we processed 9% of the samples with a laboratory replicate to assess the variability of the method, and two lab technicians counted 17% of samples in duplicate to assess inter-counter variability for quality assurance and quality control.We calculated recovery efficiency by dividing the final egg count by the initial egg count. We analyzed the results of the recovery efficiency experiments using two-sided t-tests to compare the experiments that had just one variation in the protocol ; we compared recovery efficiencies from three experimental triplicates to another three experimental triplicates. We assessed the difference in the prevalence (the proportion of positive samples) of any STH in soil in Kenya and Bangladesh using a chi-square test and the difference in the total concentration of STH eggs per dry gram of soil using a Mann-Whitney test. Any p-value less than 0.05 was considered to be statistically significant. 
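The two field comparisons described above (a chi-square test on the proportion of positive samples and a rank-based Mann-Whitney comparison of egg concentrations) can be sketched in plain Python. The 78/100 and 37/100 prevalence counts come from the field results; the concentration lists are invented, and the hand-rolled statistics are simplified stand-ins for the STATA procedures the authors used:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Prevalence of any STH egg in soil: 78/100 positive in Bangladesh,
# 37/100 positive in Kenya (positive, negative per country).
chi2 = chi_square_2x2(78, 22, 37, 63)
# With df = 1, chi2 > 3.84 corresponds to p < 0.05.

def mann_whitney_u(x, y):
    """U statistic for sample x versus y, using midranks for ties."""
    combined = x + y
    rank_sum_x = 0.0
    for v in x:
        less = sum(1 for w in combined if w < v)
        equal = sum(1 for w in combined if w == v)
        rank_sum_x += less + (equal + 1) / 2.0   # midrank of v
    return rank_sum_x - len(x) * (len(x) + 1) / 2.0

# Invented eggs/g dry soil concentrations for positive samples.
bangladesh = [0.59, 0.80, 0.40, 1.20, 0.30]
kenya = [0.15, 0.10, 0.20, 0.05, 0.25]
u_stat = mann_whitney_u(bangladesh, kenya)
```

In this toy example every Bangladeshi value exceeds every Kenyan value, so U equals its maximum n1*n2 = 25, the most extreme separation the test can report.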
We analyzed the recovery efficiency experiment results using Excel 2013 and the field results using STATA version 13. We compared the use of 0.1% Tween 80 and 1% 7X in experiments 1 and 2; 1% 7X significantly improved Ascaris recovery by 16.2 percentage points over 0.1% Tween 80 (vs 1A). This was the only change to the protocol that resulted in a statistically significant change in recovery efficiency; however, we made several other adaptations to the protocol based on the magnitude of the difference in recovery efficiencies and time savings. The impact of soil texture on recovery efficiency was assessed in experiments 2 and 3; the recovery efficiency was higher by 14.6 percentage points when using sandy soil compared to loamy soil. The recovery efficiency using a 400-mesh and a 500-mesh sieve was similar (experiments 2 and 4). In experiments 4 and 5, we compared using two 5-minute flotation steps and one 10-minute flotation step; using two 5-minute flotation steps resulted in an increase of 9.5 percentage points in recovery efficiency, so we adopted this flotation protocol. We compared the protocol without a stir-plate mixing step to the protocol with it (experiments 5 and 6), and although we found a slight decrease in recovery efficiency of 8.1 percentage points, we decided to remove this step to save time. Comparing experiments 6 and 7, there was no loss of recovery efficiency when we reduced the settling volume and time, so we adopted these changes. We compared 1.25 specific gravity with 1.2 specific gravity as a flotation solution and found a 10.2 percentage point increase in recovery efficiency when two 5-minute flotation steps were used (experiments 7 and 10); we therefore decided to use a specific gravity of 1.25. The final, improved method had a significantly higher recovery efficiency (72.7%) than the initial method (37.2%) (experiments 10 and 1). The differences between the initial and improved methods are detailed in the supporting information. The prevalence of any STH eggs in soil from our study area in Bangladesh was 78%. Sixty-s percent of samples contained Ascaris and 36% of samples contained Trichuris, while hookworm eggs were not detected. Ascaris was most common (22%), followed by Trichuris (21%). Ascaris eggs were larvated in 34.8% of cases when we isolated them from soil, but few Trichuris eggs were larvated (6.5%) prior to incubation. Most Ascaris eggs were viable (99.3%) and the majority of Trichuris eggs were viable post-incubation (71.6%). The median concentration of STH eggs in positive samples was 0.59 eggs/g dry soil in Bangladesh and 0.15 eggs/g dry soil in Kenya. Bangladeshi soil also had a significantly higher concentration of STH eggs as compared to Kenyan soil. Slides counted twice by different enumerators had consistent counts, with a mean 9% difference in counts and an overall difference of about 1 egg per sample. The variation of egg counts in laboratory replicates was about 4 eggs/sample. This paper presents an improved method for enumerating STH eggs in soil that is appropriate for use in resource-constrained settings. This field method is relatively fast; approximately 20 samples can be processed in a day and a half. In comparison, the original US EPA method takes at least 3 days to complete the full protocol on 10 samples. The method also has a higher recovery efficiency of 73% compared to previously published field methods. A recent review of previous methods for detecting STH eggs in environmental media demonstrated a median method recovery efficiency of 25%. Our recovery experiments in the lab identified one protocol step that affected recovery efficiency. We found that using 1% 7X instead of 0.1% Tween 80 significantly increased the egg recovery efficiency of the method. This result is consistent with a published method for enumerating Ascaris in hand rinse samples. 
No other protocol modification had a statistically significant effect on recovery. In our field tests, we found that the study area in Bangladesh had a higher prevalence and concentration of STH eggs in soil than the study area in Kenya. One factor that could affect the egg prevalence in soil is the infection prevalence of STH in the study area. STH infection is widely geographically variable, so infection prevalence likely differed between our two study areas. There are several limitations of our recovery efficiency experiments and field tests. We only tested one STH egg concentration (approximately 67 eggs/g wet soil) during the recovery efficiency experiments, and the concentration that was used to seed the samples was higher than what we typically found in soil in Kenya and Bangladesh. Recovery efficiency has been shown to change with the initial concentration of eggs in soil; one study found that recovery efficiency was inversely proportional to the egg concentration. Although they did not test concentrations as low as those that we would expect to see in naturally contaminated samples, this may indicate that the recovery efficiency of our method could be higher than 73% for samples with a low concentration of STH eggs. We also did not seed samples with Trichuris or hookworm eggs as, unlike Ascaris, these eggs cannot be easily procured in the United States; the recovery efficiency of the protocol may be different for these STH eggs than the value we report for Ascaris. We did not detect hookworm in any of the soil samples. It is unclear whether hookworm was not present in our study areas or whether the protocol is not appropriate for detecting hookworm. As hookworm larvae hatch from eggs in the soil rather than in the human large intestine, we could expect to detect both eggs and larvae in the soil. Two studies that used sieving, centrifugation, and flotation steps similar to our protocol recovered hookworm eggs from seeded soil samples. 
It should be noted that STH eggs from humans can be morphologically similar or identical to STH eggs from animals. For example, Ascaris lumbricoides eggs from humans are morphologically identical to Ascaris suum eggs from pigs. Also, Trichuris trichiura eggs from humans appear like Trichuris suis eggs from pigs. Trichuris vulpis eggs from dogs are larger than other Trichuris eggs, but there can be some overlap in the size ranges. We identified eggs morphologically and therefore could not determine their host of origin. There is a need for assays for Ascaris and Trichuris in soil that can differentiate between eggs from human and animal hosts. In particular, inhibition from compounds in soil needs to be addressed before these assays can be deployed. Our cleaning and concentration method could be used in combination with molecular methods to reduce inhibition and increase the volume of processed soil before performing DNA extraction and PCR for detection of STH eggs. The method presented here can be used to examine STH soil contamination to better understand STH transmission. It is relatively fast and efficient compared to other methods, making it more feasible for high-throughput processing in the field. A standard method for enumerating STH in soil will allow comparison of the prevalence and risk factors of soil contamination with STH across different settings, e.g. household sanitation practices, community-level practices, and climatic and environmental effects. Soil contamination measurements can also be an effective tool for evaluating interventions aimed at reducing STH transmission. S1 Dataset (XLSX) S2 Dataset (XLSX) S3 Dataset (XLSX)"}
+{"text": "Green lacewings (Chrysopidae) are important predators of many soft-bodied pest insects, for instance aphids. Previous studies reported attraction of the Chrysoperla carnea species-complex to a ternary floral bait. The larvae of these lacewings are important generalist predators in agroecosystems; however, adults are non-predatory and feed on pollen, nectar or honeydew. Squalene, a compound of plant origin, was previously reported to be attractive to the nearctic Chrysopa nigricornis. In the current study, squalene was tested alone and in combination with the ternary bait in field experiments in Hungary. In our experiments, traps baited with squalene attracted predatory males of Chrysopa formosa. Traps baited with squalene and the ternary floral bait attracted adults of both C. formosa and C. carnea complex lacewings. To our knowledge, this is the first report of a bait combination attractive to both Chrysoperla and Chrysopa species. This finding is of special interest considering the remarkably different feeding habits of the adults of these lacewings. Potential perspectives in biological control are discussed. Among green lacewings, Chrysoperla spp. are of special importance in agroecosystems3. Previous studies reported attraction of Chrysoperla spp. to compounds of plant origin (e.g.5). It was also found that a ternary blend, consisting of phenylacetaldehyde, methyl salicylate and acetic acid, was more attractive to the Chrysoperla carnea species-complex (i.e. C. carnea complex) than previously published attractants6. Apart from attraction, the ternary blend showed strong behavioural activity: females laid eggs in the vicinity of the baits9. This is of special importance, since larvae of these lacewings are voracious, generalist predators3. 
Nevertheless, adults of these lacewings are not predatory, but feed on pollen, nectar and honeydew, and thus are termed palyno-glycophagous1. On the other hand, both adults and larvae of Chrysopa spp. are predatory1; thus, simultaneous attraction of Chrysoperla and Chrysopa species could be of potential significance in biological control. However, to our knowledge no such stimulus combination is known. From this viewpoint, attractants for Chrysopa species and their interactions with Chrysoperla attractants are of special interest. Aphid sex pheromone components were found to attract males of some Chrysopa species (e.g.13). Nevertheless, combination of the ternary floral bait and aphid sex pheromone components resulted in markedly decreased attraction of Chrysoperla spp. lacewings to the otherwise highly attractive ternary floral bait14. Jones et al.16 found attraction of a nearctic Chrysopa species, C. nigricornis Burmeister, 1839, to traps baited with squalene, a compound of plant origin. The aim of the current study was to test potential attraction of green lacewing species to squalene in Central Europe, with respect to potential interactions with the ternary floral bait. All synthetic compounds were obtained from Sigma-Aldrich Kft. Two different formulations were used: the polyethylene bag (PE bag) and the polyethylene vial formulations, both of which have been found to be effective in previous experiments on green lacewings12. For the PE bag formulation, the compound was put into a polyethylene bag (ca 1.0\u2009\u00d7\u20091.5\u2009cm) made of 0.02\u2009mm linear polyethylene foil, and the dispensers were heat sealed. For the PE vial formulation, compounds were loaded into 0.7\u2009ml polyethylene vials with lids, and the lids of the dispensers were closed. The loading of each compound was kept at 100\u2009mg. The ternary floral baits were formulated into PE bag dispensers. 
The baits comprised three components, i.e. phenylacetaldehyde, acetic acid and methyl salicylate, in a 1:1:1 ratio6. For Exp. 1, 100\u2009mg of squalene was formulated into PE bag dispensers. For Exp. 2, loads of 1, 10 or 100\u2009mg of squalene, and for Exp. 3, 100\u2009mg of squalene, were formulated into PE bag dispensers. For Exp. 4, 100\u2009mg of squalene was formulated into either PE bag or PE vial dispensers. Both PE bag and PE vial dispensers were attached to 8\u2009\u00d7\u20091\u2009cm plastic handles for easy handling when assembling the traps. For storage, baits were wrapped singly in pieces of aluminium foil and stored at \u221218\u2009\u00b0C until used. In the field, baits were changed at 3 to 4 week intervals, as previous experience showed that they do not lose their attractiveness during this period. Field experiments were performed from 2013 to 2015 in a mixed orchard in Hungary at Hal\u00e1sztelek. For the experiments, CSALOMON\u00ae VARL\u2009+\u2009funnel traps were used, which proved to be suitable for catching green lacewings14. In all field experiments one replicate of each treatment was incorporated into a block, so that individual treatments were 5\u20138\u2009m apart. Within each block the arrangement of treatments was randomized. Distance between blocks was 15\u201320 meters. Traps were suspended in the canopy of trees at a height of ca 1.5\u20131.8\u2009m. As a rule, traps were checked twice weekly. Description of experiments: Experiment 1: the objective was to test the potential attraction of green lacewings to squalene. The following treatments were included: ternary floral bait alone, squalene bait alone, and unbaited traps. The experiment was run with 6 blocks from June 5 to July 29, 2013. Experiment 2: the objective was to test attraction of C. formosa to different doses of squalene. The following treatments were included: traps baited with either 1, 10 or 100\u2009mg of squalene, and unbaited traps. 
Experiment 2 was run with 7 blocks from May 29 to August 7, 2014. Experiment 3: the objective was to test for potential interactions between the ternary floral and squalene baits. The following treatments were included: ternary floral bait alone, squalene bait alone, both ternary floral and squalene baits, and unbaited traps. The experiment was run with 7 blocks from May 29 to August 7, 2014. Experiment 4: in this experiment different formulations of squalene were tested. The following treatments were included: squalene in PE bag formulation, squalene in PE vial formulation, and unbaited traps. The experiment was run with 5 blocks from June 4 to July 27, 2015. The green lacewings caught were taken to the laboratory and determined to species level according to relevant taxonomic works on Chrysopidae20,21. For all experiments, catches per trap were summed and data were tested for normality by the Shapiro-Wilk test. Since none of the experimental data were found to be normally distributed, catches were analyzed by the Kruskal-Wallis test. The level of significance was set at p\u2009=\u20090.05. If significant differences were found, differences between treatments were evaluated by pairwise Wilcoxon rank sum tests with Bonferroni correction. Dose-response correlations were tested by Spearman\u2019s rank correlation test. All statistical procedures were conducted using the software R. In the experiments, Chrysopa formosa Brauer, 1850 and Chrysoperla spp. were caught in sufficient numbers for further analysis. Among Chrysoperla spp., Chrysoperla carnea s. str., Chrysoperla lucasina and Chrysoperla pallida Henry, Brooks, Duelli & Johnson, 2002 were caught; however, since no differences were observed between these species in the analyses, these were treated as C. carnea complex. Traps baited with squalene attracted more C. formosa than other treatments. 
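The post-hoc step described above (pairwise Wilcoxon rank sum tests followed by Bonferroni correction at p = 0.05) can be sketched as follows. The study's analyses were done in R; this Python sketch with made-up p-values illustrates only the correction itself:

```python
# Bonferroni correction for pairwise post-hoc tests, as described above:
# each raw p-value is multiplied by the number of comparisons m, capped at 1.
from itertools import combinations

def bonferroni(pvalues):
    """Return Bonferroni-adjusted p-values for a list of raw p-values."""
    m = len(pvalues)
    return [min(1.0, p * m) for p in pvalues]

# Hypothetical raw p-values from pairwise Wilcoxon rank sum tests between
# the four treatments of Experiment 2 (6 pairwise comparisons in total).
treatments = ["1 mg", "10 mg", "100 mg", "unbaited"]
pairs = list(combinations(treatments, 2))
raw_p = [0.002, 0.010, 0.004, 0.300, 0.048, 0.700]

adjusted = bonferroni(raw_p)
significant = [pair for pair, p in zip(pairs, adjusted) if p < 0.05]
```

Note how a raw p of 0.048, nominally below 0.05, is no longer significant after adjustment for six comparisons.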
Squalene was found to be produced in elevated quantities in response to leafminer damage in apple leaves22; however, it was not attractive in olfactometer assays22. Furthermore, squalene has also been reported as a semiochemical for ticks23. In our experiments the vast majority of the attracted C. formosa were males; female catches were negligible. This is in accordance with the findings of Jones et al.16 on the nearctic C. nigricornis. We cannot offer an explanation offhand on this highly interesting issue: since squalene is a compound of plant origin emitted upon pest damage22, sex-specific attraction of males is surprising. Interestingly, aphid sex pheromones were also found to attract almost exclusively males of Chrysopa spp. (e.g.11). Aldrich et al.24 found that aphid sex pheromone components could act as precursors in pheromone synthesis of green lacewings; however, these compounds are synthesized by sexual forms of aphids25, which are present late in the season, when green lacewings are in general inactive, mostly as preimaginal forms preparing to overwinter26. Further studies may shed more light on the ecological background of the sex-specific attraction to these semiochemicals. Attraction of male Chrysopa has previously been reported, for instance to aphid sex pheromone components (e.g.14) and to iridodial, a compound identified from males of two nearctic Chrysopa species13. Nevertheless, in view of biological control, semiochemicals attracting Chrysopa females would be highly advantageous. Chauhan et al.27 reported higher abundance of Chrysopa oculata Say, 1839 males and females in a 5 meter radius area around iridodial baits, as compared to unbaited control. At the same time, the authors noted that in another experiment iridodial-baited traps caught almost exclusively males27. These results suggest iridodial to be a potent attractant for C. oculata males, but certainly with a different mode of action (e.g. arrestant) for females28. 
Furthermore, it should be considered that males aggregating around the baits possibly also affected aggregation of females, either by chemical or by vibrational signals, which are also known to be important in communication of green lacewings29. Chauhan et al.27 hypothesised that male C. oculata form leks, which would be in accordance with higher density of males resulting in increased local abundance of females. The ternary floral bait attracts both sexes of the C. carnea complex (e.g.6); furthermore, females were found to lay their eggs in the vicinity of the baits9. Since predatory C. carnea complex larvae have more limited mobility, they will search for prey in the vicinity, which offers potential for targeted biological control. On the other hand, adults of the C. carnea complex are not predatory1; thus, in this respect, simultaneous attraction of predatory adults of C. formosa could offer new perspectives for improved biological control. In previous field experiments conducted in Hungary, the aphid sex pheromone components nepetalactol and nepetalactone attracted males of C. formosa; however, in combination with the ternary floral bait, attraction of C. carnea complex lacewings was considerably decreased14. Methyl salicylate, a herbivory-induced plant volatile, was found to increase catches of Chrysopa males in combination with iridodial (e.g.31). Methyl salicylate is also an important component of the ternary floral bait, which attracts Chrysoperla spp.6. Nevertheless, in the current study, combination of squalene and the ternary floral bait did not result in increased catches of C. formosa as compared to traps baited with squalene only. At the same time, the combination attracted C. carnea complex and C. formosa lacewings simultaneously, without significant decrease of catches of either taxon as compared to the individual stimuli. To our knowledge this is the first lure combination that is attractive to both Chrysoperla and Chrysopa lacewings. 
In conclusion, in the course of the current study, traps baited with both the ternary floral bait and squalene together attracted both C. formosa and C. carnea complex lacewings. Thus, this stimulus combination is attractive to lacewing adults with remarkably different feeding habits. This finding may be of benefit in biological control as well, since simultaneous attraction of these taxa with different life history traits may offer more efficient control of harmful insects through predation by both larvae (Chrysoperla spp.) and adults of the lacewings attracted."}
+{"text": "Non-alcoholic fatty liver disease (NAFLD), a global health problem, has clinical manifestations ranging from simple non-alcoholic fatty liver (NAFL) to non-alcoholic steatohepatitis (NASH), cirrhosis, and cancer. The role of different types of fatty acids in driving the early progression of NAFL to NASH is not understood. Lipid overload causing lipotoxicity and inflammation has been considered an essential pathogenic factor. To correlate lipid profiles with cellular lipotoxicity, we utilized palmitic acid (C16:0)- and, for the first time, palmitoleic acid (C16:1)-induced lipid overload HepG2 cell models coupled with lipidomic technology involving labeling with stable isotopes. C16:0 induced inflammation and cell death, whereas C16:1 induced significant lipid droplet accumulation. Moreover, inhibition of de novo sphingolipid synthesis by myriocin (Myr) aggravated C16:0-induced lipoapoptosis. Lipid profiles differed between C16:0- and C16:1-treated cells. Stable isotope-labeled lipidomics elucidates the roles of specific fatty acids that affect lipid metabolism and cause lipotoxicity or lipid droplet formation. It indicates not only that saturation or monounsaturation of fatty acids plays a role in hepatic lipotoxicity, but also that Myr inhibition exacerbates lipoapoptosis through an indirect ceramide pathway. Using the techniques presented in this study, we can potentially investigate the mechanism of lipid metabolism and the heterogeneous development of NAFLD. Cellular lipids are a heterogeneous category of compounds. Based on the diversity of chemical structures and biosynthetic pathways, lipids are mainly divided into eight categories: fatty acyls, glycerolipids, glycerophospholipids, sphingolipids, sterol lipids, prenol lipids, saccharolipids, and polyketides. Non-alcoholic fatty liver disease is characterized by lipid accumulation in the liver without excessive alcohol consumption. 
Given that hepatic lipotoxicity is implicated in dysfunction of hepatocytes owing to lipid overload, in vitro exposure of liver cells to high concentrations of specific free fatty acids (FFAs) results in lipid overload-aggravated inflammatory responses such as steatohepatitis, non-alcoholic steatohepatitis (NASH), and lipotoxicity. Lipidomics is a branch of metabolomics that focuses on the comprehensive identification and quantification of all lipids in a biological system. Stable isotope-labeling approaches are capable of providing dynamic information on lipid metabolism. Moreover, generation of a comprehensive pattern covering as many stable isotope-labeled lipid species as possible is a prerequisite to identify unexpected alterations in lipid metabolism. Recently, several studies have revealed that the liquid chromatography\u2013mass spectrometry (LC\u2013MS) technique and stable isotopes can be used to investigate alterations in lipid biosynthesis. Various categories of lipids such as phospholipids and triacylglycerols (TGs) could be measured by isotope incorporation after treatment, thereby augmenting our knowledge of lipid metabolism. In this study, HepG2 cells were treated with 0.3 mM 13C16-palmitic acid or 0.3 mM 13C16-palmitoleic acid for 4, 8, and 16 h. By utilizing stable isotope-labeled tracers to distinguish de novo lipogenesis from pre-existing lipid categories, it is possible to comprehend the regulation of lipid metabolism and transport of lipids in the palmitic acid- and palmitoleic acid-treated metabolic perturbation; this technique also supports measurement of the dynamic changes via the lipidomics approach. Significant incorporation of 13C16-palmitic acid over time was observed for TG, phosphatidylinositol (PI), dihydroceramide (dHCer), Cer, sphingomyelin (SM), phosphatidylcholine (PC), phosphatidylethanolamine (PE), lysoPC, lysophosphatidic acid (LysoPA), lysoPE and palmitoylcarnitine. Significant incorporation of 13C16-palmitoleic acid over time was observed for TG, DG, PC, PI, and PE. 
The results demonstrated robust isotope-labeled TG accumulation in the palmitoleic acid-treated group, whereas major dynamic changes in lipid metabolism were observed for isotope-labeled DG and isotope-labeled Cer in the palmitic acid-treated group. The specific species PC (32:2) and PC (32:0) were significantly increased in the palmitoleic acid- and palmitic acid-treated groups, respectively. A dynamic picture of lipid metabolism is thus generated, reflecting a lipid pattern consisting of all the detected lipids ([+13C]). The pivotal purposes of this study were to understand the effects of saturation or monounsaturation of fatty acids on lipid overload-induced metabolic changes in HepG2 cells using stable isotope-labeled tracers. Levels of pro-inflammatory cytokines such as TNF-\u03b1 and IL-8 are usually higher in NAFLD patients and thus promote apoptosis and recruitment of additional inflammatory cells to the liver, thereby deteriorating hepatic inflammation. BODIPY 493/503 staining revealed significant lipid droplet accumulation in palmitoleic acid-treated HepG2 cells, together with elevated levels of isotope-labeled triglycerides. The results suggest that triglyceride accumulation protected against unsaturated fatty acid-induced lipotoxicity. Moreover, a recent paper reported that acylceramide is metabolized from Cer and sequestered in lipid droplets through DGAT2. To shed light on and gain broader insights into the complex dynamics of lipid metabolism in HepG2 cells by investigating the effect of saturation or monounsaturation of FFAs on lipid perturbation and injury of hepatocytes, we utilized stable isotope-labeled FFAs. For instance, some metabolites such as FA-carnitines and lyso-phospholipids could not be attributed to the FFA-overloading treatment until isotope-labeled FFA analyses came into practice. 
We observed lipid droplet accumulation in both types of FFA-treated groups and no significant difference in the protein levels of DGAT2, an essential enzyme of TG synthesis. A previous study used 13C5-glutamine to quantify cellular fluxomes and unveiled the metabolites of the glutamine metabolic pathway in fumarate hydratase-deficient cells. In our study, we demonstrated that palmitic acid-treated cells had high conversion rates for palmitoylcarnitine, Cer, phospholipids and DG. 13C16-palmitic acid was purchased from Sigma-Aldrich, and 13C16-palmitoleic acid was purchased from Cambridge Isotope Laboratories. C16, C17 and C24:1 ceramide and C16 and C24:1 dihydroceramide standards were obtained from Avanti Polar Lipids. Myriocin was purchased from Cayman Chemical. All the following experiments are also demonstrated as a workflow chart. Dulbecco\u2019s modified Eagle\u2019s medium (DMEM), fetal bovine serum, penicillin\u2013streptomycin antibiotic solution, and trypsin-ethylenediaminetetraacetic acid (EDTA) were obtained from GIBCO. Boron-dipyrromethene (BODIPY) stains (BODIPY 493/503 (D3922), Hoechst 33342 (H1399) and Alexa Fluor 594 phalloidin (A12382)) were purchased from Invitrogen Molecular Probes. Formaldehyde (37%) and isopropanol were acquired from Sigma-Aldrich. LC\u2013MS grade water, acetonitrile, and methanol were obtained from Fluka. Chloroform, formic acid, and ammonium formate were purchased from Merck. Palmitic acid and palmitoleic acid were obtained from NU-CHEK. The FFA-containing medium was prepared by diluting 150 mM FFA stock solution (dissolved in isopropanol) in DMEM supplemented with 1% fatty acid-free bovine serum albumin (Sigma), followed by incubating overnight at 37 \u00b0C. 
The HepG2 cell line was cultured in DMEM with low glucose (1 g/L), 10% fetal bovine serum and 1% penicillin\u2013streptomycin, and incubated under 5% CO2 at 37 \u00b0C. For sub-culturing, cells were dispensed with Puck\u2019s buffer solution (Invitrogen) and detached by treatment with 0.25% trypsin-EDTA at 37 \u00b0C. For dose-response assessments, 2 \u00d7 105 HepG2 cells were cultured in 12-well plates for 24 h at 37 \u00b0C with different concentrations of FFAs. In addition, 2 \u00d7 105 HepG2 cells were treated with 0.3 mM FFAs for 8, 16, and 24 h in the time-course experiments. Moreover, in the assessment of ceramide inhibition, 2 \u00d7 105 HepG2 cells were cultured for 24 h with 2.5 \u00b5M myriocin (stock solution dissolved in dimethyl sulfoxide) added to DMEM with 0.3 mM or 0.6 mM palmitic acid. At the end of the treatments, cells were fixed with 3.7% formaldehyde solution, followed by addition of Hoechst 33342 (at 1:250 dilution) for 2 h of staining. Fluorescence was detected using an IN Cell Analyzer 1000. After treatment with 0.3 mM FFAs for 16 h, 2 \u00d7 105 HepG2 cells were stained with BODIPY 493/503 to observe the distribution of lipid droplets using an LSM 510 Meta Confocal Microscope. For gene expression analysis, HepG2 cells (106) were treated with 0.3 mM FFAs for 2, 4, and 8 h and subsequently harvested for total RNA isolation. Total RNA was extracted using TRIzol reagent and quantified using NanoDrop. One microgram of RNA was reverse transcribed to cDNA using the RevertAid First Strand cDNA Synthesis Kit. Quantitative real-time RT-PCR (qRT-PCR) was performed with SsoFast EvaGreen Supermix reagent using a CFX96 Touch Real-Time PCR Detection System (Bio-Rad). The thermal cycle program was set as follows: 95 \u00b0C for 3 min, then 39 amplification cycles of 98 \u00b0C for 5 s and 60 \u00b0C for 5 s. 
The following primer pairs were used for qRT-PCR analysis: human interleukin-8, human tumor necrosis factor-alpha (TNF-\u03b1 forward: 5\u2032-CCTGTGAGGAGGACGAAC-3\u2032 and reverse: 5\u2032-CGAAGTGGTGGTCTTGTTG-3\u2032), and the housekeeping actin gene (forward primer: 5\u2032-GAGATGCGTTGTTACAGGAA-3\u2032 and reverse: 5\u2032-GCATTACATAATTTACACGAAAGC-3\u2032). The relative mRNA expression was normalized to the control actin gene. For western blotting, thirty micrograms of whole-cell lysate, quantified using a SpectraMax 340PC384 microplate reader, were separated by electrophoresis. After electrophoresis, proteins were transferred to a polyvinylidene fluoride membrane for 2 h at 4 \u00b0C at 300 mA. Western blots were incubated with anti-diglyceride acyltransferase 2 (DGAT2) antibodies. Immunoblotting was conducted using secondary antibodies (anti-rabbit or anti-mouse), and protein bands were detected using ECL reagent and the ChemiDoc MP Imaging System (Bio-Rad). In brief, total lipids were extracted from 106 HepG2 cells using the modified Folch method with a (v/v/v) solvent mixture. After vortexing and centrifuging the mixture, the supernatant was transferred into a vial for LC\u2013MS analysis. Samples per group were biological triplicates, and each biological triplicate was measured three times for technical triplicates. Mass spectrometry analysis was performed using an ultra-performance liquid chromatography (UPLC) system coupled with time-of-flight mass spectrometry. For metabolite profiling, mobile phase A was an acetonitrile/water mixture (v/v) and mobile phase B was isopropanol/acetonitrile; both solvents contained 10 mM ammonium formate and 0.1% formic acid. The flow rate was 0.4 mL/min, and the solvent gradient was as follows: 0\u20132 min, 40\u201343% solvent B; 2\u20132.1 min, 43\u201350% solvent B; 2.1\u201312 min, 50\u201354% solvent B; 12\u201312.1 min, 54\u201370% solvent B; 12.1\u201318 min, 70\u201399% solvent B; 18\u201318.1 min, 99\u201340% solvent B; 18.1\u201320 min, 40% solvent B. 
Chromatographic separation was performed on an ACQUITY UPLC CSH C18 column (2.1 mm \u00d7 100 mm \u00d7 1.7 \u00b5m). Column temperature was maintained at 55 \u00b0C. The mass spectrometer was operated in positive- and negative-ion ESI mode. The capillary voltage was set at 2700 V in ESI-positive mode and 2000 V in ESI-negative mode, and the cone voltage was set at 35 V. The desolvation gas flow rate was set at 800 L/h, and the cone gas flow was maintained at 25 L/h. The desolvation and source temperatures were set at 400 \u00b0C and 100 \u00b0C, respectively. MS data were collected in centroid mode over a range of 20\u2013990 m/z. Data matrices were determined utilizing the Progenesis QI software V.2.3 from the extracted m/z value, retention time (RT), and ion intensity. OPLS-DA and S-plot analyses were also performed using the Pareto scaling method and SIMCA-P software. Significant variables were selected under the conditions of a p(corr) value of >0.75 or <\u22120.75, a p value of <0.001, and VIP values of >1.0. Metabolites matched against METLIN (http://metlin.scripps.edu/index.php), LIPID MAPS (http://www.lipidmaps.org/), the Human Metabolome Database (HMDB) (http://www.hmdb.ca/), and an in-house database were marked as M* in the isotope-labeled FA sections, and the peak lists were evaluated to pick out the ion doublets, triplets, or quadruplets in MS spectra defined as potential isotopomers, using our in-house Matlab program with an m/z tolerance of 30 ppm (13C \u2212 12C = 1.003355 was used) and an RT shift tolerance of 0.1 min. All filtered isotopomers belonging to one lipid species were then matched against METLIN. Additionally, isotope labeling in conjunction with MS/MS also allows determination of the labeling position. 
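The doublet-picking rule described above (pairing peaks separated by the 13C - 12C mass difference within 30 ppm and within 0.1 min of retention time) can be sketched as follows. The original implementation is an in-house Matlab program; the peak values here are hypothetical:

```python
# Sketch of the isotopomer doublet search described above: pair peaks whose
# m/z difference matches n_labels times the 13C-12C mass difference
# (1.003355 Da), within a 30 ppm tolerance of the lighter peak's m/z, and
# whose retention times differ by <= 0.1 min. Peak values are hypothetical.
C13_SHIFT = 1.003355  # 13C - 12C mass difference in Da
PPM_TOL = 30.0        # m/z tolerance in ppm
RT_TOL = 0.1          # retention time tolerance in minutes

def find_doublets(peaks, n_labels=16):
    """peaks: list of (mz, rt) tuples; n_labels: number of 13C atoms
    incorporated (16 for a 13C16-labeled acyl chain).
    Returns index pairs (light, heavy) qualifying as isotopomer doublets."""
    doublets = []
    for i, (mz1, rt1) in enumerate(peaks):
        target = mz1 + n_labels * C13_SHIFT
        tol = mz1 * PPM_TOL / 1e6
        for j, (mz2, rt2) in enumerate(peaks):
            if i != j and abs(mz2 - target) <= tol and abs(rt2 - rt1) <= RT_TOL:
                doublets.append((i, j))
    return doublets

# Hypothetical peak list: an unlabeled lipid ion, its 13C16-labeled partner,
# and an unrelated peak that should not be paired.
peaks = [(734.5694, 8.42), (750.6231, 8.45), (760.5851, 9.10)]
```

The same loop extends to triplets or quadruplets by chaining additional shifts of `n_labels * C13_SHIFT` from each matched peak.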
The significantly altered metabolites were represented as bar charts based on their relative ion intensity, and their roles in lipid metabolism were mapped. For targeted ceramide analysis, mass spectrometry was performed using the UPLC system coupled with tandem mass spectrometry. Chromatographic separation was performed on an ACQUITY UPLC BEH C18 column (2.1 mm \u00d7 100 mm \u00d7 1.7 \u00b5m). Column temperature was maintained at 60 \u00b0C. The mobile phase A was an acetonitrile/water mixture (v/v) and the mobile phase B was isopropanol/acetonitrile; both solvents contained 10 mM ammonium formate. The flow rate was 0.45 mL/min, and the solvent gradient was as follows: 0\u201310 min, 40\u201399% solvent B; 10\u201310.1 min, 99\u201340% solvent B; 10.1\u201312 min, 40% solvent B. Mass spectrometric analysis was performed using the Waters Xevo TQ-S system operating in positive-ion ESI mode. The capillary voltage was set at 1500 V and the cone voltage was set at 30 V. The desolvation gas flow rate was set at 900 L/h, and the cone gas flow was maintained at 150 L/h. The desolvation and source temperatures were set at 550 \u00b0C and 120 \u00b0C, respectively. MS data were collected in centroid mode at a rate of 0.1 scan/s. For optimization of parameters, C16 and C24:1 dihydroceramide and C16 and C24:1 ceramide standards were dissolved in isopropanol/acetonitrile/water, and major MS/MS fragment patterns were determined. Data matrices were determined utilizing the MassLynx software V.4.1 from the extracted m/z value, retention time (RT), and ion intensity. Targeted ceramide data were normalized to the internal control (C17 ceramide) and to total cell protein levels."}
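The two-stage normalization described above, first to the C17 ceramide internal standard and then to total cell protein, can be sketched as follows; the function name and all numbers are hypothetical illustrations:

```python
# Sketch of targeted-ceramide normalization as described above: each analyte
# response is divided by the C17 ceramide internal-standard response (to
# correct extraction/instrument variation), then by total protein (to correct
# for cell amount). All values below are hypothetical.
def normalize(analyte_area: float, c17_is_area: float, protein_mg: float) -> float:
    """Protein-normalized relative level of a targeted ceramide species."""
    return (analyte_area / c17_is_area) / protein_mg

# Hypothetical peak areas and protein amount for one sample:
level = normalize(42000.0, 21000.0, 0.5)
```

Because both correction factors are per-sample, levels become comparable across treatment groups even when cell numbers or extraction recoveries differ.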
+{"text": "Up to 2016, low\u2010 and middle\u2010income countries mostly introduced routine human papillomavirus (HPV) vaccination for just a single age\u2010cohort of girls each year. However, high\u2010income countries have reported large reductions in HPV prevalence following \u201ccatch\u2010up\u201d vaccination of multiple age\u2010cohorts in the year of HPV vaccine introduction. We used the mathematical model PRIME to project the incremental impact of vaccinating 10\u2010 to 14\u2010year\u2010old girls, compared to routine HPV vaccination only, in the same year that routine vaccination is expected to be introduced for 9\u2010year\u2010old girls across 73 low\u2010 and lower\u2010middle\u2010income countries. Adding multiple age\u2010cohort vaccination could increase the number of cervical cancer deaths averted by vaccine introductions in 2015\u20132030 by 30\u201340%, or an additional 1.23\u20131.79 million, over the lifetime of the vaccinated cohorts. The number of girls needed to vaccinate to prevent one death is 101 in the most pessimistic scenario, which is only slightly greater than that for routine vaccination of 9\u2010year\u2010old girls (87). These results hold even when assuming that girls who have sexually debuted do not benefit from vaccination. Results suggest that multiple age\u2010cohort vaccination of 9\u2010 to 14\u2010year\u2010old girls could accelerate HPV vaccine impact and be cost\u2010effective. What's new? To prevent cervical cancer, many low\u2010 and middle\u2010income countries employ HPV vaccination programs for girls of a certain age, usually 9. What about girls too old when the program starts? Here, the authors ask how many lives could be saved by offering vaccination at a wider age range. Using data from 73 countries, they determined that vaccinating girls between ages 9 and 14 years could prevent 30\u201340% more cancer deaths, with similar cost\u2010effectiveness. 
Routine vaccination of 9\u2010year\u2010olds prevents 1 death per 87 vaccinations; adding 10\u2010 to 14\u2010year\u2010olds only bumps that slightly to 101 vaccinations per death prevented. Abbreviations: DoV, Decade of Vaccines; HPV, human papillomavirus; LMICs, low\u2010 and middle\u2010income countries; WHO, World Health Organization. Human papillomavirus (HPV) vaccination protects vaccinees against HPV infection, a necessary cause of cervical cancer. Cervical cancer kills 266,000 women every year, with 82% of them in low\u2010 and middle\u2010income countries (LMICs). HPV vaccine introduction in most high\u2010income countries was accompanied by multiple age\u2010cohort (multi\u2010cohort) or \u201ccatch\u2010up\u201d vaccination, during which females older than the age of routine vaccination were offered vaccination, with an upper limit of around 15\u201326 years depending on the country. In contrast, LMICs have mostly introduced routine HPV vaccination for just a single age\u2010cohort of girls each year. Until 2016, the World Health Organization (WHO) recommended prioritizing routine vaccination of 9\u2010 to 13\u2010year\u2010old girls without any mention of multi\u2010cohort vaccination. In October 2016, the WHO's Strategic Advisory Group of Experts on Immunization revised its position to recommend delivering vaccination to multiple age\u2010cohorts of girls aged 9\u201314 years. To address these concerns, we conducted data analysis and modeling work to project the potential incremental impact of multi\u2010cohort vaccination in 73 Decade of Vaccines (DoV) countries projected to introduce HPV vaccination in 2015\u20132030. DoV countries are those that the Global Vaccine Action Plan for 2011\u20132020 focuses on, consisting of countries classified as low or lower\u2010middle income by the World Bank in 2011. We estimated the impact of different HPV vaccination strategies in DoV countries using the Papillomavirus Rapid Interface for Modelling and Economics (PRIME). 
PRIME is a static, proportional impact model of HPV vaccination that was developed in collaboration with WHO to estimate the impact and cost\u2010effectiveness of introducing HPV vaccination in LMICs. It is also used to inform vaccine impact estimates used by Gavi and the Bill & Melinda Gates Foundation. The model equations and inputs have been extensively described elsewhere (http://www.primetool.org). Herd (indirect) effects and cross\u2010protection against nonvaccine HPV types are not considered, so impact estimates for routine vaccination should be regarded as conservative. However, previous validation exercises suggest that PRIME gives cost\u2010effectiveness estimates for routine female\u2010only vaccination comparable to those of transmission dynamic models in the literature. We assume that vaccinating girls prior to infection with HPV types 16 and 18 fully protects them from developing cervical cancer caused by HPV 16 and 18, in accordance with vaccine trials. Country population. United Nations World Population Prospects 2015 figures were obtained for the number of females in the 5\u20139 and 10\u201314\u2010year\u2010old age groups in 2015\u20132030. Vaccine coverage. HPV vaccine introduction years and subsequent vaccine coverage were based on Gavi's Strategic Demand Forecast version 12, released in 2015. Age at sexual debut. Demographic and Health Survey (DHS) data report the proportion of females who have become sexually active by age 15, 18, 21 and 25 years. DHS data were available for 53 out of 73 countries, comprising 84% of the 9\u2010year\u2010old female age\u2010cohort. Data sources for model parameters are summarized in Supporting Information, Appendix 4. Of 94 DoV countries, we excluded 15 not projected to introduce HPV vaccination in 2015\u20132030 and 6 lacking both sexual activity and World Development Indicator information. 
For the remaining 73 countries, input parameters were taken from previous publications. Data are not available for sexual activity before age 15 years. Hence, we fitted two functions (a logit function and a gamma cumulative distribution function) to the data at the four ages with data, giving equal weight to each point; that is, we chose the values of a and b to minimize the sum of squared residuals between the proportion of sexually active females at age x years and the function f(x)\u2009=\u20091/(1\u2009+\u2009e\u2212a(x+b)) (logit) or f(x)\u2009=\u2009\u03b3(a, x/b)/\u0393(a) (gamma). The best fitting of the two models (based on the deviance) was used to extrapolate sexual debut in females younger than 15 years. We validated our model using sexual debut data in 12\u2010 to 30\u2010year olds from Benin, India, the United States, Canada and the United Kingdom. Of the 73 countries we examined, 20 had no relevant DHS sexual activity data. These were matched to countries with such data in a three\u2010step process based on similarity of other variables: (i) using linear regression to select predictors of female sexual activity at age 15 years from a basket of indicators in the 53 countries with data, (ii) using a clustering algorithm to partition the 73 countries into eight clusters based on similarities in the predictors of sexual activity and (iii) matching countries without relevant data to the same\u2010cluster country with the highest proportion of sexually active females at age 15 years. Technical details of these procedures are given in Supporting Information, Appendix 2. We compared two scenarios: (i) the current Gavi scenario, in which only 9\u2010year\u2010old girls are offered vaccination and (ii) a multi\u2010cohort vaccination scenario, in which girls aged 9\u201314 years are offered vaccination in the first year of vaccine introduction. In subsequent years, only 9\u2010year\u2010old girls are vaccinated. 
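The curve-fitting step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the four data points are hypothetical DHS-style proportions, and the gamma fit uses SciPy's standard shape/scale parameterisation of the gamma CDF as a stand-in for the paper's exact functional form.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

# Hypothetical DHS-style data: proportion of females sexually active
# by ages 15, 18, 21 and 25 years (illustrative values only).
ages = np.array([15.0, 18.0, 21.0, 25.0])
props = np.array([0.10, 0.45, 0.70, 0.85])

def logit_cdf(x, a, b):
    # f(x) = 1 / (1 + exp(-a (x + b))), as in the text
    return 1.0 / (1.0 + np.exp(-a * (x + b)))

def gamma_cdf(x, shape, scale):
    # Standard gamma cumulative distribution function
    return gamma.cdf(x, a=shape, scale=scale)

def sse(params, cdf):
    # Sum of squared residuals, each age point weighted equally
    return float(np.sum((props - cdf(ages, *params)) ** 2))

fit_logit = minimize(sse, x0=[0.5, -18.0], args=(logit_cdf,), method="Nelder-Mead")
fit_gamma = minimize(sse, x0=[10.0, 2.0], args=(gamma_cdf,), method="Nelder-Mead")

# The better-fitting model is then used to extrapolate below age 15
best = "gamma" if fit_gamma.fun < fit_logit.fun else "logit"
```

With the fitted parameters in hand, sexual debut before age 15 is extrapolated by evaluating the chosen CDF at younger ages.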
We assumed that a multi\u2010cohort campaign would enable first year coverage in all catch\u2010up age groups to be equal to the highest routine coverage attained. An alternative scenario at 75% of the highest routine coverage was also explored. Vaccinations expected to take place in the period 2015\u20132030 were considered.The primary outcome is the number of deaths due to cervical cancer prevented by vaccinating these cohorts over the lifetime of the vaccinated cohorts. Results are presented aggregated over (i) the year in which vaccination is delivered and (ii) the year in which the outcome (averted deaths) occurred.As a secondary outcome, we calculated the number needed to vaccinate to prevent one cervical cancer\u2010related death, a common metric used to describe the efficiency of HPV and other vaccines.The proportion of girls reported in DHS to be sexually active at age 15 ranges from 0.3% to 35.0% (Chad), with a mean proportion of 14.4% averaged over countries. The gamma model fit DHS data on sexual activity better than a logit model, with deviance of 0.15 (gamma) versus 0.46 (logit) . The gamma model was also able to reproduce sexual activity data on 12\u2010 to 30\u2010year olds in Benin, India, the United States, Canada and the United Kingdom .Our results show that multi\u2010cohort vaccination of 10\u2010 to 14\u2010year\u2010old girls when routine HPV vaccination for 9\u2010year\u2010old girls is introduced could substantially increase the impact of vaccination by accelerating reduction in cervical cancer deaths. Up to 2016, the focus in DoV countries has been on delivering HPV vaccines to girls at the lower end of the age range for HPV vaccine indications . This is because vaccine effectiveness is reduced if vaccinees are HPV infected before vaccination.The number of girls that need to be vaccinated to prevent a cancer death in multi\u2010cohort vaccination is only slightly greater than that for routine vaccination. 
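The secondary outcome defined above is a simple ratio; a sketch, using the illustrative 87 and 101 figures from the key-messages summary rather than the paper's country-level inputs:

```python
# Number needed to vaccinate (NNV) to prevent one cervical cancer death.
# The dose counts below are hypothetical, chosen to reproduce the headline
# ratios of 87 (routine) and 101 (multi-cohort) vaccinations per death averted.
def number_needed_to_vaccinate(girls_vaccinated, deaths_prevented):
    """Vaccinations delivered per cervical cancer death averted."""
    return girls_vaccinated / deaths_prevented

routine = number_needed_to_vaccinate(87_000, 1_000)       # -> 87.0
multi_cohort = number_needed_to_vaccinate(101_000, 1_000)  # -> 101.0
```

A lower NNV means a more efficient programme; the small gap between the two scenarios is the basis for the claim that multi-cohort vaccination is only slightly less efficient than routine vaccination.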
As all females under 15 years are recommended to receive two doses of vaccine, if each 10\u2010 to 14\u2010year old can be given a vaccine dose at the same cost as a 9\u2010year old, then multi\u2010cohort vaccination would have a similar cost\u2010effectiveness profile as routine vaccination. As routine vaccination is cost\u2010effective in almost all countries in the world,This is the first paper to look at the impact of the new WHO recommendations on multi\u2010cohort vaccination in 10\u2010 to 14\u2010year\u2010old females in DoV countries. Most modeling papers looking at the impact of HPV catch\u2010up campaigns have been limited to high\u2010income countries.Our analysis used PRIME, a static model that projects vaccine impact without requiring detailed information about sexual mixing and intermediate disease markers such as HPV prevalence or screening outcomes. It does not capture indirect (herd) effects on unvaccinated females as a result of reduced transmission in the population. However, the error in ignoring herd effects is small when evaluating vaccinating young females at coverage close to 100%. Given that 80/87 (92%) of the countries expected to introduce HPV vaccination between 2015 and 2030 are projected to achieve coverage of 80% or greater, the estimates using PRIME are likely to be satisfactory. Furthermore, transmission dynamic models have found that the benefits of vaccinating females in multi\u2010cohort campaigns are similar to vaccinating routine cohorts as long as the females are under around 15 years old. However, the precise magnitude of the herd effects depends on type\u2010specific transmission and clearance rates as well as the characteristics of the population in each country.Coverage assumptions were based on Gavi projections of future vaccine demand. However, not many countries have achieved the high levels of coverage that Gavi projects. 
Furthermore, in several countries vaccine coverage has fallen following (unfounded) safety concerns.Another simplification is that we assumed that any girl who has sexually debuted is HPV 16/18 infected and does not benefit from HPV vaccination at all. This assumption allows us to adjust the differences in HPV exposure using only data on the onset of sexual debut across the countries. However, the proportion of 15\u2010year olds who have sexually debuted does not exceed 35% in any DoV country with relevant DHS data, and is substantially lower in most countries. Hence even when making this extremely pessimistic assumption, the effectiveness of vaccination at age 14 is only slightly lower than at age 9. For comparison, although 18% of British females are sexually active at age 15,All these model simplifications lead to our analysis underestimating the benefit of vaccination, that is, the benefit of multi\u2010cohort vaccination may be even greater than we show here. Hence we have followed WHO guidelines, which allow the use of a conservative static model (that underestimates vaccine impact) provided that this still produces outcomes that are favorable to vaccination.A further limitation is that we assume (like most published models) that cervical cancer incidence will not change in the future. Future incidence depends on trends in sexual behavior, screening uptake, HIV prevalence, all\u2010cause mortality and other factors. However, long\u2010term cervical cancer incidence projections taking all relevant factors into account have never been published.While the impact of multi\u2010cohort vaccination is potentially large, there are still delivery questions that need to be addressed. First, multi\u2010cohort vaccination will require much larger HPV vaccine stocks, particularly in 2018 when several large countries are predicted to introduce HPV vaccination. Second, female school enrolment in many countries drops after primary school. 
Hence school\u2010based vaccination may have lower coverage in multi\u2010cohort age groups compared to routine cohorts. Furthermore, DHS data indicate an association between being sexually active by age 15 and not having secondary or postsecondary education at the country level (data not shown). This may suggest that out\u2010of\u2010school girls are more likely to be at risk of HPV infection and disease. Hence vaccinating girls at secondary school age may require strategies that are able to reach out\u2010of\u2010school girls to have maximal impact."}
+{"text": "Human papillomavirus (HPV) is the most widespread sexually transmitted infection worldwide. It causes several health consequences, in particular accounting for the majority of cervical cancer cases in women. In the United Kingdom, a vaccination campaign targeting 12-year-old girls started in 2008; this campaign has been successful, with high uptake and reduced HPV prevalence observed in vaccinated cohorts. Recently, attention has focused on vaccinating both sexes, due to HPV-related diseases in males (particularly for high-risk men who have sex with men) and an equity argument over equalising levels of protection.We constructed an epidemiological model for HPV transmission in the UK, accounting for nine of the most common HPV strains. We complemented this with an economic model to determine the likely health outcomes for individuals from the epidemiological model. We then tested vaccination with the three HPV vaccines currently available, vaccinating either girls alone or both sexes. For each strategy we calculated the threshold price per vaccine dose, i.e. the maximum amount paid for the added health benefits of vaccination to be worth the cost of each vaccine dose. We calculated results at 3.5% discounting, and also 1.5%, to consider the long-term health effects of HPV infection.At 3.5% discounting, continuing to vaccinate girls remains highly cost-effective compared to halting vaccination, with threshold dose prices of \u00a356-\u00a3108. Vaccination of girls and boys is less cost-effective (\u00a325-\u00a353). Compared to vaccinating girls only, adding boys to the programme is not cost-effective, with negative threshold prices (-\u00a36 to -\u00a33) due to the costs of administration. All threshold prices increase when using 1.5% discounting, and adding boys becomes cost-effective (\u00a336-\u00a347). 
These results are contingent on the UK\u2019s high vaccine uptake; for lower uptake rates, adding boys (at the same uptake rate) becomes more cost-effective. Vaccinating girls is extremely cost-effective compared with no vaccination; vaccinating both sexes is less so. Adding boys to an already successful girls-only programme has a low cost-effectiveness, as males have high protection through herd immunity. If future health effects are weighted more heavily, threshold prices increase and vaccination becomes cost-effective.The online version of this article (10.1186/s12879-019-4108-y) contains supplementary material, which is available to authorized users. Human papillomavirus (HPV) is the world\u2019s most common sexually transmitted infection, with the majority of people being infected at some point in their lifetime. Uptake in different countries has varied considerably since the introduction of HPV vaccination. The United Kingdom (UK), which is the primary focus of this work, has relatively high national uptake rates of around 76-90%. Vaccinating when young, before sexual debut, is optimal. In 2011, Giuliano et al. reported on the efficacy of HPV vaccination in males. Here we present an analysis of HPV infection and vaccination, to estimate the incremental cost-effectiveness of vaccinating boys as well as girls. The study consists of three parts: firstly, the fitting of parameters associated with HPV transmission, infection and recovery, by use of an epidemiological model incorporating sexual partnerships between individuals, matched to multiple HPV prevalence data sources. Secondly, the simulation of a range of vaccination strategies using the parameters from the above model. Thirdly, an economic analysis of the different strategies, taking into account the potential consequences of HPV infections, to assess the cost-effectiveness of each vaccination strategy. 
We have not included potential changes to the UK\u2019s cervical cancer screening service that might be precipitated by any future reduction in HPV prevalence. In this regard we follow the earlier analysis of female-only vaccination and focuTo fit the transmission model to HPV prevalence rates before vaccination programmes were introduced, a variety of data, across a range of countries, was used. In total, results from 13 detailed epidemiological studies were used; information is given in Additional file\u00a0To model partnership behaviour of individuals, we used data from National Survey of Sexual Attitudes and Lifestyles (NATSAL) 2 and 3 : UK-wideIn the following section we describe the epidemiological model in brief; the transmission framework is explored in more detail in Datta et al. , and in We used an individual-based modelling framework, with SIRS-V (Susceptible - Infected - Recovered - Susceptible - Vaccinated) dynamics, thus accounting for both short-duration natural immunity and longer-lasting protection due to vaccination. Populations of 50,000 individuals were generally modelled; this population size was a compromise between stochastic uncertainty and speed of the computationally intensive simulations. We used yearly data from Natsal-2 and Natsal-3 to determine distributions for the rates of new partnerships that involve unprotected sex and could therefore allow the spread of HPV [The model was generally run for 100 years, allowing individuals to age, form new partnerships, become infected and recover. When an individual stochastically picked a new partner, the characteristics of the new partner were probabilistically determined by the status of the individual choosing. If the chosen partner was infected with one or more types of HPV, these could be stochastically transferred, with separate transmission probabilities for each type and with asymmetric transmission between the sexes . 
We used 13 datasets to fit the parameters in the epidemiological model; the datasets used are listed in Additional file 3. For predicting the impact of vaccination, we followed the UK\u2019s Joint Committee on Vaccination and Immunisation (JCVI) guidelines and used the \u2018best\u2019 parameters from the model fitting (i.e. the mode of the posterior for each fitted parameter). We then estimated future levels of HPV in the population for different vaccination strategies. Although the parameters used for each run were identical, due to the stochastic nature of the simulation there was considerable variability between runs, necessitating multiple simulations (500 runs per vaccination strategy). When simulating future vaccination scenarios, we used available uptake rates to simulate the girls-only vaccination that had occurred in the period 2008-2016 inclusive, using the bivalent vaccine until 2012 and the quadrivalent vaccine thereafter. The following vaccination strategies were then simulated into the future, using the bivalent, quadrivalent or nonavalent vaccine: (1) Halted vaccination: historical vaccination, followed by a halting of all vaccination in 2017; (2) Girls: historical vaccination, followed in 2017 by selecting 85% of 12-year-old females to be vaccinated at the start of each year (based on predictions from JCVI on future uptake rates); (3) Girls and boys: historical vaccination, followed in 2017 by selecting 85% of 12-year old girls and 85% of 12-year old boys to be vaccinated at the start of each year; (4) Girls and boys equal: historical vaccination, followed in 2017 by selecting 42.5% of 12-year old girls and 42.5% of 12-year old boys at the start of each year to be vaccinated; (5) Girls na\u00efve: no historical vaccination, and vaccinating 60% of 12-year old girls from 2008 onwards; (6) Girls and boys na\u00efve: no historical vaccination, and vaccinating 60% of 12-year old girls and 60% of 12-year old boys from 2008 onwards. 
We define \u2018historical vaccination\u2019 as simulating girls-only vaccination for 2008-2016, with uptake rates for the main and catch-up programmes taken from UK data. The final three scenarios were designed to provide a scientific understanding of the generic conditions under which a gender-neutral vaccination programme would be cost-effective. Strategy 4 represents countries (like the UK) that have already commenced a girls-only vaccination programme (at varying uptake rates) and are interested in adding boys to the schedule; whilst strategies 5 and 6 represent countries which are yet to begin vaccinating against HPV. In such a way, we showed how impacts changed depending on both the coverage of vaccination in the population and existing herd immunity. For each strategy we calculated the threshold vaccine dose price; that is, the maximum amount the healthcare system should be willing to pay per dose given the associated health benefits. Prices per vaccine dose below this threshold will tend to generate positive net health benefits, whilst a negative threshold price offers no incentive to the manufacturer to provide the vaccine. The economic model took the form of a continuous time individual patient simulation, with costs assessed from a health and personal social services perspective and presented in pounds sterling (2013-14 prices). The following sections outline the basic clinical, cost and health utility parameters that fed into this economic model. Age- and sex-specific incidences of the six cancer types included in the model, and of cervical intraepithelial neoplasia, were taken from 2013 UK cancer registration statistics. 
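The threshold dose price defined above is the price at which the incremental net monetary benefit of a strategy is exactly zero. A hedged sketch, with entirely illustrative numbers (the £20,000-per-QALY threshold matches the JCVI criterion quoted later; everything else is assumed):

```python
# Threshold dose price: the vaccine price per dose at which the QALY gains,
# valued at the willingness-to-pay threshold, exactly offset the strategy's
# incremental non-vaccine costs plus the vaccine bill. All inputs illustrative.
def threshold_dose_price(delta_qalys, delta_other_costs, doses,
                         wtp_per_qaly=20_000):
    """(QALYs gained x WTP - incremental non-vaccine costs) / extra doses."""
    return (delta_qalys * wtp_per_qaly - delta_other_costs) / doses

# e.g. a strategy gaining 5,000 discounted QALYs, saving 20m GBP in
# treatment costs (negative incremental cost), and needing 2m extra doses:
price = threshold_dose_price(5_000, -20_000_000, 2_000_000)  # -> 60.0
```

Note that if the extra doses bring few QALYs but real administration costs, the numerator can go negative, which is exactly the situation the paper reports for adding boys at 3.5% discounting.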
Age- and sex-specific incidences of genital warts were taken from a UK Health Protection Agency report , and ageProportions of each of these clinical events associated with the HPV types included in the model were extracted from a published meta-analysis and a liData on disease incidence, proportion of disease associated with HPV, and age- and sex-stratified proportions of people infected with each HPV type pre-vaccination were combined to give annual event rates for the nine diseases included in the model, stratified by age, sex and current and past HPV infection status.All-cause mortality data were taken from the Office of National Statistics , as wereHealth utility decrements associated with cases of genital warts and recuCosts of recurrent respiratory papillomatosis , genitalThe costs and health utility decrements used in the model are summarised in Additional file\u00a0The time horizon of the base case model was 100 years post the point where the different vaccine strategies affected individuals in the model. Thus, people who were born at the start of 2000 were included in the analysis, as were all subsequent newborns.To comply with the JCVI\u2019s guidelines, two criteria were considered. Firstly, that for the most likely set of parameters (modes of posteriors) the mean discounted costs and outcomes should be evaluated against a \u00a320,000 cost-effectiveness threshold value for a QALY . SecondlAs an alternative scenario, we also evaluated the effects of applying a 1.5% discount rate to health impacts; this was in response to the CEMIPP report which hiFor the uncertainty criterion, we note that a single simulation contains stochasticity due to both parameter uncertainty, and also the finite size of the modelled population and the chance nature of transmission. Our results show that this second form of stochasticity is largely parameter invariant, and therefore we were able to separate these two effects . 
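The sensitivity of these results to the discount rate (3.5% vs 1.5%) follows directly from how far in the future HPV-related deaths occur. A small worked sketch of standard exponential discounting, with an illustrative 40-year lag between vaccination and the averted outcome:

```python
# A QALY gained t years from now is worth 1/(1+r)^t today.
def discounted(value, years, rate):
    return value / (1.0 + rate) ** years

# A health benefit realised 40 years after vaccination (illustrative lag):
at_3_5 = discounted(1.0, 40, 0.035)  # ~0.25 of its undiscounted value
at_1_5 = discounted(1.0, 40, 0.015)  # ~0.55 of its undiscounted value
```

At a 40-year horizon the 1.5% rate preserves more than twice as much of the benefit's present value as the 3.5% rate, which is why all threshold prices rise, and adding boys becomes cost-effective, under the lower rate.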
The results shown for the uncertainty analysis therefore reflect only our uncertainty in parameter estimates and not variability between simulations. Patient and public involvement (PPI) in research has become embedded in health research, with patients and the public involved as collaborative partners throughout the research process. For this analysis, patients and the public were not involved in developing either the research question or the design of the study in relation to the modelling approach, primarily because this is one of the first studies to include PPI in modelling. As such it is exploratory in nature, with our intention to identify the ways in which patients can contribute to modelling. We utilised the development of the HPV model as an opportunity to establish a PPI Reference Group (comprised of public members), to explore the potential for patients or the public to contribute to both the epidemiological and economic modelling components of the study, as part of the wider programme of work. The Reference Group met regularly at key points in the study, with email contact in between. The wider aim of PPI within the project was to contribute towards conceptual development of PPI in mathematical and economic modelling, through the development of a new framework co-produced with patients and the public. Throughout the project we aimed to identify any impacts of the PPI Reference Group, and these will be disseminated through policy recommendations made by the Department of Health. A separate piece of work on the PPI contribution to the study is currently in progress. The fitting scheme produced well-defined parameter distributions. The historical girls-only vaccination programme (2008-2016) had the effect of reducing HPV prevalence across the entire population from approximately 8% to 6.9%. Assuming that girls-only vaccination continues at 85%, by 2050 prevalence is predicted to drop to around 0.56% (yellow line). 
Adding boys\u2019 uptake at 85% to the girls-only programme from 2017 onwards further reduces prevalence to approximately 0.13% (green line). As an alternative, keeping the number of vaccinations equal to the girls-only programme but splitting them equally between girls and boys (so that uptake is 42.5% in both sexes) leads to a less steep decline in prevalence, falling to around 1.5% by 2047 (blue line). Interestingly, halting vaccination entirely in 2017 leads to a continued fall in prevalence until 2025 (red line), due to the delay between vaccination and girls entering the sexually active population; however, in the longer term prevalence returns to approximately pre-vaccination levels. As an alternative to the eight years of girls-only vaccination at high uptake, we investigated the effect of a lower uptake HPV vaccination campaign from 2008. Vaccinating just 60% of girls leads to a less marked decline in prevalence, reducing to around 2.5% by 2050 (black solid line); vaccinating 60% of both sexes further reduces the prevalence to 0.31% (black dashed line). We note that pre-2010 individual-level behaviour was based on Natsal-2 and post-2010 behaviour on Natsal-3, which reported marginally increased sexual behaviour. These results have two important public-health implications. Firstly, the reduction in cases from adding boys to the vaccination programme is markedly less than the initial impact of vaccinating girls. Secondly, a gender-neutral campaign vaccinating 60% of the population has a comparable impact on infection prevalence to vaccinating 85% of girls (42.5% of the population). 
Given the heterosexual nature of the majority of the UK population, it is clear that vaccination of girls is generating considerable herd-immunity for boys.A detailed breakdown in the prevalence of different HPV strains, by age and gender, under alternative vaccination strategies is given in Additional file\u00a0The mean results of the cost-effectiveness model are shown in Fig.\u00a0Vaccinating girls only or both girls and boys, with any of the vaccines, was always cost-effective compared to not vaccinating, with positive threshold dose prices and positive confidence intervals in all instances. However, vaccinating girls alone was more cost-effective per dose, with a higher threshold price for each vaccine, compared to a gender-neutral strategy. Generally, the nonavalent vaccine was the most effective in preventing disease, followed by the quadrivalent, and finally the bivalent, as would be expected from the level of protection offered, hence a greater threshold price. Incremental to a girls-only vaccination campaign, adding boys gave threshold dose prices very close to, but below, zero, at 3.5%. The results from individual simulations varied widely and 500 replicates were needed to achieve relatively tight confidence intervals around the mean. For the quadrivalent vaccine, the mean dose price was negative (at -\u00a32.92) and, given that the confidence intervals are below zero (from -\u00a33.64 to -\u00a32.18), we can say with 95% confidence that the threshold price is negative. The same arguments apply to both the bivalent and nonavalent vaccines.At 1.5% discounting all threshold prices increased, but with the same qualitative patterns; for girls-only vaccination compared to halted vaccination threshold prices were \u00a3687 \u2013 \u00a3811 for the three vaccines and gender-neutral vaccination was less cost-effective, with threshold prices of \u00a3362 \u2013 \u00a3429. 
Incremental to girls only, gender-neutral vaccination had positive threshold prices of \u00a336 \u2013 \u00a347. This is due to the lower discount rate adding more weight to economic values placed on health conditions in the future. In general, a lower discounting rate will always make vaccination more cost-effective for infections like HPV, where the health consequences are experienced years or decades after infection.A detailed breakdown in the reduction in cases of the health sequelae under alternative vaccination strategies is given in Additional file\u00a0Employing the probabilistic approach as per the JCVI\u2019s guidelines, whereby parameters in both the transmission and economic models were sampled from appropriate distributions, and increasing the cost-effectiveness threshold for a QALY to \u00a330,000, the threshold dose prices at the 10th percentile of simulated values are shown in Table\u00a0It is evident that, while vaccination of girls fulfilled the JCVI criterion of 90% of simulations generating cost-effective results Fig.\u00a0a, addingConversely, at 1.5% discounting Fig.\u00a0c all thrConsidering the different levels of vaccine uptake in more detail and the resulting herd-immunity provides a richer understanding of the cost-benefit relationship Fig.\u00a0. Two eleWe note that, using 3.5% discounting Fig.\u00a0a, at 85%The modelling work performed here combined epidemiological and economic insights with advice from our PPI group, and provides cost-effectiveness results for a variety of vaccination strategies to combat HPV, following standard methodologies are well defined, there is greater uncertainty surrounding the additional five types in the nonavalent vaccine , reflecting the sparsity of data sources compared to halted vaccination. 
Girls-only vaccination was highly cost-effective versus halted vaccination, with threshold dose prices of \u00a355.80, \u00a399.64 and \u00a3108.05 for the bivalent, quadrivalent and nonavalent vaccines, respectively. Incremental to girls-only vaccination, none of the vaccines had a positive dose threshold price at 3.5% discounting, with confidence intervals that were all below zero. At 1.5% discounting all threshold dose prices increase, and gender-neutral vaccination, incremental on girls-only vaccination, is cost-effective, with threshold prices of \u00a336.46, \u00a344.70 and \u00a346.88; as uptake in girls rises, boys\u2019 vaccination becomes less cost-effective. Girls carry a larger economic burden of HPV-related disease than boys. It is clear from our analysis that mixing patterns are an important factor in the spread of HPV. An aspect not explicitly modelled here was the possibility of disease import from outside the population (i.e. immigration and tourism). This has been considered in some of our work, but unprotected sex with unvaccinated individuals from outside the UK is likely to be a relatively minor component. One limitation of our modelling approach is the decoupling of HPV vaccination from cervical cancer screening \u2013 we have implicitly assumed throughout this work that cervical cancer screening will continue in its current form. It is possible that the nature of (and hence costs and consequences of) the cervical cancer screening programme will change in the future. In the near term this is most likely to be caused by evidence showing HPV-based cervical screening is more effective than the current cytology-based programme, and supports increasing the screening intervals from three to five years. A key assumption in our models, which may require further study, is that vaccination is an independent random process, and in particular is not correlated with sexual behaviour. 
If this is not appropriate, it may be that the girls who are missing out on vaccination are in the highest risk groups, and may be disproportionately contributing to transmission of HPV. Due to the high prevalence of HPV across both men and women \u201398, thisThe generic conclusion from this work is that as coverage in girls increases, there is less incremental benefit from adding boys to the programme, due to existing herd-immunity. In the case of the UK, with the highest reported sustained HPV vaccine uptake rates in girls of any country, it is unlikely that adding boys will be cost-effective within standard economic guidelines which assume a 3.5% economic discounting. However, given the long time-scales associated with HPV infection and resulting disease, it may be more appropriate to adopt a 1.5% discounting, in which case adding boys to the programme becomes cost-effective for all three vaccines considered.Additional file 1Appendix S1. A detailed overview of the key assumptions underpinning the epidemiological model employed in the paper. (PDF 70 kb)Additional file 2Appendix S2. Economic model assumptions. (PDF 40 kb)Additional file 3Table S1. A summary of the datasets used to fit the HPV transmission model to for pre-vaccination populations. (PDF 53 kb)Additional file 4Table S2. Vaccine uptake in the UK, 2008\u20132016. (PDF 746 kb)Additional file 5Table S3. Efficacy of the three HPV vaccines against different HPV types. (PDF 48 kb)Additional file 6Table S4. Clinical parameter values and sources. (PDF 50 kb)Additional file 7Table S5. Costs and health utility decrements. (PDF 55 kb)Additional file 8Figure S1 and Table S6. Parameter distributions. (PDF 176 kb)Additional file 9Figures S2 and S3. Comparing HPV prevalence between the model and data. (PDF 328 kb)Additional file 10Figure S4. The mean threshold price per dose, under different vaccination strategies, for the base case scenario, at both 3.5% and 1.5% discount rate. 
(PDF 154 kb)Additional file 11Table S7. The mean prevalence of different HPV strains (as a percentage) in different ages and genders, after 50 years of simulating a range of vaccination strategies. (PDF 51 kb)Additional file 12Table S8. Cases of disease for different vaccination strategies. (PDF 1254 kb)Additional file 13Tables S9 and S10, and Figure S5. Incremental cost-effectiveness ratios. (PDF 146 kb)"}
+{"text": "Avocados contain nutrients and bioactive compounds that may help reduce the risk of becoming overweight/obese. We prospectively examined the effect of habitual avocado intake on changes in weight and body mass index (BMI). In the Adventist Health Study (AHS-2), a longitudinal cohort , avocado intake (standard serving size 32 g/day) was assessed by a food frequency questionnaire (FFQ). Self-reported height and weight were collected at baseline. Self-reported follow-up weight was collected with follow-up questionnaires between four and 11 years after baseline. Using the generalized least squares (GLS) approach, we analyzed repeated measures of weight in relation to avocado intake. Marginal logistic regression analyses were used to calculate the odds of becoming overweight/obese, comparing low (>0 to <32 g/day) and high (\u226532 g/day) avocado intake to non-consumers (reference). Avocado consumers who were normal weight at baseline, gained significantly less weight than non-consumers. The odds (OR (95% CI)) of becoming overweight/obese between baseline and follow-up was 0.93 , and 0.85 for low and high avocado consumers, respectively. Habitual consumption of avocados may reduce adult weight gain, but odds of overweight/obesity are attenuated by differences in initial BMI values. The current prevalence of overweight and obesity in the United States (U.S.) is 70.7% and 37.9%, respectively, among adults . WorldwiNutrient-dense, whole food choices may help to abate adult weight gain and reduce the risk of overweight or obesity. Avocados, a nutrient-dense and medium-caloric-dense whole food ,8,9, mayThere are very few studies that have examined the relationship between avocado intake and adiposity. Animal studies indicate a trend towards lower body weight related to avocado administration ,13,21. OThis was a longitudinal analysis on changes of weight and BMI in subjects from the Adventist Health Study-2 (AHS-2) cohort . 
This cohort comprises approximately 96,000 members who, at the time of enrollment, resided in the U.S. and Canada . Collection of follow-up data is ongoing and includes information about self-reported weight and changes in health history. The study was conducted in accordance with the Declaration of Helsinki, and the Institutional Review Board of Loma Linda University approved the AHS-2 cohort study. Subjects gave informed consent to participate in the AHS-2 cohort study. Dietary intake assessment: The comprehensive lifestyle questionnaire included a 204-item quantitative food frequency questionnaire (FFQ), which was used to assess dietary intake . Avocado intake was calculated from f, s, and n, where f = the weighted frequency of avocado; s = the weighted portion size of avocado; and n = the standard serving size of avocado (grams). Daily avocado intake was categorized into the following groups: 1) consumers versus non-consumers; and 2) non-consumers, low (>0 to <32 g/day), and high (\u226532 g/day) consumers. Validity of avocado intake was assessed in a calibration sub-study (a representative sample of ~1000 subjects from AHS-2), comparing intake estimated from the FFQ with intake calculated from 24-h dietary recalls. The mean de-attenuated correlations between the FFQ and 24-h dietary recalls for avocado intake were 0.52 and 0.50 for white and black subjects, respectively . Anthropometric data assessment: The comprehensive lifestyle questionnaire included questions regarding self-reported height and weight. The question to assess height and weight was \u2018what is your current height and weight?\u2019 with both write-in and circle-in-weight response options. These data were used to calculate baseline BMI as kg/m2. Self-reported height and weight were validated in the calibration sub-study. Weight and height were measured during calibration sub-study clinic visits, the details of which were reported previously . 
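The intake grouping described above can be sketched as follows. The 32 g/day standard serving and the low/high cutoffs are stated in the text; how f (weighted frequency) and s (weighted portion size) combine into grams per day is truncated in the source, so the product below is an assumption for illustration only.

```python
# Sketch of the avocado-intake grouping described in the Methods.
# ASSUMPTION: grams/day = f * s, with f = servings per day and
# s = grams per serving; the paper's exact formula is truncated.
STANDARD_SERVING_G = 32.0  # standard serving size of avocado, g/day

def daily_intake_g(f, s):
    return f * s

def intake_group(grams_per_day):
    # non-consumers (reference), low (>0 to <32 g/day), high (>=32 g/day)
    if grams_per_day == 0:
        return "non-consumer"
    if grams_per_day < STANDARD_SERVING_G:
        return "low"
    return "high"
```

With these cutoffs, the median consumer intake of 2.3 g/day reported later falls into the "low" group, and only intakes of at least one standard serving per day count as "high".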
Overweight was defined as a BMI of 25 to <30 kg/m2, and obesity as a BMI \u2265 30 kg/m2. Two follow-up questionnaires were sent to collect information on current self-reported weight ,33. Statistical analysis: Subjects were excluded from the analysis if calculated energy consumption was <2092 KJ or >18,828 KJ per day; baseline BMI was <18 kg/m2 or >40 kg/m2; if they reported being pregnant at baseline (not assessed during follow-up in an older cohort); and/or if they indicated they were currently smoking at baseline. Subjects were also excluded for implausible anthropometric values or changes. These exclusions were based on the distribution of the samples and clinical judgement. Female subjects were excluded if they reported a height <142 cm or >183 cm, and male subjects were excluded if they reported a height <152 cm or >198 cm. Subjects were also excluded if there were implausible changes in weight. These exclusions reduced the sample size by 13,766 subjects. Subjects were also excluded due to missing questionnaire return dates (n = 2) and missing avocado intake (n = 5547). These exclusions reduced the baseline sample size to 77,154 subjects. The analytical sample was further reduced to 55,407 subjects due to missing covariates (gender (n = 22), race (n = 815), age (n = 81), education level (n = 910), physical activity (n = 9162), sedentary time (n = 7842), dietary patterns ). Attrition bias analyses did not indicate differences between subjects with or without missing data. Descriptive analyses were done for variables of interest. The analyses included calculation of means (SD) or medians (IQR). Marginal means of baseline BMI were calculated after adjusting for age, gender, race, and energy intake. Two-sample t-tests were used to assess the differences between avocado consumers and non-consumers for normally distributed continuous variables, whereas the Mann-Whitney test was used as a nonparametric test for other continuous variables. 
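A minimal sketch of the BMI definition and the numeric exclusion bounds listed above (the bounds are taken from the text; the function names are ours):

```python
# BMI and the study's exclusion bounds, as stated in the Methods:
# energy <2092 or >18,828 KJ/day; baseline BMI <18 or >40 kg/m2;
# height <142 or >183 cm (women), <152 or >198 cm (men).
def bmi(weight_kg, height_cm):
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

def excluded(energy_kj, baseline_bmi, height_cm, is_female):
    if energy_kj < 2092 or energy_kj > 18828:
        return True
    if baseline_bmi < 18 or baseline_bmi > 40:
        return True
    lo, hi = (142, 183) if is_female else (152, 198)
    return height_cm < lo or height_cm > hi
```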
One-way analysis of variance (ANOVA) was used to assess the differences between avocado non-consumers, low consumers, and high consumers for normally distributed continuous variables. The Kruskal\u2013Wallis test was used as a nonparametric test for other variables. Chi-square analysis was used to assess differences between avocado consumers and non-consumers, and between non-consumers, low consumers, and high consumers, for categorical variables. Chi-square analysis was also used to assess differences in the percent change of BMI between baseline and follow-up. To assess the relationship between changes in BMI and weight over time and avocado intake, repeated measures data were analyzed using the generalized least squares (GLS) approach. Time (in years) between baseline and follow-up was calculated for each subject and used as a continuous variable in the GLS model. For the covariance pattern among repeated measures, the exponential covariance structure was found to be most appropriate for the data. This covariance structure was used in all longitudinal models for weight and BMI. The two dependent variables, weight and BMI, were log-transformed prior to analysis to achieve approximate normality. The main exposure of the model, daily avocado intake (g/day), was also log-transformed (as loge(X + 1), where X is avocado intake) to dampen the influence of outliers. Daily avocado intake was also adjusted for total energy intake, using the residual method . Covariates included baseline BMI ; gender ; race (black and non-black); and dietary patterns (vegetarian and nonvegetarian). GLS models were also used to estimate weight and BMI at baseline, and 5 years after baseline, for avocado intake at 0 g/day and 32 g/day for the subgroup analyses. Odds of becoming overweight or obese (BMI \u2265 25 kg/m2, calculated from Wt1 and Wt2) were assessed based on the level of avocado intake , in those who had a normal BMI at baseline. 
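The two transformations described above can be sketched as follows. This is a simple illustration of the log_e(X + 1) intake transform and a Willett-style residual-method energy adjustment (simple least squares), not the paper's GLS model; the data in the test are hypothetical.

```python
import math

# log_e(X + 1) transform for avocado intake (dampens outliers), and a
# residual-method energy adjustment: regress intake on total energy,
# keep the residual, and re-centre at the mean energy level.
def log_intake(x):
    return math.log(x + 1.0)

def energy_adjusted(intake, energy):
    n = len(intake)
    mean_e = sum(energy) / n
    mean_i = sum(intake) / n
    sxx = sum((e - mean_e) ** 2 for e in energy)
    sxy = sum((e - mean_e) * (i - mean_i) for e, i in zip(energy, intake))
    slope = sxy / sxx
    # residual plus the predicted value at the mean energy level
    return [i - slope * (e - mean_e) for i, e in zip(intake, energy)]
```

By construction, the adjusted intake keeps the same mean but no longer trends with total energy, which is the point of the residual method.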
Models were used to calculate odds ratios (OR) (95% CI) of becoming overweight or obese . The marginal models were fit using the alternating logistic regression (ALR) algorithm . The average age (SD) of the analytical sample at baseline was 55.9 (13.7) years. The average baseline weight (SD) and BMI (SD) were 76.0 (15.9) kg and 26.6 (4.7) kg/m2, respectively, and for Wt2 were 75.1 kg and 26.3 kg/m2, respectively. The proportion of vegetarians (including vegans) was 37.8%. Additional baseline descriptive data can be found in . In terms of frequency of intake, approximately 41% of individuals reported never or rarely consuming avocado, and 34% of subjects consumed avocado occasionally (at least 1\u20133 times per month). Regular consumers (consuming avocado at least once per week) made up ~25% of subjects. Of those consuming avocado, 69% consumed the standard serving size of avocado, while 18% consumed less and 13% consumed more. Median avocado intake among consumers was 2.3 g/day, with a range of 1.1 to 120.1 g/day. Avocado consumers, on average, had a significantly higher energy (SD) intake than non-consumers (non-consumers: 7535.8 (3063.9) versus consumers: 8261.7 (3025.0) KJ/day). Consumers, however, had a lower average weight (SD) (77.9 (16.2) versus 74.5 (15.5) kg; p < 0.0001) and BMI (SD) (27.3 (4.8) versus 26.0 (4.5) kg/m2; p < 0.0001), and were less likely to be sedentary, but more likely to be vigorously active, than non-consumers. When comparing zero, low (>0 to <32 g/day), and high avocado intake (\u226532 g/day), age, caloric intake, and sedentary hours per day differed significantly . In terms of follow-up weights, average BMI at Wt1 was 27.1 kg/m2, 26.0 kg/m2, and 24.7 kg/m2 for avocado non-consumers, low, and high consumers, respectively. Similarly, at Wt2, average BMI was 27.0 kg/m2, 25.9 kg/m2, and 24.5 kg/m2 among non-consumers, low, and high consumers, respectively (p < 0.0001). 
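For intuition about the odds comparisons reported in these results, a crude (unadjusted) odds ratio from a 2x2 table can be computed as below. The paper's ORs come from marginal logistic models fit with the ALR algorithm, not from this simple calculation, and the counts here are hypothetical.

```python
# Crude odds ratio from a 2x2 table (hypothetical counts; the study's
# reported ORs are from adjusted marginal logistic regression).
def odds_ratio(cases_exposed, noncases_exposed, cases_ref, noncases_ref):
    odds_exposed = cases_exposed / noncases_exposed
    odds_ref = cases_ref / noncases_ref
    return odds_exposed / odds_ref
```

An OR below 1, like the 0.93 and 0.85 reported for low and high consumers, means the exposed group had lower odds of becoming overweight/obese than the non-consumer reference group.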
Among subjects who had a normal BMI at baseline, approximately 16.7% and 16.6% became overweight/obese by Wt1 and Wt2, respectively. In assessing the impact of avocado intake on the conversion to overweight/obesity, we found that avocado non-consumers were more likely to become overweight/obese compared to consumers. For Wt1, 18.7%, 15.8%, and 10.5% of non-consumers, low, and high consumers became overweight/obese, respectively (p = 0.04). For Wt2, 19.0% of non-consumers, 15.6% of low consumers, and 10.2% of high consumers converted to being overweight/obese (p = 0.004). Alternatively, among those subjects who were overweight or obese at baseline, 11.7%, 12.7%, and 12.8% of non-consumers, low, and high consumers, respectively, became normal weight by Wt1 , and 13.4%, 14.7%, and 17.6% of non-consumers, low, and high consumers, respectively, became normal weight by Wt2 (p = 0.004). For a 75 kg individual, this translates to a difference of 0.4 kg in weight change over 5 years between non-consumers and high avocado consumers. We did not find significant changes in weight and BMI among those who were overweight or obese at baseline. Among older subjects in the AHS-2 cohort, there is a tendency to lose weight over time. We found that avocado consumers \u226560 years of age had less of a tendency to lose weight and BMI over time compared to non-consumers. We did not find significant results for other subgroup analyses . Sensitivity analyses including residential region as a covariate did not substantially change the results. Among subjects of normal weight at baseline, those who consumed avocado had lower odds of becoming overweight or obese during follow-up . The odds were 0.93 and 0.85 for low and high consumers, respectively. We found that, among avocado consumers in the AHS-2 cohort, there was a reduction in the odds of becoming overweight/obese compared to those who did not eat avocado, but this finding was attenuated by adjusting for baseline BMI. 
Differences in baseline BMI had more of an impact on the odds of becoming overweight/obese than differences in avocado intake. These results may also be partly explained by the relatively small difference in weight or BMI changes between avocado consumers and non-consumers. Utilizing data from one 24-h dietary recall, collected during several cycles of NHANES (2001\u20132008), Fulgoni and colleagues reported an average (SD) avocado intake among avocado consumers (n = 347) of 70.1 (5.4) g/day . Despite these differences, the findings in our study correspond with what Fulgoni and colleagues have reported. They reported that NHANES avocado consumers had significantly lower weight (78.1 kg versus 81.1 kg) and BMI ( versus 28.4 kg/m2) than non-consumers ,15. Our avocado consumers likewise had lower weight and BMI (26.0 versus 27.3 kg/m2) than non-consumers. Our findings confirm that habitual avocado intake, of fairly minimal average amounts, is associated with lower weight and excess adiposity cross-sectionally. Our longitudinal analyses indicate that among subjects who had a normal BMI at baseline, avocado consumers gained less weight and BMI over time than non-consumers. The differences, however, are small, and for individuals who were normal weight at baseline, the results are attenuated by differences in baseline BMI. Avocados are considered to be a rich source of dietary fiber, which is known to increase satiety . Fulgoni et al. reported that avocado consumers had a significantly higher diet quality than non-consumers . Strengths of our study include the size of the sample and the fact that the analyses were longitudinal. Additionally, more than half of our cohort reported habitual avocado consumption. Avocado intake was quite varied, which makes this a useful cohort to examine the health effects related to intake. 
Due to the unique health habits espoused by most of the cohort members, very few smoke or drink alcohol, which limits confounding by these factors when examining the relationship between diet and health. These unique health habits may, however, somewhat limit the generalizability of the results. Limitations of this study include the use of self-reported anthropometric measurements and dietary intake methods. As validation of these measurements suggests that they are good to excellent approximations of reference standard measurements ,32. In conclusion, habitual avocado intake is associated with a lower prevalence of excess weight, and attenuates adult weight gain in normal-weight individuals over time in this health-oriented population. Higher amounts of habitual avocado intake are associated with lower odds of becoming overweight and/or obese, but this is attenuated by differences in baseline BMI. While clinically the changes in weight are minor, there are possible public health implications of long-term changes in weight at the population level."}
+{"text": "The Morris Water Maze (MWM) test was used to evaluate the success of AD modelling. On this basis, an advanced UPLC-QqQ MS/MS technique was established and applied to determine the levels of 8 neurotransmitters in rat plasma. Significant alteration in methionine, glutamine, and tryptophan was observed in AD rats' plasma after the administration of HLJDD, relative to the model group. Meanwhile, HLJDD could upregulate the levels of SOD, GSH-Px, AMPK, and SIRT1 and downregulate the content of MDA in the peripheral system of the AD rats. The underlying therapeutic mechanism of HLJDD for the treatment of AD was associated with alleviating oxidative stress and inflammation and with regulating neurotransmitters and energy metabolism. These data provide a solid foundation for the potential use of HLJDD to treat AD. Huang-Lian-Jie-Du Decoction (HLJDD), a traditional Chinese medicine (TCM) formula, is proven to have ameliorative effects on the learning and memory deficits of Alzheimer's disease (AD). The current study aims to reveal the underlying mechanism of HLJDD in the treatment of AD by simultaneously determining the regulation by HLJDD of oxidative stress, neurotransmitters, and the AMPK-SIRT1 pathway in AD. The AD rat model was successfully established by injection of D-galactose and A\u03b225-35-ibotenic acid. HLJDD, a classical TCM formula used for heat clearance and detoxification, consists of Rhizoma coptidis (Rc), Radix scutellariae (Rs), Cortex phellodendri (Cp), and Fructus gardeniae (Fg) with a weight ratio of 3\u2009:\u20092\u2009:\u20092\u2009:\u20093. In clinic, HLJDD has been used to treat AD in China and other Asian countries recently , 2. Our recently . Recentl AD mice \u20138. Howev\u03b2 in the AD brain occurred before the development of visible senile plaques and NFTs . A\u03b2 interaction with mononuclear phagocytes, including microglia and recruited peripheral blood monocytes, would further induce neuroinflammation. 
Consequently, the occurrence of oxidative stress, disturbance of the energy metabolism, and varied levels of the neurotransmitters are closely related to the development of AD.AD is an age-related neurodegenerative disorder involving behavioural changes and difficulty in thinking , 10. Theand NFTs . The preand NFTs \u201314. The and NFTs . Neuronsand NFTs \u201318. Neurand NFTs . The int\u03b2 oligomers transiently decreases intracellular ATP levels and AMPK activity [AMP-activated protein kinase (AMPK), a key kinase involved in regulating cell energy metabolism, is an important regulator of A\u03b2 generation , 21. Shoactivity . When coactivity , 23. Accactivity . Therefo\u03b225-35-ibotenic acid were selected as AD model and this aims to explain the potential mechanism of HLJDD in treating AD rats from a new perspective. To the best of our knowledge, this is the first report on the investigation of the therapeutic mechanism of HLJDD on AD rats from the perspective of peripheral oxidative stress, inflammation, energy metabolism, and neurotransmitters.HLJDD, as well as its major components, has ameliorative effects on learning and memory deficits. However, the therapeutic mechanism is still unclear. In this study, the rats injected with D-galactose and AScutellaria baicalensis Georgi (voucher specimen number: SB-0315), Coptis chinensis Franch (voucher specimen number: CC-0311), Phellodendron chinense Schneid (voucher specimen number: PC-0311), and Gardenia jasminoides Ellis (voucher specimen number: GJ-0311), respectively, by professor He Xi-Rong [\u03b225-35, and ibotenic acid were purchased from Sigma . Among them, diazepam was used as the internal standards (S1). Tryptophan and methionine were obtained from Accelerating Scientific and Industrial Development thereby Serving Humanity . Adrenaline and glutamine were provided by The National Institute for the Control of Pharmaceutical and Biological Products . 
HPLC grade methanol and acetonitrile for the qualitative analysis and extraction were obtained from Honeywell Burdick and Jackson . HPLC grade formic acid was provided by Thermo Fisher Scientific , and ultrapure water was purified by a Millipore system . Other chemicals and solvents were of analytical grade. Rs, Rc, Cp, and Fg were obtained from their geoauthentic product areas. These four herbal medicines were authenticated as ciences) \u201327. In the preparation of HLJDD, the four samples of dried plants were ground into powders and mixed in a ratio of 2\u2009:\u20093\u2009:\u20092\u2009:\u20093 (Rs\u2009:\u2009Rc\u2009:\u2009Cp\u2009:\u2009Fg), then decocted twice with boiling water for 2\u2009h. Then, the aqueous extract was concentrated and dried on a rotary vacuum evaporator at 80\u00b0C. The control group (n\u2009=\u20096) received subcutaneous injection of the same volume of saline. On the forty-sixth day, rats were randomly divided into a sham group (n\u2009=\u20095) and an AD model group (n\u2009=\u200920). A\u03b225-35 was dissolved in 0.9% saline and incubated for 7 days at 37\u00b0C; then ibotenic acid was added to form the A\u03b225-35-ibotenic acid solution (4.0\u2009mg/mL A\u03b225-35 and 2.0\u2009mg/mL ibotenic acid). Sodium pentobarbital was intraperitoneally injected before surgery. A\u03b225-35-ibotenic acid solution (2\u2009\u03bcL) was administered into the nucleus basalis magnocellularis (NBM) over a period of 5\u2009min, and then the needle was left in place for 10\u2009min after the infusion. Rats in the sham group were injected with 2\u2009\u03bcL of 0.9% saline by the same procedure. After the operation, all rats received subcutaneous injection of benzylpenicillin sodium immediately to prevent infections. The ethics committees of Cisco North Biotechnology Co., Ltd. and the China Academy of Chinese Medical Sciences approved the experimental protocol. 
The ethical approval number was BJAM2016052105. Wistar rats , purchased from Cisco North Biotechnology Co., Ltd. , were housed in an environmentally controlled room (12\u2009h light cycle) at 20\u2009\u00b1\u20091\u00b0C and 50\u2009\u00b1\u200910% relative humidity and fed with constant access to rodent chow and water. A certain number of rats were randomly selected for subcutaneous injection of 50\u2009mg/kg D-galactose for 45\u2009days, which was dissolved in 0.9% saline (20\u2009mg in 2\u2009mL). After the success of AD modelling, the remaining AD rats were divided into two groups at random: AD rats given HLJDD for one week and AD rats given physiological saline . The other two groups were the sham group (n\u2009=\u20094) and the control group (n\u2009=\u20096). The prepared HLJDD extract powder (HLJDD-EP) solution was administered to the rats by oral gavage for one week at 2\u2009mL/100\u2009g body weight . The sham group, model group, and control group received the same dose of physiological saline by oral gavage. All of the procedures were performed by the same operator according to the manufacturer's protocol. Eight reference standards including serotonin, glutamate, creatinine, arginine, tryptophan, methionine, adrenaline, and glutamine were dissolved in 50% methanol and diluted with 50% methanol (containing 0.1% formic acid) to a series of concentrations. An internal standard stock solution was also prepared with 50% methanol. Aliquots of 100\u2009\u03bcL of plasma were mixed with 10\u2009\u03bcL of ascorbic acid , 10\u2009\u03bcL of IS, and 380\u2009\u03bcL of methanol (containing 0.2% formic acid), respectively. Following vortexing and centrifugation at 12000\u2009rpm for 15\u2009min, the aliquots were analyzed by UPLC-QqQ MS/MS. 
The separation was achieved at 25\u00b0C using an optimized Waters ACQUITY UPLC BEH Amide column . An Agilent 6490 triple quadrupole LC-MS system equipped with a G1311\u2009A quaternary pump, G1322\u2009A vacuum degasser, G1329\u2009A autosampler, and G1316\u2009A thermostat was used for the UPLC-QqQ MS/MS analysis. The mobile phase consisted of acetonitrile containing 0.05% formic acid (solvent A) and water containing 20\u2009mmol ammonium acetate (solvent B). The stepwise linear gradient was optimized as follows: 0\u201320\u2009min, linear from 95% to 70% A; 20\u201321\u2009min, linear from 70% to 50% A; 21\u201324\u2009min, held at 50% A; 24\u201325\u2009min, linear from 50% to 95% A; and 25\u201330\u2009min, held at 95% A for equilibration of the column. The flow rate was 0.3\u2009mL/min. The injection volume was 3\u2009\u03bcL. The analytes were determined by monitoring the precursor-product transitions in MRM mode with ion polarity switching. To ensure the desired abundance of each compound, the CE values and other parameters were optimized and were as follows: cycle time, 300\u2009ms; gas temp, 200\u00b0C; gas flow, 14\u2009L/min; nebulizer, 20\u2009psi; sheath gas flow, 11\u2009L/min; capillary voltage, 3\u2009kV; nozzle voltage, 1.5\u2009kV; and Delta EMV(+), 200\u2009V. The optimized mass transition ion pairs (m/z) and CE values for neurotransmitters are shown in . Tissue samples were stored at \u221220\u00b0C. Protein concentrations were determined by BCA assay kit. After SDS-PAGE, proteins were transferred from the gel to nitrocellulose membranes. Membranes were blocked in 5% non-fat dried milk in TBST (1.65\u2009mL of 20% Tween was added to 700\u2009mL TBS) for 1\u2009h and then incubated overnight with the specific antibodies against AMPK and SIRT1 . After incubation with the corresponding secondary antibody, immunoreactive bands were quantified using an imaging system . 
Values were normalized to the absorbance of \u03b2-actin . Excised hepar, spleen, and kidney tissues were homogenized in ice-cold saline with a SCIENTZ glass homogenizer (DY89-1). Total RNA was extracted from the hepar, spleen, and kidney using the standard Trizol RNA isolation method, and reverse transcription of the RNA (200\u2009\u03bcg) was carried out. The qualities of RNA and cDNA were checked using a 2720 nucleic acid analyzer . Special primers designed against rat SIRT1 and the AMPK subunit were verified in NCBI BLAST. Primers against rat \u03b2-actin were used as the internal control. Sequences of the primers along with their annealing temperatures are shown in . For each reaction, 1\u2009\u03bcL of cDNA was used as the template. Fluorescence was detected using the Roche Light Cycler\u00ae 480II Detection System. PCR products were visualized with gel electrophoresis to confirm a single product of the correct size. Ratios of the target gene to \u03b2-actin were calculated and compared between samples. All values measured were presented as means\u2009\u00b1\u2009SD. Statistical significance was determined by one-way ANOVA followed by Fisher's LSD test or Student's t-tests. A p value less than 0.05 was considered statistically significant. The swimming distance and time were recorded in the MWM test using video tracking . Data from the MWM test differed significantly between the model and control groups (p < 0.01), demonstrating successful establishment of the AD model rats. Compared with the control group (plasma: p < 0.01; hepar, spleen, and kidney: p < 0.05), after the gastric gavage of HLJDD for one week, the level of SOD was improved to different degrees . The concentration of GSH-Px in diverse tissues showed different variations in the sham group . After treating with HLJDD, the contents of GSH-Px went up except in the kidney . MDA, an important product of lipid peroxidation, presented a slight change in the sham group and returned toward normal after the administration of HLJDD . 
In total, the levels of SOD and GSH-Px in the periphery declined and MDA rose in both the sham group and the model group, and the trend of this change was much more obvious in the model group. The oxidative stress-related substances including SOD, MDA, and GSH-Px were measured in the present study. Compared with the control group, the contents of SOD in the plasma, hepar, spleen, and kidney displayed the same downward trend in the sham group and, especially, the model group . Different standard solutions (containing IS) were diluted with 50% methanol (containing 0.2% formic acid) to six different concentrations. The calibration curves of neurotransmitters (response plotted against the concentration) were obtained using the least-squares linear regression fit (y\u2009=\u2009ax\u2009+\u2009b) and a weighting factor of 1/x2. All the calibration curves indicated good linearity with correlation coefficients (r) ranging from 0.991 to 0.999. The limits of detection (LOD\u2009:\u2009S/N\u2009=\u20093) and the limits of quantification (LOQ\u2009:\u2009S/N\u2009=\u200910) were from 0.05 to 40.56\u2009ng/mL and from 0.1 to 101.4\u2009ng/mL, respectively. The precision of the method was determined using quality control (QC) samples (n\u2009=\u20096), and the results are summarized in . and Tg-APP/PS1 mice , 29. In e stress and was e stress , 32. Intathology . Plenty amyloid , 35. Oveediators , 37. Com\u03b225-35-ibotenic acid tended to be constant in the 5th day . After administration for one week, there was a slight fluctuation in the peripheral oxidative stress of the sham group compared to the control group, but it was not obvious. However, in the peripheral system, including plasma, hepar, spleen, and kidney, significantly decreased levels of SOD and GSH-Px and an increased content of MDA were found in the model group by comparison with the control group. 
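The calibration procedure described above (y = ax + b with a 1/x^2 weighting factor, LOD at S/N = 3) can be sketched as follows. The solver and the LOD helper are our own illustration of that scheme, not the instrument software's method, and the numbers in the test are hypothetical.

```python
# Weighted least-squares calibration line y = a*x + b with 1/x^2
# weighting, as described in the text (helper names are ours).
def fit_calibration(x, y):
    w = [1.0 / xi ** 2 for xi in x]          # 1/x^2 weighting factor
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swx2 = sum(wi * xi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    # solve the 2x2 weighted normal equations for slope a and intercept b
    det = sw * swx2 - swx * swx
    a = (sw * swxy - swx * swy) / det
    b = (swx2 * swy - swx * swxy) / det
    return a, b

def lod(a, b, noise):
    # concentration where the predicted signal reaches 3 x noise (S/N = 3)
    return (3.0 * noise - b) / a
```

The 1/x^2 weighting keeps low-concentration points from being swamped by the high end of the curve, which matters when LOD and LOQ are estimated near the bottom of the range.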
Combined with the MWM test, these results indicate that injection of D-galactose and A\u03b225-35-ibotenic acid leads to oxidative stress disorder and cognitive impairment. HLJDD significantly lowered the levels of oxidative stress markers (MDA) in the peripheral system and enhanced the activities of antioxidases (SOD and GSH-Px), as compared with the model group. The cross talk between oxidative stress and A\u03b2 deposition may occur via multiple ways affecting transcription of the APP gene or translation of APP mRNA . Oxidative stress is critical to A\u03b2-induced neurotoxicity , and reducing A\u03b2 deposition may be the potential mechanism of HLJDD in treating AD. In our previous study, 69 compounds of HLJDD were identified, mainly including iridoids, alkaloids, and flavonoids, and berberine is a representative component . The ach 5th day , rats weAPP mRNA . Studiestoxicity , 42. Sevtoxicity . In this1A . Tryptophan, an essential amino acid, is the sole precursor of peripherally produced serotonin (5-HT), and tryptophan metabolism by the kynurenine (Kyn) pathway generates neurotoxic metabolites . Tryptop1A \u201347, whic1A . In the 1A . Meanwhi\u03b1 and \u03b2-secretases, thus affecting APP processing and A\u03b2 generation . SIRT1, an NAD+-dependent histone deacetylase, became famous for slowing aging and decreasing age-related disorders. During oxidative stress, the NAD+-dependent DNA repair enzyme, poly (ADP-ribose) polymerase-1 (PARP), is activated and decreases the NAD+ level, which increases aging . Declines in NAD+ and the NAD+/NADH ratio inversely affect SIRT1, which activates liver kinase B1 (LKB1) to act on AMPK . Berberine reduces A\u03b2 deposition and decreases the expression of \u03b2-secretases via activating AMPK in neuroblastoma cells and primary cultured cortical neurons . In addition, the AMPK-SIRT1 pathway may be closely related to oxidative stress, inflammation, and energy metabolism. 
The increase in the expression of AMPK and SIRT1 by HLJDD may be owing to the reduction in oxidative stress observed in this model, which may further influence inflammation and energy metabolism. AMPK, a major cellular energy sensor, plays a key role in cellular energy homeostasis. AMPK could regulate the expression of neration . SIRT1, es aging . Declinees aging . AMPK co to AMPK . It was to AMPK . Study r to AMPK . Berberi neurons . And eveIn this investigation, a method for the simultaneous measurement of 8 neurotransmitters in AD rat plasma was established using an advanced UPLC-QqQ MS/MS technique. Western blot and real-time PCR were used for the analysis of AMPK and SIRT1 to illuminate the mechanism underlying the anti-inflammatory and energy-metabolism-regulating effects. The underlying therapeutic mechanism of HLJDD for the treatment of AD was associated with alleviating oxidative stress and inflammation and with regulating neurotransmitters and energy metabolism. These data provide a solid foundation for the potential use of HLJDD to treat AD."}
+{"text": "We tested whether this tingling sensation on the lips was modulated by sustained mechanical pressure. Across four experiments, we show that sustained touch inhibits sanshool tingling sensations in a location-specific, pressure-level-dependent, and time-dependent manner. Additional experiments ruled out the mediation of this interaction by nociceptive or affective (C-tactile) channels. These results reveal a novel inhibitory influence from steady pressure onto flutter-range tactile perceptual channels, consistent with early-stage interactions between mechanoreceptor inputs within the somatosensory pathway. Human perception of touch is mediated by inputs from multiple channels. Classical theories postulate independent contributions of each channel to each tactile feature, with little or no interaction between channels. In contrast to this view, we show that inputs from two sub-modalities of mechanical input channels interact to determine tactile perception. The flutter-range vibration channel was activated anomalously using sanshool. Features extracted from stimuli to the skin are conveyed to the brain through distinct classes of afferent fibre ,2. Some convey independent information about specific tactile features . Although the characteristics of each perceptual channel have been explored, little is known about how the information from each channel interacts to provide an overall sense of touch. For example, inhibitory interaction between mechanical and pain/thermal channels has been well established ,7, but ifeatures ,9, and tfeatures \u201313. The features ,14,15. Here we show human psychophysical evidence that signals from different mechanical feature channels do indeed interact to determine tactile perception. Specifically, we show that perception of flutter-range mechanical vibration (putatively corresponding to a rapidly adapting (RA) neurophysiological channel) is inhibited by concurrent activation of the perceptual channel for steady pressure (putatively corresponding to a slowly adapting (SA) channel). 
Thus, \u2018touch inhibits touch\u2019, in a manner similar to the established inhibitory interaction between mechanoreceptive and nociceptive channels (i.e. \u2018touch inhibits pain\u2019) ,7. Here, we show, to our knowledge, the first human psychophysical evidence of this kind, obtained by chemically activating one target tactile feature channel and then measuring the resulting percept in the presence or absence of additional mechanical stimulation to a second channel. In particular, we activated the perceptual flutter-range vibration channel (corresponding to a putative RA channel) using hydroxy-\u03b1-sanshool, a bioactive compound of Szechuan pepper (hereafter sanshool) that produces localized tingling sensations with distinctive tactile qualities. Testing for interaction between perceptual channels might logically involve psychophysical tests of frequency-specific stimuli both alone and in combination. However, delivering pure frequency-resolved stimuli to mechanoreceptors is difficult, because of the complex propagation of mechanical stimuli through the skin . Others have previously demonstrated that sanshool activates the light touch RA fibres \u201320, and 2.(a) A total of 42 right-handed participants (age range: 18\u201338 years) volunteered for experiments 1\u20134 ; experiment 2: 10, five females; experiment 3: 8, six females; experiment 4: 14, 12 females). All participants were naive regarding the experimental purpose and gave informed written consent. All methods and procedures were approved by the University College London Research Ethics Committee. See the electronic supplementary material for the inclusion criteria of each experiment. (b) Experiment 1 tested whether the tingling sensation induced by sanshool (putative RA channel activation \u201323) is modulated by sustained mechanical pressure . This stimulation site was chosen because of its dense innervation of mechanoreceptors . One of eight lip locations (figure 1a) was manually stimulated by the experimenter with a calibrated probe for 10 s. 
Tingling sensation was induced on the upper and lower lip vermilions by applying sanshool with a cotton swab (figure\u00a01a). The locations touched included three positions each on the upper and lower lip vermilion and two positions above and below the vermilion border, respectively (figure\u00a01a). Participants were instructed always to attend to the medial part of the lower lip (position 6 in figure\u00a01a), and to judge the intensity of tingling at this specific target location, while the sustained pressure probe contacted one of the eight locations on the lips. Participants rated the tingling sensation relative to the previous baseline: a rating of 10 indicated that the perceived tingling was at the same level as at baseline; a rating of 0 meant that the participant did not perceive any tingling sensation at all; ratings above 10 indicated a higher tingling intensity than during the baseline period. The rating was given while the mechanical probe remained in stable contact, 10 s after it had first been applied, to minimize any transient effects of touch onset. An inter-trial interval of a few seconds without mechanical stimulation was always included, to allow the tingling sensation to return. The next trial started only when this was confirmed by the participant. The experiment consisted of six blocks. Each block consisted of 10 trials: two trials each on positions 3 and 6, and one on each of the remaining six positions. This was done to increase sensitivity for the conditions we thought most relevant to the interaction hypothesis. The order of locations for mechanical stimulation was randomised within each participant. The data tables for experiments 1\u20134 can be found in the electronic supplementary material, tables S1\u2013S4.(c)
Experiment 2 aimed to replicate and generalize the results of experiment 1, with a largely similar procedure. To ensure that the effect obtained in experiment 1 was not owing to sustained spatial attention to a single target location, participants experienced sanshool tingling all over the lips, and sustained pressure was applied to one of four quadrants (figure\u00a01b), randomly chosen on each trial. This time, instead of only rating the sanshool tingling at a single, fixed location, participants gave separate ratings of tingling intensity for all four lip quadrants, with the order of prompting randomised. Participants completed six blocks; in each block, sustained touch was applied once to each location (16 ratings).(d)Experiment 3 investigated whether sanshool tingling is modulated by different contact force levels, given that SA receptor firing is proportional to contact force [2,24]. Participants rested on a chinrest with their lips kept apart. Prior to the main experiment, we confirmed that the stronger solution of sanshool (lower lip) induced a stronger tingling sensation than the weak solution (upper lip). Next, the medial part of the lower lip, which experienced the stronger tingling sensation, was stimulated with different contact forces. Forces were applied by a closed-loop system comprising a linear actuator and a force gauge, which continuously maintained the desired pressure level. A cotton bud (diameter 4.5 mm) was placed between the force sensor and the lip. Participants performed a two-alternative forced choice comparison task to indicate whether the upper or the lower lip experienced the more intense tingling sensation. In each trial, one of five different levels of force was applied to the lower lip. One second after the onset of steady pressure, an auditory tone signalled that participants should judge whether the lower or the upper lip currently had the higher intensity of tingling sensation. 
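The closed-loop force delivery described above can be illustrated with a toy proportional-control sketch. This is an invented simulation, not the authors' apparatus code: the function names, the controller gain, and the linear skin-stiffness model are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a closed-loop force controller: a proportional
# controller drives a linear actuator so that the force read from a gauge
# stays at a target level. All values are illustrative assumptions.

def control_step(target_n, measured_n, gain=0.5):
    """Return an actuator position increment (mm) from the force error (N)."""
    return gain * (target_n - measured_n)

def simulate(target_n=0.35, stiffness=2.0, steps=50):
    """Toy plant model: contact force = stiffness * probe position (N per mm)."""
    position = 0.0
    trace = []
    for _ in range(steps):
        force = stiffness * position   # force currently read from the gauge
        position += control_step(target_n, force)
        trace.append(force)
    return trace

trace = simulate()
# The simulated force converges towards the 0.35 N target.
assert abs(trace[-1] - 0.35) < 0.01
```

With this gain and stiffness the loop settles within a couple of iterations; a real controller would also need to handle sensor noise and actuator limits.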
Participants performed three blocks, each consisting of 10 repetitions of the five contact forces in random order, giving 150 trials in total. To create the comparison between lips, we had first arranged a situation where tingling intensity was higher for the lower lip than the upper lip, by applying 80% and 20% concentration sanshool solutions to the lower and upper lip, respectively (figure\u00a01c).(e)Experiment 4 tested how tingle intensity varied according to the time course of a sustained pressure stimulus. The discharge rate of SA neurons in response to static touch decreases gradually over time, dropping to 30% of the initial firing after 10 s. We therefore tested whether the suppression of tingling would likewise diminish over time. The set-up was similar to experiment 3 (figure\u00a01d); however, sanshool (80% solution) was applied on the lower lip only, while the upper lip rested on the probe of a vibro-tactile shaker. In each trial, participants first estimated the intensity of sanshool tingling on the lower lip by adjusting the amplitude of a 50 Hz vibration applied to the upper lip (figure\u00a01d; electronic supplementary material, figure S3 and video S1). An auditory signal was delivered when the closed-loop system had achieved a steady force at the target level. Participants were instructed to note the intensity of the tingling sensation on the lower lip at the time of the beep, and to adjust the amplitude of mechanical vibration to the upper lip until it had a perceptually equivalent intensity. They were instructed to make the adjustment as accurately as possible, while taking no longer than 5 s. Two further beeps sounded 5 and 10 s after the initial application of sustained force contact, requiring two further matching attempts. Thus, four successive estimations were collected in each trial, one before and three after the pressure application. The experiment consisted of three blocks, with each block consisting of 10 repetitions of the three force levels. The order of the forces was randomized within each participant.3.(a)
When the probe was applied at the judged target position, tingling intensity was dramatically reduced, to a mean 24.7% \u00b1 s.d. 34.0% of the perceived intensity at baseline before the probe was applied (figure\u00a02a). A one-sample t-test was used to compare the perceived intensity of tingle when the probe was present to the null mean value of 10 (which was defined in our rating scale as the perceived intensity at baseline); the result showed a significant reduction. The tingling sensation at the target position was not affected by pressure on the upper lip or off the lips (all p > 0.25, Bonferroni corrected). However, a significant reduction in tingling intensity relative to baseline was found when pressure was applied to the two lower lip locations adjacent to the judged target location. A repeated measures ANOVA showed a clear spatial gradient on the lower, but not the upper lip.For the quadrant where sustained touch was applied in experiment 2, we replicated the results of experiment 1, finding robust reduction of tingling during pressure relative to the baseline (t9 = 6.30, p < 0.001, dz = 1.99). We re-aligned the rating data of each remaining quadrant relative to the quadrant where the sustained touch was applied (figure\u00a02b). We could thus compare the effect on tingling of delivering sustained touch to either the same lip as the location where the tingling rating was judged, or the other lip, and likewise for sustained touch on the same side of the midline as the rated location, or the opposite side. The realigned data showed significant reduction of the tingling rating from the pre-defined baseline value of 10 at the quadrant where the sustained touch was applied, and also when touch was applied at the other quadrant on the same lip, but not when touching the other lip (figure\u00a02b). Interestingly, on the untouched lip also, the quadrant on the same side as the touch again had lower ratings than the other side. 
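The rating analysis described above (one-sample t-tests against the baseline-defined null of 10, Bonferroni-corrected across probe locations) can be sketched as follows. The data here are invented for illustration and the location labels are assumptions; this is not the authors' analysis script.

```python
# Illustrative sketch (not the authors' code): one-sample t-tests comparing
# tingling ratings during probe contact against the baseline value of 10,
# with Bonferroni correction across tested locations. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented ratings for 10 participants at 3 hypothetical probe locations.
ratings = {
    "target":   rng.normal(2.5, 3.0, 10),   # strong reduction expected
    "adjacent": rng.normal(7.0, 3.0, 10),   # partial reduction
    "upper_lip": rng.normal(10.0, 2.0, 10), # no change expected
}

n_tests = len(ratings)
for location, values in ratings.items():
    t, p = stats.ttest_1samp(values, popmean=10.0)
    p_bonf = min(p * n_tests, 1.0)  # Bonferroni correction
    print(f"{location}: t = {t:.2f}, corrected p = {p_bonf:.4f}")
```

The Bonferroni step simply multiplies each p-value by the number of locations tested, as a conservative guard against multiple comparisons.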
This implies that the inhibition of the tingling depends on the spatial distance between the location where tingling is judged and the location of sustained touch, both within and across lips. Because the lips did not touch during the experiment (see Methods), this rules out mechanical propagation of sustained pressure as the cause of altered RA mechanoreceptor transduction. Instead, the interaction appears to occur at some neural processing level where afferents from the two mechanoreceptor types are integrated in a spatially organised manner.Next, we directly compared the tingle ratings across different locations with respect to the probe. A 2 \u00d7 2 repeated measures ANOVA revealed significant main effects both for the factor of lip and for the factor of side.(c)We first checked that sanshool concentration influenced tingling intensity. As expected, participants reported significantly higher intensity for the 80% concentration on the lower lip (average rating: 6.6 \u00b1 s.d. 1.55) compared to the 20% concentration on the upper lip (average rating: 3.2 \u00b1 s.d. 1.06) (t7 = 6.94, p < 0.001, dz = 2.45). The suppressive effect of pressure on tingling intensity was confirmed by a significant linear trend analysis (figure\u00a02d).(d)For each contact force level, the perceived intensity of tingling was significantly reduced at all time points compared to the initial baseline period without any pressure contact (figure\u00a02e), replicating the result of experiments 1\u20133.After initial inspection of the data, we found that the distribution of the vibration amplitude matches deviated significantly from the normal distribution. The statistical analysis was therefore conducted after log-transforming the data. However, to keep the data on an interpretable scale, we report and show the means and the standard errors in the original units (\u03bcm). The initial perceived tingling on the lower lip without pressure was matched by, on average, 13.9 \u00b5m (\u00b1 s.d. 
5.9) peak-to-peak amplitude of a 50 Hz vibration on the upper lip. Sustained contact force of 0.05 N on the lower lip reduced the tingling to a level that was now matched by 8.4 \u00b5m (\u00b1 s.d. 4.6) of vibration amplitude. Contact forces of 0.20 N and 0.35 N were matched by 8.2 \u00b5m (\u00b1 s.d. 4.1) and 7.6 \u00b5m (\u00b1 s.d. 3.4) vibration amplitudes, respectively (figure\u00a02e).We specifically wanted to investigate whether the reduction of tingling sensation would change with the force level applied, and whether that reduction would change as a function of time from onset of probe contact. A 3 \u00d7 3 repeated measures ANOVA on the vibration amplitude showed significant main effects for both the factor of contact force level (F2,26 = 4.32; p = 0.024) and the factor of time (F2,26 = 4.92; p = 0.015), with no significant interaction (F4,52 = 0.27; p = 0.897) (figure\u00a02e).We used Fisher's LSD method to identify conditions that differed significantly. For the force level factor, estimated tingling amplitude was significantly reduced at the highest force level (0.35 N) compared to the middle force level (0.20 N) (t13 = 3.54, p = 0.004, dz = 0.94). Comparison with the lowest force level (0.05 N) showed a similar trend (figure\u00a02e). Therefore, the intensity of tingling was suppressed in a force-dependent fashion, as expected from experiment 3. We investigated the effect of time in the same way. The perceived tingle intensity recovered as time elapsed (figure\u00a02e). Because activity of SA neurons gradually reduces over time owing to adaptation to sustained pressure input, this modest recovery is consistent with SA-channel involvement.4.Perceptual evidence for neural interactions between somatosensory submodalities has been lacking until now. We have used a novel approach involving anomalous chemical stimulation of mechanoreceptor channels, to show strong inhibitory interactions between distinct perceptual channels encoding different preferred frequencies. 
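The log-transform procedure described above (running the statistics on the log scale when amplitude matches are non-normal, while reporting descriptives in the original units) can be sketched like this. The data are simulated, not the study's measurements, and the sample size and distribution parameters are assumptions.

```python
# A minimal sketch (invented data, not the authors' analysis): check normality
# of vibration-amplitude matches, log-transform for the statistics, and report
# descriptive values in the original units (micrometres).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated amplitude matches for 14 participants (right-skewed, as is
# typical for magnitude data).
amplitudes_um = rng.lognormal(mean=np.log(10.0), sigma=0.4, size=14)

# Normality check on the raw data (Shapiro-Wilk as one common choice).
_, p_raw = stats.shapiro(amplitudes_um)

# Statistics are then run on the log scale...
log_amp = np.log(amplitudes_um)
_, p_log = stats.shapiro(log_amp)

# ...while descriptive values stay in interpretable units.
sem = amplitudes_um.std(ddof=1) / np.sqrt(len(amplitudes_um))
print(f"mean = {amplitudes_um.mean():.1f} um, s.e. = {sem:.1f} um")
```

Reporting means and standard errors in micrometres while testing on the log scale keeps the inference valid without making the numbers harder to read.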
Specifically, we show that the tingling sensation associated with the flutter-range vibratory channel (putative RA channel) is inhibited by the input of sustained pressure (putative SA channel). We further showed that this inhibitory interaction is spatially selective and proportional to the activation of the pressure channel. By combining a mechanical stimulus with an anomalous chemical stimulus, we could avoid the methodological uncertainties of possible nonlinear mechanical interactions between mechanical stimuli that may affect other studies.Somatosensory perception involves integration of multiple features that reach the brain through different afferent channels. A central question is therefore whether and how inputs from these different channels interact with each other [14,27].In the current study, we investigated perceptual channels based on psychophysically defined characteristics. These methods identify perceptual channels by threshold differences across different stimulus frequencies, and by observing perceptual modulations owing to adaptation and masking. Although the peripheral (receptor/afferent fibre) basis of tactile feature processing has been extensively studied by neurophysiologists, we still do not know the precise details of the mapping between channels defined by peripheral physiology and the perceptual channels defined by psychophysics. Nevertheless, the principle of studying central nervous system (CNS) organisation based on psychophysically defined perceptual channels has been well established, for example in the visual system.Sanshool has been shown to activate the rapidly adapting light-touch fibres in rats [20], and sanshool-responsive A\u03b2 neurons have also been recently described in humans. Nevertheless, psychophysical techniques can also help to investigate whether other non-mechanical channels might contribute to sanshool tingling sensations. C-nociceptive and C-tactile fibres have both been suggested to mediate tingle. 
Although our control experiments argue against nociceptive and C-tactile channels mediating the suppression of tingle reported here, some contribution of these channels in humans cannot be fully excluded.Another alternative explanation is based on the effective stimulation at the receptors themselves. Recent studies showed that action potentials are accompanied by mechanical deformations of the cell surface [50]. A direct mechanical effect on the RA receptor should presumably remain constant as long as sustained touch lasts. By contrast, the modest recovery of tingle with continuing pressure is consistent with a neural, as opposed to mechanical, account based on the adaptation of SA afferent firing rates.Our study presents a series of limitations, which should be studied in more detail in the future. First, in our study, channels are defined perceptually, and their identification with specific peripheral receptors and afferent fibres can only be putative. Although the physiological characteristics of sanshool are well studied in animal research [20], the corresponding human physiology remains less clear.Second, the perceptual characteristics of sanshool tingling should be studied in more detail. In our study, we focus on the feature of flutter-level vibration, but other aspects of the sensation remain to be systematically investigated. For example, in a previous study, we have shown that sanshool produces tingling at a frequency of 50 Hz.Third, the duration of static touch varied widely across our experiments. We varied the duration of static touch because our experiments required different numbers of tactile stimulation trials, which all had to be completed within the typical duration of tingling that follows a single application of sanshool (approx. 40 min). Despite the varying tactile durations, we consistently found suppression of tingling sensation, suggesting a rather general effect.Finally, some of our experiments involved manual delivery of tactile stimuli. These cannot provide precise control over contact force. 
Given that RA mechanoreceptors are exquisitely sensitive to dynamic changes in contact force, our experiments 1 and 2 may have included uncontrolled micromovements that activated RA channels. Nevertheless, the precisely controlled mechanical stimuli of experiments 3 and 4, which should have drastically reduced micromovements, also produced a strong attenuation of tingling several seconds after touch onset. These results suggest that the tactile attenuation of tingling is likely to be mediated by an SA rather than by an RA channel activated by unintended micromovements.At what level in the CNS, then, would putative RA and SA channels interact? Either cortical or sub-cortical interactions are possible. Several circuit mechanisms of presynaptic inhibition have recently been described.What might be the functional relevance of a putative SA-RA mechanoreceptor channel interaction? A few studies have previously investigated whether vibration perception is affected by the indentation of the vibrotactile stimulator [58].To our knowledge, the current study is the first to suggest an inhibitory effect of SA on RA signalling. However, previous reports of an effect in the reverse direction, from RA to SA signalling, by Bensmaia and colleagues [60,61], offer important clues to the possible function of such interactions."}
+{"text": "The only licensed live Bacille Calmette-Gu\u00e9rin (BCG) vaccine used to prevent severe childhood tuberculosis comprises genetically divergent strains with variable protective efficacy and rates of BCG-induced adverse events. The whole-genome sequencing (WGS) allowed evaluating the genome stability of BCG strains and the impact of spontaneous heterogeneity in seed and commercial lots on the efficacy of BCG-vaccines in different countries. Our study aimed to assess sequence variations and their putative effects on genes and protein functions in the BCG-1 (Russia) seed lots compared to their progeny isolates available from immunocompetent children with BCG-induced disease .ppsC, eccD5, and eccA5 involved in metabolism and cell wall processes and reportedly associated with virulence in mycobacteria. One isolate preserved variants of its parent seed lot 361 without gain of further changes in the sequence profile within 14\u2009years.Based on the WGS data, we analyzed the links between seed lots 361, 367, and 368 used for vaccine manufacture in Russia in different periods, and their nine progeny isolates recovered from immunocompetent children with BCG-induced disease. The complete catalog of variants in genes relative to the reference genome (GenBank: CP013741) included 4 synonymous and 8 nonsynonymous single nucleotide polymorphisms, and 3 frameshift deletions. Seed lot 361 shared variants with 2 of 6 descendant isolates that had higher proportions of such polymorphisms in several genes, including The background genomic information allowed us for the first time to follow the BCG diversity starting from the freeze-dried seed lots to descendant clinical isolates. Sequence variations in several genes of seed lot 361 did not alter the genomic stability and viability of the vaccine and appeared accumulated in isolates during the survival in the human organism. 
The impact of the observed variations in the context of association with the development of BCG-induced disease should be evaluated in parallel with the immune status and host genetics. Comparative genomic studies of BCG seed lots and their descendant clinical isolates represent a beneficial approach to better understand the molecular bases of efficacy and adverse events during the long-term survival of BCG in the host organism.Treatment of tuberculosis (TB) and vaccination remain essential health-care interventions to prevent new Mycobacterium tuberculosis infections and their progression to disease [2]. The live Bacille Calmette-Gu\u00e9rin (BCG) vaccine is the only licensed vaccine that has been successfully used for nearly a century to prevent childhood TB, although it shows a modest protective effect in adults [4]. The vaccine derives from a Mycobacterium bovis progenitor strain, whose attenuation by numerous in vitro passages resulted in world distribution of the parental BCG strain in 1924. During the next decades, maintenance of the BCG strain progenies by further passages using different culture media and transfer schedules in different countries contributed to the in vitro evolution of BCG. The production of seed stocks without lyophilization until the 1960s resulted in the generation of BCG daughter strains (sub-strains) with variable morphological, culture, and antigenic characteristics, and residual virulence due to adaptation in vitro [7].The up-to-date genealogy of widely used BCG vaccine strains is based on genomic variations that occurred due to large deletions, tandem duplications, insertion sequences, and the subsequent series of insertions/deletions and single nucleotide polymorphisms (SNPs). 
Accordingly, BCG strains were classified into \u201cearly\u201d strains, represented by BCG Japan, Moreau, and Russia (and the descendant BCG Sofia), which show fewer chromosomal deletions than \u201clate\u201d strains such as Danish, Pasteur, Glaxo, and others [11].Historically, the original M. bovis BCG strain was transferred to Russia in 1924, and since 1925 it was referred to as M. bovis BCG-1. In the 1940s, M. bovis BCG-1 was lyophilized, and in 1954 the seed lot system was adopted in Russia. The first seed lot used in 1963 for the manufacturing of freeze-dried BCG vaccines in the former Soviet Union was seed lot 352 (\u201cch\u201d). Since 2006, the 368 (\u201cshch\u201d) generation of M. bovis BCG-1 (Russia) strain no. 700001 has been in use for the production of the BCG vaccine in Russia [13].The attenuated BCG vaccines are believed to be safe and effective mainly in preventing the most severe meningitis and miliary forms of childhood TB associated with high mortality in infants and young children, especially in high TB-burden countries [2]. 
However, vaccine-related complications, i.e., BCG-induced disease (BCG-ID), may occur later than 12\u2009months after vaccination as local site reactions or distal to the site of inoculation, and they range from cutaneous lesions, lymphadenopathy/lymphadenitis, and hypersensitivity reactions to osteitis and disseminated BCG infections [23\u201325].The BCG-induced adverse reactions are believed to be due to the use of genetically distant vaccine strains with variable residual virulence, the culture technique, the route of administration, and the dose of viable cells in the BCG vaccine delivered, along with the age, immune status, and host genetics of the vaccinees [26, 27].To standardize the vaccine production and to prevent adverse reactions related to BCG vaccination, the WHO has established international requirements for the manufacture and quality control, including genetic characterization, of seeds and final commercial lots of the reference strains, i.e., BCG Danish 1331, Tokyo 172\u20131, and Russia BCG-1, used for vaccine manufacture and distribution worldwide [28, 29].Given these recommendations, the identity and genome stability of seven seed lots of M. bovis BCG-1 (Russia), used for immunization against TB in Russia from 1948 to the present, and of 32 commercial lots of the vaccine produced by Russian manufacturers, were proved by multiplex polymerase chain reaction.During the last decades, whole-genome sequencing (WGS) was applied to assess the genome stability of BCG vaccine strains from different culture collections and to evaluate the impact of minor sequence variations occurring in seed and commercial lots on the efficacy of BCG vaccines used in different countries [34]. Recently, WGS was also applied to the M. bovis BCG Moscow strain in South Africa.The complete genome sequences of the current M. 
bovis BCG-1 (Russia) 368 (\u201cshch\u201d) freeze-dried seed lot and its working seed lot used for vaccine production by one of the Russian manufacturers are available in GenBank (NCBI) under accession numbers CP011455, CP00924\u2026, and CP0\u2026. The identity of M. bovis BCG-1 (Russia) 368 within the entire vaccine production flow was confirmed by spoligotyping, 24 loci VNTR-typing, and WGS.Raw reads of seed lots 361, 367, 368, and descendant clinical isolates were mapped to the reference genome of BCG-1 (Russia) (GenBank: CP013741). Variant calling produced the complete catalog of 15 differences, including 12 SNPs (four synonymous and eight nonsynonymous) and three disruptive deletions (frameshift variants), in CDS of the seed lots and clinical isolates relative to the reference genome. An overview of the sequence variation patterns identified in BCG seed lots and their progeny clinical isolates is presented in Table\u00a01. The His57Asp mutation in the pncA (Rv2043c) gene, conferring high intrinsic resistance to pyrazinamide in M. bovis, was detected in all BCG-1 (Russia) progeny isolates. A single-nucleotide insertion in the recA gene was not detected in any of the three seed lots, nor in their descendant clinical isolates.We examined a set of nine BCG-1 (Russia) clinical isolates recovered from duly BCG-vaccinated immunocompetent children with a culture-confirmed diagnosis of BCG-ID. For each case, analysis of medical records allowed us to follow the history of primary BCG vaccination, including information on the commercial vaccine products used for intradermal inoculation, their manufacture, and finally, the parent seed lot (Tables\u00a02 and 3).BCG clinical isolates shared the majority of genomic features with their parent seed lots. However, the proportions of reads with alternative alleles varied between BCG seed lots and their progeny isolates. 
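The read-proportion comparison described above can be sketched with a small helper that computes the fraction of mapped reads supporting the alternative allele at a variant position. The counts below are invented; a real pipeline would take them from a pileup or a VCF's per-sample allele-depth field, which is an assumption here, not the authors' exact workflow.

```python
# Hedged sketch: proportion of reads carrying the alternative allele at a
# variant site, as used to compare polymorphism levels between a seed lot
# and its progeny isolate. All counts are invented for illustration.

def alt_allele_fraction(ref_reads: int, alt_reads: int) -> float:
    """Return the proportion of reads supporting the alternative allele."""
    total = ref_reads + alt_reads
    if total == 0:
        raise ValueError("no reads cover this position")
    return alt_reads / total

# Example: a polymorphic site in a seed lot versus a near-fixed site in a
# descendant isolate (hypothetical counts).
seed_lot_fraction = alt_allele_fraction(ref_reads=70, alt_reads=30)  # 0.30
isolate_fraction = alt_allele_fraction(ref_reads=2, alt_reads=98)    # 0.98
assert seed_lot_fraction < isolate_fraction
```

Comparing these fractions across a seed lot and its descendants is what underlies statements such as "higher proportions (up to 100%) of alternative variants" in the isolates.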
These findings were summarized in Table The population of BCG-1 (Russia) seed lot 361 was polymorphic with variable proportions of SNPs and deletions in reads spanning seven CDS (Table rocD) was detected in almost 1/3 reads of seed lot 361, but it was absent in the descendant isolates. Alternatively, seed lot 361 shared nonsynonymous (missense) SNPs in type VII secretion system ESX-5 (eccD5 and eccA5), dolichol-phosphate mannosyltransferase (ppm1) and hypothetical protease genes with its progeny clinical isolates 2925 and 5448 that had higher proportions (up to 100%) of alternative variants. However, sequences of these isolates differed in amino acid transporter (cycA) and polyketide synthase (ppsC) genes. A nonsynonymous SNP in the PE-PGRS family protein PE_PGRS7 gene was found in 38% reads of isolate 5340, the progeny of seed lot 361.Synonymous SNP in the ornithine aminotransferase gene current seed lot 368 (Table amiD) gene and a frameshift deletion in the thioesterase (pks13) gene with descendant isolate 577, also demonstrating a polymorphism (43% reads) in the ATP synthase subunit beta gene (atpD). Polymorphism in the N-acetylglucosaminyl-diphospho-decaprenol L-rhamnosyltransferase gene (wbbL1) and a frameshift deletion in the universal stress protein UspA gene were identified in clinical isolate 1032 , the progeny of seed lot 368 (Table The sequenced seed lot 368 shared a single base deletion in methionine aminopeptidase (gene wbbL and a frgene wbbL and a frgene wbbL and a frpncA (Rv2043c) gene conferring high intrinsic resistance in M. bovis, and consequently, all BCG vaccine strains [All nine BCG clinical isolates recovered from tissue samples taken during the surgery were susceptible to first-line antibiotics: streptomycin, isoniazid, rifampin, ethambutol. However, they were pyrazinamide-resistant due to the typical His57Asp mutation in strains .glnD (ortholog Rv2918c in M. 
tuberculosis H37Rv) gene, without Ala replacement at position 713 of 808 amino acids in the conservative domain of the PII uridylyltransferase protein, in all genomes studied. The G variant was present in 100% of reads of sequenced seed lots 361 and 367, and in eight of nine clinical isolates. However, it was detected in only 25% of reads of the current seed lot 368 (sequenced in this study), and in 46% of reads of its progeny isolate 577.Previously, spontaneous heterogeneity of BCG seed lots and commercial vaccines during vaccine production, determined by variant calling, was analyzed in the Tokyo-172 vaccine strain. These findings inspired further examination of A/G polymorphisms in glnD in other sequenced BCG genomes.The G variant was previously reported in glnD in BCG-1 (Russia) current seed lot 368 and some earlier vaccine products [36]. The stability of the glnD gene of the BCG-1 (Russia) 368 working seed lot (GenBank: CP013741) and of consistent vaccine batches within 2 years was confirmed by WGS.Interestingly, allele G was recorded in the genomes of BCG str. Tokyo 172, which represents a well-defined Type I subpopulation of BCG Tokyo-172, the closest to BCG Russia (GenBank: NZ_CUWO01000001), and of Russia ATCC 35740, members of a presumably monophyletic \u201cearly\u201d substrain group of BCG [32].Altogether, these data suggest that the intrinsic polymorphism in the glnD gene of the BCG-1 (Russia) strain, represented by different seed lots and descendant clinical isolates, occurs due to the selection of variants during laboratory and vaccine production manipulations, rather than mutations in patients\u2019 organisms. Otherwise, the observed heterogeneity can be partly explained as a result of processing nucleotide variants among the mapped reads using different bioinformatics tools and quality criteria.Another interesting finding concerns a notorious single-nucleotide insertion in the recA gene, leading to nonfunctional truncated recombinase A, reported previously in a vaccine production lot of M. 
bovis BCG Russia (corresponding to ATCC 35740). This insertion was not detected in recA in the previously sequenced complete genomes of BCG-1 (Russia) [35, 36], nor in any of the genomes in our study.Based on the comparative study of WGS data, we attempted to analyze the links between the BCG-1 (Russia) seed lots used for vaccine manufacture and their progeny clinical isolates associated with specific adverse events, e.g., BCG-osteitis and an abscess at the point of injection.The population of BCG-1 (Russia) seed lot 361 appeared heterogeneous, with variable proportions of SNPs and deletions in reads spanning seven CDS. Seed lot 361 shared SNPs in the eccD5, eccA5, and ppm1 genes, and in a hypothetical protease gene, with its progeny clinical isolates 2925 and 5448. The latter typically had higher proportions (reaching up to 100%) of alternative alleles than the parent seed lot 361, thus suggesting the likely accumulation of such changes under selective pressure in the host organism. We observed similar SNPs in the corresponding genomic positions in M. tuberculosis H37Rv and in BCG strains Tokyo-172 type I and Pasteur 1173P2. These polymorphisms affected the type VII specialized bacterial secretion system ESX-5, reportedly associated with virulence in mycobacteria, and the dolichol-phosphate mannosyltransferase and protease genes involved in vital cell processes, intermediary metabolism, and respiration [41]. However, the sequences of these isolates differed in the amino acid transporter (cycA) and polyketide synthase (ppsC) sequences. 
A unique missense variant in the PE-PGRS family protein PE_PGRS7 gene, with an unclear function in antigenic variation and interactions of bacteria with the host immune system, was found in 38% of reads of isolate 5340, the progeny of seed lot 361.Interestingly, no sequence polymorphisms (except glnD) were detected in isolates 1986, 4159, and 5075, progenies of seed lot 361 (used for BCG manufacture in 2002\u20132007), compared to the reference sequence, i.e., BCG-1 (Russia) current seed lot 368.The sequences of seed lot 367 (used for vaccine production in 2001\u20132008) also did not show variations in 14 of 15 CDS compared to the reference genome and thus appeared the most conservative in our study. However, its single progeny, isolate 3363, carried a missense variant in the amidase (amiD) gene.The population of the sequenced seed lot 368 was almost homogenous compared to the previously reported complete genomes of BCG-1 (Russia) current-generation 368, containing a single base deletion (69% of reads) in the methionine aminopeptidase (mapA) gene, essential for protein translation and required for mycobacterial survival during infection (virulence factor), and a frameshift deletion in the thioesterase region of the polyketide synthase 13 (pks13) gene, involved in the final assembly of mycolic acids, the major and specific lipid components of the envelope essential for the survival of a mycobacterial cell. Its descendant isolate 577 shared these variants and also demonstrated a polymorphism in the ATP synthase subunit beta gene (atpD). Both proteins encoded by the mapA and atpD genes are involved in the processes of intermediary metabolism and respiration.A synonymous SNP in the N-acetylglucosaminyl-diphospho-decaprenol L-rhamnosyltransferase gene (wbbL1), not altering the enzyme structure and function in the cell wall and cell processes, was identified in clinical isolate 1032 (18% of reads). Another finding in isolate 1032 (28% of reads) was a frameshift due to a single-base cytosine deletion in codon 630 of the universal stress protein UspA gene (orthologue Rv1996 in M. 
tuberculosis H37Rv), one of eight Usp paralogues essential for the intracellular survival of pathogenic mycobacteria in hypoxia, low levels of nitric oxide and carbon monoxide environment. This deletion completely altering 27 amino acids from the position 212 onwards resulted in a gained opal stop codon (TGA). The latter was leading to a premature protein truncation in the residue 239 yielding to the loss of 3 ligand-binding loci in the putative ATP binding motif of the UspA domain 2 [A synonymous SNP in the N-acetylglucosaminyl-diphospho-decaprenol L-rhamnosyltransferase gene , 3363 (progeny of seed lot 367), and 5340 (progeny of seed lot 361) did not affect the vaccine survival in the organisms of vaccinees. Moreover, none of the frameshifts, which presumably could alter essential cell functions, were crucial for the microbial population in vitro, nor in vivo, thus emphasizing the long-term viability of BCG-1 (Russia) strain in the human organism.Adverse events following immunization with BCG are considered underreported since assays to confirm isolates as BCG are still relatively rare. Moreover, based on the routine laboratory diagnostics and even molecular techniques, it is difficult to associate the clinical BCG isolates with the particular vaccine lot administered \u201320. TherUnfortunately, the administered commercial vaccines were unavailable for WGS, which limited the comparative analysis of sequence variations through the whole vaccine production chain. 
Nevertheless, bioinformatics tools made it possible to identify sequence variants and their putative effects on gene and protein functions in BCG-1 (Russia) seed lots 361 and 368, used for vaccine manufacture in Russia in different periods, relative to their progeny clinical isolates from patients with confirmed BCG-related disease and to the genomes of internationally recognized BCG reference strains. BCG osteitis is a rare but severe adverse event that typically presents in children before the age of 5\u2009years, and there are few reports concerning the late onset of the disease in teenagers [19]. To our knowledge, we reported for the first time the results of a WGS-based comparative analysis of the BCG-1 (Russia) seed lots used for vaccine production in different periods and their progeny clinical isolates recovered from immunocompetent children with BCG-induced adverse reactions. Our study provided the background genomic information that allowed us to follow BCG diversity from the freeze-dried seed lots to descendant clinical isolates, presenting the results of the long-term survival of vaccine populations in the human organism. Two of three BCG-1 (Russia) seed lots were heterogeneous, with various proportions of sequence variants in several genes, including ppsC, eccD5, and eccA5, involved in metabolism and cell wall processes and reportedly associated with virulence in mycobacteria. However, such polymorphisms did not alter the genomic stability and viability of BCG-1 (Russia)-derived vaccines in vivo. Sequence variants occurring in polymorphic CDS of seed lot 361, used for vaccine manufacture before 2006, appeared to accumulate in two of six progeny isolates, hypothetically under selective pressure in the human organism. Although polymorphisms were identified in two unrelated clinical isolates, 2925 and 5448, from children immunized in different years with different vaccine lots produced from the BCG-1 (Russia) seed lot 361 by the same manufacturer, the impact of such changes on the development of BCG-induced disease remains questionable due to the lack of detailed information on the immune status and host genetics of the presumably immunocompetent patients with BCG-osteitis. Despite promising results in the development of a new generation of BCG vaccines, there is no reliable alternative to the vaccines still in use for the near future. The contributions of genomic diversity and variations in gene expression profiles during the BCG vaccine manufacturing flow and adaptation in vivo are yet to be uncovered, in parallel with the molecular mechanisms of innate and adaptive immune responses in the human organism. Thereby, comparative studies of genomic variations in production strains, their seeds, administered vaccine lots, and descendant clinical isolates (if available) represent a beneficial approach to better understand the bases of vaccine efficacy and adverse reactions of present and future BCG-based vaccines at the genomic, transcriptomic, and proteomic levels. We retrospectively reviewed 65 medical records at the clinics of St. Petersburg Research Institute of Phthisiopulmonology, selected by codes A18.0 and A18.2 according to the International Statistical Classification of Diseases and Related Health Problems (ICD-10). 
Probable BCG-ID was suspected in nine immunocompetent children (eight patients aged under 5 years and one 14-year-old patient) admitted for treatment in 2006\u20132018 with osteitis (eight patients) or a cutaneous lesion near the BCG inoculation site (one patient), with well-defined BCG inoculation records and without a history of TB contacts. The M. bovis BCG cultures were stored at \u2212\u200980\u2009\u00b0C for further examination. The definition for the diagnosis of BCG-ID (\u201cBCG-itis\u201d) included 1) histology showing granulation tissue with caseous necrosis in biopsy samples taken during surgery and 2) mycobacterial cultures recovered from tissue samples on L\u00f6wenstein-Jensen (L-J) medium and identified as M. bovis BCG. Drug susceptibility to first-line antibiotics and pyrazinamide of the M. bovis BCG cultures was assessed using the conventional absolute concentration method and/or the BACTEC MGIT 960 System. For genomic analysis, we used genomic DNA extracted as described from the M. bovis BCG-1 (Russia) freeze-dried seed lots and nine BCG clinical isolates cultured on L-J medium, each recovered from a child patient with confirmed BCG-ID. The lyophilized seed lots of three generations, 361 (\u201csh\u201d), 367 (\u201cshch\u201d), and 368 (\u201cshch\u201d), of M. bovis BCG-1 (Russia) substrain No. 700001 in ampoules were obtained from the National State Collection of Pathogenic Microorganisms (NSCPM) of the Scientific Center for Expertise of Medical Application Products of the Ministry of Health, Russia, along with the characteristics (the origin and manufacture) of the corresponding BCG vaccine commercial lots used for immunization (Table). Clinical isolates 1986, 2925, 4159, 5340, 5075, and 5448, progenies of BCG-1 (Russia) seed lot 361 (\u201csh\u201d), isolate 3363, a progeny of seed lot 367 (\u201cshch\u201d), and isolate 1032, a progeny of seed lot 368 (\u201cshch\u201d), were obtained from biopsies of patients with BCG-osteitis affecting the femur, tibia, calcaneus, or ribs; isolate 577, a progeny of seed lot 368 (\u201cshch\u201d), was obtained from a patient with an abscess at the point of injection (Tables). DNA samples were purified by RNase A (Thermo Scientific\u2122 #EN0531) and sonicated using the Bioruptor\u00ae UCD-200 system to prepare paired-end (P.E.) libraries following the NEB Library Preparation Protocol, as seen in the product manual for the NEBNext Ultra II DNA Library Preparation Kit for Illumina (NEB #E7645). P.E. genomic libraries were subjected to WGS using the MiSeq platform; an average coverage of 28x was achieved. Raw P.E. reads were qualitatively analyzed using FastQC software (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) to retrieve proper parameters for trimming and filtering with Trimmomatic software version 0.32. Trimmed reads were aligned to the reference genome of M. bovis BCG-1 (Russia), available at NCBI GenBank under the accession number CP013741 (RefSeq: NZ_CP013741.1), using the BWA-MEM algorithm. Alignments were sorted, deduplicated, and indexed with the Picard (http://broadinstitute.github.io/picard/) SortSam, MarkDuplicates, and BuildBamIndex tools, respectively. Next, deduplicated reads were subjected to realignment based on the realignment target list generated with the Genome Analysis Toolkit (GATK) software version 4.1.1.0 IndelRealigner tool. Sequence variants were identified relative to M. bovis BCG-1 (Russia) current seed lot 368 (\u201cshch\u201d), available at NCBI under the accession number CP013741. Particular genome features were retrieved from the PATRIC platform 3.6.2 (https://www.patricbrc.org/view/Taxonomy/1763), UniProt (https://www.uniprot.org/), and Mycobrowser (https://mycobrowser.epfl.ch). The pncA gene was searched for mutations conferring resistance to pyrazinamide in BCG clinical isolates. Reference genomic sequences of M. tuberculosis H37Rv (NC_000962.3) and M. bovis BCG strains BCG-1 (Russia), Tokyo-172 type I (NC_012207), Pasteur 1173P2 (NC_008769), BCG Russia (NZ_CUWO01000001), and Danish 1331 (NZ_CP039850) were used to analyze variations in coding DNA sequences (CDS) with the NCBI basic local alignment and search tool (BLAST) (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Additional file 1: Table S1. Sequence variants in BCG-1 (Russia) parent seed lots 361, 367, and 368 and their progeny clinical isolates (complete information). Sequence variants are presented as reads with the alternative allele/all reads, ratio, %; del, deletion; fs, frameshift. Additional file 2: Figure S1. Comparative circular genome diagram of M. bovis BCG-1 (Russia) vaccine seeds and clinical isolates (circos plot created with Circa, http://omgenomics.com/circa). Tracks from inside: scale in megabases; G.C. content histogram (yellow fill with black stroke) calculated from the ratio (G\u2009+\u2009C)/(A\u2009+\u2009T\u2009+\u2009G\u2009+\u2009C) using a 1 kilobase (kb) non-overlapping sliding window; G.C. bias plots calculated from the ratio (G\u2009\u2212\u2009C)/(G\u2009+\u2009C) using a 1\u2009kb non-overlapping window (blue plot) and a 10\u2009kb non-overlapping window (translucent black plot); strain number labeled tracks each depicting strain-specific insertions/deletions (blue/red transverse marks) and SNPs (dots*) identified relative to the reference genome of M. 
bovis BCG-1 (Russia) (GenBank accession number CP013741); tracks depicting ORFs (blue transverse marks) alongside affected CDSs on forward and reverse strands; names of affected genes colored according to an impact type of the most severe variant affecting a particular gene. Notes. *Sizes and colors determine a particular SNP type: small black \u2013 synonymous SNP, large orange \u2013 missense SNP; \u2020color determines an impact type of a particular variant on a particular CDS assigned according to the following concept: low impact (black mark) \u2013 synonymous SNPs, moderate impact (orange mark) \u2013 missense SNPs and conservative insertions/deletions, high impact (red mark) \u2013 nonsense SNPs and frameshift insertions/deletions. (PPTX 311 kb)Additional file 3: Figure S2. UspA domain 2: sequence alteration and truncation. (A) Sequences of affected UspA gene region. The top two rows depict reference region aligned against respective loci of strain 1032, indicating cytosine deletion in the codon 210. The third row represents the consensus sequence of the region highlighting premature opal (TGA) stop codon. (B) UspA amino acid sequence alteration and protein truncation. Domain 2 altered region and corresponding changed residues indicated in red. Ligand-binding sites depicted as green regions. (PPTX 59 kb)"}
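The G.C. content and G.C. bias tracks described in the Figure S1 legend are simple windowed statistics: (G + C)/(A + T + G + C) and (G − C)/(G + C) over non-overlapping windows. A minimal, illustrative sketch of how such tracks can be computed (not part of the authors' pipeline; the function name and window size are arbitrary):

```python
# Illustrative sketch: per-window G.C. content and G.C. bias (skew) as in the
# circular genome diagram legend. Not the authors' code; names are assumptions.
def gc_tracks(seq: str, window: int = 1000):
    """Return (GC content, GC skew) lists for non-overlapping windows."""
    content, skew = [], []
    for start in range(0, len(seq) - window + 1, window):
        w = seq[start:start + window].upper()
        g, c = w.count("G"), w.count("C")
        content.append((g + c) / window)                   # (G+C)/(A+T+G+C)
        skew.append((g - c) / (g + c) if g + c else 0.0)   # (G-C)/(G+C)
    return content, skew

# Toy sequence: 400 G, 200 C, 200 A, 200 T in a single 1 kb window.
content, skew = gc_tracks("GGGGCCAATT" * 100, window=1000)
```

A real track would be computed over the CP013741 reference sequence and plotted per 1 kb and 10 kb windows, as in the legend.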
+{"text": "While the majority of influenza-infected individuals show no or mild symptomatology, pregnant women are at higher risk of complications and infection-associated mortality. Although enhanced lung pathology and dysregulated hormones are thought to underlie adverse pregnancy outcomes following influenza infection, how pregnancy confounds long-term maternal anti-influenza immunity remains to be elucidated. Previously, we linked seasonal influenza infection to clinical observations of adverse pregnancy outcomes, enhanced lung and placental histopathology, and reduced control of viral replication in lungs of infected pregnant mothers. Here, we expand on this work and demonstrate that lower infectious doses of the pandemic A/California/07/2009 influenza virus generated adverse gestational outcomes similar to higher doses of seasonal viruses. Mice infected during pregnancy demonstrated lower hemagglutination inhibition and neutralizing antibody titers than non-pregnant animals until 63 days post infection. These differences in humoral immunity suggest that pregnancy impacts antibody maturation mechanisms without alterations to B cell frequency or antibody secretion. This is further supported by transcriptional analysis of plasmablasts, which demonstrate downregulated B cell metabolism and post-translational modification systems only among pregnant animals. In sum, these findings corroborate a link between adverse pregnancy outcomes and severe pathology observed during pandemic influenza infection. Furthermore, our data propose that pregnancy directly confounds humoral responses following influenza infection which resolves post-partem. Additional studies are required to specify the involvement of plasmablast metabolism with early humoral immunity abnormalities to best guide vaccination strategies and improve our understanding of the immunological consequences of pregnancy. 
The normal response to influenza A infection ranges from asymptomatic to mild; indeed, a serosurveillance study of volunteers who tested positive for antibodies against H1N1 revealed that the majority did not experience any symptoms. Mice were intranasally infected with 30 \u03bcl of 2 \u00d7 LD50 of mouse-adapted A/California/07/2009, approximately 120 p.f.u. Non-infected mice were intranasally dosed with a dilution of BALB/c lung lysate in sterile PBS. Animal studies were conducted according to Emory University Institutional Animal Care and Use Committee (IACUC) guidelines outlined in an approved protocol (PROTO201800113) in compliance with the United States Federal Animal Welfare Act (PL 89\u2013544) and subsequent amendments. Timed pregnancies were generated in 8- to 12-week old female BALB/c mice as described previously. Mice were euthanized for organ collection at 0, 2, 4, 7, 10, 14, and 42 days post-infection. Lung and placental samples were weighed, homogenized into 1x DMEM (ThermoFisher)/1x Halt Protease Inhibitor, and stored at \u221280\u00b0C for western blots. Left lung lobes were homogenized in serum-free DMEM and stored in 1x Halt Protease Inhibitor (ThermoFisher) for viral titers. Viral titers were determined by plaque assay in MDCK cells as described previously. Organs were isolated at the indicated days post infection and submerged in histology cassettes in 4% paraformaldehyde overnight at 4\u00b0C. Tissues were embedded in paraffin, sectioned at 4 \u03bcm, and fixed to glass microscopy slides. Hematoxylin and eosin staining was performed by the Yerkes National Primate Research Center Pathology Core, and slides were imaged on a Zeiss Axioscope with a SpotFlex 15.2 camera and Spot Advanced 4.7 software. Cytokines were quantified in sera, lungs, and placentas at days 4 and 7 post-infection using the Bio-Rad 23-plex Mouse Group I Cytokine Assay Kit per the manufacturer's instructions. 
Progesterone and prostaglandin F2\u03b1 (PGF2\u03b1) were quantified with hormone ELISA kits. Cytokine and hormone expression values were normalized by dilution and tissue lysate concentration. Fold changes were plotted in heat maps, and raw values were plotted and reported in the supplementary material. Molecular analysis of protein expression was performed on 5 mg of tissue lysate via Western blot. Purified anti-murine MMP-9 and anti-murine MMP-2 were detected with goat anti-rabbit antibody. Rabbit anti-murine COX-2 and rabbit anti-murine C13orf24 (PIBF) were detected with goat anti-rabbit antibodies. Rabbit anti-murine \u03b2-actin was used as a loading control with goat anti-rabbit secondary antibodies. Blots were developed with Super Signal Femto Maximum Sensitivity Substrate (Thermo Scientific 34096) and imaged using a Bio-Rad ChemiDoc Touch. Volume intensity of signal was normalized to \u03b2-actin loading controls. Antibody titers were determined via ELISA with biotinylated anti-IgG, IgM, and IgA antibodies against recombinant hemagglutinin H1N1 (rH1) A/California/07/2009 as described previously. Single cell suspensions were obtained from blood, spleen, liver, uterus, placenta, and lung-draining mediastinal lymph nodes (MLN) 2 days after infection. For innate immune cell analysis, cells were stained with anti-CD11c (N418), -CD11b (M1/70), -Ly6G (1A8), -F4/80 (BM8), -CD45 (30-F11), -IA/IE (M5/114.15.2), -CD64 (C54-5/7.1), and -CD24 (30-F1). Dead cells were excluded by gating out cells positive for Live/Dead fixable dead stain. Following fixation in 2% paraformaldehyde, samples were acquired with an LSRII and analyzed using FlowJo v9. Gating methodologies are depicted in the supplementary figures. To quantify antibody-secreting cells (ASCs), polyvinylidene fluoride ELISpot plates were coated with 200 ng/well rH1. Splenocytes and lung lymphocytes (1 \u00d7 10^6 cells/well) were incubated at 37\u00b0C for 16 h. Influenza-specific antibodies were detected using isotype-specific, biotinylated murine Ig antibodies. To detect cytokine-secreting cells, 5 \u00d7 10^5 cells/well were overlaid in ELISPOT plates coated with 100 ng/well of capture antibody. Cells were stimulated with 200 ng/well rH1 for 48 h at 37\u00b0C. The cells were washed and incubated with 100 ng/well biotinylated detection antibodies and developed with streptavidin-HRP and diaminobenzidine. Plates were analyzed via ImmunoSpot Reader 5.0 and normalized to 1 \u00d7 10^6 cells/well. Plasmablasts (B220int) were sorted from the spleens of pregnant and non-pregnant influenza virus-infected mice at 10 days post infection. Single-end RNA sequencing was performed on samples in duplicate to increase sequencing depth (101 base pair reads). Sequencing was checked for quality using FastQC. Processing of single-end reads was performed within the R programming language. The Rsubread package align function was used to align reads to the Mus musculus genome (GRCm38.93). BAM files of duplicate samples were merged prior to the counting of reads for each gene using the featureCounts function of the Rsubread package. Differential gene expression analysis was performed in R using the DESeq2 package. Placental progesterone and PGF2\u03b1 data were analyzed using Welch's t-test. Placental western blot data were analyzed using two-way ANOVA with Sidak correction for multiple comparisons. Cell frequencies in lungs, uterus, and placenta were analyzed via two-way ANOVA with Sidak correction for multiple comparisons. Antibody and cytokine secreting cell data were analyzed with two-way ANOVA with Sidak correction for multiple comparisons. All error bars represent the standard error of the mean (SEM), and data represent the mean except where indicated. All analysis, graphing, and annotation was performed in GraphPad Prism 8. \u201cn\u201d represents the number of mice used for each experiment. 
Weights were analyzed using two-way ANOVA with Tukey post-hoc corrections. Survival data were analyzed using the Mantel-Cox test, and gestation length data were analyzed using a Kruskal-Wallis test with Dunn's correction. To determine the impact of pandemic influenza virus infection on maternal health, we infected non-pregnant and E10-12 pregnant BALB/c mice with a low (0.5 \u00d7 LD50) and a high (2 \u00d7 LD50) dose of mouse-adapted H1N1 A/California/07/2009 to recapitulate human infection at the end of the 2nd to 3rd trimester of pregnancy. Uninfected pregnant animals reached a peak bodyweight of 41.2% greater than their original bodyweight at 16 days of gestation, low dose (0.5 \u00d7 LD50) infected pregnant animals reached a peak bodyweight of 37.8% greater than their original bodyweight at the same time point (p < 0.0001), and 26.4% at 17.5 days following high dose influenza infection (p = 0.0001). Infection with pandemic influenza H1N1 A/California/07/2009 virus interrupted the normal progression of pregnancy; we observed attenuated weight gain and pre-term labor in a dose-dependent manner at both high and low infectious doses, and continued weight loss following delivery at high doses in our mouse model. Interestingly, we did not observe significantly differing responses in high dose infected pregnant and non-pregnant animals, as both these groups experienced comparable weight loss and complete mortality by day 8. Therefore, low dose infection recapitulates the clinical phenotype of morbidity and mortality observed in human women infected mid-gestation with pandemic influenza virus. Next, we examined the impact of A/California/07/2009 infection on offspring viability and health status. Numbers of viable and non-viable pups were recorded; bodyweights were taken at birth and classified as non-viable (\u22641.0 g), small for gestational age (SGA) (1.0\u20131.25 g), and healthy (>1.25 g). 
The average offspring weight from uninfected pregnant mice was 1.4 g. Overall, offspring born from infected mothers had reduced body weight compared to offspring of uninfected mothers. A low dose of influenza greatly increased the incidence of stillborn pups from 1.4 to 22%; a high dose further increased the percentage of stillbirth to 25% of offspring, an 18-fold increase over uninfected dams. The frequency of SGA offspring increased by 3.3-fold at high infectious dose compared to uninfected control mothers. Low birthweight is a common outcome of pregnant mothers infected with pandemic H1N1 influenza virus during the second and third trimester. Contrary to the pronounced PGF2\u03b1 response seen in non-pregnant mice, infection did not increase PGF2\u03b1 levels in pregnant mice. However, pregnant infected and non-pregnant infected controls had elevated lung PGF2\u03b1 through 42 DPI. Infected pregnant mice had 1.8-fold higher (p = 0.0002), and infected non-pregnant mice 3-fold higher (p < 0.0001), PGF2\u03b1 than their respective uninfected controls. Therefore, while PGF2\u03b1 levels were maintained following influenza infection, reduced progesterone levels and the negative feedback loop between PGF2\u03b1 and progesterone suggest a mechanism for reduced gestation length. PGF2\u03b1 is a naturally occurring prostaglandin which is clinically used to induce labor or as an abortifacient. This differential concentration of COX-2 isoforms was not observed in either group of uninfected animals. 
We further evaluated the hormone regulators cyclooxygenase-2 (COX-2) and progesterone-induced blocking factor (PIBF). COX-2 is a key regulator in the arachidonic acid and prostaglandin synthesis pathway, and increased COX-2 expression is associated with increased PGF2\u03b1 secretion by bronchial epithelial cells. Further, the 45 kD COX-2 isoform was decreased 2.2-fold in lungs of infected pregnant mice compared to uninfected pregnant controls (p < 0.0001), and the 69 kD form was reduced 5.3-fold (p < 0.0001). This reduction was not significant in non-pregnant infected animals. Successful pregnancies require expression of PIBF, which promotes Th2 cytokine production and inhibition of NK cell activity, resulting in reduced cytolytic activity at the fetal-maternal interface and potentially increasing maternal susceptibility to infection. However, infection did not significantly affect PIBF 34 kD in pregnant animals (p > 0.05 for 4 and 7 DPI). The 55 kD isoform was differentially affected by infection only in pregnant mice; at 7 DPI, uninfected pregnant mice had 3.2-fold higher levels of this protein in the lungs than infected pregnant mice (p < 0.0001). The 66 kD splice variant was strongly downregulated in pregnant infected mice compared to uninfected pregnant controls; at 4 DPI, uninfected mice had 9.7-fold higher expression (p < 0.0001), and at 7 DPI, 7.2-fold higher expression (p < 0.0001). This loss of immunomodulatory PIBF 55 and 66 kD during infection may explain the reduced gestation length in mice infected with 0.5 \u00d7 LD50 A/California/07/2009 due to reduced inhibition of NK cell activity. Lastly, the 90 kD isoform of PIBF was unaffected by infection or pregnancy. The lack of pregnancy-associated changes in PIBF isoforms 34 and 90 kD in the lungs suggests that these isoforms may exert a maternal protective mechanism. 
Thus, preterm birth in our model is correlated with increased acute viral titers and enhanced lung pathology. Cytokine profiling suggests that the status of pregnancy contributes to acute lung inflammation through immune mediators that affect recruitment of innate and adaptive immune cells to the lungs at peak influenza viremia. We observed upregulated mediators of monocyte and neutrophil recruitment in the lungs, which may account for increased acute viral titers and enhanced pathology. Further, infection resulted in long-term PGF2\u03b1 upregulation, which is responsible for bronchoconstriction and pulmonary vasoconstriction leading to reduced vital capacity, respiratory compromise, and retardation of fetal growth. Lung progesterone\u2014a bronchodilator that increases with pregnancy\u2014factors in respiratory distress, since its reduction leads to bronchoconstriction. Placental or uterine inflammation caused by infection or autoimmune responses may induce adverse pregnancy outcomes, including preeclampsia, endometriosis, and spontaneous abortion. Serum progesterone was unaffected by infection in the non-pregnant cohort (p > 0.5 at all timepoints) but was increased 2.3-fold in the infected pregnant animals compared to the uninfected pregnant controls at 4 DPI (p = 0.0004). Cytokine production analysis revealed that infected non-pregnant and pregnant animals had similar changes relative to their uninfected controls at 4 DPI. In sum, patterns of cytokine production in serum in response to influenza infection were similar between pregnant and non-pregnant animals; however, at 7 DPI, pregnant animals tended to have exacerbated cytokine and chemokine production compared to non-pregnant infected controls. 
This presence of inflammatory cytokines and chemokines in the serum may play a role in adverse outcomes at the fetal-maternal interface. In the placenta, IL-1\u03b2 increased 1.4-fold (q = 0.03), IL-6 1.2-fold (q = 0.03), and TNF-\u03b1 1.3-fold (q = 0.03); IL-9 increased 3.4-fold (q = 0.04), IL-5 changed significantly (q = 0.03), and RANTES concentration increased 1.4-fold above control (q = 0.03). Overall, most cytokines were increased much less than their serum counterparts; the only cytokines increased specifically at the placenta were T-cell growth factors. This indicates that the placenta and the fetuses are mostly shielded from the maternal inflammatory cytokine signature. Recruitment and activation of granulocytes to the placenta by the local cytokine milieu may mediate the observed loss of placental architecture and, in conjunction with loss of progesterone and PIBF, induce poor fetal outcomes. Infection increased the percentage of lung neutrophils in pregnant animals (p = 0.0001), whereas it did not affect the percent of lung neutrophils in non-pregnant animals. Infection resulted in total loss of all uterine eosinophils in both non-pregnant (p < 0.001) and pregnant animals (p < 0.001). Placentas from dead pups had 54.9% neutrophils (p < 0.0001). Within the infected pregnant cohort, placentas from stillborn pups had 2.7-fold more monocytes than placentas from live pups (p = 0.0001). 
Enhanced lung pathology in our pregnant mouse model was accompanied by increased neutrophilic proportions and reduced local adaptive immune populations. Shortened gestation times, stillbirth, and SGA pups following sublethal influenza virus infection in our model were associated with increased macrophages and T cells in the uterus and increased neutrophils and T cells in the placenta. Loss of placental monocytes following influenza infection may further propagate loss of placental architecture and fetal tolerance, as placental macrophages play pivotal roles in placental homeostasis and immunity. T cell capacities to produce TH1- and TH2-type cytokines have been reported to decrease during pregnancy. Progesterone is an established negative regulator of B cell activation. As hemagglutinin (HA) is indispensable to influenza binding to cellular sialic acids and therefore viral entry, we determined the level of hemagglutination inhibition by influenza virus-specific antibodies developed in pregnant and non-pregnant animals. By 14 DPI, both pregnant and non-pregnant animals had A/California/07/2009-specific HAI titers of at least 40, which correlate with protection. These titers continued rising in both groups through 42 DPI. Pregnant mice, however, showed delayed increases compared to non-pregnant controls, although endpoint titers at 84 DPI were comparable at 592 and 512, respectively. Differential gene expression was assessed with a p-value threshold of 0.01 and a log2 fold change threshold of 2 (adj < 0.05). Only one gene set was positively enriched, the mitotic spindle gene set, whose members are involved in mitotic spindle assembly. Importantly, we expand upon previous work to evaluate the quality of adaptive immune responses. 
How the condition of pregnancy impacts long-term maternal anti-influenza immunity is contentious, and published research on adaptive immune responses in animal models of influenza during pregnancy is limited. Imbalanced antibody responses have been described in pregnant women hospitalized with 2009 H1N1 influenza. EL and IS conceived of and planned experiments. EL, KB, EE, LM, DW, and OA carried out experiments. EL, DS, LM, JB, and KB contributed to the interpretation of the results. DS wrote the manuscript. All authors provided critical feedback and helped shape the research, analysis, and manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
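The differential-expression cutoffs mentioned in the transcriptional analysis (a log2 fold change threshold of 2 with an adjusted p-value cutoff) amount to a simple filter over DESeq2-style results. A minimal sketch under those assumptions; the function, gene names, and demo table are illustrative, not the authors' code, and the exact thresholds in the text are partly garbled:

```python
# Illustrative filter for DESeq2-style results: keep genes with
# |log2 fold change| >= 2 and adjusted p-value < 0.05. Gene names are made up.
def call_de_genes(results, padj_cutoff=0.05, lfc_cutoff=2.0):
    """results: iterable of (gene, log2_fold_change, adjusted_p) tuples.
    Returns genes passing both the significance and effect-size cutoffs."""
    return [gene for gene, lfc, padj in results
            if padj is not None and padj < padj_cutoff and abs(lfc) >= lfc_cutoff]

demo = [
    ("GeneA", 2.4, 0.001),   # upregulated, significant -> called
    ("GeneB", 0.1, 0.90),    # no change                -> filtered out
    ("GeneC", -2.5, 0.03),   # downregulated, significant -> called
    ("GeneD", 3.0, None),    # padj NA (independent filtering) -> filtered out
]
```

Note the `None` guard: DESeq2's `results()` reports `NA` adjusted p-values for genes removed by independent filtering, which must be excluded explicitly.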
+{"text": "The rise of neoliberalism has influenced the health care sector, including the chiropractic profession. The neoliberal infiltration of market justice behavior is in direct conflict with the fiduciary agreement to serve the public good before self-interests and has compromised the chiropractor, who now may act as an agent of neoliberalism in health care. The purpose of this paper is to present an overview of the impact of neoliberalism on the chiropractic profession and provide recommendations for a professional philosophical shift from a market justice model to a communal and social justice model. Chiropractic has been scrutinized for much of its existence by those outside the profession . JohnsonInterrogation of chiropractic history suggests a lapse of emphasis on patient-centeredness, a deficiency in social justice and public health behavior, and instead, prevalent engagement in market justice behavior , 4. YounThe aim of this paper is to explore the role of the chiropractor as an agent of neoliberal ideology which compromises patient care and is in direct conflict with the public\u2019s trust built on professionalism. We also offer strategies to aid in shifting the chiropractic practice from a market justice model toward a social justice model.Neoliberalism is a theory suggesting that humans may thrive most by enhancing entrepreneurial independence via private property rights, individual autonomy, unencumbered markets, and free trade . NeolibeFor example, spondylosis is understood to be a naturally occurring phenomenon, is highly prevalent, and has poor evidence to suggest causality of back pain . \u201cVertebKim suggests that professionalism \u201cunderpins the trust the public has in doctors\u201d . ProfessThe public views chiropractors as spine specialists and the To point, Haldeman, et al. 
delineated more than 30 marketable manual therapy techniques, more than 20 exercise programs, more than 25 passive physical modalities, and myriad products marketed for back pain. Chiropractors claim to have a long-standing history of evaluating and treating the \u201cwhole person\u201d. When social determinants of health and social inequities are overlooked, there may be risk of compromising the completeness of patient care and substituting an enhancement of self-interest. Patients may be pushed deeper into ongoing delivery of care, resources may be depleted, and health conditions may be inadequately addressed. For example, if a patient fails to exercise as recommended, due, for instance, to limited access to space or exercise equipment, they may be labeled as \u201cnoncompliant\u201d and neglectful of their personal responsibility to comply with lifestyle advice. The patient may in reality conclude that consistent exercise is not a viable choice for them and therefore believe that further consumption of other products and services associated with the \u201cchiropractic lifestyle\u201d is their only alternative \u2013 ultimately advancing the financial self-interests of the chiropractor. However, under neoliberal policies with a free market and a fee-for-service model of health care, one\u2019s ability to live a \u201cchiropractic lifestyle\u201d is associated with access to care, which includes financial capacity. In this circumstance, the individual consumer who is unable to continue to afford the consumption of a \u201cchiropractic lifestyle\u201d may be at risk of losing security of their care. This purported \u201cwhole person\u201d care model may actually institutionalize social injustice, in which inequitable health care is provided under neoliberal policies. Within the neoliberal model, the chiropractor is motivated to practice more like a neoliberal technician than a professional clinician.
In this model the chiropractor is a marionette, providing products and services as neoliberal puppeteers pull the strings. Instead of learning from engagement with patients and implementing this learning to the betterment of other patients, neoliberalism incentivizes prioritizing increased service production, practice management skills, and building a practice by retaining patients who can afford the \u201cchiropractic lifestyle\u201d. This is exemplified by a portion of chiropractors delivering care outside of guideline recommendations, including providing excessive services that promote ongoing passive care, offering services without evidence to support their use, and over-utilizing imaging. Certain chiropractic communication and marketing practices may elevate chiropractors\u2019 financial self-interests over best patient care practices. Chiropractic has embraced marketing and entrepreneurialism, with financial achievement seen as a gauge of success. Another common practice-building tactic is tallying and advertising patient satisfaction metrics and testimonial stories to the public. The consumer may believe these advertisements indicate the chiropractor\u2019s professional expertise, altruism, compassion, and self-sacrifice. Ironically, even a genuinely well-intentioned chiropractor\u2019s decision making, when framed by this neoliberal construct, may be motivated more by accumulating positive patient reviews or likes on their social media feed than by implementing best clinical judgment, especially when such judgment conflicts with patient expectations. This scenario reinforces the neoliberal system by exploiting the patient experience as a means of generating greater profits, potentially at the expense of best clinical decision making and patients\u2019 wellbeing. According to West's translation, Socrates argued that \u201cthe unexamined life is not worth living\u201d.
Educational curricula for both chiropractors and chiropractic students should include sociopolitical influences on health outcomes and the delivery of health care. Reflection on the health care system\u2019s adoption of social justice versus market justice models, including chiropractic practices, should not be limited to academia. Exposure to these concepts should be introduced within pre-chiropractic undergraduate and chiropractic college curricula. Continued dialogue should take place in post-graduate continuing education programs. Green, et al. have called for a transition from the market justice model of chiropractic care to a social justice model. Moving to a community-centered model requires a transformation of the individual chiropractic patient encounter. Instead of just providing products and services concentrated on attaining and maintaining a specific ideal body structure or unrealistic experience, chiropractors must recognize and treat the actual \u201cwhole person,\u201d which includes patients\u2019 social and environmental milieux. We anticipate that this transformation will develop as chiropractors move from isolated, competitive care settings to transdisciplinary teams with shared goals of community and patient-centered outcomes. How different might clinical outcomes look if each patient\u2019s team of health care professionals systematically recognized and collaboratively addressed the full spectrum of biological, psychological, and social determinants of health without concern for competitive profit generation? We call on chiropractors to seek out and become involved on such teams in their communities.
We invite chiropractors to identify and report transformative care practices and efforts to reform the neoliberal spine care model and reestablish a professional, community- and patient-centered model. The neoliberal model of health care, widely adopted in chiropractic, influences clinicians to act as vehicles of neoliberal practices. The chiropractor may emulate a Manchurian candidate who, manipulated by the health care system\u2019s neoliberal machinery, has adopted a market justice model and betrayed the public\u2019s trust once built on professionalism. This paper has discussed chiropractors\u2019 roles as neoliberal propagators in the domains of the health care marketplace, patient care, and patient relations. The very nature of the entrepreneurial health care system dictates that the chiropractic provider practice in a manner that is in conflict with the tenets of professionalism, which concentrate on placing societal needs before self-interests. Furthermore, the neoliberal message of individual personal responsibility as an equitable framework has infiltrated the chiropractic dialogue in the form of the \u201cchiropractic lifestyle.\u201d We concede that profit is not evil or wrong by nature. Nevertheless, we voice concern that some chiropractors, raised on a lifetime of individualism, market-based decision making, and high levels of commodification, are pressured to elevate profit at the expense of the community through day-to-day clinical behaviors. Deeper examination of patient care and organizational practices is needed to move chiropractic practice away from a market justice model to a communal and social justice model."}
+{"text": "The rapid increase in online shopping and the extension of online food purchase and delivery services to federal nutrition program participants highlight the need for a conceptual framework capturing the influence of online food retail environments on consumer behaviors. This study aims to develop such a conceptual framework. To achieve this, mixed methods were used, including: (1) a literature review and development of an initial framework; (2) key informant interviews; (3) pilot testing and refinement of the draft framework; and (4) a group discussion with experts to establish content validity. The resulting framework captures both consumer- and retailer-level influences across the entire shopping journey, as well as the broader social, community, and policy context. It identifies important factors such as consumer demographic characteristics, preferences, past behaviors, and retailer policies and practices. The framework also emphasizes the dynamic nature of personalized marketing by retailers and customizable website content, and captures equity and transparency in retailer policies and practices. The framework draws from multiple disciplines, providing a foundation for understanding the impact of online food retail on dietary behaviors. It can be utilized to inform public health interventions, retailer practices, and governmental policies for creating healthy and equitable online food retail environments. Online food retail is an increasingly popular means of acquiring food and is expected to grow rapidly over the coming decade. In 2017, online food retail represented $13 billion in sales. Online food retail has also emerged as an important avenue to improve food access.
In 2018, the United States Department of Agriculture (USDA) selected 10 retailers in 9 states for a two-year Online Purchasing Pilot (\u201conline Electronic Benefits Transfer (EBT)\u201d) to test the use of Supplemental Nutrition Assistance Program (SNAP) benefits as payment for online food purchases. These shifts in consumer food purchase behaviors, the increasing investment in online infrastructure, and the expansion of online food purchasing to participants in federal programs highlight the need for an assessment of the online food retail environment. A substantial and growing body of evidence captures the influence of in-store food marketing on consumer purchases. What is currently lacking is an integrated framework capturing both consumer- and retailer-level factors, and their interaction, that influence consumer behaviors within online environments. The few existing frameworks focus on specific consumer determinants such as attitudes, privacy concerns, social influences, facilitating conditions, hedonic motivations, and perceived risk or satisfaction with the online experience. In the absence of a comprehensive conceptual framework that looks at consumer grocery purchase behaviors, it becomes impossible to systematically study the effect of food retail environments on food choices. Such a framework is crucial for informing public health interventions, guidelines, retailer practices, and governmental policies to create healthy and equitable online food retail environments. To address this gap in the literature, the present study aims to develop and refine a conceptual framework capturing factors that influence consumer food purchase behaviors within online food retail environments. For the purposes of this study, online food retail environments were described as websites providing click-and-collect or food delivery services.
\u2018Retailers\u2019 include e-commerce platforms hosted by the retailers themselves or by a third-party vendor. The development and refinement of the conceptual framework was guided by the approach suggested by Jabareen (2009) and involved four steps: (1) a literature review and development of an initial framework; (2) key informant interviews; (3) pilot testing and refinement of the draft framework; and (4) a group discussion with experts to establish content validity. The study methodology was reviewed and determined to be non-human subjects research by the Institutional Review Board at the Johns Hopkins Bloomberg School of Public Health. A scoping review was undertaken between April and June of 2019 to identify peer-reviewed and grey literature, in English, on the attributes, preferences, and shopping behaviors of consumers that make purchases online; consumer interaction with and acceptance of technology; online and in-store food marketing and merchandising; and the design of online retail environments. Databases were searched from January 2009 to May 2019 using combinations of the keywords food, grocer*, supermarket*, retail*, shop*, store*, purchas*, buy*, online, ecommerce (or e-commerce), internet, and web. Search results were supplemented with health agency reports, trade publications, and industry reports from 2015 to 2019. Reference lists from peer-reviewed publications were also scanned. While peer-reviewed literature was not restricted by geographic location, only US-focused analyst reports and trade publications were included to ensure contextual relevance. Paper titles and abstracts were screened. A total of 136 industry reports and 97 peer-reviewed papers informed the development of the draft framework. We interviewed seven experts in grocery merchandizing and marketing, e-commerce and online retail, behavioral psychology, public policy, computer science and data privacy, and digital marketing.
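As an illustration only (the paper does not report its exact database syntax, and the split of terms into two concept blocks below is an assumption), keyword combinations like those in the search strategy above are typically assembled into a boolean query:

```python
# Sketch: building a database-style boolean search string from the reported
# keywords. The grouping into a "food retail" block and an "online channel"
# block is an assumption for illustration, not the study's documented syntax.
food_terms = ["food", "grocer*", "supermarket*", "retail*",
              "shop*", "store*", "purchas*", "buy*"]
online_terms = ["online", "ecommerce", "e-commerce", "internet", "web"]

def boolean_query(*term_groups):
    """OR the terms within each group, then AND the groups together."""
    blocks = ["(" + " OR ".join(group) + ")" for group in term_groups]
    return " AND ".join(blocks)

query = boolean_query(food_terms, online_terms)
# Produces a single string combining both concept blocks, e.g.
# "(food OR grocer* OR ...) AND (online OR ecommerce OR ...)"
```

Each database would still apply its own truncation and field-tag conventions; this merely shows how the two keyword families combine into one search string.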
Experts were identified through known contacts, through their published work, or by referral from other experts during interviews. The interviews were conducted in person or via teleconference by a member of the research team and lasted 45\u201360 min. Each interview began with an overview of the objectives of the study. Experts were presented with the draft framework and asked: (i) for their feedback on the extent to which it captured their understanding of factors influencing consumer food choices when shopping online; (ii) to identify constructs that could be improved or simplified; and (iii) for additional constructs that should be included. Follow-up questions were tailored to each key informant\u2019s area of expertise. For example, an expert in computer science and engineering was asked an additional question about when and how personal data are collected from consumers along the path to purchase. Experts in food retail marketing were asked how personal data are used to adapt online marketing practices to specific consumers. Insights were requested on how the key domains influenced one another. Suggestions were incorporated, and the framework was refined after each interview. Study researchers independently tested the internal consistency of the conceptual framework by using it to guide a mock shopping exercise. This was done to identify additional concepts that may have been missed during the literature review and the key informant interviews, but not with the aim of formally testing the framework. Online shopping accounts were created at two U.S. online food retailers. Researchers navigated each store\u2019s website, browsed through their departments, added three grocery items to the shopping cart, and proceeded to the checkout. The applicability of the conceptual framework to the experience of a consumer searching for and selecting food was discussed in detail.
Revisions to the framework were incorporated as necessary.A teleconference discussion was convened with six members of the Healthy Food Retail Working Group to determine the content validity of the conceptual framework. This group is a collaborative effort of the Robert Wood Johnson Foundation\u2019s Healthy Eating Research program and the Centers for Disease Control and Prevention\u2019s Nutrition and Obesity Policy Research and Evaluation Network. Members include researchers and technical experts working on healthy food retail and related areas.To engage fully with the framework, members of the Working Group were asked to undertake a mock shopping exercise, similar to the one conducted by the study authors, two weeks prior to the call. A Qualtrics form was created to guide the sequence of product searching and selection activities and to record feedback. After selecting an online food retailer from a list of 21 options, members navigated to the grocery department homepage, the breakfast cereal department page, and the product pages of two specific brands of bread and canned fish. They documented marketing strategies, customizable features, tools, options, and multimedia content on each of these webpages. Members also recorded ease of navigation and site policies. Their feedback from this exercise was discussed in the teleconference, during which the purpose of the research was clarified, and a draft of the conceptual framework (including what the concepts represent and how they relate to each other) was presented. Members were asked whether the current framework captured their understanding of the range of factors influencing consumer behavior in the online store and for ideas for improvement. 
Notes from the discussion were recorded, and relevant revisions were incorporated into the framework. Insights from the literature review informed the development of a draft framework that captured the influence of consumer characteristics, online food marketing, and retailer and website characteristics on online grocery shopping intention. Feedback from the key informant interviews added further detail by delineating the different stages of the online shopping process, the sequence of online consumer behaviors, the key role of personalized marketing, relationships between online retailers and manufacturers, disclosure of sponsored content, factors influencing site design, and the use of personal information in customizing the platform. The pilot test provided additional insights into retailer characteristics related to membership and loyalty programs, privacy and data use, and order payment, fulfillment, and collection. This exercise also differentiated the static versus the dynamic elements of the online platform, differences in site functionality as viewed by an anonymous shopper versus a registered shopper, and marketing strategies employed at checkout. Healthy Food Retail Working Group members acknowledged the detail and clarity of the current version of the framework and agreed that it captured relevant constructs that influence consumer food choice in online environments. Their feedback focused on the applicability of the conceptual framework to specific subgroups of consumers, retailers, and online formats. These ideas were discussed, and relevant constructs in the framework were added or made more salient. The final framework combined elements of the Technology Acceptance Model, consumer behavior and decision-making frameworks, brick-and-mortar marketing categories, and key informant insights. Central to the conceptual framework is the sequence of consumer behaviors involved in online grocery shopping.
This Path to Purchase consists of four stages\u2014Pre-Shop, Online Shopping, Pick-Up or Delivery, and Post-Shop\u2014encompassing six behaviors. Under Pre-Shop behaviors, the consumer selects an online retailer. The consumer then searches for or discovers, selects, and purchases food (Online Shopping). The consumer receives the order (Pick-Up or Delivery) and prepares and consumes the food and/or discards it (Post-Shop). The quality of the consumer\u2019s experience engaging in each of these behaviors determines their overall satisfaction with the retailer and likelihood of shopping again. Consumer behaviors are influenced by consumer- and retailer-level attributes, presented above and below the Path to Purchase. Consumer Characteristics, Preferences, and Past Behaviors: this domain encompasses consumers\u2019 Technology Acceptance, Individual and Household Demographics, Food-Related Preferences and Behaviors, and Attitudes and Beliefs. Retailer Policies and Practices include service-related policies, such as membership requirements prior to shopping and delivery service areas. Privacy and Data Sharing policies include the terms and conditions that govern the collection, tracking, storage, use, and sharing of personal information by the retailer. Policies on Inventory Management guide the availability of products and brands, pricing strategy, and accuracy of inventory tracking, while policies on Collection and Payment determine the payment options accepted by the retailer, the integration of loyalty and reward programs, fees for delivery and restocking, and delivery options. The Pick-Up and Post-Shop behaviors are likely to be determined by Retailer Policies and Practices that govern Order Fulfillment, Order Delivery, and Returns and Order Cancellation. Policies on Order Fulfillment include the price and appropriateness of product substitutions during stock-outs and the quality of the food received.
The availability of convenient time slots, delivery coordination, and secure delivery options are examples of policies associated with Order Delivery. Policies determining the ease of cancelling incomplete or incorrect orders of items purchased online relate to Returns and Order Cancellation. Personalized Marketing by Retailers: a consumer\u2019s search, selection, and purchase of food are influenced by factors within the dynamic domain of Personalized Marketing. These marketing strategies are based on personal information provided by the consumer when registering with the online platform and on past purchases and browsing history. They may also be based on consumer data purchased by the retailer, shared by third parties, or automatically collected upon visiting the site. The consumer, either knowingly (by actively agreeing) or unknowingly (without full knowledge of the implications), allows retailers to use various sources of information to create a tailored experience. Personalized Marketing maps onto the marketing mix of Product, Price, Placement, and Promotion (\u201cthe 4Ps\u201d), but manifests differently in the online food retail environment than in brick-and-mortar stores. Customization of the Website by the Consumer: this is the other dynamic domain that influences the search, selection, and purchase of food and includes Product Information Display, Site Navigation, and Shopping Tools. These attributes allow consumers to change what nutrition information they see (Product Information Display), filter products based on preferred attributes, save shopping lists, or request certain product comparisons (Shopping Tools).
Combined with tools and tutorials to ease website navigation (Site Navigation), website customization features can increase the convenience of product searches, enhance consumer engagement with the product catalogue, and improve the quality of the food purchase experience. Other Food Marketing: this dynamic domain includes Promotional Strategies, Social Media Strategies, and Immersive Strategies, recognizing that exposure to marketing in other settings (brick-and-mortar stores), through direct-to-consumer promotions, product endorsements, and sponsorships, and targeted marketing through social media platforms will affect food choices made online. Sophistication of Website and Frequency of Use: technological progress in interface design, communications, and data security is likely to improve consumer trust and increase the volume and frequency of purchases made online. Advancements in personal data collection, advertising technology, purchase data analytics, and consumers\u2019 increased involvement in the co-creation of food retail platforms will allow retailers to better profile customers and more efficiently match them to products and promotions, increasing engagement. In this way, more sophisticated online platforms and frequent consumer visits will ensure greater personalization of retailer marketing strategies and a more customized website. Equity and Transparency are fundamental to retailer engagement with the consumer. Equity refers to the differential impact of retailer policies and practices on the food behaviors and privacy of underserved populations. For example, a consumer\u2019s ability to utilize online services may be affected by the retail service area, availability of convenient delivery slots, or accepted payment methods.
A retailer\u2019s targeted and personalized marketing strategies may trigger impulse purchases or increase the basket size, differentially impacting low-income consumers, especially if the promoted products are of inferior nutritional quality. Transparency in policies and practices captures the retailer\u2019s clear and upfront disclosure of data collection, storage, and use; surveillance methods; marketing; and sponsorships that may consciously or unconsciously influence the consumer\u2019s choices along the Path to Purchase. For instance, disclosure of fees and hidden costs and of the collection of personal data prior to checkout may affect a consumer\u2019s choice of retailer. Disclosure of product sponsorship at the point of purchase may affect product selection. Social, Community, and Policy Context: this conceptual framework is nested in the socio-ecological model. This study presents a conceptual framework that captures factors influencing consumer behavior within online food retail environments. It also details the methodology for framework development and refinement\u2014a process that identified and integrated evidence across multiple fields of study. The conceptual framework captures both consumer- and retailer-level influences across the entire Path to Purchase as well as the broader social, community, and policy context. Important static attributes of retailer policies and practices and of consumer characteristics, preferences, and past behaviors are captured. The framework also emphasizes the dynamic attributes of the online platform, including those of personalized marketing by retailers, customization of the website content and navigation by the consumer, and the two-way interaction between these domains that enables a variety of online food retail interfaces uniquely tailored to consumer preferences. This conceptual framework makes an important contribution to our understanding of the burgeoning field of online food retail.
It serves as a foundation for deeper study into the influences on consumer food purchase behaviors within online platforms and the interactions between them. To our knowledge, this is the first framework to consider the relatively under-studied domains of personalization and customization of the online food retail environment, equity and transparency, and the social, community, and policy context. Further investigation is certainly warranted. Future studies could use the framework to compare brick-and-mortar retailers and their online platforms to identify the convergence and divergence of consumer behaviors and retailer responses within these settings. The conceptual framework itself could be empirically tested to support its validity and better establish a hierarchy between attributes. Previous work has used structural equation modelling techniques to examine the hypothesized relationships between constructs in a proposed model and identify possible causal relationships between them. The conceptual framework provides a foundation for understanding how a lack of transparency within online retail platforms could impact health equity. Indeed, if an equity lens is not applied in the development and implementation of retailer policies, online platforms may unintentionally widen disparities in healthy food access, affordability, and diet quality for vulnerable groups. The framework may also help to study the effect of predatory marketing tactics, similar to those employed across other digital media, where communities of color are targeted with the least nutritious products. Finally, the conceptual framework could be used to inform and evaluate public health interventions aimed at improving consumer food choices in the online food environment.
On the policy and practice front, the framework could inform: (i) recommendations and standards for best practices related to online food marketing; (ii) specific guidance for online retailers to ensure policy transparency, equitable access, and assurance of privacy; (iii) tailored educational content for consumers unfamiliar with online grocery retail; and (iv) personalized nutrition education and communication. For example, nutrition interventions delivered via the online retail platform could offer personalized healthy shopping lists that draw on information about consumer preferences and budget constraints, offer personalized healthy recipes or meal solutions, or develop a personalized labeling campaign that makes specific nutritional attributes of a product more salient at the point of purchase. SNAP-Ed\u2014SNAP\u2019s voluntary nutrition education program\u2014could partner with online retailers to allow participants to interact with a registered dietitian in real time while food shopping. Local WIC agencies could work with online retailers to create a WIC-friendly web interface with WIC-eligible products, label products as being part of WIC food packages, and allow only WIC-approved substitutions in case of stock-outs.This study does have its limitations. Despite a comprehensive approach to development, the resulting conceptual framework may not have captured all the elements of the online food retail environment. The descriptions of the constructs in the framework serve as examples and are not exhaustive. The literature reviewed was almost entirely from the US and Europe. Pilot testing and content validity were established for the US context. Therefore, it is possible that the framework may not adequately capture the online environments in other contexts and would need further testing to gauge applicability in different settings, including its applicability to food purchases made via mobile applications. 
Advances in technology will result in new and innovative features that will need to be incorporated into the evolving conceptual framework. This study has several strengths. It leveraged multiple methods in the development and refinement of the framework, including key informant interviews, comprehensive literature reviews, mock shopping exercises, and group discussions, improving its validity. The framework development drew from multiple disciplines and benefited greatly from the insights of experts across different fields, allowing for an in-depth understanding of the factors influencing consumer purchases and underscoring the need for the public health community to collaborate with scientists and policymakers from non-traditional public health disciplines to map the influence of the online food environment. Finally, applying a public health perspective to the development of the framework expanded its utility in informing future interventions in this field. This paper integrated multiple perspectives across a wide range of fields to develop a framework capturing both consumer- and retailer-level factors influencing consumer purchases in the online food retail environment, as well as the broader social, community, and policy context. It identifies important static factors and emphasizes the dynamic nature of personalized marketing by retailers and customizable website content. Equity and transparency in retailer policies and practices are also captured. Researchers, retailers, advocates, and policymakers are encouraged to utilize the framework to guide the development and evaluation of interventions, policies, and practices in the online food retail space."}
+{"text": "There are concerns internationally that lockdown measures taken during the coronavirus disease 2019 (COVID-19) pandemic could lead to a rise in loneliness. As loneliness is recognised as a major public health concern, it is vital that research considers the impact of the current COVID-19 pandemic on loneliness so that the necessary support can be provided. But it remains unclear: who is lonely in lockdown? This study compared sociodemographic predictors of loneliness before and during the COVID-19 pandemic using cross-cohort analyses of data from UK adults captured before the pandemic (UK Household Longitudinal Study, n\u00a0=\u00a031,064) and during the pandemic (UCL (University College London) COVID-19 Social Study, n\u00a0=\u00a060,341). Risk factors for loneliness were near identical before and during the pandemic. Young adults, women, people with lower education or income, the economically inactive, people living alone, and urban residents had a higher risk of being lonely. Some people who were already at risk of being lonely experienced a heightened risk during the COVID-19 pandemic compared with people living before COVID-19 emerged. Furthermore, being a student emerged as a higher risk factor during lockdown than usual. Findings suggest that interventions to reduce or prevent loneliness during COVID-19 should be targeted at those sociodemographic groups already identified as high risk in previous research. These groups are likely not just to experience loneliness during the pandemic but potentially to have an even higher risk than normal of experiencing loneliness relative to low-risk groups. \u2022 We compared data from 31,000 UK adults during 2017\u20132019 with 60,000 UK adults during the COVID-19 pandemic. \u2022 Some risk factors for loneliness were the same as in ordinary circumstances. \u2022 Other groups experienced even greater risk of loneliness than usual (e.g. younger people and people of low income). \u2022 Some groups were at risk of loneliness who are not usually considered high risk (e.g. students).
Previous research has highlighted that particular groups at risk of loneliness include women, younger people (e.g. aged under 25 years), older people (e.g. aged over 65 years), people living alone, those of low socio-economic status, and those in poor mental or physical health. Data were drawn from two sources. For data collected before the pandemic, we used Understanding Society: the UK Household Longitudinal Study (UKHLS); a nationally representative household panel study of the UK population (2009\u20132019). Our analyses used the most recent wave of UKHLS (wave 9), in which the loneliness measures were introduced. The wave 9 data were collected between January 2017 and June 2019. To be consistent with the UCL COVID-19 Social Study, we restricted participants to those aged 18+, leaving a total sample size of 34,976 participants. Furthermore, we excluded those who had missing values for loneliness or any of the covariates (11%). This provided a final sample size of 31,064. For data during the COVID-19 pandemic, we used data from the UCL COVID-19 Social Study; a large panel study of the psychological and social experiences of more than 50,000 adults (aged 18+) in the UK. The study commenced on 21st March 2020, involving online weekly data collection from participants for the duration of the COVID-19 pandemic in the UK. Whilst not random, the study has a well-stratified sample that was recruited using three primary approaches. First, snowballing was used, including promoting the study through existing networks and mailing lists, print and digital media coverage, and social media. Second, more targeted recruitment was undertaken focusing on (i) individuals from a low-income background, (ii) individuals with no or few educational qualifications, and (iii) individuals who were unemployed. Third, the study was promoted via partnerships with third sector organisations to vulnerable groups, including adults with pre-existing mental illness, older adults and carers. 
The study was approved by the UCL Research Ethics Committee (12467/005), and all participants gave informed consent. In this study, we focused on participants who had a baseline response between 21st March and 10th May 2020. This provided us with data from 67,142 participants. Of these, 10% of participants withheld data on sociodemographic factors including gender and income and were therefore excluded, providing a final analytic sample size of 60,341. In both data sets, loneliness was measured using the three-item UCLA loneliness scale (UCLA-3). The questions are as follows: (1) how often do you feel a lack of companionship? (2) how often do you feel isolated from others? (3) how often do you feel left out? Responses to each question were scored on a three-point Likert scale ranging from hardly ever/never, to some of the time, to often. The sum score provided a loneliness scale ranging from 3 to 9, with a higher score indicating a higher level of loneliness. We also examined the single-item direct measure of loneliness, asking how often the respondent felt lonely, which was coded on the same scale as the UCLA-3 items. Covariates included age group, gender (woman vs. man), ethnicity (non-white vs. white), education, low income, employment status, living status and area of living. All of these variables were harmonised between the two data sets. To compare risk factors for loneliness, we used Ordinary Least Squares regression models fitted separately in the two data sets. Survey weights were applied to both samples throughout the analyses to yield nationally representative samples of UK adults. 
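The UCLA-3 scoring described above (three items, each coded 1 for "hardly ever/never", 2 for "some of the time" and 3 for "often", summed to a 3\u20139 scale) can be sketched as follows. This is an illustrative sketch only, not the study's analysis code; the function name and response labels are chosen here for clarity.

```python
# Illustrative sketch of the UCLA-3 sum score described above.
# Assumption: each item is coded 1-3 from the three response options,
# and the three item scores are summed to a 3-9 loneliness scale.

RESPONSE_SCORES = {
    "hardly ever/never": 1,
    "some of the time": 2,
    "often": 3,
}

def ucla3_score(lack_companionship: str, isolated: str, left_out: str) -> int:
    """Sum the three UCLA-3 items into a loneliness score from 3 to 9."""
    return sum(
        RESPONSE_SCORES[response]
        for response in (lack_companionship, isolated, left_out)
    )

# A respondent who sometimes lacks companionship, often feels isolated,
# and hardly ever feels left out scores 2 + 3 + 1 = 6.
print(ucla3_score("some of the time", "often", "hardly ever/never"))  # -> 6
```

In the analyses, this sum score is the outcome of the weighted regression models; higher values indicate higher loneliness.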
The analyses of UKHLS used cross-sectional adult self-completion interview weights, whereas analyses of the UCL COVID-19 Social Study were weighted to the proportions of gender, age, ethnicity, education and country of living obtained from the Office for National Statistics. Descriptive statistics for the two samples are shown in the accompanying table. Risk factors for loneliness were near identical before and during the pandemic. This study explored who was most at risk of loneliness during the UK lockdown due to the COVID-19 pandemic and compared whether these risk factors were similar to risk factors for loneliness before the pandemic. Young adults, people living alone, people with lower education or income, the economically inactive, women, ethnic minority groups and urban residents had a higher risk of being lonely both before and during the pandemic. These results echo previous studies on risk factors for loneliness. This study has a number of strengths, including its cross-cohort comparison of two large samples with harmonised measures before and during the pandemic, as well as its consideration of a broad range of sociodemographic characteristics. However, the data compared are from different participants, so it is not clear whether the individuals experiencing loneliness during lockdown had previous experience of loneliness. Furthermore, the COVID-19 Social Study is a non-random sample, so the results here should not be read as accurate prevalence figures for loneliness during the pandemic. It is possible that the study inadvertently attracted individuals who were feeling more lonely to participate. Finally, the study looked at broad risk categories. 
Future studies are encouraged to (i) consider whether the interaction between different risk categories or the accumulation of multiple risk factors affected loneliness levels during the pandemic, (ii) track the trajectories of loneliness across lockdown and (iii) explore the potential buffering role of protective social or behavioural factors. Overall, these findings suggest that interventions to reduce or prevent loneliness during COVID-19 should be targeted at those sociodemographic groups already identified as high risk in previous research. These groups are likely not just to experience loneliness during the pandemic but to have an even higher risk than normal of experiencing loneliness relative to low-risk groups. Such efforts are particularly important given rising concerns that loneliness could exacerbate mental illness and lead to non-adherence to government regulations. Ethical approval for the COVID-19 Social Study was granted by the UCL Ethics Committee. All participants provided fully informed consent. The study is GDPR compliant. All authors declare no conflicts of interest. The COVID-19 Social Study was funded by the Nuffield Foundation [WEL/FR-000022583], but the views expressed are those of the authors and not necessarily the Foundation. The study was also supported by the MARCH Mental Health Network funded by the Cross-Disciplinary Mental Health Network Plus initiative supported by UK Research and Innovation [ES/S002588/1], and by the Wellcome Trust [221400/Z/20/Z]. D.F. was funded by the Wellcome Trust [205407/Z/16/Z]. The researchers are grateful for the support of a number of organisations with their recruitment efforts, including: the UKRI Mental Health Networks, Find Out Now, UCL BioResource, HealthWise Wales, SEO Works, FieldworkHub, and Optimal Workshop. 
The funders had no final role in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. All researchers listed as authors are independent from the funders, and all final decisions about the research were taken by the investigators and were unrestricted. All authors had full access to all of the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis. Anonymous data will be made available after the end of the pandemic. Full details of the COVID-19 Social Study, including the study protocol and user guide, are provided at www.COVIDSocialStudy.org. F.B., A.S. and D.F. conceived and designed the study. F.B. analysed the data, and F.B. and D.F. wrote the first draft. All authors provided critical revisions. All authors read and approved the submitted manuscript."}
+{"text": "Background: During late 2019, a viral disease due to a novel coronavirus was reported in Wuhan, China, which rapidly developed into an exploding pandemic and poses a severe threat to human health all over the world. Until now (May 2021), there are insufficient treatment options for the management of this global disease and a shortage of vaccines. An important aspect that helps to defeat coronavirus infection seems to be a healthy, strong, and resilient immune system. Nutrition and metabolic disorders, such as obesity and diabetes, play a crucial role in community health in general, and especially during this new pandemic. Lifestyle, metabolic disorders, and immune status appear to have an enormous impact on coronavirus disease 2019 (COVID-19) severity and recovery. For this reason, it is important to consider the impact of lifestyle and the consumption of well-defined healthy diets during the pandemic. Aims: In this review, we summarise recent findings on the effect of nutrition on COVID-19 susceptibility, disease severity and treatment. Understanding how specific dietary features influence infection might help to improve public health strategies to reduce the rate and severity of COVID-19. The recent outbreak of COVID-19 was caused by the new zoonotic severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The high prevalence of these risk factors is, for a significant part, associated with patterns of nutrition such as increased consumption of saturated fat and refined carbohydrates and low levels of fibre and antioxidants. Balanced nutrition has a potentially important role in the maintenance of immune homeostasis and resilience, and therefore in resistance against disease, including infections with viral and bacterial pathogens. 
Malnutrition has prolonged effects on physical and mental health by influencing gene expression and cell activation, and by interfering with the signalling molecules that shape and modulate the immune system. Disparities in nutrition and obesity are impacted by cultural background and closely correlated with severe COVID-19-related outcomes. Indeed, nutrition and obesity play a crucial role in the fate of viral infectivity in general and in the community health situation during the present pandemic. In this review, we summarise recent findings regarding the impact of nutrition on the variation in COVID-19 disease severity and also its potential impact on the control of the disease during the current pandemic. Understanding the dietary patterns that are deleterious to COVID-19 survival might help to improve public health strategies toward reducing the spread of COVID-19 and designing new approaches for the control, and maybe even treatment, of this new disease. The SARS-CoV-2 virus primarily affects the respiratory system, although other organ systems are involved as well. Lower respiratory tract infection-related symptoms, including fever, dry cough, and dyspnoea, were reported in the initial case series from Wuhan, China. Genetic susceptibility can be a major factor in the host response to infectious diseases, where inborn errors of the immune system are often critical. COVID-19 morbidity and mortality rise dramatically with age and co-existing health conditions, including cancer and cardiovascular diseases. While most infected individuals recover, even very young and otherwise healthy patients may unpredictably succumb to this disease. In this line, the greater severity of the disease was associated with maladapted immune responses and host ACE2. 
However, some other genetic parameters for SARS-CoV-2 receptor and entry gene expression and function have been described. An intact immune system is essential for an effective defence against invading microorganisms. However, due to the immunological defects seen with COVID-19, there is reduced scope for a defence to be mounted against SARS-CoV-2. Men are at a greater risk of severe symptoms and worse outcomes from COVID-19 than women. The precise reason for this discrepancy is not fully understood, but genetic factors, the effects of sex hormones such as oestrogen and testosterone, as well as differences in immune cell function such as that of mast cells, may be important factors. Prostate cancer patients who were receiving androgen-deprivation therapy (ADT), a treatment that suppresses the production of the androgens that fuel prostate cancer cell growth, had a significantly lower risk of SARS-CoV-2 infection. Balanced nutrition is an important determinant of immune function against infectious disease in general. SFA-rich diets induce chronic activation of the innate immune system while inhibiting adaptive immunity. In fact, high-SFA diets induce a lipotoxic state which could activate toll-like receptor (TLR) 4 on the surface of macrophages and neutrophils and lead to chronic activation of the innate immune system. This, in turn, may trigger other inflammatory signalling pathways and the production of proinflammatory mediators. A high-fat diet (HFD) and obesity increase TLR9 expression in visceral adipose tissue in mice and humans. In animal models of influenza infection, a HFD enhanced lung damage and delayed the onset of the adaptive immune response. This was associated with impaired memory T cell function and a reduced capacity to respond to antigen presentation and clearance of the influenza virus. 
As a result, the elderly, patients with comorbidities, and those with risk factors for COVID-19 should be cautious about the consumption of unhealthy diets that could pose an increased risk of COVID-19 severity. A healthy, balanced diet should contain the necessary macro- and micronutrients, vitamins, minerals, and maybe even unique microbes such as probiotics that can restore and maintain immune function. Proteins, vitamins and minerals have long been considered important factors in health and resistance against infection due to their impact on immune homeostasis. In the current COVID-19 pandemic, there are reports of vitamins and minerals affecting the severity of infection and mortality. For example, low prealbumin levels are associated with increased severity of ARDS in patients with SARS-CoV-2. In summary, the nutritional status of an individual has a significant impact on not only the susceptibility to, but also the severity of, COVID-19 infection. The next section provides additional details concerning the impact of proteins, vitamins and minerals in viral respiratory infections that might help in finding new strategies for the prevention and control of SARS-CoV-2 infection. CD3+/CD8+ and CD3+/CD4+ T lymphocytes and glutathione peroxidase (GSH-Px) help to protect the host against viral infection. Thus, methionine deficiency can result in oxidative damage and lipid peroxidation, which will lead to a failure in cellular immunity. Amino acids are also important components for cytokine production. The production of interleukin (IL)-1, IL-6 and tumour necrosis factor (TNF) \u03b1 is strongly dependent on the metabolism of sulphur-containing amino acids, including methionine and cysteine. The effect of dietary proteins in improving immune function has been reported in cancer patients. 
In a clinical trial, whey protein isolate (WPI) enriched with Zn and Se improved cell-mediated immunity and antioxidant capacity in cancer patients undergoing chemotherapy. WPI is an alternative oral nutrition supplement (ONS) that contains high-quality protein and amino acid profiles. WPI increases GSH function because of its cysteine-enriched supplementation, reduces oxidative free radical formation and prevents infection (5). This suggests that WPI supplementation may improve GSH levels and thereby enhance immunity in subjects at risk of COVID-19, as well as reducing the severity of the disease in patients already infected with SARS-CoV-2. A healthy immune system may aid the prevention and treatment of patients with COVID-19. Vitamin A acts via the nuclear retinoic acid receptors \u03b1 and \u03b2 in response to influenza A virus, and this may explain its ability to protect against coronavirus infection. Vitamin C also promotes the repair of damaged tissues. Interestingly, apart from individuals with impaired glucose-6-phosphate activity and renal failure, no adverse effects of large doses of intravenously or orally administered vitamin C have been detected. In addition to vitamins, several minerals have a beneficial and supportive role in enhancing antiviral immune responses and thus could be beneficial in controlling COVID-19. In in vitro experiments, Zn inhibits the SARS-CoV-2 RNA polymerase and hepatitis C virus (HCV). Selenium is another trace element with a broad range of effects, from antioxidant to anti-inflammatory properties. In vitro, selenium suppressed viral replication and release from infected cells; in one trial, supplemented patients developed substantially less ventilator-associated pneumonia compared with placebo. This results in pronounced pro-inflammatory chemokine and cytokine release. 
Depletion of beneficial commensals (Eubacterium ventriosum, Faecalibacterium prausnitzii, Roseburia, and Lachnospiraceae taxa) and enrichment of opportunistic pathogens have been reported. The COVID-19 pandemic poses a significant threat to humans. Until the widespread availability of effective, long-term vaccines and of effective treatment and prevention measures, an important therapeutic and preventive strategy may be to reduce the incidence or severity of infection. This will involve having a healthy and resilient immune system. An individual's nutritional status has a significant impact on susceptibility to COVID-19, response to therapy, and the long-lasting consequences of infection. As such, it is critical to consider the impact of lifestyle and the consumption of healthy diets during the pandemic. Good, healthy, balanced nutrition is vital in the recovery process for all patients with COVID-19, particularly those who have suffered cardiac or pulmonary distress, or those who have been critically ill, due to the weight loss, frailty or sarcopenia associated with these conditions. In this respect, access to healthy foods should be a priority for individuals and governments to reduce the susceptibility to and prolonged effects of COVID-19. Given the over-representation in the disease of minorities who often also have poor nutrition, we should aim to increase access to healthy fresh food as well as provide nutritional education to these at-risk individuals. EM and SA designed and wrote the first draft of the manuscript. GB, GF, SM, JG, and IA revised and commented on the manuscript. All authors have seen and approved the final version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Violence against women is a major, complex, multidimensional and widespread public health concern worldwide. The current qualitative study was conducted to understand the experience of violence among HIV-negative married women in heterosexual serodiscordant relationships. A qualitative description (QD) study was conducted from October 2018 to January 2020 in Mashhad, Iran. The participants were 15 HIV-negative women who were married to and lived with HIV-positive men, recruited through purposive sampling. Data were collected using semi-structured interviews and analyzed using conventional content analysis as described by Graneheim and Lundman. The main overarching theme that emerged was 'life loaded with threat and vulnerability'. This theme consisted of four categories: self-directed violence, intimate partner violence, cultural violence and structural violence. The violence began soon after awareness of the husband's infection, with acts such as suicide attempts and a sense of abhorrence at living with an infected person, and continued with confrontation with various types of violence in the family and society, which put women under constant threat and vulnerability. This study provided insight into different aspects of violence against Iranian women in HIV serodiscordant relationships. Considering the role of men in the occurrence of violence, policymakers must create and execute family-centered interventions to address the attitudes and behaviors that lead to marital conflicts and spousal abuse in order to prevent violence. Health care professionals should also be trained to screen women for violence and refer those who require care to specialists to reduce vulnerability. The online version contains supplementary material available at 10.1186/s12905-021-01546-4. 
The Joint United Nations Programme on HIV/AIDS (UNAIDS) announced in 2019 that more than 37.9 million individuals currently live with HIV worldwide. Violence, and the fear of violence, have been identified as a key factor in women's vulnerability to HIV infection across different countries. Recent statistics from the World Health Organization show that one in every three women is vulnerable to physical and sexual violence, and the UN (United Nations) adds that only 40% of victims of violence seek any sort of help. One of the key populations for the prevention of HIV is serodiscordant couples. The issue of violence, which occurs within a social context, can be influenced by gender norms, interpersonal communications and sexual stereotypes. Many have argued that women\u2019s experiences of violence cannot be examined with traditional quantitative approaches, regardless of context. We used a qualitative description (QD) design, as introduced by Thorne et al. For the purpose of this study, the Clinic of Behavioral Disease Counseling was chosen as the study setting. The reasons for the selection of this clinic were that it was located in the central part of Mashhad, Iran, and that all patients with HIV and their wives were referred to it to receive their health and counseling services, and their medical records were available too. The study was conducted from October 2018 to January 2020 (a 16-month period). The participants were 15 HIV-negative married women who lived with HIV-positive men. The inclusion criteria for the study were women in a serodiscordant relationship, the ability to talk and express emotions and feelings, and the desire to participate in the study. Participants unwilling to continue the interview were excluded from the study. The purposive sampling method was used to draw the sample for the study. 
To recruit participants, one of the health care providers in the clinic contacted eligible individuals by phone and explained the research objectives to them. A convenient time was then set for interviews with those who agreed to take part in the study. Data were collected through semi-structured interviews, carried out individually and face to face. All interviews were conducted in a room at the Clinic of Behavioral Disease Counseling. Due to the sensitivity of the phenomenon under investigation, and in order to achieve greater insight into the experiences of the participants, all interviews were conducted by the first researcher, who is a reproductive health researcher with previous experience of interviewing women with high-risk sexual behavior in prison. An interview guide was used to conduct the face-to-face interviews. All participants were given an informed consent form in Persian, the native language of the participants and the researchers, which they read and signed. In addition to verbal consent, written consent was also obtained from the participants. It was emphasized to all women that they could withdraw from the study at any time without any prejudice. If participants were not comfortable answering any question, they were not forced to respond. All interviews were conducted in the participants\u2019 native language (Persian). It was explicitly stated to the participants that declining to participate in the study would not influence their provision of care, in order to eliminate \u201cfear, unconscious coercion, and secrecy\u201d. At the end of the interview, a gift was given to the participants as compensation. All information collected from participants was kept confidential, and instead of using their names, they were given unique numbers to be used in the analysis. 
If necessary, participants were referred to the psychologist of the Infectious Disease Clinic, because we were not qualified to offer counseling and therapy in the field of psychiatry. Ethical approval for the current study was obtained from the ethics committee of Mashhad University of Medical Sciences (Code of Ethics: IR.MUMS.NURSE.REC.1397.022). All interviews were audio-recorded, transcribed verbatim and entered into MAXQDA version 12 (VERBI Software, Berlin, Germany). Data analysis began immediately after the first interview. The analysis was first conducted in Persian and then translated into English. We used Graneheim and Lundman\u2019s method to analyze the data. In this study, credibility was established through interviews with an appropriate number of participants using a maximum variation strategy. The field notes, which contained the researcher\u2019s thoughts, feelings and impressions, as well as interpretations of interviewees\u2019 non-verbal cues and body language, also enhanced credibility. The field notes were written by the first author after each interview to identify any differences that she noticed, as well as to set aside her own impressions, emotions and assumptions about the research focus, in order to keep an open mind in communication instead of being judgmental. Field notes also prompted the development of the next set of questions to be asked of other participants. It is noteworthy that non-verbal communication has long been recognized as a valuable source of information and a useful supplement to the inquiry of human verbal behavior. What a person does not say may often be revealed by body language more than what he or she says. Qualitative researchers can obtain useful information from participants' silence. 
For dependability, the researcher gave the data to an independent coder who was skilled in the field of qualitative research to conduct an independent examination of the data and confirm them. To ensure confirmability, supervisor review of the findings, interpretations, and conclusions of the study greatly increased the confirmability of this study, so conducting an audit trail was possible. For transferability, a clear and thick description of the culture, context, method of participant selection, and characteristics of participants, as well as the process of data analysis, was provided. The participants were 15 married women. The ages of the respondents ranged from 30 to 60. With the exception of one couple, all couples had children. Four of the couples (26.6%) had one child, while two couples had four. The age of the children ranged from under seven to over thirty-four. The majority of the women were of childbearing age when they discovered their status. Most of the participants had completed lower secondary school education, and three had a diploma. The majority of the women chose not to disclose their status to their children. Only one participant informed her children, and one informed her entire family. The characteristics of the participants are presented in the accompanying table. The results of the analysis included 96 meaning units, 12 subcategories, four categories and one theme. Examples of the inductive process in the content analysis are also tabulated. The main theme derived from the analysis of the data was \"life loaded with threat and vulnerability.\" The violence started almost immediately after the women became informed of their husband's disease, with destructive actions and feelings. Then, they encountered various forms of violence, including physical, psychological, verbal, and sexual, while living with their abusive husband. 
They were also threatened in the community due to the complexity of AIDS, and they experienced violence embedded in the society's culture and structure. The women's experiences showed that they had a life constantly full of threats and vulnerabilities. A topic raised by study participants was self-directed violence. Women sometimes engaged in self-injurious activities when confronting their husband's illness or the reactions of relatives. Some women disclosed that their husbands engaged in high-risk sexual behavior such as occasional extra-marital relationships, which resulted in their having suicidal thoughts. In this regard, one participant said: \"When I was pregnant, I understood he betrayed me. I was suspicious of my husband\u2019s behaviors and I recently found a condom in his pocket. I was afraid that my surroundings do not accept me. Most of the time I think, what should I do? Because of this, I tend to commit suicide for fear of getting AIDS\" (P2. 38 Y). Another said: \"My husband is not concerned about living expenses; for instance, if I want to buy anything he never agrees. He sells home appliances and doesn\u2019t pay any attention to the kids. I take care of the children and always seek to give them comfort, but is it just my responsibility? In this condition, I do not want to eat and I feel like hitting my head against the wall\" (P2. 38Y). One female said: \"I can't do anything without my husband's permission. I wish I could work outside and be financially independent. But my husband prevents me from working; he does not financially support me at all. The dependency makes me more vulnerable to my husband\u2019s acts of violence \u2026 I sometimes hate myself for not getting divorced\" (touching her face) (P11. 
40 Y). The women interviewed mentioned that living with an HIV-infected husband had consequences for their psychological wellbeing. Most of the participants stated that their activities were restricted by their husbands, which led to negative feelings about themselves. Only one participant stated that her activities were not limited by her husband. Women in serodiscordant relationships are usually victims of a combination of violence, including verbal harassment, psychological aggression, sexual assault and physical harm. Women experienced verbal violence in a variety of forms, such as scurrility and screaming. One woman said: \"When we speak to one another, he uses bad words. He begins to scream and cannot control himself at all. When he comes home, he starts constantly ordering me to do something in a loud voice. He feels pride doing it\" (P 2. 38Y). Psychological violence included controlling daily commutes and phone calls, scepticism, humiliating a woman or her family, or ignoring her and her family members. The findings showed that the majority of the participants in the study mentioned psychological violence by their husbands. One woman expressed that: \"My husband is very suspicious. He constantly checks my mobile phone, even though my phone is always at home. He consistently says that women are worthless. When my husband says it, my confidence goes down\" (P5. 30Y). A few of the serodiscordant women talked about sexual assault; those who shared their stories reported low or very low levels of this type of violence. One woman expressed that: \"I strongly believe condoms should continue to be used until a cure is found for HIV. But he doesn\u2019t want to use the condom. He said, I don\u2019t like using a condom. In this condition I cannot protect myself, because my husband doesn\u2019t put on a condom. I am forced to give him sex because I fear that he embarrasses my children\" (P11. 40Y). 
They indicated having constant arguments and disputes over sexual issues, and one woman mentioned forced unprotected sex with her husband. One female stated: "he beat me up in front of the kids. He used physical force such as hands and feet for choking and shoving me. I begged him to stop, but he said I like killing you and the kids" (P7. 37 Y). The women interviewed stated that their husbands beat them in various ways at home, the most common of which was beating with the hands. In some cases, this was done by pulling their hair or banging their heads against the wall, as well as pushing and throwing them. The effects of physical assaults were usually more openly expressed than those of other types of violence. Cultural beliefs could play an important role in how HIV/AIDS patients and their families adapt to and cope with the disease and the violence directed toward them. Participants said that when relatives learned that their husbands were HIV positive, they rejected them from the family; this also resulted in labeling and stigmatization in the community. "My family's behavior has changed toward me. I am seronegative, but some people who see me at the clinic with my HIV positive husband think that I am positive too. My family used to gather and eat together, but when they learned that my husband is HIV positive, they abandoned this habit. In our society, when people hear that a woman has an HIV positive husband, it is a very shameful condition. I think this is an example of overt violence. I tell my husband that I am really suffering from this problem" (silent) (P 4. 45 Y). Serodiscordant women chose not to disclose their status to their family and friends, as they found that women who disclosed were rejected by their relatives. The following quote explains this point: \u201cShe believes in women's complete obedience to men and in inherent aggression against them. 
My husband spends most of his time at her home and listens to his mother's advice, and she is not the right person for consultation. For example, she said to my husband, you have power and you can transfer it (HIV) to your wife and infect her. My mother-in-law and I are always arguing" (P 6.34 Y). The women explained that due to the interference of families, especially the husband's family, they had to face several problems. A participant specifically mentioned her mother-in-law's interference. Structural violence, which reveals inequality in the distribution of jobs and resources among people living with HIV/AIDS, was one of the issues that serodiscordant women experienced due to living with HIV positive men. They faced exclusion from economic participation, a lack of legal protection, and health disparities. "My husband decided to disclose his status to the company, but they told him that they couldn\u2019t give him a job. He is unemployed. He is afraid to go to other factories to look for a job, you know, because he feels that he will be approached the same way. Several times I went to look for a job for him, but I was not received well. In this situation he has become more nervous with me and the kids" (P 8. 39Y). Participants stated that their husbands lost their jobs and sources of income when their employers were informed of their status. One woman mentioned: "The law does not give sufficient protection to women in instances of violence. Men also know it. Once I went to court to divorce, but the judge did not pay any attention to me and said, you are a mother, you must endure" (P 12.36 Y). From the interviews conducted with the women, it was found that the majority of the women still had not reported the violence to any legal organization because they thought that they would not be supported by the legal system. 
One woman mentioned: \"When he (my husband) got sick, I strongly believed that, it is my responsibility to accompany her for going to the hospital without shame, but in the hospital where we went for care, one of the care providers loudly said that he is HIV positive. Also, the nurse gave care to my husband with a violent behavior, you know, and when I protested him, he, with a bad tone, said, your husband is HIV positive and I should take care of myself. In this situation, I just cried and came out of the room. I felt so bad\" (Lip biting) (P 13. 35 Y).The majority of participants in this study had strong relationships with their health care providers. But a few participants referred to the bad experiences from the healthcare providers in hospital. A woman who had a good relationship with her husband in this relation said: The key findings that emerged from the study indicated the experience of various types of violence among HIV serodiscordant women. Violence, rooted in misconceptions about AIDS, began soon after the awareness of husband's infection with acts of suicide attempts and a sense of hatred for living with an infected person, and continued with confrontation with various types of violence in the family, and society. HIV is a phenomenon in serodiscordant relationships that involves both partners. It is associated with unique stressors. In a study a high rate (83.1%) of suicidal attempt was reported in HIV-positive people . It is aPsychological violence can be subtle and there is no clear definition for it . In factA study by Smith and colleagues describes women's experiences of physical, emotional and verbal violence. Women were burned with cigarettes, thrown to the wall and were attacked in different ways. Some of them experienced emotional intimidations along with physical threats with a weapon. Their partners insulted them verball. 
Besides, they controlled women and violated their right to freedom, including the right to financial independence. Part of the experience of violence is related to interactions in society; it is a type of violence that is not visible, but its effects can be felt. In general, women in our study not only lacked social support but were also subjected to violence by relatives. Social exclusion occurred following disclosure of HIV status by families and the community, although the structure, source and consistency of social support networks play a significant and successful role in helping women with abusive partners. WHO notes the establishment of international and national legal structures promoting gender equality and strengthening the responses of police and other criminal justice agencies to violent cases. There is a possibility that sociocultural issues are barriers to disclosure of violence and may have hindered women from revealing it, which might have influenced the responses of the participants. This study was conducted on a population of women attending the Behavioral Disease Counseling Clinic, and women who did not attend the clinic were not interviewed; therefore, the findings are representative only of women who attended clinics. Also, men's perspectives have not been explored in this study, although they are a significant complement to their wives' views. Despite the aforementioned limitations, there are several strengths to this study. Studies conducted on HIV/AIDS have mainly focused on HIV positive individuals rather than HIV negative ones. According to the findings of this study, it is recommended that HIV intervention programs should address gender-based violence among serodiscordant couples. It is argued that these findings can help policymakers in designing care plans and empowerment programs for HIV serodiscordant couples. 
In addition, our findings indicate that there is a need to reform the rules for dealing with a spouse who has committed violence against women. To better understand the problem of violence against women, more research is needed. Suggestions for further research include investigating the perceptions of HIV positive men towards violence and examining the cultural issues affecting violence against women. Also, understanding how HIV-negative women experience violence following disclosure of their HIV positive husband's illness needs more investigation in future research. Given that some of the men in this study were involved in extramarital relationships, which is itself a risk factor for HIV/AIDS, studies should be conducted to understand how the dynamics that erode marriage and trust can create environments in which men acquire HIV outside of their relationship. Additionally, measuring the prevalence, causes and types of violence in HIV serodiscordant couples using quantitative methods is recommended for further research. Because some healthy women with an HIV positive husband are reluctant to attend health centers to receive care, it is suggested that the focus be on empowerment and social interventions outside of the HIV clinic that address issues of social norms and cultural violence, help these couples improve conflict resolution skills, and encourage rethinking of patriarchal thought and behavior patterns. This study provided insight into different aspects of violence against Iranian women in serodiscordant HIV relationships. In summary, women experienced all types of violence. Traditional beliefs in the community and the lack of appropriate laws and services to deal with perpetrators of violence lead to continuing violence against women. 
According to the findings of this study, an effective response to violence against women should include improving women's empowerment and enhancing their awareness of prevention strategies, reducing patriarchal culture, and strengthening accountability to survivors of violence through the approval of effective laws. Also, attitudes and behaviors that lead to marital conflicts and, eventually, spousal abuse should be addressed. Therefore, considering the role of men in the occurrence of violence, policymakers must create and execute family-centered interventions to prevent violence. Health care providers at HIV/AIDS counseling centers should be able to address topics related to violence with HIV negative women in serodiscordant relationships. This requires that they be trained to screen women for violence and to refer those who need care and therapy to specialists.Additional file 1. Interview guide."}
+{"text": "Over the past 20 years natural killer (NK) cell-based immunotherapies have emerged as a safe and effective treatment option for patients with relapsed or refractory leukemia. Unlike T cell-based therapies, NK cells harbor an innate capacity to eliminate malignant cells without prior sensitization and can be adoptively transferred between individuals without the need for extensive HLA matching. A wide variety of therapeutic NK cell sources are currently being investigated clinically, including allogeneic donor-derived NK cells, stem cell-derived NK cells and NK cell lines. However, it is becoming increasingly clear that not all NK cells are endowed with the same antitumor potential. Despite advances in techniques to enhance NK cell cytotoxicity and persistence, the initial identification and utilization of highly functional NK cells remains essential to ensure the future success of adoptive NK cell therapies. Indeed, little consideration has been given to the identification and selection of donors who harbor NK cells with potent antitumor activity. In this regard, there are currently no standard donor selection criteria for adoptive NK cell therapy. Here, we review our current understanding of the factors which govern NK cell functional fate, and propose a paradigm shift away from traditional phenotypic characterization of NK cell subsets towards a functional profile based on molecular and metabolic characteristics. We also discuss previous selection models for NK cell-based immunotherapies and highlight important considerations for the selection of optimal NK cell donors for future adoptive cell therapies. However, although the NK cell repertoire is highly heterogeneous both between and within individuals, relatively little attention has been given to the initial selection of NK cells which harbor the greatest antitumor activity. 
Here, we review the current state of donor selection for peripheral blood NK (pb-NK) cell-based immunotherapies and discuss the factors which drive NK cell effector function, along with the challenges associated with identifying highly potent NK cell populations for immunotherapy. Natural killer (NK) cells were first characterized in the 1970s by their ability to detect and eliminate tumor cells without prior antigen sensitization. NK cells are a cytotoxic subset of innate lymphoid cells (ILCs) with marked potency against malignant cells. NK cells and other ILCs originate from the same bone marrow-derived common lymphoid progenitor cells (CLPs) as B and T lymphocytes. CD34+CD45RA+ CLPs are thought to migrate to various anatomical sites where they subsequently undergo interleukin-15 (IL-15) mediated differentiation along the NK cell lineage. Two major pb-NK cell subsets are distinguished by their surface expression of CD56 and CD16: CD56brightCD16- and CD56dimCD16+. CD56bright cells represent approximately 10% of the pb-NK cell population and primarily act as potent producers of pro-inflammatory cytokines such as interferon gamma (IFN\u03b3) following cytokine stimulation, whereas CD56dim cells are potently cytotoxic, expressing high levels of perforin and granzyme B. Unlike T and B lymphocytes, NK cell receptors do not undergo somatic rearrangement to generate antigen specificity. Rather, NK cells rely on the stochastic expression of germline-encoded activating and inhibitory receptors, with the complex integration and hierarchy of signals generated through these receptors tightly controlling NK cell function. NK cells express a suite of activating receptors which detect various molecules upregulated by malignant cells. Simultaneous engagement of multiple activating receptors is typically required to overcome an NK cell\u2019s intrinsic activation threshold and trigger effector function. Inhibitory killer cell immunoglobulin-like receptors (KIRs), encoded at the KIR locus on chromosome 19, recognize HLA class I molecules and regulate self-tolerance to healthy tissues by dominantly inhibiting NK cell activation. Two major KIR haplotypes have been described. 
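The inhibitory KIR-HLA ligand relationships described above underpin a simple clinical donor-selection rule: a donor NK cell is predicted to be alloreactive against recipient cells when the donor carries an inhibitory KIR whose HLA class I ligand is absent in the recipient. A minimal sketch of this rule (illustrative only: the receptor-ligand table is a simplified subset, and real KIR/HLA typing is far more granular than these group-level assignments):

```python
# Simplified inhibitory-KIR -> HLA class I ligand pairs (illustrative subset).
KIR_LIGANDS = {
    "KIR2DL1": "C2",   # HLA-C group 2
    "KIR2DL2": "C1",   # HLA-C group 1
    "KIR2DL3": "C1",
    "KIR3DL1": "Bw4",  # HLA-B Bw4 epitope
}

def predicted_gvh_alloreactivity(donor_kirs, recipient_ligands):
    """Return the donor inhibitory KIRs whose HLA ligand the recipient lacks.

    A non-empty result predicts NK cell alloreactivity in the
    graft-versus-host direction under the KIR-ligand mismatch model.
    """
    recipient = set(recipient_ligands)
    return sorted(
        kir for kir in donor_kirs
        if kir in KIR_LIGANDS and KIR_LIGANDS[kir] not in recipient
    )

# Donor carries KIR2DL1 (ligand C2); recipient is C1-homozygous and Bw4+,
# so KIR2DL1+ donor NK cells meet no inhibitory ligand in the recipient.
mismatches = predicted_gvh_alloreactivity(
    donor_kirs=["KIR2DL1", "KIR2DL3", "KIR3DL1"],
    recipient_ligands=["C1", "Bw4"],
)
print(mismatches)  # ['KIR2DL1']
```

The same function run in the reverse direction (recipient KIRs against donor ligands) would flag host-versus-graft alloreactivity; only the graft-versus-host direction is relevant to the leukemia-relapse data discussed in the text.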
Interaction between these major inhibitory receptors and their specific HLA ligand is critical for NK cells to achieve functional maturation through a process known as \u201clicensing\u201d or NK cell education. Owing to their innate cytotoxicity, low rates of graft-versus-host disease, and potential for combination with other treatment strategies, NK cell-based therapies have emerged as promising candidates for the treatment of a variety of hematological malignancies and solid tumors. Because the initial selection of highly functional cells with strong innate potency is essential for the widespread success of NK cell therapies, it is important that over the past decade it has become increasingly evident that not all NK cells have the same baseline capacity to eradicate leukaemic cells. In early haploidentical transplantation studies, outcome in patients with predicted NK cell alloreactivity in the graft-versus-host direction was 65% compared to 5% in patients without predicted alloreactivity. These observations formed the basis of the KIR haplotype model of donor selection. In a subsequent study, patients with predicted alloreactivity achieved complete remission compared to 2 out of 15 patients (15%) without predicted alloreactivity. CMV-driven expansion protocols used to generate the final CD3-CD19-CD57+NKG2C+ NK cell product drive the expansion of the CD57+NKG2C+KIR+ adaptive NK cell population with an increased capacity for ADCC, and cytokine-induced memory-like NK cells have been described following in vitro stimulation with IL-12/IL-15/IL-18. Metabolic programs coincide with NK cell development and effector function. In mice, developing NK cells utilize both glycolysis and OxPhos to fuel the energy-intensive process of proliferation. The mechanistic target of rapamycin (mTOR) pathway is also critical for NK cell development and activation. 
mTOR is a highly evolutionarily conserved serine/threonine kinase comprised of two distinct complexes: mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2). Together, these complexes act as master regulators of cellular metabolism and integrate signals for nutrient availability, growth, and activation to adjust the rates of glycolytic metabolism and biosynthesis accordingly. The advent of mass cytometry and single cell RNA-seq (scRNA-seq) has provided researchers with unprecedented insight into the developmental and functional plasticity within the NK cell compartment. A particular interest has arisen in unravelling the developmental trajectory of human NK cells. Based on phenotypic analyses, the current model of NK cell differentiation describes a linear relationship between the immature CD56bright and mature CD56dim subsets. Epigenetic remodeling of the KIR and IFNG loci during NK cell differentiation corresponds with the acquisition of KIR expression, and differentiation is further shaped by changes in transcription factor, chromatin accessibility, and microRNA (miRNA) expression [reviewed elsewhere]. Changes in expression of the transcription factor BLIMP-1 and in chromatin accessibility have also been linked to adaptive NK cell differentiation. 
Furthermore, demethylation of the IFNG locus increases the accessibility of the CNS1 region and drives the characteristic increase in IFN\u03b3 expression displayed by adaptive NK cells. Reduced accessibility at the FCER1G and SH2D1B (encoding EAT-2) loci corresponds with the reduced expression of these signaling proteins, and reduced accessibility at the ZBTB16 locus (encoding PLZF) was also observed, corresponding with a striking 77% downregulation of this transcript in adaptive NK cells compared to conventional NK cell populations. This work was supported by the Cancer Council Western Australia and the Western Australia Department of Health (research funding to BF). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
+{"text": "The genome of an organism has always fascinated life scientists. With the discovery of restriction endonucleases, scientists were able to make targeted manipulations (knockouts) in any gene sequence of any organism by the technique popularly known as genome engineering. Though there is a range of genome editing tools, this era of genome editing is dominated by the CRISPR/Cas9 tool due to its ease of design and handling. But when it comes to clinical applications, CRISPR is not usually preferred. In this review, we will elaborate on the structural and functional role of designer nucleases with emphasis on TALENs and the CRISPR/Cas9 genome editing system. We will also present the unique features of TALENs and limitations of CRISPR which make TALENs a better genome editing tool than CRISPR. Genome editing is a robust technology used to make target-specific DNA modifications in the genome of any organism. With the discovery of robust programmable endonuclease-based designer gene-manipulating tools such as meganucleases (MNs), zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats associated protein (CRISPR/Cas9), research in this field has experienced a tremendous acceleration, giving rise to a modern era of genome editing with better precision and specificity. Though the CRISPR/Cas9 platform has successfully gained more attention in the scientific world, TALENs and ZFNs are unique in their own ways. Apart from their high specificity, TALENs have proven able to target mitochondrial DNA (mito-TALENs), into which the gRNA of CRISPR is difficult to import. This review talks about genome editing goals fulfilled by TALENs and the drawbacks of CRISPR. This review provides significant insights into the pros and cons of the two most popular genome editing tools, TALENs and CRISPR. 
This mini review suggests that TALENs provide novel opportunities in the field of therapeutics, being highly specific and sensitive toward DNA modifications. In this article, we briefly explore the special features of TALENs that make this tool indispensable in the field of synthetic biology, providing guidance to researchers working in the field of trait improvement via genome editing. Genome editing is a procedure that allows site-specific modifications to be made in the genome of any organism. Genome editing is being applied in many plant and animal species. The use of ZFNs has been shown in marine animals such as zebrafish and in many livestock animals, while TALEN and CRISPR systems are used in the cell lines of many livestock species such as cows, pigs, and chickens. To modify a target gene, the genome-editing tools are designed to create a double-stranded break (DSB) precisely at the target-specific region. DSBs are then repaired by the cell's endogenous pathways, chiefly error-prone non-homologous end joining or template-directed homologous recombination. Of the genome editing technologies available, TALENs and CRISPR are common in practice for making site-specific gene modifications. The CRISPR/Cas9 system for genome editing is considered the biggest scientific development of the decade, leading to the Nobel Prize for its inventors and opening up tremendous opportunities in the field of medicine and sustainable improvement in agriculture. Cas9, derived from the S. pyogenes innate immune system, has ushered in this generation of genome engineering owing to its flexible use and easy construction. To configure Cas9 to knock out a given target DNA, the guide RNA (gRNA) sequence must be designed to have a 5\u2032 end complementary to the target site. Despite their wide range of applications, these designer nucleases are not thought to be safe or precise enough for site-directed therapies, particularly in gene therapy. 
Though off-target effects exist in all genome editing systems, the high prevalence (\u2265 50%) of unpredictable off-targets in CRISPR/Cas9 technology is a major disadvantage. TALE proteins were first reported in 2009, derived from the phytopathogenic bacterial genus Xanthomonas. A typical TALEN unit comprises a central DNA-binding domain of 12-28 repeats, a nuclear localization signal (NLS), an acidic domain for target gene transcription activation, and a FokI nuclease. The four most common RVDs identified by various experimental validations are NN, NG, HD, and NI, with unique preferential binding affinities toward G/A, T, C, and A respectively, bestowing target specificity. The popularly used TALEN system comprises two units of DNA-binding domains (DBDs) from TALE proteins. Each unit is attached to a catalytic domain from the FokI restriction enzyme. The FokI nucleases of the TALEN pair dimerize, generating a cleavage on both strands of the DNA double helix and activating the DNA repair machinery to fix the disruption. By aligning repeat modules (RVDs) in a particular order, it is possible to create TALENs with the required sequence precision. There is, however, a limit to the choice of target sites for TALENs: a thymine at position 0, i.e., immediately preceding the TALE-repeat-bound sequence, is invariably required. This era of gene targeting is ruled by two very recent and robust engineered nucleases, TALEN and CRISPR. CRISPR-Cas9 is a familiar tool for many molecular biologists, widely known for its easily programmed gRNA. Cas9 from S. pyogenes recognizes its unique protospacer adjacent motif, or PAM, immediately adjacent to a 20-nucleotide target site where the gRNA hybridizes (Watson-Crick base pairing) with the strand opposite the PAM site, channeling Cas9 to cut the DNA. 
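The SpCas9 targeting rule just described (a 20-nucleotide protospacer immediately 5' of an NGG PAM) can be sketched as a simple sequence scan. This is an illustrative sketch only: real gRNA design tools also scan the reverse complement (CCN on this strand) and score candidates for off-target risk.

```python
import re

def find_spcas9_targets(seq, protospacer_len=20):
    """Scan one strand for SpCas9 targets: a protospacer of the given
    length immediately 5' of an NGG PAM. Returns (protospacer, PAM) pairs."""
    seq = seq.upper()
    targets = []
    # Zero-width lookahead so overlapping PAMs are all reported.
    for m in re.finditer(r"(?=[ACGT]GG)", seq):
        pam_start = m.start()
        if pam_start >= protospacer_len:
            targets.append(
                (seq[pam_start - protospacer_len:pam_start],  # protospacer
                 seq[pam_start:pam_start + 3])                # NGG PAM
            )
    return targets

# A toy sequence with a single NGG (the TGG near the 3' end).
demo = "ACGT" * 6 + "TGGA"
for protospacer, pam in find_spcas9_targets(demo):
    print(protospacer, pam)  # ACGTACGTACGTACGTACGT TGG
```

The gRNA spacer would then be the protospacer sequence itself (as RNA), since the gRNA base-pairs with the strand opposite the one carrying the PAM.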
The gRNA of CRISPR is an integrated product of a custom-designed crRNA and a trRNA scaffold, and CRISPR has been shown in a number of recent papers to induce DSB formation at very high frequencies at the desired DNA locus. On the other hand, TALEs have a well-defined DNA base-pair preference, offering a straightforward strategy for researchers and engineers to design and construct TALEs for genome alteration. Engineered TALEs can currently be used in cells and model organisms. A tandem of protein repeats is responsible for recognizing individual DNA base pairs. The tandem repeats are made up of a pair of alpha helices linked by a three-residue loop containing the RVD, arranged in the shape of a solenoid. For the creation of TALEs with variable precision and binding affinity, six conventional RVDs are frequently used. HD and NG are associated with cytosine (C) and thymine (T) respectively; these associations are strong and exclusive. NN is a degenerate RVD showing binding affinity for both guanine (G) and adenine (A), but its specificity for guanine is reported to be stronger. RVD NI binds A and NK binds G; these associations are exclusive, but the binding affinity is lower, so they are considered weak. Therefore, it is recommended to use the RVD NH, which binds G with medium affinity. It is also worth noting that the binding affinity of TALEs is influenced by the methylation status of the target DNA sequence. A typical TALEN system usually consists of 18 repeats of 34 amino acids. A TALEN pair must bind the target site on opposite strands, separated by a \u201cspacer\u201d of 14-20 nucleotides as an offset, since FokI requires dimerization for operation. As a whole, such a long (approximately 36 bp) DNA binding site is predicted to appear in genomes very rarely. Remarkably, the high degree of specificity and low cytotoxicity of TALENs has been exploited in diverse cell types. 
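The RVD cipher summarized above (HD->C, NG->T, NI->A, NN->G/A, NH->G, NK->G) can be expressed as a simple lookup that translates a designed repeat array into the DNA sequence it is expected to bind. A minimal sketch (illustrative only; it collapses each RVD to its preferred base and ignores the differing binding affinities discussed in the text):

```python
# RVD -> preferred base, per the cipher described in the text.
RVD_CODE = {
    "HD": "C",  # strong, exclusive
    "NG": "T",  # strong, exclusive (also binds 5mC)
    "NI": "A",  # weak
    "NN": "G",  # degenerate: G preferred, also binds A
    "NH": "G",  # medium affinity, more specific for G
    "NK": "G",  # weak
}

def rvd_array_to_target(rvds):
    """Translate a TALE repeat array into its expected DNA target.

    TALE targets are invariably preceded by a thymine at position 0,
    which is prepended here.
    """
    return "T" + "".join(RVD_CODE[r] for r in rvds)

print(rvd_array_to_target(["HD", "NG", "NI", "NN", "HD"]))  # TCTAGC
```

Designing a TALEN pair then amounts to building one such array for each strand, offset by the 14-20 nt spacer that FokI dimerization requires.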
The TALEN code is degenerate, which means that certain RVDs can bind multiple nucleotides with a diverse spectrum of efficiency. The binding ability of the NN (for A and G) and NS repeat variable di-residues empowers TALENs to encode degeneracy for the target DNA. Apart from the four typical nucleotides A, T, G and C, the epigenetic DNA nucleobases 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5hmC) play important regulatory roles, contributing to around 75% of CpG islands in a mammalian genome. TALE proteins can be successfully re-engineered with sensitivity toward these DNA chemical modifications. Methylated cytosine is not efficiently bound by the canonical RVD HD; however, owing to the high structural resemblance of methylated cytosine to thymine, NG is capable of binding methylated cytosine. CRISPR-Cas9 is known for its broad spectrum of successful applications in this modern era of nuclear DNA editing, but researchers find it challenging to import the gRNA and Cas9 complex into the mitochondria to selectively eliminate mutations. A CRISPR/Cas9 platform tailored to mitochondria could prompt a revolution in mitochondrial genome engineering and biological understanding. Control over endogenous gene expression has always fascinated scientists. Programmable designer transcription factors fused to desired transcriptional activator and repressor protein domains provide researchers the flexibility to keep a strong hold on the transcriptional machinery of a gene. 
Before their application in genome modification, proteins like ZFNs and TALEs had already demonstrated a wide range of applications in modulating the expression of any gene of any organism. Most recently, a modified version of the easy-to-design, catalytically inactive Cas9-based CRISPR system was also developed for use as an artificial transcription factor; however, due to the large size of the complex, it is not as efficient as TALE protein-based artificial transcription factors. Artificial transcription factors were first generated by the fusion of an engineered zinc finger protein with a 16-amino-acid peptide (VP16) from herpes simplex virus as a transactivation domain. To explore CRISPR/Cas9 as a transcription modulator, the Cas9 protein of the system is catalytically deactivated and then fused with desired effector domains such as VP64. In its earlier versions, CRISPR/Cas9 acted as a repressor of transcription by blocking the target site: the steric hindrance of the CRISPR complex obstructed the transcription machinery at its target on the DNA. In the modified versions, the involvement of a repression domain proved more efficient than steric repulsion alone. A mechanism known as CRISPR interference, in which the dead Cas9 protein blocks gene expression by obstructing the transcription start site, is also in practice, and fusing dCas9 to transcriptional repressor domains is another way to effectively silence a gene from the promoter. Natural modifications of the cytosine base are absolutely necessary to maintain regulation of genome expression and genome stability. ATFs are programmed to engineer the epigenome to modulate the expression of a gene without altering the DNA sequence. These epigenome editing techniques help us explore the role of epigenetics in the crucial process of gene expression. 
Moreover, epigenome editing uncovers the exact sequence of events of chromatin remodeling and its effect on gene expression, which is key to understanding many biological processes and diseases in humans. In comparison with ZFNs, TALE proteins have emerged as critical DNA-binding scaffolds governed by a simple cipher, and their compatibility with a broad range of epigenetic modifiers is commendable. To understand the function of histone modifications, zinc-finger proteins were fused with a methyltransferase in a study by Carl, which successfully demonstrated that methylation of H3K9 can ultimately result in gene repression. A wide range of cytosine modifications naturally exists, such as 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5hmC), and unwanted DNA methylation is associated with many neurodegenerative diseases. The catalytic domain of thymidine DNA glycosylase (TDG) was first demonstrated to abolish DNA methylation and induce gene expression. For real-world clinical applications of genome editing systems, it is important to develop efficient in vivo delivery strategies to improve safety. Traditional in vivo delivery methods are either viral vector based or non-viral vector based. Viral vectors such as adeno-associated virus (AAV), adenovirus (AdV), and lentivirus (LV) are well established to carry and deliver small ZFNs and even TALENs into target cells, but are not generally preferred for the CRISPR-Cas9 system due to its bulky architecture and large cargo size (~4.3 kb), which approaches the packaging limit of AAV vectors. Viral vectors are popular for the delivery of nucleases, with well-established protocols and high efficiency; AAV in particular rarely integrates, reducing the risk of insertional mutagenesis. It has been reported that an LV-encapsulated delivery vector for the CRISPR-Cas9 system in gene therapy produced efficient targeted mutations, but a higher risk of off-targets limits its applications. 
In contrast, easily designed and constructed non-viral vectors have the capability to carry large designer nucleases, but their low transfection efficiency and poor specificity cause high cellular toxicity, which limits their use in gene therapy. TALEN is a robust and promising genome editing tool which offers the scientific community a wide choice to target virtually unlimited sequences in any organism. It is a powerful tool with high specificity and precision and low cytotoxicity, which makes it ideal for therapeutic applications, especially in humans. TALENs are easy and relatively cheap to design and assemble, and thus a first choice for targeted mutagenic applications. The potential of TALENs is often neglected in favor of CRISPR owing to the latter's ease of design but, when it comes to real-world problems, the outstanding competence of TALENs is beyond doubt."}
+{"text": "In the past two decades, genome editing has proven its value as a powerful tool for modeling or even treating numerous diseases. After the development of protein-guided systems such as zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs), which for the first time made DNA editing an actual possibility, the advent of RNA-guided techniques has brought about an epochal change. Based on a bacterial anti-phage system, the CRISPR/Cas9 approach has provided a flexible and adaptable DNA-editing system that has been able to overcome several limitations associated with earlier methods, rapidly becoming the most common tool for both disease modeling and therapeutic studies. More recently, two novel CRISPR/Cas9-derived tools, namely base editing and prime editing, have further widened the range and accuracy of achievable genomic modifications. This review aims to provide an overview of the most recent developments in the genome-editing field and their applications in biomedical research, with a particular focus on models for the study and treatment of cardiac diseases. Genome editing is a catch-all definition that refers to a variety of techniques capable of achieving site-specific DNA modifications in a living cell. The concept itself is far from new, as the first reports of targeted genetic changes date back to the late 1970s and the 1980s. A crucial breakthrough in genome editing occurred with the discovery of the CRISPR/Cas9 system in bacteria, in which it serves as a defense system against bacteriophage infections. The CRISPR acronym derives from \u201cclustered regularly interspaced short palindromic repeats\u201d, while Cas derives from \u201cCRISPR-associated endonuclease\u201d. Among Cas9 orthologs, Streptococcus pyogenes Cas9 (SpCas9) recognizes the 3\u2032NGG PAM sequence, while Staphylococcus aureus Cas9 (SaCas9) recognizes either 3\u2032NNGRRT or NNGRR(N).
Since the discovery of the biological significance of the mechanisms behind it, and thanks to its simplicity and adaptability, CRISPR/Cas9 has rapidly become the dominant genome-editing platform. Importantly, upon modification of Cas9\u2014leading to the so-called dead Cas9, or dCas9\u2014the system has also been used to insert methyl groups and to block or activate transcription at precise positions. The two classes of DNA base editors described are cytosine base editors (CBEs) and adenine base editors (ABEs), which, collectively, are able to induce C > T, T > C, A > G, and G > A transitions without generating double-strand breaks. CBEs consist of a macromolecular complex in which a catalytically impaired CRISPR/Cas9 nuclease is fused to an APOBEC1 deaminase enzyme and a uracil glycosylase inhibitor protein. Upon Cas binding, hybridization of the guide RNA to its target DNA sequence leads to the displacement of the PAM-containing DNA strand to create an R-loop by local denaturation of the DNA duplex. However, the uracil and inosine intermediates are mutagenic, and DNA repair mechanisms have evolved to remove these bases from the DNA. For uracil, this is done by uracil DNA N-glycosylase (UNG2). To increase the half-life of uracil and consequently increase the purity\u2014referred to as the ratio of intended edits to random edits\u2014and editing efficiency, CBEs are fused to uracil glycosylase inhibitor proteins (UGIs). Further improvements to editing efficiency were achieved by designing the Cas nuclease in such a manner that it generates a nick in the non-edited DNA strand, thereby stimulating cellular repair of the non-edited strand using the edited strand as a template.
With these mechanisms, base editors can introduce point mutations more efficiently and with fewer undesired byproducts than the CRISPR/Cas9 method. Whereas base editing can only install the four transition mutations (C > T, T > C, A > G, and G > A) without excess byproducts, prime editing is able to efficiently mediate targeted insertions, deletions, all 12 possible point mutations, and combinations thereof. With this recently developed method, the desired genetic information is directly written into a specified DNA site without requiring DSBs or a donor DNA template. Prime editors can be created by fusing a catalytically impaired Cas9 endonuclease (Cas9 nickase), with an inactivated HNH nuclease domain, to an engineered reverse transcriptase (RT) domain. Various prime editor systems have been developed by Anzalone et al., starting from PE1, in which Cas9 nickase is fused to a wild-type Moloney murine leukemia virus (M-MLV) RT. Upon target binding by the pegRNA, the Cas9 RuvC nuclease domain nicks the PAM-containing DNA strand and liberates the 3\u2032 end at the target DNA site. Cardiovascular disorders are the main cause of death worldwide; therefore, new approaches for their study and treatment are urgently needed. In recent years, advances in the technology for the generation of human induced pluripotent stem cell (hiPSC)-derived cardiomyocytes (hiPSC-CMs) from affected patients opened up new possibilities for the modeling of cardiac diseases. Moreover, CRISPR/Cas9 has been broadly used to generate in vitro allele-specific knockout models of cardiac diseases, such as the channelopathy long QT syndrome (LQTS). Therefore, these studies not only shed light on the pathophysiological mechanisms of cardiac diseases but also offered important insights into potential therapeutic approaches for hereditary cardiac diseases. Interestingly, CRISPR/Cas9 was also used to perform genome-scale functional screening. Sapp et al.
successfully investigated the mechanisms underlying doxorubicin-mediated cell death in hiPSC-CMs. Only recently were base editing and prime editing applied to rescue phenotypes in vitro in models of cardiac diseases. Chemello et al. employed both base editing and prime editing to restore dystrophin protein expression in iPSC-CMs by inducing exon skipping or exon reframing, respectively, for the correction of a DMD exon 51 deletion mutation. Base editing of the mutant hiPSCs, performed before differentiation, restored expression of the dystrophin protein in hiPSC-CMs. Additionally, prime editing was employed as an alternative correction method: delivered via nucleofection, it reframed the open reading frame of exon 52 and likewise showed efficient correction and restoration of dystrophin expression in hiPSC-CMs. Together, these results demonstrate the application and effectiveness of base editing and prime editing for the correction of several DMD mutations and open intriguing new opportunities for the study and possibly treatment of other genetic cardiac diseases. Genome editing has also been used in vivo to generate disease models and to test novel therapeutic approaches for diseases such as cardiac disorders. Up to now, CRISPR/Cas9 is the genome-editing tool most frequently used for the generation of in vivo models of cardiac disorders, while base editing and prime editing are still in their infancy, despite some in vivo studies that have already been performed. When using genome editing to correct a mutation that leads to an autosomal dominant disorder, it is essential to disrupt solely the mutant allele while keeping the wild-type allele intact. This was demonstrated for a mutation in the PRKAG2 gene leading to PRKAG2 cardiac syndrome, characterized by ventricular tachyarrhythmia and progressive heart failure.
CRISPR/Cas9 was also used to correct large deletions in vivo, such as those responsible for Duchenne muscular dystrophy (DMD). El Refaey et al. targeted exon 23 of the dystrophin gene by using SaCas9\u2014for its smaller size\u2014and sgRNA packaged into an AAVrh74 vector and delivered these to neonatal mice. More recently, base editing has also been employed as a genome-editing approach to treat DMD in mice. Xu and colleagues succeeded in correcting the DMD-causing mutation in adult mdx4cv mice treated with a modified ABE (iABE-NGA), characterized by an engineered NG PAM-interacting domain variant that accepts a more relaxed PAM sequence near the mutation site, besides showing improved on-target DNA editing activity and specificity. An outstanding study recently demonstrated the potential of base editing for the correction of disease-causing mutations in utero, thus preventing the onset of certain pathologies. An AAV-delivered ABE was used to correct an Idua gene mutation in a mouse model of mucopolysaccharidosis type I (MPS-IH), a lysosomal storage disease affecting multiple organs, including the heart. In utero-treated mice showed reduced cardiac lysosomal accumulation of glycosaminoglycans, increased IDUA activity in the heart, and improved echocardiographic function compared to control mice as a result of partial cardiomyocyte editing (13.9 \u00b1 0.8%). Considering the natural characteristics of the developing fetus, as well as the timing of the onset of some diseases, prenatal base editing might become an attractive therapeutic approach, and this study confirms the potential of this system. Additionally, the authors assessed the post-natal effect of the ABE treatment, confirming attenuation of disease progression in treated mice at 10 weeks of age. Another study disrupted alpha-myosin heavy chain (Myh6) gene expression, using cardiotropic AAV9 to deliver the associated sgRNA.
The described method, cardioediting, is valuable for disease modeling, provides a means for the rapid editing of genes in the heart, and can be used to explore possible gene therapies for cardiac disease and dysfunction. One of the main issues with genome editors in vivo concerns their delivery to the target tissue, due to the large size of the key components and the low capacity of the delivery vectors. To circumvent this, K. J. Carroll et al. created a transgenic mouse model that constitutively expressed high levels of Cas9 exclusively in the heart, thereby only requiring AAV9 to deliver sgRNA, which is within its packaging limit. As an alternative approach, many research groups use a dual-vector system to deliver genome-editing components separately in AAV constructs, allowing for modification of the ratio between the two components, which proved to be vital for efficient targeting. Recently, an editing system targeting DMD exon 50 in a mouse model harboring an exon 51 deletion was packaged into AAV9 with the use of a split-intein system, resulting in the successful skipping of exon 50 and the restoration of functional dystrophin expression. Another recent study, reported by Koblan et al., assessed in vivo base editing by employing an ABE to correct the mutation in the lamin A (LMNA) gene associated with Hutchinson-Gilford progeria syndrome (HGPS). A dual AAV9 vector system was used to transduce the ABE into progeria-relevant tissues, including the heart and muscle, of a mouse model carrying the human LMNA mutation. A persistent correction of 20\u201360% was achieved, giving rise to the restoration of normal splicing and diminished levels of progerin, as well as notably improved vascular disease, thereby increasing the lifespan of ABE-treated mice by 2.4-fold compared to the control group.
The different genome-editing systems vary in terms of efficiency, off-target effects, and sensitivity. The therapeutic applications of genome editing remain restricted by technical and biological problems. Before utilizing a particular technique, several technical and ethical considerations need to be addressed, especially for gene therapy. The hurdles yet to be overcome, such as off-target effects, the efficacy of HDR, the fitness of edited cells, immunogenicity, and overall efficiency and specificity, give rise to future directions for the optimization of genome editing. One of the major concerns for the application of CRISPR/Cas9 is the high frequency of off-targets in human cells, observed in varying cell types, including cardiomyocytes. Off-target sites can be predicted in silico with tools such as Cas-OFFinder (http://www.rgenome.net/cas-offinder/), CCTop (https://cctop.cos.uni-heidelberg.de:8043/index.html), and CRISPOR (http://crispor.tefor.net). Besides the potentially dangerous effects of off-targets, on-target mutagenesis should also be taken into account. As opposed to the desired on-target edits, the introduction of a specific modification at the target site with HDR after the generation of a DSB by CRISPR/Cas9 can be inefficient. Particularly in post-mitotic tissues, NHEJ can lead to disruption of the target gene, resulting in potentially more disrupted cells than cells containing the desired edit. This limitation is particularly problematic in the case of the correction of a disease-causing mutation in vivo and should be taken into account before employing a technique such as CRISPR/Cas9. Base editors can also potentially induce off-target editing, both outside and within the activity window.
Lastly, prime editing seems to be associated with lower off-target mutagenesis than base editing and CRISPR/Cas9. Another major restriction of CRISPR/Cas9 is the requirement for a PAM sequence in proximity to the target site, limiting its applicability and effectiveness; indeed, it has been shown that editing efficiency rapidly decreases with increasing distance from the cut site. Because base editing can induce bystander edits, an ideal base editor should possess a narrow activity window focused solely on the target base. Nonetheless, such a narrow activity window would require the base editor to be deployable for a wide range of PAM sequences, raising an additional shortcoming of the base editing system. On the other hand, prime editors are capable of introducing point mutations far (>30 bp) from the nicking site, resulting in greater targeting flexibility than Cas9 nuclease-mediated HDR, and less restrictive PAM availability than other precision editing methods, including base editing. Therefore, although less efficient, prime editing could complement base editing, not only to avoid bystander off-target effects but also in case of the lack of an appropriately located PAM. The DSBs induced by CRISPR/Cas9 often trigger apoptosis as opposed to the intended genome edit. This DNA-damage toxicity highlights a major safety issue for the application of DSB-inducing CRISPR/Cas9 therapy. Immunotoxicity is typically lower for base and prime editors since, once optimized, these techniques solely require a single administration, though this needs to be further investigated and characterized.
Additionally, since prime editors necessitate an RT template, random complementary DNA could potentially be incorporated into the genome, causing further safety concerns. Given that NHEJ is the favored repair mechanism in somatic cells, unpredictable insertions or deletions can occur after DSB induction. Specifically, HDR is very inefficient in cardiomyocytes, and DSB repair relies on error-prone NHEJ. Considering that both base editing and prime editing depend on cellular mismatch repair mechanisms rather than recombination-based mechanisms such as HDR, these more recent methods represent an alternative strategy for genome editing in non-dividing cells, such as cardiomyocytes. Finally, as the heart is a post-mitotic organ, the repair of DSBs induced by CRISPR/Cas9 rarely occurs via HDR. Besides using base editing or prime editing, which do not rely on HDR, an alternative solution might be homology-independent targeted integration (HITI), which allows the insertion of exogenous DNA into the genome of dividing or non-dividing cells, such as cardiomyocytes, in a fashion that relies on NHEJ-based ligation. For genome editing to exert the desired effect, the components of the utilized system need to be transported to the nucleus of the target cells; the delivery of genome-editing systems is therefore a vital process. Researchers have explored various options for this, including delivering DNA or mRNA encoding Cas9 that allows in situ production of the protein, as well as directly delivering the native form of Cas9. Due to the large size of Cas9 and the negative charge of the gRNA, the cell membrane will naturally block such molecules from entering.
To overcome this barrier, both viral and non-viral carriers have been explored. Since viruses have evolved to become highly effective at invading cells and inserting exogenous components into the host cell, viral particles have been employed for the delivery of genome-editing systems. The viral genes responsible for replication are removed and replaced by a transgene of interest, ensuring that the virus can still enter the host cell but is no longer able to replicate. Several types of viruses, including adenovirus, lentivirus, retrovirus, and AAV, have been modified for use in gene therapy applications. AAV is currently the most commonly used viral vector for the delivery of Cas9 and its associated components. An additional challenge concerning viral delivery is long-term expression: ideally, genome-editing tools should be transiently expressed to reduce the risk of off-target nuclease genotoxicity and possible immune responses against the bacterial-derived proteins. There are also numerous delivery methods that do not utilize viral vectors to transport Cas9 across the cell membrane. These comprise physical methods to disrupt cellular barriers, nanoparticle-mediated delivery, and chemical alterations to circumvent cellular barriers. Their most substantial benefit is evasion of the packaging size restrictions associated with viral vectors. Temporarily breaching the cellular membrane to facilitate the entry of Cas9 and other genome-editing components into the cell is a common technique for transfecting cells in vitro. The most extensively studied method is electroporation, in which molecules enter the cell and, subsequently, the nucleus by means of a temporary electrical pulse applied to the cell, making the membrane more porous and non-selective. This technique is also viable for cardiomyocytes and, therefore, for ex vivo therapeutic approaches in cardiovascular diseases.
However, in vivo electroporation is more challenging due to the complicated choice of parameters and the risk of damaging tissues. Next to this highly efficient method of delivery, microinjection into one-cell embryos can also achieve high efficiency: micron-scale needles pierce the cell membrane and directly release the cargo. Given that this process needs to be performed on individual cells, it is feasible only at embryonic stages. The most advanced in vivo non-viral delivery method uses solid lipid nanoparticles (SLNs). This FDA-approved strategy is an appealing option for delivering Cas9 RNPs due to its high efficiency and outstanding clinical track record. The promising results accomplished by non-viral vectors notwithstanding, these approaches are generally not very efficient for the genome-editing treatment of cardiovascular diseases, and vectors such as AAVs remain the most suitable for targeting the heart. The field of genome editing is rapidly advancing, and the most recently developed techniques, CRISPR/Cas9, base editing, and prime editing, show great potential for future broad applications in different fields. For genome-editing systems to be applicable in the clinic, further understanding and advances are necessary. Firstly, improved technical approaches are needed to increase target specificity and to minimize or eliminate off-target effects. In addition, as base editing and prime editing technologies are still in their infancy, these systems need to be further characterized and optimized to allow for their therapeutic application in vivo. Moreover, since the most commonly used Cas9 proteins, SpCas9 and SaCas9, are large, they present a major delivery challenge in terms of packaging into AAVs. Future research is necessary to discover smaller Cas9 orthologs or to reduce the size of the presently available Cas9 proteins.
Alternatively, novel or optimized delivery methods, such as SLNs, can further enhance genome-editing systems and their in vivo therapeutic application. An additional limitation that calls for advancement is the immunogenicity of CRISPR/Cas9 proteins. As a consequence of frequent pre-existing adaptive immune responses to SpCas9 and SaCas9, Cas9 orthologs to which humans have not yet been exposed ought to be identified to reduce the risk of immunological responses. Since the possibilities and applications of genome editing have grown dramatically in the past years, an analogous increase in concerns about the ethics of human genome editing has been observed\u2014in particular when it comes to its clinical applications\u2014highlighting the importance of setting up specific regulatory systems. Genome editing of somatic cells avoids the ethical issues surrounding permanent editing of the germline and allows for the treatment of already-diseased subjects. However, only when all limitations associated with genome-editing techniques are resolved will it be considered safe to employ these systems in vivo, preventing dangerous and unethical outcomes. An optimal genome editor would show high target specificity, a low or absent rate of off-target effects, a small size, flexible PAM availability, and easy accessibility. Such an editor, with minimal associated limitations and benefits that greatly outweigh its risks, could be applied in the clinic and benefit the medical world.
While ex vivo genome editing is most likely the first possible clinical application, direct in vivo editing of post-mitotic tissues, such as the heart, is probably still far away. Thus far, only a limited number of genome-editing studies have been conducted in the cardiac field, indicating that the opportunities provided by these tools have not yet been fully explored. The potential of genome editing is great and should be further researched to allow these systems to substantially benefit humankind and possibly treat diseases that were hitherto thought untreatable."}
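The off-target concern discussed in this record is usually addressed with in silico predictors such as Cas-OFFinder, CCTop, and CRISPOR. A minimal sketch of the core idea follows, assuming the simplest model: slide a 20-nt spacer over one strand of a toy genome, require an SpCas9-style NGG PAM immediately 3' of the candidate, and keep sites within a mismatch budget. Real tools also search the reverse strand, allow bulges, and use empirical scoring; `find_offtargets` and `hamming` are names of my own choosing.

```python
# Naive off-target candidate scan (illustrative only).
# Assumptions: forward strand only, fixed-length 20-nt spacer, NGG PAM,
# Hamming distance as the mismatch measure (no bulges, no scoring model).

def hamming(a: str, b: str) -> int:
    """Count positionwise mismatches between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def find_offtargets(genome: str, spacer: str, max_mm: int = 3):
    """Return (position, protospacer, PAM, mismatches) for each candidate."""
    hits = []
    n = len(spacer)
    for i in range(len(genome) - n - 2):
        protospacer = genome[i:i + n]
        pam = genome[i + n:i + n + 3]
        if pam[1:] == "GG":  # NGG PAM requirement for SpCas9
            mm = hamming(spacer, protospacer)
            if mm <= max_mm:
                hits.append((i, protospacer, pam, mm))
    return hits
```

On a toy genome containing one perfect protospacer followed by a TGG PAM, the scan reports that single site with zero mismatches; relaxing `max_mm` surfaces progressively riskier near-matches, which is exactly the trade-off the dedicated tools score more carefully.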
+{"text": "Background: Neurodegenerative diseases are a group of progressive disorders that affect the central nervous system (CNS), such as Alzheimer's disease, Parkinson's disease, and multiple sclerosis. Inflammation plays a critical role in the onset and progression of these disorders. Periodontitis is considered an inflammatory disease caused by oral biofilms around the tooth-supporting tissues, leading to a systemic and chronic inflammatory condition. Thus, this systematic review aimed to search for evidence of an association between neurodegenerative disorders and periodontitis. Methods: This systematic review was registered at the International Prospective Register of Systematic Reviews (PROSPERO) under the code CRD 42016038327. The search strategy was performed in three electronic databases and one gray literature source\u2014PubMed, Scopus, Web of Science, and OpenGrey\u2014based on the PECO acronym: observational studies in humans (P) in which a neurodegenerative disease was present (E) or absent (C) to observe an association with periodontitis (O). The Fowkes and Fulton checklist was used to critically appraise the methodological quality and the risk of bias of individual studies. The quality of evidence was assessed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Results: From 534 articles found, 12 were included, of which eight were case\u2013control, three were cross-sectional, and one was a cohort study, giving a total of 3,460 participants. All the included studies reported an association between some neurodegenerative diseases and periodontitis and presented a low risk of bias.
According to the GRADE approach, the level of evidence for probing pocket depth was considered very low, owing to significant heterogeneity across the studies and downgrading for imprecision and inconsistency. Conclusions: Although all the studies included in this review reported an association between neurodegenerative diseases and periodontitis, the level of evidence was classified as very low, which suggests a cautious interpretation of the results. Neurodegenerative disease is a broad expression for a group of disorders that damage the central nervous system (CNS), characterized by the progressive loss of neuronal structure and function. These diseases are incurable and lead to a progressive decline or even the complete loss of sensory, motor, and cognitive functions; Alzheimer's disease, Huntington's disease, Parkinson's disease (PD), and multiple sclerosis are the most frequently occurring. This review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and considered studies in which a neurodegenerative disease was present or absent in order to observe an association with periodontitis. The aim was to answer the focused question: Is there any association between neurodegenerative disease and periodontitis in adult patients? All titles, abstracts, and full texts of the articles were independently analyzed by two reviewers (MA and IM), who imported all relevant citations into a bibliographic reference manager. In case of disagreement between the examiners, a third reviewer (RL) was involved. Studies that included patients without a diagnosis of neurodegenerative disease, groups with gingivitis only, case reports, descriptive studies, review articles, opinion articles, technical articles, guidelines, and animal and in vitro studies were excluded. The following data were extracted from the articles: authors and year; study design; characteristics of the sample; evaluation method; statistical analysis; results; and the outcome.
Data were extracted and tabulated independently by two reviewers (MA and LB). The checklist developed by Fowkes and Fulton was used to appraise methodological quality, and the quality of evidence was rated using the GRADE approach. Periodontitis was defined by clinical attachment loss (CAL) or probing pocket depth (PPD >3 mm), bleeding on probing (BOP >25% of evaluated sites), and/or >30% radiographic bone loss. Another criterion considered in some studies was the Community Periodontal Index (CPI) score 3 (PPD of 3.5\u20135.5 mm) and score 4 (PPD >5.5 mm). Nine studies assessed periodontitis by clinical parameters such as CPI, CAL, BOP, and PPD. Proposed biological mechanisms involve activation of immune responses via pathogen-associated molecular patterns, complement 1q, and adenosine triphosphate release from astrocytes, producing TNF-\u03b1, IL-1\u03b2, and IL-6, compared with the diseases in isolation. More longitudinal studies and multicenter trials with larger sample sizes should be conducted to assess whether periodontitis could be a risk factor for the onset and/or progression of neurodegenerative diseases, impacting the quality of life in elderly people. At the moment, it can be concluded that there is an association between neurodegenerative diseases and periodontitis, but causality cannot be claimed. The original contributions presented in the study are included in the article. MA drafted the paper with input from all authors. NF and LM designed the study. MA, IM, and LB performed the searches and data extraction. MA and DF performed and interpreted the qualitative analysis. RL and CR revised the manuscript critically for important intellectual content and gave final approval of the version to be published. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
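The case definitions reported in this record (PPD >3 mm, BOP >25% of sites, >30% radiographic bone loss, and CPI scores 3-4) can be encoded directly. The helpers below are hypothetical (my own names and an assumed "and/or" reading in which elevated PPD must co-occur with at least one other sign); they only restate the thresholds given above, not any official classification.

```python
# Hypothetical encoding of the review's reported periodontitis criteria.
# Assumption: "PPD > 3 mm, BOP > 25%, and/or > 30% bone loss" is read as
# elevated PPD plus at least one of the other two signs.

def cpi_score(ppd_mm: float) -> int:
    """Community Periodontal Index score derived from probing pocket depth."""
    if ppd_mm > 5.5:
        return 4
    if ppd_mm >= 3.5:
        return 3
    return 0  # scores 0-2 reflect bleeding/calculus, not derivable from PPD alone

def meets_periodontitis_criteria(ppd_mm: float, bop_pct: float,
                                 bone_loss_pct: float) -> bool:
    """True if the thresholds reported in the review are met."""
    return ppd_mm > 3 and (bop_pct > 25 or bone_loss_pct > 30)
```

Making the thresholds explicit like this also illustrates why pooled estimates were so heterogeneous: studies using CPI score 3 as a cutoff admit shallower pockets than studies requiring PPD >5.5 mm.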
+{"text": "Bemisia tabaci and Spodoptera litura was assessed. After analyzing 212 soil samples, 497 isolated fungi were identified. Of these, 490 isolates were classified into 45 species of 24 genera, whereas the other seven isolates, belonging to the genera Paecilomyces and Purpureocillium, were not identified to species level. Furthermore, the EF biodiversity in soil from the Sichuan, Yunnan, and Guizhou areas, analyzed by the Shannon\u2013Wiener index (SWI), was rated at 2.98, 1.89, and 2.14, while the SWIs in crop, forest, grassy, orchard, and arable areas were 2.88, 2.74, 3.05, 2.39, and 2.47, respectively. The SWI data suggested that soil from the Sichuan area and from grassy habitats had higher EF biodiversity than the other analyzed provinces and areas. Virulence bioassay results indicated that, of the 29 isolates tested, 24 were pathogenic against B. tabaci and S. litura, producing mortality rates >10%. In conclusion, this study reports the EF distribution and biodiversity in soil from three provinces of Southwest China, whereas their potential use as tools for B. tabaci and S. litura biocontrol must be further investigated. Entomopathogenic fungi (EF), which represent active agents for controlling natural insect populations, usually persist in terrestrial habitats. The Southwest area of China has various climate conditions and abundant plant biodiversity. Nevertheless, the potential of soil-inhabiting EF as insect pest biocontrol agents is unknown. In this study, first the EF biodiversity from soil of three provinces was surveyed.
Then, the virulence of 29 isolated strains against these two pests was assessed. Beauveria bassiana and Metarhizium anisopliae have been extensively developed as mycoinsecticides in China and other countries, and Isaria fumosorosea (Paecilomyces fumosoroseus), Lecanicillium lecanii (Verticillium lecanii), and Purpureocillium lilacinum (Paecilomyces lilacinus) are often used for pest control worldwide. Entomopathogenic fungi (EF) play an important role in pest biocontrol and have high economic significance; more than 1000 EF species in 100 genera have been recorded worldwide. EF have complicated life cycles. Most hypocrealean EF have two stages: the infecting stage occurs on host insects, from the adhesion of conidia on the cuticle to the production of new conidia on insect cadavers; in the second stage, EF persist in soil and live in the rhizosphere or grow as endophytes. Bemisia tabaci and Spodoptera litura are pests that have caused huge economic losses to agriculture worldwide. Current control of B. tabaci and S. litura relies mainly on chemical pesticides, but their massive use has led to increasingly serious pest resistance and environmental pollution. Therefore, biological control methods are receiving increasing attention. EF are important biological control resources: they are selective, harmless to humans and animals, have a long residual period and remarkable prevalence, and pests do not easily develop resistance to them. They therefore occupy an important position in the biological control of pests. Southwest China is one of the areas with the most abundant biodiversity, owing to its distinct geography (the Yunnan-Guizhou Plateau, Hengduan Mountains, and Sichuan Basin) and climates ranging from tropical to cold zones. Many rare species of insects and other organisms are distributed in this area.
However, the EF resources in the soils of this region have rarely been surveyed. The current study aimed to investigate the distribution and abundance of EF in different soil habitats of Southwest China, including Sichuan, Yunnan, and Guizhou provinces, in order to determine the diversity and prevalence of EF in the region and provide new fungal resources for biological control. Soil samples were collected from different habitats, including crop, forest, grassy, orchard, and arable land. The longitude and latitude of each site were recorded with an ICEGPS 100C. At each site, approximately 100 g of soil from 10\u201315 cm beneath the surface was collected at three randomly selected points, mixed as one sample, and stored in a plastic bag at 4 \u00b0C until use. In total, 212 samples were collected from 133 sites in the three provinces. Each soil sample was passed through a 40-mesh sieve (425 micron aperture) and separated into three batches of 10 g. Each batch was suspended in 100 mL of 0.1% Tween-80 solution, and 100 \u03bcL of suspension from each batch was inoculated on selective medium and cultured at 25 \u00b1 1 \u00b0C. When fungi grew out, single colonies were transferred onto PDA plates and cultured at 25 \u00b1 1 \u00b0C for identification. The fungal isolates were identified based on morphology and the similarity of their rDNA-ITS sequences. In general, colony features on PDA plates were surveyed, while conidia and sporulation structures were measured with an optical microscope system equipped with a digital camera. For ITS sequence analysis, total DNA from each isolate was extracted using DNA extraction kits according to the manufacturer's protocol. The sequences were amplified on a T100TM Thermal Cycler with the primers ITS1 (5\u2032-TCCGTAGGTGAACCTGCGG-3\u2032) and ITS4 (5\u2032-TCCTCCGCTTATTGATATGC-3\u2032) and a standard PCR cycling protocol.
After the PCR products were sequenced, the ITS sequences were compared by BLAST against NCBI to find the fungal species with the highest sequence similarity. Phylogenetic trees were then constructed in MEGA-X using the maximum likelihood method, a bootstrap test of 500 replications, and the Jukes\u2013Cantor model. The biodiversity of fungal species was evaluated by the Shannon\u2013Wiener index (SWI); SWIs were calculated based on Formula (1). The spores of rare fungal isolates were collected from PDA plates, suspended in 0.05% Tween-80 solution, and calibrated to a stock of 1.0 \u00d7 10^8 spores/mL; working suspensions for the bioassays were prepared by diluting this stock. In the whitefly bioassay, a population of B-biotype B. tabaci reared on Hibiscus rosa-sinensis for more than 20 generations in a greenhouse was used. The leaf immersion method was employed: H. rosa-sinensis leaves with second instar nymphs were dipped into the working suspension for 20 s, and treated nymphs were then reared on fresh H. rosa-sinensis leaves. Pest numbers were surveyed every 24 h after treatment. Nymphs were considered dead of disease when they lost their normal yellow-green color, turgidity, and smooth cuticle structure, and subsequently grew mildew. In the S. litura bioassay, the population was fed with a semi-artificial diet. Second instar larvae of S. litura were placed into a 1.5 mL centrifuge tube with 1 mL of the prepared spore stock solution, quickly capped and inverted for 20 s, then transferred to a disposable plastic bowl lined with filter paper, fed with special feed, and kept at 25 \u00b1 1 \u00b0C, a 14L:10D photoperiod, and 75 \u00b1 5% relative humidity in an artificial climate chamber. Pest numbers were surveyed every 24 h after treatment. Fungal infection was confirmed by the characteristic hyphae growing on the surface of the insect\u2019s body and by observing conidia and conidiophores. 
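Formula (1) is not reproduced in the extracted text; the Shannon-Wiener index is conventionally H = -sum(p_i * ln(p_i)), where p_i is the proportion of isolates belonging to species i. A minimal sketch under that assumption (the species counts below are hypothetical, not the paper's data):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener index H = -sum(p_i * ln(p_i)) over species
    proportions p_i; zero-count species are skipped."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical isolate counts per species for one habitat (not the paper's data):
counts = [82, 30, 12, 12, 5, 3, 1]
print(round(shannon_wiener(counts), 2))
```

With the natural logarithm, H ranges from 0 (a single species) up to ln(S) for S equally abundant species, so the habitat SWIs of roughly 2.4 to 3.1 reported later imply communities of dozens of reasonably evenly distributed species.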
The 0.02% Tween-80 solution was used as a control. The experiment was replicated three times, and corrected mortality was calculated based on Formulas (2) and (3). All data were statistically analyzed using Excel 2010 and DPS 9.5. One-way ANOVA with Duncan\u2019s multiple range test was performed to determine significant differences at p < 0.05. The samples were generally rich in fungi and EF, with up to a 96.77% isolation rate and a 2.98 SWI. P. lilacinum, with 82 isolates, was the richest species, while the congeneric species P. lavendulum had only 30 isolates. The soil environment had a strong influence on the number and isolation rate of fungal isolates. Orchard and fallow soil samples had the highest EF isolation rates (>90%), followed by grassy and crop samples (85\u201388%); forest samples were lowest, at only 73.17%. However, the SWI showed a different trend: grassy soil had the highest SWI (3.05) while fallow had the lowest (2.39). The rare isolates were bioassayed against B. tabaci and S. litura. The results indicated that, when applied at a concentration of 1 \u00d7 10^8 spores/mL, all isolates had a certain pathogenicity to whiteflies, with corrected mortalities of 4\u201358%, except for C. rossmaniae CrSC40B04. P. lilacinum is not only a biocontrol agent but also an opportunistic pathogen that can infect humans, causing keratitis and skin diseases, etc. [48]. The abundance of P. lilacinum in soil is probably due to its intraspecific genetic diversity, with stronger adaptability to the environment, and to its large-scale application on farms. However, P. lavendulum, with 30 isolates, seems much more scarce, perhaps because it cannot tolerate temperatures >35 \u00b0C. B. bassiana, I. fumosorosea, I. javanica, M. anisopliae, M. carneum and M. marquandii are common EF that are often found infecting natural insect populations and are usually used as biological control agents, which may be why these EF can be easily isolated from soil. Aspergillus spp. 
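Formulas (2) and (3) likewise did not survive extraction; corrected mortality in insect bioassays is conventionally Abbott's correction, which rescales treatment mortality by the natural mortality observed in the control. A sketch under that assumption, with hypothetical counts:

```python
def mortality(dead, total):
    """Observed mortality as a proportion."""
    return dead / total

def abbott_corrected(treat_mort, control_mort):
    """Abbott's correction: treatment mortality rescaled by natural
    (control) mortality; returned as a percentage."""
    return (treat_mort - control_mort) / (1 - control_mort) * 100

# Hypothetical counts: 18/30 dead under treatment, 3/30 in the Tween-80 control.
t = mortality(18, 30)
c = mortality(3, 30)
print(round(abbott_corrected(t, c), 1))
```

The correction ensures that an isolate killing no more insects than the Tween-80 control scores 0%, so the 4-58% range reported below reflects fungus-attributable mortality only.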
and Penicillium spp. are distributed extensively and inhabit soil [52,53]. Many such species have been reported in soil [30,32,33]. Furthermore, EF are important medicinal resources. For example, Ophiocordyceps sinensis, I. cicadae and I. tenuipes are expensive traditional Chinese medicines used in East Asian regions [57]. The bioassay results showed that 13 species, A. hispanica, A. subramanianii, A. tabacinus, C. microsporum, C. halotolerans, G. macrocladum, L. spinosa, M. cirrosus, P. manginii, P. madriti, R. similis, T. purpureogenus and T. trachyspermus, were found for the first time to be pathogenic to the 2nd instar larvae of B. tabaci or S. litura. However, they caused only low corrected mortalities, in the range of 4\u201321%, possibly because they are opportunistic EF; whether they infect other pests or affect these pests in the field needs further research to validate. In conclusion, 490 isolates of 45 species in 24 genera were found in 212 soil samples from Guizhou, Sichuan, and Yunnan in Southwest China. Among them, 32 species (459 isolates) had been reported as EF, while the other 13 species were found for the first time to be pathogenic to B. tabaci and S. litura. The dominant EF species were P. lilacinum, M. anisopliae, M. marquandii and P. citrinum. Furthermore, grassy soil had the best EF biodiversity, with a Shannon\u2013Wiener index (SWI) of 3.05, followed by the soils from crop, forest, fallow, and orchard, with SWIs of 2.88, 2.74, 2.47, and 2.39, respectively. This research provides new insight into the distribution characteristics of EF and into their biodiversity conservation and application."}
+{"text": "Artificial intelligence (AI) has been used in physical therapy diagnosis and management for various impairments, and physical therapists (PTs) need to be able to utilize the latest innovative treatment techniques to improve the quality of care. This study aimed to describe PTs\u2019 views on AI and to investigate multiple factors as indicators of AI knowledge, attitude, and adoption among PTs. Moreover, the study aimed to identify the barriers to using AI in rehabilitation. Two hundred and thirty-six PTs participated voluntarily in the study. A concurrent mixed-method design was used to document PTs\u2019 opinions regarding AI deployment in rehabilitation, with a self-administered survey covering several aspects, including demographics, knowledge, uses, advantages, impacts, and barriers limiting AI utilization in rehabilitation. A total of 63.3% of PTs reported that they had not experienced any kind of AI application at work. The major factors predicting a higher level of AI knowledge among PTs were being a non-academic worker (OR = 1.77, ...). However, the cost and resources of AI were the major reported barriers to adopting AI-based technologies. The study highlighted a remarkable dearth of AI knowledge among PTs; AI and advanced knowledge in technology need to be urgently transferred to PTs. One of the physical therapists\u2019 (PTs) responsibilities is to perform physical rehabilitation assessment to design an appropriate clinical plan of care for patients with physical disorders such as stroke, and AI is an algorithmic process that has been used in healthcare and rehabilitation fields to support decision-making and facilitate patient care services [6]. 
Given that focal changes could arise from implementing AI in medical practices, research has been conducted to investigate the AI knowledge and attitudes of healthcare practitioners in various specialties [11,12,13]. Robotics and AI tools have a remarkable impact on healthcare and rehabilitation care delivery services. Recent studies reported that higher accuracy and faster medical diagnoses and predictions could be obtained from employing AI applications, improving patients\u2019 outcomes [17,18]. In PT practices, AI systems can be used to train patients and monitor progress using either virtual (informatics) or physical (robotics) AI concepts. Moreover, supervised machine learning has been studied to investigate the ability of AI-enabled technology to monitor patients\u2019 exercise adherence at home; a study conducted in 2018 by Burns et al. demonstrated this approach. Falling is a serious public health issue, especially among older adults. The convolutional neural network (CNN), a deep learning technology, has been identified as a useful AI technology with the ability to predict sophisticated patient outcomes. This study targeted PTs currently working in any academic or non-academic setting, such as hospitals, clinics, home healthcare, or universities, and they were invited to participate voluntarily in the study. Participants had to be PT professionals to participate. Ethical approval for conducting this study was obtained from the NITTE Institute of Physiotherapy, NITTE University (NIPT/IEC/117/18/01/21). A 22-question questionnaire was developed and adapted from previous studies [13] and included two open-ended questions: - In your opinion, which patients would benefit more from AI applications, and why? 
Please explain your response. - In your opinion, what are the major challenges or barriers that may limit AI applications? Questions 1 to 8 collected the demographic characteristics of the participants, including age, gender, PT license, years of experience as a PT, educational degree, primary workplace setting, and sub-specialty in PT. Questions 9 to 11 assessed PT participants\u2019 knowledge of AI-based technologies used in general, healthcare, and rehabilitation fields; knowledge questions used a yes/no format. In question 12, PTs were asked to select all the sources of AI information they mostly depend on. Question 13 asked PTs about the number of AI applications at work, presented in a multiple-choice format from no applications to more than 4. Questions 14 and 15 were designed to capture participants\u2019 attitudes toward the advantages of AI in rehabilitation. Question 16 investigated participants\u2019 opinions on the impact of AI technologies on the future of rehabilitation. The AI advantages, uses, and impacts items were assessed on a 5-point Likert scale coded as 5 = strongly agree, 4 = agree, 3 = neutral, 2 = disagree, 1 = strongly disagree. Questions 17 and 18 investigated the ethical implications of AI technologies in rehabilitation, captured using multiple-choice formats. Question 19 determined whether PTs think that AI applications should be taught in rehabilitation curricula. Questions 20 and 21 were the open-ended questions above, in which PTs were asked to express and explain their responses. Lastly, question 22 assessed how willing PTs are to receive more information on AI. In this study, PTs\u2019 knowledge and attitudes were examined with five predictors: gender, years of experience, educational degree, subspeciality, and workplace. 
Regarding the variable categories, years of experience was dichotomous (>10 years or \u226410 years). The workplace variable was categorized as academic or non-academic. Educational degree categories were undergraduate or postgraduate (Master\u2019s and Ph.D.), while subspeciality categories were musculoskeletal, neurorehabilitation, and general. This study used a mixed-method design: the investigators embedded a qualitative component within a primarily quantitative design to support the quantitative findings and allow an in-depth understanding of the research problem. The quantitative element of the study used a cross-sectional, predictive design with exploratory predictors to understand PTs\u2019 knowledge and attitudes toward AI applications in rehabilitation, while the qualitative part utilized open-ended questions, permitting the principal investigator (PI) to create themes from participants\u2019 responses and further explore PTs\u2019 perceptions and acceptance of AI-based technologies in rehabilitation. The questionnaire was created using Google Forms in May 2021. A brief explanation of the study was posted in the preface section at the top of the questionnaire, with a highlighted statement that all information would be used confidentially and for the purposes of this study only. Informed consent was obtained before answering the questionnaire to confirm agreement to participate. PTs were recruited through e-mails and posts on social media. Snowball sampling was facilitated by encouraging PTs to forward the electronic questionnaire to their colleagues in the PT sector. The minimum sample size required to achieve a power of 0.8 was calculated using G-power software; for the a priori power calculation, a logistic regression test was chosen with an odds ratio of 1.5 and a significance level of 0.05. 
The minimum sample was indicated to be 208 respondents. Quantitative data were coded and then analyzed using IBM Statistical Package for Social Sciences (SPSS) software, 26th edition, IBM, United States. All data were checked for completeness before the analyses. Continuous demographic data were summarized using means and standard deviations, while for categorical data, percentages and frequencies were used to describe the sample\u2019s age, gender, years of experience, education qualifications, workplace settings, and number of AI applications at work. Chi-square cross-tabulations and binary multivariate logistic regression tests were used to determine the predictors of AI knowledge and attitudes among PTs. The odds ratio (OR) and 95% confidence interval (CI) were reported to express the magnitude of the relationship between the predictor variables and the dependent variable. For all statistical analyses, an \u03b1 level of 0.05 or less was used to determine statistically significant predictors. For the qualitative analysis, open-ended questions were analyzed using thematic content analysis to interpret the meaning of the PT responses. The PI (M.A.) analyzed PTs\u2019 responses using pre-established codes identified in the literature [29]. Moreover, it was found that, compared with PTs with ten or fewer years of experience, the odds of knowing about AI were 2.44 times higher for PTs with more than 10 years of experience. Education qualification was a significant predictor with an OR of 1.97; compared to undergraduates, postgraduate PTs were 1.97 times more aware of AI-based technologies in PT clinical settings. Moreover, subspecialty was found to be a significant indicator of AI knowledge: compared to the neurorehabilitation specialty, PTs who specialized in musculoskeletal PT or worked as general PTs were less likely to have knowledge about AI in rehabilitation, by 0.52 and 0.36 times, respectively. 
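The OR and 95% CI reporting described above can be illustrated with the standard log (Woolf) method on a single 2x2 cross-tabulation. The cell counts below are hypothetical, not the study's data, and the study's actual estimates came from multivariate logistic regression in SPSS:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with Woolf's (log) 95% CI.
    a = predictor present & knows AI, b = present & does not,
    c = absent & knows AI,            d = absent & does not."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical cross-tabulation of AI knowledge by experience (>10 y vs <=10 y):
or_, lo, hi = odds_ratio_ci(40, 30, 60, 106)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes 1 corresponds to a statistically significant predictor at the 0.05 level, which is how ORs such as 2.44 and 1.97 above are read.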
However, results showed that gender was not a significant predictor of AI knowledge among PTs (p = 0.76). Multivariate logistic regression was performed to find the best predictors among different factors influencing knowledge of PTs toward AI uses in rehabilitation. In the 3-step model, the number of AI applications at work was the best predictor, followed by experience and neurorehabilitation specialty . In order to have a snapshot of PTs\u2019 attitudes towards AI-technology applications, respondents were asked to indicate their level of agreement towards three listed advantages of AI based on multiple predictors using a 5-point Likert scale. Respondents\u2019 attitude toward AI is illustrated in In this study, 109 (46.2%) male PTs either agree or strongly agree that AI would reduce professional workload. It was also found that the same percentage of non-academic PTs agreed or highly agreed with this statement . Based on experience, the majority of PTs who agreed or strongly agreed that AI would ease their workload were junior PTs with less than 10 years of experience. Regarding PTs\u2019 education, the number of undergraduate PTs who either agreed or strongly agreed that AI could be used to reduce the load in clinical practices was lower in comparison to postgraduates (50 (21.2%) and 130 (55.1%)), respectively.One hundred and thirteen (47.9%) male PTs reported their positive attitude towards implementing AI applications in clinical settings to facilitate patients\u2019 ease of care. Moreover, ease of care was supported to be an advantage of AI by 50% (118) of the total sample size who work in the non-academic sector. Surprisingly, PTs in both categories (less or more than 10 years of experience) were almost equal in their level of positive agreement toward the ease of care statement (50 (40.2%) and 130 (40.1%)), respectively. 
The majority of respondents from the undergraduate category had a positive opinion toward AI utilization in clinical practices to help in easing patient care.The majority of male and female PTs (63 (26.7%) and 44 (18.6%)) reported positive attitudes toward the role of AI in disease prevention. However, 83 out of 263 (35.2%) respondents had no opinion toward this advantage based on gender. This study also found that a greater proportion of non-academic PT respondents had a positive attitude toward using AI to prevent diseases. An almost equal percentage of junior and senior PTs believed that AI-based applications could be used to limit the burden of diseases . Furthermore, 45.3% (107) of undergraduate and postgraduates expressed their greater agreement on disease prevention as an advantage of employing AI in rehabilitation practices.PT respondents reported their opinion regarding multiple AI uses in rehabilitation, and the opinions were captured using the Likert scale. A great percentage of males (36%) had a positive opinion about the utilization of AI in rehabilitation to forecast patients\u2019 medical status. The results also showed that non-academic respondents were slightly higher in their agreement on employing AI-based technologies in rehabilitation to help therapists in predicting diseases than PT educators . Based on years of experience, this study found approximately equal agreement on using AI to generate disease prediction. A high percentage of postgraduate PTs had a positive impression that AI facilitates disease prediction in rehabilitation settings.Most of the male and female respondents either agreed or strongly agreed that goal setting is a beneficial use of the AI system. Compared to academic PTs, a higher proportion of non-academic (49.2%) had positive attitudes toward AI as means to assist in developing goal settings based on patients\u2019 health conditions. 
However, PTs had a nearly equal positive opinion that goal setting can be designed by AI. The majority of undergraduate and postgraduate PTs believe that goal setting is a benefit of employing AI in clinical practices.The study found male PTs (53.4%) significantly agreed or strongly agreed that AI is an assistive technology in rehabilitation. The findings also highlighted the positive attitude of non-academics toward categorizing the use of AI applications as assistive technologies in patients\u2019 management processes. Based on PTs\u2019 experience and qualifications, only 4 out of 236 (1.6%) respondents had a negative opinion about using AI as an assistive technology tool in clinical practices.In this study, male PTs had higher agreement than females that AI applications are utilized to provide the diagnosis. The majority of non-academic PTs stated their positive attitude toward AI as a diagnostic tool for several medical cases. Moreover, the results indicated that the majority of junior and senior PTs highly believe in using AI for diagnostic determinations. However, using AI as a diagnostic tool was highly supported by postgraduate PTs .Respondents were instructed to express their agreement level toward the three listed impacts. Detailed attitudes of PTs toward AI impacts are shown in A total of 171 out of 236 (72.4%) male and female PTs either agreed or strongly agreed that AI has a role in reducing human resources. The findings also showed that non-academic PTs supported that AI implementation would result in limiting human resources in rehabilitation. Only 20 respondents (8.5%) either disagreed or strongly disagreed that AI applications would have a negative impact on human resources. Fifty-three percent (134) of PTs who were Master\u2019s or Ph.D. 
holders had a high level of agreement that AI would strike down the human workforce in clinical aspects.A high percentage of both genders had a positive opinion toward increasing productivity by AI in rehabilitation. Based on the workplace, non-academic PTs positively supported the statement of increasing therapists\u2019 work productivity by AI. Few PTs had negative attitudes that AI would help to enhance work productivity in rehabilitation. It was found that 58.1% of the postgraduate PTs were significantly higher in their agreement than undergraduates (22.1%) that AI would be a facilitator for productivity in clinical practices.The study found that male PTs (44.5%) were higher than females (29.2%) in their agreement that AI-based technologies have an impact on increasing patients\u2019 quality of life. Additionally, the study results implied that non-academic PTs were positive toward employing AI to improve patients\u2019 quality of rehabilitation. Equal proportions of PTs totally agreed on the positive impacts of AI systems based on their years of experience. The current study results showed a significant positive agreement among postgraduate respondents that improving quality of life could be a result of implementing AI technologies in healthcare.The last section of the questionnaire explored the opinions of PTs regarding the ethical implications of AI in rehabilitation. A total of 92 out of 236 (40.1%) PTs respondents expressed their primary fear of using AI applications as the inability of the AI technology to produce clinical reasonings for the cases beyond its programming scope. Moreover, 35.2% (83) of the respondents reported their ethical concerns about the AI system and their failure to understand or feel human beings. 
However, only 25.8% (61) of the respondents were worried that AI technology creators might have minimal or no experience in clinical practices. A question was asked: \u201cwhich decision should be taken if there was a conflict between AI and clinicians\u2019 predictions?\u201d The majority of PT respondents believed the clinicians\u2019 opinion should be considered in case of conflict, whereas 51 (21.61%) thought that patients\u2019 preferences should be prioritized over AI and clinicians\u2019 judgments, and very few respondents (3.4%) believed that AI produces trusted predictions. Results also showed that the majority of the respondents believed that AI courses should be taught and integrated into the PT curriculum. Upon reviewing the responses, it was clear that the majority of PTs have limited knowledge and skills for adopting the AI-advanced technologies being used in clinical practices. The first qualitative question explored PTs\u2019 opinions regarding which patients would benefit more from AI-based applications; neurorehabilitation, geriatric, and musculoskeletal impairments were the most frequently indicated conditions that could benefit from AI implementation in rehabilitation settings. Pre-established codes were driven by previous research [35,36,37]. Theme 1. All patients based on the impairments. The majority of PTs stated that all patients could gain advantages from AI, depending on the disease or impairment. For example, one respondent said, \u201cAI gives benefits to patients depends on disease condition, impairment.\u201d Another respondent also said, \u201cAll of them as AI is a tool to make things easy, and it can be developed and used based on individual rather than for a particular department\u201d. On the other hand, a high number of the respondents thought that neurological and geriatrics patients would obtain the most benefit from AI-based technology. 
PT respondents explained their selection of these two specific conditions as AI would help those patients in their daily life activities, help therapists in their management process and assist in expecting patients\u2019 responses.\u201cGeriatric, neurologically impaired because it can assist these kinds of cases in their daily activities and predict their response.\u201d\u2014Participant 163.Moreover, a set of PT respondents believed that musculoskeletal and sport injury patients could obtain more advantages from AI applications in PT settings because most of the musculoskeletal cases are not cognitively impaired and that may allow them to follow the programmed instructions easily.\u201cMusculoskeletal, as it will be easier for the patient to understand and apply.\u201d\u2014Participant 179.Theme 2. Selected patients based on AI advantages.Several respondents believed that using AI technology would help to reduce therapists\u2019 workload when managing selected cases such as musculoskeletal and neurological because patients\u2019 movement can be monitored or guided by AI smart machines.\u201cI think musculoskeletal and neuro patients would benefit more from AI compared to other areas because, by the application of AI, rehabilitation can be performed more precisely and accurately with a constant rhythm throughout the session when compared to manual techniques.\u201d\u2014Participant 126.The majority of the respondents also stated that AI could assist therapists in managing treatment sessions in the absence of human power, which may reduce their workload.\u201cGeriatrics and neurological impaired because they will be guided by AI to do things correctly even in the absence of a Physiotherapist.\u201d\u2014Participant 14.Theme 3. Selected patients based on AI uses.\u201cArtificial limbs and rehabilitation, AI can play a major role in movement learning and error correction.\u201d\u2014Participant 29. 
Another respondent wrote, \u201cAlmost all the patient groups shall be benefited by AI, provided it is customized.\u201d\u2014Participant 20. Many respondents mentioned various uses of AI in rehabilitation, such as monitoring or correcting patients\u2019 movements during the therapeutic session and customizing treatment plans based on patients\u2019 input data. Furthermore, many respondents mentioned that AI-based technologies could be used to provide feedback for cases that need encouragement throughout the PT session, which helps to improve patient outcome measures. \u201cNeurological impairments as most therapies targeting neurological disorders are feedback based\u2026 so the more accurate feedback the more accurate outcome.\u201d\u2014Participant 30. \u201cDefinitely it will support clinicians\u2019 effort to treat a neurologically impaired patient like visual or audible feedback is necessary to retrain the least amount of response from the patient.\u201d\u2014Participant 192. Theme 4. Selected patients based on AI impacts. Some respondents mentioned the impact of AI-enabled applications in improving the quality of care by guiding PTs throughout the assessment and treatment process. Additionally, respondents noted the importance of AI technologies in helping therapists monitor patients\u2019 progress and adherence to home programs. \u201cMostly all of the above mentioned will be benefited as would help them to work more efficiently, effectively and consistent way.\u201d\u2014Participant 189. \u201cIt can help to those staying in remote areas where availability of medical facilities is less. As well it can help even a Physiotherapist to track record and keep data for analysis of progress.\u201d Theme 5. 
Selected patients based on AI ethical and trust issues.A few participants stated that they have no experience in utilizing any AI-based smart technologies in their practice, and that raises their ethical concerns and questions about the ability of AI to replace human efforts.\u201cAs I haven\u2019t experienced the applications of AI in all sectors, I am not sure. Still, I think musculoskeletal patients may be benefitted as in Neurological cases are more complicated the judgement of therapist matters more.\u201d\u2014Participant 96.The second qualitative question asked about the perceived barriers that might limit AI utilization in rehabilitation. Respondents identified several barriers that may limit the usability of AI, in their opinion. The researchers started coding the data based on the pre-established codes and applied them to the data set. Codes were primarily derived from the literature ,39,40,41Theme 1. Inability of AI to manage all patients\u2019 health conditions or impairments.\u201cClinical situations to suit its application\u201d\u2014Participant 1.Very few participants were concerned about the inability of AI applications to be customized according to the patient\u2019s conditions, impairments, or clinical scenario. \u201cInability of AI to cater to a variety of patients\u201d\u2014Participant 30.Some respondents also responded negatively about the capability of AI technologies to accommodate or handle different cases: Theme 2. Cost and available resources of AI in clinical settings.All AI equipment are very costly and even the patient couldn\u2019t afford the treatment fees.\u201d Another said, \u201cMainly the cost that can\u2019t afford for all type of patients\u201d.Many respondents reported cost as the main barrier to AI implementation, particularly in PT practices. Respondents identified costs as the cost of AI machines and the cost of treatment. 
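As a toy illustration of applying pre-established codes to free-text answers, a keyword-matching first pass could look like the sketch below. The code labels and keywords are simplified from the barrier themes named here, and the study's actual thematic analysis was performed by a human coder, not by keyword search:

```python
# Toy keyword-matching pass over open-ended responses using simplified
# versions of the pre-established barrier codes; illustrative only.
CODES = {
    "cost_and_resources": ["cost", "costly", "afford", "expensive", "resources"],
    "knowledge_and_skills": ["knowledge", "training", "skill", "illiteracy"],
    "trust": ["trust", "malfunction", "accuracy", "quality"],
    "human_interaction": ["interaction", "human touch", "empathy", "emotion"],
}

def code_response(response):
    """Return every code whose keyword list matches the response text."""
    text = response.lower()
    return [code for code, words in CODES.items()
            if any(w in text for w in words)]

print(code_response("All AI equipment are very costly and even the patient "
                    "couldn't afford the treatment fees."))
```

In practice such a pass only proposes candidate codes; assigning themes, merging them, and resolving ambiguous responses still requires the human judgment described in the methods.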
Moreover, most PT respondents indicated that not only the cost of equipment and treatment but also the cost of implementation, such as developing software, giving AI education courses, and training the users, are barriers to using AI. Respondents believe that healthcare providers and AI developers both need training to use AI effectively, which might be expensive. For instance, one of the responses was, \u201cFirst and foremost will be the cost of implementation. Additionally, difficulties in procuring and developing the hardware and software for AI based rehab; Intense training needed for the operator as well as the service user; digital illiteracy of the therapists etc. are the major challenges\u201d. Theme 3. Compliance and adoption of AI among patients and therapists. Most participants mentioned patients\u2019 and/or therapists\u2019 acceptance and adoption of AI technologies as major barriers to AI implementation. For example, one participant stated his opinion on barriers to implementing AI by saying, \u201cInvestment cost is high as well as patient\u2019s cost is also high which results in poor patient compliance\u201d. In addition, patient compliance was mentioned by a number of PT respondents as a concern about AI technologies in rehabilitation settings. One of the participants said, \u201cAcceptance by the patient and experienced professionals\u201d. Theme 4. Lack of knowledge and proficiency. Various respondents admitted to insufficient knowledge and skills regarding AI-based applications. One respondent expressed his opinion on AI barriers as \u201clack of familiarity and knowledge about AI; clinical inertia to use such technology; cost and higher degree of skill required; availability of such applications; patient acceptance; inability to cater to a variety of patients\u201d. In addition, respondents expressed worry about AI developers being outside of the medical field and lacking the clinical knowledge and skills, which may affect patients\u2019 outcomes. PT respondents felt that collaboration among healthcare professionals and AI developers might lead to successful implementation: \u201cCost will obviously go towards higher side. And other limitation will be from developer side as they need to have the knowledge about medical profession, so in turn they will require to couple up with the medical professionals as whole team and need to careful design the script for its successful functionality and will need to set a perfect paradigm for its use.\u201d\u2014Participant 54. Theme 5. Technology trust in clinical settings. Many participants expressed trust issues toward AI applications used in clinical practice. Some respondents questioned the ability of AI to suit every clinical case scenario. Quality of care was also a major concern, since PTs were not confident that AI could generate a customized plan of treatment for every patient: \u201cEvery individual is different and requires tailor made protocol, which is not possible by AI. So, quality of treatment will be a major concern.\u201d In addition, some participants were worried that the automation function of AI could not understand some complicated or advanced cases: \u201cComplex patient experiences may not be accurately captured by the software.\u201d\u2014Participant 199. Moreover, some PTs prefer traditional treatment given by humans, as they have concerns about the technical issues that could arise from using machines: \u201cBeing electronic device, malfunction may result in new problem to patients.\u201d\u2014Participant 183. Theme 6. Ethical implications of AI applications. A number of the respondents mentioned therapist-patient interaction as a barrier to AI implementation in the medical field. PTs questioned the automation function of AI and the absence of the human touch or emotion: \u201cIn physical therapy human interaction and hands on therapies play a vital role and AI can\u2019t completely take over that human factor. Also cost can be a barrier and reach may be limited to high end centers\u201d. Furthermore, some respondents were concerned about the absence of emotions, feelings, and human touch when using AI to automate clinical tasks; some of the PTs think that empathy and human connection cannot be found in advanced management technology. For example, a respondent expressed his opinion as, \u201cPatients-therapist interaction is crucial in many cases management. That may be lacking in AI.\u201d\u2014Participant 110. Theme 7. Patients\u2019 perception and understanding of AI. Patients\u2019 perception, awareness, and understanding were also mentioned as barriers to AI application in rehabilitation: \u201cAI based rehab programs are generally costlier than the normal rehab. Also there is very less awareness among patients about it\u2026\u201d. One participant raised his concern about patients\u2019 understanding and their ability to follow the automated task: \u201cIt is surely the patient\u2019s perception on using AI applications as they may be not in a mental status or position to understand or follow the programmed treatment\u201d. The primary aim of this study was to understand PTs\u2019 perceptions and identify the perceived factors that may limit AI adoption in rehabilitation. Moreover, the influence and association of multiple demographic factors on AI adoption were investigated. The key findings were that years of experience, education qualification, subspeciality, and workplace were significant predictors of AI adoption among PTs.
In this study, it was found that being a senior PT, holding a postgraduate qualification, working in a non-academic setting, and specializing in neurorehabilitation were significant indicators of knowledge of AI applications in rehabilitation. Although AI has multiple benefits in predicting patients\u2019 diagnoses and prognoses, there is no clear evidence regarding PTs\u2019 current views and preparedness to use AI in their practices, which raises the need for further exploration. This is the first study to explore PTs\u2019 knowledge of and experience with AI-enabled applications, in addition to the perceived barriers that may prevent therapists from operating AI in rehabilitation. Previous research suggested that employing AI would result in a consistent plan of care that may increase clinical work productivity and quality of care. In clinical settings, hands-on practice is an important facilitator that helps gain the interest of therapists and clinicians in learning about AI and adopting it in their clinical applications. In this study, only 5% of the total sample reported hands-on familiarity with AI applications at work. This was consistent with previous research that found that less than 10% of surgeons currently use robotic surgery techniques in hospitals, and 60% of surgeons documented the absence of AI and robotic technologies in their clinical practices. Among various healthcare specialties, the extent of healthcare providers\u2019 experience has been studied as a predictor of AI utilization in clinical practice. In this study, regression analyses showed that the more years of experience PTs had, the more likely they were to adopt AI applications. In New Zealand, a survey of medical physicists and radiation oncologists describing the adoption of AI in their practices found that experience was positively associated with AI adoption. Subspeciality was a significant predictor of AI knowledge among respondents.
Findings indicated that neurorehabilitation PTs were more knowledgeable than PTs in other subspecialties. No previous studies had been conducted to understand PTs\u2019 attitudes and perceptions about AI. Nevertheless, the embedded open-ended questions provided further context for the regression analysis: many respondents thought that neurological patients need AI tools to improve their quality of life and functionality. Another possible explanation of the specialty factor is that many AI studies have been conducted in the field of neurological rehabilitation, such as stroke and Parkinson\u2019s disease. The qualitative part allowed an in-depth understanding of AI adoption and acceptance among providers. A high number of PT respondents reported concerns about patients\u2019 perceptions and acceptance of automation and AI innovations in rehabilitation. In the literature, patient and healthcare system trust is documented as an essential factor in the successful implementation of AI in healthcare [16,17]. On the other hand, many respondents expressed worries about the absence of human touch when relying on AI tools, especially in rehabilitation. In addition, therapist-patient communication was mentioned as an ethical concern of AI utilization in healthcare. Previous studies had similar findings, namely that using AI tools might eliminate effective interactions between healthcare providers and their patients. This study identified a lack of knowledge as a barrier to adopting AI among therapists, although AI technologies promise to facilitate health delivery systems by providing diagnosis and prognosis, which contribute to optimal care. In this study, 144 out of 236 PT respondents reported less knowledge about AI technologies in the healthcare and rehabilitation fields than about general AI applications. In rehabilitation research, AI is being applied in the form of computer interfaces, virtual reality, and exoskeleton rehabilitation programs [28,30]. 
The strength of this study was employing a mixed-methods design, in which the qualitative data supplemented the primary quantitative information regarding PTs\u2019 existing knowledge and opinions regarding AI, providing an in-depth understanding of the perceived barriers behind limited AI utilization in rehabilitation. Another strength was investigating the associations between AI adoption and multiple related factors. As with any research, some limitations should be acknowledged. Convenience sampling may limit the generalizability of this study\u2019s findings. Moreover, the self-reported questionnaire may introduce bias in some responses. The design of the study also did not allow for the investigation of causal relationships. The results of this study could be used as a baseline for further research to explore the adoption of AI clinical tools in rehabilitation. Future studies could investigate patients\u2019 perceptions of AI applications to add to the results of this study. In addition, stakeholders could be interviewed in the future to investigate their preparedness to utilize AI in medical practices. The lack of AI understanding among the general public and healthcare professionals should be considered a factor in translating advanced AI research into medical practice. The current study\u2019s findings highlighted the limited knowledge and application of AI in rehabilitation sectors. Moreover, cost and the available resources for AI were the most common barriers cited by PTs. The study also demonstrated that AI application at the workplace, years of experience, and subspecialty were associated with AI knowledge and attitudes among PTs. Future studies should focus on improving AI knowledge among PTs to bridge the gap between the existing research evidence and current PT practices."}
+{"text": "Responsible adoption of healthcare artificial intelligence (AI) requires that AI systems which benefit patients and populations, including autonomous AI systems, are incentivized financially at a consistent and sustainable level. We present a framework for analytically determining the value and cost of each unique AI service. The framework\u2019s processes involve affected stakeholders, including patients, providers, legislators, payors, and AI creators, in order to find an optimum balance among ethics, workflow, cost, and value as identified by each of these stakeholders. We use a real-world, completed example of a specific autonomous AI service to show how multiple \u201cguardrails\u201d for the AI system implementation enforce ethical principles. The framework can guide the development of sustainable reimbursement for future AI services, ensuring quality of care, healthcare equity, and mitigation of potential bias, and thereby contribute to realizing the potential of AI to improve clinical outcomes for patients and populations, improve access, remove disparities, and reduce cost. At the same time, AI that does not offer such health benefits should not be incentivized. We focus on patient-specific, assistive, and autonomous AI systems, where such financial incentives are overseen by both public and private payors, while also requiring involvement, oversight, and support by affected stakeholders in healthcare, including patients, policy makers, clinicians, regulators, provider organizations, bioethicists, and AI creators. Such support and involvement were solicited as documented in early studies by Abramoff and other researchers, which led to an ethical framework for healthcare AI1. Herein, we use the definition of AI systems used elsewhere: systems that learn from training data in order to, when deployed, perform tasks that are intended to mimic or extend human cognitive capabilities. These include highly cognitive tasks, such as those typically performed by trained healthcare professionals, and are not explicitly programmed. AI has now entered mainstream healthcare. This is illustrated by the U.S. Centers for Medicare and Medicaid Services (CMS), for the first time, establishing a national payment amount for an FDA de novo authorized autonomous AI system, in both the Medicare Physician Fee Schedule (MPFS) and the Outpatient Prospective Payment System (OPPS), for the IDx-DR system3. This autonomous4 AI system makes a clinical decision without human oversight, and diagnoses a specific disease, as described by its new CPT\u00ae code 92229. CMS also, for the first time, established a national add-on payment for assistive AI, under the New Technology Add-on Payments (NTAP) in the Inpatient Prospective Payment System (IPPS), for the Viz LVO system for stroke detection5, and the Caption Guidance system for cardiac ultrasound6, to enable expanded access. While adoption of AI is growing, so is the evidence for a positive impact, including improved clinical outcomes8, created by FDA-regulated AI systems10, though there is also evidence for negative impact, particularly with unregulated AI systems12. Now, more than ever before, AI designed, developed, validated, and deployed to promote safety and efficacy is poised to realize the aforementioned advantages. Thereby, it can address the escalation in cost, lack of access to essential healthcare services, and the resulting injustice of health disparities, as well as advance health equity. The decision by providers and health care systems on whether to adopt and deploy a specific healthcare AI is greatly influenced by payment amounts. From their perspective, financial incentives for a service are determined both by reimbursement, the dollar amount paid for the service, and by coverage, the likelihood of payment when the service is provided to a specific patient for a medically indicated reason. While it is beyond the scope of this manuscript, the issue of medical liability also plays a role, and for autonomous AI, the American Medical Association\u2019s Augmented Intelligence Policy requires autonomous AI creators to assume liability for their performance13. An existing payment framework that has been implemented is the NTAP mentioned above6. While it has led to AI payments by CMS, at the moment of writing it is an extremely high bar to reach: it is technology-specific, with a complicated approval pathway. Currently, NTAP is limited to services provided to inpatients, as part of IPPS, and does not cover outpatient or physician care; payments are time-limited, lasting only 3 years for a specific technology or specific indication; the maximum payment is limited to 65% of the cost, and only if a hospital\u2019s costs for a specific stay exceed the MS-DRG bundled payment amount, which means the hospital has to show a financial loss; finally, it requires \u2018newness\u2019, cost, and substantial clinical improvement criteria for the technology, which are difficult to satisfy; although FDA breakthrough status for a new technology can satisfy the newness and clinical improvement criteria, newness may only last until utilization claims data exist. Other technologies may become eligible for the NTAP inpatient add-on payment should they satisfy these very stringent requirements, as viz.ai and Caption Health have already been able to achieve. Given the above limitations, others have tried to develop incentive frameworks for AI, but only in a limited context, without considering the existing, complex multi-payor US healthcare coverage and reimbursement systems, and ignoring the role of affected stakeholders14. Instead, we propose a comprehensive framework that: (a) maximizes alignment with ethical frameworks for healthcare AI2, (b) allows more optimal alignment between the ethical, equity, workflow, cost, and value perspectives for AI services, (c) enhances support and involvement by affected stakeholders, (d) is transparent, and (e) maps onto the existing payment amount and coverage systems in the US. Its goal is to quantify and estimate the payment amount, which can be optimized by finding an optimum for the framework. One such optimum might be where it cannot be improved for some stakeholders without making it less optimal for others, called a Pareto optimum15. While we focus on the US landscape, this framework can be a useful template for any healthcare system. The framework is focused on payment for the use of AI; implementing an AI may have downstream effects, such as more patients being recommended for treatment, or a provider needing a more or less complex E/M visit, but these are beyond its scope, as there are existing methodologies and frameworks to adjust payments for such cases if appropriate. 
As a practical example, we provide a case study to demonstrate how this framework, involving affected stakeholders, can lead to insurance coverage and sustainable reimbursement for a specific validated autonomous AI service. Finally, we alert the reader that the first author is an AI creator, and therefore is potentially biased towards monetary reimbursement for AI; a similar conflict exists for JG. While all potential conflicts have been fully disclosed, we have sought to ensure a balanced discussion, as the other authors are not conflicted, are on the professional and payor side, and do not have financial or other interest in any AI receiving monetary reimbursement. Valid and real concerns about healthcare AI have been raised17, including: whether AI improves patient and population clinical outcomes (rather than worsening them); AI bias and its impact on health equity; potential lack of data privacy, meaningful consent, stewardship responsibilities, and ownership; and how liability is assigned. As mentioned, healthcare AI can negatively affect outcomes, and undesirable effects of the rapid introduction of AI into clinical practice have been described2. While a comprehensive review of these concerns, as well as their interactions, is beyond the scope of this comment, we want to give a recently published example; our goal is to alert the reader that a basis in an ethical framework can be helpful to discover, prevent, or mitigate ethical concerns with healthcare AI. One study found racial bias in a widely used AI, which assigned Black patients the same level of risk of a poor clinical outcome even though they were sicker than White patients, so that the number of Black patients identified for extra care was halved18. Consequently, as we consider a framework for appropriate reimbursement of healthcare AI, it is important to incentivize AI systems that have been clinically validated, under FDA oversight or otherwise, and that align with an ethical framework. 
We clarify that this Comment is not meant to be an introduction to AI ethics, or to the metrics used to evaluate AI ethics. We refer instead to previous publications on AI ethics, where terms such as sensitivity, specificity, and equity are discussed in detail19. As mentioned, whether or not to deploy or use a specific healthcare AI is influenced greatly by payment and coverage policies for digital technology1. This is illustrated by the recent expansion of telemedicine during the COVID-19 public health emergency, where coverage and reimbursement were available under a broader range of conditions20. Meanwhile, creators and investors give substantial weight to sustainable and predictable financial return on their investment, after design, research and development, validation, and marketing have been paid for and accounted for. We propose that a sustainable and optimal balance between value and cost will be crucial for the successful adoption and integration of healthcare AI. As we consider payment amounts, the complexity of our healthcare system, and the interdependencies between all the stakeholders, it follows that a high degree of alignment between stakeholders is essential. While different healthcare stakeholders necessarily weigh specific benefits and risks of healthcare AI as more or less important, generally, stakeholders prefer AI that is affordable, high quality, equitable, safe, efficacious, outcome improving, and ethically designed. Few frameworks for the valuation of (autonomous) AI in healthcare exist, and those that do are ad hoc. 
From a health economics standpoint, we define the cost of AI systems as what AI creators charge for the patient-specific service that uses the AI system, for recoupment of their investment in research, development, and validation, ongoing operating expenses, liability protection, and assurance of continued patient safety, efficacy, and equity (including the cost of maintenance and updates). We define the value of AI systems, as it were, of the AI work, as what payors, patients, providers, and society as a whole see as the utility of the service in terms of improved clinical outcomes, at both patient and population levels, access to care, and provider satisfaction, in addition to any downstream cost savings9. CMS and others use a shorthand for this same definition: value\u2009=\u2009quality/cost. In this relationship, for diagnostic services, value goes up when diagnostic accuracy is better and when cost is less, other factors being equal21. We next consider a non-exhaustive set of approaches to derive the value of a given service involving AI: The cost-effective value is derived from cost benefit analyses (CBA) or cost effectiveness analyses (CEA). These models analyze the extra expenditures if a service is not provided, compared to when the service is provided. Such analyses are based on many assumptions, especially about the value people place on their health22. For example, a late diagnosis increases treatment costs, or a poorer outcome has adverse financial consequences. The cost benefit threshold is where expenditure for the service equals the extra expenditures across the at-risk population. The substitution value is derived from what payors are currently paying for the service provided by a (human) provider, such as a specialist. Many of these services are valued based on the provider\u2019s expertize, time spent, and ancillary services required. The access maximizing value is derived from the current total value for a subset of the population assigned to the entire population getting the service, at the population level. Let ec be the current total expenditure for a service performed by humans only; vc the current value per patient that undergoes that service; n the number of patients at risk who would benefit from that service; and c the fraction of n that undergoes that service, the \u2018compliant\u2019 population; then ec = n \u00d7 c \u00d7 vc. Here, ec is the lower bound of \u201cpayor willingness to pay,\u201d in other words, the rate at which payors like CMS are currently reimbursing the service, without AI, for Medicare beneficiaries. The assumptions are (a) that \u201cpayor willingness to pay\u201d does not go down when the entire population undergoes the service, instead of just a subset, and (b) that the AI can service the entire at-risk population n. Thus, when e is the total expenditure at full access, then e = n \u00d7 ve = ec, and ve = c \u00d7 vc is the lower bound for the access maximizing value per patient when using AI. In effect, the total expenditure is capped at the \u201cpayor willingness to pay,\u201d with ve potentially allowing all patients to undergo the service. We can also consider the cost to provide a service as a starting point for the value of a given service involving AI, as follows: Marginal cost. One of the advantages of AI is its scalability. Marginal cost considers the cost of a single patient\u2019s diagnostic service, i.e., the incremental cost for one more patient after the AI has already diagnosed (for example) a million patients. The marginal cost disregards the necessary investment in R&D, including training the AI23, and validation of safety and efficacy1, even though ongoing monitoring and quality assurance will be required. Marginal cost is then the sum of the cost of continuing to operate the AI in the workflow, patient-specific clinical labor, supplies and equipment, liability protections, electricity, and other resources for running the AI software for that patient. The marginal cost of the pure AI decision itself, mostly inference, is typically low23. Total cost of ownership reflects the sum of the investment in R&D, including training the AI and validation of safety and efficacy, as well as the ongoing marginal costs mentioned above. Given these high upfront costs for AI, charging the total cost of ownership, such as in a capital expense model, is likely affordable only to the wealthiest healthcare systems. Under-resourced healthcare systems and providers would not be able to afford the AI, leading to diminished access and potentially increasing healthcare disparities. The cost-effective value and the substitution value are unlikely to lead to any cost savings, and in fact, the cost-effective value may not be optimal15. Using the total cost of ownership is likely to increase health disparities rather than decrease them. Using marginal cost will not be sustainable for AI creators. Considering these alternatives to derive the value of an AI service, and considering the goals of different stakeholders, we can see that using the access maximizing value ve, decreasing expenditure per patient while simultaneously incentivizing access, is an attractive payment derivation. 
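The access-maximizing derivation above can be sketched in a few lines of code. This is a minimal illustration under the text's own definitions; the function name and the example numbers are ours, not from the source:

```python
# Minimal sketch of the access-maximizing value derivation described above.
# Symbols follow the text: v_c is the current value per patient, n the
# at-risk population, c the compliant fraction, e_c the current total
# expenditure, and v_e the access-maximizing per-patient value with AI.

def access_maximizing_value(v_c: float, c: float) -> float:
    """Cap total expenditure at current payor willingness to pay:
        e_c = n * c * v_c    (current spend, humans only)
        e   = n * v_e = e_c  (full access, same total spend)
    so v_e = c * v_c; note that n cancels out."""
    return c * v_c

# Illustrative numbers (they happen to match the paper's diabetic retinal
# exam case study: v_c = $175 median exam reimbursement, c = 0.3 compliance).
v_e = access_maximizing_value(175.0, 0.3)
print(round(v_e, 1))  # 52.5
```

A notable design consequence, visible in the algebra: because n cancels, the per-patient payment depends only on the compliance fraction and the current per-patient value, not on population size.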
Understanding the economic value of an AI service in healthcare is required for AI creators, as the AI\u2019s economic value forms an important constraint during the AI product development lifecycle, in addition to the constraints derived from ethical considerations, such as high accuracy, efficacy, and lack of bias, and requirements derived from workflow needs, such as autonomy, clinical labor requirements, availability at point-of-care, liability considerations, and user experience. Under this framework, analyzing reimbursement on a per-patient cost additionally aligns with the model of relative valuation within the Medicare Physician Fee Schedule and its Resource Based Relative Value Scale (RBRVS), which includes distinct physician work, professional liability, and practice expense relative value units (RVUs)24. Practice expenses within the RBRVS are considered either \u201cdirect\u201d or \u201cindirect\u201d. Direct expenses are priced by CMS based on invoices charged in a competitive marketplace by suppliers of the AI service25. This allows the AI system value to be mapped onto the existing regulatory and reimbursement structures within practice expense. In fact, stakeholders have at their disposal existing tools to ensure AI systems are aligned with their specific goals26, such as: Federal legislative and Executive branches of the US government: for example, federal legislation that impacts the adoption of healthcare AI, such as the 21st Century Cures Act and related amendments to the Social Security Act, and the formation of the National Artificial Intelligence Advisory Committee (NAIAC) in the Executive Branch30. The Federal Trade Commission has regulatory oversight, but does not ensure the safety and efficacy of products. Finally, the U.S. Department of Health and Human Services\u2019 Office for Civil Rights, and state consumer protection and civil rights agencies, may take action against discriminatory practices due to AI systems. Regulators: the US Food and Drug Administration (FDA) has developed, and continues to develop, a wide range of regulatory guidelines and discussion papers on AI system safety31 and ethics, in collaboration with other stakeholders. Patients and patient organizations: involvement in the comment process for CMS proposed rules, and in FDA\u2019s regulatory processes through the Patient Preference Initiative and other guidance32; creation and updating of Standards of Care, such as ADA\u2019s Standards of Medical Care for Diabetes, updated to support the use of autonomous AI for the diabetic retinal exam. Evidence-based medicine experts: the National Committee for Quality Assurance (NCQA) can create or update quality measures, described in the Healthcare Effectiveness Data and Information Set (HEDIS), and there are financial incentives tied to meeting a HEDIS measure for payors and providers. The United States Preventive Services Task Force (USPSTF) makes evidence-based recommendations about preventive services, which may include AI, and which can affect patient access. Physician and provider organizations: the American Medical Association\u2019s Current Procedural Terminology (CPT\u00ae) Editorial Panel evaluates new services, including digital technology and those supported by AI systems, through literature analysis, determination of widespread use and FDA regulatory authorization, evaluating whether they meet criteria of safety, effectiveness, and usefulness, and subsequently creates or updates CPT\u00ae codes33. The CPT Editorial Panel has published a taxonomy describing a hierarchy of AI services34. The RVS Update Committee (RUC) makes recommendations to CMS on the physician work and direct practice expenses of the AI services35. Where applicable, whether the patient continues to have a \u201cmedical home\u201d when AI systems are implemented may be important. Government insurance: the Centers for Medicare and Medicaid Services (CMS) periodically creates coverage and reimbursement through its rule-making processes, and CMS considers practice expense, market-based supply and equipment pricing updates, as well as rate-setting refinements, malpractice expense, labor rates, and geographic practice cost indices to set rates for the Medicare Physician Fee Schedule (MPFS)36, the Outpatient Prospective Payment System (OPPS), and the Inpatient Prospective Payment System (IPPS)37. Additionally, CMS administers the Merit-based Incentive Payment System (MIPS), which requires payment adjustments according to performance on cost and quality measurements for certain eligible Medicare Part B providers. MIPS measures may also align with external quality measure sets, such as HEDIS, and can thereby affect services using AI. Commercial health insurance: these entities have their own processes for setting their rates; in practice they often set reimbursement by applying a >100% percentage of the Medicare PFS38. Collectively, the processes 
described above, on which stakeholders engage, are important first steps in creating a complete system of guardrails. The goal of these guardrails is to address concerns about how a specific healthcare AI system affects patients and populations with respect to clinical outcomes, safety, AI bias, cost, and health equity. As presented, this transparent framework allows optimizing a balance between the ethical, workflow, cost, and value perspectives on AI services, by all stakeholders15. An earlier form of this framework was presented by the first author as part of a US Congressional briefing on May 28, 201939. In this real-world example, we examine how the framework can be used for an autonomous AI for the diabetic retinal exam that diagnoses diabetic retinopathy and diabetic macular edema without human oversight. This AI system is used in primary care and endocrinology clinics where patients with diabetes are managed. Rather than the patient being referred to an eye care provider for the diabetic retinal exam, the autonomous AI system, which includes a robotic fundus camera, makes a point-of-care diagnosis in real time and produces an individualized diagnostic report without review by a human. Only if the result is abnormal, i.e., disease is present, is the patient referred to an eye care provider. It received de novo FDA authorization in April 201840, after extensive clinical testing41, and has since been widely implemented. We examine both how the AI creator came to the cost for the use of the AI system, and how all stakeholders were involved in creating the guardrails and appropriate payment for the use of the AI system. The creator used the access maximizing value approach to arrive at ve \u2245 55, based on the following assumptions: vc = $175 for the Medicare population, based on the median Medicare reimbursement for an eye exam by an eye care provider in 201536; and c = 0.3 for the Medicare population, the estimated compliance with the diabetic retinal exam in that population43. This ve thus served as an anchor for the AI creator to charge $55 per patient for this AI service, covering the AI work as well as the cost to obtain inputs and manage AI output. Under this approach, while AI creators can use the per-patient ve, or any other value or cost, as a design anchor, this does not preclude them from charging for the AI service per volume, per population, or per usage instead. This anchor allowed the creator to optimize for cost; for example, choosing more expensive retinal camera hardware might allow higher accuracy and diagnosability at lower R&D expense for the diagnostic artificial intelligence algorithms, but would increase the cost per patient to a level that exceeds ve. A less expensive camera instead required more sophisticated AI algorithms, with higher R&D expenses, but, because of AI\u2019s scalability, allowed the creator to still meet the ve chosen above. Regulators: As mentioned, the pivotal trial for this autonomous AI system was completed in 2017, and because the endpoints included safety, equity, and efficacy, as well as addressing bias41, FDA De Novo authorized this autonomous AI in 201840. Specifically, this preregistered pivotal trial included hypothesis testing for the statistical absence or presence of racial, ethnic, and sex bias in diagnostic sensitivity and specificity, and the absence of bias was indeed confirmed. As a prescription device, it ensures the patient remains in the medical home. Patient and patient organizations: The scientific evidence for the safety and efficacy of this autonomous AI was cited by the American Diabetes Association in the 2020 update to its Standards of Medical Care for Diabetes. This updated standard of care supported the use of autonomous AI for the first time44. 
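The case-study arithmetic can be checked directly. A hedged illustration: the inputs are the creator's stated assumptions as quoted in the text, and the comparison range is the CMS national payment for CPT 92229 cited elsewhere in this article:

```python
# Cross-check of the case-study numbers quoted in the text.
v_c = 175.0    # median Medicare eye-exam reimbursement, 2015 ($)
c = 0.3        # estimated diabetic retinal exam compliance, Medicare
v_e = c * v_c  # access-maximizing value per patient: v_e = c * v_c
anchor = 55.0  # per-patient price the AI creator chose (~ v_e)

# CMS finalized national values for the exam: $45-$64 per exam (2022).
cms_low, cms_high = 45.0, 64.0
assert cms_low <= anchor <= cms_high  # the anchor sits inside CMS's range

print(f"v_e = ${v_e:.2f}, creator anchor = ${anchor:.0f}")
```

This is only a consistency check of the figures reported in the text, not a reconstruction of CMS's actual rate-setting methodology.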
Evidence-based medicine experts: NCQA updated HEDIS measurement language to support the use of autonomous AI to close the diabetic retinal exam care gap and meet this measure46, and noted: \u201cThe addition of code 92229 for retinal imaging with automated point-of-care, [\u2026], increase early detection and incorporation of findings into diabetes care. Innovative solutions like the augmented intelligence technology described by new code 92229 have the potential to improve access for at-risk patient populations by bringing retinal imaging capabilities into the primary care setting\u201d47. The HEDIS measure tracks the percentage of people with diabetes in a covered population that has received a documented diabetic retinal exam, and intends to incentivize payors and provider organizations to have this measure reach at least 80%45. USPSTF confirmed that the diabetic retinal exam is not a primary preventive service, as the patient has already been diagnosed with diabetes. Physician and provider organizations: The CPT\u00ae Editorial Panel created the first CPT\u00ae code for autonomous AI, 92229, in May 201948. As usual, the exact reimbursement per patient that a provider using the AI service will receive depends on geographic and other factors. CMS also confirmed that the autonomous AI exam qualifies for MIPS measure 117, and can close the diabetic retinal exam care gap49. Government health programs: CMS in November 2021 finalized its proposal to establish national values for the autonomous diabetic retinal exam (described by CPT\u00ae code 92229), with transparent RVUs, at $45\u2013$64 per exam as of Jan 1, 2022. Commercial insurance: as mentioned, these entities refer to CMS rate setting, as well as other factors, to establish their own payment amounts. These guardrails allow continuous and ongoing focus on support by all stakeholders, and they allowed this specific AI system for the autonomous AI diabetic retinal exam, supported by all stakeholders, and its reimbursement, to be determined15. 
In addition, this decision allows similar, transparent analyses of cost and value for other AI services\u2019 reimbursement. These decisions have laid the groundwork for the financial sustainability of healthcare autonomous AI for the diabetic retinal exam, resulting in an optimum between the ethical, workflow, cost, and value perspectives on AI services [1]. In fact, the framework is focused on the enhancement of health equity\u2014more patients receiving the service, especially where currently underserved\u2014on reducing cost per patient, and on predictable, sustainable financial incentives for AI creators. It allows an optimum between ethical, workflow, cost, value, and sustainability perspectives on AI services, and thus continued support by all stakeholders, including patients, providers, legislators, payors, and AI creators such as the AI Healthcare Coalition. We demonstrated how, for a specific autonomous AI service, the diabetic retinal exam, the present framework maps well onto existing reimbursement, regulatory, and value-based care processes, and how sustainable reimbursement can be achieved in a transparent manner. The present framework is designed so that AI systems that have been shown to be safe and effective, whose potential bias has been mitigated, and that were developed under an ethical framework can be priced and reimbursed at a sustainable level, with multiple \u201cguardrails\u201d enforcing ethical principles and overseen by all stakeholders, including regulators, providers, and patient organizations. The resulting reimbursement allows for sustainable, predictable financial incentives for AI creators, and for continued research. We have presented a framework for establishing reimbursement for the work component of healthcare AI. 
It aligns with existing ethical frameworks for AI. The financial incentive framework presented here may be helpful in analyzing the value and cost of each unique AI service, to guide the development of sustainable reimbursement for future AI services while ensuring quality of care and healthcare equity and mitigating potential bias. Thus, appropriate financial incentives for healthcare AI will contribute to realizing the potential of AI to improve clinical outcomes for patients and populations, remove disparities, lower cost, and improve access."}
+{"text": "Artificial intelligence (AI) is widely positioned to become a key element of intelligent technologies used in the long-term care (LTC) for older adults. The increasing relevance and adoption of AI has encouraged debate over the societal and ethical implications of introducing and scaling AI. This scoping review investigates how the design and implementation of AI technologies in LTC are addressed responsibly: so-called responsible innovation (RI). We conducted a systematic literature search in 5 electronic databases using concepts related to LTC, AI, and RI. We then performed a descriptive and thematic analysis to map the key concepts, types of evidence, and gaps in the literature. After reviewing 3,339 papers, 25 papers were identified that met our inclusion criteria. From this literature, we extracted 3 overarching themes: user-oriented AI innovation; framing AI as a solution to RI issues; and context-sensitivity. Our results provide an overview of measures taken and recommendations provided to address responsible AI innovation in LTC. The review underlines the importance of the context of use when addressing responsible AI innovation in LTC. However, limited empirical evidence actually details how responsible AI innovation is addressed in context. Therefore, we recommend expanding empirical studies on RI at the level of specific AI technologies and their local contexts of use. Also, we call for more specific frameworks for responsible AI innovation in LTC to flexibly guide researchers and innovators. Future frameworks should clearly distinguish between RI processes and outcomes. In LTC, AI is said to enable and improve an increasing variety of intelligent technologies such as remote monitoring systems, recommendation and decision support software, social robots, and virtual assistants that interact with older adults and their caregivers on a daily basis. 
One widespread expectation of AI is that it allows such technologies to learn about their environment and adapt to changing contexts of action. This calls for responsible innovation (RI), which requires innovators, users, and other stakeholders to have a critical look at the social and ethical consequences of AI technologies for older people, their environment, and society as a whole. Despite its promises and benefits, the increasing relevance and adoption of AI in LTC and other domains of society has encouraged debate over the societal and ethical implications of introducing and scaling AI. Less attention has been paid to the implementation and impact of such principles in the actual design and implementation of AI in practice. This could be problematic because high-level principles leave much room for interpretation as to how they can be practically applied in specific contexts of use such as LTC. Our search combined concepts related to (a) LTC, (b) AI and technologies in LTC that are potentially driven by AI, and (c) RI. Five databases were searched. All authors were involved at the beginning, middle, and end of the screening process to ensure consistency and investigator triangulation. We defined and refined inclusion criteria for each of the core concepts in our search before and throughout the iterative screening process: (1) LTC: eligible papers address technological systems or services that are (to be) used by older adults who receive LTC, and/or used by their formal and informal caregivers. By LTC, we mean the assistance given over an extended period of time to people who, as a result of aging and related conditions such as dementia, experience inabilities to perform tasks associated with everyday living. (2) AI: eligible papers provide information about the (semi)autonomous decision-making capabilities of the addressed technologies, that is, about the data-processing mechanisms that enable them to carry out certain tasks independently. 
Responsible AI innovation can only be properly assessed if clear explanations are provided about the role of AI in the article. Papers are excluded if they only discuss whether AI technologies can be responsibly used in LTC without discussing how they can be responsibly designed or implemented. Papers are also excluded when they discuss which RI issues should be addressed in the context of a particular AI technology without providing clues on how to address these issues at the level of the technology\u2019s design or implementation. Further, papers are excluded if they solely assess the accuracy, usability, or acceptability of technologies. (3) RI: eligible papers report on recommendations for decisions in practice to foster the responsible design and/or implementation of AI technologies in LTC. For instance, eligible papers describe how certain measures relating to the design or implementation of AI technologies contribute to the ethical acceptability, sustainability, and/or social desirability of these technologies. Papers that made an insufficient link between the three core search concepts were excluded. For papers by the same authors and with similar content, only the most recent peer-reviewed article was included. Finally, 25 papers were selected for the review. An overview of the search and screening process is shown in the flow diagram. The review comprised two stages. To minimize subjective biases, the authors acting as literature reviewers performed each stage independently from each other. First, a title and abstract screening was performed (by D. R. M. Lukkien and H. P. Buimer) to select papers that met all three main inclusion criteria. When one reviewer had doubts about compliance with one or more criteria, or if there was any disagreement between the reviewers, they discussed the article orally, if necessary together with a third reviewer (H. H. Nap), to reach consensus. After exclusion of duplicates following the preliminary screening, 106 papers were subject to full-text reading. In a second round of full-text screening (by D. R. M. Lukkien and H. 
H. Nap), records that discussed any of the three core search concepts only marginally, and that made an insufficient link between them, were excluded. The degree of application means that we distinguish between papers that report on actual measures taken to address responsible AI in existing innovation practices and papers that only contain recommendations to address RI at the level of the design and/or implementation of AI technologies. This distinction shows whether responsible AI innovation is actually addressed in practice. With the level of application, we refer to papers classified as being related to a specific AI system, a particular category of AI technologies in LTC, or AI in LTC in general. This is relevant because it shows the context-specificity of the reported measures or recommendations. If applicable, we also report on the specific context of application, for instance a specific AI system, project, or geographical area in which responsible AI innovation is studied or practiced. For each paper selected, we report descriptive results about the authors with the year of publication, the country of the first author, the types of technologies discussed, the role of AI in the technology, the type of study, and (if applicable) the methods and stakeholders involved in empirical data collection. Also, to provide an impression of practical approaches to responsible AI innovation in LTC, we report on the responsible AI principles that each article addresses, and we categorized papers in terms of their degree, level, and context of application. Our in-depth analysis of the included literature comprised an inductive thematic analysis to identify, analyze, and report repeated patterns across the articles. The systematic search in the digital libraries was conducted from June 2020 to September 2020. The majority of the included papers (n = 15) were published between 2018 and 2020. The identified papers mainly address RI in the context of care robots (n = 12) and monitoring and smart home technology (n = 7). 
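The dual-reviewer screening rule described above (independent decisions, with discussion or a third reviewer on doubt or disagreement) can be sketched as a small decision function. The labels and function names below are illustrative, not from the published protocol.

```python
# Sketch of the dual-reviewer screening rule described in the review:
# two reviewers decide independently; doubt or disagreement is resolved
# by discussion, if necessary with a third reviewer. Labels are illustrative.

INCLUDE, EXCLUDE, DOUBT = "include", "exclude", "doubt"

def screen(decision_a, decision_b, third_reviewer=None):
    """Combine two independent screening decisions for one paper."""
    if DOUBT in (decision_a, decision_b) or decision_a != decision_b:
        # Doubt or disagreement: escalate to discussion / third reviewer.
        if third_reviewer is None:
            return "discuss"
        return third_reviewer(decision_a, decision_b)
    return decision_a  # The reviewers independently agree.

print(screen(INCLUDE, INCLUDE))  # prints: include
print(screen(INCLUDE, EXCLUDE))  # prints: discuss
```

The point of the escalation branch is that no single reviewer's judgment alone can include or exclude a contested paper, which is how the review minimizes subjective bias.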
The papers differ in terms of how specifically they describe the role of AI in their contexts. Nineteen of the included studies did not involve primary research but described the authors\u2019 conceptual perspective on responsible AI innovation in LTC, the related technical approach, its feasibility, and/or an analysis of the literature. In total, six empirical studies were included, of which five used qualitative methods and one applied mixed methods. Most papers relate their discussion to specific responsible AI principles (n = 22), while three papers discuss measures to address responsible AI innovation that are independent of principles. The other 17 papers solely provide recommendations for addressing RI in the design and implementation of AI technologies. While four of them discuss technical approaches and methods to address principles such as trust and transparency in AI, these were classified as \u201csolely recommendations\u201d because they do not report the respective methods being actually applied in existing AI technologies. Papers address RI at the level of a specific AI system, in light of a particular category of AI-based technologies (n = 13), or without specific regard to particular types of technologies (n = 3). Few papers report actual measures taken to address responsible AI innovation at the level of specific AI-based systems in LTC. Within the theme of user-oriented AI innovation, several subthemes recur in the included papers. First, several papers stress the need to foster inclusivity and equity in the design and implementation of AI technologies. Second, five papers discuss the need to safeguard the human dimension in AI-driven care. This is firstly to foster social connectedness and avoid exacerbating the social isolation of older adults, and secondly to have human supervision over AI-driven outcomes. One suggestion is that AI technologies should primarily be designed to assist human caregivers in supporting older adults, foster meaningful interactions between older adults, or substitute human caregivers only when they are not available. Third, 11 papers stress the importance of engaging older adults and their caregivers throughout the innovation process. 
A contrasting set of papers frames AI as a technical fix for certain RI issues that are associated with supportive technologies in LTC, rather than as an RI problem in itself. In total, 11 papers discuss reasons and ways to use AI as a solution to RI issues. These papers discuss conceptual, technical, or methodological approaches to delegating some degree of responsibility to AI technologies themselves. For instance, three papers discuss technical approaches to enabling AI technologies to determine what information should be shown to different users at a given moment. In total, 13 papers explicitly discuss the need and/or ways to be sensitive to the specific context of use of AI technologies in LTC when addressing RI. The included literature reflects this theme in multiple ways. First, some papers position context-sensitivity as a conditional factor for, or as an integral part of, RI, regardless of the particular issues at stake. For instance, four papers advocate a hybrid approach to responsible AI innovation as a means of achieving context-sensitivity in RI. On the one hand, such a hybrid approach requires top-down formulation of principles by experts and the realization of these principles in the generic design of AI technologies. On the other hand, it requires bottom-up engagement with the perspectives of individual users who are affected by AI technologies. In this way, the set of principles that guides AI\u2019s behavior can be attuned to the specific context of use, but within the parameters of the general ethical framework. Second, other papers relate context-sensitivity to the particular RI issues at stake. While many studies recognize that responsible AI innovation in the LTC for older adults requires contextualization, limited studies address RI at the level of specific AI technologies and their local contexts of use. 
The ongoing scientific efforts to practice responsible AI innovation in LTC seem to be largely centered around the discussion of social and ethical concerns of AI, the perspectives of intended users and other stakeholders, and frameworks and principles that are adequate in this domain. We found limited empirical substantiation of practical measures that support responsible AI innovation and address principles in specific contexts of use. Some of the reviewed papers advocate a hybrid approach to responsible AI innovation that involves top-down expert perspectives and bottom-up user perspectives. However, they do so as part of mulling over the delegation of moral responsibilities to AI. Still, the reviewed literature does describe rationales and ways to further address responsible AI innovation in LTC \u201cin context.\u201d Innovators often have difficulties in reconciling insights about user- or context-specific requirements, or they even \u201cdecontextualize\u201d design solutions because of their own need to offer somewhat standardized and scalable solutions. In the meantime, it strikes us as pertinent that a hybrid approach to responsible AI innovation in LTC is pursued through human decision making involving older adults, their caregivers, and technology developers. This calls for innovators and future research about AI innovations in LTC to seek direction from principles and experts. Concurrently, innovators and researchers should continue to iteratively engage with users and people who are affected by specific AI technologies, even if some users, such as people with dementia, may have difficulties in expressing their feelings and wishes. Future frameworks should clearly distinguish between RI outcomes and RI processes. Our findings have consequences for future frameworks for responsible AI innovation in LTC. The majority of included papers address the relevance and application of certain principles for responsible AI innovation, such as autonomy, informed consent, privacy, transparency, justice, fairness, and trust. 
However, principles can be addressed both as RI outcomes, for instance when personalized feedback loops in the system\u2019s design foster users\u2019 understanding and transparency, and through RI processes, such as the inclusion of voices and data of minority populations to foster fairness, diversity, and nondiscrimination and to ensure, for example, that technologies are made to fit both the eastern \u201cinterdependent\u201d perspective on aging and the western \u201cindependent\u201d perspective. RI outcomes concern the characteristics that a given technology should possess and the societal needs or values and principles that must be addressed by innovation. RI processes concern the way in which innovation is organized and conducted. Future frameworks should clarify how both can be flexibly attuned in context, from early design to local use. Another condition for such frameworks is that they are backed by illustrative empirical evidence that helps researchers and AI practitioners in LTC to flexibly address responsible AI innovation in different contexts of use. Further, such frameworks need to be continuously reshaped over time, since socially shared normative frameworks evolve with the emergence of new technologies and their routinization. In addition to the generation of frameworks, we call for expanding the empirical evidence on how responsible AI innovation is addressed in actual practice. It is important for researchers and innovators to explicate what decisions or actions in the design or implementation of AI technologies in LTC underpin RI, to think about local embedding, and to offer more concrete suggestions at that level. In this respect, it could be useful to adopt a guidance ethics approach. This literature review included only papers that were fairly explicit about why the addressed technologies are labeled as \u201csmart,\u201d \u201cintelligent,\u201d or \u201cadaptive,\u201d for instance, and how AI plays a role in their operation. 
For this reason, discussions between the literature reviewers were held over a fair number of abstracts and full texts to reach consensus. In many cases it was decided to exclude specific papers because they insufficiently explicated whether AI was involved. Also, our review included only academic research papers. Hence, it cannot claim to be complete and exhaustive in terms of the practical efforts that are or can be made to foster the responsible design and implementation of AI technologies in LTC. Incomplete access to the AI work being pursued by leading commercial technology companies is a limitation. A thorough examination of the gray literature could be useful to further reveal how this topic is addressed in practice. We acknowledge the challenge of being complete with regard to the dimensions of responsible AI innovation in LTC that can be addressed. We therefore set up a comprehensive search strategy by using concepts from a global review on AI and ethics guidelines, among others. Future research could examine both RI outcomes and processes, from early design to local use, and explore how these outcomes and processes can be flexibly attuned in context. Therefore, we recommend expanding the empirical evidence on RI at the level of specific AI technologies and their local contexts of use in LTC. Based on our in-depth analysis of the relevant literature, we found three overarching themes that represent focus areas in practicing responsible AI innovation in the LTC for older adults: user-oriented AI innovation; framing AI as a solution to RI issues; and context-sensitivity. The results underpinning these themes provide insights into the efforts that can be made to foster the responsible design and implementation of AI technologies in LTC. This review therefore provides directions for AI researchers and practitioners when determining how AI technologies in LTC can be responsibly designed and implemented in the future. 
Importantly, a common thread in the studied literature is that responsible AI innovation requires a nuanced contextualization of RI issues and solutions. At the same time, the review points out that the current literature lacks clear substantiation of how certain measures affect responsible AI innovation in specific contexts. Future empirical research and frameworks on responsible AI innovation in LTC could reveal how certain principles are at the basis of RI."}
+{"text": "With the potential integration of artificial intelligence (AI) into clinical practice, it is essential to understand end users\u2019 perception of this novel technology. The aim of this study, which was endorsed by the British Society of Gastroenterology (BSG), was to evaluate the UK gastroenterology and endoscopy communities\u2019 views on AI. An online survey was developed and disseminated to gastroenterologists and endoscopists across the UK. One hundred four participants completed the survey. Quality improvement in endoscopy (97%) and better endoscopic diagnosis (92%) were perceived as the most beneficial applications of AI to clinical practice. The most significant challenges were accountability for incorrect diagnoses (85%) and potential bias of algorithms (82%). A lack of guidelines (92%) was identified as the greatest barrier to adopting AI in routine clinical practice. Participants identified real-time endoscopic image diagnosis (95%) as a research priority for AI, while the most significant perceived barriers to AI research were funding (82%) and the availability of annotated data (76%). Participants consider the priorities for the BSG AI Task Force to be identifying research priorities (96%), guidelines for adopting AI devices in clinical practice (93%) and supporting the delivery of multicentre clinical trials (91%). This survey has identified views from the UK gastroenterology and endoscopy community regarding AI in clinical practice and research, and identified priorities for the newly formed BSG AI Task Force. There is limited knowledge of the perspective of end users in gastroenterology/endoscopy toward artificial intelligence (AI) technology. 
To date, this has only been explored in the US gastroenterology community. Quality improvement in endoscopy (97%) was perceived to be the most significant benefit in applying AI to clinical practice, while the most significant challenges were accountability for incorrect diagnoses (85%) and bias of algorithms (82%). Participants consider the priorities for the British Society of Gastroenterology (BSG) AI Task Force to be identifying research priorities (96%), guidelines for adopting AI devices in clinical practice (93%) and supporting the delivery of multicentre clinical trials (91%). The survey results provide an insight into the views of UK gastroenterologists and endoscopists on AI technology. The findings can help propel the specialty forward in the clinical translation of AI, identify priorities for the newly formed BSG AI Task Force and hopefully improve patient care and outcomes. Artificial intelligence (AI) is the ability of computers to perform tasks that traditionally require human intelligence such as learning and problem-solving. There are now multiple regulatory-approved AI systems available on the market to aid colonic polyp detection and characterisation and the identification of dysplasia in Barrett\u2019s oesophagus. New technologies are being developed, including automated interpretation of video capsule images, determining the severity of mucosal inflammation in inflammatory bowel disease and a host of non-endoscopic AI tools using natural language processing. With the likely integration of AI into clinical practice, it is essential to understand the end user perception of this novel technology. 
To date, this has only been explored in the US gastroenterology community. The aim of this study, which was endorsed by the British Society of Gastroenterology (BSG), was to survey the UK gastroenterology and endoscopy community to assess their views on the benefits and barriers to the clinical application of AI technology, research in AI and the priorities for the newly established BSG AI Task Force. This is a prospective cross-sectional observational study. Participants from diverse clinical roles and workplace environments were recruited to complete an online survey. For the survey\u2019s conceptualisation of themes, we created a focus group of five experts in the field of AI in gastroenterology. Themes and question items were identified by consensus between members of the focus group. Junior and senior gastroenterologists undertook pilot testing at a teaching hospital in London. Minor adaptations were made before the survey was finalised. Questions used 3-point and 5-point Likert scales and multiple-choice questions. Five themes were explored: (1) participant demographics, (2) participant experience in AI, (3) benefits and barriers of adopting AI in clinical practice, (4) priorities of and barriers to research in AI, and (5) priorities for the BSG AI Task Force. It was mandatory to complete the questions for themes (1) and (2), while the remaining ones were optional. The survey was distributed via multiple methods over a period of 5 months (October 2020\u2013February 2021). All BSG members were invited to complete the survey via the electronic BSG Newsletter, which advertised the survey weblink. The newsletter was emailed to 3154 members and was opened by 1268. Additionally, the survey weblink was disseminated via email to members of the BSG AI Task Force. The survey weblink was also available on the BSG Open Survey webpage, which is accessible to BSG and non-BSG members. 
Participants were encouraged to invite colleagues to complete the survey. All participants provided electronic consent at the start of the survey. Participants entered their responses online using Research Electronic Data Capture tools hosted on the University College London Data Safe Haven. Descriptive statistics were used to summarise the survey results. Categorical data were reported as proportions (percentages) and analysed through cross-tabulation statistics using the \u03c72 test. A p value of <0.05 indicates statistical significance. All statistical calculations were performed using GraphPad Prism software, V.9.0.0, while graphs were constructed using Microsoft Excel V.16.48 and RStudio V.1.1.1106. A total of 104 participants completed the survey. Participants (n=104) were spread across 3 countries and 11 regions, with the majority from London (63%). Most of the participants had no formal education or qualification in AI (72%), which we defined as either attendance at an organised AI teaching day (25%) or completion of an AI course with certification (3%). Almost half of participants rated themselves as only \u2018slightly familiar\u2019 (47%) with research methodology in AI, 26% as \u2018not familiar at all\u2019, 19% \u2018moderately familiar\u2019 and 8% as \u2018very familiar\u2019. A similar proportion (45%) had read less than 5 AI-related papers, with 25% having read 5\u201320, 15% none and 14% more than 20. Participants (n=92) perceived the most significant benefits in the application of AI to be in quality improvement of endoscopy (97%) and better endoscopic diagnosis (92%). The most significant perceived challenges of using AI were accountability for incorrect diagnoses (85%) and bias of algorithms (82%). 
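As an illustration of the cross-tabulation (chi-squared) analysis described in the methods above (the authors used GraphPad Prism, not code), the Pearson statistic for a 2x2 table can be sketched as follows. The counts are invented for illustration and are not the survey's data.

```python
# Hedged sketch of a chi-squared test on a 2x2 cross-tabulation, as used in
# the survey analysis above. The contingency table is HYPOTHETICAL:
# rows = secondary vs tertiary care, columns = top-2-box rating vs rest.

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut formula: chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

stat = chi2_2x2(30, 10, 25, 35)  # invented counts
# 3.84 is the chi-squared critical value at df=1, alpha=0.05,
# so stat > 3.84 corresponds to p < 0.05.
print(round(stat, 2), stat > 3.84)  # prints: 10.77 True
```

This is the same comparison-of-proportions logic behind statements such as secondary versus tertiary care participants rating a priority differently, with significance declared at p < 0.05.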
Participants (n=64) viewed the most significant barriers to adopting AI in routine clinical practice to be a lack of guidelines (92%) and local access of the hospital to AI devices (89%). Participants (n=60) expressed that the priority of AI research should be real-time endoscopic image diagnosis (95%), followed by general quality improvement (87%), automated reporting (82%) and, lastly, natural language processing (NLP) (70%). For AI research in endoscopy, 92% of participants ranked colonoscopy as a very high/high priority, followed by the upper gastrointestinal system (UGI) (67%) and capsule endoscopy (35%). Participants in secondary care viewed research in the UGI system as a higher priority than tertiary care colleagues. Participants (n=54) perceived the most significant barriers to AI research to be funding (82%), the availability of annotated data (76%) and access to Big Data (72%). The main priorities for the BSG AI Task Force (n=94) ascertained through this survey were to identify research priorities (96%), develop guidelines for adopting AI devices in clinical practice (93%) and support the delivery of multicentre clinical trials (91%). We report the results of the first survey to evaluate the perceptions of the UK gastroenterology and endoscopy community toward AI. Quality improvement of endoscopy and endoscopic diagnosis was the greatest perceived benefit of AI to clinical practice. This may reflect the greater familiarity with AI\u2019s application to endoscopy. Currently, all AI-related RCTs, human benchmarking studies and regulatory-approved technologies within gastroenterology are in their application to endoscopy. Accountability for incorrect diagnoses and bias of algorithms were identified as the main challenges of using AI. The issue of accountability for wrong diagnoses is a complex problem. 
Errors will almost certainly occur with AI, and the cause of errors resulting in patient harm may differ in different scenarios. Despite the availability of regulatory-approved AI technology, its adoption into routine clinical practice in endoscopy has been slow. A lack of guidelines was identified as the main barrier to its adoption, and addressing this would help drive the specialty forward in the clinical translation of AI. Local access of hospitals to AI technology was another important barrier. Careful consideration and planning are required by national organisations such as the National Institute for Health and Care Excellence, NHSx and the BSG to avoid a two-tier healthcare system emerging in the National Health Service. Respondents expressed that the priority of AI research should be in endoscopic image diagnosis followed by quality improvement. Ninety-two per cent of participants ranked colonoscopy as the highest priority for AI research, followed by 67% for UGI endoscopy, with participants in secondary care prioritising the UGI system more than tertiary care colleagues. This suggests that endoscopic diagnosis of the UGI system using AI may play a more significant role in secondary care and should be explored further. Participants perceive the most significant barriers to AI research to be funding, the availability of annotated data and access to \u2018Big Data\u2019. Funding is an issue that is broadly applicable to all research, whereas the availability of annotated data and access to Big Data are more specific to AI. The priorities for the BSG AI Task Force in the view of participants are identifying research priorities in AI, guidelines for adopting AI devices in clinical practice and supporting the delivery of multicentre trials. Most of the research to date is limited to computer vision, but this is a narrow application of AI. 
Given the abundance of text used in clinical practice, it may have a far greater reach and potential with other applications such as NLP. The Task Force should identify research priorities to help guide research to where it can best improve patient care and maximise the potential of AI. The lack of guidelines for adopting AI in clinical practice was identified as the main barrier to its clinical adoption, and developing such guidelines is a high priority for the Task Force, emphasising its paramount importance. Concerns relating to accountability for wrong diagnoses and bias of algorithms should be addressed in these guidelines. While gastroenterology is at the forefront of clinical studies of AI, there are still only a limited number of clinical studies, with the majority of these being single-centre. There are several limitations to this study. As with any voluntary survey, self-selection of participants who choose to engage with the survey can bias the results. Participants from London made up more than half of the cohort. This is potentially due to AI research mainly being carried out in teaching centres, which are most densely concentrated in London. The selection of responses in the survey was also limited to those questions decided by the survey developers. The Likert scale also limits participants to categorical responses, which means that we could not measure the true attitude of participants. Furthermore, our cohort represents a small proportion of the UK gastroenterology and endoscopy community, which may not represent the community as a whole. This survey of UK gastroenterologists and endoscopists identified some of the perceived benefits, challenges, and barriers to applying AI in clinical practice and AI research. The BSG AI Task Force should consider identifying research priorities, guidelines for adopting AI devices in clinical practice and supporting the delivery of multicentre clinical trials."}
+{"text": "The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also seen as ethical issues that affect trustworthiness in medical AI and need to be managed through identification, prognosis and monitoring. We adopted a multidisciplinary approach and summarized five subjects that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution, and discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. The ethical framework of ethical values-ethical principles-ethical norms is used to propose corresponding governance countermeasures for trustworthy medical AI from the ethical, legal, and regulatory aspects. Medical data are primarily unstructured, lacking uniform and standardized annotation, and data quality directly affects the quality of medical AI algorithm models. Algorithmic bias can affect AI clinical predictions and exacerbate health disparities. The opacity of algorithms affects patients’ and doctors’ trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practice may threaten doctors’ and patients’ autonomy and dignity. When accidents occur with medical AI, responsibility attribution is not clear. All these factors affect people’s trust in medical AI. In order to make medical AI trustworthy, at the ethical level, the ethical value orientation of promoting human health should first and foremost be considered as the top-level design. At the legal level, current medical AI does not have moral status and humans remain the duty bearers. 
At the regulatory level, strengthening data quality management, improving algorithm transparency and traceability to reduce algorithm bias, and regulating and reviewing the whole process of the AI industry to control risks are proposed. It is also necessary to encourage multiple parties to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication. Artificial intelligence (AI) has been described as the fourth industrial revolution, following the first “steam engine revolution”, the second “electrical revolution”, and the third “digital revolution”. The rapid development and application of medical AI promise more universal and efficient medical assistance and more convenient and accurate medical treatment, bringing revolutionary breakthroughs to traditional medicine at many levels. However, most current medical AI does not replace human doctors, but rather speeds up and assists human diagnosis. With the rapid development of technology, people increasingly find that, while enjoying the convenience of science and technology, they also face various uncertainties and anxieties, which bring confusion and a sense of crisis about the development of technology. The social consequences of a technology are difficult to predict early in its development, yet by the time the demand for technological change becomes intense, such change has become very difficult and time-consuming. This is called the “dilemma of technology control”, also known as the “Collingridge dilemma”. The rapid development of AI technology is likewise accompanied by many risks and challenges, and there is now a growing consensus among experts to view the adverse effects of AI as ethical risks. This paper focuses on trustworthy medical AI from an ethical perspective. 
We analyzed both the design level (whether the technology is reliable) and the application level to assess the factors that affect people's trust in medical AI, and proposed corresponding governance countermeasures from ethical, legal and regulatory aspects according to the ethical governance framework, pointing out a direction for the controllable and sustainable development of medical AI. The main factors affecting the trustworthiness of medical AI include whether it is technically safe and reliable, and whether it is used in a way that respects fundamental human rights and conforms to universal human values. We adopted a multidisciplinary approach to analyze the factors affecting the trustworthiness of medical AI from both the design and application levels (Fig.). In the era of big data, data are numerous and complicated. There is a saying in the computer field: “garbage in, garbage out.” The quality of data directly determines the quality of medical AI. Medical data mainly come from multi-source and heterogeneous origins, including literature data, clinical trial data, real-world data, and health data collected by a large number of smart wearables, fitness applications and other devices. Ensuring high-quality data is the primary prerequisite for AI development. At present, medical AI data suffer from several problems: errors and omissions in the original data entry process, lack of unified metadata standards, difficulties in data fusion, lack of data management, non-standardized strategies for data cleaning, and the storage of much medical data in unstructured forms such as text and images, which increases the difficulty of data management and integration. The accuracy of data annotation determines the quality of the dataset. Even if the data are accurate and representative, the result will be meaningless if there is a problem in the data annotation process. 
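Annotation problems of this kind are commonly quantified with inter-annotator agreement statistics. As a minimal illustration (the annotators, labels, and data here are hypothetical, not taken from the paper), Cohen's kappa compares the observed agreement between two annotators against the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' label lists (same items, same order)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: derived from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical lung-nodule annotations by two readers of the same 8 scans.
a = ["nodule", "nodule", "clear", "clear", "nodule", "clear", "clear", "nodule"]
b = ["nodule", "clear",  "clear", "clear", "nodule", "clear", "nodule", "nodule"]
print(round(cohens_kappa(a, b), 3))  # prints 0.5
```

A kappa near 1 indicates strong agreement; values much below that signal the consistency risk discussed here, suggesting the labeling guidelines or annotator training need revisiting before the data are used.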
The risk in data annotation lies mainly in consistency. For example, the gold standard for clinical diagnosis of pulmonary nodules is a pathological biopsy, but not every patient with lung nodules will have a biopsy, so imaging-based annotations cannot always be checked against ground truth. There are some who believe that AI may be able to mitigate existing biases in the healthcare system, for instance by reducing human error. Algorithmic bias, however, includes both human-induced bias and data-induced bias. Human-induced bias is intentionally or unintentionally written in by developers, because individuals are always influenced by their own moral perceptions and interests, which affects data training. There are three possible reasons for opacity: (1) algorithms are trade secrets that companies intentionally hide; (2) lay people are unable to understand programming and algorithmic techniques; (3) the complex nature of the algorithms themselves makes them incomprehensible to humans. The opacity of the algorithm creates “ignorance” among human agents and affects the trust of patients and clinicians in AI tools. According to research, patients are generally repulsed by unexplained AI interventions in diagnostic and treatment sessions; they usually accept AI only for administrative matters such as registration, bill payment, and guidance. The technical community has proposed the goal of explainable AI (XAI) in response. The safety issues of medical AI are the risks and harms that occur in its practice, such as program errors, cybersecurity attacks, the need for adequate testing, and difficult software certification, covering various legal and ethical issues. The risks of medical AI come more from the algorithms. First, the algorithmic black box means models lack explainability and are difficult to proofread. 
If the algorithm is flawed or incorrect, the output will lead to even greater errors, which are likely to cause diagnostic mistakes, harm human health, and even cost human lives. In 2015, surgeons in Britain used a medical robot to perform heart valve repair surgery; the robot not only made serious operational errors but also interfered with the correct operation of the human doctors, resulting in the patient's death. Data breaches are also a security concern. In today's world, data is called the “new oil” because of its economic value, and healthcare data is a particularly attractive target. In addition, AI challenges the perceived authority of clinicians and may influence their independent judgment. Introducing AI into treatment decision-making may reintroduce a paternalistic model of “computer knows best”: computers recommend treatments based on specific parameters that may not actually reflect the values and preferences of a particular patient. Medical AI replaces certain tasks previously performed by physicians, which will undoubtedly change the relationship between doctors and patients and poses a dilemma in the division of ethical responsibilities. If medical accidents occur, who should be responsible? Can AI itself be the subject of liability? If not, what moral status should we give to AI? To what extent should it be held responsible? Or, who should be responsible for AI? These are all tough questions. The EU's Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics and the European Civil Law Rules in Robotics (ECLR) suggest that the more autonomous the robot, the less the trainer's liability. However, it is controversial to grant personality and rights to robots. Some scholars argue that although AI may surpass humans in many respects, it does not essentially possess free will and does not have moral subjectivity. Before discussing whether AI is at fault, we need to clarify whether AI can qualify as an independent subject of legal responsibility. 
So, can AI be a subject of liability? The Turing test suggests that complex AI may have a certain level of consciousness. In 2017, the European Parliament's resolution on civil law rules on robotics even raised the possibility of a special legal status for the most sophisticated robots, but granting subjectivity to machines remains deeply controversial, and a gap still separates AI from human moral subjectivity. The development of AI should not overly pursue some “technological singularity”. Technology should serve humans, not replace human talent and creativity, and the autonomy of machines should not eliminate human subjectivity. Keeping the development and application of AI from hindering human autonomy, and preventing AI products from developing beyond human control or being misused, is the direction we should follow. As for the strong AI or super AI that people worry may emerge in the future, whose role may be closer to that of humans or may even surpass humans: could such AI become a subject of responsibility? Some authors believe that in the context of strong AI, humans only play the role of a supervisor, and AI should be given the qualification of a legal subject. So who is responsible for AI applications, and what kind of responsibility should each party bear? Who should be blamed if a doctor accepts a wrong diagnosis or treatment recommendation from a medical AI: the doctor who makes the final decision, the medical institution that decides to use the AI, the producer of the AI, or the algorithm itself? And what about when humans do not have enough control over the use of AI? In 2022, a report by the National Highway Traffic Safety Administration (NHTSA) on an investigation of 16 Tesla crashes showed that in several cases Autopilot (Automated Assisted Driving) gave control of the car back to the human driver on average “less than a second” before the crash; the crashes resulted in a total of 15 injuries and one fatality. A new technology is first generated by social needs. Then people conduct relevant scientific and ethical research. 
After that, the technology is put into application, and both the technology and its ethics are continuously improved in use, so that the technology finally adapts to the needs of society and to ethics in line with people's general values. It is the same for medical AI: for this new technology, we should fully assess the existing risks and those that may occur, as analyzed earlier in this paper. To address the related ethical issues, we sorted out the governance countermeasures of trustworthy medical AI using the ethical governance framework of ethical values-ethical principles-ethical regulations (Fig.). The law is a mandatory norm with a certain lagging defect; the legislative process is demanding and long-cycled, and more often than not it can only act in “hindsight” and cannot provide timely and effective protection. Therefore, ethics and morality become an effective complement to the legal system, and AI technology innovation must be carried out in accordance with ethical requirements. An ethical framework for AI design, manufacturing, and use should be established to evaluate the rights and wrongs of decisions and actions in the AI field. Taking ethical values as a foundation for developing AI technologies allows broader presuppositions to address potential technological risks. In 2016, the Institute of Electrical and Electronics Engineers (IEEE) released its first AI report, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems (AI/AS). Since then, many similar documents have followed, such as the Asilomar AI Principles, the Ethical Guidelines for Trustworthy AI by the European Union, the Next Generation AI Governance Principles-Developing Responsible AI from China, and Ethics and Governance of Artificial Intelligence for Health: WHO Guidance, etc. 
These frameworks echo the classic Principles of Biomedical Ethics: respect for autonomy, nonmaleficence, beneficence, and justice. Although people agree with ethical values, there is a huge gap between ethical orientation and application. For example, is a design that collects users' private data in order to provide better services to them consistent with promoting good and not harming human beings? Such issues require sufficient ethical discussion, and certain restrictions on practice, to allow technology to develop sustainably and in a controlled manner. Another example is the programming of autonomous vehicles: should the response to a sudden emergency be designed to avoid pedestrians who suddenly cross the road, at the risk of hitting an obstacle and injuring the occupants of the car, or to protect the safety of the occupants, at the risk of harming innocent passers-by? This involves the classic ethical dilemma of the “trolley problem”. In 2016, a product leader at Mercedes-Benz responded to such questions from the media with “protect the occupants of the car as the priority”, which is understandable for a manufacturer; otherwise, who would buy a car that does not protect them? However, it was not a responsible decision, and it caused an outcry: because the consumer group of Mercedes-Benz cars belongs to the wealthy class, does this mean that wealthy people can make the final decisions, which is unfair to the poor? So this is not only an ethical issue but also relates to the social acceptance of products, and it needs to be discussed in depth. It has also been proposed that, as AI's autonomous decision-making capability increases, ethical algorithms should be embedded in the algorithmic system to increase the reliability and security of AI decisions. Three approaches have been considered: a top-down approach, a bottom-up approach, and a hybrid approach. Improving laws and regulations related to AI is the fundamental guarantee for the implementation of this ethical reshaping. 
Only by clarifying responsibilities and providing norms at the legal level can ethical constraints be made practical and feasible. At present, there are no unified quality standards, access systems, evaluation systems or guarantee systems for the application of AI in the medical field, and the related policy and regulatory system has not yet been completely established. In addition, the algorithms of medical AI are based on pre-existing human experience, and medicine itself is inherently risky and uncertain; therefore, no matter how scientific AI is, there is always the possibility of mistakes. Whether existing laws and regulations can be applied to attribute responsibility for medical disputes caused by medical AI is an important issue at both the legal and practical levels. In the previous section, we elaborated that existing medical AIs are not moral agents: they do not have the ability to think and make decisions independently and cannot be considered duty bearers. Humans should be responsible for AI. In order to use AI well, we need to divide the responsibilities of the different actors. First, we can examine whether doctors made operational errors when using AI. If the doctor erred in operation, the doctor and the medical institution are responsible. An AI robot's participation in diagnosis and treatment is predicated on the approval of the medical institution where it is located, and if a doctor causes damage, the medical institution can recover compensation from the doctor after taking responsibility. It is also necessary to review whether the medical institution provided training for doctors in AI use, in order to evaluate the extent of its liability. In the second scenario, the doctor made no improper use of AI, and the AI itself is faulty. 
In this scenario, the responsibilities of AI researchers, designers, and manufacturers must be divided according to the problematic aspect of the AI, such as data labeling, program design, or product quality. At the same time, doctors are not exempt from liability, because they are the main actors in diagnosing patients. At the current level of medical AI development, doctors are still in a supervisory position and should not let machines make final decisions without their permission. Besides, current medical AI falls under the category of medical devices, and both the department that approves AI for marketing and the medical institution that introduces AI into clinics need to consider whether there are loopholes in the process and in risk control. In the third scenario, the people involved are scrupulous in their duties but still cannot prevent the medical AI from making an incorrect diagnosis that leads to the patient's misfortune. There is no clear evidence of who is responsible, or we cannot attribute responsibility to any individual; there may be an empty field of responsibility. Floridi proposed the principle of “faultless responsibility”, meaning that no one is at fault, but all are still responsible. He suggested developing a mechanism that moves away from the intentions and perceptions of each individual agent and instead allows these agents to act as a network that shares risk and responsibility. Legal experts concerned with AI governance, however, criticize ethical principles as flawed and inadequate for addressing AI's ethical and social issues. Some companies are keen to propose ethical standards rather than binding rules, for apparent reasons: there is no substantial penalty if they change or disregard ethical standards. 
Among regulatory responses, China's Deep Learning Assisted Decision-Making Medical Device Software Approval Points further subdivides data sets into training sets, validation sets, and testing sets, and specifies different acquisition requirements for each; it also sets requirements for the access qualification, selection, training, and assessment of data trainers. Current AI technology essentially obtains data by measuring the real world, extracts algorithmic models from the data, and uses the models to make relevant predictions. Therefore, data and algorithms are the basis of AI computing and decision-making. The utilization rate of big data in healthcare is low: although the data in hospitals are enormous, most are unstructured and cannot bring out the value of “big data”. Many hospitals have not yet established a unified data management system, which hinders unified analysis of data and the application of AI technology in the medical field. Many countries have incorporated quality management of training data and of data trainers into their regulatory frameworks to ensure data quality; China's Approval Points is one example. Second, on the sharing of healthcare data, the main obstacle is the ownership of data. There are several views of data ownership in academic circles: ownership by individuals, ownership by organizations such as enterprises, ownership by the state, and ownership by all human beings. The debate does not only concern who owns data, but also whether there should be a notion of ownership at all; Macnish and Gauttier, for example, question the ownership framing, and instruments such as the General Data Protection Regulation (GDPR) approach the problem by granting data subjects rights over their data rather than property. Reducing AI bias is necessary to promote better and more equitable health outcomes. 
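The training/validation/testing subdivision mentioned above can be sketched with a minimal, generic split routine (the proportions, seed, and record names are illustrative assumptions, not requirements taken from the Approval Points):

```python
import random

def split_dataset(records, train=0.7, val=0.15, seed=42):
    """Shuffle and partition records into train/validation/test subsets.
    Proportions here are illustrative, not regulatory requirements."""
    rng = random.Random(seed)        # fixed seed keeps the split reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],                      # training set
            shuffled[n_train:n_train + n_val],       # validation set
            shuffled[n_train + n_val:])              # testing set

# Hypothetical case identifiers standing in for annotated medical records.
cases = [f"scan_{i:03d}" for i in range(100)]
train_set, val_set, test_set = split_dataset(cases)
print(len(train_set), len(val_set), len(test_set))  # prints 70 15 15
```

Keeping the three subsets disjoint and the split reproducible is the property regulators care about: the testing set must never leak into training, or reported performance overstates what the model will do in the clinic.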
To avoid bias, the design goal should be “ethics by design, not after a product has been designed and tested”. As algorithm explanation becomes more and more complex, we should appropriately turn our attention to algorithm transparency and traceability. It is generally accepted that algorithm transparency means that algorithm developers should disclose the algorithm's elements, including source code, input data and output results. Most scholars believe that some degree of algorithmic transparency should be guaranteed by law, and various international documents also stipulate the principle of algorithmic transparency, such as the Ethics Guidelines for Trustworthy AI issued by the European Union (EU) and the Principles for Responsible Stewardship of Trustworthy AI proposed by the G20. Although algorithm transparency is not equal to algorithm explainability, it forms a powerful deterrent and encourages more diverse subjects, such as medical institutions, insurance companies, and social security institutions, to participate in supervision, which greatly compensates for the limited capacity of regulatory authorities. Some scholars suggest that disclosure of algorithm source code to relevant subjects should be set as a legal obligation for companies, to improve the post-marketing regulatory system of medical AI. We have explained AI's current lag in laws and regulations above. As a precursor to and effective supplement of the law, ethical review should run through the whole process of the design and use of AI. The risks and benefits of AI products should be thoroughly assessed and supervised by relevant organizations. First, the government should establish an AI ethics committee to oversee the direction of AI development and make corresponding changes and additions to previous systems, rules or laws and regulations based on supervision, inspection and evaluation results. 
All companies should have the design and manufacture of robots reviewed and approved by the relevant institutional ethics review committees, and programs with serious risks should be further ethically justified, reviewed and approved by higher-level ethics committees to ensure that their risk-benefit ratios and respect for persons meet the requirements of ethical principles. Secondly, medical AI will belong to the category of medical devices for a considerable period of time, and its main function is to assist doctors in diagnosis and treatment; therefore, medical AI should be regulated within the framework of medical devices. As of 2020, the U.S. Food and Drug Administration (FDA) had approved a total of 222 AI medical device products, and Europe had approved a total of 240 AI medical device products with European conformity certification. Third, algorithms may be continually updated beyond their initially approved clinical function, which may require particular policies and supervision. Regulatory agencies must develop standard procedures, including effective post-sales monitoring mechanisms through which developers can document the development of their AI medical device products. The challenges and risks facing medical AI are multifaceted, wide-ranging, and intertwined. Therefore, the governance of healthcare AI requires the cooperation of multiple parties, including governments, professional communities, research institutions, healthcare facilities, the public, and the media. The professional community includes AI experts, medical experts, ethicists, and legal experts. All parties need to assess medical AI's risks and social impacts before, during, and after AI application. The government should research and collect multiple opinions before formulating policies and laws. 
In the past, scientific and technological work was often the result of scientists setting up projects, relevant departments or enterprises providing money, the government approving them, the public being affected without a say, and humanities and social science experts cleaning up the mess. What needs to be done instead is to involve humanities and social science experts upstream in the decision-making process, so that they understand the background and goals of the research. Experts from other disciplines, such as the social sciences, law, and ethics, should be brought in to collaborate, so as to understand the attitudes of non-scientist groups and the possible ethical, legal, and social consequences of the work. The government should attract public representatives to participate in decision-making and establish monitoring and feedback channels. The professional community should try to propose and reach consensus on ethical norms and governance of medical AI through adequate discussion, and form industry norms. Technicians should strengthen ethical self-discipline and reflect ethical value orientations in research and development. Many scientists already attach great importance to the ethical issues of AI, but it is still essential to strengthen relevant training and education. Doctors should also be involved in the research and development process of medical AI, to improve the medical literacy of AI developers and the AI literacy of doctors. Trust in AI will improve through a more transparent development process and a better understanding of algorithms and AI functions. For AI companies, the vital thing is to take social responsibility and take effective measures to prevent ethical risks, rather than unilaterally pursuing economic interests. 
Only with the participation of all relevant sectors of society and multiple parties can an ethical and publicly acceptable medical AI be developed. The challenges posed by medical AI are global, and its value goal is based on the fundamental interests of all human beings; therefore, it is necessary to strengthen international cooperation and communication. International cooperation also faces many obstacles, such as differences in cultural and legal systems among countries, which may lead to different attitudes and positions toward medical AI. Through sufficient discussion and communication, we can distill common themes and differentiated expressions, and establish a sound ethical governance system for medical AI that fits the actual situation of each country, taking into account its own conditions and drawing on advanced foreign experience. In this paper, we explored the factors that affect the trustworthiness of medical AI, including poor data quality, algorithmic bias, opacity, safety and security risks, and difficulty in responsibility attribution. We proposed that ethical values should first be considered to guide AI development, with the promotion of human health and well-being as the fundamental goal. At the legal level, we clarified that medical AI does not have moral status at this stage, and humans remain the responsibility bearers. We tried to improve AI legislation by clarifying the attribution of relevant responsibilities based on existing laws. At the level of specific risk management, we proposed countermeasures such as strengthening data quality management, data security, and privacy protection, promoting data set sharing, increasing algorithm transparency and traceability to reduce algorithm bias, and regulating and reviewing the whole AI process, including design, production, marketing, and after-sales. 
Multiple parties should also be encouraged to participate in discussing and assessing AI risks and social impacts, and strengthen international cooperation and communication to address related challenges jointly."}
+{"text": "Artificial intelligence applications are prevalent in the research lab and in startups, but relatively few have found their way into healthcare provider organizations. Adoption of AI innovations in consumer and business domains is typically much faster. While such delays are frustrating to those who believe in the potential of AI to transform healthcare, they are largely inherent in the structure and function of provider organizations. This article reviews the factors that govern adoption and explains why adoption has taken place at a slow pace. Research sources for the article include interviews with provider executives, healthcare IT professors and consultants, and AI vendor executives. The article considers differential speed of adoption in clinical vs. administrative applications, regulatory approval issues, reimbursement and return on investments in healthcare AI, data sources and integration with electronic health record systems, the need for clinical education, issues involving fit with clinical workflows, and ethical considerations. It concludes with a discussion of how provider organizations can successfully plan for organizational deployment. The potential of artificial intelligence to transform every aspect of medicine and healthcare is real. It’s vital for healthcare industry leaders who are embarking on this AI journey to understand and maximize its benefits. However, it is difficult to understand the potential maturity of a technology when there is both substantial hype and skepticism about the application of AI to human health. This difficulty is compounded because AI is not a single technology but several, encompassing diverse capabilities and applications. While there are rapidly growing numbers of AI innovations in healthcare research labs, relatively few have yet been fully deployed in provider organizations. 
Healthcare is different from most industries in the extent to which it must rely on public scientific methods to introduce new products and practices. A significant regulatory machinery exists, e.g., at the FDA, to ensure that scientific rigor is followed, and most patients appreciate this conservative approach to new treatments. Studies to determine the clinical utility of incorporating AI into clinical practice will take years: to conduct each study, to publish the results, for the medical community to accept the results and alter clinical practice, and for payers to approve reimbursement. The development and introduction of most consumer-oriented AI products and services, such as driving assistance and autonomy, do not undergo this degree of public scientific rigor. Therefore, adoption of AI in healthcare has been slower than in several other industries, although some types of AI use cases are further along in the adoption process than others. Healthcare providers face the issue of how to accelerate the deployment of AI and overcome barriers to adoption. In this article we describe the key factors that govern AI adoption in provider organizations, and discuss how provider executives can speed adoption processes if desired. While clinical applications of AI are perhaps more exciting, administrative applications—improving payment processes, limiting fraud, or scheduling operating rooms more efficiently—are likely to be much easier to implement. Better and less expensive healthcare administration through AI is currently within reach. Administrative processes for AI adoption aren’t subject to regulatory approval, and the consequences of errors resulting from AI-based decisions are much less problematic in administrative applications than in those that impact a patient. 
When the government is the payer, relevant administrative applications have to comply with its prescribed reimbursement processes, but for internal administration, providers are free to employ AI in any way that benefits them. In addition, the economic return from administrative AI is more under the control of the health system than with clinical applications, which generally require that payers and regulators are also involved.Many provider institutions\u2014particularly in the U.S. but in other countries as well\u2014are already applying AI for administrative purposes. They work directly with payers, for example, to smooth and speed claims or prior authorization processes. They look for ways to identify patients who need help paying their medical bills\u2014sometimes even in advance of incurring them. They use AI to ensure proper disease coding on bills, or to make appointment scheduling easier for patients.What is typically required for administrative AI applications to be deployed is similar to administrative AI in other industries. The application has to be effective, leading to better decisions or higher productivity. It must be integrated with existing systems and processes, which may be easier if the AI application is procured from an existing vendor. There may also be training and upskilling necessary for those who will use the AI system.AI for clinical purposes\u2014specifically, diagnosis, treatment, and monitoring\u2014is eventually going to impact every healthcare organization in one or more of these categories as vendors incorporate these capabilities into existing products or develop new ones. Some applications will need regulatory approval depending on the extent to which they are directly involved in patient care. The U.S. Food and Drug Administration classifies certain applications of AI as \u201csoftware-based medical devices\u201d and has regulated them accordingly through several different pathways. 
As of mid-2022, the FDA has approved almost 350 such applications. However, regulatory clearance alone can\u2019t guarantee that an AI-based application will always work as billed in clinical use. A 2021 commentary article recommended that clinicians be able to answer the following questions when considering adopting AI: (1) What is the scope of products that are available for my intended use? (2) How were the models trained and how were they validated? (3) Once purchased, will an AI application perform as expected in my practice, and how can I monitor the performance of the model after deployment? Healthcare providers around the world must worry about how to pay for any innovation in healthcare, including AI. In the best case, innovations pay for themselves, allowing providers to offer better care at the same cost, or to offer the same quality care at lower cost. Some AI-based innovations may fit this best-case scenario, but many will require payer approval and reimbursement for providers to afford to adopt them. In the UK, the National Health Service announced in 2019 that it would begin to reimburse for AI-based care in 2020 to incent more rapid adoption, though details have been sketchy. In China, in part because of the COVID-19 pandemic, the Chinese National Health Commission approved reimbursement for online consultations using AI and other digital tools in 2020. China has seen massive growth in the use of AI for general practitioner advice, which can determine whether a face-to-face consultation is required. We could find no evidence that some of the more advanced image detection use cases are reimbursed in China, although there are plenty of startups in that space. At this writing fewer than ten AI-based applications\u2014including one for diagnosing blood clots in the brain and another for diabetic retinopathy\u2014have been approved for reimbursement by the U.S. Centers for Medicare and Medicaid Services (CMS), which pays for about half of U.S. 
healthcare. As healthcare moves to value-based payment models, which require providers to support their patients\u2019 health rather than simply providing health care for particular illnesses or medical issues, reimbursement for AI-based innovations may become more common. However, the movement to value-based care is very slow. The COVID-19 pandemic has made it even slower as providers focus on very short-term measures to rein in costs and meet immediate patient needs. Patient volume in most health systems has also not recovered to pre-pandemic levels. When value-based care does become a reality, provider organizations will need to understand and manage their patient populations in new ways, and AI-enabled decisions may be their best route to doing so. Today, however, many provider-based clinical uses of AI are experimental. They are neither approved by the FDA or another regulatory body nor approved for reimbursement by payers. Few generate a high level of productivity improvement. Therefore, they provide little return on investment. As a result, the provider organizations that currently support extensive AI development are likely to be large, research-focused, and relatively wealthy. Data is the fuel of AI, and is required to train machine learning models. Despite some progress over the past couple of decades, healthcare data is generally still as fragmented and siloed as the healthcare system that creates it, at least in the U.S. Most hospitals and group medical practices have their own EHR data and little else. Unless they are also providers, payers generally have only claims data, although some are partnering with providers to get access to their EHR data. It is extremely rare to have all a patient's healthcare data\u2014across all providers and payers\u2014available in one easily accessible repository. 
That means that data used to train machine learning models will of necessity be limited and will probably not encompass all of a patient\u2019s interactions with the healthcare system. Even within a particular institution, data scientists or engineers will often need to spend considerable time integrating and curating data. Some national healthcare systems have a common EHR system, which makes it relatively straightforward both to gather data to train models and to integrate new AI-based scoring systems into clinical practice. For example, the U.K.\u2019s NHS, which doesn\u2019t have an overall common EHR system but does have one for general practitioners, has created and deployed an \u201cElectronic Frailty Index\u201d from EHR data. The machine learning model creates a score for elderly patients that is integrated within the EHR system. If the GP sees a patient with a severe or moderate frailty index, special care measures are mandated or recommended. Limited data integration does not impact all clinical AI algorithms. AI methods directed at interpreting radiology images, for example, do not require the integration of a broad range of EHR data. However, exciting AI opportunities, such as comparative effectiveness determination and understanding the factors that increase the risk of disease, will be hobbled by poor interoperability. Moreover, as the range of health-related data increases to include, for example, social determinants of health and wearable sensors, limited data integration will become increasingly problematic. The potential value of AI may drive better data standards, integration, and sharing over time. Clinicians will need substantial education in AI to use it effectively in clinical practice. Medical schools have yet to integrate AI across the curriculum. Although it\u2019s early days for this type of training, some courses are beginning to be offered, particularly in a few AI-oriented specialty areas. 
For example, the Radiological Society of North America has announced an imaging AI certificate that radiologists can earn online. However, such programs are still relatively rare. They are largely absent in other fields where AI is increasingly capable of image analysis, such as pathology and ophthalmology. Clinicians may resist using AI systems that don\u2019t fit well into clinical workflows. This complaint has been leveled against EHRs in general, but they are so critical to modern medical practice that most physicians use them anyway. If AI systems require separate systems, apps, APIs, or logins, they are much less likely to be adopted. Since EHRs have become the dominant technology that structures clinical workflows, AI will need to be integrated into those systems to be widely deployed with any success. Some specialties involve tasks and workflows that are more conducive to AI use than others. A McKinsey study, for example, classified \u201cautomatable hours\u201d in clinical roles in an analysis of the impact of AI on healthcare jobs in Europe. Empathy and understanding of mental health (\u201cinterfacing with stakeholders\u201d tasks) were deemed unlikely to be automated in that study; psychiatrists had the fewest automatable hours among physicians. However, for some conditions, intelligent, emotion-oriented chatbots that employ cognitive behavior therapy may be able to help patients, particularly since there is a serious shortage of mental health professionals in many countries. Specialists such as radiologists and pathologists who do not normally see patients in person may be more affected by AI. Image interpretation is a substantial component of their jobs, and they often communicate with patients and other physicians through reports that could be automatically generated. 
However, these specialists do perform a number of tasks that are not likely to be automated soon. Clinical professionals whose primary focus is caring for patients across a broad spectrum of needs, such as nurses, seem unlikely to be greatly affected by AI. Those who primarily provide diagnosis and advice, such as physicians, seem more likely to be affected. Perhaps the greatest impact from AI and related automation capabilities will involve administrative workers in healthcare rather than clinical ones. Ethical AI is a concern for all industries but a greater one for healthcare. The ethical principles developed by the World Health Organization in 2021 for AI use in healthcare address such issues as protecting human autonomy; ensuring transparency, explainability, and intelligibility; and fostering responsibility and accountability. Complying with such principles, however reasonable they seem, will not be easy or even possible for many AI systems. Most deep learning models for radiological image analysis, for example, are today neither transparent nor explainable. Provider organizations leading in the adoption of AI have begun to specify their ethical principles, but few have created a management structure to ensure that all AI applications\u2014those developed in-house as well as those from vendors\u2014comply with the principles. We expect that close adherence to ethical considerations will slow down the development and adoption of clinical AI applications. The advantage of a deliberate pace for AI adoption is that it gives healthcare organizations time to plan and adapt. Positioning a provider organization for success depends on several factors. Certainly AI adoption will move faster in organizations that declare adoption to be a strategic priority than in organizations that view it as a novelty or niche technology. 
Some research and innovation-focused providers, such as Mayo Clinic, have many AI projects underway and have created new organizational roles and structures to facilitate the growth of AI across the organization. In terms of other attributes, organizations that have deployed core transaction applications, such as electronic health records and revenue cycle applications, will be better positioned to incorporate AI into the workflow. This transformation will also be simpler and faster in organizations that have a base of applications from one vendor across the enterprise than in those with applications from multiple vendors. Provider organizations are likely to adopt AI more quickly and smoothly if they already know how to move new technologies from pilot to broad deployment and manage the accompanying workflow and/or cultural changes. AI technologies are changing rapidly, but healthcare processes and professionals change much more slowly. The value of the technology, however, is sufficiently great that all substantial healthcare providers should begin to evaluate AI technologies and consider how they might help to transform clinical and administrative processes."}
+{"text": "Artificial intelligence (AI) has been described as one of the most effective and promising scientific tools available to mankind. AI and its associated innovations are becoming more popular in industry and culture, and they are starting to show up in healthcare. Numerous facets of healthcare, as well as regulatory procedures within providers, payers, and pharmaceutical companies, may be transformed by these innovations. As a result, the purpose of this review is to identify potential machine learning applications in the field of infectious diseases and the general healthcare system. The literature on this topic was extracted from various databases, such as Google, Google Scholar, PubMed, Scopus, and Web of Science. The articles containing important information were selected for this review. The most challenging task for AI in such healthcare sectors is to sustain its adoption in daily clinical practice, rather than whether the programs are scalable enough to be useful. Based on the summarized data, it has been concluded that AI can assist healthcare staff in expanding their knowledge, allowing them to spend more time providing direct patient care and reducing fatigue. Overall, we might conclude that the future of \u201cconventional medicine\u201d is closer than we realize, with patients seeing a computer first and a doctor afterward. Pathogenic microorganisms such as parasites, bacteria, viruses, and fungi cause infectious illnesses, which may be symptomatic or asymptomatic. Specific infectious conditions, for example, human immunodeficiency virus (HIV) infection, may be relatively asymptomatic but, if left untreated, may have catastrophic effects after a few years. AI has been described as one of the most effective and successful scientific methods available to humanity among currently existing tools. 
AI and its related advances are becoming more widely used in business and society, and they are starting to show up in medical treatment. Many facets of healthcare, as well as the regulatory procedures within providers, payers, and pharmaceutical companies, may be transformed by these innovations. Various trials have also demonstrated that AI may perform as well as or better than humans in essential medical activities such as the diagnosis of diseases. In terms of identifying cancerous cells and guiding researchers in assembling cohorts for costly clinical trials, algorithms are already outperforming radiologists. However, we believe it will be decades before AI replaces humans in a range of medical procedures, for a variety of reasons. Although AI is poised to make a significant impact in healthcare, there are a few ethical problems to consider when putting these systems in place and making decisions about them. Accountability and transparency in such systems\u2019 decisions, the risk of group harm due to algorithmic bias, and the professional duties and integrity of clinicians are only a few of the ethical concerns. As a result, when employing such programs, it is critical to think about and analyze the possible benefits of high-quality healthcare delivered with precise and cost-effective intelligent computation at very low cost. Furthermore, AI algorithms are capable of performing predictive analysis by filtering, transforming, and searching for patterns in massive databases from several sources in order to reach rapid and accurate conclusions. Therefore, in the present review, we discuss how AI can help with various aspects of healthcare, as well as some of the barriers to AI\u2019s accelerated acceptance in healthcare, as described in [6]. AI is a group of technologies rather than a single technology. 
Many of these technologies are already operating in the healthcare sector, but the specific procedures and functions they support vary widely. Some of the most important AI healthcare technologies are described below. Machine learning, which underlies deep learning and neural networks, is a mathematical way of fitting models to data and teaching them to \u2018learn\u2019 by training them with data [9,10]. According to a 2018 Deloitte survey of 1100 US managers whose organizations were already exploring AI, 63 percent of the companies surveyed were employing machine learning in their operations. Since the 1950s, AI researchers have sought to understand human language. Speech recognition, text analysis, translation, and other language-related applications are all examples of natural language processing (NLP) applications. There are two types of NLP: statistical and semantic. Statistical NLP is based on machine learning and has contributed to recent improvements in recognition accuracy; it requires a large \u2018corpus\u2019, or body of language, from which to learn. The most common use of NLP in healthcare involves the creation, understanding, and classification of clinical documentation and published research. NLP systems can analyze unstructured clinical notes on patients, prepare reports, record patient interactions, and conduct conversational AI dialogue [30,31,32]. Robotic process automation (RPA) performs structured digital tasks, such as those involving information systems, as if a human user were following a script or set of rules. It is less expensive, easier to configure, and more transparent in its behavior than other types of AI. RPA uses computer programs that run on servers rather than physical robots; it employs a combination of workflow, business rules, and integration with the \u2018presentation layer\u2019 of information systems to act like a semi-intelligent user of those systems. 
RPA is used in healthcare to perform repetitive tasks such as prior authorization, updating patient records, and billing. When combined with other technologies such as image recognition, it can be used to extract data from faxed images, for example, and feed it into transaction systems. Scientists often believe the terms explainability and interpretability to be interchangeable, but they have practical distinctions [35]. There are various management applications of AI in healthcare. In this domain, the use of AI has less flexibility than in patient care but can provide significant efficiency gains. This is needed in healthcare because the average US nurse, for example, spends 25% of her time on the job on administrative duties. MYCIN, developed at Stanford in the 1970s, was an early AI system used to detect blood-borne bacterial infections. Patient involvement and compliance have long been considered the \u201clast mile\u201d barrier in the healthcare industry, the last line of defense between poor and good health outcomes. The more patients take an active role in their health and care, the better the outcomes\u2014utilization, cost, and member experience. In a study of more than 300 healthcare executives and policymakers, more than 70% of those who responded claimed that fewer than half of their patients were actively engaged, and 42% said that less than a quarter were deeply engaged. The concern that AI might lead to process automation and major job losses has received a lot of press, as per a Deloitte\u2013Oxford Martin Institute collaboration. Bias is not a new problem, but one \u201cas old as human civilization\u201d; it is human nature for most members of a dominant group to ignore the experience of other groups. However, AI-based decision-making has the potential to magnify existing biases and create new categories and conditions of bias. 
These ever-increasing concerns have led to the re-evaluation of AI-based programs and to new approaches that address the impartiality of their decisions. The latest techniques for handling bias in AI-based decision-making systems, as well as open challenges and guidelines for AI solutions for the public good, are discussed below. Bias can be divided into three broad categories. First, biases formed in society and embedded in our sociotechnical systems manifest themselves in the data used by AI algorithms; these biases can be modeled and formally defined. Second, pre-processing, in-processing, and post-processing strategies address bias at different stages of AI decision making, focusing on input data, learning algorithms, and model results, respectively. Third, bias can be introduced either upstream, through biased data collection, or downstream, through the human interpretation of AI judgments. We know that bias and prejudice are not limited to AI and that technology can be used (consciously or unconsciously) to reflect, enhance, or distort the real-world perspective. As a result, it is naive to believe that technological fixes will suffice, because the core causes of these issues are not limited to technology. To ensure the long-term well-being of all parties, more than technological solutions are needed, including socially acceptable definitions of fairness and reasonable interventions. Fairness is critical in machine learning and AI. In healthcare, data processing must produce sound, communicable decisions. The increasing growth in clinical data has added to the stress of healthcare employees\u2019 jobs, limiting their capacity to offer high-quality and efficient care. Healthcare organizations should reconsider their tactics to ensure that staff are fully satisfied and supported in their work. 
The use of AI has the potential to improve operator performance. The use of AI in healthcare is not new, but it has made great strides in recent years. This has been made possible in part by substantial advancements in big data analysis, which have been supported by increasing access to healthcare data. When used in conjunction with proper analytical methodologies, such as machine learning tools, AI has the potential to alter many aspects of healthcare. AI may alter the function of healthcare providers and, as a result, the interaction between them and their patients. As automation grows in power, there is fear about the future, and there is also concern that increased technological output could render some healthcare services obsolete. Administrative duties, data extraction from health records, treatment plan design, and consulting are just a few of the applications where AI is applied. Some time-consuming repetitive processes can be performed quickly and effectively using AI. This allows healthcare practitioners to dedicate more time to treatments that are tailored to the clinical conditions and demands of their patients. Furthermore, AI enables healthcare providers to oversee the care of large groups of patients. The adoption of AI-enabled tools in nursing has been shown to enhance productivity by 30\u201350 percent. Work stress, which impacts the quality of care and patient outcomes, accounts for a large portion of the burden on staff [69]. AI systems have the potential to improve diagnostic and treatment decisions while reducing medical errors. In the fields of medical imaging and diagnostics, AI has made great progress. Deep learning techniques aid in the prevention of diagnostic errors and the improvement of test results. For example, AI has been shown to improve clinical imaging investigations in the detection of cancer and diabetic retinopathy [75]. 
The current status of healthcare necessitates cooperation and collaboration between healthcare providers. To promote collaborative decision making, coordinated activities, and progress tracking, excellent communication is required. AI may combine data from a variety of formal and informal sources to provide integrated, quick, and consistent access to patient data across numerous settings and institutions. Chatbots have been used to arrange and coordinate therapy sessions, provide reminders, and, in some cases, brief physicians on a patient\u2019s condition based on reported symptoms. AI has the potential to significantly improve the quality and efficiency of healthcare, resulting in increased productivity, provider satisfaction, and user experience, as well as better outcomes. Policymakers, industry, healthcare providers, and patients must all face new obstacles in order to fully grasp AI\u2019s potential. Traditionally, clinical decision making has been the purview of licensed healthcare specialists. Because AI is frequently utilized to aid with clinical operations, AI decision support systems may affect the professional obligations of healthcare practitioners toward each patient. Given that AI can reach incorrect conclusions, the legal liability for AI-assisted decisions is frequently unclear. This is complicated further by the fact that developing relevant legal concepts and guidelines takes longer than developing technological capabilities. Another fear is that overreliance on AI may prevent healthcare providers from double-checking results and challenging inaccuracies. The skills and competence required by healthcare providers are anticipated to alter as a result of the introduction of various new technologies. In some cases, AI may be able to perform tasks that humans previously performed. Furthermore, as AI progresses in healthcare, new skill sets, such as informatics, may become more in demand. 
To satisfy the needs of the labor market, education and training programs will need to be adjusted. There are also concerns that AI systems will be used to justify the hiring of low-skilled personnel. If technology fails and staff are unable to spot the mistakes or accomplish needed duties without the assistance of computers, this might be troublesome [79]. Finally, AI\u2019s application in healthcare creates a slew of legal issues. Humans used to make almost all healthcare decisions, so having smart technologies produce or assist with them raises issues of accountability, transparency, consent, and privacy. Computer ethics is an important branch of ethics that began to emerge in the late 1950s and early 1960s, arising from the introduction of computers and the moral questions they raised. Computer ethics concerns the ethical implications of the existence and use of computers. In healthcare, AI raises many ethical issues. The first is that of AI\u2019s moral responsibility. A moral obligation is an obligation to accept responsibility for one\u2019s actions. Some may argue that AI can bear no moral obligation because it is not sentient. It is important to note, however, that AI systems may still carry moral responsibility: a computer program used in medical examinations has no emotions, yet its use still carries moral obligations. 
The second moral issue is the responsibility of the AI developer, who must ensure that AI is able to meet people\u2019s needs. The third is the responsibility that comes with using AI: it is our responsibility to ensure that AI is not used for unethical purposes. The fourth concerns responsibility toward the people affected by AI: AI must not have a negative impact on a particular group of people or on society. The fifth is the responsibility to ensure that AI is not used in ways that infringe on the rights of others. The sixth is the responsibility associated with the ethical principles used to guide AI design; these principles help AI developers ensure that AI works as intended. The WHO provides the following principles as the basis for AI control and governance, in order to limit the risks and maximize the potential of AI in healthcare: Protecting human autonomy means that people should govern healthcare systems and medical decisions; privacy and confidentiality must be safeguarded, and patients must give informed consent under proper legal frameworks for data protection. Information-sharing agreements could be utilized to give institutions access to patients\u2019 health information. Promoting human well-being and safety means that regulatory standards for the safety, accuracy, and efficacy of well-defined use cases or indications must be met by AI technology designers, and that measures to increase the quality of AI use and to monitor performance in practice should be in place. Ensuring transparency requires that sufficient material be published or written prior to the design or deployment of AI technology. 
Such data should be readily available to allow for genuine public participation and debate regarding how the technology is built and how it should or should not be utilized. Fostering responsibility and accountability means that although AI technology is capable of performing certain functions, it is the responsibility of stakeholders to ensure that it is used under the right conditions and by properly trained people. Individuals and groups affected by algorithm-based decisions should have access to effective mechanisms for questioning and redress. Ensuring inclusiveness and equity requires that AI for health be designed to promote the broadest possible equitable use and access, regardless of age, gender, sex, income, race, ethnicity, sexual orientation, ability, or other characteristics protected by human rights. Promoting responsive and sustainable AI means that designers, developers, and users should evaluate AI systems regularly and publicly to see whether they respond adequately and effectively to expectations and requirements. AI algorithms should also be designed to minimize environmental impact and to use as little energy as feasible. Governments and businesses should plan for potential workplace disruptions, such as the training of healthcare personnel to adapt to AI systems and the potential loss of jobs to automated systems. These principles will guide future WHO efforts to guarantee that AI\u2019s full promise for healthcare and public health is realized. In today\u2019s world, AI has exploded in popularity. The use of AI as a tool could help to lower the risk of death, environmental damage, and societal harm, as well as enable more intelligent responses to disasters. We believe that AI will play a significant role in the healthcare industry. Precision medicine, widely recognized as a much-needed advance in healthcare, is fueled by this capability. Though early attempts at diagnosis and therapy guidance proved challenging, we expect that AI will eventually master this domain as well. 
Thanks to significant improvements in AI for imaging science, many radiology and pathology images are projected to be evaluated by a machine at some point. Voice and text analytics are increasingly used for tasks such as patient communication and diagnostic report capture, and this pattern is expected to continue. The most difficult challenge for AI in such health domains is sustaining its acceptance in ordinary clinical practice, rather than whether the technologies are competent enough to be effective. For universal acceptance to occur, AI programs should be licensed by governing bodies, integrated with EHR systems, standardized so that similar products function in the same way, trained by physicians, financed by either public or commercial payers, and updated in the field over time. These obstacles will eventually be overcome, but doing so will take longer than the maturation of the technology itself. As a result, within the next five years, we expect to see minimal AI application in clinical practice, with more widespread use over the following decade. AI can increase the productivity and efficiency of care delivery and allow healthcare systems to provide more and better care for more people. Compared to previously reported articles on AI, this review focused on the applications of AI in healthcare systems, especially in the diagnosis and treatment of infectious diseases. We believe that AI will play an important role in future healthcare delivery. It is a critical capability in the development of precision medicine, which is universally recognized as a much-needed advance in healthcare. Although early attempts to provide diagnostic and therapeutic advice proved difficult, we believe AI will eventually master this area. Given the rapid development of imaging techniques, it appears that most radiology and pathology images will be analyzed by machines at some point. 
Speech and text recognition are currently in use for tasks such as patient communication and clinical documentation, and they will continue to grow in popularity. The most difficult challenge for AI in various healthcare settings is assuring its availability in day-to-day clinic operations, not whether the technology will be useful. To achieve widespread adoption, AI programs must be approved by regulators, integrated with EHR systems, standardized so that identical products function identically, trained by physicians, paid for by public or private organizations, and updated in the field over time. These obstacles will be solved eventually, but it will take longer for the technology to evolve. As a result, we anticipate modest AI applications in clinical practice during the next five years, followed by widespread adoption over the next decade. It is also evident that AI algorithms will not replace human doctors on a large scale but will instead augment their efforts to care for patients. Human physicians may eventually shift toward tasks that require specialized human skills such as empathy, persuasion, and big-picture integration. Those healthcare providers who refuse to work alongside AI may be the only ones who lose their jobs over time."}
+{"text": "As the pandemic continues to spread worldwide, many healthcare facilities are exploring new methods to keep their patients safe from potential hospital-acquired infections (HAIs). This study explored the attitudes about artificial intelligence (AI) among providers who utilized an AI-based hand hygiene monitoring system (HHMS) at a rural medical center during the pandemic. A self-administered questionnaire was mailed to 48 healthcare providers at a rural medical center in north Texas, with a 75% response rate (n = 36). The survey collected information on providers\u2019 attitudes about AI-based HHMS use. In addition, the study examined the relationship between providers\u2019 well-being and their level of satisfaction with the AI-based HHMS. The lessons learned from this study will be used to determine important factors to consider when attempting to advance and expand AI technologies in rural healthcare settings. Results revealed that the integration of AI technology within the existing electronic health record (EHR) system remains a challenge for many providers. In addition, the lack of user-centered design approaches to incorporate the AI tool into existing workflows reduced providers\u2019 satisfaction with the new technology. The findings suggest that although AI technology holds great promise to reduce the number of hospital-acquired infections (HAIs), successful implementation of an AI-based tool that meets the expectations of users requires significant integration work to ensure that it fits within existing workflows and is accepted by users.\u2022\u2002AI application indirectly affects the well-being of providers, particularly in rural healthcare settings.\u2022\u2002A better AI application interface design that meets the expectations of providers is needed."}
+{"text": "Phase change memory (PCM), owing to its advantages in capacity and endurance, has the opportunity to become the next generation of general\u2212purpose memory. However, operation speed and data retention are still bottlenecks for PCM development. The most direct way to solve this problem is to find a material with both high speed and good thermal stability. In this paper, platinum doping is proposed to improve performance. The 10\u2212year data retention temperature of the doped material is up to 104 \u00b0C, and the device achieves an operation speed of 6 ns and more than 3 \u00d7 10^5 switching cycles. The excellent performance derives from the reduced grain size (10 nm) and the smaller density change rate (4.76%), both of which are lower than those of Ge2Sb2Te5 (GST) and Sb2Te3. Hence, platinum doping is an effective approach to improve the performance of PCM, providing both good thermal stability and high operation speed. In the past decades, rapid advances in artificial intelligence and supercomputing have driven demand for high\u2212performance memory devices. For GST, however, poor 10\u2212year data retention (~85 \u00b0C), slow operating speed (~20 ns), and a density change rate of 6.5% limit its wider application in electrical devices. The PCM device based on Sb2Te3 shows fast operation speed. However, the low crystallization temperature (<100 \u00b0C) makes the amorphous state unstable, which means that Sb2Te3 is not suitable for PCM application. Doping is a good way to improve thermal stability and speed. Some researchers have obtained high\u2212performance phase change materials by doping Sb2Te3, such as Sc0.2Sb2Te3, which achieved an ultra\u2212fast operation speed of 700 ps and data retention of ~87 \u00b0C. In this work, we have performed electrical tests on Pt\u2212doped Sb2Te3 devices and microscopic characterization of films. The PCM devices based on Pt0.14Sb2Te3 (PST) show fast operation speed, high data retention, and good endurance. 
Meanwhile, the corresponding microstructure of PST explains the origin of its high performance. Pt0.1Sb2Te3, Pt0.14Sb2Te3 (PST), and Pt0.22Sb2Te3 films were deposited by sputtering of Pt and Sb2Te3 targets. The compositions of these films were measured by energy\u2212dispersive spectroscopy (EDS). Films with a thickness of 200 nm were deposited on SiO2/Si (100) substrates for resistance\u2212temperature (R\u2212T) and X\u2212ray diffraction (XRD) tests. In situ R\u2212T measurement was conducted on a homemade vacuum heating table at a heating rate of 20 \u00b0C/min. To estimate the 10\u2212year data retention, the film was heated in a vacuum chamber at a rate of 60 \u00b0C/min to a set temperature, and the isothermal change in resistance over time was recorded. The X\u2212ray reflectivity (XRR) experiment (Bruker D8 Discover) was used to measure the density change of films before and after crystallization, and an X\u2212ray photoelectron spectroscopy (XPS) experiment was used to evaluate the bonding situation. A film of about 20 nm was then deposited on an ultra\u2212thin carbon film, and its microstructure was studied by transmission electron microscopy (TEM; Hitachi, Ltd., Tokyo, Japan). T\u2212shaped PCM devices were prepared by 0.13 \u03bcm complementary metal\u2212oxide semiconductor (CMOS) technology. The diameter of the tungsten bottom electrode is about 60 nm. A 70 nm\u2212thick phase change material layer and a 20 nm\u2212thick TiN adhesion layer were deposited by sputtering over the 60 nm\u2212diameter tungsten heating electrode. The devices were measured with a Keithley 2400C source meter and a Tektronix AWG5002B pulse generator. 
The Keithley 2400C source meter and Tektronix AWG5002B pulse generator are manufactured by Tektronix in Beaverton, OR, United States. Pt doping of Sb2Te3 can enhance the crystallization temperature of the material, and the crystallization temperature increases with more Pt. The amorphous resistance of the material first increases and then decreases with the content of Pt. This is due to the low crystallization temperature of the as\u2212deposited Sb2Te3 film and its partial crystallization, which will be confirmed by subsequent XRD experiments. Dopant atoms increase the scattering probability, so the effect of scattering is enhanced as the doping concentration increases, resulting in an increase in resistivity. However, when the doping concentration is too high, the metallicity of the material increases and the resistivity decreases. The crystallization temperature can be measured via Raman or XRD measurements and is simply approximated by the curve of resistivity; in this paper, we chose to use the R\u2212T curve to determine the crystallization temperature. In the R\u2212T diagram, the crystallization temperatures of Pt0.1Sb2Te3, Pt0.14Sb2Te3 (PST), and Pt0.22Sb2Te3 are 137 \u00b0C, 199 \u00b0C, and 236 \u00b0C, respectively, which indicates that the thermal stability of the Sb2Te3 alloy is improved after Pt doping. The resistance of the PST drops by more than an order of magnitude upon crystallization, which is enough to distinguish the ON/OFF states used in PCM storage devices. Therefore, we believe that the performance of the PST film is greatly improved. R\u2212T tests were performed on the films with different Pt compositions, as shown in the corresponding figure. The fitted activation energies (Ea) are 2.57 eV and 1.86 eV, with errors of 0.05 eV and 0.40 eV, respectively. 
The 10\u2212year data retention temperatures for GST and PST are expected to be 85 \u00b0C and 104 \u00b0C, respectively, with corresponding activation energies. We find that the 10\u2212year data retention of PST films is higher than that of most phase change materials, such as GST (~85 \u00b0C) and SST (~87 \u00b0C). The resistance ratio (RESET/SET) is about two orders of magnitude, which can meet the requirement of the ON/OFF ratio used in PCM. With a voltage pulse width of 6 ns, the PST device requires SET/RESET voltages of 1.2 V/3.8 V, whereas GST requires 4.6 V/5.5 V at a 10 ns operation speed. The PST device endures more than 3 \u00d7 10^5 switching cycles with a resistance ratio of two orders of magnitude; the switching cycles and resistance ratio of PST are better than those of Sb2Te3. These results show that Pt\u2212doped Sb2Te3 with suitable composition is a promising novel phase\u2212change material. Accordingly, T\u2212shape PCM devices based on PST were fabricated using standard 0.13 \u03bcm complementary metal\u2212oxide semiconductor (CMOS) technology, as shown in the corresponding figure. The XRD method was employed to characterize the lattice structure of PST and Sb2Te3 films at different annealing temperatures. The diffraction peak of Sb2Te3 appears in the as\u2212deposited state, indicating that the deposited Sb2Te3 has crystallized; at this point, there is no diffraction peak of PST, so the PST has not crystallized. At 200 \u00b0C, the FCC phase appears in the PST, which indicates that Pt inhibits the formation of the FCC phase and increases the crystallization temperature. When the annealing temperature is 260 \u00b0C, both PST and Sb2Te3 show only the diffraction peaks of the hexagonal phase. Compared with pure Sb2Te3 film, the diffraction peaks of the PST film become wider, the peak intensity becomes lower, and some diffraction peaks disappear. In addition, a difference in the full width at half maximum (FWHM) of the diffraction peaks is observed in the XRD curves. The grain size can be estimated according to the Scherrer formula: 
L = K\u03bb/(\u03b2 cos \u03b8), where K is the Scherrer constant (K = 0.89), L is the grain size, \u03b2 is the full width at half maximum (FWHM) of the diffraction peak of the sample in radians, \u03b8 is the diffraction angle, and \u03bb is the X\u2212ray wavelength (0.154056 nm). The FWHM of PST was significantly higher than that of Sb2Te3, indicating that the incorporation of Pt inhibited the crystal growth process and that grain refinement was obvious; a reduced grain size is ideal for the programming areas. To study the crystalline phase and grain size more intuitively, high\u2212resolution transmission electron microscopy (HRTEM) and the associated selected area electron diffraction (SAED) patterns for the Sb2Te3 film and PST films are presented. Pt doping reduces the grain size of Sb2Te3 from 50 nm to about 5~10 nm, which confirms that the FWHM of PST is much larger than that of Sb2Te3. Meanwhile, Pt is incorporated into the Sb2Te3 film without forming any new phase or structure. Crystallization usually leads to an increase in film density and a reduction in film thickness. The information on the density change upon crystallization is of paramount importance in phase change media technology since it is related to the stresses induced in the system during the write/erase cycle. The change of density before and after the phase transition of the sample was measured by XRR: the density change rates of the Sb2Te3 and GST films are 7.5% and 6.5%, respectively. The bonding situation in Sb2Te3 and PST is revealed by XPS. When the Pt atom enters Sb2Te3, if the Pt atom replaces the Sb atom and combines with the Te atom, then, since the electronegativity of Pt (2.2) is higher than that of Sb (2.05) and Te (2.12), the binding energy of Te will shift towards higher binding energy, which is consistent with the phenomenon observed in the experiment. Experiments have proved that when element B is replaced by element C and bonded with element A, if the electronegativity of element C is greater than that of element B, the binding energy of element A increases. Thus, the improved performance of the Pt\u2212doped Sb2Te3 film is explained clearly. 
In this work, we systematically studied the performance of PST. The PCM devices based on PST can achieve higher speed and better data retention than GST devices. According to XPS and TEM analyses of the microstructure of Pt\u2212modified Sb2Te3, the reduced grain size and the formation of Pt\u2212Te bonds are the main reasons for the improved properties, while the boost in device endurance is credited to the reduced density change rate. The improvement of these properties is conducive to the commercial application of the material, and the experimental results show that PST has broad application prospects in complex environments."}
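The grain-size estimate above follows directly from the Scherrer relation with the constants given in the text (K = 0.89, λ = 0.154056 nm). A minimal sketch of that calculation is shown below; the FWHM and peak-position values are illustrative only, not measurements from the paper.

```python
import math

def scherrer_grain_size(fwhm_deg: float, two_theta_deg: float,
                        k: float = 0.89, wavelength_nm: float = 0.154056) -> float:
    """Estimate grain size L (nm) via L = K*lambda / (beta * cos(theta)).

    fwhm_deg      -- FWHM of the diffraction peak, in degrees on the 2-theta scale
    two_theta_deg -- peak position, in degrees (2-theta)
    """
    beta = math.radians(fwhm_deg)            # FWHM must be in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle is half of 2-theta
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical peaks at the same angle: a 5x broader peak gives a 5x smaller grain.
sharp = scherrer_grain_size(fwhm_deg=0.2, two_theta_deg=28.0)
broad = scherrer_grain_size(fwhm_deg=1.0, two_theta_deg=28.0)
print(round(sharp, 1), round(broad, 1))
```

This matches the qualitative observation in the text: the wider FWHM of the PST peaks corresponds to a much smaller grain size than that of pure Sb2Te3.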
+{"text": "Coronavirus disease 2019 (COVID-19) is\u00a0a viral respiratory disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The respiratory system is the main target of the virus; however, apart from lung disease, a relatively large proportion of patients develop thrombosis as well. We present the case of a 19-year-old male who was admitted after contracting community-acquired right-sided pneumonia. The patient had a history of COVID-19 infection four weeks before admission. The echocardiographic assessment revealed a 16\u00a0x 6-mm right ventricular (RV) thrombus. He underwent a cardiovascular magnetic resonance (CMR) study, which confirmed the findings.\u00a0After ruling out the most common causes of hypercoagulability, COVID-19 was judged to be the cause of the thrombus. The patient was treated with warfarin. Follow-up imaging with echocardiography and CMR six months later revealed complete resolution of the thrombus. Hypercoagulability is a major complication of COVID-19, and in situ thrombosis can occur both in the arterial and venous circulation. The recognition of intracardiac thrombi even in low-risk patients with a history of COVID-19 infection and the immediate initiation of antithrombotic treatment to minimize the risk of embolization are of paramount importance. Advanced imaging techniques are often required to establish the diagnosis of this condition. Even though COVID-19 is primarily a respiratory disease, there is strong evidence to indicate the development of a prothrombotic environment leading to both arterial and venous thrombosis in many infected patients. The RV appeared to have a normal size with preserved systolic function. Balanced steady-state free precession (bSSFP) cine images showed an oval-shaped mass of 16 x 6 mm within the cavity of the RV apex, with borders distinguishable from ventricular endothelium and trabeculations, raising the suspicion of an RV apical thrombus (Figure). 
A hypercoagulable workup including antiphospholipid antibodies, antinuclear antibodies, beta-2-glycoprotein, anticardiolipin antibody, homocysteine, antithrombin, protein C, and protein S was normal. The recent COVID-19 infection appeared to be the only plausible cause of the RV apical thrombus. LMWH was bridged to warfarin. The patient was successfully discharged within the next few days after achieving therapeutic levels of the international normalized ratio (INR) between 2 and 3. Following anticoagulation of the patient for six months, a follow-up TTE (Figure) showed complete resolution of the thrombus. Following the COVID-19 outbreak, a strong correlation between COVID-19 infection and hypercoagulopathy has been confirmed. This coagulopathy seems to be one of the most severe consequences of the disease. COVID-19-related thrombosis as well as abnormal coagulation parameters have been perceived as poor prognostic markers in COVID-19 patients. An autopsy study of patients who died from COVID-19 showed a high incidence of deep venous thrombosis (58%) and fatal pulmonary embolism (33%). The prophylactic administration of low-molecular-weight heparin in all patients with COVID-19 infection is of paramount importance.\u00a0Similarly, early diagnosis of thrombosis is vital, and TTE might be a reasonable option in most patients with a history of COVID-19 infection. TTE remains the main imaging technique for assessing RV function, but since thrombus and myocardium may have similar echogenicity, TTE alone is sometimes not suitable to differentiate these two types of tissue. 
Whenever there is a high suspicion of RV thrombus, or in cases of an inconclusive TTE and/or inconclusive transoesophageal echocardiogram\u00a0(TOE), advanced imaging techniques like CMR could be employed as an additional modality to confirm the diagnosis. The superiority of CMR in detecting thrombus in the RV, with its higher sensitivity and specificity compared to the current standard of TTE and/or TOE, has already been described in some case reports and case series. Irrespective of the cause of RV thrombus formation, multimodality imaging, especially the use of CMR, despite its cost, offers a very useful additional diagnostic tool. CMR can be a supplemental tool besides TTE/TOE in the diagnosis and follow-up of incidental intracardiac masses in the RV. In our study, the TTE findings raised the suspicion of a thrombus. The RV thrombus was diagnosed after CMR, during the analysis of all the obtained images, i.e., bSSFP cine sequences, FPP, and LGE sequences. Hypercoagulability is a major complication of COVID-19, and thrombosis can occur both in the arterial and venous circulation, including the RV. These thrombotic events appear to be most common in hospitalized COVID-19 patients, but our case showed that thrombotic events can be present\u00a0in low-risk post-COVID-19 patients as well. The recognition of intracardiac thrombi even in low-risk patients with a history of COVID-19 infection, like our case, and the immediate initiation of antithrombotic treatment to minimize the risk of embolization, is crucial. The use of advanced imaging techniques like CMR can help in confirming the diagnosis."}
+{"text": "The purpose of this research study was to define a TPB-based structural latent variable model so as to explain variance in breastfeeding intentions and behaviors among a cohort of Midwest breastfeeding mothers. The Theory of Planned Behavior (TPB) has guided the investigation of breastfeeding since the 1980s, incorporating the major constructs of attitudes, subjective norms/normative beliefs, perceived behavioral control, and intentions. The longitudinal descriptive study utilized questionnaire data collected from a convenience sample of 100 women with low-risk pregnancies and the intention to breastfeed at three separate time points. Data were coded and analyzed using IBM SPSS, SAS, and the lavaan package in R. Participants were predominantly White (94%, n\u2009=\u200994), married, college-educated, and had previous breastfeeding experience. The majority gave birth vaginally. Varimax analysis revealed a plurality of factors within each domain. Attempts to fit a structural model, including both hierarchical and bi-factor latent variables, failed, revealing a lack of statistical significance and poor fit statistics. These findings illustrate the importance of using methods that fit the phenomena explained. Contributors to poor model fit may include outdated tools lacking cultural relevance, a change in social norms, or a failure to capture the possible influence of social media and formula marketing on breastfeeding behaviors. The null finding is a significant finding, indicating the need to revisit and refine the operationalization and conceptual underpinnings of the TPB through qualitative methods such as exploring the lived experiences of breastfeeding women in the Midwest region. The larger study used volunteer and paid undergraduate nursing research students to recruit participants and collect both survey data and human milk samples. The study was approved by Hope College\u2019s Human Subjects Review Board. 
All enrolled participants were provided written research materials and consented prior to participation in the study. One hundred women with low-risk pregnancies and the intention to breastfeed were enrolled via convenience sampling after 30\u2009weeks gestation and completed three questionnaires. Women were eligible to participate if they were 21\u2009years of age or older, English proficient, intended to breastfeed with a singleton gestation, and lived within a 75-mile radius of the study site. To reach a large group of mothers intending to breastfeed, the sample was recruited via social media, recruitment materials posted at local hospitals and businesses, and snowballing. Participants were provided with a $20 USD store card as partial compensation for their time and participation. The Antepartum Questionnaire collected participant demographic data and included the Predictive Breastfeeding questions developed by Manstead and colleagues (see Table). The targeted behaviors were exclusive breastfeeding at Day 10 and Day 60 postpartum. The Day 10 and Day 60 Questionnaires measured participants\u2019 feeding practices postpartum. A series of multiple-choice questions measured feeding method, mode of milk expression, and frequency of feeding to conceptualize exclusivity and duration of breastfeeding (see Table). Eligible participants were invited to review and complete an online consent form. Once the consent form was signed, participants were directed to complete the Antepartum Questionnaire. Participants were instructed to notify the research team when they gave birth. Based on the provided birth date, the Day 10 and Day 60 Questionnaires were scheduled for distribution to participants. 
No more than two reminders to complete any of the three questionnaires were sent to participants. The consent form and questionnaires were administered electronically via Qualtrics XM(R). Descriptive data were analyzed using IBM SPSS (Version 24). Additional data analyses were completed using SAS University Edition and the lavaan package in R for latent variable modeling. When it was clear confirmatory factor analysis was insufficient, the strategy became exploratory factor analysis: fit latent variables (LVs), trim manifest measures if necessary, and fit a structural model. The authors primarily relied on the Bayesian information criterion (BIC) to compare model fit between the first-order LV, hierarchical LV, and bifactor LV (described in more detail below). Model fit for each latent construct was assessed using cutoffs suggested by Schreiber and colleagues. From the original 100 participants, 87 completed all three questionnaires in full. Participants were predominantly White (94%, n\u2009=\u200994), married, college-educated, and had previous breastfeeding experience. The majority gave birth vaginally. Complete sample characteristics are available in the accompanying table. As such, the analysis progressed to exploratory factor analysis with a combination of factor analysis with varimax rotation, a MAP, VSS analysis, and parallel analysis. This indicated that each domain within the TPB model was not a single factor, except for breastfeeding intentions. The feeding attitude factor was more accurately modeled as five factors, subjective norms as three factors, and perceived behavioral control as three factors. Additionally, the items that factored together did not always indicate a clear conceptual category. 
For example, the attitude item regarding expense factored with infection and nutrition items. Analyses also specified hierarchical latent variables (HLVs); unlike the bi-factor model, an HLV fits the variance that is in common between the lower-order latent variables (as opposed to the observed variables, as in the bi-factor model). The BIC did decrease with all of these HLVs, and many of the fit statistics also improved with this specification, but the best fit was with Norms, and this, while close, still did not reach acceptable model fit (RMSEA\u2009=\u20090.09 and SRMR\u2009=\u20090.06). In a final attempt to fit a SEM approximating the TPB, analyses explored which of the factors from each domain of the TPB were most conceptually clear and best fit the overall theoretical argument of the model; subsequently, just those factors were included. The model fit indicated that the overall structure was not reliable and did not always converge. Thus, the reliability of even the significant paths is questionable. This study attempted to use the TPB to explain variance in breastfeeding intentions and behaviors among a cohort of Midwest breastfeeding mothers. To do so, the research team used SEM to fit constructs within the TPB as latent variables and model breastfeeding behaviors. The constructs, however, did not hold up when modeled in this population. These issues with poor model fit may have gone unnoticed had the research team not opted to use SEM. This is especially apparent when considering construct alpha scores. The alpha scores for each construct within the TPB would have been acceptable, or nearly acceptable, for a traditional regression model (\u03b1\u2009>\u20090.7). However, latent variables can parse measurement error more accurately. 
Alpha scores are based upon correlations among variables, whereas McDonald\u2019s omega is computed from the loadings of a factor model. Unlike the studies by Duckett and colleagues and Dodgson and colleagues, and in contrast to Guo and colleagues\u2019 meta-analysis, the present model did not fit. For some time, research focused on the formation of beliefs and values, and the actions that flow from socialization, has highlighted the importance of close ties. Exposure to relationships, social structures, and one\u2019s position within them shapes perceptions of the world that are then replicated in behaviors. Study participants were primarily recruited via social media (Facebook), indicating this cohort of childbearing women is active on social media and, at least in this case, social media activity is connected to actions. Recent research has shown that women are more active in seeking out health information than men and that the internet plays a key role in this information consumption. While the overall efforts to fit a SEM for the TPB failed, variables related to formula feeding clustered together quite well in this study. Beyond the fact that formula measures factoring well is conceptually notable, this may also be a small indication that the failure of the TPB to fit these data well is not merely a data artifact. After all, it would not make sense to say it is merely a data problem when some manifest measures that clearly belong together (formula feeding questions) statistically do hold together quite well. If there were a fundamental problem with the data, it would be odd for it to not be widespread throughout the measures. The uniformity of messaging by the formula industry may explain why those items more clearly factor out together. Strong, uniform messaging is also a key reason why mothers may discontinue breastfeeding. 
The limitations of this study include convenience sampling, a limited sample size, homogeneity of the study sample, and the use of previously developed questionnaires with a limited ability to capture the constructs under investigation. Despite these limitations and null findings, this study remains particularly valuable to nursing science, in which the development of interventions is driven by theory. Among dissertations focused on breastfeeding research during the last 10\u2009years, approximately eight utilized the TPB as a guiding theory. This does not encompass the multitude of studies in maternal and child health currently underway, nor published manuscripts using the TPB as a guiding framework. This research highlights limitations in tools developed to measure the TPB theoretical constructs of attitudes, subjective norms/normative beliefs, perceived behavioral control, and intentions related to breastfeeding behavior. Despite the fact that the data in this study came from a relatively homogeneous sample of mothers from the same community, attempts to fit an SEM failed. The research team speculates that these deficits may be related to the use of outdated tools lacking cultural relevance, a change in social norms, and a failure to capture the possible influence of social media and formula marketing on breastfeeding behaviors. In addition, this study demonstrates the importance of using methods that fit the phenomena explained. The research team used SEM to fit constructs within the TPB as latent variables and model breastfeeding behaviors, which did not hold up when modeled. These issues with poor model fit may have gone unnoticed had the research team not opted to use SEM. As a result, the present null finding is a significant finding, indicating the need to revisit and refine the operationalization and conceptual underpinnings of the TPB through qualitative methods. 
This would include exploring the lived experiences of breastfeeding women in pregnancy and during lactation, taking into account breastfeeding difficulties or maternal and infant factors that may have a significant impact on women\u2019s decision or ability to continue breastfeeding in the Midwest region."}
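The reliability discussion above contrasts Cronbach's alpha with latent-variable approaches. As a concrete illustration of the alpha statistic the study refers to, here is a minimal sketch using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total score); the item data are hypothetical Likert responses, not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item,
    equal numbers of respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent, summed across items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 4-item scale answered by 6 respondents (1-5 Likert values);
# the items co-vary strongly, so alpha should approach 1.
items = [
    [4, 5, 3, 2, 4, 5],
    [4, 4, 3, 2, 5, 5],
    [5, 5, 2, 2, 4, 4],
    [4, 5, 3, 1, 4, 5],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

As the study notes, an alpha above 0.7 would look acceptable for a regression model, yet alpha alone says nothing about whether the items load on a single latent factor, which is exactly what the SEM analysis tested.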
+{"text": "Background: RAD-140, one of the novel selective androgen receptor modulators (SARMs), has potent anabolic effects on bones and muscles with little androgenic effect. Despite the lack of approval for its clinical use, RAD-140 is readily accessible on the consumer market. Hepatotoxicity associated with the use of SARMs has only rarely been reported in the literature. Case Report: A 24-year-old male presented with a 2-week history of diffuse abdominal pain, scleral icterus, pruritus, and jaundice. Prior to presentation, he had been taking the health supplement RAD-140 for muscle growth for 5 weeks. He had a cholestatic pattern of liver injury, with a peak total bilirubin of 38.5 mg/dL. Liver biopsy was supportive of a diagnosis of RAD-140\u2013associated liver injury characterized pathologically by intracytoplasmic and canalicular cholestasis with minimal portal inflammation. Symptoms and liver injury resolved after cessation of the offending agent. Conclusion: To date, only select descriptions of the potential hepatotoxicity associated with the use of SARMs, including RAD-140, have been published. Given their potential hepatotoxicity and ready availability on the consumer market, RAD-140 and other SARMs should be used judiciously and under close clinical supervision until further hepatic safety data become available. Approximately 2 months before presentation, he had started taking up to 15 mg daily of RAD-140 for a total of 5 weeks for muscle growth. The only other medication that he took was acetaminophen 250 mg/aspirin 250 mg/caffeine 65 mg. He had stopped both medications 2 weeks prior to presentation, as instructed by his primary care provider (PCP), because of the incidental finding of elevated liver chemistries: alkaline phosphatase of 151 IU/L, alanine aminotransferase of 171 IU/L, and aspartate aminotransferase of 71 IU/L. 
Gamma-glutamyl transpeptidase, international normalized ratio (INR), total protein, albumin, and complete blood count were within normal limits, but transferrin saturation was only 21% and he was negative for C282Y/H63D mutations. Ceruloplasmin level was normal. Alpha-1 antitrypsin (AAT) level was normal, and the AAT phenotype was Pi*MZ. Urine toxicology screen, alcohol screen, and acetaminophen level were negative. Doppler ultrasound of the liver was unremarkable. Abdominal axial computed tomography and magnetic resonance cholangiography revealed hepatomegaly and limited focal fatty infiltration with a patent biliary tree and vasculature (Figures A and B). The patient had an unremarkable hospital course. Serial laboratory testing showed gradually improving liver chemistries and stable INR. As acute liver failure was not a concern, N-acetylcysteine was not given. The patient was discharged on hospital day 4. He returned to the hepatology clinic for regular visits up to 5 months postdischarge. At his most recent clinic visit, his symptoms had entirely resolved, and his liver tests had markedly improved; his episode of RAD-140\u2013associated liver injury carried a severity score of 3 (severe injury). Our patient had a history of using acetaminophen and salicylic acid prior to the onset of his liver injury. However, acetaminophen and/or salicylic acid\u2013related hepatotoxicity was deemed unlikely because of the small accumulated dosage and, more importantly, the lack of classic zone 3 necrosis/apoptosis and/or microvesicular steatosis. On the other hand, bland cholestasis, a pathologic feature commonly observed in anabolic-androgenic steroid\u2013induced hepatotoxicity, highly suggests cholestatic injury from RAD-140. Whether the association of RAD-140 with androgen receptors regulates BSEP through receptor-associated signaling pathways remains to be defined. 
In humans, the ABCB11 mutation was reported to increase genetic susceptibility to anabolic-androgenic steroid\u2013induced cholestasis.17 Although we do not know if our patient has the ABCB11 mutation, he is heterozygous for alpha-1-antitrypsin Z (Pi*MZ), a phenotype reported to be a predisposing factor for liver disease and fibrosis.18 Whether Z heterozygosity contributes to drug-induced cholestasis remains to be determined. The molecular mechanisms underlying SARM-induced hepatotoxicity are largely speculative. The bland cholestasis seen in both anabolic-androgenic steroid\u2013 and RAD-140/Enobosarm\u2013associated hepatotoxicity highly suggests involvement of androgen receptors in dysregulation of bile transport. In animal studies, the bile salt export pump (BSEP), an ATP-binding cassette subfamily B member 11 (ABCB11) transporter, was reported to be involved in anabolic-androgenic steroid\u2013induced cholestatic injury. The accumulating cases of drug-induced liver injury from SARMs raise concern about their hepatic safety and call into question the tissue selectivity of these agents. We caution against the use of SARMs outside of clinical investigation and advocate for tighter regulation, close monitoring, and prompt reporting of adverse events associated with SARMs."}
+{"text": "Gastrointestinal cancer is becoming increasingly common, which leads to over 3 million deaths every year. No typical symptoms appear in the early stage of gastrointestinal cancer, posing a significant challenge in the diagnosis and treatment of patients with gastrointestinal cancer. Many patients are in the middle and late stages of gastrointestinal cancer when they feel uncomfortable, unfortunately, most of them will die of gastrointestinal cancer. Recently, various artificial intelligence techniques like machine learning based on multi-omics have been presented for cancer diagnosis and treatment in the era of precision medicine. This paper provides a survey on multi-omics-based cancer diagnosis using machine learning with potential application in gastrointestinal cancer. Particularly, we make a comprehensive summary and analysis from the perspective of multi-omics datasets, task types, and multi-omics-based integration methods. Furthermore, this paper points out the remaining challenges of multi-omics-based cancer diagnosis using machine learning and discusses future topics. Cancer is one of the leading causes of death worldwide , usuallyIn order to improve the cancer treatment effect, as well as prolong the survival time for cancer patients, it is very essential to improve capabilities in precision medicine by using specific information about a patient's tumor to help make an accurate diagnosis, plan an effective treatment, find out how well treatment is working, or make a prognosis. In particular, accurate diagnosis can be used for the early diagnosis of cancer, many cancers can be cured if detected early and treated effectively . Even inHowever, accurate diagnosis of cancer is a scientific problem in the field of biomedicine. 
Fortunately, over the past few years, with the development of artificial intelligence technology, especially machine learning (ML) and deep learning (DL), smart medicine has advanced rapidly. This paper provides a comprehensive review of multi-omics-based ML models and artificial intelligence technologies in the field of cancer diagnosis, and then we highlight their prospects and applications in gastrointestinal cancer. Finally, we point out the difficulties in current multi-omics-based ML integration methods and discuss some future research directions. As shown in the figure, typical types of cancer tasks based on multi-omics data integration methods are cancer molecular subtype classification, survival analysis, drug response prediction, and biomarker discovery. In addition, some tasks are not well studied in the literature, such as metastasis prediction and recurrence prediction; these will not be discussed in this review. To customize the optimal treatment strategy for patients and achieve the purpose of precision medicine, it is of great significance to improve the accuracy of cancer diagnosis. Specifically, a cancer is generally further divided into multiple molecular subtypes, and different molecular subtypes call for different treatment strategies to achieve the best therapeutic effect. To improve the survival rate of cancer patients, many researchers have studied and analyzed the factors affecting survival by collecting the survival times of cancer patients and using machine learning methods to discover possible survival rules. IC50, the concentration of drug required to reduce the number of viable cells by half after administration, is widely used to assess the sensitivity of drug response. 
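As a rough illustration of how an IC50 might be read off measured viabilities, the sketch below interpolates the 50% crossing point on the log-concentration axis. This is a hypothetical helper, not a method from any of the surveyed papers; real analyses typically fit a Hill or log-logistic dose-response model instead.

```python
import numpy as np

def estimate_ic50(doses, viability):
    """Estimate IC50 by log-linear interpolation of a dose-response curve.

    doses: drug concentrations in increasing order.
    viability: fraction of viable cells at each dose, relative to control.
    Returns the interpolated concentration where viability crosses 0.5,
    or None if the curve never crosses 0.5.
    """
    doses = np.asarray(doses, dtype=float)
    viability = np.asarray(viability, dtype=float)
    for i in range(len(doses) - 1):
        v0, v1 = viability[i], viability[i + 1]
        if v0 >= 0.5 >= v1:  # viability crosses 50% between dose i and i+1
            # Interpolate on log10(concentration): dose-response curves are
            # approximately linear on a log-dose axis near the midpoint.
            t = (v0 - 0.5) / (v0 - v1)
            log_ic50 = np.log10(doses[i]) + t * (np.log10(doses[i + 1]) - np.log10(doses[i]))
            return 10 ** log_ic50
    return None

# Hypothetical assay: viability falls through 50% between 1 and 10 uM.
ic50 = estimate_ic50([0.1, 1.0, 10.0, 100.0], [0.95, 0.80, 0.30, 0.05])
```

With these made-up readings the crossing falls between 1 and 10 uM, closer to the lower dose because the drop is steep there.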
In this review, the goal of biomarker discovery is to find genes associated with cancer prognosis by combining multi-omics data, which can advance the understanding of the molecular mechanisms of cancer and offer new ideas for clinical diagnosis and treatment. Many novel omics sequencing technologies have emerged since the Human Genome Project was proposed and implemented. In this section, we introduce the multi-omics cancer datasets that are widely used in the literature; these multi-omics datasets are shown in the accompanying table. The Cancer Genome Atlas (TCGA) is a project jointly launched by the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI) in 2006. It includes multi-omics data across many cancer types. The Genomics of Drug Sensitivity in Cancer (GDSC) omics database was jointly developed by the Wellcome Trust Sanger Institute in the United Kingdom and the Massachusetts General Hospital Cancer Center in the United States. In addition to the TCGA and GDSC datasets, other widely used databases also appear in the relevant literature, such as the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC), COSMIC Cell Lines, CPTAC, LinkedOmics, and The Cancer Imaging Archive (TCIA). Although multi-omics data can be used for cancer diagnosis using ML integration methods, there are still some problems with multi-omics data. We list some of the challenges that are quite general in the relevant literature as follows. The first challenge is that almost all existing omics datasets suffer from a small number of observations in a specific class, with most classes having < 100 observations. The features of omics data usually have high dimensionality, much larger than the number of observed samples, leading to the curse of dimensionality. The second challenge is that there are many missing values in clinical information and omics sequencing results in multi-omics datasets. 
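A minimal sketch of one common fix for such missingness, per-feature median imputation, is shown below. The function name and toy matrix are assumptions for illustration; the surveyed methods may handle missing values quite differently.

```python
import numpy as np

def impute_median(X):
    """Fill missing entries (NaN) in an omics matrix with per-feature medians.

    X: samples x features array where NaN marks a missing measurement.
    Returns a copy with each NaN replaced by the median of its feature column.
    """
    X = np.array(X, dtype=float)       # work on a copy
    medians = np.nanmedian(X, axis=0)  # per-feature medians, ignoring NaN
    rows, cols = np.where(np.isnan(X)) # positions of missing values
    X[rows, cols] = medians[cols]      # fill each NaN from its column median
    return X

# Toy expression matrix: 4 samples x 3 features with two missing values.
X = [[1.0, np.nan, 3.0],
     [2.0, 5.0, np.nan],
     [3.0, 7.0, 9.0],
     [4.0, 9.0, 11.0]]
X_filled = impute_median(X)
```

Median imputation is robust to the outliers common in expression data, but it ignores correlations between features; model-based imputation can do better when samples are plentiful.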
Some studies have proposed strategies for handling these missing values. The third challenge is class distribution imbalance between different cancer types, as well as between different cancer molecular subtypes. To address this problem, up-sampling and down-sampling techniques are usually employed. In recent years, with the increase in computing power, the decline in the cost of high-throughput sequencing, and the success of ML technology in various fields, ML has been widely employed in biomedical and bioinformatics computing. Here we briefly introduce three subgroups of models applied in data integration: traditional ML models, classical deep learning models, and auto-encoder models. Logistic regression (LR), support vector machines (SVM), random forests (RF), and XGBoost are widely used traditional ML models. In contrast, classical deep learning models like fully connected neural networks (FCNNs) and convolutional neural networks (CNNs) do not require reducing the dimensionality of omics features to very low dimensions, because these models can automatically learn useful information from a high-dimensional space. An auto-encoder is an unsupervised neural network model whose encoder and decoder can be FCNNs, CNNs, or other DNNs; it learns a compressed representation of its input. In recent years, graph neural networks (GNNs) have shown strong capabilities in handling non-Euclidean graph-structured data by naturally combining network topology with node and link information, and GNNs have been employed to integrate multi-omics data in the last two years; Wang et al. proposed one such GNN-based integration approach. The transformer model is widely used in fields such as natural language processing and computer vision and is becoming one of the most frequently used deep learning models. In this paper, we review multi-omics-based integrative approaches using ML with potential applications in gastrointestinal cancer. Firstly, several cancer task types are elaborated on and discussed. 
Then we describe widely used cancer multi-omics datasets and the challenges encountered in their use for ML-based integration. Finally, we analyze the current state-of-the-art multi-omics-based data integration approaches in detail and divide them into three groups: conventional ML technologies, graph neural network technologies, and transformer technologies. Although ML has performed excellently in the application of multi-omics data integration, there are still some challenges that require deep consideration and exploration. Specifically, existing methods almost always discard the missing values of multi-omics data rather than trying to fill them in. Therefore, to efficiently utilize the existing precious multi-omics data, it is necessary to further explore methods for filling in missing values in multi-omics. Additionally, since biomedical data are very precious and difficult to obtain, patients for whom multiple omics sources and histopathology images are simultaneously available are particularly scarce. Hence, in the future, a pre-trained visual representation model may be transferred to histopathology images on a limited number of samples, a problem that can potentially be addressed by few-shot learning strategies. More importantly, more effective approaches for integrating multi-omics and histopathology images need further investigation for gastrointestinal cancer diagnosis and treatment, making this a promising future research direction. ShW contributed to the conception of the study and the verification of analytical methods. SuW performed the statistical analysis and wrote the manuscript. ZW revised the manuscript and approved the version to be published. All authors contributed to the article and approved the submitted version."}
+{"text": "Midwives in the intervention period provided an orientation session for the birth companions on supportive labor techniques. Coping was assessed throughout labor and anxiety scores were measured after birth. Independent t-test and Chi-Square tests were used to assess the differences by study period. Anxiety scores were reduced among women in the intervention period (p = 0.001). The proportion of women able to cope during early active labor was higher during the intervention period (p = 0.031). Women in the intervention period had 80% higher odds of coping (p = 0.032) compared to those in the control period. Notable differences in anxiety and coping with labor were observed among first-time mothers, younger women, and when siblings provided support. Midwife-provided orientation of birth companions on labor support lowers maternal anxiety and improves coping during labor. Findings could inform the planning and development of policies for the implementation of the presence of birth companions in similar low-resource settings.The study aimed to assess the effect of midwife-provided orientation of birth companions on maternal anxiety and coping during labor. A stepped wedge cluster randomized trial design was conducted among 475 participants (control Improving the quality of care around the time of birth has been acknowledged as the most impactful strategy for reducing stillbirths and maternal and newborn deaths . A womanLabor companionship, like other non-clinical interventions, has not been regarded as a priority in many settings, yet it is an essential component of the experience of care . SeveralPresently, women in Uganda are allowed to have a companion of choice during labor. These companions, however, neither receive an orientation nor have defined roles and responsibilities. 
It is acknowledged that actively involving family members in the process of labor ensures ownership and engagement. There is low-quality evidence on the effect of continuous labor support in low-income settings. Additionally, it is still unknown whether training impacts the effectiveness of continuous labor support. A cross-sectional stepped wedge cluster randomized trial was used. In this design, different individuals in the control and intervention periods are used, with a single observation of outcomes per individual. The study was carried out in the Bugisu sub-region located in the eastern part of Uganda. The Bugisu sub-region consists of six districts: Manafwa, Mbale, Bududa, Sironko, Namisindwa, and Bulambuli. According to the Uganda National Bureau of Statistics, the sub-region is home mainly to the Gisu people, with an average household size of 4.8 and a literacy rate of 51.5%. Cluster sampling: each health facility in this context was considered a cluster. The inclusion criterion was a functional operating theatre; hospitals and HCIVs with a functional operating theatre were included. It was assumed that the presence of a functional theatre meant that the chances of referring women for cesarean sections to other facilities were small, enabling us to monitor more deliveries. Four clusters were selected for this trial: Mbale Regional Referral Hospital, Bududa District Hospital, Muyembe HCIV, and Busiu (Manafwa) HCIV. Individual-specific criteria: women who had a birth companion, were in spontaneously established labor, and were expecting a vaginal delivery. Exclusion criteria were multiple pregnancy, previous cesarean section, mental illness, and being deaf or mute. We excluded women who had previous cesarean sections because they had a higher chance of having another cesarean section. Intervention: the intervention was \u201cmidwife-provided orientation of birth companions\u201d. 
The admitting midwife provided an orientation session for the birth companion on supportive labor techniques. We assumed that providing an orientation session was likely to boost the birth companion's confidence, hence increasing the effectiveness of continuous support. The content of the orientation session covered providing emotional and physical support. Emotional support included being present, demonstrating a caring and positive attitude, using calming verbal expressions, humor, and praise, and encouraging and acknowledging efforts. Physical support included supporting her to change positions (favoring upright positions), walking with her, giving her drinks and food, massaging, reminding her to go and pass urine, helping her find a comfortable position for pushing, and wiping her face with a cool cloth. The content was developed based on the literature on labor companionship techniques. Control: women are escorted to the health facilities by one or more family members or friends. One person is allowed, besides her, to provide support. The support persons do not receive any orientation sessions and have no designated roles. Routine analgesia is not given. Midwives, medical officers, and obstetricians provide skilled care. Typically, two to three midwives are allocated per 8-h shift, managing about six laboring women at a given time. Outcomes: this article is part of a larger study assessing the effectiveness of midwife-provided orientation of birth companions on several outcomes: the incidence of spontaneous vaginal delivery, length of labor, Apgar score, coping, anxiety, and maternal satisfaction. The primary outcome was the chance of having a spontaneous vaginal delivery. This article reports on two of the secondary outcomes of the trial. The rationale for reporting maternal anxiety and coping separately was to give more attention to the psychological aspects of childbirth. 
Psychological research on childbirth is scarce. Sample size and randomization: stepped wedge trials are designed to study the effect of an intervention. Randomization for stepped wedge trials is not performed individually but rather involves the crossover of clusters from control to intervention until all clusters are exposed. Women\u2019s self-reported anxiety levels were measured within 6 h after birth using the 10 cm Visual Analogue Scale for Anxiety (VAS-A). It ranges from 0 to 10, with a higher score representing a higher level of anxiety. The VAS-A has demonstrated validity and reliability for measuring anxiety and has been used in several studies to assess anxiety in similar low-resource settings. Analysis: in this study design, the distribution of results across control periods is compared with that across the intervention periods. A p-value of <0.05 was taken to be statistically significant. Subgroup analysis was also undertaken to evaluate treatment effects for specific endpoint groups defined by baseline characteristics. The majority of the respondents were in the age group of 15\u201324 years. Most of the respondents were first-time mothers (44.7%). The overall mean gestation was 38.3 weeks (SD 1.0). Maternal anxiety for women who received continuous labor support from birth companions who had a midwife-provided orientation (intervention) was compared to that of women who received usual care (control). The overall mean anxiety score was 5.4 (SD = 2.0). Anxiety was higher in the pre-intervention period than in the post-intervention period. There was a statistically significant difference in the mean anxiety score between the pre- and post-intervention periods, with the score falling to 3.4 (SD 2.2) post-intervention. Additionally, large differences were noted among women who were younger, first-time mothers, those with a low education status, and those who were married. 
Additionally, anxiety levels were much lower when sisters (siblings) offered support compared to parents, spouses, friends, or other relatives (4.8 (SD 2.3) vs. 6.2 (SD 2.0)). A multivariable analysis was performed to assess the relative contributions of different factors that could affect maternal anxiety; we found a statistically significant difference in the anxiety scores by study period (p = 0.001). The proportion of those able to cope was highest during early active labor (89.7%) and lowest during the second stage (53.4%). A significant difference by study period (p = 0.031) was found during early active labor (4\u20137 cm). These findings show that the percentage of women coping with labor decreased as labor progressed. No significant differences were found for the later phases of labor. A significant difference (p < 0.001) was observed at the regional referral hospital. Furthermore, the intervention was more effective among those who were having their first child, and the difference was statistically significant (p = 0.049). Though not statistically significant, the proportion of those able to cope was higher among the younger women (p = 0.07) and those with a lower level of education (p = 0.06). A multivariable analysis was also performed to assess the relative contributions of different factors that could affect coping. Confounding baseline characteristics adjusted for were age, parity, cervical dilatation on admission, and relation of the support person. Women in the intervention period had 80% higher odds of coping at 4\u20137 cm than those in the control period. The same results were found after adjusting. Furthermore, women having their second or more children and those who were supported by siblings had higher odds of coping. In this study, we assessed the effect of midwife-provided orientation for birth companions on maternal anxiety and coping with labor. 
Results from our study showed that the maternal anxiety score was reduced and the proportion of women coping was higher in the intervention period. Similar findings are reported in other related studies, where the presence of trained husbands during delivery decreased maternal anxiety. The current study further found that the intervention was more effective among first-time mothers. A study conducted in Malawi to determine the efficacy of a companion-integrated package for primigravid women showed that birth companions enhanced childbirth self-efficacy. Coping during childbirth is a significant predictor of the development of post-traumatic stress disorder symptoms after birth. It is also imperative to note that the current study found non-significant results on coping during the later stages of labor. A related study found that the anxiety levels of women were high during the last stage of labor, irrespective of the intervention. Following WHO recommendations, labor companionship has been implemented in Uganda to a certain extent. Birth companions, however, are not oriented, nor are midwives trained on how to integrate the birth companion into the woman\u2019s care. This study highlights the effectiveness of midwife-provided orientation on maternal anxiety and coping during labor. Findings could inform the feasibility of implementing the presence of birth companions. We acknowledge that finding 20 minutes to orient each birth companion in low-resource settings may be a challenge. The intervention can be modified by considering these options: group orientation sessions, conducting sessions during the antenatal period, and video recordings of supportive techniques playing on TV screens in the admission and waiting areas of busy facilities. To our knowledge, this is the first study to assess the effect of continuous labor support on the events and outcomes of labor in Uganda. 
The selected study design enabled all the facilities to receive the intervention. The facilities acted as their own controls, thereby buffering the effects of heterogeneity. Nonetheless, caution should be taken in generalizing the findings, given the following limitations: randomization was performed to determine the order of introduction of the intervention to clusters, not by individual participant. The ratings of anxiety were recalled retrospectively and could have been affected by the arrival of the baby. Additionally, coping in labor was assessed by midwives, which could have introduced bias. The numbers within the subgroup analyses were small, which may have generated spurious correlations. However, it is worth noting that subgroup analysis of treatment effects may provide useful information for the specific care of women and for future research. Our results suggest that midwives providing an orientation on continuous labor support lowers women\u2019s anxiety and enhances their ability to cope with pain during early active labor. Findings from this study may be of benefit in informing the development of protocols for the implementation of the presence of birth companions in similar low-resource settings. Future evaluation of the intervention is necessary to assess the effect of additional orientation during the antenatal period. An evaluation of the acceptability and perceptions of midwives regarding guiding or orienting birth companions is essential for implementation. Women\u2019s experience of care during childbirth is key, and organized involvement of birth companions could improve women\u2019s experience of care and, consequently, their health and future reproductive decisions."}
+{"text": "Ikaros transcription factor plays an important role in hematopoiesis in several cell lines, especially in the lymphoid lineage. We hypothesized that Ikaros might influence immune reconstitution, and consequently, the risk of opportunistic infections, relapse, and graft versus host disease (GVHD). Samples were collected from the graft and from the peripheral blood (PB) of the recipients 3\u00a0weeks after neutrophil recovery. Real-time polymerase chain reaction (RT-PCR) was performed to analyze the absolute and relative Ikaros expression. Patients were divided into two groups, according to Ikaros expression in the graft and in the recipients\u2019 PB based on the ROC curves for moderate/severe cGVHD. A cutoff of 1.48 was used for Ikaros expression in the graft, and a cutoff of 0.79 was used for Ikaros expression in the recipients\u2019 PB. Sixty-six patients were included in this study. Median age of patients was 52\u00a0years (range 16\u201380\u00a0years), 55% of them were male, and 58% of them had acute leukemia. Median follow-up period was 18\u00a0months (range 10\u201343\u00a0months). There was no association between Ikaros expression and the risk of acute GVHD, relapse, or mortality. However, a significant association was observed with the risk of chronic GVHD. Higher Ikaros expression in the graft was associated with a significantly higher cumulative incidence (CI) of moderate/severe chronic GVHD according to the National Institute of Health (NIH) classification at two years . A higher Ikaros expression in the recipients\u2019 PB 3\u00a0weeks after engraftment was also associated with a significantly higher risk of moderate/severe chronic GVHD . In conclusion, Ikaros expression in the graft and in the recipients\u2019 PB after transplantation was associated with a higher risk of moderate/severe chronic GVHD. 
Ikaros expression should be evaluated in larger prospective trials as a potential biomarker for chronic GVHD. Immune reconstitution after hematopoietic stem cell transplantation (HSCT) is a complex and extremely variable process. Unfortunately, non-relapse mortality is still very high, mainly due to infections and acute and chronic graft-versus-host disease3. cGVHD can affect up to 50% of patients and is also responsible for significant comorbidities and low quality of life after HSCT4. The diagnosis of chronic GVHD is based on specific clinical features, although not all patients exhibit these signs and symptoms, and other nonspecific features may be the main manifestation. In doubtful cases, there are only a few laboratory tests that may be useful for diagnosis5. Several biomarkers have been studied to help establish and predict the diagnosis and prognosis of cGVHD; however, to date, no biomarkers have been validated for clinical practice7. Allogeneic HSCT is a potentially curative therapy for several malignant and non-malignant diseases and, in some cases, the only approach with curative intent. The Ikaros transcription factor could be a good candidate as a prognostic biomarker for the risk of cGVHD. Ikaros is a member of a family of zinc finger transcription factors encoded by the IKZF1 gene. It is an essential regulator of hematopoiesis8, with an important role in T and B cell differentiation and mature cell function11, as well as in cells of the myeloid lineage and in erythroid and neutrophil differentiation13,15. As an essential hematopoietic transcription factor implicated in lymphocyte and myeloid differentiation, IKZF1 activity may be a critical component of immune reconstitution and of acute and chronic GVHD pathophysiology. To our knowledge, IKZF1 expression has not yet been studied in the context of HSCT. 
In the present study, we explored whether IKZF1 expression in mononuclear cells in the graft and in the recipients\u2019 peripheral blood (PB) after engraftment could be associated with the risk of aGVHD or cGVHD. IKZF1 haploinsufficiency due to germline mutations can be responsible for common variable immunodeficiency with a decrease in B cell lymphocytes, but can also lead to a more pronounced immunodeficiency with low eosinophils, neutrophils, and myeloid dendritic cells, and a dysfunction in T cells and monocytes. This was a non-interventional prospective study that included patients older than 16\u00a0years who underwent allogeneic HSCT between January 2017 and January 2020 in two transplant centers, Hospital Sirio-Libanes and Hospital Sao Paulo, both in Sao Paulo, Brazil. Conditioning regimen, graft source, GVHD prophylaxis, time to transplantation, and all other clinical decisions were made according to each center\u2019s guidelines. Prophylaxis, diagnosis, and treatment of GVHD are based on established consensus and do not differ between the two centers. All subjects provided written informed consent prior to enrollment. The study was conducted in accordance with the Declaration of Helsinki and was approved by the research ethics committees of each center. We observed stronger correlations between the analyzed biomarkers and the outcomes for samples obtained 3\u00a0weeks after transplant compared to samples obtained at other post-transplant time-points. All blood samples were prospectively collected. Samples were taken from the graft immediately before infusion on the day of the transplant (graft) and from the recipients\u2019 PB 3\u00a0weeks after engraftment (engraftment\u2009+\u200921). Initially, our goal was to identify a readily detectable and cost-effective biomarker that could have practical applications in our country of origin, where cost is a common practical limitation. 
In prior studies conducted by our team, we found that a three-week post-transplantation period was adequate to identify immune reconstitution, which strongly correlated with transplant outcomes. Ikaros expression was measured using real-time PCR. Total RNA was extracted from mononuclear cells using PureLink\u2122 Micro or Illustra RNAspin Mini reagents, and cDNA transcripts were quantified using the Superscript III Cells Direct cDNA kit. Reactions were amplified on a 7500 Fast Real-Time PCR System using TaqMan probes, according to the manufacturer\u2019s instructions. The value of 2\u2212\u0394\u0394Ct was used to calculate the fold change in gene expression, according to the studies of Schmittgen et al. and Vandesompele et al.19. Mononuclear cells were separated and stored according to institutional guidelines. Briefly, the collected material was immediately sent for freezing at the laboratory of the Institute of Education and Research at Hospital S\u00edrio-Liban\u00eas. All collected material was individually processed to extract peripheral blood mononuclear cells (PBMCs) from each sample. Using Ficoll-Paque as the separation gradient, the samples were centrifuged to form a buffy coat layer above the gradient. The cells in the buffy coat were separated at room temperature using a sterile pipette and subjected to three cycles of washing and resuspension in sterile phosphate-buffered saline (PBS), with centrifugation between washes. The material from each patient was then preserved in a fetal bovine serum solution with 10% dimethyl sulfoxide (DMSO), stored in vials, and gradually frozen in a glycerol box at \u2212\u00a020\u00a0\u00b0C and subsequently at \u2212\u00a080\u00a0\u00b0C. Finally, the samples were stored in a liquid nitrogen tank until used in all subsequent tests. The primary endpoint was the correlation between Ikaros expression and the incidence and severity of cGVHD. 
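The 2\u2212\u0394\u0394Ct relative-quantification calculation cited above (the Livak/Schmittgen method) can be sketched as follows; the Ct values in the example are hypothetical illustrations, not data from this study.

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_calibrator, ct_ref_calibrator):
    """Relative expression of a target gene versus a reference gene,
    normalized to a calibrator sample, using the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample              # dCt in the sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # dCt in the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical example: the target amplifies one cycle earlier (relative to
# the reference gene) in the sample than in the calibrator, so ddCt = -1,
# corresponding to two-fold higher expression.
fc = fold_change(24.0, 20.0, 25.0, 20.0)
```

Because each PCR cycle roughly doubles the product, a one-cycle shift in the normalized Ct corresponds to a two-fold change in expression, which is why the exponent base is 2.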
Secondary endpoints included the correlation between Ikaros expression and the incidence and severity of aGVHD, as well as overall survival (OS), progression-free survival (PFS), relapse incidence (RI), and non-relapse mortality (NRM). The severity of aGVHD was graded based on the Mount Sinai Acute GVHD International Consortium criteria20, while cGVHD was scored as mild, moderate, or severe according to NIH standards5. Patients\u2019 comorbidities and disease risk were classified as previously published22. Receiver operating characteristic (ROC) curves were used to divide patients into two groups based on high or low Ikaros relative expression levels in both the graft and the PB after engraftment, and to correlate these results with the presence of aGVHD and cGVHD. PFS and OS probabilities were calculated using the Kaplan\u2013Meier method and compared using the log-rank test. Cumulative incidence (CI) rates were calculated for aGVHD, cGVHD, NRM, and relapse/progression, with death considered a competing event. Ninety-five percent confidence intervals (95% CIs) were estimated using the Greenwood formula. Adjusted probabilities for outcomes after transplantation were estimated using the Cox proportional hazards method (PFS and OS) and the Fine-Gray risk regression model. The statistical analyses were performed using SPSS version 20, R version 4.2.3 (https://www.R-project.org/), and RStudio version 2023.03.0\u2009+\u2009386 'Cherry Blossom'. Patients who died 1\u00a0week after engraftment or earlier (n\u2009=\u200910) or who were lost to follow-up (n\u2009=\u20092) were excluded from the analyses. There were no significant differences between the included and excluded patient groups regarding any of the clinical features (data not shown). A total of 66 patients were included in the final analysis, 43 from Hospital Sirio-Libanes and 23 from Hospital Sao Paulo. 
The main patient characteristics are presented in Table 1. Median Ikaros relative expression in mononuclear cells in the graft sample was 0.298 (range 0.002–25.683), while median Ikaros relative expression in mononuclear cells from the recipients' engraftment + 21 samples was 0.073 (range 0.002–1.870). There was no significant difference in median Ikaros relative expression between grafts obtained from bone marrow (0.536) or mobilized PB stem cells. Median Ikaros expression in the engraftment + 21 samples was not significantly different between patients who received post-transplant cyclophosphamide and those who did not. There was also no significant difference in the relative expression of Ikaros in the engraftment + 21 samples of patients who received antithymocyte globulin compared to those who did not. Patients were then divided into two groups according to Ikaros expression in the graft and in the recipients' PB, based on the ROC curves for moderate/severe cGVHD. A cutoff of 1.48 was used for Ikaros expression in the graft, and a cutoff of 0.79 was used for Ikaros expression in the recipients' PB. Moderate/severe cGVHD was more frequent in patients with higher Ikaros expression in the graft (71%) than in patients with lower expression, and patients with this complication had higher Ikaros expression than patients without it, independent of the use of ATG or post-transplant cyclophosphamide. In multivariate analysis, higher Ikaros expression in the engraftment + 21 sample remained an independent risk factor for moderate/severe cGVHD after adjusting for the same covariates. There was no significant association between higher Ikaros expression in the graft or in the recipients' engraftment + 21 samples and the other analyzed outcomes. In the present study, we identified that higher Ikaros expression in mononuclear cells in the recipients' PB after engraftment correlated with a higher risk of moderate/severe cGVHD. 
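The ROC-derived cutoffs above (1.48 in the graft, 0.79 in the PB) dichotomize Ikaros expression; one common way to pick such a cutoff is maximizing Youden's J, sketched below with hypothetical values (the study does not state which criterion was used, so Youden's J is an assumption here).

```python
def youden_cutoff(values, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    values : biomarker measurements (e.g. relative expression levels)
    labels : 1 if the outcome occurred (e.g. moderate/severe cGVHD), else 0
    Candidate cutoffs are the observed values; "value >= cutoff" is
    treated as test-positive.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        j = tp / positives + tn / negatives - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical expression values and outcomes:
cut, j = youden_cutoff([0.1, 0.2, 0.9, 1.5, 2.0], [0, 0, 1, 1, 1])
```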
Despite all advances in HSCT in recent years, including significant improvements in survival, cGVHD remains the most important cause of long-term morbidity and mortality23. The management of post-transplant immunosuppression is extremely difficult due to the delicate balance between the risks of opportunistic infections and relapse and the risk of GVHD24. Although several studies have investigated possible biomarkers for cGVHD, none are yet available for daily clinical practice25. Moderate/severe cGVHD, which was associated with higher Ikaros expression in our study, is generally treated with systemic immunosuppression and carries a high risk of morbidity and mortality. Predicting the risk of moderate/severe cGVHD is of particular interest because the possibility of early intervention might be effective in reducing associated long-term morbidity and complications. On the other hand, biomarkers are probably less important for mild cGVHD, since it is not associated with an increased risk of serious complications and is generally associated with the beneficial graft-versus-tumor effect26. In our study, we did not observe any association between Ikaros expression and the risk of aGVHD, whose pathophysiology differs significantly from that of cGVHD28. In aGVHD, there is an initial phase with tissue damage due to the conditioning regimen used and/or the presence of infectious complications, which causes the activation and proliferation of donor T lymphocytes stimulated by antigen-presenting cells, followed by an effector phase dependent on cellular and soluble inflammatory mediators such as TNF-α, IFN-γ, and IL-1, which causes tissue damage and activation of downstream pro-inflammatory pathways28. In contrast, cGVHD pathophysiology is characterized by a deficiency in the immune tolerance system, involving T and B cells in chronic inflammatory activity, with subsequent development of fibrosis; therefore, a wide variety of organs and tissues may be affected24. 
Some risk factors for cGVHD, such as the source of hematopoietic stem cells29 and the use of anti-thymocyte globulin, are less important for aGVHD31. Retrospective studies have shown that haploidentical stem cell transplantation with high-dose post-transplant cyclophosphamide also had lower rates of cGVHD, with no difference in the risk of aGVHD, when compared to related and unrelated donors33. These data support the idea that graft characteristics and events in the early phase after graft infusion might play an important role in cGVHD, and Ikaros expression might be a significant contributing factor to this pathophysiology. To the best of our knowledge, this is the first study that has explored the role of Ikaros in immune reconstitution after HSCT. Ikaros is one of the most important transcription factors involved in hematopoiesis regulation8, and it influences the differentiation of several cell lines, including lymphoid and myeloid lines34, with particular importance in B cell lymphoid development9. B cell lymphocytes and the presence of alloreactive antibodies can be an essential part of cGVHD pathophysiology36. Female donor-to-male recipient is a well-established risk factor for cGVHD, in part due to antibodies directed against epitopes encoded on the Y chromosome37. In addition, B cell activating factor (BAFF) is generally elevated in cGVHD, resulting in the rescue of autoreactive B lymphocytes and providing them with greater activity40. Ikaros may contribute to this increase in BAFF-induced B-cell activation. Patients with systemic lupus erythematosus also have elevated levels of BAFF, which leads to greater activation and proliferation of B lymphocytes; an in vitro study showed that reduction of Aiolos and Ikaros reduced this effect of BAFF41. 
In our study, we could not identify the specific cell type responsible for this increased Ikaros expression. Future studies focusing on specific analyses of B cells, CD4 and CD8 T lymphocytes, monocytes, dendritic cells, or other cell types could address this topic. On the other hand, the simpler collection and analysis procedures presented in this study could be more easily replicated and even used in clinical practice. Multivariate analysis confirmed the statistical significance of the engraftment + 21 sample but not of the graft sample. With a small number of patients, it could be hypothesized that the sample size was not sufficient to identify statistical significance for the graft. It can also be assumed that the graft, which has not yet been exposed to host antigens, may not have had sufficient stimulus for the activation of Ikaros. Further studies should be conducted to confirm the relevance of Ikaros expression in the graft. In patients with elevated Ikaros expression, aGVHD treatment could be intensified, given that they already have a higher risk of cGVHD42. Another possible intervention could be the intensification of prophylactic immunosuppression, such as the addition of sirolimus to standard cyclosporine plus mycophenolate mofetil-based prophylaxis, which has been shown to result in lower cGVHD rates without higher relapse rates43. Possibly, with better stratification of patients' GVHD risk, we can move from the current “one size fits all” approach to GVHD prophylaxis to a more personalized strategy. Donor selection might also be improved if the patient has more than one donor available, assuming that the donor's baseline Ikaros expression also influences cGVHD, a strategy that could be investigated in a prospective trial. Among the previously described biomarkers for cGVHD, only a few were collected from the graft and in the early phase after transplantation, as in our study. 
This characteristic is particularly favorable for decision-making during the management of immunosuppression after HSCT. One of the main risk factors for moderate/severe cGVHD is progression after aGVHD. The higher Ikaros expression in the graft than in the PB after engraftment provides evidence that the laboratory analysis is representative of its activity in vivo: the graft is an environment with greater cell proliferation and differentiation and is expected to have significantly higher Ikaros expression than the PB after engraftment. Our study has several limitations. First, the small number of cases reduces the power of subgroup analysis. Additionally, while our study cohort is very heterogeneous, it is quite comparable to other case series in the literature, including those analyzing data on cGVHD44. We had a slightly higher proportion of haploidentical transplants than most previous studies, but this seems to be a worldwide trend44. The limited number of each type of transplant (related, unrelated, and haploidentical) prevented a comprehensive subgroup analysis, which should be addressed in future studies. There was also a high proportion of reduced-intensity conditioning (RIC) regimens, largely explained by an institutional protocol that prospectively analyzed the results of exclusively reduced-intensity regimens in one of the transplant centers; additionally, all patients diagnosed with lymphoproliferative diseases, MDS/MPN, or aplastic anemia received only RIC regimens, at their physician's discretion. As we had a low number of patients who received myeloablative conditioning regimens, the possible effects of Ikaros expression in this scenario remain unclear. 
Another concern was pre-analytical laboratory errors, as samples were frozen for later analysis. To minimize the possible risk of interfering with Ikaros expression, samples were frozen for the shortest possible time after collection, and the same technician processed the samples both during freezing and in the final analysis. In conclusion, higher Ikaros expression in mononuclear cells in the PB after engraftment was significantly correlated with a higher risk of moderate/severe cGVHD, supporting its use as a prognostic biomarker. Further studies should be conducted to confirm these findings and to identify how to incorporate this marker into clinical practice."}
+{"text": "FTIR analysis confirmed the presence of C=O, O–H, and C–H bonds. The optimized condition of ultrasound-assisted extraction (UAE) was compared with the optimized condition of microwave-assisted extraction, and the result of ultrasound-assisted extraction was observed to be better. This study employs an artificial neural network (ANN) and particle swarm optimization (PSO) to maximize antioxidant and antimicrobial activity from green coconut shells. Phytochemical analysis was carried out on the extract obtained from ultrasound-assisted extraction performed at different combinations of time (10, 20, and 30 min) and temperature (30, 35, and 40 °C). • Coconut shell contains phytochemicals such as polyphenols, tannins, and phytosterols. • Coconut shells include flavonoids and phenolic acids. • Lignans, phytoestrogens in coconut shells, could regulate hormones. • Coconut tannins are antibacterial, antioxidant, and anti-inflammatory in nature. Oxidative stress can be brought on by numerous risk factors, including smoking, radiation, and environmental contaminants. By preventing the onset or growth of an oxidative chain reaction, antioxidants can postpone or prevent the oxidation of biomolecules in tissues and cells. Processing plant-based foods can produce by-products that are high in bioactive substances, such as phenolic compounds, which can have a variety of physiological effects, including those that are anti-allergenic, anti-microbial, cardioprotective, anti-inflammatory, antioxidant, and vasodilator [1,2]. In India, the coconut is among the five Devavarikshas (God's trees) and is considered a very useful tree. The uses of coconut can be judged by the Indonesian saying “There are many uses of coconut as there are days in the year”. Cocos nucifera L. 
is a member of the Arecaceae family, a tree of the palm family, and the edible portion is the coconut fruit. Typically, it is found on the sandy shorelines of tropical regions. Currently, only a few sectors and agricultural industries use coconut fiber waste from coconut shells. It is novel to use coconut shells as a reservoir of chemicals, particularly phenolic compounds. The old Soxhlet procedure is losing popularity to ultrasound extraction, which is being used in the food and pharmaceutical industries. UAE has emerged as a feasible alternative to conventional extraction techniques due to its high efficiency, minimal water usage, and low energy need. 2 2.1 Na2CO3, gallic acid, Folin–Denis reagent, tannic acid, aluminium chloride, quercetin, and potassium acetate were purchased from Himedia. Methanol of analytical grade was used for extraction. DPPH was purchased from GX Chemicals for antioxidant activity. The standard bacterial strains E. coli and S. aureus were obtained from IIRC, Integral University, Lucknow, to study the antimicrobial effect. The green coconut shell (sample) was gathered from the local shops of Lucknow, Uttar Pradesh (India). It was cut into small fragments and dried in a hot air oven for 24 h at 50 °C. A grinder was used to reduce the sample size, followed by sieving through a 150-μm sieve, and the powder was stored in a sterilized airtight container. The extraction was performed at various combinations of time and temperature: extraction times of 10, 20, and 30 min and temperatures of 30, 35, and 40 °C. 2.2.2 The extract from green coconut shell powder was also recovered using a microwave. The green coconut shell powder was mixed with the solvent methanol at a sample-to-solvent ratio of 1:50. 
The green coconut shell and solvent mixture was exposed to MAE at a power of 300 W and a time of 2 min for the extraction of phytochemicals. 2.2.3 The phytochemical extraction from the green coconut shell was conducted using the Box Behnken design (BBD). The three significant independent variables were sonication time (Xst), temperature (XT), and solid-solvent ratio (Xr). 2.3 The development of the ANN model is based on the natural neuron. Information processing in ANN models results from interactions between several simulated neurons. The artificial neural network's key elements include the input layer, hidden layer, summation function, threshold function, and output layer. By linking several processing elements with varied weights, artificial neural networks can be created. Each layer is connected to the next by interconnection strengths or weights. In a neural network, Wih are the input (LI) to hidden (LH) layer interconnection weights, whereas Who are the hidden (LH) to output (LO) layer interconnection weights. In order to minimize errors, weights are adjusted through backpropagation, which is accomplished by adjusting the initially estimated weight values (i.e., Wih and Who) according to the error between predicted and known outputs throughout the training phase. Different neurons make up the input (LI) and output (LO) layers, which represent the process's input and output, respectively. HLm, a linear function of the inputs (Xi) and weights (Wim) on these connections, stands for the input to the hidden layer for the mth neuron; its mathematical expression is HLm = Σi Wim Xi + Bh (Eqn. 1). Bias values are additionally added as inputs to the hidden (Bh) and output (Bo) layers when the ANN architecture is processed. The neural network calculation makes use of a transfer function called tansig. 
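The hidden-layer computation and tansig transfer function described above can be sketched as a plain forward pass; the layer sizes and weights below are illustrative only, not the fitted network from this study.

```python
import math

def tansig(x):
    # MATLAB's tansig transfer function is mathematically equivalent to tanh.
    return math.tanh(x)

def forward(inputs, w_ih, b_h, w_ho, b_o):
    """Forward pass of a small feedforward network.

    w_ih[m][i] : input-to-hidden weights (W_ih), b_h[m] : hidden biases (B_h)
    w_ho[o][m] : hidden-to-output weights (W_ho), b_o[o] : output biases (B_o)
    Hidden neuron m receives HL_m = sum_i W_im * X_i + B_h[m] and emits
    tansig(HL_m); outputs are linear combinations of the hidden outputs.
    """
    hidden = [tansig(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_ih, b_h)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w_ho, b_o)]
```

With all weights zero, each output simply equals its bias, which makes the wiring easy to verify by hand.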
Hence, the output of the hidden layer is obtained by applying the tansig transfer function to HLm, and the network output om is computed analogously from the hidden-layer outputs, the weights Who, and the bias terms. Five neurons, viz. yield (%), phenols (GAE/g), flavonoids (QE/g), tannins (TAE/g), and antioxidant activity (%), made up the output layer, and three neurons, viz. sonication time, temperature, and solid-solvent ratio, made up the input layer of the network. The number of neurons in the hidden layer was changed after each run to get the highest coefficient of determination (R2) and lowest mean square error (MSE) values. 2.4 For the purpose of optimizing the constructed models of the ultrasound-assisted extraction from coconut shell powder, the standard particle swarm optimization (SPSO) method was employed. The SPSO algorithm used the chosen model as its objective function. In MATLAB R2019a, the SPSO algorithm was programmed. SPSO is a sophisticated population-based evolutionary optimization technique that replicates the flocking behavior of birds. This optimization technique can successfully and efficiently identify solutions among a certain population because it is based on collaboration and competition. Each particle travels in an M-dimensional space Z, and its velocity and location are described as ui = (ui1, …, uim, …, uiM) and xi = (xi1, …, xim, …, xiM), respectively. xim denotes the particle's location in the mth dimension, vim denotes its velocity in the mth dimension, c1 and c2 denote acceleration constants, and rand denotes a random number between 0 and 1. The vector pi represents the position of particle i with the best fitness value, whereas the vector pg represents the position of the best particle. w stands for the inertia weight used to balance the effectiveness of global and local search. The search process of the particles takes place according to Equations (3) and (4): vim = w·vim + c1·rand·(pim − xim) + c2·rand·(pgm − xim), and xim = xim + vim. 2.5 2.5.1 The quantity of extract obtained was compared with the initial sample amount, known as the extraction yield (%), which measures how effectively the solvent extracts particular components from the original material. 
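The velocity and position updates of Equations (3) and (4) can be sketched as a minimal SPSO loop on a toy one-dimensional objective; the inertia weight and acceleration constants below are common textbook values, not the settings reported in this study.

```python
import random

def pso_minimize(f, n_particles=10, dims=1, iters=200,
                 w=0.729, c1=1.49445, c2=1.49445,
                 lo=-10.0, hi=10.0, seed=0):
    """Standard PSO. Updates follow Eqs. (3)-(4):
    v <- w*v + c1*rand*(pbest - x) + c2*rand*(gbest - x);  x <- x + v."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dims)] for _ in range(n_particles)]
    vs = [[0.0] * dims for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                    # each particle's best position
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for m in range(dims):
                vs[i][m] = (w * vs[i][m]
                            + c1 * rng.random() * (pbest[i][m] - xs[i][m])
                            + c2 * rng.random() * (gbest[m] - xs[i][m]))
                xs[i][m] += vs[i][m]
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val

# Toy objective: minimize (x - 3)^2; the swarm should settle near x = 3.
best, val = pso_minimize(lambda x: (x[0] - 3.0) ** 2)
```

In the real workflow the objective would be the (negated) ANN prediction rather than this toy quadratic.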
2.5.2 A 200 μL sample was mixed with 1.5 mL of diluted FC reagent (1:10, v/v) and incubated for 5 min at room temperature. After that, Na2CO3 was mixed with the solution. The absorbance of the solution was measured by spectrophotometer at 725 nm after 90 min of incubation. The same analytical process as for the samples was used to create the standard gallic acid range of 0–125 mg/ml. The result was expressed in milligrams of GAE per gram of material. 2.5.3 For tannin estimation, Na2CO3 (10 ml) was pipetted with one ml of sample extract. The mixture was properly mixed, made up to 50 mL with distilled water, and kept for 20 min or until a bluish-green color emerged. The identical procedure used with the samples was applied to tannic acid solutions in the 0–500 ppm range. After the bluish-green hue had fully formed, the absorbance of the tannic acid reference solutions, as well as of the samples, was measured using a spectrophotometer at 760 nm. Tannin concentration was determined using the tannic acid standard curve (TASC) and expressed in milligrams of tannic acid equivalents (TAE) per 100 g of dried material. 2.5.4 The aluminum chloride colorimetric technique, slightly modified, was used for flavonoid estimation. 2.5.5 In 100 ml of methanol, 0.0078 g of DPPH was dissolved (0.2 mM). An aliquot of 1.5 ml of the sample and 1.5 ml of the DPPH solution in methanol were combined. The samples were incubated at room temperature for 30 min in the dark, and the solution's absorbance was determined at 517 nm. The assay was carried out in the same way for the control, except that methanol was used in place of the sample solution. Analysis was done in triplicate for each extract. The scavenging activity was calculated as in Eq. (7): scavenging (%) = ((Acontrol − Asample)/Acontrol) × 100. 2.6 Antimicrobial activity was evaluated by the agar well diffusion method28. 
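The DPPH scavenging calculation of Eq. (7) is a one-liner; the absorbance values in the example are illustrative, not measurements from this study.

```python
def dpph_scavenging_percent(abs_control, abs_sample):
    """DPPH radical scavenging activity (%), in the usual form of Eq. (7):
    ((A_control - A_sample) / A_control) * 100.
    abs_control : absorbance at 517 nm of DPPH + methanol (no extract)
    abs_sample  : absorbance at 517 nm of DPPH + extract
    """
    return (abs_control - abs_sample) / abs_control * 100.0

# Illustrative absorbances: a drop from 1.0 to 0.25 means 75% scavenging.
activity = dpph_scavenging_percent(1.0, 0.25)
```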
Mueller–Hinton agar was used for the antimicrobial assay. 2.7 FTIR is an important tool for determining the presence of various chemical bonds and functional groups in a sample. The distinguishing characteristic of chemical bonds shown in the annotated spectrum is the wavelength of light absorbed, so the chemical bonds in a substance can be identified by reading the infrared absorption spectrum. FTIR analysis was conducted using the dried powdered methanolic extract from green coconut shells. To create translucent sample discs, 10 mg of dried crude extract was encapsulated in 100 mg of KBr pellet. Each extract powder sample was scanned in an FTIR spectrophotometer over the range 400–4000 cm−1 with a resolution of 4 cm−1. 2.8 The findings of each experiment were represented as the mean value ± standard deviation. Model development, optimization, and other statistical analyses were performed using Excel 2010, Design Expert (version 7.0.0), and MATLAB R2019a. 3 3.1 At various levels of the independent variables, the extraction yield ranged from 19.98 to 36.1 %, TPC ranged from 7.08 to 33.46 GAE/g, TFC ranged from 7.08 to 33.46 QE/g, TTC ranged from 70.5 to 141.09 TAE/g, and antioxidant activity ranged from 49.98 to 66.1 % for UAE. 3.2 Based on the BBD, the yield response for extracts of green coconut shell using UAE was optimized, and the data were fitted to a second-order polynomial equation. The ANOVA results, as shown in the corresponding table, gave an R2 of 0.9964, meaning that 99.6 % of the variation could be explained by the fitted model. The adjusted coefficient of determination (Adj R2), which is comparable to R2, was 0.9934 and indicated that the observed and predicted values were highly correlated. Additionally, a low coefficient of variation (C.V. = 1.25 %) showed that the experimental results could be trusted with a high level of precision and that there was little variance in the mean value. 
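The fit statistics quoted throughout this section (R2, Adj R2, C.V.) can be computed from observed and predicted responses as sketched below; the data values in the example are made up, not the study's.

```python
def fit_statistics(observed, predicted, n_params):
    """R^2, adjusted R^2, and coefficient of variation (%) for a fitted model.

    n_params : number of model terms excluding the intercept (p), with
    Adj R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1) and
    C.V.%    = 100 * sqrt(residual mean square) / mean(observed).
    """
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - yhat) ** 2 for y, yhat in zip(observed, predicted))
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    cv = 100.0 * (ss_res / (n - n_params - 1)) ** 0.5 / mean_obs
    return r2, adj_r2, cv
```

A perfect fit returns R2 = Adj R2 = 1 and C.V. = 0; small residuals pull all three away from those ideals.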
From the equation, the negative effects of sonication time and temperature and the positive effect of solid-solvent ratio could be observed. 3.3 The BBD was used to optimize the phenol response, and the data from the BBD were fitted to a second-order polynomial equation. The ANOVA results, as shown in the corresponding table, gave a coefficient of determination (R2) of 0.9970. The Adj R2, which is comparable to R2, was 0.9937 and indicated that the observed and predicted values were highly correlated. Additionally, a low coefficient of variation (C.V. = 2.71 %) showed that there was little variation in the mean value and that the experimental results could be trusted with a high level of precision. From the equation, the negative effects of sonication time and temperature and the positive effect of solid-solvent ratio could be observed. 3.4 The BBD was used to optimize the flavonoid response, and the data from the BBD were fitted to a second-order polynomial equation. The ANOVA results, as shown in the corresponding table, gave a coefficient of determination (R2) of 0.9969. The adjusted determination coefficient (Adj R2), which is comparable to R2, was 0.9935 and indicated that the observed and predicted values were highly correlated. Additionally, a low coefficient of variation (C.V. = 3.67 %) indicated that the mean value had little variation and that the highly precise experimental results could be believed. From the equation, the negative effects of sonication time and temperature and the positive effect of solid-solvent ratio could be observed. 3.5 The tannin response was optimized using the Box Behnken design, and the data from the BBD were fitted to a second-order polynomial equation. The ANOVA results, as shown in the corresponding table, gave a coefficient of determination (R2) of 0.9978. The Adj R2, which is comparable to R2, was 0.9953 and indicated that the observed and predicted values were highly correlated. 
Additionally, a low coefficient of variation (C.V. = 1.04 %) suggested that the mean value had little variation and that the highly accurate experimental results could be relied on. From the equation, the negative effects of sonication time and temperature and the positive effect of solid-solvent ratio could be observed. 3.6 The BBD was used to optimize the antioxidant activity response, and the data from the BBD were fitted to a second-order polynomial equation. The ANOVA results, as shown in the corresponding table, gave a coefficient of determination (R2) of 0.9973, indicating that the fitted model could explain 99.7 % of the variation. The Adj R2, which is comparable to R2, was 0.9943, indicating a strong relationship between the experimental and predicted values. Furthermore, a low coefficient of variation (C.V. = 0.552 %) suggested that the mean value had little variation and that the highly accurate experimental results could be relied on. From the equation, the negative effects of sonication time and temperature and the positive effect of solid-solvent ratio could be observed. Extended exposure to elevated temperatures can result in the deterioration of thermally vulnerable antioxidants. For instance, certain antioxidants, such as vitamin C, are vulnerable to heat and can be destroyed when subjected to extended sonication at high temperatures. Excessive sonication can also produce reactive oxygen species (ROS) as a result of cavitation, the rapid creation and collapse of bubbles in the liquid. ROS can induce oxidative harm to antioxidants and other substances, hence diminishing their efficacy. The model's validity could be confirmed by the non-significant lack-of-fit value. For the ANN modelling, time, temperature, and solid-solvent ratio were the input values, while yield (%), phenols (GAE/g), flavonoids (QE/g), tannins (TAE/g), and antioxidant activity (%) were the output values. As a result, there were three and five neurons in the input and output layers, respectively. 
Trial and error were used to determine the number of neurons in the hidden layer. The ANN algorithm used 2000 iterations per run, with learning rates ranging from 0.5 to 1. A total of 20,000 NNs were produced after 2000 iterations were performed ten times for each ANN architecture. The constructed MATLAB code was used to fit the experimental data after it had been divided into three equal sections (training, validation, and testing). The mean square error (MSE) and coefficient of determination (R2) were calculated for each run of a given ANN architecture. The network with the highest R2 and lowest MSE values was chosen as the best network. The best ANN architecture for the ultrasound-assisted extraction process had 3 neurons in the input layer, 4 neurons in the hidden layer, and 5 neurons in the output layer (3-4-5); the architecture is shown in the corresponding figure. When the stopping criterion was reached, the algorithm was halted. The optimum process parameters as determined by hybrid ANN-PSO were a time of 15 min, a temperature of 33 °C, and a solid-solvent ratio of 24 (w/v). The experimental and predicted values under the optimum process conditions of the extraction process are presented in the corresponding table. The optimal ANN architecture, 3-4-5, with time, temperature, and solid-solvent ratio as inputs and yield (%), phenols (GAE/g), flavonoids (QE/g), tannins (TAE/g), and antioxidant activity (%) as outputs, was further employed to achieve process parameter optimization. The maximization of all the responses was accomplished by formulating the fitness function of the optimization process. For the optimization procedure, the model parameters derived from the ANN were employed. Consequently, the 3-4-5 ANN architecture's weight and bias values were taken as input for the particle swarm optimization (PSO). According to Khawas et al. (2015), hybrid ANN-GA optimization is a similar notion. The standard particle swarm optimization (SPSO) algorithm was used to perform the PSO, and the SPSO algorithm was created in MATLAB R2015a. The algorithm's starting population size was 40 particles. 
The inertia weight values varied with the generations from 0.4 to 0.6, as did the acceleration constants c1 and c2. 3.9 The maximum inhibition zone against E. coli was 9.6 mm and the minimum was 3.7 mm. The antimicrobial activity of the extracts was examined based on the diameters of clear inhibition zones surrounding the disks; if there is no inhibition zone, it is implicit that the extract did not possess antimicrobial activity towards the studied bacterial strains. Thus, the antimicrobial effect of the extract was demonstrated when the ultrasound-assisted extraction technique was employed. The visual representations of the inhibition zone results are shown in the corresponding figure (3.7 mm in panel A, whereas 5.8 mm in panel B). 3.10 In ultrasound-assisted extraction, a broad peak at wavenumber 3386.44 cm−1 showed O–H bonding with cumulative C–H stretching. A sharp peak at wavenumber 1614.24 cm−1 represented C=O bonding. A small peak at wavenumber 1056.99 cm−1 depicted C–O bonding, whereas in the case of the microwave-assisted extract a strong broad peak was observed at a wavenumber of 3347.81 cm−1, showing the stretching of C–H and elongation of O–H bonds. A sharp peak at 1613.20 cm−1 represented C=O bond elongation. According to researchers, the prominent peak observed at a wavenumber of 3370 cm−1 in the Fourier Transform Infrared (FTIR) spectrum of the extract derived from green coconut shells can be attributed to the stretching vibration of phenolic hydroxyl groups, namely the oxygen-hydrogen (O–H) bond30, while other peaks can be attributed to the stretching frequencies of the C–H bonds, and the peak near 1056.99 cm−1 showed the presence of the C–O group, as illustrated in the corresponding figure. Data from the infrared analysis may aid in understanding the sample's chemical composition. 
An example of evidence for the stretched hydroxyl groups (O–H) in aliphatic and phenolic compounds was the absorption peak at 3400 cm−1. 4 In this study, an integrated approach of artificial neural network (ANN) and particle swarm optimization (PSO) was used to standardize the ultrasound-assisted extraction (UAE) of phytochemical compounds from the green coconut shell. Results showed significant effects of sonication time, temperature, and the solid-solvent ratio on the yield, phenol, tannin, and flavonoid content and on antioxidant activity; these three extraction factors were examined in this study. Furthermore, comparative analysis showed that the phytochemical content of green coconut shell extract obtained using MAE was significantly lower than with UAE. The overall findings of the present investigation demonstrated that UAE is a viable and successful method for extracting phytochemicals from green coconut shells, as well as indicating the utility of green coconut shells. In the popular waste material of green coconut shells, we quantitatively found certain promising phytochemicals. Every year, phytochemicals, which are necessary raw materials for the food and pharmaceutical industries, are imported; a sizable sum of foreign currency could be saved if phytochemicals were produced from green coconut shells. Annually, people worldwide dispose of one million metric tons of green coconut shells, so the significance of this approach will increase in relation to concerns about environmental pollution. Hence, the implementation of a strategy to establish an interconnected sector focused on processing waste materials from plants and extracting extremely valuable phytochemicals, which are crucial for the food and pharmaceutical industries, would provide significant benefits. 
The optimum process parameters as determined by hybrid ANN-PSO were a sonication time of 15 min, a temperature of 33 °C, and a solid-solvent ratio of 24 (w/v), for a maximum extraction yield of 38.41 %, a total phenol content of 40.99 GAE/g, a total flavonoid content of 36.13 QE/g, a total tannin content of 176.73 TAE/g, and an antioxidant activity of 68.39 %. The extract also exhibited antimicrobial activity against S. aureus and E. coli. No data was used for the research described in the article. Project No. TKP2021-NKTA-32 has been implemented with support from the National Research, Development, and Innovation Fund of Hungary (10.13039/501100012550), financed under the TKP2021-NKTA funding scheme. Poornima Singh: Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation. Vinay Kumar Pandey: Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation. Sourav Chakraborty: Writing – original draft, Validation, Methodology, Investigation, Formal analysis, Data curation. Kshirod Kumar Dash: Writing – review & editing, Writing – original draft, Supervision, Resources, Project administration, Funding acquisition, Formal analysis, Data curation, Conceptualization. Rahul Singh: Writing – review & editing, Writing – original draft, Validation, Supervision, Resources, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Ayaz Mukarram Shaikh: Writing – review & editing, Validation, Software, Methodology, Investigation, Data curation. Kovács Béla: Writing – review & editing, Resources, Project administration, Funding acquisition, Formal analysis, Data curation. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."}
+{"text": "Additionally, the meat shearing force of lambs fed a diet with 16% grape pomace (GP) was significantly higher than that of the 24% GP group (p < 0.05), while the 24 h meat color parameter a* value of the control group was notably higher than that of the 8% GP group (p < 0.05). In addition, compared to the control group, lambs fed with a diet containing 16% GP had higher levels of oleic acid (C18:1n-9c), linoleic acid (C18:2n-6c), behenic acid (C22:0), tricosanoic acid (C23:0), lignoceric acid (C24:0), and conjugated linoleic acid (CLA), at a ratio of \u2211CLA/TFA, \u2211n-6, \u2211MUFA, and \u2211PUFA in the longissimus dorsi muscle (p < 0.05), but the reverse case was applicable for Total Volatile Basic Nitrogen (TVB-N) content (p < 0.05). GP supplementation did not substantially affect the expression of stearoyl-CoA desaturase (SCD), peroxisome proliferator activated receptor alpha (PPAR\u03b1), and peroxisome proliferator-activated receptor gamma (PPAR\u03b3) genes (p > 0.05). The findings indicated that incorporating grape dregs in the diets of fattening lambs leads to notable enhancements in meat production and the antioxidant capacity of lamb meat, and effectively extends the shelf life of the meat.This study was conducted to evaluate the potential effects of dietary grape residue levels on the slaughter indicators, meat quality, meat shelf-life, unsaturated fatty acid content, and expression of fatty acid deposition genes in the muscle of lambs. Sixty 30-month-old male Dorper and Small-Tailed Han F1 hybrid lambs were assigned to a single factor complete randomized trial design and fed with four different diets including 0%, 8%, 16%, and 24% grape dregs, respectively. The findings regarding meat production efficacy in the lambs revealed substantial differences. 
The control group showed notably lower dressing percentage, carcass weight, net meat weight, carcass net meat percentage, meat-to-bone ratio, relative visceral and kidney fat mass, and rib eye area compared to the other groups (p < 0.05). The nutritional richness of mutton, including its high content of protein, iron, zinc, and vitamin B, is gaining recognition and preference among consumers. Several studies have examined the supplementation of ruminant diets with grape by-products such as grape seed and grape residue. Stearoyl-CoA desaturase (SCD) serves as a crucial enzyme capable of catalyzing the creation of double bonds from both saturated and unsaturated fatty acids, acting in the liver and mammary microsomes, and it plays a pivotal role in regulating the endogenous production of conjugated linoleic acid (CLA) in animals. Peroxisome proliferator-activated receptors (PPAR) can affect the expression of the SCD gene. The factors that influence the fatty acids in the meat are primarily the hardness of adipose tissue, shelf life (oxidation of lipids and pigments), and the composition of aromatic substances. Building on prior research findings, this study sought to enhance the quality of lamb meat, prolong its shelf life, and modify the composition and proportion of fatty acids and CLA by introducing grape residue into the diets of fattening lambs, and to examine the expression of SCD, PPAR\u03b1, and PPAR\u03b3 in lamb tissues. These findings hold significant promise for the future utilization of grape pomace in both the food and feed industries. 
Additionally, we examined the potential impact of grape dregs on the expression of genes associated with CLA deposition, including SCD, PPAR\u03b1, and PPAR\u03b3. All samples were collected strictly following the ethical code (GSAU-Eth-AST-2022-035) approved by the Animal Welfare Committee of Gansu Agricultural University. Fresh grape dregs (the ratio of grape skin to grape seed is 1:1.22) were obtained from Shiyanghe Winery of Minqin in Gansu Province, and were dried naturally, crushed, and used in the experiments after passing through a 0.5 mm sieve. Sixty Dorper \u00d7 Small-Tailed Han F1 hybrid sheep were selected in a single-factor completely randomized trial design. Following the principle of equal weight distribution, the sheep were evenly allocated into 4 dietary treatment groups of 15 sheep each. These groups were fed diets comprising 0%, 8%, 16%, and 24% grape dregs, respectively. After measuring the dry matter, crude protein, calcium, phosphorus, and condensed tannin in the grape residue and feed raw materials, the trial diet formula was developed according to 90% of the NRC (2007) nutrient requirements for fattening commercial lambs. At the end of the feeding trial, the lambs were slaughtered. The measurements recorded after the slaughter included carcass weight, bone weight, net meat weight, carcass net meat percentage, dressing percentage, rib eye area, growth rate (GR) value, as well as weights of visceral fat, kidney fat, tail fat, and dorsal fat, in accordance with the methodology recommended by Orzuna-Orzuna et al. The longissimus dorsi muscle of lambs was collected and placed into a 4 \u00b0C refrigerator to determine both meat quality and shelf life following slaughter. The pH value was measured with a pH meter after 45 min and 24 h. The water loss rate was measured with a WW-2A strain gauge unconfined force meter. The cooked meat rate and dripping loss were analyzed based on a method recommended by Orzuna-Orzuna et al. The left longissimus dorsi muscle of each sheep was collected within 1\u20132 h following slaughter. 
Approximately 300 g samples were divided into 11 distinct portions, which were vacuum-packed and stored in a refrigerator at 4 \u00b0C for subsequent testing. Meat color, pH, and total volatile basic nitrogen (TVB-N) were measured at intervals of 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 days. The analysis of volatile basic nitrogen was conducted following the method outlined in the Analysis of Health Standards for Meat and Meat Products (GB/T5009.44-2003). Each sample was tested in triplicate. The observations indicated that the diets containing 16% and 24% grape residue effectively extended the shelf life of the meat. However, lamb meat from animals fed 24% grape residue exhibited a notably higher drip loss compared to those fed 16% grape residue. Therefore, the control and 16% GP groups were selected to study the fatty acid content and expression of various genes related to fatty acid deposition. Approximately 50 g of the right longissimus dorsi was collected from the lambs of the control and 16% GP groups, vacuum-packed, and stored at \u221280 \u00b0C for determination of fatty acids as well as CLA.
Fatty acid extraction: First, an appropriate amount of sample was weighed and ground into a powder using liquid nitrogen. The powder was then placed into a hydrolysis tube, followed by the addition of 5.3 mL methanol and 0.7 mL 10 mol/L KOH solution. The mixture was shaken vigorously until the sample was completely immersed in the liquid. It was then put into a 55 \u00b0C water bath for 1.5 h, shaken briefly every 10 min, and allowed to cool. Next, 0.6 mL of 24 mol/L H2SO4 was added and the mixture was kept in a 55 \u00b0C water bath for another 1.5 h, shaking briefly every 15 min, followed by cooling to room temperature. Subsequently, 3 mL of n-hexane was added, and the solution was vortexed for 5 min before it was transferred to a 15 mL centrifuge tube. After centrifugation at 1500 r/min for 5 min, 1 mL of the supernatant was collected and analyzed using gas chromatography\u2013mass spectrometry (GC\u2013MS).
Chromatographic conditions: The analysis utilized an SP2560 chromatographic column (100 m \u00d7 0.25 mm \u00d7 0.20 \u00b5m). The injection port temperature was set at 220 \u00b0C, utilizing a 9:1 split flow mode for injection. The temperature program for the chromatographic column proceeded as follows: the initial column temperature was maintained at 120 \u00b0C for 5 min; it was then increased to 200 \u00b0C at a rate of 3 \u00b0C/min and held for 10 min; finally, the temperature was elevated to 240 \u00b0C at a rate of 1.5 \u00b0C/min, resulting in an operational time of 78.333 min. Helium was employed as the carrier gas.
Mass spectrum conditions: In the analysis, the full scan mode was utilized with a 9 min solvent delay. The gain factor was set to 10. The ion source temperature was maintained at 230 \u00b0C, with a maximum of 250 \u00b0C. The quadrupole temperature was set at 150 \u00b0C, with a maximum of 200 \u00b0C.
The longissimus dorsi muscle, back fat, and liver from 10 lambs in groups A and C were immediately collected, placed in liquid nitrogen, and then sent back to the laboratory for storage at \u221280 \u00b0C for the total RNA extraction. TaKaRa MiniBEST Universal RNA Extraction Kit (Code: No.9767) was utilized to extract RNA from the sheep tissues (operated according to the instructions provided in the kit). The RNA concentration and purity were determined with an ultramicro spectrophotometer. This analysis indicated varying extraction efficiencies of genomic RNA in different tissues. However, the concentration consistently exceeded 80 ng/mL. The extracted RNA exhibited an OD260/OD280 ratio ranging between 1.8 and 2.0, and an OD260/OD230 ratio of around 2.0, meeting the prerequisites for subsequent analyses. To assess RNA integrity, 1% agarose gel electrophoresis with EB staining was performed. 
Under ultraviolet light, bright 28S and 18S RNA bands were observed, with the former being twice as bright as the latter, indicating the absence of sample degradation. The sheep \u25b39 desaturase (SCD) gene, peroxisome proliferator-activated receptor alpha (PPAR\u03b1), peroxisome proliferator-activated receptor gamma (PPAR\u03b3), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA sequences were retrieved from NCBI. The primers for these genes were designed with PrimerPremier5 primer synthesis software using their gene mRNA as templates and synthesized by Dalian Bao Biological Engineering Co., Ltd., Dalian, China. The primer sequences of the target genes and the internal reference gene are shown in the corresponding table. Before the Light Cycler reaction, all the genes were tested with conventional PCR primers to verify their amplification products. For the determination of SCD, PPAR\u03b1, and PPAR\u03b3 gene expression, a 10 \u03bcL real-time PCR reaction system was prepared according to the specifications outlined in the corresponding table. According to the Ct values obtained by the real-time fluorescence quantitative PCR method, the 2\u2212\u0394\u0394CT method was used to measure expression of the SCD, PPAR\u03b1, and PPAR\u03b3 genes. The test data were analyzed with an independent samples t-test and one-way ANOVA using SPSS 23.0 statistical software; p < 0.05 indicated significant differences. When the difference was significant, Duncan\u2019s method was used for multiple comparisons. The potential effects of GP levels in diets on lamb meat production are shown in the corresponding table. Additionally, the carcass weight and net meat weight of the 16% GP group surpassed those of the other groups (p < 0.05). Interestingly, lambs on a diet containing 24% GP showed a higher slaughtering rate, bone-to-meat ratio, and relative mass of visceral, kidney, and dorsal fats compared to the other groups (p < 0.05). 
The 24 h meat color parameter a* value of the control was higher than that of the 8% and 16% GP groups (p < 0.05). The effects of dietary GP levels on lamb meat quality, pH, and the various flesh parameters of lamb meat are shown in the corresponding tables (p < 0.05). However, the TVB-N of the 16% and 24% GP groups was higher than that of the 8% GP group on the first day of storage (p < 0.05). The TVB-N content of the control and 8% GP group lamb stored in vacuum packaging in a refrigerator at 4 \u00b0C for 0\u20137 days was less than 15 mg/100 g, as specified in the national standard GB2707-2016, and after the 8th day the TVB-N content exceeded the value specified in the national standard. The TVB-N content of the 16% and 24% GP group lamb stored in a refrigerator at 4 \u00b0C for 0\u20138 days was below 15 mg/100 g, and after the 9th day the TVB-N content exceeded the value specified in the national standard. The effects of dietary GP levels on TVB-N of lamb meat are depicted in the corresponding figure, and the effects on fatty acid content in the longissimus dorsi muscle are shown in the corresponding table (p < 0.05). The mRNA expression of SCD in the liver, longissimus dorsi muscle, and dorsal fat of lambs in the experimental group supplemented with 16% GP was 94.87%, 61.64%, and 121.29% higher than that in the control group, respectively. The mRNA expression of PPAR\u03b1 in the liver and longissimus dorsi muscle of the experimental group fed 16% grape residue was observed to be 18.75% and 13.49% higher, respectively, compared to the control group. 
Meanwhile, the expression of the PPAR\u03b3 gene in the same experimental group was notably higher, showing a 196.89% increase in the liver and a 28.43% increase in the longissimus dorsi muscle compared to the control group. The important indexes for measuring animal growth and slaughtering are dressing percentage and net meat weight. In this study, the dressing percentage of lambs fed diets containing 8%, 16%, and 24% GP was 48.84%, 48.77%, and 49.18%, respectively, which was significantly higher than that of the control and was consistent with the report of Tian et al. The results indicated that the rib eye area, GR value, and relative mass of the visceral fat, kidney fat, and dorsal fat in lambs fed with a GP-containing diet were significantly higher than those of the control, and the rib eye area of lambs in this experimental group was also remarkably higher than that of the lambs reported in the previous study. Interestingly, several previous studies have reported that some characteristics related to meat quality, such as pH45min, L*, b*, drip loss, cooking loss, and marbling score, were not affected by dietary supplementation of grape seed extract (GSPE). In addition, the change in meat color is closely linked to lipid oxidation, and hemoglobin acts as the catalyst for lipid oxidation. Previous studies have shown that lipid peroxidation and reduced meat color in ruminant meat are primarily affected by fatty acid composition and antioxidants present in the tissues. There has been only limited research on the impact of condensed tannins on fatty acids in lamb, particularly regarding the influence of grape dregs on lamb fatty acids. 
Tannins play a significant role in altering fatty acid deposition within the rumen\u2019s biohydrogenation pathway. In our work, it was found that the content of C14:1 (myristoleic acid) in the 16% GP group (4.99 mg/100 g fresh meat) was also substantially higher than that of the control (5.45 mg/100 g fresh meat). Therefore, it can be speculated that the condensed tannins contained in the 16% GP diet effectively increased the expression of SCD in the muscle. The mechanism of action of tannins on the SCD gene is still unclear. Besharati et al. assumed that an indirect effect of tannins on SCD gene expression in muscle cannot be excluded. SCD can effectively catalyze the endogenous synthesis of MUFA and c9-t11 CLA, and an increase in SCD expression and its activity can eventually lead to an increase in both MUFA and c9-t11 CLA in ruminant meat. The results revealed that 16% grape residue can significantly increase the expression of SCD mRNA in the liver, longissimus dorsi, and adipose tissue of lamb, which was consistent with the previous report of Vasta et al. PPARs, as ligand-activated transcription factors, play a crucial role in governing the expression of various genes implicated in diverse lipid metabolism pathways. These pathways encompass fatty acid transport, intracellular fatty acid binding, degradation (including \u03b2-oxidation and \u03c9-oxidation), cellular absorption, and storage of lipids. Current research on the PPAR gene in ruminants is predominantly based on PPAR\u03b3, but there are only a few studies related to PPAR\u03b1 reported in the literature. Yang et al. reported that the expression of the PPAR\u03b3 gene in the longissimus dorsi muscle was negatively correlated with intramuscular fat content in sheep. Muhlhausler et al. 
found that the PPAR\u03b3 gene was associated with the fat thickness on the back of lambs. The results showed that the mRNA expression of PPAR\u03b1 in both the liver and dorsal fat of the experimental group fed 16% GP was 18.75% and 13.49% higher, respectively, than that of the control group, and PPAR\u03b3 gene expression was 196.89% and 28.43% higher than that of the control group, respectively. Thus, supplementing feed with grape dregs was observed to enhance the expression of PPAR genes in both the liver and dorsal fat. This finding holds significance for the potential uses of grape pomace extract in the food and feed industry. However, a more detailed exploration is needed to understand the precise regulatory mechanisms behind this effect. The inclusion of 16% and 24% grape dregs in the diet notably extended the shelf life of lamb meat. Specifically, the presence of 16% grape dregs significantly boosted conjugated linoleic acid (CLA) levels and the concentrations of various unsaturated fatty acids, as well as saturated fatty acids (C23:0 and C24:0), in the longissimus dorsi muscle of the lambs. This increase ultimately led to a significant rise in the \u2211CLA/Total Fatty Acid (TFA) ratio."}
+{"text": "Infancy is a highly sensitive period for mental, motor, and emotional development. Such development is strongly dependent on environmental factors and particularly on the quality of early parent-child interaction. Perinatal obstetrical and emotional factors may have effects on the first encounter with the child, thus setting the basis of parent-child interaction in a transactional perspective. Early parent-child interaction is a dynamic, bidirectional, and synchronized process, in which the infant plays a very active part. In all likelihood, parental or child physical disorders or illnesses will have a significant impact on the ability of both the parents and child to synchronize with each other. Recent discoveries on the biological components of mother-fetus and mother-child synchrony have given new insights into the links between parent-infant interactions and children's neurological and emotional development. Medical advances in obstetrics as well as in neonatal intensive care have opened up the possibility to investigate these questions in new populations of infants and parents who had not been previously studied, for example, very early premature babies at 24 weeks gestational age, children operated in utero or at birth for cardiac defects, or severely disabled mothers who had no access to maternity care. All these advances have allowed for much more favorable outcomes for the infants in question than could have been imagined before. This medical progress has opened up a whole new field for child mental health and psychiatry, with a series of new ethical issues needing to be addressed. Early perinatal parent-child interventions may be highly useful for infants born in these situations but studies are still rare in this new field. 
To guide clinical practice and treatment, further research is needed. The objectives of this Research Topic are to present data and studies on perinatal and early life parent-child interactions from the prepartum period up to 2 years of age, based on studies of families in these specific circumstances. The contributions address prematurity (Boissel et al.; Dollberg et al.), specific treatment approaches, prenatal exposure to drugs, prenatal stress, and early symptom formation. Maternal sensitivity is affected by prematurity. Scales seeking to measure parent-premature baby interactions are rare; one scale evaluated here presents good internal consistency and interrater reliability and could therefore be a useful tool for both care professionals and researchers. The development of early psychotherapy to foster maternal sensitivity during children's hospitalization in neonatal intensive care units (NICU) is a major challenge for the prevention of subsequent developmental and mental disorders in the child. Dollberg et al., from Israel, analyze the interaction between premature babies and their parents through the prism of parental reflective functioning and its influence on parenting stress. Their article \u201cParental reflective functioning as a moderator of the link between prematurity and parental stress\u201d helps us to better understand this aspect of parent-child interactions, with perspectives to help alleviate the parents' burden. Bustamante Loyola et al., from Santiago in Chile, describe in their article \u201cThe impact of an interactive guidance intervention on sustained social withdrawal in preterm infants in Chile: randomized controlled trial\u201d the efficacy of a standardized behavioral intervention by pediatricians on sustained social withdrawal in preterm infants. 
They show how this intervention reduces the prevalence and the possible associated negative outcomes of children's withdrawal behavior. During their first year, preterm infants have a higher probability of developing sustained social withdrawal than infants born at full term. Boissel et al.'s study, \u201cA narrative review of the effect of parent-child shared reading in preterm infants,\u201d shows the positive effect of shared reading sessions on the physiological parameters of preterm infants in neonatal intensive care. An original therapeutic mediation, book-reading to a premature child in intensive care, was tested in a neonatal intensive care unit. The team proposed to the parents to read out loud to their premature children in the NICU. This therapeutic mediation was found to be feasible and well-accepted as it provided concrete support for positive parenting in this highly stressful context. Vitte et al., from France, created an innovative mother-baby unit integrated into a neonatal care service, and describe this new unit and its function in their article: \u201cPanda unit, a mother-baby unit nested in a neonatal care service.\u201d Strategies for handling other child pathologies that impair early interactions are also explored. Cleft lip and palate, for example, clearly impose a higher risk of physical and emotional distress in infants, with a major impact on parent-infant relationships. P\u00e9rez Mart\u00ednez et al., in their article \u201cThe prevalence of social withdrawal in infants with Cleft Lip and Palate: the feasibility of the full and the modified versions of the Alarm Distress Baby Scale (ADBB),\u201d show the value of the ADBB scale for evaluating social withdrawal as a sign of distress in these infants. Indeed, their study found a relatively high level of social withdrawal in infants with cleft lip and palate, with a higher prevalence in \u201csimple\u201d lip cleft compared to cleft lip associated with palate clefts. 
The epidemiological study developed by Benevent et al.'s team, from France, in their article \u201cPrenatal drug exposure in children with a history of neuropsychiatric care: a nested case-control study,\u201d highlights links between children's neuropsychiatric symptoms and their exposure to nervous system drugs in utero. The authors provide guidelines for assessing the long-term neuropsychiatric effects after prenatal medication exposure, without focusing on psychotropic medications. Encouraging research in this new field of perinatality, a transversal and transdisciplinary field at the crossing of obstetrics and pediatrics, as well as child and adult psychiatry, is clearly a key issue. All of the present articles will hopefully lay the ground for a whole new area of study to be developed over the coming years. SV wrote the first draft of the manuscript. AG, AB, and MT-T contributed to manuscript revision, read, and approved the submitted version."}
+{"text":
\ No newline at end of file