diff --git "a/deduped/dedup_0184.jsonl" "b/deduped/dedup_0184.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0184.jsonl" @@ -0,0 +1,39 @@ +{"text": "In this paper the biochemical properties of the antigens detected by six murine monoclonal antibodies (MAbs) are described. These MAbs react selectively with the multidrug-resistant small cell lung cancer (SCLC) cell line, H69AR, compared to its sensitive parent cell line, H69 . Because H69AR cells do not overexpress P-glycoprotein, the antigens detected by these MAbs may be markers for non-P-glycoprotein-mediated mechanisms of resistance. We found that the 36 kDa protein precipitated by MAb 3.186 is phosphorylated and has a pI of approximately 6.7. The 55 kDa protein precipitated by MAb 3.50 is also phosphorylated and has a pI of approximately 5.7. Several observations suggest that MAbs 3.80, 3.177 and 3.187 recognise the same 47 kDa molecule and hence only MAb 3.187 was characterised further. This MAb precipitates an acidic protein which runs as a streak on isoelectric focusing gels. The 25 and 22.5 kDa cell surface proteins precipitated by MAb 2.54 both have a pI of approximately 7.6. Treatment of immunoprecipitates with glycosidase F indicated that none of the proteins detected by MAbs 2.54, 3.187, 3.50 and 3.186 have large N-linked carbohydrates. The peptide nature of the epitopes detected by MAbs 2.54 and 3.186 was unequivocally demonstrated by precipitation from in vitro translation products of H69AR RNA. The antigens detected by MAbs 3.50 and 3.187 were not detectable in immunoprecipitates of translation products but the epitopes are probably peptides because they were destroyed by boiling in sodium dodecyl sulphate. When the reaction of the MAbs with a panel of 15 paired drug-sensitive and -resistant cell lines was examined in a cell enzyme-linked immunosorbent assay, only a few resistance associated reactions were observed. 
Most of the reactions were either negative or not resistance-associated. When tested with three SCLC cell lines, MAb 3.187 reacted in a manner consistent with the relative resistance of the cell lines. Antigens that had similar electrophoretic mobility to those from H69AR cells were precipitated from extracts of five human cell lines of various tumour types. These data indicate that the cross-reactivities of the MAbs are due to antigens shared among the cell lines and not just the expression of common epitopes on different proteins. Resistance-associated proteins with the biochemical properties of the antigens described in this paper have not been reported previously, and they remain potential markers for the as-yet-undetermined mechanisms of drug resistance in SCLC and other human malignancies."} +{"text": "Through the first decades of the USSR, medical science was transformed to suit the ideological requirements of a totalitarian state and the biased directives of communist leaders. Later, depressed economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the West as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease, and results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database to search for Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but it does not provide abstracts and full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g. 
Panteleimon) that have appeared recently are incomplete and do not enable effective searching. As with many other aspects of life in the Soviet Union, professional and research training was severely hampered during nearly 70 years of communist rule. The authors declare that they have no competing interests."} +{"text": "The post-test results of UMCU students were comparable (p\u00a0=\u00a00.48) with UI, but significantly different (p\u00a0<\u00a00.001) from UM. Common problems for the modules in both UI and UM were limited access to literature and variability of the tutors\u2019 skills. Adoption and integration of an existing Western CE-EBM teaching module into Asian medical curricula is feasible, while the learning outcomes obtained are quite similar. Clinical epidemiology (CE) and evidence-based medicine (EBM) have become an important part of medical school curricula. This report describes the implementation and some preliminary outcomes of an integrated CE and EBM module in the Faculty of Medicine Universitas Indonesia (UI), Jakarta, and in the University of Malaya (UM) in Kuala Lumpur. A CE and EBM module, originally developed at the University Medical Center Utrecht (UMCU), was adapted for implementation in Jakarta and Kuala Lumpur. Before the start of the module, UI and UM staff followed a training of teachers (TOT). Student competencies were assessed through pre- and post-module multiple-choice knowledge tests, an oral and written structured evidence summary, as well as a written exam. All students also filled in a module evaluation questionnaire. The TOT was well received by staff in Jakarta and Kuala Lumpur, and after adaptation the CE and EBM modules were integrated in both medical schools. 
The pre-test results of UI and UM were significantly lower than those of UMCU students. To be able to provide \u2018best practices\u2019, all health care professionals should be able to practice evidence-based medicine (EBM). This requires that medical decisions are based on the best available, current, valid and relevant evidence. In order to do that, medical graduates should \u2018be able to gain, assess, apply and integrate new knowledge and have the ability to adapt to changing circumstances throughout their professional life\u2019. The Sicily statement on teaching evidence-based practice recommends incorporating knowledge, skills and attitudes of EBM into medical training. Currently, little is known about the effects of the \u2018adapt to adopt\u2019 approach used in curriculum adaptation with regard to EBM and CE teaching. Many papers on EBM and CE teaching report comparisons of different methods of teaching, usually within one medical school. This report describes the process of adapting the existing Utrecht CE and EBM modules and their implementation in an Indonesian and a Malaysian curriculum, and reports some preliminary outcomes in three medical schools. It also aims to illustrate how some challenges and differences were addressed during the implementation. The University Medical Centre Utrecht revised its medical school curriculum in 1999. This involved development of a clinical epidemiology (CE) module and an evidence-based medicine (EBM) module. The Utrecht CE module is a 6-week full-time module targeted to third-year students, which is also the year of their first clinical rotation. The Utrecht EBM module is a 6-week part-time module for sixth-year students. During that sixth year, students also have their clerkship (Table\u00a0). Medical training in the three medical schools differs. 
At UMCU it comprises a 3-year preclinical Bachelor\u2019s and a 3-year clinical Master\u2019s programme; at UI it takes three preclinical years and two clinical years; at UM it takes two preclinical years and three clinical years of training. Therefore, it was not possible to copy and implement the UMCU modules. It was decided to merge and integrate the Utrecht CE and EBM modules. The new module was developed through intensive discussion between the coordinators of the CE and EBM modules of UMCU, UI and UM. It was decided to translate all teaching materials into English. A native speaker assisted this translation and checked the final versions. At UM, English was already the language of teaching, but not at UI. Still, it was decided to familiarize the Indonesian students with English as the most-used language in medical literature. At UMCU, both the CE and EBM modules were coordinated by the clinical epidemiology division of the Julius Center for Health Sciences and Primary Care, which has approximately 8\u00a0years of experience in teaching this module. Clinicians from various departments are also involved as tutors in group work. In UI, the lecturers are clinicians or non-clinicians who have formal training in clinical epidemiology or EBM. Some have more than 5\u00a0years\u2019 experience in conducting EBM courses, while the tutors are clinicians who have participated in the CE or EBM course. In UM, the lecturers are staff of the Social and Preventive Medicine Department who are experienced in teaching epidemiology and biostatistics. The tutors are also clinicians; however, some have never participated in CE or EBM courses. Before, during and after implementation of the module, personal teaching experiences and evaluation results were shared and discussed, both in an informal and a formal manner. 
First of all, to familiarize the local staff later involved in teaching the module, a training of teachers (TOT) in UI and UM was led by two experienced lecturers and the module developers from UMCU. This TOT was conducted over 3\u00a0days and consisted of lectures and computer practice. In addition, there were regular formal discussion meetings between teacher teams within each university and between the module coordinators of UI (ISW), UMCU (GvdH) and UM (MFM). At the end of each meeting, action points for implementation and optimization of the modules were articulated whenever necessary. The CE-EBM module in UI was given as a condensed four-week module at the end of the fourth year. In UM, it was conducted in dispersed form within the 3-month period of the social and preventive medicine (SPM) module for third-year medical students. The time allocated for the CE-EBM within the SPM module is a 3-h session twice a week. Moreover, the SPM module is held in a remote satellite campus. Before attending the CE-EBM module, students have passed a research module (UI) or an epidemiology and biostatistics module (UM). The comparison of the CE-EBM module structure in the three medical schools is presented in Table\u00a0. A series of lectures on diagnostic, prognostic, therapeutic (intervention), and aetiological research was given to re-orientate students on the research design methods relevant to the CE-EBM module. These were further reinforced through computer practice. Lectures introduced students to EBM and the specific skills needed; notably, attention was also given to formulating clinical questions and searching the literature. Critical appraisal skills for relevance and risks of bias and summarizing evidence were taught in small working groups. The final task of the students was to develop an evidence-based case report (EBCR). An EBCR summarizes the best available evidence and translates it into practice. 
It follows an explicit and transparent approach to identify such evidence and thereby can help resolve a dilemma in decision-making in real-life patient management. It can be applied at all stages of patient care, notably diagnosis, prognosis and treatment. Students developed their EBCR in tutor-supervised small student groups (n\u00a0=\u00a05), based on a real clinical patient scenario they had encountered during a previous clinical rotation. Once a week (in UI and UMCU), the progress in EBCR development was reported in a plenary presentation. These plenary presentation sessions were moderated by experts in clinical epidemiology and EBM, who provided students with feedback. Through these plenary presentations, students were able to learn about the different clinical problems of other groups and the appropriate method(s) of dealing with them. Due to constraints of the existing curriculum at UM, the plenary presentations were conducted twice during the 3-month module: the first to report the clinical question and the second to report the final EBCR. The knowledge test addressed concepts of aetiological, diagnostic, prognostic and therapeutic research and frequency measures. Most of the questions have yes or no answering options; some have five options. In UMCU, this test is used to evaluate the CE module. EBCRs were assessed using a standard scoring form. The form was based on principles of explicit and transparent reporting of evidence summaries, and included: question, information search, study assessment, data extraction, data synthesis, conclusion, and discussion. The EBCR grades, on a scale of 1\u20135, were based on the overall impression of assessors. A student passed the EBCR assignment with a score equal to or above three. Ratings for the separate criteria above were also provided as feedback on the aspects that could be improved, with a comment for the three categories with the lowest and the highest ratings. 
The overall scores were multiplied by two to comply with the standard ten-point rating scale used at UI and UM. At UMCU and UI the EBCRs were rated by tutors of working groups, while at UM this was done by the module coordinator. At the end of the CE module at UMCU and the CE-EBM module at UI and UM, a summative evaluation was conducted by asking the students to assess the quality of one research article. Students also completed a module evaluation questionnaire including questions on the quality of teachers and the content and organization of the module. The knowledge test score was converted into a 1\u2013100 score by computing the percentage of correct answers, assuming that all questions have equal weight. The scores between universities were compared using an independent t test. ANCOVA was performed to compare the knowledge test score before and after the module, with control for differences in the pre-test results. All analyses were performed using SPSS 11.0. The CE-EBM module implemented in UI and UM was to become part of the medical curriculum, and the effect assessment became part of the module evaluation, which all students had to undertake. As such, managerial considerations on the curriculum revision dominated over possible ethical considerations. However, all data were analysed and reported anonymously. We evaluated the results of the module implementation in 2010. In UI, the CE-EBM module was conducted from May to July in two rotations with a total of 202 students. In UM, the module was implemented from September until December 2010 for 200 students. The EBM module in UMCU was conducted throughout the year, divided into six groups; a total of 381 students participated. The pre-test results of UI and UM students were significantly lower than those of UMCU students (p\u00a0<\u00a00.001). After the module, the improvement (change) of knowledge in UI students was significantly higher than for the UMCU students (p\u00a0<\u00a00.001), which resulted in comparable post-test results (p\u00a0=\u00a00.484). 
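The score handling described in this record (conversion of raw MCQ counts to a 0-100 percentage, and comparison between universities with an independent t test) can be sketched in Python. This is a minimal illustration under our own function names, not the authors' SPSS 11.0 analysis:

```python
import math

def to_percent(correct: int, total: int) -> float:
    """Convert a raw knowledge-test score to a 0-100 scale,
    assuming all questions carry equal weight."""
    return 100.0 * correct / total

def independent_t(a, b):
    """Student's independent two-sample t statistic (pooled variance),
    as used to compare score distributions between two universities."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

For the ANCOVA step (post-test adjusted for pre-test differences) the authors used SPSS; a comparable adjustment could be done with any regression package and is omitted here.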
On the other hand, in UM, even though the improvement (change) of knowledge of the students was also significantly higher than at UMCU (p\u00a0<\u00a00.001), the post-test results were still significantly lower than for UMCU (p\u00a0<\u00a00.001). The mean EBCR score of UI students was significantly higher (p\u00a0=\u00a00.001) compared with both UMCU and UM. Examples of some of the clinical problems answered by EBCR are presented in Table\u00a0. The three medical schools have a different module evaluation format, thus no statistical testing was performed to compare the results. In general, more than 80\u00a0% of students in UI and UMCU agreed that the module increased their knowledge and skills, as shown in Table\u00a0. Only 49\u00a0% of UM students agreed that this module had achieved its objectives, as compared with 89\u00a0% in UI. Several problems were mentioned by students in the module evaluation, and reported by module coordinators, in both UI and UM, including limited access to literature (UI), inconsistent internet connection as students were located in a remote satellite campus (UM), and the variability of the tutors\u2019 skills despite the TOT (UI and UM). To help the students retrieve articles that were not accessible through the current library access at UI, tutors or module coordinators sometimes needed to contact colleagues in other institutions with better library access (both inside and outside the country) who could provide the articles. The variability in tutors\u2019 skills was handled by providing adequate time for discussion among tutors and resource persons during the module through regular meetings and electronic communication. 
Besides that, both UI and UM students thought that the searching skills should have been introduced earlier. This report shows that implementation of a CE-EBM curriculum adopted from a Western country (the Netherlands) in Eastern countries resulted in a comparable increase in medical student knowledge. The introduction of a CE-EBM module as part of the medical curriculum is needed as a response to global development. Adaptation of an established module was chosen as a more practical approach than developing a new one. Yet, it still has its own challenges. A modification process which comprises broad aspects such as learning objectives, resources, institutional mandates and values needs to be carried out to provide more context. A survey about EBM teaching in UK medical schools has identified similar challenges, including the need for a standard curriculum and teaching materials, the lack of people trained in EBM to facilitate small group sessions, and also the need for an identifiable coordinator who is responsible for the development and integration of EBM into the medical school curriculum. Our experience was similar. EBM is defined as the uptake and use of best available evidence, and its integration with clinical expertise and patients\u2019 values, in circumstances of individualized patient care. The underlying assumption in the definition of EBM is that all the principles, knowledge and skills needed for this are mastered. The modules used at UMCU, UI and UM are based on the principles of information mastery included in the Sicily statement on teaching evidence-based practice. To determine the effect of a curriculum intervention, a prospective pre-test-post-test controlled trial is the strongest study design. A standard scoring form was used for EBCR assessment, but there was no training or standardization of these assessments. Differences in the EBCR ratings may be explained by different raters and variation among them. 
While at UM there was only one rater, the Jakarta tutors may, compared with the UMCU raters, have been too unfamiliar with this kind of report. This may in part explain the negatively skewed distribution of EBCR ratings at UI. Students knew that the MCQ test scores served as a formative evaluation, so these were not included in the module grading. We did not apply a correction for guessing for the MCQ test. Since all questions were to be completed and no answers could be left blank, guessing might result in both higher and lower scores. The knowledge test was developed based on the predefined module learning goals, which determined the content and face validity of the test as an educational measurement tool. The test outcomes presented may, however, represent a mix of results based on the quality of the module, its teachers and its students. Although a formal psychometric evaluation of the student assessment tests might deepen insight into their quality and perhaps provide a more detailed view on the reported differences between UMCU, UI and UM in learning outcomes, it is not the purpose of our paper to show the quality of the tests used. However, given the content and face validity of the tests used, we consider these appropriate to illustrate the success of the adoption of the CE-EBM module at the UI and UM medical schools. Despite many differences between the three medical schools, the post-module results show that similar effects were achieved on students\u2019 knowledge and skills in UI and UMCU. In design and organisation the UI module more closely resembled the original Utrecht teaching modules. 
The full-time 4-week CE-EBM module in year 4 at UI resulted in scores comparable with UMCU, which ran a separate full-time 5-week CE module in year 3 and a 5-week part-time EBM module in year 6. Due to cultural differences in the approach to teaching and learning and in health care systems, it was expected that the use of Western teaching materials and approaches in the Asian context and environment might pose particular challenges. This may have played a role in the success of this curriculum revision at UM and UI. Indonesia and Malaysia are still considered to be developing countries. Several obstacles to teaching and practising EBM in the developing world have been identified, including limited resources, limited access to databases and libraries, and lack of role models. From the start, the UM module had a more complex environmental and organisational structure. The differences in the module implementation were probably too numerous in UM, and thus similar results were not observed. This is also corroborated by the result of the module evaluation, which showed that a great number of the UM students felt that this module failed to achieve its objectives. Moreover, students and teachers at UI and UM were quite used to a problem-based learning (PBL) or a self-directed learning approach. For teaching EBM, however, directed learning including lectures followed by a group tutorial may be more effective than PBL. Systematic differences in the structure, time and approach of the module and a difference in their place in the curricula of the three medical schools could not be controlled for. Some differences in the tools used for student assessments may have had an impact on our findings. 
Despite these limitations, the measured objective and standardised outcomes in the large number of participating students in a multi-institutional comparison contribute to the robustness of the findings of our study. Adoption of an existing Western EBM module into Asian medical curricula is feasible and more practical than developing a new one. Despite differences in the programmes and the students, similar learning goals can be reached in different ways. Our experience could be used by other medical schools in Asian countries to start teaching EBM to their medical students. To introduce clinical epidemiology and evidence-based medicine teaching to medical students, adaptation of an established module from another medical school is more practical than developing a new one. The adaptation process should consider broad aspects of the local situation, including institutional values, resources, and the structure and organization of the existing medical school curriculum. Several contributing factors for successful implementation of the module are initial capacity building for teachers (training of teachers), continuous team discussions and feedback to the teachers, mentoring from the original module developers, and a dedicated module coordinator who is supported by the management of the medical school."} +{"text": "In the crystal, N\u2014H\u22efN hydrogen bonds link the molecules into chains running along the a axis. 
\u03c0\u2013\u03c0 stacking is also observed between parallel benzene rings of adjacent molecules, the centroid\u2013centroid distance being 3.7527\u2005(13)\u2005\u00c5. Crystal data: b = 7.8114\u2005(16)\u2005\u00c5, c = 8.9785\u2005(18)\u2005\u00c5, \u03b1 = 93.58\u2005(3)\u00b0, \u03b2 = 94.65\u2005(3)\u00b0, \u03b3 = 97.47\u2005(3)\u00b0, V = 523.42\u2005(18)\u2005\u00c53, Z = 2; Mo K\u03b1 radiation, \u03bc = 0.13\u2005mm\u22121, T = 293\u2005K, crystal size 0.39 \u00d7 0.32 \u00d7 0.15\u2005mm. Data collection: Rigaku R-AXIS RAPID diffractometer, RAPID-AUTO software; absorption correction: multi-scan (ABSCOR; Higashi, 1995), T min = 0.950, T max = 0.980; 5120 measured reflections, 2359 independent reflections, 1485 reflections with I > 2\u03c3(I), R int = 0.026. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.043, wR(F 2) = 0.118, S = 1.04, 2359 reflections, 167 parameters, 1 restraint; H atoms treated by a mixture of independent and constrained refinement; \u0394\u03c1max = 0.19\u2005e\u2005\u00c5\u22123, \u0394\u03c1min = \u22120.15\u2005e\u2005\u00c5\u22123."} +{"text": "The first author\u2019s name is incorrect. The correct name is: G. W. Wieger Wamelink. The third author\u2019s name is incorrect. The correct name is: Joep Y. Frissel. The correct citation is: Wamelink GWW, Goedhart PW, Frissel JY (2014) Why Some Plant Species Are Rare. PLoS ONE 9(7): e102674. doi:10.1371/journal.pone.0102674"} +{"text": "The segmented inner and outer vessel surfaces served as the solid domain for the fluid-structure interaction (FSI) simulation. To compare wall stress distributions within the aneurysm wall and at the rupture site, FSI computations are repeated in a virtual model using a constant wall thickness approach. 
Although the wall stresses obtained by the two approaches, when averaged over the complete aneurysm sac, are in very good agreement, strong differences occur in their distribution. Accounting for the real wall thickness distribution, the rupture site exhibits much higher stress values compared to the configuration with constant wall thickness. The study reveals the importance of geometry reconstruction and accurate description of wall thickness in FSI simulations. Computational Fluid Dynamics is intensively used to deepen the understanding of aneurysm growth and rupture in order to support physicians during therapy planning. However, numerous studies considering only the hemodynamics within the vessel lumen found no satisfactory criteria for rupture risk assessment. To improve available simulation models, the rigid vessel wall assumption has been discarded in this work and patient-specific wall thickness is considered within the simulation. For this purpose, a ruptured intracranial aneurysm was prepared ex vivo, followed by the acquisition of local wall thickness using \u03bcCT. Although intracranial aneurysms have been intensively investigated within the last two decades, many open questions remain. However, due to patient-individual properties that are unknown or due to the requirement of fast computations, all numerical studies are based on several model simplifications. The most severe but commonly used assumption is the treatment of the luminal vessel surface as a rigid, non-flexible wall with infinite resistance. Since three-dimensional segmentations of the diseased dilations are normally gained from contrast-enhanced imaging modalities, only the vessel lumen is represented; no information on the actual wall structure is obtained. However, a study by Fr\u00f6sen et al. has demonstrated the importance of the wall structure itself. To extend previous numerical studies by considering mechanical exchanges between blood flow and the surrounding vessel tissue, fluid-structure interaction (FSI) simulations were carried out. 
Already in 2009, Bazilevs et al. proposed FSI computations for cerebral aneurysms. Although these studies are important steps towards realistic hemodynamic predictions and FSI simulations in intracranial aneurysms, none of them considered the patient-specific wall thickness. Therefore, the present study is, to the authors' knowledge, the first of its kind that incorporates the measured vessel wall thickness of a ruptured aneurysm into FSI computations. To evaluate the importance of patient-specificity, a simulation assuming constant walls is performed for comparison. The analyses of stress predictions within the complete aneurysm sac as well as at the particular rupture site address the question of whether patient-specific wall thickness is required in related simulations. With approval of the local ethics committee, a complete Circle of Willis (CoW) of a 33-year-old male patient was investigated, which was explanted in the course of a forensic autopsy. Two intracranial aneurysms were found, one at the anterior communicating artery (Acom), the other at the carotid T. Death was caused by subarachnoid hemorrhage due to aneurysm rupture. The Acom aneurysm could be unambiguously identified as the ruptured one, as it was enclosed in a large blood clot and the wall defect was clearly visible (see Figure\u00a0). To enable the further examination and imaging of the explant, the CoW was put into formaldehyde (4%) for fixation immediately after explantation. Then, the blood clot was carefully removed and the arteries were flushed with formaldehyde. For imaging of the ruptured aneurysm, the anterior cerebral arteries were dissected approximately 10\u2009mm proximal and distal to the anterior communicating artery. After that, plastic tubes were inserted in the anterior cerebral arteries to avoid collapse of their lumen. Plastic was used because it has a different X-ray density compared to biological tissue, which facilitates the following postprocessing steps, especially segmentation. 
The tubes were then stuck into a silicone block in such a way that the specimen had no contact with the silicone surface. For image acquisition, an industrial computed tomography system was selected. Despite its low contrast resolution, and thus the impossibility to distinguish different tissue layers of the vessel wall, the device was chosen because of its superior spatial resolution compared to clinical CT and MRI scanners. This allows for the accurate measurement of the wall thickness and visualization of the inner and outer boundary of the specimen. Imaging parameters were as follows: tube voltage of 50\u2009kV, tube current of 150\u2009\u03bcA, and reconstructed voxel size of 7.5 \u00d7 7.5 \u00d7 7.5\u2009\u03bcm3. Two 3D surface meshes, one of the inner and one of the outer vessel wall, were extracted from the tomographic \u03bcCT data. Then, a separate segmentation of both walls was carried out. The workflow is derived from a previously described pipeline for aneurysm surface extraction. The initial segmentation masks contained artifacts from the \u03bcCT data as well as small artifacts, for example, detached tissue parts or blood clots, due to the ex vivo preparation; these were corrected manually. Next, surface meshes for the inner and outer vessel wall were extracted with Marching Cubes based on the segmentation masks in MeVisLab. Postprocessing of the surface meshes included the manual smoothing of small bumps and artifacts with Sculptris 1.02. Furthermore, in- and outlets of the aneurysm were artificially extruded and perpendicularly cut with Blender 2.74 to provide sufficiently long and straight vessel sections for the subsequent FSI simulation. The resulting 3D surface meshes are depicted in Figure\u00a0. Since growth and rupture of an intracranial aneurysm are complex problems connecting blood flow and arterial wall behavior, FSI simulations were carried out. Therefore, the segmented aneurysm model was divided into two subdomains consisting of the fluid region and the solid region, respectively. 
The first was solved numerically using CFD based on a finite volume discretization, while the latter was treated as a structural problem using the finite element method. Both domains were coupled at the interface, the luminal surface. This coupling was implemented as data transfer, exchanging fluid pressure and wall shear stress (WSS) as well as wall displacement. The fluid was modeled as an incompressible, non-Newtonian fluid (\u03b70 = 15.92\u2009mPa\u2009s, \u03b7\u221e = 4\u2009mPa\u2009s, \u03bb = 0.08268\u2009s, a = 2, and n = \u22120.4725, parameters acquired in the local rheology lab) with a density of 1055\u2009kg/m3. The inflow conditions were obtained from a healthy volunteer using 7\u2009T PC-MRI. Young's modulus and Poisson's ratio of the vessel wall were set to 1\u2009MPa and 0.45, respectively. Two regions of interest were evaluated: (a) the complete aneurysm sac and (b) the rupture site, which is of particular interest due to its known location. Subsequently, for both regions of interest the spatial-average stress level was calculated and classified into bins of 500\u2009Pa. As presented in Figure\u00a0, the main differences between both configurations concern the effective stress inside the aneurysm wall. To further investigate the aneurysm's rupture site, the quantitative comparison is now concentrated on a smaller region of interest around the rupture site. Regarding realistic blood flow predictions in intracranial aneurysms, the reconstructed geometry has an essential impact. However, the use of patient-specific wall thickness is limited by the difficulties in acquisition, even ex vivo. This might be one reason for the fact that constant wall thickness is used in almost all similar studies of intracranial aneurysms. Nevertheless, promising models exist which are related to wall mechanics but do not take into account the wall thickness itself. The main focus of the comparison lies in the known rupture site. 
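The non-Newtonian parameter set quoted above (eta_0, eta_inf, lambda, a, n) matches the standard Carreau-Yasuda form. Assuming that constitutive law (the record's text is truncated, so the exact model name is inferred, and the function name is ours), the apparent viscosity can be sketched as:

```python
def carreau_yasuda_viscosity(shear_rate,
                             eta0=15.92e-3,   # zero-shear viscosity [Pa*s]
                             eta_inf=4e-3,    # infinite-shear viscosity [Pa*s]
                             lam=0.08268,     # relaxation time [s]
                             a=2.0,           # Yasuda exponent
                             n=-0.4725):      # power-law index
    """Apparent blood viscosity [Pa*s] at a given shear rate [1/s],
    assuming the Carreau-Yasuda model with the parameters quoted in the text:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)^a)^((n-1)/a)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)
```

At zero shear the function returns eta_0 = 15.92 mPa s and it decays toward eta_inf = 4 mPa s at high shear rates, reproducing the shear-thinning behavior of blood.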
For the patient-specific wall thickness configuration, a good correlation with spots of high stresses is found, contrary to the constant wall thickness configuration. The latter shows a 55.2% lower averaged stress in the area close to the rupture location. Taking the whole aneurysm into account, both approaches are highly similar in terms of average wall stress; the difference is only 3.8%. Accordingly, the choice of wall thickness for the artificial constant configuration is not responsible for the different stress level at the rupture site; it is a direct result of the patient-specific wall thickness. However, it needs to be pointed out that the rupture location does not correlate with the overall highest effective stress value. A reason for that might be the more complex structure of the aneurysm tissue or the surrounding vasculature, which was not considered during the modeling. There might be a general stress level that is dangerous, enabling rupture depending on the wall condition. However, this was not the objective of this study, which only aims at the comparison with constant wall thickness, a common assumption that is often used in FSI computations. Considering this particular case, obvious differences in the local stresses are observed, pointing at limitations associated with the constant wall thickness approach. Another interesting aspect with respect to the rupture site is its location at a daughter aneurysm revealing a bleb-like shape, a configuration that has also been investigated by Cebral et al. Overall, \u03bcCT offers a good basis for a detailed segmentation process. However, it requires a lot of manual artifact elimination and local smoothing to provide appropriate vessel surfaces. Regarding the inflow condition and wall properties, a representative 7\u2009T PC-MRI measurement and literature values, respectively, were used.
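The bin-wise stress comparison and the percentage figures above follow directly from spatial averaging; a small sketch (function names are our own) shows the 500 Pa binning and the relative deviation that yields numbers like 3.8% and 55.2%:

```python
import numpy as np

def average_stress_and_bins(stress_pa, bin_width=500.0):
    """Spatially averaged wall stress plus a histogram in 500 Pa bins."""
    stress_pa = np.asarray(stress_pa, dtype=float)
    edges = np.arange(0.0, stress_pa.max() + bin_width, bin_width)
    counts, _ = np.histogram(stress_pa, bins=edges)
    return stress_pa.mean(), counts

def percent_difference(reference, other):
    """Relative deviation of `other` from `reference`, in percent."""
    return 100.0 * abs(other - reference) / reference
```

Applied once to the whole sac and once to the small region around the rupture site, `percent_difference` reproduces the kind of comparison reported between the constant and patient-specific configurations.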
In order to obtain numerical predictions with reasonable computational effort, uncertainties and simplifications must be accepted, and these certainly influence the results. Concerning imaging, vessel position and arrangement as well as fixation differ from the in vivo setting. In addition, the resolution is limited. It must be kept in mind that the homogeneous, isotropic, linearly elastic material model used in this study is far from the real, complex tissue structure found in reality, which varies as a function of age, activity, location, biological constitution, and so forth. However, Torii et al. pointed out that even simplified wall models can provide valuable insight in FSI analyses of intracranial aneurysms. Future work should take into account a more detailed numerical description of the aneurysm geometry and material. This can be achieved by adding information obtained from histology, for example, the distinction between vessel layers and pathologies. The surrounding tissue might play an important role as well and could be considered by specified solid boundary conditions. Finally, a higher number of cases must be included, even if acquiring the real wall thickness is a difficult and time-consuming task. The findings of this study highlight the importance of proper geometry reconstruction and an accurate description of local wall thickness for hemodynamic FSI simulations. The patient-specific wall thickness seems to play an important role in the prediction of stress distributions inside aneurysm walls. While the spatially averaged wall stresses of the complete aneurysm sac show almost no difference (only 3.8%) compared to those obtained with a constant wall thickness, large differences (55.2%) are observed around the known rupture site. Despite many simplifications, the presented results are a consistent step towards a deeper understanding of aneurysmal wall behavior.
Future research is required and should include more cases as well as more advanced modeling of the wall mechanics."} +{"text": "Since the brain's microvasculature is compromised in gliomas, intravenous injection of tumor-targeting nanoparticles containing drugs (D-NPs) and superparamagnetic iron oxide (SPIO-NPs) can deliver high payloads of drugs while allowing MRI to track drug distribution. However, the therapeutic effect of D-NPs remains poorly investigated because the superparamagnetic fields generated by SPIO-NPs perturb conventional MRI readouts. Because extracellular pH (pHe) is a hallmark of tumor pathophysiology, brain pHe can instead be measured by biosensor imaging of redundant deviation in shifts (BIRDS) with lanthanide agents, by detecting paramagnetically shifted resonances of nonexchangeable protons on the agent. To test the hypothesis that the BIRDS-based pHe readout remains uncompromised by the presence of SPIO-NPs, we mapped pHe in glioma-bearing rats before and after SPIO-NPs infusion. While SPIO-NPs accumulation in the tumor enhanced MRI contrast, the pHe inside and outside the MRI-defined tumor boundary remained unchanged after SPIO-NPs infusion, regardless of the tumor type (9L versus RG2) or agent injection method. These results demonstrate that we can simultaneously and noninvasively image the specific location and the healing efficacy of D-NPs, where MRI contrast from SPIO-NPs can track their distribution and BIRDS-based pHe can map their therapeutic impact. Treatment and management of glioblastoma, the most common and malignant form of primary brain tumors, represent an unmet clinical challenge. The transport and delivery of therapeutic agents into the brain parenchyma are impeded by a dense network of capillary endothelial cells, pericytes, and perivascular macrophages, which together form the BBB. Tumor-specific delivery of D-NPs can be further enhanced by coating the D-NPs with ligands that target overexpressed receptors and/or transporters in tumors.
Because low extracellular pH (pHe) is a hallmark of cancer pathogenesis and promotes tumor invasion and resistance to therapy, there is considerable interest in pHe mapping methods to enable monitoring of glioma invasion. Since some drugs only work in certain pH ranges, precise knowledge of pHe can aid in choosing and tailoring therapeutic regimens, for example, with drugs that alter pHe directly or affect the tumor's aerobic glycolysis. Many MRI methods exist for measuring and mapping pHe. Relaxation-based methods are highly dependent on the degree of tissue perfusion and local agent concentration, thus making quantification of pHe difficult. pHe-sensitive MRI methods based on proton exchange, such as chemical exchange saturation transfer (CEST), are also dependent on agent concentration and may additionally be complicated by magnetization transfer effects, as is 31P MRS with 3-aminopropyl phosphonate (3-APP), which has pHe-sensitive exchangeable protons. We previously obtained pHe maps using lanthanide agents, for example, thulium 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetrakis(methylene phosphonate), TmDOTP5\u2212. Since the pHe readout with BIRDS is independent of agent concentration, here we tested whether the pHe readout in glioma-bearing rats remains uncompromised by the presence of SPIO-NPs. We compared pHe measured with TmDOTP5\u2212 by BIRDS before and after infusion of SPIO-NPs in rats bearing 9L gliosarcomas and RG2 gliomas. We used different agent administration methods to inhibit the rapid clearance of the agent by the renal system. In addition, we compared the transverse relaxation rate enhancement from SPIO-NPs across brain regions. Our results suggest that we can use the MRI contrast from SPIO-NPs to track the distribution of D-NPs and then use the BIRDS-based pHe readout to map their therapeutic impact. TmDOTP5\u2212 for BIRDS was purchased from Macrocyclics Inc., while SPIO-NPs (Molday ION) were purchased from BioPAL Inc.
The Molday ION was used without further modification or dilution to avoid altering its physical properties. Probenecid was purchased from Sigma-Aldrich. Fischer 344 rats were obtained from Yale University vendors. RG2 and 9L tumor cell lines were purchased from American Type Culture Collection. All animal experiments were conducted in accordance with Yale University's approved institutional animal care and use committee (IACUC) protocols. Tumor inoculation, animal preparation, and handling were conducted as described in our previous work, and data were acquired with a 1H surface RF coil. The RG2 and 9L tumor cell lines were cultured and grown at 37\u00b0C and 5% CO2 in DMEM media containing 10% heat-activated fetal bovine serum and 1% penicillin-streptomycin. The cells were harvested when they reached 80% confluence and suspended in serum-free media for inoculation. Rats were anesthetized with 3% isoflurane and placed on a stereotactic holder. A heating pad was used to maintain the rat at physiological temperature (36-37\u00b0C). An aliquot volume of 5\u2009\u03bcL with RG2 cells or 9L cells was injected into the right striatum 3\u2009mm laterally to the right of bregma and 3\u2009mm below the dura using a 10\u2009\u03bcL Hamilton syringe fitted with a 26-gauge beveled needle. The 5\u2009\u03bcL volume was injected over the course of 5 minutes and the needle was left in place for an additional 5 minutes after the infusion stopped. The needle was then withdrawn slowly to prevent backflow of the cells. The cranial burr hole was sealed with bone wax. The scalp was sutured and treated with antibiotics to prevent infection. Meloxicam (1\u2009mg/kg) was administered to prevent pain and inflammation. The tumor-bearing rats were scanned ~3 weeks after tumor inoculation, when the tumor diameter was at least ~3\u2009mm. The rats were anesthetized with 2% isoflurane, tracheotomized, and artificially ventilated (70% N2O/30% O2). The rats were placed on a heating pad to keep them warm during surgery. A femoral vein was cannulated with a PE-10 line for contrast agent administration (1\u2009mmol/kg for TmDOTP5\u2212 and 14\u2009mg Fe/kg for SPIO-NPs). A femoral artery was cannulated with a PE-50 line for monitoring animal physiology throughout the experiment. The rat was then anesthetized with \u03b1-chloralose using an intraperitoneal line. To inhibit renal clearance and enhance contrast agent extravasation into the extracellular space and accumulation in the tumor, rats either received a coinfusion of TmDOTP5\u2212 and probenecid (n = 5) or underwent renal ligation and infusion of TmDOTP5\u2212 alone (n = 3). While renal ligation inhibits clearance efficiently, it is not suitable for longitudinal studies. Previously, we demonstrated that probenecid temporarily inhibited renal clearance when coinjected with the agent, thus enabling longitudinal studies and obviating the need for invasive renal surgeries. Probenecid was infused first, followed by a waiting period of 20 minutes, and TmDOTP5\u2212 was then coinfused slowly over a period of 90 minutes. A water-heating blanket was used to maintain body temperature of the animals between 36 and 37\u00b0C over the course of the experiment. A rectally placed fiber optic probe was used to monitor the body temperature during the scans. In vivo transverse relaxation rate (R2) maps were obtained using a standard spin-echo sequence with 11 slices, 128 \u00d7 128 in-plane resolution, 1\u2009mm slice thickness, field of view (FOV) 25 \u00d7 25\u2009mm2, recycle time (TR) 6\u2009s, and 12 different values of echo time (TE) from 10\u2013120\u2009ms. The transverse relaxivity (r2) of Molday ION (SPIO-NPs) was measured in vitro using the same pulse sequence with samples of varying concentrations of Molday ION (1\u2009mg/kg to 15\u2009mg/kg). The relaxivity was calculated from the slope of the linear fit of R2 versus concentration.
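The R2 mapping and relaxivity calibration described above amount to two straightforward fits, a mono-exponential decay over the echo times and a linear regression of R2 against concentration; a minimal sketch, assuming ideal noise-free decay (function names are ours):

```python
import numpy as np

def fit_r2(te_s, signal):
    """Estimate R2 (s^-1) from spin-echo intensities via a log-linear
    fit of S(TE) = S0 * exp(-R2 * TE)."""
    slope, _ = np.polyfit(np.asarray(te_s, dtype=float),
                          np.log(np.asarray(signal, dtype=float)), 1)
    return -slope

def fit_relaxivity(concentrations, r2_values):
    """Transverse relaxivity r2 as the slope of R2 versus concentration."""
    slope, _ = np.polyfit(np.asarray(concentrations, dtype=float),
                          np.asarray(r2_values, dtype=float), 1)
    return slope
```

In practice the in vivo fit is done voxel-wise on the magnitude images at the 12 echo times; with noisy data a nonlinear least-squares fit of the exponential is the more robust choice than the log-linear shortcut shown here.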
Although extreme pH changes can significantly alter the properties of NPs, Liu et al. showed that the zeta potentials and hydrodynamic diameters of dextran-coated SPIO-NPs are fairly stable at physiologically relevant pH and ionic concentrations, covering the pHe range of tumors, normal tissue, and blood. A pulse of 35\u2009kHz bandwidth, 90\u2009kHz separation, and 205\u2009\u03bcs duration was used to selectively excite the H2/H3 and H6 protons of TmDOTP5\u2212. The phase-encoded gradient duration was 160\u2009\u03bcs, the spectral width was 250\u2009kHz, and the acquisition time was 4.1\u2009ms. The total acquisition time for each 3D CSI dataset scan was 12 minutes. First, a pHe map was acquired before the SPIO-NPs injection. Then a spin-echo dataset was obtained to determine the R2 enhancement induced by TmDOTP5\u2212. Next, the TmDOTP5\u2212 infusion was stopped and SPIO-NPs were injected slowly (over 5 minutes). Then another spin-echo dataset was obtained 15 minutes after the infusion of SPIO-NPs to determine the additional R2 enhancement due to SPIO-NPs. Finally, infusion of the remaining TmDOTP5\u2212 dose was resumed and another pHe map was obtained after the infusion of SPIO-NPs. The 3D CSI datasets were acquired with a reduced spherical encoding of k-space, as previously described. R2 maps were obtained by fitting the absolute MRI intensity at different TEs to a single exponential function using Matlab. R2 values from the 3 conditions were compared to determine the relaxation enhancement of each contrast agent. Average R2 values were measured in regions of interest (ROIs), where 1\u2009mm circular rings were taken from the center of mass of the tumor. The tumor edge was defined as the region 1\u2009mm immediately outside the MRI-defined tumor core. Comparing the measured R2 against the relaxivity of Molday ION allowed the amount of SPIO-NPs in each region to be approximated. The 3D CSI datasets were used to create maps of the H2, H3, and H6 resonances of TmDOTP5\u2212 before and after infusion of SPIO-NPs.
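The 1 mm ring ROIs can be computed directly from the maps; a sketch, where the pixel size (25 mm FOV / 128 \u2248 0.195 mm) follows from the stated imaging parameters and the function name is our own:

```python
import numpy as np

def ring_averages(image, center, ring_width_mm=1.0, pixel_mm=0.195, n_rings=9):
    """Average map values in concentric 1 mm rings around the tumor's
    center of mass; pixel_mm = FOV / matrix = 25 mm / 128."""
    yy, xx = np.indices(image.shape)
    r_mm = np.hypot(yy - center[0], xx - center[1]) * pixel_mm
    ring_idx = np.floor(r_mm / ring_width_mm).astype(int)
    return [image[ring_idx == i].mean() for i in range(n_rings)]
```

Applied to an R2 or pHe map, the returned list gives the radial profile from the tumor core (ring 0) outward, matching the ROI scheme used to separate tumor (1\u20133 mm), edge, and healthy tissue (4\u20139 mm).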
The linewidth (LW) of the H6 resonance was measured to generate LW maps and create histograms before and after infusion of SPIO-NPs. While any of the three resonances could have been used to make the LW maps, H6 was chosen because it had the highest signal-to-noise ratio (SNR). BIRDS-based pHe maps of the brain obtained with TmDOTP5\u2212 were calculated as previously described. pHe was calculated by fitting the H2, H3, and H6 resonances to a second-order polynomial whose coefficients a0, a1k, and a2kj were calculated from a linear least-squares fit of pHe as a function of the resonances \u03b42, \u03b43, and \u03b46. Changes in pHe values before and after infusion of SPIO-NPs were determined as a function of distance from the center of mass of the tumor, similar to the procedure described above for the R2 maps. Rats were sacrificed at the end of the experiments and brains were perfusion-fixed in 4% paraformaldehyde for Prussian blue iron staining to assess the distribution of SPIO-NPs. 10\u2009\u03bcm thick coronal sections of the fixed tissue were incubated in a solution of 4% potassium ferrocyanide and 4% hydrochloric acid twice for 10 minutes and then counterstained with nuclear fast red. Regions with Fe3+ (from SPIO-NPs) were expected to stain blue due to the formation of ferric ferrocyanide. Compared to the R2 maps before any contrast agent infusion, a superior MRI contrast was observed after the infusion of SPIO-NPs. The circular ROIs that were drawn from the tumor center are shown in the corresponding figure. The R2 relaxation enhancement was ROI-dependent, with higher R2 values inside the tumor and lower R2 outside the tumor. The accumulation of both TmDOTP5\u2212 and SPIO-NPs was highest in the tumor core and lower in regions farthest from the tumor's center of mass.
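The least-squares calibration of pHe against the three shifts can be sketched as follows; the design matrix with linear terms (a1k) and quadratic cross terms (a2kj) mirrors the fit described above, while the coefficient values and data here are synthetic, not the published calibration:

```python
import numpy as np

def design_matrix(shifts):
    """Rows: [1, d2, d3, d6, d2*d2, d2*d3, d2*d6, d3*d3, d3*d6, d6*d6]."""
    d = np.atleast_2d(np.asarray(shifts, dtype=float))
    cols = [np.ones(len(d))]
    for k in range(3):                 # linear terms a1k
        cols.append(d[:, k])
    for k in range(3):                 # quadratic cross terms a2kj
        for j in range(k, 3):
            cols.append(d[:, k] * d[:, j])
    return np.column_stack(cols)

def calibrate(shifts, ph_values):
    """Least-squares fit of the coefficients a0, a1k, a2kj."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(shifts),
                                 np.asarray(ph_values, dtype=float),
                                 rcond=None)
    return coeffs

def predict_ph(coeffs, shifts):
    """Voxel-wise pHe from the measured shifts and fitted coefficients."""
    return design_matrix(shifts) @ coeffs
```

Once calibrated against samples of known pH, `predict_ph` maps the measured \u03b42, \u03b43, \u03b46 of each CSI voxel to a pHe value, independent of agent concentration.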
The R2 values were 25.8, 27.3, and 31.5\u2009s\u22121 before contrast agent administration, after infusion of TmDOTP5\u2212, and after infusion of SPIO-NPs, respectively. The measured r2 relaxivity of Molday ION at 9.4\u2009T in vitro was 2.45\u2009s\u22121 per mg Fe/kg. By comparing the R2 enhancement by SPIO-NPs against the relaxivity of the Molday ION, the average concentration of SPIO-NPs in the tumor (ROIs 1\u20133\u2009mm) was determined to be 7.27\u2009mg Fe/kg. In healthy/nontumor tissue (ROIs 4\u20139\u2009mm), there was a 4.1\u2009s\u22121 change in R2 with SPIO-NPs, which corresponds to 1.69\u2009mg Fe/kg. Thus, the concentration of SPIO-NPs in the tumor was 4.3 times greater than in healthy/nontumor tissue, suggesting a fourfold enhanced extravasation/accumulation in the tumor. If the lower relaxivity of cell-internalized SPIO-NPs were used instead of the 2.45\u2009mg\u22121\u2009s\u22121 measured in vitro, the concentration of SPIO-NPs in the tumor would be 29.16\u2009mg Fe/kg while the concentration in the healthy tissue would be 6.72\u2009mg Fe/kg (still 4.3 times lower than in the tumor). However, we expect that most of the SPIO-NPs accumulate in the extracellular space, where the microenvironment is more similar to the in vitro situation than to that of SPIO-NPs internalized in cells. Moreover, we do not expect the relaxation of SPIO-NPs to change significantly over the pH range of our in vivo studies; Liu et al. and others have shown that the R2 of dextran-coated SPIO-NPs is not significantly different over this pH range. Furthermore, the R2 increase with increasing Molday ION dose was uniform across different brain regions. Given the physical characteristics of Molday ION, we anticipate that the induced MRI effect is from SPIO-NPs within the extracellular milieu and that calculating the concentration of SPIO-NPs in vivo should not be significantly affected by using the relaxivity measured in vitro. Girard et al.
showed that the relaxivity of SPIO-NPs internalized in cells can be lower than that of freely dispersed (in vitro) SPIO-NPs by as much as 4 times, and Taylor and colleagues reported similar stability over this pH range. The preferential distribution of SPIO-NPs in tumors over healthy/nontumor tissues was tested with Prussian blue staining for Fe3+. Although the results from Prussian blue staining are not quantitative, regions that contained higher levels of SPIO-NPs were stained blue, indicating the presence of Fe3+. The Prussian blue stained images show an abundance of SPIO-NPs in the tumor, but very little staining on the healthy/nontumor contralateral side of the brain, supporting the enhanced accumulation of SPIO-NPs in the tumors observed with R2 mapping. Similar R2 data were obtained in rats that received a coinfusion of TmDOTP5\u2212 and probenecid. In these animals, the amount of SPIO-NPs in the tumor was 2 times higher than in nontumor tissue, and the R2 enhancement was slightly lower than that observed in a renal-ligated rat. R2 maps (i), CSI maps (ii), LW maps (iii), and pHe maps (iv) were obtained before and after infusion of SPIO-NPs. The pHe maps of RG2 tumors show lower pHe within the tumor core, but also beyond the tumor boundary, which is in good agreement with previous observations of this aggressive tumor type. Before the infusion of SPIO-NPs, pHe was 7.0 \u00b1 0.1 within the tumor and 7.3 \u00b1 0.1 in the healthy/nontumor tissue on the contralateral side of the RG2 tumor-bearing brain. Similar R2, CSI, LW, and pHe maps were also obtained for rats that underwent coinfusion of TmDOTP5\u2212 and probenecid, and these distributions were similar to those observed in renal-ligated rats. Generally, the LWs increased after infusion of SPIO-NPs in all regions of the brain, but higher LW increases were observed inside the tumor. Before infusion of SPIO-NPs, the pHe was 6.85 \u00b1 0.03 in the tumor and 7.15 \u00b1 0.06 in healthy/nontumor tissue.
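The SPIO concentration estimates discussed above follow from the linear relaxivity relation \u0394R2 = r2 \u00b7 [SPIO]; a two-line sketch using the values from the text (rounding aside, the function name is ours):

```python
def spio_concentration(delta_r2, relaxivity=2.45):
    # [SPIO] in mg Fe/kg, assuming the SPIO-induced R2 change (s^-1)
    # scales linearly with concentration: delta_R2 = r2 * [SPIO]
    return delta_r2 / relaxivity

# e.g., the 4.1 s^-1 change in healthy tissue corresponds to ~1.7 mg Fe/kg
healthy = spio_concentration(4.1)
tumor_to_healthy = 7.27 / 1.69  # reported concentration ratio, ~4.3
```

The same division with a fourfold lower relaxivity (as for cell-internalized SPIO-NPs) scales both concentration estimates up by the same factor, which is why the tumor-to-healthy ratio is unchanged.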
After infusion of SPIO-NPs, the pHe was 6.86 \u00b1 0.07 in the tumor and 7.17 \u00b1 0.06 in healthy/nontumor tissue. The pHe of the tumor edge (ROI 4) was also relatively acidified (pH 6.98 \u00b1 0.13 before and 6.90 \u00b1 0.09 after infusion of SPIO-NPs) compared to healthy/nontumor tissue farthest from the tumor core (ROIs 5\u20139). Similar R2 and pHe values were obtained before and after infusion of SPIO-NPs for all RG2 tumor-bearing rats that underwent coinfusion of TmDOTP5\u2212 and probenecid (n = 5). The R2 enhancement was region-dependent (Table 1). In contrast to the R2 measurements, the average pHe values were not affected by the SPIO-NPs infusion (Table 2): pHe was lowest in the tumor and highest (7.2 \u00b1 0.1) in the healthy/nontumor tissue farthest from the tumor. Low pHe (6.9 \u00b1 0.1) was also measured at the tumor edge. While the pHe of the tumor edge in RG2 gliomas was acidic relative to healthy/nontumor tissue, the R2 enhancements at the tumor edge and in the healthy/nontumor tissue were similar, suggesting that the vasculature in the tumor margin was still intact despite the acidic transformation of its microenvironment. Future experiments should look at the vascularization inside, around, and far beyond the tumor boundary, for example, with dynamic contrast enhanced MRI and with epidermal growth factor receptor staining. In addition to the measurements obtained in the aggressive RG2 glioma, we also acquired pHe maps before and after infusion of SPIO-NPs in rats bearing the less aggressive 9L gliosarcoma (n = 4), using coinfusion of TmDOTP5\u2212 and probenecid; the lower aggressiveness of 9L is also reflected by the proliferation marker Ki-67. In these animals as well, a region-dependent R2 enhancement from extravasation of SPIO-NPs was observed, with higher R2 increases inside the tumor and smaller R2 increases outside the tumor.
Although SPIO-NPs affected MRI contrast in all tissues, excellent SPIO-induced MRI contrast delineated the glioma boundary due to greater extravasation of SPIO-NPs from the vasculature into the tumor relative to healthy/nontumor tissue. We also measured pHe with BIRDS using TmDOTP5\u2212 before and after infusion of SPIO-NPs in rats bearing 9L and RG2 brain tumors. The results demonstrate that the pHe readout was unaffected by the presence of SPIO-NPs, because the intratumoral-peritumoral pHe gradients were essentially identical before and after the infusion of SPIO-NPs, despite slight variations in the LWs of the proton peaks of TmDOTP5\u2212. The measured pHe was lowest inside the tumor and increased with the distance from the center of mass of the tumor in the more aggressive RG2 tumors. However, in the less aggressive 9L tumors, pHe was notably higher immediately outside the tumor boundary. We envisage coinjection of BIRDS agents and NPs containing drugs and SPIO as a new methodology that can deliver high drug payloads to the tumor, image drug distribution and track tumor location/size (by MRI), and at the same time monitor the pHe response to therapy (by BIRDS). Elevated aerobic glycolysis in gliomas leads to elevated lactic acid and proton production, which upon extrusion from the intracellular compartment results in acidification of the extracellular milieu. The brain's microvasculature is either degraded or immature in several neuropathologies, including glioblastomas. Breakthroughs in glioma imaging and therapy exploit the fact that NPs, containing either SPIO (for MRI) or drugs (for therapy), can extravasate through the leaky microvasculature. The SPIO-NPs extravasate into the tumor to generate superior MRI contrast while tumor-targeted D-NPs safely deliver high payloads of drugs to the tumor. In the present study, the highest R2 enhancement (from TmDOTP5\u2212 and SPIO-NPs) occurred in the tumor and was lowest in healthy/nontumor tissue farthest from the tumor.
Because the R2 enhancement comes entirely from the infused agents, this region-specific enhancement suggests a corresponding spatial variation in vascular permeability and consequent extravasation. In addition to the enhanced extravasation, the chaotic vascular architecture in tumors contributes to poor clearance, leading to increased retention of SPIO-NPs in the interstitial space of the tumor core. By using the R2 enhancement and the relaxivity of Molday ION (SPIO-NPs), we calculated that the amount of SPIO-NPs in the tumor was 2 to 4 times higher than in healthy/nontumor tissue. The enhanced permeability and retention (EPR) effect in tumors has been widely utilized to preferentially deliver high amounts of imaging agents and D-NPs, both passively and actively. High-grade solid brain tumors tend to develop necrotic cores due to a combination of poor vascularization and inadequate perfusion. The higher R2 in the center relative to the periphery suggests greater permeation and accumulation of SPIO-NPs in the center of the tumor due to a greater extent of BBB disruption within the tumor niche. Prior studies support these observations. Beaumont et al. did not observe any necrosis in their RG2 rat gliomas at time points similar to those of our experiments; therefore, an R2 increase from the SPIO-NPs would still be observed in the tumor core. Additionally, because gliomas including RG2 are known to have an increased presence of macrophages relative to healthy brain tissue, the higher amount of SPIO-NPs in the tumor could be due in part to macrophage phagocytosis. The distribution of SPIO-NPs is visualized with R2-weighted MRI, resulting from the strong superparamagnetic fields generated by SPIO-NPs. Because both the drugs and SPIO-NPs are contained in the same nanocarrier, the location and distribution of the SPIO-NPs, as observed by MRI, reflect the biodistribution of D-NPs.
By quantifying the SPIO-induced MRI contrast attenuation, it is possible to quantify the D-NPs delivered to the tumor. Owing to their strong superparamagnetic properties, tunable size, shape, coating, and magnetic susceptibility, SPIO-NPs have gained utility as therapeutic agents in alternating magnetic field hyperthermia and as MRI contrast agents. Because low pHe promotes drug resistance, degradation of the extracellular matrix, angiogenesis, tumor invasion, and metastasis, drugs that raise pHe by targeting the acid-generating glycolysis in tumors have demonstrated significant inhibition of tumor growth and enhanced apoptosis, and drugs that raise pHe inhibit tumor invasion and metastasis. Because such drugs can alter tumor pHe in a few days, methods that quantitatively measure tumor pHe longitudinally may provide an effective evaluation of their therapeutic efficacy and allow for prompt modification of therapy if the initial treatment is not working. A recent study has reported that temozolomide, an alkylating agent used clinically as adjuvant chemotherapy for glioblastomas, arrests glioma growth and normalizes intratumoral pHe. Currently, measurement of tumor size is the only FDA-approved method to assess the response to therapy noninvasively. Because changes in tumor size following treatment may take up to a month to manifest, this method is not ideal for aggressive brain cancers, especially when the treatment is later found not to have been effective. Thus a clear need exists for methods that can provide prompt assessment of therapeutic efficacy so that treatment can be altered quickly if desired. Recently, it was shown that quantitative monitoring of the tumor microenvironment following a pharmacologic challenge provides a better way to monitor therapeutic efficacy. Given the significant relaxation enhancement of the nonexchangeable protons on the TmDOTP5\u2212 agent due to the Tm3+ electrons, we hypothesized that the BIRDS-based pHe readout of TmDOTP5\u2212 would remain uncompromised by SPIO-NPs.
Although SPIO-NPs altered MRI contrast in all tissues, SPIO-based MRI contrast clearly demarcated the tumor boundary due to greater extravasation of NPs through leaky blood vessels. Nonetheless, the quality of the BIRDS-based pHe readout with TmDOTP5\u2212, for both intratumoral and peritumoral regions, was unaffected by the presence of the SPIO-NPs, since the pHe maps obtained before and after the infusion of SPIO-NPs were very similar. While separate infusions of TmDOTP5\u2212 and SPIO-NPs were employed in the present study, future studies might assess the possibility of combining them. Conjugating a pHe-sensitive agent to the surface of the NPs could possibly enhance the sensitivity of BIRDS to monitor the immediate environment of D-NPs and prolong their lifetime to enable multiple monitoring sessions at various treatment time points. Ordinarily, BIRDS agents have fast renal clearance owing to their small size, and thus renal inhibition is necessary for their accumulation. The treatment of brain gliomas is hampered in part by a limited availability of reliable in vivo methodologies that can simultaneously and noninvasively measure glioma invasion, drug delivery, and its therapeutic benefits.
In this study, we demonstrated superb MRI contrast enhancement and tumor delineation with SPIO-NPs and quantitative imaging of intratumoral-peritumoral pHe gradients using BIRDS in rat models of brain gliomas. Furthermore, we demonstrated that both the intratumoral and peritumoral pHe readouts, measured with BIRDS using TmDOTP5\u2212, are not compromised by the presence of SPIO-NPs. Thus, we propose a new cancer imaging protocol that can target high drug payloads (via D-NPs) to tumors, image the drug delivery (via SPIO-NPs), concurrently map tumor location and size (by MRI), and at the same time monitor therapeutic efficacy through drug-induced changes in pHe (by BIRDS). Figure S1. Prussian blue staining for iron (SPIO-NPs) distribution. Figure S2. Effect of TmDOTP5- and SPIO-NPs infusion on the transverse relaxation rate (R2) in probenecid-infused glioma-bearing animals. Figure S3. Extracellular pH (pHe) and TmDOTP5- linewidths (LW) measured before and after SPIO-NPs infusion in probenecid-infused glioma-bearing animals. Figure S4. Comparison of extracellular pH (pHe) in 9L tumor-bearing animals that underwent renal ligation or probenecid co-infusion to inhibit renal clearance."} +{"text": "Simulation-based training (SBT) has become a standard for medical education. However, the efficacy of simulation-based training in airway management education remains unclear. The aim of this study was to evaluate all published evidence comparing the effectiveness of SBT for airway management versus non-simulation-based training (NSBT) on learner and patient outcomes. A systematic review with meta-analyses was used. Data were derived from PubMed, EMBASE, CINAHL, Scopus, the Cochrane Controlled Trials Register and Cochrane Database of Systematic Reviews from inception to May 2016. Published comparative trials that evaluated the effect of SBT on airway management training compared with NSBT were considered. Effect sizes with 95% confidence intervals (CI) were calculated for the outcome measures. Seventeen eligible studies were included. SBT was associated with improved behavior performance in comparison with NSBT. However, the benefits of SBT were not seen in time-skill, written examination score, or success rate of procedure completion on patients. SBT may not be superior to NSBT for airway management training. Airway management is often a life-saving procedure for patients.
However, it may be difficult for many health-care providers to gain enough experience to become and remain expert in airway management based solely on their clinical experience. There are several methods of medical education for airway management training. Simulation-based training (SBT) has gained much attention as it may improve patient safety and increase learner competence. We followed the PRISMA guideline in reporting this systematic review and meta-analysis. We considered all published comparative trials that evaluated the effect of simulation on airway management training in comparison with NSBT, using the following inclusion criteria to select the pool of eligible studies: (1) featured SBT as an educational intervention involving one or more of the following modalities: partial-task trainers, high-fidelity mannequins, virtual reality, or computer software; (2) featured NSBT as a comparison group; (3) comprised a single-task or multitask course which included training for airway management techniques (e.g., direct laryngoscopy and/or intubation (DL), bag-mask ventilation (BMV), flexible laryngoscopy or bronchoscopy (FL), supraglottic airway management, cricoid pressure, and surgical airway); and (4) assessed learner and/or patient outcomes. Data from letters, case reports, reviews, or abstracts were excluded. A systematic search of PubMed, EMBASE, CINAHL, Scopus, the Cochrane Controlled Trials Register and Cochrane Database of Systematic Reviews, from inception to May 2016, was performed to identify potential published trials. The search strategy was developed using the following search terms: airway; fiberoptic; fiberscope; bronchoscopy; laryngoscopy; intubation; supraglottic; laryngeal mask; combitube; cricoid pressure; bag-mask-ventilation; cricothyroidotomy; surgical airway. These terms were searched as subject headings, medical subject headings, and text words where appropriate. We combined these using the Boolean operator \u201cand\u201d with education terms: training; education; learning; teaching; and teach. No language restriction was placed on our search. To maximize the sensitivity of our search, we did not limit our search to terms related to simulation or study type.
The reference lists of all eligible publications and reviews were scanned to identify additional relevant studies. Two authors screened and reviewed independently all titles and abstracts for eligibility. For abstracts that did not provide sufficient information to determine eligibility, full-length articles were retrieved. Disagreement on inclusion or exclusion of articles was resolved by consensus.A systematic search of PubMed, EMBASE, CINAHL, Scopus, the Cochrane Controlled Trials Register and Cochrane Database of Systematic Reviews from inception to May 2016, was performed to identify published potential trials. The search strategy was developed using following search terms: Studies were reviewed and data extracted independently by two authors using a pre-designed standard form. The following data points were extracted: 1) simulation modality, 2) trainee characteristics, 3) airway management techniques, 4) type of study design, 6) method of assessment, 7) learning outcomes, including time-skill (time to complete task), behavior performance and knowledge, 8) learner reaction , 9) patient clinical outcomes (i.e. success rate of procedure completion on patients and complications of airway management). Attempts were made to contact the authors for missing data. If detailed information was not received, the study was excluded from the current meta-analysis.To assess methodological quality, we used elements of the Medical Education Research Study Quality Instrument (MERSQI) . The MER2were used to assess heterogeneity across studies, which determined the appropriate use of either fixed-effects or random-effects model. 
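The heterogeneity-driven choice between fixed- and random-effects pooling described here can be sketched as follows. The function name and the illustrative inputs are hypothetical, and the between-study variance is estimated with the standard DerSimonian-Laird formula; this is a minimal sketch, not the authors' actual analysis code:

```python
def pooled_effect(effects, variances):
    """Inverse-variance pooling with the fixed-/random-effects choice driven by I^2.
    Returns (pooled effect, I^2 in percent); the I^2 > 25% threshold follows the review."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    if i2 <= 25.0:                      # low heterogeneity: fixed-effects model
        return fixed, i2
    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re), i2
```

With homogeneous study results the function reduces to a plain inverse-variance (fixed-effects) mean; as I² grows, the random-effects weights flatten and large studies lose relative influence.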
Heterogeneity was considered significant at a P-value < 0.05 or I² > 25%. There are high expectations associated with SBT, since it allows knowledge to be applied in a hands-on approach and offers a venue for problem solving in real-life situations without patient risk or time constraints. Our findings supported the previous evidence that SBT is an enjoyable and attractive instrument for airway management training. However, several limitations of this review are noteworthy. First, our analysis revealed high inconsistency between studies, reflecting variation in instructional design, learner groups, NSBT methods and outcome measures. Second, some of the included studies had methodological limitations or failed to describe clearly the context, instructional design, or outcomes; these deficits limit the strength of our inferences. Some studies could not be included in the pooled analyses because of missing data, despite numerous attempts to contact the authors for more information. Third, only one-third of the included studies measured outcomes in a real clinical setting and only three studies provided data on skill retention, thus limiting our ability to comment on the translation of outcomes from the simulated environment to the real-life clinical environment. Last, pooling effect sizes across study designs is problematic. We have therefore provided results for our meta-analyses stratified by study design; results from meta-analyses of RCTs remained consistent. This meta-analysis, within the limitations of the existing data and of the analytic approaches used, shows that SBT is associated with improved learner behaviour performance and increased learner interest and satisfaction. However, no significant effect of SBT on time-skill and knowledge acquisition for airway management was found. 
Further well-designed studies are needed to address this issue."} +{"text": "Despite a wide range of current and potential applications, one primary concern of brittle materials is their sudden and swift collapse. This failure phenomenon exhibits an inability of the materials to sustain tension stresses in a predictable and reliable manner. However, advances in the field of fracture mechanics, especially at the nanoscale, have contributed to the understanding of the material response and failure nature to predict most of the potential dangers. In the following contribution, a comprehensive review is carried out on molecular dynamics (MD) simulations of brittle fracture, wherein the method provides new data and exciting insights into fracture mechanism that cannot be obtained easily from theories or experiments on other scales. In the present review, an abstract introduction to MD simulations, advantages, current limitations and their applications to a range of brittle fracture problems are presented. Additionally, a brief discussion highlights the theoretical background of the macroscopic techniques, such as Griffith\u2019s criterion, crack tip opening displacement, J-integral and other criteria that can be linked to the fracture mechanical properties at the nanoscale. The main focus of the review is on the recent advances in fracture analysis of highly brittle materials, such as carbon nanotubes, graphene, silicon carbide, amorphous silica, calcium carbonate and silica aerogel at the nanoscale. These materials are presented here due to their extraordinary mechanical properties and a wide scope of applications. The underlying review grants a more extensive unravelling of the fracture behaviour and mechanical properties at the nanoscale of brittle materials. In the 21st century, fracture mechanics has been identified as one of the most emerging and promising fields of engineering mechanics. 
This is because cracking-induced failure of devices and constructions poses a serious threat to human communities and raises safety and reliability concerns. Fortunately, advances in the field of fracture mechanics have helped to optimise many structural designs and, thus, to eliminate potential fracture-related dangers and catastrophes. In general, fracture in materials usually initiates locally from a crack tip and results in global failure through crack propagation across the whole structure. In brittle fracture, the materials exhibit little or no evidence of ductility or plastic degradation before the occurrence of the crack in the sense of material discontinuity. Experiments play a vital role in new findings in science. The results obtained from experiments provide a basis for the understanding of mechanical processes, e.g., crack initiation, propagation and complete failure. However, as the growth of cracks is very fast in the event of brittle failure, the dynamics of crack initiation and growth is experimentally very challenging to study. When one observes fracture in a structure at the macroscopic scale, it is the result of crack initiation and propagation at the nanoscale, carried over various length-scales. In 1976, molecular dynamics simulations were introduced for the first time to model fracture by Ashurst and Hoover. This review is organised as follows. MD simulation is a computational technique used to determine the time-dependent behaviour of molecular systems. In this method, the initial coordinates and velocities of an ensemble of particles are provided as input; the method then integrates the equations of motion and produces new positions and velocities at every iteration. For general techniques of molecular dynamics, we refer to the book of Rapaport. In particular, Newton's equations of motion are solved, where N is the number of interacting atoms in the system. 
The mass and acceleration of atom i are denoted by m_i and a_i, respectively, and the force F_i acting on atom i is expressed as the negative gradient of the potential energy, F_i = -dU/dr_i. An MD trajectory of the system is generated by updating the positions and velocities calculated as a function of time. The force-field expression can be divided into bonded and non-bonded interactions, and its sum over all atoms gives the total potential energy U of the system. Bonded interactions comprise bond-stretching, angle-bending and torsional terms: covalent bond stretching is typically penalised harmonically around the equilibrium bond length, angle bending around the equilibrium angle θ0, and the periodic torsional term is parameterised by n, the number of minima or maxima between 0 and 2π. Embedded-atom method (EAM) potentials are presumably the most popular interatomic potentials for metals and alloys. They describe the metallic bonding characteristics more precisely than two-body potentials. In the EAM, the potential energy U of a system consists of two parts: a pair interaction between atoms i and j, which represents the repulsion between the electrostatic cores, and a cohesive term, which represents the energy of individual atoms i and is expressed through the effective electron density at atom i by an embedding function. The functions are fitted based on experimental properties. In the literature, the Vashishta potential has been widely used; it combines two-body terms U_ij, including steric repulsion and electrostatic contributions, with three-body terms U_ijk that account for bond bending. The AIREBO force-field has been widely used to model the C–C interactions in carbon nanotubes and graphene. In the literature, a number of numerical integration schemes have been proposed to solve the equations of motion. In the leap-frog algorithm, the errors in velocities and positions are of third order, and this numerical integration scheme is equivalent to the Verlet algorithm. MD simulations have been utilised to study dynamic properties at the nanoscale, e.g., viscosity, diffusivity, thermal conductivity and structural relaxation times. They have also been used to explore the physical properties of advanced nanostructured materials that do not exist or cannot presently be created. 
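As an illustration of the Verlet-family integrators mentioned above, the following is a minimal velocity-Verlet sketch for a single particle; the harmonic force and all parameter values are illustrative stand-ins for a real interatomic force-field, not any of the potentials reviewed here:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity-Verlet integration of Newton's equation m*a = F(x) for one particle."""
    f = force(x)
    traj = [(x, v)]
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt   # position update
        f_new = force(x)                              # force at the new position
        v = v + 0.5 * (f + f_new) / mass * dt         # velocity update (average force)
        f = f_new
        traj.append((x, v))
    return traj

# Harmonic "bond" F = -k*x as a stand-in for an interatomic force (illustrative k, m, dt)
k, m, dt = 1.0, 1.0, 0.01
traj = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt, steps=1000)
```

The scheme is time-reversible and conserves the total energy to second order in dt, which is why Verlet-type integrators dominate MD practice.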
Due to their high temporal and spatial resolution, MD simulations are ideally suited to investigate the fracture behaviour of brittle materials, wherein crack initiation, propagation and complete failure occur within picoseconds. Nowadays, typical MD simulations can be performed on systems containing hundreds of thousands, or even millions, of atoms, and for simulation times longer than one millisecond. However, when comparing these numbers with macro- and microscale systems, one may face situations where time and/or size limitations become crucial. For example, in experiments related to dynamical processes, the relaxation times of the physical properties last longer than the time scale of a few milliseconds. Thus, choosing MD system sizes of sufficient complexity and sufficiently long timescales becomes the primary challenge in MD simulations. Fracture is a failure process characterised by a discontinuity occurring in a material as a result of externally applied loading or stresses. Generally speaking, it can be divided into two types, i.e., ductile and brittle fracture. On the one hand, ductile fracture involves substantial plastic deformation at the crack edges before fracture. A typical example of ductile fracture under tensile deformation is material necking, which leads to the formation of cavities that coalesce to form a crack. This class of cracks usually propagates slowly and in a stable manner, with a high amount of energy dissipated before ultimate failure. On the other hand, in brittle fracture, an insignificant amount of plastic deformation occurs before fracture. The material does not undergo necking, and cavities do not form. Moreover, cracks propagate rapidly, and a very low amount of energy is dissipated in the form of plastic deformation before fracture. Cracks are usually unstable and propagate at high speed without any increase in the applied stresses. 
Thus, brittle fractures occur suddenly (without any warning), which is dangerous for any application. Depending on the way the loading is applied, three different modes of material fracture can be distinguished: the opening mode (Mode I), in which tensile stresses are applied perpendicular to the crack plane; the shearing mode (Mode II), in which shear stresses act along the crack plane and normal to the crack front; and the tearing mode (Mode III), in which shear stresses act along the crack plane and parallel to the crack front. Mode I is the most common fracture mode, and it is used in fracture toughness testing. Fracture toughness is the material property that characterises the crack resistance offered by the material. In the literature, many theories have been presented to predict crack propagation in materials. Here, the important criteria used at the nanoscale to calculate the fracture mechanical properties are discussed. According to Griffith's criterion, the fracture stress of an ideal brittle plate is sigma_f = sqrt(2 E gamma_s / (pi a)), where E is the elastic modulus, gamma_s is the surface energy and a is the crack depth. Griffith's criterion is valid only for linear elastic materials, i.e., ideal brittle materials like glass. This approach is also known as linear elastic fracture mechanics (LEFM), and the related theory is valid only for a plate of infinite size; for a semi-infinite plate, the equation is modified accordingly. The crack tip opening displacement (CTOD) theory was developed by Wells to model fracture with appreciable crack-tip plasticity; in 1960, Dugdale introduced the related strip-yield model. J is a line integral, which is path-independent; hence, crack-tip stresses are characterised by the J-integral in nonlinear elastic materials. J is defined as J = -dU/dA, where U represents the potential energy and A depicts the crack area. For a linear elastic material response, J is equal to G. Rice introduced J as a path-independent line integral. In some cases, the classical approaches, such as the aforementioned ones, for modelling brittle fracture cannot be applied. 
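A quick numerical illustration of Griffith's criterion discussed above; the material constants below are illustrative placeholders for a glass-like solid, not values taken from the reviewed studies:

```python
import math

def griffith_stress(E, gamma_s, a):
    """Griffith fracture stress for an infinite linear elastic plate:
    sigma_f = sqrt(2 * E * gamma_s / (pi * a)), with crack depth a."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

E = 70e9        # elastic modulus (Pa), illustrative glass-like value
gamma_s = 1.0   # surface energy (J/m^2), illustrative
# Fracture stress drops as the crack deepens: sigma_f ~ a**(-1/2)
stresses = [griffith_stress(E, gamma_s, a) for a in (1e-9, 1e-8, 1e-7)]
```

The inverse square-root scaling is the essential prediction: a tenfold deeper crack lowers the sustainable stress by a factor of sqrt(10).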
Therefore, essential non-classical methods can alternatively be employed; the Neuber–Novozhilov criterion is one of them. Griffith's equation cannot be used in the case of rubber fracture, since the deformation is nonlinear; hence, Rivlin and Thomas independently generalised the concept to a critical tearing energy Gc. Kalthoff and Shockey proposed a criterion for dynamic fracture. Under certain conditions, materials such as glass, ceramics, some types of polymers and metals can undergo brittle fracture. For instance, metals, which are generally ductile, might fail at very low temperatures in a brittle way, possibly with catastrophic consequences. In mode-I brittle fracturing, the crack propagation path is nearly perpendicular to the applied loading, which creates a relatively flat fracture surface. Moreover, the fracture surface shows a typical pattern; e.g., some brittle materials are characterised by lines and ridges which start at the fracture origin and spread along the crack surface. Due to the lack of plastic deformation prior to failure onset, this is considered the most dangerous sort of fracturing. However, owing to their remarkable thermo-mechanical properties, highly brittle materials are irreplaceable for specific applications, e.g., silicon carbide used together with graphite for high-performance brake pads in a disc-pad braking system. Therefore, to use such highly brittle materials, one has to understand their fracture behaviour and properties. In this work, the considered brittle materials are among the most studied for a wide range of current as well as potential applications. For example, carbon nanotube and graphene nanomaterials have attracted tremendous interest in research and industry due to their unique and outstanding electrical and mechanical attributes. These materials have been used in many applications, e.g., as candidates for reinforcement in composite materials, transistors, battery electrodes, solar cells, and tissue engineering scaffolds. 
Another brittle material we have considered here is the amorphous silica (SiOCarbon nanotubes (CNTs) have attracted a lot of scientific works in nanotechnology since their first observation . They shA single-walled carbon nanotube (SWCNT) can be imagined as rolling of a graphene sheet to a smooth tube with a fixed diameter. In the present work, we modelled the SWCNT of with the length of 20 nm, which consisted of 6560 atoms. Moreover, the double-walled carbon nanotube (DWCNT) was modelled with outer CNT and inner CNT of , which has 11480 atoms. The atomistic simulations were implemented in the open-source package LAMMPS . The inta = 1.2 nm), there was no crack propagation observed. However, in the next toms see C. The pra increases, the fracture stress and strain decrease significantly. In all the parameter variations as well as a change in crack length simulations, the elastic modulus was nearly constant (0.78 \u00b1 0.03 TPa).In 1991, Iijima reported\u25a0the scope of collapse strains of defect-free CNTs was in the range of 10 to 16%,\u25a0a decrease in the strength was obtained in connection with defect occurrence,\u25a0fracture strain showed strong dependency on the inflection threshold in the potential, and\u25a0chirality seemed to have just a moderate influence on CNTs strength.One of the first studies on the fracture of CNT was carried out by Belytschko et al. in 2002.Li and Chou exploredBuehler et al. suggesteIn 2016, Yang et al. computedTheoretical studies on graphene can be found in the literature since 1947. However, it was isolated and characterised by Novoselov et al. in 2004 ,108 at tIn the following, a graphene sheet is modelled using Visual Molecular Dynamics (VMD), and coordinates of the atom coordinates are retrieved . The dimce-field ,37. The We focus in the following on the fracture characteristics of graphene. Though it has a high Young\u2019s modulus, its fracture is characterised to be brittle, as it is likely to crack like ceramic materials. 
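The rolling construction described above fixes a single-walled tube's diameter purely by the chiral indices (n, m). A small sketch of that geometric relation follows; the C-C bond length is the commonly quoted 0.142 nm, used here as an assumed constant rather than a value stated in this review:

```python
import math

A_CC = 0.142  # graphene C-C bond length in nm (assumed textbook value)

def cnt_diameter(n, m):
    """Diameter (nm) of an (n, m) nanotube obtained by rolling a graphene sheet:
    d = a * sqrt(n^2 + n*m + m^2) / pi, with lattice constant a = sqrt(3) * a_cc."""
    a = math.sqrt(3.0) * A_CC
    return a * math.sqrt(n * n + n * m + m * m) / math.pi

# e.g. an armchair (10, 10) tube comes out near 1.36 nm
```

The same indices also determine whether the tube is armchair (n = m), zigzag (m = 0) or chiral, which is why chirality enters the strength discussions cited in this section.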
These properties have been investigated using MD simulations in several research works as briefly described in the following.In 2007, Khare et al. coupled Zhao et al. proposedg et al. reportedg et al. investigZhang et al. studied The classical Griffith\u2019s theory of brittle fracture applied to graphene was verified using experiment and MD simulations by Zhang et al. . The resRecently, determination of reinforcement effects was studied for graphene-polymer , single-Silicon carbide (SiC) is also known as Carborundum. It can be mass produced and mainly used as an abrasive. It has a lot of potential applications such that involving high-frequency radiation hardened and high-temperatures. It also finds numerous applications in the fields of electronics, automobile parts, foundry and jewellery. It has recently been used to synthesise graphene by graphitisation at high temperatures . In the A literature review of recent and past research works that focused on studying the brittle nature of SiC using MD simulations is presented in the following. Briefly, we also present a description of the MD systems, the fracture criteria and some significant results, which allows us to track the advancements in this field.Szlufarska et al. reportedPan et al. did inteRecently, Li et al. investigSilica (or silicon dioxide) has the chemical formula SiOn et al. , the proIn the following, critical milestones in the MD simulations of brittle fracture behaviour of amorphous silica are reviewed. This chronological sequence presentation, together with the description of the MD systems and the applied fracture setups allows us to compare and follow the advancements in this research area.Ochoa et al. conductel et al. . The impl et al. developee et al. describeVashishta et al. proposedRecently, Rimsza et al. reportedy et al. investigIn recent years, brittle ceramic materials have been in focus of active researchers in order to get highly reliable structural elements for engineering applications. 
In particular, ceramic has the ability to sustain much higher temperatures compared to metals. Therefore, it can be used in many applications with elevated operating temperatures. Besides, due to its low cost and long life, ceramic can replace metallic materials.In our previous work, MD simulations were applied to understand the brittle fracture behaviour of aragonite nacre (CaCO3) . These s problem . Moreove process . The bril), width (w) and height (h) of 19.80 \u00d7 2.20 \u00d7 19.70 nmFollowing this, an edge-crack aragonite tablet model was built, and it was used for three different loading conditions to represent three modes. The model was constructed with length method was consulations to imitaulations , pullingn Mode I A, a loadE, which was in the range of 110 to 144 GPa, agreeing with the values found in experimental and theoretical studies in the literature, see . Th. Th17]. In the past two decades, a significant role has been played by MD simulations in developing an understanding of material behaviour and mechanical properties at the nanoscale. This comprehensive review elucidated the importance of MD simulations, which is a potent and valuable tool revealing many interesting hidden mechanisms and parameter correlations, which underlie the macroscopic behaviour. Moreover, because of the recent developments in the field of computer science, computers have become fast and inexpensive with significant memory capabilities, which is enabling many researchers to investigate the fracture processes of large-dimensions and complex materials.Here, we summarised the crucial stages in the calculation of fracture mechanical properties from MD simulations using the macroscopic techniques such as Griffith\u2019s criterion, J-Integral, crack tip opening displacement (CTOD) and other criteria. 
In particular, Griffith's criterion is the most widely used theory to calculate the strain energy release rate, the stress intensity factor and the dependence of fracture stress on crack depth. Recently, multiscale modelling of brittle fracture has received enormous attention, wherein physically-motivated MD simulations are conducted to mimic quasi-static brittle crack propagation at the nanoscale and are later correlated with macroscopic modelling of the fracture using the finite element technique. In this approach, the continuum parameters acquire a genuine physical meaning. This bottom-up approach, which couples mechanics on the atomic and continuum levels, can aid multiscale modelling and reduce complexity, for example, in the modelling and simulation of hierarchically structured biomaterials and the tailoring of advanced materials. The unique properties of highly brittle materials such as carbon nanotubes, graphene, silicon carbide, amorphous silica, calcium carbonate and silica aerogel can serve current and many potential applications, and these materials also provide new opportunities, e.g., reinforced composites. Success relies heavily on selecting the material for a particular application with a fundamental understanding of its failure nature and fracture mechanical properties. In the present work, we studied the impact of crack length on the fracture of SWCNTs, DWCNTs and graphene using MD simulations. Moreover, the three modes of fracture were studied on calcium carbonate, and the fracture properties were estimated from Griffith's criterion. Finally, the development of nanoscale fracture techniques is still in progress and is being continually reviewed by eminent scholars from various disciplines. 
The key to success in this field is that the proposed methods have to be continuously reviewed by the researchers so that all the advantages, as well as drawbacks, can be addressed."} +{"text": "Non-host resistance (NHR) presents a compelling long-term plant protection strategy for global food security, yet the genetic basis of NHR remains poorly understood. For many diseases, including stem rust of wheat [causal organism Puccinia graminis (Pg)], NHR is largely unexplored due to the inherent challenge of developing a genetically tractable system within which the resistance segregates. The present study turns to the pathogen\u2019s alternate host, barberry (Berberis spp.), to overcome this challenge. An interspecific mapping population derived from a cross between Pg-resistant Berberis thunbergii (Bt) and Pg-susceptible B. vulgaris was developed to investigate the Pg-NHR exhibited by Bt. To facilitate QTL analysis and subsequent trait dissection, the first genetic linkage maps for the two parental species were constructed and a chromosome-scale reference genome for Bt was assembled (PacBio + Hi-C). QTL analysis resulted in the identification of a single 13\u2009cM region on the short arm of Bt chromosome 3. Differential gene expression analysis, combined with sequence variation analysis between the two parental species, led to the prioritization of several candidate genes within the QTL region, some of which belong to gene families previously implicated in disease resistance. Foundational genetic and genomic resources developed for Berberis spp. enabled the identification and annotation of a QTL associated with Pg-NHR. 
Although subsequent validation and fine mapping studies are needed, this study demonstrates the feasibility of, and lays the groundwork for, dissecting Pg-NHR in the alternate host of one of agriculture\u2019s most devastating pathogens. The online version of this article (10.1186/s12870-019-1893-9) contains supplementary material, which is available to authorized users. Stem rust, caused by Puccinia graminis (Pg), has for millennia been one of the most destructive diseases of wheat and related small grains. Genotyping-by-sequencing (GBS) libraries were constructed for the two parental lines. After quality parsing and demultiplexing, an average of 3 million high quality reads per genotype were retained by the GBS-SNP-CROP pipeline. SNPs (DSNPs\u2009=\u200941.5) and 1368 indels (Dindels\u2009=\u200936.4) were identified by mapping all high-quality reads from the population to the MR. A detailed account of the winnowing of these markers via a progression of filters to obtain the final sets of markers for linkage map construction is provided in Table\u00a01. After removing F1 progeny with >\u200930% missing data, 161 and 162 individuals were retained for B. thunbergii and B. vulgaris linkage map construction, respectively. The B. thunbergii map was constructed using a total of 1757 markers. For both parental species, the markers coalesced into 14 distinct linkage groups, in agreement with the reported chromosome number in these Berberis spp. The B. thunbergii map has a total length of 1474\u2009cM. The numbers of bins in each of the 14 linkage groups (LGs) range from 23 (LG14) to 60 (LG2), with an average distance between adjacent bins of 2.6\u2009cM. In comparison, the B. vulgaris map consists of 347 bins and a total length of 1714\u2009cM. The numbers of bins in each of these 14 LGs range from 13 (LG14) to 37 (LG2), with an average distance between adjacent bins of 5.5\u2009cM. 
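Map distances in cM, such as those reported for the two linkage maps, are derived from observed recombination fractions via a mapping function. The paper does not state which function was used, so both common choices are sketched here purely as an assumption:

```python
import math

def haldane_cM(r):
    """Haldane mapping function: map distance in cM from recombination fraction r,
    assuming no crossover interference."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cM(r):
    """Kosambi mapping function, which accounts for crossover interference."""
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

# For small r both reduce to ~100*r cM; they diverge as r approaches 0.5.
```

Because Haldane ignores interference, it inflates distances relative to Kosambi for the same recombination fraction, one reason sparse maps (fewer, more distant markers) can come out longer than dense ones.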
Marker names, alleles, and genetic positions (cM), as well as a color-coded visualization of the recombination events within all members of the mapping population are provided in Additional\u00a0file\u00a0B. thunbergii) and Additional file\u00a0B. vulgaris).Summary statistics of the two genetic linkage maps are detailed in Table\u00a0Pg, the parents and all F1 progeny were inoculated with basidiospores ejected from germinated teliospores produced by overwintered telia of Pg found on naturally infected Elymus repens. The progeny segregated into four clear phenotypic classes, ranging from resistant to susceptible analysis was conducted using the linkage maps of both parents and the 4-point stem rust reaction type described above. Based on the LOD threshold score of 3.9 declared via permutation analysis, CIM analysis resulted in the identification of a single significant QTL located 25\u2009cM from the telomere of the short arm of B. thunbergii chromosome 3 , two values which bound the previously published B. thunbergii haploid genome size (1C) of 1.51 Gb [Approximately 129 Gb of sequence data was generated from 115 PacBio Single Molecule Real Time (SMRT) cells (P6-C4 chemistry on RS II), with an average read length of 10,409\u2009bp and a read length N50 of 15,021\u2009bp aligned to the final assembly. After the initial FALCON-Unzip assembly, 119 primary contigs showed significant sequence similarity to plant cpDNA and mtDNA sequence; but this number dropped to only one primary contig in the final assembly as a result of intensive haplotig purging and curation.B. thunbergii, as shown in the Hi-C heatmap on the basis of three-dimensional proximity information obtained via chromosome conformation capture analysis (Hi-C) . Of the B. thunbergii and B. vulgaris linkage maps, respectively. The physical positions of a small percentage of loci in both linkage maps (3.9% in B. thunbergii and 5.1% in B. 
vulgaris) were ambiguous, in that they could not be assigned to unique positions in the physical assembly. Another small percentage of loci (0.93% in B. thunbergii and 1.12% in B. vulgaris) exhibited unambiguous BLAST hits to different chromosomes than in the linkage map, as indicated by dots in Fig. Using BLASTn with MR centroids as queries, the positions of the mapped GBS markers within the final Hi-C assembly were used to anchor the genetic linkage maps of both parental species to the Kobold physical map. As illustrated in Fig.\u00a0To assign chromosome numbers to linkage groups, the pseudo-molecules from the Kobold physical assembly were sorted, longest to shortest. The linkage group (LG) that anchored to the longest pseudo-molecule in the Kobold assembly (99.76 Mbp) was designated LG1; the next longest pseudo-molecule was designated LG2 (99.56 Mbp); and so on to LG14 (54.72 Mbp) . In an effort to refine the assembly within the QTL region, these 20 contigs were locally re-assembled using canu [Qpgr-3S region was masked as repetitive elements using A. thaliana as the model. A total of 219 retroelements were found, of which 178 are LTRs (79 Ty1/Copia and 99 Gypsy/DIRS1) and 41 are LINEs (L1/CIN4). Another approximately 9 kbp of sequence were found to correspond to DNA transposons. Regions of simple sequence repeats occupy a total length of 130 kbp, and 32 small RNAs were found.The 13\u2009cM ing canu , resultiing canu , 5.6% (~QPgr-3S region resulted in the identification of 576 high confidence (HC) genes. Of these, 450 were annotated based on the reference transcriptome (evidence-based) and 126 were annotated based on gene prediction models (ab initio). To help identify a short list of candidate genes potentially associated with Pg-NHR and prioritized for ongoing investigation, the list of HC genes was cross-referenced to the results of two other analyses: Differential gene expression (DGE) and presence/absence analysis . 
Time course DGE analysis led to the identification of five genes that express differentially under Pg inoculation or are missing whole exons (MA262) in B. vulgaris; functional annotations for these genes were found in the UNIPROT and Phytozome databases. Specifically, gene TR20791 is associated with a dormancy-related auxin repressor protein family; TR27614 exhibits high sequence similarity with zinc finger DNA-binding proteins; and TR12856 belongs to the glutamine synthetase (glutamate-ammonia ligase activity) protein family.
Familiar, commonly used mapping populations for genetic linkage map construction in plants include segregating F2 lines, backcross populations, doubled haploids, and recombinant inbred lines. In self-incompatible perennial plant species, however, particularly those with long generation times like barberries, such typical mapping populations are difficult, if not impossible, to produce. To overcome such challenges, the so-called “pseudo-testcross” strategy was first proposed by Grattapaglia and Sederoff (1994) and successfully applied to construct genetic linkage maps in forest trees. In this strategy, an F1 progeny is developed by crossing two unrelated and highly heterozygous (i.e. not inbred) individuals. Gametic recombinations can be tracked in such a population because strategically-chosen sets of markers obey the segregation patterns found in typical testcrosses. The strategy has been widely used in plant species for which other approaches are unsuitable.
In this study, using a pseudo-testcross strategy, genetic linkage maps were developed for both B. thunbergii and B. vulgaris from a single interspecific F1 mapping population. As a result of the stringent quality filters applied to the set of de novo GBS markers used, nearly 100% of the markers were placed successfully in the linkage maps of the two species. Although flow cytometry analysis indicates comparable genome sizes between the two parents, the total length of the BtUCONN1 (B. thunbergii) linkage map obtained in this study is roughly 15% smaller than that of the Wagon Hill (B. vulgaris) map (1474 cM vs. 1714 cM). This incongruity with the expected differences in physical genome sizes is likely due to the significantly fewer markers available for the B. vulgaris map as compared to those available for B. thunbergii (706 vs. 1757). Low marker density often results in inflated genetic distances, which likely contributed to the longer B. vulgaris linkage map. The significantly lower number of markers available for B. vulgaris is likely a result of the relatively lower level of diversity observed in this species, presumably a consequence of the severe genetic bottleneck during its colonial introduction from Europe into North America. In addition, the strong synteny observed between the two independent maps is strong evidence of their reliability.
The long-term goal of this research is to identify candidate gene(s) governing Pg-NHR; toward that end, this study identified a QTL on the short arm of B. thunbergii chromosome 3 and candidate genes including zinc ion binding proteins (TR27614) and glutamine synthetase proteins (TR12856). The current model of disease resistance suggests that plant immune responses can be grouped broadly into two major classes, namely pre-invasion defense triggered by pathogen-associated molecular patterns (PAMP-triggered immunity) and post-invasion defense triggered by pathogen effectors (effector-triggered immunity). The identification of both the QPgr-3S region and a set of high-priority candidate genes demonstrates the utility of the genetic and genomic resources developed in the study to probe the genes underlying Pg-NHR exhibited by B. thunbergii. Such results, however, are but the first step toward identifying the genes governing Pg-NHR; further work is required to validate and dissect the QTL region, in addition to testing candidate gene hypotheses. From the practical standpoint of breeding for improved resistance to wheat stem rust, the central questions regarding Pg-NHR concern the nature and modes of inheritance of the underlying genes.
As previously observed in a natural interspecific barberry hybrid population, F1 interspecific hybrids exhibit a range of reactions to Pg, from fully resistant to fully susceptible, with various intermediate forms. This range of reactions was similarly observed in the F1 mapping population developed for this study (Fig.), and thus a single gene governing the Pg-resistance in B. thunbergii is unlikely. Polygenic NHR has been suggested in other studies as well, including rice NHR to wheat stem rust and barley NHR to powdery mildews, oat stem rust, and other non-adapted rust species. If indeed the QPgr-3S region plays a role in Pg-NHR, the data suggest that its underlying gene(s) are necessary but not sufficient for resistance. In other words, this study at most provides a first insight into a larger gene network regulating Pg-NHR in B. thunbergii. Indeed, in light of the lack of segregation in the non-host parental species B. thunbergii, the segregation of resistance among F1 hybrids suggests the possible existence of some critical gene(s), by definition fixed within the B. thunbergii genepool, upstream of QPgr-3S. Because of their fixed state within B. thunbergii, such gene(s) cannot be mapped in an F1 population; but if recessive, their single dosage in an F1 would permit susceptibility to Pg, thus allowing the detection of background resistance genes (e.g. QPgr-3S). In all likelihood, QPgr-3S is not a critical region conferring Pg-NHR but is rather a region contributing to Pg resistance. Strategic crosses among the F1 progeny and/or backcrosses to B. thunbergii will be necessary to test this hypothesis and identify those critical gene(s) regulating Pg-NHR in B. thunbergii, work shown to be feasible by the current study.
In this paper, we report the development of publicly-available foundational genetic and genomic resources for the Berberis-Pg pathosystem, including the first genetic maps for two Berberis species (B. thunbergii and B. vulgaris), a chromosome-scale reference genome for B. thunbergii, and a related transcriptome to facilitate the characterization of genetic mechanism(s) of Pg-NHR. Future work should focus on the validation, further characterization, and dissection of the identified QTL, including testing of candidate gene hypotheses. Beyond this, now that the Berberis-Pg pathosystem has been shown to be a viable means of probing the mechanism of Pg-NHR in B. thunbergii, future work must also wrestle with the significant question of potential translatability of such resistance to wheat. Such translatability is certainly not a given, particularly in light of the fact that the infecting spores are different for Berberis (basidiospores) and grass (urediniospores) hosts. However, because the two life stages in question belong to the same pathogenic organism and because Berberis is the likely ancestral host of that organism prior to its host expansion to the grasses (see Background), the possibility exists that the mechanism of Pg-NHR in B. thunbergii may provide relevant insight into breeding durable resistance in wheat. With this study, the foundation is laid to eventually answer this question.
A B. ×ottawensis mapping population consisting of 182 F1 individuals was derived from an interspecific cross between B. thunbergii accession ‘BtUCONN1’ (pollen parent) and B. vulgaris accession ‘Wagon Hill’ (seed parent). True to its species, BtUCONN1 is a non-host to the stem rust pathogen and is a small shrub that displays 1.3–3.8 cm long entire leaves and 1–2 cm long inflorescences with few umbellate but mostly solitary flowers.
In contrast, Wagon Hill is susceptible to stem rust and is a relatively taller shrub that displays 2–5 cm long obovate to obovate-oblong leaves with highly serrated margins (> 50 serrations) and has 5–8 cm long pendant racemes of bright yellow flowers. The pollen parent BtUCONN1 was a feral plant maintained in the barberry collection at the research farm of the University of Connecticut, and the female parent Wagon Hill is a feral plant growing along the shoreline of the Great Bay Estuary in Durham, New Hampshire. The F1 mapping population was raised in plastic pots filled with PRO-MIX HP growth media in the Macfarlane Greenhouse facility at the University of New Hampshire.
To make the interspecific cross, pollen was harvested from mature flowers of BtUCONN1 using the previously described N-pentane method and stored until use. To verify the F1 status of the individuals in the mapping population, a PCR-based species-specific marker was designed based on available GBS data. The F1 status of a putative hybrid individual was considered validated if both bands from the two parental species were detected.
Genomic DNA of the 182 verified F1 individuals was extracted following the manufacturer's protocol. Reduced representation libraries were constructed using the two-enzyme (PstI-MspI) GBS protocol described by Poland et al. and sequenced. Raw FASTQ files were generated by CASAVA 1.8.3 and analyzed using the reference-free bioinformatics pipeline GBS-SNP-CROP, aligning the reads of the F1 progeny to the Mock Reference (MR) and following the pipeline's recommended parameters for diploid species. Complete details of the GBS-SNP-CROP command lines used in this analysis, including all specified pipeline parameters, are provided in Additional file 6.
The sequence of filters applied to obtain the final sets of markers for linkage map construction is summarized in Table. Linkage analysis was performed using the R package ONEMAP v2.0-4, and separate maps were constructed for the two parents. To identify potential genotyping errors, which are common in GBS data, the maps were inspected further.
To determine disease responses, the parents and all F1 individuals in the mapping population were inoculated with basidiospores ejected from germinated teliospores produced by Pg telia found on naturally-infected Elymus repens, as previously described. The pollen parent BtUCONN1 exhibits the resistant reaction typical of B. thunbergii. In contrast, the female parent Wagon Hill exhibits the clear susceptible reaction of B. vulgaris, with well-developed mature aecia visible on the abaxial surfaces of leaves. Images of typical reactions of the parents and of individuals in the F1 mapping population are presented in Fig. QTL analysis for Pg resistance was performed using both the paternal and maternal genetic linkage maps with the R package R/qtl v1.39-5, and Haley-Knott regression was used to scan for QTL.
Due to its relevance not only to Pg-NHR research but also to ornamental breeding, B. thunbergii cv ‘Kobold’, a commercial green-leafed cultivar common in the ornamental industry, was selected for whole genome sequencing. Kobold is a heterozygous diploid (2n = 2x = 28) and is a non-host to stem rust. The FALCON and FALCON-Unzip toolkits (FALCON-integrate v1.8.2) were used for assembly, and GenomicConsensus (https://github.com/PacificBiosciences/GenomicConsensus) was used to polish the phased primary contigs and their associated haplotigs. Genome size was estimated using both k-mer analysis of the error-corrected PacBio reads and flow cytometry, with Pisum sativum ‘Ctirad’ (2C = 9.09 pg) as an internal standard (BD Accuri™ C6 Cytometer). Further polishing and curation of the assembly was accomplished using the Purge Haplotigs pipeline.
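Stepping back to the QTL scan described above: Haley-Knott regression, as implemented in R/qtl, regresses the phenotype on expected genotypes at putative QTL positions and compares the fit against a null model. As a rough, self-contained illustration of that idea (a single observed biallelic marker rather than genotype probabilities, with hypothetical data; this is not the R/qtl implementation):

```python
import math

def lod_score(phenotype, genotype):
    """LOD score at a single biallelic marker: compares a one-predictor
    least-squares regression of phenotype on genotype against the
    mean-only null model, via the ratio of residual sums of squares."""
    n = len(phenotype)
    mean_y = sum(phenotype) / n
    rss0 = sum((y - mean_y) ** 2 for y in phenotype)  # null model
    # closed-form least-squares fit y = a + b*g for one predictor
    mean_g = sum(genotype) / n
    sxy = sum((g - mean_g) * (y - mean_y)
              for g, y in zip(genotype, phenotype))
    sxx = sum((g - mean_g) ** 2 for g in genotype)
    b = sxy / sxx
    a = mean_y - b * mean_g
    rss1 = sum((y - (a + b * g)) ** 2
               for g, y in zip(genotype, phenotype))
    return (n / 2.0) * math.log10(rss0 / rss1)
```

A marker linked to a causal locus yields a high LOD; an unlinked marker yields a LOD near zero. R/qtl's `scanone(..., method = "hk")` performs this kind of scan genome-wide using genotype probabilities at a grid of positions.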
Quality of the final curated assembly was assessed using QUAST, and the assembly was screened for potential contaminants, including the Escherichia coli genome (CP017100.1) and 16S and 18S rRNAs. The rRNA database was created using the SILVA project, and alignments were performed using BLAST, GMAP, and BWA. To linearly order and orient the primary contigs into chromosome-scale pseudo-molecules, a proximity-guided assembly was performed using Phase Genomics' Proximo™ chromosome conformation capture (Hi-C) technology.
Orthogonal sets of markers were used to build the genetic linkage maps of the two parents; thus the two maps share no markers in common, preventing a direct assessment of synteny between the two species. The physical assembly, however, presents a potential “common language” by which the two maps can be compared, provided the markers in the linkage maps can be uniquely located in (i.e. anchored to) the physical assembly. To accomplish this, BLASTn was performed with the marker-associated sequences as queries against the assembly, and plots were generated using Matplotlib (https://matplotlib.org/index.html). The above anchoring method was also used to project the detected Pg-NHR QTL region onto the physical map, thus permitting insight into its underlying physical sequence.
Leaf tissues, before and after Pg inoculation, were collected from a clonally propagated plant of B. thunbergii cv. ‘Kobold’. A time course differential gene expression analysis experiment was conducted to identify genes whose levels of expression detectably change under challenge by Pg. Three biological replicates of immature leaves were sampled from clonally propagated B. thunbergii cv. ‘Kobold’ plants at four different time points: pre-inoculation (T0) and 48, 72, and 144 h post-inoculation. Total RNA was extracted, sequenced, and processed as described above. Transcript abundance was quantified using Kallisto, and the time course analysis was carried out in R using sleuth. Combinations of approaches were taken to pare down the full set of HC genes to those more likely to contribute to Pg-NHR; complete details are provided in the additional files. Re-sequencing reads of the B. vulgaris parent Wagon Hill (i.e.
>30x re-sequencing depth) were aligned to the QTL region in an effort to identify HC genes with no apparent homolog in B. vulgaris. The final list of high-priority candidate genes is composed of those HC genes in the QTL region that are either differentially expressed under Pg inoculation or have at least one complete CDS sequence absent in B. vulgaris.
Additional file 1: Sequencing details of the mapping population used in this study. (XLSX 20 kb)
Additional file 2: Supplementary Figures and Tables. Figure S1. Genetic linkage maps of B. thunbergii accession ‘BtUCONN1’ and B. vulgaris accession ‘Wagon Hill’. Figure S2. Hi-C heat map of the scaffolded primary contigs of B. thunbergii cv. ‘Kobold’. Figure S3. Venn diagrams of high-priority candidate genes identified for further investigation. Figure S4. Time course expression plots for the five candidate genes found via DGE analysis. Figure S5. Base-by-base coverage plots in B. vulgaris accession ‘Wagon Hill’ for the two candidate genes identified via presence-absence analysis. Figure S6. Gel image of the marker used to validate the hybrid status of the individuals in the F1 mapping population. Table S1. Summary of the raw PacBio data obtained for B. thunbergii cv. ‘Kobold’. Table S2. Summary statistics of the 14 pseudo-molecules of the B. thunbergii cv. ‘Kobold’ reference assembly. Table S3. Details of the library of ten tissues from B. thunbergii cv. ‘Kobold’ used for transcriptome assembly. (PDF 5620 kb)
Additional file 3: Linkage map of B. thunbergii accession ‘BtUCONN1’ and associated information. (XLSX 412 kb)
Additional file 4: Linkage map of B. vulgaris accession ‘Wagon Hill’ and associated information. (XLSX 251 kb)
Additional file 5: MAKER features and detailed functional annotation for the seven candidate genes. (XLSX 13 kb)
Additional file 6: Supplementary Text. Text S1. Cluster sequences and primer information for the PCR-based markers used to validate the F1 status of the individuals comprising the B. ×ottawensis mapping population. Text S2. Detailed record of the GBS-SNP-CROP command lines used in this study. Text S3. Complete details of the FALCON assembly parameters used in this study. Text S4. Complete details of the script used for purging haplotigs. Text S5. Complete details of parameters used for quantifying transcripts and the sleuth R code for the time course analysis. (PDF 128 kb)"} +{"text": "At the end of the nineteenth century, the northern port of Liverpool had become the second largest in the United Kingdom. Fast transatlantic steamers to Boston and other American ports exploited this route, increasing the risk of maritime disease epidemics. The 1901–3 epidemic in Liverpool was the last serious smallpox outbreak in Liverpool and was probably seeded from these maritime contacts, which introduced a milder form of the disease that was more difficult to trace because of its long incubation period and occurrence of undiagnosed cases. The characteristics of these epidemics in Boston and Liverpool are described and compared with outbreaks in New York, Glasgow and London between 1900 and 1903. Public health control strategies, notably medical inspection, quarantine and vaccination, differed between the two countries and in both settings were inconsistently applied, often for commercial reasons or due to public unpopularity. As a result, smaller smallpox epidemics spread out from Liverpool until 1905. This paper analyses factors that contributed to this last serious epidemic using the historical epidemiological data available at that time. Though imperfect, these early public health strategies paved the way for better prevention of imported maritime diseases. 
This dreadful disease had occurred in sequential epidemics throughout the nineteenth century in British cities. This paper describes the factors that contributed to the pattern of national smallpox outbreaks in the United States and United Kingdom, and specifically in the cities of New York, Boston and Liverpool, between 1901 and 1903. Reconstructing these historical epidemics using imperfect sources is challenging, and methodological limitations are considered. The primary aim is to describe public health approaches to control smallpox during epidemics in major transit ports for Atlantic shipping, and factors which influenced these efforts. Experience with different responses helped develop a more evidence-based approach to disease control and to anticipate the general public’s response to such measures. A knowledge base for disease control was growing, given experience with other maritime imported epidemic diseases, such as cholera and plague, but smallpox differed due to the availability of an effective preventive vaccine, the efficacy of which had not been fully assessed. In analysing these data, a secondary aim is to examine the evidence that smallpox cases occurring in Liverpool in 1901 and 1902 may have originated from American imported cases. Peak smallpox incidence in the United States spanned 1901 and 1902, and a high infection risk was channelled via ships travelling from Boston to Liverpool, where the outbreak peaked in April 1903. Outbreaks across northern and central England were temporally related to the Liverpool epidemic. The response to the epidemics was influenced by new epidemiological approaches and public health practices in both countries, although public health recommendations differed. In the United States, national and state vaccination strategies varied, as did exemption regulations for children and adults.
In both countries, local factors included commercial interests, variable clinical disease patterns, delayed diagnosis and quarantine practices. An improved understanding of smallpox disease epidemiology slowly emerged and contributed to eventual control and elimination through broadened international efforts.
Extracting and interpreting late nineteenth-century information from historical medical records on smallpox in order to quantify risk factors, a standard method in modern disease epidemiology, is subject to several pitfalls. Instead of meticulous tracing and recording of known cases and their contacts, the basic assumption at that time held the social and domestic habits of the poor to be the principal factors spreading smallpox. Despite such progress, the present reconstruction of historical outbreaks and examination of their risk factors is affected by several criteria which are difficult to quantify. These included: variable definitions of reported events; misdiagnoses; lack of detailed household transmission data; spatial heterogeneities; inadequate information on vaccine effectiveness, partly because of a lack of reliable estimation methods; difficulties in early recognition of smallpox cases and confusion with chicken pox or measles; notification delayed until the afflicted person had been suffering for many days; and the absence of explicit statistical analyses.
In the late nineteenth century, quarantine stations and regulations for the sanitary inspection of ships were present at many British seaports, and general sanitary arrangements were satisfactory in two-thirds of the sixty port sanitary districts. There was some collaboration between the two countries.
The United States Assistant Surgeon, Dr Carroll Fox (b. 1874), stayed in the Liverpool United States Consulate for three months in 1902 to review infection control policy and practice and the numbers of smallpox and typhus cases. A Lancet editorial commented in 1880 that quarantine only survived because it was plausible, seductive and fitted the unreasonable demands of certain Continental powers, and that ‘it was derogatory to England that she should submit to these hideously farcical detrimental proceedings’. In the last decade of the nineteenth century, the twin systems of medical inspection and quarantine were in use, but with greater emphasis on medical inspection and case isolation. The risk posed by foreigners had become more evident to the general public in the United States as migration sensitised opinions, and foreigners became a focus for quarantine policies.
In Britain, quarantine stations, including some more isolated offshore establishments, had existed in the early nineteenth century. This remained the only effective measure until later in the century when contact tracing and surveillance were introduced. Port Sanitary Authorities established hospital ships in a number of locations around Britain to isolate suspected smallpox cases. In 1884, the Metropolitan Asylums Board moored three converted ships in the Thames to serve as a floating hospital. Procedures were in place for ship fumigation, cleaning and painting of vessels, disinfection of clothing, and vaccination of passengers and seamen, although seamen often refused vaccination. In the United States, maritime quarantine was initially a state service but was transferred to a national Public Health Service between the 1880s and the 1920s.
The Marine Hospital, dedicated to the care of ill and disabled seamen in the United States Merchant Marines, the US Coast Guard and other federal beneficiaries, eventually evolved into the Public Health Service Commissioned Corps.
A milder form of smallpox (Variola minor) had a death rate of 2 to 6% among unvaccinated individuals, considerably lower than that of the Variola major strain. Variola major nonetheless remained present in several American cities, particularly in the northeast. Smallpox, when diagnosed, was reported and case fatalities were recorded across the country. Dr Charles Value Chapin (1856–1941), an American pioneer in public health research and practice, and Health Superintendent (1884–1932) for Providence, Rhode Island, compiled a detailed outline of smallpox in the United States between 1895 and 1912. Disease notification was incomplete in some cities and rural areas, and some states omitted returns. The Boston epidemic commenced in May 1901 in a large factory. The difference in severity of cases between epidemics warrants further examination. Characteristics of this epidemic can be gleaned from the clinical records of 243 patients consecutively admitted to the Southampton Street smallpox hospital in Boston. These records describe the Variola major form of smallpox, but they were not necessarily representative, as many attacks were mild. The Boston epidemic coincided with a smaller smallpox epidemic (after accounting for the difference in population size) in London, commencing in June 1901 and lasting until January 1903, with 9484 notified cases.
In the nineteenth century, thousands of emigrants from the British Isles left from Liverpool Port. Packet lines sailed regularly from 1818, and in 1822 smallpox was transmitted from Liverpool to Baltimore on board the ship Pallas. Prior to the 1901–3 epidemic, smallpox was imported to Liverpool on eight known occasions in 1900, the most important being that of the SS New England, which arrived with nineteen cases on board. This ship left Boston on 1 February, arriving in Liverpool on 30 March. On leaving Boston with 525 passengers and 268 crew, including fifty-five clergymen and many elderly people, it travelled to the Mediterranean. The Fortnightly Gazette reported that a number of these passengers fell ill with smallpox at Naples and in other places in Italy and France. The SS Ivernia from Boston also landed a single smallpox case at Queenstown while en route to Liverpool.
Competitive, fast transatlantic passenger and mail steamers were efficient disease vectors. The SS Kansas arrived in Boston on 15 January 1902 following one smallpox death at sea. The ship was quarantined but was allowed to leave for its return trip to Liverpool after only six days. When it arrived back in Liverpool, nine clinical cases were identified on arrival and transferred to the Port Sanitary Hospital at New Ferry, one of whom died. The SS Devonian carried infected seamen on separate occasions in early December, January and February. With the mild type of illness, diagnosis was unclear until medical advice on skin spots was sought. A further case arrived on the SS Campania from New York on 5 April, a ship holding the fastest transatlantic crossing time. The SS Kansas imported nineteen cases from Boston that were admitted to the New Ferry hospital. When the ship sailed from Liverpool on 4 January 1902, two crew were treated at sea, one of whom died. Late acquisition of smallpox explained these cases. Upon reaching Boston, as many as twenty men were put ashore and all cattlemen were taken to the quarantine station and revaccinated. The SS Kansas then put back to New York and landed several further crew suffering from smallpox. Later arriving at Liverpool on 6 February, nine convalescents and two contacts were identified and removed to the Port Sanitary Hospital.
The connexion between seaports and smallpox had been initially observed in England in the epidemic of 1870–2, when Liverpool and London were the first places to feel the effects of the continental outbreaks associated with the Franco-Prussian war. In Liverpool, smallpox had been introduced by Spanish sailors. Another importation was associated with the SS Minnehaha. The 1901 Liverpool outbreak was the last major smallpox epidemic in this city. Its magnitude was comparable to the earlier 1876–8 outbreaks, as shown in Figure. John Christie McVail (1849–1926), Medical Officer of Health for Stirling and Dumbarton in Scotland, and a leading advocate of smallpox vaccination in the early twentieth century, suggested that smallpox was no longer indigenous in the United Kingdom and insisted that epidemic outbreaks were imported. In the United States, Variola minor was dominant, and Chapin considered it highly probable that the mild type of smallpox was carried to England from Boston in 1902 and during the following years. The Liverpool focus is distinct from those in Glasgow, London, Edinburgh, Tynemouth, Hull and South Wales, which are all ports, but which did not have regular scheduled transatlantic links with eastern United States seaports. In Southampton, only two cases of smallpox were reported on vessels bound for the port in 1902. The first infected Liverpool resident was identified on 12 December 1901 (Figure). In 1881, although smallpox was epidemic in the metropolis, the cases occurring in the port were few in number and “quite isolated”. There were six imported cases, among them one from the SS Hispania, which arrived from Bombay.
The smallpox epidemics occurring in London in 1876, 1881, and 1884 cannot with certainty be traced to infection from abroad: ‘it is possible that the strain introduced in 1870 was working itself out with diminishing virulence, and that the port sanitary authorities were successful in preventing its refreshment from abroad’. In December 1876 the Port Medical Officer reported that ‘although smallpox was widely epidemic in London, it had so far manifested itself in the floating population to an infinitesimal extent’. The low case fatality is consistent with a Variola minor strain in Boston and Liverpool, whereas the higher case fatality in unvaccinated cases in London and Glasgow is consistent with Variola major as the primary source of infection. The resolution of the Liverpool epidemic was rapid, with a marked fall in incidence in June 1903, which compares to a much slower resolution in Boston over several months. As described below, this may relate to differences between the two cities in the effectiveness of the public health response and epidemiological approaches to disease control, as well as the lower vaccine uptake in Boston and staggered introductions of vaccine delivery. Case fatality in unvaccinated persons in Boston was very similar to that in Liverpool. In the United Kingdom, the Scottish obstetrician Sir James Young Simpson (1811–70) wrote an influential outline on smallpox prevention based on household isolation policies. Its departure point was early case notification, quarantine of infected patients, vaccination of carers, hygienic cleaning of everything in contact with the patient, and strict disinfection and bedding procedures.
In the late nineteenth century, the mode of transmission of smallpox was not well understood. It was considered intensely infectious, arising potentially from aerosol spread, infected fomites and direct contact with the patient, their clothing or belongings.
In industrialised countries, it was considered a ‘winter disease’. The British Medical Journal in 1901 reported smallpox transmission to Nottingham by Mormon mail from Salt Lake City, and the smallpox register for Liverpool queried whether two cases had occurred from contact with foreign mail. A major concern was infection of communities living in proximity to isolation hospitals, including the New Ferry hospital. Confusion over aerial transmission, and in particular controversy over the spread of disease from hospitals, was a major concern. Modern understanding has improved knowledge of infection risk, although the minimum infectious dose of smallpox remains unknown, i.e. how many virus particles an individual needs to inhale to become infected.
Estimates of vaccine efficacy are given in Table. In both countries, community-wide vaccination and revaccination campaigns were started during the epidemics. In Boston, uptake was impaired by the unpopularity of the vaccine, with side effects seemingly a greater problem than mild smallpox. In Liverpool, widespread smallpox vaccination was prioritised and the benefits of vaccination were strongly promoted locally. The reasons for higher vaccine efficacy in Liverpool compared to Boston are unknown.
This may relate to vaccine potency or storage conditions, differing pre-epidemic population immunity, or herd immunity. Vaccine virus deteriorates if not cooled in transit, and transport in baggage cars in the United States had to avoid proximity to steam coils, which affected vaccine quality. In theory, vaccination not only diminishes the susceptibility of vaccinated individuals but also reduces the degree and duration of infectiousness.
Two Liverpool city newspapers printed regular press communications, with occasional notices in the local Wirral Birkenhead press, which covered the New Ferry Sanitary Hospital area. The press was responsible for reporting the local position accurately. Detailed summaries of medical presentations by Dr Edward Hope, held at The Liverpool Medical Institute, were printed for public consumption, and included conclusions of local City Council meetings. There was also advertising in The Liverpool Echo for alternative remedies, such as a ‘curative syrup’ (Mother Seigel's) in times of epidemic, that would keep the whole body strong, avoiding enfeeblement by indigestion, anaemia, blood disorders and lack of the stamina necessary to resist contagion. Multiple press news releases appeared in the Boston Globe and Cambridge Chronicle as the outbreak developed. The Globe reported in December 1901 that the outbreak had abated, was at no time serious, and that the public seemed to have been needlessly alarmed with largely imaginary dangers.
Smallpox was later imported on the SS Cilicia in April 1950, a P & O steamer mostly engaged in the India run. Transmission of Variola minor from infected ship crews or officials occurred in 5% of all reported cases (31/652) during that period. Other ship-borne importations included the SS Tuscania in Glasgow in 1929, and the SS Cathay in the Port of London in 1938. A further importation involved the SS Mooltan, when contacts were quarantined for only four to five days and were then allowed free travel, leading to an outbreak which required an immense public health control effort. One case, initially diagnosed as P. vivax malaria in a child whose subsequent death was attributed to staphylococcal septicaemia, was later realised to be smallpox. In the United States, Variola minor became the dominant form. In 1906, so far as can be learned from the United States Public Health Department's records, severe smallpox cases were scarce, occurring in only nine of 12 503 reported cases. The data presented above support the view that in the late nineteenth and early twentieth century, smallpox had become a disease primarily introduced to the north of England from abroad by merchant seamen or by travellers. Sometimes the disease was detected on board ship, but inadequate public health measures led to many missed cases.
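The vaccine-efficacy comparisons discussed above rest on the classical attack-rate calculation, VE = 1 − (attack rate in vaccinated) / (attack rate in unvaccinated). A minimal sketch, using hypothetical counts rather than figures from this paper's tables:

```python
def vaccine_efficacy(cases_vacc, n_vacc, cases_unvacc, n_unvacc):
    """Classical vaccine efficacy: the relative reduction in attack rate
    among vaccinated compared with unvaccinated individuals."""
    attack_vacc = cases_vacc / n_vacc
    attack_unvacc = cases_unvacc / n_unvacc
    return 1.0 - attack_vacc / attack_unvacc

# Hypothetical illustration: 10 cases among 10,000 vaccinated versus
# 100 cases among 10,000 unvaccinated gives VE = 0.90, i.e. a 90%
# reduction in risk attributable to vaccination.
```

As the text notes, such estimates from this era are fragile: case definitions, vaccination status and notification completeness were all inconsistently recorded.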
Between 1904 and 1916, only 144 further cases (six deaths) occurred in the United Kingdom, dwindling to two cases in the last three years.
Smallpox was regularly transmitted across the Atlantic after its arrival from tropical Africa in the fifteenth century. Dr Edward William Hope, the Liverpool Medical Officer of Health during the epidemic, was essentially the counterpart of Dr Samuel Holmes Durgin, Head of the Public Health Department in Boston, who was considered one of the greatest public health physicians in nineteenth-century America. The maritime origins of infection, transported by the record-breaking transatlantic liners of that time, reflect the influence of commercial shipping interests requiring rapid turnaround times. The reluctance to implement strict quarantine, perhaps especially in Boston, was partly dictated by cost and commercial considerations. There is much to suggest that the Liverpool smallpox epidemic originated in Boston, but this is perhaps less important than other conclusions. The interchange is a reminder of the greatly magnified risks when modern air transport of unrecognised infections is combined with a high priority for commercial efficiency. This historical event is an unusual example of special ties forged between the United States and the United Kingdom, through disease transmission rather than diplomacy."} +{"text": "The Motivation and Confidence domain questionnaire in the Canadian Assessment of Physical Literacy (CAPL) was lengthy, and thus burdensome to both participants and practitioners. The purpose of this study was to use factor analysis to refine the Motivation and Confidence domain to be used in the CAPL–Second Edition (CAPL-2). Children, primarily recruited through free-of-charge summer day camps (n = 205, Mage = 9.50 years, SD = 1.14, 50.7% girls), completed the CAPL-2 protocol and two survey versions of the Motivation and Confidence questionnaire. 
Survey 1 contained the Motivation and Confidence questionnaire items from the original CAPL, whereas Survey 2 contained a battery of items informed by self-determination theory to assess motivation and confidence. First, factor analyses were performed on individual questionnaires to examine validity evidence and score reliability. Second, factor analyses were performed on different combinations of questionnaires to establish the least burdensome yet well-fitted and theoretically aligned model. Children were primarily recruited through free-of-charge summer day camps. The assessment of adequacy and predilection, based on 16 single items as originally conceptualized within the CAPL, was not a good fit to the data. Therefore, a revised and shorter version of these scales was proposed, based on exploratory factor analysis. The self-determination theory items provided a good fit to the data; however, identified, introjected, and external regulation had low score reliability. Overall, a model comprising three single items for each of the following subscales was proposed for use within the CAPL-2: adequacy, predilection, intrinsic motivation, and perceived competence satisfaction. This revised domain fit well within the overall CAPL-2 model specifying a higher-order physical literacy factor (χ2(63) = 81.45, p = 0.06, CFI = 0.908, RMSEA = 0.038, 90% CI). The online version of this article contains supplementary material, which is available to authorized users. The Canadian Assessment of Physical Literacy (CAPL) is a comprehensive assessment of childhood physical literacy. Recently, confirmatory factor analyses were used to refine the 25 aggregated indicators of the CAPL, and results supported the factor structure of a 14-aggregated-indicator version called the CAPL–Second Edition (CAPL-2). Some indicators may be distal indicators of motivation and confidence.
Similarly, perceived barriers might not be a direct indicator of motivation and confidence. In re-evaluating the conceptualizations of motivation and confidence in the original CAPL, it became apparent that the domain was not sufficiently grounded in a specific theory of motivation that had empirical evidence within physical activity contexts. Whitehead's description of motivation and confidence is somewhat vague, and does not map directly onto specific definitions of motivation and confidence outlined within theories of motivation (or similar constructs such as self-efficacy or competence). Nonetheless, theoretically anchoring the definition of motivation and confidence within the CAPL is critical to the advancement of measuring this important domain of physical literacy. Indeed, using a theoretical framework of motivation can both complement Whitehead's definition of motivation and confidence and extend it by enhancing precision of measurement and predictive abilities. One theory that is often used to understand the quality of motivation and perceived competence is self-determination theory. Another issue related to the CAPL Motivation and Confidence domain pertained to participant burden and instructional clarity. CAPL administrators had informally reported to the study coordinating centre that, if children independently read the instructions and practice questions rather than reviewing them orally as a group, then some children had trouble understanding how to answer. Within the CAPL, the items assessing perceived adequacy, perceived predilection, and perceived injury-risk all used a structured alternative response format. Lastly, CAPL administrators had identified comprehension issues pertaining to the perceived barriers items that were used as one component of the benefits-to-barriers difference score. The purpose of this study was to explore refinements to the CAPL Motivation and Confidence domain to address the issues identified above.
Specifically, we sought to: (1) reduce the number of items participants needed to complete; (2) enhance instruction clarity; (3) ensure that the items within the Motivation and Confidence domain were more closely aligned with well-supported motivational theory; and (4) ensure that these items demonstrated good factor structure and reliability. It is important to note that our purpose was to use existing questionnaires that have demonstrated initial score reliability and validity in children and youth for the assessment of motivation and confidence. Our intention was not to re-develop items or item response formats, or to create new items. Rather, our goal was to refine existing CAPL questionnaires and add existing questionnaires to theoretically anchor the Motivation and Confidence domain within CAPL. We view the development of CAPL as an ongoing process, and this contribution should be seen as one initial step in the ongoing process of validation. Finally, we recognize that Whitehead's concept of charting progress in physical literacy is well aligned to objective measurements. Children (n = 205, Mage = 9.50 years, SD = 1.14 years, 50.7% girls) who were enrolled in YMCA free summer camps in southwestern Ontario completed the CAPL-2. Interpretation of the geomin rotated loadings indicated that four items loaded strongly onto a factor that matched “adequacy”; three items loaded strongly (range = 0.58–0.68) onto a factor that matched “predilection”; and three items loaded onto a factor we labelled “behaviour” because the items reflected activities in which the children actually engaged. The fourth factor had a mix of weak and strong factor loadings that did not have an apparent pattern; it comprised a mix of negatively worded predilection and adequacy items.
We therefore re-estimated a confirmatory factor analysis using the first three latent factors and omitting the items that loaded onto the fourth latent factor, and the resultant data were a good fit to the model, consistent with Sebire and colleagues. The Knowledge and Understanding indicator asking “how to improve sport skill” did not significantly load onto knowledge and understanding. All other factor loadings were significant. Daily step count was not significantly correlated with any CAPL domain (ps > 0.14). Physical Competence was correlated with Knowledge and Understanding and with Motivation and Confidence. Motivation and Confidence was uncorrelated with Knowledge and Understanding. Lastly, we ran a confirmatory factor analysis to examine the fit of the revised Motivation and Confidence domain in a four-correlated-factor model representing all of the protocols within CAPL-2. In this model, a latent factor representing daily behaviour, comprising self-reported physical activity and daily step counts, could not be estimated because the two items were uncorrelated. Next, this model was re-estimated specifying the four domains of CAPL-2 to load onto a single physical literacy latent factor. Results indicated a good model fit (χ2(63) = 81.45, p = 0.06, CFI = 0.908, RMSEA = 0.038, 90% CI; see Fig. 2). When the model was re-estimated without participants with incomplete data (χ2(63) = 74.88, p = 0.15, CFI = 0.927, RMSEA = 0.033, 90% CI), the parameter estimates were very similar to when these participants were included (results available from Katie E. Gunnell upon request). The original CAPL contained only a single confidence item with its own response format, thereby breaking a responding pattern children were familiar with in the other questionnaires. Furthermore, having only one item of confidence is limiting when researchers are seeking to perform factor analyses.
Therefore, a decision was made to replace this one item with three items of perceived competence satisfaction from an existing questionnaire. Our confirmatory factor analysis of the alternative response scores from the original CAPL adequacy and predilection scores indicated that these scores alone did not provide a good fit to the data. Modification indices suggested cross-loadings as well as numerous correlated errors. These alternative response items were taken from the Children's Self-Perceptions of Adequacy and Predilection for Physical Activity Scale. Our findings were largely consistent with those of Sebire and colleagues. There were a few findings that were inconsistent with past research and theory. First, daily step counts were not associated with motivation and confidence, or with any other domain of the CAPL; this finding is inconsistent with past research. Although we were able to refine the motivation and confidence assessments within CAPL, limitations are worth noting. First, the sample size was relatively small and we estimated numerous models, which could increase the odds of chance findings. Therefore, and in recognition that validation is an ongoing process, researchers should continue to replicate these findings with larger and more generalizable samples. Additionally, it is incumbent upon researchers who adopt these questionnaires to ensure that they demonstrate good score reliability and validity in their own samples before making inferences based on the data. Our sample might not generalize to other children, since they were a select group of children participating in camps at the YMCA.
For example, it is possible that these children were more likely to be active than children who might have been recruited through other avenues; their parents may have prioritized physical activity more than other parents who did not enroll their children in the camps; or they could have come from lower socioeconomic status backgrounds, given that the YMCA offers physical activity programming for free. Moreover, we were unable to model the Daily Behaviour domain comprising both daily step counts and self-reported physical activity because the two items were uncorrelated in this sample, although weak correlations between pedometers and self-reported physical activity have been noted in previous reviews. Although we were able to provide score validity and reliability evidence for our final selected model, other sources of validity should also be examined. The revised questionnaire to assess motivation and confidence in CAPL-2 comprises four subscales that include two response formats: namely, the structured alternative response format used in the adequacy and predilection sections, and the Likert-type response formats used in the intrinsic motivation and perceived competence satisfaction measures. Both formats have been criticized in previous literature for being difficult for children to understand. Finally, our goal was to reduce the total number of items used to measure motivation and confidence. This, of course, comes at the cost of potentially reducing content representation and reliability. Researchers may wish to further investigate these issues to ensure that the items selected have good content validity evidence and reliability. Based on the findings from this study, we propose a revised questionnaire to assess motivation and confidence as part of the CAPL-2. The revised questionnaire is reduced to 12 single items that aggregate to four subscales, contains clearer instructions, and is theoretically aligned with a major theory of motivation.
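As a side note on the fit statistics reported above, the RMSEA can be reproduced from the χ2 value, the degrees of freedom, and the sample size using the standard formula. The sketch below (written for this summary, not part of the original study; it assumes the common N − 1 denominator) verifies the reported RMSEA of 0.038 for χ2(63) = 81.45 with n = 205:

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported fit of the revised Motivation and Confidence model.
value = round(rmsea(81.45, 63, 205), 3)  # -> 0.038, matching the reported value
```

A perfectly fitting model (χ2 ≤ df) yields an RMSEA of exactly zero under this formula.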
Researchers using the CAPL-2 should use the revised Motivation and Confidence questionnaire presented herein and in the Additional files. Additional file 1: Survey 1, original CAPL Motivation and Confidence questions (DOCX 84 kb). Additional file 2: Survey 2, new CAPL Motivation and Confidence questions (DOCX 167 kb). Additional file 3: Modified wording to new CAPL Motivation and Confidence questions (DOCX 16 kb). Additional file 4: Item descriptive statistics (DOCX 34 kb). Additional file 5: Model syntax (DOCX 23 kb). Additional file 6: Final Motivation and Confidence domain questionnaire (DOCX 86 kb). Additional file 7: Exploratory models (DOCX 16 kb)."}
{"text": "Automatic detection and analysis of human activities captured by various sensors play an essential role in various research fields in order to understand the semantic content of a captured scene. The main focus of earlier studies has been largely on the supervised classification problem, where a label is assigned to a given short clip. Nevertheless, in real-world scenarios, such as in Activities of Daily Living (ADL), the challenge is to automatically browse long-term (days and weeks) streams of videos to identify segments with semantics corresponding to the model activities and their temporal boundaries. This paper proposes an unsupervised solution to address this problem by generating hierarchical models that combine global trajectory information with local dynamics of the human body. Global information helps in modeling the spatiotemporal evolution of long-term activities and hence their spatial and temporal localization. Moreover, the local dynamic information incorporates complex local motion patterns of daily activities into the models. Our proposed method is evaluated using realistic datasets captured from observation rooms in hospitals and nursing homes.
The experimental data on a variety of monitoring scenarios in hospital settings reveal how this framework can be exploited to provide timely diagnosis and medical interventions for cognitive disorders, such as Alzheimer's disease. The obtained results show that our framework is a promising attempt capable of generating activity models without any supervision. Activity detection has been considered one of the major challenges in computer vision due to its importance in various applications including video perception, healthcare, and surveillance. For example, if a system could monitor human activities, it could prevent the elderly from missing their medication doses by learning their habitual patterns and daily routines. Unlike regular activities that usually occur in a closely controlled background, Activities of Daily Living (ADL) usually happen in uncontrolled and disarranged household or office environments, where the background is not a strong cue for recognition. In addition, ADLs are more challenging to detect and recognize because of their unstructured and complex nature, which creates visually perplexing dynamics. Moreover, each person has his or her own way of performing various daily tasks, resulting in infinite variations of speed and style of performance, which add extra complexity to detection and recognition tasks. From the temporal aspect, detecting ADLs in untrimmed videos is a difficult task since they are temporally unconstrained and can occur at any time in an arbitrarily long video. Therefore, in activity detection, we are not only interested in knowing the types of the activities happening, but we also want to know precisely the temporal delineation of the activities in a given video. Most of the available state-of-the-art approaches deal with this problem through a detection-by-classification task, and most of these methods are single-layered supervised approaches.
In the training phase of the activities, the labels are fully or partially provided (supervised or weakly supervised). In this work, we propose an unsupervised activity detection and recognition framework to model as well as evaluate daily living activities. Our method provides a comprehensive representation of activities by modeling both the global motion and the body motion of people. It utilizes a trajectory-based method to detect important regions in the environment by assigning higher priors to the regions with dense trajectory points. Using the determined scene regions, a sequence of primitive events can be created in order to localize activities in time and learn the global motion patterns of people. To describe an activity semantically, we adopt a notion of resolution by dividing an activity into different granularity levels. This way, the generated models describe multi-resolution layers of activities by capturing their hierarchical structures and sub-activities. Hereupon, the system can move among different layers in the model to retrieve relevant information about the activities. We create the models to uniquely characterize the activities by deriving relative information and constructing a hierarchical structure. Additionally, a large variety of hand-crafted and deep features are employed as an implicit hint to enrich the representation of the activity models and finally perform accurate activity detection. The core contributions of this paper are: an unsupervised framework for scene modeling and activity discovery; dynamic-length unsupervised temporal segmentation of videos; generation of Hierarchical Activity Models using multiple spatial layers of abstraction; online detection of activities, as the videos are automatically clipped; and, finally, evaluation of daily living activities, particularly in health care and early diagnosis of cognitive impairments.
Following these objectives, we conducted extensive experiments on both public and private datasets and achieved promising results. For the past few decades, activity recognition has been extensively studied, and most of the proposed methods are supervised approaches based on hand-crafted perceptive features. The recent re-emergence of deep learning methods has led to remarkable performances in various tasks; that success was followed by adapting convolutional networks (CNNs) to the activity recognition problem. The goal in activity detection is to find both the beginning and end of the activities in long-term untrimmed videos. The previous studies in activity detection were mostly dominated by sliding-window approaches, where the videos are segmented by sliding a detection window, followed by training classifiers on various feature types. Apart from the supervised methods mentioned above, there has recently been increasing attention on methods with unsupervised learning of activities, including a pioneering study conducted by Guerra-Filho and Aloimonos. So far, state-of-the-art methods are constrained by full supervision and require costly frame-level annotation or at least an ordered list of activities in untrimmed videos. With the growing size of video datasets, it is very important to discover activities in long untrimmed videos. Therefore, recent works propose unsupervised approaches to tackle the problem of activity detection in untrimmed videos. In this work, we use training videos to specify temporal clusters of segments that contain similar semantics throughout all training instances. The proposed framework provides a complete representation of human activities by incorporating motion and appearance information.
It automatically finds important regions in the scene and creates a sequence of primitive events in order to localize activities in time and to learn the global motion pattern of people. To perform accurate activity recognition, it uses a large variety of features, such as Histogram of Oriented Gradients (HOG), Histogram of Optical Flow (HOF), or deep features, as an implicit hint. Interest points are densely sampled on a grid with a spacing of W pixels at multiple scales. Each trajectory is tracked separately at each scale for L frames, and trajectories exceeding this limit are removed from the process. Once the trajectories are extracted, descriptors in the local neighborhood of the interest points are computed. Three different types of descriptors are extracted from the interest points: trajectory shape, motion (HOF and Motion Boundary Histograms, a.k.a. MBH), and appearance (HOG) descriptors. Given a trajectory of length L, its shape can be described by the sequence of its displacement vectors, while HOG represents static appearance information by calculating gradient vectors around the calculated trajectory points. Motion descriptors (HOF and MBH) are computed in a volume around the detected interest points and throughout their trajectories. Geometrical descriptors are also used to represent the spatial configuration of the skeleton joint information and to model the human body pose in each frame. To represent the skeleton, both the joints' Euclidean distances and their angles in polar coordinates are calculated using normalized joint positions. In order to preserve temporal information in the pose representation, a feature extraction scheme based on a temporal sliding window of size w is adopted. In order to compare the effect of hand-crafted and deep features on our generated activity models, the framework also uses Trajectory-Pooled Deep-Convolutional Descriptors (TDD). Each trajectory carries the label i of the tracked subject, and T is the number of trajectories in each sequence.
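The trajectory-shape descriptor described above can be sketched as follows (a minimal illustration with hypothetical helper names; the normalization by total displacement magnitude follows the standard dense-trajectory formulation):

```python
import numpy as np

def trajectory_shape(points):
    """Shape descriptor of one trajectory tracked for L frames.

    points: (L, 2) sequence of (x, y) positions.
    Returns the L-1 displacement vectors, normalized by the total
    displacement magnitude, flattened into a single vector.
    """
    disp = np.diff(np.asarray(points, dtype=float), axis=0)  # (L-1, 2)
    total = np.linalg.norm(disp, axis=1).sum()
    if total == 0:  # static trajectory: nothing to normalize
        return disp.ravel()
    return (disp / total).ravel()

# A straight horizontal track of four unit steps: each step becomes (0.25, 0).
desc = trajectory_shape([(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)])
```

The normalization makes the descriptor invariant to the overall speed of the motion, which is useful given the large variation in how people perform daily tasks.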
Information about the global position of the subjects is indispensable in order to achieve an understanding of long-term activities; each scene region characterizes a spatial part of the scene and is represented as a Gaussian distribution. In most trajectory-based activity recognition methods, a priori contextual information is ignored while modeling the activities. The proposed framework performs automatic learning of meaningful scene regions (topologies) by taking into account the subject trajectories. The regions are learned at multiple resolutions; by tailoring topologies at different levels of resolution, a hierarchical scene model is created. A topology at level l is defined as a set of k scene regions (SR), where k indicates the number of scene regions defining the resolution of the topology. The scene regions are obtained through clustering, which takes place in two stages; this two-stage clustering helps to reduce the effect of outlier trajectory points on the overall structure of the topologies. In the first stage, the interesting regions for each subject in the training set are found by clustering their trajectory points into k clusters; in the second stage, the resulting cluster centres are themselves clustered, with the number of clusters selected using the Bayesian Information Criterion (BIC). We define primitive events using StartRegion and EndRegion variables, which take values of SR indices. A primitive event's type is Stay when the region label (such as SR1) stays constant between two time intervals; it is equivalent to a sequence of Stays in the same scene region. To fill the gap between the low-level image features and a high-level semantic description of the scene, an intermediate block capable of linking the two is required. Here, we describe a method that defines a construction block for learning the activity models.
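The two-stage clustering of trajectory points into Gaussian scene regions can be sketched as below. This is a minimal numpy-only illustration with a deterministic farthest-point initialization; the paper's BIC-based selection of the number of regions is omitted, and all names are illustrative:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal k-means with deterministic farthest-point initialization."""
    points = np.asarray(points, dtype=float)
    centres = [points[0]]
    for _ in range(k - 1):  # greedily seed centres far apart
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centres], axis=0)
        centres.append(points[d.argmax()])
    centres = np.array(centres)
    for _ in range(iters):
        labels = np.linalg.norm(points[:, None] - centres[None], axis=2).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centres[c] = points[labels == c].mean(axis=0)
    return centres, labels

def learn_scene_regions(trajectories_per_subject, k):
    """Stage 1: cluster each subject's trajectory points (damps outliers).
    Stage 2: cluster the per-subject centres into k scene regions, each
    summarized as a Gaussian (mean, covariance)."""
    stage1 = []
    for pts in trajectories_per_subject:
        c, _ = kmeans(pts, min(k, len(pts)))
        stage1.append(c)
    stage1 = np.vstack(stage1)
    centres, labels = kmeans(stage1, k)
    regions = []
    for c in range(k):
        member = stage1[labels == c]
        cov = np.cov(member.T) if len(member) > 1 else np.eye(member.shape[1])
        regions.append((member.mean(axis=0), cov))
    return regions
```

Clustering the per-subject centres rather than the raw points is what gives the method its robustness: a single subject's outlier trajectory points cannot drag a scene region away from the area actually used by most people.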
With a deeper look at the activity generation process, it can be inferred that the abstraction of low-level features into high-level descriptions does not happen in a single step; the transition is gradual. As a solution, we use an intermediate representation named Primitive Event (PE). Given two consecutive trajectory data points between two successive time instants, a primitive event occurs; a Change event is equivalent to a region transition. For every video sequence of length T, a corresponding primitive event sequence is computed. We refer to the detection of the boundaries of the activities as Activity Discovery. Annotating the beginning and end of the activities is a challenging task even for humans; the start/end time of the annotated activities varies from one human annotator to another. The problem is that humans tend to pay attention to one resolution at a time. For example, when a person is sitting on a chair, the annotated label is “sitting”; later, when the subject “moves an arm”, she is still sitting. Discovering activities using different resolutions of the trained topologies helps to automatically detect these activity parts and sub-parts at different levels of the activity hierarchy using previously created semantic blocks (Primitive Events). The input for the activity discovery process is a spatiotemporal sequence of activities described by primitive events. After the activity discovery process: (1) the beginning and end of all activities in a video are estimated and the video is automatically clipped; (2) the video is classified naively into discovered activities indicating similar activities in the timeline. A discovered activity (DA) is considered either as (1) staying in the current state (“Stay”) or (2) changing the current state (“Change”). Basically, a Stay pattern is an activity that occurs inside a single scene region and is composed of primitive events of the same type, whereas a “Change” pattern is an activity that happens between two topology regions.
A “Change” activity consists of a single primitive event. Although the detection of primitive events takes place at three different resolutions, the activity discovery process only considers the coarse resolution. Therefore, after the discovery process, the output of the algorithm for the input sequence is a data structure containing information about the segmented input sequence at the coarse level and its primitive events at the two lower levels; this data structure holds the corresponding spatiotemporal information. Although Discovered Activities present global information about the movement of people, this is not sufficient to distinguish activities occurring in the same region. Thus, for each discovered activity, body motion information is incorporated by extracting motion descriptors in volumes of NxN pixels and L frames from the videos; the Fisher Vector (FV) method is then used to encode these descriptors. Here, the goal is to create activity models with high discriminative strength and less susceptibility to noise. We use the attributes of an activity and its sub-activities for modeling, and accordingly, learning is performed automatically using the DAs and PEs at different resolutions. Learning such models enables the algorithm to measure the similarity between them. To create the models, a method for assembling the DAs and PEs from different resolutions is required; this is achieved through the concept of the hierarchical neighborhood. The hierarchical neighborhood of activity A at resolution level l is a recursive representation of the links between A and its primitive events at the next finer resolution. The links between the different levels are established using temporal overlap information: primitive event B is a sub-activity of activity A at a higher level if their temporal intervals overlap in the activity timeline. By applying clustering to the discovered activities, the nodes (N) of the activity tree are obtained.
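The Stay/Change segmentation described above can be sketched directly from a per-frame sequence of scene-region labels (an illustrative reconstruction, not the authors' code):

```python
def primitive_events(region_labels):
    """One primitive event per pair of consecutive frames: 'Stay' if the
    scene-region label is unchanged, otherwise 'Change' (a region transition)."""
    return [
        ("Stay", a) if a == b else ("Change", (a, b))
        for a, b in zip(region_labels, region_labels[1:])
    ]

def discover_activities(region_labels):
    """Group runs of identical primitive events into discovered activities
    with their temporal boundaries; a Change is always a single event."""
    pes = primitive_events(region_labels)
    activities, start = [], 0
    for t in range(1, len(pes) + 1):
        if t == len(pes) or pes[t] != pes[start]:
            kind, info = pes[start]
            activities.append((kind, info, start, t))  # event indices [start, t)
            start = t
    return activities

# A subject stays in region 1, moves to region 2, and stays there:
das = discover_activities([1, 1, 1, 2, 2])
# -> [('Stay', 1, 0, 2), ('Change', (1, 2), 2, 3), ('Stay', 2, 3, 4)]
```

Because each discovered activity carries its start and end indices, the video is automatically clipped into segments as a by-product of the discovery step.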
Clustering is performed using the Type attribute of the PEs, which groups PEs of the same type into one cluster. This process is repeated for all levels. After clustering, the nodes of the tree model are determined, followed by linking them together to construct the hierarchical model of the tree; the links between the nodes are realized from the activity neighborhood of each node. Each node N carries several attributes. Type is extracted from the underlying primitive or discovered activity (in the case of the root node) and takes Stay or Change states. Instances lists the PEs of training instances, indicating the frequency of each PE included in the node. Duration is a Gaussian distribution over temporal lengths, where n is the number of PEs or DAs. Image Features stores the different features extracted from the discovered activities; there is no limitation on the type of feature, which can be hand-crafted, geometrical, or deep. Node association indicates the parent node of the current node (if it is not the root node) and the list of neighborhood nodes in the lower levels. A hierarchical activity model (HAM) is defined as a tree that captures the hierarchical structure of daily living activities by taking advantage of the hierarchical neighborhoods to associate different levels. The above-mentioned attributes do not describe the relationships between the nodes, which are important in the overall description of the activities. In order to model these relationships, two further attributes are defined for each node with regard to its sub-nodes: Mixture and Timelapse. Mixture shows the contribution of each type of sub-activity, while Timelapse represents the distribution of the temporal duration of the sub-nodes (with the same type and level in different training instances); this attribute is also computed as a Gaussian distribution. The descriptors of a HAM characterize the local motion and appearance of a subject.
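The node attributes listed above can be collected in a small data structure. The field names below are illustrative (the paper describes the attributes only in prose), and the Gaussian attributes are reduced to (mean, std) pairs:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HAMNode:
    """One node of a Hierarchical Activity Model tree (illustrative sketch)."""
    type: str                  # 'Stay' or 'Change'
    level: int                 # resolution level within the hierarchy
    instances: dict = field(default_factory=dict)  # PE -> frequency over training data
    duration: tuple = (0.0, 0.0)                   # Gaussian (mean, std) of temporal length
    features: dict = field(default_factory=dict)   # e.g. {'MBHY': fisher_vector, ...}
    mixture: dict = field(default_factory=dict)    # sub-activity type proportions
    timelapse: tuple = (0.0, 0.0)                  # Gaussian over sub-node durations
    parent: Optional["HAMNode"] = None
    children: list = field(default_factory=list)   # neighbourhood nodes at finer levels

    def add_child(self, node: "HAMNode") -> None:
        """Link a finer-resolution node into this node's hierarchical neighbourhood."""
        node.parent = self
        self.children.append(node)

root = HAMNode(type="Stay", level=0)
root.add_child(HAMNode(type="Change", level=1))
```

Keeping both the parent link and the children list makes it cheap to move between resolution levels at retrieval time, which is how the system "moves among different layers" of the model.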
Knowing the vector representation of the descriptors of discovered activities enables the use of a distance measure. A codebook V is learned for each region by clustering the descriptors (using k-means), and the codebook of each region is stored in the created activity model of that region. During the testing phase, when a new video is detected by the scene model, its descriptors are extracted and the feature vectors are created. These feature vectors are encoded with the learned dictionaries of the models. The distance of the current descriptor to the trained codebooks of all regions (to find the closest one) is calculated using the Bhattacharyya distance D(p, q) = −ln BC(p, q), where BC(p, q) = Σi √(pi qi) is the Bhattacharyya coefficient; N and M denote the dimensions of the descriptor and the trained codebooks, respectively. The most similar codebook is determined by the minimum distance score acquired; that codebook (and its corresponding activity model) is assigned a higher score in the calculation of the final similarity score with the test instance in the recognition phase. At recognition time, the following steps are performed: (1) perceptual information, such as the trajectories of a new subject, is retrieved; (2) using the previously learned scene model, the primitive events for the new video are calculated; (3) by means of the retrieved primitive events, the discovered activities are computed; and (4) using the collected attribute information, a test instance HAM is built. The class of the activity in the test video is then assigned by a decision rule over the class posteriors p(ωi) (Equations (16)–(23)). The performance of the proposed framework is evaluated on two public and one private daily living activity datasets. The GAADRD dataset (http://www.demcare.eu/results/datasets) was recorded in a clinic in Thessaloniki, Greece; the camera monitors a whole room where a person performs directed ADLs.
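The Bhattacharyya comparison between a test descriptor and the per-region codebooks can be sketched as follows (histograms are assumed nonnegative and normalized; function names are illustrative):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """D(p, q) = -ln BC(p, q), with BC(p, q) = sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    bc = float(np.sqrt(p * q).sum())
    return -np.log(bc)

def closest_codebook(descriptor, codebooks):
    """Index of the region codebook with minimum Bhattacharyya distance;
    that region's activity model receives a higher score at recognition time."""
    return int(np.argmin([bhattacharyya_distance(descriptor, c) for c in codebooks]))

# A descriptor identical to the second codebook has distance 0 to it:
idx = closest_codebook([0.5, 0.5], [[0.9, 0.1], [0.5, 0.5]])
# -> 1
```

For identical normalized histograms the coefficient is 1 and the distance is exactly 0, which is why the minimum-distance codebook identifies the most similar region model.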
The observed ADLs include: “Answer the Phone”, “Establish Account Balance”, “Prepare Drink”, “Prepare Drug Box”, “Water Plant”, “Read Article”, and “Turn On Radio”. A sample of images for each activity is presented in the corresponding figure. The CHU dataset (https://team.inria.fr/stars/demcare-chu-dataset/) contains 27 videos. For each person, the video recording lasts approximately 15 min, and domain experts annotated each video regarding the ADLs. Similar to GAADRD, for this dataset we randomly chose 2/3 of the videos for training and the rest for testing. This dataset is recorded in the Centre Hospitalier Universitaire de Nice (CHU) in Nice, France. The hospitals collecting the dataset obtained the agreement of an ethical committee. Volunteers and their carers signed informed consent, and the data have been anonymized and can be used only for research. It contains videos of patients performing everyday activities in a hospital observation room. The activities recorded for this dataset are “Prepare Drink”, “Answer the Phone”, “Reading Article”, “Watering Plant”, “Prepare Drug Box”, and “Checking Bus Map”. A sample of images for each activity is illustrated in the corresponding figure. The DAHLIA dataset consists of long-term daily-living videos. We use various evaluation metrics on each dataset to evaluate our results and compare them with other approaches. For the GAADRD and CHU datasets, we use Precision and Recall. True Positive Rate (TPR), or recall, is the proportion of actual positives which are identified correctly; from the correspondence between recognized activity i and ground-truth label j, Precision, Recall, and F-Score are computed. For the evaluation of the unsupervised framework, as the recognized activities are not labeled, there is no matching ground-truth activity label for them; the recognized activities carry labels such as “Activity 2 in Zone 1”.
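For reference, the detection metrics used for GAADRD and CHU reduce to the usual counts once recognized activities are matched to ground-truth labels (a minimal sketch, not tied to any particular matching protocol):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall (TPR), and F-score from true/false positive and
    false negative counts of matched activity detections."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 8 correct detections, 2 spurious, 2 missed:
p, r, f = precision_recall_f1(8, 2, 2)
# -> p = r = 0.8, f close to 0.8
```

The guards against empty denominators matter in the unsupervised setting, where a discovered cluster may match no ground-truth class at all.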
In order to evaluate the recognition performance, first, we map the recognized activity intervals onto the labeled ground-truth ranges. Next, we evaluate the one-to-one correspondence between a recognized activity and a ground-truth label. For example, we check which ground-truth activity label co-occurs the most with "Activity 2 in Zone 1", and that label is assigned to the recognized activity; for each class c in the dataset, the true and false detections of c are then counted accordingly. We also define the Intersection over Union (IoU) metric as IoU = (1/C) Σ_c |D_c ∩ G_c| / |D_c ∪ G_c|, where D_c and G_c denote the detected and ground-truth intervals of class c and C is the total number of action classes. In order to evaluate the DAHLIA dataset, we use metrics based on frame-level accuracy. First, the results and evaluations of the three datasets are reported and then compared with state-of-the-art methods. Different codebook sizes are examined for the Fisher vector dictionaries: 16, 32, 64, 128, 256, and 512. The best results on this dataset are obtained with the motion boundary histogram along the Y axis (MBHY) descriptor in the activity models, with the codebook size set to 256. As the activities involve many vertical motions, the MBHY descriptor is able to model the activities better than the other dense trajectory descriptors and even deep features. It can be noticed that the performance of temporal deep features gets better as the codebook size gets bigger. In addition, motion features perform better than appearance features, and temporal deep features perform better than spatial TDDs. The reason for the lower performance of appearance features might be that the activities are performed in a hospital environment, so the background does not contain discriminative information which can be encoded in the activity models. It is clear that the geometrical features perform poorly. Daily living activities are comprised of many sub-activities with similar motion patterns related to object interactions. It seems that geometrical features do not contain sufficient information to encode these interactions, which results in poor detection.
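The evaluation metrics described above (precision, recall, F-score, and a set-based IoU over frame indices) can be computed with a short sketch; the helper names are illustrative rather than taken from the paper:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall (TPR), and F-score from raw detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def interval_iou(detected_frames, ground_truth_frames):
    """Intersection over Union of two collections of frame indices,
    e.g. a detected activity interval vs. its annotated interval."""
    d, g = set(detected_frames), set(ground_truth_frames)
    union = d | g
    return len(d & g) / len(union) if union else 0.0
```

The per-class IoU values would then be averaged over the C action classes, as in the definition above.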
Furthermore, the confusion matrix in Figure 10 indicates that the activities with similar motion in their sub-activities are confused with each other the most. Based on the obtained results, there is no special trend regarding the codebook size. For some features, the performance increases with an increase in the codebook size and drops when the codebook size becomes much bigger. For the TDD temporal feature, performance increases linearly with the codebook size. For the geometrical features, particularly the Angle feature, there is a big drop in performance with bigger codebook sizes. For other features, a medium-sized codebook performs best. Finding an optimal codebook size is challenging: small datasets usually work better with a smaller codebook, and as the dataset size grows, bigger codebooks perform better. Regardless of the codebook size, the MBHY descriptor performs better than the other features in this dataset. The MBH descriptor is composed of X (MBHX) and Y (MBHY) components. On the CHU dataset, the unsupervised framework also achieves promising results. This section summarizes the evaluations and comparisons conducted on the GAADRD, CHU, and DAHLIA datasets. The results obtained by our proposed framework on the GAADRD and CHU datasets are compared with a supervised approach from the literature. Different from the two other datasets, the results on the DAHLIA dataset are compared with all the previous evaluations we could find in the literature, including the approach of Meshry et al. Overall, although our unsupervised framework does not utilize any supervised information, it achieved promising recognition performances, remaining competitive with the fully supervised hybrid method. An online unsupervised framework is proposed for the detection of daily living activities, particularly for elderly monitoring. To create the activity models, we benefited from the superiority of unsupervised approaches in representing global motion patterns.
Then, discriminative local motion features were employed in order to generate a more accurate model of activity dynamics. Thanks to the proposed scene model, online recognition of activities can be performed with reduced user interaction for clipping and labeling the huge number of short-term actions that are essential for most of the previously proposed methods. Our extensive evaluations on three datasets revealed that our proposed framework is capable of detecting and recognizing activities in challenging scenarios. The evaluations were intentionally conducted on datasets recorded in nursing homes, hospitals, and smart homes to examine the applicability of the method to ambient surveillance in such environments. Further work will investigate how to generate generic models that can detect activities in any environment with minimal modification of the models. Our goal is to use the developed framework in the evaluation of long-term video recordings in nursing homes and to assess the performance of the subjects so that early interventions can be imposed, resulting in early diagnosis of cognitive disorders, especially Alzheimer's disease."} +{"text": "Studying Mycobacterium leprae, the causative agent of leprosy, at the proteomic level may facilitate the identification, quantification, and characterization of proteins that could be potential diagnostics or targets for drugs and can help in better understanding the pathogenesis. This review aims to shed light on the knowledge gained in understanding leprosy and its pathogen employing proteomics, and on its role in diagnosis. Although leprosy is curable, the identification of biomarkers for the early diagnosis of leprosy would play a pivotal role in reducing transmission and the overall prevalence of the disease. Leprosy-specific biomarkers for diagnosis, particularly for the paucibacillary disease, are not well defined. Therefore, the identification of new biomarkers for leprosy is one of the prime themes of leprosy research.
Despite advances toward the elimination of leprosy over the last four decades, leprosy still remains an important health problem. Despite effective multidrug therapy (MDT), the torpid decline in new leprosy cases demonstrates that transmission in society is persistent. In 2018, 208,619 new cases were diagnosed, and India alone accounted for more than half of the new cases reported globally. M. leprae possesses a long generation time and lacks an artificial medium for in vitro growth; therefore, animals are used for in vivo propagation of the bacilli. Leprosy bacilli have been found in red squirrels (Sciurus vulgaris), and their existence was confirmed by detection of the M. leprae and M. lepromatosis genomes in the animal. Two modes of transmission of leprosy, viz. anthroponotic and zoonotic, have been discussed. The transmission of M. leprae may occur from the reservoir to the target population, and transmission from an animal reservoir to the environment involves interconnection through an ecological cycle. Knowledge of the transmission and reservoirs of the M. leprae complex might assist in understanding the pathogenesis of the disease. Mycobacterium leprae is a rod-shaped, acid-fast, non-motile, non-spore-forming, slow-growing (generation time 12-14 days), obligate intracellular pathogen that affects mainly the peripheral nerves and skin, leading to nerve damage and disfigurement. It might also affect other body parts such as the bone marrow, liver, spleen, lymph nodes, lungs, oesophagus, kidney, eyes, and testes in human leprosy. It cannot be cultured in vitro and is propagated in the nine-banded armadillo (Dasypus novemcinctus) or the footpads of the mouse, preferring the cooler parts of the host, especially in humans. M. leprae has the smallest genome (3.3 Mb) among mycobacteria, with 1614 protein-encoding genes and a remarkable 1300 pseudogenes. Through reductive evolution, M. leprae has become host-associated. Despite this, M. leprae has managed a minimal gene set that allows its survival within the host. Since the availability of the M.
leprae genome sequence, various studies have focused on identifying genes encoding M. leprae-unique antigens to design new diagnostic tests. To date, Bacillus Calmette-Guerin (BCG) is the only vaccine being used against the mycobacterial diseases tuberculosis and leprosy. Mycobacterium indicus pranii (MIP), an indigenous vaccine developed by the Indian National Institute of Immunology, New Delhi, is another vaccine that has shown promising results in hospital- and population-based trials against leprosy. It reduces the bacillary load; completes clearance of granuloma; reduces reactions, neuritis, and MDT duration; and it upgrades lesions histopathologically in leprosy patients. Another vaccine candidate for leprosy is LepVax (LEP-F1 + GLA-SE), whose phase I antigen dose-escalation trial related to safety, tolerability, and immunogenicity has recently been conducted in healthy adults. It is safe and immunogenic in healthy individuals, and the authors supported its testing in leprosy-endemic regions. Diagnosis before clinical manifestation is vital to the reduction of transmission. Recent strategies to stop leprosy transmission rely on prophylactic protocols using rifampicin and/or BCG. Serological tests such as the M. leprae gelatin particle agglutination test are limited by cross-reactivity with M. tuberculosis. The limitation of the use of interferon-γ (IFN-γ) for diagnosis is that individuals with adequate immunity against M. leprae also produce substantial concentrations of IFN-γ. Palit and Kar have nicely reviewed the current scenario on the prevention of transmission of leprosy. PCR targeting the M. leprae-specific repetitive element (RLEP) and real-time PCR have been used to detect the components of M. leprae in patient lesions or household contacts. None of the tests was successful in detecting early leprosy.
One of the major obstacles in the early diagnosis of leprosy is the lack of good markers. Proteomics is a very powerful technology for biomarker discovery in many diseases and could enable the development of M. leprae-specific tests for the early diagnosis of leprosy. Several attempts have been made to develop specific tests for the early detection of leprosy, but with little success. Various assays that detect leprosy-specific antibody responses, such as ELISAs, the M. leprae gelatin particle agglutination test, and the dipstick test, have been evaluated. Proteomics is the global analysis of proteins expressed in a cell, a tissue, or an organism. It is more complicated than genomics, as an organism's genome is more or less constant, whereas the total protein expression profile changes with time and is also influenced by environmental conditions. Nucleic acid-based systems offer rapid and sensitive methods to detect the presence of genes; however, developments in molecular and cellular biology have cast doubt on the ability of genetic analysis alone to predict complex phenotypes. Proteomics has been extensively used for both basic as well as translational research in the areas of infectious diseases, diabetes, cancers, cardiovascular disease, etc. Proteomics can be either qualitative or quantitative. The major steps involved in analytical proteomics are isolation, separation followed by digestion into peptides (or vice versa), and identification. After the isolation of proteins, separation is usually done by two-dimensional gel electrophoresis (2DGE) or various chromatography-based approaches. Despite landmark progress in the development of alternative protein separation techniques, 2DGE is still a powerful technique to study proteins.
Peptides generated as a result of enzymatic digestion are analyzed by mass spectrometry (MS), either MALDI-TOF or ESI, and the data generated thereafter are matched with available databases using various bioinformatics software. During the past couple of years, much advancement has been made in the field of proteomics. The development of sensitive, rapid, and powerful MS-based methods has enabled the accurate identification, quantification, and characterization of modifications of any expressed protein. Quantitative proteomics could be useful both for the early detection of diseases and for the evaluation of pathological status. Leprosy is one of the infectious diseases that has also benefitted from proteomics. Several developments have been made toward the identification of M. leprae proteins employing proteomics tools. Knowledge gained on the biology and pathogenesis of M. leprae from proteomic studies has been reviewed by Prakash and Singh. For some time, in silico tools were mainly used for the identification of antigens, and proteomic approaches had not been explored to study M. leprae. Wiker et al. were the first to analyze M. leprae subcellular fractions employing 2DGE and mass spectrometry. This was the first study in which the application of proteomics was extended to a host-derived Mycobacterium. In total, 147 protein spots corresponding to 44 genes were identified, and 28 were found to be new proteins. Furthermore, two highly basic proteins with pI more than 10 were isolated employing heparin affinity chromatography. Another group studied M. leprae proteins as biomarkers and resolved 391 proteins employing 2DGE from three cellular fractions, viz. the cell wall, membrane, and cytosol. A total of 14 protein spots were identified; among these, eight protein spots were identified based on reactivity with monoclonal antibodies and relative size/pI, while six protein spots were identified by microsequencing.
They eventually identified new proteins: elongation factor EF-Tu and a homologue of the Mycobacterium tuberculosis (M. tb) MtrA response regulator. In another study, they deciphered the M. leprae cell envelope employing a high-throughput proteomic approach and identified 218 new M. leprae proteins. The proteins were mainly enzymes involved in lipid biosynthesis and degradation, the biosynthesis of major components of the mycobacterial cell envelope, proteins involved in transport across lipid membranes, and lipoproteins and transmembrane proteins with unknown functions. The identification of proteins expressed in vivo by the bacillus will be of great significance in understanding mycobacterial pathogenesis. Silva et al. studied the M. leprae cell surface-exposed proteome to unravel potentially relevant adhesins and highlighted the role of adhesins in bacillus-epithelial cell interaction. A total of 279 cell surface-exposed proteins were identified by shotgun mass spectrometry. Rana and co-workers presented an analysis of outer membrane proteins (OMPs) of M. leprae. They suggested that 11 OMPs with B-cell epitopes may be considered important candidates for developing immunotherapeutics against M. leprae. Biological fluids from patients and controls are a reliable source for the identification of protein markers. The serum/plasma proteome is complex but offers an important window on individual variation. Serological biomarkers of infection, disease progression, and treatment efficacy for leprosy have been studied. Patil and co-workers studied sera of leprosy patients, and a protein of M. leprae was found to be a probable biomarker of active infection. Another study used M. leprae antigens such as PGL-1, lipoarabinomannan, and four recombinant proteins to understand the dynamics of patient antibody responses during and after drug therapy. This could assist in monitoring treatment efficacy in leprosy patients and assessing the disease progression of those who are at risk of developing the disease.
Soares et al. reported proteins of M. leprae that might be responsible for extensive tissue damage during type 1 reaction. Owing to their small size, peptides can be expressed on the surface of bacteriophage to select mimicking peptides from different targets, an approach applied to leprosy by Alban et al. As multiple factors, such as bacterial, genetic, environmental, and nutritional ones, contribute to clinical manifestations, studies related to metabolites from the serum of persons affected with leprosy were also carried out. Urinary signatures as biomarkers in the case of leprosy were first reported by Mayboroda et al. New antigens of M. leprae, such as the Leprosy Infectious Disease Research Institute Diagnostic-1 (LID-1) antigen, provide the possibility of producing chimeric antigens that could provide greater sensitivity for the identification of MB and possibly PB patients. A few studies are underway to determine the immunoreactivity and specificity of new antigens that can be integrated with the PGL-1 antigen, intending to obtain a test with higher seropositivity in both MB and PB patients. A summary of markers for the detection of M. leprae infection in various groups is provided in the corresponding table. Contacts of leprosy patients are a population at high risk of contracting and suffering from the effects of the disease during their lifetime. They can also act as M. leprae carriers and therefore serve as sources for transmission and infection; being important links in the chain of transmission, they have been examined in several epidemiological studies. The principal antigens and host markers studied are as follows: (a) Phenolic glycolipid-1 (PGL-1): It is specific to M. leprae and present mainly in the cell wall and capsule of the bacteria.
It is highly specific due to its trisaccharide units and enters the cell by binding specifically to the G domain of the laminin α2 chain in the basal lamina of Schwann cell-axon units. (b) Natural disaccharide octyl bovine serum albumin (ND-O-BSA) or human serum albumin (ND-O-HSA): This is a modified (conjugated with protein BSA), semisynthetic antigen representing the PGL-1 molecule of M. leprae, developed later and still in use. This antigen is superior to other derivatives of the PGL-1 antigen. (c) Early secreted antigenic target-6 (L-ESAT-6): M. leprae ESAT-6 (L-ESAT-6) is the homologue of M. tb ESAT-6 (T-ESAT-6), having 36% similarity at the amino acid level. It is an important M. leprae antigen that stimulates T-cell-dependent IFN-γ production in M. leprae-exposed individuals. Remarkable cross-reactivity was observed between T-ESAT-6 and L-ESAT-6, which suggests that L-ESAT-6 may play a crucial role in the diagnosis of leprosy. (d) Leprosy IDRI diagnostic (LID-1): This marker was developed by the fusion of two selected proteins, ML0405 and ML2331 (involved in the diagnosis of MB patients), and has been named LID-1 (Leprosy Infectious Disease Research Institute Diagnostic-1). (e) Natural disaccharide octyl and LID-1 (NDO-LID): As the name suggests, it is the conjugate of NDO and LID-1 in a single fusion complex. This complex possesses the antibody-detecting capabilities of the individual antigens and is better for antibody-based detection in leprosy patients than either antigen alone. (f) Monocyte chemoattractant protein-1 (MCP-1) or CCL2: It is a signaling molecule secreted by monocytes and memory T cells, recruiting other immune cells to the sites of inflammation and infection.
An increased level of this chemokine has been observed in leprosy patients compared with healthy individuals. (g) Macrophage inflammatory protein-1β (MIP-1β) or CCL4: It acts as a chemo-attractant for monocytes, and it inhibits T-cell activation through TCR signaling. (h) Platelet-derived growth factor-BB (PDGF-BB): These molecules are processed by SSV-transformed or PDGF-B-expressing cells. There are two genes, viz. PDGF-A and PDGF-B, which encode three proteins (PDGF-AA, PDGF-AB, and PDGF-BB) comprising the PDGF family. (i) Interleukin-1β (IL-1β): It is a pro-inflammatory cytokine that is linked with inflammasome development and is crucial for Th17 cell differentiation. M. leprae-unique antigens, particularly ML2478, act as biomarker tools to measure M. leprae exposure using IFN-γ or IFN-inducible protein-10, and MCP-1, MIP-1β, and IL-1β can potentially distinguish pathogenic immune responses from those induced during asymptomatic exposure to M. leprae. Potential biomarkers aid in understanding the mechanisms of leprosy reactions and in diagnosing the clinical stages. Elevated levels of the circulating cytokines CXCL10 and IL-6 act as promising markers for leprosy in T1R; similarly, IL-7 and PDGF-BB represent potential markers of T2R. The M. leprae antigen L-ESAT-6 (early secretory antigenic target 6) stimulates T-cell-dependent gamma interferon production in a large proportion of individuals exposed to M. leprae. The transmission of Mycobacterium leprae, the causative agent of leprosy, is still persistent in society. Various approaches have been used in the past with varying degrees of success, and therefore, the identification of new biomarkers for leprosy is the need of the hour.
Numerous studies aimed at the identification of protein(s) as prognostic/diagnostic biomarkers employing proteomics exist. Proteomic profiling helps unravel the connections between various cellular pathways and thus complements both genomics and traditional biochemical approaches. Proteomics is expected to be the tool of choice for diagnosing patients and searching for therapeutic biomarkers in the years to come."} +{"text": "Aims: To determine the risk of liver injury associated with the use of different intravenous lipid emulsions (LEs) in large populations in a real-world setting in China. Methods: A prescription sequence symmetry analysis was performed using data from the 2015 Chinese Basic Health Insurance for Urban Employees. Patients newly prescribed both intravenous LEs and hepatic protectors within time windows of 7, 14, 28, 42, and 60 days of each other were included. The washout period was set to one month according to the waiting-time distribution. After adjusting for prescribing time trends, we quantified the deviation from symmetry between patients initiating LEs first and those initiating hepatic protectors first by calculating adjusted sequence ratios (ASRs) and the relevant 95% confidence intervals. Analyses were further stratified by age, gender, and the different generations of LEs. Results: In total, 416, 997, 1,697, 2,072, and 2,342 patients filled their first prescriptions of both drugs within 7, 14, 28, 42, and 60 days, respectively. Significantly increased risks of liver injury were found across all time windows, and the strongest effect was observed in the first 2 weeks [ASR 6.97 (5.77–8.42) ~ 7.87 (6.04–10.61)] in the overall population. In subgroup analyses, female gender, age more than 60 years, and soybean oil-based and alternative LEs showed higher ASRs in almost all time windows.
Specifically, a lower risk for liver injury was observed in the first 14 days following FO-LE administration, but the risk started to rise in longer time windows. Conclusion: A strong association was found between LE use and liver injury through prescription sequence symmetry analysis in a real-world setting, which aligns with trial evidence and clinical experience. Differences revealed in the risks of liver injury among the various LEs need further evaluation. As an integral component of PN, a wide variety of commercial lipid emulsions (LEs) is now available. However, the amount, type, and infusion time of intravenous LEs have been reported to affect the risk of inducing liver complications. Prescription sequence symmetry analysis (PSSA) is a valid method for rapid signal detection of adverse drug events (ADEs) by calculating the sequence ratio between exposure and outcomes. This study was conducted by analyzing data from the 2015 Chinese Health Insurance Research Association (CHIRA) database, a national-level claims database collecting sampled hospital records of patients from the Urban Employee Basic Medical Insurance scheme all over mainland China. We performed an observational study using PSSA to explore the association between LEs and liver injury. The index drugs in our study are LEs, consisting of S-LEs, alternative LEs, and FO-LEs; the marker drugs are hepatic protectors. Patients prescribed both an index drug and a marker drug between Jan 1st, 2015, and Dec 31st, 2015, were identified from the 2015 CHIRA database. We applied a washout period based on the "waiting-time distribution" to ensure that patients were new users of the index and marker drugs.
The null-effect sequence ratio (NESR) reflects the probability (p) that the index drug will be prescribed before the marker drug in the background population, where p is calculated as p = [Σ_{m=1}^{u} LE_m × Σ_{n=m+1}^{min(m+d, u)} H_n] / [Σ_{m=1}^{u} LE_m × (Σ_{n=max(1, m−d)}^{m−1} H_n + Σ_{n=m+1}^{min(m+d, u)} H_n)], where m and n indicate the consecutive days of the study period, u is the last day of the study period, d indicates the specified observation time window, LE_m indicates the number of patients receiving a first LE prescription on date m, and H_n is the number of patients initiating hepatic protectors on date n. Given p, the NESR can be generated as p/(1 − p). The crude sequence ratio (CSR) can be calculated by dividing the number of patients initiating LEs first by the number of those initiating hepatic protectors first. Drug prescribing trends over time can extraneously affect the treatment sequence, and this may result in a biased effect estimate; to adjust for it, the ASR is calculated as CSR/NESR. The estimation of the confidence interval (CI) is based on the binomial distribution, using the Wilson (score) method, as in previous studies. Subgroup analyses were conducted by LE generation, age (≥60 and <60 years), and gender. We performed sensitivity analyses in different time windows to test the robustness of the PSSA results and to find out during which time period the adverse effects were more likely to occur. ASRs whose lower 95% CI limit was greater than 1 were considered statistically significant. All analyses were performed using SAS. In total, patients filled their first prescriptions of both drugs within 7 (n = 416), 14 (n = 997), 28 (n = 1,697), and 42 (n = 2,072) days. Of the 2,342 patients with a maximum time window of 60 days, 35.1% were female. The average age was 62.6 ± 15.0 (standard deviation) years. Alternative LEs were the most commonly used LEs in these patients (62.7%), followed by S-LEs (36.5%).
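The PSSA quantities described above (p, NESR, CSR, ASR, and a Wilson score interval) can be sketched in a few lines. This is an illustrative implementation of the Hallas-style trend correction under our reading of the formula, not the authors' code; names are ours:

```python
import math

def null_effect_p(le_counts, h_counts, d):
    """Probability p that the index drug precedes the marker drug under the
    background prescribing trends (trend-corrected null expectation).
    le_counts[m] / h_counts[n]: daily first-prescription counts; d: window."""
    u = len(le_counts)
    num = den = 0.0
    for m in range(u):
        after = sum(h_counts[m + 1:min(m + d, u - 1) + 1])  # marker within d days after
        before = sum(h_counts[max(0, m - d):m])             # marker within d days before
        num += le_counts[m] * after
        den += le_counts[m] * (after + before)
    return num / den if den else 0.5

def adjusted_sequence_ratio(n_le_first, n_h_first, p):
    csr = n_le_first / n_h_first   # crude sequence ratio
    nesr = p / (1.0 - p)           # null-effect sequence ratio
    return csr / nesr

def wilson_interval(k, n, z=1.96):
    """Wilson score CI for the proportion of index-first pairs."""
    phat = k / n
    denom = 1.0 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

With perfectly flat prescribing trends the null probability is 0.5, the NESR is 1, and the ASR reduces to the crude sequence ratio; rising or falling trends shift p away from 0.5 and the correction kicks in.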
FO-LEs were seldom used (0.73%). In the analysis where index and marker drugs were initiated within 60 days, adjusting for prescribing trends resulted in an ASR of 3.60 (95% CI 3.26–3.97), indicating that initiating an LE is associated with a 3.6-fold increase in the rate of liver injury treated with hepatic protectors during the 60-day period after initiating LEs. Moreover, a strong asymmetrical pattern of treatment sequence was revealed. In subgroup analyses by the type of LE initiated, all the ASRs remained statistically significant. The associations of LEs and hepatic dysfunction were also significant in the different age and gender groups, but ASRs differed slightly between groups. In this large, population-based observational study, we found an association between the index drugs (LEs) and the marker drugs (hepatic protectors), with positive and robust ASRs across all time windows and different generations of LEs, suggesting that LEs might possibly induce hepatic dysfunction in Chinese patients. There was a 3.6- to 7.9-fold increase in the risk of hepatotoxicity within the 2 months after initiating LEs. In 1971, Peden et al. were the first to report a case of an infant who had received total PN for 2.5 months before dying from liver failure. Drug-induced liver injury accounts for approximately 20% of inpatients with acute liver injury in China, owing to a wide use of traditional Chinese medicine and anti-tuberculosis drugs. Phytosterols in soybean oil-based LEs are thought to suppress the canalicular bile transporters (Abcb11/BSEP, Abcc2/MRP2) via antagonism of the nuclear receptors and failure of upregulation of the hepatic sterol exporters (Abcg5/g8/ABCG5/8). In contrast to the increasing time trend of the risk for liver injury with FO-LEs, the ASRs were highest in the first 14 days after S-LE or alternative-LE initiation and decreased thereafter with longer time windows.
This suggested that the majority of patients suffered from acute hepatotoxicity after S-LE or alternative-LE initiation, possibly resulting from exposure to intravenous LE >1 g/kg/d caused by inappropriate prescription or medication errors (MEs) such as a rapid infusion speed. Moreover, the magnitude of the risk for liver injury following LE initiation was higher among female patients and those of older age. We noted an increased risk of liver injury in the female group, which was also found in a retrospective study conducted in a New York hospital. Our study has several strengths. To our knowledge, it is the first to evaluate the association of LEs with the potential risk of hepatic dysfunction in a large Chinese population. It provides information on LE-related liver injury in a real-world setting that could help us better understand the current situation of safety problems in the clinical practice of PN. PSSA has been proved to be an effective and fast signal detection method in drug safety evaluation, with moderate sensitivity and high specificity. Our study also has some limitations. Firstly, we only included one year of patient data because the maximum follow-up duration of the CHIRA database is one year, as a result of the annually resampled data collection strategy. This limited the sample size of the study population, especially in the subgroup of FO-LEs, whose effect estimates would thus be more easily affected by random fluctuation. It should also be noted that some delayed events, such as cholestasis, can occur years after PN initiation in adults. Our results show that there is a strong association between LEs and hepatotoxicity, with an asymmetrically distributed treatment sequence.
The findings suggest that hepatic dysfunction after LEs is common in China, and it is important to strengthen the appropriate use of LEs and enhance patient education at the initiation of LEs to reduce liver injury."} +{"text": "To the Editor—Among the symptoms of SARS-CoV-2 infection (or COVID-19), olfactory or gustatory dysfunction may possibly present first or may be the only symptom.1 Three Japanese professional baseball players complained of smell and taste dysfunction. Although 2 of them had neither fever nor cough, a viral polymerase chain reaction (PCR) test revealed that all 3 were SARS-CoV-2 positive. Two nurses working in the National Cancer Center Hospital underwent the viral PCR test because they had similar symptoms, and they were both SARS-CoV-2 positive, although they had neither fever nor cough.4 The influenza and parainfluenza type 3 viruses were reported to be causative of olfactory loss most frequently. Seasonal changes in the incidence of olfactory loss have been reported with respect to influenza and parainfluenza type 3 infections, occurring most frequently in winter and spring, respectively.5 Flanagan et al6 reported that the proportion of persons who received influenza vaccination was significantly lower among those with olfactory dysfunction than in a control group. However, an adverse effect of olfactory dysfunction due to influenza vaccination has also been reported: Doty et al7 attributed 9 of 4,554 cases (0.19%) of olfactory dysfunction to influenza vaccination. Suzuki et al8 confirmed the presence of various viruses in the nasal discharge of patients with postviral infection olfactory dysfunction, such as rhinovirus, parainfluenza virus, Epstein-Barr virus, and coronavirus.
Significant recovery was not observed after 24 weeks in almost all of the patients.8 In contrast, olfactory dysfunction due to hepatitis virus recovered within 6 weeks in almost all cases.9 In acute viral hepatitis, hyposmia, dysosmia, and dysgeusia are common symptoms. As smell and taste are closely associated, persons with olfactory dysfunction and normal gustatory function often complain that they "cannot taste coffee."9 Olfactory dysfunction is caused by blockage of the nasal airways or disturbance of the sensory system, including the olfactory receptor cells, and of the nervous system. As the olfactory receptor cells adjoin the upper part of the nasal cavity, the receptor cells are vulnerable. Viral infection is the most common cause of loss of olfactory function. With this viewpoint, we reviewed the available literature on olfactory and gustatory dysfunction caused by influenza and other viruses. Postviral infection olfactory dysfunction was more common in women and elderly people. Some recent reports described early improvement of olfactory and gustatory dysfunction in many COVID-19 patients. According to the newspaper, olfactory and gustatory function in the professional baseball players also returned to normal relatively soon. However, only short-term follow-up investigation has been conducted regarding the effect of SARS-CoV-2 infection on chemosensory function. Hwang10 reported that anosmia induced by SARS-CoV continued for >2 years in a 27-year-old woman. We believe that epidemiological investigation is required regarding the effect of SARS-CoV-2 on the olfactory and gustatory functions in terms of the frequency, time course, and relationship with other symptoms."} +{"text": "Anosmia is a well-described symptom of Corona Virus Disease 2019 (COVID-19). Several respiratory viruses are able to cause post-viral olfactory dysfunction, suggesting a sensorineural damage.
Since the olfactory bulb is considered an immunological organ contributing to preventing the invasion of viruses, it could have a role in host defense. The inflammatory products locally released in COVID-19, leading to local damage and causing olfactory loss, may simultaneously interfere with viral spread into the central nervous system. In this context, olfactory receptors could play a role as an alternative way of SARS-CoV-2 entry into cells locally, in the central nervous system, and systemically. Differences in the olfactory bulb due to sex and age may help clarify the different susceptibility to infection and the role of age in transmission and disease severity. Finally, evaluation of the degree of functional impairment (grading), central/peripheral anosmia, and the temporal course (evolution) may be useful tools to counteract COVID-19. A wide spectrum of symptoms characterizes SARS-CoV-2 infection, ranging from serious conditions, including acute respiratory distress syndrome (ARDS), to mild/moderate and also asymptomatic forms of the disease, contributing to the spread of the viral infection. The rapid worldwide spread of Corona Virus Disease 2019 (COVID-19) has led to characterization of \u201cminor\u201d symptoms, such as anosmia, in most cases reversible. Seldom, this dysfunction may persist, suggesting central sensorineural damage. Since the olfactory bulb (OB) is considered an immunological organ, its involvement in host defense is plausible. As members of the Coronaviridae family are known to cause CNS dysfunction, it appears mandatory to understand the role of SARS-CoV-2 neurotropism in the development of clinical manifestations. The first symptomatic characterization of COVID-19 evolved over the past months, adding to the major symptoms a broad spectrum of minor symptoms.
Among these, increasing observations of olfactory disturbance (OD) led to anosmia being identified as an emerging symptom and subsequently as a marker of SARS-CoV-2 infection. Anosmia attracted great public interest among both physicians and the general population, because of media coverage in an atmosphere of increasing and constant alarm and concern, and for its potential capability of early identification of infection. For instance, in our country, after journalistic and media announcements of anosmia as a symptom of COVID-19 in March 2020, this term had a peak in search volumes on Google. In the scientific literature, after the first reports of olfactory and taste disorders (OTDs) in COVID-19, increasingly detailed analyses of this symptom were progressively collected to evaluate the prevalence and patterns of anosmia and its significance in the context of COVID-19. In February 2020, a retrospective study of 214 COVID-19 patients in Wuhan reported chemosensory impairment among the neurological manifestations. Giacomelli et al. from Sacco Hospital in Milan, in March 2020, highlighted the prevalence of chemosensory dysfunction in 59 patients with laboratory-confirmed SARS-CoV-2 infection through a verbal interview. Of these, 20 (33.9%) reported at least one taste or olfactory disorder and 11 (18.6%) reported both; 20.3% presented the symptoms before hospital admission, whereas 13.5% presented during the hospital stay. Females reported OTDs more frequently than males (p = 0.001). In addition, among the 18.2% of patients without nasal obstruction or rhinorrhea, 79.7% were hyposmic or anosmic, as were 69.2% at later evaluation (days 25\u201335). Anosmia or severe hyposmia affected 70.9% of patients in the early stages; they improved after the first 10 days, reaching moderate hyposmia values.
Taste recovered more effectively, returning to the normal range after 15 days, whereas the olfactory score improved significantly in the first 2 weeks without returning to average values, always remaining in the range of hyposmia, even in the group of patients evaluated in the 3rd and 4th week from clinical onset. Interestingly, chemosensory symptoms were the first symptom of COVID-19 in 29.2% of patients and the only one in 9.5% of cases. In this study, in agreement with previous studies, no correlation was found between olfactory and gustatory disorders and nasal obstruction or rhinitis symptoms, nor between the gustatory and olfactory scores and the patients' gender and age. Moreover, patients who presented at least one OTD were younger than those without OTDs (P = 0.036). Olfactory dysfunction may involve different anatomical levels: the nasal airways, olfactory neurons, and olfactory pathways, up to the CNS (OB and olfactory brain areas). Common conductive disorders arise from obstructive nasal diseases, such as chronic rhinosinusitis, nasal polyposis, allergic rhinitis, and nasal masses, characterized by a combination of obstruction to the nasal airflow transporting odorants and inflammation-related mucosal edema. It was estimated that children suffer from 6 to 10 colds per year, whereas adults suffer from 2 to 4. More than 200 different viruses can cause cold symptoms, but the great majority are not associated with anosmia, hyposmia, or CNS involvement. Moreover, most previous reviews, similar to what is observed in the context of the COVID-19 pandemic, have found that POVD is more common in women. Still, olfactory dysfunction may occur only after infection by a specific virus with peculiar neurotropic properties, and when the host has predisposing factors, the virus may invade the CNS. Today, the strong association between anosmia and SARS-CoV-2 is well-established.
Thus, we should take this opportunity to better understand the pathogenesis of POVD and the neuroinvasive potential of the virus through the olfactory neuroepithelium (ONE) and olfactory pathway. After the intranasal inoculation of several viruses, previous animal studies have shown central olfactory damage and damage to deeper areas of the CNS. Therefore, the question is whether the olfactory dysfunction in COVID-19 and other viral infections arises from peripheral OR damage as a result of local inflammation, from involvement of central olfactory pathways, or from a combination of both. Previous studies on POVD highlight direct evidence of a broad spectrum of epithelial damage, from a reduced number of ORs, to abnormal dendrites that did not reach the epithelial surface or that lacked sensory cilia, to decreased nerve bundles or substitution of the ONE with metaplastic squamous epithelium. In contrast with these studies, Chung et al., analyzing the nasal mucosae of patients affected by SARS-CoV-2, showed minimal inflammatory changes represented by mild infiltrations of lymphocytes, plasma cells, and occasional neutrophils localized in the stroma, but without detailed characterization of neuroepithelium alterations. In a recent study, local TNF-\u03b1 and IL-1\u03b2 levels were assessed in COVID-19 patients. TNF-\u03b1 was significantly increased in the olfactory epithelium of the COVID-19 group compared to the control group. However, no differences in IL-1\u03b2 were seen between groups. In the authors' opinion, this evidence implies that inflammation can lead to OR impairment, and according to a previous study, this impairment arises from inflammation, which can damage olfactory neurons. On the contrary, Kim and Hong demonstrated that persistent POVD is associated with decreased metabolism in specific brain regions where olfactory stimuli are processed and integrated, suggesting that anosmia is, in some cases, caused by a central injury mechanism.
Viruses may reach the CNS via the olfactory nerves, as already shown in mice, and cells of the olfactory epithelium express ACE2, the primary SARS-CoV-2 receptor. Olfaction, although not indispensable to survival, is a crucial sense that induces several feedback processes, some unconscious, in response to molecular sampling of the environment. These processes are very complex, as are the anatomical substrates that allow them. From odor receptors, the stimuli converge in the OB and then, through a multitude of projections, reach the higher brain regions, including the amygdala, septal nuclei, pre-pyriform cortex, entorhinal cortex, hippocampus and subiculum, thalamus, and frontal cortex. These bidirectional connections provide a unique dynamic system. These observations suggest that the olfactory system is intricately related to immunological function, and perturbations in the immunological system and in the olfactory system may be significant to each other. In susceptible animal models, virus travelled via the olfactory nerves to the OB and further spread over the whole CNS. On the contrary, control mice infected with the same virus showed infection of olfactory nerves, but within the OB the virus was arrested in the glomerular layer, and type 1 interferon (IFN-I) was increased in astrocytes, microglia, and the ONE. The increase in IFN-I and the rapid infiltration of both CD4+ and CD8+ T cells decreased the viral load in the OB. In an interesting recent work, building on a previous study that demonstrated selective expression of interferon-gamma in sustentacular cells inducing anosmia without damage to the neuroepithelium, the authors hypothesized that IFNs, or other cytokines, can activate an antiviral response within the OR neurons that suppresses OR expression. They also demonstrated that interferon signaling correlates with OR neuron dysfunction. CNS entry may also occur via non-olfactory paths; however, it was shown that, after intranasal VSV instillation, the olfactory route is preferentially used for CNS entry.
Only if OR neurons are destroyed are alternative entry paths used, such as via the cerebrospinal fluid, the trigeminal nerve, or the blood. Brain MRI abnormalities associated with anosmia were described in one patient 3 days after symptom onset, whereas no brain abnormalities were seen in other patients with COVID-19 presenting anosmia who underwent brain MRI in this and other studies. Virus entry into specific cells and virus spread in different organs depend on virus\u2013receptor interaction and the involvement of coreceptors. Using multiple receptors might be advantageous for virus spread to various organs. Some viruses can use more than one receptor or mutate their envelope proteins, acquiring the ability to bind different receptors or coreceptors. Thus, in the beginning, infection usually has a minor impact on the host, while the subsequent replication of the virus may significantly damage secondary organs. ACE2, expressed in the olfactory epithelium, is considered the primary receptor for cellular entry of SARS-CoV-2. Our attention was focused on a family of receptors, \u201csensory G-protein coupled receptors (GPCRs)\u201d. ORs represent the largest gene family in the human genome (418 genes classified into 18 families). ORs are divided into two classes, class I receptors and class II receptors, based on the species in which they were initially identified: aquatic and terrestrial animals, respectively. In humans, all class I genes are located on chromosome 1, while class II genes are located on all chromosomes except chromosomes 20 and Y. ORs are expressed throughout the body, and their expression in non-olfactory tissues has been documented for more than 20 years, but most of them remain \u201corphan\u201d receptors without known ligands.
Their functional roles were long unknown, but many studies have demonstrated that these G-protein-coupled receptors are involved in various cellular processes. Olfaction is one of the most developed senses in animals. OR proteins are composed of highly conserved (each family having >40% sequence identity) amino acid motifs that distinguish them from other GPCRs, and they share some highly conserved motifs with other non-OR GPCRs. These OR residue sequences seem to have specific functional activities. In analogy to other Class A GPCRs, each OR has seven transmembrane domains (TM1\u2013TM7) connected by extracellular and intracellular loops. Additionally, there is an extracellular N-terminal chain and an intracellular C-terminal chain that, together with TM4, TM5, and the central region of TM3, are highly variable, participating in ligand binding. The fact that some amino acid sequences have been evolutionarily conserved across species implies that they may have critical roles. There is wide variability of functional OR genes among different people. Recent studies, genotyping 51 odor receptor loci in 189 individuals of several ethnic origins, found 178 functionally different genomes. Additional variation in the population may come from differences in gene expression. Indeed, other experiments have found that the expressed OR repertoire of any pair of individuals differs by at least 14%, suggesting that polymorphisms also exist in the promoter and other regulatory regions. Furthermore, variation in the copy number of OR genes contributes significantly to individual olfactory abilities. ORs are detected in migrating neural crest, smooth muscle, endothelial precursors and vascular endothelium, endocardial cells, neuroepithelium, and ocular tissues. ORs were found in various additional non-olfactory tissues, including the prostate, tongue, erythroid cells, heart, skeletal muscle, skin, lung, testis, placenta, embryo, kidney, liver, brain, and gut.
Physiological functions of non-olfactory ORs are not entirely understood and seem to be unrelated to the olfactory system in diverse cell types. For instance, renal and cardiovascular ORs regulate blood pressure, and ORs on airway smooth muscle decrease remodeling and proliferation, while exposure of the airways to \u03b3/LPS resulted in markedly increased OR expression. A large number of OR genes appear to be detectable only after birth. In mice, experiments demonstrated that the expression levels of ORs can be classified into different patterns that reach a peak at different ages. For example, some ORs reach a peak of expression between the 10th and 20th day of life and then decline to a low level, while other ORs reach a peak at the 10th day of life and continue to be expressed at a high level until 18 months. In the authors' opinion, these expression patterns may correlate with their functions at each life stage, such as nursing or the reproductive cycle. Interestingly, sex differences in olfaction are highlighted in a meta-analysis: the female OB presents denser microcircuits and slower aging than that of males. Concerning the CNS, in the adult human brain, several ORs are expressed in neurons of the neocortex, hippocampus, dentate gyrus, striatum, thalamus, nuclei of the basal forebrain, hypothalamus, nuclei of the brainstem, cerebellar cortex, dentate nucleus, and neurons of the spinal cord. ORs have also been reported in the autonomic nervous system and murine sensory ganglia. Their functions and expression kinetics are still unknown. All these features make ORs ideal viral receptors and could also contribute to explaining the broad spectrum and wide interindividual variability of clinical manifestations in COVID-19. We have no data to support our hypothesis.
Our knowledge of the ORs is insufficient in terms of physiological and pathological functions, intra-individual diversity during life, epigenetic processes acting on OR expression, and, above all, OR ligands within the different cell types. Identifying these cell-surface receptors as required for viral infection, given their peculiar characteristics, may be necessary for developing antiviral therapies and effective vaccines. To our knowledge, the only literature data supporting the possible involvement of ORs in virus entry into a cell concern OR14I1 as a receptor for HCMV infection. This OR is required for HCMV attachment, entry, and infection of epithelial cells, revealing previously neglected targets for vaccines and antiviral therapies. Understanding the mechanisms behind COVID-19-related olfactory dysfunction will require further investigation to delineate its prognostic value concerning coronavirus neuroinvasion, immune reaction, and virus spread from the nasal cavity to other distant organs. However, considering all the data described above, it is possible to propose several hypotheses. Since anosmia has been observed generally in the absence of cold and rhinosinusitis, and considering the reported persistent hyposmia also detected after clinical recovery, we may hypothesize the prevalence of sensorineural dysfunction. Defining the role of local inflammatory mediators in host defense and tissue damage of the ONE may explain the mechanism of COVID-19-related anosmia.
Indeed, in non-infected cells, the interaction between virus and receptor may induce defense mechanisms resulting in cytokine secretion, apoptosis, and an innate immune response, which can have a significant impact on the development of disease both locally and at the systemic level. Adults with severe disease have a depletion of the B-cell compartment. On the contrary, in the pediatric population, recent studies showed an early, augmented polyclonal B-cell response. It may be suggested, considering the above-discussed interaction between the olfactory and immunological systems, that the nasal epithelium and OB may be one of the first battlefields between SARS-CoV-2 and the host; the outcome of this battle may be critical for the pathological development of COVID-19. Considering the OB as an immune organ, if the local fight against SARS-CoV-2 is successful, the damage remains localized, leading to anosmia, as in the case of women and younger patients; conversely, the virus can spread and replicate in the upper olfactory sites, causing central anosmia, or directly invade the CNS. The prevalence of chronic olfactory impairment increases with age. Olfactory deficit affects up to 50% of people aged \u226565 years and >80% of people aged \u226580 years. This hypothesis may have the power to attract the attention of the broader community of scientists and neuroscientists to the olfactory system, to investigate the biological significance of these neglected receptors in sickness and in health. Besides the acute neurological involvement of SARS-CoV-2 infection, there are many overlaps between SARS-CoV-2-related manifestations and OR-related disease. For instance, it has been shown in animal and human studies that coronaviruses could be implicated in the pathogenesis of Parkinson's disease, acute disseminated encephalomyelitis, multiple sclerosis, and other neurodegenerative diseases that have also been linked to ORs.
Further monitoring for long-term sequelae may reveal a viral contribution to pathophysiology or an increased risk for neuroinflammatory and neurodegenerative diseases, and the possible link with OR dysregulation or damage. In light of these observations, the role of ORs and the OB in COVID-19 infection could be significant, explaining at least in part the age- and sex-related differences in the clinical course. In conclusion, from anosmia onset in SARS-CoV-2-positive patients, a precise timing for the climbing of the olfactory route by the virus can be speculated. In this window period, a potential early intervention could change the disease's course, supporting natural defenses when they are lacking as a result of age, sex, or other genetic backgrounds. Since PAMPs (pathogen-associated molecular patterns) can improve the up-regulation of IFN, it could be hypothesized that immune-stimulatory molecules could be used to increase the ability to fight the infection. Simultaneously, the use of other topical pharmacological agents could be helpful. The evidence of OB involvement in COVID-19 remains scarce, but knowledge of this different way of spreading could lead to significant developments in the management of SARS-CoV-2. Magnetic resonance imaging cannot be an early detection tool in all COVID-19 patients with anosmia. However, the latest evidence on CNS involvement, beyond anosmia, could justify it as a valid indication in high-risk patients. Autopsies of COVID-19 patients, detailed neurological investigation, and attempts to isolate SARS-CoV-2 from the OB and neuronal tissue can clarify the role of this novel coronavirus in the mortality linked to neurological involvement. Existing studies assessing the incidence of anosmia and related immunological patterns are limited; therefore, investigating the local cytokine composition at the onset of symptoms could be useful. In conclusion, we invite clinicians to focus on anosmia in each patient suspected of infection or with a positive swab for SARS-CoV-2.
Future studies should evaluate the degree of functional impairment (grading), central/peripheral anosmia, and the temporal course (evolution) through MRI and olfactory tests, perhaps through a standardized workup protocol to explore this issue better. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. All persons who meet ICMJE authorship criteria are listed as authors, and all authors certify that they have participated equally in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Ovarian endometriomas are found in up to 40% of women with endometriosis and 50% of infertile women. The best surgical approach for endometrioma and its impact on pregnancy rates is still controversial. Therefore, we conducted a literature review on the surgical management of ovarian endometrioma and its impact on pregnancy rates and ovarian reserve, assessed by anti-M\u00fcllerian hormone (AMH) serum levels. Ovarian cystectomy is the preferred technique, as it is associated with lower recurrence and a higher spontaneous pregnancy rate. However, ablative approaches and combined techniques are becoming more popular, as ovarian reserve is less affected and there are slightly higher pregnancy rates. Preoperative AMH level might be useful to predict the occurrence of pregnancy. In conclusion, AMH should be included in the preoperative evaluation of reproductive-aged women with endometriosis.
The surgical options for ovarian endometrioma should be individualized. The endometrioma ablation procedure seems to be the most promising treatment. Endometriosis is an inflammatory condition characterized by the presence of endometrial-like tissue outside the uterus. It affects mostly women of reproductive age, and approximately 30\u201350% of women with endometriosis may present infertility. Between 17% and 44% of endometriosis patients have endometriotic ovarian cysts (endometrioma), which are bilateral in about 19\u201328% of cases. Recommendations on the different surgical options available for ovarian endometrioma have recently been published by the working group of the European Society for Gynaecological Endoscopy (ESGE), the European Society of Human Reproduction and Embryology (ESHRE) and the World Endometriosis Society (WES). Laparoscopic ovarian cystectomy is performed by the stripping technique, in which the drained endometrioma and ovarian cortex are pulled apart and haemostasis is applied on the ovarian cyst bed. In the ablative approach, the endometrioma is fenestrated, drained and washed out, and the cyst wall is then destroyed with an energy source, such as a CO2 laser, bipolar coagulation or plasma energy. In cases of large ovarian endometrioma, a three-step approach could be suggested, requiring a first laparoscopy for draining the cyst, followed by 3 months of gonadotropin-releasing hormone (GnRH) agonist therapy. At the second laparoscopy, a CO2 laser is then used to vaporize the remaining 10\u201320% of the endometrioma close to the ovarian hilus. Indeed, in this region of the ovary, dissection is usually more difficult and is associated with a higher risk of bleeding, which needs coagulation close to the ovarian vessels. In order to avoid two laparoscopic procedures, Donnez et al.
described a combined technique performed in a single surgical procedure. Surgical treatment of endometrioma improves patients\u2019 symptoms, such as pain, but the most appropriate approach for reproductive outcomes is still controversial, according to the Royal College of Obstetricians and Gynaecologists (RCOG). Ovarian reserve is defined as the functional potential of the ovary and reflects the number and quality of the follicles in the ovaries at any given time. Anti-M\u00fcllerian hormone (AMH) is a reliable marker of ovarian reserve. The risk of postsurgical ovarian failure has reopened the debate between excision and ablation. The aim of this review was to evaluate the effect of surgical management of endometrioma on ovarian reserve, assessed by serum AMH concentration, and on pregnancy rates, through a review of the literature. The literature search was done using the PubMed and Cochrane search engines. The keywords used were \u201cendometrioma\u201d, \u201csurgery\u201d, \u201covarian reserve\u201d, \u201cAMH\u201d, \u201canti-M\u00fcllerian hormone\u201d and \u201cspontaneous pregnancy\u201d. This research was limited to English and French language publications, focusing on the last 5 years (2015\u20132019). The studies were selected based on the abstract. This research was supplemented by the bibliography of experts and the references cited in the documents reviewed. Clinical cases and comments were excluded. A recent systematic review and meta-analysis confirmed previous studies and systematic reviews reporting consistent evidence of a negative impact of excision of endometrioma on ovarian reserve. Celik et al. showed that cystectomy leads to a significant and progressive decrease (61%) in serum AMH levels in a prospective study with 65 patients comparing AMH measured preoperatively (1.78 \u00b1 1.71 ng/mL), at 6 weeks (1.32 \u00b1 1.29 ng/mL) and at 6 months after surgery (0.72 \u00b1 0.79 ng/mL).
Alborzi et al. also found a significant postoperative decrease in AMH (p < 0.001), but no further fall in the 1-year assessment. Subsequent studies assessing AMH levels up to 1 year after surgery revealed that this decrease may be only temporary and could recover. In one of these, a significant reduction was observed shortly after surgery (p < 0.05); however, in the evaluation performed 1 year after surgery, that reduction did not remain significant (0.46 (0.14\u20130.73) vs. 0.21 (\u20130.52\u20130.78), p = 0.34). Another study compared a group with endometrioma (n = 35) with a group with other benign ovarian tumours (n = 35): the decline in serum AMH levels in the first 3 months following surgery was 3 times higher following laparoscopic cystectomy of endometrioma. In a prospective cohort study with 59 patients with endometrioma and 16 with other benign cysts, the comparison of the postoperative decline in serum AMH revealed a higher and significant decrease in the group with endometrioma. Kostrzewa et al. reported a similar difference (p = 0.021). The reduction in AMH level after surgery is higher in bilateral endometrioma. Kim et al. reported that the decrease in AMH levels was also dependent on the stage of endometriosis, with stages III and IV having a significantly greater decrease in AMH from the pre- to postoperative period in comparison with lower stages. In a prospective controlled study, Muzii et al. observed that surgery for recurrent endometriomas is more harmful to ovarian reserve, even though they only used antral follicle count (AFC) and ovarian volume. Recently, in a prospective study with 124 patients, Zhou et al. verified that a decrease in AMH levels after surgery happened in both patients with high (>2 ng/mL) and low (\u22642 ng/mL) preoperative AMH levels (p < 0.001; 0.89 \u00b1 0.36 vs. 0.51 \u00b1 0.27 ng/mL, p < 0.001, respectively). Presurgical identification of patients with decreased ovarian reserve and at risk of poor postoperative ovarian response can be achieved using preoperative measurements of serum AMH. Ozaki et al.
proposed that 2.1 ng/mL was the best cut-off value of preoperative AMH for predicting diminished ovarian reserve (DOR) at 3 and 6 months in patients undergoing unilateral cystectomy. In cases of bilateral ovarian surgery, the optimal cut-off points were 3.0 ng/mL to predict DOR 3 months after surgery and 3.5 ng/mL to predict DOR 6 months after surgery. After complete excision of the cyst capsule, final hemostasis must be guaranteed. Prospective studies by Roman et al., Stochino-Loi et al. and Saito et al. have also evaluated the impact of ablative techniques on ovarian reserve. The multicentre randomized clinical trial of Candiani et al. compared cystectomy with CO2 laser vaporization in 60 patients with endometrioma larger than 3 cm. Three months after surgery, they observed a significant decrease in serum AMH in the subjects treated with cystectomy, while no significant reduction was evident in the group treated with CO2 laser vaporization. A retrospective study with prospective recording of data performed by the same group showed that postoperative recurrence rates were comparable between patients who underwent CO2 fiber laser vaporization or cystectomy (p = 0.74), as were rates of endometriosis-related pain. For large endometriomas (>5 cm), a prospective randomized study performed by Giampaolino et al. revealed that the decrease in AMH levels assessed 3 months after surgery was greater following excisional surgery than ablative treatment (p = 0.011). The combined approach, using excision of 80\u201390% of the cyst and ablation of the rest, has been proven not to be deleterious to the ovary through comparison of the ovarian volume and AFC; AMH serum levels were not analysed in this study, published by Donnez et al. A smaller, non-significant decline in AMH (p > 0.05) was reported for vaporization compared with cystectomy of endometrioma. This is explained by the fact that vaporization avoids removal of ovarian tissue and excessive thermal damage. Tsolakidis et al.
performed a randomized comparison of the three-step approach and cystectomy. The role of surgery to improve the pregnancy rate in infertile women with endometriosis is controversial. In a retrospective study with 43 infertile women with surgically proven endometriosis and no other factors, Lee et al. reported that the spontaneous conception rate was 41.9% during the first year after laparoscopic surgery, which involved the destruction or removal of all visible endometriotic implants and the lysis of adhesions. For endometrioma, surgery seems to improve the success rates of fertility treatment by between 20% and 60%. In a meta-analysis by Vercellini et al., the chance of pregnancy after laparoscopic excision of endometriomas ranged from 30% to 67%, with an overall weighted mean of about 50%. Women with higher AMH levels had a significantly higher cumulative pregnancy rate after surgery for endometrioma (p = 0.04). Studies comparing AMH level after cystectomy between patients who became pregnant and those who did not showed a higher AMH level 1 year after surgery in the group of pregnant women. When the likelihood of spontaneous pregnancy after laparoscopic cystectomy of endometriomas was compared with other benign ovarian cysts, it was observed to be more than 3 times higher in the group of patients with other benign tumours (p = 0.03). Studies by Roman et al., Stochino-Loi et al. and Motte et al. have also evaluated pregnancy rates after ablative treatment. A Cochrane review by Hart et al. published in 2008 showed a beneficial effect of excisional surgery over drainage or ablation of endometrioma when considering achievement of spontaneous pregnancy in subfertile women (odds ratio (OR) 5.21, CI 2.04\u201313.29). In a descriptive and prospective study, Donnez et al.
reported a pregnancy rate of 41% at a mean follow-up of 8.3 months after the combined approach for endometrioma. The benefit of endometrioma excision for pain management is consensual, but surgical excision for the sole purpose of improving reproductive outcomes is controversial. The reduction of ovarian reserve after surgery for endometrioma is inevitable, regardless of the technique. Both excisional and ablative approaches lead to a postsurgical decrease of up to 60% in AMH levels. However, studies comparing the two techniques show a higher and significant decrease after cystectomy. With the CO2 laser, vaporization of the glandular epithelium and the underlying stroma remains superficial, providing better control of the depth of vaporization compared to bipolar electrocoagulation. The CO2 laser, as well as plasma energy, is a technique for sparing ovarian tissue, with a shallower thermal diffusion. CO2 technology may be used to treat endometrioma with minimal damage to the adjacent healthy ovarian tissue, and it might be an alternative treatment in women with a desire for pregnancy. The decline in ovarian reserve after ovarian surgery is multifactorial. Healthy ovarian tissue may be unintentionally removed during ovarian cystectomy due to the absence of a clear histologic cleavage plane, which can result in loss of follicles. This justifies the theory that ovarian reserve is better preserved by ablation than by cystectomy. However, other proposed mechanisms for the ovarian reserve decline include thermal damage caused by bipolar coagulation, ovarian vascular injury and the postoperative inflammatory response.
Excision of the ovarian cortex could be involved in the reduction of ovarian reserve immediately after surgery, but a continuous decrease could be attributed to other factors, such as vascular compromise by excessive coagulation or adhesiolysis, as well as postsurgical inflammation [26,27,44]. The number of studies that have evaluated changes in ovarian reserve after cystectomy over a period longer than 6 months is limited, but it seems that the decrease in AMH following surgery for endometrioma is temporary and can be recovered. This can be explained by surgery-related reversible mechanisms involving the ovarian vasculature and inflammation-mediated injuries. After ovarian injury, compensatory mechanisms may include the recruitment and growth of primordial follicles and the excessive activation of granulosa cells. Bilaterality, size of the endometrioma, stage of endometriosis and the patient\u2019s age are independent factors that should also be considered when planning surgery in patients who are interested in preserving their fertility [30]. All of these factors will allow clinicians to select therapies to prevent further decline of ovarian reserve, especially for infertile patients with ovarian endometrioma. The decline of AMH levels after surgery is higher in patients with ovarian endometrioma than in those with other benign tumours [31,32,33]. According to the ESHRE guideline, there is evidence to suggest that ovarian cystectomy via stripping is the preferable surgical technique for management of endometrioma, compared with other excisional/ablative techniques, in terms of the pregnancy rate [14]. Favourable preoperative ovarian reserve and its postoperative maintenance together may be implicated in postsurgical pregnancy after surgery for endometrioma [37]. 
In patients with stage III and IV endometriosis submitted to ablative surgery, the probability of pregnancy and the risk of decreased ovarian reserve are similar in patients with high and low preoperative AMH levels. This review highlights the importance of preoperative evaluation of AMH in the therapeutic planning of patients with endometrioma and in the selection of the surgical technique. Based on this value, it is possible to offer more detailed preoperative counselling regarding the pregnancy rate after surgery and the risk of decreased ovarian reserve, assessed through AMH values. Recent studies suggest that the ablative approach, namely with the use of a CO2 laser, seems to be the most interesting surgical technique, with the least impact on postoperative AMH levels and better pregnancy rates. However, this review has some limitations, and more studies, namely randomized clinical trials, are needed to draw definitive conclusions. Additionally, more studies assessing live birth rate rather than pregnancy rate are needed, as live birth rate was recently defined as a core outcome set for endometriosis. In conclusion, measurement of AMH should be included in the evaluation of reproductive-age women with endometriosis. The indication of surgery for an ovarian endometrioma should be thoroughly discussed with the patient, with particular emphasis on the issue of possible damage to the ovarian reserve. The review of the literature demonstrates that the endometrioma ablation procedure, even if performed in patients with a decreased ovarian reserve, is beneficial in terms of pregnancy."}
+{"text": "FT-IR analysis indicated that A. glandulosum root extract had 2 main functional groups (hydroxyl and amide I groups). Saponin with the highest foam height (4.66\u202fcm), concentration (0.080\u202fppm) and antioxidant activity (90.6 %) was extracted using 10\u202fg of the root powder and a pH value of 4. 
Non-significant differences were observed between the predicted and experimental values of the extraction response variables, demonstrating the good appropriateness of the models produced by response surface methodology (RSM). Furthermore, high values of R2 were attained for foamability (>0.81) and antioxidant activity (>0.97), and the large p-values (p\u202f>\u202f0.05) of their lack-of-fit tests verified the acceptable fitness of the provided models. The extracted saponin also showed a bactericidal effect, indicating its potential as a natural antibacterial compound. Saponin was extracted from A. glandulosum root. Most species of Acanthophyllum are found in the eastern regions of Iran, especially in Khorasan Province, where the plant is locally known as Chubak. Acanthophyllum is a rich source of saponin, a natural biosurfactant with high potential applications in the food industry. Saponin can be extracted from several natural sources, such as Ziziphus spina-christi, Glycyrrhiza glabra root, and plum and strawberry fruits, but Acanthophyllum glandulosum root is the primary source of saponin. Saponin is part of the defence system of plants against pathogenic microorganisms and contains tri-terpenoid or steroid glycosides that are conjugated to one or more sugar chains through glycoside bonds [7]. Several methods have been commonly utilized to extract valuable compounds from medicinal plants, such as Soxhlet, maceration and reflux extraction, which are based on the use of organic solvents and long heating times. Due to these drawbacks, greener techniques such as subcritical water extraction have attracted attention. The aims of this study were to: i) evaluate the effect of the amount of A. 
glandulosum root powder on the saponin extraction yield through the hydrothermal extraction method, and ii) assess the foamability, antioxidant and antibacterial activities of the saponin obtained under the optimal extraction conditions. Subcritical water is well known as pressurized water (pressure higher than 1\u202fbar) at high temperature (higher than 100\u202f\u00b0C); it remains in the liquid state under these conditions, and its polarity approaches that of ethanol and methanol, which increases the extraction yield. 2.1 A. glandulosum root (in the dried state) was purchased from a local traditional market. Saponin was obtained from Merck Company. Distilled water (DW), as a solvent, was bought from Dr. Mojallali Chemical Complex Co. Escherichia coli (PTCC 1276) and Staphylococcus aureus (PTCC 1431) were purchased from the Persian Type Culture Collection. Nutrient agar was obtained from Biolife. 2.2 A. glandulosum roots were washed, dried and ground using an electrical grinder. Defined amounts of the produced powder, ranging from 10 to 20\u202fg, were dissolved in 100\u202fmL DW, and the pH of the solutions was adjusted between 4 and 9. The prepared solutions were transferred into an autoclave and heated at 121\u202f\u00b0C and 1.5\u202fatm for 15\u202fmin. After that, the samples were filtered using No. 1 Whatman filter paper and kept in a refrigerator for further analysis. 2.3 The main functional groups in the A. glandulosum root extract were monitored using a Fourier transform infrared (FT-IR) spectrometer on a Bruker Tensor instrument in the 4000\u2013400\u202fcm\u22121 region. The turbidity and colour intensity of the extracted samples, which are qualitatively related to the saponin present in the aqueous solutions, were assessed using a UV\u2013vis spectrophotometer at wavelengths of 420\u202fnm and 625\u202fnm, respectively. The absorbance unit (% a.u.) obtained was used to identify the colour and turbidity characteristics of the saponin extracts. To measure the foamability of the A. 
glandulosum root extract, the filtered samples were hand-shaken vigorously for 30\u202fs and the volume of the foam generated was measured. High-performance liquid chromatography (HPLC) with a C18 column (Eurospher 100-5 C18) and a diode-array detector was utilized to measure the concentration of saponin in the extracted samples. The wavelength of the instrument was fixed at 203\u202fnm. For this test, extracted samples were added to a mobile phase containing acetonitrile (40 % v/v) and water (60 % v/v) and injected into the system with a sampling rate of 1 point/second and a total flow rate of 1\u202fmL/min. All peaks with retention times in the range of 2\u201315\u202fmin were recorded. The antioxidant activity of the extracted saponin was assessed according to the free radical-scavenging method. For this, the absorbance of the sample (Sample abs) and of the control (Control abs) was measured at a wavelength of 517\u202fnm, and the antioxidant activity of the extracted saponins, expressed as the percentage of DPPH radical scavenging, was obtained by Equation (2): DPPH radical scavenging (%) = ((Control abs \u2212 Sample abs)/Control abs) \u00d7 100. 2.4 The bactericidal activity of the saponin extracted from the provided root powder toward S. aureus and E. coli was assessed using the well diffusion method. Bacterial suspensions containing 1.5\u202f\u00d7\u202f108 colony-forming units per mL, based on the 0.5 McFarland standard solution, were prepared, and the surface of set PCA culture media in the plates was inoculated with 0.1\u202fmL of them. Several holes, 5\u202fmm in diameter, were punched in the PCA, and 10\u202f\u03bcL of the extracted samples was placed into them and incubated at 37\u202f\u00b0C for 24\u202fh. The bactericidal effect of the extracted saponins was manifested in the diameters of the growth inhibition zones around the holes, where a larger diameter indicates higher antibacterial activity and vice versa [17]. 3 The effects of the two independent parameters, namely the amount of A. glandulosum root powder (X1) and the pH of the mixture solutions (X2), on the foam volume and antioxidant activity of the extracted solutions were evaluated. 
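As a worked illustration, Equation (2) above (antioxidant activity expressed as the percentage of DPPH radical scavenging) can be computed with a few lines of Python; the absorbance values used here are illustrative, not measurements from this study.

```python
# Worked example of Equation (2): antioxidant activity expressed as the
# percentage of DPPH radical scavenging from absorbances read at 517 nm.
# The absorbance values below are illustrative, not data from the study.

def dpph_scavenging(control_abs, sample_abs):
    return (control_abs - sample_abs) / control_abs * 100

activity = dpph_scavenging(control_abs=0.90, sample_abs=0.30)
# (0.90 - 0.30) / 0.90 * 100 = 66.7 % (rounded)
```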
RSM has shown many advantages compared to the conventional one-variable-at-a-time method, particularly in generating large amounts of data from a small number of experimental runs. The potential of RSM to model the interactions between several variables and the responses makes it a useful technique to evaluate the relationship between the nanodispersion preparation variables and the response variables of the prepared nanodispersions [17]. Central composite design (CCD) and response surface methodology (RSM) were utilized for the experimental design and to evaluate the effects of the two independent parameters. According to the CCD, 13 experimental runs were carried out, with five replications of the center point. A second-order polynomial equation, containing intercept (\u03b20), linear (\u03b21 and \u03b22), quadratic (\u03b211 and \u03b222) and interaction (\u03b212) terms, was employed to model the response parameters as a function of the two independent parameters: (3) Y = \u03b20 + \u03b21X1 + \u03b22X2 + \u03b211X1\u00b2 + \u03b222X2\u00b2 + \u03b212X1X2. Analysis of variance (ANOVA) was employed to determine the significance or non-significance of the terms of the generated models according to their p-values (<0.05). The suitability of the models was studied based on the coefficient of determination, R2. Numerical optimization was used to determine the amount of A. glandulosum root and the pH of the solutions required to extract saponin with maximum foamability and antioxidant activity. The appropriateness and precision of the produced models were verified by extracting saponin under the attained optimal extraction conditions and comparing the experimental values of the dependent variables with the predicted ones. Surface and contour plots were employed to visualize the effects of the extraction factors on the dependent variables. 4.1 The foam height of the extracts of A. 
glandulosum root, together with the antioxidant activity, colour intensity and turbidity of the extracted samples containing saponin, were evaluated. The regression coefficients of the generated models, along with their R2 values and the p-values of the lack-of-fit for these two models, are shown in the corresponding table. The R2 values (>0.81) and (>0.97) relate to the foamability and antioxidant activity of the extracted saponin, respectively, while the high p-values of the lack-of-fit for both verified the sufficient fitness of the models generated from the experimental data. As can be observed from the table, the linear and quadratic terms of pH had significant (p\u202f<\u202f0.05) effects on both response variables, and pH showed a profound effect on the extraction of saponin from A. glandulosum root. However, only the quadratic term of the amount of root and the interaction term of the two selected independent variables had significant effects on the foamability and antioxidant activity, respectively, of the saponin extracted from A. glandulosum root. 4.3 In the A. glandulosum root extracts, as the saponin content increased, foamability also increased; a similar behaviour was reported for Camellia oleifera in media with a pH of 4.1. 4.4 Based on the CCD, the effects of the selected variables on the saponin extracted from A. glandulosum root were further examined. 4.5 To obtain saponin from A. glandulosum root powder with the highest concentration (foamability) and antioxidant activity, the numerical optimization result revealed that hydrothermal extraction using 10\u202fg of A. glandulosum root powder and a solution pH of 4 yielded saponin with the highest foam height (4.66\u202fcm) and an antioxidant activity of 90.6 %. Graphical optimization showed the optimum area for the values of both selected independent variables. 4.7 The bactericidal effects of the saponin extracted from A. glandulosum root under the obtained optimum hydrothermal extraction conditions against S. 
aureus and E. coli indicated antibacterial activity, manifested as the diameter of the clear zone created around the wells; the activity against S. aureus (14\u202fmm) was higher than that against E. coli (11\u202fmm). 5 Saponin, a natural emulsifier, has been utilized in numerous emulsions and nanoemulsions for applications in food and medicine. The extraction of saponin from local, rich natural sources is a subject of particular interest, especially using novel and green extraction methods. Subcritical water, a green solvent, has a polarity close to that of methanol, which in turn increases the extraction yield of saponin from A. glandulosum root without the need for additional chemical solvents or a solvent removal process at the end of the extraction. In addition to the use of this simple, environmentally friendly, low-energy and cost-effective hydrothermal extraction technique based on subcritical water, optimization of the other extraction parameters, namely the amount of A. glandulosum root and the pH of the solution, could effectively increase the extraction yield of saponin; the results revealed that with the minimum amount of A. glandulosum root in an acidic solution, the maximum amount of saponin was extracted. Furthermore, the results also revealed that RSM could be successfully used to generate models, optimize the extraction process and predict the saponin concentration within the defined ranges of the selected extraction variables. Such an extraction method can be developed for the extraction of saponin from other natural sources. Roza Najjar-Tabrizi: Formal analysis, Data curation, Investigation, Writing - original draft. Afshin Javadi: Formal analysis, Investigation, Writing - original draft. Anousheh Sharifan: Software, Funding acquisition. Kit Wayne Chew: Validation, Writing - review & editing. Chyi-How Lay: Validation, Writing - review & editing. Pau Loke Show: Project administration, Resources, Visualization. 
Hoda Jafarizadeh-Malmiri: Conceptualization, Funding acquisition, Methodology, Supervision. Aydin Berenjian: Conceptualization, Project administration, Supervision. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."}
+{"text": "To investigate the diagnostic and clinical utility of a partially automated reanalysis pipeline, forty-eight cases of seriously ill children with suspected genetic disease who did not receive a diagnosis upon initial manual analysis of whole-genome sequencing (WGS) were reanalyzed at least 1 year later. Clinical natural language processing (CNLP) of medical records provided automated, updated patient phenotypes, and an automated analysis system delivered limited lists of possible diagnostic variants for each case. CNLP identified a median of 79 new clinical features per patient at least 1 year later. Compared to a standard manual reanalysis pipeline, the partially automated pipeline reduced the number of variants to be analyzed by 90% (range: 74%-96%). In 2 cases, diagnoses were made upon reinterpretation, representing an incremental diagnostic yield of 4.2%. Four additional cases were flagged with a possible diagnosis to be considered during subsequent reanalysis. Separately, copy number analysis led to diagnoses in two cases. Ongoing discovery of new disease genes and refined variant classification necessitate periodic reanalysis of negative WGS cases. The clinical features of patients sequenced as infants evolve rapidly with age. Partially automated reanalysis, including automated re-phenotyping through CNLP, has the potential to identify molecular diagnoses with reduced expert labor intensity. 
While guidelines for reanalysis of whole-genome sequencing (WGS) or whole-exome sequencing (WES) data for undiagnosed patients do not yet exist, a recent position statement by the American Society of Human Genetics underlined the ethical obligation of clinical diagnostic laboratories and research groups to support periodic WGS/WES data reanalysis8. Upon reanalysis, new diagnoses are made due to ongoing advances that include the discovery of new disease genes, accumulation of classified variants in publications and public databases, improvements in bioinformatics analyses, and phenotypic evolution in children in whom the full manifestations of disease were not apparent at initial analysis9. Current Procedural Terminology codes and Medicare fee payments have been established for the reanalysis of both WES (81417) and WGS (81427) for unexplained constitutional or heritable disorders or syndromes. The reanalysis pipelines described to date include manual phenotyping and extensive variant assessment, both of which are costly in terms of expert time. Automating or de-skilling portions of expert reanalysis can ease this burden, but the diagnostic yield of such an approach, and the savings in scarce genomic analyst and laboratory director time, have not yet been demonstrated. Here we present one solution to the challenge of ongoing WGS reanalysis. 
This pipeline integrates phenotyping from electronic health records (EHRs) by clinical natural language processing (CNLP) and a phenotypically driven analysis pipeline devised to alleviate the burden of next-generation sequencing (NGS) interpretation during reanalysis. For patients with suspected genetic disorders that remain undiagnosed after genomic sequencing, diagnostic yield is improved by periodic reanalysis. The first 48 inpatient children with suspected genetic disorders who received negative WGS reports after manual analysis between July 2016 and April 2017 were selected for partially automated reanalysis, using the original VCF files generated for analysis. The average patient age at enrollment was 4.8 months. Reanalysis identified six potential diagnoses, including two diagnoses that were reportable under Rady Children\u2019s Institute for Genomic Medicine research protocols, and four possible diagnoses that were not reportable. The latter did not meet reporting requirements due to classification as variant(s) of uncertain significance (VUS) and, in some cases, an uncertain gene\u2013disease relationship or unclear phenotypic matching with the patient (Table). The first reportable diagnosis involved IGF2 c.267C>A, p.Cys89*, which was phased by Sanger sequencing to the paternal allele using a nearby informative single-nucleotide polymorphism. Since IGF2 is an imprinted gene that is expressed exclusively from the paternal allele, phasing was crucial for reporting a diagnosis of Silver\u2013Russell syndrome (OMIM #180860). This variant was not identified during initial analysis due to a manual error in patient data handling. This diagnosis was expected to change the patient\u2019s clinical care, indicating targeted monitoring for hypoglycemia, premature adrenarche and maxillofacial abnormalities, and potentially treatment with growth hormone. The second reportable diagnosis involved two ERCC6 variants. 
They were a missense variant, c.1583G>A, p.Gly528Glu, and an intronic variant, c.\u221215+3G>T, both of which were classified as VUSs at initial analysis and thus not reportable under the institutional review board (IRB) protocol for this study. During the period between analysis and reanalysis, ERCC6 c.1583G>A was reported in a patient with Cockayne syndrome, type B (OMIM #135540), changing its classification to Likely Pathogenic11. In parallel, research functional testing of unscheduled DNA synthesis in proband fibroblasts confirmed an impairment of ERCC6. These functional data changed the variant classifications to Pathogenic (c.1583G>A) and Likely Pathogenic (c.\u221215+3G>T), rendering them reportable under our IRB protocol. This diagnosis was also expected to inform clinical care, indicating avoidance of metronidazole; monitoring of renal and hepatic function and blood pressure; screening for cataracts and strabismus; increased sun protection; and potentially treatment with baclofen or carbidopa/levodopa. Copy number variants (CNVs) were not evaluated at the time of original analysis or at initial reanalysis. In a separate effort, we evaluated CNVs in these 48 cases and found two potential diagnoses, including an inherited intragenic KAT6B deletion in case 6033, classified as Likely Pathogenic and thought to co-contribute to the patient\u2019s phenotype. We extracted Human Phenotype Ontology (HPO) terms from patient EHRs delimited by the date of enrollment and compared these with terms extracted at the time of reanalysis. The median number of HPO terms generated by automated phenotyping was 160 at enrollment (range: 28\u2013501) and 267 at reanalysis (range: 39\u2013679), representing a median increase of 79 terms, or 55%. The variant shortlist size correlated positively with the input HPO list size at both enrollment and reanalysis. 
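The enrollment-versus-reanalysis comparison of HPO term counts described above can be sketched in a few lines of Python; the HPO IDs and per-case term sets below are toy examples, not study data.

```python
from statistics import median

# Toy sketch of the enrollment-vs-reanalysis HPO comparison: count terms
# per case at each time point and report the median per-case increase.
# The HPO IDs below are arbitrary examples, not data from the study.

def median_term_increase(enrollment, reanalysis):
    increases = [len(r) - len(e) for e, r in zip(enrollment, reanalysis)]
    return median(increases)

enrollment = [
    {"HP:0001250", "HP:0001263"},                 # case 1: 2 terms
    {"HP:0004322"},                               # case 2: 1 term
]
reanalysis = [
    {"HP:0001250", "HP:0001263", "HP:0002133"},   # case 1: 3 terms
    {"HP:0004322", "HP:0001508", "HP:0008872"},   # case 2: 3 terms
]
# per-case increases are [1, 2], so the median increase is 1.5
```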
Furthermore, the proportional increase in the input HPO term list size at reanalysis correlated positively with the absolute change in the resultant variant shortlist size. The size and content of the variant shortlists output by Moon correlated with the size and content of the input HPO term lists. The median shortlist size increased from 7 variants at enrollment (range: 2\u201320) to 8 variants at reanalysis (range: 2\u201323), representing a median increase of 4.2%. For case 6009, the number of HPO terms generated using automated phenotyping increased from 250 to 514 between enrollment and reanalysis. During variant shortlist generation, each variant was ranked in part based on overlap between the associated gene\u2013disease model and the input HPO terms for that case. A subset of HPO terms contributed to this ranking: 30/514 (5.8%) of reanalysis terms contributed to the ranking of the IGF2 variant, and 74/679 (10.9%) of reanalysis terms contributed to the ranking of the ERCC6 variants. We present here a partially automated WGS reanalysis pipeline that relies on CNLP of patient EHRs to automate phenotyping and downstream generation of a variant shortlist that has been previously shown to achieve high sensitivity when compared with fully manual curation by experienced analysts10. Variant shortlists contained a median of 11 variants, representing a reduction from a median of 122 under a manual reanalysis protocol, and offering a solution to mitigate the burden of expert time typically required for reanalysis of NGS data. Four of the six potential diagnoses identified on reanalysis were not returned to patients on clinical reports, largely due to the corresponding variants\u2019 classification as VUSs. The IRB protocol under which participants were consented for this study allowed only for the reporting of variants classified as Likely Pathogenic or Pathogenic. 
Future developments, such as reports of these variants in patients with similar phenotypes or functional studies demonstrating their pathogenicity (or lack of pathogenicity), have the potential to alter their classification. The pipeline presented here differs from those used in other NGS cohort reanalysis studies primarily in its greater incorporation of automation. First, in contrast to the standard practice of manual assessment and encoding of phenotyping data, it relies on automated phenotyping from the EHR12. Second, while all analysis and reanalysis pipelines rely on automated variant annotation and filtering on variables such as sequence quality and population allele frequencies, this pipeline generates a stringently filtered variant list that also accounts for the phenotypic terms input for the patient. This shortlist represents a dramatic reduction in the total number of variants to be considered by the analyst, allowing manual interpretation to proceed more quickly, or to be undertaken by individuals lacking the advanced training needed to correctly filter and evaluate hundreds or thousands of variants1. This study differs from previously reported reanalysis efforts in several ways that may contribute to the slightly lower yield that we report. First, the analysis was predominantly performed on newborns and infants. The 48 negative cases eligible for reanalysis had a median age of 4.8 months at enrollment and 25 months at reanalysis. Other reanalysis cohorts have median ages ranging from 4 to 6.7 years at enrollment14. Many genetic syndromes are not easily recognizable at early ages, potentially complicating diagnosis even at the reanalysis time point used in this study. Second, our inclusion criteria were intentionally broad, requiring not that patients have a suspected genetic disorder but rather that their phenotypic features be potentially attributable to a genetic disorder. 
In addition, no major technical changes of the kind typically included in the reanalysis literature (such as sequencing improvements or bioinformatics pipeline upgrades) were included in this study. Use of the original VCF file generated at the time of analysis represents a potential source of data processing savings that may be attractive to clinical laboratories wishing to implement iterative reanalysis. Using the partially automated reanalysis pipeline described here, we report a yield of 4.2% (95% CI: 0.5\u201314.3%), which is comparable to but slightly lower than other reported NGS reanalysis yields1. Although new gene\u2013disease discovery is a major factor driving diagnosis upon reanalysis, all six diagnoses (reported and possible) in this reanalysis cohort were variants found in genes known to be involved in human disease at the time of initial analysis. In the intervening time between analysis and reanalysis, these diagnoses became more compelling due to several factors, including variant publication or classification in ClinVar in connection with disease, or functional testing of patient cells that provided orthogonal support to a diagnosis (6033). In case 6009, the diagnosis was initially missed due to a manual error in data labeling, which was discovered only upon reanalysis. Thus, in addition to the benefit of greater efficiency, a partially automated reanalysis pipeline such as the one described here can serve a quality control function, flagging errors made in patient data processing during analysis. The number of HPO terms extracted from patient EHRs via automated phenotyping increased from a median of 160 to 267 terms between enrollment and reanalysis, and this increase correlated with the \u201cturnover\u201d in the variants included on the variant shortlist. 
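The exact binomial confidence interval quoted above (2 diagnoses among 48 reanalyzed cases, 95% CI 0.5%-14.3%) can be reproduced with a standard-library sketch of the Clopper-Pearson bounds; this is an illustration, not the authors' statistical workflow.

```python
from math import comb

# Standard-library sketch of the binomial exact (Clopper-Pearson) 95% CI
# for k = 2 diagnoses among n = 48 cases, solved by bisection on the
# binomial tail probabilities. Illustrative only.

def tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def clopper_pearson(k, n, alpha=0.05):
    lower, upper = 0.0, 1.0
    if k > 0:  # lower bound: p with P(X >= k | p) = alpha/2
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if tail_ge(k, n, mid) < alpha / 2:
                lo = mid
            else:
                hi = mid
        lower = (lo + hi) / 2
    if k < n:  # upper bound: p with P(X <= k | p) = alpha/2
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if tail_le(k, n, mid) > alpha / 2:
                lo = mid
            else:
                hi = mid
        upper = (lo + hi) / 2
    return lower, upper

low, high = clopper_pearson(2, 48)
# low is about 0.005 and high about 0.143, i.e. roughly 0.5%-14.3%
```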
While both diagnostic variants reported from this reanalysis cohort were highly ranked in the shortlist using HPO terms from the time of enrollment as well as reanalysis, future cases are likely to benefit from the sensitivity of the shortlist algorithm to new phenotypic input15. Ideally, reanalysis should be repeated periodically for all cases that remain negative and should incorporate new clinical information for the patient16. The four cases with possible (but not reportable) diagnoses presented here illustrate the uncertainty that can remain in clinical NGS cases for which a negative report has been issued and the benefit of performing iterative reanalysis. Future studies may examine the utility of such iterative analysis using this automated pipeline. As genomic data and knowledge of genetic disorders continue to accumulate rapidly, reanalysis of initially negative cases will likely become a standard practice for many clinical laboratories. Although many clinical laboratories provide some mechanism for NGS reanalysis, either by provider request or through an internal procedure, these efforts are limited by the strain on staffing resources posed by reanalysis. Partial automation may ease this burden, allowing more regular reanalysis. Outside the scope of this publication, but of importance in the realm of NGS reanalysis, are the questions of how to initially prepare patients or research subjects for the possibility of reanalysis result delivery years after initial testing and how to responsibly deliver NGS reanalysis results18. Retrospective comparison of the diagnostic utility of reanalysis of WGS by manual and partially automated methods was approved by the IRB at the University of California, San Diego. Inpatients at RCHSD without etiologic diagnoses, in whom a genetic disorder was possible, were nominated for diagnostic, rapid WGS by diverse clinicians from 26 July 2016 to 3 April 2017. 
Informed consent was obtained from at least one parent or guardian of each patient included in the study. Of the 82 children who received rapid WGS during this period, 48 who received WGS that was not diagnostic at initial manual analysis and for whom at least 1 year had elapsed since initial analysis were studied herein. The clinical characteristics of 26 of the 48 children have been previously reported. WGS was performed on DNA extracted from the blood samples of study participants as previously described, on HiSeq 2500 or 4000 instruments with paired 101-nt reads. Alignment and nucleotide variant calling were performed using the DRAGEN hardware and software platform (version 2.1.5)10. Yield ranged from 115.8 to 239.8\u2009Gb, resulting in 4,765,952 to 5,654,509 variant calls per individual and an average of 45.3\u00d7 coverage. Analysis considered single-nucleotide variants (SNVs) and small insertions and deletions only. Manual variant analysis relied on a number of tools and resources, including the variant ranking tools Phevor and VAAST, population frequency databases such as ExAC and gnomAD, in silico damage prediction scores (SIFT, MutationTaster, PolyPhen), the Human Gene Mutation Database, ClinVar, literature searches, and manual inspection of reads using the Integrative Genomics Viewer. Manual reanalysis variant counts were generated using the same tools and resources, following the current variant filtering protocols used by Rady Children\u2019s Institute for Genomic Medicine for manual analysis/reanalysis. Tool version details are as follows: VAAST: 1.1; dbSNP: 147\u2013149; Genome Reference Consortium Human Genome Build v37; ExAC: 0.3; SIFT, MutationTaster, PolyPhen: dbNSFP v.2.9; HGMD: 2017.1\u20132017.2; ClinVar: May 26, 2016-March 27, 2017 weekly releases; IGV: 2.3.76\u20132.3.86. Phenotypic features were manually extracted from EHRs by analysts, and interpretation was performed on trios in 28 families, duos in 11 families, and the proband only in 7 families. 
VCF files from DRAGEN were annotated and analyzed in Opal Clinical versions 4.20\u20134.28 according to standard guidelines. HPO terms were re-extracted from patient records at the time of reanalysis, using CLiX ENRICH as previously described10. Briefly, unstructured clinical records were transformed into JSON format, encoded as SNOMED CT expressions by CLiX ENRICH, and transformed to an HPO list using a CLiX query map. Study participant VCF files, together with HPO term lists from CLiX ENRICH, were uploaded to Moon for automated, phenotype-driven variant analysis, as previously described10. For comparisons of Moon variant shortlists with HPO terms drawn from EHRs at enrollment or reanalysis, a newer version of Moon was used. CNV calls, which were analyzed separately from analysis and reanalysis, were generated after realignment and variant calling with DRAGEN 3.4.5, using an automated pipeline that integrates the tools Manta and CNVnator as previously described22. Copy number analysis was performed as previously described23. Briefly, the read pair-based tool, Manta, was used to detect smaller CNVs, while the coverage-based caller, CNVnator, was used to detect larger CNVs. Calls were filtered for events overlapping known disease genes and filtered by an internal allele frequency of <2%. Two-tailed p values were calculated using Prism. The 95% CI for the proportion of new diagnoses made upon reanalysis was calculated using the binomial exact (Clopper\u2013Pearson) method. 
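A minimal sketch of the CNV post-filtering step just described: keep calls that overlap a known disease gene and have an internal cohort allele frequency below 2%. The gene coordinates and calls below are hypothetical toy data, and this is not the study's actual Manta/CNVnator pipeline.

```python
# Toy sketch of the CNV post-filtering step: retain calls that overlap a
# known disease gene and have an internal allele frequency below 2%.
# Gene coordinates and calls are hypothetical examples only.

DISEASE_GENES = {
    "KAT6B": ("chr10", 76_586_000, 76_792_000),  # hypothetical coordinates
}

def overlaps_disease_gene(call):
    return any(
        call["chrom"] == chrom and call["start"] < end and call["end"] > start
        for chrom, start, end in DISEASE_GENES.values()
    )

def filter_cnvs(calls, max_internal_af=0.02):
    return [c for c in calls
            if overlaps_disease_gene(c) and c["internal_af"] < max_internal_af]

calls = [
    {"chrom": "chr10", "start": 76_600_000, "end": 76_650_000, "internal_af": 0.001},
    {"chrom": "chr10", "start": 76_600_000, "end": 76_650_000, "internal_af": 0.05},
    {"chrom": "chr1", "start": 100, "end": 200, "internal_af": 0.0},
]
kept = filter_cnvs(calls)
# only the first call passes both filters
```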
The information content of each HPO term was calculated as previously described10. Nonparametric Spearman correlations, Wilcoxon signed-rank tests, and corresponding two-tailed p values were used for statistical comparisons. Further information on research design is available in the Supplementary Information and Reporting Summary."} +{"text": "Purpose: To explore the relationships among leisure motivation, barriers, attitude, and satisfaction of middle school students in Chengdu, Sichuan, to help students establish a positive leisure attitude and to provide a reference for youth leisure counseling.Methods: Based on a review of the research literature, we designed a survey of teenagers' leisure motivation, barriers, attitude, and satisfaction; 2249 valid questionnaires from middle school students in Chengdu were obtained by stratified random sampling. The data were analyzed using a combination of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).Results: (1) There are significant positive correlations between leisure motivation and leisure attitude, between leisure attitude and leisure satisfaction, and between leisure motivation and leisure satisfaction; (2) There is a weak positive correlation (r = 0.35*) between leisure barriers and leisure motivation, which is contrary to common sense and needs further study; (3) Leisure barriers have no significant direct impact on leisure satisfaction, but they exert a significant negative impact on leisure satisfaction through the mediating variable of leisure attitude; (4) Leisure motivation is the most important variable in the whole leisure model: it not only has the greatest direct impact on leisure satisfaction but also exerts a large positive impact on leisure satisfaction through the mediation of leisure attitude.Conclusion: Adolescent leisure motivation, barriers, attitude, and satisfaction are complementary and interdependent.
Among them, leisure motivation is the core variable and leisure attitude is the dual mediating variable. By activating leisure motivation and helping adolescents establish a positive leisure attitude, their leisure satisfaction may be ensured. Leisure life is a product of national economic growth and changes in the social industrial structure. With the development of science and technology and socio-economic progress, the form of social life has changed significantly, indirectly giving Chinese people more free time, and the demand and willingness for leisure activities have also increased. Leisure activities have gradually become a focus of Chinese life. If leisure time is used to plan and improve leisure activities, it can not only bring personal health and relaxation benefits but also promote beneficial interaction between people. Teenagers are the most energetic population in society. Adolescence is not only a critical period of personality development and life adaptation but also the stage most affected by physiological and psychological changes. The growth experiences and the ideas and behaviors established in this period often have a decisive impact on future personality development and behavioral characteristics. At present, youth education faces new opportunities and challenges. As important educational activities, sports, leisure, and entertainment are inextricably linked with educational theory itself. According to reports from sports, leisure, and entertainment educators, ages 11-16 are the most important stage for completing the socialization process of teenagers. If collective leisure and entertainment activities can be arranged at this stage, they are very important for training teenagers' cooperation ability and team spirit.
Leisure projects such as orienteering, multi-person rowing, and sailing are good choices for this age group. Attitude plays an important role in forming personal behavior: a correct sports attitude can improve sports behavior, and attitude and behavior affect each other. Learning theory emphasizes that past behavioral experience is also one of the factors forming attitude. Many scholars regard leisure attitude as an individual's response tendency toward leisure, representing the individual's likes and dislikes for leisure activities and readiness for them, and divide the structure of leisure attitude into three dimensions: cognition, emotion, and behavior. Motivation is a force that urges people to take a certain behavior to meet a certain demand; it is the psychological or internal force that urges a person to carry out activities, and the internal process that initiates an individual's activity and maintains it toward a certain goal. Leisure barriers are hindrances of personal perception or experience; they do not necessarily result in non-participation in leisure activities, but may affect personal leisure preferences and change leisure participation. Leisure barriers are thus related to an individual's ability to overcome and deal with obstacles to successfully engage in leisure, and they have an impact on leisure experience and behavior. Leisure satisfaction is the subjective feeling individuals derive from their leisure experience; it is the concrete realization of motivation, preference, demand, or expectation. The relationship between leisure satisfaction and leisure attitude, leisure motivation, leisure barriers, and other variables has long attracted the attention of scholars at home and abroad. In sum, attitude plays an important role in the formation of personal behavior.
It is necessary to enable middle school students to adjust their study and life through leisure activities and to improve their cognition of leisure activities; by establishing a correct leisure attitude, substantial gains in leisure experience can be achieved. Leisure motivation is the internal driving force that initiates and maintains people's activities; understanding the reasons and motivations of individuals engaged in leisure activities reveals their psychological motives and tendencies, and the stronger the leisure motivation, the higher the frequency of leisure participation. Leisure barriers are factors that affect individuals' engagement in leisure activities, and the frequency of leisure participation is negatively related to leisure barriers. Leisure satisfaction is the positive psychological outcome of engaging in leisure activities, providing fascinating and unforgettable leisure experiences. Although many scholars at home and abroad have discussed teenagers' leisure attitude, leisure motivation, and leisure barriers, few have developed a comprehensive and in-depth understanding of the relationships between leisure satisfaction and leisure attitude, leisure motivation, and leisure barriers. Understanding the factors and relationships affecting teenagers' leisure behavior, properly planning their leisure life, and appropriately engaging in leisure activities will have a positive impact on teenagers' school life and personality growth.
Based on previous studies, this study puts forward the following four hypotheses: (1) there are multiple groups of canonical correlation structures among leisure motivation, leisure barriers, leisure attitude, and leisure satisfaction; (2) leisure motivation has a positive impact on leisure attitude and leisure satisfaction, and leisure attitude also has a positive impact on leisure satisfaction; (3) the influence of leisure barriers on leisure attitude is negative, and the influence of leisure motivation on leisure attitude should be higher than that of leisure motivation on leisure satisfaction; (4) leisure attitude mediates between leisure barriers and leisure satisfaction, and it also mediates between leisure motivation and leisure satisfaction. First, junior and senior high schools in Chengdu were stratified by district, and 5 schools were selected by random sampling. Each school was divided into grades 1 and 2 of junior middle school and grades 1 and 2 of senior high school, and students at each level were randomly selected for the questionnaire survey. A total of 2550 questionnaires were sent out and 2351 were recovered; 102 invalid questionnaires were excluded, for an effective recovery rate of 88%. The whole questionnaire consists of the subjects' basic data and four scales. The basic data include gender, age, grade, accommodation, monthly discretionary funds for leisure activities, average academic achievement, and family residence. The four scales measure leisure attitude, leisure motivation, leisure barriers, and leisure satisfaction, each adapted from previously published instruments. On March 15, 2019, this study selected two classes each in Chengdu Shishi Middle School and Jinniu Middle School for a pre-test, distributing 300 questionnaires and recovering 278, of which 12 were invalid, leaving 266 valid questionnaires.
The validity and reliability of the questionnaire were tested and corrected based on the pre-test. The formal investigation was completed from September 15 to 30, 2019. Exploratory factor analysis (EFA) showed that the sports leisure attitude scale, with 16 items, was suitable for factor analysis (p < 0.001); three common factors could be extracted, with Cronbach's α coefficients of 0.84, 0.81, and 0.83 and an overall scale α of 0.85. Confirmatory factor analysis (CFA) showed fit indexes AGFI, CFI, NFI, and IFI of 0.92, 0.95, 0.93, and 0.91, all greater than the 0.90 standard, with RMSEA = 0.03; in addition, the composite reliability of the three common factors exceeded 0.79, indicating good reliability and validity. EFA showed that the leisure motivation scale, with 21 items, was suitable for factor analysis. Four common factors could be extracted, with Cronbach's α coefficients between 0.79 and 0.85 and an overall α of 0.88; CFA fit indexes AGFI, CFI, NFI, and IFI were 0.91, 0.92, 0.92, and 0.93, all above 0.90, with RMSEA = 0.04, and the composite reliability of the four dimensions exceeded 0.81. EFA showed that the leisure barriers scale, with 17 items, was suitable for factor analysis. Three common factors could be extracted, with Cronbach's α coefficients between 0.82 and 0.86 and an overall α of 0.85; CFA fit indexes were 0.94, 0.90, 0.91, and 0.93, all above 0.90, with RMSEA = 0.02, and the composite reliability of the three dimensions exceeded 0.79. EFA showed that the leisure satisfaction scale, with 25 items, was suitable for factor analysis. Six common factors could be extracted, with Cronbach's α coefficients between 0.81 and 0.86 and an overall α of 0.84; CFA fit indexes were 0.91, 0.93, 0.95, and 0.92, all above 0.90, with RMSEA = 0.03, and the composite reliability of the six dimensions exceeded 0.81. Canonical correlation analysis is a statistical method used to test the degree of correlation between one group of control variables and another group of criterion variables. It aims to find the maximum correlation between linear combinations of the control variables and linear combinations of the criterion variables, and it can yield both significant and non-significant canonical combinations. It generally provides three kinds of information. The first is the canonical correlation coefficient, which reflects the degree of correlation between the two linear combinations; it must reach a significant level for the two groups to be considered significantly related. The second is the determination (judgment) coefficient, which indicates how much of the variance of the criterion-side canonical factors can be explained by the control-side canonical factors (it should be no less than 10%). The third is the structural coefficient, which reflects the correlation between each variable and its own canonical linear combination; its absolute value must exceed 0.30 for the variable to have explanatory power. Data were analyzed with EFA and CFA, and mediating effects were estimated with the bootstrap method. For the first pair of canonical variates, r = 0.44** reached a significant level, and the determination coefficient R² = 0.194, indicating that the canonical factors in the control variable group explain 19.4% of the total variation of the canonical factors in the criterion variable group (exceeding the 10% minimum). In the control variable group, leisure cognition, leisure behavior, and leisure emotion were highly correlated with leisure attitude, with typical factor loads of 0.90, 0.88, and 0.75, respectively.
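The internal-consistency coefficients reported above follow the standard Cronbach's α formula. The sketch below uses made-up Likert responses purely to illustrate the computation.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# hypothetical 5-point Likert responses: 6 respondents x 3 items
scores = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 2, 3], [4, 4, 4], [3, 2, 2]]
alpha = cronbach_alpha(scores)
```

α approaches 1 as items become more strongly intercorrelated; values around 0.8 or above are conventionally treated as good internal consistency, which is the standard the scales above meet.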
Therefore, it can be considered that leisure attitude affects leisure barriers through the leisure cognition, behavior, and emotion in its variable group, while the variables highly related to leisure barriers are internal barriers and structural barriers, with corresponding loads of −0.68 and −0.87; from the signs of the factor loads, the relationship between the two sets is in the opposite direction. This first canonical correlation thus reflects the relationship between leisure attitude (control variable) and leisure barriers (criterion variable). The second group of canonical correlations reflects the relationship between leisure barriers (control variable) and leisure motivation (criterion variable). The canonical correlation coefficient r = 0.49** reaches a significant level, and the determination coefficient R² = 0.24, indicating that the canonical factors in the control variable group explain 24% of the total variation of the canonical factors in the criterion variable (exceeding the 10% minimum). In the control variable group, internal and structural barriers are highly correlated with leisure barriers, with typical factor loads of −0.68 and −0.87, respectively. Therefore, leisure barriers mainly affect leisure motivation through the internal and structural barriers in the variable group, while the variables highly correlated with leisure motivation are the development of intelligence, social skills, competence-proficiency, and stimulus avoidance, with corresponding loads of −0.74, −0.63, −0.79, and −0.89; from the signs of the factor loads, the relationship between the two sets is in the same direction. The third group of canonical correlations reflects the relationship between leisure motivation (control variable) and leisure attitude (criterion variable). Its canonical correlation coefficient r = 0.67** reaches a significant level, and the determination coefficient R² = 0.45, indicating that the canonical factors in the control variable group explain 45% of the total variation of the canonical factors in the criterion variable (exceeding the 10% minimum). Among the control variables, the development of intelligence, social skills, competence-proficiency, and stimulus avoidance are highly correlated with leisure motivation, with typical factor loads of 0.65, 0.72, 0.85, and 0.66, respectively. Therefore, leisure motivation affects leisure attitude through these dimensions, while the variables highly correlated with leisure attitude are leisure cognition, behavior, and emotion, with corresponding loads of 0.90, 0.88, and 0.75; from the signs of the factor loads, the relationship between the two sets is in the same direction. The fourth group of canonical correlations reflects the relationship between leisure barriers (control variable) and leisure satisfaction (criterion variable). The canonical correlation coefficient r = 0.29** reaches a significant level, but the determination coefficient R² is only 0.08, indicating that the canonical factors in the control variable group explain only 8% of the total variation of the canonical factors in the criterion variable, failing to reach the 10% minimum. Therefore, the correlation between leisure barriers and leisure satisfaction is weak, and their mutual influence is limited.
The fifth group of canonical correlations reflects the relationship between leisure motivation (control variable) and leisure satisfaction (criterion variable). Its canonical correlation coefficient r = 0.77** reaches a significant level, and the determination coefficient R² = 0.59, indicating that the canonical factors in the control variable group explain 59% of the total variation of the canonical factors of the criterion variable (exceeding the 10% minimum). In the control variable group, the development of intelligence, social skills, competence-proficiency, and stimulus avoidance are highly correlated with leisure motivation, with typical factor loads of −0.79, −0.71, −0.89, and −0.73, respectively. Therefore, leisure motivation affects leisure satisfaction through these four dimensions, while the variables highly correlated with leisure satisfaction are mental health, educational pleasure, social satisfaction, stress relief, physical health, and site aesthetics, with corresponding loads of −0.66, −0.76, −0.62, −0.87, −0.75, and −0.55, respectively; from the signs of the factor loads, the relationship between the two sets is in the same direction. The sixth group of canonical correlations reflects the relationship between leisure attitude (control variable) and leisure satisfaction (criterion variable). Its canonical correlation coefficient r = 0.61** reaches a very significant level, and the determination coefficient R² = 0.37, indicating that the canonical factors in the control variable group explain 37% of the total variation of the canonical factors in the criterion variable group (exceeding the 10% minimum). In the control variable group, leisure cognition, behavior, and emotion are highly correlated with leisure attitude, with typical factor loads of −0.91, −0.87, and −0.84, respectively. Therefore, leisure attitude affects leisure satisfaction through cognition, behavior, and emotion in the variable group, while the variables highly correlated with leisure satisfaction are mental health, social satisfaction, physical health, and site aesthetics, with corresponding loads of −0.88, −0.50, −0.46, and −0.75; from the signs of the factor loads, the relationship between the two sets is in the same direction. Turning to the structural model, the absolute fit test of the initial model gave X² = 81.01 and X²/df = 9.17 with p = 0.000 < 0.05, indicating that the covariance matrix of the hypothesized model did not match the observed data well; GFI = 0.817, AGFI = 0.808, and RMSEA = 0.341, all failing the adaptation standards (GFI and AGFI should exceed 0.90). From the incremental fit results, NFI = 0.787, IFI = 0.801, CFI = 0.830, and RFI = 0.785, all below the 0.90 standard. In short, by both absolute and incremental fit tests, the initial correlation model did not match the actual data well, so the model had to be corrected. After correction, the absolute fit index X² = 5.88, X²/df = 1.89, and p = 0.081 > 0.05, indicating that the covariance matrix of the model fits the observed data (X²/df between 1 and 3 indicates adaptation); GFI = 0.925, AGFI = 0.930, and RMSEA = 0.054 (values of 0.05-0.08 are good). From the incremental fit results, NFI = 0.932, IFI = 0.941, CFI = 0.934, and RFI = 0.945, all above 0.90. It can be seen that, after correction, the model is well adapted to the actual data.
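The canonical correlation coefficients reported for each group can be obtained from a standard QR/SVD construction: the singular values of the product of the orthonormal bases of the two centered variable blocks. The sketch below uses simulated data in place of the survey responses; the variable names are illustrative only.

```python
import numpy as np

def canonical_correlations(X, Y):
    """All canonical correlation coefficients between variable sets X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases of each block; singular values of Qx' Qy are the
    # canonical correlations (standard QR-based CCA).
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)  # guard tiny numerical overshoot

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                      # e.g. 3 motivation dimensions
B = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.5]])  # fixed loading matrix
Y = X @ B + 0.1 * rng.normal(size=(300, 2))        # e.g. 2 attitude dimensions
r = canonical_correlations(X, Y)                   # decreasing order
```

Squaring each coefficient gives the determination coefficient used in the text (share of criterion-side canonical variance explained, with 10% as the reporting threshold).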
The indirect effect of leisure motivation on middle school students' leisure satisfaction is 0.22**, which is very significant; its confidence interval of 0.102-0.411 does not contain zero. The direct effect of leisure motivation on leisure satisfaction is 0.60**, also very significant, with a confidence interval of 0.326-0.781 that does not contain zero. The total effect of leisure motivation on leisure satisfaction is 0.82**, with a confidence interval of 0.501-0.902 that likewise does not contain zero. This confirms that leisure attitude plays a partial mediating role between leisure motivation and leisure satisfaction. The other path is the impact of leisure barriers on leisure satisfaction: the indirect effect of leisure barriers on leisure satisfaction is −0.07**, reaching a significant level, with a confidence interval of −0.158 to −0.008 that clearly excludes zero, while the direct effect of leisure barriers on leisure satisfaction is 0.01, not significant, with a confidence interval of −0.109 to 0.159 that clearly includes zero. Therefore, leisure attitude plays a complete mediating role between leisure barriers and leisure satisfaction. The judgment coefficient R² = 0.41 shows that 41% of the variation in middle school students' leisure attitude can be explained by leisure motivation and leisure barriers. The impact of leisure motivation on leisure attitude is positive, with an explanatory strength of about 46%, while the impact of leisure barriers on leisure attitude is negative, with an explanatory strength of about 5%; the influence of leisure motivation on leisure attitude is significantly higher than that of leisure barriers.
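The bootstrap confidence intervals for indirect effects quoted above come from resampling the product of the two regression paths. The sketch below illustrates a percentile-bootstrap CI for an indirect effect on simulated data; the variable names and path sizes are assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
motivation = rng.normal(size=n)
attitude = 0.6 * motivation + rng.normal(scale=0.8, size=n)                  # path a
satisfaction = (0.5 * attitude + 0.4 * motivation
                + rng.normal(scale=0.8, size=n))                             # path b + direct

def indirect_effect(x, m, y):
    """a*b from two OLS fits: m ~ x, then y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

boot = [indirect_effect(*(arr[idx] for arr in (motivation, attitude, satisfaction)))
        for idx in (rng.integers(0, n, size=n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])  # CI excluding zero -> significant mediation
```

An interval that excludes zero, as here, is the criterion the text uses to declare the mediated path significant.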
In the influence of leisure motivation, leisure barriers, and leisure attitude on middle school students' leisure satisfaction, the judgment coefficient R² = 0.62, showing that 62% of the variation in leisure satisfaction can be explained by the direct or indirect influence of these three variables. The direct influences of leisure motivation, leisure barriers, and leisure attitude on leisure satisfaction were 36% (0.60 × 0.60 = 0.36), 10% (0.32 × 0.32 = 0.10), and 1% (0.11 × 0.11 = 0.01), respectively; mediation path I contributes 22% (0.68 × 0.32 = 0.22), while the complete mediation path II is negative, at −7% (−0.23 × 0.32 = −0.07). The overall influence (direct plus indirect) is positive, at 62%. In the first group of canonical correlations, the absolute values of the structural coefficients of the control-variable leisure attitude dimensions all exceed 0.70, indicating that each control variable is highly correlated with leisure attitude. Among the criterion-variable leisure barriers, except for interpersonal barriers, the absolute values of the structural coefficients of personal internal barriers and structural barriers are greater than 0.6. It can be seen that leisure cognition, behavior, and emotion are the main factors affecting personal internal barriers and structural barriers; that is, the subjects' cognition, feelings, and preferences regarding leisure activities and experiences, together with their leisure behavior patterns, affect their personality traits and mental state, leisure preferences, and leisure participation.
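The variance decomposition above can be checked arithmetically; the path coefficients below are the standardized values quoted in the text.

```python
# Standardized path coefficients reported in the text.
motivation_direct = 0.60         # motivation -> satisfaction
barriers_direct = 0.32           # barriers path used in the decomposition
attitude_direct = 0.11           # attitude -> satisfaction
motivation_to_attitude = 0.68    # mediation path I, first leg
barriers_to_attitude = -0.23     # mediation path II, first leg
attitude_to_satisfaction = 0.32  # shared second leg of both mediated paths

# Direct shares are the squared paths (36%, 10%, ~1% in the text).
direct_shares = (motivation_direct ** 2, barriers_direct ** 2, attitude_direct ** 2)

# Mediated contributions are products of the two legs.
indirect_i = motivation_to_attitude * attitude_to_satisfaction   # ~0.22
indirect_ii = barriers_to_attitude * attitude_to_satisfaction    # ~-0.07
total_motivation = motivation_direct + indirect_i                # ~0.82
```

The reproduced values match the text's figures (22%, −7%, and a total motivation effect of 0.82) once rounded to two decimals.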
In addition, from the perspective of leisure attitude, the signs of the structural coefficients of the control variables and of the leisure barriers criterion variables are opposite, indicating that the better the subjects' leisure attitude, the fewer barriers they encounter in leisure. The second group of canonical correlations showed that, among the control variables, individual internal barriers and structural barriers were closely related to the canonical factor of leisure barriers, with structural barriers showing the strongest correlation (r = −0.87); for the criterion-variable leisure motivation, the absolute value of the structural coefficient of each dimension is greater than 0.6, with stimulus avoidance most highly correlated with leisure motivation (r = −0.89). This canonical relationship reveals that personal internal barriers and structural barriers significantly affect leisure motives such as developing intelligence, social skills, competence-proficiency, and stimulus avoidance. Since the structural coefficients of the leisure barriers and leisure motivation dimensions are all negative, it appears that the lower the degree of leisure barriers teenagers experience, the lower their motivation to engage in leisure, a result that does not support some previous findings. The third canonical correlation shows that the development of intelligence, social skills, competence-proficiency, and stimulus avoidance in the leisure motivation variable group are highly correlated with it (loads exceeding 0.65), while the loads of the cognition, behavior, and emotion factors of leisure attitude in its variable group exceed 0.74, and the signs of the factor loads of leisure motivation and leisure attitude are in the same direction; this confirms that the higher an individual's leisure motivation, the stronger their leisure attitude. The fourth canonical correlation indicates that the canonical correlation between leisure barriers and leisure satisfaction is not strong. The fifth and sixth canonical correlations show that leisure satisfaction is highly correlated with mental, aesthetic, physiological, and social satisfaction, and the same-direction signs of the factor loads show that the subjects' cognition of, feelings about, and preferences for leisure activities and experiences, together with their leisure behavior patterns, affect their leisure preferences and their interest in open space and natural beauty, interpersonal interaction, and leisure activities. Therefore, it can be inferred that the main influence on leisure satisfaction is leisure motivation, which has a direct influence of 36% and an additional indirect influence of 22% through the mediation of leisure attitude. In order to improve middle school students' leisure satisfaction, we should make good use of the factors affecting leisure activities. The results of this study suggest that actively stimulating middle school students' leisure motivation and helping them establish a positive leisure attitude should improve their leisure satisfaction and quality of life.
Therefore, it is necessary to strengthen middle school students' leisure education, establish correct leisure concepts and attitudes, emphasize leisure benefits, and induce their leisure motivation. In terms of leisure motivation, schools should add leisure facilities, beautify the leisure environment, and strengthen the safety of leisure venues, which will positively support students' participation in after-school leisure. In terms of leisure barriers, schools should provide leisure consulting and counseling services, increase students' leisure sports knowledge and skills through appropriate leisure sports courses and related activities, and regularly organize leisure sports lectures, so as to reduce students' leisure barriers and create a virtuous circle for the overall leisure model of teenagers. There are many factors and levels involved in the causal model affecting middle school students' leisure. This study discusses the causal model only in terms of leisure attitude, leisure motivation, leisure barriers, and leisure satisfaction; to fully understand the complete picture of the complex leisure model, more influencing variables should be added. When verifying similar hypothetical models in the future, researchers should consider adding different variables to the model in order to obtain more complete information. The survey participants of this study are mainly middle school students in Chengdu, in western China. Follow-up researchers should therefore cover subjects in the northern, central, southern, and eastern regions of China, and expand the scope of subjects to primary school students, college students, and middle-aged and elderly people, so as to make the research results more representative.
In addition, this study did not measure teenagers' actual leisure time and specific leisure behavior, but addressed them only indirectly through the questionnaire; this needs to be examined in depth in the future. There are six canonical correlation structures among leisure motivation, barriers, attitude, and satisfaction. Among them, leisure motivation has a significant positive correlation with leisure attitude and leisure satisfaction, and leisure attitude has a significant positive correlation with leisure satisfaction; leisure barriers have a significant negative impact on leisure attitude, and the direct impact of leisure motivation on leisure attitude is significantly higher than that of leisure motivation on leisure satisfaction. Leisure attitude mediates not only between leisure motivation and leisure satisfaction but also between leisure barriers and leisure satisfaction. Leisure motivation and leisure barriers jointly explain 44% of the variation in leisure attitude, while leisure attitude, leisure motivation, and leisure barriers jointly explain 59% of the total variation in leisure satisfaction. After removing the negative effects of leisure barriers, leisure motivation is the determinant of leisure attitude and leisure satisfaction. In terms of the impact of leisure motivation, barriers, and attitude on leisure satisfaction, leisure motivation is the core variable, and leisure attitude plays a dual mediating role. Therefore, educating teenagers may help them establish a productive leisure attitude and improve leisure satisfaction. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. This study was reviewed and approved by the Ethics Review Committee of Chengdu Institute of Physical Education.
However, this study did not involve human or animal experimentation, and written informed consent was not required. YW was mainly responsible for the design of the manuscript and the preparation of the questionnaire and participated in the writing of the manuscript. JS, FF, and XW were mainly engaged in the distribution of the questionnaire and in data processing and analysis. YP was mainly responsible for coordination among the members of the research group, financial support, and revision of the manuscript. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "For this study, we explored the prognostic profiles of biliary neuroendocrine neoplasm (NEN) patients and identified factors related to prognosis. Further, we developed and validated an effective nomogram to predict the overall survival (OS) of individual patients with biliary NENs. We included a total of 446 biliary NEN patients from the SEER database. We used Kaplan-Meier curves to determine survival time. We employed univariate and multivariate Cox analyses to estimate hazard ratios and identify prognostic factors. We constructed a predictive nomogram based on the results of the multivariate analyses.
In addition, we included 28 biliary NEN cases from our center as an external validation cohort. The median survival time of biliary NENs from the SEER database was 31 months, and the value for gallbladder NENs (23 months) was significantly shorter than that for the bile duct (45 months) and the ampulla of Vater. Multivariate Cox analyses indicated that age, tumor size, pathological classification, SEER stage, and surgery were independent variables associated with survival. The constructed prognostic nomogram demonstrated good calibration and discrimination, with C-index values of 0.783 and 0.795 in the training and validation datasets, respectively. Age, tumor size, pathological classification, SEER stage, and surgery were predictors of survival in biliary NENs. We developed a nomogram that could determine 3-year and 5-year OS rates. Through validation against our center's database, the novel nomogram proved a useful tool for clinicians in estimating individual survival among biliary NEN patients. Neuroendocrine neoplasms represent a group of highly heterogeneous diseases (depending on the primary site) that originate from peptidergic neurons and neuroendocrine cells. Given the rarity of biliary NENs, the clinicopathological characteristics and prognosis of these patients remain unclear. To date, the literature on biliary NENs is relatively sparse, and most studies are case reports. In the present study, we sought to analyze and compare the prognostic features of biliary NENs based on a relatively large number of cases collected from the SEER database and to develop an elaborate nomogram to predict 3-year and 5-year overall survival (OS) rates based on significant prognostic factors. Further, we carried out external validation of this prediction model using our hospital database. We obtained the data in this study from two sources. The first was from the SEER database.
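The median survival times above come from Kaplan-Meier estimation. As an illustration only (not the study's code), a minimal estimator can be sketched as follows; the toy cohort is invented for the example:

```python
# Minimal Kaplan-Meier sketch (illustrative, not the study's analysis):
# S(t) is the product over event times t_i <= t of (1 - d_i / n_i),
# with d_i deaths among n_i subjects still at risk.
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # deaths at time t
        removed = sum(1 for tt, _ in data if tt == t)  # deaths + censored at t
        if d > 0:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
        i += removed
    return curve

def median_survival(curve):
    """First event time at which S(t) drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# Toy cohort: months of follow-up and event indicators (invented values)
times  = [5, 10, 20, 23, 31, 31, 40, 45, 50, 60]
events = [1,  1,  1,  0,  1,  1,  1,  1,  0,  1]
curve = kaplan_meier(times, events)
print("median OS (months):", median_survival(curve))  # -> 31
```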
We used the SEER 18 Registries provided by the SEER*Stat Database (version 8.3.8), which contain information on biliary neuroendocrine neoplasm patients. We derived the frequency and case distribution data from the SEER 18 Databases. The other data source comprised biliary NEN patients who were diagnosed with NENs and received treatment at Peking Union Medical College Hospital from 1991 to 2017. Histological assessment of tumor tissues and immunohistochemical tests were performed at the Pathology Department of Peking Union Medical College Hospital to confirm the pathology and histological classification. Since SEER data are publicly available and all patient data are de-identified, institutional review board approval and informed consent were not required for that part of the study. The patients included from our center provided oral consent, and the study was approved by the Institutional Review Board of Peking Union Medical College Hospital (S-K597). This study was performed in accordance with the ethical standards of the 1964 Helsinki Declaration and its later amendments. We identified all patients with a diagnosis of neuroendocrine carcinoma, carcinoid, small cell carcinoma, large cell neuroendocrine carcinoma, and mixed adenoneuroendocrine carcinoma (MANEC) of the gallbladder, bile duct, and ampulla of Vater (AoV) using the SEER codes generated from the International Classification of Diseases for Oncology published by the World Health Organization (WHO). The corresponding ICD-O-3 codes were 8246/3, 8240/3, 8041/3, 8013/3 and 8244/3, respectively. For the primary site of the disease, we used the topographical codes C23.9, C22.1, C24.0, C24.9 and C24.1. In addition, all included cases had a positive pathological diagnosis. We excluded patients for whom demographic or survival information was not available. We extracted demographic information, clinicopathological characteristics, survival time, and therapy information (surgery) from the chosen cases.
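The case-selection step described above amounts to filtering records on the listed ICD-O-3 histology and topography codes. A hypothetical sketch follows; the record layout and the field names `histology` and `site` are assumptions for illustration, not the SEER*Stat export format:

```python
# Hypothetical sketch of the cohort-selection step: keep records whose
# ICD-O-3 histology and topography codes match those listed in the text.
HISTOLOGY = {"8246/3", "8240/3", "8041/3", "8013/3", "8244/3"}
TOPOGRAPHY = {"C23.9", "C22.1", "C24.0", "C24.9", "C24.1"}

def select_biliary_nens(records):
    """records: iterable of dicts with 'histology' and 'site' code fields
    (field names are assumptions, not the actual SEER export schema)."""
    return [r for r in records
            if r["histology"] in HISTOLOGY and r["site"] in TOPOGRAPHY]

cases = [
    {"id": 1, "histology": "8240/3", "site": "C23.9"},  # gallbladder carcinoid: kept
    {"id": 2, "histology": "8140/3", "site": "C23.9"},  # adenocarcinoma: excluded
    {"id": 3, "histology": "8013/3", "site": "C24.1"},  # ampulla of Vater LCNEC: kept
]
print([r["id"] for r in select_biliary_nens(cases)])  # -> [1, 3]
```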
We performed the pathological classification of NENs according to the 2010 ENETS/WHO criteria. OS was defined as the period from the date of diagnosis to the date of death from any cause; patients alive at the date of last contact were censored. We used the univariate Cox proportional hazards model to screen out significant prognostic variables for further multivariate Cox analysis and to establish their covariate-adjusted effects on survival time. We included all variables significant in the multivariate Cox regression, together with the previously defined 'variables of interest' (site of primary tumor), as prognostic factors in the nomogram construction, and chose variables carefully to ensure parsimony of the final model. Based on the predictive model using the identified prognostic factors, we built a nomogram to determine the 3- and 5-year OS rates. Validation of the nomogram covered its discrimination and calibration using the external validation set from our hospital. We evaluated discrimination by employing the concordance index (C-index), which quantifies the probability that, of two random patients, the patient who relapses first is the one assigned the higher predicted probability of the event of interest. A higher C-index indicates better discrimination. We generated a calibration plot by comparing the mean predicted survival rate with the mean actual survival rate established through Kaplan-Meier analysis. We performed all analyses using SPSS version 25 and R version 4.0.3, and considered p<0.05 to be statistically significant. For nomogram construction and external validation, we used the SEER database as the training set and our hospital patient data set as the external validation cohort. We selected the prognostic variables for survival time via univariate and multivariate Cox analyses. We selected a total of 446 biliary NEN cases diagnosed between 2000 and 2017 from the SEER database.
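The C-index described above can be sketched for right-censored data as follows. This is an illustrative simplification (pairs tied on survival time are simply skipped), not the implementation used in the study:

```python
# Minimal concordance-index sketch: among usable pairs, the fraction where
# the patient with the higher predicted risk is the one who fails earlier.
# A pair is usable only when we can tell who failed first, i.e. the
# earlier time corresponds to an observed event, not a censoring.
def concordance_index(times, events, risks):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that a is the earlier follow-up time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or events[a] == 0:
                continue  # cannot tell who failed first
            usable += 1
            if risks[a] > risks[b]:
                concordant += 1
            elif risks[a] == risks[b]:
                concordant += 0.5  # tied predictions count half
    return concordant / usable

# Toy data: higher risk score should mean shorter survival
times  = [10, 20, 30, 40]
events = [1,  1,  0,  1]
risks  = [0.9, 0.7, 0.4, 0.2]
print(round(concordance_index(times, events, risks), 3))  # -> 1.0
```

A value of 0.5 would indicate predictions no better than chance, which is why the reported 0.783 and 0.795 are read as good discrimination.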
In addition to the primary site (p=0.476 and 0.459), which we had previously defined as a 'variable of interest', we recognized the following variables as prognostic factors for survival time in the multivariate Cox regression analysis: age, pathological classification, stage, surgery, and tumor size. We therefore included all of the above variables in developing the nomogram for survival time. The nomogram can be used to predict the probability of a patient's survival at 3 or 5 years (Figure 3). In our study, by reviewing the clinicopathological characteristics of biliary NEN patients and exploring the prognosis and related risk factors, we developed a nomogram for the prediction of 3-year and 5-year survival rates for these patients, and performed nomogram validation using data from our center. Using the Kaplan-Meier method and univariate and multivariate Cox analyses, we found that age older than 65 years, advanced SEER stage, increased tumor size, and a pathological classification of NEC were significantly related to decreased survival time. Moreover, biliary NEN patients who underwent surgery had better survival outcomes. The developed nomogram model makes it easy for patients and physicians to use clinical and pathological risk factors to predict OS time. Previous studies, including case reports and literature reviews such as that of Ayabe et al., have examined the survival time and risk factors for the prognosis of NENs of different classifications at various biliary system sites. In addition to the pathological classification related to the prognosis of the biliary system, other clinical or pathological characteristics obtained in the course of diagnosis and treatment can be used to evaluate individual outcomes.
For example, patients with metastasis, whether to regional lymph nodes or adjacent organs, tend to have worse outcomes. Although the multivariable Cox analysis in our research identified prognostic factors (age, SEER stage, surgery, tumor size, and pathological classification), these variables alone could not provide an accurate and discriminatory prediction for biliary system NENs, especially for the survival rates that concern clinicians, patients, and their families. Thus, a prognostic prediction model is needed to answer these questions; for NENs, the TNM staging system and ENETS criteria have previously been applied. Although the verification results and power analysis (Power = 0.8689) were good, the value of the C-index may change after adding samples or centers. Future studies could include validation cohorts from different centers to control for selection bias to some extent. In addition, in terms of the treatment of NENs, we only considered the effects of surgery on prognosis, ignoring neoadjuvant or adjuvant therapy. This work was supported in part by the Program Focus Health of Liver and Gallbladder in Elder (ZYJ201912). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
{"text": "Alphaviruses are members of the Togaviridae family and are widely distributed across the globe. Venezuelan equine encephalitis virus (VEEV) and eastern equine encephalitis virus (EEEV) cause encephalitis and neurological sequelae, while chikungunya virus (CHIKV) and Sindbis virus (SINV) cause arthralgia. There are currently no approved therapeutics or vaccines available for alphaviruses. In order to identify novel therapeutics, a V5 epitope tag was inserted into the N-terminus of the VEEV E2 glycoprotein and used to identify host-viral protein interactions. Host proteins involved in protein folding, metabolism/ATP production, translation, cytoskeleton, complement, vesicle transport and ubiquitination were identified as VEEV E2 interactors.
Multiple inhibitors targeting these host proteins were tested to determine their effect on VEEV replication. The compound HA15, a GRP78 inhibitor, was found to be an effective inhibitor of VEEV, EEEV, CHIKV, and SINV. The VEEV E2 interaction with GRP78 was confirmed through coimmunoprecipitation and colocalization experiments. Mechanism of action studies found that HA15 does not affect viral RNA replication but instead affects late stages of the viral life cycle, which is consistent with GRP78 promoting viral assembly or viral protein trafficking. Alphaviruses are a genus of the Togaviridae family that are significant human and veterinary pathogens. Venezuelan equine encephalitis virus (VEEV) and eastern equine encephalitis virus (EEEV) are endemic to the Western Hemisphere; more specifically, VEEV is endemic to the United States and Central and South America. Given the critical role of the E2 protein in viral entry and budding, it is anticipated that there are additional host-protein interactions yet to be discovered. In this study, a V5 tag attached to the N-terminus of the VEEV E2 glycoprotein was used to enable proteomic identification of VEEV E2-host protein interactions. Various inhibitors of the identified host protein interactors were tested, including the compound HA15, an inhibitor of the ER chaperone GRP78 developed by Cerezo et al. (2016). Growth curves of VEEV TC-83 and TC-83 V5-E2 both plateaued at 24 hpi, attaining titers ≥ 1 × 10^9 plaque forming units (PFU)/mL, and cell lysates from HEK 293T cells infected with VEEV TC-83 and TC-83 V5-E2 were analyzed by western blot. The inhibitors tested were Fluvastatin, VER155008 (HSP70 inhibitor), BTB06584 (ATP synthase inhibitor), Enterostatin (ATP synthase inhibitor), Metformin (GRP78 inhibitor), and HA15 (GRP78 inhibitor). Fluvastatin, VER155008, BTB06584, and Enterostatin had no impact on VEEV infectious titers at non-toxic concentrations in Vero cells, whereas HA15 treatment reduced infectious titers to approximately 10^5 PFU/mL (figure panels B-G).
A marked log10 reduction was observed in cells treated with 50 µM HA15: infectious titers fell from approximately 10^7 PFU/mL in DMSO-treated cells to 1.00 × 10^4 PFU/mL and 1.43 × 10^6 PFU/mL (50 µM and 25 µM HA15, respectively), and in a parallel experiment from approximately 10^7 PFU/mL in DMSO-treated cells to 3.33 × 10^4 PFU/mL (HA15 50 µM) and 3.13 × 10^6 PFU/mL (HA15 25 µM). The effect of HA15 on VEEV TrD, EEEV FL93-939, SINV EgAr 339, and CHIKV 181/25 was then assessed. For this analysis, Vero cells were pre- and post-treated with HA15 (50 µM and 25 µM) and infectious titers were determined at 16 hpi. Both VEEV TrD and EEEV infectious viral titers were significantly impacted by HA15 treatment, with a 3 log10 reduction at the higher dose; SINV and CHIKV titers were likewise reduced (figure panels A-E), indicating broad activity against alphaviruses. Having established that inhibition of GRP78 impacts VEEV as well as other alphaviruses, the interaction between GRP78 and the VEEV E2 glycoprotein was further evaluated. Co-immunoprecipitation assays were performed to confirm that GRP78 is an interacting partner of VEEV E2. HEK 293T cells were infected with VEEV TC-83 V5-E2 or the parental VEEV TC-83 at a MOI of 5. At 16 hpi, cells were collected and the membrane fraction was isolated using a Qiagen QProteome Cell Compartment kit. The membrane fractions were then immunoprecipitated using an anti-V5 antibody. The precipitate was analyzed by western blot, and a band for GRP78 was observed in the cell lysates from cells infected with VEEV TC-83 V5-E2 but not in the control (figure panel C); titers in these experiments reached up to 3.35 × 10^9 PFU/mL. Previous studies have shown that loss of GRP78 results in a corresponding increase in GRP94 protein levels as a way to compensate for the loss of GRP78. To further confirm the importance of GRP78 for VEEV TC-83 replication, siRNA against GRP78 was used to knock down protein expression. Primary human astrocytes were transfected with siRNA against GRP78 or a negative control. After a 48-h transfection period, the cells were infected with VEEV TC-83 at a MOI of 0.1.
At 16 hpi, cell lysates were collected and analyzed by western blot, which showed that GRP78 expression was successfully reduced (figure panel A), and viral titers from GRP78 knockdown cells were correspondingly reduced. Extracellular viral RNA decreased from approximately 10^11 to 7.88 × 10^10 VEEV TC-83 copies/mL at 25 µM HA15, and by almost 3 logs at 50 µM HA15, to 6.45 × 10^8 copies/mL. In this study, the inhibitors tested were Fluvastatin, VER155008 (HSP70 inhibitor), BTB06584 (ATP synthase inhibitor), Enterostatin (ATP synthase inhibitor), Metformin (GRP78 inhibitor), and HA15 (GRP78 inhibitor); interestingly, only HA15 treatment showed a significant reduction in VEEV TC-83 viral titers. HA15 is a potential cancer therapeutic that directly inhibits GRP78 by reducing the ATPase activity of the chaperone. GRP78 is a member of the HSP70 family of chaperone proteins that has multiple roles within the cell, including protein folding, regulation of the unfolded protein response (UPR), and apoptosis. During ER homeostasis, GRP78 is bound to the transmembrane UPR sensor proteins: activating transcription factor 6 (ATF6), protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), and inositol-requiring enzyme 1 (IRE1). During periods of ER stress, GRP78 dissociates from these proteins and begins its chaperone activity in the ER, and the UPR sensor proteins are then activated. Activation of the UPR leads to a decrease in protein synthesis while increasing the expression of ER chaperones, including GRP78. After prolonged periods of ER stress, apoptosis is triggered via the PERK pathway. GRP78 has been shown to be important for other viruses, including Dengue virus, Ebola virus, Hepatitis B virus, HIV, human cytomegalovirus (CMV), and Zika virus. One limitation of the current study is that the in vivo importance of GRP78 to VEEV pathogenesis was not tested. However, there is precedent for successfully targeting GRP78 in other viral infections: phosphorodiamidate morpholino oligomers directed against GRP78 were able to completely protect mice from lethal Ebola virus infection. As HA15 likewise targets GRP78, similar in vivo evaluation may be warranted.
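As a side note, the "log reductions" quoted in these results are simply differences of log10-transformed titers; the values below are taken from the figures reported above:

```python
# Sketch of the titer arithmetic: log10 reduction between control and
# treated infectious titers (PFU/mL).
from math import log10

def log_reduction(control_titer, treated_titer):
    return log10(control_titer) - log10(treated_titer)

# Values from the text: DMSO-treated ~1e7 PFU/mL vs HA15-treated cells
control = 1.0e7
print(round(log_reduction(control, 1.00e4), 2))  # 50 uM HA15 -> 3.0 logs
print(round(log_reduction(control, 1.43e6), 2))  # 25 uM HA15 -> 0.84 logs
```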
Cells were plated at 1.5 × 10^4 cells per well in a 96-well plate or 3 × 10^5 per well in a 6-well plate unless otherwise stated. Vero cells and HEK 293T cells were grown in Dulbecco's modified minimum essential medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS), 1% penicillin and streptomycin antibiotics, and 1% L-glutamine. Primary human astrocytes were grown in astrocyte growth medium BulletKit. All cells were grown at 37 °C in a humidified environment at 5% CO2. The original plasmid containing the infectious cDNA of VEEV TC-83 was obtained from Ilya Frolov at the University of Alabama at Birmingham and used as previously described. Cells were lysed in Blue Lysis Buffer composed of 25 mL 2× Novex Tris-Glycine Sample Loading Buffer SDS, 20 mL T-PER Tissue Protein Extraction Reagent, 200 µL 0.5 M EDTA pH 8.0, 3 complete Protease Cocktail tablets, 80 µL 0.1 M Na3VO4, 400 µL 0.1 M NaF, and 1.3 mL 1 M dithiothreitol. 25 µL of cell lysate was separated by gel electrophoresis on a NuPAGE 4-12% Bis-Tris gel and transferred to a PVDF membrane. The membrane was blocked in 5% milk in PBS-0.1% Tween (PBST) solution for 30 min at room temperature. Primary antibody was incubated at room temperature for 1 h or overnight at 4 °C in 5% milk PBST. Antibodies were used as follows: mouse anti-V5 antibody at 1:1000, rabbit anti-GRP78 at 1:1000, rabbit anti-GRP94 at 1:2000, actin-HRP at 1:30,000, rabbit anti-Calnexin at 1:1000, and mouse anti-V5-HRP at 1:30,000. The membrane was washed three times for five minutes with 5% milk PBST. Secondary antibody was prepared in 5% milk PBST and incubated at room temperature for 1 h. Goat anti-mouse was used at 1:1500 and goat anti-rabbit at 1:3000. Membranes were washed twice for five minutes with PBST and twice for five minutes with PBS.
A SuperSignal West Femto Maximum Sensitivity Substrate kit was used to image blots on a ChemiDoc Imaging System. Cells were seeded at 5 × 10^6 in a T-75 flask and incubated at 37 °C overnight. Cells were infected at a MOI of 5 with either VEEV TC-83 or VEEV TC-83 V5-E2. After the specified time, supernatant was removed, 10 mL of PBS was added, and cells were scraped off the growth surface. Cells were pelleted at 500 g for 10 min and washed twice with cold PBS by resuspending in 2 mL cold PBS and pelleting at 500 g for 4 min. A Qiagen Qproteome Cell Compartment kit was used to extract the membrane fraction of the cells. Anti-V5 antibody was added to the membrane fraction and incubated on a rotator overnight at 4 °C. Dynabeads Protein G magnetic beads were used to recover the immune complex: 50 µL of Dynabeads were added to the overnight IP sample and incubated at room temperature for 45 min. Beads were washed twice with Tris sodium EDTA (TNE) buffer with 300 mM NaCl and 0.1% NP-40, once with TNE buffer with 150 mM NaCl and 0.1% NP-40, once with TNE buffer with 50 mM NaCl and 0.1% NP-40, and finally twice with PBS. Fifty µL of Blue Lysis Buffer was added to the beads and samples were boiled for 10 min. Immunoprecipitated proteins on Dynabeads were mixed with 20 µL of 8 M urea and incubated at 50 °C for 5 min. The mixture was spun at 16,000 g for 2 min and the supernatant was transferred to a clean 0.6 mL tube. The proteins in the supernatant were reduced with 10 mM dithiothreitol, alkylated with 50 mM iodoacetamide, and digested with trypsin at 37 °C for 4 h. The sample was desalted by ZipTip, dried in a SpeedVac, then reconstituted with 10 µL of 0.1% formic acid for mass spectrometry (MS) analysis. Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) experiments were performed on an Orbitrap Fusion equipped with a nanospray EASY-nLC 1200 HPLC system. Peptides were separated using a reversed-phase PepMap RSLC 75 μm i.d.
× 15 cm, 2 μm particle size C18 LC column from ThermoFisher Scientific. The mobile phase consisted of 0.1% aqueous formic acid (mobile phase A) and 0.1% formic acid in 80% acetonitrile (mobile phase B). After sample injection, the peptides were eluted using a linear gradient from 5% to 50% B over 60 min, ramping to 100% B for an additional 2 min. The flow rate was set at 300 nL/min. The Orbitrap Fusion was operated in a data-dependent mode in which one full MS scan from 300 m/z to 1500 m/z was followed by MS/MS scans in which the most abundant molecular ions were dynamically selected by Top Speed and fragmented by collision-induced dissociation (CID) using a normalized collision energy of 35%. "EASY-Internal Calibration", "Peptide Monoisotopic Precursor Selection" and "Dynamic Exclusion" (10 s duration) were enabled, as was the charge state dependency, so that only peptide precursors with charge states from +2 to +4 were selected and fragmented by CID. Tandem mass spectra were searched against the NCBI human database, including the VEEV protein sequences, using Proteome Discoverer v2.3 from ThermoFisher Scientific. The SEQUEST node parameters were set to use full tryptic cleavage constraints with dynamic methionine oxidation. Mass tolerance was 2 ppm for precursor ions and 0.5 Da for fragment ions. A 1% false discovery rate (FDR) was used as the cut-off for reporting peptide spectrum matches (PSMs) from the database search. Cells were plated on a 96-well plate and incubated overnight. Medium was removed and new medium containing inhibitor dilutions was added to the plate. Plates were incubated at 37 °C for 24 h. ATP production was used as a measure of cell viability and detected using Promega CellTiter-Glo. Cells were treated with inhibitor (or solvent control) for one hour at the specified concentrations in the appropriate medium prior to infection (pretreatment).
Pretreatment was removed and cells were subsequently infected as described. Fresh medium containing inhibitor (or solvent control) was reapplied. HA15, VER-155008, BTB06584, and Enterostatin were dissolved in DMSO to a final concentration of 50 mM. Metformin and Fluvastatin were dissolved in water to a final concentration of 100 mM. After infection, cells were fixed in 4% paraformaldehyde for 10 min at room temperature, followed by permeabilization with 0.1% Triton X-100 in PBS for 10 min at room temperature. The cells were washed three times with PBS and blocked with 1% bovine serum albumin, 0.3 M glycine, and 0.01% Triton X-100 in PBS. Cells were washed three times with PBS and incubated with the primary antibodies in 1% BSA and 0.01% Triton X-100 in PBS overnight at 4 °C. Mouse anti-V5 antibody was used at 1:700 and goat anti-GRP78 at 1:1000. Cells were washed three times with 0.01% Triton X-100 in PBS. Secondary antibodies were prepared in 0.01% Triton X-100 in PBS and incubated at room temperature for one hour. Donkey anti-rabbit Alexa Fluor 488 and donkey anti-mouse Alexa Fluor 568 were used at 1:500. Cells were washed twice with 0.01% Triton X-100 in PBS, followed by one five-minute wash with 0.01% Triton X-100 and DAPI at 1:1000 in PBS. Fluorescence imaging data were obtained using a Nikon Eclipse Ti2-E microscope. Image acquisition and analysis were performed using Nikon NIS-Elements Imaging Software version 5.20.01. Human primary astrocytes were plated at 2.5 × 10^5 cells per well in a 24-well plate and transfected with SMARTpool siRNA targeting GRP78 at 25 nM or negative-control siRNA using DharmaFect 1, following manufacturer recommendations. After 24 h, transfection medium was replaced with complete medium and the cells were cultured for an additional 48 h before infection. After infection, supernatants were analyzed via plaque assay and cell lysates were analyzed by western blot analysis as described above.
Total RNA was extracted from Vero cells after mock infection or infection with VEEV TC-83 using the RNeasy mini kit following the manufacturer's instructions. Viral RNA from the supernatants of the infected cells was extracted using the MagMax-96 Viral RNA isolation kit following the manufacturer's instructions. Reverse-transcription quantitative PCR (RT-qPCR) was performed using the StepOnePlus Real-Time PCR System. Viral RNA was detected using Invitrogen's RNA UltraSense One-Step Quantitative RT-PCR System with Integrated DNA Technologies primer pairs and a TaqMan probe against nucleotides (nt) 7931 to 8005 of the viral sequence. A standard curve was generated using serial dilutions of VEEV TC-83 RNA at known concentrations. Absolute quantification was performed using StepOne software v2.3 based on the threshold cycle relative to the standard curve. Statistical analysis was performed using an unpaired, two-tailed Student's t-test in GraphPad Prism version 8.3.0 for Mac OS X (GraphPad Software, San Diego, CA, USA; www.graphpad.com, accessed on 23 January 2021)."}
{"text": "Bone remodeling is a continuous process of bone synthesis and destruction that is regulated by osteoblasts and osteoclasts. Here, we investigated the anti-osteoporotic effects of morroniside in mouse preosteoblast MC3T3-E1 cells and mouse primary cultured osteoblasts and osteoclasts in vitro and in ovariectomy (OVX)-induced mouse osteoporosis in vivo. Morroniside treatment enhanced alkaline phosphatase activity and increased the number of positively stained cells via upregulation of osteoblastogenesis-associated genes in MC3T3-E1 cell lines and primary cultured osteoblasts. However, morroniside inhibited tartrate-resistant acid phosphatase activity and reduced TRAP-positive multinucleated cells via downregulation of osteoclast-mediated genes in primary cultured monocytes. In the osteoporotic animal model, ovariectomized (OVX) mice were administered morroniside (2 or 10 mg/kg/day) for 12 weeks.
Morroniside prevented OVX-induced bone mineral density (BMD) loss and reduced bone structural compartment loss in the micro-CT images. Taken together, morroniside promoted increased osteoblast differentiation and decreased osteoclast differentiation in cells, and consequently inhibited OVX-induced osteoporotic pathogenesis in mice. This study suggests that morroniside may be a potent therapeutic single compound for the prevention of osteoporosis. Osteoporosis is a common skeletal disorder characterized by bone mineral density (BMD) loss caused by the dysregulation of bone homeostasis, including bone resorption by osteoclasts and bone formation by osteoblasts. However, current anti-osteoporotic medicines have limitations in terms of dosage and frequency of use due to their adverse effects. Medicinal plants have been broadly used as alternative therapies in East Asia for various diseases because of their few side effects. Cornus officinalis (CO) has been widely utilized as a traditional medicine, with positive effects on type 2 diabetes, liver disease, and the menopausal syndrome, and morroniside is one of its major components. However, the specific anti-osteoporotic effects of morroniside in murine models of osteoclasts and osteoporosis have not been reported. In this study, we examined the anti-osteoporotic effect of morroniside on mouse preosteoblast MC3T3-E1 cells and mouse primary cultured osteoblasts and osteoclasts in vitro and on ovariectomy (OVX)-induced mouse osteoporosis in vivo. Previous studies have demonstrated that morroniside enhances osteoblast differentiation in mouse preosteoblast MC3T3-E1 cells and bone marrow mesenchymal stem cells.
Preosteoblast differentiation was assessed by the expression of alkaline phosphatase (Alpl), Runt-related transcription factor 2 (Runx2), and osterix (Sp7), as previously described. Runx2 is a major transcription factor involved in osteoblast differentiation, and Sp7 is an osteoblast differentiation-specific gene downstream of Runx2. Morroniside treatment increased the mRNA expression levels of Alpl, Runx2, and Sp7 compared to the non-treated group. To evaluate the effect of morroniside on osteoclast differentiation, we isolated mouse primary monocytes from mouse femoral bone. Osteoclast differentiation was induced by adding osteoclast media containing M-CSF and RANKL, with co-treatment with morroniside at different concentrations for 5 days. Morroniside did not affect the viability of primary monocytes; however, it reduced osteoclast differentiation. Bone resorption (osteoclast differentiation) is regulated by several osteoclast-inducible enzymes, such as cathepsin K (Ctsk), matrix metalloproteinase 9 (Mmp9), and tartrate-resistant acid phosphatase 5 (Acp5), as well as by nuclear factor of activated T cells 1 (Nfatc1), a master regulator of osteoclastic function. We investigated the expression of these osteoclastogenic genes (Nfatc1, Ctsk, Mmp9, and Acp5), and mRNA expression levels were assessed by RT-PCR. As a result, morroniside treatment significantly decreased the mRNA expression levels of the osteoclastogenic genes Nfatc1, Ctsk, Mmp9, and Acp5 compared to the non-treated group. Based on the effects of morroniside on osteoblasts and osteoclasts in vitro, we examined the anti-osteoporotic effects of morroniside in ovariectomized (OVX) mice. OVX mice are a well-known murine model for evaluating skeletal effects because of their hormone deficiency and are typically used as a model of postmenopausal osteoporosis. For primary cultures, bone marrow cells were passed through a 40 μm cell strainer.
Mouse primary monocytes were isolated from bone marrow cells in the femur of a nine-week-old mouse as previously described, and all cells were maintained at 37 °C in a 5% CO2 incubator. The mouse pre-osteoblastic cell line MC3T3-E1 was obtained from the RIKEN Cell Bank, and the cells were maintained in growth medium supplemented with 10% fetal bovine serum (Gibco) and 1% antibiotic-antimycotic reagent (Gibco). Primary osteoblasts were isolated from mouse calvaria as previously described. For osteoblast differentiation, MC3T3-E1 cells and primary osteoblasts were incubated with growth medium containing 10 mM β-glycerophosphate and 50 μg/mL ascorbic acid and co-treated with different concentrations of morroniside without changing the medium for 3 days. For osteoclast differentiation, primary monocytes were incubated with α-MEM containing 50 ng/mL M-CSF and 50 ng/mL RANKL (PeproTech) for 5 days. During the osteoclast differentiation period, the induction medium was changed once, on day 3 of the experiment. Cells were seeded in 96-well plates for 24 h and treated with morroniside for 3 days (MC3T3-E1 cells and primary osteoblasts) or 5 days (primary osteoclasts). Cell viability was assessed using the D-Plus CCK Cell Viability Assay Kit: cells were incubated with 10 μL of WST solution for 2 h, and viability was measured at an absorbance of 450 nm using a microplate reader. Cells were harvested with cell lysis buffer, and ALP activity was assessed using 1-Step p-nitrophenylphosphate in accordance with the manufacturer's instructions. For ALP staining, cells were fixed with 4% paraformaldehyde for 15 min and incubated with BCIP/NBT (Sigma) for 30 min at room temperature. TRAP activity/staining in primary osteoclasts was assessed using an Acid-Phosphatase Kit (Sigma) according to the manufacturer's instructions.
Representative images of ALP/TRAP-positive cells were visualized using a light microscope. Total RNA was harvested from cultured cells using TRIzol reagent according to the manufacturer\u2019s instructions. Complementary DNA (cDNA) was synthesized using the RevertAid\u2122 H Minus First Strand cDNA Synthesis Kit. qRT-PCR was performed using the SYBR Green I qPCR Kit with gene-specific primers. The specific primer sequences were as follows: forward 5\u2032-TCC CAC GTT TTC ACA TTC GG-3\u2032 and reverse 5\u2032-CCC GTT ACC ATA TAG GAT AGC C-3\u2032 for mouse Alpl, forward 5\u2032-TAA AGT GAC AGT GGA CGG TCC C-3\u2032 and reverse 5\u2032-AAT GCG CCC TAA ATC ACT GAG G-3\u2032 for mouse Runx2, forward 5\u2032-CAG GAA GAA GCT CAC TAT GG-3\u2032 and reverse 5\u2032-GTC CAT TGG TGC TTG AGA AG-3\u2032 for mouse Sp7, forward 5\u2032-GGA GAG TCC GAG AAT CGA GAT-3\u2032 and reverse 5\u2032-TTG CAG CTA GGA AGT ACG TCT-3\u2032 for mouse Nfatc1, forward 5\u2032-AAT ACC TCC CTC TCG ATC CTA CA-3\u2032 and reverse 5\u2032-TGG TTC TTG ACT GGA GTA ACG TA-3\u2032 for mouse Ctsk, forward 5\u2032-CTT CGA CAC TGA CAA GAA GTG G-3\u2032 and reverse 5\u2032-GGC ACG CTG GAA TGA TCT AAG-3\u2032 for mouse Mmp9, forward 5\u2032-TGG TAT GTG CTG GCT GGA AAC-3\u2032 and reverse 5\u2032-AGT TGC CAC ACA GCA TCA CTG-3\u2032 for mouse Acp5, forward 5\u2032-AGG TCG GTG TGA ACG GAT TTG-3\u2032 and reverse 5\u2032-TGT AGA CCA TGT AGT TGA GGT CA-3\u2032 for mouse Gapdh, and forward 5\u2032-GAG GAG TCC TGT TGA TGT TGC CAG-3\u2032 and reverse 5\u2032-GGC TGG CCT ATA GGC TCA TAG TGC-3\u2032 for mouse Hprt. Gene expression levels were normalized to mouse Gapdh (osteoblast) or Hprt (osteoclast) expression, and the results were calculated by the 2-\u0394\u0394Ct method (\u0394\u0394Ct = \u0394CtTreatment \u2212 \u0394CtInduction). Eight-week-old sham-operated or OVX ddY mice were obtained from Shizuoka Laboratory Center, Inc.
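The 2-\u0394\u0394Ct normalization described in the qRT-PCR methods can be sketched as follows (the Ct values are illustrative, not data from the study; the reference gene is Gapdh for osteoblasts or Hprt for osteoclasts, per the methods):

```python
# Hedged sketch of 2^-ddCt relative quantification; Ct values are illustrative.

def fold_change(ct_target_trt: float, ct_ref_trt: float,
                ct_target_ind: float, ct_ref_ind: float) -> float:
    """2^-ddCt, with ddCt = dCt(treatment) - dCt(induction control)."""
    d_ct_trt = ct_target_trt - ct_ref_trt  # dCt under treatment
    d_ct_ind = ct_target_ind - ct_ref_ind  # dCt in induction-only control
    return 2.0 ** -(d_ct_trt - d_ct_ind)

# Target amplifying one cycle earlier under treatment => ~2-fold upregulation
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```

A fold change above 1 indicates upregulation relative to the induction-only control, below 1 downregulation, assuming near-100% amplification efficiency.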
Mice were housed under controlled pathogen-free conditions at the Laboratory Animal Research Center of Ajou University Medical Center and provided with sterile food pellets and sterile water ad libitum. For the experiment, mice were administered different concentrations of morroniside (2 or 10 mg/kg/day) for 12 weeks. All animal experiments, including primary cell culture, were approved by the Institutional Animal Care and Use Committee (IACUC) of Ajou University School of Medicine (2016-0062). Mice were anesthetized with zolazepam/tiletamine by intraperitoneal injection, and the BMD was measured using a PIXI-mus bone densitometer. At the end of the experiment, mice were sacrificed, and the right femur was fixed in 4% paraformaldehyde for 24 h and then placed on the sample holder of the scanner. The scans were performed along the longitudinal axis of the specimen using a Skyscan 1173 under identical conditions. Three-dimensional axial images were reconstructed, and representative two-dimensional images were captured using the NRecon software. Data in the bar graphs are expressed as the mean \u00b1 standard error of the mean (S.E.M.) calculated using GraphPad Prism 9.0 software. Comparisons of multiple groups were analyzed by one-way analysis of variance (ANOVA) with Tukey\u2019s honest significant difference (HSD) post-hoc test, and p < 0.05 was considered statistically significant. In the present study, we examined the anti-osteoporotic effect of morroniside in osteoblasts and osteoclasts in vitro and in an OVX-induced osteoporosis mouse model in vivo. Morroniside promoted osteoblast differentiation by upregulating ALP activity and osteoblastogenesis-associated genes. By contrast, morroniside prevented osteoclastogenic differentiation by inhibiting TRAP activity and the expression of osteoclastogenic genes. In a murine osteoporotic model, morroniside administration prevented OVX-induced loss of BMD and of the bone mineral compartment in the femoral bone.
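The multi-group comparison described in the statistics paragraph rests on the one-way ANOVA F statistic; a pure-Python sketch with illustrative values (Tukey's HSD post-hoc step is omitted for brevity, and the numbers are not data from the study):

```python
# Pure-Python sketch of the one-way ANOVA F statistic behind a multi-group
# comparison; group values are illustrative only.

def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three illustrative groups (e.g., sham, OVX, OVX + treatment)
F = one_way_anova_F([[10, 12, 11], [14, 15, 16], [20, 19, 21]])
print(F > 10.0)  # a large F suggests at least one group mean differs
```

In practice the F statistic is compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p value, and Tukey's HSD then locates which pairs differ.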
Taken together, these results indicate that morroniside may be a potential therapeutic single compound for the prevention and treatment of osteoporosis by improving bone homeostasis."} +{"text": "Post-stroke constipation is a major complication of stroke and increases the incidence of poor neurological outcomes and infectious complications and, therefore, warrants active and prompt treatment. In East Asian countries, several types of herbal medicines have been used for the treatment of post-stroke constipation because they are considered safer than existing pharmacotherapies. However, no systematic review has investigated the efficacy and safety of traditional East Asian herbal medicine in the treatment of post-stroke constipation. With this systematic review and meta-analysis, we aimed to evaluate the efficacy and safety of traditional East Asian herbal medicines for the treatment of post-stroke constipation. Eight electronic databases will be searched for relevant studies published from inception to April 2021. Only randomized controlled trials (RCTs) that assess the efficacy and safety of traditional East Asian herbal medicines for the treatment of post-stroke constipation will be included in this study. The methodological qualities, including the risk of bias, will be evaluated using the Cochrane risk of bias assessment tool. After screening the studies, a meta-analysis of the RCTs will be performed, if possible. This study is expected to generate high-quality evidence of the efficacy and safety of herbal medicines to treat post-stroke constipation. Our systematic review will provide evidence to determine whether herbal medicines can be effective interventions for patients with post-stroke constipation. Ethical approval is not required, as this study was based on a review of published research.
This review will be published in a peer-reviewed journal and disseminated electronically and in print. Research registry: reviewregistry1117.
Post-stroke constipation is a major complication after stroke and has been reported to occur in 22.9\u201379% of patients with stroke. Therefore, active and prompt treatment of post-stroke constipation is essential. Currently, pharmacotherapies, such as laxatives (osmotic and stimulant), anticholinesterases, enterokinetic medications, secretagogues, and serotonin 5-HT4 receptor agonists, have been mainly used to treat post-stroke constipation. However, these medications are known to cause adverse effects, including electrolyte imbalance, nausea, headache, diarrhea, abdominal pain, anaphylaxis, and carcinogenesis. In addition, long-term use of conventional pharmacotherapies can cause dependence and permanent changes in the bowel habits of patients with stroke. Therefore, there is a shortage of effective strategies for the treatment of constipation in patients with stroke, who are mostly elderly. These limitations of existing therapies warrant the need to develop safer and more effective treatments for post-stroke constipation. Traditional medicine, which mainly uses herbs, acupuncture, and moxibustion, is still widely used in Northeast Asian regions, such as Korea, China, Japan, and Taiwan, and several related studies on the effects of traditional medicine to treat functional constipation or post-stroke constipation have steadily emerged. Dahuang (Rhei Radix et Rhizoma) is the most commonly used herb for treating constipation. A prospective, double-blind, double-dummy, randomized controlled trial suggested that MaZiRenWan, which contains Dahuang, could be effective to treat functional constipation. An open-label study reported that another herbal prescription, Daikenchuto, which does not contain Dahuang, significantly improved the constipation score (constipation scoring system [CSS]) in patients with post-stroke constipation. In order for decision makers to easily utilize this existing evidence in the clinical setting, a systematic review is needed to identify, evaluate, and summarize related studies. However, to date, no systematic review has been conducted to evaluate the efficacy and safety of traditional East Asian herbal medicine to treat post-stroke constipation.
Therefore, the aims of this study are as follows:
(1) To assess whether traditional East Asian herbal medicine therapies for the treatment of post-stroke constipation are more effective and safer than conventional Western medicine therapies or placebo.
(2) To assess whether adjunct traditional East Asian herbal medicine therapies in combination with conventional Western medicine therapies are more effective and safer than conventional Western medicine therapies alone for the treatment of post-stroke constipation.
2.1 The protocol of the present study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA) guidelines and checklist and has been registered with the Research Registry 2021 under number reviewregistry1117.
2.2.1 Only randomized controlled trials (RCTs) investigating the efficacy and safety of traditional East Asian herbal medicines for the treatment of post-stroke constipation will be included in this study, without any publication or language restrictions. Quasi-randomized controlled trials, non-RCTs, case reports, case series, uncontrolled trials, and laboratory studies will be excluded. Studies that fail to provide detailed results will be excluded. Cross-over trials will also be excluded because of the potential for a carry-over effect.
2.2.2 Eligible participants will be defined as adult patients (over 18 years of age) having constipation after a first-ever or recurrent stroke.
Post-stroke constipation should be diagnosed according to at least one of the current diagnostic criteria or the diagnostic criteria at the time of the study. Patients with a history of constipation before the diagnosis of stroke will be excluded. There will be no restrictions based on sex, ethnicity, symptom severity, disease duration, or clinical setting. However, patients with subdural hemorrhage or subarachnoid hemorrhage will be excluded.
2.2.3 We will include studies using traditional East Asian herbal medicines alone or adjunct traditional East Asian herbal medicines in combination with conventional Western medicine therapies as experimental interventions. In the present study, only oral administration forms of traditional East Asian herbal medicines will be included, with no limitations on the dosage, frequency, duration of treatment, or formulation. Therefore, intravenous or acupuncture point injections of herbal medicines will be excluded. The control interventions will include placebo, placebo + conventional Western medicine therapies, or conventional Western medicine therapies alone. Studies comparing the effect of traditional East Asian herbal medicines with other traditional East Asian medicine therapies, such as different types of herbal medicines, acupuncture treatment, or moxibustion, will be excluded.
2.2.4 For the primary outcome, we will assess the frequency of spontaneous defecation, defined as the mean number of spontaneous defecations per week. For the secondary outcomes, we will include the constipation scoring system (CSS) and gas volume score, the frequency of use of rescue medications, mean transit time, the total effective rate for post-stroke constipation, and other parameters evaluating neurologic deficits, such as the National Institutes of Health Stroke Scale score, modified Rankin Scale (mRS) score, modified Barthel Index (mBI), and quality of life (QoL). We will also investigate the number and severity of adverse events.
2.3.1 Eight electronic databases will be searched for relevant studies, including the Cochrane Central Register of Controlled Trials (CENTRAL), Excerpta Medica dataBASE (EMBASE), Scopus, Citation Information by Nii (CiNii), China National Knowledge Infrastructure Database (CNKI), Oriental Medicine Advanced Searching Integrated System (OASIS), and National Digital Science Library (NDSL). The specific search strategies are listed in the Table. We will make relative modifications in accordance with the requirements of each database, and an equivalent translation of the search terms will be adopted to ensure that similar search terms are used in all databases. If additional information is needed from the identified studies, we will contact the corresponding authors.
2.3.2 A manual search will also be performed to search the reference lists of the relevant articles. Clinical trial registries, conference presentations, and expert contacts will also be searched.
2.4.1 Two reviewers (SK and CJ) trained in the process and purpose of study selection will independently review the titles, abstracts, and manuscripts of the studies and screen them for eligibility for inclusion in the analysis. After removing duplicates, the full texts will be reviewed.
All studies identified by both electronic and manual searches will be uploaded to EndNote X9, and the reasons for excluding studies will be recorded and shown in a PRISMA flowchart, as shown in the Figure.
2.4.2 One review writer (CJ) will independently extract the data and fill out the standard data extraction form, which includes study information: the first author, publication year, language, sample size, characteristics of participants, details of randomization, blinding, interventions, comparison, treatment period, outcome measures, primary outcome, secondary outcome, and statistical method used. Another independent review writer (SW) will confirm the contents of the extraction. Disagreements, if any, will be resolved by consulting another review writer (BHJ).
2.4.3 Two reviewers (SK and CJ) will assess the risk of bias (RoB) based on the Cochrane Collaboration tool, which includes random sequence generation (selection bias), allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), and other biases. The assessment results will be presented as low, unclear, or high RoB.
2.4.4 For continuous data, the pooled results will be presented as the mean difference (MD) or standardized MD with 95% confidence intervals (CIs). For dichotomous data, the pooled results will be presented as a risk ratio (RR) with 95% CIs.
2.4.5 If there are missing, insufficient, or unclear data, we will contact the corresponding author and gather the relevant information. If the information cannot be obtained, only the remaining available information will be analyzed, and this limitation will be discussed.
2.4.6 We will perform the I2 test to evaluate statistical heterogeneity. Statistical heterogeneity will be considered substantial if I2 is greater than 50%.
2.4.7 The Review Manager program will be used for statistical analysis. If I2 is \u2264 50%, the fixed-effect model will be used to evaluate the outcome data; otherwise, a random-effects model will be used. The studies will be synthesized according to the type of intervention and/or control as follows:
1. Traditional East Asian herbal medicines vs. conventional Western medicine therapies
2. Traditional East Asian herbal medicines vs. placebo
3. Traditional East Asian herbal medicines + conventional Western medicine therapies vs. placebo + conventional Western medicine therapies
4. Traditional East Asian herbal medicines + conventional Western medicine therapies vs. conventional Western medicine therapies alone.
If more than 10 studies are included in the meta-analysis, we will estimate publication bias using Egger's test and depict the results visually with a funnel plot. We will use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) pro software from Cochrane Systematic Reviews to create a Summary of Findings table.
2.4.8 If sufficient studies are available to investigate the causes of heterogeneity and its criteria, the following subgroups will be assessed: the type of stroke, stroke duration, the name of the herbal medicines used, and the formula of the herbal medicine (such as granules or decoctions).
2.4.9 We will perform a sensitivity analysis to verify the robustness of the results. This will be done by assessing the impact of sample size, high RoB, missing data, and the selected models.
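The effect-measure and heterogeneity workflow described in the statistical analysis sections can be sketched as follows (the event counts are illustrative, not data from the review; function names are ours):

```python
import math

# Hedged sketch: per-study log risk ratios with inverse-variance weights,
# Cochran's Q, and the I^2 statistic that drives the fixed- vs random-effects
# choice (fixed-effect model if I^2 <= 50%, per the protocol).

def log_rr(events_t: int, n_t: int, events_c: int, n_c: int):
    """Log risk ratio and its approximate variance (zero cells not handled)."""
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
    return math.log(rr), var

def pool_fixed(studies):
    """Inverse-variance fixed-effect pooled log-RR, Cochran's Q and I^2 (%)."""
    weights = [1 / var for _, var in studies]
    pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

studies = [log_rr(30, 50, 20, 50), log_rr(25, 40, 18, 40)]
pooled, q, i2 = pool_fixed(studies)
model = "fixed-effect" if i2 <= 50 else "random-effects"
print(round(math.exp(pooled), 2), model)
```

Exponentiating the pooled log-RR recovers the pooled risk ratio; its 95% CI would be `exp(pooled ± 1.96 / sqrt(sum(weights)))`, and a random-effects (e.g., DerSimonian-Laird) weighting replaces the fixed weights when I^2 exceeds the threshold.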
Following the analyses, if the quality of the studies is judged to be low, these studies will be removed to ensure the robustness of the results.
2.4.10 Formal ethical approval was not required for this protocol. We will collect and analyze data based on published studies, and because no patients are directly or specifically assessed in this study, individual privacy will not be a concern. The results of this review will be disseminated to peer-reviewed journals or presented at a relevant conference.
3 Post-stroke constipation can negatively impact the prognosis of patients with stroke. It not only leads to poor QoL in patients with stroke but also increases the prevalence of complications, such as pneumonia and urinary tract infection. However, currently used pharmacological treatments have a one-off effect and adverse effects, such as electrolyte imbalance and anaphylaxis, which could be fatal to patients with stroke; thus, the necessity for the development of new treatments continues to increase. Clinical trials have reported that MaZiRenWan (which contains Dahuang) and Daikenchuto (without Dahuang) could be effective in treating functional constipation. Both herbal prescriptions are herbal combinations with a long history of use and are listed in the \u201cSynopsis of Prescriptions of the Golden Chamber,\u201d published during the Han Dynasty in ancient China; these formulations have been used to improve constipation. The pharmacological mechanisms underlying the clinical effects of these two prescriptions have also been reported. In a previous study, a focused network pharmacology approach was used to analyze the mechanism of action of MaZiRenWan on constipation; the study found that representative compounds of MaZiRenWan, such as amygdalin, albiflorin, emodin, honokiol, and naringin, could induce spontaneous contractions of colonic smooth muscle. Furthermore, several previous studies have suggested that Zanthoxylum fruit, one of the components of Daikenchuto, could improve delayed propulsion in the small intestine and distal colon, while maltose, another component, induces endogenous cholecystokinin secretion, both of which reportedly help to improve constipation. Thus, traditional East Asian herbal medicines are likely to become newer alternatives to existing Western medicines to improve post-stroke constipation. The current review will be conducted to assess the efficacy and safety of using herbal medicine to treat post-stroke constipation and to establish novel management strategies that are expected to reduce the burden on patients and their caregivers.
Conceptualization: Seungwon Kwon. Data curation: Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seung-Bo Yang, Seungwon Kwon. Formal analysis: Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seungwon Kwon. Funding acquisition: Seungwon Kwon. Project administration: Seungwon Kwon. Writing \u2013 original draft: Chul Jin, Seungwon Kwon. Writing \u2013 review & editing: Chul Jin, Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seung-Bo Yang, Seungwon Kwon.
The grant number appeared incorrectly as HB20C0147 and has been corrected to HF20C0147."} +{"text": "Montmorillonite (Mt) is a kind of 2:1 type layered phyllosilicate mineral with a nanoscale structure, large surface area, high cation exchange capacity and excellent adsorption capacity. By virtue of such unique properties, many scholars have paid much attention to the further modification of Mt-based two-dimensional (2D) functional composite materials, such as Mt-metal hydroxides and Mt-carbon composites.
In this review, we focus on two typical Mt-2D nanocomposites, Mt@layered double hydroxide (Mt@LDH) and Mt@graphene (Mt@GR), and their fabrication strategies, as well as their important applications in pollutant adsorption, medical antibacterial materials, thermally conductive films and flame retardancy. The prospective trends in the composite preparation of Mt-2D nanocomposites and promising application fields are also addressed. Montmorillonite (Mt) is one of the most common 2:1 type clay minerals; it is naturally abundant and non-toxic. Mt also exhibits promising advantages including high thermal stability, high modulus, high strength and a low expansion coefficient. However, due to the hydrophilic and oleophobic properties of natural Mt, as well as its poor compatibility with polymers, it has not been widely applied. Therefore, many scholars have modified Mt and enhanced its adsorption and ion exchange properties via compounding with other two-dimensional materials. Even though some researchers have made progress in the modification of Mt, a systematic review on the composite preparation methods and applications of Mt and two-dimensional materials is still lacking. The chemical formula of Mt can be written as (Al2-xMgx)Si4O10(OH)2\u00b7(M\u00b7nH2O). The isomorphic substitution in the layers leads to a charge deficit (Fe2+ or Mg2+ replaces Al3+), and the charge is compensated by cations adsorbed in the interlayer space of Mt, which gives rise to its important cation exchange properties. The layer height of Mt is nearly 1 nm, and the interlayer distance is less than 1 nm, indicating that Mt has a nanosized porous structure. The polymerization degree of Mt depends strongly on pH, decreasing as pH increases before reaching the isoelectric point. Moreover, the charge on the edges and surfaces is different.
Currently, the modifications of Mt are mainly divided into three categories: organic modification, inorganic modification, and organic-inorganic composite modification. For organic modification, cations in the interlayer space are exchanged for ions bearing a functional group of surfactant molecules, which act as an interlayer opener. In one study, AMPS was compared with N-isopropylacrylamide (NIPAm) to explore which chemical group was the driving force for AMPS adsorption on the surface of Mt, and it was finally revealed that the main interaction was between AMPS and the clay. Inorganic modification is widely used for preparing Mt-based 2D composites. To enhance the adsorption capability of natural Mt for heavy metal ions, inorganic methods were proposed to modify Mt as early as the 1870s, such as acid activation modification, inorganic salt modification and pillared modification. These methods can increase the interlayer distance and strengthen the thermal stability of Mt for use as an adsorption material to remove heavy metal ions. For heavy metal ions such as Cd2+, Cu2+ and Co2+, modified Mt can quickly reach adsorption equilibrium within 10 min. Indeed, the performance of Mt, especially in terms of thermal stability and adsorption capacity for heavy metal ions, has been improved through inorganic modification. Besides, organic-inorganic composite modification has also been used. Surfactant modification can greatly improve the adsorption selectivity of Mt for heavy metal ions, but it blocks the interlayer space of Mt, reducing the porosity and the adsorption capability for pollutants. However, pillared Mt can significantly increase the pore volume and surface area and has a stronger tolerance to pH and coexisting inorganic ions.
Therefore, preparing inorganic-organic composite modified Mt contributes to improving its adsorption capability for pollutants. The concept of two-dimensional materials was proposed in 2004, when scholars at the University of Manchester successfully prepared single-layer graphene by the mechanical stripping method. According to the composition and structure of the materials, the existing two-dimensional nanomaterials can be divided into five categories:
(1) Simple substances: graphene, graphdiyne, black phosphorus (BP), metals, and the newer boron, arsenic, germanium, silicon, bismuth, and so on.
(2) Inorganic compounds: hexagonal boron nitride (h-BN), graphitic carbon nitride (g-C3N4), boron carbon nitrogen, and various graphene derivatives.
(3) Metal compounds: transition metal dichalcogenides (TMDs), layered double hydroxides (LDH), transition metal oxides (TMOs), transition metal carbides/nitrides/carbonitrides (MXenes), metal phosphorus trichalcogenides (APX3), metal halides, transition metal oxyhalides (MOX), and III-VI layered semiconductors (MX).
(4) Salts: inorganic perovskite compounds (AMX3) and clay minerals.
(5) Organic frameworks: layered metal-organic framework compounds (MOFs), layered covalent organic framework compounds (COFs) and polymers.
Due to the confinement of electrons in two dimensions, two-dimensional materials exhibit unique and unprecedented physical, electronic and chemical properties:
(1) Ultra-high mechanical strength. Two-dimensional materials have strong fracture resistance and good toughness, and are ductile but not easy to break.
(2) Good electrical properties. On the one hand, electrons are restricted to the limited domain of two-dimensional materials with no interlayer interaction, which can greatly enhance the electronic properties; on the other hand, the large transverse dimension and ultra-thin thickness give them an extremely high specific surface area, exposing more active sites on the surface to the greatest extent, so they are widely used in catalysis and energy storage.
At present, many researchers have explored efficient preparation methods for two-dimensional nanomaterials. Among them, GR is the most powerful nanomaterial and has been widely used in many fields. Layered double hydroxide (LDH), also known as anionic clay or hydrotalcite-like material, is a kind of mineral material composed of two or more metal elements, similar to the octahedral brucite layered structure. It has the characteristics of adjustable metal composition, interlayer anions, and laminate size and thickness. The main idea of the intercalation assembly method is that LDH and Mt are exfoliated and intercalated to obtain composite materials. When the exfoliated LDH nanosheets are intercalated with Mt, the swelling property and high specific surface area of Mt in water can be used to increase the exposure of the inner space of the LDH layer. In this way, the layered stacking structure of LDH can be changed to increase the specific surface area, and finally a composite material with a sheet-assembled structure can be obtained, as illustrated by the synthesis of NiAl-LDH/Mt composites by Zhou et al. There are two key challenges in the preparation of composite materials by intercalation methods.
One is how to develop an effective method to peel off the laminated composite completely, and the other is how to adjust and control the assembly of nanosheets to obtain a nanocomposite with a specific structure. Mt and LDH layers are closely stacked due to the interaction between interlayer ions and laminates, so stripping Mt and LDH to obtain stable sols is a prerequisite for preparing thin film materials. According to the different stripping agents used, the LDH stripping methods can be divided into three types: stripping in short-chain alcohols, stripping in formamide and stripping in water. Stripping in polar solvents such as formamide is considered to be one of the simplest and most efficient methods, which makes formamide one of the most common stripping agents. In addition, using chloroform as a stripping agent, as reported by Gong and Dai, is also common. However, the use of these two stripping agents has certain limitations or potential hazards. The strongly corrosive environment of the formamide stripping system limits the application of the stripped nanosheets, and the formamide solution is not volatile, which is not conducive to the synthesis of composite materials. Meanwhile, chloroform is sensitive to light: when exposed to light, it can react with oxygen in the air and gradually decompose into highly toxic phosgene, which is not only harmful to the environment but also dangerous during the preparation process. Therefore, some scholars have put forward the stripping of LDH and Mt in water or other cheap, degradable agents, to realize economical and environment-friendly stripping. Due to the strong polarity of water molecules between LDH layers, interlayer anions with hydrophilic groups can improve the swelling performance of LDH in water. To achieve more thorough stripping, it is necessary to balance the size of the intercalated anions, as well as their relationship with the hydrophilicity and hydrophobicity between laminates. Acetate, as a common short-chain carboxylate, can better meet the requirements for swelling and stripping of intercalated LDH in water; for example, Liang et al. obtained such LDH by a hydrothermal method. Ma et al., Kojima et al. and Hu et al. have likewise explored water-based or biodegradable stripping routes, including the use of biodegradable materials such as polylactic acid (PLA) for lamellar peeling. Because Mt and hydrotalcite are similar in structure while the charges between their layers are completely opposite, these characteristics provide natural conditions for combining Mt and hydrotalcite through electrostatic interaction, which has been used to combine Mt lamellae with LDH lamellae, as explored by Liang et al. and Chen et al. In-situ synthesis is another method to prepare composite materials. The basic principle is that different elements or compounds react chemically under certain conditions, and one or several ceramic phase particles are formed in the metal matrix, to improve the performance of a single metal alloy. The reinforcement nucleates and grows spontaneously in the matrix, so there is no pollution on the surface of the reinforcement; the compatibility between the matrix and the reinforcement is good, and the interfacial bonding strength is high. Compared with other composite routes, the complicated pretreatment process of the reinforcement is omitted, and the preparation process is simplified. Zhou et al. pointed out that the specific surface area of the composite obtained by the intercalation method was quite different from that obtained by the in-situ method (168 m2/g), which indicated that the internal structures of the composites obtained by the two methods were different. Composites have also been prepared with polymers such as polystyrene (PS) and high impact polystyrene (HIPS). The tape casting process has the advantages of a convenient process, adjustable size and good film-forming quality.
Most scholars use intercalation methods to combine the two materials. These can be divided into two steps: delamination and composition. Formamide and chloroform are the most common stripping agents used in lamellar stripping, and their stripping steps are quite simple and efficient. However, in some cases, they have certain use restrictions or represent potential hazards. Therefore, some scholars have been working on the use of water as stripping agent, as well as other environmentally friendly degradable materials.Heavy metal pollution can affect human health through the food chain , so how LDH, with anion exchangeability, has attracted much attention due to its unique properties in the removal of pollutants from water . On the 2+ and other heavy metal ions in water were adsorbed and thus removed. The experimental results proved that the adsorption effect was improved with the increase of temperature and the prepared Mg-Zn-Al (LDH)@Mt had better adsorption effects than LDH and Mt alone. Shehap et al. [Bakr et al. preparedp et al. reached p et al. also fouWang and Li found thSeddighi et al. synthesiDue to their remarkable chelate effect on heavy metal ions, hydrotalcite-like materials have potential applications and economic advantages as water purification agents, which is a new research direction in recent years. Therefore, it is meaningful to further optimize the properties of the hydrotalcite and expand its application fields.Most industrial chemical reactions are carried out under the action of catalysts. Because catalysts have different degrees of activity under different conditions, it is very important to choose appropriate and efficient catalysts in industrial production. Although some reactions are catalyzed by acid sites on the catalyst surface, the base sites play a synergistic role to some extent in reactions such as alkylamine decomposition, aldol condensation and the Knoevenagel reaction, etc. . This diPu et al. 
Due to metal corrosion, discarded and unusable metals account for 15% of world output, while the steel equipment scrapped due to corrosion is equivalent to about 30% of annual production. Some researchers found that ion-selective coatings had high protective performance. Interestingly, bipolar anti-corrosion materials have gradually been developed, which provides new ideas for the further development of anti-corrosion materials. Besides, Mt@LDH may have the following promising applications: (i) Sustained drug release. Mt can prevent gastrointestinal discomfort caused by other drugs and has a very high positioning ability. After oral administration, Mt can evenly cover the whole surface of the intestinal lumen and last for 6 h. As Mt is not easily absorbed after oral administration, it does not enter the blood circulation, so it is very safe during the application period and has no toxic reactions. Magnesium aluminum hydrotalcite (MgAl-LDHs) has good biocompatibility. In summary, Mt and LDH, two clay minerals with layered structures, can accommodate a large amount of drug between their layers, making them a natural choice for preparing sustained-release drug carriers. Hydrotalcite also has a certain buffering capacity. (ii) Flame retardant effect. Mt and LDH, each combined with other materials, can improve thermal stability. The advantages and disadvantages of the two materials complement each other, and the composite should show a greater improvement in flame retardancy. 
However, this field is currently less studied in academia and is worthy of further research in the future. Graphene, also known as \u201csingle-layer graphite sheet\u201d, refers to a dense layer of carbon atoms arranged in a honeycomb crystal lattice. As the thinnest substance known, graphene combines a large theoretical specific surface area with ultra-high mechanical strength, outstanding thermal properties and excellent electrical conductivity. Although the synthesis of graphene faces huge challenges, considerable progress has been made in this field over the past ten years or so. Dry-freezing is a simple and environmentally friendly method to produce Mt@GR. In one reported procedure, the GO-Mt/SA formed in a salt solution was repeatedly washed five times and finally subjected to vacuum freezing for 14 h. Adsorption experiments indicated that the GO-Mt/SA showed a better adsorption capacity towards MB (150.66 mg/g). Impregnation methods use a carrier impregnated with a specific solution, relying on capillary pressure to draw the components into the carrier while they adsorb on its surface. The principle and operation of vacuum impregnation are basically the same, but the porous material needs to be evacuated first before the impregnation liquid is added. Compared with the traditional wet impregnation method, vacuum impregnation produces a more uniform and efficient material. With the continuous development of the industrial economy, more and more wastewater containing heavy metals or organic pollutants is being discharged into natural waterbodies. Due to their high toxicity, increasing discharge volume and non-biodegradability, heavy metal pollution and organic pollutants in dye wastewater have become one of the main problems endangering human life. 
Compared with traditional chemical, physicochemical and biological methods, the adsorption method has attracted much attention because of its environmental friendliness, high removal efficiency and low cost. Due to its unique physical and chemical properties, graphene has received widespread attention as a new type of adsorbent for various heavy metal and organic pollutants and has become a highly promising adsorbent. Adsorption materials based on Mt-graphene composites have been extensively studied due to their high adsorption performance, low price and simple preparation process. Composites made by freeze-drying, hydrazine hydrate reduction and wet impregnation have shown good performance. Some scholars have modified natural Mt through organic and inorganic methods to improve the adsorption capacity of composite materials. In the study of Wei et al., the adsorption capacities for the heavy metal cation and PNP were 19.79 mg/g and 15.54 mg/g, respectively. Compared with Mt and GO, the total adsorption capacity of MG for the two pollutants was significantly improved. However, when preparing materials with metal-containing catalysts, a large amount of metal waste inevitably occurs. Therefore, it is necessary to find a sustainable route and achieve environmental friendliness by improving the recyclability of adsorbent materials. In terms of adsorption of the greenhouse gas carbon dioxide, Stanly et al. reported a clay-based material with an adsorption capacity of 0.49 mmol/g for CO2, which was 42% higher than that of comparable materials. This showed that clay-based materials represented by Mt are high-efficiency CO2 adsorbents. 
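As a quick arithmetic check on the CO2 figure above (assuming only the standard molar mass of CO2, 44.01 g/mol), the reported molar capacity converts to mass units as follows:

```python
# Convert a CO2 uptake of 0.49 mmol/g to mg/g using the molar mass of CO2.
CO2_MOLAR_MASS = 44.01  # g/mol, equivalently mg/mmol

def mmol_per_g_to_mg_per_g(capacity_mmol_per_g: float) -> float:
    """Molar adsorption capacity (mmol/g) to mass capacity (mg/g)."""
    return capacity_mmol_per_g * CO2_MOLAR_MASS

co2_mg_per_g = mmol_per_g_to_mg_per_g(0.49)  # ~21.6 mg/g
```

This makes the 0.49 mmol/g value directly comparable with the mg/g capacities quoted for the dye and heavy-metal adsorbents above.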
Overall, the data illustrate that Mt, as a typical natural compounding agent, can be compounded with graphene to reduce the stacking of graphene layers and increase the actual adsorption capacity of the adsorbent, obtaining better adsorption performance at a lower cost. According to reports, infections caused by bacteria are still one of the biggest health problems in the world, afflicting millions of people every year. In one study, the antibacterial effect on Staphylococcus aureus (99.43%) was significantly stronger than that on Escherichia coli (84.30%). Yan et al. reported that against Escherichia coli and Staphylococcus aureus, the antibacterial rates of 10 mg/L GM-CPB were 92.3% and 99.9%, respectively, indicating a great improvement in the antibacterial activity of the composite. Many scholars have also explored ways to improve the durability of the antibacterial effect. Through the above exploration, it is not difficult to find that, compared with traditional antibiotics, the Mt@GR composite has great advantages and potential in the field of antimicrobial medicine. In line with worldwide trends, all walks of life are pursuing lighter, thinner, more reliable and more durable materials, stimulating strong interest in nanotechnology. However, harmful residual products may be produced during combustion, which are not easy to decompose and can cause toxic bioaccumulation. Inspired by the hierarchical structure of nacre, Ming et al. explored flame-retardant Mt-GO composites. 
It can be seen that the flame retardancy of the abovementioned Mt-GO composite materials has been greatly improved, laying the foundation for a new generation of flame-retardant substances and providing a new direction for the development of novel flame retardant materials. Plastic has become one of the most widely used materials in the world, but its inherent structural characteristics give it low thermal conductivity, so studies have been carried out to enhance the thermal conductivity of plastics. When the filler-to-PVA ratio in one reported composite film was 2:1 and the mass fraction of the binary filler was 12%, the thermal conductivity of the composite film reached 66.4 W/(m\u00b7K), at least 132 times higher than that of pure PVA. Improving the structure and composition of the composite film through Mt and rGO could thus significantly improve the thermal conductivity of the polymer, and it also provides an idea for the preparation of other highly thermally conductive polymer composite films. Related nanocomposites also provide the possibility of constructing new two-dimensional nanoelectrochemical hydrogen evolution composites. Mt@GR can also be further compounded with other materials for anti-friction additives, lightweight fire-resistant conductors, or catalysts. In addition to LDH and GR, Mt can also be composited with two-dimensional materials such as MoS2, MoSe2, BiOCl and MnO2 nanosheets. Hydrothermal synthesis is a simple synthesis method. Its advantages are high concentration, good dispersibility and easy control of particle size. In the process of hydrothermal synthesis, the samples in aqueous solution are mixed uniformly at a high reaction rate. In addition, it can form crystalline powder with high purity, so it has been widely used. For example, Mt@MoS2 has been prepared by hydrothermal synthesis. 
Firstly, Mt nanosheets were prepared by ultrasonic exfoliation, similar to the exfoliation of GO: Mt was added into deionized water, stirred and centrifuged to remove coarse particles; the supernatant was collected, further ultrasonically stripped, and centrifuged again to obtain the final supernatant. (NH4)6Mo7O24\u00b74H2O and CH4N2S were then dissolved in water and added to the Mt nanosheet suspension. After mixing, the suspension was placed into a Teflon-lined autoclave for heat treatment. Finally, Mt@MoS2 was obtained via freeze-drying. Yang et al. synthesized Mt@MoSe2 similarly: the molybdenum precursor, Se and NaBH4 were added to deionized water after ultrasonic treatment to produce a uniform solution. Then Mt was added to the above solution and stirred at room temperature. The suspension was transferred to a Teflon-lined stainless-steel autoclave and treated at 200 \u00b0C for 24 h. After natural cooling, it was filtered, washed and dried to obtain Mt@MoSe2. Rhodamine B decolorization was selected to evaluate the adsorption performance and photocatalytic ability of Mt@MoSe2; the total decolorization rate reached 98.2% after 45 min of visible light irradiation. Ultrasound-assisted chemical precipitation methods mainly exploit the propagation characteristics of ultrasonic waves to increase the reaction rate. Ultrasonic waves cause drastic changes in sound pressure, which leads to strong cavitation and emulsification in liquids. In a very short time, a large number of tiny bubbles are generated and collapse violently and continuously. The impact force produced by these collapses brings the reacting particles into full contact, accelerating the reaction. In one such preparation, Bi(NO3)3\u00b75H2O and KCl were dissolved in ethylene glycol, and magnetic stirring was carried out at room temperature until the solution was clear. 
Under ultrasonic irradiation, a certain amount of Mt was added to the clear solution to form a suspension. After ultrasonic treatment, distilled water was poured into the suspension and reacted under magnetic stirring to form a milky white precipitate. The precipitate was filtered by suction, washed with ethanol and distilled water, and dried at 50 \u00b0C to obtain Mt@BiOCl. After illumination for 120 min, the degradation rates of catechol and p-aminobenzoic acid were 98% and 92.5%, respectively, indicating that Mt@BiOCl has good adsorption and photocatalytic properties. With the expansion of industrialization, a large amount of organic dye wastewater is produced and discharged every year, which seriously threatens the ecological environment and human health. Photocatalysis, being safe and environmentally friendly, has been considered a popular method to treat organic dye wastewater. By compounding Mt with other two-dimensional materials that have photocatalytic properties, the number of active sites can be increased and the photocatalytic efficiency greatly improved. MoSe2 is an effective photocatalyst for degrading organic dyes, but its poor dispersion in water and the easy aggregation of its nanosheets limit its further application. When MoSe2 is compounded with Mt, this aggregation is inhibited and its dispersibility improves; the resulting photocatalyst Mt@MoSe2 combines high efficiency with low cost. The degradation of p-aminobenzoic acid follows quasi-first-order reaction kinetics. Based on the excellent photocatalytic performance of these composites, it is necessary and meaningful to protect the environment by using pollution-free and abundant light energy for sewage treatment. 
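The quasi-first-order kinetics mentioned above follow ln(C0/C) = k·t. As a minimal sketch (using the Mt@BiOCl degradation rates reported here, 98% catechol and 92.5% p-aminobenzoic acid after 120 min, and assuming degradation rate means the removed fraction of the initial concentration), the apparent rate constants can be recovered as:

```python
import math

def pseudo_first_order_k(removal_fraction: float, t_min: float) -> float:
    """Apparent quasi-first-order rate constant k (min^-1) from ln(C0/C) = k * t."""
    return math.log(1.0 / (1.0 - removal_fraction)) / t_min

k_catechol = pseudo_first_order_k(0.98, 120)   # ~0.033 min^-1
k_paba = pseudo_first_order_k(0.925, 120)      # ~0.022 min^-1
```

Comparing such fitted k values is the usual way photocatalysts like Mt@BiOCl and Mt@MoSe2 are ranked across studies.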
Hence, the combination of Mt with other two-dimensional materials that have photocatalytic properties has great potential in catalysis and wastewater treatment. The preparation of Mt composites with MoS2, BiOCl and other two-dimensional materials is also introduced later. In this review, we have summarized the preparation and applications of composites of Mt with other two-dimensional materials, focusing on Mt@LDH and Mt@GR. Mt shows strong water absorption, adsorption and good cation exchange performance, but its hydrophilicity and poor compatibility with polymers limit its wider application. At the same time, the active sites of two-dimensional materials are often under-exposed, so their functions are not fully realized. If the two kinds of materials are compounded, the deficiencies of each single material can be overcome. In the preparation of Mt@LDH, there are three general kinds of methods: intercalation, in-situ growth and tape casting. Herein, several delamination methods have been summarized according to the strippers used. Among them, polar solutions such as formamide are considered the simplest and most effective. However, because formamide poses certain environmental issues, PLA and other environmentally friendly stripping agents have been adopted by some scholars. On the other hand, dry-freezing and impregnation methods are also used to prepare Mt@GR. The former is simple and environmentally friendly. In the latter, a carrier is impregnated with a specific solution, and components enter the carrier by capillary pressure and adsorb on its surface. The principle and operation of vacuum impregnation are basically the same, but porous materials need to be evacuated first before the impregnation liquid is added. Compared with the traditional wet impregnation method, materials produced by vacuum impregnation are more uniform and efficient. 
"} +{"text": "Randomized clinical trials (RCT) are the gold standard for informing treatment decisions. Observational studies are often plagued by selection bias, and expert-selected covariates may insufficiently adjust for confounding. We explore how unstructured clinical text can be used to reduce selection bias and improve medical practice. We develop a framework based on natural language processing to uncover interpretable potential confounders from text. We validate our method by comparing the estimated hazard ratio (HR) with and without the confounders against established RCTs. We apply our method to four cohorts built from localized prostate and lung cancer datasets from the Stanford Cancer Institute and show that our method shifts the HR estimate towards the RCT results. The uncovered terms can also be interpreted by oncologists for clinical insights. We present this proof-of-concept study to enable more credible causal inference using observational data, uncover meaningful insights from clinical text, and inform high-stakes medical decisions. Observational studies are often plagued by selection bias, and expert-selected covariates may insufficiently adjust for confounding factors. Here, the authors develop a framework based on natural language processing to uncover interpretable potential confounders from text. The gold standard for assessing treatment effects is randomized clinical trials (RCT). However, RCTs can be very expensive, time-consuming, and limited by the lack of external validity3. Hence, there has been a growing interest in using observational data to compare and evaluate the effectiveness of clinical interventions, also known as comparative effectiveness research (CER)2. As the number of highly targeted cancer treatments increases, it is increasingly difficult for oncologists to decide on optimal treatment practices. In recent years, medicine has seen the reversal of 146 standard medical practices4. 
Moreover, population-based CERs in oncology often also face small data challenges. Electronic medical records (EMRs) are another source of rich observational information on patient demographics and past medical history. We hypothesize that the more detailed unstructured data present in EMRs can be harnessed to reduce confounding compared to prior CER studies. Many studies have used large-scale observational registries such as the Surveillance, Epidemiology, and End Results (SEER) program and National Cancer Data Base (NCDB) to perform CER. However, such studies may be unreliable due to the systemic bias present in observational data and the presence of unmeasured confounders2. Indeed, many observational studies have been refuted by RCTs soon after4. For example, Yeh et al.5 performed a comparison of surgery vs. radiotherapy for oropharynx cancer and suggested that surgery may be superior to radiation for quality of life outcomes. A few years later, this claim was refuted by an RCT study by Nichols et al.6, which showed that radiation is in fact superior to surgery in terms of 1-year quality of life scores. A similar example is seen with prostate cancer. In 2016, Wallis et al.7 showed through population-based studies that surgery is superior to radiation for early-stage prostate cancer for overall and prostate cancer-specific survival; a few months later, the finding was refuted by Hamdy et al.8, which showed that surgery and radiation are equivalent in terms of overall and prostate cancer-specific survival. Many other studies have shown the fallibility of population CERs that rely on expert-curated features to draw conclusions about treatment effects9. In the past decade, there has been a growing interest in using observational data for clinical decision-making and causal inference in oncology11; see ref. 12 for a review. 
There is also a growing amount of literature that adapts machine-learning models, such as random forests or regularized regression, for doubly robust average treatment effect (ATE) estimation in high-dimensional settings16. However, most of these methods do not include unstructured data. Beyond clinical studies, there is a relatively large literature on performing causal inference from observational data. Various papers have explored how to correct for bias when evaluating the ATE from observational studies with propensity score matching or weighting20. Roberts et al.18 propose text matching to employ textual data for causal inference. Mozer et al.17 apply text matching to patient chart texts for a medical procedure evaluation; however, they focus on continuous outcomes and rely mostly on expert-curated terms from the clinical text. Veitch et al.19 also employ unstructured data for causal inference; however, they rely on black-box models that are not interpretable. Moreover, many existing causal inference methods are developed for continuous outcomes and do not transfer easily to the time-to-event outcomes for survival analysis used in oncology. Of the ones that perform causal inference on time-to-event outcomes for medical applications22, we did not find any that include unstructured data in a systematic way. Austin22 presents methods for using propensity scores to reduce bias in observational studies with time-to-event outcomes. Our study leverages some of the ideas and methods in this literature to develop our approach for identifying and evaluating the potential confounders from the unstructured clinical notes. Keith et al.20 present a review of the literature on using textual data to adjust for confounding. 
Our paper contributes to this literature by addressing obstacles in using NLP methods to remove confounding. Recent literature has shown the usefulness of conditioning on textual data to adjust for confounding24, for clinical risk prediction25, and for prediction of multiple medical events26. However, most current work involving EMRs focuses on prediction tasks. In studies that include unstructured notes, most use deep learning to produce context-rich embedding representations of words or documents26. While these representations are highly accurate for prediction tasks, they are often black-box and very difficult to interpret for causal insights. Our approach differs in that we use simple NLP techniques to generate matrix representations that can be easily mapped to specific words and phrases. This increases the interpretability of our method and allows us to explain our confounders to clinicians. There is also a growing literature that seeks to better employ EMRs for clinical tasks. Existing work has employed structured EMR data and unstructured clinical notes for survival prediction and analysis7. Observational studies are more reliable when we can better control for confounders. While structured EMR data, such as billing codes, can be used to encode expert-curated patient characteristics, studies suggest that administrative claims data may contain errors28 and expert-curated covariates may not capture all potential confounding29. EMR clinical text is a potential source of additional information about factors that might relate to both treatment assignment and prognosis. We study how EMRs, especially clinical text, can be used to reduce selection bias in observational CER studies and better inform treatment decisions in oncology. A confounder is a variable that is associated with both treatment assignment and the potential outcomes a subject would have under different treatment regimes. 
In the presence of confounders, the correlation between treatment assignment and outcomes cannot be interpreted as causal. One way that confounding may arise is when patients are selected for a treatment group on the basis of the severity of their illness. In such a case, failing to adjust for patient severity can lead to selection bias when attempting to estimate causal effects. For example, surgery tends to be performed on younger or healthier patients; certain doctors or institutions may prefer one treatment over another, and this creates confounding if those doctors or institutions treat patients with systematically different severity. Studies based on a small set of covariates tend not to capture the important confounders and result in biased estimates30. NLP can be used to process the unstructured clinical notes and create covariates that supplement the traditional covariates identified through expert opinion. We then augment our dataset with covariates that impact both treatment assignment and patient outcomes, where attempting to estimate causal effects while omitting such variables leads to biased estimates31. Finally, we use methods designed to estimate causal effects in observational studies with observed confounders to estimate treatment effects in our augmented dataset. We show that controlling for these confounders appears to reduce selection bias when compared against the results from established RCTs and clinical judgment. We propose an automated approach using natural language processing (NLP) to uncover interpretable potential confounders from the EMRs for treatment decisions. For high-stakes settings such as cancer treatment decisions, it is important to design models that are interpretable for trust and understanding32. A Lasso model was then used to select the terms that are predictive of both the treatment and survival outcome as potential confounders. 
Finally, we validated our method by comparing the hazard ratio (HR) from survival analysis with and without the confounders. We apply our method to localized prostate and lung cancer patients. Based on cohorts from established RCTs, we built four treatment groups for comparison. We uncovered interpretable potential confounders from clinical text and validated the potential confounders against the results from the RCTs. Simple NLP techniques were used to construct a bag-of-words representation of the frequently occurring terms. Existing work in observational causal inference rarely employs unstructured data22, and most NLP studies on clinical text focus on prediction or classification settings28. Our paper differs from existing studies by employing NLP for causal analysis; we use NLP methods to predict the treatment and survival outcome, and then employ a causal framework to combine the two models for uncovering potential confounders. We are the first to uncover interpretable potential confounders from clinical notes for causal analysis on cancer therapies, and one of the few works that combine NLP and causal inference in a time-to-event setting. Our method allows researchers to extract and control for confounders that are not typically available. While we present our work as a proof-of-concept study, it appears to be a useful step for future observational CER studies to help reduce selection bias unique to a given dataset. The research presented can help unlock the potential of clinical notes to help clinicians understand current clinical practice and support future medical decisions. We also outline several limitations that need to be overcome for use in practice in \u201cDiscussion\u201d. Our main contribution is presenting a framework for uncovering interpretable potential confounders from clinical text. 
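To make the dual-model selection idea concrete, here is a toy sketch. The notes, treatment labels, and outcome labels are hypothetical, and a simple frequency-difference score stands in for the paper's Lasso coefficients; the point is only that terms strongly associated with both the treatment and the outcome form the set of potential confounders:

```python
from collections import Counter

# Hypothetical patient notes with treatment (1 = surgery) and outcome labels.
notes = [
    "bladder catheter pain", "bladder biopsy", "screen visit",
    "screen follow", "bladder pain", "screen visit follow",
]
treated = [1, 1, 0, 0, 1, 0]
died = [1, 1, 0, 0, 1, 0]

vocab = sorted({w for n in notes for w in n.split()})

def association(term, labels):
    """Difference in the term's mean bag-of-words count between label groups."""
    freq = [Counter(n.split())[term] for n in notes]
    pos = [f for f, y in zip(freq, labels) if y == 1]
    neg = [f for f, y in zip(freq, labels) if y == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

threshold = 0.5
treat_terms = {t for t in vocab if abs(association(t, treated)) >= threshold}
outcome_terms = {t for t in vocab if abs(association(t, died)) >= threshold}
confounders = treat_terms & outcome_terms  # terms predictive of both models
```

In this toy, "bladder" survives both screens while rarer terms like "catheter" do not; the paper's actual pipeline replaces the crude score with Lasso models for treatment and survival.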
Our study advances both the clinical and causal inference literature by using NLP to perform causal inference on clinical text in time-to-event settings. We hope this will inform clinical practice and improve patient outcomes. We apply our methods to localized prostate and stage I non-small cell lung cancer (NSCLC) patients and compare the results against established RCTs. We select these diseases due to data availability and the existence of established clinical RCTs for validation. After filtering and assignment, we include 1822 patients for prostate cancer, with 988 surgery patients, 385 radiation patients, and 449 active monitoring patients; the average follow-up time is 4.11 years. For stage I NSCLC, we include 749 patients, with 492 surgery patients and 257 radiation patients; the average follow-up time is 4.96 years. The patient characteristic descriptions of the prostate cancer cohort are shown in Table\u00a0. For localized prostate cancer, Hamdy et al.8 compared active monitoring, radical prostatectomy, and external-beam radiotherapy. A total of 1643 patients were included in the study, with 553 men assigned to surgery, 545 men assigned to radiotherapy, and 545 men to active monitoring. They observed no significant difference among the groups for prostate cancer or all-cause mortality (P\u2009=\u20090.48 and P\u2009=\u20090.87, respectively). Similarly, a recent study showed that the difference in treatment effects for surgery vs. radiation observed in observational studies is entirely due to treatment selection bias29. For stage I NSCLC, the Chang et al.33 study is a pooled study comparing stereotactic ablative radiotherapy (SABR) to surgery. A total of 58 patients were included, with 31 patients assigned to SABR and 27 to surgery. 
The study observed that SABR had slightly better overall survival than surgery (P\u2009=\u20090.037), but claims to be consistent with the clinical judgment that surgery is in equipoise with radiation. We use the findings from established RCTs and clinical judgment as benchmarks for evaluating our results: Hamdy et al.8 for localized prostate cancer and Chang et al.33 for stage I NSCLC. Following the designs of Hamdy et al.8 and Chang et al.33, we evaluate our results for the following four treatment groups for an outcome of all-cause mortality: surgery vs. radiation for prostate cancer; surgery vs. monitoring for prostate cancer; radiation vs. monitoring for prostate cancer; and surgery vs. radiation for stage I NSCLC. We do not analyze other treatment groups for lung cancer due to patient count constraints. Our approach identifies covariates that are likely potential confounders in this particular dataset from the high-dimensional and high-noise EMR data. These covariates are interpretable, as they are represented by structured data or words from a bag-of-words matrix. To evaluate the effectiveness of the potential confounders selected by the model, we use them to perform survival analysis for the treatment groups for prostate cancer and stage I NSCLC. We compare the results of various methods for time-to-event analysis in terms of HR. Although we cannot know the true HR, we suggest that using medical notes improves on the traditional covariates. We compare our results against existing RCTs to evaluate how the confounders we have uncovered can help correct selection bias. The overall workflow is shown in Fig.\u00a0. We show that our methods uncover terms that are predictive of both the treatment and the survival outcome. Hence, these are potential confounders that should be controlled for in observational CERs to reduce selection bias. Please see Supplement\u00a03 for a discussion of the structures of potential confounding our method can capture. 
We select the intersection covariates from our treatment and outcome prediction models as the potential confounders. We base this idea on the selection of union variables to reduce confounding when performing causal inference on observational data in the case of continuous outcomes15. However, in survival analysis, it is recommended that the number of covariates analyzed be constrained by the statistical 1-in-10/20 rule of thumb with respect to the event count35. In our high-dimensional setting, the union of covariates that are predictive of treatment and outcome yields too many potential confounders relative to the sample size. Hence, we use the intersection as a heuristic to focus on the most important confounders. In Fig.\u00a0, the x axis plots the coefficient from the treatment prediction model while the y axis plots the coefficient from the survival outcome model. Each covariate is labeled by the text next to it. The intersection covariates, intersect, are shown in blue; these are the covariates that have strong effects in both models. For the structured covariates, we illustrate in black the coefficients for the covariates that were not selected; these coefficients are closer to at least one of the axes in the figure. We do not illustrate the coefficients for unstructured covariates that are not selected, as there are a large number of these covariates. The axes are labeled to indicate which treatment the coefficient predicts and whether the coefficient is indicative of a good or bad survival prognosis. For example, in the treatment model, patients with a high bladder word occurrence have a higher likelihood of receiving surgery; in the outcome model, patients with a high bladder occurrence have a lower likelihood of survival. In Supplement\u00a05, we show the R2 correlation among all the selected covariates for each treatment group. Structured: Using only the structured covariates. 
We use this as a baseline because these are covariates that are typically used in retrospective oncology studies and are readily available in the structured data. Intersect: Using only the intersection covariates identified as confounders. Struct+intersect: Using the union of the structured and intersection variables. We evaluate these potential confounders by comparing the results on these three covariate combinations. We then perform survival analysis using a univariate Cox proportional hazards model (Cox-PH) with propensity score matching (matching), a univariate Cox-PH model with inverse propensity score weighting (IPTW), and a multivariate Cox-PH model with inverse propensity score weighting (multi.coxph). We hypothesize that struct+intersect will perform the best by including both the structured and unstructured data. In Fig.\u00a0, we observe that with the additional covariates, we are able to shift the estimate of the HR toward the direction of the RCT for an outcome of all-cause mortality. We also compare the covariate-specific HR of each selected covariate in univariate and multivariate Cox-PH analyses for an all-cause mortality outcome in the accompanying tables. With structured, we observe a significant effect that radiation is superior to surgery, a result that disagrees with most retrospective studies7. Each center can have different patient populations and treatment patterns that shift the structured-only adjusted survival rates. For instance, at our center we have a busy high-dose-rate brachytherapy program, which is an attractive option for fit patients with few comorbidities who might otherwise receive surgery. This would be expected to bias the survival outcomes in favor of radiation, as observed in our study. We seek to uncover potential confounders from the text that can reduce bias when performing retrospective studies, whichever way the bias lies. 
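The IPTW step above can be sketched minimally. The treatment indicators and propensity scores below are hypothetical stand-ins for the paper's fitted propensity model; the weighting rule itself is the standard inverse-probability-of-treatment formula:

```python
def iptw_weights(treated, propensity):
    """IPTW weights: 1/e for treated units, 1/(1 - e) for controls."""
    return [1.0 / e if t == 1 else 1.0 / (1.0 - e)
            for t, e in zip(treated, propensity)]

# Hypothetical cohort: treatment assignments and fitted propensity scores e.
treated = [1, 1, 0, 0]
prop = [0.8, 0.5, 0.5, 0.2]
weights = iptw_weights(treated, prop)  # e.g. a treated unit with e = 0.8 gets 1/0.8
```

These weights would then be passed to a weighted Cox-PH fit (the multi.coxph and IPTW variants above) so that the weighted sample mimics a population in which treatment is independent of the measured covariates.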
After adjustment with the uncovered confounders, we observe a significant shift in the HR toward equipoise with the additional identified confounders for intersect and struct+intersect. For structured, we observe an HR of 2.51 with 95% CI (2.39–4.55) and P value of 0.002 with multi.coxph. For struct+intersect, we estimate an HR of 1.54 with 95% CI (0.78–3.03) and P value of 0.214 with multi.coxph. We shift the HR point estimate by 0.97, or 38.6%, toward equipoise.

For surgery vs. active monitoring, the RCT of Hamdy et al. [8] reports an HR of 0.93 with a P value of 0.92. With structured, we again have a significant effect that active monitoring is superior to surgery; this disagrees with most retrospective studies [7] and Hamdy et al. [8]. We again observe a significant shift in the HR toward equipoise with the additional identified confounders. For structured, we observe an HR of 2.71 with 95% CI (1.55–4.75) and P value < 0.001 with multi.coxph. For struct+intersect, we estimate an HR of 1.10 with 95% CI (0.55–2.21) and P value of 0.781 with multi.coxph. We shift the HR point estimate by 1.61, or 59.1%, toward equipoise.

For radiation vs. active monitoring, the RCT [8] records an HR of 0.94 with a P value of 0.92. We observe that matching estimates the HR closest to the RCT results when compared against IPTW and multi.coxph. All results with intersect and struct+intersect shift the HR estimate slightly toward equipoise, with the largest shift of 0.32, or 71.1%, by intersect and IPTW; this is closely followed by a shift of 0.20, or 45.5%, with intersect and multi.coxph.

For surgery vs. radiation in stage I NSCLC, the reference comparison has an HR with 95% CI (0.65, 1.6); prior work [33] and clinical judgment tell us that surgery and radiation should be at about equipoise for stage I NSCLC.
The shift is not as significant as with prostate cancer, but we also note that the established clinical standard for lung cancer is not as well studied. We observe a more significant shift with multi.coxph, with an average shift of 0.15, or 38.5%. We observe an average shift of 0.06, or 15.4%, with matching and an average shift of 0.02, or 5.1%, with IPTW. For structured, we observe an HR of 0.39 with 95% CI (0.30–0.51) and P value < 0.001 with multi.coxph. For struct+intersect, we estimate an HR of 0.54 with 95% CI (0.40–0.53) and P value < 0.001 with multi.coxph. We shift the HR point estimate by 0.15, or 38.5%, toward equipoise. While the adjusted results are not as close to the RCT results as in the prostate cancer comparisons, multi.coxph seems to perform better under these settings.

Overall, our methods uncover several potential confounders that can reduce selection bias in observational data. Although our method cannot uncover all potential confounders, we are able to uncover confounders that are not usually included in expert-selected covariates. Supplementary analysis of propensity scores and covariate balance plots for each analysis is provided in Supplement 4.

We show that the potential confounders we have uncovered are interpretable through clinical expertise. We examine the effect on survival of each selected covariate in terms of univariate and multivariate survival analysis with a Cox-PH model. In univariate analysis, a single covariate is regressed on the survival outcome, describing survival with respect to that covariate alone. In multivariate analysis, all the selected covariates are regressed on the survival outcome, describing each covariate's effect on survival while adjusting for the impact of all selected covariates.
For a particular variable, an HR below 1 indicates that the covariate is a positive predictor of survival, an HR above 1 indicates a negative predictor of survival, and an HR equal to 1 means that the variable does not seem to affect survival.

For surgery vs. radiation and surgery vs. active monitoring with prostate cancer, struct:patient_age, text:bladder, and text:urothelial are chosen as intersection covariates. Moreover, they are also shown to be significant through both univariate and multivariate covariate analysis in the corresponding tables.

Patient age is a known confounder in treatment decision and survival outcomes. Older patients are more likely to receive radiation due to surgery risk. However, older patients also have higher mortality.

Examples of text:bladder in the clinical notes are “he notes incomplete bladder emptying”, “evidence of benign prostatic hyperplasia and chronic bladder outlet obstruction”, and “diagnosis of bladder cancer”. Examples of text:urothelial in the notes are “pathology showed high-grade urothelial carcinoma with muscle present and not definitively involved” and “it was read as a high-grade urothelial cancer which involved the stroma of the prostate as well as the bladder”. Patients with bladder cancer or bladder issues are more likely to get surgery than radiation, since radiation does not work well for bladder cancer. Patients with bladder problems may also prefer surgery because radiation can irritate the bladder and cause urinary problems. However, these are also patients with higher mortality and more health issues.

We hypothesize that text:bladder and text:urothelial are identified because prostate cancer patients often have bladder symptom issues and can also have urothelial cancer. Most retrospective prostate cancer studies have not excluded patients with early-stage bladder cancer. Moreover, for this dataset, we note that the confounding appears to be observable.
The bias of surgery being worse than radiation and monitoring is due to a group of patients who are diagnosed with prostate cancer through a resection for bladder cancer or other bladder issues. When a patient with bladder cancer has a cystoprostatectomy in which the bladder and prostate are both removed, a pathologist can sometimes find a prostate tumor in the pathology specimen. Bladder cancer patients tend to be older, have more medical issues, and have a higher mortality rate. The terms text:bladder and text:urothelial describe this group of patients. Our method can capture some characteristics of this group and use this to reduce selection bias.

For radiation vs. active monitoring, we do not observe confounders that present a significant shift in treatment HR.

We repeat the same process for lung cancer and examine the corresponding table. We note that age, gender, race, and diagnosis year are known confounders for treatment decision and outcome.

The covariate text:alk points to the ALK mutation for NSCLC. About 5% of NSCLCs have a rearrangement in a gene called ALK; the ALK gene rearrangement produces an abnormal ALK protein that causes the cells to grow and spread. This change is often seen in non-smokers (or light smokers) who are younger and who have the adenocarcinoma subtype of NSCLC [39]. It has been observed that patients with the ALK mutation have worse disease-free survival, with higher rates of recurrence and metastasis [36]. Alternatively, we hypothesize that text:alk is significant because the ALK mutation is mutually exclusive from the EGFR mutation [37]. The EGFR mutation is often present in Asian patients, and EGFR patients typically have better survival. Hence, the significance of text:alk can be related to the absence of the EGFR mutation, since EGFR mutations occur less frequently in the lower lobe [38].
The covariate text:left.low can point to NSCLC in the lower left lobe of the lung. Studies have observed that lung cancer in the lower lobe or lower left lobe has worse survival [40].

The covariate text:nipple can indicate a history of breast cancer. Studies have shown that patients with a history of breast cancer are diagnosed with lower stages of NSCLC and show better prognosis when compared to women with first NSCLC, perhaps due to heightened surveillance compared to the general population [40].

The covariate text:sponge can refer to sponges used for surgical preparations. The sponge is commonly used in surgery and can be an indication that the patient has some history of receiving surgery. Patients who receive surgery tend to be healthier and have better survival.

The covariates text:severe and text:rib could be pointing to severe conditions related to the lung and other problems that indicate poor overall health and performance status, which has been shown to be related to a patient's survival outcomes [41]. Examples of text:severe include phrases such as “severe pulmonary hypertension”, “severe COPD”, or “severe emphysema”. Examples of text:rib include phrases such as “rib fractures” or “rib shadows”.

Overall, we are able to uncover some potential confounders that are easy to interpret and capture useful clinical insights.

We have demonstrated how causal inference methods can be used to draw more reliable conclusions from population-based studies. Our paper shows that (1) clinical notes, or unstructured data, can be an important source for uncovering confounders, and (2) current clinical tools can be augmented with machine-learning methods to provide better decision support.
Furthermore, our proof-of-concept framework can be easily adapted to use textual data to reduce selection bias in retrospective studies more generally.

Our framework can be used to improve clinical practice. Due to the simplicity of the machine-learning tools employed, it can be easily implemented as an additional step in the design of observational CER studies. Our results also show that the method is generalizable to different types of cancer and to various types of study cohort comparisons. With the continued digitization of clinical notes and the increasing access to EMRs, we recommend this as an essential step for any researcher seeking to draw clinical insights from observational data. The terms uncovered with our method can not only be used to improve observational CERs but also to generate interpretable insights about current clinical practice. The uncovering of relevant information and subsequent insights can then be used to inform high-stakes medical decisions.

We believe that our work is the first to explore the potential of including unstructured clinical notes to reduce selection bias in oncology settings. We are also one of the first works to incorporate unstructured data into causal inference estimators and Cox-PH models. Although our method has been developed to address a specific problem in oncology and applied in the clinical setting, it can also be easily adapted for application in any observational study that seeks to incorporate unstructured text. We propose our method as an automated selection procedure that can be used to supplement expert opinion when uncovering potential confounders for a particular observational study population. There is much work to be done in using NLP and unstructured text for causal inference. Our work presents a simple and flexible way to generate interpretable causal insights from text of any sort.
Our method can also be applied to studies within and beyond medicine to extract important information from observational data to support decisions.

Our study also has several limitations, and we begin by outlining potential areas for future work. First, we use simple NLP methods to process the clinical notes and extract the top 500 or 1000 features for variable selection. In the process, much information in the text notes is discarded and the sequence of past medical events is not taken into account. We choose this setup due to the small sample size of oncology study cohorts, which makes it difficult to train more complicated models for textual processing. In theory, the more work that is placed into preprocessing the clinical notes and the higher the quality of the features generated from these notes, the more informative the uncovered potential confounders will be. For future work, we hope to explore how other NLP techniques, such as topic modeling or clustering, can be used to build even higher-quality features from unstructured text. There is also an increasing number of deep learning models that can be used to identify interpretable insights [24, 26]. We are interested in how these deep learning methods can be applied to generate causal insights on another study population with a larger sample size. We are also interested in developing ways to better address ambiguity in the notes.

Second, we rely on the proportional hazards assumption for our Cox-PH models. In cases of many covariates, the assumption may be violated. We feel the simplicity and interpretability of the model outweigh the performance improvement resulting from increased complexity. For EMR datasets with many covariates, the assumption is often used and does not seem to present a practical issue [43]. Future work could explore alternative models that do not rely on this assumption [42].

Third, more work can be done to mitigate immortal-time bias in our HR estimates.
We discuss our approach in “Study cohort”. An alternative method to address this problem would be to use a time-dependent Cox-PH model.

Fourth, we focus on the comparison of methods that can be applied in a time-to-event setting, and leave out more novel methods that are developed for continuous settings of ATE estimation. It would be interesting to explore how these methods can be extended and applied to a time-to-event setting.

Fifth, our approach of selecting intersection covariates is an empirical approach designed for uncovering the most valuable potential confounders. As a result, we filtered out most of the features and focused on only a few confounders. While our approach works well empirically in this study, future work involves developing more sensitive and statistically grounded methods for identifying potential confounders.

Sixth, our work is constrained to localized prostate and lung cancer patients at the Stanford Hospital and to state cancer registry data. It would strengthen the validity of our methods if experiments could be performed on large multi-institutional registries for cancer or other diseases.

Seventh, we acknowledge that an average follow-up of about 4 years is relatively short for prostate cancer survival analysis. For this sample from the EMR, the actual follow-up time for each patient varies from 6 months to 10 years. Future studies can perform the analysis on larger multi-institutional datasets.

Eighth, we include a limited set of structured features, as structured data such as diagnosis codes are often under-reported in the EMR. We do include tumor grade in the structured data, which is shown to have a strong correlation to PSA [45]. We acknowledge this as a limitation of our study, and future work can be done to augment the structured features.
For example, we did not include the PSA scores because they are not well recorded in the structured data, as many patients had PSA tests done at outside facilities [44].

In addition to future work, we also outline two limitations to applying our framework. First, our method can only uncover potential confounders that can be observed in notes. There are many sources of confounding in observational data, and even rich EMR data cannot capture everything. If the confounding is unknown and unobservable, no method to our knowledge will be able to adjust for it. Hence, it would be good practice to perform sensitivity analysis to evaluate the result's robustness to unknown confounding. Please see Supplement 3 for additional discussion on the potential confounding situations we can capture.

Second, the validity of causal inference models cannot be determined without prospective experimental data. Therefore, the uncovered confounders and estimated HR can only be validated by clinicians. We are identifying potential candidates for the bias and then evaluating these candidates against RCTs.

Many challenges remain for employing unstructured data for causal inference analysis in medical settings. We hope this work interests both clinical practitioners augmenting existing clinical support tools and researchers using textual data to reduce confounding in observational data. We hope our workflow, problem framing, and experimental design can serve as a sandbox for testing more complex algorithms or adapting to other application areas. Ultimately, we hope this research will help find causal information in clinical notes and provide a transparent way for machine learning to inform medical decision-making.

Our research conforms with all relevant ethical regulations and was approved by the Stanford Institutional Review Board (IRB). The requirement for patient consent was waived by the IRB.
We curate a dataset of non-metastatic prostate and lung cancer patients from the Stanford Cancer Institute Research Database (SCIRDB). The database includes patients seen in the Stanford Health Care (SHC) system from 2008 to 2019 for prostate cancer and from 2000 to 2019 for lung cancer. SHC clinical sites include one academic hospital, one freestanding cancer center, and several outpatient clinics. From SCIRDB, we pull a total of 3638 prostate cancer patients with 552,009 clinical notes and 3274 non-small cell lung cancer (NSCLC) patients with 648,505 clinical notes. The clinical notes include progress notes, letters, discharge summaries, emergency department notes, history and physical notes, and treatment planning notes.

For each patient, we also pull the structured EMR data and data from the inpatient billing system. From the California Cancer Registry (CCR), we pull the available initial treatment information, cancer staging, tumor description, date of diagnosis, date of death, and date of last follow-up for these selected patients. For NSCLC, we also pull the recorded Epic cancer staging information. Demographic information such as age, race, gender, and ethnicity is self-reported in our dataset.

We build our study cohorts from SCIRDB with reference to existing observational study principles and clinical expertise [43]. We try our best to select patients for each treatment group built from the EMRs to match the RCTs' criteria. We then filter for only patients with a treatment of interest for analysis. Because we extract only initial treatments (rather than treatments for cancer recurrence) as recorded in SEER, most of the treatments are administered within 6 months of the diagnosis date [46]. This is similar to the setup for traditional landmark analysis [43]. To ensure the proportional hazards condition, patients who are still living are censored at the time of last follow-up [47].
The patient filtering and cohort selection process is shown in the corresponding figure. For each patient, we combine all treatments with the same Diagnosis ID in the CCR as the initial line of treatment. For patients with multiple Diagnosis IDs, we keep the first record of treatment. For prostate cancer, patients without a recorded treatment are labeled as active monitoring. To avoid explicit revelation of the treatment choice, we only include notes from more than 2 months before the treatment start date for prostate cancer and 1 month for NSCLC. We rely on domain expertise to determine the 1- or 2-month pre-treatment cutoffs: lung cancer patients typically have higher mortality and tend to start treatment quickly, while prostate cancer patients progress more slowly and often get second opinions before making a treatment decision. We then select for patients with at least one note before the specified time. We select only patients who survived at least 6 months past their date of diagnosis to mitigate immortal-time bias [33].

For patients with unknown clinical stage but known pathological stage, we impute the clinical stage by training a clinical-stage classification model using the pathological stage and other patient information; this is more rigorous than grouping the two stages together. We train the clinical-stage imputation model with struct:patient_age, struct:pathological_stage, struct:diagnosis_year, and struct:tumor_grade. For NSCLC, struct:tumor_grade is not included due to missing information. For both prostate cancer and NSCLC, we train and validate a random forest model [49] on patients with both clinical and pathological stage available. The imputed stages are used as the clinical stage for those patients. For patients with both clinical and pathological stage missing, we are able to fill in some through clinical chart reviews.
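The imputation step above can be sketched as follows. This is a minimal illustration with scikit-learn on synthetic data; the column names mirror the paper's struct:* covariates, but the numeric stage encoding and data are assumptions.

```python
# Sketch of clinical-stage imputation: train a random forest on patients
# with both clinical and pathological stage recorded, validate it, then
# impute the missing clinical stages. Data and encoding are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "patient_age": rng.integers(45, 90, n),
    "pathological_stage": rng.integers(1, 5, n),
    "diagnosis_year": rng.integers(2008, 2020, n),
    "tumor_grade": rng.integers(1, 4, n),
})
# Pathological stage usually runs slightly higher than clinical stage.
df["clinical_stage"] = np.maximum(
    df["pathological_stage"] - rng.integers(0, 2, n), 1).astype(float)
df.loc[rng.random(n) < 0.3, "clinical_stage"] = np.nan  # 30% missing

known = df[df["clinical_stage"].notna()]
X = known.drop(columns="clinical_stage")
y = known["clinical_stage"].astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_val, y_val))

# Fill in the missing clinical stages with the model's predictions.
missing = df["clinical_stage"].isna()
df.loc[missing, "clinical_stage"] = model.predict(df.loc[missing, X.columns])
```

Validating on held-out patients with known stages, as above, mirrors the paper's train-and-validate setup before applying the model to patients with missing stages.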
The pathological stage is usually a little higher than the clinical stage because it is based on biopsy samples instead of imaging; hence, it is inaccurate to group them together. Clinical stage is more frequently used in similar observational studies [50].

We assign patients to the treatment groups based on the initial treatment decision, to capture the intent to treat rather than the actual treatments administered. We assign patients with only surgery records to the surgery group and patients with only radiation records to the radiation group. For patients with both radiation and surgery records, patients who received surgery first are assigned to the surgery group and patients who received radiation first to the radiation group. For prostate cancer, patients from all stages are included, except for patients with distant metastases, and patients with no recorded treatment are assigned to the active monitoring group. For NSCLC, only patients with clinical stage I are included. The data processing is performed in Python with pandas.

We build the covariates used for uncovering confounders through the process shown in the corresponding figure. Based on age range categories used in Li et al. [51], we form the categorical variable struct:patient_age by splitting age into ranges of ≤49 years old, 5-year buckets from 50 to 84 years old, and ≥85 years old. Race and ethnicity are encoded as one-hot vectors, with each feature indicating one race or ethnicity. Race categories are combined following Li et al. [51]. We select these structured covariates because they are commonly accepted by clinicians as potential confounders and often included in CER studies [7]. For race, struct:race_unknown is not included as a covariate. For ethnicity, only struct:hispanic is included as a covariate. For tumor grade, patients with unknown grade are imputed with the median grade value. The indicator variable struct:grade_unknown is added to indicate which patients have been imputed.
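The intent-to-treat assignment rules described above reduce to a small function. This is an illustrative sketch; the (treatment, start_date) record format is an assumption, not the paper's actual data schema.

```python
# Sketch of intent-to-treat group assignment: surgery-only patients go to
# surgery, radiation-only to radiation, patients with both are assigned by
# whichever treatment started first, and (for prostate cancer) patients
# with no recorded treatment go to active monitoring.
from datetime import date

def assign_group(records):
    """records: list of (treatment, start_date) tuples for one patient."""
    if not records:
        # Prostate cancer: no recorded treatment means active monitoring.
        return "active_monitoring"
    first_treatment, _ = min(records, key=lambda r: r[1])
    return first_treatment

print(assign_group([("radiation", date(2012, 3, 1)),
                    ("surgery", date(2012, 5, 1))]))  # -> radiation
print(assign_group([]))                               # -> active_monitoring
```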
The covariates struct:tumor_grade and struct:grade_unknown are not included for NSCLC due to missing tumor grade information and clinical judgment. In the end, we have nine structured covariates for prostate cancer and seven structured covariates for NSCLC. While billing codes can be used to generate additional structured features for diagnosis and past treatments, existing studies have found that these can be unreliable [28]. Hence, we chose to focus mainly on clinical notes to capture additional information that can influence survival time, such as patient symptoms and performance status. In summary, we include age, race, ethnicity, clinical stage, and diagnosis year as part of the structured data. For prostate cancer, we also include the SEER-recorded tumor grade, which is highly correlated with the Gleason grade. For NSCLC, we also include gender.

We build word frequency representations of the clinical notes for the unstructured covariates. For each patient, we compile the notes within the specified time window. We only use notes from before treatment so that we are not predicting survival outcome with information unavailable at the time of the treatment decision. The different time windows for the two diseases were selected because NSCLC treatment generally starts more quickly than prostate cancer treatment, due to the more rapidly progressing nature of the cancer. The notes are segmented based on clinical field labels, tab spaces, and NLTK sentence tokenization [53]. To remove noise, we remove clinical field labels and two sentences from the beginning and end of each document. We also remove sentences with common locations and medical doctor names, as these are often prefixes or suffixes to note documents. To avoid including conditions patients do not have, we remove sentences containing fewer than 15 words if they include a negation term. For example, this prevents us from extracting “smoking” as a covariate from “No history of smoking.”
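The negation filter just described can be sketched as below. The paper uses NLTK sentence tokenization; a simple regex splitter stands in here to keep the sketch self-contained, and the negation-term list is an assumption (the paper does not enumerate its exact terms).

```python
# Sketch of the negation filter: drop short sentences containing a negation
# term so that, e.g., "No history of smoking" does not contribute "smoking"
# as a covariate. Splitter and negation list are simplified assumptions.
import re

NEGATION_TERMS = {"no", "not", "without", "denies", "negative"}  # assumed

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def keep_sentence(sentence, max_len=15):
    # Drop a sentence only when it is short AND contains a negation term.
    words = re.findall(r"[a-z']+", sentence.lower())
    return not (len(words) < max_len and NEGATION_TERMS & set(words))

def filter_note(note):
    return [s for s in split_sentences(note) if keep_sentence(s)]

print(filter_note("Patient reports back pain. No history of smoking."))
# -> ['Patient reports back pain.']
```

The length threshold matters: a long sentence with a negation term may still contain useful positive findings, so only short negated sentences are dropped.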
scispaCy [54] is a spaCy-based model for processing biomedical, scientific, and clinical text. We identify biomedical entities from the preprocessed clinical notes with scispaCy [56]; the scispaCy models identify all the entities in the text that exist in a biomedical dictionary, such as the Unified Medical Language System [55]. We then lemmatize and combine all biomedical entities identified from the sentences for each patient into a single document. For lemmatization, we use the scispaCy lemmatizer, which is based on the spaCy lemmatization model. To further remove noise, we remove stopwords using a combination of the NLTK stopwords [52] and data-specific stopwords such as medical units, time terms, and medical or Stanford-specific terms that are very common but irrelevant to the task at hand. We also create a dictionary of synonyms in the dataset and use it to combine these words. The dictionary includes lexical variations that are not reduced to the same root during lemmatization, abbreviations, and common synonyms. Please see the list of synonyms included in the Appendix.

The bag-of-words (BOW) model is a simplifying representation in natural language processing that represents text (such as a sentence or document) as a vector of word occurrence counts. TF-IDF is a score that reweights the BOW matrix to reflect how important a word is to a document in a collection or corpus. We implement this with scikit-learn [49]. For prostate cancer, we select the top 500 most frequent features using only unigrams. For NSCLC, we select the top 1000 most frequent features using both unigrams and bigrams, and apply a document frequency threshold strictly lower than 0.7 to filter out dataset-specific stopwords. Although there are more prostate cancer patients, the lower number of death events makes it more difficult to include as many covariates when performing survival analysis.
Hence, we have 500 unstructured covariates for prostate cancer and 1000 unstructured covariates for NSCLC. Finally, we remove punctuation, generate the term frequency representations of the text using bag-of-words (BOW) with term frequency–inverse document frequency (TF-IDF) weighting, and scale and normalize both the structured and unstructured covariates before concatenating them. In total, we build 509 covariates for prostate cancer and 1007 covariates for NSCLC. These covariates are then used to uncover potential dataset-specific confounders.

We define our survival outcome as (Yi, Ei), where Yi is the survival time and Ei ∈ {0, 1} is an indicator for whether a death event has been observed during follow-up. The treatment, Wi ∈ {0, 1}, is an indicator for either surgery, radiation, or monitoring, depending on the treatment group. The covariates, Xi, include the structured dataset pulled from the EMR data and the bag-of-words matrix representation generated from the EMR notes.

We train prediction models for the treatment and the survival outcome with Lasso [32] using glmnet [57]. Lasso is an L1-penalized linear regression that can produce coefficients for covariates that are exactly zero and is hence often used for creating sparse models [58] or for variable selection [15]. We select the intersection of covariates with non-zero coefficients from both the treatment and survival outcome models as potential confounders. For surgery vs. radiation and surgery vs. active monitoring for prostate cancer, we select the intersection covariates that correspond to the Lasso shrinkage penalty of the most regularized model such that the error is within one standard error of the minimum, lambda.1se. For radiation vs. active monitoring for prostate cancer and surgery vs. radiation for stage I NSCLC, we select the intersection covariates that correspond to the shrinkage penalty that gives the minimum mean cross-validated error, lambda.min. The intersection terms selected are more stable with lambda.1se.
However, we choose lambda.min for the latter two treatment groups because lambda.1se did not select any covariates from the text. In this way, we find the potential confounders by identifying covariates that are predictive of both treatment and survival outcome.

We then evaluate each of the covariate combinations with propensity score-adjusted survival analysis [61]. The outcome model estimates the probability that the patient will die within time t given covariates X. The HR is the ratio of the hazard rates of the two treatments; in survival outcomes analysis, the HR is interpreted as the effect on survival of choosing the treatment of interest, Wi = 1. We assume the proportional hazards condition [62], which states that covariates are multiplicatively related to the hazard; e.g., a covariate may halve a subject's hazard at any given time t while the baseline hazard may vary. Hence, the effect of covariates estimated by any proportional hazards model can be reported as the HR of the covariate.

We use the Cox proportional hazards (Cox-PH) model to perform survival regression. In a Cox-PH model, the hazard rate of an individual is a function of their static covariates multiplied by a population-level baseline hazard that changes over time: h(t | W, X) = h0(t) exp(bW W + b1 X1 + ... + bp Xp), where p is the number of covariates, h0(t) the baseline hazard, bW the effect size of the treatment, and bj the effect size of the jth covariate. The HR for a covariate is equal to e^{bj}. We adjust for covariates against the duration of survival and a binary variable indicating whether the outcome event has occurred.
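Returning to the variable-selection step, the intersection heuristic can be sketched as follows. The paper fits a logistic Lasso for treatment and a Cox Lasso for survival via glmnet in R; as a simplified stand-in here, the outcome model is an L1 logistic regression on the death indicator, and the data is synthetic.

```python
# Sketch of intersection selection: fit an L1-penalized model for treatment
# and one for outcome, then keep covariates with non-zero coefficients in
# BOTH supports. Simplified stand-in for the paper's glmnet Lasso + Cox
# Lasso pipeline; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 400, 50
X = rng.normal(size=(n, p))
# Covariate 0 drives both treatment and outcome (a confounder);
# covariate 1 drives only treatment; covariate 2 drives only outcome.
treat = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))
death = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 2])))

l1 = dict(penalty="l1", solver="liblinear", C=0.2, random_state=0)
treat_coef = LogisticRegression(**l1).fit(X, treat).coef_[0]
outcome_coef = LogisticRegression(**l1).fit(X, death).coef_[0]

intersect = np.flatnonzero((treat_coef != 0) & (outcome_coef != 0))
print("intersection covariates:", intersect)  # expected to contain index 0
```

The L1 penalty strength plays the role of glmnet's lambda: a stronger penalty (lambda.1se) yields a sparser, more stable intersection, while a weaker one (lambda.min) admits more candidate confounders.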
We use three methods to estimate the HR:

Nearest-neighbor matching on the propensity score (matching) [22]: we perform nearest-neighbor propensity score matching (NNM) on the selected covariates and estimate the HR on the matched population using a univariate Cox-PH model regressed on the treatment.

Inverse propensity of treatment weighting (IPTW) [63]: we estimate the HR using a univariate Cox-PH model regressed on the treatment with inverse propensity score weighting with stabilization [63]. The stabilized weights are defined as the marginal probability of treatment divided by the propensity score, P(W = 1)/e(X), for treated patients, and P(W = 0)/(1 − e(X)) for untreated patients.

Multivariate Cox proportional hazards (multi.coxph) [64]: we estimate the HR using a multivariate regression model on the treatment and the selected covariates to see how covariates interact with each other. The multivariate model is also weighted with the inverse propensity scores above to form a doubly robust model.

All Cox-PH models are trained using the survival R package [65] with robust variance. Nearest-neighbor matching is performed using the MatchIt R package [66]. We estimate the propensity scores using logistic regression [67] with glmnet [57], stochastic gradient boosting [68] with gbm [69], and generalized random forests with grf [13]. We select the propensity score estimation method with the best overlap and covariate balance post propensity score adjustment. We then compare the three methods for estimating the HR using forest plots.

For each covariate in struct+intersect, we also show the univariate and multivariate Cox-PH model HR, the 95% HR confidence interval, and the P value calculated using the Wald test from the survival R package [65]. Note that for the multivariate Cox-PH covariate analysis, we do not weight the model with the inverse propensity scores.

Further information on research design is available in the Supplementary Information, Reporting Summary, and Peer Review File.
The underlying mechanism of bone loss in chronic liver diseases remains unclear, and appropriate therapeutic options, except for orthotopic liver transplantation, have proved insufficient for these patients. This study aimed to investigate the efficacy and mechanism of transplantation of immature hepatocyte-like cells converted from stem cells from human exfoliated deciduous teeth (SHED-Heps) in bone loss of chronic liver fibrosis. Mice that were chronically treated with CCl4 received SHED-Heps, and trabecular bone density, reactive oxygen species (ROS), and osteoclast activity were subsequently analyzed in vivo and in vitro. The effects of stanniocalcin 1 (STC1) knockdown in SHED-Heps were also evaluated in chronically CCl4-treated mice. SHED-Hep transplantation (SHED-HepTx) improved trabecular bone loss and liver fibrosis in chronically CCl4-treated mice. SHED-HepTx reduced hepatic ROS production and interleukin 17 (Il-17) expression under chronic CCl4 damage. SHED-HepTx reduced the expression of both Il-17 and tumor necrosis factor receptor superfamily 11A (Tnfrsf11a) and ameliorated the imbalance of osteoclast and osteoblast activities in the bone marrow of CCl4-treated mice. Functional knockdown of STC1 in SHED-Heps attenuated the benefits of SHED-HepTx, including the anti-bone loss effect (suppression of osteoclast differentiation through TNFSF11–TNFRSF11A signaling and enhancement of osteoblast differentiation in the bone marrow) as well as the anti-fibrotic and anti-ROS effects in the CCl4-injured livers. These findings suggest that targeting hepatic ROS provides a novel approach to treat bone loss resulting from chronic liver diseases. 
• Chronic CCl4 exposure enhances hepatic ROS levels and osteoclast activity.
• SHED-HepTx reduces hepatic ROS production caused by chronic CCl4 damage.
• SHED-HepTx reduced osteoclast activity caused by chronic CCl4 damage.
• Functional knockdown of STC1 attenuates the benefit of SHED-HepTx.
• Targeting hepatic ROS is a novel approach for bone loss in chronic liver diseases.
Abbreviations: ACTA2, actin alpha 2, smooth muscle; ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMCs, bone marrow cells; BMD, bone mineral density; Coll1a1, collagen type 1 alpha 1; CTX-I, C-terminal telopeptide of type I collagen; Ctsk, cathepsin K; ELISA, enzyme-linked immunosorbent assay; GSH-Px, glutathione peroxidase; HepPar1, human hepatocyte paraffin 1 antigen; HLA-ABC, human leukocyte antigens A, B, and C; HSCs, hepatic stellate cells; HYP, hydroxyproline; IL-17, interleukin 17; MDA, malondialdehyde; mHeps, mouse hepatocytes; microCT, microcomputed tomography; MNCs, multinuclear cells; MSCs, mesenchymal stem cells; NOX4, nicotinamide adenine dinucleotide phosphate oxidase 4; NFATc1, nuclear factor of activated T-cell; PBS, phosphate buffered saline; Pparg, peroxisome proliferator-activated receptor gamma; ROS, reactive oxygen species; RT-qPCR, quantitative reverse transcription polymerase chain reaction; SAA, serum amyloid A; SEM, standard error of the mean; SEMA3A, semaphorin 3A; SHED, stem cells from human exfoliated deciduous teeth; SHED-Heps, hepatocyte-like cells converted from stem cells from human exfoliated deciduous teeth; SHED-HepTx, SHED-Hep transplantation; siCONT, scrambled control siRNA; siRNA, small interfering RNA; siSTC1, siRNA specific for STC1; STC1, stanniocalcin 1; Th17 cells, T helper 17 cells; TNF, tumor necrosis factor; TNFSF11, TNF superfamily member 11; TNFRSF11A, TNF receptor superfamily member 11a; TRAP, tartrate resistant acid phosphatase.
1
The liver is a central organ possessing complex metabolic and xenobiotic functions in the digestive system, and it also participates in the endocrine system. 
Liver metabolism is highly involved in bone metabolism under physiological conditions through the function of somatotropic axis hormones, such as growth hormone, insulin-like growth factor-I, and insulin-like growth factor binding protein 3, and of calciotropic hormones, including parathyroid hormone and vitamin D. Chronic liver diseases can potentially cause abnormal metabolism in the skeletal system. Reactive oxygen species (ROS) are known to trigger the progression of chronic liver fibrosis. Abnormal mineral turnover occurs due to the accelerated osteoclast function that underlies bone loss in patients with chronic liver disease [4,5]. Human deciduous pulp stem cells were first identified as tissue-specific mesenchymal stem cells (MSCs) with clonogenicity, self-renewal, and multipotency within the dental pulp tissues of exfoliated deciduous teeth and are referred to as stem cells from human exfoliated deciduous teeth (SHED).
2
2.1 Human deciduous teeth were collected from discarded clinical samples from healthy pediatric donors with written informed consent from the guardian of each child donor at the Department of Pediatric Dentistry, Kyushu University Hospital. Procedures for handling human samples were approved by the Kyushu University Institutional Review Board for Human Genome/Gene Research. All animal experiments in this study were approved by the Institutional Animal Care and Use Committee of Kyushu University (protocol numbers: A20-041-0 and A21-044-1). All methods were performed in accordance with relevant guidelines and regulations.
2.2 C57BL/6J mice and pregnant mice were obtained from the Jackson Laboratories Japan. The animals were housed individually and freely provided with sterile water and standard chow under controlled environmental conditions with a 12 h light/12 h dark cycle.
2.3 SHED-Heps were transfected with small interfering RNA specific for stanniocalcin 1 (STC1) or with scrambled control. 
The culture details are described in the SHED were isolated by a colony-forming unit fibroblast method, cultured, and characterized as previously described . SHED-He2.44 , was intraperitoneally injected into mice twice a week. SHED-Heps or PBS control (100\u00a0\u03bcL) were transplanted into 4-week CCl4-treated mice via the spleen, and additional CCl4 was administered for four weeks , referred to as control mice. All animals did not receive any immunosuppression and conditioning throughout this study. Mouse livers, long bones, and serum were harvested eight weeks after CCl4 treatment.Freshly prepared CCl2.5Acta2, collagen type 1 alpha 1 (Coll1a1), Il-17, peroxisome proliferator-activated receptor gamma (Pparg), and nicotinamide adenine dinucleotide phosphate oxidase 4 (Nox4) was analyzed by quantitative reverse transcription polymerase chain reaction (RT-qPCR). Malondialdehyde (MDA) levels and glutathione peroxidase (GSH-Px) activity were measured in mouse livers by colorimetric analysis. The expression of serum amyloid A1 (Saa1) in mouse livers were analyzed by RT-qPCR. Serum levels of granulocyte stimulating factor (G-CSF), IL-17, SAA1, and transforming growth factor beta (TGFB) were analyzed using ELISA.Serum levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), total bilirubin, and hepatic hydroxyproline (HYP) were measured by colorimetric analyses. The hepatic distribution of collagen was analyzed by Sirius Red staining. The hepatic localization of actin alpha 2 and smooth muscle (ACTA2) was analyzed by immunohistochemical analysis. The hepatic expression of 2.66 in 100\u00a0\u03bcL PBS) or PBS (100\u00a0\u03bcL) and then intrasplenically infused into 4-week-CCl4 treated mice. 
Ventral images of the mice were obtained 24\u00a0h after infusion with IVIS Lumina III (Perkin Elmer) using living image software (Perkin Elmer).SHED-Heps were labeled with XenoLight DiR NIR fluorescent dye and of human hepatocyte paraffin 1 antigen (HepPar1) was analyzed by immunohistochemical analysis. The co-distribution of HepPar1 and ACTA2 in mouse livers was analyzed using double immunofluorescence analysis.2.8Trabecular bones of mouse tibiae were analyzed by micro-computed tomography (microCT) assays performed on a SkyScan 1076 scanner using CT-Analyzer and CT-Volume software (Bruker) . Serum l2.94 or recombinant mouse IL-17 and/or anti-mouse TNFSF11 goat IgG or goat IgG . Mouse BMCs were treated for 4 days in the absence and presence of H2O2 . The expression of Il-17 and Tnfrsf11a in mouse BMCs was analyzed by RT-qPCR.Mouse BMCs were isolated from femurs and tibiae of mice and co-cultured with newborn calvarial osteoblasts and the osteoclast formation was determined as reported previously ,25, as d2.104 or H2O2 .Mouse bone marrow stromal cells (BMSCs) were cultured under an osteogenic induction condition, and determined as previously , as desc2.115 per well) were incubated with or without CCl4 for 4\u00a0h and co-cultured with or without SHED-Heps using 0.4\u00a0\u03bcm cell culture inserts for 6\u00a0h in 10% fetal bovine serum , 5% non-essential amino acids , and premixed antibiotics containing 100 U/mL penicillin and 100\u00a0\u03bcg/mL streptomycin in Dulbecco\u2019s modified Eagle\u2019s medium . The ROS content in the conditioned medium and the cell viability of mHeps were both analyzed by colorimetry.Primary mouse hepatocytes .Each test was performed in triplicate, and the results were expressed as the mean\u00a0\u00b1\u00a0standard error of the mean. Comparisons between two groups were performed using independent two-tailed Student\u2019s t-test. 
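The two-group comparison just described can be illustrated with a minimal pooled-variance Student's t statistic; the groups below are invented toy data, not measurements from this study:

```python
import math
import statistics

def students_t(a, b):
    """Pooled-variance two-sample t statistic (equal-variance Student's t)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Toy groups: means 2 and 4, pooled variance 2.5, so t = -2/sqrt(5/3)
print(round(students_t([1, 2, 3], [2, 4, 6]), 3))  # -1.549
```

The two-tailed p-value would then be read from a t distribution with na + nb − 2 degrees of freedom, which statistical software handles directly.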
Multiple group comparisons were performed using one-way repeated measures analysis of variance followed by the Tukey\u2019s post hoc test. Statistical significance was set at P 33.1Isolated SHED exhibited characteristics of MSCs, including attached colony formation, immunophenotypes, and mesenchymal multipotency . SHED-HeIn\u00a0vivo imaging demonstrated that DiR-fluorescence activity was detected the donor cell in situ in the liver region of DiR-labeled SHED-HepTx mice but not in that of the non-infused mice at 5 days after infusion (4 mice (infusion A. Using pTx mice B. No flupTx mice B. Immunoctively) C, D. ELI and cocultured with calvarial osteoblasts. In\u00a0vitro osteoclastogenic assay revealed that the CCl4-BMCs increased the number of TRAP-positive multinuclear cells (MNCs) and levels of Tnfrsf11a, nuclear factor of activated T-cell (Nfatc1), and cathepsin K (Ctsk), compared to the Cont-BMCs by TRAP staining and RT-qPCR . In\u00a0vitro osteogenic assay revealed that the CCl4-BMSCs reduced the osteogenic capacity compared to the Cont-BMSCs, as indicated by the decreased formation of mineralized nodules and suppressed expression of runt-related transcription factor 2, alkaline phosphatase, and bone gamma-carboxyglutamate protein, four and two weeks after osteogenic induction by Alizarin Red staining and RT-qPCR, respectively by RT-qPCR and compared to siCONT-SHED-Hep transplanted CCl4-treated mice (siCONT-SHED-HepTx mice) to examine the benefit of STC1 to liver fibrosis. The siCONT-SHED-HepTx mice improved the hepatic anti-fibroinflammatory effects compared to the CCl4 mice by colorimetric analysis and RT-qPCR (Il-17 compared to the siCONT-SHED-HepTx livers by RT-qPCR (Pparg, and Nox4 compared to the siCONT-SHED-HepTx livers (Next, siSTC1-SHED-Heps were transplanted into CClCl4 mice . MeanwhiCl4 mice A\u2013C. The RT-qPCR D, E. The RT-qPCR F. 
By colx livers G, H.Figu4 mice enhanced the levels of hepatic Saa1 and serum SSA1, G-CSF, and IL-17 compared to the control mice by RT-qPCR and ELISA (Saa1 and serum SAA1, G-CSF, and IL-17 in the CCl4 mice, while siSTC1-SHED-HepTx attenuated the benefit of siCONT-SHED-HepTx in the CCl4 mice (The CClnd ELISA . siCONT-Cl4 mice .3.64 mice (In\u00a0vitro osteoclastogenic assay revealed that the BMCs of siSTC1-SHED-HepTx mice, siSTC1-SHED-HepTx-BMCs, increased the number of TRAP-positive MNCs and expression of Tnfrsf11a, Nfatc1, and Ctsk compared to the BMCs of siCONT-SHED-HepTx mice, siCONT-SHED-HepTx-BMCs (Il-17 and Tnfrsf11a than the siCONT-SHED-HepTx-BMCs (Tnfrsf11a, Nfatc1, and Ctsk; however, the treatment with anti-TNFSF11A antibody neutralized the IL-17-enhanced effects but did not affect the control IgG treatment (siCONT-SHED-HepTx exhibited the anti-bone loss effects in the CCl4 mice . siSTC1-4 mice A\u2013E. In\u00a0vpTx-BMCs . The siSpTx-BMCs C. Furthereatment .Figure\u00a064 mice expressed the increased serum levels of TGFB compared to the control mice (4 mice (In\u00a0vitro osteogenic capacity demonstrated that the CCl4-BMSCs exhibited the decreased osteogenic capacity compared to the Cont-BMSCs by Alizarin Red staining and RT-qPCR, but the BMSCs of siSTC1-SHED-HepTx mice, siSTC1-SHED-HepTx-BMSCs, suppressed the improved osteogenic capacity compared to the BMSCs of siCONT-SHED-HepTx mice, siCONT-SHED-HepTx-BMSCs (4-BMSCs exhibited the decreased expression of Sema3a compared to the Cont-BMSCs two weeks after osteogenic induction by RT-qPCR but the siSTC1-SHED-HepTx-BMSCs improved the Sema 3a level in the siCONT-SHED-HepTx-BMSCs (ELISA demonstrated that the CClrol mice , which i (4 mice . In\u00a0vitrTx-BMSCs . SEMA3A Tx-BMSCs .44-induced chronic liver disease model mice. 
MSC-releasing STC1 play an important role in treating several ROS-induced diseases, including retinal degeneration, obesity-induced hepatitis, and lung fibrosis [4-damaged liver fibrosis is related to oxidative stress and PPAR signaling pathway [4-induced liver fibrosis. These findings suggest that hepatic ROS-targeting may offer a novel modality for treating chronic liver fibrosis in SHED-Hep-based therapy.We demonstrate that hepatic fibro-inflammation is caused by hepatic ROS released from damaged hepatocytes in CClfibrosis . Recent pathway . A previ pathway . The pre4 exhibits liver toxicity, but does not cause bone toxicity, indicating that the liver-releasing factors affects bone metabolism in CCl4-induced chronic liver disease, as correlated with the previous studies [in vivo and in\u00a0vitro studies indicate that liver releasing ROS induced expression of Il-17 and Tnfrsf11a in BMCs enhances the osteoclast formation via TNFSF11\u2013TNFRSF11A signaling. Moreover, we show that hepatic ROS-induced osteoblast dysfunction is associated with the bone reduction in CCl4-induced mice, as reported previously in cholestasis of patients and bile duct ligated or CCl4-treated mice [4-induced mice. Further functional knock-down of STC1 in donor SHED-Heps attenuate the suppressed osteoclast and inducible osteoblast functions of SHED-HepTx in CCl4-induced mice. Thus, these findings suggest that liver releasing ROS target the bone cells including BMCs and BMSCs to cause bone loss through the imbalance between osteoclast and osteoblast differentiation in chronic liver fibrosis and indicated that SHED-Hep-based therapy targets liver releasing ROS to regulate the bone metabolism, as well as fibro-inflammation, in chronic liver fibrosis.The present study demonstrates that CCl studies ,38. Howe studies ,38. It i studies . Our in ted mice ,32,40,414 induced mice. 
Liver-releasing ROS recruits IL-17-producing immune cells into the injured liver of chronic liver disease [Saa-overexpression recruits IL-17-secreting neutrophils in bone marrow, leading to exacerbating bone loss [4-induced hepatic ROS enhances the expression of bone marrow Il-17 and secretion of hepatic SAA1 in chronically CCl4-treated mice, we speculate that IL-17-secreting immune cells may contribute the liver\u2013bone axis to induce bone loss in chronic liver diseases. Further study will be necessary to elucidate the mechanism of requiting IL-17-producing cells into bone marrow under hepatic ROS condition.We speculate another pathological sequence of gained expression of IL-17 in BMCs of CCl disease ,43. Receone loss . An incrone loss . Given t4-treated mice, we suppose another possibility of anti-bone loss efficacy in SHED-Hep-based therapy that the recipient BMSC-targeting STC1 released from SHED-Heps might contribute the bone recovery in chronic liver disease. Recent STC1 knock-in and knock-down study shows the STC1-enriched EVs released from adipose MSCs participate in angiogenesis in carotid endarterium mechanical injury [4-treated mice with SHED-Hep-Tx. Further study will be necessary to elucidate the mechanism of SHED-Hep-releasing STC1 target the recipient BMSCs in chronic liver disease.Given the present findings that liver-releasing ROS targets BMSCs in chronically CCll injury . SHED-rel injury ,23. STC1l injury . Locallyl injury ,49. We dTaken together, the present findings suggest hepatic ROS-induced chronic liver fibrosis causes bone loss by the imbalance of osteoclast and osteoblast activities in a liver\u2013bone axis. The present study also indicate that targeting of hepatic ROS may provide a valuable means for anti-bone loss treatment, as well as anti-fibro-inflammatory treatment, in chronic liver fibrosis. 
This hepatic ROS-targeting SHED-Hep-based approach may provide a feasible tool for the development of effective therapies for various liver disorders and their associated secondary disorders.Conceptualization, TY; Formal analysis, SS, SM, and TY; Investigation, SS, SM, HY, RY, JF, KY, TM, and TY; Resources, SS, HY, and TY; Data curation, SS, SM, HY, and TY; Writing \u2013 Original Draft, TY; Writing \u2013 Review & Editing, all authors; Visualization, SS and TY; Supervision, SO, TaT, ToT, and TY; Project administration, SS and TY; Funding acquisition, SS, SM, and TY; All authors approved the manuscript for publication."} +{"text": "The viral G-protein-coupled receptor (vGPCR) BILF1 encoded by the Epstein\u2013Barr virus (EBV) is an oncogene and immunoevasin and can downregulate MHC-I molecules at the surface of infected cells. MHC-I downregulation, which presumably occurs through co-internalization with EBV-BILF1, is preserved among BILF1 receptors, including the three BILF1 orthologs encoded by porcine lymphotropic herpesviruses (PLHV BILFs). This study aimed to understand the detailed mechanisms of BILF1 receptor constitutive internalization, to explore the translational potential of PLHV BILFs compared with EBV-BILF1.A novel real-time fluorescence resonance energy transfer (FRET)-based internalization assay combined with dominant-negative variants of dynamin-1 (Dyn K44A) and the chemical clathrin inhibitor Pitstop2 in HEK-293A cells was used to study the effect of specific endocytic proteins on BILF1 internalization. Bioluminescence resonance energy transfer (BRET)-saturation analysis was used to study BILF1 receptor interaction with \u03b2-arrestin2 and Rab7. In addition, a bioinformatics approach informational spectrum method (ISM) was used to investigate the interaction affinity of BILF1 receptors with \u03b2-arrestin2, AP-2, and caveolin-1.We identified dynamin-dependent, clathrin-mediated constitutive endocytosis for all BILF1 receptors. 
The observed interaction affinity between BILF1 receptors and caveolin-1 and the decreased internalization in the presence of a dominant-negative variant of caveolin-1 (Cav S80E) indicated the involvement of caveolin-1 in BILF1 trafficking. Furthermore, after BILF1 internalization from the plasma membrane, both the recycling and degradation pathways are proposed for BILF1 receptors. The similarity in the internalization mechanisms observed for EBV-BILF1 and PLHV1-2 BILF1 provides a foundation for further studies exploring a possible translational potential for PLHVs, as proposed previously, and provides new information about receptor trafficking.
The online version contains supplementary material available at 10.1186/s11658-023-00427-y.
Several herpesviruses encode G-protein-coupled receptors (vGPCRs), which are transmembrane proteins related to endogenous GPCRs, and have presumably been acquired from hosts over years of coevolution [1–3].
Cells were incubated in Opti-MEM at 4 °C to prevent internalization during labeling. Afterwards, cells were washed four times using HBSS supplemented with 1 mmol/L CaCl2, 1 mmol/L MgCl2, and 20 mmol/L HEPES, pH 7.4. Then, 50 µmol/L of prewarmed (37 °C) fluorescein-O′-acetic acid was added to the cells, and the measurements were recorded immediately after. Internalization was measured every 4 min for a total of 88 min at 37 °C in a PerkinElmer EnVision 2104 Multilabel Reader using a 340 nm excitation filter. Emissions were detected using 520 nm (acceptor) and 615 nm (donor) emission filters. Results are presented as a ratio of donor over acceptor emissions (615/520 nm). The first value (timepoint 0) for each curve was used as a baseline. Control, empty-vector-transfected cells are presented on graphs and were used for normalization. Experiments were performed at least three times in triplicate. 
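The ratio, baseline, and area-under-the-curve computation described for this assay can be sketched as follows; the traces are toy numbers sampled at the assay's 4-min interval, not measured data:

```python
def fret_ratios(donor_615, acceptor_520):
    """Per-timepoint FRET signal: donor emission divided by acceptor emission."""
    return [d / a for d, a in zip(donor_615, acceptor_520)]

def baseline_and_auc(ratios, dt=4.0):
    """Subtract the timepoint-0 value as baseline, then integrate the
    shifted curve by the trapezoid rule (dt = sampling interval, min)."""
    base = ratios[0]
    shifted = [r - base for r in ratios]
    return sum((shifted[i] + shifted[i + 1]) / 2 * dt
               for i in range(len(shifted) - 1))

# Illustrative traces in arbitrary units; a rising donor/acceptor ratio
# reflects accumulating internalization over time.
donor = [100, 110, 120, 130]
acceptor = [200, 200, 200, 200]
ratios = fret_ratios(donor, acceptor)  # ≈ 0.50, 0.55, 0.60, 0.65
print(baseline_and_auc(ratios))        # trapezoid AUC of the shifted curve
```

AUC values computed this way for different conditions can then be normalized to a reference receptor (as done here against EBV-BILF1) for comparison.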
To compare the amount of internalization in the different conditions, the area under the curve (AUC) parameter was calculated as described previously.
BRET2 and BRET1 experiments were performed as previously described [38, 39]. Cells were co-transfected with RLuc8-tagged BILF1 receptors and GFP2-tagged β-arrestin 2, 17aa, or EGFP-tagged Rab7 using Lipofectamine LTX reagent. Forty-eight hours after transfection, 180 µL of resuspended cells at a density of ~1.1 million cells/mL were distributed in 96-well microplates. Then, 10 µL of 100 µM coelenterazine 400A (BRET2) or coelenterazine h was added to each well using an injector. Sequential measurements of the RLuc8 luminescence signal at 410 nm (BRET2) or 480 nm (BRET1) and of the light emitted from excited GFP2 at 515 nm (BRET2) or EGFP at 540 nm (BRET1) were performed using a TriStar LB 942 microplate reader. Results are presented as ratios and expressed in milliBRET units (mBU); BRET ratio × 1000. The expression levels of RLuc8- and GFP2- or EGFP-tagged constructs for each experiment were assessed on the basis of total luminescence and fluorescence. Measurements were performed in triplicate.
According to the ISM approach, sequences (protein or nucleotide) are converted into signals by assigning numerical values to each constituent (amino acid or nucleotide). These values correlate to the electron–ion interaction potential (EIIP), a parameter that determines the electronic properties of amino acids and nucleotides. Fourier transform is then used to decompose the resulting signal into periodic functions, resulting in an output consisting of a series of frequencies and their amplitudes. The obtained frequencies correspond to the distribution of structural motifs with defined physicochemical characteristics that are responsible for the biological function of the sequence. 
By comparing the biological or biochemical function of proteins, we can detect code–frequency pairs that are specific to their common biological properties [41].
The immunoprecipitation assay was performed in HEK-293 cells, which were transiently transfected with RLuc8-tagged BILF1 receptors. Then, 48 h after transfection, cells were washed with cold PBS and were scraped and lysed for 20 min at 4 °C in NP-40 lysis buffer. Lysates were transferred to a cold tube, and the debris was removed by centrifugation. To control for transfection efficiency, 25 µL of the lysate was stored. The remaining lysate was incubated with monoclonal anti-AP-2 antibody (4 µg/mL) (Sigma-Aldrich) at 4 °C for 1 h. Thereafter, 50 µL of protein G agarose beads were added, and the tubes were incubated at 4 °C overnight. The next day, the samples were centrifuged, and the immunoprecipitated complexes were washed extensively in NP-40 lysis buffer. After the final wash, the pellet was suspended in 550 µL of lysis buffer, and 180 µL was plated in white 96-well microplates. The total luminescence signal was measured in the presence of 10 µL of 100 µM coelenterazine 400A (Biotium) per well using a TriStar LB 942 microplate reader (Berthold Technologies). Results are presented as raw data with subtracted background. Experiments were performed three times.
Data were analyzed using GraphPad Prism (9.3.1) and reported as the mean ± standard error of the mean (SEM). Statistical analysis was performed with GraphPad Prism using one-way analysis of variance. The statistical test was chosen on the basis of the data distribution determined using the normality test with GraphPad Prism. A P-value < 0.05 was considered statistically significant. 
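The ISM pipeline described above (EIIP encoding followed by Fourier decomposition, with a cross-spectrum to highlight shared frequencies) can be sketched as below. The EIIP values are a small subset as commonly quoted in the ISM literature and should be treated as illustrative, as should the toy sequences:

```python
import cmath

# EIIP values (in Rydbergs) for a few amino acids; illustrative subset,
# not the full published table.
EIIP = {"A": 0.0373, "G": 0.0050, "L": 0.0000, "K": 0.0371, "S": 0.0829}

def ism_spectrum(seq):
    """Encode a sequence as EIIP values, remove the mean (DC component),
    and return the DFT amplitude at each non-negative frequency."""
    x = [EIIP[aa] for aa in seq]
    mean = sum(x) / len(x)
    x = [v - mean for v in x]
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                    for k in range(n)))
            for f in range(n // 2 + 1)]

def cross_spectrum(seq1, seq2):
    """Consensus spectrum: the pointwise product of two amplitude spectra
    emphasizes frequency components the sequences share."""
    a, b = ism_spectrum(seq1), ism_spectrum(seq2)
    return [x * y for x, y in zip(a, b)]
```

In the real method, a common peak in the consensus spectrum of functionally related proteins (such as F(0.216) reported for the BILF1 receptors) is taken to reflect a shared biological property.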
The AUC was normalized to EBV-BILF1 (100%), and the results showed\u2009~\u200935% and\u2009~\u200943% lower constitutive internalization for PLHV1-BILF1 and PLHV2-BILF1, respectively parameter and their half-time values , a previously described selective inhibitor of clathrin-mediated endocytosis , 50 and BILF1 receptors were transfected alone or together with increasing concentrations of Dyn K44A Fig.\u00a0. DNM Dyn2-\u03b2arr2 with RLuc8-tagged BILF1 receptors. Similarly, BRET2 saturation experiments have been previously performed, confirming the interaction between \u03b2-arrestin 2 and human GLP-1R [2-\u03b2arr2 co-transfected with constant concentrations of RLuc8-BILF1 receptors, induced the so-called bystander BRET, presented as a simple linear regression curve (R2\u2009>\u20090.8). The result was comparable to the finding of the control experiment, wherein we measured the interactions between BILF1 receptors and the 17aa membrane insert (R2\u2009>\u20090.9) showed that there was high interaction affinity (high S/N ratio) between all BILF1 receptors and AP-2. Furthermore, results of the immunoprecipitation assay confirmed this interaction for BILF1 receptors , which wThe largest effect of the Cav S80E was observed with EBV-BILF1- and PLHV1-BILF1-mediated endocytosis, where the addition of 7\u00a0ng per well of the mutant resulted in 62\u201268% lower AUC values. For PLHV2 BILF1, the co-transfection resulted in a\u2009~\u200953% lower AUC Fig.\u00a0b. It is 50 values of 0.028\u20120.172 [t1/2 of 20\u00a0min) [t1/2 values of\u2009~\u200915\u00a0min and\u2009~\u20095\u00a0min, respectively) [t1/2 of 20\u00a0min and even slower constitutive internalization for PLHV1-2-BILF1 with a t1/2 of 37\u201245\u00a0min. 
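The half-times quoted here (t1/2 of about 20 min for EBV-BILF1 and 37–45 min for PLHV1-2 BILF1) follow from a first-order kinetic model of internalization, under the common assumption of one-phase exponential association, where t1/2 = ln 2 / k; a minimal sketch with an illustrative rate constant:

```python
import math

def half_time(k):
    """Half-time of a first-order (one-phase exponential) process."""
    return math.log(2) / k

def internalized_fraction(t, k):
    """One-phase association: fraction of receptor internalized at time t."""
    return 1 - math.exp(-k * t)

# A rate constant chosen to give t1/2 = 20 min, similar in magnitude to the
# value reported for EBV-BILF1; purely illustrative.
k = math.log(2) / 20
print(round(half_time(k), 1))                   # 20.0 min
print(round(internalized_fraction(20, k), 2))   # 0.5 at one half-time
```

A slower receptor (larger t1/2) corresponds to a smaller k, so at any fixed time a smaller fraction of it has left the cell surface.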
A role of constitutive internalization for EBV-BILF1 has been proposed previously, suggesting that the receptor forms a complex with MHC-I molecules at the plasma membrane and induces its internalization, resulting in hindered immunorecognition by CD8+\u2009T cells [Internalization kinetics have been previously reported for other vGPCRs and GPCRs, and examples include thyrotropin-releasing hormone receptor (TRH) with rapid internalization (2.2\u00a0min) , gonadot 20\u00a0min) , and CXCctively) . Moreovectively) . Compare\u2009T cells . The kinPrevious studies reported the use of Dyn K44A , 50, 72 \u2212/\u2212 cells that express few or no caveolae, suggesting that caveolin-1 regulation of endocytosis does not depend on the formation of caveolae. In the same cell line, they also reported caveolin-1-mediated regulation of EGF-R diffusion and signaling [The caveolae-mediated pathway has been described as an alternative pathway used by several GPCRs, viruses, and bacteria to enter the cell . Despiteignaling . In our Furthermore, increasing concentrations of Cav S80E reduced BILF1 surface expression. This suggested that the receptor is retained intracellularly after impairment of the caveolin function. Previously, a chaperone function of caveolin was proposed and was shown to be important for several GPCRs. Using the DNM of caveolin-1 and generation of receptor mutants with modified caveolin binding sites, the impaired surface expression was shown for glucagon-like peptide 1 (GLP-1) receptor, insulin (IR) receptor, excitatory amino acid carrier 1 (EAAC1), and type 1 receptor for angiotensin II (AT1), suggesting that these receptors require functional caveolin-1 to be expressed at the cell surface , 92\u201395.After internalization from the plasma membrane, GPCR trafficking leads to a recycling or degradation pathway. Previous studies on KSHV-ORF74 showed trafficking of the vGPCR through both recycling and late endosomes/lysosomes . 
Here, wIn summary, we have shown that BILF1 receptors exhibit slow constitutive internalization through a \u03b2-arrestin-independent, clathrin-mediated pathway. Furthermore, we have also identified that caveolin-1 may potentially be involved in BILF1 trafficking. After internalization, the BILF1 receptors are presumably processed through both recycling and degradation pathways. The results can serve as a foundation for future studies to explore a PLHV-infected porcine model as a mechanism to study BILF1 as a potential therapeutic target in EBV-related disease.Additional file 1. Informational spectrum (IS) frequency representing common informational characteristics determined using the informational spectrum method (ISM). Common peak corresponding to IS frequency F(0.216) represents physicochemical characteristics based on the structural motif distribution of the BILF1 receptors and corresponds to the biological function of the protein.Additional file 2. Principle of the real-time FRET-based method and the intracellular receptor pool calculation. Schematic representation of the principle used to calculate the intracellular receptor pool using RT-FRET-based internalization method.Additional file 3. BILF1 receptor expression in HEK-293 cells co-transfected with caveolin and dynamin DNMs and in \u03b2-arrestin 1/2 KO cells. The figure shows the expression of BILF1 receptors in HEK-293A or \u03b2arr1/2 KO cells, which was measured in parallel with internalization using real-time FRET-based method. The presented expression was measured at timepoint 0\u00a0min for all BILF1 receptors in different conditions.Additional file 4. Control experiments for \u03b2-arrestin-mediated internalization and \u03b2-arrestin recruitment. Figure shows the results from real-time FRET-based internalization assay for control GIP-R and the control BRET2 saturation assay, where we co-expressed BILF1 receptors together with a membrane insert in HEK-293 cells. 
The linear regression curve represents random collision between surface-expressed BILF1 receptors and membrane insert.Additional file 5. Immunoprecipitation experiments. Immunoprecipitation experiments, confirming the interaction between BILF1 receptors and AP-2."} +{"text": "Trophic interactions between mobile animals and their food sources often vector resource flows across ecosystem boundaries. However, the quality and quantity of such ecological subsidies may be altered by indirect interactions between seemingly unconnected taxa. We studied whether emergent macrophytes growing at the aquatic\u2013terrestrial interface facilitate multi\u2010step aquatic\u2010to\u2010terrestrial resource flows between streams and terrestrial herbivores. We also explored whether aquatic animal aggregations indirectly promote such resource flows by creating biogeochemical hotspots of nutrient cycling and availability.Odocoileus virginianus) in eastern North America vector nutrient fluxes from streams to terrestrial ecosystems by consuming emergent macrophytes (Justicia americana) using isotope and nutrient analyses of fecal samples and motion\u2010sensing cameras. We also tested whether mussel\u2010generated biogeochemical hotspots might promote such fluxes by surveying the density and nutrient stoichiometry of J. americana beds growing in association with variable densities of freshwater mussels .We tested whether white\u2010tailed deer (J. americana) vegetation, whereas upland deer ate more terrestrial foods. Motion\u2010sensing cameras showed deer eating J. americana more than twice as frequently at mussel\u2010generated hotspots than non\u2010mussel sites. However, mussels were not associated with variation in J. americana growth or N and P content\u2014although N isotopes in J. americana leaves did suggest assimilation of animal\u2010derived nutrients.Fecal samples from riparian deer had 3% lower C:N and 20% lower C:P ratios than those in upland habitats. 
C and N isotopes suggested riparian deer ate both terrestrial and aquatic foods. Our analyses of fecal pellets and of macrophytes suggest that cervid-driven aquatic-to-terrestrial nutrient flows may be widespread and ecologically important. Mobile animals often conduct resource flows across ecosystem boundaries. Despite their broad geographic distribution and anecdotal accounts of feeding on aquatic vegetation, white-tailed deer have yet to be explored as potential vectors for aquatic-to-terrestrial resource flows. Here, we found that white-tailed deer likely mediate transfers of aquatic-derived nutrients into terrestrial habitats when they feed on macrophytes. Animals play important roles in conveying resource subsidies in all ecosystem types. A well-studied example is the moose, which enters boreal lakes and ponds to feed on aquatic plant matter; when moose return to land, they transfer large amounts of aquatic-derived nutrients to terrestrial ecosystems. Macrophytes growing near mussel beds may likewise be enriched, and more variable, in 15N: the macrophytes improve mussel habitat by stabilizing sediments, while mussel excretion helps meet the plant's macronutrient demands. Because of this reciprocal relationship, mussel-generated biogeochemical hotspots may shape J. americana as a food source. We hypothesized that J. americana facilitates an indirect pathway allowing white-tailed deer to transfer aquatic animal-derived nutrients into terrestrial ecosystems and that freshwater mussel-generated hotspots may enhance the magnitude and nutritional quality of this subsidy. We compared C and N isotopes and macronutrient stoichiometry (C:N:P) from J. americana tissue and from deer fecal pellets in contrasting habitats to determine whether deer consume significant amounts of J. americana and whether such consumption increases deer diet quality. We also used motion-sensing cameras to evaluate whether deer feed more frequently on J. americana at mussel-generated hotspots and used the density and C:N:P stoichiometry of J.
americana as metrics of the quantity and quality of this macrophyte as a food source. We tested the following hypotheses: (H1) white-tailed deer fecal samples collected from riparian zones would have relatively higher N and P content and be more enriched in 15N and 13C compared with those collected from upland ridges bounding the watershed, because of access to nutritionally and isotopically enriched macrophytes; (H2) deer more frequently consume J. americana from mussel sites compared with other stream segments and terrestrial vegetation because of greater nutrient content; (H3) mussel-generated hotspots increase ambient N and P concentrations via excretion or mortality, which increases J. americana density and/or the relative N and P content of J. americana tissues; and (H4) regardless of nutrient concentrations, J. americana tissue \u03b415N values would increase at mussel-generated hotspots because more of the available N will be animal-derived. 2.1 All sampling for the study described herein was conducted in the Kiamichi River watershed of southeastern Oklahoma, USA. \u03b413C and \u03b415N values were calibrated using externally certified standards for C and N content. The algae (Spirulina) standard was used for QA/QC and had an average standard deviation of <0.2\u2030 for both \u03b413C and \u03b415N between sample runs. Total phosphorus (P) content was estimated by combustion at 500\u00b0C and acid digestion at 105\u00b0C, followed by soluble reactive phosphorus (SRP) analysis by the molybdate blue method. For upland comparisons, we collected Smilax spp. leaf samples at each upland fecal sample location; Smilax spp. is a preferred food source of deer in the Ouachita Forest. We measured carbon (C) and nitrogen (N) content and isotopes in fecal pellets using a Thermo Isolink CN Elemental Analyzer integrated with a Thermo Delta V Advantage IRMS through a Conflo IV (Thermo Fisher Scientific). J.
americana beds were monitored with motion-sensing cameras: deer entering a bed triggered a 30-s time-stamped video, and we recorded whether they were observed consuming J. americana. We placed cameras at 10 stream reaches, but flooding caused the loss of five cameras. One additional camera malfunctioned and ceased recording data early in the survey, leaving us with only four stream reaches at which we could compare herbivore activity: two cameras overlooking stream reaches that contained mussel-generated biogeochemical hotspots and two reaches with no mussels. The loss of equipment limited our ability to compare deer behavior between locations. However, we still explored whether the data we were able to retrieve aligned with our hypotheses by comparing differences in the frequencies with which white-tailed deer visited and foraged at mussel reaches (sites KTM and KS7M) and non-mussel reaches (sites K2N and KTN). We also compared the proportion of individuals counted at each site that were seen eating J. americana. To test whether terrestrial herbivores more frequently consumed macrophytes at mussel-generated hotspots, we analyzed data collected during a motion-sensing game camera survey originally described anecdotally, but not analyzed, in Lopez et al. 2.2 We surveyed J. americana beds along a natural mussel density gradient from July 10 to August 14, 2019. Eight sites were along an ~118 km stretch of the Kiamichi River, OK, and one site was on North Jackfork Creek, a major tributary of the river. To test the potential effect of mussels on ambient nutrients, we estimated gravel bar porewater nutrient concentrations within J. americana beds at each site. We sampled ammonium (NH4+-N) by the phenol-hypochlorite method and SRP using the molybdate blue method, and sampled a minimum of one plot per 15 m (range = 2\u201310 plots).
We also sampled environmental covariates that can influence plant growth and nutrient composition: light availability and median sediment grain size (measured as the length of the medial axis of 50 individual grains). Site-level values for J. americana and upland Smilax spp. leaf tissue and for deer fecal pellet stoichiometry and isotopes are reported in the Appendix. We compared Smilax spp. and J. americana to assess whether J. americana contributes more to the diet of deer in riparian habitats than deer in upland habitats. We chose not to use a mixing model to test this hypothesis because we could not reasonably assume the two food resources that we sampled comprise the entire diet of deer. We considered mussel density, proportion of damaged or clipped stems (compensatory effects), sediment size, and porewater NH4+-N:SRP ratio (nutrient effects) as potential drivers of J. americana density and stoichiometry, noting that mussel presence is often correlated with sediment stability. Due to the large number of models tested, when multiple models for a given response variable had \u0394AICc values <2 (indicating similar fit), we present only the model that explained the most variance based on its adjusted R2 value. To test the J. americana \u03b415N response to mussel density, we only had one driver to consider, so we used Siegel's robust regression. Riparian fecal samples had lower C:N and C:P ratios than upland samples, indicating the egestion of more excess N relative to P. Riparian and upland fecal samples also differed in isotopic composition. The number of deer counted per video was similar between mussel (2.40 \u00b1 0.39) and non-mussel (1.88 \u00b1 0.64) sites, but the frequency of foraging at mussel sites was over 2.5 times higher than at non-mussel (0.63 \u00b1 0.38) sites, and the proportion of individuals seen eating J. americana was greater than at non-mussel (31 \u00b1 16%) sites. Porewater NH4+-N did not strongly covary with mussel density or sediment size. J.
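The AICc-based model selection described above can be sketched in a few lines. This is an illustration only: the candidate model names, log-likelihoods, and parameter counts below are hypothetical, not values from the study.

```python
import math

def aicc(log_likelihood: float, k: int, n: int) -> float:
    """Corrected Akaike Information Criterion (AICc) for small samples.

    log_likelihood: maximized log-likelihood of the fitted model
    k: number of estimated parameters
    n: number of observations
    """
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def rank_models(models, n):
    """Return (name, delta_AICc) pairs sorted from best to worst fit."""
    scores = {name: aicc(ll, k, n) for name, (ll, k) in models.items()}
    best = min(scores.values())
    return sorted(((name, s - best) for name, s in scores.items()),
                  key=lambda t: t[1])

# Hypothetical candidate models for stem density at n = 9 sites:
# each entry maps a model name to (log-likelihood, parameter count).
candidates = {
    "shade + NH4:SRP": (-12.1, 4),
    "mussel density":  (-14.8, 3),
    "sediment size":   (-15.2, 3),
}
ranked = rank_models(candidates, n=9)
# Models with delta AICc < 2 of the best would be treated as similar fits,
# and the one with the highest adjusted R2 would then be reported.
```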
americana stem density did vary with porewater nutrient availability, but the effect did not appear to be associated with mussel density and was constrained by the negative effect of percent shade, suggesting potential co-limitation of J. americana growth by light and N. Mussel density was not strongly related to macronutrient concentrations, although SRP did slightly increase in association with mussel density. Justicia americana tissue stoichiometry did not respond to porewater nutrient stoichiometry, further indicating a lack of any mussel-related macronutrient effect on J. americana. Increasing median sediment size and light availability tended to increase C content. Leaf C:P varied by 65% across sites, increasing with sediment size but decreasing with percent shade, suggesting that leaf C content is associated with light and physical habitat structure at our sites. Increases in sediment size were also associated with increases of 42% in stem C:N. No other associations between J. americana tissue stoichiometry and the drivers we tested were detected. This study provides evidence that white-tailed deer are a previously unrecognized vector for aquatic-derived nutrients to flow into nearby terrestrial ecosystems via herbivory on emergent macrophytes and subsequent defecation on land. White-tailed deer feces in terrestrial riparian habitats were more nutrient rich and showed isotopic signatures closer to aquatic macrophytes than feces in upland habitats, which aligned closer to terrestrial vegetation (supporting H1 and H2). Although our motion-sensing camera survey was limited by flooding, we did find that white-tailed deer fed more frequently on macrophytes at freshwater mussel-generated biogeochemical hotspots. This pattern aligned with H2, but due to the loss of equipment and resultant small sample size, further evidence is needed to claim support or lack thereof for this hypothesis.
N and P dynamics did not covary with mussel density as we initially hypothesized (contrary to H2 and H3). However, the isotopic data formed two separate clusters: riparian fecal samples grouped with macrophytes, while upland fecal samples grouped with Smilax spp. These two clusters are consistent with known differences in the composition of terrestrial and aquatic plants, with riparian samples and macrophyte tissues both being enriched in 15N and 13C relative to upland samples and terrestrial plants. Author contributions: data curation (lead); formal analysis (lead); funding acquisition (supporting); investigation (lead); methodology (lead); visualization (lead); writing \u2013 original draft (lead); writing \u2013 review and editing. Daniel C. Allen: data curation (supporting); methodology (supporting); resources; writing \u2013 review and editing. Caryn C. Vaughn: data curation (supporting); funding acquisition (lead); investigation (supporting); methodology (supporting); resources; visualization (supporting); writing \u2013 review and editing. The authors declare that they have no conflicts of interest. Appendix S1."} +{"text": "Concordant categorical risk ratings were assigned in just over a third of cases, suggesting that consistency remains a concern with the system, particularly when conceptually disparate tools are applied. Densities of criminogenic needs varied widely among persons assigned the same risk level by the Static-99R and diverged from the descriptions ascribed by the system. These findings can inform clinical assessments and further refinement of the system. This study examined the Council of State Governments\u2019 five-level system for risk communication, as applied to the Static-99R and Violence Risk Scale\u2013Sexual Offense Version (VRS-SO). Aims of the system include increasing consistency in risk communication and linking risk categories to psychologically meaningful constructs.
We investigated concordance between risk levels assigned by the instruments, and distributions of VRS-SO dynamic needs associated with Static-99R risk levels, among a multisite sample. Technology to assess risk for recidivism continues to evolve, reflecting empirical and practical advancements. Risk instruments continue to proliferate, and when given only a categorical label (e.g., high risk), professionals may vary widely in their interpretations of the label. Third, Level II was defined, essentially, by accommodating those persons with scores falling between Levels I and III. Defining the highest two categories was less straightforward given the focus on sexual recidivism rates, because no group was identified that could reasonably be described as being virtually certain to reoffend, which is a defining feature of Level V in the original system. Ultimately, the developers opted to populate five total levels using criterion-referenced or relative risk indicators, but assigned labels of IVa and IVb to the highest categories while omitting Level V. Level IVb, the highest category, includes persons with an expected recidivism rate that is double that of the category below and 4 times that of Level III. Level IVa, essentially, comprises individuals falling between Level IVb and Level III. The Violence Risk Scale\u2013Sexual Offense Version is a fourth-generation risk assessment instrument and treatment planning tool for persons with sexual offending histories. Updated VRS-SO risk levels were developed accordingly. Developers of the two sexual violence risk instruments referenced above navigated hurdles in adopting the system by applying 5-year rather than 2-year sexual recidivism rates, and by both omitting Level V and dividing Level IV into IVa and IVb. While the preceding examples pertain to specialized tools, recent research applying the system to a general risk/need tool, more closely aligned to the foundation of the system, has also illustrated some important considerations.
Readers should also note that converting risk instruments to the five-level system is not simply a matter of style or semantics: the changes have the potential to impact both correctional resources and individuals\u2019 lives. Much of the impetus for developing a common language in risk assessment stems from the aforementioned problems with risk communication, such as inconsistencies in terminology, in the meaning applied to terminology, and in the conclusions derived from shared data, when comparing across risk tools. Theoretically, shared categories and shared principles for populating them should facilitate comparison and understanding across criminal justice applications. In fact, an explicit aim of the five-level system developers was to provide such a common language. The present study is an attempt to validate the five-level system by focusing on two of its objectives: consistency in risk level assignment, and construct validity of clinical descriptions tied to risk levels. This was done by evaluating consistency in risk level assignment between established sexual violence risk instruments and by exploring construct validity via profiles of psychologically meaningful risk factors associated with assigned risk levels. More specifically, the first research objective was to assess concordance between risk levels assigned by the two instruments, the Static-99R and the VRS-SO, and by VRS-SO Static and Dynamic categories. The second research objective was to describe and compare the distributions of dynamic needs, as measured by the VRS-SO Dynamic scale, associated with Static-99R risk level assignments. In the authors\u2019 view, the Static-99R and VRS-SO are uniquely well suited for such a test of the five-level system. First, to the authors\u2019 knowledge, at the time of writing these instruments represent the first, and only, instruments to formally operationalize and adopt the five-level system for applied use.
In addition, while they are designed to predict the same outcome, the conceptual differences between them nonetheless provide a stern test of whether the five-level system can indeed ensure shared meaning regardless of the instrument employed. Such a test is warranted and necessary, given that the system itself does not make allowances for distinctions among the various generations of risk tools.N = 1,490; posttreatment N = 1,365). All individuals were serving custodial sentences due to sexual offenses and participated in a sexual-offense-specific treatment program while incarcerated. The four non-overlapping samples included two groups of consecutive admissions to the high-intensity Clearwater treatment program administered by the Correctional Service of Canada (CSC), from 1983 to 1997 , with cases selected if they had a minimum 10 years\u2019 community follow-up and had both pre- and posttreatment scores. Consistent with the Static-99R, VRS-SO total scores correspond to five risk categories adapted from the five-level system: Level I or Very Low (0 to 14), Level II or Below Average (15 to 23.5), Level III or Average (24 to 39.5), Level IVa or Above Average (40 to 49.5), and Level IVb or Well Above Average (50 to 72). Unique to the VRS-SO when compared with the Static-99R, dynamic total scores can be used to assign risk levels absent static scores, as follows: Level I (0 to 10.5), Level II (11 to 16.5), Level III (17 to 27.5), Level IVa (28 to 34.5), and Level IVb (35 to 51). Finally, for the VRS-SO static total scores absent dynamic scores, the following risk levels apply: Level I (0 to 1), Level II (2 to 5), Level III (6 to 12), Level IVa (13 to 15), and Level IVb (16 to 21).As mentioned briefly above, the VRS-SO\u2019s dynamic items are designed to evaluate changes in risk elicited through participation in formal treatment or by other credible change agents. 
One\u2019s level of change on each treatment need is assessed at multiple time points. Risk measures were rated by formally trained raters in each sample. In the case of the NaSOP sample, Static-99 and VRS-SO ratings were coded by service providers in real time, consistent with a prospective study design. In the other three samples, risk ratings were coded retrospectively for research purposes using institutional files. Although both pretreatment and posttreatment scores were available, pretreatment scores were utilized for key analyses, unless otherwise specified, given that the pretreatment ratings coincide with the application of Static-99R ratings. Readers should note that because risk level assignments are derived directly from scores with respect to both the Static-99R and VRS-SO, reliability between risk level assignments using a single score and the same tool was not in question in this study. To address the first research objective, cross-tabulation was completed to determine the proportion of cases assigned to each risk level by the respective instruments, the proportion of cases assigned to concordant and discrepant levels by the two instruments, and the distribution of discrepant cases across categories. Next, percentage agreement was calculated to illustrate the proportion of cases assigned to the same category by both instruments. Finally, a weighted kappa statistic was computed to evaluate agreement, incorporating the magnitude of disagreements across levels. To address the second objective, descriptive statistics were computed to ascertain the number of psychologically meaningful risk factors, as measured by the VRS-SO Dynamic scale, demonstrated by persons assigned to each Static-99R risk level or VRS-SO Static risk level. One-way analysis of variance was also used to determine whether VRS-SO Dynamic scores varied among Static-99R risk levels.
Finally, post hoc tests producing the Tukey beta statistic were used to compare mean VRS-SO Dynamic scores between pairs of Static-99R risk levels. These analyses were repeated comparing the VRS-SO Static and Dynamic five-level risk categories. Of note, and as would be expected, VRS-SO Static and Static-99R risk categories showed slightly better concordance, with an observed weighted kappa of .321 and fair levels of agreement. One-way analysis of variance indicated that VRS-SO Dynamic scores did differ among Static-99R risk levels at pretreatment, F = 115.63, p < .001, and posttreatment, F = 103.22, p < .001. However, Tukey\u2019s post hoc tests indicated that the mean VRS-SO Dynamic scores were not significantly different between Levels I and II, at either pretreatment or posttreatment. When the analyses were repeated for the VRS-SO Static and Dynamic scales, a similar pattern emerged: pretreatment, F = 103.06, p < .001, and posttreatment, F = 95.71, p < .001. Again, results of Tukey\u2019s beta post hoc multiple comparisons demonstrated that VRS-SO Dynamic scores (pre and post) were significantly different between all but VRS-SO Static Levels I and II. This study evaluated the consistency and construct validity of risk levels assigned by the Justice Center Five-Level system, using two specialized sexual offense risk assessment tools in a large multisite sample of men treated for sexual offenses. Risk levels assigned by the two tools were discrepant for approximately two thirds of the sample. Furthermore, persons assigned to particular risk levels by the Static-99R or the VRS-SO Static scale varied widely with regard to their profiles of psychologically meaningful risk factors/treatment needs, as measured by the VRS-SO dynamic factors. As noted in the introduction, researchers have rarely compared risk category assignments across instruments.
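The weighted-kappa agreement analysis described in the methods can be sketched as follows. This is a minimal linear-weighted Cohen's kappa over ordinal categories, not the authors' analysis software, and the level assignments below are hypothetical, not study data:

```python
def weighted_kappa(pairs, categories, weight="linear"):
    """Weighted Cohen's kappa for two instruments rating ordinal categories.

    pairs: list of (rating_a, rating_b) tuples
    categories: ordered category labels (e.g., five-level risk levels)
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(pairs)
    # Observed joint distribution and marginal distributions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in pairs:
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Disagreement weight grows with the distance between categories.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weight == "linear" else d * d
    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected

levels = ["I", "II", "III", "IVa", "IVb"]
# Hypothetical level assignments by two instruments for six cases.
ratings = [("I", "I"), ("II", "III"), ("III", "III"),
           ("III", "IVa"), ("IVa", "IVb"), ("IVb", "IVb")]
kappa = weighted_kappa(ratings, levels)
```

Adjacent-level disagreements are penalized less than distant ones, which is why weighted kappa suits ordinal risk levels better than plain percentage agreement.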
The current authors are familiar with only two examples of the latter type of concordance research. In our view, the current construct validity findings provide valuable insight into the meaning of the discrepancies identified above and may elucidate potential paths forward. A well-validated second-generation actuarial risk measure, the Static-99R is moderately correlated with sexual offense recidivism. Perhaps more notable, however, were the consistencies among Levels I and III as defined in the Static-99R evaluators workbook. Aggregate statistics, such as means and standard deviations, provide valuable information about the validity of risk measures, but they do not constitute the only psychometric properties of substantive interest for professionals. Examination of other relevant metrics, such as minimum and maximum scores, can also provide critical insights. For instance, in this study, VRS-SO Dynamic scores of the persons assigned to Level III by the Static-99R ranged from a low of 2.7 to a high of 51.0; this gulf in scores is equivalent to the difference between a person having only one known treatment need and a person maximally exhibiting all 17 possible risk factors tapped by the VRS-SO. Despite the observed discrepancy in needs, individuals at either extreme were assigned the same labels and descriptions by the Static-99R. For professionals conducting assessments and distributing reports that may influence high-stakes decisions about individuals, this study raises substantive concerns. Forensic practitioners have an ethical obligation to ensure that the opinions they proffer about individuals are based on scientifically validated practices. A potential solution to the noted problems with regard to risk assessment is illuminated by an accumulating evidence base supporting the incremental predictive and clinical validity of existing third- and fourth-generation risk/need measures.
With regard to sexual violence, notable evidence for this phenomenon can be observed among studies of the VRS-SO. For risk assessment, structured measures of known criminogenic needs may provide a way forward. Standardized measures tapping criminogenic factors like substance abuse, pro-criminal attitudes, and antisocial peer networks represent constructs of interest across diverse offending populations because they predict recidivism while guiding interventions consistent with the principles of effective offender rehabilitation. Finally, it is critical to remember that problems in producing standardized and meaningful terminology with respect to psychological assessments are, of course, not unique to risk assessment; the American Academy of Clinical Neuropsychology (AACN) recently confronted similar issues. To the authors\u2019 knowledge, this study represents one of the first to directly evaluate the construct validity of the psychologically meaningful elements of the Justice Center Five-Level system, as applied to a second-generation risk assessment instrument. The large and diverse multi-site sample of men who have sexually offended raises confidence in the generalizability of the findings. On the other hand, the study is limited in that it focuses on only two instruments and a specialized outcome: sexual offending. Further research will be necessary to determine whether similar phenomena arise with respect to other applications, tools, sampling populations, and offending categories. A rate of any offending of approximately 40% is associated with Level III based on the original system, and a sexual recidivism rate of approximately 40% has been pursued to populate Level III among persons who have sexually offended. A conceptual problem with this decision is that any indicator of recidivism based on a subset will be less common than an omnibus measure; some individuals who have violently offended will also engage in other offending behaviors.
To focus on only one type of offending is akin to focusing on only those offenses that occurred on weekdays. To compensate for the relatively lower rates of sexual as opposed to general recidivism, sexual violence tool developers have altered follow-up periods from 2 to 5 years; while defensible in many respects, this approach fundamentally changes the meaning of the base rates in question. If nothing else, these findings should prompt caution among professionals and decision-makers utilizing the five-level system\u2019s narrative descriptions of need profiles based on second-generation instruments. These descriptions are not interchangeable with those derived from third- and fourth-generation tools and may substantively misrepresent an individual\u2019s needs, thereby potentially misguiding management resources."} +{"text": "Glioblastoma Multiforme (GBM) is considered one of the most aggressive malignant tumors, characterized by a tremendously low survival rate. Although alkylating chemotherapy is typically adopted to fight this tumor, the repair activity of the O(6)-methylguanine-DNA methyltransferase (MGMT) enzyme can antagonize the cytotoxic effects of alkylating agents, strongly limiting tumor cell destruction. However, it has been observed that MGMT promoter regions may be subject to methylation, a biological process preventing MGMT enzymes from removing the alkyl agents. As a consequence, the presence of the methylation process in GBM patients can be considered a predictive biomarker of response to therapy and a prognostic factor. Unfortunately, identifying signs of methylation is a non-trivial matter, often requiring expensive, time-consuming, and invasive procedures. In this work, we propose to address MGMT promoter methylation identification by analyzing Magnetic Resonance Imaging (MRI) data using a Deep Learning (DL) based approach.
In particular, we propose a Convolutional Neural Network (CNN) operating on suspicious regions of the FLAIR series, pre-selected through an unsupervised Knowledge-Based filter leveraging both the FLAIR and T1-weighted series. The experiments, run on two different publicly available datasets, show that the proposed approach can obtain results comparable to (and in some cases better than) the considered competitor approach while using a much smaller model. Glioblastoma Multiforme (GBM) is considered one of the most aggressive malignant tumors beginning within the brain, cerebellum, and brain stem. Once diagnosed, neurosurgery, radiation therapy, and chemotherapy are the possible treatments. To support this line of research, the Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention Society (the MICCAI Society) recently jointly launched a competition (https://www.kaggle.com/competitions/rsna-miccai-brain-tumor-radiogenomic-classification, accessed on 13 July 2021) to identify the genetic subtype of glioblastoma using MRI, with the aim of detecting the presence of MGMT promoter methylation. Despite being far from conclusive, results achieved by different teams seem to suggest that some correlations may actually exist and can be found by using Deep Learning (DL) approaches. Nonetheless, several participants highlighted the difficulties associated with (i) the high inter-subject variability and (ii) the resulting need for a wider amount of data to train huge DL models. More in detail, we propose a Convolutional Neural Network (CNN), a particular artificial neural network consisting, among others, of convolutional layers able to autonomously learn a set of morphological and textural features that fit the specific task to solve.
Moreover, leveraging the fact that medical images are more than pictures, we propose a multimodal Knowledge-Based Filtering (KBF) approach to serve as an early fusion technique merging information coming from two different MRI series. In particular, we fuse the T1-weighted (T1-w) and the Fluid Attenuated Inversion Recovery (FLAIR) series, both very common in brain MRI, with the aim of retrieving as much useful information as possible from patients. The resulting system consists of a supervised approach operating on suspicious regions of the FLAIR series, pre-selected through an unsupervised knowledge-based filter leveraging both the FLAIR and T1-weighted series. To estimate the effectiveness of the proposed approach in a real clinical context, we also tested it on the UPENN-GBM dataset. To cope with these problems, in this work, we introduce a new simple but effective DL-based approach able to perform better than the official competition winner while using a much smaller model. The rest of the paper is organized as follows. In recent years, a few approaches were explored to build an efficient MGMT promoter methylation detector. Most of them adopt DL techniques, in particular CNNs, which are used to detect distinctive methylation features in the tumor areas, both in 2D slices and in 3D brain volumes. In 2018, L. Han et al. proposed one of the first such approaches. It is worth noting that, despite the reported solutions exploiting different models and approaches, they all share the need for detailed information about methylation sites or segmented tumor areas.
However, this is rarely available in a real scenario, a limitation highlighted even by the Brain Tumor AI Challenge. The first dataset is provided by the Brain Tumor AI Challenge (https://www.kaggle.com/competitions/rsna-miccai-brain-tumor-radiogenomic-classification). The second dataset is the UPENN-GBM one (https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70225642). As for most biomedical tasks, identifying a suited sample of subjects properly representing the real population is a non-trivial task. To try to limit the impact of this choice and to estimate the clinical effectiveness of the proposed approach, in this paper, we focus on these two datasets. Both datasets include Fluid Attenuated Inversion Recovery (FLAIR), T1-weighted (T1w), T1-weighted post-contrast (T1wCE), and T2-weighted (T2w) MRI sequences. The proposed pipeline comprises three modules: the Data Preparation step, generating isotropic and normalized acquisitions; the Knowledge-Based Filtering (KBF), leveraging medical knowledge to pre-select, in an unsupervised manner, the Regions of Interest (ROIs) corresponding to possible tumor regions in the MRI scans; and the MGMT promoter methylation identification, using a 2D or 3D CNN. The next sections detail each module, highlighting input and output while explaining the rationale behind the choices made. In this paper, we propose a DL-based approach for MGMT promoter methylation identification leveraging medical knowledge to deal with the lack of tumor segmentation masks.
In more detail, the implemented solution consists of three main blocks. The Data-Preparation step consists of volume retrieval, co-registration of the acquisitions to the same anatomical template, isotropic resampling, rotation to a standard orientation, and skull stripping. Slices are first sorted using the Slice Location tag, available in each DICOM file, obtaining for each patient a set of aligned acquisitions for the co-registration. The in-plane resolution is given by the Pixel Spacing tag, which consists of two values, while the distance between consecutive slices is given by the Spacing Between Slices tag. Since resolutions may differ across subjects, all patient volumes are rescaled to provide acquisitions with isotropic voxels. The Image Orientation attribute specifies the direction cosines of the first row and the first column with respect to the patient, and is composed of two three-element vectors defining the axis directions. The information included in this tag enables a proper rotation of the isotropic volume to a standard patient orientation space; at the end of the data preparation module, all the volumes have a sagittal orientation. Since volumes may include extra-cerebral tissues, which are not required for our purposes, a skull stripping process is performed. This process adopts a 3D semantic segmentation network for brain detection to generate a brain mask, which is used to crop everything outside of it; in this case, we exploit the HD-BET tool. In MRI acquisition, the slices are stacked into 3D volumes representing the brain. 
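The isotropic rescaling driven by the Pixel Spacing and Spacing Between Slices tags can be sketched with plain NumPy. This is a hypothetical nearest-neighbour illustration (the function name and arguments are ours), not the authors' pipeline, which additionally co-registers to an anatomical template and applies HD-BET:

```python
import numpy as np

def resample_isotropic(volume, row_mm, col_mm, slice_mm, target_mm=1.0):
    """Nearest-neighbour resampling of a 3D volume (slices, rows, cols)
    to isotropic voxels of `target_mm` per side."""
    src_spacing = np.array([slice_mm, row_mm, col_mm], dtype=float)
    src_shape = np.array(volume.shape, dtype=float)
    # Physical extent divided by target spacing gives the new grid size.
    new_shape = np.maximum(np.round(src_shape * src_spacing / target_mm), 1).astype(int)
    # Map each target voxel back to its nearest source voxel index.
    idx = [np.minimum((np.arange(n) * target_mm / s).round().astype(int), int(d) - 1)
           for n, s, d in zip(new_shape, src_spacing, src_shape)]
    return volume[np.ix_(idx[0], idx[1], idx[2])]
```

A production pipeline would typically use trilinear or spline interpolation (e.g. via `scipy.ndimage.zoom`) rather than nearest-neighbour, which is shown here only for self-containment.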
The proposed KBF module aims to compensate for the lack of segmentation masks, reducing the effort required by the physicians and making our methodology applicable to datasets where tumor areas are not identified. The KBF exploits properties of the T1-w and FLAIR sequences, resulting in a multi-modal knowledge-based pre-processing procedure. In particular, we implement an early fusion technique, in which information coming from multiple sources is merged to highlight different characteristics. To reduce the amount of data to process, we crop each FLAIR volume considering the smallest cubical box around the brain. We exploit a CNN to face the task of MGMT promoter methylation identification. In particular, we introduce the MGMTClassifier, a sequential network with seven convolutional blocks and two fully connected layers separated by the Rectified Linear Unit (ReLU) activation function. We consider the Brain Tumor AI Challenge data (denoted as dataset \u201cA\u201d hereafter) and another dataset gathered by the University of Pennsylvania (denoted as dataset \u201cB\u201d hereafter). We tested the proposed methodology on both datasets separately, also performing experiments by merging them to further assess the generalization ability of the designed approach. All the experiments were executed using a 5-fold cross-validation strategy. It is worth noting that we did not consider the test set provided by the Brain Tumor AI Challenge, since its labels have not been made public. We label as positive the volumes in which the methylation process is present and as negative the others. 
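The "smallest cubical box around the brain" crop can be illustrated as follows; `crop_cube` and its threshold parameter are assumptions made for this sketch, not the paper's code:

```python
import numpy as np

def crop_cube(volume, threshold=0.0):
    """Crop `volume` to the smallest cube enclosing all voxels above
    `threshold` (cube may be clipped at the volume border)."""
    mask = volume > threshold
    if not mask.any():
        return volume
    # Tight bounding box per axis: (first index, one past last index).
    bounds = [(idx.min(), idx.max() + 1) for idx in np.nonzero(mask)]
    side = max(hi - lo for lo, hi in bounds)  # cube side = largest extent
    slices = []
    for (lo, hi), dim in zip(bounds, volume.shape):
        pad = side - (hi - lo)            # grow the short axes symmetrically
        lo = max(0, lo - pad // 2)
        hi = min(dim, lo + side)
        slices.append(slice(lo, hi))
    return volume[tuple(slices)]
```

Cropping before the CNN keeps the input size fixed while discarding empty background, which is the stated motivation for this step.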
As a consequence, SEN corresponds to the fraction of methylation cases correctly identified, whilst SPE acts on the MRI volumes in which this process is not present (negative cases), reporting the portion of them properly predicted by the implemented model. All the code used to derive the results reported in this paper is available to the research community (https://github.com/priamus-lab/GBM-MGMT-Detection, accessed on 18 September 2022). To better frame the results achieved by the proposed approach, we also compared it against the solution proposed by Tunisia.ai, the competition-winning team, which implemented a 3D residual network trained from scratch considering only the T1-w CE sequence. For the comparison, we use the code that the team released on the competition website. As aforementioned, in this paper we did not use the test set provided by the competition, where the winners achieved 62% AUC, since its labels have not been made public; the 5-fold CV provides a more robust evaluation than the hold-out implemented in the competition. Our aim is to compare two different approaches, namely the one presented in this paper and the solution proposed by Tunisia.ai, for which we retain the input sequence used by the team (T1-w CE). All the experiments were run using Python 3.9, with the proposed CNN implemented in PyTorch (version 1.10), on a Linux workstation equipped with an AMD Ryzen 7 5000 CPU, 8 GB of DDR4 RAM and an NVIDIA RTX 3080 GPU. On dataset A, the proposed approach achieved SPE of 54.44%, PRE of 59.93% and F1 of 63.33%. Similarly, on dataset B the 3D MGMTClassifier outperforms the other models by a wide margin, achieving 60.06% in ACC, 74.03% in SPE, 64.40% in PRE and 52.53% in F1. To further assess the generalization ability of the proposed approach, we also experimented with a cross-dataset scenario, training on A and testing on B, and vice versa; in both cases, we retain the same 5-fold CV division, making the results comparable with the A+B setting. 
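The metrics named above (SEN, SPE, PRE, F1) follow the standard confusion-matrix definitions; a minimal self-contained sketch (the function is ours, not the paper's code):

```python
def classification_metrics(y_true, y_pred):
    """SEN, SPE, PRE and F1 from binary labels (1 = methylated / positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sen = tp / (tp + fn) if tp + fn else 0.0   # sensitivity (recall)
    spe = tn / (tn + fp) if tn + fp else 0.0   # specificity
    pre = tp / (tp + fp) if tp + fp else 0.0   # precision
    f1 = 2 * pre * sen / (pre + sen) if pre + sen else 0.0
    return sen, spe, pre, f1
```

In a 5-fold CV setting these would be computed per fold and then averaged.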
In this case, we still implement a 5-fold CV by merging, in each iteration, the corresponding folds previously identified on datasets A and B separately. In this work, we introduced a new approach leveraging deep learning and unsupervised voxel pre-selection to perform MGMT promoter methylation identification in brain MRI when suspect lesion masks are not available. In particular, we propose a Convolutional Neural Network (CNN) operating on suspicious regions of the FLAIR series, pre-selected through an unsupervised Knowledge-Based filter leveraging both FLAIR and T1-weighted series. To estimate the effectiveness of the proposed approach, we performed experiments on two different datasets: the Brain Tumor AI Challenge dataset (https://www.kaggle.com/competitions/rsna-miccai-brain-tumor-radiogenomic-classification) and the UPENN-GBM dataset (https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70225642). One of the biggest concerns associated with using AI models in a real clinical context, especially when performances are not astonishing, is their trustworthiness. Thus, we also report some Explainable-AI (XAI) analyses to assess the interpretability of the solution showing the best performance; in particular, we use the Integrated Gradients and Occlusion methods. Looking at the competition results, it is possible to note a big difference in terms of performance between the solutions proposed in the literature and those proposed in this work, as well as those submitted to the competition. "}
+{"text": "This Special Issue of the International Journal of Molecular Sciences provides up-to-date information about the effects of a range of toxicants on the reproduction and development of many animal species, including humans. 
As is so often the case, results from toxicant exposures in animal species can serve as sentinels for human toxicant exposures. Furthermore, reproductive systems, as well as developing embryos and fetuses, are at greater risk from exposure to most toxicants, from pharmaceuticals to environmental contaminants. Developing new alternative models, whether in vivo, in vitro, or in silico, is critical to increasing our understanding of how toxicants affect reproduction and development. Not only is modeling critical, but it is also essential to be able to carry out research at all levels of scientific inquiry\u2014from molecules to integrated systems of organisms. Toxicology is an incredibly complex and diverse area of biomedical science that includes numerous areas of specialization. The overarching goals for investigators working in all areas of toxicology are to identify and define exposures to potential toxicants, assess the risks, and mitigate the impacts. Currently, there are ten articles published in this Special Issue, Reproductive and Developmental Toxicology 2.0, providing a broad sampling of this vital and vibrant area of toxicology research. Several themes emerge when looking at the articles included in this Special Issue. First, there is a distinct emphasis on developmental toxicology, which indicates its importance in the broader area of toxicology. In the past, the assessment of developmental toxicity has relied on animal studies, primarily in mammals, the vast majority being rodents. However, new, more advanced, high-throughput modeling systems are rapidly being developed. As evidence of this shift in research focus, six of the ten articles report on studies that utilize non-mammalian testing systems, including Drosophila melanogaster, sea urchin, zebrafish, and Caenorhabditis elegans. 
Other articles address the effects of regulatory and environmental toxicants, including PAHs, on embryonic development; understanding such exposures remains a central goal of the field."}
+{"text": "The valuable information of this work would promote the further development of this research field, as well as others in aggregate. Organic luminogens with room temperature phosphorescence (RTP) have received great attention and developed rapidly for their wide application values. Until now, the internal mechanism and source of phosphorescence have remained obscure, especially regarding the relationship between molecular dimers and RTP emission. Hence, we designed and synthesized eight phenothiazine 5,5-dioxide derivatives to directly reveal how the monomer and dimer in packing affect the RTP behavior. Dimers with strong \u03c0-\u03c0 stacking show pure triplet excimer emission. An ideal model containing eight phenothiazine 5,5-dioxide derivatives was established to clearly prove the formation of the triplet excimer. Competition between monomer (T1) and excimer (T1*) phosphorescence based on \u03c0-\u03c0 stacking is the origin of their changed RTP properties. In order to design and produce smarter optoelectronic materials, it is particularly important to clarify their internal mechanism, especially the relationship between material structure and properties. Throughout history, people\u2019s perception of this has kept changing: the basic atoms correspond to notes, and a melody with an alignment of notes is similar to molecules constructed by atoms in specific sequences. Correspondingly, the MUSICs, which are heavily dependent on the aggregated states with various packing modes, resemble a symphony with the coming together of music produced by different instruments12. 
Thus, in the 21st century, scientists turned their eyes to the effect of molecular stacking, that is molecular aggregation science, as most materials were utilized in solid or aggregate states18.Advance and development in organic optoelectronic materials have enabled excellent innovations in our daily life for their wide applications in organic light emitting diodes (OLEDs), organic field effect transistors (OFETs), solar cells and bio/chemo probing etcime Fig. . Since 119. For example, excimer, the short-lived dimeric molecule formed from two species , in which the molecular packing affects the RTP effect heavilytc Chart . Then wh38. Inspired by it, in this work, a rational molecular design was carried out and eight target compounds with two phenothiazine-5,5-dioxide groups linked by alkyl chains with different carbon numbers were synthesized accordingly were easily synthesized in two steps with C-N coupling following an oxidation reaction in the presence of hydrogen peroxide and high-performance liquid chromatogram (HPLC) spectra solution in solution state at 77\u2009K, while the one at 500\u2009nm was thought to be from the effect of molecular packing, such as molecular dimer with triplet excimer emission. Also, much different RTP lifetimes were obtained for the crystals of these eight compounds. For the RTP peak at 445\u2009nm, the corresponding lifetimes ranged from 30.3\u2009ms to 142.8\u2009ms, while those at 500\u2009nm were from 59.8\u2009ms to 256.1\u2009ms. As the lifetimes for the bands at 445\u2009nm are comparable to or even longer than those at 500\u2009nm, the phosphorescence emission from high-lying triplet excited state (i.e. T2) can be excluded40. Based on the different RTP behaviors of these compounds, the application of multiple anti-counterfeiting was successfully realized behaviors, especially for the RTP behaviors, were studied in detail Figs. . Upon thved Fig. . In cryszed Fig. . In comp2PtzO-nC crystals. 
It could be clearly observed that intermolecular \u03c0-\u03c0 interactions widely exist for these crystals, although the corresponding strengths are different from each other for the introduction of different lengths of alkyl chain between two phenothiazine 5,5-dioxide units. To well evaluate the strength of \u03c0-\u03c0 interactions, the analyses of displacement angle (\u03b8) and vertical distance (d) for the adjacent benzene rings involved in \u03c0\u2013\u03c0 stacking were carried out41, in which smaller displacement angle and shorter vertical distance indicate stronger \u03c0\u2013\u03c0 interaction , strong \u03c0\u2013\u03c0 interaction could be clearly observed for the bilateral molecular dimers with small displacement angles (17.56\u00b0\u2009<\u2009\u03b8\u2009<\u200920.66\u00b0) and short vertical distances (3.56\u2009\u00c5\u2009<\u2009d\u2009<\u20093.86\u2009\u00c5). In crystal 2PtzO-9C with dominant triplet excimer emission (@500\u2009nm) and weak monomer RTP one (@445\u2009nm), two kinds of \u03c0\u2013\u03c0 interaction existed for the bilateral molecular dimers, in which one was strong and the other was relatively weak . As for other three crystals with comparable dual RTP emissions, much weaker \u03c0\u2013\u03c0 interactions were presented. For example, the displacement angles (\u03b8) and vertical distances (d) for 2PtzO-4C and 2PtzO-6C increased to 27.02/30.58\u00b0 and 3.84/4.06\u2009\u00c5 for the bilateral molecular dimers. In 2PtzO-5C, just unilateral dimer has been formed with weak \u03c0\u2013\u03c0 interaction . These weak \u03c0\u2013\u03c0 interactions within molecular dimers will lead to the competition between monomer and excimer phosphorescence, thus resulting in the dual RTP emissions. All in all, the universality of the relationship between \u03c0\u2013\u03c0 stacking strength and RTP behavior was clearly and accurately summarized in Fig. 
The smaller the displacement angle (\u03b8) and the shorter the vertical distance (d), the stronger the \u03c0\u2013\u03c0 interaction in the molecular dimer, and the greater the phosphorescence of the triplet excimer. In order to clarify the relationship between RTP emission and molecular packing, the single crystal structures of these eight compounds were measured and analyzed carefully. Although the displacement angle for the molecular dimer is relatively large, the short vertical distance (d\u2009=\u20093.47\u2009\u00c5) still leads to weak orbital coupling in the T1 state; this should be the main reason for its dominant triplet excimer emission in the crystal state. As for the 2PtzO-4C/2PtzO-5C/2PtzO-6C crystals with comparable dual RTP emissions, no obvious orbital coupling in the T1 state could be observed for the molecular dimers with weak \u03c0-\u03c0 stacking. Thus, strong \u03c0-\u03c0 stacking in the dimer could be further demonstrated as the main origin of triplet excimer emission. In addition, the HOMO/LUMO orbital distributions for these molecular dimers were calculated. Further on, time-dependent density functional theory (TD-DFT) calculations were carried out to study the relationship between \u03c0-\u03c0 stacking and RTP emission, in particular the natural transition orbitals (NTOs) of the T1 state. Excitons in the ground state (S0) are first excited to the first excited singlet state (S1); some of them then return to S0 through fluorescence emission, while others jump to the first excited triplet state (T1) through an intersystem crossing (ISC) transition. Because of the formation of a molecular dimer with \u03c0-\u03c0 interaction, an exciton in the T1 state will tend to approach an adjacent molecule in the S0 state to form a triplet excimer (T1*). For dimers with strong \u03c0-\u03c0 interaction, the formation of the triplet excimer (T1*) should be much easier and faster, thus leading to pure triplet excimer emission. 
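The displacement angle (\u03b8) and vertical distance (d) discussed above can be computed from crystal coordinates. A hypothetical sketch, assuming the ring centroids and the unit normal of one ring have already been extracted, and using one common convention (\u03b8 as the angle between the centroid\u2013centroid vector and the ring normal; the paper may define it slightly differently):

```python
import math

def pi_stack_geometry(centroid_a, centroid_b, normal_a):
    """Vertical distance d (in the same units as the coordinates) and
    displacement angle theta (degrees) between two stacked ring centroids,
    given the unit normal of ring A."""
    v = [b - a for a, b in zip(centroid_a, centroid_b)]
    d = abs(sum(vi * ni for vi, ni in zip(v, normal_a)))             # projection on the normal
    lateral = math.sqrt(max(sum(vi * vi for vi in v) - d * d, 0.0))  # in-plane slip
    theta = math.degrees(math.atan2(lateral, d))
    return d, theta
```

With this convention, a dimer slipped by 1\u2009\u00c5 laterally at 3.5\u2009\u00c5 separation gives \u03b8 \u2248 16\u00b0, inside the "strong interaction" range quoted in the text.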
As for the ones with weak \u03c0-\u03c0 interaction, competition between monomer (T1) and excimer (T1*) phosphorescence could occur, thus resulting in the dual RTP emissions. According to the relationship between molecular packing and RTP property, a rational excited-state process was proposed. As the transfer involves an exciton in the T1 state approaching another molecule in the ground state, the triplet excimer should be classified as a dynamic excimer. Besides, because of the short lifetime of singlet excitons, the kinetic process for the formation of a singlet excimer was hard to achieve, thus no corresponding singlet excimer emission could be observed for these target compounds. As the formation of the triplet excimer is a typical kinetic process, it should be affected by temperature. Consequently, the temperature-dependent phosphorescence spectra from 100 to 300\u2009K were measured for these eight crystals. Correspondingly, the phosphorescence processes in the different states and different compounds could be simplified as the two pathways below: (a) S0(monomer)\u2009\u2192\u2009S1(monomer)\u2009\u2192\u2009T1(monomer)\u2009\u2192\u2009S0(monomer); (b) S0(monomer)\u2009\u2192\u2009S1(monomer)\u2009\u2192\u2009T1(monomer)\u2009\u2192\u2009T1\u2009+\u2009S0(dimer)\u2009\u2192\u2009T1*(excimer)\u2009\u2192\u2009S0. Phosphorescence of the 2PtzO-nC solutions at 77\u2009K goes through way (a); the 2PtzO-4C, 2PtzO-5C, 2PtzO-6C and 2PtzO-9C crystals go through both ways (a) and (b), while the 2PtzO-3C, 2PtzO-7C, 2PtzO-8C and 2PtzO-10C crystals go through way (b) only. Thus, through simple alkyl chain regulation, adjustment of the excited-state process in the solid state was successfully realized via the changed molecular packing. The 2PtzO-3C, 2PtzO-7C, 2PtzO-8C and 2PtzO-10C crystals exhibit only pure excimer RTP emission, while the other crystals show dual RTP emissions of both monomer and excimer. 
Detailed analyses of their single crystals demonstrated that the pure excimer RTP results from strong \u03c0\u2013\u03c0 interaction, while the dual RTP emissions come from weak ones. Further on, the corresponding excited-state processes of RTP emission were successfully proposed with the aid of temperature-dependent phosphorescence measurements for these eight crystals. It was found that the competition between monomer (T1) and excimer (T1*) should be the main origin of their changed RTP properties. It is believed that this work would be of great importance for gaining a clear and deep understanding of the whole RTP process, thus guiding the further development of this research area, as well as others in aggregate51. An ideal model containing eight phenothiazine 5,5-dioxide derivatives was established to clearly and accurately prove the formation of the triplet excimer. For the synthesis, phenothiazine was added to N,N-Dimethylformamide (DMF) solution (25\u2009mL) and stirred for 30\u2009min under N2. Then, 1,3-dibromopropane was added dropwise and the mixture was stirred at 0\u2009\u00b0C under N2. After 4\u2009h, water was added to the mixture to quench the reaction. The organic layer was collected with dichloromethane (DCM), dried over anhydrous Na2SO4 and concentrated by rotary evaporation. The crude product was purified by column chromatography on silica gel using petroleum ether (PE)/DCM (10:1\u2009v/v) as eluent to afford a white solid. The collected product was then dissolved in DCM (20\u2009mL), acetic acid (9\u2009mL) and H2O2 (6\u2009mL). After reacting for another 24\u2009h at 60\u2009\u00b0C, the reaction mixture was extracted with dichloromethane and further purified by column chromatography using PE/DCM (1:5\u2009v/v) as eluent to afford a white solid in a yield of 27.4%. 
mp: 325\u2009\u00b0C; 1H NMR : \u03b4 7.98 , 7.31 , 7.21-7.12 , 4.27 , 2.14-2.09 ; 13C NMR : \u03b4 141.50, 133.01, 125.70, 122.93, 122.19, 117.34, 41.97, 25.50; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C27H22N2NaO4S2, 525.0913; found, 525.0935.To an ice-cooled suspension of NaH in dry 1H NMR : \u03b4 8.07 , 7.56-7.52 , 7.30-7.22 , 4.16 , 1.92-1.89 ; 13C NMR : \u03b4 141.42, 133.20, 125.41, 123.59, 122.22, 116.77, 67.18, 23.76; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C28H24N2NaO4S2, 539.1070; found, 539.1074.White solid (88.4%). mp: 301.3\u2009\u00b0C; 1H NMR : \u03b4 8.12 , 7.56 , 7.29 , 4.17 , 1.90-1.83 , 1.53-1.45 ; 13C NMR : \u03b4 141.17, 132.20, 124.92, 123.61, 122.01, 47.18, 26.21, 22.68; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C29H26N2NaO4S2, 553.1226; found, 553.1248.White solid (68.7%). mp: 249.2\u2009\u00b0C; 1H NMR : \u03b4 8.10 , 7.58 , 7.32-7.24 , 4.16 , 1.90-1.85 , 1.45-1.41 ; 13C NMR : \u03b4 141.14, 133.17, 124.77, 123.67, 121.96, 116.37, 47.69, 26.8, 25.99; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C30H28N2NaO4S2, 567.1383; found, 567.1361.White solid (66.7%). mp: 236.6\u2009\u00b0C; 1H NMR : \u03b4 8.09 , 7.59 , 7.31-7.23 , 4.13 , 1.88-1.81 , 1.44-1.36 , 1.27-1.22 ; 13C NMR : \u03b4 141.08, 133.17, 124.63, 123.71, 121.93, 116.26, 48.03, 28.80, 26.82, 26.43; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C31H30N2NaO4S2, 581.1539; found, 581.1536.White solid (44.5%). mp: 263.2\u2009\u00b0C; 1H NMR : \u03b4 8.10 , 7.60 , 7.36-7.24 , 4.15 , 1.90-1.83 , 1.43-1.36 ; 13C NMR : \u03b4 140.99, 133.05, 124.55, 123.65, 121.81, 116.12, 48.06, 28.92, 26.78, 26.23; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C32H32N2NaO4S2, 595.1696; found, 595.1686.White solid (47.2%). mp: 244.4\u2009\u00b0C; 1H NMR : \u03b4 8.09 , 7.59 , 7.32-7.22 , 4.13 , 1.90-1.83 , 1.40-1.31 , 1.30-1.22 ; 13C NMR : \u03b4 141.01, 133.18, 124.47, 123.74, 121.89, 116.16, 48.27, 29.37, 28.93, 26.86, 26.51; HRMS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. 
for C33H34N2NaO4S2, 609.1852; found, 609.1857.White solid (80.0%). mp: 186.4\u2009\u00b0C; 1H NMR : \u03b4 8.10 , 7.60 , 7.33-7.23 , 4.14 , 1.92-1.85 , 1.46-1.40 , 1.37-1.30 . 13C NMR \u03b4 (ppm): 140.98, 133.18, 124.41, 123.75, 121.87, 113.09, 48.39, 29.41, 29.19, 26.88, 26.68; MS (ESI), m/z: [M\u2009+\u2009Na]+ calcd. for C34H36N2NaO4S2, 623.2009; found, 623.2005.White solid (82.1%). mp: 216.3\u2009\u00b0C; 1H NMR spectra and 13C NMR spectra were recorded on a 400\u2009MHz Bruker AVANCE III spectrometer using CDCl3 as solvent. Mass spectra were measured on a UHPLC/Q-TOF MS spectrophotometer. High-performance liquid chromatogram spectra were recorded on Agilent 1100 HPLC. Thermo-gravimetric analysis curves and differential scanning calorimeter curves were recorded on Thermo Gravimetric Analysis TG-209F3 and Differential Scanning Calorimeter DSC214 ployma. UV-vis spectra were measured on a Shimadzu UV-2600. Photoluminescence spectra and phosphorescence lifetimes were performed on a Hitachi F-4600 fluorescence spectrophotometer. Photoluminescence quantum yields, fluorescence lifetimes and temperature-dependent phosphorescence spectra were determined with FLS1000 spectrometer. The single-crystal X-ray diffraction data of these samples were collected in XtaLAB SuperNova X-ray diffractometer.0) geometries of dimers were obtained from the single crystal structures and no further geometry optimization was conducted in order to maintain the specific molecular configurations and corresponding intermolecular locations. The HOMO/LUMO orbital distributions and natural transition orbitals (NTOs) of T1 state of dimers were evaluated by the TD-m062x/6-31\u2009g*.The Gaussian 09 program was utilized to perform the TD-DFT calculations. 
The ground state (S0) geometries of the dimers were taken from the single-crystal structures. Supplementary Information: Article confidentiality and copyright transfer agreement; Copyright for Charts S1-1, S1-2, S1-3, S1-4 and S1-5."}
+{"text": "Atherosclerosis is the leading cause of cardiovascular diseases in Mexico and worldwide. The membrane transporters ABCA1 and ABCG1 are involved in the reverse transport of cholesterol and stimulate HDL synthesis in hepatocytes; therefore, the deficiency of these transporters promotes the acceleration of atherosclerosis. MicroRNA-33 (miR-33) plays an important role in lipid metabolism and exerts a negative regulation on the transporters ABCA1 and ABCG1. It is known that by inhibiting the function of miR-33 with antisense RNA, HDL levels increase and atherogenic risk decreases. Therefore, in this work, a genetic construct, pPEPCK-antimiR-33-IRES2-EGFP, containing a specific antimiR-33 sponge with two binding sites for miR-33 governed under the PEPCK promoter, was designed, constructed, and characterized; its identity was confirmed by enzymatic restriction, PCR, and sequencing. Hep G2 and Hek 293 FT cell lines, as well as a mouse hepatocyte primary cell culture, were transfected with this plasmid construct, showing expression specificity of the PEPCK promoter in hepatic cells. An analysis of the relative expression of miR-33 target messengers showed that the antimiR-33 sponge indirectly induces the expression of its target messengers (ABCA1 and ABCG1). This strategy could open new specific therapeutic options for hypercholesterolemia and atherosclerosis, by blocking miR-33 specifically in hepatocytes. Cardiovascular diseases (CVD) are the leading cause of death in Mexico and worldwide, and atherosclerosis is the most important risk factor. The number of people who suffer from atherosclerosis is increasing. 
LipopOn the other hand, gene therapy is a treatment strategy that uses \u201ctherapeutic\u201d nucleic acids introduced into the target cells to express themselves and cause a beneficial biological effect for the The antimiR-33 sponge sequence was designed using the Kluiver method, specifically the oligonucleotide duplex approach , with twTo optimize the sequence of the antimiR-33 sponge, the online miRNAsong software was used to analyze it with standard settings ; this waTM , beginning with the pIRES2-EGFP eukaryotic expression vector , with a size of 5.3 Kb. The in silico construction was performed by releasing the CMV promoter and a fragment of the MCS with AseI, BglII, SacI, and XmaI restriction enzymes and by ligating the inserts of interest (PEPCK promoter and antimiR-33 sponge).The genetic vector design was carried out using the VectorNTI AdvanceFrom the sense and antisense oligonucleotides (corresponding to the antimiR-33 sponge) diluted in nuclease-free water (500 ng/\u00b5L), an alignment was performed to obtain the double-stranded DNA fragments by subjecting the oligonucleotide pair to 95 \u00b0C for 10 min and gradually cooling them to room temperature, then storing at 4 \u00b0C. The formation of the duplex was verified through 2% agarose gel electrophoresis, in TBE regulator 1\u00d7 (Invitrogen) at 90 V for 60 min, together with a 50 bp ladder. The gel was stained with ethidium bromide (0.5 \u00b5g/mL), then observed and imaged on the Kodak Gel Logic 100 Digital Imaging System Transilluminator .TM (Invitrogen), which contains AseI and BglII restriction enzyme cleavage sites: forward: 5\u2032-CGCATTAATGCTTACAATCACCCCTCCC-3\u2032 and reverse: 5\u2032-AATAGATCTCAGAGCGTCTCGCC-3\u2032 to give directionality to cloning and obtain the correct transcription for the therapeutic gene. 
Then, the PEPCK promoter was amplified by PCR from the pPEPCK-hGH vector using the PCR Master Mix .The primer design from which to amplify the PEPCK promoter fragment was performed using Vector NTI AdvanceTM II Gel Extraction Kit . Then, the open vector was ligated with the PEPCK promoter by a T4 DNA Ligase . Ligation reactions were performed in 20 \u00b5L with 1 \u00b5L 5 U/\u00b5L T4 ligase in the buffer supplied by the manufacturer. The recombinant pPEPCK-IRES2-EGFP was transformed in a vector/insert ratio of 1:5 by the heat shock method into E. coli DH5\u03b1 competent cells and was isolated with the UltraClean\u00ae Maxi Plasmid Prep Kit . Subsequently, the purified recombinant plasmid was digested by SacI and XmaI and then ligated with the oligonucleotide duplex of the antimiR-33 sponge by a T4 Ligase. This recombinant vector pPEPCK-antimiR-33-IRES2-EGFP was transformed in a vector/insert ratio of 1:20 into E. coli Inv\u03b1F\u2019 competent cells for amplification and was isolated by alkaline lysis with the UltraClean\u00ae Maxi Plasmid Prep Kit (MO BIO Labs) and stored at 4 \u00b0C.The pIRES2-EGFP vector and the amplified PEPCK promoter were digested by AseI and BglII, the digestion products were separated with 1% agarose gel electrophoresis, and the interest fragments were retrieved and purified using the QIAEXTM Spectrophotometer (Thermo Fisher Scientific). The construction identity was confirmed by enzymatic restriction and PCR, then, agarose gel electrophoresis was used to verify the fragments\u2019 size. In addition, this plasmid was submitted for Sanger automatic sequencing ABI Prism (IBT-UNAM) to confirm the insert sequence integrity. 
The obtained sequence was used to perform a local alignment analysis with the online software EMBOSS Water (https://www.ebi.ac.uk/Tools/psa/emboss_water/ accessed on 30 January 2020), comparing it with the designed recombinant plasmid sequence.The pPEPCK-antimiR-33-IRES2-EGFP DNA concentration and purity were validated with the Nanodrop 10002-containing atmosphere. For transfection, Hek-293 FT and Hep G2 cells were passaged and plated at 50,000 cells per well in a 24-well plate (for expression specificity assay) and 100,000 cells per well in a 12-well plate (for mRNA assay) and incubated for 24\u2009h at 37 \u00b0C in a 5% CO2-containing atmosphere, at 80% confluence, to be used in transfection with the recombinant pPEPCK-antimiR-33-IRES2-EGFP and the control pIRES2-EGFP plasmids.To validate the effectiveness of the sponge transcript to bind to the desired miRNA, first Hek-293 FT (human embryonic kidney cells) and Hep G2 (human hepatocarcinoma cells) were cultivated in Dulbecco\u2019s modified Eagle\u2019s medium with 10% fetal bovine serum at 37 \u00b0C in a 5% COTM, Tokyo, Japan). Subsequently, 250,000 viable cells were seeded per well in a 6-well plate and incubated for 24\u2009h at 37 \u00b0C in 5% CO2-containing atmosphere, at 80% confluence, to be used in transfection with the recombinant and the control plasmid.The mouse hepatocytes culture was carried out by two cellular disintegration methods: mechanical and enzymatic disintegration, based on Freshney protocols , using HTM 2000 (Thermo Fisher Scientific). 
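EMBOSS Water implements the Smith\u2013Waterman local-alignment algorithm used for the sequence verification above. A minimal score-only sketch in pure Python, with illustrative scoring parameters (not the EMBOSS defaults, which use a substitution matrix and affine gap penalties):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between strings `a` and `b`
    (Smith-Waterman with a linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    prev = [0] * cols   # previous DP row
    best = 0
    for i in range(1, rows):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment floors every cell at 0.
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best
```

A perfect match between insert and design would align the full sponge sequence, so the score equals `match` times the sequence length; any mutation or deletion lowers it.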
Lipoplexes were obtained by mixing pDNA (pIRES2-EGFP or pPEPCK-antimiR-33-IRES2-EGFP) with Lipofectamine in no serum DMEM for plasmid quantities between 860 ng and 3.4 \u00b5g, according to the number-well plate.Transfection was carried out according to the instructions included with the LipofectamineTM 2000 pIRES2-EGFP (positive control), and LipofectamineTM 2000 pPEPCK-antimiR-33-IRES2-EGFP (recombinant) and were transfected in two independent experiments by pouring culture medium from wells until the cells were barely covered and adding the lipoplex solution on each well, incubating for 2\u2009h at the same culture conditions. After that, 500\u2009\u03bcL, 1 and 2 mL of complete DMEM were added for incubation . After a period of 72 h after transfection, reporter gene expression was directly evaluated using fluorescence microscopy in Axio Vert.A1 with a 20\u00d7 objective, and using the reporter gene EGFP, transfection efficiency was evaluated by comparing cells EGFP positive against total cells per field. Software Image J version 1.53t (Open Source Software (OSS) license) was used to count cells. Then, the transfected cells were cultured in G418 (Gibco) selection medium to screen for resistant cells since the pIRES2-EGFP backbone contains the Neomycin resistance gene. After 7 days, the positive cell clones were collected and used for subsequent mRNA assays.Cells were divided into three groups: Cell lines (negative control), LipofectamineTM (Invitrogen), according to the manufacturer\u2019s protocol. The RNA concentration and purity were determined using the Nanodrop 1000 spectrophotometer (Thermo Fisher Scientific) and the structural integrity was determined by denaturing electrophoresis in 1% agarose gel added with 1% sodium hypochlorite [Total RNA was isolated from the negative control group cells and the transfected recombinant group cells, using TRIzol reagentCA, USA) . 
RT reactions were performed using the Cloned AMV First-Strand cDNA Synthesis Kit (Invitrogen), and qPCR was performed using the QuantiNova SYBR Green PCR Kit (QIAGEN) on the Rotor-Gene Q series (QIAGEN), according to the manufacturer's protocols. Thermocycling conditions were as follows: initial denaturation at 95 °C for 2 min, followed by 35 cycles of 95 °C for 10 s and 60 °C for 20 s. Each reaction was independently tested two times. GAPDH was used as the internal control, and the target gene levels of miR-33a were quantified using the comparative CT method (2−ΔΔCt). Data are shown as the mean of n = 4 with respective SD. Statistical analysis was conducted using GraphPad Prism version 9.5.1. The Shapiro–Wilk test was performed to determine the distribution type of the samples. A Brown–Forsythe and Welch ANOVA test followed by Dunnett's post hoc test was used to evaluate the differences between the three groups for samples with normal distribution; samples with non-normal distribution were analyzed with the non-parametric Kruskal–Wallis test followed by Dunn's post hoc tests. p ≤ 0.05 was considered to indicate a statistically significant difference. The designed antimiR-33 sponge sequence contains two perfect binding sites for miR-33a, separated by a short 4 nt spacer sequence. The restriction enzyme sites at the 5′ and 3′ ends of the oligonucleotide duplex give directionality to cloning. The bioinformatic analysis of the functionality of the antimiR-33 sponge sequence showed two different ways of interacting with miR-33 at each binding site of the sponge: a partial but strong interaction with member miR-33b, with a ΔG value of −62.7 kcal/mol, and a fully complementary interaction with member miR-33a, with a ΔG value of −82.4 kcal/mol.
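The comparative CT quantification used above can be written out explicitly. The CT values in the example are hypothetical, with GAPDH as the internal control as in the text:

```python
def fold_change(ct_target_treated, ct_gapdh_treated,
                ct_target_control, ct_gapdh_control):
    """Relative expression by the comparative CT (2^-ddCt) method."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated   # dCt, transfected group
    d_ct_control = ct_target_control - ct_gapdh_control   # dCt, control group
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical CTs: target amplifies 4 cycles earlier after transfection,
# relative to GAPDH, giving a 2^4 = 16-fold induction.
print(fold_change(24.0, 18.0, 28.0, 18.0))  # → 16.0
```

The method assumes near-100% amplification efficiency for both target and reference; deviations from that are usually checked with standard curves.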
The design and in silico construction of the recombinant vector pPEPCK-antimiR-33-IRES2-EGFP contains the antimiR-33 sponge and the green fluorescent protein gene, both sequences under the transcriptional control of the PEPCK promoter. The formation of the duplex (58 bp) was verified through gel electrophoresis, together with the single-stranded oligonucleotides (not shown). The genetic construction pPEPCK-antimiR-33-IRES2-EGFP contains the antimiR-33 sponge sequence as a therapeutic sequence and the green fluorescent protein (EGFP) reporter gene, governed under the transcriptional control of the PEPCK promoter. The identity of this genetic construction was verified by enzymatic restriction, PCR, and sequencing. With this genetic vector, the Hep G2 and Hek 293 FT cell lines, as well as a primary mouse hepatocyte culture, were transfected. Reporter gene (EGFP) expression was directly evaluated 72 h after transfection using fluorescence microscopy, taking advantage of the fluorescence activity of the pIRES2-EGFP plasmid. Since the EGFP gene is found downstream of the antimiR-33 sponge and the two sequences are separated by an IRES region from the encephalomyocarditis virus, the IRES region allows translation of the reporter gene even though the antimiR-33 sponge itself is not translated, so EGFP indirectly reports sponge expression, as in a bicistronic plasmid. Expression of the EGFP reporter gene was achieved in liver cells transfected with the recombinant plasmid pPEPCK-antimiR33-IRES2-EGFP and with the control plasmid pIRES2-EGFP (transfection efficiencies of 20% in Hep G2 and 5% in the primary mouse hepatocyte culture); in the control cells, however, there was no expression of EGFP after transfection with the recombinant plasmid, but there was with the control plasmid (transfection efficiency of 65% in Hek293 FT). For ABCA1, the increases in expression did not reach statistical significance (p = 0.173 and 0.1809, respectively; p = 0.068).
In the case of ABCG1, the expression was up to 18 times higher in the group transfected with the recombinant plasmid, compared with non-transfected cells and cells transfected with pIRES2-EGFP (p = 0.0057). With all these results, it was found that antimiR-33 sponge expression from the recombinant plasmid pPEPCK-antimiR-33-IRES2-EGFP can be used to efficiently induce the expression of the miR-33 target genes ABCA1 and ABCG1, although additional studies are required to determine the adequate reduction in miR-33, as well as to evaluate the specificity of expression in other cell types. CVDs continue to be the main cause of death in Mexico and worldwide, hypercholesterolemia and atherosclerosis being the main triggers. Currently, several antagonism strategies for miR-33 have been designed as therapeutic agents. The simplest method, anti-miRNA oligonucleotides (AMOs), derived from ASOs, uses oligonucleotides complementary to the sequence of the mature miRNA to prevent the interaction with its physiologically relevant target messengers. In preclinical trials, it has been possible to suppress the effect of miR-33 using antimiR-33 ASO, thus reversing the effects of atherosclerosis in the employed models. Kluiver et al. conducted pioneering work on miRNA sponge constructs, and it has been shown that, with the use of genetic constructs for gene therapy, a transcript with more than one antisense site, but no more than six, can improve the inhibition of a given miRNA, producing a greater biological effect. The antimiR-33 sponge binding specificity and functionality for inhibition of the miR-33 family were confirmed with the miRNAsong online software, an open-access tool for the in silico analysis of miRNA sponges, which has a regular update plan and covers 219 species, as well as more than 35 thousand mature miRNAs.
There are other available web servers, such as the STarMir software or the PITA tool, that can be used for sponge testing, but they present several disadvantages. To give specificity to the expression of genes introduced in association with cationic liposomes, promoters from specific tissues can be used; in this case, the genetic construction pPEPCK-antimiR-33-IRES2-EGFP was generated, containing the antimiR-33 sponge sequence as a therapeutic sequence and EGFP as a reporter gene, governed under the transcriptional control of the promoter of PEPCK, an enzyme of the gluconeogenesis pathway, so that genetic expression is directed exclusively to the hepatocyte, without any effect on other cell types, as could be seen in the in vitro transfection assays. Likewise, the miRNAsong results showed that some miRNAs can bind nonspecifically to the antimiR-33 sponge, but most of these are generally not expressed in hepatocytes, which is where expression is directed with this genetic construct. Furthermore, as briefly mentioned above, introducing the recombinant plasmid into the target cells of an organism, in association with and protected by a liposomal vehicle, to express the antimiR-33 sponge avoids the administration of naked RNA, which has been proven to be more vulnerable to the action of nucleases in the blood; this degradation significantly reduces the half-life of the genetic material in circulation, so the therapeutic effect of naked RNA is limited. On the other hand, controlled inhibition of miR-33 is very important, since a decrease in miR-33a has been observed in hepatoma cells, and this inhibition is also related to a low survival rate in patients with HCC. As mentioned, the recombinant plasmid pPEPCK-antimiR-33-IRES2-EGFP contains the 674 bp PEPCK promoter, reported by Short et al. in 1992.
In the transfection assays, the specificity of expression of the antimiR-33 sponge was demonstrated only in liver cells, because the expression is regulated by the tissue-specific PEPCK promoter. For the RT-qPCR assays, CT values were used to quantify the expression of the genes of interest by relative quantification with the comparative 2−ΔΔCT method, and the difference was statistically significant for ABCG1 (p = 0.0057). Altogether, the recombinant plasmid pPEPCK-antimiR-33-IRES-EGFP could be used in further pre-clinical trials to increase the expression levels of the ABCA1 and ABCG1 transporters by blocking miR-33 specifically in hepatocytes; since the antimiR-33 sponge is expressed inside the cell using its own molecular machinery, it is expected not to have undesirable side effects. An in silico analysis of the antimiR-33 sponge sequence indicated that it may be functional to specifically repress miR-33 activity, although additional assays are required to prove it. It was shown that the therapeutic gene (antimiR-33 sponge) and the tissue-specific promoter that governs its expression (PEPCK) are correctly and fully inserted in the recombinant plasmid pPEPCK-antimiR-33-IRES2-EGFP. In addition, it was indirectly revealed that the expression of the antimiR-33 sponge is specific for liver cells and that it functions to induce the expression, at the transcriptional level, of the target genes of miR-33: ABCA1 and ABCG1. This construction could be used in additional assays with an in vivo model, in order to establish the first solid bases for the future development of a possible and safer gene therapy to reduce hypercholesterolemia and, therefore, atherogenic risk."} +{"text": "However, temperatures well below CTmax may also have pronounced effects on insects, but have been relatively less studied.
Additionally, many insects with out-sized ecological and economic footprints are colonial, such that effects of heat on individuals may propagate through or be compensated by the colony. For colonial organisms, measuring direct effects on individuals may therefore reveal little about population-level impacts of changing climates. Here, we use bumble bees (genus Bombus) as a case study to highlight how a limited understanding of heat effects below CTmax and of colonial impacts and responses both likely hinder our ability to explain past and predict future climate change impacts. Insights from bumble bees suggest that, for diverse invertebrates, predicting climate change impacts will require a more nuanced understanding of the effects of heat exposure and additional studies of carry-over effects and compensatory responses by colonies. Global declines in abundance and diversity of insects are now well-documented and increasingly concerning given the critical and diverse roles insects play in all ecosystems. Habitat loss, invasive species, and anthropogenic chemicals are all clearly detrimental to insect populations, but mounting evidence implicates climate change as a key driver of insect declines globally. Warming temperatures combined with increased variability may expose organisms to extreme heat that exceeds tolerance, potentially driving local extirpations. In this context, heat tolerance limits (e.g., critical thermal maximum, CTmax) have received considerable attention. Even selection of the appropriate temperatures for developmental heat stress experiments is difficult because we lack measurements of wild nest temperatures under normal and heat wave conditions. Morphological changes resulting from heat stress could have important carry-over effects that are largely unstudied. Male bumble bees leave the colony soon after emerging as adults and never return to the protection of the nest.
It is therefore highly likely that they will be exposed to temperature extremes while searching for a mate. Laboratory experiments have found that exposing adult males to heat stress significantly reduces sperm viability in several bumble bee species, both in vitro and in vivo. Future work should extend beyond B. terrestris and B. impatiens, since we have little information on most wild bumble bee species. Bumble bees are critically important pollinators in both natural landscapes and commercial agriculture, and studying heat effects in wild species remains a challenge. While beyond the scope of this review, interacting effects are also an important consideration for heat stress."} +{"text": "Multiple instance learning (MIL) is a powerful technique to classify whole slide images (WSIs) for diagnostic pathology. The key challenge of MIL on WSI classification is to discover the critical instances. However, tumor heterogeneity significantly hinders the algorithm's performance. Here, we propose a novel multiplex-detection-based multiple instance learning (MDMIL) which targets tumor heterogeneity by a multiplex detection strategy and feature constraints among samples. Specifically, the internal query generated after the probability distribution analysis and the variational query optimized throughout the training process are utilized to detect potential instances in the form of internal and external assistance, respectively. The multiplex detection strategy significantly improves the instance-mining capacity of the deep neural network. Meanwhile, a memory-based contrastive loss is proposed to reach consistency on various phenotypes in the feature space. The novel network and loss function jointly achieve high robustness towards tumor heterogeneity. We conduct experiments on three computational pathology datasets, e.g. CAMELYON16, TCGA-NSCLC, and TCGA-RCC.
Benchmarking experiments on the three datasets illustrate that our proposed MDMIL approach achieves superior performance over several existing state-of-the-art methods. MDMIL is available for academic purposes at https://github.com/ZacharyWang-007/MDMIL. Whole slide imaging, which refers to scanning and converting a complete microscope slide to a digital whole slide image (WSI), is an efficient technique for visualizing tissue sections in disease diagnosis, medical education, and pathological research. Typically, WSIs are gigapixel images with a size of about 40 000 × 40 000 pixels. Currently, most MIL methods are modeled with self/cross-attention mechanisms, Transformers, or Graph Neural Networks. To tackle the above issue, we propose a novel approach termed multiplex-detection-based multiple instance learning (MDMIL). MDMIL is developed based on the internal query (IQ) generation module (IQGM) and the multiplex detection module (MDM), and is assisted by the memory-based contrastive loss during the training phase. Specifically, IQGM generates the probability distribution of instances on deep-transferred instance features through a classification layer and generates the IQ by aggregating highly reliable instances after the probability analysis. Then, MDM, which consists of multiplex-detection cross-attention (MDCA) and multi-head self-attention (MHSA), detects the critical instances. There are basically two categories of MIL methods in WSI classification: instance-level and embedding-level algorithms. The instance-level algorithms assign the bag label to each instance and aggregate instance-level predictions into a bag-level prediction. In neural networks, the attention mechanism is a technique that mimics human cognition. Initially, the attention mechanism was used to extract meaningful information from sentences in machine translation, and Transformers now prevail in many different areas. For feature extraction, we adopt the Convolution Block and the first three Residual Blocks of the ResNet50.
The overall architecture is illustrated in the accompanying figure. In DSMIL, a classifier cls is used together with a softmax function and a max-pooling operation to retrieve the features with the highest probability corresponding to each subtype. However, this strategy has two obvious shortcomings: (i) the prediction accuracy of cls is relatively low, and the absolute predictive power of the whole model is logically limited by the upper bound of cls's expressiveness; (ii) due to the application of deep transfer learning and to tumor heterogeneity, the retrieved features tend to fall short of class-level representativeness and carry much patch-specific information. Our proposed IQGM aims to generate the IQ as internal assistance for the subsequent MDM. A straightforward approach to tackle the above issues is averaging the top instance features of each subtype; however, this approach is likely to introduce noise, affecting the convergence of the model and the stability of training. Instead, the transformed features are normalized (LN), the probability distribution of each instance is obtained through a classification layer, and a confidence factor q is computed from the top instances of each subtype; the number of instances averaged to form the IQ then depends on q. Using the proposed IQGM, we can generate a reliable IQ with less noise, thus laying a good foundation for the following prediction.

Algorithm 1: Internal query generation module
Input: a bag of feature embeddings.
Output: reliable IQ for each subtype.
1: Get the transformed features with DPL;
2: Get the probability distribution of instances through a classification layer;
3: Calculate the confidence factor for each subtype from the top instances;
4: Estimate the IQ:
   if the confidence factor is high: average the top instance features;
   else: average a smaller number of top instance features.

The proposed MDM aims to detect the critical instances that trigger the prediction under tumor heterogeneity. Previous methods adopt either an unreliable IQ or a single query.
Each element of the attention matrix indirectly indicates the similarity between query and key. We take the weighted sum of Essentially, MDCA is a modified cross-attention module based on the standard architecture of the MHSA. MDCA has three inputs: IQ from IQGM, trainable VQ, and the bag of features N is the number of subtypes. With the cross-attention between IQ and K, we successfully establish the connections between all instances and a collection of potential critical ones. Through the cross-attention between VQ and K, we further supplement other features to improve the model\u2019s robustness. Besides, since IQ is derived from the bag of features, it converges fast and can provide internal assistance for feature detection. As for VQ, it is optimized through the whole training set, providing external and global assistance for feature detection. Thus, the two kinds of queries can be complementary with each other; their combination can greatly improve the model\u2019s performance. Moreover, with the cross attention, we can considerably reduce the instance-dimension from the original patch number n to subtype number N, requiring much less computation in the following calculation than other transformer-based methods, e.g. TransMIL , TCGA-RCC, and CAMELYON16.TCGA-NSCLC dataset has two subtypes: lung squamous cell carcinoma (TCGA-LUSC) and lung adenocarcinoma (TCGA-LUAD). There are 993 diagnostic WSIs, including 507 LUAD slides from 444 cases and 486 LUSC slides from 452 cases, respectively. After processing, the mean number of patches extracted per slide at The TCGA-RCC dataset has three subtypes: kidney chromophobe renal cell carcinoma, kidney renal clear cell carcinoma, and kidney renal papillary cell carcinoma. There are 884 diagnostic WSIs, including 111 KICH slides from 99 cases, 489 KIRC slides from 483 cases, and 284 KIRP slides from 264 cases. 
CAMELYON16 is a public dataset of metastasis in breast cancer. There are 270 WSIs in the training set, including 159 normal tissues and 111 tumor tissues; the testing set contains 130 WSIs in total. For feature extraction, we use ResNet50 (Convolutional Block and three Residual Blocks). We provide a detailed description of the implementation details and evaluation metrics in the supplementary material. We evaluate our proposed model on both detection and subtype classification datasets. For the detection dataset, e.g. CAMELYON16, a positive WSI contains metastases while negative ones do not. As for the subtype classification datasets, e.g. TCGA-NSCLC and TCGA-RCC, each subtype has its unique pattern. On CAMELYON16, the positive slides contain only a small portion of metastatic tissue, which makes the detection task challenging for the compared algorithms. Analysis of each component: to verify the effectiveness of each proposed component, we present an ablation study of the internal query generation module (IQGM), the deep projection layer (DPL), the multiplex detection module (MDM), and the memory-based contrastive loss used in the training phase. The proposed IQGM and MDM jointly detect critical instances through the generated IQ and the predefined VQ, in the form of internal and external assistance. Since IQ and VQ are complementary, the multiplex detection strategy significantly improves the model's ability to perceive critical instances. Meanwhile, the memory-based contrastive loss helps to reach consistency on various phenotypes in the feature space. Our novel network and loss function jointly achieve excellent robustness towards tumor heterogeneity. We conduct experiments on three computational pathology datasets. MDMIL's outstanding performance verifies its ability against tumor heterogeneity.
Meanwhile, our ablation study demonstrates that each proposed module can work independently and cooperatively. The limitation of MDMIL lies in losing contextual information during the computation. Therefore, for tasks like survival analysis, which need to analyze the tumor microenvironment and interactions between tumor cells and their neighbors, MDMIL may not perform as well as context-preserving methods. Supplementary data (btad114_Supplementary_Data) are available online."} +{"text": "Creativity has traditionally been considered an ability exclusive to human beings. However, the rapid development of artificial intelligence (AI) has resulted in generative AI chatbots that can produce high-quality artworks, raising questions about the differences between human and machine creativity. In this study, we compared the creativity of humans (n = 256) with that of three current AI chatbots using the alternate uses task (AUT), which is the most used divergent thinking task. Participants were asked to generate uncommon and creative uses for everyday objects. On average, the AI chatbots outperformed human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity that may be difficult to fully replicate or surpass with AI technology. The study provides insights into the relationship between human and machine creativity, which is related to important questions about the future of creative work in the age of AI. One of the key issues surrounding the implementation of AI technologies pertains to their potential impact on the job market3.
With AI systems becoming increasingly capable of performing tasks that were once solely within the purview of humans, concerns have been raised about the potential displacement of jobs and its implications for future employment prospects4. In the field of education, questions have been raised about the ethical and pedagogical implications of such technologies, as well as concerns about how AI systems might reduce critical thinking skills5. Another aspect of the debate involves the legal and ethical ramifications of AI-generated content7. As these tools produce increasingly sophisticated works, ranging from articles to artistic creations, it raises the issue of whether AI-generated products should be granted the same legal protections as human-created works, and how to assign responsibility and credit for such creations. The development and widespread availability of generative artificial intelligence (AI) tools, such as ChatGPT, have intensified this debate. Additionally, AI seems to perform well in art-related creativity. Recent AI tools can produce high-quality art pieces that have been bought for high prices10, as well as poetry that is indistinguishable from human-made art11. These findings seem to suggest that AI is capable of creating products that humans typically perceive as creative. But what exactly is creativity? AI has shown tremendous potential in areas that require reasoning and creative decision making. This is demonstrated, for example, by the rise of chess engines, neural networks, and deep learning-based chess networks, which are capable of defeating chess masters. Divergent thinking is typically characterized by fluency (the ability to produce many ideas), flexibility (the ability to think about a topic from different perspectives), originality (the ability to produce unique or novel ideas), and elaboration (the ability to expand upon or add detail to ideas). However, the assumption that divergent thinking in fact represents creativity as a phenomenon has been disputed14.
Nevertheless, divergent thinking has often been measured in the context of psychological research on creativity, and tasks measuring divergent thinking are well established15. Traditionally, creativity has been defined as the ability to produce ideas that are, to some extent, both original and useful16. The most accepted theories regarding the creative process are based on the dual-process view. Guilford's model proposed that the creative process involves an interplay between spontaneous (divergent) and controlled (convergent) modes of thinking. The more spontaneous divergent thinking is responsible for the originality and novelty of the ideas, whereas the controlled process evaluates the relevance of the ideas in relation to the demands of the task. The associative theory of creativity18 assumes that creative ideas result from making connections between weakly related concepts to form novel ideas. This theory proposes that individuals with a flat structure of semantic knowledge are more likely to activate and associate remote ideas, thus increasing the probability of forming original combinations of ideas, compared to those with strictly hierarchical or steep structures. This view is supported by recent computational methods19 and functional brain imaging studies20, which suggest that creative individuals have more connected and flexible semantic networks than less creative individuals. The controlled-attention theory emphasizes that executive functions are necessary for creative idea generation16. These accounts can be integrated into a hybrid view assuming that bottom-up, associative processes are beneficial for creative thinking, while top-down processes contribute by providing executive control during the retrieval of concepts from semantic memory. For example, top-down processes during the creative process can generate and maintain retrieval cues, inhibit salient and highly associated information, and shift attention22.
The most used test of divergent thinking is the Alternate Uses Task (AUT), in which participants are asked to produce uncommon, creative uses for everyday objects. We investigated the differences in creative potential between humans and AI chatbots using the AUT. The key process of creative thinking in humans is the ability to access remotely related concepts. Current AI chatbots have a vast memory and the ability to quickly access large databases. Therefore, one might hypothesize that AI chatbots will outperform humans in the associative component of divergent thinking, and thus in the originality of responses. To operationalize originality, we used a computational method23 to objectively quantify the semantic distance between the object probes and the responses. Additionally, human raters who were blind to the presence of AI-generated responses evaluated the responses. These raters provided a human view of creativity, as it is possible that mere semantic distance may not capture all aspects of creative products that humans consider original or surprising. Divergent thinking has traditionally been assessed by tests requiring open-ended responses. The study was preregistered at OSF.io (https://osf.io/fy3mn/?view_only=f1cf960d0170433dba9d31df68a6eaf7). Native English speakers were recruited via the online platform Prolific (www.prolific.co) and paid £2 for the about 13-min participation. A total of 310 participants opened the link to the study and full data were obtained from 279 participants who performed the study from start to end. Of the 279 participants with full data, 256 passed the attention checks and their results were included in the present study. The attention checks consisted of easy visual detection and recognition tasks. The average age of the participants was 30.4 years, ranging from 19 to 40 years; 44 of them were students. The employment status was full time for 142, part time for 37, unemployed for 30, and other, homemaker, retired or disabled for 42.
The participants reported no head injury, medication, or ongoing mental health problems. They resided in the United Kingdom (n = 166), USA (n = 79), Canada (n = 9), or Ireland (n = 2). All participants provided informed consent prior to the start of the study. The collection of the human data was performed in accordance with the Declaration of Helsinki and was approved by the Ethics Committee for Human Sciences at the University of Turku. The AUT data from human participants were collected in the context of another study, and the method was preregistered at OSF.io in the context of that study. The AI chatbots ChatGPT3.5 (referred to as ChatGPT3 in the following text), ChatGPT4, and Copy.Ai (based on the GPT-3 technology) were tested. ChatGPT3 was tested on 30.3.2023, ChatGPT4 on 5.4.2023, and Copy.Ai on 1.4.–2.4.2023. Each chatbot was tested 11 times with four object prompts in different sessions. We did not want to increase the number of sessions, as we noted during piloting that the chatbots tended to repeat some responses across sessions, although the combination of the responses to each object differed between sessions. Thus, we had 11 test sessions with the four objects for each chatbot (n = 132 observations). This seemed a reasonably large sample to obtain sufficient power to detect differences at the 0.05 alpha level when compared to the 256 humans' 1024 observations in single trial analyses. The four object probes were rope, box, pencil, and candle. Before starting the tasks, the human participants were presented with an instruction stressing quality instead of quantity, following the guidelines given by Beaty and Johnson23: "For the next task, you'll be asked to come up with original and creative uses for an object. The goal is to come up with creative ideas, which are ideas that strike people as clever, unusual, interesting, uncommon, humorous, innovative, or different.
Your ideas don't have to be practical or realistic; they can be silly or strange, even, so long as they are creative uses rather than ordinary uses. You may type in as many ideas as you can, but creative quality is more important than quantity. It's better to have a few really good ideas than a lot of uncreative ones. You have 30 s to respond to each object". After having read the instruction, the tasks started. Each object name was presented for 30 s, during which the participants entered their ideas into text boxes located below the object name. At the beginning of each task, the participants were reminded that they should "come up with original and creative uses for an object". The Alternate Uses Task (AUT) included four tasks with the object probes. For the AI chatbots, the instruction was appended with: "You can type in one [or two/three/four/five/six] ideas." In addition, without any restriction on the number of words for expressing the ideas, the AIs would have generated rather long and elaborate responses, which are not comparable to the human responses, which typically consisted of 1–3 words. Therefore, we added to the end of the instruction: "Use only 1–3 words in each response." ChatGPT3 and ChatGPT4 followed the instructions well, while Copy.Ai sometimes needed further instructions, such as "I asked for three ideas" or "State your previous response with 1–3 words." Although each human was tested once with the four objects in one session, the testing of each AI chatbot consisted of 11 sessions with each object. The four objects were always tested once within one session, after which the session was closed and a new session was started, so that the memory of the AI was cleared of the contents of the previous session. The instructions for the AI were otherwise identical to those given to humans, but two exceptions had to be made.
First, piloting with the chatbots suggested that if ChatGPT3 were given no explicit restriction on the number of ideas, it always generated 10 ideas, while ChatGPT4 generated between 7 and 8 ideas; Copy.Ai generated a more variable number of ideas. To restrict the number of ideas so that they would correspond to those given by humans, we first examined the distribution of the number of human ideas. The median number and mode for humans was 3 ideas, with a slightly rightward tail in the distribution. In the semantic distance analysis, the \u201cmultiplicative\u201d compositional model option in SemDis was used to account for AUT responses with multiple words. The responses were preprocessed using the \u201cremove filler and clean\u201d setting, which removes \u201cstop words\u201d and punctuation marks that can confound semantic distance computation. In addition, other editing of the responses was needed to control the confounding effects between humans and AIs. The AIs relatively often used the expression \u201cDIY\u201d (\u201cdo it yourself\u201d): GPT3 four times, Copy.Ai seven times, and GPT4 three times, whereas the 256 humans used it only a total of two times. It is evident in the present context that the expressions with and without DIY (\u201ccat bed\u201d) mean the same usage of the object. Because we noted that the inclusion of DIY increases the semantic distance scores produced by SemDis, we removed the DIYs from the responses before entering them into the analysis. For the same reason, we also removed the expressions \u201cMake a _____\u201d, \u201cMaking a _____\u201d, and \u201cUse as a ___\u201d from the beginning of the responses. The originality of divergent thinking was operationalized as the semantic distance between the object name and the AUT response. The semantic distance was determined with the SemDis platform.
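The response cleanup described above — dropping "DIY" tokens and templated lead-ins such as "Make a ..." before scoring — can be sketched roughly as follows; the function name and exact patterns are illustrative, not the authors' code.

```python
import re

# Illustrative cleanup (an assumption, not the study's actual script):
# strip "DIY" tokens and templated lead-ins before semantic distance scoring.
PREFIX_RE = re.compile(r"^(make|making|use as)\s+(an|a)?\s*", re.IGNORECASE)

def clean_response(text: str) -> str:
    text = re.sub(r"\bDIY\b", "", text)       # drop "DIY" tokens
    text = PREFIX_RE.sub("", text.strip())    # drop templated lead-ins
    return re.sub(r"\s+", " ", text).strip()  # normalize whitespace

print(clean_response("DIY cat bed"))  # -> cat bed
print(clean_response("Make a hat"))   # -> hat
```

The order of operations matters: removing "DIY" first can expose a templated prefix that the second pattern then strips.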
For statistical analyses, we computed for each participant and for each AI test session both the mean semantic distance score across all the responses generated to each probe object during a session, and the maximum score from the responses to each object. In the statistical analyses, each AI session was processed as if it were from an individual participant; therefore, we got 11 observations per object for each chatbot. For each response in the AUT tasks, the semantic distance between the object name and the response was computed with five semantic models, and their mean value was used in further processing. The instruction to the raters stressed that they should prioritize novelty over usefulness and use the instruction given to participants as the reference point against which to evaluate the responses. They were explicitly instructed that a common use, such as \u201ccutting\u201d in response to the object scissors, should be given a low score, and that a confusing or illogical response, as well as a missing response, should receive a score of 1. Each rater had a different order in which the four objects were evaluated. The order in which the responses within object categories were presented was randomized separately for each rater. The scores of each rater were averaged across all the responses a participant (or chatbot in a session) gave to an object, and the final subjective scores for each object were formed by averaging the 6 raters\u2019 scores. The inter-rater reliability was assessed by calculating Intraclass Correlation Coefficients with the irr package (https://CRAN.R-project.org/package=irr). In this model, systematic differences between raters were irrelevant. The ICCs were 0.88, 95% CI for rope, 0.93, 95% CI for box, 0.90, 95% CI for pencil, and 0.93, 95% CI for candle. We collected subjective creativity/originality ratings from six briefly trained humans. They were not told that some of the responses were generated by AI.
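The score aggregation just described — per-participant (or per-AI-session) mean and maximum semantic distance for each object — reduces to a simple group-by; a minimal sketch with hypothetical column names, assuming responses are already scored:

```python
import pandas as pd

# Toy data (column names hypothetical): one row per AUT response with its
# semantic distance score, already averaged over the five semantic models.
responses = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2"],
    "object":      ["rope", "rope", "box", "rope", "box"],
    "semdis":      [0.70, 0.90, 0.80, 0.60, 0.95],
})

# Mean and max semantic distance per participant (or AI session) and object.
scores = (responses.groupby(["participant", "object"])["semdis"]
          .agg(["mean", "max"]).reset_index()
          .rename(columns={"mean": "mean_score", "max": "max_score"}))
print(scores)
```

An AI chatbot's session identifier would simply take the place of the participant identifier, matching the "each session treated as an individual participant" convention in the text.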
They rated each response for creativity/originality using a 5-point Likert scale. In the first set of analyses, Group served as the fixed effect, Fluency as a covariate, and a random intercept for participants (and session for AI) served as the random effect. In the next set of analyses, Group and Object and their interactions were the fixed effects, and Fluency served as the covariate. The group variable consisted of four levels and the object variable involved four levels. In these analyses, R\u2019s anova function was applied to the models to obtain Type III analysis of variance results (Satterthwaite's method), as it makes the interpretation of main effects and interactions simpler than the standard outputs of the linear mixed-effect models. The post-hoc pairwise comparisons were adjusted for multiple comparisons with the mvt method in the package emmeans v.1.8.2. (https://CRAN.R-project.org/package=emmeans). For simplicity, we refer to 95% CI as CI in the results section. Separate linear mixed-effect analyses were performed with the lme4 package. To get an overall picture of the differences between humans and AI, we began the analyses with linear mixed-effect models with Group as the fixed effect and Fluency as a covariate. The mean and max scores were higher for AI than for humans, B\u2009=\u20090.049, SE\u2009=\u20090.010, CI , t(274)\u2009=\u20094.949, p\u2009<\u20090.001 and B\u2009=\u20090.027, SE\u2009=\u20090.009, CI , t(268)\u2009=\u20093.037, p\u2009=\u20090.003, respectively. Fluency as a covariate decreased the mean scores, B\u2009=\u2009\u22120.012, SE\u2009=\u20090.003, CI , t(279)\u2009=\u2009\u22124.170, p\u2009<\u20090.001, and increased the max scores, B\u2009=\u20090.011, SE\u2009=\u20090.002, CI , t(274)\u2009=\u20094.486, p\u2009<\u20090.001. The subjective mean scores were likewise higher for AI, B\u2009=\u20090.453, SE\u2009=\u20090.082, CI , t(282)\u2009=\u20095.496, p\u2009<\u20090.001, and fluency decreased the mean scores, B\u2009=\u2009\u22120.088, SE\u2009=\u20090.023, CI , t(284)\u2009=\u2009\u22123.877, p\u2009<\u20090.001.
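The reported models were fitted with R's lme4; a random-intercept model analogous in spirit can be sketched in Python with statsmodels on fabricated stand-in data (every number below is invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Fabricated stand-in data: one mean score per participant x object,
# with a Group label and a Fluency covariate (counts of ideas).
rows = []
for pid in range(40):
    group = "AI" if pid < 10 else "human"
    p_eff = rng.normal(0, 0.05)  # participant-level random intercept
    for obj in ["rope", "box", "pencil", "candle"]:
        fluency = int(rng.integers(1, 7))
        score = (0.85 + (0.05 if group == "AI" else 0.0)
                 - 0.01 * fluency + p_eff + rng.normal(0, 0.05))
        rows.append({"participant": f"p{pid}", "group": group,
                     "fluency": fluency, "object": obj, "score": score})
data = pd.DataFrame(rows)

# Group as fixed effect, Fluency as covariate, intercept varying by participant.
model = smf.mixedlm("score ~ group + fluency", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```

This mirrors the structure of the first set of analyses only; the Type III ANOVA tables and emmeans-style post-hoc contrasts reported in the text have no direct one-line equivalent in statsmodels.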
Additionally, the max scores were higher for AI than humans, t\u2009=\u20095.495, p\u2009<\u20090.001. Fluency decreased the max scores, B\u2009=\u2009\u22120.088, SE\u2009=\u20090.023, CI , t(283)\u2009=\u2009\u22123.877, p\u2009<\u20090.001. The human subjective ratings of creativity showed similar results. The mean scores (Fig.\u00a0C) were higher for AI than for humans. The distribution of the subjective scores is shown in Fig.\u00a0. Next, we studied in more detail the responses of humans and each AI chatbot to each object, with Group and Object and their interactions as fixed effects and Fluency as a covariate. The effect of Fluency was statistically significant in all the following analyses, showing a similar pattern as in the previous analyses (decreasing mean scores and increasing max scores), so we do not report them. The analysis of mean semantic distance (Fig.\u00a0) showed a main effect of Group, F\u2009=\u20099.000, p\u2009<\u20090.001. Post-hoc pairwise comparisons with \u2018mvt\u2019 adjustment for multiple comparisons indicated that this effect was due to ChatGPT3, t(282)\u2009=\u2009\u22122.867, CI , p\u2009=\u20090.0213, and ChatGPT4, t(282)\u2009=\u2009\u22124.115, CI , p\u2009<\u20090.001, obtaining higher mean semantic distance scores than humans. Semantic distance differed between the objects, F\u2009=\u200910.102, p\u2009<\u20090.001, with responses to rope receiving lower scores than those to box, t(845)\u2009=\u2009\u22125.030, CI , p\u2009<\u20090.001, pencil, t(845)\u2009=\u2009\u22122.997, CI , p\u2009=\u20090.015, and candle, t(845)\u2009=\u2009\u22124.445, CI , p\u2009<\u20090.001. The interaction between Group and Object was not statistically significant, F\u2009=\u20091.098, p\u2009=\u20090.361. The analysis of max semantic distance also showed a main effect of Group, F\u2009=\u20093.088, p\u2009=\u20090.028, but post-hoc pairwise comparisons did not reveal any statistically significant differences between the groups after accounting for multiple comparisons.
The analysis of max semantic distance (Fig.\u00a0) also showed a main effect for object, F\u2009=\u20093.256, p\u2009=\u20090.021, which resulted from the responses to box receiving higher scores than those to rope, t(839)\u2009=\u2009\u22123.055, CI , p\u2009=\u20090.0124. Group and object did not interact statistically significantly, F\u2009=\u20090.641, p\u2009=\u20090.762. In summary, mean semantic distance scores of ChatGPT3 and ChatGPT4 were higher than those of humans, but no statistically significant differences between the AI chatbots were detected. However, it can be noted from Fig.\u00a0 that the highest AI max score, in response to pencil (1.124), was higher than the corresponding highest human max score (1.101). The analysis of human subjective rating mean scores (Fig.\u00a0) revealed main effects of Group, F\u2009=\u200916.147, p\u2009<\u20090.001, and Object, F\u2009=\u200914.920, p\u2009<\u20090.001. The performance of ChatGPT4 was superior: its responses received on average higher points than humans, t(283)\u2009=\u2009\u22126.6649, CI , p\u2009<\u20090.001, ChatGPT3, t(283)\u2009=\u2009\u22123.459, CI [\u22121.112 \u22120.1674], p\u2009=\u20090.003, and Copy.AI, t(283)\u2009=\u20093.609, CI , p\u2009=\u20090.002, which did not differ from each other. However, the superiority of ChatGPT4 could not be generalized to the object pencil, as suggested by the Group\u2009\u00d7\u2009Object interaction, F\u2009=\u20092.486, p\u2009=\u20090.008. Responses to candle received lower ratings than responses to rope, t(852)\u2009=\u20094.788, CI , p\u2009<\u20090.001, box, t(852)\u2009=\u20093.283, p\u2009=\u20090.006, CI , and pencil, t(852)\u2009=\u20093.104, CI , p\u2009=\u20090.011. In the analysis of subjective max scores, the main effect of Group (F\u2009=\u200910.612, p\u2009<\u20090.001) was due to ChatGPT4 getting higher scores than humans, t(283)\u2009=\u2009\u22125.400, CI , p\u2009<\u20090.001, ChatGPT3, t(283)\u2009=\u2009\u22122.711, CI , p\u2009=\u20090.033, and Copy.Ai, t(283)\u2009=\u20093.221, CI , p\u2009=\u20090.010.
Although the boxplots in Fig.\u00a0 suggest that ChatGPT4's subjective scores in response to pencil were lower compared with its own responses to the other objects, the Group\u2009\u00d7\u2009Object interaction did not reach statistical significance, F\u2009=\u20091.801, p\u2009=\u20090.064. Similarly to the subjective mean scores, the subjective max scores in response to candle were lower than those to rope, t(849)\u2009=\u20093.561, CI , p\u2009=\u20090.002, box, t(849)\u2009=\u20095.541, CI , p\u2009<\u20090.001, and pencil, t(849)\u2009=\u20093.126, CI , p\u2009=\u20090.010. There were two AI sessions where the max score in response to box was higher than the corresponding highest human max score (4.67; Figure\u00a0). We compared the performance of AI chatbots and human participants in a typical divergent thinking task, the AUT. On average, the AI chatbots outperformed the human participants in both mean scores and max scores (the best response to an object). This advantage was observed for both the semantic distance of the responses and the subjective ratings of creativity provided by unbiased human raters who were unaware that some of the responses were generated by AI. The standard definition describes creativity as the ability to produce ideas that are, to some extent, original and useful. This definition does not specify the internal processes that produce the creative idea, but instead attributes creative ability to an agent based on the creative products. Therefore, the present empirical data show that AI can produce creative outputs that have reached at least the same level as, and even exceeded, the average of humans in this task. Just as in the case of the arts, these results suggest that the production of creative ideas may not be a feature displayed only by conscious human beings. Beyond one AI top semantic distance score (in response to pencil), there were two instances where AI chatbots (ChatGPT3 and ChatGPT4 in response to box) achieved the highest subjective scores. In all other cases, the highest scores were achieved by humans.
However, it is evident from Figs.\u00a0 that although AI chatbots performed better than humans on average, they did not consistently outperform the best human performers. There was only one instance in which an AI chatbot achieved the highest semantic distance score. In associative theories of creativity, individuals differ in the structure of their semantic memory, and creativity is linked to flexible and highly connected semantic networks. One question that arises from the results is why ChatGPT4, the newest and most efficient AI chatbot currently available, performed so well according to human raters, compared to humans and other AI chatbots. According to OpenAI, ChatGPT4 can process eight times more words at once than ChatGPT3. However, ChatGPT4 was not better than the other chatbots as measured with the \u201cobjective\u201d semantic distance. This suggests that access to remote concepts alone may not explain why ChatGPT4's responses were evaluated as so creative. Perhaps an explanation lies in the more nuanced and surprising way in which ChatGPT4 combined the concepts. For example, in one session, ChatGPT4 responded to box with \u201ccat amusement park,\u201d while one human and ChatGPT3 responded with \u201ccat playhouse,\u201d which received lower creativity ratings. The correlations between semantic distance and humans' subjective ratings were substantial (>\u20090.50), but far from perfect, suggesting that they measured the same aspects of creativity only partially. Human raters may be more sensitive than automatic algorithms in recognizing surprise or other emotional components in the combined concepts, and ChatGPT4 seems to be able to include such components in its ideas. Moreover, we had to ask the chatbots to produce a specific number of responses and to limit the word count in their responses, even though they are capable of generating several ideas within seconds. This purposely impaired the potential of the AI.
Comparing human and chatbot creativity at the process level seems impossible, because chatbots are \u201cblack boxes\u201d, and we cannot know precisely how they generate responses or what information they have access to. It remains possible that they simply retrieve ideas that exist in their database. In such a case, their performance would merely reflect semantic retrieval, not creativity in the sense of combining concepts in new ways. The same problem exists with human participants, who may retrieve ideas they have encountered previously. Future studies should develop completely new tests for which no prior ideas exist. Moreover, the human group consisted of young and middle-aged adults from Western countries, which limits generalizations of the differences between humans and AI. A limitation of our study is the restricted number of observations from each chatbot, which limited the statistical power, especially in comparisons between individual chatbots. Additionally, the confounding effects of fluency and elaboration had to be controlled for. Understanding how AI systems and humans interpret, understand, and articulate language could potentially bridge the gap between machine efficiency and human intuition. As we move forward, it becomes imperative for future research to explore avenues where AI can be integrated to bolster and amplify human creativity, thereby fostering a close interaction between technology and human potential. The study provides insights into the relationship between human and machine creativity. The results suggest that AI has reached at least the same level as, or even surpassed, the average human's ability to generate ideas in the most typical test of creative thinking (AUT). Although AI chatbots on average outperform humans, the best humans can still compete with them. However, AI technology is rapidly developing, and the results may be different after half a year.
On the basis of the present study, the clearest weakness in humans' performance lies in the relatively high proportion of poor-quality ideas, which were absent in the chatbots' responses. This weakness may be due to normal variations in human performance, including failures in associative and executive processes, as well as motivational factors. It should be noted that creativity is a multifaceted phenomenon, and we have focused here only on performance in the most widely used task (AUT) measuring divergent thinking. Supplementary Information."} +{"text": "The pooled CRISPR screen is a promising tool for the identification of drug targets or essential genes, utilizing three different systems: CRISPR knockout (CRISPRko), CRISPR interference (CRISPRi) and CRISPR activation (CRISPRa). Aside from continuous improvements in technology, more and more bioinformatics methods have been developed to analyze the data obtained by CRISPR screens, which facilitates a better understanding of physiological effects. Here, we provide an overview of the application of CRISPR screens and bioinformatics approaches to analyzing different types of CRISPR screen data. We also discuss mechanisms and underlying challenges for the analysis of dropout screens, sorting-based screens and single-cell screens. Different analysis approaches should be chosen based on the design of the screens. This review will help the community to better design novel algorithms and provide suggestions for wet-lab researchers to choose from different analysis methods. The CRISPR screen has become a promising tool in the identification of essential genes or drug targets. In this review, we provide an overview of the application of CRISPR screens and bioinformatics approaches to analyzing different types of CRISPR screen data. We also discuss mechanisms and underlying challenges for the analysis of dropout screens, sorting-based screens and single-cell screens.
Clustered regularly interspaced palindromic repeats (CRISPR) loci with endonuclease (Cas) proteins form an immune defense system in bacteria, among which CRISPR-Cas9 is the most common. The aim of genome-scale screens is to generate a population of cells with different perturbations to identify genes or regulatory regions that play a role in specific phenotypes. Because of the wide range of potential target sequences, the CRISPR system has enabled powerful pooled screens. Based on different mechanisms, CRISPR screens can be categorized into three types: CRISPR/Cas9 knockout (CRISPRko) screens, CRISPR/dCas9 activation (CRISPRa) screens and CRISPR/dCas9 interference (CRISPRi) screens. The pooled CRISPR screens were initially used to identify essential genes for cell viability. As a genome-wide high-throughput screening technology, whether CRISPR screens can effectively provide insights largely depends on the accuracy of data analysis. There have been quite a few challenges for the development of CRISPR screen analysis methods. Because of next-generation sequencing (NGS), large volumes of noisy sequencing data have to be handled. Meanwhile, because multiple sgRNAs are designed for one target, we are also faced with variable sgRNA efficiency and off-target effects. The methods are also expected to deal with different phenotype effects, from simple cell viability to complicated transcriptome profiles. Despite the difficulty, various methods with different focuses have been developed for CRISPR screen analysis. The overall workflow of those methods usually includes sequence quality assessment, read alignment, read count normalization, estimating changes in sgRNA abundance, and aggregating sgRNA effects into the overall effects of targeted genes. In addition to those novel algorithms, some previously designed methods for RNA interference (RNAi) screening analysis can be repurposed for CRISPR screen analysis.
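The early workflow steps named above — read count normalization and estimating changes in sgRNA abundance — reduce, in the simplest case, to the sketch below; counts-per-million scaling and the pseudocount are simplifying assumptions, not any specific tool's implementation:

```python
import numpy as np

# Simplified sketch (assumed, not a specific tool): scale each sample to
# counts-per-million, then compute per-sgRNA log2 fold changes between
# treatment and control with a pseudocount to guard against zeros.
def log2_fold_changes(control, treatment, pseudocount=1.0):
    control = np.asarray(control, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    c = control / control.sum() * 1e6    # library-size normalization
    t = treatment / treatment.sum() * 1e6
    return np.log2((t + pseudocount) / (c + pseudocount))

# guide 1 drops out, guide 2 is unchanged, guide 3 is mildly enriched
lfc = log2_fold_changes([100, 200, 700], [50, 200, 750])
print(lfc.round(3))
```

Published tools typically replace the total-count scaling with more robust schemes (e.g. median-ratio normalization) before modeling the counts.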
Here, we will start with a comprehensive review and discussion of the computational approaches specifically developed for CRISPR screens. Next, we will introduce a group of shRNA screening methods that have been repurposed for CRISPR screen analysis. Finally, we will review the computational platforms that can be used for single-cell CRISPR screens and drug-gene interactions. A summary of the tools for CRISPR screen data analysis is shown in the table. MAGeCK uses p-values calculated from negative binomial distributions to rank sgRNAs, and a robust ranking aggregation (RRA) method to aggregate sgRNA effects at the gene level. Recently, MAGeCK has been further developed into integrated workflows such as MAGeCK-VISPR. A limitation of existing analysis approaches such as MAGeCK is that they have to accurately estimate individual shRNA/sgRNA effects, which is hard to achieve in reality given the lack of enough replicates. Instead of the previous two-step analysis, ScreenBEAM performs gene-level analysis with Bayesian hierarchical modeling; in its validation, simulations were used. This model can be understood as a mixture linear model that incorporates gene activity and variable guide silencing efficiency. ScreenBEAM outperformed other approaches especially with relatively low-quality screen data, which is small in size and noisy. In addition, ScreenBEAM can deal with data obtained from both microarray and large-scale NGS. BAGEL was developed in 2016 for analyzing gene knockout screens, and it evaluates gene essentiality in a Bayesian framework. PBNPA computes a p-value at the gene level by a permutation test with no distribution assumptions. It uses log2-fold changes of sgRNA counts to represent the overall gene effect so that it is less susceptible to outliers and off-target effects. Gene labels are randomly permuted to generate p-values for each gene. After that, genes with smaller p-values are intentionally removed and a more accurate null distribution is generated without significant genes. Updated p-values and FDRs for each gene can then be computed from this null distribution.
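The permutation idea just described can be illustrated with a toy version (not the published PBNPA code): score a gene by the mean log2 fold change of its guides, and build the null by drawing random guide sets of the same size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy permutation test in the spirit of the description above (not the
# published PBNPA implementation): a gene is scored by the mean log2 fold
# change of its guides; the null comes from random guide sets.
def permutation_pvalue(gene_lfcs, all_lfcs, n_perm=10_000):
    observed = np.mean(gene_lfcs)
    k = len(gene_lfcs)
    null = np.array([np.mean(rng.choice(all_lfcs, size=k, replace=False))
                     for _ in range(n_perm)])
    # one-sided: how often does a random "gene" look this depleted?
    return (np.sum(null <= observed) + 1) / (n_perm + 1)

background = rng.normal(0.0, 0.5, size=1000)       # fabricated guide effects
depleted_gene = np.array([-2.1, -1.8, -2.4, -1.9])  # strongly depleted gene
p = permutation_pvalue(depleted_gene, background)
print(p)
```

The iterative refinement in the text — removing significant genes and rebuilding the null — would simply rerun this with a filtered `background`.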
PBNPA was developed in 2017. A common null probability distribution was employed instead of a gene-specific distribution, which would otherwise require more computational time. Testing with real datasets, PBNPA has better FDR control than MAGeCK, and it is also more robust to data variability. Developed in 2018, CRISPhieRmix is one of the few methods so far intentionally developed for CRISPRi and CRISPRa screens. Log2-fold changes of sgRNAs were taken as input, and it was assumed that they follow a mixture distribution of effective guides and ineffective ones. FDRs were first calculated from the posterior probability that each gene is nonessential; then, by marginalizing over all possible mixture distributions, the final FDRs were obtained. Large improvements were found in CRISPRi/a screen analysis because CRISPhieRmix distinguished genes with variable guide efficiencies. However, CRISPhieRmix is largely dependent on good control guides. The problem of variable sgRNA efficiency is one of the sources of confounding in CRISPR screen analysis. Developed in 2019, JACKS is an algorithm based on Bayesian methodology that is able to model sgRNA efficiencies by borrowing information from multiple screens utilizing the same sgRNA library design, which shares a similar idea with BAGEL. By better modeling the guide-specific effect in multiple screens, JACKS improves the estimate of gene essentiality compared with other methods. In this way, JACKS increased the signal-to-noise ratio and worked especially well for negative selection. Developed in 2020, gscreend focused on accurate modelling of the read count distribution in CRISPR screens for improved experimental outcomes. It computes p-values for each sgRNA and aggregates sgRNA effects with the \u03b1-RRA algorithm.
MAUDE, introduced in 2020, was designed for CRISPR screens with FACS readouts that sort cells into separate bins and obtain sgRNA abundances in each bin by NGS. z-scores can be calculated for each guide and then aggregated for the estimation of the element effect, where elements can be either annotated by the target genes or identified by sliding-window methods in tiling screens. The authors used Stouffer\u2019s method to combine guide-level z-scores into gene-level significance. Generally, MAUDE is a useful approach for identifying regulatory elements in sorting-based screens. RNA interference (RNAi) is the phenomenon of homologous mRNA degradation caused by double-strand RNA (dsRNA). This approach has been used in large-scale screens to identify gene functions in vitro and in vivo. Despite its utility, RNAi suffers from off-target effects. RSA was developed in 2007 in order to deal with the off-target effects in RNAi screens. A statistical score was designed to estimate the probability of a gene hit according to multiple siRNA effects per gene. In RSA analysis, all guides were first ranked by their signals, such as log2-fold change. An iterative hypergeometric distribution was then used to calculate a p-value, which indicated the probability of all guides targeting one gene being nonrandomly distributed at the top rankings. Guides clustered at the top were regarded as active, and the rest were labeled as negative guides. Because RSA is probability-based, a gene with several moderately active guides was regarded as more essential than a gene with a single but extremely active guide. Due to its consideration of the collective effects of all guides targeting one gene, it is also a powerful way to mitigate sgRNA off-target effects, and it is used in both RNAi screens and CRISPR screens. RIGER was developed in 2008 and integrated the effects of multiple shRNAs targeting one gene to identify essential genes in RNAi screens.
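The hypergeometric p-value at the heart of the RSA-style ranking can be illustrated in a single, non-iterative step (the published method iterates over rank cutoffs; the numbers below are arbitrary):

```python
from math import comb

# One step of an RSA-style enrichment score: the probability that x or
# more of a gene's K guides land inside the top n of N ranked guides by
# chance (hypergeometric upper tail).
def hypergeom_enrichment(N, K, n, x):
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(x, min(K, n) + 1)) / total

# e.g. 4 of a gene's 5 guides inside the top 100 of 10,000 ranked guides
p = hypergeom_enrichment(10_000, 5, 100, 4)
print(f"{p:.2e}")
```

A small p-value here reflects exactly the intuition in the text: several moderately active guides clustering near the top is far less likely by chance than one extreme guide.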
Compared with MAGeCK, which was designed for CRISPR screen analysis, RIGER had a lower sensitivity at the gene level, and it missed some of the essential genes. It may not be accurate enough to assume a homogeneous cell population when trying to analyze the transcriptome profiles of perturbed cells, especially in studies where diverse types of cells are involved, such as immune responses or brain development. Single-cell CRISPR screens combine the advantages of CRISPR screens and scRNA-seq well. In general, the designed sgRNA library is transduced into different cell populations, which is conducive to the abundance of gene perturbations. Then, scRNA-seq serves as the readout to show how the transcriptome responds to a specific perturbation, and it largely increases the number of phenotypes researchers may obtain. Thus, single-cell CRISPR screens are useful for the exploration of complicated mechanisms in heterogeneous cell populations. For example, researchers developed an in vivo Perturb-seq system, introduced gene perturbations into an embryo, and performed single-cell sequencing in developing brain cells to identify the functions of autism-related genes in different brain cell types. Different technologies have also been developed to perform single-cell CRISPR screens, such as Perturb-seq. It is very challenging to analyze single-cell CRISPR screens because of their large scale and high variation. Gene expression clustering using mechanisms similar to those in scRNA-seq analysis can be used: each cell in the screen is usually categorized into different clusters by clustering analysis of its transcriptome. MIMOSCA is the analysis method designed for Perturb-seq in 2016.
Developed in 2019, MUSIC is another analysis method for single-cell CRISPR screens, and single-cell MAGeCK was also developed for this purpose. Expanded CRISPR-compatible cellular indexing of transcriptomes and epitopes by sequencing (ECCITE-seq) is a technique that combines pooled CRISPR screens with multimodal single-cell readouts. In SCEPTRE, z-values are calculated from a skew-t distribution, similar to CRISPhieRmix; finally, a p-value is computed by comparing the z-values to the generated null distribution. Embedded in the MAGeCK-VISPR workflow, the MAGeCK-MLE algorithm is able to estimate gene effects by maximum likelihood estimation. Based on the framework of CRISPR screens, drug treatment is added to the cell population, which enables researchers to explore the mechanisms of drug resistance. Unlike essential-gene screen analysis, where the read counts of sgRNAs after culturing for a period of time are compared to the initial sample, in drug-gene interaction screens the abundance of sgRNAs in a drug-treated group is compared to an untreated group at each time point as a pair. DrugZ is an algorithm intended to identify synergistic and suppressor interactions between chemical compounds and genes from CRISPR screens. The log2 fold changes of sgRNAs are calculated after normalizing the read counts to the control group at each time point. A z-score is calculated for each guide, and the variance is estimated by empirical Bayes. Gene-level z-scores are obtained by combining guide-level scores, after which p-values are obtained from the normal distribution. Both synergistic and suppressive interactions can be discovered in one experiment at the same time. DrugZ works well with CRISPRko and CRISPRi/a screens, and it has a higher sensitivity than other algorithms. In summary, the most challenging part of CRISPR screen analysis is to estimate sgRNA abundance and to aggregate the effects of sgRNAs with the same target to infer gene-level effects. Different methods have different hypotheses, and different distributions are utilized, such as the normal distribution, Poisson distribution and negative binomial distribution.
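The guide-to-gene aggregation described for DrugZ can be sketched in a toy form (an assumption-laden sketch, not the published implementation, and without the empirical-Bayes variance step): standardize guide log2 fold changes against a null, combine them with a sqrt(n)-scaled sum, and convert the gene-level z-score into a normal-distribution p-value.

```python
import math
import statistics

# Toy DrugZ-style scoring: guide-level z-scores are combined into a
# gene-level z-score and a two-sided normal p-value. The "background"
# null mean/SD below come from fabricated neutral guides.
def gene_zscore(guide_lfcs, null_mean, null_sd):
    z = [(x - null_mean) / null_sd for x in guide_lfcs]  # guide-level z
    z_gene = sum(z) / math.sqrt(len(z))                  # combine guides
    p = math.erfc(abs(z_gene) / math.sqrt(2))            # two-sided p
    return z_gene, p

background = [0.1, -0.2, 0.05, -0.1, 0.0, 0.15, -0.05, 0.2]  # fabricated
mu, sd = statistics.mean(background), statistics.stdev(background)
z_gene, p = gene_zscore([-1.2, -0.9, -1.4], mu, sd)  # strongly depleted gene
print(round(z_gene, 2), p < 0.001)
```

A negative gene-level z-score here would correspond to a synergistic (sensitizing) drug-gene interaction, a positive one to a suppressor interaction.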
The negative binomial model is more suitable because it considers the large variance of read counts in NGS data. Copy number variation (CNV) is also a significant phenomenon in genetic variation that should be accounted for. One approach takes the log2-fold change of sgRNA read counts between the control and treatment groups and computes it as the sum of a knockout effect and a copy-number effect, which is determined by the targeted loci and the copy number at each locus, with a CNV file as input. Some platforms, such as CRISPRCloud, provide accessible interfaces for CRISPR screen data analysis. Although the single-cell CRISPR screen is a promising tool to uncover complicated interactions, the number of cells that can be sequenced and analyzed in a screen is still limited. The identification of barcodes in single cells also requires improvements in the sensitivity of scRNA-seq and higher pairing accuracy. The cost in time and money of single-cell CRISPR screens may be further reduced by specific amplification of target genes or depletion of unrelated genes with high abundance. Some algorithms that can analyze single-cell CRISPR screens have been developed, but they are often designed for a specific methodology of building a library and lack generality. In addition to the false-positive and false-negative problems shared with traditional CRISPR screen analysis, it is even more challenging to accurately model hierarchies in different signaling pathways, which requires the development of novel algorithms to deal with a huge amount of data with intrinsic noise. Moreover, phenotype changes may not be restricted to the transcriptomic level, and efforts should be made to further incorporate diverse phenotypes such as chromatin state and protein level. Although CRISPR screens in vivo and in vitro have enabled the discovery of many hit genes, performing large-scale CRISPR screens of real patients for target identification is not yet feasible.
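The additive decomposition of the observed fold change into knockout and copy-number effects can be sketched with simulated control guides (all numbers fabricated), where the copy-number slope is estimated by least squares and subtracted out:

```python
import numpy as np

# Sketch of the assumed additive form: observed log2 fold change =
# knockout effect + copy-number effect. On control guides (no knockout
# effect), the copy-number slope can be fit and then subtracted.
rng = np.random.default_rng(1)
copy_number = rng.integers(1, 6, size=200).astype(float)
true_cn_slope = -0.3                    # hypothetical depletion per copy
observed = true_cn_slope * copy_number + rng.normal(0, 0.05, size=200)

slope, intercept = np.polyfit(copy_number, observed, 1)
corrected = observed - (slope * copy_number + intercept)
print(round(slope, 2))  # recovered slope, near the simulated -0.3
```

Real correction methods fit this jointly with the knockout effects across all guides rather than on held-out controls, but the arithmetic of the subtraction is the same.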
Recently, by integrating shRNA screens, CRISPR/Cas9 screens, transcriptomics and mutation profiles of TCGA samples, a deep learning method was able to predict cancer-specific vulnerabilities in clinical samples with in silico CRISPR/RNAi screens. Data sharing plays an important role in genomic discovery. In order to make it easier for researchers to compare experimental results in parallel, some CRISPR screen databases have been developed. CRISP-view is a comprehensive database of published CRISPR screens." \ No newline at end of file