diff --git "a/deduped/dedup_0846.jsonl" "b/deduped/dedup_0846.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0846.jsonl" @@ -0,0 +1,44 @@ +{"text": "As evidence mounts that secondhand smoke (SHS) can harm human health, an increasing number of U.S. and Canadian cities are passing bans on smoking in restaurants and bars. Proposed bans have been opposed by a few commercial establishments and their trade associations, which fear they may lose clients as a result. Some heating, ventilating, and air-conditioning (HVAC) contractors have suggested that a method called \u201cdisplacement ventilation\u201d can effectively control SHS, making smoking bans unnecessary. But a recent study indicates these systems cannot be depended upon to bring SHS down to safe levels. A study in the December 2001 issue of Regulatory Toxicology and Pharmacology reported that displacement ventilation can control SHS in smoking areas of restaurants. That study has been used to justify opposing local and provincial smoking ban proposals. Displacement ventilation systems typically introduce fresh air at or near floor level at a temperature slightly below the desired room temperature. This cooler fresh air displaces the warmer room air at the occupied level; heat and pollutants rise to the ceiling and are drawn out by an exhaust fan. Citing various flaws in the 2001 study, James Repace, an adjunct professor of public health at Tufts University School of Medicine, and Kenneth Johnson, a research scientist with the Public Health Agency of Canada, undertook their own study of displacement ventilation, which was published in the fall 2006 issue of IAQ Applications. They selected the same establishment used in the 2001 study. The Black Dog Pub housed a smoking bar connected by two pass-through windows and two open doorways to a nonsmoking dining room. Ventilation air was drawn into the nonsmoking area and exhausted out the far corner of the smoking area. Repace and Johnson conducted real-time measurements of particulate polycyclic aromatic hydrocarbons (PPAH), a tobacco smoke carcinogen, and respirable suspended particles (RSP), known to contribute to a variety of respiratory problems. The tests measured PPAH levels of 152 ng/m3 in the Black Dog\u2019s smoking section and 16 ng/m3 in the nonsmoking section. RSP levels of 199 \u03bcg/m3 and 40 \u03bcg/m3 were recorded in the smoking and nonsmoking areas, respectively. Measurements taken later, after a smoking ban was implemented, showed that levels of RSP and PPAH dropped by 80% and 96%, respectively, in the smoking area, and by 60% and 80% in the nonsmoking area. According to Repace, de minimis risk levels of SHS would occur at average RSP concentrations of 0.075 ng/m3 for persons exposed an average of 8 hours a day over 40 years (PPAH is not regulated). The following year, Repace and Johnson conducted similar tests in two restaurants in Mesa, Arizona. The restaurants were exempt from the city\u2019s nonsmoking ordinance based on their managers\u2019 claims that they could meet smoke-free standards by using displacement ventilation. At Romano\u2019s Macaroni Grill, RSP levels averaged 80 \u03bcg/m3 in the smoking bar and 229 \u03bcg/m3 in the adjacent nonsmoking restaurant. PPAH levels averaged 304 ng/m3 in the bar and 451 ng/m3 in the restaurant. At T.G.I. Friday\u2019s, RSP levels averaged 205 \u03bcg/m3 in the smoking bar and 306 \u03bcg/m3 in the nonsmoking restaurant. PPAH levels averaged 13 ng/m3 in the bar and 2 ng/m3 in the restaurant (the latter reflects in part a period during which an outside door was propped open). Based on the nonsmoking sections\u2019 having higher levels of pollutants than the smoking sections, the authors concluded that the ventilation systems in both restaurants were seriously out of balance. However, the Black Dog system, though properly designed and operated, still could not prevent all workers and patrons from being exposed to hazardous levels of SHS. David Sutton, a spokesman for Philip Morris USA, says he can\u2019t comment on displacement ventilation in particular, but maintains that \u201cin many indoor public places, reasonable ways exist to respect the comfort and choices of both the smoking and nonsmoking adults.\u201d Sutton says establishment owners \u201cshould have the flexibility to address the preferences of nonsmokers and smokers through separation, separate rooms, and/or high-quality ventilation.\u201d However, Repace and Johnson concluded that banning smoking is the only way to guarantee a smoke-free indoor environment. \u201cThe 2006 Surgeon General\u2019s report states flatly that there is no safe level of SHS exposure,\u201d Repace says. \u201cDisplacement ventilation is not a viable substitute for smoking bans in controlling SHS exposure in either designated smoking areas or in contiguous designated nonsmoking areas.\u201d Repace says studies indicate that if you can\u2019t smell tobacco smoke, you are probably not being exposed to a dangerous amount. However, he adds, people with heart conditions or asthma should avoid any place where people are smoking."} +{"text": "One of the methyl groups is disordered over two sites, with site occupation factors of 0.47\u2005(15) and 0.53\u2005(15).
The crystal packing is controlled by van der Waals forces and a possible C\u2014H\u22efO inter\u00adaction, forming a chain running parallel to the a axis. In the title compound, b = 5.9077 (1) \u00c5, c = 37.0971 (7) \u00c5, \u03b2 = 92.999 (1)\u00b0, V = 2037.00 (7) \u00c53, Z = 4, Mo K\u03b1 radiation, \u03bc = 0.08 mm\u22121, T = 296 K, crystal size 0.24 \u00d7 0.20 \u00d7 0.13 mm. Data were collected on a Bruker X8 APEXII CCD area-detector diffractometer: 25575 measured reflections, 4138 independent reflections, 3411 reflections with I > 2\u03c3(I), R int = 0.038. Refinement: R[F 2 > 2\u03c3(F 2)] = 0.070, wR(F 2) = 0.159, S = 1.25, 4138 reflections, 257 parameters, 1 restraint, H-atom parameters constrained, \u0394\u03c1max = 0.27 e \u00c5\u22123, \u0394\u03c1min = \u22120.26 e \u00c5\u22123. Data collection: APEX2; program used to solve structure: SHELXS97 (Sheldrick, 2008); program used to refine structure: SHELXL97 (Sheldrick, 2008); molecular graphics and checking: PLATON (Spek, 2009); software used to prepare material for publication: publCIF (Westrip, 2010). Crystal structure: contains datablocks I, global. DOI: 10.1107/S1600536810007506/pv2262sup1.cif. Structure factors: contains datablocks I. DOI: 10.1107/S1600536810007506/pv2262Isup2.hkl. Additional supplementary materials: crystallographic information; 3D view; checkCIF report."} +{"text": "Typically, a large amount of information is collected during healthcare research, and this information needs to be organised in a way that makes it manageable and facilitates clear reporting. The Chiropractic Observation and Analysis STudy (COAST) was a cross sectional observational study that described the clinical practices of chiropractors in Victoria, Australia. To code chiropractic encounters, COAST used the International Classification of Primary Care (ICPC-2) with the PLUS general practice clinical terminology. This paper describes the process by which a chiropractic-profession specific terminology was developed for use in research by expanding the current ICPC-2 PLUS system. The coder referred to the ICPC-2 PLUS system when coding chiropractor-recorded encounter details.
The coder used rules and conventions supplied by the Family Medicine Research Unit at the University of Sydney, the developers of the PLUS system. New chiropractic specific terms and codes were created when a relevant term was not available in ICPC-2 PLUS. Information was collected from 52 chiropractors who documented 4,464 chiropractor-patient encounters. During the study, 6,225 reasons for encounter and 6,491 diagnoses/problems were documented, coded and analysed; 169 new chiropractic specific terms were added to the ICPC-2 PLUS terminology list. Most new terms were allocated to diagnoses/problems, with reasons for encounter generally well covered in the original ICPC-2 PLUS terminology: 3,074 of the 6,491 (47%) diagnoses/problems and 274 of the 6,225 (4%) reasons for encounter recorded during encounters were coded to a new term. Twenty-nine new terms (17%) represented chiropractic processes of care. While existing ICPC-2 PLUS terminology could not fully represent chiropractic practice, adding terms specific to chiropractic enabled coding of a large number of chiropractic encounters at the desired level. Further, the new system attempted to record the diversity among chiropractic encounters while enabling generalisation for reporting where required. COAST is ongoing, and as such, any further encounters received from chiropractors will enable addition and refinement of ICPC-2 PLUS (Chiro). More research is needed into the diagnosis/problem descriptions used by chiropractors. The chiropractic profession in Australia is an important component of the healthcare system. There are approximately 4,300 registered chiropractors in Australia. Classification of information in clinical practice is useful for both clinical and research purposes. In clinical situations, classification is a way of organising information and can act as a common language between health professionals. The International Classification of Primary Care (ICPC) is widely used for classification in general medical practice. The first version (ICPC Version 1) was later succeeded by the current second edition, ICPC-2. In addition to general medical practice, ICPC has also been used for classification in areas such as pharmacy, nutrition and traditional Chinese medicine. Van Mil et al. (1998) created a pharmacy sub-set of ICPC codes for use by community pharmacists to document complaints/diagnoses of clients when providing pharmaceutical care. ICPC-2 uses three character alpha-numeric codes to classify symptoms/complaints, problems/diagnoses or processes of care. For each ICPC-2 code, the alpha component represents a chapter, or body system, and the two digit numeric component represents a concept within the body system. This three character code is called a rubric. To allow for greater detail and specificity to be recorded in Australian general practice, the ICPC-2 PLUS terminology was developed by the Family Medicine Research Centre, University of Sydney. Each PLUS term is classified to ICPC-2. ICPC-2 PLUS uses a six character term identifier, created by adding a three digit number to the ICPC-2 rubric to which the term has been classified. In research situations such as BEACH, secondary coding of collected clinical data is performed to allocate an ICPC-2 PLUS term to each reason for encounter, diagnosis/problem, and process of care. To do this, coders search an extensive keyword list, with keywords linked by logic to the ICPC-2 PLUS six character codes. On selecting a keyword, the coder is presented with all available associated terms. The coder then selects the term that is considered to most closely reflect the practitioner documentation. To ensure consistent coding by different researchers, the BEACH program (using ICPC-2 PLUS) has developed a set of \u2018coding rules\u2019.
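The identifier layout just described (a one-letter chapter plus a two-digit concept number forming the ICPC-2 rubric, extended by a three-digit PLUS term number) can be sketched as follows. This is our own illustration of the layout, not code from ICPC-2 PLUS; the function name and validation are assumptions.

```python
import re

# Illustrative parser for the identifier layout described in the text:
# chapter letter + two-digit concept = the ICPC-2 rubric; a further
# three-digit number identifies the PLUS term within that rubric.
PLUS_CODE = re.compile(r"^([A-Z])(\d{2})\s?(\d{3})$")

def parse_plus_code(code):
    """Split an ICPC-2 PLUS style term identifier into chapter, rubric and term parts."""
    m = PLUS_CODE.match(code)
    if m is None:
        raise ValueError(f"not an ICPC-2 PLUS style identifier: {code!r}")
    chapter, concept, term = m.groups()
    return {"chapter": chapter, "rubric": chapter + concept, "term": term}

# 'N99 038' (Radiculopathy) is one of the codes quoted later in the text.
print(parse_plus_code("N99 038"))
# {'chapter': 'N', 'rubric': 'N99', 'term': '038'}
```

Keeping the rubric recoverable from the six-character code is what allows PLUS terms to be rolled up to the internationally comparable ICPC-2 level described above.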
These rules cover situations such as coding a patient\u2019s history of disease and coding when no reason for encounter (or diagnosis) has been documented. In addition to allowing coding of clinical information to a specific PLUS term, the PLUS terminology enables standardised grouping of similar concepts (or groups of concepts). This assists in the organisation of data collected for research reporting purposes. Grouping using the three character ICPC-2 rubric provides internationally comparable data at the ICPC-2 level. However, BEACH also allows grouping using individual terms separate from any other terms within their rubric. For example, individual osteoarthritis terms from different rubrics are grouped together for reporting \u2018Osteoarthritis-all\u2019. This coding and grouping of practice-generated data enables reporting at different levels depending on the audience: the body system involved (chapter), a specific condition, or a grouping of similar concepts. While the ICPC-2 classification and its associated ICPC-2 PLUS terminology are extensive, there is currently no classification system specifically designed for researching the clinical activities of the chiropractic profession. The aim of this study was to develop a research tool, ICPC-2 PLUS (Chiro), by extending the current PLUS terminology with additional relevant terms for the chiropractic profession. This paper describes the process of development of a chiropractic specific terminology and classification system for use in research. This included the creation of new terms, codes and reporting groups to accurately represent chiropractic encounters. The Chiropractic Observation and Analysis STudy (COAST) was a cross sectional, prospective, observational study that aimed to describe chiropractic clinical activity in Victoria, Australia.
Chiropractors were trained over the telephone to complete the data collection forms, and recorded anonymous patient encounter data on hand-written paper encounter recording forms in free text and with the use of check boxes. COAST used the ICPC-2 PLUS terminology to secondarily code free text information from chiropractor-patient encounters, but new terms were required to describe chiropractic clinical practice. In developing the chiropractic specific ICPC-2 PLUS (Chiro), researchers followed the BEACH coding rules. Upon receipt of completed COAST chiropractic clinical encounter forms, one researcher coded each form. Where necessary, and in consultation with a second researcher, new chiropractic specific terms were added to the current ICPC-2 PLUS term list. At the completion of data collection and data entry, these new terms were allocated to an ICPC-2 rubric and then grouped via these rubrics for reporting purposes (see Table\u2009). The BEACH coding rules (using ICPC-2 PLUS) were used to guide the coding of the chiropractic terms handwritten on the encounter forms. A maximum of three reasons for encounter and three diagnoses/problems could be documented at each chiropractor-patient encounter. When more than three reasons for encounter or diagnoses/problems were provided on an encounter form, the first three recorded were used. When a reason for encounter or diagnosis/problem was repeated on an encounter form, it was only recorded once. Where a reason for encounter, problem, or process of care was documented that had no corresponding ICPC-2 PLUS term, a new term (and code) was created. In line with the ICPC-2 PLUS structured format, each new code contained two parts: the first three characters, \u2018J99\u2019, identified it as a new code not yet classified to ICPC-2, and the last three digits provided a unique numeric identifier.
Using \u2018J99\u2019 ensured these new codes could not be mistaken for existing codes, as there is no \u2018J\u2019 chapter in ICPC-2 (see Figure\u2009). Two methods were used to create new terms for reasons for encounter, diagnoses/problems and processes of care. First, a list of anticipated terms was generated by the research team before data entry and coding. When a reason for encounter or diagnosis/problem was common in chiropractic practice but not represented in ICPC-2 PLUS (such as \u2018chiropractic subluxation\u2019), a \u2018J-code\u2019 was generated. In these cases, the research team created a list of chiropractic specific diagnosis/problem terms (and related codes) that they anticipated would be recorded during the COAST encounters. These new terms represented a chiropractic specific diagnosis/problem and a site. The sites were uniform across the new terms, so each problem would have the same site options available. For example, terms created for problems with the wrist allowed the choice of \u2018Chiropractic subluxation;wrist\u2019, \u2018Dysfunction;wrist\u2019 or \u2018Restriction/Fixation;wrist\u2019. Second, if an unanticipated reason for encounter, diagnosis/problem or process of care was identified during COAST data entry for which there was no existing ICPC-2 PLUS term, and no relevant term in the anticipated list, an additional term (and corresponding J-code) was created. This was done by the coder during data entry and later discussed with the research team at a coding meeting. In each case the research team discussed the information documented by the chiropractor on the encounter form, possible ICPC-2 PLUS term options, and whether a new J-code was required. Examples of terms created in this way included the reason for encounter \u2018Wellbeing\u2019 and the diagnosis/problem \u2018Piriformis Syndrome\u2019.
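The J-code convention described above (a fixed 'J99' prefix, which cannot collide with real rubrics because ICPC-2 has no 'J' chapter, plus a unique three-digit number) can be sketched as a small registry. This is our own illustration of the scheme under those assumptions, not COAST's actual coding software.

```python
# A minimal sketch of the J-code scheme described in the text; the class and
# method names are our own. New terms receive 'J99' plus the next available
# three-digit number; asking again for an already-seen term reuses its code.
class JCodeRegistry:
    def __init__(self):
        self._codes = {}   # term text -> assigned J-code
        self._next = 1

    def code_for(self, term):
        """Return the existing J-code for a term, or mint the next one."""
        if term not in self._codes:
            self._codes[term] = f"J99{self._next:03d}"
            self._next += 1
        return self._codes[term]

reg = JCodeRegistry()
print(reg.code_for("Chiropractic subluxation;wrist"))  # J99001
print(reg.code_for("Piriformis Syndrome"))             # J99002
print(reg.code_for("Chiropractic subluxation;wrist"))  # J99001 again: one code per term
```

In the study the J-codes were provisional: at the end of data entry each one was classified to a real ICPC-2 rubric, which is what makes the later rubric-level grouping possible.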
Because these new terms were developed to describe chiropractic practice, there was no restriction on the generation of new terms for problems/diagnoses or procedures. Further, there was no attempt to merge terms with similar meanings (e.g. joint dysfunction and manipulable lesion), as such terms come together at a later stage, when they are classified at the rubric level. Any new terms generated during COAST that the research team considered to be relevant to general practice were submitted to the Family Medicine Research Centre, The University of Sydney, for consideration as additions to ICPC-2 PLUS updates. In classifying the new chiropractic terms to ICPC-2, researchers identified the most appropriate ICPC-2 rubric for each term. The ICPC-2 PLUS Keyword and Rubric Indices were searched for terms similar to the new chiropractic term. For example, the new chiropractic term \u2018Dysfunction;sacroiliac joint\u2019 was classified to the ICPC-2 rubric L03 (Low back symptom/complaint) using the PLUS term \u2018Dysfunction; joint\u2019 as a guide for chapter allocation. The term was then mapped to L03, as this rubric is more site specific than the more general L20 (Joint symptom/complaint not otherwise specified). The COAST research team identified that the existing ICPC-2 classes, together with the additional PLUS reporting groups, were not suited to reporting common reasons for encounter and diagnoses/problems in chiropractic practice. As such, entirely new COAST-specific reporting groups were devised, based only upon data collected during the study rather than on ICPC-2 grouping conventions. In addition, groups were developed only at the ICPC-2 rubric level rather than using separate PLUS terms as is sometimes done in the BEACH study. The groups were created specifically for reporting to chiropractors, and were devised in one of two ways. The first was to group problems to a site.
For example, all problems related to the shoulder were grouped to \u2018Shoulder Problem\u2019. The second approach was to group all types of a problem (such as headache or depression) to one group. Each group was mutually exclusive, with no rubric being present in more than one group. In this way, all J99 codes could be grouped together with existing ICPC-2 PLUS terms for reporting through their link to an ICPC-2 rubric. For example, the J-code \u2018Spinal Subluxation Syndrome\u2019 was grouped together with the ICPC-2 PLUS term \u2018Dysfunction Spine\u2019 under our new group \u2018Spinal Problem\u2019. Full results of COAST will be reported elsewhere. In brief, information was collected from 52 chiropractors (45% response rate), with 4,464 chiropractor-patient encounters documented, including 6,225 reasons for encounter and 6,491 diagnoses/problems, which were coded and analysed during the study (see Figures\u2009). In COAST, 169 new chiropractic specific terms were generated and added to the ICPC-2 PLUS term list (see the Additional file). The 169 new terms were used in a large proportion of the encounters recorded in COAST: 274 of the 6,225 (4%) recorded reasons for encounter, and 3,074 of the 6,491 (47%) recorded diagnoses/problems. New terms for chiropractic methods of care covered 10,855 (72%) of the total 15,179 methods of care recorded, and new terms for clinical advice and education were used in coding 9% of recorded recommendations. Most of the 169 new chiropractic terms were classified into the Musculoskeletal ICPC-2 chapter, followed by the General and Unspecified chapter. Of the new terms generated, the most commonly used were those for techniques and care provided. The most common new term for problem/diagnosis was \u2018Chiropractic Subluxation\u2019 (n=389). Fifty-seven of the terms generated in anticipation of their use in chiropractic clinical practice were not recorded by the chiropractors in COAST.
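The mutually exclusive, rubric-level grouping described earlier (each ICPC-2 rubric belongs to at most one reporting group, so J99 terms and existing PLUS terms roll up together through their rubric) might be sketched like this. The group names follow examples from the text, but which rubrics sit in each group is our own illustrative guess, not the actual COAST table.

```python
# Illustrative rubric-to-group table. 'Spinal Problem' and 'Shoulder Problem'
# are group names quoted in the text; the rubric assignments are assumptions
# made for demonstration only.
GROUPS = {
    "Spinal Problem": {"L03"},
    "Shoulder Problem": {"L08", "L92"},
}

def invert_groups(groups):
    """Build a rubric -> group lookup, enforcing mutual exclusivity of groups."""
    lookup = {}
    for group, rubrics in groups.items():
        for rubric in rubrics:
            if rubric in lookup:
                raise ValueError(f"rubric {rubric} appears in more than one group")
            lookup[rubric] = group
    return lookup

LOOKUP = invert_groups(GROUPS)

def report_group(rubric):
    """Map a term's rubric (including reclassified J99 terms) to its reporting group."""
    return LOOKUP.get(rubric)  # None: not grouped, so the term is reported individually

print(report_group("L03"))  # Spinal Problem
print(report_group("D06"))  # None in this toy table: reported as an individual term
```

Raising on an overlapping rubric mirrors the stated design constraint that grouped counts stay mutually exclusive, so each diagnosis contributes to at most one group total.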
For a full list of new terms generated and their frequency of occurrence in the COAST data, see the Additional file. Fourteen reporting groups were generated using combinations of diagnostic terms via their rubrics. Of the 6,491 times a diagnosis was recorded by a chiropractor during COAST, 5,407 were grouped into one of the 14 COAST-specific reporting groups; the remaining 1,084 remained as individual terms. While the majority of terms created during COAST were chiropractic specific, some (such as \u2018cervicogenic headache\u2019 and \u2018advice about footwear\u2019) were considered to be relevant to general medical practice. Nine terms were submitted to the Family Medicine Research Centre, The University of Sydney, for consideration for addition to ICPC-2 PLUS. Eight of these were accepted and added to a subsequent update of ICPC-2 PLUS. To accurately record chiropractic encounters, the creation of a large number of new terms was required. This study has shown that by adding chiropractic specific terms to the ICPC-2 PLUS terminology, it is possible to code a large number of chiropractic encounters and to classify and report them at the desired level. However, this is a work in progress, and further data collection will require the addition of new terms. Although existing ICPC-2 PLUS terms mostly covered the reasons for encounter and processes of care, the PLUS terms were not as successful in representing the diagnoses/problems recorded by chiropractors. Just under half of the total diagnoses/problems recorded in COAST were coded using newly created chiropractic specific terms. The strength of this study came from using the well-established ICPC-2 PLUS terminology as a base and adding to it to meet chiropractic specific needs. A large number of chiropractic specific terms had to be added to record chiropractic encounters; general practice and chiropractic differ in scope, so this had been expected.
Using the ICPC-2 PLUS process allowed the straightforward creation of these new terms and enabled them to be grouped together for ease of reporting. The new terms generated in this study reflect the terms used by chiropractors in practice to represent what occurred in their patient encounters. Having a term assigned does not mean the diagnosis/problem can be substantiated by evidence; it simply means that one or more chiropractors used the term to record their patient encounters. More research is needed into the diagnosis/problem descriptions used by chiropractors and the level of evidence that supports the existence of the conditions the chiropractors labelled. This issue has been extensively examined in the general medical practice setting, where it has been found that a definitive diagnosis is not apparent in about half of general practitioners\u2019 consultations, that many patients present to general practice without a serious physical disorder, and that there is wide variance in the way general practitioners describe the diagnosis/problem under management. This study highlighted the wide range of terms used in documentation of chiropractic encounters. This resulted in separate terms being created for what could essentially be considered the same diagnosis/problem. All new terms were mapped to ICPC-2 rubrics and chapters, so the inter-clinician variance in terms used in clinical practice is reduced when reported at these levels, where like terms are classified to the same rubric. While a consultation process took place among the members of the research team to determine whether a new term should be created, 169 new terms were still required. We assume that any further documentation of chiropractic encounters will require the generation of additional terms, and possibly the merging of terms already generated, particularly those not used by the chiropractors in COAST.
Future research in this area should include investigation into the terms used in chiropractic to distinguish synonyms from separate terms. A more extensive consultation process with members of the chiropractic profession would potentially allow synonyms to be identified and linked to one term rather than kept as several separate terms. For example, \u2018Restriction/Fixation;pelvis\u2019 might be linked to the PLUS term \u2018Dysfunction;pelvis\u2019 rather than remain a separate term. Two examples of new chiropractic terms generated in COAST highlight the different meanings of the same term in the general medical profession and the chiropractic profession. First, the term \u2018subluxation\u2019 is already present in ICPC-2 PLUS under the L80 rubric \u2018Dislocation/Subluxation\u2019. However, this term is listed under the accepted medical definition of subluxation, that is, a partial dislocation. Some chiropractors use this term in a different context, hence the series of terms related to \u2018chiropractic subluxation\u2019 were generated. Allocation of most new terms to an ICPC-2 rubric was straightforward, such as allocating \u2018Dysfunction; joint; sacroiliac\u2019 to the Musculoskeletal chapter using \u2018Dysfunction; joint\u2019 as a reference guide. However, in some cases rubric selection was more subjective, and the investigators acknowledge that other researchers may allocate different ICPC-2 rubrics to the J99 codes. Using the COAST grouping process made it possible to report the distribution of individual diagnoses/problems both to a chiropractic audience and, by using broader groups, to the wider health community. The existing groups used by BEACH are general medical practitioner focused; for example Hypertension, Neoplasm and Abdominal Pain. Although the existing groups did include musculoskeletal groups such as Osteoarthritis and Sprains/Strains, in some cases the ICPC-2 PLUS group did not include terms a chiropractor would use.
For example, the ICPC-2 PLUS group \u2018Back Complaints\u2019 did not include the rubric L20 \u2018Joint Symptom/complaint Not Otherwise Specified\u2019, which the research team considered essential to include for chiropractic reporting. Special consideration was required when assigning rubrics to COAST reporting groups, particularly as the majority of the groups were derived from newly created \u2018J-codes\u2019. Great care had been taken when classifying new chiropractic terms to ICPC-2; however, with each allocation of an ICPC-2 rubric to a COAST group, a \u2018double check\u2019 of the rubric was made. This ensured that the \u2018J-code\u2019 had been classified to the most appropriate rubric and that the rubric was assigned to the most appropriate group according to the COAST data. In this way the research team produced the groups they felt were most relevant to chiropractic. For example, to better report chiropractic care, the reporting group \u2018Health Maintenance/Preventative Care\u2019 combined any ICPC-2 PLUS term that included \u2018Health Maintenance\u2019 or \u2018Check Up\u2019 with the J99 code \u2018Wellbeing\u2019. It should be noted that the term \u2018Problem\u2019 has been used to name the COAST groups rather than \u2018Symptom\u2019 or \u2018Complaint\u2019. Within chiropractic clinical encounters, there is often no symptom or complaint as the reason for encounter, as shown by the large number of encounters recorded as wellbeing and health maintenance visits. The COAST research team considered that the profession would be more accepting of the alternative title for reporting of results. Researchers who wish to use the new ICPC-2 PLUS (Chiro) need to be aware of its limitations. The chiropractic version of ICPC-2 PLUS only contains terms recorded by the 52 participants in the COAST study, who recorded 4,464 chiropractor-patient encounters, including details of 6,225 reasons for encounter and 6,491 diagnoses.
Expansion of this study to a wider group of participants would be expected to result in additional terms being added to the classification system. Further, the COAST-specific reporting groups are not transferable to other studies, because they only include the ICPC-2 PLUS terms used in this study, plus the newly generated chiropractic terms. This is especially true because the COAST groups were created at the rubric level rather than at the term level. For example, the COAST group \u2018CG103-Back syndrome with radiating pain\u2019 included all terms allocated to the rubric N99 \u2018Neurological disease, other\u2019. In the COAST data, only the terms \u2018Neuralgia\u2019 (N99 014) and \u2018Radiculopathy\u2019 (N99 038) were present from the whole N99 rubric. In the PLUS terminology, there are currently 34 terms allocated to the N99 rubric, including terms such as \u2018Narcolepsy\u2019 (N99 013) and \u2018Encephalopathy\u2019 (N99 042), which are not relevant to the \u2018CG103-Back syndrome with radiating pain\u2019 group. A comprehensive chiropractic grouping tool would require each of the ICPC-2 PLUS terms to be considered for each of the COAST groups. In some cases, this would result in individual terms within a rubric being assigned to different groups. For example, neuralgia might be grouped to \u2018CG103-Back syndrome with radiating pain\u2019, while narcolepsy might not be assigned to a chiropractic group at all. More work is needed before this grouping can be used by other research teams. Previous studies that used ICPC in their research adapted the classification to each study\u2019s particular needs. The focus of Meier and Rogers\u2019 (2006) study of Traditional Chinese Medicine encounters was to develop data management and reporting guidelines. The production of ICPC-2 PLUS (Chiro) for COAST differed in two main ways from these previous studies.
COAST used ICPC-2 PLUS, rather than ICPC-2, to develop the system; this provided coders with a large number of chiropractic-relevant terms already present within the terminology. In addition, COAST was able to create new terms specific to chiropractic rather than only using those available. By using the ICPC-2 PLUS system, researchers had a wider range of keywords to search when assigning terms to reasons for encounter, diagnoses and procedures. Although its terms are specifically relevant to general practice, the ICPC-2 PLUS keyword list was well suited to coding information documented at chiropractic encounters, as shown by the low percentage of new terms that had to be created to describe reasons for encounter. When terms relevant only to chiropractic were not present in the ICPC-2 PLUS term list, researchers were able to add new terms. This enabled a significant number of problems identified by chiropractors to be recorded that would otherwise have been placed under a non-specific term if forced to fit the existing system. The research team had anticipated the need for new chiropractic specific terms because of the differing styles of practice and the wide range of terminology used in the profession. This is the first published chiropractic specific classification system generated for reporting chiropractic clinical encounters. The research team set out to produce a system specific to chiropractic which could be used as a research tool. This is a first step in the long-term development of ICPC-2 PLUS (Chiro). COAST is ongoing, and as such, any further encounters received from chiropractors will enable addition and refinement of ICPC-2 PLUS (Chiro). We will continue to build the terminology and further develop the reporting groups as new data from a wider range of chiropractors becomes available.
Development of a robust terminology and chiropractic-specific classification will enable researchers to study information particular to chiropractors, using specific descriptions to accurately represent chiropractic encounters, while allowing reporting of findings to the wider health community. More research is needed into the diagnosis/problem descriptions used by chiropractors. SF is an Associate Editor with Chiropractic & Manual Therapies and had no involvement in the editorial process for this paper. Otherwise, the authors declare that they have no competing interests. SF, MC and HB conceived the study. SF, JG and BP applied for funding for the study. SF, MC and KF participated in the study design and coordination. MC and SF drafted the manuscript, with significant input from HB. All authors read and approved the final manuscript. An additional file provides the full list of new chiropractic-specific terms generated during COAST, their frequency in the study, and the ICPC-2 rubric and chapter they were mapped to."} +{"text": "A total of 45 female patients who underwent PET/CT scan due to raised CA-125 levels, clinical suspicion of ovarian cancer recurrence or alterations detected on ultrasound (US), CT or magnetic resonance imaging (MRI) were included in this retrospective study. PET/CT results were compared with histological findings (n=15) or clinical, laboratory and repeated imaging techniques during subsequent follow-up for at least six months (n=30). CA-125 was elevated in 34 patients, 14 patients had clinical symptoms of disease and 23 presented with alterations on US, CT and MRI. A total of 42 patients were confirmed to have ovarian cancer recurrence, all with abnormal findings on PET/CT. Three patients remained free of disease during clinical follow-up, all with normal PET/CT findings. There were 11 patients with raised CA-125 levels and normal conventional imaging, all with positive PET/CT. 
Among the 11 patients with normal CA-125 levels, eight presented with positive PET/CT scan. Lymph nodes were the most frequent site of relapse of disease, followed by peritoneal implants. Distant sites of metastasis included the liver, spleen, pleura, lung and bone. PET/CT detected unsuspected lesions in 20/45 patients (44.4%). 18FDG PET/CT was a useful tool for evaluating the extent of ovarian cancer recurrence. In the current series, lymph nodes were the most frequent site of relapse of disease, with supradiaphragmatic lymph node metastasis in a large number of cases. The aim of the present study was to evaluate the use of 2-deoxy-2-(18F)-fluoro-D-glucose (18FDG) PET/CT in this setting. Ovarian cancer accounts for 3% of all cases of cancer in females; however, it has the highest mortality of all gynecological cancers. Computed tomography (CT) and magnetic resonance imaging (MRI) should be performed during patient follow-up when there is clinical suspicion of ovarian cancer recurrence or CA-125 elevation. A systematic review and meta-analysis by Gu et al evaluated these imaging modalities. While the level of CA-125 has been shown to be a sensitive marker for tumor recurrence and levels may rise 3 to 6 months before there is clinically apparent disease, it does not provide information concerning the size and distribution of the lesions. 18FDG PET/CT may play an important role in ovarian cancer recurrence, as the metabolic tracer is able to increase lesion detection, the fusion of metabolic and anatomical imaging aids the determination of the exact location of disease and it is capable of surveying the whole body. Several studies have examined the performance of PET/CT scanning in patients with recurrent ovarian cancer; in one study, the management was changed in 53 patients (58.9%) based on PET/CT scan findings. 
An Australian prospective, multi-center cohort study of 90 females assessed the clinical impact of 18FDG PET/CT. PET/CT was superior to abdominal and pelvic CT in the detection of nodal, peritoneal and subcapsular liver disease and it also allowed the identification of patients whose disease was likely to progress within 12 months. The authors suggested that PET/CT should be the preferred imaging modality in patients with suspected ovarian carcinoma recurrence. The aim of the present study was to evaluate the use of 18FDG PET/CT in patients with suspicion of ovarian cancer recurrence and describe the distribution of metastasis. A total of 45 female patients with suspicion of ovarian cancer recurrence were included in this retrospective study. The patients underwent a PET/CT scan at PET/CT Campinas, a private clinic in Campinas, S\u00e3o Paulo, Brazil, between November 2006 and November 2010. Indications for PET/CT were clinical suspicion of relapse of disease, elevated CA-125 or abnormal or equivocal findings on abdominal/pelvic US, CT or MRI. All patients had undergone surgery and all but one had received adjuvant chemotherapy at the time of diagnosis. A total of 18 patients had already had relapse of disease during their previous follow-up and PET/CT was performed for the suspicion of new progression of disease. The study was approved by the ethics committee of the Medical Sciences Faculty, State University of Campinas, Unicamp, S\u00e3o Paulo, Brazil. Detailed patient and tumor characteristics are shown in the table. The patients rested in a supine position for 40 to 60 min after the 18FDG injection and were then positioned for PET/CT imaging. All PET/CT scans were performed on a combined 16-slice CT/BGO PET scanner. The patients received oral contrast: two glasses before the 18FDG injection and two glasses immediately before the imaging. A contrast-enhanced CT was acquired from the top of the head to mid-thigh, without any specific breath-holding instructions. 
Intravenous contrast was injected, unless the patient was allergic to iodine. The parameters of the CT scan were 140 kV, 150\u2013250 mAs and a slice thickness of 3.75 mm. The CT was followed by PET scanning, covering the same transverse field of view during normal breathing. The imaging was acquired with 6 to 8 bed positions in 2D mode for 5 min per bed position (n=20). In August 2008, the protocol of the institution changed; therefore, the scans of 25 patients were acquired in 3D mode for 3 min per bed position. PET images were reconstructed iteratively using the contrast-enhanced CT data for attenuation correction. Coregistered images were displayed on a workstation, using dedicated software which allowed the viewing of PET, CT and fusion images on transaxial, sagittal and coronal displays. All patients fasted for at least 6 h, maintaining their blood glucose levels <150 mg/dl, before the injection of \u223c12 mCi of 18FDG. The 18FDG PET/CT scans were interpreted by an experienced radiologist in conjunction with an experienced nuclear medicine physician, who were both aware of the suspicion of ovarian carcinoma recurrence and the laboratory and imaging findings of the patients. The 18FDG PET portion and the CT portion of PET/CT were jointly interpreted using a dedicated image fusion workstation. All areas of increased 18FDG uptake that corresponded to a CT abnormality were interpreted as positive for recurrent disease. Semi-quantitative analysis was also performed to derive a standardized uptake value (SUV). All PET/CT reports and images were reviewed by an experienced nuclear physician for consistency of the data. The results of all 18FDG PET/CT scans were correlated with patient follow-up information for at least 6 months after the examination. 
The diagnosis of recurrence was confirmed with surgery (n=15) or clinically (n=30), by persistent elevation of CA-125 levels with abnormal findings on further imaging and treatment response following chemotherapy. For the comparison of the SUVs from different tumor types the ANOVA test was used, and P<0.05 was considered to indicate a statistically significant difference. A total of 42 patients were diagnosed with recurrence of ovarian cancer after surgery or during clinical follow-up. Three patients remained free of disease during clinical follow-up. CA-125 levels were raised in a total of 34 patients, 14 patients had clinical suspicion of recurrence and 23 presented with alterations on US, CT or MRI. There were 11 patients with raised CA-125 levels and normal imaging examinations. The characteristics of the patients according to the PET/CT findings are shown in the table. The 18FDG PET/CT scan was positive in all 42 patients who were confirmed to have recurrence of disease. The 18FDG PET/CT scan was negative in 3 patients, all free from disease during follow-up, with normal CA-125 levels and no evidence of disease on imaging examinations. One of the patients without ovarian cancer recurrence presented with focal abnormal uptake in the right thyroid lobe and a new primary tumor was diagnosed following surgery. There were 11 patients with elevated CA-125 levels and normal conventional imaging, all with positive PET/CT findings. However, of the 11 patients with normal CA-125 levels, eight presented with a positive PET/CT scan. Overall, lymph nodes were the most frequent site of relapse of disease. PET/CT found unsuspected lesions in 20 out of 45 patients (44.4%), most being supra-diaphragmatic lymph node metastases or normal sized abdominal lymph nodes with abnormal 18FDG uptake. A total of 12 patients (26%) died during follow-up. 
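The SUV comparison described above relies on a one-way ANOVA. As a hedged sketch of what that test computes (the SUV values below are hypothetical illustrations, not data from the study), the F statistic can be derived directly from the group sums of squares:

```python
# Hedged sketch: the F statistic behind a one-way ANOVA, as used to
# compare SUVs across tumor types (P < 0.05 threshold). All values
# below are invented for illustration; they are not study data.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over several groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical SUVs for three tumor sites:
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]])
# f is then compared against the F distribution with (k-1, n-k) degrees
# of freedom to obtain the P-value.
```

In practice a statistics package (e.g. `scipy.stats.f_oneway`) would be used to obtain the P-value directly; the sketch only shows the quantity being tested.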
Five of these patients had disseminated abdominal and supra-diaphragmatic disease, while 7 had disease limited to the pelvic/abdominal region. PET/CT correctly diagnosed patients with suspected ovarian cancer recurrence. All patients with elevated CA-125 levels and normal conventional imaging had a positive PET/CT scan. However, most patients with normal CA-125 levels in this series presented with a positive PET/CT scan. Lymph nodes were the most frequent site of relapse of disease, most being in the pelvic/abdominal region and others in the thoracic region. Peritoneal implants were found in more than half of patients. Distant sites of metastasis included the liver, spleen, pleura, lung and bone. PET/CT detected unsuspected lesions in almost half of the patients, most being supra-diaphragmatic lymph node metastasis. Our population had a high pre-test probability of disseminated disease, given that 18/45 patients (40%) had previously had relapse of ovarian cancer and the referral to PET/CT was to evaluate the progression of the disease and to restage. The advantage of PET/CT in this clinical setting was the ability to evaluate the whole body, which may aid the correct selection of patients who are amenable to surgical resection. Most of our findings are in accordance with those previously described in the literature, with the exception of the high prevalence of supra-diaphragmatic lymph node metastases. The change in management based on PET/CT was also previously described by Simcock et al. Numerous clinicians routinely measure the level of CA-125 since it is often the first evidence of ovarian cancer recurrence and may rise 3 to 6 months before clinical evidence of disease. Recent meta-analyses evaluated CT, MRI, PET and PET/CT for the detection of metastatic lymph nodes in patients with ovarian cancer. 
The PET/CT evaluation of pelvic and abdominal regions may be challenging due to urinary excretion and bladder concentration of 18FDG. Contrast material may aid the distinguishing of vessels and ureters from small nodal disease, which can result in better sensitivity of the PET/CT scan. This may be of particular importance in patients with ovarian cancer, since most metastases involve the pelvic and abdominal lymph nodes or implants. A limitation of our study is that there was no pathological confirmation of all the sites of abnormal 18FDG uptake. However, the confirmation of all the sites would not have been ethical solely for the purpose of validation of PET/CT findings. Accurate surgical assessment of pelvic and retroperitoneal lymph nodes is difficult, and surgery appears to be an unreliable gold standard, with disease recurrence in a third of females with negative surgical findings. Another limitation was that we did not have data concerning the treatment plans prior to the PET/CT; therefore, it was not possible to evaluate the change in management in our study. However, PET/CT revealed unsuspected lesions in 44.4% of our patients, which is in accordance with previously published data. There is no evidence that PET/CT improves the overall survival of patients diagnosed with ovarian cancer recurrence. However, the whole body examination shows the extent of the disease. This may aid the correct restaging of patients considered for further treatment. 18FDG PET/CT was an accurate and useful tool for diagnosing ovarian cancer recurrence. The advantage of a whole body scan and metabolic imaging is that it may aid the detection of additional sites of disease. 
In conclusion, supra-diaphragmatic disease in this series of patients with suspicion of ovarian cancer recurrence was more frequent than previously described."} +{"text": "Carcinoma of unknown primary tumors (CUP) is present in 0.5%-9% of all patients with malignant neoplasms; only 20%-27% of primary sites are identified before the patients die. Currently, 18F-fluorodeoxy-glucose positron-emission tomography (18F-FDG PET) or PET combined with computed tomography (PET/CT) is widely used for the diagnosis of CUP. However, the diagnostic yield of the primary site varies. The aim of this study was to determine whether PET or PET/CT has additional advantages over the conventional diagnostic workup in detecting the primary origin of CUP. Twenty patients with unknown primary tumors that underwent PET or PET/CT were included in this study. For all patients, the conventional diagnostic workup was unsuccessful in detecting the primary sites. Among the 20 patients, 11 had PET scans. The remaining nine patients had PET/CT. In all 20 patients, neither the PET nor PET/CT identified the primary site of the tumor, including six cases with cervical lymph node metastases. The PET and PET/CT revealed sites of FDG uptake other than those associated with known metastases in seven patients, but these findings did not influence patient management or therapy. Two patients had unnecessary invasive diagnostic procedures due to false positive results on the PET or PET/CT. Although it is inconclusive because of the small sample size of the study, the additional value of PET or PET/CT for the detection of primary sites in patients with CUP might be less than expected, especially in patients that have already had extensive conventional diagnostic workups. Further study is needed to confirm this finding. 
Carcinoma of unknown primary tumors (CUP) is a biopsy-proven malignancy in which the anatomical origin of the tumor cannot be identified from the patient history, physical examination, laboratory testing, chest radiographs, computed tomography of the chest, abdomen and pelvis, and (in women) mammography. CUP is present in 0.5%-9% of all patients with malignant neoplasms. Currently, positron-emission tomography (PET) with 18F-fluorodeoxyglucose (FDG) or PET combined with computed tomography (PET/CT) is widely used in the diagnostic evaluation of patients with CUP. The rate of detection of the primary site, however, varies. The medical records of patients with CUP that underwent PET or PET/CT imaging were reviewed retrospectively. All patients were admitted to the Seoul National University Hospital for further evaluation between January 2003 and September 2005. Carcinoma of unknown primary tumor was defined as a biopsy-proven malignancy whose anatomical origin could not be identified by a conventional diagnostic workup. All patients had biopsy-proven malignancies and the results of conventional diagnostic examinations were negative. The workup performed was determined based on the histological results, and therefore, the procedures used to detect the primary sites of tumors differed among the patients. All patients underwent whole-body 18F-fluorodeoxyglucose positron-emission tomography (18F-FDG PET) or PET/CT scans according to the following procedure. Patients fasted for at least 8 h before receiving an intravenous injection of 555-740 MBq of 18F-FDG. The uptake period was 60-90 min. The PET was performed on a dedicated PET scanner with a 5-min emission acquisition per imaging level. Attenuation correction was performed using the CT technique in the case of the PET/CT. PET images were reconstructed with a 128 \u00d7 128 matrix, an ordered subset expectation maximization iterative reconstruction algorithm, a 2-mm Shepp filter and a 16.2-cm field of view. PET/CT images were reconstructed with a 144 \u00d7 144 matrix and a 3D row-action maximum likelihood algorithm. 
The results of PET or PET/CT scans were evaluated by two experienced nuclear medicine physicians that were unaware of the histology of the metastatic sites. 'Detection of the primary tumor using PET or PET/CT' was defined when additional information about the primary tumor was revealed by PET or PET/CT imaging. Although the suspected primary site was seen on the PET or PET/CT, it was not considered 'detection by PET or PET/CT' if the suspected primary site was seen on other imaging modalities such as the CT. When the FDG uptake site in the PET or PET/CT was confirmed as a benign lesion, this was defined as a 'false positive' PET or PET/CT result. Twenty patients (nine men and eleven women) were included in the study. The median age was 54 years and the mean follow-up duration was 26.5 months. Metastases were located in the cervical lymph nodes (n = 6), bones (n = 4), abdominal lymph nodes (n = 3), axillary lymph nodes (n = 2), brain (n = 1), skin (n = 1), omentum (n = 1), peritoneum (n = 1), and ureters (n = 1). The histological findings were distributed as follows: poorly differentiated carcinoma (n = 11), adenocarcinoma (n = 5), squamous cell carcinoma (n = 2), signet ring cell carcinoma (n = 1), and leiomyosarcoma (n = 1) (Table). The PET and PET/CT revealed sites of FDG uptake other than the known metastases in seven patients; five of these were false positive findings, and two were pathologically confirmed as another metastatic lesion after biopsy. Three out of five false positive cases also displayed FDG uptake by the thyroid or pharynx, in patients whose initial physical examinations showed normal thyroids and pharynxes. Clinically, these thyroid glands and pharynxes were not considered to be the primary tumors and did not exhibit malignant changes during the follow-up period. In two out of five false positive cases, the results of the scans were initially thought to show the primary tumors and a diagnostic workup of these patients was expanded to include invasive procedures. For example, one patient (patient no. 
5) with a metastatic adenocarcinoma of the skin had additional FDG uptake around the mid-esophagus (SUV 8.9). To confirm this lesion, the patient underwent a repeat esophagogastroduodenoscopy; however, there was no evidence of a malignancy in the esophagus. The mid-esophagus lesion was observed as subcarinal lymph node uptake on a subsequent chest CT. Because the follow-up PET/CT scan showed decreased size and FDG uptake of the subcarinal lymph node, this lesion was confirmed to be benign. Another patient, with metastatic leiomyosarcoma of the brain (patient no. 10), had mild hypermetabolic findings in the lower left lung field. Because a malignancy could not be ruled out, the patient underwent bronchoscopy and a chest CT. There was no evidence of a malignancy. Because of the false positive PET result, the patient had unnecessary invasive procedures. In two patients (patient nos. 13 and 20) out of the seven that had additional FDG uptake, other metastatic lesions were confirmed by pathological examination of biopsies. Because these metastatic sites were merely additional, the management plans of these patients did not change. The results with additional FDG uptake did not positively influence the management and therapeutic plans of the patients. In four cases, the PET/CT was performed after the initial PET scan. The PET/CT, which is anatomically more accurate than the PET, did not confer any additional advantage in the detection of the primary sites of patients with CUP. Detection of the primary tumor can change the prognosis of patients with CUP by enabling targeted treatment. Previous studies have indicated that PET and PET/CT are useful for the detection of primary sites (8,13-15). Previous studies have reported that PET detects primary lesions in 24%-41% of patients with CUP. However, according to the previous literature, only 20-27% of primary sites are identified before the patients die. The poor resolution of PET has been superseded by PET/CT, which identifies anatomical landmarks more accurately. 
The PET/CT detects the primary tumor in 22-73% of patients with CUP, according to a recent review article. The PET and PET/CT have gained widespread acceptance as useful methods for the management of cancer. In our series, the PET or PET/CT revealed FDG uptake lesions other than the known metastases in seven patients. These additional uptake lesions were of no value for detecting the primary sites of tumors, and false positive FDG uptake lesions complicated the diagnosis. Despite no additional value of the PET or PET/CT in the detection of the primary site, primary lesions were identified in two cases by immunohistochemical staining of biopsied metastatic lesions during the follow-up period (Table). The limitations of this study included the following. First, the sample size was small and the study design was retrospective. Second, this study was performed in the early stages of PET and PET/CT, when the PET and PET/CT were not widely used. It is possible that the study results do not reflect current PET or PET/CT scanning. In conclusion, neither PET nor PET/CT improved the detection of primary sites in patients with CUP in our study. Although it is inconclusive because of the small sample size of the study, the additional value of PET or PET/CT for the detection of primary sites in patients with CUP might be less than expected, especially in patients that have already had extensive conventional diagnostic workups. Further study is needed to validate this finding. The authors declare that they have no competing interests. SML had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. JSP contributed to analyzing data and drafting the manuscript. WJK and JKC contributed to collecting and analyzing data. JJY, CGY, YWK, SKH and YSS contributed to the conception and design of this study. 
All authors read and approved the final manuscript."} +{"text": "Esophageal stenosis following endoscopic submucosal dissection (ESD) is a serious adverse event that makes subsequent management more difficult. This parallel, randomized, controlled, open-label study was designed to examine whether local steroid injection is an effective prophylactic treatment for esophageal stenoses following extensive ESD. This single center trial was conducted at the Keiyukai Hospital, a tertiary care center for gastrointestinal disease in Japan, and was registered with the University Hospital Medical Network Clinical Trial Registry (UMIN-CTR) on 15 September 2011 (UMIN000006327). Thirty-two patients with mucosal defects involving \u226575% of the esophageal circumference were randomized to receive a single dose of triamcinolone acetonide injections (n\u2009=\u200916) or be treated conventionally (n\u2009=\u200916). The primary outcome was the frequency of stricture requiring endoscopic dilatation; the surrogate primary endpoint was the number of dilatation sessions needed. Secondary outcomes included adverse event rates, the minimum diameter of the stenotic area and the duration of the course of dilatation treatments. The frequency of stricture was not significantly different between the groups because of insufficient statistical power, but the number of dilatation sessions required was significantly less in the steroid group (6.1 sessions versus 12.5 [95% CI 7.1\u201317.9] sessions in the control group; P\u2009=\u20090.04). The perforation rate was similar in both groups. The minimum diameter of stenotic lumens was significantly greater in the treatment group than in controls. The perforation rate was not significantly different between the groups. Steroid injection was effective in cases of mucosal defects encompassing the entire esophageal circumference. Prophylactic endoscopic steroid injection appears to be a safe means of relieving the severity of esophageal stenoses following extensive ESD. 
In Japan, endoscopic submucosal dissection (ESD) is widely accepted as a standard treatment for early esophageal squamous cell carcinomas without documented metastasis. The ESD technique has been shown to reduce the risk of local recurrence, and perforations arising as a consequence of treatment are generally well tolerated. Patients with esophageal stenosis are frequently treated by endoscopic dilatation therapy. The risk of perforation complicating the procedure increases with the number of therapeutic sessions. To our knowledge, no randomized studies to date have analyzed the potential preventative benefits of endoscopic steroid injection therapy, or whether it is safe and effective, for stenosis caused by a mucosal defect involving the entire circumference of the esophagus after ESD. We undertook a prospective, randomized controlled trial to analyze the prophylactic effects of endoscopic steroid injection therapy for esophageal stenoses complicating extensive ESD. This randomized, controlled, open-label study was performed at Keiyukai Sapporo Hospital, Japan. All participants gave their written informed consent, based on the Helsinki Declaration of the World Medical Association, and the Ethics Committee of Keiyukai Sapporo Hospital approved the study protocol. The study was designed according to the CONSORT guidelines and was registered with the University Hospital Medical Network Clinical Trial Registry (UMIN-CTR) on 15 September 2011 (UMIN000006327). Patients who had undergone ESD to treat histologically confirmed early squamous cell carcinoma of the esophagus from February 2010 to October 2011 and who were expected to have a mucosal defect encompassing \u226575% of the circumference of the esophageal mucosa after ESD were eligible for the study. Patients who received additional adjuvant treatments, such as surgery or chemoradiation therapy, and patients who were not regularly or adequately followed-up were excluded. 
Depth of tumor invasion was determined based on the findings of endoscopy and/or endoscopic ultrasonography. Mucosal to slightly invasive submucosal cancers (of invasion less than 200\u00a0\u03bcm in depth) were regarded as indications for ESD. Removal of a carcinoma involving two-thirds of the circumference of the esophagus by ESD was expected to result in a mucosal defect spanning more than three-quarters of the circumference. Patients enrolled in the study were randomized to receive steroid injection therapy or to be treated conventionally. Randomization was computer-generated with concealed allocation using sequentially numbered containers. Data were collated at Sapporo Medical University and independently analyzed by one author (Y.A.). The baseline demographic and clinical characteristics of the study population were compared on the basis of age, sex, tumor location, proportion of the esophageal circumference involved, number of multiple Lugol voiding lesions and other clinical characteristics. As previously described, patients in the treatment group received a single dose of triamcinolone acetonide injections. Esophagogastroduodenoscopy (EGD) was performed to assess for stenosis, bleeding or perforation at the injected sites 6\u00a0days after treatment (Figure). t tests were used to compare age, resection size and procedure time. The primary study endpoint was the frequency of stricture requiring endoscopic dilatation for esophageal stenosis after ESD. A surrogate primary endpoint, the number of dilatation sessions required, was subsequently included in the analysis because the primary endpoint did not reach statistical significance. 
Secondary endpoints included the frequency of complications that occurred as a consequence of either local steroid injection or endoscopic dilatation, the minimum diameter of the stenotic area and the duration of the course of dilatation treatments. A post hoc analysis compared patients with whole circumferential mucosal defects (WCMD) with those whose lesions involved less than the whole circumference (NWCMD). A total of 42 patients were candidates for the study because they were expected to have mucosal defects extending over three-quarters of the esophageal circumference due to the ESD. Since one of these 42 patients declined to participate, in total 41 were enrolled and randomized. 21 were allocated to the injection (treatment) group whereas 20 were allocated to the non-injection (control) group. However, after ESD, nine patients were excluded from the study: one whose follow-up was inadequate and eight who received additional therapy. Of the latter eight patients, seven had submucosal invasion that exceeded 200\u00a0\u03bcm and one had lymphatic invasion despite a depth of invasion of only 180\u00a0\u03bcm. Ultimately, 16 patients were allocated to each group. The perforation rate caused by dilatation procedures was 1.0% (one out of 97 sessions) in the steroid injection group and 0.5% (one out of 200 sessions) in the control group. The mean minimum diameter of stenotic lumens just before dilatation therapy was greater in the treatment group than in controls. The mean number of dilatation therapy sessions required was significantly higher in those with WCMD lesions. The perforation rate caused by dilatation procedures was similar between the defect groups: 0.6% (one out of 163 sessions) in the WCMD group compared with 0.7% (one out of 134 sessions) in the NWCMD group. The mean minimum diameter of stenotic lumens immediately before dilatation therapy was smaller in the WCMD group (7.2 versus 9.9\u00a0mm in the NWCMD group) but the difference was not significant (P\u2009=\u20090.10). The mean duration of dilatation therapy was significantly longer in the WCMD group. 
The incidence of stricture was significantly more frequent in the WCMD group. Comparison of WCMD patients who received steroid injections and those treated conventionally (n\u2009=\u20095) revealed no significant differences in baseline demographic, clinical or ESD characteristics (data not shown). The only treatment-related factor that differed significantly between the groups was the mean number of dilatation therapy sessions required, which was lower in those treated with steroids compared with controls. These results suggest that a single prophylactic dose of steroid administered after ESD is safe and well tolerated. The mean minimum diameter of stenotic lumens immediately before the first dilatation treatment was significantly greater in the treated group than in controls. The differences observed in duration of dilatation therapy were not statistically significant. Our trial is the first to demonstrate the partial but significant prophylactic effect of steroid injection on stricture formation in this clinical setting. We found that endoscopic triamcinolone injection did not reduce the frequency of stricture formation, but reduced the mean number of dilatation sessions per patient from 12.5 to 6.1, suggesting that steroid injection may partially relieve esophageal stenoses. No steroid-related adverse events were observed, and the perforation rate during dilatation procedures was similar in the treated and control groups (1.0% versus 0.5%). Our post hoc analysis confirms that patients with WCMDs are more likely to develop strictures and to require more dilatation sessions and a longer duration of treatment, but that they benefited most from a single prophylactic steroid injection after ESD. In patients with WCMDs and esophageal diameters of approximately 7\u00a0mm, prophylactic steroid treatment almost halved the number of dilatation sessions needed and the overall duration of treatment. This in itself is a clinically important finding, not least because a reduced incidence of esophageal perforation would be likely to reduce morbidity and mortality. 
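The trial's sample size of 16 patients per group was based on a power calculation using expected stricture rates of 13% and 60% with and without steroid injection. As a hedged sketch, a generic two-proportion sample-size calculation can be done with Cohen's arcsine effect size and a normal approximation; this is one common textbook method, not necessarily the one the authors used, so it need not reproduce their exact figure of 16:

```python
# Hedged sketch: per-group sample size for comparing two proportions
# (e.g., expected stricture rates of 13% with and 60% without steroid
# injection). Uses Cohen's arcsine effect size h with a normal
# approximation; this is a standard textbook approach and is NOT
# claimed to be the calculation the trial authors performed.
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Cohen's effect size h for two proportions
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(((z_alpha + z_beta) / h) ** 2)

n = n_per_group(0.13, 0.60)  # sample size per arm under these assumptions
```

Methods based on exact tests (such as Fisher's exact test) give larger sample sizes than this normal approximation, which may explain a figure like 16 per group.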
As it is well recognized that patients are at a lower risk of esophageal perforation if they undergo fewer dilatation treatments, prophylactic steroid injection may also reduce this risk. Hanaoka and colleagues previously stated that a randomized controlled trial comparing a single injection of steroid at the time of dilatation therapy may not be ethically acceptable, as the efficacy of steroid injection therapy is well recognized. Our findings also concur with those of previous studies, which showed that patients with smaller mucosal defects also benefited from endoscopic steroid injections to prevent post-ESD strictures. Our study may not have been adequately powered to detect these smaller \u2013 but nonetheless clinically relevant \u2013 differences in the frequency of stricture formation. Our results should be further confirmed by a large, well-powered randomized controlled trial, which should also examine whether multiple steroid injections, administered during dilatation treatments, might benefit those patients who go on to develop esophageal stenoses. A course of oral prednisolone has been reported to be an effective means of preventing strictures. The estimated sample size of 16 patients per group was determined by a power calculation based on expected stricture rates of 13% and 60% with and without steroid injection, respectively, informed by previously published data on esophageal peptic strictures; the comparisons by mucosal defect size were post hoc or subgroup analyses. In summary, prophylactic endoscopic steroid injection can relieve the severity of esophageal stenoses following extensive ESD. Future studies should attempt to optimize steroid injection therapy to establish the best means of preventing stricture formation in patients at risk of developing esophageal stenosis."} +{"text": "Nutrition is an important determinant of health. 
At present, nutrition programs in India mainly emphasize improving maternal and child nutrition. Adult nutrition has not received due attention, though diseases like hypertension and diabetes are largely preventable through changes in dietary and physical activity behaviour. Little is known about the best approaches to improve dietary behaviours, especially the role of modern information technology (IT) in health education. We describe the protocol of the SMART Eating health promotion intervention. A Cluster Randomised Controlled Trial will evaluate the effect of an IT-enabled intervention on nutrition behaviour among urban adults of Chandigarh, India. Formative research using a qualitative exploratory approach was undertaken to inform the intervention. The IT-enabled intervention programme includes website development, Short Message Service (SMS), e-mail reminders and interactive help by mobile and landline phones. The IT-enabled intervention will be compared to the traditional nutrition education program of distributing pamphlets in the control group. The primary outcome will be the percentage of study participants meeting the dietary intake guidelines of the National Institute of Nutrition, Hyderabad, India, and the change in intake of fat, sugar, salt, fruit and vegetables after the intervention. The difference in differences method will be used to determine the net change in dietary intakes resulting from the interventions. Measurements will be made at baseline and at 6 months post-intervention, using a food frequency questionnaire. The formative research led to the development of a comprehensive intervention, focusing on five dietary components and using a multi-channel communication approach, including the use of IT, to target urban North Indians from diverse socio-economic backgrounds. The Cluster Randomised Controlled Trial design is suitable for evaluating the effectiveness of this IT-enabled intervention for dietary behaviour change. 
Diet is one of the important social determinants of health. Over time, due to industrialisation and urbanisation, diets have changed. Dietary intakes of fat, sugar, salt, and fruit and vegetables depend more on taste, culture, affordability etc. than on dietary recommendations. India is suffering from the double burden of malnutrition: 58.4% of children, 53% of women and 23% of men suffer from some form of anaemia, while overweight and obesity increased by 8.1% in women and 9.3% in men between 2005 and 2016. Several interventions have been used to modify dietary and physical activity behaviours, and most have targeted either diet or physical activity separately. Face-to-face individual counseling with an interpersonal component or group education sessions are common approaches. Web-based computer interventions present an alternative approach which allows individualised education in a cost-effective way. The present approach is based on the PRECEDE-PROCEED Model. A sector of Chandigarh city, located in northern India, has been chosen for the study. The selected sector has a population of about 30,000 and includes people of all socio-economic strata, with the type of housing assigned by the Chandigarh Administration as low-income group (LIG), middle-income group (MIG) and high-income group (HIG) taken as a proxy for socio-economic status. This sector has been purposely selected because the Community Medicine Department of the Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, has been working here to strengthen healthcare services, and the use of mobile phones and the internet is also widespread in this area. About 89% of households in Chandigarh use mobile or landline phones. 
A cluster-randomised controlled trial design with two groups will be employed: (1) the intervention group will receive the community-led, information technology-enabled SMART Eating intervention; (2) the control group will receive traditional nutrition education through pamphlets. A multi-level sampling strategy will be used. Twelve clusters, based on the type of housing, have been selected, bearing in mind that there is sufficient geographic separation to avoid any spill-over effect of the intervention. Similar clusters will be paired together to form six pairs. For each of the six pairs, computer-generated randomisation will be used to allocate clusters to intervention and control groups by a researcher not involved in the study. The number of clusters and the cluster size will be fixed. Equal numbers of families will be recruited from each cluster through systematic random sampling. One adult (35\u201370\u00a0years) per family will be randomly selected as the index case for the baseline and post-intervention assessments. One family champion (the adult member who cooks food most of the time) will be selected from each family for the implementation of the intervention. In recognition of the fact that not all family champions will be using information technology tools or mobile phones (as demonstrated through the formative research), one co-champion will also be selected from each family. A co-champion can be any person from the family, identified by the family champion, who would be able to assist him/her in using the internet and mobile phone communications. The inclusion criteria for recruitment of study participants will be: families from low, middle and high-income group housing; residing in the study area for 6\u00a0months or more; having access to a mobile phone/landline phone/internet; and including adults between 35 and 70\u00a0years of age. 
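The pairing-and-allocation procedure described above can be sketched in a few lines; the cluster labels, seed and helper name are illustrative, not taken from the protocol.

```python
import random

# Twelve clusters stratified by housing type (labels are illustrative).
clusters = ["LIG-1", "LIG-2", "LIG-3", "LIG-4",
            "MIG-1", "MIG-2", "MIG-3", "MIG-4",
            "HIG-1", "HIG-2", "HIG-3", "HIG-4"]

def allocate(clusters, seed=2024):
    """Pair adjacent (similar) clusters, then randomly assign one member
    of each pair to the intervention arm and the other to control."""
    rng = random.Random(seed)  # fixed seed -> reproducible allocation
    pairs = [clusters[i:i + 2] for i in range(0, len(clusters), 2)]
    intervention, control = [], []
    for a, b in pairs:
        if rng.random() < 0.5:
            intervention.append(a)
            control.append(b)
        else:
            intervention.append(b)
            control.append(a)
    return intervention, control

intervention, control = allocate(clusters)
# Each of the six pairs contributes exactly one cluster to each arm.
```

Allocating within matched pairs keeps the housing strata balanced across arms regardless of how the coin flips fall.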
This age group has been selected because most of the youth (20\u201334\u00a0years) move away for higher education, job opportunities etc. Moreover, adults between 35 and 70\u00a0years of age have a higher chance of developing chronic diseases. Pregnant women and families not providing consent to participate in the study will be excluded. Sample size calculations were done for each of the outcome variables: fat, salt, and fruit and vegetable intake, but not for sugar intake, because estimates of sugar intake were not available. The prevalence of adequate dietary intake for the different foods and nutrients was used to calculate the sample size, based on dietary surveys for fat and for fruit and vegetables, and on urinary salt excretion for salt. As no similar trials were available to inform an assumption about improvements in the intervention arm, the sample size was calculated based on a 20% improvement. Estimation of change in salt intake required the largest sample size, based on a previous study which indicated that 15% of adults had a salt intake of <5\u00a0g per day. The primary outcome for the study is the % change in study participants meeting India\u2019s National Institute of Nutrition (NIN) dietary guidelines for fat, sugar, salt, fruit and vegetable intake. 
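The two-proportion sample-size calculation described above can be sketched with the standard normal-approximation formula; the 15% baseline and the 20-percentage-point improvement come from the text, while the significance level, power and the design-effect handling are our assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, design_effect=1.0):
    """Normal-approximation sample size per group for comparing two
    proportions, optionally inflated by a cluster design effect."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n * design_effect)

# Illustrative: 15% meet the salt guideline at baseline, assumed to
# rise by 20 percentage points in the intervention arm.
print(n_per_group(0.15, 0.35))  # -> 70 per group before design effect
```

In a cluster-randomised design the result would then be multiplied by the design effect to account for within-cluster correlation.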
The NIN recommend that, for adults, 20\u201330% of energy should come from fat (10% from visible fat and 10\u201320% from foods other than visible fat); no more than 20\u00a0g of sugar for women and 25\u00a0g for men; less than 5 g of salt; and at least 100\u00a0g of fruits and 300\u00a0g of vegetables per day. Secondary outcomes include the % change in body weight, and the mean change in body mass index (kg/m2), blood pressure (mm Hg), haemoglobin (g/dL), fasting plasma glucose (mg/dL), total serum cholesterol (mg/dL), high-density lipoprotein (mg/dL), low-density lipoprotein (mg/dL), triglycerides (mg/dL) and urinary salt excretion (g/day). Based on the formative research, the contents of the nutrition education program were prepared. All intervention tools were designed by the first author (JK). Content validation was done by eight experts from related fields in PGIMER, Chandigarh. The website design was finalised by two co-authors and one nutrition expert. The tools were pre-tested on five health professionals and 15 families from different socio-economic groups, and modifications were made. The nutrition education intervention will have two components. The Interpersonal Component includes: home visits for training of family champions and co-champions using a flip book over a period of one month; distribution of the SMART Eating kit with written information in the Hindi language; and health check-up reports after the baseline survey. The Information Technology (IT) Component will be implemented, after training of family champions, through mobile phone, landline phone and internet communications. 
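A minimal sketch of checking one day's intake against the NIN thresholds quoted above; the function name, the 9 kcal/g fat-to-energy conversion, and treating each component as a simple pass/fail are our assumptions.

```python
def meets_nin_guidelines(intake, sex="F"):
    """Check one day's intake against the NIN thresholds quoted above.
    `intake`: energy_kcal, fat_g, sugar_g, salt_g, fruit_g, veg_g."""
    # 9 kcal per gram of fat is the standard energy conversion factor.
    fat_energy_pct = intake["fat_g"] * 9 / intake["energy_kcal"] * 100
    sugar_limit = 20 if sex == "F" else 25
    return {
        "fat": 20 <= fat_energy_pct <= 30,
        "sugar": intake["sugar_g"] <= sugar_limit,
        "salt": intake["salt_g"] < 5,
        "fruit": intake["fruit_g"] >= 100,
        "vegetables": intake["veg_g"] >= 300,
    }

day = {"energy_kcal": 2000, "fat_g": 60, "sugar_g": 18,
       "salt_g": 4.2, "fruit_g": 120, "veg_g": 310}
print(meets_nin_guidelines(day))  # all five components met for this day
```

Applying such a check to every participant's FFQ-derived intakes yields the primary outcome: the percentage meeting the guidelines per component.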
The intervention will be implemented at the family level using the family champion approach \u2013 an adaptation of the health champion approach. Intervention implementation strategies will include: Involvement of family members: especially children as co-champions, with a view to increasing self-efficacy for dietary changes. Creating awareness: by providing information on nutrition. An emphasis on seasonal fruits and vegetables: which are less expensive at their peak season, and are full of nutrients. Increasing visibility: by placing a tray containing fruits and vegetables on the dining room or kitchen table, and keeping snacks and food items high in fat, sugar and salt in places less visible to family members. Highlighting good cooking practices: such as methods requiring less fat, sugar and salt. Avoidance and substitution: avoiding food too high in fat, sugar and salt, or substituting snacks with fruits and vegetables. Emphasis on eating by sharing: fruits and vegetables shared among family members, whatever the amount available. Building self-efficacy: using the SMART Eating kit articles to increase self-efficacy in changing dietary behaviour. Cutting down on medical bills: emphasising the benefits of increasing fruits and vegetables and decreasing fat, sugar and salt to prevent chronic diseases, which in turn may help to reduce medical bills. Kitchen gardening: encouraging, if feasible, kitchen gardening or growing vegetables in earthen pots. Control group families will be provided with a pictorial pamphlet on dietary recommendations from the National Institute of Nutrition, India. The content and pictures in the pamphlet will be the same as those used for the dining table mat to be provided to intervention group families. 
One side will have information on seasonal fruits and vegetables along with dietary recommendations; the other side will have pictures of measuring spoons showing the amount of salt, sugar and fat in one spoon; dietary recommendations; pictures of foods high in fat, sugar and salt; and information on reducing the intake of these nutrients. The pamphlet will be in Hindi and will be provided to study participants along with their blood test reports. Participants will be asked to read the information provided in the pamphlet in their own time, to make changes to their diet accordingly and to convey the same information to their family members. The following tools have been developed for data collection: (1) a household profile proforma to identify the index case; (2) a structured questionnaire for the index case \u2013 Part A: socio-demographic data, medical history and physical measurements; Part B: \u2018Stages of change\u2019 questions, based on those developed by Lechner et al (1998), to assess participants\u2019 self-rated dietary intake; (3)\u2013(6) further proformas for dietary and household data; and (7) a sphygmomanometer. Height will be measured to the nearest 0.1 cm using an anthropometric rod. Weight will be measured, with minimum clothing and without shoes, to the nearest 0.1\u00a0kg on a portable electronic weighing scale. Blood pressure will be measured twice using a standard instrument to the nearest 2\u00a0mm Hg. After implementing the intervention for a period of one month, all participants will be contacted, through an SMS, phone call or home visit, to ask whether they have made any changes to their diets. For process evaluation, feedback will also be taken from the participants using a proforma. A log book will be maintained to record barriers and enablers related to the use of the IT-enabled intervention, as well as identifying actions that could be taken to improve its use. 
Qualitative in-depth interviews using \u2018Extreme or Deviant case\u2019 sampling will also be conducted to understand barriers and facilitators. A team of three members \u2013 the first author and two other members who are outsiders to the community \u2013 will collect the data. The team members have received training in data collection. Neither the investigator nor the data collectors will be blind to the intervention. Intervention implementation, process evaluation and quality control will be conducted by the first author. The quantitative data analysis will be performed using the Statistical Package for Social Sciences (SPSS) version 21, based on intention-to-treat analysis. The cluster design will be taken into consideration during the analysis and the cluster effect will be reported. Descriptive statistical analysis will include calculation of sample means, standard deviations (SD) and proportions according to the type of variable. Categorical variables will be compared using the chi-square test. Within-group changes in quantitative variables from baseline to endline at six months will be analysed using a paired t-test. Unpaired t-tests will be applied to independent samples for between-group comparisons. In order to explore potential differences between groups, multivariable regression analysis will also be used. The FFQ data will be entered into the spreadsheet software used in the PURE study. Participants\u2019 nutrient intakes per day will be compared against the NIN dietary guidelines to calculate the percentage of participants meeting the guidelines for fat, sugar, salt, fruit and vegetable intake. Sub-group analysis by disease status will also be performed. Changes in fat, sugar, salt, fruit and vegetable intake in both groups will be calculated as post-intervention minus pre-intervention nutrient intakes. 
The net intervention effect will be calculated as the change in dietary intake in the intervention arm minus the change in dietary intake in the control arm from baseline to the study end at six months (difference in differences method). BMI will be calculated as weight (kg) divided by height squared (m2). Blood samples for haemoglobin, fasting plasma glucose and lipid profile, and urine samples, will be analysed using standard laboratory methods. The 24\u00a0h urinary sodium excretion (mg/day) will be estimated from a single morning fasting urine sample using the Kawasaki formula. Participants will be classified into five stages of change separately for fat, sugar, salt, fruit and vegetable intake \u2013 the pre-contemplation, contemplation, preparation, action and maintenance stages \u2013 on the basis of quantitative data. Stage will also be assessed based on qualitative data in relation to people\u2019s knowledge of the dietary intake guidelines and their perception and awareness of their own dietary intakes and dietary practices. The use of specific components of the intervention, satisfaction with the program and ideas for improvement will be analysed by summarising the answers to open-ended questions from the feedback proforma. Website use will be measured through feedback regarding login by the number of participants and the visitor count. The use of information technology is increasing, and mobile health has the potential to reach large numbers of people quickly at low cost. The review of literature on nutrition education interventions revealed many gaps. Most previous studies have targeted individuals at high risk of disease, as \u2018at risk\u2019 individuals are more motivated than the general population; other studies have had a small sample size or a non-representative sample\u201343. 
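The difference-in-differences computation described above amounts to subtracting the control arm's mean change from the intervention arm's mean change; a minimal sketch with made-up salt intakes (all numbers and the helper name are illustrative):

```python
from statistics import mean

def did_net_effect(interv_pre, interv_post, ctrl_pre, ctrl_post):
    """Difference in differences: (mean change in the intervention arm)
    minus (mean change in the control arm)."""
    interv_change = mean(b - a for a, b in zip(interv_pre, interv_post))
    ctrl_change = mean(b - a for a, b in zip(ctrl_pre, ctrl_post))
    return interv_change - ctrl_change

# Illustrative salt intakes (g/day) for four participants per arm.
net = did_net_effect(
    interv_pre=[9.0, 8.5, 10.0, 9.5], interv_post=[7.0, 7.5, 8.0, 8.5],
    ctrl_pre=[9.2, 8.8, 9.6, 10.4], ctrl_post=[9.0, 8.6, 9.8, 10.0],
)
print(round(net, 2))  # -> -1.35 g/day attributable to the intervention
```

Subtracting the control arm's change nets out secular trends (e.g. seasonal food availability) that affect both arms equally.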
The existing literature shows that the diseases linked to high fat, sugar and salt intake \u2013 such as heart disease, diabetes and some cancers \u2013 and nutritional deficiencies are preventable through increased consumption of fruits and vegetables\u201359. Considering the potential risk of non-use of the IT components by the families, a process evaluation will be undertaken to assess the extent of use of the various intervention components and to identify factors hindering the use of the IT-enabled intervention. In case participants do not make sufficient use of the IT components, we may plan to continue the intervention by distributing printed materials. That said, the ability to measure any changes in dietary intake will require robust assessment measures. FFQs have emerged as a useful tool in epidemiological studies across the world, and they have become popular in Indian settings because they impose a lower burden on subjects compared with other dietary assessment methods. Our FFQ has been developed and validated in a northern Indian setting. Assessment of salt intake from FFQs usually underestimates the intake, so urinary sodium assessment provides better information. The existing pattern of housing, as assigned by the Chandigarh administration, was taken as a proxy for socio-economic status (SES). In order to ensure equal representation of all socio-economic groups in both study arms, we stratified the sample according to type of housing, and matching of clusters was done before random allocation of similar clusters to the intervention and control groups. In contrast to a recent study from South India, our formative research found multiple opportunities for change at the individual as well as the family level (Supplementary file 1). It established that the majority of the target audience was at the pre-contemplation stage of change, which differs from other studies where participants were classified into various stages. 
The use of IT tools, such as the mobile phone and the internet, for delivering nutritional messages is considered acceptable for the target population, as these tools are available in most families. In addition to IT tools, participants in the formative research also indicated the need for face-to-face education and the provision of printed material which can be displayed in the house. This led to the development of the SMART Eating kit containing a kitchen calendar, dining table mat etc. In view of the fact that people lack skills in measuring the recommended amounts of fat, sugar and salt, measuring spoons have been added to the SMART Eating kit. The formative research showed that men and women were equally supportive of dietary behaviour change, but the majority of them identified women as the main facilitators of dietary behaviour change in the family, as women are usually responsible for cooking the food. Easy availability of vegetables and fruits from local markets and vendors was considered to be another facilitating factor for the intervention. The main strength of this study is the inclusion of all socio-economic groups in a sufficiently large representative sample, with a focus on all dietary components. Inclusion of all socio-economic groups will likely enhance generalisability. The calculated sample size is large compared with other RCTs, which should provide strong power. Strict randomisation throughout the sampling procedure will also help to minimise any potential bias, although the lack of blinding means the results will have to be interpreted with caution. Involvement of stakeholders in the formative research for intervention development is another strength of our study, which draws on other studies. Assessment of behaviour change is proposed at 6\u00a0months after the intervention. Ideally, assessment should also be done 6\u00a0months after the active intervention to assess the maintenance of behaviours. 
However, due to resource constraints, we may not be able to continue beyond 6\u00a0months. Another limitation of our study is the selection of only one member from each family to measure dietary changes, as it is impractical to measure the dietary intake of all members of the family. This may underestimate the effect size. Access to and use of IT by the family champion can be a challenge, but a rapid survey of 120 families from different types of housing in the study area suggested a high prevalence of IT use. Each family had at least one mobile phone, and the majority had smartphones or computers with access to the internet, including those living in LIG housing. It is, therefore, assumed that this intervention, which makes use of available IT tools, will not pose an additional burden on the families. Though cost-effectiveness has not been considered as one of the primary objectives of the study, the capital and recurring cost for 5\u00a0years is estimated to be INR 889,000 for the intervention group and INR 6000 for the control group. A formal cost-effectiveness analysis can also be attempted. If found to be cost-effective, this trial may pave the way for carrying out dietary behaviour change interventions on a large scale, so as to have a larger impact on the prevention of nutrition-related diseases."} +{"text": "However, docking is a computationally intensive and time-consuming process, usually restricted to small binding sites (pockets) and a small number of interacting residues. When the target site is not known (blind docking), researchers split the docking box into multiple boxes, or repeat the search several times using different seeds, and then merge the results manually. Otherwise, the search time becomes impractically long. In this research, we studied the relation between the search progression and the Average Sum of Proximity relative Frequencies (ASoF) of searching threads, which is closely related to the search speed and accuracy. 
A new inter-process spatio-temporal integration method is employed in Quick Vina 2, resulting in a new docking tool, QuickVina-W, a tool suitable for \u201cblind docking\u201d (not limited in search space size or number of residues). QuickVina-W is faster than Quick Vina 2, yet better than AutoDock Vina. It should allow researchers to virtually screen huge ligand libraries in a practically short time and with high accuracy, without the need to define a target pocket beforehand. It should produce and screen drug candidates more effectively than the physical assessment of thousands of diverse compounds a day using high-throughput screening robotics, thus increasing the rate of drug discovery while reducing the need for expensive laboratory work. In the in silico drug discovery domain, \u201cVirtual Screening\u201d is defined as \u201cautomatically evaluating very large libraries of compounds using computer programs\u201d. Molecular docking is the core of virtual screening. It aims at prediction of the modes and affinities of non-covalent binding between a pair of molecules. Oftentimes, the molecules consist of a macromolecule (the receptor) and a small molecule (the ligand). The multidimensional search space of the ligand includes the degrees of freedom of its translation, rotation, and the torsions of flexible bonds that may exist within it. Some packages consider flexibility in the receptor as well4. In a recent study, Wang et al. performed a comprehensive evaluation of ten famous currently available docking programs, including five commercial and five academic programs. Wang et al. 
studied their accuracies of binding pose prediction (sampling power) and their binding affinity estimation (scoring power) and concluded that AutoDock Vina4 has the highest scoring power among them5.A successful docking application needs to have two pillars: 1) a method to explore the ligand-receptor conformation space for plausible poses [the search algorithm], and 2) a method to relatively order those plausible poses [the scoring function]. In a recent study, Wang 6, performed on initial seeds (pseudorandom points), followed by 2) local optimization with BFGS method7. The modified Monte Carlo search is to perform a cycle of what we call an \u201cessential\u201d local optimization first before testing the proposed point according to the Metropolis acceptance criteria. Please refer to\u00a0the supplementary material for illustration on the search process. The dimensions of the search space in Vina family include three translations and three rotations of the ligand (applied at its root), as well as the torsion angles of all active (rotatable) bonds within it or within the receptor. That is to say, the number of degrees of freedom (N) in Vina is 6\u2009+\u2009number of rotatable bonds. Quick Vina (referred to as \u2018QVina 1\u2019 hereinafter)8 was developed to speed up Vina using heuristic to save local optimization by trying the potentially significant points only. A \u201cpotentially significant point\u201d is a point that is expected to undergo optimization through a new pathway not explored by other points before. The technique is to check any provisional point against the search thread history of visited points and accept only the points where there is at least one (near) history point such that for each design variable pair , the partial derivatives of the scoring function with respect to the variable at both points either have opposite signs or one of them is zero. This means that accepted points are assured to be in a new unexplored energy well. 
QVina 1 uses less search time than the original Vina does, however it was designed to run on impractically big number of CPUs to overcome the high rate of false negatives. QuickVina 2 (referred to as \u2018QVina 2\u2032 hereinafter)9 restored the lost accuracy of QVina 1 (compared to Vina) by using a more robust test that considers the first-order-consistency-check. Please refer to the methodology and supplementary methodology sections of our previous work9 for more details and illustrations. Vina, QuickVina 1 and 2, depend on multiprocessing to achieve fast search, where several threads traverse the search space simultaneously.AutoDock Vina (referred to as Vina hereinafter) utilizes a powerful hybrid scoring function and employs an evolutionary search, for the minimum-energy docking conformations (solutions). In evolutionary search, a solution is iteratively optimized until a considerably accepted solution is found. If we think of every \u201cpossible solution\u201d as a \u201cpoint in the search space\u201d, the search process in Vina is performed as iterations of 1) global optimization in the form of modified Monte Carlo10.Blind Docking refers to docking a ligand to the whole surface of a protein without any prior knowledge of the target pocket. Blind docking involves several trials/runs and several energy calculations before a favorable protein-ligand complex pose is found. However, the number of trials and energy evaluations necessary for a blind docking job is unknown. In their paper, Hetenyi, and Van Der Spoel recommended a number of trials to exceed 100 times, and at least 10 million energy evaluations per trial in case of flexible ligands12 or sacrificing the flexibility of some parts of the ligand13), or repeating the search several times using different seeds14, and in both cases, they later merge the results together manually.When it comes to Blind Docking, most -if not all- of the famous (non-exhaustive) docking tools are quite limited. 
That is because the stochastic nature of search for a fixed number of steps makes it unlikely to sample the whole energy landscape surface thoroughly enough to find all the important poses. Researchers usually mitigate this issue by either reducing the search complexity , that is suitable for blind docking, eliminating the need to run the docking tool several times or to split the docking box and then to merge the search results.9 is to optimize local searches (the most time-consuming search step) only to potentially significant points, by means of keeping track of the visited points in the search history and examining every new potential point against up to P history points before it is accepted and allowed to undergo local optimization. This was perfect for relatively small search spaces. However, it is quite limited for large-sized search space, because the search threads are diluted over the huge search volume, and hence inefficient sampling takes place.The philosophy behind QuickVina1 (\u226aP) of high quality points from all available threads history. The second step, Individual step (I), is the normal QVina 2 check against thread\u2019s individual history points P2 (=P\u2009\u2212\u2009P1). This way,other threads allows us to make use of other threads experience and make decisions in already explored energy landscape areas, while having history from an individual same thread allows us to make decisions in virgin areas.Having history from significant point, starting with P1 check decreases the number of checks needed before accepting the point (increased decision-making speed).For a insignificant potential point on the other hand, having the number of high quality checks, P1 kept to a considerably small value, leaving large enough number of history points to check in P2 before rejecting a point, ensures confidence that this rejection is not due to a false negative (no compromise in accuracy). 
Going through a full set of P checks then rejecting a potential insignificant point is faster than sending it to a set of unnecessary iterations of local optimization, the most time-consuming step of the search (increased search speed).For an The more the time passes, the more the high-quality points accumulate in the global history, the faster (and the more accurate) the\u00a0decision is taken in the G stage. This will end up with each thread being either thoroughly exploring unexplored areas or just traversing explored areas.The increased speed allowed us to scan more points in the same time\u00a0interval, increasing the overall search accuracy of the tool.The core of enabling wide docking box search is substituting the first few checks against a thread history with checks on a high-quality collection of common history points. That is to say, the normal (P) checks in QVina is split into two steps: The first step, Global step (G), is to check a small number PIn Fig.\u00a0n) number of exploring threads, the relative frequency (Fr) of the head of thread i to pass in proximity to any point of the history of thread j at time t\u2009\u2260\u20090 isd is the Euclidian distance between points xi and xj, and R is a predetermined cutoff. The sum of proximity relative Frequencies (SoF) of thread i head to pass near to the history of any other thread j at time tn]. It is important to note that SoF can exceed 1.0 (because a thread may pass near to more than one other thread simultaneously). This is particularly common in two cases: 1) near local minima [in pockets], where several threads tend to converge, and 2) progressively towards the end of the search, as all the threads tend to cover the entire search space extensively. The Average SoF (ASoF) at any time t isASoF over t\u2009\u2208\u2009 would show a progressing trend.To illustrate the theory; consider a search space with /3\u2009=\u20090.167. 
When t = 3, A3 passed by the history of B2, and B3 passed by C1, so ASoF3 = (1/3 + 1/3 + 0)/3 = 0.222. When t = 4, A4 was already descending in the well, near to B2 again (from the other side), and got a score of 1; B4 was near to both A1 and C0 and thus got a score of 2; while C4 was close to B0, getting another score of 1. Therefore, ASoF4 = (1/4 + 2/4 + 1/4)/3 = 0.333. Consequently, the progression of the ASoF would be 0.167, 0.222, 0.333, which increases with time. Our hypothesis is that the increase in ASoF is associated with increased speed and accuracy of the decision making, as we will elaborate using an example later. Next, consider the theoretical 2D search space shown in Fig. .

An essential question here is which points we should keep from the history of all threads. We decided to use the output of the last iteration of local optimization. Typically, these points had undergone up to 300 cycles of essential local optimization and then up to another 300 cycles of local optimization. To avoid any possibility of bias, the ligands were then randomized using the Vina "--randomize_only" parameter, to generate random starting poses different from the respective experimental ligand poses.
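The ASoF bookkeeping illustrated above can be sketched in a few lines. This is our own illustration, not the authors' implementation: the function names, the per-point scoring (1/t for every history point of another thread within the cutoff R), and the toy three-thread data below are assumptions consistent with the worked example.

```python
import math


def sof(head, other_histories, t, R=5.0):
    """Sum of proximity relative Frequencies for one thread head at step t:
    add 1/t for every history point of every *other* thread that lies
    within Euclidean distance R of the head."""
    total = 0.0
    for hist in other_histories:
        for p in hist:
            if math.dist(head, p) <= R:
                total += 1.0 / t
    return total


def asof(heads, histories, t, R=5.0):
    """Average SoF over the n exploring threads at step t."""
    n = len(heads)
    s = 0.0
    for i, head in enumerate(heads):
        others = [h for j, h in enumerate(histories) if j != i]
        s += sof(head, others, t, R)
    return s / n
```

With toy 1-D positions reproducing the t = 3 situation (A near one point of B's history, B near one point of C's history, C near nothing), `asof` returns (1/3 + 1/3 + 0)/3 ≈ 0.222, matching the text.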
Moreover, for every complex, we used the same stochastic search seed for both the small and large search spaces and for all the tested configurations. We tested our theory on the core set of PDBbind 2015, which includes 195 representative protein-ligand complexes. We then validated QVina-W by virtually screening the 54,520 structures of the Maybridge library screening collection against the crystal structure of the influenza A H1N1 Nucleoprotein (NP) chain A monomer, obtained from the Protein Data Bank (PDB ID: 2IQH), comparing the results to those obtained from Vina on the same structure.

We had two search space settings: one for searching a certain pocket, and the other for searching the whole receptor surface without any preference for a particular pocket (referred to as the large search space hereinafter). The small search space is similar to that defined in Vina, QVina 1 and QVina 2. For each of the 195 complexes of the test set, the search space is defined as the minimal rectangular parallelepiped, aligned with the coordinate system, that includes the docked ligand, plus an added 5 Å in each of the three dimensions. An additional 5 Å was added randomly to either side in each dimension, to decentralize the search space over the target pocket. If the search space in any dimension was less than 22.5 Å, it was uniformly increased to this value to ensure the search space allowed the ligand to rotate. For the influenza A NP, we used the T-loop binding site used in Awuni et al.
We defined the large search space (for both the PDBbind core set and the influenza NP) by determining the largest dimension of the ligand and adding its value to the protein extent in both directions in each of the three dimensions, following the recommendation that the ligand should be allowed to rotate in the search space. We did not take any other measures to decentralize the search space, because it is already centered on the protein's center of geometry, while the target pocket is somewhere on the protein surface.

After preparing the benchmarking dataset, we profiled the performance of QuickVina on the dataset using different configurations of internal tool parameters on the small search space, in order to select the best candidate configuration. Afterwards, we projected that configuration onto the large search space, where we applied the inter-process communication, and profiled its performance. Then we kept increasing its maximum number of steps until we reached four folds of the number of steps the original Vina undergoes. We describe the procedure in detail in the next sections.

Vina has a parameter called "exhaustiveness" that controls how comprehensive its search is. The higher the exhaustiveness, the lower the probability that a good result is missed. Throughout the profiling, we kept the exhaustiveness value equal to the number of CPUs used. We changed the code of QVina 2 to test different configurations. These configurations include the maximum number of checks (P) mentioned earlier and the buffer size (Q). For our study, we tested different combinations of configurations (P, Q), as well as the exhaustiveness level (E). Configurations are {(P, Q) | P ∈ {0.5 N, N, 2 N, 4 N, 6 N, 8 N} AND Q ∈ {N, 2 N, 5 N, 10 N, 20 N, 40 N} AND P ≤ Q}. Combinations are of the form {((P, Q), E) | E ∈ {8, 16, 32}}. The value N is the number of degrees of freedom (the number of active rotatable bonds plus six).
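The configuration grid described above can be enumerated mechanically. The sketch below is our own illustration (function names are ours); the N = rotatable bonds + 6 rule is taken from the worked example in the text.

```python
# Multipliers of N (number of degrees of freedom) tested in the profiling.
P_MULT = [0.5, 1, 2, 4, 6, 8]
Q_MULT = [1, 2, 5, 10, 20, 40]
E_LEVELS = [8, 16, 32]


def configurations():
    """All ((P, Q), E) combinations with the constraint P <= Q (in units of N)."""
    return [(p, q, e) for p in P_MULT for q in Q_MULT if p <= q for e in E_LEVELS]


def resolve(p_mult, q_mult, rotatable_bonds):
    """Turn multipliers into absolute counts: N = rotatable bonds + 6."""
    n = rotatable_bonds + 6
    return int(p_mult * n), int(q_mult * n)
```

The P ≤ Q constraint leaves 27 (P, Q) pairs, i.e. 81 combinations across the three exhaustiveness levels, and for a 4-bond ligand the configuration (4 N, 5 N) resolves to 40 checks over a buffer of 50 points, as in the example that follows.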
For example, suppose that QVina 2 is run with configuration (P = 4 N, Q = 5 N) and exhaustiveness E = 16. When the ligand has 4 rotatable bonds, there will be 4 + 6 = 10 degrees of freedom (N), so the maximum number of checks would be against the nearest 40 (4 N) points among the latest 50 (5 N), for each of the 16 threads exploring the search space. After the small-search-space profiling, we selected the configuration that showed the best results and projected it onto the large search space. The selected configuration was P = 4 N, Q = 5 N. We then used an exhaustiveness value of 64 in the large search space. Details of the profiling process are available in the supplementary document.

We kept the maximum number of checks and the buffer size at 4 N and 5 N, respectively. We modified the QVina 2 code to add a global buffer in addition to the individual buffer of every thread. We refer to the combination of the global and individual buffers as the hybrid buffer, which means that for a thread to check whether a potential point is significant or not, it checks the nearest points in the global buffer first. If the point is not detected as significant, or if there are not enough near points in the global history to decide, then the thread searches the history of its own individual buffer next. We then profiled the two parameters of the new check: the proximity cut-off radius (R) and the maximum number of checks allowed from the global buffer (P1, with 1 ≤ P1 ≪ P). The proximity cut-off is calculated as the Euclidean distance in the three dimensions. After p1 checks are done, the rest of the P checks (p2) are taken from the thread's own individual buffer. Please note that p2 = P − p1 (not p2 = P − P1). In both steps, the history points to be checked are ordered from nearer to farther according to their Euclidean distance to the potential point in all N dimensions.
To extend the previous example with the selected configuration and a ligand with four rotatable bonds, the total maximum number of checks to be done (P) is 4 N (i.e. 40 checks). Now, if the maximum allowed number of checks from the global buffer P1 is N (i.e. 10) and none of them passed the test, then the remaining 30 checks are completed from the individual thread buffer. If there are only 8 points of the global buffer history within the cut-off R, then the individual checks p2 will be 32. The best configuration we found was that with P1 = 1 N, R = 5 Å, P = 4 N and Q = 5 N, as shown in the supplementary document.

Applying the hybrid buffer produced a leap in speed in QVina 2 without loss of accuracy. We made use of that boosted search speed, without compromising the accuracy, and increased the accuracy further by increasing the maximum number of optimization steps (S) that Vina undergoes. S is determined as a function of the ligand's number of movable atoms and rotatable bonds. At first sight, it seems that increasing S would increase the total duration of the search and slow the searching speed. However, as we showed earlier, the more steps taken so far (s ≤ S), the higher the probability of passing nearby high-quality history points, and consequently, the faster (and the more accurate) the decision-making will be. That is to say, provided T0→1000 is the time taken to do steps S0→1000, if we duplicate S, then T0→2000 < 2 · T0→1000, as we will show in the results.
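The two-stage acceptance check in the example above can be sketched as follows. This is a simplified illustration under our own naming, not the C++ implementation: `is_promising` stands in for the real QVina significance test, and the candidate lists are assumed to be already filtered by the cutoff R and sorted from nearer to farther.

```python
def hybrid_check(point, global_near, individual_near, P, P1, is_promising):
    """Two-stage hybrid-buffer acceptance check.

    global_near / individual_near: history points already within the
    cutoff R, sorted from nearer to farther.  Up to p1 = min(P1,
    len(global_near)) checks are spent on the global buffer; the
    remaining p2 = P - p1 checks on the individual buffer.
    """
    p1 = min(P1, len(global_near))
    for q in global_near[:p1]:
        if is_promising(point, q):
            return True            # accepted early by the Global (G) stage
    p2 = P - p1                    # note: P - p1, not P - P1
    for q in individual_near[:p2]:
        if is_promising(point, q):
            return True            # accepted by the Individual (I) stage
    return False                   # rejected: skip local optimization
```

With P = 40 and P1 = 10 but only 8 global points inside R, the code spends 8 global checks and p2 = 32 individual checks, exactly as in the worked example.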
We elected the configuration with the best results so far (P1 = 1 N, R = 5 Å, P = 4 N and Q = 5 N) and kept increasing the number of steps S, expecting the accuracy to increase, and keeping in mind to preserve a speed faster than, or at least comparable to, that of Vina, until we reached up to 4 folds the original number of steps.

The choice of the octree data structure to store the history points from all threads is related to the fact that blind docking is a [spatially non-focused search]. Therefore, injecting spatial orientation to enforce spatio-temporal integration necessitates choosing a data structure that performs best in relation to the 3D position, which is the octree. The octree root is a cell that spans the whole search space, and the history points are distributed in the octree according to their spatial distribution in the three dimensions (Figure ).

With a fixed proximity cutoff, the tree traversal and processing time increases as the limit on the maximum number of contents a node may possess increases, because that increased limit implies more unnecessary tests. On the other hand, decreasing the maximum cell (node) content limit in a recursive binary search causes longer processing time, because it implies deeper recursive-search overhead. That means one has to balance the depth and the breadth of the search. We managed this tradeoff by having two limits: one on the minimum cell width (WMIN, given an arbitrary value of 0.1 Å) and another on the maximum number of points a cell can contain (SMAX), giving WMIN a higher priority over SMAX. This way, every leaf node can accept up to SMAX points. Every time a new point is added to a full (containing SMAX points) leaf node, this node is converted into an internal node and is split into eight leaf children, unless the node is too small to be divided (i.e. unless each cell dimension Wi in the 3D would become less than WMIN). In such a case, the node is not divided. Instead, the new point is simply added, and the node may then contain more than the default SMAX capacity. We made this decision because that condition usually occurs around local minima, where points tend to accumulate very close to each other. In this case, 1) searching such an area would be slow, because the recursion would go very deep (down to 14 levels according to our preliminary experiments), and the cell width might fall beyond the precision of the C++ float type; and 2) most, if not all, of the adjacent points should be considered for checking, so there is no need for the overhead of recursive calls.

When a new potential point is proposed, the currently held points in the tree are filtered according to the Euclidean distance, in the first three dimensions, to the new point. Those within the cutoff are then ordered according to their Euclidean distance to the new point, from nearer to farther. Points that pass the local optimization step are added to the octree (the history buffer). Lastly, we synchronized reading/writing of the octree using a C++ shared mutex.

For the PDBbind dataset, we collected the output data as PDBQT files and compared our results to the experimental data. We used OriginLab to facilitate studying the several dimensions of the data, in order to compare the results from the combinations of configurations. We report the search time (TS), which is the time taken purely to search for probable solutions. The search time acceleration (aTs) is calculated as aTs = TsVina / TsQVina, where TsVina and TsQVina refer to the search times of Vina and QVina, respectively. We also report the overhead time (TH), which is the time necessary to load the input files, prepare for the run, and write the output files.
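The WMIN/SMAX splitting rule described above can be sketched as a minimal octree insert. This is our own simplification (Python, illustrative placeholder value for SMAX, no cell-bounds checks), not the tool's C++ code; the key behavior it mirrors is that a full leaf splits into eight children unless its width has reached WMIN, in which case the leaf is allowed to exceed SMAX.

```python
class OctreeNode:
    """Octree cell: splits into 8 children when it holds more than SMAX
    points, unless its width has already shrunk to WMIN."""

    SMAX = 8      # max points per leaf (placeholder value for illustration)
    WMIN = 0.1    # minimum cell width in Angstrom, per the paper

    def __init__(self, center, width):
        self.center, self.width = center, width
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is not None:
            self.children[self._index(p)].insert(p)
        elif len(self.points) < self.SMAX or self.width / 2 < self.WMIN:
            # Leaf has room, or is too small to divide (near local minima
            # the leaf may then exceed SMAX on purpose).
            self.points.append(p)
        else:
            self._split()
            self.insert(p)

    def _index(self, p):
        # Child octant index from the sign of each coordinate offset.
        return sum((1 << d) for d in range(3) if p[d] >= self.center[d])

    def _split(self):
        half = self.width / 2
        self.children = []
        for i in range(8):
            off = [half / 2 if (i >> d) & 1 else -half / 2 for d in range(3)]
            c = tuple(self.center[d] + off[d] for d in range(3))
            self.children.append(OctreeNode(c, half))
        for q in self.points:          # redistribute the stored points
            self.children[self._index(q)].insert(q)
        self.points = []
```

Because WMIN has priority over SMAX, insertion always terminates: the width halves at each level, so the recursion stops once width/2 < WMIN even for coincident points.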
All the versions before parallelizing the preparation step share an almost identical set of overhead-time values, and all versions after parallelization share another almost identical set. The overall time (TO) is all the clock time taken by the tool process from start to finish, and equals the sum of the previous two times. The overall-time acceleration (aTo) is calculated as aTo = ToVina / ToQVina, i.e. the overall time against that of Vina for the same complex.

For the RMSD measure, a prediction was considered successful if the RMSD of the predicted pose with respect to the experimental structure is less than 2 Å. The percentage of complexes with successful RMSD was calculated over the 195 total complexes of the PDBbind database. In addition, we output all the history points from all the threads into separate files to allow retrospective analysis. We counted the number of passes/fails of the QVina 2 acceptance check for both the global buffer and the individual buffer, along with the number of checks per passed test. We monitored the progression of the ratio between success in the global check and success in the individual check.

To study the search process, we ran a blind docking search on every complex of the PDBbind core dataset using 64 threads, and counted the average sum of relative frequencies of any one of the running threads falling within a close proximity of 5 Å of the history "footprint" of any of the other 63 threads (ASoF); see Fig. . With an increased relative frequency of proximity, we expect an increased rate of passing the QuickVina acceptance check through the global check G. We show the progression of success in that stage among every 7500 passed checks from all concurrent threads in Fig. . Since 1 ≤ P1 ≪ P, we expect that the more points are accepted at the global stage, the fewer total checks are needed, and hence less time is required for decision making.
Additionally, as more history is accumulated in the global buffer, the search keeps becoming faster towards the end than it was in the beginning; that effect is already visible in Fig. . The global stage (G) precedes the individual stage (I), and the number of steps performed in the G stage represents a small portion of the total checks; again, see Fig. . Finally, to establish the relation between the ASoF and sensitivity, we plot the ASoF together with the global-check success rate in Fig. .

After deciding the best configuration of our tool (5Å1N_4N5N, as shown in the supplementary file), we tested the effect of increasing the maximum steps on the same best setting, doubling (×2) and quadrupling (×4) the maximum steps. The results in terms of binding energy are illustrated in Fig. ; Fig.  shows similar behavior. It is worth mentioning here that for blind docking experiments over whole receptor surfaces, it is useful to consider all predicted modes, not only the first one.

As we accelerated the overhead time as well, it is more legitimate to calculate the acceleration based on the overall time rather than the search time only (Fig. ). Finally, if we consider that a single run of QVina-W is effectively equivalent to 4 runs without having to repeat the overhead time, we can normalize the acceleration calculation by dividing the QVina time by its base B ∈ {1, 2, 4}; this way, the calculation becomes aTo(B) = ToVina / (ToQVina / B), B ∈ {1, 2, 4}. From the results shown above and in the supplementary data, we can conclude that the latest configuration of QuickVina with a global buffer explores four folds more points in the search space than AutoDock Vina and the previous QVina 2. It obtained better results than Vina, yet in a faster time compared to QVina 2. The better results are in terms of both binding energy and RMSD.
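The normalized acceleration just described can be sketched directly; the function names are ours, and the B-run interpretation (one 4×-steps QVina-W run counted as B effective runs sharing one overhead) follows the text.

```python
def overall_acceleration(to_vina, to_qvina):
    """Crude overall-time acceleration aTo = To(Vina) / To(QVina)."""
    return to_vina / to_qvina


def normalized_acceleration(to_vina, to_qvina, base):
    """Normalized acceleration: divide the QVina time by its base
    B in {1, 2, 4} before comparing, i.e. To(Vina) / (To(QVina) / B)."""
    return to_vina / (to_qvina / base)
```

For example, a QVina-W run taking twice Vina's wall time but doing 4× the steps is twice as fast in the normalized sense, although the crude ratio is 0.5.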
It is faster than Vina in a crude comparison when the ligand's heavy atoms are ≤ 11 or ≥ 39, and faster than both Vina and QVina 2 in a normalized acceleration, where it scored a 34.33-fold maximal acceleration and a 3.60-fold average acceleration over Vina 1.1.2. The final configuration is "QuickVina with a circular individual buffer of size 5 N, maximum checks 4 N, and an octree global buffer with a cutoff of 5 Å and maximum checks of 1 N, where N is the number of degrees of freedom". We are releasing this tool under the name "Quick-Vina-Wide" (QVina-W), which refers to its ability to work in a wide search space. It is suitable for blind docking, with proven high accuracy and accelerated speed.

In this work, we presented QVina-W, a new docking tool particularly useful for wide search spaces, especially for blind docking. QVina-W utilizes the powerful scoring function of AutoDock Vina and the accelerated search of QVina 2, and adds a thorough search for wide search spaces. It is based on the observation that allowing a searching thread to communicate with other nearby threads, to make use of their wisdom, increases the speed and sensitivity of that searching thread. This communication is enabled by means of a global buffer that keeps high-quality search history points from all the threads. In order to prove our theory, we analyzed the search process to trace the Average Sum of Proximity relative Frequencies (ASoF) among all searching threads, along with its effect on the speed and sensitivity of decision taking, as well as the effect of increasing the number of search steps on the search speed and accuracy. This established the direct relation between the length of the search and the ASoF, which is reflected in the search speed and accuracy, and in turn implies a higher probability of better results. QVina-W makes use of this acceleration and explores four folds the number of points that Vina explores, in a more efficient way.
We also multithreaded the preparation overhead, which adds further to the overall-time acceleration. QVina-W proved to be faster than QVina 2, yet better than Vina in terms of binding energy and RMSD (with a success rate of 72% by QVina-W versus 63% by Vina). Our plan to extend this work includes implementing a genetic algorithm between nearby points to maximize the benefit of the shared wisdom of threads, in addition to making a self-fine-tuning tool for QuickVina that adjusts its parameters according to the installation environment.

The tool is available from [http://www.qvina.org].
Operating system(s): cross platform
Programming language: C++
Other requirements: BOOST 1.60
License: Apache License (Version 2.0)
The dataset supporting the conclusions of this article is available in the PDBbind database repository [http://www.pdbbind.org.cn].
Supplementary Document: Search Progress sample

The multivariate nonlinear Granger causality test developed by Bai et al. (2010) plays an important role in detecting the dynamic interrelationships between two groups of variables. Following the idea of the Hiemstra-Jones (HJ) test proposed by Hiemstra and Jones (1994, Journal of Finance 49: 1639-1664), they attempt to establish a central limit theorem (CLT) of their test statistic by applying the asymptotic property of the multivariate U-statistic. However, Bai et al. (2016) revisit the HJ test and find that the test statistic given by HJ is NOT a function of U-statistics, which implies that neither the CLT proposed by Hiemstra and Jones (1994) nor the one extended by Bai et al. (2010) is valid for statistical inference. In this paper, we re-estimate the probabilities and re-establish the CLT of the new test statistic. Numerical simulation shows that our new estimates are consistent and our new test displays decent size and power.

After the pioneering work of Granger, causality testing has been studied extensively. The real world is "almost certainly nonlinear", as Granger notes. Bai et al.
extend the HJ test to the multivariate setting, treating the probability estimators as multivariate U-statistics, as Hiemstra and Jones claimed and proposed. Considering the significant importance of the multivariate nonlinear Granger causality test, there is an urgent need to reinvestigate Bai et al. and extend their work.

Consider Xt = (x1,t, …, xn1,t)′ and Yt = (y1,t, …, yn2,t)′. The mxi-length lead vector of xi,t collects (xi,t, xi,t+1, …, xi,t+mxi−1); the Lxi-length lag vector of xi,t and the Lyi-length lag vector of yi,t collect (xi,t−Lxi, …, xi,t−1) and (yi,t−Lyi, …, yi,t−1), respectively. Denote Mx = (mx1, …, mxn1), mx = max(mx1, …, mxn1), Lx = (Lx1, …, Lxn1), lx = max(Lx1, …, Lxn1), Ly = (Ly1, …, Lyn2), and ly = max(Ly1, …, Lyn2). For given Mx, Lx, Ly and e, Bai et al. consider the maximum norm ‖X − Y‖ = max(|x1 − y1|, …, |xn − yn|) for any two vectors X = (x1, …, xn) and Y = (y1, …, yn).

Definition 1. The vector time series {Yt} does not strictly Granger cause another vector time series {Xt} if the conditional probability that the lead vectors of Xt and Xs are within distance e of each other, given that the corresponding lag vectors of both X and Y are within distance e, equals the same conditional probability given the lag vectors of X alone, where P(·|·) denotes conditional probability.

Using this notation, Bai et al. re-express the hypothesis through four probabilities C1(∗), C2(∗), C3(∗) and C4(∗). For two sets of simultaneous samples {xi,t, i = 1, ⋯, n1, t = 1, ⋯, T} and {yi,t, i = 1, ⋯, n2, t = 1, ⋯, T}, they propose a test statistic of the HJ form built from estimates C1, C2, C3 and C4 of these probabilities.

Remark: Following the instruction of Hiemstra and Jones, Bai et al. take the Cjs as multivariate U-statistic estimators of their counterparts Cj(∗) and apply the asymptotic property of U-statistics to show the limiting results for the test statistic. We first remind the reader that the Cjs are not U-statistics, because the expectations of the general terms are not the same. Moreover, the Cj(∗)s are related to the indices t and s, while the Cjs were made independent of t and s by summing over them. Therefore, the Cj estimators are neither consistent nor asymptotically normal estimators of their counterparts Cj(∗). In fact, both Hiemstra and Jones and Bai et al. use improper estimators of the Cj(∗), and these improper estimators Cj thus lead to an invalid asymptotic distribution of the test statistic. For each pair (s, t) with |t − s| = l, strict stationarity implies that C1(t, s) depends only on l, so we write C1(l) instead of C1(t, s); the same applies to the others.
So, under the assumption of strict stationarity, for each l > 0 we examine whether there is nonlinear Granger causality from {Yt} to {Xt} by testing the hypothesis H0: C1(l)/C2(l) = C3(l)/C4(l).

If we consider two sets of simultaneous samples {xi,t, i = 1, ⋯, n1, t = 1, ⋯, T} and {yj,t, j = 1, ⋯, n2, t = 1, ⋯, T}, we first provide consistent estimators of C1(l), C2(l), C3(l) and C4(l): each is the sample average, over the n = T − Lxy − l − mx + 1 admissible pairs (t, t + l), of the corresponding indicator of max-norm closeness, where Lxy = max(lx, ly).

The consistency of our proposed estimators can be shown straightforwardly, and the detail of the proof is omitted. We use a simple numerical study to show that our estimators are consistent whereas those of Bai et al. are not. Suppose {at,1} and {at,2} are i.i.d. and mutually independent random variables generated from N(0, 1), while {Yt} could be any stationary sequence. Let l = 1 and Lx = Ly = Mx = 1. We can calculate the exact values of C4(l), which are 0.2709 and 0.5057, respectively, when e = 1 and e = 1.5. For simplicity, we compare the true value of C4(l), the estimate proposed by Bai et al., and our estimate of C4(l), for T = 1000, 2000 and 4000. It is obvious that our estimate approaches the true value as T grows.

Now, we propose the test statistic Tn for the Granger causality test.

Theorem 1. Suppose the stationary sequences {xi,t, i = 1, ⋯, n1, t = 1, ⋯, T} and {yj,t, j = 1, ⋯, n2, t = 1, ⋯, T} are strong mixing, with mixing coefficients satisfying the conditions of Lemma 1 presented in the Appendix. Then, for given values of l, Lx, Ly, Mx and e > 0, under the null hypothesis that {Yt} does not strictly Granger cause {Xt}, the test statistic Tn is asymptotically normal once its asymptotic variance σ2 is replaced by a consistent estimator, and the hypothesis H0 is rejected when the standardized statistic exceeds the corresponding normal critical value.

There are several possible methods to estimate the asymptotic variance σ2. A model-based approach uses known laws of {Xt} and {Yt} to calculate the expectations in the formula given in the Appendix and simply substitutes Cj(∗), j = 1, 2, 3, 4, with their corresponding estimates. However, in practice, we can hardly avoid model misspecification and may obtain improper laws of {Xt} and {Yt}.
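A hedged sketch of the indicator-based estimators for the simplest setting Lx = Ly = Mx = 1 and a fixed lag l. The pairing (t, t + l) and the max-norm indicators follow the definitions above, but the exact windowing and function names are our own reading, not the authors' code.

```python
import numpy as np


def max_norm_close(a, b, e):
    """Indicator of ||a - b|| < e under the maximum norm."""
    return np.max(np.abs(a - b)) < e


def estimate_C(x, y, l, e):
    """Estimate (C1(l), C2(l), C3(l), C4(l)) for Lx = Ly = Mx = 1 by
    averaging indicator products over the pairs (t, t + l).
    x: (T, n1) array, y: (T, n2) array."""
    T = x.shape[0]
    c = np.zeros(4)
    n = 0
    for t in range(1, T - l):       # need one lag and one lead observation
        s = t + l
        lag_x = max_norm_close(x[t - 1], x[s - 1], e)
        lag_y = max_norm_close(y[t - 1], y[s - 1], e)
        lead_x = max_norm_close(x[t], x[s], e)
        c += [lead_x and lag_x and lag_y,   # C1: lead & lag closeness, X and Y
              lag_x and lag_y,              # C2: lag closeness, X and Y
              lead_x and lag_x,             # C3: lead & lag closeness, X only
              lag_x]                        # C4: lag closeness, X only
        n += 1
    return c / n
```

As a sanity check against the paper's exact value: for bivariate i.i.d. N(0, 1) data and e = 1, C4 = [2Φ(1/√2) − 1]² ≈ 0.2709, and the estimator converges to it.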
We suggest the use of bootstrap methods, as in the simulation studies, to test the hypothesis H0.

In this subsection, we perform numerical studies using simulations to illustrate the applicability and superiority of the new multivariate nonlinear Granger causality test developed in Section 3. Let R be the number of times the null hypothesis that Yt does not strictly Granger cause Xt nonlinearly is rejected in 10,000 replications at the α level; the empirical power is thus R/10,000. In our simulation, the level is α = 0.05; we standardized the series and chose the same lag length and lead length: Lx = Ly = Mx = 1. We set three values of l and two values of e: l = 1, 2, 3 and e = 1, 1.5.

Consider the following model, in which {(Yt1, Yt2)′} are i.i.d. and mutually independent random variables generated from the standard normal distribution N(0, 1), and {εt} is Gaussian white noise generated from N(0, 1) and independent of {Yt1} and {Yt2}. There is no nonlinear Granger causality from Yt to Xt when β = 0, and the causality strengthens as β increases.

From the results displayed, first, when β = 0 the empirical sizes are all close to the test level 0.05 for the different settings of parameters and sample sizes. Second, our test possesses very appropriate power: the empirical power increases as β increases, and when the sample size is 500 the empirical power increases sharply to 1. Further, we find that different settings of e may influence the test results. Though the influence is small in our simulation, we still suggest that practitioners choose a couple of different values of e.

Now we apply our new method to detect the nonlinear causality from returns to trading volumes on China's stock market.
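The empirical size/power bookkeeping described above can be sketched as a rejection-rate loop. Here `run_test` is a hypothetical placeholder for one full simulate-and-test replication (the statistic with its bootstrap variance estimate); only the counting logic is shown.

```python
import numpy as np


def empirical_rejection_rate(run_test, n_rep=10000, alpha=0.05, seed=0):
    """Fraction R/n_rep of replications in which the test rejects H0 at
    level alpha.  run_test(rng) must return a p-value for one simulated
    dataset; under H0 this fraction estimates the empirical size, under
    an alternative it estimates the empirical power."""
    rng = np.random.default_rng(seed)
    R = sum(run_test(rng) < alpha for _ in range(n_rep))
    return R / n_rep
```

A correctly calibrated test produces uniform p-values under H0, so plugging in a uniform draw as a stand-in `run_test` should give a rejection rate near the nominal 0.05.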
To eliminate the potential influence of asymmetric information caused by language barriers, different accounting standards and foreign currency exchange, we consider China's A shares, which are denominated in the local currency, the Yuan, and traded among China's citizens. A shares listed on the Shanghai Stock Exchange are named SHA, while those listed on the Shenzhen Stock Exchange are named SZA.

We denote the returns of SHA and SZA at time t as (Yt1, Yt2)′ and the corresponding volume changes as (Xt1, Xt2)′; we let Lx1 = Lx2 = Ly1 = Ly2 = mx1 = mx2 = 1 and consider l = 1, 2 and 3, and e = 1 and 1.5. It is worth noting that the standard deviations of Yt1, Yt2, Xt1 and Xt2 are 0.014843, 0.017552, 0.195853 and 0.171018, respectively. To implement our proposed test, Yt1, Yt2, Xt1 and Xt2 need to be standardized first, so that all series share a common standard deviation of 1. To avoid tedious notation, we also denote the standardized sequences (Yt1 − Mean(Yt1))/SD(Yt1), (Yt2 − Mean(Yt2))/SD(Yt2), (Xt1 − Mean(Xt1))/SD(Xt1) and (Xt2 − Mean(Xt2))/SD(Xt2) as Yt1, Yt2, Xt1 and Xt2.

We test causality from (Yt1, Yt2)′ to (Xt1, Xt2)′, as well as for the two pairs of subsets, Yt1 to Xt1 and Yt2 to Xt2. Generally speaking, there exists causality from stock returns to volume changes: on the row testing no causality from (Yt1, Yt2)′ → (Xt1, Xt2)′, the p-values are all much smaller than the 0.05 level. Further, we try to determine which subset possesses the dynamic causal explanatory ability; see the results on the last row.
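The standardization applied to the four series is the usual z-score; a minimal sketch (our own helper, not the authors' code):

```python
import numpy as np


def standardize(series):
    """Center a series to mean 0 and scale it to standard deviation 1,
    i.e. (s - Mean(s)) / SD(s) as in the text."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std()
```

After this step, all four series share a common standard deviation of 1, so a single bandwidth e is comparable across them.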
Numerical study supports that our estimators are consistent; further, our new test possesses admirable properties in both size and power. There remain many appealing aspects of the nonlinear Granger causality test; it is worth noting that Diks and Wolski extend related work in this direction.

Appendix. Let {Zt} be a stationary process with E(Zi) = 0, and define the mixing coefficient α(τ) through the σ-algebras generated by the past {Zs, s ≤ t} and the future {Zs, s ≥ t + τ}. Definition A1: A stationary process {Zt} is said to be strongly mixing (completely regular) if α(τ) → 0 as τ → ∞ through positive values. Lemma A1: Let the stationary sequence {Zi} satisfy the strong mixing condition with mixing coefficient α(n), and let E|Zi|^(2+δ) < ∞ for some δ > 0. If σ ≠ 0, then the normalized partial sums (1/(σ√n)) Σ Zi converge in distribution to the standard normal law. Readers are referred to Ibragimov for a proof.

Assume {xi,1, xi,2, ⋯, xi,T} and {yj,1, yj,2, ⋯, yj,T}, i ∈ {1, 2, ⋯, n1}, j ∈ {1, 2, ⋯, n2}, are both strong mixing stationary sequences whose mixing coefficients satisfy the conditions in Lemma 1. Then the four sequences {Zt1}, {Zt2}, {Zt3} and {Zt4}, with n = T − Lxy − l − mx + 1, satisfy the central limit theorem. Further, for any real numbers a1, a2, a3 and a4, the sequence {Zt = a1Zt1 + a2Zt2 + a3Zt3 + a4Zt4, t = Lxy + 1, ⋯, T − l − Lxy − mx + 1} also satisfies the conditions of Lemma 1, which implies joint asymptotic normality with a 4 × 4 symmetric covariance matrix Σ. Denote σ2 = ∇′Σ∇, in which Σ and ∇ are replaced by their empirical estimates. Under the null hypothesis, applying the delta method, we obtain the limiting distribution of the test statistic.

As hepatocellular carcinoma (HCC) usually occurs in the background of cirrhosis, which is an end-stage form of liver disease, treatment options for advanced HCC are limited, due to poor liver function. The exosome is a nanometer-sized membrane vesicle structure that originates from the endosome. Exosome-mediated transfer of proteins, DNAs and various forms of RNA, such as microRNA (miRNA), long noncoding RNA (lncRNA) and messenger RNA (mRNA), contributes to the development of HCC.
Exosomes mediate communication between HCC cells and non-HCC tumor-associated cells, and several molecules are implicated in exosome biogenesis. Exosomes may be potential diagnostic biomarkers for early-stage HCC; exosomal proteins, miRNAs and lncRNAs could provide new biomarker information for HCC. Exosomes are also potential targets for the treatment of HCC. Notably, further efforts are required in this field. We reviewed the recent literature and demonstrate how useful exosomes are for diagnosing patients with HCC, treating patients with HCC and predicting the prognosis of HCC patients.

There are at least three types of extracellular vesicles in the extracellular microenvironment: exosomes, microvesicles and apoptotic bodies. Cancer-derived exosomes form network complexes of communication between tumor and nontumor cells, and exosome-mediated cancer progression proceeds through the promotion of a tumor microenvironment, such as by enhancing cell proliferation and angiogenesis.

Hepatocellular carcinoma (HCC) is the fourth most common cancer type and the second most common cause of cancer-related deaths. This review focuses on the contents of exosomes and how exosomes contribute to the development of HCC. We further address how useful exosomes are to diagnose HCC, to treat patients with HCC and to predict their prognosis.

HCC largely occurs in the background of chronic liver disease and cirrhosis, in over 90% of cases. The high mortality rates of HCC are almost equal to its incidence rates in most countries, indicating the lack of effective therapies. During the last decade, the oral multitargeted tyrosine kinase inhibitor sorafenib was the first approved oral agent of systemic chemotherapy in patients with advanced and metastatic HCC. Exosomes are 30–100 nm vesicles with a phospholipid bilayer membrane.
Exosomes contain a variety of cellular components, including a range of proteins such as heat shock proteins (HSPs), lipids, RNAs, mRNAs and DNA molecular cargoes, with surface protein markers including tetraspanins. The contents of exosomes and their effects on recipient cells mainly depend on the cell types from which they are derived. ExoCarta (http://www.exocarta.org) is a manually curated Web-based database of exosomal proteins, RNAs and lipids.

Wang et al. investigated 1428 proteins in exosomes derived from HCC using mass spectrometry, and these proteins were classified by GO annotation according to their biological process, cellular component and molecular function. Another group identified 129 proteins that exist in exosomes derived from HCC, using protein profiling. Zhang et al. quantified more than 1400 exosomal proteins by performing the super-Stable Isotope Labeling using Amino Acids in Cell Culture (SILAC)-based mass spectrometry (MS) analysis on the exosomes secreted by three human HCC cell lines.

Yukawa et al. showed that exosomes derived from HCC play an important role in influencing the immune system and angiogenesis through the expression of killer cell lectin-like receptor K1 (KLRK1/NKG2D), an activating receptor for immune cells, and HSP70, a stress-induced heat shock protein associated with angiogenesis. However, other groups investigated HCC-derived immunomodulators from different perspectives. Rao et al. investigated the expression of the immunomodulatory HCC antigens HSP70, α-fetoprotein (AFP) and glypican 3 in HCC-derived exosomes. Fu et al. showed that exosomes derived from attached HCC cells contain SMAD Family Member 3 (SMAD3) protein, which facilitates the adhesion of detached HCC cells.
High mobility group box 1 (HMGB1) was expressed on HCC-derived exosomal membranes and bound with high affinity to Toll-like receptor-2 (TLR-2), TLR-4, TLR-9 and the receptor for advanced glycation end products (RAGE), leading to tumor cell survival, expansion and metastasis. Wang et al. found that the protein and mRNA levels of 14-3-3ζ are up-regulated in HCC-derived exosomes and that 14-3-3ζ impairs the anti-tumor activity of tumor-infiltrating T lymphocytes through T cell exhaustion. Li et al. found that CXC chemokine receptor-4 (CXCR4) was elevated in exosomes derived from highly lymph node-metastatic HCC and promoted the migration and invasion of HCC cells with low metastatic potential. Sohn et al. reported that HCC-derived exosomal miR-18a, miR-221, miR-222 and miR-224 were significantly higher, and miR-101, miR-106b, miR-122 and miR-195 were lower, than those in sera from patients with cirrhosis. Thus, exosomal miRNAs may be used as novel serological biomarkers. Wang et al. showed that serum exosomal levels of miR-122, miR-148a and miR-1246 are significantly higher in HCC than in liver cirrhosis and normal control groups. Exosomal miR-122 is highly expressed in the liver. Fornari et al. revealed that HCC-derived exosomes carry miR-519d, miR-21, miR-221 and miR-1228, which correlate with circulating and tissue levels. Circulating microRNAs may be used as noninvasive biomarkers. Wang et al. found that exosomal miR-21 is significantly higher in patients with HCC compared to chronic hepatitis B patients or healthy volunteers. Zhou et al. showed that HCC-derived exosomal miR-21 is elevated and promotes cancer progression by activating cancer-associated fibroblasts (CAFs). Li et al. examined 11 well-known reference genes from circulating exosomes across healthy controls, hepatitis B patients and HCC patients. Sugimachi et al.
found that exosomal miR-718 is significantly suppressed in patients with HCC recurrence after liver transplantation. Kogure et al. identified 11 miRNAs, including miR-584, miR-517c, miR-378, miR-520f, miR-142-5p, miR-451, miR-518d, miR-215, miR-376a, miR-133b and miR-367, which are highly enriched in HCC-derived exosomes. Wang et al. showed that stellate cell-derived exosomes can deliver miR-335-5p cargo to recipient HCC cells, inhibit HCC cell proliferation and invasion in vitro and induce HCC tumor shrinkage in vivo. Lin et al. examined 19 known miRNAs that significantly increase in the sera of HCC patients and found that miR-210-3p is elevated in exosomes isolated from the sera of HCC patients. Yu et al. identified five down-regulated miRNAs and one up-regulated miRNA (miR-296-3p) in the fast-migrating HCC group compared to the slow-migrating group using 372 HCC profiles from The Cancer Genome Atlas (TCGA). Shi et al. showed that exosome-delivered miR-638 in HCC is down-regulated and negatively associated with tumor size, vascular infiltration, TNM stage and overall survival. Liu et al. found that circulating HCC patient-derived exosomal miR-125b levels were down-regulated compared with those from patients with chronic hepatitis B and liver cirrhosis. Matsuura et al. found that HCC-derived exosomal miR-155 is up-regulated under hypoxic conditions. Fu et al. showed that multidrug-resistant HCC cell-derived exosomal miR-32-5p is significantly elevated while PTEN is reduced; miR-32-5p activated the PI3K/Akt pathway by suppressing PTEN and promoted angiogenesis and EMT, causing multidrug resistance. Liu et al. showed that HCC-derived exosomal miR-25-5p was elevated and contributed to tumor self-seeding by enhancing cell migratory and invasive abilities in mouse xenograft models. Takahashi et al. found that HCC-derived exosomes are enriched in lncRNA-ROR. Li et al.
showed that lncRNA-FAL1 was up-regulated in HCC tissues and HCC-derived exosomes. lncRNAs may be useful as novel diagnostic biomarkers or novel targets for the treatment of HCC in the future. Hou et al. identified five prognostic lncRNAs, CTD-2116N20.1, AC012074.2, RP11-538D16.2, LINC00501 and RP11-136I14.5, in HCC-derived exosomes. Zhang et al. found that lncRNA-HEIH expression in both serum and exosomes was increased in patients with HCV-related HCC. Xu et al. investigated serum exosomal lncRNA ENSG00000258332.1 (LINC02394) and LINC00635 by comparing sera between 55 HCC patients, 60 chronically HBV-infected patients and 60 healthy controls to identify potential diagnostic markers and markers predicting the prognosis of HCC. Gramantieri et al. showed that the lncRNAs cancer susceptibility 9 (CASC9) and lung cancer associated transcript 1 (LUCAT1) are up-regulated in HCC-derived exosomes. Sun et al. examined eight candidate lncRNAs selected from the available literature by quantitative reverse transcription-PCR (qRT-PCR) and determined that HCC-derived serum exosomal lncRNA-LINC00161 is up-regulated compared to that in healthy controls. Using qRT-PCR, Ma et al. found that lncRNA Jpx, an activator of X-inactive-specific transcript (Xist), was up-regulated in the exosomes of female HCC patients compared to healthy female volunteers and patients with chronic hepatitis B and cirrhosis; the Jpx gene did not affect male cells, suggesting that Jpx is a sex-specific gene. Li et al. found that the HCC-derived exosomal lncRNA TUC339 is up-regulated and is taken up by macrophages. Abd El Gwad et al. found that lncRNA RP11-513I15.6 and miR-1262 are included in the RAB11A competing endogenous RNA network.
HCC patient-derived exosomal heterogeneous nuclear ribonucleoprotein H1 (hnRNPH1) was markedly higher than that in chronic hepatitis B patients and healthy controls. Wang et al. found that HCC patient-derived exosomal circular RNA PTGR1 (circPTGR1) was up-regulated compared to controls. Exosomes are also a distinct source of tumor DNA. Exosomes may be potential detection biomarkers for early-stage HCC. Exosomes may likewise be potential biomarkers for predicting survival in HCC patients, and they are attractive targets for the treatment of HCC. There are also substances that may have the potential to treat HCC by changing exosome contents. Xiong et al. found that exosomal miR-490 was up-regulated in mast cells stimulated by the HCV E2 envelope glycoprotein. AAV/AFP-transfected dendritic cell (DC)-derived exosomes (DEXs) stimulate naive T cell proliferation and induce T cell activation into antigen-specific cytotoxic T lymphocytes (CTLs), exhibiting antitumor immune responses against HCC. Some reports demonstrate the therapeutic potential of HCC-derived exosomes for HCC. In contrast, one report shows that HCC-derived exosomes confer resistance to sorafenib: Qu et al. showed that HCC-derived exosomes induce sorafenib resistance by activating the HGF/c-Met/Akt signaling pathway, inhibiting sorafenib-induced apoptosis and elevating HGF, which may be an important mechanism underlying HCC resistance to sorafenib. Trivedi et al. reported that miRNAs in exosomes are related to the activation of genes associated with anti-tumor signaling. In summary, exosomes mediate communication and the transfer of several molecules between HCC and non-HCC cells in the tumor microenvironment, and can thereby be both tumorigenic and tumor-suppressive. It is difficult to diagnose early-stage HCC and to treat HCC radically.
HCC-derived exosomes may be useful for the diagnosis of HCC as novel biomarkers and for the treatment of HCC as therapeutic targets and treatment tools, as they represent a major delivery system for proteins and several types of RNAs. A limitation of this review is that we did not fully incorporate current findings on exosomes in the various liver diseases associated with the formation of HCC, because HCC develops through the long-term progression of underlying liver diseases. Most studies have emphasized the implications of exosomes as biomarkers in the diagnosis of human diseases. However, exosomes have diverse functions in the maintenance of homeostasis, and whether and how exosomes serve as reliable and practicable biomarkers remains largely controversial."} +{"text": "Epigenetic alterations, such as histone modification, DNA methylation, and miRNA-mediated processes, are critically associated with various mechanisms of proliferation and metastasis in several types of cancer. To overcome the side effects and limited effectiveness of drugs for cancer treatment, there is a continuous need for the identification of more effective drug targets and the execution of mechanism of action (MOA) studies. Recently, epigenetic modifiers have been recognized as important therapeutic targets for hepatocellular carcinoma (HCC) based on their reported abilities to suppress HCC metastasis and proliferation in both in vivo and in vitro studies. Therefore, here, we introduce epigenetic modifiers and alterations related to HCC metastasis and proliferation, and their molecular mechanisms in HCC metastasis. The existing data suggest that the study of epigenetic modifiers is important for the development of specific inhibitors and diagnostic targets for HCC treatment.
Hepatocellular carcinoma (HCC), a common primary liver cancer, is the second leading cause of death in cancer patients, and is caused by chronic hepatitis B and C virus (HBV and HCV) infections and other factors, such as alcohol, diabetes, and aflatoxin exposure. Thus, in this review, we introduce HCC-related epigenetic modifiers and suggest their clinical utility in HCC treatment. In particular, we summarize the molecular mechanisms and functions of epigenetic modifiers in HCC metastasis, suggesting their potential as therapeutic targets and diagnostic markers in HCC. miRNAs regulate gene expression at the posttranscriptional level by inhibiting the translation or promoting the decay of transcripts, depending on the target sequence matching ratio. The biogenesis of miRNAs has been extensively studied. In brief, primary miRNAs (pri-miRNAs) are generated by RNA polymerase II and processed into precursor miRNAs (pre-miRNAs) by the RNase III enzyme Drosha and the DiGeorge critical region 8 (DGCR8) microprocessor complex in the nucleus. Transforming growth factor (TGF)-β signaling has been shown to play an important role in EMT. TGF-β induces cells to lose their epithelial characteristics and to acquire migratory behavior through activating Smad (mothers against decapentaplegic homolog) signaling. Recent studies have indicated that abnormal regulation of miRNA-mediated TGF-β/Smad signaling pathways induces malignant tumor development. Low expression of miR-542-3p and miR-142 is frequently found in HCC; these two miRNAs directly regulate the TGFB1 transcript by binding to its 3'-UTR. Several studies have shown that miRNAs regulate various downstream genes of TGF-β signaling, but this does not occur through Smad signaling. For example, TGF-β1-induced EMT was suppressed by miR-300 through targeting focal adhesion kinase (FAK), which modulates EMT.
Some studies have reported that miRNAs can be positively or negatively regulated by TGF-β signaling. For example, treatment with recombinant TGF-β1 increased miR-155 and miR-181a expression levels in HCC. There are several lines of evidence indicating that the Wnt/β-catenin signaling pathway plays important roles in EMT. Wang et al. found that the downregulation of miR-122 enhanced the proliferation, migration, and invasion of HCC. They also showed that miR-122 overexpression decreased cell proliferation, migration, and invasion by targeting Wnt1 and inhibiting EMT-related gene expression. The Wnt/β-catenin signaling pathway can in turn regulate miRNA expression levels. For example, miR-25 is significantly upregulated in human HCC tissues compared with normal liver tissues. The functions of miR-25 include stimulating HCC cell growth and activating EMT by targeting Rho GDP dissociation inhibitor alpha (RhoGDI1). Snail1 is a critical point of convergence in EMT regulation and represses CDH1 expression at the transcriptional level. Previously, miR-140-5p and miR-630 were found to directly bind to Slug and negatively regulate its expression in HCC. Recent reports have shown that Twist1 induces EMT and promotes metastasis in HCC by regulating various EMT-associated genes. Significant downregulation of miR-26b-5p and miR-27a-3p has been found in HCC; mechanistically, Twist1 can suppress these miRNAs by binding to their promoter regions. In addition, high expression of miR-345 inhibited EMT and cell mobility by targeting IRF1-mediated mTOR/STAT3/AKT signaling, and genes downstream of these pathways, including Snail, Slug, and Twist, are related to EMT in HCC. Exosomes, a type of extracellular vesicle, are small vesicles less than 200 nm in diameter. A previous study showed that a high level of miR-103 was associated with a higher metastatic potential of HCC.
Exosomal miR-103 increases vascular permeability and promotes metastasis by directly targeting endothelial junction proteins, including VE-cadherin (VE-Cad), p120-catenin (p120), and zonula occludens 1 (ZO1). Therefore, miR-103 is a potential therapeutic target and metastasis marker of HCC. DNA methylation, catalyzed by DNA methyltransferases (DNMTs), is a chemical modification of DNA in which a methyl group is conjugated to the 5' carbon position of the cytosine ring, and is crucial for regulating gene expression. To date, several studies on the roles of DNMT1 in HCC metastasis have been reported. In CD133+/CD44+ cells, a subpopulation of HCC cells with CSC properties, the noncollagenous bone matrix protein osteopontin (OPN) enhances HCC metastasis by regulating DNA methylation. Knockdown of OPN in CD133+/CD44+ cells suppressed sphere formation and migration by inhibiting DNMT1 expression, which reduced the methylation of tumor suppressor genes such as RASSF1, GATA4, and CDKL2. The axis governed by hepatocyte growth factor (HGF) and its receptor c-Met plays important roles in cell proliferation, survival, and migration in the liver. Several recent studies have shown that DNMT3 regulates HCC invasion and metastasis. A clinicopathological study by Oh et al. reported significant correlations between DNMT expression and overall survival and metastasis-free survival in HCC patients. In HCC metastasis and invasion, DNMT3 is involved in the epigenetic regulation of the metastasis-associated protein 1 (MTA1) gene. To date, several studies on the epigenetic regulation of tumor suppressor gene expression by as-yet-undefined DNMTs have been reported in HCC. PCDH10 was recently demonstrated to be a tumor suppressor gene, and is frequently silenced in HCC. The transmembrane glycoprotein CD147 has been implicated in HCC progression and metastasis, and CD147 gene silencing reduced MMP secretion and the invasive potential of HCC cells.
Therefore, DNA methylation status and DNMT levels may be potential biomarkers of HCC and attractive therapeutic targets for HCC treatment. Histone modifications, such as histone methylation, acetylation, and ubiquitination, critically control oncogenes and tumor suppressor genes at the transcriptional level during tumor progression. Enhancer of zeste homolog 2 (EZH2) is a member of the polycomb group complex and plays an important role in the proliferation and metastasis of various cancers via methylation of histone H3K27. SETDB1 (KMT1E) is a methyltransferase that targets histone H3K9 methylation to repress gene expression. Euchromatic histone lysine methyltransferase 2 and suppressor of variegation 3-9 homolog 1 (SUV39H1) predominantly methylate histone H3K9 to induce the formation of heterochromatin, and are overexpressed in several types of cancer. KDM5C and JARID1B are histone demethylases in the family of JmjC domain-containing proteins that mainly demethylate histone H3K4 to suppress gene expression via the formation of heterochromatin, and are overexpressed in many types of cancer. Cancer metastasis, the spread of cancer cells from the primary site, is the major cause of morbidity and mortality in various cancers. HCC metastasis is defined as either intrahepatic metastasis (IHM) via portal vein dissemination, or extrahepatic metastasis (EHM) to other organs, including the lungs, lymph nodes, bones, and adrenal glands. E-cadherin (CDH1), inhibitor of DNA binding 2 (ID2), matrix metalloproteinase 9 (MMP9), and transcription factor 3 (TCF3) have been reported as independent prognostic factors for the overall survival of HCC patients. EMT and MET have been proposed to be engines of metastasis in various cancers, including HCC. Hypermethylation of the CDH1 gene promoter has been confirmed in many cancers.
Snail recruits the histone demethylase lysine-specific demethylase 1 (LSD1), which removes the dimethylation of K4 on histone H3 (H3K4me2) and mediates the transcriptional repression of CDH1. Since EMT is a crucial event in HCC progression and metastasis, epigenetic alterations have potential as clinically applicable biomarkers and therapeutic targets in HCC. Currently, noncoding RNAs have been highlighted as new regulators of various genes, including EMT-related genes. The best-known EMT-related miRNAs are those in the miR-200 family and miR-205. In terms of cancer biomarkers, the body fluid-based liquid biopsy concept has recently emerged for the noninvasive analysis of biomarkers. Indeed, many research groups have discovered and published on circulating HCC-derived biomolecules. Circulating cell-free methylated DNA from cancer cells has been detected in HCC patient body fluids, such as serum, plasma, and urine. Hypermethylated DNA from the Ras association domain family protein 1A (RASSF1A) gene was detectable in over 90% of HCC patient sera samples, and predicted a shorter relapse-free survival interval for HCC patients. Interestingly, the methylation of several genes (APC, FHIT, p15, p16, and E-cadherin) detected in HCC tissues was successfully reproduced in plasma circulating DNA. Circulating miRNAs are another potential noninvasive marker for HCC liquid biopsy. A nested case-control study with prospectively collected sera from HCC patients and controls revealed significantly increased serum levels of miR-29a, miR-29c, miR-133a, miR-143, miR-145, miR-192, and miR-505 in patients with HCC. In recent years, exosomes, a type of extracellular microvesicle, have been shown to be vehicles for miRNAs. Therefore, exosomal miRNAs may be stable in body fluids due to the protective role of exosomes against RNase.
Exosomes carrying cancer-specific miRNAs are released from cancer cells and can circulate in body fluids. After being delivered into acceptor cells, exosomal miRNAs play functional roles by targeting specific genes. miR-21 plays an oncogenic role in various cancer tissues. Recently, high levels of exosomal miR-21 were reported in serum from HCC patients, and were shown to be significantly associated with HCC patient prognosis. Collectively, EMT plays a critical role in HCC development and metastasis. EMT is modulated via several epigenetic changes, such as DNA methylation, histone modifications, and miRNAs, indicating that each epigenetic modifier has great potential to serve as a novel diagnostic and/or therapeutic marker for HCC metastasis. Thus, future research should focus more on the development of specific inhibitors, the discovery of cancer metastasis biomarkers, and MOA studies based on epigenetic modifiers."} +{"text": "Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths worldwide. HCC patients are commonly diagnosed at an advanced stage, for which highly effective therapies are limited. Moreover, the five-year survival rate of HCC patients remains poor due to the high frequency of tumor metastasis and recurrence. These challenges give rise to the emergent need to discover promising biomarkers for HCC diagnosis and identify novel targets for HCC therapy. Circular RNAs (circRNAs), a class of long-overlooked non-coding RNA, have been revealed as multi-functional RNAs in recent years. Growing evidence indicates that circRNA expression alterations have a broad impact on the biological characteristics of HCC. Most of these circRNAs regulate HCC progression by acting as miRNA sponges, suggesting that circRNAs may function as promising diagnostic biomarkers and ideal therapeutic targets for HCC.
In this review, we summarize the current progress in studying the functional role of circRNAs in HCC pathogenesis and present their potential values as diagnostic biomarkers and therapeutic targets. In-depth investigations on the function and mechanism of circRNAs in HCC will enrich our knowledge of HCC pathogenesis and contribute to the development of effective diagnostic biomarkers and therapeutic targets for HCC. Hepatocellular carcinoma (HCC), the most common malignancy of the liver, ranks as the third leading cause of cancer-related death worldwide and has a significantly low survival rate. Non-coding RNAs (ncRNAs) are functional RNAs that are generally not translated into proteins. Notably, circular RNAs (circRNAs) are a newly discovered type of ncRNA and ubiquitously exist in many species. CircRNAs are produced through back-splicing events from exons of protein-coding genes, introns, intergenic regions, antisense, or untranslated regions. Additionally, RNA-binding proteins (RBPs) act as trans-factors involved in circRNA biogenesis. For instance, the alternative splicing factor Quaking (QKI) can bring the 5′ splice site closer to the upstream 3′ splice site by binding to flanking introns, thus promoting ecircRNA formation. Most introns are spliced out from pre-mRNAs and form a lasso structure that is degraded following debranching. CircRNAs have become a new star in the field of ncRNA research. The biological function of circRNAs has been extensively investigated and can be grouped into five parts: serving as miRNA sponges to suppress their function, working as transcriptional and translational regulators, influencing alternative splicing of pre-mRNAs, interacting with RBPs to regulate gene expression, and having the potential to encode proteins. miRNAs function as gene expression regulators by directly binding mRNAs. For example, cir-ITCH regulates the expression of ITCH by sponging miR-7, miR-17, and miR-214.
An increasing number of studies demonstrate that the miRNA sponge function of circRNAs is conserved among various species; thus, the action of circRNAs as miRNA sponges is a common phenomenon. In some cases, the interaction between circRNAs and miRNAs may not always cause the inhibition of miRNAs. For instance, CDR1as was reported to bind miR-671 and miR-7. The function of circRNAs in regulating gene expression in cis has been shown in several studies. EIciRNAs retain the intronic sequences from their parental gene and thus are able to interact with the transcription machinery. For example, circEIF3J and circPAIP2 can interact with U1 small nuclear ribonucleoprotein (snRNP) and RNA Pol II in the promoter region of the host gene, thus enhancing the expression of their parental genes, eukaryotic translation initiation factor 3J (EIF3J) and poly(A)-binding protein-interacting protein 2 (PAIP2). Ci-ANKRD52, ci-MCM5 and ci-SIRT7 were able to interact with the elongating Pol II complex at the loci of their parental genes, ankyrin repeat domain 52 (ANKRD52), minichromosome maintenance complex component 5 (MCM5), and sirtuin 7 (SIRT7). These results demonstrated that ciRNAs could promote transcription of their parental genes by regulating elongating Pol II activity. In addition, circRNAs are found to work as modulators in the translational process of mRNA. The mouse formin (Fmn) gene can generate circRNAs via backsplicing. CircRNAs are mainly generated from exons of protein-coding genes. CircMbl is generated from the muscleblind (MBL) locus, and its flanking introns harbor binding sites for the MBL protein. CircRNAs can also bind to RBPs and function as RBP sponges; circRNAs might sequester and transport RBPs to particular subcellular locations. For example, a circRNA generated from nuclear poly(A)-binding protein 1 (PABPN1) pre-mRNA can sequester HuR away from PABPN1 mRNA, thereby inhibiting the translation of PABPN1 mRNA.
The majority of circRNAs are composed of exons; thus, they may be translated into proteins. Recent studies confirm that circRNAs can encode proteins: when an internal ribosome entry site (IRES) was introduced into a circRNA sequence, the circRNA could be translated in vivo and in vitro, indicating that circRNAs might have the potential to encode proteins. The biological function of circRNAs has been unveiled in recent years. As mentioned above, circRNAs can serve as molecular sponges to sequester miRNAs or proteins, and may influence their cellular abundance and localization. CircRNAs can regulate gene expression by sponging miRNAs/RBPs or competing with pre-mRNA splicing. Some circRNAs can even encode proteins. These studies suggest that circRNAs play an important role in various biological processes. However, the function of circRNAs remains largely unknown, and only a handful of studies have elaborately disclosed the biological functions of circRNAs. More efforts should be put into exploring the function of circRNAs. Increasing evidence indicates that a great number of circRNAs are aberrantly expressed in HCC tissues, suggesting that these circRNAs may perform a function in the carcinogenesis and development of HCC. CircRNAs are expressed in a tissue-specific manner, implying that circRNAs may be involved in the development of various diseases. Among the targets of miR-7 are cyclin E1 (CCNE1) and phosphoinositide 3-kinase catalytic subunit δ (PIK3CD). Mechanistically, CDR1as promoted the proliferation and invasion of HCC cells by sponging miR-7 and then interfering with the PIK3CD/phospho-p70 S6 kinase (p70S6K)/mammalian target of rapamycin (mTOR) signaling pathway. These findings demonstrated that CDR1as functions to regulate HCC progression. CDR1as-regulated proteins were also identified in HCC cells by applying a quantitative proteomics-based strategy.
The proTumor cells initially undergo epithelial-mesenchymal transition (EMT), as characterized by loss of E-cadherin and gain of vimentin, to become metastatic and invasive ,99. The Aquaporin 3 (AQP3) plays a vital role in carcinogenesis and cancer progression . AQP3 waRecently, hsa_circ_0067934 was reported to enhance the proliferation, migration, and invasion of HCC cells . In termCircRBM23 was found to be abundantly expressed in HCC tissues . CircRBMNotch1. More importantly, hsa_circ_0005986 downregulation promoted the proliferation of HCC cells by driving cell cycle transition. Additionally, the low expression level of hsa_circ_0005986 was correlated with the clinicopathological characteristics of HCC patients including tumor size, MVI, and Barcelona Clinic Liver Cancer (BCLC) stage. Consequently, hsa_circ_0005986 may not only perform an inhibitory function in HCC tumorigenesis but may be a promising biomarker for HCC diagnosis as well.The expression of circRNA SMAD2 (circSMAD2) was lower in HCC tissues compared with adjacent normal tissues . CircSMASMARCA5 gene and named it cSMARCA5 (hsa_circ_0001445). They explored the functions of cSMARCA5 in HCC progression. The result indicated that the expression of cSMARCA5 was reduced in HCC tissues. The downregulation of cSMARCA5 in HCC was associated with aggressive clinicopathological characteristics and might work as a risk factor for overall survival and recurrence-free survival in HCC patients after surgical resection. cSMARCA5 overexpression was shown to inhibit the proliferation and migration of HCC cells. In terms of mechanism, cSMARCA5 enhanced the expression of tissue inhibitor of metalloproteinase 3 (TIMP3), a well-known tumor suppressor, by sponging miR-17-3p and miR-181b-5p. These results demonstrated the implication of cSMARCA5 in the growth and metastasis of HCC and provided a new perspective on the role of circRNAs in HCC development. 
Another study also reported the expression of hsa_circ_0001445 in HCC and matched pericancerous tissues. Recently, the expression profile of circRNAs in HCC tissues has been reported. CircARSP91, derived from the polyadenylate-binding protein 1 gene (PABPC1), was significantly downregulated by the androgen receptor (AR) in an ADAR1-dependent manner. Of note, ectopic expression of circARSP91 could suppress HCC proliferation, tumor growth and invasion, highlighting the role of the AR/ADAR1/circARSP91 axis in controlling HCC progression. However, the mechanism of how circARSP91 regulates HCC development needs to be further deciphered. ADAR1 has been reported to be an important regulator of circRNA biogenesis. The zinc finger family gene ZKSCAN1 can be transcribed into both linear and circular (circZKSCAN1) forms of RNA in HCC. The expression of both ZKSCAN1 mRNA and circZKSCAN1 was markedly lower in HCC tissues compared with matched adjacent non-tumorous tissues. Inhibition of ZKSCAN1 mRNA or circZKSCAN1 promoted cell proliferation, migration and invasion of HCC cells. In contrast, overexpression of ZKSCAN1 mRNA or circZKSCAN1 had a suppressive effect on the migration and invasion of HCC cells. Inhibition or overexpression of one form of RNA did not interfere with the other. Surprisingly, a high-throughput RNA sequencing approach together with bioinformatic analysis revealed a different molecular basis for the observed effects: ZKSCAN1 mRNA played a regulatory role in cellular metabolism, such as retinol metabolism and phenylalanine metabolism, whereas circZKSCAN1 mainly mediated cancer-related pathways in HCC, including the PI3K pathway, migration pathway and adhesion pathway. These results suggested non-redundant roles for ZKSCAN1 circRNA and mRNA. Thus, ZKSCAN1 mRNA and circZKSCAN1 tightly cooperate with each other to exert an anti-tumor effect on HCC. CircZKSCAN1 may serve as a diagnostic biomarker of HCC.
The expA number of studies have demonstrated the vital role of miRNAs in the occurrence and development of HCC ,110,111.Chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infections are most important risk factors for HCC . ChronicEmerging evidence indicated that chronic HBV/HCV infection changed miRNA expression profiles, and the dysregulated miRNAs performed significant functions in viral replication and the occurrence of virus-related HCC ,131,132.As circRNAs are evolutionally conserved, abundant, and stable in the cytoplasm, they may hold great potentials for cancer diagnosis ,32. The Hsa_circ_0001649 expression was shown to be significantly lower in HCC tissues compared with paired adjacent non-tumorous tissues, and its expression levels associated with tumor size and the occurrence of tumor embolus in HCC . Hsa_cirRecently, a global circRNA expression was unraveled using a circRNA microarray in HCC patients . Hsa_cirYao et al. found thIt is believed that HCC can be cured if diagnosed at an early stage. The early diagnosis of HCC in patients is of paramount importance. The field of early HCC diagnosis has long been a research focus of scientists. As currently used biomarkers and tools usually fail to diagnose HCC at an early stage, early diagnosis of HCC remains a major clinical challenge. Therefore, it is urgent to identify efficient biomarkers for HCC diagnosis. Over the past decades, an increasing awareness of circRNAs has made researchers pay attention to the potential role of circRNAs in clinical diagnosis of HCC. CircRNAs are insensitive to ribonucleases due to their stable circular structures. The expression of circRNAs is shown to be associated with the traditional biomarkers including AFP. Previously, the diagnostic potential of hsa_circ_0001445 for HCC detection was evaluated . The plaCircRNAs also exhibit a high degree of HCC tissue-specificity. 
Moreover, some circRNAs may be correlated with tumor size, TNM stage, and metastasis in HCC patients, reflecting the stage characteristics of HCC tumorigenesis. Therefore, circRNAs may be efficient novel biomarkers for HCC diagnosis and hold great potential for use in clinical practice. Nevertheless, before circRNAs can be used as effective biomarkers for early HCC diagnosis, several important issues must be addressed. Although several circRNAs display good diagnostic performance in differentiating HCC tissues from non-cancerous tissues, the efficiency and reliability of using circRNAs for HCC diagnosis in clinical practice need to be proven. Currently, the detection of circRNAs in HCC mainly focuses on tissue samples from patients. More easily acquired, non-invasive clinical samples should be examined for circRNA expression in future studies, as circRNAs can be detected in body fluids such as human cell-free plasma and saliva. HCC is one of the most predominant liver malignancies and has long posed a major health problem. Although new advanced therapeutic approaches have been introduced in the last few years, the survival rate of HCC is still low. Accumulating evidence indicates that altered circRNA expression can affect the tumorigenesis and progression of HCC, and these circRNAs exhibit great potential in HCC diagnosis, therapy, and prognosis. Research aimed at revealing the mechanisms underlying the effect of circRNAs on HCC carcinogenesis has indicated that circRNAs function as miRNA sponges to regulate the expression of genes/proteins involved in cell cycle, proliferation, invasion, and metastasis. Based on previous studies, circRNAs are undoubtedly implicated in the onset and development of HCC. CircRNAs possess multiple advantages, including high abundance and stability, suggesting that they are ideal diagnostic biomarkers and promising therapeutic targets for HCC. 
However, compared with other ncRNAs such as miRNAs and lncRNAs, the study of circRNAs in HCC is still in its infancy. So far, only a small number of functional circRNAs have been discovered and characterized in HCC, and these generally regulate HCC progression through their miRNA sponge function. It is possible that circRNAs participate in HCC tumorigenesis through different mechanisms, such as competing with linear splicing of pre-mRNAs transcribed from tumor suppressor genes or encoding proteins that function as tumor promoters. Therefore, the features of circRNAs, including their biogenesis, degradation, localization, and functions, remain to be elucidated. In addition, more detailed studies are required to comprehensively disclose the molecular mechanisms underlying the functional role of circRNAs in HCC pathogenesis. In-depth investigations of the function and mechanism of circRNAs in HCC would enrich our knowledge of the complex regulatory networks involved in hepatocarcinogenesis. Moreover, further understanding of the relationship between circRNAs and HCC aetiology will accelerate the clinical application of circRNAs in HCC diagnosis and therapy."} +{"text": "Transforming growth factor‐beta (TGF‐β) has been implicated in many biological processes such as survival, immune surveillance, and cell proliferation. In HCC, TGF‐β promotes disease progression by two mechanisms: an intrinsic signaling pathway and an extrinsic pathway. Through these pathways, it modulates various microenvironment factors such as inflammatory mediators and fibroblasts. An interesting yet‐to‐be‐resolved question is whether the HCC‐promoting role of TGF‐β pathways is limited to a subset of HCC patients or whether it is involved in the whole process of HCC development. 
This review summarizes recent advancements to highlight the roles of circRNAs, lncRNAs, and TGF‐β in HCC. At the heart of hepatocellular carcinoma (HCC) lies disruption of signaling pathways at the level of molecules, genes, and cells. Non‐coding RNAs (ncRNAs) have been implicated in the disease progression of HCC. For instance, dysregulated expression of circular RNAs (circRNAs) has been observed in patients with HCC. As such, these RNAs are potential therapeutic targets and diagnostic markers for HCC. Long non‐coding RNAs (lncRNAs), a type of ncRNA, have also been recognized to participate in the initiation and progression of HCC. Accumulating evidence implicates circRNAs, transforming growth factor‐beta (TGF‐β), and lncRNAs in the mechanisms of hepatocellular carcinoma, and these factors carry huge potential as therapeutic targets for the treatment of hepatocellular cancer. We focus on their influence on the various processes involved in HCC development; their therapeutic and diagnostic potential for HCC is also explored. The ideas synthesized in this review and the molecular mechanisms explored will boost our understanding of the tumorigenesis and progression of HCC, which can be exploited in drug design and in the identification of diagnostic biomarkers for HCC. CircRNAs are synthesized from introns, intergenic regions, exons of protein‐coding genes, antisense strands, or untranslated regions by a back‐splicing process. When introns located in the lariat are not removed by splicing, they remain in the encircled exons, forming EIciRNAs. 
In the intron pairing‐driven circularization model, base‐pairing between reverse complementary sequences across exon‐flanking introns forms a circular structure. After intron pairing, back‐splicing of the pre‐mRNA and exon circularization occur. Other factors that facilitate the formation of circRNAs are RNA‐binding proteins (RBPs), which function as trans‐factors. For example, Quaking (QKI), an alternative splicing factor, connects the upstream 3ʹ splice site to the 5ʹ splice site by binding to flanking introns, thereby enhancing the formation of ecircRNA. Several studies have implicated circRNAs in the development of HCC; a summary of studies on the aberrant expression of different types of circRNAs in HCC tissues is shown in the Tables. CDR1as acts as a sponge of miR‐7. Overexpression of miR‐7 inhibited the invasion and proliferation of HCC cells, in addition to decreasing the transcription of genes such as PIK3CD and cyclin E1 (CCNE1). Moreover, CDR1as enhances the proliferative capacity and invasiveness of HCC cells by sponging miR‐7, which inhibits signaling through the PIK3CD/phospho‐p70 S6 kinase (p70S6K)/mTOR pathway. These findings show that CDR1as regulates the development of HCC. Furthermore, quantitative proteomics‐based approaches have revealed CDR1as‐regulated proteins in HCC cells. Hsa_circ_0067934 has been reported to act through the β‐catenin signaling pathway; taken together, these findings show that hsa_circ_0067934 promotes HCC progression and can be exploited for HCC treatment. Another investigation screened circRNA expression profiles in paired normal liver tissues and HCC tissues. Aquaporin 3 (AQP3) is an important factor in tumorigenesis and cancer progression. Furthermore, loss‐of‐function experiments demonstrated that silencing hsa_circ_0016788 enhanced cell apoptosis and decreased the invasion and proliferation of HCC cells. 
In vivo experiments revealed that hsa_circ_0016788 knockdown suppressed the spread of HCC. In another study, samples from four patients were examined for the expression profiles of pericancerous and HCC circRNAs. In HCC tissues, circRNA SMAD2 (circSMAD2) expression was found to be decreased compared to adjacent normal tissues. Application of RNA‐sequencing technology to compare circRNA transcription levels between normal tissues and paired HCC tissues helped to identify a circRNA named cSMARCA5 (hsa_circ_0001445). Pathway analysis highlighted terms including the "TGF‐β signaling pathway," "Inflammatory mediator regulation of transient receptor potential (TRP) channels," "T cell receptor signaling pathway," and "Hepatitis B." miR‐122 is highly expressed in the liver and binds the 5ʹ UTR of the viral genome, which enhances HCV replication. Chronic hepatitis C virus and HBV infections are the key risk factors for HCC. Several clinical studies have demonstrated that inhibitors of miR‐122 play an important role in reducing viral load in HCV patients with chronic infection, and other researchers have designed circRNA sponges to absorb miR‐122. However, decreasing the ability of miR‐122 to suppress tumor activity using artificial circRNAs is a possibility that is worth considering. Aside from inhibiting HCV replication, circRNA sponges may also induce hepatocarcinogenesis by decreasing miR‐122 activity. Since miR‐122 plays crucial functions, such as those involved in the maintenance of the hepatic phenotype, its lack would have detrimental effects on patients. To enable the clinical application of miR‐122‐suppressive therapy to treat HCV infection, the consequences of inhibiting miR‐122 for HCC disease progression should be determined. 
Additionally, the cancer state and hepatic physiology should be monitored routinely during the clinical administration of anti‐miR‐122 therapy to HCV‐infected patients. Further efforts are required to design circRNA sponges targeting miRNAs involved in pathogenesis and hepatitis virus replication. Considering that circRNAs are structurally stable and are expressed in crucial cellular localizations, they may be exploited in the field of molecular medicine. Increasing evidence points to the possibility that miRNA expression profiles are altered during chronic HBV/HCV infection and that the dysregulated miRNAs modulate the incidence of virus‐related HCC. TGF‐β signaling exhibits two types of responses, i.e., early and late. In HCC, TGF‐β activation occurs in a manner equivalent to that observed in colorectal cancer. Another way TGF‐β plays an important role in HCC is through regulation of the Wnt signaling pathway. Besides, TGF‐β signaling seems to regulate the growth of tumor cells in some subtypes of HCC, whereas in other subtypes it is associated with poor prognosis, low α‐fetoprotein (AFP) expression, and larger tumors. A study by Coulouarn et al on human tissue and mouse models examined TGF‐β signaling in HCC. Understanding in which patients these TGF‐β signaling phenomena occur is crucial for determining which HCC patients are likely to respond to TGF‐β signaling inhibition. Due to the differences among transcriptome‐based studies, there are ongoing reviews aimed at obtaining a common classification and at expanding our understanding of the pathophysiologic pathways involved in hepatocellular carcinoma. The outcome of such a review of the transcriptome‐based studies may reveal that TGF‐β signaling is associated with EpCAM and AFP expression. It is worth noting that transcriptome‐based assessments are based on surgical specimens following local resections and not on advanced HCC. 
Therefore, the interpretation of findings obtained from transcriptome‐based assessments of liver resections should take this into consideration when TGF‐β signaling is inhibited pharmacologically in patients with advanced HCC. Understanding the specific HCC subgroups in which this unique TGF‐β signaling occurs is important, as it is partly due to tumor cell growth within an ECM‐enriched environment. The presence of tumor in the ECM is associated with connective tissue growth factor and TGF‐β secretion, thereby affecting the cancer‐related fibroblasts. TGF‐β signaling activates fibroblasts and is associated with T regulatory cells, for example, by activating chemokines (such as CCL22) or via the immune presentation of AFP. When cells are exposed to TGF‐β, they are transformed into tumor‐initiating cells, as evidenced by TGF‐β‐induced changes in CD133 and CD90 expression; these effects are correlated with poor prognosis. The extrinsic effect of TGF‐β signaling causes EMT and renders tumors more invasive. Noncoding RNAs (ncRNAs) are a versatile group of RNA transcripts without protein‐coding potential. The roles of lncRNAs in tumors fall into oncogenic and tumor‐suppressive categories. HOTAIR (HOX transcript antisense intergenic RNA) is a lncRNA derived from the HOXC antisense strand and has been linked to the β‐catenin pathway. MALAT1 (metastasis‐associated lung adenocarcinoma transcript 1) was first found to be expressed in human non‐small‐cell lung cancer, and its expression in HCC specimens is increased. MVIH (microvascular invasion in HCC) is found on chromosome 10 and has been recognized to play a role in HCC, as reported in a study by Yuan et al. Prior studies have supplied compelling evidence that some ncRNAs, such as LET, Dreh, MEG3, and H19, are important players in tumor suppression. Accordingly, their expression level in HCC is decreased. 
The studies designed to investigate the effects of downregulated lncRNAs in HCC are summarized in the Table. Maternally expressed 3 (MEG3) encodes a lncRNA with a maternal inheritance pattern. Using a lncRNA microarray assay on wild‐type and HBx‐transgenic mouse models, Dreh was found to be downregulated by HBx. The tumor‐suppressive effect of Dreh relied on vimentin downregulation, the consequence of which was suppressed HCC cell migration and growth. So far, lncRNAs have been shown to act as either oncogenes or tumor suppressors in the initiation of hepatocarcinogenesis. Intriguingly, aberrant lncRNA expression correlates with various aspects of cancer such as tumor‐node‐metastasis (TNM) stage, RFS, disease‐free survival (DFS), OS, metastasis, and proliferation. Multivariate analysis of various factors associated with HCC found that lncRNAs can independently predict outcomes and recurrence of HCC. Given the recent advancements in cancer diagnostic tools such as RNA immunoprecipitation, microarrays, qRT‐PCR, and sequencing technology, it is now possible to detect lncRNAs in different types of body fluids, which is likely to boost their application as prognostic markers of HCC. For instance, it was demonstrated that HULC is markedly increased in tumor tissues and serum of HCC patients; hence, it holds huge promise for the diagnosis of HCC. Given that several lncRNAs and their associated signaling molecules are dysregulated in HCC, strategies that restore their normal cellular levels are likely to provide newer cancer treatments that are less susceptible to chemoresistance. Indeed, various drug companies have directed substantial resources toward exploiting the potential of lncRNAs as drug targets. Inhibitors of TGF‐β signaling have shown high efficacy in preventing HCC progression via their modulatory roles in the EMT process. 
In fact, an inhibitor of TGF‐β, LY2157299, has been clinically investigated in HCC and found to improve outcomes. When aberrantly expressed, lncRNAs render cells more prone to tumorigenesis, metastasis, and growth disorders, and contribute to defective immunosurveillance, leading to HCC emergence. This review also shows that circRNAs exhibit HCC tissue‐specificity. The functions and levels of circRNAs may be correlated with metastasis, TNM stage, and tumor size in patients with HCC, and hence may serve as indicators of the stage phenotypes of HCC progression. Thus, circRNAs can be exploited to improve clinical HCC diagnosis, as they are effective in distinguishing cancerous from normal tissues. However, for this to be achieved, large‐scale clinical trials should be carried out to evaluate their clinical utility. Much of the circRNA measurement in HCC has been performed using tissues from patients; further studies should develop isolation protocols based on non‐invasive clinical samples such as urine, saliva, and blood. The pathogenesis of hepatocellular carcinoma has multiple causes, and several non‐coding RNAs are deregulated at various stages of HCC. CircRNAs and lncRNAs exhibit diverse associations with proteins, RNAs, and DNA, thereby playing crucial roles in the post‐transcriptional, transcriptional, and chromatin‐organization regulation of HCC cells. CircRNAs and lncRNAs have shown high potential for use as diagnostic markers of HCC. Several therapies, such as inhibitors of TGF‐β signaling, are under clinical investigation."} +{"text": "Over the past decade, a changing spectrum of disease has turned chronic non-communicable diseases (CNCDs) into the leading cause of death worldwide. In 2015, there were more than 6.6 million deaths from NCDs in China, the highest number in the world. 
In the present study, we performed a systematic review to analyze health-related quality of life (HRQoL) according to the EQ-5D-3L instrument in patients with different kinds of CNCDs in China. We searched the PubMed, Embase, Web of Science, Cochrane Library, VIP, WanFang Data, and CNKI databases up to April 12, 2018, to identify all relevant studies that reported on HRQoL assessed by the EQ-5D-3L instrument in Chinese patients with CNCDs. Expert consultation and hand-searching of reference lists from retrieved studies were employed to identify additional references. The variation of mean utility values, EQ-VAS score ranges, and responses for each EQ-5D dimension described in relevant studies were extracted. A total of 5027 English-language articles and 618 Chinese-language articles were identified, among which 38 articles met the full inclusion criteria. These 38 studies involved 18 kinds of CNCDs. In this review, the health utility for diabetes mellitus ranged from 0.79 to 0.94 (EQ-5D VAS scores from 61.5 to 78.6), hypertension from 0.78 to 0.93 (70.1–77.4), coronary heart disease from 0.75 to 0.90 (71.0–77.0), chronic obstructive pulmonary disease from 0.64 to 0.80 (55.0–67.0), epilepsy from 0.83 to 0.87 (78.3–79.6), and cerebral infarction from 0.51 to 0.75 (49.7–79.0), while that for children with cerebral palsy was 0.44 (27.3). EQ-5D-3L is widely used in studies of HRQoL associated with CNCDs in China. Our results suggest that many factors may influence the measurement of health utilities, including age, gender, sample source, comorbidities, rural/urban residence, and the EQ-5D-3L value set used. There are more than 1.3 billion people in China, almost 1/6 of the world's population, who contribute largely to the global patient community. In 2015, there were more than 6.6 million deaths from non-communicable diseases (NCDs) in China, the highest number in the world. 
Following the shift from the biomedical model to the bio-psycho-social medical model, people have paid increasing attention to patients' subjective experience of health. Patients' health-related preferences play an important role in exploring disease progression and survival, while health utility can be used to represent an individual's preference for a particular health state; it is widely used in health-related research and cost-utility analysis. EQ-5D-3L comprises five dimensions: "Mobility," "Self-Care," "Usual Activities," "Pain/Discomfort," and "Anxiety/Depression." Each dimension has three levels: "have no problems/be not," "have some/moderate problems," and "have extreme problems/unable to." Therefore, the 3L questionnaire can be used to define 243 different health states. Due to the rising burden of disease, it is necessary to pay more attention to HRQoL. We performed a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Following the inclusion criteria, eligible studies were cross-sectional studies in Chinese populations with CNCDs conducted in China, that reported EQ-5D-3L scores for a specific CNCD with or without comorbidity by applying a value set, and that were available in full text. In this review, CNCDs are defined as "diseases or conditions that occur in, or are known to affect, individuals over an extensive period of time and for which there are no known causative agents that are transmitted from one affected individual to another." Preliminary literature screening was performed by two authors independently based on titles and abstracts. After title/abstract review, full-text articles were reviewed by two investigators to evaluate eligibility for inclusion and to check the bibliography. 
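As an aside for readers implementing EQ-5D-3L processing: the 243 figure follows directly from five dimensions with three levels each (3^5 = 243). A minimal Python sketch of the enumeration (variable names are illustrative, not from the review):

```python
from itertools import product

# EQ-5D-3L: five dimensions, each rated at one of three levels (1-3).
DIMENSIONS = ["Mobility", "Self-Care", "Usual Activities",
              "Pain/Discomfort", "Anxiety/Depression"]
LEVELS = (1, 2, 3)

# Every health state is a 5-digit profile such as "11111" (no problems
# on any dimension) or "33333" (worst level on every dimension).
states = ["".join(map(str, combo))
          for combo in product(LEVELS, repeat=len(DIMENSIONS))]

print(len(states))            # 3**5 = 243 distinct health states
print(states[0], states[-1])  # '11111' and '33333'
```

A country-specific value set then maps each of these 243 profiles to a single utility index, which is why the choice of value set changes the reported utilities.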
Two researchers independently conducted data extraction from all included articles using a pre-formulated sheet. Publication details, data sources, sample size (gender), type of disease, mean age, comorbidities, EQ-5D health utilities, EQ-VAS scores, five-dimension results, full-health ratio, and value-set information were extracted. Disagreements were resolved by further discussion between reviewers. To extract more information, all results were pooled into a customized sheet when different articles reported HRQoL from the same dataset. We appraised the methodological quality of each study using an 11-item cross-sectional study assessment checklist introduced by the Agency for Healthcare Research and Quality (AHRQ). The variations of mean utility values described in all studies were reported. In addition, descriptive analysis of EQ-VAS score ranges and responses for each EQ-5D dimension was undertaken. We conducted all calculations using Microsoft Excel 2013. A total of 5027 English-language articles and 618 Chinese-language articles were identified via seven databases, while six additional studies were included after expert consultation and manual review. After checking for duplicates, we screened 3227 papers for eligibility. Among these, 38 articles met the inclusion criteria (Fig.). We extracted HRQoL data on 18 kinds of CNCDs based on EQ-5D-3L from the included studies (Table). In this review, ten studies reported health utilities for diabetes mellitus. For the patients with hypertension, the utility values ranged from 0.78 to 0.93 in six studies. For the patients with CHD, the utility values ranged from 0.75 to 0.90 in five studies. The health utility values for COPD patients ranged from 0.64 to 0.80 in four studies, with higher utilities in patients with a higher FEV1 percent of predicted than in those with the lowest. Two studies reported that the EQ-5D VAS scores of COPD patients were 55.3 and 66.6. The health utility values for epilepsy patients ranged from 0.83 to 0.87 in two studies. In terms of health utility for patients with CI, two studies reported the HRQoL. For the patients with stroke, the health utility ranged from 0.51 to 0.90 in two studies evaluated with the Japanese value set. The health utility values for patients with CLD differed with disease severity: the values ranged from 0.70 to 0.80 for compensated patients and were lower for decompensated patients. For the remaining ten diseases, results were reported separately; among them, the utility for children with cerebral palsy was 0.44. The present review focused on HRQoL in chronic non-communicable diseases in the Chinese population. Over recent years, the EQ-5D-3L questionnaire has been increasingly applied in different patient groups in China to measure health utility values. Among the 18 different types of diseases, DM, CHD, COPD, and hypertension are the most common CNCDs in China. Due to the high morbidity and mortality rates from these CNCDs, people have become more concerned than ever about patients' state of survival and HRQoL. Patient-reported outcomes are important to health decision-makers. As a generic instrument, EQ-5D can be easily used by patients to report their HRQoL. However, there are variations in health utility values for a specific CNCD among different studies. Given that the level of heterogeneity is high regarding patient characteristics and study design, meta-analysis is not an appropriate method to calculate a single index across studies. The utility values of DM (0.79–0.94), CHD (0.75–0.90), COPD (0.64–0.80), hypertension (0.78–0.93), epilepsy (0.83–0.87), CI (0.51–0.75), stroke (0.51–0.90), and CLD (0.00–0.80) reflect HRQoL in patients with CNCDs and with different conditions in a QALY framework. The results can be changed by a series of factors, including age, gender, sample source, comorbidities, rural/urban residence, and value set. 
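To make the QALY framework mentioned above concrete: a quality-adjusted life year weights each period of time by the utility of the health state lived in it, so QALYs = sum of (utility × duration). A short Python sketch; the patient profile below is hypothetical and only loosely inspired by the utility ranges in this review, not taken from any included study:

```python
# QALYs accrued over a sequence of health states:
# sum of (utility x duration in years). Utility 1.0 = full health, 0 = dead.
def qalys(states):
    """states: list of (utility, years) tuples."""
    return sum(u * t for u, t in states)

# Hypothetical patient: 3 years with controlled hypertension (utility 0.90),
# then 2 years after a cerebral infarction (utility 0.60). Values illustrative.
profile = [(0.90, 3.0), (0.60, 2.0)]
print(qalys(profile))  # 0.90*3 + 0.60*2, i.e. about 3.9 QALYs over 5 years
```

This is why the choice of value set matters for economic evaluation: the same five-dimension responses can yield different utilities, and hence different QALY totals, under different national tariffs.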
In general, health status deteriorates as people get older; thus, the utility value decreases with increasing age. According to a previous study, the value in patients with T2DM aged 60 and over (0.83) was lower than in patients with T2DM who were younger than 60 (0.86). Several of the studies included in the present review reported gender-specific health utility values. Comorbidity plays an important role in the variation of health utility values; in addition to the number of comorbidities, different types of comorbidities can affect health utility values as well. Hypertension, DM, CHD, hyperlipidemia, and stroke are the most common comorbid conditions. China is a country with a dual economic structure between rural and urban areas, which can also affect measured utilities. The application of value sets from various countries to the same disease population leads to different health utility values; in the same sample of patients with CHD, the values differed depending on the value set applied. EQ-5D-3L may lead to ceiling effects when measuring HRQoL, and small health decrements may not be detected in disease populations; the five-level version (EQ-5D-5L) was developed to address this. Compared with patients in other countries, the health utility of European people with T2DM was 0.69, and it was 0.65 in New Zealand and Australia. The main limitation of this review is the number of studies reporting on each CNCD. Even though 18 different kinds of diseases were included, more than half of the CNCDs were reported separately. Due to the lack of sufficient information on health utility for some of the CNCDs discussed above, it is difficult to draw accurate conclusions about HRQoL in the various Chinese populations with CNCDs. The comparison and analysis of HRQoL across different populations with CNCDs is of utmost importance. 
Utility value is a single index that reflects synthetic information about people's health and can provide useful evidence for decision-makers when optimizing the allocation of health resources."} +{"text": "The aim of the present study was to evaluate the association between worsening of CHF and mortality in AHF patients. Out of 152 included AHF patients, 47 (30.9%) were de novo AHF patients and 105 (69%) were AHF patients with worsening of CHF. The proportion dying in hospital and within 3 months after hospitalization was significantly higher in AHF patients with worsening of CHF. Logistic regression analyses also showed a significant positive association of AHF emerging as worsening of CHF with hospital mortality and 3-month mortality. While the association with hospital mortality was no longer significant after adjusting for comorbidities and clinical as well as laboratory parameters known to be associated with mortality in heart failure patients, the association with 3-month mortality remained significant. We conclude that compared to de novo AHF, AHF evolved from worsening of CHF is a more severe condition and is associated with increased mortality. Acute heart failure (AHF) emerges either de novo or as worsening of chronic heart failure (CHF). Heart failure (HF) is a frequent cause of death and disability worldwide [4]. According to the European Society of Cardiology (ESC), HF denotes an abnormality of the cardiac structure and function, resulting in failure of the heart to deliver oxygen at a rate commensurate with the requirements of the metabolizing tissues [3]. Acute heart failure (AHF) denotes the rapid onset, or a massive exaggeration, of existing symptoms and signs of HF [3]. Accordingly, there are patients with de novo AHF and those with AHF as a consequence of the worsening of Chronic Heart Failure (CHF). 
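The unadjusted association that such a logistic regression reports for a binary predictor is equivalent to an odds ratio computed from a 2×2 table of group versus outcome. A minimal Python sketch; the death counts below are hypothetical placeholders (the abstract gives only the group sizes of 105 and 47, not the numbers of deaths):

```python
# Odds ratio for in-hospital death: worsening-CHF group vs de novo AHF group.
# Death counts are hypothetical, NOT the study's data; only the group sizes
# (105 worsening CHF, 47 de novo) come from the abstract.
def odds_ratio(d_exp, n_exp, d_ctl, n_ctl):
    """d_*: deaths, n_*: group sizes; returns exposed odds / control odds."""
    odds_exp = d_exp / (n_exp - d_exp)
    odds_ctl = d_ctl / (n_ctl - d_ctl)
    return odds_exp / odds_ctl

# e.g. 21/105 deaths with worsening CHF vs 4/47 deaths with de novo AHF
print(odds_ratio(21, 105, 4, 47))  # (21/84)/(4/43) ≈ 2.69
```

Adjusting for comorbidities and laboratory parameters, as the authors did, means fitting the regression with those covariates added, so the coefficient for worsening CHF then reflects the association net of those factors.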
Insufficient tissue perfusion due to impaired performance of the failing heart, as well as tissue congestion due to venous volume overload, a consequence of right-sided HF, cause impaired intestinal nutrient absorption, diminished biosynthetic capacity of the liver [5], and worsened renal function in HF [7]. Consequently, the decreased serum levels of cholesterol, increased concentrations of urea and creatinine, as well as wasting and chronic inflammation, reflect the poor overall status associated with an increased mortality rate in HF patients [8]. HF is a complex syndrome with a versatile underlying pathophysiology, highlighted by disturbed hemodynamics and a deranged metabolism. Considering the persistent hemodynamic and metabolic burden imposed on CHF patients, we hypothesized that the clinical and laboratory characteristics are worse and the mortality rate higher in AHF patients with worsening of CHF compared to de novo AHF patients. Therefore, we compared the clinical and laboratory characteristics of patients with worsening of CHF with those of de novo AHF patients and evaluated the association between worsening of CHF and mortality in AHF patients. The baseline characteristics of the two groups are shown in Table 1. As seen in Table 1, compared to de novo AHF patients, patients with worsening of CHF had a significantly lower mean arterial pressure (MAP) and heart rate. The incidences of atrial fibrillation (AF), chronic kidney disease (CKD), and cardiomyopathy (CM) were significantly higher, and the incidence of acute coronary syndrome (ACS) was significantly lower, in AHF patients with worsening of CHF compared to de novo AHF patients (Table 1). Several laboratory parameters were likewise significantly lower in patients with worsening of CHF compared to de novo AHF patients (Table 1). 
Compared to de novo AHF patients, patients with worsening of CHF also had significantly higher concentrations of IL-6. Interestingly, despite a higher incidence of JVD, enlarged liver, and peripheral edema, the body weight of AHF patients with worsening of CHF was significantly lower than that of de novo AHF patients. It is generally accepted that the augmented translocation of bacterial endotoxins from the edematous intestine into the circulation [13], as well as insufficient binding and neutralization of endotoxins by circulating lipoproteins, whose levels are decreased in HF [15], underlie the persistent inflammatory response in HF [16]. Additionally, the pro-inflammatory activation of venous endothelial cells by circumferential stretch due to congestion contributes to the chronic inflammation in HF [17]. Accordingly, the lower serum levels of LDL and HDL cholesterol, together with the more severe congestion, in patients with worsening of CHF compared to de novo AHF patients may well explain the more pronounced inflammatory response, reflected by higher IL-6 levels. The augmented inflammatory response, which is known to promote the formation and clearance of leukocyte–platelet complexes [19], is most likely responsible for the lower leukocyte and platelet levels in our AHF patients with worsening of CHF. It is well established that decreased cholesterol and lipoprotein serum levels are strongly and independently associated with increased mortality in HF [14], as is congestion [20], which indeed was more severe in AHF patients with worsening of CHF than in de novo AHF patients. In contrast to total, LDL, and HDL cholesterol, which were significantly lower in patients with worsening of CHF, triglyceride levels were similar in both studied groups. Serum levels of lipids and lipoproteins are determined by their intestinal absorption, hepatic synthesis, and removal from the circulation by various catabolic routes. 
Accordingly, modulation of these physiological processes by the underlying AHF pathophysiology, such as congestion, which was more severe in AHF patients with worsening of CHF, may explain the different levels of total, LDL, and HDL cholesterol in the studied groups. Similar triglyceride levels in the two studied groups may indicate either that triglyceride synthesis is not affected by the underlying AHF pathophysiology or that a decreased triglyceride synthesis is counterbalanced by a likewise decreased triglyceride catabolism. The latter possibility is likely, because lipoprotein lipase, a serum enzyme responsible for triglyceride degradation, is decreased by inflammation [22]. Renal dysfunction was more pronounced in our AHF patients with worsening of CHF than in de novo AHF patients. This was reflected by a markedly decreased GFR and concomitantly increased serum levels of urea and creatinine. The reduction in GFR may be due to several factors, including kidney hypoperfusion [23], renal co-morbidities [24], renal venous congestion [26], or inflammatory cytokines such as IL-6 [27]. Accordingly, a higher incidence of CKD, together with a more severe venous congestion and a more pronounced inflammatory response, as well as higher serum levels of NT-proBNP, an established predictor of worsening renal function in CHF [28], may well explain the more severe impairment of renal function in our AHF patients with worsening of CHF compared to de novo AHF patients. Renal dysfunction is also a strong predictor of mortality in HF [29]. 
Therefore, a more pronounced inflammatory state, reflected by higher IL-6 levels, seems to be an important underlying pathophysiology responsible for the worse clinical status of our AHF patients with worsening of CHF compared to de novo AHF patients. Besides their deleterious effects on the kidneys, inflammatory cytokines such as IL-6 exert a direct detrimental effect on myocardial structure and function by promoting oxidative stress and endothelial dysfunction, thus contributing to the progression of HF [31]. The above-mentioned differences in clinical characteristics observed in our AHF patients with worsening of CHF translated into a higher mortality rate, whereby the impact on 3-month mortality was more pronounced than the impact on hospital mortality. In the present study, the strong association of worsening of CHF with 3-month mortality was weakened, but remained significant, upon adjustment for comorbidities and established risk factors in HF [33]. This implies that other features inherent to this group of patients underlie their increased mortality compared to de novo AHF patients. This is in line with two recent reports showing a strong association of the chronicity of HF with 6-month and 3-year mortality, but only a weak association with 1-month mortality. There are several limitations to our present study: the design precludes conclusions on the pathophysiological mechanisms in terms of cause and consequence, or on the temporal development and onset of signs and symptoms associated with worsened status and increased mortality. Furthermore, our data do not allow us to examine whether the duration of HF chronicity affects mortality rates. Finally, the limited number of participants in this monocentric study limits the statistical power of our analyses. 
Therefore, further large studies are needed to confirm our results. Based on our results, we conclude that AHF developed from CHF is a more severe condition than de novo AHF and is associated with higher mortality rates. In short, we performed a prospective observational study on hospitalized patients with AHF in Zagreb, Croatia, between 2013 and 2015. Written informed consent was obtained from each patient, and the study, which was approved by the Ethics Committees of the University Hospital Centre Sisters of Charity, Zagreb, Croatia, and the Medical University of Graz, Austria, was conducted in adherence to the ethical guidelines of the Declaration of Helsinki [35]. All patients were treated according to the ESC Guidelines for AHF [36]. Study design, inclusion and exclusion criteria, as well as patient characteristics for our AHF cohort have been reported previously [34]. SPAP was approximated from the tricuspid valve velocity measured by Doppler echocardiography, the estimated central venous pressure (resembling right atrial pressure), and the Bernoulli equation [37]. The levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined with the Abbott Architect c8000. The collection of blood samples, standard laboratory methods, and the determination of the NT-proBNP concentration have been described in our previous reports on our AHF cohort. Patients with de novo AHF and with worsening of CHF were compared by the Mann-Whitney U test or Fisher\u2019s exact test. The impact of worsening of CHF as compared to de novo AHF on hospital and 3-month mortality was assessed by logistic regression analysis. In addition to a univariable model, we also adjusted for age, sex, BMI, NT-proBNP, GFR, MAP, urea, IL-6, and LDL cholesterol, as well as for age, sex, BMI, NT-proBNP, MAP, GFR, COPD, CKD, CM, ACS, and NYHA, i.e., the comorbidities as well as clinical and laboratory parameters known to be associated with mortality in HF patients. 
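The echocardiographic SPAP estimate described above combines the simplified Bernoulli equation with the estimated right atrial pressure. A minimal sketch of that arithmetic (the function name and example values are illustrative, not taken from the study):

```python
def estimate_spap(trv_m_per_s: float, rap_mmhg: float) -> float:
    """Estimate systolic pulmonary artery pressure (mmHg).

    Uses the simplified Bernoulli equation, delta-P = 4 * v^2, applied to
    the peak tricuspid regurgitation velocity (m/s), plus the estimated
    right atrial (central venous) pressure in mmHg.
    """
    return 4.0 * trv_m_per_s ** 2 + rap_mmhg

# Example: a 3.0 m/s tricuspid jet with an estimated RAP of 5 mmHg
# gives 4 * 9 + 5 = 41 mmHg.
print(estimate_spap(3.0, 5.0))  # -> 41.0
```

The simplified Bernoulli relation is a standard Doppler approximation; only the velocity and pressure inputs change from patient to patient.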
We checked the variance inflation factor to prevent multi-collinearity among the covariates. Results are presented as odds ratios (OR) with the respective 95% confidence intervals (CI). All analyses are exploratory, and a p-value of <0.05 was considered significant. R version 3.4.2 was used for all statistical analyses. Categorical data are shown as absolute and relative frequencies, whereas continuous data are summarized as median and range (minimum to maximum) due to the skewed distribution of many of the laboratory parameters. Additional results are provided in the Supplementary Tables."} +{"text": "Novel green composites were prepared by melt compounding a binary blend of polylactide (PLA) and poly(\u03b5-caprolactone) (PCL) at 4/1 (wt/wt) with particles of walnut shell flour (WSF) in the 10\u201340 wt % range, which were obtained as a waste from the agro-food industry. Maleinized linseed oil (MLO) was added at 5 parts per hundred resin (phr) of composite to counteract the intrinsically low compatibility between the biopolymer blend matrix and the lignocellulosic fillers. Although the incorporation of WSF tended to reduce the mechanical strength and thermal stability of PLA/PCL, the MLO-containing composites filled with up to 20 wt % WSF showed superior ductility and a more balanced thermomechanical response. The morphological analysis revealed that the performance improvement attained was related to a plasticization phenomenon of the biopolymer blend and, more interestingly, to an enhancement of the interfacial adhesion of the green composites achieved by extrusion with the multi-functionalized vegetable oil. Replacement of fossil feedstocks with renewable ones at an equivalent cost is one of the main endeavors of the modern plastics industry. One possibility is to blend PLA with poly(\u03b5-caprolactone) (PCL), whose very low glass transition temperature (Tg) (around \u221260 \u00b0C) results in highly flexible articles, but whose low melting temperature (Tm), in the 59\u201364 \u00b0C range, restricts its use as a mono-component in most packaging applications. 
Thus, PLA/PCL blends have stimulated extensive research on their potential in rigid packaging due to the improvement attained in impact strength. In this context, the use of polylactide (PLA) as the matrix in green composites is currently gaining great importance. Walnut (Juglans regia L.) is an important crop that is cultivated throughout the world\u2019s temperate regions for its edible nuts. Worldwide walnut production reached approximately 7.25 million tons in 2016, with China, the USA, Iran, and Turkey being the main producers. Maleinized linseed oil (MLO) is a cost-competitive cross-linker that is industrially prepared from linseed oil, a natural product that is extracted from the oilseed flax plant. MLO shows one of the highest unsaturation levels amongst common vegetable oils, comparable only to tung oil, thus leading to a highly versatile additive, ripe for chemical functionalization. Commercial PLA Ingeo\u2122 biopolymer 6201D was provided by NatureWorks. This PLA resin has a density of 1.24 g/cm3 and a melt flow index (MFI) of 15\u201330 g/10 min, which makes it suitable for injection molding. Regarding PCL, a CapaTM 6800 commercial grade was supplied by Perstorp UK Ltd. This PCL resin presents a density of 1.15 g/cm3 and an MFI of 2\u20134 g/10 min. Walnut shell flour (WSF) was supplied by Bazar al andalus. According to the manufacturer, the shells were gently separated from the dry fruit and industrially ground with a high-speed rotary cutting mill to achieve a mean particle size lower than 100 \u00b5m. MLO was obtained from Vandeputte as VEOMER LIN. This oil has a viscosity of 1,000 cP at 20 \u00b0C and an acid value of 105\u2013130 mg potassium hydroxide (KOH)/g. Prior to manufacturing, the biopolymer pellets were dried to minimize their water content in a MDEO dehumidifier from Industrial Mars\u00e9. 
Drying was performed at 60 \u00b0C and 45 \u00b0C for PLA and PCL, respectively, both for 36 h. The WSF particles were dried at 100 \u00b0C for 48 h. Reactive extrusion (REX) was carried out in a co-rotating twin-screw extruder from Construcciones Mec\u00e1nicas Dupra, S.L. The screws presented a diameter (D) of 25 mm and a length (L) to diameter ratio, that is, L/D, of 24. All materials were fed through the main hopper after being pre-homogenized in a zipper bag. The PLA/PCL ratio was fixed at 4/1 (wt/wt) according to previous findings, whereas the WSF content was varied in the 10\u201340 wt % range. The compounded pellets were shaped into pieces by injection molding in a Meteor 270/75 from Mateu and Sol\u00e9. The temperature profile was 165 \u00b0C (hopper), 170 \u00b0C, 175 \u00b0C, and 180 \u00b0C (injection nozzle). A clamping force of 75 tons was applied, while the cavity filling and cooling times were set at 1 s and 10 s, respectively. Pieces with a thickness of ~4 mm were produced. Tensile tests were carried out in a universal test machine Elib 50 from S.A.E. Ibertest following the guidelines of ISO 527-1:2012. The selected load cell was 5 kN, while the cross-head speed was 2 mm/min. Shore D hardness values were measured with a 676-D durometer from J. Bot following ISO 868:2003. Toughness was evaluated by the standard Charpy test with a 6-J pendulum from Metrotec S.A., as suggested by ISO 179-1:2010. All specimens were tested at room conditions, that is, 23 \u00b0C and 50% relative humidity (RH). At least six samples of each material were tested. The fracture surfaces after the impact tests were observed by field emission scanning electron microscopy (FESEM) using a ZEISS ULTRA 55 model from Oxford Instruments. The samples were previously coated with an ultrathin metallic layer to provide electrical conductivity. This process was conducted in vacuum conditions inside a sputter chamber EMITECH mod. SC7620 provided by Quorum Technologies. 
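Charpy results such as those described above are commonly normalized by the specimen cross-section to give an impact strength in kJ/m2. A small sketch of that conversion (the function name and numeric values are illustrative, not measured data from this work):

```python
def charpy_impact_strength(absorbed_energy_j: float,
                           width_mm: float,
                           thickness_mm: float) -> float:
    """Return impact strength in kJ/m^2 for an unnotched Charpy specimen.

    1 J/mm^2 equals 1000 kJ/m^2, so dividing the absorbed energy (J) by
    the cross-sectional area (mm^2) and multiplying by 1000 gives kJ/m^2.
    """
    area_mm2 = width_mm * thickness_mm
    return absorbed_energy_j / area_mm2 * 1000.0

# Illustrative: 0.5 J absorbed by a 10 mm x 4 mm cross-section
print(charpy_impact_strength(0.5, 10.0, 4.0))  # -> 12.5
```

The same unit conversion underlies any Charpy report in kJ/m2; only the pendulum energy and specimen geometry change.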
The main transition temperatures and enthalpies were obtained by differential scanning calorimetry (DSC) in a Mettler-Toledo 821 calorimeter. The sample size ranged from 5 mg to 7 mg, and each sample was placed in standard aluminum crucibles (40 \u00b5L). A temperature program based on three stages was performed: first heating from 30 \u00b0C to 180 \u00b0C, cooling from 180 \u00b0C to \u221250 \u00b0C, and second heating from \u221250 \u00b0C to 300 \u00b0C. The heating/cooling rates were set at 10 \u00b0C/min for all three stages. The main thermal parameters were obtained from the second heating program. All the DSC runs were carried out in an inert atmosphere using a nitrogen flow-rate of 66 mL/min. Measurements were performed in triplicate. Thermal stability was evaluated in a TGA/SDTA 851 thermobalance from Mettler-Toledo using a sample weight of 5\u20138 mg and standard alumina crucibles (70 \u00b5L). The thermal program was set from 30 \u00b0C to 700 \u00b0C at a constant heating rate of 20 \u00b0C/min in air. Samples were tested in triplicate. Dimensional stability was evaluated by dynamic mechanical thermal analysis (DMTA) in a DMA1 from Mettler-Toledo in the temperature range between \u221290 \u00b0C and 80 \u00b0C on injection-molded samples sizing 10 \u00d7 7 \u00d7 1 mm3. The test was carried out in single cantilever bending conditions at a frequency of 1 Hz, with a heating rate of 2 \u00b0C/min, and with a deformation control of 10 \u00b5m. Samples were evaluated in triplicate. The evolution of the water absorption was followed for a period of 12 weeks according to ISO 62:2008. Injection-molded samples of 4 \u00d7 10 \u00d7 80 mm3 were immersed in distilled water at 23 \u00b1 1 \u00b0C. The samples were taken out and weighed weekly, after removing the residual water with a dry cloth, using an AG245 analytical balance from Mettler-Toledo with a precision of \u00b1 0.1 mg. Measurements were performed in triplicate. 
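The gravimetric water uptake in the immersion test above is simply the relative weight gain of each specimen. A minimal sketch of that calculation (the function name and sample weights are illustrative):

```python
def water_uptake_percent(dry_weight_g: float, wet_weight_g: float) -> float:
    """Percent water absorption relative to the initial dry weight (ISO 62-style)."""
    return (wet_weight_g - dry_weight_g) / dry_weight_g * 100.0

# Illustrative weekly readings for one specimen (grams)
readings = [3.000, 3.006, 3.010, 3.012]
uptake_curve = [water_uptake_percent(readings[0], w) for w in readings]
print(round(uptake_curve[-1], 2))  # -> 0.4 (0.012 g gained on a 3.000 g specimen)
```

Averaging such curves over the three replicates gives the water-absorption evolution reported for each composite.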
Mean values and standard deviations were also reported. The mechanical, thermal, and thermomechanical properties were evaluated through analysis of variance (ANOVA) using STATGRAPHICS Centurion XVI v 16.1.03 from StatPoint Technologies, Inc. Fisher\u2019s least significant difference (LSD) test was used at the 95% confidence level (p < 0.05). The effect on the mechanical properties of the single addition of MLO to PLA and its blends was first reported by Ferri et al., who achieved a linear chain-extended, branched, or even cross-linked structure and, thus, an improved molecular entanglement to resist mechanical deformation. In other previous studies, Xiong et al. reported related effects of reactive vegetable-oil additives. In relation to the PLA/PCL/WSF composites, our recent research also demonstrated that 1\u20135 phr of MLO can successfully improve the mechanical performance of PLA composites containing 30 wt % ASF. The remarkable change observed in the fracture surface morphology can be ascribed, as described above, to the multiple actions that MLO can potentially provide to the PLA/PCL/WSF composites. Firstly, low contents of MLO can successfully plasticize the PLA-based matrix. The neat PLA piece showed its glass transition (Tg) at ~63 \u00b0C and then developed cold crystallization at higher temperatures, showing a cold crystallization temperature (TCC) at 113 \u00b0C. One can also observe that the sample showed two overlapped melting peaks, having a low-intensity first melting point at ~166 \u00b0C and a more intense second one at nearly 172 \u00b0C. Double melting in polymers is related to a phenomenon of melt recrystallization during heating. Blending with PCL shifted the TCC value of PLA to a lower temperature, that is, 99 \u00b0C, whereas it reduced Tm slightly, to ~170 \u00b0C, and also suppressed the double-melting peak behavior. This observation points out that PCL enhanced the crystallization of PLA. 
One assumes that this effect can be ascribed to plasticization by the dispersed molten phase, that is, PCL, which enhanced the mobility of the PLA chains and, hence, their folding. The addition of WSF slightly reduced both the TCC and Tm values of PLA in the blend. This effect was more remarkable for the composite piece without MLO, in which TCC and Tm were reduced to values below 95 \u00b0C and 165 \u00b0C, respectively. The lower TCC values obtained suggest that the WSF particles provided a nucleating effect on PLA during cold crystallization but, as Tm was also reduced, it is considered that they also impaired the formation of thicker, more perfect, and stable crystals. The reduction observed in the enthalpies of both cold crystallization (\u2206HCC) and melting (\u2206Hm) corroborated the latter effect. In this context, Chun et al. observed a similar reduction, by 1.5\u20131.7 \u00b0C, of the Tm of PLA. The decrease observed in the melting peak of the biopolyester was related to a phenomenon of physical hindrance by the lignocellulosic fillers, which disrupted the chain-folding process of PLA. Therefore, the presence of the WSF particles nucleated the formation of more crystals, though less perfect ones due to an effect of molecular restriction. For neat PLA, the onset degradation temperature, taken at 5% mass loss (T5%), was ~340 \u00b0C, whereas the thermal degradation temperature (Tdeg) was ~375 \u00b0C. One can also observe that PLA degraded in two stages. The main degradation step occurred in the temperature range of 340\u2013390 \u00b0C, showing a mass loss of ~96%. A second mass loss, of much lower intensity (<4%), was observed from approximately 450 \u00b0C to 500 \u00b0C. The corresponding area of this latter zone on the DTG curve is evidenced by the small peak on the right-hand side. 
The first major peak was due to thermal degradation of the biopolyester from high-MW macromolecules into smaller chain fragments, while the second minor peak can be related to the thermal degradation of the low-MW biopolymer fragments. In this regard, Signori et al. reported reductions of T5% and Tdeg by approximately 10 \u00b0C and 5 \u00b0C, respectively, together with an increase of the second mass-loss peak to ~7%. In this context, Vilay et al. made similar observations. The incorporation of WSF reduced considerably the thermal stability of PLA/PCL: one can observe that the T5% and Tdeg values decreased markedly as the filler content increased. It is also worth noting that the rate of weight loss of the biopolymers was considerably reduced, which suggests that the fillers exerted a mass-transport barrier to the volatiles produced during their thermal decomposition. One assumes that this impairment is directly related to the relatively low thermal stability of the lignocellulosic filler. In particular, the TGA curve of WSF showed three main weight losses. The first one occurred around 100 \u00b0C, showing a mass loss of ~1.9%. It mainly corresponds to the removal of the water remaining in the filler after drying, due to the great tendency of lignocellulosic materials to absorb moisture. For the WSF-filled composite without MLO, the T5% and Tdeg values decreased to approximately 252 \u00b0C and 293.5 \u00b0C, respectively. Indeed, the same composite prepared with 5 phr MLO showed T5% and Tdeg values around 269 \u00b0C and 300 \u00b0C. Therefore, the MLO addition exerted a positive effect on the overall thermal stability of the green composites. 
This thermal stability increase in the MLO-containing green composite has been related to the chemical interaction achieved by the multi-functionalized vegetable oil due to the covalent bonds established between the lignocellulosic fillers and the biopolymer matrix [The impairment observed in the thermal stability is in agreement with previous studies concerning green composites based on nut shells ,71,72. Tg values from the damping factor (tan \u03b4) curves. The evolution of the storage modulus in the temperature range from \u221290 \u00b0C to 80 \u00b0C is plotted in g. Slightly higher values of storage modulus were observed in the DMTA curves of the PLA/PCL piece at temperatures below \u221270 \u00b0C. This effect can be ascribed to the fact that, at this temperature, the dispersed PCL phase remains as a vitreous solid and, therefore, it hardens the PLA matrix (~1.8 GPa at \u221280 \u00b0C). However, as PCL started undergoing its glass\u2013rubber transition at around \u221260 \u00b0C, the blend pieces presented a progressive decrease in the storage modulus similar to that observed for neat PLA. This confirms that the presence of PCL increased material\u2019s toughness at room temperature (~1.4 GPa at 20 \u00b0C). One can also observe that the incorporation of WSF in combination with MLO further decreased the storage modulus of PLA/PCL, then leading to softer materials. This supports the above-described plasticizing effect provided by MLO during the mechanical analysis and it is also in agreement with our previous study on PLA/AHF composites compatibilized by MLO [d by MLO . Excepcid by MLO in whichg. In the case of the neat PLA piece, the \u03b1-peak was centered at 68.4 \u00b0C. This value was similar but slighlty higher than that observed by DSC during the second heating, which may be explained by differences in the test conditions and sample crystallinity. 
The zoomed inset in the graph shows the \u03b1-relaxation of PCL, in which one can observe that the unfilled PLA/PCL piece presented a broad and low-intensity peak at \u221253.4 \u00b0C. The incorporation of WSF and MLO interestingly reduced and also shifted both \u03b1-relaxation peaks to intermediate temperatures. The peak reduction observed indicates that the presence of both PCL and WSF partially suppressed the relaxation of the PLA chains. The Tg values of the dispersed PCL phase increased to temperatures in the range from approximately \u221252 \u00b0C to \u221246 \u00b0C, whereas those for PLA decreased to the 63\u201365 \u00b0C range. The reduction of the PLA\u2019s Tg can be ascribed to the aforementioned process of plasticization by PCL and MLO, which is consistent with the previous mechanical and thermal analyses. This thermomechanical change was, however, also observed for the uncompatibilized green composite. Moreover, the Tg values of the PCL phase shifted progressively to higher temperatures with the WSF content. The latter observation suggests that the lignocellulosic fillers also contributed to improving the miscibility between both biopolymers. Indeed, partial miscibility can be inferred from the shift of the Tg value of one biopolymer toward that of the other biopolymer. One of the most important drawbacks of green composites is their tendency to absorb water due to the hydrophilic nature of the incorporated lignocellulosic fillers. The evolution of the water absorption as a function of the immersion time was followed for a period of 12 weeks. The dual incorporation into PLA/PCL of up to 20 wt % WSF and 5 phr MLO successfully yielded composites with improved ductility and a minimal loss in mechanical strength and toughness. The fracture surfaces of the pieces revealed that the MLO co-addition favored the particle-to-matrix adhesion and increased plastic deformation. 
The incorporation of WSF combined with MLO resulted in a lower size of the dispersed PCL phase in PLA. Moreover, MLO slightly improved thermal stability and reduced rigidity at room temperature. The improvement achieved was related to an enhancement of the interfacial adhesion of the PLA/PCL/WSF composites based on the multiple actions of MLO during extrusion. Firstly, the vegetable oil molecules induced a plasticization phenomenon of the biopolymer matrix. The lubricating effect of MLO increased biopolymer chain mobility and, thus, yielded lower stresses at the biopolymer\u2013filler interface. Secondly, the presence of MLO in combination with high contents of WSF could also promote certain miscibility in the binary blend of biopolymers that constitutes the composite\u2019s matrix. Lastly, the multi-functionality of MLO additionally provided melt grafting of the WSF fillers onto the PLA/PCL matrix. The resultant enhanced particle\u2013biopolymer continuity of the composites allowed better load transfer between the PLA/PCL matrix and the dispersed WSF particles, which, in turn, led to materials with higher mechanical and thermal performance. MLO could, however, only provide the multiple actions of plasticization and enhanced interfacial adhesion at moderate WSF contents. At higher contents, the effect of MLO was surpassed by the presence of non-grafted fillers and particularly filler agglomerates that potentially acted as defects rather than reinforcement. This study demonstrates that MLO is a multi-functionalized vegetable oil that can be very attractive as a novel additive for the biopolymer industry, since it can positively contribute to the performance enhancement of PLA-based composites. The here-developed green composites with improved ductility can easily find applications in rigid packaging or in the building and construction industry."} +{"text": "Maleinized linseed oil (MLO) has been successfully used as a biobased compatibilizer in polyester blends. 
Its efficiency as a compatibilizer in polymer composites with organic and inorganic fillers has also been proved. The goal of this work is to optimize the amount of MLO in poly(lactic acid)/diatomaceous earth (PLA/DE) composites to open new potential for these materials in the active packaging industry without compromising their environmental efficiency. The amount of DE remains constant at 10 wt%, and the MLO content varies from 1 to 15 phr (weight parts of MLO per 100 g of PLA/DE composite). The effect of MLO on the mechanical, thermal, thermomechanical, and morphological properties is described in this work. The obtained results show a clear embrittlement of the uncompatibilized PLA/DE composites, which is progressively reduced by the addition of MLO. MLO shows good miscibility at low concentrations (lower than 5 phr), while above 5 phr a clear phase separation phenomenon can be detected, with the formation of rounded microvoids and shapes which have a positive effect on impact strength. Natural oils and, in particular, vegetable oils are currently being widely investigated as a source of a wide variety of new environmentally friendly materials from renewable resources that could positively contribute to sustainable development. In addition, some of these natural vegetable oils cannot be used in the food industry because of regulatory restrictions related to their composition. For this reason, some of these vegetable oils are obtained as by-products from other industries, which contributes to their high worldwide availability, together with their cost-effective price. Recently, selectively modified vegetable oils have been proposed as interesting materials for the compatibilization of polymer blends. 
Other applications of these modified vegetable oils include partially biobased thermosetting resins as an alternative to petroleum-derived resins such as epoxies, which can also be used as matrices in high-environmental-efficiency green composites. In addition, modified vegetable oils are widely used as secondary plasticizers in poly(vinyl chloride) (PVC) to provide increased thermal stability [5,6,7,8]. Vegetable oils are interesting from a chemical point of view because of their triglyceride structure, which consists of a glycerol backbone chemically bonded to different fatty acids through ester bonds. Fatty acids can be saturated, such as stearic acid (C18:0), palmitic acid (C16:0), or margaric acid (C17:0). These saturated fatty acids are not interesting for chemical modification. Nevertheless, some fatty acids can contain one, two, or more unsaturations, thus leading to unsaturated fatty acids such as palmitoleic acid (C16:1), oleic acid (C18:1), linoleic acid (C18:2), or linolenic acid (C18:3), among others. Diatomaceous earth (DE) is mainly composed of amorphous hydrated silica, SiO2\u00b7n H2O. From a morphological point of view, it presents a hierarchical micro/nanoporous structure. It is this hierarchical porosity that allows the use of DE as a carrier for active principles in active packaging. DE is composed of the micro-shells of marine unicellular eukaryote organisms in phytoplankton that formed sediments millions of years ago. Diatom fossilization led to the formation of huge diatomaceous earth deposits; therefore, it is an abundant, cost-effective product. The main properties of DE include very low density, a porous structure, abrasiveness, chemical inertness, biocompatibility, high absorption capacity, low thermal conductivity, high resistance to acids, and permeability, among others. Currently, diatomaceous earth is widely used as a filtration medium, for absorption, as a natural insecticide, as a functional additive, and in dental fillings, membranes, and chemical sensors, among other uses. 
When used as natural fillers, DE particles can provide two different effects: on the one hand, they can provide some reinforcement effect, and on the other hand, they can act as carriers for the controlled release of active principles [35,36,37]. Unsaturations are highly reactive points, such that they represent the base for a chemical modification to provide the desired functionality. Epoxidation is one of the most investigated chemical modifications of a vegetable oil. By a simple epoxidation process with peroxoacids derived from, for example, hydrogen peroxide and acetic acid, unsaturations can be converted into oxirane rings [9,10,11]. The polymer matrix used in this study was a commercial grade of poly(lactic acid) manufactured and distributed by NatureWorks LLC. This commercial grade was Ingeo Biopolymer 6201D, with a melt flow index in the 15\u201330 g/(10 min) range at 210 \u00b0C, which makes it suitable for injection moulding and melt spinning of fibres as well. It is a lightweight material with a typical density of 1.24 g cm\u22123. Regarding diatomaceous earth (DE), this was supplied by ECO-Tierra de diatomeas. This DE shows different particle sizes and shapes, but triangular shapes with rounded angles are predominant. The compatibilizer used in this study was maleinized linseed oil (MLO), which is obtained from the reaction of maleic anhydride (MA) with the unsaturations contained in linseed oil. This MLO was the commercial grade VEOMER LIN supplied by Vandeputte. Some of its features include a viscosity of 10 dPa s measured at 20 \u00b0C and an acid value in the 105\u2013130 mg KOH g\u22121 range. The above-mentioned compositions were subjected to an initial compounding stage in a co-rotating twin screw extruder from DUPRA S.L. 
Different temperatures were selected for the extrusion process by taking into account that the melt peak temperature of PLA is close to 170 \u00b0C; therefore, the initial heating stage close to the hopper was set to 165 \u00b0C and progressively increased up to 180 \u00b0C in the extrusion die. A rotating speed of 20\u201325 rpm was used. After the compounding stage in the co-rotating twin screw extruder, a continuous filament (4 mm diameter) was obtained. This filament was cooled down in air to room temperature (to avoid hydrolysis) and dropped into a shredder manufactured by Mayper, which gave an average pellet size of 3 mm in diameter and 2\u20132.5 mm in height. The pellets of the different composites obtained were then processed by injection moulding using a Meteor 270/75 injection moulding machine from Mateu & Sol\u00e9. The temperature profile, from the feeding zone to the injection nozzle, was set to 170\u2013180\u2013190\u2013200 \u00b0C. The material was processed with a holding pressure of 75 bar, with an injection time of 8 s in the mould and a cooling time of 20 s in the open mould. The first heating stage of the DSC program was applied to remove the previous thermal history, which is particularly important in semicrystalline polymers. After this, a cooling stage from 200 \u00b0C down to 0 \u00b0C at a constant cooling rate of 10 \u00b0C min\u22121 was applied. With this stage, samples are subjected to a controlled cooling process which allows further comparisons. Finally, a new heating cycle from 0 \u00b0C to 300 \u00b0C at a heating rate of 10 \u00b0C min\u22121 was applied, and all thermal transitions were obtained in this second heating cycle. To avoid undesired oxidation, a nitrogen inert atmosphere (66 mL min\u22121) was used. An important parameter in semicrystalline polymers is the degree of crystallinity (\u03c7c), which represents the ratio between the crystalline regions contained in the polymer and the total volume. 
The degree of crystallinity (\u03c7c) was calculated by using the following expression: \u03c7c (%) = 100 \u00b7 (\u2206Hm \u2212 \u2206Hcc)/(\u2206H0m \u00b7 (1 \u2212 w)). In this equation, \u2206Hm and \u2206Hcc (J g\u22121) represent the melt and cold crystallization enthalpies, respectively, while \u2206H0m is the melt enthalpy of a theoretically fully crystalline PLA, as reported in literature, and w is the weight fraction of filler. DE can affect the thermal behaviour of the PLA matrix. For this reason, the effects of DE addition and MLO on the thermal transitions of PLA/DE composites were obtained using differential scanning calorimetry (DSC) with a Mettler Toledo 821 calorimeter. A typical procedure is based on the use of a sample weight of about 7\u201310 mg. Samples were accurately weighed and placed into standard sealed aluminium pans (40 \u00b5L). The thermal program was divided into three different stages; the first stage was programmed from 30 \u00b0C to 200 \u00b0C at a heating rate of 10 \u00b0C min\u22121. Complementary to the characterization of thermal transitions, the thermal stability was evaluated by means of thermogravimetric analysis (TGA) using a TGA/SDTA 851 thermobalance from Mettler Toledo Inc. The selected thermal program was a dynamic heating from 30 \u00b0C to 700 \u00b0C at a constant heating rate of 20 \u00b0C min\u22121 using an air atmosphere to simulate more aggressive conditions than an inert atmosphere. The sample mass varied in the 8\u201310 mg range, and all the samples had similar dimensions to obtain comparable and reproducible results. Standard alumina crucibles (70 \u00b5L) were employed for TGA characterization. The dynamic mechanical behaviour of PLA/DE composites with different MLO loadings was followed through the evolution of the storage modulus (G\u2032) and the dynamic damping factor (tan \u03b4) as a function of increasing temperature. To this end, an AR-G2 oscillatory rheometer from TA Instruments, equipped with an environmental test chamber (ETC) and a special clamp device for solid samples, was used in torsion/shear mode. Rectangular samples with dimensions of 40 \u00d7 10 mm2 and an average thickness of 4 mm were subjected to a temperature sweep from 30 \u00b0C up to 140 \u00b0C at a heating rate of 2 \u00b0C min\u22121. 
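The crystallinity expression above is simple arithmetic on the DSC enthalpies; a minimal sketch follows (the reference melt enthalpy of 93 J/g for fully crystalline PLA is a commonly cited literature figure assumed here for illustration, not a value stated in this text):

```python
def degree_of_crystallinity(dh_melt_j_g: float,
                            dh_cold_cryst_j_g: float,
                            filler_weight_fraction: float = 0.0,
                            dh_melt_100pct_j_g: float = 93.0) -> float:
    """Percent crystallinity from DSC enthalpies.

    chi_c (%) = 100 * (dHm - dHcc) / (dH0m * (1 - w)), where the (1 - w)
    factor normalizes the enthalpies to the polymer fraction of a filled
    composite. The default dH0m = 93 J/g for PLA is an assumed literature
    value, not a number taken from this study.
    """
    polymer_fraction = 1.0 - filler_weight_fraction
    return 100.0 * (dh_melt_j_g - dh_cold_cryst_j_g) / (dh_melt_100pct_j_g * polymer_fraction)

# Illustrative: dHm = 25 J/g, dHcc = 16 J/g, 10 wt% DE filler
print(round(degree_of_crystallinity(25.0, 16.0, 0.10), 2))  # -> 10.75
```

Subtracting the cold crystallization enthalpy removes the contribution of crystals formed during the DSC heating itself, so the result reflects the crystallinity of the as-processed sample.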
This temperature range was selected because the main thermal transitions of PLA in the solid state, i.e., the glass transition temperature (Tg) and the cold crystallization, occur within it. Other characteristics of this experiment were a maximum shear deformation (%γ) of 0.1% and a frequency of 1 Hz. Tensile tests were performed with a load cell of 5 kN and a crosshead speed of 10 mm min−1, and stresses were calculated by taking into account the cross-section of the samples. The mechanical response of PLA/DE composites under impact conditions was obtained using a Charpy pendulum with a total energy of 1 J from Metrotec S.A., following the guidelines of ISO 179. Five unnotched samples were tested for each formulation, and the impact strength was calculated in kJ m−2. In addition to the above-mentioned characterization techniques, Shore D hardness was obtained in a durometer 673-D from J. Bot S.A., as indicated in ISO 868. In a similar way, hardness was measured on five different samples, and the average values were collected. The internal morphology of PLA/DE composites was studied on samples fractured in the impact tests. A field emission scanning electron microscope (FESEM) from Oxford Instruments working at an acceleration voltage of 2 kV was used. To provide conducting properties and avoid sample charging, all fractured samples were covered with an ultrathin gold-palladium alloy in a Quorum Technologies Ltd. EMITECH model SC7620 sputter coater. The DSC thermograms reveal the glass transition temperature (Tg) of PLA. As PLA is highly sensitive to the cooling process, which affects the degree of crystallinity, a cold crystallization peak can be observed with a peak maximum at 119 °C, while this characteristic peak moves down to lower values in PLA/DE composites. The cold crystallization process occurs at lower temperatures with 10 wt% DE.
In particular, the peak maximum is displaced to 112 °C. This slight change in the cold crystallization characteristic temperatures is directly related to the fact that DE particles can act as nucleants for crystallization, thus favouring crystallite formation [40,41,42]. A comparative plot of the DSC thermograms of neat PLA and PLA/DE composites with varying MLO content is gathered in the corresponding figure. Addition of MLO provides a slight decrease in the Tg of PLA/DE composites. In particular, the maximum decrease is obtained for an MLO loading of 5 phr, which gives a Tg value of 60.2 °C (3.6 °C lower than the PLA/DE composite). This slight decrease in Tg is representative of poor plasticization effects, as observed in other polymer systems [43,44]. A decrease in the cold crystallization peak temperature (Tcc) can also be seen, from 112 °C (PLA/DE composite) down to values of 105 °C for almost all composites, independently of the MLO loading; MLO favours crystallization due to increased chain mobility. On the other hand, the melt peak temperature of the obtained materials does not change in a remarkable way, with values of about 170 °C, even with increasing MLO content. With regard to the normalized enthalpies related to the cold crystallization and melting processes, it is worth noting that they are very useful for estimating the degree of crystallinity (χc) of the PLA/DE composites with increasing MLO content. Neat PLA shows a degree of crystallinity of 9.7%, while the addition of 10 wt% DE increases the crystallinity up to 15.7% due to the nucleant effect of DE. This is also consistent with the decrease in the cold crystallization peak temperature, as stable crystallites can be obtained at lower temperatures. By adding low MLO loads in the 1–2 phr range, the degree of crystallinity remains almost constant, but high MLO loadings in the 5–15 phr range favour the stability of the amorphous PLA domains, which is detectable as a decrease in the degree of crystallinity to values of about 13%. With regard to the thermal stability of PLA/DE composites with varying MLO content, the simple addition of DE increases the thermal stability of neat PLA. In fact, the onset degradation temperature (Tonset) changes from 264 °C for neat PLA to 294.3 °C for the PLA/DE composite with 10 wt% DE. These results are in agreement with the work of Carrasco et al., which suggests that the addition of small amounts of inorganic materials into a polymer matrix provides increased thermal stability. Moreover, in the glassy region all materials show a typical elastic-glassy behaviour with high G′ values. The uncompatibilized PLA/DE composite shows a G′ value of 1095 MPa at 40 °C, remarkably higher than that of neat PLA (565 MPa at the same temperature). Addition of MLO leads to a decrease in stiffness: the PLA/DE composite with 1 phr MLO shows a G′ value of 832 MPa at 40 °C, and as the MLO content increases, G′ values show a decreasing tendency. In addition to this internal lubrication effect, MLO molecules increase the free volume, thus reducing the intermolecular attraction forces between adjacent PLA chains, all of which has a positive effect on overall chain mobility. Above the Tg, a clear softening occurs and G′ values decrease in a remarkable way.
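The onset degradation temperature reported from TGA can be extracted programmatically from a thermogram; a minimal sketch assuming the common 5% mass-loss criterion (onset criteria vary between works, and the thermogram below is synthetic, not data from this study):

```python
def t_onset(temps, masses, loss_fraction=0.05):
    """Return the temperature at which the sample has lost `loss_fraction`
    of its initial mass, using linear interpolation between data points."""
    threshold = masses[0] * (1.0 - loss_fraction)
    for i in range(1, len(temps)):
        m_prev, m_curr = masses[i - 1], masses[i]
        if m_prev >= threshold >= m_curr:
            # interpolate between the two bracketing points
            frac = (m_prev - threshold) / (m_prev - m_curr)
            return temps[i - 1] + frac * (temps[i] - temps[i - 1])
    raise ValueError("mass never dropped below the threshold")

# Synthetic thermogram (mass in mg vs temperature in °C): flat, then a loss step
temps = [30, 100, 200, 250, 300, 350, 400]
masses = [10.0, 10.0, 10.0, 9.9, 8.0, 4.0, 1.0]
onset = t_onset(temps, masses)  # ~260.5 °C for this synthetic trace
```

A tangent-intersection onset, as some instruments report, would give a slightly different value; the threshold criterion is chosen here only for its simplicity.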
As a certain temperature is reached, some amorphous regions of PLA tend to re-arrange to form packed structures or crystallites; this increases the density, which is directly related to the stiffness, and consequently G′ increases. Among the different criteria for determining Tg from dynamic mechanical data, one of the most widely used is the peak maximum of tan δ. By this criterion, neat PLA shows a Tg of 68.1 °C and the PLA/DE composite offers a Tg of 66.2 °C. As the MLO loading increases, a slight decrease in Tg can be observed, in total accordance with the previous DSC results. These Tg values are summarized in the corresponding table. Neat PLA shows an impact strength of 28 kJ m−2; due to the stress concentration phenomenon of DE, the PLA/DE composite with 10 wt% DE shows a remarkable decrease in impact strength, down to 12.4 kJ m−2. These results are in total agreement with the dramatic decrease in both tensile strength and elongation at break. As the MLO loading increases, the impact strength improves, and it is worth noting that PLA/DE composites containing 15 phr MLO reach an impact strength of about 22 kJ m−2. This is lower than neat PLA, but remarkably superior to the uncompatibilized PLA/DE composite. Above 5 phr, MLO can counteract the negative effect of DE on impact strength. Finally, with regard to Shore D hardness, it is worth noting that although the average values show a slight increase in hardness with DE addition and a decrease with MLO loading, the standard deviation of these values shows that Shore D hardness remains almost unchanged, centred at 80. Addition of MLO to PLA/DE composites provides a decrease in rigidity, as expected. For low MLO loadings in the 1–2 phr range, the tensile modulus is almost constant; although the average value is slightly lower, taking the standard deviation into consideration the change is not significant. Nevertheless, above 5 phr MLO, a clear change in mechanical behaviour can be detected.
In particular, the tensile modulus is remarkably decreased. It is worth noting that PLA/DE composites with 15 phr MLO show a tensile modulus of 1075 MPa, which represents a 20% decrease with respect to the PLA/DE composite without MLO. The same tendency can be detected for tensile strength: addition of 5 phr MLO produces a decrease of 10% in tensile strength, while above 5 phr the percentage decrease in tensile strength lies between 25% and 38%. This pronounced change in mechanical resistance is inversely related to the elongation at break, which increases with increasing MLO loading. The uncompatibilized PLA/DE composite is extremely brittle, with an elongation at break of 3.5%. This is slightly increased to values of 5% for low MLO loadings but, as expected, above 10 phr MLO a noticeable increase in elongation at break is obtained, with values close to 20% in this range. Some previous works have demonstrated different effects of MLO on PLA and other polyesters, including plasticization, chain extension, branching and, in some cases, some crosslinking [47,53]. The impact strength of the uncompatibilized PLA/DE composite (12.4 kJ m−2) is remarkably lower than that of neat PLA (28 kJ m−2). Two different levels of effect can be seen depending on the MLO loading. For MLO loadings in the 1–5 phr range, a slight increase in ductile properties can be detected, with a slight decrease in the glass transition temperature (Tg). Nevertheless, above 5 phr MLO, ductile properties are remarkably improved, and the impact strength increases to values close to 22 kJ m−2, almost double the value of the uncompatibilized PLA/DE composite. The morphology of these composites shows that MLO exerts a compatibilizing effect, bridging the PLA matrix and the DE particles.
This work reports the efficiency of maleinized linseed oil (MLO) as a biobased compatibilizer in poly(lactic acid) (PLA) composites with diatomaceous earth (DE) at a constant loading of 10 wt%. In particular, the work focuses on the optimization of the MLO loading to obtain the most balanced properties of PLA/DE composites. The obtained results show a clear embrittlement of PLA/DE composites without any compatibilizer: the elongation at break is reduced to half the value of neat PLA while, on the contrary, the tensile modulus increases as expected, and the impact strength of the uncompatibilized PLA/DE composite goes down to a value of 12.4 kJ m−2. Therefore, it is possible to conclude that MLO loading in the 10–15 phr range gives optimum and balanced properties for PLA/DE composites without compromising the ecoefficiency of the developed composites."} +{"text": "Plasma contains key biomarkers essential for disease diagnosis and therapeutic monitoring. Thus, by separating plasma from the blood, it is possible to analyze these biomarkers. Conventional methods for plasma extraction involve bulky equipment, and miniaturization constitutes a key step towards portable devices for plasma extraction. Here, we integrated nanomaterial synthesis with microfabrication and built a microfluidic device. In particular, we designed a double-spiral channel able to perform cross-flow filtration. This channel was constructed by growing aligned carbon nanotubes (CNTs) with average inter-tubular distances of ~80 nm, which resulted in porosity values of ~93%. During blood extraction, these aligned CNTs allow smaller molecules to pass through the channel wall, while larger molecules are blocked. Our results show that our device effectively separates plasma from blood by trapping blood cells. We successfully recovered albumin (the most abundant protein in plasma) with an efficiency of ~80%.
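The ~93% porosity and ~80 nm inter-tubular distance quoted above can be related by a simple geometric estimate; a minimal sketch assuming vertically aligned tubes on a square lattice and a hypothetical 30 nm tube diameter (neither the lattice geometry nor that diameter is stated in the text):

```python
import math

def porosity_square_lattice(tube_d_nm, itd_nm):
    """Estimate membrane porosity for vertically aligned tubes on a square
    lattice: 1 - (tube cross-section area) / (unit-cell area). The lattice
    and the tube diameter are simplifying assumptions for illustration."""
    pitch = tube_d_nm + itd_nm  # center-to-center spacing
    fill = math.pi * tube_d_nm ** 2 / (4.0 * pitch ** 2)
    return 1.0 - fill

# With an assumed 30 nm tube diameter and the reported ~80 nm inter-tubular
# distance, the estimate lands near the ~93% porosity quoted above.
p = porosity_square_lattice(30.0, 80.0)
```

The estimate is insensitive to the exact lattice choice at these fill fractions; a hexagonal lattice with the same spacing shifts the result by only a few percent.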
This work constitutes the first report on integrating biocompatible nitrogen-doped CNTs (CNxCNTs) into such a plasma-separation microdevice. Carbon nanotubes (CNTs) consist of nanometer-scaled tubules of sp2-hybridized carbon. They possess unique thermal, optical, electrical, and mechanical properties, and have been utilized in various applications4, including sensors7, field-effect transistors10, batteries14, capacitors/actuators16, hydrogen storage components19, field emission devices24, and composite fillers26. Due to recent advances in CNT synthesis and functionalization, their applications have expanded rapidly in the biological fields28. For example, the biocompatibility of CNTs can be improved by substitutional doping with nitrogen atoms31. CNT-based biosensors have also been demonstrated to have an enhanced electrochemical reactivity as a result of their high surface area (102~103 m2 g−1)33. By integrating the growth of aligned CNTs with microfabrication, a 3-dimensional (3D) filter can be built to capture various types of biomolecules and biomarkers, e.g., tumor cells, bacteria, viruses, nucleic acids, and proteins36. Through selective growth of aligned CNTs along different micro-patterns, novel filtration devices can be designed for the separation of heterogeneous mixtures, including blood37. Blood is a complex fluid consisting of cells and plasma. Plasma is a bodily fluid containing different types of molecules and ions, e.g., clotting factors, proteins, electrolytes, hormones, enzymes, antibodies, vitamins, sugars, lipids, and minerals. In clinical diagnostics, plasma is vital because it can provide relevant information regarding a patient's health. It is also noteworthy that blood cells cause background noise during detection, so plasma separation is a critical step for effective detection. In this context, centrifugation is the conventional route to separate plasma from blood; although its efficiency is extremely high (>90%), bulky equipment is involved.
As an alternative, the development of microfluidic devices provides a miniaturized technology able to separate plasma from blood. In this paper, we integrated CNT synthesis with microfabrication techniques to construct a CNT-based microfluidic device. This microdevice effectively separates plasma from blood by performing cross-flow filtration. Our work now expands applications of CNTs to point-of-care blood analysis. The device consists of a nitrogen-doped CNT (CNxCNT) channel and a polydimethylsiloxane (PDMS) top cover. Inside this microdevice, we constructed a porous microfluidic channel by aligning CNxCNTs to form a membrane. The microdevice has one inlet and two outlets (i and ii). Blood samples were loaded from the inlet port. Human whole blood was obtained from consented donors at the General Clinical Research Center of Penn State, following an institutional review board-approved protocol. The samples were drawn into 10-mL ethylenediaminetetraacetic acid (EDTA) K2 tubes. The blood flowing through the device was collected at outlet (i), while outlet (ii) was connected to a vacuum source to extract and collect the plasma. When blood is transported within the double spiral channel, plasma diffuses through the CNxCNTs and is collected at outlet (ii). We designed this double spiral microdevice to continuously separate plasma from blood using cross-flow filtration, as illustrated in Fig. . We constructed the microdevice by integrating chemical vapor deposition (CVD) synthesis and microfabrication techniques38; the process steps are shown in Fig. . To selectively grow CNxCNTs on individual devices, we used CVD with benzylamine as a precursor on a pre-patterned catalyst, followed by bonding of the PDMS cover. We apply a homogeneous membrane (3D filter) of aligned CNxCNTs; this membrane has to be biocompatible and porous, and has to exhibit controllable inter-tubular distances. We used scanning electron microscopy (SEM) and Raman spectroscopy to characterize the structural properties of our CNxCNTs.
As explained above, the miniaturized microdevices consist of aligned CNxCNTs grown for 30 minutes. The CNxCNTs only grow on the pre-patterned iron thin film and thereby form the microfluidic channel wall. This double spiral channel has a wall thickness of 100 µm and a channel width of 100 µm. Controlling the height and inter-tubular distance of the CNxCNTs is very important for the design of our microfluidic device. In this context, we grew CNxCNTs for 10, 20, 30, and 40 minutes and measured the diameter, density, and inter-tubular distance (ITD) from cross-sectional SEM images. Raman spectroscopy measures the degree of crystallinity of CNTs, chirality in the case of single-walled nanotubes, defects, etc.41,43. In our study, we used Raman microscopy to characterize our synthesized CNxCNTs. Raman spectra were recorded using 514 nm laser excitation for 30 seconds under 50X magnification; the laser power used for the measurements was 10 µW. The Raman spectrum showed the D-band centered at 1352 cm−1, the G-band at 1578 cm−1, and the 2D (G′) band at 2659 cm−1, respectively. Blood samples in EDTA tubes were processed within 24 hours. Before processing blood samples, we wetted a microdevice by flushing a surfactant until all the air inside was replaced. Subsequently, we introduced PBS at a rate of 100 µL/min for five minutes to wash off the residual surfactant inside the microdevice. After flushing, we turned on a vacuum source connected to the outlet and removed the residual PBS inside the microdevice. We then transported blood samples into the microdevice at a rate of 100 µL/min. Plasma diffused through the porous CNxCNT channel walls, yet blood cells remained confined inside the double spiral channel without leaking or clogging.
Time-lapse images shown in Fig.  illustrate the extraction process. To characterize the plasma extraction performance, we measured the albumin concentration from different extractions. Albumin is the most abundant protein in plasma and serves as a biomarker of different diseases. We successfully constructed a microfluidic device with a porous channel wall consisting of aligned CNxCNTs. This porous channel exhibits a high porosity (>90%) with a nanometer-scale inter-tubular distance (~80 nm). The channel separates micron-scale blood cells, such as leukocytes (diameter: 12~17 µm) and erythrocytes (diameter: 3~5 µm), from nano-scale biomarkers, such as albumin (diameter: ~5 nm). These aligned tubes allowed an effective separation of micron-sized particles within whole blood, with a recovery rate of ~80%, averaged over at least 20 devices. Also, this miniaturized device is disposable and three orders of magnitude lighter in weight than conventional centrifuges. This novel portable microdevice now enables point-of-care diagnostics by extracting plasma containing key protein biomarkers. Detailed information on the fabrication is described in our previous report34. In short, the iron catalyst thin film was deposited by e-beam evaporation and further patterned by a lift-off process. The CNxCNTs were synthesized by AACVD using benzylamine as a precursor; the deposition was performed at 825 °C for 30 minutes, under an argon and 15% hydrogen flow of 2.5 L/min. For Raman characterization, a Raman microscope with a 514 nm laser was employed, and spectra were acquired under a 50× objective lens for 30 seconds. The PDMS mold was manufactured using a commercialized kit34. Before bonding, RF oxygen plasma was applied to activate both the PDMS and CNxCNT surfaces.
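The ~80% recovery rate averaged over at least 20 devices corresponds to a straightforward per-device calculation; a minimal sketch with hypothetical albumin amounts (arbitrary units, not data from this work):

```python
def mean_recovery(extracted, loaded):
    """Average recovery efficiency over replicate devices, given paired
    lists of albumin amounts in the extract and in the loaded sample."""
    efficiencies = [e / i for e, i in zip(extracted, loaded)]
    return sum(efficiencies) / len(efficiencies)

# Hypothetical per-device albumin amounts (arbitrary units)
eff = mean_recovery([40.0, 42.0, 38.0], [50.0, 50.0, 50.0])  # ~0.80
```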
After bonding, the microdevices were baked at 85 °C for four hours, as described in our previous study34."} +{"text": "Kawasaki disease (KD) is a medium vessel vasculitis that typically occurs in children aged between 6 months and 5 years. It is extraordinarily rare in the neonatal period, and KD-related systemic artery aneurysms (SAAs) have never been reported in neonates. A male infant was transferred to our institution for persistent high-grade fever lasting 16 days. Symptoms started at day 14 of life, and he was admitted to a children's hospital on the second day of fever. Physical examination at the time found no signs suggestive of KD; the only laboratory parameters of significance were values suggestive of systemic inflammation. However, his fever persisted and inflammatory markers continued to rise despite 2 weeks of antibiotic therapy. KD as a noninfectious cause of fever was considered when he came to our institution, and echocardiographic findings of left and right medium coronary artery aneurysms (CAAs) confirmed our suspicions. Full-body magnetic resonance angiography also revealed bilateral axillary artery aneurysms. Administration of intravenous gamma globulin resulted in rapid improvement: his fever resolved the next day, and the CAAs and SAAs regressed to normal at 6 months and 3 months after diagnosis, respectively. This unique case of incomplete KD highlights the importance of considering KD in neonates with unexplained prolonged fever and reinforces the need to remain vigilant for SAAs in KD. Kawasaki disease is a self-limiting systemic vasculitis of unknown etiology that typically occurs in children aged between 6 months and 5 years. A 30-day-old male infant was transferred to our institution for persistent high-grade fever lasting 16 days.
Symptoms started on day 14 of life, and he was admitted to a tertiary-level children's hospital on the second day of illness, at which time he had no skin, respiratory, gastrointestinal, or nervous system symptoms. Admission laboratory tests revealed a normal complete blood count, serum transaminase levels, albumin, antinuclear antibodies, immunoglobulin levels, and CD markers, but elevated C-reactive protein (CRP) (50 mg/L), erythrocyte sedimentation rate (ESR) (55 mm/h), ferritin (348 ng/ml) and procalcitonin (0.96 ng/ml). His chest X-ray and abdominal ultrasound were unremarkable. Empirical antibiotic therapy comprising ampicillin and cefotaxime was started for presumed neonatal sepsis. Physical examination was within normal limits except for a transient, day-long generalized reddish rash and mild conjunctival congestion on day 6 of fever, which the neonatologist considered to be a manifestation of infection. However, bacterial cultures of blood, urine, stool, and cerebrospinal fluid, as well as viral screens for toxoplasmosis, rubella, cytomegalovirus, herpes simplex, adenovirus, respiratory syncytial virus, influenza A and B, Epstein-Barr virus, and rotavirus, were all negative. Unfortunately, his fever persisted even after antibiotics were upgraded to vancomycin and meropenem. By the time he was admitted to our hospital, his white blood cells, platelets, CRP and ferritin had risen to 26.8 × 109/L, 470 × 109/L, 160 mg/L and 595 ng/ml, respectively. In contrast, his procalcitonin had decreased to 0.50 ng/ml, while at the same time he had hypoalbuminemia (25 g/L) and anemia (95 g/L). At this point, as no clear etiological evidence had been found, KD as a noninfectious cause of fever was the first to be considered, according to the 2017 American Heart Association (AHA) guidelines.
Echocardiography revealed coronary artery aneurysms of the left anterior descending artery (LAD) and right coronary artery (RCA), and full-body magnetic resonance angiography (MRA) also revealed the bilateral axillary artery aneurysms. Intravenous gamma globulin (IVIG) (2 g/kg), methylprednisolone (2 mg/kg.d), aspirin and low molecular weight heparin (LMWH) (75 u/kg per dose q12h) were administered immediately. The next day his fever resolved and his CRP level began to decrease, and 1 week later slight periungual desquamation of the fingers and toes was noted. Subsequent echocardiographic follow-up revealed no worsening of the coronary lesions. On day 30 of admission, glucocorticoids were stopped, and he was discharged home on warfarin and aspirin. Three months after diagnosis, echocardiography showed that the diameters of the LAD and RCA had been reduced to 2.3 mm (z score = 2.3) and 2 mm (z score = 2.1), respectively, and MRA showed complete resolution of the axillary artery aneurysms; warfarin was thus discontinued. Gene sequencing revealed no gene mutations associated with his symptoms. Aspirin was stopped 6 months after diagnosis, by which time the diameter of the coronary arteries had returned to normal. Both neonatal KD and KD-related SAAs are not well recognized due to their rarity, and thus there are only sporadic reports of a few cases in the English-language literature concerning either of these issues. Extremes of the pediatric age range represent a significant risk factor for the development of CAAs and incomplete presentation. A previous study by our team demonstrated that the incidence of SAAs in KD is not as low as previously thought. Longer duration of fever, larger CAAs, and younger age may be risk factors for SAAs, and the regression rate of SAAs was better than that of CAAs over time. This rare case of incomplete KD highlights the importance of considering KD in neonates with unexplained prolonged fever, who are more likely to present with incomplete KD and coronary artery lesions.
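The coronary z scores reported during follow-up standardize a measured diameter against body-surface-area-specific norms; a minimal sketch with hypothetical normative mean/SD values (real calculators, such as those referenced by the AHA guidance, use regression equations rather than the fixed constants below):

```python
def z_score(measured_mm, expected_mean_mm, expected_sd_mm):
    """Standardized deviation of a measured arterial diameter from the
    normative mean for a given body surface area."""
    return (measured_mm - expected_mean_mm) / expected_sd_mm

# Hypothetical norms: mean 1.6 mm, SD 0.3 mm for this body surface area
z = z_score(2.3, 1.6, 0.3)  # ~2.33, i.e. about 2.3 SD above the mean
```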
SAAs are another essential clinical problem that should not be overlooked in KD, and further research is needed to explore this issue."} +{"text": "Myo-inositol is a ubiquitous metabolite of plants. It is synthesized by a highly conserved enzyme, L-myo-inositol phosphate synthase (MIPS). Myo-inositol is well characterized during abiotic stress tolerance, but its role during growth and development is unclear. In this study, we demonstrate that apical hook maintenance and hypocotyl growth depend on myo-inositol. We discovered the role of myo-inositol during hook formation and its maintenance via the ethylene pathway in Arabidopsis by supplementation assays and qPCR. Our results suggest an essential requirement of myo-inositol for mediating the ethylene response and its interaction with brassinosteroid to regulate skotomorphogenesis. A model is proposed outlining how MIPS regulates apical hook formation and hypocotyl growth. Apical hook formation occurs soon after germination; the hook is maintained while the seedling makes its way through the soil and opens upon exposure to light. Apical hook formation is orchestrated by a variety of hormones which lead to differential cell elongation in the hypocotyl1, and it goes through three consecutive growth phases, i.e., formation, maintenance and opening41. The interaction of the plant hormones auxin and ethylene leads to the differential growth that forms the apical hook34. However, the mechanism by which ethylene triggers differential growth in the hypocotyl is still far from understood. Apical hook formation and maintenance is a crucial developmental process in higher plants, as it protects the shoot apical meristem (SAM) until the seedling emerges from the soil19. Enhanced hook curvature is observed in the constitutive triple response 1 (ctr1) mutant30 and the ethylene overproducer (eto) mutants55.
Auxin and ethylene are involved in the differential growth of the apical hook, in which auxin results in cell expansion and hypocotyl growth, while ethylene has an antagonistic effect34. Moreover, ethylene has a stimulatory effect on the auxin biosynthetic pathway51, which suggests another mode of interaction at the hormone level. As both auxin and ethylene are involved in the regulation of apical hook development, their activities are mutually coordinated. Ethylene enhances apical hook curvature, as observed upon application of exogenous ethylene35. Arabidopsis mutants of the auxin response, such as TRANSPORT INHIBITOR RESPONSE 1 (TIR1) and AFB members15, and mutants over-accumulating free auxin34 exhibit a hookless phenotype. Exogenous treatment with auxin or with polar auxin transport inhibitors affects hook curvature49, which suggests that optimal auxin transport is critical for differential growth in the apical hook; transport is regulated by dedicated carriers such as the AUX/LAX family of auxin influx carriers56, the PIN family of auxin efflux carriers38 and the B-type ATP-binding cassette (ABC) transporters45. An auxin gradient is one of the critical factors for apical hook formation9. BR mutants defective in BR synthesis, like det2, cbb1 and cpd, show no hook formation during skotomorphogenesis52, and at lower concentrations BR stimulates stem growth21. Exogenous ethylene treatment results in an altered auxin gradient, either directly or through a change in BR biosynthesis19. All three hormones, BR, ethylene and auxin, affect each other and are necessary for apical hook formation. Brassinosteroid (BR) is a steroid hormone which binds to the receptor kinase BRI1 to initiate signal transduction, e.g. inactivation of the GSK3-like kinase BIN2, dephosphorylation of the BRASSINAZOLE RESISTANT 1 (BZR1) family of transcription factors, accumulation of unphosphorylated BZR1 in the nucleus, and regulation of BR target genes.
Previous reports showed that MIPS is involved in cell wall biogenesis36, auxin storage2, phytic acid synthesis3 and oligosaccharide synthesis28. MIPS is also required for PIN protein localization, polar auxin transport and auxin-regulated embryogenesis37. Its over-expression confers resistance towards cold, drought and salt stress in several plants, along with immunity towards stem nematodes in transgenic sweet potato59. Programmed cell death (PCD) was observed in the Atmips1 mutant; it is light dependent and results in enhanced basal immunity43. Interestingly, Ma and coworkers in 2016 indicated a role for the light signaling proteins FHY3 and FAR1 in the maintenance of optimal levels of myo-inositol via direct binding to the promoter of MIPS1 and activation of its expression39. A recent study of MIPS revealed its critical role in growth and immunity via ethylene50. Our investigation of etiolated Arabidopsis seedlings shows that apical hook and hypocotyl development also depend on the myo-inositol level. Myo-inositol-induced hook maintenance and the subsequent stimulation of ethylene biosynthetic genes and auxin transporters suggest that the ethylene effect might be mediated by myo-inositol. Furthermore, myo-inositol (MI) antagonizes brassinosteroid (BR) effects during hook formation, which demonstrates an important step in the regulation of BR-mediated growth and development. Thus, we conclude that differential MIPS levels are required for optimal hook formation, maintenance and hypocotyl growth. Our previous work on myo-inositol phosphate synthase (MIPS) suggested its role in the ethylene response50. To further assess the role of myo-inositol (MI) in the ethylene response, we analyzed the effect of myo-inositol on etiolated seedlings of Arabidopsis Col-0. Exogenous MI supplementation resulted in a decrease in hypocotyl length and hook angle, whereas EBL treatment caused randomized growth of hypocotyls, with maximum randomization observed at a 100 nM concentration of EBL. To probe this interaction further, we germinated Arabidopsis seeds on media containing a combination of EBL and AgNO3.
We found acutely randomized growth of etiolated seedlings at different combinations of EBL and AgNO3, increasing with EBL and AgNO3 levels, followed by AgNO3 plus LiCl (53%) when compared to AgNO3 (12%), BRZ (50%) and LiCl (0%) alone (Fig. ). To examine ethylene biosynthesis in response to myo-inositol, we carried out quantitative RT-PCR; induction of AtACO3 was observed upon MI and ACC treatment. Inactivation of BIN2 by brassinosteroid signaling might lead to decreased myo-inositol synthesis in the dark, resulting in randomized growth that is antagonized by supplementation with MI. In support of the latter, we further investigated the effect of EBL in combination with LiCl on etiolated seedlings and found excessive randomization even at 100 nM EBL with 10 mM LiCl; a similar result was observed in the combinatorial assay with EBL and AgNO3. Enhanced randomization of etiolated seedlings upon treatment with AgNO3 has also been reported by Gupta et al.22. This study indicates that brassinosteroid signaling decreases the ethylene response via inactivation of MIPS and results in increased hypocotyl growth. However, treatment with a saturating concentration of EBL results in antagonism, probably due to a feedback mechanism40. The involvement of brassinosteroid in hook formation directed us to investigate the interplay between myo-inositol phosphate synthase and brassinosteroid in hook formation.
This indicates the existence of a crosstalk between ethylene, brassinosteroid and auxin at the MIPS protein level, making myo-inositol crucial for the hook as well as for proper hypocotyl development. We checked the response of etiolated seedlings to a combination of BRZ and MI and found hook formation, in contrast to etiolated seedlings grown on BRZ alone, which suggests that MIPS acts upstream of brassinosteroid in hook formation and substantiates the role of MIPS in ethylene synthesis. The reported phenotypes of the Atmips1 and Ataux1 mutants, along with the altered trafficking of PIN2 protein in Atmips111, led us to investigate the relationship of myo-inositol and auxin during hook formation. We carried out a combinatorial assay and found agravitropic behavior of etiolated seedlings with increasing IAA concentration; MI supplementation, however, resulted in a distinct decrease in hook angle and gravitropic growth. A hookless phenotype is observed upon over-accumulation of active auxin (IAA), mutation of polar auxin transporters and treatment with inhibitors of polar auxin transport8. Previous reports also indicate a crosstalk between ethylene and auxin in hook formation44, as inhibition of hook formation occurs upon NPA treatment in the eto1 and ctr1 mutants34, and restoration of the hook occurs in ethylene-insensitive mutants upon auxin treatment25. In the present study, we observed that myo-inositol was able to antagonize the effect of high levels of IAA at 1 μM and 10 μM concentrations. We thus conclude that myo-inositol phosphate synthase is involved in maintaining optimal levels of auxin in the hook region via proper localization of PIN proteins and via ethylene synthesis, which results in differential accumulation of auxin. TIBA is an auxin transport inhibitor which perturbs auxin efflux; therefore, plants treated with TIBA show an agravitropic phenotype38. As expected, a hookless phenotype, along with agravitropic growth, was observed when seedlings were grown in the dark on media supplemented with TIBA.
Subsequent MI supplementation could not evoke hook formation but resulted in shortening of the hypocotyl. This suggests that differential auxin distribution acts far downstream of MIPS in hook formation19. Given the importance of differential auxin accumulation in hook formation, myo-inositol and differential auxin distribution are both necessary for hook formation, and their actions are additive in nature, as we observed additive effects of the inhibitors in combination. Another observation concerned hypocotyl length: brassinosteroid is required for hypocotyl growth, as a decrease in hypocotyl length was seen in BRZ-treated etiolated seedlings in combination with AgNO3 and TIBA, whereas MI antagonized the effect of brassinosteroid, as an increase in hypocotyl length was observed in LiCl- and BRZ-treated etiolated seedlings. Previous reports also suggest that hypocotyl growth is associated with brassinosteroid and that ethylene antagonizes the brassinosteroid effect on hypocotyl growth22. MI is therefore one of the important regulators of apical hook formation and hypocotyl growth, and a model based on our findings and published data has been proposed. Seeds were sterilized with sodium hypochlorite for 10 min and washed three times with RO water. Seedlings were grown on half-strength MS media (DUCHEFA BIOCHEMIE) without sucrose, supplemented with different concentrations of MI, EBL, IAA, ACC, LiCl, BRZ, TIBA, and AgNO3 as specified. Seeds were then cold stratified and exposed to 12 h of light to stimulate uniform germination. Plates were wrapped with aluminum foil and then transferred to a growth chamber for 5 days at 22 ± 1 °C48. Apical hook angle was measured by taking the hypocotyl as a reference. When the hook opens up, it creates a straight line; we considered this as 180° and opening of the hook as an increase in hook angle. We measured the acute angle formed between the cotyledon and hypocotyl, i.e., the inner edge of the apical hook.
The photographs are of 5-day-old etiolated seedlings. Apical hook angle and hypocotyl length were measured using the ImageJ software. The Atmips1 line was confirmed using the left primer (LP), right primer (RP), and T-DNA-specific primer (LBb1.3) listed in the Supplementary Table. RNA was isolated from the different plant tissues (control and treated samples) using the RNeasy plant mini kit. 5-day-old control and treated etiolated seedlings were ground with liquid nitrogen and further processed according to the kit manual. In-column DNase treatment was done to remove genomic DNA contamination. The quality and quantity of RNA samples were assessed by gel electrophoresis and NanoDrop. 2 μg of RNA was used to make cDNA using the High-Capacity cDNA Reverse Transcription Kit (THERMOFISHER SCIENTIFIC), and the SuperScript III First-Strand Synthesis System (THERMOFISHER SCIENTIFIC) was used for full-length cDNA synthesis. SYBR green PCR master mix (THERMOFISHER SCIENTIFIC) was used for qPCR analysis with the primers listed in the Supplementary Table. Arabidopsis etiolated seedlings harboring the MIPS1 promoter fused with the Egfp:uidA gene were checked according to the protocol described by Jefferson et al.26. 5-day-old Arabidopsis seedlings were harvested and dipped in GUS staining buffer for 24 h at 37 °C, and plant samples were then rinsed with 70% ethanol to remove chlorophyll from the stained tissue. 5-day-old etiolated seedlings grown on media supplemented with MI, ACC, AgNO3, EBL, and BRZ were GUS stained, and staining was observed using a Leica M205 A stereo microscope. β-Glucuronidase activity was assayed in 5-day-old seedlings. All values reported in this work are the average of at least two to three independent biological replicates having at least 15 seedlings each. Error bars represent SE.
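The qPCR measurements described above are commonly summarized as relative expression with the 2^(−ΔΔCt) method; a minimal sketch, in which all gene names and Ct values are hypothetical illustrations rather than data from this study:

```python
# Relative expression via the 2^(-ddCt) (Livak) method.
# All Ct values below are hypothetical illustrations.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene in treated vs. control samples,
    normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize treated
    d_ct_control = ct_target_control - ct_ref_control   # normalize control
    dd_ct = d_ct_treated - d_ct_control                 # delta-delta Ct
    return 2 ** (-dd_ct)

# Example: the target amplifies 2 cycles earlier after treatment -> ~4-fold up
fold = relative_expression(22.0, 18.0, 24.0, 18.0)
print(fold)  # 4.0
```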
Statistical differences between control and each treatment were analyzed using Student's t test with a paired two-tailed distribution. Supplementary Information

Wireless networks have been widely deployed with a high demand for wireless data traffic. The ubiquitous availability of wireless signals brings new opportunities for non-intrusive human activity sensing. To enhance a thorough understanding of existing wireless sensing techniques and provide insights for future directions, this survey conducts a review of the existing research on human activity sensing with wireless signals. We review and compare existing research on wireless human activity sensing from seven perspectives: the types of wireless signals, theoretical models, signal preprocessing techniques, activity segmentation, feature extraction, classification, and application. With the development and deployment of new wireless technology, there will be more opportunities for sensing human activities. Based on the analysis of existing research, the survey points out seven challenges in wireless human activity sensing research: robustness, non-coexistence of sensing and communications, privacy, multi-user activity sensing, limited sensing range, complex deep learning, and lack of standard datasets. Finally, this survey presents four possible future research trends, including new theoretical models, the coexistence of sensing and communications, awareness of sensing on receivers, and constructing open datasets to enable new wireless sensing opportunities on human activities. The rapid development and pervasiveness of wireless networks have stimulated a surge in relevant research on wireless sensing, including detection, recognition, estimation, and tracking of human activities. Wireless sensing reuses the wireless communication infrastructure, so it is easy to deploy and has a low cost.
Compared to sensor-based and video-based human activity sensing solutions, wireless sensing is not intrusive and raises fewer privacy concerns. Specifically, video-based sensing is restricted to line-of-sight (LoS) and good lighting conditions and raises more privacy concerns. Sensor-based sensing incurs extra cost due to additional sensors, as well as some inconvenience for users in wearing them. During the propagation of a wireless signal from the transmitter to the receiver, the signal is affected by obstacles in the transmission space, resulting in attenuation, refraction, diffraction, reflection, and multipath effects. Therefore, wireless signals arriving at the receiver carry environmental information. Human activity affects wireless signal propagation, and this effect can be captured inside the received signals. Since different activities lead to different patterns inside wireless signals, these patterns can be used for different wireless sensing applications. Recent research has applied wireless sensing to motion detection, activity recognition, action estimation, and tracking. Various wireless sensing applications target their specific purposes and use unique signal processing techniques and recognition/estimation algorithms.
To enhance a thorough understanding of existing wireless sensing techniques and provide insights for future directions, this survey conducts a review of the existing research on human activity sensing with wireless signals. We provide a comprehensive review of human activity sensing with wireless signals from seven perspectives: wireless signals, theoretical models, signal preprocessing techniques, activity segmentation, feature extraction, classification, and application. We discuss the future trends in human activity sensing with wireless signals, including new theoretical models, the coexistence of sensing and communications, awareness of sensing on receivers, and constructing open datasets. There are some surveys on wireless sensing with specific wireless signals for specific application scenarios. Some surveys [2] focus on a single signal type, while many surveys [5,6,7,8] focus on specific application scenarios; Liu et al. review the broader literature. This survey differs from previous surveys on three points. Firstly, it expands the wireless signal types for human activity sensing and describes the pros and cons of each type of wireless signal for sensing, including RFID, FMCW, Wi-Fi, visible light, acoustic, LoRa, and LTE. Secondly, this survey provides a comprehensive summary of the models between human activity and wireless signals and a detailed comparison of signal pre-processing, signal segmentation, feature extraction, and classification for existing wireless sensing studies. Thirdly, the survey analyzes the potential challenges and points out future trends to enhance wireless sensing capabilities. Radio Frequency Identification (RFID) is a communication technology for contactless two-way communication to identify and exchange data. In general, an RFID system consists mainly of low-cost tags and readers. The tags contain built-in coils and chips. The reader sends out a signal at a specific frequency.
When the tag is close enough to the reader, electromagnetic induction in the coil generates electrical energy after the tag receives the transmitted signals, and the chip transmits the stored information through the antennas. The reader receives and recognizes the information sent by the tag, then delivers the identification results to the host. The tags can be classified according to internal electrical energy and frequency. On the one hand, tags can be divided into active and passive tags according to whether they can communicate actively with the reader; the difference lies in the availability of internal electrical energy. On the other hand, in terms of frequency, tags can be divided into low-frequency, high-frequency, UHF, and microwave tags. Since the human body reflects RFID signals, many studies apply RFID to human activity sensing [11,12,13]. Pros: RFID uses the principle of electromagnetic induction, so its wireless sensing ability is less affected by the environment; it can be used even in harsh conditions. Cons: Much research values the low-cost nature of the tags; however, RFID solutions only work with the assistance of expensive readers. Frequency modulated continuous wave (FMCW) performs continuous modulation on the frequency of the transmitted signals. According to the pattern of the triangular waves, the distance of the object can be estimated by leveraging the time difference and frequency difference between the transmitted and received signals. The signal frequency difference is relatively low, generally in the kHz range, so the processing hardware is relatively simple and suitable for data acquisition and digital signal processing. FMCW signals are widely used in human sensing [16,17]. Pros: High sensitivity: Phases are extremely sensitive to small changes in the object position, which helps estimate the tiny vibration frequency of the target. High resolution: The wireless bandwidth determines the distance resolution.
FMCW radar usually has a large bandwidth, so it achieves a high distance resolution. Cons: The range of measurement is relatively short, and it is difficult to isolate the transmitted signal from the received signal. Wi-Fi infrastructures have been widely deployed nowadays; therefore, Wi-Fi has become a hot research direction in the field of human activity sensing. At present, there are two main metrics used in Wi-Fi sensing: one is received signal strength (RSS), and the other is channel state information (CSI). RSS represents the strength of the received signal. In general, the RSS value is inversely proportional to the signal propagation distance: as the propagation distance increases, the signal attenuation becomes more significant, resulting in a decrease in the RSS value measured by the receiver. At present, most commercial Wi-Fi devices support obtaining RSS from the MAC layer, which measures the quality of the channel link. Sigg et al. adopt RSS for human activity sensing. Channel state information (CSI) was introduced with the IEEE 802.11n standard. Its core technologies include multiple-input multiple-output (MIMO) and orthogonal frequency-division multiplexing (OFDM). After dividing the limited spectrum resources into subcarriers, space-time diversity is applied to reduce the noise interference of the signal in space, and the communication capacity increases when using multiple antenna pairs. CSI represents the frequency response of each subcarrier of every antenna pair. Obtaining CSI requires particular types of wireless network cards. H represents the CSI information in MIMO-OFDM channels, which is a four-dimensional volume. N represents the antenna number at the transmitter, and M is the antenna number at the receiver. The first two dimensions represent the spatial domain. The third dimension belongs to the frequency domain, which represents the number of subcarriers under each antenna pair.
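The four-dimensional CSI volume H described above can be pictured as a complex array; a minimal sketch with hypothetical dimensions (3 transmit antennas, 3 receive antennas, 30 subcarriers, 100 time samples) and random data in place of real measurements:

```python
import numpy as np

# Hypothetical CSI volume H: (N tx antennas, M rx antennas, subcarriers, time)
N, M, K, T = 3, 3, 30, 100
rng = np.random.default_rng(0)
# CSI entries are complex: amplitude attenuation plus phase rotation
H = rng.standard_normal((N, M, K, T)) + 1j * rng.standard_normal((N, M, K, T))

# Amplitude of subcarrier 0 over time for antenna pair (tx=0, rx=1)
amp = np.abs(H[0, 1, 0, :])          # shape (100,)
# Phase across all subcarriers at the first time sample for the same pair
phase = np.angle(H[0, 1, :, 0])      # shape (30,)
print(amp.shape, phase.shape)
```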
The number of subcarriers obtained with the Atheros CSI Tool is 56, and it is 30 subcarriers with the Intel CSI Tool. The last dimension indicates the time domain. Due to the fine-grained information provided by Wi-Fi CSI, the use of Wi-Fi motion sensing has become a hot topic in recent years [20,21,22]. Pros: Wi-Fi infrastructures have been widely deployed. Due to the fine-grained information provided by CSI, Wi-Fi can sense tiny movements such as finger gestures. Cons: Wi-Fi cannot support motion sensing and communication at the same time. Moreover, Wi-Fi is relatively less robust to environment changes than other signals. With the increasing applications of visible light communication (VLC), several studies conduct human tracking with visible light [24,25]. Pros: Low cost: VLC uses low-cost, high-efficiency photodiodes (LEDs), which can reuse the existing lighting infrastructure. High transmission efficiency: VLC transmission is fast and not subject to electromagnetic interference. Cons: Deployment effort: For perception accuracy, hundreds of photodiodes need to be deployed. High maintenance costs: Photodiodes age fast and have a weak anti-fouling ability; to ensure the perception accuracy of human actions, old photodiodes must be replaced in time, resulting in higher maintenance costs. Vulnerable to ambient light: Different intensity levels of ambient light may push photodiodes into the saturation region, affecting the accuracy of motion perception. The speakers and microphones of commercial off-the-shelf smart devices can generate and receive continuous sound waves. Human motion may affect the propagation of sound waves and create a phase difference or Doppler shift in the received sound waves. By analyzing the received sound waves, researchers may derive the movement distance or corresponding direction of the human motion, which makes acoustic-based human sensing possible [27].
Pros: Due to the lower propagation speed compared to RF signals, acoustic sensing can achieve millimeter-level accuracy. Cons: Acoustic signals are vulnerable to interference from other signals in the band, so the choice of user scenarios is restricted, and noise in the environment affects the accuracy of motion estimation. LoRa is a radio frequency transmission technique based on a spread spectrum modulation derived from chirp spread spectrum technology, which enables long-range transmissions with low power consumption. LoRa offers a long communication range of up to several kilometers, with the ability to decode signals as weak as −148 dBm. Pros: LoRa has a high penetration capacity and a wide communication range. The transmission distance of LoRa signals extends to about 3–5 times the traditional radio communication distance, so it can be applied to the perception and detection of a wide range of targets. Cons: The longer sensing range implies that the interference range is also longer due to the higher signal receiving sensitivity. Extracting human motion from the received signal is more complex because of the interference of many unrelated objects during sensing. LTE signals have almost seamless coverage everywhere, which can be used in wireless sensing as an easy-to-receive signal source. The movement of the human body may cause a change in the CSI of LTE signals, so LTE signals can help in human sensing [33]. Pros: The base stations transmitting LTE are widely distributed, so LTE signals are easy to receive both indoors and outdoors. LTE signal reception is stable and not easily disturbed by other signals, so it can be a stable and reliable signal for human sensing. Cons: LTE base stations are far apart, and the signal propagation has long delays and offsets, so the accuracy of human localization based on LTE is still questionable.
Besides, LTE transmission contains other unrelated information, so LTE-based motion sensing requires specialized algorithms to reduce noise and separate signals. In order to apply wireless signals to sense a variety of human activities, the most critical issue is to understand the relationship between human behaviors and wireless signals, i.e., the first question is how human motion affects the propagation of wireless signals. In a typical indoor environment, the signal travels along multiple paths, and the channel impulse response (CIR) characterizes the amplitude attenuation, phase shift, and delay of the nth path. In the time domain, the received signal is the convolution of the transmitted signal with the CIR. Channel frequency response (CFR) is the frequency-domain form of CIR, which represents the distortion that occurs in the frequency domain of the wireless signals. CFR can be obtained by performing a Fourier Transformation (FFT) on the CIR, as shown in Equation (3). Accordingly, in the frequency domain, the received signal spectrum is the product of the transmitted signal spectrum and the CFR. The variables in the above formula describe what the receiver (RX) observes. Motion detection determines whether human motion exists, which often relies on coarse speed estimation in signals to find the speed change caused by human movement. Action recognition needs to identify the difference between multiple types of actions, so it needs more fine-grained speed information and the distance range and direction that an action spans. Motion tracking needs to locate the position, direction, and distance to the receiver. Thus, this paper reviews the relationship model between the phase, frequency, and amplitude of wireless signals and the speed, direction, and distance of human activity. There are three critical parameters inside Equation (3): the amplitude, the frequency, and the phase. As human actions may change the length of some signal propagation paths, resulting in phase offsets, the phase information is used to deduce human motion. Phase difference vs. human velocity: The higher the velocity, the more intense the fluctuation of the phase difference. However, the phase difference can only roughly estimate human actions whose velocity varies significantly.
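The CIR-to-CFR relationship described above can be illustrated numerically; a minimal sketch in which the multipath taps (delays, attenuations, phases) are hypothetical, building a discrete CIR and taking its FFT to obtain the CFR:

```python
import numpy as np

# Hypothetical discrete channel impulse response: a few multipath taps,
# each with its own attenuation a_n, phase theta_n, and delay (tap index).
L = 64                      # CIR length in samples
h = np.zeros(L, dtype=complex)
paths = [(0, 1.0, 0.0),     # (delay tap, attenuation, phase in rad)
         (3, 0.5, 1.2),
         (7, 0.2, -0.7)]
for tap, a_n, theta_n in paths:
    h[tap] += a_n * np.exp(-1j * theta_n)

# The CFR is the FFT of the CIR: one complex value per frequency bin,
# analogous to the per-subcarrier CSI.
H = np.fft.fft(h)

# Sanity check: multiplying spectra in the frequency domain equals
# circular convolution of a transmitted signal x with h in time.
x = np.random.default_rng(1).standard_normal(L)
y_freq = np.fft.ifft(np.fft.fft(x) * H)
y_time = np.array([sum(h[k] * x[(n - k) % L] for k in range(L)) for n in range(L)])
print(np.allclose(y_freq, y_time))  # True
```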
The phase difference can be used to separate walking and running from static actions such as sitting, lying, and standing. For the action recognition problem, with the assistance of feature extraction, the phase difference helps in distinguishing multiple actions [70,71,72]. Human movement causes a change in the length of the reflection path, resulting in frequency shifts. By measuring the signal frequency, the direction, speed, and distance involved in human movement can be deduced. Frequency vs. human velocity: The relationship between human speed and signal frequency can be derived using the Doppler effect model. The Doppler effect indicates that when the human body moves relative to the transceiver, it produces a higher frequency when approaching and a lower frequency when moving away from the transceiver. When the speed is measurable, the Doppler shift fDoppler reflects the radial velocity of the body. Frequency vs. human direction: In a fixed place, people moving at the same velocity in different directions introduce distinct Doppler shifts. It is not possible to derive the tangential velocity using only one pair of transceivers; WiDance proposes a method to infer the movement directions. Frequency vs. human velocity and distance: Accurately extracting a phase from an analytical signal requires that the signal contain only one frequency component at any given time; the chirp signal is an example of this type. Human sensing with chirps covers two cases: 1. distance measurement when a person is still; 2. speed and distance measurement during movement, where the distance to the receiver dRX and the speed of human movement vhuman can be calculated as in Equations (15) and (16). The FMCW chirp is usually used in conjunction with an antenna array to solve human tracking problems. Amplitude vs. human distance: The Fresnel zone model can be used to deduce the relationship between amplitude and the distance of human motion.
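The Doppler-shift relationship described above can be sketched numerically; for a reflection path with a co-located transceiver, a common approximation is f_D ≈ 2·v_radial/λ, since the path length changes at twice the radial speed (the carrier frequency and speed below are hypothetical):

```python
# Doppler shift of a signal reflected off a moving body.
# For a reflection path with co-located transmitter and receiver, the
# path length changes at twice the radial speed, so f_D ~= 2 * v / lambda.
C = 3e8  # speed of light, m/s

def doppler_shift(v_radial, carrier_hz):
    """Approximate Doppler shift (Hz) for a reflected RF signal.
    Positive v_radial means moving toward the transceiver."""
    wavelength = C / carrier_hz
    return 2.0 * v_radial / wavelength

# Hypothetical example: walking toward a 5.8 GHz Wi-Fi link at 1 m/s
fd = doppler_shift(1.0, 5.8e9)
print(round(fd, 1))  # 38.7 Hz
```

The shift is tens of hertz for walking speeds, which is why motion-induced frequency components sit far below the carrier and can be isolated with low-pass analysis.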
The Fresnel zones are a series of concentric ellipsoids with two foci corresponding to the transmitter and receiver antennas. When a reflection path is longer than the LoS path by one wavelength λ, the phase difference between the two signals is 2π. Zhang et al. show that as a person crosses successive Fresnel zone boundaries, the received amplitude alternates between constructive and destructive superposition with the LoS path. Amplitude vs. human direction: The calculation of human direction from amplitude measurements has two models, as follows. 1. Fresnel zone model: A single-frequency carrier cannot deduce the direction of human motion. With Wi-Fi MIMO-OFDM technology, multiple subcarriers can help calculate the direction of human action. Each subcarrier creates its Fresnel region independently. These multi-frequency Fresnel zones share the same foci and shape but have different sizes; the subcarrier with a shorter wavelength has a smaller ellipsoid than its neighbor subcarriers. Therefore, the peaks and valleys of different subcarrier waveforms appear at different times, causing their waveforms to have phase differences. 2. Antenna array: The signal arrives at different antennas with a small time difference τ, so there is a phase difference among the signals at different antennas. By measuring the received signals on every antenna, the power at any given angle θ can be obtained through Equation (18). The angle that maximizes the power P is the arrival angle of the signal, thereby deriving the direction of human motion, as rayTrack does. 3. Amplitude vs. human velocity: CARM models how the length of the kth path changes when a person moves a small distance from time 0 to a later instant, which produces a measurable change in the complex received value. Due to the multipath effect, the received signals are the superposition of the propagated signals along different paths. If each reflection path affected by human motion can be resolved from the received signals, it will definitely improve the performance of passive human localization and motion tracking.
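The Fresnel zone geometry above can be made concrete with the standard boundary-radius approximation, where the nth boundary collects points whose reflection path exceeds the LoS path by n·λ/2 (the link geometry and wavelength below are hypothetical):

```python
import math

# Radius of the nth Fresnel zone boundary at a point between the two
# antennas (standard approximation): the boundary collects points whose
# reflection path exceeds the LoS path by n * lambda / 2.
def fresnel_radius(n, wavelength, d1, d2):
    """nth-zone radius at distances d1, d2 (meters) from the two antennas."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

# Hypothetical 5 GHz Wi-Fi link (lambda ~= 6 cm), midpoint of a 4 m link
lam = 0.06
r1 = fresnel_radius(1, lam, 2.0, 2.0)
r2 = fresnel_radius(2, lam, 2.0, 2.0)
print(round(r1, 3), round(r2, 3))  # successive zone radii grow as sqrt(n)
```

The innermost zones are only tens of centimeters wide at Wi-Fi wavelengths, which is why small body movements across zone boundaries produce visible amplitude swings.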
mD-Track models the received signal Y(t) as the superposition of signals along L distinct paths, as in Equation (22). The transmit steering vector g(φ) characterizes the phase relationship of the signal coming out of the transmitting antennas, while the receive array steering vector c characterizes the phase relationship of the signal arriving at the receiving antennas. H is the CSI matrix, and LTF is the preamble according to the 802.11n standard. U(t) is the residual signal after eliminating the estimated signals from the received signals. mD-Track points out that the proposed iterative optimization is an expectation-maximization problem belonging to the EM family [122]. This section presents the signal preprocessing methods for motion sensing with wireless signals in recent years, including noise reduction, calibration, and redundancy removal. The raw signals extracted from the PHY layer are very noisy due to hardware defects or particular noise in the environment. To use wireless signals for human motion sensing, eliminating as much noise as possible is the first step. Time-domain filtering: The moving average filter and the median filter are simple methods for time-domain analysis: each data point is replaced by the average or median value of adjacent data points. For example, SEARE adopts such filtering. A related approach computes the median mi and standard deviation σi of adjacent data points: if |xi − mi|/σi is larger than a predefined threshold, the current point xi is viewed as an outlier and replaced with the median mi. Some outliers may not be filtered and will affect subsequent processing. The local outlier factor (LOF) is employed to find anomalous points by measuring the local density of the collected signals; for example, WiSome uses LOF, and EI uses a related outlier-removal approach. Frequency-domain filtering: The frequency caused by human motion is usually much lower than the frequency of impulses and burst noises.
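The time-domain outlier rule described above (|xi − mi|/σi compared against a threshold, with the outlier replaced by the local median) can be sketched as follows; the window size, threshold, and input signal are hypothetical choices for illustration:

```python
import numpy as np

# Sliding-window outlier replacement as described above: a point is an
# outlier if it deviates from the local median by more than `thresh`
# local standard deviations, and is replaced by that median.
def median_outlier_filter(x, half_window=5, thresh=3.0):
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = x[lo:hi]
        m_i = np.median(window)
        sigma_i = np.std(window)
        if sigma_i > 0 and abs(x[i] - m_i) / sigma_i > thresh:
            x[i] = m_i  # replace the outlier with the local median
    return x

# A burst spike in an otherwise smooth amplitude stream is removed
sig = np.ones(20)
sig[10] = 50.0           # impulsive noise
clean = median_outlier_filter(sig)
print(clean[10])  # 1.0
```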
In order to choose signals in a specific frequency band, filters from frequency-domain analysis are applied. Butterworth low-pass filters and passband filters are widely used to remove high-frequency noise; WiChase uses a Butterworth filter. Due to the inconsistency among filtered signals, calibration is the second step of signal preprocessing. Interpolation: The receivers may obtain non-uniform sequences due to weak signals through walls or from non-LoS paths, which suffer packet loss and transmission delays. For a relatively stable sampling frequency, the received signal sequence often needs to be interpolated; RT-Fall uses interpolation for this purpose. Normalization: The imbalance of signal distribution comes from the different value ranges of the various dimensions. Normalization unifies the value scale by normalizing from 0 to 1 proportionally; Motion-Fi uses such normalization. Phase calibration: The filtered phase is folded due to the inherent phase periodicity, so the raw phase needs to be transformed into the real value. After the above pre-processing, the signal sequence still contains some redundant information that is not related to human activity. The removal of such unnecessary details reduces computation complexity and sifts out the signal segment tightly associated with human activities. PCA-based subcarrier selection: The CSI measurements are highly correlated among subcarriers, and different subcarriers have different sensitivities for a given activity. Thus, principal component analysis (PCA) is applied to reduce dimensionality and select informative components. Existing research holds different views on principal component (PC) selection. Some solutions select the first PC, which has the highest eigenvalue among all the PCs and may correspond to the features caused by human motions [58]. On the contrary, other solutions discard the first PC as noise-dominated and use the subsequent PCs. Static environment partial removal: The static signal propagation paths are often treated as a constant over a short period, which is not affected by human activity.
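PCA over correlated subcarriers, as described above, can be sketched with a plain SVD; the CSI amplitude matrix below is synthetic (a shared motion pattern scaled differently per subcarrier, plus noise), and all sizes are hypothetical:

```python
import numpy as np

# PCA over CSI subcarriers: treat each subcarrier as a feature, project
# onto the principal components, and keep a few PCs. Synthetic data.
rng = np.random.default_rng(0)
T, K = 500, 30                          # time samples x subcarriers
motion = np.sin(np.linspace(0, 20, T))  # shared motion-induced pattern
X = np.outer(motion, rng.uniform(0.5, 1.5, K))  # correlated subcarriers
X += 0.05 * rng.standard_normal((T, K))         # per-subcarrier noise

Xc = X - X.mean(axis=0)                 # center each subcarrier
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:3].T                     # first three principal components

# The shared motion pattern is captured almost entirely by the first PC
explained = S**2 / np.sum(S**2)
print(pcs.shape, explained[0] > 0.9)
```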
Thus, the static component inside the received signals is often removed by subtracting the constant value measured in the static environment without human actions [22,65]. Multipath mitigation: If the surrounding environment changes, such as a chair being moved to another place or a person moving around, the received signals will differ due to the changed multipath effect on signal propagation, resulting in signal pattern distortion for a given activity; WiFinger addresses this distortion. After pre-processing, the remaining signals comprise motion segments and non-motion intervals. The non-motion intervals hinder discovering the characteristics of signals affected by human movement, so precise segmentation of every single action from the signal sequence is the premise of accurate feature extraction and activity recognition. Because human action may induce high fluctuations in the received signals, action segmentation is mainly based on thresholds. Hence, the action segmentation methods can be classified into two categories: time-domain-based and frequency-domain-based methods. According to the metrics, time-domain methods use thresholds on phase difference, amplitude, statistical features, energy, and similarity comparison. Phase difference threshold: The phase difference threshold implicitly makes use of the spatial information between antenna pairs at the receiver; MoSense applies such a threshold, computed between the kth and (k+1)th subcarriers, with the threshold T chosen as in Equation (26). In general, threshold cutting based on phase difference is used to separate walking from non-walking activities (such as sitting and standing). Amplitude threshold: Amplitude thresholds are widely used in action segmentation with the advantage of low computation; WIAG calculates amplitude thresholds to segment gesture sequences [86,108]. In order to reduce the impact of environmental changes, some cutting modules extract features from the amplitude stream, as Zhang et al. do.
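The static-component removal described above can be sketched directly: subtract, per subcarrier, the constant profile measured in an empty environment so that only the motion-induced part remains (all signals below are synthetic, hypothetical data):

```python
import numpy as np

# Static-component removal: subtract the per-subcarrier baseline measured
# in an empty (static) environment, keeping only the dynamic part.
rng = np.random.default_rng(0)
K, T = 30, 200
static_profile = rng.uniform(5.0, 10.0, size=(K, 1))  # empty-room amplitudes
motion_part = 0.5 * np.sin(np.linspace(0, 10, T))     # motion-induced variation

measured = static_profile + motion_part               # broadcasts to (K, T)
baseline = static_profile                             # recorded beforehand
dynamic = measured - baseline                         # static part removed

print(np.allclose(dynamic[0], motion_part))  # True
```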
Statistics threshold: To avoid misjudgments caused by outliers, statistical thresholds are used in activity segmentation: significant variation within one sliding window indicates the presence of human activity. The statistical thresholds include variance, coefficient of variation, correlation, and outlier measures. The amplitude variance is computed over K subcarriers for each sliding window. It is a robust feature to detect state transitions between actions, as dynamic gestures lead to noticeable fluctuations in amplitude variance among sliding windows. Because an in-place action involves less physical movement than walking, the two result in different variances [81]. The coefficient of variation (CV) threshold relies on the fact that CV can balance differences caused by environmental changes; Equation (30) defines it as CV = σ/μ, the ratio of the standard deviation to the mean. Because this ratio is relatively stable across environmental conditions, Gong et al. apply a CV-based threshold. The correlations between subcarriers are also used as thresholds to detect the motion segment: the eigenvectors among subcarriers change randomly in the absence of human movement; on the contrary, when human actions exist, nearby subcarriers become similar and correlated [46,87], which Wang et al. further exploit for segmentation. The local outlier factor (LOF), defined in Equation (34), measures how isolated a point o is relative to its nearby points and is used to detect the transition out of the still state [76,78]. Other work applies the Hilbert-Huang Transform (HHT) to calculate the ratio of real-time energy to the energy sum of each window; it identifies the start and end of driver motions to check whether the driver is fatigued. According to the Doppler model, there is a clear frequency shift when human motion appears. Hence, it is feasible to cut the action segment with a frequency threshold. Such methods need the assistance of time-frequency domain analysis and can be further divided into three categories: peak-based, energy-based, and spectrum-based. Peak threshold: WiDance computes peaks in the frequency spectrum to delimit actions. Energy threshold: Guo et al. apply the window energy as the threshold.
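Sliding-window segmentation with a variance threshold, as described above, can be sketched as follows; the window size, threshold, and signal are hypothetical:

```python
import numpy as np

# Variance-threshold segmentation: windows whose amplitude variance
# exceeds the threshold are marked as containing motion. Synthetic data.
def motion_windows(sig, win=50, var_thresh=0.05):
    flags = []
    for start in range(0, len(sig) - win + 1, win):
        window = sig[start:start + win]
        flags.append(bool(np.var(window) > var_thresh))  # True = motion
    return flags

rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(100)      # static environment
active = np.sin(np.linspace(0, 12, 100))     # activity-induced swing
sig = np.concatenate([quiet, active, quiet])

print(motion_windows(sig))  # [False, False, True, True, False, False]
```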
Similarity threshold: Kullback-Leibler (KL) divergence leverages the fact that the distribution of amplitudes within each window should be similar when there are no human actions; conversely, the amplitudes change rapidly and show a completely different distribution during human motions. KL divergence is defined in Equation (37) as DKL(P||Q), which represents the loss of information when fitting the real probability distribution P using the theoretical distribution Q. WiFit uses the KL divergence threshold to cut out fitness activities and rest intervals; these fitness exercises contain the concentration standing bicep curl, seated triceps press, and flat bench bicep curl, each of which includes a unique arm pattern. Feature extraction is the core step in motion recognition, which directly affects recognition robustness and accuracy, because human action is often buried inside the received signals. Time-domain features: Most time-domain features directly apply statistics. Calculating time-domain features usually takes amplitude, phase, or phase difference as input, whose computation costs are small. The statistical features often characterize the shape of the received waveform in the time domain, and a large number of studies use them [90,108]. Frequency-domain features: Frequency-domain analysis may extract signal characteristics at a deeper level than the time domain. Compared to time-domain methods, frequency-domain analysis usually requires a larger amount of computation. Typically, the signal is transformed into the frequency domain, and then some useful parameters are extracted as frequency-domain features. Frequency-domain features describe the magnitudes of the various frequency components contained in the mixed signal; HeadScan extracts such features. Time-frequency domain features: Time-frequency domain analysis describes the proportion of specific frequency components that the signals contain at different times.
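The KL divergence of Equation (37), D_KL(P||Q) = Σ P(x)·log(P(x)/Q(x)), applied to amplitude histograms of two windows, can be sketched as follows; the windows and histogram bins are hypothetical:

```python
import numpy as np

# KL divergence between amplitude histograms of two windows. A small
# epsilon avoids log(0) for empty histogram bins. Synthetic data.
def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()          # normalize to distributions
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
bins = np.linspace(-4, 4, 21)
still = np.histogram(rng.normal(0, 0.2, 1000), bins)[0]   # narrow: no motion
moving = np.histogram(rng.normal(0, 1.5, 1000), bins)[0]  # spread out: motion

# Two identical still windows coincide; a moving window diverges sharply
print(kl_divergence(still, still) == 0.0,
      kl_divergence(moving, still) > 1.0)
```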
Discrete wavelet transformation (DWT) is a representative of time-frequency domain analysis. DWT has a good trade-off between time and frequency resolution, so both high-speed and low-speed motion can be captured; WiMotion performs DWT for this purpose. Another analysis method in the time-frequency domain is the combination of empirical mode decomposition (EMD) and the Hilbert-Huang Transform (HHT). EMD is a self-adaptive signal processing method that decomposes data into intrinsic mode functions (IMFs), which are symmetric with respect to the local zero mean and have the same numbers of zero crossings and extremums. Each IMF represents a type of oscillation pattern embedded in the signal. By applying HHT to each IMF, the instantaneous frequency can be acquired; Mohammed et al. extract features in this way. Spatial domain features: For the application of human localization and tracking, it is essential to capture spatial information such as the direction and distance of the human body at a certain moment. AoA and ToF are two typical spatial features. By exploiting the antenna array, AoA can be derived from the phase difference of the arriving signals between multiple antennas. The features extracted from action segments are further applied in a classifier to recognize human activities. This section focuses on techniques for activity classification, including template matching, machine learning, and deep learning. In terms of training options, these classification methods can be divided into training-free, training-once, and multiple-times-of-training methods. Since template matching is a real-time, training-free method, its input should be a sufficiently pre-processed and segmented-out signal sequence. The template matching method has to pre-store the templates, which is not suitable for a large number of templates. So template matching is more applicable for recognizing actions with fewer categories and short time series per template.
Because human gestures have short durations, template matching has been widely used in gesture recognition and simple motion recognition. These methods calculate the distance between the action sequence and each known template and compare it against a similarity threshold: if the distance is less than the threshold, the action sequence is classified into the corresponding known type. According to whether the time series are of fixed length, these methods can be further divided into fixed-length and different-length template matching methods. Fixed length: Fixed-length template matching methods differ in the distance calculation they use. Euclidean distance is the simplest distance evaluation; WiGest, for example, applies it first. Compared to Euclidean distance, Earth mover\u2019s distance (EMD) can measure the similarity between two probability distributions: it calculates the minimal cost to transform one distribution into the other [81,89]. The Jaccard coefficient between two sets is the ratio of the size of their intersection to the size of their union, which compares similarities between finite sample sets; the higher the Jaccard coefficient, the higher the sample similarity. WIMU measures similarity this way. Different length: The lengths of two signal sequences for the same action often differ due to differences in duration, direction, and speed. The typical template matching method for different-length series is dynamic time warping (DTW) [74,91,93]. DTW solves the length problem by optimally calculating the distance between two series through stretching and alignment; for finger gesture classification, Mudra classifies gestures with DTW. Classification based on machine learning needs signal preprocessing and feature extraction as its basis. Since machine learning classification requires training a model, its time complexity is higher than that of template-based methods, and it requires a large training set to train the model.
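The DTW-based matching of different-length sequences described above can be sketched with the classic dynamic-programming recurrence. The gesture templates and the threshold below are invented for illustration, not taken from Mudra or any cited system.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences of
    possibly different lengths, computed by dynamic programming."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # stretch b
                                 cost[i, j - 1],       # stretch a
                                 cost[i - 1, j - 1])   # one-to-one match
    return float(cost[n, m])

def match_template(sequence, templates, threshold=5.0):
    """Return the closest stored template, or 'unknown' above threshold."""
    name = min(templates, key=lambda k: dtw_distance(sequence, templates[k]))
    return name if dtw_distance(sequence, templates[name]) < threshold else "unknown"

templates = {"push": [0, 1, 2, 3, 2, 1, 0],
             "swipe": [0, 2, 0, 2, 0, 2, 0]}
slow_push = [0, 0.5, 1, 1.5, 2, 2.5, 3, 2.5, 2, 1, 0]  # same gesture, slower
```

Because DTW stretches and aligns the two series, the 11-sample `slow_push` still matches the 7-sample `push` template, which is the point of using DTW for variable-duration gestures.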
Machine learning methods are suitable for solving multi-classification problems given training samples with ground-truth labels. SVM is widely used in various human activity detection and classification systems [127,130]. The decision tree (DT) outputs a simple if-else classification model; with the advantage of low computational cost, it is suitable for real-time activity recognition [57,85]. K-nearest neighbor (KNN) classifies by measuring the distance between feature vectors. The disadvantage is that K needs to be artificially pre-set, and the recognition accuracy may be reduced by an incorrect setting of K; on the other hand, the algorithm has low sensitivity to outliers. WiSome adopts KNN for recognition. K-means is an unsupervised learning algorithm that requires no ground-truth labels: it puts similar objects into the same cluster automatically, so messy data become organized after clustering; E-eyes, for example, applies K-means. Naive Bayes (NB) requires few parameters and is not sensitive to the missing-data problem. The hidden Markov model (HMM) estimates the joint probability distribution and calculates the posterior probability, which statistically represents the relationship between features and states. Sparse matrix representation indicates that almost all raw signals can be represented by a linear combination of a few basic signals. These basic signals, called atoms, are selected from an over-complete dictionary. The elements with non-zero coefficients in the sparse matrix reveal the main characteristics and intrinsic structure of the signal; the closer the value of a non-zero coefficient is to 1, the higher the signal similarity. Sparse matrix representation can be applied to motion recognition; HeadScan, for example, achieves recognition this way. Deep learning combines feature extraction and classification to achieve multi-classification of actions.
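The KNN scheme described above is only a few lines of code. The two-dimensional features (mean amplitude, amplitude variance) and the labels are toy values made up for illustration, not data from any cited system.

```python
import numpy as np
from collections import Counter

def knn_predict(x, train_feats, train_labels, k=3):
    """Majority vote among the k training samples whose feature
    vectors are closest (Euclidean distance) to x."""
    dists = np.linalg.norm(np.asarray(train_feats) - np.asarray(x, float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy per-segment features: (mean amplitude, amplitude variance).
feats = np.array([[0.10, 0.020], [0.12, 0.030], [0.11, 0.025],
                  [0.90, 0.400], [0.85, 0.380], [0.88, 0.410]])
labels = ["sit", "sit", "sit", "walk", "walk", "walk"]
```

The text's caveat about pre-setting K shows up directly here: with k larger than the smallest class, votes from the wrong cluster can flip the prediction, so accuracy depends on choosing K sensibly.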
Compared to machine learning-based methods, deep learning requires more training data to determine a large number of parameters, but it does not need a separate feature extraction phase; Guo et al. use a DNN in this way. The convolutional neural network (CNN) is a typical kind of DNN, whose neurons in neighboring layers are connected through convolution kernels. CNN has the characteristics of limiting the number of parameters and exploiting minimal local structures; Zhang et al. apply CNNs. RNN addresses the limitation that time series cannot be modeled by CNN. LSTM (long short-term memory) is a typical kind of RNN, which solves the long-term dependency problem; Wi-Multi proposes an LSTM-based system, and DFL adopts a deep learning approach as well. This survey divides human motion sensing into four types of applications: detection, recognition, estimation, and tracking. Human motion detection can be further divided into subtasks such as fall detection [50,73,92]. Human activity recognition can be further divided into categories such as hand/finger gesture recognition [111,135]. Finger gesture recognition is fine-grained recognition, which requires capturing tiny finger movement variations and accurately distinguishing these different subtle change patterns. Finger gestures mainly include digital finger gestures [46,69], among others. Daily human activities usually include running, walking, hand moving, bending, making phone calls, drinking, eating, typing, sitting down, and so on. Human movement directions mainly include left, right, front, back, left-back, left-front, right-back, and right-front. Fitness action recognition can be further divided into categories such as dumbbell exercises [74,88]. Estimation applications refer to systems that count the number of actions/steps after activity recognition or activity detection.
Estimation applications can be divided into walking step counting, fitness exercise counting, and similar tasks. For tracking applications, this survey mainly includes human motion tracking [71,129]; PinIt is a fine-grained tracking system. This section presents the challenges and future trends for both current and future human activity sensing solutions with wireless signals. From the discussion of the theoretical models and the signal processing pipeline, existing research shares the following common challenges. Robustness: All the theoretical models are based on multipath analysis to sift out the motion impact on the received signals, and most research adds limitations to the experimental environment to make this analysis tractable. First, there should not be other persons or moving objects around: their actions are also captured by the received signals, which makes it hard to isolate the target person\u2019s action effect. Second, for learning-based methods, the action performer often needs to stand at a fixed position relative to the sender and receiver of the signal; otherwise, the learning model fails to detect or recognize activities. Third, there are often specific areas required for activity recognition; for the Fresnel zone model, most research takes place at the boundary of the first 8\u201312 FFZs. However, it is impractical for users to calculate this specific place in reality. When targeting real scenarios of wireless motion sensing, all the above limitations should be eliminated, which may require new theoretical models relating human motion to wireless signals or novel signal processing methods. Non-coexistence of sensing and communications: Wireless infrastructures are designed for communications, not for sensing applications. The existing approaches require deploying and controlling both the sender and receiver of the wireless infrastructure.
Some sensing applications even require high-frequency continuous sinusoid signals to achieve high performance. This adds a burden on scarce bandwidth resources and reduces communication performance and efficiency; sending continuous sinusoid signals also affects the communications of nearby wireless devices. Potential privacy threat: Wireless activity sensing has the advantages of being non-intrusive and non-obtrusive; however, it still introduces some privacy concerns. Multiple user activity sensing: Wireless signals are sensitive to any movement in the sensing area, because any motion may change the multipath propagation of the wireless signals. When multiple persons share the same physical space, the received signals contain the combined impact of all the persons\u2019 motions. Some existing FMCW and antenna array-based solutions can track multiple persons. Limited sensing range: Although multiple types of wireless signals can be used in human activity sensing, the sensing range is still limited. For example, acoustic-based sensing has a sensing range of 1\u20132 m, while RFID and WiFi have a sensing range of 2\u20138 m. The sensing range of VLC is 3\u20137 m. While LoRa signals have a communication range of 10 km, the current sensing range is below 100 m. Moreover, applicable sensing systems are still lacking for outdoor environments due to the limited sensing range. Complex deep learning: Some CSI-based activity recognition applications exploit deep learning approaches, for they can automatically extract high-level features from CSI streams for classification. The deep learning approaches, however, require not only an extensive training set to train the underlying parameters of the learning network but also comparable computation and storage capacity to perform training.
Therefore, it adds to the burden on users to collect training samples, and training may not be feasible on resource-limited devices such as wearable and edge devices. Lack of standard datasets: Currently, most wireless activity sensing studies evaluate their performance using their own datasets. Researchers have to recruit volunteers to perform many types of actions in order to collect wireless signal streams. Moreover, the experimental environments are often chosen according to the particular targets of the applications. Consequently, the system performance often depends on the deployment and the collection process, which makes comparison among different studies difficult. This section presents future trends in addressing the above challenges and issues. New theoretical models: The existing models concentrate on the reflection of signals by the human body, which is captured through the multipath effect. Signal reflection from the human body to the receiver often requires specific positions and angles, which imposes limitations on the application environment. If, for example, the action were extracted from the received signals through a signal diffraction model, the restriction to specific positions might be eliminated: as long as the human performs activities close to the receiver, diffraction will create similar patterns inside the received signals. The diffraction model may also solve the robustness challenge, as the diffraction effect depends little on objects a certain distance away. If every user has her/his own wireless signal receiver, the diffraction effect will naturally separate the sensing space for each user, which also addresses the challenge of multiple users.
Moreover, a new theoretical model can guide the activity classification process, which may eliminate deep learning or reduce the complexity of applying it. Coexistence of sensing and communications: The major obstacle to the coexistence of sensing and communication is that current solutions need to control the sender of the wireless infrastructure and require specific continuous signals for sensing. If the wireless signals already present in space can be used directly for sensing, the sensing system may only need to listen, without controlling the sender of the infrastructure; then the coexistence of sensing and communications can be realized. Moreover, a wireless sensing solution with only receivers could use the mobile signal infrastructure, which has the advantage of ubiquitous coverage and tackles the sensing range limit. Awareness of sensing on receivers: The privacy concerns come from the fact that specific systems may make use of indicators in the received signals for sensing purposes. Tools to control and report on any usage of received signals other than communication are therefore important. Moreover, more research effort should concentrate on the signal receivers of smartphones. The reason is two-fold. First, smartphones are ubiquitous receivers, as people carry them all the time. Second, users are familiar with the privacy control procedures on smartphones, so awareness and control of sensing can build on these procedures. Constructing open datasets: It is still an open question how to construct standard datasets for wireless sensing research. When constructing open datasets, many factors have to be carefully chosen, including test environments, deployment of wireless transceivers, types of wireless signals, number of volunteers, differences among volunteers, action types, and sample sizes.
An open standard dataset will help accelerate wireless sensing studies and improve performance evaluation and comparison. Following the directions mentioned above, more research effort should be put into wireless sensing solutions that use just a smartphone as the receiver, directly exploiting ubiquitous mobile signals for sensing under the guidance of a new theoretical model relating human motion to wireless signals. This survey gives a comprehensive review of the background of wireless signals, the theoretical models from wireless signals to human actions, signal pre-processing techniques, signal segmentation techniques, feature extraction, activity recognition, and applications of wireless sensing. The article highlights seven challenges for wireless sensing of human activities: robustness, non-coexistence of sensing and communications, potential privacy threat, multiple user activity sensing, limited sensing range, complex deep learning, and lack of standard datasets. Finally, the survey points out four future research trends: new theoretical models, coexistence of sensing and communications, awareness of sensing on receivers, and constructing open datasets."} +{"text": "The results demonstrated that the individual, interpersonal, organizational, community, and policy levels were all associated with risky alcohol consumption. When devising interventions, policymakers should therefore take into consideration that variables from multiple levels of influence are at play. Students\u2019 capacities to change or maintain their alcohol consumption behaviors may be undermined if social settings, overarching environments, social norms, and policies are not conducive to their motivations and social expectations. Hazardous use of alcohol is a global public health concern; statistics suggest that it is particularly common in Europe, and among higher education students.
Although it has been established that various factors\u2014ranging from the individual to the overarching societal level\u2014are associated with misuse of alcohol, few studies take multiple levels of influence into account simultaneously. The current study therefore used a social ecological framework to explore associations between variables from multiple levels of influence and the hazardous use of alcohol. Data were obtained from a representative sample of higher education students from Flanders, Belgium. High-volume consumption of alcohol and risky single occasion drinking (RSOD) are common practices among higher education students [2]. Although most research has focused on the US, scientific attention aimed at students\u2019 alcohol use in Europe has increased [9]. The current study focuses on Flanders, one of the three regions of Belgium, using a representative sample of Flemish students. In Belgium, alcohol is considered part of the culture and is associated with numerous social activities such as dinners or gatherings with friends and family. One of the key social ecological models was described by Bronfenbrenner, who argued that human behavior is shaped by nested levels of environmental influence. Numerous studies have addressed potential factors that contribute to hazardous drinking among higher education students. A common observation across these studies is that various levels of influence exist, ranging from the individual level to the overarching societal level [4]. There is, thus, a dearth of research taking into account multiple levels of influence when approaching the complex social problem of hazardous alcohol use among higher education students. Such studies would allow researchers to compare the relative influence of these levels and enable them to estimate the effect of predictors, while taking into account an encompassing array of variables.
Considering that research addressing multiple levels of influence simultaneously may lead to the most effective interventions [27,28,29], such an encompassing approach was adopted here. The analyses are based on a substance use data collection project entitled \u2018Head in the clouds?\u2019 among students from all higher education institutions in Flanders (Belgium), conducted via an online survey. Institutions with a response rate of less than 5% were excluded from the dataset (this corresponds to 820 respondents); these students were removed from the anonymized dataset that we were given access to. The dataset used for analyses included 35,221 higher education students (15.9% response rate). For the current study, students aged 17 to 24 were selected in order to reflect the conventional student age range (n = 31,847). Next, we selected students (1) who reported having consumed alcohol in the year prior to questioning and (2) who responded to all items included in the dependent variable. Only those students who had no missing values on all variables were included in the statistical models, resulting in a final sample of 21,854 students between the ages of 17 and 24 years. Representativity of the data was assessed by comparing the sample distribution to the population distribution by means of information made available by the Flemish Ministry of Education and Training. The \u03c72 tests indicated that some subgroups (\u03c72 = 622.06) and first-year students (\u03c72 = 81.47) were overrepresented in our sample; these differences were adjusted using post-stratification weights. The weighted mean age of the students in our final sample is 20.62 (SD = 1.76). Our dependent variable is the short form of the Alcohol Use Disorders Identification Test, AUDIT-Consumption (AUDIT-C), which measures the quantity and frequency of drinking. The AUDIT-C was originally developed as a screening instrument for practitioners to identify individuals at risk of developing alcohol problems and to consequently refer them for further alcohol assessments or interventions.
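Post-stratification weighting, as used above, assigns each respondent the ratio of the population share to the sample share of their stratum, so the weighted sample reproduces the population distribution. The strata and proportions below are hypothetical, for illustration only.

```python
def post_stratification_weights(sample_counts, population_shares):
    """weight(stratum) = population share / sample share, so that the
    weighted sample reproduces the population distribution."""
    n = sum(sample_counts.values())
    return {s: population_shares[s] / (sample_counts[s] / n)
            for s in sample_counts}

# Hypothetical example: one stratum overrepresented in the sample.
sample_counts = {"first_year": 700, "other_years": 300}
population_shares = {"first_year": 0.55, "other_years": 0.45}
weights = post_stratification_weights(sample_counts, population_shares)
```

Applying these weights, the overrepresented stratum is down-weighted (weight below 1) and the underrepresented one up-weighted, and the weighted stratum shares match the population shares exactly.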
Cutoff scores can be used for such screening purposes. The AUDIT-C consists of three questions in which students are asked to think back over the past 12 months and indicate (1) how many times they drank alcohol, (2) how many glasses of alcohol they usually drank per day, and (3) how many times they drank six or more glasses of alcohol on one single occasion. A glass of alcohol was defined as a standard glass for each type of alcoholic beverage (beer, strong beer, wine, fortified drink, spirit) and the corresponding quantity in centiliters. The scale had good reliability in our sample (\u03b1 = 0.82). We used the sum scores to create a single item. On the individual level, we included gender, study year, living situation, employment, and age of onset of alcohol use. Study year was included as it was the only information available with which to account for the time exposed to the higher education environment. On the interpersonal level, we included parental educational attainment. Next, students were asked to indicate whether they would talk with (1) family members and/or (2) friends about alcohol problems. Students were also asked to rate the trustworthiness of people (\u201cmost people can be trusted (1)\u201d or \u201cyou can\u2019t be careful enough in dealing with people (0)\u201d). On the organizational level, students were asked whether they had an affiliation with the following organizations: (1) (board) membership of a student association; (2) membership of a sports club/team; (3) membership or group leadership of a youth movement. A group leader of a youth movement supervises social activities for a group of children during the weekends. The following questions concerning social norms were put to all students: \u201cDuring the academic year (excluding exam periods), how often in the past 12 months do you think (1) an average male student drank six or more alcoholic consumptions in 2 h; (2) an average student drank sufficient alcohol to feel drunk?\u201d
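A sketch of AUDIT-C scoring as described above: each of the three items is coded 0\u20134 and summed to a 0\u201312 score. The item coding and the screening cutoffs shown are those of the standard AUDIT-C instrument, which may differ in detail from the adaptation used in this study.

```python
def audit_c_score(frequency, typical_quantity, binge_frequency):
    """Sum of the three AUDIT-C items, each coded 0-4 (total 0-12)."""
    items = (frequency, typical_quantity, binge_frequency)
    if any(not 0 <= i <= 4 for i in items):
        raise ValueError("each AUDIT-C item must be coded 0-4")
    return sum(items)

def positive_screen(score, male):
    """Commonly used screening cutoffs: >= 4 for men, >= 3 for women."""
    return score >= (4 if male else 3)
```

The study itself uses the continuous sum score rather than the dichotomized screen, which preserves more information for the regression models.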
The last question surrounding social norms that was included is as follows: \u201cIn the past 12 months, how many alcoholic consumptions do you think an average student drank on an average day during the academic year (excluding exam periods)?\u201d Similar to the AUDIT-C questions, a glass of alcohol was defined as a standard glass for each type of alcoholic beverage. Finally, an item on whether the alcohol theme is addressed in the study curriculum was added. Students were asked whether they had participated in a public health awareness campaign on alcohol use, the annually organized Tourn\u00e9e Min\u00e9rale. In this campaign, people are challenged to give up alcohol for one month in February; similar temporary alcohol abstinence challenges are organized elsewhere. For clarity purposes, we henceforth refer to this campaign as Dry February. The answer categories were: (1) participation in Dry February (reference category); (2) unfamiliar with Dry February; (3) intentionally did not participate in Dry February. A \u03c72 test (\u03c72 = 138.01, df = 97, p = 0.004) indicated that our data are MAR (missing at random). We imputed ten datasets and ran pooled regression analyses. These results were nearly identical to the regression analyses including only respondents with no missing values, with the exception of very few, minor differences; the latter analyses were used in the current study. The results of the pooled regression analyses are available upon request. To address our research questions, we first performed descriptive and bivariate analyses. The mean differences in AUDIT-C scores for all independent variables were tested by means of independent samples t-tests or ANOVA tests; for continuous variables, Spearman rank correlations were calculated. Our next step was to run hierarchical linear regression analyses. Prior to running these analyses, we checked all of the required assumptions for the use of the ordinary least squares (OLS) method. Five models were tested.
The first model includes only the individual level; each level was added successively until we incorporated all levels in the fifth and final model. All analyses were performed with IBM SPSS Statistics 24. The results of our bivariate analyses are presented in the corresponding table. Male students had significantly higher AUDIT-C scores (p < 0.001), as did students not living at home (t = \u221226.59, p < 0.001). Furthermore, students with an earlier age of onset of alcohol use showed more hazardous drinking. On the interpersonal level, the results showed that students willing to talk to family members about alcohol problems had significantly lower scores (t = 19.93, p < 0.001), while students willing to talk to friends about these problems had significantly higher scores (t(6273.227) = \u221219.13, p < 0.001). The results on the organizational level demonstrate that students with (board) memberships of student associations (t = \u221222.26, p < 0.001) and members or group leaders of youth movements (t = \u221229.69, p < 0.001) had significantly higher AUDIT-C scores. On the community level, we found that students who thought that other students exhibited more hazardous alcohol consumption were more likely to show hazardous alcohol behavior themselves; in particular, the held social norm surrounding the amount of alcohol consumption was associated with higher AUDIT-C scores. Finally, on the policy level, we found that students who did not participate in Dry February (DF) had significantly higher AUDIT-C scores (F = 1389.907, p < 0.001). In the hierarchical regression, the first model, containing only the individual level, explained a significant amount of the variance in the dependent variable, AUDIT-C, although one individual-level variable was not significant (\u03b2 = \u22120.01, p = 0.13). Male gender, living away from home, and having an early onset age of alcohol use were the main explanatory variables on the individual level; together, the variables accounted for 20.6% of the variance (F = 810.15, p < 0.001). The results of the final regression analysis are presented in the corresponding table. The willingness to talk to family members was associated with less hazardous alcohol use (\u03b2 = \u22120.10, p < 0.001), while a willingness to talk to friends was associated with more hazardous alcohol use.
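The block-entry procedure described above (add one level of predictors at a time and report the increment in explained variance) can be sketched with ordinary least squares. The data below are simulated, the block sizes are arbitrary, and the study itself used SPSS rather than Python; this only illustrates the \u0394R\u00b2 logic.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - float(resid @ resid) / tss

rng = np.random.default_rng(1)
n = 500
individual = rng.normal(size=(n, 3))     # block 1 (e.g. gender, onset age)
interpersonal = rng.normal(size=(n, 2))  # block 2
y = individual @ [1.0, 0.5, 0.8] + interpersonal @ [0.4, 0.3] + rng.normal(size=n)

X, r2_prev, increments = None, 0.0, []
for block in (individual, interpersonal):
    X = block if X is None else np.column_stack([X, block])
    r2 = r_squared(X, y)
    increments.append(r2 - r2_prev)  # Delta R^2 contributed by this block
    r2_prev = r2
```

Because the models are nested, each added block can only increase R\u00b2; the size of the increment is what the study reports as the additional explained variance per level.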
The variables on the interpersonal level were significantly associated with AUDIT-C, and the interpersonal level accounted for an additional 2.4% of explained variance (F = 173.94, p < 0.001). Being a member or group leader of a youth movement and being a (board) member of a student association were the primary explanatory variables of hazardous alcohol use on the organizational level. The additional explained variance of this level of influence was 4.0% (F = 396.91, p < 0.001). Students\u2019 social norms were significantly associated with hazardous alcohol use, in particular norms relating to the binge drinking behavior of men and norms relating to the consumption amount (\u03b2 = 0.20, p < 0.001). The community level accounted for an additional 8.5% of the explained variance (F = 718.74, p < 0.001). Not all variables on the community level were significantly associated with hazardous alcohol use: whether or not students thought that alcohol or other drugs were addressed in their study curriculum was not significantly associated with scores on the AUDIT-C. Adding the policy level explained an additional 4.4% of variance (F = 812.30, p < 0.001). The policy variable was significantly associated with hazardous alcohol use: students who participated in Dry February were less likely to engage in hazardous alcohol consumption than those who intentionally did not participate. The strongest associations were found for not participating in Dry February (policy level), the held social norms surrounding alcohol consumption (community level), being a member or group leader of a youth movement (organizational level), and the age of onset of alcohol use (individual level). The current study used a social ecological framework to explore how multiple levels of influence affect hazardous alcohol use among Flemish higher education students [23].
On the individual level, our main results correspond with prior academic work showing that hazardous use of alcohol is more likely to occur among male students [9,27,39]. The variables included on the interpersonal level did not play a large relative role in explaining risky alcohol behavior. The largest relative association was found for students who said they would talk to family members about alcohol problems, which is in line with earlier research demonstrating good parent-child communication to be a protective factor [27,43]. On the organizational level, it was found that being a member or a group leader of a youth movement was associated with the hazardous use of alcohol. This association is difficult to fully understand, since students were not asked to specify the type of youth movement they were affiliated with; furthermore, we found very little research involving this association. One twin study indicated that involvement in social activities may protect against the development of hazardous alcohol use. However, our results point in the opposite direction. Next, on the community level, we found that students\u2019 social norms surrounding consumption amount had a relatively strong impact. This result corresponds with earlier research showing that students with perceptions of more permissive norms surrounding alcohol consumption were more likely to engage in RSOD and high-volume alcohol consumption [8,9,46]. Finally, on the policy level, we found that participation in a public health awareness campaign was associated with hazardous alcohol use. This finding is similar to earlier research performed in the UK [50]. The presented findings are subject to some limitations. Since the data are cross-sectional in nature, causal inferences cannot be made. Furthermore, as all data are self-reported measures, common-method variance may affect the results.
Due to the anonymity requirements of the use of the data, we had no further information on where students were from and which higher education institution they were attending. Had this information been available, multi-level analyses could have been performed, with the addition of relevant objective measures to enrich the analyses. Over the years, a number of interventions have been designed, implemented, and tested among specific higher education populations. Among the most effective interventions are brief motivational interventions, in particular personalized feedback and normative reeducation [53,54,55]. Ideally, explorative studies with the aim of informing multi-level interventions should acquire representative student data that make it possible to include objectively measured characteristics at the organizational and community level. Unfortunately, such data are hard to come by; nevertheless, studies such as the current one enable universities, as well as policymakers, to make more informed decisions. There is certainly a need for more studies and intervention designs that are critically informed and can address multiple levels of influence simultaneously. Much is still to be learned about which ecological mixes are best. The hazardous use of alcohol among higher education students in Flanders is a public health concern. Using a social ecological framework, the current study demonstrates that multiple levels of influence are at play in predicting hazardous drinking. While evidence-based interventions are available, these are mostly aimed at addressing only a single level of influence. Students\u2019 capacity to maintain responsible levels of drinking may be undermined if the social settings in which they are embedded, and the overarching environments, social norms, and policies, are not conducive to their motivations and social expectations.
The results presented here may guide policymakers toward the development of effective multi-level interventions."} +{"text": "Bifidobacterium represented the most predominant genus and Enterobacteriaceae the second in all groups. At 40\u00a0days of age, Bifidobacterium and Bacteroides were significantly higher, while Streptococcus and Enterococcus were significantly lower, in the breast-fed group than in the formula A-fed group. Lachnospiraceae was lower in the breast-fed than in the formula B-fed group, and Veillonella and Clostridioides were lower in the breast-fed than in the formula-fed groups. At 3\u00a0months of age there were lower levels of Lachnospiraceae and Clostridioides in the breast-fed group than in the formula-fed groups. There were also significant differences in microbiota between the formula A-fed and formula B-fed groups. Those differences may have impacts on long-term health. To compare the gut microbiota of healthy infants who were exclusively breast-fed or formula-fed, we recruited 91 infants, who were assigned to three different groups and fed breast milk (30 babies), formula A (30 babies), or formula B (31 babies) exclusively for more than 4\u00a0months after birth. Faecal bacterial composition was tested. Among the groups, \u03b1 diversity was lower in the breast-fed group than in the formula-fed groups at 40\u00a0days of age, but increased significantly by 6\u00a0months of age. The first year of life is pivotal to the development of the gut microbiota, with breast milk being the main factor influencing its composition [3]. The gut microbiota at birth is of low diversity, while a more complex composition, similar to the gut microbiota of adults, is established by 1\u20132\u00a0years of age [4]. The gut microbiota affects immune system maturation and nutrient absorption, and helps prevent pathogen colonization. Changes in gut microbiota composition are associated with long-term health disorders, for example obesity, atopic diseases, and chronic inflammatory diseases.
So there is a window of opportunity to regulate the gut microbiota in early life to promote long-term health5. Numerous data have shown an association between gut microbiota and chronic non-infectious diseases in humans. The development of the gut microbiota in early life has impacts on later health6. Compared with formulas, breast milk has superior effects on the barrier integrity and mucosal defences of the intestinal tract1. However, breast milk is not available in many circumstances. While the composition of commercial formulas is increasingly close to that of breast milk, the gut microbiota of breast-fed and formula-fed babies remains distinct7.

Human breast milk is an ideal source of nutrients for infants and contains a large variety of components. Breast milk also influences health-promoting microorganisms through factors such as polymeric IgA (pIgA), antibacterial peptides, and components of the innate immune response.

Studies of gut microbiota in babies fed exclusively breast milk or formulas are rare and mostly small-scale. In most research articles, babies are only partially breast-fed or formula-fed. To gain a better understanding of how different feeding patterns affect the gut microbial composition, we conducted a study examining the gut microbiota in babies fed exclusively human milk or a single kind of formula for more than 4 months after birth. Moreover, in our study, solid foods were introduced from 4 to 6 months of age, so they did not affect the microbiota before 4 months of age, ruling out the impact of solid foods at the earlier time points.

A total of 91 infants were finally enrolled, with a mean gestational age of 39.3 ± 1.1 weeks, birth weight of 3316.9 ± 406.8 g, 25th/75th percentile (P25/P75) birth length of 49.0/51.0 cm, and birth head circumference of 33.7 ± 0.8 cm.
The data mentioned above had no significant differences among the three groups, with 30 babies in the breast-fed group, 30 babies in the formula A-fed group and 31 babies in the formula B-fed group enrolled. In total, 81 stool samples at 40 days of age (40 days), 80 samples at 3 months of age (3 m) and 68 samples at 6 months of age (6 m) were collected.

At 40 days of age, α diversity was lower in the breast-fed group (versus formula A-fed p = e−04, versus formula B-fed p = 9e−04), but showed no significant differences with the 3-month and 6-month-old groups. α diversity (within-sample diversity) was measured using Ace index values, indicating gut microbiota abundance, and Shannon index values, indicating gut microbiota diversity. In formula-fed babies, β diversity remained stable at 40 days and 3 months of age, but increased significantly at 6 months of age. In the breast-fed group, β diversity was higher at 3 months of age than in the formula-fed groups. Compared with the formula A-fed group, β diversity was lower at 40 days of age (p = 2e−04), and higher at 6 months of age (p = 0), in the breast-fed and formula B-fed groups. β diversity (between-sample diversity) was measured by Unweighted UniFrac and Weighted UniFrac distances.

Bacteroides decreased as time went on, to 5.9% in 3 m and 3.9% in 6 m, while Enterococcus and Streptococcus ranked third and fourth in 6 m in the breast-fed group. After solid foods introduction, the percentage of Bacteroides increased in the formula A-fed group, from 2.3% in 3 m to 2.8% in 6 m, but stayed almost the same in the formula B-fed group, from 0.9 to 0.8%. The 10 most abundant bacteria of the gut microbiota at genus level are shown in Fig. . The relative abundance of operational taxonomic units (OTUs) was assessed across all samples, and OTUs were clustered in a heatmap according to their co-occurrence at genus level (Fig. ).
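The Shannon index used for these α-diversity comparisons can be computed directly from genus-level counts; a minimal sketch follows (the count vectors are made-up illustrative values, not study data):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa with non-zero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical genus-level counts for one faecal sample
sample = [120, 30, 5, 5]   # e.g. a Bifidobacterium-dominated community
even = [40, 40, 40, 40]    # perfectly even community of 4 genera

print(shannon_index(sample))  # lower diversity
print(shannon_index(even))    # ln(4), the maximum for 4 taxa
```

A dominated community scores lower than an even one, which is why a Bifidobacterium-dominated breast-fed gut can show lower α diversity than formula-fed guts.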
In our study, solid foods were introduced from 4 to 6 months of age, so they only affected the last time point, at 6 m of age. Veillonella and Clostridioides were lower in the breast-fed group than in the formula A- and formula B-fed groups. Streptococcus (p = 0.001) and Enterococcus (p = 0.011) copy numbers were significantly lower, while Bacteroides (p = 0.012) and Bifidobacterium (p = 0.015) were significantly higher, in the breast-fed group than in the formula A-fed group. Lachnospiraceae (p = 0.005), Fusicatenibacter (p = 0.002) and Lactobacillus (p = 0.009) were lower in the breast-fed group than in the formula B-fed group. Pediococcus (p = 0.015) was lower in the formula A-fed group than in the formula B-fed group, and Clostridioides was lower in the breast-fed group than in the formula A- and formula B-fed groups. No other differences were found between the formula A- and formula B-fed groups.

Correlations were examined between the abundance of significantly altered bacteria at the genus level and clinical parameters. In total, 13 covariates with known associations to gut microbiota development in infants were included in the analysis. Specifically, we analyzed maternal factors including age, height, weight, gestational weight gain, maternal prenatal antibiotics, and maternal postnatal antibiotics, as well as offspring factors such as mode of delivery, breastfeeding, antibiotic usage of the infant, district, vitamin D supplementation, household siblings and household furry pets. Bacteroides and Parabacteroides were negatively correlated with CS delivery.
The relative abundance of Enterococcus was positively correlated with antibiotic usage of the infant; in the breast-fed group, Bacteroides decreased as time went on, to 0.059 in 3 m and 0.039 in 6 m.

Besides Bacteroides, other health-promoting bacteria such as Clostridia have been reported to be vital in providing mucosal barrier homeostasis during the neonatal period, which is necessary in the immature intestine6. Formula-fed infants tend to have a more diverse microbial community with increased Clostridia species12, which is in accordance with our finding. We also found that Veillonella was lower in breast-fed infants than in formula-fed ones. Although one analysis indicated that Veillonella is associated with a lower incidence of asthma, it did not take feeding patterns into consideration22. So more data are needed to clarify the specific roles of certain bacteria with regard to feeding types.

Studies have shown that breast milk keeps the gut in a condition with a lower abundance of Veillonellaceae, Enterococcaceae, Streptococcaceae23 and Lachnospiraceae7, which is consistent with our results. Some researchers have indicated that a higher level of Streptococcus sp. is seen in patients suffering from type 1 diabetes2. There may be other negative effects of these bacteria, but we still know little about them.

They have indicated that Bacteroidetes is specialized in the decomposition of complex plant polysaccharides21, and it is also associated with faster maturation of the intestinal microbial community2. In our study, after solid foods introduction, the percentage of Bacteroides at genus level increased in the formula A-fed group, from 0.023 to 0.028, but stayed almost the same, from 0.009 to 0.008, in the formula B-fed group. Meanwhile, in the breast-fed group, a decreased percentage of Bacteroides was found, from 0.059 in 3 m to 0.039 in 6 m. The trends differed according to feeding pattern. Pannaraj et al.
believe that daily breastfeeding as a part of milk intake continues to affect the infant gut microbial composition, even after solid foods introduction8. But in our study, differences in gut microbiota between the breast-fed group and the formula-fed groups were no longer seen after solid foods were introduced. As for studies of gut microbiota, the taxonomic level of bacteria adopted in research may affect the results. We focused on microbiota mainly at genus level, resulting in certain discrepancies with some other articles at phylum or species level.

The subsequent big change in diet is the introduction of solid foods at 4–6 months of age, which is largely associated with changes in infant gut microbiota. A case study found an increase in Bacteroidetes at phylum level after solid foods were introduced21.

There were significant differences in microbiota between the formula A-fed and formula B-fed groups in our study. We found that Pediococcus was lower in the formula A-fed group than in the formula B-fed group at 40 days. Many research articles have not taken the differences between formulas into consideration, especially retrospective studies. Even the breast-fed group is mixed with formulas in some reports, so there may be some inaccuracies in their findings.

During the first days of life, the gut microbiota in infants born by vaginal delivery (VD) is similar to that in the maternal vagina and intestinal tract, whereas in infants born by caesarean section delivery (CS) the gut microbiota shares characteristics with that of maternal skin9. We noticed that the genera Bacteroides and Parabacteroides were negatively correlated with CS. This was consistent with findings in many other studies, in which the difference in Bacteroides remains at 4 and 12 months of age9, and we also found that the negative correlation of Bacteroides with CS existed not only at 40 days but also at 6 months of age.
The increased morbidity reported extensively in infants born by CS is likely driven in part by altered early gut colonization24.

Accumulating data have indicated that antibiotic-mediated gut microbiota disturbance during the vital developmental window in early life may lead to increased risk of chronic non-infectious diseases in later life24. A high detection rate of gut Enterococcus was reported in antibiotic-treated infants in their early postnatal period among 26 infants born at a mean gestational age of 39 weeks25. We also found that the relative abundance of Enterococcus was positively correlated with antibiotic usage. The overgrowth of Enterococcus may be caused by antibiotic selection25. Apart from feeding patterns, several factors are associated with the microbiota over the first year of life, which is a key period for gut colonization, such as the mode of delivery, antibiotic exposure, geographical location, household siblings, and furry pets.

In conclusion, through a larger cohort study than previous ones, this study identified differences in gut microbiota among infants who were fed exclusively breast milk or a single kind of formula, contributing further to our understanding of early gut microbial colonization, with more solid data than previous studies of mixed feeding patterns. Faecal diversity was lower in breast-fed infants than in formula-fed ones in early life, but increased significantly after solid foods introduction. A low diversity of the gut microbiota in early life, when caused by breastfeeding, appeared to characterize a healthy gut, in contrast with theories in adults. There were differences in bacterial composition in infants according to feeding type, and even different formulas had different effects on the microbiota, which we cannot ignore in future research.
This study presented initial data facilitating further research that will help us understand the importance of breastfeeding to the gut microbiota in early life.

Because samples of exclusively breast-fed or formula-fed babies were hard to collect at a single hospital, the subjects were recruited from two cities and four hospitals, all members of the North China Regional Union of Neonatologists, so there might be selection bias in the enrolment of the study population. We did not analyse faecal metabolites; such analyses will be conducted in the future to better understand the function of the gut microbiota. Our sampling did not include time points after 6 months of age; therefore, our data do not provide information on trends in gut microbiota over time in relation to diet.

We conducted a prospective study examining the gut microbiota in babies fed exclusively human milk or exclusively formula for more than 4 months after birth.

(1) Healthy, full-term, newborn babies. (2) Birth weight ≥ 2.5 kg. (3) Babies born between December 2016 and December 2017 in Peking Union Medical College Hospital, Inner Mongolia People's Hospital, The Affiliated Hospital of Inner Mongolia Medical University, and Inner Mongolia Maternal and Child Health Hospital.

(1) Breast-fed group: Babies in the breast-fed group were fed breast milk exclusively for more than 4 months after birth. They were recruited at their regular follow-up at 40 days of age if they were fed breast milk exclusively at that time. (2) Formula-fed groups: Babies who had to be fed formula due to the mother's disease or medication and other objective reasons were potential subjects for our study. They were recruited before or right after birth. Parents chose formula A or B voluntarily after they signed the informed consents. Both formulas were market products with no reported adverse events.
(1) Formula A-fed group: Babies were fed formula A exclusively for more than 4 months after birth. (2) Formula B-fed group: Babies were fed formula B exclusively for more than 4 months after birth.

(1) Gestational age < 37 weeks. (2) Birth weight less than 2.5 kg. (3) Babies suffering from a serious disease such as heart failure, metabolic diseases, or congenital intestinal malformations. (4) Babies in the breast-fed group who could not be fed breast milk exclusively for 4 months for any reason. (5) Babies in the formula A- and B-fed groups who changed formula before 4 months for any reason.

All infants were evaluated at 40 days, 3 months and 6 months of age. Clinical data and faecal samples were collected at each time point. Similar solid foods such as infant cereals, purées and mashed meats were introduced to infants aged 4–6 months. The type and supplementation order of solid foods followed the 2015 feeding guide for babies of the Chinese Nutrition Society.

Clinical data were collected, including the mothers' conditions, such as combined diseases, antibiotic usage, age, height, weight and weight gain during pregnancy, and the babies' conditions, including mode of delivery, gestational age, gender, weight, length, head circumference, antibiotic usage, household siblings, pets, district, vitamin D supplementation, defecating frequency, stool properties, and infections.

Faecal samples were collected from all infants at 40 days, 3 months and 6 months from birth. All samples were kept in sterile containers, immediately stored at −70 °C, and sent to Beijing for testing collectively.

(1) Extraction of genomic DNA. Total genomic DNA was extracted from samples using the CTAB/SDS method. DNA concentration and purity were monitored on 1% agarose gels. According to the concentration, DNA was diluted to 1 ng/μL using sterile water. (2) Amplicon generation.
16S ribosomal RNA (rRNA) genes of the V4 region were amplified using the specific primer pair (515F-806R) with barcodes. All PCR reactions were carried out with Phusion High-Fidelity PCR Master Mix (New England Biolabs). (3) PCR product quantification and qualification. The same volume of 1× loading buffer (containing SYBR Green) was mixed with the PCR products, and electrophoresis was performed on a 2% agarose gel for detection. Samples with a bright main band between 400 and 450 bp were chosen for further experiments. (4) PCR product mixing and purification. PCR products were mixed in equidensity ratios, and the mixed PCR products were purified with a Qiagen Gel Extraction Kit. (5) Library preparation and sequencing. Sequencing libraries were generated using the TruSeq DNA PCR-Free Sample Preparation Kit following the manufacturer's recommendations, and index codes were added. Library quality was assessed on the Qubit 2.0 Fluorometer (Thermo Scientific) and the Agilent Bioanalyzer 2100 system. Finally, the library was sequenced on an Illumina MiSeq platform.

Paired-end reads were assigned to samples based on their unique barcodes and truncated by cutting off the barcode and primer sequences26, according to the QIIME27 quality-control process. A total of 20,383,186 reads were obtained from 16S rRNA gene sequencing. Sequence analysis was performed with Uparse software v7.0.100128. Sequences with ≥ 97% similarity were assigned to the same OTUs. A representative sequence for each OTU was screened for further annotation. For each representative sequence, the SILVA database29 was used, based on the RDP classifier version 2.230 algorithm, to annotate taxonomic information. We compared differences in α diversity using Faith's phylogenetic diversity. β diversity was evaluated by Principal Coordinate Analysis (PCoA) and PERMANOVA statistics on Unweighted and Weighted UniFrac distances.
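The ≥ 97%-similarity OTU assignment described above can be sketched as a greedy centroid clustering; this is a toy stand-in, since real tools such as UPARSE also handle alignment, abundance sorting and chimera removal:

```python
# Toy sketch of >=97%-identity OTU clustering (greedy centroid clustering).
# Sequences and threshold are illustrative; real pipelines align reads
# rather than comparing equal-length strings position by position.
def identity(a, b):
    """Fraction of matching positions for equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(seqs, threshold=0.97):
    centroids, assignment = [], []
    for s in seqs:
        for k, c in enumerate(centroids):
            if identity(s, c) >= threshold:
                assignment.append(k)   # joins an existing OTU
                break
        else:
            centroids.append(s)        # founds a new OTU
            assignment.append(len(centroids) - 1)
    return centroids, assignment

reads = [
    "ACGT" * 25,              # 100 bp
    "ACGT" * 24 + "ACGA",     # 1 mismatch -> 99% identity, same OTU
    "TTTT" * 25,              # very different -> new OTU
]
centroids, assignment = cluster_otus(reads)
print(assignment)  # [0, 0, 1]
```

Each OTU's representative (centroid) sequence is then what gets annotated against a reference database such as SILVA.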
Quality filtering of the raw tags was performed under specific filtering conditions to obtain high-quality clean tags.

All statistical analyses were performed using IBM SPSS version 20.0. Categorical variables are presented as proportions (percentages), and continuous variables as means ± standard deviation or median (interquartile range). Normally distributed variables were tested by a two-tailed t test for two independent groups or a one-way analysis of variance (ANOVA) for multiple independent groups. Non-normally distributed variables were tested by the Kruskal-Wallis test. Inter-group differences in categorical variables were evaluated by the χ2 test. Correlation analyses were performed by the Kendall test for categorical variables and the Spearman test for continuous variables. A standard P value ≤ 0.05 was considered significant. A corrected P value ≤ 0.0167 was considered significant for multiple comparisons among the three groups.

Ethical approval was granted by the Ethics Institutional Review Board of Peking Union Medical College Hospital (protocol identifying number: HS-1148) on September 27, 2016. Informed consent was obtained from the parents of the eligible infants. The study is in accordance with the ethical standards of the Declaration of Helsinki."} +{"text": "The communication between the networks was established using patterned optogenetic stimulation via a modified digital light projector (DLP) receiving real-time input dictated by the spiking neurons' state. Each stimulation consisted of a binary image composed of 8 × 8 squares, representing the state of 64 excitatory neurons. The spontaneous and evoked activity of the biological neuronal network was recorded using a multi-electrode array in conjunction with calcium imaging. The image was projected onto a sub-portion of the cultured network covered by a subset of all the electrodes.
The unidirectional information transmission (SNN to BNN) was estimated using the similarity matrices of the input stimuli and the output firing. Information transmission was studied in relation to the distribution of stimulus frequency and stimulus intensity, both regulated by the spontaneous dynamics of the SNN, and to the entrainment of the biological network. We demonstrate that high information transfer from SNN to BNN is possible, and identify a set of conditions under which such transfer can occur, namely when the spiking network synchronizations drive the biological synchronizations (entrainment) and in a linear regime of response to the stimuli. This research provides further evidence of the possible application of miniaturized SNNs in future neuro-prosthetic devices for local replacement of injured micro-circuitries capable of communicating within larger brain networks.

Restoration of the communication between brain circuits is a crucial step in the recovery from brain damage induced by traumatic injuries or neurological insults.
In this work we present a study of real-time unidirectional communication between a spiking neuronal network (SNN) implemented on a digital platform and an in-vitro biological neuronal network (BNN).

These effects impair local information processing capabilities and information exchange between distant circuitry, disrupting the process of segregation and integration of information in the brain3. In this context, new therapeutic approaches and technologies are needed both to promote cell survival and regeneration of local circuits with the integration of new neuro-glial cells3, as well as to restore long-distance communication between disconnected brain regions and circuits5. While cellular therapies have shown promise in engrafting and refilling circuitries, regenerating lost long-distance connections seems to be more challenging, since these are early-developed circuitry architectures which are difficult to reprogram and recreate7.

In contrast, major progress has been made in the field of neuro-prosthesis8 over the past decade, where artificial spiking neural circuits are locally capable of receiving and processing input in real time while their output can be delivered locally or remotely, either through electrical or optogenetic stimulation10. This enables fast, bi-directional control of multiple cell types11.

Real-time simulation is difficult to achieve in software14, making software SNNs unsuitable for bio-hybrid experiments. Hardware SNNs work in real time, are low-power and embedded, making them a promising choice for hybrid experiments and a new generation of neuroprostheses. Hardware SNNs21 can be classified into two groups: analog implementations and digital implementations. The digital implementation has the advantage of being tunable and easier to process, despite its higher power consumption. Artificial synapses for spike processing and for the bio-physical interface provide biomimetic solutions. Memristive devices can process neural spikes and emulate synapses23.
Efficient real-time data compression with low energy is possible thanks to these memristor-based systems.

Various approaches exist in the field of neuromorphic engineering for the design of Spiking Neural Networks (SNNs) and artificial synapses. In the neuro-inspired axis, the SNN is quite distinct from biological activity and is designed mostly for applications such as computation and artificial intelligence. The neuromimetic axis, on the other hand, imitates more precisely the activity of neuronal cells and operates at accelerated or biological time scales. Such an SNN can be simulated by software24. Few experiments have succeeded in building a real-time system which can provide adaptive stimulation using an SNN26. To create bioelectrical therapeutic solutions for health care, a real-time bio-physical interface is crucial.

In this paper, we present new evidence of real-time communication and information transfer between a hardware SNN implemented on an FPGA board and an in-vitro biological neuronal network (BNN), by real-time encoding of the dynamics of the SNN in patterns used for optogenetic stimulation of the BNN27. The neurons were therefore excited by the blue light stimulation, and their activity was recorded both by the MEA device and through calcium imaging using a fast electron-multiplying charge-coupled device (EMCCD) camera mounted on the microscope. The area which was optically stimulated covered a limited portion of the cultured network of about 0.8 × 0.8 mm, out of a global network area of several square millimetres. Therefore, communication and information transfer were tested on a subnetwork of the BNN, which was also monitored by about one fourth (4 × 4) of the total number of monitoring electrodes (8 × 8).
Indeed, the interaction between global BNN dynamics (such as spontaneous synchronizations) and local information transfer (occurring in the stimulated subnetworks) is a key point of the results presented in this work.

Biologically-inspired patterns of activity were generated using the SNN and then encoded in real time into unique patterns of blue light using a modified video projector, micro-projected onto a 2D neuronal network (culture) grown on a multi-electrode array (MEA). Neurons were transduced using an adeno-associated virus (AAV) to express the fast Channelrhodopsin2 (ChR2) variant ChIEF28.

The experimental data were analysed in relation to the different parameters which were varied in the SNN (concerning spontaneous dynamics and output conversion) and to the intrinsic dynamics of the biological neuronal network (BNN), driven internally by spontaneous network bursts28. Results show that in some optimal conditions, where the SNN activity is capable of entraining the BNN activity and suppressing spontaneous synchronizations, and in a linear regime of BNN response, information transmission, measured as similarity between INPUT and OUTPUT patterns, can occur. These results provide further evidence of the possible application of miniaturized SNNs in future neuro-prosthetic devices for local replacement of injured micro-circuitries capable of communicating within larger brain networks.

The experimental set-up is shown in Fig. . In order to simulate the activity of a real BNN, the SNN used in this study29 generated spontaneous activity characterized by neuronal synchronizations with features similar to those generated by the cortical BNNs used in this work, typically between 0.1 and 1 Hz in biology28. The SNN comprised 100 neurons30 (80 excitatory and 20 inhibitory) implemented on an FPGA.
The whole design uses just 10% of the FPGA resources (Table ).

The 8 × 8 binary matrix image generated as an output by the SNN activity was converted into an 800 × 600 pixel image through the VGA input port of a video projector in which a high-power blue LED replaced the original light bulb. The binary 8 × 8 matrix was displayed in the central 600 × 600 pixels; the other pixels were set to zero (black). The image generated by the Digital Micromirror Device (DMD) of the video projector was projected into an up-right epifluorescence microscope through an additional optical pathway obtained between the camera and the excitation/dichroic cube placed above the sample, by orthogonally splitting the camera pathway with a dichroic mirror (see Fig. ).

Neuronal cultures between 21 DIV and 28 DIV were used (see Methods). The activity of the neurons was recorded using a standard 8 × 8 MEA dish with an inter-electrode distance of 200 micrometres (Fig. ).

The synchronization between the times of the different devices was achieved through the MEA acquisition system, where the signal from each of the 60 electrodes was recorded simultaneously with the TTL signal activating the stimulation protocol controlling the LED driver (switching the blue light ON and OFF) and with the single-frame signal acquired by the camera. Note that the MEA system was acquiring the activity of the BNN before, during and after the video-projection stimulation was switched ON. Also, the camera was independently acquiring images for calcium imaging (at 57 Hz) while the video-projection stimulation was running, so light stimulation patterns were captured as artefacts in the recorded calcium signals.

In this work we present the results from 12 different sessions of communication between the SNNs and the BNNs. In each of the sessions, different SNNs and thresholds for Network Synchronization (NS) detection were used to enlarge the range of stimuli per minute.
This is summarized in Table . By varying the parameters, the SNN generated different OUTPUTs with different frequency ranges. Here, the SNN NS frequency interval is Hz. This choice avoids overlapping stimulations, as the stimulation protocol lasts 310 ms and our cortical BNNs generate an average neuronal synchronization between 0.1 and 1 Hz25. The SNNs were selected from our library of SNNs.

The information transmission (IT) between the SNN and the BNN was quantified by looking at the correlation of the similarity between INPUT pairs and the similarity between the corresponding OUTPUT pairs (Fig. )31. The vectorial network response (VNR) for each stimulus was built; information transmission was obtained in 8 out of 12 experiments (Fig. ).

Next, we looked at how information transmission is related to the stimulus intensity and frequency from the SNN, which are both shaped by the spontaneous network synchronizations displayed by the SNN, mimicking those occurring in BNNs (Fig. ).

Information transmission (maxIT) was highly correlated with the average stimulus intensity, and a relation was also observed between the suppression of spontaneous NSs in the BNN and information transmission (Fig. ). When looking at the entrainment of the BNN, i.e. at the ratio of evoked (within 500 ms from the stimulus) NSs over all NSs when SNN-to-BNN communication was ON, we observed a high entrainment index of 0.75 ± 0.25 when information transmission took place (i.e. with a maxIT above 0.5), although no significant correlation was observed between entrainment index and information transmission.
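The IT measure described above compares the pairwise similarity of stimuli with the pairwise similarity of responses. A minimal sketch follows, assuming cosine similarity and Pearson correlation (the paper's exact similarity and correlation choices may differ), with made-up stimulus and response vectors:

```python
import math

# Sketch of an information-transmission (IT) measure: correlate the
# pairwise similarities of INPUT stimuli with the pairwise similarities
# of the corresponding output responses. All vectors are illustrative.
def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def pairwise_similarities(vectors):
    n = len(vectors)
    return [cosine(vectors[i], vectors[j]) for i in range(n) for j in range(i + 1, n)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical binary stimuli (flattened patterns, truncated to 6 bits here)
# and vectorial network responses (per-electrode firing rates).
stimuli = [[1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
responses = [[9.0, 8.5, 0.5], [8.0, 9.0, 1.0], [0.5, 1.0, 9.5]]

it = pearson(pairwise_similarities(stimuli), pairwise_similarities(responses))
print(it)  # close to 1: similar stimuli evoke similar responses
```

High IT means the geometry of the stimulus set is preserved in the response set, which is exactly what fails when over-shooting responses saturate toward a single global synchronization.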
The SNN ran with millisecond resolution on an FPGA board which embedded a network synchronization detector and a VGA controller to convert synchronized patterns of activity into 800 × 600 pixel images displaying 8 × 8 binary matrices coding for the activity (ON/OFF) of the first 64 artificial excitatory neurons. Once a VGA image was delivered to the DLP, an additional simultaneous TTL signal from the FPGA board activated the signal generator which controlled the power modulation of the blue light source of the DLP.

The image generated by the DLP was de-magnified (by about fourteen times) through an adapted up-right epifluorescence microscope and focused on the BNN located at the focal plane of the microscope. The BNN, at about four weeks in culture and previously transduced with the fast Channelrhodopsin2 variant ChIEF, responded to blue light stimulation with evoked neuronal firing monitored both by red calcium imaging and multi-electrode recordings.

The SNN and BNN independently generated spontaneous dynamics of neuronal activity, producing network synchronizations occurring at similar frequencies. Information transmission from the SNN to the BNN was studied and quantified by measuring the correlation between the similarity of stimuli (incoming from the SNN) and the similarity of BNN responses34. Information transmission was studied as a function of the stimulus intensity and frequency, the linearity of the BNN response, BNN entrainment, and BNN spontaneous synchronizations, i.e. synchronizations similar to those generated by the BNN in the absence of (or not directly caused by) stimuli. Network bursts stereotypically recruit the entire network, as extensively described in the literature in a wide variety of in-vitro networks32.
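The 8 × 8-matrix-to-VGA-frame conversion described above can be sketched in software; this is a pure-Python stand-in for the FPGA VGA controller, with square size and offset following from the stated 800 × 600 frame and centred 600 × 600 active region:

```python
# Expand an 8x8 binary matrix (one entry per excitatory neuron) into an
# 800x600 frame whose central 600x600 region shows the matrix as
# 75x75-pixel squares, all other pixels black. Illustrative stand-in.
W, H, SIDE = 800, 600, 600
CELL = SIDE // 8          # 75 pixels per square
X0 = (W - SIDE) // 2      # horizontal offset of the centred region

def matrix_to_frame(m):
    """m: 8x8 list of 0/1. Returns H rows of W pixel values (0/1)."""
    frame = [[0] * W for _ in range(H)]
    for r in range(8):
        for c in range(8):
            if m[r][c]:
                for y in range(r * CELL, (r + 1) * CELL):
                    for x in range(X0 + c * CELL, X0 + (c + 1) * CELL):
                        frame[y][x] = 1
    return frame

m = [[0] * 8 for _ in range(8)]
m[0][0] = 1  # neuron 0 active
frame = matrix_to_frame(m)
print(frame[0][X0], frame[0][X0 - 1])  # inside vs just outside the square
```

In the real system this frame drives the DMD mirrors, so each lit square becomes a patch of blue light on the culture.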
Information transmission was maximal when the stimulus frequency approximated 0.56 Hz, which is slightly higher than, but of the same order of magnitude as, the intrinsic (spontaneous) frequency of the BNN synchronizations, which was about 0.37 Hz. When information transmission took place, we observed high entrainment of the BNN activity and also suppression of spontaneous synchronizations occurring distally from the last stimulus. Overall, these results support the hypothesis that the activity of the BNN needs to be highly entrained by the incoming stimuli from the SNN in order to reliably process them within a linear regime of response. We hypothesize that non-linear network responses (over-shooting) could be limited both by optimizing stimulation protocols35 and by controlling a sufficiently large portion of the network receiving the input. In fact, in the presented experiments the stimulation was delivered to a very small and spatially defined population of neurons composing the cultured network, where probably a few hundred neurons out of the hundreds of thousands present were stimulated. Therefore, spontaneous synchronizations could also be generated by the \u201cuncontrolled\u201d, i.e. out-of-field-of-view, neuronal population, so a complete suppression of bursts could not be achieved. In addition, local optogenetic stimulation could potentially generate a reverberating overall synchronization causing over-shooting non-linear responses to some of the stimuli.

The overall results revealed that information transmission could be achieved if network responses responding linearly to stimulus intensity were considered (linear regime), and over-shooting responses were therefore discarded. In addition, information transmission was optimally achieved when the early response to the stimuli was considered (roughly within the first hundred ms).
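The entrainment index used above (evoked NSs, i.e. those within 500 ms of a stimulus, over all NSs) can be sketched directly; the timestamps below are illustrative, not recorded data:

```python
# Entrainment index: fraction of biological network synchronizations (NSs)
# occurring within a fixed window after a preceding stimulus ("evoked").
def entrainment_index(stim_times_ms, ns_times_ms, window_ms=500):
    evoked = 0
    for ns in ns_times_ms:
        if any(0 <= ns - s <= window_ms for s in stim_times_ms):
            evoked += 1
    return evoked / len(ns_times_ms) if ns_times_ms else 0.0

stims = [1000, 3000, 5000, 7000]
# three NSs follow a stimulus within 500 ms, one is spontaneous
ns = [1200, 3100, 5400, 6400]
print(entrainment_index(stims, ns))  # 0.75
```

An index near 1 corresponds to the regime where SNN synchronizations drive the BNN, while values near 0 indicate the BNN is mostly synchronizing on its own.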
Over-shooting responses likely represented network synchronizations (or bursts34). The fact that in our experiments the location of the stimuli (about 800 × 800 micrometers) covers less than one fourth of the region covered by the multi-electrode array (about 1400 × 1400 micrometers) might also have limited the possibility to optimize the read-out of information transmission. The activity recorded by about 4 × 4 electrodes would most likely monitor locally the direct mono- or short-path poly-synaptic responses, while most of the other electrodes would register feed-forward reverberating activity. This represents a clear under-sampling issue in the read-out. We observed a stereotyped relation between SIPs and SOPs in our experiments, which tends to cover the upper left part of the plots (Fig. )36. This effect is particularly pronounced when the SIP is above 0.2–0.4, where the attractor network state could be represented by a saturated response similar to a global network synchronization; this trend is shown in Fig. . In this context, these experiments provide important conclusions for the implementation of efficient neuroprosthetic communication, where the high spatial resolution achieved by the stimuli plays a key role, and approaching single-neuron stimulation could guarantee higher information transmission. In addition, spatial sparseness of stimuli could be an alternative when stimulation with high spatial resolution cannot be achieved. 
Accordingly, overlaps of responses between close neurons could be avoided, with a reduction of response redundancy and a consequent improvement of information transmission. A similar approach to the one described in this paper has recently been shown9 in the context of visual stimulus transduction, where images captured by a camera were used to stimulate an artificial neural network, which in turn shaped visual output through a portable DLP that could be used to optogenetically activate downstream neurons in the visual pathway. In order to optimize information transmission to downstream circuits, our work supports the idea of using optogenetically activatable implants which enable spatio-temporal modulation of stimulation patterns, where the use of micro-LEDs rather than optical fibers might be a more flexible approach37. An advantage of our work is the use of an SNN instead of a regular stimulation pattern: the SNN mimics biological neural activity and thus performs biomimetic stimulation. The optogenetic system allows spatial stimulation; the SNN adds biomimetic temporal stimulation. Using an FPGA platform makes the SNN run in real time, embedded and low-power. This work thus provides real-time adaptive stimulation with both temporal and spatial resolution. The perspective is to create biomimetic optogenetics-based therapeutic solutions for health care. Future developments will close the loop to enable adaptation of the SNN activity to biological rhythms. Such a system could be used to investigate neurological disorders using spatially and temporally adaptive stimulation of biological neurons. Furthermore, the use of optogenetics as an actuator in neuro-prosthetic devices proves advantageous over classical electrical stimulation, as recently demonstrated also in cochlear implants10, due to the higher flexibility and selectivity it enables. An FPGA is a low-power, embedded digital electronic system. 
It works in real time and is thus well suited for bio-hybrid experiments. The SNN of this research is composed of Izhikevich model neurons30, excitatory and inhibitory AMPA and GABA synapses38, short-term plasticity39, axonal delays and synaptic noise40. All of these SNN components allow biomimetic dynamics and thus provide biomimetic adaptive stimulation to the cells. All model parameters are stored in RAM, and the computation time step is 1 ms. All equations, including the Izhikevich model, are implemented using a pipeline technique to ensure high performance. Starting from41, simplifications were performed to optimize the number of resources, yielding the discrete Izhikevich neuron equations. AMPA is considered an excitatory neurotransmitter, which depolarizes the membrane of a neuron, while GABA is considered an inhibitory neurotransmitter with a hyperpolarizing effect. Depolarization or hyperpolarization is represented by a positive or negative contribution to the synaptic currents Ie and Ii (Eq. ). Synaptic plasticity is defined by five parameters. W is the weight of the synapse. X is a scalar factor which indicates the state of the synapse (depression or facilitation). P is a percentage which will be multiplied by the factor X after each emission of a pre-synaptic action potential: if this percentage is larger than 1, the synapse describes short-term facilitation; if it is less than 1, the synapse describes short-term depression. τ is the time constant of the exponential decay (or growth) in facilitation (or depression). To allow spontaneous activity and make the activity of our network more biologically realistic, we implemented synaptic noise in the current source of the neuron model; the Ornstein-Uhlenbeck process used for this purpose is a Gaussian noise process with bounded variance which admits a stationary probability distribution40. 
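The discrete equations referenced above are not rendered in this text, and the fixed-point FPGA implementation derived from41 is not reproduced here. As an illustration only, the following Python sketch shows a standard 1-ms Euler update of the Izhikevich neuron together with an Ornstein-Uhlenbeck noise current; all parameter values are illustrative assumptions, not the paper's.

```python
import math
import random

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One 1-ms Euler update of the standard Izhikevich neuron.
    Returns the new (v, u) state and whether the neuron spiked."""
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u_new = u + dt * a * (b * v - u)
    if v_new >= 30.0:                   # spike threshold reached: reset
        return c, u_new + d, True
    return v_new, u_new, False

def ou_step(n, dt=1.0, tau=10.0, sigma=1.0, mu=0.0):
    """Exact one-step update of an Ornstein-Uhlenbeck noise current:
    Gaussian, mean-reverting, with bounded (stationary) variance."""
    alpha = math.exp(-dt / tau)
    std = sigma * math.sqrt(1.0 - alpha * alpha)
    return mu + alpha * (n - mu) + std * random.gauss(0.0, 1.0)

# Simulate one tonically driven neuron for 1 s with OU noise on its input.
v, u, noise = -65.0, -13.0, 0.0
spikes = []
for t in range(1000):                   # 1000 steps of 1 ms
    noise = ou_step(noise)
    v, u, fired = izhikevich_step(v, u, I=10.0 + noise)
    if fired:
        spikes.append(t)
print(len(spikes), "spikes in 1 s")
```

With the regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8) and a constant drive of 10, the neuron fires tonically, while the OU term adds biologically plausible jitter to the spike times.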
We use the Ornstein-Uhlenbeck process, which is one of the best models of synaptic noise in a neural network. This SNN is composed of 100 neurons with 7700 synapses; 80% of the neurons are excitatory and 20% inhibitory. The number of 100 neurons was chosen to obtain a one-to-one connection between the SNN and an 8 × 8 matrix, and is sufficient for mimicking dynamic neural activities. The activity of the first 64 neurons (from the 80 excitatory ones) was converted into an 8 × 8 binary matrix-image (where each element is controlled by a single neuron) to be projected onto the BNN. A VHDL module creates this matrix and handles the VGA communication (Figure ). The 8 × 8 matrix generated by the SNN activity, coding for the activity of 64 excitatory neurons, was converted into an image of 800 × 600 pixels (transmitted through VGA to the video projector); each of the 8 × 8 squares had a size of 75 × 75 pixels (600/8 = 75), thus covering a 600 × 600 pixel portion of the whole image. Out-of-matrix pixels were left dark, i.e. not coding any information. The square images are sent via VGA to the digital light processing (DLP) projector. The video projector generates images through a Digital Micromirror Device (DMD) of 1024 × 768 mirrors, i.e. pixels. A custom VHDL design converted the spiking neuronal signal into analog signals sent to the projector (Sup. Fig. 2). The images from the video projector were refreshed at a rate of 60 Hz. The TTL pulse from the FPGA board activated the light-stimulation protocol run by the STG1008 stimulator, which was composed of five pulses of 3.75 V lasting 30 ms each, separated by 40 ms of dark (0 V). The STG1008 stimulator output controlled the custom-made LED power driver. We used 75% of the maximum LED power, which guaranteed sufficient power at the focal plane of the microscope. 
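The matrix-to-frame conversion described above is implemented in VHDL on the FPGA; the mapping itself can be sketched in Python as follows. The function name and the pure-Python frame representation are hypothetical, for illustration only.

```python
def spikes_to_frame(active, width=800, height=600, cell=75):
    """Map the ON/OFF state of 64 neurons (8 x 8 grid) onto a binary
    800 x 600 frame: each neuron controls one 75 x 75 pixel square of a
    600 x 600 region; out-of-matrix pixels stay dark (0)."""
    assert len(active) == 64
    frame = [[0] * width for _ in range(height)]
    for idx, on in enumerate(active):
        if not on:
            continue
        row, col = divmod(idx, 8)       # neuron index -> grid position
        for y in range(row * cell, (row + 1) * cell):
            for x in range(col * cell, (col + 1) * cell):
                frame[y][x] = 1
    return frame

# Example: neurons 0 and 63 active -> top-left and bottom-right squares lit,
# while the right-hand 200-pixel margin of the frame stays dark.
f = spikes_to_frame([i in (0, 63) for i in range(64)])
print(f[0][0], f[599][599], f[0][799])   # 1 1 0
```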
Since the power used was just enough to stimulate neurons in the illuminated regions, we do not expect any neurons to have been stimulated by the scattered ambient light of the projector outside those regions. The light intensity at the microscope focal plane (the location of the BNN) was controlled spatially via amplitude modulation of a collimated beam and temporally via a TTL trigger to the STG1008 (Multi Channel Systems) stimulator, as shown in Fig. . The LED was driven using a custom-made switch-mode power supply with adjustable current and voltage limiting, capable of providing up to 40 A at 5 V, as well as a 0–5 V input for external control of the output power. The circuit was based around a Texas Instruments TL494 chip. Cultures were prepared as described previously42. Briefly, the entire neocortex of a P0–1 mouse was removed and chopped with scissors in a Trypsin-EDTA solution. The cortical tissue was digested with a papain-based dissociation buffer (with 100 μl DNAse (Sigma-Aldrich), 3–5 crystals of L-Cysteine (Sigma-Aldrich), and HBSS with 20 mM HEPES, pH 7.4) and placed on a rotating shaker for 15 min at room temperature for mechanical dissociation by trituration. Cells were resuspended in modified essential medium (MEM) without L-glutamine, with essential amino acids, 5% heat-inactivated fetal calf serum, 5% heat-inactivated horse serum, 2 mM glutamine, 3 mg/ml glucose, 2% B-27, and 0.5% Pen/Strep, and plated on poly-D-lysine-covered multielectrode arrays at a cell density of 3000–4000 cells mm⁻² (∼1.5 × 10⁶ cells per dish). Cultures were maintained at 37 °C with 5% CO2. Growth medium (with 5 mg/ml glucose, 5% heat-inactivated fetal calf serum, 0.8% GlutaMAX, 0.5% Pen/Strep, 2 mM glutamine, 2% B-27) was partially replaced every 3–4 days. 
At 7 DIV, cultures were transduced with AAV2/1-hSyn-oChIEF-mCitrine vectors (assembled in HEK293T cells using a previously published protocol43). As experimental model for this research, primary cortical cultures from embryonic rats were used. All animal care protocols and all experimental protocols used for this study were conducted according to the animal research guidelines of Tel Aviv University and were approved by the Tel Aviv University Animal Care Committee. Before recording the activity of the neuronal cultures grown on the MEAs, the cultures were loaded with the RFP calcium sensor RHOD-3 (Life Technologies). The loading procedure and recording set-up were similar to those described by Bonifazi et al., 2013 and Kanner 2018. Briefly, in order to load the cells with the calcium-sensitive dye, cultures were incubated for 40 min in 1 ml buffered-ACSF solution supplemented with 1 μl of 10% pluronic acid F-127 (Biotium 59000) and 1 μl RHOD-3 previously diluted in 1 μl anhydrous DMSO. Following incubation, cultures were washed, incubated for another 30 minutes with buffered-ACSF, and then transferred to the set-up and recorded in fresh buffered-ACSF at 37 °C in an open-air environment. In order to avoid artifacts due to evaporation and pH changes, the buffered-ACSF was replaced after each recording session, each lasting between 20 and 40 minutes. Calcium-fluorescence images were acquired with an EMCCD camera (Andor iXon-885) mounted on an Olympus upright microscope (BX51WI) using a 10X water-immersion objective. 
Images were acquired at 57 frames per second in 2 × 2 binning mode (501 × 502 pixels out of the 1002 × 1004 pixels of the full EMCCD chip) using the Andor data-acquisition software (SOLIS) installed on a personal computer, spooled to a high-capacity hard drive, and stored as uncompressed multi-page TIFF libraries. Fluorescent excitation was provided via a 120-W mercury lamp (EXFO X-Cite 120PC) coupled to the microscope optical axis with a dichroic mirror and equipped with an excitation filter matching the dye spectrum (Chroma dsRed 41035). A U-DP beamsplitter (Olympus) equipped with a long-pass dichroic mirror (cut-off 532 nm) was placed between the eyepiece and the filter-cube wheel of the microscope and was used for calcium imaging with the red sensor (RHOD-3). Red calcium imaging also required relocating the emission filter of the dsRed cube above the long-pass dichroic mirror. In this way, the blue light used for ChIEF excitation entering through the U-DP could reach the neuronal sample at full power, while it was filtered and attenuated by the dsRed emission filter before reaching the camera, providing optimal conditions to record red calcium fluorescence. However, given the high power of the blue LED used for optogenetics, residual artefactual blue light shone during the stimulations could reach the camera, and it was therefore used to verify and record the spatio-temporal features of the stimuli. Regular GFP imaging was performed normally in the absence of the long-pass dichroic mirror. A commercial system purchased from Multi Channel Systems was used for multi-electrode array recordings. We used standard MEAs consisting of 59 round TiN electrodes, equidistantly positioned in an 8 × 8 layout grid, with an inter-electrode distance of 200 µm and a microelectrode diameter of 30 µm. A ground electrode embedded on the MEA was used. 
For this study, we recorded and studied the activity of networks between 21 and 28 DIV. In order to test the response of the neurons to blue-light stimulation, neurons were grown on coverslips and not on MEAs. Recordings from cultured neurons were performed under conditions similar to those previously described. Cells expressing the ChIEF construct were identified by mCitrine fluorescence and were patched using borosilicate glass electrodes (4–6 MΩ resistance) filled with (in mM): 110 potassium gluconate, 10 EGTA, 20 HEPES, 2 MgCl2 and 10 glucose; pH 7.3 (adjusted with KOH) and 280–290 mOsm. In order to isolate currents associated with ChIEF activity, inhibitory and excitatory transmission were blocked using 1 mM picrotoxin and 25 mM NBQX (Tocris). Cells were optically stimulated using a 200 µm thick optic fiber (Prizmatix) connected to a blue LED, and their responses to three stimulation frequencies were recorded. Failure rates were calculated as the percentage of failed action potentials out of the total number of stimulations delivered at each frequency. Results are shown in Suppl. Fig. . A different set of cortical cultures, plated on MEAs and transduced at 7 DIV with AAV2/1-hSyn-oChIEF-mCitrine vectors, was used for immunostaining at 21 DIV. Cells were washed with PBS and fixed with 4% PFA for 10 min at room temperature. Next, cells were permeabilized with 0.5% Triton X-100 (Sigma-Aldrich) in PBS for 10 min and blocked in blocking solution for 1 h at room temperature. The cultures were incubated overnight with the primary antibodies at 4 °C, then washed three times with PBS and incubated with a matching secondary antibody for 1 h at room temperature. Cultures were washed twice with PBS and incubated with DAPI in PBS for 10 min. Cultures were further washed twice with PBS. All data analysis was performed using MATLAB. 
In the whole paper, reported errors correspond to standard deviations. The similarity between input pairs (SIPs), i.e. between the binary 8 × 8 INPUT images, was computed using the Jaccard similarity index, calculated as the ratio between the number of elements shared by both images and the total number of elements in both images. The intensity of the 8 × 8 input image was quantified as the number of elements set to one, divided by 64 and multiplied by 100, in order to normalize it between 0 and 100%. The frequency of input was calculated as the inverse of the mean inter-stimulus interval, the latter expressed in seconds. Note that throughout this work, since each single stimulus is composed of five light pulses (30 ms ON and 40 ms OFF), the time t of a stimulus was taken as the starting time of the first pulse. Electrical recordings were first pre-processed using zero-phase digital filtering ("filtfilt" function) with a high-pass cut-off set at 100 Hz. All analysis steps described below were applied to the filtered signals. For each electrode, the noise level was estimated by fitting the probability density function of the signal with a Gaussian, in the 5th to 95th percentile interval, over an entire recording. A threshold of 4 standard deviations below the mean was used to extract the timing of the negative peaks of the spikes, with an imposed refractory time between spikes of 1 ms. In this work, no spike sorting was applied, and all the spikes recorded by each electrode, i.e. multi-unit activity, were considered. Extraction of the calcium imaging signal for each single cell was performed using the previously described procedure for cell-body identification44. The Vectorial Network Response (VNR(t)) to a given stimulus delivered at time t, for a time bin of response T, was calculated as the number of spikes each electrode recorded in the time interval [t, t + T]. 
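As a minimal sketch, the Jaccard-based SIP and the intensity normalization described above can be written as follows (function names are ours; the paper's analysis was performed in MATLAB):

```python
def jaccard(img_a, img_b):
    """Jaccard similarity between two binary images given as flat 0/1
    lists: elements active in both images over elements active in either."""
    inter = sum(1 for a, b in zip(img_a, img_b) if a and b)
    union = sum(1 for a, b in zip(img_a, img_b) if a or b)
    return inter / union if union else 1.0

def intensity(img):
    """Stimulus intensity: percentage of the 64 elements set to one."""
    return 100.0 * sum(img) / len(img)

a = [1, 1, 0, 0] * 16          # 32 of 64 elements ON
b = [1, 0, 1, 0] * 16          # 32 of 64 elements ON, half overlapping with a
print(jaccard(a, b))           # 16 shared / 48 in either -> 0.333...
print(intensity(a))            # 50.0
```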
So, the VNR was a vector of 59 elements, where each element corresponded to an electrode. The Scalar Network Response (SNR) was calculated as the sum of the VNR over all electrodes. The time window of response T varied between 10 and 500 ms in bins of 10 ms. The similarity between output pairs (SOPs) was quantified as one minus the cosine distance between the VNRs. The correlation between the SIPs and the SOPs (correlation of similarity) was used as the metric of information transmission (IT) between the SNN and the BNN. In this paper, when we refer to the correlation coefficient, we refer to the Pearson correlation coefficient. In order to identify the optimal conditions maximizing information transmission and to calculate the maximal IT (maxIT), given the set of stimuli within a given experiment, we calculated the correlation between SIPs and SOPs both by varying T (between 10 and 500 ms in bins of 10 ms) and by varying the subset of stimuli included in the calculation, discarding over-shooting SNRs as follows. After normalizing the SNRs between zero and one by dividing the SNRs by their maximal value (nSNRs), a threshold on the nSNRs between 0.02 and 1 (varying in bins of 0.02) was applied, and only stimuli below that threshold were used to calculate the SIP vs. SOP correlation. In order to guarantee minimal statistics, cases where more than 75% of all stimuli exceeded the threshold were discarded. The linearity of response was calculated as the correlation between the SNRs and the intensity of stimulation on a given subset of stimuli after thresholding. Network synchronizations (NSs) were identified from the instantaneous scalar network activity (iSNA). The iSNA was calculated by counting all spikes recorded in the network in sliding time windows of 10 ms (with a sliding step of 1 ms) over an entire recording session. 
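The SOP and the correlation-of-similarity metric can likewise be sketched with toy data (names and values are illustrative; the paper's analysis was done in MATLAB):

```python
import math

def sop(vnr_a, vnr_b):
    """Similarity between output pairs: one minus the cosine distance,
    i.e. the cosine similarity between two vectorial network responses."""
    dot = sum(x * y for x, y in zip(vnr_a, vnr_b))
    na = math.sqrt(sum(x * x for x in vnr_a))
    nb = math.sqrt(sum(y * y for y in vnr_b))
    return dot / (na * nb) if na and nb else 0.0

def pearson(xs, ys):
    """Pearson correlation coefficient, used here as the information-
    transmission metric between stimulus and response similarities."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: three stimulus pairs whose response similarities increase
# with the stimulus similarities yield a high SIP-vs-SOP correlation.
sips = [0.1, 0.5, 0.9]
sops = [sop([1, 0, 2], [0, 1, 2]),
        sop([1, 1, 2], [1, 0, 2]),
        sop([2, 1, 2], [2, 1, 1])]
print(pearson(sips, sops))
```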
Peaks of the iSNA above a minimal threshold of 25% of the maximal iSNA, identified with a refractory time of 100 ms, were considered NSs. The time of the maximum of the iSNA peak was used as the time of the NS. The BNN entrainment index was calculated as the ratio of evoked versus spontaneous NSs during the epoch in which the communication between SNN and BNN was switched ON. Evoked (spontaneous) NSs were marked as those occurring within (after) 500 ms from the last stimulus. The suppression-of-spontaneous-NSs index was calculated as the ratio between the frequency of spontaneous NSs in the BNN in the epoch preceding and the epoch during which the communication with the SNN was switched ON. The frequency of the NSs was calculated as one over the average inter-NS interval (the latter expressed in seconds). Supplementary information with supplementary figures is available.

The shortage of skilled workers who can use robots is a crucial issue hampering the growth of manufacturing industries. We present a new type of workforce training system, TeachBot, in which a robotic instructor delivers a series of interactive lectures using graphics and physical demonstrations of its arm movements. Furthermore, the TeachBot allows learners to physically interact with the robot. This new human-computer interface, integrating oral and graphical instructions with motion demonstration and physical touch, makes it possible to create engaging training materials. Effective learning takes place when the learner simultaneously interacts with an embodiment of new knowledge. We apply this "Learning by Touching" methodology to teach basic concepts, e.g. how a shaft encoder and feedback control work. In a pilot randomized control test with a small number of human subjects, we find suggestive evidence that Learning by Touching enhances learning effectiveness in this robotic context for adult learners. 
Students whose learning experience included touching the robot, as opposed to watching it deliver the lessons, showed gains in their ability to integrate knowledge about robotics. The "touching" group showed statistically significant gains in self-efficacy, which is an important antecedent to further learning and successful use of new technologies, as well as gains in knowledge about robotic concepts that trend toward significance.

Keywords: Adult learning; Human-computer interface; Human-robot interaction

Workforce development for advanced robotics and automation is a crucial challenge. A study by the Manufacturing Institute estimates that this shortage will leave two million unfilled manufacturing jobs in the United States alone. In the robotics and automation society, as well as in the education research sector, a number of valuable educational materials using robots have been developed. These include robotics programming instruments for elementary school children. In the future, occupations in which employees work productively alongside robots will be ubiquitous. Decades of robotics research have given robots more intelligent behavior, such as understanding human intentions and coordinating motion alongside humans. An important consideration for teaching the knowledge necessary to utilize advanced robotics will be each individual's self-efficacy for interacting with robotics. Sitting in a classroom, people are often unengaged, merely gaining superficial knowledge. Similarly, online learning, particularly large open courseware, has often failed to engage the broader population. We hypothesize that people will be more confident and comfortable working alongside robots if they have an opportunity to engage in Learning by Touching with a robotic instructor. 
The concept is implemented on a collaborative robot system connected to an online learning environment, as shown in Fig. . In the following sections, a novel workforce training system, called TeachBot, is introduced. Extending the object-mediated learning concept, we develop a new methodology for teaching the basics of robotics through physical interactions with the robot, referred to as Learning by Touching.

The scientific foundation of object-mediated learning can be found in the brain and cognitive sciences. Recent neurological studies have revealed that thinking about completing a task and actual muscular motion are closely related. Indeed, imagining doing an activity and actually doing it excite the same parts of the brain. For homo sapiens, this implies that the neurological development associated with physical actions and that associated with high-level language skills occurred concurrently. In fact, the same areas of the brain responsible for motor control are also among the most capable of supporting learning and the emergence of skilled behavior (Strick et al.). Previous work has leveraged this phenomenon in a variety of applications, for example while treating patients who suffered brain damage (Doidge). These findings imply that providing dual, concurrent stimuli to the learner, one physical and the other conceptual, will improve learning effectiveness. This is the key idea underpinning the TeachBot methodology.

The goal of the current work is to create a new workforce development methodology that can effectively engage broad populations, including older generations. Beyond merely providing educational materials for hands-on learning, we must invite and engage people who might otherwise be unable to get training. 
Our approach is to develop a new workforce training system that integrates oral explanation and guidance; graphics and animations coordinated with the vocal instruction; physical demonstration of the machine in a realistic setting; and concurrent body movements that create touch and proprioceptive sensations. Integrating these will allow us to create new curricula. We hypothesize that people will be more confident and comfortable working alongside robots if they have an opportunity to interact with a robotic instructor. Here, we propose an integrated, verbal-graphical-demonstrative-and-touchable system called TeachBot.

TeachBot is an autonomous, robotic instructor that introduces workers on a manufacturing line to robotics. TeachBot plays a dual role: an instructor delivering a lecture, and a demonstration machine that can execute programmed movements and perform various tasks. It is the physical extension of an online course in which lectures and laboratory sessions are seamlessly integrated. It requires no on-site human instructor; instead, trainees interact directly with TeachBot. Course materials are presented to produce a synergistic effect, integrating verbal and pictorial instructions into physical demonstrations and laboratory exercises. Learners not only listen to instructions from the robot, but also participate in demonstrations with the robot. This methodology aligns with the latest learning-science research on object-mediated learning and embodiment. TeachBot aims to attract a broad range of learners with diverse backgrounds. The system consists of:
(A) a robot that can interact with learners physically. 
The system is built with the technology of collaborative robotics, which allows humans to safely interact with the robot;
(B) a computer that accesses a cloud-based learning platform, delivers instruction materials, and controls the entire system;
(C) an interactive projector that displays graphs, scripts, and other images for instruction and communication; and
(D) peripheral devices and materials, including workpieces, jigs and fixtures, parts-feeders, and belt conveyors.

Instead of a computer monitor, the projector is used to display various images on a large worktable. Learners around the worktable focus just on the robot and the worktable rather than dividing their attention among a computer monitor, keyboard, and other places. All information is communicated both verbally and visually by combining TeachBot with the projector. The robot is synchronized with the audiovisual system so that the explanation of concepts and techniques is seamlessly integrated and coordinated with the physical demonstrations. When talking about the three-dimensional orientation of an object, for example, TeachBot immediately demonstrates with its posture how to hold an object in the desired orientation. TeachBot also points to objects with its end effector and speaks with gestures. TeachBot can be programmed to speak any language.

This unique integration of oral instruction, graphics, demonstration, and physical interaction allows us to create unique curricula. Suppose that a learner with a limited engineering background is to study how a robot can precisely move its joints to desired angles. A shaft encoder plays a key role in closed-loop control by measuring joint angle. The principle of a shaft encoder, however, is challenging for most people to understand, and a diagram explaining the principle of an optical shaft encoder can be confusing (see Fig. ). 
As they push the robot arm and observe the effects of their actions resulting in the generation of signals, learners make a mental connection between the motion and the measured signals. Their muscular action and visual observation occur concurrently, creating a synergistic "embodiment" of the function of the shaft encoder. This is not merely a standard hands-on laboratory experience: no complex reasoning is required. The learner can understand intuitively what a shaft encoder can do by touching and moving it: Learning by Touching. "Feedback control," a basic technique for controlling robots and all kinds of machines, is not easy to understand for the majority of people, who have a limited engineering background; even engineering students have difficulty in understanding the concept4.

We implement TeachBot on a Rethink Robotics Sawyer robot arm. The software architecture is illustrated in Fig. . TeachBot graphics and text are projected onto a worktable by an Epson PowerLite projector. The table was built in-house out of aluminum and melamine board to optimize the projection surface. The concepts taught by TeachBot in this study are categorized into seven sub-modules: Motors and Degrees-of-Freedom, Encoders, Feedback, Kinematics, Memory, Orientation and Position, and Waypoints.

We conducted a pilot human-subject experiment based on a protocol approved by the Massachusetts Institute of Technology Committee on the Use of Humans as Experimental Subjects (COUHES), #1806389401. The objective was to investigate the efficacy of robot-mediated learning. In particular, this experiment focuses on the evaluation of the TeachBot curriculum along two axes: self-efficacy and knowledge gained by the learner. 
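The feedback-control concept that TeachBot conveys by touch can be summarized in a few lines: a proportional controller repeatedly compares the encoder reading with the target angle and commands a correction proportional to the remaining error. This sketch is purely illustrative (the gain, model, and function name are our assumptions, not TeachBot's actual software):

```python
def simulate_p_control(target, kp=0.8, steps=50):
    """Proportional position feedback on a toy one-joint model: at each
    step the 'encoder' reads the joint angle and the controller applies a
    correction proportional to the error, so the joint converges to the
    target, and returns to it after any disturbance."""
    angle = 0.0
    for _ in range(steps):
        error = target - angle          # encoder measurement vs. goal
        angle += kp * error             # correction proportional to error
    return angle

final = simulate_p_control(90.0)
print(round(final, 3))                  # 90.0 (converges to the target)
```

Because the error shrinks by a constant factor at every step, the same loop also explains the quiz behavior: if the arm is pushed away from its target and released, the controller drives it back.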
In addition to quantifying the knowledge gained from the curriculum, we include an evaluation of self-efficacy because previous studies have indicated the importance of self-efficacy for learning and applying knowledge on the job. More than simply investigating TeachBot's ability to upskill workers, this pilot study specifically attempts to evaluate the Learning by Touching methodology against online video learning. For this reason, we selected a methodology of randomized control trials in which the control group took a video-based version of the TeachBot course. In order to isolate the effect of physical interaction with TeachBot, we designed the control group's video curriculum to include exactly the same audiovisual content as the experimental group's curriculum: namely, a video recording of an example student taking the TeachBot course.

Subjects, like the workforce they represented, were diverse: men, women, young adults, retirees, people who worked in jobs with robots, and people who were uncomfortable with the idea of robots in the workplace. Twenty-two subjects were recruited from Central Square in Cambridge, MA, USA, and pre-interviewed to ensure none had a four-year degree in STEM. Before the TeachBot course, subjects were given a short survey to gather demographic information (see Appendix A); the results are illustrated in Fig. .

The testing of each subject was completed in three parts, with the whole experiment taking up to 90 minutes per participant:
1. Pre-Test: subjects were evaluated for baseline knowledge and self-efficacy.
2. Learning Module: subjects completed either the TeachBot or video course.
3. Post-Test: subjects were evaluated again for newly learned knowledge and changed self-efficacy.

The test questions fell into three categories:
• One verbatim question, generated from ideas and information explicitly stated in the learning module, that required subjects to merely recall the correct responses. 
These questions required the shallowest level of understanding for the subject to answer correctly. For example: "Consider a robot with position feedback control. You push the arm away from its target position, then let go. What does the arm do?"
• One integration question, also generated from ideas and information explicitly stated in the learning module, but which required subjects to integrate two or more ideas from the learning module. Thus, integration questions required a slightly deeper level of understanding. For example: "What devices allow a robot arm to change its position and orientation?"
• Two inference questions that required subjects to generate ideas beyond the information presented in the learning modules, thus requiring the deepest level of understanding. For example: "The motors in this activity can only rotate. How can you move motors to make something move in a straight line?"

The pre- and post-tests each had 34 questions. Of these, six measured subject self-efficacy by asking subjects to rate how confident they would be completing various tasks involving a robot. The remaining 28 questions evaluated subjects' knowledge of robotic systems. The 28 knowledge-evaluation questions were divided evenly among the seven concepts taught by TeachBot discussed in Sec. . The three categories of questions represent different levels of cognitive activity required to respond to a question; these categories were also considered indicative of question difficulty. After completing the learning module, subjects took a post-test to evaluate self-efficacy and knowledge gains that could be attributed to completion of the learning module. 
The post-test was identical to the pre-test except that it did not include the demographic survey and the question order was shuffled. Please see Appendix A for the complete test materials.
4.2.4 Before beginning the pre-test, subjects were asked to draw a piece of paper randomly from a table. Half of the papers directed the subject into the experimental group, half into the control group. The papers were discarded after each draw to guarantee an equal number in each group. The experimental group participated in an approximately 20-minute, interactive, hands-on learning module conducted by the TeachBot. The control group was told to watch a video on two monitors showing a model subject taking the hands-on TeachBot course. As illustrated in 
4.3 Similar to VanLehn's evaluation 
4.3.1 On both the pre- and post-tests, subjects were asked to rate how confident they would be completing each of six unique tasks involving a robot on a scale from one to ten, with one being least confident and ten being most confident. These ratings were assigned a label in which i is the question number and j is the test identity: pre-test or post-test. Next, the differences between each subject's pre- and post-test ratings were computed. Some subjects began with a greater initial level of self-efficacy for robotics than others. Simply comparing the \u0394's between the control and experimental groups weights subjects with less self-efficacy more heavily because such subjects have more potential to improve their scores between tests. To account for this effect, we normalized each \u0394 by computing a ratio of the self-efficacy gained to the amount the subject could have gained:
4.3.2 To quantify learning, we used subjects' scores on the 28 knowledge-evaluation questions, again labeled such that i is the question number and j is the test identity: pre-test or post-test.
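The normalization just described, a ratio of the gain achieved to the gain available, can be sketched in Python as follows; the function name, the summing of the six 1-to-10 ratings, and the sample values are illustrative assumptions, not the study's analysis code.

```python
def normalized_gain(pre, post, max_score):
    # Ratio of the gain achieved (post - pre) to the gain available
    # (max_score - pre), so subjects who start high are not penalized
    # for having little room to improve.
    if max_score == pre:  # started at ceiling: no gain was possible
        return 0.0
    return (post - pre) / (max_score - pre)

# Hypothetical subject: six self-efficacy ratings on a 1-10 scale,
# summed per test (so the maximum possible total is 60).
pre_total = sum([4, 5, 3, 6, 2, 5])    # 25
post_total = sum([7, 8, 6, 8, 5, 7])   # 41
print(normalized_gain(pre_total, post_total, max_score=60))  # ~0.457
```

The same ratio applies to the knowledge questions, with the maximum score being the number of questions rather than a rating total.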
Subjects' answers to these questions on the pre- and post-tests were assigned a binary grade. Similar to the calculation of self-efficacy gain above, \u0394's were computed for each of the 28 knowledge-evaluation questions using Eq. Some subjects came in with significantly more understanding of the field of robotics and automation than others. As in the above calculation of self-efficacy gain, simply comparing the \u0394's between the control and experimental groups weights subjects with less prior knowledge more heavily because such subjects have more potential to improve their scores between tests. To account for this effect, we normalized each \u0394 by computing a ratio of the amount learned to the amount the subject could have learned:
4.3.3 The learning gain, \u039b, was also computed for each category of question using Eq. 5.
We evaluate how well TeachBot improves subjects' understanding of fundamental robotics concepts, as well as their self-efficacy regarding those concepts. We used a t-test to compare the results of both metrics across groups. For each metric, this test allowed us to test the hypothesis that the mean learning or self-efficacy gain of a learner who takes the TeachBot course is greater than that of a learner who watches a video lecture containing the same information. Values of the learning gain, \u039b, and self-efficacy gain, G, were computed for each subject. We present the results in 
\u2022 Subjects in the TeachBot course gained more self-efficacy for interacting with robots than those in the video course. As shown in , the results of the t-test allow us to reject the null hypothesis with 
\u2022 Subjects in the TeachBot course learned more than those in the video course. As shown in , the results of the t-test allow us to reject the null hypothesis with 
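The group comparison described here, a one-tailed, two-sample, unpaired t-test that does not assume equal variances (Welch's test), can be sketched as below; the helper function and the gain values are illustrative assumptions, not the study's data or code.

```python
import math

def welch_t(sample_a, sample_b):
    # Welch's (heteroscedastic) t statistic for two unpaired samples,
    # plus the Welch-Satterthwaite degrees of freedom.
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical normalized gains for the two groups.
teachbot = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59]
video = [0.41, 0.38, 0.52, 0.30, 0.45, 0.36]
t, df = welch_t(teachbot, video)
# For the one-tailed test (H1: TeachBot mean gain > video mean gain),
# t is compared against the upper-tail critical value at df degrees of
# freedom, e.g. via scipy.stats.t.sf(t, df).
```

The unequal-variance form matters because nothing guarantees the two groups' gains are equally spread; the Welch-Satterthwaite approximation adjusts the degrees of freedom accordingly.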
We use a one-tailed, two-sample, unpaired, heteroscedastic t-test, which allows us to reject the null hypothesis with Additionally, analyzing the question-category learning gains highlights the strengths of the TeachBot curriculum as well as specific areas for improvement. Subjects in the TeachBot course did better on integration questions than those in the video course. As shown in However, similar improvement was not seen in subjects' ability to infer new knowledge about robotic systems. We hypothesize that this is because, even though the TeachBot module was successful in guiding learners to develop a robust understanding of how concepts in automation and system integration build upon each other, we did not design enough inference-related material into the module. The learning modules we designed focused on helping learners connect new concepts to ones they already mastered, but did not guide learners to infer new knowledge about robot systems. Clearly, this is an area for improvement in future iterations of the curriculum. Finally, as shown in , there is no correlation between pre-test score and learning gain. As illustrated in , these results are promising indicators that the TeachBot system would be a powerful tool to upskill adult workers to be confident and comfortable working with collaborative manufacturing robotics. This small pilot experiment indicated self-efficacy gains for subjects who took the TeachBot course that were statistically significantly greater than those of subjects who completed a purely video-based curriculum. Learning gains for those who took the TeachBot course tended toward significance. These results also help identify the specific strengths and weaknesses of the existing curriculum. Learners who took the TeachBot course saw gains in their ability to integrate knowledge across a variety of concepts, but more work needs to be done to enable students to infer new insights without being explicitly told.
More generally, this experiment offers promising evidence toward the future of collaborative robots in education.
6 We have presented TeachBot, an automatic, robotic education and training system. TeachBot leverages the state of the art in robotics, education, and neuropsychology to create an engaging learning environment and empower a diverse workforce with the skills necessary to contribute in advanced manufacturing occupations. A novel methodology has been developed by streamlining instruction, demonstration, and physical interaction with the robot. Human subject tests have been conducted to evaluate the efficacy of the new methodology, Learning by Touching. These human subject tests demonstrated that TeachBot provides a significant benefit to learners' self-efficacy for interacting with robots, that is, their belief in their innate ability to interact productively with a collaborative robotic system. The findings about self-efficacy matter because self-efficacy is an important determinant of additional learning and effective use of new technologies. Even if there were no gain from \u201clearning by touching\u201d on conceptual measures, the fact that touch may improve self-efficacy makes it an important improvement, especially for non-traditional learners. The findings of this pilot experiment have great implications for lifelong education and closing the manufacturing skills gap. We have proposed a novel educational platform, TeachBot, to upskill manufacturing workers to integrate, maintain, and operate collaborative manufacturing robotics in their places of work. We have provided the software to deploy TeachBot with an open-source license. Preliminary human subject testing has demonstrated that learners who take the TeachBot course gain significantly more self-efficacy for manufacturing robotics than do learners who take a video version of the course.
Measurements of knowledge gained by subjects in the experimental group also show an increase in learning that tends toward significance. These findings both motivate future development and study of the Learning by Touching methodology for collaborative manufacturing robotics education and offer insights into how the curriculum can be improved. These findings build on previous studies investigating the efficacy of hands-on robotics in education. The concepts taught in this pilot experiment are basic, and the experiments conducted are preliminary. The curriculum will be expanded to teach a broader range of concepts and skills, and the experiment will be extended to more subjects. Currently, the research team is developing more course materials, including trajectory generation and the use of various sensors and vision systems. In addition to more material, the team is also developing more challenging activities that are directly applicable to real-life scenarios found in manufacturing, such as pick-and-place tasks and operating alongside CNC machinery. We envision a multiple-day training curriculum to teach learners a broad range of techniques that they need to know to begin working in advanced manufacturing. While the focus of this pilot experiment is on self-efficacy and foundational knowledge gains, it is also important to investigate how taking the TeachBot course might additionally affect subjects' practical abilities to work with a robot on real manufacturing tasks. Additionally, future work should include a qualitative component to address questions related to learners' relationship with the robot. Nicholas Stearns Selby, Jerry Ng: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Glenda S. Stump, George Westerman, Claire Traweek: Analyzed and interpreted the data. H.
Harry Asada: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.This work was supported by the Advanced Robotics for Manufacturing Institute and the Massachusetts Manufacturing Innovation InitiativeData will be made available on request.The authors declare no conflict of interest.https://doi.org/10.1016/j.heliyon.2021.e07583.Supplementary content related to this article has been published online at"} +{"text": "Alkynes are amongst the most valuable functional groups in organic chemistry and widely used in chemical biology, pharmacy, and materials science. However, the preparation of alkyl-substituted alkynes still remains elusive. Here, we show a nickel-catalyzed deaminative Sonogashira coupling of alkylpyridinium salts. Key to the success of this coupling is the development of an easily accessible and bench-stable amide-type pincer ligand. This ligand allows naturally abundant alkyl amines as alkylating agents in Sonogashira reactions, and produces diverse alkynes in excellent yields under mild conditions. Salient merits of this chemistry include broad substrate scope and functional group tolerance, gram-scale synthesis, one-pot transformation, versatile late-stage derivatizations as well as the use of inexpensive pre-catalyst and readily available substrates. The high efficiency and strong practicability bode well for the widespread applications of this strategy in constructing functional molecules, materials, and fine chemicals. Alkynes are amongst the most valuable functional groups in organic chemistry, however, the preparation of alkyl-substituted alkynes still remains elusive. Here the authors show a nickel-catalyzed deaminative Sonogashira coupling of alkylpyridinium salts. 
For example, the introduction of an alkyne into a drug molecule could provide remarkable benefits in its biological activity, such as enhanced lipophilicity, bioavailability, and metabolic stability \u2013C(sp) bond formation11. However, the incorporation of nonactivated, \u03b2-H-containing alkyl electrophiles in the Sonogashira reaction to construct the C(sp3)\u2013C(sp) bond still remains a formidable challenge, presumably due to the following issues: the low concentration of the transmetalating species generated in situ in the reaction medium. Moreover, the facile cyclotrimerization and/or oligomerization of terminal alkynes under the catalysis of low-valent metal is another obstacle that renders such coupling a more intractable objective13. In a pioneering study, Fu and co-workers realized Pd/Cu-cocatalyzed Sonogashira coupling of nonactivated primary alkyl iodides and bromides by the use of an N-heterocyclic carbene (NHC) ligand14. Later on, a few elegant strategies for this transformation were developed based on the discovery of different catalytic systems, including a Pd/bisoxazoline-derived NHC ligand15, a Ni/NN2 pincer ligand17, a Ni/pyridine bisoxazoline (pybox) system18, and an NHC pincer nickel(II) complex19. The use of a copper salt as cocatalyst might also cause some detrimental effects to the reaction, such as the undesired Glaser coupling of terminal alkynes and a complicated workup procedure23. Thus, developing simple approaches to access such coupling with more alternatives, especially under copper-free conditions, is highly important and appealing. Alkynes are one of the most valuable functional groups in organic chemistry because they not only serve as versatile synthetic building blocks for diversified chemical transformations, but are also common structural motifs in a wide range of natural products, bioactive molecules, and organic materials.
In this context, using alkyl amines as alkylating agents in organic synthesis would have many privileged advantages when compared to the traditional platforms using alkyl halides. However, such a promising transformation is still underexploited owing to the high bond dissociation energy of the C(sp3)\u2013N bond28. In a seminal work, Watson et al. demonstrated that pyridinium salts29, also known as \u201cKatritzky salts\u201d, which are easily formed from primary amines and a pyrylium salt, could be used as alkyl radical precursors in cross-coupling with arylboronic acids30. Since then, many elegant approaches based on the utilization of these redox-active amines for deaminative functionalization34, such as arylation40, borylation43, alkenylation46, allylation47, alkyl-Heck-type reactions49, carbonylation54, alkylation58, difluoromethylation59, and C-heteroatom bond-forming reactions62, have been established. However, the deaminative alkynylation of alkyl amines to form the C(sp3)\u2013C(sp) bond still remains elusive. Recently, the Gryko group developed a protocol to access such a transformation by visible-light-mediated desulfonylative alkynylation of secondary alkyl- and benzylpyridinium salts with alkynyl sulfones63. Han and co-workers reported an efficient nickel-catalyzed reductive cross-electrophile coupling of Katritzky salts with triisopropylsilyl (TIPS)-substituted bromoethyne to achieve the challenging C(sp3)\u2013C(sp) bond64. Nevertheless, these methods rely mainly upon the use of preformed and activated alkynyl sulfones or bromides as alkynylating reagents. In addition, the limited substrate scope and the utilization of a large excess of reductant (e.g. zinc flake) further disfavored their wide applications in organic synthesis. Therefore, the direct coupling of terminal alkynes with alkylpyridinium salts in a redox-neutral fashion for the synthesis of important alkynes would be highly desirable in terms of both atom-economy and practical application.
To the best of our knowledge, however, such a straightforward and practical protocol has not been achieved. Alkyl amines are naturally abundant and readily available feedstock chemicals, and the prevalence of amino groups in numerous bioactive molecules, pharmaceuticals, and natural products provides expedient opportunities for late-stage functionalization and bioconjugation66. Following our keen interest in nickel-catalyzed cross-coupling reactions, herein we report a general and efficient nickel-catalyzed Sonogashira coupling of alkylpyridinium salts via C\u2013N bond activation under Cu-free conditions. An amide-type picolinamide pincer ligand (L4) is found to be crucial for this transformation, allowing the coupling to occur under mild reaction conditions with excellent yields and high functional group tolerance. The reaction of 1a and phenylacetylene 2a was selected as the model reaction for optimization, with K3PO4 as a base in tetrahydrofuran (THF) at 80\u2009\u00b0C. When pybox, the most efficient ligand in Liu\u2019s work18, was applied to this reaction, the desired product 3a was obtained in 4% yield and the main product was 1,4-diphenylbutadiyne derived from the homocoupling of 2a (entry 1). When the more electron-rich and bulky 4,4\u2032,4\u2033-tri-tert-butyl terpyridine (ttbtpy) was used in this process, the yield of 3a was improved to 53% (entry 2). Much to our delight, the yields of 3a could be further improved to 87% and 83%, respectively, when an amide-type pincer ligand (e.g. N-(pyridin-2-ylmethyl)picolinamide (L1) or N-(quinolin-8-yl)picolinamide (L2)) was used (entries 3\u20134), though such amides were seldom used as ligands in transition metal-catalyzed cross-coupling reactions73. This discovery encouraged us to synthesize two sterically more hindered methylated derivatives, L3 and L4, as ligands. Gratifyingly, the yield was significantly improved to 96% by employing L4 (entry 6).
The reasons for the high efficiency of L4 are still unclear at present but are probably related to its steric hindrance and rigidity. Screening of nickel catalysts revealed that Ni(acac)2 was ineffective (entry 7), whereas the inexpensive, air-stable and moisture-stable NiCl2\u00b76H2O gave the best result (entry 8). Subsequently, the effect of the base was examined. K2CO3 resulted in a slightly diminished yield (entry 9). However, the reaction was completely shut down by using Et3N, a frequently used base in palladium-catalyzed Sonogashira coupling of aryl halides (entry 10)11. Lowering the amount of catalyst or the reaction temperature led to reduced yields to different extents (entries 11\u201312). Control experiments indicated that NiCl2\u00b76H2O, L4, and K3PO4 were all essential for achieving the transformation (entries 13\u201315). Various synthetically important functional groups including methoxyl, aryl halide, ester, acetyl, trifluoromethyl, formyl, and free amino were all perfectly accommodated, giving products in excellent yields. Particularly noteworthy was that aryl chlorides and bromides, popular electrophilic partners in Sonogashira reactions10, remained inert under our optimized reaction conditions, highlighting the exquisite chemoselectivity of this transformation. Additionally, the presence of an ortho formyl group did not hamper the reaction. Strikingly, a terminal alkyne (2l) containing a boronate ester group was also successfully engaged in this transformation with its C\u2013B bond intact, thus allowing for further diversification. Although heteroaromatic rings such as pyridine and thiophene might deactivate a metal catalyst by coordination, these substrates and 1-ethynylcyclohexene could smoothly undergo the transformation, giving the corresponding products (3m\u20133o) in excellent yields. More importantly, aliphatic alkynes (2p\u20132t) could also be coupled with high efficiency.
The functional groups such as Cl, NHBoc, and OH were well tolerated, affording the products (3r\u20133t) in high to excellent yields with excellent selectivity. Finally, TIPS- and trimethylsilyl-capped alkynes were also suitable substrates, giving the products (3u\u20133v) in high yields. With the optimized coupling conditions in hand, the substrate scope was evaluated. Primary alkylpyridinium salts (1b\u20131l) and benzylpyridinium salts (1m\u20131o) were all suitable substrates for this transformation, and the desired products (4b\u20134o) could be obtained in high to excellent yields. However, the secondary alkylpyridinium salts (e.g. 1t) exhibited a dramatic drop in reaction efficiency under the optimized conditions. Reoptimization for secondary alkylpyridinium salts was then conducted by exploring various reaction parameters. Gratifyingly, a 98% yield of 4t could be obtained by changing the solvent to DMF. Under the slightly modified conditions, diverse secondary alkylpyridinium salts underwent this coupling smoothly to give the desired products (4p\u20134w) in high to excellent yields. Similarly, good functional group tolerance was observed, as exemplified by the good compatibility with methoxyl, trifluoromethoxyl, bromide, indole NH, alkenyl, tertiary amine, acetal, hydroxyl, and chloride groups. More importantly, heterocyclic units such as thiophene (1f), pyridine (1g), indole (1h), tetrahydropyran (1s), and piperidine (1t), which are prevalent in medicinally relevant molecules, were competent substrates. In addition, benzylpyridinium salts, especially electron-rich benzylic salts which are not suitable in Gryko\u2019s work63, could be coupled with high efficiency (4m\u20134n), emphasizing the robustness of our strategy in synthetic applications. It is worth noting that both cyclic (1p\u20131u) and acyclic secondary amines (1v\u20131w) could be readily applied to this protocol with high to excellent yields.
Moreover, a \u03b3-amino acid-derived pyridinium salt (1x) proceeded well under the standard conditions. Notably, a quaternary carbon center could be successfully constructed by using a tertiary amine derivative (1y), albeit in a 44% yield. However, when phenylalanine (1z) and dipeptide (1aa) derivatives were employed in this reaction, complex product distributions were observed, and none of the desired deaminative alkynylation products were obtained. Next, the generality of alkyl amines was evaluated; 3a could be obtained without further reoptimization of the reaction conditions. This general protocol could be successfully applied for the rapid construction of alkyne-labeled derivatives of biomolecules (5\u20139). The readily attached alkynyl group is expected to serve as a labeling tool to facilitate further chemical biology studies and as a handle for rapid entry to complex derivatives. Likewise, this versatile method can also be applied in the further functionalization of alkynyl-containing bioactive molecules or intermediates (10\u201314). Notably, the virtues of the current method were further illustrated by the successful coupling of two drug molecules for assembling their drug-like hybrids 15\u201320, highlighting the potential applications of this chemistry in the discovery of pharmaceutical candidates. To further demonstrate the broad applicability of this method, late-stage functionalization of natural products and medicinally relevant molecules was conducted. A series of control experiments showed that 3a was obtained with the concurrent formation of TEMPO-adduct 21 in 16% yield, and that such a species is not likely involved in this chemistry. To gain more insight into the mechanism of this reaction, a Ni-alkynyl complex A1 was formed by the reaction of NiCl2(glyme), L4, and K3PO4 with p-methoxyphenylethyne in DMF, and its structure was confirmed by X-ray diffraction.
Employing 10\u2009mol% of complex A1 as the catalyst, the reaction of 1a with p-methoxyphenylethyne delivered the product 3c in 93% yield, which was similar to the result obtained using NiCl2\u00b76H2O and L4 as the catalyst. These results, considering the outcomes achieved in ref. 17, further support the possibility of a Ni bis(acetylide) intermediate as the active species for this coupling reaction. To understand the reaction mechanism, a series of experiments were performed. When the radical trapping reagent TEMPO was added to the reaction mixture, only a trace of the product was detected. A nickel-based catalytic cycle is proposed77. Initially, coordination of L4 to the Ni center followed by base-promoted transmetalation with the terminal alkyne forms a complex A. However, this species possesses no reactivity toward pyridinium 1. Complex A1, showing an oxidation wave at 1.19\u2009V in DMF, further indicates that the direct coupling of complex A with 1 (Ered\u2009=\u2009\u22120.90\u2009V vs. SCE in DMF) is not possible. The cation in the Ni bis(acetylide) intermediate is probably coordinated to the triple bond of the alkyne, similar to the binding of copper reported by Hartwig78. Then, the more active species B might undergo oxidative addition with 1 to give intermediate C, during which a radical process is likely involved. Reductive elimination from C delivers the C(sp3)\u2013C(sp) coupling product and regenerates complex A for the next catalytic cycle. The reasons for the high selectivity for cross-coupling products are unclear at present, but are probably related to the fast alkyl\u2013alkynyl reductive elimination promoted by the NN2 pincer ligand76.
Additionally, the oxidation state of Ni in intermediate C seems to be NiIV, but it might also be described as a NiIII\u2013ligand radical complex when considering the redox-active nature of the NN2 pincer ligand79. Therefore, the current catalytic cycle is not in contradiction with the proposed mechanism in Ni catalysis. Although a detailed mechanism awaits further studies, a plausible mechanism is depicted in Fig. In summary, we have achieved a highly efficient and general Sonogashira coupling of alkylpyridinium salts through the development of a Ni/NN2 pincer ligand catalytic system. Noteworthy was the realization of the coupling of terminal alkynes with naturally abundant alkyl amines, expanding the substrate scope of the Sonogashira reaction. The virtues of this reaction are illustrated by the broad substrate scope and good functional group tolerance in both coupling partners, as well as the efficient diversification of natural products and medicinally relevant molecules. Further mechanistic investigation and application of this catalytic system to cross-coupling with other electrophiles are currently ongoing in our laboratories. In a nitrogen-filled glovebox, NiCl2\u00b76H2O, L4, anhydrous K3PO4, primary alkylpyridinium salt (0.3\u2009mmol), and THF (1.5\u2009mL) were successively added to an oven-dried sealable Schlenk tube (10.0\u2009mL), followed by addition of terminal alkyne (0.45\u2009mmol) via microliter syringe. Then the tube was securely sealed and taken outside the glovebox, and it was immersed into an oil bath preheated at 80 or 50\u2009\u00b0C. After stirring for 24\u2009h, the reaction mixture was cooled to room temperature and filtered through a short pad of silica gel. Then the filter cake was washed with dichloromethane or ethyl acetate.
The resulting solution was concentrated under vacuum and the residue was purified by column chromatography on silica gel to afford the corresponding product.In a nitrogen-filled glovebox, NiCl2\u00b76H2O , L4 , anhydrous K3PO4 , secondary alkylpyridinium salt (0.3\u2009mmol), and N,N-dimethylformamide (1.5\u2009mL) were successively added to an oven-dried sealable Schlenk tube (10.0\u2009mL) followed by addition of phenylacetylene via microliter syringe. Then the tube was securely sealed and taken outside the glovebox. And it was immersed into an oil bath preheated at 80\u2009\u00b0C. After stirring for 24\u2009h, the reaction mixture was cooled to room temperature and quenched with water. Then it was extracted with ethyl acetate or diethyl ether, washed with water and brine, and dried over anhydrous Na2SO4. The resulting solution was concentrated under vacuum and the residue was purified by column chromatography on silica gel to afford the corresponding product.In a nitrogen-filled glovebox, NiClSupplementary Information"} +{"text": "The preparation of dendritic cells (DCs) for adoptive cellular immunotherapy (ACI) requires the maturation of ex vivo-produced immature(i) DCs. This maturation ensures that the antigen presentation triggers an immune response towards the antigen-expressing cells. Although there is a large number of maturation agents capable of inducing strong DC maturation, there is still only a very limited number of these agents approved for use in the production of DCs for ACI. In seeking novel DC maturation agents, we used differentially activated human mast cell (MC) line LAD2 as a cellular adjuvant to elicit or modulate the maturation of ex vivo-produced monocyte-derived iDCs. 
We found that co-culture of iDCs with differentially activated LAD2 MCs in serum-containing media significantly modulated polyinosinic:polycytidylic acid (poly I:C)-elicited DC maturation, as determined through the surface expression of the maturation markers CD80, CD83, CD86, and human leukocyte antigen (HLA)-DR. Once iDCs were generated in serum-free conditions, they became refractory to maturation with poly I:C, and the LAD2 MC modulatory potential was minimized. However, the maturation-refractory phenotype of the serum-free generated iDCs was largely overcome by co-culture with thapsigargin-stimulated LAD2 MCs. Our data suggest that differentially stimulated mast cells could be novel and highly potent cellular adjuvants for the maturation of DCs for ACI. Mast cells (MCs) are non-dividing, long-living, terminally differentiated, tissue- or mucosa-resident cells. Ex vivo-produced DCs are used for adoptive cellular immunotherapy (ACI)12,13. In order to comply with the regulatory authorities and ensure the safety of therapeutic DCs, the production of DCs needs to be performed under strictly defined conditions that often prefer the cells to be cultured in serum-free media. In this study, we investigated whether a differentially stimulated and well-defined human mast cell line, LAD2, could be used as a cellular adjuvant to elicit or modulate the maturation of ex vivo-produced monocyte-derived DCs. To investigate the impact of mast cells on the maturation of monocyte-derived DCs, we used the human mast cell line LAD2. To see the impact of the differentially stimulated LAD2 MCs on DC maturation, we first stimulated IgE-biotin-sensitized LAD2 MCs with thapsigargin, PMA, or streptavidin. The differentially stimulated LAD2 MCs were then extensively rinsed with the culture media and co-cultured with immature monocyte-derived DCs at a ratio of 1:6 (LAD2 MCs:iDCs).
These DCs were generated in the serum-containing medium, and the co-culture was performed in the presence or absence of the maturation compound polyinosinic:polycytidylic acid (poly I:C), which is a TLR-3 agonist and is used for the maturation of DCs for ACI21,29,30. One of the impacts that co-culture of LAD2 MCs with iDCs delivered was a decrease in the content of viable DCs in the cell co-culture. The production of DCs for ACI prefers serum-free conditions29,30. The serum-free conditions showed that iDCs can become refractory to their maturation with conventional maturation compounds and that this phenotype is also resilient to modulation with non-stimulated or receptor-stimulated LAD2 MCs. We next investigated whether the non-specific stimulation of LAD2 MCs could break the maturation-refractory DC phenotype. We selected thapsigargin stimulation for this series of experiments because it had previously shown the best performance with iDCs produced in the serum-containing medium. We also investigated the co-culture impact at decreased ratios; the thapsigargin-stimulated LAD2 MCs were co-cultured with iDCs at 1:18, 1:54, or 1:172 (LAD2 MCs:iDCs) ratios. As shown in This study showed the human mast cell line LAD2 to be a new and highly potent cellular modulator of monocyte-derived DCs. The differentially stimulated LAD2 MCs were able to modulate poly I:C-mediated maturation of iDCs generated in a serum-containing medium. More importantly, however, the thapsigargin-stimulated LAD2 MCs overcame the maturation-refractory phenotype of iDCs generated under the serum-free condition. MCs are immune cells of immense modulatory potential. Upon their stimulation, a large number of biologically active compounds are released39. The therapeutic performance of ex vivo-produced DCs for ACI largely depends on their maturation.
Current protocols attempt to increase the maturation of DCs with multiple compounds, which often target the pattern recognition receptors, namely TLRs. Apart from other immune cell types, MCs also express various receptors48. The mechanism through which thapsigargin-stimulated LAD2 MCs were able to induce efficient DC maturation during co-culture is unknown. Short exposure of LAD2 MCs to thapsigargin induces their degranulation and cytokine production28. LAD2 MCs were, or were not, sensitized with biotinylated IgE for 18\u201324 h at 37 \u00b0C and 5% CO2. The cells were harvested, twice rinsed with 5 mL SP medium without SCF, and stimulated with thapsigargin. The stimulated cells were harvested and three-times rinsed with 5 mL of the corresponding DC culture medium in which the maturation was performed. The cells were then resuspended in the DC culture media at the desired concentrations. LAD2 MCs were cultured in serum-free culture medium (with 100 U/mL penicillin\u2013streptomycin and 2 mM GlutaMax (Thermo Scientific)) supplemented with 100 ng/mL of human stem cell factor. The cells were sensitized with the biotinylated IgE as above. To analyze the expression of CD117 and Fc\u03b5RI on the surfaces of LAD2 MCs, the cells were starved of SCF for 18\u201324 h at 37 \u00b0C and 5% CO2. The IgE-sensitized or SCF-starved LAD2 MCs were transferred to V-bottom 96-well plates and stained with avidin-FITC or specific antibodies, CD117-APC and Fc\u03b5RI-FITC (Becton Dickinson). The cells were washed with ice-cold PBS containing 2 mM EDTA (PBS/EDTA), resuspended in ice-cold PBS/EDTA with DAPI, and analyzed by a FACSAria II (Becton Dickinson). The flow cytometry data were analyzed by FlowJo software.
The degranulation of stimulated LAD2 MCs was determined through a LAMP-2 (CD107b) externalization assay as previously described. For calcium mobilisation analysis, the cells were loaded with Fluo-4 AM at 37 \u00b0C and 5% CO2 for 30\u201345 min, pelleted, and rinsed with HEPES/BSA buffer with sulfinpyrazone and without Fluo-4 AM. The cells were resuspended in HEPES/BSA/sulfinpyrazone buffer, incubated at 37 \u00b0C for over 5 min, supplemented with DAPI, and immediately analyzed by flow cytometry using a FACSAria II (Becton Dickinson). The baseline Fluo-4 AM fluorescence of DAPI-negative cells was acquired in the 37 \u00b0C chamber for 30 s and, following supplementation of the cells with the indicated stimulant concentration, the Fluo-4 AM fluorescence of DAPI-negative cells continued to be acquired in the 37 \u00b0C chamber for 4 min and 30 s. The acquired Fluo-4 AM fluorescence kinetics of DAPI-negative cells were evaluated with FlowJo software. IgE binding to LAD2 MCs was analyzed as described above. For iDC generation, monocytes were isolated from buffy coats by density gradient as described previously. The produced iDCs were harvested, pelleted at 240\u00d7 g for 10 min at RT, and resuspended in fresh KM or CellGro medium with GM-CSF and IL-4 at a concentration of 2 \u00d7 10^6 cells/mL. The cell suspension was transferred to F-bottom 96-well plate wells with 100 \u00b5L of the corresponding KM or CellGro medium containing, or not, non-stimulated or differentially stimulated LAD2 MCs. The cells were, or were not, then supplemented with poly I:C; alternatively, the cells were supplemented with R848. The cell co-culture was then extensively resuspended and cultured for 18\u201324 h. For determination of DC maturation, the co-cultured cells were transferred to a V-bottom 96-well plate and stained as above.
The means and SEM were calculated by GraphPad Prism 6 from the indicated sample size. Statistical significance was determined by the indicated test. LAD2 MCs stimulated with thapsigargin are a novel and highly potent maturation agent of ex vivo-produced, monocyte-derived DCs for ACI. Our data suggest that selectively stimulated MCs could be highly potent cellular adjuvants for the maturation of DCs for ACI."} +{"text": "Shifts in subsistence strategy among Native American people of the Amazon may be the cause of typically western diseases previously linked to modifications of gut microbial communities. Here, we used 16S ribosomal RNA sequencing to characterise the gut microbiome of 114 rural individuals, namely Xikrin, Suru\u00ed and Tupai\u00fa, and urban individuals from Bel\u00e9m city, in the Brazilian Amazon. Our findings show the degree of potential urbanisation occurring in the gut microbiome of rural Amazonian communities, characterised by the gradual loss and substitution of taxa associated with rural lifestyles, such as Treponema. Comparisons to worldwide populations indicated that Native American groups are similar to South American agricultural societies and urban groups are comparable to African urban and semi-urban populations. The transitioning profile observed among traditional populations is concerning in light of increasingly urban lifestyles. Lastly, we propose the term \u201ctropical urban\u201d to classify the microbiome of urban populations living in tropical zones. Evidence shows substantial differences in gut/stool microbiome diversity and composition between populations living under diverse subsistence strategies. Generally, individuals living in rural and/or traditional societies harbour highly diverse microbiomes when compared to those from industrialised areas [4].
Gut microbiome metagenomic characterisations across multiple human populations have shed light on the roles of this complex ecosystem in maintaining human health. Among other environmental factors, dietary habits, access to medication, sanitation practices and interpersonal contact are mainly responsible for shaping such gut microbial structure [7]. Consumption of highly plant-based diets such as those followed by traditional hunter-gatherers and rural agriculturalists promotes gut colonisation by fibre-degrading microbes, such as those from the Spirochaetes phylum and Prevotella genus [8]. For this reason, the gut microbial communities of populations such as the Hadza and Yanomami are regarded as a \u201cwindow into the past\u201d, given their hosts follow a lifestyle comparable to that of ancient pre-industrialised humans [4]. Such a lifestyle is marked by reliance on foraging and hunting for food, as well as gender division of labour and seasonal food cycling, markedly opposed to the contemporary industrialised world [10]. Thus, it is thought that the gut microbiome of non-urbanised people is ideally adapted to human physiology, as it promotes overall gut health and beneficial interactions with the immune system [12]. Conversely, gut microbial communities of industrialised societies seem to have been altered and are increasingly enriched for mucus-degrading and antibiotic-resistant taxa, which may trigger pro-inflammatory responses and gut dysbiosis [12]. Microbial biomarkers for this lifestyle are typically a high abundance of the Bacteroides genus and Akkermansia muciniphila, while diets are rich in animal fat and protein, simple sugars and processed foods [13]. Urbanisation and shifts in dietary habits are likely the cause of gut microbial extinctions across generations, disrupting the host\u2013microbiome equilibrium, which may eventually lead to the appearance of autoimmune disorders, obesity, type 2 diabetes and other non-communicable diseases [15].
The compositional shifts in gut microbiomes of traditional populations have been a topic of debate and great concern in the fields of microbiology and medicine. In the Brazilian Amazon territory, there are 500 Native American populations living across a large urbanisation gradient, with some ethnic groups belonging to a hunter-gatherer and agricultural subsistence lifestyle, while others inhabit areas near small or large urban centres. This subsistence shift will likely result in disrupted gut microbiomes, drawing attention to the dangers of compromising the Amazonian biodiversity present in indigenous settings, which has contributed to health maintenance and ecological balance over thousands of generations [16]. In this regard, the work of Pires et al. [17] was the first to characterise the gut microbiome of Brazilian Amazonian populations living in a rural setting. They found that the trade-off between the abundances of Prevotella and Bacteroides taxa was the main feature distinguishing two Amazonian riverine populations from urban individuals of Rio de Janeiro, located in southeast Brazil. However, these data do not include populations experiencing lifestyle transitions, and it remains unclear whether these results are transferable to rural Native American Amazonian communities and individuals living in urban Amazonian cities. Currently, there are no data comparing the gut microbiome composition of Native American and urban populations from the Brazilian Amazon, a region with vast biodiversity. Here, we aimed to determine whether the microbiomes of rural Native American populations in the Brazilian Amazon show markers of transition to urbanisation and to what extent recent subsistence changes are impacting gut microbiome compositions.
We employed 16S ribosomal RNA (rRNA) sequencing to profile the gut microbiome of 114 individuals from four distinct populations of urban and Native American Brazilian Amazonians and compared microbial community structures to other urban and rural groups surveyed in Brazil and across the globe. We recruited three rural (R) Native American populations, namely the Xikrin (R) (N\u2009=\u200922), Suru\u00ed-Aikewara (R) (N\u2009=\u200930) and Tupai\u00fa (R) (N\u2009=\u200930), and one urban (U) population from the Brazilian Amazon. One rural community houses ~50 families, while over 1.3 million individuals live in Bel\u00e9m (U), the capital of Par\u00e1 state [21]. Microscopic examination of faecal samples revealed that 68% of all tested individuals harbour at least one species of gut protozoa, with the commensal Endolimax nana as the most frequent; of these, 30% were Tupai\u00fa (R), 18% were Suru\u00ed (R) and 13% from Bel\u00e9m (U). Higher alpha-diversity values were associated with the presence of gut protozoa colonisation (p value\u2009=\u20090.006), as shown by the Shannon and Chao1 diversity indexes and the number of observed species. We also observed that individuals with helminth intestinal colonisation had increased alpha-diversity values when compared to those with negative microscopic examinations. The Xikrin (R) population shares the most compositional features, followed by Suru\u00ed (R), Tupai\u00fa (R) and Bel\u00e9m (U). We also used permutational multivariate analysis of variance (PERMANOVA) to test whether the dispersion from centroid values was the same among all groups. In this analysis, rural microbiomes showed less inter-individual variation when compared to the urban group (PERMANOVA) (Fig.).
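The alpha-diversity indexes reported above have simple closed forms. As a reference sketch in plain Python (not the authors' R/phyloseq pipeline), Shannon diversity, observed richness and bias-corrected Chao1 can be computed from a vector of per-taxon read counts:

```python
import math

def alpha_diversity(counts):
    """Shannon index (natural log), observed richness and bias-corrected
    Chao1 from a list of per-taxon read counts (zeros are ignored)."""
    counts = [c for c in counts if c > 0]
    n = sum(counts)
    shannon = -sum((c / n) * math.log(c / n) for c in counts)
    s_obs = len(counts)                      # observed species/ASVs
    f1 = sum(1 for c in counts if c == 1)    # singletons
    f2 = sum(1 for c in counts if c == 2)    # doubletons
    chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected form
    return shannon, s_obs, chao1

# Perfectly even abundances maximise Shannon for a given richness:
print(alpha_diversity([5, 5, 5, 5]))  # Shannon = ln(4), 4 observed, Chao1 = 4
```

With singletons present, Chao1 estimates richness above the observed count, which is why it is paired with the raw number of observed species in comparisons like those above.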
Principal coordinate analysis (PCoA) based on Unifrac and Weighted Unifrac distances revealed that the Xikrin (R) population, although with a small degree of overlap, forms a distinct cluster and shows far less dispersal than the other population groups, which indicates a more homogeneous microbiome structure among individuals (Fig.). Considering we did not find discrete population clustering between all rural and non-rural dwellers, we sought to identify which taxonomic features could be driving the compositional similarities observed in the Unifrac distance-based PCoA analyses. First, we determined the most frequent genus in each individual microbiome, characterised as the taxon with the highest relative abundance in each sample: Prevotella (53%), Faecalibacterium (14%), Bacteroides (9%), Roseburia (9%), Succinivibrio (7%), Treponema (5%), Oscillospira (2%), Escherichia (1%) and Ruminobacter (1%). These profiles varied across the Xikrin (R), Suru\u00ed (R) and Tupai\u00fa (R), while individuals with Treponema as the most prevalent taxon were observed only among the Xikrin (R) and Suru\u00ed (R). Considering that the relative proportion of the Bacteroidales to the Clostridiales order is a biomarker for traditional microbiomes from Africa and South America [22], we tested such proportions across the sampled Amazonian populations. The Xikrin (R) presented a higher relative proportion of Bacteroidales when compared to the other populations, while the other Native American groups were not significantly different from urban Bel\u00e9m (U). Phylum-level compositions were comparable across the Xikrin (R), Suru\u00ed (R) and Tupai\u00fa (R), with abundance variability displayed mainly within Proteobacteria taxa and the presence of Lentisphaerae in Bel\u00e9m (U) (Fig.). At the family level, the abundances of Ruminococcaceae, Lachnospiraceae and Prevotellaceae are highly comparable. Nonetheless, almost all Tupai\u00fa (R) participants show the presence of Veillonellaceae and approximately half harbour variable abundances of Succinivibrionaceae, a feature shared only by the other sampled Native Americans.
For instance, the highest abundances of Succinivibrionaceae belong to the Suru\u00ed (R), while the presence of Spirochaetaceae, represented by Treponema at the genus level, is unique to the Xikrin (R). Barplots at lower taxonomic levels such as family and genus (Fig.) showed group-specific core-microbiome taxa (genera not present in the core microbiomes of other populations), such as Parabacteroides (Bacteroidetes) and Victivallis (Lentisphaerae) in Bel\u00e9m (U), and CF231 (Bacteroidetes), Treponema (Spirochaetes) and Anaerovibrio (Firmicutes) in the Xikrin (R). Hypergeometric enrichment p values were analysed and indicated significant overlaps between the core microbiomes of the Suru\u00ed (R) and Tupai\u00fa (R) populations. Population clustering was assessed with clustering methods and validated with the Hopkins statistic. Given that these Native American populations share more similarities with Bel\u00e9m (U) than with the more rural, remote Xikrin, three scenarios were suggested: (1) the Suru\u00ed (R) and Tupai\u00fa (R) populations are increasingly displaying urban-like microbiomes, hence their shared features with Bel\u00e9m (U); (2) Bel\u00e9m individuals do not follow a typically urbanised microbiome composition and are more similar to other non-urban human groups; or (3) both scenarios are occurring simultaneously. We computed differential abundance analyses (ANCOM) [24] between populations from the present cohort and compared results to others from Brazil: the rural Amazonian riverine Buiu\u00e7u (R) and Puruzinho (R) populations and urban individuals of Rio de Janeiro (U) [17]. ANCOM results showed that six taxa were differentially abundant between groups at the genus level (Fig.). Treponema, Succinivibrio and CF231 were more abundant in the Xikrin (R) and displayed decreasing abundances according to urbanisation. In contrast, Butyricimonas showed the highest abundance in Bel\u00e9m (U), while Bacteroides was most abundant in Rio de Janeiro (U), Bel\u00e9m (U) and Tupai\u00fa (R). Bifidobacterium showed the lowest abundances among the three rural communities and urban Rio de Janeiro.
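ANCOM's W statistic counts, for each taxon, how many of its additive log-ratios against the other taxa differ significantly between groups. A deliberately simplified sketch of that idea, using a fixed effect-size cut-off on toy count data instead of ANCOM's corrected per-ratio significance tests, looks like:

```python
import math

def ancom_like_w(group_a, group_b, effect_cutoff=1.0, pseudo=0.5):
    """Toy ANCOM-style W: for each taxon i, count how many additive
    log-ratios log(x_i/x_j) differ in group mean by more than
    `effect_cutoff`. Real ANCOM uses per-ratio significance tests with
    multiple-testing correction, not a fixed cut-off."""
    n_taxa = len(group_a[0])

    def mean_logratio(samples, i, j):
        return sum(math.log((s[i] + pseudo) / (s[j] + pseudo))
                   for s in samples) / len(samples)

    w = []
    for i in range(n_taxa):
        rejections = 0
        for j in range(n_taxa):
            if i == j:
                continue
            diff = mean_logratio(group_a, i, j) - mean_logratio(group_b, i, j)
            if abs(diff) > effect_cutoff:
                rejections += 1
        w.append(rejections / (n_taxa - 1))  # normalised like W/(m-1)
    return w

# Hypothetical counts: taxon 0 is strongly shifted between groups,
# so its normalised W is highest.
w = ancom_like_w([[100, 10, 10], [120, 12, 9]],
                 [[10, 10, 10], [12, 9, 11]])
```

Working on log-ratios rather than raw counts is what makes this family of methods robust to the compositional (relative-abundance) nature of 16S data.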
At the family level, S24-7 (Muribaculaceae) was significantly more abundant in the microbiomes of the Xikrin (R), Buiu\u00e7u (R) and Puruzinho (R). Moreover, the Succinivibrionaceae and Spirochaetaceae families were significantly more abundant among the Xikrin (R) when compared to the other tested populations. At the order level, Spirochaetales, among others, was found to be differentially abundant. Interestingly, the abundance of Bifidobacteriales among the Xikrin, Buiu\u00e7u and Puruzinho rural groups was not significantly different from urban Rio de Janeiro, but was significantly lower than in the remaining rural populations. We also computed differential abundance analyses with South American native and rural populations; abundances in the Xikrin (R) are similar to those found for the Venezuelan Yanomami (R) and the Tunapuco from the Peruvian Andes (R). Differential abundances for the Amazonian populations in this cohort were also compared to African populations living in a gradient of urbanisation in Cameroon. Xikrin (R) abundances of the Prevotella genus are similar to that of rural Ngoantet, while the Suru\u00ed (R) and Tupai\u00fa (R) show abundance means comparable to semi-urban Mbalmayo. The lowest Prevotella abundance in this comparison belongs to urban Yaounde, the capital of Cameroon, while Bel\u00e9m (U) has the lowest among Amazonian groups, yet higher than the Yaounde population. This pattern is the same for the Bacteroides genus, but with inverted proportions. When observing the Succinivibrio genus, this tendency continues, with the exception of the Suru\u00ed (R) and Tupai\u00fa (R) populations, which display mean abundances more comparable to urban than to semi-urban and rural groups.
However, some genera have higher abundances in Brazil than in Cameroon, independently of urbanisation levels. Using PCoA based on Bray\u2013Curtis distances to compare the Xikrin (R) and Bel\u00e9m (U) (most urbanised) populations with other South American and USA populations, we found that the Brazilian Amazonians were located at an intermediate stage of an urbanisation gradient (Fig.). The same analysis was used to compare the Xikrin (R) and Bel\u00e9m (U) populations to three Cameroonian populations living in a gradient of urbanisation. We found that Bel\u00e9m (U) largely overlaps with the Yaounde (U) and Mbalmayo (SU) populations, while the Xikrin (R) are similar to both the Ngoantet (R) and the Mbalmayo (SU) (Fig.). In light of the various technical differences across gut microbiome datasets, meta-population comparisons must be interpreted with caution, as they do not rule out the influence of technical factors in producing such findings. We used Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) to predict the metabolic potential of microbial communities based on 16S data and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathways [25]. After removing eukaryote-related pathways, PICRUSt results identified 30 different metabolic functions summarised at level 2 KEGG pathway resolution. Some pathways were enriched in specific populations, while transcription was found to be significantly decreased; the transport and catabolism pathways were significantly less abundant among the Suru\u00ed (R). Then, we performed ANCOM to identify which pathways were differentially abundant among the Amazonian populations (Fig.).
Considering a cut-off of W\u2009>\u20090.6, ANCOM revealed 27 differentially abundant pathways; among others, xenobiotics biodegradation was significantly more abundant in Bel\u00e9m (U) microbiomes and decreased according to the urbanisation gradient. The Suru\u00ed (R) and Tupai\u00fa (R) displayed similar diversity to that of urban Bel\u00e9m, which is comparable to that previously observed among Rio de Janeiro and Amazonian riverine individuals [17]. Higher alpha-diversity values for the Xikrin (R) were expected, as traditional populations far from industrialisation have been shown to display higher microbial diversity than industrialised individuals. The presence of Treponema-prevalent microbiomes only in the Xikrin and the Suru\u00ed may reflect that the Suru\u00ed have been changing lifestyle patterns more recently, as these taxa are considered a biomarker of traditional societies [12]. A compositional transition among the rural populations of Suru\u00ed and Tupai\u00fa is further supported by the high rate of dispersal in beta-diversity distance measures, similar to that of an urban group such as Bel\u00e9m. Urbanisation evidence is also demonstrated by the distribution of the most prevalent genera in individuals of each group, which could indicate that a transition to an urbanised microbiome is more advanced, reinforced by the higher frequency of Bacteroides-prevalent microbiomes.
Further, the abundances of Bacteroides and Bifidobacterium taxa have been shown to be antagonistic to the colonisation of Treponema species in urbanised individuals, representing an adaptive response to increased consumption of refined carbohydrates and dairy [26]. The Xikrin (R) follow a diet mostly composed of highly fibrous tubers such as sweet potatoes and cassava, which explains the abundance of polysaccharide-degrading taxa in their gut microbiome, as reported for other traditional groups such as the Yanomami, Matses, Hadza and BaAka. The presence of group-specific taxa in the Xikrin (R) and Bel\u00e9m (U) core microbiomes, Treponema and Parabacteroides, respectively, suggests that they represent opposite extremes of the urbanisation gradient in Amazonian gut microbiomes (Fig.). This is consistent with Bel\u00e9m's similarity to urban and semi-urban groups from Cameroon [22] and its differences from USA and Rio de Janeiro individuals. Moreover, the city is located at the margins of the Par\u00e1 river and in close proximity to the rainforest environment, which may have an important role in microbial dispersal and composition. Such factors may explain the prevalence of gut protozoa and the alpha-diversity values among the urban individuals in our study. In addition, the dietary habits of Bel\u00e9m individuals are unique, as they consist of industrialised food consumption while also regularly including fresh and unprocessed items such as cassava flour, a\u00e7a\u00ed and tropical fruits.
This diet may have an effect on the fibre intake of this population, explaining the abundance of Prevotella among such individuals. Despite being the capital of the Par\u00e1 state and populated by nearly two million people, Bel\u00e9m is an urban centre in which only 14% of the population receives sewage collection and, of these, only 3% are treated [28]. We argue that diet, geographical proximity to native biodiversity and lack of adequate access to sanitation have an important role in determining the gut microbial composition of Bel\u00e9m individuals, in spite of subsistence strategy. Thus, we propose the term \u201cTropical Urban\u201d as a category of subsistence strategy, in the scope of microbiome research, to define populations inhabiting urban settings located in the tropical zone. This new classification considers that tropical urban environments are inserted in a context of proximity to great biodiversity, which has an important impact in defining the ecological and cultural contexts of these cities. Such circumstances determine interactions with microbial diversity and, consequently, shape gut microbial compositions, as seen for Bel\u00e9m and its similarities to urban African populations. Further, we clarify that the term \u201csemi-urban\u201d is not an appropriate categorisation of these areas, as they constitute the urban extreme in the local gradient of urbanisation. Additional comparisons to African and Asian populations living in tropical urban settings will elucidate which aspects of such environments most influence gut microbial compositions. In the case of the present Amazonian populations, however, alpha diversity was influenced only by concurrent infections of Entamoeba coli and E. histolytica/dispar [7]. As discussed by Lokmer et al. [7], an industrialisation transition is accompanied by a loss in bacterial diversity and colonisation by Entamoeba sp. If we apply this concept to our data analysing overall Entamoeba sp.
prevalence, an industrialisation gradient would have the Xikrin (R) at one side of the spectrum, with increasing industrialisation passing by the Suru\u00ed (R), Tupai\u00fa (R) and Bel\u00e9m (U). Access to the Xikrin (R) community is the most difficult, which makes it troublesome (but not impossible) to obtain urban food products. This is not the case for the Suru\u00ed (R) and Tupai\u00fa (R), although the latter is only accessible by boat or helicopter. Moreover, language is expected to impose a barrier to transitioning. The Xikrin (R) are part of the J\u00ea linguistic group, and only a select number of individuals speak Portuguese. Conversely, the Suru\u00ed (R), although having non-Portuguese speakers, have a larger number of individuals who were able to communicate with the research staff. Regarding the Tupai\u00fa (R), their native language was hardly spoken, considering they are a mixture of multiple Native American ethnicities. Language may also stand as a genetic obstacle, meaning non-Portuguese-speaking communities are possibly more ethnically homogeneous. It is possible, therefore, that a language barrier is one of the causes of the little to no evidence of urbanisation in the Xikrin gut microbiome, as substantial and continued contact with urbanised populations is rare for a majority of these individuals. Therefore, we suggest that future research should investigate host genetic diversity as a means of elucidating its role in the urbanisation transition of gut microbiomes [29]. Considering the urban population of Bel\u00e9m is ethnically diverse [30] and the Tupai\u00fa are made up of individuals from multiple Native American ethnicities [31], it is possible that this may influence the high inter-individual variability observed among the Bel\u00e9m and Tupai\u00fa populations.
In this sense, we highlight that ethnicity has also been found to play a role in determining gut microbiome structures, as it can serve as a proxy for dietary and lifestyle variability. This is further supported by the homogeneous ethnicity of the Xikrin, who also displayed homogeneous gut microbial profiles, but is contradicted by the microbial variability seen among the Suru\u00ed, who are not ethnically diverse [32]. The Xikrin showed more proximity to the Andean Tunapuco [5] than to the recently contacted Yanomami of Venezuela [4]. Despite both the Yanomami and the Xikrin being Amazonian native communities, the Yanomami follow a hunter-gatherer lifestyle, while the Xikrin are adapted to an agricultural system. In addition, this might explain the separation between the Xikrin and other hunter-gatherers such as the Botswana San and the Tanzania Sandawe [22]. Nonetheless, compared to other rural populations, all the Amazonian samples analysed in this study showed higher abundances of Coprococcus, Lachnospira and Sutterella; these taxa were also found to be enriched among the Native American Cheyenne and Arapaho individuals from North America, who are increasingly shifting towards an industrialised lifestyle pattern [35]. Considering a recent report [36] showing increased prevalence of obesity and type 2 diabetes among the Xikrin (R), the abundance of these taxa points to a gut microbiome similar to that of urbanised populations even in an environment with minimal intake of processed foods and medication, difficult access and linguistic barriers. In an Amazonian rural Native American population, this double burden of diseases can lead to public health issues. An urbanisation of the gut microbiome of rural populations is also corroborated by PICRUSt metabolic pathway predictions. As discussed by Gomez et al. [8] for the Bantu individuals, such pathways are associated with higher exposure to pesticides as well as food additives, frequently present in industrialised and processed foods.
Despite the limitations of this approach, predictions show an increasing abundance of pathways linked to urbanised microbiomes among Amazonian populations, such as membrane transport, carbohydrate metabolism and xenobiotics biodegradation, similar to what has been seen for Bantu, Tanzania and Botswana populations undergoing urbanisation. The gut microbiome characterisation of heterogeneous traditional/rural and urban communities from the Amazon represents the opportunity to observe the worldwide tendency of changes in gut microbiome composition and transitions in an environment known for its tremendous biodiversity. We observe a local and global transitioning gradient among such populations, with the Native American Xikrin (R) having the most rural-like microbiome, similar to that of other agricultural South American societies. The Suru\u00ed (R) and Tupai\u00fa (R) show a transitioning microbiome with signs of both traditional and industrialised communities, and the urban Bel\u00e9m (U) was similar to the urban and semi-urban African populations. It is critical that we promote the inclusivity of diverse populations in microbiome research so that all human groups benefit from scientific/clinical advancements. The increasing pace observed in the prevalence of metabolic diseases and their association with gastrointestinal microbiomes makes the characterisation of Amazonian gut microbiomes an urgent topic in terms of public health. Further studies should investigate a longitudinal perspective for tracking the transition process while controlling for disease biomarkers and host genetic factors. Ethics approval and indigenous territory entry permits were obtained through the Research Ethics Council of the Federal University of Par\u00e1 and from the National Council on Research Ethics, under protocol number 3.094.486.
Written informed consent was obtained from the urban-living participants individually, and a group consent was obtained from the ethnic leadership in each Native American community, as established in the Brazilian legislation for research in Native American communities (CNS 304/2000). This research was carried out according to the ethical principles established by the Declaration of Helsinki. We sampled 114 individuals from four locations in the Brazilian Amazon, including rural Native American communities (Xikrin, N\u2009=\u200922; Suru\u00ed, N\u2009=\u200930; and Tupai\u00fa, N\u2009=\u200930) and one urban population (Bel\u00e9m). All three Native American communities were visited during the dry season in the Amazon rainforest, which spans from May to December (Fig.) [38]. The Xikrin inhabit the margins of the Bacaj\u00e1 River. Subsistence practices among the Xikrin include mostly subsistence agriculture, small game hunting, fishing and gathering of nuts and fruits. The Xikrin are known for hunting and gathering over long distances across the territory, which allows for a great food variety. The Suru\u00ed live in the Soror\u00f3 indigenous territory and access is done by land; the villages are located ~100\u2009km from the closest rural town. Subsistence modes among the Suru\u00ed consist of small-scale cattle raising, rice cultivation and small game hunting, and are progressively less dependent on subsistence agriculture, although it is still present mainly through cassava root and sweet potatoes. Industrialised food products such as frozen poultry, sugar, dairy and crackers are increasingly common. Access to this territory is only possible by boat or helicopter [31]. Nevertheless, communication among neighbouring villages is common, and communities are frequently formed by people from multiple ethnic backgrounds.
The Tupai\u00fa are one of several emergent Native American peoples that inhabit the Tapaj\u00f3s-Arapiuns Extractive Reserve, located by the margins of the Tapaj\u00f3s River, a major tributary of the Amazon River. Located in a riverine setting, their subsistence practices are largely dependent on fishing, small game, cassava root agriculture and fruit harvesting. The urban population from Bel\u00e9m was recruited at the Federal University of Par\u00e1 (UFPA), and samples consisted of university students, faculty members and people from surrounding neighbourhoods. Bel\u00e9m is the capital and largest city of the Par\u00e1 state, in the northern region of Brazil, and is located at the mouth of the Amazon River. The typical diet reported by participants consists mainly of rice, beans, animal protein, manioc flour, dairy products and industrialised foods. Each participant received a stool collection container with a lid and instructions for collecting the sample. When received by the research staff, a midsection of the stool sample was immediately stored in a 5\u2009mL tube containing RNAlater stabilising solution (Thermo Fisher Scientific) and frozen at \u221220\u2009\u00b0C until arrival at UFPA, where samples underwent immediate DNA extraction. Metadata collection for dietary information consisted of individual dietary habits interviews with urban-living participants and several interviews with ethnic leaders in the Native American populations. Further, stool samples were microscopically examined for intestinal parasites, and medical information regarding medication intake and previous and current diseases was obtained through the local medical staff responsible for each community or through individual interviews with urban-living participants. Total DNA from faecal samples was extracted using the DNeasy PowerSoil Kit according to the manufacturer\u2019s protocol with small modifications. Eluted DNA was quantified with fluorometry and subsequently stored at \u221220\u2009\u00b0C.
The next-generation sequencing library preparation was carried out according to the Illumina Metagenomic Sequencing Library Prep protocol with established primers and Illumina Nextera adapters targeting the V3\u2013V4 region of the 16S rDNA, as follows: Bakt 341F (CCTACGGGNGGCWGCAG) and Bakt 805R (GACTACHVGGGTATCTAATCC). Libraries were subsequently pooled and quantified with TapeStation before being sequenced as 300\u2009bp paired-end reads on the Illumina MiSeq platform in two sequencing runs, yielding an average of 141,377\u2009\u00b1\u200952,840 raw reads per sample. Initial raw data quality visualisation was carried out in FastQC [39]. All raw read quality control was performed in Quantitative Insights into Microbial Ecology (QIIME 2) software [40]. Reads were demultiplexed, denoised and merged; low-quality reads were trimmed and filtered, and chimaeras were removed using DADA2 [41]. After filtering, reads per sample averaged 11,925\u2009\u00b1\u20094051. Amplicon sequence variant (ASV) data were generated by a Naive\u2013Bayes machine-learning classifier in QIIME 2, which subsequently output a feature table identifying a total of 39,085 ASVs. A phylogenetic tree was inferred from ASVs using FastTree v.2.1.3 [42], and taxonomic classification was carried out in QIIME 2 using a Naive\u2013Bayes classifier based on the Greengenes database v.13.8 clustered at 97% similarity. Prior to taxonomic analyses, we removed one sample that yielded no mapped reads, as well as any singleton ASVs and taxa with an unassigned phylum level. All alpha- and beta-diversity analyses were performed in R v.3.5.3 [43] using the \u201cphyloseq\u201d [44] and \u201cvegan\u201d [45] packages, based on rarefaction curves. The adonis function of the \u201cvegan\u201d [45] package in R was used to determine distances between population pairs. Spearman\u2019s correlations were used to evaluate the relationship between alpha- and beta-diversity metrics.
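Rarefaction curves like those used above have an analytical form: the expected number of taxa observed when drawing a given number of reads without replacement from a sample. A minimal illustrative sketch (not the phyloseq/vegan implementation used in the study):

```python
from math import comb

def rarefy_expected_richness(counts, depth):
    """Expected number of taxa observed when subsampling `depth` reads
    without replacement, via the analytical rarefaction formula:
    E[S] = sum_i (1 - C(N - N_i, depth) / C(N, depth))."""
    total = sum(counts)
    if depth > total:
        raise ValueError("depth exceeds total reads in the sample")
    denom = comb(total, depth)
    return sum(1 - comb(total - c, depth) / denom for c in counts if c > 0)

# A rarefaction curve is this value over increasing depths:
curve = [rarefy_expected_richness([40, 30, 20, 10], d) for d in (1, 10, 100)]
```

At full depth the expected richness equals the observed richness, and the curve's plateau is what justifies a chosen rarefaction depth.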
Beta-diversity analyses were carried out by normalising unrarefied sample counts to relative abundances, removing taxa unobserved in at least 1% of samples and agglomerating taxa to the genus level. We computed Bray\u2013Curtis, Unifrac and Weighted Unifrac distances. Differences in dispersion from centroids in both Unifrac and Weighted Unifrac distances were tested with PERMANOVA, and pairwise PERMANOVA implemented in the adonis function of the \u201cvegan\u201d package [45] was used to determine distances between population pairs. PCoA was computed in R using the ggplot2 package [46]. To compare our samples to data from other worldwide population gut microbiomes, we downloaded the filtered data from each study [22] and performed OTU picking and taxonomic assignment with QIIME 2 using vsearch closed-reference based on Greengenes v.13.8 clustered at 97% similarity. Core-microbiome overlaps were visualised in Cytoscape (http://www.cytoscape.org/), and statistical analyses were carried out through hypergeometric enrichment tests to determine significant overlaps between the core microbiomes of different populations, using the phyper function in the R \u201cstats\u201d package. P values were adjusted for multiple testing with the Benjamini\u2013Hochberg method. For functional prediction analyses, reads were taxonomically classified by closed-reference using vsearch with the Greengenes v.13.5 database clustered at 97% similarity, and KEGG pathways were predicted using PICRUSt 1 v.1.1.4. Further information on research design is available in the Supplementary Information."} +{"text": "Our objective was to examine differences in the cytokine/chemokine response of chronic hepatitis B (CHB) patients, to understand the immune mechanism of HBsAg loss during antiviral therapy. We used an unbiased machine-learning strategy to unravel the immune pathways in CHB nucleos(t)ide analogue-treated patients who achieved HBsAg loss with peg-interferon-\u03b1 (peg-IFN-\u03b1) add-on or switch treatment in a randomised clinical trial. 
Cytokines/chemokines from plasma were compared between those with/without HBsAg loss at baseline, before and after HBsAg loss. Peg-IFN-\u03b1 treatment resulted in higher levels of IL-27, IL-12p70, IL-18, IL-13, IL-4, IL-22 and GM-CSF prior to HBsAg loss. Probabilistic network analysis of cytokines, chemokines and soluble factors suggested a dynamic dendritic-cell-driven NK and T cell immune response associated with HBsAg loss. Bayesian network analysis showed a dominant myeloid-driven type 1 inflammatory response with a MIG and I-TAC central module contributing to HBsAg loss in the add-on arm. In the switch arm, HBsAg loss was associated with a T cell activation module exemplified by high levels of CD40L. Our findings show that more than one immune pathway to HBsAg loss occurs with peg-IFN-\u03b1 therapy: a myeloid-driven Type 1 response in one instance, and T cell activation in the other. Chronic hepatitis B (CHB) affects over 250 million people globally [5]. HBsAg loss, or functional cure of CHB, is associated with significant clinical benefits such as reduced HCC and liver complications [2]. The frequency of CHB patients who develop spontaneous HBsAg loss without treatment is very low (1% per year) [3]. A meta-analysis has shown that peg-interferon\u2009\u00b1\u2009NA is superior to nucleoside analogues (NA) alone in achieving HBsAg loss [4]. The prevailing hypothesis is that high antigen load leads to T cell exhaustion in addition to defective adaptive and innate immunity [6]. However, it is unclear how immune clearance of HBsAg occurs, and it is assumed that HBV-specific T cells are required to achieve \u201cimmune control\u201d. The classic example of immune clearance of HBV occurs in the setting of bone marrow transplantation from a donor with resolved HBV to a CHB patient [7], where HBV-specific T and B cell responses were restored. 
Studies in acutely infected chimpanzees and transgenic mouse models revealed the importance of CD8 T cell responses in HBV control, and the finding that HBV can escape innate immune recognition has made the role of innate immunity in CHB ambiguous [8]. In addition, CHB patients have a defective B cell response, with HBsAg-specific B cells showing exhaustion and an atypical memory phenotype [9]. Studying the immune mechanisms that lead to functional cure is challenging because few patients achieve functional cure, and because the assays used to evaluate immune responses are complex and limited by the low frequency of HBV-specific T cells [10]. Consequently, there is a significant unmet need to unravel the immune responses associated with functional cure. Since there are limitations to the use of PBMC to assess cellular immunity, and sampling the liver microenvironment requires invasive liver biopsy, one option is to use an array of plasma cytokines and chemokines to interrogate the full spectrum of immune pathways. Cytokines and chemokines exert their effects by modulating the function of immune and non-immune stromal cells. They work in a concerted manner to set off a cascade of immune responses resulting in elimination of viral infection by cell- and antibody-mediated mechanisms [11], and the outcome of the immune reaction is determined by the quality of the cytokine\u2013chemokine response. The immune mechanism(s) related to HBsAg loss are not well established. In acute HBV, resolution of infection is associated with an antiviral T cell response but a poor innate immune response [12]. 
To investigate the chemokine\u2013cytokine profile of HBsAg loss, we utilised CHB patients from a randomised clinical trial of NA\u2009\u00b1\u2009peg-IFN-\u03b1 (SWAP study) (Suppl. fig. 1A). We profiled plasma for 87 cytokines, chemokines, growth factors and soluble receptors, pre- and post-therapy, using statistical analysis and Bayesian probabilistic network algorithms to assess the cytokine networks in responders and non-responders. Plasma cytokines were assessed at three time-points during the course of therapy in responders and non-responders. The median time interval from start of peg-IFN-\u03b1 therapy to loss of HBsAg in responders was 24 weeks (range: 8\u201372 weeks). The time-point at which the patients were recruited for the study was referred to as the baseline (T0), the time-point before the responders showed HBsAg loss was labelled T1 (median\u2009=\u200912 weeks) and the time-point after the responders showed HBsAg loss was labelled T2 (median\u2009=\u200936 weeks). Several cytokines, including IL-27, IL-12p70, IL-18, IL-13, IL-4, IL-22 and GM-CSF, were higher in responders at T1, before the detectable loss of HBsAg. In addition, chemokines CCL11 (Eotaxin), CCL24 (Eotaxin-2), CCL26 (Eotaxin-3), CCL4 (MIP-1b), CXCL1 (GRO-a), CXCL9 (MIG), CXCL11 (I-TAC), CXCL12 (SDF-1a) and CXCL13 (BLC) were increased in the responders. Taken together, this suggests that a strong inflammatory host response plays a key role in HBsAg loss. Many of the augmented cytokines/chemokines in responders, including IL-12p70, IL-27, IFN-\u03b3, TNF-\u03b1 and CXCL9, are associated with a Type 1 immune response. IL-12p70, secreted by activated DCs, promotes a type 1 immune response and is associated with HBeAg seroconversion during antiviral therapy [34]. Our data suggest that functional cure is achieved by more than one immune mechanism, with the involvement of both type 1 and type 2 responses, and is probably mediated by the non-cytolytic function of cytokines [37]. 
Separate Bayesian analysis of the add-on vs switch arms of treatment revealed differential cytokine/chemokine networks involved in HBsAg loss: a dominant myeloid module was active in the add-on arm, while the switch arm showed T cell activation. In patients who had add-on peginterferon therapy, the dominant features were MIF, TRAIL and the CXCL9\u2013CXCL11 module. These findings support a dominant type 1 response. In the switch arm, however, soluble CD40L was the dominant node, indicating T cell activation mediated by IL-4, IL-13, IL-27 and IL-20. CD40L is a co-stimulatory receptor expressed on activated CD4 T cells and plays an important role in the activation of B cells and other APCs by binding to CD40. Immune responses are seldom due to one particular cell type, cytokine or chemokine but occur as a network response. We have utilized the power of Bayesian analysis to identify these immune response modules rather than individual cytokines or chemokines. We have further shown that Type 1 and Type 2 networks are both involved in orchestrating an immune response leading to HBsAg loss. Consequently, we can conclude that there are multiple immune networks and more than one immune mechanism involved in HBsAg loss. Further interrogation of these networks using PBMCs and liver samples is needed to determine the immune mechanism of HBsAg loss. Finally, we show that Bayesian probabilistic analysis is an unbiased machine-learning approach that provides insights into the immune responses leading to HBsAg loss, the hallmark of functional cure in CHB. In this randomised control study [12], CHB patients were enrolled to three different arms of therapy, as shown in Suppl. fig. 1A. Plasma samples were analysed for a panel of cytokines, chemokines and soluble factors (Table). Cytokine concentrations were log transformed and normalised for plate variations using ComBat from the Bioconductor package sva, based on a plasma reference sample that was run along with the samples on all plates [38]. Data normalisation was confirmed using PCA analysis of data from different batches/plates. 
Only normalised values were used for analysis and reported in this manuscript. Differences in plasma cytokine concentrations between groups were calculated using Student's t-test and corrected for multiple comparisons using Bonferroni's method where indicated in the figure legends. Associations of cytokine concentrations with clinical parameters were determined using Spearman rank correlations. GraphPad Prism (version 8) and R version 3.1.2 were used for statistical computation [39]. Heatmaps were generated using Heatmapper (www.heatmapper.ca) [40]. Probabilistic machine-learning network analysis was performed using BayesiaLab software (version 8) (https://www.bayesia.com/). Data discretization was performed using a supervised multivariate approach with the parameter HBsAg loss as the target node. Data were discretized into two bins and analysed using optimised structural coefficient values for each time-point. Networks were generated by a semi-supervised approach using the Maximum Weight Spanning Tree algorithm along with Taboo order post-processing to ensure robust network generation. The networks were then graphed for the probability contribution of nodes to the target node and the Pearson correlation values between adjoining nodes. For the networks examining responders and non-responders, pooled data from both treatment arms were used. The HBsAg loss status was assigned as the target node and the network was constructed around it as described above. For analyses of responders and non-responders in the two treatment arms, networks were generated using data from each treatment arm separately. 
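BayesiaLab's supervised discretization and network-scoring algorithms are proprietary, so the following is only a crude stand-in for the idea of binning each cytokine into two bins and relating the bins to a binary HBsAg-loss target: a median split plus mutual information. All names and data are hypothetical.

```python
from math import log2
from statistics import median

def discretize_two_bins(values):
    """Median split into bins 0/1 -- a simple stand-in for BayesiaLab's
    supervised two-bin discretization."""
    m = median(values)
    return [1 if v > m else 0 for v in values]

def mutual_information(x, y):
    """Mutual information (bits) between two binary variables given as lists."""
    n = len(x)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = sum(1 for u, v in zip(x, y) if (u, v) == (a, b)) / n
            px, py = x.count(a) / n, y.count(b) / n
            if pxy > 0:
                mi += pxy * log2(pxy / (px * py))
    return mi

# Hypothetical data: one cytokine's plasma levels and the binary HBsAg-loss target.
cytokine = [1.2, 3.4, 0.8, 4.1, 2.9, 0.5, 3.8, 1.0]
target = [0, 1, 0, 1, 1, 0, 1, 0]
binned = discretize_two_bins(cytokine)
score = mutual_information(binned, target)
```

Ranking cytokines by such a score is one informal way to see why a node ends up close to the target in a learned network; the actual structure learning (Maximum Weight Spanning Tree with Taboo post-processing) optimises a network-wide criterion rather than per-node scores.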
A similar workflow was used for generating networks with time-points as target nodes, to analyse treatment-induced changes between responders and non-responders (Supplementary Information)."} +{"text": "Testicular torsion potentially leads to acute scrotum and testicle loss, and requires prompt surgical intervention to restore testicular blood flow, despite the paradoxically negative effect of reperfusion. While no drug is yet approved for this condition, antioxidants are promising candidates. This study aimed to determine the effect of astaxanthin (ASX), a potent antioxidant, on rat testicular torsion\u2212detorsion injury. Thirty-two prepubertal male Fischer rats were divided into four groups. Group 1 underwent sham surgery. In group 2, the right testis was twisted at 720\u00b0 for 90 min. After 90 min of reperfusion, the testis was removed. ASX was administered intraperitoneally at the time of detorsion (group 3) and 45 min after detorsion (group 4). Caspase-3-positive cells were quantified and oxidative stress markers detected immunohistochemically, while the malondialdehyde (MDA) value and the superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were determined by colorimetric assays. The number of apoptotic caspase-3-positive cells and the MDA value were lower in group 4 than in group 2. A significant increase in the SOD and GPx activity was observed in group 4 compared to groups 2 and 3. We conclude that ASX has a favorable effect on testicular ischemia-reperfusion injury in rats. Testicular torsion is a condition of acute scrotum, starting with the rotation of the testis around a longitudinal axis by at least 180 degrees, followed by an interruption of circulation inside the organ. 
Despite the possibility of manual detorsion, surgery is usually required and should be performed as soon as possible after the onset of symptoms. If not recognized in time, torsion can result in ischemic injury and testicular loss, but if the operation is performed within 6 h, most testicles can be saved [3]. Ischemia-reperfusion injury (IRI) exacerbates the cell dysfunction observed after restoring blood flow in previously ischemic tissues. Hence, reperfusion paradoxically causes further damage, endangering the organ\u2019s vitality and function despite the necessity of blood-flow restoration. Reperfusion injury is a multifactorial process that results in tissue destruction. During reperfusion, reactive oxygen species are generated, beginning with the superoxide anion (O2\u2212), which is then converted to hydrogen peroxide (H2O2) and the hydroxyl radical (OH\u00b7). The main unwanted consequence of the production of hydroxyl radicals is membrane lipid peroxidation. Lipid peroxidation causes the systemic release of proinflammatory eicosanoids, disruption of cell permeability, and ultimately cell death [14,15,16]. Antioxidants are molecules that, by inhibiting the oxidation of other molecules, defend the body\u2019s system against potential damage by free oxygen radicals. The carotenoid pigment astaxanthin (ASX) (C40H52O4), found in the microalga Haematococcus pluvialis, has anti-inflammatory, immunomodulatory, and antioxidant effects. Thirty-two male Fischer rats of prepubertal age were used. The animals were housed under conditions following good laboratory practice (GLP), which included a temperature of 20\u201324 \u00b0C, relative humidity of 55% +/\u2212 10%, controlled lighting, and a light-dark cycle of 12 h/12 h. The noise level did not exceed 60 dB. The research was approved by the School of Medicine, University of Zagreb and the Croatian National Ethics Committee (EP 217/2019). 
The 3R principles \u2014 \u201creduction\u201d, \u201crefinement\u201d, and \u201creplacement\u201d \u2014 were applied, and the concept of the five freedoms was respected. Rats were randomly divided into four groups of eight animals each: a sham-operated (S) group, a torsion\u2212detorsion (T/D) group, and two torsion\u2212detorsion + astaxanthin (T/D + ASX) groups. Group 1 (S) underwent sham surgery. After the intraperitoneal injection of anesthetic, an incision was made in the right inguinal region to pull out the ipsilateral testis, which was immediately returned to its natural position and the skin sutured. After suture removal, orchidectomy was performed after 3 h. In group 2 (T for 90 min/D for 90 min), the ipsilateral testis was twisted around its axis by 720\u00b0 in a clockwise direction and fixed in that position for 90 min, after which detorsion was performed. The skin was sutured twice (at 0 min and 90 min). Orchidectomy was performed 90 min from the moment of detorsion. Group 3 (T for 90 min/D for 90 min + ASX at the time of detorsion) was administered pure ASX intraperitoneally (Sigma-Aldrich\u00ae, St. Louis, MO, USA, from Blakeslea trispora) at the time of detorsion. In group 4 (T for 90 min/D for 90 min + ASX 45 min from the moment of detorsion), ASX was administered 45 min after detorsion. In the midline of the scrotum, an incision was made. Upon opening the tunica vaginalis, the testis was twisted manually around its axis by 720\u00b0 in a clockwise direction. The testis was fixed to the inner wall of the scrotum with a monofilament polyglactin suture 6/0. By removing the suture, the right testicle was manually returned to its natural position. The skin of the scrotum was also sutured with a monofilament polyglactin suture 6/0. All surgical procedures were performed under general anesthesia induced by intraperitoneal injection of ketamine (90 mg/kg) and xylazine (10 mg/kg). 
The animals were constantly monitored; in case of movement, twitching, or other signs of awakening, intraperitoneal anesthesia was supplemented at a smaller dose. No animals died during the experiment. After orchidectomy, the rats were euthanized using the T-61 solution (1 mL/kg) i.v. All surgical procedures were performed under aseptic conditions. After shaving the right inguinoscrotal region, washing with chlorhexidine gluconate, and drying, the area was treated with a povidone-iodine solution. The immunohistochemical method was used to evaluate the cell damage exhibited by apoptosis and oxidative stress in the testicular tubules after treatment. Anti-cleaved caspase-3 antibody was used as an apoptotic marker, while anti-8-oxo-2\u2032-deoxyguanosine (anti 8-OHdG), anti-nitrotyrosine (anti-NT) and anti-4-hydroxy-2-nonenal (anti-HNE) antibodies were used as oxidative stress markers. After overnight incubation with primary antibody at 4 \u00b0C, the sections were treated with appropriate secondary antibodies. The signal was visualized using 3,3\u2032-diaminobenzidine-tetrahydrochloride (DAB), with hematoxylin for counterstaining. Positive control tissues were used, as recommended by the manufacturer of the antibodies, while negative controls were obtained by omitting the primary antibody from the buffer. To detect caspase-3-positive cells as clearly as possible, the \u201cinvert\u201d option was used in the ImageJ\u00ae software. The number of caspase-3-positive cells was determined by counting 100 random seminiferous tubules (apoptotic index) (x400). Caspase-3-positive cells were counted by visual observation by two independent researchers; if their numbers differed, the opinion of a third researcher was sought. Data are expressed as the mean number of caspase-3-positive cells per 100 seminiferous tubules. Descriptive analysis of antibodies against oxidative stress markers was performed to evaluate the histological localization on six samples per group. 
The values of malondialdehyde (MDA) and of the enzymatic antioxidants superoxide dismutase (SOD) and glutathione peroxidase (GPx) were determined by colorimetric assays using the testicular tissue homogenates as the samples. The MDA Assay Kit was used to measure lipid peroxidation: according to the manufacturer\u2019s protocol, the MDA in the homogenized sample forms a complex with thiobarbituric acid (TBA), which can be quantified colorimetrically (532 nm) on a spectrophotometer. The SOD activity was analyzed with the colorimetric SOD determination kit. Tetrazolium salt (WST) was used as a substrate, which produces a water-soluble formazan dye after reduction with a superoxide anion. The rate of WST reduction is linearly related to the xanthine oxidase (XO) activity but is concomitantly inhibited by SOD; IC50 (50% SOD inhibition activity) was determined by the colorimetric method. As the absorption at 440 nm is proportional to the amount of superoxide anion, the activity of SOD was quantified as an inhibitory activity by measuring the decrease in color development at 440 nm. The GPx Assay Kit measured GPx activity. The main reaction catalyzed by GPx is 2GSH + H2O2 \u2192 GS\u2013SG + 2H2O, where GSH is the reduced monomeric glutathione and GS\u2013SG glutathione disulfide. The mechanism involves the oxidation of the selenol in the selenocysteine residue by hydrogen peroxide. Glutathione reductase then reduces the oxidized glutathione and completes the cycle: GS\u2013SG + NADPH + H+ \u2192 2GSH + NADP+. Oxidation of NADPH to NADP+ is accompanied by a decrease in absorbance at 340 nm. Under conditions where GPx activity is limiting, the rate of decrease in A340 is directly proportional to the GPx activity in the sample. The amount of NADPH in the reaction mixture was determined kinetically by reading the \u0394A340 absorbance value at 340 nm at 1 min intervals over the 7 min time frame. 
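The kinetic GPx read-out described above is, at its core, a slope estimate on the A340 readings. A minimal sketch, using hypothetical absorbance values and assuming the standard NADPH extinction coefficient of about 6.22 mM^-1 cm^-1 (a textbook value, not stated in the text):

```python
def ls_slope(times, values):
    """Ordinary least-squares slope of values against times."""
    n = len(times)
    tm, vm = sum(times) / n, sum(values) / n
    num = sum((t - tm) * (v - vm) for t, v in zip(times, values))
    den = sum((t - tm) ** 2 for t in times)
    return num / den

# Hypothetical A340 readings taken at 1 min intervals over the 7 min window
t_min = list(range(8))
a340 = [1.00, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65]

rate = -ls_slope(t_min, a340)       # rate of decrease of A340 per minute
# Assuming the NADPH extinction coefficient at 340 nm (~6.22 per mM per cm)
# and a 1 cm path length, NADPH consumption in mM/min is:
nadph_mm_per_min = rate / 6.22
```

Because GPx activity is the rate-limiting step in the coupled reaction, `nadph_mm_per_min` tracks GPx activity up to the stoichiometric and dilution factors of the particular kit.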
Before the study, a power analysis was performed, which showed that a sample of four groups of eight animals was required in order to obtain high-quality data. The Shapiro\u2212Wilk test was used to assess the normal distribution of the collected measurements, which are mainly presented as the median and interquartile range. Differences between groups were analyzed by the nonparametric Kruskal\u2212Wallis test, and the data are presented as the chi-square (\u03c72) observed (critical) value, degrees of freedom (DF), and p-value. The Mann\u2212Whitney U test with Bonferroni correction was used for the pairwise comparisons. A significance level of 0.05 was used. Microsoft Excel for Windows, version 2020.5.1, was used to analyze the experimental data. A statistically significant decrease in the number of apoptotic caspase-3-positive cells (p = 0.016) was found in group 4, in which ASX was administered 45 min from the time of detorsion (mean = 11.84), compared to the untreated torsion\u2212detorsion group 2. Compared to group 2, group 3, in which ASX was administered at the time of detorsion, recorded a far lower mean (mean = 12.50), but there was no statistically significant difference. 8-oxo-2\u2032-deoxyguanosine (8-OHdG), the marker of oxidative DNA damage, was found in most tubules of all groups, although it was more intensely stained in group 3, the only group without visibly unaffected tubules. The signal was cytoplasmic, limited to the basal layer of the Sertoli cells and spermatogonia, near the tubular wall. In all groups except group 3, completely unaffected tubules lay next to those with a damaged histological appearance. 4-hydroxy-2-nonenal (HNE), the marker of lipid peroxidation, showed the strongest staining intensity in group 3, affecting the entire height of the seminiferous epithelium. Nitrotyrosine staining showed no positive signal in the specimens, while the positive control was stained as expected. The median MDA values of group 2 (Mdn = 0.222) and group 3 (Mdn = 0.227) were almost identical (p = 0.798). 
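The Mann\u2212Whitney U test with Bonferroni correction used for the pairwise comparisons can be made concrete. This sketch (illustrative only, not the software used in the study) computes an exact two-sided p-value by enumerating all relabellings of the pooled observations, which is feasible at a group size of eight:

```python
from itertools import combinations

def u_statistic(x, y):
    """Mann-Whitney U: number of pairs with x_i > y_j, ties counting one half."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

def mann_whitney_exact_p(x, y):
    """Two-sided exact p-value by enumerating every relabelling of the
    pooled observations (feasible for n = 8 per group)."""
    pooled = list(x) + list(y)
    n, idx = len(x), range(len(pooled))
    mu = len(x) * len(y) / 2          # expected U under the null
    dev = abs(u_statistic(x, y) - mu)
    hits = total = 0
    for chosen in combinations(idx, n):
        gx = [pooled[i] for i in chosen]
        gy = [pooled[i] for i in idx if i not in chosen]
        total += 1
        if abs(u_statistic(gx, gy) - mu) >= dev - 1e-12:
            hits += 1
    return hits / total

def bonferroni(p, n_comparisons):
    """Bonferroni-adjusted p-value, capped at 1."""
    return min(1.0, p * n_comparisons)
```

With three pairwise comparisons among the four groups of interest, each raw p-value would be multiplied by the number of comparisons before being checked against the 0.05 threshold.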
The MDA values in group 2 increased significantly in relation to the negative control group (p = 0.001). In group 4, the MDA value decreased compared to the untreated torsion\u2212detorsion group (Mdn = 0.222), but the difference was not statistically significant (p = 0.574). A statistically significant increase in the enzyme activity of SOD was observed in group 4, in which ASX was administered 45 min from the moment of detorsion (Mdn = 89.61), compared to the untreated torsion\u2212detorsion group 2 (Mdn = 88.39) (p = 0.01) and to group 3, in which ASX was administered at the time of detorsion (Mdn = 85.30) (p = 0.000). It is interesting to note a statistically significant decrease in the enzyme activity of SOD in group 3 compared to group 2. The same pattern held for GPx activity over the first six minutes: first minute (\u03c72 = 17.020 (7.815), DF = 3, p = 0.001), second minute (\u03c72 = 13.497 (7.815), DF = 3, p = 0.004), third minute (\u03c72 = 14.838 (7.815), DF = 3, p = 0.002), fourth minute (\u03c72 = 17.701 (7.815), DF = 3, p = 0.001), fifth minute (\u03c72 = 18.637 (7.815), DF = 3, p = 0.000), sixth minute (\u03c72 = 19.431 (7.815), DF = 3, p = 0.000). The results of this study showed that ASX has a favorable effect on testicular ischemia-reperfusion injury (IRI) in rats. In the immunohistochemical part of the study, we found a decrease in the number of apoptotic caspase-3-positive cells in the ASX groups compared to the torsion\u2212detorsion group in which ASX was not applied (group 2), statistically significant when ASX was applied 45 min from the moment of detorsion (group 4). Furthermore, biochemical studies showed a decrease in malondialdehyde values and an increase in the enzyme activity of superoxide dismutase and glutathione peroxidase in group 4. Although the malondialdehyde values did not decrease significantly, the observed median decreased. The superoxide dismutase enzyme activity increased significantly in group 4 compared to groups 2 and 3. The same pattern of results was observed for the glutathione peroxidase enzyme activity in the first six minutes. 
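The bracketed 7.815 reported alongside each Kruskal\u2212Wallis statistic is the chi-square critical value for DF = 3 at the 0.05 level. As a quick sanity check on that value, the chi-square upper tail has a closed form for odd degrees of freedom:

```python
from math import erfc, exp, pi, sqrt

def chi2_sf_df3(x):
    """Upper-tail probability of a chi-square variable with 3 degrees of
    freedom (closed form available for odd df)."""
    return erfc(sqrt(x / 2.0)) + sqrt(2.0 * x / pi) * exp(-x / 2.0)

p_at_critical = chi2_sf_df3(7.815)   # close to 0.05 by construction
```

Any observed statistic exceeding 7.815, as all six GPx minutes do, therefore corresponds to p < 0.05.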
It is also interesting to note the statistically significant decreases in group 3 compared to group 2 in the superoxide dismutase and glutathione peroxidase enzyme activities. We expected the ameliorating effect of ASX on the torsion to be stronger in group 3 than in group 4 because, in group 3, ASX was applied concomitantly with detorsion. Still, the results of all measured variables were closer to the negative control in group 4. This may be due to the sluggish return of blood flow, which can limit the vascular capacity to deliver appropriate doses of antioxidants to the testes during the immediate post-torsion period. By prolonging the duration of torsion, the return of blood after detorsion becomes slower. It is important to note that the first 60\u201390 min after the initial reperfusion is a critical time for a toxic outbreak of free oxygen radicals. Several studies have reported a cytoplasmic 8-OHdG expression [30,31]. This study focused on the acute effect and acute changes after IRI, but in everyday clinical practice, the average time from torsion to surgery often exceeds 90 min. To mimic real-life settings, the study would benefit from extending the time from torsion to surgery; prolonging the time from torsion to reperfusion can be considered in future studies. ASX was administered intraperitoneally, as this route of administration was most appropriate for this model. We are aware that the oral and intravenous routes of administration are more applicable for human administration, but as more detailed pharmacokinetic and pharmacodynamic studies are ongoing, we believe that intraperitoneal administration is more than satisfactory for testing ASX as a potentially potent antioxidant in preventing IRI. 
We opted for a dose of 75 mg/kg, but believe that in future studies the dose may be reduced to keep it within the range currently recommended for use in humans, even though no adverse effects have been found in recent toxicological studies, even at much higher doses. Next, we show that the slow return of blood could influence the effectiveness of the applied antioxidant, but we also point to the more beneficial effect of ASX when applied 45 min after detorsion rather than at the time of detorsion. Additional experimental groups should be included in future studies to determine the optimal time for ASX administration, with each group given ASX at a successively different time from the moment of detorsion. Regarding the already known harmful effect of IRI of the ipsilateral testis on the contralateral testis, the potential of ASX to ameliorate this effect should also be explored. The effects of ASX on testicular torsion had not been investigated prior to our study, although the effects of its precursor lycopene have been: Hekimoglu et al. investigated lycopene in this setting. Although it has been known for centuries that certain natural derivatives (exogenous factors) have beneficial effects on human health and the male reproductive system, it is only in recent decades that they have become increasingly important. Many are already registered as dietary supplements and are presented on the pharmaceutical market (Business Communications Company, 2015). The total carotenoid market in 2019 was $1.8 billion, and \u03b2-carotene, lutein, and ASX accounted for more than 60% of the market share. Within the European Union, ASX from natural sources is currently sold in daily doses of up to 12 mg and is approved by national authorities worldwide in daily doses of up to 24 mg. 
Critical determinants of ASX\u2019s ability to properly integrate into its molecular environment and thus increase its activity are structural features such as size, shape, and polarity. Given the potential ethical issues and the length of the research required, no clinical studies have yet been conducted on the effect of ASX on testicular IRI in humans. The effects of ASX on humans are being explored, showing its beneficial effect on the human body. Our study supports ASX treatment of testicular ischemia-reperfusion injury. Given the rapid growth of research in the field of antioxidants and testicular ischemia-reperfusion injury, we believe that one day powerful antioxidants, especially ASX, will be applicable in clinical settings, given that, to date, there is no cure given to patients."} +{"text": "Introduction: It has long been established that open surgeries were the only options available for the management of intra-abdominal abscesses or collections. These were associated with increased morbidity and mortality. Traditionally, the idea of percutaneous needling could not gain popularity due to poor localization of collections. However, with the advent of ultrasound, percutaneous pigtail-catheter drainage has proven to be minimally invasive and allows precise localization of the drainage site. Objectives: To study the effectiveness of ultrasound-guided pigtail catheter drainage as an alternative to exploratory laparotomy for the management of intra-abdominal abscesses or collections. Materials and methods: A total of 48 patient cases, which included liver abscesses, perinephric collections, malignant ascites, splenic collections, pseudocysts, and psoas abscesses, were studied prospectively in a medical college in India from October 2020 to October 2021. 
The efficacy of the drainage was assessed by serial ultrasound. Results: Out of 48 patients, 34 were male and 14 were female, ranging in age from 19 to 64 years, who were diagnosed with intra-abdominal abscesses or collections and underwent ultrasound-guided pigtail catheter drainage. The average hospital stay for patients was 2.5 days. They were followed up periodically for three months post-procedure, and none had significant complications or recurrence. Conclusion: The pigtail catheter is the treatment of choice for liquefied intra-abdominal collections or abscesses, helping to reduce post-procedure hospital stays and complications. Contribution: This article reiterates the use of minimally invasive techniques in place of open surgeries, with less morbidity. Intra-abdominal collections are one of the major groups of morbidities experienced by the medical and surgical teams. Usually, small unilocular collections resolve spontaneously with medical therapy alone, without any surgical intervention. However, large abscesses or collections usually require surgical interventions like laparotomy for their treatment. For a long period, patients were subjected to extensive surgeries for the management of intra-abdominal abscesses or collections. These were associated with increased morbidity, mortality, and complications such as adhesions, as well as an increased hospital stay, which further exposes a patient to the risk of infection. Hence, for better patient care, it is vital to avoid major surgical procedures. Percutaneous therapeutic procedures like pigtail catheter insertion are increasingly performed nowadays as an alternative to open surgical laparotomy [2,3]. Traditionally, the idea of percutaneous needle aspiration could not gain popularity due to the poor localization of collections. 
However, with the advent of ultrasound, percutaneous pigtail catheter drainage has proven to be safe, effective, and minimally invasive, and it allows precise localization of the drainage site. Percutaneous drainage has been used for liver abscesses for more than six decades; early procedures, however, had high failure rates due to poor localization. Later, Gerzof et al. proved that percutaneous drainage was effective when used under the guidance of ultrasound and computed tomography together. This is a descriptive interventional study design. A total of 48 patients were studied prospectively at the Dr. DY Patil Medical College and Research Center, Pune, from October 2020 to October 2021. Research Protocol No. IESC/PGS/2020/1731 was approved by the Institutional Ethics Review Board. All the procedures were carried out on Siemens or Sonosite ultrasound machines, as per their availability. Written consent was taken, and all the patients were informed about the technique and possible complications before the procedure. Platelet counts, prothrombin time (PT), and international normalised ratio (INR) results were requested before the procedure as a precautionary measure to rule out coagulopathy, and patients with coagulopathy were excluded from the study. Also, solid collections and those with multiple thick internal septations were excluded from the study. The surgical team with immediate operative capability was kept on standby in case of any complications or failure. Abdominal abscesses are typically ellipsoid, displacing the surrounding viscera, which provides a safe window for percutaneous drainage. None of the collection drainages was refused due to a lack of a safe entry route. The exact percutaneous entry route was meticulously planned on sonography before needling, considering various criteria like size, location, distance from the skin surface, and the anatomic relation of the collection to surrounding vital structures like vessels and viscera. 
Proper sterile precautions were taken before the procedure. A sterile cover was placed over the ultrasound transducer, sterile gel was applied over the cover, and the entry route was reconfirmed. As a local anaesthetic, 5 cc of 1% lidocaine was injected subcutaneously. The skin incision was made with a No. 11 scalpel blade. Under real-time ultrasound guidance, an 8-14 Fr pigtail catheter was inserted through the desired route, avoiding vital structures like vessels and viscera. The catheter was unscrewed from the trocar and advanced into the collection site. The initial sample was collected and sent for diagnostic fluid analysis and antibiotic sensitivity. A three-way stopcock and extension tubing with vacuum drainage bottles were attached. The catheter was fixed externally by suturing it to the skin and placing a piece of tape around the catheter. Patients were monitored post-procedure and underwent CT screening to confirm catheter placement and to look for intra-abdominal complications like bleeding and viscus perforation. No patient experienced a complication, and all were discharged two to three days post-procedure. Patients were followed up periodically for three months, and none had a recurrence or significant complications (Table). Of the 48 patients, 34 were male and 14 were female, ranging in age from 19 to 56 years, of whom 28 were between 40 and 56 years old. This suggested that collections in older patients were less likely to resolve spontaneously with medical therapy alone and thus needed interventional procedures. Patients complained of various symptoms depending on the pathology, most commonly pain in the abdomen (46 patients) and fever (44 patients), followed by generalised weakness (34 patients), pallor (28 patients), jaundice (16 patients), and vomiting (15 patients) (Table). This was a prospective study, in which patients with intra-abdominal collections or abscesses were assessed and drained using a pigtail catheter under ultrasound guidance. 
The treatment of these collections has been dramatically improved, with a significant decrease in mortality and morbidity due to the concurrent use of antibiotics and imaging guidance instead of open surgical drainage. The overall success rate without any complications in our study was 91.6%. However, 8.4% of patients experienced minor complications like kinking of the catheter, catheter displacement or removal, and blockage of the catheter due to extremely thick fluid, flakes, and debris. These complications were overcome by repositioning the catheter under aseptic precautions, using larger French-size catheters, and frequently flushing the catheter with normal saline in cases of obstruction. Percutaneous drainage, using either needle aspiration or a catheter, has proven to be standard management for the treatment of liver abscesses. Precaution should be taken to avoid any adjacent visceral perforation. Percutaneous nephrostomy (PCN) has proven to be fundamental for upper urinary diversion in cases of hydronephrosis due to impacted ureteric calculus, pyonephrosis, ureteric stricture, failed Double-J (DJ) stent, distal ureteral malignancy, etc. Pigtail catheter placement for a direct PCN tube is easy to perform in expert hands, precise, and comparatively economical compared with its counterpart, the wire-guided technique. Patients with advanced malignancies and associated refractory ascites often have limited treatment options and a poor prognosis. These patients also suffer from debilitating symptoms of abdominal distension, early satiety, breathlessness, and vomiting, which further impair quality of life. Paracentesis has proven to be effective in palliative management; however, the ascites tends to recur, and the need for repeated procedures can cause further complications like infection and viscus perforation [8]. 
Splenic abscess presents with vague or nonspecific symptoms and is often seen in patients with comorbidities like an immunocompromised status [10]. In our study, one patient experienced blockage of the catheter 24 hours after insertion due to extremely thick viscous fluid, large flakes, and debris; this was managed by flushing the catheter with normal saline. Psoas abscess presents with a variable-sized abscess, which may sometimes extend the entire length of the muscle. Many surgeons advocate open surgical management of psoas abscesses, given that it reduces the pressure within the abscess cavity and relieves the symptoms early. Pancreatic pseudocysts are a common complication of acute pancreatitis. External drainage of a pseudocyst is indicated when the pseudocyst is large (more than 5 cm) or when its contents are purulent. However, when the pancreatic duct communicates with the pseudocyst, drainage will lead to refilling of the sac. Recently, surgical exploration using laparoscopy has been introduced; however, it requires general anaesthesia, which may itself cause morbidity. All patients drained successfully with no major complications, and the catheter was removed after complete collapse of the cavity, usually seen within nine days. None of the patients had any recurrence or delayed complications like fistula formation on follow-up ultrasound screening over a period of three months. Limitations: We acknowledge the limited sample size of each subgroup and the scope for further dedicated studies. 
A large number of samples for individual subgroups of collections would help broaden our view of rare complications that may have been missed. We concluded from our study that, using a precise technique for pigtail catheter insertion under ultrasound guidance, most intra-abdominal collections can be managed without the need for an exploratory laparotomy. Cost, morbidity, complications, hospital stay, and the chance of hospital-acquired infection were all drastically reduced."}
{"text": "Tumor regression throughout treatment would induce organ movement, but little is known of this in the esophagus. To achieve successful tumor regression, radiation therapy requires several weeks of radiation to be delivered accurately to the tumor. Usually, a 5–10 mm margin is allowed for set-up error and internal organ motion. Our case exhibited an unexpectedly large movement of the esophagus across the aorta with tumor regression, which extended outside the margin and thus outside the radiotherapy field. Such movements may affect subsequent invasive procedures or treatment during cancer therapy. After the unexpectedly large movement of the esophagus due to tumor regression, we revised the radiotherapy plan to reflect the new esophageal position. This implies that regular imaging and close monitoring are required during treatment of esophageal cancer. A 79-year-old man was diagnosed with esophageal cancer during an evaluation of dysphagia. Upper endoscopy revealed a lumen-encircling mass 32 cm from the incisors; a biops… Definitive concurrent chemoradiation was scheduled. The radiation dose was 50.4 Gy/28 fractions over 5.5 weeks. The patient received a continuous infusion of 5-fluorouracil (1000 mg/m2) on days 1 through 4 and days 22 through 25, and cisplatin (75 mg/m2) was given on days 1 and 22. By 3 weeks after commencement of chemoradiation, the esophagus was outside of the radiotherapy field during image-guided radiotherapy. Follow-up CT revealed that the esophagus had moved from the right of the aorta to the left (fusion images, panels C and D). Tumor regression throughout treatment would induce organ movement [2,3]. To"}
{"text": "The rates of early gastric cancer and type 2 diabetes mellitus (T2DM) are sharply increasing in Korea. Oncometabolic surgery, in which metabolic surgery is conducted along with cancer surgery, is a method used to treat gastric cancer and T2DM in a one-stage operation. From 2011 to 2019, a total of 48 patients underwent long-limb Roux-en-Y gastrectomy (LRYG) in Inha University Hospital, and all data were reviewed retrospectively. A 75 g oral glucose tolerance test and serum insulin level test were performed before and 1 week and 1 year after surgery. One year after the LRYG operation, 25 of 48 patients showed complete or partial remission and 23 patients showed non-remission of T2DM. The preoperative HbA1c level was significantly lower and the change in HbA1c was significantly greater in the T2DM remission group. Insulin secretion indices (insulinogenic index and disposition index) increased significantly in the T2DM remission group. In contrast, the insulin resistance indices (homeostatic model assessment of insulin resistance (HOMA-IR) and Matsuda index) changed minimally. In the case of LRYG in T2DM patients, remnant β cell function is an important predictor of favorable glycemic control. Obese patients usually have comorbidities such as type 2 diabetes mellitus (T2DM), sleep apnea, and hyperlipidemia. The concept of metabolic surgery has emerged, as T2DM improvements have been observed after bariatric surgery. Roux-en-Y gastric bypass (RYGP) is a type of bariatric surgery, and the diabetes mellitus (DM) remission rate among RYGP patients is reported to be approximately 40.6–88%7. 
According to the report on the Korea National Health and Nutrition Examination Survey (KNHANES) 2011, 12.4% of Koreans over the age of 30 have diabetes, amounting to about 4 million people or more9. This rapid increase in the number of diabetic patients leads to complications and mortality associated with diabetes, which imposes an enormous economic burden10. In 2014, diabetes was the sixth leading cause of death in Korea, accounting for about 3.9% of all deaths among people aged 20–79 years11. Diabetes is also the most common cause of renal replacement therapy12. Approximately 60% of diabetes cases worldwide occur in Asia, the rate of increase in incidence is higher in Asia than in the West, and diabetes occurs more frequently in young people13. As a result of early detection, along with an increase in survival time and a 21% decrease in mortality rate, the 5-year survival rate of EGC has been reported to be over 90% in recent decades14. Therefore, surgery that can improve the quality of life of patients, such as function-preserving surgery or oncometabolic surgery, has been considered, beyond surgery used only as a treatment for gastric cancer. With the National Cancer Screening System in Korea, the detection rate of early gastric cancer (EGC) has increased by 1.7 times compared with the non-screening period, and more than 50% of patients are found to have EGC16. The reason for these results is assumed to be hormonal change according to the foregut theory and hindgut theory21. Several studies have reported hormonal changes with various lengths of biliopancreatic limbs23. In order to maximize the effect of this hormonal change, long-limb Roux-en-Y gastrectomy was performed to improve T2DM, and positive results were reported in several studies26. There have been several reports that Roux-en-Y reconstruction has a better effect on improving diabetes after surgery than other reconstructions30. Few studies have evaluated factors that are important for improving T2DM after gastrectomy30. 
We evaluated the difference between patients whose T2DM improved and those whose T2DM did not improve after long-limb Roux-en-Y gastrectomy (LRYG) and investigated potential factors predicting T2DM improvement after oncometabolic gastrectomy. The rate of T2DM improvement after gastrectomy was reported to be approximately 30.4–55.7% in several previous studies; this rate is lower than that observed after bariatric surgery33. A retrospective study was conducted with 48 gastric cancer patients with T2DM who underwent LRYG at Inha University Hospital from January 2011 to December 2019. The baseline and postoperative follow-up data, glycemic control status outcomes, antidiabetic medication use, and success of T2DM remission were reviewed. Of these 48 patients, 36 underwent subtotal gastrectomy, and 12 underwent total gastrectomy. Laboratory tests, including oral glucose tolerance tests (OGTTs) and serum insulin levels, were performed before and 1 week and 1 year after surgery. Based on these results, various DM indices were calculated and reviewed. The insulinogenic index and disposition index were used to evaluate the patients' insulin secretory function, and the homeostatic model assessment of insulin resistance (HOMA-IR) and Matsuda index were used to evaluate insulin resistance. T2DM patients were defined as those with fasting blood sugar (FBS) > 126 mg/dL and HbA1c > 7% or those with a previous antidiabetic medication history. T2DM complete remission was defined as HbA1c < 6% and FBS < 100 mg/dL, and partial remission was defined as HbA1c < 6.5% and FBS 100–125 mg/dL without the use of antidiabetic medication 1 year after surgery. 
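The indices and remission criteria described above can be sketched as follows. The OGTT values are hypothetical, and the formulas are the commonly published forms (HOMA-IR in the mg/dL convention, disposition index taken as insulinogenic index × Matsuda index); the authors' exact calculations may differ in detail:

```python
import math

# Hypothetical OGTT values: glucose in mg/dL, insulin in uU/mL,
# sampled at 0, 30, 60, 90, and 120 minutes.
glucose = {0: 110.0, 30: 180.0, 60: 160.0, 90: 140.0, 120: 125.0}
insulin = {0: 8.0, 30: 45.0, 60: 50.0, 90: 35.0, 120: 20.0}

# HOMA-IR: fasting glucose x fasting insulin / 405 (mg/dL convention).
homa_ir = glucose[0] * insulin[0] / 405.0

# Insulinogenic index: early insulin response over the early glucose rise.
igi = (insulin[30] - insulin[0]) / (glucose[30] - glucose[0])

# Matsuda index: whole-body insulin sensitivity from fasting and mean values.
g_mean = sum(glucose.values()) / len(glucose)
i_mean = sum(insulin.values()) / len(insulin)
matsuda = 10000.0 / math.sqrt(glucose[0] * insulin[0] * g_mean * i_mean)

# Disposition index: secretion adjusted for sensitivity.
di = igi * matsuda

def remission_status(hba1c: float, fbs: float, on_medication: bool) -> str:
    """Classify the 1-year outcome using the paper's stated criteria."""
    if not on_medication and hba1c < 6.0 and fbs < 100.0:
        return "complete remission"
    if not on_medication and hba1c < 6.5 and 100.0 <= fbs <= 125.0:
        return "partial remission"
    return "non-remission"

print(remission_status(5.8, 95.0, on_medication=False))  # complete remission
```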
The non-remission group was defined as those who used antidiabetic medication or whose glucose profile was not controlled to at least the partial remission level. Oncometabolic surgery was performed only on patients expected to have stage I or II disease on preoperative examination, because these patients are expected to have long-term survival with no need for chemotherapy. Radical D1+ or D2 subtotal or total gastrectomy was performed according to the patient's gastric cancer location and stage. Conventional Roux-en-Y anastomosis is usually performed with a 20 cm jejunal limb and a 40 cm Roux limb. In contrast, for the LRYG anastomosis, the jejunal limb was made 80 cm below the ligament of Treitz for foregut bypassing, and an 80 cm Roux limb was anastomosed with the remnant stomach so that food reaches the distal ileum early. Statistical analysis was performed using SPSS v 22.0. The chi-square test was used to analyze categorical variables, and Student's t-test was used to analyze continuous variables in the T2DM remission and non-remission groups. Fisher's exact test was used to analyze categorical variables, and the Mann–Whitney test was used to analyze continuous variables in the subtotal gastrectomy subgroup analysis. Data are presented as means with standard deviations or medians with ranges. All participants were gastric cancer patients who required gastrectomy. Oncometabolic surgery and the OGTT were explained to all patients. Only patients who provided informed consent to oncometabolic surgery and the OGTT were included; patients who did not wish to participate were excluded. The study protocol was approved by the Institutional Review Board of Inha University Hospital (IRB number: INH 2021-01-013). All methods were performed in accordance with the relevant guidelines and regulations. Forty patients were using oral antidiabetic drugs; among them, 5 patients were using insulin injections as well. 
Eight patients were not using antidiabetic treatment: 6 patients were diagnosed with T2DM for the first time before surgery, and 2 patients voluntarily did not use antidiabetic medication. The T2DM remission group included 25 patients, and the non-remission group included 23 patients. In the T2DM remission group, 21 patients achieved complete remission, and 4 patients achieved partial remission. In the non-remission group, 19 patients used oral antidiabetic drugs, 1 patient used insulin injections along with oral antidiabetic drugs, and 3 patients did not use antidiabetic drugs in spite of an HbA1c > 7.0% at 1 year (their HbA1c was < 7.0% at 6 months) because they refused them. The preoperative HbA1c level was significantly lower in the T2DM remission group (p = 0.002). There was an even more significant difference in HbA1c levels at 1 year after LRYG. T2DM duration was shorter in the remission group, but the difference was not statistically significant. The preoperative insulinogenic index and disposition index, which represent insulin secretory function, were significantly higher in the T2DM remission group. The preoperative insulin level was also significantly higher in the T2DM remission group. After 1 year, serum insulin increased more in the T2DM remission group. Insulin secretory function improved 1 year after surgery, resulting in a larger difference between the two groups than before surgery. In the T2DM remission group, the insulin secretory indices 1 year after surgery improved to within the normal range, whereas the non-remission group did not show significant improvement. The insulin resistance indices were rather high in the T2DM remission group. In both groups, these indices improved rapidly 1 week after surgery and worsened again by 1 year, but patients still showed improved insulin resistance status compared with that before surgery. There was no statistically significant difference between the two groups. 
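Categorical group comparisons of this kind use the chi-square test named in the methods; a self-contained sketch of the 2×2 Pearson statistic with illustrative counts (not asserted to reproduce the paper's analysis):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative counts: remission vs. non-remission in two surgical groups.
stat = chi_square_2x2(20, 16, 5, 7)
print(round(stat, 3))   # 0.696
print(stat > 3.841)     # False -> not significant at alpha = 0.05 (df = 1)
```

The 3.841 threshold is the chi-square critical value for one degree of freedom at α = 0.05.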
The body mass index (BMI) change between the two groups 1 year after surgery was not statistically significant. In the T2DM remission group, the OGTT results improved sharply 1 week after surgery and became almost normal after 1 year. Insulin secretion also increased to some extent in the non-remission group, and the OGTT results showed a similar pattern to that of the normal group, but the values were not in the normal range. In this group, the HbA1c level improved further, so there was a more significant difference in HbA1c after 1 year. T2DM duration was statistically significantly shorter in the remission group. The preoperative insulinogenic index and disposition index were significantly higher in the T2DM remission group. The insulin resistance index and insulin secretory function index showed similar change patterns. Of the 12 patients who underwent total gastrectomy, 5 showed T2DM remission, of whom 3 showed complete remission and 2 showed partial remission. In the subtotal gastrectomy group, the proportion of patients with T2DM remission was higher, but the difference was not statistically significant (20/36 (55.6%) vs. 5/12 (41.7%): p = 0.045) (Table). T2DM is caused by an increase in insulin resistance. Insulin secretory function compensates for the increase in insulin resistance; however, if this compensation does not suffice, the glucose level becomes uncontrollable. If this hyperglycemic stress persists for a long time, pancreatic β cell function is eventually destroyed, resulting in a decrease in insulin secretion. Because Asian people have fewer pancreatic β cells, insulin secretion decreases more severely than in Western people. The mechanism of gastric cancer surgery as a metabolic surgery has some similarity to that of bariatric RYGP. Remnant stomach volume reduction through gastric resection results in caloric restriction and weight loss. 
In the case of Billroth II (BII) or Roux-en-Y gastrectomy, food material bypasses the duodenum and proximal jejunum and reaches the distal ileum earlier than in normal individuals. The difference is that gastric cancer patients are relatively less obese; therefore, the effect of weight loss is small compared to that in bariatric patients. In the case of subtotal gastrectomy, the fundus cannot be excised, which is disadvantageous for caloric restriction and the reduction of ghrelin secretion. The bypass length is shorter than that in bariatric surgery; therefore, the effect of enteric hormonal changes may be weaker than that of bariatric RYGP36. After LRYG, the remission rate is reported to be 11.6–78.6%25. In this study, the postoperative remission rate was approximately 47%, and when partial remission was included, it was approximately 55.6% 1 year after surgery. This remission rate is comparable to the T2DM remission rate in patients with class II obesity37. One of the reasons DM improves after bariatric RYGP is weight loss, which has the effect of improving insulin resistance. Another reason is the effect of changes in hormones of the insuloenteric axis, caused by the change in alimentary tract passage, which contributes more than weight loss. The foregut hypothesis posits that patients undergoing duodenal bypass experience antidiabetic effects due to decreased anti-incretin factors. The hindgut hypothesis posits that early contact of food material with the distal ileum improves diabetes through the increased and early release of hormones such as glucagon-like peptide-1 (GLP-1) and peptide YY (PYY)21. The T2DM remission rate after conventional subtotal gastrectomy was reported to be 11.2–22.2%, and after Roux-en-Y reconstruction, it was reported to be 20–30.7%40. 
In our study, the preoperative insulinogenic index, which represents insulin secretory function, was the only significant factor influencing T2DM remission. Insulin resistance showed a rapid improvement without weight loss within 1 week after surgery, after which it worsened. Insulin resistance and weight reduction did not show differences between the two groups 1 year after surgery. In contrast, insulin secretion showed a significant difference between the two groups 1 year after surgery. In addition, the OGTT graph of the non-remission group had a normal shape, but the glucose values were abnormal. These results indicated that the improvement in T2DM after LRYG caused by increased insulin secretion due to the effect of metabolic surgery was greater than the improvement caused by weight reduction, which is similar to the findings of studies conducted in obese T2DM patients21. The improvement in T2DM after conventional gastrectomy has been thought to be caused by improved insulin resistance due to weight loss. However, in several previous bariatric or gastrectomy studies, there have been reports that pancreatic β cell function plays a more important role in T2DM remission. A statistically significant factor related to T2DM remission was the preoperative insulinogenic index. The preoperative insulinogenic index was a useful predictor (AUC = 0.694), with a cutoff value of 0.105 (Fig.). Nowadays, longer limbs are accepted as the usual method41. If LRYG is performed with a longer biliopancreatic limb and alimentary limb, better results can be expected. The OGTT is a very uncomfortable examination because it causes dizziness, nausea, vomiting, etc. It requires several blood samples at one visit, so patient compliance is poor. In this study, various diabetes indices were obtained through OGTT and serum insulin level tests. 
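A cutoff such as the insulinogenic-index value of 0.105 with AUC = 0.694 reported in this study is typically derived from an ROC analysis, often choosing the threshold that maximizes Youden's J. A minimal self-contained sketch on made-up data (not the study's dataset, and not necessarily its exact method):

```python
def roc_youden_cutoff(scores, labels):
    """Return (auc, best_cutoff) for a binary outcome, choosing the
    cutoff that maximizes Youden's J = sensitivity + specificity - 1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # AUC as the probability that a positive case scores above a negative one.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        sens = sum(p >= cut for p in pos) / len(pos)
        spec = sum(n < cut for n in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return auc, best_cut

# Hypothetical insulinogenic-index values; 1 = remission, 0 = non-remission.
scores = [0.04, 0.05, 0.07, 0.08, 0.12, 0.09, 0.11, 0.15, 0.20, 0.30]
labels = [0,    0,    0,    0,    0,    1,    1,    1,    1,    1]
auc, cutoff = roc_youden_cutoff(scores, labels)
print(round(auc, 3), cutoff)
```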
It is worthwhile to determine which factor is more important in indicating T2DM remission by calculating these indices. The mixed meal tolerance test does not allow the various DM indices to be calculated. Although the OGTT is an inconvenient test, it was used for the purposes of this study. All patients gave informed consent for oncometabolic surgery and the OGTT. This study was started in 2010. At the time, it was thought that a biliopancreatic limb and alimentary limb of about 80 cm would be sufficient. This study was a retrospective study, so the influence of selection bias cannot be excluded. There were some patients who missed follow-up visits, and there was difficulty performing the OGTT, so some laboratory results were missing. Of the 58 patients, 10 had missing data, and 48 patients were included in the analyses. This study had only 1-year follow-up results and a small number of patients, which limited the statistical power. There is also a lack of comparative evaluation of patients' nutritional status. The effects of LRYG on weight loss and insulin resistance improvement were not different between the two groups, and the difference in insulin secretory function was the most important factor associated with T2DM remission. The results of this study show that the effect of LRYG in T2DM patients with low BMI was similar to that of metabolic surgery in obese T2DM patients. Oncometabolic surgery can be one of the surgical options for T2DM gastric cancer patients with a high possibility of long-term survival and well-preserved pancreatic β cell function, even if they are of normal weight. 
These results also give us a glimpse of the possibility of expanding the indication for metabolic surgery to T2DM patients who are overweight or of normal BMI."}
{"text": "Background: The perception of COVID-19 vaccines as being unsafe is a major barrier to receiving the vaccine. Providing the public with accurate data regarding the vaccines would reduce vaccine hesitancy. Methods: A cross-sectional study was conducted to collect data on the side effects experienced by the vaccinated population to assess the safety of the inactivated COVID-19 vaccine. Results: The majority of the study participants (n = 386) were female (71.9%), and 38.6% of them were under 30 years old. Around half of the participants (52.8%) reported side effects after receiving the inactivated COVID-19 vaccine. Fatigue (85.1%), a sore arm at the site of the injection (82.1%), and discomfort (67.2%) were the most commonly reported side effects after the first dose. Reporting side effects was significantly associated with the female sex. Significant associations between being female and experiencing chills, muscle or joint pain, anorexia, drowsiness, and hair loss were also found, as well as between being above the age of 30 and experiencing a cough. Being a smoker was significantly associated with experiencing a cough and a headache. Furthermore, chills and a sore throat were significantly associated with individuals who had not been infected before. Conclusion: Mild side effects were reported after receiving the inactivated COVID-19 vaccine. Fatigue was the most commonly reported side effect. Females, older adults, smokers, and those who had never been infected with COVID-19 had a greater susceptibility to certain side effects. When a disease is declared to be a pandemic, healthcare providers, clinical researchers, and pharmaceutical manufacturers rush to find a cure or a strategy of prevention to reduce the spread of the disease and its related death toll. 
This includes the development of vaccines to control the spread of the disease. The World Health Organization (WHO) declared the outbreak of coronavirus disease 2019 (COVID-19) to be a pandemic in March 2020 [4,5]. Around 70% of the world's population has received at least one dose of a COVID-19 vaccine, and in Jordan, around 50% of the population is vaccinated. According to the Strategic Advisory Group of Experts (SAGE), vaccine hesitancy is a 'delay in accepting or refusing vaccination, in spite of the availability of immunisation services'. Globally, there was a need to increase public confidence in COVID-19 vaccines. Accordingly, the Jordanian government took many actions to increase trust in the COVID-19 vaccine, including a countrywide immunisation campaign coordinated by the Ministry of Health and the National Center for Security and Crisis Management [17]. People's attitudes towards vaccination have changed over the era of the COVID-19 pandemic. No specific study has looked closely into this change of attitude; however, an indirect evaluation of the reported beliefs can provide such an insight. A cross-sectional study (August 2020) was conducted to evaluate the perception of people in Jordan regarding the COVID-19 vaccines and assess their hesitancy toward receiving the COVID-19 vaccine. Of the participants (n = 1287), more than half (n = 665) reported not having adequate information about the COVID-19 vaccine benefits. Moreover, 64% of them preferred to achieve natural immunity. This study aimed to assess the safety of the inactivated COVID-19 vaccine and reveal the association between certain side effects and different parameters, which in turn would provide accurate data regarding the COVID-19 vaccine and help the public understand what to expect after receiving the vaccine. 
This is the first study conducted among the Jordanian population to assess the side effects specifically associated with the inactivated COVID-19 vaccine. Other published studies in Jordan assessed different types of COVID-19 vaccines, with none of them focusing on the inactivated COVID-19 vaccine. A cross-sectional study was carried out in August 2021 to collect data regarding the side effects of the inactivated COVID-19 vaccine (…®, Beijing, China) among the Jordanian population. The survey was developed and disseminated digitally using Google Forms to Jordanian inhabitants who had taken at least the first dose of the inactivated COVID-19 vaccine and were deemed eligible to participate in the study. Following an extensive review of the literature, a broad spectrum of potential side effects following the administration of the inactivated COVID-19 vaccine was identified; thus, a pool of questions was generated from various sources to assist in constructing the first draft of the survey (1–3). To meet the study objectives, the research team developed the survey based on the available information regarding the side effects of the inactivated COVID-19 vaccine. To ensure face and content validity, the first draft of the survey was validated by an expert panel who evaluated the questions' comprehension, relevancy, and word clarity. The expert panel included five independent academics who were randomly selected from a list of 30 academics and who worked at different higher education institutions. The academics were selected based on their experience (minimum 10 years) in related research areas. In addition to the five academics, two specialist pulmonologists were requested to validate the survey. The experts were invited to assess the suitability of the words, appropriateness of the content, consistency of the layout and style, relevancy of the survey items to the study objectives, and whether the items were related to the study's aim. 
Furthermore, they confirmed that the survey was free from medical jargon and complicated terminology. Amendments were made based on their feedback. A pilot of the survey was conducted, and necessary refinements were made. The survey questions were revised as a final step in the survey development to make the study appropriate for online administration. The survey's final edition consisted of two primary sections addressing the aspects of interest. Social media was primarily used to recruit the participants. The potential participants were first asked via WhatsApp if they had received at least the first dose of the inactivated COVID-19 vaccine; if the answer was “Yes”, the potential participants were then briefly informed of the study's aim, and the survey link was sent to them. Moreover, Facebook was used to recruit participants; the research team posted the survey link using their accounts, and a question about receiving at least the first dose of the inactivated COVID-19 vaccine, along with the aim, had to be answered “Yes” for them to proceed to the other survey sections. The survey was designed to be completed within an average of five minutes. Eligible participants could view the ethics committee's approval before filling out the survey. The sample size calculated in this study was that needed to reveal the side effects experienced by people in Jordan after receiving the inactivated COVID-19 vaccine, after considering the number of vaccinated individuals in Jordan. A p-value of ≤0.05 was deemed to be statistically significant (chi-square test). Following the data collection, the data were analysed using the Statistical Package for the Social Sciences (SPSS), Version 24.0. Qualitative variables are presented as percentages. 
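Survey sample sizes of this kind are often derived from Cochran's formula, optionally with a finite-population correction based on the number of vaccinated individuals. The study does not state its exact method, so this is only an illustrative sketch:

```python
import math
from typing import Optional

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05,
                        population: Optional[int] = None) -> int:
    """Cochran's sample-size formula n0 = z^2 * p * (1 - p) / e^2,
    with an optional finite-population correction."""
    n0 = z ** 2 * p * (1 - p) / e ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 95% confidence, 5% margin of error, maximally conservative p = 0.5:
print(cochran_sample_size())  # 385
```

With these conventional defaults the minimum is 385 respondents, which is of the same order as the 386 responses analysed here; whether that was the authors' actual calculation is an assumption.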
Logistic regression was conducted to screen the variables (sex, age, smoking status, and whether the participants had been infected before) affecting whether the participants experienced side effects after receiving the inactivated COVID-19 vaccine. For the simple logistic regression, a variable with a p-value less than 0.25 was deemed eligible to enter the multiple logistic regression, which explored the independent variables significantly associated with experiencing side effects after receiving the inactivated COVID-19 vaccine. To ensure the absence of multicollinearity among the independent variables, the variables were chosen after confirming their independence, with tolerance values greater than 0.2 and variance inflation factor values less than five. For the multiple logistic regression, a variable with a p-value < 0.05 was considered statistically significant.

The responses of 386 participants were included in the study analysis. Of the participants (n = 386), 149 (38.6%) were under 30 years of age. About sixty percent of the participants (n = 229) were married, around three quarters were living in Amman, and 94.8% had Jordanian nationality. Regarding education, most of the participants had a graduate or postgraduate degree. About 65% of them were employed (n = 250), and around one-third were smokers (n = 124). Before receiving the inactivated COVID-19 vaccine, 53.0% of the participants were virus-free (had never been infected with COVID-19).

Around half of the study's participants reported side effects after receiving the inactivated COVID-19 vaccine. The participants were questioned about 23 different side effects. Fatigue was the most reported side effect (n = 204) after receiving the first dose of the inactivated COVID-19 vaccine.
Further tests were conducted to assess the association between each side effect and the participants' sex, age, smoking status, and whether they had previously contracted COVID-19, and the multiple logistic regression highlighted that the female sex (p-value = 0.027) is significantly associated with experiencing side effects. There was a significant association between being female and experiencing chills, muscle or joint pain, anorexia, drowsiness, and hair loss. A significant association was also found between being over 30 years of age and experiencing a cough. Being a smoker was significantly associated with experiencing a cough and a headache. Furthermore, chills and a sore throat were significantly associated with individuals who had never been infected.

Of the study participants (n = 386), 84.5% (n = 326) received the second dose of the inactivated COVID-19 vaccine. More than half of the participants who received the second dose reported side effects. Only nine participants (2.4%) reported severe side effects that required hospital admission within four weeks of receiving the inactivated COVID-19 vaccine.

This study was conducted to assess the safety of the inactivated COVID-19 vaccine and to reveal the association between certain side effects and different parameters by collecting data on the short-term side effects after receiving the vaccine. About half of the study participants reported experiencing side effects after the first dose of the inactivated COVID-19 vaccine. More than half of the current study's participants reported seven out of the twenty-three side effects. Fatigue was the most commonly reported side effect after both doses. Moreover, certain side effects were significantly associated with females.

The vaccines' side effects, such as fever, fatigue, muscle pain, and injection site inflammation, are considered to be a typical natural response to injecting foreign irritants, which is managed by the body's innate immune system.
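Associations of this kind (e.g., sex versus a given side effect) are conventionally tested on a 2×2 contingency table with a chi-square test, falling back to Fisher's exact test when expected cell counts are small. A minimal sketch with made-up counts, not the study's data:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table: rows = female/male, columns = chills yes/no
table = [[60, 120],
         [30, 176]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Fisher's exact test is the usual fallback when any expected count is < 5
odds_ratio, p_exact = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p_exact:.4f}")
```

The `expected` matrix returned by `chi2_contingency` is what one inspects to decide whether the exact test is needed.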
Neutrophils and macrophages release cytokines when they identify foreign vaccine particles. Cytokines are the chemical messengers that cause immunological reactions such as fever and muscle discomfort. Thus, when a vaccine is injected, the cytokine response is what is anticipated to happen.

Multiple studies among different populations have assessed the side effects following the COVID-19 vaccine; many of them found that the inactivated COVID-19 vaccine induced fewer side effects than other types of COVID-19 vaccines did [29,30,31]. Several studies likewise reported mild side effects after receipt of the inactivated COVID-19 vaccine [28,29,32]. The inactivated COVID-19 vaccine therefore appears to be a safe choice, owing to its self-limiting mild side effects.

Some side effects might be reported under various conditions unrelated to the vaccine; such reporting can reflect cultural nuances that some cultures lean toward in certain situations. Perhaps some of the side effects attributed to the vaccine in Jordan are related more to culture than to medical or pharmacological differences. In this study, of the participants (n = 204) who reported side effects after the first dose, 19.4% suffered from hair loss. This side effect has not been reported in any other studies, although other published studies have documented related findings: for example, in Italy, three cases of alopecia areata recurrence were reported following the first COVID-19 vaccine dose.

In this study, side effects related to the inactivated COVID-19 vaccine were significantly associated with the female sex. This result is consistent with other studies, in which the side effects associated with the vaccines were more common among females than among males [30,37,38]. A significant association was found between being over 30 years old and experiencing a cough. This is similar to a study by Lounis et al., where older participants developed more side effects.
Being a smoker was significantly associated with experiencing a cough and a headache among the current study's participants. As documented in a systematic review, active smoking negatively affects the body's humoral responses to COVID-19 vaccines, but the pathophysiologic mechanism for this relation is not fully understood.

Dar-Odeh et al. conducted a cross-sectional study to assess the long-term adverse events (LTAE) of three COVID-19 vaccines among healthcare providers (dentists and physicians). Among the different types of vaccine, the inactivated COVID-19 vaccine showed the strongest significant association with LTAE. The present study assessed short-term side effects, and more than half of the participants who reported side effects after the first dose documented experiencing fatigue, muscle and joint pain, a headache, and drowsiness; the same side effects were reported as LTAE in the study conducted by Dar-Odeh et al. (see the Supplementary Material).

This study comes with limitations. Since the current study was based on an online questionnaire, this might be a source of selection bias. Moreover, COVID-19 infection after vaccination was not assessed; hence, there could have been coincidence between the reported side effects and the symptoms of infection. Additionally, muscle and joint pain were combined as one side effect; future studies may separate them into two different side effects owing to their differing aetiology. Finally, although the study met the minimum calculated number of participants, future studies can use a larger sample size to generalise the results further. Knowledge of COVID-19 vaccine safety is crucial to reduce public hesitancy to receive the vaccine.

The Pfizer BioNTech COVID-19 vaccine was the first to receive emergency authorization and approval from the FDA.
Therefore, it is preferred by most recipients; however, many people are concerned about the vaccine's side effects. At the time of the study, December 2021, Palestine lacked a national reporting system for monitoring adverse vaccine effects. Therefore, this study investigates the post-vaccination adverse events following Pfizer/BioNTech COVID-19 vaccine administration in Palestine and identifies their occurrence, extent, and severity among university staff, employees, and students at Birzeit University.

A questionnaire-based retrospective cross-sectional study was conducted using a university website (Ritaj), social media platforms, and in-person interviews. The Chi-square, Fisher's exact, and McNemar's tests were used to investigate significant relationships. Data were analyzed using SPSS version 22.

In total, 1137 participants completed the questionnaire; 33.2% were males, and the mean age was 21.163 years. All participants received at least one dose of the Pfizer-BioNTech COVID-19 vaccine. Approximately one-third of participants reported no adverse effects after receiving the first, second, or third doses. The most commonly reported adverse events were fever, chills, headache, fatigue, pain and swelling at the injection site, muscle pain, and joint pain. Allergic reactions were reported by 12.7% of the participants; furthermore, participants with a history of allergy or anaphylaxis before vaccination had a significantly higher tendency for post-vaccination allergic reactions. Eight participants reported rare side effects, including 7 (0.6%) cases of thrombocytopenia and one (0.1%) case of myocarditis. Males, participants aged less than 20 years, and smokers were significantly less likely to complain of adverse events. The number of reported side effects was significantly higher after the second vaccine dose than after the first dose.
Finally, COVID-19 infection before vaccination was significantly associated with side effects such as fever, chills, shortness of breath, and persistent cough.

In this study, the most commonly reported post-BNT162b2-vaccination side effects were self-limiting and similar to those reported by the Pfizer/BioNTech Company. However, higher rates of allergic reactions were reported in this sample. Rare side effects, such as thrombocytopenia and myocarditis, were reported by 8 participants. COVID vaccines have been developed at an accelerated pace, and vaccine safety is a top priority; therefore, standard monitoring through a national adverse event reporting system is necessary for safety assurance. Continuous monitoring and long-term studies are required to ensure vaccine safety.

The online version contains supplementary material available at 10.1186/s12879-022-07974-3.

COVID-19, the infectious disease behind the coronavirus pandemic, is caused by different mutated types of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). This novel virus first appeared in December 2019 in China and later spread worldwide. Globally, as of May 21, 2022, there had been over 524 million confirmed cases of COVID-19, including over 6.27 million deaths. In Palestine, as of May 21, 2022, there had been approximately 657,456 reported cases of COVID-19 and a total of 5659 deaths, based on department of health figures.

Worldwide, mass efforts are in progress to develop COVID-19 vaccines and halt this pandemic by minimizing the spread and protecting human lives. COVID-19 vaccine safety is a top priority to ensure that benefits exceed the risk.
However, severe or rare adverse events may not be identified in phase 3 trials due to the limited sample size, inclusion criteria, and participants' characteristics, which may differ from the population receiving the immunization.

The State of Palestine received the first shipment of Pfizer vaccines on March 17, 2021, and as of May 21, 2022, a total of 3,720,221 vaccine doses had been received, resulting in 1,768,991 (35.5%) fully vaccinated people [12].

This study investigates the post-vaccination adverse events following Pfizer/BioNTech COVID-19 vaccine administration in Palestine to alleviate the incomplete clinical trial gap and to support the national strategic readiness and response plan. Furthermore, it aims to identify the occurrence, extent, and severity of adverse events among university staff, employees, and students at Birzeit University, and to compare the incidence of these side effects between the first, second, and third doses and with the data published by the Pfizer company. Finally, this study aimed to predict the post-vaccination side effects based on individual predisposing factors such as age, gender, smoking status, food/drug allergy, comorbidities, and COVID-19 infection before vaccination.

A questionnaire-based retrospective cross-sectional study was conducted at Birzeit University in Palestine, which started the vaccination of the university community in September 2021, from December 13, 2021, to March 29, 2022. The study included participants aged 18 years and older who had received at least one dose of the Pfizer/BioNTech COVID-19 vaccine. The questionnaire was distributed through the university website (Ritaj), social media platforms, and in-person interviews. In total, 1496 individuals responded; 375 were excluded because they refused to participate, had an incomplete response to the questionnaire, or had received other types of COVID-19 vaccines. The questionnaire was prepared in English, following a thorough literature review [15].
A pilot study was conducted to confirm questionnaire consistency among 32 vaccinated individuals, who were asked to complete the questionnaire and provide feedback on its clarity, relevance, and construction. These pilot responses were not included in the formal evaluation, and modifications were made to the final Arabic draft based on the participants' reviews.

The questionnaire included five sections with 27 questions formulated as open- and closed-ended multiple-choice questions, besides two short essay questions. The first section included nine questions concerning demographic information, such as gender, age, weight, height, employment, chronic diseases, allergic reactions, and smoking status. The second section included four questions about infection with SARS-CoV-2 before vaccination. The third section consisted of four questions on the COVID-19 vaccines, such as the type, the number of doses received, post-vaccination counseling, and allergic reactions after vaccination. The fourth section, "Pfizer-BioNTech side effects," consists of a list of 23 possible side effects divided into two categories, local or systemic adverse events, according to the World Health Organization's global manual on surveillance of adverse events following immunization.

The data were analyzed using IBM Statistical Package for the Social Sciences (SPSS) version 22. Descriptive statistics were used to analyze the data: frequencies and percentages were measured for categorical data, whereas means and standard deviations were measured for continuous data.
First, questions were recoded and categorized; height and weight were converted to BMI and then categorized based on the BMI classification: underweight (below 18.5), normal weight (18.5–24.9), overweight (25.0–29.9), and obese (≥30.0). Chi-square (χ2) or Fisher's exact tests were performed to investigate the association between participants' demographics and post-vaccination side effects for the first and second doses. Chi-square (χ2) or Fisher's exact tests were also applied to investigate the association between participants' demographics and the onset and duration of side effects. In addition, McNemar's tests were conducted to compare the incidence of each side effect between the first and second shots. All inferential tests were performed considering a confidence interval (CI) of 95% and a significance value of p < 0.05.

This study included 1137 participants from Birzeit University in Palestine. All participants received at least one dose of the Pfizer-BioNTech COVID-19 vaccine. The mean age of the participants was 21.163 years ± 5.361, with 63.2% older than 20 years. In addition, 66.8% were females, 91.8% were healthy without chronic diseases, 26.7% were smokers, and 14.4% had drug/food allergies. Allergic reactions, including angioedema, shortness of breath, coughing, and significant swelling of the tongue or lips, were reported by 144 (12.7%) participants, and some participants reported shoulder injury related to vaccine administration (SIRVA).
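The BMI binning and the paired dose-1 versus dose-2 comparison can be sketched as below. The BMI cut-offs come from the text, while the McNemar table counts are invented for illustration, and statsmodels is assumed in place of SPSS:

```python
from statsmodels.stats.contingency_tables import mcnemar

def bmi_category(weight_kg: float, height_m: float) -> str:
    """Categorize BMI with the cut-offs quoted in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

print(bmi_category(70, 1.75))  # normal (BMI ~22.9)

# McNemar's test on paired yes/no reports of one side effect.
# Rows: dose 1 (yes, no); columns: dose 2 (yes, no). Counts are illustrative.
table = [[80, 25],
         [60, 221]]
result = mcnemar(table, exact=False, correction=True)
print(f"statistic = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

McNemar's test uses only the discordant cells (here 25 and 60): a participant who reports the side effect after both doses carries no information about which dose is worse, which is why a paired test rather than an ordinary chi-square test is appropriate for the dose-1 versus dose-2 comparison.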
Moreover, the presence of comorbidities was significantly associated with headache, increase in blood pressure, increase in heart rate, shortness of breath, voice hoarseness, dizziness, diarrhea, abdominal pain, and myalgia. Participants infected with COVID-19 before vaccination were significantly associated with side effects such as fever, chills, headache, shortness of breath, a persistent cough, chest pain, abdominal pain, joint pain, menstrual cycle changes, voice hoarseness, and myalgia. There were significant associations between participants who suffered from drug/food allergy and the frequencies of all adverse events following the second vaccine shot, except tiredness and fatigue, joint pain, and swollen armpit glands. The reported differences are shown in the accompanying figure.

As information about the COVID-19 vaccine continues to evolve, and with the FDA approval for human use, post-marketing studies are necessary to ensure safety, efficacy, and appropriate use. Therefore, this study was conducted as a post-marketing survey of the Pfizer-BioNTech vaccine in Palestinian society. First, we investigated the incidence of side effects and reinfection rates following the administration of the Pfizer-BioNTech vaccine, then compared the incidence of these side effects between the first, second, and third doses. In addition, we predicted the post-vaccination side effects based on predisposing factors.

Adverse effects were reported in more than two-thirds of the study participants. Most of the side effects were experienced within 12 h of vaccination and persisted for 1–3 days. This finding is similar to a systematic review of Pfizer-BioNTech COVID-19 vaccine side effects and other COVID-19 vaccines, where participants suffered from post-vaccination adverse effects after the three doses, and to an Egyptian study where adverse events resolved a couple of days after onset [19–21].
A wide range of common side effects, including fever, chills, headache, fatigue, pain and swelling at the injection site, muscle pain, and joint pain, were reported by participants. In addition, some side effects were reported less commonly, such as increased blood pressure, increased heart rate, vomiting, diarrhea, swollen armpit glands, and swollen ankles and feet. These findings are consistent with those reported in the Pfizer-BioNTech factsheet by the Food and Drug Administration (FDA) and in many other studies [23].

The severity of most side effects reported after each of the three vaccine doses was mild to moderate and self-limiting, similar to the published results of the phase III Pfizer clinical trial, with the majority occurring after the second dose [23, 32]. The specific types and severity of side effects reported by participants differed slightly from the CDC, the FDA, or other studies. For example, a considerable percentage (13%) of the participants experienced at least one type of allergic reaction to non-IgE-activating factors after receiving the Pfizer-BioNTech vaccine. Therefore, healthcare providers and institutions are encouraged to follow the CDC's COVID-19 vaccination guidance, including pre-vaccination screening forms, enforcing the recommended 15-min post-vaccination observation periods, and having the necessary reserves available to handle severe allergic reactions.

The participants' demographic data revealed a significant percentage of female, young, non-smoking individuals of normal weight. These findings reflect the large percentage of student participation and the higher percentage of female versus male students at Birzeit University. Females experienced a higher incidence of side effects after receiving the Pfizer-BioNTech vaccine. These gender variations were also found with previous vaccines, as reported by a 2019 study on allergic reactions following the 2009 flu vaccine.
Many studies have addressed smoking, vaccinations, and the risk of COVID-19 infection. The CDC developed its recommendation on the necessity of vaccinating smokers based on smokers' high risk of COVID-19 infection. Smokers who participated in the study experienced a lower prevalence of post-vaccination side effects than non-smokers. This finding is supported by other studies in which nonsmokers who received the COVID-19 vaccine experienced a higher incidence of pain and swelling at the injection site after the first dose than smokers.

Elderly patients and patients with comorbid diseases were prioritized for vaccination in many vaccine protocols owing to their higher risk of COVID-19 infection and complications. Participants with comorbidities are assumed to have depleted responses to immunogens; thus, they are more susceptible to experiencing reduced side effects following any vaccination.

Regarding pre-vaccination COVID-19 infection, there was a significant association between previous COVID-19 infection and post-vaccination adverse effects. Participants who had COVID-19 infection before vaccination experienced more post-vaccination side effects. This result was consistent with a multinational study among Arab populations and an Italian study [32].
The CDC has developed training modules for healthcare workers (HCWs) delivering the Pfizer-BioNTech COVID-19 vaccine to ensure good HCW practice and professional vaccine administration and storage, as well as to provide HCWs with scientific data regarding vaccine safety and efficacy.

With increasing studies on the side effects of COVID-19 vaccines worldwide, our study is the first to evaluate the side effects of the Pfizer-BioNTech vaccine among Palestinians, with a large sample size (N = 1137) and a highly educated population.

As the study was conducted at a university, a higher percentage of responses were received from young people aged 18–23 years old (students) compared with older people aged >30 years old (university staff); the age groups were unevenly distributed, causing the participant proportions in different groups to be biased. Second, although only a small percentage of questionnaire responses were collected online via Google Forms, differences may have resulted from exposure, interpretation, or misclassification of side effects. Third, this was a self-reported study based on participants' perception of adverse events, which were not clinically evaluated or confirmed and could be related to factors other than the vaccine; therefore, this study was unable to make a causality assessment of serious events as recommended by the WHO. Furthermore, uncovering severe side effects and establishing a direct causal relationship will require further research. Therefore, further studies are recommended to cover the entire country, including the occupied Palestinian territories, to confirm the initial results of this study.

COVID vaccines have been developed at an accelerated pace, and vaccine safety is a top priority. In this study, the most commonly reported post-BNT162b2-vaccination side effects were self-limiting and similar to those reported by the Pfizer/BioNTech Company.
However, higher rates of allergic reactions were reported in this sample. In addition, rare side effects, such as thrombocytopenia and myocarditis, were reported by 8 participants. Therefore, standard monitoring through a national adverse event reporting system is necessary for safety assurance; continuous monitoring and long-term studies are required to ensure long-term vaccine safety.

Additional file 1. Study Questionnaire.

Background: There are limited studies that have assessed COVID-19 vaccine acceptance and side effects, both globally and in the western region of Saudi Arabia (SA). Objective: This study assessed the acceptance of vaccination against COVID-19, determined motivators and barriers for taking these vaccines, and assessed vaccine side effects in the western region of SA. Most of the participants were from Taif city, and 57.6% (n = 654) were unmarried. Pfizer was the most frequently administered vaccine. Most participants explained that their vaccine administration protected themselves and their families. Regarding acceptance, 55% (n = 626) of the participants had either very high or high confidence in the efficacy of the COVID-19 vaccines, while 14.7% (n = 167) had low/very low confidence in its efficacy. Regarding side effects, 80.8% (n = 918) of the participants indicated that they did not have any difficulties attributed to COVID-19 vaccine administration. Positive attitudes and practices were apparent, and most of the participants tended to be actors in the fight against COVID-19. Conclusions: The current study showed a high level of acceptance of COVID-19 vaccination among people living in the western region of SA. Health education and communication from authoritative sources will be important to alleviate public concerns about COVID-19 vaccine safety.
Study design: The study was an online cross-sectional study conducted among people living in the western region of SA during the period from December 2021 to March 2022. Participation was voluntary for individuals who were above 18 years of age and lived in the western region of SA; children and those living in other countries were excluded from the study. Methods: The study tool was a self-administered questionnaire which assessed COVID-19 vaccine acceptance, determined motivators and barriers for taking the vaccines, and assessed their side effects among 1136 participants in the western region of SA. The gathered data were analyzed with SPSS version 22. Result: A total of 1136 individuals, aged 18 years and above, participated in the study.

The first case of COVID-19 was reported in Wuhan, China, in December 2019. To date, there have been 623,121,528 confirmed cases of COVID-19, including 6,549,730 deaths, as reported by the World Health Organization (WHO). In Saudi Arabia (SA), the first confirmed case of COVID-19 was announced on 2 March 2020. Up to 1 October 2022, there had been 801,600 confirmed cases of COVID-19 infection and 9352 deaths in SA reported to the WHO.

The governments around the world have implemented various strict control measures for the COVID-19 pandemic. The authorities in SA adhered to strict measures, e.g., face masks, social distancing, partial and comprehensive closures, and the closure of schools and all business sectors. Although the impact is negative on the economic level, such measures have helped to flatten the epidemic curve. Nevertheless, the re-emergence and spread of COVID-19 (as well as the Delta and Omicron variants) have been reported. Therefore, there is an urgent need for long-term preventive measures.
Few countries have sought to achieve herd immunity, which is defined as a level of immunity in a population that prevents outbreaks of disease through natural infection; however, such an approach has been deemed unethical.

COVID-19 is a global threat due to its devastating effects on the world economy and healthcare systems. In addition, there is no approved treatment for this dangerous, highly infectious disease. This mandates the application of strict preventive measures and preventive vaccination campaigns. This is of specific importance within Middle Eastern countries during these difficult times of crises and political conflicts, where individuals usually fear financial difficulties, infection, isolation, lockdown, and death, creating a state of psychological, behavioral, and physical distress among the population.

The current study was a cross-sectional survey conducted among people living in the western region of SA who were conveniently invited to participate. The study was conducted during the period from December 2021 to March 2022. Adults above the age of 18 who agreed to take part in the study were included; participants under the age of 18 were not permitted. Participants were sent a link to the study survey, and participation was voluntary. The link was distributed to candidate participants on social media, including WhatsApp, Twitter, Telegram, and Facebook. Once a participant clicked on the study link, they were informed about the study on the first page. They were informed that their participation in the study was voluntary and that they could exit the survey if they needed to. Participants who were above 18 and lived in the western region of SA were included in the study, while children and those living in other regions were excluded.
The study protocol was approved by the ethics committee at Taif University, with approval number 43-312.

Based on the available statistics, we expected that about 5,000,000 people in the western region of SA had received at least one dose of the COVID-19 vaccine [7]. The study tool was a self-administered questionnaire designed after consulting previously published studies, some of which had validated questionnaires [12,13,14]. The total acceptance rate was determined by taking the mean of the acceptance responses to the relevant questions.

The questionnaire was examined for face and content validity by a group of four researchers from Taif University's College of Pharmacy. They were asked to assess clarity, consistency, and suitability for the regional settings, and their suggestions were incorporated into the final version of the questionnaire. Additionally, 25 volunteers took part in a field test as a pilot sample to validate the questionnaire, and their data were excluded from the study.

Although all variables were intended to be analyzed individually, and there was no intention to compute a scoring instrument, we checked the reliability of the items assessing residents' acceptance of the COVID-19 vaccination using the pilot sample data, and obtained a Cronbach's Alpha of 0.873. For general interest and for the purpose of comparison, we tested the reliability of the items representing the motivators and barriers, and obtained Cronbach's Alpha values of 0.341 and 0.466, respectively. Practically, such items are not expected to be internally consistent, because there would be variability between respondents in the variables representing the motivators and barriers.

Statistical analysis was performed using the Statistical Package for Social Sciences (SPSS) version 22. A p-value of <0.05 was considered significant.
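For reference, Cronbach's alpha (reported above as 0.873 for the acceptance items) can be computed directly from an items matrix as follows; the five-respondent Likert matrix is a toy example, not the pilot data:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy example: three reasonably consistent Likert items
data = [
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [1, 2, 1],
    [3, 3, 3],
]
print(round(cronbach_alpha(data), 3))  # 0.975
```

The low alphas the authors report for the motivator and barrier items (0.341 and 0.466) are exactly what this formula produces when item variances are large relative to the variance of the total score, i.e., when respondents do not answer the items consistently, which is why those items are analyzed individually rather than as a scale.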
Descriptive statistics were generated for the responses, and correlation coefficients were used to describe relationships between continuous variables. For independent variables, the Chi-square test was used to compare categorical variables.

A total of 1136 participants successfully filled out the online questionnaire, and the responses were saved on a Google Drive in a password-protected manner. More than half of the participants (n = 669; 58.9%) were in the age range of 18–30 years (mean age: 34.5, SD: 9.8). Additionally, less than half of the participants were married, and most of them had a college degree. Male and female participants were equally distributed. Due to the small number of participants over 60 years of age (n = 10), they were merged with the group aged 51–60 years; both groups were included in a single group (>50 years) and were analyzed as such. Participants who did not work at the time of the study represented 62.0% (n = 704), and those who worked in the private sector comprised 9.9% (n = 112). Participants who had a family member working in the health sector represented around 40.1% (n = 455), and those who had seen or heard news and information about COVID-19 vaccination from social media made up 47% (n = 1047), in comparison with the 30% (n = 670) who heard about the pandemic from local television. The baseline demographic characteristics and other details of the participants' demographic data are shown in the corresponding table.

Most of the participants had received the vaccine (n = 1054; 92.8%). The Pfizer vaccine was the most frequently taken vaccine, followed by AstraZeneca. Most people declared that the reason behind taking the vaccine was to protect themselves and their families, followed by obtaining services that were restricted to those who had received the vaccine.
Interestingly, most of the participants stated that they personally knew someone who had had COVID-19 infection or died from COVID-19 infection (Most of the participants had received the vaccine (nfection .n = 889) of participants wanted to be actors in the fight against COVID-19, and this was associated with residency area and marital status (p = 0.007 each). In addition, 63.4% (n = 720) of the participants wanted their children to resume school as soon as possible; this was associated with age (p < 0.001), residency (p = 0.004), marital status (p < 0.001), and employment (p < 0.001). In addition, 72.2% (n = 820) of the participants stated that they took the vaccine to avoid infection with COVID-19. Moreover, 95.9% (n = 1089) of the participants did not want to transmit COVID-19 infection to others. Furthermore, 39.8% (n = 452) of the participants stated that they took the vaccine because it was free, and this was associated with gender (p = 0.005), age (p < 0.001), educational level (p = 0.019), marital status (p < 0.001), and employment (p = 0.002).n = 576) did not take vaccine because it was mandatory for traveling abroad, which correlated with gender (p = 0.019), educational level (p = 0.037), and employment (p = 0.003). In addition, one third of the participants stated that their doctor\u2019s recommendation was an important factor in vaccination decision-making, and this correlated with age (p = 0.008) and education level (p = 0.004). In addition, 58.9% (n = 669) of the participants stated that they were afraid of adverse effects of the vaccine, which correlated with gender (p < 0.001). Moreover, 59.4% (n = 675) of the participants indicated that vaccine convenience was an important factor in vaccination decision-making, which correlated with residency (p = 0.020) and marital status (p = 0.002). 
Approximately one half of the participants indicated that they did not really understand how the COVID-19 vaccine worked, and this was associated with age (p = 0.011), education level (p = 0.003), marital status (p = 0.019), and the presence of a family member working in the health sector (p = 0.018). Moreover, 41% (n = 466) of the participants did not think that the development of the COVID-19 vaccines was too fast, which can be a barrier to vaccination, and this correlated with marital status (p = 0.009). More than half of the participants had very high or high confidence in the efficacy of the COVID-19 vaccines, while 14.7% (n = 167) of them had low/very low confidence in its efficacy. Confidence in the efficacy of the COVID-19 vaccines was associated with the gender of the participants (p < 0.001), their educational level (p = 0.001), and their marital status (p = 0.003). Additionally, 68.8% (n = 989) of the participants believed (to a very high/high degree) the COVID-19 vaccine to be important for the health of their family members, friends, and communities, while 11.5% (n = 130) thought otherwise, which was significantly associated with age (p < 0.001) and employment (p = 0.045). In addition, 58.3% (n = 662) of the participants rated their level of knowledge about vaccination as very high or high, while only 13.9% (n = 158) of them rated their knowledge level as low or very low. Moreover, 72.7% (n = 829) of the participants encouraged their family members to get the COVID-19 vaccine, while only 9.5% (n = 107) of them strongly disagreed/disagreed with this issue; this was significantly associated with gender (p = 0.003) and age (p < 0.001). Interestingly, regarding mandatory vaccination, 561 of the participants declared that they would take the COVID-19 vaccine even if it was not mandatory, while 34.8% (n = 395) would not take it; this was significantly associated with gender (p = 0.005), age (p = 0.008), and employment (p = 0.035). 
Furthermore, 59.1% (n = 671) of participants intended to take the third dose, while 14.3% (n = 162) declared that they would not take it. Only 42.3% (n = 480) of the participants believed that the COVID-19 vaccination should be mandatory for the public, while 39.7% (n = 451) of them did not; this was significantly associated with gender (p < 0.001), age (p = 0.021), and employment status (p = 0.013). Importantly, only 23% (n = 261) of the participants had received vaccination against influenza in the past or in the current season, while 74.1% (n = 842) had not. Furthermore, 15.1% (n = 171) of the participants had refused to take certain vaccines in the past, while 80.3% (n = 912) of them had not. In addition, 47.9% (n = 544) of the participants preferred to wait until they had more information about these new COVID-19 vaccines, while 28.2% (n = 320) did not. The overall vaccination acceptance rate among our participants was 53.2%. A total of 918 participants (55%) reported that they did not have any difficulties attributed to COVID-19 vaccine administration. Most of the participants had pain at the injection site with the first, second, and third doses. Fever was also reported by 44.9% (n = 510), 36.8% (n = 418), and 9.0% (n = 102) of the participants after taking the first, second, and third doses of the vaccine, respectively. Headache was reported by 22.8% (n = 259), 52.9% (n = 601), and 3.8% (n = 43) after the first, second, and third doses of the COVID-19 vaccine, respectively. Lethargy and fatigue were reported by 55.2% (n = 627), 39.6% (n = 450), and 10.7% (n = 122) after the first, second, and third doses of the vaccine, respectively [16,17]. Approximately 47% of our participants had seen or heard news and information about COVID-19 vaccination from social media, and 30% from local TV news. 
In this regard, a study conducted among the population in Ethiopia found that 33.7% of participants obtained their information about COVID-19 from mass media, 32.9% from the Internet, and 31.8% from social media. The majority of our participants (74.6%) personally knew someone who had either contracted or died from COVID-19 infection. In this regard, a recent study found that 75.5% and 89.8% of medical and dental students, respectively, personally knew someone who had contracted or died from COVID-19 infection. The confidence level among our participants concerning the COVID-19 vaccine efficacy was average (55.5%), and this is similar to a study conducted among university students in France regarding conventional vaccines. Another finding in our study was that 72.7% of participants agreed to encourage their family members to get the COVID-19 vaccine. A study performed in Ethiopia found that only 50.0% of participants would encourage their family/friends/relatives to receive the vaccine. About 42.3% of our participants supported and agreed on mandating COVID-19 vaccination of the public, whereas a study in Turkey reported a different figure. Only 23% of our participants had received the vaccination against influenza in the past or the current season. In this regard, a study performed in China found that only 14.6% of the participants had received vaccination against influenza in the past season. Almost 93.6% of our participants wanted to return to normal life as soon as possible, similarly to a report from France (85%). In addition, a doctor\u2019s recommendation was an important factor in vaccination decision-making for only 33.3% of our participants, while this figure was 80% in China. Regarding our study participants\u2019 motivators and barriers concerning COVID-19 and its vaccine, we found that vaccine convenience was an important factor in vaccination decision-making for 59.4% of them. 
In this regard, a study performed in China found that vaccine convenience was an important factor in vaccination decision-making among 75.7% of the participants [15], which was higher than our findings. In addition, 51.6% of our participants did not really understand how the COVID-19 vaccine works, while this number was only 10% in France. Pain at the site of injection was the most common adverse effect experienced by our participants after taking the first, second, and third doses of the vaccine, at frequencies of 71.1%, 66.2%, and 15.3%, respectively. Furthermore, lethargy and fatigue were the second most common adverse effect after the first and third doses, at frequencies of 55.2% and 10.7%, respectively. On the other hand, headache was the second most common side effect after the second dose (52.9%). In addition, fever was the third most common side effect after taking the first and third doses of the vaccine, at frequencies of 44.9% and 10.7%, respectively. Moreover, lethargy and fatigue were the third most common side effect after the second dose (39.8%). Finally, fever was reported after the second dose by 36.8% of participants. This study has several strengths, including the large number of participants. Additionally, there is a scarcity or near lack of reports of such data from the population in the western region of SA. In addition, although our study is a common observational cross-sectional study, such studies represent the basis for preliminary information which is useful for policy implementation, as well as an indication of how well a policy would succeed. Moreover, measuring vaccine acceptance is essential for predicting vaccine campaign success in view of the hesitancy associated with the novel COVID vaccine platforms. On the other hand, this is a cross-sectional study which is exploratory in nature, and it was conducted at a specific time point. 
Although this method has been widely used and accepted in the published literature, the use of an online survey has some limitations. Participants may have refused to participate, or may have exaggerated or understated their self-reported vaccine-related adverse events. The online nature of the survey may also have limited the participation of older or illiterate individuals, or those who have no internet or social media access. This could lead to selection bias. Therefore, the data should be interpreted with caution. In addition, it is difficult to estimate response rates among the studied population when using online surveys. Another limitation was that a larger percentage of the respondents were from a single geographic area, which may limit the generalizability of the survey results. However, this would not affect the generalizability within the western region of Saudi Arabia, because the general demographic features of the population in the western region are consistent and homogeneous, since the same tribes and families living in the region are extended across the governorates of Makkah and Madeenah, as well as the biggest cities of Makkah, Jeddah, Taif, and Madeenah. The current study showed a high level of acceptance of COVID-19 vaccination among people living in the western region of SA. Health education and communication from authoritative sources are important for alleviating public concerns about COVID-19 vaccine safety. The study was conducted only in the western region of KSA, and so the results may not represent the four districts of the kingdom. Future studies among residents of the four districts of Saudi Arabia are essential."} +{"text": "Multiple vaccines have been tested in clinical trials for their efficacy and safety. In Saudi Arabia, the Pfizer\u2013BioNTech and Moderna vaccines were approved for children; however, previous studies reporting their safety profiles are limited. 
This research aims to understand the side effects of children's vaccination against SARS-CoV-2 infection in Saudi Arabia. This observational retrospective cross-sectional study was conducted using an online survey in Saudi Arabia from March to May 2022. The inclusion criteria were parents aged 18\u00a0years and above who live in Saudi Arabia and have vaccinated their children. The self-reported questionnaire was adopted from published studies to investigate the study objectives. Descriptive statistics were used to describe patients\u2019 demographic characteristics, continuous data were reported as mean\u2009\u00b1\u2009S.D., categorical data were reported as percentages (frequencies), and logistic regression was used to identify predictors of persistent post-COVID-19 symptoms. This study had a total of 4,069 participants. Only 41.9% of the participants reported that their child(ren) had been infected with the coronavirus. The median number of children was 2.00 (IQR: 1.00\u20134.00). More than half of the study participants (64.2%) reported that a family member had been infected with the coronavirus. Both parents had received COVID-19 vaccination, according to most participants (88.7%). Most participants (70.5%) stated that all children who met the vaccination criteria had received the vaccine. Most participants (83.5%) said their child or children had received two doses of their vaccine, and about half (50.4%) of those who received the vaccine reported experiencing side effects. In addition, the majority (78.9%) reported that the side effects appeared within one day of receiving the vaccine, and nearly two-thirds (65.7%) reported that the side effects lasted between one and three days. A total of 11,831 side effect cases were documented. 
Pain at the injection site, hyperthermia, and fatigue were the most reported side effects, accounting for 15.3%, 14.1%, and 13.2%, respectively. It appears that the side effects of the COVID-19 vaccine for children are minor, tolerable, and similar to those described previously in clinical trials. Our data should reassure the public about the safety of receiving the COVID-19 vaccine for children. On 11 March 2020, the Coronavirus Disease 2019 (COVID-19) pandemic was declared. Since COVID-19 was declared a pandemic, vaccination has brought hope of controlling this condition. There are different COVID-19 vaccines approved by the WHO, such as the Pfizer/BioNTech and AstraZeneca/AZD1222 vaccines [8]. These vaccines underwent a safety evaluation to observe any adverse reaction following the injection of either dose in adults. In most clinical trials, injection-site pain was the most frequently reported local adverse reaction. In addition, moderate to mild fever, headache, and fatigue were frequently reported as adverse systemic reactions. A relatively small number of patients experienced a severe systemic reaction [16]. This was an observational retrospective cross-sectional study using an online survey conducted in Saudi Arabia between March and May 2022. Parents aged 18\u00a0years and above and living in Saudi Arabia were eligible to complete the survey. This study used a convenience sample to recruit the study population. The study was conducted among the general population of Saudi Arabia, including all the geographic regions of Saudi Arabia, from March to May 2022, using an online questionnaire. The questionnaire was formulated in Arabic and distributed through social media platforms (such as WhatsApp, Twitter, and Snapchat). The study sample was invited using a survey link. The inclusion criteria were parents aged 18\u00a0years and above who live in Saudi Arabia (Saudis and non-Saudis) and have vaccinated their children. 
The survey link was re-posted once weekly to increase the response rate and make it reachable to the general population. The cover letter clearly stated the study's aims and objectives. The questionnaire was originally prepared in English. The original questionnaire was translated into Arabic using the forward and backward translation technique. Two professional clinicians and academics assessed the Arabic version of the questionnaire and affirmed that participants would have no trouble understanding it. The Arabic version of the questionnaire was then administered to 30 participants in Saudi Arabia who met the inclusion criteria for the study. Participants were asked about the clarity and readability of the questionnaire, as well as whether any questions were difficult to understand. Participants were also asked whether any of the questions were offensive or unpleasant. Participants reported that the questionnaire was straightforward to comprehend and complete. The self-reported questionnaire was adopted from published studies to investigate the study objectives [30]. The target sample size was determined in accordance with WHO recommendations for the minimum sample size required for a prevalence study. Descriptive statistics were used to describe patients\u2019 demographic characteristics, continuous data were reported as mean\u2009\u00b1\u2009S.D., categorical data were reported as percentages (frequencies), and logistic regression was used to identify predictors of persistent post-COVID-19 symptoms. For the logistic regression, the independent variables were the presence of allergy, chronic conditions, or the smoking status of the parents, and the dependent variable was defined as a patient who had persistent post-COVID-19 symptoms for more than 4\u00a0weeks. A two-sided p\u2009<\u20090.05 was considered statistically significant. The statistical analyses were carried out using SPSS (version 27). This study had a total of 4,069 participants. 
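The regression just described, with binary predictors (parental smoking, child allergy, chronic conditions) and a binary outcome (persistent symptoms beyond 4 weeks), can be sketched outside SPSS. This is a hedged illustration on simulated data; the effect sizes and predictor coding below are invented for the example, not taken from the study.

```python
import numpy as np

def fit_logistic(X: np.ndarray, y: np.ndarray, lr: float = 0.1, steps: int = 5000) -> np.ndarray:
    """Fit binary logistic regression by gradient ascent on the log-likelihood.

    X is an (n, p) predictor matrix; an intercept column is prepended here.
    Returns [intercept, b1, ..., bp]; exp(b) is the odds ratio per predictor.
    """
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(0)
n = 1000
# Hypothetical 0/1 predictors: parental smoking, child allergy, comorbidity.
X = rng.integers(0, 2, size=(n, 3)).astype(float)
# Simulated outcome: each factor raises the log-odds of persistent symptoms.
logits = -2.0 + X @ np.array([0.8, 0.8, 0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

coef = fit_logistic(X, y)
print(np.exp(coef[1:]))  # odds ratios; values > 1 suggest risk factors
```

An odds ratio above 1 for a predictor is what the paper's wording "risk factor" corresponds to; in practice one would also report confidence intervals and p-values, which SPSS produces alongside the coefficients.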
The majority of them (92.8%) were Saudis. Around half of the participants (52.7%) reported living in the western region. Around half of the parents who took part had a bachelor's degree (fathers 48.4%\u00a0and mothers 50.4%). The average monthly income of one-third of the research participants (35.4%) was between 10,000 and 20,000 SAR (Table). When parents were asked how many children they had, the median answer was three (IQR: 2.00\u20135.00). Around 14.7%\u00a0of the participants said their children have chronic conditions, the most common of which were asthma and type 1 diabetes mellitus (T1DM), at 5.3%\u00a0and 2.3%, respectively (Table). Only 41.9%\u00a0of the participants reported that\u00a0their child(ren) had been infected with the coronavirus. The median number of infected children was 2.00 (IQR: 1.00\u20134.00). Most children (89.2%) had been infected with the disease for at least six months. At least one of their children had an allergy, according to one-quarter of the survey participants (25.0%). Most children with allergies (78.2%) were allergic to food. Around half of them (48.3%) said their allergic child(ren) had had an allergic reaction in the previous month (Table). More than half of the study participants (64.2%) reported that a\u00a0family member had been infected with the coronavirus. Both parents had received COVID-19 vaccination, according to most participants (88.7%). Most participants (70.5%) stated that all children who met the vaccination criteria had received the vaccine. Most participants (83.5%) said their child or children had received two doses of their vaccine, and about half (50.4%) of those who received the vaccine reported experiencing side effects. 
In addition, the majority (78.9%) reported that the side effects appeared within one day of receiving the vaccine, and nearly two-thirds (65.7%) reported that the side effects lasted between one and three days (Table). When we asked parents to report side effects encountered by their children, a total of 11,831 side effect cases were documented. Pain at the injection site, hyperthermia, and feeling tired were the most reported side effects, accounting for 15.3%, 14.1%, and 13.2%, respectively (Fig.). Binary logistic regression analysis showed that the smoking status of the parents, having an allergy, and having other comorbidities were risk factors for having persistent post-COVID-19 symptoms (p\u2009\u2264\u20090.05) (Table). The present study showed that most parents and their children had received the COVID-19 vaccine, and COVID-19 vaccine hesitancy was low. Moore et al. found a low rate of vaccination hesitancy among Brazilians. In our study, more than half of the people reported that their children had adverse effects from their vaccination and that the side effects lasted one to three days. The Centers for Disease Control and Prevention reported that children and teenagers may experience some adverse effects after receiving the COVID-19 vaccination, which may interfere with their ability to do daily activities, but that these side effects should subside within a few days. The most reported side effects in our study were pain at the injection site, fever, and tiredness. A systematic review of the safety, immunogenicity, and efficacy of COVID-19 vaccines in children and adolescents showed that COVID-19 vaccines had good safety profiles in children and adolescents and that injection site pain, fatigue, headache, and chest pain were the most common adverse events. 
According to the CDC's Vaccine Adverse Event Reporting System, more than 90% of post-vaccination adverse event reports among children and young people were not for significant symptoms and included dizziness, fainting, nausea, headache, and fever. In our study, most vaccine side effects (~\u200980%) tended to occur on the first day of vaccination and to resolve within 1\u20132\u00a0days. In contrast, long-lasting side effects were noticed in a minimal proportion of our population (~\u20094%). Compared to our study, Kaur et al. documented in their systematic review that most COVID-19 vaccine side effects are acute and usually resolve in 3\u20134\u00a0days. The confidence and trust of the public in vaccines and medications are usually built on high-quality research and ethical, scientific, and professional standards. There are certain limitations to our research. The first limitation is that, because the present study included a self-administered survey, recall bias may affect the replies of the participants. The second limitation is that the participants were not limited to one response per person, which could lead to an overestimation or underestimation of the presence of side effects. The third limitation is that the study's findings were based on survey data, which means that, like any other cross-sectional study, the results cannot be used to infer causality. This research contributes to understanding the side effects of children's vaccination against SARS-CoV-2 infection in Saudi Arabia. In this report, the most prevalent side effects were pain at the injection site, hyperthermia, and tiredness. These side effects are minor, tolerable, and similar to those described in clinical trials, demonstrating that COVID-19 vaccinations have safe profiles. 
Further studies with larger populations are necessary to evaluate the safety of COVID-19 vaccinations."} +{"text": "Video-assisted surgery has become an increasingly used surgical technique in patients undergoing major thoracic and abdominal surgery and is associated with significant perioperative respiratory and cardiovascular changes. The aim of this study was to investigate the effect of intraoperative pneumoperitoneum during video-assisted surgery on respiratory physiology in patients undergoing robotic-assisted surgery compared to patients undergoing classic laparoscopy in Trendelenburg position. Twenty-five patients undergoing robotic-assisted surgery (RAS) were compared with twenty patients undergoing classic laparoscopy (LAS). Intraoperative ventilatory parameters (lung compliance and plateau airway pressure) were recorded at five specific timepoints: after induction of anesthesia, after carbon dioxide (CO2) insufflation, at one-hour and two-hour intervals during surgery, and at the end of surgery. At the same time, arterial and end-tidal CO2 values were noted and the arterial to end-tidal CO2 gradient was calculated. Significant changes in lung compliance were also observed between groups at one-hour and two-hour intervals and at the end of surgery. At the end of surgery, plateau pressures remained higher than preoperative values in both groups, but lung compliance remained significantly lower than preoperative values only in patients undergoing RAS, with a mean 24% change compared to a 1.7% change in the LAS group (p\u2009=\u20090.01). 
We observed a statistically significant difference in plateau pressure between RAS and LAS at one hour (26.2\u2009\u00b1\u20094.5 cmH2O) and at two hours into surgery. We also noted a more significant arterial to end-tidal CO2 gradient in the RAS group compared to the LAS group at one-hour and two-hour intervals, as well as at the end of surgery. Video-assisted surgery is associated with significant changes in lung mechanics after induction of pneumoperitoneum. The observed changes are more severe and longer-lasting in patients undergoing robotic-assisted surgery compared to classic laparoscopy. The online version contains supplementary material available at 10.1186/s12871-022-01900-5. Video-assisted surgery (VAS) has become extensively used worldwide in cardiothoracic and major abdominal surgery, including gynecological and urological procedures [1, 2]. Pneumoperitoneum (CO2 insufflation) during VAS is associated with an increase in mean arterial pressure and systemic vascular resistance and a decrease in cardiac output. Results were recorded as absolute values. Haemodynamic variables, mean arterial blood pressure measured invasively (MAP) and heart rate (HR), were recorded at the same time points. Patient age, sex, height, and weight were collected by the attending anesthesiologist before surgery. Arterial blood samples and ventilatory parameters were obtained at five specific time points: after induction of anesthesia (T0), after induction of pneumoperitoneum (T1), one hour into surgery (T2), two hours into surgery (T3), and at the end of surgery (T4). Arterial blood gas analysis was performed on an ABL 800 Radiometer. Lung compliance (Lc) was defined as pulmonary compliance during periods without gas flow, such as during an inspiratory pause. 
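The two derived quantities used throughout the paper can be written down explicitly. The text does not print the formulas, so the definitions below are the standard bedside ones (static compliance from an inspiratory hold, and the arterial to end-tidal CO2 difference); the example numbers are illustrative, not study data.

```python
def static_compliance(tidal_volume_ml: float, pplat_cmh2o: float, peep_cmh2o: float) -> float:
    """Static lung compliance in mL/cmH2O, measured during an inspiratory
    pause (no gas flow), as Lc = Vt / (Pplat - PEEP)."""
    return tidal_volume_ml / (pplat_cmh2o - peep_cmh2o)

def co2_gradient(paco2_mmhg: float, etco2_mmhg: float) -> float:
    """Arterial to end-tidal CO2 gradient in mmHg; it widens with
    ventilation-perfusion mismatch and increased dead space."""
    return paco2_mmhg - etco2_mmhg

# Illustrative values broadly in the range reported after pneumoperitoneum
# (assumed Vt 500 mL, Pplat 25 cmH2O, PEEP 5 cmH2O):
print(static_compliance(500, 25, 5))  # -> 25.0 mL/cmH2O
print(co2_gradient(45.0, 38.0))       # -> 7.0 mmHg
```

Both formulas make clear why a higher Pplat at constant tidal volume and PEEP necessarily implies a lower static compliance, which is the pattern the study reports after CO2 insufflation.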
The following ventilatory parameters were recorded at the five timepoints: plateau airway pressure (Pplat), Lc after performing an inspiratory hold maneuver, and end-tidal CO2. A P value\u2009\u2264\u20090.05 was considered statistically significant. In order to detect a clinically significant 15% change in lung compliance and airway pressure, based on mean variables cited in the literature, 24 patients were included in the RAS group in order to obtain 75% statistical power. The 15% change was based on previously published data from our study group, as well as that demonstrated by other studies [12]. Twenty-five patients were included in the RAS group and twenty patients in the LAS group. No statistically significant differences regarding preoperative variables were identified between the two groups (Table). In the RAS group, Pplat significantly increased to 25.0\u2009\u00b1\u20095.3 cmH2O after induction of pneumoperitoneum compared to 15.1\u2009\u00b1\u20093.4 cmH2O postinduction of anesthesia (p\u2009=\u20090.05), but did not change compared to this level at one hour and two hours into surgery or at the end of surgery. Pplat remained significantly higher at the end of surgery compared to postinduction of anesthesia (p\u2009=\u20090.05). In the LAS group, Pplat increased to 22.2\u2009\u00b1\u20094.6 cmH2O after induction of pneumoperitoneum compared to 14.9\u2009\u00b1\u20092.7 cmH2O postinduction of anesthesia (p\u2009=\u20090.01), but did not change intraoperatively at one hour and two hours into surgery or at the end of surgery. Pplat remained significantly higher at the end of surgery compared to postinduction of anesthesia (p\u2009=\u20090.02)\u2014Fig. 
In the RAS group, Lc decreased significantly to 26.4\u2009\u00b1\u20096.4\u00a0mL/cmH2O after insufflation of pneumoperitoneum compared to 48.1\u2009\u00b1\u20098.8\u00a0mL/cmH2O postinduction of anesthesia (p\u2009=\u20090.03), but did not significantly change compared to this level at one hour or two hours into surgery or between the second hour and the end of surgery. Lc remained statistically significantly lower at the end of anesthesia compared to postinduction of anesthesia (p\u2009=\u20090.04). In the LAS group, Lc decreased significantly to 33.0\u2009\u00b1\u20097.2\u00a0mL/cmH2O after induction of pneumoperitoneum compared to 58.0\u2009\u00b1\u20099.8\u00a0mL/cmH2O postinduction of anesthesia (p\u2009=\u20090.02), then significantly increased at one hour into surgery compared to postinduction of pneumoperitoneum and remained constant at two hours into surgery and at the end of surgery. There was no statistical difference in Lc at the end of surgery compared to postinduction of anesthesia (p\u2009=\u20090.23)\u2014Fig. We observed a non-significant difference between the RAS and LAS groups in \u0394CO2 after induction of anesthesia and after induction of pneumoperitoneum, and a statistically significant difference at one hour and two hours into the surgery and at the end of surgery; data are presented in Table. Additional data during the same time points are presented in the supplementary table. Our results show that induction of pneumoperitoneum is associated with an increase in plateau pressure and a decrease in lung compliance in patients undergoing VAS in Trendelenburg position, independent of surgical technique. Our results are in accordance with previously published data in patients undergoing pelviscopic surgery. 
How2O had a fivefold greater incidence of postoperative respiratory complications, longer postanesthesia care unit stays, greater alveolar dead space-to-tidal volume ratios and a lower arterial partial pressure of oxygen. We consider that anesthetic strategies aimed at lowering airway pressure below this threshold are important to improve both intraoperative respiratory function and to decrease the incidence of postoperative complications. In another study, Sroussi et al. [In a study by Choi et al. , patienti et al. showed tThe changes observed in lung compliance were more long-lasting during RAS. In this group we observed that Lc decreased after induction of pneumoperitoneum, remained low throughout surgery, and did not return to preoperative values at the end of surgery. By comparison, in the LAS group Lc decreased after induction of pneumoperitoneum, gradually increased during surgery and there was no statistically significant difference between preoperative and end-of surgery values. The decrease in Lc is mostly due to basal atelectasis, decrease in functional residual capacity and a decrease in diaphragmatic excursion during pneumoperitoneum . ApplicaAlthough in our study there was no significant difference in terms of length of surgery, this may represent a reason for the persistence of decreased Lc and increased Pplat at the end of surgery.The observed differences in both Lc and Pplat between RAS and LAS were statistically significant with better lung mechanics in patients undergoing classic laparoscopy. Two main reasons can be responsible for the observed changes. The first would be a much higher insufflation pressure to maintain pneumoperitoneum during surgery. However, no difference between intraabdominal pressure was observed between the two groups. (12\u201314\u00a0mmHg). The second reason would be a steeper Trendelenburg position applied during RAS to improve surgical access . In a st2O alongside applying a PEEP of 5 cmH2O. 
However, due to the low number of patients, we cannot assess if this is sufficient to decrease the incidence of postoperative pulmonary complications. One of the most important aspects of any observed physiological change during anesthesia is its impact on patient outcome. In a recently published systematic review, Katayama et al. found no clear effect on outcome. The use of recruitment maneuvers has also been investigated in a meta-analysis published by Pei et al. Their results showed that applying PEEP resulted in a better ventilation profile and favorable physiologic effects during RAS prostatectomy; however, this did not improve postoperative lung function. The mode of mechanical ventilation may also represent a key factor in lung mechanics during VAS. When comparing pressure-controlled ventilation to volume-controlled ventilation, pressure control was associated with higher Lc and lower peak airway pressure, but did not have any overall advantage in terms of respiratory mechanics and hemodynamics. The induction of pneumoperitoneum was associated with an increase in the arterial to end-tidal CO2 difference. Absorption of CO2 during surgery and increased ventilation-perfusion mismatch are responsible for the higher CO2 gradient, as Kamine et al. also showed. Although abdominal pressure was identical between the two groups, we observed that patients in the RAS group had both a higher CO2 gradient and a decreased lung compliance. This may be related to increased atelectasis and a higher shunt fraction, which is responsible for the difference between arterial and end-tidal CO2, thus making \u0394CO2 a useful marker in the assessment of ventilation-perfusion mismatch. The present study has some limitations. 
First, this was an observational, retrospective, single-center study, and all patients received the same ventilatory strategy independent of the video-assisted technique used; the observed difference in ventilatory mechanics may therefore be minimized by a more personalized approach to ventilation and positive end-expiratory pressure titration. The authors are aware that these strategies can vary between centers, and our remarks may apply only to patients who undergo VAS under similar conditions of mechanical ventilation. Second, some parameters that may have an important effect on lung physiology, such as the steepness of the Trendelenburg position and the shunt fraction, could not be assessed. Third, the number of patients was insufficient to assess the effects of intraoperative lung mechanics on postoperative outcome. Future studies are needed to investigate the composite effect of surgical position, type of VAS used, and intraoperative recruitment maneuvers on perioperative lung mechanics. In conclusion, VAS, whether RAS or LAS, is associated with increased airway pressure and decreased lung compliance. The effects of pneumoperitoneum on lung mechanics are more pronounced in patients undergoing robotic-assisted surgery compared to classic laparoscopy. Although airway pressures failed to return to preoperative values in both groups at the end of surgery, after the pneumoperitoneum was released the changes in lung compliance relative to preoperative values were minimal in patients undergoing classic laparoscopy compared to RAS. The decrease in lung compliance and increase in plateau pressure were associated with a greater arterial to end-tidal CO2 gradient. Additional file 1: Supplementary Table 1. 
Comparison of hemodynamic parameters and PaO2 between the two groups."} +{"text": "Recent studies on the effects of mandatory online teaching, resulting from the COVID-19 pandemic, have widely reported low levels of satisfaction, unwillingness to continue online teaching, and negative impacts on the psychological well-being of teachers. Emerging research has highlighted the potential role of psychological need thwarting (PNT), in terms of autonomy, competence, and relatedness thwarting, resulting from online teaching. The aim of this study was to evaluate the immediate and delayed effects of PNT of online teaching on teachers\u2019 well-being (including distress and burnout), intention to continue online teaching, and job satisfaction. Moreover, data collected from both cross-sectional and longitudinal surveys allowed for a systematic validation of an important instrument in the field of teacher psychology, the Psychological Need Thwarting Scale of Online Teaching (PNTSOT), in terms of longitudinal reliability and validity. The data reveal the usefulness of the construct of PNT in terms of predicting and explaining teachers\u2019 willingness to continue using online teaching as well as the degree of burnout after a period of 2 months, such that PNT is positively associated with burnout and negatively associated with willingness to continue online teaching. As such, the PNTSOT is recommended for future research evaluating the long-term psychological, affective, and intentional outcomes stemming from teachers\u2019 PNT. Moreover, based on our findings that the impact of PNT of online teaching is persistent and long-term, we suggest that school leaders provide flexible and sustained professional development, model respectful and adaptive leadership, and create opportunities for mastery and for the development of communities of practice that can mitigate the thwarting of teachers\u2019 autonomy, competence, and relatedness during times of uncertainty. 
Additionally, in terms of the psychometric properties of the PNTSOT instrument, our empirical findings demonstrate internal reliability, test\u2013retest reliability, measurement invariance, and criterion validity (concurrent and predictive) based on cross-sectional and longitudinal data. The COVID-19 pandemic has had a profound impact on the world, with pervasive effects in all aspects of life, including education. In fact, according to a survey by the United Nations Educational, Scientific and Cultural Organization (UNESCO), more than 180 countries closed all school campuses during the pandemic, affecting the lives of 1.6 billion primary and secondary school students. The sudden onset of the pandemic required teachers to adopt online teaching, generally with very little training or background experience in distance learning. The emerging construct of psychological need thwarting (PNT) has been used to describe the effects of online teaching on primary and middle school teachers. To extend our understanding of the dynamics of teacher psychology during online teaching, a more complete analysis of the long-term effects of online teaching on psychological well-being is required, beyond the relatively superficial findings concerning teachers\u2019 frustration already covered in the literature. The potential of PNT in explaining the mechanisms behind the impact of online teaching on teacher psychological well-being has some support from recent research. While the value of PNT in terms of psychological well-being has been reported in the literature, to date few studies conducted during school closures have examined the association between PNT related to online teaching and psychological well-being among primary and middle school teachers. 
In the specific context of the COVID-19 pandemic, measures of PNT, particularly related to online teaching, have been developed and validated to some extent. The TPNTS and PNTSOT instruments are based on self-determination theory. Online teaching has shown the potential to decrease task identity, task significance, autonomy, and the social dimensions of teachers\u2019 job characteristics. Uniquely, this study evaluates the predictive influence of psychological need thwarting of online teaching, itself an emerging construct, from a longitudinal perspective. While the influence of the constructs of psychological need thwarting and satisfaction has been simultaneously evaluated by some prior studies, the present study emphasizes the role of PNT in order to (a) address the lack of empirical studies evaluating PNT, as compared to psychological need satisfaction, particularly in the context of online teaching, and (b) avoid the inclusion of too many items in both the questionnaire and the model, which would create a burden for respondents while also making validation of the PNTSOT instrument difficult. Furthermore, in the present study, we adopt a research design that integrates cross-sectional and longitudinal elements to test the immediate and delayed effects of PNT of online teaching while systematically evaluating the psychometric properties of an instrument for measuring PNT of online teaching (the PNTSOT), originally developed in earlier research. This study was conducted in a city in a province in central China which had implemented mandatory online teaching, with campuses closed due to multiple COVID-19 infections reported at the end of October 2021. It should be noted that although the Chinese government had relaxed COVID-19 restrictions since September 2020, once an infection was reported, restrictive measures were still undertaken immediately in order to limit infections. 
As such, after an outbreak of the pandemic in the city in which the study took place, the city government decided to close all school campuses and fully implement online courses from November 3, 2021. Our research team has been engaged in long-term collaboration with the city\u2019s educational authorities, providing psychological counseling services and regularly holding mental health workshops for the teachers in the city. Thus, in order to monitor the mental health status of teachers during this quarantine period, our research team, with the assistance of local educational authorities, conducted an online survey of primary and secondary school teachers. A follow-up collection of data was conducted after a two-month interval. At Time 2, campuses had reopened for 2 weeks and mandatory online teaching was no longer being implemented, with schools returning to a face-to-face mode of instruction. The survey was administered through an online questionnaire, forwarded by the educational administration of each school district to each school in their jurisdiction for voluntary completion by teachers. A total of 9,554 (Time 1) and 4,176 (Time 2) teachers completed the online survey. Participants completing the first survey were asked to leave their email if they would like to participate in a follow-up survey after 2\u2009months. A total of 1,642 school teachers left their email information and participated in the longitudinal portion of the study. Written informed consent was obtained electronically on the first page of the online survey, providing participants with information on the purpose of the research, the affiliation of the researchers, and a guarantee of privacy and anonymity through appropriate storage and curation of the collected data. 
This study was approved by the Jiangxi Psychological Consultant Association (IRB ref.: JXSXL-2020-J013). The design of the survey was purposefully arranged to minimize the burden to the respondents, providing questions which were relevant only to the current situation (mandatory online teaching due to the recent outbreak). Due to the sudden nature of the announcements related to school closure, teachers were required to conduct online teaching from home and, at this point in time, were required to create, prepare, and manage a great deal of instructional materials. As such, the data collected at Time 1 included measures related to online teaching (including the PNTSOT and a questionnaire assessing satisfaction with online teaching) and a measure of psychological distress (DASS-21). For both theoretical and practical reasons, a measure of teacher burnout was not included at Time 1, but was evaluated at Time 2. Theoretically, the construct of burnout was utilized in our model as a predicted variable, indicative of the long-term effects of PNT and psychological well-being from a longitudinal perspective, and thus was not included in the data collected at Time 1. 
From a practical perspective, this measure was used only at Time 2 in order to reduce the length of the survey at Time 1 and to avoid influencing teachers\u2019 attitudes towards the longer-term impacts of online teaching; general attitudes towards online teaching and measures of psychological well-being were collected using the PNTSOT and DASS-21 instruments. For the assessment of satisfaction with online teaching, a single question was posed (Time 1): \u201cHow would you rate the effectiveness of your online teaching?\u201d with possible responses varying from very dissatisfied to very satisfied. The PNTSOT was used to measure the thwarting of teachers\u2019 needs for autonomy, competence, and relatedness during online teaching. To evaluate teacher burnout at Time 2, this study utilized the \u201cEmotional Exhaustion\u201d subscale from the Chinese version of the Primary and Secondary School Teachers\u2019 Job Burnout Questionnaire (CTJBO), a questionnaire comprising 22 items. The Chinese version of the 21-item Depression, Anxiety, and Stress Scale (DASS-21) was adopted to measure teachers\u2019 psychological distress during school closure (Time 1) and after restrictions were lifted and face-to-face teaching resumed (Time 2). The survey instructions asked the participants to reflect on their current mental state during school closure (for Time 1) or their mental state in the recent 2 weeks (for Time 2). The DASS-21 is equally divided among three emotional states: depression, anxiety, and stress. In order to thoroughly and systematically evaluate the impact of PNT of online teaching in terms of immediate effects and delayed effects (including intention to continue online teaching and burnout), mean values and correlation coefficients for all variables were first analyzed. Subsequently, the reliability and factorial validity of the PNTSOT were evaluated. 
Finally, structural equation modelling (SEM) was utilized in order to test the causal relationships among variables, including the effect of PNT of online teaching on psychological distress and satisfaction with online teaching (both measured at Time 1), as well as the delayed effects on intention to continue online teaching and burnout (both measured at Time 2). Furthermore, the psychometric properties of the PNTSOT were evaluated using both the cross-sectional and longitudinal data. Descriptive statistics were first used to analyze the characteristics of the participants and their responses on the PNTSOT and criterion variables. Moreover, Pearson correlations among the observed variables for the PNTSOT, burnout, satisfaction with online teaching, intention to continue online teaching in the future, and psychological distress were computed for the longitudinal data. Next, McDonald\u2019s \u03c9, the intraclass correlation coefficient (ICC) with a 2-way mixed effects model combined with a Bland\u2013Altman plot, and CFA were used to evaluate internal reliability, test\u2013retest reliability, and factorial validity. It should be noted that, during the development of the Chinese version of the TPNTS, on which the PNTSOT is based, item 8 was found to cross-load on both the relatedness thwarting and competence thwarting factors. Thus, as the inclusion of item 8 may affect the overall measurement quality of the scale, we conducted the analyses both with and without item 8. After scrutinizing the factorial validity of each sample, a multi-group and longitudinal invariance test was conducted to assess whether the PNTSOT possessed measurement invariance across different occasions. Finally, we constructed and tested a structural equation model (SEM) including a higher-order CFA of the PNTSOT and a causal model to test criterion validity. 
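McDonald's \u03c9, used above for internal reliability, has a simple closed form once standardized factor loadings are available: \u03c9 = (\u03a3\u03bb)\u00b2 / ((\u03a3\u03bb)\u00b2 + \u03a3(1 \u2212 \u03bb\u00b2)). The sketch below is a generic illustration with made-up loadings, not the study's estimates.

```python
def mcdonalds_omega(loadings):
    """McDonald's omega for a single factor.

    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    assuming standardized loadings so each error variance is 1 - loading^2.
    """
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)


# Illustrative loadings for a four-item subscale (not from the study).
print(round(mcdonalds_omega([0.7, 0.75, 0.8, 0.72]), 3))  # 0.831
```

In practice the loadings would come from the fitted CFA model rather than being supplied by hand.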
Because the data were non-normally distributed (p\u2009<\u20090.01), estimation utilizing diagonally weighted least squares (DWLS) was adopted for CFA, tests of measurement invariance, and SEM, as DWLS is more suitable for dealing with non-normally distributed data. Model fit was evaluated using the chi-square statistic (\u03c72), comparative fit index (CFI), non-normed fit index (NNFI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). CFI and NNFI values of 0.95 or higher, RMSEA values of 0.06 or lower, and SRMR values of 0.08 or lower were considered acceptable. For the invariance tests, (a) the configural model was compared with the factor-loading constrained equal model; (b) the factor-loading constrained equal model (a less constrained model) was compared with the factor-loading and item intercept constrained equal model (a more constrained model); (c) the factor-loading and item intercept constrained equal model (a less constrained model) was compared with the factor-loading, item intercept, and errors constrained equal model (a more constrained model); (d) the factor-loading and item intercept constrained equal model (a less constrained model) was compared with the factor-loading, item intercept, and factor variance, as well as the covariance constrained equal model (a more constrained model). The differences in CFI, RMSEA, and SRMR from the less constrained model to the more constrained model were used to judge whether or not measurement invariance was supported: \u0394CFI > \u22120.01, \u0394RMSEA <0.015, and \u0394SRMR <0.03 (for factor loading) or \u0394SRMR <0.01 (for subsequent steps). t-tests demonstrated very few significant differences between primary and middle school teachers on the variables of interest at Time 1 and Time 2 (where respondents were asked to reflect on their previous 2 months of online teaching). 
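The fit cut-offs and invariance decision rules described above can be expressed directly in code. The sketch below is a minimal illustration with our own function names and made-up index values (not the study's estimates), assuming the fit indices have already been obtained from nested CFA models.

```python
def fit_acceptable(cfi: float, nnfi: float, rmsea: float, srmr: float) -> bool:
    """Cut-offs used in the text: CFI and NNFI >= 0.95, RMSEA <= 0.06, SRMR <= 0.08."""
    return cfi >= 0.95 and nnfi >= 0.95 and rmsea <= 0.06 and srmr <= 0.08


def invariance_supported(delta_cfi: float, delta_rmsea: float, delta_srmr: float,
                         loading_step: bool) -> bool:
    """Judge a more constrained model against a less constrained one.

    Per the text: invariance holds when dCFI > -0.01, dRMSEA < 0.015, and
    dSRMR < 0.03 at the factor-loading step (or dSRMR < 0.01 at later steps).
    """
    srmr_limit = 0.03 if loading_step else 0.01
    return delta_cfi > -0.01 and delta_rmsea < 0.015 and delta_srmr < srmr_limit


# Illustrative numbers only.
print(fit_acceptable(0.97, 0.96, 0.05, 0.04))             # True
print(invariance_supported(-0.004, 0.002, 0.012, True))   # True
print(invariance_supported(-0.004, 0.002, 0.012, False))  # False: 0.012 exceeds 0.01
```

The same two checks would be applied at each of the four comparison steps (a)\u2013(d), tightening the \u0394SRMR limit after the factor-loading step.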
Additionally, differences in perception between Time 1 and Time 2 were evaluated in order to better interpret the effects of PNT of online teaching in the context of other variables of interest. Among the three dimensions of psychological need thwarting, autonomy and competence thwarting were higher during Time 1 (as compared to Time 2), with mean autonomy thwarting of 3.98 and 3.92 and mean competence thwarting of 3.97 and 3.93 for the cross-sectional and longitudinal data, respectively. The scores of these two subscales decreased when returning to offline instruction (Time 2) for both the cross-sectional and longitudinal data, with means for autonomy thwarting of 3.78 and 3.62, and means for competence thwarting of 3.79 and 3.69. Interestingly, relatedness thwarting increased from Time 1 to Time 2 for both the cross-sectional and longitudinal data, with mean values at Time 1 of 2.50 and 2.39, and mean values at Time 2 of 2.62 and 2.50. It is noted that despite changes in the observed scores for the PNTSOT over time, tests for longitudinal reliability and validity are still necessary to evaluate the ICC and longitudinal measurement invariance. The participants\u2019 overall psychological distress was 20.15 and 19.62 at Time 1, while psychological distress scores were 22.44 and 19.12 at Time 2. Regarding specific emotional states, the percentage of teachers with clinical depression increased in both the cross-sectional (from 25.2 to 29.9%) and longitudinal data (from 22.41 to 23.51%); the percentages for anxiety and stress increased in the cross-sectional data but decreased in the longitudinal data. Participants\u2019 burnout was considered moderate, the value being close to the median of the scale. Finally, more than half of the teachers were satisfied or very satisfied with their online teaching, as indicated by responses of either \u201csatisfied\u201d or \u201cvery satisfied\u201d, with 69.02% satisfaction based on the cross-sectional data and 69.61% based on the longitudinal data. 
However, only 37% of the participants responded that they would like to continue using online teaching in the future. A negative correlation between PNT of online teaching and satisfaction with online teaching was found at Time 1 (r\u2009=\u2009\u22120.39; p\u2009<\u20090.001) and Time 2, and a negative correlation between PNT of online teaching and intention to teach online was found at Time 1 and Time 2. Moreover, PNT of online teaching was significantly and positively correlated with burnout and psychological distress, with correlation coefficients ranging from 0.27 to 0.44. These observed associations were supported by the results of structural equation modelling. Similar findings were reported for the cross-sectional data, with coefficients exactly the same as with the longitudinal data. Moreover, the ICC for the PNTSOT using a 2-way mixed effects model was 0.71, indicating acceptable test\u2013retest reliability across the two-month interval. Analysis also demonstrated that 95% of the data points lay within \u00b11.96 SD of the mean difference in a Bland\u2013Altman plot. The 11-item scale (excluding item 8) had a better model fit than the scale with 12 items. Regarding criterion validity, concurrent and predictive validity were assessed by SEM: PNT of online teaching was positively associated with psychological distress at Time 1 (\u03b3\u2009=\u20090.44, t\u2009=\u20097.45, p\u2009<\u20090.001), negatively associated with satisfaction with online teaching at Time 1, negatively associated with the intention to continue online teaching at Time 2, and positively associated with burnout at Time 2. 
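The test\u2013retest analysis reported here, a Bland\u2013Altman plot whose limits of agreement are the mean difference \u00b11.96 SD of the paired differences, can be reproduced in a few lines of NumPy. This is a generic sketch using simulated scores, not the study's data; for the ICC itself, dedicated routines (e.g., in the pingouin package) are typically used.

```python
import numpy as np

rng = np.random.default_rng(0)
time1 = rng.normal(3.8, 0.6, 200)          # simulated Time 1 scale scores
time2 = time1 + rng.normal(0.0, 0.4, 200)  # correlated simulated retest scores

# Bland-Altman limits of agreement: mean difference +/- 1.96 SD of differences.
diff = time2 - time1
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Fraction of paired differences inside the limits (~95% expected for normal data).
within = float(np.mean((diff >= loa_low) & (diff <= loa_high)))
print(round(within, 2), round(float(loa_high - loa_low), 2))
```

With real data, `time1` and `time2` would be the PNTSOT totals from the two occasions, and the differences would also be plotted against the pairwise means to check for proportional bias.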
Recent literature on the effects of the COVID-19 pandemic on teachers has widely reported low levels of satisfaction with online teaching, an unwillingness to continue online teaching, and negative impacts on teachers\u2019 psychological well-being. In terms of a theoretical contribution, although a few studies have reported a negative relationship between PNT of online teaching and psychological well-being outcomes using cross-sectional data, to our knowledge the present study is the first to evaluate this relationship longitudinally. Given that measurement equivalence was supported for the PNTSOT across different occasions, the values from the scale between two points in time (occasions) can be meaningfully compared and interpreted as reflecting a real change in the thwarting of psychological needs. Our results demonstrated that the three components of PNT of online teaching varied with context, depending on the conditions caused by the pandemic and the actions taken by local educational authorities. It is, therefore, not surprising that teachers\u2019 psychological need thwarting stemmed mainly from a stressful environment. Unexpectedly, although the closure of schools reduced teachers\u2019 access to job-related resources, it also seemed to have reduced some demanding aspects of the teaching job. These findings highlight the double-edged nature of the teaching profession. In terms of the literature related to the measurement of PNT, starting from the initial assessment of athletes, the construct has since been extended to an increasing range of domains, including teaching. The lasting harm from PNT of online teaching was demonstrated in this study, predicting teachers\u2019 willingness to continue using online teaching as well as the degree of burnout after a period of 2 months. These results highlight the delayed and long-term effect of mandatory online teaching, which lasts beyond the period when online teaching is implemented, a finding which has been described by some studies. 
In terms of competence, most teachers had not received sufficient training or experience in implementing online teaching. In terms of autonomy, in many cases teachers were required to use their school\u2019s designated platforms (including software) and follow prescribed course activities (including assessment methods) for online teaching, which can result in a perceived lack of autonomy in terms of their teaching. From the point of view of relatedness, some literature has reported a separate spike in psychological distress among schoolteachers once schools reopen, and the present study provides insights into the interpretation of this situation in terms of teachers\u2019 relatedness thwarting. The fact that teachers\u2019 perceptions of the PNT of online teaching were less severe when asked to evaluate online teaching in retrospect provides some hope of a \u201crebound\u201d effect from the PNT of online teaching or negative experiences with other types of educational technologies, if teachers\u2019 psychological need thwarting can be averted. For example, there is potential for preventing relatedness thwarting through the establishment of communities of practice, while simultaneously mitigating threats to competence and autonomy through targeted professional development that emphasizes not only skills and knowledge related to new technologies, but also takes into consideration potential thwarting of teachers\u2019 psychological needs. As such, the emotional care provided by school leaders is important during the early days of campus reopening. This kind of emotional care is characterized as warm and empathetic, led and modelled by front-line leaders, rather than enacted by means of an authoritarian style of leadership. In light of the potential for thwarting of competence, autonomy, and relatedness needs, a recurring theme is the importance of professional development. 
Given the importance of teachers\u2019 psychological needs during challenging times, such as mandatory online teaching, the role of teacher training must also be considered. During teacher training, pre-service teachers can benefit from increased choice and freedom in pursuing individual goals (autonomy), positive feedback through coaching and mentorship which encourages student teachers to identify their unique personal qualities and incorporate these into their teaching (competence), and a fostered sense of the social environment of teaching with attention to individual students (relatedness). It should be noted that, although the subjects of the longitudinal study and the subjects of the cross-sectional study shared a similar demographic background and reported similar levels of PNT of online teaching, the changes in psychological distress were different for the two samples. Whether or not this situation was due to official assistance provided through participation in the online survey is still uncertain. We suggest that future research should initiate longitudinal monitoring of teachers\u2019 mental health after they return to face-to-face teaching, and explore related factors which can influence teachers\u2019 psychological well-being and intention to continue in specific teaching tasks. Second, given that PNT of online teaching describes a perception of one\u2019s working environment, exploring the effect of school management as a school-level variable representing a work/environmental factor related to teachers\u2019 online teaching PNT is another potential area for future research. 
Third, while the present study focused on the thwarting of teachers\u2019 psychological needs as a risk factor, measures of psychological need satisfaction may be explored by future studies to examine its potential as a protective factor and its relationship with other measures of psychological well-being regarding the use of online teaching, both during times of distress and under normal working conditions. Finally, while the present study was concerned with the evaluation of the direct effects of PNT of online teaching in terms of both immediate and delayed effects (intention to continue using online teaching), future studies can further evaluate potential mediation and moderation effects. Moreover, future studies may test models that include alternative predictor and outcome variables in relation to the construct of PNT of online teaching. In the present study, we systematically evaluated the psychometric properties of the PNTSOT instrument. As noted in our results on the change in psychological distress between the two occasions, improvement in psychological well-being among schoolteachers was not found in either the cross-sectional or longitudinal data. This finding is consistent with research from Spain and Denmark. As we move through the next stages of the pandemic (a post-COVID-19 era), we can take advantage of the lessons learned during the pandemic as an opportunity to better evaluate the potential future of other innovations and evolutions in educational practice that promote online teaching and other interventions which can enhance teaching and learning. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the Jiangxi Psychological Consultant Association. 
The patients/participants provided their written informed consent to participate in this study. I-HC and C-YL: conceptualization, formal analysis and project administration. I-HC, X-MC, and C-YL: methodology. I-HC and X-MC: validation. I-HC, K-YZ, and Z-HW: investigation. I-HC and X-LL: resources. X-LL: data curation. I-HC and JG: supervision, writing\u2014review and editing and writing\u2014original draft preparation. I-HC: funding acquisition. All authors have read and approved the final manuscript. This research was supported by the Anhui Province Philosophy and Social Science Planning Project \u201cEvidence-Based Decision Making and Practice Research on Comprehensive College Entrance Examination Reform in Anhui Province\u201d (Project No.: AHSKZ2021D12) and the 2022 Shanghai Philosophy and Social Science General Project for Educational Planning \u201cResearch on the TPACK Development Mechanisms and Enhancement Paths of College Teachers in the Post-Epidemic Era\u201d (Project No.: A2022014). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Electrical switching based data center networks have an intrinsic bandwidth bottleneck and require inefficient, power-consuming multi-tier switching layers to cope with the rapidly growing traffic in data centers. With the benefits of ultra-large bandwidth and high cost- and power-efficiency, switching traffic in the optical domain has been investigated as a replacement for the electrical switches inside data center networks. 
However, the deployment of nanosecond optical switches remains a challenge due to the lack of corresponding nanosecond switch control, the lack of optical buffers for packet contention, and the requirement of nanosecond clock and data recovery. In this work, a nanosecond optical switching and control system is experimentally demonstrated, enabling an optically switched data center network with 43.4 nanosecond switching and control capability, with packet contention resolution, and with 3.1 nanosecond clock and data recovery. Several challenges still impede the deployment of optical switches in data centers. The authors report an optical switching and control system to synergistically overcome these challenges and provide enhanced performance for data center applications. The escalation of traffic-boosting applications and the scale-out of powerful servers have significantly increased the traffic volume inside data centers (DCs). Consequently, each aggregation switching node in the data center network (DCN) has to handle from multiple Tb/s to hundreds of Tb/s of traffic. Along with the increasing demand for higher switching bandwidth, emerging latency-sensitive applications are also imposing stringent low-latency requirements2 on DCNs. However, due to the limited maximum bandwidth per pin of CMOS chips, as well as the limited number of chips that can be used in a single package in electrical switching technologies, it is hard to linearly increase bandwidth3. New technologies, such as Silicon Photonics4, 2.5D/3D packaging5, and co-packaging6, are being investigated to scale the I/O bandwidth. However, before these technologies become viable, a number of challenges have to be solved, e.g., the high complexity of packaging external laser sources and fiber coupling, and the high manufacturing (including both packaging and testing) costs. 
As a counterpart, switching the traffic in the optical domain has been investigated considerably as a solution to overcome the bandwidth bottleneck and latency issues in DCNs13. Benefiting from optical transparency, high-bandwidth optical switching is independent of the bit rate and data format of the traffic. Wavelength division multiplexing (WDM) technology can be employed to boost the optical network capacity at a superior power-per-unit-bandwidth performance9. In addition, optical switching networks eliminate the dedicated interfaces for modulation-dependent processing, achieving fast and highly efficient processing10. Furthermore, eliminating the power-consuming optical-electrical-optical conversions at the switch nodes significantly improves energy and cost efficiency11. All these benefits can be exploited to flatten the DCN topology and overcome the hierarchical architecture with its associated large latency and low throughput12. Multiple optical switching techniques have been proposed and investigated, of which the micro-electro-mechanical systems (MEMS)-based slow switches are seeing penetration into data centers, providing reconfigurable high-bandwidth and data-rate-transparent channels16. However, their tens-of-milliseconds switching time has strictly confined applications to well-scheduled and long-lived tasks. Considering the time-varying traffic bursts and high fan-in/fan-out hotspot patterns in DCNs, slow optical switches providing static-like and pairwise interconnections would only be beneficial as supplementary switching elements. By contrast, fast optical switches with nanosecond switching times in support of packet-level operations, such as semiconductor optical amplifier (SOA)-based optical switches, can be exploited to realize DCNs with high network throughput, flexible connectivity, and on-demand resource utilization15. 
Despite the promise held by fast optical switching technologies, the practical implementation of nanosecond optical switching DCNs faces several challenges. As the main unresolved challenge, a nanosecond-scale control system is essential for switch control and fast forwarding of the data traffic17. Fast switch reconfiguration time, including both the hardware switching time (on the order of nanoseconds for an SOA-based switch) and the control overhead time, is essential, as it determines the network throughput and latency performance. Moreover, the reconfiguration time should be independent of the DCN scale. This requires processing, on the order of nanoseconds, of the optical labels that carry the destinations of the packets at the switch controller, which subsequently reconfigures the optical switch for data packet forwarding. However, the requirement to synchronize the processed optical labels at the switch controller with the delivered optical packets at the optical switch strongly limits the implementation of nanosecond switching control17. Packet contention resolution is another unsolved critical challenge18. Electrical switches employ random access memories (RAM) to buffer conflicting packets and so resolve contentions. Because no effective RAM exists in the optical domain, conflicting packets at the optical switch are dropped, causing high packet loss. Several approaches have been proposed to overcome this issue, based either on optical fiber delay lines (FDLs)19 or on deflection routing20, but none of them is practical for large-scale DCN implementation. Furthermore, unlike the point-to-point synchronized connections between any pair of electrical switches, optical switches create only momentary physical links between source and destination nodes. Therefore, in a packet-based optical switching network, the clock frequency and phase of the data signal vary packet by packet, and nanosecond burst-mode clock and data recovery (BCDR) receivers are required to recover the clock frequency and phase on a per-packet basis. The BCDR locking time determines the number of preamble bits required, which can dramatically reduce the network throughput, especially in the intra-DC scenario where many applications produce short traffic packets21. BCDR receivers have been extensively studied in the context of Passive Optical Networks, and architectures based on gated oscillators or over-sampling have been shown to achieve nanosecond locking times22. These techniques, however, increase the complexity and cost of the transceiver design and need to be re-evaluated for higher data rates, although initial results are encouraging23. The burst-mode links proposed in ref. 24 can be dynamically reconfigured to support fast optical switching; however, the control plane of this solution does not resolve packet contention, which could cause high packet loss. The above-mentioned challenges in terms of nanosecond switching control, packet contention resolution, and fast CDR locking have been the roadblock to the deployment of fast optical switches in DCNs. In this work, we propose and experimentally demonstrate a nanosecond optical switching and control system that comprehensively solves the issues that have prevented the deployment of a nanosecond optical DCN with packet-based operation. The system is based on a combination of a label control and synchronization mechanism for nanosecond switch control, an Optical Flow Control protocol to resolve packet contention, and a precise clock distribution method for nanosecond data packet recovery without the deployment of BCDR receivers.
Experimental results validate that the label control system is capable of distributing the clock frequency from the switch controller to all the connected top-of-rack switches (ToRs), allowing a 3.1 ns data recovery time with no BCDR receivers and a 43.4 ns overall switching and control time for the DCN. The proposed nanosecond optical switching and control system is schematically illustrated in Fig. . In the time-slotted optical packet switching network, the slotted optical packets generated by the ToRs have to arrive aligned at the optical switches, which requires precisely unifying the time at all the ToRs. During the system initialization state25, each ToR sends a timestamp, recording its local send time (TL1), to the switch controller via the label channel. The timestamps are processed at the controller and sent back to the source ToRs, and the corresponding ToR records the time (TL2) at which the timestamp is received. Based on the time offset (Toffset = TL2 − TL1) and the processing delay (Tprocessing) inside the FPGA-based ToR and switch controller, the physical fiber transmission delay (Tfiber) of the label channel can be automatically measured as Tfiber = (Toffset − Tprocessing)/2. The switch controller then sends the controller time (Tcontroller) to all the connected ToRs. Once the controller time is received at each ToR, the ToR time (TToR) is updated by compensating the received controller time with the measured fiber delay and the FPGA processing time (TToR = Tcontroller + Tfiber + Tprocessing/2). This mechanism guarantees that all the ToRs share an identical time reference inherited from the switch controller. Therefore, the optical labels and optical data packets can be sent out aligned with this common timeline, guaranteeing the synchronization of the optical labels at the switch controller with the data packets at the optical switch.
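The delay measurement and clock update above are simple arithmetic; the following minimal Python sketch (with made-up timestamp values, all in nanoseconds) walks through the two formulas.

```python
# Minimal sketch of the label-channel time synchronization above.
# All times are in nanoseconds; the numbers are illustrative, not measured.

def fiber_delay(t_l1, t_l2, t_processing):
    """T_fiber = (T_offset - T_processing) / 2, with T_offset = T_L2 - T_L1,
    assuming a symmetric (equal out-and-back) label channel."""
    return ((t_l2 - t_l1) - t_processing) / 2

def tor_time(t_controller, t_fiber, t_processing):
    """ToR clock update: T_ToR = T_controller + T_fiber + T_processing / 2."""
    return t_controller + t_fiber + t_processing / 2

# Timestamp sent at 1000 ns, echoed back at 1300 ns, with 100 ns of
# known FPGA processing delay in the round trip:
t_fib = fiber_delay(1000, 1300, 100)
print(t_fib)                          # (300 - 100) / 2 = 100.0
print(tor_time(5000, t_fib, 100))     # 5000 + 100 + 50 = 5150.0
```

The symmetric-channel assumption is what lets a single round trip yield the one-way delay; it holds here because the label channel is a dedicated bidirectional fiber link.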
Considering the lack of optical buffers at the optical switches, an Optical Flow Control (OFC) protocol is developed to resolve packet contentions when multiple optical data packets have the same destination. Once contention occurs, the data packet with higher priority is forwarded to its destination ToR, while the conflicting packets with lower priority are forwarded to ToRs not requested as destinations. This forwarding mechanism guarantees that the receiver at each ToR sees a continuous traffic flow in every time slot. The developed OFC protocol is deployed on the bidirectional label channels between the switch controller and the ToRs. After contention resolution, the switch controller sends each ToR an ACK signal or a NACK signal (packet forwarded to an un-destined ToR). Based on the received ACK/NACK signal, the ToR label packet processor either releases the stored data packet from the RAM or triggers the data packet processor to retransmit the optical packet. Moreover, to prevent packet loss at an overflowing buffer, layer-2 flow control mechanisms, such as the Ethernet PAUSE frame and priority-based flow control (PFC), can be integrated into this control system. The ToR monitors the buffer occupancy of each buffer block in real time; when the monitored buffer exceeds the predefined queuing threshold, the ToR generates a PAUSE or PFC frame and sends it back to the corresponding source on the reverse path (label channel), together with the normal Ethernet frames, to pause the traffic transmission. In this proposed approach, each of the bidirectional label channels is a continuous link used not only to send the label requests from the ToRs to the switch controller and the ACK/NACK signals from the switch controller to the ToRs, but also for clock distribution from the switch controller to the ToRs to synchronize the system clock frequency.
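The ToR-side reaction to the OFC signals can be sketched as follows; the buffer class, threshold value, and packet representation are hypothetical, illustrating only the ACK-release / NACK-retransmit / PAUSE-threshold logic described above.

```python
from collections import deque

# Hypothetical ToR-side handling of the OFC signals described above:
# ACK releases the buffered copy, NACK re-queues it for retransmission,
# and a full buffer raises PAUSE/PFC-style back-pressure.

class TorBuffer:
    def __init__(self, pause_threshold=8):
        self.queue = deque()              # copies of sent, unacknowledged packets
        self.pause_threshold = pause_threshold

    def send(self, packet):
        self.queue.append(packet)         # keep a copy until ACK/NACK arrives

    def on_ack(self):
        self.queue.popleft()              # delivered: release the stored copy

    def on_nack(self):
        pkt = self.queue.popleft()        # went to an un-destined ToR:
        self.queue.append(pkt)            # schedule retransmission in a later slot

    def must_pause(self):
        # layer-2 back-pressure once buffer occupancy crosses the threshold
        return len(self.queue) >= self.pause_threshold

buf = TorBuffer(pause_threshold=2)
buf.send("pkt-A")
buf.send("pkt-B")
print(buf.must_pause())   # True: threshold reached, a PAUSE/PFC frame would be sent
buf.on_ack()              # pkt-A acknowledged and released
print(buf.must_pause())   # False
```

In the real system these events are driven by the FPGA on the label channel each time slot; the sketch only shows the state transitions.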
It should be noted that the CDR circuits at conventional receivers need to receive continuous data traffic to maintain a recovered clock of good quality. To guarantee this, the switch controller, which has full vision of the traffic from the ToRs, exploits the multicast capability of the optical switch to forward a conflicting lower-priority packet to one un-destined ToR to fill the empty slot. Moreover, the system only needs to distribute the clock frequency to the ToRs in the sub-network; there is no requirement to align the clock phase, as the CDR is done at each ToR in less than one clock cycle. This is an advantage with respect to other techniques26, where the phase alignment requires more complex network interconnections and extra devices to guarantee the precise positional relationship between the TX and RX sides, limiting practical implementation in large-scale DCNs. Furthermore, the clock carried by the label channel does not need to be distributed at the full line rate, because the clock can be frequency-multiplied at the CDR block according to the transceiver requirements of the data channels. As shown in Fig. , data packets from ToR1 in cluster 1 destined to ToRM in cluster 2 could first be forwarded by the inter-cluster switch ES1 to the intermediate ToRN+1 in cluster 2. The ToRN+1 Ethernet switch, based on the destination address, forwards the data packets to the intra-NIC so that the packets are delivered to ToRM via the intra-cluster switch IS2. Another two-hop link for this inter-cluster communication is Cluster1ToR1 ↔ IS1 ↔ Cluster1ToRM ↔ ESM ↔ Cluster2ToRM.
Moreover, the Ethernet switch at each ToR monitors the traffic volume of intra-cluster and inter-cluster communications by reading the destination MAC address27, and accordingly assigns adaptable optical bandwidth to the intra-cluster and inter-cluster links. Note that the intra-cluster interconnect network (consisting of the intra-NICs and the intra-cluster switch) and the inter-cluster interconnect network (consisting of the inter-NICs and the inter-cluster switch) are two independent sub-networks, as shown in Fig. . In the experimental setup illustrated in Fig. , the label requests from ToR1 and ToR2 indicate that their data packets are destined to ToR3 in time slot N. Given its higher priority, the ToR1 packet is forwarded to ToR3, while the lower-priority ToR2 packet is sent back to ToR2 to maintain a continuous stream of traffic (the packet received at ToR2 is dropped once it is verified that its destination is ToR3). Afterwards, ToR1 receives an ACK signal and ToR2 a NACK signal, respectively. The stability of the clock distribution, the synchronization of the slotted mechanism, and the effectiveness of the OFC protocol are assessed by counting the packet loss on the optical links for 10 days at a traffic load of 0.8. Both the loopback and ToR1 channels lose 2 packets at the initialization state, while ToR1 loses 5 packets in total; the temporary time disorder at the initial state can introduce a couple of packet losses. To verify the networking capability of the proposed system, an OMNeT++ simulation model of a large-scale optical DCN is built based on the principles illustrated in Fig. . The N×N semiconductor optical amplifier (SOA)-based optical switch is of the broadcast-and-select (B&S) style: for each output port, only one SOA gate is in the ON state when forwarding optical packets to that specific port.
SOA gates in the OFF state may not completely block the optical signal and therefore introduce crosstalk noise, which results in channel crosstalk when the signals of multiple SOA gates are coupled. Thus, to quantify the signal impairment introduced by the crosstalk noise, the ON/OFF power ratio of the SOA gates is measured under different driving currents, as shown in Fig. . We have presented and experimentally demonstrated a nanosecond optical switching and control system for optical DCNs based on the label control mechanism, the OFC protocol, and clock distribution enabling nanosecond data recovery. The optical label channels deliver the allocated time, the label signals for nanosecond packet forwarding, and the OFC protocol signals to resolve packet contention. Experimental results confirmed an overall 43.4 ns optical switching and control system operation, a 3.1 ns data recovery time without BCDR receivers, and a packet loss rate of less than 3.0E-10 after 10 days of continuous and stable network operation. These results pave the way to the practical deployment of high-capacity and low-latency optical DCN architectures based on distributed nanosecond optical switches with a nanosecond control system. The design of ref. 29 could be deployed in the FPGA-based ToR to flexibly manage the optical packet forwarding and thereby improve the network throughput. WDM channels destined to different racks can be deployed at the optical ToR to improve the switching capability. Combined with the linear regression algorithm proposed in ref. 30, the WDM wavelength could be tuned quickly to adapt to varying traffic patterns. High-radix switches in large-scale networks can reduce switch count and hop count, thereby decreasing flow completion time and power consumption. The radix of currently proposed fast optical switches is smaller than that of electrical switches, even if the theoretically unlimited optical bandwidth per port can partly compensate for this radix deficiency.
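As a rough illustration of why the ON/OFF power ratio of the SOA gates matters in a broadcast-and-select output, the sketch below sums the leakage of the N−1 OFF gates against the selected signal; the port count and the 40 dB ratio are assumed values, not measurements from the paper.

```python
import math

# Illustrative signal-to-crosstalk estimate for one broadcast-and-select
# output: one SOA gate is ON while the other N-1 OFF gates leak light
# attenuated by the ON/OFF power ratio. Values are assumptions, not
# measurements from the paper.

def crosstalk_db(n_ports, on_off_ratio_db):
    """Selected-signal-to-summed-leakage ratio, in dB."""
    leak_per_gate = 10 ** (-on_off_ratio_db / 10)   # linear leakage per OFF gate
    total_leak = (n_ports - 1) * leak_per_gate      # incoherent power sum
    return -10 * math.log10(total_leak)

# A 4x4 switch with a 40 dB ON/OFF ratio per gate:
print(round(crosstalk_db(4, 40.0), 1))   # 35.2
```

The estimate shows the practical point: the achievable signal-to-crosstalk ratio degrades as the radix grows, which is why the ON/OFF ratio is characterized against driving current.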
For the next research step, the design of large-radix optical switches will further improve the network performance, with the capability to build a fully flat network. Moreover, the flexible adjustment of the optical packet length in real time is of key importance to adapt to the network workload and thereby fully utilize the optical bandwidth; an automatic adjustment mechanism based on traffic load prediction is a promising direction for future research. Furthermore, a fast and scalable hardware design of the Push-In-Extract-Out (PIEO) scheduler has been proposed and implemented in ref. 27. The Spirent Ethernet Test Center is configured by an XML file to generate the burst traffic pattern in this experimental demonstration, emulating real data center traffic characteristics. The Spirent is programmed to generate Ethernet frames with lengths varying from 64 bytes to 1518 bytes at loads from 0 to 1. As in a practical network, 35% of the frames, with lengths shorter than 200 bytes, are generated as control frames, and more than 45% of the frames, with lengths longer than 1400 bytes, carry the real application information. A traffic flow is defined as Spirent-generated continuous Ethernet frames with the same destination within a certain period of time. The flow model is built on ON/OFF period lengths (with/without traffic flow generation), following the data center traffic behavior described in ref. . For the scheduling mechanism at the FPGA-based switch controller in the proposed switching and control system, the controller computes a schedule to guide packet contention resolution and data transmission based on the label request signals, which are delivered on the independent label channels. Note that the OFC protocol and time synchronization are implemented by reusing the label channels, which significantly simplifies the scheduling mechanism compared with conventional schemes.
In a conventional implementation, admission control components at the ToRs report demand information to the controller and hold data for transmission until triggered to send it by the scheduler; the conventional scheduler in the controller uses a complex scheduling algorithm to determine when to transmit data and how to configure the switches. By comparison, in the proposed scheduling scheme the optical data is not held waiting for a scheduler grant: the optical data packet is forwarded directly, with the label signals, in the same time slot, and the proposed scheduling algorithm focuses only on the switch configuration for forwarding the data packets. Benefiting from the parallel processing capabilities of the FPGA-based switch controller, the proposed scheduling scheme can easily be scaled out to support more than 64 ToRs in one cluster. The proposed optical switching and control system can scale to a higher data rate at the NIC in two ways: (1) adding more transceivers to the data channels, or (2) deploying higher-speed optical transceivers, such as QSFP/SFP28/QSFP-DD or other types expected at higher data rates in the future. In this system, the clock distributed by the label channel to drive the transceivers of the data channels can run at a lower rate than the higher-data-rate transceivers carrying the optical data, because the distributed clock can be frequency-multiplied at the CDR block according to the higher data rate of the transceivers. Therefore, the data rate of the NIC data channel can be scaled up using multiple transceivers or higher-speed transceivers."} +{"text": "Species richness has been found to increase from the poles to the tropics, but with a small dip near the equator, over all marine fishes. Phylogenetic diversity measures offer an alternative perspective on biodiversity linked to evolutionary history.
If phylogenetic diversity is standardized for species richness, it may indicate places with relatively high genetic diversity. Latitudes and depths with both high species and high phylogenetic diversity would be a priority for conservation. We compared latitudinal and depth gradients of species richness and three measures of phylogenetic diversity, namely average phylogenetic diversity (AvPD), the sum of the higher taxonomic levels (STL), and the sum of the higher taxonomic levels divided by the number of species (STL/spp), for modelled ranges of 5,619 marine fish species. We distinguished all, bony and cartilaginous fish groups and four depth zones, namely the whole water column, 0–200 m, 201–1,000 m, and 1,001–6,000 m, at 5° latitudinal intervals from 75°S to 75°N, and at 100 m depth intervals from 0 m to 3,500 m. Species richness and higher taxonomic richness (STL) were higher in the tropics and subtropics, with a small dip at the equator, and were significantly correlated among fish groups and depth zones. Species assemblages had closer phylogenetic relationships (lower AvPD and STL/spp) in warmer than in colder environments (high latitudes and the deep sea). This supports the hypothesis that warmer, shallower latitudes and depths have had higher rates of evolution across a range of higher taxa. We also found distinct assemblages of species in different depth zones, such that deeper-sea species are not simply a subset of shallow assemblages. Thus, conservation needs to be representative of all latitudes and depth zones to encompass global biodiversity. The latitudinal diversity gradient (LDG) has long interested ecologists because it generalizes over local and regional patterns, and thus helps us to understand where species have evolved and survived on both ecological and evolutionary time scales. Dozens of hypotheses have been proposed to explain it. Until recently, the literature presented the typical LDG as a decrease in species richness from the equator to the poles.
However, present LDGs of marine species are now recognized to be bimodal, with a dip at or near the equator. Species richness is the most common and simplest way to measure biodiversity; however, biodiversity comprises variation within species, between species and amongst ecosystems. In this study, two additional phylogenetic indices were created to offer a simpler way to understand higher taxonomic richness and phylogenetic relationships. One was the sum of the higher taxonomic levels (STL), which added the numbers of classes to genera as a measure of higher taxonomic richness. Because STL is dependent on the number of species present, this study also divided STL by the number of species present (STL/spp) in a given latitude and depth zone, to standardize higher taxonomic (phylogenetic) richness for species richness. Thus, a given area with few species but many higher taxa will have a higher STL/spp (distant phylogenetic relationship); in contrast, a given area with more species but a smaller or similar number of higher taxa will have a lower STL/spp (closer phylogenetic relationship). The concept of STL/spp is similar to AvPD but, by directly using the number of taxonomic levels, it is a simpler way to understand the phylogenetic relationship in a given area. In the marine environment, species richness is also related to depth. Generally, species richness declines with depth, although it may peak at intermediate depths for some taxa in some places. Changes in composition (i.e., turnover) may identify latitudes and depths where boundaries separate assemblages differing in phylogenetic and/or species richness. How marine fish composition changes along latitudes in different depth zones, and along depth zones at the global scale, has not been studied.
A previous global-scale study showed that multiple marine taxa, including fishes, in 5-degree latitude bands could be divided into five assemblages: tropical (between 32.5°S and 27.5°N), two temperate groups, and two polar groups. Here, we describe species richness and turnover, higher taxonomic richness (STL), phylogenetic relationships using conventional and novel indices (AvPD and STL/spp), and species assemblages of marine fishes across latitudes in the whole water column and three depth zones from 75°S to 75°N, and at depths from 0–3,500 m. We also describe gradients among all, bony and cartilaginous fishes to see the differences among fish groups, and we illustrate the relationship between the phylogenetic indices and species richness. The hypotheses in this study are that higher taxonomic richness will be highly correlated with species richness, and that there will be distinct fish assemblages with latitude and depth. If this is not the case, it would suggest that some latitudes or depths have had higher recent rates of speciation than others. The distribution ranges of all 5,619 fish species for which ranges were available were obtained from AquaMaps (Table S). The assignment of weights (ω) to branch lengths adhered to an established methodology: ω = 20 for different species in the same genus, ω = 40 for species in the same family but different genera, ω = 60 for species in the same order but different families, ω = 80 for species in the same class but different orders, and ω = 100 for species in different classes. Average phylogenetic diversity (AvPD) is a measure of the average phylogenetic distance (branch length) between any two chosen species within a given phylogenetic tree. In addition, two new and simple measures of phylogenetic diversity, based on the number of the five taxonomic levels in a given latitude band or depth band, were applied. One was the sum of the higher taxonomic levels (STL).
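A minimal sketch of AvPD under the ω weighting above, representing each species by its (class, order, family, genus) tuple; the example assemblage uses genera named in this study, but the family labels and placements are simplified placeholders for illustration.

```python
from itertools import combinations

# Sketch of AvPD under the omega weights given above. Each species is a
# (class, order, family, genus) tuple; the assemblage below and its
# taxonomic placements are simplified for illustration only.

def omega(sp1, sp2):
    """Pairwise taxonomic distance between two distinct species."""
    for level, weight in ((0, 100), (1, 80), (2, 60), (3, 40)):
        if sp1[level] != sp2[level]:
            return weight
    return 20                    # different species in the same genus

def avpd(species):
    """Mean pairwise distance over all species pairs in the assemblage."""
    pairs = list(combinations(species, 2))
    return sum(omega(a, b) for a, b in pairs) / len(pairs)

assemblage = [
    ("Chondrichthyes", "Carcharhiniformes", "FamA", "Apristurus"),
    ("Chondrichthyes", "Carcharhiniformes", "FamA", "Galeus"),
    ("Chondrichthyes", "Rajiformes", "FamB", "Bathyraja"),
]
print(avpd(assemblage))   # (40 + 80 + 80) / 3 ≈ 66.67
```

A low AvPD thus means that, on average, randomly chosen pairs of species in the assemblage diverge only at the genus or family level rather than at order or class level.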
We used 5 for classes, 4 for orders, 3 for families, and 2 for genera as the assignment of weights. Therefore, the equation of STL for each latitude band or depth band in this study was classes × 5 + orders × 4 + families × 3 + genera × 2. For example, the STL of an assemblage with one class, one order, two families and four genera would be 23, from [(1 × 5) + (1 × 4) + (2 × 3) + (4 × 2)]. The second simple measure was the sum of the higher taxonomic levels divided by the number of species (STL/spp). This measure was used to account for the number of species, because where very few species occur, fewer higher taxa can occur. The latitudinal distribution range (northern and southern limits) and the preferred maximum depth were derived from the species' geographic and depth ranges. We compared bony and cartilaginous fishes. Overall, three groups, "All Fish", "Bony Fish", and "Cartilaginous Fish", were analysed for species richness and the three phylogenetic indices in the whole water column and three depth zones. The numbers of taxa in the depth zones among the three fish groups are tabulated in the supplementary material. The calculation of the species richness and the three phylogenetic indices used 5° latitude bands between 75°S and 75°N and four depth zones (whole water column, surface (0–200 m), middle (201–1,000 m), and deep (1,001–6,000 m)), reflecting the photic, mesophotic and aphotic zones of light penetration. The Jaccard similarity coefficient was used, where a is the number of species common to samples i and j, b is the number of species present in sample i but absent from sample j, and c is the number of species present in sample j but absent from sample i. Species richness values were low in polar latitudes and high in the tropics and subtropics in all depth zones. The STL among all, bony and cartilaginous fishes showed a similar gradient to species richness, with a small dip at the equator.
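The STL, STL/spp and Jaccard definitions above can be written directly in code; the assemblage below reproduces the worked example (one class, one order, two families, four genera), with made-up taxon labels.

```python
# Minimal implementation of STL, STL/spp and the Jaccard coefficient as
# defined above. An assemblage is a list of species records tagged with
# class, order, family and genus (example labels are made up).

def stl(assemblage):
    classes  = {s["class"]  for s in assemblage}
    orders   = {s["order"]  for s in assemblage}
    families = {s["family"] for s in assemblage}
    genera   = {s["genus"]  for s in assemblage}
    return 5 * len(classes) + 4 * len(orders) + 3 * len(families) + 2 * len(genera)

def stl_per_species(assemblage):
    return stl(assemblage) / len(assemblage)

def jaccard(sample_i, sample_j):
    a = len(sample_i & sample_j)   # species shared by i and j
    b = len(sample_i - sample_j)   # only in i
    c = len(sample_j - sample_i)   # only in j
    return a / (a + b + c)

# The worked example from the text: 1 class, 1 order, 2 families, 4 genera.
example = [
    {"class": "C1", "order": "O1", "family": "F1", "genus": "G1"},
    {"class": "C1", "order": "O1", "family": "F1", "genus": "G2"},
    {"class": "C1", "order": "O1", "family": "F2", "genus": "G3"},
    {"class": "C1", "order": "O1", "family": "F2", "genus": "G4"},
]
print(stl(example))               # 5 + 4 + 6 + 8 = 23
print(stl_per_species(example))   # 23 / 4 = 5.75
print(round(jaccard({"sp1", "sp2"}, {"sp2", "sp3"}), 3))   # 1/3 -> 0.333
```

Note that STL counts distinct taxa at each level, which is why adding a congeneric species raises species richness without raising STL, lowering STL/spp.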
The latitudinal gradients of AvPD, STL and STL/spp of the all, bony and cartilaginous fish groups in the surface zone were similar to the gradients in the whole water column. AvPD and STL/spp of all, bony and cartilaginous fishes were all lower between 30°S and 30°N in the surface and middle zones and between 40°S and 40°N in the deep zone, and they all peaked at 55°S and 75°N. The STL of all and bony fishes in the surface, middle and deep zones was similar to species richness, being highest in the northern subtropical areas (between 25°N and 35°N), with a small dip at the equator in all three depth zones. Overall, we found that species were on average more phylogenetically closely related in the tropics and subtropics (between 30°S and 30°N) in all four depth zones. In contrast, the species assemblages in the southern temperate latitudes and the Arctic Ocean were more phylogenetically distantly related in all depth zones. The species in the Southern Ocean were more closely phylogenetically related in the surface and middle zones, but not in the deep zone, compared to the temperate latitudes. Correlations among the indices were strong (R2 = 0.34–0.99, p-values < 0.001). That the STL of all, bony, and cartilaginous fishes increased with species richness among the four depth zones reflected that areas with high species richness also had higher phylogenetic richness. Species richness and STL among the all, bony and cartilaginous fish groups were all highest in shallow water (<100 m), then decreased with depth. The AvPD and STL/spp were lowest shallower than 100 m and increased with depth among all, bony and cartilaginous fishes (i.e., deeper assemblages were less phylogenetically similar). Fish species assemblages formed spatially coherent clusters by latitudinal bands.
In contrast to the surface zone, below 200 m depth the first division in the dendrogram separated out the Southern Ocean, indicating that its fish fauna was dissimilar from all the rest (Fig. 7D). The most distinct fish assemblages separated at 500 m, closely followed by divisions at shallower than 100 m, 101–500 m, 501–1,400 m, 1,401–2,300 m, and deeper than 2,301 m. We found higher species and taxonomic richness (STL) within the tropics and subtropics, with a slight dip at the equator, and similar correlations across fish groups and depth zones. Phylogenetic relationships of species assemblages demonstrated greater closeness (lower AvPD and STL/spp) in warmer environments, encompassing lower latitudes and shallower waters, compared to colder environments, which included higher latitudes and deeper seas. Additionally, distinct species assemblages were evident across the various depth zones, indicating that deeper-sea species are not mere subsets of shallow-water assemblages. Here, we discuss possible mechanisms for our findings. In general, we found that gradients of higher taxonomic richness (STL) were similar to gradients of species richness among fish groups and depth zones. The only difference was that the species richness of cartilaginous fishes in the deep zone peaked at 15°N–35°N, where a few genera (i.e., Apristurus, Bathyraja, Galeus, Rajella) contained more species than other genera. Species richness and STL of all, bony, and cartilaginous fishes all peaked in the northern subtropics with a small dip near the equator. Our results showed that the latitudinal gradients in species richness of all, bony and cartilaginous fishes were significantly correlated with the three phylogenetic indices among all depth zones.
Areas with more species, such as the tropics and subtropics (between 30°S and 30°N), had relatively closer phylogenetic relationships (low AvPD and STL/spp) despite higher taxonomic richness (high STL). That is, the tropics and subtropics not only had more species with closer phylogenetic relationships, but also more diverse higher taxonomic levels. In contrast, areas with low species richness, such as temperate areas, had higher AvPD and STL/spp but lower STL, reflecting that species within assemblages at high latitudes were less phylogenetically related and drawn from fewer higher taxonomic levels. In addition, the results showed that AvPD and STL/spp were lower in the tropics and subtropics but peaked at 55°S and 75°N, respectively, from the surface to the deep zone. These results confirmed the initial expectation that the phylogenetic indices would have clear latitudinal gradients, because species richness also had clear latitudinal gradients in all depth zones. Here, we suggest two reasons that may explain the findings that species and higher taxonomic richness were more diverse, and species assemblages more phylogenetically closely related, in the tropics and subtropics than in temperate latitudes. First, temperature is the main driver affecting marine biodiversity at a global scale. The higher temperature in the tropics results in shorter generation times, higher rates of metabolism, faster rates of mutation, and faster selection, which generate and maintain higher biodiversity. This pattern of higher tropical speciation and diversification is not limited to marine species and fish; a global analysis of the LDG in mammals found speciation and diversification rates higher, and extinction and dispersal rates lower, in the tropics to subtropics than in temperate latitudes.
SimilarIt has been suggested that the absence of glaciations (that would have extirpated polar fauna during ice ages) has allowed more time for speciation and lower extinction rates in the tropics . HoweverSecond, habitat complexity and niche diversity are higher in the tropics and subtropics. The tropics and subtropics have higher habitat complexity, notably coral reefs which contain 27% of marine fish species , that prTogether these reasons allow phylogenetic diversity and species to originate and accumulate, even in the deep sea of the tropics and subtropics because species\u2019 may extend their distribution from the shallow water to the deep sea . In contSpecies richness, AvPD, STL and STL/spp of all and bony fishes decreased in the Southern Ocean in the whole water column, surface, and middle zones. The Southern Ocean has been a relatively enclosed environment compared to adjacent southern temperate latitudinal areas since the opening of the Drake Passage 23\u201325 million years ago, the formation of the Antarctic Circumpolar Current, and the subsequent ocean cooling, resulting in the evolution of a unique fish fauna with a high rate of endemism , and oveSpecies and phylogenetic richness not only declined into higher, colder latitudes but also with depth in the ocean. Similarly, zooplankton genetic diversity declined with depth to 1,500 m in the North Pacific subtropical gyre . BecauseOur results showed that species richness and STL among all, bony and cartilaginous fish groups were all highest in the shallowest waters and decreased with depth. This is consistent with the shallow depths being the driver for deep-sea species origins . In conti.e., Actinopterygii) and most of the species were from different families. At a depth of 2,701\u20132,800 m, 3 of 14 species were from the family Macrouridae, and the other 11 species were all from different families. 
Therefore, the fishes at 2,700–2,800 m were far less closely related than those at shallower depths. Deeper than 2,300 m, there was only one class. Species and higher taxonomic richness were higher in the tropics and subtropics than in high latitudes, with a small dip at the equator in all fish groups and depth zones. In addition, species assemblages had closer phylogenetic relationships (lower AvPD and STL/spp) in warmer than in colder environments (high latitudes and the deep sea). This result was significantly related to species richness and differed among fish groups across latitudes and depth zones. The results of this study support the hypothesis that a warmer environment fosters speciation and thus generates higher biodiversity and closer phylogenetic relationships. However, the cold environment of the Southern Ocean was dominated by endemic notothenioids, so it had closer phylogenetic relationships than other temperate latitudes because of its unique isolated environment. Supplemental Information 1: 10.7717/peerj.16116/supp-1."}
We use ROGUE to identify potential biomarkers and show unique enriched pathways between various immune cells. Here, we describe ROGUE (RNA-Seq Ontology Graphic User Environment), a user-friendly R Shiny application. User-friendly tools for the analysis of next-generation sequencing data, such as ROGUE, will allow biologists to efficiently explore their datasets, discover expression patterns, and advance their research by allowing them to develop and test hypotheses. The online version contains supplementary material available at 10.1186/s12859-023-05420-y. RNA sequencing (RNA-Seq) has become an extremely powerful tool for understanding biological pathways and molecular mechanisms. Technological advancements, both wet-lab and computational, have transformed RNA-Seq into a more accessible tool, giving biomedical researchers access to a less biased view of RNA biology and transcriptomics. The explosion of computational algorithms and pipelines in the last decade has given researchers the ability to perform rigorous analyses and explore RNA-Seq data. Differential expression analysis (DEA) is often combined with gene ontology (GO) analysis, pathway analysis, and clustering algorithms to characterize data and elucidate the processes and dynamics involved in transcription. The availability of RNA sequencing datasets is becoming more common due to increased support of open data by academicians and requirements by scientific journals and funding agencies to make publication-affiliated datasets publicly available. This has gifted the scientific community with an extensive repository of datasets. Here we present the RNA-Seq Ontology Graphic User Environment (ROGUE), an R Shiny application that allows biologists to perform differentially expressed gene analysis, gene ontology and pathway enrichment analysis, potential biomarker identification, and advanced statistical analyses. We demonstrate the capability of ROGUE by exploring the basic differences between CD4+ T cells, CD8+ T cells, and natural killer (NK) cells.
Furthermore, we show how ROGUE can be used to identify biomarkers and differentially enriched pathways present in similar immune cells in different diseases. User-friendly tools for RNA-Seq analyses will allow biomedical scientists with limited programming experience to explore these datasets. We propose that ROGUE will allow scientists to explore their datasets and also compare their findings with publicly available datasets, increasing the potential of data-driven biomedical discovery. Instructions are available at https://marisshiny.research.chop.edu/ROGUE/Instructions.pdf. ROGUE is an R Shiny web app with a graphic user interface (GUI) that accepts raw read counts or length-normalized counts quantified by packages such as HT-seq or RSEM. Gene expression comparison between samples and groups can be visualized with heatmaps, bar plots, and boxplots. Users can also use ROGUE to predict possible biomarkers by ranking genes with maximized fold change and minimized coefficients of variation in gene expression between groups of samples. The Welch's t-test and the Wilcoxon Rank Sum Test can also be used to rank genes by their difference in expression distribution between the groups using the Biomarker Discovery Tool. Gene set enrichment analysis (GSEA) is a computational method that determines whether a pre-ranked gene list shows statistically significant, concordant differences between two biological states (e.g., CD4+ vs. CD8+ T cells). GSEA between individual samples or groups can be performed using the Fast Gene Set Enrichment Analysis (fgsea) R package. The datasets used include one with samples collected at 0 (control), 2, 4, and 24 h, and dataset GSE40350, which contains CD8+ T cells treated with IL-2 and IL-15 for 0 (control), 4, and 24 h.
Dataset GSE101470 includes RNA-Seq from mature CD11b−/CD27−, CD11b−/CD27+, CD11b+/CD27+, and CD11b+/CD27low NK cells, as well as Stat5 double knock-in mice with N-terminal mutations in STAT5A and STAT5B that prevent STAT5 tetramerization but not dimerization. We performed basic analyses of CD4+ T cells, CD8+ T cells, and natural killer (NK) cells in datasets downloaded from the GEO Database, including GSE60424. To illustrate the basic features of ROGUE, we first performed DEA on CD4+ T cells versus CD8+ T cells from healthy humans in dataset GSE60424 using edgeR, and performed gene set analyses of genes downregulated in CD4+ T cells of Nras knockout (KO) mice and of genes downregulated in naïve CD8+ T cells when compared to CD4+ T cells. Comparing CD4+ and CD8+ T cells from the four healthy donors in the dataset, CD8+ T cells showed enrichment in genes related to immune effector process, immune response, and leukocyte activation, and expressed more genes at greater RPKM than the CD4+ T cells. We then identified potential biomarkers for CD4+ T cells, CD8+ T cells, and NK cells using the Biomarker Discovery tool, and cells from healthy controls clustered reasonably well based on the potential biomarkers discovered. CD4+ T cells were distinguished from CD8+ T cells and NK cells across both datasets, while CD8A and CD8B were identified as potential biomarkers for CD8+ T cells. Gene expression of the potential human NK cell biomarkers was enriched in mouse NK cells that expressed CD27, and the CD27+ NK cells formed a distinct cluster from the other NK cells in the t-SNE plot. Biomarker discovery is essential in biomedical and pharmaceutical research.
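The ranking idea behind the Biomarker Discovery Tool (maximize between-group fold change while minimizing within-group coefficient of variation, as described above) can be sketched in a few lines. The score combining the two criteria below is an illustrative assumption, not ROGUE's exact formula, and the expression values are toy data:

```python
import statistics

def biomarker_scores(expr_a, expr_b, eps=1e-9):
    """Rank genes by between-group fold change divided by the mean
    within-group coefficient of variation (CV). Higher score = better
    candidate. The exact combination ROGUE uses is not specified here;
    this ratio is an illustrative assumption."""
    scores = {}
    for gene in expr_a:
        a, b = expr_a[gene], expr_b[gene]
        mean_a, mean_b = statistics.mean(a), statistics.mean(b)
        fold = (mean_a + eps) / (mean_b + eps)
        fold = max(fold, 1 / fold)  # direction-agnostic fold change
        cv = (statistics.stdev(a) / (mean_a + eps)
              + statistics.stdev(b) / (mean_b + eps)) / 2
        scores[gene] = fold / (cv + eps)
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: a marker gene high and stable in group A versus a noisy gene
# expressed similarly in both groups.
group_a = {"CD4": [90, 100, 110], "NOISE": [50, 5, 95]}
group_b = {"CD4": [5, 10, 8], "NOISE": [40, 60, 55]}
print(biomarker_scores(group_a, group_b))
```

A gene with a large, consistent between-group difference (here the hypothetical CD4 values) ranks ahead of a gene whose means are similar or whose replicates are highly variable.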
Dataset GSE60424 contains RNA-Seq data from CD4+ T cells, CD8+ T cells, NK cells, neutrophils, and monocytes of MS patients before and after IFNβ treatment. MS is an inflammatory demyelinating disease of the central nervous system. We compared CD4+ T cells, CD8+ T cells, and NK cells isolated from patients pre- or post-treatment with IFNβ. CD4+ T cells showed upregulation of the MDA-5 signaling pathway, among other biological processes, while CD8+ T cells and NK cells showed upregulation of 2′–5′-oligoadenylate synthetase activity. The source code is available at https://github.com/afarrel/ROGUE. Here, we show that a user can explore RNA-Seq data obtained from public databases and use ROGUE to analyze that data to generate or support new or existing hypotheses. ROGUE provides non-R programmers access to many statistical and graphical R packages for RNA-Seq analyses through a GUI so they can analyze their data and create figures. Ideally, tools like ROGUE will allow more biomedical researchers to take advantage of available genomic data and help expedite needed bioinformatics analyses. ROGUE is available at https://marisshiny.research.chop.edu/ROGUE/. ROGUE is designed to be a user-friendly R Shiny application that allows users to perform basic tasks with available RNA-Seq data such as differentially expressed gene analysis and gene ontology analysis.
Other freely available web tools and portals have been developed to allow researchers to address discrete questions based on molecular and genomic datasets without the need for strong computational skills. Project Name: ROGUE. Project Home Page: https://marisshiny.research.chop.edu/ROGUE/. Github: https://github.com/afarrel/ROGUE. Operating System: Platform independent. Programming language: R. Other requirements: R environment and included packages. Tested on R version 3.6. Any restrictions to use by non-academics: none. Additional file 1: GSEA analysis of healthy human CD8+ T cells vs CD4+ T cells. Additional file 2: Evaluating biomarkers found in human CD4+ T cells, CD8+ T cells, and NK cells in mouse immune cells from different datasets. Additional file 3: Distribution of gene expression profiles in the differentially expressed pathways. Additional file 4: Evaluation of MDA-5 signaling, RIG-I signaling, and 2′–5′-oligoadenylate synthetase pre- and post-IFNβ treatment. Additional file 5: Available R Shiny RNA-Seq analysis tools. Additional file 6: List of case studies."}
Continuous application of BFR induces greater levels of acute fatigue than intermittent BFR that may translate into greater muscular training adaptations over time.We aimed to investigate acute changes before and after low-intensity continuous and intermittent blood flow restriction (BFR) deep-squat training on thigh muscle activation characteristics and fatigue level under suitable individual arterial occlusion pressure (AOP). Twelve elite male handball players were recruited. Continuous (Program 1) and intermittent (Program 2) BFR deep-squat training was performed with 30% one-repetition maximum load. Program 1 did not include decompression during the intervals, while Program 2 contained decompression during each interval. Electromyography (EMG) was performed before and after two BFR training programs in each period. EMG signals of the quadriceps femoris, posterior femoral muscles, and gluteus maximus, including the root mean square (RMS) and normalized RMS and median frequency (MF) values of each muscle group under maximum voluntary contraction (MVC), before and after training were calculated. The RMS value under MVC (RMS Numerous studies have reported that low-load resistance exercise in combination with blood flow restriction (BFR), also known as BFR training, elicits increases in both muscle size and strength, with benefits comparable to traditional high-load resistance training8. At the same time, BFR training can promote muscle activation to ensure a total power output similar to traditional training11. In addition, BFR training can enhance neural activation and promote the recruitment of type II motor units12. Therefore, the neuromuscular response generated by external pressure stimulation is also one of the main causes of muscle hypertrophy. Shinohara et al. have shown that muscle hypertrophy might be partially driven by neuromuscular responses accompanying BFR training13. 
Nevertheless, whether the neuromuscular component plays a pivotal role in the genesis of muscle adaptations to BFR training is not known. High-load resistance training can effectively promote muscle hypertrophy and increase strength in athletes. Previous studies have shown that, compared with non-compressive conditions, low-intensity compressive resistance training can increase the recruitment and discharge frequency of motor units and activate more fast muscle fibers. In addition, the increased electrical activity of muscles can stimulate muscle protein synthesis through the transcription of Ca2+ phosphatase and calmodulin-dependent kinase pathways. Therefore, increasing the training load intensity can strengthen EMG activity, which is related to an increase in blood lactate concentration and a greater metabolic demand in the muscles. At the same time, as the blood lactate concentration increases, the H+ concentration also increases, leading to the release of growth hormone (GH) and a hypertrophic response of fast muscle fibers. It is not yet clear whether neuromuscular factors play a crucial role in muscle adaptation during compressive resistance training, but the increase in fast-fiber recruitment is a good indication of the hypertrophic adaptation induced by low-intensity compression training. Abe and Yasuda calculated external occlusion pressure based on brachial artery resting systolic pressure and applied it in their experimental design17. Loenneke et al. pointed out that such methods do not represent an effective strategy for personalizing pressures for the lower limbs18. Participants of different limb girths experience different degrees of BFR under the same pressure conditions and produce completely different training fatigue responses. Laurentino et al.
were the first to use a Doppler probe to determine the pressure required for complete vascular occlusion in the upper thigh at rest19. Therefore, the relative pressure calculated from an individual-specific percentage of arterial occlusion pressure (AOP) deserves widespread application and promotion. In studies on muscle hypertrophy induced by BFR training, different methods of applying external pressure resulted in different degrees of fatigue. Moore and Pierce used arbitrary, subjective pressure values to implement a BFR training program20. In the field of multi-joint compound movements, Li Zhiyuan et al. concluded that applying 50% AOP can significantly improve the activation of the quadriceps femoris and posterior thigh muscle groups of male handball players at the same time, resulting in the best training effect21. In studies exploring the instantaneous changes in muscle activation caused by different external pressure stimuli before and after compressive resistance training, the exercises were mainly single-joint movements such as elbow flexion of the upper limb and isokinetic knee extension of the lower limb. Loenneke et al. found that knee extension with blood flow restricted at 40–50% AOP may change the acute response of the quadriceps femoris and improve the muscle activation level, whereas higher pressures do not cause these changes. However, the effects of continuous and intermittent BFR training on thigh muscle activation and fatigue during deep squats under the same external adaptive pressure conditions have not been thoroughly investigated. Therefore, this study aimed to investigate the characteristics of the instantaneous changes in muscle group activation and fatigue of the lower limbs of male handball players before and after squat training with two modes of continuous and intermittent BFR training under 50% AOP.
This study provides a theoretical basis and reference for the scientific selection and rational utilization of the application mode in BFR training. In this study, we hypothesized that the activation of the anterior and posterior thigh muscle groups would increase significantly during both continuous and intermittent BFR training programs. Twelve elite national players of the Beijing male handball team were recruited as participants. Before the experiment, the purpose, method, and possible risks were explained to the participants, and written informed consent was obtained from all participants. Before the test, the essentials of the tested action were explained to the participants, and they were asked to train as usual in the week before the beginning of the experiment. A fully automatic KAATSU Master 2.0 Package and a 5-cm-wide pressure band were used to digitally display the inflation pressure value. Other instruments included a Wave Plus wireless surface electromyography (EMG) tester and surface electrodes, a Panasonic HC-V100 camcorder, a Gymaware linear sensor device, a goniometer, a barbell rod and barbell plates, a set of Smith squat racks, a set of tape measures, and a metronome. Participants stood on the Smith squat rack with their gaze directed straight ahead and feet naturally spaced apart. Participants performed standard squats, characterized by achieving a knee joint angle of 60°–70°, with the thigh roughly parallel to the ground. The hip and knee were extended simultaneously during the squat movement. The knee joint angle was measured and monitored in real time using a goniometer, and the participants were prompted verbally. The participants controlled the timing and rhythm of each movement according to a metronome set directly in front of them.
The athletes' thigh circumference was measured and evaluated 48 h before the experiment. Due to the limitations of the study equipment, the final pressure was set to a percentage of arterial occlusion estimated from the thigh circumference, and the relative value of each person's 50% AOP cut-off pressure was determined according to the right thigh circumference table20. The athletes' right thigh circumference was 62.9 ± 6.1 cm, the occlusion pressure was uniformly selected as 40 mmHg22, and the inflation pressure was 150–180 mmHg: 150 mmHg for 5 athletes and 180 mmHg for the other 7 athletes. At the same time, the Gymaware linear sensor was used to measure the squat one-repetition maximum (1RM) for each athlete to determine the individualized load intensity for the pressurized squat exercise. Each participant underwent weight-bearing squat exercises under the two intervention conditions of continuous BFR training (Program 1) and intermittent BFR training (Program 2), with a load intensity of 30% 1RM and a relative applied pressure of 50% AOP. Automatic BFR cuffs were applied to the most proximal portion of the participant's thigh. The cuff was inflated to 50 mmHg for 30 s and then deflated for 10 s. The cuff was then inflated to 100 mmHg for 30 s and then deflated for 10 s (unless 100 mmHg was the target pressure). The cycle of cuff inflation/deflation was repeated, with the cuff pressure increasing in increments of 40 mmHg, until the target inflation pressure was reached. Before the cuff was inflated to the target pressure, it was pressurized to 50%, 75%, and 100% of the target pressure, respectively, with the pressure released immediately each time, as "inflation-deflation" cyclic pressure adaptation preparation.
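The stepwise cuff-conditioning ramp described above can be written out as a small schedule generator. This is only a sketch of the sequence as stated in the text (50 mmHg, then 100 mmHg, then 40 mmHg increments until the target, with 30 s holds and 10 s deflations); the function name and return format are illustrative assumptions:

```python
def cuff_schedule(target_mmhg, hold_s=30, deflate_s=10):
    """Incremental inflation/deflation cycles used to condition the limb
    before BFR training: 50 mmHg, then 100 mmHg, then +40 mmHg steps until
    the target pressure (values taken from the protocol text). Returns
    (pressure, hold seconds, deflate seconds) tuples."""
    steps, pressure = [], 50
    while pressure < target_mmhg:
        steps.append((pressure, hold_s, deflate_s))
        pressure = 100 if pressure == 50 else pressure + 40
    steps.append((target_mmhg, hold_s, 0))  # final inflation held for training
    return steps

for p, hold, deflate in cuff_schedule(180):
    print(f"inflate to {p} mmHg, hold {hold} s, deflate {deflate} s")
```

For the 180 mmHg athletes this yields inflation steps at 50, 100, 140, and 180 mmHg, consistent with the described 40 mmHg increments.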
The cuff was inflated to the target pressure before the first session and removed after the last session. The total duration of each program did not exceed 10 min. Before the test, participants underwent the following warm-up exercises: (i) 6 min of no-load power cycling (60–70 rpm); and (ii) three sets of 30% 1RM weighted squat exercises (five repetitions per set). Gymaware, a linear sensor, was used to test the 1RM of deep squats using the increasing-load test method, owing to the highly negative correlation between load and speed24. After the warm-up, practice and testing were carried out according to the following procedure: in the first set, bar speed was more than 1 m/s, and in the last set, bar speed was less than 0.5 m/s. The number of test sets was approximately 3–5, and the load increment of each set was 20–30 kg, depending on the weight and strength of the participant. Thigh circumference was measured with the participants' feet shoulder-width apart, placing a circumference tape measure at the line below the hips and measuring the thigh circumference horizontally. The left and right thighs were each measured three times, and the mean was calculated. The blood pressure limit at each stage was set according to the right thigh circumference. The rectus femoris (RF), vastus medialis (VM), vastus lateralis (VL), biceps femoris (BF), semitendinosus (SEM), and gluteus maximus (GM) of the right thigh were selected for testing. The electrodes were placed according to the requirements of the EMG manual25; the skin was cleaned before placement, hair was shaved, and the skin was abraded with sandpaper and then wiped with 75% medical alcohol to remove any oil on the skin's surface. Each surface electrode was placed on the most elevated part of the muscle and fixed along the direction of the muscle fibers to avoid vibration-induced interference.
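The two EMG measures used throughout the study, RMS amplitude and median frequency (MF), can be computed from a raw signal segment roughly as follows. This is a minimal NumPy sketch with a synthetic signal; it does not reproduce the study's actual cleaning, filtering, and smoothing pipeline:

```python
import numpy as np

def emg_rms(signal):
    """Root mean square amplitude of an EMG segment."""
    return np.sqrt(np.mean(np.square(signal)))

def emg_median_frequency(signal, fs):
    """Median frequency (MF): the frequency that splits the power
    spectrum of the segment into two halves of equal power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2)
    return freqs[idx]

# Synthetic check: a pure 50 Hz sine of amplitude 2 sampled at 1 kHz has
# RMS = amplitude / sqrt(2) and a median frequency at 50 Hz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
sine = 2.0 * np.sin(2 * np.pi * 50 * t)
print(emg_rms(sine), emg_median_frequency(sine, fs))
```

Normalizing a set's RMS by the RMS obtained during MVC testing (multiplied by 100) gives the %MVC values reported in the results; a downward shift in MF across sets is the fatigue indicator discussed later.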
After placing the surface electrodes, the electrode wires were connected to test the thigh muscles' MVC, and the corresponding muscle surface EMG signals were recorded. Specific testing methods are outlined in the EMG manual25. We collected and recorded the surface EMG values, which after processing were used as the RMS values under MVC (RMSMVC) for standardized processing. Before each squatting test, the camera and Waveplus wireless EMG acquisition system were set up. After the participants began to move, the acquisition system was turned on to collect the EMG data. All EMG signals from sets 1 to 4 of exercise were selected according to the synchronized video of the experiment. The original EMG data were cleaned, filtered, and smoothed, and their amplitude was standardized using the matching analysis software of the Emgserver instrument; the selected index was the RMS amplitude. The range of muscle exertion was selected from the original EMG, and the mean RMS was determined. The EMG RMS of each muscle obtained during MVC testing was defined as 100% MVC. The standardized EMG data processing system automatically divided the EMG RMS value obtained in each set by the EMG RMS of maximum voluntary contraction. All statistical calculations were computed using the SPSS 22.0 statistical software package, and a significance level of p < 0.05 was used. Two-way analysis of variance and the Mauchly sphericity test were used to evaluate the lower limb muscle groups' RMS and median frequency (MF) values during the weight-bearing squat in the pre-1, pre-2, post-1, and post-2 periods. If the Mauchly test P-value was < 0.05, the sphericity assumption was not satisfied, and the one-way ANOVA results prevailed; if the P-value was > 0.05, sphericity was satisfied, and the two-way ANOVA results were used.
Using the Bonferroni method, multiple comparisons were made between the four operation periods before and after BFR training to test for significant between-condition differences. The study was approved by the Fuzhou University Human Research Ethics Committee (LLWYH20200269), and all aspects of the study were conducted in agreement with the Declaration of Helsinki. All of the participants were fully informed about the purpose and experimental procedures of the study. Written informed consent was obtained from each participant prior to the study. The participants were informed that all data collected would be processed anonymously. The RMSMVC values were significantly different (P < 0.05). The time-by-interval-mode interactions were significantly different for the RF, BF, SEM, and GM RMSMVC values (P < 0.05). The RF, VM, VL and GM RMSMVC values decreased after removing the cuff at the end of deep-squat training with BFR; the RF and GM RMSMVC values changed significantly (P < 0.05), while the BF and SEM RMSMVC values increased, with significant changes in intermittent BFR training (P < 0.05). The RMSMVC values of the RF, VM, and VL decreased after cuff deflation. The RMSMVC values of the RF changed significantly after continuous BFR training (P < 0.05), while those of the VM and VL showed no significant change (P > 0.05). Most previous studies showed that after the implementation of BFR training, the maximum activation of the active muscles restricted by blood flow tended to decrease. Loenneke et al.18 performed BFR and non-BFR unilateral knee extension exercises, with the quadriceps as the main active muscle group, in 16 healthy adult males and found that the MVC value of the quadriceps decreased after leg extension exercises both with and without BFR.
Umbel et al.26 found that MVC values of concentric contraction decreased by 9.8% and 3.4% 24 h after unilateral knee extension exercises with BFR and returned to normal after 96 h, which may be related to delayed-onset muscle soreness. Fatela et al.27 performed knee extension exercises with different degrees of BFR in 14 adult males and found that the RMSMVC values of the VM and VL decreased significantly after exercise under 60% BFR and 80% BFR. This study found that, compared with knee extension exercise, squat exercise with BFR can reduce quadriceps activation. Additionally, compared with intermittent BFR training, continuous squat training with BFR led to a greater reduction, which may be related to acidosis caused by excessive accumulation of metabolites in active muscle cells. The tabulated results showed that the RMSMVC values of the BF and SEM increased after squat training with BFR, with a significant change after the intermittent training, whereas the RMSMVC value of the GM decreased and changed significantly after continuous BFR training. The results showed that the RMSMVC of the posterior thigh muscles tended to increase after squat training with BFR. Yasuda et al.28 also found that the activation contribution rate of the triceps brachii as an antagonist muscle increased from 40 to 60% after 30% 1RM multi-joint bench press exercise with BFR, suggesting that, compared with multi-joint exercise without BFR, BFR training can improve the maximum independent activation rate of antagonist muscles and thus effectively develop the strength of antagonist muscle groups. The results also showed that RMSMVC values of the BF and SEM increased significantly after BFR training.
This indicated that BFR training can provide an optimal acidic environment to activate type II motor units in the posterior thigh muscle group. Many studies have confirmed that the RMS value of active muscles increases during low-intensity BFR training. After BFR training, the RMS values of the anterior and posterior thigh muscle groups were higher than before BFR training. During continuous BFR training, the RMS values of the VL, BF, and SEM were significantly increased. After the cuff was removed, the RMS values of the anterior and posterior thigh muscle groups decreased compared with the values recorded before inflation at the end of the BFR program, but remained higher than the values recorded after inflation29. Fatela et al.27 also found that the activation levels of the RF and VM changed during knee extension exercises with BFR. Acute BFR stimulation increased the RMS values of the RF and VM; nevertheless, the RF and VM responded differently to different BFR stimulations. The activation level of the VM was significantly increased at 60% and 80% BFR (post-1) compared with pre-2, whereas the activity of the RF increased significantly only under 80% BFR, suggesting that greater blood flow restriction can induce a reflex increase in VM and RF activation. The results also showed that the VL, BF, and SEM had similar patterns of change before and after inflation in both the continuous and intermittent programs, and that intermittent training significantly increased VM activation. In addition, a previous study30 concluded that low-intensity BFR deadlift exercises can increase the activation of the distal muscles under BFR and of the proximal synergist muscles that are not restricted.
This study showed that low-intensity squats with BFR reduced the activation of the GM, suggesting that different BFR training modes have different effects on the activation of unrestricted proximal synergist muscles. The above results show that after the implementation of the BFR training program, activation of the thigh's anterior and posterior muscle groups increased. This is mainly caused by the acidic environment resulting from the accumulation of metabolites in the muscle, which can activate more type II muscle fibers, thus increasing the RMS value of the surface EMG signal31. Place et al.32 confirmed that the degree of intramuscular acidity and the reduction of Ca2+ uptake by the sarcoplasmic reticulum are the main factors affecting the decline in muscle contractile function. The present results likewise suggest that metabolite accumulation reduced Ca2+ uptake by the sarcoplasmic reticulum and ultimately accelerated muscle fatigue. Muscle fatigue is usually reflected in the MF, and a decrease in the MF value is strongly correlated with a decrease in the cross-bridge cycling rate. Pierce et al.15 pointed out that continuous knee extension exercise with BFR at 60% AOP can lead to a significant reduction in the MF values of the VL and RF. Neto et al.33 also performed a set of 80% 1RM high-intensity squats with 60% AOP, and the results showed that the MF values of the VM and VL decreased by 18.5% and 18.2%, respectively. Previous studies have shown that BFR training induces fatigue mainly by stimulating protein synthesis through the Akt/mTOR signaling pathway, and the decrease in MF values is sensitive to biochemical changes in type II muscle fibers34. After a short rest of 1 min following deflation, the MF values of the three anterior thigh muscles recovered to pre-inflation levels.
This indicates that the functional decline of the muscles caused by the two programs at the 50% AOP level is temporary and can be restored to pre-inflation levels after a short rest with deflation. Combining the above results, it can be concluded that neuromuscular fatigue is affected by the intermittent mode and the external BFR intensity: a higher BFR intensity causes greater fatigue and slower recovery. In addition, one study35 found that the degree of activation of the biceps brachii during an elbow flexion exercise was not affected by BFR; high and low BFR had similar adaptive effects on muscle hypertrophy, strength, endurance, and other aspects, but high BFR produced greater discomfort. In addition, Dandel et al.36 found that adding BFR to the intervals of high-intensity elbow flexion training did not promote activation of the biceps brachii or cause muscle hypertrophy. It has been suggested that BFR during low-load squat training can promote muscle activation and cause a hypertrophic response. The results of this study, similar to previous studies, showed that continuous and intermittent training of the upper and lower extremities with moderate-intensity blood flow restriction could effectively improve the activation of the prime movers while limiting discomfort37. However, there may be synergies between the thigh and buttocks in squat training. This study showed that the GM was complementary, and its activation increased significantly when anterior thigh activation was relatively low during BFR training. In the second, third, and fourth exercise sets, the activity of the anterior thigh muscles gradually increased, while the activity of the GM gradually decreased. This may be because the GM extends the hip while the quadriceps mainly extends the knee.
As adjacent active muscles in the important lower limb motor chain, the two muscles may be complementary in function to some extent, and the GM may recruit more motor units to compensate for a quadriceps deficit. This demonstrates that squat training with BFR at 50% AOP can induce marked activation of the muscle groups at the front and back of the thighs, while stimulation of the GM for hip extension was not significant. The authors believe the main reason is that the cuff inflation site principally occludes lower limb blood flow but has little effect on the gluteal muscles. At the same time, light-load squat training may not substantially challenge the GM; a larger range of hip extension or an increased load of 85% 1RM or more may induce greater GM activation. Due to the limitations of the research conditions, and considering the daily training of sports teams, the accumulated effects of the athletes' other physical and technical training may have interfered with the subsequent tests to a certain extent and affected the accuracy of the acquired EMG data. In addition, the study did not measure blood concentrations of lactic acid, creatine phosphate, or other blood metabolites, or the participants\u2019 subjective ratings of effort. To meet the needs of training practice, future studies should combine EMG indices with blood physiological and biochemical indicators and subjective measures, which would allow a more comprehensive evaluation of the mechanism and effects of BFR training38. A previous version of this manuscript was published as a preprint. In 30% 1RM squats with 50% AOP, the acute activation of the thigh muscle groups differed between continuous BFR training and intermittent BFR training: the degree of thigh muscle group activation induced was greater in continuous than in intermittent BFR training.
In the third and fourth sets, the RMS values of all the muscle groups except the VM increased significantly after training, but fatigue also occurred. In contrast, although the intermittent mode with deflation also produces fatigue, recovery is faster. Therefore, a continuous BFR training mode is recommended for trained athletes and an intermittent BFR training mode for beginners."} +{"text": "The present demand for child and adolescent mental health services exceeds the capacity for service provision. Greater research is required to understand the utility of accessible self\u2010help interventions, such as mobile apps. This study sought to investigate whether use of a mental health app, underpinned by CBT, led to changes in psychological distress amongst adolescents. Mechanisms of change were examined, specifically whether changes are attributable to cognitive strategies. This study utilised a multiple\u2010baseline single\u2010case experimental design, tracking variables across baseline and intervention phases. Surveys assessing participant experience were also administered. Five participants with moderate\u2010to\u2010severe levels of psychological distress engaged with a CBT\u2010based app over five weeks. Participants were recruited from both a well\u2010being service and the general population. Supplementary weekly calls to participants offered clarification of app content. A small overall effect of the intervention on psychological distress was evident; however, outcomes were dependent on the analysis conducted. The intervention appeared to promote an increase in use of adaptive cognitive strategies but not negative thinking styles. The CBT app did not promote changes in participant well\u2010being. Participant feedback highlighted practical challenges of utilising the app. The clinical benefits of app\u2010based CBT were small, and a range of barriers to engagement were recognised.
While further research is required, caution should be exercised in the interpretation of studies reporting on app effectiveness. Use of the CBT\u2010based app led to small improvements in psychological distress across three of five participants. Changes in distress appeared temporally related to adaptive cognitive strategies, but not to maladaptive cognitive styles. Use of the CBT\u2010based app did not lead to any reliable or clinically significant changes in well\u2010being. The CBT\u2010based app was considered by participants to be demanding on time and motivation. As the concept of \u2018psychological distress\u2019 is broadly defined, statistics on distress amongst young people are often not reported directly and instead rely on diagnostic classifications. A stepped\u2010care model is utilised within England; this approach proposes to cost less clinician time and enable faster treatment for young people. Ethical approval was granted by the NHS Health Research Authority (HRA) and a University Ethics Committee, and the study complied with the British Psychological Society code of human research ethics. This study utilised a mixed-methods single\u2010case series using a non\u2010concurrent multiple-baseline design. Single\u2010case experimental designs (SCEDs) are prospective investigations of manipulated variables, with regular repetition of measures across baseline and intervention phases, tracking variables across time. Within multiple\u2010baseline designs, participants complete baseline and intervention phases with staggered introduction of the intervention phase across participants (Christ). Individuals aged 13\u201318\u2009years were eligible to participate in the study if they had capacity to provide informed consent and were experiencing psychological distress. For those aged below 16\u2009years, additional informed consent from legal parents/guardians was required; adolescents aged 13\u201315\u2009years would be excluded if parental consent was not provided.
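"Reliable change" in this literature is commonly operationalised with the Jacobson\u2013Truax Reliable Change Index. A minimal sketch, assuming that convention; the numbers are purely illustrative and not values from this study:

```python
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """Jacobson-Truax RCI: raw change divided by the standard error of
    the difference. |RCI| > 1.96 is conventionally read as change
    beyond measurement error."""
    s_diff = sd_baseline * math.sqrt(2.0 * (1.0 - reliability))
    return (post - pre) / s_diff

# Hypothetical example: a distress score falling from 30 to 18 on a
# scale with baseline SD 5 and test-retest reliability 0.80.
rci = reliable_change_index(pre=30, post=18, sd_baseline=5, reliability=0.80)
```

With these illustrative numbers the index comfortably exceeds the 1.96 threshold; a "clinically significant" change would additionally require the post score to cross a normative cut-off.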
Individuals were excluded if accessing crisis support, to ensure necessary support was not withheld. Exclusions also applied to those unable to speak or understand English, as the app and measures were only available in English, and to cases where smart devices and the internet could not be accessed. A minimum of three participants is proposed to represent three replications of effect within SCEDs. Measures assessed psychological distress, along with processes consistent with the therapy models utilised within the app (Table). Once recruited, participants completed the measures online via the Qualtrics platform. Three baseline durations were identified a priori to increase experimental control. One comparison demonstrated a significant decrease in adaptive strategies between phases, but only a small effect. An overall weighted average Tau\u2010U calculation suggested a significant positive phase difference in adaptive strategies across participants, with a moderate effect size. Visual analysis of negative cognitive strategies (Figure) showed no clear phase difference. The CompACT\u20108 provided scores of behavioural awareness, openness to experience, valued action and psychological flexibility (Figure). The Framework method was utilised to analyse written feedback provided by participants on an online survey. This systematic approach utilised a matrix output, enabling findings to be summarised according to theme. Three main themes were generated using a largely deductive approach, with an inductive approach utilised if additional themes were identified within the data. The main themes included (a) intervention, (b) change, and (c) recommendations, outlined further in the Table. The primary aim of this study was to investigate evidence of change in psychological distress amongst adolescents while using a mental health app. This investigation did not find a replication of treatment effects across all five participants; the magnitude of change also depended on the method of analysis.
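The weighted-average Tau\u2010U summary described above can be sketched as follows. This is a minimal pairwise-nonoverlap version without baseline-trend correction, using illustrative series rather than the study's actual scores:

```python
def tau_u(baseline, intervention):
    """Pairwise nonoverlap Tau: compare every baseline point with every
    intervention point. Ranges from -1 to 1; for a distress measure,
    negative values indicate improvement (lower intervention scores)."""
    pos = sum(b > a for a in baseline for b in intervention)
    neg = sum(b < a for a in baseline for b in intervention)
    return (pos - neg) / (len(baseline) * len(intervention))

def weighted_tau_u(cases):
    """Overall effect: per-participant Tau averaged with weights equal
    to the number of pairwise comparisons in each case."""
    weights = [len(a) * len(b) for a, b in cases]
    taus = [tau_u(a, b) for a, b in cases]
    return sum(w * t for w, t in zip(weights, taus)) / sum(weights)

# Illustrative distress series for two hypothetical participants
cases = [
    ([22, 24, 23], [18, 17, 16, 15]),   # clear improvement
    ([20, 21, 20], [20, 19, 21, 18]),   # partial overlap
]
overall = weighted_tau_u(cases)
```

Published Tau\u2010U implementations typically also correct for baseline trend and attach p-values; the sketch shows only the core nonoverlap calculation that the overall weighted estimate aggregates.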
According to the primary method of analysis, visual inspection of phase differences, treatment effects in reducing psychological distress were demonstrated across three participants, all of whom were recruited from the community sample. Despite this, only two participants demonstrated reductions in distress across all methods of analysis. Furthermore, a small overall effect of the app was found in reducing psychological distress; however, no improvements were evidenced in participants' well\u2010being. The presented findings add to the mixed body of research investigating apps for adolescent mental health. For example, an RCT investigating the app \u2018CopeSmart\u2019 found no increases in well\u2010being, adaptive coping strategies or emotional self\u2010awareness, and no reductions in distress or dysfunctional coping strategies. Similarly, in the present study, changes were not evident in negative strategies (such as self\u2010blame). While skills in cognitive reappraisal facilitate regulation of negative affect, strategies such as rumination maintain or increase negative affect (Kauer et al.). It may be understood that rather than insight, subjective awareness of emotions and thoughts might instead elicit perseverative cognition or rumination (Volkaert et al.). The possibility of iatrogenic effects should not, however, be dismissed. The results of this study, in conjunction with the existing literature, do not demonstrate conclusive support for mental health apps. It may be possible that mobile devices are not conducive to therapeutic outcomes, and studies have begun highlighting the problematic nature of smartphone use. For example, adolescents exhibiting excessive mobile use also demonstrate more maladaptive cognitive strategies (Extremera et al.). A notable finding here was participant attrition; despite two recruitment methods, 55% of participants withdrew.
This is greater than the 47.8% dropout rate highlighted within a meta\u2010analysis exploring app studies (Torous, Lipschitz et al.). Without careful regulation, the potential harms of digital environments may be overlooked, for example potential addiction to devices and exposure to harmful content (Roland et al.). The integration of digital technologies, rather than the development of isolated tools such as apps, may instead be more beneficial in promoting care in a flexible and responsive manner (Roland et al.). Despite the study's strength in utilising a multiple-baseline design, several limitations are identified. First, this investigation lacked long\u2010term follow\u2010up, a consistent limitation across app studies (Badesha et al.). This study investigated evidence of change in psychological outcomes amongst five adolescents using a mental health app. The findings demonstrate that despite small reductions in psychological distress, the app did not lead to changes in psychological well\u2010being. Participants self\u2010reported subjective changes in their behaviour and thinking. These reports were partially supported by the quantitative findings, as participants evidenced increased adaptive cognitive strategies but no changes in negative thinking strategies. The large attrition rates suggest that there may be universal barriers to app engagement and adherence, in line with existing literature. Participants' feedback offered insights into the hindering aspects of the app, for example difficulties finding the time or motivation to engage with the content. Interpretation of the results should therefore consider such biases. Dr Kiran Badesha: contributed to conceptualization, investigation, methodology, recruitment, formal analyses, writing\u2014original draft and writing\u2014review and editing. Dr Sarah Wilde: contributed to conceptualization, methodology, supervision and writing\u2014review and editing.
Dr David L Dawson: contributed to conceptualization, methodology, resources, supervision and writing\u2014review and editing. The authors declare that there is no conflict of interest."} \ No newline at end of file