{"text": "PLoS Biology, volume 2, issue 6:In Integrative Annotation of 21,037 Human Genes Validated by Full-Length cDNA ClonesTadashi Imanishi, Takeshi Itoh, Yutaka Suzuki, Claire O'Donovan, Satoshi Fukuchi, et al.DOI: 10.1371/journal.pbio.00201621. The abbreviation FLJ was expanded incorrectly (as full-length long Japan) in the abbreviations list and in the first paragraph of the Results/Discussion section. FLJ stands for full-length cDNA Japan.2. In Table 1, the number of cDNAs and the number of library origins for two of the cDNA sources were incorrect. The correct numbers are as follows:"} {"text": "The coordination around mercury is completed by two bromido ligands resulting in a distorted tetra\u00adhedral arrangement.In the title polymeric complex, [HgBr DOI: 10.1107/S1600536808030055/si2110Isup2.hkl Structure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF report Additional supplementary materials:"} {"text": "Each NiII atom is chelated by two oxalate ligands and one 2,2\u2032-bipyridine, forming a slightly distorted octa\u00adhedral geometry. Oxlate acts as a bridge to link neighbouring pairs of NiII cations, forming a one-dimensional wave-like chain. The crystal showed partial inversion twinning.The title compound, [Ni(C DOI: 10.1107/S1600536808028389/cf2213Isup2.hkl Structure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF report Additional supplementary materials:"} {"text": "Inter\u00admolecular N\u2014H\u22efN hydrogen bonds link neighboring molecules into extended chains parallel to [100].In the title complex, [Ni(C DOI: 10.1107/S1600536808028171/bv2105Isup2.hkl Structure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF report Additional supplementary materials:"} {"text": "The four metal ions in the cluster are held together by four bridging hydroxide groups. Each NiII atom adopts a distorted octa\u00adhedral geometry. The title complex, [Ni DOI: 10.1107/S1600536810003697/rz2413Isup2.hkl Structure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF report Additional supplementary materials:"} {"text": "The NiII atoms are linked to each other, forming an infinite chain parallel to (In the title complex, [Ni(C DOI: 10.1107/S1600536808030377/dn2372Isup2.hkl Structure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF report Additional supplementary materials:"} {"text": "The Cd atoms are connected by two dicyanamide ligands, resulting in a neutral chain propagating parallel to [010].In the title compound, [Cd(C DOI: 10.1107/S1600536809046364/pv2226Isup2.hklStructure factors: contains datablocks I. DOI: crystallographic information; 3D view; checkCIF reportAdditional supplementary materials:"} {"text": "At the present state of knowledge, the emergence of the Gravettian in eastern Italy is contemporaneous with several sites in Central Europe and the chronological dates support the hypothesis that the Swabian Gravettian probably dispersed from eastern Austria.In the northern Adriatic regions, which include the Venetian region and the Dalmatian coast, late Neanderthal settlements are recorded in few sites and even more ephemeral are remains of the Mid-Upper Palaeolithic occupations. A contribution to reconstruct the human presence during this time range has been produced from a recently investigated cave, Rio Secco, located in the northern Adriatic region at the foot of the Carnic Pre-Alps. 
Chronometric data make Rio Secco a key site in the context of recording occupation by late Neanderthals and regarding the diffusion of the Mid-Upper Palaeolithic culture in a particular district at the border of the alpine region. As for the Gravettian, its diffusion in Italy is a subject of on-going research and the aim of this paper is to provide new information on the timing of this process in Italy. In the southern end of the Peninsula the first occupation dates to around 28,000 Numerous sites throughout the Italian Peninsula and the western Balkans document key events between the late Middle Palaeolithic and the Mid-Upper Palaeolithic. Focusing on the northern Adriatic Sea rim which includes the Venetian region and the Dalmatian coast, the millennia preceding the demise of Neanderthals are recorded in very few sites which displayed data of variable relevance Even scarcer in this area is the archaeological evidence of the Mid-Upper Palaeolithic, a period better known along the Tyrrhenian Sea and the southern Adriatic coasts, where evidence of intense Gravettian occupation can be found One of the most debated issue is whether the Gravettian developed from a local Aurignacian 14C BP in Paglicci Cave in the southern end of the Peninsula 14C BP , at 580 m asl on the Pradis Plateau in the eastern part of the Carnic Pre-Alps. The Pradis Plateau comprise an area of 6 sq km, enclosed on three sides by mountains peaking from 1,148 m to 1,369 m and to the south by the foothills, facing the present-day Friulian Plain . Rio SecThe presence of Palaeolithic settlements at Rio Secco Cave was detected in 2002 after a test-pit The cave is filled with an ensemble of sedimentary bodies of differing volume, shape, composition and origin, grouped into four macro-stratigraphic units and separated by erosional and sedimentary discontinuities Macro-unit BR1 includes layer 4 and an anthropic horizon containing Gravettian flint artifacts, layer 6. The most relevant features are angular to subangular stones, with fragments of karst limestone pavement that originated from the collapse of the vault. Layer 6, with organic matter and micro-charcoal has been exposed at the entrance of the cave shelter, approximately 20 cm below the top of BR2: it is thin, planar, discontinuous, and contains rare bones and lithics .Macro-unit BR2 is a massive open-work stone-supported breccia made of angular boulders and randomly deposited stones. It lies in the external zone but ends 1 m behind the drip line in the SE zone of the cavity, where it seals the layer 5 top. Large patches have been reworked by marmots, as demonstrated by bones, an articulated skeleton found within the tunnels, several burrows and dens.The sedimentary body below BR2 is composed of stones and loamy fine fraction and is labeled BIO1 due to the intense bioturbation caused by the activity of marmots, responsible for mixing, displacing portions of anthropic sediment, and scattering Mousterian flint implements, bones and charcoals. At the top of this macro-unit, one finds layer 5 top, a brown level of variable thickness with archaeological content. Due to its variable thickness, layer 5 top has been locally divided in two arbitrary cuts, I and II. Below, the loamy, dark yellowish-brown layer 7 has been found only in some squares under the cave vault and not in the external zone, where it is cut by the burrows. The upper boundary with layer 5 top is marked by an increasing frequency of bones and lithics, some of which also bear signatures of accidental heating. 
Sandwiched between the two anthropic horizons layers 7 and 8, layer 5 is made of stones and loamy fine fraction with dispersed bones and lithic implements frequently affected from post-depositional alteration. Layer 8 continues in the inner cavity and is best described as 10 cm thick, stony, with dark brown loamy fine fraction, frequent tiny charcoals, small and burnt bones. Layer 8 lies over layer 9, possibly a fifth macro-unit, made of stones and yellowish brown sandy-loam, with no charcoal or other finds.The archaeological contents of BR1 and BIO1 include numerous lithic artifacts ascribed to the Middle Palaeolithic and Upper Palaeolithic (layer 6 and correlated arbitrary cuts 4c and 4d) and a few bone retouchers Evidence for the use of fire has been found in layers 8 and 7 by tiny dispersed charcoals, burnt bones and heat-affected flints. In layer 6 two hearths have been brought to light, even if partially affected from post-depositional disturbances, labeled as US6_SI and US6_SII. The former is an agglomeration of charcoals mostly disaggregated around a large piece of charred wood . This heEvery stratigraphic unit contained animal bone remains. The colonization of the cave fill by marmots is clearly documented by diagnostic signatures observed in BR1 and BR2, such as dens, chambers and articulated skeletons. There are fewer faunal remains in the Gravettian layers in comparison with the Mousterian.Capra ibex and Rupicapra rupicapra) and remains of Bos/Bison (Bison priscus/Bos primigenius). Traces of human modification on the bones include cut-marks on shafts of caprids, partly combusted, and on a marmot clavicle. One partially burned epiphysis of the scapula of Castor fiber has been found associated to the hearth US6_SI.The archaeozoological analysis, still in progress, reveals among the ungulates the presence of caprids predominate over the ungulates, which rather than caprids (chamois and ibex) or bovids, consist more of cervids such as red deer, roe deer, elk and wild boar . Bones are mostly fragmented, due to post-depositional processes as well as human and carnivore activity. Human interest in ungulates is evidenced by cut marks on red deer. Also the remains of Ursus spelaeus and Ursus sp. from layers 7 and 5 top show traces of butchering, skinning and deliberate fracturing of long bones This faunal association with cervids and, in particular, deer, elk, roe deer and wild boar is indicative of forest vegetation and marsh environment somewhere in the Pradis Plateau. The presence of bovids and caprids suggest the existence of patchy woodland compatible with the mountain context. Cave bears were well adapted to this kind of environment, and used the cavities for hibernation, as suggested from the faunal assemblage recovered during the last field-campaigns.-89, GRS13SP57-138, GRS13SP57-153, GRS13SP57-125, GRS13SP57-37, GRS13SP57-11, GRS13SP57-18, GRS13SP57-46, GRS13SP57-2, GRS13SP57-4.All necessary permits were obtained from the Archaeological Superintendence of the Friuli-Venezia Giulia for the described study, which complied with all relevant regulations. The identification numbers of the specimens analyzed are: GRS13SP57Repository information: the specimen is temporary housed at the University of Ferrara, in the Section of Prehistory and Anthropology, Corso Ercole I d\u2019Este Ferrara, Italy, with the permission of the Archaeological Superintendence of the Friuli-Venezia Giulia.We selected 10 well preserved thick cortical bone fragments with and without cut marks from each layer. 
Four bones from layer 7 (three of them with cut marks), four bones from layer 5 (three with cut marks) and two charcoal samples from the hearth SI of layer 6.2 effervescence was observed, usually for about 4 hours. 0.1 M NaOH was added for 30 minutes to remove humics. The NaOH step was followed by a final 0.5 M HCl step for 15 minutes. The resulting solid was gelatinized following Longin (1971) at pH 3 in a heater block at 75\u00b0C for 20 h. The gelatin was then filtered in an Eeze-Filter\u2122 (Elkay Laboratory Products (UK) Ltd.) to remove small (<80 \u00b5m) particles. The gelatin was then ultrafiltered with Sartorius \u201cVivaspin 15\u201d 30 KDa ultrafilters Bone collagen was extracted at the Department of Human Evolution, Max Planck Institute for Evolutionary Anthropology (MPI-EVA), Leipzig, Germany, using the ultrafiltration method described in Talamo and Richards The collagen extract was weighed into pre-cleaned tin capsules for quality control of the material. Stable isotopic analysis was evaluated using a ThermiFinnigan Flash EA coupled to a Delta V isotope ratio mass spectrometer.The two charcoal samples were sent directly to the Klaus-Tschira-AMS facility of the Curt-Engelhorn Centre in Mannheim, Germany, where they were pretreated with the ABA method At Rio Secco Cave the C:N ratio of all the samples are 3.2 which is fully within the acceptable range (between 2.9 and 3.6), and all of them displayed a high collagen yield, mostly ranging between 2.4 to 8.2%, substantially higher than 1% of weight for the standard limit Once these criteria were evaluated, between 3 and 5 mg of the collagen samples were sent to the Mannheim AMS laboratory (Lab code: MAMS), where they were graphitized and dated 14C, equivalent to ca. >48,000 14C years BP) estimated from pretreated 14C free bone samples, kindly provided by the ORAU and pretreated in the same way as the archaeological samples.The radiocarbon results are listed in 14C years BP. The four dates of layer 5 range from 45,740 to 43,210 14C years BP. The uppermost layer (layer 6), which was identified as a Gravettian layer, ranges from 29,390 to 28,995 14C years BP. There is no discrepancy between the results obtained on bones with cut marks and without cut marks.The uncalibrated radiocarbon dates of late Mousterian (layer 7) range from >49,000 to 44,560 14C BP but the charcoal result, pretreated with ABOX-SC, displayed an age of 42,000\u00b1900 14C BP. The main argument for this difference has to be found, as described above, in the stratigraphic entities of the layer, in fact it contains frequent tiny charcoals of undetermined conifer, small bones and burnt bones. Moreover deformations, removal and various crossings by marmots and other minor bioturbations affect this layer. In addition, a test-pit opened during the last field campaign (summer 2013) had detected no archaeological traces at 1,5 meters underneath this layer, thus excluding possible pollution from older deposits. For this reason we considered the youngest date as an outlier.A series of radiocarbon dates were previously obtained from layers 8, 5 and 6 14C Age 37,790\u00b1360) Layer 5 has produced an age that is too young compared with our new results in the layer below layer 7 (layer 8) 14C Age 42,000\u00b1900) in layer 8 is confirmed to be an outlier. 
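Setting the dates aside for a moment, the collagen screening described above reduces to two numeric checks per sample (an atomic C:N ratio between 2.9 and 3.6 and a collagen yield above 1% of bone weight) before 3 to 5 mg aliquots are sent for graphitization and AMS measurement. A minimal Python sketch of that screening logic; the sample names and values below are hypothetical, not the Rio Secco measurements:

```python
# Hypothetical screening of collagen extracts against the quality criteria
# quoted in the text: atomic C:N between 2.9 and 3.6, collagen yield > 1 wt%.

samples = [
    # (sample id, atomic C:N ratio, collagen yield in % of starting bone weight)
    ("bone_layer7_a", 3.2, 4.1),
    ("bone_layer5_b", 3.2, 8.2),
    ("bone_layer5_c", 3.7, 0.8),   # would fail both checks
]

CN_RANGE = (2.9, 3.6)   # acceptable C:N range for well-preserved collagen
MIN_YIELD = 1.0         # minimum collagen yield (% of bone weight)

def passes_quality(cn_ratio: float, yield_pct: float) -> bool:
    """Return True if a collagen extract meets both dating criteria."""
    return CN_RANGE[0] <= cn_ratio <= CN_RANGE[1] and yield_pct > MIN_YIELD

for name, cn, yld in samples:
    verdict = "send 3-5 mg for AMS dating" if passes_quality(cn, yld) else "reject"
    print(f"{name}: C:N={cn}, yield={yld}% -> {verdict}")
```

Only extracts passing both checks would proceed to graphitization, as described for the Rio Secco bone samples.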
OxCal finds a good agreement index , between the full set of finite radiocarbon dates and the stratigraphic information; the results of the outlier detection method confirm ideal posterior probability for all the samples.A start calibrated boundary for the lower part of the sequence (Layer 7) at Rio Secco Cave cannot be defined. What we can determine is that the lower level of Layer 7 is older than 49,000 The upper boundary of layer 7, calculated by OxCal, ranges from 49,120 to 47,940 cal BP (68.2%); the layer 5 top ranges from 47,940 to 45,840 cal BP (68.2%) .uAround the Alpine regions Neanderthal sites with comparable age ranges are rare The charcoal samples, from the archaeological horizon US 6_SI located between layers BR2 and BR1 range from 33,480 to 30,020 cal BP (68.2%) . These rIt should be noted that the charcoal samples dated at Mannheim yielded consistent age with the previous radiometric dates obtained at Poznan for the same horizon.Here it is useful to remember that strong progress has been achieved in the last decade on the radiocarbon method. Calibration is now possible back to 50,000 cal BP An accurate sample selection, more specialized pretreatment protocols, the control of isotopic values of bone collagen, in case the samples pretreated were bones and the requirement of several dated samples per layer are fundamental criteria that should be considered in order to establish the radiocarbon chronology of the archaeological sites.14C years BP.Normally the risk of underestimating the true age of the samples is higher when the samples are at the limit of the radiocarbon method. However the chronological reassessment of Gei\u03b2enkl\u00f6sterle, Abri Pataud, Fumane Cave and Mochi rockshelter sites Bearing in mind this fundamental issue, Rio Secco Cave layer 6 shows the newest radiometric assessment of the Italian late Mid-Upper Palaeolithic. Moreover the comparison with the single dates of layer 23 in Paglicci Cave, permits to ascribe Rio Secco as the oldest Early Gravettian site in Italy.At this stage of our investigation, the backed pieces and the burins introduced and reduced on site are an expression of short term occupations by hunter gatherers equipped with previously retouched tools made of high quality flints collected outside the Carnic Pre-Alps The appearance of the early Gravettian in Europe predates the last phases of the Aurignacian In central Europe between northern Austria, Moravia and southern Poland one finds a second early Gravettian techno-complex, named the Pavlovian, Furthermore, in the Italian Peninsula local developments of the Gravettian have not been recorded so far Current evidences make us inclined on the cultural diffusion hypothesis, and the Rio Secco site provides new insight on the two natural corridors used to reach the Italian Peninsula, the Adriatic southern coast from Croatia At the junction between the North Adriatic Plain and the eastern Alps, the chronometric refinement of a new site, Rio Secco Cave, contributes to enhance the investigation of the prehistoric human occupation during the mid-Late Pleistocene. Although not completely explored, Rio Secco Cave fills an important chronological gap and preserves an archive of potential interest for understanding the study of the late Neanderthals, the dispersal of Mid-Upper Palaeolithic populations and the diffusion of the Gravettian culture. 
Nevertheless, the new set of dates does not cover the millennia of the Middle-Upper Palaeolithic transition in the second half of MIS3, a period chronometrically secured from key-sequences in neighboring regions 14C results show that the excavated archaeological horizon Layer 6 belongs to the early Gravettian time period. At the present state of knowledge, with our new 14C dates, the emergence of the early Gravettian in eastern Italy is contemporaneous with the Swabian Gravettian and the Pavlovian.The continued implementation of the project with fieldwork and laboratory studies will provide new elements necessary to better understand the settlements in this area, previously considered so marginal in comparison with the North Adriatic Plain, extending towards the south. At the present stage of research, the Gravettian archaeological record at Rio Secco Cave is scarce compared with the Mousterian one, due to the thinning of layer 6 and its partial reworking produced by illegal excavations in the inner cavity. We cannot exclude that the rockfall that occurred after the late Middle Palaeolithic induced the Gravettian foragers to place their settlement under the present-day rockshelter just in front of the cave entrance. Nevertheless, the few flint artifacts give economic hints of potential interest. The The broad expansion of Swabian Gravettian and Pavlovian techno-complexes is explained by high mobility patterns of the hunter-gatherers with transport of exogenous raw materials up to 200 km Although the absence of diagnostic lithic tools at Rio Secco Cave layer 6 doesn\u2019t allow a correlation of the lithic assemblages with the central European techno-complexes, the radiometric dates support the hypothesis of dispersal of the Swabian Gravettian probably from the eastern Austria . In the Table S1Radiocarbon dates on key Gravettian sites (DOCX)Click here for additional data file."} {"text": "Avian influenza viruses pose a serious pandemic threat to humans. Better knowledge on cross-species adaptation is important. This study examined the replication and transcription efficiency of ribonucleoprotein complexes reconstituted by plasmid co-transfection between H5N1, H1N1pdm09 and H3N2 influenza A viruses, and to identify mutations in the RNA polymerase subunit that affect human adaptation. Viral RNA polymerase subunits PB1, PB2, PA and NP derived from influenza viruses were co-expressed with pPolI-vNP-Luc in human cells, and with its function evaluated by luciferase reporter assay. A quantitative RT-PCR was used to measure vRNA, cRNA, and mRNA levels for assessing the replication and transcription efficiency. Mutations in polymerase subunit were created to identify signature of increased human adaptability. H5N1 ribonucleoprotein complexes incorporated with PB2 derived from H1N1pdm09 and H3N2 viruses increased the polymerase activity in human cells. Furthermore, single amino acid substitutions at PB2 of H5N1 could affect polymerase activity in a temperature-dependent manner. By using a highly sensitive quantitative reverse transcription-polymerase chain reaction, an obvious enhancement in replication and transcription activities of ribonucleoproteins was observed by the introduction of lysine at residue 627 in the H5N1 PB2 subunit. Although less strongly in polymerase activity, E158G mutation appeared to alter the accumulation of H5N1 RNA levels in a temperature-dependent manner, suggesting a temperature-dependent mechanism in regulating transcription and replication exists. 
H5N1 viruses can adapt to humans either by acquisition of PB2 from circulating human-adapted viruses through reassortment, or by mutations at critical sites in PB2. This information may help to predict the pandemic potential of newly emerged influenza strains, and provide a scientific basis for stepping up surveillance measures and vaccine production. Influenza A virus contains eight single-stranded RNA segments. The negative-sense viral RNA (vRNA) segments act as templates for messenger RNA (mRNA) synthesis in transcription, and for complementary RNA (cRNA) synthesis which is used for replication of vRNA. Both transcription and replication are performed by viral RNA-dependent RNA polymerase (RdRp) inside the nucleus of infected cells Since 1997, sporadic human infections with highly pathogenic H5N1 viruses have been reported. Although an efficient and sustained transmission of highly pathogenic H5N1 viruses in humans has yet occurred It is clear that preferential binding of haemagglutinin (HA) to terminal \u03b1-2,3 and \u03b1-2,6-linked sialic acid receptors on host cell surface is not the sole barrier of cross-species infection in-vitro model to examine the polymerase activity of viral RNP complexes. Cells were maintained in Dulbecco\u2019s modified Eagle\u2019s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) at 33\u00b0C or 37\u00b0C in a 5% CO2 incubator.Human embryonic kidney 293T cells were used as an cDNA clones originated from four influenza virus strains were used to generate different RNP complexes. These viruses include: A/Thailand/1(KAN-1)/2004 (H5N1), representing highly pathogenic influenza A H5N1 viruses; A/HongKong/CUHK-72079/2009 (H3N2), representing seasonal H3N2 viruses; and A/Auckland/1/2009, representing the 2009 pandemic virus (H1N1pdm09). The A/WSN/33 (WSN) H1N1 virus was used as a reference strain in this study.HindIII and NotI restriction sites, whereas the BamHI and NotI restriction sites and the KpnI and NotI restriction sites were used for PA and NP, respectively. Point mutations were introduced into the PB2 plasmid of H5N1 to generate three different mutants with the following nucleotide substitutions: (i) 473, A\u2192G causing E158G, (ii) 811, A\u2192G causing T271A, (iii) 1879, G\u2192A causing E627K. The identities of these clones were confirmed by sequencing.The protein expression plasmids for three polymerase subunits and nucleoprotein (NP) of H1N1pdm09 and H3N2 had been described previously Various combinations of PA, PB1, PB2 and NP-expression plasmids derived from different subtypes of influenza virus were used to generate hybrid viral RNP complexes. 293T cells in 48-well plates were co-transfected with 0.4 \u00b5g of each plasmid and pPolI-NA by using Lipofectamine 2000 for 48 hrs at 33\u00b0C or 37\u00b0C.Renilla luciferase gene.Luciferase reporter assay was performed as described before Total RNA was extracted from transfected cells using TRIzol plus RNA purification kit (Invitrogen) and followed by DNaseI treatment . A previously described approach was used to achieve specific quantification for each RNA species Applied Biosystems, Foster City, CA). Five microlitres of the corresponding cDNA sample were added to the 25-\u00b5L reaction. Primers were 5'-CCG GCA AAG TGA TGT GTG TGT G-3' (corresponding to nucleotide 806 to 827 of NA cRNA) and 5'-CCG AAA ACC CCA CTG CAG ATG-3' (complementary to nucleotide 900 to 920 of NA cRNA). 
Reactions were first incubated at 50\u00b0C for 2 min, followed by 95\u00b0C for 10 min, and subjected to 40 cycles of 95\u00b0C for 15 s and 60\u00b0C for 1 min. The concentration of viral RNA were normalized with the corresponding input 5 S rRNA by primer 5\u2032- TAC GGC CA TAC CAC CCT GAA C -3\u2032 and primer 5\u2032- CGG TAT TCC CAG GCG GTC T -3\u2032.Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) was used to quantify mRNA, vRNA and cRNA levels using Power SYBR\u00ae Green PCR Master Mix ; H3N2 showed no significant difference between 33\u00b0C and 37\u00b0C ; whereas H5N1 showed a significantly higher activity at 37\u00b0C .To characterize the effect of temperature on avian and human influenza viruses, we compared the polymerase activities of parental H5N1, H1N1pdm09 and H3N2 RNP complexes in human cells under incubation at 33\u00b0C and 37\u00b0C, mimicking physiological temperatures of human upper and lower respiratory tracts, respectively. H1N1pdm09 showed a significantly higher activity at 33\u00b0C than 37\u00b0C (The polymerase activities of hybrid RNP complexes are shown in P<0.001) . In lineP<0.001) .P\u200a=\u200a0.002) compared to 37\u00b0C (P<0.001) than at 33\u00b0C . In contP<0.001) .P<0.001; H1N1pdm09\u22360.3-fold at 33\u00b0C, P<0.001; 0.1-fold at 37\u00b0C, P<0.001) .P\u200a=\u200a0.002), but not at 37\u00b0C , but a lower activity was observed at 33\u00b0C .On the other hand, when the H3N2 NP or H1N1pdm09 NP was replaced by H5N1 NP, a slight decrease in activity was observed at 37\u00b0C .Taken together, substitution of the H5N1 RNP complex with PB2 derived from either H3N2 or H1N1pdm09 resulted in a substantial increase in polymerase activity, and the effect was more pronounced at 33\u00b0C. Substitution of the H5N1 RNP complex with PA derived from H1N1pdm09 also achieved a pronounced increase in activity at 33\u00b0C. In contrast, substitution of the H5N1 RNP complex with PB1 derived either from H3N2 or H1N1pdm09 decreased the polymerase activity of H5N1.P<0.001), respectively (The effects of H5N1 PB2 mutations on polymerase activity are shown in ectively . These mectively . Among tP\u200a=\u200a0.01) and 3.4-fold (P\u200a=\u200a0.03), respectively (P<0.001), 2.8-fold (P\u200a=\u200a0.04) and 3.2-fold (P\u200a=\u200a0.03), respectively. Among all the substitutions, E158G showed a stronger transcription activity compared to others at 33\u00b0C, whereas no differences among the three mutations were observed at 37\u00b0C.We further investigated whether the higher activity associated with those mutations was linked to an increase in transcription (mRNA) and/or replication (cRNA and vRNA synthesis) by using a real-time PCR-based quantitative assay ectively . On the The cross-species infection of influenza virus, probably of avian origin, has caused a serious pandemic in 1918 We found that the polymerase activity of H5N1 could be increased to a very large extent by substituting the RNP complex with a PB2 derived from either H3N2 or H1N1pdm09, and the effect was more pronounced at 33\u00b0C. This is an important concern as such reassortants may be able to adapt to human upper airway with increased human to human transmission efficiency. By contrast, mammalian PB1 abolished polymerase activity of H5N1-originated polymerase in human cells. 
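The quantitative comparisons above rest on normalising each viral RNA species (mRNA, cRNA and vRNA) to the 5S rRNA of the same sample and then expressing it relative to a reference RNP, for example the parental H5N1 complex. A minimal ΔΔCt-style sketch of that calculation; the excerpt does not state whether the authors used ΔΔCt or standard curves, and the cycle-threshold (Ct) values below are invented, so this is an illustration rather than their exact pipeline:

```python
# Illustrative ΔΔCt relative quantification: each target Ct is normalised to the
# 5S rRNA Ct of the same sample, then expressed relative to a reference sample.
# All Ct values below are made up for the example.

def relative_level(ct_target, ct_5s, ct_target_ref, ct_5s_ref):
    """Fold change of a viral RNA species versus the reference RNP (2^-ΔΔCt)."""
    d_ct_sample = ct_target - ct_5s        # ΔCt of the sample of interest
    d_ct_ref = ct_target_ref - ct_5s_ref   # ΔCt of the reference (e.g. parental RNP)
    return 2 ** -(d_ct_sample - d_ct_ref)

# Hypothetical Ct values for vRNA of a PB2-mutant RNP versus the parental RNP at 33 °C
parental = {"vRNA": 24.0, "5S": 15.0}
mutant = {"vRNA": 21.5, "5S": 15.1}

fold = relative_level(mutant["vRNA"], mutant["5S"], parental["vRNA"], parental["5S"])
print(f"vRNA level of mutant relative to parental RNP: {fold:.1f}-fold")
```

Fold changes of this kind underlie the x-fold comparisons at 33°C and 37°C quoted above.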
These results clearly demonstrated that optimal polymerase activity, due to a better compatibility of avian PB1 and mammalian PB2, is probably required for efficient viral replication in human cells In addition to PB2, mutations in PA subunit have been reported to affect viral RNA replication The function of avian polymerase in human cells may be improved by substitution of H5N1 PB2 subunit with \u201chuman\u201d residues. Previous studies have shown that 627K in PB2 enhanced polymerase activity The differential quantitative analysis of different viral RNA species revealed that mutations E158G, T271A and E627K on PB2 enhanced both the transcription and replication activity of viral polymerase. However, notable differences in the profile of transcriptive (mRNA) and replicative (vRNA and cRNA) intermediates were observed between 33\u00b0C and 37\u00b0C. Similar to earlier study, the K627E mutation significantly reduced vRNA and cRNA promoter binding activities of PB2 in avian H5N1 virus The mechanism behind how mutations and subunits of different species origin influence the transcriptional and replication activity of RNP complex remains elusive. One possibility is differential thermal stability of RNP complex with positive and negative strand template RNA at restrictive and permissive temperatures, as suggested in seasonal H1N1 virus In conclusion, substitution of H5N1 RNP complexes with subunits, especially PB2, derived from H3N2 or H1N1pdm09 viruses could remarkably increase its replication and transcription activity in human cells. This indicates that some residues in human PB2 subunit might be involved in human adaptation. By using a highly sensitive quantitative RT-PCR, consistent with the result of polymerase activity, an obvious enhancement in replication and transcription activity of RNPs was observed by introduction of lysine at residue 627 in PB2 subunit. Although less strongly in polymerase activity, the temperature dependency of E158G mutation appeared to alter the accumulation of viral RNA levels, suggesting a temperature-dependent mechanism in regulating transcription and replication exists."} {"text": "Genetic reassortment plays a critical role in the generation of pandemic strains of influenza virus. The influenza virus RNA polymerase, composed of PB1, PB2 and PA subunits, has been suggested to influence the efficiency of genetic reassortment. However, the role of the RNA polymerase in the genetic reassortment is not well understood.Here, we reconstituted reassortant ribonucleoprotein (RNP) complexes, and demonstrated that the PB2 subunit of A/HongKong/156/1997 (H5N1) [HK PB2] dramatically reduced the synthesis of mRNA, cRNA and vRNA when introduced into the polymerase of other influenza strains of H1N1 or H3N2. The HK PB2 had no significant effect on the assembly of the polymerase trimeric complex, or on promoter binding activity or replication initiation activity in vitro. However, the HK PB2 was found to remarkably impair the accumulation of RNP. This impaired accumulation and activity of RNP was fully restored when four amino acids at position 108, 508, 524 and 627 of the HK PB2 were mutated.Overall, we suggest that the PB2 subunit of influenza polymerase might play an important role for the replication of reassortant ribonucleoprotein complexes. 
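For the first reconstitution study above, the polymerase-activity values come from a firefly reporter (pPolI-vNP-Luc) read alongside a co-transfected Renilla luciferase gene. A minimal sketch, assuming the Renilla signal serves as a transfection-efficiency control (an arrangement the excerpt implies but does not spell out); the light-unit values are invented:

```python
# Minimal sketch of reporter normalisation for RNP reconstitution assays:
# firefly luciferase (viral-promoter reporter) divided by Renilla luciferase
# (assumed transfection control), then expressed relative to a reference RNP.
# The raw light-unit values are invented for illustration.

def polymerase_activity(firefly, renilla):
    """Normalised reporter activity for one transfected well."""
    return firefly / renilla

def fold_vs_reference(sample, reference):
    """Fold change of normalised activity relative to a reference RNP."""
    return polymerase_activity(*sample) / polymerase_activity(*reference)

reference_rnp = (150_000, 50_000)   # (firefly RLU, Renilla RLU) for the parental H5N1 RNP
pb2_swapped = (1_200_000, 48_000)   # e.g. H5N1 RNP carrying a human-virus PB2

print(f"Fold activity vs reference RNP: {fold_vs_reference(pb2_swapped, reference_rnp):.1f}")
```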
Orthomyxoviridae, and classified into subtypes by antigenic differences of the two surface glycoproteins, the hemagglutinin (HA) and the neuraminidase (NA) Influenza A virus is a member of the family The segmented genome structure of influenza virus facilitates genetic reassortment with other influenza strains which is co-infected in the same cells and has played a pivotal role in the emergence of pandemics. The pandemic strains of the past century have incorporated genes expressing the superficial HA and NA glycoproteins giving rise to new subtypes with novel surface antigens. Mutations of avian HA for acquiring human receptor \u03b12, 6 sialic acid binding specificity is a prerequisite for human adaptation. In addition to acquiring novel genes for superficial glycoproteins, at least one internal gene of the RNA polymerase from avian strains has been concurrently incorporated into human strains. The1957 and 1968 influenza pandemics coincided with the introduction of an avian PB1 gene The influenza virus RNA polymerase is a heterotrimeric complex composed of three subunits, PB1, PB2 and PA, which assembles with nucleoprotein (NP) and viral RNA, forming ribonucleoprotein complex (RNP) In this study, we attempted to characterize and dissect the role of the polymerase genes in giving rise to a functional genetic reassortant. We generated RNP containing hybrid polymerase between human-isolated avian H5N1 and human H1N1 or H3N2. Our results revealed that the PB2 subunit of H5N1 has a strong inhibitory effect on the RNP activity when introduced into the polymerase of other influenza strains. Importantly, H5N1 PB2 could form functional 3P complex properly, but significantly reduced the accumulation of RNP, specifically through the properties of four amino acids in PB2 at position 108, 508, 524 and 627.cDNA clones isolated from the following influenza strains were used: A/HongKong/156/1997 (H5N1) (abbreviated as HK or H), A/Vietnam/1194/2004 (H5N1) (abbreviated as VN or V), A/WSN/1933 (H1N1) (abbreviated as WSN or W), a newly pandemic A/Kurume/K0910/2009 (H1N1) (abbreviated as SW or S), A/NT/60/1968 (H3N2) (abbreviated as NT or N) PB1, PB2, PA and NP-expressing plasmids of influenza viruses HK, VN, WSN, SW and NT have previously been described 293T human embryonic kidney cells For a preparation of the polymerase, 293T cells were transfected with expression vectors containing PB1, PB2 and TAP-tagged PA subunit of each strain. For a preparation of the RNP, NP and vNA expression vectors were also transfected simultaneously. Cells were harvested 2 days posttransfection and the polymerase or RNP was purified by the tandem affinity purification (TAP) method described previously In order to investigate the role of the RNA polymerase on the restriction of genetic reassortment, we reconstituted reassortant RNP by introducing the polymerase subunit of H5N1 into the polymerase of H1N1 or H3N2. We used two human-isolated H5N1 strains of A/HongKong/156/1997 [HK] and a closely related A/Vietnam/1194/04 [VN], two human H1N1 strains of A/WSN/33 [WSN] and the new pandemic influenza A/Kurume/K0910/09 [SW], and H3N2 strain of A/NT/60/68 [NT], because these strains were extensively analyzed previously In the case of the HK strain of H5N1 , lane 5,Despite originating from the same H5N1 subtype, the amino acid sequence of PB2 between HK and VN differs at 26 positions out of 759 amino acids . 
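A residue-by-residue comparison such as the HK-versus-VN PB2 count above (26 differences over 759 positions) is straightforward once the two sequences are aligned and of equal length. A minimal sketch; the short fragments below are placeholders, not the actual PB2 sequences:

```python
# List the positions (1-based) at which two equal-length protein sequences differ.
# The toy fragments below stand in for the 759-residue HK and VN PB2 proteins.

def differing_positions(seq_a: str, seq_b: str):
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    return [i + 1 for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

pb2_hk_fragment = "MERIKELRDLM"   # placeholder fragment
pb2_vn_fragment = "MERIKELRNLM"   # placeholder fragment with one substitution

diff = differing_positions(pb2_hk_fragment, pb2_vn_fragment)
print(f"{len(diff)} differing position(s): {diff}")
```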
Since HSince four amino acids in HK PB2 were found to be critical for rescuing activity of the hybrid RNP, it is conceivable that substitution of amino acids at the same position in VN PB2 by the HK PB2 sequence might decrease the RNP activity. Single mutant R508Q or double mutant R508Q/T524M showed a slight decrease in activity compared with VN PB2 , lane 5.To address the question why HK PB2 severely inhibited the activity of RNP reconstituted from hybrid polymerase, we initially examined whether HK PB2 affects the correct assembly of the trimeric complex of PB1, PB2, and PA. To allow the purification of trimeric complex, a C-terminally TAP-tagged PA was co-expressed with PB1 and PB2 in 293T cells, and the trimeric complex was affinity purified by a TAP method and quantitatively adjusted see . In the An alternative explanation for the defect of RNP activity is a loss of the polymerase activity. The binding of RNA polymerase to the promoter is an essential step to initiate RNA synthesis. Therefore, the promoter binding activity of the polymerase in vitro was assayed by UV cross-linking, then subsequent initiation of RNA synthesis was analyzed by dinucleotide replication assay see . SurprisAnother possible explanation for the loss of RNP activity is a defect in the assembly of RNP. To test this possibility, we purified and evaluated the amount of RNP accumulated in vivo. To allow RNP purification, a C-terminally TAP-tagged PA was coexpressed and the reconstituted RNP was affinity purified by a TAP method see . The amo8) genotypes theoretically. However, systematic studies by using reverse genetics have shown that the number of replicative reassortant viruses is apparently limited The genetic reassortment between two different influenza A viruses can generate 256 significantly decreased the accumulation of RNP reconstituted in human 293T cells and resulted in the concomitant decrease in the RNP activity The E\u2192K mutation at HK PB2 627 showed a significant reduction in vRNA promoter and cRNA promoter bindings in cross-linking experiments . PB2 hasA purified H5N1polymerase shows significantly higher polymerase activity in vitro when compared to human strain A/WSN/33 (H1N1) Introduction of HK PA into SW and NT polymerases increased the synthesis of mRNA, cRNA and vRNA has a strong inhibitory effect on the RNP activity when introduced into the polymerase of other influenza strains. In addition, four residues at positions 108, 508, 524 and 627 of the PB2 subunit appear to be important determinants that are involved in the accumulation of functional RNP and in modulating the polymerase activity. These results may suggest a possible mechanism by which the generation of replicative reassortant virus of influenza is highly restricted."} {"text": "The soluble CD14 subtype, Presepsin, appears to be an accurate sepsis diagnostic marker, but data from intensive care units (ICUs) are scarce. This study was conducted to evaluate the diagnostic and prognostic value of Presepsin in ICU patients with severe sepsis (SS), septic shock (SSh) and severe community-acquired pneumonia (sCAP).Presepsin and procalcitonin (PCT) levels were determined for patients at admission to ICU. Four groups have been differentiated: (1) absence or (2) presence of systemic inflammatory response syndrome, (3) SS or (4) SSh; and 2 groups, among the patients admitted for acute respiratory failure: absence or presence of sCAP. 
Biomarkers were tested for diagnosis of SS, SSh and sCAP and for prediction of ICU mortality.One hundred and forty-four patients were included: 44 SS and 56 SSh. Plasma levels of Presepsin and PCT were significantly higher in septic than in non-septic patients and in SSh as compared to others. The sepsis diagnostic accuracy of Presepsin was not superior to that of PCT (AUC: 0.75 vs 0.80). In the 72/144 patients admitted for acute respiratory failure, the capability of Presepsin to diagnose sCAP was significantly better than PCT. Presepsin levels were also predictive of ICU mortality in sepsis and in sCAP patients.Plasma levels of Presepsin were useful for the diagnosis of SS, SSh and sCAP and may predict ICU mortality in these patients. Despite advances in therapy, sepsis is the leading cause of death in critical care settings . To imprMore recently, the soluble CD14 subtype, Presepsin, appears to be an accurate sepsis diagnostic marker and rises up a great clinical interest. Levels of Presepsin were found significantly higher in septic than in non-septic patients or those with SIRS . MoreoveTherefore, this study aimed to evaluate the diagnostic and prognostic utility of Presepsin measurements using the new fast method in severe sepsis and septic shock intensive care unit (ICU) patients. We also aimed to evaluate the diagnostic and prognostic utility of Presepsin measurements for severe community-acquired pneumonia (sCAP) in the subgroup of patients admitted to the ICU with acute respiratory failure.This observational prospective study was performed at 2 ICUs of Lapeyronie and Gui de Chauliac University hospitals of Montpellier, France. These two ICUs admit preferentially patients with suspected infectious diseases. It was carried out according to the principles of the Declaration of Helsinki and was approved by the Ethic Committee of Montpellier (Comit\u00e9 de protection des Personnes: CPP du CHU de Montpellier). Written informed consent was obtained from all participating patients or their closest relatives or legal representatives.All consecutive patients admitted to ICUs from January to May 2014 were included. Exclusion criteria were pregnancy, age\u00a0<\u00a018\u00a0years, previous congestive heart failure (class NYHA\u00a0\u2265\u00a0III), right ventricular failure, chronic renal failure stage III KDOQI or more, hepatic failure and acute pulmonary embolism.Baseline clinical variables including age, gender, cause of sepsis, and comorbidities were collected. The severity of disease was assessed by SAPS II and SOFADiagnosis of systemic inflammatory response syndrome (SIRS) and of sepsis severity was based on established criteria of the American College of Chest Physicians/Society of Critical Care Medicine . MicrobiCommunity-acquired pneumonia (CAP) was defined as the presence of a new infiltrate on a chest radiograph and at least one of the following signs: cough, sputum production, dyspnea, core body temperature\u00a0>\u00a038.0\u00a0\u00b0C, auscultatory findings of abnormal breath sounds and rales . Diagnos\u00ae recently evaluated [\u00ae immunoanalyzer following the manufacturers\u2019 instructions. Determination of hsCRP was run on the Cobas8000/e502\u00ae analyzer using immunoturbidimetric method.Venous samples were taken from all patients at admission and immediately performed for Presepsin, PCT and hsCRP measurements. Presepsin concentration was measured by a chemiluminescent enzyme immunoassay (CLEIA) on a compact automatized immunoanalyzer PATHFASTvaluated . 
The refvaluated . PCT wasTwo study physicians (KK and VG) independently reviewed all available clinical, biological and radiological patients\u2019 data and classified all patients into four disease groups: absence (non-SIRS) or presence of SIRS, severe sepsis (SS) or septic shock (SSh). The two study physicians followed recommended definitions and algorithms (20). Briefly, patients with SIRS and positive cultures were considered as septic. When cultures were non-contributive, clinical and biological picture , successful treatment by antibiotics and rule out of other diagnosis were main elements of sepsis diagnosis. Among the subgroup of patients who were admitted for acute respiratory failure, they reviewed also their data and classified them into two disease groups: absence or presence of sCAP (even in the absence of identified causative agent). When the study physicians cannot statute on the presence or not of sepsis, the patient was not included in the study. The study physicians and those on charge of patients were blinded to the results of Presepsin and PCT.t test, or two-tailed Mann\u2013Whitney\u2013Wilcoxon\u2019s test when appropriate. Results were adjusted for multiple comparisons using Bonferroni\u2019s method. Levels of significance for all tests were set at p\u00a0<\u00a00.05. Sensitivity, specificity and positive predictive value (PPV) and negative predictive value (NPV) of Presepsin and PCT for the diagnosis of sepsis and pneumonia were calculated using final diagnosis categorization based on clinical data, clinical scores and routinely used biomarkers levels. A receiver operating characteristic (ROC) analysis was performed for each of the biomarkers, and their diagnostic performance for sepsis and for other pathological condition was compared. The optimal threshold value was set for each ROC curve through the Youden Index (corresponding to the maximum of the sum \u201csensibility\u00a0+\u00a0specificity\u201d). Mortality was displayed as Kaplan\u2013Meier (log-rank test) plots according to the quartiles of Presepsin levels.The statistical analyses were performed using the STAT-VIEW II . We first performed a descriptive analysis by computing the frequencies and the percents for categorical data, means, standard deviations, quartiles and extreme values for continuous data. We also checked for the normality of the continuous data distribution using the Shapiro\u2013Wilks tests. We compared septic to non-septic patients and patients with and without sCAP for Presepsin, CRP and PCT measurements. The univariate analysis was performed using two-tailed Student\u2019s During the study period, a total of 222 critically ill patients were admitted in ICUs. After the exclusion of 78 patients, 144 were included: 88 males and 56 females. One hundred patients conformed to the criteria of bacterial sepsis: 44 with SS and 56 with SSh. Among the 44 non-septic patients, 19 were assigned for non-SIRS and 25 for SIRS. The screening process is shown in Fig.\u00a0Patient\u2019s baseline characteristics are summarized in Table\u00a0p\u00a0=\u00a00.574). In contrast, they were significantly higher in SSh versus SS and SIRS groups and non-infectious respiratory failure (AUC\u00a0=\u00a00.85) was higher than that of PCT (0.79), SAPS II (0.72), SOFA (0.78) scores, and similar to that of the combination of Presepsin and PCT 0.84) Fig.\u00a0b. Using Fig.\u00a0b. p\u00a0=\u00a00.04) died during ICU stay. 
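The ROC analysis described in the statistics section above (per-biomarker AUC, with the optimal threshold chosen where sensitivity plus specificity is maximal, i.e. the Youden index) can be reproduced with standard tools. A minimal Python/scikit-learn sketch rather than the STAT-VIEW workflow the authors used; the sepsis labels and presepsin values below are synthetic, not study data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic example: 1 = sepsis, 0 = no sepsis, with admission presepsin (pg/mL).
y_true    = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
presepsin = np.array([210, 300, 450, 520, 610, 900, 700, 950, 1400, 2100, 3300, 800])

auc = roc_auc_score(y_true, presepsin)
fpr, tpr, thresholds = roc_curve(y_true, presepsin)

# Youden index: threshold maximising sensitivity + specificity
youden = tpr - fpr
best = np.argmax(youden)
print(f"AUC = {auc:.2f}")
print(f"Optimal cutoff = {thresholds[best]:.0f} pg/mL "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```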
Deceased septic patients showed significantly higher Presepsin, PCT levels and severity scores at ICU admission Table\u00a0a. HoweveAt ICU admission, plasma levels of Presepsin were found to be significantly higher in critically ill patients with sepsis in comparison with those without sepsis. Presepsin plasma levels of SIRS and SS patients were not significantly different, but patients with SSh had significant higher levels as compared to others. The sepsis diagnostic accuracy of Presepsin was not superior to that of PCT. With the combination of Presepsin and PCT, specificity and predictive positive value for sepsis were enhanced. We also demonstrated the usefulness of Presepsin for the diagnosis of sCAP in settings of ARF with an even better accuracy than PCT. Also, plasma Presepsin levels best predict ICU mortality in septic patients and those with sCAP at cutoff values of 1925 and 714\u00a0pg/mL, respectively.It is now well demonstrated that sepsis, especially SS and SSh, should be diagnosed early and treated within 1\u00a0h after diagnosis . ConsequMore than half (58\u00a0%) of our septic patients have a sepsis from pulmonary origin. Diagnosis and severity of CAP are difficult and largely depend on the clinician\u2019s experience since they are based on clinical and radiological arguments \u201333. CircWe must acknowledge some limitations to our study. First, our study was a bi-center study, and the results may not be directly applicable to all ICUs. Second, our population included a relative limited number of patients. Third, only plasma Presepsin levels at ICU admission were determined and dynamic and follow-up changes of this biomarker were not investigated . Fourth,Our results demonstrated the usefulness of Presepsin levels in the diagnosis and prognosis of septic shock patients admitted to ICUs, but its diagnostic ability remains moderate as recently demonstrated . Its spe"} {"text": "The aim of this study was to evaluate the diagnostic and prognostic value of presepsin in patients with severe sepsis and septic shock during the first week of ICU treatment.In total, 116 patients with suspected severe sepsis or septic shock were included during the first 24\u00a0hours of ICU treatment. Blood samples for biomarker measurements of presepsin, procalcitonin (PCT), interleukin 6 (IL-6), C reactive protein (CRP) and white blood cells (WBC) were drawn at days 1, 3 and 8. All patients were followed up for six months. Biomarkers were tested for diagnosis of sepsis, severe sepsis, septic shock and for prognosis of 30-days and 6-months all-cause mortality at days 1, 3 and 8. Diagnostic and prognostic utilities were tested by determining diagnostic cutoff levels, goodness criteria, C-statistics and multivariable Cox regression models.P <0.03). Presepsin levels revealed valuable diagnostic capacity to diagnose severe sepsis and septic shock at days 1, 3 and 8 (range of diagnostic area under the curves (AUC) 0.72 to 0.84, P\u2009=\u20090.0001) compared to IL-6, PCT, CRP and WBC. Goodness criteria for diagnosis of sepsis severity were analyzed . Presepsin levels revealed significant prognostic value for 30\u00a0days and 6\u00a0months all-cause mortality . Patients with presepsin levels of the 4th quartile were 5 to 7 times more likely to die after six months than patients with lower levels. 
The prognostic value for all-cause mortality of presepsin was comparable to that of IL-6 and better than that of PCT, CRP or WBC. Presepsin increased significantly from the lowest to most severe sepsis groups at days 1, 3 and 8 (test for linear trend In patients with suspected severe sepsis and septic shock, presepsin reveals valuable diagnostic capacity to differentiate sepsis severity compared to PCT, IL-6, CRP, WBC. Additionally, presepsin and IL-6 reveal prognostic value with respect to 30\u00a0days and 6\u00a0months all-cause mortality throughout the first week of ICU treatment. ClinicalTrials.gov NCT01535534. Registered 14 February 2012. Severe sepsis and septic shock represent major challenges of modern intensive care medicine, and recently published international guidelines still demand ongoing research into the pathophysiology, diagnostics and treatment . WorldwiSoluble cluster of differentiation 14 subtype (sCD14-ST) - so-called presepsin - is cleaved from the monocyte/macrophage-specific CD14 receptor complex after binding with lipopolysaccharides (LPS) and LPS-binding protein (LBP) during systemic infections -10. PresThe Mannheim Sepsis Study was conducted as a mono-centric prospective controlled study at the University Medical Centre Mannheim (UMM), Germany. Patient enrolment started in October 2011. The study was carried out according to the principles of the Declaration of Helsinki and was approved by the medical ethics commission II of the Faculty of Medicine Mannheim, University of Heidelberg, Germany. Informed consent was obtained from all participating patients or their legal representatives.2) to the fraction of inspired oxygen (FiO2) PaO2/FiO2\u2009<\u2009250, renal organ failure with urine output <0.5\u00a0ml/kg/h, hematological organ failure with platelet count <100,000/mm3 or unexplained metabolic acidosis with pH <7.3 and lactate levels >1.5 times the upper limit of normal. Sepsis-induced organ failures in these patients were strongly connected to infection and were present for less than 24\u00a0h. Patients developing cardiovascular organ failure with a need for vasopressors for longer than 1\u00a0h were categorized as suffering from septic shock. Disease severity on the ICU was documented by the acute physiology and chronic health evaluation II (APACHE II) and the sequential organ failure assessment (SOFA) score [The study was designed to reflect a representative cohort of patients with a minimum age of 18\u00a0years, who had proven criteria of severe sepsis or septic shock, found at a typical internal ICU. Main exclusion criteria were any traumatic or postoperative cause of sepsis development . Diagnosis of systemic inflammatory response syndrome (SIRS) and of sepsis severity was based on established criteria ,16: whenA) score ,18. All patient data, such as creatinine levels, hemoglobin, hematocrit, WBC count, platelet count, CRP, bilirubin, sodium, potassium, urea, IL-6, PCT, body temperature, respiratory rate, heart rate, blood pressure, partial pressure of O2 and CO2, bicarbonate, base excess, lactate, pH value and Glasgow coma scale (GCS) were documented. Additionally, prior medical history, age, sex, body weight and the germ spectrum were documented. After the end of each hospital treatment, two study physicians independently reviewed all available clinical data of the study patients and classified all patients into four disease groups: SIRS, sepsis, severe sepsis or septic shock. 
The study physicians were blinded to the results of tested biomarker measurements, such as presepsin, PCT and IL-6.Blood samples for presepsin measurements were taken within 24\u00a0h after clinical onset of severe sepsis or septic shock on the ICU (day 1) as well as on day 3 and 8 of ICU treatment. All patients were followed up until 30\u00a0days and 6\u00a0months after study inclusion by direct telephone visits with the patients or their general practitioners. The main prognostic outcome was all-cause mortality after 30\u00a0days and 6\u00a0months: 60 people without any clinically proven systemic infection served as a control group.g at 4\u00b0C for 15\u00a0minutes. Serum/plasma was separated, frozen and stored at \u221280\u00b0C.Blood samples were obtained by venipuncture into serum and ethylenediaminetetraacetic acid (EDTA) monovettes\u00ae . Within 30\u00a0minutes all blood samples were centrifuged at 1,000\u2009\u00d7\u2009Presepsin measurements were performed with the PATHFAST\u00ae immunoassay analytical system using plasma from EDTA monovettes\u00ae ,19. IL-6t-test was applied. Otherwise, the Mann\u2013Whitney U-test was used as a nonparametric test. Deviations from a Gaussian distribution were tested by the Kolmogorov-Smirnov test. Spearman\u2019s rank correlation for nonparametric data was used to test the association of presepsin blood levels with medical parameters. Qualitative parameters were analyzed by use of a 2\u2009\u00d7\u20092 contingency table and Chi2 test or Fisher\u2019s exact test as appropriate. Quantitative data are presented as mean\u2009\u00b1\u2009standard error of mean (SEM) or as median and interquartile ranges (25th to 75th percentiles), depending on the distribution of the data. For qualitative parameters absolute and relative frequencies are presented. A test for linear trend was applied to compare the biomarker levels in the different groups of sepsis severity. Post-hoc statistical power analyses were performed. All analyses were exploratory and utilized a P-value of 0.05 (two-tailed) for significance.For normally distributed data, the Student C-statistics: receiver-operating characteristic (ROC) curve analyses were performed with calculation of area under the curve (AUC) for diagnosis of sepsis, severe sepsis and septic shock during the first week of ICU treatment at days 1, 3 and 8. A minimal AUC was set at 0.75 to define valuable discriminative diagnostic capacity of any biomarker. Accordingly, diagnostic goodness criteria , and relative risk) of the biomarkers were calculated. Accuracy was defined as the sum of true positives plus true negatives divided by all measured patients. Diagnostic AUCs were compared by the method of Hanley et al. [For y et al. .C-statistics: ROC analysis with calculation of the AUC was performed for prognosis of all-cause mortality in all patients after 30\u00a0days and 6\u00a0months for all biomarkers , APACHE II and SOFA score. Prognostic AUCs were compared by the method of Hanley et al. [For y et al. . Log-traThe calculations were performed with InStat and StatMate (GraphPad Software), SPSS software (SPSS Software GmbH), and SAS version, release 9.2 .Baseline characteristics are given in Table\u00a0Mean APACHE II score at day 1 was highest in patients with septic shock (mean\u2009\u00b1\u2009SEM\u2009=\u200927\u2009\u00b1\u20091). 
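As an aside on the diagnostic goodness criteria defined in the methods above (accuracy being true positives plus true negatives over all measured patients), the calculation is plain arithmetic on a 2×2 table. A minimal sketch with invented counts, not the study's classification results; the authors' own calculations were done in InStat/SPSS/SAS:

```python
# Diagnostic goodness criteria from a 2x2 table, as defined in the methods:
# accuracy = (true positives + true negatives) / all measured patients.
# The counts below are invented for illustration.

tp, fp, tn, fn = 48, 10, 40, 6   # hypothetical classification at a chosen cutoff

sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate
ppv = tp / (tp + fp)                         # positive predictive value
npv = tn / (tn + fn)                         # negative predictive value
accuracy = (tp + tn) / (tp + fp + tn + fn)   # as defined in the text

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}, accuracy {accuracy:.2f}")
```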
The most common primary site of infection was the lung in at least 50% of patients in each group, followed by abdominal and urinary tract infections (up to 16% of patients).r\u2009=\u20090.28, P\u2009=\u20090.002) as well as with the days of renal replacement therapy (RRT) during ICU treatment . Additionally, presepsin correlated with WBC, CRP, PCT, IL-6 and bilirubin. Interestingly, presepsin was also significantly correlated with the number of days of intensive care treatment, mechanical ventilation and catecholamine therapy (P <0.05). Presepsin levels were not correlated with patients\u2019 age and gender in this cohort (P >0.05) (data not shown).Presepsin levels were significantly correlated with clinical and laboratory parameters at day 1. As shown in Table\u00a0P \u22640.03), which was not observed for PCT or IL-6 (P >0.05).Figure\u00a0Presepsin levels (pg/ml) were as follows ): day 1: SIRS 393 (249 to 745), sepsis 362 (249 to 745), severe sepsis 947 , septic shock 2,330 ; day 3: SIRS 448 (350 to 844), sepsis 651 , severe sepsis 1,479 , septic shock 2,060 ; day 8: SIRS 604 (223 to 965), sepsis 1,528 , severe sepsis 1,556 , septic shock 3,041 ; and controls 216 (146 to 350).P >0.05) (Table\u00a0P\u2009=\u20090.05) and comparable to that of IL-6 (AUC\u2009=\u20090.81) . Interestingly, presepsin (AUC\u2009=\u20090.80) levels still revealed valuable diagnostic capacity to diagnose at least severe sepsis when compared to IL-6 (AUC\u2009=\u20090.71) and PCT (AUC\u2009=\u20090.66) at day 3. However, presepsin was not able to differentiate septic shock at day 3 , whereas the AUC of IL-6 was 0.76 at day 3. At day 8 of ICU treatment, the diagnostic value of presepsin was evident for all different groups of sepsis severity , whereas IL-6, PCT, WBC and CRP did not exceed an AUC \u22650.75 was comparable to that of IL-6 (AUC\u2009=\u20090.86) and PCT AUC\u2009=\u20090.83) at day 1 of ICU treatment with a minimum sensitivity of 89% to diagnose either patients with at least sepsis, severe sepsis or septic shock at all points in time Table\u00a0. At day P\u2009=\u20090.008).All-cause mortality rates were 50% after 30\u00a0days (58/116) and 62% after 6\u00a0months (72/116). Six months of follow up were completed in all patients. Presepsin levels were significantly higher in patients who died by 30\u00a0days or 6\u00a0months compared to those who survived (P >0.05) (Table\u00a0P >0.05).The prognostic AUCs of presepsin were statistically significant at all time points and for all-cause mortality at 30\u00a0days and at 6\u00a0months Table\u00a0. AUCs ofP <0.001) increased in non-survivors compared to survivors consistently at days 1, 3 and 8 of ICU treatment both in survivors and non-survivors in patients who died from septic shock within 6\u00a0months compared to those patients surviving septic shock , when measured at day 1 of ICU treatment (data not shown).Presepsin and IL-6 levels were significantly (P <0.03) . However, presepsin was not statistically significant in adjusted Cox models at day 3 of ICU treatment. In contrast, IL-6 had significant prognostic value at all time points of ICU treatment and for both 30-day and 6-month all-cause mortality (data not shown).Despite replacing the APACHE II with the SOFA score within Cox regression models to predict 30-day and 6-month all-cause mortality, presepsin retained its significant prognostic value at days 1 and 8 of ICU treatment for diagnosis of septic shock on day 1 and 3 and for sepsis at day 3. 
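The survival analyses reported above (Kaplan-Meier curves by presepsin quartile and Cox models adjusted for age, sex, intensive care days and APACHE II or SOFA score) can be sketched with the Python lifelines package; the authors used SPSS/SAS, so this is only an illustration, and the cohort below is randomly generated rather than the study data:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 120

# Synthetic cohort: follow-up time (days), death indicator, presepsin and covariates.
df = pd.DataFrame({
    "time": rng.exponential(120, n).clip(1, 180),
    "death": rng.integers(0, 2, n),
    "presepsin": rng.lognormal(mean=7, sigma=1, size=n),
    "age": rng.normal(65, 12, n),
    "apache2": rng.normal(25, 6, n),
})
df["quartile"] = pd.qcut(df["presepsin"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Kaplan-Meier survival per presepsin quartile
kmf = KaplanMeierFitter()
for q, grp in df.groupby("quartile", observed=True):
    kmf.fit(grp["time"], event_observed=grp["death"], label=str(q))
    print(q, "estimated survival at 30 days:", round(float(kmf.predict(30)), 2))

# Cox model of mortality with presepsin adjusted for age and APACHE II
cph = CoxPHFitter()
cph.fit(df[["time", "death", "presepsin", "age", "apache2"]],
        duration_col="time", event_col="death")
print(cph.summary[["exp(coef)", "p"]])
```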
PCT, CRP and WBC mostly failed to have any discriminative capacity for the different groups of sepsis severity at the different time points (AUCs <0.75).et al. previously demonstrated that presepsin levels had the best diagnostic capacity for diagnosis of sepsis (AUC\u2009=\u20090.82), severe sepsis (AUC\u2009=\u20090.84) and septic shock (AUC\u2009=\u20090.79) compared to PCT in 859 patients presenting at the emergency department in Bejing, China [et al. [In accordance with the presented results, Liu g, China , while c [et al. evaluateIn the present study we tried to avoid this inconsistency by choosing optimal and uniform cutoff levels at a maximum achievable sensitivity on days 1, 3 and 8 . This approach was based on clinical considerations to capture as many patients as possible who were truly diseased. Interestingly, within the present analysis presepsin levels had weak diagnostic value for the differentiation of septic shock at day 3 of ICU treatment. This might be explained by a longer half-life of presepsin keeping higher concentrations at least until day 3 of intensive care treatment, which might also being influenced by the presence of acute kidney injury (AKI) in patients with severe sepsis/septic shock . At day et al. [In contrast, Endo et al. ,21 evaluRecent trials mostly evaluated single measurements of presepsin in patients presenting to the emergency department, trying to establish presepsin as an early one-shot guiding biomarker for emergency care ,14. HoweAccordingly, further influencing factors on presepsin despite age and renal function are rarely known . NotablyC-statistics and within multivariable Cox regression models being adjusted to age, sex, intensive care days and APACHE II/SOFA score.Second, it was demonstrated that measurements of presepsin levels revealed valuable prognostic capacity to predict short- and long-term all-cause mortality compared to PCT, CRP and WBC consistently throughout days 1, 3 and 8 of ICU treatment. Patients with presepsin levels of the fourth quartile were up to five to seven times more likely to die after 6\u00a0months than patients with lower levels. APACHE II and SOFA scores solely revealed acceptable prognostic values for all-cause mortality. IL-6 was the only inflammatory biomarker with comparable prognostic value at all time points as demonstrated both in et al. [et al. demonstrated constantly increased presepsin levels in decedents and revealed significant prognostic value for both 28-day and 90-day all-cause mortality, whereas PCT failed to provide any prognostic information [The prognostic value of presepsin in severe sepsis/septic shock has not yet been evaluated in detail. Presepsin has been described as a powerful prognostic biomarker compared to PCT and APACHE II scores for short-term 28-day all-cause mortality ,13. Ullaet al. demonstret al. . In a reormation . It was Therefore, the present study delivered new evidence about both presepsin and IL-6 as powerful prognostic biomarkers of short- and long-term prognosis in patients with severe sepsis and septic shock -27. SpecP-values and relatively marginal differences in these biomarker levels with regard to diagnostic and prognostic differentiation in the present study cohort of 116 patients suggest that there might not be a real clinically relevant difference. 
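The choice of uniform cutoffs at a maximum achievable sensitivity can be illustrated as follows. The sketch picks, from a ROC curve, the highest threshold that still meets a minimum sensitivity target (89% is the figure reported above); the selection rule is one plausible implementation and not necessarily the authors' exact procedure, and the data are simulated.

```python
import numpy as np
from sklearn.metrics import roc_curve

def cutoff_for_sensitivity(y_true, scores, min_sensitivity=0.89):
    """Highest threshold whose sensitivity still meets the target."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    eligible = thresholds[tpr >= min_sensitivity]
    return eligible.max() if eligible.size else thresholds.min()

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=300)                         # 1 = at least sepsis (simulated)
presepsin = rng.lognormal(6.0 + 1.0 * y, 0.8)            # pg/ml, hypothetical

# one candidate per ICU day could be computed and the lowest kept as a uniform cutoff
print(f"candidate cutoff: {cutoff_for_sensitivity(y, presepsin):.0f} pg/ml")
```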
Within our study, we did not find any significant associations of presepsin levels with age and sex of the patients, which were previously described [The present study was performed on the internal ICU at the University Medical Centre Mannheim, Germany. Primarily, this study included patients with severe sepsis or septic shock 24\u00a0h after admission to the medical ICU or 24\u00a0h after disease onset during ICU treatment. If statistically significant diagnostic and prognostic values for presepsin and IL-6 were calculated, sufficient statistical power of at least 80% could have been presumed. If statistically non-significant differences were calculated, a larger number of study patients of at least 300 to 2,000 patients in each group would have been required to guarantee sufficient statistical power, for example, for the inflammatory biomarkers PCT, CRP and WBC with respect to diagnosis or prognosis. However, high escribed . AnalyseTaken together, it has been demonstrated that measurements of presepsin levels had independent diagnostic and prognostic value in patients with severe sepsis and septic shock during the first week of intensive care treatment. Presepsin levels had valuable diagnostic value for the diagnosis of sepsis, severe sepsis and septic shock at days 1, 3 and 8 of ICU treatment compared to PCT, IL-6, CRP and WBC. Additionally, presepsin levels had valuable prognostic capacity to predict short- and long-term all-cause mortality when compared to PCT, IL-6, CRP, WBC and APACHE II score.Presepsin reveals valuable diagnostic capacity for stages of sepsis severity compared to PCT, IL-6, CRP, and WBC in patients being treated on an internal ICU.Diagnostic cutoffs of presepsin were set at \u2265530\u00a0pg/ml for\u2009\u2265\u2009sepsis, at \u2265600\u00a0pg/ml for\u2009\u2265\u2009severe sepsis and \u2265700\u00a0pg/ml for septic shock.Presepsin levels had valuable prognostic capacity to predict short- and long-term all-cause mortality at 30\u00a0days and 6\u00a0months compared to PCT, CRP, WBC, SOFA and APACHE II scores.IL-6 had comparable prognostic value to presepsin levels.Diagnostic and prognostic capacity of presepsin was consistently demonstrated throughout days 1, 3 and 8 of ICU treatment."} {"text": "Sepsis, a leading cause of death in critical care patients, is the result of complex interactions between the infecting microorganisms and the host responses that influence clinical outcomes . ReliablCritical patients with suspected sepsis admitted to the Unit of Intensive Care of the University Hospital of Catanzaro were recruited into this study; healthy volunteers were also included as controls. Plasma samples in EDTA from each patient were collected at multiple time points; samples were tested for CRP, PCT and presepsin. Blood cultures were also evaluated and processed by a BacT/Alert 3D system ; CRP was measured by immunonephelometry and PCT was assayed by an enzyme-linked fluorescent assay ; presepsin levels were measured by rapid automated PATHFAST immunoanalyzer , based on chemiluminescent enzyme immunoassay. A statistical analysis was carried out by Mann-Whitney test.Presepsin and PCT levels were significantly higher in culture-positive subjects versus negative controls; such difference was found even at the admission time. The presepsin values in worsening/dead patients exhibited a significantly higher level at admission time. On the contrary, in the same group of patients, PCT exhibited a decrease of its level. 
In patients with a poor prognosis, CRP showed a rather irregular kinetic profile, although in this group the admission value was higher than that of the same marker in surviving subjects. In this preliminary study, presepsin and PCT levels exhibited substantially higher values in culture-positive patients. The kinetic curves of presepsin, obtained from both surviving and worsening/deceased subjects, revealed the good performance of this biomarker, particularly in severely ill patients, as also shown in other studies. During sepsis, an increase in presepsin levels may be a more reliable marker indicating an unfavorable outcome ,4. Furth"} {"text": "Presepsin (sCD14-ST) serves as a mediator of the response to infectious agents. First evidence suggested that presepsin may be utilized as a sepsis marker. Presepsin was determined at presentation (T0) and after 8, 24 and 72 hours in 123 individuals admitted with signs of SIRS and/or infection. The primary endpoint was death within 30 days. Presepsin was determined using the POC assay PATHFAST Presepsin. Mean presepsin concentrations of the patient group at presentation and of the control group were 1,945 and 130 pg/ml, respectively (P < 0.0001). Baseline presepsin differed highly significantly between patients with SIRS, sepsis, severe sepsis and septic shock. Twenty-four patients died within 30 days. The 30-day mortality was 19.5% in total, ranging from 10 to 32% between the first and the fourth quartile of presepsin concentration. Non-survivors showed high presepsin values with an increasing tendency during the course of the disease, whereas in surviving patients this tendency was decreasing (see Table). Presepsin demonstrated a strong relationship with disease severity and outcome. Presepsin provided reliable discrimination between SIRS and sepsis as well as prognosis and early prediction of 30-day mortality already at admission. Presepsin showed a close association with the course of the disease."} {"text": "This study was performed to assess the value of procalcitonin (PCT) for the differential diagnosis between infectious and non-infectious systemic inflammatory response syndrome (SIRS) after cardiac surgery. Patients diagnosed with SIRS after cardiac surgery between April 1, 2011 and March 31, 2013 were retrospectively studied. A total of 142 patients with SIRS, infectious (n = 47) or non-infectious (n = 95), were included. The patients with infectious SIRS comprised 11 with sepsis, 12 with severe sepsis without shock, and 24 with septic shock. Serum PCT, C-reactive protein (CRP) and the white blood cell (WBC) count were significantly higher in the infectious SIRS group than in the non-infectious SIRS group. PCT had the highest sensitivity and specificity for differential diagnosis, with a cut-off value for infectious SIRS of 0.47 ng/mL. PCT was more reliable than CRP in diagnosing severe sepsis without shock, but it was not useful for diagnosing septic shock. The PCT cut-off value for diagnosing severe sepsis without shock was 2.28 ng/mL. PCT was a useful marker for the diagnosis of infectious SIRS after cardiac surgery. The optimal PCT cut-off value for diagnosing infectious SIRS was 0.47 ng/mL. According to an epidemiological survey, the incidence of sepsis in the USA rose at an average annual rate of 8.7% from 1979 to 2000.
Early dThis study was a retrospective investigation of 142 SIRS patients who were admitted to the ICU after cardiac surgery at the Tokyo Medical and Dental University Hospital between April 1, 2011 and March 31, 2013 when a total of 376 patients after cardiac surgery were screened. The study was approved by the ethical review board of Tokyo Medical and Dental University Faculty of Medicine. The SIRS patients were divided into an infectious SIRS group and a non-infectious SIRS group according to the results of microbiological testing. The infectious SIRS group was further divided into three groups, namely, sepsis, severe sepsis without shock, and septic shock, according to the diagnostic criteria of the Surviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock: 2012 (SSCG2012) [3/\u03bcL\u2009=\u20090; <100\u2009\u00d7\u2009103/\u03bcL\u2009=\u20091; <50\u2009\u00d7\u2009103/\u03bcL\u2009=\u20092), elevated fibrin-related marker , prolonged prothrombin time , and fibrinogen level derived from the International Society on Thrombosis and Haemostasis [The pre-operation data, operation-related data, and post-operation data of both groups were analyzed and compared. The pre-operation data included the patient age, gender, body mass index (BMI), and concomitant diseases. The operation-related data included the cardiopulmonary bypass (CPB) duration, aortic cross clamping, operation time, and blood loss. The post-operation data were as follows: mechanical ventilation (MV) duration, disseminated intravascular coagulation (DIC) incidence, serum levels of PCT, CRP, and WBC, Acute Physiology and Chronic Health Evaluation (APACHE) II score, Sequential Organ Failure Assessment (SOFA) score, ICU stay, hospital stay, postoperative blood purification treatment ratio such as continuous renal replacement therapy (CRRT) and polymyxin B-immobilized fiber column-direct hemoperfusion (PMX-DHP), and postoperative extracorporeal membrane oxygenation (ECMO) therapy. DIC was identified using a score based on platelet count . The serum CRP level was determined by a latex coagulation detection method with a nephelometer. The PCT, CRP, and WBC levels measured on the first day after the diagnosis of SIRS in the ICU were used in this analysis.t test and comparisons of more than two groups were analyzed by the one-way analysis of variance test. If the variance was not homogeneous, the data were shown in median and interquartile ranges. Comparisons between two groups were done by the Mann-Whitney test and comparisons of more than two groups were done by the Kruskal-Wallis test. The categorical data comparisons between groups were analyzed by the Pearson Chi-square test or Fisher\u2019s exact test. Statistical significance was assumed at p values of less than 0.05 on both sides. The abilities of PCT, WBC, and CRP to diagnose infection were evaluated by comparing the infectious and non-infectious groups by receiver operating characteristic (ROC) curve analyses. The abilities of PCT and CRP to diagnose severe sepsis without shock or septic shock were evaluated by ROC curve analyses comparing the sepsis group with the severe sepsis without shock/septic shock groups and comparing the severe sepsis without shock group with the septic shock group. The cut-off values for diagnosing infection, severe sepsis without shock, and septic shock were determined by the ROC curves. 
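The cut-off values are stated to be determined by the ROC curves without naming a criterion; a common choice is Youden's index (sensitivity + specificity - 1), and the following sketch assumes that criterion on simulated PCT values with the study's group sizes (47 infectious, 95 non-infectious).

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
infectious = np.concatenate([np.ones(47, dtype=int), np.zeros(95, dtype=int)])
pct = np.where(infectious == 1,
               rng.lognormal(1.2, 1.0, size=142),     # infectious SIRS (simulated)
               rng.lognormal(-1.2, 1.0, size=142))    # non-infectious SIRS (simulated)

fpr, tpr, thr = roc_curve(infectious, pct)
best = np.argmax(tpr - fpr)                           # Youden's index (assumed criterion)
print(f"cut-off = {thr[best]:.2f} ng/mL, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```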
The diagnosis sensitivity, specificity, and positive and negative predictive values were calculated and compared.Statistical analyses were performed using a Statistical Package for Social Sciences for Windows. For the numerical data, the homogeneity test of variance was done first. If the variance was homogeneous, the data were shown as mean\u2009\u00b1\u2009SD. The numerical data comparisons between two groups were analyzed by the two-sided Student\u2019s In total, 142 patients were treated for SIRS after cardiac surgery. Among them, 47 were diagnosed with infectious SIRS and 95 were diagnosed with non-infectious SIRS. Table\u00a0The infectious pathogens were Gram-positive bacteria in 19 patients, Gram-negative bacteria in 14 patients, and fungus in 16 patients. The positive culture specimens were sputum in 14, urine in 15, blood in 24, and drainage in 1.ROC) were 0.966, 0.875, and 0.799 for PCT, CRP, and WBC, respectively. According to the ROC curves, the cut-off values of PCT, CRP, and WBC were 0.47\u00a0ng/mL, 11.95\u00a0mg/dL, and 10.85\u2009\u00d7\u2009103/\u03bcL, respectively. The sensitivity and specificity of PCT for predicting infection were 91.5% and 93.7%, respectively.PCT, CRP, and WBC levels were significantly higher in the infectious SIRS group than in the non-infectious SIRS group Table\u00a0. Preopern\u2009=\u200911), severe sepsis without shock (n\u2009=\u200912), and septic shock (n\u2009=\u200924). The patient characteristics of the three subgroups are shown in Table\u00a0According to the diagnostic criteria, 47 patients from the infectious group were further divided into sepsis or interleukin-6 (IL-6) induces high expression of the calcitonin-I gene, which activates the continuous release of PCT .p\u2009<\u20090.001) higher than those in the non-infectious SIRS group. Therefore, PCT becomes a tool to distinguish infectious and non-infectious SIRS, although some factors such as CKD and multiple organ dysfunction syndrome may affect the PCT level.The surgical injury to the body and extracorporeal circulation during cardiac surgery can activate the complement system, which in turn induces the massive release of inflammatory cytokines and the subsequent onset of SIRS . Once deClec\u2019h et al. reported a wide variability of PCT cut-off values in an analysis of 143 patients with different diseases . By theiThe utility of PCT for evaluating infection severity has also been controversial. One study suggested that the PCT was a valuable parameter for determining infection, but was no better than CRP in evaluating the infection severity . In contThe MV duration, postoperative blood purification therapy, postoperative ECMO therapy, DIC incidence, ICU stay, and hospital mortality were all significantly higher in the infectious SIRS group than in the non-infectious SIRS group. These findings indicate that infection will increase the disease severity and necessitate more invasive treatments such as MV, CRRT, and ECMO. This finding was consistent with the results of Rahmanian et al. . The sepThere are three limitations to this study worthy of note: The study was retrospective; the samples were too small, especially for the infectious group; and no long-term follow-up was performed.Infection after cardiac surgery significantly increased the disease severity and necessitated more invasive treatments such as mechanical ventilation, CRRT, and ECMO. PCT was a useful marker for the diagnosis of infectious SIRS after cardiac surgery. PCT had a better diagnostic value than CRP or WBC. 
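For reference, the diagnostic indices quoted above follow directly from a 2 x 2 confusion matrix; in the sketch below the cell counts are not taken from the paper but chosen so that the indices reproduce the reported 91.5% sensitivity and 93.7% specificity of PCT at the 0.47 ng/mL cut-off.

```python
def diagnostic_indices(tp, fn, fp, tn):
    """Standard 2 x 2-table diagnostic measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# Hypothetical cell counts for 47 infectious and 95 non-infectious patients,
# chosen to match the reported 91.5% sensitivity (43/47) and 93.7% specificity (89/95).
print(diagnostic_indices(tp=43, fn=4, fp=6, tn=89))
```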
The optimal PCT cut-off value for detecting infection was 0.47\u00a0ng/mL. The serum level of PCT rose significantly according to the degree of infection. Prospective, large-scale, controlled, and randomized studies are awaited."} {"text": "Many of the processes behind the decline of farmland birds can be related to modifications in landscape structure (composition and configuration), which can partly be expressed quantitatively with measurable or computable indices, i.e. landscape metrics. This paper aims to identify statistical relationships between the occurrence of birds and the landscape structure. We present a method that combines two comprehensive procedures: the \u201clandscape-centred approach\u201d and \u201cguild classification\u201d. Our study is based on more than 20,000 individual bird observations based on a 4-year bird monitoring approach in a typical agricultural area in the north-eastern German lowlands. Five characteristic bird guilds, each with three characteristic species, are defined for the typical habitat types of that area: farmland, grassland, hedgerow, forest and settlement. The suitability of each sample plot for each guild is indicated by the level of persistence (LOP) of occurrence of three respective species. Thus, the sample plots can be classified as \u201cpreferred\u201d or \u201cless preferred\u201d depending on the lower and upper quartiles of the LOP values. The landscape structure is characterized by 16 different landscape metrics expressing various aspects of landscape composition and configuration. For each guild, the three landscape metrics with the strongest rank correlation with the LOP values and that are not mutually dependent were identified. For four of the bird guilds, the classification success was better than 80%, compared with only 66% for the grassland bird guild. A subset of six landscape metrics proved to be the most meaningful and sufficiently classified the sample areas with respect to bird guild suitability. In addition, derived logistic functions allowed the production of guild-specific habitat suitability maps for the whole landscape. The analytical results show that the proposed approach is appropriate to assess the habitat suitability of agricultural landscapes for characteristic bird guilds.The online version of this article (doi:10.1007/s10661-017-5837-2) contains supplementary material, which is available to authorized users. The decline of biological diversity in European landscapes is well documented by many regional and Europe-wide studies Defra . Of all Suitable landscape data can be retrieved from digital maps, which are becoming abundant and available for many regions, based on aerial photos, satellite images, ground mapping, etc. Modern methods, including GIS and spatial statistics, allow for area-wide analyses of landscape metrics and, if time series are available, the characterization of trends. Because many of these data already exist, the calculation of landscape metrics is often well established has the potential to improve area-wide assessments of landscapes as habitat for birds. Thus, to contribute to the development of an appropriate methodology, we draw on an extensive set of bird occurrence data of an agricultural landscape , and we test a variety of landscape metrics as potential indicators of defined groups of bird species.Generalizations of case studies are needed to address the habitat suitability for birds on a landscape scale. 
What is the essence of locally detected relationships between bird occurrence and landscape characteristics that may be transferable to other regions? Answers to this question may help support the efficiency of empirical work (e.g. bird monitoring schemes), the design of meaningful biodiversity supporting schemes (e.g. changes in habitat configurations) and the comparison and assessment of landscape and land use development scenarios. While it is obvious that appropriate conservation means depend on the specific regional or local situation , with a decreasing trend. Parts of the investigation area belong to a landscape conservation area. The significance of the region for birds is high, as BirdLife International identifies the region as one of the Important Bird and Biodiversity Areas (IBAs) in Europe or absence (0) of the three characteristic species, neglecting the actual number of individuals. The occurrences were summarized as follows: in each of the surveys, a maximum of three was possible on each sample plot when all species of the guild could be observed, and 0 when no species was observed. These values are taken to construct a so-called level of persistence (LOP) for every guild at each sample plot, which is the sum of all presence/absence values over the whole survey area and duration, with a theoretical minimum of 0 and a theoretical maximum of 57 . It is true that the LOP approach introduced here has no direct ecological meaning. Due to its construction, it may be considered a further development of the common presence-absence approach. The addition of the frequency of bird occurrences at particular points may furthermore indicate the habitat quality of the sample areas in an aggregated quantitative way. A high LOP value means that species of a guild are frequently observed, so the area is considered to be a rather preferred habitat of that guild. The LOPs were derived for each of the five guilds, and statistical analysis was conducted for each guild for all sample plots. In total, 120 of the 125 potential sample plots were visited at all occasions and used for the analysis. Five sample plots were excluded due to missing data.Data analyses were conducted using SPSS Statistics 22 software from IBM and ArcMap 10.2.2 software from ESRI. Landscape metrics were calculated based on the spatial data of the \u201cComprehensive Biotope and Land use Map\u201d of the Federal State of Brandenburg MLUL . These cHabitat data were pre-processed, i.e. the polygon and line layer were joined to cover the area as accurately and in as much detail as possible. For this purpose, line data were buffered with a buffer distance of 5\u00a0m to ensure that line elements were not discarded during the conversion from polygon to raster format. The processed spatial information had a standard deviation of 0.99% with respect to the original polygon data. The standard deviation of the habitat coverage and original and combined data sample plots was less than 1% and was 0.92% smaller than the whole sample area. The sample plots were tested for representativeness using the biotope inventory of the whole area. The total classification comparison revealed a significant correlation of 0.993 between the sample plots and the total Quillow investigation area , which was confirmed by Spearman\u2019s rank correlation coefficient at the 5% error level. 
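The construction of the LOP can be summarised in a few lines of code. The sketch below assumes a presence/absence array of shape (plots x surveys x species); the stated theoretical maximum of 57 implies 19 survey occasions for three species, and the observation matrix used here is simulated rather than the actual monitoring data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_plots, n_surveys, n_species = 120, 19, 3    # 19 x 3 = theoretical LOP maximum of 57

# simulated presence (1) / absence (0) of each characteristic species of one guild
presence = rng.integers(0, 2, size=(n_plots, n_surveys, n_species))

lop = presence.sum(axis=(1, 2))               # one LOP value per sample plot, 0..57
print(lop.min(), lop.max())

# plots below the 25% quartile and above the 75% quartile form the two classes
q25, q75 = np.percentile(lop, [25, 75])
less_preferred = lop < q25
preferred = lop > q75
print(less_preferred.sum(), preferred.sum())
```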
Therefore, the sample plot area was considered to represent the entire Quillow observation area reasonably well registered during 8,432 listed individual sightings; the 15 selected species represent 42% of the observed individuals of the overall bird community. The composition of the different guilds is shown in Fig. To obtain a visual impression of the spatial distribution of the selected bird guilds in the investigation area, the occurrence information for the guilds in the sample plots was displayed on spatial maps of related landscape metrics. Figure The main objective of the statistical analysis of the bird occurrence data was to determine whether it was possible to predict the guild occurrence within the sample areas based on the values of the specific landscape metrics around the sample areas. The occurrence of a guild in a sample plot was expressed by the occurrence of its 3 characteristic species and their LOP values. The derived LOP values serve as a target variable for subsequent analytical statistics. Due to the statistical properties of the LOP values and of the considered landscape metrics, only analytical methods without special requirements concerning the statistical distributions of the included features were applied.The pairwise relationships between landscape metrics and the LOPs for the particular guilds were analysed with Spearman rank correlation coefficients. The classification of \u201cpreferred\u201d and \u201cless preferred\u201d habitats was then accomplished with a binary logistic regression using input data derived from the LOPs. The binary logistic regression was selected as a principal analytical method because the analytical focus was on the qualitative distinction between rather unsuitable habitats (less preferred) and rather suitable habitats (preferred) based on landscape metrics. The binary logistic regression model is able to predict a binary response dependent on one or more independent inputs and has no particular requirements concerning the scale level of these inputs. Practically any appropriate quantitative or qualitative variable including landscape metrics may serve as input. The binary logistics regression allows a direct and transparent interpretation of input effects, and it is furthermore possible to evaluate the reliability of the classification and to rank the importance of inputs. Mathematically, the binary logistic regression function takes values between 0 and 1, and the binary response is assigned to the function values using a cut point between 0 and 1. In most practical situations, 0.5 serves as the cut point. Mathematical details and the implementation in SPSS are described by Field .Here, the binary response (preferred vs. less preferred habitats) is derived from the LOPs as a dependent variable, and a set of landscape metrics is used as independent variables. Among the landscape metrics, those three metrics were selected as inputs that showed the greatest Spearman correlation coefficient with the particular LOP. In case of obviously functionally dependent landscape metrics , only one of them was used as input in order to keep the inputs as mutually independent as possible. The number of input variables was limited to three to enable a transparent interpretation of the mutual interconnections.R2 values were calculated.The members of the two alternative classes (preferred vs. 
less preferred habitats) were taken from the cases that fell below the 25% quartile of the LOPs and above the 75% quartile of the LOPs of every particular guild and observation point. The \u201clower\u201d class indicates rather unsuitable habitats (less preferred), and the \u201cupper\u201d class indicates rather suitable habitats (preferred). Each class contains approximately 30 cases. Within SPSS, the binary logistic regression was executed with the variable selection method \u201cEnter\u201d, i.e. all independent input variables are entered in a single step Field . To evalp value of the Wald chi-square statistics. The p values indicate the strength of evidence of the relationship between target and input , the shape index distribution (SHAPE_MN) and the edge density (ED). All three act in the opposite direction of the LOP and are statistically significant at the 1% error level, that is, the higher the value of the metrics, the lower the occurrence of farmland birds. Greater shape mean, with a 5% error.The correlation analysis results of the other four guilds can be interpreted in an analogous manner. With the exception of the grassland bird guild, all selected landscape metrics were significantly correlated with the particular guild LOP. In the case of the grassland birds, the only significant landscape metric was the R2 value is reported. Additionally, the output of the logistic regression indicates what type of misclassification occurs, i.e. whether there is a systematic preference among misclassifications for preferred or less preferred areas.Table R2 value in a comparative test of the classifications. It was not possible to derive the overall statistical significance of the classification, but it was possible to use the Nagelkerke R2 value to interpret the reliability of the classification. In this respect, it was obviously easier to classify the preferred and less preferred habitats for species of the forest and settlement guilds than for species of the grassland and hedgerow guilds. The rate of correct classifications and the considerably lower Nagelkerke R2 value for the grassland guild indicate that the classification of grassland habitats seems to be the most difficult.For four of the five considered guilds, it was possible to distinguish between less preferred and preferred habitats with success rates better than 80%. Only in the case of the grassland guild was the rate of correct classifications (66%) apparently poorer. Because an identical number of cases was used for all five guilds, it was possible to use the Nagelkerke p value of the Wald statistic. The latter two provide information concerning the relative importance of individual input variables (i.e. habitat suitability expressed by landscape metrics) and their statistical significance. The input variable acts independently of the input variables. In the estimated logistic regression function for the farmland guild, all three unstandardized regression coefficients were statistically significant at the 5% error level. The standardized regression coefficient for Simpson\u2019s diversity index (SIDI) indicates that this landscape metric had the greatest relative importance. In the case of identical relative changes among the three inputs, changes in the SIDI would have the greatest effect.Table p value of the Wald chi-square statistic . 
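A compact sketch of this classification step is given below: the three landscape metrics with the strongest Spearman correlation to the LOP are entered together into a binary logistic regression (cut point 0.5), and Nagelkerke's R2 is derived from the model log-likelihoods. The metric values, the LOP surrogate and the median-based class split are simulated placeholders for the quartile-based classes described above.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 60                                        # roughly 30 "preferred" + 30 "less preferred" plots
metrics = pd.DataFrame({
    "SIDI": rng.random(n), "ED": rng.random(n), "SHAPE_MN": rng.random(n),
    "PD": rng.random(n), "LPI": rng.random(n), "CONTIG_MN": rng.random(n),
})
# surrogate LOP driven by two metrics plus noise; a median split stands in for
# the lower/upper quartile classes of the real analysis
lop = (2.0 * metrics["SIDI"] - 1.5 * metrics["ED"] + rng.normal(0, 0.8, n)).rank()
preferred = (lop > lop.median()).astype(int)

# keep the three metrics with the strongest |Spearman rho| against the LOP
rho = {c: abs(spearmanr(metrics[c], lop)[0]) for c in metrics.columns}
top3 = sorted(rho, key=rho.get, reverse=True)[:3]

exog = sm.add_constant(metrics[top3])
fit = sm.Logit(preferred, exog).fit(disp=False)       # "Enter": all inputs at once
pred = (fit.predict(exog) > 0.5).astype(int)          # cut point 0.5

cox_snell = 1 - np.exp(2 * (fit.llnull - fit.llf) / n)
nagelkerke = cox_snell / (1 - np.exp(2 * fit.llnull / n))
print(f"correct classifications: {np.mean(pred == preferred):.0%}, "
      f"Nagelkerke R^2 = {nagelkerke:.2f}")
```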
The relative importance of the input metrics can be derived from the standardized regression coefficient .Tables Of the 16 different landscape metrics initially chosen, only 6 were retained in the final statistical models describing the preferred vs. less preferred habitats for the five selected bird guilds. The habitats for farmland and forest guilds as well as for hedgerow and settlement guilds were described with the same set of indices but with varying directions and combinations. Table The relationship between landscape structure and biota has been extensively described mostly for species richness issues, including birds and the highest uncertainties of all guilds. This might be due to the lowest numbers of sightings of these bird species in our survey. The preferred areas for the grassland species were identified by the contiguity index distribution, patch density and shape index distribution set and are described as small, isolated grassland plots (low contiguity and low patch density) without highly complex shapes (low shape index). The occurrence of small closed drainage basins with grassland vegetation, which are extensively used or not managed at all, is a specific feature of many North-Middle European landscapes. The preference of the grassland birds for these features, as found in our study, might be strongly affected by the extensive management of such plots. The meadow pipit is known to inhabit open grassy areas with dense, low vegetation cover, such as extensive meadows, and to avoid very short grass in intensive meadows or grazed pastures between the different habitat types (high Simpson\u2019s diversity), in contrast to the forest habitats due to their small sizes (largest patches). Bat\u00e1ry et al. found clForest birds preferred areas with high shape index values, high Simpson\u2019s diversity index values and low edge density values in our analysis. They can be differentiated from the hedgerow guild as preferring different habitat types, including forests (higher Simpson\u2019s diversity index), that are of irregular shape (high shape index) and as avoiding transient zones (edges) between different habitat types. Within the agricultural matrix, forests are mostly singular patches and seldom share borders with other forest patches. In contrast, in forest-dominated landscapes, the forest patch area could be the most significant variable explaining the patch occupancy of residents and summering forest birds, as reported by Suk et al. . BarbaroBird species that prefer settlements and urban environments can be described with the same indices as the hedge breeding birds but with different factor levels. Settlement birds prefer the transient area between settlements and arable land (high edge density), the patchy environment around the settlements (high largest patch index LPI) and the high diversity between habitat types (high Simpson\u2019s diversity index). 
Birds in urban environments can be classified into groups of distinct habitat requirements would lose comparability with respect to the full models.Many papers emphasize that the spatial scale affects the performance of the landscape metrics for terrestrial birds landscapes with different natural pre-conditioning (setting) and to (ii) different bird fauna compositions (inventories), which we suggest to investigate further.ESM 1(PDF 7018 kb)"} {"text": "Furthermore it was demonstrated that although APTES was fully removed from the silicon surface following four hours incubation in water, the gold nanoparticle-amino surface complex was stable under the same conditions. Atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS) were used to study these affects.This study evaluates the effectiveness of vapour-phase deposition for creating sub-monolayer coverage of aminopropyl triethoxysilane (APTES) on silicon in order to exert control over subsequent gold nanoparticle deposition. Surface coverage was evaluated indirectly by observing the extent to which gold nanoparticles (AuNPs) deposited onto the modified silicon surface. By varying the distance of the silicon wafer from the APTES source and concentration of APTES in the evaporating media, control over subsequent gold nanoparticle deposition was achievable to an extent. Fine control over AuNP deposition (AuNPs/\u03bcm Fabrication and manipulation of nano-sized features is a fast-growing science playing an important role in the development of electronics, materials and biotechnology . Whilst The aim of this work was to evaluate the effectiveness of a vapour phase deposition protocol for preparing sub-monolayers of aminopropyl triethoxysilane (APTES) on silicon surfaces to control gold nanosphere surface densities. The effect of changing vapour phase deposition parameters on resulting APTES density and the propensity of the modified surface to adsorb gold nanoparticles (AuNPs), have been systematically evaluated. Additionally, the effect of varying the ionic concentration of the incubating AuNP solution on nanoparticle deposition and the protection of underlying chemical functionality conferred by the nanoparticles was investigated. The methods described are simple and practical, allowing reproducible deposition of various AuNP densities and may find application in the patterning of a range of substrates that are amenable to silanisation, including mica, quartz and glass.et al. [AuNPs stabilised with anionic citrate ions bind electrostatically to cationic APTES molecules and have widely been used to confirm the presence of APTES on derivatized silicon wafer ,10,11. Tw/w in paraffin oil (PO)) as the evaporative solution and with silicon surfaces centered at 0.25, 0.5, 1.0, 1.5, 2.0, 2.5 and 3.0 cm from the edge of the Eppendorf lid. AFM images of the surfaces following immersion in AuNP solution are shown in et al. [2. In this current study the maximum observed number density was 242 \u00b1 17 AuNPs/\u03bcm2. This discrepancy in AuNP number density is more than likely attributable to a difference in the particle size and ionic concentrations of the AuNP solutions used in the studies. Surfaces positioned up to 3.0 cm . A clear gradient in coverage (represented as nitrogen content) was observed as the APTES concentration increased from 2% to 20%. Further increasing the concentration had no effect on nitrogen concentration suggesting that monolayer coverage was achieved at 20% APTES concentration. 
There was no evidence for multilayer coverage since there was no significant increase in surface nitrogen at APTES concentrations >20% . Following immersion in 10 nm AuNP solution, no AuNP deposition was observed on the surfaces silanised with 2% APTES but almost complete coverage was seen at 4% and 8% (2); the clear gradient observed in the XPS data for 10, 30, 120 and 240 min. AuNP surface density reached a maximum (~200 AuNPs/\u03bcm2) between 10 and 30 min (\u22123 M) inset. T .The stability of nanocolloidal solutions is based upon repulsion between individual nanoparticles. In aqueous solution, negatively-charged citrate-stabilized AuNPs are surrounded by an electrical double layer of positive ions. Increasing the thickness of the double layer directly increases the electrostatic repulsion and interparticle distance of colloids in solution (Deyagin-Landau and Verwy-Overbeek (DLVO) theory) ,15,16. Ai.e., below the immobilized pKa of APTES (immobilized pKa 7.6 [450) values [The AuNP solutions used in this study were stabilised in a citrate buffer (0.04% trisodium citrate) that was sequentially diluted to give rise to solutions of varying ionic concentration. All solutions, regardless of ionic concentration, were shown to have a pH of between 6.4 and 6.7, pKa 7.6 ). Ionisa) values and mult\u22123 M). It was observed that on increased exposure to water, a decrease in AuNP surface densities resulted . All experiments were carried out at room temperature and pressure. Surfaces were degreased by ultrasonication in ethanol (15 min) followed by dimethylformamide (15 min), then dried under a stream of nitrogen. Surfaces were submerged in Piranha solution and maintained at 80\u201390 \u00b0C . Surfaces were washed thoroughly in diH2O and dried under a stream of nitrogen. Unless otherwise stated, the activated surfaces were placed 2 cm from the edge of an upturned Eppendorf tube lid and enclosed within a Petri dish ) were cut into ~0.5 cmtri dish . A mixtu2O to remove physisorbed AuNPs. The ionic concentration of the solution, with respect to trisodium citrate, is given for each solution.Unless otherwise stated, surfaces were incubated in AuNP solution ) for 14\u201324 h and then rinsed thoroughly in dHA Veeco Scanning Probe Microscope was used in tapping mode for AFsM analysis of the surfaces using RTESP 1\u201310 Ohm-cm phosphorous (n) doped silicon AFM tips. All consumables were purchased from Veeco Instruments Inc. AFM images were processed using \u201cWSXM\u201d image analysis . To detei.e., 90\u00b0 grazing angle). Surfaces were mounted using double-sided adhesive tape. The data was analyzed using Casa XPS [X-ray photoelectron spectroscopy (XPS) was carried out using a Kratos Axis Ultra DLD spectrometer . A monochromatic AlK\u03b1 X-ray source (75\u2013150 W) with an analyzer pass-energy of 160 eV (survey scans) or 40 eV (detailed scans) was used. Photoelectrons were detected in a direction normal to the surface distribution, onto silanised surfaces was shown to be readily achievable by varying the ionic concentration of the AuNP solutions. The protective nature of AuNPs on the underlying APTES groups will allow for chemical nanopatterning, through the selective modification of the amine groups not involved in the binding interaction with the nanoparticles, to generate a bi-functional surface."} {"text": "Electrostatic sensor arrays (ESAs) are promising in industrial applications related to charged particle monitoring. 
Sensitivity is a fundamental and commonly-used sensing characteristic of an ESA. However, the usually used spatial sensitivity, which is called static sensitivity here, is not proper for moving particles or capable of reflecting array signal processing algorithms integrated in an ESA. Besides, reports on ESAs for intermittent particles are scarce yet, especially lacking suitable array signal processing algorithms. To solve the problems, the dynamic sensitivity of ESA is proposed, and a hemisphere-shaped electrostatic sensors\u2019 circular array (HSESCA) along with its application in intermittent particle monitoring are taken as an example. In detail, a sensing model of the HSESCA is built. On this basis, its array signals are analyzed; the dynamic sensitivity is thereupon defined by analyzing the processing of the array signals. Besides, a component extraction-based array signal processing algorithm for intermittent particles is proposed, and the corresponding dynamic sensitivity is analyzed quantitatively. Moreover, simulated and experimental results are discussed, which validate the accuracy of the models and the effectiveness of the relevant approaches. The proposed dynamic sensitivity of ESA, as well as the array signal processing algorithm are expected to provide references in modeling, designing and using ESAs. Electrostatic monitoring systems (EMSs) are featured with the advantages of robustness and low cost ,2, makinThe sensing principle of an electrostatic sensor is electrostatic induction. It determines that even if a particle carries a constant charge, the corresponding induced charge on an electrostatic sensor probe will reduce sharply when the distance between the particle and the probe gets larger. As a result, a near particle with quite weak charge can generate similar signals as a far particle with great charge. That is to say, the charge quantity of a particle can hardly be accurately monitored by a single electrostatic sensor if the position of the particle is not provided. In view of the sensitivity, this means that most electrostatic sensors have the drawback of inhomogeneous and quite localized sensitivity; thus, only particles near the probe can be effectively detected ,10. In aCircular ESA is a kind of representative hardware structure, which is made up by placing some identical electrostatic sensors uniformly around a circular pipeline in one cross-section ,16,17,18In the aspect of application, a circular ESA is usually used to detect the particle distribution over the cross-section of a pipeline and further infer gas-solid flow parameters, such as particle density and flow regime. In these cases, the particles to be monitored are continuous in the time domain. That is to say, there are always numerous particles passing a cross-section simultaneously, thus forming an obvious profile of the solid-phase, which can be imaged by tomography-based methods ,11,14. ETo solve the problems above, the dynamic sensitivity of ESA is proposed to describe the sensitivity of ESAs in a systemic perspective. An ESA called the hemisphere-shaped electrostatic sensors\u2019 circular array (HSESCA) along with its application in intermittent particle monitoring are taken as an example. 
In detail, a sensing model of the HSESCA, as well as that of the sensor units is built in This section builds a quantitative relationship between a charged particle and the induced charge of an HSESCA, which provides a theoretical foundation for this paper.An HSESCA with eight sensor units is installed on a grounded pipeline, as shown in K whose unit is V/C. A hemispherical probe and its signal conditioner channel compose an independent sensor unit of the HSESCA. The array signal analyzer is composed of a signal acquisition card and a computer. Some specially-designed array signal processing algorithms are usually integrated in the computer to calculate the monitoring parameters (denoted as M) of the whole HSESCA.The hemispherical probes are uniformly installed in a cross-section around the pipeline. It is called the observation cross-section. Once the charged particles pass, the induced charge will be generated on each probe and then converted into voltage array signals immediately by the multi-channel signal conditioner for post-processing. Every channel of the conditioner is designed as an identical proportional two-stage amplifier as in As the sensitivity characteristics of an electrostatic sensor are quite localized ,9,10 andx-axis and the z-axis, respectively. Then, the y-axis is determined by the right-hand rule automatically. Accordingly, a point in the pipeline is denoted as P.As shown in \u221219 s), the interaction between the probe and charged particles in the pipeline can be described by a pure electrostatic field [1 is the hemispherical surface of the probe and \u03932 is the inner wall of the pipeline. t\u03c1(P) is the volume charge density in the pipeline at time t, and t\u03c6(P) is the corresponding potential distribution. Besides, const(t) is a constant function of t, describing the equipotential surface of the probe. \u03b5 is the permittivity of free space, and Because electrostatic equilibrium states are reached instantaneously is firstly solved using the Green function and the method of image charges. Then, the sensing model at P is obtained according to the relationship between the induced charge and the potential distribution on a conductor surface [Q is the total induced charge on the probe, a is the radius of the probe and On account of the pipeline being much bigger than the probe, the electrostatic field on the inner wall is simplified as that on one infinite plane ,6,17. Inq ,10, \u03c6t and those of the point charge denoted as P in the new coordinate system; as shown in In order to build a sensing model for all of the sensor units in the same coordinate system, firstly, according to the symmetry of the probe, Equation (2) is transformed into: one see to the ci-th sensor unit is derived from Equation (3) as: ib is the distance from the charged particle to the i-th probe and i\u03b2 is the included angle between the point charge and the axis of the i-th probe. They are determined by geometrical relationships: O and the bottom face of a probe.Thereupon, the sensing model of the Further, the sensing model of the HSESCA is obtained by making a superposition of sensing models of all of the sensor units. That is: D = \u22120.0188 i\u03b2 + 0.0444 is the calibration boundary that is determined according to the variation trend of the relative error [ib < D, the relative error is negligible; thus, the sensing model is only calibrated where ib \u2265 D. 
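The geometric quantities entering the sensing model, the particle-probe distance b_i and the included angle beta_i, can be computed from the particle position once the probe layout is fixed. Because the coordinate conventions are not fully recoverable from the text, the sketch below assumes eight probes on a 198 mm radius circle in the x = 0 cross-section (200 mm inner radius minus the 2 mm stand-off) with their axes pointing towards the pipe centre; this layout is consistent with the stated dimensions but remains an assumption.

```python
import numpy as np

N_PROBES = 8
R_PROBE_CENTRE = 0.198           # m: 200 mm inner radius minus 2 mm stand-off (assumed)

angles = 2 * np.pi * np.arange(N_PROBES) / N_PROBES
probe_centres = np.stack([np.zeros(N_PROBES),                 # x (pipe axis)
                          R_PROBE_CENTRE * np.cos(angles),    # y
                          R_PROBE_CENTRE * np.sin(angles)],   # z
                         axis=1)
probe_axes = -probe_centres / np.linalg.norm(probe_centres, axis=1, keepdims=True)

def geometry(particle_xyz):
    """Return (b_i, beta_i) for each probe given a particle position in metres."""
    r = np.asarray(particle_xyz) - probe_centres        # probe-to-particle vectors
    b = np.linalg.norm(r, axis=1)
    cos_beta = np.einsum("ij,ij->i", r, probe_axes) / b
    beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))
    return b, beta

b, beta = geometry([0.0, 0.100, 0.070])   # the y = 100 mm, z = 70 mm path at x = 0
print(np.round(b, 3), np.round(np.degrees(beta), 1))
```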
The calibration parameters are obtained using fitting methods; they are: In addition, an FEM-based calibration method was proposed as a supplement to consider the actual boundary conditions . It is uve error . In the It is obvious that Equation (7) is calibrated as In order to make a comparison with the dynamic sensitivity of ESA, the static sensitivity of ESA is defined here. The static sensitivity of an electrostatic sensor is defined as the absolute induced charge on the probe generated by a unit charge ,8,21. Anq are of opposite signs. As for an HSESCA, its static sensitivity model is derived from Equations (9) and (10): The minus indicates that the induced charge and the point charge When actual boundary conditions are considered, the model in the observation cross-section can also be calibrated in a similar way as Equation (8), that is Array signal processing algorithms are included in the definition of the dynamic sensitivity of ESA. In order to find effective array signal processing algorithms for an HSESCA, the characteristics of its array signal shave to be analyzed.x-coordinates of the particle are denoted with velocity v and time t, while the y-coordinates and z-coordinates are regarded as constants. In this way, induced charge on the i-th probe is expressed by iQ, where x0 is the initial x-coordinate. After that, the induced charge is transformed and amplified into voltage according to the amplification coefficient K of the signal conditioner and (12), one can get: K = \u22121, q = 1C, and the time when the particle reaches the observation cross-section is set as the zero time . The radius of the hemispherical probes is set to 12 mm, and the motion path is set to y = 100 mm and z = 70 mm occur on gas path components of gas turbines, some fault-related particles will be produced and charged. Thus, the condition information of the gas path components can be obtained by monitoring the particles ,6,7,28. i-th sensor unit of the HSESCA, its dynamic sensitivity model builds a relationship between moving charged particles and its out-put voltage signals, as Equation (14). This is expressed by As for a single electrostatic sensor, for example the q and position . They codetermine the signal peak uimax, according to the unit\u2019s dynamic sensitivity model SiD,, that is uimax, = qSiD,. However, the relationship is irreversible in practical applications. This is because the position information is difficult to obtain by using a single electrostatic sensor; thus, the value of SiD, is not determined. As a result, the charge quantity q cannot be calculated backward for the lack of position information. This is the basic reason for the low monitoring accuracy of a single electrostatic sensor.This means that a charged particle contains two kinds of information, charge quantity It has been mentioned that the array signals of an HSESCA contain the position information of a monitored particle see , which cq and position . They codetermine the signal peak uimax, of each sensor unit according to its dynamic sensitivity model SiD,. Then, the peaks are processed by a certain array signal processing algorithm to calculate the monitoring parameter M. In this process, the estimated position information of the particle is usually obtained and used. Therefore, valves of SiD, are determined by the position information, making it possible to calculate the charge quantity q backward. In fact, the monitoring parameters of many applications are just the charge quantity or some derivative values of it. 
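The distinction between static sensitivity (|Q/q| at a fixed point) and a sensor unit's dynamic sensitivity (the signal peak per unit charge for a particle crossing the pipe) can be illustrated numerically. The induced-charge model in the sketch below is a simple placeholder with a 1/distance fall-off, not the paper's calibrated sensing model, and all numerical values apart from the 12 mm probe radius and K = -1 are hypothetical.

```python
import numpy as np

A = 0.012            # m, probe radius (12 mm, as stated above)
K = -1.0             # amplification coefficient of the signal conditioner

def induced_charge(q, b):
    """Placeholder model: induced charge decays with particle-probe distance b."""
    return -q * A / (A + b)

def unit_signal(q, v, y, z, probe_yz, t):
    """Voltage of one sensor unit for a particle moving along x at speed v (m/s)."""
    x = v * t                                            # particle crosses x = 0 at t = 0
    b = np.sqrt(x**2 + (y - probe_yz[0])**2 + (z - probe_yz[1])**2)
    return K * induced_charge(q, b)

t = np.linspace(-0.05, 0.05, 2001)                       # s
u = unit_signal(q=1e-12, v=10.0, y=0.100, z=0.070, probe_yz=(0.0, 0.198), t=t)

static_like = abs(induced_charge(1.0, 0.12))             # |Q/q| at one fixed distance
dynamic_like = np.max(np.abs(u)) / 1e-12                 # signal peak per unit charge, V/C
print(f"static-style sensitivity ~ {static_like:.3f}, dynamic-style ~ {dynamic_like:.3f} V/C")
```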
As a result, by using an HSESCA and corresponding array signal processing algorithms, the monitoring accuracy can be improved.This means that a charged particle contains two kinds of information, charge quantity FD is proposed to denote the array signal processing algorithms. It builds the relationship between the array signals and monitoring parameters of an ESA is the vector of the signal peaks and SD is the vector of dynamic sensitivity values of the sensor units. The q is reducible because FD has to be designed as a linear operator of q.Further, the dynamic sensitivity of ESA is defined in the observation cross-section as the absolute value of a monitoring parameter when a unit charge passes. As for intermittent particle monitoring, the dynamic sensitivity model of an HSESCA is expressed as: It is seen from Equation (15) that the essence of the dynamic sensitivity of ESA is a recombination of the dynamic sensitivity of the sensor units according to a dynamic operator, which builds a direct relationship between moving charged particles and the system monitor values of an ESA. In other words, the dynamic sensitivity of ESA reflects the monitoring accuracy of an ESA directly by taking array signal processing algorithms into consideration. The concept of dynamic sensitivity has been used by Zhou et al. , but theA proper array signal processing algorithm is significant to ensuring the monitoring accuracy of an ESA. As for intermittent particle monitoring , it has been mentioned that the accurate charge quantity of any monitored particle is desired; thus, the position information of the particle should be used. In addition, the demand for real-time monitoring should be met to avoid missing momentary faults; thus, the corresponding array signal processing algorithms have to be fast enough.To meet the conditions above, a component extraction-based array signal processing algorithm is designed. It is based on a simple idea that a monitored particle is located near the sensor units that produce larger signal peaks. According to this idea, a set of common components is extracted from the peaks of the array signals. The components are considered to be generated by a set of imaginary point charges. They have fixed positions, but their imaginary charge quantities vary with the position of the monitored particle. In this way, the position information of the monitored particle can be used according to different charge quantities of the imaginary point charges. It is worth noting that the accurate position of a particle is not needed in the algorithm, but it is made use of by introducing a component extraction-based method. As a result, the algorithm only contains some simple steps, which are convenient for computer processing. More details are provided as follows.y = 100 mm and z = 70 mm.As shown in Umax. Then, Umax is re-sorted into Ur incrementally according to the absolute values of the signal peaks. That is to say, the i-th absolute smallest peak is denoted as riu; thus, the new vector is Ur{ru1, ru2, ru3, ru4, ru5, ru6, ru7, ru8}. For example, the No. 2 probe is the nearest one to the point charge, while the No. 6 probe is the farthest one to the point charge . In detail, the absolute smallest signal peak ru1 is firstly denoted as c1; it is a common signal component contained in every signal peak. Then, the second absolute smallest peak ru2 is denoted as the sum of c1 and c2. 
It is obvious that c2 is a common signal component contained in every signal peak, except ru1, and so on; a vector of eight common signal components is extracted from Ur, that is C{c1,c2, c3, c4, c5, c6, c7, c8}. The second step is component extraction. It is clear that a monitored particle is located near the sensor units that produce larger signal peaks. Thus, if a signal component is only contained in one signal peak, its value implies how close the particle is to the corresponding sensor probe. Analogously, if a signal component is contained in every signal peak, its value implies how close the particle is to the central point. Accordingly, a set of common signal components is extracted from q1 located in the central point is firstly assumed to generate c1. Then, because c2 is contained in every signal peak, except that of the No. 6 sensor unit, an imaginary point charge q2 is located in the connection line between the origin, and the No. 2 probe is assumed to generate c2. Analogously, as c3 is produced by every sensor unit except the No. 5 and No. 6 sensor units, q3 located in the bisector of the No. 1 and the No. 2 probes is assumed to generate c3, and so on; eight imaginary point charges are decomposed from the charged particle q, as shown in q8 should be quite close to the No. 2 probe, because its relevant signal component is only produced by that probe. The positions of the imaginary point charges are considered to be fixed due to the symmetry of the HSESCA. However, their imaginary charge quantities, which are determined by the values of the extracted components, vary with the position of the monitored particle. In other words, the charge quantities of the imaginary point charges imply the position information of the monitored particle.The third step is weighting and summation. First of all, the extracted signal components are considered to be generated by a set of imaginary point charges. In detail, based on the symmetry of the circular array and the idea that a charged particle is located near the sensor units that produce larger signal peaks, an imaginary point charge i-th imaginary point charge is expressed as iq = ic/is, where is is the dynamic sensitivity of the No. 2 sensor unit at the position of iq. By recording: iq = iciw, where iw is just the weight coefficient of ic.Next, the charge quantities of the imaginary point charges are calculated, which can be regarded as a process of weighting the extracted components according to the position information. According to the dynamic sensitivity model of the sensor units as Equation (14), the M is obtained by summing up the imaginary point charges: W is the vector of the weight coefficients and C is the vector of the extracted signal components. When q is in some symmetric lines of the HSESCA, some components could be zero, but it does not affect the computing process.Finally, the monitoring parameter W is determined in the pre-step at the grid points shown in As mentioned above, an HSESCA with eight sensor units is installed on a grounded pipeline. The pipeline has an inner radius of 200 mm; the hemispherical probes have a radius of 12 mm with their button faces 2 mm far from the inner wall of the pipeline. For the sake of simplicity, the amplification coefficient of the signal conditioner is set to The simulated results are shown by the surface diagrams in The static sensitivity of the HSESCA in the observation cross-section is calculated by using Equation (11) and the calibration method. 
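The three steps of the component extraction-based algorithm (re-sorting the peaks, taking successive differences as common components, and weighting and summing them) translate directly into code. In the sketch below the weight vector is a placeholder; in the paper it is pre-computed from the dynamic sensitivity of a reference sensor unit at the fixed positions of the imaginary point charges, which cannot be reproduced exactly here.

```python
import numpy as np

def estimate_charge(peaks, weights):
    """Estimate the monitored charge from the eight signal peaks of the array."""
    peaks = np.asarray(peaks, dtype=float)
    u_r = np.sort(np.abs(peaks))                          # step 1: re-sort incrementally
    components = np.diff(np.concatenate([[0.0], u_r]))    # step 2: c_1 ... c_8
    # step 3: each component is attributed to an imaginary point charge at a fixed,
    # symmetric position and weighted by the inverse unit sensitivity at that position
    return float(np.dot(weights, components))

# placeholder weights (one per imaginary point charge), NOT the calibrated values
weights = np.array([1.0, 1.2, 1.5, 1.9, 2.4, 3.0, 3.7, 4.5])
peaks = [0.80, 0.60, 0.50, 0.45, 0.40, 0.55, 0.90, 2.60]  # hypothetical peak voltages

print(f"estimated charge (arbitrary units): {estimate_charge(peaks, weights):.2f}")
```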
The resulting static sensitivity of the HSESCA is shown in the corresponding figure. In the calibration, the amplification coefficient is set to K = −1 for the sake of simplicity; a point charge q = 1 C is then placed in turn at each grid point shown in the figure, and the resulting monitoring values give the sensitivity at those points.

Experimental results were obtained by using an eight-channel HSESCA experiment apparatus, as shown in the corresponding figure. In the experiment, the particles were released at different test points, denoted P1, P2, P3, P4 and P5; the experimental value at each test point was obtained by dividing the summation of all of the signal peaks by the charge quantity of the corresponding particle, followed by a scaling that assumes K = −1. The results are shown in the corresponding figure.

To make a comparison between the theoretical and experimental results of the static sensitivity of the HSESCA, the theoretical values in Line 1 and Line 2 were firstly calculated by using Equation (11) and the calibration method with K = −1. It can be seen that the experimental values show fine consistency with the theoretical ones. Further analysis shows that the mean absolute value of the relative error over all of the test points in Line 1 is 2.2544%, and that in Line 2 is 1.3516%. The errors are acceptable and can be explained by the errors in controlling the particles' falling positions and in measuring their charge quantities. Therefore, the sensing model of the HSESCA, as well as that of the sensor units, is validated.

Theoretical and experimental results of the dynamic sensitivity of the HSESCA were also compared. When the amplification coefficient was set to K = −1, the theoretical values in Line 1 and Line 2 were firstly calculated; the experimental values were then obtained after the tests. It is observed that the experimental values match well with the theoretical ones. Further analysis shows that the mean absolute value of the relative error over all of the test points in Line 1 is 3.3233%, and that in Line 2 is 0.5936%. Just as in the static sensitivity test, the errors here are acceptable and can be explained by the errors in controlling the particles' falling positions and in measuring their charge quantities. Overall, the experimental results match well with the theoretical ones, which validates the accuracy of the theoretical models and the effectiveness of the corresponding methods.

Compared with the static sensitivity, the dynamic sensitivity of the HSESCA has much greater values in most of the observation cross-section. This demonstrates that the component extraction-based array signal processing algorithm is effective at overcoming the defect of inhomogeneous and localized static sensitivity, thus making a significant improvement in the monitoring accuracy for intermittent particles.

There still exist relatively less sensitive zones near the inner wall of the pipeline after adopting the proposed algorithm. This is caused by the influence of the pipeline on the electrostatic field. A strategy that optimizes the number of sensor units, combined with refinements of the proposed algorithm, is likely to alleviate this drawback and deserves further study.

Ill-conditioning and noise may also affect the results of the proposed array signal processing algorithm; this is an important issue to be addressed in further studies.

By taking the processing of the array signals into consideration, the dynamic sensitivity of ESA has been defined to describe the sensitivity characteristics of ESAs from a systemic perspective.
It builds a direct relationship between the monitored particles and the monitoring parameters of a whole ESA, thus reflecting the monitoring accuracy directly. An HSESCA along with its application in intermittent particle monitoring has been taken as an example. Its dynamic sensitivity in accordance with a proposed component extraction-based array signal processing algorithm has been analyzed quantitatively. Relevant numerical simulations have been made, and experimental validations have been carried out on an eight-channel HSESCA experiment apparatus. Detailed results have been provided, compared and discussed, which are summarized as follows:"} {"text": "MicroRNA are major regulators of neuronal gene expression at the post-transcriptional and translational levels. This layer of control is critical for spatially and temporally restricted gene expression, facilitating highly dynamic changes to cellular structure and function associated with neural plasticity. Investigation of microRNA function in the neural system, however, is at an early stage, and many aspects of the mechanisms employing these small non-coding RNAs remain unclear. In this article, we critically review current knowledge pertaining to microRNA function in neural activity, with emphasis on mechanisms of microRNA repression, their subcellular remodelling and functional impacts on neural plasticity and behavioural phenotypes. Neurons are characterized by their ability to rapidly integrate, store and transmit synaptic stimuli received from a multiplicity of sources. This complex nature is thought to be underpinned by the inherent plasticity of neuronal structure and excitability, both of which are intrinsically linked and associated with modulation of neuronal communication and information storage in complex organisms ,2. In reThe regulation of mRNA translation, in particular, has been subject to increasing focus due to its intricate nature and extensive potential for fine-tuning of neuronal remodelling. Since the discovery of active ribosomes in dendritic spines ,5, it haMounting evidence supports a major regulatory role for brain-enriched non-coding microRNA (miRNA) at the post-transcriptional and translational levels through target-specific repression of mRNA . RecentlmiRNA are a family of \u223c22nt non-coding RNAs initially characterized in 1993 as negative regulators of gene expression ,17. SincmiRNA genes are typically transcribed by RNA polymerase II and processed by the Drosha-DGCR8 nuclear microprocessor complex to yield \u223c70nt hairpin precursor miRNAs (pre-miRs) ,21. NewlmiRNA and the RISC have been traditionally characterized as mediators of transcript degradation, which has been observed in multiple eukaryotic systems ,31. In pThe majority of human miRNA tend to engage in imperfect base-pairing with targets to promote degradation via the 5\u2032 to 3\u2032 mRNA decay pathway . In contet\u00a0al. mRNA without triggering degradation, causing these cells to exhibit decreased responsiveness to nicotine stimulation and facilitation (Aplysia) . P-bodies are thought to act as cytoplasmic foci of miRNA-mediated mRNA repression, which form upon the aggregation of 5\u2032 to 3\u2032 decay machinery, RISC components and translationally arrested mRNAs ,63. AlthAplysia) . Taken tGiven the capacity of miRNA to precisely regulate the stability and translation of mRNA, further investigation of their subcellular localisation in neurons provides important insight into their function. 
Early tissue profiling studies identified the CNS as a source of miRNA enrichment ,75, withet\u00a0al. receptors, resulting in dendritic outgrowth which bears strong functional relevance for neuronal miRNA localisation in activity-dependent neuronal plasticity transcription of its host gene after pharmacological stimulation of hippocampal neurons ,86. Thiset\u00a0al. demonstret\u00a0al. .et\u00a0al. or spared nerve injury (in vivo) . Interesin vivo) . ThereinTarget RNA-directed miRNA degradation (TDMD) has also recently emerged as a novel mechanism of neuronal miRNA downregulation, whereby extensive complementarity with a target RNA leads to miRNA destabilisation and decay through 3\u2032 trimming or the untemplated addition of A residues . Complemet\u00a0al. \u2014a protein kinase which regulates translational control pathways\u2014from functional repression . The resin vivo in brain regions such as the hippocampus has been shown strongly influence performance in behavioural paradigms associated with memory. This concept was demonstrated in a study by Konopka et\u00a0al (Dicer1 knockout mouse to investigate the effects of total miRNA ablation. Subsequent remodelling of dendritic spine structure and increased expression of synapse-associated proteins were, in part, thought to contribute to an enhancement of spatial and fear-associated memory after Morris water maze and fear conditioning paradigms (et\u00a0al. (Patterns of miRNA expression, and the resultant changes to gene expression evidently play a major role in maintaining and dynamically modulating neural plasticity. Many of the resultant modifications to neuronal excitability and morphology have been directly observed in models of synaptic plasticity, the hypothesised biological mechanism behind the phenomena of learning and memory . Accordika et\u00a0al , who devaradigms . These r (et\u00a0al. , wherebySeveral specific miRNA species have been implicated in the regulation of neurocognitive functions associated with synaptic plasticity. These studies suggest that aberrant regulation of a subset of genes may result in overall abnormal neurobehavioural phenotypes. For example, overexpression of miR-134 in the CA1 region of the hippocampus results in contextual fear learning deficits in mice, indicative of the role miR-134 plays in negative regulation of dendritic spine size . SimilarFrom the discussed evidence, it is clear that miRNA regulation of gene expression has particularly distinct complexity and biological significance in the neuronal context. Although investigating the interactions and physiological impacts associated with individual miRNA has been an essential aspect of research effort, the mechanistic details associated with miRNA expression and the discrete decision-making processes involved, is of key importance to fully appreciate how these molecules operate. One dimension of miRNA function which will likely form key focus in coming years with regards to the neuron is the modulation of mRNA translational competency, which presents an especially interesting and seemingly logical system by which subcellular neuronal compartments such as dendritic spines could express genes in a semi-autonomous manner. Recent reports of miRNA acting as translational regulators supports the existence of such a system ,52,58\u201360Our understanding of miRNA in the regulation of neural plasticity is slowly emerging and likely to expand further in coming years as new miRNA are implicated and characterized. 
While over 3700 mammalian miRNA are currently supported by evidence, the functions of most remain to be characterized.

Post-transcriptional gene regulation is vital to organizing the intricate patterns of intracellular protein synthesis that support complex cellular morphology, cellular networks and systems. The vertebrate brain has optimized these systems by incorporating small non-coding RNA as the universal guide to mediate the highest specificity of interactions at the lowest biological cost. In this review we have summarized what is known about this remarkable system, as there is increasing evidence that post-transcriptional regulation of gene expression is critical for neuronal plasticity and its neurobehavioural manifestations. We are currently just scratching the surface of this mechanism, and further investigation of miRNA in particular is likely to reveal significant new insights into the complexity of neuronal gene expression and function.
ShortlyLevosimendan has a pharmacodynamic profile combining inotropic and vasodilating effects (inodilator), and a nearly unique (among inotropes) myocardial protective effect. The inotropic effect results, in part, from an increased affinity of troponin C for calcium when the drug is present, which in turn prolongs the duration of actin/myosin cross-bridges \u201312. ThisFrom the pharmacokinetic point of view, levosimendan has a fast onset of action and a half-life of 1\u00a0h. The drug undergoes hepatic metabolism (acetylation) followed by renal excretion. Quite uniquely, it has an active metabolite with a very long half-life 70\u201380\u2009h) responsible for a prolonged effect , 23. All0\u201380\u2009h reThe \u201cclassical\u201d inotropic drugs (catecholamines and phosphodiesterase III inhibitors) are widely used in the perioperative setting, particularly in patients undergoing cardiac surgery. While they can provide life-sustaining support in circumstances of severe right and/or left cardiac ventricular failure and improve both clinical symptoms experienced by patients and systemic end-organ perfusion, the benefits of these inotropes on medium- and long-term survival have never been documented . To dateThe catecholamines are the most frequently used during the perioperative period. With an elimination half-life of just a few minutes, their on/off properties are quite appreciated at the bedside. The infusion of dobutamine produces a dose-dependent rise in cardiac output, mainly by increasing heart rate rather than stroke volume . Major an\u00a0=\u200952) of patients with acute or chronic heart failure and treated with beta-blockers, a prospective randomized double-blind international study found that levosimendan, as opposed to dobutamine, increased cardiac index and decreased pulmonary capillary wedged pressure, but failed to improve clinical symptoms and mixed venous oxygen saturation [Patients receiving beta-blockers (BB) in an acute or chronic setting have an altered beta-adrenergic receptor function and are therefore unlikely to respond optimally to catecholamines. The relevant clinical contexts are as follows: (i) patients with acute myocardial infarction given BB prophylactically who subsequently develop cardiogenic shock and require inotropic support, (ii) patients with chronic heart failure who are under chronic BB therapy and have acute on chronic cardiac decompensation, and (iii) patients with accidental/intentional BB intoxication. When beta-adrenergic receptors number and/or function are decreased, levosimendan appears as the drug of choice because its mechanisms of action are independent of this receptor. This is supported by a few experimental and clinical studies. In an animal model of acute intoxication with propranolol, Leppikangas et al. demonstrated that levosimendan, but not dobutamine or placebo, was able to increase stroke volume, inotropism, heart rate, and mean arterial pressure . This waturation . In addituration , 34. Basturation , 36.Fig\u22121\u00a0min\u22121) over 6\u2009h at 2-week intervals in 120 advanced heart failure patients [p\u00a0=\u20090.069). The LION-HEART study, another multicenter RCT involving 69 patients with chronic advanced heart failure, compared the effect of 6-h levosimendan infusions (0.2\u2009\u03bcg\u00a0kg\u22121\u00a0min\u22121) repeated every 2\u2009weeks for 12\u2009weeks with placebo infusions on NT-proBNP levels at 12\u2009weeks [P\u00a0<\u20090.001) and attenuated the decline in quality of life . 
The incidence of serious adverse events was not different between treatment and placebo groups, reflecting the safety of repetitive infusions of levosimendan. However, it is important to mention that although mortality and readmission rates decreased with levosimendan, they remained much higher in comparison to those observed when heart transplantation or mechanical circulatory assist devices could be carried out. Therefore, the strategy of repetitive infusions cannot be considered as an alternative to them, but alleviates the symptoms and improves the quality of life in patients awaiting transplantation or in those who are ineligible for more invasive approaches. A new RCT is currently under way to try to confirm the usefulness of repeated infusions of levosimendan in this patient population [Patients with symptomatic advanced heart failure despite optimal medical treatment are sometimes unable to be discharged because they are dependent on dobutamine infusions that cannot be weaned off. Such patients are at high-risk of death and may be waiting for a heart transplantation or a long-term mechanical assist device or, in contrast, may not be eligible for these therapeutic options. In such situations, repetitive infusions of levosimendan may offer the advantage of a prolonged inotropic effect with the possibility of improving the clinical symptomatology and allowing hospital discharge. Very limited data exists regarding the effectiveness of repetitive infusions of levosimendan \u201341. A repatients . The imp12\u2009weeks . The autpulation .Although the classical hemodynamic profile of cardiogenic shock associates low cardiac output, low arterial pressure, elevated left/right ventricular diastolic pressure, and elevated systemic vascular resistance, other phenotypes can be encountered. Some shock states related to ischemia-reperfusion injury lead to a sepsis-like syndrome with low systemic vascular resistance , 47, whiUnlike dobutamine, levosimendan increases moderately myocardial oxygen consumption, does not alter diastolic function, and has less direct pro-arrhythmic effects . Moreove2 [There is currently no high-quality study dealing with the use of levosimendan in cardiogenic shock. The most recent meta-analysis performed using a few studies with a high risk of bias, reports that, when compared to dobutamine, levosimendan did not affect short and long-term mortality, ischemic events, acute kidney injury, dysrhythmias, or hospital length of stay . Levosim2 . The nee2 , 50. Cur2 . A new RTakotsubo syndrome is a form of acute myocardial stunning in which catecholamines appear to have a central role in the pathophysiology, as there is no occlusive coronary artery disease to explain the pattern of temporary LV dysfunction observed. To date, there have been no randomized trials to define the optimal management of patients with suspected Takotsubo syndrome. Levosimendan has been advocated as the first choice inotropic support when mechanical circulatory assist devices are not available \u201354. ThisThere is experimental evidence attesting for the ability of levosimendan to reverse, in part, pulmonary vasoconstriction and improve right ventricular function in various animal models of pulmonary hypertension \u201358. Pati. To avoid this complication, low doses of positive inotropic drugs are commonly administered in order to maintain left ventricular ejection and avoid upstream congestion. 
Moreover, since the duration of VA ECMO is directly correlated to complications, the weaning of the device should be attempted as soon as possible. Although dobutamine is currently the first-line drug used for patients in cardiogenic shock [Veno-arterial extra-corporeal membrane oxygenation (VA ECMO) is used to restore adequate perfusion to vital organs in patients suffering from refractory cardiogenic shock . Howeveric shock , 66, theic shock \u201370. Howeic shock , 71. In Experimentally, in septic rabbits, levosimendan yields a similar improvement in left ventricular systolic function, when compared to dobutamine or milrinone, but improves diastolic function to a greater extent . A largeAlthough two large randomized controlled trials failed to show a reduction in composite endpoints reflecting low cardiac output syndrome and mortality in a mixed population of CABG, valvular, or combined surgery with LVEF<\u200940% , 8, a reDue to a very interesting pharmacological profile, levosimendan has raised a lot of interest in the field of heart failure management. Unfortunately, all the large randomized controlled trials have failed to demonstrate a clear superiority of this drug over placebo, when used on top of usual catecholamines, in the populations that they have tested. Even if the initial enthusiasm for this drug has somewhat been reconsidered, there are still good reasons to believe that it might be useful in specific subgroups. These include patients with acute heart failure receiving beta-blockers and Takotsubo syndrome, patients awaiting heart transplant or left ventricular assist device implantation, and patients under VA ECMO to facilitate weaning. In addition, there may still be room for levosimendan in the management of some patients with cardiogenic shock, and as a prophylactic treatment prior to CABG surgery in patients with low LVEF. Additional studies are still required to support these potential indications with a higher level of evidence."} {"text": "Healthy eating and fitness mobile apps are designed to promote healthier living. However, for young people, body dissatisfaction is commonplace, and these types of apps can become a source of maladaptive eating and exercise behaviors. Furthermore, such apps are designed to promote continuous engagement, potentially fostering compulsive behaviors.The aim of this study was to identify potential risks around healthy eating and fitness app use and negative experience and behavior formation among young people and to inform the understanding around how current commercial healthy eating and fitness apps on the market may, or may not, be exasperating such behaviors.Our research was conducted in 2 phases. Through a survey (n=106) and 2 workshops (n=8), we gained an understanding of young people\u2019s perceptions of healthy eating and fitness apps and any potential harm that their use might have; we then explored these further through interviews with experts (n=3) in eating disorder and body image. Using insights drawn from this initial phase, we then explored the degree to which leading apps are preventing, or indeed contributing to, the formation of maladaptive eating and exercise behaviors. We conducted a review of the top 100 healthy eating and fitness apps on the Google Play Store to find out whether or not apps on the market have the potential to elicit maladaptive eating and exercise behaviors.Participants were aged between 18 and 25 years and had current or past experience of using healthy eating and fitness apps. 
Almost half of our survey participants indicated that they had experienced some form of negative experiences and behaviors through their app use. Our findings indicate a wide range of concerns around the wider impact of healthy eating and fitness apps on individuals at risk of maladaptive eating and exercise behavior, including (1) guilt formation because of the nature of persuasive models, (2) social isolation as a result of personal regimens around diet and fitness goals, (3) fear of receiving negative responses when targets are not achieved, and (4) feelings of being controlled by the app. The app review identified logging functionalities available across the apps that are used to promote the sustained use of the app. However, a significant number of these functionalities were seen to have the potential to cause negative experiences and behaviors.In this study, we offer a set of responsibility guidelines for future researchers, designers, and developers of digital technologies aiming to support healthy eating and fitness behaviors. Our study highlights the necessity for careful considerations around the design of apps that promote weight loss or body modification through fitness training, especially when they are used by young people who are vulnerable to the development of poor body image and maladaptive eating and exercise behaviors. There was a sense that apps should send positive notifications regardless of whether the daily goal was achieved:Apps that implemented a positive approach to encourage users for their effort were seen to be more appealing. Participants noted that receiving \u201clittle badges\u201d (W2-P2) or a simple message such as \u201cnice one, you\u2019ve done that well\u201d (W1-P5) could be [On Fitbit] at the end of the day if you don\u2019t hit your daily step target [...] it just goes back to the next day, but then if you do hit it, it goes green and it\u2019s like \u201cwoohoo\u201d. But MyFitnessPal, I think it\u2019s like addictive methods of trying to get you to use it to reach your goals.W2-P2However, this concept of achieving set goals was not always seen to be a positive thing. Participants described how fixating around the attainment of a goal could lead to obsessive behaviors: \u201cYou don\u2019t want to do badly, so it makes you more obsessive of reaching your goals\u201d (W1-P1) and how not reaching a specific goal could lead to feelings of distress and guilt: \u201cThe worst features of the apps are that he [the persona character] feels guilty if he goes over calories and that can cause further restriction and it\u2019s made him obsessive\u201d (W1-P4), which could in turn lead to negative counter-behaviors, such as meal skipping: \u201cyou could reach your calories at like 3pm and then you don\u2019t feel like you could eat anything else for that day\u201d (W1-P1).E-3 described how the prescription of goals\u2014such as hitting 10,000 steps a day\u2014may overshadow any intrinsic motivations for exercising :The monitoring apps can make you think about physical activity in quite a sad way, or quite a compulsive way. 
If you have a step counter actually you think \u201cI\u2019m going to go for a walk to get my step count up\u201d and your motivation behind doing it is quite external, it\u2019s not about enjoying the walk for example, it\u2019s not about getting some fresh air or just clearing your head for a minute, actually suddenly that walk is all about \u201cwell this is going to get me about 3000 steps if I do this 20 minute walk at lunch time\u201d for example.E-3E-1 discussed how a focus on these aspects of quantification and goal attainment could become a potential risk for the emergence of maladaptive eating and exercise behavior, if paired with obsessive personality traits or perfectionism:People who tend to have an obsessive personality and also high levels of perfectionism will sustain their use of fitness apps and count every calorie, count every step, count every episode that they go to the gym and all their exercise and use a mobile platform to collect that data.He noted how, in clinical care, \u201cwe actively discouraged [people with eating disorders] from using those apps and various wrist monitors that count the steps and activity levels\u201d as they can become \u201ctriggering\u201d as well as sustaining \u201cobsessional and restrictive and rigid behaviors\u201d. E-2 further highlighted this point by describing his own personal experience of using healthy eating and fitness apps to track his food intake and exercise levels:I got to this point where I felt like I was really in control and I found the counting of the calories wasreally empowering...especially running and walking [...] you can put it on and it tracks you and it gives you your speed, and I was really fixated on how far I was going and how fast I was going.E-2E-2 also highlighted how this eventually led to him losing a sense of control:I no longer felt in control, it sort of started spiraling and I was losing like five pounds, six pounds.At this point, he realized that he has established an unhealthy relationship with healthy eating and fitness apps:It all had to sort of go because I just realized how unhealthy it was and I know that I can\u2019t really use those kind of apps in a healthy way.Participants discussed how heavy dieting and exercise regimes could have a negative impact on maintaining a good social life; many social gatherings focus on food, and often, a lot of time is required to maintain a heavy gym regime. 
W2-P1 described how, for some people, socializing becomes challenging because of the need to make dietary sacrifices:If you\u2019re very religiously logging it and then you think oh I can\u2019t go out to that restaurant or I don\u2019t want to go out for drinks.The concept of religiously monitoring caloric intake in an attempt to lose weight was highlighted by E-1 as a key challenge that could ultimately lead to social isolation and impact of the psycho-social development of a young person:For a lot of people it becomes an obsessive, all-consuming pursuit and therefore all energy is invested in that and there is no space left to do other things like invest time in social relationships, in social gatherings, education, family, friends and other sorts of developmental aspects of growing up.It was noted how engagement with apps on a mobile phone is largely, by design, a personal and private experience, which could in fact intensify negative experiences and behaviors, as it is easy to hide from family and friends:...by the time you\u2019ve noticed it they\u2019re probably already obsessive, because apps are actually very easy to hide that you\u2019re using itW1-P1W2-P2 also described how easy it is for young people to engage in secretive behaviors without anyone finding out:It\u2019s on a phone where you\u2019ve got a password. Sure, people can have visibility of it but they may not even know you have it. So say if it\u2019s a young, like a teenager, their parents may not even know they\u2019re using it, why would they, it\u2019s on a locked phone [...] I think you can quite easily start to like manipulate your calories in a negative way and nobody would know you\u2019re doing it.Participants discussed the issue of under logging calorie intake and how an app may respond in such cases:It comes up in red basically giving you a warning that you\u2019re not eating as much, or, if you carry on like this you\u2019ll be losing two pounds in a week.W1-P2However, they discussed that providing such weight loss projection could motivate users to eat less to lose weight faster within the estimated time frame. Participants explained how, when the user underlogs, apps currently only respond with a warning notification, instead of taking any actions to prevent this from happening. For example, W2-P2 described how she restricted her calorie intake to 1200 calories per day for 7 months without any guidance from the app to re-evaluate her goals:It didn\u2019t come up like \u201cyou\u2019re doing this and you should not be doing it\u201d kind of thing, so I think it is negative, it should prompt you to re-evaluate your weight and your goals but it doesn\u2019t, it automatically recalculates your weight, but you can keep changing your end goal and then I\u2019m not sure it brings up like well this is in an underweight category of [Body Mass Index] BMI.Our health experts also envisioned apps taking more responsibility to protect users, particularly those at risk, by not just focusing on sending notifications to encourage achieving the daily goal but also by notifying when overuse happens:I think apps could be more intelligent and could give feedback about over-use [...] 
if you spend too much time on your fitness app or on your diet app or too much time in the gym if you are monitoring all your activities, one could have a warning come up that this might not be the most healthy thing to do.E-1gentle approach to encourage engagement was also seen as beneficial:On the contrary, the need for a If someone\u2019s not doing as much physical activity, having an app which kind of brings you back in gently rather than saying \u201cyou haven\u2019t exercised for five days, what are you doing,\u201d actually that could be really helpful, so trying to think about encouraging people to do those healthy activities rather than worrying too much about what those outcomes are going to be.E-3listen to your body and gain the power to switch off when needed. They discussed the notion of feeling controlled, or being driven, by apps and how one should have a period of detox if this happens. E-1 highlighted the need for a psycho-educational component about the harms and benefits of healthy eating and fitness apps, explaining that young people in recovery learn to disconnect themselves from their phones:Each of the experts highlighted the importance of being able to ...trying to live with the anxiety of not knowing the detail of their activity levels and input and outputs; that is something pretty much promoted in recovery.He further explained how they aim to teach young people to trust their own bodies and themselves, to look after themselves:It\u2019s natural for one to allow your body to find its own equilibrium and its own input and output in terms of food and activity levels, and to rest when you feel the need to rest and to exercise when you feel the need to exercise.E-1This view was also shared by E-2 who explained the need to be mindful around logging behaviors:Am I checking this because of anxiety and habit or because I want to use it? How autonomous am I in doing this? [...] whether you feel compelled to use it, I think people need to work out whether that\u2019s the case for them or not, like having that awareness is really important and that\u2019s what gives people the power.E-2This was echoed by E-3:I try to think about what am I doing, why am I doing this, what\u2019s this app encouraging me to do, is it actually doing what I want it to be making me do or is it making me do something negative.E-3It was acknowledged by E-2 that this can be counter-intuitive to the way healthy eating and fitness apps are currently designed, with users often being actively encouraged to log and monitor themselves constantly: \u201cif you don\u2019t [log] then you mess up your statistics\u201d. He explained how, on one occasion, forgetting his phone at home and not having the ability to track his steps made him feel anxious:I felt more anxious because I didn\u2019t know how much I\u2019d done and I needed the steps, like the app to tell me that I\u2019d done enough exercise, rather than listening to my body, which I think is a big thing with these apps and social media\u2014they\u2019re prescribed\u2014I think that sort of disconnects people from their bodies a little bit.E-2In summary, survey results showed that almost half of our respondents had some form of negative experience using healthy eating and fitness apps. Approximately one-third of respondents reported that they had stopped engaging with healthy eating and fitness apps, typically citing that they found them too demanding or that they lacked the motivation for long-term use. 
Workshops and interviews further explored personal and professional perspectives on the impact of healthy eating and fitness apps on young people. Participants described how the pressure of attaining health goals, set through the app, could lead to negative experiences and behaviors. There was much consensus that healthy eating and fitness apps can bring feelings of guilt leading to negative experiences and behaviors such as meal skipping. The quantification element of the healthy eating and fitness apps was also seen as a contributing factor for development of maladaptive eating and exercise behaviors, particularly for those with obsessive personalities. Participants concerningly discussed the possible impact of obsession with logging behaviors on social isolation. This was backed up by experts highlighting the importance of being mindful around logging behaviors.The next stage of our research then focused on exploring the specific features of healthy eating and fitness apps that are currently available on the market, including the ability to log behaviors of interest and how the app responds to user engagement and behavior logging. We wanted to understand the potential of the top 100 healthy eating and fitness apps to elicit the negative experiences and behaviors reported in the first stage.Data from the reviews were collated into a spreadsheet. For review responses that resulted in qualitative data, we performed a qualitative content analysis . As follFor reviews that resulted in quantitative data, responses were summed to produce scores that reflect the number of apps belonging to each possible category. Interrater reliability of this coding had already been determined in the previous review phase see .Descriptions of apps were coded according to the health behavior targeted by the app. The vast majority of the apps focused on exercise promotion (n=62) and exercise and diet combined (n=24). A total of 3 apps were found to not focus on either diet or exercise. This included 1 app that focused on editing the body to assess how one would look like with various bodily enhancements and 2 that focused on weight monitoring . Around two-thirds of the apps included the ability to set appearance-related goals, such as weight loss and enhancement in muscle tone (n=65).The majority of apps collected some form of data pertaining to physical activity (n=84). This included data such as daily steps, cycling, and the completion of workouts specified by the app. A much smaller percentage of apps collected dietary data pertaining to food consumption (n=23). More than half requested user data pertaining to weight (n=56).In total, 21 apps allowed the setting of underweight body goals (corresponding to a BMI<18.5). Qualitative responses to other questions highlighted how this could include setting extremely low BMI targets in some apps: \u201cI still can set a BMI goal<13\u201d.Around two-thirds of the apps that facilitated the recording of dietary intake allowed the reporting of calorie consumption that was 50% more or 50% less than the recommended daily amount for the average individual (n=16). Some apps responded differently to the entering of low-calorie consumption (n=8), for example, highlighting that a goal had not been met or not allowing the calories to be logged. Others had a minimum calorie entry threshold and would not allow calories to be logged under this target. 
Some apps treated the under logging of calories the same as logging calorie content in line with their goals (n=8), including those that praised under logging calories in the same way that they praised calorie entry consistent with goals . A small proportion of apps also responded differently to logging calories that exceeded the daily target (n=8). Example responses to exceeding calorie goals included numeric feedback, reminders about goals, and visual feedback such as turning the calorie counter red or sad face emoji. Furthermore, some apps had a more questioning approach, for example, asking if the calorie entry is correct.A fifth of the apps responded to failure to meet exercise goals (n=20). The nature of these responses varied and included reminders, numerical feedback. Similar to when logging a high calorie intake, some apps adopted a questioning approach (\u201cWanna give up? Think about why you started?\u201d). Two-thirds of the apps offered praise and rewards for continuous use (n=60). Qualitative responses as to the nature of this praise varied. Many offered direct messages of praise such as congratulations messages or fireworks. Elements of gamification were present in some apps, with trophies, badges, and the unlocking of levels being used to reward continuous use.Apps were coded according to whether positive or negative feedback was used to motivate users\u2019 engagement with the app. The majority focused on positive feedback (n=72), whereas a minority of apps were felt to focus on negative feedback (n=19). The remainder of apps either provided neutral feedback (n=8) or provided both positive and negative feedback to the extent that the reviewers could not differentiate between which was being used more in the app (n=1).Examples of both positive and negative feedback were provided in qualitative responses to other questions . Some apps sent positive feedback to encourage users to keep up with their initial goal \u201cHey [user name]! You\u2019ll feel great after a workout. Strive for progress, not perfection\u201d or suggested that the user adjust their personal goal so that it was more in line with their lifestyle \u201cYou haven't been active lately, do you want to adjust your goal\u201d. An example of negative feedback included users being punished for not reaching their personal target \u201cif you miss a day you'll get punished by losing a heart\u201d.Furthermore, apps were coded as to whether they focus on what the user has achieved or what they have not achieved. Just under half of the apps were positively focused on achievements (n=49), highlighting what the user had done, whereas around one quarter were negatively focused on achievements (n=26), highlighting what the user had not done. The remainder did not focus on achievements (n=25) and included apps that did not involve goal setting or targets and those with predetermined reminders not linked to user behavior.Around one-third of apps facilitated the sharing of data with other app users (n=32). The remaining apps either did not facilitate this (n=66) or did not collect any user data (n=2). A much higher proportion of apps facilitated the sharing of app-related achievements through social media sites (n=59), whereas the remainder did not (n=39) or did not collect any data (n=2).A small proportion of apps automatically ranked users among others (n=6) but all of these apps allowed this feature to be deactivated. This ranking approach was evident in the qualitative responses about app feedback. 
For example, a push-up workout app compared user reports about their workouts with other users in similar categories : \u201cyou did X push-ups, this is better than XX% of users today\u201d.In summary, we wanted to identify the logging functionalities available in the top 100 healthy eating and fitness apps that have the potential to elicit negative experiences and behaviors captured in the first stage. We found that apps responded differently to data-logging behaviors, for example, some allowing users to under log their calorie consumption whereas others using guilt-inducing techniques when calories exceed the daily goal. Alarmingly, 21% of the apps we reviewed allowed underweight goal setting (BMI<18.5). In addition, one quarter of the apps focused on negative reinforcement for maintaining health goals. A substantial number of apps facilitate sharing data with other users (n=32) and across social media (n=59).The aim of this research was to understand the potential role of healthy eating and fitness apps in the development of maladaptive body-related attitudes and eating and exercise behaviors in young people. Drawing on the findings from Phase 1 and Phase 2, we have identified 5 important ways in which healthy eating and fitness apps may potentially exert negative impacts on users. In this section, we reflect on these themes and how future research may mitigate against these issues.Calorie counting is a prominent feature of many currently available healthy eating and fitness apps and a key reason why many of the survey respondents, both male and female, reported using healthy eating and fitness apps. However, participants in both the survey and workshops also described how obsessive thoughts and behaviors around calorie counting and logging could emerge. For example, not achieving a calorie count within a certain boundary could cause feelings of failure and guilt, often then leading to restrictive eating or excessive exercising behaviors , simply because the app indicated that they had already reached their caloric intake for the day. This tendency to engage in purge behaviors echoes findings from previous research examining the consequences of healthy eating and fitness app use and foodself-knowledge through numbers [numbers game for the young people in our study, becoming a law for them to live their lives by. In the workshops, participants describe how self-tracking led them to over focus on the calories contained in food at the expense of nutrition . Furthermore, some survey respondents described cheating the app by not accurately logging their food consumption. Although healthy eating and fitness apps offer new ways of monitoring, measuring, and representing the human body [the numbers over physical observations [In addition to calorie counting, there were other aspects of self-tracking evident in the apps reviewed . The primary objective of self-tracking is around obtaining numbers , and the numbers . Howeverman body , they alrvations . This obReflecting on these points, we need to think about better ways to log food and exercise. For example, have useFurthermore, our experts highlighted a need for approaches that promote listening to one\u2019s own body, and its nutritional needs and physical limitations, rather than becoming reliant on the attainment of what are often arbitrary numerical goals. 
Such an approach is consistent with newly emerged evidence-based psychological approaches to improving exercise and eating behavior that are focused on the development of embodied and mindful approaches to eating and exercise . ReframiA high percentage of healthy eating and fitness apps currently available on the market focus on the achievement of appearance-related goals rather than health goals. This finding is consistent with research conducted around other health and fitness media, such as magazines .Focusing on appearance-related goals in eating and exercise settings has been linked to negative body image and maladaptive eating and exercise behavior . In contthinspiration . However, it is possible that these types of responses might serve as munities ), motivaThe literature on behavior change, and the roles of notifications in supporting change, is extensive ,75,76. IIn our study, young people openly discussed the negative emotions they have experienced when interacting with healthy eating and fitness apps. They expressed feelings of guilt, disappointment, and pressure. This echoes findings from ,63, wherextreme measure behaviors , instead promoting better coping mechanisms . Consequently, engaging users in the creation of preplanned statements that mitigate negative feelings such as guilt triggered by not meeting goals might be a useful way of supporting users.Although the aim of notifications is to encourage engagement, previous research suggests that these are not always effective and can instead cause additional stress on young people . Rather compete toward weekly step counts. Research suggests that perceived social norms can be a powerful motivator of behavior change in relation to eating and exercise, in that individuals are more likely to engage in behavior if they perceive others to be doing it [Healthy eating and fitness apps offer the opportunity to enhance the social life of the user by giving access to group memberships and opportunities to find like-minded people or friends on the internet (sharing and comparing data through the app) and offline . Many apdoing it . Howeverdoing it .Our app review indicates that the majority of apps facilitated social features, that is, data sharing with other app users (n=32) or through social media (n=59). However, although healthy eating and fitness apps are seen as socially mediated experiences to connect people with common interests and goals for healthy eating and physical activity , our quaMost healthy eating and fitness apps are designed to reinforce continuous use. Typically, this is achieved through gamification features that render daily routines into games . GamificAlthough the aim of gamifications is to promote sustained use, studies have shown that long-term engagement is not always achievable, for example, only 11.6% of app users in Great Britain are still engaging with an app a week after installing it on their phones (findings from other countries are similar) . Our surperiod of detox when unusual patterns of use are detected, as part of the gamification model. In addition, providing a framework within the app that encourages users to self-reflect on their use patterns and to re-evaluate their goals regularly may help users who might be struggling to re-gain a sense of much needed control.One possible option for the future app developer would be to consider encouraging more long-term behavior change, rather than focusing on engagement . 
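As an illustration of the kind of safeguard discussed above, the short sketch below shows how a goal-setting and logging check might prompt re-evaluation rather than silently accepting a risky target. It is a hypothetical design sketch, not code from any reviewed app; the 18.5 BMI cut-off mirrors the underweight threshold used in our review, while the 1200 kcal floor and all function names are illustrative assumptions.

```python
UNDERWEIGHT_BMI = 18.5   # underweight cut-off used in our app review
MIN_DAILY_KCAL = 1200    # illustrative floor only; a real app should rely on clinical guidance


def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)


def review_weight_goal(goal_weight_kg: float, height_m: float) -> str:
    """Flag goals in the underweight BMI range and invite re-evaluation."""
    if bmi(goal_weight_kg, height_m) < UNDERWEIGHT_BMI:
        return ("This target falls in the underweight BMI range. "
                "Let's re-evaluate your goal before continuing.")
    return "Goal saved. Check in with how you feel, not just the numbers."


def review_daily_intake(logged_kcal: float) -> str:
    """Respond to substantial under-logging with guidance rather than praise."""
    if logged_kcal < MIN_DAILY_KCAL:
        return ("You have logged very little today. If this is accurate, "
                "consider revisiting your plan rather than restricting further.")
    return "Intake logged."
```

The point of the sketch is the design choice of responding to risky entries with a prompt to re-evaluate, rather than the specific thresholds, which would need clinical input and personalization.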
This research adopted a mixed-methods approach to develop a holistic understanding of the potential negative impact of healthy eating and fitness app use among young people. Though we aimed to recruit both male and female participants to engage in our qualitative research, we struggled to recruit individuals identifying as male to engage in workshop activities. That said, male perspectives are still represented in the survey data and in fact made up the majority of respondents. One reason for this could be that we recruited largely from computer science, which is a predominantly male field; however, for future work, a gender balance would be preferable. We did also recruit, for the interviews, a male eating disorder campaigner, activist, and writer with personal experience of an eating disorder, who advocates for greater recognition of eating disorder service delivery and the male experience. However, future research should aim to explore the male experience of healthy eating and fitness app use more deeply. Similarly, it may be important for future research in this field to consider how other individual difference variables, such as race, class, body size, and disability, affect participants' engagement with healthy eating and fitness apps and their potential for misuse.

Through our study, we have offered a deepened understanding of young people's experiences of healthy eating and fitness apps and the potential harm that their use might have. We have offered a set of guidelines for future apps that can be responsibly developed to prevent the formation of maladaptive eating and exercise behaviors. Although we understand that an app developer would never set out to meaningfully bring about negative emotional responses in their users, we must also be mindful of the fact that these responses are happening. Through this study, we hope to open a dialogue around how use of these apps could have the potential to become the seed that develops into a more serious issue. As the target user demographic for these types of apps, young people are most vulnerable to the development of poor body image and maladaptive eating and exercise patterns. As such, we need to take care in the type of language we are using and the type of sustained behaviors we are promoting."} {"text": "The relationships between several Hofstede's cultural dimensions and prosocial behavior at national level have been investigated by some studies. Yet the roles of indulgence versus restraint (IVR) and long-term versus short-term orientation (LTO), two newly established cultural dimensions, have received insufficient interest. This study aimed to investigate whether the World Giving Index (WGI), a national level measure of prosocial behavior provided by Gallup, was affected by IVR and LTO. The results suggested a positive link between IVR and WGI, and a negative link between LTO and helping a stranger. Culture values can to a great extent account for why prosocial behavior varies across countries. Further analysis revealed interactions among IVR, LTO, and individualism versus collectivism (IND). Simple slope analyses found that: (1) a higher level of IND could enhance the positive influence of IVR on prosocial behavior; (2) a lower level of IND could weaken the negative impact of LTO on prosocial behavior; (3) a higher level of IVR could weaken the negative effect of LTO on prosocial behavior.
People are willing to sacrifice their own interests, including time, energy, money, and even physical health, to benefit others or the society as a whole. These behaviors are called prosocial behavior, which includes sharing, formal and informal helping, charitable donation, and volunteering . In the Confucian Work Dynamism. LTO indicates the time-orientation of a society. Societies located at the long-term pole prefer virtues oriented toward future reward, particularly perseverance, thrift, order of relation by status, and a sense of shame. In contrast, societies located at the short-term pole prefer virtues related to the past and present, in particular respecting for tradition, protecting one\u2019s \u201cface,\u201d and fulfilling social obligations. East Asian countries mostly have a long-term orientation, while Australia, United States, some Latin American, African, and Muslim countries can be identified as short-term orientated societies and the sixth dimension were added to Hofstede\u2019s model . LTO oriocieties .Indulgence versus restraint was originally extracted from the World Values Survey and adopted by Hofstede in his model . IVR refCulture values are important factors in determining an individual\u2019s social behavior . CountryThe reasons why national cultural values influence prosocial behavior have also been addressed by previous researchers. Culture is a collective level phenomenon containing variable values, beliefs, and practices across societies . Individ1. Furthermore, the cultural values of a country may change over time the relationship between LTO and prosocial behavior was moderated by IVR.2 using data gathered by Gallup. Three components of WGI were measured each by an item asking the participants if they have given money to charity/volunteered for an organization/helped a stranger in the month previous to the survey. For each of these three questions, a percentage of participants who said yes were calculated. Then these three percentages were averaged within a country to form an aggregate score representing prosocial behavior at national level. If a country had missing data in the 2016 WGI, the missing value was replaced by the average score of the years available during 2010\u20132015. Previous studies showed that WGI can be used as a reliable and valid measure of prosocial behavior as that country.Based on previous findings that religion and economic factors also infTable 1. Power distance, uncertainty avoidance, and masculinity versus femininity were also included in the correlation analysis in order to make a comparison with the results of previous studies. As Table 2 illustrated, IVR was positively correlated with three indicators of prosocial behavior , while LTO was only significantly negatively correlated with helping a stranger . And IND was significantly positively correlated with donating and volunteering . This indicated that IVR may be the most important predictor of prosocial behavior among the three cultural dimensions. Among the three cultural dimensions, the correlation between IVR and LTO was negative , and the correlation of LTO with IND was positive , while the relationship between IVR and IND was insignificant. 
These results suggested that these three cultural dimensions are essentially different constructs that may each contribute uniquely to prosocial behavior.Pearson correlations between IVR, IND, LTO, and WGI were shown in r = -0.50; for volunteering, r = -0.28; for helping, r = -0.21; for WGI, r = -0.44, ps < 0.01; uncertainty avoidance: for donating, r = -0.44; for volunteering, r = -0.25; for helping, r = -0.23; for WGI, r = 0.40; ps < 0.01), while masculinity versus femininity was not associated with prosocial behavior.Consistent with Table 2). This method enables us to find the variables whose predictive values cannot be substituted by other variables. Table 2 showed that (1) when WGI, donating, and volunteering were outcomes, the effects of uncertainty avoidance and IVR were significant , with other four cultural dimensions providing no more predictive power; (2) when helping a stranger was the outcome, the effects of LTO, uncertain avoidance, and power distance were significant . These findings suggested that the two newly developed cultural dimensions in Hofstede\u2019s model are useful in predicting country-level prosociality. Uncertainty avoidance was the only cultural dimension that was predictive of WGI and its three components above other cultural dimensions, showing that it has particular importance in influencing prosocial behavior. Surprisingly, IND and religion did not exert significant influence on prosocial behavior. This is inconsistent with Hierarchical multiple regression was used to analyze the unique contribution of each cultural dimension. Religion and HDI as the controls were entered in Step 1, and six cultural dimensions were added in the regression model using a stepwise method in Step 2 IVR did not influence WGI , whereas in high IND societies the influence of IVR on WGI was significant , see Figure 1. Simple slope analysis revealed similar findings when the outcomes were donating , and helping a stranger . Furthermore, the influence of IND on the relationship of volunteering with IVR is stronger in high IND countries than in Low IND countries . These results suggested that indulgence in a society generally has a positive effect on prosocial behavior, but this effect is much stronger in individualist societies than in collectivist societies.As shown in Table 4, the significant interaction between LTO and IND showed that the relationship between LTO and donating can be influenced by IND. Simple slope analysis showed that in low IND societies LTO did not influence donating , while in high IND societies the influence of LTO on donating was significantly negative ], see Figure 2.In Table 5 showed that the interaction effect of LTO and IVR in predicting prosocial behavior was also significant, showing that the relationship between LTO and prosocial behavior (excepting volunteering) was moderated by IVR. Simple slope analysis showed that LTO had a negative influence on WGI in low IVR societies , while the influence of LTO on WGI is insignificant in high IVR societies , see Figure 3. Similar results were found when the outcome was helping a stranger . But both in low and high IVR societies, influences of LTO on donating money were not significant .It has been suggested that the fundamental psychological processes of an individual, such as cognition, judgment, evaluation, emotion, etc, can be systematically influenced by the culture values of a society . 
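The moderation pattern reported above (an IVR x IND interaction probed with simple slopes at plus or minus one standard deviation of the moderator) can be sketched as follows. This is a schematic of the analysis pattern, not the authors' exact SPSS workflow: the synthetic data, the mean-centring choice, and the variable names are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # illustrative "countries"
d = pd.DataFrame({
    "IVR": rng.normal(50, 20, n),
    "IND": rng.normal(50, 25, n),
    "HDI": rng.uniform(0.5, 0.95, n),
})
# Toy outcome built so that the IVR effect grows with IND.
d["WGI"] = 20 + 0.15 * d["IVR"] + 0.002 * d["IVR"] * d["IND"] + rng.normal(0, 5, n)

# Mean-centre predictors before forming the product term (a common, assumed choice).
for c in ("IVR", "IND"):
    d[c + "_c"] = d[c] - d[c].mean()

m = smf.ols("WGI ~ HDI + IVR_c * IND_c", data=d).fit()
print(m.summary().tables[1])

# Simple slopes of IVR at low (-1 SD) and high (+1 SD) levels of IND.
b_ivr, b_int = m.params["IVR_c"], m.params["IVR_c:IND_c"]
sd_ind = d["IND_c"].std()
for label, z in [("low IND (-1 SD)", -sd_ind), ("high IND (+1 SD)", sd_ind)]:
    print(label, "slope of IVR on WGI:", round(b_ivr + b_int * z, 3))
```

The same pattern applies to the LTO x IND and LTO x IVR interactions by swapping the predictor and moderator in the formula.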
ConsistLong-term versus short-term orientation is also an important cultural dimension in influencing prosocial behavior. In this study, the negative association between LTO and helping a stranger was rather strong, which is partly consistent with Hypothesis 1b. A possible explanation is that, in short-term orientated societies service to others is considered as an important goal, while in long-term orientated societies thrift and perseverance are considered as important goals . Luria eIn this study the effect of IND on prosocial behavior was partialed out by IVR, LTO, religion, and HDI, though it has been revealed by previous studies . The dimThe results showed that IND played a moderating role in the relationship between IVR and prosocial behavior. High IND in a country could strengthen the relationship of IVR with WGI and its components. That is, the IVR-prosociality association was stronger in high IND countries than in low IND countries. This may be explained by the fact that components of WGI are prosocial behavior primarily directed toward out-groups/strangers. In collectivist societies, IVR may promote prosocial behavior toward in-groups, while in individualist societies IVR may promote prosocial behavior toward out-groups or strangers .Individualism versus collectivism also played a moderating role in the relationship between LTO and prosocial behaviors. A significant interaction between LTO and IND in predicting money donation indicated that the negative LTO-donation association tended to be stronger in individualist societies than in collectivist societies. Collectivism emphasizes interpersonal relatedness and commitment to groups or the society . The negFurthermore, a moderating effect of IVR in the LTO-prosociality relationship also was found. The effect of LTO on prosocial behavior was significant in low IVR societies, but insignificant in high IVR societies. Long-term orientated societies encourage thrift and perseverance and discourage service to others. One manifestation of LTO is a conservative attitude toward money that may deter prosocial behavior . These nDrawing on the newly published datasets and a relatively larger sample size, this study explored the effects of two newly established cultural dimensions in Hofestede\u2019s model (IVR and LTO) on prosocial behavior that have received little interest in previous studies. We found that the effects of LTO and IVR on prosocial behavior were moderated by IND. These findings have made a significant progress in explaining why prosocial behavior varies across cultures. However, this study also has limitations.Previous literature suggests that WGI is a reliable measure of prosocial behavior , but whaIncreasing prosocial behavior in a society is beneficial not only for social harmony and solidarity , but alsThis study showed that the interactions among IVR, LTO, and IND also have significant effects on prosocial behavior. Nevertheless, we have not provided sound explanations for these interactions. For example, why the relationship between IVR and prosocial behavior is stronger in high versus low IND countries? These questions are expected to be addressed elaborately by future research.In this study IVR was the only cultural dimension contributing to all three types of prosocial behavior. Its influence remained robust after the effects of other cultural dimensions were controlled. This is consistent with previous findings that prosocial behavior is mainly motivated by emotion . 
SurprisLong-term versus short-term orientation has exerted a strong negative influence on helping a stranger, suggesting that devaluing the importance of serving others and endorsement of thrift and perseverance in a society can have a negative impact on building prosocial ethos. Moderation analyses have also revealed several valuable findings: (1) IVR is more strongly conductive to prosocial behavior in individualist societies than in collectivist societies; (2) low IND in a society could weaken the negative effect of LTO on donating; (3) high IVR in a society could attenuate the negative effect of LTO on prosocial behavior.QG designed the study and wrote the manuscript. ZL and XL wrote the manuscript and collected the data. ZL and XQ collected and analyzed the data under the supervision of QG. QG, ZL, and XQ revised the manuscript, and replied to comments.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "The interface interactions between the two components and the resultant optoelectronic properties of the composite are demonstrated. According to transmission electron microscopy and X-ray absorption spectroscopy, the dispersion of CeO2 nanoparticles in the polymer matrix strongly depends on the CeO2 nanoparticle concentration and results in different degrees of charge transfer. The photo-induced charge transfer and recombination processes were studied using steady-state optical spectroscopy, which shows a significant fluorescence quenching and red shifting in the composite. The higher photo-activity of the composite as compared to the single components was observed and explained. Unexpected room temperature ferromagnetism was observed in both components and all composites, of which the origin was attributed to the topology and defects.This study presents the preparation, characterization, and properties of a new composite containing cerium oxide nanoparticles and a conjugated polymer. CeO Composites containing nanoparticles have received much attention due to their synergistic and hybrid properties derived from their corresponding components. The composite properties depend on the properties of their individual components and also on their morphologies and interfacial characteristics. For instance, Bellucci et al. demonstrated that the optical properties changes were correlated to nanoparticle-driven interface modulations . The pot2) is one of the most used metal oxides because cerium is the most abundant element among the rare earths and is environmentally friendly thiadiazole, two equivalents of 2,7-bis-9,9\u2019-spirobifluorene and one equivalent of N,N\u2019-bis(2-ethylhexyl)-2,6-dibromonaphtalene-1,4,5,8-tetracarboxylic acid diimide. 2,7-Bis-9,9\u2019-spirobifluorene , 4,7-dibromobenzo[c] thiadiazole , N,N\u2019-bis(2-ethylhexyl)-2,6-dibromonaphtalene-1,4,5,8-tetracarboxylic acid diimide and aqueous K2CO3 were mixed in 1,4-dioxane (20 mL). After degassing for 10 min, tetrakis(-triphenylphosphine) palladium was added under an argon atmosphere. The reaction mixture was heated at 110 \u00b0C for 48 h before cooling to room temperature. The reaction mixture was extracted with dilute aqueous hydrochloric acid and dichloromethane. The organic layers were combined, washed with brine, dried over anhydrous magnesium sulfate. After filtration, the residue was concentrated and poured into methanol and then the precipitate was collected by filtration. 
The solid was washed by a Soxhlet extraction with methanol and acetone for 24 h, respectively before being dissolved in hot chloroform. The product was dried under a vacuum to give a yield of 65%. The detailed synthetic process of the polymer is shown in The conjugated polymer was synthesized by the modification procedures reported previously . All che3)3\u00b76H2O; 434.23 g/mol, 99.5% purity) was purchased from Alfa Aesar. Ammonium hydroxide was purchased from Avantor . Ethylene glycol (99.5% purity) was purchased from Sigma-Aldrich. These chemicals were used without further purification. Distilled water was used in the synthesis process for nanoparticles.Cerium (III) nitrate hexahydrate (Ce(NO2 nanoparticles were synthesized by the co-precipitation method. The Ce(NO3)3\u00b76H2O is dissolved in 80% ethylene glycol (EG)/water mixture by stirring at 600 rpm at room temperature. When all the nitrate precursors are dissolved, ammonium hydroxide (3 M) is added dropwise while the solution is kept in the same condition. After the addition of the precipitating agent , the mixture was stirred at 30 \u00b0C for 21 h. The precipitate was separated by centrifugation at 6000 rpm for 10 min and washed three times with ethanol. The separated solid was dried for 24 h and crushed with mortar and pistil, the CeO2 nanoparticles were obtained as powders.CeO2 nanoparticle powders with a 20, 40 and 50 weight percent were added into the polymer solution and the mixture was sonicated for 30 min. The composites were left in the oil bath for 72 h.Polymer is dissolved in tetrahydrofuran (THF) with a concentration of 0.001 g/mL and sonicated for 30 min. Different amounts of CeO3L-edge were performed at room temperature on a Wiggler beamline 17C at the National Synchrotron Radiation Research Center (NSRRC), Taiwan. The monochromator Si (111) crystals were used in Wiggler beamline 17 C. The energy resolution at the Ce 3L-edge (5723 eV) was about 0.4 eV. The XANES spectra at the C K-edge were recorded at beamline 20A using total electron yield (TEY) mode at the National Synchrotron Radiation Research Center (NSRRC), Taiwan. The magnetization was measured at room temperature using a vibrating sample magnetometer (VSM) at the Institute of Physics, Academia Sinica, Taiwan.The NPs were characterized by using the X-ray diffractometer (XRD) with Cu K\u03b1 radiation and beamline 01C2 at the National Synchrotron Radiation Research Center (NSRRC), Taiwan. The particle distribution, morphology, and crystal structure are studied by a transmission electron microscope operated at 200 keV (Philips Technai G2 FEI-TEM) . The ultraviolet\u2013visible (UV\u2013Vis) spectra were recorded using a Jasco V-670 spectrophotometer . The photoluminescence properties were investigated using a Jasco FP-8500 spectrofluorometer . The X-ray absorption near-edge fine structure (XANES) measurements at the Ce 2 nanoparticles and CeO2/polymer composites with different amounts of CeO2 nanoparticles. The XRD pattern of CeO2 nanoparticles indicates peaks at 2\u03b8 about 28.5\u00b0, 33.1\u00b0, 47.4\u00b0, 56.3\u00b0, 59.1\u00b0, 69.3\u00b0, 76.7\u00b0 and 79.1\u00b0 which corresponds to the lattice plane of (111), (200), (220), (311), (222), (400), (331) and (420) of CeO2 with space group2/polymer composites contains all peaks from both components which indicates that the efficient blend of two components.2 nanoparticles and CeO2/polymer composites. 
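As a quick cross-check of the XRD indexing above, the cubic lattice parameter can be estimated from any assigned reflection via Bragg's law. The snippet below uses the reported (111) peak near 2θ ≈ 28.5° and assumes Cu Kα radiation (λ ≈ 1.5406 Å); it is an illustrative calculation, not a value quoted from the paper.

```python
import math

wavelength = 1.5406      # Cu K-alpha wavelength in angstroms (assumed)
two_theta_deg = 28.5     # reported (111) reflection of CeO2
h, k, l = 1, 1, 1

theta = math.radians(two_theta_deg / 2)
d_spacing = wavelength / (2 * math.sin(theta))      # Bragg: n*lambda = 2*d*sin(theta)
a = d_spacing * math.sqrt(h**2 + k**2 + l**2)       # cubic: d = a / sqrt(h^2 + k^2 + l^2)
print(f"d(111) = {d_spacing:.3f} A, lattice parameter a = {a:.3f} A")
# Gives roughly a = 5.42 A, consistent with fluorite-type CeO2.
```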
As can be seen from 2 nanoparticles highly aggregated together due to the small size and Van der Waal\u2019s forces. The size of the cluster is as large as the micron scale. According to selected area electron diffraction (SAED) analysis , the degree of dispersion in the polymer is suppressed. The size of the CeO2 cluster was then increased again is a unique method that provides electronic structural information on the orbital symmetry and spin state of the materials and was thus utilized to determine the C hybridization as well as the Ce charge statecaused by the composite formation.The decrease in the CeOmoieties . Due to K-edge XANES of the composites are shown in 2, the polymer peak intensity decreased significantly, indicating that more electrons transfer into the polymer. As for the peak located at 290.4 eV (peak B), which correspond to carbon atoms in polymer attached to hydrogen, nitrogen or other species, shows a clear increase. This suggests that the interfacial C\u2013O\u2013Ce bonding is formed for the CeO2/polymer composites.The C 2 was investigated using XANES of the Ce-L edge. The normalized XANES spectra of Ce L3 edge of composites containing different ceria nanoparticle concentration are shown in 1(5d6s)4 state). Therefore, the Ce3+ concentration in the CeO2 matrix (Ce3+/(Ce3+ + Ce4+)) can be then expressed as the ratioC is the deconvoluted peak C intensity and IT is the intensity sum of peaks A, B and C [3+ concentration (the ratio of the concentration of Ce3+/(Ce3+ + Ce4+)) was about 10% in the raw CeO2 nanoparticles. For the composite, whereas the weight ratio between CeO2 and polymer is 0.2 and 0.4, the Ce3+ concentration decreases to 7.5% and 7%, respectively. The decrease in the Ce3+ concentration confirms that electrons transfer from CeO2 into the polymer.The variation in the Ce charge state in ceria between the composites and pure CeO B and C . AccordiK edge and Ce L edge demonstrated the charge transfer between CeO2 nanoparticles and the polymer, which results in the increase in electrons in the C=O of polymer and the decrease of hybridization between Ce and O in CeO2 nanoparticles. This implies that more Ce3+ was induced at the particle surface. It is worth noting that among composites with different amounts of CeO2 nanoparticles, the decrease in the Ce3+ concentration is almost the same. This suggests that the interaction occurs only at the CeO2 nanoparticle surface.The above XANES analysis of the C p valence band (VB) of O2\u2212 to the 4f conduction band (CB) of Ce4+ in CeO2. For the polymer, a broad absorption band was observed possibly due to the \u03c0-\u03c0* transition and the intramolecular charge transfer between spirobifluorenes and naphthalene bisimides [2 nanoparticles the polymer absorption is enhanced and the absorption edge shifts to the longer wavelength. This broad absorption and longer absorption edge can be attributed to the charge transfer from the ceria nanoparticles into naphthalene bisimide moieties of the polymer as demonstrated by XAS analysis in the previous section.The original characteristics of both components would be affected by the change in the electronic structure as mentioned above. The UV\u2013Vis spectra of the ceria, polymer and composites are shown in isimides . It shoua is a constant and n is the index characterizing the nature of electronic transition causing optical absorption. 
n can take on values of 3, 2, 3/2, or 1/2, corresponding to indirect prohibited, indirect permitted, directly prohibited, and directly permitted transitions, respectively. A graph plotted between (\u03b1h\u03c5)1/n as ordinate and h\u03c5 as abscissa. The extrapolation of the linear part of the graph to (\u03b1h\u03c5)1/n = 0 (Tauc\u2019s plot) gives the optical band gap. Accordingly, the optical band gap is 2.97, 2.02 and 1.70 eV for CeO2 nanoparticles, polymer, and composite in which the ratio between CeO2 to polymer is 0.4, respectively. The change in the composite band gap indicates the intramolecular charge transfer between spirobifluorenes and naphatalene naphthalene bisimides that leads to a change in the band structure.The optical band gap can be determined using Tauc\u2019s model using the UV\u2013Vis absorption spectra. Tauc\u2019s model is given by:2 nanoparticles can be observed. A broad emission band from 550 nm to 750 nm with a maximum emission peak (631 nm) was observed for the polymer. The excitons were created by photoexcitation in which the holes and electrons are located in the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of the polymer, respectively. The electron and hole recombination is responsible for this emission peak. After the composite formation, the fluorescence quenching, as well as red shifting of the emission maximum, is observed. The fluorescence quenching may indicate the intramolecular charge transfer that takes place between the two species as has been confirmed by XANES. The emission peak of the composites shifted from 631 nm to 643 nm. This redshift suggests that there is a change in the polymer electronic structure as well as the chemical environment, which was demonstrated by UV\u2013Vis spectra and XANES of the C-K edge of the polymer. This shift may arise due to the reduction in bandgap and the variation in the interface between the CeO2 nanoparticles and the polymer.Photoluminescence (PL) is a remarkable method to probe certain structural aspects and provide information at short and medium-range, where the degree of local order such as structurally inequitable sites can be distinguished by their different types of electronic transitions and are linked to specific structural arrangements. The structural and electronic order or disorder effects and the nature of bonding in a conjugated polymer have a key impact on the optical property. s) of the CeO2 is 0.008 emu/g , pristine polymer and pure ceria NPs measured at room temperature were shown together in inset in , which i2 NPs, the value of Ms was 0.2 emu/g. It is lower than that of pure polymer. According to the magnetic behavior of pristine polymer and ceria NPs, the decrease in magnetization could be attributed to the lesser content in the polymer. However, it is worth mentioning that the Ms of composite is higher than the linear combination of both components, demonstrating the magnetism of both components changed by interface interactions. For ceria NPs, it has been demonstrated that within a wide range of defect concentrations, the value of Ms is proportional to the Ce3+ concentration at the surface. As shown in the previous section, the Ce3+ concentration is less after the composite formation. Based on the above results, the decrease in CeO2 ferromagnetic contribution is predicted. In other words, the polymer magnetic response became stronger in the composite. Both polymer conjugation and delocalization was affected by the electron transfer. 
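Two relations referenced in the preceding paragraphs appear with their displayed equations lost in this copy: the Ce3+ fraction obtained from the deconvoluted Ce L3-edge peaks, and Tauc's model for the optical band gap. The forms below are reconstructions from the surrounding definitions (peak intensities I_A, I_B, I_C; absorption coefficient α; photon energy hν) and should be read as hedged reconstructions rather than the authors' typeset equations.

```latex
% Ce3+ fraction from the deconvoluted Ce L3-edge peaks (peak C over the total intensity):
\frac{[\mathrm{Ce}^{3+}]}{[\mathrm{Ce}^{3+}]+[\mathrm{Ce}^{4+}]} \approx \frac{I_C}{I_T},
\qquad I_T = I_A + I_B + I_C

% Tauc relation for the optical band gap E_g (a: constant, n: transition index):
\alpha h\nu = a\,(h\nu - E_g)^{\,n}
\quad\Longrightarrow\quad
(\alpha h\nu)^{1/n}\ \text{plotted against}\ h\nu\ \text{is linear near the absorption edge,}
\ \text{and extrapolating}\ (\alpha h\nu)^{1/n}=0\ \text{yields}\ E_g .
```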
Similar results were reported by B. Yang et al. They observed that the saturation ferromagnetism of polymer P3HT mixed with phenyl-C61-butryic acid methyl ester (PCBM) was about 0.65 emu/g and suggested that the ferromagnetism origin is associated with P3HT crystallization and the charge transfer between P3HT and PCBM [In the polymer composite with 40 wt% CeOand PCBM . Accordi2 NPs was successfully produced using an ex situ preparation method. This method has an advantage over the in-situ method as it is free from the unnecessary solid remaining in the hybrid and it is simple and used for large-scale production. TEM and XAS analysis results showed that the nanoparticles are well dispersed in the polymer matrix with charge transfer that occurred between the two components. The difference in band structure resulted in a broader absorption peak than the individual components UV\u2013Vis spectra. Photoluminescence revealed the fluorescence quenching and red shifting of the composite peak compared to that of the pristine polymer. The difference in electron hybridization and localization also affected the magnetic response of both components. The magnetization of the pristine polymer was found to be enhanced.Polymer composite with CeO"} {"text": "Triple-negative breast cancer (TNBC) is one of the most common malignancies worldwide and shows maximum invasiveness and a high risk of metastasis. Recently, many natural compounds have been highlighted as a valuable source of new and less toxic drugs to enhance breast cancer therapy. Among them, S-adenosyl-L-methionine (AdoMet) has emerged as a promising anti-cancer agent. MicroRNA (miRNA or miR)-based gene therapy provides an interesting antitumor approach to integrated cancer therapy. In this study, we evaluated AdoMet-induced modulation of miRNA-34c and miRNA-449a expression in MDA-MB-231 and MDA-MB-468 TNBC cells. We demonstrated that AdoMet upregulates miR-34c and miR-449a expression in both cell lines. We found that the combination of AdoMet with miR-34c or miR-449a mimic strongly potentiated the pro-apoptotic effect of the sulfonium compound by a caspase-dependent mechanism. For the first time, by video time-lapse microscopy, we showed that AdoMet inhibited the in vitro migration of MDA-MB-231 and MDA-MB-468 cells and that the combination with miR-34c or miR-449a mimic strengthened the effect of the sulfonium compound through the modulation of \u03b2-catenin and Small Mother Against Decapentaplegic (SMAD) signaling pathways. Our results furnished the first evidence that AdoMet exerts its antitumor effects in TNBC cells through upregulating the expression of miR-34c and miR-449a. Breast cancer is the most commonly diagnosed invasive cancer and the second-leading cause of mortality among women worldwide ,2. The iTumor metastasis is responsible for up to 90% of cancer-related deaths so, prevention and treatment of metastasis are key to improving clinical outcomes . MetastaEMT has been implicated in breast cancer progression and metastasis and is also involved in cancer stem cell expansion and chemoresistance to cancer treatment. For this reason, finding a natural molecule without significant toxic side effects and capable of reducing the spread of cancer cell is a major clinical challenge and recently intensive efforts have been focused on understanding the molecular mechanisms, as well as new targets for suppressing these processes ,9,10. 
AdoMet, the second most extensively used enzyme cofactor after ATP, performs a wide range of well-documented biological functions in all living cells and links three primary metabolic pathways: transmethylation, polyamine biosynthesis, and transsulfuration ,12. Experimental evidence has been reported showing that AdoMet is able to regulate, through an epigenetic mechanism, the expression of genes playing a crucial role in cell migration, invasion, and metastasis ,29,30,31. Recent experimental and clinical studies have improved our knowledge of tumor metastasis formation, a dynamic program triggered by complex regulatory networks involving transcription factors, non-coding RNAs, epigenetic modulators, and exogenous inducers from the tumor microenvironment ,9,10. Our research group has thoroughly investigated the antiproliferative and proapoptotic role exerted by AdoMet in breast cancer ,41,42. Here, we reported that in MDA-MB-231 and MDA-MB-468 TNBC cells AdoMet upregulated the expression of miR-34c and miR-449a, well-known regulators of specific oncogenes and modulators of tumor growth and cancer metastasis in breast cancer ,46,47. Emerging evidence suggests that miRNAs play important roles in the pathogenesis of many types of human cancers by modulating different genes in the context of signaling pathways involved in tumor promotion or suppression ,36,37,38. MiRNA-34c and miRNA-449a, which belong to the miRNA-34/449 superfamily, share several target genes and a very similar seed sequence, a conserved heptameric sequence comprising nucleotides 2–7 at the 5′ end of the miRNA that is essential for binding to target mRNA ,46,47. To obtain insight into the functional mechanism underlying AdoMet's anticancer effects in TNBC cells, we evaluated its ability to induce the expression of miR-34c and miR-449a by performing quantitative real-time PCR (qRT-PCR) analysis with pre-designed probe-primer sets after 24- and 48-h treatment of MDA-MB-231 and MDA-MB-468 cells with 500 µM AdoMet. The results obtained showed that the relative expression of the two miRNAs was up-regulated by AdoMet in both TNBC cell lines. As shown in To explore the antitumor activity of AdoMet in TNBC cells, we first tested the ability of the sulfonium compound to induce apoptosis in MDA-MB-231 and MDA-MB-468 cells. The cells were treated with AdoMet 500 µM and the apoptotic process was evaluated after 72 h by flow cytometry. As shown in Firstly, by qRT-PCR we demonstrated that transfection of cells with miR-34c or miR-449a mimics effectively up-regulates miR-34c and miR-449a transcriptional levels in the two TNBC cell lines, reaching a value approximately double that of controls (data not shown). Next, to analyze the antitumor activity of miR-34c and miR-449a, we assessed apoptosis induction.
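The relative miRNA expression reported above is not accompanied here by the quantification formula. A common approach for qRT-PCR data of this kind is the 2^-ΔΔCt (Livak) method against a small-RNA reference gene; the sketch below uses that method with invented Ct values, so both the reference gene and the numbers are assumptions for illustration only.

```python
# Hypothetical Ct values (threshold cycles); smaller Ct means more abundant transcript.
ct = {
    "control": {"miR_34c": 30.2, "reference": 22.1},
    "adomet":  {"miR_34c": 28.4, "reference": 22.0},
}

def relative_expression(treated, control, target, reference):
    """Fold change of `target` vs. `reference` by the 2^-ddCt (Livak) method."""
    d_ct_treated = treated[target] - treated[reference]
    d_ct_control = control[target] - control[reference]
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

fold = relative_expression(ct["adomet"], ct["control"], "miR_34c", "reference")
print(f"miR-34c fold change after AdoMet: {fold:.2f}")  # ~3.2-fold with these toy numbers
```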
Our results showed that in both MDA-MB-231 A and in Altogether, these findings indicated that miR-34c and miR-449a played a tumor suppressive role in TNBC cells and that upregulation of miR-34c and miR-449a by AdoMet mediated AdoMet-induced apoptotic cell death.In order to evaluate the effect of AdoMet on TNBC cell migration, we performed transfection experiments with miR-34c or miR-449a mimics in highly aggressive and invasive mesenchymal-like MDA-MB-231, and in the basal-like MDA-MB-468, characterized by a relatively low invasive phenotype and potential and thenFirst, we have assessed that in a short time, such as 24 h, after AdoMet and/or miRNAs treatment the apoptotic cell death evaluated by FACS analyses did not interfere with the migration process evaluated at 24 h in both cell lines because it was not significant and reached, as a maximum, a value 10% more than the control (data not shown).Cell migration was detected in real-time by using video time-lapse microscopy (TLVM), and representative images of the wound closure process are shown in In Cancer metastasis begins with detachment of metastatic cells from the primary tumor, followed by an increase in cellular motility and invasion, proteolysis, and resistance to apoptosis. These four essential steps are correlated and influenced by multi-biochemical events and parameters.N-cadherin and vimentin, a protein overexpressed in most epithelial cancers, whose levels are correlated with tumor migration, invasion, and poor prognosis expressed as mm/hour. Moreover, the fields of view selected and used to build up the overall averaged curves all had a similar scratch width, ranging from 0.7 to 0.9 mm, corresponding to a wound area of 16\u201320 mm2. The statistical significance of the experiment was ensured by the possibility to visualize contemporarily several fields of view (up to 30\u201336 h) of the same sample (depending on the delay time chosen by the operator) in the staged incubator. Furthermore, a single field of view (~10 \u00d7 105 \u03bcm2) represented 5% of the total scratch area (~20 \u00d7 106 \u03bcm2) of each well. Because we captured at least five field-views in three repetitions per well, this ensured us that we analyzed 25\u201330% of the scratch in each specific well. Triplicates were performed for each scratch assay.The wound closure measurements were calculated by the software as Area t /Area tg for 30 min a 4 \u00b0C and the supernatant was recovered. Protein concentration was performed by Bradford method as previously reported [MDA-MB 468 and MDA-MB-231 cells were transfected with 100 nM miR-34c or miR-449a mimic, treated or not with AdoMet 500 \u00b5M and after 48 and 72 h, collected by centrifugation, washed twice with ice-cold PBS, and the pellet was lysed using 100 \u00b5L of RIPA buffer. After incubation on ice for 30 min, the samples were centrifuged at 18,000\u00d7 reported .Western blotting analysis was performed as previously reported . 
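The wound-closure expression quoted above is garbled in this copy (it reads "Area t /Area t"); a plausible reading is the wound area at time t normalised to the initial area, with the migration rate taken from the change in mean wound width over time, as the mm/hour units suggest. The sketch below implements that reading with invented measurements, so both the normalisation and the rate definition should be treated as assumptions.

```python
# Hypothetical scratch-assay measurements for one field of view.
times_h   = [0, 6, 12, 18, 24]
areas_mm2 = [18.0, 15.1, 11.8, 8.9, 6.2]    # wound area at each time point
widths_mm = [0.85, 0.71, 0.56, 0.42, 0.29]  # mean wound width in the same frames

a0 = areas_mm2[0]
percent_closed = [(1 - a / a0) * 100 for a in areas_mm2]   # 1 - Area_t / Area_t0

# Average closure speed expressed in mm/hour, as in the text above.
rate_mm_per_h = (widths_mm[0] - widths_mm[-1]) / (times_h[-1] - times_h[0])

for t, p in zip(times_h, percent_closed):
    print(f"t = {t:2d} h: {p:5.1f}% of the wound closed")
print(f"average closure rate: {rate_mm_per_h:.3f} mm/h")
```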
All priStatistical analysis was performed as previously reported .Overall, our results provided the first evidence that AdoMet exerts its antitumor effects in TNBC cells by regulating miRNA expression and gave new information for a better and deeper understanding of the molecular mechanisms underlying the anticancer properties of this naturally-occurring multifunctional sulfonium compound suggesting the use of AdoMet as an attractive chemopreventive and therapeutic strategy miRNA-mediated in TNBC.Our study also provided significant contribution to the knowledge of miR-34/449 biological functions furnishing the first evidence that in MDA-MB-231 and DA-MB-468 cells miR-34c and miR-449a act as tumor suppressors and inhibitors of the metastatic potential of cancer cells by targeting TGF-\u03b2/SMAD and \u03b2-catenin signaling pathways.Taken together, these data suggested the advantage of using natural compounds with pleiotropic effects, such as AdoMet and miRNAs, which can regulate many target genes achieving more efficient modulation than single-target drugs. These discoveries offer another approach for the scientific community in cancer therapy using only natural compounds, and highlight a new tool to fight cancer giving the possibility to recover health by avoiding typical side-effects, improving prognosis, and increasing the percentage of remission. However, it appears that miRNAs are very suitable in combination with AdoMet in cancer therapy and more data are expected in the coming years to deepen the knowledge of the regulation of non-coding RNAs by AdoMet in order to improve currently applied anticancer therapies."} {"text": "This online structured survey has demonstrated the global impact of the COVID-19 pandemic on vascular services. The majority of centres have documented marked reductions in operating and services provided to vascular patients. In the months during recovery from the resource restrictions imposed during the pandemic peaks, there will be a significant vascular disease burden awaiting surgeons.One of the most affected specialties Coronavirus disease 2019 (COVID-19) has had a profound effect on the availability of surgical resourceshttps://www.medrxiv.org/content/10.1101/2020.05.27.20114322v1; ISRCTN 80453162).International guidelines on designing and reporting of surveys were usedS2, supporting information). Results reported here are for the period 23 March to 3 May 2020, divided into three 2-week periods for comparison. Duplicate responses were removed.A remote digital survey was developed by a global team of vascular healthcare professionals. Questions related to all aspects of vascular care, including staff availability, multidisciplinary team input, and personal protective equipment (PPE) .International/continental comparisons were performed, where possible, to describe relative change in practice. A score of 0\u20133 was allocated to each answer based on perceived relative service reduction by 12 VERN healthcare professionals (0 represents no change and 3 the most significant change) (S3 (supporting information).Overall, 465 completed survey responses were collected from 249 different units in 53 countries across six continents maintained their pathways for acute aortic syndromes . 
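The 0-3 service-reduction scoring mentioned in the survey methods is described only in outline, so the aggregation below is a guess at one reasonable implementation: item scores summed per unit response and then averaged per country and 2-week period. The question names are hypothetical and do not come from the survey instrument.

```python
import pandas as pd

# Hypothetical survey extract: one row per unit response, with item answers already
# mapped to the 0-3 "relative service reduction" scale described in the text.
responses = pd.DataFrame({
    "unit":          ["U1", "U1", "U2", "U3"],
    "period":        ["wk1-2", "wk3-4", "wk1-2", "wk1-2"],
    "country":       ["UK", "UK", "Italy", "Brazil"],
    "aaa_threshold": [2, 3, 1, 0],   # illustrative item names, not the survey's
    "evar_access":   [1, 2, 3, 0],
    "clinics":       [3, 3, 2, 1],
})

score_cols = ["aaa_threshold", "evar_access", "clinics"]
responses["reduction_score"] = responses[score_cols].sum(axis=1)

# Mean reduction score per country and 2-week period, for regional comparison.
summary = (responses
           .groupby(["country", "period"])["reduction_score"]
           .mean()
           .reset_index())
print(summary)
```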
A small proportion (5\u00b79 per cent) moved to conservative management only, 4\u00b75 per cent were offering early endovascular surgery, and 26\u00b76 per cent limited surgery to ruptures.Thresholds for abdominal aortic aneurysm (AAA) repair were raised in the majority of centres; 11\u00b77 per cent of vascular units limited surgery to AAA more than 6\u00b75 cm in maximal diameter, 16\u00b74 per cent to those above 7 cm, 25\u00b71 per cent to symptomatic or ruptured AAA, and 2\u00b73 per cent to AAA suitable for endovascular AAA repair (EVAR) only. Despite this, 25\u00b71 per cent reported no change in practice. Access to EVAR out of hours was initially available to 8\u00b75 per cent of responding units, increasing to 21\u00b72 per cent in the following 4 weeks. Overall, only 14\u00b72 per cent of units maintained a 24/7 EVAR service, 26\u00b73 per cent maintained an \u2018in hours\u2019 service, 31\u00b75 per cent offered EVAR for urgent cases only, and 18\u00b75 per cent were able to run their service on an Changes to the management of lower-limb pathology are shown in Some 27\u00b75 per cent of units moved to a triage clinic system, and 29\u00b70 per cent cancelled all planned outpatient clinics. Use of technology permitted 14\u00b79 per cent of units to move to video or telephone clinics, with 18\u00b77 per cent including subsequent triage for attendance if required. The use of \u2018hot\u2019 clinics (reserved for acute/urgent patients) increased during the pandemic, and 79\u00b71 per cent of units reported using some form of hot clinic to accommodate vascular patients.Overall, 32\u00b72 per cent of units that normally participated in a multidisciplinary team (MDT) continued with face-to-face meetings; 59\u00b75 per cent stopped regular face-to-face meetings and, of those, 39\u00b71 per cent did not replace them. Overall, 36\u00b78 per cent moved to remote conferencing.Globally, 5\u00b75 per cent of senior surgeons were redeployed to support other specialties, compared with 53\u00b75 per cent of junior vascular surgical staff.The majority (80\u00b75 per cent) of units had PPE guidance in place. Some 26\u00b72 per cent of units did not have access to adequate PPE at the start, compared with 17\u00b75 per cent at the end of the period.The COVER study is the first international prospective study of unit-level vascular surgical practice during the COVID-19 pandemic. Findings from tier 1 suggest radical changes in practice in a range of services.One notable change across participating vascular units is the reduction in AAA screening activity. The benefit of AAA screening, and the likelihood of finding a new AAA (less than 1\u00b75 per cent)14. For EVAR, a paradigm has been created where, potentially, more EVAR is performed during the pandemic, but with a reduction in post-EVAR surveillance. There are important implications relating to the financial resources, operating time and staffing that will be required to catch up with missed scans and scheduled operations as services begin to resume. Vascular patients will be competing with the estimated 28 million operations cancelled or postponed during the peak of the pandemicAnother common finding is the reported preference for endovascular strategies to address aortic and peripheral arterial disease; this is thought to be based on a drive to minimize hospital stay and reduce demand on ICU bedsMDT meetings support individual clinician decision-making by navigating complex decisions through a multifaceted approach. 
COVID guidelinesDespite the large number of units taking part, correlating individual country or regional data with dates of lockdown is challenging. Dates of lockdown were, however, similar for countries providing the majority of responses . All participating units entered lockdown in March 2020, and were in lockdown when the survey began. If there are any subsequent COVID-19 \u2018waves\u2019 in areas that are \u2018past the peak\u2019bjs11961-sup-0001-SupinfoAppendix S1: Supporting informationClick here for additional data file."} {"text": "This study (a) assessed quality of life (QoL) in a patient sample with severe mental illness in an integrated psychiatric care (IC) programme in selected regions in Germany, (b) compared QoL among diagnostic groups and (c) identified socio-demographic, psychiatric anamnestic and clinical characteristics associated with QoL.This cross-sectional study included severely mentally ill outpatients with substantial impairments in social functioning. Separate dimensions of QoL were assessed with the World Health Organisation\u2019s generic 26-item quality of life (WHOQOL-BREF) instrument. Descriptive analyses and analyses of variance (ANOVAs) were conducted for the overall sample as well as for diagnostic group.A total of 953 patients fully completed the WHOQOL-BREF questionnaire. QoL in this sample was lower than in the general population 32.8 to 35.5), with the lowest QoL in unipolar depression patients and the highest in dementia patients . Main psychiatric diagnosis, living situation , number of disease episodes, source of income, age and clinical global impression (CGI) scores were identified as potential predictors of QoL, but explained only a small part of the variation.Aspects of health care that increase QoL despite the presence of a mental disorder are essential for severely mentally ill patients, as complete freedom from the disorder cannot be expected. QoL as a patient-centred outcome should be used as only one component among the recovery measures evaluating treatment outcomes in mental health care.The online version of this article (10.1007/s11136-020-02470-0) contains supplementary material, which is available to authorized users. Quality of life (QoL) has been reported to be reduced in patients with severe mental illnesses. Awareness of the topic in psychiatric research is rising , 2. BecaSeveral studies have shown a clinically relevant reduction in QoL in patients with unipolar depression. Socio-demographic factors such as age, sex, relationship status and living situation explained only a small proportion of the variance in QoL , 5. SelfFor patients with bipolar depression, scientific reviews have reported a reduced QoL compared to healthy people even in euthymic phases , 13. No Subjective QoL in patients with schizophrenia is usually lower than in the general population , 23 but Studies on QoL in patients with schizoaffective disorder are limited but suggest a reduced QoL compared to the general population . No assoPatients with anxiety disorders have reported a reduced QoL in previous research, especially when diagnosed with a generalised anxiety disorder . SubjectPatients with dementia have reported a subjective QoL level similar to elderly persons without dementia , 38. A lIn patients with alcohol addiction, QoL has been shown to be lower than in the general population , 52. HigThe present study on QoL was embedded in an integrated psychiatric care (IC) programme for patients with severe mental disorders. 
Since 2008, this programme has been offered in several federal states within the framework of selective contracts between providers and statutory health insurance funds .The aim of this study was to (a) assess the QoL in patients with severe mental illness participating in an IC programme in Germany, (b) compare the QoL among different diagnostic groups, and (c) identify socio-demographic, psychiatric anamnestic, and clinical characteristics associated with different QoL domains in this population.This cross-sectional observational study was performed within a research project at the Charit\u00e9\u2014Universit\u00e4tsmedizin Berlin (Germany) for the evaluation of a model of IC. The model was implemented in the regions of Berlin/Brandenburg and Lower Saxony/Bremen to strengthen the network of therapeutic care providers by allowing complex outpatient care for patients with severe mental illnesses \u201361.aged 18\u00a0years or older;resided in the participating regions;insured with one of the participating statutory health insurances ;diagnosed with an F0.X to F8.X ICD-10 code;needed hospital admission requiring care;entitled to receive complex outpatient care instead of inpatient treatment according to the assessment of the attending physician;impaired social functioning level );assessed with illness severity of\u2009\u2265\u20095 on the clinical global impression scale (CGI);provided written consent .The IC model included patients who met the following inclusion criteria:Patients with acute suicidality could not be included in the IC programme.The present study was based on a subsample of IC patients who were selected between 01 January 2008 and 31 March 2010 and were categorised into seven diagnostic groups based on the following main diagnoses: affective disorders , schizophrenia (F20), schizoaffective disorder (F25), neurotic disorders , dementia (F00\u201303), and alcohol-related disorder (F10). The patients with specific personality disorders (F60), other psychoactive substance-related disorders (F19), organic mental disorders other than dementia (F06\u201307) and other acute or chronic psychotic disorders were excluded from the analysis due to insufficient numbers.Patients were asked to complete the World Health Organisation QoL-BREF (WHOQOL-BREF) questionnaire , 65. ThePatients were consecutively included by the attending physicians. On a quarterly basis, physicians assessed socio-demographic , psychiatric anamnestic and clinical data according to a standardised manual. The main diagnosis was the one that led to the acute need for psychiatric therapy and admission to the IC. The patients completed the WHOQOL-BREF at inclusion in the IC and at the beginning of every quarter year. The physicians\u2019 documentations and the patients\u2019 questionnaires were checked by trained study personnel according to standard operating procedures (SOPs), and continuous quality circles with the participating physicians were implemented .The present analysis includes only cross-sectional data collected at the time of inclusion in the IC. Statistical analysis was conducted with SPSS 19.0 for Microsoft Windows. 
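For readers unfamiliar with WHOQOL-BREF scoring, domain scores are conventionally computed as the mean of a domain's 1-5 Likert items, rescaled to 4-20 and optionally to 0-100. The snippet below follows that convention as described in the WHO scoring guidance; the item groupings shown are deliberately truncated and illustrative, and reverse-scored items are ignored for brevity, so treat the mapping as an assumption rather than the study's exact syntax.

```python
import pandas as pd

# Hypothetical item responses (1-5) for three patients; q-numbers follow the WHOQOL-BREF,
# but only a reduced, illustrative subset of each domain is shown here.
items = pd.DataFrame({
    "q3": [2, 4, 3], "q4": [3, 4, 2], "q10": [3, 5, 2],   # physical (subset)
    "q5": [2, 4, 3], "q6": [3, 3, 2], "q11": [2, 4, 3],   # psychological (subset)
})

domains = {
    "physical":      ["q3", "q4", "q10"],
    "psychological": ["q5", "q6", "q11"],
}

scores = pd.DataFrame(index=items.index)
for name, cols in domains.items():
    mean_item = items[cols].mean(axis=1)
    scores[name + "_4_20"]  = mean_item * 4          # conventional 4-20 scale
    scores[name + "_0_100"] = (mean_item - 1) * 25   # conventional 0-100 scale
print(scores.round(1))
```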
Socio-demographic, psychiatric anamnestic and clinical patient characteristics were described for the entire IC sample and stratified by main diagnostic group by mean and standard deviation (SD) or frequencies and percentages.In the first step, the association between age, sex and main diagnosis was assessed for each of the diagnostic groups of interest to identify potentially confounding factors. In a second step, QoL mean values were compared among diagnostic groups separately for each WHOQOL-BREF domain and for the overall QoL scale using analysis of variance (ANOVA). The reference values for QoL of the German general population were taken from Angermeyer et al. .2 and adjusted R2 values). Results were checked for multicollinearity and heteroscedasticity. All results are considered exploratory .Due to differences in the QoL values among the diagnostic groups, means were adjusted for diagnosis but not for age and sex (no relevant differences). To identify the influence of patient characteristics on the QoL, adjusted mean values for each of the WHOQOL-BREF domains and the overall QoL scale are presented by socio-demographic, psychiatric anamnestic and clinical characteristics, based on domain-specific models of multivariable ANOVA .Overall, 1433 patients were included in the IC programme, of which 1347 patients had one of the study diagnoses. Among those, 953 completed the WHOQOL-BREF questionnaire. The calculation of scores for the subdomains of the WHOQOL-BREF was possible for all 953 patients, while the assessment of the global score was possible for 941 patients.The largest groups were patients with unipolar depression 58.1%) and psychotic disorders (13.7%), followed by neurotic disorders (11.7%) and those with bipolar depression , schizophrenia and dementia . Psychological health-related QoL was impaired mainly in the patients with unipolar depression and with neurotic disorders, who showed markedly lower values than the patients with schizophrenia and dementia. In the domain of social relationships, the patients with dementia showed a notably higher QoL level than the patients in four out of the remaining six diagnostic groups. Regarding the environmental domain, QoL was higher in the dementia patients than in the patients with unipolar depression and those with neurotic disorders. Additionally, on the WHOQOL-BREF global scale, the patients with dementia were the least impaired group, which was shown by higher values than most of the other diagnostic groups . Furthermore, the schizophrenia patients reported a markedly higher QoL than those with unipolar depression or neurotic disorders.To estimate the association between socio-demographic, psychiatric anamnestic and clinical factors and QoL, domain-specific multivariate models were fit. All potential confounding socio-demographic, psychiatric anamnestic and clinical factors , while a high CGI score was associated with a lower QoL in the global scale section and the environmental domain .The largest adjusted mean differences were observed among the main diagnostic groups in the global scale model was related to a lower QoL in these domains, which was also observed by Henkel et al. .Independent of socio-demographic, psychiatric anamnestic, and clinical characteristics, the main diagnostic group remained the factor with the most distinct differences in all models. 
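The group comparisons and adjusted means described above can be sketched in a few lines: a domain score is modelled on diagnostic group plus covariates, an ANOVA table is derived, and least-squares means per diagnosis are obtained at the covariate means. The synthetic data, covariates, and group labels below are placeholders, not the study dataset, and the sketch stands in for the SPSS models actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = ["unipolar", "bipolar", "schizophrenia", "dementia"]
n_per = 60
d = pd.DataFrame({
    "diagnosis": np.repeat(groups, n_per),
    "age": rng.normal(50, 12, n_per * len(groups)),
    "cgi": rng.integers(5, 8, n_per * len(groups)),   # CGI >= 5 per inclusion criteria
})
base = {"unipolar": 38, "bipolar": 45, "schizophrenia": 50, "dementia": 55}
d["physical_qol"] = [base[g] for g in d["diagnosis"]] + rng.normal(0, 10, len(d))

# ANOVA of the domain score across diagnostic groups, adjusting for covariates.
model = smf.ols("physical_qol ~ C(diagnosis) + age + cgi", data=d).fit()
print(sm.stats.anova_lm(model, typ=2))

# Covariate-adjusted (least-squares) means per diagnosis at the sample mean age/CGI.
grid = pd.DataFrame({"diagnosis": groups,
                     "age": d["age"].mean(),
                     "cgi": d["cgi"].mean()})
grid["adjusted_mean"] = model.predict(grid)
print(grid)
```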
This suggests QoL as a diagnosis-specific aspect that should be taken into account in evaluation studies in different patient groups.Overall, these exploratory results indicate potential clinical relevance of some of the factors investigated in the present study. They should, however, be cautiously interpreted against the background of a limited precision due to a small number of cases in some categories, which resulted in rather large CIs. The selection of factors potentially influencing the patients\u2019 QoL in this study was limited to variables that were assessed in the context of the research project for the IC model evaluation. Therefore, factors that had been identified as potential determinants in other studies, such as self-esteem, individual expectations, personality traits, self-efficacy , illnessp values of one predictor variable (i.e. reduction in power) when another predictor is included in the model. However, since this was an exploratory study without adjustments for multiple testing, p values were not the focus of our analyses. In addition, dropping important variables from the model might have introduced bias. In our view, our data might not be suitable for disentangling all of the combination effects which are common in socio-demographic and lifestyle factors. A graphical check did not find relevant evidence for heteroscedasticity. Hence, no methods to account for heteroscedasticity (e.g. robust standard errors) were used.The statistical examination for multicollinearity revealed indications of dependencies between a few of the variables (e.g. living situation and subsistence). However, we chose to keep all of the factors in our analyses. One reason is that collinearity mostly is a concern to result in essential shifts in the Despite the controversial discussion about the assessment of QoL in psychiatric patients , the cond between 0.10 and 0.37) [F scores and df to describe differences between people with varying diagnoses and clinical as well as socio-demographic characteristics [The interpretation of the results is hampered by the lack of a definition of MCID of different QoL measures as patient-reported outcomes. The MCID is the smallest difference in a score that a patient would identify as important. In our study, no control group was available neither did we test the effects of an intervention. Responsiveness of the WHOQOL-BREF instrument has been shown in a variety of settings and conditions. The instrument is able to detect even small changes induced by treatments as shown by effect sizes in more than 20 studies being highly significant even with low or moderate values (Cohen\u2019s nd 0.37) . Howevernd 0.37) , 100, thnd 0.37) . In the nd 0.37) , we usederistics .The study population exclusively consisted of outpatient and mostly female, unipolar depressive patients from selected regions in Germany that were enrolled in an IC project, which limits generalisation of the results. In addition, due to a low number of patients in some of the diagnostic groups, differentiation into clinically relevant subgroups was not possible; e.g. \u2018dementia\u2019 included patients with different forms of dementia (mainly Alzheimer\u2019s disease), while \u2018neurotic disorders\u2019 comprised mainly but not only anxiety patients.It also needs to be considered that analyses were performed only for patients who completely answered the WHOQOL-BREF, which might selectively exclude more severely ill patients. 
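The multicollinearity check discussed above can be made concrete with variance inflation factors. The sketch below uses placeholder predictors, with one variable deliberately constructed to overlap with another so as to mimic the reported dependency between living situation and subsistence; variable names and the 0.2 "disagreement" rate are assumptions. VIF values well above roughly 5-10 are the usual warning sign.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({
    "age": rng.normal(50, 12, n),
    "episodes": rng.poisson(4, n).astype(float),
    "living_alone": rng.integers(0, 2, n).astype(float),
})
# "Subsistence" agrees with "living alone" in ~80% of cases to mimic the reported overlap.
flip = rng.random(n) < 0.2
X["subsistence_benefits"] = np.where(flip, 1 - X["living_alone"], X["living_alone"])

Xc = sm.add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(Xc.shape[1])],
    index=Xc.columns, name="VIF")
print(vif.drop("const").round(2))
```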
Especially for dementia patients, non-declared support by family members for filling in the questionnaire cannot be excluded. Additionally, physicians might have graded patients as more severely ill on the psychopathological scales to allow for the patients\u2019 inclusion in the IC programme. Because no causal conclusions can be drawn due to the cross-sectional study design, further prospective longitudinal studies would be desirable.A notable strength of the study was that it allowed an analysis and a comparison of the QoL among different diagnostic groups based on a large psychiatric outpatient sample. It thereby differs from studies conducted in inpatient settings and provides results that are highly transferable to clinical practice and care.In summary, the results provide further indication that socio-demographic and clinical variables have little impact on the QoL of people with mental illness. However, if factors that can be improved by psychiatric or psychosocial interventions have only a limited influence on QoL in the short to medium term, the question arises whether QoL is an appropriate outcome in mental health care research. Severely mentally ill patients are often affected by recurrent disease episodes or chronic disease courses, and only a minority can expect to stay completely free from symptoms for the remainder of their life. Especially for those patients, health care that improves QoL despite the presence of the illness is essential.A recent review emphasised that there is an association between low social functioning and negative QoL in psychotic disorders . The resScepticism about the increasingly widespread use of QoL as an outcome in psychiatric health care research was expressed more than 10\u00a0years ago , 106, agQoL is a markedly subjective measure, and the individual rating depends on the underlying type of mental disorder. QoL can also be seen within a personal frame of reference. This frame of reference is formed by the level of social functioning and the degree of integration into society. When estimating QoL values, this subjective frame of reference must be taken into account. If the change in QoL is used as an outcome after an intervention or after an observation period, the typical characteristics such as measurability, change sensitivity, reliability and validity must also be considered.As a conclusion of the study, symptoms of mental disorder, clinical impression, and social functioning alone are not sufficient as outcome measures because they do not reflect the subjective patient\u2019s perspective. However, QoL as a patient-centred outcome measure is not unproblematic either. Therefore, the effects of health care should not be measured by the change in QoL alone, but QoL should only be used as one component alongside other recovery measures. Future research should look for alternative patient-related outcomes that better reflect the success of long-term psychiatric care.Supplementary file1 (PDF 22 kb)Below is the link to the electronic supplementary material."} {"text": "Leishmania growth and survival is dependent on polyamine biosynthesis; therefore, inhibition of Leishmania arginase may be a promising therapeutic strategy. Here, we evaluated a series of thirty-six chalcone derivatives as potential inhibitors of Leishmania infantum arginase (LiARG). In addition, the activity of selected inhibitors against L. infantum parasites was assessed in vitro. Seven compounds exhibited LiARG inhibition above 50% at 100 \u03bcM. 
Among them, compounds LC41, LC39, and LC32 displayed the greatest inhibition values . Molecular docking studies predicted hydrogen bonds and hydrophobic interactions between the most active chalcones and specific residues from LiARG's active site, such as His140, Asn153, His155, and Ala193. Compound LC32 showed the highest activity against L. infantum promastigotes (IC50 of 74.1 \u00b1 10.0 \u03bcM), whereas compounds LC39 and LC41 displayed the best results against intracellular amastigotes . Moreover, compound LC39 showed more selectivity against parasites than host cells (macrophages), with a selectivity index (SI) of 107.1, even greater than that of the reference drug Fungizone\u00ae. Computational pharmacokinetic and toxicological evaluations showed high oral bioavailability and low toxicity for the most active compounds. The results presented here support the use of substituted chalcone skeletons as promising LiARG inhibitors and antileishmanial drug candidates.Arginase catalyzes the hydrolysis of Leishmania infantum is the etiological agent of visceral leishmaniasis (VL), a lethal infectious disease that afflicts neglected populations mainly distributed in Africa, Asia and Latin America. Recently, the World Health Organization estimated that 30,000 new cases of VL occur worldly has been proposed as a potential target for new drug candidates of natural and in vivo contexts; this finding reinforces that ARG is essential for Leishmania pathophysiology and is therefore an intriguing target for new therapeutics.Arginase plays a pivotal role for the survival of Trypanosoma cruzi and Leishmania by naturally occurring phenolic substances cells were transformed with RP1B-LiARG. Cells were grown at 37\u00b0C until mid-exponential phase (Abs600 ~ 0.6). Protein expression was induced with 1 mM IPTG and cells were grown for 16 h at 30\u00b0C. Cells were harvested by centrifugation, resuspended in lysis buffer and lysed by sonication . The cell lysate was centrifuged , and the clarified supernatant was loaded onto a His-Trap HP column . A wash step with 100 mM manganese chloride was carried out for enzyme activation. LiARG was eluted with an imidazole gradient ranging from 5 to 500 mM. Fractions containing LiARG were pooled and dialyzed in the presence of His6-TEV protease for removal of the N-terminal His6 tag. A second nickel-affinity chromatography step was used to remove the His6 tag and His6-TEV. Finally, purified LiARG was dialyzed against , concentrated and stored at \u221280\u00b0C.LiARG expression and purification was performed as described previously at 37\u00b0C for 5 min. A set of thirty-six synthetic substituted chalcones was used to screen for potential LiARG inhibitors. Synthesis of chalcone derivatives were described previously by Ventura et al. . Urea concentration was determined spectrophotometrically by hydrolyzing urea into ammonia and then converting ammonia into indophenol blue, which absorbs light at 600 nm into the active site of LiARG, we used LiARG three-dimensional model and docking parameters from our previous work required to kill 50% of the rats tested.The two-dimensional structures were drawn with ACD/ChemSketch were added and the cultures were incubated at 37\u00b0C in 5% CO2 for 48 h. Subsequently, cell viability was measured by a colorimetric assay using 3--2,5-diphenyl tetrazolium bromide (MTT) . MTT was added to the culture medium at a final concentration of 1.0 mg/mL, and cultures were incubated at 37\u00b0C in 5% CO2 for 3 h. 
MTT was transformed by the living cells in formazan crystals that were dissolved in DMSO. The concentration of formazan was measured spectrophotometrically at 570 nm . The 50% cytotoxic concentration (CC50) values were determined by the nonlinear regression fits of the dose-response curves using GraphPad Prism 8.0. All measurements were performed in triplicates, twice independently.RAW 264.7 macrophages were cultured in DMEM medium supplemented with 10% fetal bovine serum at 37\u00b0C in 5% COLeishmania infantum strain MHOM/BR/1974/PP75 were cultured in Schneider's medium supplemented with 10% fetal bovine serum at 26\u00b0C. First, the selected LiARG inhibitors were diluted in culture medium at concentrations ranging from 2.9 to 1,723.0 \u03bcM. Then, late-exponential phase (after 96 h growth) L. infantum promastigotes, at a final density of 105 parasites/mL, were incubated with previously diluted inhibitors at 26\u00b0C for 96 h. Subsequently, parasite viability was determined by a colorimetric assay using resazurin . Resazurin was added to the culture medium at a final concentration of 0.001% (w/v), and cultures were further incubated at 26\u00b0C for 4 h values were determined by the nonlinear regression fits of the dose-response curves using GraphPad Prism 8.0. All measurements were performed in triplicates, twice independently.Promastigotes of 5 cells/well and incubated at 37\u00b0C in 5% CO2 for 2 h. After this period, adherent cells were washed twice with phosphate buffered saline and infected with stationary-phase L. infantum promastigotes at a 10:1 parasite/macrophage ratio. After 4 h of infection at 37\u00b0C in 5% CO2, free parasites were removed by a wash step with PBS and infected macrophages were incubated for an additional 24 h to allow parasite differentiation into amastigotes. Then, infected cells were treated with increasing concentrations of selected LiARG inhibitors (11.0\u201354.0 \u03bcM) for 48 h. After this period, the culture supernatant was collected for the evaluation of NO production by the Griess reaction , as described earlier. Fungizone\u00ae (130\u2013540 nM) was used as the reference drug. Results were expressed as percentages of viability in relation to the control (100% viability). The 50% inhibitory concentration (IC50) values were determined by the nonlinear regression fits of the dose-response curves using GraphPad Prism 8.0. All measurements were performed in triplicates, twice independently.Intracellular anti-amastigote activity was determined using the promastigote recovery assay with minor modifications . Among the flavonoids tested, taxifolin showed the most effective inhibition result (88%) at 100 \u03bcM. Although chalcones and chalcone-based compounds have been described as inhibitors of other Leishmania enzymes, to the best of our knowledge, this is the first report on the inhibition of Leishmania arginase by this class of flavonoids.Chen et al. showed tLeishmania trypanothione reductase -1-(2-methoxy-4-((3-methylbut-2-en-1-yl)oxy)phenyl)-3-(4-nitrophenyl)prop-2-en-1-one] with a prenyloxy group in ring A and a nitro group in ring B demonstrated a potent inhibitory activity against To gain further insights into the mechanism of inhibition, the three most active chalcone derivatives, those with percentages of inhibition greater than that of quercetin , were doL. mexicana arginase , Asn143(H), Asp194(H), Ala192(A), Thr257(A) residues with milin silico results support the findings of de Mello et al. . LC32 exhibited the greatest effect against L. 
infantum promastigotes with an IC50 of 74.1 \u00b1 10.9 \u03bcM. Then, chalcones LC14, LC41, and LC39 showed similar IC50 values of 283.4 \u00b1 14.2, 319.1 \u00b1 14.3, and 398.0 \u00b1 44.2 \u03bcM, respectively. Lastly, LC34 showed an IC50 of 747.2 \u00b1 22.3 \u03bcM, and LC37 was the only chalcone derivative that did not display in vitro activity against L. infantum promastigotes at the highest concentration tested.After this first screening, the chalcone derivatives exhibiting more than 50% LiARG inhibition were sel50 of 4,531.0 \u00b1 212.0 \u03bcM and thus a SI of 11.4, proving to be more than 10 times selective against L. infantum promastigotes. In contrast, LC14 displayed a CC50 of 75.1 \u00b1 8.9 \u03bcM (SI < 1), being more toxic to the host cell than the parasite. The chalcone derivatives LC34, LC41, and LC32 showed CC50 values of 3,010.9 \u00b1 88.0 (SI of 4.0), 1,146.8 \u00b1 58.8 (SI of 2.6), and 479.1 \u00b1 19.5 \u03bcM (SI of 6.5), respectively. Indeed, the replacement of a bromide at position 4 of ring A of LC39 by a methoxy group in LC41 may play an important role in the cytotoxicity for macrophages. In addition, LC37, which did not show anti-L. infantum promastigote activity, also did not inhibit the growth of RAW 264.7 macrophages at the highest concentration tested.Next, we evaluated the cytotoxicity of the selected chalcone derivatives against RAW 264.7 macrophages, enabling us to determine their selectivity against parasite cells. LC39 showed a CCL. infantum promastigote activity and selectivity against parasite cells (SI > 1.0), the chalcone derivatives LC32, LC34, LC39, and LC41 were selected for the analysis of their activity against L. infantum intracellular amastigotes. Fungizone\u00ae was again used as a reference drug control. L. infantum-infected peritoneal macrophages were treated with the selected chalcone derivatives and the viability of promastigotes recovered from infected macrophages was measured. All chalcone derivatives were able to reduce the parasite load when compared to untreated control cells, displaying IC50 values of 42.3 \u00b1 17.1 (LC39), 43.7 \u00b1 13.7 (LC41), 65.4 \u00b1 10.9 (LC34), and 111.5 \u00b1 19.8 \u03bcM (LC32) . MaquiavM (LC32) demonstr50 value against intracellular amastigotes, resulting in a SI of 107.1, even greater than that of the reference drug Fungizone\u00ae . It is noteworthy that a SI over 10 is considered ideal according to the hit and lead criteria in drug discovery for infectious diseases previously established by Katsuno et al. . Indeed, previous studies have already connected the anti-inflammatory effect of chalcones to the inhibition of NO production revealed that arginase downregulates several virulence factors, including LPG, PPG, GP63, and amastin increased NO production by control . The res control , which sL. infantum arginase. Among the chalcone derivatives tested against LiARG, three showed inhibitory potential greater than the reference inhibitor quercetin. In silico studies indicated the direct interaction of chalcone derivatives with LiARG's active site residues as well as their low toxicity and good oral bioavailability. In addition, the chalcone derivatives LC34, LC39, and LC41 were effective against the promastigote and intracellular amastigote forms of L. infantum. Remarkably, LC39 stood out for being highly selective to the parasite, even more so than the reference drug Fungizone\u00ae. 
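The IC50, CC50, and SI values reported above were obtained in the study by nonlinear regression of dose-response curves in GraphPad Prism 8.0. The following is a minimal SciPy sketch of the same type of fit. The two-parameter Hill model used here is a common choice for such curves, not necessarily the exact Prism model, and the viability values are hypothetical, although the concentration range mirrors the dilution series described in the Methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_curve(conc, ic50, hill):
    """Two-parameter Hill dose-response model: ~100% viability at low dose, ~0% at high dose."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (%) for one compound against promastigotes
conc = np.array([2.9, 8.6, 25.9, 77.7, 233.0, 699.0, 1723.0])   # uM
viability = np.array([98.0, 95.0, 88.0, 52.0, 20.0, 8.0, 4.0])  # %

popt, _ = curve_fit(hill_curve, conc, viability, p0=[80.0, 1.0], bounds=(0.0, np.inf))
ic50 = popt[0]
print(f"fitted IC50 ~ {ic50:.1f} uM")

# Selectivity index: cytotoxicity on host cells (CC50) over activity on the parasite (IC50)
cc50 = 1500.0  # hypothetical CC50 against RAW 264.7 macrophages, uM
print(f"SI = CC50/IC50 ~ {cc50 / ic50:.1f}")
```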
Interestingly, our results point at the nitro group substituent in ring B as an important factor for the antileishmanial effect of chalcones, corroborating previous findings. Taken together, the results presented here bring new perspectives for Leishmania arginase inhibitors and the development of chalcone-based drug candidates against visceral leishmaniasis.We report for the first time on the ability of chalcones to inhibit The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.in silico analysis. IL, RS, and LM synthesized the chalcone derivatives. AG performed the biological assays. All authors contributed to the article and approved the submitted version.AV, AP, and IR designed the work. AG, JJ, AP, and IR wrote the manuscript. AG and DO performed the enzymatic assays and data analysis. AMTS, and ARS performed the The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "As of 18 October 2020, over 39.5 million cases of coronavirus disease 2019 (COVID-19) and 1.1 million associated deaths have been reported worldwide. It is crucial to understand the effect of social determination of health on novel COVID-19 outcomes in order to establish health justice. There is an imperative need, for policy makers at all levels, to consider socioeconomic and racial and ethnic disparities in pandemic planning. Cross-sectional analysis from COVID Boston University\u2019s Center for Antiracist Research COVID Racial Data Tracker was performed to evaluate the racial and ethnic distribution of COVID-19 outcomes relative to representation in the United States. Representation quotients (RQs) were calculated to assess for disparity using state-level data from the American Community Survey (ACS). We found that on a national level, Hispanic/Latinx, American Indian/Alaskan Native, Native Hawaiian/Pacific Islanders, and Black people had RQs > 1, indicating that these groups are over-represented in COVID-19 incidence. Dramatic racial and ethnic variances in state-level incidence and mortality RQs were also observed. This study investigates pandemic disparities and examines some factors which inform the social determination of health. These findings are key for developing effective public policy and allocating resources to effectively decrease health disparities. Protective standards, stay-at-home orders, and essential worker guidelines must be tailored to address the social determination of health in order to mitigate health injustices, as identified by COVID-19 incidence and mortality RQs. The high incidence of novel coronavirus disease 2019 (COVID-19) in the United States is concerning; there have been approximately 8.2 million confirmed cases and nearly 220,000 deaths . Th. Th13]. Data on White, Black, Hispanic/Latinx, Asian, AIAN, and NHPI people were evaluated. We excluded the multiple race/ethnicity category from our analysis, as there was insufficient data for analysis. COVID-19 incidence data classified by race and ethnicity were available in 46 of 50 US states. Racial and ethnic data from Louisiana, New York, North Dakota, and Rhode Island were not available and were excluded from our analysis. Cases and deaths where race or ethnicity were unknown were also excluded. 
The data were restricted to reports from 1 March to 14 June 2020, when public policy stay-at-home orders were in effect, regardless of their enforceability .A representation quotient (RQ) was utilized. RQs were defined as the proportion of a particular subgroup in the total number of COVID-19 cases or mortality relative to the corresponding estimated proportion of that subgroup in the United States population. For example, COVID-19 incidence RQ for White people would be given by stralia) .p < 0.05) differences between the average state RQs between all racial/ethnic groups.After calculation of incidence and mortality RQ by race and ethnicity, Tukey\u2019s test for post-hoc analysis was completed with an ANOVA to evaluate statistically significant published categorized the following industries in an advisory memorandum as critical infrastructure sectors, and therefore as essential work: medical and health care, telecommunications, information technology systems, defense, food and agriculture, transportation and logistics, energy, water and wastewater, law enforcement, and public works . This seAccording to the Center for Economic and Policy Research\u2019s 2019 analysis of the essential workforce, over 31 million Americans are deemed essential. Approximately 50.9% of these workers are in the health care sector, 20.6% in food and agriculture, and 7.2% in transportation and delivery . These eIn the United States, more than 36% of essential workers are non-White . FurtherThere are distinct differences in the proportion of workers who are eligible to work from home by race and ethnicity. The US Bureau of Labor Statistics reported that in the period 2017\u20132018, 42 million wage and salary workers (29%) had the option to work from home. Of these, 36 million reported working from home at least occasionally . While 2Personal transportation methods vary widely across the US. In rural areas, Americans rely heavily on personal vehicles to travel to their place of employment . ConversEarly reports from Japan show COVID-19 incidence rates are heavily influenced by population density . DensityLimited access to potable water or indoor plumbing on Native American reservations makes it difficult to follow handwashing guidelines. The study by Rodriguez-Lonebear et al. found that among AIAN, increased likelihood of COVID-19 was associated with the lack of indoor plumbing . The assSocially assigned race has been shown to have greater impact on health than self-reported race . These sBeginning in May, Black Lives Matter and other social justice movement demonstrations were held nationwide. These public demonstrations were sparked by the murders of unarmed Black people by police, including Breonna Taylor and George Floyd, among others. Public health officials initially raised concerns that these protests could potentially increase the spread of COVID-19. However, it is unclear whether these large-scale protests have directly impacted COVID-19 outcomes. A notable increase in COVID-19 incidence occurred shortly after the protests began, but this also coincided with the \u201creopening\u201d of many states and a marked increase in overall social activity. Interestingly, studies have shown that cities which had large protests actually had a net increase in social distancing behavior .Many argue that racial injustice, and the longstanding mistreatment of Black people by the criminal justice system, is a public health crisis and necessitates these protests. 
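For reference, the representation-quotient calculation defined earlier in this section can be written as a short function, shown below. The case counts and population figures used here are hypothetical placeholders rather than values from the COVID Racial Data Tracker or the ACS; mortality RQs are computed in exactly the same way with deaths in place of cases.

```python
def representation_quotient(group_cases, total_cases, group_population, total_population):
    """RQ = (group share of COVID-19 cases) / (group share of the population).

    RQ > 1 means the group is over-represented among cases relative to its
    population share; RQ < 1 means it is under-represented.
    """
    case_share = group_cases / total_cases
    population_share = group_population / total_population
    return case_share / population_share

# Hypothetical state-level example
cases = {"Group A": 12_000, "Group B": 30_000}
population = {"Group A": 1_500_000, "Group B": 6_000_000}
total_cases = sum(cases.values())
total_population = sum(population.values())

for group in cases:
    rq = representation_quotient(cases[group], total_cases,
                                 population[group], total_population)
    print(f"{group}: incidence RQ = {rq:.2f}")
# Group A is over-represented (RQ ~ 1.43), Group B under-represented (RQ ~ 0.89)
```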
Indeed, the incarceration of Black people occurs at a rate 5 times higher than their White counterparts . Black aStructural racism contributes to disproportionate rates of incarceration; there are higher rates of incarceration among Black, Indigenous, and Hispanic/Latinx individuals, particularly those of low socioeconomic status and those with mental illness, than their White counterparts . MoreoveAlmost 30 million Americans do not have health insurance coverage . AlthougHealth insurance programs may be expensive, inaccessible for many individuals, and have complex policies for covered expenses. Even when enrolling in marketplace plans with tax credits, only 69% of individuals had premiums of <$100 per month . This baAccess to testing and treatment varies across the US. Of greatest concern is affordability of care. Several states, such as Michigan, Wisconsin, and New York, have reported that the zip codes most impacted by COVID-19 are also the poorest . In New In an effort to expand services to rural health clinics, the US Department of Health and Human Services allocated $225 million dollars to support expansion of COVID-19 testing, telemedicine services, and notification/information services . Using tIn 2017, the Federal Communications Commission identified approximately 25 million Americans that do not have internet access, of which, approximately 19 million live in rural areas . FurtherInsufficient access to nutritious foods, low health literacy, and lack of culturally appropriate health education compound risk of hypertension, diabetes, and obesity\u2014all of which are risk factors for COVID-19 morbidity and mortality ,56,57. AVitamin D deficiency has also been found to be associated with increased risk for COVID-19, further worsening the outlook for the undernourished . StudiesWe suspect that COVID-19 incidence and mortality may be under-reported given the initial difficulty in testing. Studies indicate that, after controlling for COVID-19 deaths, the overall mortality rate has increased during the March\u2013May 2020 period compared to average mortality rates for the past 10 years . This suAnother limitation to this study was the variability in state-level data. There is no standard method for data collection by race, sex, and ethnicity between states. For example, some states categorized more than 25% of reported cases as \u201cother\u201d race or ethnicity. This aggregation of data may obscure other disparities, yet to be defined, and may have led to underestimation of incidence and mortality RQs. Additionally, our study only evaluated state-level incidence and mortality, which may have masked greater disparities between counties and cities within states.During the study period, 8 states had fewer than 100 COVID-19 deaths, some with less than 40. This may overestimate racial and ethnic disparities due to a low sample size. Cases and mortality continue to fluctuate nationwide and as more data become available, repeat analysis may be warranted using an extended time frame.National data may mask racial and ethnic disparities in COVID-19. Microcosmic data, from city or state-level reporting, better illustrate the disparate health outcomes during the pandemic. The evidence suggests that there is significant disparity in COVID-related health outcomes by race and ethnicity regarding representation by state. We suspect that these disparities would be even more apparent at the county and city level and this warrants further investigation. 
There is a need for public reporting of disaggregated COVID-19 data by county and zip code. Transparency of local data would allow for greater precision in allocation of resources and establishment of effective policy changes to disrupt the social determination of poor health outcomes, moving the United States towards the goal of health justice.Infectious disease, including COVID-19, does not selectively affect individuals based on race. However, the social dynamics which perpetuate racial, economic, and environmental disparity create a system of injustice which sustains health inequality, thereby resulting in disparate susceptibility to infection, morbidity, and mortality among marginalized communities. It is an exercise in futility to search for any single element that predisposes people of color, in particular Black people, to adverse COVID-19 outcomes. Instead, we need to explore the intersectionality of inequity and the social determination of health. Therefore, we believe that racial and societal disparities are not only cultural and economic issues, but also an issue of public health and social justice.Many of these inequalities are longstanding and rooted in our society\u2019s infrastructure. The dismantling of systems of oppression which drive the ongoing health injustice epidemic is requisite for a reduction in health disparities and relief of COVID-19 disease burden. Limiting and reducing mortality require identification of disparity dynamics in our communities and addressing the social determination of these vulnerabilities through specific, intentional interventions. Although systemic change may take time, we hope that health injustices, as exemplified by the COVID-19 pandemic, serve to spark action and create momentum towards achieving social justice and eliminating health disparities."} {"text": "It has been a long-standing puzzle why electrons with repulsive interactions can form pairs in unconventional superconductors. Here we develop an analytic solution for renormalization group analysis in multiband superconductors, which agrees with the numerical results exceedingly well. The analytic solution allows us to construct soluble effective theory and answers the pairing puzzle: electrons form pairs resonating between different bands to compensate the energy penalty for bring them together, just like the resonating chemical bonds in benzene. The analytic solutions allow us to explain the peculiar features of critical temperatures, spin uctuations in unconventional superconductors and can be generalized to cuprates where the notion of multibands is replaced by multipatches in momentum space. The exotic phenomena of superconductors remained mysterious until Bardeen, Cooper and Schrieffer (BCS)2 came up with a complete theory in 1957. There are two key ingredients in BCS theory. Firstly, a negatively charged electron distorts the nearby lattice formed by positively charged ions and another electron thus feels an effective attraction toward the original electron. In professional jargons, one can say that the effective attraction is mediated by electron-phonon interactions. The second key ingredient is pair formation: an electron with specific momentum and spin pairs up with another electron with opposite momentum and spin via the effective attraction. 
These electron pairs, referred as Cooper pairs, form a Bose-Einstein condensate in the superconducting state4.It has been over one hundred years since Heike Kamerlingh Onnes discovered the resistance of mercury suddenly drops to zero7, heavy-fermion compounds10, organic superconductors11, iron-based superconductors13 and recently found superconducting graphene superlattices14, remaining queer16 and unexplained by the electron-phonon interactions alone. It is generally believed that the Coulomb interaction between electrons may be responsible for the emergent superconductivity4. However, there is a huge mismatch of the energy scales \u2013 the critical temperature is much smaller than the bare electronic interactions. Perhaps the most intriguing puzzle for these unconventional superconductors is the pairing mechanism: what is the glue to pair up electrons from mutual repulsive interactions? The experimental evidences seem to suggest that pairing in the unconventional superconductors is not due to electron-phonon interactions13. Due to strong magnetic correlations20 in these materials, it is proposed that spin fluctuations27 may play the role of glue to pair electrons up. The renormalization-group (RG) studies3,29 seems to indicate that the antiferromagnetic spin fluctuations may be the cause of unconventional superconductivity in iron-based materials.Despite the celebrating success of BCS theory, there are other superconductors including cuprates32, we integrate out quantum fluctuations at shorter length scales within the one-loop approximation and seek for the ground state in the low-energy limit. Because the RG equations are intrinsically non-linear, numerical analysis is already challenging, rendering simple understanding beneath the messy numerics inaccessible in most cases. We adapt the classification scheme of relevant couplings developed in previous works35 to overcome the challenge and analyze how superconductivity emerges in RG transformations.In this Article, we investigate the pairing mechanism in multiband iron-based superconductors by RG analysis. As in previous RG studiesg and g\u22a5, standing for intraband and interband pair hopping. The functional form of the gap function computed within the mean-field theory dictates the RG equations for g and g\u22a5. In fact, these RG equations derived from the mean-field theory are so simple so that analytic solutions can be found easily.In addition, we make use of the scaling relation to bridge the results obtained by RG analysis and those in the conventional mean-field theory. While it is well known that the mean-field theory provides a self-consistent description for various physical quantities, it is less familiar to the scientific community that the mean-field theory also lays out the RG equations for its parameters. For example, for the generalized multiband BCS Hamiltonian, the vanilla version of the mean-field theory is characterized by two important parameters: Here comes the surprise. For multiband iron-based superconductors, we find the RG flows are well captured by simple analytic solutions for intraband and the interband pairing hopping derived from the mean-field theory. The solutions are elegant and simple enough for us to reveal the pairing mechanism in multiband superconductors.g and g\u22a5 are positive before RG transformations. However, it turns out that there is no need to search for attractive glues to pair electrons up anymore. 
Through interband pair hopping g\u22a5, electron pairs hop between different bands to lower the energy, compensating the penalty for pair formation due to repulsion. Our findings are consistent with the previous studies37. The pairing mechanism in multiband superconductors is resonating pair hopping between different bands, just like the resonating chemical bonds in benzene. In fact, the effective theory we found is consistent with the previous mean-field theories for multiband superconductors in the interband-dominated regime44.Because the electronic interactions are repulsive in nature, both couplings g and g\u22a5 are large, the binding energy of Cooper pairs is determined by their difference \u03b4, which is one order smaller than the bare couplings as detailed later. (3) The interband pair hopping not only explains why strong magnetic fluctuations often show up in unconventional superconductors but also pins down at what momentum the spin fluctuations should appear.However, the RG analysis makes stronger predictions than the mean-field theories and sews several different phenomena into one coherent picture. Let us explain the major findings without digging into the technical details. The binding energy for Cooper pair formation is dictated by a small parameter \u03b4\u2009=\u2009g, just the intraband pair hopping. Thus, for electrons to pair up, one needs some sort of attractive interactions (g\u2009<\u20090). Besides, the binding energy is of the same order of the effective attraction. But the interband pair hopping completely changes the story: attractive interaction is no longer a necessity. Despite of repulsive interactions, as long as the interaction profile produces a larger interband pair hopping, Cooper pairs form and superconductivity sets in. The interband pair hopping, driving unconventional superconductivity in the presence of repulsion, provides natural explanations, not only for the spin fluctuations at the momentum connecting the Fermi surfaces, but also for the mismatched energy scales of the critical temperature and the bare interaction strength.The difference between pairing mechanism for multiband superconductors and that for the conventional BCS theory lies in the presence of interband pair hopping. In its absence, the binding energy is solely determined by To put our feet on the firm ground, we choose iron-based superconductor as a demonstrating example. In the following, we elaborate the numerical and analytical details how the above conclusions are achieved. It is worth emphasizing that most conclusions are not limited to iron-based superconductors. Generalization of our theory will be discussed at later paragraphs.46. The mutual repulsion between electrons is model by the simplest on-site interactions, containing intra-orbital repulsion U1, inter-orbital repulsion U2 and the Hund\u2019s coupling JH between different orbitals. Detail descriptions of the model can be found in Methods.The kinetic energy in iron-based superconductors is described by a five-orbital tight-binding model with appropriate hopping matrix elementss\u00b1 pairing symmetry31. Due to the s-wave symmetry, it is enough to sample one point on each Fermi surfaces for qualitative understanding. It turns out that this simplification is equivalent to put the iron-based superconductor on the four-leg ladder with periodic boundary conditions along the x direction. It will become clear later that this geometry does not change the ground state but simplify the algebra significantly. 
Later, we will come back to two-dimensional calculations and reaffirm the validity of our major claims.The above Hamiltonian has been studied by functional renormalization group (fRG) method, which shows that the ground state is a superconductor with sign-reversed kx and the chirality R/L as shown in Fig.\u00a049, the allowed interactions are Cooper scattering \u03c3, \u03c1 denote the spin and charge channels respectively. The RG equations for all couplings are given explicitly in Methods. Though the RG equations can be written down explicitly, the solutions are still too complicated. With l\u2009=\u2009In(\u039b0/\u039b) being the logarithm of the ratio between bare energy cutoff \u039b0 and the running cutoff \u039b, we integrate the coupled differential equations numerically, and elaborate the details in Methods. We find all couplings are captured by the scaling Ansatz35,gi are generally referred to the Cooper and forward scatterings mentioned above, Gi is an order one constant and 0\u2009\u2264\u2009\u03b3i\u2009\u2264\u20091. The scaling exponent \u03b3i help us to build the hierarchy of all relevant couplings without ambiguity. For example, starting with U1\u2009=\u20092.8 eV, U2\u2009=\u20091.4 eV and JH\u2009=\u20090.7 eV, the RG exponents for all couplings are shown in Fig.\u00a0x\u2009=\u20090 to x\u2009=\u20090.12.In weak coupling, the relevant degrees of freedom can be labeled by the quantized momentum mentclass2pt{minim\u03b3i\u2009=\u20091 are the most dominant, including intraband Cooper scattering c12 leads to the s\u00b1 pairing symmetry. Note that our RG Ansatz predicts the dominant superconducting bands with the correct pairing symmetry as obtained by the fRG method. In fact, a detail analysis including all subdominant relevant couplings lead to correct signs for all gap functions in different bands as those obtained in the fRG method. Thus, the ground state in the ladder geometry is the same as that in the two-dimensional system.The couplings with g(l) and g\u22a5(l),l\u2009=\u2009ld, it fixes that c(0), c\u22a5(l) are given and ld is found in numerics already, the values of g(0) and g\u22a5(0) are pretty much determined.However, due to the explicit form of the RG equations for ladder geometry, the pairing mechanism in the multiband superconductor can be studied analytically. Construct the couplings, c(l) and c\u22a5(l) from integrating the whole set of nonlinear differential equations (about 50 equations for this case) can be captured by the analytic solutions given in Eq. (c(l), c\u22a5(l) and the analytic ones g(l), g\u22a5(l) are shown in Fig.\u00a0The above simplification is inspiring: it implies that the RG flows for the intraband and the interband pair hopping n in Eq. . Comparig\u2009<\u20090 arisen from effective attraction and \u039b is the Debye energy. But, it is not widely known that the RG equation for g can be derived from the dependence of the gap function. In the RG method, the gap function is rescaled with rescaling energy of systems in RG steps. By integrating high-energy modes, the rescaling of the gap function is described by the RG equation d\u0394/dl\u2009=\u2009\u0394. It is said that gap functions at different energy scales can be related by 50. Taking the derivative, d/dl, on the both sides, the RG equation reads, dg/d\u2009=\u2009\u2212g2l. 
Therefore, if the RG flow for some system is described by the derived RG equation, its effective theory is just the single-band BCS Hamiltonian.To dig out the secret message behind the analytic solutions, let us switch gear to review the well-known single-band BCS Hamiltonian first. The gap function takes the form g and g\u22a5 stand for intraband and interband pair hopping. The above non-linear coupled equations can be solved exactly, giving the analytic solutions we discussed previously. Therefore, the effective theory for iron-based superconductors is the multiband BCS Hamiltonian, proven by matching the RG flows together.We can generalize the scaling argument to multiband BCS Hamiltonian as elaborated in Methods. In particular, when two bands reign interband pair hopping, the corresponding RG equations areg\u2009=\u2009g\u22a5\u2009<\u20090), the ground state flows toward the Fermi-liquid fixed point and no superconductivity occurs. However, if the interband pair hopping is larger (g\u2009=\u2009g\u22a5\u2009<\u20090), the ground state flows toward the superconducting phase with s\u00b1 pairing symmetry. It is worth mentioning that, if the initial couplings are close to the symmetric ray g\u2009=\u2009g\u22a5, it will flow toward the Fermi-liquid fixed point first and then turns around to the unconventional superconducting state. This means that, upon cooling down the system toward the critical temperature, it exhibits non-trivial crossover properties over a wide range of temperatures. This crossover is shown in the orange-shaded regime in Fig.\u00a0g and g\u22a5 are small.RG flows for repulsive interactions are shown in Fig.\u00a0g\u22a5\u2009=\u20090). As is clear in Fig.\u00a0g(0), i.e. attractive interaction, is required to trigger the superconducting instability. For repulsive interaction g(0)\u2009>\u20090, the RG flow always bring the system to the trivial Fermi-liquid fixed point. The RG flows in the special case g\u22a5\u2009=\u20090 give rise to the commonly accepted criterion that an effective attraction is necessary for Cooper pair formation. However, the criterion is obviously wrong as the RG flows in the upper plane is quite different from those in the horizontal axis.It is interesting to compare with the pairing instability in the absence of interband pair hopping instability if the momentum Q\u2009=\u2009K1\u2009\u2212\u2009K2, connecting the two Fermi surfaces, is close to half of the reciprocal lattice vectors, i.e. Q\u2009=\u2009G/2. In the case of iron-based superconductor in ladder geometry, Q\u2009=\u2009 satisfies the above condition. Therefore, it is expected that the antiferromagnetic spin fluctuations at the momentum Q are enhanced along with unconventional superconductivity. Note that, in conventional superconductors, Q is automatically zero and no antiferromagnetic spin fluctuations are expected to grow hand-in-hand with the superconductivity.Finally, the interband pair hopping 50 on RG in two dimensions. In stead of following Shankar\u2019s original approach, we follow closely the methodology developed in functional RG. Consider two circular Fermi surfaces as shown in Fig.\u00a0\u03b8i represents the angle of the corresponding momentum.Now that we realize the importance of interband pair hopping, we are ready to generalize Shankar\u2019s seminal workDerivations of the RG equation in two dimensions are provided in Mathods. Here we concentrate on the final results and their implications. 
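Before turning to the two-dimensional results, the two-band (ladder) flow discussed above can be sketched numerically. The equations themselves are not reproduced in this excerpt, so the sketch below assumes the standard two-band Cooper-channel form dg/dl = −(g² + g⊥²), dg⊥/dl = −2gg⊥, chosen to be consistent with the behaviour described in the text: with purely repulsive bare couplings, the flow diverges (superconducting instability) whenever the interband pair hopping exceeds the intraband one, whereas g⊥ = 0 with g > 0 flows to the Fermi liquid.

```python
def flow(g0, gp0, dl=1e-4, l_max=50.0, blow_up=1e3):
    """Euler integration of an assumed two-band Cooper-channel RG flow:
       dg/dl  = -(g**2 + gp**2)   (intraband pair hopping g)
       dgp/dl = -2*g*gp           (interband pair hopping g_perp)
    Returns the scale l_d where the couplings diverge, or None (Fermi liquid).
    """
    g, gp, l = g0, gp0, 0.0
    while l < l_max:
        dg = -(g**2 + gp**2) * dl
        dgp = -2.0 * g * gp * dl
        g, gp, l = g + dg, gp + dgp, l + dl
        if abs(g) > blow_up or abs(gp) > blow_up:
            return l
    return None

# Repulsive couplings with dominant interband pair hopping: finite divergence scale,
# close to the analytic value 1/(gp0 - g0) = 10 for this assumed flow.
print("g0=0.2, gp0=0.3 ->", flow(0.2, 0.3))
# Repulsive coupling, no interband hopping: flows to the Fermi-liquid fixed point (None).
print("g0=0.2, gp0=0.0 ->", flow(0.2, 0.0))
```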
The RG equations in two dimensions are,In the absence of interband pair hopping, we have checked that the above RG equations reduce to those derived by Shankar with just one circular Fermi surface. The general RG equations involving integration over internal angles can be simplified to the previous result. Assuming the density of states and the bare couplings are rotationally invariant, i.e. s in Eq. again. TThe two-pocket model presented here provides another perspective on the relation between superconductivity and antiferromagnetic spin fluctuations. In iron-based superconductors, the momentum connecting two Fermi surfaces is Our previous calculations concentrate on the dominant bands in the iron-based superconductor to reveal the pairing mechanism in multiband superconductors on firm ground. But, it is reasonable to include all active bands in the generalized BCS Hamiltonian when quantitative accuracy is required. In addition, it is intriguing to check whether the fRG flows can be captured by the analytic solutions in Eq. as well.d-wave symmetry. It implies that pairing mechanism in cuprates arises from resonating pair hopping between antinodal regimes51. The interpatch pair hopping between antinodal regimes also explains the enhance spin fluctuations at . The pairing instability in the nodal regimes is only triggered when approaching close to the critical temperature. How to rigorously generalize the analysis presented here from multiband superconductors to multipatch single-band system remains an open question at this point.For single-band materials but with significant variations in density of states at different momenta, one should divide the Fermi surface into appropriate patches and applies the method developed here. We have carried out primitive RG analysis on cuprates by cutting the single-band Fermi surface into 16 patches . The dominant patches locate at and with sign-reversed gap functions, agreeing with the g\u22a5 and suppresses the intraband one g. It is not yet clear whether the optimal electronic interaction for different materials share a generic profile, or the answer may vary from one material to the other.We would like to emphasize that the weak-coupling RG analysis demonstrated here may work well in strong coupling if no quantum phase transition is encountered. That is to say, the generalized BCS Hamiltonian with multiple bands (or different momentum patches) may be the mother Hamiltonian for most unconventional superconductors in the low-energy limit. Even the pairing mechanism is revealed, non-trivial crossovers in the unconventional superconductors are expected at wide temperature range and a complete phenomenological description is still lacking. Finally, the pursue for higher critical temperatures should not be on seeking for stronger glues. In stead, it is time to search for optimal interaction profile which gives the largest interband pair hopping Kvw in the momentum space has been constructed in previous studies45. The Fermi surface with electron doping \u03c0, \u03c0) in the Brillouin zone. In addition, there are two electron pockets located at and points46.The band structure of the iron-based superconductor can be described by the five-orbital tight-binding model,mentclass2pt{minim30 satisfying the SU(2) symmetry arei runs over all lattice sites. 
Note that, due to the multiple orbitals, even the simplest on-site interactions contain three terms: intra-orbital repulsion U1, inter-orbital repulsion U2 and Hund\u2019s coupling JH between different orbitals.To investigate the correlation effects, the most general on-site interactionsx direction, we quantize the momentum kx and reduce the five active bands to five pairs of Fermi points, as show as Fig.\u00a049, and can be written down explicitly,By putting the iron-based superconductor on the four-leg ladder with periodic boundary conditions along the 0 and the running cutoff \u039b. The coefficient tensor vi for each band. We intentionally separate the intraband and interband RG equations for clarity. Various symmetries ensure that cij\u2009=\u2009cji and fij\u2009=\u2009fji. To avoid double counting, we choose fii\u2009=\u20090 here.The notation 49, i.e.To compute the initial conditions of the RG transformation, we rewrite the field operators in the five-orbital tight-binding Hamiltonian by the chiral field decompositionuR/Lia, a\u2009=\u20091, 2, \u2026, 5, are the five orbital components of the Bloch wave functions associated with the R/L (or +y/\u2212y direction) members of the i-th Fermi pair, and c\u03c3 and c\u03c3 terms can be represented asHere With the similar computation, we also obtain the initial conditions of the forward couplings,g and g\u22a5 of our effective theory in the main text are We notice that the initial conditions of U1\u2009=\u20092.8, U2\u2009=\u20091.4 and JH\u2009=\u20090.7, we numerically solve the RG equations to obtain the RG flows with different filling factors. We find that the coupling constants diverge at the same length scale ld. By showing the RG flows versus 1/(ld\u2009\u2212\u2009l) in the log-log plot, as illustrated in Fig.\u00a034. The RG exponents for different filling factors are showed in Fig.\u00a0Within the initial conditions for 54, the four-fermion vertex function \u0393(4) is expressed as\u03b1, \u03b2 are spin indices, \u03c9 and K\u2009=\u2009 are labelled as the frequency and the two-dimensional momentum respectively.Following standard derivations developed in functional RG approachN patches for each band in the vicinity of the two Fermi surfaces, determined by the energy cut-off \u039b0, as shown in Fig.\u00a0kFP and vP respectively being the Fermi momentum and Fermi velocity of the band P. We also approximate that the dispersion of the two Fermi pockets are independent of the angular direction. As a consequence, the tree-level RG analysis, demonstrated by Shankar50, shows that the only marginal couplings are the forward and Cooper scattering in this convention i-th patch. With the convention of the band indices V\u2009=\u2009C, F.The derivation is aided by dividing the Fermi surfaces into two bands and U(1) and SU(2) symmetries, the generic form of functional RG equations for Vs and V1 in one-loop approximation are represented asLpp and Lph functions, depicting particle-particle and particle-hole bubble diagrams respectively, are denoted asUnder the P-th Fermi pocket as 50.We elaborate the computation of the first and second contributions of ) in Eq. , as an en of Eq. is repreWith the same spirit, we compute the full set of RG equations. Furthermore, to simplify the coefficients in the RG equations, we rescale the couplings asThus, the full set of non-trivial RG equations is represented as,N\u2009\u2192\u2009\u221e and the two dimensional RG emerges. Terms with 1/N factor but without summation vanish in the two dimensional limit. 
This simplification is the same as the elegant \u201cphase-space argument\u201d invented by Shankar50 before. Terms with summations smoothly evolve back into angular integral,Take 50.The identity 56.i-th band. To simplify the calculations, we drop the momentum dependence of the interaction 55 with different instabilities, depending on two parameters D is the density of states at the Fermi energy.We consider the standard multiband BCS Hamiltonians\u00b1 pairing symmetry, i.e. the gap functions are related s-wave pairing symmetry, i.e. the gap functions are related For s\u00b1 pairing symmetry for instance, by integrating high-energy modes, the rescaling of the gap function is described by the RG equation 50. It is said that gap functions at different energy scales can be related by l in the both side, we obtain the equation,,We connect the gap function to the RG approach, where the gap function is rescaled with rescaling energy of systems in RG steps. Taking the gap function with We combine with the gap function from generalized two-band BCS Hamiltonian Eq. , and ends-wave pairing symmetry leading to the RG equation,In the same manner, the generalized two-band BCS Hamiltonian gives the Combining the above RG equations together, we obtain Eq. ."} {"text": "Identification of molecular determinants of receptor-ligand binding could significantly increase the quality of structure-based virtual screening protocols. In turn, drug design process, especially the fragment-based approaches, could benefit from the knowledge. Retrospective virtual screening campaigns by employing AutoDock Vina followed by protein-ligand interaction fingerprinting (PLIF) identification by using recently published PyPLIF HIPPOS were the main techniques used here. The ligands and decoys datasets from the enhanced version of the database of useful decoys (DUDE) targeting human G protein-coupled receptors (GPCRs) were employed in this research since the mutation data are available and could be used to retrospectively verify the prediction. The results show that the method presented in this article could pinpoint some retrospectively verified molecular determinants. The method is therefore suggested to be employed as a routine in drug design and discovery. The results were confirmed by site-directed mutagenesis (SDM) studies and identified Asn147, Glu182, Thr323, and Gln347 as the molecular determinants [Development of computational methods to accurately identify molecular determinants of the receptor-ligand binding is of considerable interest. Istyastono et al. combinedrminants . On the rminants employedrminants ,16, and rminants in a retrminants . Unforturminants offer po2a receptor (AA2AR), \u03b22 adrenergic receptor (ADRB2), C-X-C chemokine receptor type 4 (CXCR4), and Dopamine D3 receptor (DRD3). These receptors were selected because they are members of GPCRs, their ligands and decoys datasets are available in the enhanced version of the database of useful decoys (DUDE) [The upgraded version of PyPLIF called PyPLIF HIPPOS was recently made publicly available . The sofs (DUDE) , and theThe proposed method identified 23 probable molecular determinants of the ligand binding to the studied receptors . ThirteeIn In Similar to Two branches are leading to ligand identification in https://gpcrdb.org/) (accessed on 20 March 2021) [The proposed computational methods presented in this article predicted in total 23 molecular determinants of the ligand binding to some GPCRs, i.e., AA2AR, ADRB2, CXCR4, and DRD3. 
Thirteen out of these 23 molecular determinants (circa 56.52%) were retrospectively verified by observing the mutant data compiled in GPCRdb ch 2021) ,9. There21 ,9. ThThe computational techniques employing the combination of retrospective SBVS campaign and RPART analysis were originally used to increase the prediction quality of the developed SBVS to identify ligand for ER\u03b1 . SubsequThe most recent GPCRdb updates have just been published , which a4 receptor (HRH4) as the molecular determinant of the ligand binding could be used further to fine-tune the HRH4 affinity and the selectivity toward the histamine H3 receptor (HRH3) [The information of the molecular determinants could be used further in structure-based drug design and discovery, especially in fragment-based approaches to perform optimization rationally in order to fine-tune the affinity and the selectivity for a particular receptor target . For exar (HRH3) . In the r (HRH3) . The infr (HRH3) ,27. In or (HRH3) ,35 and dr (HRH3) ,37.All the identified molecular determinants in this receptor were verified in the GPCRdb . Only onThree out of nine identified molecular determinants in this receptor were verified in the GPCRdb . There aThree out of four identified molecular determinants in this receptor were verified in the GPCRdb . SimilarThree out of six identified molecular determinants in this receptor were verified in the GPCRdb . There a\u00ae Xeon\u00ae CPU E5-2620 v4 @ 2.10GHz as the processor and 64GB of random-access memory (RAM). There were in total 16 central processing units (CPUs) from 8 cores @ 2 threads. The following were the software used in the research presented in this article: AutoDock Vina version 1.1.2 [https://github.com/radifar/PyPLIF-HIPPOS/releases/tag/0.1.2, accessed on 20 March 2021); PLANTS docking software 1.2 [https://www.rcsb.org/) (accessed on 20 March 2021) [https://gpcrdb.org/) (accessed on 20 March 2021) [The computational simulations were performed in a 64-bit Linux (CentOS Linux release 7.4.1708) machine with Intelon 1.1.2 ; PyPLIF on 1.1.2 version ware 1.2 ,30; SPORware 1.2 ; Open Baware 1.2 ; ADFRsuiware 1.2 ; and RPAware 1.2 . Compounch 2021) with thech 2021) were useThe generic procedure consisted of 3 steps: (i) retrospective SBVS campaigns using AutoDock Vina; (ii) PLIF identification using PyPLIF HIPPOS followed by ensPLIF calculation; and (iii) RPART analysis using R.The retrospective SBVS using molecular docking simulations started with the preparation of the compounds obtained as mol2 files from DUDE, followed by preparation of the virtual target obtained from RCSB PDB, and preparation of the configuration file to perform docking. Then, the docking simulations were performed for all prepared compounds. Module \u201cseparate\u201d from Open Babel was used to split the obtained files from DUDE into a single file for a single compound. The mol2 files were then subjected to the module \u201cprepare_ligand\u201d from ADFRsuite to be converted into the AutoDock Vina readily format pdbqt. The module \u201csplitpdb\u201d from SPORES was used to split the protein part from others into the pdb files obtained from RCSB PDB. The \u201cprotein.mol2\u201d resulted from the module \u201csplitpdb\u201d of SPORES was then subjected to the module \u201cprepare_receptor\u201d from ADFRsuite to be converted into the AutoDock Vina readily format pdbqt. The generic configuration for the docking simulations were set as follows: num_modes = 10; energy_range = 5; cpu = 8; log = out.log. 
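As a concrete illustration of the generic docking set-up just described, the snippet below writes a Vina configuration file with those parameters and launches AutoDock Vina 1.1.2 for one prepared ligand, assuming the vina executable is on the PATH. The receptor and ligand file names and the box center/size values are placeholders only; as explained next, the actual center and size were derived per target from the co-crystallized ligand.

```python
import subprocess

# Generic docking parameters stated in the text; file names and box values are placeholders.
config = """\
receptor = protein.pdbqt
ligand = ligand.pdbqt
center_x = 10.0
center_y = 12.5
center_z = -5.0
size_x = 22.0
size_y = 20.0
size_z = 24.0
num_modes = 10
energy_range = 5
cpu = 8
out = ligand_out.pdbqt
log = out.log
"""

with open("conf.txt", "w") as handle:
    handle.write(config)

# AutoDock Vina 1.1.2 reads all options from the configuration file.
subprocess.run(["vina", "--config", "conf.txt"], check=True)
```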
The XYZ coordinate position and the size of the docking box were specific for each virtual target. The center of the co-crystal ligand was set as the XYZ coordinate position, and the distance of 5 \u00c5 from the surface of the co-crystal was used to calculate the docking box size. The module \u201cbind\u201d from PLANTS was used to obtain the values of the XYZ coordinate positions and the size of the docking boxes. Each prepared compound was then docked using AutoDock Vina. The module \u201cbind\u201d from PLANTS also provided a list of residues in the docking box. The residues were used to create configuration files for PLIF identification using PyPLIF HIPPOS. By employing the configuration files, the PLIF identifications were performed for all docking poses resulted from the retrospective SBVS . The opthttps://www.rcsb.org/structure/3eml (accessed on 20 March 2021) [http://dude.docking.org/targets/aa2ar (accessed on 20 March 2021) [The human AA2AR with the PDB ID of 3EML was downloaded from ch 2021) , while thttps://www.rcsb.org/structure/3ny8 (accessed on 20 March 2021) [http://dude.docking.org/targets/adrb2 (accessed on 20 March 2021) [The human ADRB2 with the PDB ID of 3NY8 was downloaded from ch 2021) , while thttps://www.rcsb.org/structure/3odu (accessed on 20 March 2021) [http://dude.docking.org/targets/cxcr4 (accessed on 20 March 2021) [The human CXCR4 with the PDB ID of 3ODU was downloaded from ch 2021) , while thttps://www.rcsb.org/structure/3pbl (accessed on 20 March 2021) [http://dude.docking.org/targets/cxcr4 [The human DRD3 with the PDB ID of 3PBL was downloaded from ch 2021) , while tThe combination of retrospective SBVS campaigns, PLIF-derived ensPLIF descriptors using PyPLIF HIPPOS, and RPART analyses provide a full in silico complementary method to SDM studies for the molecular determinants of the ligand binding to the corresponding GPCRs. Notably, the method shows better prediction quality indicators of the SBVS protocols compared to the original protocols. Moreover, for a particular receptor target, there are options to optimize the prediction quality, e.g., fine-tuning the configuration of the docking simulations or filtering poses prior to ensPLIF calculation based on the docking score."} {"text": "The emergence of the Internet of Vehicles (IoV) aims to facilitate the next generation of intelligent transportation system (ITS) applications by combining smart vehicles and the internet to improve traffic safety and efficiency. On the other hand, mobile edge computing (MEC) technology provides enormous storage resources with powerful computing on the edge networks. Hence, the idea of IoV edge computing (IoVEC) networks has grown to be an assuring paradigm with various opportunities to advance massive data storage, data sharing, and computing processing close to vehicles. However, the participant\u2019s vehicle may be unwilling to share their data since the data-sharing system still relies on a centralized server approach with the potential risk of data leakage and privacy security. In addition, vehicles have difficulty evaluating the credibility of the messages they received because of untrusted environments. To address these challenges, we propose consortium blockchain and smart contracts to accomplish a decentralized trusted data sharing management system in IoVEC. This system allows vehicles to validate the credibility of messages from their neighboring by generating a reputation rating. 
Moreover, an incentive mechanism is utilized to motivate the vehicles to store and share their data honestly; thus, they will obtain certain rewards from the system. Simulation results demonstrate efficient network performance together with an appropriate incentive model, achieving decentralized trusted data sharing management in IoVEC networks. With the rapid movement of urbanization and industrialization, the number of registered vehicles worldwide is estimated to reach two billion within the next 10–20 years. However, the conventional IoV system has difficulty coping with the increasing complexity of ITS applications, which has led to demand for enormous data storage volumes together with high computation and communication processing requirements. Hence, the notion of IoV edge computing (IoVEC) is introduced to enhance user experience with low latency, high bandwidth, and real-time communication with the help of mobile edge computing (MEC) technology. IoVEC i Despite the potential mentioned above, the conventional architecture of IoVEC with a centralized approach faces crucial challenges related to users' data security and privacy. In particular, the potential exposure of user information and single-point-of-failure (SPoF) problems may still occur, since the IoVEC framework's data are held on a central server. Hence, IoV network participants might be hesitant to take part in a data sharing process that involves private information, such as customer identities, vehicle numbers, and driving preferences. Moreover, the risk of selfish behavior might diminish participants' enthusiasm to cooperate with each other in the system. This problem becomes more serious when a malicious vehicle exists in the network. Various adversarial actions may threaten privacy by gathering users' private information for personal benefit, as well as endanger traffic safety and efficiency by reporting incorrect information to the system. Consequently, IoVEC needs a reliable and trusted data sharing management model that removes the centralized intermediary scheme. Previous work considers that vehicles may share their data voluntarily. Unfortu In recent years, since Nakamoto introduced Bitcoin in 2008, 1. We design a decentralized trusted data sharing management framework for IoVEC networks by utilizing a consortium blockchain and smart contracts. This framework provides a secure data sharing scenario among vehicles, without relying on trusted intermediaries, in distributed, verifiable, and immutable ledgers. 2. We present an information credibility assessment scenario to minimize irrelevant information and to counter malicious behavior by vehicles in the data sharing process. 3. We design an appropriate incentive mechanism based on each vehicle's contribution by leveraging the self-executing nature of smart contracts. This scheme aims to motivate vehicles to participate positively in maintaining trusted data sharing activities and to ensure the system's security and sustainability. 4. We formulate a prototype of decentralized data sharing for IoV networks and evaluate its performance based on simulation results. The remainder of this paper is organized as follows.
In this section, we review relevant literature on conventional data sharing management in vehicular networks and on centralized incentive mechanisms, to position the existing approaches in relation to our research. Mobile edge computing (MEC) technology was introduced in 2014 by the European Telecommunications Standards Institute (ETSI), aiming to improve user experience with low latency, high bandwidth, and real-time communication. MEC lev In the vehicular network environment, the primary entities in the data sharing process are vehicles and roadside units (RSUs), which form two types of communication, namely vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). Vehicles interact with neighboring vehicles by using onboard units (OBUs) equipped with several sensing devices that have simple computation and communication capabilities. OBUs are also used to automatically recognize traffic-related information and enable the vehicles to send notification messages to others using V2V communication standards, improving traffic safety and efficiency. On the other hand, V2I provides single- or multi-hop communication between a vehicle and an RSU, supported by dedicated short-range communication (DSRC) standards. RSUs, a Moreover, conventional VN data management relies on a cloud service platform with a centralized database approach. In this sense, a centralized server is employed to collect and store all vehicles' shared data. However, the centralized approach still faces security and privacy risks, since attackers can easily forge or tamper with the data sent by OBUs in an open wireless communication environment. Further Since the VN system relies on a centralized approach, vehicles might be reluctant to share their data due to data security and privacy protection issues. Moreover, the SPoF is likewise a significant problem for centralized networks. Another challenge is that vehicle participation in the data sharing process remains low because of self-interest and because the vehicles do not obtain compensation or benefit from the system. Hence, an incentive scheme is used to boost positive vehicle participation and to maintain the system's reliability and sustainability. Existing incentive mechanisms with a centralized approach allow simple transactions between vehicles and a trusted third party. In this case, the trusted third party holds the entire data transaction and provides transaction incentives among the participants involved. In short, as a data provider, the trusted third party controls the whole system orchestration (including the incentive scheme). Current works that represent the centralized incentive approach are monetary-based incentive and repu Blockchain is considered a solution to improve security and privacy protection since it is well suited to overcoming the problems of the centralized approach. Blockchain is a distributed ledger technology that enables participating entities to accept and share time-stamped recorded activities across the network. A particular consensus mechanism validates those activities before they are filed in an immutable database. Generally, blockchain can be utilized to achieve three objectives: managing distributed services relying on smart contracts, operating a distributed ledger, and accomplishing decentralized storage.
There eMoreover, the authors in Reference proposedFurthermore, several efforts study employing consortium blockchain\u2019s advantages to minimize malicious entities\u2019 existence during the consensus mechanism. The authors of Reference proposedThis section explains our proposed model, blockchain-based secure and trusted data sharing management in IoVEC networks. Inspired by Reference , our arcUser network layer manages vehicle enrollment and authentication, the road-related message broadcasting, and the message credibility assessment process.In our scenario, vehicles represent the user network that communicates with other vehicles and RSUs to improve traffic safety and efficiency. Before entering and accessing the network service, all vehicles must be authorized by a trusted party (TP), e.g., the Department of Transportation, to guarantee vehicle identity legitimation by binding their real identities . Then, a legitimate vehicle eference . The sysk and time t and encrypts those messages before being broadcasted to the network by utilizing V2V and V2I communication. Nevertheless, k. We consider that the message sent by vehicles near the event location is more trusted compared to the vehicle in a far distance. Therefore, the message credibility is defined based on Equation and validation block smart contract (VBSC). MRSC collects, records, and aggregates the number of message information \u03c8Pt,A consensus mechanism is utilized to achieve the required agreement between the authorized participant entities to generate a new block transaction into a blockchain network using a particular set of rules. Here, only authorized RSUs are eligible to be the nodes participants in the consensus mechanism with more extensive storage and computation capability compared to the OBUs. We use the PBFT algorithm to conduct a consensus mechanism due to its advantages, including small resource consumption, high efficiency, consistency, and maturity, making it proper for our proposed scheme. Moreover, PBFT permits the presence of anomalous nodes =(n\u22121)/3 . Figure n edge nodes RSUs k, where r, a leader is responsible for storing Leader selection step: In consortium blockchain, RSUs are selected as validators to verify the block Request step: The request step represents a new candidate block generation Pre-prepare step: In this step, as shown in Prepare step: Each validator verifies the commit message after receiving over prepare-step.Commit step: Then, the validators broadcast the r is finished after the consensus reaches over commit message and then uploads the verified block Reply step: Finally, the leader t according to the message credibility assessment from We designed a trusted data sharing management IoVEC network based on the proposed model, consisting of three layers, i.e., user network layer, blockchain edge layer, and blockchain network layer. Each layer has its setting that distinct from one another. Using the OSMWebWizard package provided in the simulation of urban mobility (SUMO), we modeled a highway traffic scenario to prototype and evaluate IoVEC networks\u2019 efficiency. Here, NS3 as a discrete-event network simulator is used to verify the result, analyzing a trace file for vehicle mobility and message credibility.To form an IoVEC, we use an optimized link-state routing protocol (OLSR) as one of the protocol standards in the wireless access for the vehicular environments (WAVE). 
This protocol enables the system to provide better performance in terms of vehicle mobility, speed, and delay communication . Here, vFurthermore, we also observed the relationship between the distance of After the MRSC aggregates the trust value rating, its result is then validated using a consensus mechanism in the VBSC consortium blockchain. To construct a consortium blockchain, we use Hyperledger Sawtooth as part of the Hyperledger platform. Hyperledger Sawtooth platform is suitable for our proposed model because it supports the PBFT consensus mechanism equipped by several validators. We utilized Docker containers to facilitate the main core components of Hyperledger Sawtooth architecture, such as transaction processors, validators that represent preselected edge nodes, and Sawtooth Representational State Transfer (REST) server. The consensus mechanism is then conducted by preselected RSUs to validate a new block transaction. Before that block is stored in the blockchain network layer, the PBFT algorithm requires the agreement over To support an adequate incentive for the information provider (vehicle HTTP://127.0.0.1:7545 (accessed on 11 January 2021) with an auto mining mode. The address for all vehicles is derived from Ganache that holds 100.00 ETH each (publicly available). In a real-world implementation, the address does not depend on a single user interface, and the address is managed in the private wallet of each entity.We utilized a smart contract feature in the Ethereum platform through Ganache Truffle (v.2.4.0) graphical user interface (GUI). The default setting is applied where the gas limit is set to be 6,721,975 units, with the gas price is 20,000,000,000 wei running on a remote procedure call (RPC) server We performed the incentive distribution with a different number of vehicles (the message providers) Recently, blockchain has been widely studied to form a secure, trusted, and decentralized data management in the vehicular network. Several works have proposed blockchain-based IoV solutions to protect road-related information sharing among vehicles to improve traffic safety and efficiency ,15,16. HCompared to the proposals by Yang et al. and ZhanKang et al. also makFurthermore, we employ consortium blockchain to avoid SPoF attacks in a centralized system and prevent data modification attacks that may broadcast and create a modified block transaction in the consensus process conducted by compromised RSUs. Compared to other works which mainly use a joint PoW and PoS mechanisms wBy leveraging the several advantages of our proposed framework as discussed above, the blockchain-based IoVEC framework can be utilized to address the limitation of ITS application , especially in terms of enhancing system performance and security protection. In IoVEC, MEC can improve the system performance; providing low latency, high bandwidth, and real-time communication by placing its server in the edge network to be closer to the user. Furthermore, the blockchain can form a decentralized and trusted data management system, as well as to provide the users with privacy and security. Nevertheless, future discussions are encouraged to address relevant issues in the blockchain-based IoVEC. For instance, the construction of a robust message authentication mechanism in order to strengthen privacy and security protection in the blockchain-based IoVEC would be of great importance. 
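As a quick numerical check of the PBFT fault-tolerance property mentioned above (tolerating up to (n−1)/3 anomalous validators), the sketch below computes the number of tolerable faulty RSU validators and the usual 2f+1 quorum of matching prepare/commit messages for a few network sizes. The exact thresholds enforced by the Hyperledger Sawtooth PBFT engine are not reproduced here.

```python
# PBFT sizing sketch: f faulty validators can be tolerated out of n when
# n >= 3f + 1, i.e. f = floor((n - 1) / 3); a quorum of 2f + 1 matching
# prepare/commit messages is then required before a block is committed.

def pbft_parameters(n_validators: int) -> dict:
    f = (n_validators - 1) // 3          # tolerable faulty validators
    quorum = 2 * f + 1                   # matching messages needed to proceed
    return {"validators": n_validators, "max_faulty": f, "quorum": quorum}

if __name__ == "__main__":
    for n in (4, 7, 10):
        print(pbft_parameters(n))
    # {'validators': 4, 'max_faulty': 1, 'quorum': 3}
    # {'validators': 7, 'max_faulty': 2, 'quorum': 5}
    # {'validators': 10, 'max_faulty': 3, 'quorum': 7}
```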
In Reference , the autWe have introduced a consortium blockchain and smart contracts to achieve a decentralized trusted data sharing management system in IoVEC. In this paper, smart contracts are exploited to accomplish an efficient, reliable, and secure data management system. Here, two smart contracts, MRSC and VBSC, are employed and placed on RSUs as the distributed edge network infrastructure. MRSC is used to collect and aggregate the trust value rating, while VBSC performs the consensus mechanism. Furthermore, this framework permits vehicles to validate the credibility of messages from their neighboring vehicles by generating a reputation rating. Additionally, we utilized an incentive mechanism based on Ethereum smart contract to motivate and propel the vehicles to contribute and sincerely share their data to obtain certain rewards from the system. Packet delivery ratio, considered to represent trust value rating of data sharing efficiency in IoVEC-Blockchain, shows a favorably positive performance and feasible to form a decentralized trusted data management system. Lastly, further studies are still required to apply a robust message authentication mechanism and cope with the blockchain scalability issue."} {"text": "U test. There was a tendency towards higher expression of ER \u03b1 in LF fibroblasts in the hypertrophy group (p = 0.065). The Sairyo and Okuda scores were more severe for the hypertrophy group but, in general, not statistically relevant. There was no statistically relevant correlation between the expression of ER \u03b1 and sex (p = 0.326). ER \u03b1 expression was higher in patients with osteochondrosis but not statistically significant (p = 0.113). In patients with scoliosis, ER \u03b1 expression was significantly lower (p = 0.044). LF hypertrophy may be accompanied by a higher expression of ER \u03b1 in fibroblasts. No difference in ER \u03b1 expression was observed regarding sex. Further studies are needed to clarify the biological and clinical significance of these findings.The most common spinal disorder in elderly is lumbar spinal stenosis (LSS), resulting partly from ligamentum flavum (LF) hypertrophy. Its pathophysiology is not completely understood. The present study wants to elucidate the role of estrogen receptor \u03b1 (ER \u03b1) in fibroblasts of hypertrophied LF. LF samples of 38 patients with LSS were obtained during spinal decompression. Twelve LF samples from patients with disk herniation served as controls. Hematoxylin & Eosin (H&E) and Elastica stains and immunohistochemistry for ER \u03b1 were performed. The proportions of fibrosis, loss and/or degeneration of elastic fibers and proliferation of collagen fibers were assessed according to the scores of Sairyo and Okuda. Group differences in the ER \u03b1 and Sairyo and Okuda scores between patients and controls, male and female sex and absence and presence of additional orthopedic diagnoses were assessed with the Mann\u2013Whitney One of the most common spinal disorder in the elderly is LSS , with LFFifty patients were prospectively included in our study. Study approval was obtained from the Ethics Committee of the Medical Faculty of the Philipps University of Marburg (study reference 191/09). Thirty-eight patients with LSS necessitating decompressive surgery comprised the study group, and LF specimens were harvested from the L4/L5 spinal level. The control group consisted of 12 patients with disk herniation at the L4/L5 level. 
None of the controls had signs of degeneration on preoperative magnetic resonance imaging (MRI). Care was taken to remove as little LF as possible.Tissues were fixed in 4% formalin solution, embedded in paraffin, cut at a thickness of 4 \u03bcm, and stained with H&E and Elastica van Gieson (EvG) stains, as established for routine histopathological diagnosis. Immunohistochemistry was performed using heat-induced antigen retrieval (employing citrate buffer pH 6.0) and the Leica Bond Polymer Refine Detection System with 3,3\u2032-diaminobenzidine (DAB) as the chromogen . All immunostainings were run on an automated immunostaining apparatus . ER \u03b1 was detected by the monoclonal mouse antibody NCL-ER6F11 .The expression of ER \u03b1 was semiquantitatively assessed as the nuclear staining of fibroblasts in LF according to the standardized procedures in breast cancer diagnostics by experienced senior pathologists (CCW and AR). The immunoreactive score (IRS) was calculated according to Remmele by multiplying the staining intensity with the proportion of positive cells among all LF fibroblasts . The proU, Kruskal\u2013Wallis-, t-test, or Fisher\u2019s exact test using R [Differences with regards to sex, age, additional orthopedic diagnosis, ER \u03b1 and proportion of fibrosis, loss and/or degeneration of elastic fibers, and proliferation of collagen fibers, according to Sairyo and Okuda between the groups of patients and controls, male and female study participants, and among LSS patients were evaluated with the Mann\u2013Whitney using R and comp using R where app = 0.088). The LSS patients were significantly older than the study participants in the control group (p = 0.009), reflecting the typical age of onset for lumbar spinal stenosis (LSS patient group) and disk herniation (control group).The clinical characteristics, ER \u03b1 expression data, and pathological findings of the control and the LSS patient groups are displayed in p = 0.065). In the LF specimens, the Sairyo and Okuda scores were in general more severe for the LSS patient group but, with regards to the differences between the control and LSS patient groups, not statistically relevant. Immunohistochemical staining for ER \u03b1 in the nuclei of LF fibroblasts could be demonstrated for both sexes, along with a partly impressive staining intensity for the male probes c,f. Therp = 0.127), additional orthopedic diagnosis (p = 0.365), or ER \u03b1 (p= 0.326) and sex. Furthermore, in both sexes, the same range of ER \u03b1 expression was found , especially among LSS patients (patients . No statp = 0.113) (p = 0.932). However, in patients with scoliosis, the ER \u03b1 expression was significantly lower (p = 0.044). When evaluating the absolute fibrosis and loss of elastic fibers on the basis of the score of Sairyo among the LSS patient group (p = 0.04), but there was no difference for structural instability (p = 0.124) or scoliosis (p = 0.125). There was also a trend for less of a loss of elastic fibers in patients with osteochondrosis (p = 0.076). No statistically significant effect was seen for the loss of elastic fibers in patients with structural instability (p = 0.308) or scoliosis (p = 0.765). The scores of fibrosis and the loss of elastic fibers according to Sairyo and the scores according to Okuda showed no statistically significant effects with regards to additional diagnoses . 
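The immunoreactive score and the nonparametric group comparison described above can be reproduced schematically as follows. This is a sketch only: the conventional Remmele bins (staining intensity 0-3 multiplied by a 0-4 percentage score) are assumed because the exact binning is not restated here, and the IRS values in the example are made up for illustration, not study data.

```python
# Sketch of the Remmele immunoreactive score (IRS) and a Mann-Whitney U test.
# Assumes the conventional Remmele bins; example IRS values are hypothetical.
from scipy.stats import mannwhitneyu

def percentage_score(fraction_positive: float) -> int:
    """Map the fraction of positive nuclei to the 0-4 Remmele percentage score."""
    if fraction_positive == 0:
        return 0
    if fraction_positive <= 0.10:
        return 1
    if fraction_positive <= 0.50:
        return 2
    if fraction_positive <= 0.80:
        return 3
    return 4

def irs(staining_intensity: int, fraction_positive: float) -> int:
    """IRS = staining intensity (0-3) x percentage score (0-4), giving 0-12."""
    return staining_intensity * percentage_score(fraction_positive)

if __name__ == "__main__":
    print(irs(2, 0.4))  # 2 x 2 = 4
    # Hypothetical IRS values for an LSS group and a control group.
    lss_irs = [4, 6, 8, 2, 6, 9, 4, 6]
    control_irs = [2, 1, 4, 0, 3, 2]
    stat, p = mannwhitneyu(lss_irs, control_irs, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")
```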
In patint group , there wiagnoses .Hypertrophy of the LF is an important contributing factor in LSS , being oIn previous studies, expression of the ER was demonstrated on different tendons and ligaments via immunohistochemistry or reverse transcription-polymerase chain reaction (RT-PCR) ,20,21,22In our study population, we found a tendency toward higher ER \u03b1 expression in LF fibroblasts of LSS patients among both sexes, accompanied by a higher variation among LSS patients . ER \u03b1 haSince the LSS patients were much older than the study participants of the control group, it should be considered whether age alone could lead to a higher ER \u03b1 expression. However, in previous studies on ER \u03b1 in mesenchymal cells, no such correlation was found. In smooth muscle cell nuclei of the uterosacral ligament, the ER was detected immunohistochemically in all 25 patients studied regardless of variations in patient ages, which ranged from 34 to 68 years . In meseAmong LSS patients, the ER \u03b1 expression was higher with the additional diagnosis of osteochondrosis, with a tendency toward statistical significance, whereas ER \u03b1 expression was significantly lower in patients with the additional diagnosis of scoliosis. To date, no correlation regarding estrogen/ER and osteochondrosis has been found in the pertinent literature. With respect to scoliosis, ER \u03b1 is considered a candidate gene for idiopathic scoliosis . FurtherThere have been a number of studies investigating the histopathological basis of LF hypertrophy and its correlation with clinical findings. However, no generally approved score exists to quantify histopathological changes in LF pathology. Therefore, we used the scores proposed by Sairyo et al. and OkudOkuda et al. found no significant relationship between the loss of elastic fibers, degeneration of elastic fibers, or proliferation of collagen fibers and clinical symptoms or image findings. Although they examined materials from 50 patients, they did not include a control group without degenerative changes but analyzed only patients with LSS or spondylolisthesis.In a gender-matched case\u2013control study of 30 patients with LSS or disc herniation, respectively, Park et al. identified a statistically significant correlation of elastin degradation and fibrosis with LSS while grIn our study population\u2014being slightly larger with respect to LSS patients than Park\u2019s cohort\u2014the scores according to Sairyo and Okuda, respectively, were, in general, more severe for the LSS patient group but, with regard to the differences between the control and LSS patient groups, not statistically relevant.Sairyo et al. demonstrated increased fibrosis and loss of elastic fibers, particularly in the dorsal portion of the LF . In ParkWhile our results on the histopathological changes in LF hypertrophy are in-line with previous findings and promising, further studies are warranted, with preferably more patient samples examined and LF tissues gained and processed in an oriented manner, e.g., for the differentiation of dural and dorsal LF. Further, larger studies may aid in elucidating the potential clinical consequences of our findings.n = 12) compared with the hypertrophy group, we only used LF samples from patients with an indication for discectomy without visible signs of degenerative lumbar spine disease, being a rare clinical situation. Additionally, only a few male control samples were available (n = 2), representing a potential bias. 
However, care was taken to recruit an equivalent number of male and female patients (n = 19 each).Concerning the small number of control samples in our study (Our study is the first published to examine ER \u03b1 on LSS patient materials with a relevant number of cases and clearly indicating both sexes. LF hypertrophy may be accompanied by a higher expression of ER \u03b1. More severe fibrosis, loss of elastic fibers, and the proliferation of collagen fibers, according to Sairyo and Okuda, were observed in the hypertrophy group. No statistically relevant correlation was seen between ER \u03b1 and sex. Among the LSS patients, ER \u03b1 expression was higher with the additional diagnosis of osteochondrosis, with a tendency toward statistical significance, whereas the ER \u03b1 expression was significantly lower in LSS patients with the additional diagnosis of scoliosis. Further, larger studies with some technical modifications in material processing may help to clarify the biological and clinical significances of these findings."} {"text": "The incretin hormone glucagon-like peptide-1 (GLP-1) has received enormous attention during the past three decades as a therapeutic target for the treatment of obesity and type 2 diabetes. Continuous improvement of the pharmacokinetic profile of GLP-1R agonists, starting from native hormone with a half-life of ~2\u20133 min to the development of twice daily, daily and even once-weekly drugs highlight the pharmaceutical evolution of GLP-1-based medicines. In contrast to GLP-1, the incretin hormone glucose-dependent insulinotropic polypeptide (GIP) received little attention as a pharmacological target, because of conflicting observations that argue activation or inhibition of the GIP receptor (GIPR) provides beneficial effects on systemic metabolism. Interest in GIPR agonism for the treatment of obesity and diabetes was recently propelled by the clinical success of unimolecular dual-agonists targeting the receptors for GIP and GLP-1, with reported significantly improved body weight and glucose control in patients with obesity and type II diabetes. Here we review the biology and pharmacology of GLP-1 and GIP and discuss recent advances in incretin-based pharmacotherapies. Obesity decades Figure\u00a01 decades .While traditionally recognized primarily as a disease of the elderly, T2D is currently one of the most frequently diagnosed preventable chronic diseases in middle age, as well as children and adolescents , 6. ExceLifestyle modifications, such as balanced nutrition, calorie restriction and physical exercise, remain the cornerstone of any weight loss intervention. However, lifestyle changes alone are insufficiently efficacious and sustainable as a stand-alone therapy, possibly because physiological adaptations conspire to promote weight regain following diet-induced weight loss . Genetic\u00ae Novo Nordisk, Copenhagen, Denmark), a long-acting agonist at the glucagon-like peptide-1 receptor (GLP-1R) , and the(GLP-1R) . Each of(GLP-1R) \u201328. WhilGLP-1 is encoded by proglucagon, a 158 amino acid precursor protein, that is predominantly expressed in the gut, pancreas, and distinct neuronal populations of the hindbrain . In the via the G\u03b1s pathway, and hence accelerates intracellular levels of cAMP (via the G\u03b1q and \u03b2-arrestin pathways and knockdown of \u03b2-arrestin-1 in rat insulinoma (INS1) cells decreases the ability of GLP-1 to promote GSIS , a 7 transmembrane G protein-coupled receptor of the class B family . GLP-1R of cAMP . 
GLP-1R ote GSIS . Immunohote GSIS . These dote GSIS . Consistote GSIS , 29, expote GSIS , 46. In ote GSIS . No exprote GSIS .2), with a low proportion produced as GLP-1(7-37) and an even a lower portion as GLP-1(1-37) or GLP-1(1-36NH2) (in vivo proteolysis by the dipeptidylpeptidase-4 (DPP-4) and fast renal elimination. DPP-4 cleaves GLP-1(7-36NH2) and GLP-1(7-37) at the second N-terminal amino acid (Ala8) position, leading to metabolically metabolites GLP-1(9-36NH2) and GLP-1(9-37) of much reduced potency , 57. Nat1-36NH2) \u201360, whic potency . Despite potency , 62. The potency , 64, it\u2019 potency .2) at a rate of 4.8 pmol-1 kg-1 min-1 improved glycemic control and insulin sensitivity in patients with T2D , and the recruitment of GLP-1 into unimolecular pharmacology with GIP, glucagon (and others) and moreDenmark) , 28. AlsDenmark) \u201385 or glDenmark) , 87 have others) , 29, 88 others) .in vivo action and potency to protect from degradation by DPP-4. Such structural modification has been applied to exenatide, lixisenatide, semaglutide, dulaglutide, and albiglutide.heloderma suspectum). Both exenatide and lixisenatide contain the full sequence of exendin-4, but lixisenatide is extended on the C-terminus to possess six additional lysine residues is a sixty amino acid tandem of two DPP-4 protected GLP-1 molecules that are covalently fused to human albumin . The chevia a gamma glutamic acid spacer at the lysine residue at position 26 and semaglutide . Liraglutide is linked to palmitic acid (C16:0) ition 26 . The fatition 26 . Semagluition 26 . Of notein vivo proteolysis, and enhance toxicological safety and potentially even its non-immunogenic resilience which was approved by the FDA in 1990 for adenosine deaminase deficiency associated with severe combined immunodeficiency disease (SCID). The PEG modification can, in rare cases, lead to vacuolation or the generation of antibodies against the PEG. As it pertains to drugs to control body weight and/or glycemia, preclinically tested pegylated anorectics include leptin (Polyethylene glycol (PEG) is a synthetic water-soluble inert polymer with the potential to enhance a drug\u2019s half-life by slowing down its rate of renal clearance. Pegylation of a drug can also enhance its aqueous solubility, protect against silience . The linsilience \u2013110. Thie leptin , FGF21 has been applied for dulaglutide. The sixty amino acid molecule comprises two Gly8-modified DPP-4 protected GLP-1 molecules in which the C-termini are fused at a Gly36 residue to IgG4 Fc fragments. Notably, while dulaglutide carries a glycine at position 8 to protect from DPP-4 cleavage, Fc fusion to native GLP-1 is sufficient to reduce DPP-4 degradation by 4-5-fold relative to native GLP-1 in the acidic endosomal compartments with the consequence that the FcRn-bound Fc complex is recycled back to the plasma membrane and secreted back into general circulation is an extended-release (ER) formulation of exenatide. The drug is self-applied on a weekly basis independent of meal patterns. The extended release is achieved through incorporation of exenatide (exendin-4) into 0.06 mm-diameter biodegradable microspheres, which comprise a 50:50 poly (PLG) polymer along with sucrose weeks, yielding therapeutic levels after two weeks and a steady state after 6\u20137 weeks . 
The relThe prolonged rise to achieve steady state plasma concentrations seems to have beneficial effects on tolerability since the frequency of nausea and vomiting is reduced in patients treated with exenatide ER relative to treatment with exenatide BID , 133. Exvia the GI-tract and quick degradation by proteolytic enzymes and the acidic environment of the stomach. For this reason, peptide GLP-1R agonists are not available as oral preparations and rather have to be subcutaneously self-injected by the patient. Difficulties and discomfort with self-injected medication is a factor that negatively affects patient compliance and quality of life . In Rybelsus\u00ae, semaglutide is co-formulated with sodium N-[8-(2-hydroxybenzoyl)aminocaprylate] (SNAC), which shields the molecule from enzymatic and acidic degradation and accelerates its site-directed release and absorption in the stomach was approved in 2014 for the treatment of obesity in adults and in 2020 for the treatment of obesity in children aged 12 and older in adjunct to lifestyle programs for the treatment of obesity in adults (vs. 31.5% in placebo controls), while 69.1% (vs. 12.0% in placebo controls) lost >10%, 50.5% (vs. 4.9% in placebo controls) >15% and 32% (vs. 1.7% in placebo controls) >20% >20% . Similarls) >20% , 143, pals) >20% . Like wils) >20% . Notablyls) >20% , 146\u20131482), is produced in the intestine and the pancreatic \u03b1-cells by cleavage of proGIP by PC2 . Body wecontrols and is de tissue and the e tissue . However of Gipr \u2013179, indue (BAT) or the pue (BAT) .Gipr protects from diet-induced obesity , fatty-acyl GIP decreases body weight and food intake in HFD-fed wildtype mice but not in mice with CNS deletion of Gipr has been developed by Eli Lilly . The molIn recent phase III clinical trials, tirzepatide showed profound ability to improve body weight and glycemia in patients with obesity and T2D. Depending on the dose , 40-52 weeks of treatment with tirzepatide, decreased HbA1c between -1.87 and \u2013 2.59%, with 81 \u2013 97% of patients achieving a HbA1c <7%, and 23 \u2013 62% of patients achieving HbA1c <5.7% \u201327, 196.The results of completed tirzepatide trials are quite exciting. However, some effort is still required to ensure such therapeutic advance will be applicable to all populations for whom it is intended. Racial and ethnic minorities carry a disproportionate burden of obesity and T2D in the general population, but their enrolment in the completed tirzepatide trials was lower than expected, which raises concerns regarding the generalizability of these trials. Female participants were reasonably represented in the completed trials, but no assessment of biological sex differences in drug efficacy and safety profiles was made \u201327, 196.It is well recognized that T2D and obesity treatments may be best suited for precision medicine approaches rather than a \u201cone size fits all\u201d paradigm. While considerable effort continues to be invested into developing more pharmacotherapies for T2D and diabetes, it is of paramount importance that use of the currently available medications is optimized\u2014that is, to provide the right treatment to the right patient, at the right time. The completed trials of tirzepatide demonstrate meaningful improvement of glycemic and weight control with this drug. 
However, as with any medical therapy, there are large inter-individual differences in response among participants, including the possibility of less than 5% weight loss to 20% or greater weight loss \u201327, 196,QT and SA co-wrote sections and edited the manuscript. CO made the figures and edited the manuscript. RW edited the manuscript. RD edited and revised the manuscript and co-wrote sections. TM conceptualized and wrote the manuscript. AH co-conceptualized the manuscript, co-wrote sections, and edited the manuscript. All authors contributed to the article and approved the submitted version.QT received research funding from the International Helmholtz Research School for Diabetes and the Alberta Diabetes Institute. SA received research funding from the Helmholtz Association (Helmholtz Diabetes School). TM received research funding from the German Research Foundation (TRR152 and TRR296). AH received research funding from the Canadian Institutes of Health Research and Weston Family Foundation (Pro00069477 and Pro00100067).AH is an investigator on clinical trials for Rhythm Pharmaceuticals, Inc., and Levo Therapeutics, has received grant funding from the W. Garfield Weston Foundation, and has served as a speaker for Rhythm Pharmaceuticals, Inc. TM receives research funding by Novo Nordisk, but these funds are unrelated the here described work. TM further received speaking fees from Eli Lilly, Novo Nordisk, Mercodia, AstraZeneca, Berlin Chemie, and Sanofi Aventis. RD is a co-inventor on intellectual property owned by Indiana University and licensed to Novo Nordisk. He was recently employed by Novo Nordisk and previously Lilly Research Laboratories.The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.The handling editor declared a shared affiliation with several of the authors SA and TM.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "The membrane showed permeation behavior, wherein the permeance reduced with the molecular size, attributed to the effect of molecular sieving. The separation performances were also determined using the equimolar mixtures of N2\u2013SF6, CO2\u2013N2, and CO2\u2013CH4. As a result, the N2/SF6 and CO2/CH4 selectivities were as high as 710 and 240, respectively. However, the CO2/N2 selectivity was only 25. These results propose that the high-silica CHA-type zeolite membrane is suitable for the separation of CO2 from CH4 by the effect of molecular sieving.The polycrystalline CHA-type zeolite layer with Si/Al = 18 was formed on the porous \u03b1-Al Some kinds of zeolite membranes are available for the dehydration of many kinds of organic solvents in industries [Zeolites are microporous aluminosilicate compounds, and they have been attracted much attention as the potential material for membranes. Zeolite membranes have been studied since the 1990s, and the MFI-type zeolite membranes were formed on the substrates by deposition and intergrowth of crystallites, which were nucleated in synthesis mixtures ,7,8,9,10dustries ,21,22.2/CH4 selectivity was as high as 2000 below 250 K. However, the selectivity decreased to 220 around 300 K. 
Noble and coworkers investigated CHA-type silica-aluminophosphate zeolite (SAPO-34) membranes [2\u2013CH4, CO2\u2013N2, Kr\u2013Xe, and N2\u2013CH4 mixtures. The CO2/CH4 selectivity was 171 at 295 K [2 separation performances were determined [N-methyl-2-pyrrolidone (NMP) [\u22122 h\u22121 and 1100, respectively. Imasaka et al. developed high-silica CHA-type zeolite membranes with Si/Al = 23, and they applied them to the CO2 separation [2 permeance (1.5 \u00d7 10\u22126 mol m\u22122 s\u22121 Pa\u22121) and CO2/CH4 selectivity (115) at 313 K. AEI-type zeolite is one of the aluminophosphate-type zeolites and contains no exchangeable cations. Since the crystal structure is similar to CHA-type zeolite, the identical CO2/CH4 selectivities were obtained [These mechanism studies have established that DDR-, CHA-, and AEI-type zeolite has favorable characteristics for the membrane material, such as small micropore diameter, large micropore volume, and the composition variability . These fembranes ,28,29,30termined ,32. Satone (NMP) . For a 5obtained ,34,35.2, CO2, N2, CH4, n-C4H10, and SF6 at 303\u2013473 K. The gas separation tests were also examined for binary mixtures of N2-SF6, CO2-N2, and CO2-CH4 at 303\u2013473 K. Furthermore, the gas permeation and separation mechanisms of the CHA-type zeolite membrane were discussed in this paper.Recently, we developed the rapid preparation technique for high-silica CHA-type zeolite membranes using the structure conversion of Y-type zeolite ,37,38. T2O3 support tube by the combination of the secondary growth of seed particles and the structure conversion of FAU-type zeolite [N,N,N-trimethyl-1-adamantammonium hydroxide solution , and ultra-stable Y-type zeolite particles . The molar composition of the solution was 40 SiO2:1 Al2O3:4 Na2O:8 SDA:800 H2O. The mixture was poured into a Teflon-lined stainless-steel autoclave, and a hydrothermal reaction was carried out at 433 K for 4 days. Solids were recovered by filtration, washed with distilled water, and dried overnight at 383 K to obtain seed particles. For the secondary growth, a synthesis solution was prepared by the same procedures as that for the seed particles, and the mixture was stirred at room temperature for 4 h. The molar composition of the mixture was 45 SiO2:1 Al2O3:4.5 Na2O:3.4 SDA:4500 H2O. The \u03b1-Al2O3 tube was used as the support, and the properties were as follows: outer diameter = 2.0 mm; inside diameter = 1.5 mm; mean pore diameter = 0.3 \u03bcm; and porosity = 45%. The outer surface of the support tube was rubbed with the seed particles to implant seeds for nucleation, and the tube was added to the autoclave filled with 30 g of the synthesis solution. The autoclave was placed horizontally in an oven at 433 K for 20 h to form the polycrystalline high-silica CHA-type zeolite layer. After the autoclave was cooled to room temperature, the support tube was recovered, washed with distilled water, and dried at room temperature overnight. Finally, the tube was calcined in air at 773 K for 10 h to remove the SDA to obtain the high-silica CHA-type zeolite membrane.A high-silica CHA-type zeolite membrane was prepared on the outer surface of a porous \u03b1-Al zeolite ,37,38. TThe morphology was observed using a scanning electron microscope , and the composition was analyzed by an energy-dispersive X-ray (EDX) analyzer attached with the SEM. The crystal structure of the membrane was identified by X-ray diffraction .2. 
The membrane was fixed to a permeation cell, as shown in 2, CO2, N2, CH4, n-C4H10, and SF6, as well as binary mixtures of N2\u2013SF6, CO2\u2013N2, and CO2\u2013CH4, were fed onto the outer surface of the membrane (feed side) at 100 mL min\u22121, and either argon (for H2) or helium (for the others) was introduced into the inside of the membrane (permeate side) at 10\u201350 mL min\u22121 as the sweep gas. The total pressures of the feed and permeate sides were kept at 300 and 101 kPa, respectively. In this study, the membrane was treated under N2 flow at 473 K for 30 min to remove adsorbed water, and the test gas was fed onto the feed side. The pretreatment was carried out before each measurement. The gas composition was analyzed using a gas chromatograph with a thermal conductivity detector (Shimadzu GC-8A), and the gas flow rate was determined by a soap-film flowmeter. The permeance for component i, iQ, was calculated using the following equation:Np is the molar flow rate of the outlet from the permeate side; S, the effective membrane area for permeation; iy, the mole fraction of component i in the outlet gas of the permeate side; Pif, the partial pressure of component i on the feed side; and Pip, the partial pressure of component i on the permeate side. The selectivity was defined as the ratio of the permeances in this study.Both the ends of the support tube were connected to stainless-steel tubes with silicon resin , and the outer surfaces of resin were wrapped with thermally shrinking tetrafluoroethylene and hexafluoropropylene copolymer (FEP) tubes . The effective membrane area for permeation was 1.2 cm2O3 tube with high reproducibility.2 was 5.1 \u00d7 10\u22127 mol m\u22122 s\u22121 Pa\u22121 at 303 K. The permeance, except for H2, decreased, with increase in the diameter, and that reached 4.1 \u00d7 10\u221211 mol m\u22122 s\u22121 Pa\u22121 for SF6. The diameter of the crystallographic channel aperture of the CHA-type zeolite is 0.38 nm [2, CO2, N2, CH4, n-C4H10, and SF6 are 0.289, 0.33, 0.364, 0.38, 0.43, and 0.55 nm, respectively [2, CO2, and N2 molecules are smaller than the channel diameter, those molecules can penetrate into and diffuse within the zeolite channels. Although the molecular diameter of SF6 is clearly larger than the channel sizes, SF6 was detected on the permeate side of the membrane. The marginal permeance of SF6 proposes that the membrane had intercrystalline boundaries. The unit cell of the high-silica CHA-type zeolite was shrunk by the air calcination, and the volume shrinkage degree was 0.6 vol % [ 0.38 nm , and theectively . Since H.6 vol % . The sma2 was 1.8 \u00d7 10\u22128 mol m\u22122 s\u22121 Pa\u22121 at 473 K. After nine times heating and cooling treatment for determination of the permeation properties of single-component gases and binary mixtures, the permeance was 1.7 \u00d7 10\u22128 mol m\u22122 s\u22121 Pa\u22121. The identical permeances of N2 indicates that the high-silica CHA-type zeolite membrane was stable for the thermal treatment.Moreover, the permeance of N2, CO2, N2, CH4, n-C4H10, and SF6 at 303\u2013473 K. The permeance of CO2 was 5.1 \u00d7 10\u22127 mol m\u22122 s\u22121 Pa\u22121 at 303 K and decreased with temperature. As a result, it was 1.3 \u00d7 10\u22127 mol m\u22122 s\u22121 Pa\u22121 at 473 K. The permeances of H2 and N2 showed similar dependencies. Since these molecules are smaller than the channel diameter of the CHA-type zeolite, these molecules can adsorb on the zeolite channels. 
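A back-of-the-envelope version of the permeance calculation is sketched below, using the relation reconstructed from the variable definitions given above, Q_i = N_p * y_i / [S * (P_if − P_ip)], together with the ideal selectivity taken as a ratio of permeances. The numerical inputs are illustrative placeholders, not measured values for this membrane.

```python
# Sketch of the permeance and ideal-selectivity calculation defined above:
#   Q_i = N_p * y_i / (S * (P_if - P_ip))
# with N_p the molar flow rate leaving the permeate side (mol/s), y_i the mole
# fraction of component i in that stream, S the membrane area (m^2), and
# P_if, P_ip the partial pressures (Pa) on the feed and permeate sides.
# Example numbers are illustrative placeholders, not measured data.

def permeance(n_p: float, y_i: float, area: float, p_feed: float, p_perm: float) -> float:
    return n_p * y_i / (area * (p_feed - p_perm))

def ideal_selectivity(q_a: float, q_b: float) -> float:
    return q_a / q_b

if __name__ == "__main__":
    area = 1.2e-4                    # 1.2 cm^2 effective membrane area, in m^2
    q_co2 = permeance(n_p=1.0e-5, y_i=0.02, area=area, p_feed=150e3, p_perm=1e3)
    q_ch4 = permeance(n_p=1.0e-5, y_i=0.0001, area=area, p_feed=150e3, p_perm=0.1e3)
    print(f"Q_CO2 = {q_co2:.2e} mol m-2 s-1 Pa-1")
    print(f"Q_CH4 = {q_ch4:.2e} mol m-2 s-1 Pa-1")
    print(f"CO2/CH4 ideal selectivity = {ideal_selectivity(q_co2, q_ch4):.0f}")
```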
The adsorption amount decreased with temperature. Therefore, the permeances of H2, CO2, and N2 were decreased with temperature by the reduction of the concentration difference between both the sides of the membrane.4, n-C4H10, and SF6, the diameters of which are identical or larger than the zeolite channels, the permeances increased with temperature. The effect of temperatures is described using the Arrhenius equation as follows:iQ* and Ep are the pre-exponential factor and activation energy for permeation, respectively. The pre-exponential factors and activation energies of single-component gas permeation are listed in n-C4H10 and SF6 were higher than those of the other gases. It is well known that the difference in the diffusivities is important for the permeation through membranes [n-C4H10 and SF6 suggest that it is difficult for these molecules to permeate through the intercrystalline boundaries. Therefore, the high-silica CHA-type zeolite membrane had small and minimal intercrystalline boundaries.In contrast, for CHembranes ,15,16,172 < N2 < CH4 < SF6. This suggests that the CO2/N2, CO2/CH4, and N2/SF6 selectivities are higher at lower temperatures. The separation performances for these mixtures are examined in the next subsection.As shown in 2 and SF6 for the equimolar mixture of them at 303\u2013473 K. As same as for the single gas, the permeance of N2 for the binary mixture decreased with temperature, while that of SF6 showed the reverse dependency. Therefore, the N2/SF6 selectivity was the highest (=710) at 303 K. The permeance of SF6 for the mixture was the same as that for single gas, and the permeance of N2 for the mixture was also identical at temperatures higher than 363 K. However, below 363 K, the permeances of N2 for the mixture was lower than that for the single gas. As a result, the N2/SF6 selectivity was 440 for the mixture. The lower N2 permeances for the mixture was attributed to the weaker interaction of N2 with the zeolite than SF6. The interaction potential of individual molecules can be described using the Lennard\u2013Jones 12-6 equation. The depths of interaction potential are 0.59 kJ mol\u22121 and 1.85 kJ mol\u22121 for N2 and SF6, respectively [6 proposes that SF6 molecules interact with zeolites more strongly than N2. In addition, SF6 molecules permeated through the boundaries, as discussed above. Therefore, N2 molecules, permeated through the boundaries, were inhibited by the SF6 molecules, and the permeance of N2 became lower compared to the single gas.ectively . The dee2 and N2 for the equimolar mixture of them at 303\u2013473 K. Although the permeance of N2 became the maximum at 343 K for the mixture, that of CO2 decreased with temperature. The CO2/N2 selectivity for the mixture was the highest at 303 K (=25) and decreased with temperature. As a result, the selectivity became only 9 at 473 K.2 for the mixture was similar, although the permeance for the mixture was slightly higher at all temperatures. The permeance of N2 was almost the same that for the single gas at higher temperatures than 373 K, while it was lower below 363 K. These are typical permeation properties when molecules are transferred and separated by the preferential adsorption of CO2. Similar properties were also observed for the CO2\u2013N2 separation using FAU-type zeolite membranes [2 adsorption and the overtaking N2 by CO2 [Compared to the single gas, the temperature dependency of COembranes ,16. Moro2 by CO2 .2 and CH4 for the equimolar mixture of them. 
The permeances of CO2 and CH4 for the mixture were 5.3 \u00d7 10\u22127 and 2.2 \u00d7 10\u22129 mol m\u22122 s\u22121 Pa\u22121 at 303 K, respectively. The CO2/CH4 selectivity was 240 for the mixture. The permeance of CO2 decreased with temperature, although that of CH4 showed the reverse dependency. As a result, the CO2/CH4 selectivity reduced to 43 at 473 K. Comparing to the single gases, the permeances of CO2 and CH4 for the mixture were almost identical. The permeation properties cannot be explained by only the preferential CO2 adsorption, as discussed in 4 is similar to the channel diameter of the CHA-type zeolite, the diffusion of CH4 within the zeolite channel was slower than N2, as show in i-C4H10 molecules hinder the diffusion of n-C4H10 for the separation of butane isomers using MFI-type zeolite membranes [4 molecules within the zeolite channels may hinder the diffusion of CO2 molecules.embranes . The CH42 concentration in the feed mixture was determined to check the interaction between CO2 and CH4 molecules during the permeation. 2 concentration in the feed mixtures on the permeation properties of CO2 and CH4 for binary mixtures of them at 303 K. If CH4 inhibits the permeation of CO2, the permeance of CO2 would be reduced at the dilute CO2 concentration. However, the permeances of CO2 and CH4 were independent of the CO2 concentration in the feed mixture, and the selectivity was almost constant at 200\u2013260. The constant permeances of CO2 and CH4 propose that the CO2 and CH4 molecules did not interact each other during the membrane permeation. Therefore, it is considered that CO2 and CH4 molecules diffuse in the different passes, such as zeolite micropores and inter-crystalline boundaries.The influence of the CO2 separation performance to previous reports [2 permeance and selectivity of our membrane were relatively high for CO2\u2013CH4 mixtures, and the performances could be plotted on the trade-off line of SAPO-34, CHA-, and AEI-type zeolite membranes [2 separation performances of the membranes are attributed to the similar crystal structures [2 selectivity below 300 K, the selectivity at temperature higher than 300 K was almost the same as those of CHA- and AEI-type zeolite membranes [2 selectivities of the FAU-type zeolite membranes were lower compared to those membranes [2 permeance and selectivity for CO2\u2013N2 mixtures [2 separation from CH4, while the selective adsorption of CO2 is necessary for CO2\u2013N2 mixtures. reports ,34,42,43embranes ,32,33,34ructures . Althougembranes . The CO22O3 tube in this study, and the gas permeation properties were determined using single-component H2, CO2, N2, CH4, n-C4H10, and SF6 at 303\u2013473 K. The permeance was 5.1 \u00d7 10\u22127 mol m\u22122 s\u22121 Pa\u22121 for CO2 at 303 K and the permeance reduced with increasing the molecular size. Moreover, the permeances of H2, CO2, and N2 decreased with temperature, while those of CH4, n-C4H10, and SF6 showed reverse trends. The gas separation tests were also carried out using binary mixtures of N2-SF6, CO2-N2, and CO2-CH4. The membrane showed the high separation performance for the mixtures, and the N2/SF6, CO2/N2, and CO2/CH4 selectivities were 710, 25, and 240, respectively.The polycrystalline CHA-type zeolite layer with Si/Al = 18 was formed on the porous \u03b1-Al"} {"text": "The objective of this study was to identify potential biomarkers and possible metabolic pathways of malignant and benign thyroid nodules through lipidomics study. 
A total of 47 papillary thyroid carcinomas (PTC) and 33 control check (CK) were enrolled. Plasma samples were collected for UPLC-Q-TOF MS system detection, and then OPLS-DA model was used to identify differential metabolites. Based on classical statistical methods and machine learning, potential biomarkers were characterized and related metabolic pathways were identified. According to the metabolic spectrum, 13 metabolites were identified between PTC group and CK group, and a total of five metabolites were obtained after further screening. Its metabolic pathways were involved in glycerophospholipid metabolism, linoleic acid metabolism, alpha-linolenic acid metabolism, glycosylphosphatidylinositol (GPI)\u2014anchor biosynthesis, Phosphatidylinositol signaling system and the metabolism of arachidonic acid metabolism. The metabolomics method based on PROTON nuclear magnetic resonance (NMR) had great potential for distinguishing normal subjects from PTC. GlcCer(d14:1/24:1), PE-NME (18:1/18:1), SM(d16:1/24:1), SM(d18:1/15:0), and SM(d18:1/16:1) can be used as potential serum markers for the diagnosis of PTC. Thyroid cancer is the most common endocrine-related malignancy and the most prevalent cancer of the head and neck in the past decades . It accoAt the same time, researchers have been searching for molecular markers that are valuable in diagnosing thyroid cancer, such as BRAF, RET/PTC, RAS, PAX8/PPAR\u03b4, P53, NTRK1, galectin-3. CK19, VEGF, Aurora-A, P16, AR, HBME-1, etc. , but disLipids played critical roles in cellular structures and functions, including cellular barriers, membrane matrices, signaling and energy storage. They undergo constant changes in physiological, pathological, and environmental conditions. Lipids play essential roles in cell growth and metabolism, therefore they are associated with carcinogenic pathways. Lipidomics, the metabolism of lipids, is defined as \u201cthe full characterization of lipid molecular species and of their biological roles with respect to expression of proteins involved in lipid metabolism and function, including gene regulation\u201d . First iE (UPLC-QTOF-MSE)-based technique for determination of total lipids present in patient plasma to identify the potential diagnostic biomarkers for thyroid cancer. UPLC-Q-TOF-MS has been used in systems analysis of complicated metabolome (Recent advances in mass spectrometry (MS), nuclear magnetic resonance (NMR) and other spectroscopic methods have greatly facilitated the development and application of lipidomics , and MS tabolome . Differen = 47) and control check (CK) (n = 33) were collected from the First Hospital of Tsinghua University from August 2016 to September 2019. The patients were selected according to the following criteria: (1) all patients with papillary thyroid carcinoma were diagnosed by pathology; (2) no patients received preoperative treatment, including adjuvant chemotherapy and radiotherapy; and (3) patients do not have hyperlipidemia, diabetes, and other diseases that might affect lipid metabolism. (4) Patients with a history of other malignancies or recurrent tumors were excluded. The selected healthy controls include age and gender-matched healthy subjects with no metabolic diseases and were proven to lack any lesions in thyroid after the physical examination followed by ultrasonography of the thyroid.Serum samples from PTC (Fasting venous blood samples were collected in EDTA anticoagulant tube. 
The fresh blood samples were transported to the laboratory for 20 min by cold chain (4\u00b0C), and the plasma was obtained by centrifugation at 1,000 g and 4\u00b0C. The plasma was cold extracted in a liquid nitrogen tank for 15 min, and then put into the \u201380\u00b0C freezer for analysis.Mass spectrometry was an analytical method which ionizes the substance to be measured, separated it according to the mass/charge ratio of ions, and measured the intensity of various ion spectrum peaks to achieve the purpose of analysis. Mass was one of the inherent characteristics of substances. Different substances had different MS. Use this property, qualitative analysis can be carried out. The peak intensity was also related to the content of the compound it represented and can be used for quantitative analysis.t-test was used for age variables. Metabolic changes in Plasma extract were analyzed by using UPLC-Q-TOF MS system and its software Progenesis QI (Waters). The original tandem mass spectrometry datasets were generated on the Waters XEVO-G2XS QTOF instrument and processed by the commercial software Progenesis QI 2.0, including raw data import, selection of possible adducts, peak set alignment, peak detection, deconvolution, dataset filtering, noise reduction, compound identification, and normalization with some method. The original data was preprocessed and the linear model was adjusted. Orthogonal Partial least squares discriminant analysis (OPLS-DA) was first used for classification discrimination. OPLS-DA was a supervised statistical method for discriminant analysis. OPLS-DA was used to establish a model of the relationship between the metabolite expression and the sample type, so as to realize the prediction of sample type , VIP value (VIP > 1), and FDR value (FDR < 0.05) Standard potential difference marker is selected. The best truncation value was determined by using The Youden index. Finally, potential biomarkers were correlated with metabolic pathways through KEGG.Statistical analysis was conducted on clinical data, gender variables were analyzed using the chi-square test, and independent ple type . The relP < 0.05 was considered statistically significant.All statistical analyses were performed used R version 3.6.3, and There were 47 PTC patients , and 33 healthy controls . The clinical information of the samples was shown in validation set and a training set. To describe the changes between PTC group and CK group, an OPLS-DA model was developed .As can be seen in the figure, the plasma lipid profile of the two groups changed significantly. In addition, we obtained the S-plot showing a good curve, and the further away the metabolites from the origin in the figure, the greater the contribution to the grouping . Thirty P < 0.05 were selected by the classic one-stage method , Recursive Partitioning (RPART), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Machine (GBM) as the alternative algorithm. Through the 7-fold cross-validation, the indexes of each model were calculated, including accuracy, sensitivity, specificity and AUC. Statistical analysis of the results of 7-fold cross- validation showed that the classification effect of Logistic Regression was similar to that of SVM, which showed high AUC valued and high accuracy . 
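The 7-fold cross-validation used above to compare the candidate classifiers can be sketched with scikit-learn as below. The feature matrix here is synthetic stand-in data rather than the measured metabolite intensities, and only the logistic regression and SVM arms of the comparison are shown.

```python
# Sketch of 7-fold cross-validation comparing logistic regression and an SVM
# by ROC-AUC, mirroring the model comparison described above. The data are
# synthetic stand-ins, not the study's metabolite intensities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 80 samples (roughly 47 "PTC" vs 33 "CK") with 13 candidate features, as a stand-in.
X, y = make_classification(n_samples=80, n_features=13, n_informative=5,
                           weights=[33 / 80], random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

cv = StratifiedKFold(n_splits=7, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```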
Validatp-value or pathway impact, were associated with Glycerophospholipid metabolism (a), Linoleic acid (b), alpha-Linolenic acid metabolism (c), Glycosylphosphatidylinositol (GPI) (d), Glycerolipid metabolism (e), Phosphatidylinositol signaling system (f), and Arachidonic acid metabolism (g). Metabolomics Pathway Analysis (MetPA) is a part of many functions of MetaboAnalyst network database. It can visualize the metabolic pathway information of potential biomarkers with the help of METLIN, HMDB, and KEGG database. As shown in In our study, UPLC-Q-TOF MS metabolomics technology was used to analyze the plasma of PTC group and CK group. Based on the classical statistical method, appropriate metabolites were selected for pathway analysis to determine the potential metabolic pathways and mechanisms.Tumor progression is a complex process involving proliferation, hypoxia, angiogenesis, apoptosis, metastasis, immunity, and increased tolerance to reactive oxygen species . These tPhospholipids are divided into two main groups, glycerophospholipids (GPs) and sphingophospholipids. Depending on the different substituents at the sn-3 position of the glycerol backbone, GPs fall into phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidyl glycerol (PG), phosphatidylserine (PS), phosphatidylinositol (PI), phosphatidic acid (PA), and cardiolipins. There is evidence that PC, PE and sphingomyelin (SM) are major components of eukaryotic cell membranes.PE is a key phospholipid that helps maintain cell membrane fluidity. SM is an important component of biofilm composition. SM and its metabolites such as ceramide (Cer), sphingosine (Sph), and sphingosinephosphate (S1P) are an important class of biologically active signaling molecules involved in the regulation of many important signal transduction processes such as cell growth, differentiation, senescence and death are involved . Among tIn cells, PC is mediated by phospholipase A2 (PLA2), a family of enzymes that hydrolyze glycerophospholipids to fatty acids and lysophosphatidylcholine. PLA2 is significantly more active in thyroid cancer cells than in normal thyroid tissue, and thus PC, along with its choline metabolites produced during metabolism, has an important role in tumor proliferation and survival . Guo et de novo synthesis of ceramide pathway Enzymes can reverse drug resistance in cancer cells mainly exists in body tissues in the form of complex lipids. The research results of Linoleic acid, as an unsaturated fatty acid, has many functions. First, LA inhibits tumors by inducing the formation of lipid peroxidation products . FurtherGlycosylated phosphatidylinositol (GPI) proteins are proteins that are anchored to the surface of eukaryotic cell membranes by a glycosylated phosphatidylinositol-anchored structure at the carboxyl terminus. GPI ethanolamine phosphate transferase participates in glycosylphosphatidylinositol biosynthesis. The relationship between GPI and tumor. First, GPI-anchored proteins are associated with tumor markers . GPI-ancAccording to the current study, sphingolipids are the most recognized lipid markers. Sphingolipids play an important role in cell proliferation, migration, inflammatory response to anticancer drugs and other cancer-related functions as well as in preventing the occurrence and development of cancer . but no We didn\u2019t find the sphingolipid pathway in our metabolic pathway, but phospholipids and glycolipids are complex lipids composed of simple lipids and non-lipid components . 
Phospholipids are lipids containing phosphoric acid. They can be divided into glycerol phospholipids and sphingosine phospholipids according to the different alcohols in the molecules. In addition, these two pathways can be interrelated through phosphoethanolamine and ceramide. In future experiments, we will expand the sample size and carefully screen the samples for further verification of the results. We would like to explore the role and association of sphingolipid metabolism in cancer.This study culminated in the design of a predictive model that was constructed using altered lipid metabolites found previously in several patients with thyroid cancer, which will hopefully help in future work to diagnose thyroid cancer. Early diagnosis can enable PTC patients to receive more effective follow-up monitoring, especially high-risk patients to receive timely treatment to reduce the higher medical costs and physical injuries caused by delayed progression of the disease. Although ultrasound is the preferred method for the diagnosis of thyroid nodules, its application in the differential diagnosis of benign thyroid nodules and papillary thyroid carcinoma is controversial, and the diagnostic accuracy range is between 20 and 76% . The appThe limitations of our study include a relatively small sample size and a study group. Follicular, anaplastic, and poorly differentiated tumor samples were not included in our study because of their low incidence. In this study, we did not compare the changes in the lipid spectra of rai-refractory and rai-responsive. The samples selected in this study were patients with papillary thyroid carcinoma confirmed by pathology. There is no clear distinction between early-stage (I-II) and late-stage (III-IV) tumors. Another limitation of this study is the small data set. In the future, the sample size should be enlarged.The lipids in the serum of patients with PTC and in the healthy control groups were comprehensively analyzed using UPLC-QTOF/MS. Thirteen lipid species are proposed as potential biomarkers for the diagnosis of PTC. These species showed significant differences between the PTC and healthy control group. The identified biomarker or panels showed excellent diagnostic accuracies for distinguishing among PTC patients, and normal individuals. The predictive model showed good diagnostic performance and it could be gradually incorporated as a support method for the diagnosis of PTC.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.The study was performed according to the standards of the Institutional Ethical Committee and the Helsinki Declaration of 1975, as revised in 1983, and was approved by the Institutional Review Board of the Tsinghua University. The patients/participants provided their written informed consent to participate in this study.NJ, ZZ, and XC were involved in the study concept and design. NJ, ZZ, GZ, LP, CY, GY, and LZ provided the tools and patient specimens. XC and YW performed the experiments. XC, JH, and TX analyzed and interpreted the results and edited the manuscript. JH and TX organized the results and drafted the manuscript. NJ and XC approved the final version. All authors participated in the critical revision of the manuscript for important intellectual content.XC and JH were employed by the BaoFeng Key Laboratory of Genetics and Metabolism. TX was employed by the Zhongguancun Biological and Medical Big Data Center. 
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. The study was performed according to the standards of the Institutional Ethical Committee and the Helsinki Declaration of 1975, as revised in 1983, and was approved by the Institutional Review Board of Tsinghua University. The patients/participants provided their written informed consent to participate in this study. NJ, ZZ, and XC were involved in the study concept and design. NJ, ZZ, GZ, LP, CY, GY, and LZ provided the tools and patient specimens. XC and YW performed the experiments. XC, JH, and TX analyzed and interpreted the results and edited the manuscript. JH and TX organized the results and drafted the manuscript. NJ and XC approved the final version. All authors participated in the critical revision of the manuscript for important intellectual content. XC and JH were employed by the BaoFeng Key Laboratory of Genetics and Metabolism. TX was employed by the Zhongguancun Biological and Medical Big Data Center. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "In uncertainty, the existing t-test of a correlation coefficient is unable to investigate the significance of correlation. The study presents a modification of the existing t-test of a correlation coefficient using neutrosophic statistics. The test statistic is designed to investigate the significance of correlation when imprecise observations or uncertainties in the level of significance are present. The test is applied to data obtained from patients with diabetes. From the data analysis, the proposed t-test of a correlation coefficient is found to be more effective than existing tests. In the t-test for correlation, the null hypothesis that there is no association between two variables is tested against the alternative hypothesis that the two variables are associated. Values of the statistic of the t-test for correlation are calculated from given data and compared with tabulated values. The null hypothesis of no association between two variables is accepted if the calculated value is less than the tabulated value. Let the observations be paired data and let $r_N = r_L + r_U I_{r_N}$ be a neutrosophic correlation, where $r_L$ is the determinate part, $r_U I_{r_N}$ is the indeterminate part, and $I_{r_N}$ is the measure of indeterminacy associated with the correlation. The neutrosophic correlation $r_N$ is computed by following Aslam and Albassam. The probability of accepting $H_{N0}$ (no correlation between G1 and G2) is 0.95 (true), and the measure of indeterminacy is 0.0027. The t-test using fuzzy logic gives information only about the measures of falseness and truth. Based on the analysis, it is concluded that the proposed t-test for correlation is better than the existing tests. The data were collected from a group of 20 people at fasting and again two hours after drinking a glucose solution of about 237 milliliters containing 75 grams of glucose. The neutrosophic form of the correlation between G1 and G2 is $r_N = 0.99 - 0.9899 I_{r_N}$. It is interesting to note that the correlation between the two groups, G1 and G2, varies from 0.99 to 0.9899, with a measure of indeterminacy $I_{r_N} = 0.001$. From this correlation analysis, it can be seen that there is a strong positive correlation between the blood sugar level after an 8-h fast and the level 2 h after drinking the glucose solution: if the 8-h fasting blood sugar level is high, then the blood sugar level 2 h after drinking the glucose solution is also high, and vice versa. It is important to note that, after the 8-h fast, the minimum blood sugar level of those aged 45 is 159. This value indicates that these patients should take an energy drink about 2 h before sleeping, so that blood sugar can be utilized properly by the body. In addition, with an increase in 8-h fasting sugar, patients aged 45 to 60 should avoid taking carbohydrate or glucose items. The t-test of a correlation coefficient under neutrosophic statistics was presented in the article. The proposed t-test of a correlation coefficient is a generalization of the existing t-test of a correlation coefficient under classical statistics. From the real example, the proposed t-test of a correlation coefficient was found to be effective for investigating the significance of correlation in an indeterminate environment. The simulation study showed that measures of indeterminacy affect the decision on the significance of correlation.
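A minimal sketch of the test described above is given below, assuming (as in the neutrosophic framework of Aslam and co-workers) that the classical statistic t = r*sqrt(n-2)/sqrt(1-r^2) is evaluated at both ends of the correlation interval, giving a neutrosophic interval of t values that is compared with the tabulated critical value. The interval bounds and the sample size are illustrative stand-ins, not the exact values from the paper.

```python
import math
from scipy import stats

def t_of_r(r: float, n: int) -> float:
    """Classical t statistic for testing H0: rho = 0."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# Illustrative neutrosophic correlation interval (not the paper's exact data):
r_L, r_U = 0.9890, 0.9900   # lower and upper ends of the neutrosophic correlation
n = 20                      # 20 patients measured at fasting and 2 h after the glucose drink

t_interval = (t_of_r(r_L, n), t_of_r(r_U, n))
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)   # two-sided critical value, alpha = 0.05

print(f"neutrosophic t statistic lies in [{t_interval[0]:.2f}, {t_interval[1]:.2f}]")
print(f"reject H0 at both ends of the interval: {all(abs(t) > t_crit for t in t_interval)}")
```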
The proposed test can be applied to investigate correlations in the fields of economics, business, medicine, and industry. The proposed t-test of a correlation coefficient using a double sampling scheme can be considered as future research, and further statistical properties can also be studied. The proposed study can be extended to blood sugar measurement under different conditions and validation methods, and disturbances in blood glucose measurement can also be considered in future studies. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. Both authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. This work was funded by the Deanship of Scientific Research at King Abdulaziz University. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "To develop and assess the reliability of an instrument that enables auditing information on consumer food environment indicators, such as availability, price, promotional and advertising strategies, and quantity of brands available, using the food recommendations adopted by the Dietary Guidelines for the Brazilian Population as a theoretical basis. This is a methodological study in two phases: 1. development of the audit instrument and 2. assessment of its reliability and reproducibility. The Content Validity Index was estimated for each instrument item (>0.80 considered satisfactory). Inter-rater and test-retest reliability were assessed by percentage agreement and Kappa coefficients. Pearson's correlation coefficient and scatter plots were used to measure the degree of linear correlation between two quantitative variables. The Content Validity Index was 0.91. Inter-rater and test-retest reliability were mostly high (Kappa > 0.80) for food availability indicators. Among the items that measure advertising, Kappa values for inter-rater reliability ranged from 0.57 to 1.00 and for test-retest from 0.18 to 0.90. Prices and quantity of brands showed a positive linear correlation between measurements performed by researchers 1 and 2 and between visits 1 and 2. AUDITNOVA is reliable for measuring aspects such as availability, price, quantity of brands, and advertising of foods available in the consumer food environment. The food environment, in its multiple dimensions1, influences food consumption and the formation of eating habits3. Strong evidence relates it to the chronic noncommunicable disease (CNCD) epidemic, especially obesity, in developed5 and developing countries6. The food environment is also connected to increased body mass index3 and to two important dimensions that ensure food and nutritional security: healthy food access and availability7. Using a socioecological behavior approach, Glanz et al.1 suggest a conceptual model that divides the food environment into four main domains: community food environment, organizational food environment, information food environment, and consumer food environment.
The latter refers to what consumers find inside and around retail food establishments, for example, healthy food availability, variety, price, promotions, shelf position, nutritional information, and advertising, determining factors in the decision-making processes of food acquisition and consumption by the population9. For example, a study in the United States found that only 7% of the total food information in the weekly circulated leaflet from a supermarket chain was about fruit, 10% about vegetables, 10% about milk and dairy products, and 18% about cereals and grains; moreover, it showed that the information in these leaflets often influences consumers when making their food purchases10. Food environment aspects contributing to the health and quality of life of populations have been widely discussed in many countries and have motivated the creation of indicator monitoring networks, such as the International Network for Food and Obesity/CNCD Research, Monitoring and Action Support (INFORMAS), a global network of public-interest organizations and researchers from 30 countries that aims to monitor, benchmark and support public and private sector actions to increase healthy food environments and reduce obesity, CNCD and health-related inequalities11. INFORMAS proposes food environment monitoring indicators such as food nutritional composition, labeling, food advertising, supply chain, quantity and types of retail food establishments, prices, and investments in the food production chain11. The Dietary Guidelines for the Brazilian Population (DGBP) also recognizes the role of the food environment in promoting healthy eating and indicates six obstacles that hamper adherence to current nutritional recommendations: information, advertising, time, cooking skills, cost, and food availability. Four of them are directly related to the food environment, according to the model by Glanz et al. As the food environment contributes to obesity and CNCD, as shown in Brazilian technical documents13 and international networks11, valid and reliable measures are necessary to study it14. In the Brazilian scenario, research on this topic is recent16, and studies need to expand the production and collection of indicators, especially on the consumer food environment, which influences the behavior of food purchase and consumption1. None of the audit instruments already validated for the Brazilian context18 assess food availability and advertising according to the DGBP recommendations. Martins et al.15, validating a Brazilian food environment audit instrument based on an adaptation of the study by Glanz et al.14, approximate the food groups recommended by the DGBP, but explore only three indicators and disregard the latest version of the NOVA food classification, which divides foods into four groups: in natura/minimally processed, culinary ingredients, processed foods, and ultra-processed foods17. Brazilian instruments lack indicators that dialogue with national recommendations.
Regarding this, identifying the density of commercial establishments that primarily sell ultra-processed foods, food advertisements by food type in line with NOVA, the sale of ultra-processed foods in places such as checkout aisles, the sale of in natura products at the establishment entrance, and the promotional prices of healthy and unhealthy foods may contribute to the understanding of the relation between food consumption and environment in the face of the new Brazilian nutritional and epidemiological scenario12. Thus, this study aims to develop and evaluate the reliability of a food environment audit instrument that enables capturing information on consumer food environment indicators such as availability, price, promotional and advertising strategies, and quantity of brands available, using the dietary recommendations adopted by the DGBP as its theoretical basis. The low reliability of the methods and instruments that propose to analyze the food environment and the current lack of objective criteria in the generation of indicators are barriers to understanding the mechanism of association between food environment, obesity, and food consumption. The Ethics Committee of Faculdade de Saúde Pública of Universidade de São Paulo approved this study under CAAE number 69045917.5.0000.5421. All commercial establishments were aware of the informed consent form and voluntarily participated in the study. This is a methodological study in two phases: 1. development of the audit instrument and 2. assessment of its reliability and reproducibility. We designed the NOVA-based food environment audit instrument, named AUDITNOVA, in stages that involved systematic meetings of the research group, detailed analysis of Brazilian and international audit instruments, and detailed analysis of the NOVA food classification proposed by Monteiro et al.17. The processing of food identified by NOVA involves physical, biological and chemical processes applied after food is separated from nature and before it is eaten or prepared as dishes and meals. NOVA classifies all foods into four major groups: group 1: in natura or minimally processed foods. In natura foods are defined as edible parts of plants or animals after leaving nature, and minimally processed foods are in natura foods altered by processes that include removal of inedible or unwanted parts, drying, crushing, grinding, fractionating, filtering, roasting, boiling, non-alcoholic fermentation, pasteurization, refrigeration, freezing, and vacuum packaging; group 2: culinary ingredients. Substances derived from group 1 or from nature by processes that include pressing, refining, grinding, and drying to make durable products used for cooking and preparing food at home or in restaurants; group 3: processed foods. They are essentially made by adding salt, oil, sugar, or other substances from group 2 to group 1; group 4: ultra-processed foods. Formulations made mainly or entirely from food-derived substances and additives, with little or no food from group 1.
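For readers implementing the audit, the four NOVA groups above reduce to a lookup from each instrument food item to a group label, which later allows availability to be summed per group. The mapping below is a small, hypothetical subset written in Python; the instrument's full 66-item list is in the Supplementary Archive.

```python
# Illustrative subset of audit items mapped to NOVA groups (hypothetical labels,
# not the instrument's full 66-item list).
NOVA_GROUP = {
    "rice": 1, "beans": 1, "banana": 1, "fresh chicken": 1,      # in natura / minimally processed
    "salt": 2, "sugar": 2, "soybean oil": 2,                     # culinary ingredients
    "canned vegetables": 3, "cheese": 3,                         # processed foods
    "soft drink": 4, "packaged snack": 4, "instant noodles": 4,  # ultra-processed foods
}

def availability_by_group(available_items):
    """Count how many of the available items fall in each NOVA group (1-4)."""
    counts = {g: 0 for g in (1, 2, 3, 4)}
    for item in available_items:
        group = NOVA_GROUP.get(item)
        if group is not None:
            counts[group] += 1
    return counts

print(availability_by_group(["rice", "salt", "soft drink", "packaged snack"]))
# {1: 1, 2: 1, 3: 0, 4: 2}
```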
The first version of the instrument had two blocks, one on food availability, prices, variety, and quality, and another on advertising. We selected for AUDITNOVA the 66 foods with the highest frequency of acquisition by the Brazilian population according to data from the 2008-2009 Family Budget Survey (FBS)22. The first version of the instrument underwent content validation by a panel of judges with the participation of nine experts distributed among the following areas: food environment research, food advertising, and consumer protection. The DGBP was the theoretical framework adopted during the panel of judges, especially Chapter 2 ("Choosing Food") and Chapter 5 ("Understanding and Overcoming Obstacles"). The judges reviewed each AUDITNOVA item and assigned scores on a 4-point Likert scale for the clarity, relevance, pertinence, and representativeness attributes. In addition, the judges had a field for writing suggestions if the score was 3 or less. We estimated the content validity index for each item and for each instrument block and considered it satisfactory when it reached 0.80 agreement among the judges. We discarded items below this percentage from the final version of the instrument. The final version of AUDITNOVA, after analysis by the panel of judges, contains 14 blocks of questions divided into: block 1 — general information (business name and address, date, collection start time and end time); block 2 — establishment type and products sold; block 3 — establishment entrance; block 4 — fruit and vegetable section; block 5 — meat, chicken, and fish section; block 6 — dairy section; block 7 — grocery section; block 8 — canned food section; block 9 — bakery section; block 10 — frozen food section; block 11 — beverage section; block 12 — chocolate and snack section; and finally, blocks 13 and 14 — advertisements inside and outside the establishment, respectively (Supplementary Archive). We performed the AUDITNOVA reliability assessment study on a convenience sample in the metropolitan region of the city of São Paulo (SP), easily accessible by public transportation from Faculdade de Saúde Pública. We designed the neighborhood selection to maximize the ability to contrast supermarkets in neighborhoods with different income levels. To guarantee socioeconomic differences between them, we chose sites with different human development indexes (HDI): Pinheiros (HDI = 0.98), Higienópolis (HDI = 0.93), Belém (HDI = 0.91), and Sacomã (HDI = 0.84). In each neighborhood, we selected 20 commercial establishments, among supermarkets, hypermarkets, and markets, totaling a sample of 80 establishments. Only seven refused to participate in the survey. Supermarkets, hypermarkets and markets have various food products available to consumers24 and are important equipment for measuring the reliability and reproducibility of a food environment audit instrument, as they enable the researcher to apply the instrument integrally. To assess inter-rater reliability, two trained researchers independently visited the 73 commercial establishments in the region chosen for the study. Inter-rater reliability is used to assess the consistency of a measurement by different evaluators. To assess test-retest reliability, the same researchers revisited 41 sites 32 days after the initial observations.
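Before turning to the reliability analysis, the item-level content validity index from the judges' panel described above can be computed with a few lines of code. This sketch assumes the common convention that an item's CVI is the proportion of judges scoring it 3 or 4 on the 4-point Likert scale; the ratings shown are invented for illustration.

```python
def item_cvi(ratings, agreement_score=3):
    """Proportion of judges rating the item at or above `agreement_score`
    (i.e., 3 or 4 on a 4-point Likert scale)."""
    return sum(r >= agreement_score for r in ratings) / len(ratings)

# Hypothetical ratings from the nine judges for one item and one attribute (e.g., clarity)
ratings = [4, 4, 3, 4, 3, 4, 4, 2, 4]
cvi = item_cvi(ratings)
print(f"I-CVI = {cvi:.2f} -> {'retained' if cvi >= 0.80 else 'discarded'}")

# A block- or scale-level CVI can then be taken as the mean of the item CVIs.
```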
Test-retest reliability is used to assess the consistency of a measurement between two distinct moments. For categorical variables, percentage agreement and Kappa coefficients were used to assess inter-rater and test-retest reliability25. We quantified Kappa values using the following scale: 0.01 to 0.20, slight agreement; 0.21 to 0.40, fair agreement; 0.41 to 0.60, moderate agreement; 0.61 to 0.80, substantial agreement; 0.81 to 0.99, high agreement. We interpreted negative agreement values as a sign that evaluators agreed less on an item than expected due to chance — for example, a systematic disagreement among observers because diverse food items were available in the supermarkets. To assess the reliability of food availability according to NOVA, we grouped the 66 food items in the instrument into: group 1 — sum of all in natura/minimally processed foods; group 2 — sum of all culinary ingredients; group 3 — sum of all processed foods; and group 4 — sum of all ultra-processed foods. For quantitative variables such as price and quantity of brands available, we first performed an exploratory analysis using scatter plots illustrating the linear fit and the quadratic fit. Scatter plots allow identifying patterns in the data distribution and possible systematic and random errors depending on how the dots are distributed along the line26. Subsequently, we estimated Pearson's correlation coefficient (r), which measures the degree of linear correlation between two quantitative variables. It is a dimensionless index with values between -1.0 and 1.0, which reflects the intensity of a linear relationship between two data sets. We estimated Pearson's correlation coefficient between the pairs of variables collected by researcher 1 and researcher 2 and between the variables collected on the first and second visits. We estimated price and brand averages for each of the four food groups analyzed. We performed the statistical analyses in the statistical package Stata 14.
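The following is a minimal sketch of the reliability statistics described above, pairing Cohen's kappa and percentage agreement for a categorical availability item with Pearson's r for a price variable; the two raters' records and the two visits' prices are simulated stand-ins for the audit data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)

# Availability of one food item (1 = present, 0 = absent) recorded by the two raters
rater1 = rng.integers(0, 2, size=73)
rater2 = rater1.copy()
rater2[rng.choice(73, size=5, replace=False)] ^= 1   # introduce a few disagreements

kappa = cohen_kappa_score(rater1, rater2)
agreement = np.mean(rater1 == rater2)

# Price of the same item collected on the two visits (test-retest)
price_v1 = rng.uniform(2.0, 10.0, size=41)
price_v2 = price_v1 + rng.normal(0, 0.3, size=41)
r, p = pearsonr(price_v1, price_v2)

print(f"percent agreement = {agreement:.2f}, kappa = {kappa:.2f}, "
      f"Pearson r = {r:.2f} (p = {p:.3g})")
```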
At the content validation stage conducted by the nine judges, the experts provided the information necessary to review the audit instrument and improve its content. The content validity index (CVI), which represents the average score for the clarity, relevance, pertinence, and representativeness attributes, was 0.91 for the entire instrument. Although the CVI was greater than 0.80 for most items, the suggestions provided by the experts were incorporated into the instrument because they were considered fully pertinent according to the researchers' assessment. During the audit process, both trained researchers completed their first visits to the 73 establishments over an average of 41 days (standard deviation = 11.8 days). The average application time of AUDITNOVA was 90 minutes (standard deviation = 7.0 minutes). The researchers' second visit occurred between 32 and 47 days after the first collection, with an average of 39.5 days (standard deviation = 4.8 days). The availability of in natura/minimally processed foods showed moderate Kappa values (0.41–0.60) for both inter-rater and test-retest reliability. The other three food groups had Kappa values above 0.70 for inter-rater reliability and ranging from 0.57 to 0.64 for test-retest reliability, which indicates moderate agreement between visits. The food environment audit instrument developed in this study, AUDITNOVA, had high inter-rater and test-retest reliability, which ensures that it is a reliable instrument for studies aimed at working with food environment indicators based on the NOVA food classification proposed by Monteiro et al.17. We carefully selected the indicator foods of the four groups proposed in NOVA because Brazilians frequently purchase them, according to national surveys, and the DGBP recommends them. These foods included in AUDITNOVA may be used to assess retail establishments regarding their availability of healthy and unhealthy foods according to Brazilian guidelines12. In addition, information about price, quantity of brands and advertising will enable assessing the consumer food environment in detail, observing the barriers and conveniences that consumers face when choosing their food1. Most of the indicators in this instrument are appropriate for the planning of policy programs aimed at modifying the environment, assessing intervention needs and population needs regarding food availability, and serving as evaluation, surveillance and advocacy indicators for other actions based on the consumer food environment20. High inter-rater reliability shows that the definitions and instructions in the measurement manual and the training methods were sufficient to prepare observers to collect high-quality data. The high test-retest reliability of most of the indicators suggests that only minor changes in food availability, price, quantity of brands, and advertising strategies occur over the data collection period. Thus, the measures collected with AUDITNOVA generated a stable estimate of the consumer food environment. However, the availability and price of in natura/minimally processed foods often change over the seasons; therefore, whenever the instrument is reapplied, repeated observations should be considered to assess or control seasonal effects14. In Brazil, studies on the food environment are recent; Martins et al.15 and Duran et al.16 developed and validated pioneering instruments for auditing the community food environment and the consumer food environment. The audit instrument validated by Martins et al.15 is an adaptation of the one developed by Glanz et al.14 to measure the consumer food environment, specifically retail food establishments, and to assess aspects such as food availability, price, and quality with a food list guided by the food pyramid and the degree of processing; however, it disregards the full version of the NOVA classification. Duran et al.16 proposed an audit instrument designed to audit retail food establishments and restaurants and to measure aspects such as availability, variety, quality, price, and advertising of healthy food indicators, such as fruits and vegetables, and of unhealthy food indicators, such as ultra-processed foods. The main differences of the AUDITNOVA developed and validated in this study compared with the other two Brazilian instruments were the full use of the NOVA classification in the food item selection, the expansion of advertising and promotional strategies by food group, the availability of 66 food items (including culinary ingredients and processed foods), the inclusion of strategic aspects of the consumer food environment, and the collection of information on normal and promotional prices, determining factors in food acquisition by the population20.
The main indicators proposed in this instrument showed substantial to high Kappa values. Kappa values were moderate for the indicator of availability of in natura/minimally processed foods, especially in the test-retest, but were substantial when the food items were evaluated in isolation. However, the seasonality and the lower variety of in natura/minimally processed foods in supermarkets and markets compared with street markets, big retail markets and farmer's markets may have influenced the reliability of this indicator27. AUDITNOVA enables measuring in detail the different sources of food information available in the consumer food environment, dividing the types of advertising according to NOVA's four food groups. The DGBP recognizes that the publicity and information available in the consumer food environment can become an obstacle for the population to reach the food recommendations12, because large food industries, especially the ultra-processed food ones, use advertisements to sell more products, not to educate consumers28. Concerning this, the development of audit instruments that provide an overview of these advertising practices in the consumer food environment and corroborate the DGBP will be essential for the advancement of public policies and regulation. Food environment indicators that enable producing more evidence about their influence are part of the strategy to face obesity and CNCD5. The World Health Organization also recognizes that the massive advertising campaigns adopted by the food industries, especially those aimed at children and with different appeals, affect these individuals' health. Thus, countries should review the regulatory processes regarding the propagation of these advertisements on packaging and in the mainstream media. The advertising variables measured by AUDITNOVA showed higher inter-rater reliability than test-retest reliability, including many values that could not be computed due to the low availability of advertisements in the establishments. Duran et al.16 also observed this fact in their study, and it may indicate the researcher's difficulty in identifying the different advertising strategies available in the retail establishment and in knowing how to distinguish, in particular, the types of appeals that these advertisements carry. Advertisements with Kappa values lower than 0.40 in the test-retest were: tabloids with in natura/minimally processed food advertisements, promotional islands with ultra-processed foods, appeals to the convenience of ultra-processed foods, ultra-processed food launches, and advertisements of culinary ingredients and processed foods in general. One hypothesis to improve the reliability of this indicator would be to conduct more than one field researcher training throughout the audit process, to reaffirm the different types of appeal and approaches of food advertising in retail, and/or to expand the sample of audited establishments to increase the prevalence of these types of advertising. However, researchers can still use the instrument in the field. As the instrument is built in independent blocks, they will be free to select the indicators that best fit their research goals. The variables price and quantity of brands showed positive correlations between the measurements made by researchers 1 and 2 and between visits 1 and 2. Both price and quantity of brands influence consumers when buying food29. Measuring these aspects reliably, even over a certain time range, is essential for the use of the instrument in the monitoring and mapping of these indicators in different commercial establishments and in different social realities.
Although this study did not assess the food environment throughout the year, at certain times of the year, price, availability, and especially advertising indicators may vary beyond what is expected due to advertising campaigns and new products available on those dates. Therefore, the researcher must assess the necessity of applying the instrument in these periods. Some of the strengths of this study are the content validation process conducted by a panel of judges specialized in food environment and food advertising, and the use of the NOVA food classification as a theoretical and analytical framework. In addition, the use of Brazilian databases, such as that of the Pesquisa de Orçamentos Familiares (POF – Brazilian Family Budget Survey), provided subsidies for selecting foods that are frequently purchased by the Brazilian population. Another strength of the study is the presence of a greater variety of foods in relation to other Brazilian instruments, enabling the grouping according to NOVA, as well as the inclusion of more complete information on advertising, prices and quantity of brands, which may provide a more detailed overview of the food environment for researchers who will use the instrument. One of the limitations of this study is the convenience sample from only one Brazilian city and the low variety of audited business types. This sample does not represent the municipality or the country; however, the neighborhoods have significant socioeconomic variations that may impact the food availability audited. Another limitation is the lack of evaluation of seasonal differences during the year. The instrument evaluated only retail establishments used by the population for food purchase and not establishments for immediate consumption, such as bars and restaurants. As many individuals eat out in Brazil21, developing and validating appropriate instruments to audit these places according to the new national food recommendations is necessary. This study did not use a quality indicator of retail food establishments based on possible scores generated by the instrument, a fact recognized as important, which will be considered in future studies. The instrument developed, AUDITNOVA, proved to be reliable for audits of the food environment, especially the consumer food environment, as it enables an overview of the types of retail equipment in the territory and a broad analysis of the main determinants that contribute to supporting the population in choosing healthier food. AUDITNOVA is reliable for measuring aspects such as availability, price, quantity of brands, and food advertising. Associations between food environment, food consumption, and obesity are becoming more frequent; however, reliable data collection instruments are needed to reach these results. The development and validation of a food environment audit instrument based on the recommendations presented in the DGBP dialogues with other Brazilian policies and supports the development of evidence that allows us to rethink the role of the food environment in availability, access, and, consequently, the food and nutritional security of the Brazilian population.
We published the data collection training manual developed in this investigation and the AUDITNOVA instrument, which are available for download at: http://colecoes.sibi.usp.br/fsp/items/show/3364#?c=0&m=0&s=0&cv=0. Research was supported by the International Development Research Center (IDRC) and the Brazilian Institute of Consumer Protection (Instituto Brasileiro de Defesa do Consumidor – IDEC) for data collection, and by the São Paulo Research Foundation (Process 2016/12766-6)."}
Among the strengths of the study are the content validation process carried out by a panel of expert judges in the food environment and food advertising, and the use of the NOVA food classification as the theoretical and analytical framework. In addition, the use of national databases such as the POF (Household Budget Survey) provided support for selecting foods that are frequently purchased by the Brazilian population.
Another strength of this work is the greater variety of foods included in comparison with the national instruments, allowing items to be grouped according to NOVA, in addition to the inclusion of more complete information on advertising, prices and number of brands, which may provide a more detailed picture of the food environment to researchers who use the instrument.
One of the limitations of this study is the convenience sample from a single Brazilian city and the low variety of types of food retailers audited. This sample is not representative of the municipality or of the country; however, the neighborhoods show important socioeconomic variations, which may affect the audited food availability. Another limitation is that we did not assess seasonal differences throughout the year. The instrument assessed only the retailers used by the population to purchase food, and not those used for immediate consumption, such as bars and restaurants. Given the large number of individuals who eat away from home in Brazil21, it is necessary to develop and validate suitable instruments to audit these places according to the new national dietary recommendations. This article did not present an indicator of the quality of food retail establishments based on possible scores generated by the instrument, a point recognized as important that will be considered in future studies.
The instrument developed, AUDITNOVA, proved reliable for carrying out audits of the food environment, especially the consumer food environment, since it allows a picture of the types of retail outlets in a territory to be drawn and the main determinants that help support the population in making healthier food choices to be broadly analyzed. The AUDITNOVA is reliable for measuring aspects such as availability, price, number of brands and food advertising. Associations between the food environment, food consumption and obesity are increasingly reported; however, reaching such results requires reliable data collection instruments. The development and validation of a food environment audit instrument based on the recommendations of the Guia Alimentar para a População Brasileira dialogues with other Brazilian policies and supports the production of evidence for rethinking the role of the food environment in availability, access and, consequently, in the food and nutrition security of the Brazilian population. The data collection training manual developed in this research and the AUDITNOVA instrument have been published and are available for download at: http://colecoes.sibi.usp.br/fsp/items/show/3364#?c=0&m=0&s=0&cv=0."} {"text": "To develop and validate an instrument for measuring the home cooking skills of health professionals involved with guidelines for promoting adequate and healthy food in primary health care.
This is a methodological study with a psychometric approach, carried out in the city of São Paulo between January and November 2020, to develop and validate a self-applied online instrument. The data of the 472 participants were presented by descriptive statistics. Content validation was performed by expert judgment using the two-round Delphi technique and empirical statistics for consensus evidence. Exploratory factor analysis was used for construct validation and reliability analysis, and the model fit indices and composite reliability were analyzed. The instrument presented satisfactory content validity for the CVRc indices and 𝜅 in the two rounds of the Delphi technique. After the factor analysis, the final model of the Primary Health Care Home Cooking Skills Scale presented 29 items with adequate factorial loads (> 0.3). Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure performed in the exploratory factor analysis suggested interpretability of the correlation matrix; the parallel analysis indicated four domains and an explained variance of 64.1%. The composite reliability of the factors was adequate (> 0.70) and the H-index suggested replicable factors in future studies. All fit indices proved to be adequate. The Primary Health Care Home Cooking Skills Scale presented evidence of validity and reliability. It is short and easy to apply and will make it possible to reliably ascertain the need for qualification of the workforce, favoring the planning of actions and public policies of promotion of adequate and healthy food in primary health care.
Home cooking skills (HCS) comprise actions such as menu planning, selection, mixing, cutting and cooking of food, the ability to perform tasks whilst cooking and confidence for culinary practices6. They are related to environmental and economic implications2 and are valued by the Guia Alimentar para a População Brasileira (Food Guidelines for the Brazilian Population)3 as an expression of cultural and social aspects. The document recognizes cooking as a strategic practice to promote adequate and healthy food (AHF), aiming to reduce the choice of ultra-processed foods, which are associated with overweight, obesity, cancer and other diseases5. Therefore, recognition of cooking should be paramount in food and nutrition education actions2. Such actions allow health workers to convey technical knowledge to the daily lives of the subjects, therefore it is important that those workers develop their home cooking skills7.
In Brazil, the guidelines of AHF are located substantially within the scope of primary health care (PHC), the first level of care and link of subjects with the Unified Health System. PHC professionals play a relevant role in promoting food and nutrition education actions involving culinary practices, such as dissemination of recipes, workshops, guided visits to open-air markets, home visits and sensory exploration of food9. Teixeira et al.10 identified and critically analyzed the psychometric quality of 12 Brazilian and international instruments for measuring cooking skills in adults. The psychometric attributes of those instruments were considered insufficient, with unsatisfactory results based on statistical criteria or methodological inadequacies. Two of the studies were Brazilian: Jomori11 performed a cross-cultural adaptation of an instrument based on the program Cooking with a Chef, from Clemson University.
The results of a part of the scale of this instrument were unsatisfactory for reliability. Martins et al.12 developed a cooking confidence scale for parents of schoolchildren. The authors evaluated the internal consistency, stability and content validity of the instrument, but did not report agreement rates between experts or procedures for construct validity.
An accurate diagnosis of these skills is essential to promote workforce qualification and plan public health actions and policies on the subject, and it relies on the use of valid and reliable instruments, based on robust psychometric criteria. Thus, there is a strong need to develop a new instrument for assessing home cooking skills aimed at Brazilian health professionals involved with guidelines for promoting adequate and healthy food in PHC, based on psychometric criteria that follow the methodological rigor recommended in the scientific literature to determine its validity and reliability.
This is a methodological study with a psychometric approach13 conducted between January and November 2020.
This study was approved by the research ethics committee of the University of São Paulo and by the co-participating institution, the São Paulo Municipal Health Department (SMS-SP) (protocol no. 3.585.369). The participants were informed of the objectives of the study and confidentiality of the data through an informed consent form.
In the prototype stage, a working group was created with nine members of both sexes and from different Brazilian states. They were nutrition and gastronomy majors from the Faculdade de Saúde Pública of Universidade de São Paulo (FSP-USP) involved with culinary approach disciplines and PhD researchers with experience in the elaboration and validation of research instruments, brought together to systematically develop the instrument.
To define the theoretical domains and items of the first version of the instrument proposed in this study, the following were considered: (a) the professional and culinary experience of the group; (b) exploration of the theoretical framework on HCS14; (c) a systematic review to identify and analyze the psychometric properties of instruments that assessed the home cooking skills of adults10. The domains, items and response formats of the instruments identified in this review were discussed by the research group for the construction of the prototype.
The construction of the initial set of items and response formats of the prototype version of the instrument, entitled Primary Health Care Home Cooking Skills Scale (PHCHCSS), followed the quality recommendations proposed by DeVellis15. A number of participants between three and 10 was considered sufficient16,17.
The next phase, the psychometric phase, consisted of three stages. The first stage featured experts from various professional levels, including university professors, researchers and nutrition and gastronomy professionals from Brazil. The two-round Delphi method18 was used. Experts completed online questionnaires, with semi-structured questions of sociodemographic characterization and evaluation of the items and theoretical domains of the instrument built in the prototype phase.
They proposed improvements, inclusion and exclusion of items, and adequacy of the options of the instrument scale, and responded to a Likert agreement scale (1 = strongly disagree to 4 = strongly agree) to evaluate each item for:
Clarity: Was the item written in such a way that the concept is understandable and adequately expresses what is to be measured?
Pertinence: Does the item reflect the concepts involved in the domain and is it adequate to achieve the proposed objectives?
Relevance: Is the item important for the construction of the domains that are the focus of the research scale?
The first round of the panel took place between March 26 and April 29, 2020 and featured eight experts. The research group assessed the comments provided, excluded irrelevant and non-pertinent items, made adjustments to those considered unclear and included suggested items for a better coverage of the phenomenon. The instrument was re-submitted to the experts for evaluation after the modifications. The second round, which started on May 28, 2020, lasted 30 days and featured seven experts.
The characteristics of the study participants were presented by descriptive statistics. The Critical Content Validity Ratio (CVRc) was used to statistically analyze the validity of each attribute of the items and domains19, and the Kappa coefficient (k) was calculated to evaluate the agreement between experts on each item20 in the two rounds of the panel. Items with CVRc > 0.0519 and k ≥ 0.6020 were retained. The content validity index (CVI) was also used to analyze the validity of the instrument as a whole21. A result > 0.8 was considered acceptable22.
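A minimal sketch of the content-validity statistics named above (Lawshe's content validity ratio and the item-level content validity index), assuming hypothetical 4-point ratings from the eight first-round experts. This is an illustration rather than the authors' analysis, and the critical value (CVRc) against which the CVR is compared depends on the panel size.

```python
# Minimal sketch of CVR and item-level CVI for one item attribute (e.g. clarity).
# ratings: one value per expert, 1 = strongly disagree ... 4 = strongly agree.

def cvr(ratings, essential_cutoff=3):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    n = len(ratings)
    n_e = sum(r >= essential_cutoff for r in ratings)
    return (n_e - n / 2) / (n / 2)

def i_cvi(ratings, agree_cutoff=3):
    """Item-level content validity index: share of experts rating 3 or 4."""
    return sum(r >= agree_cutoff for r in ratings) / len(ratings)

clarity_round1 = [4, 4, 3, 4, 2, 4, 3, 4]   # hypothetical ratings, 8 experts
print(cvr(clarity_round1), i_cvi(clarity_round1))   # 0.75 0.875
# Per the text, items were retained when the CVR exceeded the critical value
# (CVRc) and kappa-based agreement was >= 0.60.
```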
The second stage was the pre-test phase, in which professionals from a health center in the city of São Paulo, with similar characteristics to the research population of the project, tested the usability of the PHCHCSS. The pre-test participants were not part of the construct validity sample or the reliability analysis of the instrument. They commented on possible difficulties in filling out the instrument, on the clarity and adequacy of the questions to the objective of the research, and recorded response times23.
In the third stage, the construct validity and reliability of the PHCHCSS were tested. The scale was developed for professionals involved in the promotion of adequate and healthy nutrition in basic health units (BHU) of São Paulo's Municipal Health Department (SMS-SP). There are 464 BHU in the city of São Paulo. The sample included professionals who expressed their consent to participate. Recruitment was done by contacting regional health coordinators, technical health supervisors and BHU managers to collect the email addresses of target professionals. A website was also developed and publicized on social media to present and clarify the objective of the research and to recruit participants. The sample size was based on the recommendations of Costello and Osborne24 of 10 subjects per instrument item. Data collection began on August 2, 2020, lasting 30 days. A total of 472 professionals answered a sociodemographic questionnaire and the PHCHCSS online. Their characteristics were presented by descriptive statistics.
Exploratory factor analysis (EFA) was used to evaluate the factorial structure of the PHCHCSS. Polychoric correlation and the Robust Diagonally Weighted Least Squares (RDWLS) extraction method were used. The decision on the number of retained factors was made by parallel analysis with random permutation of the observed data25, and the rotation used was the Robust Promin26. Values of 60% of the total variance explained, items with commonality ≥ 0.4 and factorial loads ≥ 0.30 were considered satisfactory. Items with cross-factorial loads were excluded27. KMO values ≥ 0.70 and significant values for Bartlett's test represented adequacy measures of the sample28.
The goodness of fit of the model was evaluated using the Root Mean Square Error of Approximation (RMSEA) index, the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI)30. RMSEA values should be < 0.08, and CFI and TLI values should be > 0.90 or, preferably, 0.9531.
The stability of the factors was assessed by the H-index, which assesses how well a set of items represents a factor. H values > 0.80 suggest a well-defined and probably stable latent variable in different studies32.
To test the reliability, the composite reliability (CR) was calculated, with acceptable values > 0.7029.
All statistical analyses were performed using the statistical software Factor, version 10.10.03.
The instruments identified in the systematic review had dimensions of planning, selection and purchase of food and confidence in food preparation, and may or may not have included pre-prepared and convenience products. For the PHCHCSS, the theoretical dimensions of HCS considered for the construction of the initial items were food shopping and meal preparation planning, culinary creativity, the use of sensory perception and confidence in the preparation of meals based on fresh, minimally processed foods and culinary ingredients, as recommended by the Guia Alimentar para a População Brasileira. Multitasking skills were also identified as a theoretical domain. They are defined in the scientific literature as the ability to perform tasks simultaneously in the home environment, representing an advantage when preparing meals.
The prototype version of the instrument was submitted to content evaluation by experts. The main results of the development and validation of the PHCHCSS are shown in the accompanying figure.
In the psychometric phase, the first stage was the validation of the content. The study sample size was adequate for this stage. The response rate for the first round of the Delphi technique was 72.7% (8/11) and 87.5% (7/8) for the second round. Most of the experts were female, with a mean age of 42.3 years (SD = 9.0). Of the total, 37.5% (n = 3) were specialists (latu sensu), 12.5% (n = 1) masters, 25% (n = 2) PhDs and 12.5% (n = 1) full professors. The panel also had a lay participant with training in gastronomy and a full-time job cooking. The experts were professors at public (25%) and private (12.5%) universities, researchers (12.5%), nutritionists in food services (37.5%) and culinary professionals (12.5%). The length of professional experience ranged from 10 to 33 years. The average time devoted to culinary practices among experts was 12.2 hours per week (SD = 9.6 hours per week).
The second stage was the pre-test, which featured five professionals from a health center in the city of São Paulo. The covid-19 pandemic made recruitment difficult, given the intensified demand for care at the BHU. The sample was composed of women who worked as nutritionists, psychologists and nurses. This sample was not part of the validity and reliability analysis of the instrument. Participants reported that the instrument was easy to access by computer and that the questions and answer options were easy to understand, with a suggestion to enlarge the font size of the questions, which was adopted by the research group. The average response time was 15 minutes.
The third stage consisted in performing the construct validity and reliability analysis. The study sample size was adequate for this stage.
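Before the factor-analysis results reported next, a brief illustration of the composite reliability criterion described in the Methods (acceptable CR > 0.70). The standardized loadings below are hypothetical, and the computation is a generic sketch rather than output from the Factor software used by the authors.

```python
# Composite reliability (CR) of one factor from standardized loadings:
# CR = (sum(l))^2 / [ (sum(l))^2 + sum(1 - l^2) ].  Loadings are hypothetical.

def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

factor_loadings = [0.72, 0.68, 0.81, 0.59, 0.66, 0.75, 0.70]  # one hypothetical factor
print(round(composite_reliability(factor_loadings), 2))       # ~0.87, above the 0.70 threshold
```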
The EFA was initially performed with the version of the instrument validated by the experts, with 43 items. Bartlett's test and the KMO measure (0.91: very good) suggested interpretability of the correlation matrix. The parallel analysis suggested four representative factors for the data, with an explained variance of 54.6%. Some items had insignificant factorial loads and commonalities25. After these items were excluded, the instrument was analyzed again. Subsequently, items with cross-factorial loads in the interpretation of factors were removed and the instrument underwent a new analysis. The reduced model of the instrument retained 29 of the 43 items. Bartlett's test and the KMO measure (0.91: very good) suggested interpretability of the correlation matrix, with four factors identified in the parallel analysis and an explained variance of 64.1%.
The final PHCHCSS model resulted in a Likert-type scale, with response options on the frequency of actions centered on HCS attributes, with 29 items28. The scale score is determined by the sum of the points corresponding to the options marked for each item. From this sum, four score ranges were proposed, with the following statuses: low HCS, moderately low HCS, moderately high HCS and high HCS. The interpretation of the final score is presented graphically as a ruler with a color gradation, with instructional messages about the score achieved and encouragement to develop these skills.
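A small sketch of how the raw-score interpretation described above could work in practice. The four band labels follow the text, but the numeric cut-off points and the 0-4 item coding are hypothetical placeholders, since the exact values are not given in this excerpt.

```python
# Hypothetical scoring rule for a 29-item frequency scale (not the published cut-offs).
HYPOTHETICAL_CUTOFFS = [(0, 29, "low HCS"),
                        (30, 58, "moderately low HCS"),
                        (59, 87, "moderately high HCS"),
                        (88, 116, "high HCS")]

def classify(item_responses):
    raw = sum(item_responses)              # each of the 29 items coded 0-4, for example
    for lo, hi, label in HYPOTHETICAL_CUTOFFS:
        if lo <= raw <= hi:
            return raw, label
    raise ValueError("score outside expected range")

print(classify([3] * 29))   # -> (87, 'moderately high HCS')
```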
The items retained showed adequate loads in their respective factors. No new patterns of cross loadings were found in the reduced model. The composite reliability was adequate (> 0.70) for all factors. The H-index measure suggested replicable factors in future studies (H > 0.80). It should be noted that the factorial structure presented adequate fit indices (χ2 = 296; df = 334,246; p = 0.06; RMSEA = 0.037; CFI = 0.99; TLI = 0.99).
Quantifying the degree of agreement among experts resulted in items with strong content validity. The opinion of experts was considered in other studies that reported instruments for measuring cooking skills34. However, these studies did not present empirical methods derived from the judgment of experts as evidence of content validity. The fact that experts give opinions on construct items does not in itself provide relevant information for the validation process28. Thus, this study stands out regarding the methodological rigor employed for the content validity of the PHCHCSS.
Although uncommon in scale development studies, the content validity stage had a lay member in the expert panel.
The pre-test participants reported adequate usability of the instrument. Five health professionals participated in this stage. Rattray et al.35 assert that pilot studies can be conducted with small samples as long as the performance of the analyses is not compromised in any way. Considering that the sample was used to qualitatively evaluate the understanding and deployment of the instrument, the number of pre-test professionals did not create limitations to the study.
Regarding the stage of construct validity and reliability of the PHCHCSS, the parallel analysis suggested a multidimensional instrument with four factors. The multidimensionality of the scale is aligned with the complex nature of the acts of eating and cooking, recognized by the Guia Alimentar para a População Brasileira3.
The creative planning dimension considers creativity when planning and preparing home-cooked meals based on in natura, minimally processed foods and procedures done in advance to facilitate the act of cooking. A similar finding was observed in the study by Jomori11, which considers the creative ability to plan menus and organize meal preparation as skills for individual-centered culinary practice. This dimension is related to the main recommendation of the Guia Alimentar para a População Brasileira3: "You should always prefer in natura or minimally processed foods and culinary preparations to ultra-processed foods". It is also related to the chapter on understanding and overcoming obstacles to putting this and other recommendations into practice. Cooking procedures done in advance shorten the time spent preparing meals. Given the pace of modern life, this obstacle is more easily overcome when multitasking skills are also put into practice.
The dimension of multitasking skills comprises the ability to perform household tasks simultaneously with culinary practices. If an individual is unable to cook while doing laundry and taking care of children, they may be less likely to prepare a home-cooked meal36. Gabe37 discusses the influence of the home environment on the quality of the meals consumed, highlighting that there is a gender discrepancy regarding responsibility for household chores, which is reinforced by Mills et al.38 These findings provide an opportunity to use the PHCHCSS in studies aimed at analyzing differences in multitasking skills between genders, in order to encourage the fair sharing of responsibilities in the home, which includes preparing meals.
The dimension of confidence regarding cooking skills corresponds to self-sufficiency in employing cooking techniques and utensils. According to Martins12, the confidence judgment considers individual performance, which depends on the practice and task performed, and is considered an excellent predictor of behavior to determine how individuals employ their skills. The PHCHCSS reduces misinterpretations about HCS by disregarding questions about confidence to prepare meals based on ready-made and convenience products, which could overestimate the individual's skills, a recurring problem in international instruments1. The cooking confidence scale by Lavelle et al.34, for example, includes questions about confidence to "prepare food in a microwave oven, including heating ready-made dishes".
Finally, the dimension of food selection, combination and preparation refers to the sensory and quantification aspects of food, aiming at the adequacy of purchasing and cooking procedures. Similar components, which concern the ability to shop for food, use it in preparations and judge its quality, are found in the study on food literacy by Vidgen and Gallegos39. According to the authors, low food literacy is associated with an increase in diet-related chronic diseases.
The results of the exploratory factor analysis showed adequate factorial loads and commonalities in all items retained in the instrument28, and they suggest a well-defined latent variable, with dimensions that are likely to be stable in future studies31. The fit indices validated the model extracted from the analysis and confirm the measured theory, showing a well-defined construct30. The reliability of the instrument was also adequate, with satisfactory results for composite reliability. This measure represents a good indicator for evaluating the quality of the structural model of the instrument and is presented as a more robust precision indicator compared with the alpha coefficient32.
Developing evaluation instruments is a complex task, recommended only in the absence of another instrument suited to the reality being investigated40, which is the case in this study.
As an advantage, the PHCHCSS is short, easy to apply and standardized, allowing its use in comparative studies. This instrument summarizes home cooking skills according to score ranges that are easy to interpret, delimited by traffic-light colors, based on a diagram suggested by Gabe37 to interpret the score of her dietary quality assessment instrument, adopted by the Ministry of Health. It also offers messages on the status of the individual's home cooking skills, with instructions for encouraging and appreciating these skills. It should be noted, however, that the score of the scale derives from its raw score. Although commonly found in studies of instrument development, the use of this score assumes a subjective definition of classification cut-off points, conferring the same weight on items with different factorial loads. Item response theory is an analytical approach that can overcome this limitation by considering the characteristics of the questionnaire items, namely their ability to discriminate the variable of interest and their location on the respective continuum, and by using a probabilistic model to estimate and describe the scores41. Thus, item response theory could be used in future studies aiming to improve the score of this instrument, which was validated here by classical methods.
Automation minimized possible errors by the interviewer. The online application of the instrument proved advantageous due to its low cost and ease of access. However, its application on paper has not been studied to verify whether similar results would be obtained, which is a limitation of this study. A printed version would allow access for health professionals working in places with limited internet access or who are not digitally included.
Another limitation is that a convergent validity study was not conducted. This kind of validity refers to the associations of the PHCHCSS score with external measures, which could confirm whether the scale measures HCS related to the food choices recommended by the Guia Alimentar para a População Brasileira, and it could be performed by comparing the scale score with a 24-hour dietary recall or with the score of a food literacy scale. Conducting this validity study would be opportune in future analyses.
Finally, the sample used for the exploratory factor analysis was composed of professionals working in primary health care in the city of São Paulo. Despite being the main destination for regional migration in Brazil42, the sample from this city may not represent the cultural diversity of food within the national territory. Thus, a cross-cultural adaptation of the instrument for the Brazilian macroregions is recommended.
This study is innovative in the context of the recognition of cooking as an emancipatory practice and health promotion. It is understood that mastering home cooking skills allows primary health care professionals to bring their scientific knowledge closer to people's lives and to social practices and knowledge, thereby strengthening the ability of individuals or communities to identify solutions for their daily lives. This instrument will make it possible to reliably ascertain the need for qualification of the workforce for actions to promote healthy and adequate food based on home cooking skills. It also provides opportunities to identify needs for reviewing the pedagogical proposals of health courses, in order to train professionals to work for food sovereignty and the human right to adequate food rather than for medicalizing practices and guidelines.
"} {"text": "For this purpose, a complete set of methods was used to comprehensively analyze mitophagy levels, mitochondrial reactive oxygen species (ROS), mitochondrial membrane potential (MMP) and the mitochondrial mass (MM) of subsets of lymphocytes. It is expected that this will provide a complete set of standards for drawing the mitochondrial metabolic map of lymphocyte subsets at different stages of differentiation and activation. Mitochondria are mainly involved in ATP production to meet the energy demands of cells. Researchers are increasingly recognizing the important role of mitochondria in the differentiation and activation of hematopoietic cells, but research on how mitochondrial metabolism influences different lymphocyte subsets at different stages of differentiation and activation has yet to be carried out. In this work, the mitochondrial functions of lymphocytes at different differentiation and activation stages were compared, covering CD8+ T, CD4+ T, B and natural killer (NK) cells and their subsets. Finally, from the CD8+ T cell subsets, CD8+ Tcm had relatively high levels of MM and MMP but relatively low ones for mitophagy, with effector T cells displaying the opposite characteristics. Meanwhile, the autophagy-related genes of lymphoid hematopoietic cells, including hematopoietic stem cells, hematopoietic progenitor cells and lymphocyte subsets, were analyzed, which preliminarily showed that these cells were heterogeneous in the selection of the mitophagy-related Pink1/Park2, BNIP3/NIX and FUNDC1 pathways. The results showed that, compared with CD4+ T, CD8+ T and NK cells, B cells were more similar to long-term hematopoietic stem cells (LT-HSC) and short-term hematopoietic stem cells (ST-HSC) in terms of their participation in the Pink1/Park2 pathway, as well as the degree to which the characteristics of the autophagy pathway were inherited from HSC. Compared with CLP and B cells, HSC are less involved in the BNIP3/NIX pathway.
Among the B cell subsets, pro-B cells inherited the least characteristics of HSC in participating in the Pink1/Park2 pathway compared with pre-B, immature B and mature B cells. Among the CD4+ T cell subsets, nTreg cells inherited the least characteristics of HSC in participating in the Pink1/Park2 pathway compared with naive CD4+ T and memory CD4+ T cells. Among the CD8+ T cell subsets, compared with CLP and effector CD8+ T cells, CD8+ Tcm inherited the least characteristics of HSC in participating in the Pink1/Park2 pathway. Meanwhile, CLP, naive CD4+ T and effector CD8+ T cells were more involved in the BNIP3/NIX pathway than other lymphoid hematopoietic cells.
Of all lymphocytes, B cells had a relatively high mitochondrial metabolic activity, which was evident from the higher levels of mitophagy, ROS, MMP and MM, and this reflected the highly heterogeneous nature of the mitochondrial metabolism in lymphocytes. Among the B cell subsets, pro-B cells had relatively higher levels of MM and MMP, while the mitochondrial metabolism level of mature B cells was relatively low. Similarly, among the subsets of CD4+ T cells, a relatively higher level of mitochondrial metabolism was noted for naive CD4+ T cells. This study is expected to provide a complete set of methods and basic reference values for future studies on the mitochondrial functions of lymphocyte subsets at different stages of differentiation and activation in the physiological state, and it also provides a standard and reference for the study of infection and immunity based on mitochondrial metabolism. In this case, the subsets of CD8+ T cells include effector CD8+ T, effector memory CD8+ T (CD8+ Tem) and central memory CD8+ T (CD8+ Tcm) cells, while the subsets of CD4+ T cells include nTreg, memory CD4+ T and naive CD4+ T cells.
Mice were maintained in the animal facility of the Institute of Hematology. All experiments involving animals were carried out according to the animal care guidelines with approval of the Institutional Animal Care and Use Committees of the State Key Laboratory of Experimental Hematology. CD8+ T, CD4+ T, B and natural killer (NK) cells as well as their subsets were obtained from the flushed bone marrow of 6-8-week-old mice. For this purpose, cell concentrations were adjusted to 5×10.
Single-cell transcriptome data for hematopoietic stem cells (HSCs), multipotent progenitor cells (MPPs), common lymphoid progenitor cells (CLPs), and lymphocytes of mouse bone marrow were extracted from the NCBI database (GSE77098). This was followed by an analysis of nine genes related to mitophagy in blood cells to determine their expression levels.
For all flow cytometry data, the mean fluorescence intensity and standard deviation (SD) of each channel were determined. For the t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP) diagrams, cells with similar protein expression are placed close together, while highly interconnected nodes represent cells with similar phenotypes, in order to visualize the different subsets of cells. A heat map or a color dimension overlaid on the t-SNE or UMAP diagrams shows the final cell classification. In the t-SNE and UMAP diagrams, the proximity of cells reflects their distance in the high-dimensional space.
Data processing was carried out with the SPSS 26.0 statistical software package. Normally distributed data were presented as the mean ± SD, and the results were compared using ANOVA. Groups with uniform or unequal variances were compared by applying the LSD method or Tamhane's T2 method, respectively. Non-normally distributed data were expressed as the median with the interquartile range, and such groups were compared using the non-parametric Kruskal-Wallis test. For all statistical tests, significant differences were indicated by P-values of < 0.05. The GraphPad Prism 6 software was used for all graphical representations.
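A minimal, hypothetical sketch of the kind of dimensionality-reduction step described above, in which cells with similar marker expression end up close together on a 2D map that can then be colored by any channel. It is not the authors' pipeline; the file name, marker panel and transform constant are assumptions.

```python
# Embed per-cell fluorescence intensities with t-SNE for visualization.
import numpy as np
import pandas as pd
from sklearn.manifold import TSNE

events = pd.read_csv("lymphocyte_events.csv")          # one row per cell, one column per channel (hypothetical)
markers = ["CD3", "CD4", "CD8", "CD19", "NK1.1", "MMP", "ROS", "MM"]
X = np.arcsinh(events[markers].to_numpy() / 150.0)     # arcsinh transform commonly used for cytometry data

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
events["tsne1"], events["tsne2"] = embedding[:, 0], embedding[:, 1]
# The map can now be plotted and colored by any channel (e.g. mitophagy level)
# to overlay a heat dimension, as described in the text.
```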
Groups with homogeneous or unequal variances were compared by applying the LSD or Tamhane's T2 post hoc method, respectively. For non-normally distributed data, the results were expressed as the median with the interquartile range, and groups were compared using the non-parametric Kruskal-Wallis test. For all statistical tests, significant differences were indicated by P-values of < 0.05. Overall, higher levels of mitochondrial parameters were observed for B and CD4+ T cells, with the lowest level obtained for CD8+ T cells. More specifically, compared with CD8+ T cells, CD4+ T cells had significantly higher levels of MM, MMP and ROS. Similarly, B cells had the highest level of MMP (P < 0.001) as well as a significantly higher mitophagy level compared with CD4+ and CD8+ T cells, with NK cells also having a significantly higher mitophagy level than CD8+ T cells. Regarding autophagy-related gene expression, a significantly higher expression level was noted for ST-HSC compared with CD8+ T cells (P < 0.001), while in the case of B cells, Optn expression was significantly higher than for CD4+ T and CD8+ T cells (P < 0.01). The other gene, Tax1bp1, encodes an autophagy receptor that connects ubiquitin to the autophagosome membrane during the selective autophagic clearance of damaged mitochondria. In this case, a significantly lower expression level of this gene was noted for CD4+ T cells compared with LT-HSC, ST-HSC and MPP (P < 0.01). Altogether, the results suggested that out of the four lymphocyte types, B cells were more similar to LT-HSC and ST-HSC in their level of involvement in the Pink1/Park2 pathway as well as in terms of their greater similarity to the autophagy pathway characteristics of HSC. In contrast, CD4+ T, CD8+ T and NK cells were less involved in the autophagy pathway and inherited less of the characteristics of HSC. For the BNIP3/NIX pathway, the expression level of Bcl2 was significantly lower in LT-HSC and ST-HSC compared with CLP and B cells (P < 0.05), thus indicating a lower involvement of HSC in this particular pathway. The bioinformatics and statistical analyses of these results showed that transcription was heterogeneous and subgroup-specific in hematopoietic stem and progenitor cells and lymphocytes, with the expression of autophagy genes, in particular, being significantly heterogeneous among cell populations.Statistical analysis of the expression levels of autophagy-related genes in lymphoid cells at different stages of differentiation confirmed that these genes were differentially expressed across cell populations. SSC was significantly higher in CD4+ T and CD8+ T cells, with B cells having the lowest level (P < 0.001). In lymphocytes, the fact that B cells had higher levels of mitochondrial functions but lower SSC levels indicates a significantly lower number of organelles such as ribosomes, the Golgi apparatus, the endoplasmic reticulum and lysosomes in B cells.Side scatter characteristics (SSC) were used in order to characterize the number of organelles in lymphocytes as an indicator of the MM of hematopoietic cells. SSC reflects cells' granularity, that is, the complexity of cell organelles in terms of their types and numbers; among the B cell subsets, pro-B cells showed the highest SSC (P < 0.001), thus indicating that the organelle content was higher in pro-B. Finally, results of mitophagy showed that, of all B cell subsets, pre-B had the highest mitophagy level while mature B had the lowest one. These results show that, compared with the relatively primitive subpopulations, mitophagy levels were lower in differentiated B cells.
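Before turning to the gene-level comparisons that follow, the sketch below illustrates, under stated assumptions, the group-comparison logic described in the statistics section above: a normality check, followed by one-way ANOVA with an LSD or Tamhane's T2 post hoc choice depending on variance homogeneity, or a Kruskal-Wallis test for non-normal data. Variable names and the SciPy-based workflow are illustrative only; the original analyses were performed in SPSS 26.0 and plotted in GraphPad Prism 6.

```python
# Hypothetical re-implementation of the reported test-selection logic (SciPy),
# not the SPSS workflow actually used by the authors.
import numpy as np
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: list of 1-D arrays, one per cell population (e.g., MFI values)."""
    # 1) Normality of each group (Shapiro-Wilk used here as one possible check).
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if not normal:
        # Non-normal data: report medians/IQRs and compare with Kruskal-Wallis.
        stat, p = stats.kruskal(*groups)
        return {"test": "Kruskal-Wallis", "stat": stat, "p": p}
    # 2) Normal data: one-way ANOVA.
    f, p = stats.f_oneway(*groups)
    # 3) Variance homogeneity decides the post hoc family (LSD vs Tamhane's T2,
    #    as in the paper); the post hoc tests themselves are not shown here.
    levene_p = stats.levene(*groups).pvalue
    posthoc = "LSD" if levene_p > alpha else "Tamhane T2"
    return {"test": "one-way ANOVA", "F": f, "p": p, "suggested post hoc": posthoc}

# Example with simulated fluorescence intensities for three populations:
rng = np.random.default_rng(0)
groups = [rng.normal(loc, 1.0, size=30) for loc in (5.0, 5.5, 7.0)]
print(compare_groups(groups))
```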
As a serine/threonine kinase, Pink1 can specifically locate depolarized mitochondria for activating parkin through ubiquitin phosphorylation. The recruitment of autophagy receptors subsequently induces mitophagy. Pink1 expression was significantly lower in ST-HSC compared with immature B cells (P < 0.05), while that of Tax1bp1 was significantly lower in pro-B in comparison with LT-HSC, ST-HSC, MPP and pre-B (P < 0.01). Thus, it appeared that, among the subsets of B cells, pro-B cells inherited the least characteristics of HSC in terms of their involvement in the Pink1/Park2 pathway. For the BNIP3/NIX pathway, Bcl2 was significantly more expressed in CLP compared with LT-HSC, ST-HSC, pro-B, pre-B and immature B cells (P < 0.01), hence indicating that CLP was strongly involved in this pathway . Therefore, the results reflected the heterogeneity in terms of organelle numbers.UMAP, t-SNE and FlowSOM analyses indicated that phenotypic differences allowed CD4+ T cell at different differentiation and activation stages indicated that Optn was more highly expressed in LT-HSC and ST-HSC compared with nTreg, memory CD4+ T and naive CD4+ T cells (P < 0.05). In addition, nTreg had a significantly lower Tax1bp1 expression level than LT-HSC, ST-HSC and MPP (P < 0.01), although that of ST-HSC was significantly higher compared with naive CD4+ T cells (P < 0.01). These results suggest that, of all subsets of CD4+ T cells, nTreg cells inherited the least characteristics of HSC in terms of their participation in the Pink1/Park2 pathway. Regarding the BNIP3/NIX pathway, Bcl2 expression was significantly higher in naive CD4+ T cells in comparison with LT-HSC and ST-HSC cells (P < 0.01), thus showing that in a similar way to CLP, the former was more involved in the BNIP3/NIX pathway , hence indicating that active cells were of greater complexity and had a higher number of organelles.UMAP, t-SNE and FlowSOM analyses showed that CD8+ T cell at different differentiation and activation stages indicated that Optn was significantly more expressed in LT-HSC and ST-HSC compared with naive CD8+ T and CD8+ Tcm (P < 0.05), with CD8+ Tcm having a significantly lower level of Tax1bp1 expression in comparison with LT-HSC, ST-HSC, MPP, CLP, naive CD8+ T and effector CD8+ T (P < 0.05). These results show that CD8+ Tcm inherited the least characteristics of HSC in terms of their participation in the Pink1/Park2 pathway, especially when compared with CLP and effector CD8+ T cells. For the BNIP3/NIX pathway, Bcl2 was significantly more expressed in naive CD8+ T and effector CD8+ T cells compared with LT-HSC cells (P < 0.01), thus indicating that, in a similar way to CLP, the former was more involved in the BNIP3/NIX pathway , reflecting the strong mitochondrial functions of CD4+ T cells. This phenomenon may be related to a change in the mitochondrial content of T cells during lineage development. During the process of differentiation, T cells turn into a metabolically active state characterized by \u201cincreased nutrient intake, increased glutamine decomposition and increased glycolysis metabolism\u201d to meet the demand for active proliferation. T cells also increase the mitochondrial pool, with the mitochondrial content increasing accordingly. In the study of immune cells, it was found that ROS played a very important role in the process of T cell differentiation. 
The high concentration of ROS in the environment was conducive to the production of IL-2 and IL-4, promoting Th2 differentiation and prolonging the Th2-mediated immune response. A correspondingly low concentration of ROS promoted the differentiation of Th1 and Th17. In addition, some studies also showed that CD8+ T cells with low MMP displayed enhanced persistence in vivo compared with those with high MMP. Upon activation, CD8+ T cells with low membrane potential proliferate less than those with high membrane potential, but they can also show enhanced autoimmunity and greater anti-tumor effects. An increase in MMP helps to promote the secretion and function of effector cytokines in various lymphocyte lineages. For example, CD8+ T cells with high MMP displayed increased oxidative stress, DNA damage responses, cell cycle arrest and T cell exhaustion, which altogether may lead to T cell apoptosis. On the other hand, low MMP is more effective in combating oxidative stress, protecting against DNA damage and resisting apoptosis. Therefore, the differences between CD4+ T cells and CD8+ T cells in MM, MMP and ROS levels may indicate the heterogeneity of lineage development. Central memory T cells (Tcm) not only participate in the maintenance of an immune cell bank, but also play an important role in the immune response. Studies have shown that there is a dynamic balance in the MM of cells, with mitochondrial distribution, arrangement, numbers, size and morphology being different in cells which are at different stages of differentiation and activation. In the B, CD8+ T and CD4+ T subpopulations, the pro-B, nTreg and effector CD8+ T cell populations displayed greater heterogeneity than other cells within the same subpopulation. In the process of the immune response, the energy supply pathway of lymphocytes changes, and the morphology as well as the dynamics of mitochondria also change in the bone marrow, while lymph nodes and spleen, which are sites of immune response, have a higher proportion of mature lymphocytes. Antigens are processed before being presented on the cell surface in the form of MHC complexes and recognized by immunocompetent lymphocytes. After recognition and binding of antigen peptides with BCR, TCR and other lymphocyte surface receptors, a series of signal transduction cascades occur in the lymphocytes, and these ultimately enable lymphocytes to undergo appropriate biological changes for the immune response. During the immune response, different lymphocytes perform different biological functions, and this requires specific changes in the mitochondrial functions of different types of lymphocytes. T cells differentiate into CD4+ T helper cells, a process which is mainly regulated by the generation of ROS necessary for NFAT, NF-κB and proximal TCR signal transduction. For example, compared with Th17 cells, the oxidative phosphorylation level of Treg increases and the glycolytic flux decreases. Similar to T cells, B cells also experience a major transformation in their metabolic mode as they switch from their initial resting state to proliferating cells. After binding with antigens, B cells greatly enhance glucose and glutamine metabolism during clonal expansion. BCR stimulates the release of calcium ions into the cytoplasm, promotes the production of ROS in mitochondria and allows the activation of downstream signal pathways. In this case, the level of ROS production affects the activation of downstream signaling pathways.
During antigen presentation, NK cells promote various biological processes at the molecular level by upregulating glycolysis and oxidative phosphorylation. Generally speaking, resting mature NK cells maintain their cell homeostasis mainly through oxidative phosphorylation. When they are affected by proinflammatory cytokines such as IL2 and IL12, they display a strong upregulation of glycolysis and oxidative phosphorylation pathways , granulocyte subsets and erythroid cell subsets . This will add to the current understanding of the mitochondrial functions of hematopoietic cells.The heterogeneity of mitochondrial functions are important for lymphoid hematopoietic differentiation. In this work, to ensure uniformity, each mitochondrial feature was detected by using the same flow cytometer at the same time. The voltage of the photomultiplier tube of the target fluorescence channel was also consistent in each flow template, with potential interference from other fluoresceins on the target fluorescence channel eliminated by regulating fluorescence compensation regulation. In this way, the mitochondrial functions of 14 main types of lymphocytes were determined at different differentiation and activation stages simultaneously. This study compared the mitochondrial metabolism of different lymphocytes and their subsets to provide a complete set of methods and standards for analyzing the mitochondrial mitophagy, ROS, MMP and MM levels of hematopoietic cells, Furthermore, mitochondrial metabolic maps of lymphocytes were drawn at different stages of their differentiation and activation while single cell sequencing technology allowed the heterogeneity of the mitochondrial autophagy pathway, as inherited from lymphoid hematopoietic cells, to be analyzed. The current results provide a standard and reference for future studies involving mitochondrial metabolism in subsets of myeloid cells, especially since these functions are also closely related to the maintenance of hematopoietic homeostasis. Finally, this study may serve as a reference in the study of tumor and aging-related biomarkers based on mitochondrial metabolism.+ T lymphocytes, CD4+ T lymphocytes, B lymphocytes, NK cells as well as their subsets. A complete set of methods was used to comprehensively analyze mitophagy levels, ROS, MMP and MM of subsets of lymphocytes. Of all lymphocytes, B cells had a relatively high mitochondrial metabolic activity which was evident from the higher levels of mitophagy, ROS, MMP and MM, and this reflected the highly heterogeneous nature of the mitochondrial metabolism in lymphocytes.In this work, the mitochondrial functions of lymphocytes were compared at different differentiation and activation stages and included CD8+ T, CD8+ T and NK cells, B cells were more similar to LT-HSC and ST-HSC in terms of their participation in the Pink1/Park2 pathway, as well as the degree to which the characteristics of autophagy pathway were inherited from HSC.Meanwhile, the autophagy-related genes of lymphoid hematopoietic cells including hematopoietic stem cells, hematopoietic progenitor cells and lymphocyte subsets were analyzed, which preliminarily showed that these cells were heterogeneous in the selection of mitophagy related Pink1/Park2, BNIP3/NIX and FUNDC1 pathways. 
The results showed that compared with CD4+ T, CD8+ T and NK cells, B cells were more similar to LT-HSC and ST-HSC in terms of their participation in the Pink1/Park2 pathway.The original contributions presented in the study are included in the article/Supplementary Material.The animal study was reviewed and approved by the Institutional Animal Care and Use Committees of the State Key Laboratory of Experimental Hematology.HL, WF, and WY contributed equally to this work. YZ, YG, and XK conceived and directed the project; HL and WF performed the experiments; HL, WF, WY, ZC, XK, and EL analyzed the data; HL wrote the manuscript. WF, WY, ZC, XK, EL, FS, YG, and YZ contributed to the discussions and comments on the paper. All authors contributed to the article and approved the submitted version.This work was supported by grants from the National Natural Science Foundation of China, the Tianjin Science and Technology Planning Project (No. 18ZXXYSY00010) and the CAMS Innovation Fund for Medical Sciences (2022-I2M-2-003). This work was also funded by the Tianjin Key Medical Discipline Construction Project (No. TJYXZDXK-006A).All authors kindly thank the core facilities of the State Key Laboratory of Experimental Hematology, Institute of Hematology & Blood Diseases Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College for the technical assistance.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "After exploration, words were presented while participants performed a deep or shallow encoding task. Incidental memory was quantified in a surprise test. Results showed that participants in the deep encoding condition remembered more words than those in the shallow condition, while novelty did not influence this effect. Interestingly, however, children, adolescents and younger adults benefitted from exploring a novel compared to a familiar environment as evidenced by better word recall, while these effects were absent in older adults. Our findings suggest that the beneficial effects of novelty on memory follow the deterioration of neural pathways involved in novelty-related processes across the lifespan.Exploration of a novel environment has been shown to promote memory formation in healthy adults. Studies in animals have suggested that such novelty-induced memory boosts are mediated by hippocampal dopamine. The dopaminergic system is known to develop and deteriorate over the lifespan, but so far, the effects of novelty on memory across the lifespan have not yet been investigated. In the current study, we had children, adolescents, younger, and older adults (n = 439) explore a novel or a familiar virtual environment. Also events occurring in the temporal vicinity of novelty exploration are relevant, as these may relate to the requirements or consequences of exploring that environment.Do you remember what you had for breakfast during your holiday abroad last year? And do you remember what you had for breakfast the day after getting back? Chances are that you have some recollection of your holiday breakfast, but not of the one you had at home. Although both events are almost equally distant in time, the breakfast you had in a familiar environment was less memorable than the one you had in an unfamiliar holiday location.
From an evolutionary perspective it makes sense that learning is promoted around the time that novel environments are explored. When visiting a new place, it is crucial to learn about the circumstances of danger and reward, to optimize chances of avoiding danger and finding rewards in the future4. Animal work has further suggested that the effects are dependent on hippocampal dopamine that can have two potential sources: One being the ventral tegmental area (VTA) and the other being the locus coeruleus scenes before performing a word learning task 5\u00a0min later10. Commensurate with the findings in animals that suggest that novelty can promote memory, it was observed that participants recollected more words on this unrelated task, 24\u00a0h later after being exposed to novel rather than familiar stimuli. This study observed effects of novelty on consolidation, but also on encoding. Previous neuroimaging work has suggested that co-activation of the hippocampus and dopaminergic midbrain support successful encoding it is possible that novelty may mediate memory through effects on encoding or consolidation10. The study by Fenker et al.10, however, contrasts with the previous work in animals that investigated effects of novelty on memory, in that it employed images of novel scenes rather than novel environments. More akin to the studies in rodents, one study used 3D immersive novel and familiar environments by employing virtual reality20. In a within-subjects design, participants explored a novel\u00a0environment on one day and a previously familiarized environment on a different day,\u00a0before performing a word learning task. It was found that exploration of a novel environment improved recall, but not recognition in a memory test shortly later.Relative to the work in animals, however, research on the effects of novelty on memory is lagging in humans11. Similar beneficial effects of novelty have been found on visual memory in adolescents, suggesting that experiencing novelty can have a generalizable effect on long-term memory21. In line with the work in healthy adults20, one previous study employed spatial novelty, comparing effects in children and adolescents with attention deficit hyperactivity disorder (ADHD) and a typically developing control group22. Findings from this study suggest that exploration of a virtual novel environment improved free recall on a word learning task in children and adolescents with ADHD, but not in typically developing children. The authors suggested that the lack of positive effects in the control group may be explained by novelty only improving weak memory traces, which may have been more prevalent in the children with ADHD due to general learning impairments, but the role of memory strength has not yet been experimentally investigated in humans.Not only novel scenes and spatial novelty have been shown to have benefits on memory. One study in elementary school children found that experiencing a novel but not familiar science or music lesson 1\u00a0h before or after reading a story improved memory of the story30, and one previous study found that children respond stronger to stimulus novelty than adults, as evidenced by faster responses to an auditory target when a novel rather than familiar image was shown31. Changes in the response to novelty across the lifespan are supported by structural and neuroimaging studies that have suggested an age-related degeneration of the substantia nigra/ventral tegmental area . 
In line with this suggestion, a recent study in young adults suggested that active exploration of novel environments may indeed be required, as only active exploration, but not passive novelty exposure enhanced word recall37. Novelty-exposure interventions have been argued to have the potential to counteract or slow-down age-related memory decline32, but as the influence of age remains underinvestigated it is currently unclear whether the beneficial effects of novelty on memory as discussed above are also present in older adults.Notwithstanding these prior indications of differences between children and adults, the effects of spatial novelty on memory across the lifespan has not been investigated systematically, and it thus remains unclear whether age influences these effects. Psychophysiological studies have suggested that the topology and size of the novelty-induced event-related components change with agingn\u2009=\u2009439) including children (8\u201311\u00a0years), adolescents (12\u201317\u00a0years), younger adults (18\u201344\u00a0years), and older adults that they could freely explore. They then explored the same (familiar) or a novel environment, after which they were shown a list of words. One group of participants performed a deep encoding task , while another group of participants performed a shallow encoding task (\u201cis the first letter of the shown word open or closed?\u201d). After a short distraction task, word recall and recognition, and memory for encountered landmarks during exploration was tested. Participants also filled out a novelty seeking (NS) questionnaire40.In the current study we tested the effects of novelty on memory over the lifespan, in a relatively large sample (43). We furthermore expected that these effects would be stronger in adolescents and younger adults as compared to children and older adults, since dopaminergic functioning rises and declines with age38. As novelty has been shown to especially promote encoding of weak memory traces in animals44, we explored whether the boosting of novelty would be stronger for participants who performed the shallow rather than deep encoding task5. We furthermore expected that exploration behavior would be positively linked to subsequent word recall, via putative dopaminergic modulation32.We expected that novelty exploration would promote word recall but not recognition (cf. 25 and28). Forty-five participants were excluded: 17 participants were excluded because of administrative issues , seven due to technical issues , seven because of language issues , six because they worked together or received help from a parent, five participants because they did not finish the tasks in sequence , and two participants because they talked on the phone during the word learning task. As such, 439 participants were included in the main analyses (401 performed the task in Dutch and 38 in English). As the landmark test did not run on all laptops due to technical issues, the number of included participants that completed this task was only 331. Participants were classified as children , adolescents , younger adults or older adults based on their age (and presumed associated differences changes in dopaminergic functioning:46). Supplementary Information (SI): Appendix 1 shows demographics and the distribution of participants over age groups and conditions. Participants in the first testing week performed a word learning task with a deep encoding, and participants in the second week performed a shallow encoding task. 
For participants within each age-group, age distributions were similar across the different novelty and level of processing conditions . Also sex distributions were similar over conditions .A total of 487 visitors of the NEMO Science Center in Amsterdam aged 8\u00a0years or older volunteered to participate in this study. Data was collected during a 2-week Science Live exhibition, during which we tested all visitors interested in volunteering during all opening hours of the NEMO Science Center. While this somewhat restricted our control over the age and the total number of participants, it yielded a final sample size that largely exceeded that of prior studies of Leiden University, the Netherlands. All procedures were in line with the Declaration of Helsinki , and followed relevant COVID-19 guidelines and regulations.Throughout all procedures the experimenters were wearing a mask and gloves as a safety regulation regarding the COVID-19 pandemic. For data collection we used six laptops in two spacious testing rooms that allowed for social distancing (>\u2009=\u20091.5\u00a0m). The experimenter stayed in the testing room throughout the entire procedure to start the tasks and to answer questions. The entire experimental procedure took approximately 15\u201325\u00a0min.Data was collected at the NEMO Science Center in Amsterdam. Upon arrival, participants were asked to disinfect their hands as part of the COVID-19 protocol. Before participation, participants or their parents read the information letter and were given the opportunity to ask questions. After giving written informed consent, the participants were seated before they performed a series of tasks on a laptop.47, the landmark task and NS questionnaire were created using E-Prime 3.0 software .The VEs were created using Unity Version 2017.2.21f1 , and were matched in size, path length, and number of intersections. Both VEs consisted of fantasy islands with unusual landmarks (such as a slot machine) at intersections or road endpoints, including land and a body of water and eleven words referred to a non-living thing . Similarly, four words started with a closed letter and eleven with an open letter .For the word learning task fifteen Dutch neutral nouns were chosen from the CELEX lexical database and translated to English for non-Dutch speakersParticipants were reminded of the response keys and task during the encoding, recall and recognition phase of the word learning and landmark tasks. The response keys were shown below the word, in the location corresponding to the keyboard, and in the semantic task the response keys were further accompanied by the picture of a cow (to indicate a living thing) and a chair (to indicate a non-living thing). These reminders were included to lift the working memory load, especially because this otherwise could have made the task disproportionally difficult for the younger children.Landmarks were objects from the Unity Asset store, and included a wide range of easily recognizable objects, such as an airplane and desk chair. Pictures of the landmarks presented on a grey background were used in the landmark memory test. During this test also lures were presented, which consisted of objects that were not part of either of the two VEs.familiarization phase, participants explored the VE for 3\u00a0min. After exploration, they were asked to indicate their happiness and arousal on a visual analogue scale (VAS) with Self-Assessment Manikins49. 
They could use the number keys to indicate their answers, and completing the ratings took less than 1\u00a0min.Participants received scripted verbal instructions regarding how to navigate through the VE. The \u2018W\u2019 key could be used to move forward, and the mouse could be used to look around and determine heading direction. The space bar could be used to jump, although there was no function in jumping, as one could not jump on top of things. Participants were instructed that they could navigate freely but should try to stay on the paths. During the first second exploration phase participants explored either the same or a new VE for another 3\u00a0min . After this exploration, participants were asked to rate their happiness and arousal levels again on the same two VAS as before the first exploration. See Fig.\u00a0During the word task, instructions were shown on the screen. During the encoding phase, fifteen nouns were shown in a random sequence . In the first week of data collection, word learning involved a deep encoding task in which participants had to judge whether the shown word represented a living or a non-living thing. During the second test week, word learning involved a shallow encoding task in which participants had to indicate whether the first letter of the shown word had an open (such as a \u201cW\u201d) or closed (such as an \u201cO\u201d) shape. Each word was presented for a duration of 3000\u00a0ms (irrespective of whether a response was given or not). In between words a fixation cross was shown for 500\u00a0ms. After the encoding phase, participants performed a series of nine simple math problems in a distractor task. The solution to all problems varied between 1 and 9. Next, participants were prompted to enter as many words as they could remember from the encoding phase. They were instructed to press ENTER, to continue entering words or to press ESC\u2009+\u2009ENTER to continue if they could not remember any more words. In the following recognition test all 15 words from the encoding phase were randomly shown, interspersed with 10 lures (new words that were not presented during encoding). Participants had to indicate for each word whether it was old (\u201cpress X\u201d) or new (\u201cpress N\u201d). Each word was shown until a response was given. All phases of the word task were finished in 3\u20134\u00a0min. Recall was quantified by the percentage correctly remembered words, while recognition was quantified by the corrected hit rate . Next, participants performed a visuomotor adaptation task, which was completed in 2\u20133\u00a0min (results published in54).For the landmark test assessed memory for landmarks that participants could have encountered during the second exploration phase. In total 35 landmarks were shown, of which 20 were present in the second VE and 15 were lures . Participants had to indicate for each landmark whether they saw it before (\u201cpress X for old\u201d) or not (\u201cpress N for new\u201d). When participants indicated \u201cold\u201d they were further asked to indicate whether they thought the landmark was \u201csure old\u201d (\u201cpress X\u201d), \u201cprobably old\u201d (\u201cpress N\u201d) or whether they guessed (\u201cpress M \u201c). Each landmark was shown until a response was given and the test had a duration of approximately 2\u20133\u00a0min. 
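The corrected hit rate used above to quantify word recognition (and, in its "sure" variant, for the landmark test) is not spelled out in this excerpt; a common convention, assumed here purely for illustration, is the hit rate minus the false-alarm rate:

```python
# Hedged illustration: corrected hit rate (CHR) assumed to be hit rate minus
# false-alarm rate; the exact correction used in the original study is not
# given in this excerpt.
def corrected_hit_rate(hits, n_old, false_alarms, n_lures):
    return hits / n_old - false_alarms / n_lures

# Word recognition: 15 old words, 10 lures (the counts below are made up).
print(corrected_hit_rate(hits=12, n_old=15, false_alarms=2, n_lures=10))  # 0.6
```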
As an estimate of landmark recollection, the "sure" CHR was calculated.The NS scale of the Tridimensional Personality Questionnaire was administered; children and adolescents filled out a simplified and abbreviated (20-item) version of the questionnaire. Each question remained on the screen until a response was given. All questions could be answered in about 2–5 min. Afterwards feedback was shown on the basis of the total NS score. These cut-off scores were only used to provide the participants feedback and were not used in any analyses.Finally, participants reported their sex, age in years, and handedness. Adults (> 17) subsequently filled out the full 34 items of the NS scale. In line with our hypothesis that older adults would show diminished effects of novelty, an interaction between novelty and age was followed up with three 2*2*2 ANOVAs with Novelty, Encoding type, and Age as factors. As the groups between conditions were unequal, we also included Encoding type in this analysis, but the main effect and interactions with this factor are not interpreted. For all analyses, the α-criterion was set at 0.05, and Bonferroni-Holm correction was applied to compensate for multiple testing.Recall and CHR for words were subjected to 2*2*4 ANOVAs with Novelty, Encoding type and Age group as between-subject factors. As we expected the effects of age on memory performance to be quadratic, with performance peaking in adolescents or young adults, we followed up a main effect of age group with a quadratic contrast. A likelihood matrix pj was calculated for each of the two VEs, where the likelihood that someone visited each of the XY positions j was calculated by dividing the total number of visits to that location by the total number of visited locations for all participants. Roaming entropy (RE) during the first and second exploration round was defined for each participant. In this analysis the Z-coordinates were omitted, as the VEs consisted of only one location for each of the XY coordinates. As there was a very high number (ca. 6.31 million) of possible locations, the likelihood that the same coordinates were visited during the 3-min exploration was small; therefore the individual paths were smoothed using a Gaussian filter with a width of 100. Roaming entropy (REi) was then calculated per participant and exploration round by summing over the product between the individual's path (pij) and the log of the probability that each location was visited (pj), divided by the log of the number of possible locations (k). In addition to roaming entropy, the distance travelled was calculated as the sum of Euclidean distances between successive datapoints (2D), and the number of landmarks that were encountered in the second exploration round was counted for each participant by defining regions of interest (ROIs) for each of the landmarks for which memory was tested. These ROIs consisted of rectangular bounding boxes around the landmarks. ROIs could overlap in case landmarks were close to each other. For each participant it was determined which ROIs were visited. The total number of ROIs visited provides an additional measure of exploration, as it reflects how many regions were visited by the participant. A GLM including novelty and encoding type as categorical predictors, and RE for round 2 and age as continuous predictors of word recall, was run to investigate whether exploration behavior as quantified by RE could predict later word recall above and beyond the effects of novelty and encoding type.
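As a concrete illustration of the roaming entropy measure just described, the sketch below follows the verbal definition: a Gaussian-smoothed individual occupancy map (pij) is combined with the group-level visit probabilities (pj) and normalized by log k. The grid shape, the smoothing width expressed in grid units, and the leading minus sign (so that larger values indicate broader exploration) are assumptions made for illustration and are not taken from the original analysis code.

```python
# Hedged sketch of the roaming entropy (RE) measure described above.
# Assumptions (not from the original code): grid shape, smoothing width in
# grid units, and a leading minus sign so that RE is reported as positive.
import numpy as np
from scipy.ndimage import gaussian_filter

def occupancy(path_xy, shape):
    """Count visits of one participant on an XY grid (Z omitted, as in the text)."""
    grid = np.zeros(shape)
    for x, y in path_xy:
        grid[int(x), int(y)] += 1
    return grid

def roaming_entropy(path_xy, p_group, sigma=100):
    """p_group: group-level probability pj that each location was visited."""
    k = p_group.size                                   # number of possible locations
    p_i = gaussian_filter(occupancy(path_xy, p_group.shape), sigma=sigma)
    p_i /= p_i.sum()                                   # individual path distribution pij
    eps = 1e-12                                        # avoid log(0)
    return -np.sum(p_i * np.log(p_group + eps)) / np.log(k)

# Toy example: one random walk scored against uniform group-level probabilities.
rng = np.random.default_rng(1)
shape = (500, 500)
steps = np.cumsum(rng.integers(-1, 2, size=(3000, 2)), axis=0) % shape[0]
p_group = np.full(shape, 1.0 / (shape[0] * shape[1]))
print(roaming_entropy(steps, p_group))
```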
We chose to include only RE and not distance travelled, or landmarks encountered in this model, as these measures were found to be positively correlated (see SI: Appendix 5). RE is the most commonly used measure of exploration and the only of these three measures for which we found a novelty effect. The GLM was ran on centered data to reduce multicollinearity. Multicollinearity was shown to be low, with all variance inflation factor (VIF) values\u2009<\u20091.15. It is of note that we ran our task on different laptops with varying specifications, which resulted in different sampling rates between participants, but as participants were randomly distributed over the laptops, and RE exhibits a similar pattern of results as the other exploration measures we believe that potential effects of these differences were minimal.https://psyarxiv.com/r2tdn/A previous version of this manuscript was published as a preprint F\u2009=\u200976.72, p\u2009<\u20090.001, \u014b2\u2009=\u20090.16. Word recall also differed as a function of age group, F\u2009=\u200914.33, p\u2009<\u20090.001, \u014b2\u2009=\u20090.09. A quadratic relationship of age was observed, Contrast estimate\u2009=\u2009\u2212\u20090.65, p\u2009<\u20090.001, with the adolescents and younger adults remembering more words than the children and older adults.SI: Appendix 2 reports analyses of subjective arousal and mood ratings. In SI: Appendix 3 the level-of-processing analyses for performance during encoding are reported. Figure\u00a0p\u2009=\u20090.115), novelty interacted with age group, F\u2009=\u20093.20, p\u2009=\u20090.023, \u014b2\u2009=\u20090.02 \u2009=\u20097.85, p\u2009=\u20090.006, \u014b2\u2009=\u20090.06 (surviving Bonferroni-Holm correction \u03b1/3). For younger versus older adults a similar pattern was observed, with younger adults having higher word recall after exploration of a novel compared to familiar environment, and the reverse for older adults, F\u2009=\u20095.97, p\u2009=\u20090.015, \u014b2\u2009=\u20090.03 (\u03b1/[3 \u2212 1]). Finally, children also had higher word recall after exploration of a novel compared to familiar environment, while older adults showed the opposite pattern, F\u2009=\u20094.83, p\u2009=\u20090.029, \u014b2\u2009=\u20090.02 (\u03b1/[3 \u2212 2]). No other interactions were observed in the main ANOVA .Although no main effect of novelty was observed p\u2009=\u20090.11, noveltyAs we did not check whether participants who completed the English version of the task were (near) native speakers, they may not have been sufficiently familiar with the words included in our list. We therefore repeated the main ANOVA without the data from the participants who completed the English version of the task and observed the same pattern of results as reported for the analyses on all subjects. Furthermore, when including language version as a between-subjects factor , no effect of language was found.F\u2009=\u200943.74, p\u2009<\u20090.001, \u014b2\u2009=\u20090.09. Age also influenced recognition, F\u2009=\u200916.85, p\u2009<\u20090.001, \u014b2\u2009=\u20090.11. A quadratic relationship of age was observed, Contrast estimate\u2009=\u2009\u2212\u20090.93, p\u2009<\u20090.001, with adolescents\u00a0and younger adults remembering more words than\u00a0older adults and children. Novelty did not influence CHR (p\u2009=\u20090.259), and no interactions were observed .Figure\u00a0Figure\u00a0Fs\u2009>\u20096.06, ps\u2009<\u20090.001, etas\u2009>\u20090.04. 
Follow-up contrasts revealed quadratic effects for all three measures with higher exploration measures for adolescents and younger adults than for children and older adults , but age and novelty interacted for RE in round 2, F\u2009=\u20092.73, p\u2009=\u20090.044, \u014b2 = 0.02. RE in round 2 was affected by novelty, with a larger RE in the familiar compared to the novel condition, F\u2009=\u20098.49, p\u2009=\u20090.004, \u014b2 = 0.02. For the number of encountered landmarks and distance travelled no effect of novelty was observed (ps\u2009>\u20090.060). There was a statistical trend suggesting that NS scores were higher for younger versus older adults, t(153.74)\u2009=\u20091.950, p\u2009=\u20090.053 .The different exploration measures were analyzed with univariate ANOVAs with novelty and age as factors. A main effect of age group was observed for the number of encountered landmarks, distance travelled, and RE in round 2, all In line with the main ANOVAs a GLM (details in Methods) suggested that word recall was higher in the deep than in the shallow condition, and an effect of age group was found approach comparing age group does not allow us to infer whether the observed effects were driven by a detriment in performance in older adults or better performance in younger participants. However, crucially, the effects of age and novelty on recall cannot be explained by general arousal or mood\u00a0reports of participants, as these affective measures did not differ between novelty conditions and did not follow the same pattern over the different age groups as the effects on memory. Irrespective of novelty, arousal and mood peaked in children, with higher ratings for children compared to adolescents. In line with previous work, we observed no novelty-related effects on recognition37. It is possible, however, that our subjective measure of arousal and mood (9-point SAMs) was not sensitive enough to identify fluctuations. We therefore recommend that future studies use more objective or validated measures, such as heart rate (variability), pupil size, or dedicated questionnaires to estimate awareness.In line with the rise and fall of the dopaminergic system, quadratic effects showed higher memory performance in adolescents and young adults, compared to children and older adults32. The mesolimbic dopamine system is strongly associated with motivation58, but this system also has been suggested to underlie the effects of novelty on memory60. As such, the effects of novelty through dopaminergic modulation as described above, would likely share neural pathways with potential effects of novelty on motivation. In the current study, however, we did not include a measure of motivation, nor did we include measures of dopaminergic functioning, therefore potentially differential effects of novelty on motivation and dopaminergic pathways cannot be teased apart.A potential explanation for our findings is that novelty influenced motivation differently across age groups (as suggested by the NOvelty-related Motivation of Anticipation and exploration by Dopamine model [NOMAD]n\u2009=\u200964), with some conditions having a low number of participants. Potentially, the lack of a novelty effect in this age group was due to this smaller sample size. Especially, identifying a 3-way interaction may have been hard as a result (observed power was moderate for the 3-way interaction\u2009=\u20090.458). 
It is also possible that participants in the different age groups experienced the environments differently in terms of novelty, potentially explaining the observed differences between the age groups. Future studies could include additional subjective measures regarding the experienced novelty of the environments to further address the effects of such differences over the lifespan.One limitation of the current study was that the number of participants per age group varied, as we could not select participants on age in the museum setting . Due to natural variation in the age of the museum visitors, the older adult group was the smallest over the lifespan69, suggesting that changes in this region develop gradually. Taking into account this work on dopaminergic functioning across the lifespan, it is unlikely that dopamine deterioration occurs abruptly. Despite our sample of older individuals being relatively young, it is likely that some of these changes already started to develop in our sample too, as changes in dopaminergic functioning have already been reported in early adulthood. This\u00a0point could be further investigated by including a sample of older individuals or by adopting a longitudinal design in future studies.\u00a0Also note that our findings of novelty influencing immediate recall (only after a short distraction task) differ from the studies in animals, where typically long-term potentiation or memory are measured after longer delays . Due to this difference and other differences regarding the type of novelty or memory content , it is difficult to compare the current findings with those in animals. As such it is not possible to make conjectures regarding underlying neural mechanisms on basis of the current study\u2019s results. Future studies in humans could aim to further bridge the gap between the two literatures by using more similar designs as in the animal studies, or to investigate the potential neural mechanisms .One may argue that a limitation regarding our current interpretation in terms of dopaminergic mechanisms is that the mean age of our older adults was\u2009~\u200953\u00a0years old, while prior indications of age-related deterioration of dopaminergic pathways were based on slightly older participants encountered more landmarks, had higher RE, and travelled further during the second round of exploration (novel/familiar). Indeed, encountered landmarks, distance travelled, and RE exhibited a positive correlation with later memory success on the landmark task. Interestingly, participants in the familiar condition had higher RE than those in the novel condition. A reason for this may be that we tested memory immediately after a short distractor task. Some animal studies suggest that the interaction of novelty and depth of encoding develops over timeTaken together, our findings suggest that exploring a novel environment has a generalizable memory boosting effect, on both weakly and strongly encoded information, in children, adolescents, and younger adults, but not in older adults. 
These results imply that the beneficial effects of novelty on memory are limited to younger individuals, and that interventions aimed at counteracting age-related memory impairments may be less effective.Supplementary Information 1.Supplementary Information 2.Supplementary Information 3.Supplementary Information 4.Supplementary Information 5."} {"text": "Myrj-b-PCL copolymer with various PCL/Myrj ratios were synthesized via ring-opening bulk polymerization of \u03b5-caprolactone using Myrj (Myrj S40 or Myrj S100), as initiators and stannous octoate as a catalyst. The synthesized copolymers were characterized using 1H NMR, GPC, FTIR, XRD, and DSC. The co-solvent evaporation method was used to prepare CyA-loaded Myrj-b-PCL micelles. The prepared micelles were characterized for their size, polydispersity, and CMC using the dynamic light scattering (DLS) technique. The results from the spectroscopic and thermal analyses confirmed the successful synthesis of the copolymers. Transmission electron microscopy (TEM) images of the prepared micelles showed spherical shapes with diameters in the nano range (<200 nm). Ex vivo corneal permeation study showed sustained release of CyA from the developed Myrj S100-b-PCL micelles. In vivo ocular irritation study (Draize test) showed that CyA-loaded Myrj S100-b-PCL88 was well tolerated in the rabbit eye. Our results point to a great potential of Myrj S100-b-PCL as an ocular drug delivery system.Low aqueous solubility and membrane permeability of some drugs are considered major limitations for their use in clinical practice. Polymeric micelles are one of the potential nano-drug delivery systems that were found to ameliorate the low aqueous solubility of hydrophobic drugs. The main objective of this study was to develop and characterize a novel copolymer based on poly (ethylene glycol) stearate (Myrj\u2122)- Ethoxylated fatty acids e.g., PEG stearates (sold under the trademark Myrj\u2122) are non-ionic surfactants widely used in various drug delivery systems . When inThe United States Food and Drug Administration (US FDA) has approved the use of Myrj\u2122 products as safe excipients used in cosmetics, pharmaceutical formulations, and food additives . RecentlPoly (\u03b5-caprolactone) (PCL) is an eco-friendly polyester and has been widely utilized for tissue engineering and controlled drug delivery applications . It allob-PCL significantly improved the water solubility of paclitaxel (from ca. 0.3 \u00b5g/mL up to 88.4 \u00b5g/mL) and sustained the release of the loaded drug in vitro , stannous octoate (95%), \u03b5-Caprolactone, and HPLC-grade tetrahydrofuran (THF) were purchased from Sigma-Aldrich . Deuterated chloroform or Myrj S100 (MW ~ 4700 Da) were used as a macroinitiator and stannous octoate was used as the catalyst. Monomer (\u03b5-caprolactone) to catalyst molar ratio was always kept at 1:500. Different \u03b5-caprolactone to Myrj feed ratios were used to synthesize Myrj-b-PCL block copolymers with varying degrees of \u03b5-caprolactone polymerization. Briefly, either Myrj S40 or Myrj S100, \u03b5-caprolactone, and stannous octoate were added to an ampoule that was previously flamed and purged with nitrogen gas, which was then sealed under vacuum. The reaction was conducted at 140 \u00b0C for 4 h. Then, it was terminated by removing the reaction vessel (ampoule) from the oven and storing it at room temperature overnight.Ring-opening polymerization of \u03b5-caprolactone was the approach used to synthesize the copolymers ,16. 
For 1H NMR analysis, a Bruker UltraShield 500.133 MHz spectrometer was used with CDCl3 as the solvent. Tetramethylsilane (TMS) was used as an internal standard, and TopSpin software was used to process the data and obtain the spectra. The number average molecular weight of all synthesized copolymers was determined from 1H NMR spectra by comparing the peak intensity of the protons present in the PEO segment of each copolymer to that of PCL. The calculation used the integration areas of the peaks of the methylene protons of PCL at 4.07 ppm and of PEG at 3.65 ppm, respectively. Copolymer-based micelles have previously been used for topical ocular delivery of dexamethasone and no irritation was reported. All the involved animals were active and healthy without any abnormal signs and symptoms of overall toxicity throughout the experiment, except for the observed signs as scored and mentioned above. Similar observations were reported for poly(ethylene oxide)-b-PCL (PEO-b-PCL) and methoxy-PEO-b-PCL (MePEO-b-PCL) micelles. Altogether, Myrj S40-b-PCL and Myrj S100-b-PCL copolymers were successfully synthesized, but only Myrj S100-b-PCL formed micelles with optimum size, PDI, and no precipitation. The mean diameters of the prepared self-assembled structures were in the nano-range (≤200 nm). Moreover, Myrj S100-b-PCL micelles significantly increased the aqueous solubility of CyA from ca. 23 µg/mL to over 540 µg/mL. The developed micelles showed a transcorneal permeation comparable to Restasis®, the leading commercial CyA ocular formulation. The in vivo ocular irritation study demonstrated that CyA-loaded Myrj S100-b-PCL88 micelles were well tolerated in the rabbit eye. Our results point to a great potential for Myrj-b-PCL micelles to serve as an efficient solubilizing and delivery system for CyA and potentially other hydrophobic drugs."} {"text": "Traction force microscopy (TFM) has emerged as a versatile technique for the measurement of single-cell-generated forces. TFM has gained wide use among mechanobiology laboratories, and several variants of the original methodology have been proposed. However, issues related to the experimental setup and, most importantly, data analysis of cell traction datasets may restrain the adoption of TFM by a wider community. In this review, we summarize the state of the art in TFM-related research, with a focus on the analytical methods underlying data analysis. We aim to provide the reader with a friendly compendium underlying the potential of TFM and emphasizing the methodological framework required for a thorough understanding of experimental data. We also compile a list of data analytics tools freely available to the scientific community for the furtherance of knowledge on this powerful technique. The premise of mechanobiology is that the mechanical properties of biological tissues can direct given cellular processes, like proliferation, migration, survival, and differentiation. Therefore, mechanobiology entails the understanding of how forces are generated, maintained, and interpreted by cells which actively respond to biophysical stimuli arising from their milieu. The primary sites of cell interaction with any substrate are the multiprotein complexes which connect the extracellular matrix (ECM) to the cell cytoskeleton, the focal adhesions (FAs). Many FA components are mechanosensitive (i.e., their localization or conformation changes following the application of physical and biochemical stimuli generated at the ECM). The cooperative activity of these components makes it difficult to determine the specific mechanosensitivity of single FA members.
An established example of an FA mechanosensitive protein is talin, a 270-kDa protein which interacts directly with both the β-integrin cytoplasmic domain and F-actin. The protein acts as a force buffer by unfolding its numerous rod domains following mechanical load, thus exposing cryptic hydrophobic binding domains able to interact with vinculin. Cell mechanics has traditionally been investigated in two-dimensional monolayer cultures routinely used to study cellular mechanisms. However, the predictivity of in vitro monolayers when compared to native tissues is known to get poorer with the increase in the system complexity. Moving to three-dimensional (3D) culture allows cells to undergo indirect mechanical stimulation by controlling the rigidity and stiffness of the ECM in which they are embedded. A variety of engineered tools have been proposed to measure single-cell adhesion forces. Polyacrylamide- or silicone-based gels are common substrates for TFM. Both types of gels exhibit a linear elastic behavior under deformations produced by cell traction, and their stiffness can be varied over a range of several orders of magnitude. Interestingly, mechanical properties of those gels have been proven not to change under the action of biochemical factors that may occur during a TFM measurement, including cell proteases. One group, for instance, designed a polydimethylsiloxane (PDMS) contractile force screening platform featuring 96 monolithic independent wells. It is worth noting that under elongated focal adhesions, upward and downward normal tractions are more likely to appear on distal (toward the cell edge) and proximal (toward the cell body) ends of adhesions. The resulting rotational moments affect focal adhesions by either protruding or retracting peripheral regions. To measure this, Legant et al. developed an approach that requires no a priori assumption of stress state or ECM geometry. Interestingly, in this case we are not bound to the infinite substrate requirements typical of the Boussinesq theory (BOX 1). The math underneath the reconstruction of traction forces in TFM relies on the theory of linear elasticity. Substrate properties are described by the Young's modulus E and the Poisson's ratio ν. TFM problems can be solved directly (direct TFM) by calculating the strain field ε from the measured displacement field u according to the linearized expression ε = ½(∇u + ∇u^T), which holds for small strains; stresses then follow from Hooke's law σ = c : ε, where c is the stiffness tensor, which describes the substrate properties (and can be expressed in terms of E and ν). Alternatively, TFM problems can be solved through the Green's function, which describes the response of the system to a point load: an integral relation links the measured displacement u at each position to the tractions t acting on the substrate surface. In typical TFM settings, where cell-induced displacements are ca. two orders of magnitude smaller than the substrate thickness, the Boussinesq approximation of an infinite half-space (i.e., the assumption of infinite thickness) can be applied. For a nearly incompressible substrate (ν ≅ 0.5), decoupling occurs between in-plane tractions and out-of-plane displacements. Additionally, in most practical cases, displacement vectors are experimentally measured on the x-y plane only, which further reduces the problem to a 2D one, characterized by the corresponding 2D Green's function: tractions applied in the x or y directions separately induce displacements in both x and y directions. The resulting inverse problem can be solved in real space or, more commonly, in the spatial frequency domain (Fourier space). All these methods have important methodological considerations, which have been exhaustively addressed in the primary literature and carefully reviewed by Schwarz and Soiné. Many authors recommend a single regularization parameter to be chosen for a given TFM experiment, although other studies tune the regularization parameter for each single dataset to account for biological and experimental variability.
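To make the Fourier-space route and the role of the regularization parameter λ more tangible, the following is a minimal sketch of regularized FTTC using the standard Boussinesq half-space Green's function for in-plane tractions together with zeroth-order Tikhonov regularization. All inputs (a regularly gridded in-plane displacement field u, Young's modulus E, Poisson's ratio ν, pixel spacing dx, and a user-chosen λ) are assumptions for illustration; this is a simplified sketch, not a reproduction of any of the specific packages discussed in this review.

```python
# Minimal regularized FTTC sketch (assumptions: linear elastic half-space,
# in-plane displacements on a regular grid, periodic boundaries implied by FFT).
import numpy as np

def fttc(u, E, nu, dx, lam):
    """u: (ny, nx, 2) in-plane displacement field; returns tractions (ny, nx, 2)."""
    ny, nx, _ = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    k[0, 0] = 1.0                      # avoid division by zero at the zero mode
    # Boussinesq Green's tensor in Fourier space (standard textbook form).
    pref = 2 * (1 + nu) / (E * k**3)
    G = np.empty((ny, nx, 2, 2))
    G[..., 0, 0] = pref * ((1 - nu) * k**2 + nu * KY**2)
    G[..., 1, 1] = pref * ((1 - nu) * k**2 + nu * KX**2)
    G[..., 0, 1] = G[..., 1, 0] = -pref * nu * KX * KY
    u_hat = np.fft.fft2(u, axes=(0, 1))
    u_vec = np.stack([u_hat[..., 0], u_hat[..., 1]], axis=-1)[..., None]
    # Zeroth-order Tikhonov regularization: t = (G^T G + lam^2 I)^-1 G^T u.
    Gt = np.swapaxes(G, -1, -2)
    A = Gt @ G + lam**2 * np.eye(2)
    t_hat = np.linalg.solve(A, Gt @ u_vec)[..., 0]
    t_hat[0, 0, :] = 0.0               # drop the zero-frequency (net force) component
    return np.real(np.fft.ifft2(t_hat, axes=(0, 1)))

# Toy usage: a smooth synthetic displacement field on a 64x64 grid.
y, x = np.mgrid[0:64, 0:64]
u = np.stack([np.exp(-((x - 32)**2 + (y - 32)**2) / 50.0),
              np.zeros_like(x, dtype=float)], axis=-1) * 0.1
tractions = fttc(u, E=10e3, nu=0.5, dx=1.0, lam=1e-9)
print(tractions.shape)  # (64, 64, 2)
```

In such a scheme, λ is the user-chosen trade-off between fidelity to the measured displacements and smoothness of the recovered traction field, which is exactly the choice discussed in the surrounding text.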
Undoubtedly, such arbitrariness may introduce systematic errors and makes it hard to compare results in the literature. From a practical point of view, no consensus has been reached on the application of regularization strategies to experimental data. The finite element method (FEM) is a numerical technique used to produce approximate solutions to partial differential equations, as well as integral equations. The goal is to reduce the complexity of the problem to a system of ordinary differential equations that are solved by numerical integration inside a problem-defined domain. The space of the problem is then discretized and subdivided into a finite number of regions (elements) where equations are solved locally. FEM has also been applied to TFM reconstruction in several studies. BOX 2. Over the years, several software packages have been released to calculate traction forces and other features of mechanobiological relevance. Here, we present a selection of free tools that could be helpful to the reader. A tool for referenced TFM is freely accessible as a set of plugins for ImageJ, downloadable at https://github.com/qztseng/imagej_plugins. Template matching performs alignment between the reference and the "stressed" images, compensating for experimental errors; displacements are then estimated using particle image velocimetry with a user-defined searching window; FTTC then calculates the traction map starting from the displacement field, given the substrate-constitutive parameters, the image spatial calibration (pixel/μm), and the regularization parameter λ. A MATLAB package of regularization tools (Regtools) is available at https://www.mathworks.com/matlabcentral/fileexchange/52-regtools; a detailed explanation is provided in the book by the same author. A MATLAB package for computing 3D TFM in the case of large deformations (LD-3D-TFM) has been developed by the group of Prof. Franck at the University of Wisconsin-Madison and distributed through the GitHub repository https://github.com/FranckLab/LD-3D-TFM. The package includes both the FIDVC algorithm, which is used to calculate 3D deformation fields from micrographs, and the solver to reconstruct force fields from 3D deformations, according to the method published by the same authors. More recently, a comprehensive tool for 4D TFM (TFMLAB) based on an FEM engine has been developed as a MATLAB package by Barrasa-Fano et al. and made freely available. The tool integrates all the computational steps to calculate active cellular forces from confocal microscopy images, including image processing, cell segmentation, image alignment, matrix displacement measurement and force recovery. SarcTrack is a MATLAB software program designed by Toepfer et al. The tool determines sarcomere count and any changes in sarcomere length. In turn, the algorithm computes sarcomere percent contraction. Cell contraction is obtained from the imaging of labelled z-disc or m-line pairs inside individual sarcomeres. The elastic resonator interference stress microscopy (ERISM) technique, developed by Kronenberg et al., relies on an optical micro-cavity: a layer of silicone rubber is used to fabricate a microchamber to be filled with culture media. The optical cavity is sandwiched between two semitransparent layers coated in gold. Cellular forces cause wrinkles on the surface of the substrate which are observed at selected wavelengths using an optical microscope endowed with a tunable monochromatic light source (monochromator).
Dark fringes are detected at positions where the actual thickness of the cavity fulfills a resonance condition, and the resulting reflectance at that position is hence attenuated. Analysis of the fringe pattern gives a readout of cell-induced deformation field, with an estimated lateral resolution of ca. 1.6\u00a0\u03bcm. A detailed discussion on\u00a0ERISM working principle has been provided by Liehm et\u00a0al. .et\u00a0al. displacement, which is detected by collecting a 3D z-stack of the quantum dot array and performing fluorescence intensity profiling along the normal axis.Another approach which has been proposed to bypass the acquisition of a reference image is the one of Bergert et\u00a0al. . The metet\u00a0al. set of case studies demonstrating the versatility of TFM for characterizing cell-generated forces in diverse biological scenarios. Paradigmatic examples are summarized in via the deposition of a scar-like ECM, in a phenomenon dubbed as tissue remodeling .Moving over to the pure quantification of the single-cell response, Pasqualini et\u00a0al. investigated the relationship between contractile proficiency and metabolism in neonatal rat ventricular myocyte cells, calculating the ratio between the contractile work done by the cells and the metabolic energy provided by the mitochondria. To do so, cells were cultured on gels with stiffness mimicking soft/immature (1\u00a0kPa), physiological (13\u00a0kPa), and stiff/pathological (90\u00a0kPa) cardiac microenvironments, while measuring the strain energy of substrates following cell contraction and ATP production . Only onThe cells responsible for ECM remodeling in the heart are cardiac fibroblasts, which undergo profound phenotypic alterations in the failing heart . Human cValve cells are also exposed to a high degree of mechanical stress. In order to understand the response of a cell to traction, valve interstitial cells were challenged with long-term uniaxial or biaxial stretching conditions, and their ability to develop traction force was studied as a form of adaptation . The resThe motility of eukaryotic cells is needed for many biological processes, such as embryonic development or tissue repair, as well as for the function of the immune system . Many paThe generation of mechanical forces is central for regulating the attachment of cells to a substrate, for cell spreading and migration. Thus, the analysis of cell migration requires understanding the spatial and temporal pattern of cell\u2013cell and cell\u2013substrate mechanical interactions .The first attempt at investigating the contribution of cell-generated stresses to migration was described in mouse NIH3T3 fibroblasts and entailed the acquisition of time-lapse images and shear fields of traction stress at a high spatial and temporal resolution . These fThese pioneering studies were the first to demonstrate the existence of a frontal towing mechanism for cell migration in two dimensions, where dynamic traction forces at the leading edge actively pull the cell body forward. This theory was recently questioned by studies adopting nanopillars and displaying enhanced spatial resolution to indicate that the highest displacements are produced by the motile cell in correspondence of the perinuclear regions, rather than on the edges .The ability of the cell to exert force on the ECM during migration has been associated with the formation of the FA complexes. 
However, recent evidence demonstrated that the size and overall distribution of FAs only partially overlap with the distribution of traction stress. More importantly, at the leading edge, small dynamic adhesions were shown to transmit strong propulsive tractions, whereas stable mature focal adhesions exerted weaker forces. Traction measurements have also been reported for Dictyostelium discoideum crawling on soft hydrogels, where the maximum traction decreased to nearly 50 pN/μm2. The authors also investigated cell migration velocity, which was larger on less rigid gels. During cell migration on planar 2D matrices, cells move due to a cyclic process of polarization, protrusion formation, traction generation, and retraction at their rear end. In this condition, inertial and viscous drag forces are negligible. However, cells migrating in a 3D ECM are required to overcome the steric hindrance of the surrounding environment. For this reason, Koch et al. examined whether traction forces can be utilized to predict cell motility in substrates with low stiffness (elastic modulus ca. 10 kPa). Consistently, the invasiveness was found to correlate with the interplay between the focal adhesions and the cytoskeleton. Another interesting method was designed to quantify traction forces of cancer cells in highly nonlinear 3D hydrogel networks. The technique exploited a finite-element approach based on a constitutive equation which models the complex mechanical behavior of ECM-like hydrogels. These results highlight the necessity to extend the evaluation of traction forces outside the limits of the cell adhesion plane to understand the overall complexity of cell behavior. The ability of living cells to efficiently generate and transfer intracellular forces to the surrounding milieu is crucial for cell adhesion, migration, and maturation. Also, the occurrence of aberrant mechanical signals from the ECM and defects in their perception at the cellular level are now considered to be of physiological and pathological relevance. Although the TFM setup has been standardized and its principles are thoroughly detailed in numerous studies and protocols, the processing of data and the extraction of information from TFM measurements remain somewhat operator dependent. The intrinsic ill-posed nature of the mathematical problem and the nonuniqueness of its solution mandate numerical approaches that carry some margin of arbitrariness. Nonetheless, thanks to the work of some leading research groups, software tools have been made available which constitute a solid framework for consistent and reproducible data analysis. This effort is expected to contribute to a wider diffusion of TFM for complementing mechanobiology studies at the single-cell scale. The authors declare that they have no conflicts of interest with the contents of this article."} {"text": "Habitat loss, urbanisation and climate change may cause stress in koalas. Non-invasive monitoring of faecal cortisol metabolites (FCMs) can be utilised to evaluate the impact of stress. The effectiveness of two enzyme immunoassays (EIAs), 50c and cortisol, in measuring FCM values in wild, stressed koalas was tested. Faecal samples of 234 diseased, injured and control koalas in Queensland, Australia were analysed. Diseased and injured koalas had significantly higher FCM values than clinically healthy control animals as measured by the 50c EIA. Only the 50c EIA detected higher absolute values in males, and also found that females showed a more elevated response to stress manifested by injury and disease.
The cortisol EIA was also found unreliable in detecting stress in rehabilitated koalas treated with synthetic glucocorticoids as it cross-reacts with these chemicals.Loss of habitat, urbanisation, climate change and its consequences are anthropogenic pressures that may cause stress in koalas. Non-invasive monitoring of faecal cortisol metabolites (FCMs) can be utilised to evaluate the impact of stressors. The aim was to determine if the tetrahydrocorticosterone (50c) and cortisol enzyme immunoassays (EIAs) could be effective in measuring FCM values in wild, stressed koalas. This research included 146 koalas from the Australia Zoo Wildlife Hospital (AZWH) and 88 from a study conducted by Endeavour Veterinary Ecology (EVE), Queensland, Australia. Faecal samples of diseased, injured and control koalas were analysed. The effect of hospitalisation on FCM values was also investigated. Diseased and injured koalas had significantly higher FCM values than clinically healthy control animals as measured by the 50c EIA. FCM values with the cortisol EIA differed significantly between control and diseased koalas, but not between control and injured ones. Moreover, only the 50c EIA detected higher absolute values in males compared to females, and also found that females showed a more elevated response to stress manifested by injury and disease compared to males. The 50c EIA detected stress during hospitalisation better than the cortisol EIA. The cortisol EIA was also found unreliable in detecting stress in rehabilitated koalas treated with synthetic glucocorticoids as it cross-reacts with these steroids providing artificially high values. The hypothalamic-pituitary-adrenal (HPA) axis is one of the systems responding to an organism\u2019s internal and external changes to maintain homeostasis. It regulates metabolic and physiological processes by stimulating the release of glucocorticoids (GCs), such as cortisol and corticosterone, produced by the adrenal cortex ,2. DurinDuring stressful situations, its function is to maintain homeostasis and ensure the individual\u2019s survival by increasing the output of glucose for muscular and brain function and decreasing the activity of peripheral organs, hence preparing the organism\u2019s response to cope with the situation. Anti-inflammatory responses and immune suppression are also exacerbated by cortisol release in response to stress ,8,9.The secretion of cortisol is naturally limited by a negative feedback loop caused by its own secretion , and desPhascolarctos cinereus) were listed as endangered in Queensland, New South Wales (NSW) and the Australian Capital Territory under the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) [In February 2022, koalas (PBC Act) . The conPBC Act) .Chlamydia pecorum is one of the bacteria causing devastating diseases in koalas, affecting the urogenital system with cystitis, endometritis, pyelonephritis and prostatitis, as well as causing blindness and impacting the respiratory tract [In South East Queensland (SEQ), in particular, there is significant loss of koala habitat for housing developments and for the expansion of the road and rail networks. The increased traffic volume and larger distances between areas of koala habitat are causing koalas to travel long distances and to cross roads, with a high chance of being attacked by dogs or hit by vehicles ,17,18,19ry tract ,22. 
Conversely, there are ample examples showing that the effect of pain caused by illness and injuries in animals can increase stress hormone and, as a consequence, faecal cortisol metabolite (FCM) concentration ,25,26,27The effect of stressors on koalas, at an individual and population level, can be evaluated using a non-invasive method that measures the levels of FCMs. Accounting for the lag time due to the transition in the intestinal tract, metabolites of cortisol are directly related to adrenocortical activity during stressful events ,27,28.Previous studies, which explored the relationship between stress (investigated by FCMs) and injuries or illness in domestic species and wildlife, including koalas, yielded mixed results with metabolite values increasing above or decreasing below the baseline ,29,30,31Our recent research, conducted with captive koalas in wildlife parks, identified tetrahydrocortisol to be the main FCM in koalas . ValidatHowever, those studies did not investigate the suitability of both EIAs in identifying naturally occurring stressors in this species. A clear need was expressed for more studies to validate the EIA using biological parameters such as The study started in February 2021 and was completed in March 2022. Two groups of wild koalas were part of this study: a group of 146 koalas admitted to the Australia Zoo Wildlife Hospital (AZWH) (code: AZ), Beerwah, Queensland, and one group of 88 from a study conducted by Endeavour Veterinary Ecology (EVE) at The Mill (code: ML), a locality at Moreton Bay, Queensland. Details of the makeup of these groups are shown in The 36 koalas that were neither diseased nor injured and taken to AZ by concerned citizens were considered as the AZ control. During the rehabilitation, ill and injured koalas were treated with a variety of systemic and local (ocular) medications. Systemic treatment included oral synthetic GC, prednisolone , antimicrobial sub-cutaneous injections of doxycycline and chloramphenicol and enrofloxacin antibiotic used as nebulizer. Local treatment included eye ointments with chloramphenicol and GC (Chloroptsone) and antibiotic chloramphenicol . Wherever possible, information was obtained on clinical activities performed during the collection of scats for the longitudinal study. Chlamydial infection was determined using loop-mediated isothermal amplification (LAMP) [In total, 346 faecal samples were obtained for this study from the AZ koalas. Only intact fresh pellets were collected on admission and in the morning from the ground of the enclosure where the koalas were housed individually. Despite the likely need for an increased collection effort, the use of fresh pellets is recommended to avoid the possible effect of environmental conditions on the structure of the samples [A first sample from each koala was collected on arrival at the hospital before any intervention was undertaken by the veterinarians. Due to the lag time that occurs between a stressful event and the increase in FCM values , the anaA second sample from 53 of the 146 koalas was collected again between 10 and 15 days after admission to detect if changes in FCM values occurred during hospitalisation . Serial faecal pellets from 20 of the 146 admitted koalas were also collected for a period between 7 and 10 days from admission .The 88 koalas from ML were part of The Mill Koala Tagging and Monitoring Program, carried out by EVE on behalf of Moreton Bay Regional Council. 
The program aimed to ensure the welfare of koalas during vegetation clearing operations for site remediation and construction works as the site transitioned from industrial use to a mixed-use precinct with a university campus, community infrastructure and future commercial development precincts. Koalas were fitted with collar-mounted K-Tracker biotelemetry tags weighing 70 g , a very high frequency (VHF) transmitter and lead weight in housing for a total collar weight of 190 g, with a customised weak link based on the weight of the animal. When a koala was located, it was caught by a tree climber using ropes and harness. Koalas were immediately placed in a transport cage (dimensions: 520 mm H \u00d7 580 mm L \u00d7 350 mm W) with fresh eucalypt browse and transported to the EVE facility for a veterinary examination.Koalas remained in the transport cage until the veterinary examination was conducted. While undergoing veterinary assessment, any fresh and intact faecal pellets voided were collected. If pellets were not produced during the 20 to 45 min veterinary examination, the cage was checked to determine if any suitable fresh pellets could be collected.A sample of at least four intact faecal pellets was collected if fresh pellets were observed. Data relating to each individual\u2019s history and chlamydial status were provided to this project. None of these koalas were either injured or diseased at the time of the faeces collection, hence this was considered as an external control.All the samples provided for this study were immediately placed in a \u221220 \u00b0C freezer, and remained at this temperature until the samples were collected and transported with a \u221220 \u00b0C portable freezer to a \u221280 \u00b0C freezer before processing.Latitudinal and longitudinal coordinates for the 146 koalas prior to admission at AZ were derived from the precise locations provided by the rescuers. The coordinates of the ML koalas were provided by the EVE researchers. A map of the locations of all koalas involved in the study was geneg for 15 min. Completely dried-down aliquots (0.25 mL) of the extracts in 1 mL Eppendorf tubes, sealed with paraffin film, were shipped to the University of Veterinary Medicine (Austria), where dried sample extracts were resolubilised in 80% methanol and further diluted with assay buffer (1 + 9). Aliquots were analysed in duplicate with cortisol and tetrahydrocorticosterone (50c) enzyme immunoassays (EIAs). The EIAs were selected based on the findings of Santamaria et al. [The extraction procedure and analysis are described in Santamaria et al. . In briea et al. and useda et al. ,37.The normality of FCM values was assessed by visually examining the histograms and by uDescriptive statistics of FCM values were presented for all subgroups of animals, including 95% confidence intervals for medians .For control animals, the non-parametric Mann\u2013Whitney U (Wilcoxon rank sum) test was used to compare FCM values between the locations and between breeding and non-breeding season for samples obtained on the first collection day.The non-parametric Kruskal\u2013Wallis test was applied to compare FCM values between koala\u2019s conditions for samples obtained on the first collection day. Nonparametric pairwise comparisons were then conducted using Dunn\u2019s test with the Bonferroni adjustment . 
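As a minimal illustration of the nonparametric workflow just described, the sketch below runs a Kruskal–Wallis omnibus test across condition groups followed by Bonferroni-adjusted pairwise comparisons in Python with SciPy. SciPy does not ship Dunn's test, so pairwise Mann–Whitney U tests are used here as a simple stand-in for the post hoc step, and the FCM values are randomly generated placeholders rather than the study data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def compare_fcm_groups(groups):
    """groups: dict mapping condition name -> 1-D array of FCM values (ng/g).
    Returns the Kruskal-Wallis omnibus p-value and Bonferroni-adjusted p-values
    for all pairwise Mann-Whitney U comparisons (stand-in for Dunn's test)."""
    _, p_omnibus = stats.kruskal(*groups.values())
    pairs = list(combinations(groups, 2))
    pairwise = {}
    for a, b in pairs:
        _, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        pairwise[(a, b)] = min(p * len(pairs), 1.0)  # Bonferroni adjustment
    return p_omnibus, pairwise

# Illustrative call with made-up FCM values (ng/g), not the study's measurements
rng = np.random.default_rng(0)
fcm = {"control": rng.gamma(4, 5, 124),
       "diseased": rng.gamma(4, 9, 76),
       "injured": rng.gamma(4, 9, 30)}
print(compare_fcm_groups(fcm))
```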
The ManPoisson regression was utilised in a multivariate model to evaluate the association between FCM values and koala condition, sex and the interaction between sex and condition for samples obtained on the first collection day . Robust The Mann\u2013Whitney U test for matched data (Wilcoxon rank sign) was applied to compare EIA values for AZ koalas between the first sampling and the subsequent sampling 10\u201315 days later.The correlation between 50c and cortisol was explored in a scatter plot and the Pearson correlation coefficient was used for quantification.Microsoft Excel (Version 2208 Build 16.0.15601.20204) was used for descriptive statistics and bar and line graphs. Data analysis was performed in STATA 16.1 .A total of 234 koalas from the AZ and ML were considered for the analysis, of which 36 from AZ and 88 from ML were neither diseased nor injured and were considered controls. Of the AZ koalas, 76 were diseased and 30 were injured. Four were diseased and injured, but were excluded from further analyses.The distribution of FCM concentrations measured with both EIAs for the AZ and ML control are shown in The median values (ng/g) (95% CI) detected by the 50c EIA and cortisol EIA in koalas at AZ (N = 36) on the first collection day were 22.4 and 6.8 , respectively. The median values (ng/g) (95% CI) in koalas at ML (N = 88) on the first collection day were 18.8 (50c EIA) and 6.4 (cortisol EIA), respectively. Mean, minimum, maximum FCM values for control koalas from AZ and ML are presented in p = 0.1379) and the cortisol (p = 0.5858) EIAs between the AZ and the ML control, highlighting that healthy koalas have similar FCM values in different geographical locations and confirming that both could be used as a combined control in the analysis.There was no significant difference in FCM values measured by the 50c (p = 0.1469 and p = 0.5415) between the breeding and non-breeding season are shown in Scatterplots were used to display the bivariate correlations between the two EIAs for the control (N = 124), diseased (N = 76) and injured (N = 30) in day 1 samples . The datp < 0.001), for the diseased was 0.4479 (p < 0.001) and for the injured was 0.4310 (p < 0.0174).The correlation coefficient for the control was 0.4120 (95% CI) detected with the 50c EIA and cortisol EIA for the combined control koalas from AZ and ML (N = 124) were 20.0 and 6.5 , respectively. The median FCM values (ng/g) (95% CI) detected with the 50c EIA for the diseased koalas from AZ (N = 76) were 36.1 and 11.4 with cortisol EIA. The median FCM values (ng/g) (95% CI) of the injured koalas (N = 30) as measured with the 50c EIA were 34.9 and with the cortisol EIA 8.9 .Mean, minimum and maximum FCM values for these three groups of koalas with different conditions are presented in p < 0.001). In the pairwise comparison, diseased and injured koalas had significantly higher FCM values measured by the 50c EIA compared to the control , but FCM values did not differ between diseased and injured koalas (p = 1.000). 
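Referring back to the multivariate model described at the start of this section, the following sketch shows one way to set up a Poisson regression of FCM values on condition, sex and their interaction, with heteroskedasticity-robust (sandwich) standard errors and incidence rate ratios, using statsmodels in Python. The data frame, variable names and HC0 covariance choice are illustrative assumptions, not the authors' exact specification or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical placeholder data frame (not the study's data)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "fcm": rng.poisson(25, 230),                      # day-1 FCM value (ng/g)
    "condition": rng.choice(["control", "diseased", "injured"], 230),
    "sex": rng.choice(["female", "male"], 230),
})

# Poisson regression with condition-by-sex interaction and robust (HC0) errors
model = smf.glm("fcm ~ condition * sex", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")

irr = np.exp(model.params)              # incidence rate ratios
irr_ci = np.exp(model.conf_int())       # 95% confidence intervals for the IRRs
print(pd.concat([irr.rename("IRR"), irr_ci], axis=1))
```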
FCM values measured by the cortisol EIA only differed significantly between control and diseased koalas (p < 0.0001), but not between control and injured (p = 0.1016) or between diseased and injured koalas (p = 0.1246).FCM values measured by both EIAs differed significantly between control, diseased and injured koalas (95% CI) measured by the 50c and the cortisol EIA of females and males in the control, diseased and injured groups are shown in p < 0.001) and diseased (p = 0.0210) animals, but did not differ between injured females and males (p = 0.2725). FCM values measured with the cortisol EIA were not significantly different between sexes for the three conditions, control (p = 0.2997), diseased (p = 0.9833) and injured koalas (p = 0.2042). Thus, in contrast to 50c EIA, cortisol EIA was not able to detect the difference in FCM values between males and females for control and diseased animals.FCM values measured with the 50c EIA were significantly higher for the males in the control (p < 0.001) and higher in diseased and injured koalas compared to controls (p < 0.001), a significant interaction was also found between the conditions and the sex of the animals (p = 0.0307). Thus, the IRR of FCM values measured with 50c EIA were significantly lower for injured males relative to injured females when compared to the controls. Similarly, IRR of FCM values measured with 50c EIA were marginally lower for diseased males relative to diseased females when compared to the controls (p = 0.074).The results of the multivariate analysis are presented in p = 0.4221).In contrast, in the multivariate analysis for cortisol EIA, the interaction between the conditions and the sex of the animals could not be detected of stresIn contrast to our previous study , which dA strong artificial increase in FCM values measured by the cortisol EIA was directly associated with the therapeutic systemic administration of prednisolone (Redipred) in koalas whose scats were collected during the therapy. These results can be explained by the antibody of the cortisol EIA cross-reacting with cortisone, prednisolone and prednisone ,51,52 , the main FCM in koalas . MoreoveHigh endogenous GC levels trigger a negative feedback that decreases the secretion of the ACTH, consequently decreasing the release of endogenous GCs . This reAs expected, on day 1, high FCM values were found in diseased and injured koalas. There is an abundance of literature on stress, including anthropogenic stress, causing illness and disease in wildlife, though there are no articles on the effect of disease on stress in wildlife. However, critical and acute illness, generally associated with serious inflammations and infections , had a strong impact on stress in children, causing a significant and sustained increase in plasma cortisol, which was, however, not significant in those with non-critical illnesses . Hence, In fact, injuries are well known to cause pain and increase stress in humans and animals (domestic and wildlife). Pain in horses, resulting from colic, has shown to dramatically increase FCM values , and in The 50c EIA detected that FCM values of males were significantly higher than those of females (except for the injured), but no difference between sexes could be detected with cortisol EIA. Furthermore, through the interaction term between condition and sex in the multivariate analysis, it was identified that 50c EIA values were proportionally higher in females than males for individual conditions. 
This indicates that females, relative to males, respond to stressful situations derived from disease and injury with proportionally larger elevation of FCM values when measured with 50c EIA. In contrast, when measuring stress levels with cortisol EIA, a difference between sex and the interaction between the sexes and the three conditions could not be detected . Thus, the cortisol EIA is less suited to detecting subtle differences between physiological changes due to pain.Therefore, in koalas, the cortisol EIA had a lesser discriminatory power than the 50c EIA, which reiterates what was already demonstrated in our previous work . HoweverThis study was designed to assess the stress caused by injuries and disease and the effect of hospitalisation on wild koalas. While this study successfully achieved its aim, there were some limitations worth mentioning here.Due to the real-life nature of this study, some elements could not be standardised. As these were wild koalas, their age was not known and, hence, the FCM values between age groups could not be established. However, our previous investigation on captive koalas of known age , showed Koalas were admitted to hospital at different times, but this would not alter the FCM values of day 1 samples as these reflected stress events occurred many hours earlier and further samples in the hospital were collected freshly defecated each morning. Nevertheless, our previous study showed dThis study has clearly demonstrated that pain causes an increase in stress in koalas, which is known to impact the immune system. Stressed koalas have an increased likelihood of being further affected by illnesses, hindering their natural recovery and wellbeing.While we have established a link between stress and diseases and injuries, anthropogenic activities, some also causing climate change, are also considered stressors increasingly impacting on the survival of koalas. Using a species-specific EIA for measuring FCM values, as an indicator of stress, is of utmost importance.Here, it has been demonstrated that stress can be reliably measured with the 50c EIA. It has been confirmed that the baseline FCM values measured in our previous paper can be used to assess stress in koalas. It has been shown that, before admission to hospital, the diseased and injured koalas were stressed.More importantly, this study has clearly demonstrated the unsuitability of the cortisol EIA to determine stress in most rehabilitated animals and in those treated with synthetic GCs, as the values detected reflect administered exogenous GCs rather than the endogenous cortisol metabolites. Hence, future studies on stress of rehabilitated animals medicated with GCs need to take into consideration the cross-reaction of the EIA with any exogenous GCs. These findings will be applied to a wider study on wild koalas where the relationship between health and stress will be further investigated."} {"text": "In this commentary, we explore the pioneering implementation of 3D-printed thin liquid sheet devices for advanced X-ray scattering and spectroscopy experiments at high-repetition rate XFELs. 
This has enabled the development of novel methods, such as serial femtosecond crystallography (SFX), fluctuation X-ray scattering (FXS), and several X-ray spectroscopies, which have transformed atomic resolution molecular imaging and ultrafast materials characterization is predominantly utilized for liquid sample injection in XFELs X-ray spectroscopies (Ekimova"} {"text": "There are no population-based data on the relative importance of specific causes of hypercapnic respiratory failure (HRF). We sought to quantify the associations between hospitalisation with HRF and potential antecedent causes including chronic obstructive pulmonary disease (COPD), obstructive sleep apnea, and congestive cardiac failure. We used data on the prevalence of these conditions to estimate the population attributable fraction for each cause.2\u2009>\u200945\u00a0mmHg. Controls were randomly selected from the study population using a cluster sampling design. We collected self-reported data on medication use and performed spirometry, limited-channel sleep studies, venous sampling for N-terminal pro-brain natriuretic peptide (NT-proBNP) levels, and sniff nasal inspiratory pressure (SNIP) measurements. Logistic regression analyses were performed using directed acyclic graphs to identify covariates.A case\u2013control study was conducted among residents aged\u2009\u2265\u200940\u00a0years from the Liverpool local government area in Sydney, Australia. Cases were identified from hospital records based on PaCOWe recruited 42 cases and 105 controls. HRF was strongly associated with post-bronchodilator airflow obstruction, elevated NT-proBNP levels, reduced SNIP measurements and self-reported opioid medication use. There were no differences in the apnoea-hypopnea index or oxygen desaturation index between groups. COPD had the highest population attributable fraction .COPD, congestive cardiac failure, and self-reported use of opioid medications, but not obstructive sleep apnea, are important causes of HRF among adults over 40\u00a0years old. No single cause accounts for the majority of cases based on the population attributable fraction.The online version contains supplementary material available at 10.1186/s12890-023-02639-6. Hypercapnic respiratory failure (HRF) is a commonly encountered clinical scenario for hospital clinicians in a wide range of disciplines. Many patients are known to have a predisposing condition such as severe chronic obstructive pulmonary disease (COPD). However, in some patients presenting with HRF, the underlying cause is not apparent at initial presentation, and there may be multiple underlying potential causes existing concurrently. Although each cause requires disease-specific therapies, most patients require hospitalisation and many benefit from ventilatory support in dedicated respiratory and critical care units. Hence, HRF can be considered a single, albeit heterogeneous entity, that constitutes a significant problem for health facilities worldwide.Previous studies examining the underlying causative conditions among patients with HRF have typically included participants identified following admission to an intensive care or respiratory admission and requiring ventilatory support therapy \u20135. 
ThreeUsing a community-based case control study design, we sought to determine the strength of association between hospitalisation with HRF and the following conditions: COPD, congestive cardiac failure (CCF), obstructive sleep apnea (OSA), respiratory muscle weakness, and the use of opioid and benzodiazepine medications. We selected these causes in consensus based on previous studies and our clinical experience. In addition to estimating the strength of association with HRF, we used data on the prevalence of these conditions in the general community to estimate, for each cause, the population attributable fraction (PAF), an epidemiologic measure to describe the relative importance and public health impact of a risk factor in a population.This prospective study was based in the City of Liverpool, a metropolitan area within Sydney, Australia . The stu2\u2009>\u200945\u00a0mmHg and pH\u2009\u2264\u20097.45. We anticipated the number of cases of HRF missed by this screening method to be low due to the increased risk of hospitalisation associated with hypercapnia, the public healthcare scheme in Australia that provides free hospital services to all citizens and most permanent residents, and local data that indicate most people with respiratory conditions from this population who require hospitalisation attend Liverpool Hospital. Medical records were reviewed to exclude suspected nosocomial cases of HRF, or instances where the person had suffered an out-of-hospital cardiac arrest or traumatic injury. Cases were invited to participate first by mail and then by follow-up telephone calls.Cases were patients who attended Liverpool Hospital between 2016 and 2018. Potential cases were identified based on an ABG, collected within 24\u00a0h of presentation, demonstrating PaCOPopulation-based controls were randomly selected using a two-stage geography-based cluster sampling design, implemented from 2018 to 2019. First, we randomly selected 40 census tracts from the 449 comprising this region, the City of Liverpool. The probability of tract selection was proportional to the number of eligible residents in each tract. Next, we undertook \u2018random walks\u2019 to select households units within each tract, from which control participants were recruited. Investigators starting from the geographical centre of each tract walked along streets in directions guided by a computer-based random number generator. This method of population-based sampling is a practical method for random selection of participants from a large population when a population list is incomplete or unavailable, and is modified from methods used by the World Health Organization . LettersStandardised questionnaires were administered by members of the research team. Data were collected on sociodemographic factors, comorbidities, and medications, including the use of opioids and benzodiazepines, in the preceding two years.Spirometry was performed using the EasyOne spirometer , before and after administration of salbutamol 200\u00a0\u03bcg via metered dose inhaler and spacer. All spirograms were reviewed by the author (Y.C.) for acceptability and repeatability using published criteria . N-termi1)/forced vital capacity (FVC) ratio, NT-proBNP levels, maximum SNIP value (SNIPmax) and apnea-hypopnea index (AHI). Receiver operator characteristic (ROC) curves were generated from these models to determine the area under the ROC curve (AUC) in order to assess the predictive value of each of these variables for HRF. 
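As an illustration of the baseline models just described, the sketch below fits a univariable logistic regression of case status on one continuous predictor and reports the area under the ROC curve. The predictor values are randomly generated placeholders, and scikit-learn is used purely for convenience, without implying it was the authors' software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def single_predictor_auc(x, y):
    """Fit a univariable logistic regression of case/control status (y: 1 = HRF
    case, 0 = control) on one continuous predictor x (e.g. FEV1/FVC, NT-proBNP,
    SNIPmax or AHI) and return the AUC of the fitted model's predicted risk."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    model = LogisticRegression().fit(x, y)
    return roc_auc_score(y, model.predict_proba(x)[:, 1])

# Illustrative, randomly generated example (not study data): 42 cases, 105 controls
rng = np.random.default_rng(2)
y = np.r_[np.ones(42), np.zeros(105)]
fev1_fvc = np.r_[rng.normal(0.55, 0.12, 42), rng.normal(0.75, 0.08, 105)]
print(round(single_predictor_auc(fev1_fvc, y), 2))
```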
For subsequent analyses, participants were classified as having COPD if post-bronchodilator FEV1/FVC was below the lower limit of normal, using Global Lung Initiative reference values [max was less than 70 cmH2O and 60 cmH2O, among males and females, respectively. A diagnosis of moderate-to-severe OSA was recorded if the AHI was\u2009\u2265\u200915 events per hour.Continuous variables are summarised as means with standard deviations (SD) and medians with interquartile ranges (IQR). Groups were compared using independent t-tests or Mann\u2013Whitney U tests, as appropriate. Frequencies and percentages are used to describe categorical variables, and Fisher\u2019s chi-square test used to compare groups. Baseline logistic regression models were used to assess the relationship between the presence or absence of HRF and continuous variables: the post-bronchodilator forced expiratory volume with 95% confidence intervals (CI). Covariates for each model were informed by a directed acyclic graph (DAG) developed by the authors using the web-based program \u2018daggity\u2019 showing We estimated 205 participants would be required in each group to detect an odds ratio of 2.8 and 2.0 for risk factors with prevalence values of 5% and 15%, respectively, with 80% power and two-sided alpha of 0.05. Study recruitment was stopped early due to low response rates and the COVID-19 pandemic.One hundred and forty-seven subjects (42 cases and 105 controls) completed the study, as shown in Fig.\u00a0p\u2009<\u20090.001). The Charlson comorbidity index, a composite predictor of mortality [p\u2009<\u20090.001).Demographic and clinical characteristics are shown in Table ortality , was sigp\u2009<\u20090.001) (Table 1 in cases was 51 (21) percent predicted. The most frequent potential cause for HRF among controls was moderate-to-severe OSA (34%). The median (IQR) AHI among all controls was 6.6 events per hour.At least one pre-specified cause for HRF was identified in 42 (100%) cases, and 66 (63%) controls . Post-hoc analysis showed that when a lower NT-proBNP cut-off value of 35\u00a0pmol/L (296\u00a0pg/mL) was used to determine the presence of CCF, the magnitude of association was attenuated but remained statistically significant with an OR of 4.25 (95% CI 1.16 \u2013 15.6).The risk of HRF associated with each cause is shown in Table Moderate-to-severe OSA did not appear to be associated with increased risk of HRF, with more controls having this condition compared with cases. There was no difference between groups based on mean AHI, using a 4% desaturation threshold, or using the oxygen desaturation index. There were no significant differences in age, BMI, degree of comorbidity, reported sleepiness, and estimated risk of OSA based on the STOP-Bang score between No single cause was identified as being responsible for more than 50% of HRF cases, based on the PAF, as shown in Table In this population-based case\u2013control study, we show that HRF is a multifactorial condition with no single disease responsible for the majority of cases. In addition to chronic health conditions such as COPD and CCF, opioid use and respiratory muscle weakness are significantly associated with HRF hospitalisations. Interventions to reduce the prevalence of these causes have the potential to substantially reduce HRF-associated hospitalisations in this and other comparable populations.Of the hypothesised causes, COPD had the highest population attributable fraction. 
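For readers unfamiliar with the population attributable fraction, the short sketch below computes it with Levin's formula from an exposure prevalence and a relative risk (approximated by the odds ratio when the outcome is rare). This is one standard estimator, not necessarily the exact method used in the paper, and the prevalences and odds ratios shown are invented for illustration.

```python
def levin_paf(prevalence, rr):
    """Levin's population attributable fraction: the proportion of cases that
    would be avoided if the exposure were removed, given the exposure prevalence
    in the population and the relative risk (or, approximately, the odds ratio
    when the outcome is rare)."""
    excess = prevalence * (rr - 1.0)
    return excess / (1.0 + excess)

# Purely illustrative prevalences and odds ratios, not the study's estimates
for cause, prev, or_ in [("COPD", 0.10, 8.0),
                         ("CCF", 0.03, 6.0),
                         ("opioid use", 0.08, 5.0)]:
    print(f"{cause}: PAF = {levin_paf(prev, or_):.0%}")
```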
Most previous surveys have shown COPD to be the dominant cause of hospitalisation with HRF \u20133, 7, 8,In this population, CCF was an important contributor to hospitalisations with HRF. Patients with heart failure have reduced lung compliance and increased airways resistance, experiencing greater mechanical costs of breathing resulting in muscle fatigue and inability to maintain adequate ventilation . HypercaSelf-reported use of opioid medications contributed significantly, with a PAF comparable to that of COPD. In recent decades, Australia and other high-income countries have documented a marked increase in the use of prescription opioids . The cliThere was a relatively high prevalence of respiratory muscle weakness among cases, and there was a significant association between reduced muscle strength and HRF. Neuromuscular disease including motor neuron disease, polio and muscular dystrophy accounts for a substantial proportion of home mechanical ventilation users . We did Our study did not show a significant association between moderate-to-severe OSA and hospitalisation with HRF. Our analysis was based on the assumption that OSA could lead to HRF via sleep-related hypoventilation, or via an alternate pathway involving the development of CCF, with or without central sleep apnea and sleep hypoventilation Fig.\u00a01)1). The aThis study has some weaknesses. We were unable to achieve the recruitment target, but in our analysis we found that our sample size was sufficient to detect with statistical confidence the high magnitude of associations observed between HRF and the causes COPD, CCF and opioid use. Rather than the small groups per se, the primary limitation of this study is the potential selection bias induced by low response and non-participation rates. A relatively small proportion of potentially eligible people proceeded to study participation, despite attempts made by the investigators to minimise inconveniences and provide financial reimbursement for the time provided. Controls may have been more likely to participate if they had symptoms of, or were concerned about, OSA. If this effect had occurred in controls but not cases, it would have led to an underestimation of the association with OSA. Potential cases may have been excluded due to frailty or death. The effect of this selection bias would have been under-estimation of the association with causes known to increase the risk of death, such as CCF. Due to the case\u2013control design, risk factors for the outcome are measured after the occurrence of HRF, although we expect most to be chronic conditions that would have been present prior to hospitalisation. Finally, our results may not be generalisable to other populations depending on socioeconomic and other factors affecting the prevalence of each cause.Nevertheless, our work has several strengths distinguishing it from previous studies. This is one of few studies that have selected cases based on ABG results. We have provided objective measurements of causes rather than self-reported diagnoses or medical records which may be incomplete. We employed causal diagrams to inform our statistical analysis in keeping with modern epidemiological theory. 
Importantly, this is the first study of HRF to provide data on a control group, allowing estimation of the association between specific causes and HRF at a population level as well as the relative importance of each cause as reflected in the population attributable fraction.In summary, our study provides evidence for the multifactorial nature of HRF and the range of potential contributing factors. In addition to COPD, other causes including CCF, opioid use and respiratory muscle weaknesses are significantly associated with HRF hospitalisations. These findings have important implications for the assessment and management of patients with HRF and highlight the need for comprehensive evaluation to ensure that all treatable factors are addressed.Additional file 1: Table E1. Details of regression models used to determine the associations between each cause and the outcome of hypercapnic respiratory failure."} {"text": "Background: Eating disorders are a problem that is becoming more and more common among younger and younger age groups. Many studies examine the risk factors for EDs, however, the treatment of these diseases is very complicated and requires dietary, psychological and medical intervention. Methods: 233 primary and secondary school students aged 12 to 19 were surveyed using the EAT-26 (Eating Attitudes Test-26) questionnaire, the self-esteem Scale SES and the Cantril scale for life satisfaction. Results: Women, when compared to men, showed lower self-esteem, satisfaction with their appearance, body weight and their lives and at the same time a higher risk of eating disorders in all three areas. Low life satisfaction is often correlated with weight loss greater than 10 kg. Low self-esteem correlated positively with significant weight loss (>10 kg) and more frequent uncontrollable binge eating and exercising (more than 60 min a day) to influence appearance. People with low self-esteem were more likely to be treated for EDs. Subjects dissatisfied with their lives binged, feeling that they could not stop. Conclusion: The younger the person, the more likely they are to develop eating disorders. This is closely correlated with low self-esteem and negative life satisfaction. Men were more likely to be satisfied with their weight, appearance, and life, and were less likely to show ED symptoms. Eating disorders (ED) are a complex group of mental disorders that result in pathological eating behaviors that can lead to serious complications . IncorreAtypical anorexia nervosa ;Nervosa bulimia (of low frequency and/or limited duration);Binge eating disorder (of low frequency and/or limited duration);Purging disorders;Midnight Eating Syndrome.In the Diagnostic and Statistical Manual of Mental Disorders DSM-4), eating disorders include anorexia nervosa (AN), bulimia nervosa (BN), and eating disorder not otherwise specified (EDNOS). EDNOS is a composite diagnosis that includes all eating disorders that do not meet the AN or BN diagnostic criteria [, eating The DSM-5 also includes a category called unspecified feeding or eating disorder (UFED), which includes people who do not fit into any of these five categories or for whom there is not enough information to make a specific diagnosis of OSFED . In addiAlthough the different forms of eating disorders differ in course and treatment, they share common psychological and behavioral symptoms as well as risk factors. 
It has been shown that experiencing negative emotions increases the risk of losing self-esteem and thus losing control over your eating behavior . Excess The criteria for diagnosing eating disorders have changed over the years. One of the screening tools commonly used to detect eating disorders is the EAT-26 . EAT-26 The aim of this study was to determine the relationship between the impact of self-esteem and life satisfaction on the occurrence of eating disorders among adolescents in three areas, such as control, weight loss, and bulimia and food preoccupation.2, and the average BMI desired by the subjects was 19.72 \u00b1 3 kg/m2. Each of the participants provided informed consent to participate in the study, and the consent of the legal guardian was also obtained. Lack of consent and an age under 12 or over 19 excluded participation in the research. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Bioethics Committee at the University of Rzeszow . The study was conducted in 2022, when restrictions related to the COVID-19 pandemic were lifted. Isolation had no effect on the test results.In total, 233 primary and secondary school students aged 12 to 19 were surveyed. They all came from the Subcarpathian region. The average age of the respondents was 15.46 \u00b1 2.33 years. Among the respondents, 37% were boys and 63% were girls. The mean body mass index (BMI) of the subjects was on average 20.84 \u00b1 4 kg/mTo examine the risk of eating disorders, the Eating Attitudes Test-26 questionnaire was used. It explores the risk of eating disorders in three areas: dieting, bulimia and food preoccupation, and oral control. It is a standardized survey consisting of 26 questions relating to feelings and 4 questions related to behaviors occurring during the last 6 months in the examined person, such as losing more than 10 kg of body weight or a sense of loss of control over food consumption. This scale was developed in 1982 by David M. Garner for the early detection of eating disorders. The study used the Polish adaptation from 2016, developed by Rogoza et al. .The Self-esteem Scale (SES) of M. Rosenberg consists of 10 statements that relate to your beliefs about yourself. It is a tool that allows you to assess levels of self-esteem. The respondent is asked to indicate to what extent he or she agrees with each of the statements . This stThe Cantril scale has the form of a ladder, with degrees from 0 to 10 defining the level of satisfaction, with 0 being the worst and 10 the best satisfaction. It is assumed that a score <6 is defined as dissatisfaction, and >6 means satisfaction with one\u2019s life .In addition, the respondents were asked to provide their height and current and desired body weight. On this basis, the current and desired Body Mass Index (BMI) was calculated, and their values were found on the WHO charts .p < 0.05.Statistical analysis of the collected data was performed in the Statistica 13.3 package. Only non-parametric tests were used for the analysis of variables. The choice of this type of test was conditioned by the failure to meet the basic assumptions of parametric tests, i.e., compliance of the distributions of the examined variables with the normal distribution, which were verified by the Shapiro\u2013Wilk W test. For most numerical variables, descriptive statistics were calculated: mean, median, minimum, maximum, first and third quartiles, and standard deviation. 
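A minimal sketch of the descriptive and normality steps just listed, using Python with pandas and SciPy; the EAT-26 scores in the example are randomly generated placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats

def describe_with_normality(values):
    """Descriptive statistics of the kind listed above (mean, median, min, max,
    Q1, Q3, SD) plus the Shapiro-Wilk W test used to decide whether parametric
    or non-parametric tests are appropriate."""
    x = pd.Series(values).dropna()
    w, p = stats.shapiro(x)
    return pd.Series({
        "mean": x.mean(), "median": x.median(),
        "min": x.min(), "max": x.max(),
        "Q1": x.quantile(0.25), "Q3": x.quantile(0.75),
        "SD": x.std(),
        "shapiro_W": w, "shapiro_p": p,  # p < 0.05 -> distribution not normal
    })

# Hypothetical EAT-26 total scores for 233 respondents (illustrative only)
rng = np.random.default_rng(3)
print(describe_with_normality(rng.gamma(2.0, 5.0, 233)).round(2))
```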
The Mann\u2013Whitney U test was used to assess the differences in the average level of a numerical trait in the two populations. The correlation of two variables that did not meet the criterion of normal distribution was determined using Spearman\u2019s rank correlation coefficient. The level of statistical significance was p < 0.001), body weight (p = 0.007) and life (p < 0.001) compared to women. Compared to men, women scored more on the SES scale (p = 0.001); therefore, they had lower self-esteem and a higher risk of eating disorders\u2014EAT scale (p < 0.001). Regarding eating disorders, women also scored higher on dieting (p = 0.034), bulimia and food preoccupation (p = 0.005) and oral control (p = 0.001) (It was shown that men obtained higher results of satisfaction with their appearance (= 0.001) .p = 0.035; R = \u22120.14), and a positive relationship between the age of the subjects and bulimia and food preoccupation . The older the respondents were, the lower their score in the area of eating disorders\u2014oral control. Conversely, older people scored higher in the category of bulimia and food preoccupation among ED areas . The influence of age on losing more than 10 kg is shown in the It was shown that people who lost 10 kg or more were on average younger (14.87 years old) than people who did not lose such an amount of body weight (15.64 years old). The described difference in the age of the subjects from the two groups was statistically significant (p = 0.052), although this result was close to the threshold of significance. People treated for this reason were on average slightly older (16.5 years old) than those not treated (15.37 years old).There were no differences in the ages of people treated and not treated for eating disorders were less satisfied with their appearance and their body weight. It also showed that people with lower self-esteem were more likely to overeat in the last 6 months when they felt they could stop, vomit or exercise more than 60 min a day to lose weight or control weight. People with lower self-esteem had a higher risk of eating disorders\u2014EAT-26 scale in all three areas .p = 0.010). The descriptive data related to weight loss of 10 kg or more and self-esteem are in People who had lost 10 kg or more in the last 6 months had lower self-esteem than people who had not lost weight (23.53 points). This difference was statistically significant (p = 0.023). The correlation between treatment for eating disorders and self-esteem is shown in In addition, it was shown that people who were treated for eating disorders had lower self-esteem than people who were not treated for this reason (23.68 points). This difference was statistically significant (p < 0.001), satisfaction with their body weight (p < 0.001), risk of eating disorders\u2014based on the EAT-26 scale in the area of slimming (p < 0.001), and controls (p = 0.017). The higher the respondent\u2019s self-esteem, the higher their satisfaction with their appearance and body weight was. Risk of eating disorders\u2014the EAT-26 scale was statistically significantly more frequent in people with low self-esteem than in people with average and high self-esteem. This was similar in the ED area\u2014slimming. The degree of control, however, differed statistically significantly among people with low and high self-esteem from 1980 to 2019. It was found that over the years the incidence among persons aged under 15 years has increased. 
The reason may be the more effective diagnosis of the disease and taking into account younger and younger subjects . A studyIt should be noted that people with abnormal eating habits are at high risk of eating disorders, which are characterized by a psychopathological attitude and accompanying groups of syndromes involving weight changes or physiological disorders, including anorexia nervosa, bulimia and atypical eating disorders . For exaTo a large extent, self-esteem is related to satisfaction with the appearance. In particular, low self-esteem determines dissatisfaction with the appearance of the body, and vice versa, low self-esteem leads to dissatisfaction and a negative perception of the figure . In our Satisfaction with life plays a significant role as a risk factor for eating disorders. Satisfaction with appearance and self-confidence affects this indirectly. Doornik et al. found that life satisfaction is a plastic factor that changes with the severity of anorexia symptoms. To obtain effective treatment, you also need to improve your commitment to specific domains of life . Eating The present study has some limitations. Respondents could subjectively complete the questionnaires, which may change the results. Another limitation is the small number of men in the study. This study, however, provides some direction for future research. All people from primary and secondary schools in the Subcarpathian region who gave their consent were tested. Each respondent had different conditions of private life, knowledge about the state of physical health, and emotional approach. We believe that it would be interesting to study a group in a lower age range, with the exclusion criteria presented above. It is also worth diversifying the income levels and living environments of children\u2019s carers in terms of the impact on the quality and quantity of nutrition. Considering further research on this topic, researchers should increase the study group and include adolescents from different regions of the country.Gender plays a significant role in the risk of eating disorders. Women, compared to men, showed lower self-esteem and at the same time a higher risk of eating disorders in all three areas. Men showed greater satisfaction with their appearance, body weight and their lives, and they also had higher self-esteem.Episodes of significant weight loss (>10 kg) concerned a group of younger students. The younger the person, the greater the probability of eating disorders, especially in the area of bulimia and food preoccupation and control.Low self-esteem correlated positively with significant weight loss (>10 kg), more frequent uncontrollable binge eating and exercising more than 60 min a day to influence appearance. People with lower self-esteem were more likely to be less satisfied with their appearance and weight and were more likely to be treated for eating disorders.Among adolescents, satisfaction with appearance and weight affects life satisfaction, as measured with the Cantril ladder, which showed a positive relationship with the risk of eating disorders in all three areas. Subjects dissatisfied with their lives binged, feeling that they could not stop."} {"text": "Scutellariae Barbatae Herba and Scutellariae Radix and is demonstrated to have anti-tumor properties in colon cancer. Notwithstanding, the function and mechanism of Salvigenin in hepatocellular carcinoma (HCC) are less well studied. Different doses of Salvigenin were taken to treat HCC cells. 
Cell viability, colony formation ability, cell migration, invasion, apoptosis, glucose uptake, and lactate production levels were detected. As shown by the data, Salvigenin concentration dependently dampened HCC cell proliferation, migration, and invasion, weakened glycolysis by abating glucose uptake and lactate generation, and suppressed the profiles of glycolytic enzymes. Moreover, Salvigenin strengthened HCC cells\u2019 sensitivity to 5-fluorouracil (5-FU) and attenuated HCC 5-FU-resistant cells\u2019 resistance to 5-FU. Through network pharmacological analysis, we found Salvigenin potentially regulates PI3K/AKT pathway. As shown by the data, Salvigenin repressed the phosphorylated levels of PI3K, AKT, and GSK-3\u03b2. The PI3K activator 740Y-P induced PI3K/AKT/GSK-3\u03b2 pathway activation and promotive effects in HCC cells. However, Salvigenin substantially weakened 740Y-P-mediated effects. In-vivo assay revealed that Salvigenin hampered the growth and promoted apoptosis of HCC cells in nude mice. Collectively, Salvigenin impedes the aerobic glycolysis and 5-FU chemoresistance of HCC cells by dampening the PI3K/AKT/GSK-3\u03b2 pathway.Salvigenin is a Trimethoxylated Flavone enriched in The online version contains supplementary material available at 10.1007/s12010-023-04511-z. Hepatocellular carcinoma (HCC), a primary malignancy of hepatocytes, has high-level invasiveness . In the Metabolic reprogramming, particularly aerobic glycolysis , was first detected in HCC . As HCC Scutellariae Barbatae Herba (SBH), a dried whole plant of Scutellaria Barbata D. Don, is referred to as Ban-Zhi-Lian in traditional Chinese medicine (TCM). SBH, equipped with anti-cancer, anti-oxidant, and anti-angiogenic activities, enjoys a history of thousands of years in China [Scutellariae Radix, the root of Scutellaria Baicalensis, boasts anti-inflammatory and anti-cancer functions [Scutellariae Radix exerts a remarkable anti-cancer function in liver cancer [Scutellariae Barbatae Herba and Scutellariae Radix, is reported to show the dual activities of reducing lipid and boosting mitochondrial functions [in China \u201315. The in China , 17. Scuunctions . Scutellr cancer . Salvigeunctions . Furtherunctions . Salvigeunctions .Network pharmacology is an emerging field in terms of the pharmacological research of modern Chinese medicine. It reveals the underlying molecular mechanisms of traditional Chinese medicine potency via the establishment and visualization of the \u201cdrug-target-disease\u201d interactive network , 24. Herhttps://www.genecards.org/) database and obtained Salvigenin-associated targets from SwissTarget (http://www.swisstargetprediction.ch/) [https://pubchem.ncbi.nlm.nih.gov/). Venn\u2019s diagram online tool (https://bioinfogp.cnb.csic.es/tools/venny/index.html) was adopted to analyze the crossing targets of the disease and drug. The Cytoscape software (version 3.9.1) was harnessed to visualize the drug-target network diagram. We uploaded the common targets to the String database (https://cn.string-db.org/cgi/input.pl) to produce the protein-protein interaction (PPI) network diagram. The DAVID database (https://david.ncifcrf.gov/home.jsp) was introduced for the GO and KEGG enrichment analyses of the common targets.We acquired the targets related to hepatocellular carcinoma (HCC) from the GeneCards (ion.ch/) . The Iso2, humid air).The Human HCC cell lines were ordered from American Type Culture Collection. 
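As a small illustration of the target-intersection step of the network pharmacology workflow described above, the sketch below overlaps two gene lists and writes the common targets out for the downstream PPI (STRING) and enrichment (DAVID) steps; the file names and one-symbol-per-line format are hypothetical assumptions about the GeneCards and SwissTargetPrediction exports, not the authors' actual files.

```python
# Intersect disease-associated and drug-associated gene symbol lists.
def read_gene_symbols(path):
    """Read one gene symbol per line, ignoring blanks, and normalise case."""
    with open(path) as fh:
        return {line.strip().upper() for line in fh if line.strip()}

disease_targets = read_gene_symbols("hcc_targets_genecards.txt")        # hypothetical export
drug_targets = read_gene_symbols("salvigenin_targets_swisstarget.txt")  # hypothetical export

common = sorted(disease_targets & drug_targets)
print(f"{len(common)} overlapping targets")

# Save the overlap for the PPI network and GO/KEGG enrichment analyses
with open("common_targets.txt", "w") as fh:
    fh.write("\n".join(common))
```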
The cells were cultured with an RPMI1640 medium filled with 10% fetal bovine serum and 1% penicillin/streptomycin in an incubator for overnight culture. Predicated on prior studies, we treated the cells with Salvigenin or 5-FU for 24 h. Next, 10 \u03bcL of CCK-8 solution was administered to each well for 2 h of further incubation . At the end of the culture, a microplate reader was utilized to gauge the absorbance value at 450 nm. Santa Cruz Biotechnology, Inc. was the supplier of Salvigenin (CAS number: 19103-54-9). Cell death was assessed using the Calcein/PI Cell Viability/Cytotoxicity Assay Kit . Huh7 and HepG2 cells were seeded into 96-well plates (density: 1\u00d7103 cells/well) for overnight culture. Followed by Salvigenin treatment, Calcein AM/PI working solution was added into each well (100 per well), and the plates were put in a 37 \u00b0C incubator for 30 min. The Calcein AM/PI fluorescence signals were observed and taken using a microscope .The CCK-8 kit was adopted to examine cell viability. We inoculated HCC cells into 96-well plates , 740Y-P (50 \u03bcg/mL) or 5-FU (100 \u03bcM) for two days. After the medium was removed, the cells were cultured in a fresh culture medium without the above drugs. The medium was exchanged every 3 days; 10 days later, the colonies were fixed using 70% methanol for 30 min. Then, they were stained in 0.5% crystal violet at room temperature (RT) for 30 min. A light microscope was utilized for counting the number of cell colonies.Huh7 and HepG2 cells were seeded onto 60-mm culture dishes with a density of 1\u00d710\u2212\u0394\u0394CT approach was employed to calculate the relative expression of the target gene . The primers were designed via Primer3 (https://primer3.ut.ee/). The primer sequences are exhibited in Table TRIzol reagent (Thermo Fisher Scientific) was taken to extract total RNA from the cells. Nano Drop 2000 was adopted to measure the purity and concentration of the total RNA. As per the manufacturer\u2019s recommendations, the RevertAid First Strand cDNA Synthesis Kit was exploited to reverse-transcribe 1 \u03bcg of the total RNA into cDNA, which was utilized as the template to amplify the target gene and reference gene GAPDH. SYBR GreenPCR (MedChemExpress) was introduced for PCR. Thermal cycling was implemented with these conditions: 3 min of initial degeneration at 95 \u00b0C, 10 s of denaturation at 95 \u00b0C, 30 s of annealing at 60 \u00b0C, and 30 s of extension at 60 \u00b0C . The 2We harvested the tumor tissues from the nude mice and HCC cells treated with varying doses of Salvigenin (25\u2013100 \u03bcM) and/or 740Y-P (50 \u03bcg/mL). RIPA lysis buffer supplemented with a mixture of protease and phosphatase inhibitors was adopted to extract total protein from the cells and tumor tissues, respectively. The Bicinchoninic Acid (BCA) protein quantification kit was harnessed for protein quantification. Then, 50 \u03bcg of the total protein samples from each group were isolated through 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and moved onto polyvinylidene difluoride (PVDF) membranes ; 5% skimmed milk was given to seal the membranes at RT for 2 h. Next, the membranes were incubated with primary antibodies , Anti-p-PI3K , Anti-AKT , Anti-p-AKT , Anti-GSK-3\u03b2 , Anti-p-GSK-3\u03b2 , Anti-HK2 , Anti-PFK1 , Anti-PKM2 , and Anti-\u03b2-actin ) at 4 \u00b0C overnight. 
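For readers unfamiliar with the 2^-ΔΔCT method used in the qRT-PCR analysis above, the calculation is shown below as a minimal sketch with GAPDH as the reference gene; the Ct values are illustrative placeholders, not data from this study.

```python
# Illustrative 2^-ΔΔCt calculation for qRT-PCR (GAPDH as reference gene).
# Ct values below are made-up placeholders, not measurements from the study.
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of the target gene versus the control group."""
    delta_ct_treated = ct_target - ct_ref            # ΔCt (treated)
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl  # ΔCt (control)
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: triplicate Ct values for one glycolytic gene after treatment
ct_hk2_treated  = np.array([26.1, 26.3, 26.0])
ct_gapdh_treated = np.array([18.2, 18.1, 18.3])
ct_hk2_control  = np.array([24.0, 24.2, 24.1])
ct_gapdh_control = np.array([18.0, 18.1, 18.2])

fold = relative_expression(
    ct_hk2_treated.mean(), ct_gapdh_treated.mean(),
    ct_hk2_control.mean(), ct_gapdh_control.mean(),
)
print(f"HK2 fold change vs. control: {fold:.2f}")
```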
Later, the horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG secondary antibody was added to the membranes for 2 h of incubation at RT. Color and image development was done with the use of ECL in darkness. \u03b2-actin served as the internal parameter.5 cells/mL), 200 \u03bcL of the cell suspension was given to the upper chamber, whereas 600 \u03bcl of a medium with 15% FBS was administered to the lower compartment. A sterile, thermostatic incubator was adopted to culture the cells for 24 h at 37 \u00b0C with 5% CO2. The compartments were taken out. The cells that failed in migration and invasion were wiped off using cotton swabs. Transwell membranes were immobilized in 4% paraformaldehyde for 10 min and dyed in 0.1% crystal violet. They were observed and photographed employing a microscope.Transwell compartments were placed in 24-well plates; 50 \u03bcL of diluted Matrigel was added to the upper compartment in the invasion experiment instead of the migration assay. With the cells suspended in a medium without serum . Then, the supernatant was harvested. As instructed by the manufacturer, the glucose uptake detection kit and the lactate production detection kit were taken to determine the levels of glucose uptake and lactate generation.6 cells/mL, Huh7 cells were suspended in a serum-free medium and subcutaneously transfused into the right abdomen of the mice (0.1 mL each). In this way, a liver cancer xenograft model was built. A vernier caliper was taken to gauge the length (L) and width (W) of transplanted tumors every three days. As the tumor volume attained around 50 mm3, the animals were randomized to different groups (n = 5 per group). Salvigenin was intraperitoneally injected into the mice for treatment. Normal saline of the identical amount was given to the control group. The mouse tumor bodies were examined every week for 4 weeks running. V (mm3) = L \u00d7 W2/2. When the experiment came to an end, the mice were sacrificed through the inhalation of excessive CO2. The tumors were separated and photographed. Their volume and weight were gauged. The tumors were harvested for the following experiments. All experiments, authorized by the Ethics Committee of Zhejiang Taizhou Hospital, affiliated with Wenzhou Medical University, were implemented in conformity with the Guidelines for the Care and Use of Laboratory Animals .Male BALB/c nude mice, 6\u20138 weeks of age and 20 \u00b1 2 g in weight, were ordered from the Animal Experimental Center of Wenzhou Medical University. They were reared under specific-pathogen-free (SPF) conditions with a 12-h light/dark cycle. They could access food and water at will. With the cell density adjusted to 4\u00d7102O2 was applied to block the activity of endogenous peroxidase. After antigen repair, the slices were sealed using 5% goat serum for 20 min, flushed in PBS three times, and then incubated along with Anti-Ki67 antibody overnight at 4 \u00b0C. Afterward, the sections were incubated with the HRP-conjugated goat anti-rabbit IgG secondary antibody for 2 h. Diaminobenzidine (DAB) was taken for color development. Hematoxylin was used for redyeing. The staining outcomes were observed employing a microscope after the slices were dehydrated, made transparently, and sealed by neutral gum.The mouse tumor tissues were immobilized using 4% paraformaldehyde, dehydrated with alcohol of gradient concentrations, made transparent with xylene, embedded in paraffin, and severed into slices (4-\u03bcm thick). 
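The xenograft monitoring above uses the caliper formula V (mm³) = L × W²/2. A short sketch of that calculation is given here for clarity; the measurements are illustrative placeholders, not values from the experiment.

```python
# Tumor volume from caliper measurements using the formula stated above:
# V (mm^3) = L * W^2 / 2. Measurements below are illustrative placeholders.

def tumor_volume(length_mm: float, width_mm: float) -> float:
    return length_mm * width_mm ** 2 / 2.0

# Example: weekly measurements for one xenograft (L, W) in mm
measurements = [(5.0, 4.5), (7.2, 6.0), (9.8, 7.5), (12.5, 9.0)]
for week, (l, w) in enumerate(measurements, start=1):
    print(f"week {week}: {tumor_volume(l, w):.1f} mm^3")

# Mice were randomized to treatment groups once volumes reached ~50 mm^3
```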
The sections were dewaxed and hydrated; 3% HThe paraffin slices of tumor tissues were dehydrated using gradient alcohol, washed in distilled water, dyed with the hematoxylin solution for 8 min, flushed in running water, and turned blue with 0.6% ammonia. Subsequently, the sections were stained in the eosin solution for 2 min, dehydrated, and made transparent with the use of xylene. Finally, they were dried and sealed with neutral gum. A microscope was utilized for observation and photography.The paraffin slices were routinely dewaxed to water and subjected to antigen repair. After PBS washing, the sections were sealed using 5% bovine serum for 30 min. We added the primary antibodies Anti-PI3K and Anti-AKT to incubate them overnight at 4 \u00b0C. Next, the fluorescence-labeled goat anti-rabbit IgG secondary antibody was applied for 1-h incubation at RT. DAPI was utilized for redyeing. An anti-fluorescence quenching agent was adopted to seal the sections. A fluorescence microscope was taken for observation and imaging.After the HCC cells were treated with Salvigenin (100 \u03bcM), 740Y-P (50 \u03bcg/mL), or 5-FU (100 \u03bcM), as mentioned before, the medium was removed. The cells were flushed in pre-cooled PBS, fixed with 4% polyformaldehyde for 60 min, and rinsed in PBS once. Later, an immunostaining detergent was applied for 2 min of incubation in an ice bath. The samples were incubated for an hour in darkness at 37 \u00b0C with 50 \u03bcL TUNEL solution. After being flushed in PBS three times, the slices were sealed using an antifade mounting medium and observed by a fluorescence microscope .As per the instructions of the TUNEL apoptosis kit , the tumor slices were routinely dewaxed, hydrated, and flushed using PBS three times. The Protease K working solution (20 \u03bcg/mL) without any DNase was added for 15 min of reaction at 37 \u00b0C. The slices were rinsed in PBS three times. Next, 50 \u03bcL TUNEL solution was added and evenly spread over the samples. After being sealed with an anti-fluorescence quenching agent, the slices were observed by a fluorescence microscope.The mouse blood samples were harvested. The levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), blood urea nitrogen (BUN), and serum creatinine (Scr) were through the Fuji Dri-Chem 3500i Biochemistry Analyzer .t-test was implemented to compare data differences between two groups. P < 0.05 held statistical significance.GraphPad Prism 8.0 was introduced for analyzing statistics. Measurement statistics were exhibited as mean \u00b1 standard deviation (SD). One-way ANOVA was taken for comparison among multiple groups, followed by post hoc Tukey\u2019s test, while a To investigate the pharmacological function of Salvigenin Fig. B. We alsP < 0.05, Fig. P < 0.05, Fig. Aerobic glycolysis, a symbol of cancer metabolism, features glucose uptake and lactate generation. This prompted us to probe the influence of Salvigenin on the aerobic glycolysis of HCC cells. Corresponding kits were taken to examine the levels of glucose uptake and lactate generation in Huh7 and HepG2 cells. As a result, Salvigenin vigorously attenuated glucose uptake and lactate production and 5-FU were applied to treat Huh7 and HepG2 cells for the purpose of studying the influence of Salvigenin on HCC cells\u2019 chemoresistance. CCK-8 tested cell viability, with the IC50 value calculated. 
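The IC50 values derived from the CCK-8 dose-response data were obtained with GraphPad Prism; as a generic illustration of that calculation (not the authors' exact procedure), the sketch below fits a four-parameter logistic curve to placeholder viability data and extracts the IC50.

```python
# Generic four-parameter logistic fit for estimating IC50 from CCK-8
# dose-response data. The study used GraphPad Prism; this sketch only
# illustrates the underlying calculation, and all numbers are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([6.25, 12.5, 25, 50, 100, 200])             # uM (placeholder)
viability = np.array([0.97, 0.90, 0.78, 0.55, 0.34, 0.18])  # fraction of control

params, _ = curve_fit(four_pl, conc, viability, p0=[0.1, 1.0, 50.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} uM (Hill slope {hill:.2f})")
```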
The outcomes displayed that Salvigenin dramatically fortified HCC cells\u2019 sensitivity to 5-FU (versus the con group) (https://www.genecards.org/). The SMILES number of Salvigenin obtained from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/) was inputted into the SwissTarget database (http://www.swisstargetprediction.ch/). We selected Homo Sapiens and downloaded 100 Salvigenin-concerned targets. The targets of the disease and drug were uploaded to Venn\u2019s diagram online tool (https://bioinfogp.cnb.csic.es/tools/venny/index.html), and we obtained 32 common targets . We selected Homo Sapiens to generate the PPI network diagram and visualized the top 20 signaling pathways and the top 10 terms of biological process (BP), cellular component (CC), and molecular function (MF) and The Human Protein Atlas (HTTPS://www.proteinatlas.org/) databases revealed that the expressions of PIK3CA, AKT1, and GSK3B in HCC tissues were enhanced Fig. A, B throP < 0.05, Fig. P < 0.05, Fig. To better understand the anti-cancer function of Salvigenin in HCC, we treated Huh7 cells with Salvigenin and/or 740Y-P (the PI3K activator). CCK-8, colony formation assay, Transwell migration and invasion experiments, and TUNEL were implemented. Figure in vivo. Salvigenin dose-dependently hampered tumor growth was taken to treat the mice. In this way, we probed the influence of Salvigenin on HCC cell growth wth Fig. A, B. HE wth Fig. A, B. Thein vivo.The profiles of HK2, PFK1, PKM2, and the PI3K/AKT/GSK-3\u03b2 pathway were determined by western blot. The experimental statistics unveiled that Salvigenin restrained their expressions Fig. A, B. Tisin-vivo and in-vitro experiments. Salvigenin exerted its cancer-suppressing function in the context of HCC by hampering the PI3K/AKT/GSK-3\u03b2 signaling pathway.HCC development, a sophisticated biological process, concerns multiple molecular and cellular signaling pathways . Conventin-vitro outcomes, Salvigenin concentration dependently hampered HCC cell proliferation, migration, and invasion, elicited apoptosis, and weakened cell aerobic glycolysis and chemoresistance. In-vivo experiments unraveled that Salvigenin repressed the growth of tumors . The PI3Glycogen synthase kinase (GSK)-3\u03b2 is a classical downstream target of the AKT pathway. Reportedly, GSK-3\u03b2 has two phosphorylation sites (Ser9 and Tyr216). The phosphorylation of the Ser9 site can culminate in GSK-3\u03b2 inactivation, whereas the phosphorylation of the Tyr216 site leads to GSK-3\u03b2 activation . The funAll in all, through network pharmacological analysis and experiments, we have probed the underlying mechanism of Salvigenin in HCC. As a result, Salvigenin may dampen HCC cell proliferation, migration, and invasion and weaken cell glycolysis and chemoresistance mainly by modulating the PI3K/AKT/GSK-3\u03b2 pathway. Nonetheless, our study still has some limitations. In more animal studies, we will confirm the function and molecular mechanism of Salvigenin in the context of HCC.Supplementary Figure 1http://gepia.cancer-pku.cn/) database. D-G: The Human Protein Atlas (https: //www. proteinatlas.org/) was adopted to check the profiles of PIK3CA, AKT1, and GSK3B proteins in HCC tissues and normal liver tissues. (PNG 2290 kb)The profiles of PIK3CA, AKT1, and GSK3B in HCC tissues and adjacent normal tissues. 
A-C: The profiles of PIK3CA, AKT1, and GSK3B in HCC tissues and adjacent normal tissues were determined through the GEPIA (High resolution image (TIF 8192 kb)Supplementary Figure 2http://gepia.cancer-pku.cn/) was introduced to verify the relationship between the levels of PIK3CA, AKT1, and GSK3B and the OS and RFS rates of HCC patients. (PNG 495 kb)The correlation between the expression levels of PIK3CA, AKT1, and GSK3B and prognosis in HCC. A-C: The GEPIA database (High resolution image (TIF 1670 kb)Supplementary Figure 3The toxicity of Salvigenin in tumor-bearing mice. A-B: ELISA ascertained ALT, AST, Scr, and BUN levels in the mouse serum. C: HE staining monitored pathological alterations in the heart, liver, spleen, lung, and kidney tissues of mice. (PNG 4175 kb)High resolution image (TIF 14644 kb)"} {"text": "Specifically, the research topics \u201cnew bio-ink investigation,\u201d \u201cmodification of extrusion-based bioprinting for cell viability and vascularization,\u201d \u201capplication of 3D bioprinting in organoids and in vitro model\u201d and \u201cresearch in personalized and regenerative medicine\u201d were predicted to be hotspots for future research.Three-dimensional (3D) bioprinting is an advanced tissue engineering technique that has received a lot of interest in the past years. We aimed to highlight the characteristics of articles on 3D bioprinting, especially in terms of research hotspots and focus. Publications related to 3D bioprinting from 2007 to 2022 were acquired from the Web of Science Core Collection database. We have used VOSviewer, CiteSpace, and R-bibliometrix to perform various analyses on 3,327 published articles. The number of annual publications is increasing globally, a trend expected to continue. The United States and China were the most productive countries with the closest cooperation and the most research and development investment funds in this field. Harvard Medical School and Tsinghua University are the top-ranked institutions in the United States and China, respectively. Dr. Anthony Atala and Dr. Ali Khademhosseini, the most productive researchers in 3D bioprinting, may provide cooperation opportunities for interested researchers. Tissue Engineering Part A contributed the largest publication number, while Frontiers in Bioengineering and Biotechnology was the most attractive journal with the most potential. As for the keywords in 3D bioprinting, Bio-ink, Hydrogels , Scaffold , extrusion-based bioprinting, tissue engineering, and Three-dimensional (3D) bioprinting is a technology that enables the 3D printing of various cells, biocompatible materials, and supporting components into complex 3D functional living tissues . This tein vitro models and organoids database has been widely used in bibliometric studies . We perfSeveral software are frequently used in bibliometric analysis, including VOSviewer, CiteSpace, and HistCite . Two sofVOSviewer 1.6.18 is a widely used software for constructing knowledge-maps based on a co-occurrence matrix . The genCiteSpace 6.1. R2 is another bibliometric software for creating visualization maps based on the data retrieved from the database . Severalhttps://bibliometric.com/app_v0) was used for the comparison analysis of the annual number of papers among the top 10 countries. GraphPad Prism 8 and Microsoft Office Excel 365 were respectively used to draw a column chart and provide a descriptive analysis for annual publications. 
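The knowledge maps produced by VOSviewer and CiteSpace are built from co-occurrence matrices derived from the Web of Science records. As a rough illustration of that preprocessing step, the Python sketch below counts author-keyword co-occurrences from a tab-delimited Web of Science export; the file name, delimiter, and use of the standard "DE" (author keywords) field are assumptions rather than details reported by the authors.

```python
# Sketch of the keyword co-occurrence counting that underlies the
# VOSviewer/CiteSpace maps described above. Assumes a tab-delimited
# Web of Science export ("savedrecs.txt") with the "DE" author-keyword
# field; these parsing details are assumptions, not from the paper.
from collections import Counter
from itertools import combinations
import pandas as pd

records = pd.read_csv("savedrecs.txt", sep="\t", dtype=str).fillna("")

pair_counts = Counter()
for de_field in records["DE"]:
    # Web of Science separates author keywords with semicolons
    keywords = sorted({k.strip().lower() for k in de_field.split(";") if k.strip()})
    pair_counts.update(combinations(keywords, 2))

# The most frequent pairs correspond to the strongest links in the network
for (kw1, kw2), n in pair_counts.most_common(10):
    print(f"{kw1} -- {kw2}: {n}")
```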
Impact factor (IF) scores were extracted from 2021 Journal Citation Reports (JCR).Moreover, an online platform (R2 = 0.9873) of the trend was created, where X represents the year and Y indicates the amount of annual publications.Based on the search strategy, a total of 3,327 papers were identified. The number of annual publications regarding 3D bioprinting is illustrated in n = 1,069), followed by China (n = 733), South Korea (n = 288), Germany (n = 210), and India (n = 166). As shown in All publications regarding 3D bioprinting were published by 2,733 institutions in 83 countries/regions in total. The top 10 prolific countries and institutions are listed in n = 86) is the institution with the largest number of publications, followed by Tsinghua University (n = 75), Chinese Academy of Sciences (n = 74), Zhejiang University (n = 69), and Wake Forest School of Medicine (n = 56). University of California, Los Angeles, has the highest centrality of 0.18. Of the top 10 most prolific institutions, four were from China and four were from United States. Moreover, an institution co-authorship analysis is shown in The co-occurrences of institutions are presented in n = 63) contributed the largest number of papers, followed by Lee SJ (n = 53), Gatenholm P (n = 41), Zhang YS (n = 39), and Yoo JJ (n = 39). These and the rest of the authors are listed in The top 25 most prolific authors are shown in n = 1,139), followed by Ozbolat IT (n = 806), Skardal A (n = 691), Kolesky DB (n = 650), and Pati F (n = 589). These and the rest of the top 10 co-cited authors are listed in The density maps of co-cited authors based on co-citations are shown in n = 225), followed by Biofabrication (n = 191), Advanced Healthcare Materials (n = 82), International Journal of Bioprinting (n = 81), and Frontiers in Bioengineering and Biotechnology (n = 65). All the top 10 most productive journals had an IF (2021) of >4. Seven of these journals were categorized in the Q1 JCR division.The articles included in the analysis were published in 752 journals. Of these, 33 published a minimum of 20 papers and were included and visualized in The dual-map overlay of journals created using CiteSpace is shown in in vitro and in vivo applications (ications . In 2006ications . In 2019ications proposedn = 470) and the highest strength of citation bursts (Strength = 120.73) was published in Nature Biotechnology by Murphy SV et al. in 2014 , Bio-ink was defined as materials which are capable to include cells and other bioactive components for the use in biofabrication . Bio-inkBased on the material requirements, hydrogels are gradually being used in bio-inks. Hydrogel biomaterials include alginate, gelatin, collagen, fibrin/fibrinogen, gellan gum, hyaluronic acid, agarose, chitosan, silk, decellularized extracellular matrix (dECM), poly (ethylene glycol), and Pluronic. Hydrogels have many attractive features for use as bio-inks. As they are biocompatible and typically biodegradable, most of them are easy for cells to adhere, grow, spread and proliferate on . FurtherTissue engineering is one of the hottest research fields in recent years, and it has been widely applied to musculoskeletal tissue, oral tissue, cardiovascular tissue, urogenital tissue, ocular tissue, and so on . Based oThere are four major 3D bioprinting methods, namely, extrusion, inkjet, stereolithography, and laser-assisted printing . 
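The publication-trend model mentioned above (R² = 0.9873, with X the year and Y the number of annual publications) can be reproduced in principle by fitting a growth curve to the yearly counts. Because the exact functional form did not survive text extraction, the sketch below fits an exponential curve purely as an assumption, using placeholder counts rather than the study's data.

```python
# Fit a growth trend to annual publication counts and report R^2.
# The article reports R^2 = 0.9873 for its model (X = year, Y = papers/year),
# but the functional form was lost in extraction; an exponential curve and
# the yearly counts below are assumptions used only for illustration.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(2007, 2023)
papers = np.array([5, 8, 12, 18, 25, 40, 60, 90, 130, 180,
                   240, 310, 400, 500, 620, 689])  # placeholder counts

def exp_growth(x, a, b):
    return a * np.exp(b * (x - years[0]))

params, _ = curve_fit(exp_growth, years, papers, p0=[5.0, 0.3])
pred = exp_growth(years, *params)

ss_res = np.sum((papers - pred) ** 2)
ss_tot = np.sum((papers - papers.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"Fitted growth rate: {params[1]:.3f} per year, R^2 = {r_squared:.4f}")
```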
EBB ext… dECM scaffolds refer to biomaterials formed from human or animal organs/tissues by removing the immunogenic cellular components with decellularization technologies . A dECM … In vitro models are important tools for studying the occurrence and development of diseases , is the research frontier in this field, and it is currently in an explosive period. Last but not least, the application of 3D bioprinting in personalized and regenerative medicine will continue to develop in the future, but the important issues of cell viability and vascularization need to be addressed first. Using information visualization technology, this study comprehensively summarizes the research progress, hotspots, and frontiers in the 3D bioprinting field. These findings can provide a foundation for further research in this field and inspire collaboration among potential partners and institutions. Research on 3D bioprinting has developed greatly and still holds great promise for the future. The United States and China have clear advantages in this field, and interested researchers may find cooperation opportunities at Harvard Medical School, Tsinghua University, the Chinese Academy of Sciences, the University of California, Los Angeles, and the Wake Forest School of Medicine. Tissue Engineering Part A published the most articles in this field, and Dr. Anthony Atala was the most productive author. Academic exchange and cooperation between countries, institutions, and authors have promoted the development of this research field. Bio-inks and scaffolds have recently become a research hotspot. The development of different hydrogels and dECM is expected to become an attractive direction in the coming years. In addition, further studies on the modification of the EBB technique are key to promoting the progress of 3D bioprinting. Notably, the application of 3D bioprinting, especially in tissue engineering and"}