diff --git "a/deduped/dedup_0599.jsonl" "b/deduped/dedup_0599.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0599.jsonl" @@ -0,0 +1,42 @@ +{"text": "Recognition of a given object required integration of the information provided by the two subsets, and previous research had found that recognition declined as the delay between subsets was increased. The present experiment found the decline in recognition to be linear for each of several levels of ambient illumination, dropping rapidly under photopic test conditions, and with the slope being progressively less steep with transition into the scotopic range. The change in the duration of information persistence may be related to the density of information that is provided under various lighting conditions, and a requirement that the information be buffered against noise or \"packaged\" to accommodate successive saccades. \"All the connections set up between sensations by the formation of ideas tend to persist, even when the original conditions of connection are no longer fulfilled.\" Titchener. It is well established that brief stimulation can initiate sustained neural activity that allows information to be sampled or integrated over time intervals that far outlast the duration of the stimulus. In vision, the persistence of information has been variously described as visual information store or iconic memory. Previous research from this laboratory found that the information persistence needed for recognition of transient discrete shape cues is affected by the level of ambient room illumination. Under these test conditions, the prior work found that if the two subsets were displayed with minimal delay between offset of the first subset and onset of the second, recognition levels were relatively high, but recognition declined as the delay between subsets was increased. These results provided the starting point for the present experiment. Ten USC undergraduates served as subjects in the experiment. Subjects had normal or corrected to normal visual acuity. 
Except for the task instructions described below, they were naive to the hypothesis under consideration. Subjects received course credit for their participation. The shapes to be identified were taken from the Macmillan Visual Dictionary or from similar sources. One hundred fifty (150) shapes were used in the present experiment, as shown in the table. For the present experiment the display set was then divided into two subsets, each containing roughly half of the dots to be displayed. A convention was applied that numbered the address positions of the display set, specifying each odd position as belonging to one subset, and each even position to the other. These were designated as odd and even subsets, as illustrated in the lower two panels of the figure. Testing was done in a room that had no windows, and fluorescent tubes housed in standard recessed ceiling fixtures with plastic diffusion panes provided the lighting. The level of ambient illumination from these fixtures was controlled by the addition of opaque occluding panels that were held in channels that were coplanar to the surface of the fixture. Each fixture had two panels, one over each end, which could be slid apart to alter the area of the opening through which light could flow. This provided for control of ambient illumination without any change in color temperature of the light. Three levels of ambient illumination were used in the experiment, designated as bright, dim and dark. Ambient light levels were measured with a Tektronix J17 photometer, which uses a cosine corrected head having certified calibration. The light readings were taken from the location of the seated subject. Mean illumination was 303 lux for the bright condition, and was 13.3 lux for the dim condition. 
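The odd/even address convention described above is simple to express in code. The following is an illustrative sketch (the function name and the list representation of dot addresses are assumptions, not part of the original display software):

```python
def split_display_set(display_set):
    """Split a list of dot addresses into odd and even subsets.

    Address positions are numbered from 1, following the convention
    described above: odd-numbered positions form one subset and
    even-numbered positions form the other.
    """
    odd_subset = display_set[0::2]   # positions 1, 3, 5, ...
    even_subset = display_set[1::2]  # positions 2, 4, 6, ...
    return odd_subset, even_subset

# Example: six dot addresses (row, column) on the 64 x 64 LED array
dots = [(3, 5), (3, 6), (4, 5), (4, 7), (5, 5), (5, 8)]
odd, even = split_display_set(dots)
```

Alternating positions (rather than, say, splitting the list in half) keeps the two subsets spatially interleaved, so each subset samples the whole shape.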
The lights were turned completely off for the dark condition, and the illumination was functionally zero. Measures were also taken of the amount of light being reflected from the art-board frame and from the wall surrounding the display board (both of which were the same shade of ivory). When the room was bright, the luminance of these surfaces was 25 Cd/m2, and for the dim condition the luminance was 1 Cd/m2. Stimulus shapes were presented using a display board having a 64 \u00d7 64 array of LEDs, each of which could be illuminated under control of a computer and microprocessor slave. The GaAlAs LEDs emitted at a wavelength of 660 nm, and had a rise/fall time for emission in the range of 50\u2013100 nanoseconds. Two levels of LED emission were used. With the room bright, the emission level was set to 96 Cd/m2. When the room was either dim or dark, the emission was set at 7 Cd/m2, the lower level being used because brief flashes that are substantially brighter can produce afterimages. The display board was attached to a wall at a viewing distance of 3.5 m, and with an elevation above eye level of approximately 10 degrees. At this distance the diameter of each LED was 4.9 arc', center-to-center spacing was 7.4 arc', and the dimensions of the full array, i.e., measured from center-to-center of the outside elements, was 7.7 \u00d7 7.7 arc\u00b0. Each dot of the display set was shown on the LED array by allowing current to flow through the specified LED for 0.1 ms, this being designated as T1. It is convenient to describe the display of a given address as a pulse, so T1 specifies pulse width, as illustrated in the figure. A major variable of interest was the time interval between subsets, which was measured from offset of the final pulse in the odd subset till onset of the first pulse in the even subset. This was designated as T3. As outlined in the introduction, prior work found that the useful range of this interval depends on the level of ambient illumination, so different T3 values were chosen for each light level. To be specific, when the room was bright, the T3 intervals were: 0, 10, 20, 30 and 40 ms. 
When the room was dim, the T3 intervals were: 0, 20, 40, 60 and 80 ms. When the room was dark, these values were: 0, 40, 80, 120 and 160 ms. The order of room illumination was determined at random for each subject. Subjects were dark adapted for 20 minutes prior to testing with the room being dark. Shapes that had been assigned to a given level of room illumination were tested as a block, i.e., each was displayed successively with illumination being the same. For each level of room illumination the order of shape presentation was random, which provided for a random order of T3 values. Recognition of a given object required integration of shape cues that were provided by the two subsets. Pilot work had shown that the hit rate from display of a single subset would be in the 20% range. Observing hit rates that are substantially above this value provides evidence of the degree to which the shape cues from the two subsets are being combined by the visual system, which may be described as information persistence or iconic memory. Previous research had demonstrated that the time interval within which shape information can be integrated shows large differentials as a function of room illumination. The goal of the present experiment was to specify more precisely how recognition declines as the interval between subsets is lengthened at each level of illumination. For a given subject, each shape was displayed only once at one of the fifteen treatment combinations \u2013 five levels of T3 interval across three levels of room illumination. The shapes were approximately matched for difficulty level on the basis of the number of dots in the display sample, and the response variable was successful recognition (yes/no). Mean recognition level across subjects (hit rate) for each of the fifteen treatment combinations is plotted in the figure. For statistical confirmation of effects, the appropriate model for this binary data is a generalized linear model with binomial errors. Logit scores, i.e., loge(proportion/(1 \u2212 proportion)), were calculated, and treatment differences were compared using the standard error of the difference for these values. 
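The logit transform just mentioned, and the standard error used to compare two proportions on the logit scale, can be sketched as follows. The hit counts below are made-up illustrative numbers, not data from the study, and the variance approximation for a logit of a binomial proportion is the standard one:

```python
import math

def logit(p):
    """Natural-log odds: log_e(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def se_logit_diff(hits1, n1, hits2, n2):
    """Standard error of the difference between two logits.

    For a binomial proportion p estimated from n trials, the variance
    of the logit is approximately 1 / (n * p * (1 - p)); the variance
    of a difference is the sum of the two variances.
    """
    p1, p2 = hits1 / n1, hits2 / n2
    var1 = 1.0 / (n1 * p1 * (1.0 - p1))
    var2 = 1.0 / (n2 * p2 * (1.0 - p2))
    return math.sqrt(var1 + var2)

# Illustrative comparison of two treatment conditions
d = logit(0.8) - logit(0.5)          # difference on the logit scale
se = se_logit_diff(40, 50, 25, 50)   # hypothetical hit counts
z = d / se                           # approximate z statistic
```

Working on the logit scale is what makes the variance well defined for binary data, which is also why error bars cannot be attached directly to the raw hit-rate means.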
For each of the three levels of room illumination, there was a significant decline in the hit rate (p < .001 for each). There was no significant turning point in the response for any level of ambient illumination, i.e., no quadratic effect, with the largest probability being 0.54. This indicates that the decline in recognition is linear over the intervals tested for each of the room illumination conditions. Dot percentage was not a significant factor for any of the three models, with the largest probability being 0.32. This indicates substantial success in rendering the shapes to be equivalent in their level of difficulty. Note that proper variance measures for the data are only possible using the logit scores, which precludes the use of error bars on the hit-rate means that are shown in the figure. Prior research from this laboratory suggested that the interval over which information persists, i.e., information persistence, is determined by the level of ambient illumination. It is well understood that the visual system dramatically increases its sensitivity under low-light conditions, and for threshold detection, stimuli are integrated over a longer interval. It is possible that visible persistence, i.e., the duration over which a very brief stimulus is subjectively perceived, is affected in the same manner. As an alternative to the concept that information persists for a fixed amount of time that is a function of ambient illumination, it is possible that the interval over which information can be combined is closely tied to the density of the information being provided. In this model, information from a given moment would be \"compartmentalized\" and buffered against interference from noise and/or incompatible information. Thus with photopic levels of illumination, where large amounts of information are being delivered, the temporal compartment would be relatively short. 
The compartment interval would become wider as ambient illumination declined, given that the lower illumination also decreased the density of the information being provided at any given moment, as well as the potential for interference. The ability to set the width of the temporal compartment as a function of information density would be especially useful for animals that are highly mobile or move their eyes, as these actions drastically change the image content being provided to the retina from one moment to the next. Stimulus events that occurred at the same moment would be included in a given temporal compartment. It may be relevant, therefore, that another study from this laboratory has found simultaneity of display to be critical for effective binding of shape cues. A few studies have examined the question of whether the complexity of the information to be processed affects integration time, most being done using a visible persistence protocol of one kind or another. Loftus & Hanna, for example, reported that the duration of persistence was affected by the complexity of the stimulus material. Similar results have been reported by Erwin & Hershenson. Conversely, Irwin & Yeomans argue against the interpretation offered by Loftus & Hanna and by Erwin & Hershenson. They assessed visible persistence using stimuli that differed in complexity, e.g., letters vs. Xs; upright letters vs. inverted letters, and failed to find any effect of complexity on the duration of visible persistence. They argue that the tasks used by Loftus & Hanna did not measure visible persistence per se. It should be noted that stimulus differences of this kind, e.g., upright letters vs. inverted letters, would not produce much net change in the abundance of data being delivered by the entire visual scene. Whether one views the process as a change in duration of information persistence, or as compartmentalizing stimulus elements as a function of information density, the present results confirm that there is a change in the duration over which partial shape cues can be combined as one transitions from photopic to scotopic viewing conditions. Additionally, we now know that percent recognition is a linear function of the interval between cue subsets, with a slope that is a function of room illumination. 
The range for this linear decline is relatively short when the room is bright, and becomes progressively longer with decreasing room illumination. Abbreviations: arc\u00b0 : degrees of visual angle; arc' : minutes of visual angle; Cd/m2 : candela per meter squared; GaAlAs : gallium, aluminum and arsenic; LED : light emitting diode; Loge : natural log; m : meters; ms : milliseconds; N : number used to specify which dots from address list will be displayed; nm : nanometers; ns : nanoseconds; p : probability; T1 : pulse width; T2 : temporal separation within a given subset; T3 : temporal separation between subsets. The author declares that he has no competing interests."} +{"text": "Using four species of fly, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1\u20136 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, \u223c20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from \u223c200 bits s\u22121 in D. melanogaster to \u223c1,000 bits s\u22121 in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. 
Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput. Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. We obtained such data using intracellular recordings from the intact retinas of four fly species. The blowfly pays a high price for better performance; its photoreceptor uses ten times more energy to code the same quantity of information. We conclude that, for basic biophysical reasons, neuronal energy consumption increases much more steeply than performance, and this intensifies the evolutionary pressure to reduce performance to the minimum required for adequate function. Thus the biophysical properties of sensory neurons help to explain why the sense organs and brains of different species vary in size and performance. Many animals show striking reductions or enlargements of sense organs or brain regions according to their lifestyle and habitat. For example, cave dwelling or subterranean animals often have reduced eyes and brain regions involved in visual processing. These differences suggest that although there are benefits to possessing a particular sense organ or brain region, there are also significant costs that shape the evolution of the nervous system, but little is known about this trade-off, particularly at the level of single neurons. We measured the trade-off between performance and energetic costs by recording electrical signals from single photoreceptors in different fly species. 
We discovered that photoreceptors in the blowfly transmit five times more information than the smaller photoreceptors of the diminutive fruit fly Drosophila. Evidence from single-neuron recordings supports the law of diminishing returns, i.e., high performance eyes in larger, faster flies have less efficient photoreceptors than those of their small, sluggish counterparts. The balance between cost and benefit plays an important role in directing the evolution of biological systems. Costs are evident, for example, in the regressed visual system of the subterranean mole rat Spalax. Applying Shannon's formula to the measured signal and noise power spectra, we found that at low light levels (10^2\u201310^3 effective photons s\u22121) the information rates of all four photoreceptors were almost identical, suggesting that under these conditions the information rates in all four species were limited by photon noise, rather than response bandwidth. At higher light levels (above 10^3 effective photons s\u22121) the information rates of the photoreceptors diverged, reaching \u223c1,000 bits s\u22121 in S. carnaria photoreceptors, compared with \u223c510 bits s\u22121 in D. virilis photoreceptors and \u223c200 bits s\u22121 in D. melanogaster photoreceptors. Note that, as explained in the Materials and Methods, our set of values from 26 D. melanogaster photoreceptors includes data from 21 cells that were published in an earlier study. The information rates of Calliphora and D. melanogaster R1\u20136 photoreceptors are similar to those measured previously with comparable methods. The information rates of D. melanogaster R1\u20136 photoreceptors saturated at our highest intensities, but the information rates in the other species did not, because their higher SNRs were maintained over a broader bandwidth of response. The contributions of SNR and bandwidth to performance are illustrated by comparing a plot of information versus frequency for the highest information rate photoreceptor, Sarcophaga R1\u20136, with a plot for the lowest information rate photoreceptor, D. melanogaster R1\u20136. With its superior SNR, S(f)/N(f), at high frequencies, Sarcophaga R1\u20136 codes almost half of its information at frequencies in the range 100\u2013300 Hz but, because of its poorer bandwidth, D. 
melanogaster R1\u20136 codes very little information at frequencies above 100 Hz. Across species, the photoreceptors with the highest information rates consumed the most ATP, with total costs rising as power functions of bit rate with exponents of roughly 1.5 or more, but with only four species these exponents are preliminary estimates. Nonetheless, there is no doubt that both the unit cost of information and the dark cost are directly related to a photoreceptor's ability to transmit information. Within a photoreceptor, bit rate increases with light level while the total cost per bit falls. Increasing membrane bandwidth also increases the flow of ions across the membrane. Indeed, the high metabolic cost of increasing membrane bandwidth has been invoked to explain why slowly flying insects, exemplified by Tipulid flies, have slow photoreceptors with a low potassium conductance, long time constant, and narrow bandwidth. In Calliphora the dark current amounts to approximately 2% of a blowfly's total resting metabolic rate. Although photoreceptor fixed costs vary between species by more than an order of magnitude, they are a remarkably constant proportion, between a fifth and a quarter, of the energy consumed in full daylight. The reasons why R1\u20136 photoreceptors have a high fixed cost are unclear, but the proximate cause is a dark resting potential that is approximately 20 mV less negative than the potassium reversal potential. In Calliphora vicina, the R1\u20136 photoreceptors that look ahead at approaching objects through superior optics have higher information rates than those looking sideways and backwards through inferior optics. The energy-information trade-offs that we have described in photoreceptors have implications for the design and evolution of insect retinas. The cost of increasing the maximum rate at which a photoreceptor can handle information is substantial and involves large increases in both the cost per bit and the fixed cost of maintaining the photoreceptor in the dark. 
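The power-function relationship between energy cost and maximum bit rate described here corresponds to a straight line in log-log coordinates, so the exponent can be estimated by linear regression on the logs. A minimal sketch with synthetic numbers (not the measured values from the four species):

```python
import math

def power_law_exponent(x, y):
    """Least-squares slope of log(y) versus log(x).

    For y = a * x**k, log y = log a + k * log x, so the fitted
    slope is the power-law exponent k.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic data obeying cost = 2 * rate**1.7 exactly
rates = [200, 510, 800, 1000]           # bits per second
costs = [2 * r ** 1.7 for r in rates]   # ATP consumption, arbitrary units
k = power_law_exponent(rates, costs)
```

An exponent above 1 is what makes the returns diminish: doubling the bit rate more than doubles the energy bill.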
This leads to a law of diminishing returns whereby a small increase in information capacity requires a larger proportional increase in energy cost. This law increases evolutionary pressure to reduce photoreceptor performance to the minimum required for satisfactory visually-guided behaviour by penalizing excess capacity. The result, allocation of resources according to need, could help to explain why, in males of Calliphora vicina, frontal photoreceptors served by superior optics outperform those served by inferior optics. The fixed costs of phototransduction could be particularly important for nocturnal insects. Their photoreceptors often have a large area of photosensitive membrane to improve photon capture, and maintaining this membrane is costly even when little light is being caught. The relationships between energy and information observed here in fly photoreceptors will apply to signalling systems that share similar biophysical relationships between SNR, bandwidth, and energy cost. Although neurons use synapses as discrete signalling units, rather than microvilli, they too are subject to the stochastic activation of conductance, and are constrained by the membrane time constant. Consequently, similar trade-offs are to be expected. The relationships between fixed costs, signalling costs, and bit rate could have a significant impact on coding and neural circuit design. In fly R1\u20136 photoreceptors, the fixed and total costs increase as power functions of maximum bit rate, with exponents of approximately 1.5 and 1.7, respectively. Information rate is not the only measure of neuronal performance by which to judge efficiency. The measures that are most appropriate for a neuron will be defined by the role the neuron plays, processing signals in circuits, and determining behaviour. Relevant measures of performance could include the sharpness of frequency tuning in auditory systems, or response latency. In the extinct bovid Myotragus, brain size was reduced by 50% relative to similar bovids of comparable body mass following isolation on a Mediterranean island. 
It is argued that this reduction was a response to two factors; reduced predation pressure and increased competition for a limited food supply. The balance between energy cost and performance appears to have played a significant role in determining the evolution of nervous systems. We used four species of fly for this study; C. vicina, S. carnaria, D. virilis, and D. melanogaster. Populations of three of these species C. vicina, D. virilis, and D. melanogaster were maintained in the Department of Zoology, University of Cambridge, United Kingdom. Individuals of S. carnaria were obtained from wild populations near Cambridge between May and September, 2004. The two larger fly species, C. vicina and S. carnaria, were mounted with their dorsal surface uppermost on a wax platform. Additional wax was used to fix the head and thorax but not the abdomen, which was left free to allow breathing. Both Drosophila species were mounted in a custom-built holder, and their head and thorax fixed using wax. In all species a small window (no more than a few facets in diameter) was cut manually into the top of the right compound eye and sealed immediately with silicon grease to prevent dehydration. The grease is soft enough to allow intracellular microelectrodes to be inserted through the seal, without damage. A second window was cut into the left compound eye to allow access for the indifferent electrode, a 50-\u03bcm-diameter silver wire. All recordings were made using borosilicate glass electrodes filled with 3 M KCl. The electrode resistance varied considerably depending on the species from which the recording was being made; electrodes with resistances of 100\u2013130 M\u03a9 were used for C. vicina and S. carnaria R1\u20136 photoreceptors, whereas electrodes of 200 M\u03a9 or greater resistance were used for R1\u20136 photoreceptor recordings from D. virilis and D. 
melanogaster. The pipettes were pulled from 10-cm borosilicate glass capillaries using a Sutter P97 puller and inserted into the eye, through the silicon grease seal, using a Zeiss Jena grease-plate micromanipulator. All recordings were made using an Axoclamp 2A amplifier. Throughout recordings the temperature of the flies was maintained between 22 \u00b0C and 24 \u00b0C. In vivo intracellular microelectrode recordings were obtained from R1\u20136 photoreceptors of all four species. Photoreceptors were considered for analysis only if their membrane potentials were hyperpolarised by more than \u221255 mV in the case of photoreceptors from the drosophilid species and \u221260 mV in the case of photoreceptors from C. vicina and S. carnaria. Additional criteria such as the amplitude of the saturating impulse response in dark-adapted conditions and the photoreceptor input resistance were also used to determine recording quality. The photoreceptor responses to light were recorded in bridge mode. To determine the input resistance, current was injected, and the voltage response measured in switched current clamp mode. Stimulus generation and data acquisition were carried out using a digital computer and a purpose-built interface. Both stimuli and responses were usually digitised at 2 kHz and, to prevent aliasing, responses were low pass filtered by a four-pole Butterworth with a cutoff at half the Nyquist frequency, i.e., 500 Hz. In the setup used for Calliphora R1\u20136, white light was provided by a 450-W high-pressure xenon arc lamp (PRA model 301s), which was stabilised with optical feedback to suppress unwanted fluctuations in the light intensity delivered to the waveguide to below 0.5% (root mean square). To provide white-noise stimulation, the arc was modulated by feeding a voltage command waveform from the computer to the optical feedback unit. 
In the setup used for the other photoreceptors, the light source was a high intensity LED whose output was controlled directly by a voltage to current converter, driven directly by the computer. All voltage commands were corrected for the nonlinear characteristics of the LED. Light was attenuated by calibrated neutral density filters to provide a series of background light levels. Photoreceptors were stimulated by a point source, the tip of a light guide that was positioned on the optical axis and subtended six degrees at the cornea. The effective intensity of the light source was determined for each photoreceptor by counting its responses to single photons, quantum bumps. Information rates were measured from a photoreceptor's voltage response to Gaussian white noise. The light contrast was defined as c(t) = I(t)/Io, where I(t) and c(t) specify the intensity and contrast with time t, and Io is the mean light level. The root mean square contrast was 0.32, which is close to the mean value of 0.4 measured for natural scenes. Three pseudorandom sequences I(t) were used, each repeated 50 times. This was reduced to two pseudorandom traces in D. melanogaster, where controls showed that this reduction had a negligible effect on measured information rates. The ensemble average of the photoreceptor voltage response to each sequence was derived to give the voltage signal S(t), and the deviation of individual responses from this average gave the noise. The noise power spectrum, N(f), was corrected for recording noise by subtracting the noise spectrum recorded with the electrode outside the cell. The two or three signal traces were transformed, and the spectra averaged to give the signal power spectrum, S(f). A four-term Blackman-Harris window was applied to the signal traces and the noise traces prior to transformation to the frequency domain, and the SNRs, S(f)/N(f), were corrected for statistical bias. The D. melanogaster R1\u20136 photoreceptors recorded for this study were supplemented with published data from 21 cells. 
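The information rate itself follows from the signal-to-noise ratio via Shannon's formula for a Gaussian channel, R = integral of log2(1 + S(f)/N(f)) df. A minimal sketch of that final step (the SNR values here are a toy array, not recorded data):

```python
import math

def information_rate(snr, df):
    """Shannon information rate, in bits per second.

    snr : sampled values of S(f)/N(f) at frequencies spaced df Hz apart
    df  : frequency resolution in Hz
    Approximates the integral of log2(1 + SNR) over frequency.
    """
    return df * sum(math.log2(1.0 + r) for r in snr)

# Toy example: SNR of 3 in every 1-Hz band from 0 to 500 Hz,
# so each band carries log2(1 + 3) = 2 bits per second
snr = [3.0] * 500
rate = information_rate(snr, 1.0)
```

The formula makes the two routes to high performance explicit: a photoreceptor can raise its rate either by increasing SNR within a band or by extending its bandwidth, which is why Sarcophaga's broader bandwidth matters so much.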
In Calliphora the membrane resistance was estimated from the response to injected white-noise current, because this method also measures the dynamic impedance of the membrane. The voltage response v(t) was calculated by ensemble averaging 200 repeats of the pseudorandom white-noise stimulus and transformed to the response power spectrum V(f). Dividing V(f) by the power spectrum of applied current, i(f), yielded the impedance Z(f). The membrane resistance, RM, was estimated from the zero frequency asymptote of Z(f). In the other three species, the photoreceptor membrane resistance was measured from the change in membrane potential produced by a low amplitude current pulse. The pulse's duration was adjusted so that it fully charged the membrane capacitance and, to ensure that the activation of voltage-sensitive conductance had a negligible effect on these measurements, the current was reduced to a level (\u223c50 pA) where positive and negative pulses produced symmetrical responses. The responses to several hundred current pulses were averaged to generate a reliable estimate of the voltage change. By using very small currents, this second method returns a value of membrane resistance that is closer to the steady state because it reduces artefacts due to rectification. This may explain why the resistances measured in Sarcophaga R1\u20136 using current pulses are slightly higher than those measured in Calliphora using white-noise current. The model incorporates the three dominant membrane mechanisms: a light-gated conductance, gL, with a reversal potential EL = \u22125 mV; a potassium conductance, gK, with a reversal potential EK = \u221285 mV; and a sodium pump through which ATP is hydrolysed. The conductances pass currents iL = (Em \u2212 EL)gL and iK = (Em \u2212 EK)gK. In order to maintain ionic homeostasis the pump current must be ip = iK/2. The pump current can be derived from the membrane model by equating currents across the model membrane, iK + iL + ip = 0, setting gK + gL = 1/RM where RM is the measured membrane resistance, and inserting the measured membrane potential, EM. 
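Solving the membrane model just described for the steady-state conductances and pump current can be sketched as follows. The values of EM and RM below are illustrative, and the conversion from pump current to ATP assumes one ATP hydrolysed per net charge translocated by the 3Na+/2K+ exchange; both assumptions are flagged in the comments:

```python
E_CHARGE = 1.602e-19  # coulombs per elementary charge

def pump_atp_rate(Em, Rm, Ek=-85e-3, El=-5e-3):
    """Solve the two-conductance membrane model described above.

    Equations:
        gK + gL = 1 / Rm                  (measured input resistance)
        iK + iL + ip = 0, with ip = iK/2  (zero net current at steady state)
    where iK = (Em - Ek) * gK and iL = (Em - El) * gL.
    Substituting ip gives 1.5*(Em - Ek)*gK + (Em - El)*gL = 0,
    a linear pair solved below. Returns (gK, gL, ip, ATP per second),
    assuming one ATP per net charge moved by the 3Na+/2K+ pump.
    """
    G = 1.0 / Rm
    a = 1.5 * (Em - Ek)   # coefficient of gK in the current balance
    b = Em - El           # coefficient of gL
    gK = -b * G / (a - b)
    gL = G - gK
    ip = 0.5 * (Em - Ek) * gK
    atp_per_s = ip / E_CHARGE
    return gK, gL, ip, atp_per_s

# Illustrative dark resting state: Em 20 mV above EK, Rm = 300 Mohm
gK, gL, ip, atp = pump_atp_rate(Em=-65e-3, Rm=300e6)
```

With these illustrative numbers the pump must hydrolyse on the order of 10^8 ATP molecules per second just to hold the dark resting potential, which is the sense in which the fixed cost is substantial.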
The rate of ATP hydrolysis required to generate this steady-state pump current is our estimate of the rate at which the photoreceptor consumes energy. At each background light level, the membrane potential EM, the impedance Z(f), and the information rate were successively measured, using the procedures described above. The membrane potential was then checked for drift. The light was then extinguished, the stability of the resting potential was checked for 30 s\u20131 min, and following withdrawal of another neutral density filter, the next highest background was switched on. This sequence of stable light adaptation and measurement was repeated until the maximum effective intensity was reached. The light was then extinguished, and the cell left in the dark to check that the resting potential returned to a value that was within 2 mV of that measured at the start of the experiment. Data from cells that failed this final test were rejected. Finally, the electrode noise was measured with the electrode just outside the cell, in a position where the noise amplitude was at a minimum. Following a stable electrode penetration, the photoreceptor was dark adapted for at least 20 minutes and then calibrated by counting quantum bumps, as described above."} +{"text": "Furthermore, the conformation of the N\u2014H bond is syn to the ortho-chloro group in the aniline ring and the C=O bond is syn to the ortho-methyl substituent in the benzoyl ring, similar to what is observed in 2-chloro-N-(2-chloro\u00adphen\u00adyl)benzamide and 2-methyl-N-phenyl\u00adbenzamide. The amide group makes almost the same dihedral angles of 41.2\u2005(14) and 42.2\u2005(13)\u00b0 with the aniline and benzoyl rings, respectively, while the dihedral angle between the benzoyl and aniline rings is only 7.4\u2005(3)\u00b0. 
The mol\u00adecules in N2CP2MBA are packed into chains through N\u2014H\u22efO hydrogen bonds. In the structure of the title compound (N2CP2MBA): b = 24.318 (2) \u00c5, c = 10.0562 (8) \u00c5, \u03b2 = 90.373 (6)\u00b0, V = 1195.34 (17) \u00c53, Z = 4, Cu K\u03b1 radiation, \u03bc = 2.67 mm\u22121, T = 299 (2) K, crystal size 0.55 \u00d7 0.13 \u00d7 0.05 mm. Enraf\u2013Nonius CAD-4 diffractometer; absorption correction: \u03c8 scan (North et al., 1968), Tmin = 0.695, Tmax = 0.878; 2264 measured reflections; 2126 independent reflections; 1695 reflections with I > 2\u03c3(I); Rint = 0.050; 3 standard reflections, frequency: 120 min, intensity decay: 1.5%. R[F2 > 2\u03c3(F2)] = 0.067; wR(F2) = 0.333; S = 1.49; 2126 reflections; 158 parameters; H atoms treated by a mixture of independent and constrained refinement; \u0394\u03c1max = 0.58 e \u00c5\u22123; \u0394\u03c1min = \u22120.64 e \u00c5\u22123. Data collection: CAD-4-PC Software (Enraf\u2013Nonius, 1996); cell refinement: CAD-4-PC Software; data reduction: REDU4 (Stoe & Cie, 1987); structure solution: SHELXS97 (Sheldrick, 2008); refinement: SHELXL97 (Sheldrick, 2008); molecular graphics: PLATON (Spek, 2003); material for publication: SHELXL97. Crystal structure: contains datablocks I, global; DOI: 10.1107/S1600536808020229/bg2196sup1.cif. Structure factors: contains datablocks I; DOI: 10.1107/S1600536808020229/bg2196Isup2.hkl."} +{"text": "Furthermore, the C=O bond is syn to the ortho-methyl group in the benzoyl ring, similar to what is observed in 2-methyl-N-(4-methyl\u00adphen\u00adyl)benzamide and 2-methyl-N-phenyl\u00adbenzamide. The amide linkage (\u2013NHCO\u2013) makes dihedral angles of 36.9\u2005(7) and 46.4\u2005(5)\u00b0 with the aniline and benzoyl rings, respectively, while the dihedral angle between the benzoyl and aniline rings is 83.1\u2005(1)\u00b0. 
In the crystal structure, mol\u00adecules form chains running along the b axis through N\u2014H\u22efO hydrogen bonds. In the structure of the title compound, C , b = 5.1092 (4) \u00c5, c = 22.222 (1) \u00c5, \u03b2 = 109.593 (6)\u00b0, V = 2390.1 (3) \u00c53, Z = 8; Cu K\u03b1 radiation, \u03bc = 2.67 mm\u22121, T = 299 (2) K, crystal size 0.50 \u00d7 0.13 \u00d7 0.13 mm. Enraf\u2013Nonius CAD-4 diffractometer; absorption correction: none; 2213 measured reflections; 2085 independent reflections; 1741 reflections with I > 2\u03c3(I); R int = 0.074; 3 standard reflections, frequency: 120 min, intensity decay: 1.0%. R[F 2 > 2\u03c3(F 2)] = 0.046, wR(F 2) = 0.159, S = 1.08, 2085 reflections, 182 parameters; H atoms treated by a mixture of independent and constrained refinement; \u0394\u03c1max = 0.27 e \u00c5\u22123, \u0394\u03c1min = \u22120.39 e \u00c5\u22123. Data collection: CAD-4-PC (Enraf\u2013Nonius, 1996); cell refinement: CAD-4-PC; data reduction: REDU4 (Stoe & Cie, 1987); structure solution: SHELXS97 (Sheldrick, 2008); refinement: SHELXL97 (Sheldrick, 2008); molecular graphics: PLATON (Spek, 2003); material for publication: SHELXL97. Crystal structure: contains datablocks I, global. DOI: 10.1107/S1600536809002633/bt2857sup1.cif. Structure factors: contains datablocks I. DOI: 10.1107/S1600536809002633/bt2857Isup2.hkl. Additional supplementary materials: crystallographic information; 3D view; checkCIF report."} +{"text": "Although the majority of the snakebite cases in Malaysia are due to non-venomous snakes, venomous bites cause significant morbidity and mortality if treatment measures, especially anti-venom therapy, are delayed. To determine the demographic characteristics, we conducted a retrospective study on all snakebite patients admitted to the Emergency Department of Hospital Universiti Sains Malaysia (HUSM) from January 2006 to December 2010. In the majority of the 260 cases that we found (138 cases or 52.9%), the snake species was unidentified. The most common venomous snakebites among the identified species were caused by cobras (52 cases or 20%). 
Cobra bites are significantly more likely to result in severe envenomation compared to non-cobra bites. Post hoc analysis also showed that cobra bite patients are significantly less likely to have complete recovery than non-cobra bite patients (48 cases, 75.0% vs. 53 cases, 94.6%; p = 0.003) and more likely to result in local gangrene. Cobra bites are significantly more likely to result in severe envenomation needing anti-venom administration and more likely to result in local gangrene, and the patients are significantly less likely to have complete recovery than those with non-cobra bites. As early as 1963, it was shown that the majority (74.0%) of snakebite incidents in Malaysia occurred in the four northern states of Peninsular Malaysia . In fact, even bites of venomous snakes are often not life threatening for humans unless a sufficient amount of venom is injected at the time of the bite. Indeed, most bites are dry bites because they are defensive . Venomous snakes in Malaysia can be divided into three main groups: two groups of land snakes and one of sea snakes. The two main groups of land snakes are the Elapidae (such as cobras) and the Viperidae . All 22 species of sea snakes in Malaysia are considered venomous . Myotoxicity is venom toxicity that results in myotoxic effects such as muscular pain, stiffness and myoglobinuria. Myoglobinuria is characterized by the brown discoloration of urine and usually eventual respiratory failure. Neurotoxicity is defined as a toxicity that results in neurotoxic effects such as muscular weakness, spreading paralysis (within 15 min to 2 h), dysphagia, dysphasia, ptosis, external ophthalmoplegia as well as slowed, labored breathing, culminating in respiratory arrest with or without convulsions. Hemotoxicity results in hemotoxic effects such as ecchymoses, petechial hemorrhage, epistaxis, hematemesis, melena, coagulopathy, hematuria or any bleeding manifestations not attributable to other causes. 
The venom of pit vipers often results predominantly in hemotoxicity, the venom of Elapidae predominantly in neurotoxicity, whereas that of sea snakes predominantly causes myotoxicity . The purpose of this study is to map out the demographic characteristics, clinical profiles and manifestations, and the outcomes for snakebite patients admitted to our hospital over the last 5 years. This is a retrospective study involving all snakebite patients admitted to the Emergency Department of Hospital Universiti Sains Malaysia (HUSM) from January 2006 to December 2010. After retrieving the registration numbers and case notes for all snakebite patients admitted to HUSM during the stipulated time, we reviewed all the relevant data needed for our analysis. Besides demographic data, the analyzed variables included the type of snake, the severity of envenomation, the times when the bites occurred, common symptoms suggestive of hemotoxicity, myotoxicity and neurotoxicity, local symptoms including wound condition, and recovery progress. Cases where the patients were 'discharged against medical advice' were excluded. Cases of 'unknown' bites in the absence of fang marks or any other symptoms suggestive of venomous snakebites were also excluded. This study was conducted with the approval of our institutional research ethics board from the Advanced Dental and Medical Institute, Universiti Sains Malaysia. Permission was similarly obtained from the Hospital Director to allow us to access the information from the patients' case notes strictly for the purpose of this research. Mild envenomation is defined as minimal or mild swelling (a less than 4 cm increase in limb circumference) with no clinical evidence of local gangrene or systemic symptoms. Moderate envenomation is defined as resulting in local swelling of 4 cm or more and/or showing clinical evidence of local gangrene with minimal or no systemic symptoms. 
Severe envenomation results in clinical evidence of systemic poisoning that can potentially be fatal . Statistical analysis was done using the Statistical Package for Social Sciences (SPSS) version 18 for Windows. Comparisons of categorical data were carried out using Pearson's chi-square or Fisher's exact test where appropriate. A p value of less than 0.05 was taken as statistically significant. A total of 260 snakebite patients were analyzed in the 5-year period from January 2006 to December 2010. Of these 260 cases, 64 (24.5%) were cobra bites, 52 (20.0%) viper bites, 4 sea snake bites (1.5%), 3 python bites (1.1%) and 138 unknown snakebites (52.9%). In terms of the patients' age groups, the highest number of cases (89 cases or 34.2%) occurred in the 10-19-year-old category . The longest hospital stay was 40 days. Six out of 260 patients (2.31%) were admitted to the intensive care unit (ICU). These six patients all had severe envenomation, and two were mechanically ventilated. Sixty patients (23.1%) presented with symptoms suggestive of myotoxicity, 9 (3.5%) with symptoms suggestive of hemotoxicity and 35 (13.5%) with symptoms suggestive of neurotoxicity. Nine patients (3.5%) presented with overlapping features of both neurotoxicity and myotoxicity, but not hemotoxicity. Six patients (2.31%) presented with overlapping features suggestive of both myotoxicity and hemotoxicity, but none of the patients presented with symptoms of both hemotoxicity and neurotoxicity. Regarding the bite sites, 191 patients (73.45%) were bitten on the lower limbs, whereas 60 (23.10%) were bitten on the upper limbs (9 patients or 3.45% with missing data). Although 98 patients (37.7%) presented with signs and symptoms suggestive of severe envenomation, only 48 (18.5%) received anti-venom. 
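The categorical comparisons described above (Pearson's chi-square or Fisher's exact test, p < 0.05) can be reproduced with open-source tools. The sketch below uses scipy in place of the authors' SPSS, applied to the recovery comparison reported in this study (48/64 cobra-bite vs. 53/56 non-cobra-bite patients with complete recovery; the denominators are implied by the reported percentages). The choice of scipy is mine, not the authors'.

```python
# Illustrative re-analysis of a 2x2 comparison from the study:
# complete recovery in cobra-bite vs non-cobra-bite patients
# (48/64 = 75.0% vs 53/56 = 94.6%, denominators inferred from the percentages).
from scipy.stats import chi2_contingency, fisher_exact

#                recovered  not recovered
table = [[48, 16],   # cobra bites (n = 64)
         [53, 3]]    # non-cobra bites (n = 56)

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```

Both tests agree that the difference is significant at the 0.05 level, consistent with the reported p = 0.003.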
The details of the common symptoms experienced by the patients are presented in Table . Significantly more patients in the cobra-bite group had severe envenomation, compared to only 15 (26.8%) such cases in the non-cobra-bite group (p < 0.001, chi-square test applied) (Tables ). In terms of local effects, fang marks were noted in 186 patients (71.5%) and gangrene in 17 (6.5%). Six patients (2.3%) had clinical features suggestive of compartment syndrome, and one eventually underwent fasciotomy. Furthermore, 13 patients (5.0%) developed secondary infections (Table ). Up to 24% of the patients had a time lapse of between 1 and 4 h before presenting to the hospital (Table ). In this study, we found that in the majority of snakebite cases (52.9%), the exact snake species was not identified, although in these unidentified cases fang marks or other symptoms suggestive of venomous bites were present. This is not surprising given the fact that these were often quick, defensive bites . Most earlier epidemiological studies done in the 1960s to 1990s showed that the majority of venomous bites were due to pit vipers -7. Furthermore, contrary to what many people may believe, the cobra is actually not an aggressive snake and avoids encountering humans as much as possible ,8. Not only did we find that cobra bites made up the majority of the identified venomous snakebites in our study, but cobra bites were also more likely to result in severe envenomation compared to other species. Post hoc analysis also showed that cobra bites were statistically more likely to cause local gangrene at bite sites than non-cobra bites, and the patients were statistically less likely to achieve complete recovery. This may be due to the fact that the venom of cobras, or the Elapidae as a whole, often results in neurotoxicity -3. The observation that bites on the lower limbs were three times as common as bites on the upper limbs suggests that in most cases the snake was stepped on inadvertently ,7. 
There are a number of limitations in our study. Our data on the species of snakes taken from the hospital case notes were based entirely on the description given by the patients and other witnesses. Unlike some other studies, we were reluctant to categorize our data on 'type of snake' into suspected cases and confirmed cases, because we found this categorization to be rather arbitrary, since there was no herpetologist in our center to help us with this task. Furthermore, the many confusing and missing data in the case notes rendered such categorization difficult. This study was conducted in only one center in Malaysia over a 5-year period, and therefore the epidemiological findings may not truly reflect the epidemiological trend in Malaysia as a whole. Future multicenter studies should be conducted to validate these findings. Overall, from this study, we found that in more than 50% of the snakebite cases admitted to HUSM from 2006 to 2010, the species of snake was not identified. Among those identified, the most common venomous snakebites were cobra bites. Cobra bites are significantly more likely to result in severe envenomation needing anti-venom administration. Post hoc analysis also showed that patients with cobra bites were significantly less likely to achieve complete recovery than those with non-cobra bites and more likely to develop local gangrene. No direct consent was taken from the patients, as this is a retrospective study. Details of the history, clinical findings, admissions and outcomes were obtained from the hospital records. Consent, instead, was obtained from the Hospital Director to use the information contained in the patient records solely for the educational purpose of this research. The authors declare that they have no competing interests. KSC contributed to data collection and results analysis and was directly involved in writing this manuscript. 
HWK contributed to the initial conception, drawing up the study design, data collection, and analysis of this study. RA contributed to the initial conception and the design of the methodology of this study. NHNAR contributed to the study design and result analysis of this study."} +{"text": "NHSC, an urban community-based health center in Plainfield, Union County, NJ, USA, provides services to 25,000 uninsured/minority/impoverished patients. Plainfield consistently ranks first or second among the 29 Union County municipalities for Syphilis, Gonorrhea and Chlamydia. Plainfield ranks second for the number of HIV/AIDS cases. There is, therefore, a dire need to address the existing health emergency around STDs and their correlation with HIV. NHSC incorporates a coordinated, proactive, patient-centered approach to integrating STD screening/prevention with primary care in a medical home environment. Risk assessments/screenings are done by clinicians. Those identified as suspicious for an STD or with STD symptoms/a diagnosis receive on-the-spot HIV counseling and Rapid testing. Patients receive immediate treatment intervention for suspected STDs. 
There are on-going prevention efforts, including the development of Risk Reduction Plans agreed upon and signed by patients. Resulting from the integrated STD/HIV prevention approach: 100% of patients presenting with symptoms of or suspicion for STDs received HIV counseling/Rapid testing; 100% received prevention education and free condoms; 100% had Risk Reduction Plans developed and agreed upon; and 19 persons were identified as STD and HIV positive and were immediately linked to care. Integration of STD/HIV prevention with outpatient care under the umbrella of Early Intervention Services allowed us to: identify the extent of the STD/HIV correlation; provide a seamless one-stop-shop prevention-treatment service delivery model; and improve patient awareness of on-site prevention/treatment resources."} +{"text": "Electronic monitoring equipment usually has a predefined set of safe ranges and alarm thresholds that are manually altered depending on the characteristics of patients and health care personnel. Preset alarm thresholds used to identify patients at risk of deterioration have poor specificity due to generalization across patient populations, clinical needs, and care models. Staff at the Emergency Department are challenged by alarm fatigue, which can ultimately be fatal; our aim with this study is to investigate the possible impact of triage-specific thresholds on the number of generated alarms. Registration of personal identification numbers, timestamps, and all measured vital values was linked to triage information from the electronic logistic tool in the Department and stored locally in a research database. The level of severity was defined by the triage category given upon arrival and the observation regimes according to the ADAPT triage model. We define an alarm event as three consecutive threshold violations. 
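The alarm-event definition just stated (three consecutive threshold violations) is straightforward to operationalize. The sketch below is a minimal illustration: the heart-rate thresholds are hypothetical (the actual monitor presets and ADAPT ranges are not given in this excerpt), and the choice to reset the run counter after a completed event is my assumption, not the study's stated rule.

```python
# Minimal sketch of the alarm-event rule: an alarm event is a run of
# `run_length` consecutive measurements outside the allowed range.
# Whether the run counter resets after an event is an assumption here.

def count_alarm_events(values, low, high, run_length=3):
    """Count non-overlapping runs of `run_length` consecutive violations."""
    events = 0
    run = 0
    for v in values:
        if v < low or v > high:
            run += 1
            if run == run_length:  # a full run completes one alarm event
                events += 1
                run = 0            # start counting a fresh run (assumption)
        else:
            run = 0
    return events

# Hypothetical heart-rate series checked against a narrow "preset" range
# and a wider "triage-specific" range (both ranges are illustrative).
hr = [82, 118, 121, 124, 90, 122, 125, 60]
print(count_alarm_events(hr, low=50, high=110))   # narrow preset range -> 1
print(count_alarm_events(hr, low=50, high=130))   # relaxed range -> 0
```

Run with both threshold sets over every patient's series, this yields the two per-category alarm counts that the study compares.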
Alarm events for each patient were analyzed with 1) the preset alarm thresholds configured in the patient monitors, and 2) individual thresholds as proposed by the ADAPT triage model. Adult patients admitted to the bed units of the Emergency Department (ED) at Odense University Hospital were studied (October 1st 2013-August 1st 2014). 1884 triaged patients and a total of 628,839 vital sign measurements were registered. The frequency of generated alarm events was higher when the preset alarm thresholds were used: 70 alarm events as opposed to 22 in the resuscitation category, 1,433 as opposed to 450 in the very urgent category, and 4,032 as opposed to 4,261 in the less urgent category. In the two least urgent triage categories, "not urgent" and "fast track", the numbers of generated alarms were distributed in the opposite direction, with most alarm events generated under the individualized thresholds: 5,661/1,844 and 144/12, respectively. Individualized monitor alarm thresholds based on triage level thus have very different impacts across groups. Given that patients are correctly triaged, the triage-specific thresholds would have increased the total number of generated alarms by 35%, primarily due to the effect from patients triaged with the lowest levels of urgency."} +{"text": "Malignant steroid cell tumors of the ovary are rare and frequently associated with hormonal abnormalities. There are no guidelines on how to treat rapidly progressive Cushing\u2019s syndrome, a medical emergency. A 67-year-old white woman presented to our hospital with rapidly developing signs and symptoms of Cushing\u2019s syndrome secondary to a steroid-secreting tumor. Her physical and biochemical manifestations of Cushing\u2019s syndrome progressed, and she was not amenable to undergoing conventional chemotherapy secondary to the debilitating effects of high cortisol. 
Her rapidly progressive Cushing\u2019s syndrome ultimately led to her death, despite aggressive medical management with spironolactone, ketoconazole, mitotane, and mifepristone. We report a rare case of Cushing\u2019s syndrome secondary to a malignant steroid cell tumor of the ovary. The case is highlighted to discuss the complications of rapidly progressive Cushing\u2019s syndrome, an underreported and often unrecognized endocrine emergency, and the best available evidence for treatment. Ectopic Cushing\u2019s syndrome is rarely seen with ovarian tumors. Steroid cell tumors are rare stromal tumors of the ovary, first defined by Scully in 1979 . A 67-year-old nulliparous white woman with no prior medical history or pertinent family history presented to our hospital with a 4-month history of hirsutism, deepening voice, weight gain, easy bruising, hair thinning, and chest redness. Her physical examination revealed abdominal swelling, pedal edema, and excessive hair growth. She had new-onset hypertension, with an elevated blood pressure of 153/78\u00a0mmHg. Initial laboratory test results revealed a serum glucose level of 543\u00a0mg/dl, a potassium level of 2.5\u00a0mg/dl, and a bicarbonate level of 44.8\u00a0mEq/L. Her hemoglobin A1c was 9.2\u00a0%, compared with 5.4\u00a0% when checked only 6\u00a0months prior. Further hormonal evaluation revealed a testosterone concentration >800\u00a0ng/dl, a dehydroepiandrosterone level of 243\u00a0ng/ml, a luteinizing hormone concentration <0.2\u00a0IU/L, and a follicle-stimulating hormone level <0.7\u00a0IU/L. The patient\u2019s 24-h urine cortisol was 273\u00a0\u03bcg/24\u00a0h. Dexamethasone 1\u00a0mg failed to suppress her morning cortisol, which was 33\u00a0mg/dl. Her aldosterone concentration was <4\u00a0ng/dl, and her renin level was 1.2\u00a0ng/ml. 
Magnetic resonance imaging (MRI) of her abdomen and pelvis revealed a 9.4\u2009\u00d7\u20095.8\u2009\u00d7\u20097.9-cm ovarian mass with ascites and diffuse abdominal metastases. Ovarian hormone testing disclosed a CA-125 level of 742.6 U/ml, an inhibin A level of 11\u00a0pg/ml, and an inhibin B level of 5060\u00a0pg/ml. She was started on lisinopril 10\u00a0mg, furosemide 20\u00a0mg, hydrochlorothiazide 12.5\u00a0mg, and potassium chloride 10\u00a0mEq daily, as well as metoprolol 25\u00a0mg twice daily. An exploratory laparotomy of the right ovary revealed a lobulated mass measuring 9.8\u00a0cm in its greatest dimension. The tumor extended across to the left ovary and involved the bladder, periovarian tissue, anterior abdominal wall, and cecum. Liver metastasis was also noted. Because of the extent of the patient\u2019s disease, a complete resection was not attempted. A hysterectomy, bilateral salpingo-oophorectomy, and tumor debulking were performed. The cut surface of the tumor was orange with hemorrhage. Histologically, polygonal cells with abundant cytoplasm, ranging from eosinophilic to granular, were seen, with brisk mitotic activity (22 mitoses per 10 high-power fields [HPF]). The tumor demonstrated a solid and trabecular pattern of growth and focal myxoid stroma. Reinke crystals were absent. The tumor stained positive for inhibin, calretinin, and MART-1 and negative for chromogranin, S-100, WT-1, CK7, and CD99. Based on these findings, a diagnosis of ovarian steroid cell tumor was made. Postoperatively, the patient\u2019s testosterone (90\u00a0ng/dl) and urinary cortisol (656\u00a0\u03bcg/24\u00a0h) remained elevated, with adrenocorticotropic hormone (ACTH) 12\u00a0pg/ml. No prior ACTH measurement was available. Spironolactone and ketoconazole were started in response to hypokalemia, edema, and high cortisol. Spironolactone was titrated up to 400\u00a0mg daily and ketoconazole to 1200\u00a0mg daily with persistent hypokalemia. 
Mitotane 1500\u00a0mg daily and mifepristone 300\u00a0mg daily were added. Disease burden continued to progress as urine cortisol remained elevated at 1056\u00a0\u03bcg/24\u00a0h, with refractory hypokalemia. She developed a Staphylococcus aureus infection requiring surgical debridement, as well as an intensive care unit admission for pneumonia. She developed delirium, depression, and malnutrition. Her poor functional status prohibited conventional chemotherapy. A decision was made to provide comfort care, and she subsequently died in the hospice . Steroid cell tumor NOS is the most common of the three subtypes, accounting for 60\u00a0% of cases , 8. The clinical manifestations of steroid cell tumor NOS are similar to both stromal luteomas and Leydig cell tumors and are associated with hormonal activity and virilization. Common virilizing findings include hirsutism, acne, deep voice, and alopecia. Estrogenic effects are not uncommon and include menorrhagia, postmenopausal bleeding, and endometrial changes. Symptoms of Cushing\u2019s syndrome include abdominal pain, distention, and bloating, but they are seen in only 6\u201310\u00a0% of cases , 9, 10. The majority of steroid cell tumors have either benign or low-grade behavior , 12. Malignancy in steroid cell tumors is strongly suggested by the following pathological features: two or more mitotic figures per 10 HPF, necrosis, a diameter >7\u00a0cm, hemorrhage, and grade 2 or 3 nuclear atypia . The diagnosis is made histologically, as first defined by Scully . Immunohistochemistry aids in diagnosis, with inhibin and calretinin differentiating sex cord-stromal from non-sex-cord tumors. Seventy-five percent of cases are vimentin-positive , 13. On imaging studies, steroid cell tumors are typically unilateral and solid. Cystic changes or necrosis is possible. Most are small and thus frequently undetected by ultrasound or computed tomography . 
The best treatment for patients diagnosed with steroid-secreting ovarian sex cell tumors has not been described. Most of these tumors are diagnosed at an early stage and do not recur or metastasize; thus, little is known about response to therapy, and there is no recommended standard of care. Treatment of advanced ovarian steroid cell tumors is primarily surgical. Following cytoreductive surgery, adjuvant chemotherapy is used, though no formal recommendations have been made , 12, 15. Rapid, aggressive Cushing\u2019s syndrome is, and should be considered as, an \u201cendocrine emergency.\u201d It is a rare presentation of ovarian tumors and is highlighted in the case of our patient. Inadequately treated Cushing\u2019s syndrome is associated with a three- to fivefold higher mortality . Ketoconazole is an imidazole derivative originally developed as an antifungal agent. It blocks sex steroid and cortisol synthesis by multiple mechanisms, including inhibiting 11\u03b2-hydroxylase, 17\u03b1-hydroxylase, and C17,20 lyase enzymes. It can also inhibit ACTH secretion by corticotroph tumor cells , 17, 18. Metyrapone is an 11\u03b2-hydroxylase inhibitor that blocks the final step in cortisol synthesis: conversion of 11-deoxycortisol to cortisol . Mitotane is a derivative of dichlorodiphenyltrichloroethane and reduces cortisol production by blocking cholesterol side-chain cleavage and 11\u03b2-hydroxylase , 17, 18. Mifepristone (RU-486) is the only U.S. Food and Drug Administration-approved medication for Cushing\u2019s syndrome of any cause. It is a potent glucocorticoid and progesterone receptor antagonist, blocking cortisol at the tissue level. The block in glucocorticoid action leads to negative feedback at the hypothalamic-pituitary level, resulting in a rise in ACTH and cortisol . Etomidate is an imidazole derivative used for anesthesia induction. 
It reduces cortisol production by blocking cholesterol side-chain cleavage, aldosterone synthase, and 11\u03b2-hydroxylase. Case reports show successful use of etomidate as a short-term treatment in critically ill patients with Cushing\u2019s syndrome. It has a rapid onset of action, with cortisol levels decreasing within 12\u00a0h . The case of our patient highlights a rare coexistence of Cushing\u2019s syndrome and hyperandrogenemia due to a malignant steroid cell neoplasm of the ovary. Rapidly progressive Cushing\u2019s syndrome presented a unique therapeutic challenge, with the biochemical effects of hypercortisolism leading to rapid morbidity and mortality. When surgery fails to reverse hypercortisolism, medical treatment can suppress cortisol overproduction and its end-target effects, improving clinical status. In retrospect, a more aggressive approach using multiple potent, short-acting agents may have improved the patient\u2019s outcome. Careful monitoring and treatment by clinicians familiar with the medications\u2019 mechanisms of action are essential. The case of our patient underscores the need for further research into the biology of this tumor and a targeted approach for treating severe hypercortisolism. This addition to the literature highlights a rare malignancy and an unrecognized endocrine emergency."} +{"text": "A novel numerical method at the microscale for studying the mechanical behavior of an aluminum-particle-reinforced polytetrafluoroethylene (Al/PTFE) composite is proposed and validated experimentally in this paper. Two types of 2D representative volume elements (RVEs), real microstructure-based and simulated microstructures, are established by following a series of image processing procedures and on a statistical basis considering the geometry and the distribution of particles and microvoids, respectively. 
Moreover, 3D finite element modelling based on the same statistical information as the 2D simulated microstructure models is conducted to show the efficiency and effectiveness of the 2D models. The results of all simulations and experiments indicate that real microstructure-based models and simulated microstructure models are efficient methods to predict the elastic and plastic constants of particle-reinforced composites. Impact-initiated composite materials, characterized by their exothermic and rapid energy release upon impact with targets, have attracted much attention in recent years. They fall into a much larger category, reactive materials (RMs), which denotes the class of materials generally combining two or more non-explosive solids that, upon their ignition, release chemical energy in addition to the kinetic energy delivered when the high-speed projectiles containing the reactive materials collide with the target . Among the many formulas of impact-initiated composites, aluminum (Al)-particle-filled polytetrafluoroethylene (PTFE) is typical and has become a benchmark for the investigation of the properties of impact-initiated composites. Based on the basic formulas of Al/PTFE, much progress has been achieved over the past decades, especially in formulations and fabrication, static and dynamic mechanical properties, flow and failure, impact initiation mechanisms and properties, and reaction and energy release properties ,9,10,11. However, almost all of this research has been conducted macroscopically; microstructural properties and the effects of the microstructure on macroscopic properties and performance are still less well known. 
From the perspective of the fabrication and application of Al/PTFE, different particle geometries, volume fractions and distributions may result in different composite properties, and the pressing/sintering process may introduce microvoids and microcracks, which will affect the initiation properties and strength of the composite. Moreover, applications of Al/PTFE increasingly demand higher density, better strength, sufficient insensitivity and complete reaction. All of these properties depend on the microstructural characteristics of Al/PTFE and cannot be neglected. With the development of algorithms and modelling techniques, finite element analysis (FEA) is increasingly adopted in the mechanics of materials, both macroscopically and microscopically. Especially with the combination of trans-scale mechanics and FEA technology, the mechanical properties of materials are being understood at increasingly smaller scales ,13. However, both unit cell and RVE models are highly idealized models for heterogeneous materials, because all materials are regarded as regular particle arrangements, often with particles of identical geometry. The unit cell approach assumes that the composite is constructed of an array of basic units, each with identical composition, geometry, inclusion shape and material properties . Given the significant influence of the geometric details on the material properties, methods aimed at approaching the real structures must be proposed. Given that microscopy instruments and observation technologies are now well developed, image mapping has become a highly powerful tool for this purpose . In this paper, we present a comprehensive study of two-dimensional real microstructure-based finite element modelling of the deformation behavior of aluminum (Al)-particle-filled polytetrafluoroethylene (PTFE); such studies are considerably limited at the microscopic level. 
In addition to this, FE models from statistics-based simulated microstructures considering microvoids are established to draw comparisons. All simulated results are compared with experimental results. The samples studied in this paper are a pressed and sintered mixture of Al and PTFE powders, 26.5% and 73.5% by weight, respectively. The samples were fabricated by the authors themselves, based on Joshi\u2019s . First, powder of Al and PTFE was mixed in proportions of 26.5% to 73.5% by weight, respectively, via a dry mixing process. After the mixture was made, the material was pressed in a die to make a flat cylindrical sample 60 mm in diameter and 15 mm in height. Pressure applied to the mixture in the die was in the range of 70 MPa to 80 MPa with a dwell time of approximately 10 min. The pressed mixture then underwent a sintering cycle, which was performed under an argon atmosphere to prevent any oxidation or surface reaction of the material. The sample was heated at a rate of approximately 50 degrees per hour to a final temperature of approximately 380 \u00b0C, held at this final temperature for 4 h, and then cooled to room temperature via an initially slow and then fast process. To validate the FE models and algorithm, quasi-static compression tests were performed in a standard laboratory environment with the employment of an Instron, Inc., model 5985 servo-hydraulic load frame . The load applied to specimens was measured with a load cell mounted to the crosshead. The tests were carried out under displacement control with a constant crosshead speed of 0.6 mm/min, which corresponds to a nominal strain rate of 10\u22123\u00b7s\u22121. Contact surfaces of specimens were lubricated before the test. For reliable results, three samples were tested. The true stress\u2013strain curves of the three tested specimens under quasi-static compression are shown in . Microscopic images of Al/PTFE were obtained by a HITACHI S-4800 scanning electron microscope . 
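As a quick consistency check of the stated test conditions, a crosshead speed of 0.6 mm/min yields the quoted nominal strain rate of 10\u22123 s\u22121 only for a specimen gauge height of 10 mm; that height is my inference (the as-pressed billet itself is 15 mm tall, and the machined specimen dimensions are not given in this excerpt).

```python
# Nominal strain rate implied by the stated crosshead speed.
# The 10 mm specimen height is an assumption inferred from the quoted
# 1e-3 s^-1 nominal rate; the as-pressed billet is 15 mm tall.
crosshead_speed_mm_per_min = 0.6
specimen_height_mm = 10.0

strain_rate = (crosshead_speed_mm_per_min / 60.0) / specimen_height_mm  # s^-1
print(f"nominal strain rate = {strain_rate:.0e} s^-1")
```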
Five images of different magnifications were chosen . The SEM images then must be processed via an image processing method, whose main aim is to extract the edges of the particles and microvoids ,32. Simulated microstructures of Al/PTFE were set up based on the statistics of particles and voids and were generated by another MATLAB program. The particle size distribution of Al is depicted in . Statistics on the microvoids were collected based on their distribution and geometry extracted from the SEM images . The distribution of equivalent diameters is shown in . The shapes of the microvoids are described by a shape factor . The reproduction of a simulated microstructure is based on a random sequential adsorption (RSA) algorithm, which was initially used for studying protein adsorption and is widely used for the regeneration of composite microstructures . After the real microstructure-based and simulated microstructures were regenerated, finite element analyses modelling a quasi-static compression test were performed with ABAQUS/Standard 6.13 . To study the effectiveness of the microstructure reproduction algorithm and FE analysis methods and to compare them with the experimental results, both the aluminum particles and the PTFE matrix are modeled as elastic\u2013plastic materials . Another important point to be considered is the properties of the material within the microvoids. The microvoids are caused by the introduction of air into the composite during pressing and by the expansion/contraction of the constituents during the sintering process. However, in the quasi-static compression regime, the effect of the air within the microvoids on the mechanical properties of the composite is found to be small or nonexistent; the effect of the microvoids is due to the lack of constituents rather than to the air. Thus, the microvoids are assumed to be vacuums. Q1 represents a general node on the face of an RVE cube, and the corresponding node Q2 is at the same location on the opposite face. 
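The RSA reproduction step described above was implemented by the authors in MATLAB; a minimal Python sketch of the same idea is given below: candidate circles are dropped at random positions and accepted only if they do not overlap previously accepted circles. The radii and RVE size here are illustrative, not the measured particle/void statistics.

```python
# Minimal random sequential adsorption (RSA) sketch for a 2D particle layout:
# random candidates are accepted only when overlap-free. Values illustrative.
import math
import random

def rsa_disks(rve_size, radii, max_tries=10_000):
    """Place non-overlapping disks of the given radii in a square RVE."""
    placed = []  # list of (x, y, r)
    for r in radii:
        for _ in range(max_tries):
            # candidate center, kept fully inside the RVE
            x = random.uniform(r, rve_size - r)
            y = random.uniform(r, rve_size - r)
            if all(math.hypot(x - px, y - py) >= r + pr for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            break  # jamming limit reached for this radius; stop
    return placed

random.seed(0)
disks = rsa_disks(rve_size=100.0, radii=[8.0] * 5 + [4.0] * 20)
print(f"placed {len(disks)} of 25 particles")
```

Placing the large particles first, as done here, is the usual RSA ordering, since large inclusions are the hardest to fit once the domain fills up.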
For a given RVE, three types of boundary conditions are typically used: (i) the prescribed displacement boundary condition (PDBC); (ii) the prescribed traction boundary condition (PTBC); and (iii) the periodic boundary condition (PBC). The effects of the three types of boundary conditions on the prediction of RVE models were investigated by Chen C, and the results showed that the PDBC and the PTBC overestimated and underestimated the yield strength, respectively, whereas the PBC provided the best performance. This is written in the general format: (4) x(Q1) \u2212 x(Q2) = F\u00b7(X(Q1) \u2212 X(Q2)), V(Q1) = \u2212V(Q2), where F denotes the macroscopic deformation gradient, V is the force applied to the nodes, and X and x denote the positions of a material point in the original and deformed configurations, respectively. The PBC is implemented by a code in the Python language developed by the author, which extracts boundary node information and constrains the nodes based on Equation (4). To prevent rigid body movement during compression, the lower-left node is fixed in the x- and y-directions, whereas the lower-right node is fixed in the y-direction. To apply a compression load to the FE models, only the top-left node is given a prescribed displacement to create a 10% compression strain due to the PBCs. To obtain the effective elastic properties of the RVE, a homogenization approach is employed by considering the heterogeneous composite at the microscale to be a homogeneous material at the macroscale. Both real microstructure-based and simulated models were meshed with three-node linear plane strain triangle elements\u2014i.e., CPE3 elements\u2014so the analysis of all models was under the plane strain hypothesis. To apply the periodic boundary condition, the models had to be meshed with periodic meshes\u2014i.e., the same number of elements and nodes on opposite edges. Another point to consider is that the dimensions of some particles are very small, so the element size should be small enough to mesh small particles with sufficient precision.
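The node-pairing and constraint bookkeeping described above can be sketched as below: opposite-face nodes of a periodic mesh are paired, and the right-hand side of Equation (4) is evaluated for each pair. This is an illustrative stand-in, not the author's actual ABAQUS/Python code; the node list and the macroscopic gradient F below are made up.

```python
# Sketch of periodic-boundary-condition bookkeeping on a square 2D RVE.

def pbc_pairs(nodes, axis, length, tol=1e-8):
    """Pair nodes on one face with the matching nodes on the opposite face
    (a periodic mesh has identical node positions on opposite edges)."""
    other = 1 - axis
    lo = sorted((i for i, p in enumerate(nodes) if abs(p[axis]) < tol),
                key=lambda i: nodes[i][other])
    hi = sorted((i for i, p in enumerate(nodes) if abs(p[axis] - length) < tol),
                key=lambda i: nodes[i][other])
    return list(zip(lo, hi))

def pbc_rhs(X1, X2, F):
    """Right-hand side of Equation (4): x(Q1) - x(Q2) = F (X(Q1) - X(Q2))."""
    dX = (X1[0] - X2[0], X1[1] - X2[1])
    return (F[0][0] * dX[0] + F[0][1] * dX[1],
            F[1][0] * dX[0] + F[1][1] * dX[1])

# Toy periodic mesh of a unit RVE (corner and mid-edge nodes only):
nodes = [(0, 0), (1, 0), (0, 1), (1, 1), (0, 0.5), (1, 0.5)]
pairs = pbc_pairs(nodes, axis=0, length=1.0)   # left/right faces
F = [[1.0, 0.0], [0.0, 0.9]]                   # 10% compression in y
constraints = {p: pbc_rhs(nodes[p[0]], nodes[p[1]], F) for p in pairs}
print(pairs)  # [(0, 1), (4, 5), (2, 3)]
```

In a solver, each pair would become a linear multi-point constraint tying the deformed relative positions of the two nodes to this right-hand side.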
Thus, the same number of seeds was selected along opposite edges, and the models were all meshed with an element size of approximately 1 \u03bcm. Taking the 1-R model, the largest, as an example, it contains 183,119 nodes and 364,757 elements. The compression strain was set to 10% by applying displacement in the y-direction. General contact was added for all models in case contact of materials resulted from the contraction of microvoids. After the analysis was complete, stresses and strains were extracted by a Python code to draw stress\u2013strain curves and calculate elastic constants. The contour plots of von Mises effective stress and deformation at the final moment are shown in the corresponding figures. First, opposite edges deforming identically and stress continuity across the boundaries can be observed, which suggests that a proper PBC is applied. In this section, stress\u2013strain curves of the five real microstructure-based and five simulated microstructure models are extracted and compared with those of the experiments, as shown in the corresponding figures. To compare them with the 2D models, 3D models are established based on the same statistical information. However, the largest challenge of using either a 3D real microstructure-based model or a simulated model is that many elements and a highly refined mesh are required to conform to the heterogeneous nature of the microstructure. Although a 3D model without simplification of the microstructure certainly gives a better prediction of composite properties, it requires extreme computational power.
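The post-processing step above, fitting the initial slope of an extracted stress\u2013strain curve to obtain an elastic constant, can be sketched as follows. The data points are invented for illustration; the real curves come from the ABAQUS output.

```python
def elastic_modulus(strain, stress, fit_upto=0.002):
    """Least-squares slope of the initial linear part of a stress-strain
    curve: a simple stand-in for the elastic-constant extraction step."""
    pts = [(e, s) for e, s in zip(strain, stress) if e <= fit_upto]
    n = len(pts)
    se = sum(e for e, _ in pts)
    ss = sum(s for _, s in pts)
    see = sum(e * e for e, _ in pts)
    ses = sum(e * s for e, s in pts)
    return (n * ses - se * ss) / (n * see - se * se)

# Illustrative points only (stress in MPa), not simulation output:
strain = [0.0, 0.001, 0.002, 0.01]
stress = [0.0, 1.5, 3.0, 10.0]
print(round(elastic_modulus(strain, stress)))  # 1500
```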
Three-dimensional models were calculated previously by the author; a 50 \u03bcm \u00d7 50 \u03bcm \u00d7 50 \u03bcm model was established based on the statistics of particles and microvoids given in the corresponding table. The elastic modulus and yield stresses of all simulations and experiments are listed in the corresponding table. By comparison, the relative errors of predictions from 2D models over the experimental results are within 11%, whereas those of the real microstructure-based models are only 1.9% and 6.6%, respectively. Corresponding predictions from 3D models show the highest variation, with 13.6% and 7.3% relative error over elastic modulus and yield stress, respectively. After comprehensive comparisons, real microstructure-based models that consider particle distribution, geometry and microvoids give the best results, followed by the simulated-microstructure models and the 3D models based on statistics of particles and microvoids. The real microstructure-based models, established by processing SEM images and extracting edges of Al particles and microvoids, are able to accurately represent the distribution and geometry of particles and microvoids; the simulated microstructure models are generated from statistics on the distribution and geometry of particles and microvoids and address the drawbacks of RVEs regarding regular particle arrangement and micro defects in composite structures; compared with 3D models, the 2D real microstructure-based models and simulated microstructure models are more efficient methods to simulate the mechanical behavior of composites at the microscale; experimental results show that the microscale modelling of real microstructure-based models and simulated microstructure models gives good predictions of elastic modulus and yield stress.
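The relative errors quoted above are simple normalized differences between predicted and experimental values. A one-line helper, with made-up illustrative numbers rather than the paper's data:

```python
def relative_error_pct(predicted: float, experimental: float) -> float:
    """Relative error of a prediction over the experimental value, in percent."""
    return abs(predicted - experimental) / experimental * 100.0

# Illustrative values only, not the measured moduli from the study:
print(round(relative_error_pct(203.8, 200.0), 1))  # 1.9
```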
The two types of models predict the elastic modulus with relative errors of 1.9% and 10.9%, respectively, whereas those of the yield stress are 6.6% and 10.6%, respectively. Two-dimensional microscale finite element analyses of the Al/PTFE composite are conducted in this paper, which include real microstructure-based models established following a series of image processing and finite element modelling procedures, and simulated microstructure models reproduced on a statistical basis considering the geometry and distribution of microvoids. In addition to the 2D models, 3D finite element modelling and experiments are conducted for comparison with the two types of 2D models, and the results of the different methods are discussed and analyzed. Specifically,"} +{"text": "Experimental and clinical evidence has demonstrated aberrant expression of cytokines/chemokines and their receptors in patients with hippocampal sclerosis (HS) and focal cortical dysplasia (FCD). However, there is limited information regarding the modulation of cytokine/chemokine regulatory networks, suggesting a contribution of miRNAs and downstream transcription factors/receptors in these pathologies. Hence, we studied the levels of multiple inflammatory mediators along with transcriptional changes of nine related miRNAs and mRNA levels of downstream effectors of significantly altered cytokines/chemokines in brain tissues obtained from patients with HS (n\u2009=\u200926) and FCD (n\u2009=\u200926). Up regulation of IL1\u03b2, IL6, CCL3, CCL4, STAT-3, C-JUN and CCR5, and down regulation of IL10 were observed in both HS and FCD cases (p\u2009<\u20090.05). CCR5 was significantly up regulated in FCD as compared to HS (p\u2009<\u20090.001). Both HS and FCD presented decreased miR-223-3p, miR-21-5p, miR-204-5p and let-7a-5p and increased miR-155-5p expression (p\u2009<\u20090.05).
As compared to HS, miR-204-5p (upstream to CCR5 and IL1\u03b2) and miR-195-5p (upstream to CCL4) were significantly decreased in FCD patients (p\u2009<\u20090.01). Our results suggest differential alteration of cytokine/chemokine regulatory networks in HS and FCD and provide a rationale for developing pathology-specific therapy. FCD and HS are network-level disorders involving alterations in several cellular processes and multiple cell signalling pathways3. Brain inflammation is an intrinsic feature of the hyperexcitable pathologic brain tissue in DREs of differing etiology4. There is robust evidence for an activated immune response in non-CD pathology like HS. The activation of inflammatory pathways in human HS is supported by gene expression analysis2. Elevated levels of interleukin 6 (IL6), IL1\u03b2 and chemokines (chemokine (C-C motif) ligands) are reported in the cerebrospinal fluid and brain tissues of HS patients6. In experimental models, up regulation of IL1\u03b2, IL6, IL1Ra, tumor necrosis factor alpha (TNF\u03b1) and transforming growth factor beta (TGF\u03b21) is reported7. Moreover, a direct pro-convulsive effect of IL1\u03b2 is demonstrated8. These observations suggest the existence of a feedback loop between the pro-inflammatory cytokine/chemokine systems, which may be critical for the propagation of the inflammatory response in human TLE with HS9. Focal cortical dysplasia (FCD), caused by malformations of cortical development (MCDs), and hippocampal sclerosis (HS) are the two most frequent drug resistant epilepsy (DRE) pathologies, which together constitute about fifty percent of all surgical pathology of epilepsy10.
There is also evidence of activation of plasminogen-, toll-like receptor (TLR)- and vascular endothelial growth factor (VEGF)-mediated signalling contributing to glial activation and the associated inflammatory reactions11. An induced inflammatory response in FCD is supported by the activation of the IL6 and IL1\u03b2 signalling pathways, induction of chemokines, microglial reactivity, as well as blood brain barrier (BBB) breakdown12. Little is known regarding the postulated cytokine/chemokine regulatory network in HS and FCD; however, inflammation-related mediators have been implicated in a number of studies. The aim of the present study was to analyze the cytokine-chemokine regulatory networks in the brain tissues resected from patients with HS and FCD. There is a growing body of evidence supporting that the regulation of the inflammatory response is a very complex process involving coordinated participation of multiple regulation systems, such as an integrated network of microRNAs (miRNAs) and transcription factors/receptors18. To the best of our knowledge, direct, simultaneous quantification of multiple inflammation-related mediators and their regulators in resected tissues of human HS, FCD, and non-epileptic brain tissue has not been previously accomplished. Therefore, we measured the levels of eight inflammatory mediators (among them CCL4 (MIP1\u03b2), TNF\u03b1 and VEGF) and investigated the gene expression of nine inflammation-related miRNAs and four downstream effectors in brain tissues obtained from fifty two epilepsy patients (twenty six HS and twenty six FCD) and twenty two non-epileptic control subjects, using multiplex immunoassay and quantitative RT-PCR, respectively.
The inflammatory mediators and the downstream effectors were selected based on previous literature and their contribution to inflammatory processes. A total of twenty six HS (age range 6 to 43 yrs) and twenty six FCD (age range 4 to 51 yrs) patients were included in this study. For non-epileptic controls (age range 3 to 63 yrs), cortical tissues were obtained from the margins of tumors from patients (twenty two) who underwent curative surgery for brain tumors and had no seizures. For the cytokine/chemokine assay, ten HS, ten FCD and eight controls were included. Subsequently, we used surgically resected tissues from ten HS, ten FCD and eight controls for downstream gene expression analysis, whereas six HS, six FCD and six controls were used for microRNA expression studies. The detailed clinical characteristics of individuals are listed in Table . Up regulation of IL1\u03b2, IL6, CCL3 and CCL4 and down regulation of IL10 were observed in both HS and FCD patients as compared to non-epileptic controls (Table ). A significant increase in STAT-3 expression was observed in HS and FCD as compared to non-epileptic controls. HS patients presented decreased miR-223-3p, miR-21-5p, miR-204-5p and let-7a-5p and increased miR-155-5p compared with non-epileptic controls, while FCD patients presented decreased miR-223-3p, miR-21-5p, miR-204-5p, miR-195-5p, miR-203-3p and let-7a-5p and increased miR-155-5p compared with non-epileptic controls; miR-204-5p and miR-195-5p were the only miRNAs found to be decreased in FCD when compared with HS. Previous work demonstrated that an increase in ICER mRNA following status epilepticus might have a role in suppressing the severity of epilepsy in animal models, but no such reports are available in epilepsy patients16. We observed increased STAT-3 and C-JUN expression in surgically resected tissues from both HS and FCD patients, whereas ICER up regulation was only observed in FCD. This suggests that up regulation of the IL1\u03b2 and IL6 signalling pathways might be mediated via JAK/STAT signalling in both HS and FCD, but the downstream effectors may vary.
On the basis of this evidence, we postulate that the elevation of IL1\u03b2 and IL6 and their related downstream factors in HS and FCD lesions may contribute to the pathophysiology associated with epileptogenesis. STAT proteins are principal mediators of cytokine receptor signalling, and a number of STAT3 target genes have been found to be involved in the process of epileptogenesis32. Our results demonstrated differential expression of several microRNAs in HS and FCD. miR-204 plays pivotal roles during the epileptogenic process, and STAT3 overexpression is found to be associated with a decrease in miR-20433. Up regulation of STAT-3 might contribute to the down regulation of miR-204-5p in HS and FCD patients as observed in our study. Contrary to our results, miR-21-5p is shown to be up regulated in children with MTLE and FCD35. However, the down regulation of miR-21-5p correlates with the up regulated IL1\u03b2 and STAT-3 in the current study. Further studies are needed to establish a clear association. miR-203-3p is found to be significantly decreased in FCD patients and unaltered in HS patients. There are few reports showing deregulation of its expression in MTLE tissues; thus, its contribution to regulating inflammatory processes needs further investigation31. Previous studies demonstrated that miR-223 forms a positive regulatory loop with IL6 for pro-inflammatory cytokine production36. Up regulated miR-155-5p expression is found to be associated with increased expression of pro-inflammatory cytokines and decreased expression of the suppressor of cytokine signalling protein (SOCS1)37. Up regulation of IL6, IL1\u03b2, STAT-3 and miR-155-5p and down regulation of miR-223-3p are consistent with these published data. let-7a is shown to regulate anti-inflammatory factors like IL10 and IL4. The down regulation of IL10 and up regulation of IL6 in the present study correlate well with the down regulated let-7a-5p38.
We could not find significant differences in the levels of miR-106a-5p and let-7c-5p in MTLE and FCD patients as compared to controls, though changes in the expression of miR-106a-5p have been demonstrated by Kan et al.31 in MTLE patients. Changes in the expression of miRNAs involved in regulating inflammatory responses have been observed in patients with HS, whereas relatively few reports are available for FCD. Network analysis exhibited clustering of altered cytokines and downstream transcription factors in complex immunological networks. STAT-3, IL6, IL1\u03b2 and C-JUN constituted major hubs showing complex interactions with various pathways, i.e. JNK, MAPK, mTOR, EGFR, NF-\u03baB, etc. Our results suggest that inflammatory processes might contribute to underlying neuronal\u2013glia network dysfunctions. This analysis also supports the evidence that STAT-3 is the major downstream effector of the altered cytokine signalling cascade. Studies in a rat model demonstrated that systemic administration of WP1066, a STAT-3 inhibitor, transiently reduces seizure-induced STAT3 activation40 in vivo and also lowers the seizure frequencies over time39. Thus, STAT3 can be considered as a potential therapeutic target for both HS and FCD and needs further investigation. The two chemokines CCL3 and CCL4 are up regulated in both HS and FCD patients. Most studies so far report increased chemokine levels in human HS tissue or in experimental models for HS41. However, there is no report available regarding CCL3 and CCL4 expression in FCD so far. The effects of these chemokines are mediated by their G-protein-coupled receptors (GPCRs), of which the most important and most widely studied is CCR5. We also found increased levels of CCR5 in both HS and FCD patients. Increased CCL3/CCL4 signalling via CCR5 may relate to increased glial glutamate release and disruption of the BBB40.
Decreased expression of miR-195 (upstream to CCL4) has been shown to up regulate RPS6KB1 (a known marker of mTOR activation)42. Interestingly, the mTOR pathway controls the function of dendritic cells (DCs), which regulate chemokine production. Aberrant activation of the mTOR pathway is also reported in FCD. Thus, miR-195 might play a potential role in modulating chemokine activity and/or the mTOR pathway in FCD. Recently, increased miR-155 has been shown to be associated with increased production of the chemokines CCL3, CCL4, CCL5 and CCL8, and to regulate chemokine receptor expression in rheumatoid arthritis patients43. So, the role of increased miR-155 expression in regulating chemokine signalling in epileptic cases cannot be ruled out. The action of these miRNAs with reference to chemokine signalling is not well studied and demands further investigation. A significant decrease in miR-204-5p and miR-223-3p is demonstrated in both pathologies. In addition, we observed decreased expression of miR-203-3p and miR-195 (upstream to CCL4) only in FCD patients. miR-223 is shown to directly target the chemoattractants CXCL2 and CCL3, and the pro-inflammatory cytokine IL6. Network analysis identified CCR5 as one of the major hubs showing complex interactions with various pathways, i.e. JNK, MAPK, mTOR, EGFR, NF-\u03baB, etc. CCR5 mediates cross talk with cytokine signalling to co-ordinate the release of many different cytokines/chemokines that are used by cells to orchestrate immune responses (Figs. and 4)19. The IL1\u03b2 level is relatively higher in HS as compared to FCD, although the difference is not statistically significant. Levels of the chemokines CCL3 and CCL4 are found to be relatively higher in FCD as compared to HS; however, the difference is not statistically significant. At the downstream expression level, only CCR5 (the downstream target of CCL3 and CCL4) has been found to be significantly up regulated in FCD as compared to HS.
At the microRNA level, the expression of miR-204-5p and miR-195-5p is found to be significantly down regulated in FCD as compared to HS. miR-204-5p and miR-195-5p are the possible microRNAs that could regulate the expression of chemokines and their receptors (CCR5), as identified with miRTarBase4. In FCD, apart from intrinsic inflammation, the contribution of peripheral immune cells, such as T cells and DCs, is shown to be more significant than in HS. DCs deserve attention in view of their ability to drive chronic inflammation by interacting with T cells and regulating chemokine production9. It could be speculated that decreased expression of miR-195 may regulate the activity of the mTOR signalling\u2013dendritic cell\u2013chemokine regulatory axis and influence the production of chemokines in FCD44. As reported previously, relatively higher levels of IL1\u03b2 in HS might be attributed to IL1\u03b2 polymorphism in HS patients45. These apparent discrepancies are likely due to the different roles that the activation of specific pro-inflammatory pathways may have on neuronal excitability, cell survival, and cellular and synaptic plasticity. Brain tissue from HS is shown to be mainly characterized by intrinsic inflammation involving activated microglia, parenchymal and perivascular astrocytes, scattered neurons, endothelial cells of microvessels and macrophages surrounding vessels and parenchyma. Cells of adaptive immunity, such as T cells and B lymphocytes, are shown to be scarce or absent. In addition, due to several limitations associated with autopsy and trauma tissues, and the better availability of tumor periphery tissues, we preferred to use histopathologically normal tumor periphery tissues as non-epileptic controls for this study. Hence, the mechanism and clinical implications of these pathology-specific immune alterations need to be clarified in a larger cohort of patients using different sets of non-epileptic controls.
However, we believe that our preliminary study may contribute to understanding the inflammatory networks involved in the pathogenesis of HS and FCD. Further investigations concerning the cytokine-chemokine network-mediated regulation of neuroinflammation may lead to novel therapeutic strategies against HS and FCD. Our study has several limitations. This is a preliminary study conducted on a set of inflammatory molecules, miRNAs and downstream transcription factors/receptors, selected based on their involvement in the pathogenesis of epilepsy, but it does not cover the whole potential network of molecules/pathways contributing to this disease. Another major limitation of this study is the small sample size, which does not provide age- and sex-matched cases vs controls. The amount of tissue resected in epilepsy and non-epileptic controls (tumor periphery) is so small that it limits the use of the same tissues for multiple experiments, including in vitro studies. This study was reviewed and approved by the Institutional Ethics Committee (IEC), All India Institute of Medical Sciences (AIIMS), New Delhi, India and the Institutional Human Ethics Committee (IHE), National Brain Research Centre (NBRC), Manesar, Haryana, India, before the study began. Informed consent was obtained from all the participants. All the experiments of this study were performed in accordance with the guidelines of the IEC, AIIMS, New Delhi and the institutional human ethics committee, NBRC, Manesar. In order to obtain useful information from post-mortem brain studies46, it is important that the quality of the post-mortem brain samples meets certain standards. Evidence exists indicating that RNA and protein degrade progressively with increasing post-mortem interval (PMI) and that measurement of gene expression in brain tissue with longer PMI may give artificially low values48.
Recently, Roncon et al.49 have also raised doubts about using autopsy as a control for epilepsy surgery tissue in microRNA studies, suggesting that post-mortem modifications may have a greater impact than the disease background on the data generated, with a potentially serious hindrance to their interpretation. Taken together, all these observations raise concerns about the use of autopsy tissue as a control for this kind of study. In addition, systemic inflammatory diseases and infections may influence the cerebral inflammatory status50. Since this condition can represent a potential problem in the evaluation of the tissue content of cytokines/chemokines/mRNA and microRNA in autopsy samples, brain tissues resected because of associated tumor pathologies would be more suitable as a non-epileptic control. Tissue taken from the perimeter of the abnormal area has also been used as a non-epileptic control in previous studies25. Twenty-two cortical tissues obtained from the margins of tumors during surgical resection in patients with brain tumors without any history of seizures have been included as non-epileptic controls (Table ). Patients with HS (twenty six) and FCD (twenty six) underwent phased pre-surgical assessment, and the pathology was demonstrated by documenting convergent data on MRI, video EEG (vEEG), fluoro-2-deoxyglucose positron emission tomography (FDG-PET) evaluations and electrocorticography (ECoG), and confirmed by histopathological examinations. Based on the concordant observations, the decision for surgical resection was taken after explaining the available options and obtaining written informed consent from the patient. There are no \u201cideal\u201d or universally acceptable non-epileptic controls for this study, which has been conducted on surgically resected tissue specimens from drug resistant epilepsy (HS and FCD) patients.
Conceptually, the potential non-epileptic control tissue could be human brain samples resected from patients of similar ages with non-seizure pathologies, such as tumors or trauma, or post-mortem autopsy controls. Unfortunately, trauma is not very suitable as a control because, within minutes of a traumatic impact, a robust inflammatory response is elicited in the injured brain. Detailed clinical characteristics are given in Table . For this study, ten HS (H1\u2013H10), ten FCD (F1\u2013F10) and eight controls (C1\u2013C8) were included. Protein was isolated using a cell lysis kit (Bio-Rad) and estimated by a bicinchoninic acid (BCA) Protein Assay Kit. Multiplex immunoassay (MIA) was performed on selected chemokines, cytokines and growth factors. The experimental procedure was done using the Bio-Rad Bio-Plex Pro assay kit as per the manufacturer\u2019s protocol. In brief, 50\u2009\u00b5l of diluted bead solution and 50\u2009\u00b5g of protein were added to each well in triplicate. The plate was incubated for 3\u2009h and washed three times; afterwards, 50\u2009\u00b5l of diluted biotin antibody was added to each well and incubated for 1\u2009h. The plate was then washed, and 50\u2009\u00b5l of diluted streptavidin-PE was added to each well and incubated for 30\u2009min. Finally, the plate was washed again and measured. Measurements and data analysis were performed using the Bio-Plex system in combination with the Bio-Plex Manager software version 4.1 (Bio-Rad Laboratories)18. The regulatory miRNAs miR-106a-5p, miR-223-3p, miR-21-5p, miR-195-5p, miR-204-5p, miR-203-3p, miR-155-5p, let-7a-5p and let-7c-5p were identified and selected through miRTarBase, the experimentally validated miRNA-target interactions database19. The known downstream effectors of altered cytokines/chemokines were selected from the available literature2. Specific primers were designed using Primer-BLAST (Supplementary Table ). Purified RNA was reverse transcribed using a high capacity cDNA reverse transcription kit (Thermo Scientific) following the manufacturer\u2019s protocol.
Real time PCR amplifications were performed in a CFX96 Real Time System (Bio-Rad) with the following cycling parameters: an initial hot start of 95\u2009\u00b0C for 3\u2009min followed by 40 cycles of 95\u2009\u00b0C for 5\u2009s and 60\u2009\u00b0C for 30\u2009s. The 2\u2212\u0394\u0394Ct method was used to quantify the relative normalized expression of the studied genes. Melting curve analyses were performed to validate the specific generation of the expected PCR products. qPCR was performed to evaluate the expression levels of the downstream targets. HPRT (hypoxanthine phosphoribosyl-transferase) was included as the reference gene. Differential expression of selected upstream miRNAs was evaluated by qPCR using the QuantiMir System (SBI System Biosciences). Total RNA of tissues was extracted and purified using the mirVana miRNA Isolation kit (Applied Biosystems/Ambion) following the manufacturer\u2019s instructions. For this experiment, six HS (H21-H26), six FCD (F21-F26), and six controls (C17-C22) were included. 200 ng of total RNA from each sample was reverse transcribed using the QuantiMir kit (System Biosciences (SBI), Mountain View, CA, USA) in a total volume of 20\u2009\u03bcl51. Melting curve analysis of the qPCR products verified product specificity. miR-16 was included as a normalization signal. The primer sequences used in RT-PCR are listed in Supplementary Table\u00a052. In order to assess the relationships between the inflammatory molecules, miRNAs and mRNAs of downstream targets which were altered in our study, gene network analysis was performed using natural language processing-based (NLP) network discovery algorithms in GeneSpring software version 13.1.1.
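The 2\u2212\u0394\u0394Ct quantification mentioned above can be sketched as follows; the Ct values below are illustrative only, not data from this study.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative normalized expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Illustrative Ct values: target amplifies 2 cycles "earlier" (relative to the
# reference gene) in the sample than in the control, i.e. 4-fold up regulation.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 4.0
```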
In brief, the software overlaid the list of significantly altered inflammatory molecules, miRNAs and mRNAs of downstream targets onto a global molecular network to derive networks in which the focus genes are projected as nodes and interacting partners are clustered around these nodes based on their reported connectivity. Gene relationships and interactions are extracted using NLP algorithms in the Pathway Architect software available in GeneSpring ver. 13.1.1 (Agilent Technologies). This tool extracts interactions for a given list of genes from PubMed articles and interactions with text mining tools based on the following relationships: binding, expression, metabolism, promoter binding, protein modification, regulation, transport, etc. The network displays interactions among the entities and their neighbouring genes, including those that are not included in the original list, thereby giving rise to nodes with different interrelationships. Protein levels of inflammatory mediators are represented as medians with range. mRNA and microRNA expression data are represented as \u0394Ct values with mean\u2009\u00b1\u2009standard deviation (SD). Scatter diagrams were plotted using GraphPad Prism software version 7.03. One-way analysis of variance (ANOVA) followed by Dunnett\u2019s test/Tukey\u2019s post-hoc test is used as required to analyze the data between more than two groups. A p-value of <0.05 is considered statistically significant.
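The one-way ANOVA used here reduces to comparing the between-group and within-group mean squares. A minimal stdlib sketch with illustrative groups, not the study's measurements:

```python
from itertools import chain

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group MS / within-group MS."""
    all_vals = list(chain.from_iterable(groups))
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Three illustrative groups (e.g. control, HS, FCD measurements):
f_stat = one_way_anova_F([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0])
print(round(f_stat, 1))  # 13.0
```

The F statistic would then be compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p-value, which statistical packages such as GraphPad Prism report directly.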
SigmaPlot software version 13.0 (Systat Software Inc.) is used for statistical analyses. We confirm that we have read the Journal\u2019s position on issues involved in ethical publication and affirm that this report is consistent with those guidelines. Real time PCR primers used in this study"} +{"text": "For that purpose, the activity of ethanolic extracts of flowers, inflorescence stalks and leaves against thirty one strains of bacteria, yeasts, dermatophytes and moulds was studied using both the agar well diffusion and broth dilution assays. Among the investigated extracts, found to be effective against a broad spectrum of microorganisms, the flower extract was considered to be the most potent one. Linalool and rosmarinic acid, as the most abundant constituents found, are very likely major contributors to the observed antimicrobial effects. The results suggest that flowers of lavandin \u2018Budrovka\u2019 could serve as a rich source of natural terpene and polyphenol antimicrobial agents. Plant-based remedies have been part of traditional health care in most parts of the world for thousands of years, and there is increasing scientific interest in plants as a source of novel agents to fight infectious diseases. Since the discovery and exploitation of antibiotic agents in the 20th century, the targeted selective toxicity of such agents has ensured their widespread and largely effective use to combat infection; however, it has paradoxically resulted in the emergence and dissemination of multi-drug resistant pathogens [2]. The Lamiaceae family includes numerous medicinal herbs with pronounced therapeutic properties, such as Lavandula species, aromatic plants mainly distributed in the Mediterranean area. Well-known since ancient times, the medicinal uses of lavender inflorescence are based on its sedative, cholagogue, spasmolytic, carminative and antiseptic properties. Lavandula species are also grown for their wide range of uses in perfumery, cosmetics and food processing.
Previous phytochemical studies have indicated that Lavandula species contain essential oil, triterpenes, coumarins, hydroxycinnamic acids and flavonoids. Lavandula \u00d7 intermedia Emeric ex Loisel. \u2018Budrovka\u2019 is a Croatian lavandin cultivar, widely cultivated for the production of essential oil. Our recent findings highlighted this indigenous hybrid as a rich source of polyphenolic compounds found to be responsible for its antioxidant activity. Although the essential oils of Lavandula species have been widely investigated, non-volatile extracts were generally very poorly studied [9,10]. In the search for new antimicrobial resources, the present work focuses on investigating the in vitro activities of L. \u00d7 intermedia \u2018Budrovka\u2019 against a broad spectrum of human pathogenic bacteria and fungi, as well as microorganisms causing the spoilage of food, pharmaceutical and cosmetic products. The antimicrobial activity of plant extracts depends strongly on the type and amount of active principles. Their contents and composition vary from plant to plant species and even in different parts of the same species. Therefore, in the first step, the contents of essential oil and individual polyphenolic subclasses were determined in the flowers, inflorescence stalks and leaves of L. \u00d7 intermedia \u2018Budrovka\u2019. As can be seen from the measured R
MIC and MBC/MFC values were determined using the broth dilution method, and the results are expressed in volume percentages as minimum inhibitory concentrations (MIC) and minimum bactericidal/fungicidal concentrations (MBC/MFC). Candida krusei was one of the most sensitive fungi in the dilution method, while all tested moulds were found to be the most resistant strains. Generally, dermatophytes proved to have a great susceptibility to the fluid ethanolic extracts of L. \u00d7 intermedia \u2018Budrovka\u2019 in both assays applied. The presented results demonstrate that all investigated extracts of L. \u00d7 intermedia \u2018Budrovka\u2019 have antimicrobial capacities and act against a broad spectrum of bacteria, yeasts, moulds and dermatophytes, with the flower extract proving to be the most potent one. Our findings suggest that the antimicrobial effects of the extracts examined could be associated with their phenolic constituents. In addition, linalool and linalyl acetate, observed in relatively large amounts in the flower extract, have already been described as strong antimicrobial agents, notably linalool, by Dorman and Deans and other authors. Our results further suggest that L. \u00d7 intermedia \u2018Budrovka\u2019 could serve as a rich source of antimicrobial agents effective against bacteria with the ability to develop antibiotic resistance. In addition to the proven activity against a broad spectrum of pathogenic bacteria, strong antifungal activity of the flower extract was found against dermatophytes and Candida species, both causal agents of serious skin and mucous membrane infections that require long-term treatment. In this connection, our study suggests the possibility of external use of L. \u00d7 intermedia \u2018Budrovka\u2019 flower extract, and encourages future in vivo research.
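As a concrete illustration of how an MIC is read from a dilution series, a minimal sketch in Python; the concentrations and growth pattern below are hypothetical, not data from this study:

```python
def mic(concentrations, growth):
    """Lowest tested concentration with no visible growth, or None if
    growth occurred at every concentration tested.

    concentrations: descending two-fold series (here % v/v, hypothetical)
    growth: parallel list of booleans (True = visible growth)
    """
    inhibitory = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibitory) if inhibitory else None

# Growth only at the two lowest concentrations -> MIC read as 12.5 % v/v
print(mic([50, 25, 12.5, 6.25, 3.125], [False, False, False, True, True]))
```

The MBC/MFC is read analogously, from subculture plates rather than turbidity.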
Though the activities against the tested microorganisms were of varying intensity, the results clearly proved that the ethanolic extracts of L. \u00d7 intermedia \u2018Budrovka\u2019 act against a broad spectrum of microorganisms, in line with earlier reports on the antimicrobial activity of linalool and related terpenes. Microbial contamination of food as well as pharmaceuticals and cosmetics is significant both from public health and economic viewpoints. Utilization of plant extracts as an alternative to synthetic antimicrobial and antioxidant chemicals is an increasing trend since natural additives are considered to be safer and less toxic ,17. Additionally, our recently published study reported the antioxidant activity of these extracts, which makes L. \u00d7 intermedia \u2018Budrovka\u2019 flowers a good candidate for further research with a view to finding a safe and effective natural preservative. Aerial parts of Lavandula \u00d7 intermedia Emeric ex Loisel. \u2018Budrovka\u2019 were collected at full flowering stage from plants cultivated at a farm in the village of Dragovan\u0161\u010dak, near the city of Jastrebarsko. Plant material was air-dried, and leaves, flowers and inflorescence stalks were separated. The plant sample was identified at the Department of Pharmacognosy, Faculty of Pharmacy and Biochemistry, University of Zagreb, Croatia, where a voucher specimen has been deposited (Number FBF-FGN/BB 101). Liquid extracts of flowers, inflorescence stalks and leaves of L. \u00d7 intermedia \u2018Budrovka\u2019 were prepared by the percolation procedure according to the European Pharmacopoeia. The contents of essential oil in flower, inflorescence stalk and leaf of lavandin \u2018Budrovka\u2019 were determined by the hydrodistillation method, also according to the procedure described in the European Pharmacopoeia. For the determination of total phenols, absorbance was measured after addition of Folin-Ciocalteu\u2019s phenol reagent (A1), and the quantification of total phenols was done with respect to the standard calibration curve of pyrogallol (6.25\u201350.00 mg). For the determination of tannin content, the stock solution was vigorously shaken with hide powder for 60 min.
Since the hide powder adsorbed the tannins, the phenols not adsorbed on hide powder were measured in the filtrate, after addition of Folin-Ciocalteu\u2019s phenol reagent in a sodium carbonate medium (A2). Determination of total tannin content was performed following the method described in the European Pharmacopoeia. The percentage content of tannins, expressed as pyrogallol, was calculated from the following equation: tannins (%) = 3.125 \u00d7 (A1 \u2212 A2) / (A3 \u00d7 m), where A3 is the absorbance of the pyrogallol standard and m is the mass of the sample, in grams. Determination of hydroxycinnamic acid derivatives was performed according to the procedure described in the European Pharmacopoeia, using the equation: content (%) = A \u00d7 2.5 / m, where A is the absorbance of the test solution at 505 nm and m is the mass of the sample, in grams. Analysis of each sample was performed in triplicate. The total flavonoid contents in L. \u00d7 intermedia \u2018Budrovka\u2019 flower, inflorescence stalk and leaf, respectively, were determined using the spectrophotometric method of Christ and M\u00fcller, with the equation: content (%) = A \u00d7 0.772 / m, where A is the absorbance of the test solution at 425 nm and m is the mass of the sample, in grams. High performance thin-layer chromatographic (HPTLC) analyses of different secondary plant metabolites were performed on precoated silica gel 60 F254 HPTLC plates. Aliquots (10 \u03bcL) of fluid ethanolic extracts and 0.5% ethanolic solutions of reference substances were spotted onto the plates, which were then developed in vertical glass chambers previously saturated with the mobile phases: toluene:ethyl acetate, ethyl acetate:formic acid:water, and diisopropyl ether:acetone:water:formic acid ,21,22. A total of thirty-one tested microbial cultures belonging to Gram-positive and Gram-negative bacteria, yeasts and dermatophytes are listed below. All tested bacterial and fungal strains were maintained on agar slants and stored at 4 \u00b0C.
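The three spectrophotometric formulas above translate directly into code; a small sketch with hypothetical absorbance readings (A1, A2, A3 and m as defined in the text):

```python
def tannin_pct(a1, a2, a3, m):
    """Tannins, expressed as pyrogallol (%): 3.125 * (A1 - A2) / (A3 * m).
    a1: total polyphenols, a2: polyphenols not adsorbed on hide powder,
    a3: pyrogallol standard, m: sample mass in grams."""
    return 3.125 * (a1 - a2) / (a3 * m)

def hydroxycinnamic_pct(a, m):
    """Hydroxycinnamic acid derivatives (%): A * 2.5 / m (A read at 505 nm)."""
    return a * 2.5 / m

def flavonoid_pct(a, m):
    """Total flavonoids (%): A * 0.772 / m (A read at 425 nm)."""
    return a * 0.772 / m

# Hypothetical readings for a 0.50 g sample:
print(tannin_pct(0.60, 0.25, 0.35, 0.50))  # 6.25 % tannins as pyrogallol
```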
Inocula were prepared from fresh cultures by suspending the microorganisms in sterile saline and adjusting the density to the 0.5 McFarland standard. To determine the minimum inhibitory concentration (MIC) and minimum bactericidal/fungicidal concentration (MBC/MFC), the broth dilution method was applied ,24, using serial dilutions of the extracts. Experiments were carried out in triplicate, and the results are expressed as mean \u00b1 standard deviation (SD). Differences were estimated by Student\u2019s t-test, and values of p < 0.05 were considered statistically significant. The results presented in this paper can be considered the first report on the antimicrobial activity of L. \u00d7 intermedia \u2018Budrovka\u2019, also contributing to a great extent to the currently poor knowledge of the effectiveness of non-volatile extracts of Lavandula taxa against microorganisms in general. The performed studies demonstrated that all of the extracts prepared from the different plant parts (flower, inflorescence stalk and leaf, respectively) possess the ability to inhibit either bacterial or fungal growth, although their activity varied greatly. The flower extract of L. \u00d7 intermedia \u2018Budrovka\u2019 was found to be the most potent one, showing the capacity to act toward a broad spectrum of bacteria, yeasts, moulds and dermatophytes. Hence, lavandin \u2018Budrovka\u2019 flowers could be used either as an easily accessible source of natural antimicrobial agents for medicinal purposes or to meet the needs of the food, pharmaceutical and cosmetic industries. Although our findings indicated that the monoterpenes linalool and linalyl acetate, accompanied by polyphenolic constituents, are most probably the constituents responsible for the strong antibacterial and antifungal effects of the flower extract, further research should aim to isolate the active compounds and to investigate their structure and effectiveness in greater detail."} +{"text": "Humans are effective at dealing with noisy, probabilistic information in familiar settings.
One hallmark of this is Bayesian Cue Combination: combining multiple noisy estimates to increase precision beyond the best single estimate, taking into account their reliabilities. Here we show that adults also combine a novel audio cue to distance, akin to human echolocation, with a visual cue. Following two hours of training, subjects were more precise given both cues together versus the best single cue. This persisted when we changed the novel cue\u2019s auditory frequency. Reliability changes also led to a re-weighting of cues without feedback, showing that they learned something more flexible than a rote decision rule for specific stimuli. The main findings replicated with a vibrotactile cue. These results show that the mature sensory apparatus can learn to flexibly integrate new sensory skills. The findings are unexpected considering previous empirical results and current models of multisensory learning. Using the information from our senses to make the best decisions can be surprisingly complex. Consider the simple act of comparing the distances of two free supermarket checkouts. There are potentially a dozen different methods of estimating visual distance2. From a dozen different methods, we might get a dozen different estimates. Since we can only act out one decision, one singular estimate must be decided upon. To arrive at a single estimate, different techniques can be pursued, some of which might be more prone to fail. For example, picking one estimation method at random could be extremely inaccurate if an unreliable method were selected. Averaging all estimates together might also result in a poor decision if a few very bad estimates and a few very good estimates were given equal weight. Even if we somehow knew that one estimation method is most reliable and decided to use it just by itself, we would be throwing away a lot of potentially useful information. How do we best handle all of this sensory input?
One good solution is to consider not just the individual estimates, but also their error distributions2. If the different methods give approximately normally-distributed (Gaussian) error, the process for forming a single unified estimate with the lowest variance (uncertainty) is to take an average that is weighted by each estimate\u2019s precision (1/variance). This strategy is used explicitly by statisticians for meta-analysis and by engineers developing sensing and control systems3. Surprisingly, human adults\u2019 behaviour suggests that their perceptual systems also closely approximate this type of computation when making perceptual decisions using multiple familiar sensory inputs6. To achieve this, perceptual systems must represent the reliabilities of their estimates. The algorithm for reliability-weighted averaging can be broadly termed Bayes-Like Cue Combination since it is considered optimal under Bayesian statistics, and also because it is widely associated with modelling perception as approximating Bayesian inference, which provides a coherent explanation for diverse findings in the study of perception and cognition8. Whether such combination extends to newly learned cues takes us into the realm of sensory substitution and augmentation, where people are trained to use new techniques or devices to perceive their environment10. This typically involves learning a new sensory skill, defined here as a new ability to use information from an available sense.
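The precision-weighted averaging described above can be sketched in a few lines; the numbers below are illustrative, not data from the study:

```python
def combine(estimates, variances):
    """Precision-weighted (inverse-variance) average of independent,
    roughly Gaussian estimates, plus the variance of the fused estimate."""
    weights = [1.0 / v for v in variances]           # precision = 1/variance
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total   # never larger than the best single variance
    return fused, fused_var

# Two cues to the same distance: audio (10 m, variance 4) and vision (12 m, variance 1).
est, var = combine([10.0, 12.0], [4.0, 1.0])  # est = 11.6, var = 0.8
```

Note that the fused estimate sits closer to the more reliable cue, and its variance (0.8) is below the best single-cue variance (1.0).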
Previous reports have already shown broad ability to learn new sensory skills16, such as extracting distance information from reflected sound (part of human echolocation), but very little is known about whether and how these become integrated with existing sensory skills. Here we ask whether people\u2019s previously-documented Bayes-like cue combination abilities extend beyond the use of highly familiar sensory cues, such as visual and haptic cues to object size. Understanding adults\u2019 capacities for integrating new sensory skills with existing ones has important applications to expanding the human sensory repertoire. What if a new sensory skill could be combined with existing sensory skills, enhancing them instead of replacing them? If so, people with moderate vision loss could not only be taught echolocation13 or the use of a sensory substitution device16, but also have it combine with their remaining vision, gaining something that is better than either the augmentation or the remaining vision alone. A surgeon could learn new, auditory cues to their instrument\u2019s position or the material it is contacting, combining the audio and visual information to perform procedures more accurately. Applications such as these depend not just on learning a new sensory skill, which we already know people can do, but also on combining it with pre-existing sensory skills in an efficient Bayes-like manner. This view may explain why children under 10\u201312 years don\u2019t yet combine sensory cues in this way22, even within the same modality24, despite having a number of other skills in the domain of cross-modal sensory perception29. Can people actually do this? We are not aware of any empirical demonstration that they can.
In addition, current theoretical models of multisensory perceptual learning suggest that learning to combine a new sensory skill would be a truly extraordinary effort \u2013 essentially impossible given the limited resources of a routine laboratory experiment or a routine therapy method. Specifically, they suggest that it would take a full decade of daily cue-specific experience for Bayes-like combination to emerge18. On the other hand, there is evidence that probabilistic sensory-motor computations can adapt flexibly. For example, participants quickly learn new prior distributions and adapt their weighting of these distributions in a Bayes-like manner31. Thus, there might perhaps be more flexibility than suggested by the models reviewed above18. In other words, an alternative hypothesis is that humans can actually learn to combine novel sensory cues, in line with the other kinds of adaptation that have already been documented8. To examine this empirically, we developed a new experimental task to train people to use a new sensory cue to distance (Fig.\u00a0). Adult participants were trained with an echo-like audio cue13. The full echo signal was reduced to the echo delay component: a longer time between the initial sound and its \u201cecho\u201d means that the target is farther away. The delay was based on the actual speed of sound in air, approximated at 350\u2009m/s. We also showed participants how to use an independent noisy visual cue, a display of \u2018bubbles\u2019 in which a wider point in the display meant that the whale was more likely to be there. The crucial tests then examined how participants performed when both cues were presented simultaneously. The second criterion is flexibly re-weighting (changing reliance on) different cues when they change in reliability, even without feedback on trials with both cues32.
The presence of both these markers together suggests that the two cues are being averaged in a way that accounts for their reliabilities, gaining precision over the best possible use of either single cue alone, and rules out alternative models based on rote learning or weaker cue interactions. If na\u00efve adults show both markers, we will have evidence for the acquisition of Bayes-like cue combination with a new sensory skill. This would lead to revising current models of multisensory learning18 and would have important potential applications to augmenting the human senses. Over a series of five sessions lasting about one hour each, we first trained distance perception via the novel cue, then tested for two central markers of Bayes-like combination with vision. The first criterion is reducing uncertainty (variable error) below the best single estimate when both cues are available. In the terms of a recent standardized taxonomy33, we found evidence for \u201cnear-optimal\u201d cue combination. The five sessions were structured as follows: healthy adult participants were first asked to learn how to use an echo-like cue to distance (Sessions 1 and 2); then to also use a noisy visual cue to distance (Session 2); then to use both simultaneously (Session 3); and finally to generalize their learning to a new audio frequency (Session 4) and an altered reliability of the visual cue (Session 5). The Supplementary Video shows examples of key training sessions, but note that in the experiment participants were immersed in the environment using a stereoscopic VR headset, which had a wider field of view and accounted for minor head-movements, and with higher-quality sound. Another group in a control experiment also attempted to use the echo-like cues without training, but was unsuccessful, showing that the echo-like cue was indeed novel to participants.
In another follow-up experiment, an additional group was trained with a novel vibrotactile cue to distance and showed similar results, demonstrating that Bayes-like integration in the main experiment was not specific to auditory or echo-like stimuli. In short, we found consistent positive results in favour of Bayes-like cue combination. Performance in the untrained control experiment was worse than we would expect if participants had ignored the audio cue entirely and simply pointed to the center of the response line on each trial, demonstrating that people were unable to use the echo-like audio cue without training. After training, first with alternative-choice responses (Session 1) and then with continuous responses (Session 2), participants learned the novel audio cue and were able to use it accurately to judge distance. The correlation between the logarithm of the targets and the logarithm of the responses was above 0.80 for every participant (see SI S2 for individual graphs and Fig.\u00a0). The two key criteria of Bayes-like cue combination were fully met by all four main tests. Exact statistics are in Table\u00a0. The first test was in Session 3, where participants were given a mix of audio-only, visual-only, and simultaneous audio-visual trials, all with feedback. A sign-rank test was favoured over a paired t-test because the distribution of variable errors is heavily skewed. The result shows that there was a significant decrease in variable error in audio-visual trials compared to the best single cue for that same participant (Table\u00a0). Thus, participants gained precision by combining the two cues. A second test of this same criterion was in Session 4. Here, an untrained variation of the audio stimulus was introduced by altering the base frequency and removing feedback on all trials including this cue. The data show that even with these new, untrained audio stimuli, and without feedback, participants still successfully lowered variable error below the best single cue on audio-visual trials in Session 4 (Table\u00a0).
This indicates that combination generalized beyond the specific trained stimuli. The third test was in Session 5. Here the reliability of the visual stimulus was altered and feedback was removed on audio-visual trials. The visual cue was altered by increasing its variance for half of participants and decreasing its variance (shortening) for the other half. The variance of the visual cue was set to 75% (short) or 125% (long) of each participant\u2019s observed audio variance in Session 2. We expected this manipulation to decrease/increase the reliability of the visual cue (increase/decrease variable error) and thus change the critical relative reliability of the two cues. Confirming that our manipulation worked as intended, we found a significant relation between the visual cue variance and the variable error for both visual-only trials, r(22)\u2009=\u20090.86, p\u2009<\u20090.001, and for audio-visual trials, r(22)\u2009=\u20090.77, p\u2009<\u20090.001 (see Fig.\u00a0), with wider visual distributions producing larger variable errors. Participants again lowered variable error for the audio-visual trials versus the best single cue (Table\u00a0). This brings up the question of whether or not integration was optimal; given the distribution of errors we observed, the optimal prediction for the combined variance is the product of the single-cue variances divided by their sum, and we return to this comparison below. The re-weighting result further rules out any explanations that don\u2019t involve averaging the two cues together, such as processing the audio cue better with the visual cue present34, since this would lead to inferred weights near zero or one32. Under standard Bayesian theory, where participants are taking a precision-weighted average, we can predict that a cue\u2019s weight will go down when its relative reliability goes down and that it will go up when its reliability goes up. The simplest way to see this is to examine how close participants responded on average to the center of the visual cue (Fig.\u00a0).
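Under precision weighting, the predicted weight on the visual cue follows directly from the two variances. A sketch using the 75%/125% standard-deviation manipulation, with the audio variance normalized to 1 for illustration:

```python
def visual_weight(var_visual, var_audio):
    # Predicted weight on the visual cue under precision weighting:
    # w_V = (1/var_V) / (1/var_V + 1/var_A)
    return (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_audio)

var_audio = 1.0                                # audio variance (normalized)
w_short = visual_weight(0.75 ** 2, var_audio)  # visual SD at 75% of audio SD
w_long = visual_weight(1.25 ** 2, var_audio)   # visual SD at 125% of audio SD
assert w_short > 0.5 > w_long  # weight drops when the visual cue is less reliable
```

This is the qualitative prediction tested in Session 5: responses should sit closer to the visual cue's center when that cue is more reliable.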
When the visual cue was made less reliable, participants\u2019 mean responses moved away from its center, and when it was made more reliable they moved toward it, indicating reliability-based re-weighting. An examination of the biases (mean signed error) did not reveal anything of note for our hypotheses. Biases were all between +0.18 and \u22120.26 on a log scale (Fig.\u00a0), meaning that responses were close to unbiased on average. The SI provides additional analyses that do not change the main interpretation, but suggest that the main test results are robust to reasonable variations in exactly what is entered into the analysis: total error (squared distance between target and response) and average variable error for each participant. The main study with a novel audio cue was repeated with a novel vibrotactile cue to distance, omitting Session 4. This cue worked by having a mobile phone motor, strapped to the participant\u2019s wrist, vibrate with greater intensity if the target was further away. To summarize the results (Fig.\u00a0), both criteria were again met. Both key criteria for Bayes-like cue combination were fully met by all analyses. Both cues must have been used when they were both available, because otherwise we would not expect systematic improvements in precision when compared to the best single cue. Participants must have had some representation of their own uncertainty, because otherwise they would not have been able to re-weight the cues when their relative reliability changed and feedback was removed. Participants must have averaged the two cues, rather than have the presentation of one cue enhance the processing of the other cue, because otherwise the enhancing cue would receive no inferred weight. This was true across the four main planned analyses in the Results and also across several variations in the SI \u2013 including a replication using a novel vibrotactile cue instead of the auditory echo-like cue. This suggests that techniques like echolocation13 and devices translating information from one modality into another16 might not only hold promise for people with complete sensory loss, but could in principle be a useful aid for less severe levels of impairment \u2013 for example, moderate vision impairments, estimated to affect 214 million people alive today35.
Our results also suggest an optimistic outlook on sensory augmentation more broadly, allowing people to use novel signals efficiently during specialist tasks \u2013 for example, if novel auditory cues to position during brain surgery36 were combined with visual information. These results are a proof-of-principle that learning to combine a new sensory skill with a familiar sensory skill is well within the realm of human possibility. This point is crucial because it means that new sensory skills need not replace familiar sensory skills. Humans whose senses are enhanced with a new sensory skill stand to gain the Bayesian benefits of incorporating the new and standard sensory information, rather than having to choose between them. The present result is unexpected for two reasons. The first is that it is opposite to the result of a previous project37. This previous project had a conceptually similar goal as our current study, and to this effect exposed adults to the use of the feelSpace belt, a vibrotactile device that is useful for sensing how much the wearer has turned. This could, in principle, be combined with information provided by their inner ear for the same purpose. During that project, participants did not receive any feedback. The authors found that variable error with both cues was no lower, and in some cases higher, than performance with the best single cue. Behaviour in their study was best explained by \u2018subjective alternation\u2019, meaning that participants used whichever sensory skill they subjectively felt to be more accurate in isolation. In contrast, the present study found that variable error with both cues was lower, evidence for integration. It is not entirely clear if the difference in results here is due to the difference in sensory skills (the belt may be harder to use than the new sensory skills we introduced here), the difference in training, or other details of the tasks (Goeke and colleagues used a 2AFC approach throughout).
We would hypothesize that the feedback is critical, since it should be needed for participants to find the relative reliability of the two cues, which is crucial for Bayes-like (reliability-weighted) combination2. Regardless, the present study provides a basis for future studies teaching sensory augmentation skills to patients with moderate sensory loss, and examining whether benefits can move from the lab to everyday life. Second, the present result violates the predictions of current models of multisensory learning18. These models, in essence, question the need to posit a \u2018learning to learn\u2019 effect in development. Before these models were proposed, it was commonly assumed that children become more efficient and effective at learning perceptual algorithms like cue combination as they mature through middle childhood38 \u2013 in effect, \u2018learning to learn\u2019 as developmental change. In contrast, modelling18 suggested that \u2018learning to learn\u2019 effects were not needed to explain the data at the time of writing. Instead, they suggested that the data could be explained by the learning process just being very slow, requiring ten years of exposure to the specific cues involved. This parsimoniously explains why children under ten don\u2019t combine cues24, why children over ten (and adults) combine a wide variety of cues24, and why the adults exposed to the feelSpace belt for a few hours don\u2019t combine its output with their typical inner-ear sense of turning37 \u2013 since all these failures had been found in participants with less than ten years of exposure to the specific cues. However, since these models leave out a \u2018learning to learn\u2019 effect, they predict that a person at any age \u2013 even adults \u2013 would take ten years to learn to combine any new pair of cues (i.e. would be just as slow as children). We have clearly falsified this prediction here.
Thus, we instead suggest that people do \u2018learn to learn\u2019 to combine cues as they mature. One critical future question is exactly how the approach taken by adults here differs from the approach seen in development. However, our findings also come with a notable limitation. Our analyses suggest that our participants achieved less than the optimal variance reduction predicted for a perfect Bayesian integrator (Figs\u00a0; Table\u00a01)6; though see also39. One potential explanation is that perfect Bayesian integrators must choose cue weights exactly proportionate to cue reliabilities, but our participants may not have yet learned to judge cue reliabilities precisely. It is unknown how rapidly and accurately humans learn the reliabilities of novel cues. Related studies on the acquisition of novel perceptual priors show that reliabilities are not always learned accurately in limited training sessions \u2013 for example, in 1200 trials31, similar to here. Another potential explanation is that cue combination depends on inferring that two cues share a common cause40. This can limit combination with cues that are perceived as discrepant, and with unfamiliar cues in particular, this might affect combination. We predict, but cannot empirically confirm at this time, that near-optimal combination would be found after much more extensive training. In conclusion, Bayes-like cue combination can extend to new sensory skills. This suggests an optimistic outlook on augmenting the human senses, with potential applications not only to overcoming sensory loss, but also to providing people with completely new kinds of useful signals. To make this even more exciting, it can also happen rapidly.
Further study will be needed to examine the full extent of this ability and the circumstances under which it is shown. For the main study, 12 healthy adult participants were recruited using the Durham Psychology Participant Pool and posters around Durham University. Two participants were male, and the mean age was 24.9 years. A single pilot participant was also run, whose data were not included in the main analysis but instead used for a power analysis. That analysis found over 95% power for the primary tests. The virtual seascape was presented using an Oculus Rift headset. It contained a large flat blue sea, a \u2018pirate ship\u2019 with masts and other items, a virtual chair, and a friendly cartoon whale introduced with the name \u201cPatchy\u201d (see Fig.\u00a0). The Oculus Rift headset has a refresh rate of 90\u2009Hz, a resolution of 1080\u2009\u00d7\u20091200 for each eye, and a diagonal field of view of 110 degrees. Participants were encouraged to sit still and look straight ahead during trials but did not have their head position fixed. The Rift\u2019s tracking camera and internal accelerometer and gyroscope accounted for any head movements in order to render an immersive experience. Sound was generated and played using a MATLAB program with a bit depth of 24 and a sampling rate of 96\u2009kHz. A USB sound card was attached to a pair of AKG K271 MkII headphones with an impedance of 55 ohms. Participants used an Xbox One controller, using only the left joystick and the A button; pressing the other buttons did not have any effect on the experiment. The audio stimuli were created by first generating a 5\u2009ms sine wave, either 4000\u2009Hz or 2000\u2009Hz in frequency, with an amplitude of 1. Half of participants experienced the higher frequency in Sessions 1\u20133 and 5, switching to the lower in Session 4, and vice-versa (see below). The first half-period of the wave was scaled down by a factor of 0.6.
An exponential decay mask was created starting after 1.5 periods and ending at 5\u2009ms. The exponent was interpolated linearly between 0 and \u221210 over that period. This was all embedded in 1\u2009s of silence, with a 50\u2009ms delay before the sound appeared. An exact copy of the sound was added after an appropriate delay, calculated as the distance to the target divided by the speed of sound (approximated at 350\u2009m/s), then multiplied by two to account for the out-and-back travel of an echo. With a minimum distance of 10\u2009m, the two sounds (clicks) never overlapped. Real echoes contain more complex information, including reductions in amplitude with distance, but we chose to make delay the only relevant cue so that we could be certain which information all participants were using. Our stimuli also allowed us to use a range of distances at which real echoes are typically very faint, minimizing the scope for participants to have prior experience with them. Vision provides a wide variety of depth cues that interact heavily1 and require various techniques to isolate fully. However, we wanted to frame the task overall as a reasonably naturalistic \u2018hide and seek\u2019 game with a social agent. This is in conflict with these isolation techniques. We also wanted to have precise control over how useful this cue was so that we could match it well to each participant\u2019s learning with the echo-like cue. To achieve this, the visual cue had strong external noise and negligible internal noise. The visual cue was a fully 3D array of 256 \u2018bubbles\u2019 (translucent white spheres with a radius of 0.15\u2009m and 50% opacity) arranged to show a mirrored log-normal distribution perpendicular to the line on which the whale appeared (Fig.\u00a0). Participants were told that they were going to play a kind of \u2018hide and seek\u2019 game with Patchy the whale.
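The click-and-echo construction described above can be sketched as follows. This is a simplified reimplementation in Python/NumPy rather than the original MATLAB, and it assumes distances small enough that the echo falls within the 1 s window:

```python
import numpy as np

FS = 96_000     # sampling rate (Hz), as in the study
SPEED = 350.0   # approximated speed of sound (m/s)

def click(freq=4000.0, dur=0.005):
    """5 ms sine burst: first half-period scaled by 0.6, then an exponential
    decay mask running from 1.5 periods to the end (exponent 0 to -10)."""
    t = np.arange(int(dur * FS)) / FS
    w = np.sin(2 * np.pi * freq * t)
    w[t < 0.5 / freq] *= 0.6                 # soften the first half-period
    decaying = t >= 1.5 / freq
    expo = np.interp(t[decaying], [1.5 / freq, dur], [0.0, -10.0])
    w[decaying] *= np.exp(expo)
    return w

def stimulus(distance_m, freq=4000.0):
    """1 s of audio: click at 50 ms, plus an exact copy delayed by the
    round-trip travel time 2 * distance / 350."""
    out = np.zeros(FS)
    c = click(freq)
    onset = int(0.050 * FS)
    echo = onset + int(2 * distance_m / SPEED * FS)
    out[onset:onset + c.size] += c
    out[echo:echo + c.size] += c             # the delayed "echo" copy
    return out

# At the 10 m minimum distance the echo begins about 57 ms after the first
# click, so the two 5 ms clicks never overlap.
```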
On each trial, they received the visual and/or auditory cues and then used the joystick to move an arrow to try to point as close as they could to where Patchy was hiding. There was no time limit. Participants were not given any additional information while responding and were not allowed to hear the audio stimulus again. The visual stimulus remained static on the sea while they were responding. For trials with feedback, Patchy appeared at his true location with a speech bubble indicating the error as a percentage of his true distance away from the ship. The text was provided in addition to seeing the whale in case there was any confusion about which point on the whale, from its head to its tail, was the precise point that participants were attempting to localize. The full script of the different sessions is detailed in the SI and briefly explained here. Example trials can also be seen in the Supplementary Video. Sessions 1 and 2 taught the audio cue to the participants. Session 1 started with 50 two-alternative forced-choice (2AFC) trials, choosing between a point at the nearest vs farthest limit of the line. It then progressed to 100 trials of 3AFC (including the midpoint) and 150 trials of 5AFC (including the 25% and 75% points). The correct targets were as evenly distributed as possible. The whale surfaced after every trial to give accurate feedback. Session 2, and all further sessions, began with a short 40-trial warm-up of 2-, 3-, and 5-alternative-choice trials to remind participants how the audio cue works. In the second session, this proceeded to 250 audio-only trials with feedback, now with a continuous response, with the targets spaced evenly on a log scale. This staging of training, from 2AFC to continuous, was done to help scaffold the participants towards consistent performance.
Common to Sessions 2\u20134, the last 10 trials were used to introduce what would happen in the next session. Session 3 tested for cue combination by showing participants the visual cue only, the audio cue only, or both together. Feedback was given throughout. This was used to test the first criterion for Bayes-like combination, whether the variance of combined estimates was lower than the variance of single-cue estimates. Session 4 tested for the ability to generalize the echo-like cue to a new emission, specifically a click with a different frequency of the amplitude-modulated sine wave. The session was very similar to Session 3 except that all trials with the new sound were not given feedback. This further tested the first criterion. This manipulation was inspired by the finding that actual echolocation users have some meaningful variation in exactly what frequencies they emit from click to click. Session 5 tested for the ability to generalize to a new reliability level of the visual cue. It was much like Session 3 except that the log-normal distribution displayed by the visual cue changed in variance. Feedback was given on the visual-only trials to allow participants to see how the reliability had changed, but not on the bimodal trials, so that they had to infer the correct weights from their knowledge of each cue\u2019s reliability. This was used to test the second criterion, reliability-based reweighting of cues. Each of the last three sessions involved audio-only, visual-only, and audio-visual trials. These were generated in triplets, starting with the audio-visual trials. The centers of the visual distributions were spaced evenly on a log scale and the true target was drawn from the distribution they showed. The audio cue was generated to signal the target exactly. The standard deviation of the visual cue was either 75% or 125% of the error in audio-only responses during Session 2.
The 75% and 125% figures were chosen because they are close enough to 100% that we still expect a substantial cue combination effect, but far enough from each other that we expected to be able to measure changes in cue weighting. Half of participants were shown visual cues with the 75% standard deviation for Sessions 3 and 4, then 125% for Session 5, and the other half vice versa. A matching audio-only trial was made by simply removing the visual cue, and a visual-only trial by removing the audio cue. This means that when we compare the single-cue trials to the dual-cue trials within each triplet, nothing differed except the presence of the other cue. Order of presentation was randomized. These data were collected to be sure that the echolocation-like sensory skill was genuinely new and not familiar from everyday experience with reflected sound. Participants were told that we wanted to know if people without echolocation training had any intuitions about how echolocation might work, that the sounds they were going to hear were echoes that trained participants could use to estimate distance, and that their goal was to respond as close to Patchy as possible despite never seeing where he actually was. The stimuli were exactly the same as the continuous-response section of Session 2, but no feedback was given in any part of this experiment. There was only one session. The data from the second half of Session 1 (which was just training with the audio cue) and the continuous trials of Session 2 (further audio cue training) were transformed onto a log scale to account for the way that auditory delays become harder to perceive in linear terms as the delays become longer. For Sessions 3\u20135, targets and responses were again transformed onto a log scale for the same reason. For the first three main tests (out of four), the log error was parsed into constant error (the bias component) and variable error (the precision component).
The constant error was calculated separately for each participant, session, and trial-type combination. The variable error was calculated for each trial for the main analysis. The best single cue was determined at the level of participant and session, averaging the variable errors over all targets within each modality and selecting the lower of the two. The variable error for each bimodal trial was paired for analysis with the variable error from the best single-cue trial for that same target, same participant, and same session. This gives us the most power to detect possible changes in variable error with the addition of the worse cue (which is the only change across these pairs), and thus the best test of the first criterion. Simulations were also performed to be sure that this method does not inflate the type I error rate, suggesting that, if anything, the analysis method here is slightly conservative against finding differences in variable error. Statistical calculations were performed in MATLAB 2016a (MathWorks). Tests were all two-tailed and, except for the modelling analysis detailed below, were all sign-rank tests. A two-layer hierarchical model was used to estimate weights and to assess their pattern of change from Session 3 to Session 5 when the visual cue reliabilities changed. The lower layer was designed to let us estimate the weight given to each cue within each participant and session. The central equation here was: Yijk\u2009~\u2009Normal(Wij\u2009\u00d7\u2009Vijk\u2009+\u2009(1\u2009\u2212\u2009Wij)\u2009\u00d7\u2009Aijk, \u03c4ij), where i gives the participant, j gives the session, k gives the trial number, Y is the response, W is the weight given to the visual cue, A is the placement of the audio cue, V is the placement of the visual cue, and \u03c4 is the precision of responses. In other words, participants respond at a weighted average of the two cues plus some noise, with the weights as free parameters. R is the reweighting effect, M is the mean reweighting, and T is the precision of reweighting effects.
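A generative sketch of the two-layer model follows. The higher-layer form (a Session 5 weight equal to the Session 3 weight plus a reweighting effect R\u2009~\u2009Normal(M, T)) is our reading of the definitions above, stated as an assumption; WinBUGS-style precisions are converted to standard deviations here.

```python
import math
import random

def simulate_response(W, V, A, tau, rng):
    """Lower layer: respond at a weighted average of the visual (V) and audio (A)
    cues plus noise. tau is a precision, so the noise SD is 1/sqrt(tau)."""
    return W * V + (1.0 - W) * A + rng.gauss(0.0, 1.0 / math.sqrt(tau))

def session5_weight(W3, M, T, rng):
    """Higher layer (assumed form): the Session 5 weight is the Session 3 weight
    plus a reweighting effect R ~ Normal(M, 1/sqrt(T))."""
    R = rng.gauss(M, 1.0 / math.sqrt(T))
    return W3 + R
```

Under this sketch, M > 0 corresponds to weights shifting in the direction the reliability change predicts, while M = 0 corresponds to a static weighting, matching the interpretation given in the text.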
In other words, a positive value for M fits Bayesian predictions, but a value of zero for M fits the learning of a static weighting. The higher layer of the model was designed to relate each person\u2019s cue weights in Session 3 to their cue weights in Session 5, specifically to see if it fits a pattern that matches Bayesian predictions. The priors for each parameter were: Wi3\u2009~\u2009Uniform, \u03c4ij\u2009~\u2009Exponential(0.001), M\u2009~\u2009N, T\u2009~\u2009Exponential(0.001). These were fit in WinBUGS with 6 independent chains consisting of 25,000 used samples and 5,000 burn-in samples each. In the WinBUGS implementation, an exponential distribution is given its rate parameter, so the prior on T had a mean of 1,000 (extremely vague). Note that several possible assumptions are not \u2018built-in\u2019 to the model. First, it is capable of fitting a value of W arbitrarily close to 0 or 1. At W\u2009=\u20090 or W\u2009=\u20091, the model simplifies to a cue selection or weak interaction model. Second, there is no requirement that it fits a specific audio-visual variance, so it is free to fit a wider range of weights and variances than just the ones predicted by standard Bayesian reasoning. Third, the model is perfectly capable of finding a credible interval for the mean weight change M that includes zero. To check this, we purposefully switched participants 4\u20136 with participants 10\u201312, which fit M with an interval including zero. Fourth, there is no hierarchical structure to the weights in Session 3, so M captures mean changes rather than the mean of weights in Session 5. Supplementary Information: Methods Video, Main Dataset, Vibrotactile Dataset."} +{"text": "Experimental and clinical observations have highlighted the role of cytotoxic T cells in human tumor control. However, the parameters that control tumor cell sensitivity to T cell attack remain incompletely understood. 
To identify modulators of tumor cell sensitivity to T cell effector mechanisms, we performed a whole genome haploid screen in HAP1 cells. Selection of tumor cells by exposure to tumor-specific T cells identified components of the interferon-\u03b3 (IFN-\u03b3) receptor (IFNGR) signaling pathway, and tumor cell killing by cytotoxic T cells was shown to be in large part mediated by the pro-apoptotic effects of IFN-\u03b3. Notably, we identified schlafen 11 (SLFN11), a known modulator of DNA damage toxicity, as a regulator of tumor cell sensitivity to T cell-secreted IFN-\u03b3. SLFN11 does not influence IFNGR signaling, but couples IFNGR signaling to the induction of the DNA damage response (DDR) in a context-dependent fashion. In line with this role of SLFN11, loss of SLFN11 can reduce IFN-\u03b3-mediated toxicity. Collectively, our data indicate that SLFN11 can couple IFN-\u03b3 exposure of tumor cells to DDR and cellular apoptosis. Future work should reveal the mechanistic basis for the link between IFNGR signaling and the DNA damage response, and identify tumor cell types in which SLFN11 contributes to the anti-tumor activity of T cells. Immunotherapeutic approaches are emerging as a revolutionary class of cancer therapeutics with clinical benefits across a series of cancer types. Specifically, infusion of antibodies blocking the action of the T cell inhibitory molecules CTLA-4 and PD-1 has shown clinical benefit in, amongst others, melanoma, non-small cell lung cancer, and urothelial carcinoma. In this study, we performed a haploid genetic screen to identify tumor cell resistance mechanisms to T cell killing. Using this approach, we identified the direct cytotoxic effect of IFN-\u03b3 as a major effector mechanism of T cells in this system. Surprisingly, we identified SLFN11, an IFN-inducible gene previously shown to influence tumor cell sensitivity to DNA damaging agents (DDA), as a modulator of HAP1 sensitivity to T cell attack. 
In order to identify cancer cell-intrinsic mechanisms of resistance to T cell-mediated cytotoxicity, we set up a whole-genome loss-of-function haploid cell screen. To generate a system in which tumor cells can be exposed to defined T cell pressure, the HLA-A2-positive haploid human cell line HAP1 was modified to express the MART-1 (26\u20133, 27 A>L) epitope. Subsequently, a library of loss-of-function mutant cells was generated by transduction with a gene-trap vector, and this mutant cell library was exposed to T cells transduced with the MART-1-specific 1D3 TCR. Notably, survival after T cell selection was markedly lower than for untreated controls. Interestingly, IFN-\u03b3 exposure in the presence of caspase blockade resulted in a delay in expansion that lasted for approximately a week after IFN-\u03b3 removal, after which cells resumed expansion with a similar kinetic as untreated controls. HAP1 cells have been described previously and were cultured in medium supplemented with L-glutamine (Thermo Fisher Scientific); DU145 and WM2664 were maintained in DMEM supplemented with 10% FCS (Sigma-Aldrich) and 100 U ml\u22121 penicillin\u2013streptomycin (Thermo Fisher Scientific). Cells were tested for mycoplasma by PCR. To generate a MART-1 epitope-expressing HAP1 variant, the coding sequence of MLANA was cloned in front of the coding sequence of the Katushka protein with the two open reading frames linked by a P2A coding sequence, and was subcloned into pCDH-CMV-MCS-EF1-copGFP (System Bioscience) using XbaI\u2013SalI sites. Retroviral transduction of T cells to generate MART-1-specific T cells has been described previously (17). In brief, retroviral particles were produced by transfecting the pMP71-1D3 vector that encodes the MART-1-specific 1D3 TCR (17) into FLYRD18 packaging cells. Leucocytes were purified from healthy donor buffy coats (Sanquin) using Ficoll density gradients (Sigma-Aldrich), and T cells were activated and magnetically isolated using Human T-Activator CD3/CD28 Dynabeads (Thermo Fisher Scientific). 
48 h after activation, T cells were spin transduced on retronectin-coated plates (Takara). 10^8 HAP1 cells (>90% haploid) were exposed to 1D3-transduced T cells for 24 hours, at a ratio of 0.5 TCR-transduced T cells per HAP1 cell. Subsequently, T cells were removed by 3 washes with PBS and surviving HAP1 clones were expanded for 7 days. Integration sites were amplified and analyzed as described in (29). Procedures for the generation of gene-trap retrovirus and HAP1 mutagenesis have been described previously. Knockout cell lines were generated using the CRISPR\u2013Cas9 system. To generate bulk knockout HAP1 cells, cells were transduced with the pLentiCRISPR v.2 vector (Addgene 52961) encoding two independent sgRNAs targeting SLFN11. 48 h after transduction, cells were selected with puromycin. In order to generate knockout clones, cells were transfected with px459 vectors (Addgene 48139) encoding sgRNAs targeting either SLFN11 or IFNGR1. Following puromycin selection, single-cell clones were expanded and gene disruptions were validated by sequence analysis and western blot analysis (SLFN11), or by flow cytometry (IFNGR1). The sgRNA sequence acatgaaccctatcgtatat was used for generating IFNGR1 KO clones. The sgRNA sequences tgtcagctgagtctatctag (sgRNA SLFN11#1) and tacactggtctgctaagggg (sgRNA SLFN11#2) were used to generate bulk populations of SLFN11 KO cells. The sgRNA sequence acggaggctaagcgtcgcaa (sgRNA ctrl) served as a non-targeting control. Lentiviral shRNA vectors were retrieved from the arrayed TRC human genome-wide shRNA collection. Additional information is available at http://www.broadinstitute.org/rnai/public/clone/search using the TRCN number. The following lentiviral shRNA vectors were used: TRCN0000152057 (shSLFN11#1) and TRCN0000148990 (shSLFN11#2). For production of lentiviral particles, the indicated plasmids were co-transfected into HEK293T cells along with packaging plasmids. Two days after transduction, transduced cells were selected by exposure to puromycin. 
The vector for the excisable gene-trap experiment was custom synthesized by Thermo Fisher Scientific. The left homology arm spanned genomic region 35370581\u201335371183 and the right homology arm spanned genomic region 35369965\u201335370580 of chromosome 17 (assembly Dec. 2013, GRCh38/hg38). In between these homology arms, a loxP site, a splice acceptor sequence, a codon-optimized GFP-puromycin N-acetyltransferase fusion protein, an SV40 polyA site, and another loxP site were inserted, in this order. To introduce this DNA segment into the SLFN11 locus, >90% haploid HAP1 cells were transfected together with the vector px458 (Addgene 48138) expressing the sgRNA agttatctggtatagtcttt, designed such that the majority of the genomic sequence recognized by the sgRNA is located on one of the homology arms and the PAM on the other, in order to avoid Cas9 activity against the transfected plasmid or after integration of the DNA segment. For experiments involving IFN-\u03b3 or chemotherapeutics, 5,000 HAP1 cells/well or 500 DU145 or WM2664 cells/well were plated in 100 \u03bcl/well in 96-well plates 24 hours before addition of IFN-\u03b3 or chemotherapeutics. Compounds were added in 100 \u03bcl of medium to reach the indicated final concentration, and cell viability was assessed after either 2 days (chemotherapeutics or IFN-\u03b3-exposed HAP1) or 7 days (IFN-\u03b3-exposed DU145 and WM2664). To assess cell viability, supernatants were discarded and cells were incubated with 50 \u03bcl of 2.4 mM MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Thermo Fisher Scientific) for 30 min at 37\u00b0C. Subsequently, the supernatant was removed and cells were incubated in DMSO at room temperature for 15 min. Absorbance at 540 nm was used as a measure of cell viability. For colony forming assays, 25,000 cells/well (6-well plates) or 5,000 cells/well (24-well plates) were seeded 24 h before addition of T cells or IFN-\u03b3. 
Cells were exposed to T cells at the indicated ratio for 24 h, or to IFN-\u03b3 at the indicated concentration for the entire duration of the experiment. 7 days after T cell or IFN-\u03b3 exposure, cells were washed once with PBS, fixed in ice-cold methanol for 15 min, and stained with 0.05% (w/v) crystal violet solution in water. Cell lysates for western blot analysis were prepared by washing cells with PBS and subsequent lysis in RIPA buffer supplemented with freshly added protease inhibitor cocktail (Roche). After incubation on ice for 30\u2009min, cell lysates were centrifuged at 20,000g for 15 min at 4\u00b0C. Supernatants were subsequently processed using a Novex NuPAGE Gel Electrophoresis System, according to the manufacturer\u2019s instructions (Thermo Fisher Scientific). The following antibodies were used for western blot analysis: anti-HSP90: H114 (Santa Cruz); anti-TUBA1A: 2144s; anti-phosphorylated STAT1: 7649s; anti-\u03b3-H2AX: 2577s; anti-phosphorylated CHK1: 12302s; anti-phosphorylated ATM: 10H11.E12 (Millipore); anti-SLFN11: HPA023030 (Atlas). Total RNA was extracted using TRIzol reagent (Ambion life technologies) according to the manufacturer\u2019s instructions. Quality and quantity of total RNA were assessed on a 2100 Bioanalyzer using a Nano chip (Agilent). Total RNA samples having RIN values >8 were subjected to library generation. Strand-specific libraries were generated using the TruSeq Stranded mRNA sample preparation kit (Illumina Inc.) according to the manufacturer's instructions. Libraries were sequenced on a HiSeq2500 using V4 chemistry (Illumina Inc.), and reads (65 bp, single-end) were aligned against the human reference genome (hg38) using TopHat (version 2.1.0), allowing the spanning of exon-exon splice junctions. TopHat was supplied with a known set of gene models (Ensembl version 77). Samples were generated using a stranded library preparation protocol, so TopHat was guided to use the first strand as the library type. 
TopHat was run with bowtie 1 version 1.0 and the additional parameters \u201c--prefilter-multihits\u201d and \u201c--no-coverage-search\u201d. In order to count the number of reads per gene, a custom script (ItreeCount) was used. This script is based on the same concept as HTSeq-count and has comparable output. ItreeCount generates a list of the total number of uniquely mapped sequencing reads for each gene that is present in the provided Gene Transfer Format (GTF) file. S1 Fig. Parental cells, SLFN11 KO cells, two independent IFNGR1 KO clones, or the same IFNGR1 KO clones in which SLFN11 was subsequently disrupted were exposed to T cells at the indicated effector:target ratio. Viability was assessed by analysis of metabolic activity 7 days after T cell exposure. * p<0.05, ** p<0.01. (EPS) S2 Fig. (A-D) Transcript levels of IDO1 (A), HLA-A (B), PD-L1 (C) and IFIT3 (D) following exposure to 10 ng/ml of IFN-\u03b3 for the indicated times in parental, IFNGR1 KO, and SLFN11 KO cells. (EPS) S3 Fig. Parental cells, SLFN11 KO cells or SLFN11 KO cells in which the cDNA of SLFN11 was overexpressed with a lentiviral vector were exposed to different concentrations of IFN-\u03b3. 7 days after IFN-\u03b3 exposure, surviving cells were stained with crystal violet. (EPS)"} +{"text": "Members of the genus Pteris have their established role in the traditional herbal medicine system. In the pursuit to identify its biologically active constituents, the species Pteris cretica L. (P. cretica) was selected for bioassay-guided isolation. Two new maleates (F9 and CB18) were identified from the chloroform extract and the structures of the isolates were elucidated through their spectroscopic data. The putative targets that potentially interact with both of these isolates were identified through reverse docking by using the in silico tools PharmMapper and ReverseScreen3D. 
On the basis of the reverse docking results, both isolates were screened for their antioxidant, acetylcholinesterase (AChE) inhibition, \u03b1-glucosidase (GluE) inhibition and antibacterial activities. Both isolates depicted moderate potential in the selected assays. Furthermore, docking studies of both isolates were performed to investigate the binding mode with the respective targets, followed by molecular dynamics simulations and binding free energy calculations. Thereby, the current study embodies the poly-pharmacological potential of P. cretica. In the history of human civilization, plants have established a role in treating various ailments, thereby reducing morbidity and mortality. With the dawn of modern chromatographic and spectroscopic techniques, this rich source of bioactive compounds was and is being extensively studied in the pursuit of high-end medicines. The small-molecule natural products are secondary metabolites of plants, biosynthesized mainly as a defense mechanism against pathogenic microorganisms, insects and herbivores. The genus Pteris (Pteridaceae) comprises about 250 species which are widely distributed on all continents excluding Antarctica, with the highest diversity in tropical and subtropical areas, particularly including New Zealand, Australia, South Africa, Japan, America, China, and Europe. CB18: colorless amorphous solid; UV \u03bbmax (MeOH) nm (log \u03b5): 287 (2.2); IR \u03c5max (KBr) cm\u22121: 3335 (hydroxyl), 2898 br. (carboxylic acid hydroxyl), 1732 (carbonyl), 1655 (conjugated olefin); 1H-NMR \u03b4 (ppm): 7.72, 7.62, 4.22, 1.68, 1.43, 1.36, 1.34, 1.32, 0.94, 0.91; 13C-NMR \u03b4 (ppm): 169.4 (C-1), 132.4 (C-2), 129.9 (C-3), 169.4 (C-4), 69.1 (C-1\u2019), 40.2 (C-2\u2019), 31.6 (C-3\u2019), 30.1 (C-4\u2019), 24.0 (C-5\u2019), 14.4 (C-6\u2019), 24.9 (C-1\u2019\u2019), 11.4 (C-2\u2019\u2019); HR-FAB-MS m/z 229.1430 [M+H]+. F9 was likewise obtained as a colorless amorphous solid. For the compounds isolated from P. cretica, physicochemical properties were determined using Mcule (https://mcule.com/). Compound F9 showed an [M+H]+ peak at m/z 257.1747. Two downfield doublets in the 1H-NMR spectrum of F9 indicated connectivity with oxygen and with the methine multiplet at \u03b4 1.65 in COSY. The high-frequency region was characteristic of hydrocarbon chain methylenes within the range of \u03b4 1.29\u20131.39 (12H), and the presence of two terminal methyl groups was revealed by two triplets at \u03b4 0.90 and 0.86, which indicated the presence of a branch in the hydrocarbon chain. 13C-NMR (BB and DEPT) spectra showed the carbonyl carbon signal at \u03b4 167.8, while the olefinic carbons showed signals at \u03b4 130.9 and 128.9. The oxygenated methylene carbon was observed at \u03b4 68.1, whereas a methine carbon showed a signal at \u03b4 38.1. The methylene carbons of the hydrocarbon chain resonated in the range of \u03b4 23.0\u201331.9, with the two terminal methyl signals at \u03b4 14.1 and 11.0. In the HMBC spectrum of F9, the olefinic protons showed 2J and 3J correlations with the carbonyl at \u03b4 167.8, confirming the presence of an olefinic bond adjacent to and in conjugation with the carbonyls. The proton at \u03b4 4.19 (H-1\u2019) showed 3J correlations with the carbonyl carbon at \u03b4 167.8 (C-4), which confirms the presence of the ester moiety and the connectivity of C-1\u2019 with the ester oxygen. The proton at \u03b4 4.19 (H-1\u2019) also showed 3J correlations with \u03b4 30.3 (C-3\u2019) and 23.7 (C-1\u201d), and a 2J correlation at \u03b4 38.1 (C-2\u2019); subsequently H-2\u2019 (\u03b4 1.65) showed key 2J correlations at \u03b4 30.3 (C-3\u2019) and 23.7 (C-1\u201d) and a 3J correlation at \u03b4 11.0 (C-2\u201d), confirming that the branch is an ethyl group connected at C-2\u2019. The remaining HMBC correlations established F9 as 2-ethyloctyl maleate. D-type cyclins bind to CDK4 and to CDK6, and CDK\u2013cyclin D complexes regulate entry into the G1 phase; these kinases were among the targets predicted for CB18 and F9. 
This process promotes apoptotic cell death and hence estimated the onco-protective potential of P. cretica. As tabulated, these interactions make CB18 and F9 of significant importance with respect to the cancer-preventive potential of P. cretica. The process of growth and differentiation is under tight regulation, controlled by growth factors and their receptors. Abnormal growth and development may result as a consequence of slight variations in the regulation of expression of the molecules associated, causing malignant transformation. An increase in the expression level of growth factors, for instance transforming growth factor-\u03b1 (TGF-\u03b1), can lead to non-cancerous disease; one such example is psoriasis. Anticancer studies have demonstrated a significant role of growth factors regarding anticancer activity. The current reverse screening protocol revealed many anti-inflammatory targets, including tumor necrosis factor, estradiol 17-beta dehydrogenase-1, cyclooxygenase-2, interleukin-4 and VEGFR, that have the potential to bind with CB18 and F9, key bioactive compounds of P. cretica. Studies centered upon carcinogenesis have indicated inflammation to be of prime importance in the process, involving tumor initiation, promotion and progression. Although acute inflammation has been observed to play a significant role in the defense response, cancer has been found to be caused by chronic inflammation. AChE was also among the predicted targets of CB18 and F9, which estimated the neuroprotective potential of P. cretica. Alzheimer\u2019s disease is characterized by the development of plaques formed by the aggregation of beta-amyloid (A\u03b2) peptide, responsible for causing neuronal death. Due to experimental and financial constraints, some of the identified targets were selected to evaluate the potential of bioactive compounds from P. cretica against antibacterial and enzymatic inhibition activities. To further estimate the therapeutic potential, other identified targets were taken into account and investigated through molecular modeling studies. The antibacterial activities of the two isolated compounds were assessed against selected bacteria by applying MABA, and the results have been tabulated. F9 and CB18 did not demonstrate any activity against K. pneumoniae, S. flexneri and P. aeruginosa, whilst they showed less significant antibacterial activity against E. coli and S. aureus as compared to ampicillin at 1 mg/mL. The purified compounds were also subjected to testing of their antioxidant potential by the DPPH assay. The isolated compounds were further studied for their enzyme inhibition potential against AChE and GluE, with the results expressed as percentage inhibition. CB18 depicted slight AChE inhibition activity of 13.25% in comparison to the eserine standard, which showed 91.27% enzyme inhibition. These compounds were also tested for GluE inhibition activity. The compounds F9 and CB18 at a concentration of 0.5 mM displayed reasonable enzyme inhibition activity of 43.82% and 42.35%, respectively, whereas acarbose (standard drug) presented 92.23% inhibition. Molecular docking revealed fairly good binding affinities of CB18 and F9 against AChE and alpha-glucosidase, and also with the bacterial targets E. coli-DHPS and S. aureus-DHFR; these interactions were validated experimentally after the reverse docking predictions. To check the all-atom backbone stability of the proteins in the presence of the corresponding ligand, MD simulations were performed to follow fluctuations in the root-mean-square deviation (RMSD) of backbone atoms over a time period of 20 ns, analyzed together with the co-crystallized ligand of the corresponding PDB entry using 2D LigPlot and Chimera molecular surface representations. 
The overall binding interaction analysis with the bacterial enzymatic targets is illustrated in the corresponding figures. Molecular docking was employed to investigate the binding poses of CB18 and F9 complexed with E. coli-DHPS and S. aureus-DHFR; the complexes were quite stable and remained converged at ~0.75 \u00c5. This stability was evident from the stable conformation of the ligand inside the binding pocket during the entire simulation period, while the 2-ethylhexyl and 2-ethyloctyl chains formed hydrophobic interactions with the binding site residues, including His257. Likewise, with S. cerevisiae \u03b1-glucosidase, both compounds were found to have a similar binding pose deep inside the hydrophobic groove of both enzymes when superimposed on the reported co-crystallized complexes. To further explore the interactions between the selected proteins and CB18 and F9, the binding free energy was divided into molecular mechanics (\u0394EMM) and solvation energy (\u0394Gsol) contributions. \u0394EMM is further divided into van der Waals (\u0394Evdw) and electrostatic (\u0394Eele) energy contributions, while the solvation energy is divided into polar (\u0394Gp) and nonpolar (\u0394Gnp) contributions. A total of 1000 snapshots were extracted from the whole trajectory for binding free energy calculations. The predicted total binding affinities and contributing energies are tabulated in the corresponding table. For CB18 and F9 against \u03b1-GluE, the results were consistent with the reasonable enzyme inhibition activity of 43.82% (CB18) and 42.35% (F9) with respect to acarbose (standard drug) (92.23%) at a concentration of 0.5 mM. With the antibacterial targets, CB18 and F9 revealed lower \u0394Gtol with E. coli DHPS and S. aureus DHFR, which was consistent with the less significant antibacterial activity of both compounds against E. coli and S. aureus as compared to ampicillin (90% and 95%) at 1 mg/mL, respectively. 
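The MMGBSA bookkeeping just described can be written out as a minimal sketch; the numeric values in the example are illustrative, not the study's results.

```python
def mmgbsa_total(dE_vdw, dE_ele, dG_polar, dG_nonpolar):
    """dG_total = dE_MM + dG_sol, where dE_MM = dE_vdw + dE_ele and
    dG_sol = dG_p + dG_np (all energies in the same units, e.g. kcal/mol)."""
    dE_MM = dE_vdw + dE_ele          # molecular mechanics contribution
    dG_sol = dG_polar + dG_nonpolar  # solvation contribution
    return dE_MM + dG_sol

# Illustrative values only: a more negative total indicates tighter binding.
print(mmgbsa_total(-30.0, -5.0, 20.0, -3.0))  # prints -18.0
```

In practice each term would be averaged over the extracted trajectory snapshots (1000 here) before summing.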
Likewise, the estimated \u0394Gtol of CB18 and F9 with AChE also showed higher binding energy. The MMGBSA calculations, along with the molecular mechanics (\u0394EMM) and solvation energy (\u0394Gsol) contributions, are tabulated in the corresponding table. Through bioassay direction, two maleates were isolated from P. cretica and their structures were elucidated. Target potentials were evaluated by reverse docking, and the biological potentials of the isolates were measured for the best-ranked targets. The results show that the isolates are moderately active in all tested assays. Consequently, it can be said that the isolates have poly-pharmacological potential. Further study is required to fully explore this plant species."} +{"text": "The worldwide landscape of patient registries in the neuromuscular disease (NMD) field has significantly changed in the last 10\u00a0years, with the international TREAT-NMD network acting as a strong driver. At the same time, the European Medicines Agency and the large federations of rare disease patient organizations (POs), such as EURORDIS, contributed to a great cultural change, by promoting a paradigm shift from product-registries to patient-centred registries. In Italy, several NMD POs and Fondazione Telethon undertook the development of a TREAT-NMD linked patient registry in 2009, with the referring clinical network providing input and support to this initiative through the years. This article describes the outcome of this joint effort and shares the experience gained. The Italian NMD registry is based on an informatics technology platform, structured according to the most rigorous national and European legal requirements for the management of patient sensitive data. A user-friendly web interface allows direct participation by both patients and clinicians. 
The platform\u2019s design permits expansion to incorporate new modules and new registries, and is suitable for interoperability with other international efforts. When the Italian NMD Registry was initiated, an ad hoc legal entity (NMD Registry Association) was devised to manage the registries\u2019 data. Currently, several disease-specific databases are hosted on the platform. They collect molecular and clinical details of individuals affected by Duchenne or Becker muscular dystrophy, Charcot-Marie-Tooth disease, transthyretin type-familial amyloidotic polyneuropathy, muscle glycogen storage disorders, spinal and bulbar muscular atrophy, and spinal muscular atrophy. These disease-specific registries are at different stages of development, and the NMD Registry itself has gone through several implementation steps to fulfil different technical and governance needs. The new governance model is based on the agreement between the NMD Registry Association and the professional societies representing the Italian NMD clinical network. Overall, up to now the NMD Registry has collected data on more than 2000 individuals living with an NMD condition. The Italian NMD Registry is a flexible platform that manages several condition-specific databases and is suitable for upgrades. All stakeholders participate in its management, with clear roles and responsibilities. This governance model has been key to its success. In fact, it favored patient empowerment and their direct participation in research, while also engaging the expert clinicians of the Italian network in the collection of accurate clinical data according to the best clinical practices. The online version of this article (10.1186/s13023-018-0918-z) contains supplementary material, which is available to authorized users. 
Neuromuscular diseases (NMDs) include a wide range of rare pathological conditions, mainly of genetic origin, that affect muscles and motor or sensory neurons. Due to the high variability of genetic defects and clinical phenotypes, the development of therapeutic approaches has always been very challenging. This prompted the NMD scientific community and patient organizations (POs) to network in order to address together the main bottlenecks and translational challenges. In 2007, the TREAT-NMD project started as a European Commission (EC) funded "Network of excellence" with the purpose of supporting translational research in the NMD field and promoting trial readiness.

Fondazione Telethon (Telethon) was the Italian partner of the TREAT-NMD EC project and, in this role, it engaged the Italian neuromuscular POs to develop NMD patient registries. A legal entity called "Associazione del Registro dei pazienti con malattie neuromuscolari" (ADR) was established in 2009 to manage the activity and to guarantee proper data stewardship to all stakeholders.

Clinical researchers working in the Italian NMD centres involved in NMD diagnosis and care belong to the scientific associations for the study of muscle and peripheral nerve diseases. During the last couple of decades, AIM and ASNP were fundamental in building the Italian NMD clinical network and promoting many collaborative studies, which involved the majority of the Italian tertiary clinics. Telethon was also instrumental in this networking activity by providing support to several multicentre studies, including the development of clinical registries.

In 2015, AIM, ASNP and Telethon signed a Memorandum of Understanding to start a new initiative called "NMD Alliance", with the purpose of advancing NMD clinical research in Italy.
One of the main goals of the NMD Alliance is to consolidate and expand the Italian NMD Registry by incorporating new disease registries and mapping the activities of the clinical centres. To address this objective, in 2017 the NMD Alliance and ADR co-signed an "Agreement on Registry governance and data stewardship" (Agreement). The Agreement is an important step forward that codifies the relationship among all stakeholders who interact with the NMD Registry. In fact, clear and transparent management roles, as well as the inclusion of all stakeholders in the process, are recognised as key factors for the success of patient registries, as they favour the endorsement of the initiative and help address the many hurdles and complexities.

The interest in rare disease patient registries has grown enormously over the last years. All stakeholders acknowledge that well-structured databases are tools of great scientific and practical value to improve patient care.

In line with the TREAT-NMD priorities, the first two registries developed by ADR addressed DMD and SMA and were, consequently, linked to the TREAT-NMD Global Registry; they envisage the direct involvement of patients, or parents in the case of children, in the data collection. The presence of strong Italian POs focused on these diseases facilitated such implementation. Another DMD registry linked to TREAT-NMD is operating in Italy, managed by the Italian Parent Project PO; discussion is ongoing on how to merge the two DMD registries.

In addition, ADR acknowledged the need expressed by the clinicians of the Italian NMD network to develop clinician-driven registries for groups of disorders for which treatments were available or in the therapeutic pipeline, and offered the opportunity to host these databases on the NMD Registry platform.
Therefore, several clinician-driven registries were deployed during the last 5 years, namely for Charcot-Marie-Tooth disease (CMT), muscle glycogen storage disorders (MGSD), spinal and bulbar muscular atrophy (SBMA), and transthyretin type-familial amyloidotic polyneuropathy (TTR-FAP). The purpose and structure of these new registries fit the definition given by Gliklich and colleagues of a registry as "an organized system that uses observational study methods to collect uniform data to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes".

Overall, the Italian NMD Registry is an established initiative for data collection through a flexible technical tool; it is managed by a legal entity with transparent procedures and involves all key stakeholders. The purpose of this article is to describe its structure and unique governance features, highlight the scope of the disease-specific registries with reference to the international context, and provide a critical analysis of the experience gained so far.

The informatics technology (IT) platform is a software infrastructure that provides basic functionalities such as: disease-specific forms for personal and clinical data collection (connected via pseudonymised codes); a module for the registry configuration of the participating clinical centres and their staff; a module for operator authentication; safety and protection solutions aligned with local regulations; and storage and data recovery solutions. The IT service provider is Astir s.r.l.

The NMD Registry is customisable and its design allows development by modules. Some of these are implemented on different registries having common features, while others are registry-specific minimal datasets.
New forms, dedicated for instance to Patient Reported Outcomes (PRO), surveys or clinical study protocols, can be connected to the same patient's pseudonymisation code. The IT platform collects information on the clinical investigators involved in each registry. Each expert accesses the system with personal codes and according to his/her role. In addition to managing individual access to the databases, the IT system can also function as a secure intranet for data sharing, for instance to support diagnosis consultation, make standards-of-care guidelines available and, in general, promote secure communication among clinical centres and collect information about their NMD expertise and activities.

Secure access to the NMD Registry by individuals affected by the disease, or by clinicians that participate in data collection, occurs through a web portal. This website also provides information on the governance model and illustrates the main goals of the NMD Registry and of each specific disease registry. In particular, a web page specific to each registry illustrates the type of data that are collected, the composition of the Scientific Committee, and the operating procedures, with a link to the information leaflet and Patient Consent form. When interested individuals create their account, they must consent to the privacy policy and accept the online Patient Consent form. The registrants complete the process by signing the paper version of the Informed Consent and making it available to the Registry Curator or the referring clinical centre.

The NMD Registry has developed Standard Operating Procedures (SOPs), addressing (among others): how to create a new registry, roles and responsibilities of all actors, modalities of data registration, and data access. A charter regulates its activities.
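The pseudonymisation scheme described above — personal data and clinical forms kept apart and connected only through a code — can be sketched roughly as follows. All class, method and field names here are illustrative assumptions, not the platform's actual implementation.

```python
import secrets

class RegistryPlatform:
    """Minimal sketch: identifying data and clinical forms live in
    separate stores, linked only by a random pseudonymisation code."""

    def __init__(self):
        self._identities = {}  # code -> personal data (restricted access)
        self._clinical = {}    # code -> list of clinical/PRO forms

    def enroll(self, name, email):
        code = secrets.token_hex(16)  # unguessable pseudonymisation code
        self._identities[code] = {"name": name, "email": email}
        self._clinical[code] = []
        return code

    def attach_form(self, code, form):
        # New modules (PRO questionnaires, surveys, study protocols)
        # connect to the same code without touching identifying data.
        self._clinical[code].append(dict(form))

    def research_view(self):
        # Researchers see forms keyed by code only, never identities.
        return {code: list(forms) for code, forms in self._clinical.items()}
```

The point of the split is that any number of new form modules can be added against the code, while re-identification is possible only through the restricted identity store.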
The Italian NMD Registry is recorded in the Italian Data Protection Authority Register and is compliant with the Italian and European directives on data protection (EU GDPR 2016/679). The main governing body is the Executive Board, which includes a representative from each PO and Telethon; the President is always a PO representative, with a two-year mandate. When a new registry joins the NMD Registry, the referring Italian POs are invited to become members of the ADR and a representative joins the Executive Board. This has been the case for ACMT-Rete, and this process is now underway for the MGSD and TTR-FAP POs.

The ELAC is an independent board composed of five experts in ethical and legal issues, who advise the ADR on these matters. At the start of the initiative, the ELAC provided ADR with advice on the Italian and international legal requirements on data protection, validated the privacy design of the IT platform and the data management procedures and flowchart, and approved the Italian version of the Informed Consent for the DMD and SMA registries. The ELAC also provided the Scientific Coordinator of the CMT Registry with advice on the Clinical Protocol and the patient Informed Consent before the submission of the documents for approval to the Institutional Review Boards (IRBs) of the coordinator's and partners' centres. The CMT registry was the first clinician-reported database hosted on the platform, and the procedure developed for its implementation paved the way for the other disease-specific registries that followed. The ELAC also acted as an independent consulting body for the evaluation of each request for data access addressed directly to the Italian NMD Registry.
As described in more detail below, the ELAC model is now evolving into a new council called the "Access Board".

A major implementation of the NMD Registry concerned the inclusion of registries collecting clinician-reported items (Table , Fig. 2). ADR made the IT platform available to these registry-based observational studies, with the aim of favouring the long-term maintenance, update and re-use of data. Dedicated Steering Committees, including the Scientific Coordinator, the principal investigator of each partner centre and other experts, are responsible for the strategic development and management of each registry. The IRBs of all clinical centres involved in data collection approved the protocols and the Informed Consent forms.

The increased complexity of the NMD Registry platform prompted the ADR Executive Board to revise the governance process, in order to include the different stakeholders' roles and views. In the new management scheme, the Scientific Coordinators participate in the governance on behalf of their Steering Committees, and interact with the Executive Board (Fig. ).

The Agreement describes the roles of POs, Telethon, AIM and ASNP, and of the independent experts that participate in the Access Board. It also illustrates the general principles that regulate the different phases of a registry's life, namely: i) development of new registries; ii) management and use of data by participating centres; iii) access policies for third parties; iv) dissemination of results; v) sustainability.
Specific details of registry management, which can vary slightly from one disease-specific registry to another, are then addressed in the respective protocols and are reflected in the legal contracts for data management that are co-signed by ADR and each participating centre. The NMD Alliance may facilitate the establishment of a new registry; moreover, it supports the ADR in the definition of its strategic vision, in order to keep up with clinical research innovation in the NMD field.

When a request for consultation of the Italian DMD and SMA registries affiliated to TREAT-NMD is channelled through the international TREAT-NMD network, ADR relies on the approval process of the International TREAT-NMD Global Database Oversight Committee [24, 25].

Up to now, for any other request addressed specifically to the Italian NMD Registry, the ADR has sought ethical and legal advice from its ELAC. The increased complexity of the NMD Registry platform, however, meant that this model was no longer adequate, and a new consultative board named "Access Board", which includes the expertise needed to better address the scientific issues that may arise from enquiries, is now replacing the ELAC. The main remit of the Access Board is therefore to evaluate external requests, relying on the ethical, legal and scientific competences, and on the independence, of its members. To avoid conflicts of interest, the clinical experts that are members of the Access Board should not participate in any disease-specific registry hosted on the NMD Registry platform. In the case of requests by third parties, both the Scientific Coordinator of the targeted registry and the referring PO's representative participate in the discussion as informed persons, without voting rights.
This model is the outcome of an in-depth discussion between ADR and the NMD Alliance, in which the former ELAC members also took part by helping ADR and clinicians to define clear roles for all stakeholders and, ultimately, to design a rigorous and transparent process that safeguards all rights.

The establishment of the ADR legal entity also entailed the creation of a fund with equal contributions from all members. This fund was initially invested in the development of the IT structure according to the legal and technical requirements, the design of the web portal, and the creation of the first three disease-specific databases. The business plans of the registries developed more recently were supported by a grant awarded to the Scientific Coordinators and partners by Telethon within a specific competitive call dedicated to NMD clinical studies. The allocated budget covers the development of the database and part-time support for personnel engaged in data collection and processing, from data entry to validation. The clinical Host Institutions and the National Health System support the work carried out by clinicians to perform molecular diagnostic analyses and visit patients for accurate data collection. Telethon is now directly supporting costs for IT maintenance and general management activities. Alternative or complementary sustainability opportunities are being sought.

All databases require that affected individuals, or their parents/legal guardians in the case of minor or incapacitated subjects, start the registration process by creating their own account. The IT platform hosts several databases with patient- and/or clinician-reported forms at different levels of implementation (Table ). As specifically detailed below for each registry, all clinical protocols for data collection derive from international consensus or include standardised disease-specific datasets (Table ).
Patients are informed about the items that will be collected during the consent talk and the clinical visit. Patient-reported forms are written in Italian, and their contents match what is implemented in different languages by the other registries participating in the TREAT-NMD Global Registry [25].

The DMD and SMA registries collect essential clinical information with mandatory and optional items on clinical status. The datasets were initially defined under the TREAT-NMD project and have recently been revised to take into account updates in standards of care and to map novel treatments. The DMD database mainly collects information on young adults with DMD and Becker muscular dystrophy. The SMA database currently collects information on 560 patients with SMA types 1–4.

Being part of the TREAT-NMD Global Registry, the two databases have been consulted several times to share aggregated data for feasibility enquiries by pharmaceutical companies, inform suitable patients about upcoming trials, and contribute information for analyses performed by the TREAT-NMD coordination team [25].

This registry is part of the CMT International Database (CMT-ID), which is based on national independent registries and is linked to the Inherited Neuropathy Consortium (INC), a network of 22 centres from the US, UK, Italy, Belgium and Australia collecting data for a natural history study on CMT, with a centralised website based at the University of South Florida. The Italian CMT registry is a dual registry, as CMT comprises a heterogeneous group of disorders with a complex classification, and the reporting of clinical, electrophysiological and genetic information needs to be accomplished by the attending clinician (Fig. ).
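The dual-registry flow just described — the affected individual self-registers and chooses a reference centre, and clinician-reported forms are then accepted only from that centre — can be illustrated with a minimal sketch. All names and the access rule as coded here are assumptions for illustration, not the registry's actual software.

```python
class DualRegistry:
    """Sketch of a dual registry: patient self-registration plus
    clinician-reported records restricted to the chosen centre."""

    def __init__(self, centres):
        self.centres = set(centres)
        self.patients = {}  # patient_id -> chosen reference centre
        self.records = {}   # patient_id -> clinician-reported data

    def self_register(self, patient_id, centre):
        # The patient picks one of the participating reference centres.
        if centre not in self.centres:
            raise ValueError(f"unknown reference centre: {centre}")
        self.patients[patient_id] = centre

    def enter_clinical_data(self, clinician_centre, patient_id, record):
        # Clinician-reported forms are accepted only from the centre
        # the patient chose at registration.
        if self.patients.get(patient_id) != clinician_centre:
            raise PermissionError("clinician is not at the patient's centre")
        self.records[patient_id] = dict(record)
```

The design couples the patient's voluntary entry point with a single accountable clinical source for the validated data, which is the property that makes dual registries suitable for natural history studies.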
By June 2018, 721 CMT patients had registered and chosen one of the nine reference centres; up to now clinical information has been entered in the Registry for 584 of them.

Registered patients also had the chance to participate in a linked study that required them to fill in online self-reported questionnaires related to five important domains: disease course and complications during pregnancy; use, efficacy and tolerability of orthotics and assistive devices; outcome of surgery for skeletal deformities; safety of anaesthesia; and occurrence of sleep disorders. Control individuals also filled in the questionnaires for pregnancy, anaesthesia and sleep online, and were recruited among friends and unaffected relatives of CMT participants, matched as much as possible for age and sex. By the end of the study (30 November 2017), 306 patients and 66 relatives/friends had filled in the questionnaires. A large amount of data has been collected, which will be very important for improving knowledge and giving advice to patients about pregnancy, orthotics, surgery, anaesthesia, sleep and fatigue in CMT, as well as for defining needs, disease burden and standards of care in CMT. Data analysis is under way.

The registry capitalized on the CMT experience for its development. International experts reached an agreement on the items to be collected during the 210th ENMC workshop [37].

The purpose of this registry is to collect uniform clinical and laboratory data of patients with MGSD. The Italian Registry is aligned with the international criteria on the molecular and clinical classification of individuals with MGSD [42]. By June 2018, 260 individuals had been registered.
Preliminary results of the clinical observations were disseminated at Italian and European (European Academy of Neurology) congresses by the Coordinator and some of the other clinicians involved in the registry.

The TTR-FAP Registry has been planned by tertiary clinical centres for peripheral neuropathies, most of which are also members of the Italian CMT network and reference centres for acquired and genetic amyloidosis, according to the guidelines of the already active French TTR-FAP Network CORNAMYL [44]. This is a dual registry where patients register themselves and choose the centre where they want to be evaluated for the data collection. A collaborating academic centre was also included in the working group for its specific expertise on the psycho-social burden of patients with chronic diseases and their caregivers. The main aims are: to improve understanding of genotype-phenotype relationships and of differences in disease presentation, diagnosis and course, including inter- and intra-mutation variability; to identify the most sensitive outcome measures over a one-year period; to collect information about quality of life, standards of care, psychosocial burden and professional support; and to compare the Italian data with those of other European countries. The minimal dataset collects information on demography, comorbidities, current medication, participation in clinical trials, genetic data, family history, and symptoms at onset. The clinical data include neurological, autonomic and cardiologic evaluation, as well as neurophysiology, echocardiography, cardiac magnetic resonance imaging and scintigraphy. Asymptomatic carriers of TTR mutations are also enrolled in the registry. As of June 2018, 206 individuals had registered.
Four other Italian centres have recently asked to be involved in the registry and have obtained ethical approval from their respective IRBs.

Several factors usually delay the recognition of these rare disorders: 1) the rarity of the conditions themselves; 2) a lack of sufficient awareness of signs and symptoms; 3) inadequate knowledge about natural history and diagnostic aspects; 4) the rarity of specific treatments; 5) the scarcity of clinical follow-up procedures and of standards of care. A registry can provide a powerful database to generate evidence-based information about clinical and pathophysiological features, and also supplies information about current treatments. As a consequence, clinicians and patients expect that implementing disease-specific registries will put researchers in the best position to examine the natural history and the different genotypes and phenotypes of a larger number of these patients.

The Italian NMD Registry was developed to help fulfil these needs of the Italian NMD patient community, which has been a major stakeholder of Telethon since its start. The Registry is based on an IT platform hosting several NMD disease-specific databases, each of them having peculiarities in terms of modality of data collection (patient- or clinician-reported forms) and purposes. The management framework has been periodically revisited to fulfil upcoming needs. Overall, this initiative has been fruitful and relevant from several points of view.

The development of TREAT-NMD-affiliated registries in Italy occurred with the direct engagement of POs with a specific interest in muscular dystrophies and motor neuron diseases. Several POs not only expressed willingness to take direct responsibility for the management of these registries, but also decided to work in partnership in order to share tools and knowledge. Establishing and maintaining registries is a complex and intense effort.
The POs and Telethon were aware of this, and through the foundation of ADR they created a legal entity to make sure that all legal and ethical safeguards for patient data stewardship were in place. The safety, protection and wellbeing of participants have been key values since the NMD Registry's start, together with transparency of the process. The POs wished to build a structure that could guarantee further expansion to incorporate new NMD registries. Accordingly, they designed a simple and flexible governance model, in order to be inclusive and admit any Italian PO with a specific interest in the NMD registries.

In 2012, EURORDIS, NORD and CORD issued a joint declaration underlining the importance of disease registries and inviting collaborative approaches among all stakeholders. ADR defined its legal structure in 2009 based on similar principles. A privacy-by-design model was chosen from the beginning, and strong measures were put in place to ensure the security and privacy of data collection, maintenance and use. This governance experience allowed the POs to increase their empowerment and gain a significant understanding of the highest ethical and legal codes for data management. Moreover, it created a virtuous circle for the empowerment of new POs and their participation in data stewardship and decision-making processes. Several other examples in the NMD field show this positive trend.

The Italian NMD Registry largely satisfies the principles for data management and stewardship envisaged by EURORDIS, NORD and CORD. Although the most recent registries started out as observational studies (Fig. ), they were conceived for the long-term maintenance, update and re-use of data.

ADR and the NMD Alliance acknowledged their common interest in promoting the collection and management of clinical data of people with NMD. The signed Agreement is a milestone achievement for several reasons.
First, it reports in a transparent way the roles and contributions of each stakeholder involved in the NMD Registry and describes the general principles and policies that regulate data management and access. Making these aspects clear and well documented is an essential step to increase the chances of success of a registry.

Moreover, the Agreement states that the proponent of a new registry, in order for it to be deployed on the NMD Registry platform, has to provide a rigorous protocol approved by the pertinent IRBs, a business plan and adequate funds. Alignment of the data collection with the international FAIR principles is also strongly recommended and is part of the evaluation process. These rules are particularly reassuring for the ADR, as they imply proper peer-review validation of the scientific content and objectives. This was already the case for the current registries, in particular the most recent ones, which are based on clinical study protocols that underwent a rigorous peer-reviewed competition process.

Another important asset of the Agreement is its inclusiveness, a concept that applies to all stakeholders. It concerns any individual with an NMD condition, who can sign up to provide their data to the registry regardless of membership in any specific PO, as well as any PO with an interest in a specific NMD that wants to participate in the governance. It also refers to any clinical centre that is willing to contribute to data collection, provided the scientific quality criteria are satisfied. All involved stakeholders consider this a very important principle and, regardless of who is the first driver of the development of a new registry, this approach will guarantee broad participation of all people who may have a specific interest.

Finally, with this document the relevant scientific societies endorse the NMD Registry platform, thus promoting its use within the clinical network and contributing to the dissemination of its governance principles and SOPs.
The NMD Registry platform is expected to become the reference structure for future registry initiatives, engaging the younger generation of clinicians in data collection through an educational process that has its foundation in the FAIR concepts and ontology-based classification [57–59].

Registries based on patients' voluntary expression of interest have several advantages, as they are powerful tools to directly involve individuals with a specific condition in the process and to grasp their daily life experience. On the other hand, the epidemiological validity of these registries may be limited, as they cannot reasonably map nationwide all existing individuals with that specific condition.

Condition-specific registries may have different designs and collect patient-reported or clinician-reported forms. Both registration modes have strengths and limitations that must be considered to make sure that the chosen model generates valid knowledge and fulfils all stakeholders' expectations. Potential risks that may affect the quality of both types of registry concern population bias, data quality and validation, and loss to follow-up.

A bias in population selection may derive from differing exposure of patients and families to communication about the registry's existence and scope. Those who are regularly followed by a centre of expertise or are members of the POs of interest are in general better informed. Moreover, given that POs typically direct families and patients towards their referring experts where possible, it is reasonable to expect that these patients are also followed according to the most updated standards of care, thus affecting the information on adopted standards of care that can be inferred from the registry.

Collecting medical information directly from patients has a number of advantages.
First, it captures the direct experience of individuals with a specific condition, offers a fresh perspective on quality of life and needs, and engages the family in a responsible way, favouring empowerment and participation. Importantly, it can help reduce the burden on clinicians regarding the workload of data entry. This type of collection has proved its usefulness for designing clinical trials and has helped patient recruitment into trials [25, 58].

Dual registries have ambitious aims, which include defining a well-characterised and properly diagnosed patient population, not only for recruitment in clinical trials, but also for natural history studies, validation of outcome measures, creation of biorepositories, epidemiological analyses (if feasible) and surveys. They guarantee both the direct input of patients into the registry and the collection of accurate and validated clinical data that are recorded in a systematic and scientifically sound manner and can be rapidly analysed. Of course, to fulfil the above aims, data need to be validated and of high quality. In our experience, the organisation of training sessions on the functional scales used to record patients' performance, the implementation of standards of care by the involved centres, and the regular monitoring and solicitation of data entry by the registry's Curator all contribute to populating the databases with data of good quality. These registries may have limitations too, which depend on the nature of the registry itself and, for instance, on the rarity of the diseases and their clinical and molecular heterogeneity. There may also be enrolment biases, for several reasons. One, again, relates to the fact that registration is on a voluntary basis.
Participation in the registry is often stimulated by POs or Patient Advocacy Groups linked to clinical study networks and clinicians; therefore, patients are more likely to register if they are already engaged in the patients' community and have the instruments and contacts to become aware of the registry. In other cases, registration depends on clinical and genetic characteristics: the most severely affected patients are more likely to attend tertiary centres, which in turn are more often involved in registries. Clinicians may also be more prone to encourage registration of patients with genetically defined disorders than of those without a precise genetic diagnosis. Upcoming treatments specific to certain disease subgroups may also be a reason for asymmetrical recruitment. Therefore, how closely the registry population corresponds to the real world, and how valid the epidemiological inferences are, remain open questions. Addressing and overcoming such selection biases is rather difficult. One way is to try to enlarge the registration pathways as much as possible, by clearly defining and disseminating the objectives through the involvement of clinicians in promoting the registry, dedicating time to carefully explaining the Informed Consent document, obtaining agreement and endorsement from the scientific societies, and developing the Patient Advocacy Groups' role and the potential of the web.
When the registry concerns ultra-rare conditions, as in the case of the MGSD, registration supported by the clinicians themselves may be more effective than waiting for spontaneous registration by affected individuals or their families. Mapping individuals living with an ultra-rare NMD condition may be even more effective when this occurs directly through an international registry [61, 62].

Theoretically, registries should define: how many patients they plan to recruit, based on a sample size estimation in relation to the aims; how frequently patients should be re-assessed; for how long the registry should last; and which statistical analyses will be applied. A registry's steering committee should include statisticians to deal with such aspects. Plans should be in place for recruiting a reasonable number of patients in a predetermined timeframe for the purposes of the registry, and a statistical analysis plan should be defined right at the registry's design stage.

The aims of the registries currently on the NMD platform are mainly descriptive in nature and did not require preliminary sample size estimations per se. However, the Steering Committees that planned each database assessed their feasibility by carefully evaluating the number of individuals that the national registry could likely map, depending on the disease rarity and the patient cohorts of each participating centre. The design of the registries was therefore based on this knowledge as well, in addition to the international consensus on standardised items.

A very critical point that affects the timeframe and the population size with which a registry can reach statistically meaningful conclusions is its funding. This is particularly true when data collection puts a high burden on clinicians, because the data items are numerous and/or the patient population is relatively large.
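The kind of sample size estimation mentioned above for a registry's statistical plan can be illustrated with the textbook formula for estimating a proportion, n = z²·p·(1−p)/e², optionally corrected for a small eligible population. This is a generic statistical illustration, not a calculation taken from any of the registries described here.

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Sample size needed to estimate a prevalence/proportion p
    within +/- margin at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def finite_population_correction(n, population):
    """Adjust n downward when the eligible population (e.g. all
    national patients with an ultra-rare NMD) is itself small."""
    return math.ceil(n / (1 + (n - 1) / population))
```

For example, estimating a complication prevalence assumed near 50% to within ±5 percentage points would need about 385 participants, dropping to about 235 if the whole eligible national population were only 600 patients — the kind of feasibility check a Steering Committee can make against the expected cohort sizes of the participating centres.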
In some cases, registries started as observational studies, with the declared intention of long-term maintenance, update and reuse of data. However, when the initial funding support ends it becomes more and more difficult for clinicians to keep pace with data recording and updates. To overcome this problem, clinicians are encouraged to include funding requests for structural sustainability and dedicated personnel in any grant application that entails using the registry as a tool.

Finally, a current limitation of the NMD registry platform is that, being owned by a private entity, it cannot merge data with public healthcare databases. This is clearly an obstacle when one considers the clinical burden of managing medical data and the missed opportunity to share individual biomedical information, which could already be available and complementary to the clinical data collected by the registry. All legally feasible efforts should be made to verify the possibility of matching data with other systems, such as the regional or national healthcare registries.

A burning issue raised by EUCERD and the European Medicines Agency (EMA) in 2011 was the need to organise registries by disease and no longer by product. The clinician-driven registries of the Italian NMD Registry platform have not yet reached a sufficient number of registered subjects, or do not have enough follow-up, to be meaningful for purposes of immediate interest to industry (Fig. ).

The experience of the NMD Registry highlights the importance of the partnership between patient groups and clinicians in facilitating clinical research. It expands the definition of patient-driven registries: patients contribute not only to data collection but also to governance.
On the one hand, the data stewardship model favoured PO empowerment and direct participation in decision-making processes, and provided the clinical network with a tool to collect prospective data in a safe and inclusive manner. On the other hand, it engaged expert clinicians and promoted training on data management and data sharing concepts according to best clinical practices.

Overall, the databases collect information on more than 2000 individuals with rare or ultra-rare NMDs. The NMD registry platform, however, has the potential to grow and accommodate other registries, not only based on other disease conditions or geographical extension, but also having different research purposes, including capturing real-world and post-marketing efficacy data on new treatments that become available for these patient populations.

Daniela Lauro (Famiglie SMA); Renato Pocaterra (AISLA); Marco Rasconi ; Federico Tiberio (ACMT Rete); Salvatore del Vecchio (ASAMSI).

ADR - ELAC: Francesco Maria Avato ; Sara Casati (Milan); Alessandro Martini (Padua); Deborah Mascalzoni (Bolzano); Livio Tronconi (Pavia).

NMD Alliance Executive Board: Anna Ambrosini, Lucia Monaco, and Davide Pareyson ; Guido Cavaletti ; Maurizio Moggio (Milan); Tiziana Mongini (Turin); Angelo Schenone (Genoa); Gabriele Siciliano (Pisa); Maria Letizia Solinas (Livorno).

DMD and SMA Registries: Maria Carmela Pera ; Eugenio Maria Mercuri (Rome).

CMT Registry: Davide Pareyson and Daniela Calabrese ; Isabella Moroni, Emanuela Pagliano, Chiara Pisciotta, Giuseppe Piscosquito, and Stefano Carlo Previtali (Milan); Franco Gemignani and Isabella Allegri (Parma); Gian Maria Fabrizi and Tiziana Cavallaro (Verona); Angelo Schenone, Marina Grandis, and Chiara Gemelli (Genoa); Luca Padua and Costanza Pazzaglia (Rome); Lucio Santoro and Fiore Manganelli (Naples); Aldo Quattrone and Paola Valentino (Catanzaro); Giuseppe Vita and Anna Mazzeo (Messina).

MGSD Registry: Antonio Toscano ; Corrado Angelini (Venice); Bruno Bembi (Udine);
Andrea Martinuzzi (Conegliano); Paola Tonin (Verona); Massimiliano Filosto (Brescia); Lorenzo Maggi (Milan); Tiziana Mongini (Turin); Claudio Bruno (Genoa); Maria Alice Donati (Florence); Gabriele Siciliano (Pisa); Serenella Servidei (Rome).

SBMA Registry: Davide Pareyson and Daniela Calabrese ; Caterina Mariotti, Cinzia Gellera, and Silvia Fenu (Milan); Gianni Sorarù and Giorgia Querin (Padua); Mario Sabatelli and Amelia Conte (Rome).

TTR-FAP Registry: Giuseppe Vita ; Gian Maria Fabrizi and Tiziana Cavallaro (Verona); Davide Pareyson and Silvia Fenu (Milan); Giampaolo Merlini and Laura Obici (Pavia); Alessandro Mauro (Turin); Marina Grandis and Chiara Gemelli (Genoa); Claudio Rapezzi (Bologna); Mario Sabatelli (Rome); Lucio Santoro and Fiore Manganelli (Naples); Lorenza Magliano (Caserta); Costanza Barcellona (Messina).

Additional file 1: Working Groups – affiliations (DOCX 16 kb)"}
{"text": "The outcomes of species interactions, such as those between predators and prey, increasingly depend on environmental conditions that are modified by human activities. Light is among the most fundamental environmental parameters, and humans have dramatically altered natural light regimes across much of the globe through the addition of artificial light at night (ALAN). The consequences for species interactions, communities and ecosystems are just beginning to be understood. Here we present findings from a replicated field experiment that simulated over-the-water lighting in the littoral zone of a small lake. We evaluated responses by emergent aquatic insects, terrestrial invertebrate communities, and riparian predators (tetragnathid spiders). On average, ALAN plots had 51% more spiders than control plots that were not illuminated. Mean individual spider body mass was greater in ALAN plots relative to controls, an effect that was strongly sex-dependent; mean male body mass was 34% greater in ALAN plots while female body mass was 176% greater.
The average number of prey items captured in spider webs was 139% greater on ALAN mesocosms, an effect attributed to emergent aquatic insects. Non-metric multidimensional scaling and a multiple response permutation procedure revealed significantly different invertebrate communities captured in pan traps positioned in ALAN plots and controls. Control plots had taxonomic-diversity values (as H’) that were 58% greater than ALAN plots, and communities that were 83% more even. We attribute these differences to the aquatic family Caenidae, which was the dominant family across both light treatments but was 818% more abundant in ALAN plots. Our findings show that when ALAN is located in close proximity to freshwater it can concentrate fluxes of emergent aquatic insects, and that terrestrial predators in the littoral zone can compound this effect and intercept resource flows, preventing them from entering the terrestrial realm.

Seemingly separate ecosystems are in fact connected by flows of resources that can substantially contribute to the structure and functioning of the recipient ecosystems. Most aquatic insects are biphasic, relying on terrestrial ecosystems for reproduction and completion of their lifecycles. As a result, populations emerge from aquatic ecosystems, often in discrete pulses, and these emergence events are exploited by riparian predators including spiders [5] and bats. Among riparian spiders, members of the family Tetragnathidae, the long-jawed orb-weavers, are especially adept at subsidizing their diets with aquatic prey [13–16].

The outcomes of species interactions, such as those between predators and prey, depend on environmental conditions that are increasingly modified by human activities. Light is among the most fundamental environmental parameters in natural ecosystems.
Natural patterns of light and darkness provide organisms with cues for migration, among other functions. Human population centers are often located adjacent to water, making aquatic and associated ecosystems, such as riparian zones, vulnerable to ALAN impacts. Insects may be particularly sensitive because ALAN interferes with insect navigation [26].

Here we present findings from a replicated field experiment that simulated over-the-water lighting in the littoral zone of a small lake. We hypothesized that plots exposed to experimentally added ALAN would harbor different communities of invertebrates relative to control plots that did not receive additional ALAN. Furthermore, we hypothesized that aquatic invertebrates would aggregate at sites of ALAN more than their terrestrial counterparts. Lastly, we hypothesized that predatory spiders would also aggregate at sites of ALAN, and would benefit through the accrual of greater individual body mass than spiders in plots without added ALAN. The results support these hypotheses and demonstrate how ALAN near shorelines can impact the exchange of resources from aquatic to riparian ecosystems in the form of emergent insects, and the predators that consume them.

Initially we designed an experiment to evaluate ALAN-mediated top-down effects of predatory fish (Lepomis cyanellus) on littoral communities and ecosystems. Experimental light treatments consisted of 2 levels of light and 2 levels of fish. Each treatment had 5 replicates in a fully crossed 2x2 factorial design. These treatments were assigned randomly to mesocosms after installation via coin tosses. However, we were unable to maintain populations of fish in the mesocosms due to suspected predation by birds, and unknown factors; no fish effects were observed for any of the many community or ecosystem parameters we examined.

The study was conducted in a small lake surrounded by mixed deciduous forest and isolated from direct ALAN exposure.
There is no artificial lighting immediately around the lake, but the 6,000-acre Highland State Recreation Area is surrounded by low-density human development, and skyglow is present. To distinguish the two treatments we refer to them as “ALAN” and “non-ALAN”. “Non-ALAN” should not be taken to mean that the plots were completely without exposure to ALAN, given the skyglow present at the site, but rather that they did not intentionally receive additional illumination as part of our experiment. The lake has an extensive littoral zone, 0.30 m to 0.45 m deep, where our experimental mesocosms were located. Permission to conduct this experiment was provided by a permit issued by the State of Michigan Department of Natural Resources (PRD-SU-2016-015) to Elizabeth Parkinson.

Mesocosms were installed at the study site during the span of 3 days in June 2016. Before installation, local macrophytes and the benthos were removed by hand in each 1 m² area. The twenty experimental mesocosms each consisted of 4 wooden stakes supporting a 150 mm extruded plastic mesh structure measuring 1 m². All mesocosms were submerged with approximately 15 cm of mesh protruding above the water’s surface. Mesocosms were installed by hammering wooden stakes into the lakebed and attaching the mesh components with cable ties. After mesocosm installation, two 3.8-liter buckets of local macrophytes and sediment were returned to each mesocosm to cover the mesh bottom.

ALAN was added 1 week after mesocosm installation and originated from solar LED warm-white “path-lights” with an output of 10 lumens each, powered by a single, solar-charged battery. Each ALAN mesocosm was illuminated independently by 4 lights; one path-light was attached to each corner of the mesocosm at approximately 30 cm above the surface of the water. Some horizontal shielding was present on path-lights, but LED bulbs were mostly exposed on all sides, leading to some horizontal light emission. Light levels were measured at the onset of the study using the application “Light Meter” version 1.1 for iPhone 6. The light meter was positioned over the center of each mesocosm, approximately 15 cm above the water’s surface. Readings were taken at 11:30 pm on June 16th, 2016, with an 86% full moon. Light readings were taken 3 times at each mesocosm and averaged. Lux measurements were significantly greater in ALAN versus non-ALAN treatments; ALAN treatments averaged 27.3 lx and non-ALAN averaged 22.7 lx.

Emergent aquatic insect and terrestrial invertebrate inputs into each mesocosm were measured overnight on August 3rd using pan traps. Pan traps consisted of a plastic container (58.4 cm x 41.3 cm x 15.2 cm) with approximately 2 cm of a water/dish soap mixture covering the bottom of the pan. One pan trap was placed in each mesocosm and left floating overnight. On August 4th pan traps were removed and the contents were poured through a sieve to capture invertebrates, which were then stored in 70% ethanol and transported to the Aquatic Ecology Lab at Oakland University. In the lab, invertebrates were enumerated and identified to the lowest practicable taxonomic unit, usually family [40].

Tetragnathid spiders belonging to the genus Tetragnatha (probably T. elongata) were enumerated on each mesocosm biweekly on three dates: July 8, July 23, and August 4, 2016. On the final date three spiders were collected from each of the mesocosms, stored in vials, and transported to the lab on ice. In the lab, spiders were classified by sex, dried in an oven at 40° C for 48 h before being cooled in a desiccator, and weighed to the nearest 0.1 mg. Insects that were captured in spider webs built on mesocosms were enumerated three times, biweekly, on the same dates that spiders were enumerated.

Response variables with multiple sampling dates (spider abundance and web-captured prey abundance) were analyzed using repeated-measures analysis of variance (rmANOVA) in package ez in program R, with post hoc analysis of individual sampling dates in package car. Non-metric multidimensional scaling (NMDS) was performed in package Vegan.

Mean spider abundance per mesocosm was greater in ALAN plots relative to non-ALAN plots. Totals of 431 and 214 spiders were enumerated on ALAN and non-ALAN plots respectively, with means (±SD) of 14.37 ± 7.83 spiders in ALAN plots and 7.13 ± 3.29 spiders in non-ALAN plots. On average, across the three sampling dates, ALAN plots had 51% more spiders than plots that were not illuminated with ALAN. The ALAN effect was significant (F(1,16) = 33.34, p < 0.001). Post hoc tests to compare ALAN plots and controls on each individual sampling date revealed that on the first sampling date there was no ALAN effect. By the second and third dates, however, the ALAN effect was significant and ALAN plots had 172% and 104% more spiders relative to controls.

Spider body mass was greater in ALAN treatments (F(1,56) = 23.15, p < 0.001), and a significant interaction was found between light treatment and spider sex. Post hoc tests were performed to compare ALAN effects on male and female spider body mass separately. Although body mass was greater in ALAN treatments for both sexes, the magnitude of the effect was substantially greater for females; mean male body mass was 34% greater in ALAN mesocosms relative to controls while female body mass was 176% greater. The number of prey items captured in spider webs was greater on ALAN mesocosms (F(1,18) = 15.13, p < 0.001).
On the first sampling date the number of prey items in webs was 172% greater in ALAN mesocosms than in control mesocosms, dropping to an 18% difference in abundance on the second sampling date. On the third sampling date, webs built on ALAN mesocosms had 195% greater abundance of prey items than control mesocosms. Across all sampling dates, the number of prey items captured in spider webs was on average 139% greater in webs on ALAN mesocosms than in those built on non-ALAN mesocosms.

Invertebrate abundance, richness, diversity, and evenness in pan traps were all significantly influenced by ALAN. One thousand nine hundred and ninety individual invertebrates representing thirty-six emergent insect and terrestrial invertebrate families were collected from pan traps. Invertebrate abundance was on average 456% greater in ALAN plots than in non-ALAN plots. Control plots had taxonomic-diversity values (as H’) that were 58% greater than ALAN plots (F(1,18) = 23.6, P < 0.001), with mean values (±SD) of 1.398 (±0.35) and 0.879 (±0.30) respectively, and communities that were 83% more even.

Invertebrate-community composition from pan-trap samples was significantly different in ALAN mesocosms than in controls, as shown by NMDS. The number of terrestrial invertebrates did not differ between treatments (F(1,18) = 2.24, P = 0.152), but the number of emergent insects was significantly greater in ALAN treatments.

Resource exchanges are an intrinsic facet of riparian and aquatic ecosystems that contribute to their biodiversity and productivity [44]. We found a strong increase in insect abundance in ALAN plots, with these communities dominated by emergent aquatic insects, a finding that is consistent with those from other studies [46, 47]. Two conceptual models, the captivity effect and the vacuum-cleaner effect, are relevant here. In addition to seeing an increased abundance of emergent insects at ALAN plots, we also observed concomitant increases in the abundance and body mass of tetragnathid spiders.
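The diversity comparison above is reported as Shannon’s H’ together with an evenness measure. The original analyses were run in R (vegan); as an illustrative sketch only, the two metrics can be computed from family-level pan-trap counts as follows. The family names and counts below are hypothetical, not data from the study.

```python
import math
from collections import Counter

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over non-zero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the number of taxa present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(s) if s > 1 else 0.0

# Hypothetical pan-trap sample dominated by one family (as Caenidae was in ALAN plots):
sample = Counter({"Caenidae": 120, "Chironomidae": 15, "Formicidae": 5})
h_prime = shannon_diversity(sample.values())
evenness = pielou_evenness(sample.values())
```

A community strongly dominated by a single family, as in the hypothetical sample above, yields lower H’ and lower evenness than a balanced community with the same richness, which is the pattern the study attributes to Caenidae in ALAN plots.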
These spiders can track prey via a numerical response [15]. The greater abundance and body mass of tetragnathids in ALAN plots, combined with the much greater abundance of emergent insects there, illustrates the effect ALAN can have on the movement of subsidies from aquatic to terrestrial ecosystems. As spiders move into artificially lit areas and away from those of natural darkness, they may alter the flow of aquatic resources, with potential cascading consequences in coupled aquatic-riparian environments. Manfrin et al. (2017) observed that subsidy exchange became more limited outside of an artificially lit area, and aquatic-derived energy shifts have been observed at an ecosystem level in coupled stream/riparian systems.

Like most areas in North America, our study site experiences skyglow from surrounding human development, and our ALAN additions were therefore not made in conditions of natural darkness. Rather, our experiment augmented existing ALAN, simulating the common anthropogenic practice of adding light to areas that are experiencing increasing development. Had our experiment been done under darker conditions, our experimental ALAN additions might have been more conspicuous and attracted a greater number of insects. Alternatively, the long-term presence of skyglow could have altered light-sensitive members of the invertebrate community, and thereby our results. A productive area of future research is to evaluate the iterative addition of ALAN to human landscapes to simulate future light regimes as development and light intensity increase across the globe.

Our study made use of a replicated field experiment, and the design was constrained by the limited littoral area with appropriate abiotic conditions, such as water depth. Light from ALAN treatments illuminated adjacent plots, and this “cross-talk”, along with skyglow and moonlight, can explain the fairly high light measurements taken from non-ALAN treatments.
Viewed from a distance, the aggregation of our ALAN plots in a single littoral area of the lake may have made them more conspicuous to invertebrates, and may have resulted in a stronger vacuum-cleaner effect than if the mesocosms had been more dispersed. Despite these challenges, the experimentally added ALAN was well within the range of ALAN found in the literature, both in the context of experimentally added ALAN and that observed in human-inhabited landscapes.

Responses by predators to ALAN have been inconsistent across studies. Most have documented increases in predator abundance or body mass in response to ALAN [36, 37].

We found that the effect of ALAN on spider size was strongly sex-dependent, with females responding substantially more than males. Tetragnathids are sexually dimorphic and females are larger than males; in light of this, it isn’t surprising that females accrued more absolute biomass than males. However, even when expressing results on a percent-mass basis, female body mass still increased by 176% in ALAN plots relative to controls, while male body mass increased by only 34%. While the mechanisms behind this finding are not clear, it suggests that females are more strongly food-limited than males and can rapidly increase in mass in the presence of elevated food availability. A recent study found that gravid tetragnathid spiders increased prey capture and consumption compared to non-egg-laying females of the same species.

A large body of research has shown that resource subsidies are an integral dimension of ecosystem functioning and productivity (e.g., [61, 62])."}
{"text": "In Ethiopia, uncomplicated severe acute malnutrition is managed through the outpatient therapeutic program at the health post level.
This brings the services for the management of Severe Acute Malnutrition closer to the community by making them available at decentralized treatment points within the primary health care settings. So far, evidence on the treatment outcomes of the program is limited.

The main aim of this study was to determine the magnitude of treatment outcomes of severe acute malnutrition and associated factors among under-five children at outpatient therapeutic feeding units in Gubalafto Wereda, Ethiopia, 2019.

This was a retrospective cohort study conducted on 600 children who had been managed for Severe Acute Malnutrition (SAM) under the Outpatient Therapeutic Program (OTP) in Gubalafto Wereda from April to May 2019. The children were selected using systematic random sampling from 9 health posts. A structured, pre-tested, and adapted questionnaire was used to collect the data. The data were entered using EPI-data version 4.2 and exported to SPSS version 24.0 for analysis. Bivariate and multivariate regression was carried out to determine the association between dependent and independent variables.

A total of 600 records of children with a diagnosis of severe acute malnutrition were reviewed. Of these cases, the recovery rate was found to be 65%. The death, default, and medical transfer rates were 2.0%, 16.0%, and 17.0% respectively. Immunized children had 6.85 times higher odds of recovery than children who were not immunized (AOR = 6.85, 95% CI 3.68–12.76). The likelihood of recovery was 3.78 times higher among children with a new admission than among those with a re-admission (AOR = 3.78, 95% CI 1.77–8.07). Likewise, children provided with amoxicillin were 3.38 times more likely to recover than their counterparts (AOR = 3.38, 95% CI 1.61–7.08).
SAM treatment in OTP is beneficial because of its local accessibility: most severe cases reach care early, before developing complications, and as a result fatalities are reduced.

The recovery and medical transfer rates were lower than the Sphere standard. Presence of cough, presence of diarrhea, admission category, provision of amoxicillin, and immunization status were factors identified as significantly associated with the treatment outcome of severe acute malnutrition. The impact on the recovery rates of children treated using the OTP service indicates the potential benefits, for child mortality and recovery, of increasing the capacity of such services across a target region. Timely intervention is another benefit of a more local service like OTP. Building the capacity of OTP service providers and regular monitoring of service provision based on the management protocol are recommended.

Severe Acute Malnutrition (SAM) is defined by very low weight for height (below −3 z-scores of the median WHO growth standards), by a mid-upper arm circumference (MUAC) < 115 mm, by visible severe wasting, or by the presence of oedema of both feet [3].

Globally, 52 million children under five years of age were affected by acute malnutrition, of whom 17 million were severely acutely malnourished. In developing countries, 2% of children suffer from severe acute malnutrition.

Formerly, in many countries, treatment of SAM had been restricted to facility-based approaches, greatly limiting its coverage and impact. In Ethiopia, the program has now expanded to every health center and health post of the country. OTP serves the management of SAM in children aged 6–59 months [20].

Inpatient therapeutic feeding units are faced with many challenges in handling cases of severe acute malnutrition.
Some of the challenges include limited inpatient capacity; a lack of skilled staff in the hospitals to treat the large numbers needing care; the centralized nature of hospitals, which promotes late presentation and high treatment costs; and an increased risk of cross-infection for immune-suppressed children such as children with SAM.

Besides the prevention strategies, the improved management of SAM is an integral part of the World Health Resolution on Infant and Young Child Nutrition to improve child survival. Children with SAM have profoundly disturbed physiology and metabolism; intensive re-feeding should not be initiated before metabolic and electrolyte imbalances are corrected.

Although malnutrition is one of the major public health problems in Ethiopia, limited information exists regarding the outcome of SAM treatment provided through the outpatient decentralized approach. Even though SAM patients are being managed at OTP units, there is scarce evidence on the efficacy of these ongoing SAM treatments in the study area. This study, therefore, aimed to assess the treatment outcome of SAM and associated factors among under-five children in the outpatient therapeutic unit.

The study was conducted in Gubalafto Wereda from April 2016 to May 2019 GC. Gubalafto is one of the districts in the North Wollo Zone of the Amhara Region, 521 km from Addis Ababa, in North-Central Ethiopia. The total estimated population of the Woreda is 176,492, of whom 90,187 are male and 86,305 female. The number of children under five years in the Wereda is 23,904. The Woreda has a total of 34 kebeles, of which 30 are rural and 4 urban.
According to the Gubalafto Wereda administration health office report, there are 8 health centers and 34 health posts.

A retrospective cohort study was conducted using document review in outpatient therapeutic feeding units of the selected health posts of North Wollo Zone, Amhara region, Ethiopia, 2019. The source population comprised all children under five years with a diagnosis of severe acute malnutrition at outpatient therapeutic feeding units in North Wollo Zone health posts; the study population comprised under-five children at the outpatient therapeutic feeding units of the selected health posts, and the study units were the medical records of the sampled children. Records of children under five years at outpatient therapeutic feeding units were included; transferred cases and records with incomplete information were excluded.

A multistage sampling technique was employed to select the study subjects. From the total of 34 kebeles, 7 rural and 2 urban kebeles (02 and 04) were selected by simple random sampling. The samples were distributed proportionally using probability proportional to size (PPS) allocation. Participants in each kebele were selected using a systematic sampling technique after calculating the sampling interval (K) for each kebele.

The sampling frame is the list of under-five children’s SAM charts at the OTP. It was identified by checking all 9 selected kebeles (the study population) for charts of children from birth up to 59 months old, and those charts were coded to prepare a sampling frame for each kebele. Children with incomplete charts were considered non-respondents. Finally, the OTP record card of each child was selected using systematic random sampling.

For the first specific objective, the magnitude of treatment outcome, the sample size was determined using a single population proportion formula.
A study done in the OTP of the Wolaita zone showed a recovery rate of 64.9%, and two different studies in the Amhara region showed recovery rates of 78% and 58.4%. For this calculation we used the Wolaita proportion, since the two Amhara studies were conducted in inpatient therapeutic units.

A sample size of 354 was determined using the single population proportion formula, considering 95% confidence, a 5% margin of error, and the 64.9% recovery rate from Wolaita, where:

n = the sample size derived from the estimation formula
Zα/2 = the value of z at a confidence level of 95% = 1.96
P = the recovery rate of children who had been managed for SAM = 0.649 (64.9%)
d = the margin of error to be tolerated, taken as 5%

Considering a 10% contingency for missing data, the final sample size for determining the treatment outcome was 390.

For the second objective, assessing risk factors for the treatment outcome of SAM among under-five children in the outpatient therapeutic unit, the sample size was determined using a double population proportion formula, considering studies done in Tigray and Wolaita with recovery rates of 61.78% and 64.9% respectively, where:

P1 = the percentage of exposed with the outcome
P2 = the percentage of non-exposed with the outcome
Zα/2 = taking a CI of 95%
Zβ = 80% power
r = the ratio of non-exposed to exposed, 1:1

The dependent variable was the treatment outcome (recovered or not recovered). Independent variables included socio-demographic variables, type of malnutrition, medical co-morbidities, and admission category.

Treatment outcome: grouped as recovered and not recovered from SAM management at outpatient therapeutic feeding units in this study.
Recovered: children with severe acute malnutrition declared cured or recovered in the logbook of the outpatient therapeutic feeding unit.
Not recovered: children discharged from outpatient therapeutic feeding units with an outcome other than recovery.
Severe acute malnutrition (SAM): weight-for-height less than minus 3 standard deviations below the median WHO growth standards, or weight-for-height below 70% of the median NCHS reference, or the presence of nutritional edema.
Outpatient management: management of SAM in children without medical complications who pass the appetite test.
Defaulter: a SAM patient who becomes continuously absent from the outpatient therapeutic feeding program.
Non-responder: a SAM patient admitted to inpatient care who does not reach the discharge criteria after 40 days in the inpatient program.
Died: a SAM patient recorded in the OTP as having died.
Type of malnutrition: grouped as marasmus (non-edematous), kwashiorkor (edematous), marasmus-kwashiorkor (both edema and severe wasting), and visible severe wasting.

The data collectors and the supervisors were trained for two days on techniques of data collection and on the importance of disclosing the purposes of the study to the study participants before the start of data collection.
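The single-proportion sample size calculation and the systematic-sampling interval (K) described above can be sketched as follows. This is an illustrative sketch, not the authors’ actual computation; the helper names are our own, and small differences from the reported figures (354 and 390) can arise from rounding conventions.

```python
import math

def sample_size_single_proportion(p, d=0.05, z=1.96, contingency=0.10):
    """Single population proportion formula n = z^2 * p * (1 - p) / d^2,
    inflated by a contingency fraction for expected non-response/missing data."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n * (1 + contingency))

def sampling_interval(population_size, sample_size):
    """Systematic-sampling interval K = N / n, rounded down."""
    return population_size // sample_size

# Recovery rate p = 0.649 (Wolaita study), 95% confidence, 5% margin of error:
n_final = sample_size_single_proportion(0.649)
```

With the K value in hand, every K-th chart in each kebele’s coded sampling frame would be selected after a random start, which is the systematic random sampling procedure the study describes.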
To assure the quality of the data, the investigators closely supervised the data collection procedure daily. Review was done in the field to check the completeness of questionnaires, and corrections were made in the field.

Each questionnaire and data sheet was checked before data entry. The data were entered into EPI-data version 4.2 on a daily basis, and missing data were identified. Incorrectly filled questionnaires, or those missing major content, were not included in the study. A pretest was conducted at the Woldia health center (which is not in the study area) using 5% of the total sample size, not included in the actual sample, and necessary adjustments were made to the tool.

The data were entered and analyzed using EPI-data version 4.2 and exported to SPSS version 24.0 for analysis. Bivariate and multivariate regression was carried out to determine the association between dependent and independent variables.

Ethical approval was obtained from the research ethics committee of the Woldia University College of Health Science. An official letter of permission was obtained from Woldia University College of Health Science and submitted to the respective administrative bodies of Gubalafto Woreda; permission from these administrative bodies was also given. All records were fully anonymized before we accessed them, and the ethics committee waived the requirement for informed consent. Confidentiality was ensured throughout the research process. All incomplete charts were considered non-responses.

The study included 600 eligible children who had been managed for SAM under the OTP from April to May (2016–2019); 50.8% of the children enrolled in the study were male. Children beyond two years of age, 179 (29.8%), were underrepresented in the OTP compared with the middle age groups, 313 (52.2%). About 18% of the children were in the youngest age group.
The median weight at admission was 7.7 kg (IQR: 6.2–10.5 kg) overall; for marasmic, marasmic-kwashiorkor, and kwashiorkor patients it was 7.1 kg (IQR: 5.8–9.2 kg), 8.4 kg (IQR: 7.1–9.8 kg), and 9.97 kg (IQR: 8.15–11.60 kg), respectively. Concerning vaccination history, 444 (74.0%) were fully vaccinated, 83 (13.8%) were partially vaccinated, 40 (6.7%) had unknown vaccination status, and 33 (5.5%) were not vaccinated for age. The majority (90.2%) of the children were newly admitted. The length of stay for severely malnourished children admitted to the OTP program ranged from 2 months up to 4 months. Regarding treatment outcome, 65% recovered and 2% died. Regarding the type of malnutrition at admission, 451 (75.2%) of the children admitted to the OTP had the non-edematous (marasmic) type of severe acute malnutrition, and 149 (24.8%) had kwashiorkor. Forty percent (40%) of the children admitted to the OTP had a fever. In some cases there were multiple comorbidities at admission, including diarrhea (12.8%), HIV positivity 12 (2%), TB 12 (2%), cough (19.2%), and vomiting (25.3%). In addition, 26.4% of the children had edema. Cases of severe acute malnutrition admitted to the OTP were managed following the Federal Ministry of Health of Ethiopia guideline protocol for the treatment of severe acute malnutrition. Of the 600 children whose medication records were available for review, the most prescribed medications were oral (PO) antibiotics, chiefly amoxicillin (90%), followed by vitamin A supplementation (74.0%). Of the total, 44.6% of the children were dewormed with albendazole or mebendazole, and 52.2% received folic acid. In the bivariate logistic regression analysis, the presence of fever, diarrhea, cough, vomiting, and edema, PO antibiotics, admission category, immunization status, and the weight of the child were associated with the treatment outcome of SAM.
Those variables with a p-value less than or equal to 0.25 were entered into a multivariable logistic regression model to adjust for possible confounders. In the multivariable logistic regression, the presence of cough, the presence of diarrhea, PO antibiotics, admission category, and the immunization status of the child were significantly associated with the treatment outcome of SAM. Accordingly, the odds of recovery from SAM were lower among children presenting with cough than among those without cough. The odds of recovery were higher among children who took PO antibiotics than among those who did not. Admission category was also associated with recovery: newly admitted children were 3.78 times more likely to recover than readmitted children. SAM children presenting with diarrhea were less likely to recover than those without diarrhea. Concerning immunization status, the odds of recovery under SAM management were higher among fully vaccinated children than among those who had not been vaccinated. The study mainly aimed to report the treatment outcomes of SAM in the OTP and the factors associated with them among children treated for SAM. Accordingly, the overall proportions of cured, dead, defaulter, and medical transfer were 65.0%, 2.0%, 16.0%, and 17.0%, respectively. The results revealed that 65% of the SAM children admitted to the OTP recovered, indicating a recovery rate below the Sphere standard acceptable range. The overall defaulter rate in this study was in line with a study in Tigray (17.5%). Children provided with amoxicillin were 3.38 times more likely to recover than their counterparts. This result was consistent with the finding from North Ethiopia.
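The two-stage screening described above (bivariate tests, then carrying variables with p ≤ 0.25 into the multivariable model) can be illustrated with a minimal sketch. The actual analysis was done in SPSS with logistic regression; here, purely for illustration, bivariate association is assessed with a Pearson chi-square test on 2x2 tables, and the counts are invented for the example (they are not the study's data).

```python
import math

def chi2_p(table):
    """Pearson chi-square p-value (1 df) for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 1.0  # a zero margin carries no information
    x2 = n * (a * d - b * c) ** 2 / denom
    # Survival function of chi-square with 1 df: P(X > x2) = erfc(sqrt(x2/2))
    return math.erfc(math.sqrt(x2 / 2))

def screen(tables, alpha=0.25):
    """Keep predictors whose bivariate p-value is <= alpha (the 0.25 cut-off)."""
    return [name for name, t in tables.items() if chi2_p(t) <= alpha]

# Hypothetical tables: rows = with/without the factor, cols = recovered/not
tables = {
    "cough": [[80, 60], [310, 150]],   # clearly associated in this toy data
    "sex":   [[200, 105], [190, 105]], # essentially no association
}
kept = screen(tables)  # variables entered into the multivariable model
```

The permissive 0.25 threshold at the bivariate stage deliberately over-includes candidates, so that a variable confounded at the bivariate level still gets a chance to show (or lose) its effect once the other predictors are adjusted for in the multivariable model.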
Regarding factors associated with SAM treatment outcome (recovered/not recovered), this study revealed the presence of cough, the presence of diarrhea, PO antibiotics, admission category, and the immunization status of the child as predictors of SAM treatment outcome. Children presenting with complications such as cough and diarrhea were 53% less likely to recover than their counterparts. This might be because cough and diarrhea are usually associated with infections, which increase nutritional consumption and divert nutrients that would otherwise support nutritional recovery. Children who were fully or partially vaccinated had an approximately 7 times better recovery rate than those who had not been vaccinated, which is broadly consistent with a study in Felege Hiwot hospital, Bahir Dar, which found 4.4 times. Recovery rates in the study area are below the cut-off points of the minimum standards set for humanitarian and disaster prevention (the Sphere standards) and are low compared to similar studies conducted in different parts of Ethiopia, but the death rate was lower than the international standard. The presence of comorbidities such as cough was a statistically significant factor hindering the recovery of malnourished children. On the other hand, vaccination and the administration of PO antibiotics were positive indicators of recovery. Attaching a follow-up chart to the individual folder and monitoring the child's progress with the chart also contribute greatly to improving the recovery of children with severe acute malnutrition in the TFU. Thus, health care providers should pay particular attention to SAM cases with comorbidities such as cough and to readmitted cases, which require strict follow-up according to the protocol, and should increase the use of the SAM management follow-up chart for all SAM patients. It is also recommended to give community-based health education and counseling to mothers to enhance child immunization.
7 May 2020. PONE-D-19-29446. Treatment Outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Amara Regional State, North Ethiopia, 2019 G.C. PLOS ONE. Dear Mr Beletew, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. You can see that a large revision is necessary and the suggestions must be responded to in full. A complete English revision of the entire manuscript is necessary after your changes have been made. The referees have pointed out some of the issues, but there are even more. Please use a professional native English translator. We would appreciate receiving your revised manuscript by Jun 21 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version.
This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Ricardo Q. Gurgel, PhD, Academic Editor, PLOS ONE. Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services. If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free. Upon resubmission, please provide the following: a clean copy of the edited manuscript (uploaded as the new *manuscript* file). 3. We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed: https://jhpn.biomedcentral.com/articles/10.1186/s41043-017-0083-3 https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0065840 In your revision ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed. 4. In the ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study. Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information. 5. Thank you for including your ethics statement: "Ethical approval was obtained from the research ethics review board of the WU faculty of health science." a. Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study. b.
Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the "Ethics Statement" field of the submission form (via "Edit Submission"). For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research 6. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Any potentially identifying patient information must be fully anonymized. Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. We will update your Data Availability statement to reflect the information you provide in your cover letter. 7. Thank you for stating the following financial disclosure: 'N/A'. At this time, please address the following queries: Please clarify the sources of funding for your study. List the grants or organizations that supported your study, including funding received from your institution. State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If any authors received a salary from any of your funders, please state which authors and which funders. If you did not receive any funding for this study, please state: "The authors received no specific funding for this work." Please include your amended statements within your cover letter; we will change the online submission form on your behalf. 8. Please amend the manuscript submission data (via Edit Submission) to include author Befkad Adresse. 9. Please amend your authorship list in your manuscript file to include author Befkad Deresse. 10. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions. Comments to the Author 1.
Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: YesReviewer #2: No**********2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: YesReviewer #2: No**********3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified.The Reviewer #1: YesReviewer #2: Yes**********4. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1: YesReviewer #2: No**********5. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
Reviewer #1: I have attached a document with recommended changes which are almost all relating to editorial changes. Please see attached and once these editorial changes are made, the content is appropriate.I would bring the lower death rates observed into both the abstract and conclusions. This is valuable and commendable work and I hope the grammatical changes I suggested are not too painful to implement.Reviewer #2: Comments onPONE-D-19-29446: Treatment Outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Amara Regional State, North Ethiopia, 2019 G.CMajor Comments:1. The title shall be re-written as \u201cTreatment Outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Ethiopia\u201d2. The Background section requires major revision. The first paragraph of the background section shall start by defining \u201cSevere acute malnutrition\u201d. The next paragraph shall describe the burden of severe acute malnutrition in the globe, in Africa, in Ethiopia and the study area. The ways of preventing and treating severe acute malnutrition and respective outcomes shall be presented with corresponding references.3. There is also a need to define \u201coutpatient therapeutics unit\u201d in the Ethiopian context. Does it refer to health care systems 4. The justification for the study \u201cBesides, the high percentage of malnutrition is alarming which needs further study to describe the treatment outcome of SAM in OTP to assess the factors contributing to the treatment outcome.\u201d is not adequate. It rather shall state whether there is an ongoing SAM treatment and hence evidence of gap in the efficacy of the ongoing SAM treatment in the study area. The justification as it stands now is vague to warrant the objective. Hence there is a need for major revision.5. 
The Objective \u201cThe study, therefore, is aimed at describing the treatment outcome among children of age less than five years and identifies factors contributing to the treatment outcome.\u201d is not congruent with the title \u201cTreatment Outcome of Severe Acute Malnutrition and associated factors \u2026\u2026\u2026\u2026.\u201d which is quite important to revise6. Several typographic, grammatical and logical errors are rampant in the background section and need to be corrected7. Methods: There is repetition on the sampling technique and procedure. \u201cThe study area, Gubalafto Wereda has a total of 34 Kebeles . From the total 34 kebeles, 7 rural and 2 urban kebeles was selected by simple random sampling method\u201d is repeated. The sample size is different . And the reason how you came up with the total samples of 600 is not clear yet. The names of the selected Kebeles shall be presented here. To which objective does the statement \u201cFor the second objective, the sample size was determined using a double population proportion formula by considering study done in Tigray and Wolaita recovery rate p=61.78,64.9 respectably to calculate the required sample size. Finally, it is calculated by using Epi info version 7 statistical packages.\u201d correspond? If the authors have two specific objectives they shall present them clearly in the background section and also state the sample size to each objective clearly. In addition, each specific objective shall have a clear background and/or justification in the background section. The statement \u201cParticipants in each kebele are selected by using a systematic sampling technique after calculating the sampling interval (K=2) for each kebeles\u201d sounds vague as the number of residents, population etc varies across the kebeles and hence the corresponding sampling interval. Hence this section shall be clearly presented as it is central to the study.8. 
Several typographic errors in the methods section require major revision. 9. Results section: The results shall show the kebeles/health posts and the corresponding cases. This section shall present the results clearly, with statistical figures to inform readers on matters such as the magnitude of cases that occurred in the area. 10. The number (%) of severe cases and mild cases should be presented clearly, indicating the total cases and the cases in each kebele. 11. Results: the section "Bivariate and Multivariate analysis on treatment outcome of SAM and associated factors" shall be re-written as "Treatment outcome of SAM and associated factors". 12. Under the section "Treatment outcome of SAM and associated factors" it remains important to present the length of treatment and the related outcome, as it has several advantages for policy and follow up. 13. "Table 4: Bivariate and Multivariate analysis on treatment outcome of SAM and associated factors" shall be re-written as "Table 4: Treatment outcome of severe acute malnutrition and associated factors in ------ Woreda, Northern Wollo, Ethiopia ---month, ----year to -----month, ----year". 14. Discussion: This section shall present the main findings obtained by analyzing the data in line with the objective and discuss them contextually. "The study was mainly aimed to indicates treatment outcomes of OTP and associated factors with it among children treated from SAM" This is vague and requires clarification. Did you address "Accordingly, the overall prevalence of cured, dead, defaulter, and medical transfer were 65.0, 2.0, 16.0, and 17.0 respectively" in your study adequately? ********** (what does this mean?). If published, this will include your full peer review and any attached files. 6.
PLOS authors have the option to publish the peer review history of their article. Attachment: PONE-D-19-29446_reviewer (1).pdf. Submitted filename: Click here for additional data file. 15 May 2020. Date: May 15, 2020. To: "PLOS ONE" plosone@plos.org. From: "Biruk Beletew" birukkelemb@gmail.com. Subject: Submitting Revision Version [PONE-D-19-29446]. PONE-D-19-29446. Treatment Outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Amara Regional State, North Ethiopia, 2019 G.C. PLOS ONE. Dear Editor, we cannot fully express our deepest thanks for your constructive comments and for helping us throughout the process of preparing the manuscript for publication. Since we agree with all the points you raised, we believe we have carefully amended the paper in line with your views and the journal guideline. We downloaded the journal guideline and prepared the manuscript accordingly. Thank you again for contributing so much to a better paper. We have presented below the point-by-point response to each point raised by the editor and the reviewers. Editor comment: 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf Authors' response: we have used the attached documents as a guide to prepare the manuscript, and they really helped us. Editor comment: 2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. Upon resubmission, please provide the following: Editor comment: 3.
We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed:https://jhpn.biomedcentral.com/articles/10.1186/s41043-017-0083-3https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0065840In your revision ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed. Authors\u2019 response: we have paraphrased the manuscript and removed the textual overlap. We have also cited those published works in method part.Editor comment: 4. In ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study.Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.Authors\u2019 response: we have amended the ethics statement as per your comment as \u201c All records were fully anonymized before we accessed them and the ethics committee waived the requirement for informed consent. Confidentiality was ensured throughout the research process. All incomplete charts were considered as non-response rate.\u201d (Page 10 Line 244-246) Editor comment: 5. Thank you for including your ethics statement:\"Ethical approval was obtained from the research ethics review board of the WU faculty of health science.\"a. Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study.b. 
Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the \u201cEthics Statement\u201d field of the submission form (via \u201cEdit Submission\u201d).http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-researchFor additional information about PLOS ONE ethical requirements for human subjects research, please refer to Authors\u2019 response: we have amended as per your comment as \u201cEthical approval was obtained from the research ethics committee of the Woldia University College of health science. An official letter of permission was obtained from Woldia University College of health science and was submitted to the respective administrative bodies of the Gubalafto woreda; permission from these administrative bodies was also given. All records were fully anonymized before we accessed them and the ethics committee waived the requirement for informed consent. Confidentiality was ensured throughout the research process. All incomplete charts were considered as non-response rate.\u201d (Page 10 Line 241-247)http://journals.plos.org/plosone/s/data-availability. Editor comment: 6. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see Authors\u2019 response: we have indicated the availability of data and materials as \u201cAll relevant data are within the paper\u201d:page18; Line 375Editor comment: 7. Thank you for stating the following financial disclosure: 'N/A'At this time, please address the following queries:a. 
Please clarify the sources of funding for your study. List the grants or organizations that supported your study, including funding received from your institution. b. State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." c. If any authors received a salary from any of your funders, please state which authors and which funders. d. If you did not receive any funding for this study, please state: "The authors received no specific funding for this work." Authors' response: we have amended the manuscript as per the comment: Funding: The study was funded by Woldia University. However, the funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript: Page 18 Line 379-380. Editor comment: 8. Please amend the manuscript submission data (via Edit Submission) to include author Befkad Adresse. Authors' response: sorry, that was a mistake; the correct name is Befkad Deresse, and it has been amended in both the editorial system and the manuscript: Page 1 Line 5. Editor comment: 9. Please amend your authorship list in your manuscript file to include author Befkad Deresse. Authors' response: sorry, that was a mistake; the correct name is Befkad Deresse, and it has been amended in both the editorial system and the manuscript: Page 1 Line 5. Editor comment: 10. Your ethics statement must appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section.
Please also ensure that your ethics statement is included in your manuscript, as the ethics section of your online submission will not be published alongside your manuscript. Authors' response: we have adjusted the manuscript so that the ethics statement appears in the Methods section and not elsewhere (Page 10 Line 241-247). To Reviewer #1: Dear Reviewer, we would like to express our deep-seated gratitude for your interesting and valuable comments and for helping us throughout the process. We really appreciate your care and optimism in giving such constructive, thorough, and in-depth comments. Since we agree with all of the points you raised, we have amended the manuscript as per your comments. We would like to thank you again for contributing to a better paper by giving comments that are important for improving its quality. Below we have written the point-by-point response to the issues you raised. Reviewer comment: I have attached a document with recommended changes which are almost all relating to editorial changes. Please see attached and once these editorial changes are made, the content is appropriate. I would bring the lower death rates observed into both the abstract and conclusions.
This is valuable and commendable work and I hope the grammatical changes I suggested are not too painful to implement.Authors\u2019 response: thank you very much, we have listed your comments taken from the attached file and indicated as we have amended and highlighted in the manuscript.Abstract\u2022 Reviewer comment: Replace evidence on with evidence of\u2022 Authors\u2019 response: amended as per the comment: Page 1 Line 18\u2022 Reviewer comment: define the word and then follow with (SAM) and for OTP\u2022 Authors\u2019 response: amended as per the comment: Page 2 Line 26\u2022 Reviewer comment: the recovery rate was revealed as 65 % with the recovery rate was found to be 65%\u2022 Authors\u2019 response: amended as per the comment: Page 2 Line 34\u2022 Reviewer comment: Replace Children who took immunization were had 6.85 times with Immunized children had 6.85..\u2022 Authors\u2019 response: amended as per the comment: Page 2 Line 35\u2022 Reviewer comment: Replace recover compared to their counterparts who were not provided with recover than their counterparts\u2022 Authors\u2019 response: amended as per the comment: Page 2 Line 39\u2022 Reviewer comment: Replace with treatment outcome of Sever Acute Malnutrition With-with the treatment outcome of severe acute malnutrition.\u2022 Authors\u2019 response: amended as per the comment: Page 2 Line 46\u2022 Reviewer comment: make a stronger link in the concluding statement between the outpatient services existing and the impact - e.g. The impact on the recovery rates of children treated using the OTP service indicate the potential benefits of increasing the capacity of such services across a target region on child mortality/recovery. The importance of timely intervention is another benefit of a more local service.\u2022 Authors\u2019 response: Thank you very much, we have amended as per the comment. 
Page 2, Lines 47-51.

Background
• Reviewer comment: Across the document, define abbreviations on first use
  Authors' response: We have defined abbreviations on first use throughout the document
• Reviewer comment: Replace "the program now" with "the program has now"
  Authors' response: Amended as per the comment: Page 4, Line 87
• Reviewer comment: I am not sure I understand the second part of this sentence "and high opportunity cost for careers" .... Page 4, Lines 94-95
  Authors' response: Amended as per the comment
• Reviewer comment: Replace "metabolism, such that if intensive re-feeding" with "metabolism when intensive refeeding..."
  Authors' response: Amended as per the comment: Page 4, Lines 100-101
• Reviewer comment: Replace "Despite malnutrition is one" with "despite malnutrition being"
  Authors' response: Amended as per the comment: Page 4, Line 102

Methods
• Reviewer comment: The Methods title needs to be aligned with the methods section
  Authors' response: Amended as per the comment: Page 5, Line 110
• Reviewer comment: one of the Wereda x..
provinces/districts?
  Authors' response: Amended as per the comment: Page 5, Line 113
• Reviewer comment: Delete "found at"
  Authors' response: Deleted as per the comment
• Reviewer comment: Delete "far"
  Authors' response: Deleted as per the comment
• Reviewer comment: "whom 80,187 male and 86,305 females"
  Authors' response: Amended as per the comment
• Reviewer comment: "children under five years"
  Authors' response: Amended as per the comment: Page 5, Line 115
• Reviewer comment: "All children under five years" - across document
  Authors' response: Amended as per the comment
• Reviewer comment: "will be excluded"
  Authors' response: Amended as "was excluded": Page 5, Line 135
• Reviewer comment: In this paragraph, replace "a proportion that was done" with "a proportion that was conducted in xx" and keep in the past tense. "We used.."
  Authors' response: Amended as per the comment
• Reviewer comment: small c - "co-morbidities"
  Authors' response: Amended as per the comment
• Reviewer comment: Create a table, or put in header titles with bullets under them, or titles followed by: x, y, z
  Authors' response: Amended as per the comment: Page 6, Lines 152-153
• Reviewer comment: "cured or recovered"
  Authors' response: Amended as per the comment: Page 9, Line 212
• Reviewer comment: Space after the number, "179 (29.8%)", across all document
  Authors' response: Amended as per the comment: Page 10, Line 255
• Reviewer comment: In the table, make the first letter of all words on the list capital
  Authors' response: Amended as per the comment: Page 11, Table 1
• Reviewer comment: "In some cases there were multiple co-morbidities at admission" to include
  Authors' response: Amended as per the comment
• Reviewer comment: "In bivariate logistic regression analysis of malnutrition, the presence of...."
  Authors' response: Amended as per the comment: Page 13, Line 291
• Reviewer comment: "In a multivariate.."
  Authors' response: Amended as per the comment: Page 13, Line 291
• Reviewer comment: Delete "than as"
  Authors' response: Deleted as per the comment
• Reviewer comment: Delete "as"
  Authors' response: Deleted as per the comment
• Reviewer comment: Across the document, put in the 0 before decimals, e.g. 0.25-0.86
  Authors' response: Amended as per the comment: Page 14, Line 304
• Reviewer comment: Delete "as"
  Authors' response: Amended as per the comment
• Reviewer comment: "not been"
  Authors' response: Amended as per the comment: Page 14, Line 307
• Reviewer comment: I would keep all one color, or do full rows in a single color, or highlight important parts in bold text
  Authors' response: Amended as per the comment across all tables

Discussion
• Reviewer comment: "to report on the"
  Authors' response: Amended as per the comment: Page 16, Line 335
• Reviewer comment: "children treated with SAM"
  Authors' response: Amended as per the comment: Page 16, Line 376
• Reviewer comment: "and between these two values"
  Authors' response: Amended as per the comment: Page 16, Line 322
• Reviewer comment: "higher than study findings from"
  Authors' response: Amended as per the comment: Page 16, Line 329
• Reviewer comment: "death rate, this study reported a lower proportion"
  Authors' response: Amended as per the comment: Page 16, Line 335
• Reviewer comment: "compared to"
  Authors' response: Amended as per the comment: Page 16, Line 335
• Reviewer comment: This needs to be brought out as a key finding and reported in the abstract: it indicates the impact of the local access point for the most severe cases, or reduction of fatalities
  Authors' response: Modified as per the comment: Page 2, Lines 46-51
• Reviewer comment: "when" not "as"
  Authors' response: Amended as per the comment: Page 17, Line 351
• Reviewer comment: "when" not "as"
  Authors' response: Amended as per the comment: Page 17, Line 365
• Reviewer comment: Space and then - "The most probable reason for this is that immunization.."
  Authors' response: Amended as per the comment: Page 17, Line 353
• Reviewer comment: "for" not "to this"
  Authors' response: Amended as per the comment: Page 17, Line 353
• Reviewer comment: "decreases"
  Authors' response: Amended as per the comment: Page 17, Line 362
• Reviewer comment: "when"
  Authors' response: Amended as per the comment: Page 17, Line 365
• Reviewer comment: "were"
  Authors' response: Amended as per the comment: Page 17, Line 367
• Reviewer comment: "hinder the.."
  Authors' response: Amended as per the comment: Page 17, Line 368
• Reviewer comment: "and administration of PO"
  Authors' response: Amended as per the comment: Page 17, Line 369
• Reviewer comment: "require" instead of "which need"
  Authors' response: Amended as per the comment: Page 17, Line 373
• Reviewer comment: "increased"
  Authors' response: Amended as per the comment: Page 17, Line 374
• Reviewer comment: "follow up charts for all"
  Authors' response: Amended as per the comment: Page 17, Line 374

To Reviewer #2

Dear Reviewer, we would like to express our deep gratitude for your interesting and valuable comments and for helping us throughout the process. We really appreciate the constructive, thorough and in-depth nature of your comments. Since we agreed with all of the points you raised, we have amended the manuscript as per your comments. We would like to thank you again for contributing to a better paper with comments that are important for improving its quality.
Below we have written our point-by-point response to the issues you raised.

Reviewer comment: 1. The title shall be re-written as "Treatment Outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Ethiopia"
Authors' response: Amended as per your comment.

Reviewer comment: 2. The Background section requires major revision. The first paragraph of the background section shall start by defining "severe acute malnutrition". The next paragraph shall describe the burden of severe acute malnutrition in the globe, in Africa, in Ethiopia and in the study area. The ways of preventing and treating severe acute malnutrition and the respective outcomes shall be presented with corresponding references.
Authors' response: Thanks for these interesting comments. We entirely updated the background section considering your comments. First paragraph: we defined SAM. Second paragraph: the burden of severe acute malnutrition in the world. Third paragraph: the burden of severe acute malnutrition in developing countries, in Africa, and in the Ethiopian context. Next paragraphs: we indicated the previous hospital-based management of SAM and its challenges, then described the purpose of OTP over the hospital-based management of SAM. Finally, we explained that there is a scarcity of evidence on the treatment outcome of SAM managed at OTP despite it being implemented in the study area context.

Reviewer comment: 3. There is also a need to define "outpatient therapeutics unit" in the Ethiopian context. Does it refer to health care systems?
Authors' response: An outpatient therapeutics unit in the Ethiopian context refers to primary health care systems such as health posts, primary clinics, health centers and primary hospitals.

Reviewer comment: 4.
The justification for the study, "Besides, the high percentage of malnutrition is alarming which needs further study to describe the treatment outcome of SAM in OTP to assess the factors contributing to the treatment outcome.", is not adequate. It rather shall state whether there is an ongoing SAM treatment and hence evidence of a gap in the efficacy of the ongoing SAM treatment in the study area. The justification as it stands now is too vague to warrant the objective. Hence there is a need for major revision.
Authors' response: Yes, you raised an important point we had missed. SAM patients are being treated at OTP, but there is limited data on the treatment outcome (effectiveness) of that treatment. We have amended considering your comment as: "Even though SAM patients are being managed at OTP units, there is scarce evidence on the efficacy of these ongoing SAM treatments in the study area" (Background: Page 4, Lines 104-107).

Reviewer comment: 5. The objective "The study, therefore, is aimed at describing the treatment outcome among children of age less than five years and identifies factors contributing to the treatment outcome." is not congruent with the title "Treatment Outcome of Severe Acute Malnutrition and associated factors ……….", which is quite important to revise.
Authors' response: Very nice comment; we have amended it: "To assess treatment outcome of SAM and associated factors among under-five children in outpatient therapeutics unit" (Background: Page 4, Lines 106-107).

Reviewer comment: 6. Several typographic, grammatical and logical errors are rampant in the background section and need to be corrected.
Authors' response: Regarding the English language, we have consulted native English-speaking colleagues and they have edited the paper. We have also edited it through repeated checking and an online grammar editor.

Reviewer comment: 7. Methods: There is repetition on the sampling technique and procedure.
"The study area, Gubalafto Wereda has a total of 34 Kebeles. From the total 34 kebeles, 7 rural and 2 urban kebeles was selected by simple random sampling method" is repeated. The sample size is different, and the reason how you came up with the total sample of 600 is not clear yet. The names of the selected Kebeles shall be presented here.
Authors' response: The redundant statement has been deleted. Regarding the sample size (600): first we calculated it considering both objectives separately. For the first objective, on the magnitude of treatment outcome (recovered/not recovered), the single proportion formula gives 354. For the second objective, on the factors which affect the treatment outcome, the double proportion formula - taking children with co-morbidities as the factor which, compared with the other factors, gives the maximum sample size - gives 374. We then selected the objective which gives the maximum sample size, which was the second objective. Finally, we used a design effect of 1.5 to compensate for potential losses during multi-stage sampling and added 10% of the sample for missing and incomplete data. The final sample size thus becomes 600 (Methods, Pages 6-7).

Reviewer comment: To which objective does the statement "For the second objective, the sample size was determined using a double population proportion formula by considering study done in Tigray and Wolaita recovery rate p=61.78, 64.9 respectively to calculate the required sample size. Finally, it is calculated by using Epi Info version 7 statistical packages." correspond? If the authors have two specific objectives they shall present them clearly in the background section and also state the sample size for each objective clearly. In addition, each specific objective shall have a clear background and/or justification in the background section.
Authors' response: The second objective is to assess risk factors for the treatment outcome of SAM among under-five children in the outpatient therapeutics unit. The sample size for the second objective was calculated by taking P1 (proportion of recovery among exposed) and P2 (proportion of recovery among unexposed) and using the double population proportion formula for each factor; this gave a sample size of 600. The first objective was to assess the treatment outcome (recovered/not recovered) of SAM; using the single population proportion formula, the maximum sample size was 390. Finally, of the two objectives, the one which gives the maximum sample size (the second objective) was taken, so the sample size of 600 was used.

Reviewer comment: The statement "Participants in each kebele are selected by using a systematic sampling technique after calculating the sampling interval (K=2) for each kebeles" sounds vague, as the number of residents, population etc. varies across the kebeles and hence the corresponding sampling interval. Hence this section shall be clearly presented, as it is central to the study.
Authors' response: Yes, you are correct; the sampling frame - the under-five children's SAM charts at OTP - varies across kebeles. What we did was distribute the sample to each selected kebele proportionally, based on the probability proportional to size (PPS) allocation technique, considering the size of the sampling frame. That means kebeles which have a large number of under-five children's SAM charts at OTP take a larger share of the samples, and vice versa. After sample distribution, we used a systematic sampling technique, calculating K in each kebele separately. (Methods, Page 8)

Reviewer comment: 8. Several typographic errors in the methods section require major revision.
Authors' response: Regarding the English language, we have consulted native English-speaking colleagues and they have edited the paper.
We have also edited it through repeated checking and an online grammar editor.

Results section
Reviewer comment: 9. The results shall show the kebeles/health posts and the corresponding cases.
Authors' response: The kebeles are indicated by number, such as rural (01-30) and urban kebeles (01-04). By simple random sampling technique, 7 rural and 2 urban kebeles (02 and 04) were selected. (Page 8, Line 198)

Reviewer comment: This section shall present the results clearly. Statistical figures [...]
Authors' response: [...] 7.1 kg (IQR 5.8-9.2 kg), 8.4 kg (IQR 7.1-9.8 kg), and 9.97 kg (IQR 8.15-11.60 kg), respectively. (Page 10, Lines 257-259)

Reviewer comment: Tables 1, 2, 3, and 4 do not show the time frame or the period (e.g. September 2000-August 2004) of the data to inform readers on matters such as the magnitude of cases occurring in the area.
Authors' response: We have included the study period in the title of each table, including the period from April 2016 to May 2019 G.C.

Reviewer comment: 10. The number (%) of severe cases and mild cases should be presented clearly, indicating the total cases and the cases in each kebele.
Authors' response: Since our study was conducted in OTP, all cases are diagnosed with severe acute malnutrition (SAM). However, those patients have no medical complications and passed the appetite test. Therefore, all included patients fulfill the SAM diagnostic criteria but not the hospital admission criteria; that means very low weight for height (below -3 z-scores of the median WHO growth standards), visible severe wasting, or the presence of oedema of both feet, and mid-upper arm circumference (MUAC) < 115 mm. This has been explained in the introduction part of the paper.

Results
Reviewer comment: 11.
Results: the section "Bivariate and Multivariate analysis on treatment outcome of SAM and associated factors" shall be re-written as "Treatment outcome of SAM and associated factors".
Authors' response: Amended as per your comment (Page 13, Line 291).

Reviewer comment: 12. Under the section "Treatment outcome of SAM and associated factors" it remains important to present the length of treatment and the related outcome, as it has several advantages for policy and follow-up.
Authors' response: The length of stay for SAM management in the OTP program ranged from 2 up to 4 months. However, the length of stay was not associated with the treatment outcome in our study.

Reviewer comment: 13. "Table 4: Bivariate and Multivariate analysis on treatment outcome of SAM and associated factors" shall be re-written as "Table 4: Treatment outcome of severe acute malnutrition and associated factors in ------ Woreda, Northern Wollo, Ethiopia ---month, ----year to -----month, ----year"
Authors' response: Amended as per your comment (Page 14, Lines 310-312).

Reviewer comment: 14. Discussion: This section shall present the main findings obtained by analyzing the data in line with the objective and discuss them contextually. "The study was mainly aimed to indicates treatment outcomes of OTP and associated factors with it among children treated from SAM" - this is vague and requires clarification. Did you address "Accordingly, the overall prevalence of cured, dead, defaulter, and medical transfer were 65.0, 2.0, 16.0, and 17.0 respectively" in your study adequately?
Authors' response: The aim of the study was to assess the treatment outcomes of SAM in OTP and associated factors. As per our objective, we have tried to address both objectives.
Regarding the first objective (treatment outcome of SAM), we found the overall prevalence of cured, dead, defaulter, and medical transfer to be 65.0, 2.0, 16.0, and 17.0, respectively. Regarding the second objective (factors affecting the treatment outcome, recovered/not recovered), we found the presence of cough, the presence of diarrhea, PO antibiotics, admission category, and the immunization status of a child to be predictors of SAM treatment outcome. We have tried to discuss them accordingly: for the first objective, the prevalence of the treatment outcomes found to be 65.0, 2.0, 16.0, and 17.0, respectively; for the second objective, the factors affecting treatment outcome are indicated in the discussion, such as vaccinated vs. unvaccinated children and newly admitted vs. re-admitted children (Page 10, Lines 343-353).

Attachment: Response to reviewers.docx
Submitted filename: Click here for additional data file.

13 Aug 2020

Treatment outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Ethiopia, 2019
PONE-D-19-29446R1

Dear Dr. Beletew,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. [...] http://www.editorialmanager.com/pone/: click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager. [...] If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team (onepress@plos.org) as soon as possible - no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact plosone@plos.org.

Kind regards,
Ricardo Q. Gurgel, PhD
Academic Editor
PLOS ONE

Additional Editor Comments:

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: (No Response)

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: (No Response)

4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data - e.g. participant privacy or use of data from a third party - those must be specified.

Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: (No Response)

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.

Reviewer #2: I found most comments addressed. Authors need to check the language before it is submitted for final publication.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

20 Aug 2020

PONE-D-19-29446R1
Treatment outcome of Severe Acute Malnutrition and associated factors among under-five children in outpatient therapeutics unit in Gubalafto Wereda, North Wollo Zone, Ethiopia, 2019

Dear Dr. Abate:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE.
Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact plosone@plos.org. If we can help with anything else, please email us. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Professor Ricardo Q. Gurgel
Academic Editor
PLOS ONE"} +{"text": "The student–teacher relationship is of special interest for understanding mechanisms of physical activity behavior change in physical education (PE). (2) Methods: In this cross-sectional study, 481 girls answered a German version of the Basic Psychological Need Satisfaction (BPNS) in PE Scale. In contrast to previous studies, the psychometric properties of this scale were examined by multilevel confirmatory factor analysis. (3) Results: A model with three latent factors on both levels showed acceptable fit and all items showed significant factor loadings. Although one item was excluded for psychometric reasons, the scale showed good internal consistencies: α = 0.85 at the individual level and α = 0.84 at the class level. The subscales' internal consistency at the individual level was good, while at the class level the scores ranged from poor to good. Small significant correlations of BPNS with moderate to vigorous physical activity support criterion validity. (4) Conclusion: The 11-item scale is a valid measurement tool to assess BPNS in PE, and further application in the school setting would broaden the insights into the psychological impacts of SDT in PE.
Numerous studies have pointed out the insufficient physical activity (PA) levels of children and adolescents in industrialized countries [1,2,3,4].

Clearly, we must also consider the environmental factors and individual circumstances which influence PA behavior.

According to SDT, every individual has the natural, constructive tendency to interact with other individuals in their environment, to act effectively in this milieu and to experience themselves as proactive and autonomous.

In the context of PE, researchers have proven the positive correlations between support and satisfaction of BPN [12].

In order to evaluate these theoretical considerations and to establish the SDT in a domain-specific sample, instruments to assess the BPNS are mandatory. Based on the original Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), several scales were developed by adapting them to specific criteria of domain, language and age. A confirmatory factor analysis (CFA) of the original 24 items by Chen et al. exhibited a good fit for a hypothesized 6-factor model, assuming separate dimensions for satisfaction and frustration for each of the three needs.

Given our focus on the satisfaction of the three innate psychological needs, the entire BPNSFS was not appropriate for incorporation in a comprehensive questionnaire. Furthermore, negative need fulfillment has been pointed out as a distinct dimension, which justifies, in light of adverse health outcomes, a separate investigation [20].

One limitation of previous validations derives from using CFA for data assessed in the school setting, since students are clustered in classes. Ignoring the clustered nature of data could lead to biased estimates and misinterpretations, since even small intercorrelations have an impact on model estimates and variances.

As educational policy is a matter of the respective federal states in Germany, the curriculum differs between states.
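The point that even small intercorrelations within classes bias clustered estimates can be made concrete with the Kish design effect, DEFF = 1 + (m − 1)·ρ, where ρ is the intraclass correlation and m the average cluster size. The following is an illustrative sketch only; the function name and the example values are our assumptions, not figures from the study.

```python
def design_effect(icc: float, cluster_size: float) -> float:
    """Kish design effect: the factor by which sampling variance is inflated
    when observations are clustered (e.g. students within classes)."""
    return 1.0 + (cluster_size - 1.0) * icc

# Illustrative numbers only: with classes of ~15 students, even a small
# ICC of 0.05 inflates variance by 70%, shrinking the effective sample
# size of N individuals to N / design_effect(...).
deff = design_effect(icc=0.05, cluster_size=15)  # 1.7
```

Dividing the nominal sample size by this factor shows why single-level CFA on class-clustered questionnaire data can understate standard errors.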
In secondary schools in Bavaria, two PE lessons (each 45 min) per week are mandatory at class stages 5 to 10. A male or female PE teacher carries out the gender-separated PE lessons, respectively. Girls represent a specific risk group regarding the effects of age, gender and SES on PA.

To date, no measurement instrument exists that has been rigorously validated to examine the BPNS of German-speaking adolescents within the PE context. Questionnaires are needed that are specifically designed for adolescents by addressing their stage of development and language. The purpose of the study is to provide initial evidence of reliability and validity of scores derived by the German Basic Psychological Need Satisfaction in Physical Education Scale (GBPNS-PE), according to Huang's MCFA approach.

The sample derived from the single-sex intervention study CReActivity, which aimed to promote PA especially for girls.

The BPNS in PE was assessed by an adapted and translated version of the BPNSFS by Chen et al.

PA was assessed by accelerometry. Participants wore an ActiGraph GT3X or GT3X-BT.

Participants reported parents' occupations and job descriptions. A trained committee of four student research assistants coded the written answers according to the International Standard Classification of Occupations 2008 (ISCO-08). After revision by two researchers, open conflicts were solved and coded under consideration of the International Socio-Economic Index (ISEI) score.

Participants reported their birth date before a research assistant assessed anthropometric data using a weight scale and stadiometer.

The assessments were conducted at the beginning of the PE lessons. Individual codes ensured the anonymity of the participants.
With regard to a previously defined protocol, research employees gave instructions to fill out the questionnaire and supervised the pupils during processing time in order to answer any questions about the questionnaire without disturbing the other students. The PE class started after the research employee had collected all completed questionnaires.

The Ethics Committee of the Technical University of Munich in Germany approved the study, registered as 155/16S. Principals and parent councils approved the assessments in schools. Parents or legal guardians as well as children gave written informed consent to participate in the study.

Taking into account the clustered nature of the sample, we considered Huang's multilevel approach. Firstly, an adjusted single-level CFA was conducted under consideration of the pooled within-group covariance matrix instead of the total covariance matrix. In a second step, we specified the null model, ergo the factor structure of step 1 on both levels, using the pooled within- and the between-group covariance matrices. Here, we constrained equal factor loadings, variances and covariances for every manifest variable and latent factor. Thirdly, we incorporated new group-level latent variables that were not allowed to covary in the so-called independence model, to estimate the variance at the group level. In step 4, we reversed this restriction and used all degrees of freedom at the between-group level to create a fully saturated model. As a last step, we specified the actually hypothesized models. Initially, including one latent factor at the between-group level ensured the correlation of the latent group-level factors.

In addition, exploring the scale dimensionality justifies model structures with one or three latent factors at level 2. Some estimated residual variances for the random intercepts at level 2 were negative but also close to zero. These variances were fixed to zero to allow the model to converge and find admissible solutions.
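The covariance decomposition underlying this stepwise MCFA procedure can be sketched in a few lines: the pooled within-group matrix S_PW pools each class's scatter around its own mean, while the between-group matrix S_B weights each class-mean deviation by class size. This is a minimal numpy sketch under standard textbook definitions; the function name is ours.

```python
import numpy as np

def pooled_within_between_cov(X, groups):
    """Pooled within-group (S_PW) and between-group (S_B) covariance matrices,
    the inputs to the adjusted single-level CFA and the two-level models above.

    X: (N, p) data matrix of item responses; groups: length-N class labels.
    Returns (S_PW, S_B) with denominators N - G and G - 1, respectively.
    """
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    N, p = X.shape
    G = len(labels)
    grand_mean = X.mean(axis=0)
    s_pw = np.zeros((p, p))
    s_b = np.zeros((p, p))
    for g in labels:
        Xg = X[groups == g]
        centered = Xg - Xg.mean(axis=0)
        s_pw += centered.T @ centered                      # within-class scatter
        d = (Xg.mean(axis=0) - grand_mean).reshape(-1, 1)
        s_b += len(Xg) * (d @ d.T)                         # size-weighted between-class scatter
    return s_pw / (N - G), s_b / (G - 1)
```

Fitting the level-1 factor model to S_PW rather than to the total covariance matrix is what removes the class-level contamination in step 1 of the procedure.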
This procedure is justified due to the small sample size at level 2 and intraclass correlations (ICCs) close to zero.

Several fit indices were adduced to evaluate the goodness of fit of the model, since all of them have limitations when regarded separately: the χ2 likelihood ratio statistic, the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA).

Reliability scores for the scales at both levels, respectively, were calculated by the alpha function from the psych package.

Proportions of missing values from 1.04% to 2.50% and Little's MCAR test (χ2 = 259.12, df = 310, p = 0.99) support that the missing values are missing completely at random. The mean BMI (N = 386, in kg/m²) reflects a normal-weight sample. Responses of students whose height and weight were measured did not differ significantly from those of students, both apparently overweight and normal-weight girls, who refused to be weighed. Participants come from households with an average SES of 49.80.

Small significant correlations with device-based assessed MVPA can be evinced at level 1 with 481 individuals. While the correlations of the autonomy and competence subscales were significant at 0.13 (p = 0.01) and 0.19 (p > 0), respectively, there was no significant correlation of MVPA with the relatedness subscale. At the group level (n = 33), the subscales competence, at 0.28 (p = 0.12), and relatedness, at 0.25 (p = 0.16), showed smaller non-significant correlations, while autonomy correlated significantly at 0.38 (p = 0.03). There was no significant correlation between SES and MVPA, while BMI and age showed negligible significant correlations of r = −0.14 (p < 0.01) and r = −0.16 (p < 0.01), respectively.

On average, girls spent 80.44 (±21.01) minutes in MVPA per day.

Our subscales had lower reliability values (0.76-0.84), whereas the original BPNS scale achieved higher alpha scores of 0.81 to 0.92.
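The fit indices named above can be computed from the model and baseline chi-square statistics with the standard textbook formulas. The sketch below is illustrative and uses common conventions (e.g. N − 1 in the RMSEA denominator); exact sample-size conventions differ between SEM programs, and the example values are not the study's.

```python
import math

def fit_indices(chisq, df, chisq_null, df_null, n):
    """Approximate CFI, TLI and RMSEA from model and baseline (null-model)
    chi-square values and degrees of freedom, for a sample of size n."""
    d_m = max(chisq - df, 0.0)            # model noncentrality estimate
    d_0 = max(chisq_null - df_null, 0.0)  # baseline noncentrality estimate
    cfi = 1.0 - d_m / max(d_0, d_m, 1e-12)
    tli = ((chisq_null / df_null) - (chisq / df)) / ((chisq_null / df_null) - 1.0)
    rmsea = math.sqrt(d_m / (df * (n - 1)))
    return cfi, tli, rmsea

# Illustrative only: a model whose chi-square equals its degrees of freedom
# yields CFI = 1 and RMSEA = 0 against any worse-fitting baseline.
cfi, tli, rmsea = fit_indices(chisq=50.0, df=50, chisq_null=500.0, df_null=66, n=481)
```

Because each index penalizes misfit differently (incremental fit vs. absolute fit per degree of freedom), reporting them jointly, as done here, guards against the blind spots of any single index.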
The large within-class variability of the relatedness and competence constructs implies that classes with few participants produce scores with low reliabilities at the class level. The lower within-class variability of the autonomy construct supports this contention. Seemingly, the girls within one class appraise autonomy to the same extent, while girls evaluate the constructs relatedness and competence differently throughout the class. The reliability score decreases at the group level, probably because group-dynamic processes influence the climate of each class. We attribute the decrease of the competence reliability scores at level 2 to classes that are heterogeneous in terms of sporting ability. Talented girls are often high performers in PE in their classes, while in other classes the overall physical performance of girls might be weaker. Moreover, even though the curriculum provides a frame for PE, the teacher determines the demands of the challenges in PE lessons. These demands differ from teacher to teacher, and thus from class to class. The main reason to prefer the 11-item scale is its improved goodness of fit in comparison to the 12-item scale under consideration of the model assumptions. An explorative reduction of items improved the goodness of fit of all models, but at the same time this procedure cannot be justified, because a largely nontransparent reduction of significant items results in an over-estimation of the model. Model A could be used to interpret the data, since it represents the procedure of an adjusted single-level CFA. While SES seems to be unrelated to MVPA, our data indicate a weak negative relationship of BMI and age with MVPA. However, the current literature takes a contradictory position and underlines the need to incorporate social and environmental factors into an adequate analysis of the PA behavior of youth. 
This study provides detailed indications of the psychometric properties of the GBPNS-PE for a specific target group in the school setting by an MCFA procedure. Nonetheless, an extension to other age groups and sexes/genders, and consideration of demographic domains, would support the generalizability of the psychometric properties of the scale; in particular, the incorporation of frustration items is sought in future investigations. Three classes with fewer than seven individuals remained in the sample to retain a sufficient sample size on level 2, although clusters with few observations could bias the estimates and reliability scores. This validation study provides initial proof of the three-factor structure of the BPNS scale in a multilevel design. Facing the limited periods available to assess comprehensive data in the school setting, the GBPNS-PE is an efficient solution to evaluate the need satisfaction of students in PE. Further investigations with a validation sample would establish the GBPNS-PE as a valid measurement tool in the German-speaking area and contribute to a higher robustness of the scale."} +{"text": "Health Policy and Planning, Volume 34, Issue Supplement_1, October 2019, Pages i4\u2013i13, https://doi.org/10.1093/heapol/czz011 Published: 23 October 2019. Erratum. Following the publication of the original article in October 2019, it came to the authors\u2019 attention that a few errors remained. 
The authors have corrected these errors as below. Table 3: the errors appeared in the last column, Concentration index (CI). Total \u2013 public sector CI should be -0.120 instead of -0.614; Total \u2013 private sector (excl. non-profit) CI should be 0.176 instead of 0.242; and Total \u2013 all sector CI should be 0.025 and not -0.901. Because of these changes the following sections of the paper have been amended. Abstract (lines 17-19): Looking across the entire health system, health financing in Cambodia appears to benefit the rich slightly more than the poor with a significant proportion of spending remaining in the private sector which is largely pro-rich. Page i10: Looking across the entire health system, health financing in Cambodia appears to benefit the rich slightly more than the poor with a significant proportion of spending remaining in the private sector which is largely pro-rich. The overall positive CI of 0.025 confirms the marginally pro-rich distribution. Although the substantially pro-poor distribution in the non-profit hospital IPD sector is desirable, overall the sector is quite small, accounting for <1% of total spending."} +{"text": "In the test group, surface topography showed bone tissue integrated into the porous structures. In the apical part of the test group, all the histometric parameters exhibited significant increases compared to the control group. Within the limitations of this study, enhanced bone growth into the porous structure was achieved, which consequently improved osseointegration of the implant. A porous titanium structure was suggested to improve implant stability in the early healing period or in poor bone quality. This study investigated the effect of a porous structure on the osseointegration of dental implants. A total of 28 implants (14 implants in each group) were placed in the posterior mandibles of four beagle dogs at 3 months after extraction. 
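As background to the corrected values, the concentration index is conventionally computed as CI = 2 * cov(h, r) / mean(h), where h is the health payment or benefit, r the fractional rank of individuals ordered from poorest to richest; positive values indicate a pro-rich distribution, matching the corrected all-sector CI of 0.025. A minimal sketch, our own illustration rather than the authors' code:

```python
import numpy as np

def concentration_index(benefit, ses):
    """Concentration index via CI = 2 * cov(benefit, rank) / mean(benefit),
    with fractional ranks assigned after sorting from poorest to richest.
    CI > 0: pro-rich; CI < 0: pro-poor; CI = 0: equal distribution."""
    order = np.argsort(ses)                  # poorest first
    h = np.asarray(benefit, dtype=float)[order]
    n = h.size
    rank = (np.arange(1, n + 1) - 0.5) / n   # fractional rank in (0, 1)
    cov = np.mean((h - h.mean()) * (rank - rank.mean()))
    return 2.0 * cov / h.mean()
```

For example, a benefit that rises monotonically with SES yields a positive index, the mirrored distribution yields the same magnitude with a negative sign, and an equal distribution yields zero.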
The control group included machined surface implants with an external implant\u2013abutment connection, whereas test group implants had a porous titanium structure added to the apical portion. Resonance frequency analysis (RFA); removal torque values (RTV); and surface topographic and histometric parameters, including bone-to-implant contact length and ratio and inter-thread bone area and ratio in total and in the coronal and apical parts of the implants, were measured after 4 weeks of healing. RTV showed a significant difference between the groups after 4 weeks of healing. A dental implant has been accepted as a reliable treatment modality for edentulous ridges with high long-term survival. The topographical features of an implant surface can be defined in terms of their scales, which are produced by surface modification treatments such as titanium plasma-spraying, grit-blasting, acid-etching, or combinations thereof. Another approach to surface modification is the production of porous bodies of titanium metal and its alloys, and sintering of metal powders onto the surface has commonly been used for porous coatings. Recent approaches have utilized methods such as selective melting with laser or electron beam, 3D printing, casting or vapor deposition to control the internal pore geometry and distribution. In the present study, a novel method utilizing the powder injection molding technique has been employed to form a porous titanium structure, which was fabricated on the apical portion of the machined screw-type implant. The effect of the newly developed porous structure on osseointegration was compared to the smooth-surfaced implant in the canine model. A threaded machined surface implant (c.p. titanium grade 4) with an external-type abutment connection was used in the control group. 
The implant measured 4.1 mm in diameter and 8.1 mm in length, and had a straight implant-body configuration (core diameter of 3.25 mm) with a homogeneous thread height of 0.35 mm. The feedstock, consisting of titanium hydride (TiH2) powder, a space holder and some polymeric binders, was prepared as the material for powder injection molding. Expandable polystyrene (EPS) beads with an average diameter of 325 \u00b5m were selected as space holders to form an open-pore structure from the contact between the beads during the expansion that occurred above 80 \u00b0C. The feedstock was injected into the narrow cavity between the threads at the apical third portion of the implant insert. The molded implants were inserted again into a mold designed for expansion of the EPS beads and kept for 20 min in an oven at 110 \u00b0C. The expanded beads were removed in a solvent, resulting in an open-pore structure consisting of TiH2 powder and binders, produced using insert powder injection molding technology. Four 12-month-old male beagle dogs weighing 12.0 to 17.0 kg were used in this study. The dogs were kept in separate cages under standard laboratory conditions. Animal selection, management, surgical procedures, and preparations were performed according to the protocols approved by the Institutional Animal Care and Use Committee at Korea Animal Medical Science Institute, Guri, Korea. The study was conducted following the Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines. Surgical procedures were performed under general anesthesia with an intravenous injection of a solution (0.1 mL/kg) containing a 1:1 ratio of tiletamine/zolazepam and xylazine hydrochloride. Infiltration anesthesia with 2% lidocaine HCl with 1:100,000 epinephrine was used at the surgical sites. The premolars (P1\u2013P4) and the first molar (M1) in both mandibles were carefully extracted, and a total of 28 implants (14 implants for each group) were placed after 12 weeks. 
In each quadrant, three or four implants from the control or test group were randomly allocated. After sequential osteotomies, implants were installed under 40 Ncm (newton centimeters) of torque and submerged. The implant stability quotient (ISQ) value was measured immediately after implant placement and at the time of animal sacrifice. A SmartPeg was connected to each implant, and commercially available RFA equipment was adjusted in the mesial and buccal directions of the implant. In each group, 7 implants were randomly selected and removal torque values (RTV) were measured using a torque meter on the day of sacrifice. Removed implant specimens were dehydrated in a graded ethanol series and sputter-coated with platinum. Surface topography was examined under a field-emission scanning electron microscope, and photographic images were taken at magnifications of 20\u00d7 and 100\u00d7 with 5.0 kV. Among the 14 implants allocated to each group, 7 specimens were processed for histologic and histometric analysis, as the other 7 implants were tested for RTVs as described above. The histometric parameters measured were (a) the bone-to-implant contact length (BICL); (b) the bone-to-implant contact ratio (BICR); (c) the inter-thread bone area (BA), which was the sum of bone area observed between the threads; and (d) the inter-thread bone area ratio (BAR), which was the percentage of BA in the region of interest (ROI). The ROI in the control group and the coronal part of the test group was determined by outlining the space between the threads (inter-thread space). To determine the corresponding ROI in the apical part of the test group, superimposition of the counterpart in the control implant was performed to outline the virtual boundary of the original shape of the inter-thread area of the implant, which was transversely divided along its long axis. The apical part included the area between the most apical border of the fixture and 3 mm above, and the coronal part extended from the coronal border of the apical part to the most coronal endpoint thread of the fixture. Statistical analyses were performed using SPSS Ver. 
12.0. Normality of data distribution was determined by the Shapiro\u2013Wilk test. For ISQ and RTV in both groups, a paired t-test was used to compare the differences in the parameters between the two groups at each time period and the differences in ISQ between baseline and 4 weeks in each group. Regarding the histometric parameters, comparisons between the groups in each part and between the two parts in each group were performed using Student\u2019s t-test or Wilcoxon\u2019s rank-sum test. The level of statistical significance was set at p < 0.05. The experimental sites in all the animals demonstrated uneventful healing and did not exhibit any adverse reaction throughout the postoperative healing period. The removal torque value of the test group was significantly higher than that of the control group (8.0 \u00b1 3.6) at the 4-week healing period (p = 0.03). The original surface topography of the test group implant showed a titanium structure with a regular distribution of similarly sized pores ranging from 200 to 350 \u00b5m in the apical part. New bone (NB) formation and BIC were observed along the entire length of the implants in both the test and control groups. Significant increases in BICL (p < 0.007), BICR (p = 0.011), BA (p = 0.014) and BAR (p = 0.028) of the total area were shown in the test group compared to the control group. The apical part of the test group presented significant increases in BICL (p < 0.001), BICR (p = 0.001), BA (p = 0.011) and BAR (p = 0.020) compared to the control group; all the parameters in the coronal part were similar in both the groups. 
The test group also showed significant differences between the coronal and apical parts in BICL (p = 0.005), BICR (p = 0.010), BA (p = 0.009), and BAR (p = 0.049), whereas the control group showed no differences between the two parts. In the present study, the porous titanium structure fabricated at the apical portion of the implant resulted in an interconnected open-pore structure, which increased the implant surface area and its osteoconductivity, enhancing bone ingrowth into the porous scaffold and thereby improving osseointegration of the implant. The percentage of porosity on the overall surface and the size of the pores are known to be the determining factors in bone ingrowth. The powder injection molding technique was introduced to process fine ceramics over the past two decades and can offer reproducible mass production of complicated, near-net-shape structures, even in hard materials like ceramics. The multithreaded root-form implant has the clinical benefits of simple osteotomy and implant placement, close mechanical proximity to the bone to increase primary stability, and less traumatic retrieval under conditions of failure. Implant stability measured using RFA showed ISQ values over 60 in both groups and at both observation periods, with no significant differences. An ISQ value over 60 has been reported to demonstrate clinical stability of the implant despite differences in implant designs, surgical models, and devices used; the factors affecting RFA include the stiffness of the implant-bone interface, the distance to first bone contact, and marginal bone loss. 
Significant increases in total BICL, BICR, BA, and BAR in the test group implants were attributed to the apical portion, and histologic findings supported these results by showing enhanced NB formation in direct contact with the interconnected open pores, with an increased surface area at the apical portion and improved BIC at 4 weeks. Healing in the trabecular compartment relies on the process of osteoconduction and de novo bone formation at the implant surface, resulting in contact osteogenesis. In the present study, a porous titanium structure fabricated by the powder injection molding technique was able to provide three-dimensional interconnected porosity on the implant surface and thereby enhanced new bone ingrowth at the surface. The histometric findings, including the bone-to-implant contact and new bone formation inside the porous material that demonstrated the improvements in osseointegration by the porous structure in the early healing dynamics, were in accordance with other studies applying trabecular-like scaffolds to dental implants. The findings of the study suggest that the porous titanium structure might increase apical bone-to-implant contact due to the increased surface area and enhance new bone formation with increased osteoconductivity in the early healing period, thereby leading to improvements in the osseointegration of the implants."} +{"text": "Statistically significant differences in ISQ values were observed between the control and experiment 1 groups, and the control and experiment 2 groups at the 12 to 48 weeks\u2019 follow-ups. 
Within the limits of this prospective study, an early loading protocol can be applied as a predictable treatment modality in posterior mandibular single missing restorations, achieving proper primary stability. The purpose of this study was to compare the implant survival, peri-implant marginal bone level, and peri-implant soft tissue of three different types of implants. This was performed with an early loading protocol, using a complete digital workflow, for one year of follow-up. Twenty-four patients with a single missing tooth in the mandibular posterior region were randomly assigned to the control group, experiment group 1, and experiment group 2. For each patient, a single implant was installed using a surgical template, and all prostheses were fabricated using a computer-aided design/computer-aided manufacturing system on a 3-dimensional model. A provisional prosthesis was delivered at 4 weeks, and a definitive monolithic zirconia prosthesis was substituted 12 weeks following implant placement. The implant stability quotient (ISQ) and peri-implant soft tissue parameters were measured, and periapical radiographs were taken at 1, 3, 4, 8, 12, 24, 36, and 48 weeks after implant placement. Seven implants in the control group, nine implants in the experiment 1 group, and eight implants in the experiment 2 group were analyzed. There were no significant differences among the three groups in terms of insertion torque, ISQ values between surgery and 8 weeks of follow-up, marginal bone loss at 48 weeks of follow-up, and peri-implant soft tissue parameters (P > 0.05). According to Branemark\u2019s original protocol, in order to obtain a direct bone-to-implant interface (osseointegration), dental implants need to be submerged under the soft tissue and left for 3\u20136 months. Many efforts have been made to shorten the restoration period, in order to reduce patient discomfort. 
First, several studies have shown that submerging the dental implants under the soft tissue is not necessary for successful implant restoration. Recent advances in 3-dimensional (3D) imaging and computer-aided design (CAD)/computer-aided manufacturing (CAM) have enabled accurate diagnosis and the creation of a surgical guide for implant placement. Implant manufacturing technologies have been improved to enhance implant stability. In terms of macrodesign, a tapered implant can be used to exert compression on the surrounding bone during the implant insertion protocol, increasing implant primary stability. Implant surface modification has been reported to influence primary and secondary implant stability. A study by Salermo et al. showed that modified implant surfaces are well preserved even after implant placement. The purpose of this study was to compare the implant survival, peri-implant marginal bone level, and peri-implant soft tissue of three different types of implants with an early loading protocol using a complete digital workflow with a one-year follow-up. All procedures were conducted according to the Declaration of Helsinki on experimentation involving human subjects. The subjects were patients with single missing mandibular posterior teeth at least 3 months after extraction. A surgical guide was prepared using digital impressions with an intraoral scanner and CBCT data before surgery. Dental implant type was allocated randomly before surgery with a computerized random number using software. A provisional restoration was installed according to the early loading protocol, and the final prosthesis was mounted 12 weeks (3 months) after implant placement. Implant success rates and implant stability were evaluated and compared between the three groups for 13 months. The study design was a three-arm randomized controlled trial. 
Three different types of implants were used in this study, among them an SLActive bone level implant (surface roughness of approximately 1.8 \u03bcm). The characteristics of the implant types used in this study are shown in the corresponding table. The sample size was calculated based on a previous study related to early loading with chemically modified surface implants. This study included 27 subjects at a single institute, Seoul National University Dental Hospital, Seoul, Republic of Korea. A total of 102 potential participants were recruited using subway advertisements. The following inclusion criteria were applied: (a) 18 years of age or older; (b) a single tooth missing in the posterior mandibular region, with at least 3 months having passed after the tooth extraction; (c) the ability to undergo implant surgery and restoration; (d) sufficient bone volume in the site to accommodate implant placement without any need for bone augmentation; and (e) a normal occlusal plane of the opposing teeth and no missing areas in the opposing jaw. The exclusion criteria were (a) pregnancy; (b) myocardial infarction within 1 year; (c) bleeding disorders or need for blood anticoagulants for surgery; (d) any systemic disease affecting the surgery or restoration procedure; (e) mental illness; (f) allergy to implant materials; (g) adjacent periodontally compromised teeth; (h) parafunctional habits or disorders; (i) lack of interocclusal space; (j) bone quality type D4; (k) insertion torque of less than 35 Ncm or greater than 45 Ncm; and (l) implant stability quotient (ISQ) value <65. 
Based on CBCT images and digital impression data, implant placement and prosthetic restoration were planned using Implant Studio software, taking into account the anatomical structure and the intermaxillary relationship. According to this plan, surgical templates, customized titanium abutments, and provisional prostheses were produced. Implant placement was performed by two experienced periodontists using a surgical template. If the buccolingual width of the attached gingiva was 8 mm or more, the implant was placed following a punch technique without incision; if it was less than 8 mm, the implant was placed with a minimal incision. The drilling procedure was performed in accordance with the manufacturer\u2019s recommended protocol for early loading. In the control group, when the osteotomy was performed with a 2.2-mm straight drill, the bone quality was divided into type 1, large homogeneous cortical bone; type 2, thick cortical layer surrounding a dense medullary bone; type 3, thin cortical layer surrounding a dense medullary bone; and type 4, thin cortical layer surrounding a sparse medullary bone. The surgical template was removed after the implant was placed at the planned location and depth in order to measure the insertion torque. A small amount of bleeding was observed at the surgical site. As the ISQ values measured on the day of surgery and at 1, 3, and 4 weeks after surgery were greater than 65, the pre-fabricated customized abutment and provisional prosthesis were inserted 4 weeks after the surgery. The prepared customized titanium abutment was secured to the implant with 20 N\u00b7cm, and a provisional prosthesis was delivered with temporary cement. During excursions, the articulating paper should not make contact, and the occlusion was adjusted so that the bite force was applied along the long axis of the implant. 
After the restoration was installed, a periapical radiograph was taken using a digital sensor holder. The final prosthetic procedure was performed using a digital prosthetic workflow 8 weeks after implant placement. A digital impression was obtained of a pre-abraded customized titanium abutment with an oral scanner. The monolithic zirconia final crown of a definitive fixed screw- and cement-retained prosthesis (SCRP) was inserted 12 weeks following implant surgery. The screw hole within the prosthesis acted as a vent hole for the escape of excess luting cement. The residual cement at the subgingival margin was checked with an explorer. After prosthesis cementation, the hole was sealed with Teflon tape and resin. The insertion torque value measured during implant installation was recorded as the peak insertion torque. Implant stability was measured using an Osstell Mentor at the following times: at implant placement; at 1 and 3 weeks after implant placement; again at 4, 8, and 12 weeks; and also at 24, 36, and 48 weeks after implant placement. The ISQ value was measured on the mesial, distal, buccal, and lingual sides and represented as an average. Periapical radiographs were taken to evaluate the amount of marginal bone change following the implant surgery and 48 weeks after implant placement. The ratio between the actual distance (L) and the distance on the radiograph (b) was calculated. 
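The calibration just described reduces to scaling each radiographic measurement by the ratio L/b; Equation (1) itself is not reproduced in this excerpt, so the sketch below assumes exactly that form, X = a * (L / b), with invented function names and, for simplicity, a single calibration ratio (in practice each radiograph would carry its own b):

```python
def to_actual_distance(a, L, b):
    """Convert a distance a measured on the radiograph to the actual
    distance X, using the known actual implant length L and its
    measured radiographic length b as the calibration ratio."""
    return a * (L / b)

def marginal_bone_change(mesial_0, distal_0, mesial_48, distal_48, L, b):
    """Average mesial/distal change in crest-to-platform distance
    between implant placement (week 0) and week 48, in actual units.
    All inputs except L are radiographic measurements."""
    dm = to_actual_distance(mesial_48, L, b) - to_actual_distance(mesial_0, L, b)
    dd = to_actual_distance(distal_48, L, b) - to_actual_distance(distal_0, L, b)
    return (dm + dd) / 2.0
```

For instance, with an 8.0 mm implant that appears 10.0 mm long on the radiograph, a radiographic crest-to-platform reading of 1.25 mm corresponds to an actual distance of 1.0 mm.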
In the radiographs taken at implant placement and 48 weeks after implant placement, the distance between the alveolar crest and implant platform (a) was measured on the radiograph and converted to the actual distance (X) between the alveolar crest and implant platform by applying the ratio. The marginal bone changes between implant placement and 48 weeks after implant placement were measured at the mesial and distal areas, and the average value was then calculated (Equation (1)). All patients were scheduled for recall visits at 24, 36, and 48 weeks after implant placement. Clinical and radiographic evaluations were conducted during the follow-up period. Full mouth occlusion was investigated, and the ISQ values were assessed at every appointment. In addition, soft tissue evaluations including plaque index, calculus index, sulcus bleeding index, and width of keratinized mucosa were performed, and periapical radiographs were taken. Any complications, including clinically detectable mobility, pain or other symptoms of discomfort or ongoing pathologic processes, peri-implantitis with suppuration, or continuous radiolucency around the implant, were evaluated during the follow-up. Statistical analysis comparing the three groups was performed based on the intention to treat (ITT) and the per protocol (PP) analyses. The \u03c72 test for categorical variables and the Kruskal\u2013Wallis test for differences between groups were used. Pairwise comparisons were performed using the Mann\u2013Whitney U test in the cases with significant differences according to the Kruskal\u2013Wallis test. The level of significance (P = 0.05) was adjusted according to the Bonferroni correction method. Two-way repeated measures analyses of variance were performed after the verification of sphericity using the Huynh\u2013Feldt method to evaluate the differences in the patterns of ISQ changes over time. One patient was excluded due to implant failure during follow-up. 
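The omnibus-then-pairwise procedure described above (Kruskal-Wallis across the three groups, followed by Mann-Whitney U tests at a Bonferroni-adjusted level for the three pairwise contrasts) can be sketched with scipy; the ISQ-like values below are invented for illustration only:

```python
from itertools import combinations

from scipy import stats

# Invented ISQ-like values for three groups (illustration only)
groups = {
    "control":      [62, 64, 63, 65, 66],
    "experiment 1": [63, 65, 64, 66, 68],
    "experiment 2": [75, 76, 77, 78, 79],
}

# Omnibus Kruskal-Wallis test across the three groups
h_stat, p_omnibus = stats.kruskal(*groups.values())

# Pairwise Mann-Whitney U tests only if the omnibus test is
# significant, at an alpha Bonferroni-adjusted for 3 comparisons
alpha_adjusted = 0.05 / 3
if p_omnibus < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        _, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"{name_a} vs {name_b}: significant = {p_pair < alpha_adjusted}")
```

Gating the pairwise tests on the omnibus result, and then tightening the per-comparison alpha, is what keeps the familywise error rate at the nominal 0.05.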
As a result, the data from 24 implants in 24 participants were analyzed for the present study. The mean age of the 24 subjects was 50.3 \u00b1 10.55 years, ranging from 26 to 65 years. They participated in this study from January 2018 to October 2019. Seven implants were placed in the first mandibular molars, fifteen in the second mandibular molars, and two in the second mandibular premolars. Seven implants were placed in the control group, nine in the experiment 1 group, and eight in the experiment 2 group. The primary stability was investigated using the peak insertion torque and the ISQ value at the time of implant placement. In this clinical study, the insertion torque values and ISQ values in all patients met the inclusion criteria. The insertion torque values were similar in the three groups (P = 0.559). The ISQ value was lower in the control group than in experiment group 1 or 2, but there were no statistical differences (P = 0.389). The ISQ values progressively increased in all control and experimental groups, which showed a remarkable increase 4 weeks after the surgery. The ISQ values of experiment groups 1 and 2 were relatively higher than those of the control group at the same observation times. In particular, ISQ had the highest value in the experiment 2 group (CMI IS-III HActive) until 4 weeks after surgery. There were no statistical differences among the three groups at 8 weeks (P > 0.05). Overall, the ISQ values progressively increased in all control and experimental groups. There was a period in which the increase in ISQ was stagnant between 2 and 4 weeks after the surgery; however, it increased again after 4 weeks. The ISQ values of experimental groups 1 and 2 were relatively higher than those of the control group at the same observation time. 
In particular, ISQ had the highest value in the experiment 2 group (CMI IS-III HActive) until 4 weeks after the surgery, and experiment group 1 (CMI IS-III Active) showed the highest value 8 weeks after the surgery. There was no statistical difference among the three groups until 8 weeks after the surgery. ISQ values at 12, 24, 36, and 48 weeks after surgery were significantly higher in the experiment 1 and 2 groups than in the control group (P < 0.05). The marginal bone changes showed little bone resorption at the 12-, 24-, and 48-week follow-ups in the control group and experiment groups 1 and 2. The amount of bone resorption was found to be relatively low in experiment group 1 compared to the control group or experiment group 2, but there were no statistical differences among the groups, particularly when considering the ranges of error in evaluating periapical radiographs. In this study, the inclusion criterion for the early loading protocol was an insertion torque value between 35 and 45 N\u00b7cm. Although adequate primary stability is required for predictable results with an early loading protocol, the criteria of insertion torque values for immediate or early loading have shown various ranges. A recent systematic review demonstrated that the criteria of insertion torque value for immediate or early loading were heterogeneous across previous studies, including \u226515 N\u00b7cm (12.5%), \u226520 N\u00b7cm (4.2%), \u226530 N\u00b7cm (20.8%), \u226535 N\u00b7cm (50%), \u226540 N\u00b7cm (8.3%), or \u226545 N\u00b7cm (4.2%). Hydroxyapatite (HA) has been applied to implant surfaces to facilitate bone-to-implant contact in various studies, although some mechanical issues have been reported. In order to solve these mechanical issues and make the most of the osteoconductive activity of HA, a technology for thin HA coatings on implant surfaces has been developed and implemented. 
Another type of surface treatment, a chemically modified sandblasted, large-grit acid-etched (SLA) surface, is known to enhance hydrophilicity, facilitating bone apposition during the early stages of bone healing. Unlike preclinical studies of chemically modified SLA implants and HA-coated implants, which reported increased osseointegration at the early bone healing stage, this clinical study found no significant between-group differences during the early healing period. Overall, the ISQ values progressively increased in all types of implants, with a stagnant period between 2 and 4 weeks, and an increasing period between 4 and 24 weeks following implant placement. At 4 weeks after implant placement, implant prostheses were delivered. It seems that the ISQ values might increase progressively after loading to some extent. A previous human study demonstrated that loading might stimulate bone remodeling around the implant and increase lamellar bone formation. The macro-design of the implant seems to affect the ISQ values post-loading. In this study, the experiment 1 and experiment 2 groups showed higher ISQ values after 12 to 48 weeks of observation (post-loading period) compared to the control group. Some differences were observed in the macro-design between the experimental and control groups in terms of tapering, pitch height, thread height, and inclination angle of the thread flank. A previous study reported that an implant pitch height of 0.9 mm showed an improved stress distribution compared to a pitch height of 0.8 mm. Accurate implant placement is important for esthetic and functional implant prostheses. A surgical guide for implant placement was created with 3D imaging and CAD/CAM technology. In addition, elaborate surgical planning can improve the predictability of placing the implant at the proper location. Various studies have emphasized the importance of case selection in relation to early loading. The limitation of this study is the small sample size and short follow-up period. 
As in previous studies, many technical and biological complications related to dental implants occur at least 3 years post-loading ,43. A syThis prospective clinical study was performed with an early loading protocol using a complete digital workflow, resulting in a high survival rate of implants in the mandibular posterior region. In addition, ISQ values, marginal bone level, and peri-implant soft tissue parameters showed acceptable results during the 1-year follow-up period. Within the limitations of this study, all three types of tapered implants with an early loading protocol constituted a successful treatment modality, with the novel macro-design implants showing slightly higher ISQ values after loading compared to the control group."} +{"text": "The purpose was to compare this new implant macrogeometry with a control implant with a conventional macrogeometry.Eighty-six conical implants were divided into two groups (n = 43 per group): a control group (group CON) that used conical implants with a conventional macrogeometry, and a test group (group TEST) that used implants with the new macrogeometry. The new implant macrogeometry shows several circular healing chambers between the threads, distributed along the implant body. Three implants of each group were used for scanning electron microscopy (SEM) analysis and the other eighty samples (n = 40 per group) were inserted into the tibiae of ten rabbits (n = 2 per tibia), assigned by randomization. The animals were sacrificed (n = 5 per time) at 3 weeks (Time 1) and 4 weeks (Time 2) after the implantations. The biomechanical evaluation comprised the measurement of the implant stability quotient (ISQ) and the removal torque values (RTv). 
The microscopical analysis was a histomorphometric measurement of the bone-to-implant contact (%BIC) and the SEM evaluation of the bone adhered to the removed implants.The results showed that the implants of the group TEST produced a significant enhancement of osseointegration in comparison with the group CON. The ISQ and RTv tests showed superior values for the group TEST at both measured times (3 and 4 weeks), with significant differences (p < 0.05). More residual bone, in both quantity and quality, was observed on the surface of the removed implants from the group TEST. Moreover, the %BIC demonstrated an important increase for the group TEST at both times, with statistical differences (in Time 1 p = 0.0103 and in Time 2 p < 0.0003). Then, we can conclude that the alterations in the implant macrogeometry promote several benefits for the osseointegration process. Osseointegrated titanium implants are frequently used for the rehabilitation of organ loss due to different causes, mainly trauma or disease. It has been shown that titanium has properties that stimulate its interaction with bone tissue ,2, preseEven with the high success rates achieved by implants, studies have sought to accelerate osseointegration with different technologies and manufacturing methods. In this sense, alterations in the micro- and macrogeometry of the implant design have been presented \u20138. New sThe surgical technique used to prepare the osteotomy and the macrogeometry of the implant are also factors considered of great importance in the process of osseointegration . CurrentTaking these concepts into account, recent studies have proposed changes in the relationship between the size of the osteotomy and the implant, i.e., a drilling protocol where the bed size is closer to the outside diameter of the implant threads, thus decreasing the insertion torque of the implant and, consequently, the compression of the bone tissue around the implant ,26. 
ReceThe new implant macrogeometry was developed with healing chambers on the implant body in order to decrease the compression on the bone tissue without changing the drilling sequence. Then, the purpose of this preclinical study was to evaluate and compare, through biomechanical and microscopical analysis, the behavior of this new implant macrogeometry, using a conventional commercialized implant macrogeometry as a control. The hypothesis was that these changes in the implant macrogeometry can accelerate the osseointegration process.Eighty-six implants manufactured in commercially pure grade IV titanium were used in the present study: forty-three implants with the conventional macrogeometry (Due Cone Implant) and 43 implants with the new macrogeometry (Maestro Implant), both manufactured by Implacil De Bortoli Ltda with 9 mm in length and 4 mm in diameter, forming the group CON and group TEST, respectively. The conventional macrogeometry (group CON) showed a conical implant body and trapezoidal thread design; whereas, the new implant macrogeometry (group TEST) showed a similar conical body and trapezoidal threads plus circular healing chambers created between the threads. All implants received a surface treatment (rugosities) made by blasting with microparticles of titanium oxide (~150 \u03bcm) plus acid conditioning . All implant samples were prepared as for commercialization. Three samples of each group were evaluated using scanning electron microscopy to describe some morphological characteristics.The remaining eighty samples (n = 40 per group) were implanted in the rabbit tibiae (n = 2 per tibia). The localization of the implants in each tibia was determined using a website (www.randomization.com). 
Moreover, because in the proximal portion of the joint the bone tissue is much more medullary and of lower density, all implants were installed in a more central area of the tibia, which is schematically shown in the For this experimental animal study, twenty laboratory rabbits , weighing between 4 and 5 kg, were used. The study protocol was previously analyzed and approved by the animal committee of the University of Rio Verde under number 02-17/UnRV. All animals were cared for and managed in accordance with our traditional protocol applied in other studies ,15. For the intramuscular anesthesia, 0.35 mg/kg of ketamine (Inte\u00ae; Agener Uni\u00e3o Ltda., S\u00e3o Paulo, Brazil) plus 0.5 mg/kg of xylazine were used. The incision was made at ~10 mm from the proximal articulation in the distal direction, totaling ~30 mm. The bone was exposed, and the perforations were performed using the drilling sequence recommended for this implant system . All osteotomies were performed under irrigation with distilled water at 20 \u00b1 2 \u00b0C. After the implantation, all animals received an intramuscular injection of a single dose of 0.1 ml/kg of Benzetacil plus three doses (one per day) of 3 mg/kg of ketoprofen . The sacrifice was performed with an overdose of anesthesia at 3 and 4 weeks after the implantations. Both tibiae with the implant samples were removed and immediately immersed in a 4% formaldehyde solution.The measurement of the implant stability was performed with the Osstell device . The SmartPeg magnetic sensor was positioned, screwed, and torqued to 10 N\u00b7cm for each implant, as recommended in a recent study . The meaTen implants of each group at each proposed period (3 and 4 weeks) were used to measure the removal torque value (RTv). The analysis was performed with a computed torquimeter . 
All block samples (bone and implant) were positioned in the apparatus and the maximum RTv was measured and registered.All implants removed in the torque removal test were carefully packaged, dried, and prepared for the SEM analysis. Initially, the samples were metalized in a sputtering machine . Then, a sequence of images at different magnifications was obtained using a SEM apparatus . The characteristics of the residual bone found on the implant surface were analyzed and described.After one week, all samples fixed in the formaldehyde solution were subjected to dehydration through an alcohol sequence, with a progressive increase from 50 to 100% ethanol. After the dehydration, the sample blocks with the bone plus implant were embedded in historesin . After the polymerization, the pieces were cut at the centre of the implants using a metallographic machine . Then, the slices obtained were fixed and submitted to a polishing treatment with a sequence of abrasive papers (180 to 1200 mesh) in a polishing machine . All slices were stained with the picrosirius hematoxylin staining technique . A sequeDescriptive analysis of the findings in the histological sections was performed separately for the cortical and medullary portions of the bone, as shown schematically in the 2 (1 mm from the implant towards the bone and 2 mm on the long axis of the implant) of the tissues around the implant in the cortical and medullar portions separately. The t-test was used to evaluate the statistical differences between the groups at each time. Moreover, a descriptive analysis of the percentage increase in ISQ and RT values between the groups and within each group between evaluation times was calculated. All analyses were made in the GraphPad Prism program, version 5.01 . 
In all analyses, results were considered significant when p < 0.05.The one-way ANOVA statistical test was used to determine differences among the three ISQ measurement times in the same group. Moreover, the The SEM analysis of both implant macrogeometries showed the modifications made in the TEST group, which presents circular healing chambers between the threads. The sequence of images at different magnifications of the n = 40 implants per group).At both times proposed for the evaluation (3 and 4 weeks after the implantations), all implants presented clinically good signs of osseointegration, without mobility. Moreover, no signs of inflammation or infection were observed in any sample. Then, the 80 implant samples could be analyzed for 3 weeks and 48.3 \u00b1 3.43 N for 4 weeks, whereas in the TEST group it was 45.3 \u00b1 3.80 N for 3 weeks and 65.1 \u00b1 3.45 N for 4 weeks, with a statistical difference between them (p < 0.05). The bar graph of SEM images of both groups clearly showed residual bone adherence on the implants examined. In the samples evaluated after 3 weeks, the group CON showed the presence of a thin layer of bone tissue on the surface in some areas of the implant , while iIn the samples evaluated after 4 weeks, the group CON showed the presence of a more uniform and consistent thin layer (in comparison with the samples of this same group at 3 weeks) of bone tissue on the surface in all areas of the implant . While iAfter the predetermined periods of 3 and 4 weeks, all implants showed good stability and all signs of osseointegration. Ten implants from each group and time were analyzed with regard to the bone-to-implant contact (%BIC). In the group CON, the images demonstrate an initial process of bone neoformation, showing no signs of formation within the medullar portion. 
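The statistical comparison described above (an unpaired t-test between groups CON and TEST at each time point, with significance at p < 0.05) can be sketched outside GraphPad Prism with a few lines of standard-library code. The ISQ values below are hypothetical placeholders, not data from the study.

```python
# Sketch of the per-time-point comparison described above: an unpaired
# (Welch's) t statistic between group CON and group TEST.
# The values are hypothetical placeholders, not data from the study.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

con = [58.1, 60.4, 57.9, 59.2, 61.0]   # hypothetical ISQ, group CON
test = [65.3, 67.1, 64.8, 66.0, 68.2]  # hypothetical ISQ, group TEST
t = welch_t(test, con)
print(round(t, 2))  # t is about 8.0 for these placeholder values
```

The t statistic would then be compared against the appropriate critical value (or p-value) to decide significance at the p < 0.05 threshold, as done in the study's software.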
Representative histological section images of the implants after 3 and 4 weeks for both groups are shown in the However, in the group TEST, the images demonstrate a more advanced healing process, with some areas in the medullar bone portion showing new bone formation. Significant differences in the %BIC were observed between the two groups at the two times analyzed, 3 and 4 weeks after the implantations. The mean, standard deviation and statistical analysis of the measured values are summarized in the The morphological parameters measured of the organization of the healing bone tissue showed different quantities of new bone formation, osteoid matrix and medullary spaces for the two groups at the two times proposed, as presented in the graphs of the Figs In the present study a new implant macrogeometry was evaluated and compared with a conventional commercialized implant regarding its osseointegration potential in early healing periods (3 and 4 weeks). The results demonstrated that this new implant promotes an acceleration of the osseointegration process compared to the conventional implant. The development of this new macrogeometry was based on recent studies that demonstrated that the presence of healing chambers and the non-compression of bone tissue during implant installation benefit and accelerate osseointegration ,26. HoweThe search for a reduction in the time to osseointegration of implants has received much attention from researchers and industry worldwide. In this sense, different micro- and macro-changes in implant design have been studied and proposed. On the other hand, there is the patient, with their biology and physiological individualities, which are a fundamental part of obtaining osseointegration of implants. 
When implants are inserted using high torque values, the physiological limit to absorb this excessive trauma may be exceeded, which may cause a higher than expected inflammatory reaction and may lead to necrosis of the bone tissue ,32,33. OEvents related to implant installation, such as milling and implant insertion, promote different intensities of bone tissue trauma, which affect the inflammatory reaction. This intensity of the inflammatory response, promoted during the implant surgery, was measured by the expression of the transcription factor NF-kB in previous studies performed by our group . Other sSeveral studies have proposed that morphological alterations of the implant surface characteristics can improve and accelerate the osseointegration process \u201310. ThusMoreover, the results of the secondary stability measurements showed that the new implant macrogeometry (group TEST) presented higher values at the two times (3 and 4 weeks) after the implantation, in comparison with the control group (group CON). Regarding implant stability (ISQ) measured by the Osstell device, the TEST group showed a significant increase in relation to the implants of the CON group, i.e., on average 13.5% higher at 3 weeks and 14.3% higher at 4 weeks. When the evolution within the same group was evaluated, the implants of the TEST group increased their ISQ values by 12.5% from time T1 to T2 and, on average, by 35% from T1 to T3, while in the CON group the increase was 1% and 20%, respectively, for the same comparison parameters. These results clearly demonstrate the benefit of the new implant macrogeometry with healing chambers.Another biomechanical assay to assess implant osseointegration is the removal torque value (RTv), which quantifies the joint force between bone and implant ,12,13. 
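The percentage comparisons reported above (between-group differences at one time point, and within-group evolution from T1 to T3) are plain relative-difference calculations; the sketch below uses hypothetical mean ISQ values, not the study's data.

```python
# Sketch of the percentage comparisons described above: relative difference
# between groups at one time, and relative increase within a group across
# time points. The ISQ means below are hypothetical placeholders.
def pct_diff(a, b):
    """Percentage by which a exceeds b, relative to b."""
    return (a - b) / b * 100

isq_con_t1 = 60.0     # hypothetical CON mean at T1
isq_test_t1 = 68.0    # hypothetical TEST mean at T1
isq_test_t3 = 91.8    # hypothetical TEST mean at T3

between = pct_diff(isq_test_t1, isq_con_t1)  # TEST vs CON at T1
within = pct_diff(isq_test_t3, isq_test_t1)  # TEST evolution T1 -> T3
print(round(between, 1), round(within, 1))   # 13.3 35.0
```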
AInitially, it was hypothesized that the new implant design would not alter the initial stability values, as evidenced by the results obtained. Moreover, we observed an increase in removal torque and %BIC values for samples of the group TEST, showing that this new macrogeometry promotes a positive effect on osseointegration, especially in the initial tested periods of 3 and 4 weeks after implantation, in comparison with the group CON. The efficiency of elaborating healing chambers has been demonstrated in previous studies, which reported a lower primary implant stability due to the technique used for the elaboration of these free spaces, i.e., an oversized perforation that creates these spaces between the implant and the bone tissue ,26. In oAs reported in other studies \u201313, chanThere are some limitations to the present animal study. First of all, the results of studies with animal models cannot be directly translated to humans, because even among rodent species, correlations of only 70% are generally found . On the Within the limitations of the present study, the results showed that changes in implant macro-design can significantly accelerate the bone healing process around the implants (osseointegration). Higher bone-to-implant contact, primary stability, and removal torque values, as well as greater quantity and quality of bone adhered to the surface of the implants with the new macro-design, corroborate the importance of implant macrogeometry in the osseointegration process."} +{"text": "TLR2 gene expression than nonobese individuals, particularly in men. In contrast, surface TLR4 expression was lower in men and in obese individuals. Postprandial cell-surface expression decreased similarly after all macronutrient loads. Neutrophil TLR2 decreased only in obese subjects whereas TLR4 showed a greater decrease in nonobese individuals. 
However, TLR2 gene expression increased after glucose ingestion and decreased during the lipid load, while TLR4 was induced in response to lipids and mostly to glucose. Postprandial TLR gene expression was not influenced by group of subjects or obesity. Both cell-surface and gene postprandial expression inversely correlated with their fasting levels. These responses suggest a transient compensatory response aiming to prevent postprandial inflammation. However, obesity and sex hormones showed opposite influences on surface expression of TLR2 and TLR4, but not on their gene expression, pointing to regulatory posttranscriptional mechanisms.We studied if macronutrients of the diet have different effects on leukocyte activation, and if these effects are influenced by sex hormones or obesity. We analyzed leukocyte cell surface and gene expression of toll-like receptors 2 and 4 (TLR2 and TLR4) during fasting and after macronutrient loads in women with polycystic ovary syndrome and female and male controls. Fasting TLR2 surface expression in neutrophils was higher in men than in women. Obese subjects presented higher Pathogen and nutrient response pathways are evolutionarily conserved and highly integrated to regulate metabolic and immune homeostasis. Metabolic and inflammatory pathways converge at different levels, including that of cell-surface receptors such as toll-like receptors (TLRs). TLRs comprise a family of proteins that recognizes pathogen-associated molecular patterns (PAMPs) and initiates host innate immune responses. These molecular sites allow for the coordination between nutrient-sensing and immune response pathways in order to maintain homeostasis under diverse metabolic and immune conditions ,2. AmongThe inflammatory response can be damaging to the host if not regulated properly. 
In agreement, certain disorders such as obesity and type 2 diabetes\u2014characterized by the existence of a chronic low-grade inflammatory state \u2014have beeInteractions between sex hormones and the immune system, with consistent sex disparities in immunity, have been known for a long time ,16. MaleA rise in inflammation also takes place acutely following meals, and several cells, including immune cells, respond to the postprandial elevation of several meal components by mounting transient hormonal, metabolic, inflammatory, and oxidative responses ,21,22,23In order to provide new insights into the role of TLRs in the integration of metabolic and immune responses at fasting and during single macronutrient loads, while considering the possible influences of sex hormones and body weight on these responses, we evaluated the expression patterns of TLR2 and TLR4 in peripheral blood leukocytes after oral glucose, lipid, and protein challenges in a group of young adults, including control women, women with PCOS, and men.We included 53 young adults: 17 women with PCOS, 17 non-hyperandrogenic control women, and 19 control men. Female and male controls were selected so that they were similar in terms of age and body mass index (BMI). Subjects were classified into non-obese (BMI < 30 kg/m2) or obese (BMI \u2265 30 kg/m2) subgroups. The diagnosis of PCOS was based on the presence of clinical and/or biochemical hyperandrogenism, oligo-ovulation, and exclusion of secondary etiologies . The con
Patients were instructed to follow a diet unrestricted in carbohydrates (at least 300 g of carbohydrates per day during three days) before sampling in order to avoid false positive results in the 75 g oral glucose tolerance test, which was used not only for research purposes but also to check the patients for disorders of glucose tolerance. We obtained serum and plasma samples after a 12 h overnight fast, and during the follicular phase of the menstrual cycle or in amenorrhea after excluding pregnancy in women.On alternate days, we submitted patients to separate oral loads in the following order: glucose, lipids, and proteins. The protocol for macronutrient challenges has been described elsewhere . ChallenTechnical characteristics of the assays used for laboratory measurements have been described in detail elsewhere . Serum cThe cell-surface expression of TLR2 and TLR4 on monocytes and neutrophils, and of leukocyte activation markers CD36 and CD86 on monocytes, was detected by direct immunofluorescence and quantified by flow cytometry. Fresh blood samples were collected during fasting and 2 h (glucose and proteins) or 4 h (lipids) after the oral challenges and immediately assayed by three-color flow cytometry. Briefly, 100 \u00b5L aliquots of whole blood were stained according to the manufacturer\u2019s instructions for 25 min in the dark at room temperature with the following directly conjugated fluorescent-labeled monoclonal antibodies (mAbs) as appropriate: CD282 (TLR2)-fluorescein isothiocyanate (FITC) , CD284 (TLR4)-phycoerythrin (PE) , CD36-PE , CD86-FITC , CD14-FITC , CD33-PE , and CD45-peridinin\u2013chlorophyll protein (PerCP) . The corresponding isotype controls were used to detect nonspecific staining. Erythrocytes were lysed with FACS lysing solution (BD Biosciences) for 10 min at room temperature. The immuno-stained cells were washed two times with PBS and the remaining leukocytes were analyzed in a fluorescence-activated cell counter . 
Cytometry data analysis was performed using the FCS Express 4 software package . Ten thousand events were acquired, and neutrophils and monocytes were identified and gated according to their characteristic CD45+ staining and side-scatter profiles, excluding cellular debris . In addiTLR2 and TLR4 genes in peripheral blood leukocytes by quantitative real-time PCR experiments (qPCR) as recently described [RPS18 and HPRT1 among seven genes evaluated were found to be more consistent in their expression and therefore served as reference for data normalization. Experiments were performed in predesigned Human CustomSignArrays384 qPCR panels with Perfect Master Mix SYBR Green on a LightCycler480 instrument, software version 1.5 (ROCHE). Data were expressed as arbitrary units using the log2\u2212\u0394Cq transformation. All samples were assayed in duplicate and negative and positive controls were included in each plate.We performed gene expression studies to evaluate the relative quantification of escribed . Additiohttps://www.imim.cat/ofertadeserveis/software-public/granmo/, last accessed 7 February 2019). This calculation was based on previous results of Gonzalez et al. [We used the online sample size and power calculator provided by the Institut Municipal d\u2019Investigaci\u00f3 M\u00e8dica from Barcelona, Spain, version 7.12 or mean \u00b1 SEM (figures). Logarithmic transformations were applied as needed to ensure normal distribution of data. Univariate general linear models (GLMs) were used to determine within a single analysis the influence of group , obesity and the interaction of both factors on hormonal, metabolic, and inflammatory variables at fasting, while adjusting the level of significance to compensate for the multiple comparisons involved. The mean of the three measurements obtained at fasting\u2014before each macronutrient load\u2014was used for these analyses. 
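The 2^−ΔCq quantification mentioned above can be sketched in a few lines: the target gene's Cq is normalized against the mean Cq of the reference genes (RPS18 and HPRT1 in the study), and expression is reported on a log2 scale, where log2(2^−ΔCq) simplifies to −ΔCq. The Cq values below are hypothetical placeholders.

```python
# Sketch of the 2^-dCq relative quantification described above. The target
# gene Cq is normalized against the mean Cq of the reference genes, and
# expression is reported in arbitrary units on a log2 scale, where
# log2(2 ** -dCq) = -dCq. All Cq values are hypothetical placeholders.
cq_target = 26.0        # hypothetical Cq for a target gene such as TLR4
cq_refs = [18.5, 19.5]  # hypothetical Cq for the two reference genes

dcq = cq_target - sum(cq_refs) / len(cq_refs)  # delta Cq = 26.0 - 19.0 = 7.0
rel_expr = 2 ** -dcq                           # linear-scale relative expression
log2_expr = -dcq                               # arbitrary units on log2 scale
print(dcq, log2_expr)  # 7.0 -7.0
```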
To evaluate the response to macronutrient ingestion of cell-surface and gene expression and the influence of group and obesity or a possible interaction of both variables on these responses we used repeated-measures GLMs in two distinct analyses. We introduced as within-subjects factor: (i) Fasting and postprandial values to evaluate significant changes from fasting; and (ii) the responses to the separate oral challenges expressed as areas under the curve (AUC) to evaluate differences among macronutrients, also introducing group and obesity as between-subjects factors. For each dependent variable, the AUC was calculated using the trapezoidal rule. The AUC was subsequently corrected for fasting levels\u2014to comprise only the net increment or decrement in the dependent variable\u2014and was normalized by the whole duration of the challenge to warrant comparison among macronutrient loads (120 min for glucose and protein loads and 240 min for lipid load). The Mauchly\u2019s test was used to estimate sphericity and the Greenhouse\u2013Geisser correction factor was applied as needed. In all the analyses, Fisher\u2019s Least Significant Difference post-hoc test was used for multiple comparisons among groups. The relationships between continuous variables were assessed by Pearson\u2019s or Spearman\u2019s correlation analysis as needed. All the analyses were performed with SPSS Statistics 15.0 and Fasting clinical, hormonal, and metabolic characteristics of participants are shown in p = 0.077). Accordingly, TLR2 gene expression was greater in obese subjects compared with non-obese participants (p = 0.034), this effect tending to be driven by the group of men (p = 0.072). 
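The AUC computation described above (trapezoidal rule over the sampling times, corrected for fasting levels to keep only the net increment or decrement, and normalized by the challenge duration) can be sketched as follows; the time points and values are hypothetical, not measurements from the study.

```python
# Sketch of the AUC computation described above: trapezoidal rule over the
# sampling times, corrected for the fasting (t = 0) level, and normalized
# by the total duration of the challenge so that 120-min and 240-min loads
# are comparable. The measurements below are hypothetical placeholders.
def net_auc(times, values):
    """Net AUC per minute: trapezoid AUC minus the fasting baseline area,
    divided by the total challenge duration."""
    auc = sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    duration = times[-1] - times[0]
    baseline_area = values[0] * duration  # fasting level held constant
    return (auc - baseline_area) / duration

# hypothetical measurements at 0, 30, 60, 90, 120 min of a glucose load
times = [0, 30, 60, 90, 120]
values = [10.0, 14.0, 16.0, 13.0, 11.0]
print(round(net_auc(times, values), 2))  # 3.38
```

A value above zero indicates a net postprandial rise over fasting; a negative value, a net decline, matching how the corrected AUC is interpreted in the text.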
On the contrary, TLR2 surface expression in monocytes was unaffected by group of subjects and obesity towards a larger decrease after lipid ingestion compared with those observed after the intake of glucose and proteins (p = 0.096) to be larger in men compared with control women irrespective of macronutrient loads to decrease in male controls compared with PCOS women . Accordingly, the AUC of TLR2 expression was larger after glucose ingestion when compared to those observed after lipid and protein intake to be greater after lipid than after protein ingestion (p = 0.094) to be larger in non-obese subjects than in obese individuals (p = 0.073) to be greater in men compared with control women .TLR2 mRNA was induced after glucose ingestion but decreased in response to the oral lipid challenge. On the contrary, TLR4 expression was greatly activated by glucose and to a lesser extent by lipids, whereas proteins caused no significant changes in either gene. However, sex, sexual steroids, and obesity did not influence TLR2 and TLR4 expression postprandially, despite obese subjects, chiefly men, showing higher fasting TLR2 levels than non-obese individuals.Regarding leukocyte gene expression, In accordance with the literature, our present data further support an association of TLRs and obesity ,10,11. ITLR2 and TLR4 are subjected to different regulatory mechanisms and are activated by different ligands. In particular, lipopolysaccharide (LPS) binds specifically to TLR4 but not TLR2 and inteIn addition to obesity, sex steroid hormones also play a key role in the modulation of bacterial-host interactions and inflammation, and have been reported to influence TLR expression ,46. In fHowever, the literature regarding TLR expression in patients with PCOS is scarce. 
Two studies presented data describing increased TLR4 in ovarian tissues from animal models of PCOS ,55, and In general, focusing on the different macronutrient loads, surface expression of leukocyte activation markers was strongly diminished during all the macronutrient challenges, with a relatively weaker decrease after protein ingestion. Upon ligand binding, TLR2 and TLR4 activation may require the internalization of the receptor, leading to the downregulation of their cell-surface expression . Thus, wTLR4 showed a very similar pattern of gene expression during the macronutrient challenges compared with those of interleukin 10 (IL10), tumor necrosis factor alpha (TNF), the TNF receptor superfamily member 1B (TNFRSF1B), and interleukin 6 receptor (IL6R), with the highest induction corresponding to the glucose load and to the anti-inflammatory cytokine gene IL10 [We have recently reported a similar discrepancy between the generalized decrease observed in several circulating inflammatory mediators and their respective gene expression induction by macronutrients . Of noteene IL10 . The facene IL10 , and altene IL10 . Furtherene IL10 ,64.TLR2 and TLR4, in agreement with several in vitro studies [TLR2 mRNA expression compared with their non-obese counterparts. Finally, the lack of TLR2 induction by lipids in our study might be related to the large percentage of mono- and polyunsaturated fatty acids in the lipid emulsion administered to our experimental subjects, as exposed earlier and consistent with previous reports demonstrating an induction of TLR expression and proinflammatory cytokine release by palmitate and stearate but not by oleate or palmitoleate [The impact of macronutrients on leukocyte function and its association with inflammation have been reported earlier, but most of the experiments addressing their effects on TLR expression have been performed either in vitro or follo studies ,68,69. M studies . We rece studies . 
This daitoleate ,71.CD36 and CD86 gene expression after macronutrient intake, nevertheless such work only included younger and lean men and its lipid and protein loads differed from ours in macronutrient source and composition [Our results concerning monocyte CD36 and CD86 expression suggest an influence of obesity. Remarkably, CD36 and CD86 surface expression showed strong correlations with those of TLR4 and TLR2, respectively. Both TLRs have been associated with CD36 in several studies and postposition . As for However, our study was subjected to certain limitations. We have not performed complementary and in vitro analyses that would have provided valuable information and greater robustness to our results. Due to technical reasons, fluorescent markers for TLRs and CD proteins could not be combined in the flow cytometry experiments. Gene expression assays were carried out in whole white blood cells, therefore, we could not conduct direct correlations with surface levels nor differentiate the true expression in each different leukocyte subpopulation, which hinder a clearer interpretation of the results. Moreover, given the complexity of our study design, the sample size of the subgroups was relatively small because of the need to perform three independent oral loads in alternate days and, thus, our study may have been underpowered to detect small differences among groups of subjects. Finally, the order of macronutrient challenges was not randomized, yet the possibility of carryover effects was minimized by conducting the oral loads in alternate days. 
In our opinion, such limitations were compensated by the rather homogeneous healthy population studied here in terms of age and percentage of obesity, the recommendation to follow the same diet for a few days before the start of the study, the quality of the procedures used in the challenges, and the administration of exactly the same number of calories in the three oral loads.TLR2 and TLR4 gene expression regardless of obesity, sex, or sex hormones, suggesting a lesser effect of protein intake on postprandial inflammation, and that the mechanisms regulating this compensatory response appear to occur at the posttranscriptional level. Additional investigations are required to further discern the specific mechanisms by which macronutrients, obesity, and sex hormones exert their different effects on the immune cell response and the role of TLR-regulation in these processes.We observe a general decrease in the cell-surface expression of leukocyte activation markers after ingestion of either macronutrient that is inversely associated with their expression at fasting, and an opposite influence of obesity and sex on the cell-surface expression of TLR2 and TLR4 in leukocytes. These results suggest a transient compensatory response of immune cells to macronutrient intake aiming to prevent an exacerbated inflammatory process, which is modulated by obesity. However, glucose and lipids, but not proteins, differently and effectively activated leukocyte"} +{"text": "Modern software development and operations rely on monitoring to understand how systems behave in production. The data provided by application logs and runtime environment are essential to detect and diagnose undesired behavior and improve system reliability. However, despite the rich ecosystem around industry-ready log solutions, monitoring complex systems and getting insights from log data remains a challenge. 
Researchers and practitioners have been actively working to address several challenges related to logs, e.g., how to effectively provide better tooling support for logging decisions to developers, how to effectively process and store log data, and how to extract insights from log data. A holistic view of the research effort on logging practices and automated log analysis is key to provide directions and disseminate the state-of-the-art for technology transfer. In this paper, we study 108 papers from different communities and structure the research field in light of the life-cycle of log data. Our analysis shows that (1) logging is challenging not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable a contextual analysis of source code for log recommendation but further investigation is required to assess the usability of those tools in practice, (3) few studies approached efficient persistence of log data, and (4) there are open opportunities to analyze application logs and to evaluate state-of-the-art log analysis techniques in a DevOps context. Software systems are everywhere and play an important role in society and the economy. Failures in those systems may harm entire businesses and cause unrecoverable loss in the worst case. For instance, in 2018, a supermarket chain in Australia remained closed nationwide for 3 h due to \u201cminor IT problems\u201d in their checkout system. While software testing plays an important role in preventing failures and assessing reliability, developers and operations teams rely on monitoring to understand how the system behaves in production. The Elastic stack (https://www.elastic.co/what-is/elk-stack) (a.k.a. \u201cELK\u201d stack) is a popular option to collect, process, and analyze log data (possibly from different sources) in a centralized manner. 
In fact, the symbiosis between development and operations resulted in a mix known as DevOps, where development and operations teams work closely together. The Elastic stack is composed of Elasticsearch (https://www.elastic.co/elasticsearch/), Logstash (https://www.elastic.co/logstash) and Kibana (https://www.elastic.co/kibana): Logstash is a log processor tool with several plugins available to parse and extract log data, Kibana provides an interface for visualization, query, and exploration of log data, and Elasticsearch, the core component of the Elastic stack, is a distributed and fault-tolerant search engine built on top of Apache Lucene (https://lucene.apache.org). Variants of those components from other vendors include Grafana (https://grafana.com) for the user interface and Fluentd (https://www.fluentd.org) for log processing. Once the data is available, operations engineers use dashboards to analyze trends and query the data. Unfortunately, despite the rich ecosystem around industry-ready log solutions, monitoring complex systems and getting insights from log data is challenging. For instance, developers need to make several decisions that affect the quality and usefulness of log data, e.g., where to place log statements and what information should be included in the log message. In addition, log data can be voluminous and heterogeneous due to how individual developers instrument an application and also to the variety in the software stack that composes a system. Those characteristics make it exceedingly hard to make optimal use of log data at scale. Furthermore, companies need to consider privacy, retention policies, and how to effectively get value from the data. 
Even with the support of machine learning and the growing adoption of big data platforms, it is challenging to process and analyze data in a cost-effective and timely manner. The research community, including practitioners, has been actively working to address the challenges related to the typical life-cycle of log data, i.e., how to effectively provide better tooling support for logging decisions to developers (\u201cLogging\u201d), how to effectively process and store log data (\u201cLogging Infrastructure\u201d), and how to extract insights from log data (\u201cLog Analysis\u201d). In this paper, we propose a systematic mapping of the logging research area. To that aim, we study 108 papers that appeared in top-level peer-reviewed conferences and journals from different communities. We structure the research field in light of the life-cycle of log data, elaborate the focus of each research area, and discuss opportunities and directions for future work. Our analysis shows that (1) logging is a challenge not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable contextual analysis of source code for log recommendations but further investigation is required to assess the usability of those tools in practice, (3) few studies address efficient persistence of log data, and (4) while log analysis is a mature field with several applications, there are open opportunities to analyze application logs and to evaluate state-of-the-art techniques in a DevOps context. The goal of this paper is to discover, categorize, and summarize the key research results in log-based software monitoring. To this end, we perform a systematic mapping study to provide a holistic view of the literature in logging and automated log analysis. RQ1: What are the publication trends in research on log-based monitoring over the years? RQ2: What are the different research scopes of log-based monitoring? 
The first research question (RQ1) addresses the historical growth of the research field. Answering this research question enables us to identify the popular venues and the communities that have been focusing on log-based monitoring innovation. Furthermore, we aim at investigating the participation of industry in the research field. Researchers can benefit from our analysis by making a more informed decision regarding venues for paper submission. In addition, our analysis also serves as a guide to practitioners willing to engage with the research community, either by attending conferences or by looking for references to study and experiment with. The second research question (RQ2) addresses the actual mapping of the primary studies. Overly generic search terms return many irrelevant results, while overly specific terms miss relevant studies. To find a balance between those cases, we conducted preliminary searches with different terms and search scopes, e.g., full text, title, and abstract. We considered terms based on \u201clog\u201d, its synonyms, and activities related to log analysis. During this process, we observed that forcing the presence of the term \u201clog\u201d helps to rank relevant studies on the first pages. In case a data source is unable to handle word stemming automatically, we enhance the query with the keyword variations. In addition, we configured the data sources to search on titles and abstracts whenever possible. In case a data source provides no support to search on titles and abstracts, we considered only titles to reduce false positives. This process resulted in the following search query: log AND … Dealing with multiple libraries requires additional work to merge data and remove duplicates. In some cases, the underlying information retrieval algorithms yielded unexpected results when querying some libraries, such as duplicates within the data source and entries that mismatch the search constraints. 
To overcome those barriers, we implemented auxiliary scripts to clean up the dataset. We index the entries by title to eliminate duplicates, and we remove entries that fail to match the search criteria. Furthermore, we keep the most recent work when we identify two entries with the same title and different publication dates. As of December 2018, when we first conducted this search, we extracted 992 entries from Google Scholar, 1,122 entries from ACM Digital Library, 1,900 entries from IEEE Xplore, 2,588 entries from Scopus, and 7,895 entries from SpringerLink. After merging and cleaning the data, we ended up with 4,187 papers in our initial list. We conduct the selection process by assessing the 4,187 entries according to inclusion/exclusion criteria and by selecting publications from highly ranked venues. We define the criteria as follows: C1: It is an English manuscript. C2: It is a primary study. C3: It is a full research paper accepted through peer review. C4: The paper uses the term \u201clog\u201d in a software engineering context, i.e., logs that describe the behavior of a software system; we exclude papers that use the term \u201clog\u201d in an unrelated sense. The rationale for criterion C1 is that major venues use English as the standard idiom for submission. The rationale for criterion C2 is to avoid including secondary studies in our mapping. The rationale for criterion C3 is that some databases return gray literature as well as short papers; our focus is on full peer-reviewed research papers, which we consider mature research, ready for real-world tests. Note that different venues might have different page-number specifications to determine whether a submission is a full or short paper, and these specifications might change over time. We consulted the page numbers from each venue to avoid unfair exclusion. The rationale for criterion C4 is to exclude papers that are unrelated to the scope of this mapping study. 
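The cleanup just described — index entries by title, drop entries that fail the search criteria, and keep the most recent of two same-title entries — can be sketched in a few lines of Python. The field names and filter below are illustrative, not the authors' actual script:

```python
def clean_dataset(entries, matches_criteria):
    """De-duplicate scraped entries by (normalized) title, keeping the
    most recently published one, and drop entries that fail the search
    criteria."""
    by_title = {}
    for entry in entries:
        if not matches_criteria(entry):
            continue  # e.g., the query terms do not actually appear
        key = entry["title"].strip().lower()
        # Keep the most recently published entry for each title.
        if key not in by_title or entry["year"] > by_title[key]["year"]:
            by_title[key] = entry
    return list(by_title.values())

papers = [
    {"title": "A Study on Logging", "year": 2017},
    {"title": "a study on logging", "year": 2019},  # newer duplicate
    {"title": "Unrelated Logarithms", "year": 2018},
]
kept = clean_dataset(papers, lambda e: "logging" in e["title"].lower())
print([p["year"] for p in kept])  # [2019]
```

Normalizing the title before comparison is what catches case-only duplicates; it would still miss the special-character cases mentioned below, which is why a manual pass remains necessary.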
We noticed that some of the results are in the context of, e.g., mathematics and environmental studies. While we could have tweaked our search criteria to minimize the occurrence of those false positives, we were unable to systematically derive all keywords to exclude; therefore, we favored a higher false-positive rate in exchange for increasing the chances of discovering relevant papers. The first author manually performed the inclusion procedure. He analyzed the titles and abstracts of all the papers, marking each paper as \u201cin\u201d or \u201cout\u201d. During this process, the author applied the criteria and categorized the reasons for exclusion. For instance, whenever an entry failed criterion C4, the authors classified it as \u201cOut of Scope\u201d. The categories we used are: \u201cOut of Scope\u201d, \u201cShort/workshop paper\u201d, \u201cNot a research paper\u201d, \u201cUnpublished\u201d, \u201cSecondary study\u201d, and \u201cNon-English manuscript\u201d. It is worth mentioning that we flagged three entries as \u201cDuplicate\u201d, as our merging step missed these cases due to special characters in the title. After applying the selection criteria, we removed 3,872 entries, resulting in 315 entries. In order to filter the remaining 315 papers by rank, we used the CORE Conference Rank (CORE Rank) (http://www.core.edu.au/conference-portal) as a reference. We considered only studies published in venues ranked as A* or A. According to the CORE Rank, those categories indicate that the venue is widely known in the computer science community and has a strict review process by experienced researchers. After applying the rank criteria, we removed 219 papers. Our selection consists of (315 \u2212 219 =) 96 papers after applying inclusion/exclusion criteria (step 1) and filtering by venue rank (step 2). 
We focus the data extraction process on the data required to answer our research questions. To answer RQ1, we collect metadata from the papers and their related venues. Concretely, we define the following schema: \u201cYear of publication\u201d, \u201cType of publication\u201d, \u201cVenue name\u201d, and \u201cResearch community\u201d. The fields \u201cYear of publication\u201d and \u201cVenue name\u201d are readily available in the data scraped from the data sources. To extract the field \u201cType of publication\u201d, we automatically assign the label \u201cjournal\u201d if it is a journal paper. For conference papers, we manually check the proceedings to determine whether it is a \u201cresearch track\u201d or \u201cindustry track\u201d paper (we assume \u201cresearch track\u201d if not explicitly stated). To extract the field \u201cResearch community\u201d, we check the topics of interest of the conferences and journals. This information is usually available on a \u201ccall for papers\u201d page. Later, we manually aggregate the venues and merge closely related topics. While a complete meta-analysis is out of the scope of our study, we believe the extracted data is sufficient to address RQ1. To answer RQ2, we collect the abstracts from the primary studies. In this process, we structure the abstract to better identify the motivation of the study, what problem the authors are addressing, how the researchers are mitigating the problem, and the results of the study. Given the diverse set of problems and domains, we first group the studies according to their overall context. To mitigate self-bias, we conducted two independent triages and compared our results. In case of divergence, we review the paper in depth to assign the context that better fits the paper. To derive the classification schema for each context, we perform keywording of the abstracts. The divergences were then discussed with the second author of this paper. 
Furthermore, the second author reviewed the resulting classification. Note that, while a paper may address more than one category, we choose the category related to the most significant contribution of that paper. As of October 2020, we updated our survey to include papers published in 2019, since we first conducted this analysis in December 2018. To this end, we select all 11 papers from 2018 and perform forward snowballing to fetch a preliminary list of papers from 2019. We use snowballing for simplicity since we can leverage the \u201cCited By\u201d feature from Google Scholar rather than scraping data from all five digital libraries. It is worth mentioning that we limit the results up to 2019 to avoid incomplete results for 2020. For the preliminary list of 2019, we apply the same selection and rank criteria (see Section \u201cStudy Selection\u201d); then, we analyze and map the studies according to the existing classification schema (see Section \u201cData Extraction and Classification\u201d). In this process, we identify 12 new papers and merge them with our existing dataset. Our final dataset consists of (96 + 12 =) 108 papers. We identified 108 papers published in 46 highly ranked venues spanning different communities (see Table 2). LOGGING: Research in this category aims at understanding how developers conduct logging practices and at providing better tooling support to developers. There are three subcategories in this line of work: (1) empirical studies on logging practices, (2) requirements for application logs, and (3) implementation of log statements. LOG INFRASTRUCTURE: Research in this category aims at improving log processing and persistence. There are two subcategories in this line of work: (1) log parsing, and (2) log storage. LOG ANALYSIS: Research in this category aims at extracting knowledge from log data. 
There are eight subcategories in this line of work: (1) anomaly detection, (2) security and privacy, (3) root cause analysis, (4) failure prediction, (5) quality assurance, (6) model inference and invariant mining, (7) reliability and dependability, and (8) log platforms. We grouped the studied papers among these three categories based on our understanding of the life-cycle of log data. LOG ANALYSIS dominates most of the research effort (68 out of 108 papers), with papers published since the early 90\u2019s. LOG INFRASTRUCTURE is younger than LOG ANALYSIS, as we observed papers starting from 2007 (16 out of 108 papers). LOGGING is the youngest area of interest, with an increasing momentum for research (24 out of 108 papers). In the following, we elaborate our analysis and provide an overview of the primary studies. We provide an overview of the categories, their respective descriptions, and a summary of our results. Logging frameworks typically provide several log levels: debug for low-level logging, info to provide information on the system execution, error to indicate an unexpected state that may compromise the normal execution of the application, and fatal to indicate a severe state that might terminate the execution of the application. Logging an application involves several decisions, such as what to log. These are all important decisions since they have a direct impact on the effectiveness of the future analysis. Excessive logging may cause performance degradation due to the number of write operations and might be costly in terms of storage. Conversely, insufficient information undermines the usefulness of the data to the operations team. It is worth mentioning that the underlying environment also provides valuable data. Environment logs provide insights about resource usage, and this data can be correlated with application logs in the analysis process. In contrast to application logs, developers are often not in control of environment logs. 
On the other hand, they are often highly structured and are useful as a complementary data source that provides additional context. Log messages are usually in the form of free text and may expose parts of the system state to provide additional context. The full log statement also includes a severity level to indicate the purpose of that statement. LOGGING deals with the decisions from the developer\u2019s perspective. Developers have to decide the placement of log statements, what message description to use, which runtime information is relevant to log, and the appropriate severity level. Efficient and accurate log analysis relies on the quality of the log data, but it is not always feasible to know the requirements of log data upfront, at development time. We observed three different subcategories in log engineering: (1) empirical studies on log engineering practices, (2) techniques to improve log statements based on known requirements for log data, and (3) techniques to help developers make informed decisions when implementing log statements. In the following, we discuss the 24 log engineering papers in the light of these three types of studies. Understanding how practitioners deal with the log engineering process in a real scenario is key to identifying open problems and providing research directions. Papers in this category aim at addressing this agenda through empirical studies in open-source projects (and their communities). It is worth mentioning that the need for tooling support for logging also applies in an industry setting, and studies have also examined logging in the context of mobile development. Understanding the meaning of logs is important not only for analysis but also for the maintenance of logging code. However, one challenge that developers face is to actively update log-related code along with functionality changes. 
The code base naturally evolves, but due to unawareness of how features are related to log statements, the latter become outdated and may produce misleading information. An important requirement of log data is that it must be informative and useful for a particular purpose. Papers in this subcategory aim at evaluating whether log statements can deliver the expected data, given a known requirement. Fault injection is a technique that can be useful to assess the diagnosability of log data, i.e., whether log data can manifest the presence of failures. Past studies conducted experiments in open-source projects and show that logs are unable to produce any trace of failures in most cases. LOGENHANCER leverages program analysis techniques to capture additional context to enhance log statements in the execution flow. Differently from past work with fault injection, LOGENHANCER proposes the enhancement of existing log statements rather than the addition of log statements in missing locations. In the context of web services, another approach to address diagnosability is a logging framework based on SOAP intermediaries that intercepts messages exchanged between client and server and enhances web logs with important data for monitoring and auditing, e.g., response and processing time. Developers need to make several decisions at development time that influence the quality of the generated log data. Past studies in logging practices show that, in practice, developers rely on their own experience, and logging is conducted in a trial-and-error manner in open-source projects and in industry. One proposed tool in this space is LOGTRACKER. In another study, the authors proposed a checker that encodes the anti-patterns identified in their analysis; some of these patterns are usage of nullable references, explicit type casts, and malformed output in the log statement. 
In the evaluation, they discovered new issues not only in the analyzed systems but also in two other systems (Camel and Wicket). Note that several of the recurrent problems mentioned above can be captured by static analysis before merging changes into the code base. LOGADVISOR is a technique that leverages supervised learning with feature engineering to suggest log placement for unexpected situations, namely catch blocks (\u201cexception logging\u201d) and if blocks with return statements. Some of the features defined for the machine learning process are the size of the method, i.e., the number of lines of source code, the names of method parameters, the names of local variables, and the method signature. They evaluated LOGADVISOR on two proprietary systems from Microsoft and two open-source projects hosted on GitHub. The results indicate the feasibility of applying machine learning to provide recommendations for where to place new log statements. Deciding where to place log statements is critical to provide enough context for later analysis. One way to identify missing locations is to use fault injection (see \u201cLog Requirements\u201d). However, the effectiveness of that approach is limited by the quality of the tests and their ability to manifest failures. Furthermore, log placement requires contextual information that is infeasible to capture with static analysis alone. Another approach to address consistent log placement in large code bases is to leverage source code analysis and statistical models to mine log patterns. Choosing the appropriate severity level of log statements is also a challenge. Recall that logging frameworks provide the feature of suppressing log messages according to the log severity. An important part of a log statement is the description of the event being logged. Inappropriate descriptions are problematic and delay the analysis process. In addition to log descriptors, the state of the system is another important piece of information about the event being logged. 
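As a small illustration of severity levels and suppression (not tied to any of the surveyed tools), Python's standard logging module behaves as follows; the logger name and messages are invented:

```python
import logging

# Hypothetical application logger; with the threshold set to WARNING,
# debug and info messages are suppressed while warnings still appear.
logger = logging.getLogger("checkout")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.WARNING)

logger.debug("cache miss for order %s", "A-42")         # suppressed
logger.info("order %s processed", "A-42")               # suppressed
logger.warning("retrying charge for order %s", "A-42")  # emitted
```

This is exactly why the chosen level matters: a statement logged at debug carries no cost in production output but also provides no trace when the threshold is higher.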
The infrastructure supporting the analysis process plays an important role because the analysis may involve the aggregation and selection of high volumes of data. The requirements for the data processing infrastructure depend on the nature of the analysis and the nature of the log data. For instance, popular log processors, e.g., Logstash and Fluentd, provide regular expressions out of the box to extract data from well-known log formats of popular web servers. However, extracting content from highly unstructured data into a meaningful schema is not trivial. LOG INFRASTRUCTURE deals with the tooling support necessary to make further analysis feasible. For instance, the data representation might influence the efficiency of data aggregation. Other important concerns include the ability to handle log data for real-time or offline analysis and the scalability to handle the increasing volume of data. We observed two subcategories in this area: (1) log parsing, and (2) log storage. In the following, we summarize the 16 studies on log infrastructure grouped by these two categories. Parsing is the backbone of many log analysis techniques. Some analyses operate under the assumption that source code is unavailable; therefore, they rely on parsing techniques to process log data. Given that log messages often have variable content, the main challenge tackled by these papers is to identify which log messages describe the same event, e.g., \u201cConnection from A port B\u201d and \u201cConnection from C port D\u201d represent the same event. The heart of studies in parsing is template extraction from raw log data. Fundamentally, this process consists of identifying the constant and variable parts of raw log messages. 
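A toy illustration of this constant-versus-variable split, assuming the messages of one event type tokenize to the same length (real parsers must also group messages and handle length differences):

```python
def extract_template(messages):
    """Mask tokens that vary across same-event messages with a
    wildcard; the constant tokens form the template."""
    token_lists = [m.split() for m in messages]
    template = []
    for tokens in zip(*token_lists):
        # A position is "constant" if every message agrees on it.
        template.append(tokens[0] if len(set(tokens)) == 1 else "<*>")
    return " ".join(template)

print(extract_template([
    "Connection from A port B",
    "Connection from C port D",
]))  # Connection from <*> port <*>
```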
For example, IPLOM (Iterative Partitioning Log Mining) leverages the similarities between log messages related to the same event, e.g., the number, position, and variability of tokens. Another approach to reduce storage costs consists of data compression techniques for efficient analysis. After the processing of log data, the extracted information serves as input to sophisticated log analysis methods and techniques. Such analyses, which make use of varying algorithms, help developers in detecting unexpected behavior, performance bottlenecks, or even security problems. LOG ANALYSIS deals with knowledge acquisition from log data for a specific purpose, e.g., detecting undesired behavior or investigating the cause of a past outage. Extracting insights from log data is challenging due to the complexity of the systems generating that data. We observed eight subcategories in this area: (1) anomaly detection, (2) security and privacy, (3) root cause analysis, (4) failure prediction, (5) quality assurance, (6) model inference and invariant mining, (7) reliability and dependability, and (8) platforms. In the following, we summarize the 68 studies on log analysis grouped by these eight different goals. Anomaly detection techniques aim to find undesired patterns in log data, given that manual analysis is time-consuming, error-prone, and infeasible in many cases. We observe that a significant part of the research in the logging area is focused on this type of analysis. Often, these techniques focus on identifying problems in software systems. 
Based on the assumption that an \u201canomaly\u201d is something worth investigating, these techniques look for anomalous traces in the log files. Researchers have been trying several different techniques, such as deep learning, NLP, and data mining. Interestingly, while these papers often make use of system logs for evaluation, we conjecture that these approaches are sufficiently general and could be explored on (or are worth trying on) other types of logs. DEEPLOG is a deep neural network model that uses Long Short-Term Memory (LSTM) to model system logs as a natural language sequence; related work used the LOGMINE technique to represent extracted templates from logs and LSTMs to learn common sequences of log events. Researchers have also explored log analysis techniques within specific contexts, for instance, finding anomalies in HTTP logs by using dimensionality reduction techniques. Logs can be leveraged for security purposes, such as intrusion and attack detection. An interesting characteristic among them all is that the most used log data is, by far, network data. We conjecture this is due to the fact that (1) network logs are independent from the underlying application, and that (2) the network tends to be, nowadays, a common way of attacking an application. Differently from analysis techniques whose goal is to find a bug, which is represented in the logs as an anomaly, understanding which characteristics of log messages can reveal security issues is still an open topic. Detecting anomalous behavior, either by automatic or monitoring solutions, is just part of the process. Maintainers need to investigate what caused that unexpected behavior. 
Several studies attempt to take the next step and provide users with, e.g., root cause analysis, accurate failure identification, and impact analysis. Being able to anticipate failures in critical systems not only represents a competitive business advantage but also prevents unrecoverable consequences to the business. Failure prediction is feasible once there is knowledge about abnormal patterns and their related causes. However, it differs from anomaly detection in the sense that identifying the patterns preceding an unrecoverable state requires insights from root cause analysis. This approach shifts monitoring from a reactive mode, i.e., acting once the problem has occurred, to a proactive one. Work in this area, as expected, relies on statistical and probabilistic models, from standard regression analysis to machine learning. We noticed that, given that only supervised models have been used so far, feature engineering plays an important role in these papers. Log analysis might also support developers during the software development life cycle and, more specifically, during activities related to quality assurance. Model-based approaches to software engineering seek to support understanding and analysis by means of abstraction. However, building such models is a challenging and expensive task. Logs serve as a source for developers to build representative models and invariants of their systems. These models and invariants may help developers in different tasks, such as comprehension and testing. These approaches generate different types of models, such as (finite) state machines. State machines are the most common type of model extracted from logs. The mining of properties that a system should hold has also been possible via log analysis. We also observe directed workflow graphs and dependency maps as other types of models built from logs. 
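A minimal sketch of such model inference — mining a directed transition graph from event traces — might look as follows. The event names are invented, and real techniques additionally merge states, tolerate noise, and weight edges by frequency:

```python
def mine_transitions(traces):
    """Collect the set of directed edges (event -> next event)
    observed across a list of event traces."""
    edges = set()
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            edges.add((src, dst))
    return edges

traces = [
    ["start", "auth", "work", "stop"],
    ["start", "auth", "stop"],
]
print(sorted(mine_transitions(traces)))
# [('auth', 'stop'), ('auth', 'work'), ('start', 'auth'), ('work', 'stop')]
```

The resulting edge set already acts as a crude invariant: a new trace containing a transition outside this set, say ("work", "auth"), deviates from the inferred model.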
Logs can also serve as a means to estimate how reliable and dependable a software system is. Research in this subcategory often focuses on large software systems, such as web and mobile applications, which are distributed in general, and high-performance computers. In the web domain, one approach leverages NOSQL databases and Big Data technology (Spark) for efficient in-memory processing to assist system administrators. Outside the web domain, analyzing the performance of mobile applications can be challenging, especially when they depend on back-end distributed services; IBM researchers proposed an approach in this space. Monitoring systems often contain dashboards and metrics to measure the \u201cheartbeat\u201d of the system. In the occurrence of abnormal behavior, the operations team is able to visualize the abnormality and conduct further investigation to identify the cause. Techniques to reduce/filter the amount of log data and efficient querying play an important role in supporting the operations team in diagnosing problems. One consideration is that, while visual aids are useful, at one extreme it can be overwhelming to handle several charts and dashboards at once. In addition, it can be non-trivial to judge whether an unknown pattern on the dashboard represents an unexpected situation. In practice, operations engineers may rely on experience and past situations to make this judgment. Papers in this subcategory focus on full-fledged platforms that aim at providing a full experience for monitoring teams. Two studies were explicitly conducted in an industry setting, one of them MELODY at IBM. Another study presented IDAL, a tool to characterize the workload of cloud infrastructures; the authors use log data from Google data clusters for evaluation and incorporate support for popular analysis languages and storage backends in their tool. Another is LOGAIDER, a tool that integrates log mining and visualization to analyze different types of correlation. 
In this study, they used log data from Mira, an IBM Blue Gene-based supercomputer for scientific computing, and reported high accuracy and precision in uncovering correlations associated with failures. While an industry setting is not always accessible to the research community, publicly available datasets are useful to overcome this limitation; CLOUDSEER is one example. In LOGGING, part of the problem is the lack of requirements for log data. When the requirements are well defined, logging frameworks can be tailored to a particular use case and it is feasible to test whether the generated log data fits the use case (see subcategory \u201cLog Requirements\u201d). However, when requirements are not clear, developers rely on their own experience to make log-related decisions. While static analysis is useful to anticipate potential issues in log statements, other logging decisions rely on the context of the source code (see subcategory \u201cImplementation of Log Statements\u201d). Research in this area already shows the feasibility of employing machine learning to address those context-sensitive decisions. However, the implications of deploying such tools to developers are still unknown. Further work is necessary to address usability and operational aspects of those techniques. For instance, false positives are a reality in machine learning: there is no 100% accurate model, and false positives will eventually emerge, even if at a low rate. How to communicate results in a way that keeps developers productively engaged is important to bridge the gap between theory and practice. This also calls for closer collaboration between academia and industry. In LOG INFRASTRUCTURE, most of the research effort focused on parsing techniques. We observed that most papers in the \u201cLog Parsing\u201d subcategory address the template extraction problem as an unsupervised problem, mainly by clustering the static part of the log messages. 
While the analysis of system logs has been extensively explored (mostly Hadoop log data), little has been explored in the field of application logs. We believe that this is due to the lack of publicly available datasets. In addition, application logs might not have a well-defined structure and can vary significantly from structured system logs. This could undermine the feasibility of exploiting clustering techniques. One way to address the availability problem could be to use log data generated from test suites in open-source projects. However, test suites might not produce a comparable volume of data. Unless there is a publicly available large-scale application that could be used by the research community, we argue that the only way to explore log parsing at large scale is in partnership with industry. Industry would highly benefit from this collaboration, as researchers would be able to explore the latest techniques under a real workload environment. In addition to the exploration of application logs, there are other research opportunities for log parsing. Most papers exploit parsing for log analysis tasks. While this is an important application with its own challenges, parsing could also be applied for efficient log compression and better data storage. LOG ANALYSIS is the research area with the highest number of primary studies, and our study shows that the body of knowledge for data modeling and analysis is already extensive. For instance, logs can be viewed as sequences of events, count vectors, or graphs. Each representation enables the usage of different algorithms that might outperform other approaches under different circumstances. However, it remains open how different approaches compare to each other. To fill this gap, future research must address what trade-offs apply and elaborate on the circumstances that make one approach more suitable than another.
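The clustering-style template extraction described above can be illustrated with a minimal sketch. The regexes, sample messages, and the `template_of` helper below are illustrative assumptions, not taken from any of the surveyed parsers; the sketch only shows the general idea of masking variable tokens so that messages produced by the same log statement collapse to one template, which can then feed a count-vector representation for log analysis.

```python
import re
from collections import Counter, defaultdict

def template_of(message):
    """Mask likely variable tokens (hex IDs, numbers, paths) so that
    messages produced by the same log statement collapse to one template."""
    masked = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", message)
    masked = re.sub(r"\b\d+(\.\d+)*\b", "<*>", masked)   # ints, floats, IPs
    masked = re.sub(r"(/[\w.-]+)+", "<*>", masked)       # filesystem paths
    return masked

logs = [
    "Connection from 10.0.0.5 closed after 120 ms",
    "Connection from 10.0.0.9 closed after 87 ms",
    "Wrote 4096 bytes to /var/data/block_17",
    "Wrote 512 bytes to /var/data/block_9",
]

# Group raw messages by their template (the "static part" of the message).
groups = defaultdict(list)
for line in logs:
    groups[template_of(line)].append(line)

# A count vector over templates is one common input representation
# for downstream log analysis tasks such as anomaly detection.
counts = Counter(template_of(line) for line in logs)
for template, n in counts.items():
    print(n, template)
```

Published parsers use far more robust heuristics than these three regexes, but the grouping principle — separate the static text from the variable parts, then cluster — is the same.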
A public repository on GitHub (Loghub: https://github.com/logpai/loghub) contains several datasets used in many studies in log analysis. We encourage practitioners and researchers to contribute to this collective effort. In addition, most papers frame a log analysis task as a supervised learning problem. While this is the most popular approach for machine learning, the lack of representative datasets with labeled data is an inherent barrier. In projects operating in a continuous delivery culture, where software changes at a fast pace, training data might become outdated quickly, and the cost of collecting and labeling new data might be prohibitive. We suggest that researchers also consider how their techniques behave in such dynamic environments. More specifically, future work could explore the use of semi-supervised and unsupervised learning to overcome the cost of creating and updating datasets.

LOG ANALYSIS is a mature field, and we believe that part of this success is due to the availability of datasets to foster innovation. LOGGING and LOG INFRASTRUCTURE, on the other hand, are still in an early stage of development. There are several barriers that hinder innovation in those areas, e.g., the lack of representative application log data and of access to developers. We believe that closing the gap between academia and industry can increase momentum and enable the future generation of tools and standards for logging.

Our study maps the research landscape in logging, log infrastructure, and log analysis based on our interpretation of the 108 studies published from 1992 to 2019. In this section, we discuss possible threats to the validity of this work and possibilities for future expansions of this systematic mapping. The main threat to the generalization of our conclusions relates to the representativeness of our dataset. Our procedure to discover relevant papers consists of querying popular digital libraries rather than looking into already known venues in Software Engineering (the authors\u2019 field of expertise). While we collected data from five different sources, it is unclear how each library indexes the entries. It is possible that we may have missed a relevant paper because none of the digital libraries reported it. Therefore, the search procedure might be unable to yield complete results. Another factor that influences the completeness of our dataset is the filtering of papers based on venue rank (A and A* according to the CORE Rank). There are several external factors that influence the acceptance of a paper that are not necessarily related to the quality and relevance of the study. The rationale for applying the exclusion criterion by venue rank is to reduce the dataset to a manageable size using a well-defined rule. Overall, it is possible that relevant studies might be missing in our analysis. Nevertheless, we analyze a broad corpus of high-quality studies that cover the life-cycle of log data. The main threat to the internal validity relates to our classification procedure. The first author conducted the first step of the characterization procedure. Given that the entire process was mostly manual, this might introduce bias into the subsequent analysis. To reduce its impact, the first author performed the procedure twice. Moreover, the second author revisited all the decisions made by the first author throughout the process. All divergences were discussed and settled throughout the study.

In this work, we show how researchers have been addressing the different challenges in the life-cycle of log data. Logging provides a rich source of data that can enable several types of analysis that are beneficial to the operations of complex systems.
10.7717/peerj-cs.489/supp-1 Supplemental Information 1: Each row indicates a primary study that includes the respective metadata, extracted information, and classification."} +{"text": "Sensor data from digital health technologies (DHTs) used in clinical trials provides a valuable source of information, because it can be combined across datasets from different studies, combined with other data types, and reused multiple times for various purposes. To date, there exist no standards for capturing or storing DHT biosensor data that are applicable across modalities and disease areas and that can also capture the clinical trial and environment-specific aspects, the so-called metadata. In this perspectives paper, we propose a metadata framework that divides the DHT metadata into metadata that is independent of the therapeutic area or clinical trial design (concept of interest and context of use), and metadata that is dependent on these factors. We demonstrate how this framework can be applied to data collected with different types of DHTs deployed in the WATCH-PD clinical study of Parkinson\u2019s disease. This framework provides a means to pre-specify and therefore standardize aspects of the use of DHTs, promoting comparability of DHTs across future studies. Digital health technologies (DHTs) are attracting considerable interest in clinical trials of new treatments for Parkinson\u2019s disease (PD). These technologies have many potential advantages that complement traditional clinical assessments, including high-frequency data collection, improved objectivity, the ability to capture occasional events such as freezing of gait, and more naturalistic data collected in a home setting. However, data standardization and harmonization for DHTs used in clinical research is in its infancy.
While true standardization will take time, progress in this area is likely to accelerate the regulatory acceptability of measures derived from DHTs. Advancing the regulatory maturity of measurements derived from DHTs is a particular focus of the Critical Path for Parkinson\u2019s Digital Drug Development Tools (3DT) consortium, and the absence of a suitable metadata framework for standardizing the way measurements are made using DHTs in clinical trials was identified as one of the key barriers to advancing their widespread use in clinical studies intended for submission to health regulators. Metadata is the data needed to interpret the data (such as motion sensor data) collected by a DHT. A crucial step towards standardization and harmonization of DHTs for use in endpoint development is defining the metadata needed to describe how the data used to produce the study endpoint has been generated. For example, the DHTs we are considering in this manuscript collect data from one or more individual data collection devices (e.g., motion sensors, microphones, PPG or ECG sensors). Each of these types of sensors can be implemented in different hardware, can be incorporated into devices with different form factors, can be worn in different body locations, and can be configured and pre-processed by firmware and software in numerous ways before the sensor data is output from the device. Firstly, therefore, we propose dividing DHT metadata into two classes: application independent and application dependent. Secondly, we propose that this metadata framework can go beyond describing the DHT data and also has the potential to be used to pre-specify how data is collected in future clinical trials in order to help standardize DHT use across studies.
In the regulatory language of drug development tools, the application-specific metadata is associated with the concept of interest and context of use. Comprehensive metadata is needed to describe how data from an individual DHT is collected in order to properly interpret the DHT output and compare with similar data from a different DHT or the same DHT configured in a different way. Further metadata is needed to describe how the DHT is deployed in a particular clinical application such as a clinical trial of a new treatment for early PD. This additional metadata needs to detail how the clinical trial was performed, such as the clinical population being studied, whether the DHT collected data \u201cpassively\u201d or during performance of standardized tasks (\u201cactively\u201d), and any application-specific data analysis undertaken. Previously, a metadata concept for advancing the use of digital health technologies in Parkinson\u2019s disease was proposed in Badawy et al., 2019. Once a metadata framework has been defined and implemented, it provides a way to fully describe how DHT outputs have been obtained, which will facilitate data sharing and collaboration as well as reproduction of the measure in future studies wishing to use a given endpoint generated by a DHT. Using a metadata framework to pre-specify how data is collected could also enable new ways of capturing and controlling the sources of variability that are described in a related publication from this consortium. In this paper, we describe how metadata has helped to advance regulatory maturity of other drug development tools. We then propose a metadata framework, which was developed by members of the Critical Path for Parkinson\u2019s consortium (CPP).
Formulation of this framework took into account approaches used for other drug development tools along with our own independent efforts, which were informed by the specific feedback we obtained from health regulators in the US and Europe. A successful example of standardization and harmonization of digital data with a similar degree of complexity to that of DHTs is neuroimaging in Alzheimer\u2019s disease clinical trials. An aim of the Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI) project was to make neuroimaging measurements more acceptable to regulators for evaluating novel treatments for Alzheimer\u2019s disease. A key goal of ADNI was therefore to use MRI scanners and PET scanners at multiple hospitals with different hardware and software configurations to make comparable measurements of disease-related change in the brains of Alzheimer\u2019s disease patients. The starting point was an experimental process to standardize the way the data was collected from a pre-determined range of MRI and PET scanners manufactured by different companies, to obtain comparable results for some defined metrics of brain volume change and brain function. This standardization effort involved application-independent metadata that described how data was collected from those devices (in this case MRI and PET scanners), and application-dependent metadata describing how these are used for deriving a measure relevant to the desired clinical trial application. The increasing regulatory maturity of imaging endpoints in Alzheimer\u2019s disease clinical trials, which the ADNI project helped standardize, is illustrated by the recent accelerated approval of aducanumab by the FDA. Accelerated approval is an FDA mechanism to approve a novel drug in an area of important medical need, based on the drug\u2019s effect on a \u201csurrogate endpoint\u201d that is reasonably likely to predict a clinical benefit to patients, rather than a more traditional clinical endpoint.
In the case of aducanumab, the surrogate endpoint was the amyloid positron emission tomography standardized uptake value ratio (PET SUVR), which the FDA concluded showed that the drug provided a dose- and time-dependent reduction in amyloid beta plaque in patients with Alzheimer\u2019s disease. Key aspects of the standardized approach to neuroimaging data in AD are (1) a description of all the MRI and PET scanning parameters, augmented by a detailed set of instructions on how patients should be prepared and positioned for scanning, (2) how data quality assurance and handling should be performed, and (3) defining the clinically meaningful derived measures that were suitable as study endpoints. This example from a different field demonstrates that obtaining comparable data from heterogeneous data collection devices involved both defining application-independent metadata and application-dependent metadata. The essential types of metadata that should be collected during a clinical trial to describe the process in appropriate detail are: Measurement Device and Hub metadata, Sensor and Signal metadata, Participant/Population metadata, Analysis metadata, Experimental metadata, and Contextual metadata. We refer to the metadata that describes the sensor(s) and signals, and the data collection hardware and software, as application-independent metadata. Some of this metadata is quite generic and can be used to describe data collected for many clinical applications. For example, a wearable actigraphy device could be used to measure many different parameters in different patient populations, including total activity, gait speed, turning gait, falls, sleep, and tremor. For all these applications, there is therefore a core set of application-independent metadata.
Other metadata is specific to a particular clinical application and is required, e.g., to compare or combine measurements from DHTs obtained in different clinical trials for the same therapeutic area; we refer to this as application-dependent metadata.

To support standardization and quality control of DHT data in clinical trials, we propose that the values of the metadata required for a particular study should be specified a priori. In this way, the metadata framework does not just retrospectively describe data that has already been collected but can be used to define how it should be collected and enable quality assurance of that data collection. Furthermore, by using the same pre-specified values for different studies, such pre-specification will support standardization of measurement across trials, and re-use of analytically validated DHT methodology. For example, the application-independent metadata might pre-specify the desired data acquisition rate, the location and orientation of a wrist-worn accelerometer, and the required hardware and firmware version. The application-dependent metadata could specify any Patient-Reported Outcome (PRO) or clinical assessment completed simultaneously with the data collection, and the time of day and expected duration of any specific \u201cactive\u201d tasks performed by the subjects. These pre-specified values could then be used to set up the DHTs being deployed in the study and could also be used for an automatic quality check that the collected data contains the pre-specified characteristics.

In the same way that we carefully specify the exact sequence of events, tests, procedures, and measurements during the execution of a clinical trial, we should also specify the minimum requirements for any data coming from DHTs, including the metadata, such that we can ensure successful completion of the study. We propose that this detail should be provided in a dedicated DHT Charter that might be annexed to the protocol, or treated like the Independent Review Charters used to standardize imaging endpoints in clinical trials.

The application-independent metadata comprises the following elements.

Measurement device and hub: Comprises metadata that uniquely identifies the:
\u25cb Measurement device used for data collection, including its brand, model, serial number, hardware and firmware version. This metadata needs to allow the device location on the body and orientation to be recorded. Furthermore, by tracking the individual device ID, any change in performance over time or repairs can be associated with the data.
\u25cb Hub: Since many wearable measurement devices work in combination with a separate device in order to interconnect with a remote database and potentially also to perform other functions such as pre-processing and authentication, the application-independent metadata also includes metadata to uniquely describe the hardware/software of the hub to which the measurement device connects.
\u25cb The file format and technical aspects of the data storage and transfer.
\u25cb Metadata version: The metadata framework needs to be able to be refined, so it is important that there is a metadata version associated with the device collecting data.

Sensor and Signals: The description of the types of data collected, including the modality, the recording mode, any calibration of the sensor performed prior to deployment in each study, and data rates and timing. A single DHT may generate multiple signals with distinct metadata; for example, a DHT might include an accelerometer, gyroscope, magnetometer and PPG sensor, each operating at a different acquisition frequency and with different timing information.
The metadata framework supports this through a single device supporting multiple sets of sensor metadata. The signals can cover traditional wearable sensor signals, but may also be used for environmental context signals, such as the ambient temperature where the subject is located, whether the subject is indoors or outdoors, which room in their home they are located in, and whether they are alone in that room or accompanied.

Participant/Population: We propose that the application-independent metadata has a single element of participant/population metadata, namely a unique identifier that can be linked to this subject\u2019s data in the application-dependent metadata.

Analysis metadata: The application-independent metadata needs to describe any generic analysis performed in the device itself, which we refer to as \u201cpre-processing\u201d to distinguish it from endpoint-specific analysis that is application dependent.

Experiment metadata: We propose that the only application-independent metadata element for the experimental metadata is an experiment identifier. The details of the experiment being performed are application dependent.

A practical benefit of this approach is that the application-independent metadata is compact and generic, while at the same time being closely associated with the application-dependent metadata described in the next section. The application-independent metadata alone is not sufficient to be able to reproduce the experimental context in which a given measurement was completed. The required metadata needs to take into account the underlying clinical sign or symptom, or biological process, that is being measured using the DHT. We call this additional metadata the \u201capplication-dependent metadata.\u201d

Subject metadata: The application-independent metadata only includes a subject unique identifier (UID).
The application-dependent metadata includes the relevant demographics and associated health information relevant to the clinical study concerned, the inclusion and exclusion criteria, and any comorbidities relevant to this study.

Analysis metadata: The data analysis is in many cases very specific to the clinical trial design. We refer to this application-dependent analysis as \u201cendpoint analysis\u201d to distinguish it from the generic pre-processing described in the application-independent metadata. All relevant software versions and selectable parameters must be clearly defined.

Experiment metadata: This describes the clinical trial cohort in which a given subject is enrolled, the clinical site, any clinical trial questionnaire or human observation metadata, and a reference to the applicable protocol and its version number. In particular, this metadata needs to include details of any active tests and passive monitoring involved, and the details of the active test.

Contextual data: A description of the environment of data collection, and properties of the environment if available, should be included in the metadata.

The application-dependent metadata aspect of the framework is flexible to support use in multiple clinical trial designs in PD and beyond. Some key challenges in implementing application-dependent metadata, and the way in which this framework addresses these, are listed below.

Variability in metadata requirements across clinical applications and sensor modalities: For example, we may be interested in monitoring gait in patients with Parkinson\u2019s disease. In one specific clinical trial, we may want to evaluate a patient\u2019s gait using a wrist-worn wearable device in a clinical setting during the performance of a 6-min walk test.
In that instance, it may be necessary to record, as metadata, the actual length of the lab or walkway that the patient is using for the test and whether the test was performed with or without caregiver support. This application-dependent metadata would not be required if one wished to evaluate the same patient\u2019s gait at home. Similarly, in some clinical trials, data might be acquired continuously (passively), while for other applications, data would be collected when the subject is prompted to perform a task or complete a PRO (active). For active tasks, the application-dependent metadata would need to include a description of the prompt or simultaneous PRO to fully describe the data collection. The metadata framework proposed here incorporates the necessary detail in the experiment metadata portion of the application-dependent metadata.

Variability in metadata requirements across different stages of PD: Severity of disease would also have a significant impact on the metadata that should be recorded within a particular trial. If studying individuals with probable PD in the pre-manifest stage of the disease, there may be minimal motor symptoms, and as such, subjects may often engage in vigorous activities such as running that would be captured by a continuously recording activity monitor. This would not be the case for patients with advanced disease, who may struggle to safely and independently navigate their own homes. Thus, if we were to devise a measure of \u201caverage daily activity,\u201d it would vary greatly across these two populations. In the case of pre-manifest PD, a study might measure the amount of time of moderate-to-vigorous activity per week, and in the case of advanced PD, a study might seek to measure any and all activity.
In the former group, we would need to capture extraneous factors that may have impacted a patient\u2019s ability to perform vigorous activity: if we are monitoring a golfer who usually plays 2\u20133 times/week, a month-long weather pattern may substantially alter their activity levels. In the latter group, these factors may not be as relevant. The flexible design of the application-dependent metadata format in the proposed framework allows this variability to be described in the experiment metadata and analysis metadata.

Data pre-processing: Another challenge we face is that DHTs do not always provide ready access to the raw data, as we are used to collecting from research-grade clinical equipment. Additionally, even if there is access to some version of the raw data, these data often vary greatly across devices, based on the manufacturer or even the version of a specific device. Smartwatch actigraphy devices that generally report step count over a defined epoch often claim to also output raw data that we hope to use for clinical research. Indeed, the term \u201craw data\u201d is seldom the output of the analogue-to-digital converters (ADCs) in the sensor, but normally has filters or data compression applied and is often the output of a software interface (API) provided by the manufacturer. Data streams with such differences may not be used interchangeably. The application-independent metadata in our proposed framework addresses this challenge by including both sensor metadata fields that record software and hardware versions, and the Device ID. The Device section of the application-independent metadata in the framework therefore uniquely identifies the type of pre-processed data available, even when the details of the pre-processing are not provided by the manufacturer.
Given precise specification of this device metadata, lab-based experiments can be used to test whether the data output from different hardware/software versions of the same device is sufficiently similar to be combined in a particular context.

Data analysis: In addition to pre-processing on the wearable hardware itself, DHTs involve analysis to calculate measurements of interest from the sensor data. This analysis is normally done after pre-processing, and on one or more separate devices such as a mobile phone app, a home hub, or a cloud server. Data analysis software can evolve and be changed, and indeed some of this type of analysis software is \u201cself-learning\u201d and changes how it works in use. The proposed framework addresses this challenge through a detailed description of all pre-processing in the application-independent metadata, including details of hardware and software versions of any smartphone app or hub that is used as an intermediary between a data collection device and the data analysis platform. The application-dependent metadata in our framework then uniquely identifies the data analysis software, including its version number, that is applied to this input.

Controlling environmental sources of variability: The environment of data collection is an important source of variability. For example, a clinical trial subject\u2019s mobility may depend on the ambient temperature as well as their symptoms and treatment, and behaviours and activities are influenced by whether the clinical trial subject is on their own or in the same room as a family member or care partner. The way a subject walks may also depend on whether they are inside or outside their home, or which room in the house they are in. A timed up-and-go task will be influenced by the height of the chair from which the subject stands up. Increasingly, clinical trials are capturing information about these environmental factors.
In our framework, such environmental data is captured as a separate sensor, but other environmental context would more appropriately be captured in the experiment metadata portion of the application-dependent metadata, for example whether a particular assessment is being performed in the clinic or at home, and whether it is done as part of a prompted task or passively.

Such challenges and limitations cause difficulties in many clinical studies that deploy digital health technologies, mostly stemming from a lack of standardization in the devices and their output. We believe our proposed framework is a practical approach to dealing with these challenges, as it defines both the metadata that any researcher should carefully specify, collect, and record about the clinical study design in addition to the metadata about the sensor dataset provided by the devices.

We illustrate the application of this metadata framework to the measurement of tremor in Parkinson\u2019s disease, using information from the Wearable Assessments in the Clinic and Home in PD (WATCH-PD) study. WATCH-PD is a 12-month multicenter, longitudinal, digital assessment study of PD progression in subjects with early untreated PD. The primary goal is to generate and optimize a set of candidate objective digital measures to complement standard clinical assessments in measuring the progression of PD and the response to treatment.
A secondary goal is to understand the relationship between standard clinical assessments, research-grade digital tools used in a clinical setting, and more user-friendly consumer digital platforms, to develop a scalable approach for objective, sensitive, and frequent collection of motor and nonmotor data in early PD. In WATCH-PD, two different DHTs were deployed in the same subjects, and both were capable of measuring tremor: patients wore both the APDM (Ambulatory Parkinson\u2019s Disease Monitoring) Opal sensors and an Apple Watch while completing a standardized set of tremor tasks in a clinic setting. Participants were instrumented with both the six-sensor Opal system recording continuously and an Apple Watch placed on their most affected side. Subjects initiated recordings on the watch using a paired mobile application. Participants were instructed, while seated, to hold two positions for 30 s. An example metadata framework for tremor measurement was constructed on this basis.

We have described challenges of using DHTs in clinical trials that could be mitigated with an appropriate metadata framework and have proposed such a framework to help address the challenges and enhance the utility of DHTs in drug development and other clinical and research applications. While the exemplar application given is for Parkinson\u2019s disease, we propose that this framework is more generally applicable where DHTs are used in observational or therapeutic studies and in patient management. The proposed metadata framework divides metadata into two classes: metadata independent of the clinical trial design (application-independent metadata), and metadata dependent on the clinical trial design (application-dependent metadata).
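The two metadata classes described above, and the linkage between them, can be sketched in code. The field names below are hypothetical simplifications of the framework\u2019s elements (device, sensor and signals, participant, analysis, experiment, context), not a published schema:

```python
from dataclasses import dataclass, field

@dataclass
class SensorMetadata:
    # One entry per signal; a single DHT may carry several sensors.
    modality: str            # e.g. "accelerometer", "gyroscope", "PPG"
    sampling_rate_hz: float
    recording_mode: str      # e.g. "continuous" or "triggered"

@dataclass
class ApplicationIndependentMetadata:
    # Describes how the data was collected, regardless of the trial.
    device_id: str           # brand/model/serial for traceability
    firmware_version: str
    body_location: str
    hub_software: str        # phone app or hub mediating the transfer
    file_format: str
    metadata_version: str
    subject_uid: str         # only a unique identifier at this level
    experiment_id: str
    preprocessing: str       # generic on-device analysis
    sensors: list = field(default_factory=list)

@dataclass
class ApplicationDependentMetadata:
    # Describes how the DHT is used in one particular clinical trial.
    subject_uid: str         # links back to the independent record
    experiment_id: str
    demographics: dict
    endpoint_analysis: str   # application-specific analysis + version
    cohort: str
    protocol_version: str
    task_mode: str           # "active" (prompted task) or "passive"
    context: dict            # environmental context, e.g. clinic vs. home

# Hypothetical example: a wrist-worn device in a tremor task.
indep = ApplicationIndependentMetadata(
    device_id="watch-001", firmware_version="8.5",
    body_location="most affected wrist", hub_software="phone-app 2.3",
    file_format="csv", metadata_version="1.0", subject_uid="S01",
    experiment_id="E01", preprocessing="50 Hz low-pass filter",
    sensors=[SensorMetadata("accelerometer", 100.0, "continuous"),
             SensorMetadata("gyroscope", 100.0, "continuous")])
dep = ApplicationDependentMetadata(
    subject_uid="S01", experiment_id="E01", demographics={"age": 63},
    endpoint_analysis="tremor-amplitude v1.2", cohort="early PD",
    protocol_version="v3", task_mode="active", context={"setting": "clinic"})

# The shared identifiers are the only linkage between the two classes.
assert (indep.subject_uid, indep.experiment_id) == (dep.subject_uid, dep.experiment_id)
```

The point of the sketch is the split: everything in the first class could be reused unchanged in a different study, while the second class changes with each trial design.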
Our framework proposes a method for linking these classes of metadata, and we have provided an example of how the framework can be applied to the measurement of tremor using two different types of sensor platform deployed in the WATCH-PD study. A particular innovation in this metadata framework is that it is designed to support pre-specification of the minimum required values of the relevant metadata fields, and, by comparing pre-specified values with actual values, provides a quality assurance framework for data collected using DHTs. The framework proposes that the pre-specified values of both application-independent and application-dependent metadata (where applicable) are documented in a dedicated DHT Charter that describes both how DHT data should be collected, and also how it was collected.

A further challenge in DHT metadata is to capture the environmental context in which data was collected. For example, a measurement of gait speed may be influenced by disease progression or effective treatment, but it is also influenced by whether the clinical trial subject is inside or outside, the ambient temperature, the size of the room they are in, etc. This environmental context is a key source of variability in data from DHTs deployed in clinical trials and is an active area of research as well as a topic of attention by regulatory agencies.

There are several limitations to the approach outlined in this paper. First, the WATCH-PD study is just one exemplar, and this study has not yet been completed at the time of submission of this manuscript. The metadata from this one study may not be generalizable to other DHT studies collecting data on similar concepts of interest from different devices. The ability to generalize this framework will need to be tested with multiple independent prospective studies.
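The quality-assurance idea above, comparing pre-specified metadata values with the values actually recorded, can be sketched as a simple check. The keys and values are hypothetical, standing in for fields a DHT Charter might pre-specify:

```python
# Toy quality-assurance check with hypothetical metadata keys: compare the
# values pre-specified for a study against the metadata actually recorded
# by a deployed device.

PRESPECIFIED = {
    "sampling_rate_hz": 100.0,
    "body_location": "left wrist",
    "firmware_version": "8.5",
}

def check_metadata(actual, prespecified=PRESPECIFIED):
    """Return the fields whose recorded value deviates from the
    pre-specified value (an empty list means the recording passes QA)."""
    return [key for key, expected in prespecified.items()
            if actual.get(key) != expected]

recorded = {"sampling_rate_hz": 100.0,
            "body_location": "left wrist",
            "firmware_version": "8.4"}   # device not updated before use

print(check_metadata(recorded))  # -> ['firmware_version']
```

Such a check could run automatically at data ingestion, flagging recordings whose collection conditions drift from what the protocol pre-specified.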
Future work will aim to apply this metadata framework to a wider range of sensor data and study designs, to identify how this framework could inform future efforts to standardize metadata for DHTs. The proposed metadata framework provides a functional method for metadata collection in a manner that is agnostic to a given study design. Additionally, the modular structure of the framework has the flexibility to accommodate future expansion where required. The metadata framework achieves this using three specific innovations. First, it captures the core information needed to optimize the value of the measures derived from a DHT. Second, it supports comparison of measures of the same concept of interest using different DHTs (such as APDM sensors and Apple Watch sensors in the example given above), helping us move towards a device-agnostic approach to measurement of a given concept of interest. Third, through the use of pre-specification, it provides a means to standardize and assure the quality of data collected with a DHT. Taken together, the elements of the proposed metadata framework represent an initial step toward standardization of data collection across devices and studies, paving the way toward greater regulatory acceptability of DHTs in clinical trials or research."} +{"text": "Background: The clinical efficacy of repetitive transcranial magnetic stimulation (rTMS) protocols on patients with poststroke dysphagia is still unclear. Objective: This trial aimed to explore and analyze the effectiveness of 5 Hz rTMS on the unaffected hemisphere, affected hemisphere, and cerebellum in stroke patients with dysphagia. Methods: This observer-blind and randomized controlled trial included a total of 147 patients with stroke. Patients were divided into four treatment groups: the unaffected hemispheric group, the affected hemispheric group, the cerebellum group and the control group. Each group received traditional dysphagia treatment 5 days a week for 2 weeks. 
All recruited patients except for those in the control group underwent 10 consecutive rTMS sessions for 2 weeks. For the affected hemispheric group and unaffected hemispheric group, 5 Hz rTMS was applied to the affected mylohyoid cortical region or to the unaffected mylohyoid cortical region. For the cerebellum group, 5 Hz rTMS was applied to the mylohyoid cortical representation of the cerebellum . The Fiberoptic Endoscopic Dysphagia Severity Scale (FEDSS), Penetration/Aspiration Scale (PAS), Gugging Swallowing Screen (GUSS), and Standardized Swallowing Assessment (SSA) were used to evaluate clinical swallowing function before the intervention (baseline), immediately after the intervention, and 2 weeks after the intervention. Results: There were significant time and intervention interaction effects on the FEDSS, PAS, SSA, and GUSS scores (p < 0.05). In a direct comparison of the swallowing parameters of the four groups, the changes in FEDSS, PAS, SSA, and GUSS scores showed a significantly greater improvement in the unaffected hemispheric group, the affected hemispheric group and cerebellum group than in the control group (p < 0.05). Conclusions: Whether stimulating the unaffected hemisphere or the affected hemisphere, 5 Hz high-frequency rTMS on mylohyoid cortical tissue might have a positive effect on poststroke patients with dysphagia. In addition, cerebellar rTMS is a safe method that represents a potential treatment for poststroke dysphagia, and more clinical trials are needed to develop this technique further. Clinical Trial Registration: chictr.org.cn, identifier: ChiCTR2000032255. Dysphagia, affecting 27\u201364% of stroke patients, is one of the most common poststroke sequelae and is oIt is controversial to stimulate either the ipsilesional or contralesional hemisphere. Previous systematic studies have shown different outcomes regarding the efficacy of non-invasive brain stimulation (NIBS) according to its stimulating point. 
Specifically, a review reported that no differences were found dependent on the stimulation site , whereasCerebellar neurostimulation has been considered an unexplored method and a prelude to treatment for dysphagia by modulating swallowing pathways. It has been shown that the cerebellum can be strongly activated during swallowing exercise , and stiTherefore, this prospective, randomized, observer-blind clinical study focused on the effectiveness and safety of rTMS in stroke patients with dysphagia. Outcomes after stimulation of the unaffected side, the affected side and the cerebellum were compared to determine which area of stimulation is more beneficial for the recovery of patients with dysphagia to guide clinical work in the future. One hundred fifty-five poststroke patients suffering from dysphagia were included from April 2020 to April 2021. All of the patients were admitted to the Department of Rehabilitation Medicine, Yue Bei People's Hospital, Guangdong Province, China. The inclusion criteria were as follows: (1) subacute stroke (<3 months) diagnosed by imaging tests, including computed tomography (CT) or magnetic resonance imaging (MRI), with hemorrhagic stroke or unilateral ischemia; (2) dysphagia confirmed by fiberoptic endoscopic evaluation of swallowing (FEES); and (3) no prior dysphagia rehabilitation. The exclusion criteria included a history of any other neurogenic disease, epilepsy, or tumor; severe cognitive impairment or aphasia; and contraindication to electrical or magnetic stimulation. All patients provided written informed consent before inclusion. The trial protocol was approved by the Ethics Committee of Yue Bei People's Hospital, and this clinical study was carried out and reported according to the Consolidated Standards of Reporting Trials (CONSORT) guidelines . 
Details A total of 155 poststroke patients with dysphagia were recruited before assessment for eligibility, and 147 were included after exclusion. One hundred forty-seven patients were divided into four groups: the unaffected hemispheric group, affected hemispheric group, cerebellum group and control group. Four included patients withdrew from the trial. One patient in the unaffected hemispheric group withdrew for a personal reason not relevant to the trial. Two patients in the affected hemispheric group and one in the cerebellum group quit the study due to exacerbated pneumonia. Consequently, 143 patients completed the trial . This study was an observer-blind and randomized controlled trial. Patients were randomly divided into three groups by the random number table method. A sealed opaque envelope was opened at patient enrollment to determine whether the patient was to be assigned to the unaffected hemispheric, affected hemispheric or cerebellum group. These three groups of patients received 10 consecutive rTMS sessions for 2 weeks. For the affected hemispheric group and unaffected hemispheric group, 5 Hz rTMS was applied to the affected mylohyoid cortical region or to thEach patient in the affected hemispheric group and unaffected hemispheric group was seated in a quiet environment in a relaxed state. Electromyography (EMG) data representing oral swallowing musculature from mylohyoid muscles were detected using the same methods as Hamdy et al. . MagPro Cortical excitability of both hemispheres of each patient, including the motor evoked potential (MEP) and resting motor threshold (rMT), was measured separately using single-pulse TMS. The coil was moved around in an area within 2\u20134 cm anterior and 4\u20136 cm lateral to the vertex of the cranium to locate the mylohyoid cortical region of the hemisphere to obtain the maximum MEP recording . 
The maxIn previous studies, it has been identified that rTMS stimulation is effective regardless of which side of the cerebellum is stimulated , 24. ForThe same parameters of stimulation were used for each intervention group. For each patient, 5 Hz rTMS at 110% of rMT intensity was applied for 20 min at the \u201chot spot\u201d area each day for 10 days, with a total of 1,800 pulses per day. The rTMS protocols applied in this study strictly followed the clinical safety guidelines for rTMS applications . All included participants were assessed at three different times: baseline (before the treatment), 2 weeks (after the treatment), and follow-up (2 weeks after the treatment) see . The priAll included patients required FEES. First, the secretion status of patients was measured, and then the patient received standard volumes of a semiliquid diet, such as soft solid food, liquids, or puree. Stroke-related dysphagia was graded on the six-point FEDSS, with a score of 1 for the best and 6 for the worst swallowing function, based on the different consistencies of diet observed in the endoscopic examination and the risk of saliva penetration or aspiration . The SSA consists of three parts. One section comprises eight indicators, including the responsiveness level, breathing, sound intensity, lip closure, control of trunk and head, voluntary cough and pharyngeal reflex. Scores for this section range from 8 to 23 points. In the second section, the patients swallowed 5 mL of water three times, and at the same time, salivary management and laryngeal movement were assessed. Repetitive swallowing, stridor, choking, and vocal quality were also evaluated, with a score range of 5\u201311 points. Once patients completed the first two parts of the assessment, they underwent the third part, which entailed swallowing 60 mL of water; this activity was scored from 5 to 12 points. 
The total SSA score varied from 18 to 46 points, and higher scores indicated worse swallowing function , 28. Dysphagia severity was scored by an 8-point scale named the Penetration/Aspiration Scale (PAS). This scale has been widely used for semiquantitative assessment of the degree of penetration and aspiration in endoscopic or radiological measurements, with higher scores indicating more severe impairment . The GUSS is a validated, reliable screening test for swallowing with a maximum score of 20. This tool consists of two parts: five indirect questions are used to measure the swallowing function of the patient, and four direct questions are used to assess the physical condition of patients when ingesting liquid, semisolid and solid food. A higher score suggests milder dysphagia, and a lower score suggests a more serious dysphagia condition. Fourteen points was deemed the passing score for swallowing, and patients who scored <14 points were regarded as having a high likelihood of aspiration . In this study, statistical analyses were conducted with SPSS 23.0 software . Two-way analysis of variance (ANOVA) was used for continuous data among multigroup comparisons , and the chi-squared test was performed for categorical data. To assess the effect of the interaction between intervention and time, repeated measures analysis of variance (ANOVA) was used, in which time was used as a within-subject factor and intervention as a between-subject factor. Post-hoc analysis was performed using Bonferroni correction. A Greenhouse-Geisser correction was performed to correct for non-sphericity of the data. A P < 0.05 was considered statistically significant. One hundred forty-seven subjects were randomized into four groups. 
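As an illustration of the Bonferroni-corrected post-hoc comparisons described above, the sketch below runs all pairwise two-sample t-tests among four groups and divides the significance threshold by the number of comparisons. The group data are synthetic and only loosely modeled on an SSA-like score; this is not the study's actual analysis code.

```python
# Illustrative Bonferroni-corrected pairwise post-hoc comparisons
# among four groups (synthetic data, not the study's actual scores).
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "unaffected": rng.normal(30, 5, 35),
    "affected": rng.normal(31, 5, 35),
    "cerebellum": rng.normal(30, 5, 35),
    "control": rng.normal(38, 5, 35),  # synthetic: worse (higher) scores
}

pairs = list(combinations(groups, 2))  # 6 pairwise comparisons
alpha_corrected = 0.05 / len(pairs)    # Bonferroni-adjusted threshold
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={p:.4g}, significant={p < alpha_corrected}")
```

Dividing alpha by the number of pairwise tests is the standard Bonferroni adjustment; equivalently, each raw p-value could be multiplied by six and compared against 0.05.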
The average ages in the unaffected hemisphere group, the affected hemisphere group, the cerebellum group and the control group were 64.47 \u00b1 13.95 years , 64.67 \u00b1 10.87 years , 63.18 \u00b1 9.92 years , and 62.34 \u00b1 11.54 years , respectively. There were no significant differences between the groups at baseline in clinical and demographic characteristics, Basic Activities of Daily Living (BADL) score, Mini-Mental State Examination (MMSE) score, Eating Assessment Tool-10 (EAT-10) score, Nutrition Risk Screening-2002 (NRS2002) score, Water Swallow Test (WST) score, FEDSS score, PAS score, SSA score, or GUSS score .P = 0.008) and 4 weeks (P = 0.001). Similarly, there was a significant difference in PAS scores at 2 weeks (P = 0.024) and 4 weeks (P = 0.005) . FiguresP < 0.001) and a significant time\u2013group interaction .P = 0.012) and 4 weeks (P = 0.001) (P = 0.017) and 4 weeks (P = 0.008), the GUSS scores were significantly different. Repeated measure analysis of variance showed a significant main effect of the assessment time point and a significant interaction (time-group) for the GUSS . SimilarThree participants (one unaffected and two affected) suffered transient headache. No participants developed seizures during or after therapy.Our study compared the effects of dysphagia intervention based on the stimulation site: the affected mylohyoid cortical area, unaffected mylohyoid cortical area and cerebellum. This study revealed large effect sizes for swallow scores after the end of intervention in the unaffected hemispheric group, the affected hemispheric group and the cerebellum group compared to the control group. These results suggest that rTMS stimulation of the affected hemisphere, unaffected hemisphere and cerebellum was useful in improving swallowing function in patients with dysphagia after stroke. Nevertheless, the effects among these sites were not significantly different. The mechanism of rTMS is not fully understood. 
Some previous studies , 31 werePrevious studies have shown different outcomes in which various stimulation parameters of rTMS could improve the function of dysphagia in patients after stroke. For example, Park et al. showed tRecently, a growing number of studies have explored the possibility of rTMS on cerebellar tissue in the treatment of dysphagia. Some studies , 36 haveRecent studies show that compared to unilateral stimulation, bilateral pharyngeal stimulation with 10 Hz rTMS stimulation on \u201chot spots\u201d has more positive outcomes in both acute and chronic stroke patients , 39. HowThis study may possess the following limitations. First, the difference in swallowing function rehabilitation by stroke type was not analyzed. We were not able to perform cerebellar subgroup analysis according to affected, unaffected and cerebellar stroke lesions on account of the insufficient number of patients with infratentorial stroke lesions. Second, the effect of rTMS in our study was evaluated based on the clinical severity and fiberoptic endoscopic dysphagia severity scale and not on neurophysiologic evaluation, such as MEP amplitude and latency of rTMS. Finally, the effect of rTMS on brain plasticity was not evaluated by neuroimaging tests or neurophysiologic evaluation in our study. In the future, the combination of neuroimaging studies and neurophysiology would be beneficial in exploring the potential mechanism of rTMS in the recovery of dysphagia.The present study suggested that 5 Hz rTMS in the affected hemisphere, unaffected hemisphere and cerebellum for 10 days improves swallowing function in poststroke dysphagia patients. However, no difference among the affected hemisphere, unaffected hemisphere and cerebellum was observed. Therefore, regardless of whether the unaffected hemisphere or the affected hemisphere is stimulated, 5 Hz high-frequency rTMS on mylohyoid cortical tissue might have a positive effect on patients with poststroke dysphagia. 
In addition, cerebellar rTMS is a safe method that represents a potential treatment for poststroke dysphagia, and more clinical trials are needed to further improve this technique. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the Ethics Committee of Yue Bei People's Hospital. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. HL contributed to the conception of the study, supervised the clinical trial, and performed manuscript writing and editing. LZ, JW, and JR performed data analyses and manuscript writing and editing. PW and YZ contributed to the conception and design of the study. FL and YP performed data collection. All authors have agreed with the submitted version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The rapid advancements of high throughput \u201comics\u201d technologies have brought a massive amount of data to process during and after experiments. Multi-omic analysis facilitates a deeper interrogation of a dataset and the discovery of interesting genes, proteins, lipids, glycans, metabolites, or pathways related to the corresponding phenotypes in a study. Many individual software tools have been developed for data analysis and visualization. However, an efficient way to investigate phenotypes with multiple omics data is still lacking. 
Here, we present OmicsOne as an interactive web-based framework for rapid phenotype association analysis of multi-omic data by integrating quality control, statistical analysis, and interactive data visualization in \u2018one-click\u2019 mode. OmicsOne was applied to the previously published proteomic and glycoproteomic data sets of high-grade serous ovarian carcinoma (HGSOC) and the published proteome data set of lung squamous cell carcinoma (LSCC) to confirm its performance. The data were analyzed through six main functional modules implemented in OmicsOne: (1) phenotype profiling, (2) data preprocessing and quality control, (3) knowledge annotation, (4) phenotype-associated feature discovery, (5) correlation and regression model analysis for phenotype association analysis on individual features, and (6) enrichment analysis for phenotype association analysis on feature sets of interest. We developed an integrated software solution, OmicsOne, for phenotype association analysis on multi-omics data sets. The application of OmicsOne to the public ovarian cancer data set showed that the software could consistently confirm the previous observations and discover new evidence for HNRNPU and a glycopeptide of HYOU1 as potential biomarkers for HGSOC data sets. The performance of OmicsOne was further demonstrated in the Tumor and NAT comparison study on the proteome data set of LSCC. OmicsOne can effectively simplify data analysis and reveal the significant associations between phenotypes and potential biomarkers, including genes, proteins, and glycopeptides, in minutes to assist users in understanding aberrant biological processes. The online version contains supplementary material available at 10.1186/s12014-021-09334-w. A phenotype can be defined as any observable characteristic or state of an organism resulting from interactions between genes, environment, disease, molecular mechanisms, and chance . 
The purIn the past decades, many efforts have been made in bioinformatics tool development for automated omics data analysis and visualization, including commercial solutions such as Ingenuity Pathway Analysis (Ingenui https://github.com/huizhanglab-jhu/OmicsOne) and can be installed and run locally in a Python 3.8 environment in Microsoft Windows. The minimum hardware configuration requirements are a 2-core CPU and 12\u00a0GB RAM. To address these issues, here we present the tool OmicsOne, a software developed in Python based on the Dash framework that canOmicsOne was initially designed for isobarically labeled quantitative proteomics data ) but can find applications in label-free quantitation and Data Independent Acquisition (DIA) datasets, as well as other \u201comics\u201d data if the data fits the input format shown in Fig.\u00a0The sample data sets embedded in the OmicsOne installation are also downloadable from the Github repository. OmicsOne also allows users to add their customized annotation databases in the sample folder for knowledge annotation, and pathway databases for enrichment analysis. We developed OmicsOne under Python 3.8 for automated multi-omics data analysis to discover molecular changes and pathways associated with phenotypes. OmicsOne integrated scientific Python packages for statistical calculation and data visualization, including NumPy (v1.21.4) and SciPy . It is often necessary to preprocess the raw data before data analysis to fit the algorithm requirements and control data quality. OmicsOne provides several essential preprocessing functions, including (1) Log-transformation algorithm, which supports the conversion of the expression values to log2 values. OmicsOne accepts log2-transformed\u00a0data by default. (2) Normalization algorithm. We implemented the commonly applied median normalization method to adjust the median values of all features in all samples to the same value (default is 0) to reduce potential batch effects and measurement errors. (3) Noise filtration algorithm. We removed features expressed in less than 50% (user-defined) of samples as noise features. (4) Imputation algorithm. Three basic imputation methods were implemented in OmicsOne, including FeatureMin, which imputes the scaled minimum value in the row. The evaluation of the reproducibility of quality control samples is another critical step before the phenotype association analysis. OmicsOne supports calculating the correlation values of technical or biological replicates and the coefficient of variation (CV) of the selected quality control samples to estimate the reproducibility of measured gene or protein level expression. The gene annotation function can help the understanding of biological functions. A quick annotation tool is critical for automated data analysis and manual investigation. In OmicsOne, the features are automatically annotated and linked to the knowledge databases , and feature clustering. Differential expression analysis is a method delineating altered expression profiles of features, such as genes, proteins, and PTMs, which offers the greatest insight into aberrant biology in comparative studies . The algorithms of hypothesis tests implemented in OmicsOne can identify the significant, differentially expressed features, leveraging multiple statistical tests for paired or independent groups. The Student's t-test is the most commonly used statistical hypothesis test, in which the test statistic follows a Student\u2019s t-distribution. The Wilcoxon rank-sum test is a non-parametric statistical hypothesis test used to compare the locations of two independent populations . 
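The preprocessing steps listed above (log2 transformation, median normalization, noise filtration, and minimum-value imputation) can be sketched roughly as below. This is a pandas illustration of the concepts, not OmicsOne's actual implementation, and the 0.9 scaling factor for the row minimum is an assumed value.

```python
# Rough sketch of the preprocessing steps described above (not OmicsOne code).
# Rows are features, columns are samples, values are raw intensities.
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, min_frac: float = 0.5) -> pd.DataFrame:
    df = np.log2(df)                            # (1) log2 transformation
    df = df - df.median(axis=0)                 # (2) median-normalize each sample to 0
    keep = df.notna().mean(axis=1) >= min_frac  # (3) drop features seen in <50% of samples
    df = df[keep]
    # (4) FeatureMin-style imputation: fill missing values with a scaled
    # row minimum (the 0.9 factor is an assumption for illustration).
    return df.apply(lambda row: row.fillna(0.9 * row.min()), axis=1)
```

After normalization the per-sample medians are aligned, so quality-control metrics such as replicate correlations and CVs can then be computed on comparable scales.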
For depThe dimensionality reduction method is a valuable and common approach to classify samples based on the most prominent factors driving different phenotypes without prior knowledge, especially for samples with thousands of features. Among a series of dimensionality reduction methods, principal component analysis (PCA) is one oFeature clustering is based on the hierarchical clustering supported by the Python package Scipy to find OmicsOne provides phenotype association analysis for individual features. The features involved in the gene sets obtained from the differential expression analysis, dimensionality reduction, and feature clustering methods can be investigated individually for phenotype association. Correlation analysis and logistic regression analysis are provided for individual features associated with categorical phenotypes. The features having a\u00a0correlation p-value\u2009<\u20090.05 are considered phenotype-associated features. The logistic regression model applied to the phenotype and feature expression helps justify whether an individual feature can be considered a potential indicator for phenotype prediction. The gene sets can be further investigated by the subsequent enrichment analysis, over-representation analysis (ORA) using GSEApy \u201338 to diOmicsOne reports intermediate and finalized results in tables (.csv or .txt) and the corresponding interactive figures for all data analysis. The interactive figures are generated using Plotly in the Dash framework for direct checking. 
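The per-feature association step described above can be illustrated as follows: a point-biserial correlation between a binary phenotype and one feature's expression, plus a one-feature logistic regression. The data are synthetic and this is only a sketch of the idea, not OmicsOne's code; the text states that a correlation p-value < 0.05 is the cutoff for flagging a phenotype-associated feature.

```python
# Sketch of single-feature phenotype association (synthetic data).
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
phenotype = np.array([0] * 20 + [1] * 20)             # e.g., non-tumor vs. tumor
feature = rng.normal(0.0, 1.0, 40) + 2.0 * phenotype  # one associated feature

r, p = stats.pointbiserialr(phenotype, feature)       # correlation and p-value
model = LogisticRegression().fit(feature.reshape(-1, 1), phenotype)
accuracy = model.score(feature.reshape(-1, 1), phenotype)
print(f"r={r:.2f}, p={p:.3g}, logistic accuracy={accuracy:.2f}")
# a feature with p < 0.05 would be flagged as phenotype-associated
```

A well-fitting one-feature logistic model, as described in the text, supports treating that feature as a potential indicator for phenotype prediction.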
OmicsOne automatically generates intermediate tables in .csv or .txt (tab-separated) files for phenotype association results at each step of processing. The public proteomic data sets of high-grade serous ovarian carcinoma (HGSOC) and lungOmicsOne was first applied to the public proteomic and glycoproteomic data set in the Additional tables of HGSOC to demonN-Glycositeatlas [This investigation involved two expression matrices of proteins and intact glycopeptides, including 5916 proteins and 365 intact glycopeptides, respectively. In this study, we regarded proteins or intact glycopeptides as features describing the samples. These features described each sample in a high-dimensional space. Although OmicsOne provides the preprocessing functions in module 2 of data preprocessing and quality control, it also accepts data preprocessed externally using different preprocessing methods. The expression matrices of proteins and intact glycopeptides have been log2-transformed, normalized, and have no missing values. The expression distribution in each sample is shown in Fig.\u00a0iteatlas , were prIn module 4 of phenotype-associated feature discovery, we implemented three functions: differential expression analysis, dimensionality reduction, and feature clustering. The purpose of this module is to find individual features or feature sets relevant to specific phenotypes. Forty-seven significantly up-regulated and 94 down-regulated intact N-linked glycopeptides were discovered in tumor samples compared with non-tumor samples using Wilcoxon rank-sum tests and considering a Benjamini-Hochberg (BH) adjusted p-value\u2009<\u20090.01 and fold change\u2009>\u20091.5 Fig.\u00a0A. The boOmicsOne provides the functional module of correlation and regression model analysis (module 5) for the investigation of phenotype and individual feature association. As shown in Fig.\u00a0OmicsOne was also applied to the proteome data set from LSCC to confirm its performance . 
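The selection criteria quoted above (a Wilcoxon rank-sum test per feature, BH-adjusted p < 0.01, and fold change > 1.5) can be sketched on synthetic data as follows. The `bh_adjust` helper is a small hand-rolled Benjamini-Hochberg implementation added so the example needs only NumPy and SciPy; the data are simulated, not the HGSOC matrices.

```python
# Sketch of the differential expression criteria described above
# (Wilcoxon rank-sum + BH adjustment + fold-change cutoff) on synthetic data.
import numpy as np
from scipy import stats

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (simple NumPy implementation)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.minimum(scaled, 1.0)
    return adjusted

rng = np.random.default_rng(2)
tumor = rng.normal(0.0, 1.0, (200, 15))   # 200 features x 15 tumor samples
normal = rng.normal(0.0, 1.0, (200, 15))  # 200 features x 15 non-tumor samples
tumor[:20] += 2.0                         # spike 20 truly changed features (log2 scale)

pvals = np.array([stats.ranksums(t, n).pvalue for t, n in zip(tumor, normal)])
p_adj = bh_adjust(pvals)
log2fc = tumor.mean(axis=1) - normal.mean(axis=1)  # difference of log2 means
sig = (p_adj < 0.01) & (np.abs(log2fc) > np.log2(1.5))
print(f"{sig.sum()} features pass BH-adjusted p<0.01 and fold change>1.5")
```

Because the data are already log2-scale, the fold-change cutoff of 1.5 translates to an absolute difference of log2 means greater than log2(1.5).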
The cliOmicsOne is an efficient automated tool to associate the alteration of features with phenotypes. The software uses empirical settings to build a robust working pipeline for standard association analyses in \u2018one-click\u2019 mode and allows interactive manipulation for tuning the analysis to fit customized requirements. The \u2018one-click\u2019 mode can speed up the discovery of interesting features and feature sets and the subsequent phenotype association analysis. However, we still strongly suggest that users carefully investigate each module's settings and results and not use OmicsOne as a black box. Thus, we developed a webpage-based dashboard in OmicsOne, which integrates interactive data visualization of results and the corresponding parameter settings to make the analysis clearer and more efficient to validate. Users can monitor the results of each module in real time during the running of the whole data analysis. OmicsOne supports phenotype profiling, knowledge annotation, and intact glycopeptide analysis. It provides a convenient way to associate intact glycopeptides with clinical phenotypes Fig.\u00a0A and B. The performance of OmicsOne was further demonstrated by the application to the proteome data of LSCC. The results of PCA and differential expression analysis for the comparison between Tumor and NAT samples were obtained in minutes. The data analysis results are displayed in an interactive dashboard in real time. We demonstrated the performance of OmicsOne using the published data sets of HGSOC and LSCC in this study and believe it will be an efficient bioinformatics solution for investigating and evaluating phenotype associations with individual features or feature sets of interest to understand aberrant biological processes. 
B Volcano plot of differentially expressed proteins in Tumor and NAT samples. Additional file 2: Table S1. The analysis results of OmicsOne applied on the proteome dataset of HGSOC. Additional file 3: Table S2. The analysis results of OmicsOne applied on the glycoproteome dataset of HGSOC. Additional file 4: Table S3. The analysis results of OmicsOne applied on the glycoproteome dataset of LSCC"} +{"text": "The continuous fabrication via membrane emulsification of stable microcapsules using renewable, biodegradable biopolymer wall materials keratin and chitosan is reported here for the first time. Microcapsule formation was based on opposite charge interactions between keratin and chitosan, which formed polyelectrolyte complexes when solutions were mixed at pH 5.5. Interfacial complexation was induced by transfer of keratin-stabilized primary emulsion droplets to chitosan solution, where the deposition of chitosan around droplets formed a core\u2013shell structure. Capsule formation was demonstrated both in batch and continuous systems, with the latter showing a productivity of up to 4.5 million capsules per minute. Keratin\u2013chitosan microcapsules (in the 30\u2013120 \u03bcm range) released less encapsulated nile red than the keratin-only emulsion, whereas microcapsules cross-linked with glutaraldehyde were stable for at least 6 months, and a greater amount of cross-linker was associated with enhanced dye release under the application of force due to increased shell brittleness. In light of recent bans involving microplastics in cosmetics, applications may be found in skin-pH formulas for the protection of oils or oil-soluble compounds, with a possible mechanical rupture release mechanism . Microcapsules of sunflower oil in sustainable biomaterials were made by low-energy membrane emulsification with keratin and interfacial deposition of oppositely charged chitosan. 
Encapsulation within a polymeric shellnot only allows their dispersal in a polar environment but also offersbenefits such as protection from oxygen degradation,4 improved retention of volatile components,1 and controlled release of the contents.3 The diameter of microcapsules can range between 1 \u03bcmand a few mm,5 making them small enoughto pass through wastewater treatment plants into aquatic environments,6 contributing to microplastic pollution when syntheticand non-biodegradable wall materials are used . The environment is polluted with 36,000 tons of microplasticseach year in the EU alone9 and concernsover the implications for aquatic life and human health have grownwith the emergence of studies confirming the presence of microplasticsin the entire human food supply chain.10Microencapsulatedoils have a wide variety of applications acrossa range of industries, including food, household,9 there remains a need to develop microcapsulesbased on biodegradable and non-toxic materials. Research on the useof biopolymers for microencapsulation is robust, with most investigatedbiopolymers including alginate, casein, whey proteins, chitosan, soyproteins, gluten, silk fibroin, zein, starch, and cellulose.11While steps have been made to tackle microplastic pollution,includingenacted and proposed limited bans on plastic microbeads,12 andthis mechanism is utilized in coacervation-based microencapsulationtechniques such as complex coacervation13 and layer-by-layer methods.2Oppositely charged biopolymers can formcomplexes with each othervia attractive electrostatic forces,14 Keratin can be solubilized from waste wool orfeathers by sulfitolysis, reduction, or other methods,15 is negatively charged over a range of pH values,16 and has surface-active and emulsifying properties.17 Keratin has been used as a building block inthe synthesis of multilayer films of alternating anionic keratin anda cationic polyelectrolyte;16 however,no examples were found in the literature of keratin 
being used incoacervation or layer-by-layer style microencapsulation of a liquidcore.Beingnon-toxic, renewable, and biodegradable, the wall materialsused for microencapsulation should be inexpensive and abundant, ideallyexisting in underutilized industrial waste streams. Keratin, a structuralanimal protein, meets all of the above requirements, with millionsof tons of unutilized keratinous waste produced each year.18 most of which has no downstream use,19 making it another ideal sustainable biomaterial.Critically, chitosan is positively charged below its pKa (\u223c6.5)20 and, therefore,complexation with keratin via electrostatic interactions is likely.Chitosan and keratin have been previously combined to prepare compositefilms,21 and chitosan has been used inconjunction with other anionic biopolymers in similar microencapsulationsystems.22Chitosan is the second most abundant biopolymer on theplanet aftercellulose, obtained from crustacean waste by deacetylation of chitin,23 In ME, the disperse phase(DP) is injected through a porous membrane into the continuous phase(CP) where droplet detachment is driven by shear stress across themembrane surface. 
The size of the droplets can be tuned by careful control of the process parameters, resulting in the production of monodisperse emulsions.24 Due to the low energy of ME, however, the kinetics of adsorption of an emulsifier at the emerging oil\u2013water (O/W) interface is critical for the production of stable emulsions with narrow droplet size distributions.25 While soluble keratin has been reported to produce stable emulsions by ultrasonication,17 the use of keratin in ME has not previously been attempted to the authors\u2019 knowledge. Most instances of coacervate-based microcapsules in the literature use homogenization as the method of primary emulsification; however, the utilization of ME can offer several advantages. In the present study, the formation of stable microcapsules based on the electrostatic interactions between keratin and chitosan is reported for the first time. ME was utilized to generate the primary emulsion, in both batch and continuous configurations. Subsequently, the production of microcapsules from the primary emulsion was obtained by adsorption of chitosan to oppositely charged keratin at the droplet surface and cross-linking with glutaraldehyde (GTA). The properties and characteristics of the microcapsules and shell were examined by microscopy, zeta potential, and stability studies. Release studies were then carried out to assess the effect of chitosan adsorption and cross-linking in the shell on the release of an oil-soluble dye from the encapsulated oil phase.
Clean sheep\u2019s wool was obtained from Wingham Wool Work. Sunflower oil was obtained from Tesco and used as the DP for the primary emulsion. Urea \u2265 98%, sodium metabisulfite \u2265 99%, tris(hydroxymethyl)aminomethane \u2265 99.8%, sodium dodecyl sulfate (SDS) \u2265 95%, hydrochloric acid, and sodium hydroxide were purchased from Fisher Scientific, UK. HCl and NaOH were diluted to 0.1 M as stock solutions for pH adjustments. Low-molecular-weight chitosan, acetic acid \u2265 99%, fluorescein isothiocyanate (FITC) \u2265 90%, methanol \u2265 99.9%, nile red \u2265 98%, hydrochloric acid 32%, and GTA solution grade II 25 wt % in H2O were obtained from Sigma-Aldrich UK and used without further purification.
Keratin was extracted from wool using sodium metabisulfite as a reducing agent to cleave disulfide bonds.15 Clean sheep\u2019s wool (30 g) was heated in 1 L of deionized water containing 8 M urea, 0.5 M sodium metabisulfite, 0.2 M tris base, and 0.2 M SDS at 65 \u00b0C for 5 h. The resulting aqueous extract was passed through a 50 \u03bcm mesh sieve and dialyzed against deionized water for 6 days using a cellulose tube membrane (MWCO 8 kDa), replacing the water daily. The solution was then diluted to 1 wt % concentration with deionized water, where the initial concentration of keratin was determined by the loss-on-drying method. For the loss-on-drying method, approximately 5 g of the sample was dried at 50 \u00b0C until no further change in mass was noted, and the mass of residual solids was calculated as a percentage of the initial sample mass.
Chitosan (1 wt %) was solubilized in 1 wt % acetic acid by overnight stirring at room temperature. The solution was vacuum filtered, diluted to the desired concentration with deionized water, and adjusted to pH 5.5 using NaOH.
The prepared keratin solution, to be used as the CP of the primary emulsion, was adjusted to pH values between 2 and 12 using NaOH and HCl. Each sample was loaded into a folded capillary cell, and the zeta potential was measured using a Zetasizer Nano ZSP instrument. Two samples were prepared for each pH value, each measured in triplicate.
Mixtures of keratin and chitosan solutions were prepared with a final concentration of 0.3 wt % chitosan and a range of keratin concentrations.
After stirring for 20 min at room temperature, the samples were diluted 10\u00d7 with deionized water, and the transmission at 300 nm was measured using a Jenway UV\u2013vis spectrophotometer. Turbidity was calculated by subtracting the % transmission from 100.
The viscosity of the 1 wt % keratin solution and sunflower oil was measured using a Discovery HR-3 rheometer. A shear rate sweep was conducted at 25 \u00b0C from 0.1 to 1000 1/s using a 40 mm cone (angle = 1\u00b0:0min:25 s) and plate (gap = 29 \u03bcm).
The interfacial tension between the 1 wt % keratin solution and sunflower oil at 25 \u00b0C was measured using an FTA1000 B Class tensiometer by the rising drop method. The sunflower oil DP was extruded from a hooked needle into the 1 wt % keratin CP, and the surface tension was determined from the shape of the rising drop before droplet detachment. An average of three measurements was taken (drop volume \u223c4 \u03bcL).
O/W emulsions were prepared by stirred cell membrane emulsification (SCME) using a liquid dispersion cell (Micropore Technologies) and ringed, stainless-steel (SS), disc membranes. Prior to use in ME, the SS membranes (both disc and tubular) and the additional inner rod underwent a standard cleaning procedure.26 Briefly, the items were immersed sequentially in an ultrasonic bath for 1 min in deionized water, 4 M NaOH, deionized water, 10 wt % citric acid, and finally deionized water. The items were soaked for 10 min in the acid and base solutions after sonication, rinsed with tap water, and then transferred to deionized water.
Sunflower oil (10 mL) was introduced using a syringe pump through the pores of the membrane into the cell containing 90 mL of keratin solution, where droplet detachment was facilitated by the wall shear generated by the paddle stirrer. Using DOE software (MODDE Pro 12.1), a fractional experimental design with a linear model was implemented to explore the size and span (as responses) of emulsions generated using the dispersion cell. Three controllable emulsification parameters (pore diameter, stirring speed, and injection rate) were investigated as factors. The diameter of the pores was either 10 or 30 \u03bcm, while the stirring speed and injection rate ranged from 400 to 1100 rpm and 0.3 to 0.5 mL/min, respectively. Twelve experiments were conducted, including four center points (three repeats).
A bespoke continuous crossflow system was designed and commissioned, consisting of an SS tubular membrane and its assembly in the membrane housing (Figure 1). The SS tubular membrane and inner rod were obtained from Microkerf, Leicester, UK. The SS tubular membrane (Figure 1a\u2013c) was cleaned as described for the disc membrane before assembly in the membrane housing (Figure 1d).
Primary emulsion droplets were isolated from the keratin solution by gravitational creaming in the absence of coalescence, and 1 mL of creamed droplets was mixed with 1 mL of deionized water and immediately added to 10 mL of 0.25 wt % chitosan. This was followed by the addition of 0, 25, or 50 \u03bcL of GTA solution under stirring at room temperature. Samples were placed on a roller for 1 h and subsequently stored at room temperature.
Optical micrographs were captured using an SP400 microscope and digital camera (Olympus). Volume-weighted particle size distributions were obtained using a Mastersizer 3000 particle size analyzer and wet dispersion unit operating at 2000 rpm, and the D50 and span were recorded.
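The D50 and span reported from the volume-weighted size distributions are percentile statistics; span is conventionally defined as (D90 - D10)/D50. A minimal sketch of that calculation with illustrative, synthetic data (not values from the study):

```python
import numpy as np

def size_stats(diameters, volumes):
    """D50 and span from a volume-weighted size distribution.

    diameters: bin centres (um), ascending; volumes: volume per bin.
    """
    cum = np.cumsum(volumes) / np.sum(volumes)   # cumulative volume fraction
    d10, d50, d90 = np.interp([0.10, 0.50, 0.90], cum, diameters)
    span = (d90 - d10) / d50                     # dimensionless distribution width
    return d50, span

# Illustrative narrow distribution centred near 60 um
d = np.linspace(20, 120, 101)
v = np.exp(-0.5 * ((d - 60) / 10) ** 2)
d50, span = size_stats(d, v)
```

A lower span indicates a more uniform (more monodisperse) emulsion, which is how the span response is interpreted in the DOE discussion that follows.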
The zeta potential of primary emulsion droplets was measured before and after addition of the creamed droplets to chitosan solutions of different concentrations to monitor the adsorption of chitosan at the droplet surface. A washing step with deionized water was included before and after stirring in the chitosan solution to remove excess polyelectrolyte. The zeta potential was measured as described for the keratin solution.
Fluorescently labelled chitosan was prepared by addition of 100 mg of FITC in 100 mL of methanol to 100 mL of 1 wt % chitosan solution, followed by overnight stirring in the dark at room temperature.27 The chitosan was precipitated with NaOH, and unreacted FITC was removed by centrifugation. The precipitate was washed with deionized water until the supernatant showed no fluorescence. The FITC-labelled chitosan was dissolved in 1 wt % acetic acid solution and dialyzed against deionized water for 3 days in the dark, replacing the water daily. The concentration of chitosan in the final solution was determined by the loss-on-drying method, and the solution was diluted with deionized water to 0.25 wt %. The pH was adjusted to 5.5 using NaOH.
Fluorescence micrographs were captured using an EVOS M5000 Imaging System (Thermo Fisher Scientific) fitted with a green fluorescent protein light cube with excitation (\u03bbex) and emission (\u03bbem) wavelengths of 470 and 525 nm, respectively, for the visualization of FITC-labelled chitosan, and a red fluorescent protein light cube for the visualization of the nile red-stained oil. Prior to imaging, the microcapsules were dispersed in deionized water to reduce background fluorescence from unadsorbed chitosan.
Both uncross-linked and cross-linked microcapsules were prepared using sunflower oil stained with nile red (1 mg/mL) to make the primary emulsion. As a control, primary emulsion samples were prepared by mixing 1 mL of creamed droplets with 1 mL of deionized water and addition to 10 mL of 1 wt % keratin solution, to ensure the same degree of dilution of the primary emulsion droplet suspension in all samples. Unstained sunflower oil (5 mL) was gently placed on top of each sample using an automatic pipette. The samples were either left static at room temperature for 5 days or centrifuged immediately. An aliquot (1 mL) was taken from the center of the oil layer, and the absorbance was measured at 520 nm by UV\u2013vis spectrophotometry. An average result was taken from three repeats. A standard curve was prepared by the measurement of known concentrations of nile red-stained sunflower oil, diluted with unstained oil.
Since the keratin was negatively charged, it was expected to interact with chitosan to form polyelectrolyte complexes by opposite-charge interactions at an appropriate pH below chitosan\u2019s pKa.30 Since the magnitude of the charge on the keratin decreased with increasing acidity, pH 5.5 was selected to ensure both polyelectrolytes carried a moderate charge. An opaque dispersion was observed when solutions of keratin and chitosan were mixed together at pH 5.5, indicating the formation of insoluble particles. The opacity of the dispersion became more pronounced with increased keratin content (Figure 3a).12 Wool keratin consists of a variety of amino acids with polar, non-polar, and ionizable side chains that allow for multiple interactions to take place.31 Both keratin and chitosan contain groups that can participate in hydrogen bonding, that is, chitosan\u2019s hydroxyl groups, and cysteine and serine in keratin, which contain a sulfhydryl and a hydroxyl group, respectively. Although the deacetylated chitosan used in this work is hydrophilic in nature,32 hydrophobic interactions may take place between keratin\u2019s non-polar amino acid residues and chitosan\u2019s acetyl groups. The degree of opacity was measured by turbidity quantification (Figure 3c).
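The turbidity quantification described here is simply 100 minus the percent transmission measured at 300 nm. A trivial helper capturing that definition (illustrative only):

```python
def turbidity_from_transmission(percent_t):
    """Turbidity (%) as defined in the text: 100 - %T at 300 nm."""
    if not 0.0 <= percent_t <= 100.0:
        raise ValueError("percent transmission must be in [0, 100]")
    return 100.0 - percent_t

# e.g. a sample transmitting 62% of the incident light has a turbidity of 38%
t = turbidity_from_transmission(62.0)
```

Under this definition, a fully transparent sample (100% transmission) has zero turbidity, and increasing complexation (more insoluble keratin-chitosan particles) drives the value toward 100.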
Table S1 summarizes the DOE and experimental data for the 12 experiments conducted. After the confirmation of complexation between keratin and chitosan, the next step was to apply the interaction at the interface of an emulsion. The ME of the primary emulsion (stabilized by keratin) was explored by small-batch (100 mL) SCME to scope the droplet size range and uniformity of the generated emulsions prior to scaling up to continuous crossflow ME (xME). Droplets with median volume diameters (D50) between 30 and 126 \u03bcm were generated (Figure 4a). The model fitted the D50 response well (R2 = 0.99 and Q2 = 0.87; Table S2), which allowed the estimation of D50 at any given point within the range of parameters tested (Table S3). This D50 was dependent on all factors included in the DOE, with pore size having the greatest influence, followed by stirring speed (Table S2). The size of the pores is a major factor in determining the size of the droplets produced by ME, with droplets produced here being 2\u20136 times larger than the pore diameter, in agreement with the 2\u201310 ratio found in the literature.33 The stirring speed had a strong influence on droplet diameter as it generated the shear which causes droplet detachment.34 Stirring speeds between 400 and 1100 rpm enabled controlled access to a wider range of droplet size categories for the 30 \u03bcm pore size than for the 10 \u03bcm pore size (Figure 4b). This result also implies the absence of any transition from dripping to jetting regimes or vice versa, which would have resulted in a clear discontinuity in droplet diameter. The DOE was also used to identify the parameters where the span would be lowest and, therefore, the droplets most uniform. The span model was tuned to improve the fit and future prediction precision by log transformation, removal of an insignificant term (injection rate), and addition of an identified squared term (stirring speed), resulting in an R2 value of 0.95 and a Q2 value of 0.79.
The span, a dimensionless number indicating the width of the droplet size distribution, ranged from 0.368 to 0.923 (Table S1). For the 30 \u03bcm pore diameter, DOE results indicated that low stirring speeds promoted monodispersity; within the design space, droplets generated at 400 rpm were, therefore, the most uniform. An opposite effect was observed with the 10 \u03bcm membrane, whose uniformity increased slightly with increasing stirring speed. The impact of stirring speed was more significant when using the larger pore size and when the stirring speed was higher. It was concluded, therefore, that droplet breakup at high shear was responsible for the relatively poor span seen in some samples from membranes with a larger pore size and for the lower predictability of the span model versus the D50 model, and hence the minor upper-limit deviations of 3.0 and 3.5% for the 10 and 30 \u03bcm pore membranes, respectively, in the span validation experiments (Table S3).
Using as a starting point the conditions which gave the lowest span in the stirred cell setup, the wall shear (\u03c4SCME) of 2.043 Pa was approximated using eqs S3\u2013S6 for the xME equipment design values of impeller diameter (D) = 0.03 m, tank diameter (T) = 0.035 m, blade height (b) = 0.011 m, and number of blades (nb) = 2; membrane morphology values of r1 and r2 of 0.008 and 0.011 mm as the respective outer and inner radii of the porous region of the ringed membrane; CP properties \u03bcc and \u03c1c of 0.00101 Pa s and 1000 kg/m3, respectively; and emulsification angular velocity \u03c9 of \u224841.9 s\u20131 at 400 rpm. These starting conditions corresponded to a DP Weber number (Wed) of 4.1 \u00d7 10\u20134 and a CP capillary number (Cac) of 0.171, respectively.
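The shear-stress correlations themselves (eqs S3-S6) are in the paper's Supporting Information and are not reproduced here, but the quoted angular velocity follows directly from the stirring speed (omega = 2*pi*N/60), and the Ca/We ratio used to compare operating points can be computed from the reported values. A quick arithmetic check:

```python
import math

def angular_velocity(rpm):
    """Convert a stirring speed in rpm to angular velocity in rad/s."""
    return 2.0 * math.pi * rpm / 60.0

omega = angular_velocity(400)    # ~41.9 rad/s, matching the value quoted at 400 rpm
ratio_scme = 0.171 / 4.1e-4      # Ca_c / We_d at the SCME starting point (~417)
```

This Ca_c/We_d ratio is the quantity the three xME tuning strategies described below manipulate, by raising Ca_c, lowering We_d, or both.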
The CP flowrate needed to obtain a similar shear, approximated by equation S7, was applied to the xME. This resulted in significantly larger droplets (Figure 5i). From this first value, the xME system was further tuned following three strategies (Figure 5a): increasing Cac at constant Wed; reducing Wed at constant Cac; and a combination of increasing Cac and reducing Wed. D50 values approaching those in the SCME were obtained by increasing the shear of the xME system to Cac values of \u22480.393 (700 mL/min) from 0.171 at nearly constant Wed, leading to an increased diffusion of keratin from the bulk CP to the interface and, consequently, promoting droplet stability due to a slower dispersed-phase droplet growth. Further Wed reduction, to 4.3 \u00d7 10\u20135, enabled interface saturation at the inception of jetting. The formation of large droplets at high shear is seldom observed in surfactant systems due to the smaller molecular size of surfactants, which promotes fast migration to the interface.35 In both cases of thinning jetting (Figure 5ii,v), larger Cac to Wed ratios were implemented to obtain droplets with D50 \u2248 D50,SCME. The Cac increase with Wed reduction led to droplets possessing a D50 of 125 \u03bcm and a span of 0.664, at a Cac of 0.280 (500 mL/min) and a Wed of 4.8 \u00d7 10\u20135. For the third strategy, an increased Cac to Wed ratio was accomplished by a simultaneous increase in Cac and decrease in Wed, to a Wed of 1.2 \u00d7 10\u20134, obtaining droplets with a D50 of 136 \u03bcm and a span of 0.518. The zeta potential of the primary emulsion droplets was negative due to the negatively charged keratin at the interface (Figure 6a). The microcapsule structure was visualized by fluorescence microscopy of samples made with FITC-labeled chitosan and nile red-stained sunflower oil.
The location of FITC-chitosan, after removal of excess from the CP by dilution in water, was concentrated at the droplet surface (Figure 6bi). The attraction between biopolymers within a polyelectrolyte complex differs in strength depending on the characteristics of the biopolymers in question and the environmental conditions,12 and coacervate microcapsules sometimes require chemical cross-linking to give strength and stability to the shell.36 Therefore, different quantities of GTA solution were added during microcapsule formation to cross-link the amino groups of keratin and chitosan molecules. The D50 of the cross-linked microcapsules was unchanged (Figure S1), whereas a significant increase in average particle size was observed in the uncross-linked sample.
Scale-up with the continuous xME also showed an increased droplet generation frequency, as compared to the batch SCME, leveraging an increased membrane surface area. Consequently, a higher DP flux and emulsion productivity were achieved. A droplet generation frequency of 76,168 droplets/s was obtained due to the xME\u2019s membrane pore area being \u223c10 times that of the SCME for the same DP flux. Further increases in droplet generation, while maintaining emulsion quality, can be obtained by increasing the membrane diameter and/or reducing the pitch length between pores. For example, doubling the inner diameter of the membrane at constant annular diameter and membrane thickness, or doubling the length of the membrane, would double the frequency of droplets produced at point (vii) conditions to \u223c51,000 droplets/s. Reducing the pitch length by 50% would have the greatest productivity effect by increasing the droplet generation frequency 4-fold to \u223c103,000 droplets/s. A combination of the three changes would result in a 16-fold higher droplet generation frequency. These values, together with numbering-up strategies, show that the keratin\u2013chitosan microcapsules could be produced at the industrial scale.
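The scale-up arithmetic above assumes the droplet generation frequency is proportional to the number of active pores, which scales linearly with the membrane inner diameter and length and inversely with the square of the pore pitch. A sketch of that bookkeeping; the base rate of ~25,750 droplets/s at "point (vii)" is inferred from the quoted doubled (~51,000) and 4-fold (~103,000) figures, not stated directly in the text:

```python
def scaled_frequency(base_hz, diameter_factor=1.0, length_factor=1.0, pitch_factor=1.0):
    """Droplet generation frequency assuming
    f is proportional to pore count ~ (inner diameter x membrane length) / pitch^2."""
    return base_hz * diameter_factor * length_factor / pitch_factor ** 2

base = 25_750.0  # droplets/s at point (vii), inferred from the text

double_diameter = scaled_frequency(base, diameter_factor=2)  # text: ~51,000 droplets/s
half_pitch = scaled_frequency(base, pitch_factor=0.5)        # 4-fold, text: ~103,000
combined = scaled_frequency(base, 2, 2, 0.5) / base          # 16-fold overall
```

Halving the pitch has the largest effect because pore count grows with the inverse square of the pitch, while the diameter and length terms are only linear.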
The production of stable microcapsules using renewable and biodegradable biopolymer wall materials, keratin and chitosan, is reported here for the first time. The compatibility and scale-up potential of the formulation were demonstrated with ME. Turbidity measurements confirmed the complexation of keratin and chitosan at pH 5.5, which was linked to electrostatic attraction arising from their opposite charges, and chitosan was seen to adsorb at the surface of keratin-stabilized primary emulsion droplets by zeta potential measurements and fluorescence microscopy. Using ME, it was possible to generate primary emulsion droplets with diameters of 30\u2013126 \u03bcm and a span as low as 0.394. Keratin\u2013chitosan microcapsules cross-linked with GTA showed significant stability over time, with no increase in size after 6 months in storage under ambient conditions. Considering the non-toxicity and biocompatibility of keratin and chitosan, the stability of the microcapsules at skin pH, and the possible release mechanism of mechanical rupture, these capsules may find use in cosmetic, personal care, or biomedical products."}
{"text": "Glioma stem cells (GSCs) are tumour initiating cells which contribute to resistance against treatment with temozolomide (TMZ) chemotherapy and radiotherapy in glioblastoma (GBM), the most aggressive adult brain tumour. A major contributor to the uncontrolled tumour cell proliferation in GBM is the hyperactivation of cyclin-dependent kinases (CDKs). Due to resistance to the standard of care, GBMs relapse in almost all patients. Targeting GSCs using the transcriptional CDK inhibitors CYC065 and THZ1 is a potential novel treatment to prevent relapse of the tumour. TCGA-GBM data analysis has shown that the GSC markers CD133 and CD44 were significantly upregulated in GBM patient tumours compared to non-tumour tissue. CD133 and CD44 stem cell markers were also expressed in gliomaspheres derived from recurrent GBM tumours.
Light Sheet Fluorescence Microscopy (LSFM) further revealed heterogeneous expression of these GSC markers in gliomaspheres. Gliomaspheres from recurrent tumours were highly sensitive to the transcriptional CDK inhibitors CYC065 and THZ1 and underwent apoptosis while being resistant to TMZ. Apoptotic cell death in GSC subpopulations and non-stem tumour cells resulted in sphere disruption. Collectively, our study highlights the potential of these novel CKIs to induce cell death in GSCs from recurrent tumours, warranting further clinical investigation.
Gliomas represent the most common primary malignancy of the central nervous system (CNS), and grade IV gliomas (glioblastoma (GBM)) are the most aggressive ones. Currently, the GBM standard of care (SOC) includes surgery followed by radiotherapy (RT) and temozolomide (TMZ) chemotherapy, and virtually all patients relapse. Cancer stem cells (CSCs) are defined as subpopulations of cells within a tumour that are capable of aberrant differentiation, self-renewal and tumour initiation. Resistance to radio- and chemotherapy in GBM is understood to arise from glioma stem cells (GSCs).3 CDKs are critical regulatory enzymes that drive all cell cycle transitions.
Two recurrent patient-derived tumour cultures were generated from GBM tissue samples provided by the Department of Neurosurgery of the ErasmusMC, obtained as part of routine resections from patients under their informed consent. Patient clinical characteristics are given in Table 2. Cells were routinely tested for mycoplasma infection and were mycoplasma free. GBM cultures were used up to passage number 25 and grown in freshly prepared (every two weeks) DMEM-F12 medium containing B27 supplement 50X (2%), human bFGF (20 ng/mL), human EGF (20 ng/mL), penicillin/streptomycin (100 U/mL) and heparin. Cells were grown as gliomaspheres in non-coated plates, or in ECM-coated flasks as monolayer cultures, and maintained in a humidified incubator at 37 \u00b0C and 5% CO2.
GBM cells were cultured as gliomaspheres and left untreated for basal expression analysis or treated with 150 \u03bcM TMZ, 3 \u03bcM CYC065 or 100 nM THZ1 for 120 h. At the end of the treatment window, cells were harvested, dissociated using Accutase and washed in FACS washing media. Cells were incubated for 15 min in FACS wash media on ice and stained for 30 min on ice, protected from light, with the staining solutions: unstained control, CD133-VioBright 667, VioBright 667-isotype control, CD44-FITC, or FITC-isotype control. All antibodies were diluted as recommended by the suppliers. VioBright 667 was excited at 638 nm and fluorescence emission was collected through a 660 nm long-pass filter. FITC was excited at 488 nm and fluorescence emission was collected in the FL1 channel through a 520 nm band-pass filter. A total of 1 \u00d7 104 gated cells were acquired. Isotype controls were used to set the CD133-positive and CD44-positive gates. FACS analysis was done using an Attune NxT flow cytometer and data were analysed using FlowJo Software 10.6.2.
Whole-cell lysates were prepared using RIPA lysis buffer containing 150 mM NaCl, 0.1% Triton X-100, 0.5% sodium deoxycholate, 0.1% sodium dodecyl sulfate (SDS) and 50 mM Tris in ddH2O, pH 8, with protease/phosphatase inhibitor cocktails. A BCA protein assay kit was used to determine the protein concentrations in the cell lysates. Equal amounts of lysate (20 \u03bcg) were diluted with Laemmli loading buffer, separated on 12% SDS-polyacrylamide gels and transferred to nitrocellulose membranes using wet transfer. Membranes were blotted with primary antibodies against Cdk2, Cdk7, Cdk9, cleaved caspase-3, cleaved caspase-7, \u03b1-tubulin and GAPDH. Membranes were next incubated with mouse or rabbit horseradish peroxidase-conjugated secondary antibodies, and protein bands were visualised using Supersignal West Pico Chemiluminescent Substrate (Pierce). Images were captured using a Fuji-film LAS-4000.
Cell death was measured using a BD LSRII flow cytometer. Recurrent GBM cells were seeded as gliomaspheres, 300,000 cells per well, in 6-well plates. After 24 h, cells were either pretreated with 150 \u03bcM TMZ for 24 h followed by treatment with 3 \u03bcM CYC065 or 100 nM THZ1 for 120 h, or treated with 3 \u03bcM CYC065 or 100 nM THZ1 as single agents for 120 h. Following treatments, gliomaspheres were pelleted, dissociated using Accutase and incubated in 100 \u03bcL of binding buffer containing AnnexinV-FITC conjugate and propidium iodide (PI) for 10 min on ice in the dark. FITC was excited at 488 nm and fluorescence emission was collected in the FL1 channel through a 520 nm band-pass filter. PI was excited at 561 nm and fluorescence emission was collected through a 605/40 nm band-pass filter and a 570 nm long-pass filter. A total of 1 \u00d7 104 gated cells were acquired. Acquired data from the flow cytometry analyses were analysed using FlowJo Software.
For genetic depletions, cells were seeded as 2-D cultures in 6-well plates coated with extracellular matrix (1:100) and transfected, according to the manufacturer\u2019s instructions, with 20 nM of CDK2, CDK9 or CDK7 siRNA or 20 nM control siRNA in Opti-MEM. Twenty-four hours post transfection, cells were detached and plated in DMEM-F12 complete medium in non-coated plates as described above. Whole-cell lysates were collected 24 h post re-plating as described previously, and transfection efficacy was determined using Western blot analysis. Flow cytometry and fluorometric cell viability assays were conducted 48 h after siRNA transfection as described. After the indicated genetic depletions, cells were seeded at 3000 cells/well in 96-well plates. Following a 24 h incubation, gliomaspheres were stained with Calcein AM.
4 \u03bcM Calcein AM in DPBS was prepared in a 15 mL falcon tube, mixed and incubated at 37 \u00b0C for 15 min. Media was then removed from the cultures, and 100 \u00b5L of the Calcein AM/DPBS mix was added per well for 30 min at 37 \u00b0C prior to imaging. Images were taken immediately with an Eclipse TE300 inverted microscope using the FITC channel.
For flow cytometry sorting and analysis, cells were dissociated with Accutase and labelled with CD44-FITC and CD133-VioBright 667 along with the relevant isotype controls for gating. Gating for single cells was established using forward scatter in the isotype control sample. The isotype control sample was used to establish gates in the FITC and VioBright 667 channels. Cells showing signal for CD133 above the gate established by the isotype control were deemed CD133-positive; cells showing signal for CD44 above the gate established by the isotype control were deemed CD44-positive. Sorting was performed on a BD FACS instrument.
For imaging, the spheroids were embedded in 1% low-melting agarose in DPBS at 38\u201340 \u00b0C, doped with sub-resolution beads at a concentration of 1:1000 of the original stock, and sucked into a glass capillary while liquid. Spheres were then stained as per the protocol described above, using an in-house-made holder to enable immersion of the hardened agar with the sphere into the prepared staining solution (1:10 CD133-VioBright 667 and 1:100 CD44-FITC) on ice, as per the manufacturer\u2019s instructions. For imaging, the capillary was mounted in the sample holder and chamber of the Light Sheet Fluorescence Microscope. A plunger was used to push the agar with the embedded spheroids out of the capillary into the liquid in front of the 20 \u00d7 1.0 NA lens. The light sheet was generated with two 10 \u00d7 0.2 NA lenses illuminating the sample alternately from each side using the pivot scan mode. Images were taken at zoom 1.0, resulting in a light sheet thickness of 3.3 \u00b5m (405 nm excitation).
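The isotype-control gating described above is commonly implemented by placing the positivity threshold at a high percentile of the isotype control's fluorescence, so that only a small fraction (here assumed ~1%) of control events exceed it; the percentile choice is an assumption for illustration, not stated in the study. A schematic sketch with synthetic intensities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log-scale fluorescence: isotype control vs marker-stained sample
isotype = rng.normal(loc=2.0, scale=0.3, size=10_000)
stained = np.concatenate([
    rng.normal(2.0, 0.3, 7_000),   # marker-negative cells (control-like signal)
    rng.normal(3.5, 0.3, 3_000),   # marker-positive cells (brighter)
])

gate = np.percentile(isotype, 99)            # threshold from the isotype control
positive_fraction = np.mean(stained > gate)  # fraction of cells scored positive
```

With these synthetic populations, roughly 30% of cells fall above the gate, reflecting the 30% spiked-in positive subpopulation plus the ~1% false-positive tail tolerated by the percentile choice.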
Hoechst 33258 was excited using the 405-nm laser line, VioBright 667 with 639 nm, and FITC with 488 nm, all using the 405/488/561/640-nm notch filter. To split the emission onto the two PCO edge sCMOS cameras, filter cubes with a 510 nm beam splitter were used, with band-pass filters of 420\u2013470 nm (Hoechst 33258), LP 660 nm (VioBright 667) and 505\u2013545 nm (FITC). Image stacks were taken by moving the object along the optical axis of the imaging objective in 1 \u00b5m steps and subsequently turning the object in 45\u00b0 steps, re-aligning the object for each of the next 4 stacks. Stacks in Figure 2A were taken at zoom 2, with a 2.91 \u03bcm average light sheet thickness at the sample and 0.114 \u03bcm represented by one pixel; stacks in Figure 2B were taken at zoom 1.2, with a 3.7 \u03bcm average light sheet thickness at the sample and 0.19 \u03bcm represented by one pixel. Stacks in Figure 3 were taken at zoom 1, with an average 3.25 \u03bcm light sheet thickness and 0.228 \u03bcm represented by one pixel. All images were processed using ZEN black and FiJi (ImageJ 1.52r-t). The multi-view fusion and deconvolution were performed using the dedicated FiJi plugins (MultiView Reconstruction v5.0.2034).27,38
We applied two transcriptional inhibitors, CYC065 and THZ1, to target CD133 and CD44 double- or single-positive GSCs in recurrent gliomaspheres, as well as double-negative subpopulations. Since GSCs are known to effectively resist standard chemotherapy, we also studied TMZ as a monotherapy or in combination with the CDK inhibitors. Resistance to DNA alkylating agents in GSCs can be acquired through regulation of the cell cycle, which is slower in GSCs during chemotherapy treatment, causing cells to enter a quiescent state.
Overall, the high anti-cancer activity of both CYC065 and THZ1 in recurrent GSC cultures highlights the potential of these CDK inhibitors as an alternative to overcome treatment resistance to conventional therapies in GBM. Further investigations using preclinical animal models are needed to validate these in vitro findings."}
{"text": "During biomineralization, the cells generating the biominerals must be able to sense the external physical stimuli exerted by the growing mineralized tissue and change their intracellular protein composition according to these stimuli. In molluscan shell, the myosin-chitin synthases have been suggested to be the link for this communication between cells and the biomaterial. Hyaluronan synthases (HAS) belong to the same enzyme family as chitin synthases. Their product hyaluronan (HA) occurs in bone and is thought to have a regulatory function during bone regeneration. We hypothesize that HAS expression and activity are controlled by fluid-induced mechanotransduction, as is known for molluscan chitin synthases. In this study, bone marrow-derived human mesenchymal stem cells (hMSCs) were exposed to fluid shear stress of 10 Pa. The RNA transcriptome was analyzed by RNA sequencing (RNAseq). HA concentrations in the supernatants were measured by ELISA. The cellular structure of hMSCs and HAS2-overexpressing hMSCs was investigated after treatment with shear stress using confocal microscopy. Fluid shear stress upregulated the expression of genes that encode proteins belonging to the HA biosynthesis and bone mineralization pathways. HAS activity appeared to be induced. Knowledge about the regulatory mechanisms governing HAS expression, trafficking, enzymatic activation and quality of the HA product in hMSCs is essential to understand the biological role of HA in the bone microenvironment.
Biomineralization is defined as a process by which living organisms control the formation of hierarchically structured minerals.
In animals, the ability to produce biominerals evolved around 550 million years ago. During the evolution of biomineralization, an intricate set of mechanisms which convert mechanical stimuli into biochemical cascades, termed mechanotransduction, has been developed as a general machinery for the regulation of biomaterial designs. Mechanotransduction also plays a regulatory role in the formation of molluscan shell. In general, an organic matrix is necessary for nucleating minerals and crystals. Chitin, a homo-polymer of \u03b2-(1-4)-linked N-acetylglucosamine molecules, is the main part of this matrix and fulfills an important function in the formation and functionality of mollusc larval shells.12
Hyaluronan (HA), just like chitin, is a linear polysaccharide, which consists of alternating residues of \u03b2-D-(1-4)-N-acetylglucosamine and \u03b2-D-(1-3)-glucuronic acid. HA plays important roles in skeletal biology, influencing processes such as migration and condensation of skeletogenic progenitor cells, limb development, joint cavity formation and longitudinal bone growth.16 HAS1 and HAS2 produce long HA chains of up to \u223c2 \u00d7 106 Da, whereas HAS3 produces shorter chains of 1 \u00d7 105 Da to 1 \u00d7 106 Da. Human bone marrow-derived MSCs express all three HAS isoforms and the HA receptor CD44.
The question arises as to whether HA plays a similar regulatory role in bone formation to that of chitin in molluscan shell formation. However, the processes in bone might be more complicated, since bone remodels in a way unlike molluscan shell. In this study, we applied fluid shear stress to bone marrow-derived hMSCs as a mechanical stimulus and analyzed the complete set of RNA transcripts and the activity of HASes in hMSCs. In some bone diseases, the HA content in the bone or the HAS expression in hMSCs is changed.29,30 A fluid shear stress of 10 Pa was applied for a duration of 20 h.
This pressure level matches the range of volumetric mean marrow shear stress in porcine femurs of 7.1 \u00b1 6.2 Pa during stress relaxation and 9.6 \u00b1 6.9 Pa during cyclic loading as demonstrated by Metzger et al. [To induce mechanotransduction via fluid shear stress in hMSCs obtained from the bone marrow of four healthy donors , cells were cultured on fibronectin-coated channel slides which were connected to a medium-containing tubing system driven by a peristaltic pump. As a control, hMSCs were cultured on fibronectin-coated 24-well plates under static conditions. The whole system, except the pump itself, as well as the control cells were kept in a humidified incubator at 37 \u00b0C and 5% CO2 A. Cells r et al. . Differep-value smaller than 0.05. As biological relevant changes in gene expression, we defined a log2-fold change smaller than -2 or higher than 2. This resulted in 683 significantly downregulated and 624 significantly upregulated genes showed a clear segmentation according to the applied fluid flow along PC1 A. Althoued genes B. A graped genes C, while ormation ,32,33; Tstimulus ,35; PTGStabolism and upretabolism ; CLDN4, unctions ; and XIRrization . Distingigration ; C3AR1, em cells ; CDH10, protein ; and MKIferation ,44,45.CD44, HMMR, ICAM1 and LAYN with red color in the volcano plot and the significantly changed HA receptors ano plot D. The exnown yet . The hyanown yet . Intereslular HA . At the lular HA and IL1-lular HA .Altogether, in our experimental setup, hMSCs responded to fluid shear stress by increasing the expression of genes involved in bone formation, cell adhesion to the surface, cell\u2013cell adhesion and hyaluronan biosynthesis. 
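The 10 Pa set point for the channel-slide flow system described above can be related to the pump flow rate via the standard parallel-plate approximation tau = 6 * eta * Q / (w * h^2). The channel geometry and medium viscosity below are assumptions for illustration, not values from the study:

```python
# Sketch: relating target wall shear stress to pump flow rate for a
# parallel-plate flow channel, tau = 6 * eta * Q / (w * h^2).
# Geometry and viscosity below are assumed values, not from the study.

ETA = 7.2e-4         # Pa*s, approximate viscosity of culture medium at 37 C
W, H = 5e-3, 0.4e-3  # assumed channel width (5 mm) and height (0.4 mm)

def wall_shear_stress(q_m3_s, eta=ETA, w=W, h=H):
    """Wall shear stress (Pa) for laminar flow between parallel plates."""
    return 6.0 * eta * q_m3_s / (w * h ** 2)

def flow_rate_for_stress(tau_pa, eta=ETA, w=W, h=H):
    """Pump flow rate (m^3/s) needed to reach a target shear stress."""
    return tau_pa * w * h ** 2 / (6.0 * eta)

q = flow_rate_for_stress(10.0)  # the 10 Pa set point used in the study
print(f"required flow rate: {q * 6e7:.1f} mL/min")  # 1 m^3/s = 6e7 mL/min
print(f"round-trip check: {wall_shear_stress(q):.2f} Pa")
```

In practice the conversion comes from the slide manufacturer's calibration, as the text notes; this sketch only shows why flow rate scales linearly with the target stress.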
On the contrary, expression of genes associated with proliferation, the cell cycle and migration was diminished upon exposure to fluid flow.To further investigate the response of hMSCs upon mechanical stimulation with fluid shear stress, we performed Gene Ontology (GO) enrichment analysis of the 624 upregulated genes. HAS2, KLF2, KLF4, MEF2C, MTSS1, PTGS2, SRC, SREBF2, TFPI2), six genes without changes and three significantly downregulated genes in hMSCs. Thus, we observed a clear tendency for a positive response to fluid shear stress. Although we found donor-dependent differences in the expression levels of the individual genes, all donors showed a reaction to the mechanical stimulus in the expression of the majority of genes with minor individual differences (2-fold change (LFC) value smaller than \u22122, but HAS2, KLF2, KLF4, MTSS1, PTGS2 and TFPI2 exhibited a significant LFC value higher than 2 showed a clear cluster of nine significantly upregulated genes signaling is essential in bone formation and represented here by BMP2, BMP6 and the receptor BMPR1B, which are involved in bone remodeling and osteoblast differentiation [DMP1, IBSP and SPP1) coding for non-collagenous bone matrix proteins of the small integrin-binding ligand, the N-linked glycoprotein family [WNT1 and WNT10B are also part of the significantly upregulated genes in these bone-related pathways, both involved in the generation and function of osteoblasts [PTGS2 and SOX11, which are involved in early osteoblast differentiation [ODAPH, which participates in the enamel mineralization of teeth [PTHLH, a gene important for bone integrity [Mechanotransduction is known to be important for the regulation of osteogenic differentiation of MSCs ; therefontiation . BMP2-inntiation . We alson family , which an family . WNT1 aneoblasts ,57. The ntiation . We alsoof teeth , and PTHntegrity . The uprCD44, HMMR, ICAM1, LAYN and LYVE1). 
Plotted as a heatmap, we found a group of nine significantly upregulated genes , seven genes with insignificant changes in expression and two genes showing a significant downregulation with an overall tendency to a positive LFC and the genes encoding for five HA receptors (p < 0.05).Furthermore, we determined the amount of secreted HA in the supernatants of fluid shear stress-treated hMSCs compared to the static cultured hMSCs using a commercial HA-ELISA kit. In the static cultures, the basic activity levels ranged between 101.6 \u00b1 38.0 and 322.3 \u00b1 48.8 ng HA/1\u00d710reported . Similar4 cells) A. Nevert4 cells) B after eAltogether, the increased number of microvillus-like protrusions and the slightly higher amount of secreted HA upon fluid shear stress indicate the mechanosensitivity of the HASes in hMSCs.PTGS2, which is a well-known marker for the application of this stimulus in hMSCs [Bone marrow-resident MSCs play an important role in bone formation and homeostasis. This function in vivo is mainly fulfilled by their osteogenic differentiation capacity through osteoprogenitor cells, pre-osteoblasts and mature osteoblasts. It has been previously reported that MSC differentiation into the osteogenic lineage in vitro can be induced by the application of different types of mechanical stimulations such as matrix strains and shear stress . In vivoin hMSCs . Beside PTGS2 and BMP2. Signal transduction of mechanically stimulated osteogenesis in hMSCs is mediated by MAP kinase pathways [PTGS2, BMP2, WNT1 and WNT16 are part of the 25 most upregulated genes in our dataset, whilst BMP6, WNT10B and BMPR1B were at least part of the 624 significantly upregulated genes as well as part of gene sets linked to osteogenesis. 
According to the literature, we observed enrichments in the cellular response to a BMP stimulus (GO:0071773) without the addition of these proteins, the upregulations of osteoblast differentiation (GO:0045669), biomineral tissue development (GO:0070169), bone mineralization (GO:0030501) and ossification (GO:0045778) and skeletal system morphogenesis (GO:0048705).Commitment towards the osteogenic lineage in hMSCs can be achieved via mechanotransduction , applyinpathways and Wnt pathways ,32. PTGSHASes in hMSCs obtained from different donors [KLF2 is significantly upregulated are expressed, while HAS2 is the most abundant isoform and HAS3 has the lowest expression level ,62. The nors see . We have(hUVECs) . To our (hUVECs) . In our egulated B, but wend IL1-\u03b2 D.XIRP1, an inhibitor of actin depolymerization. The consequence might be the increased attachment of the shear stress-induced cells as indicated by the negative regulation of cell motility (GO:2000146) and the positive regulation of cell adhesion (GO:0045785). However, there could also be a correlation with the increased formation of HAS-induced protrusions which are known to contain actin filaments in themselves and in their cortical bases [Our data strongly suggest that fluid shear stress not only upregulates the mRNA expression of HASes but it also increases their activities. On the one hand, the HA content in the supernatants of hMSCs cultured in the presence of shear stress was higher than in the supernatants of hMSCs cultured under static conditions . On the al bases . We alsoThe molluscan myosin-chitin synthases are assumed to be involved in the formation of actin-rich microvilli at the shell-forming interface . As the The exact function of HA in hMSCs is not fully understood yet. 
HA might regulate their adhesive properties in the bone microenvironment , but it g for 5 min, the cell pellet was resuspended in a culture medium consisting of MEM \u03b1 with nucleosides and GlutaMAX supplement (Life Technologies) containing 10% (v/v) fetal bovine serum , and 1000 U/mL penicillin and 1000 \u00b5g/mL streptomycin (Biochrom) at 37 \u00b0C with 5% CO2 and ~90% humidity. Nonadherent cells were removed by washing with DPBS for three times after the first three days in culture. Cells were passaged when reaching a confluence of ~80% and frozen at passage three in culture medium containing additional 20% (v/v) FBS and 10% (v/v) dimethyl sulfoxide in liquid nitrogen. Aliquots were thawed and cultured five days in advance to the experiments, which were always performed with cells in passage four.Human MSCs were isolated from the bone marrow of femoral heads of four healthy male donors (49 to 87 years old) recruited at the Clinic for General, Trauma and Reconstructive Surgery of the Ludwig-Maximilians-University . The study was approved by the ethics committee of Ludwig-Maximilians-University, Munich and performed according to the Declaration of Helsinki. A written informed consent declaring that the eliminated tissue can be used for medical studies at the university hospital was signed by all donors. The cell isolation was performed by washing the bone graft material with DPBS . Afterwards, the bone material was incubated with 250 U/mL collagenase II in DMEM three times for 10 min each at 37 \u00b0C. Both suspensions were filtered with a 100-\u03bcm cell strainer to remove bone fragments. 
After centrifugation at 500\u00d7 The generation and culture conditions of the HAS2-overexpressing immortalized hMSCs (SCP1-HAS2-eGFP) are described elsewhere .2) inside the channel slide according to the manufacturer\u2019s instructions to reach the level of volumetric mean marrow shear stress in porcine femurs [2 human plasma fibronectin in DPBS for 1.5 h at 37 \u00b0C and washed with DPBS. The number of seeded cells was: 2.5\u00d7105 cells per channel slide; 1\u00d7105 cells per well of the plate; 5\u00d7104 cells per well of the chamber. After 6.5 h of adhesion, the medium was changed in the control wells and the channel slides were connected to the flow setup to apply shear stress for 20 h. Before and after the mechanical stimulation, images were taken from both conditions using an Axiovert 40 CFL microscope .To apply fluid shear stress as a stimulus of mechanotransduction in hMSCs, a setup for culture under flow conditions was created A. Therefe femurs . All comThe cells from all four donors were lysed in 200 \u00b5l Trizol (Life Technologies) per well (24-well plate)/channel slide. After RNA isolation by following a standardized protocol, RNA quality and quantity were measured using a BioAnalyzer . The libraries for sequencing were prepared with a SENSE mRNA-Seq Library Prep Kit V2 and sequenced on a HiSeq1500 device with a read length of 50 bp and a sequencing depth of approximately 15 million reads per sample. The raw basecall (Bcl) files were demultiplexed with Illumina_Demultiplex. Transcriptomes were aligned to the human reference genome GRCh38.99 by using STAR (version 2.7.2b) to obtai2. Differentially expressed genes were obtained through DESeq2 and insignificant genes were filtered by a 0.05 adjusted p-value cutoff. Log2-fold changes (LFC) smaller than -2 or higher than 2 were defined as thresholds to identify biologically relevant results. 
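The significance filter just stated (adjusted p-value below 0.05, log2-fold change beyond plus or minus 2) can be sketched as a simple classifier. The gene names below occur in the study, but the numbers are invented for illustration:

```python
# Significance filter used above: adjusted p < 0.05 and |log2 fold change| > 2.
# Gene names below occur in the study, but these numbers are invented.
def classify(genes, padj_cutoff=0.05, lfc_cutoff=2.0):
    up, down = [], []
    for name, lfc, padj in genes:
        if padj >= padj_cutoff:
            continue  # not statistically significant
        if lfc > lfc_cutoff:
            up.append(name)
        elif lfc < -lfc_cutoff:
            down.append(name)
    return up, down

demo = [
    ("HAS2", 3.1, 0.001),    # induced by fluid shear stress
    ("PTGS2", 2.6, 0.004),   # shear stress marker, induced
    ("MKI67", -2.4, 0.010),  # proliferation marker, repressed
    ("GAPDH", 0.1, 0.900),   # housekeeping gene, unchanged
]
up, down = classify(demo)
print(up, down)  # ['HAS2', 'PTGS2'] ['MKI67']
```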
From 21,451 genes, 624 were considered to be upregulated and 683 were downregulated after application of the fluid stress (DESeq2 (version 1.26.0) was usedd stress B.p \u2264 0.05; log2-fold change \u22652; part of the background set provided by clusterProfiler). The background frequency is the number of all genes annotated to a GO term divided by the number of genes of the entire background set provided by clusterProfiler.Up- and downregulated genes in the DESeq2 results were analyzed for their Biological Process Gene Ontology Results (GO) by using clusterProfiler (v3.14.3) with orgCD44, HMMR, ICAM1, LAYN and LYVE1) obtained from the DESeq2 results was analyzed concerning the LFC and adjusted p-values. In a similar manner, the change in expression levels in the subset of genes corresponding to the cellular response to fluid shear stress (GO:0071498) was used to verify mechanotransduction.The expression of HA-related genes, a subset consisting of the hyaluronan biosynthetic process (GO:0030213) and five HA receptors paraformaldehyde in DPBS, followed by a permeabilization step with 0.1% (v/v) Triton X100 (Sigma-Aldrich) in DPBS per well/channel. After an additional washing step with DPBS, the cells were blocked with 1% (w/v) BSA (Sigma-Aldrich) in DPBS for 1 h at room temperature. Following this, a rabbit anti-HAS2 antibody and a mouse anti-CD44 antibody were diluted in 1% BSA in DPBS. The fixed cells were incubated over night at 4 \u00b0C in primary solution. Afterwards, the wells were washed with 1% BSA in DPBS. As secondary antibodies, a goat anti-rabbit Alexa Fluor 488 conjugate , a donkey anti-mouse Alexa Fluor 546 conjugate and a phalloidin Alexa Fluor 647 conjugate diluted in 1% BSA in DPBS were added for 1 h at room temperature. The cells were washed with DPBS before the nuclei were counterstained for 2 min with 4,6-diamidino-2-phenylindole (DAPI) and washed a final time with DPBS. 
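The GO analysis described above typically reduces to a one-sided hypergeometric test built on exactly the background frequency defined in the text. A stdlib sketch follows; only the 21,451-gene background is taken from the text, while the GO-term size and overlap are invented:

```python
# One-sided hypergeometric test for GO enrichment. N is the background
# size from the text (21,451 genes); K (term size) and k (overlap) are
# invented here for illustration.
from math import comb

def go_enrichment_p(N, K, n, k):
    """P(X >= k) when drawing n genes from N, of which K are annotated."""
    hits = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1))
    return hits / comb(N, n)

background_freq = 150 / 21451  # 'background frequency' as defined above
p = go_enrichment_p(N=21451, K=150, n=624, k=20)
print(f"background frequency: {background_freq:.4f}, enrichment p = {p:.2e}")
```

With these toy numbers the expected overlap is about 4.4 genes, so observing 20 gives a very small p-value; clusterProfiler performs this kind of test per GO term and then corrects for multiple testing.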
Finally, the stained cells were kept in DPBS to perform confocal microscopy using a Leica SP8 AOBS WLL, a HC PL APO 63\u00d7/1.30 GLYC CORR CS2 objective and Lightning deconvolution software applying 1.28\u00d7 zoom . Quantification of protrusions and the definition of density (ratio of detected protrusions to cell edge length) were performed as described elsewhere [p-value less than 0.05 was considered as statistically significant.Fluorescence staining of primary hMSCs and SCP1-HAS3-eGFP cells was performed to analyze the morphological changes at the molecular level after the application of fluid shear stress. Therefore, 1\u00d710lsewhere using thlsewhere for the lsewhere . Therefot-test. A p-value less than 0.05 was considered as statistically significant.HAS activity was analyzed by measuring the HA content in the supernatant of the cells cultured with fluid shear stress and cells cultured in 8-well slides as static control. As substrate, 10 mM N-acetyl-D-glucosamine (Sigma-Aldrich) was added to the standard culture medium for 20 h during the experiment. Afterwards, the supernatant was collected. As described above, the cells were washed, fixed and stained with a phalloidin Alexa Fluor 488 conjugate diluted in DPBS for 1 h at room temperature. After washing in DPBS, the nuclei were counterstained with DAPI. Finally, mosaic pictures of the whole growth area were taken using an AxioObserver Z1 (Carl Zeiss). The cells were counted using the FIJI software . The qua"} +{"text": "RAS wild-type metastatic colorectal cancer (mCRC) patients. However, cetuximab resistance often occur and the mechanism has not been fully elucidated. 
The purpose of this study was to investigate the role of asparaginyl endopeptidase (AEP) in cetuximab resistance. Cetuximab, a monoclonal antibody targeting epidermal growth factor receptor (EGFR), is effective for RAS wild-type mCRC. Differentially expressed genes between cetuximab responders and non-responders were identified by analyzing the gene expression profile GSE5851, retrieved from Gene Expression Omnibus (GEO). The potential genes were further validated in cetuximab-resistant CRC cell lines. The expression of AEP in the peripheral blood and tumor tissues of mCRC patients in our hospital was detected by enzyme-linked immunosorbent assay (ELISA) and immunohistochemistry, respectively. The survival analysis was carried out by the Kaplan\u2013Meier method. The function and associated pathways of AEP were further investigated by lentivirus transfection, CCK8 assay, colony\u00a0formation\u00a0assay, real-time polymerase chain reaction (qPCR) and western blot. Through bioinformatics analysis, we found that the expression of the AEP gene was related to progression-free survival (PFS) of mCRC patients treated with cetuximab alone (P\u2009=\u20090.00133). The expression of AEP was significantly higher in the cetuximab-resistant CRC cell lines, as well as in mCRC patients with shorter PFS treated with cetuximab-containing therapy. Furthermore, AEP could decrease the sensitivity of CRC cells to cetuximab in vitro, and the phosphorylation level of MEK and ERK1/2 was increased in AEP-overexpressing cells. The downregulation of AEP using specific inhibitors could partially restore the sensitivity of CRC cells to cetuximab. The higher expression of AEP could contribute to the shorter PFS of cetuximab treatment in mCRC. The reason might be that AEP could promote the phosphorylation of MEK/ERK protein in the downstream signal pathway of EGFR.The online version contains supplementary material available at 10.1007/s12094-022-02986-6.
RNA isolation was performed using the TRIZOL reagent . The cDNA was prepared using an oligo (dT) primer (Supplement Table 1) and reverse transcriptase following standard protocols. Quantitative real-time polymerase chain reaction (qRT-PCR) was performed using SYBR Green on the ABI 7500 real-time PCR System . Relative expression was presented using the 2\u2212\u0394\u0394Ct method. Cell samples were collected and lysed in RIPA buffer with protease inhibitor cocktails. Total cell protein extracts (20\u00a0\u00b5g/lane) were subjected to SDS-PAGE analysis. The membrane was blocked with 5% milk in TBST before overnight incubation at 4\u00a0\u00b0C with the following antibodies: AEP antibody or EGFR antibody , phospho-EGFR antibody , MEK1/2 , phospho-MEK1/2 , ERK1/2 , phospho-ERK1/2 , beta-actin . The membranes were washed with TBST and incubated with the secondary antibodies (Santa Cruz Biotechnology). The immunoreactive proteins were visualized by chemiluminescence reagents . Levels of AEP protein in the cell supernatant were determined using an ELISA kit in accordance with the protocol provided by the manufacturer. Briefly, samples and standards were added to a 96-well polystyrene microplate coated with diluted AEP capture antibody and incubated for 2\u00a0h. The plates were washed, the AEP detection antibody was added, and the plates were incubated for 2\u00a0h. After two washes, the working dilution was added, and the plate was covered and incubated for 20\u00a0min at room temperature. Substrate solution was added to each well and incubated for 20\u00a0min at room temperature. The reaction was terminated with stop solution. The optical density of each well was then determined immediately using a microplate reader set to 450\u00a0nm. Slides were routinely deparaffinized and rehydrated, and then heated at 98\u00a0\u00b0C in a citrate buffer for 20\u00a0min and cooled naturally to room temperature.
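The relative-expression step in the qRT-PCR protocol above appears to use the standard 2^(-ddCt) (Livak) method for SYBR Green data; assuming that is the intended formula, the computation is:

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Livak 2^(-ddCt): fold change of a target gene, normalized to a
    reference gene, in a sample relative to a control condition."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_sample - dct_control)

# Invented Ct values (not measurements from this study): the target
# amplifies 3 cycles earlier in the sample, relative to beta-actin.
fold = ddct_fold_change(22.0, 15.0, 25.0, 15.0)
print(fold)  # 8.0
```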
Sections were incubated in 0.3% hydrogen peroxide for 20\u00a0min and blocked with 5% normal horse serum in PBS for 30\u00a0min. AEP antibody was added for incubating overnight at 4\u00a0\u00b0C, then stained using a highly sensitive streptavidin\u2013biotin\u2013peroxidase detection system and counterstained with hematoxylin.5 cells per well. The cells were then infected with the same titer virus with 8\u00a0\u03bcg/ml polybrene on the following day. Approximately 72\u00a0h after viral infection, GFP expression was confirmed under a fluorescence microscope, and the culture medium was replaced with selection medium containing 4\u00a0\u03bcg/ml puromycin. The cells were then cultured for at least 14\u00a0days. The puromycin-resistant cell clones were isolated, amplified in medium containing 2\u00a0\u03bcg/ml puromycin for seven to nine days, and transferred to a medium without puromycin. The clones were designated as AEP-KD or NC cells.The target sequences for AEP shRNAs were 5\u2032-gatccGATGGTGTTCTACATTG-AATTCAAGAGATTCAATGTAGAACACCATCTTTTTTg-3\u2032 (AEP-KD1) and 5\u2032-gatccAAACTGATGAACACCAATGATTTCAAGAGAATCATTGGTGTTCATCAGTTTTTTTTTg-3\u2032 (AEP-KD2). After 48\u00a0h, the efficiency of AEP knockdown was confirmed via western blot and ELISA. Lentiviral vectors for human AEP-shRNA carrying a green fluorescent protein (GFP) sequence were constructed by Hanyin Co. . To obtain the stable AEP knockdown cell line, cells were seeded in six-well dishes at a density of 2\u2009\u00d7\u2009102. When clones were visible, the culture was stopped. The supernatant was removed by vacuum pump. 80% methanol was added at room temperature for 20\u00a0min. 
Crystal violet staining solution was added dropwise, and the colonies were counted by software image J.Cells were seeded in six-well dishes at a density of 1000 cells per well and cultured at 37\u00a0\u00b0C, 5% COCells fixed on coverslips were treated with the AEP antibody and secondary antibodies conjugated with Alexa Fluor-488 (Abcam). Then, coverslips were washed with PBS, stained with 4\u2032, 6-diamidino-2-phenylindole , and evaluated using laser confocal microscopy .t test. P value of\u2009<\u20090.05 was considered statistically significant. SPSS 24 software was used for statistical analysis.The median PFS was calculated by Kaplan\u2013Meier survival analysis, and its significance was evaluated by log rank test. The data results of continuous variables were expressed by mean\u2009\u00b1\u2009standard deviation, and the difference between the two groups was analyzed by P\u2009=\u20090.00133) , while there was no significant difference in the expression levels of other seven genes Fig.\u00a0E, F. Dat01) Fig.\u00a0G.Fig. 2AP\u2009=\u20090.0023; serum: 6.35\u00a0months vs 9.7\u00a0months, P\u2009=\u20090.0055).From August 2016 to April 2017, tissue and serum samples from 44 patients with mCRC treated with cetuximab-containing therapy in our department were collected and detected by immunohistochemistry and ELISA, respectively. The baseline characteristics are shown in Table P\u2009<\u20090.001) Fig.\u00a0C, the sa01) Fig.\u00a0D. Howeve01) Fig.\u00a0E. The reThe results of Western blot showed that the phosphorylation level of MEK and ERK 1/2 increased in AEP-OE cells Fig.\u00a0A. As shoIn this study, we identified differentially expressed genes between cetuximab responders and non-responders by analyzing the gene expression profile GSE5851, retrieved from Gene Expression Omnibus (GEO). We found that the expression level of AEP was closely related to the PFS of cetuximab treatment.KRAS wide-type mCRC patients who were enrolled for cetuximab monotherapy. 
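The median PFS figures above come from Kaplan\u2013Meier analysis. As a sketch of what that estimator does (the PFS values below are invented, not patient data from this study):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival steps. times: months of follow-up;
    events: 1 = progression observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk, surv, steps = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:
            n += 1           # everyone leaving the risk set at time t
            d += data[i][1]  # events (progressions) at time t
            i += 1
        if d:
            surv *= 1.0 - d / at_risk
            steps.append((t, surv))
        at_risk -= n
    return steps

def median_survival(steps):
    """First time the survival curve reaches 0.5 or below (None if never)."""
    for t, s in steps:
        if s <= 0.5:
            return t
    return None

# Invented PFS data in months (0 marks a censored patient), not study data.
times = [3, 5, 6, 6, 8, 9, 11, 12]
events = [1, 1, 1, 0, 1, 1, 1, 0]
print(median_survival(kaplan_meier(times, events)))  # 8
```

The log-rank test then compares two such curves (for example, AEP-high versus AEP-low patients) by pooling the expected event counts at each event time.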
Therefore, it is a very suitable database for analysis of biomarkers for cetuximab. Over the past decade, the combinations of cetuximab and chemotherapeutic regimens have been the standard treatment of RAS wild-type mCRC [The GSE5851 profile was reported in 2007 , containype mCRC , and fewype mCRC , to furtSince 2003, it has been well established that AEP is widely expressed in various cancers, such as glioblastoma , esophagRAS [BRAF [PIK3CA [We further studied the possible mechanisms for AEP-mediated cetuximab resistance. In our previous report, AEP could activate the MAPK/MEK/ERK signaling pathway and promote resistance to microtubule inhibitors in gastric cancers cells, including paclitaxel, docetaxel, and T-DM1 . CetuximRAS \u201313, BRAFAS [BRAF , 14, PIK [PIK3CA , 15, con [PIK3CA .It has been well reported that AEP may promote cancer progression through diverse pathways, but little is known about the mechanism of AEP-mediated drug resistance. AEP could promote tumor progression by blocking the tumor-suppressive function of P53 , activatIn this study, AEP inhibitors (AEPIs) were used to restore the sensitivity of AEP-overexpressed cells to cetuximab. Another study has showIn conclusion, this study suggested that the higher expression of AEP could contribute to the shorter PFS of cetuximab treatment in mCRC. The sensitivity of AEP overexpression colon cancer cells to cetuximab decreased, which can be partially restored by AEP inhibitor. AEP may mediate cetuximab resistance by activating the phosphorylation of EGFR/MEK/ERK signaling pathway.Supplementary file1 (DOCX 110 KB)Below is the link to the electronic supplementary material."} +{"text": "Agapornis fischeri), Peach-faced lovebirds (Agapornis roseicollis), and two Blue and Gold Macaws (Ara ararauna) from four different aviaries died after some days of lethargy and ruffled feathers. Records of gross necropsy and histopathological exams were described and biomolecular analyses were carried out. 
No specific gross lesions were appreciated at necropsy, while histopathology evidenced a systemic mycosis in several organs, particularly in the lungs. In affected organs, broad and non-septate hyphae, suggestive of mycoses, were observed. Molecularly, Mucor racemosus (Fischer's lovebird) and M. circinelloides (Peach-faced lovebirds) were identified from formalin-fixed and paraffin-embedded (FFPE) lung and liver tissue. In addition, Alternaria alternata and Fusicladium spp. were identified in FFPE tissue from several organs. The role of Mucor spp. as true pathogens is well demonstrated, whereas the behavior of A. alternata and Fusicladium spp. as opportunistic pathogens in macaws is discussed. To our knowledge, this is the first report of mucormycosis caused by M. racemosus and M. circinelloides in lovebirds, and of A. alternata and Fusicladium spp. in macaws. A retrospective study was conducted on parrots submitted for necropsy to the Department of Veterinary Pathology, School of Biosciences and Veterinary, University of Camerino, Italy, from 2007 to 2018. From a total of 2,153 parrots examined at post-mortem, four cases were diagnosed with atypical mycosis and were considered for determination of the fungus species by PCR. A Fischer's lovebird ( Aspergillus fumigatus and A. flavus, although the genus includes more than 300 known species , a 1-year-old male Fischer's lovebird (Agapornis fischeri), and a 2-year-old female Peach-faced lovebird (Agapornis roseicollis), coming from different Italian aviaries, were considered for determination of the fungus species by PCR. For all birds, the anamnesis reported some days of lethargy and ruffled feathers before death, without particular clinical signs. A retrospective study was conducted on parrots submitted for necropsy to the Department of Pathology, School of Biosciences and Veterinary, University of Camerino, Italy, from 2007 to 2018.
From a total of 2,153 necropsied parrots that were present in the archive, four cases received the histological diagnosis of atypical mycosis. These cases, involving a 10-year-old male and a 17-year-old female of Blue and Gold Macaw , Periodic Acid Schiff (PAS), and Grocott histochemical stains.To genetically characterized fungal elements, at least five formalin-fixed and paraffin-embedded (FFPE) tissue sections (5\u20138 \u03bcm) of lung and liver (for the Fischer's and Peach-faced lovebirds) or pool of organs (for the Blue and Gold Macaws) were sent to the Parasitology Laboratory of the \u201cIstituto Zooprofilattico Sperimentale delle Venezie,\u201d Italy, for molecular investigations.TM FFPE gDNA Miniprep System (Promega), including negative control (A single slice was transferred to a 1.5-ml tube and DNA extraction was performed by using ReliaPrep control . Each exi), and 2 portions of the LSU rRNA (ii and iii).The DNA was amplified by using SYBR Green Real-Time PCR (rtPCR) with three sets of primers targeting a short portion of the ITS 1 region . Scattered, thick-walled, and multiseptate muriform cells, measuring 6\u201312 \u03bcm and divided by fission were evidenced, bringing to the diagnosis of chromoblastomycosis. Long, septate, branched, and strongly PAS-positive hyphae were also observed in the kidney , both ati) and 28S rRNA (ii) rtPCR protocols. The 12F/13R (iii) rtPCR amplified DNA only from Agapornis tissues.Positive DNA amplification was obtained from FFPE tissue from all 4 birds, with ITS (Mucor racemosus (MT240480) and M. circinelloides (MT240488) were identified in Agapornis fischeri and Agapornis roseicollis, respectively with a similarity of 100% when blasted in GenBank database.For the lovebirds, all amplicons were successfully sequenced and ii- NL1/NL4 rtPCR) was successfully sequenced. Alternaria alternata and Fusicladium spp. were identified with 98% and 100% similarity, respectively. 
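The similarity values quoted from the GenBank BLAST searches are percent identity over the aligned region. A naive scorer for an already-aligned pair (BLAST additionally performs the alignment itself and computes E-value statistics) might look like:

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity over an already-aligned sequence pair,
    with '-' marking gaps. Scoring only; alignment is BLAST's job."""
    assert len(aligned_a) == len(aligned_b)
    matches = sum(a == b and a != '-' for a, b in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

# Toy fragments (invented sequences, not the study's 28S rRNA reads):
print(percent_identity("ACGTACGTAC", "ACGTACGTAC"))  # 100.0
print(percent_identity("ACGTACGTAC", "ACGTTCGTAC"))  # 90.0
```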
The sequencing of ITS amplicons was not possible, or sequences were of poor quality, showing double peaks in the electropherograms.In the two Blue and Gold Macaws, only a portion of the 28S rRNA amplicons (Mucor sequences obtained from Agapornis birds. Sequences of M. racemosus (MT240480) and M. circinelloides (MT240488), clustered into highly supported clades, are clearly separated from Mucor, Rhizopus, Saksenaea, Apophysomyces, and Lichtheimia species (A rooted tree was constructed with 28S LSU rRNA species .Mucor species (M. racemosus and M. circinelloides) as most likely true pathogens in the two lovebirds. The evidence of Alternaria and Fusicladium in Blue and Gold Macaw, suggests that saprophytic fungi can act as opportunistic pathogens in some instances, and are mostly linked to an immunodeficiency of the host.This study describes unusual fungal findings in four parrots. The molecular characterization confirmed the histological description and identified two Alternaria spp., Aspergillus spp., Mucor spp., and Venturia spp. were isolated from migratory birds, confirming the role of birds as a possible vector of fungi and the commensalism of fungi with their avian host , and administration of exogenous corticosteroids , A. alternata causes a season-dependent lung invasion, causing severe disease when the lowest immune status occurs , with feather picking and a skin lesion on the wing are phytopathogens causing worldwide significant economic loss to crops , are anamorphs and morphologically similar to Cladophialophora, were reclassified, with the dothidealean species Venturia hanliniana classified as the teleomorph of Fusicladium brevicatenatum can be found in the article/supplementary material.LG, CF, PD, and GR conceived the study. LG, CZ, LB, and GR performed necropsies and histological analysis and wrote the manuscript. CF and PD performed molecular analysis and wrote the manuscript. 
SB, A-RA, and GR reviewed the article and provided critical suggestions and comments. All authors discussed the results and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The purpose of this article is to explain how online advertising affects customer satisfaction through the mediation of brand knowledge. The sample size of this survey is based on 100 participants in the Multan region. This study collects data by conducting various unstructured interviews and uses a qualitative data acquisition technique. The results show that online advertising does not have a significant impact on customer satisfaction on its own; however, when brand knowledge is included as a parameter, the correlation between online advertising and customer satisfaction increases. Online advertising is a new advertising tool used by most organizations. This manuscript helps practitioners choose better tools for online promotion and use a variety of recognition techniques to improve their brand knowledge. This study shows that building customer confidence in product quality is a highly effective approach for business owners, as brand reputation enhances customer satisfaction. This study is unique in that previous studies considered elements of brand knowledge as parameters and neglected the direct relationship between online advertising and customer satisfaction.
This study highlights key points that will help emerging researchers critically analyze such aspects in future studies. There is a growing need for knowledge and information regarding the online purchasing behavior of consumers due to radical change in e-commerce, which is also known as electronic commerce . According to the survey results, most customers are not only searching for products on the internet for purchasing but are also interested in gaining important information about the specific products . This manuscript investigates the impact of an online advertisement about products and services on the satisfaction level of the customer, where sustainable brand knowledge plays a mediating role between these two variables . Previously, no major study had been conducted by scholars to comprehend the impact of an online advertisement on the satisfaction level of the customer while considering sustainable brand knowledge as a mediator. To justify this research, similar research by previous scholars is discussed below, which will enhance the authenticity and reliability of this research. With Daraz.com and other advertisement-supporting websites , e-commerce promised a \u201cperfect\u201d arrival in the market by introducing product and price comparison websites, the so-called shopping robots . In 2019, research was conducted to critically evaluate the importance of brand knowledge in enhancing the brand image in the customer market. Scholars conducted another study to evaluate the two major types of brand knowledge: brand awareness and brand image. In 2020, after conducting critical qualitative research on this factor, the researchers concluded that the perceived quality, positive attitude, and overall profitability ratio in a company are generated by enhancing the standard of marketing and sales channels .
In 2018, a study on online advertisement and customer satisfaction depicted that brands used different advertisement channels to attract their customers as well as to satisfy their needs and desires. In 2017, studies were conducted by researchers to critically evaluate the importance of the co-creational factor of the brand in strengthening the relationship between online brand communities and customer satisfaction. Based on the synthesized review of literature, previous studies do not have a consensus regarding the relationship among the said variables and the significance of online advertisements for customer satisfaction. Although some studies address the same topic, to the best of the authors\u2019 knowledge, none of them considers the same issue by incorporating the mediating effect of brand knowledge, especially in a developing market context; hence, this area is still under-researched. Therefore, this study intends to fill this gap by conducting unstructured interviews to examine the impact of online advertisements on customer satisfaction. To understand the impact of online advertisements on customer satisfaction and sustainable brand knowledge, which mediates the relationship between the independent variable (IV) and dependent variable (DV), different unstructured interviews and other observations based on secondary data were collected. The questionnaire used by Teo and colleagues in their research article was used for this analysis. The population of this research consisted of households, students, officers, hoteliers, and other related people. The sample comprised 100 individuals, who were chosen from several areas of the Multan region. 
For this purpose, questions were asked to understand precisely the impact of online advertisements on their satisfaction and also their knowledge regarding brands\u2019 advertisements and the online shopping sites where they purchased goods. All data collection was based on formal or informal interviews and other online secondary information. Each interview lasted only 20\u201325 min, to better understand the participant\u2019s level of perception regarding online advertisements. As discussed earlier, the interview format was highly unstructured. This qualitative technique is well suited to this research because the study focused on critically evaluating the psychological and behavioral approach of companies\u2019 targeted audiences in the current era. This study used a cross-sectional data collection method because all responses were collected at once. Convenience sampling was also used for data collection. Data were analyzed through personal interpretations and analytical models. This is an effective research method for critically inspecting and evaluating the cognitive characteristics of an ordinary person regarding the importance of online advertisements in the Pakistani market. By reviewing the literature and critically evaluating the relationship between these variables, we developed the following major hypotheses for this research work: H1: Online advertisements have a significant impact on customer satisfaction. H2: Brand knowledge has a significant and positive impact on customer satisfaction. H3: Brand knowledge strengthens the relationship between online advertisements and customer satisfaction. This survey analysis uses several elements in the interview and analysis process. 
Some of the important questions for respondents are listed in the corresponding table. These questions were asked of the respondents through unstructured interviews, while the online behavior of consumers was also observed. Consider first the variables of online surfing. Here, we asked the respondents three questions that capture their general attitude toward online surfing. Respondents have different views on this factor, but most say they spend a lot of time browsing ads for products and services. This behavior is the same whether or not the brand is known to the customer. The results show that customers often spend a lot of time browsing products and services for more detailed information. As one respondent said:\u201cBuying products online is an important issue, so we spend a lot of time finding more information, whether we know a particular brand or not.\u201dAnother respondent said:\u201cWhen I see the online advertisement of a known brand, I am certain that the product is not of poor quality.\u201dThis simply means that a product available on the online site of a brand known to the customer carries a guarantee of good product quality. The second element, general brand knowledge, is used to determine the impact of brand knowledge on customer satisfaction. Questions about this factor probe a customer\u2019s choice: customers are convinced that the best way to buy is to browse ads from their well-known brands. After searching for the product or service that best suited their needs, they purchased the product. The survey questions focused primarily on the level of customer awareness of advertising. According to the results, most people searched different brands and preferred only those brands whose features and prices directly matched the desired value. The final question of this factor is primarily based on whether a well-known brand delivered the product to the customer as shown in the online advertisement. The consequences of this factor go in the same direction. 
This shows that general brand knowledge about a product or service is of little concern when deciding to buy online. In Pakistan, people do not like to buy online because of the potential risk of loss. Because the responses were neutral, well-known brands do not encourage customers to buy products online, although they do guarantee that the products are of high quality. One respondent answered, \u201cThere is too much to lose\u2026\u201d, followed by:\u201cIf you blindly buy a product without reading the entire description or review, just because you already use that particular brand, can you get the high-quality product as shown in the picture? I do not think so\u2026 It\u2019s impossible.\u201dThis represents the customer\u2019s attitude toward online advertising and online purchases. That is, customers buy online products from well-known brands rather than unfamiliar brands. According to the resulting data, brand knowledge plays an important role in building long-term relationships with customers. Respondents said they prefer to buy the products and services of a brand that has a good reputation in the consumer market, as this increases their confidence in purchasing the product. This is a useful resource for critically assessing the direct impact of a company\u2019s positive reputation on the customer market. For the third element, respondents were asked about people\u2019s interest in viewing online ads: do they consider online advertising a source of information even when not purchasing products or services? The overall response to this factor was informative: people do not like to shop online, but they do view online advertising for informational purposes. Most respondents say they frequently check emails sent by different brands to evaluate the brand and its product offerings. 
As one respondent said:\u201cIt is interesting to see a colorful advertisement not only when I bought that product; but yes, for the sake of information, maybe in future I want to buy these types of products.\u201dAnother said:\u201cIt is a pleasurable activity to see product advertisements and check your junk mails full of advertisements when you have nothing to do.\u201dThis shows that people are not very interested in buying online but are interested in online advertising for informational purposes. These two answers and other related answers show that most customers prefer to get extensive relevant information about products offered through online sources. According to them, most buyers only bought products that met their needs and desires at a reasonable price, and the internet is an easy and reliable source for relevant information about products and services. Finally, customer satisfaction factors were critically discussed in interviews that asked respondents seven questions. This element covered the price of online purchases and the quality of products purchased on online sites, along with related questions for respondents. This was an important source of critical information about the importance of online advertising for customer satisfaction in the market. Most respondents gave a neutral response to this factor, and some gave a positive response. As one respondent said:\u201cThere are times when the online shopping experience is good and times when it is bad, but I think that items ordered from cheap online sites must have a more unpleasant experience than high-priced items.\u201dAccording to the answers, the response of participants to a company\u2019s products and services may deviate from the average position. By interpreting the data, we can see that price can also be a factor influencing customer satisfaction when purchasing online products. 
The reason is that price factors add some value to a company\u2019s products, especially for price-sensitive customers. In the market, most customers prefer products that are very reasonably priced compared to other products. Considering demographic factors such as age and gender as control variables makes it much easier to assess an individual\u2019s behavioral approach. Most online customers in Multan, Pakistan prefer to buy products and services that have a good reputation in the market. The market of Multan is considerably smaller than in other cities in Pakistan such as Lahore, Islamabad, Karachi, and Faisalabad. But now, with the penetration of technology in the region, many educated people prefer to get relevant information about a particular product or service of a brand. Brand knowledge and reputation play an important role in maintaining a position in the customer market. No customer was completely satisfied or completely dissatisfied with online purchases. Everyone shared their level of awareness, understanding, and experience. This shows the diversity of Multan\u2019s customer market, as most people simply visit a brand\u2019s online site to get relevant information about products, services, prices, features, and other details. The reason is that people have different economic standards. The responses weaken the first hypothesis of this research, indicating that online advertisements do not have a significant impact on customer satisfaction. However, the collected data justified the second hypothesis that brand knowledge has a significant and positive impact on the customer satisfaction level. The reason is that word of mouth regarding the operating activities of any brand creates some knowledge and confidence among the targeted customers regarding the company\u2019s products and services. 
After critically assessing the impact of online advertising on customer satisfaction through qualitative research, we find that brand knowledge plays an important role in improving a company\u2019s performance level in the competitive market. Analytical results show that this is an era of information technology, and online advertising and online purchases play a key role in maintaining a company\u2019s outstanding reputation in the customer and competitor markets over the long term. In this article, we concluded that online advertising has an indirect impact on customer satisfaction, because brand knowledge acts as an intermediary. Online advertising does not significantly affect customer satisfaction with online purchases. However, customers who have positive knowledge of the brand behind a particular product show a high level of satisfaction in the market. People tend to view online advertising as an important source of information, not as a channel for sale or purchase. From this, we can conclude that brand knowledge, positive or negative, has a significant impact on customer satisfaction with the company. This factor also further enhances the interaction between online advertising and customer satisfaction; otherwise, online advertising would not have a significant impact on customer satisfaction and brand awareness. This is an important study for critically assessing customer behavior by considering the importance of brand knowledge as a key parameter. There are still some caveats to this survey, as it is a productive survey aimed at clarifying the factors behind customer intention. The first is based on demographic variables such as income level, education level, religion, marital status, mortality rate, average family size, birth rate, the average age at marriage, and occupation. These control factors were not considered due to time limitations. Second, this survey was based on Multan customers\u2019 perceptions of online advertising. 
There was no comparison or contrast presented with customer awareness in other developed regions. The future direction of the researchers is to critically assess the current market conditions in the region and to assess the behavioral approaches of different customers in different regions of Pakistan and in other countries. The original contributions presented in this study are included in the article. AS designed the analysis. MI collected the data. AO performed the analysis. HZ contributed analysis tools. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Many long non-coding RNAs (lncRNAs) have key roles in different human biological processes and are closely linked to numerous human diseases, according to cumulative evidence. Predicting potential lncRNA-disease associations can help to detect disease biomarkers and perform disease analysis and prevention. Establishing effective computational methods for lncRNA-disease association prediction is critical. In this paper, we propose a novel model named MAGCNSE to predict underlying lncRNA-disease associations. We first obtain multiple feature matrices from the multi-view similarity graphs of lncRNAs and diseases utilizing a graph convolutional network. Then, weights are adaptively assigned to the different feature matrices of lncRNAs and diseases using an attention mechanism. 
Next, the final representations of lncRNAs and diseases are acquired by further extracting features from the multi-channel feature matrices of lncRNAs and diseases using a convolutional neural network. Finally, we employ a stacking ensemble classifier, consisting of multiple traditional machine learning classifiers, to make the final prediction. The results of ablation studies on both the representation learning methods and the classification methods demonstrate the validity of each module. Furthermore, we compare the overall performance of MAGCNSE with that of six other state-of-the-art models; the results show that it outperforms the other methods. Moreover, we verify the effectiveness of using multi-view data of lncRNAs and diseases. Case studies further reveal the outstanding ability of MAGCNSE in the identification of potential lncRNA-disease associations. The experimental results indicate that MAGCNSE is a useful approach for predicting potential lncRNA-disease associations. The online version contains supplementary material available at 10.1186/s12859-022-04715-w. Long non-coding RNAs (lncRNAs) are a type of non-coding RNA with lengths of more than 200 nucleotides, which cannot encode proteins. Predicting underlying associations between lncRNAs and different diseases has extremely important significance and value, since it can help to analyze and prevent diseases, identify disease biomarkers and reveal the mechanism of lncRNA levels in diseases. However, many biological experiments suffer from long duration and high cost. As a result, a growing number of computational methods have recently been developed to identify lncRNA-disease associations (LDAs). These methods can roughly be classified into two categories: biological network-based methods and machine learning (ML)-based methods. Biological network-based methods are premised on the notion that functionally comparable lncRNAs are frequently linked to similar diseases. 
In these methods, heterogeneous networks of diseases and lncRNAs are constructed, and then LDAs are identified via different techniques, such as matrix decomposition or random walk; SIMCLDA is one such method. ML-based methods generally use feature extraction techniques on lncRNAs and diseases to generate their representations, and then identify potential LDAs by applying ML classifiers. ML-based methods here refer not only to traditional ML methods, but also to deep learning methods; LDAP is one example. Although these methods for identifying LDAs have yielded promising results, there is still space for improvement. Firstly, for the representation learning methods, more advanced deep learning methods could be considered, such as the technique of graph convolutional networks (GCNs) for feature extraction, which has recently achieved outstanding performance. For example, GAMCLDA used GCNs. The main contributions of this study are as follows. Multi-view data of lncRNAs and diseases were used in this study, and MAGCNSE incorporated the lncRNA sequence information. MAGCNSE used deep learning methods that synthesize the techniques of GCN, attention mechanism and CNN to fuse the multi-view data and learn low-dimensional representations of lncRNAs and diseases. After getting the positive and negative lncRNA-disease pairs by concatenating the representations of lncRNAs and diseases according to the lncRNA-disease association matrix, MAGCNSE applied a stacking ensemble model that integrates multiple machine learning classifiers for the prediction task. A series of experiments were performed to demonstrate that MAGCNSE is competitive and reliable in the field of LDA prediction. In this study, a novel method named MAGCNSE is proposed to predict LDAs. First, the GCN is used to extract features from the similarity graphs of different views of lncRNAs and diseases to obtain multiple feature matrices. 
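The per-view GCN feature extraction just described follows the standard propagation rule, in which node features are smoothed over the symmetrically normalized similarity graph with self-loops and then linearly transformed. A minimal NumPy sketch of one such layer (a generic illustration with made-up matrix sizes, not the authors' implementation):

```python
import numpy as np

def normalize_adj(S):
    """Symmetrically normalize a similarity matrix with self-loops:
    A_hat = D^{-1/2} (S + I) D^{-1/2}."""
    A = S + np.eye(S.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One GCN propagation step with ReLU activation: ReLU(A_hat @ H @ W)."""
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
S = rng.random((5, 5)); S = (S + S.T) / 2      # toy symmetric similarity graph (one view)
H0 = np.eye(5)                                  # initial node features
W0 = rng.standard_normal((5, 3)) * 0.1          # trainable layer weights
H1 = gcn_layer(normalize_adj(S), H0, W0)        # per-view feature matrix
print(H1.shape)  # (5, 3)
```

Stacking several such layers, one per view, yields the multiple feature matrices that the later attention and CNN stages fuse.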
For the views of diseases, MAGCNSE uses disease semantic similarity (DSS) and disease Gaussian interaction profile kernel similarity (DGS), and for the views of lncRNAs, MAGCNSE uses lncRNA functional similarity (LFS), lncRNA sequence similarity (LSS) and lncRNA Gaussian interaction profile kernel similarity (LGS). Then, MAGCNSE leverages an attention mechanism to adaptively assign weights to the different feature matrices of lncRNAs and diseases. Next, MAGCNSE uses a CNN to further extract features from the multi-channel feature matrices to acquire the final representations of lncRNAs and diseases. The representation learning processes were partially inspired by a previous study. To evaluate the performance of our model, we used 5-fold cross-validation (5-CV) for prediction comparison. We treated the known 1569 LDAs as positive samples. To eliminate the impact of data imbalance between positive and negative samples, we followed many previous studies and randomly selected an equal number of negative samples from the unknown lncRNA-disease pairs. Area under the receiver-operating characteristic (ROC) curve (AUC) and area under the precision-recall (PR) curve (AUPR) were utilized as two comprehensive performance evaluation metrics for MAGCNSE. Six other evaluation metrics are also used, including Accuracy, Sensitivity, Specificity and Precision. To reduce the bias caused by random sample splitting, we implemented 5-CV five times and used the average values of the evaluation metrics. Since the selection of hyperparameters affects the final prediction results, it is necessary to find the relatively optimal hyperparameters, including the GCN embedding size, the number of filters in the CNN, the number of GCN layers and the number of base classifiers. 
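The evaluation protocol above can be illustrated with scikit-learn: score a candidate configuration under 5-fold cross-validation and average AUC and AUPR across folds. This sketch uses synthetic features and a plain LogisticRegression as stand-ins for the learned lncRNA-disease pair representations and for MAGCNSE itself; it is not the authors' code:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 16))                               # stand-in pair representations
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)   # roughly balanced labels

aucs, auprs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]                # probability of the positive class
    aucs.append(roc_auc_score(y[test_idx], scores))
    auprs.append(average_precision_score(y[test_idx], scores))   # AUPR

print(f"mean AUC  = {np.mean(aucs):.3f}")
print(f"mean AUPR = {np.mean(auprs):.3f}")
```

Repeating this whole loop five times with different random splits and averaging, as the text describes, reduces the bias from any single random split.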
The embedding size of lncRNAs and diseases in the GCN could affect their final representations to a large extent; the dimension of the ultimate representations of lncRNAs and diseases was decided by the number of CNN filters; the number of GCN layers affects the number of feature matrices extracted by the GCN; and the number of base classifiers in the stacking ensemble model determines the input dimension of the LogisticRegression classifier. The GCN embedding size was chosen from {16,32,64,128,256}, the number of filters in the CNN was chosen from {16,32,64,128,256}, the number of GCN layers was chosen from {1,2,3,4,5}, and the number of base classifiers was chosen from {5,10,15,20,25}. We compared the performance of MAGCNSE using different values of the hyperparameters under 5-CV, such that only one of the hyperparameters was changed each time; the results are shown in the corresponding figure. For the representation learning, to validate the necessity of using multiple GCN layers and adding the attention mechanism and CNN, we used 5-CV to compare MAGCNSE with the following four variants. (1) MAGCNSE-fgl: uses only the feature matrices generated by the first GCN layer and ignores the subsequent GCN layers, while the attention mechanism and CNN are still applied. (2) MAGCNSE-natt: uses multiple GCN layers and applies the CNN to fuse them but does not use the attention mechanism; the different feature matrices of lncRNAs and diseases extracted from the GCN are given the same weights. (3) MAGCNSE-nattcnn: removes both the attention mechanism and the CNN and only uses multiple GCN layers, then assigns the same weights to them. 
(4) MAGCNSE-ncnn: the feature matrices generated by multiple GCN layers are still applied, and the attention mechanism is also applied, but the CNN is not used for fusion. For the classification task, we compared the entire stacking ensemble model with the single base classifiers and the LogisticRegression classifier under 5-CV. Many graph neural network (GNN) models have recently been applied in the field of bioinformatics. Hence, we selected two advanced GNN models, including the graph attention network (GAT), for comparison. To evaluate the overall performance of MAGCNSE, we compared it with six recently proposed state-of-the-art models, including LDNFSGB and IPCARF. In order to confirm whether the results are better as expected after using multi-view features, we applied 5-CV to compare the AUC value and AUPR value of MAGCNSE under different views. Our data were all obtained from LncRNADisease v2.0 (http://www.rnanut.net/lncrnadisease/) and used for the model training. The PubMed literature and two external databases, Lnc2Cancer 3.0 (http://bio-bigdata.hrbmu.edu.cn/lnc2cancer/) and MNDR v3.1 (https://www.rna-society.org/mndr/), were used for verifying the results. In order to further verify the performance of MAGCNSE in predicting the associations between lncRNAs and some specific diseases, we conducted two types of case studies; among the diseases considered was small-cell lung cancer (SCLC), and the corresponding results are listed in Table 5. To demonstrate whether MAGCNSE is capable of accurately retrieving known LDAs for a specific disease, we conducted the second type of case study. For a specific disease, the detailed steps are as follows. Step 1: remove all associations related to the specific disease from the known LDAs to treat it as a new disease, use the remaining known LDAs as the positive samples, and randomly select the same number of negative samples from the unknown LDAs; the negative samples do not involve the specific disease. 
Step 2: select the sample pairs between all lncRNAs and the specific disease as the testing samples. Step 3: after MAGCNSE is trained using the positive and negative samples, use it to test the lncRNA-disease testing samples, and record the prediction scores of the testing samples. Step 4: sort the prediction scores from highest to lowest, and find the top 10 lncRNAs related to that disease. Step 5: validate the results by referring to LncRNADisease v2.0; if no evidence is found in this database, then refer to Lnc2Cancer 3.0, MNDR v3.1 and the PubMed literature. Here, cervical cancer was chosen as the research subject. Cervical cancer is a very prevalent condition in women, and the corresponding results are listed in Table 6. The detailed prediction scores of all predicted lncRNAs for the above-mentioned diseases are given in the Additional file. The prediction of potential LDAs can help to detect disease biomarkers and perform disease analysis and prevention, so using computational methods to efficiently predict LDAs is of great importance. In this study, we developed a novel model called MAGCNSE to identify potential LDAs. MAGCNSE first uses the GCN to fuse multi-view similarity graphs of lncRNAs and diseases and obtain multiple feature matrices. Then, it applies the attention mechanism to adaptively assign weights to the different feature matrices. Next, it further extracts features with the CNN to get the final representations of lncRNAs and diseases. Finally, it utilizes a stacking ensemble classifier to make the predictions. Compared with previous models in the field of LDA prediction, multi-view data of lncRNAs and diseases were used in this study, and MAGCNSE used lncRNA sequence similarity; MAGCNSE utilized deep learning methods rather than linear methods for data fusion to learn the representations of lncRNAs and diseases; and MAGCNSE employed a stacking ensemble model rather than single ML classifiers for the final prediction task. 
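The stacking design described above can be sketched with scikit-learn's StackingClassifier: several traditional base classifiers produce out-of-fold predictions, which a LogisticRegression meta-classifier then combines. The base-classifier choices and the synthetic data below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the concatenated lncRNA-disease pair representations.
X, y = make_classification(n_samples=400, n_features=32, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-classifier on base outputs
    cv=5,  # base-classifier predictions are produced out-of-fold to avoid leakage
)
stack.fit(X_tr, y_tr)
print(f"test accuracy = {stack.score(X_te, y_te):.3f}")
```

The `cv=5` argument matters here: fitting the meta-classifier on in-fold base predictions would let it overfit to the base classifiers' training error.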
We performed experiments on the effect of parameters, ablation studies on both the representation learning methods and the classification methods, experiments comparing the GCN with two other GNN models, comparison studies with other state-of-the-art methods, experiments on the effect of different views, and two types of case studies. All results demonstrate the outstanding performance of MAGCNSE in predicting potential LDAs. However, there are still some aspects of our study that can be further investigated. Firstly, we only use the information of lncRNAs and diseases; other biological information, such as miRNAs, proteins and drugs, could also be considered in further research. In addition, how to select, integrate and extract the features of lncRNAs and diseases with more effective and superior deep learning methods is a long-term challenge for the future. In this study, we retrieved known LDAs from LncRNADisease v2.0, which includes 10564 experimentally validated associations between 6105 lncRNAs and 451 diseases among several species. First, we selected only human LDAs and removed duplicated records, then filtered out lncRNAs with no sequence information in NONCODE v6.0 (http://www.noncode.org/) and diseases with no DOID in Disease Ontology (https://disease-ontology.org/). Finally, we obtained 1569 human LDAs between 489 lncRNAs and 251 diseases, and we define an adjacency matrix from these known associations. Disease descriptors were obtained from the National Library of Medicine (https://www.nlm.nih.gov/), and the directed acyclic graphs (DAGs) for diseases can be constructed afterwards. In studies of ncRNA-disease associations, DSS has been extensively used in recent years and has been proved to be effective. It is calculated by Wang\u2019s method, in which the semantic contribution of each disease term in a DAG decays with its distance from the disease of interest, and the semantic similarity of two diseases is determined by the terms their DAGs share. Given two groups of p diseases and q diseases associated with two lncRNAs, respectively, the LFS can then be calculated from the semantic similarities between the two groups. It has been previously observed that functionally comparable lncRNAs are frequently linked with similar diseases. 
We followed previous works to calculate the LSS, in which dist denotes the minimum cost of converting one lncRNA sequence into another (the edit distance) and len denotes the length of an lncRNA sequence; the LSS of two lncRNAs is obtained by normalizing the edit distance by the sequence lengths. Following previous studies, the LGS can be computed from the interaction profiles of lncRNAs in the association matrix, and similarly, for diseases, the DGS is computed from the interaction profiles of diseases. The main workflow of MAGCNSE is shown in the corresponding figure. Due to its excellent capacity for data processing and its suitability for data with a graph structure, the GCN has been extensively used in bioinformatics and other fields in recent years. For each view r, the representations of the L lncRNA nodes on the graph are updated layer by layer; specifically, the initial embedding is taken from the similarity matrix of all lncRNAs in view r, and likewise the T disease nodes are embedded in each view s. Given the propagation formula of lncRNA nodes over l layers, the embeddings of lncRNAs in view r and those of diseases in view s can be stacked, so that the features of lncRNAs in R views and the features of diseases in S views are extracted by the GCN. We found the multiple feature matrices under different views to be similar to the multiple channels of an image, but with potentially different importance. With reference to a previous study, we applied an attention mechanism: the attention weights for the feature matrices of lncRNAs are calculated, and given the weight of each feature matrix of lncRNAs, each normalized feature matrix of lncRNAs is obtained by scaling it with its weight. The normalized multi-channel feature matrices of lncRNAs and diseases can be regarded as an image of lncRNAs and an image of diseases, respectively. In the bioinformatics field, CNNs have been extensively exploited in recent years due to their excellent image processing abilities. 
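The Gaussian interaction profile kernel similarities (LGS/DGS) mentioned above are commonly defined as K(u_i, u_j) = exp(-gamma * ||IP(u_i) - IP(u_j)||^2), where IP(u) is a row (or column) of the association matrix and the bandwidth gamma is set from the mean squared profile norm. A NumPy sketch under that common definition (an illustration of the standard formula, not the authors' code):

```python
import numpy as np

def gip_kernel(profiles):
    """Gaussian interaction profile (GIP) kernel similarity.

    profiles: rows are interaction profiles, e.g. rows of the
    lncRNA-disease adjacency matrix A for lncRNAs (pass A.T for diseases).
    """
    sq_norms = (profiles ** 2).sum(axis=1)
    gamma = 1.0 / sq_norms.mean()  # common bandwidth choice: 1 / mean ||IP||^2
    # Pairwise squared Euclidean distances between profiles.
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * profiles @ profiles.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

# Toy adjacency matrix: 4 lncRNAs x 3 diseases.
A = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
LGS = gip_kernel(A)      # lncRNA Gaussian interaction profile similarity
DGS = gip_kernel(A.T)    # disease Gaussian interaction profile similarity
print(LGS.shape, DGS.shape)  # (4, 4) (3, 3)
```

Identical interaction profiles (here, the first two lncRNAs) get similarity 1, and the similarity decays smoothly as profiles diverge, which makes these kernels usable as additional similarity views alongside DSS, LFS and LSS.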
Then, the final lncRNA and disease representations are obtained, and the model is trained by minimizing a loss function. During the above-mentioned procedures, MAGCNSE calculates a temporary matrix. We use 80% of the samples for training. The key hyperparameters of the six traditional classifiers and their optimal values after grid search are shown in the corresponding table. Additional file 1: Table S1. The detailed parameters of the seven state-of-the-art methods in this study. Table S2. The detailed prediction scores of all predicted lncRNAs with colon cancer. Table S3. The detailed prediction scores of all predicted lncRNAs with lung cancer. Table S4. The detailed prediction scores of all predicted lncRNAs with cervical cancer. Table S5. AUC and AUPR values of MAGCNSE using different values of \u03bc."} +{"text": "In view of age-related health concerns and resource vulnerabilities challenging older adults to age in place, upstream health resource interventions can inform older adults about the availability, accessibility, and utility of resources and equip them with better coping behaviours to maintain health and independence. This paper described the development process and evaluated the feasibility of an upstream health resource intervention, titled Salutogenic Healthy Ageing Programme Embracement (SHAPE), for older adults living alone or with spouses only. A pilot randomised controlled trial design was adopted. SHAPE was designed to equip older adults with resource information and personal conviction to cope with the stressors of healthy ageing. This 12-week intervention comprised 12 weekly structured group sessions, at least two individual home visits and a resource book. Both the intervention and control groups received usual care provided in the community. The feasibility of the SHAPE intervention was evaluated using recruitment rate, intervention adherence, data collection completion rate, a satisfaction survey and post-intervention interviews. Outcome measures were assessed at baseline and post-intervention. 
Paired t-tests were used to examine within-group changes in outcome measures. Content analysis was used to analyse qualitative data. Thirty-four participants were recruited and randomised. While the recruitment rate was low (8.9%), intervention adherence (93.75%) and data collection completion (100%) were high. Participants expressed high satisfaction with the SHAPE intervention and found it useful. Participants experienced mindset growth towards personal and ageing experiences, and they were more proactive in adopting healthful behaviours. Although the programme was tailored to the needs of older adults, it required refinement. Intention-to-treat analysis showed a significant increase in overall health-promoting lifestyle behaviours, health responsibility, physical activity, spiritual growth, and stress management among intervention participants. However, they reported a significant drop in autonomy post-intervention. The findings of this pilot trial suggested that, with protocol modifications, SHAPE can be a feasible and beneficial health resource intervention for older adults. Modifications to recruitment strategies, eligibility criteria, selection of outcome measures and training of resource facilitators, together with strong collaboration bonds with community partners, would be needed to increase the feasibility robustness and scientific rigour of this complex intervention. This study was registered with clinicaltrials.gov on 10/05/2017. The trial registration number is NCT03147625. Living arrangement has a major role in shaping the living circumstances, social environment, well-being, and the allocation of economic and care resources for older adults at old age. This socio-demographic characteristic has gained increased attention in recent years, as evident in the World Ageing Report. Older adults living alone or with their spouses only require resources such as personal financial assets, social capital, care support services and adequately funded healthcare. 
These enable them to live independently and care for themselves for as long as they can. These older adults also seek material, instrumental and emotional support from various formal and informal resources, i.e. family members, friends, neighbours, social workers, community services, home care, healthcare professionals, home-making, transport and emergency services. Singapore is a multi-ethnic Southeast-Asian developed country that shares the Confucian cultural heritage on filial piety. Therefore, its ageing policy focuses on ageing-in-place, where the normative responsibility of senior caregiving first lies with the individual, followed by the family and government. In recent years, the Singapore government took a pro-active role in senior welfare provision, and channelled abundant resources into elderly-centric initiatives to make Singapore an age-friendly city. The salutogenic model of health, which advocates for the creation of health using a stress-resource theory approach, was employed as an underpinning theoretical framework. Healthy ageing lifestyle behaviours among the elderly include physical activity, reduced sitting time, cognitive engagement, social engagement, sleep quality and sleeping habits, balanced nutritional intake and dietary habits, minimal or moderate alcohol consumption, abstinence from smoking and stress management. Self-efficacy is a prominent concept related to coping and health behavioural change. It refers to the belief in one\u2019s abilities to successfully accomplish a task and produce desired effects. To ensure the contents and approach of the SHAPE intervention were contextually and socio-culturally appropriate, and theoretically grounded in the salutogenic model of health, we conducted a qualitative study among older adults living alone or those living with spouses only.
Using focus group discussions, we explored their stressors of healthy ageing and the operationalisation of SOC in the context of healthy ageing. The SHAPE intervention embraced the stressors of healthy ageing as salutary and facilitated older adults in identifying these stressors and understanding their coherence towards life at old age. To strengthen SOC, SHAPE aimed to (1) raise perceptual awareness towards and minimise the unpredictability of ageing-related experiences and daily living challenges (comprehensibility), (2) empower older adults to utilise their surrounding health assets to adopt health-promoting behaviours to cope with stressors and promote bio-psycho-social-spiritual health (manageability), and (3) activate older adults to seek motivation to live each day purposefully through individual reflection and making sense of old age experiences (meaningfulness). We conducted a two-arm pilot randomised controlled trial to evaluate the feasibility, acceptability, and potential effects of the SHAPE intervention. The study aims were: (1) to evaluate the feasibility of the SHAPE intervention, pertaining to the recruitment process, intervention adherence and outcome data, among older adults living alone or with spouses only; (2) to examine the acceptability of the SHAPE intervention, including perceived satisfaction, by older adults living alone or with their spouses only; and (3) to examine the potential effects of the SHAPE intervention on SOC, self-efficacy, QoL, health-promoting lifestyle behaviours and self-rated health among older adults living alone or with spouses only. Participants were recruited from a lower socio-economic and elderly-populated residential estate in Singapore.
We partnered with the local community centre and recruited participants via convenience and snowball sampling through various community engagement strategies such as flyer distribution at community events, poster advertisement at residential areas and centres, word-of-mouth, and door-to-door canvassing. Eligible community-dwelling older adults were aged\u2009\u2265\u200960\u00a0years old, either living alone or with another older adult, and able to converse and read in Mandarin. The age range of eligible older adults was initially\u2009\u2265\u200965\u00a0years old and was later lowered to\u2009\u2265\u200960\u00a0years old. The SHAPE intervention was a multi-dimensional, person-centric, and asset-based health resource programme. The content of the SHAPE intervention was designed to equip the older adults with resource information, the know-how of resource utility, and the personal conviction in coping with stressors of healthy ageing (Table 1). This 12-week SHAPE intervention adopted a mixed-modal programmatic approach using 12 weekly structured group sessions, at least two individual home visits and a resource book. Each 2.5-h group session covered a specific topic, using a mix of didactic learning, classroom discussions and hands-on activities. During the break, healthy nutritious snacks were introduced, and practical information on where to purchase these snacks at economical prices or how to prepare them was also provided. Ten weekly 30-min physical activity sessions were incorporated to introduce exercises that promote older adults\u2019 daily functional movements, e.g., reaching out for items from the top shelf or ground. At the end of each group session, participants were given homework to reflect upon and build on the topic introduced. As there is no specific recommendation on the group size of health education programmes, this pilot study strived to have 6 to 10 participants per intervention group class to facilitate sustained participation levels during group sessions.
These group sessions were conducted at senior centres within the vicinity of participants\u2019 residence, on either weekday afternoons or weekend mornings. Home visits were core to the SHAPE intervention; they were personal reflective sessions to put together older adults\u2019 existing and learnt resources into individuals\u2019 context of life, and focused on personal health concerns and specific resource needs. The first home visit explored and reflected upon the meaning of health and life at old age. The second home visit involved participants evaluating their health and various aspects of life using an assessment toolkit comprising questions generated from the group session topics, followed by setting personal health goals and developing action plans collaboratively with the resource facilitator. These health goals included, but were not limited to, increasing the frequency and intensity of specific exercises, attending regular health screenings, pursuing individual interests at leisure, and discussing care preferences and will-making plans with family members. The SHAPE home visits recognised the heterogeneity of care needs among older adults and sought to enhance personal self-care skills according to individual needs. The resource book was an integrative, comprehensive health information book that complemented the contents of the group-based learning sessions and home visits. It was a compilation of seven wide-ranging self-care topics for daily living and practical contextual information on the access and utility of local and regional community facilities and services, and government schemes. Each topic was supplemented with information sources using QR codes to encourage interested older adults to cultivate information-seeking behaviour and deepen their resource knowledge. An intervention manual was developed to provide clear directives for the resource facilitator to conduct the group sessions and home visits.
Both the intervention manual and resource book were content-validated by a multi-disciplinary team of salutogenesis, geriatrics and gerontology experts. All group sessions and home visits were conducted by the first author, a registered nurse who developed the SHAPE intervention. Both the intervention and control groups received usual care provided in the community. All participants could continue to participate in the leisure activities offered at senior activity centres, community centres and non-profit organisations of their own accord. Examples of such activities included morning exercises, playing board/card games, art and craftwork, excursions to local attractions and festive celebrations. Eligible and consenting participants completed a baseline assessment prior to randomisation to either the SHAPE intervention or the control group. Randomisation was determined using a computer-generated random number sequence, and the group assignments were placed in sealed, opaque, sequentially numbered envelopes by the first author. To conceal the random allocation sequence from the screener prior to intervention assignment, a trained research assistant who performed the eligibility screening took the envelopes in numerical sequence and allowed participants to open the envelopes to reveal the group assignment. Blinding of group assignment to participants was not possible in this community-based study as they could mingle and share their group allocation with each other. The same research assistant conducted the baseline and post-test quantitative data collection (after the conduct of the SHAPE intervention at week 12) in the comfort of participants\u2019 home or a nearby community centre.
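The allocation procedure described above (a computer-generated random sequence sealed into sequentially numbered opaque envelopes) can be sketched as follows. This is a minimal illustration only: the 1:1 allocation ratio and the dictionary-as-envelope representation are assumptions for the sketch, not details taken from the trial protocol.

```python
import random

def make_allocation_sequence(n_participants, seed=None):
    """Generate a 1:1 random allocation sequence and 'seal' each
    assignment into a sequentially numbered envelope (here, a dict
    keyed by envelope number, opened in order)."""
    rng = random.Random(seed)
    # Hypothetical 1:1 allocation: half intervention, half control,
    # shuffled to produce the random sequence.
    half = n_participants // 2
    assignments = ["SHAPE"] * half + ["control"] * (n_participants - half)
    rng.shuffle(assignments)
    # Envelope i holds the i-th assignment.
    return {i + 1: group for i, group in enumerate(assignments)}

envelopes = make_allocation_sequence(34, seed=42)
print(envelopes[1], envelopes[2])  # first two envelopes opened
```

Pre-generating and sealing the whole sequence is what separates allocation concealment from blinding: the screener cannot predict the next assignment, even though participants later learn their own group.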
A separate post-intervention interview was conducted by the first author to understand the acceptability of the intervention and to collect the qualitative data. To establish trial viability and determine if changes in the operation of the main trial would be required, progression criteria on recruitment, intervention adherence and outcome data were set; these included participants attending the group sessions (including the 1st week introductory and 12th week consolidative sessions) and two home visits, and 80% of participants completing both pre-test and post-test data collection. To determine the acceptability of the SHAPE intervention, participants\u2019 satisfaction was assessed using a self-developed 13-item evaluation form during a face-to-face interview post-intervention. Participants were asked to rate the overall programme and the intervention components (group sessions, physical exercises, home visits and resource book) on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). This was followed by open-ended questions such as \u2018what do you like the most about this programme?\u2019, \u2018what do you dislike the most about this programme?\u2019 and \u2018what would you change about the programme to make it better?\u2019, and a free-response question for participants to add other comments or suggestions about the programme and its components. As some participants were less proficient in writing, permission was sought to audio-record their verbal responses during the interview. Quantitative outcome measures collected for this pilot study included SOC, quality of life (QoL), self-efficacy, health-promoting behaviours and self-rated health. The 29-item Orientation to Life Questionnaire was developed to measure a person\u2019s capacity in responding to stressful situations. The World Health Organisation Quality of Life Old module (WHOQoL-OLD) has been reported to be a valid and reliable cross-cultural geriatric-centric instrument. 
This 24-item instrument was used to measure QoL in this study. The Generalized Self-efficacy Scale (GSE) was developed to assess perceived self-efficacy with the intention to predict coping with daily events and adaptation to stressful life events. The 52-item Health Promoting Lifestyle Profile-II (HPLP-II) measured multiple components of health-promoting lifestyle behaviours across six subscales: spiritual growth, interpersonal relations, nutrition, physical activity, health responsibility and stress management. The instrument was reported to have an overall internal consistency of Cronbach\u2019s alpha 0.92, with its subscale alpha coefficients ranging from 0.70 to 0.91. Socio-demographic data including age, gender, religion, marital status, education years, employment status, housing type and ownership, and brief medical history were collected. Feasibility outcomes were tabulated and reported descriptively and narratively. Quantitative data collected were analysed using the Statistical Package for the Social Sciences (SPSS) version 25. They were summarised using descriptive statistics, presenting results as raw counts (%) for categorical data and as means and standard deviations for continuous variables. Chi-square and independent t-tests were used to determine differences between the intervention and control groups for binary and continuous variables respectively. This pilot study was not powered to examine between-group differences in outcome measures. However, paired t-tests were used to assess within-group mean differences between pre- and post-intervention outcome measures. Alpha was set at 0.05 for statistical significance. Audio-recorded interviews were transcribed verbatim. Content analysis was used to analyse participants\u2019 written responses and verbatim transcripts. A study flow-chart is shown in Fig. Participants were randomised to either the SHAPE intervention group (n\u2009=\u200916) or the control group (n\u2009=\u200918). All except one participant completed the SHAPE intervention (93.75% intervention adherence).
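The within-group comparison described in the statistical plan above can be sketched as a paired t-test. The pre/post scores below are made-up illustrative values, not the trial data, and the test is computed by hand with the standard library rather than SPSS.

```python
import math
from statistics import mean, stdev

# Hypothetical pre- and post-intervention HPLP-II totals for one group
# (illustrative values only, NOT the trial data).
pre  = [120, 135, 128, 140, 132, 125, 138, 130]
post = [130, 141, 135, 148, 139, 131, 145, 137]

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), where d = post - pre
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Two-sided critical value for alpha = 0.05 at df = 7 is about 2.365
print(f"t = {t_stat:.2f} (df = {n - 1})")
print("significant" if abs(t_stat) > 2.365 else "not significant")
```

Because each participant serves as their own control, the test is run on the per-participant differences, which is why only the n = 16 and n = 18 group sizes (rather than a pooled variance) matter for the within-group analyses.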
This participant missed some group sessions to manage urgent family matters; she completed seven group sessions (inclusive of the 1st and 12th sessions) and two home visits. All participants completed the questionnaire at baseline and follow-up at week 12 (100% outcome data). Of those approached, 34 older adults enrolled in the study (8.9% recruitment rate) and were randomised to the SHAPE intervention or control group. Participants shared that they gained practical health knowledge about self-care and ageing. Not only did they learn about the ageing process and different types of illness faced at old age, but they also learnt how to respond to future health events, how to read food labels and eat healthily, how to ensure home safety and prevent falls, as well as how to plan and manage their future care. Three participants mentioned that they shared the learnt health knowledge with their family members, friends, and neighbours. Participants revealed that the programme content and the interaction with peers helped to straighten out their thoughts towards their personal ageing experiences. A few participants mentioned that the content introduced was thought-provoking and that the homework given made them reflect on their own health situations and future. Most participants expressed that their understanding and orientation towards life at old age changed. One participant realised that \u2018even when you are old, you can have goals and pursuits\u2019 (B5). Some were more aware of their life orientation after the programme and developed life goals which they strived to work towards. Others gained self-confidence, were more responsible for their own health or became \u2018more at ease with self and have better control of our lives\u2019 (B11). Overall, participants experienced mindset growth towards personal health and ageing experiences. Participants shared that they were more motivated to take action for their own health after attending the programme.
They shared how they paid more attention to self-care by increasing the frequency of exercise, being more selective in consuming nutritious food, eating home-cooked food more often, spending more time with family and friends and signing up for classes to enrich themselves. Through the programme, some participants became more socially active and aware, and a few mentioned that they \u2018made new friends and would continue these friendship exchanges\u2019 (A9). A few participants also shared that they were contemplating engaging in other healthful activities such as voluntary work. \u2018I made a telephone call to arrange for vaccination tomorrow\u2026 Next, I will arrange for mammograph, and take the form from doctor and have my hearing checked out. So I have planned for them\u2019 (B9) Some participants also shared that they were more receptive and proactive in their health-seeking behaviours. A few mentioned that they \u2018would read up more\u2019 about health information (B1), while others would be more prompt in seeking medical assistance and be proactive in attending health screenings. Nonetheless, a few participants revealed that there was \u2018no change in health-seeking behavior\u2019 (B7). \u2018The programme strengthens my muscles, and they will not ache or pain that much. (Now), I can move around more, not be afraid of going out. Whenever others asked me out, I no longer worry that I will not be able to walk. I am now more confident\u2019 (B5) Some participants fed back that they experienced improvements in their physical health after the programme. They experienced better mobility in walking and squatting, as well as less joint stiffness and aches. One participant claimed she became \u2018leaner and lost 1\u00a0kg\u2019 after practising the introduced exercises (B4). A few participants added that their ability to write improved and that they had better memory after the programme. \u2018I saw the benefit of the programme.
I told my family members I do not want to stay till Saturday for the (holiday) trip because I am interested to attend the programme. It is fun and interesting, so no choice my daughter had to fetch me back (home) earlier while the rest continued.\u2019 (B11) Participants expressed that the SHAPE intervention was a worthwhile programme which they found enjoyable and beneficial. The content covered was comprehensible, more detailed compared to other health talks, and meaningful to them. A few of them indicated that it was a well-spent 12-week programme and that they would try not to miss any of the group sessions. Many mentioned that they would recommend the programme and looked forward to future learning experiences. Almost all participants shared that there was nothing they disliked about the programme. They were satisfied with the programme arrangement. Participants appreciated the handy and informative resource book provided, and they could \u2018refer to it when we have time and when we encounter any situations\u2019 (B1). They liked the nutritious healthy snacks during breaktime, and enjoyed listening to their peers\u2019 health experiences and learning from them. Two participants added that they appreciated the small group sessions. The home visits allowed them to \u2018say innermost thoughts that I have never talked about\u2019 (B9), \u2018understand own self better\u2019 (A16), and \u2018enquire information if encounter personal issues\u2019 (B7). Other participants fed back that the location of the programme was conveniently near their residence. \u2018The interaction with the current programme team is good. Relationship with instructor is important. If I do not like the instructor, I will not go.\u2019 (A17) Participants were satisfied with the instructor\u2019s positive and lively class engagement.
A few of them highlighted that the relationship with the instructor and \u2018how the instructor explains, leads and drives the programme is important\u2019 (A9). \u2018Exercises are tailored according to participants' health conditions... Because the programme has a variety of exercises. Not every participant can accept. If they cannot accept (or do), they can sit at the side and watch or do modified exercise determined by instructors\u2019 (B1) Participants reflected that the SHAPE intervention was a tailored programme closely aligned with the lives and needs of \u2018every older person, not just limited to those living alone or with spouse only\u2019 (A12). The participants fed back that the physical exercises introduced were \u2018different from (other) outside classes\u2019 (A12), challenging but doable and safe for older adults and suitable for most participants. Doing the exercises during group sessions \u2018made the programme purposeful\u2019 (A18). A few participants highlighted that not every participant could do all the exercises due to their medical conditions, and the instructor modified the exercises to cater to participants\u2019 physical needs. Some participants voiced that the programme could be further refined to better cater to older adults\u2019 specific needs and preferences. While the homework provided was \u2018not difficult and manageable\u2019 (A9), some participants faced difficulty in expressing and writing their responses due to their limited language competencies. While some participants were satisfied with the programme timeslots, others found them inconvenient as they had to rush home to prepare dinner. Although participants appreciated the home visits, views towards the duration of home visits were mixed.
While some fed back that the duration of home visits was acceptable, others preferred to have a single home visit, keeping it to 2 to 3\u00a0h. Intention-to-treat analysis showed significant increases among intervention participants in overall health-promoting lifestyle behaviours (P\u2009<\u20090.001), health responsibility (P\u2009=\u20090.006), physical activity (P\u2009=\u20090.013), spiritual growth (P\u2009=\u20090.007) and stress management (P\u2009=\u20090.011). Although total SOC improved among intervention participants, the change was not significant. No changes were observed in SOC components, self-efficacy, self-rated health, total QoL and most of the QoL components among both intervention and control participants. However, intervention participants reported a significant drop in autonomy post-intervention (P\u2009=\u20090.04). This pilot study evaluated the SHAPE intervention, a health resource programme designed to equip older adults living alone or with spouses with better coping behaviours in maintaining health and independence at old age. The strengths of this study included the use of a randomised controlled design and comprehensive assessment of implementation feasibility, intervention acceptability, and measurement of health-related outcomes. It yielded positive findings on most feasibility outcomes, intervention acceptability, and health-promoting behaviours of older adults. The conduct of this pilot study was instrumental in identifying potential facilitators and barriers of implementing larger trials, which would be needed to confirm the effectiveness of the SHAPE intervention. Although multiple recruitment strategies were adopted, the recruitment rate of this pilot study fell below the established criterion (20%). Previous preventive lifestyle trials such as the Lifestyle Matters and Food and Immunity studies also reported low recruitment rates. Overall, participants were satisfied with the SHAPE programme and the intervention adherence was high.
Most participants strongly agreed on the usefulness and relevance of the programme, citing that they benefited from the practical health knowledge on self-care and ageing and that the programme content was tailored to all older adults, regardless of their living arrangement. This affirmed that SHAPE was a health resource programme that addressed the needs of older adults. The conduct of our prior qualitative study contributed aptly to the identification of healthy ageing stressors, which was critical in aligning with older adults\u2019 demands of ageing experiences. Intervention participants reported an increased engagement in health-promoting lifestyle behaviours, particularly in health responsibility, physical activity, spiritual growth, and stress management. These quantitative findings were consistent with participants\u2019 qualitative responses that they were more proactive and driven in taking action for their health. Participants practised the taught physical exercises, made changes to their dietary habits, and pursued greater social/leisure activities. As the SHAPE intervention equipped participants with the know-how in managing stressors of healthy ageing, participants could put these informational resources into practice. Additionally, the focus on positive ageing in the SHAPE programme could also have improved participants\u2019 expectations and attitudes towards ageing and spurred them to be active agents in health management. The intervention participants in this pilot study, who were younger in age, reported a weaker SOC at baseline compared to the control participants. Past studies have shown that SOC could increase with age among older people. No significant changes were observed in self-rated health, total QoL and most QoL components post-test among participants. The lack of statistically significant outcomes was likely due to our small sample size. However, intervention participants reported a small but significant decline in autonomy.
This finding contrasted with the qualitative responses in which participants reported better control of their lives, and it is unclear what contributed to the decrease in autonomy. Perhaps owing to the way the items on the WHOQoL-OLD were framed, participants receiving the SHAPE intervention, who were encouraged to reflect deeply on their lives, might have realised that they had not exercised their autonomy to its potential during their late lives. Our study found no changes in self-efficacy post-test scores among participants. Although participants from the intervention and control groups understood the items in the GSE instrument, some of them enquired about the context of each item. In hindsight, there might be a need to consider using a behaviour-specific self-efficacy instrument, as, according to Bandura, self-efficacy beliefs are task- and domain-specific. Participants in this pilot study were recruited from the community based on their living arrangement and cognitive abilities. Their health and daily needs, literacy levels, and learning abilities were heterogeneous. As such, the individual areas of improvement highlighted by the participants reflected differences in preferences, expectations of the programme and their learning abilities. More timeslots could also be offered to participants. Facilitation of the present study, as well as future group sessions and home visits, needs to be adaptive and flexible yet adhere to the programme activities and learning outcomes stipulated in the intervention manual to ensure intervention fidelity when implementing this complex person-centred intervention. The conduct of this pilot trial was vital and useful in providing the research team with training and experience to confirm and enhance the competencies needed to conduct the main trial investigation and intervention. 
SHAPE is essentially a resource-intensive intervention; it brings together different aspects of ageing well, from accepting bodily changes, adopting healthy lifestyle habits, reconciling social role transitions and navigating a complex healthcare system to preparing for end-of-life. Such an approach of integrating resources and knowledge with the health and wellness of community-dwelling older adults is critical as part of ageing populations\u2019 health policy agenda on the shift from healthcare to health. Perhaps when scalability and effectiveness of the SHAPE intervention are conceivable, the intervention could be delivered at aged care community centres with trained healthcare professionals conducting the home visits. Prudent measures on developing the core competencies of resource facilitators and refining the programme\u2019s structure and operational conduct would be warranted. Sustainability of SHAPE would require detailed planning on administrative support, investment in manpower training and the fostering of a strong collaborative partnership with community service providers to achieve optimal engagement among the older adults, resource facilitators and community partners. There were several limitations to this study. As the intervention was piloted in Mandarin, it excluded non-Mandarin speakers and the study sample consisted only of Chinese older adults. No repeated measures were adopted to test the feasibility of multiple data collection time points for the main trial. Blinding of group allocation among participants might have to be reconsidered for the main trial as participants could mingle and share their group allocation with peers from the control group in this single-centre community-based study. We adopted convenience sampling for this pilot study, similar to a past local intervention study conducted on older community-dwellers. The SHAPE intervention was grounded in the strong theoretical underpinning of the salutogenic model of health.
Findings of this pilot trial were positive and supported that, with protocol modifications, SHAPE can be a feasible and beneficial health resource intervention for older adults. The conduct of this pilot trial was essential and fruitful. It was a preview of a future larger trial for the research team to gain training and experience, and to understand the demands and resources needed for larger trial implementation. Further modifications to recruitment strategies, eligibility criteria, selection of outcome measures, training of resource facilitators and strong collaboration bonds with community partners would be needed to increase the feasibility, robustness and scientific rigour of this complex intervention."} +{"text": "Recent findings of Eucoleus garfiai in wild boars across different countries, and more recently in southern Italy, have brought up the need for collecting epidemiological data on this parasite. In the present study, the prevalence of E. garfiai was analyzed in relation to altitude in different provinces of the Campania and Latium regions located, respectively, in southern and central Italy. Results showed that the parasite is more often found at altitudes higher than 900 m above sea level. Some species of earthworms are intermediate hosts of E. garfiai and it is well known that earthworms are more present in high quality soils, which are more likely found at high altitudes where anthropogenic interventions are less frequent. Therefore, we can suggest that the higher prevalence of E. garfiai above 900 m above sea level is probably linked to a higher presence of earthworms in the soil, due to its higher quality in these areas. Recent findings of the nematode Eucoleus garfiai in wild boars in southern Italy have highlighted the need for collecting epidemiological data on the presence of this parasite and understanding the role of possible interactions between wild boars, E. garfiai, and the environment.
This study analyses, using histopathological and biomolecular techniques, the presence of E. garfiai in tongue samples of wild boars hunted in four provinces of the Campania and Latium regions, in areas located above and below 900 m above sea level (asl). Histopathological examinations revealed the presence of adults and eggs of nematodes, subsequently identified as E. garfiai by biomolecular analysis, in the tongue epithelium. The detection of the parasite was more frequent in samples collected from hunting areas located above 900 m asl than in those collected from areas located below 900 m asl. Some species of earthworms are intermediate hosts of E. garfiai and it is well known that earthworms are more present in high quality soils. Therefore, we can suggest that the higher prevalence of E. garfiai at higher altitudes is probably linked to a greater presence of earthworms in the soil, due to its higher quality in these areas. The lives of wild animals are strictly connected to the environment in which they live and to the other living beings with whom they share it. Recent reports of Francisella tularensis, Yersinia pestis, Bacillus anthracis, Coxiella burnetii, avian influenza virus H5N1, swine fever virus and coronaviruses in wildlife have raised ecological and sanitary concerns, as wild animals can act as reservoirs of different pathogens. Wild boars (Sus scrofa) are extensively distributed worldwide and, starting from the mid-twentieth century, a significant increase in their population in Europe and Italy has been described, with a consequent enlargement of their habitat; wild boars can thus be found at different altitudes. Recently, the nematode E. garfiai was reported in wild boars in southern Italy; previous reports described the parasite in Spain, Austria, Japan, central Italy and Iran. Adults of E. garfiai are found in the tongue epithelium of wild boars, while some species of earthworms act as intermediate hosts.
A total of 69 wild boars were collected in 8 different areas pertaining to 4 different provinces of 2 regions of southern and central Italy (3 in Campania and 1 in Latium) during the months of October, November and December of the 2021\u20132022 hunting season. Sampling locations are shown in Fig. A specific primer pair (E. garfiai 18s_FW: 5\u2032-GTCGTCGTCGAGATGAGTCG-3\u2032; E. garfiai 18s_REV: 5\u2032-TCTCTCCGGAATCGAACCCT-3\u2032), designed on the sequence available on GenBank (MW947272.1), was employed for amplification of a specific fragment (180 bp) of the 18s ribosomal RNA gene of E. garfiai. A positive control mimicking the E. garfiai 18s intended amplicon was synthesized according to gBlocks Gene Fragments technology and run along with the PCR reactions. Moreover, one no-template control (NTC) was included as a negative control. Amplification products were then migrated by electrophoresis on 2.5% agarose gel in TAE buffer (Tris-Acetate-EDTA) along with a 100 bp molecular marker (Bioline), stained with ethidium bromide and observed under UV with the ChemiDoc gel scanner (Bio-Rad). One representative PCR product was purified using the QIAquick PCR Purification Kit according to the manufacturer\u2019s protocol and submitted for sequencing at BMR Genomics. The obtained sequence was aligned with available sequences of E. garfiai from GenBank. Three samples (1 from CF AV < 900 m and 2 from PL BN > 900 m) were excluded as they were in a poor state of preservation; therefore, 66 wild boar tongue samples were isolated and transported by members of the laboratory of Animal Husbandry of the Department of Veterinary Medicine and Animal Productions, University of Naples \u201cFederico II\u201d, in containers filled with 10% buffered formalin, to the laboratory of Veterinary General Pathology and Anatomical Pathology of the same Department for further histopathological processing.
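As an illustration of how the expected 180 bp amplicon relates to the primer pair above, the sketch below builds a synthetic template (analogous in spirit to the gBlocks positive control) and computes the amplicon length from the primer binding sites. The 140 nt spacer is arbitrary filler, not the real MW947272.1 sequence.

```python
# Illustrative sketch only: the spacer is arbitrary filler, NOT the
# real E. garfiai 18s rRNA sequence from GenBank MW947272.1.
FWD = "GTCGTCGTCGAGATGAGTCG"   # E. garfiai 18s_FW
REV = "TCTCTCCGGAATCGAACCCT"   # E. garfiai 18s_REV

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

# A 180 bp target = forward primer site + 140 nt spacer + the
# reverse-complemented reverse primer (the reverse primer anneals
# to the opposite strand, so its site appears reverse-complemented
# on the top strand).
spacer = "AT" * 70
template = FWD + spacer + revcomp(REV)

# The amplicon spans from the start of the forward primer site to
# the end of the reverse primer binding site.
start = template.find(FWD)
end = template.find(revcomp(REV)) + len(REV)
amplicon_len = end - start
print(amplicon_len)  # expected amplicon size in bp -> 180
```

Running a synthetic construct like this alongside the field samples is what lets a 180 bp band on the 2.5% gel be read as a true positive rather than a primer artefact.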
Samples of 2 cm width were cut from each tongue and routinely processed for histopathological examination as previously described. In order to evaluate the possible relationship between the presence/absence of the parasite in wild boars and altitude (below and above 900 m asl), comparison of the percentages was carried out with the Chi-squared test of independence using the JMP\u00ae PRO 14 software. Sections of E. garfiai adults and/or whole adults, measuring approximately 70 \u00b5m in the transversal direction, were identified in the prickle layer, often in proximity to the basal layer. A fragment of the expected size (180 bp) of the 18s ribosomal RNA gene of E. garfiai was amplified from positive samples, and detection of the parasite was significantly more frequent in areas located above 900 m asl (p < 0.01). E. garfiai is a non-zoonotic, non-pathogenic helminth which can be found in the tongue of wild boars without causing severe lesions. Histopathological results of our study confirmed the presence of eggs and adults in tongue samples associated with mild inflammatory alterations, as previously described. Recent reports of E. garfiai in southern Italy have raised interest in collecting more data to better understand the ecology and epidemiology of this particular parasite. In this study, we investigated the prevalence of E. garfiai in different areas located below and above 900 m asl, and the results showed a higher prevalence of E. garfiai in sampling areas located above 900 m asl. Altitude has frequently been used as a parameter for studying biodiversity richness, and it was previously reported that earthworm communities adapt well to different gradients of altitude, increasing in number and variety of population, probably due to lower anthropogenic influence and higher soil quality; these conditions favour earthworm species (Lumbricus spp. and Allolobophora spp.), which act as intermediate hosts. The growth in population of wild ungulates and the expansion of their living habitat towards more anthropogenic areas pose numerous concerns from an ecological and sanitary point of view.
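The altitude comparison described above relies on a standard chi-squared test of independence on a 2 × 2 table (positive/negative × above/below 900 m asl). A minimal sketch follows; the counts are invented placeholders, not the study's actual data.

```python
# Sketch of a chi-squared test of independence for parasite detection
# above vs. below 900 m asl. The counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

#                positive  negative
table = [[20, 10],   # hunting areas > 900 m asl
         [8, 28]]    # hunting areas < 900 m asl

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A p-value below the chosen alpha would indicate that detection frequency is not independent of altitude class.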
Increased use of land for agricultural purposes, deforestation and inhabitation of suburban areas have modified the extension of areas inhabited by humans and wild boars, causing overlap between the two populations and creating more chances of contact among wild boars, humans and livestock [38]. The prevalence of E. garfiai could also be higher in areas located above 900 m asl due to the presence of a higher number of wild boars in these areas and the positive correlation of final host abundance with parasitic infection of intermediate hosts [49,50,51]. Moreover, an association between Metastrongylus spp. larval infection of earthworms (intermediate hosts) and wild boar density was previously demonstrated by Nagy et al., and further studies could clarify whether a similar relationship holds for E. garfiai. Our findings showed a higher prevalence of E. garfiai in tongues of wild boars collected at higher altitudes, where a higher presence of earthworms is found. Therefore, for the evaluation of the investigated environments, it could be useful to add to routine tests for the identification of zoonotic parasites (Trichinella spp.) the assessment of E. garfiai occurrence in tongue samples from wild boars hunted during the hunting season, as an indicator of earthworm presence and consequently of soil quality. However, significant research gaps still exist, and more studies should be carried out to understand the variations in parasite abundance and the possible use of E. garfiai as a bioindicator. Soil quality and sustainability can be evaluated by using its micro- and macrofauna.
In particular, earthworms have been proven to be valuable bioindicators and biomonitors due to the abundance and variety of species composition of the earthworm fauna, the behavior of these invertebrates in contact with the soil substrate, and the accumulation of chemicals from the environment into their bodies ,57,58,59"} +{"text": "The purpose of the present study was (i) to explore the reliability of the most commonly used countermovement jump (CMJ) metrics, and (ii) to reduce a large pool of metrics with acceptable levels of reliability via principal component analysis to the significant factors capable of providing distinctive aspects of CMJ performance. Seventy-nine physically active participants performed three maximal CMJs while standing on a force platform. Each participant visited the laboratory on two occasions, separated by 24\u201348 h. The most reliable variables were performance variables (CV = 4.2\u201311.1%), followed by kinetic variables (CV = 1.6\u201393.4%), and finally kinematic variables (CV = 1.9\u201337.4%). From the 45 CMJ computed metrics, only 24 demonstrated acceptable levels of reliability (CV \u2264 10%). These variables were included in the principal component analysis and loaded a total of four factors, explaining 91% of the CMJ variance: performance component , eccentric component (variables related to the breaking phase), concentric component (variables related to the upward phase), and jump strategy component (variables influencing the jumping style). Overall, the findings revealed important implications for sports scientists and practitioners regarding the CMJ-derived metrics that should be considered to gain a comprehensive insight into the biomechanical parameters related to CMJ performance. The countermovement jump (CMJ) is one of the most implemented testing modalities for the assessment of lower body mechanical capacities. 
It has been primarily used for monitoring sports performance , inter-lThe CMJ variables computed using force platforms can be roughly categorized into: (i) performance, (ii) kinetic, and (iii) kinematic. Undoubtedly, performance variables are the most frequently considered in the scientific literature and practical setting . One varThe practical utility of CMJ-derived variables is largely contingent on their reliability. In addition to the natural variability of the human system , the reliability of the CMJ-derived variables is greatly affected by the complexity of the computational methods. In other words, variables that require a greater number of computational steps tend to have lower reliability, while the variables directly calculated from the force\u2013time curve usually display higher levels of reliability . For insSeveral studies evaluated the possibility and usefulness of reducing the large pool of CMJ-derived variables to a more pragmatical number of essential variables for describing and understanding the overall CMJ performance ,23,24,25To solve this problem, 45 force\u2013time-derived variables were identified as potentially important CMJ metrics, and the aims of this study were (i) to explore their within- and between-day reliability, and (ii) to reduce a large pool of identified CMJ-derived variables with acceptable levels of reliability to the significant factors using principal component analysis. From the reliability standpoint, it has been hypothesized that the CMJ variables will be ranked from most to least reliable as follows: performance variables > kinetic variables > kinematic variables. However, the hypothesis regarding the minimum number of factors that can explain the overall CMJ performance could not be set due to the inconsistent findings previously reported in the scientific literature. 
The results of the present study are expected to reveal the list of highly reliable metrics that should be used to thoroughly explore the lower body neuromuscular capacities through the CMJ. Seventy-nine physically active participants and 42 men ) volunteered to participate in the present study. All participants had at least one year of lower-body resistance training experience and most of them were actively involved in resistance training at the time of the testing. Moreover, they were free of musculoskeletal injuries and/or pain that could negatively impact CMJ performance, and none of them were taking any supplementation at the time of the study. Participants were familiarized with the research protocol in both written and verbal manner. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the University of Belgrade, Faculty of sport and physical education .The present study aimed to explore the reliability of the most frequently used CMJ variables and to provide a list of reliable metrics that are necessary to be considered when exploring lower body neuromuscular capacities through CMJ. For this purpose, participants completed two identical testing sessions, separated 24\u201348 h apart. During each experiment, participants performed three maximal CMJs without arm swing (having their hands fixed on the hips). The rest interval between each CMJ was 60 s . The sesAll testing procedures were performed in the university research laboratory and at the same time of the day for each subject (\u00b130 min). The participants were required to avoid any intense physical activity 24 h prior to each testing session to ensure their full readiness for the testing. 
Upon arrival at the laboratory, participants performed a standardized warm-up procedure consisting of 5 min of stationary cycling at a self-selected pace and a set of dynamic exercises , followed by five submaximal CMJs with a 30 s inter-jump rest interval. Two minutes after the completion of the warm-up protocol, participants stepped on a previously calibrated force plate wearing their habitual training shoes during both sessions. They were instructed to stand steadily for 5 s, jump as high as possible after the \u201cgo\u201d signal given by the research assistant, and land at approximately the same spot on the force plate. Each participant completed three maximal CMJs using a self-selected countermovement depth. The rest interval between consecutive CMJ was 60 s ,29. The Vertical ground reaction force (vGRF) data were recorded using fixed bilateral three-dimensional force platforms at a sampling frequency of 1000 Hz. The vGRF data obtained from both platforms was summed and processed using custom-made software . Body weight was established during the 2 s motionless period prior to the beginning of the downward phase of the CMJ motion. The initiation of the downward phase was identified as the moment when the force\u2013time curve trace dropped 10 N below body weight, the take-off when the vGRF was below 5 N, and the landing as the point where vGRF exceeded the 5 N threshold. The centre of mass (COM) acceleration was calculated as the net vGRF (absolute vGRF\u2014body weight) divided by the participant\u2019s body mass. In addition, COM velocity was calculated as a numeric integration of acceleration data with respect to time and COM displacement as a double integral of COM acceleration. All force-derived variables were scaled to the subject\u2019s body mass. Impulse-momentum and flight time methods were used to calculate jump height . RSI was\u22121. 
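The two jump-height computations named above (impulse-momentum and flight-time) can be sketched from a vertical ground reaction force trace. The helper names and the synthetic constant-force trace below are illustrative, not the authors' software.

```python
# Sketch of the impulse-momentum and flight-time jump-height methods.
# The constant 2x body-weight force trace is a toy input for illustration.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_impulse(vgrf, body_mass, fs=1000):
    """Integrate COM acceleration (net vGRF / mass) up to take-off,
    then convert take-off velocity to height: h = v^2 / (2g)."""
    accel = (vgrf - body_mass * G) / body_mass   # net COM acceleration
    velocity = np.cumsum(accel) / fs             # numeric integration over time
    v_takeoff = velocity[-1]                     # COM velocity at take-off
    return v_takeoff ** 2 / (2 * G)

def jump_height_flight_time(t_flight):
    """Flight-time method: h = g * t^2 / 8 (symmetric flight assumed)."""
    return G * t_flight ** 2 / 8

# Toy example: 0.3 s of constant 2x body-weight force for a 75 kg jumper
mass = 75.0
vgrf = np.full(300, 2 * mass * G)   # 300 samples at 1000 Hz
h = jump_height_impulse(vgrf, mass)
```

For a clean trace the two methods agree closely; in practice the flight-time estimate is sensitive to take-off/landing posture, which is one reason both are reported.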
The list of all CMJ-derived dependent variables, categorized into performance, kinetic, and kinematic variables, is presented in the corresponding table. Additionally, the CMJ force\u2013time curve was analyzed across five phases: (i) unloading, followed by the braking, propulsive, flight, and landing phases. Intraclass correlation coefficient (ICC) and coefficient of variation (CV%) were used for assessing within- and between-day reliability for the 45 dependent variables computed in this study. Acceptable reliability was determined as an ICC \u2265 0.70 and CV \u2264 10%. Paired differences between sessions were expressed as effect size (ES) and interpreted using the criterion proposed by Hopkins (trivial to large). Within- and between-day reliability of the performance, kinetic, and kinematic variables are presented in the corresponding tables. The differences in the magnitude of the variables between sessions 1 and 2 were significant for all performance variables, for 11 out of 23 kinetic variables, and for 11 out of 18 kinematic variables. However, of all the 26 variables that differed between sessions 1 and 2, only 6 presented a higher magnitude during the second testing session. Nevertheless, the ES always ranged from trivial to small. The KMO measure of sampling adequacy (0.799) and Bartlett\u2019s sphericity test suggested sample adequacy and sufficient correlations between variables. Based on Kaiser\u2019s criterion, four principal components were extracted which explained 91.8% of overall CMJ variance. The present study aimed (i) to assess the reliability of a large number of performance, kinetic, and kinematic CMJ-derived variables and (ii) to provide a reduced list of reliable metrics that should be reported to provide information regarding the distinctive aspects of CMJ performance. The main findings revealed that only 24 out of 45 CMJ-derived variables demonstrated acceptable within- and between-day reliability and that the most reliable metrics were the performance variables (3 out of 4), followed by the kinetic variables (12 out of 23), and finally the kinematic variables (8 out of 18).
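The CV \u2264 10% screen used above can be sketched as follows. One common formulation of the between-day CV% is shown (per-participant SD of the two sessions relative to their mean, averaged across participants); the session scores are invented for illustration.

```python
# Minimal sketch of a CV%-based reliability screen (acceptable if CV <= 10%).
# One common formulation; the session scores below are made up.
import numpy as np

def cv_percent(session1, session2):
    """Per-participant SD of the two sessions relative to their mean,
    averaged across participants, expressed as a percentage."""
    pairs = np.stack([session1, session2], axis=1)
    cvs = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1) * 100
    return cvs.mean()

s1 = np.array([30.0, 32.5, 28.0, 35.0])   # e.g. jump height (cm), session 1
s2 = np.array([31.0, 31.5, 29.0, 34.0])   # session 2
cv = cv_percent(s1, s2)
reliable = cv <= 10.0
```

Variables failing the CV (or ICC) threshold would be excluded before the factor analysis, exactly as the 21 unreliable metrics were excluded here.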
Four main components were extracted as a result of principal component analysis applied to the 24 reliable CMJ-derived variables and were conveniently addressed as: performance component, eccentric component, concentric component, and jump strategy component, explaining 56%, 16%, 11%, and 6% of the common variance, respectively. These findings present important implications for sports scientists and practitioners regarding the CMJ-derived metrics that should be considered to gain a comprehensive insight into the distinctive aspects of the CMJ performance.All performance variables were obtained with an acceptable within- and between-day reliability with the only exception being leg stiffness, which presented a between-day reliability lower than the minimal threshold for acceptable reliability (CV > 10%). These results are in line with the findings of previous research reports focused on exploring the reliability of the CMJ-derived performance variables ,19,21. FRegarding kinetic variables, impulse-related variables were the most reliable , followed by force and power variables (50% of variables with acceptable reliability in each group), while RFD-related variables never reached an acceptable level of reliability. Impulse-related variables were also considered reliable metrics in previous studies ,17,36, wConfirming our first hypothesis, only 8 out of 18 kinematic variables presented acceptable reliability. Specifically, all velocity- and COM-related variables were reliable, as well as the duration of the propulsive phase, flight phase, and the ratio between flight time and overall jump time. In line with our study, velocity-related variables were shown to be consistently reliable ,21,39, aIn summary, a total of 24 (out of 45) CMJ-derived variables demonstrated acceptable reliability and, therefore, were included in the principal component analysis. 
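The factor-extraction step described above (principal components retained by Kaiser's eigenvalue > 1 criterion on standardized variables) can be sketched with plain numpy. The data are random draws from a two-factor toy model, only to illustrate the mechanics.

```python
# Sketch of PCA with Kaiser's criterion: eigendecompose the correlation
# matrix of standardized variables and keep components with eigenvalue > 1.
# The data are synthetic (two latent factors), not the study's variables.
import numpy as np

rng = np.random.default_rng(0)
n, p = 79, 6
latent = rng.normal(size=(n, 2))                       # two underlying factors
X = latent @ rng.normal(size=(2, p)) + 0.3 * rng.normal(size=(n, p))

Z = (X - X.mean(0)) / X.std(0, ddof=1)                 # standardize variables
eigvals = np.linalg.eigvalsh(np.corrcoef(Z.T))[::-1]   # descending eigenvalues
kept = eigvals[eigvals > 1.0]                          # Kaiser's criterion
explained = kept.sum() / eigvals.sum() * 100           # % variance explained
print(f"{kept.size} components retained, {explained:.1f}% variance explained")
```

The eigenvalues of a correlation matrix sum to the number of variables, so "eigenvalue > 1" keeps components that explain more variance than a single original variable.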
Considering the number of the included variables and extracted factors, our study is similar to the study of Merrigan et al., who included a comparable pool of CMJ-derived variables, while our sample size falls within the range of previous reports (n = 79 vs. n = 16\u201382) [26,27]. Several limitations of the present study should be acknowledged. First, the sample consisted of physically active young participants engaged in recreational physical activities, and it is unknown whether the results of the present study could be generalized to other populations. Second, it is important to acknowledge the low number of subjects included in the study per variable, although it was comparable to that of previous studies (n = 16\u201382) [26,27]. From the large pool of 45 CMJ-derived variables computed in the present study, only 24 demonstrated acceptable within- and between-day levels of reliability. The CMJ-derived variables ranked in the order of highest to lowest reliability magnitude were as follows: performance variables, kinetic variables (impulse-related variables were the most reliable), and kinematic variables. When included in the principal component analysis, these 24 variables loaded four factors, explaining 91% of the variance, and were conveniently addressed as the performance component, eccentric component (loaded by the variables related to the braking phase of the CMJ), concentric component (loaded by the variables related to the concentric phase of the CMJ) and jump strategy component. Overall, the findings of the present study reveal important implications for sports scientists and practitioners regarding the CMJ-derived variables that should be considered to gain a comprehensive insight into the mechanics pertaining to CMJ performance."} +{"text": "The estrogen metabolite 2-methoxyestradiol (2ME) is a promising anticancer drug, mainly because of its pro-apoptotic properties in cancer cells. However, the therapeutic use of 2ME has been hampered due to its low solubility and bioavailability. Thus, it is necessary to find new ways of administration for 2ME. 
Zeolites are inorganic aluminosilicates with a porous structure and are considered good adsorbents and sieves in the pharmaceutical field. Here, mordenite-type zeolite nanoparticles were loaded with 2ME to assess its efficiency as a delivery system for prostate cancer treatment. The 2ME-loaded zeolite nanoparticles showed an irregular morphology with a mean hydrodynamic diameter of 250.9 \u00b1 11.4 nm, polydispersity index of 0.36 \u00b1 0.04, and a net negative surface charge of \u221234 \u00b1 1.73 meV. Spectroscopy with UV-vis and Attenuated Total Reflectance Infrared Fourier-Transform was used to elucidate the interaction between the 2ME molecules and the zeolite framework showing the formation of a 2ME-zeolite conjugate in the nanocomposite. The studies of adsorption and liberation determined that zeolite nanoparticles incorporated 40% of 2ME while the liberation of 2ME reached 90% at pH 7.4 after 7 days. The 2ME-loaded zeolite nanoparticles also decreased the viability and increased the mRNA of the 2ME-target gene F-spondin, encoded by SPON1, in the human prostate cancer cell line LNCaP. Finally, the 2ME-loaded nanoparticles also decreased the viability of primary cultures from mouse prostate cancer. These results show the development of 2ME-loaded zeolite nanoparticles with physicochemical and biological properties compatible with anticancer activity on the human prostate and highlight that zeolite nanoparticles can be a good carrier system for 2ME. Prostate cancer is the second most frequent male cancer diagnosis in developed and developing countries . 
Recently, a variety of newly synthesized chemicals have been proposed as anticancer drugs, including quinone derivates, alkaloids or hormone metabolites, which could be effective against several reproductive cancers [4]. The gene encoding F-spondin, SPON1, participates in the signaling pathway by which 2ME exerts its apoptotic activity in cancer cells, so that this 2ME-target gene is a good marker for the biological effects of this estrogen metabolite [8]. The therapeutic use of 2ME has been hampered because it has low water solubility and bioavailability, and it is quickly inactivated by glucuronidation via UDP-glucuronyl transferases; these features have motivated the search for suitable delivery systems. Zeolites are inorganic aluminosilicates widespread in nature with an ordered porous structure, which are classified based on their pore structure and size, and on the chemical composition of silica and aluminum, into three main types: natural, synthetic, and zeolitic imidazolate framework. Mordenite-type zeolites are built from AlO4 and SiO4 functional groups which create regularly distributed mesopores and cavities where exchange of water, ions, and polar molecules with the surrounding environment occurs [14,16]. In experimental medicine, the advantages of applying nanoparticles compared to traditional treatments are well recognized because of the decreased adverse effects of delivered drugs and the enhanced destruction of inflammatory or cancer cells due to their electrical, magnetic, or optical hyperthermia properties. Herein, we explored the potential use of mordenite-type zeolite nanoparticles as a drug delivery system for 2ME with the purpose of establishing new strategies for prostate cancer treatment. In this context, we assessed the effect of the nanocomposite on the viability of LNCaP cells and on the expression of the 2ME-target gene SPON1 in LNCaP cells. Finally, we performed viability assays in primary cell cultures from mouse prostate cancer to explore the preclinical relevance of the 2ME-loaded zeolite nanoparticles.
Therefore, we first developed 2ME-loaded zeolite nanoparticles followed by their morphological and physicochemical characterization by dynamic light scattering (DLS), transmission electron microscopy (TEM), zeta potential, Ultraviolet-visible (UV-vis) spectra and Attenuated Total Reflectance Infrared Fourier-Transform Spectroscopy (ATR-FTIR). The efficiency of adsorption and release of 2ME from the zeolite nanoparticle was then evaluated by Ultra-Performance Liquid Chromatography. The anticancer activity of the 2ME-loaded zeolite nanoparticles was assessed determining the effect of this nanocomposite on the viability of the human prostate cancer cell line LNCaP, and assessing whether the 2ME-loaded zeolite nanoparticles mimic the effect of 2ME on the expression of the mRNA for From 8 g of previously milled natural zeolite, 0.0739 \u00b1 0.0049 g of nanoparticles were obtained, which is equivalent to 0.92% \u00b1 0.062 of recovery. Once the zeolite nanoparticles were obtained, they were incubated with 2ME to achieve their adsorption. The hydrodynamic diameter size of the zeolite nanoparticles was 332.6 \u00b1 10.9 nm while the 2ME-loaded zeolite nanoparticles were 350.9 \u00b1 11.4 nm, which is compatible with biological applications . This alThe formed 2ME/zeolite complexes were characterized by UV-vis spectra. As observed in \u22121 and 791 cm\u22121 correspond to the characteristic vibration of an allotropic phase of SiO2 [\u22121 is due to the Al-O bond vibration [\u22121 belongs to Si-O bonds. 
The last bands situated at 3320 cm\u22121 are related to the hydroxyl functional group of zeolites [\u22121, 3182 cm\u22121, 3000 cm\u22121, 2963 cm\u22121, 2907 cm\u22121, 2809 cm\u22121, and 1600 cm\u22121, and in the ranges between 1500\u20131400 cm\u22121 and 1300\u20131000 cm\u22121, the last bands are the fingerprint of 2ME; these bands have described previously by our research group [\u22121 that corresponds to the vibration of the methoxy group O-CH3 and the alcohol group C-OH and those bands among 1530\u20131400 cm\u22121 are due to CH, CH2, and CH3 bending vibration; all these bands have been slightly shifted from his original position in 2ME. The band located at 1640 cm\u22121 corresponds to Si-O with C=C from 2ME, resulting in a broader band than the Si-O band in pure zeolite and slightly shifted from this original position. At low frequencies, we can observe two peaks at 2847 cm\u22121 and 2922 cm\u22121 that correspond to the stretching vibration of functional groups CH, CH2, and CH3. This band is more intensive than 2ME alone. All these characteristics demonstrate the interaction of 2ME molecules into zeolite nanoparticles.ATR-FTIR spectroscopy was also performed on zeolite and 2ME-loaded zeolite nanoparticles, and 2ME alone to characterize and determine functional groups and modifications. of SiO2 ,29, the ibration ,29. The zeolites . The FTIch group . In the Adsorption into the zeolite framework nanoparticles could occur on both the outer and inner surface of the zeolite and it depends on the ability of the molecules to fit into the mesoporous . 
In the We can identify in Altogether, we can state that zeolite nanoparticles are able to adsorb 2ME, release it, and preserves its effect over time, overcoming the pharmacokinetic limitations reported for 2ME that have hindered its widespread clinical application.2) with no significative differences between all the pH values, indicating that they are suitable to describe 2ME profile release and determine its release rate from zeolite nanoparticles.To analyze the mechanism of 2ME release from the zeolite nanoparticles we fitted both phases of the 2ME release profile to the Korsmeyer-Peppas , Higuchi2 (0.98221\u20130.98786) for the Korsmeyer\u2013Peppas model as well the release exponents (n > 0.5) suggest a non-Fickian diffusion of 2ME which may be mainly influenced by erosion processes of the zeolite matrix inside the solvent [2 (0.96969\u20130.97768) for the Higuchi model suggests a quadratic drug release implicating that 2ME follows a release pattern corresponding to a diffusion-controlled mechanism. On the other hand, the R2 (0.97519\u20130.97982) for the first-order model indicates that the 2ME release profile is dependent on the concentration of 2ME loaded suggesting that the concentration of 2ME could be regulated according to the patient`s requirements generating an additional advantage of the 2ME-loaded nanoparticles.In phase 1, the high R solvent . The R2 n < 0.5) for all pH values were compatible with a Fickian diffusion model suggesting that 2ME is released by a diffusion process of water into the zeolite matrix in phase 2. On the other hand, all three models showed that the release rates were lower for all pH values in phase 2. This corroborates our findings above described concerning that the high release rate during phase 1 could be explained as a fast desorption of 2ME from the zeolite surface while in phase 2 a slower 2ME desorption occurs due to internal diffusion from the porous of the zeolite nanoparticles. 
We can also observe that the release rates were slightly slower at pH 4.0 and 5.0 than at pH 7.4 indicating that the pH of the medium influences the release rate of 2ME. Considering that the pH of the gastric medium is between 3.5 and 5, we may state that a great concentration of 2ME still reaches the duodenum and hence the bloodstream supporting that future applications may include oral administration of the 2ME-loaded nanoparticles.In contrast to Phase 1, the Korsmeyer\u2013Peppas model showed that the release exponents and 36 h (range: 444.9 \u00b1 28.1 to 462.1 \u00b1 26.9) after treatment with 2ME alone or 2ME-loaded zeolite nanoparticles. The ethanol (0.01%) used as a vehicle to dissolve free 2ME or incorporated with the zeolite nanoparticles did not affect the cell viability. These findings show that the increase in SPON1 transcript was similar in its kinetic and magnitude between 2ME and 2ME-loaded zeolite nanoparticles suggesting that the nanocomposite can activate 2ME-target genes; thus, adsorption of 2ME into zeolite nanoparticles did not affect the 2ME molecular properties on human prostate cancer cells. It is known that SPON1 is a key regulator of the apoptotic effects of 2ME on cancer cells [SPON1. This is in accordance with Alhakami et al. [The use of primary cultures from human or animal cancer cells is a good preclinical strategy that reflects the tumor response in vitro in a reliable model and it is essential to improve the clinical outcome of anticancer compounds ,48. In tThere are many ways by which the 2ME-loaded zeolite nanoparticles could be improved to enhance its effectiveness in potential clinical applications. The incorporation of a biodegradable polymeric layer around the zeolite nanoparticles could increase its adsorption capacity and permits a major release of 2ME in the target cells. 
Modification of the physicochemical properties of this nanocomposite to enhance its accumulation in the acid cancer microenvironment or conjugate 2ME with magnetic zeolite nanoparticles to induce a better site-directed sorting of 2ME-loaded zeolite nanoparticles into the body tumors. Finally, 2ME-loaded zeolite nanoparticles could be combined with biopolymers to form nanodisks and directly introduced into the prostate tumors. Future studies on the biomedical properties of the nanocomposite 2ME-zeolite provide further evidence that highlights its application as a therapeutic agent for human prostate cancer.SPON1 in LNCaP cells. The characterization process showed obtention of nanoparticles of zeolite conjugated with 2ME having a mean diameter of 164.9 \u00b1 7.4 nm and Zeta potential of \u221234.3 \u00b1 1.73. Furthermore, 2ME can be adsorbed into nanozeolites with an efficiency of 40% and a liberation capacity of 90% under physiological conditions. Although, the adsorption efficiency of 2ME into nanozeolites is lower compared with drugs such as Diclofenac, Piroxicam, Ketoprofen or Curcumin, the 2ME-loaded zeolite nanoparticles affected the viability and increased the expression of SPON1 in LNCaP cells. Furthermore, 2ME-loaded zeolite nanoparticles induced death cells in primary cultures of mouse prostate cancer. This indicates that 2ME still retains its anticancer properties when the drug is adsorbed suggesting that the zeolite nanoparticles could be a 2ME promising delivery system with potential biomedical applications for prostate cancer treatment.We characterized the nanoparticles of zeolite alone or conjugated with the anticancer drug 2ME by TEM, zeta potential, and FTIR spectroscopy as well as their effects on viability and expression of the 2ME-target gene g for 20 min, and the nanoparticles were dried at 37 \u00b0C overnight. 
The natural zeolite was characterized through X-ray Diffractometry as mordenite according to previous studies performed with the same batch of recollected zeolite [The natural zeolite was collected in a mine located at 36\u00b016\u2019 S, 71\u00b040\u2019 W and was homogenized and milled to pass a 2 mm sieve. The ball grinding mill was operated at 200 rpm for 4 h and dried for an additional 8 h at 105 \u00b0C to remove excess moisture from the particles. Then, 8 g of the <2 mm size particles were added to a test tube with 1 L of water to separate the smallest particles by sedimentation gradient ,50, and zeolite .g for 1 h at 10 \u00b0C. Then, the solid phase was rinsed in distilled water and dried on a heater plate at 60 \u00b0C.Zeolite nanoparticles were loaded with 2ME using the agitation method developed by Le\u00f3n et al. , and AlfThe hydrodynamic size (diameter), polydispersity index and surface charge were analyzed by dynamic light scattering in the Zetasizer Nano ZS DST1070 cell . The measurements were performed in phosphate buffer saline (PBS) pH 7.4 to mimic the size of the nanoparticles at the time of performing the in vitro viability tests and to approximate the size that the nanoparticles could have in blood circulation ,53. The The morphology and size of the zeolite and 2ME-loaded zeolite nanoparticles were also determined by Transmission Electron Microscopy (TEM). The nanoparticles were mounted on a copper mesh covered with carbon . The observations were performed with a TEM HT7700 at an acceleration voltage of 80 kV. 
The mean diameter of the nanoparticles was obtained by measuring 120 particles with the ImageJ software. UV-Vis spectra of 2ME, zeolite nanoparticles and 2ME-loaded zeolite nanoparticles were obtained using a UV-visible spectrophotometer (Agilent 8453 UV-Vis). The conjugation of zeolite with 2ME was examined by Attenuated Total Reflectance Infrared Fourier-transform spectroscopy (ATR-FTIR). The ATR-FTIR spectra were collected in the 4000\u2013500 cm\u22121 range, with a resolution of 4 cm\u22121, at room temperature, using a Thermo Nicolet IS10 spectrometer provided with a single-bounce Ge crystal Smart-iTR accessory. The 2ME loading efficiency was determined according to Alfaro et al. [53]. Briefly, 10 mg of 2ME-loaded zeolite nanoparticles were centrifuged, and samples of the supernatants were taken at 3, 6, 12, 24 or 48 h at 37 \u00b0C to measure the concentration of 2ME by Ultra-high performance liquid chromatography (UPLC) using an Acquity system equipped with a binary solvent delivery pump, an autosampler and a tunable UV detector, and a chromatographic C18 column as previously reported [53]. The adsorption efficiency (A) was calculated using the equation A (%) = ((2ME_T \u2212 2ME_ST)/2ME_T) \u00d7 100, where 2ME_T is the total amount of 2ME and 2ME_ST is the amount of 2ME in the supernatant. With the purpose of measuring 2ME release, 1 mg/mL of 2ME-zeolite nanoparticles underwent rapid equilibrium dialysis in a dialysis bag at 37 \u00b0C with gentle shaking in 15 mL PBS. 
At each sampling time, 1 mL of the supernatant was removed and replaced with an equivalent volume of PBS, and 2ME concentration in the supernatants was determined by UPLC.The 2ME release kinetics from the zeolite nanoparticles were analyzed employing the following mathematical models:Korsmeyer-Peppas Model:In this model, drug release is described by the following equation:t/M\u221e is the percentage of drugs released at time t divided by the total percentage of drugs released .MKP is the release constant of the Korsmeyer-Peppas model.Kt is the release time.n is the release exponent.Higuchi modelThe Higuchi model describes drug release through a quadratic relationship between time and the percentage of drugs released.In this model, drug release is described by the following equation:t is the percentage of drugs released at time t.MH is the release constant of the Higuchi model.KSqrt represents the square root of time.First order model:In this model, drug release is described by the following equation:t represents the percentage of drugs released at time t.Q0 is the initial percentage of the drug.Q1 is the release constant of the first-order model.K2 in 95% of the air in a cell culture incubator at 37 \u00b0C. The cells were used until to reach a confluency of 70\u201380%. For all experiments, 2.5 \u00d7 103 cells/well were seeded.The human prostate cancer cell line LNCaP was grown in a DMEM medium supplemented with sodium pyruvate 1 mM, 10% heat-inactivated fetal bovine serum, 100 UI/mL penicillin, 10 \u00b5g/mL streptomycin under 5% CODorsolateral prostate adenocarcinoma was induced in the mouse using a combined treatment of testosterone and the carcinogen N-methyl-N-nitrosurea (NMU) according to a modified protocol of Banudevi et al., . Locally2) pieces in Hanks\u2019 solution and then the smooth muscle cells were mechanically removed from the rest of the tissue and treated with Collagenase, Type I for 1 h to further disaggregation of the cells. 
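The three release models defined above (Korsmeyer-Peppas, Higuchi, first-order) are routinely fitted by nonlinear least squares; a sketch of the Korsmeyer-Peppas fit with an R\u00b2 check follows. The time points and release percentages are invented, not the study's measurements.

```python
# Sketch: fit the Korsmeyer-Peppas model Mt/Minf = K * t^n to a release
# profile and compute R^2. The data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([3.0, 6.0, 12.0, 24.0, 48.0])           # sampling times (h)
release = np.array([22.0, 31.0, 44.0, 61.0, 85.0])   # % 2ME released (made up)

def korsmeyer_peppas(t, K, n):
    return K * t ** n

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

(K, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(10.0, 0.5))
r2 = r_squared(release, korsmeyer_peppas(t, K, n))
# n < 0.5 suggests Fickian diffusion; n > 0.5 anomalous (non-Fickian) transport
```

The Higuchi and first-order models fit the same way with their own model functions (K * sqrt(t) and Q0 * (1 - exp(-k * t)), respectively), and the R\u00b2 values are compared across models as in the text.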
The cell suspension was centrifuged at 1200 g for 5 min, washed, and seeded into 6-well tissue culture plates in DMEM/High Modified medium with 4.0 mM L-glutamine and 4.5 g/L glucose, free of phenol red, supplemented with 10% (v/v) foetal bovine serum, 1 mM sodium pyruvate, 100 UI/mL penicillin and 100 \u00b5g/mL streptomycin. Epithelial prostate cancer cells were incubated at 37 \u00b0C in an atmosphere of 5% (v/v) CO2 for at least 7 days to reach 75\u201380% confluence. For each replicate, a pool of two prostates was used, and this experiment consisted of three replicates. Animals were euthanized and their tumors were excised, and a minor portion was fixed in cold 4% paraformaldehyde in PBS pH 7.4\u20137.6 and then processed for histological analysis according to Orostica et al. The res\u2026 Cell viability was measured with the CellTiter 96 AQueous Non-Radioactive Cell Proliferation Assay kit: LNCaP or primary culture cells were treated with zeolite or 2ME-loaded zeolite nanoparticles at a concentration equivalent to 5 \u00b5M of 2ME, grown on 96-well assay plates, and at 6, 24, 48 or 72 h post-treatment, 20 \u00b5L of the MTS reagent provided by the kit was added. After incubation, the absorbance value at 490 nm was obtained using an ELISA plate reader. As a positive control we used a solution of 5 \u00b5M 2ME alone, and 0.01% ethanol was used as the vehicle of the nanoparticles and 2ME. The relative mRNA level of SPON1 was quantified in the LNCaP cells; GAPDH was chosen as the housekeeping gene to be used as loading control. SYBR\u00ae Green I double-strand DNA binding dye (Roche Diagnostics) was used for these assays. Primers for SPON1 were 5\u2032 GAGAGATACGTGAAGCAGTTCC 3\u2032 (sense) and 5\u2032 ATACGGTGCCTCTTCTTCATAC 3\u2032 (antisense), and for GAPDH were 5\u2032 TGCCAAATATGATGACATCAAGAA 3\u2032 (sense) and 5\u2032 GGAGTGGGTGTCGCTGTTG 3\u2032 (antisense). All real-time PCR assays were performed in duplicate. 
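Relative transcript levels from duplicate SYBR Green assays of this kind are commonly summarized with the delta-delta-Ct method. The study cites its own previously reported method, which may differ in detail, so the following is a generic sketch; all Ct values are hypothetical, not data from the experiment.

```python
# Generic sketch of relative quantification by the delta-delta-Ct method.
# All Ct values below are hypothetical, not data from the experiment.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene (here SPON1) versus the housekeeping
    gene (GAPDH), relative to the control condition: 2**(-ddCt)."""
    d_ct_treated = ct_target - ct_ref            # normalize treated sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample to GAPDH
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: treated vs. vehicle control at one time point (made-up Ct values).
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=26.3, ct_ref_ctrl=18.2)
print(f"SPON1 fold change vs. control: {fold:.2f}")  # 4.00 with these inputs
```

In practice the duplicate Ct values per sample would be averaged before this calculation, and amplification efficiencies are assumed close to 100%.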
The thermal cycling conditions included an initial activation step at 95 \u00b0C for 25 min, followed by 40 cycles of denaturing and annealing-amplification, and finally one melting cycle (95 \u00b0C to 60 \u00b0C). The relative level of the transcripts was determined according to a method previously reported. LNCaP cells were treated with zeolite or 2ME-loaded zeolite nanoparticles at a concentration equivalent to 5 \u00b5M of 2ME; they were grown on 96-well assay plates, and at 16, 24 and 36 h post-treatment, total RNA from LNCaP cells was isolated using Trizol Reagent. One \u00b5g of total RNA of each sample was treated with DNase I, Amplification Grade (Invitrogen). Single-strand cDNA was synthesized by reverse transcription using the SuperScript III Reverse Transcriptase First-Strand System for RT-PCR (Invitrogen), according to the manufacturer\u2019s protocol. The Light Cycler instrument was used to quantify the relative mRNA level of SPON1, as previously reported. As a po\u2026 All assays were performed in triplicate. The data were analyzed using GraphPad Prism. Where appropriate, data are presented as mean with standard error, and overall analyses were executed by the Kruskal\u2013Wallis test followed by the Mann\u2013Whitney U test for pair-wise comparisons when overall significance was detected. All tests that yielded values of p < 0.05 were considered statistically significant."} +{"text": "This retrospective analysis involved 3,521 first IVF/ICSI cycles with fresh embryo transfers. Then multivariate logistic regression analysis was performed to explore the influencing factors of clinical pregnancy rate. \u2026 (P = 0.000) or as P > 1.5 ng/ml. In addition, LFEP duration was significantly associated with clinical pregnancy outcomes in unadjusted logistic regression analysis. However, in multivariate regression models after adjusting for confounders, the adjusted OR for LFEP duration (\u2265 2 days) in the two models was 0.808 and 0.720, respectively. 
Clinical pregnancy rate was the lowest in patients with a LFEP duration of \u2265 2 days, irrespective of whether LFEP was defined as P > 1.0 ng/ml (68.79% vs. 63.02% vs. 56.20%) or as P > 1.5 ng/ml. LFEP adversely affects clinical pregnancy outcomes. However, the duration of LFEP seems to have no influence on the clinical pregnancy rate in pituitary downregulation treatment cycles. The influence of late-follicular elevated progesterone (LFEP) on h\u2026 It was reported that the subtle progesterone rise on the day of human chorionic gonadotrophin (hCG) triggering can reach 5%\u201371%. Origina\u2026 (P/E2), P-to-oocyte, P-to-follicle, and P-to-mature oocyte index (PMOI) \u2026 Howe\u2026 In 2012, Huang et\u00a0al. performed an interesting study on the duration of LFEP for the first time. It seemed that the duration of P elevation played a major role in pregnancy outcomes. The cum\u2026 In these two previous studies, the criterion for LFEP was P > 1.0 ng/ml on the day of hCG administration. Patients were treated with either multiple ovarian stimulation protocols or the GnRH-antagonist (GnRH-ant) protocol. In the current study, with a larger population, we explored the impact of LFEP duration on IVF outcomes in women treated with pituitary downregulation treatment cycles under two different conditions: LFEP defined as P > 1.0 ng/ml or as P > 1.5 ng/ml. This study was approved by the Institutional Review Board (IRB) of the First Affiliated Hospital of Zhengzhou University. Written informed consent was obtained from all patients before IVF treatment, permitting physicians to collect basic information and treatment data. 
Data in this study were from the Clinical Reproductive Medicine Management System/Electronic Medical Record Cohort Database (CCRM/EMRCD) of the Reproductive Medical Center, First Affiliated Hospital of Zhengzhou University. Infertile women undergoing their first IVF/ICSI (intracytoplasmic sperm injection) cycle treatment with a GnRH agonist pituitary suppression protocol from January 2016 to December 2016 were enrolled in this retrospective study. All patients underwent fresh embryo transfers. The exclusion criteria were as follows: (1) uterine malformation; (2) oocyte-donation cycles; (3) recurrent spontaneous abortion and repeated implantation failure; and (4) pre-implantation genetic testing cycles. Pituitary downregulation and controlled ovarian stimulation were performed as described in a previous study; downregulation was confirmed by E2 \u2264 50 pg/ml, LH \u2264 5 mIU/mL, and endometrial thickness \u2264 5 mm. During ovarian stimulation, LH, E2, and P levels were collected when the diameter of the largest follicle reached 14 mm. Ovulation triggering criteria were: the diameter of the leading follicle more than 20 mm; at least three follicles > 17 mm. Then, 36\u201338 hours after hCG triggering, oocytes were retrieved. The patients underwent the standardized procedures of the fertility center; most took a P test in the morning, and the frequency of the P test was every day. Embryo transfer was performed on day 3 or day 5 after fertilization. All the patients had two cleavage-stage embryos or one blastocyst transferred. Clinical pregnancy was defined by the presence of a fetal heartbeat 35 days after the day of embryo transfer. Early spontaneous abortion in this article was defined as spontaneous pregnancy loss after sonographic visualization of an intrauterine gestational sac before 12 weeks of gestation. 
Of rele\u2026 Firstly, basic characteristics and pregnancy outcomes were compared between patients from different LFEP duration groups (LFEP was defined as P > 1.0 ng/ml or P > 1.5 ng/ml). The patients were divided according to the duration of LFEP on the day of hCG triggering. Then, logistic regression analysis was used to explore the impact of LFEP duration on IVF outcomes. Statistical analysis was performed with SPSS 24.0 software. Continuous data were presented as mean \u00b1 standard deviation (M \u00b1 SD) and were analyzed using Student\u2019s t-test; three groups were analyzed by ANOVA. Categorical data were described by the number of cases and percentages, and analyzed using Pearson\u2019s chi-squared test. A P-value < 0.05 was considered significant. A cohort of 3,521 first IVF/ICSI cycles with fresh embryo transfers was included in the study. The age of the participants was between 20\u201348 years, with a mean \u00b1 standard deviation of 30.5 \u00b1 4.69 years. Basic demographic parameters were different between the different LFEP duration groups (Table\u00a01). \u2026 (P < 0.001), and the spontaneous abortion rate was higher in the \u2265 2 days group as compared with those in the 0 day and 1 day groups. When the LFEP was defined as P > 1.5 ng/ml, there were 2,934 patients in the 0 day group, 420 patients in the 1 day group, and 127 patients in the \u2265 2 days group, as shown in \u2026. \u2026 (P = 0.008; LFEP defined as P > 1.0 ng/ml) and 0.657 (\u2026). In the crude logistic regression analysis model, \u2026 (P = 0.008). However, LFEP duration (\u2265 2 days) was not associated with IVF outcome. When LFEP was defined as P > 1.5 ng/ml, LFEP duration (\u2265 2 days) was still not a predictive factor. Many studies have explored the impact of LFEP on IVF outcomes. There is nearly a consensus that LFEP impairs pregnancy rates in fresh embryo transfer cycles, while the definition of LFEP is still in debate. 
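The group comparisons and odds ratios above follow standard recipes: Pearson's chi-squared test for categorical outcomes across the duration groups, and logistic regression for (adjusted) odds ratios. A self-contained sketch with invented counts, not the study's data:

```python
import numpy as np

# Invented counts (not the study's data): clinical pregnancies by LFEP
# duration group, in the spirit of the comparisons described above.
#                   pregnant  not pregnant
counts = np.array([[1100, 500],    # 0 day
                   [ 300, 176],    # 1 day
                   [  80,  62]])   # >= 2 days

# Pearson chi-squared test of independence, computed from first principles.
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
expected = row * col / counts.sum()
chi2 = ((counts - expected) ** 2 / expected).sum()
dof = (counts.shape[0] - 1) * (counts.shape[1] - 1)

# Crude (unadjusted) odds ratio of pregnancy, ">= 2 days" vs. "0 day".
or_ge2 = (counts[2, 0] * counts[0, 1]) / (counts[2, 1] * counts[0, 0])

print(f"chi2 = {chi2:.2f} (dof = {dof}); crude OR(>=2 days vs. 0 day) = {or_ge2:.3f}")
```

Adjusted ORs like the 0.808 and 0.720 reported above are then obtained by exponentiating the coefficients of a multivariate logistic model (e.g., a statsmodels `Logit` fit) that includes the confounders as covariates.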
However, in our daily work, it is common to see that LFEP lasts for more than 1 day before hCG administration. Whether the duration of LFEP has an impact on IVF outcome is still not known. The current study showed that LFEP itself was indeed associated with a decreased clinical pregnancy rate. The duration of LFEP did not influence the clinical pregnancy rate in fresh embryo transfer cycles. It has been proposed that elevated follicular P can alter the window of implantation, and therefore the transfer of an embryo into an asynchronous endometrium results in the failure to establish embryo-endometrium cross-dialogue, which leads to embryonic demise and failure of implantation. Huang et\u00a0al. proposed that the implantation window ranged from post-ovulatory day 6 to day 10; evaluating only one day of absolute serum P concentration might not accurately reflect the chronological change in the implantation window. Therefore, they analyzed the association between the duration of pre-ovulatory serum P elevation and the pregnancy outcomes of IVF/ICSI embryo transfer cycles. Results\u2026 In the current study, LFEP duration was first shown to be associated with clinical pregnancy rate. It was interesting to see that LFEP duration was not a predictive factor for IVF outcome after adjusting for related parameters, and these results were inconsistent with another study. At firs\u2026 Then, why were the IVF outcomes different between patients with LFEP and those without LFEP, but comparable in patients with LFEP for 1 day and \u2265 2 days? We speculate that the possible reason is that the number of progesterone receptors in the endometrium is fixed. When progesterone in the late follicular phase reaches a certain level, the endometrium changes to the secretory stage in advance. However, with increasing LFEP duration, the status of the endometrium no longer dramatically changes. 
Therefore, the effect of LFEP on IVF outcome for \u2265 2 days is the same as that for 1 day. Our findings have several clinical implications. First, there has been robust evidence that LFEP impairs endometrial receptivity, but LFEP in the fresh cycle does not affect the cumulative birth rate of the frozen transfers in a freeze-only approach. A freez\u2026 A strength of the current larger population-based study was that all patients were derived from a single center, and were all treated with pituitary downregulation protocols. In addition, the effect of LFEP duration on IVF outcomes was explored using two different LFEP criteria. However, several limitations also existed. Not all factors could be controlled due to the retrospective nature of this study. The specific mechanism for this phenomenon still needs to be explored in further basic experiments. Moreover, the impact of LFEP duration on cumulative pregnancy outcomes with transferred frozen embryos should also be demonstrated in the future. Taken together, our data showed that irrespective of LFEP criteria (P > 1.0 ng/ml or P > 1.5 ng/ml), LFEP adversely affects clinical pregnancy outcomes. However, compared with a LFEP duration of 1 day, a longer LFEP duration of \u2265 2 days seems to have no influence on the clinical pregnancy rate in pituitary downregulation treatment cycles. The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author. The studies involving human participants were reviewed and approved by the Zhengzhou University Research Ethics Board. The patients/participants provided their written informed consent to participate in this study. JZ contributed to the study design, data analysis, and manuscript preparation. XG handled patient recruitment and data collection. ZB supervised this study. 
All authors read and approved the final manuscript."} +{"text": "Non-suicidal self-injury (NSSI) is a prevalent phenomenon in somatic emergency departments, where nurses are the most consistent group of healthcare professionals who treat people with NSSI, which means they may affect the NSSI trajectory and help-seeking in the future. The objective of this study was to describe the experiences of nurse practitioners with treatment of people presenting with NSSI in the emergency department.Individual, semi-structured telephone interviews were conducted with seventeen purposefully recruited nurse practitioners from three emergency departments in the Capital Region of Denmark. Interview transcripts were analysed using inductive content analysis, as described by Graneheim and Lundman.The analysis resulted in the formulation of three categories and 10 subcategories describing how nurse practitioners feel confident and competent in treating physical injuries due to NSSI but at the same time insecure about how to provide adequate care and engage in conversations about NSSI and mental wellbeing with people with NSSI. An overarching theme, \u2018Left with a Sisyphean task\u2019, reflects the nurses\u2019 feeling of being handed the responsibility for performing a laborious, never-ending, and futile task.The findings indicate that nurse practitioners feel confident and competent in treating physical injuries due to NSSI but insecure about how to provide adequate care. Therefore, there is a need for training and guidelines.The online version contains supplementary material available at 10.1186/s12873-023-00888-6. 
Non-suicidal self-injury (NSSI), defined as intentional self-inflicted damage to the surface of the body, such as cutting, burning, or hitting, performed without suicidal intent, is a pr\u2026 NSSI represents a significant health concern as it is linked to an increased risk of repeated NSSI, suicidal behaviour, and suicide, where individuals who frequently self-injure and use multiple methods are at the highest risk of committing suicide [6, 7]. H\u2026 Self-injuries are frequently seen in somatic emergency departments (EDs). For peo\u2026 Despite the important role of nurses in the treatment of self-injuries in EDs, a literature search of previous research exploring their experiences with treating people with NSSI in the ED only found studies that use a definition that includes self-harm, irrespective of suicide intent, and primarily surveys on the attitudes of ED nurses toward people who self-injure [14\u201317]. This study employed a descriptive qualitative design. We generated data from individual, semi-structured telephone interviews, and used an inductive approach to content analysis. The study took place in three EDs at three university hospitals in the Capital Region of Denmark. Two EDs are open 24 h a day, each serving approximately 80,000 patients per year and employing about 80 nurses. The third ED operates during daytime hours, serving approximately 40,000 patients per year and employing around 30 nurses. These three EDs cover distinct catchment areas, each with significantly varying social demographics in and around the capital of Denmark. Due to the COVID-19 pandemic, sampling and recruitment of participants was carried out by telephone and email. To ensure a broad representation of various characteristics that may influence the experience of the phenomena under study, purposeful sampling was used. \u2026 \u201c\u2026 is really deep and really big, then I have to get a doctor involved, who, in that case, must decide: are we talking involuntary hospitalisation or what should happen? 
It\u2019s a very difficult assessment that I have to make based a bit on how I feel. What\u2019s my gut feeling about what this is.\u201d (NP17).Having no specific training or guidelines to inform their practices led to large variations in the treatment offered to patients with NSSI:\u201cOur group of staff is very diverse, and we have highly different approaches to treating the self-injuring girls that come to us.\u201d (NP3).The nurse practitioners recounted how they were forced to rely solely on their extensive ED experience, including developing their own standard procedures or copying their colleagues\u2019 approaches to NSSI. Having seen many patients repeatedly return to the ED with NSSI, a group referred to as frequent flyers\u201c\u2018You see the ugly arms you get; you\u2019ll have to live with this the rest of your life\u2019. I can say this several times, and I\u2019m also thinking, \u2018Well, it doesn\u2019t help\u2019 but I just feel that I need to say it.\u201d (NP17).The nurse practitioners described how rewarding it was when patients who had recently begun injuring themselves, a group referred to as beginners, accepted their advice to seek further help. However, they questioned the effectiveness of their preventive strategies and requested evidence-based preventive tools.A common feature of the interviews was that the nurse practitioners related NSSI to mental health and portrayed it as a maladaptive coping strategy for reducing mental distress or as a cry for help due to a failure to thrive. However, the nurse practitioners stated that factors such as being nicely dressed, wearing full make-up, having perfectly painted nails, appearing to be indifferent toward their injuries or being in a good mood contradicted their understanding of patients with NSSI:\u201cPretty good mood sometimes, amazingly good mood; they make jokes with me, and they make jokes with the staff. 
Sometimes you may think that they take it exactly as, well, a cosy little trip to the emergency department. Well, I get certain thoughts about it, I really do.\u201d (NP17).This type of behaviour and appearance made the nurses perceive some patients with NSSI as young adults engaging in a mainstream, yet destructive, youth culture fuelled by its presence on social media. A tendency to view the severity of the physical injury as a predictor for what was described as a true NSSI issue was also a topic that was touched upon across interviews. Patients with deep or multiple cuts were perceived as having a true NSSI issue and advised to seek mental healthcare, whereas patients with superficial injuries were judged as not having a need for mental healthcare. When men presented with self-inflicted fractures to the hand, the nurse practitioners described their behaviour as a quick fix led on by immediate frustrations rather than an action reflecting underlying mental issues and hence deemed that there was no need for mental healthcare. NSSI was depicted as attention-seeking behaviour which, at one research site, was exemplified by a case of a patient with borderline personality disorder whose frequent visits to the ED due to NSSI had been time-consuming and involved multiple staff and the use of coercion. The ED staff had consequently been advised to limit the amount of attention given to the patient. Accordingly, nurse practitioners from the site described that attention should be limited for all patients with NSSI and/or borderline personality disorder to hinder repeated NSSI and visits to the ED:\u201cThat\u2019s all they really want to achieve, and the more they will come, and the more often they will hurt themselves, those with borderline too, the more attention they get.\u201d (NP2).This category encompasses 3 subcategories related to the nurse practitioners\u2019 experiences of the treatment of NSSI as unrewarding. 
This category, moreover, highlights an emotional shift among nurse practitioners over time, from initial investment to feelings of powerlessness and aversion. Upon first encountering NSSI, the nurse practitioners described a desire to educate themselves on NSSI and to find ways to secure help for people with NSSI. However, over time, increased familiarity led to the treatment of it becoming just another routine task. The nurse practitioners depicted treatment as time-consuming, and the self-inflicted nature of NSSI was portrayed in stark contrast to other ED tasks, which gave rise to resistance towards patients. Helping someone with NSSI was regarded as less fulfilling and rewarding than treating people who had unwillingly got injured: \u201cYou may look at it [the wound] and say \u2018Well, that turned out nicely\u2019, but [laughs] you also just know that it\u2019ll just be another scar in a row of, well, uh \u2026 So, you don\u2019t feel the same sense of satisfaction as you do when you have stitched up someone with something that happened by accident. I might have patched her together, but it hasn\u2019t solved ANYTHING at all.\u201d (NP9). Numerous encounters within short periods of time lead to feelings such as exhaustion, powerlessness, frustration, hopelessness, and irritability. The nurse practitioners described exercising restraint in showing negative feelings toward patients, but also communicated that this strategy can become difficult when encounters accumulate: \u201cBut sometimes you can hardly control yourself. Because you can get tired of it if it\u2019s the same person who shows up twice in one shift. I will honestly admit that once, I simply told someone that I didn\u2019t want to see her once again, which was, of course, a bit harsh. 
I felt really bad about that, but on the other hand, then sometimes, you simply cannot continue to accommodate everything.\u201d (NP15). Some nurse practitioners stated that they were embarrassed to admit that they perceive patients with NSSI as difficult. Others stated a more direct aversion towards frequent flyers and spoke of a drama that they tried not to get caught up in: \u201cOne must be careful not to show them too much pity, because they will feed on it. Then there\u2019s the gotcha moment and you\u2019re stuck [laughs] \u2026 these are some difficult patients, let\u2019s just put it that way.\u201d (NP7). Treatment of NSSI was depicted as causing feelings of distress when nurse practitioners witnessed what they described as certain colleagues handling treatment poorly: \u201cI get a big knot in my stomach if it\u2019s one of them who gets this kind of patient. I can\u2019t stand it. I can get really upset.\u201d (NP14). Nurse practitioners provided various examples of overhearing ill-treatment by colleagues, which included shaming patients for their actions, denying them anaesthetics by claiming they enjoyed the pain, or objectifying them as ideal cases for practicing suturing skills: \u201c\u2018They don\u2019t notice anything anyway\u2019 and \u2018You can just stitch up the wound without giving them an anaesthetic\u2019 and \u2018Their wounds are straight lines, they\u2019re really easy to stitch.\u2019\u201d (NP6). The nurse practitioners emphasised that this did not prompt them to alter their own practices. On the contrary, they felt assured that what they described as ill-treatment did not contribute to patients\u2019 wellbeing. 
Consequently, they preferred to personally treat all patients with NSSI rather than having their colleagues handle the treatment inadequately.Some nurse practitioners perceived the ability to make it to the ED as a sign that patients were able to take care of themselves, while others feared that patients might continue to self-injure or commit suicide if they left the ED without seeking mental healthcare:\u201cYou know, your worst fear is sending a person home and finding out the next day that they\u2019ve jumped off a tall building.\u201d (NP11).Such concerns were connected to feelings of powerlessness, which could only be quelled if patients were discharged to ongoing treatment at a mental health unit or residential mental health institution.This category (including 4 subcategories) depicts the nurse practitioners\u2019 disappointment and disillusionment with the healthcare system\u2019s limitations, including a lack of resources, inability to provide adequate care, and a perceived societal failure to address the underlying causes of NSSI.The nurse practitioners often referred to the organisational divide between mental and physical healthcare when describing their main area of responsibility as taking care of physical health:\u201cWe\u2019re used to treating wounds, suturing, and bandaging and all the rest; we do it day in day out. But it\u2019s the specialists who take care of the mental health side of things.\u201d (NP5).They often made it clear to patients that the somatic ED exclusively specialises in physical injuries, whereas the mental health unit provides care for mental health issues related to NSSI. 
During interviews, they nonetheless also described how mental healthcare services distinguish between suicidal self-injury and NSSI when considering a person\u2019s need for mental healthcare, which is why mental health care may not be provided for all patients with self-injury: \u201cSometimes it rings a bit hollow when you say that you can just send them on to the mental health unit, or that it\u2019s a service we offer. And then, too, there\u2019s the fact that I don\u2019t really know what else to offer them.\u201d (NP13). In virtue of this, the nurse practitioners depicted people with NSSI as marginalised by the healthcare system, making it seem like no one really cares about what they are facing: \u201cIf their life is not in danger or they are not a danger to themselves, it can be pretty hard to find immediate help for them.\u201d (NP8). Due to an absence of NSSI-specific guidelines and training in the ED, the nurse practitioners requested specialised, evidence-based knowledge and NSSI-specific guidelines to improve the quality of care, standardise the treatment and help alleviate their insecurities: \u201cIt would be really, really nice for many of us to have greater competence in this, and I really think it would ease some situations if we had some evidence, or a knowledge base, to support why and what we can do, or how we could do things differently.\u201d (NP13). The nurse practitioners requested short educational sessions or one-day seminars on NSSI. They stated a need for clinical guidelines and a semi-structured questionnaire of what to ask patients, including signs to look out for, to prevent misinterpreting patient needs and the risk of suicide. 
Further, they requested information on where to refer patients and greater collaboration with mental healthcare units to build a mutual understanding of work responsibilities and services offered in both entities.Frustrations related to a failing healthcare system were amplified when patients are residents at a community mental health institution or inpatients at a mental healthcare facility, leading to a distrust in the ability of these entities to adequately care for patients with NSSI:\u201cI think there\u2019s something fundamentally wrong about the system because we\u2019re giving the patients sub-optimal help. It\u2019s very frustrating, I mean I\u2019m prepared to ring up if someone is hospitalised [in a mental healthcare unit] and enquire. It\u2019s just not right.\u201d (NP10).Concurrently, the nurse practitioners acknowledged the limited capacity in mental healthcare services and called for an increase in both the resources and quality of care provided in mental health facilities, as well as additional treatment options for patients who are not in immediate danger of suicide.Nurse practitioners criticised what they perceived as a limited commitment by society to treat and prevent repeated NSSI and demanded sufficient and adequate treatment options for people with NSSI. As a result, some described themselves as feeling disillusioned in their role in treating NSSI:\u201cSometimes you may feel \u2018Nooo, she can\u2019t be here AGAIN\u2019, because it\u2019s so frustrating. It seems so useless to just sit there and patch people together, and then you can remove the sutures you stitched up the week before.\u201c (NP9).Moreover, they found it maddening to witness endless growth in the prevalence of NSSI and called for action to ensure primary prevention efforts targeting the underlying causes of NSSI at the societal level:\u201cWe [society] must be failing somewhere since we have so many of them. 
Something must have been done wrong or is being done wrong.\u201d (NP12).In the last phase of the analysis, the underlying meaning of the experiences of nurse practitioners with treatment of people with NSSI in the ED was interpreted and formulated into the overarching theme \u2018Left with a Sisyphean task\u2019. A Sisyphean task symbolises the laboriousness of performing a never-ending, futile task that may potentially lead to feelings of resignation, numbness and disbelief . The resIn this study, which aimed to describe the experiences of nurse practitioners with treating people presenting with NSSI in the ED, we found that nurse practitioners felt powerless in the face of NSSI. Nurses described being able to treat only the wound and were disappointed with the inability of mental healthcare services to prevent repeated NSSI and provide adequate treatment options, which led us to the depiction of treatment of NSSI as a Sisyphean task. Hadfield et al. similarlOverall, our findings suggest that nurse practitioners feel competent and take pride in treating physical injuries resulting from NSSI. However, the findings also suggest that nurse practitioners may perceive their duty of care as restricted to involving phenomena understood within the traditional biomedical model. Results from a systematic review by Taylor et al. on attitA review by the National Institute for Health and Care Excellence in the UK underpinA notable finding in this study was that the nurse practitioners tended to appraise the severity of the injury and the patients\u2019 appearances to determine the validity of NSSI as a genuine issue. These findings indicate that merely presenting with wounds due to NSSI does not guarantee everyone with NSSI will be seen as equally deserving and in need of care. 
In a Danish study exploring how discourses on mental illness are negotiated in mental health practice, Ringer and Holen found thAnother noteworthy finding of this study was the possible failure to use anaesthetic during suturing. This finding is supported by previous studies, including a systematic review , 29 of EOur study also highlighted how nurse practitioners understood NSSI as attention-seeking behaviour. This is in line with multiple studies and theories on self-injurious behaviour , 36, 37.Although our study participants were informed about the definition of NSSI and asked to avoid referring to their experiences with treating suicidal self-injury, the study indicates that distinguishing between cases of suicidal and non-suicidal self-injury is difficult. They were aware that the mental healthcare unit distinguishes between the two phenomena when assessing the need for mental healthcare but this did not prevent them from making many references to suicidality and suicidal behaviour. In the same vein, the study identified a tendency to consider NSSI as a mental health issue that must be treated in mental healthcare services. While the prevalence of NSSI is higher in clinical samples than in the general population , people The purposeful sampling strategy provided a relevant sample that varied in terms of sex, age and years of ED experience, resulting in a thick description of the phenomenon under study. The three ED nurse managers whom we contacted to recruit nurse practitioners for our research readily allowed their participation in the study. However, it is worth noting that the involvement of gatekeepers, in this case nurse managers, during recruitment might have led to the exclusion of nurse practitioners holding negative attitudes towards NSSI from participating in the study. As a result, relying solely on training and guidelines might prove inadequate in enhancing the care provided to individuals with NSSI. 
Ideally, comprehensive competence development should encompass not only the acquisition of skills but also the cultivation of awareness regarding attitudes and the promotion of reflection on the nurses' influence on the patients' trajectory. KR, CB and SY each conducted 8, 4 and 5 interviews, respectively. At the research site where CB worked as a staff physiotherapist, five interviews were carried out by KR, while CB interviewed 3 participants whom she had not treated. The participation of all authors in the analysis adds to the credibility of our study . TrianguFace-to-face interviews are regarded as the gold standard of data generation in qualitative inquiries, but the assumption is that the quality of the data, and thus the research findings, is somewhat compromised in telephone interviews due to factors such as a lack of visual cues . TelephoIn conclusion, this study found that the experiences of nurse practitioners with treating people with NSSI in the ED showed that they viewed treatment as a Sisyphean task, in other words, as laborious and futile. The findings indicate that nurse practitioners feel confident and competent in treating physical injuries due to NSSI but insecure about how to provide adequate care and engage in conversations about NSSI and mental wellbeing with people with NSSI. The findings further indicate that not all people presenting with NSSI in EDs are considered equally deserving and in need of care, as some people are viewed as performing NSSI untruthfully or as appearing either too well or too distressed.
Hence, providing nurse practitioners with NSSI-specific training and guidelines to direct their decision making and strengthen their confidence in their interactions with people with NSSI appears warranted.Below is the link to the electronic supplementary material.Supplementary Material 1"} +{"text": "Federated learning (FL) is a distributed machine learning paradigm that enables a large number of clients to collaboratively train models without sharing data. However, when the private dataset between clients is not independent and identically distributed (non-IID), the local training objective is inconsistent with the global training objective, which possibly causes the convergence speed of FL to slow down, or even not converge. In this paper, we design a novel FL framework based on deep reinforcement learning (DRL), named FedRLCS. In FedRLCS, we primarily improved the greedy strategy and action space of the double DQN (DDQN) algorithm, enabling the server to select the optimal subset of clients from a non-IID dataset to participate in training, thereby accelerating model convergence and reaching the target accuracy in fewer communication epochs. In simulation experiments, we partition multiple datasets with different strategies to simulate non-IID on local clients. We adopt four models on the four datasets , respectively, and conduct comparative experiments with five state-of-the-art non-IID FL methods. Experimental results show that FedRLCS reduces the number of communication rounds required by 10\u201370% with the same target accuracy without increasing the computation and storage costs for all clients. The application of deep learning technology in the Internet of Things (IoT) is very common, with uses in smart healthcare, smart transportation, and smart cities . 
HoweverIn real-world scenarios, the local datasets among different clients exhibit heterogeneity, indicating that their local data distribution differs from the global data distribution within the entire federated learning (FL) system. Several studies have demonstrated that the heterogeneity of data among clients significantly impacts the effectiveness of FL methods, leading to a substantial reduction in model accuracy ,9. SpeciSome researchers consider only a single category of non-IID environments and do not provide stable performance improvements in different categories of non-IID environments ,14,15,16Numerous studies focus on devising client selection strategies to alleviate the issue of data heterogeneity in FL. Some authors measure the degree of local dataset skew by utilizing the discrepancy between local and global model parameters for the development of client selection strategies ,21,22. TDeep reinforcement learning (DRL) excels at handling optimal decisions in complex dynamic environments, where the agent repeatedly observes the environment, performs actions to maximize its goals, and receives rewards from the environment. By constructing an agent for the server in FL and designing a suitable reward function, the agent can adaptively select clients with high or low loss values to participate in the global model aggregation process, thus alleviating the problem that client selection strategies are difficult to formulate in dynamic environments. We propose a DRL-based FL framework.
In this framework, we design a client selection strategy in FL by utilizing the improved DDQN algorithm, which makes optimal decisions for the client selection problem at each iteration and selects a subset of clients to achieve the target accuracy with fewer communication rounds. We model the client selection problem in FL as a Markov decision process (MDP), introduce a top-p sampling strategy to the dFedRLCS is deployed on the server side and does not cause additional computing or storage burdens on the client side. Moreover, our method represents an optimization at the system level, and it is orthogonal to other FL optimization methods. We summarize some classification methods and use them to construct experimental datasets with non-IID distributions. We conducted extensive experiments, and the experimental results show that our method outperforms existing non-IID FL methods. Our main contributions are as follows: The rest of this paper is organized as follows. Federated learning trains a global model through the cooperation of multiple clients; however, data heterogeneity among different clients affects the performance of the aggregated global model. There has been a surge in research literature on reducing the impact of non-IID data on FL. We summarize the strategies for overcoming non-IID data in existing work into the following three categories: Several studies have attempted to alleviate the non-IID issue among clients. Zhao et al. improvedAnother research aspect focuses on addressing the negative impact of heterogeneous data by designing algorithms to enhance the local training phase or improve the global aggregation process. In , the autIn addition, several studies have attempted to design client selection policies for servers. In , the autMost of the mentioned works do not pay attention to the resource constraints of the client, which imposes a large computational or memory burden on the client.
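The top-p sampling mentioned above restricts a stochastic choice to the smallest set of high-probability candidates whose cumulative mass exceeds p. As an illustration only (the function name and details are assumptions, not the paper's implementation), applying top-p sampling to per-client scores might look like:

```python
import numpy as np

def top_p_sample(scores, p=0.9, k=1, rng=None):
    """Sample k candidate indices from the smallest set of candidates whose
    softmax probability mass exceeds p (top-p / nucleus sampling)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Numerically stable softmax over the raw scores.
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]  # highest probability first
    # Smallest prefix of the sorted candidates with cumulative mass > p.
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, size=min(k, len(nucleus)), replace=False, p=nucleus_probs)
```

With a strongly peaked score vector the nucleus collapses to the single best candidate, so exploration happens mainly when the agent's scores are close, which is one plausible motivation for combining top-p with a greedy policy.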
In addition, some of these works assume a single category of non-IID environment or use a relatively simple neural network. Our work optimizes the performance of non-IID FL under the premise of protecting privacy while taking the above factors into account.t, each client i. The symbols used in this paper and their definition are given in FL has gained recent popularity as a decentralized machine learning framework, typically consisting of a server and a set of clients. Clients in the system share locally trained model parameters rather than private local datasets. The federated averaging algorithm (FedAvg) has become the most commonly used FL algorithm. When the iteration number is t, the server aggregates the uploaded client models according to Equation (2), w_{t+1} = Σ_i (n_i/n) w_{t+1}^i, i.e., a weighted average of the client parameters with weights proportional to the local dataset sizes. The above operation is repeated until the global model reaches the target accuracy. Deep reinforcement learning (DRL) is the learning process of an agent that acts by interacting with the environment to maximize the reward obtained. Specifically, at each episode t, the DRL agent observes the current state, takes an action, and receives a reward, with the Q-value at round t defined as the expected cumulative discounted reward. In the value-based DRL algorithm, the agent trains a multi-layer neural network that, for a given state vector, estimates the Q-value of each candidate action. The effectiveness of FL in the IID setting was demonstrated in . NeverthAlthough existing FL methods have made considerable progress in solving non-IID data issues by limiting the degree of deviation of local model weight updates and setting a global shared dataset, the potential to suppress these disadvantages by selecting suitable participants before each communication round has been largely ignored. Performing active selection of an optimal subset of clients by comparing local training information for each client is a promising optimization method.
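The FedAvg aggregation step described above, a weighted average of client parameters with weights proportional to local dataset sizes, can be sketched as follows (a minimal illustration, not the authors' code):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg global update: w_{t+1} = sum_i (n_i / n) * w_{t+1}^i,
    where n_i is client i's local dataset size and n = sum_i n_i."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # n_i / n
    return sum(c * np.asarray(w, dtype=float)
               for c, w in zip(coeffs, client_weights))
```

For example, aggregating two clients holding 25% and 75% of the data weights the second client's parameters three times as heavily, which is exactly why skewed (non-IID) local updates can pull the global model away from the global optimum.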
This selection helps reduce the discrepancy between the global model parameters and the ideal model parameters, facilitating the rapid convergence of the global model toward the desired accuracy. The experiments conducted in existing studies are limited in their adoption of non-IID dataset partitioning strategies, hindering their representativeness. Therefore, we summarize several real-world common non-IID categories as the basis for our experiments and obtain a more complex and comprehensive non-IID dataset through the combination of different deviation categories. Quantity bias is the most common non-IID category, which means that different clients have different amounts of data . In each client's local dataset, the proportion of data with different labels is different . For exaCompositional bias refers to the scenario where certain label classes are missing from a client's dataset, meaning that training on these local datasets cannot capture the full scope of knowledge. This type of bias generally leads to a higher distribution shift . This concept was proposed by He et al. . Data beWhen the distribution of local datasets includes various non-IID categories, it presents a significant challenge to the convergence of FL. The synthetic dataset approach used in our experiments is detailed in . In this section, we propose a DRL-based FL framework, FedRLCS, which intelligently selects clients participating in aggregation in each round via a DRL agent to speed up the convergence of the global model in heterogeneous environments. We introduce the overall architecture of the proposed FedRLCS in N clients. In each round, K clients actively participate in the aggregation process of the model.
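Label-distribution skew of the kind summarized above is commonly simulated by drawing per-class client proportions from a Dirichlet distribution, where a smaller concentration parameter produces a more skewed split. A minimal sketch of such a partition (an assumed setup, not necessarily the paper's exact partitioning code):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with per-class proportions
    drawn from Dir(alpha); smaller alpha -> more non-IID partitions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))  # class share per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx
```

With alpha around 100 each client's label mix approaches the global mix (near-IID), while alpha around 0.1 leaves many clients with only a few dominant classes, combining label-proportion and compositional bias.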
Initially, the server randomly assigns weight parameters We will now discuss a hypothetical situation in which an FL framework consists of a central server and N clients.
Step 1: The central server distributes the global model weight parameters
Step 2: Each client conducts an epoch training session using their own dataset and then submits the loss values to the server.t, where it. Then, the DRL agent selects K clients as participants in this FL round based on the state
Step 3: The server uses the set of loss values
Step 4: The client receiving the confirmation message will continue the remaining local training tasks and upload the local model parameters
Step 5: After all of the selected clients have uploaded the model parameters, the server utilizes the standard FedAvg algorithm to update the parameters of the global model, denoted as
Algorithm 1 FedRLCS Algorithm
1:  procedure Client-Selected Federated Learning
2:      Server initialization: global model parameters
3:      for ... do
4:          push ...
5:          for each client do
6:              ClientCheck(...)
7:          end for
8:          ...
9:          ...
10:         for each client do
11:             ClientUpdate(i)
12:         end for
13:         ...
14:     end for
15: end procedure
16:
17: function ClientCheck / ClientUpdate
... is set to 100. For each client, the batch size is set to 50, and the number of local epochs is set to 5. For the CIFAR100 dataset, we train it using a lightweight network, MobileNetV2. The number of clients (N) is 50.
The batch size and local epochs are the same as in the CIFAR10 dataset. The NICO dataset was constructed and published by He et al. ; it consFor the Tiny ImageNet dataset, we train it using the ResNet-34 network. The total number of clients, N, is 100. Each client has a batch size of 16, and the local epoch number for each client is set to 5. We utilize the Adam optimizer with a learning rate of 0.001 for all methods. In each round of communication, by default, 10 clients are selected as participants from all clients. For FedProx, the proximal term he paper , while oThe DRL agent's model includes two four-layer MLP networks, each with an input size of N and an output size of K. The hidden layer sizes are 256 and 128, respectively. The probability threshold p of the top-p sampling strategy is set to 0.9. We use Adam as the optimizer of the agent neural network with a learning rate of 0.01. We calculate the test accuracy when the standard FedAvg algorithm almost converges in each heterogeneous environment, as our target accuracy. Using the aforementioned default settings, the communication iteration number needed for various methods to attain the target accuracy is shown in
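The two networks mentioned above (an online network and a target network) are the hallmark of double DQN: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla DQN. A small numeric illustration of the DDQN target (assumed notation, not the authors' code):

```python
import numpy as np

def ddqn_target(reward, q_next_online, q_next_target, gamma=0.99, done=False):
    """Double DQN bootstrap target:
    r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    if done:
        return float(reward)  # terminal state: no bootstrapping
    a_star = int(np.argmax(q_next_online))      # action selection: online net
    return float(reward + gamma * q_next_target[a_star])  # evaluation: target net
```

The squared difference between this target and the online network's current Q-estimate would then serve as the regression loss for the agent's Adam updates.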
FedRLCS achieves optimal performance on the non-IID dataset with different partitioning strategies, demonstrating the effectiveness of our proposed DDQN-based client selection strategy.We examine the impacts of different data heterogeneity levels by altering the concentration parameter Compared with the FedAvg method, as the data heterogeneity changes, FedRLCS reduces communication rounds by To demonstrate the scalability of FedRLCS, we construct two different setups. The comparison between FedRLCS and other FL methods is shown in K of clients participating in the aggregation changed to 5, 15, and 20. The results are shown in Setting 1: We experimented with varying numbers of parties on CIFAR-10 over FedAvg in all heterogeneous environments. In contrast, FedRLCS has demonstrated its strengths in different experimental setups. FedRLCS is a systems-based optimization approach that is compatible with previous work. In addition, the training of the agent network is deployed on the server, without introducing an additional workload for the client. Our proposed approach can be better applied in resource-constrained FL environments. One limitation of our study is the exclusive focus on image tasks. The application of Federated Learning (FL) in natural language processing and graph neural networks is extensive and warrants further exploration. In addition, future work will consider factors such as model accuracy, local computation, and communication times. This approach will be especially relevant for more complex problems, such as those involving environmental loads on road transport networks."} +{"text": "As additive manufacturing continues to evolve, there is ongoing discussion about ways to improve the layer-by-layer printing process and increase the mechanical strength of printed objects compared to those produced by traditional techniques such as injection molding. 
To achieve this, researchers are exploring ways of enhancing the interaction between the matrix and filler by introducing lignin in the 3D printing filament processing. In this work, research has been conducted on using biodegradable organosolv lignin fillers as a reinforcement for the filament layers, produced with a bench-top filament extruder, in order to enhance interlayer adhesion. Briefly, it was found that organosolv lignin fillers have the potential to improve the properties of polylactic acid (PLA) filament for fused deposition modeling (FDM) 3D printing. By incorporating different formulations of lignin with PLA, it was found that using 3 to 5% lignin in the filament leads to an improvement in the Young's modulus and interlayer adhesion in 3D printing. However, increasing the lignin content to 10% also results in a decrease in the composite tensile strength due to the lack of bonding between the lignin and PLA and the limited mixing capability of the small extruder. It creates physical objects by layering materials according to a digital design or geometric representation . ScientiAnother of the ongoing issues in 3D printing is the structure of spaces or cavities between layers, which can affect the mechanical strength of printed objects when compared to those produced using traditional injection-molding processes. To address this issue, researchers are investigating the use of sustainable additives or fillers in the printing process to enhance layer adhesion and tensile properties when 3D printing at complex angles . In Malaysia, the oil palm empty fruit bunch (OPEFB) is one of the major biomasses produced from palm oil mills and comprises cellulose, hemicellulose and lignin; it can be investigated extensively to increase polymer biodegradability and mechanical characteristics and to reduce reliance on fossil fuels . Among t
It can be extracted using various techniques that result in different structures and molecular weights, making it attractive for the production of bio-based products. Lignin is gaining attention worldwide due to its low cost, renewability, biodegradability, and high carbon content, especially for 3D printed materials ,12,13. IThe addition of lignin to composites replicates the natural conditions of the plant cell wall and improves the interfacial adhesion of the lignocellulosic matrix, resulting in enhanced mechanical properties such as Young\u2019s modulus, tensile strength, flexural strength, and improved wettability ,16,17,18Therefore, this work aimed to study the stability and compatibility of in-house processing filament via a bench-top extruder at different formulations of polylactic acid (PLA) and extracted lignin. Organosolv lignin, extracted from the OPEFB, was blended with various ratios of PLA pellets. These composite pellets were extruded with a filament diameter of about 1.75 mm and 3D-printed using FDM, to assess the improvement in mechanical properties of the composite over neat PLA. Detailed chemical and thermal analysis was conducted to examine the chemical interactions and structure of the composite, and their relationship to the mechanical properties of the 3D-printed parts.The isolation of lignin from OPEFB fibers was carried out using 90% formic acid (FA) . PLA was received in natural pellets . Organosolv extraction of the lignin process was carried out according to previously described procedures ,25. BrieIn the preparation of filament composites, the organosolv lignin was used as a filler for PLA and was blended with 3\u201315% of lignin ratio compositions. The mixture composition of biopolymer formulation was mixed using a mechanical stirrer at 2000 rpm for 30 min. 
The samples were kept in a convection oven at 50 °C for 6 h with the moisture content <0.5% before the filament extrusion. The filament extrusion was carried out using a 3devo Composer 350 bench-top extruder and a single mixing screw extruder with a nozzle diameter of 4 mm . Prior to the filament extrusion, the extruder was preheated and cleaned using virgin high-density polyethylene (HDPE) to remove the excess materials. The extruder was preheated up to 230 °C until the temperature suited the composite formulation. Gradually, a total of 100 g of each formulated sample was deposited onto the extruder hopper. The control parameters for the filament extrusion were designed to produce a filament diameter of about 1.75 mm. The extrusion profiles were set at four different heating zones , 80% cooling fan speed and an extrusion speed of 3.5 rpm, where the lower extrusion speed helps to reduce the amount of fluctuation in the diameter of the filament. The extruded filament composites were spooled and stored in the desiccator until further use. The extruder was cleaned once again using HDPE before the remaining compositions were processed. All samples were purged until the flow of extrusion was consistent. The filament diameter, extruder speed and temperatures of all samples were monitored in real time using DevoVision . The extruded filaments were printed with a commercial FDM 3D printer with a 0.4 mm nozzle . The CAD model (.stl file) utilized in this study was an ASTM D638 type V standard tensile specimen for tensile property analysis. The file was sliced into a G-code file using slicer software . The printing profiles were varied by the nozzle temperature, infill, layer height, extrusion speed, and printing speed for the best printing structure . In order to determine the interlayer adhesion of infused lignin, the printing orientation was set at 0° and 90° on top of the build plate .−1 in the range of 4000 cm−1 to 500 cm−1.
Thermochemical analyses were determined using a differential scanning calorimeter under a nitrogen atmosphere from 25 to 250 °C at a heating rate of 10 °C/min. In addition, the thermogravimetric analysis (TGA) was performed by monitoring changes in the thermal degradation of composite samples at temperatures ranging from 25 to 600 °C under a nitrogen atmosphere at a heating rate of 10 °C/min. Chemical characteristics of synthesized filament composites were determined using attenuated total reflectance Fourier transform infrared, ATR-FTIR, at a resolution of 1 cm−1. The mechanical characterization of the 3D printed samples was measured using an Instron® Electromechanical Universal Testing Systems 3300 Series at 10 mm/min with a load cell of 1 kN. All the data reported were based on the mean of five replicates (n = 5). The morphological arrangement of layer-by-layer adhesion was investigated using a field emission scanning electron microscope (FESEM) . The cross-sectional samples were sputter-coated with platinum before viewing to reduce the charging effect of the samples. Organosolv lignin that underwent extraction from the OPEFB was processed using an organosolv method that utilizes 90% FA and a rotary evaporator to isolate the organosolv lignin from the FA. After the extraction and separation process, OPEFB fibers became more brownish, as shown in −1 and indicate the presence of a hydroxyl group, O–H, whereas the band at 2940 cm−1 indicates the presence of a methyl group, C–H, while the bands at 1716 cm−1 and 1500 to 1600 cm−1 indicate the presence of a carbonyl group, C=O, and an aromatic group, C=C.
Following this, 1410 to 1470 cm\u22121 show an asymmetrically deformed C\u2013H group in \u2013CH3 and \u2013CH2, while 1350 cm\u22121, 1212 cm\u22121, 1130 cm\u22121 to 1110 cm\u22121 explain the S group of the C\u2013O stretch, phenolic OH and ether in S and G, as well as the S group of the C\u2013H stretch [The presence of functional groups of isolated organosolv lignin was; the blended filament composites are shown in stretch . Despiteg and melting temperature, Tm for PLA were recorded at 54 and 155 \u00b0C, respectively. The previous work based on organosolv extraction lignin from OPEFB fibers exhibited a thermoplastic characteristic with a glass transition temperature, Tg around 97 \u00b0C [The differential scanning calorimetry transitions depicted in nd 97 \u00b0C ; thus, tThe thermal decomposition of filament composites was clearly disrupted by the addition of lignin as a filler see c,d. The All filament composites completely disintegrate at temperatures of up to 600 \u00b0C and this is coupled with a volatile PLA product that decomposes rapidly between 250 and 390 \u00b0C. Despite the fact that the composites gradually deteriorated at temperatures above 250 \u00b0C, this would not have a major impact on the polymer degradation during the composite extrusion process. In this work, the established temperatures that were used in the filament extrusion and FDM 3D printing were below the identified deteriorating temperature; both procedures had temperature settings of 165/180/180/170 \u00b0C and 215 \u00b0C, respectively.The filament diameter over extrusion time was provided in real-time monitoring software, DevoVision see a. AlthouThe inconsistent distribution of lignin during the filament extrusion may as well be observed at its mechanical analysis in All printing parameters were set as the same for the two types of different orientations, horizontal (0\u00b0) and vertical (90\u00b0) see . 
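The Young's modulus values compared in this work are, by definition, the slope of the initial linear-elastic region of the stress-strain curve obtained from the tensile tests. A minimal way to estimate that slope from test arrays (hypothetical data and strain threshold, not the paper's measurements):

```python
import numpy as np

def youngs_modulus(strain, stress, linear_limit=0.005):
    """Least-squares slope of the stress-strain curve restricted to the
    assumed linear-elastic region (strain <= linear_limit); the result
    carries the same units as the stress array (e.g., MPa)."""
    strain, stress = np.asarray(strain, float), np.asarray(stress, float)
    mask = strain <= linear_limit
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope
```

For perfectly linear data with stress = 3000 MPa x strain, the fitted slope recovers 3000 MPa (3 GPa); with real test data the choice of `linear_limit` determines how much of the toe and yield regions are excluded from the fit.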
FurtherThe decreasing trend in tensile strength for composites is most likely due to lignin's lower tensile strength when compared to PLA. In general, the tensile strength is influenced by lignin–lignin intermolecular interactions, PLA–PLA interactions, and PLA–lignin interactions, as well as the rigidity of lignin particles . FurtherTherefore, for 3D printing at orientations of 0° and 90°, PLA-3%L (0°) shows the highest Young's modulus among the sample formulations. This emphasizes the importance of including lignin in polymer formulations, as both orientation set-ups show an improvement in Young's modulus compared with neat PLA . Therefore, the improvement in Young's modulus is mainly attributed to the excellent role of lignin as a rigid filler that increased stiffness. This increase in stiffness can be linked to both hydrophobic interactions and hydrogen-bond electrostatic forces between the lignin and the polymer . In TablThe increment in Young's modulus is comparable with previous findings associated with the improvement in the interlayer adhesion of 3D printing ,19,24,35Fractured tensile samples underwent mechanical testing and were then examined for their morphology to study the interaction of lignin with the polymer matrix see . OptimalThe mechanical strength data of 3D printed samples was further validated with the integration of an artificial intelligence hybrid technique called the adaptive neuro fuzzy inference system (ANFIS). The study aimed to predict the tensile strength of the printed samples under different conditions using lignin concentration , infill (15 and 30%), and orientation (0° and 90°) of the printed samples as input variables.
The tensile strength was obtained through UTM testing, and the ANFIS model was then trained and tested using a Sugeno-type fuzzy inference system (r2 = 0.986), as shown in . In this study, the ANFIS model structure consisted of multiple rules, with 27 rules generated by tuning the three process parameters . The training data were loaded into the network and then used to train and test the fuzzy inference structure by adjusting the membership function parameters for optimal performance. The output function was constructed linearly with a single response. The best prediction for optimum tensile strength was achieved using 3 wt.% lignin at 15% infill and horizontal orientation (0°), with the highest strength being achieved by neat PLA at 15% infill and horizontal orientation. The ANFIS model results were consistent with the experimental results, as shown in . 3D printing filament made from a combination of biopolymer PLA and biofiller lignin was extruded using a bench-top filament maker with an optimal diameter of 1.75 mm. All factors such as the extrusion temperature, speed, and cooling fan speed were carefully controlled and recorded, with the best results being achieved at 165/180/180/170 °C for the biocomposite. The study found that using a 3% lignin formulation resulted in the highest Young's modulus for the biocomposite filament. The ANFIS model was used to predict and validate the results, and it was found that the biocomposite has potential for use in 3D printing even in small-scale production.