At a glance, Blighty Café appears to be another one of those trendy and sophisticated coffee shops that offer the terribly appealing combination of tasteful beverages, off-beat music and the inviting aroma of freshly-baked sourdough bread. As a university student living in the area, of course, I couldn’t resist. Sounds perfect, right? Except, sitting in the mock air-raid shelter, drinking my flat white, I couldn’t help but feel ever more uncomfortable.
Blighty Café is located in North London and is not your average hipster café; Churchill memorabilia abounds, the outside area is modelled as a Second World War air-raid shelter, and there is even a life size model of Churchill so you can sip your coffee in the company of the revered wartime leader. Harmless? Chic? Unfortunately, the more I thought about it, the more I felt that it was deeply disrespectful to glorify Winston Churchill, without mentioning those who truly suffered at the hands of colonial rule.
Being of mixed Pakistani and English descent, I have always felt colonial history close to home, and uncovering the horrors of British imperialism was a deeply upsetting experience. Churchill cannot be disentangled from this bloody colonial history. His instrumental involvement in the Bengal famine, his blasé attitude towards South African concentration camps and declarations such as “the Aryan stock is bound to triumph” have understandably led me to question his heroism. With all this in mind, when my flatmate invited me to attend a surprise performance protesting against the café’s decor, I felt my presence would be justified.
“CHURCHILL WAS A RACIST!” Fifteen of us visited the packed-out Blighty Café on a windy Saturday morning. A silence fell amongst the customers as we recited Churchill’s racist outbursts; people were listening quite intently. Our performance lasted no more than five minutes. There was surprisingly (or perhaps unsurprisingly) little friction, except for one member of the café’s staff shouting at us as we left: “Churchill fought for all of our freedom!” I found such a response rather confusing after we had just quoted Churchill as saying: “I hate Indians. They are a beastly people with a beastly religion.” I pondered the man’s retort on the way home; yes, Churchill did fight for all of our freedom, but he also hated South Asians and said that they followed a beastly religion. Should I then be content with businesses in my local area celebrating his legacy?
The coverage of the protest by The Sun and The Daily Mail in the days following our performance has triggered a racist backlash, with one member of the group singled out for character assassination. We might have intended to make people feel a bit uncomfortable, but at the end of the day it was a peaceful protest. Rather than resorting to personal attacks, the newspapers could engage in a debate about our historical narratives. The coverage also noted that, like many young people, some of our group supported Labour leader Jeremy Corbyn. Yet we did not choose to involve Corbyn in our performance, and of course he is entitled to his own opinion on these matters.
On the café’s website, it is stated: “Blighty’s mission is to make the world a closer place by celebrating and improving the relationships between the people and nations of the 52 members of the commonwealth.” That sounds wonderful. I just don’t believe that glorifying figures of history with racist views is the right way to do so. The owner of the café told The Sun that Churchill did “some racist and ignorant things” but his flaws “showed he was human”. If I could ask the café owner to do one thing, it would be to read more into the darker side of Churchill’s legacy, and its effects on colonised people.
We chose to question the narrative of history by simply quoting words which Churchill used himself. It seems ludicrous that the press are so keen to shut us down. One has to ask whether they are silencing a group of students or the words of Churchill which they would rather forget.
The Blighty Café did not respond to a request for comment, but the owner has written an article about his establishment here. |
Aroma classification and characterization of Lactobacillus delbrueckii subsp. bulgaricus fermented milk

Highlights
The aroma types of fermented milk produced by L. bulgaricus were divided into milky-type, cheesy-type, fermented-type and miscellaneous-type. The flavor fingerprints of the different aroma types were established by GC-IMS. Acetaldehyde, 2,3-butanedione, acetic acid, butanoic acid, hexanoic acid and δ-decalactone were determined as the key aroma compounds of the different aroma types by flavoromics.

Introduction
Yogurt is a popular dairy product and is usually produced using a combination of L. bulgaricus and Streptococcus thermophilus. Because of its unique flavor and health-improving functions, such as providing vitamin D and calcium, regulating gut microbiota, and enhancing immunity, yogurt is favored by consumers worldwide. Flavor is an essential factor in yogurt quality, affecting consumers' acceptance. There are many kinds of yogurt on the market, for example, plain yogurt, fruity yogurt, cheesy yogurt, etc. Flavor diversity has been achieved by adding spices or juice during yogurt processing (Routray & Mishra, 2016). Volatile compounds such as 2,3-butanedione have been regarded as important indexes for screening aroma-producing strains. However, there is no consistent conclusion about the effects of these compounds on the aroma of yogurt. Chinese liquor (Baijiu), which enjoys a long history, can be divided into several aroma types such as Luzhou-flavor, Maotai-flavor, etc. The typing methods of Baijiu aroma may be used as a reference to classify the aroma types of yogurts according to sensory analysis. The volatile compounds in yogurts of different aroma types could be analyzed by gas chromatography-mass spectrometry (GC-MS) to determine the key flavor compounds. The key flavor compounds could then be selected as indicators for screening strains with specific aromas and flavors, which might be an effective way to solve the homogenization problem in yogurt flavor. In recent decades, new technologies such as gas chromatography-ion mobility spectrometry (GC-IMS) have emerged in food flavor research. GC-IMS has the advantages of high sensitivity, simple sample preparation, and short analysis time. However, GC-IMS cannot provide as reliable a qualitative determination as GC-MS. Therefore, the combination of GC-MS and GC-IMS could be an excellent choice to provide a comprehensive analysis of the volatile components in yogurts. So far, >90 volatile compounds have been identified in yogurts. However, not all of these compounds can be detected by human olfactory receptors and contribute to the aroma of yogurt. Aroma-active compounds are those that produce an odor perception in the human brain. Recently, sensomics or flavoromics has been applied to identify the key aroma-active compounds in dairy products; the flavors of milk fan, kurut and cheeses have been investigated by flavoromics. In this study, to type the aroma of fermented milk produced by L. bulgaricus, the sensory properties of fermented milk produced by 28 L. bulgaricus strains were analyzed and the aroma types were classified based on the sensory analysis; the fingerprints and key aroma-active compounds of specific aroma types were then investigated by GC-IMS and a flavoromics approach.

Strains
Starter cultures for dairy products were collected from a local market in Tsingtao, Shandong Province, China. The starter cultures were homogenized in 0.9% NaCl solution.
Serial dilutions of 10⁻⁴ to 10⁻⁶ were prepared, and 100 μL aliquots of the 10⁻⁴ to 10⁻⁶ dilutions were spread onto MRS agar plates (Qingdao Hopo Bio-Technology Co., Ltd, China) and incubated at 37 ℃ under anaerobic conditions. Colonies were subjected to the catalase test and Gram staining. Catalase-negative, Gram-positive, rod-shaped colonies were purified two or three times on MRS agar plates (Hajimohammadi). Pure colonies were identified by 16S rRNA gene sequencing at Sangon Biotech (Shanghai, China) Co., Ltd. A total of 28 L. delbrueckii subsp. bulgaricus strains were finally isolated and kept at −80 ℃. The information on these L. bulgaricus strains is shown in Table S1. The strains were subcultured three times in a milk medium (Qingdao Hopo Bio-Technology Co., Ltd, China) for 18 h at 37 ℃ before use.

Preparation of fermented milk
Sterilized whole milk containing 3.0% protein (wt/vol) and 3.7% fat (wt/vol) was provided by a dairy company (Beijing Sanyuan Foods Co., Ltd), and 6% sucrose (wt/vol) was added. The milk was then pasteurized at 95 ℃ for 5 min and cooled quickly to 42 ℃. L. bulgaricus was inoculated into the milk at an inoculum size of 0.5% (vol/vol) and fermented at 42 ℃. When the titratable acidity (TA) rose above 60 °T, the fermentation was ended immediately by an ice bath. The measurement of TA was conducted following ISO/TS 11869:2012 (ISO/IDF, 2012). Quantitative descriptive and instrumental analyses were then performed after refrigerating the fermented milk at 4 ℃ for 24 h. Each sample was fermented twice by the corresponding L. bulgaricus strain according to the above manufacturing method.

Quantitative descriptive analysis (QDA)
According to the method given by ISO 8586:2012 (ISO, 2012), thirty students and staff members of the College of Food Science and Engineering of the Ocean University of China, who had been trained in relevant courses and had experience in sensory analysis, were invited as panel candidates. The thirty candidate panelists were trained for one month (1 h/day) to familiarize themselves with, identify and describe the odorants commonly found in dairy products. Ten panelists (two males and eight females; average age 25 years) were selected based on availability and sensory perception abilities. These ten panelists were then retrained and asked to sniff the standard references and grade the intensities of different aroma attributes. The training and selection of panelists and the formal descriptive sensory analysis were performed in a sensory laboratory of the College of Food Science and Engineering of Ocean University of China, where the room temperature was controlled at 20 ℃. Referring to ISO 5492:2008, five sensory descriptors, milky, creamy, cheesy, buttery and fermented, were selected; the definitions and references for these five descriptors are shown in Table S2. A continuous 9-point scale, ranging from 0 (absence of the attribute) to 9 (very high intensity of the attribute), was used to measure aroma intensity. The intensity of each reference was set at 7 points. Ten grams of each fermented milk sample were weighed into a disposable tasting glass, coded with a random three-digit code, and served at approximately 10 ℃. Each fermented milk sample was measured in triplicate. The panel evaluated 10 to 12 samples during each 120-min session, and 16 sessions in total were held to evaluate the 28 samples in three replicates. The panelists could rest after evaluating 4 to 5 samples.
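As a minimal illustration of how the QDA records described above can be aggregated (the sample codes and scores below are hypothetical, and pandas is assumed to be available), the panelists' 0-9 ratings are averaged per sample and attribute and rounded to the nearest integer, as done for the values reported in Table 1:

import pandas as pd

# Hypothetical QDA records: each panelist scores each sample on a 0-9 scale
# for every attribute; every sample is evaluated in triplicate.
scores = pd.DataFrame({
    "sample":    ["L4-1-2", "L4-1-2", "L6-11", "L6-11"],
    "panelist":  ["P01", "P02", "P01", "P02"],
    "attribute": ["cheesy", "cheesy", "fermented", "fermented"],
    "score":     [5, 4, 6, 6],
})

# Mean intensity per sample and attribute, rounded to the nearest integer.
table = (scores.groupby(["sample", "attribute"])["score"]
               .mean()
               .round()
               .unstack("attribute"))
print(table)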
The data of the sensory analysis were collected manually. To ensure the accuracy and reliability of the sensory evaluation, professionals engaged in the R&D of dairy starters at Beijing Doit Biotechnology Co., Ltd were commissioned to guide and supervise the sensory analysis work.

HS-GC-IMS analysis
One gram of fermented milk was placed into a 25-mL headspace glass sampling vial, which was incubated at 55 ℃ for 20 min at 500 rpm. After the incubation, 500 μL of headspace was automatically drawn and transferred into the GC-IMS instrument (FlavorSpec, GAS, Germany). A WAT-Wax capillary column (15 m × 0.53 mm) was used for volatile separation at an isothermal 60 ℃. Nitrogen (purity ≥ 99.999%) was used as the carrier gas, with the flow rate programmed as follows: 2 mL/min (0 to 2 min), 2 to 10 mL/min (2 to 10 min), 10 to 100 mL/min (10 to 15 min), 100 to 150 mL/min (15 to 20 min), and 150 mL/min (20 to 30 min). The drift tube was held at 45 ℃, with nitrogen used as the drift gas at a flow of 150 mL/min. The data were processed with LAV (Laboratory Analytical Viewer) and the GC-IMS Library Search software. The retention indices (RI) of the volatile compounds were calculated using n-ketones C4 to C9 as external references for the identification of compounds. Each fermented milk sample was analyzed in triplicate.

GC-MS analysis
HS-GC-MS was used to analyze the volatile compositions of the fermented milk samples. Ten grams of sample were put into a 40-mL glass vial with a silicon septum, and 2 μL of a 0.328 mg/L solution of 2-octanol (GmbH, Augsburg, Germany) was added as an internal standard. The glass vial was then equilibrated at 55 ℃ for 20 min, and an SPME fiber coated with 50/30 μm divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) was exposed to the headspace of the vial for volatile extraction at 55 ℃ for 40 min. After extraction, the fiber was inserted into the injector port of the GC-MS and desorbed for 6 min at 250 ℃ in splitless mode. The GC-MS analysis was operated on an Agilent 8890 GC/7000D MSD according to the methods of Dan et al. with minor modifications. The volatile compounds were determined on an Agilent HP-INNOWax capillary column (60 m × 0.25 mm, 0.25 μm, Agilent Technologies) and an Agilent HP-5MS capillary column (30 m × 0.25 mm, 0.25 μm, Agilent Technologies). For the HP-INNOWax column, the oven temperature was first held at 35 ℃ for 3 min, then increased to 120 ℃ at a rate of 4 ℃/min, then to 190 ℃ at 5 ℃/min, and finally to 230 ℃ at 10 ℃/min and held for 6 min. For the HP-5MS column, the oven temperature was maintained at 35 ℃ for 3 min, increased to 100 ℃ at a rate of 4 ℃/min and held for 2 min, then increased to 150 ℃ at 5 ℃/min, and finally increased to 250 ℃ at 10 ℃/min and held for 10 min. The volatile compounds were identified by MS and retention index (RI). The RI was calculated with a series of n-alkanes (C7 to C30).

GC-O-MS analysis
The aroma-active compounds were located using an Agilent 7890B-5977B GC/MSD equipped with an olfactory detection port (Gerstel ODP-2, Mülheim an der Ruhr, Germany). The GC effluent was split equally (1:1) between the mass detector and the sniffing port. An Agilent HP-INNOWax capillary column (60 m × 0.25 mm, 0.25 μm) was chosen to separate the volatile compounds, and the operating conditions were the same as described for the GC-MS analysis.
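The retention index calculation mentioned above (n-ketones C4-C9 for GC-IMS, n-alkanes C7-C30 for GC-MS) amounts to linear interpolation between the two reference compounds bracketing each analyte. A minimal sketch, with hypothetical retention times, might look like this:

from bisect import bisect_right

# Hypothetical reference series: (carbon number, retention time in minutes).
REFS = [(7, 4.10), (8, 6.35), (9, 9.02), (10, 12.10)]

def retention_index(rt):
    """Linear RI = 100 * (n + (rt - t_n) / (t_{n+1} - t_n)), where t_n and
    t_{n+1} are the retention times of the references bracketing rt."""
    times = [t for _, t in REFS]
    i = bisect_right(times, rt) - 1      # reference eluting just before rt
    i = max(0, min(i, len(REFS) - 2))    # clamp so a bracket always exists
    (n, t_lo), (_, t_hi) = REFS[i], REFS[i + 1]
    return 100.0 * (n + (rt - t_lo) / (t_hi - t_lo))

print(round(retention_index(7.50)))      # analyte between C8 and C9 -> ~843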
The odor-specific magnitude estimation (OSME) analysis was conducted by ten well-trained panelists to evaluate the odor intensities of the volatile compounds. The procedure for the selection and training of the panelists was the same as described in section 2.4, Quantitative descriptive analysis (QDA). The onset and end times of the aroma-active compounds, the perceived odor characteristics, and the aroma intensity (AI) were recorded. AI was marked on a 6-point scale from 0 (none detected) to 5 (extremely strong). The GC-O-MS analysis was performed in triplicate by each panelist. The mean of the AI values marked by the ten panelists was taken as the final AI value. To avoid potential loss of aroma-active compounds, a volatile compound was recognized as aroma-active when its AI value was higher than zero. In addition to MS and RI, the odor descriptions of the compounds and the corresponding standard compounds were used for further identification of the aroma-active compounds.

Quantitative and OAV analysis
The aroma-active compounds were quantified by establishing standard curves with GC-MS in SIM mode. To obtain a matrix similar to fermented milk, an aroma-blank matrix was prepared according to a previously reported method with minor modification. Firstly, 3% (wt/vol) milk protein concentrate powder (MPC 80, Ningxia Cezanne dairy industry Co., Ltd., Yinchuan, China), 3.6% (wt/vol) sunflower seed oil (COFCO Corporation, Beijing, China), water, 1% (wt/vol) modified starch and 0.1% (wt/vol) pectin were mixed with a blender (2000 rpm, 55 ℃, 30 min). The mixture was homogenized at 20 MPa and pasteurized at 95 ℃ for 5 min. Finally, the pH of the mixture was adjusted to 4.5 with a lactate buffer solution. Ten grams of the mixture were added to a 40-mL glass vial with a silicon septum, and 2-octanol was added as described above. The concentration gradient of each compound was set according to the semiquantitative results, and standard curves with no fewer than six points were established. All calibration curves were prepared in triplicate. The calibration equations are listed in Table S6, where y represents the peak area ratio (peak area of the standard compound / peak area of the internal standard) and x represents the concentration ratio (concentration of the standard compound / concentration of the internal standard). The odor activity value (OAV) was calculated as the ratio of the concentration of each compound to its odor threshold. The odor thresholds of these compounds were collected from the information available in the literature (van Gemert, 2011). Each fermented milk sample was analyzed in triplicate.

Aroma recombination
To evaluate and compare the actual contribution of each aroma-active compound to the aroma profile of the fermented milk samples, all of the aroma-active compounds were added to the aroma-blank mixture at the same concentrations as in the original samples. After equilibrating at room temperature for 2 h, the aroma recombinant (AR) was obtained. Quantitative descriptive analysis (QDA) with the ten well-trained panelists, as described in section 2.4, was used to evaluate the sensory scores of the recombination models and their corresponding samples.

Omission tests
To determine the compounds that influence the overall aroma profile of the fermented milk samples, omission models based on the recombination model were prepared by removing a single compound or a group of compounds from the recombination model.
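The quantitation and OAV analysis described above reduce to simple arithmetic: the calibration line is inverted to convert a peak-area ratio into a concentration, and the OAV is that concentration divided by the odor threshold. A minimal sketch follows; the calibration constants and peak-area ratio are hypothetical, while the δ-decalactone values used in the final check are those reported later in the Results:

def concentration(peak_area_ratio, slope, intercept, internal_std_conc):
    """Invert the calibration line y = slope * x + intercept, where y is the
    peak area ratio (compound / 2-octanol) and x is the concentration ratio
    (compound / 2-octanol), then rescale by the internal standard level."""
    x = (peak_area_ratio - intercept) / slope
    return x * internal_std_conc          # same units as internal_std_conc

def oav(conc, odor_threshold):
    """Odor activity value: concentration divided by the odor threshold."""
    return conc / odor_threshold

# Hypothetical calibration constants and peak-area ratio.
c = concentration(peak_area_ratio=1.8, slope=2.4, intercept=0.05,
                  internal_std_conc=0.066)   # mg/kg

# Consistency check with reported values: delta-decalactone at
# 0.02424-0.03266 mg/kg over a 0.0025 mg/kg threshold gives OAV ~9.7-13.1.
print(round(oav(0.02424, 0.0025), 2), round(oav(0.03266, 0.0025), 2))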
A total of 15 omission models and one recombination model were analyzed by triangle tests. In each triangle test, one omission model and two recombination models were presented in random order to the ten well-trained panelists, who were asked to judge whether there were differences among the samples and to select the different one. If at least eight panelists recognized the omission model, the omitted compound or group of compounds was considered significantly important to the overall aroma of the samples (p < 0.05). Similarly, nine panelists meant that the compound or group of compounds was highly significantly important (p < 0.01), and ten panelists meant very highly significantly important (p < 0.001). Each test was conducted in triplicate.

Aroma addition experiments
Aroma addition experiments were performed to further verify and explore the effects of the key aroma compounds on the intensities of the aroma attributes. All aroma-active compounds were mixed into the aroma-blank mixture at the same concentrations as in the original sample of L6-11 to obtain the aroma recombinant (AR). Then, the key aroma compounds revealed by the omission tests, including acetaldehyde, 2,3-butanedione, acetic acid, butanoic acid, hexanoic acid, and δ-decalactone, were added to the aroma recombinant of L6-11 at concentrations similar to those in the L6-11 sample. The intensity changes in the aroma attributes were quantified by the ten well-trained panelists as described in section 2.4, Quantitative descriptive analysis. The aroma addition experiments were performed in triplicate for every key aroma compound.

Statistical analysis
Data from the sensory analysis, quantitative analysis, and aroma recombination and addition analyses were evaluated using analysis of variance (ANOVA) with Duncan's multiple comparison tests performed in IBM SPSS Statistics version 25. p-values of <0.05, <0.01 and <0.001 were considered statistically significant and marked as *, ** and ***, respectively. Radar maps and PCA for the results of the sensory analysis and aroma classification were produced using Origin 2019b (OriginLab, Northampton, United States). Heatmap analysis of the concentrations of the six key aroma compounds was performed in MetaboAnalyst 5.0 (https://www.metaboanalyst.ca).

Sensory analysis and aroma typing
The results of the sensory analysis of the fermented milk produced by the 28 L. bulgaricus strains are shown in Table 1. For the milky attribute, the samples L6-12, L8-5, L9-2, L9-5, and L9-8 scored highest, with intensities of 3. The creamy attribute was only present in the samples L4-1-2, L6-11, L6-14, L8-4, L8-7, and L8-8, and it was weaker than the cheesy, milky, and fermented attributes. For the cheesy attribute, L4-1-2 scored highest, with an intensity of 5, followed by the L4-1-1, L4-2-3, L6-15, L8-6, L8-7, and L9-6 samples. For the fermented attribute, only the L6-11 sample scored an intensity higher than 5, followed by L9-5. Besides, the samples L6-12, L8-1 and L9-1 showed a weak buttery attribute. For quantitative descriptive analysis of the aroma of fermented milk, it is crucial to develop a sensory lexicon with definitions and scales. Some reports have provided lexicons and references for the aroma sensory evaluation of yogurt, such as grain-like, moldy, yeast, milk, sweaty, etc. (Brown & Chambers, 2015).
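The significance criteria used for the triangle tests above (8, 9, or 10 of 10 panelists identifying the odd sample) can be related to exact binomial probabilities, since the chance of picking the odd sample by guessing is one in three. A minimal sketch, assuming SciPy is available:

from scipy.stats import binomtest

def triangle_test_pvalue(k, n=10):
    """One-sided p-value that k of n correct identifications exceed the
    1/3 guessing probability of a triangle test."""
    return binomtest(k, n, p=1/3, alternative="greater").pvalue

for k in (8, 9, 10):
    print(k, round(triangle_test_pvalue(k), 5))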
Considering the differences in dietary habits and cultures of different countries, we did not copy these lexicons completely. Instead, a lexicon including milky, creamy, cheesy, buttery, and fermented was finally determined according to ISO 5492:2008. The twenty-eight samples showed obvious differences in all five aroma attributes. The L4-1-2 sample showed a prominent cheesy aroma, and the L6-11 sample showed the most outstanding fermented aroma. Interestingly, all of the 28 fermented milk samples showed relatively low scores for the milky, creamy, and buttery attributes. These phenomena indicated that different L. bulgaricus strains have varying metabolizing abilities to produce flavor compounds during milk fermentation, which provided a theoretical basis for establishing screening methods for strains with specific sensory notes. Referring to the classification methods of Chinese liquors, the rule of aroma typing in this study was set as follows: a fermented milk sample was assigned to the aroma type corresponding to its highest-scoring aroma attribute. Accordingly, the 28 fermented milk samples were divided into four aroma types: milky-type, cheesy-type, fermented-type, and miscellaneous-type, and the results are shown in Table 1.

Notes to Table 1: (1) The mean scores were rounded to the nearest integer. Attribute intensities were scored on a 0- to 9-point universal intensity scale, where 0 indicates that the attribute is not perceived at all; 1 indicates doubt about the presence of the attribute; 2 indicates that the attribute is perceived but very slightly; 3 indicates that the attribute is clearly perceived, although it is slight; 4 indicates that the attribute is clearly perceived but the intensity is much lower than the reference; 5 indicates that the attribute is clearly perceived but the intensity is lower than the reference; 6 indicates that the attribute is clearly perceived but the intensity is slightly lower than the reference; 7 indicates that the intensity of the attribute is similar to the reference; 8 indicates that the intensity of the attribute is higher than the reference; and 9 indicates that the intensity of the attribute is much higher than the reference. (2) Different superscript letters in the same row indicate significant differences (p < 0.05).

The miscellaneous type means that the highest score was shared by more than one attribute. Four samples were grouped into the milky-type, and 13, 7, and 4 fermented milk samples were classified into the cheesy-type, fermented-type and miscellaneous-type, respectively. Radar maps of the four aroma types were constructed from the results of the sensory analysis and are shown in Fig. 1. Among the four fermented milk samples of the milky-type (Fig. 1a), the L8-5 and L9-8 samples scored highest, with a milky intensity of 3. In the cheesy-type (Fig. 1b), L4-1-2 showed the most robust cheesy aroma, with an intensity of 5. Among the seven fermented milk samples of the fermented-type (Fig. 1c), L6-11 scored highest, with a fermented intensity of 6, followed by L9-5 with 5 points. As for the miscellaneous-type (Fig. 1d), L9-2 showed the same score for the milky and cheesy attributes, and L9-7 had the same score for the milky and fermented attributes. L9-4 and L9-13 were scored with the same intensity for the cheesy and fermented attributes. The PCA plot derived from the aroma characterization is shown in Fig. 1e, which indicates a pictorial relationship among the fermented milk samples of the four aroma types based on the intensities of the five aroma attributes. PCA can reduce the dimensionality of the aroma data and intuitively characterize the prominent aroma features of the fermented milk samples. After PCA analysis, the fermented milk samples of the four aroma types were differentiated on the plot. Samples of the milky-type were positively correlated with the milky attribute. Samples of the fermented-type were positively related to the fermented attribute and negatively correlated with the cheesy attribute. All 13 samples of the cheesy-type were positively correlated with the cheesy attribute. Currently, no other study has proposed aroma types for fermented milk, and the results of aroma typing in this study might offer suggestions for establishing an aroma classification of fermented milk in the future, to screen out novel strains with specific aroma features.

Identification of volatile compounds by GC-IMS and GC-MS
To elucidate the crucial compounds influencing the aroma typing, the volatile compounds of two selected fermented milk samples were analyzed by GC-IMS, and the results are shown in Fig. 2.

Fig. 2. 2D topographic plots of volatile compounds and the flavor fingerprints of the two selected fermented milk samples. a: 2D topographic plots of volatile compounds in the two samples. b: differential plot of the GC-IMS spectra of the two samples (with L4-1-2 as the background). c: flavor fingerprints of the two samples; areas A and B are the characteristic fingerprints of L4-1-2 and L6-11, respectively. d: flavor fingerprints of unidentified plots in the two samples; areas C and D are the characteristic fingerprints of L4-1-2 and L6-11, respectively.

These two samples, L4-1-2 (cheesy-type group) and L6-11 (fermented-type group), showed the highest scores in their corresponding sensory attributes. All of the samples of the milky-type and miscellaneous-type groups showed low scores (<3 points) in the five aroma attributes, so the samples of these two types were not selected for the subsequent GC-IMS and flavoromics analysis. The 2D topographic plots showed no obvious difference in the composition of volatile components between the two samples (Fig. 2a). With the L4-1-2 sample set as the background, the differential plot obtained by topographic plot subtraction is shown in Fig. 2b. Many plots differed in color, which suggested that many compounds in the two samples differed in concentration.
To describe the differences in volatile compounds between L4-1-2 (cheesy-type) and L6-11 (fermented-type), integral and qualitative analyses were performed on the GC-IMS spectra. A total of 48 volatile compounds, including 10 aldehydes, 5 esters, 13 ketones, 2 acids, 11 alcohols, 4 sulfides and 3 others, were identified (Table S3). To demonstrate the differences between the aroma types, flavor fingerprints of the fermented milk samples were constructed and are shown in Fig. 2c. A total of 26 volatile compounds were identified as key compounds, including 9 ketones, 3 aldehydes, 6 alcohols, 2 acids, 3 esters, 1 sulfide and 3 other compounds. These compounds were divided into area A and area B. Area A consisted of 15 volatile compounds, including 5 alcohols, 1 sulfide, 3 aldehydes, 1 ester, 4 ketones, and 1 other compound. Dimethyl disulfide can provide an onion- and cabbage-like flavor in yogurt, and its concentration in L4-1-2 was higher than that in L6-11, which might explain why L4-1-2 had a higher cheesy score than L6-11. Besides, there were also 3 aldehydes in area A, which are derived from amino acids by transamination. (E)-2-Pentenal and nonanal can give fruity and green notes to yogurt. Area B was mainly composed of acids, esters, and ketones. Acetic acid can enhance the sour smell of yogurt. The concentration of acetic acid in L6-11 was higher than that in L4-1-2, which might be why the fermented score of L6-11 was higher than that of L4-1-2. 2-Nonanone, 3-hydroxy-2-butanone, and 2,3-butanedione offer a creamy aroma in yogurt, and the concentrations of these compounds in L6-11 were higher than those in L4-1-2, which might account for the higher creamy score of L6-11. Besides, the intensities of 33 unidentified plots differed significantly between the samples (Fig. 2d). Area C consisted of 9 unidentified plots whose intensities in L4-1-2 were higher than those in L6-11, while area D included 24 unidentified plots whose intensities in L6-11 were higher than those in L4-1-2. The volatile compounds in the two fermented milk samples, L4-1-2 and L6-11, were then investigated by GC-MS. The qualitative results of GC-MS are shown in Table S4. A total of 67 volatile compounds, including 9 aldehydes, 17 ketones, 13 organic acids, 12 alcohols, 3 lactones, 6 sulfides, 2 terpenes, and 5 other compounds, were identified. Comparing and integrating the qualitative results, 48 volatile compounds were identified by GC-IMS and 67 by GC-MS; a total of 95 volatile compounds were identified via GC-IMS and GC-MS, of which 21 were identified by both techniques. It is worth noting that although GC-MS identified more compounds than GC-IMS, some compounds, including ethyl acetate, butyl acetate, and small acids, ketones and alcohols, were only identified by GC-IMS, owing to its high sensitivity. Besides, there were 33 unidentified plots in the GC-IMS spectra, which suggested that GC-IMS cannot provide qualitative identification as precise as GC-MS. Therefore, the combination of GC-IMS and GC-MS can comprehensively analyze the volatile compounds in fermented milk.

Identification of aroma-active compounds in fermented milk by GC-O-MS
Although 95 volatile compounds were identified by GC-IMS and GC-MS, these analyses could not confirm which compounds had aroma activity. Therefore, the aroma-active compounds were investigated by GC-O-MS analysis.
Meanwhile, OSME (odor-specific magnitude estimation) analysis was performed to quantify the aroma-active compounds responsible for the aroma perception on the basis of aroma intensity (AI). Generally, compounds with higher AI values have a more intense aroma and are more important to the aroma of foods. A total of 12 aroma-active compounds, including 5 ketones, 4 acids, 2 aldehydes and 1 lactone, were perceived and identified in samples L4-1-2 and L6-11 by comparing their MS, RI, and odor descriptions with those of authentic standards (Table 2). The RI, identification methods, and odor descriptions of the 12 aroma-active compounds are also shown in Table S5. Ten and nine aroma-active compounds were revealed in L4-1-2 and L6-11, respectively. From Table 2, 7 aroma-active compounds were perceived in both fermented milk samples, among which 2,3-butanedione (1.38-1.85), butanoic acid (2.25-3.25), hexanoic acid (1.75-1.88), and δ-decalactone (2.75-3.25) presented relatively high AI values and were therefore considered primary contributors to the aroma of fermented milk. Ketones are usually produced by β-oxidation during the fermentation process. A total of 5 ketones were perceived in the two selected fermented milk samples, and 2,3-butanedione had the highest aroma intensity, followed by 2-nonanone. 2-Pentanone and 3-hydroxy-2-butanone had lower aroma intensities (AI < 1) and contributed less to the aroma of fermented milk. Some reports have shown that 2,3-butanedione can provide a creamy aroma to fermented milk, and it was perceived at similar intensities in the two selected samples. Two aldehydes, acetaldehyde and benzaldehyde, showed lower aroma intensities than the other aroma-active compounds, which suggested that aldehydes had little influence on the flavor of fermented milk. In addition, four organic acids, acetic acid, butanoic acid, hexanoic acid, and octanoic acid, were regarded as aroma-active compounds. Acetic acid, butanoic acid, and hexanoic acid can give strong sour, cheesy and rancid smells. Acetic acid, butanoic acid, and hexanoic acid were all screened out as aroma-active compounds in both L4-1-2 and L6-11, while octanoic acid was only perceived in L6-11. Butanoic acid, hexanoic acid, and octanoic acid had higher aroma intensities (AI > 1), which indicated that these three acids made important contributions to the overall aroma of fermented milk. Meanwhile, acetic acid, hexanoic acid, and octanoic acid showed higher AIs in L6-11 than in L4-1-2, and butanoic acid showed a higher AI in L4-1-2 than in L6-11, which could explain the stronger fermented aroma of L6-11 and the stronger cheesy aroma of L4-1-2. Besides, δ-decalactone, which is produced mainly by hydrolysis and further esterification of hydroxy fatty acid triglycerides, shows a creamy or coconut flavor and was perceived at similar, high intensities (AI > 2) in the two selected samples, which suggested that this compound plays an indispensable role in the aroma of the fermented milk samples. Ott et al. analyzed aroma-active compounds in yogurt by GC-O, and 20 aroma compounds were perceived; however, only 11 aroma-active compounds were identified in their study, which differs from the results of our study. This difference might be because they adopted combined static and dynamic headspace sampling and preparative simultaneous distillation-extraction under vacuum to extract the aroma compounds, which might differ from headspace solid-phase microextraction in terms of extraction efficiency.
Quantitation of aroma-active compounds in fermented milk
The differences in the AI values of the aroma-active compounds were mainly due to differences in concentration. Therefore, quantitative analysis was performed to further evaluate the contributions of these aroma-active compounds. The quantitative results for the 12 aroma-active compounds are shown in Table 2. It was found that acetaldehyde, 3-hydroxy-2-butanone, acetic acid, butanoic acid, and hexanoic acid were present in relatively high concentrations (>1 mg/kg). Conversely, 2-pentanone, 2,3-butanedione, 2-heptanone, 2-nonanone, benzaldehyde, octanoic acid, and δ-decalactone were present in lower concentrations (<1 mg/kg). The contributions of aroma-active compounds are determined not only by their concentrations but also by their odor thresholds. Therefore, the OAVs of the aroma-active compounds were calculated to further analyze their contributions. The results are shown in Table 2. The OAVs of 7 aroma-active compounds were higher than 1, including acetaldehyde (OAV 43.16-50.91), 2,3-butanedione (OAV 2.30-4.52), 2-heptanone (OAV 3.98-6.24), acetic acid (OAV 1.19-1.88), butanoic acid (OAV 1.06-1.66), hexanoic acid, and δ-decalactone (OAV 9.70-13.06). These were regarded as contributors to the sample aroma based on the principle of OAVs. Among these seven compounds, 2,3-butanedione, butanoic acid, hexanoic acid, and δ-decalactone showed relatively high AI values, in accordance with the results of the GC-O-MS analysis. Though δ-decalactone, with a typical creamy aroma, was present at a very low concentration (24.24-32.66 μg/kg), it was regarded as an influential aroma contributor in the fermented milk samples because of its extremely low odor threshold (0.0025 mg/kg). The results of GC-O-MS showed that the AI values of acetaldehyde were low in both selected fermented milk samples, while its OAV was the highest, which might be due to the different media used for the GC-O-MS analysis and the quantitative analysis. In the GC-O-MS analysis, AI values were determined by the panelists based on the threshold of acetaldehyde in air (threshold = 1 mg/m³), which is different from its threshold in milk (threshold = 0.062 mg/kg) (van Gemert, 2011). The OAVs of 2-pentanone, 3-hydroxy-2-butanone, 2-nonanone, benzaldehyde, and octanoic acid were <1, which meant that these five aroma-active compounds play an auxiliary role in the overall aroma of fermented milk. At the same time, these five aroma-active compounds were perceived by GC-O with low AI values, indicating a high consistency between the GC-O-MS and quantitative analyses. Thus, seven aroma-active compounds were provisionally regarded as key aroma compounds of the selected fermented milk samples.

Aroma recombination
Though GC-O-MS and quantitative analysis were used to isolate and identify the aroma components from the food matrix, they could not reflect the real interactions among these aroma-active compounds. Therefore, aroma recombination was performed to verify the qualitative and quantitative results for the aroma-active compounds and to confirm the contribution of these compounds to the overall aroma. The results of the aroma recombination analysis are shown in Fig. 3(a-b). All of the aroma-active compounds were prepared according to their concentrations in the original samples. The aromas of the recombination models of L4-1-2 and L6-11 showed good similarity to the original samples L4-1-2 and L6-11, respectively.
However, there were still slight differences in some aroma intensities, such as the fermented, creamy, and cheesy attributes. In the aroma recombinant of L4-1-2 (Fig. 3a), the scores of the fermented and cheesy attributes were significantly lower than those of L4-1-2 (p < 0.05). Meanwhile, the scores of the fermented, creamy, and cheesy attributes in the aroma recombinant of L6-11 (Fig. 3b) were significantly lower than those of L6-11 (p < 0.05). These differences were possibly caused by variable aroma release from different matrices. There were some gaps between the aroma-blank matrix and the matrix of real fermented milk, which could influence the release of the aroma compounds. Besides, it has been reported that headspace solid-phase microextraction shows poor extractability for polar and semi-volatile compounds, which could also cause these differences.

Aroma omission test
To further verify the contributions of the 12 aroma-active compounds to the aroma profiles of fermented milk, omission tests were conducted using 15 models in which single compounds or groups of compounds were omitted. The recombination model of L6-11 was used as the reference, and the omission models were evaluated by triangle tests. The statistical results are shown in Table 3. The omission of all ketones (model 1) gave a very highly significantly different aroma from that of the recombination model (p < 0.001), which indicated that ketones play essential roles in the overall aroma of fermented milk. The model omitting only 2,3-butanedione presented a highly significant difference (model 1-2, p < 0.01). The models omitting only 2-heptanone (model 1-3), 3-hydroxy-2-butanone (model 1-4), and 2-nonanone (model 1-5) showed no significant difference compared with the recombination model of L6-11, which suggested that, among the five ketones, only 2,3-butanedione, responsible for the creamy aroma, plays an indispensable role in the overall aroma of fermented milk. Besides, nine out of ten panelists recognized model 4, omitting δ-decalactone, which suggested that δ-decalactone was another main contributor to the overall flavor of fermented milk. Meanwhile, δ-decalactone can confer a coconut or creamy flavor. Nine panelists recognized model 2, lacking all aldehydes, and model 2-1, omitting acetaldehyde. No panelist recognized the model omitting benzaldehyde, which indicated that only acetaldehyde plays an indispensable role in the overall aroma of fermented milk. Acetaldehyde can give a rich milky aroma, which suggested that acetaldehyde might be the main contributor to the milky attribute of the fermented milk samples.

Table 2. The aroma intensities of aroma-active compounds in the two fermented milk samples obtained by GC-O-MS analysis and their concentrations and OAV values. Note: a, the aroma intensity was obtained by GC-O-MS, and aroma intensities are reported as mean ± standard deviation (SD); b, concentrations are reported as mean ± SD, and values with different letters (a to d) in a row are significantly different by Duncan's multiple comparison tests (p < 0.05).

Furthermore, model 3, which lacked all acids, and model 3-3 showed very highly significant differences (p < 0.001), and 8 and 9 panelists recognized model 3-1, lacking acetic acid, and model 3-2, lacking butanoic acid, respectively. Moreover, there was no significant difference between model 3-4, lacking octanoic acid, and the recombination model of L6-11.
Therefore, acetic acid, butanoic acid, and hexanoic acid were important aroma-active compounds. It is worth mentioning that acetic acid and butanoic acid did not have high OAVs (OAV < 2), which might be because the odor thresholds currently reported for these two compounds have not been fully established. Considering the results of the aroma recombination and aroma omission tests, it could be concluded that acetaldehyde, 2,3-butanedione, acetic acid, butanoic acid, hexanoic acid, and δ-decalactone were the key aroma compounds of the fermented milk produced by L. bulgaricus. To reflect the differences in the composition of the key aroma-active compounds in the fermented milk samples, the concentrations of the above-mentioned key aroma compounds are presented in the form of a heatmap, shown in Fig. 3c. The six compounds were divided into two clusters based on the dendrogram. Cluster 1 consisted of four aroma-active compounds: 2,3-butanedione, hexanoic acid, acetic acid, and δ-decalactone. 2,3-Butanedione can provide a creamy flavor, and its concentration in L6-11 was significantly higher than that in L4-1-2. The creamy score of L6-11 was also higher than that of L4-1-2, which suggested that this compound might be the key aroma-active compound for the creamy attribute of fermented milk. It is worth noting that the odor threshold of 3-hydroxy-2-butanone (8 mg/kg) is much higher than that of 2,3-butanedione (0.05 mg/kg), and 2,3-butanedione can be irreversibly reduced to 3-hydroxy-2-butanone by diacetyl reductase (Smid & Kleerebezem, 2014), thereby reducing the contribution of 2,3-butanedione to the creamy aroma of fermented milk. Therefore, the activity of diacetyl reductase should be considered when screening strains with a prominent creamy flavor. δ-Decalactone has been shown to contribute significantly to a creamy odor. Similar to 2,3-butanedione, the concentration of δ-decalactone in L6-11 was significantly higher than that in L4-1-2, suggesting that δ-decalactone might also be one of the main contributors to the creamy attribute of the fermented milk samples. Besides, hexanoic acid and acetic acid can contribute a rich sour smell. It could be concluded that acetic acid and hexanoic acid might be the main contributors to the fermented attribute. Cluster 2 contained two compounds, acetaldehyde and butanoic acid. Butanoic acid can offer a cheese-like flavor, and its concentration in L4-1-2 was significantly higher than that in L6-11, which suggested that butanoic acid might be the main contributor to the cheesy attribute.

Effects of the addition of key aroma compounds on aroma attributes
To further explore and verify the effects of the above six aroma-active compounds on the aroma attributes of fermented milk, aroma addition experiments were performed, and the results are shown in Fig. 3. The addition of acetaldehyde improved the intensity of the milky attribute significantly (p < 0.05), which indicated that acetaldehyde was an important contributor to the milky attribute. When 2,3-butanedione (Fig. 3e) and δ-decalactone (Fig. 3i) were added to the AR, the creamy intensities showed significant improvements (p < 0.05), indicating that these two compounds contributed to the creamy attribute. Meanwhile, the addition of δ-decalactone also significantly improved the intensity of the milky attribute, suggesting that acetaldehyde and δ-decalactone might have some important synergistic olfactory effects.
When acetic acid was added to the AR, the intensity of the fermented attribute increased from 4.9 to 5.33; still, there was no significant difference (p > 0.05), which indicated that acetic acid was not the main contributor to the fermented aroma. The addition of hexanoic acid not only improved the intensity of the fermented aroma significantly but also enhanced the cheesy aroma, which indicated that hexanoic acid was the main contributor to the fermented attribute and that some synergistic olfactory effects may also exist between hexanoic acid and butanoic acid. Besides, the addition of 2,3-butanedione (Fig. 3d) and hexanoic acid (Fig. 3h) reduced the average aroma intensities of the fermented attribute (from 4.9 to 4.5), the milky attribute (from 0.92 to 0.5), and the creamy attribute (from 1.83 to 1.19) to some extent, which suggested that there are some masking or inhibitory effects among aroma compounds. In this study, the aroma types of the fermented milk samples were based on the most prominent aroma attribute of each sample. The cheesy score of L4-1-2, assigned to the cheesy-type, was higher than those of the other four aroma attributes, and butanoic acid was the main contributor to the cheesy attribute. Therefore, it could be concluded that butanoic acid was the decisive aroma compound of the cheesy-type. Similarly, hexanoic acid and acetic acid were the decisive aroma compounds of the fermented-type.

Conclusions
The aroma attributes of the fermented milk samples produced by 28 L. bulgaricus strains were evaluated, and four aroma types were obtained: milky-type, cheesy-type, fermented-type and miscellaneous-type. A total of 95 volatile compounds in the cheesy-type and fermented-type samples were identified by GC-IMS and GC-MS, and 12 aroma-active compounds were selected by GC-O-MS. Finally, six aroma-active compounds were determined as the key ones: 2,3-butanedione, δ-decalactone, acetaldehyde, butanoic acid, acetic acid, and hexanoic acid. Butanoic acid was the decisive aroma compound for the cheesy-type, and hexanoic acid was the decisive aroma compound for the fermented-type. In the future, these compounds could be used as indicators to screen strains with cheesy-type or fermented-type characteristics.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Table 3. Omission experiments from the recombination model.

Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.fochx.2022.100385. |
The lipoidal derivatives of steroids as biosynthetic intermediates. The lipoidal derivatives of pregnenolone, prepared biosynthetically, were converted, by incubation with a mitochondrial-microsomal fraction from adrenal cortical tissue, into lipoidal derivatives of 17-hydroxy-pregnenolone and of dehydroisoandrosterone, thus proving that these pregnenolone derivatives can serve as substrates for 17-hydroxylase and for the lyase enzyme that converts a C21-17-hydroxy-20-ketol into a C19-17-ketosteroid. Three synthetically prepared esters of pregnenolone, the oleate, the linoleate, and the arachidonate, were also hydroxylated at C-17 by a similar adrenal preparation. With the synthetic substrates, however, the corresponding esters of dehydroisoandrosterone were not formed. |
NEW YORK (CNNMoney) -- New fees and poor customer service have sparked an exodus among big bank customers, many of whom switched to smaller institutions last year.
The defection rate for large, regional and midsize banks averaged between 10% and 11.3% of customers last year, according to a J.D. Power and Associates' survey of more than 5,000 customers who shopped for a new bank or account over the past 12 months. In 2010, the average defection rates ranged from 7.4% to 9.8%.
Meanwhile, small banks and credit unions lost only 0.9% of their customers on average last year -- a significant decline from the 8.8% defection rate they saw in 2010.
These smaller institutions were also able to attract many of the customers who left the big banks. Over the course of last year, 10.3% of customers who shopped for a new bank landed at these smaller institutions -- up from 8.1% in the prior year.
New and higher bank fees at the nation's biggest banks led many customers to switch to smaller institutions over the past year, with about a third of customers at big banks reporting fees as the reason for looking elsewhere.
"When banks announce the implementation of new fees, public reaction can be quite volatile and result in customers voting with their feet," said Michael Beird, director of the banking services practice at J.D. Power and Associates.
Checking account fees have been on the rise at the nation's biggest banks over the past year, and customer revolt against big banks really began to mount after Bank of America (BAC, Fortune 500) proposed a monthly fee for debit card use last fall.
Even though the bank later backtracked on its decision, the announcement led to a nationwide, social media-fueled "Bank Transfer Day", during which customers encouraged each other to dump their big banks for community banks and credit unions.
The report also found that many customers were already unhappy with the customer service at big banks, so when fees were announced or raised, there was even more of an incentive to switch institutions.
"Service experiences that fall below customer expectations are a powerful influencer that primes customers for switching once a subsequent event gives them a final reason to defect," said Beird.
More than half of all customers who said fees were the main reason for switching banks also said they had received poor customer service at their prior bank, he said. |
After Kavanaugh's nomination, what do we do now?
Last Saturday, the United States Senate voted to confirm Judge Brett Kavanaugh to the U.S. Supreme Court, 50 to 48. As the lengthy, often frustrating confirmation process demonstrated, a majority of the Senate learned no lessons from Anita Hill’s testimony in 1991.
Along primarily partisan lines, the Senate voted to confirm a man accused of sexual assault by at least three women. The Senate chose to turn their backs on us, women and survivors of sexual assault and abuse. The shameful scam of a confirmation process showed Judge Kavanaugh’s political agenda and a temperament not suitable for the highest court in our land. Many different groups stood in opposition to his appointment.
It is unconscionable that the Senate confirmed his nomination with so many examples of his unsuitability as a justice and lack of integrity.
It is clear that for conservative and anti-choice members of the Senate, Kavanaugh’s long history of opposition to reproductive rights and access to abortion trumped all other possible considerations. Make no mistake, this was an entirely deliberate action. There are many conservative judges who have clear anti-choice judicial philosophies that would have been equally unpalatable to reproductive rights advocates, but who have not been accused of sexual assault. Kavanaugh’s confirmation was a punitive move to reinforce existing cultural norms that suppress the reporting of sexual assault and punish women who question such norms.
With all of this recent history, Kavanaugh will take his seat on the U. S. Supreme Court. Barring anything leading to the long and difficult process of removing a Supreme Court Justice or his decision to step down, Kavanaugh will be a member of the court for the immediate future. He could potentially sit on the court for decades. So, what do we do now?
I call on Justice Kavanaugh as well as the other sitting justices to respect not only the rule of law that is the foundation of our government but also existing federal law regarding abortion. When the right to privacy was recognized in the Roe v. Wade decision, it changed the lives of all women in the United States by making it unconstitutional for states to ban abortion.
Since 1973, people who can get pregnant have not been held hostage by their reproductive capacity. While states have been working to restrict access to abortion since 1973 — particularly after the 1992 decision in Planned Parenthood v. Casey that said that states had a legitimate legal interest in regulating abortion — states have been prohibited from outright criminalizing abortion.
But restrictions are creating less access across the country and that is hurting all of us. There are currently several cases working their way through the appellate court system that the Supreme Court could decide to hear, and any of these cases could imperil access to abortion in much of the United States.
Four states have laws on the books that would immediately ban abortion in their state if Roe was overturned by the court. Nine states have laws that banned abortion before Roe that could be enforced if it is overturned. Seven states have laws that would put access to abortion in jeopardy without Roe. The anti-choice movement has been pushing for the passage of increasingly extreme legislation since Casey in an attempt not only to make abortion as difficult to obtain as possible, but also to make sure that there would be cases that could be appealed to an anti-choice stacked Supreme Court. Nine states have laws that protect access to reproductive health care. In these states, abortion will remain accessible to women even if Roe is overturned.
Women who live in states without access to abortion will be able to receive care in those states, if they can afford to travel, that is. Women of color are already more harshly affected by restrictions on abortion because they earn less than white women. For low-income women, abortion can already be unobtainable in certain locations. Don’t be fooled: we are much further down the path to criminalizing abortion, and that path was built over the backs and bodies of women of color and low-income women. Lack of abortion access has caused so much destruction already; we cannot go back.
Our Supreme Court justices must recognize what is at stake if they do not act to protect our existing laws regarding abortion. Before 1973, women died without access to safe abortion services. Current abortion restrictions have put women in dangerous situations because, while abortion was still legal, it was inaccessible to them. The death of Rosie Jimenez in Texas, after the passage of the Hyde Amendment prohibited her from using her Medicaid coverage to access a safe abortion, is a stark reminder.
The stories of doctors who worked in “septic wards,” wards for women who were suffering from septicemia or blood infections from unsafe abortions, are another. In less dramatic circumstances are women who can’t provide for their existing families or who do not want to be pregnant. If abortion is illegal in much of the country, it will be effectively banned for all women without the resources to access it. Legal rights are meaningless without access, but lack of access can also nullify legal rights.
Julie A. Burkhart is the founder and CEO of Trust Women. Trust Women opens clinics that provide abortion care in underserved communities so that all women can make their own decisions about their healthcare. Follow her on Twitter @julieburkhart. |
Increasing Access to Surgical Services for the Poor in Rural Haiti: Surgery as a Public Good for Public Health Although surgical care has not been seen as a priority in the international public health community, surgical disease constitutes a significant portion of the global burden of disease and must urgently be addressed. The experience of the nongovernmental organizations Partners In Health (PIH) and Zanmi Lasante (ZL) in Haiti demonstrates the potential for success of a surgical program in a rural, resource-poor area when services are provided through the public sector, integrated with primary health care services, and provided free of charge to patients who cannot pay. Providing surgical care in resource-constrained settings is an issue of global health equity and must be featured in national and international discussions on the improvement of global health. There are numerous training, funding, and programmatic considerations, several of which are raised by considering the data from Haiti presented here. Introduction: a coordinated and robust response to the global burden of surgical disease The burden of surgical disease in the developing world, reflecting a multitude of infectious, noninfectious, and injury-related pathologies, has received inadequate attention to date. Among the world's poorest, a single procedure, amputation, is frequently the definitive therapy for a variety of unrelated processes such as trauma, infection, or gangrene; many die without any care at all. It is increasingly evident that access to surgical services must be rapidly scaled up, in tandem with the development of primary health care infrastructure and programs for maternal and child health and for infectious diseases, to reduce the global burden of disease. Surgical services in Haiti Haiti is the poorest country in the Western Hemisphere and is faced, not coincidentally, with the worst health and human development statistics in the region. Inadequate infrastructure and ongoing political instability contribute to high maternal and infant mortality and an average life expectancy of just 53 years. There are three settings in which surgical care is delivered in Haiti. Private hospitals serve major metropolitan areas but have fees that are beyond the reach of the majority of the population. Public hospitals provide surgical care for a smaller fee, but supplies (sutures, medications, intravenous fluids, blood, and even gloves) must be purchased (frequently off-site, at private pharmacies) by the patient before any procedures take place. Charitable organizations are the third category of surgical care providers, but most of these institutions also charge fees for their services. The parallel existence of these three sources of surgical care is typical of most poor countries and, because of attendant fees, does not allow for adequate access to surgical care for the destitute sick, often the very people most in need of services. Among the multitude of nongovernmental organizations operating in rural Haiti, to our knowledge only one, Zanmi Lasante (ZL), provides a permanent setting in which a wide variety of surgical diseases can be addressed free of charge to the patient.
This article describes Zanmi Lasante's experience scaling up surgical services over a 20-year period in a rural, isolated, and resource-poor setting. Zanmi Lasante: the birth of a "pro-poor" hospital and expansion in the public sector The Boston-based nonprofit organization Partners In Health (PIH) was founded in 1987 to support community health care efforts launched by ZL in a squatter settlement in central Haiti in 1983. The hillside site of Cange, originally in the midst of a community displaced by the flooding of a fertile valley for the construction of a hydroelectric dam, is about three hours from the capital city, Port-au-Prince, on a largely unpaved and sometimes impassable road. Since its inception, the medical facility at Cange has grown from a small community clinic to a large sociomedical complex registering more than 250,000 visits annually. Today, l'Hôpital Bon Sauveur in Cange is a 104-bed, full-service hospital with two operating rooms; adult, pediatric, and surgical inpatient wards; an outpatient clinic; an infectious disease unit, which includes the national referral center for the treatment of multidrug-resistant tuberculosis (MDR TB); subspecialty services, including women's health, dental, and ophthalmology clinics; laboratories; pharmacies; a blood bank; radiology services; and a broad range of socioeconomic initiatives. The hospital is linked to a network of hospitals and health centers serving the population of Haiti's Central and Artibonite Departments, with a combined population of about 1.2 million people. A critical component of ZL's growth in this challenging setting is the recruitment and training of community health workers, called accompagnateurs. Today, 2,000 paid accompagnateurs serve as the vital link between ZL's health facilities and patients dispersed across the rural countryside. Their role in the PIH/ZL model of care, particularly for patients living with HIV or tuberculosis, has been discussed in detail in previous publications. A second element of ZL's growth in Haiti is its engagement with the public health sector to expand access to services. Since 2001, ZL has collaborated with the Haitian Ministry of Health to revitalize and reinforce eight public hospitals and clinics in central Haiti, from rebuilding hospitals to installing generators for electricity, to hiring and training staff, to establishing pharmaceutical supply chains. The explicit goal of this public-private partnership was to leverage funds for AIDS and tuberculosis programs, which became available in 2002, to reinforce comprehensive primary health care and to strengthen health systems in general. Users' fees, even within the public sector, remained a barrier for the poorest patients until it was agreed that HIV prevention and care, women's health, TB diagnosis and care, and diagnosis and treatment of all sexually transmitted infections would be considered "public goods for public health," provided free of charge to the patient. A major consideration for this partnership was the nearly complete absence of surgical services in central Haiti. Could surgical services be considered public goods for public health? Surgical services at l'Hôpital Bon Sauveur Although minor procedures, from draining abscesses to tubal ligations, were done intermittently ever since clinical services were first offered in Cange, it was not until 1993 that an operating room was built.
With the arrival of a full-time obstetrician-gynecologist, over the next several years the OR was used regularly to address obstetrical emergencies. Short-term medical missions from the United States became increasingly surgical over time, but there were neither general surgeons nor anesthesiologists on staff. As the number of patients seeking care for the major morbidities of TB, HIV, diarrheal disease, and chronic noncommunicable diseases grew, more and more patients with surgical disease also came to l'Hôpital Bon Sauveur seeking care. By the late 1990s, there was increased recognition of the need to treat a variety of routine, urgent, and emergent surgical diseases. To meet these needs, ZL built a second OR and a 15-bed surgical ward, and between 2000 and 2002 hired a general surgeon as well as two additional obstetrician-gynecologists. Through an agreement with the Cuban Ministry of Health, another general surgeon came to the hospital in 2002. An ophthalmologist was hired in 2003 and, with the help of the Haitian Red Cross, a proper blood bank was installed. By 2005, ZL's public-private partnerships at small regional hospitals and clinics, along with hundreds of attendant community health workers, served as a network referring patients from across the entire Central Plateau to l'Hôpital Bon Sauveur for evaluation and surgery. Staff at all of the expansion sites are trained in basic surgical resuscitation and stabilization, and referrals are facilitated by Zanmi Lasante/Ministry of Health-owned ambulances in emergency cases or through the provision of transportation stipends to patients with nonurgent problems. Operating rooms at two of the largest expansion sites, Hinche and Belladères, have recently been refurbished and staffed with a full-time obstetrician and nurse anesthetist (NA) for emergency obstetrical care. A Médecins Sans Frontières (MSF)-led NA training program began at l'Hôpital Bon Sauveur in 2006; six nurses from ZL and other parts of Haiti are currently enrolled. To better understand the volume and variety of surgical procedures at l'Hôpital Bon Sauveur, we undertook a retrospective review of the surgical cases performed between January 2002 and September 2005. We reviewed the operating room logs maintained by the surgical and anesthesia staff for patient age, place of residence, date of procedure, type of procedure, and the specialty involved. Distance traveled to the hospital was estimated as the crow flies based on patients' place of residence. Table 1 displays the characteristics of surgical patients served at l'Hôpital Bon Sauveur between January 2002 and September 2005. A total of 2,601 patients underwent 2,900 procedures, ranging from simple incision and drainage to complicated cardiothoracic surgeries. Two notable findings confirmed our experiential understanding of the l'Hôpital Bon Sauveur surgical program. First, the total number of patients seeking services has increased significantly, from 241 patients in 2002 to 762 during the first nine months of 2005. Second, the geographic distribution of patients has expanded: in 2002, more than 80% of patients lived within 50 km of the hospital. By 2005, however, more than half the patients traveled over 50 km to Cange for care, a significant journey to a rural, remote area of the country.
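The as-the-crow-flies distances and year-on-year case counts described above can be reproduced from an operating-room log with a few lines of analysis. The sketch below is purely illustrative: the reference coordinates, record layout, and sample entries are hypothetical stand-ins, not data from the ZL registry.

```python
import math
from collections import Counter

# Approximate reference coordinates for Cange (hypothetical values, for illustration only).
CANGE = (18.92, -72.02)

def crow_flies_km(lat1, lon1, lat2, lon2):
    """Great-circle ('as the crow flies') distance in kilometres, via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical OR-log records: (year, place of residence, latitude, longitude).
log = [
    (2002, "Mirebalais", 18.83, -72.10),
    (2003, "Hinche", 19.15, -72.01),
    (2005, "Port-au-Prince", 18.54, -72.34),
]

cases_per_year = Counter(year for year, *_ in log)
over_50km = sum(1 for _, _, lat, lon in log if crow_flies_km(*CANGE, lat, lon) > 50)
print(cases_per_year, f"- {over_50km} patients travelled more than 50 km")
```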
This expanding catchment area points not only to the significance of PIH/ZL's accompagnateur model, which allows patients who might otherwise not seek care to be identified and referred from their communities and later followed up in their homes as necessary, but also to the importance of providing free care to those who need it. By 2005 a third of all patients undergoing surgery at l'Hôpital Bon Sauveur reported their place of residence to be the capital city of Port-au-Prince, the site of the largest public hospital in the country as well as numerous private surgical practices, all of which require fees for services. The widening geographic distribution of the ZL patient base over time presents convincing evidence of the surgical service's importance not just to the Central Plateau region, where the hospital is located, but to the country as a whole, as cost and/or quality of care are prohibitive factors even when other facilities are available. A wide variety of surgical pathology was addressed at l'Hôpital Bon Sauveur, with common procedures dominating the census (Table 2). General surgery accounted for almost half (47.6%) of all cases, obstetrical and gynecological cases a third (32.6%) of all cases, and urologic and other specialty cases the remainder. Over the years, PIH and ZL have drawn on both the public health and human rights paradigms to help build and keep afloat a surgical program in a setting in which fee-for-service models are doomed to fail. Outside assessments and some members of the ZL staff suggested that certain surgical services could be provided on a sliding payment scale, but even a cursory study of this approach led us to conclude that triaging patients' ability to pay would have required significant resources and yielded little. It would have cost more to perform such means tests than could be recovered through patient fees. For the destitute sick, the great majority of our patients, the only hope for access to such services came from operationalizing the notion of health care as a human right and maintaining free access to surgical services as we do for medical services. Efforts to keep afloat ZL's rights-based surgical program proved taxing; by 2003, ZL was seeing patients from around the country, the costs of the program continued to rise, and no foundations or international funding mechanisms existed to support the provision of surgical services, even emergency obstetrics, to the very poorest. The connection between women's health and surgical care may well help us to move forward the general surgical agenda in resource-constrained settings, as we argue in an accompanying editorial. Because there is a strong focus on women's health in both primary care and HIV services at Zanmi Lasante, it is no surprise that women also constitute the majority (55%) of surgical patients in this review. Although there is to date little enthusiasm for the notion of a right to surgical care, there is a growing movement to promote safe childbirth as a right, a possible entry point for promoting a broader agenda for surgical care in resource-poor settings. Many challenges remain for ZL to be able to fully address the burden of surgical disease in the Central Plateau and beyond. These challenges are shared in other similarly poor and rural areas worldwide: inadequate basic infrastructure, lack of trained health-care workers, and lack of large-scale, sustained funding for surgical services. This last issue, the question of consistent funding sources, is one of utmost importance.
The logistics and capital costs of setting up a surgical theater are not trivial. Appropriate space, blood-banking, well-trained surgical and postoperative staff, proper tools and supplies, stable electricity, and accessible telecommunications are all essential for high-quality care. The World Bank, among others, has suggested that surgical services incorporated into hospital-level care may be as cost-effective as other large-scale public health initiatives. However, large public health initiatives and funders often lack the flexibility and long-term mandate to support the construction of surgical theaters or the regular procurement of consumables, let alone the provision of free care to patients who are unable to afford user fees. Staffing is a second major challenge. The World Health Organization (WHO) estimates that more than four million workers are needed worldwide to meet health-care needs, with sub-Saharan Africa and parts of Asia the most understaffed. The dearth of health personnel is even more striking in specialty fields. Shortages in surgical staffing include anesthesiologists, specialist and general surgeons, and nurses trained in postoperative or intensive care. The "brain drain" of workers from low-income to higher-income settings contributes to staffing gaps. The AIDS epidemic and its attendant psychological burdens have demoralized health professionals and even further reduced their numbers. Alongside an absolute shortage of health-care workers, the distribution of existing personnel between capital cities and rural areas is severely imbalanced, and encouraging health personnel to live and work in rural areas can be difficult. In addition to providing sufficient remuneration, ongoing training, and career-development opportunities, ZL has succeeded in retaining surgical staff in part because of the powerful motivator of providing health-care workers with the tools they need to perform their trade, namely, medicines, supplies, and adequate facilities. Targeted, local training efforts can also help overcome staffing shortages. At l'Hôpital Bon Sauveur, the chief of anesthesia is a nurse anesthetist from the area who was one of 19 NAs trained by Médecins Sans Frontières in Haiti between 1998 and 2002. Unfortunately, very few of these graduates are working in the public sector in Haiti; many are not working as NAs at all. In some cases, visiting staff can effectively supplement in-house capacity. Well-organized, short-term surgical missions can address complicated surgical issues if high-quality follow-up care and services are available after the visiting surgeons leave. Developing residency training programs for U.S. surgeons based in resource-poor settings is one way to accomplish this. We have done this for medical residency; now we need to do it for surgery. In extreme cases, some surgical patients with conditions that cannot be addressed in situ in Haiti have required intervention in the United States. PIH's Right to Health Care program serves patients with life-threatening medical and surgical needs that cannot be met at l'Hôpital Bon Sauveur or other hospitals in Haiti. Recent interventions have included surgery for repair of tetralogy of Fallot and rheumatic mitral valve disease, total hip replacement, neurosurgery for excision of meningioma, nephrectomy, and plastic surgery management of severe burns. Conclusion Surgical disease has been neglected as a serious public health problem despite its major contribution to death and disability worldwide.
Half a million women die of pregnancy-related causes every year; leading causes such as hemorrhage and obstructed labor have effective surgical interventions that are simply not available to those who need them most. Annually, more than one million people lose their lives to road traffic accidents, and 875,000 children and adolescents die as a result of injuries such as burns and falls; these pathologies all have surgical therapies that can save lives or prevent disability. Recently there has been increasing acknowledgment of the importance of surgical care in public health. In 2005, the WHO established the Global Initiative for Emergency and Essential Surgical Care (GIEESC) with the aim of improving collaborations among organizations and agencies involved in reducing morbidity and mortality from surgical conditions. Further discussions have appeared recently in both medical and surgical journals, and the second edition of the World Bank's influential Disease Control Priorities in Developing Countries included a chapter on surgery. The Bellagio Conference in June 2007 brought together leaders in surgery, anesthesia, obstetrics, health policy, and health economics from Africa, Europe, and the United States, who collectively concluded that a significant proportion of the global burden of disease is surgical, that with investments in infrastructure and training a majority of surgical disease can be treated or prevented at first-level referral centers, and that the integration of surgical services with primary health care services will be essential for prevention and referral efforts. Zanmi Lasante's experience providing surgical services in impoverished rural Haiti suggests that these recommendations are urgent and actionable. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. |
The Centre on Wednesday extended the Armed Forces (Special Powers) Act to three districts in Arunachal Pradesh, declaring them ‘disturbed areas’. The Ministry of Home Affairs (MHA) notified that the central government, in exercise of the powers conferred by Section 3 of the Armed Forces (Special Powers) Act, 1958 (28 of 1958), had declared the Tirap, Changlang and Longding districts of Arunachal Pradesh and the area falling within the jurisdiction of eight police stations in the districts of Arunachal Pradesh bordering Assam as “disturbed” areas.
“Now, therefore, Tirap, Changlang and Longding districts in Arunachal Pradesh and the areas falling within the jurisdiction of the following eight police stations in the districts of Arunachal Pradesh, bordering the State of Assam, are declared as ‘disturbed area’ under Section 3 of the Armed Forces (Special Powers) Act, 1958 up to March 31, 2019 w.e.f. October 1, 2018, unless withdrawn earlier,” the notification further said.
The police stations are Balemu and Bhalukpong stations in West Kameng district, Seijosa police station in East Kameng district, Balijan police station in Papumpare district, Namsai and Mahadevpur police stations in Namsai district, Roing police station in Lower Dibang Valley district and Sunpura police station in Lohit district.
On August 30, Assam Governor Jagdish Mukhi extended the AFSPA in the state for the next six months. The AFSPA grants special powers to the Indian Armed Forces in ‘disturbed areas’. According to The Disturbed Areas (Special Courts) Act, 1976, once an area is declared ‘disturbed’, that status has to be maintained for a minimum of three months. |
Multi Objective Optimization of Parameters of Torsional Vibration Dampers Considering Damping Effect and Light Weight Design To reduce the torsional vibration of the vehicle power transmission system (VPTS), a torsional vibration model of the VPTS with multiple degrees of freedom (MDOF) was established. A scheme of mounting torsional vibration dampers (TVDs) on the drive shaft was adopted on the basis of calculations of the forced and free vibration of the VPTS. The energy method was used to optimize the parameters of single-stage, two-stage parallel, and two-stage series TVDs according to a principle that balances the damping effect against lightweight design. On this basis, the parameters of models incorporating both TVDs and elastic couplings (ECs) were optimized. Results showed that the proposed method can ensure the damping effect of the TVD while realizing a lightweight design.
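The balance between damping effect and lightweight design described in this abstract can be illustrated with a toy weighted-sum optimization. The sketch below is not the authors' model: it assumes a single shaft mode with one tuned torsional damper, hypothetical inertia and stiffness values, and a coarse grid search over the damper parameters.

```python
import numpy as np

def peak_amplification(J1, k1, J2, k2, c2, omegas):
    """Peak dynamic amplification of the primary inertia under unit harmonic torque."""
    worst = 0.0
    for w in omegas:
        A = np.array([[-J1 * w**2 + 1j * w * c2 + k1 + k2, -(1j * w * c2 + k2)],
                      [-(1j * w * c2 + k2), -J2 * w**2 + 1j * w * c2 + k2]])
        theta = np.linalg.solve(A, np.array([1.0, 0.0]))
        worst = max(worst, abs(theta[0]) * k1)   # normalised by the static twist 1/k1
    return worst

J1, k1 = 0.5, 5.0e4                      # hypothetical shaft inertia (kg m^2) and stiffness (N m/rad)
omegas = np.linspace(100.0, 600.0, 500)  # rad/s sweep around the shaft mode (~316 rad/s)
w_vib, w_mass = 1.0, 20.0                # weights trading damping effect against added inertia

best = None
for mu in (0.02, 0.05, 0.10, 0.15):          # damper-to-shaft inertia ratio (lightweight criterion)
    for tune in (0.85, 0.90, 0.95, 1.00):    # damper tuning ratio relative to the shaft mode
        for zeta in (0.05, 0.10, 0.20):      # damper damping ratio
            J2 = mu * J1
            wn = np.sqrt(k1 / J1)
            k2 = J2 * (tune * wn) ** 2
            c2 = 2.0 * zeta * np.sqrt(k2 * J2)
            cost = w_vib * peak_amplification(J1, k1, J2, k2, c2, omegas) + w_mass * mu
            if best is None or cost < best[0]:
                best = (round(cost, 2), mu, tune, zeta)
print("best (cost, inertia ratio, tuning ratio, damping ratio):", best)
```

In a full treatment the cost would also cover multiple TVD stages and the elastic couplings, and the weights would be normalised to the problem at hand, but the trade-off between vibration amplitude and added inertia is the same idea. |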
Spenser's "Amyntas": Three Poems by Ferdinando Stanley, Lord Strange, Fifth Earl of Derby Spenser devoted twelve lines of Colin Clouts Come Home Againe to a lament for "Amyntas": "... the noblest swaine / That ever piped in an oaten quill: / Both did he other which could pipe maintaine, / And eke could pipe himselfe with passing skill" (lines 440-43). Modern scholars have consistently identified Amyntas with Ferdinando Stanley, Lord Strange from 1572 to September 25, 1593 and Earl of Derby from the latter date until his death, April 16, 1594.1 Stanley's artistic patronage is best seen in his maintenance, from at least 1576 until his death, of a company of actors with which Shakespeare may have been associated.2 His contemporary reputation as a poet is supported beyond Spenser's claim by the appearance of his name in the list of "noble personages" whose works are supposedly represented in Belvedere, or, The Garden of the Muses (1600, sig. A5); here Stanley is listed with such better-known Elizabethan court poets as Oxford, Ralegh, and Dyer. To date, however, our only tangible proof that Stanley ever "piped in an oaten quill" has been the poem ascribed to him in a manuscript owned by Sir John Hawkins, who submitted the text to The Antiquarian Repertory (3:133-38).3 On the merits of this verse, Alexander C. Judson concluded that Stanley indeed "was a poet (though no poem of consequence by him is known today)."4 In what follows, I hope to bulwark Spenser's praise of Stanley by showing, first, that the Hawkins text is more consequential than Judson believed and, second, that it does not fully represent Stanley's muse, to judge from two heretofore unpublished poems, attributed to him in Cambridge MS Dd.5.75, fol. 32v and Bodleian MS Rawlinson Poetry 85, fols. 76v-77r. The title of the Hawkins text, "A SONNETT by FERDINANDO EARLE of DERBY," indicates a date of transcription after September 25, 1593, when Stanley succeeded to his father's title.5 The poem's aimless narrative and absurd style point, I believe, not to the author's incompetence, but to his conscious burlesque of the unlearned and homely pastoral stance. This burlesque was not aimed at the inverted syntax and archaic diction of Spenser's pastorals but, more probably, at the redundant style of such works as Abraham Fraunce's The Lamentations of Amyntas for the Death of Phillis, or the celebration of rustic amours in Breton's "Sweet Phillis if a sillie Swaine," or "A Sillie Shepheard lately sate," from Brittons Bowre of Delights (1591, sig. F4-4v, G1v-2). Breton's own lyric, beginning "Fair in a morn (o fairest morn / was never morn so fair),"6 parodies these same forms in a manner quite similar to Stanley's in the Hawkins text. Both poems are written primarily in ballad stanza and concern a shepherd's futile love for a shepherdess named Phyllis. Both achieve their effect through the exaggerated use of the same rhetorical devices: 1 See Ronald B. McKerrow, The Works of Thomas Nashe, 4:150-51; Alexander C. Judson, The Life of Edmund Spenser, p. 6; Kelsie Harder, "Nashe's Rebuke of Spenser," Notes and Queries 198:145. 2 John Tucker Murray, English Dramatic Companies (1910; repr. New York, 1963), 1:75; E. K. Chambers, Elizabethan Stage, 2:118-26. 3 Ed. Francis Grose and Thomas Astle, 4 vols. (London, 1775-84), reprinted 1807-9. The Stanley verse was reprinted from the first edition in Thomas Park's enlarged revision of Walpole's A Catalogue of the Royal and Noble Authors of England, Scotland, and Ireland, 5 vols., 2:46-51.
4 Judson, p. 5. 5 I have been unable to trace Hawkins's manuscript, which was probably destroyed in the fire which gutted his house in Westminster, along with his "valuable collection of books and prints," in 1785 (Percy A. Scholes, The Life and Activities of Sir John Hawkins, p. 157). 6 Printed in Englands Helicon, sig. G4-5; manuscript versions in Bodleian MS Rawlinson Poetry 85, fols. 1v-2r, and British Museum Additional MS 34,064, fols. 17v-18r. |
Effect of dual-stage actuator on positioning accuracy in 10 k rpm magnetic disk drives We have developed a piezoelectric micro-actuator for dual-stage actuator systems in magnetic disk drives. This microactuator, which drives the head suspension assembly, is based on the shear deformation of piezoelectric elements. We installed the microactuator in one of Fujitsu's 3.5-inch commercial drives to evaluate the dual-stage actuator servo system. This paper describes the effect of the dual-stage servo system on positioning accuracy compared with a single-stage actuator.
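A common way to reason about a dual-stage servo is that, in a decoupled design, the error-rejection (sensitivity) functions of the coarse and fine loops multiply, so the piezoelectric stage suppresses the residual error left by the voice-coil stage. The sketch below illustrates only that generic idea with assumed first-order loop models, bandwidths and a flat runout spectrum; it is not a model of the drive or controller evaluated in the paper.

```python
import numpy as np

f = np.logspace(1, 4, 500)              # frequency axis, 10 Hz to 10 kHz

def sensitivity(f, bandwidth_hz):
    """Crude first-order error-rejection model: rejection is strong well below the loop bandwidth."""
    return f / np.sqrt(f**2 + bandwidth_hz**2)

S_vcm = sensitivity(f, 800.0)            # coarse voice-coil loop, ~800 Hz bandwidth (assumed)
S_pzt = sensitivity(f, 3000.0)           # piezoelectric micro-actuator loop, ~3 kHz bandwidth (assumed)
S_dual = S_vcm * S_pzt                   # decoupled dual-stage design: sensitivities multiply

runout = np.ones_like(f)                 # flat track-runout spectrum, arbitrary units

def residual(S):
    """Relative RMS position error left after the servo rejects the runout."""
    return np.sqrt(np.mean((S * runout) ** 2))

print(f"single-stage residual error: {residual(S_vcm):.3f}")
print(f"dual-stage residual error  : {residual(S_dual):.3f}")
```

With these assumed numbers the dual-stage loop leaves a noticeably smaller residual, which is the qualitative effect the paper quantifies on a real drive. |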
1. Field of the Invention
The present invention relates to a magnetic sensor system including a magnetic field generation unit for generating a magnetic field to be detected and a magnetic sensor for detecting the magnetic field.
2. Description of the Related Art
In recent years, magnetic sensors have been widely used to detect the rotational position of an object in a variety of applications such as detection of the degree of opening of a throttle valve in automobiles, detection of the rotational position of steering in automobiles, and detection of the rotational position of the wiper of automobiles. Magnetic sensors are used not only to detect the rotational position of an object but also to detect a linear displacement of an object. Systems using magnetic sensors are typically provided with means (for example, a magnet) for generating a magnetic field to be detected whose direction rotates in conjunction with the rotation or linear movement of an object. Hereinafter, the magnetic field to be detected will be referred to as the target magnetic field. The magnetic sensors use magnetic detection elements to detect the angle that the direction of the target magnetic field in a reference position forms with respect to a reference direction. The rotational position or linear displacement of an object is thus detected.
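Angle detection of this kind is commonly implemented with two detection elements (or bridges) whose outputs vary as the sine and cosine of the field angle, the angle then being recovered with a two-argument arctangent. The following is a minimal sketch under that common quadrature assumption, not the specific circuit of any of the cited patents.

```python
import math

def detected_angle_deg(v_sin, v_cos):
    """Recover the field angle (degrees, 0-360) from two quadrature sensor outputs."""
    return math.degrees(math.atan2(v_sin, v_cos)) % 360.0

# Example: a target magnetic field at 30 degrees from the reference direction.
true_angle = 30.0
v_sin = math.sin(math.radians(true_angle))   # idealised output of the 'sine' detection element
v_cos = math.cos(math.radians(true_angle))   # idealised output of the 'cosine' detection element
print(detected_angle_deg(v_sin, v_cos))      # ~30.0
```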
Among known magnetic sensors is one that employs a spin-valve magnetoresistive (MR) element as the magnetic detection element, as disclosed in WO 00/17666, U.S. Pat. No. 7,483,295 B2, U.S. Pat. No. 7,394,248 B1, and U.S. Pat. No. 8,054,067 B2. The spin-valve MR element has a magnetization pinned layer whose magnetization direction is pinned, a free layer whose magnetization direction varies according to the direction of the target magnetic field, and a nonmagnetic layer disposed between the magnetization pinned layer and the free layer.
A magnetic sensor that employs a spin-valve MR element as the magnetic detection element may have an error in a detected angle due to variations in the magnetic properties of the MR element, as described in U.S. Pat. No. 8,054,067 B2. U.S. Pat. No. 8,054,067 B2 discloses a technology for reducing an error in the detected angle caused by manufacturing variations in MR elements. This technology is, so to speak, a technology for reducing an error in the detected angle that will be found at the time of completion of the magnetic sensor as a product.
Errors in the detected angle that could occur in the magnetic sensor include an error that emerges after the installation of the magnetic sensor in addition to an error found at the time of completion of the product as mentioned above. One of the causes by which an error in the detected angle emerges after the installation of the magnetic sensor is an induced magnetic anisotropy that occurs on an a posteriori basis in the free layer of the MR element. Such an induced magnetic anisotropy may occur in the free layer when, for example, the temperature of the MR element is lowered from a high temperature while an external magnetic field is being applied to the MR element in a particular direction. Such a situation may occur when, for example, the magnetic sensor is installed in an automobile and a specific positional relationship is established between the magnetic sensor and means for generating a target magnetic field during non-operation of the automobile. More specifically, the aforementioned situation may occur when the magnetic sensor is used to detect the position of an object that comes to a standstill in a predetermined position during non-operation of the automobile, such as the wiper of an automobile.
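How an induced easy axis pulls the free-layer magnetization away from the field direction, and thereby distorts the detected angle, can be illustrated with a simple single-domain energy model. The sketch below is a generic illustration with assumed field and anisotropy values; it is not a model of any particular sensor discussed in the cited references.

```python
import math

def free_layer_angle_deg(field_dir_deg, h_field, h_k, easy_axis_deg):
    """Free-layer magnetization angle minimising Zeeman plus uniaxial anisotropy energy.

    Normalised energy: E(t) = -h_field*cos(t - field_dir) + 0.5*h_k*sin^2(t - easy_axis).
    A brute-force scan in 0.01-degree steps stands in for an analytic solution.
    """
    phi, ea = math.radians(field_dir_deg), math.radians(easy_axis_deg)
    best_t, best_e = 0.0, float("inf")
    steps = 36000
    for i in range(steps):
        t = 2.0 * math.pi * i / steps
        e = -h_field * math.cos(t - phi) + 0.5 * h_k * math.sin(t - ea) ** 2
        if e < best_e:
            best_t, best_e = t, e
    return math.degrees(best_t)

# Hypothetical numbers: 30 mT rotating field, 1 mT induced anisotropy along 0 degrees.
for field_dir in (0, 45, 90, 135):
    m_dir = free_layer_angle_deg(field_dir, 30.0, 1.0, 0.0)
    error = (m_dir - field_dir + 180.0) % 360.0 - 180.0
    print(f"field at {field_dir:3d} deg -> angle error {error:+.2f} deg")
```

With these assumed values the error vanishes along and perpendicular to the induced easy axis and peaks in between, which is the kind of a posteriori error that, as discussed next, the sensor is required to suppress.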
The magnetic sensor is required to have a reduced error in the detected angle that may emerge due to an induced magnetic anisotropy occurring on an a posteriori basis after the installation. Note that the foregoing descriptions have dealt with the problem that is encountered when an induced magnetic anisotropy occurs on an a posteriori basis in the free layer of a spin-valve MR element after the installation of a magnetic sensor that employs the spin-valve MR element as the magnetic detection element. However, this problem applies to any cases where the magnetic sensor has a magnetic detection element that includes a magnetic layer whose magnetization direction varies according to the direction of the target magnetic field and an induced magnetic anisotropy occurs on an a posteriori basis in the magnetic layer of the magnetic detection element after the installation of the magnetic sensor. |
Preparation and Characterization of FeCo2O4 Nanoparticles: A Robust and Reusable Nanocatalyst for the Synthesis of 3,4-Dihydropyrimidin-2(1H)-thiones and Thiazolopyrimidines The present research describes a mild and efficient method for the synthesis of 3,4-dihydropyrimidine-2(1H)-thiones and thiazolopyrimidines via multi-component reactions using FeCo2O4 nanoparticles. It was found that FeCo2O4 nanoparticles act as a powerful and effective catalyst. The prepared catalyst was characterized by various spectroscopic techniques. The three-component reaction of thiourea, aromatic aldehydes and ethyl acetoacetate was catalyzed by FeCo2O4 nanoparticles. Next, the prepared 3,4-dihydropyrimidin-2(1H)-thiones were applied to the preparation of thiazolopyrimidines via the reactions of 3,4-dihydropyrimidine-2(1H)-thiones, chloroacetic acid, and aromatic aldehydes in the presence of FeCo2O4 nanoparticles. The FeCo2O4 nanoparticles were synthesized by a facile one-step method and the structure of the catalyst was determined using spectral techniques. The prepared nanocatalyst was then used in the synthesis of 3,4-dihydropyrimidin-2(1H)-thiones and thiazolopyrimidines under solvent-free conditions at 80°C. FeCo2O4 nanoparticles, as a magnetic nanocatalyst, were applied in the synthesis of these heterocyclic compounds in excellent yields and short reaction times. The average particle size of the catalyst was found to be 30-40 nm. A study of the reusability of the FeCo2O4 nanoparticles showed that the recovered catalyst could be reused for five consecutive runs. We propose that the FeCo2O4 nanoparticles act as a Lewis acid, increasing the electrophilicity of the carbonyl groups of substrates and intermediates and thereby promoting the reactions. The present research offers various advantages, including excellent yields, short reaction times, a simple workup procedure and recyclability of the FeCo2O4 NPs, for the synthesis of 3,4-dihydropyrimidin-2(1H)-thiones and thiazolopyrimidines. |
YORK, Pa. — Saniya Chong is a must-watch.
When the 5-foot-9 Ossining point guard is on the court, the fans lucky enough to be in attendance get to see one of the nation’s best players – a dynamo in every sense of the word. Off of it, college coaches and fans wait patiently to see where the rising senior will take her talents next season.
Over the weekend she played for Ossining coach Dan Ricci's Hudson River Breeze at the Blue Chip USA Invitational at the Toyota Arena in York, Pa. Chong quickly became the talk of the tournament. She averaged 46.6 points per contest over five games as her team went 4-1 in the Platinum bracket. That included a 53-point performance against the Pittsburgh Rockers in front of UConn head coach Geno Auriemma, a memorable effort in which she also had 10 rebounds and five steals.
“When I tell her one of the coaches is here, she seems to perk up,” Ricci said.
The Bronx native has trimmed her list of suitors down to just four – UConn, Maryland, Ohio State and Louisville – after eliminating North Carolina and Miami. Because of its close proximity, Chong has already taken an unofficial visit to UConn, which she called a “wonderful place.” She plans to begin scheduling official stops in September.
A lot has changed for the reserved Chong in just a year’s time, when her only offers were Virginia Tech and Ohio State, and she was unranked by most scouting services.
There are detractors, of course. There are those who believe Chong is overrated because her Ossining team doesn't play a high-enough-profile schedule. She plays her travel ball with the Westchester Hoopers and the Breeze, not better-known squads like Exodus, the Gauchos or the Philly Belles. Her gaudy stats, the doubters contend, come against teams that can't physically match up with the long, quick and athletic lead guard.
Chong averaged 33.3 points, 9.7 assists, 5.2 rebounds and 5.0 steals as Ossining went 23-1 and reached the NYSPHSAA Class AA state semifinals last season. Its lone loss came to eventual state Federation champion Cicero-North Syracuse and Breanna Stewart, a game in which Chong scored 21 points. The more-than-2,000-point scorer was invited to try out for the USA U17 national team and also dropped in 54 points, including eight 3-pointers, in a win over Archbishop Molloy at the Francis Lewis Winter Ball.
That’s the reason why Chong has some of the nation’s best college programs to choose from and will end her career as one of New York State’s top all-time leading scores. Ricci doesn’t believe she can make a wrong choice. He just wants her to make the best one. |
Turkish president holds the Netherlands responsible for massacre of 8,000 Bosnian Muslims as row over rallies deepens
The Turkish president, Recep Tayyip Erdoğan, has held the Netherlands responsible for the worst genocide in Europe since the second world war as the row over Turkish ministers addressing pro-Erdoğan rallies in the country deepened.
In a speech televised live on Tuesday, Erdoğan said: “We know the Netherlands and the Dutch from the Srebrenica massacre. We know how rotten their character is from their massacre of 8,000 Bosnians there.”
The comments followed Turkey’s suspension of diplomatic relations with the Netherlands on Monday and Erdoğan twice describing the Dutch government as Nazis on Saturday after his foreign minister and family affairs minister were prevented from attending rallies.
The Dutch prime minister, Mark Rutte, faces a general election on Wednesday in which the far-right leader, Geert Wilders, could win the largest number of seats. Rutte earlier on Tuesday played down the impact of Turkey’s diplomatic sanctions, which he said were “not too bad” but were inappropriate as the Netherlands had more to be angry about.
However, after Erdoğan’s speech Rutte told the Dutch TV channel RTL Nieuws that Erdoğan “continues to escalate the situation”, adding the Srebrenica claim was “a repugnant historical falsehood”.
“Erdoğan’s tone is getting more and more hysterical, not only against The Netherlands, but also against Germany,” he said. “We won’t sink to that level and now we’re being confronted with an idiotic fact ... It’s totally unacceptable.”
The Dutch prime minister, Mark Rutte: ‘Erdoğan’s tone is getting more and more hysterical.’ Photograph: Dylan Martinez/Reuters
Turkey is holding a referendum on 16 April on extending Erdoğan’s presidential powers where the votes of Turkish citizens in EU countries will be crucial.
Erdoğan’s decision to invoke the Srebrenica genocide, over which a previous Dutch government resigned for its failure to prevent the massacre, as a further attack on the Netherlands showed that Ankara does not intend to back down from the dispute.
A lightly armed force of 110 Dutch UN peacekeepers failed to prevent a Bosnian Serb force commanded by Gen Ratko Mladić entering what had been designated a safe haven on 11 July 1995. Muslim men and boys were rounded up, executed and pushed into mass graves.
The former Bosnian Serb leader Radovan Karadžić was found guilty of genocide over the massacre by the UN tribunal in March 2016 and sentenced to 40 years in jail.
Erdoğan said in his speech he would not accept an apology from the Netherlands over the treatment of the ministers and suggested that further action could be taken.
He accused the German chancellor, Angela Merkel, of attacking Turkey in the same way as the Dutch police, who used dogs and water cannon to disperse protesters outside the Turkish consulate in Rotterdam. Erdoğan said Merkel was “no different from the Netherlands”, and urged émigré Turks not to vote for “the government and the racists” in upcoming European elections.
What happened in the Turkish referendum and why does it matter? On 16 April 2017 Turkish voters narrowly approved a package of constitutional amendments granting Recep Tayyip Erdoğan sweeping new powers. The amendments will transform the country from a parliamentary democracy into a presidential system – arguably the most significant political development since the Turkish republic was declared in 1923. Under the new system – which is not due to take effect until after elections in June – Erdoğan will be able to stand in two more election cycles, meaning he could govern until 2029. The new laws will notionally allow Erdoğan to hire and fire judges and prosecutors, appoint a cabinet, abolish the post of prime minister, limit parliament’s role in amending legislation and much more. The president's supporters say the new system will make Turkey safer and stronger. Opponents fear it will usher in an era of authoritarian rule.
Erdoğan had on Monday defied pleas from Brussels to tone down his rhetoric, repeating accusations of European “nazism” and saying his ministers would take their treatment by the Dutch to the European court of human rights.
The Turkish foreign ministry said in a statement on Tuesday that the EU’s stance on Turkey was short-sighted and “carried no value” for Turkey. It said the EU had “ignored the violation of diplomatic conventions and the law”.
The Netherlands, Austria, Germany, Denmark and Switzerland, all of which have large Turkish immigrant communities, have cited security and other concerns as reasons not to allow Turkish officials to campaign in their countries in favour of a referendum vote. But with as many as 1.4 million Turkish voters in Germany alone, Erdoğan cannot afford to ignore the foreign electorate.
Austria’s chancellor, Christian Kern, called on Monday for an EU-wide ban on Turkish rallies, saying it would take pressure off individual countries. But Merkel’s chief of staff, Peter Altmaier, said he had doubts as to whether the bloc should collectively decide on a rally ban.
Analysts said the Turkish president was using the crisis to show voters that his strong leadership was needed against a Europe he routinely presents as hostile.
Erdoğan was “looking for ‘imagined’ foreign enemies to boost his nationalist base in the run-up to the referendum”, said Soner Çağaptay, the director of the Turkish research programme at the Washington Institute.
Marc Pierini, a former EU envoy to Turkey, said he saw no immediate solution to the crisis. “The referendum outcome in Turkey is very tight and the leadership will do everything to ramp up the nationalist narrative to garner more votes,” he said.
The standoff has further strained relations already frayed over human rights, while repeated indications from Erdoğan that he could personally try to address rallies in EU countries risk further inflaming the situation.
The row looks likely to dim further Turkey’s prospects of joining the EU, a process that has been under way for more than 50 years. “The formal end of accession negotiations with Turkey now looks inevitable,” the German commentator Daniel Brössler wrote in Süddeutsche Zeitung. |
Stability of Mn2RuxGa-based Multilayer Stacks Perpendicular heterostructures based on a ferrimagnetic Mn2RuxGa (MRG) layer and a ferromagnetic Co/Pt multilayer were examined to understand the effects of different spacer layers (V, Mo, Hf, HfOx and TiN) on the interfaces with the magnetic electrodes, after annealing at 350°C. Loss of perpendicular anisotropy in MRG is strongly correlated with a reduction in the substrate-induced tetragonality due to relaxation of the crystal structure. In the absence of diffusion, strain and chemical ordering within MRG are correlated. The limited solubility of both Hf and Mo in MRG is a source of additional valence electrons, which results in an increase in compensation temperature Tcomp. This also stabilises perpendicular anisotropy, compensating for changes in strain and defect density. The reduction in squareness of the MRG hysteresis loop measured by anomalous Hall effect is <10 %, making it useful in active devices. Furthermore, a CoPt3 phase with (2 2 0) texture in the perpendicular Co/Pt free layer promoted by a Mo spacer layer is the only one that retains its perpendicular anisotropy on annealing. Introduction Modern spintronic materials are being developed for high-speed, non-volatile data storage products such as magnetic random-access memory. An ideal material would be one which has high spin-polarisation, yet little or no magnetic moment. Mn2RuxGa (MRG) is a ferrimagnetic inverse Heusler alloy, having an XA structure with two inequivalent antiferromagnetically aligned Mn sublattices and almost complete filling of the four lattice sites, as shown in Figure 1. [Figure 1: Crystal structure of Mn2RuxGa, a distorted XA Heusler structure with two inequivalent, antiferromagnetically coupled sublattices, Mn 4a and Mn 4c, leading to zero moment at compensation.] It is considered to be a zero-moment half-metal (ZHM) when its net moment vanishes at the compensation temperature, Tcomp, and it is a potential candidate for spintronic applications owing to its immunity to magnetic fields, reasonably high Curie temperature and large perpendicular anisotropy field close to compensation. The possibility of tunable anti-ferromagnetic oscillation modes in the THz region is key for future applications in high-frequency computing and communications, and has already been observed in the similar Mn3−xGa materials. Perpendicular magnetic anisotropy (PMA) due to biaxial strain is observed in films grown on MgO. The ruthenium 4d site occupancy allows the magnetic properties to be tuned according to the application; in particular, Tcomp increases with increasing Ru concentration and valence electron count, nv, in the unit cell. Annealing of magnetoresistive devices can have varying outcomes: the crystallisation of the MgO/CoFeB bilayer results in a dramatic increase of the tunnelling magnetoresistance (TMR) ratio due to coherent tunnelling and perpendicular anisotropy in the free layer, while the diffusion of Ta and other species into the active portion of the device can have a negative impact on functionality. This MgO/CoFeB electrode structure is currently the industry choice for TMR applications, and annealing temperatures for crystallisation of amorphous CoFeB into a body-centered cubic structure typically occur above 320°C, depending on boron content. The use of thin interlayers to enhance performance of MnGa-based magnetic tunnel junctions has proved effective.
A major concern with Mn-based materials, however, is diffusion, which can affect both the crystal structure and the quality of the spin-polariser/barrier interfaces. We have previously demonstrated that an ultrathin dusting layer of Ta or Al can be used to mitigate the effects of diffusion between MRG and an MgO tunnel barrier. Using this technique, a TMR of 40 % has been achieved at 10 K and at low bias in an MRG-based magnetic tunnel junction using a CoFeB free layer. Similarly, we have shown that Hf can magnetically couple MRG and CoFeB thin films, after annealing at high temperatures. Hf and Al are not known for preventing diffusion, so the mechanism by which they act to maintain performance is not clear, compared to other similar diffusively mobile materials. A previous study of MRG, annealed at various temperatures up to 400°C and capped with AlOx, shows that the magnetic and crystalline properties are stable up to 350°C, beyond which the crystal structure relaxes. There are several types of defects that occur in the MRG lattice: Mn-Ga (4a/4c ↔ 4b), Mn-Ru (4a/4c ↔ 4d), and Ru-Ga (4d ↔ 4b) antisites, as well as excess Mn and Ga on 4d sites when x < 1. We can quickly identify Ru-based defects in X-ray crystallography due to the much stronger atomic scattering factor of Ru compared to Mn and Ga. Mn-Ga antisite defects have previously been identified as the source of additional electronic pressure that modifies the position of the Fermi level, increasing the moment and spin polarisation. Here we correlate magnetic and transport properties with the presence of crystalline defects. To investigate, we use a spin-valve structure of the form MgO//Mn2Ru0.7Ga/X(t)/[Co(0.4)/Pt]×8/Pt, where the spacer layer, X, has a thickness, t, of 1.4 nm or 2 nm (parenthetical values in nm). It will act either as a barrier preventing diffusion, or as a source of mobile atoms. The stack uses a Co/Pt superlattice as the free layer due to its perpendicular anisotropy as deposited, independent of seed layer or annealing requirements, unlike an MgO/CoFeB free layer. The five materials selected as X in this study are listed in Table 1. TiN thin films are widely used as diffusion barriers and serve as a baseline comparison where we expect no diffusion. Hf has also been chosen to serve as a comparison due to our previous experience with it. Diffusion-driven defects are expected to result in a reduction of crystalline order in MRG. Any change in c/a ratio will change the magneto-crystalline anisotropy in MRG, while the perpendicular anisotropy in Co/Pt multilayers is primarily caused by interfacial anisotropy driven by spin-orbit hybridisation of Co and Pt. This means that a change in the interfacial roughness or strain will have an impact on the Co/Pt magnetic properties. We measured crystallographic and magnetotransport characteristics before and after annealing at 350°C, a standard industrial practice for MgO/CoFeB-based devices. In this way we aimed to identify the influence of annealing the structure based on the interlayer material used, and its effect on the properties of both electrodes. Methods All films except TiN were deposited by direct current sputtering onto 10 mm × 10 mm single-crystal MgO (0 0 1) substrates in a SFI Shamrock sputter tool (base pressure <2.5 × 10−6 Pa); TiN was deposited by radio frequency sputtering. MRG was deposited at 320°C by co-sputtering from a Mn2Ga target and a Ru target, both 76.2 mm in diameter, at a target-substrate distance of 100 mm. Argon gas flow was 38 cm3 min−1.
All other materials were deposited at room temperature, except for two spacer layers of TiN grown at 320°C. HfOx was formed by natural oxidation of metallic Hf films in an O2 atmosphere at 2 Pa for 60 s. Annealing was carried out under vacuum at 350°C for 60 minutes in a perpendicular field of 800 mT. X-ray diffraction (XRD) and X-ray reflection (XRR) were measured with a Panalytical X'Pert tool using Cu Kα radiation, and magnetotransport data were collected by the Van der Pauw method in a 1 T GMW electromagnet. Crystallographic data were analysed by fitting a Voigt function to the peaks, taking into consideration Kα2 and instrumental broadening, characterised using an Al2O3 standard, NIST 1976b. Results As-deposited MRG is textured, with the crystallographic c-axis perpendicular to the substrate surface. Two diffraction peaks are observed in a symmetric θ-2θ scan, namely the (0 0 2) and (0 0 4) (Figure 2). In addition to the substrate and MRG peaks, we also note the Co/Pt superlattice (0 0 2) peak. As mentioned in section 1, the magnetic properties are strongly dependent on the crystal characteristics. We determine the out-of-plane lattice parameter c, the crystal coherence length Lc perpendicular to the film (using the Scherrer formula), the perpendicular strain ε, and the ratio of intensities of the two peaks, S(0 0 2)/(0 0 4), signifying the Ru/Mn order, as Ru occupying the 4a or 4b sites will increase the intensity of the (0 0 2) peak. Higher S(0 0 2)/(0 0 4) ratios indicate disorder. By comparison, Lc and ε give us a metric for the long-range disorder in the film, due to grain boundaries and stacking faults, where larger grain size or less strain indicates a more homogeneous film. MRG has a coercivity in a perpendicular magnetic field that diverges as the temperature approaches Tcomp, where the anomalous Hall effect (AHE) signal changes sign. The composition of MRG used here, x = 0.7, was chosen to give Tcomp = 165 K, with an as-deposited coercivity of Hc ≈ 380 mT at room temperature. Squareness of the Hall signal, i.e. the ratio of the Hall voltage at remanence to that at saturation, is used as an indicator of perpendicular anisotropy. The main parameters from the AHE hysteresis are labeled in Figure 3. The as-deposited MRG exhibits perpendicular magnetic anisotropy (PMA) in all cases, irrespective of spacer material or thickness; magnetic properties upon annealing are spacer-material dependent. It is typically reported that perpendicular anisotropy in Co/Pt is accompanied by a (1 1 1) texture; however, we note here that all as-deposited Co/Pt films exhibited perpendicular anisotropy regardless of initial texture or phases present. After annealing with a 1.4 nm Hf spacer, the Co/Pt layer loses much of its perpendicular anisotropy, indicated by the loss of squareness, while the average Pt roughness increases to 0.99 nm. This reduces the interfacial anisotropy, bringing the moment in-plane, and is caused by intermixing, as evidenced by the weak crystallisation of CoPt3 with (1 1 1) texture after annealing, as seen in Figure 2. The Co and Pt interfaces of the free layer are well defined before annealing, as can be seen from XRR data for a heterostructure with a 1.4 nm V spacer layer in Figure 4(a). A clear and intense Co/Pt (0 0 1) superlattice peak is visible which disappears upon annealing, shown in Figure 4(b), a result of intermixing and alloying between the layers forming a CoPt3 phase as shown in Figure 2.
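The derived quantities used in this analysis (coherence length from the Scherrer formula, tetragonal distortion from the lattice parameters, and squareness from the AHE loop) follow directly from their definitions. The short sketch below uses illustrative, hypothetical numbers rather than the values measured in this study.

```python
import math

def scherrer_length_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Out-of-plane coherence length L_c from an XRD peak width (Scherrer formula, Cu K-alpha)."""
    beta = math.radians(fwhm_deg)                 # peak width in radians
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

def tetragonal_distortion(c_nm, a_nm):
    """Distortion of the cubic cell, eps_c = (c - a) / a."""
    return (c_nm - a_nm) / a_nm

def squareness(v_hall_remanence, v_hall_saturation):
    """Squareness of the anomalous Hall loop: remanent over saturation signal."""
    return abs(v_hall_remanence / v_hall_saturation)

# Illustrative numbers only (not measured values from this study).
print(f"L_c   = {scherrer_length_nm(0.45, 61.0):.1f} nm")     # width of an MRG (0 0 4)-like peak
print(f"eps_c = {tetragonal_distortion(0.604, 0.598):+.4f}")
print(f"S     = {squareness(0.92, 1.00):.2f}")
```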
XRD and roughness data for the Co/Pt multilayers are given in Table 3. Of interest is that in each case, the average Co roughness is approximately equivalent to the desired layer thickness (0.4 nm), indicating incomplete coverage. The anisotropy of MRG appears to be maintained, with minimal change in coercivity or squareness. Changes in Co/Pt properties are considered independent of changes in MRG. As discussed in Section 1, the magneto-crystalline anisotropy of MRG is largely dependent on the c/a ratio, or more specifically the tetragonal distortion of the cubic cell, εc = (c − a)/a. The properties of MRG are highly dependent on the spacer layer used, and the relative change of each parameter in MRG after annealing has been collated in Figure 5. Below, we summarise the results for each spacer material. For reference, Table 2 gives a summary of the mean and standard deviation for the various properties measured in MRG before annealing. As a basic metric, we aim to maintain these as-deposited values after annealing. Giant magnetoresistance was not measured in these devices as a current-perpendicular-to-plane geometry would be required to prevent shunting effects of currents through highly resistive layers, necessitating nanoscale lithographic patterning of the metallic films. Vanadium Annealing of a vanadium spacer stack results in large reductions of the MRG coercivity (∼ −90 %) and squareness (∼ −75 %), along with a significant (∼ −80 %) reduction in εc. This indicates the anisotropy has a primarily in-plane contribution, though a small fraction of grains are still perpendicular. This is accompanied by an extremely large (>500 %) increase of S, as well as a moderate decrease of Lc and an increase in ε, indicating complete reordering of the crystal structure. Additional peaks near the MRG (0 0 2) and (0 0 4) peaks with d-spacings of ∼ 0.3058 nm and 0.1511 nm are present after annealing. These d-spacings correspond well to either RuV(1 0 0) (0.298 nm), RuV(2 0 0) (0.148 nm) or Ga5V2(2 2 0) (0.317 nm), Ga5V2(5 3 0) (0.1537 nm). Given the phase diagrams for Ga-V and Ru-V, we attribute this phase to a Ga-V binary such as Ga5V2 or Ga41V8, both of which are peritectics with relatively low decomposition temperatures. Based on the substantial increase in S, reordering of Ru from the Heusler 4d site occurs, forming a highly disordered cubic alloy with an A2 structure. There is a moderate CoPt3 (1 1 1) and (2 0 0) texture before annealing. After annealing, only a strong CoPt3 (2 0 0) peak is present, and the magnetic anisotropy is also in-plane. In addition, the average roughness for Co increases moderately, while the roughness of the Pt layers approximately doubles in both cases. Titanium Nitride When using a TiN spacer, we observe a collapse in the tetragonality of MRG while Lc increases and ε decreases, indicating that defects are being removed. This is accompanied by a loss of coercivity and squareness in the magnetic hysteresis. TiN is a well-known diffusion barrier, and defects introduced by diffusion are not considered as the cause. We also see that there is no direct correlation with Ru-Mn disorder, as there are three cases with a small increase in S, contrasted by the 1.4 nm TiN grown at high temperature, which has a lower S ratio. However, there is a correlation between strain and chemical ordering S. This means that defects such as stacking faults and grain boundaries induce strain on the crystal structure, which contributes strongly to the magneto-crystalline anisotropy of the film and has a more prominent effect than Ru-Mn anti-sites.
The effect is independent of growth temperature. From the Co/Pt superlattice, we observe a very weak Pt(1 1 1) peak, which is distinct from the CoPt3 (1 1 1), along with a strong CoPt3 (2 0 0). The high intensity, as well as the Laue fringes observed around the peak, indicates that the Pt(1 1 1) is due to the thick Pt capping layer, caused by relaxation at a critical thickness. The close lattice matching of TiN(1 0 0) to MRG(1 1 0) means the TiN(0 0 1) out-of-plane texture is expected, which helps to induce a CoPt3 (2 0 0) phase in the Co/Pt. However, PMA is lost after annealing. The average roughnesses of Co and Pt increase moderately in all cases. Pt layers begin with a higher roughness in the case of high-temperature-grown TiN spacer layers. By comparison, the roughness after annealing is equal or greater when using room-temperature-grown TiN. This indicates that a high-temperature-grown TiN seed layer is less suitable, but is also more stable. Hafnium Oxide As HfOx is formed by natural oxidation of a Hf metal layer, there is a gradient of oxygen content that depends on the film thickness. MRG performs poorly with a 1.4 nm thick HfOx spacer, resulting in near-total loss of perpendicular anisotropy upon annealing, whereas with a 2 nm spacer it maintains both coercivity and squareness, with reductions of 11 % and 14 % respectively, despite a significant increase in strain (∼ 25 %). This is correlated with a lesser reduction in c-spacing for the 2 nm spacer stack. Similarly to the case of TiN, the reduction in c comes despite an improvement in the coherence length. If the 1.4 nm HfOx is fully oxidised (x ∼ 2), then diffusion into MRG, either across the barrier or from the barrier itself, will be reduced. This will result in oxidation of MRG at the interface, however. The effect is then similar to that using TiN. As the HfOx films were treated for an equal time, the thicker HfOx is not fully oxidised due to passivation, which leads to some interdiffusion. There is no observable texture in the Co/Pt superlattice before annealing. Weak CoPt3 (1 1 1) and (2 0 0) texture is observed after annealing, but PMA is lost. As with the TiN layers, we see a moderate increase in Co and Pt roughness, with a higher as-deposited roughness. Hafnium Hf spacers work well, with the 1.4 nm spacer showing only a small change in all properties. In contrast to other materials, the 2 nm Hf spacer increases the c of the MRG unit cell. Furthermore, it raises Tcomp to a value slightly above RT, which causes Hc to increase significantly due to its divergent nature close to Tcomp. The increase in c is accompanied by a decrease in Lc and an increase in ε, indicating an increase of line and planar defects. According to the binary phase diagrams, Hf absorption into both Mn and Ga is limited to ∼ 1 at. % while maintaining crystal structure. This limited solubility accounts for the reduced effects seen in the HfOx 2 nm and Hf 1.4 nm based heterostructures, where the counter-diffusion into the Hf layer limits the available soluble material. We see different effects on the Co/Pt layer depending on the thickness of the Hf spacer, with the 1.4 nm thick spacer inducing no discernible texture in the superlattice. After annealing there is a weak CoPt3 (1 1 1) texture. The 2 nm thick spacer promotes a CoPt3 (1 1 1) orientation, which is maintained after annealing as well. In neither case is there any PMA after annealing, although there is a clear preferred orientation. The roughness of Co layers increases moderately.
Annealing causes a substantial increase in the roughness of the Pt layers, to ∼1 nm, with the 2 nm Hf spacer giving a higher initial roughness.
Table 3: Data for Co/Pt layers before and after annealing. The XRD peaks visible are given by CoPt3(1 1 1) △, CoPt3(2 0 0), CoPt3(2 2 0), Pt(1 1 1) ▽. Filled shapes indicate a strong peak; underlined shapes indicate a weak peak. The average interfacial roughness of the Co and Pt layers within the superlattice is given based on fitting of XRR data. A marker indicates that PMA was still present after annealing.
Molybdenum
Mo spacers have similar properties to Hf spacers, and the phase diagrams show a greater affinity of Mn for Mo (4 at. %) and Ga (16 at. %). Counter-diffusion is limited to Mn, as the Mo does not absorb much Ga at 350 °C. The increase in S for the 1.4 nm Mo heterostructure indicates reordering of the Ru within MRG, which should result in a decrease in magnetic properties. However, the squareness is still within acceptable limits and the coercivity increases, indicating that Tcomp has increased despite a lower c parameter. This is corroborated by the change in sign of the MRG switching direction in EHE, showing that Tcomp is now above RT. This effect is less prominent with the 2 nm Mo heterostructure, but still indicates an increase in Tcomp. In this case Lc and the microstrain have both increased without any change in S. With more material to diffuse, the additional Mo inclusions maintain the chemical ordering of the Ru 4d site. With the thin 1.4 nm spacer, a weak CoPt3(2 0 0) peak is present, which becomes stronger after annealing. By contrast, there is a clear CoPt3(2 2 0) peak present when using a 2 nm spacer layer, which remains after annealing. This is the only case we witnessed where the magnetic anisotropy of the Co/Pt was still perpendicular to the film surface after annealing. There is no clear Mo texture visible in XRD. The Co and Pt layers both show moderate increases in roughness; however, the heterostructure with a 2 nm spacer layer has higher initial and final roughness.
Discussion
Strain plays a significant role in the properties of MRG thin films, and crystalline point defects have previously been discussed as a source of electronic doping that maintains strain within the lattice. As demonstrated by the use of a TiN spacer, annealing of MRG removes line- and planar-type crystalline defects, which relaxes the lattice. These types of defects have previously been identified in MRG by HRTEM. This results in a 60 % to 70 % loss of the tetragonal distortion of the unit cell (where a 100 % reduction would indicate a fully cubic cell). It is clear here that this is responsible for the reduction of coercivity and squareness in the magnetic hysteresis, due to the reorientation of the magneto-crystalline anisotropy into the plane. This effect is much stronger than could be attributed to Ru-based anti-sites, so it is not possible here to determine their contribution. Where diffusion is discounted, there is a correlation between strain and chemical ordering within MRG, consistent with previous reports. The diffusion of different materials has varying effects on MRG, with V, for example, causing significant disorder of the crystal structure, ultimately resulting in the formation of additional intermetallic phases. The choice of a suitable spacer material is based on the requirement to maintain the magnetic properties of MRG after processing. Hf and Mo are strong candidates for future applications, as in both cases the magnetic hysteresis is reliable, with a change in squareness of <10 % from the as-deposited state.
This is especially important in magnetoresistive devices, which rely upon the relative orientation of the magnetic moments of the free and reference layers. Tcomp is sensitive to the number of valence electrons within MRG, and increases linearly with nv. Both Hf and Mo appear to donate valence electrons to the film, as indicated by the increase in Tcomp. Based on Slater-Pauling rules, both Hf and Mo would donate ∼0.2 electrons per unit cell of MRG for the considered solubility. This contextualises the same effect seen in our previous work using thin interlayers. The results for these materials indicate a complex interplay between strain and film defects when we consider the incorporation of additional valence electrons into the unit cell, which results in the retention of perpendicular anisotropy. The strongly bonded compounds TiN and HfOx do not diffuse. However, the heterostructure using the thicker, partially oxidised HfOx spacer layer more closely resembles the structures using a Hf spacer layer in its behaviour, due to diffusion of metallic species closer to the MRG interface, where the layer is not fully oxidised. Typically we see either CoPt3(2 0 0) or CoPt3(1 1 1) texture in the free layer; however, in the case of the 2 nm Mo heterostructure a CoPt3(2 2 0) peak appears. It is of interest that, of all the structures, this is the only one that maintained perpendicular anisotropy in the Co/Pt after annealing. We can surmise that a mixture of weak (1 1 1) and (2 0 0) indicates an amorphous interface between the spacer and the first Co layer, which is then crystallised and induces a preferential texture on the superlattice during annealing. Annealing results in an increase in interfacial roughness within the superlattice. PMA after annealing is independent of the degree of roughness measured, and is instead correlated with the appearance of CoPt3 phases, which are the result of intermixing. The superlattice which forms a CoPt3(2 2 0) phase does not lose perpendicular anisotropy, which we attribute here to the additional strain induced by the crystal texture on the remaining Co/Pt interfaces. As Mo is an element of comparable atomic weight to Pt, the additional orbital contribution at the interface between the spacer and the superlattice likely contributes to the anisotropy, rather than any strain induced by epitaxial growth, considering that the Mo has no visible texture itself. The presence of PMA after annealing when using a 2 nm Mo spacer, when there is none using a 1.4 nm Mo spacer, indicates that counter-diffusion of MRG has an effect on the interfacial relationship between the spacer and the initial Co layer of the superlattice.
Conclusions
This study has enabled us to characterise the effects of various spacer layers on the structural and magnetic properties of both the MRG and the Co/Pt superlattice, which will be useful for developing perpendicular GMR and TMR structures that can withstand annealing at 350 °C. The PMA of MRG is the more robust, since it depends on the strain imposed by the MgO substrate. That of the Co/Pt multilayer is only maintained when a CoPt3(2 2 0) phase is developed by intermixing, with the help of Mo. Based on the effect of annealing MRG films capped with a TiN spacer, line and planar defects help to maintain the substrate-induced strain and the tetragonal distortion of the MRG crystal structure, on which the perpendicular magnetic anisotropy depends.
Annealing causes these defects to be removed, which results in the partial collapse of the tetragonal distortion and therefore of the magnetic properties dependent on the perpendicular anisotropy. There was no identifiable link between these properties and the presence of Ru-based anti-site disorder. The effect of strain on anisotropy is counteracted by the addition of valence electrons to the MRG unit cell. In Co/Pt multilayers, interface roughness is the main antagonist of PMA due to the reduction of interfacial anisotropy. Intermixing results in the formation of a CoPt3 phase, even before annealing. However, the development of CoPt3 with (2 2 0) texture retains perpendicular anisotropy after a 350 °C anneal. A Mo underlayer is proposed here as a viable solution. Hf and Mo will be useful as thin protective layers in future active devices based on MRG, as they are able to offset changes in strain and defect density to maintain the PMA of the MRG layer. The solubility of both materials within MRG provides additional valence electrons, modifying the Fermi level position, and therefore Tcomp as well as the overall magnetic properties. This is in contrast to V, which prefers to form intermetallic phases instead. Considering the enhanced high-temperature stability of the crystal, doping of MRG with these or similar materials, such as Zr, should be investigated. By tuning the Ru/dopant composition, the desired magnetic properties should be readily achievable.
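The tetragonal distortion referred to throughout this section is simply (c − a)/a. As a minimal illustration of how the quoted percentage changes relate to the lattice parameters, the short Python sketch below computes the distortion before and after annealing and its relative change; the lattice parameters used here are hypothetical placeholder values, not measurements from this study.

def tetragonal_distortion(a_nm: float, c_nm: float) -> float:
    """Tetragonal distortion of a pseudo-cubic cell, (c - a) / a."""
    return (c_nm - a_nm) / a_nm

def relative_change(before: float, after: float) -> float:
    """Relative change in percent, e.g. -70 means a 70 % reduction."""
    return 100.0 * (after - before) / before

# Hypothetical lattice parameters (nm) before and after annealing.
a0, c0 = 0.598, 0.604   # as-deposited
a1, c1 = 0.598, 0.600   # after a 350 C anneal

eps0 = tetragonal_distortion(a0, c0)
eps1 = tetragonal_distortion(a1, c1)
print(f"distortion before: {eps0:.4f}, after: {eps1:.4f}, "
      f"change: {relative_change(eps0, eps1):.0f} %")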
Tricyclohexylphosphate: A Unique Member in the Neutral Organophosphate Family Abstract Tricyclohexylphosphate (TcyHP) was synthesised and characterized. The extraction of U(VI) and Th(IV) by TcyHP in n-dodecane was studied and compared with tributylphosphate (TBP). The feasibility of the separation of Th and U with TcyHP has been explored and reported. The solubility of TcyHP in water at room temperature has been measured. The stoichiometry of the TcyHP solvates with Th(NO3)4 and UO2(NO3)2 was evaluated. The enthalpy of extraction of U(VI) by TcyHP was measured. The results of column runs with TcyHP-impregnated Amberlite XAD-7 resin for studying the loading and elution behavior of uranium from nitric acid medium are also reported in this paper. Hexavalent actinides form a third phase with neutral organophosphate extractants only at very high nitric acid concentrations and high metal loading. However, the presence of the three closed-ring alkyl moieties in TcyHP is seen to lead to certain unique features with regard to third-phase formation behavior. It has been observed, for instance, that U(VI) forms a third phase with this extractant even with highly polar diluents like 1-chlorooctane. Further, the tendency for third-phase formation with U(VI) is even higher than in the case of Th(IV), in contrast to the trend observed so far with other extractants.
Functional alterations of type I insulin-like growth factor receptor in placenta of diabetic rats. The presence of type I insulin-like growth factor (IGF-I) receptors on placental membranes led to the hypothesis that these receptors might play a critical role in the rapid growth of this organ. Diabetes induces feto-placental overgrowth, but it is not known whether it modifies IGF-I receptor activity in fetal and/or placental tissues. To answer this question, we partially purified and characterized placental receptors from normal and streptozotocin-induced diabetic rats. In normal rats, binding of 125I-IGF-I to a 140 kDa protein corresponding to the alpha subunit of the receptor was observed in cross-linking experiments performed under reducing conditions. Stimulation by IGF-I induced the autophosphorylation of a 105 kDa phosphoprotein representing the beta subunit of the receptor. In rats made hyperglycaemic and insulinopenic by streptozotocin injection on day 1 of pregnancy, placental IGF-I receptor-binding parameters were not different from controls on day 20 of pregnancy. In contrast, the autophosphorylation and kinase activity of IGF-I receptors from diabetic rats were increased 2-3-fold in the basal state and after IGF-I stimulation. The present study indicates that the rat placental IGF-I receptor possesses structural characteristics similar to those reported for fetal-rat muscle, and suggests that the high-molecular-mass beta subunit could represent a type of receptor specifically expressed during prenatal development. In addition, it clearly demonstrates that diabetes induces functional alterations in IGF-I receptor kinase activity that may play a major role in the placental overgrowth in diabetic pregnancy.
Identification of a Prognostic Signature Associated With the Homeobox Gene Family for Bladder Cancer
Background: Bladder cancer (BLCA) is a common malignant tumor of the genitourinary system, and there is a lack of specific, reliable, and non-invasive tumor biomarker tests for diagnosis and prognosis evaluation. Homeobox genes play a vital role in BLCA tumorigenesis and development, but few studies have focused on the prognostic value of homeobox genes in BLCA. In this study, we aim to develop a prognostic signature associated with the homeobox gene family for BLCA. Methods: The RNA sequencing data, clinical data, and probe annotation files of BLCA patients were downloaded from the Gene Expression Omnibus database and the University of California, Santa Cruz (UCSC), Xena Browser. First, differentially expressed homeobox gene screening between tumor and normal samples was performed using the limma and robust rank aggregation (RRA) methods. The mutation data were obtained with the TCGAmutations package and visualized with the maftools package. Kaplan-Meier curves were plotted with the survminer package. Then, a signature was constructed by logistic regression analysis. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed using clusterProfiler. Furthermore, the infiltration level of each immune cell type was estimated using the single-sample gene set enrichment analysis (ssGSEA) algorithm. Finally, the performance of the signature was evaluated by receiver-operating characteristic (ROC) curve and calibration curve analyses. Results: Six genes were selected to construct this prognostic model: TSHZ3, ZFHX4, ZEB2, MEIS1, ISL1, and HOXC4. We divided the BLCA cohort into high- and low-risk groups based on the median risk score calculated with the novel signature. The overall survival (OS) rate of the high-risk group was significantly lower than that of the low-risk group. The infiltration levels of almost all immune cells were significantly higher in the high-risk group than in the low-risk group. The average risk score for the group that responded to immunotherapy was significantly lower than that of the group that did not. Conclusion: We constructed a risk prediction signature with six homeobox genes, which showed good accuracy and consistency in predicting patients' prognosis and response to immunotherapy. Therefore, this signature can be a potential biomarker and treatment target for BLCA patients.
INTRODUCTION
Bladder cancer (BLCA) is a common urological tumor, and its morbidity and mortality rates are increasing year by year (). High recurrence and early metastasis lead to the poor prognosis of BLCA. The detection of exfoliated tumor cells in urine or bladder lavage samples has a high sensitivity (84%) for the diagnosis of high-grade BLCA but is less sensitive for low-grade BLCA (). Cystoscopy, the main method for the diagnosis of BLCA, is invasive, time-consuming, and tedious. Currently, specific, reliable, and non-invasive tumor biomarker tests for the diagnosis and prognosis evaluation of BLCA are desperately needed. The homeobox gene family is a group of genes sharing a homologous segment of approximately 180 bp in length that encodes a homologous domain of 60 amino acids; its members are important transcriptional regulators that play a vital role in tumor formation and development, regulating cell proliferation, migration, and apoptosis (Laughon and Scott, 1984).
Current studies have shown that the homeobox gene family is aberrantly expressed in different tumors, such as bladder, bile duct, endometrial, and breast cancers (). In BLCA, ISL1 and LHX5 play important roles in multiple stages of bladder tumorigenesis (); ZHX3 promotes migration and invasion in vitro and in vivo (). Therefore, the homeobox gene family plays an important role in the development and progression of BLCA. Although progress has been made in the study of individual family members, the role and prognostic value of the homeobox gene family in BLCA remain unclear. In this study, we analyzed the mRNA expression of a large number of BLCA samples in public databases. We constructed a prognostic signature for BLCA based on six homeobox genes with significant differential expression between BLCA tissues and normal tissues. This signature can predict a patient's prognosis and response to immunotherapy and thus has good clinical application value. The design flow chart for the entire analysis process of this study is shown in Figure 1.
Data Collection
The RNA sequencing (RNA-Seq) data, clinical data, and probe annotation files of BLCA patients (providing 18 normal tissues and 406 tumor tissues) in TCGA were downloaded from the University of California, Santa Cruz (UCSC), Xena Browser (https://xenabrowser.net/). The BLCA datasets GSE7476 (3 normal tissues and 9 tumor tissues), GSE13507 (69 normal tissues and 188 tumor tissues), GSE37815 (6 normal tissues and 18 tumor tissues), GSE65635 (4 normal tissues and 8 tumor tissues), and GSE19423 (48 tumor tissues) were downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/) using the R package "GEOquery" (Davis and Meltzer, 2007). All 344 homeobox gene family members were extracted from the Hugo Gene Nomenclature Committee (HGNC). The probe IDs in each BLCA dataset were transformed into gene symbols according to the annotation files.
Identification and Integration of Differentially Expressed Genes
The R package "limma" was used to identify DEGs between normal and tumor tissues in each BLCA cohort with cutoff criteria of adjusted p value <0.05 and |log fold change (FC)| > 0.5 (). DEGs acquired from the five BLCA cohorts were sorted by the log fold change (logFC) value, and then the five gene lists were integrated using the RobustRankAggreg (RRA) R package (). The RRA method is based on the assumption that if a gene ranks highly in all datasets, the probability that the gene is differentially expressed is higher and the associated p value is lower.
Mutation Landscape Analysis
TCGA BLCA mutation data containing 411 tumor samples were acquired from the R package "TCGAmutations." The mutation landscape for the six signature genes in BLCA was visualized using the R package "Maftools" ().
Construction and Evaluation of the Prognosis Model
We randomly divided the TCGA BLCA cohort (n = 406) in a 7:3 ratio into a training dataset (n = 285) and a testing dataset (n = 121). Logistic regression analysis was used to integrate the prognostic value of the six homeobox family genes into a six-gene signature model for BLCA. The risk score for each sample was calculated from its expression profile as the sum of the expression level of each signature gene weighted by its regression coefficient. Then, we divided the BLCA cohort into high- and low-risk groups based on the median risk score.
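To make the scoring step concrete, here is a minimal Python sketch (standing in for the R workflow described above) that computes a weighted-sum risk score for each sample and performs the median split into high- and low-risk groups; the coefficients and expression values are hypothetical placeholders, not the fitted values from this study.

import numpy as np
import pandas as pd

# Hypothetical regression coefficients for the six signature genes (placeholders).
coefficients = {"TSHZ3": 0.42, "ZFHX4": 0.31, "ZEB2": 0.27,
                "MEIS1": -0.18, "ISL1": 0.22, "HOXC4": -0.35}

def risk_score(expression: pd.DataFrame) -> pd.Series:
    """Weighted sum of gene expression (samples x genes) using the model coefficients."""
    genes = list(coefficients)
    weights = np.array([coefficients[g] for g in genes])
    return pd.Series(expression[genes].to_numpy() @ weights, index=expression.index)

# Toy expression matrix: 6 samples x 6 genes with placeholder values.
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(6, 6)), columns=list(coefficients),
                    index=[f"sample_{i}" for i in range(6)])

scores = risk_score(expr)
groups = np.where(scores > scores.median(), "high-risk", "low-risk")
print(pd.DataFrame({"risk_score": scores, "group": groups}))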
The R package "survivalROC" was used to establish the receiver-operating characteristic (ROC) curves for predicting one-, three-, and five-year overall survival (OS) for the two risk groups. Furthermore, we used the R package "rms" to construct calibration curves and evaluate the precision of the one-, three-, and five-year OS predictions for the BLCA cohort.
Estimation of Immune Cell Infiltration
We identified a group of 782 genes that represent 28 immune cell types involved in innate and adaptive immunity to estimate the infiltration level of different immune cell types in the tumor microenvironment (). Subsequently, the single-sample gene set enrichment analysis (ssGSEA) algorithm in the R package "GSVA" was used to evaluate the infiltration level of each immune cell type based on the expression profile of each sample in BLCA and the immune cell gene markers ().
Functional Enrichment Analysis
The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases include collections of gene sets associated with the function of cells and organisms. Functional enrichment analysis of a set of genes that are dysregulated under certain conditions reveals which GO terms or KEGG pathways are overrepresented for that gene set. The TCGA BLCA cohort was divided into high-risk and low-risk groups according to the median risk score. Then, the R package "limma" was used to identify DEGs between the two risk groups. GO and KEGG analyses of the DEGs between the two risk groups were performed using the R package "clusterProfiler" (). A cutoff of adjusted p value < 0.05 was used to determine the significant pathways.
Prediction of the Immunotherapy Response
The response of each sample to PD-1/PD-L1 and CTLA4 inhibitors was evaluated according to the gene expression profiles of the BLCA cohort with the Tumor Immune Dysfunction and Exclusion (TIDE) algorithm (http://tide.dfci.harvard.edu) ().
Survival Analysis
The samples were divided into high- and low-risk groups based on the median risk score, and the differences in OS and progression-free survival between the high-risk and low-risk groups were estimated using the Kaplan-Meier method. Survival curves were compared using the log-rank test. The significance threshold was defined as p < 0.05.
Statistical Analysis
Statistical analyses were performed using the log-rank test for univariate analysis. Pearson's correlation test was used to assess the relationship between the risk score and immune markers, characteristic gene expression, and the immune cell infiltration score. The relationship between the characteristic gene expression and the immune cell infiltration score was also evaluated. Student's t-tests were used to determine the statistical significance of differences between variables. Statistical significance was defined as p < 0.05. All statistical analyses were performed in R version 4.0.2.
Identification of the Differentially Expressed Homeobox Gene Family Members in Bladder Cancer
To screen differentially expressed homeobox genes (DEHGs) in BLCA, four GEO datasets, GSE7476, GSE13507, GSE37815, and GSE65635, as well as the TCGA gene expression dataset containing 406 BLCA samples and 18 normal samples from the UCSC Xena Browser, were obtained. The R package "limma" was used to determine the DEHGs of each dataset using the |logFC| > 0.5 and adjusted p < 0.05 criteria, and volcano plots were drawn (Figures 2A-E).
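As an illustration of the screening thresholds just described (|logFC| > 0.5, adjusted p < 0.05), the following Python sketch computes per-gene log fold changes and Benjamini-Hochberg-adjusted p-values from a toy tumor/normal matrix; it is only a stand-in for the R limma workflow named in the text, and all data are randomly generated.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(50)]                                # placeholder gene names
tumor = pd.DataFrame(rng.normal(5, 1, size=(30, 50)), columns=genes)   # log2 expression, tumor
normal = pd.DataFrame(rng.normal(5, 1, size=(10, 50)), columns=genes)  # log2 expression, normal
tumor.iloc[:, :5] += 1.0   # spike a shift into the first five genes so the filter finds something

logfc = tumor.mean() - normal.mean()                  # log2 fold change (tumor minus normal)
pvals = stats.ttest_ind(tumor, normal).pvalue         # per-gene two-sample t-test
adj_p = multipletests(pvals, method="fdr_bh")[1]      # Benjamini-Hochberg adjustment

results = pd.DataFrame({"logFC": logfc, "adj_p": adj_p})
degs = results[(results["logFC"].abs() > 0.5) & (results["adj_p"] < 0.05)]
print(degs)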
Furthermore, the RRA method, based on the ranking of each gene across all datasets, was used to screen out the candidate genes (score < 0.05) (Supplementary Table 1). As a result, six homeobox genes, TSHZ3, ZFHX4, ZEB2, MEIS1, ISL1, and HOXC4, were screened out, and the logFC values of each gene in the different datasets were then calculated and are shown in Figure 2F. Moreover, correlation analysis of the six homeobox genes was performed, and the results showed that there were significant positive correlations between most genes (Figure 2G).
Correlation of the Six Homeobox Genes With Clinical Status and Mutation Landscape
To explore the clinical significance of these six genes, pan-cancer analysis in BLCA (Figure 3A) and 23 other tumors (Supplementary Figure 1) was performed, and the results revealed that the expression of TSHZ3, ZFHX4, ZEB2, MEIS1, and ISL1 in tumors was significantly lower than that in normal tissues, while the expression of HOXC4 was higher than that in normal tissues, especially in BLCA, breast invasive carcinoma (BRCA), prostate adenocarcinoma (PRAD), and head and neck squamous cell carcinoma (HNSC). Furthermore, we analyzed the correlation between these six homeobox genes and tumor size, regional lymph node involvement, and distant metastases (TNM) as well as the BLCA stage and found that TSHZ3, ZFHX4, and ZEB2 were positively correlated with T stage, N stage, and BLCA stage, but there was no significant correlation with metastasis (Figures 3B-E). In addition, we analyzed the mutation landscape of these six DEHGs in BLCA. Among the 411 samples, 19.46% had at least one gene mutation; ZFHX4 mutation was the most common change, accounting for 12% of mutations, while ZEB2, TSHZ3, MEIS1, and ISL1 mutations accounted for 5, 3, 1, and 1% of all mutations, respectively. The waterfall diagram formed according to the mutation landscape of these six DEHGs showed that most mutations were missense mutations (Figure 2H). The driver genes ERBB2, HDAC1, PARP1, ERBB3, FGFR3, mTOR, AXL, EZH2, FGFR1, FGFR2, CSF1R, KIT, FGFR4, RET, and ERBB4 are key targets in the treatment of BLCA. Furthermore, we assessed the correlations between these six genes and the BLCA driver genes in the BLCA dataset and found that these six genes have a strong correlation with these driver genes (Supplementary Figure 2).
A High Risk Score Was Associated With a Poor Clinical Outcome
The prognostic value of the six-homeobox-gene signature was evaluated in the training dataset and the testing dataset. We calculated the risk score for each BLCA sample in the training set, ranked the samples according to this score, and divided them into high-risk and low-risk groups based on the median risk score. We used scatter plots to show the survival status of BLCA patients based on their risk scores, and we then performed a chi-square test on the data (Figures 4A,C). The results demonstrated that patients in the high-risk group had a higher mortality rate than those in the low-risk group (p = 0.033). The heat map of the expression profiles of these six homeobox genes showed that ISL1, ZFHX4, TSHZ3, and ZEB2 were more highly expressed in high-risk BLCA samples, while HOXC4 and MEIS1 were highly expressed in the low-risk group (Figure 4E). The results for the testing dataset were consistent with those for the training dataset (Figures 4B,D,F).
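The chi-square comparison of survival status between the risk groups can be reproduced with a simple contingency-table test; the sketch below uses scipy with a made-up 2x2 table, so the counts are placeholders rather than the study's data.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: high-risk, low-risk; columns: deceased, alive (placeholder counts).
table = np.array([[60, 80],
                  [35, 110]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")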
Kaplan-Meier analysis was performed on the training dataset, the testing dataset, and all datasets (Figures 4G-I), and the results revealed that the survival time of the low-risk group was significantly longer than that of the high-risk group.
GO Function Annotation and KEGG Pathway Analysis Between the High-Risk and Low-Risk Groups
The DEGs between the two risk groups were analyzed using GO functional annotation and KEGG pathway analysis with the R software package "clusterProfiler." The GO analysis of biological process (BP), molecular function (MF), and cellular component (CC) terms showed that most of the enriched terms were related to immunity, including B cell-mediated immunity, immunoglobulin-mediated immune response, immunoglobulin complex, and antigen binding (Figure 5A). The KEGG pathway analysis showed that the DEGs were mainly enriched in cytokine-cytokine receptor interactions, Staphylococcus aureus infection, cell adhesion molecules, etc., most of which are related to immunity (Figure 5B).
The Signature Composed of Six Homeobox Genes Was Closely Related to Immunity
Since the results of the GO functional annotation and KEGG pathway analysis showed that the signature was related to immunity, analysis of the risk score and immune cell infiltration was then performed to further confirm this conclusion. The results showed that there were differences in the infiltration of most immune cells, except for CD56dim natural killer cells, eosinophils, and monocytes, between the high- and low-risk groups, which demonstrated that the signature was significantly correlated with immune infiltration (Figure 6A). In addition, we analyzed the correlation of each gene with the infiltration of immune cells, and the results indicated that TSHZ3, ZFHX4, and ZEB2 were related to almost all immune cell types and that MEIS1, ISL1, and HOXC4 were related to some immune cell types (Figure 6B). Furthermore, we also performed correlation analysis between these six genes and cytokines related to T cell function. The results showed that five of the six genes, TSHZ3, ZFHX4, ZEB2, MEIS1, and ISL1, had a strong correlation with most cytokines, while HOXC4 had a strong correlation with IL-17A (Supplementary Figure 3). Similarly, analysis of the correlations between the expression of these six homeobox genes and immune checkpoints showed that TSHZ3, ZFHX4, and ZEB2 were significantly correlated with the expression of CTLA-4, PD-L1, PD-L2, and PD-1. MEIS1 was strongly correlated with the expression of PD-1. In addition, ISL1 was significantly correlated with CTLA-4, PD-L2, and PD-1 expression (Figure 6C). Then, we analyzed the relationship between the risk score and the response to immunotherapy. The samples were divided into response and no-response groups, and the difference in risk scores between the two groups was assessed. The results showed that the risk scores were higher in the no-response group than in the response group (Figure 6D).
Evaluation and External Validation of the Signature Model Performance
The ROC curves of the training set, testing set, and entire dataset (combination of training and testing sets) were plotted, and the area under the ROC curve (AUC) was calculated to verify the accuracy of this signature. The AUCs for one-, three-, and five-year OS were 0.631, 0.606, and 0.609 in the training set; 0.679, 0.652, and 0.671 in the testing set; and 0.647, 0.629, and 0.633, respectively, in the entire dataset (Figures 7A-C).
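The AUCs quoted above come from time-dependent ROC analysis (the survivalROC package in R). As a simplified stand-in, the Python sketch below computes an ordinary ROC AUC for a fixed horizon, treating three-year survival status as a binary label and the signature risk score as the predictor; all values are randomly generated placeholders.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 200
risk_score = rng.normal(size=n)                       # hypothetical signature risk scores
# Placeholder ground truth: 1 = death within three years, 0 = survived beyond three years,
# generated so that higher scores are mildly associated with events.
death_within_3y = (risk_score + rng.normal(scale=2.0, size=n)) > 0.5

auc = roc_auc_score(death_within_3y.astype(int), risk_score)
print(f"3-year AUC (toy data): {auc:.3f}")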
To compare the consistency of the model predictions with actual clinical outcomes, calibration curves for one-, three-, and five-year OS were constructed for the training set (Supplementary Figures 4A-C), the testing set (Supplementary Figures 4D-F), and the entire dataset (Supplementary Figures 4G-I). The calibration curves showed satisfactory agreement between the predicted and observed values for one-, three-, and five-year OS. We further validated the prediction ability of this prognostic signature using the GEO datasets GSE13507, GSE19423, and GSE37815 for external validation. The risk score of each sample was calculated, and the samples were divided into high-risk and low-risk groups based on the optimal splitting point. Kaplan-Meier analysis of GSE13507 (p = 0.17), GSE19423 (p = 0.027), and GSE37815 (p = 0.012) showed that the high-risk group tended to have a shorter survival time than the low-risk group (Figures 7D-F).
DISCUSSION
There are many studies on biomarkers of BLCA, such as urine cytology and urine biomarkers; the detection of exfoliated tumor cells in urine or bladder lavage has a high sensitivity for the diagnosis of high-grade BLCA but is less sensitive for low-grade BLCA. There are many biomarkers with unique functions, such as radiotherapy markers, chemotherapy markers, and immunotherapy markers, but these markers have a single function (Giordano and Soria, 2020), and most of them involve single targets, which easily causes false-positive or false-negative results. The application of RNA-Seq and bioinformatic analysis of databases has provided a theoretical basis for mechanistic studies of tumorigenesis and development. Zhu et al. identified some immune-related genes as prognostic factors in BLCA (). Lian et al. established a signature including eight long non-coding RNAs as a candidate prognostic biomarker for BLCA (). At present, there are few biomarkers that can predict both clinical outcomes and the immunotherapy response. In this study, a clinical prediction model containing six homeobox genes was constructed from next-generation sequencing (NGS) data, which can not only predict the prognosis of patients but also predict their immune response. As sequencing technology becomes more widespread, and its price and convenience continue to improve, this study has good clinical applicability. Although the homeobox gene family is closely related to BLCA (), few studies have focused on its prognostic value in BLCA. Therefore, we analyzed the RNA-Seq data of a large number of samples from the TCGA and GEO public databases and screened out six significant DEHGs, namely TSHZ3, ZFHX4, ZEB2, MEIS1, ISL1, and HOXC4, by the RRA method. Some of these six homeobox genes have been reported to regulate tumor progression and were identified as potential prognostic markers in previous studies. For example, aberrant HOXC4 expression is prevalent and plays an important role in the development of prostate cancer (Luo and Farnham, 2020). Moreover, HOXC4 can promote hepatocellular carcinoma progression by transactivating Snail (). The expression of TSHZ3 is significantly downregulated in human glioma tissues and cell lines, and overexpression of TSHZ3 decreases the invasiveness of U87 and U251 glioblastoma cells (). In addition, the downregulation or deletion of TSHZ3 function is involved in the pathogenesis of ovarian cancer (), which suggests that TSHZ3 plays a tumor-suppressive role.
ZFHX4 is required for the regulation of glioblastoma tumor-initiating cells, and its inhibition leads to reduced tumorigenesis and increased glioma-free survival time. Mutations in ZFHX4 are strongly associated with a poor prognosis, and downregulation of ZFHX4 inhibits the progression of esophageal squamous carcinoma (). ZEB2 can promote the migration and invasion of gastric cancer cells by regulating epithelial-mesenchymal transition (EMT) and is a potential target for gene therapy of invasive gastric cancer (). Deregulation of the negative feedback between GATA3 and ZEB2 can promote breast cancer metastasis (). The expression level of MEIS1 in acute myeloid leukemia (AML) is negatively correlated with prognosis (). ISL1 plays an important role in a variety of cellular processes, including cytoskeleton genesis, organogenesis, and tumorigenesis (Zheng and Zhao, 2007), and has been found to be a highly specific marker for pancreatic endocrine tumors and metastases (). In addition, it is also significantly associated with aggressive tumor characteristics, tumor recurrence, tumor progression, and disease-specific mortality (DSM) in BLCA and plays an important role in multiple stages of bladder tumorigenesis (). We constructed a predictive signature based on these six prognostic homeobox genes. The expression profiles of the signature genes showed that tumors with higher risk scores tended to exhibit elevated ISL1, ZFHX4, TSHZ3, and ZEB2 levels, while those with lower risk scores tended to exhibit elevated HOXC4 and MEIS1 levels. Patients with high risk scores according to the signature had a poor prognosis. We then performed survival analysis on the training dataset, the testing dataset, and all datasets. The results showed that the high-risk group had a shorter survival time than the low-risk group. Finally, we validated the performance of the signature using GEO datasets. Overall, the signature can predict the prognosis of patients accurately and has good prognostic value. Errors in the process of DNA replication are random and universal and are subject to correction and repair by the DNA mismatch repair system. Once the dynamic balance between the two is disrupted, gene mutations can easily occur, which will affect the expression of the corresponding genes and facilitate tumorigenesis and development (). We analyzed the mutation landscape of these six genes in BLCA. Among the 411 samples, 19.46% had at least one gene mutation. Driver genes are important genes associated with tumor development and play a driving role in the process of cancer development and progression (). Currently, the driver genes of BLCA include ERBB2, HDAC1, PARP1, and mTOR. These genes are important targets in BLCA treatment (). We performed correlation analysis between these six genes and the BLCA driver genes in the BLCA dataset and found that these six genes have a strong correlation with the driver genes. The results indicate that the six homeobox genes play an important role in the development of BLCA and that the signature could be used in the prediction of BLCA prognosis. As a major component of the tumor microenvironment (TME), immune infiltration has been shown to contribute to tumor progression and the immunotherapeutic response (), and tumor-infiltrating immune cells, particularly T cells, are the cellular basis of immunotherapy.
A better understanding of immune cells in the TME is critical to deciphering the mechanisms of immunotherapy, defining predictive biomarkers, and identifying new therapeutic targets (Zhang and Zhang, 2020). In our GO analysis, most of the enriched functional terms were immune-related, and the same results were obtained by KEGG analysis. The analysis of the risk score and immune cell infiltration then showed that there were differences in the infiltration of most immune cells between the high- and low-risk groups, which demonstrated that this signature was significantly correlated with immune infiltration. Immune cells in tumors work together to control tumor growth, and the effectiveness of immunotherapy depends on the synergistic response of innate and adaptive immune cells, particularly T cells (). The function of T cells is usually classified based on whether they secrete specific effector molecules or cytokines: effector CD4+ T cells include different functional subtypes (Th1 cells secrete IL-2 and IFN-γ; Th2 cells secrete IL-4 and IL-13; Th17 cells secrete IL-17A, etc.), while effector CD8+ T cells secrete cytotoxic mediators (perforin and granzymes) or proinflammatory cytokines (TNF-α, IFN-γ) (). Therefore, in order to analyze the correlation between these six genes and T cell function, we further analyzed the correlation of these six genes with those cytokines, and the results showed that five of the six genes, TSHZ3, ZFHX4, ZEB2, MEIS1, and ISL1, had a strong correlation with most cytokines, while HOXC4 had a strong correlation with IL-17A. The expression of the six homeobox genes in the signature was correlated with most immune checkpoints (CTLA-4, PD-L1, PD-L2, and PD-1). At present, the most commonly used immunotherapy drugs in clinical practice are immune checkpoint inhibitors. Therefore, we analyzed the relationship between the risk score and the response to immunotherapy. The results showed that the risk scores were higher in the no-response group than in the response group. Above all, this signature was highly correlated with immunity and should be a good predictor of the patient's response to immunotherapy. However, this study has some limitations. It is based on the TCGA and GEO databases, whose data reliability cannot be independently verified, and it lacks experimental evidence, being mostly based on bioinformatic prediction, which limits its immediate applicability in clinical practice. In addition, the number of non-tumor tissues assessed in this study is rather small (n = 18), which constitutes a potentially important bias influencing the results. The GEO datasets used for validation were relatively small. Further validation of the model's prediction accuracy with clinical data is needed.
CONCLUSION
We constructed a risk prediction signature with six homeobox genes, which showed good accuracy and consistency in predicting the patient's prognosis and the response to immunotherapy. Therefore, this signature could be a potential biomarker and treatment target for BLCA patients.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
TY made substantial contributions to the conception, design, interpretation, and preparation of the final manuscript. BD, JL, WS, JS, MZ, SZ, and YM participated in the coordination of data acquisition and data analysis and reviewed the manuscript. DL reviewed and revised the manuscript.
Ram is working on two high-performance pickups, including the most powerful one ever made.
Sources tell 5thGenRams.com that the automaker will soon be launching a Ram 1500 inspired by the Rebel TRX concept truck that was shown two years ago.
The report says that it will be offered with both a new 520 hp 7.0-liter V8, called the Banshee, and the 707 hp supercharged V8, currently available in the Dodge Charger and Challenger Hellcats and Jeep Grand Cherokee Trackhawk.
Unlike those street machines, the TRX Concept was conceived to take on the Ford F-150 Raptor as a high speed, off-road truck designed for flying across the desert at triple-digit speeds, with a wide stance and long-travel suspension.
Prototypes of the production version, based on the recently introduced 2019 Ram 1500, have already been spotted being tested, and the trucks are expected to arrive for the 2021 model year.
Giuseppe Biava
AlbinoLeffe
A centre back, Biava started his career with the Leffe youth team, before moving to newly promoted Serie D team Albinese in 1995. During his time with Albinese before the 1998 merger with the Leffe to found AlbinoLeffe, Biava achieved a promotion to Serie C2 in 1997. In 1998–99, Biava helped AlbinoLeffe to ensure promotion to Serie C1. After a one-year loan at Serie C2 side Biellese, Biava returned to AlbinoLeffe in 2001, and was part of the team that won the promotion playoffs in 2003, thus ensuring them a Serie B place.
In 2003–04, Biava made his Serie B debut and scored the goal that secured AlbinoLeffe's first victory in the league, a 1–0 win against Fiorentina.
Palermo
His performances in the league provoked interest from Serie B club Palermo, which signed him during the January 2004 transfer window. Biava immediately gained a place in the regular lineup, forming a defensive line together with Pietro Accardi and leading the rosanero back to Serie A after over 30 years.
Biava was confirmed by head coach Francesco Guidolin as a key player during the 2004–05 season, coincidentally his first season in the top flight, which ended with a historic first qualification for Palermo to the UEFA Cup. However, he failed to reproduce those performances in the following season, in which he was mostly featured as a backup player. With Guidolin back at the helm of Palermo in 2006–07, Biava regained a more important role in the Palermo squad, returning to play at his usual level.
During the summer of 2007, Biava refused an offer from his childhood team Atalanta, preferring instead to stay at Palermo even though the club management could not guarantee him a place as a regular. He subsequently agreed a contract extension with the rosanero until June 2009.
Genoa
In June 2008 Biava was sold to Genoa for €500,000, with Cesare Bovo moving the opposite way. At Genoa, he quickly established himself as a regular for the rossoblu. In their 2008–09 Serie A campaign, which ended in fifth place, he partnered with Salvatore Bocchetti and/or Matteo Ferrari. In the first half of the 2009–10 season he often partnered with Bocchetti, but in January he faced competition from new signing Dario Dainelli as the team's coach preferred a 3–5–2 formation with three central defenders and two wing-backs.
Lazio
On 1 February 2010, Biava joined Lazio for a transfer fee of €800,000 (including other costs Lazio paid a total of €1.04 million). Along with new signing André Dias, he became the backbone of the Lazio team starting in 11 matches in the second half of the 2009–10 season and helped the club survive the relegation battle.
In the 2010–11 season Biava made 36 appearances, guiding Lazio to a strong 5th-place finish.
In the 2011–12 season, Biava was limited to 31 appearances due to muscle issues.
He made a career-high 43 appearances in all competitions in the 2012–13 season, leading Lazio to the quarter-finals of the 2012–13 UEFA Europa League, where they were knocked out by Fenerbahçe, and helped Lazio defeat traditional rivals A.S. Roma to win the 2012–13 Coppa Italia.
Biava missed large portions of the first half of the 2013–14 season, but finished the season strongly, making 24 appearances in all competitions.
Atalanta
On 17 July 2014, Biava signed for home province club Atalanta B.C. as a free agent. |
LAS CRUCES - Alex Moon has a new idea, and the biology doctorate student at New Mexico State University is trying to patent it. Through the process, the Patent and Trademark Resource Center at NMSU’s Zuhl Library has been a valuable resource.
Established at NMSU in October 2016, the PTRC is the only office in New Mexico and the surrounding region and is officially affiliated with the United States Patent and Trademark Office in Alexandria, Virginia.
“This program is set up through the USPTO to help inventors who don’t have the support or money up front to get going on the patent process,” said David Irvin, NMSU business and government documents librarian and PTRC representative.
The PTRC can help individuals conduct prior-art searches for patents and trademarks so they can determine whether a patent or trademark already exists before proceeding with the process. The PTRC is a free service and open to the public.
Moon, who is trying to patent his method for building 3D-printed casts, filed LLC paperwork in January 2018 for his business and then visited the PTRC and talked with Irvin to research the viability of his idea.
The PTRC provides access to not only the research software and USPTO training materials but also referrals to the ProBoPat program, which connects individuals to pro bono intellectual property attorneys.
“It saves inventors money to be able to have access to an office like this,” Irvin said. “It also is part of the extension and outreach mission of the university ... to reach out and provide resources for inventors, innovators and entrepreneurs who are applying for patents or trademarks.”
Moon encourages local inventors to visit the PTRC.
“Don’t be afraid to use it. It’s a very awesome resource that is there and not many people utilize, which is really sad,” he said.
Since the PTRC office opened, Irvin estimates 130 people have been helped. Starting this summer, the PTRC will be open all year; previously, the office closed in the summer.
Additionally, Irvin mentioned the NMSU Library collects and binds plant patents from the USPTO. Since 1994, NMSU has received plant patent volumes that are housed in the Branson Library.
For more information visit http://nmsu.libguides.com/PTRC. |
An Application of Tabu Search Algorithms and Genetic Algorithms in Collaborative Logistics Optimization
This paper discusses a new way to solve the Vehicle Routing Problem (VRP) in logistics using Tabu Search Algorithms (TSA), and optimizes the resulting routes and the distribution centers using Genetic Algorithms (GA). The study focuses on incorporating sugar industry and milled rice distribution center management in Thailand for the export channel. It provides a systematic framework for solving the cost-minimization problem of sugar and milled rice transport from the mills to the seaports. The results demonstrate that the approach is not only a useful tool for minimizing cost and finding new routes, but can also manage distribution centers and seaport exporting. While the focus of the paper is on the sugar and rice supply chains, much of the information is relevant to distribution center management for other agricultural commodities as well.
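Since the abstract names the metaheuristics without detailing them, the following self-contained Python sketch shows a tabu search on a toy single-vehicle routing instance, using a swap neighbourhood, a fixed tabu tenure and an aspiration criterion; the coordinates, tenure and iteration count are illustrative assumptions rather than parameters from the study, and the genetic-algorithm component for distribution-centre selection is not shown.

import itertools, math, random

# Toy depot-and-customer coordinates (hypothetical); a single route visits all customers.
points = {"depot": (0, 0), "A": (2, 4), "B": (5, 1), "C": (6, 5), "D": (1, 7), "E": (8, 3)}

def route_cost(route):
    """Total tour length: depot -> customers in the given order -> depot."""
    stops = ["depot"] + list(route) + ["depot"]
    return sum(math.dist(points[a], points[b]) for a, b in zip(stops, stops[1:]))

def tabu_search(customers, iterations=200, tenure=7):
    current = list(customers)
    random.shuffle(current)
    best, best_cost = current[:], route_cost(current)
    tabu = {}  # move (i, j) -> iteration until which that swap stays forbidden
    for it in range(iterations):
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbour = current[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            cost = route_cost(neighbour)
            # Aspiration criterion: allow a tabu move if it beats the best known route.
            if tabu.get((i, j), -1) < it or cost < best_cost:
                candidates.append((cost, (i, j), neighbour))
        cost, move, current = min(candidates)   # best admissible neighbour
        tabu[move] = it + tenure
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

random.seed(0)
route, cost = tabu_search([c for c in points if c != "depot"])
print("best route:", route, "cost:", round(cost, 2))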
The Bulgarian cabinet approved on Wednesday the country's participation in negotiations for the purchase of new fighter jets.
This was announced by Defence Minister Nikolay Nenchev after the government granted him mandate to head negotiations with Belgium, the Netherlands and Greece.
It was still not clear whether the negotiations will concern the purchase of new jets or second-hand ones, daily Dnevnik reports.
Nenchev clarified that the above countries offered second-hand F-16s, but it was not excluded that negotiations could also be held with producers on the purchase of new jets.
In order to ensure the proper functioning of the Bulgarian air force, a squadron of at least nine jets should be purchased.
The Defence Minister added that the Bulgarian state is not pushed by deadlines for the purchase of new fighter jets as the issue with the repair of the MiG-29s has been resolved.
An agreement on the repair of two or four of the engines is expected to be signed with Poland by the end of next month at the latest.
The second stage foresees for the repair of another 10-12 engines.
Nenchev pointed out that Poland offered much more favourable terms and lower prices than Russia for the repair of the MiG-29s.
As regards safeguarding the country's airspace, which was regulated within the framework of NATO's Air Policing missions, only Poland has expressed interest.
Challenging the validity of imposing contraindications to thrombolysis for acute ischemic stroke
A colleague once challenged grand rounds attendees to identify any pharmacologic treatment for stroke, other than recombinant tissue plasminogen activator (rtPA), which, 15 years after Food and Drug Administration approval, still evoked a mental checklist of eligibility criteria with numerous contraindications; the audience was mute. Are all IV thrombolysis exclusion criteria really necessary? The Simplified Management of Acute Stroke Using Revised Treatment (SMART) protocol considers all ischemic stroke patients for thrombolysis regardless of common rtPA exclusion criteria, contending that such criteria are consensus-based, not evidence-based; nearly 20% of all ischemic stroke patients received IV rtPA without an increase in adverse events.1 SMART proposed that exclusion criteria be critically reevaluated,1 with many investigators adopting an empirical approach to this challenge.2,3 Concerns over poorer post-thrombolysis outcomes in patients with a history of prior stroke and diabetes prompted trialists to exclude these patients from the third European Cooperative Acute Stroke Study (ECASS 3).4 Prior stroke reduces the benefits from thrombolysis.5 Furthermore, there is a clear association of hyperglycemia with worse outcomes in ischemic stroke, through multiple possible mechanisms6: increased plasminogen activator inhibitor-1 activity, resistance to antithrombotic agents, and
Last year, in the inaugural weeklong Fedstival, Government Executive and Nextgov convened leading federal officials and thinkers to share ideas about tackling government’s biggest challenges.
Now we’re deep into planning for Fedstival 2017, which will take place from Sept. 18-22 in Washington. The series of events culminates in Bold Friday, during which federal innovators from all corners of government present the important work they’re doing in a series of rapid-fire presentations.
At last year’s Bold Friday, experts from the National Defense University, the Office of Management and Budget, the U.S. Agency for International Development, the National Park Service and many other federal organizations shared stories of their cutting-edge work. This year, a new group of federal leaders, selected by a panel of Government Executive and Nextgov editors, will take the stage to tell their peers and colleagues how they’re making a difference in technology, management strategy and workforce development across government agencies.
If you or someone you know fits that bill, we want to hear about it. Nominations are now open for this year’s Bold Friday presentations. This is your chance to highlight the important work you’re doing, share your ideas with peers from other agencies and learn from their experiences.
Overall, this year’s Fedstival will take an in-depth look at the new administration’s efforts in management, workforce and IT innovation. We’ll convene a wide range of practitioners and observers for conversations, debates, workshops, live events, networking and much more. |
The Middle East
The undeniable threat that Iran poses to the other countries of the Middle East has its roots in the fundamentalist Islamist ideology that the ruling clerics there espouse. The clerics aspire to evict the long-entrenched military forces, aid, and other influences of the United States from the region and thus to become the dominant power. In pursuit of these ambitious goals, Iran has trained and sponsored various proxy forces and terrorists in other nearby countries and has sought to acquire nuclear weapons for the state. Ultimately, these efforts have served to undermine the prospects of building peace in the region.
Ever since Alan MacDiarmid, Hideki Shirakawa, and Alan Heeger discovered and developed conducting polymers and made it possible to dope these polymers over the full range from insulator to metal, a new field of research bordering chemistry and condensed-matter physics has emerged, creating a number of opportunities for applications in photoelectronics, electronics, and electrochemistry. Conducting polymers have the advantages of stable physical and chemical properties and high conductivity. More significantly, conducting polymers provide an excellent interface between the electron-transporting phase (electrode) and the ion-transporting phase (electrolyte). In addition, the conductivity of conducting polymers depends on variables such as redox state and pH, which makes conducting polymers ideal for smart materials such as sensors.
In recent decades, conducting polymer hydrogels have received increasing attention for their promising applications in biosensors, chemical sensors, bioelectrodes, biobatteries, microbial fuel cells, microbial electrolysis cells, medical electrodes, artificial muscles, artificial organs, drug release, and biofuel cells, for the following reasons:
1) Conducting polymer hydrogels have a nanostructured framework and a sufficiently large interfacial area, which enhances the diffusion of ions and molecules as well as the transport of electrons;
2) Conducting polymer hydrogels have a softer mechanical interface compared to conventional metal electrodes;
3) Conducting polymer hydrogels provide a biocompatible environment closely matching that of biological tissues.
To date, only a few methods have been developed to synthesize conducting polymer hydrogels, owing to the difficulty of achieving the two prerequisite conditions for conducting polymers to form hydrogels: 1) hydrophilicity of the polymer; 2) chemical or physical crosslinking between polymer chains. Synthesis of conducting polymer hydrogels has been carried out by the following methods:
1) Synthesizing the conducting polymer in the matrix of a non-conducting polymer hydrogel (i.e. forming a composite material consisting of a non-conductive hydrogel and a conducting polymer);
2) Using multivalent metal ions such as Fe3+ or Mg2+ to crosslink a water-soluble conducting polymer such as poly(3,4-ethylenedioxythiophene) (PEDOT) through the ions interacting with the negatively charged electrolytic dopant;
3) Crosslinking polyaniline (PAni) by a chemical reaction between the epoxy group of a non-conducting crosslinking agent and the amino group on PAni.
However, all of the above methods introduce impurities or non-functional materials, such as metal ions or non-functional polymers, and thereby deteriorate the conductivity, electroactivity, or biocompatibility of the conducting polymers. In method 1), a biocompatible composite material can be formulated using a conducting polymer held in the matrix of hydrogels such as poly(vinyl alcohol), poly(ethylene glycol), polyacrylamide, chitosan, poly(2-hydroxyethyl methacrylate), poly(acrylic acid), poly(acrylamide), and alginate hydrogel. In this case, however, the non-functional polymer hydrogel impurity undoubtedly lowers the conductivity and electroactivity of the material, which reduces the performance of electrodes and sensors. In method 2), the crosslinked conducting polymer hydrogel is formed by ionic interaction of metal ions with the negative polyelectrolyte dopant, which reduces the biocompatibility and enzyme activity of the hydrogels, as high quantities of metal ions are required to form gels. In method 3), crosslinked PAni is made by the reaction between the epoxy crosslinking group and the amino group on the PAni main chain, which greatly reduces the conductivity of the conducting polymer. In summary, the existing synthetic methods cannot meet the requirements of the wide range of applications of conducting polymers, such as biomedical devices, biobatteries, and microbial fuel cells.
Men Versus Women. Although women represent a large segment of our cardiovascular patients, adequate, high-quality clinical trial data on women remain scarce and disproportionately lower than what is needed to guide practice decisions. A recent analysis from the Food and Drug Administration highlighted that there is significant underrepresentation of women in clinical trials involving coronary disease, acute coronary syndrome, and myocardial infarction.1 Because women present at older ages, often have different disease manifestations, and may differ in response to treatments, the Food and Drug Administration and others have provided formal guidance on the inclusion of women in clinical trials, but this issue remains far from resolved. Women with coronary disease and acute coronary syndrome present an additional challenge to researchers and clinicians. There are multiple examples in the literature suggesting that women may have differential responses to cardiovascular therapies, but this has often been later disproven on further study and improved patient risk stratification. For example, GP IIb/IIIa (glycoprotein IIb/IIIa) inhibitors were initially thought to increase the risk of bleeding for women presenting with non-ST-segment-elevation myocardial infarction and were thus widely underused, despite evidence about the potential ischemic benefits. Much of this sex difference in bleeding may have been attributable to more frequent overdosing of GP IIb/IIIa inhibitors in women compared with men.2 Importantly, despite some early observations that women might benefit less than men treated with potent platelet inhibition, when analyses were done among troponin-positive patients, GP IIb/IIIa inhibitors were found to be equally efficacious in men and women.3 This is an example of the importance of better defining the patient population, for both men and women, that is most likely to benefit from aggressive pharmacological interventions. The 2014 American College of Cardiology/American Heart Association non-ST-elevation acute coronary syndromes guidelines explicitly recommend that women presenting with acute coronary syndrome should be treated with the same pharmacological interventions as men (Class I recommendation).4 In this issue of Circulation: Cardiovascular Interventions, Berry et al5 once again demonstrate the importance of establishing clear inclusion criteria (ie, which patients will benefit) when studying antithrombotic therapies for patients with coronary disease. Using data from the DAPT study (Dual Antiplatelet Therapy), the authors examined whether the risks/benefits of prolonged dual antiplatelet therapy differed by sex. The DAPT study assessed the effectiveness of dual antiplatelet therapy (clopidogrel or prasugrel in combination with aspirin) beyond 12 months after coronary stenting. Of 7175 enrolled women, 2925 (40.7%) met the inclusion criteria for randomization, which included good adherence to 12 months of DAPT.
Molecular Characterization of Cereal Yellow Dwarf Virus-RPV in Grasses in the European Part of Turkey Abstract Yellow dwarf viruses (YDVs) are economically destructive viral pathogens of cereal crops that reduce grain yield and quality. Cereal yellow dwarf virus-RPV (CYDV-RPV) is one of the most serious virus species among the YDVs. In Turkey, these viruses cause epidemics in cereal fields at certain times of the year, and potential natural reservoir hosts play a significant role in their epidemiology. This study was conducted to investigate the presence and prevalence of CYDV-RPV in grasses and volunteer cereal host plants comprising 33 species from the Poaceae, Asteraceae, Juncaceae, Geraniaceae, Cyperaceae, and Rubiaceae families in the Trakya region of Turkey. A total of 584 symptomatic grass and volunteer cereal leaf samples exhibiting yellowing, reddening, irregular necrotic patches and dwarfing symptoms were collected from Trakya and tested by ELISA and RT-PCR. The screening tests showed that 55 of the 584 grass samples, all belonging to the Poaceae family, were infected with CYDV-RPV, while no infection was detected in the other families. The overall incidence of CYDV-RPV was 9.42%. Transmission experiments using the aphid species Rhopalosiphum padi L. showed that CYDV-RPV was transmitted persistently from symptomatic intact grasses such as Avena sterilis, Lolium perenne and Phleum exratum to seedlings of barley cv. Barbaros. PCR products of five Turkish RPV grass isolates were sequenced and compared with eleven known CYDV-RPV isolates in the GenBank/EMBL databases. Comparison of the nucleotide and amino acid sequences of the CYDV-RPV isolates showed identities of 40.38–95.86% and 14.04–93.38%, respectively. In this study, 19 grass species from the Poaceae family and two volunteer cereal host plants were identified as natural reservoir hosts of CYDV-RPV in the cereal-growing areas of Turkey. |
Comparison of 99mTc- and 18F-Ubiquicidin Autoradiography to Anti-Staphylococcus aureus Immunofluorescence in Rat Muscle Abscesses 99mTc-ubiquicidin (UBI) 29-41 is under clinical evaluation for discrimination between bacterial infection and nonspecific inflammation. We compared the distribution of 99mTc-UBI 29-41, the potential PET tracers 18F-UBI 29-41 and 18F-UBI 28-41, and 3H-deoxyglucose (DG) in rat muscle abscesses with that of anti-Staphylococcus aureus immunofluorescence imaging. Methods: Calf abscesses were induced in 15 CDF-Fischer rats by inoculation with Staphylococcus aureus. One to 6 d later, either 18F-UBI 29-41 and 3H-DG (n = 5), 18F-UBI 28-41 and 3H-DG (n = 6), or 99mTc-UBI 29-41 and 3H-DG (n = 4) were injected simultaneously. Dual-tracer autoradiography of the abscess area was compared with the distribution of bacteria and macrophages. Results: The UBI derivatives exhibited increased uptake in the abscess area that partly matched 3H-DG uptake and macrophage infiltration but showed no congruity with areas that were highly positive for bacteria. Conclusion: Specific binding of UBI derivatives to Staphylococcus aureus in vivo could not be confirmed in this study. |
The roles of microRNAs in regulation of mammalian spermatogenesis Mammalian spermatogenesis comprises three continuous and highly organized processes, in which spermatogonia undergo mitosis and differentiate into spermatocytes, proceed through meiosis to form haploid spermatids, and ultimately transform into spermatozoa. These processes require accurately, spatially and temporally regulated gene expression. MicroRNAs are a novel class of post-transcriptional regulators. Accumulating evidence has demonstrated that microRNAs are expressed in a cell-specific or stage-specific manner during spermatogenesis. In this review, we focus on the roles of microRNAs in spermatogenesis. We highlight that N6-methyladenosine (m6A) is involved in the biogenesis of microRNAs and that miRNAs regulate the m6A modification on mRNA, and that specific miRNAs have been exploited as potential biomarkers for male factor infertility, which will provide an insightful understanding of microRNA roles in spermatogenesis. Background Male fertility depends upon the successful perpetuation of spermatogenesis, a highly organized process of germ cell differentiation occurring within the seminiferous tubules of the testes. Spermatogonial stem cells (SSCs) are a subset of undifferentiated spermatogonia that are capable of self-renewal to maintain the pool of SSCs or of differentiation to give rise to the spermatogenic lineage, thus supporting the continuous production of spermatozoa. Spermatogenesis initiates once SSCs enter the differentiation process. The spermatogonia enter the meiotic phase and become spermatocytes. After a long-lasting meiosis I, preleptotene spermatocytes transform into secondary spermatocytes and enter meiosis II to produce haploid round spermatids, which undergo spermiogenesis, including acrosomal biogenesis, flagellum development, chromatin condensation, and cytoplasmic reorganization and exclusion. Ultimately, the round spermatids transform into spermatozoa, which are released into the lumen of the seminiferous tubules. This highly organized spermatogenesis requires accurate, spatial and temporal regulation of gene expression governed by transcriptional, post-transcriptional and epigenetic processes. More than a thousand protein-coding genes involved in spermatogenesis have been identified. However, the mechanisms that mediate the expression of these spermatogenesis-related genes have not been fully uncovered. MicroRNAs (miRNAs, miR), small (~22 nucleotides) single-stranded noncoding RNAs, are linked to cell proliferation, differentiation and apoptosis. Transcriptome data indicate that miRNAs are extensively transcribed during spermatogenesis. The miRNAs are differentially expressed in a cell-specific and step-specific manner (Chen et al., unpublished data). Some miRNAs are specifically expressed in certain types of male germ cells, while others are universally expressed among different cell types in the testes. Growing evidence has shown that miRNAs are essential for male germ cell development and differentiation. A few recent reviews have reported the roles of miRNAs in spermatogenesis and fertility. In this article, we briefly summarize the most recent progress on miRNAs in the regulation of spermatogenesis. miRNA biogenesis At present, 1881 miRNA loci have been annotated in the human genome in the miRNA database (http://www.mirbase.org).
Analysis has revealed that 1% of the human genome is miRNA genes, of which about half of miRNA genes located in the introns (intronic miRNAs) of host genes. However, some intronic miRNAs exhibit low correlated expression level with their host genes. It is likely these miRNAs are transcribed from unique transcription units independent of host genes. The biogenesis of miRNAs is modulated at a few levels, including miRNA transcription, processing by Drosha and Dicer, RNA methylation, uridylation and adenylation ( Fig. 1). The initial transcripts are termed the primary miRNAs (pri-miRNAs) that are variable in length from several hundreds to thousands of nucleotides. The pri-miRNAs are methylated by the methyltransferase like 3 (METTL3), marking them for recognition and processing by the DiGeorge syndrome critical region 8 (DGCR8). The pri-miRNAs are thus processed by drosha ribonuclease III (Drosha) and its cofactor DGCR8 into~70 nucleotides (nt) long miRNA precursor (pre-miRNAs). The pre-miRNAs are then transported into the cytoplasm by exportin 5 (EXP5) in accompanied with Ran-GTP and cleaved by Dicer into~22 base pair (bp) double-strands RNAs (dsRNAs). These dsRNAs are loaded onto an Argonaute protein (AGO) so as to form miRNA-induced silencing complex (miRISC), in which one strand of the~22-nt RNA duplex remains in AGO as a mature miRNA, whereas the other strand is degraded. Interestingly, Alarcon et al. recently reported that RNA-binding protein heterogeneous nuclear ribonucleoprotein A2/B1 (HNRNPA2B1) binds m6A-bearing pri-miRNAs, interacts with DGCR8 and thus facilitates the processing of pri-miRNAs. In consistent with this, loss of HNRNPA2B1 or depletion of METTL3 led to concomitant accumulation of unprocessed pri-miRNAs and decrease of the global mature miRNAs. Therefore, the methylation mark acts as a key posttranscriptional modification that enhances the initiation of miRNA biogenesis. Mechanisms of miRNA action Usually, a specific base-pairing between miRNAs and mRNAs induces mRNA degradation or translational repression. In mammals, the overall complementarity between a miRNA and its target is usually imperfect, which allows each miRNA to potentially regulate multiple RNAs. It is estimated that one miRNA may target as many as 400 genes on average. Conversely, the expression of a single gene can also be modulated by multiple miRNAs. Interestingly, it has been reported recently that miR-NAs regulate the m6A modification in mRNAs via a sequence pairing mechanism. As a result, manipulation of miRNA expression leads to change of m6A modification through modulating the binding of METTL3 to mRNAs ( Fig. 1). The m6A modification, in turn, modulates mRNA metabolism and thus is another key posttranscriptional control of gene expression. Evidences have indicated that m6A methylation determines stem cell fate by regulating pluripotent transition toward differentiation. Intriguingly, deficiency of ALKBH5, a m6A demethylase, leads to aberrant spermatogenesis and apoptosis in mouse testis through the demethylation of m6A on mRNAs. Functions of miRNAs in spermatogenesis Conditional Dicer knockout mouse models The overall importance of miRNA signaling for regulation of spermatogenesis has been demonstrated using conditional knockout of Dicer in germ cells. 
Dicer1 ablation in prospermatogonia just before birth using Ddx4 promoter-driven Cre expression led to an alteration in meiotic progression, significant increase of apoptosis in pachytene spermatocytes, a reduced number of round spermatids and morphological defects in spermatozoa. Moreover, Ngn3 is expressed endogenously in type A spermatogonia starting from postnatal d 5. In the mouse model of selective deletion of Dicer1 in type A spermatogonia by Ngn3 promoter-driven Cre, the first clear defects were displayed in haploid round spermatids. The spermiogenesis was severely compromised. Similarly, conditional depletion of Dicer1 using the Stra8Cre transgene in early spermatogonia resulted in the comparable phenotype to the Ngn3Cre-driven Dicer1 deletion. In addition, deletion of Dicer1 in haploid spermatids using the protamine 1 (Prm1)-Cre transgene led to abnormal morphology in the elongated spermatids and spermatozoa. But, the Prm1Cre-Dicer1 knockout caused a less severe phenotype compared to those in which Dicer1 was deleted from prospermatogonia and spermatogonia. Collectively, the earlier the ablation of Dicer occurs, the more severe side effects on spermatogenesis are found. Therefore, miRNA-mediated post-transcriptional control is an important regulator for spermatogenesis. The roles of miRNAs in SSC self-renewal and differentiation SSCs are the foundation of spermatogenesis that involves a delicate balance between self-renewal and differentiation of SSCs to ensure the lifelong production of spermatozoa. In the testes, the SSCs reside in a unique microenvironment or 'niche'. The niche factor glial cell line-derived neurotropic factor (GDNF) is the first welldefined paracrine factor that promotes SSC self-renewal. GDNF signaling acts via the RET tyrosine kinase and requires a ligand-specific co-receptor GFR1 in mouse SSCs. Evidences have shown that through the PI3K/AKT-dependent pathway or the SRC family kinase (SFK) pathway, GDNF regulates the expression of the transcription factors B cell CLL/ lymphoma 6 member B (BCL6B), ETS variant 5 (ETV5), DNA-binding protein 4 (ID4), LIM homeobox 1 (LHX1) and POU class 3 homeobox 1 (POU3F1) to drive SSC self-renewal. miRNAs conduce maintenance of the pool of SSCs. It has been shown that miR-20 along with miR-21, −34c, −135a, −146a, −182, −183, −204, −465a-3p, −465b-3p, −465c-3p, −465c-5p and −544 were preferentially expressed in the SSC-enriched population (Fig. 2). Importantly, miR-20, miR-21 and miR-106a contribute to maintenance of mouse SSC homeostasis. Fig. 2 The expression of associate miRNAs in testicular cells miR-135a mediates the maintenance of rat SSCs by regulating FOXO1 that promotes high levels of Ret protein on the cell surface of SSCs. Moreover, miR-544 regulates self-renewal of goat SSCs by targeting the promyelocytic leukemia zinc finger gene (PLZF), which is the first transcription factor to be identified as being involved in SSC self-renewal. Similarly, miR-224 regulates mouse SSC self-renewal via modulating PLZF and GFR1. Interestingly, miR-34c is expressed in goat SSCs and promotes SSC apoptosis in a p53-depemdent manner. Recently, it was found that miR-204 was involved in the regulation of dairy goat SSC proliferation via targeting Sirt1. Collectively, miRNAs are involved in regulating SSCs fate. The roles of miRNAs in meiosis and spermiogenesis Growing evidences have also demonstrated that specific miRNAs regulate meiosis (Fig. 2). 
The expression of miR-449 cluster is abundant and is upregulated upon meiotic initiation during testis development and in adult testes. The expression pattern of the miR-449 cluster is similar to that of miR-34b/c. Moreover, miR-34b/c and miR-449 cluster share the same seed region and thus target same sets of mRNAs. Depletion of either miR-34 cluster or miR-449 cluster displays no apparent defect in male germ cell development. However, simultaneous knockout of these two clusters led to sexually dimorphic and infertility, suggesting that miR-34b/c and the miR-449 cluster function redundantly in the regulation of spermatogenesis. Furthermore, miR-18, one of the miR-17-92 cluster, is abundantly expressed in spermatocytes. miR-18 targets heat shock factor2 (Hsf2), which is a critical transcription factor for spermatogenesis. Finally, miR-34b-5p regulates meiotic progression by targeting Cdk6. A unique chromatin remodelling occurs during spermatogenesis when histones are replaced by DNA packing proteins, such as transition proteins (TPs) and protamines (PRMs), which are exclusive to male germ cells. In the post-mitotic germ cells, the timely expression of TPs and PRMs is prerequisite for compaction and condensation of chromatin during spermiogenesis. To secure this timed expression pattern, Tp and Prm are subjected to an efficiently post-transcriptional control. It has been demonstrated that miR-469 suppresses the translation of TP2 and PRM2 by targeting mRNA of Tp2 and Prm2 in pachytene spermatocytes and round spermatids. On the contrary, miR-122a that is abundantly expressed in late-stage male germ cells reduces the Tp2 mRNA expression by RNA cleavage. Although the majority of miRNAs disappear during spermiogenesis, the sperm born miRNAs have also been demonstrated to play important roles. miR-34 is present in mouse spermatozoa and zygotes but not in the oocytes or in embryos beyond the one-cell stage. Upon fertilization, miR-34c is transferred from spermatozoa to zygote where it reduces the expression of Bcl-2 and p27, leading to S-phase entry and the first cleavage. Moreover, injection of miR-34c inhibitor into the zygotes inhibits DNA synthesis and suppresses the first cleavage division, suggesting that the sperm-borne miR-34c is required for zygote cleavage. In addition, dysregulation of miR-424/322 induces DNA double-strand breaks in spermatozoa. Importantly, a set of sperm miRNAs are differentially expressed in asthenozoospermic and oligoastheno-zoospermic males compared with normozoospermic males. Furthermore, miR-151a-5p is abundant in severe asthenozoospermia cases compared with healthy controls and participates in mitochondrial biological functions. Therefore, specific miRNAs have been exploited as potential biomarkers for male factor infertility.. The extrinsic factors derived from these somatic cells trigger specific events in germ cells that dictate or influence spermatogenesis. It has been shown that miRNAs are highly abundant in Sertoli cells (Fig. 2). MiR-133b and miR-202 are involved in pathogenesis of azoospermia or Sertolicell-only syndrome. Importantly,conditional depletion of Dicer1 from Sertoli cells, using the Anti-Mllerian hormone (Amh) promoter-driven Cre in mice, results in disrupted spermatogenesis and progressive testis degeneration, indicating that miRNAs in Sertoli cells play critical roles in spermatogenesis. Specifically, miR-133b promotes the proliferation of human Sertoli cells by targeting GLI3 and mediating expression of Cyclin B1 and Cyclin D1. 
Moreover, miR-762 promotes porcine immature Sertoli cell growth via the ring finger protein 4 (RNF4). Spermatogenesis is supported by the testicular Sertoli cells, peritubular myoid (PTM) cells and Leydig cells FSH and androgens are fundamentally important for spermatogenesis. To elucidate the molecular mechanisms by which FSH and androgen act in the Sertoli cells, Nicholls et al. investigated the expression and regulation of micro-RNAs (miRNAs). The authors have found that a subset of miRNAs were upregulated after hormone suppression in rat model and in vitro culture of primary rat Sertoli cells. Interestingly, Pten, an intracellular phosphatase, and Eps15, a mediator of endocytosis, were down-regulated by the withdrawal of hormones. In consistent with it, overexpression of miR-23b in vitro resulted in decreased translation of PTEN and EPS15 protein. Similarly, by using androgen suppression and androgen replacement, Chang et al. identified that androgen regulated the expression of several microRNAs in mouse Sertoli cells. One of the miRNAs targets found in this study is desmocollin-1 (Dsc1), which plays an essential role in cell-cell adhesion in epithelial cells. On the other hand, elevated estradiol level is associated with male infertility. Evidences indicate that estradiol regulates proliferation of Sertoli cells in a dose-dependent manner, in which miR-17 family and miR-1285 are involved in the regulation. Collectively, miRNA transcription is a new paradigm in the hormone dependence of spermatogenesis. Leydig cells are responsible for androgen production that is essential for sperm production. Basic fibroblast growth factor (bFGF) promotes the development of stem Leydig cells and inhibits LH-stimulated androgen production by regulating miRNAs. Interestingly, miR-140-5p/140-3p control mouse Leydig cell numbers in the developing testis. Deletion of miR-140-5p/miR-140-3p results in an increase of number of Leydig cells, indicating that the miRNAs are likely to regulate the expression of factors produced by Sertoli cells that regulate differentiation of Leydig cells. Collectively, these findings indicate that miRNAs regulate the development and functions of Sertoli cells and Leydig cells, which create the niche for SSCs and thus provide structural and nutritional support for germ cells. Therefore, miRNAs in somatic cells play important roles in spermatogenesis. Conclusion and perspectives Extensive and accurate regulation of gene expression is prerequisite for spermatogenesis. miRNAs are expressed in a cell-specific or stage-specific manner during spermatogenesis. However, the roles and underlying mechanisms of many of those miRNAs in spermatogenesis remain largely unknown. Future studies should primarily focus on uncovering the roles of germ-cell specific miR-NAs in spermatogenesis. The powerful single-cell small RNA sequencing would help to more accurately profile the miRNAs for certain type of germ cells. Meanwhile, the establishment of long-term culture of SSCs and in vitro induction of differentiation of male germ cells make it possible to elucidate the role of a certain miRNA or miRNA cluster in vitro. The application of CRISPR/ Cas9 system and conditional knockout strategies would speed up the understanding of miRNA functions. Secondly, growing evidences have been demonstrated that some specific miRNAs are preferentially expressed in testicular somatic cells. 
But it is not clear whether these miRNAs act as secreted paracrine factors in the SSC niche, or whether they indirectly mediate the secretion of growth factors, GDNF for instance, which then affect germ cells. More somatic cell expressed miRNAs are needed to be functionally characterized. Thirdly, it has been demonstrated that some transcription factors promote SSC self-renewal (for example, BCL6B, BRACHY-URY, ETV5, ID4, LHX1, and POU3F1), while several transcription factors stimulate spermatogonia differentiation (DMRT1, NGN3, SOHLH1, SOHLH2, SOX3, and STAT3). However, it is unclear which and how miRNA/miRNA cluster regulates the expression of these transcription factors. Fourthly, it has been discovered recently that RNA methylation is involved in pri-miRNA processing, opening the door for exploring RNA methylation in the biogenesis and function of the miR-NAs. Future research will pay increasing attention on the understanding of biological functions of epigenetic changes (or marks) during germ cell development. Finally, specific miRNAs in spermatozoa or seminal plasma will be exploited as potential biomarkers for male factor infertility. The annotation of the miRNAs and the elucidation of their regulating mechanisms in pathogenesis will provide insight into the etiology of male sterility and infertility. Together, uncovering these questions will shed new light on the pivotal roles of miRNA in spermatogenesis and fertility. |
The EU renegotiations have begun and I am as cynical as I have always been. I have always maintained that the referendum will be effectively rigged, the debate will be dominated by myth and fear mongering and the whole renegotiation exercise is a smoke screen designed to fool the public in a re-run of 1975. Now I think it is becoming clear that my suspicions are correct.
David Cameron has been on a tour of Europe and found resistance to the incredibly modest ideas he has put forward, even though nothing he is apparently striving to achieve will do anything to alter our relationship with the EU. Therein lies the problem, this renegotiation is spectacularly un-ambitious and half-hearted.
It was inevitably so, the Prime Minister is an ardent europhile, more so than any of his predecessors since Edward Heath. He has a vision of an expansive empire stretching to the Urals and has repeatedly called for Turkey to be accepted as a full member. He has already made it clear that he intends to campaign for Britain to remain a member and is set to ask for very little in return.
The diplomatic resistance towards the negotiations is all part of the phoney war. David Cameron now seems to be facing adversity, even with the piffling concessions he is looking for. Within a year or so, we will make some gains on welfare and other measures. Then victory will be declared and the concessions will be inflated in the media to appear as if they are a real coup, even though they were incredibly minor from the very beginning.
The Government already spurned the chance to preserve the independence of our legal system by opting back into a string of measures including the European Arrest Warrant. There seems to be no genuine desire to oppose integration beyond the superficial measure of opting out of the commitment to ‘ever closer union.’
What would please me and other sceptics? A loose association between the UK and the EU based on trade and cooperation would be acceptable terms of membership. I want Britain to be able to trade freely with countries around the world, to have an independent legal system, to be free of much of the regulatory burden, to be an independent diplomatic powerhouse with an independent foreign policy. I believe this is what most people want, because it would be better for Britain.
If the renegotiation concluded with the UK opted out of the diplomatic corps giving us diplomatic independence; having an autonomous trade policy; opted out of the Common Agricultural Policy and the Common Fisheries Policy; with fiscal independence allowing us to keep the City free from EU regulation and be exempt from all common tax measures; opted out of the EU’s Area of Freedom Security and Justice (including the EAW) and with the sovereignty and supremacy of our parliament restored, then I would be totally satisfied.
That is, however, pure fantasy. Not one of these things is on the negotiating table and so for a sceptic there is simply no room for optimism. If we are not able to trade freely and enjoy economic freedom, if the Common Law is submissive to EU law, if we are still part of common policies which directly harm British interests, if Parliament is still a shell manipulated by the ‘invisible hand’ of EU power then this country is still, whatever superficialities designed to conceal the truth are in place, a province of a larger state.
Whether we are signed up to ‘ever closer union’ on paper or not, the direction of travel is still the same. Sceptics therefore must focus on putting forward their case and preparing for the great debate. The odds are stacked against us in every way imaginable.
The EU referendum campaign will be heavily weighted towards ‘in.’ David Cameron has confirmed that the referendum question is framed with a europhile bias because they have the positive sounding ‘yes’ vote. In a signal of intent, he has also lifted the election time purdah meaning that the whole power of the civil service will be making the case for continued EU membership. The ‘in’ camp also has a number of other factors in their favour.
They have the status-quo factor, all the major political parties back the EU and the full weight of the mass media will swing in favour of staying in too. Even the sceptic papers will surprise you by creating a fanfare over whatever concessions are made and backing an “in” vote. Watch this space; sceptics will be left awkwardly associated with the Daily Express.
Financially there will be absolutely no fair play; all the big money and vested interests will be on the side of the ‘in’ campaign. Sceptics hoping for a fair fight should stop fooling themselves, it is not forthcoming. Sceptics will be marginalised, ridiculed and caricatured as mad, eccentric or regressive. This is going to be a dirty fight which, I am sorry to say, sceptics are likely to lose, but in order to give ourselves a fighting chance we have to argue our point robustly and offer a positive alternative vision.
Those who place their faith in the negotiations do not realise that the new relationship they seek can only be achieved by leaving. Those who want a ‘reformed’ EU are mostly concerned with retaining access to the single market, but we do not need to be in a political union to maintain access. A major pre-occupation of the ‘in’ campaign will be to suppress knowledge of the European Free Trade Association, which Britain actually founded and could easily rejoin. We would then, along with the rest of the EFTA members (except Switzerland), become EEA members.
The main complaint that constitutes the ‘in’ camp’s only real argument against EFTA and EEA membership is that these countries have to follow all the EU’s rules without having any say in how they are made. This argument is overblown and must be combated.
Increasingly it is global bodies higher than the EU that are making the rules and regulations that the single market follows. On most of these global bodies, EFTA members sit as independent nations and represent themselves. They are able to argue their own case according to their own national interests. They have direct influence, while Britain does not have its own seat at the negotiation table; it is merely one voice among 28.
Overall, by sitting independently on the global bodies that are increasingly regulating international trade, as well as being able to independently negotiate trade deals with countries across the world, Britain will increase its economic power and influence as an independent nation. Furthermore, our domestic economy will be free to repeal internal regulations imposed by the EU if we so wished, allowing us to liberate our economy.
If this sounds to you like an attractive future for Britain, that’s because it is, hence why the ‘in’ campaigners will try and keep that option out of the public imagination and cloud it in falsehoods. It is an option that allows us to maintain the free movement of goods, services and people while gaining economic and political independence.
The British people are too cautious to countenance losing access to the single market. The warnings and threats being sounded in the business world have brought me to this conclusion. Similar threats were made in the campaign to scrap the pound, but this time the uncertainty and fear factor is attached to the side that is arguing for change. EFTA membership is one of the strongest alternative futures that sceptics can offer people who are fearful of the risk of secession.
This is a steep uphill struggle, a rigged game that we seem destined to lose; but we must not forget that, potentially, our argument is very strong. For it isn’t just economic, diplomatic and legal freedoms we are fighting for, it is our democracy itself!
A vote for ‘in’ is a vote for the post-democracy age. The EU is an empire set upon the dissolution of national democracy. We have witnessed unelected governments being imposed on Italy and Greece, referenda being bypassed in Ireland and France and increasingly the economic policies of nations being dictated from Brussels, Frankfurt and Berlin. Is this really the future Britain wants for itself?
The ordinary voter in Europe has no influence whatsoever on EU policies, legislation or governance. The political system of the EU is specifically designed to centralise power and minimise the influence of the electorate.
The EU parliament is an illusion of a democratic legislature. MEPs are selected by their party and have little incentive to do anything other than what their party tells them. Even if the electorate could actually choose the candidate they wish to be their MEP, that MEP, like the whole EU parliament, would be impotent, because MEPs cannot appoint commissioners and can merely ‘suggest’ laws.
The real power is within the unelected commission and with the plethora of unelected officials and bureaucrats that are appointed and are in no way accountable to the people they govern. They are not accountable to the people and cannot be removed by the people. This, I believe, is a message that the British people must hear loud and clear if we are to win.
Do we want to be a dynamic, independent, free trading nation-state, or a province? Do we still believe in democracy, that those who rule in our name should be elected by us, accountable to us and ousted at will by us? Or are we happy to surrender our democracy and elect glorified governors and administrators in meaningless elections? Do we want to conserve our customs and the elements of our political and legal culture that make us distinct, or be part of a homogeneity? These are the questions that must be put to the British people.
I have a great belief in Britain. We are not a nation of defeatists willing to be governed by defeatists. We are not, I sincerely hope, a nation that has terminally lost faith in itself and is ready to be subsumed and become a mere vassal. We are a great, proud nation and still, I firmly believe, an outward looking independent minded nation that can shape a bright future for itself. |
Formation of Three-Dimensional Electronic Networks Using Axially Ligated Metal Phthalocyanines as Stable Neutral Radicals : Organic -radical crystals are potential single-component molecular conductors, as they involve charge carriers. We fabricated new organic -radical crystals using axially ligated metal phthalocyanine anions ( − ) as starting materials. Electrochemical oxidation of − a ff orded single crystals of organic -radicals of the type M III (Pc)Cl 2 THF (M = Co or Fe, THF = tetrahydrofuran), where the -conjugated macrocyclic phthalocyanine ligand is one-electron oxidized. The X-ray crystal structure analysis revealed that M III (Pc)Cl 2 formed three-dimensional networks with - overlaps. The electrical resistivities of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF at room temperature along the a -axis were 6 10 2 and 6 10 3 cm, respectively, and were almost isotropic, meaning that M III (Pc)Cl 2 THF had three-dimensional electronic systems. Introduction Ordinary organic radicals are unstable due to their high chemical reactivity, which is caused by the presence of an unpaired electron. However, organic radicals can be considered to have charge carriers, because their highest occupied molecular orbital (HOMO) is singly occupied, leading to a half-filled band in the solid state. Recently, a nonconjugated organic radical crystal has been reported to be conductive, although the conductivity is low in the solid state. This might be due to the low degree of intermolecular overlap for the molecular orbital in which the radical lies. On the other hand, organic -radicals tend to stack with significant overlap between adjacent molecules, giving rise to conducting pathways. For this reason, organic -radicals have attracted significant interest from those studying molecular conductors, and it has been reported that despite the expected half-filled band, most organic -radical crystals behave as Mott insulators because of the larger on-site Coulomb repulsion energy (U) compared to the transfer energy (t). However, external changes such as pressure can control the competition between U and t, leading to drastic phase transitions from the insulating state to metallic or superconducting states being reported for various molecular Mott insulators. This means that if a stable neutral -radical crystal is fabricated, it could be possible to obtain a single-component molecular conductor, which is a hot topic in the study of molecular crystals. Axially ligated metal phthalocyanine anions ( − ) are attractive components for the construction of neutral -radical crystals as single-component molecular conductors, because the wide -conjugated system of phthalocyanine (Pc) might suppress U. Inabe expected that the electrochemical oxidation of the -conjugated macrocyclic ligand in − would give rise to conducting crystals, while the M III (Pc)L 2 unit could impart many kinds of dimensionality to the molecular arrangement as the axial ligand induces slipped stacks ( Figure 1). Indeed, it was found that there were two different types of - overlaps between the M III (Pc)L 2 units ( Figure 2). Although the - overlaps are significantly reduced compared to those seen in M(Pc)X-type conductors having face-to-face stacking (the overlap integral of type A is about 40% of that of M(Pc)X-type conductors), it is enough for electrical conduction. 
Consequently, a large number of molecular conducting crystals composed of M III (Pc)L 2 units were fabricated, including -neutral radical crystals as well as crystals of highly conducting partially oxidized salts, with molecular arrangements showing the formation of one-dimensional, two-dimensional, and also three-dimensional networks. Although all -neutral radical crystals composed of M III (Pc)L 2 are Mott insulators, their resistivities are relatively small. Herein, we focused on the dimensionality of M III (Pc)L 2 crystals. It has been reported that the conductivity of M III (Pc)L 2 radical crystals tends to increase as the dimensionality of the network is increased ; the first example of a three-dimensional neutral radical crystal of Co III (Pc)(CN) 2 2H 2 O exhibited a resistivity at room temperature of no more than 10 0 cm with an activation energy of less than 0.1 eV. However, only a few examples of three-dimensional systems exist at present. We have fabricated a new three-dimensional system consisting of neutral radical crystals of M III (Pc)Cl 2 THF (M = Co or Fe, THF = tetrahydrofuran). In this paper, the crystal structures and electrical properties of M III (Pc)Cl 2 THF are reported. Electrochemical oxidation of K (30 mg) with two equivalents of tetraethylammonium chloride or tetrabutylammonium chloride was performed under a constant current of 1 A in a THF:acetonitrile (30 mL, 1:1 v/v) solution at 25 C using an electrocrystallization cell equipped with a glass frit between the two compartments, and yielded black square-pillar crystals of M III (Pc)Cl 2 THF on the anode. The typical size of the crystals obtained was between 0.5 0.3 0.3 mm 3 and 1.0 0.5 0.5 mm 3. The obtained radical crystals were stable, and no decomposition was detected in the X-ray crystal structures or electrical resistivity measurements over several months. The Schematic route for the synthesis is shown in Figure 3. X-ray Crystal Structure Determination Crystal data for M(Pc)Cl 2 THF were collected at 298 K using an automated Rigaku SuperNova system (Rigaku, Tokyo, Japan) with monochromated Mo-K radiation ( = 0.71073 ). The structure was solved by using SIR2019, and was refined by a full-matrix least-squares technique with SHELXL-2018/1 using anisotropic and isotropic thermal parameters for non-hydrogen and hydrogen atoms, respectively. Orientational disorder was observed for the solvent site (THF molecule). We assigned C and O atoms according to the bond lengths. Extended Hckel calculations were performed using a CAESAR 2 software package (Prime Color Software, Inc., Charlotte, NC, USA) with the atomic parameters determined by the X-ray structure analysis. Default parameters were used. Measurements Electrical resistivity measurements along the a-axis of M III (Pc)Cl 2 THF single crystals were performed by the standard four-probe method. The two-probe method was also adopted for the measurement with the current parallel and perpendicular to the a-axis. The electrical leads, which were gold wires with diameters of 25 m, were attached to the crystal face using gold paste (Figure 4). The data are typical, as the resistivity and activation energies varied slightly depending on the individual crystal examined. However, we confirmed the reproducibility of the data by examining several crystals of both materials. Results Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF were found to have a monoclinic unit cell with the P2 1 /n space group, and were isostructural to each other (Table 1). 
Similar isostructural natures have been reported for M(Pc)L 2 -based conductors, because the rigid geometry of the Pc framework is insensitive to the central ion. Figure 5 shows the crystal structure of Co III (Pc)Cl 2 THF. Because the oxidation potentials of Co III and Fe III are higher than that of Pc 2-, the Pc ligand is oxidized to Pc -, meaning M III (Pc)Cl 2 THF is a neutral -radical. The M III (Pc)Cl 2 units form one-dimensional stacks along the a-axis via interactions between two peripheral benzene rings per unit, with interplanar distances of 3.49 and 3.54 (type A), and also form additional stacks along the and directions with one peripheral benzene ring per unit (type B) with interplanar distances of 3.43 and 3.44, respectively, resulting in the formation of three-dimensional networks. To evaluate the effectiveness of the stacking, the overlap integrals between the HOMOs of adjacent Pcs along each stacking direction were calculated by the extended Hckel method based on the structural data ( Table 2). Because the intermolecular overlap integral is usually regarded as being proportional to transfer energy, the estimation of the overlap integrals can be a useful index for investigating the anisotropies of molecular stacks, and it was found that the overlap integrals along the a-axis of type A stacks were almost twice those seen in other stacks. Figure 6 shows the temperature dependences of the electrical resistivities along the a-axes of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF. The electrical resistivities of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF at room temperature were 6.3 10 2 and 6.1 10 3 cm, respectively, and both systems showed semi-conducting behavior, with activation energies of 0.20 eV for Co III (Pc)Cl 2 THF and 0.24 eV for Fe III (Pc)Cl 2 THF. Figure 7 shows the temperature dependences of the electrical resistivities of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF along the a-axis (//a) and perpendicular to the a-axis (⊥a), as measured by the two-probe method. In both systems, the resistivity along the a-axis was smaller than that perpendicular to the a-axis, reflecting the difference in the overlap integrals; however, the current-direction dependence of the electrical resistivity was less than one order of magnitude. The activation energies of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF were almost independent of the direction of conduction. Figure 7. Temperature dependences of the electrical resistivities of (a) Co III (Pc)Cl 2 THF and (b) Fe III (Pc)Cl 2 THF along the a-axis (//a) and perpendicular to the a-axis (⊥a), as measured by the two-probe method. Discussions Despite the isostructural nature of the two compounds, the electrical resistivity of Fe III (Pc)Cl 2 THF was almost one order of magnitude higher than that of Co III (Pc)Cl 2 THF, and the activation energy was also higher. Similar enhancements of the electrical resistivity and activation energy have been reported for molecular conductors based on M III (Pc)L 2, and could be attributable to the different magnetic moments of the central metal ions. Low-spin Fe III has a magnetic moment of S = 1/2, while low-spin Co III has no magnetic moment (S = 0), and it has been reported that the magnetic moment enhances the electrical resistivity. As shown in Figure 7, the electrical resistivity of M III (Pc)Cl 2 THF depended on the current direction; however, the difference was quite small. 
In one- or two-dimensional systems, the resistivity along the stacking direction(s) was two orders of magnitude smaller than that perpendicular to the stacking direction(s). Therefore, the almost isotropic nature observed in the resistivity measurements in this study demonstrates that M III (Pc)Cl 2 THF had a three-dimensional electronic system. However, the electrical resistivity of Co III (Pc)Cl 2 THF at room temperature (6.3 × 10 2 Ω cm) was two orders of magnitude higher than that of Co III (Pc)(CN) 2 2H 2 O, which was the first example of a three-dimensional neutral π-radical crystal of the type M III (Pc)L 2. The activation energy of M III (Pc)Cl 2 THF was also higher than that of Co III (Pc)(CN) 2 2H 2 O. Therefore, we calculated the overlap integrals of Co III (Pc)(CN) 2 2H 2 O based on the reported crystal data, and compared the results with the overlap integrals of Co III (Pc)Cl 2 THF (Table 3). Although the molecular arrangement of M III (Pc)Cl 2 in M III (Pc)Cl 2 THF appeared to be similar to that of Co III (Pc)(CN) 2 2H 2 O, the overlap integrals calculated for Co III (Pc)(CN) 2 2H 2 O, including those of the type B stacks, were 1.7 × 10 −3. These values were more than twice those obtained for Co III (Pc)Cl 2 THF. These discrepancies could be the cause of the difference in the electrical properties, meaning that slight changes in molecular arrangement could lead to drastic changes in the electrical properties of neutral π-radical crystals composed of M III (Pc)L 2. Conclusions We succeeded in fabricating single crystals of stable organic π-radicals of the type M III (Pc)Cl 2 THF (M = Co or Fe, THF = tetrahydrofuran). The overlap integrals between the HOMOs of adjacent Pcs were calculated based on the atomic parameters determined by the X-ray crystal structure analyses, and it was revealed that M III (Pc)Cl 2 formed three-dimensional networks. Furthermore, the measured electrical resistivities were almost independent of the applied current direction, meaning that M III (Pc)Cl 2 THF had three-dimensional electronic systems, which are rarely observed in the study of molecular conductors. The resistivities of Co III (Pc)Cl 2 THF and Fe III (Pc)Cl 2 THF at room temperature were 6 × 10 2 and 6 × 10 3 Ω cm, respectively. These are similar to or significantly smaller than those reported for neutral radical conductors showing pressure-induced metallization, indicating that further study of three-dimensional Mott insulators composed of M III (Pc)L 2 under the application of pressure could allow for phase transitions from insulating to metallic or superconducting states. Furthermore, it was revealed that a slight change in molecular arrangement led to drastic changes in the electrical properties. As M III (Pc)L 2 shows various stacking types depending on the fabrication conditions, there is a strong possibility that new single-component molecular conductors composed of M III (Pc)L 2 could be produced. |
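As a rough illustration of how activation energies such as those quoted above can be extracted from temperature-dependent resistivity data, the short Python sketch below fits an Arrhenius law, rho(T) = rho0 * exp(Ea / (kB * T)), a convention commonly used for semiconducting molecular crystals; the paper does not state which convention it applied, and the data here are synthetic, chosen only to roughly mimic the reported room-temperature resistivity and activation energy of Co III (Pc)Cl 2 THF.

import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(temperatures_K, resistivities_ohm_cm):
    """Fit ln(rho) = ln(rho0) + Ea/(kB*T); return Ea in eV and rho0 in ohm*cm."""
    inv_T = 1.0 / np.asarray(temperatures_K)
    ln_rho = np.log(np.asarray(resistivities_ohm_cm))
    slope, intercept = np.polyfit(inv_T, ln_rho, 1)  # ln(rho) is linear in 1/T
    return slope * K_B_EV, np.exp(intercept)

# Synthetic data only (not the measured values): a semiconductor with
# Ea = 0.20 eV and rho(300 K) of about 6e2 ohm*cm.
T = np.linspace(200.0, 300.0, 21)
rho0 = 6e2 / np.exp(0.20 / (K_B_EV * 300.0))
rho = rho0 * np.exp(0.20 / (K_B_EV * T))

Ea, rho_fit = activation_energy(T, rho)
print(f"Ea = {Ea:.3f} eV, rho0 = {rho_fit:.3e} ohm*cm")  # recovers ~0.200 eV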
Application of adaptive neuro-fuzzy inference system (ANFIS) for slope and pillar stability assessment This study seeks to apply ANFIS model for stability assessment of surface and underground excavation. Two cases of stability assessment for highway slope and underground rib-pillar mining were performed. This paper presents more than one practical approaches in ANFIS simulation for slope and pillar stability assessment. The most important factors to achieve good performance of ANFIS in this study are types of membership function, number of membership function and number of input variable. The results show that the simulation based on those important factors can reach proper error checking. Considering error checking of the model, type and number of MF simulations can show the best result by comparing more than one simulation. Excellent performance of the 4 (four) input variables in slope case and 3 (three) input variables in pillar case provides valuable information for stability assessment using ANFIS. Introduction Slope and pillar stability is a technical problem that affects many things in mining field project. Forecasting and pre-project stability assessment will give reference information to the project designer. It is related to the excavation geometry and amount of production. Stability assessment of existing slopes and pillars can provide a reference for designer to avoid unwanted accidents. For this reason, many of stability analysis methods have been applied to solve the problem of stability assessment. The methods that have been widely known are analytical, numerical, empirical, probability and observational methods. Since the problem of uncertainty in the field has become more and more complicated, ANFIS has been proposed to assess the stability problem. ANFIS and other artificial intelligence techniques have been widely used in recent years to assess surface and underground excavation. Chen et al. employed the ANFIS model to predict the stability of epimetamorphic rock slopes. ANFIS model applied 41 data pair for training and 5 input variables of rock properties and slope geometry. Gaussian type of MF and hybrid learning rule of ANFIS model produced a good testing error of prediction. Fattahi performed prediction of slope stability using ANFIS. The results obtained demonstrated the effectiveness of the ANFIS-SCM (subtractive clustering method). It shows effective result than multiple linear regression prediction approaches. Silva et al. applied fuzzy logic approach to assessing failure risk in earth dams. They performed the fuzzy fication of geotechnical variables related to the shear strength of soils (in a simpler way than when using conventional probabilistic methods), and also to use the current methods of slope stability analysis to obtain factors of safety expressed as fuzzy numbers. Adoko and Wu stated that the use of Fuzzy Inference Systems in geotechnical engineering was in two fundamental conditions. First; epistemic uncertainty (lack of information, updated data unavailable and impreciseness) were handled successfully as well as expert knowledge and linguistic variables which were very important for some decision-making process. Secondly, fuzzy systems models were proved to be a good tool for prediction mainly when neural network or genetic algorithms were combined with. Assessment of underground excavation using ANFIS and other artificial intelligence techniques also has been widely used by researchers. Salimi et al. 
evaluated the applicability of the ANFIS model on a limited data set of hard rock Tunnel Boring Machine (TBM) performance. The result shows that the prediction performance of the Support Vector Regression (SVR) model is slightly better than ANFIS. Lai et al. summarized the study of Qu; about the prediction of ground settlement during shield tunneling using an Artificial Neural Networks (ANN). In the study; cohesion, angle of internal friction, compressive modulus of soil, earth covering thickness, diameter of TBM, grouting pressure, grouting filling ration, shield jacking force and shield tunneling rate were applied as the input variable to predict the maximum surface settlement. It can be concluded that the prediction model can obtain high-precision results. Fattahi et al. performed assessment of damaged zone around underground spaces using ANFIS model. In its study; Three ANFIS models of grid partitioning (GP), subtractive clustering method (SCM) and fuzzy c-means clustering method (FCM) were implemented. A comparison was made between these three models, and the results show the superiority of the ANFIS-SCM model to predict the problem under consideration. ANFIS (Adaptive-Network-based Fuzzy Inference System) is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. The essential part of neuro-fuzzy synergisms comes from a common framework called adaptive networks which unifies both neural networks and fuzzy models. The fuzzy models under the framework of adaptive networks are called ANFIS, which possess certain advantages over neural networks. There are several features that enable ANFIS to achieve great success. ANFIS refines fuzzy IF-THEN rules to describe the behavior of a complex system and does not require prior human expertise. In application for real problem, ANFIS is easy to implement, and it enables fast and accurate learning. ANFIS model offers desired data set, greater choice of membership functions to use, strong generalization abilities, excellent and explanation facilities through fuzzy rules. ANFIS is also easy to incorporate both linguistic and numeric knowledge for problem-solving. Since increasing the uncertainties of field problem for stability assessment, ANFIS becomes part of alternatives to be applied. The simulation of ANFIS model using limited variable is needed to perform to anticipate limited data condition in field and laboratory projects. Building ANFIS using more than one type of membership functions (MF) and choosing different type and number of variables can provide alternatives for comparison. This paper presents slope and pillar stability assessment using the learning ability of ANFIS. Method This study presents two cases of stability assessment using ANFIS model both in surface and underground excavation. The first case is slope stability in which the data was collected from Chen et al.. The data was based on a detailed investigation of 53 slopes in the vicinity of Kaili-Sansui highway in epimetamorphic rock region, at Guizhou Province of China. The second case is underground mining pillar stability. Pillar case histories were collected from thirteen different mines. All of the pillars within the database are open stope rib pillars. 
Application of ANFIS to assess those two cases was performed by dividing the data into two sets, namely training and checking data set. Training data set was used to build ANFIS training of the case. Checking data set was used to know the reliability of ANFIS training instability prediction/stability assessment. Error testing for data checking represents the ability of ANFIS models/ training to assess stability. The smaller the error checking, the better the stability assessment produced. It means the higher the level of reliability of the model. Data collection for slopes Data from highway slope as reported by Chen et al. was used for applying the ANFIS model. Slopes data was divided into two parts, 41 for training data set and 12 for checking data set. Descriptive statistics and checking data sets for slopes can be seen in Table 1 and Table 2. Data collection for pillars Data of underground mining pillar under consideration is presented in Hudyma as is quoted by Lunder. This study takes stable and failed pillars cases from the data. Thirty training data set and eight checking data set were applied in the model shown in Table 3 and Table 4. Applications of the ANFIS method ANFIS model was applied for slope and pillar stability cases. This study presents more than one practical approach/simulation in term of input variable and type of membership function (MF) in the model. A different number of input variables was used in each case. Less number of variables was applied as a comparison to the sufficient number of data conditions. It represents limited data in the field and laboratory test. Analysis and discussion were performed to understand the result of ANFIS model. The grid partition method was utilized to generate the training structure, and the hybrid learning rule was employed in the learning procedure. The output variables represent the stability of the slopes and pillars, 1 for stable and 0 for failed. Data were divided into two separate sets that are training data set and checking data set. Checking data set was applied in the process to know the accuracy and the effectiveness of the trained ANFIS model. Slope stability case Based on the same data which used by Chen et al., this paper presents some different practical approaches in term of the input variable, type of membership function and number of membership function in the model. In the work of Chen et al. the data was analyzed using ANFIS. It only used five input variables and the Gaussian membership function. It means usability of 4 input variables was not performed. In this study, the architecture of ANFIS model applied 5 and 4 input variables. Each input variable has two and three membership functions of Triangular and Trapezoidal-shaped type. Result of ANFIS model for slope stability case is shown in Table 5. According to Table 5, 5 input variables were applied in ANFIS1 (simulation 1.1 -1.4). Training and testing error of every type and number of MF is also presented in Table 5. Resulted from this model, most significant testing error has resulted from Triangular-shaped type of MF with 2*2*2*2*2 number of MF. The smallest testing error was resulted from Triangular-shaped type of MF, with 3*3*3*3*3 number of MF. In this model, 3*3*3*3*3 number of MF gives less error testing. Reducing the number of input variables was conducted in ANFIS2 model (simulation 1.5 -1.8). It was performed to know the effect on testing error if bulk density is not used as the input variable. 
Numerical attributes for the output were the same as in ANFIS1: 1 for stable and 0 for failed. The results of ANFIS2, in terms of the training and testing errors for every type and number of MF, are presented in Table 5. According to Table 5, the Triangular-shaped type with a 3*3*3*3 number of MF has the largest testing error, while the Trapezoidal-shaped type with a 2*2*2*2 number of MF has the smallest testing error. Table 5 shows that the number of input variables and the type and number of MF give different results for the checking error of ANFIS. In the case of 5 variables, for the Triangular-shaped type of MF, an increase in the number of MF produced a significant improvement in the checking error. The Triangular-shaped type of MF with a 2*2*2*2*2 number of MF produced an error of 2.393, which is significant compared with the error value resulting from the 3*3*3*3*3 number of MF. For the Trapezoidal-shaped type of MF, an increase in the number of MF produced no significant difference in the checking error. A different result was obtained for the 4-variable case, where the smaller checking error resulted from the smaller number of MF. The Triangular-shaped type of MF with a 2*2*2*2 number of MF produced an error of 0.164, while the 3*3*3*3 number of MF produced an error of 0.376. The Trapezoidal-shaped type of MF with a 2*2*2*2 number of MF produced an error of 0.095, while the 3*3*3*3 number of MF produced an error of 0.151. The results of these tests indicate that the most critical factors in achieving excellent performance are the type of MF, the number of MF and the number of input variables. Adding more MFs does not automatically give a better error result. Evaluation of the model based on the checking error plays an important role: the type and number of MF which produce the smallest checking error are chosen as the best model. Running more than one ANFIS simulation gives alternatives for producing the best performance, which can then be used to assess the slope stability under consideration. The results show that the 53 data sets, with 80% of the data used for training and 20% for testing, are reliable for application in the ANFIS model. This amount of data is sufficient for modeling the problem of slope stability in rock mechanics engineering. This result gives valuable information for future slope stability problems, particularly regarding the amount of data required. The amount of data in the field can be more or less than 53 records, but through the selection of the type and number of MF, as shown by this study, excellent performance of ANFIS can still be produced. Reducing the number of input variables can also produce a good error value. This indicates that using a smaller number of input variables (4 input variables), based on the selection of the type and number of MF, can still obtain a proper checking error. This gives valuable information related to the utilization of those 4 variables in different cases. Those 4 input variables are height, inclination, cohesion, and internal friction angle. Pillar stability case Eight and 3 input variables were applied in the ANFIS model for the pillar stability case. Each input variable has two or three membership functions of the Triangular or Trapezoidal-shaped type. The grid partition method was utilized to generate the training structure, and the hybrid learning rule was employed in the learning procedure. The results of the ANFIS model for the pillar stability case (ANFIS3 and ANFIS4) are shown in Table 6. ANFIS3, simulations 2.1 - 2.4, applied 8 input variables. The training and testing errors for every type and number of MF are also presented in Table 6. 
From the simulation results, the largest testing error resulted from the Trapezoidal-shaped type of MF with 2*2*2*2*2*2*2*2 MFs for the input variables, and the smallest testing error also resulted from the Trapezoidal-shaped type of MF, with 2*2*3*2*2*2*2*3 MFs. ANFIS4, simulations 2.5-2.8, applied 3 input variables to examine the effect on the testing error. The numbers of training and checking data sets followed the same approach as ANFIS3. Three input variables represent the limited-data condition in the field: data acquisition is strongly site-specific, and it is often not possible to obtain all of the variables applied in ANFIS3 in every project. Applying 3 (three) input variables therefore offers an alternative approach to the problem of limited data in field projects. The results of the ANFIS4 model are shown in Table 6. They show that the Triangular-shaped MF with 3*3*3 MFs has the largest testing error, while the Trapezoidal-shaped MF with 2*2*2 MFs has the smallest testing error. A change in the number of MFs alone did not have much effect on the checking error, whereas changing the number of MFs for all inputs, as in simulations 2.5-2.8, produced significantly different checking errors. Increasing the number of MFs and the number of input variables does not automatically produce a better error result. The excellent performance of the 3 input variables in the model provides valuable information: three input variables in the pillar case gave a better error result than 8 input variables. It can be stated that implementing 3 input variables, based on the selection of the type and number of MFs, can give a better error value.

Conclusion. This study shows that the ANFIS model can serve as a powerful tool for modeling problems involved in slope and pillar stability. The most important factors in achieving excellent ANFIS performance are the type of MF, the number of MFs and the number of input variables. The results show that simulating different types and numbers of MFs gives the chance to reach a proper checking error; adding or changing the number and type of MFs does not automatically give a better error result. Taking the checking error as the basis for evaluation, simulations over the type and number of MFs can identify the best result by comparing more than one simulation. Using fewer input variables, based on the selection of the type and number of MFs, can still obtain a proper checking error. The proper performance of the 4 input variables in the slope case and the 3 input variables in the pillar case provides valuable information for stability assessment using ANFIS: four input variables in the slope case can still produce a good error result, and three input variables in the pillar case gave a better error result than 8 input variables. It should be stated that implementing 4 and 3 input variables in the ANFIS model, based on the selection of the type and number of MFs, can give an excellent, better error value. |
Academic Procrastination and Negative Emotions Among Adolescents During the COVID-19 Pandemic: The Mediating and Buffering Effects of Online-Shopping Addiction Purpose The COVID-19 pandemic that began in 2019 has had a significant impact on people's learning and their lives, including a significant increase in the incidence of academic procrastination and negative emotions. The topic of how negative emotions influence academic procrastination has long been debated, and previous research has revealed a significant relationship between the two. The purpose of this study was to further investigate the mediating and buffering effects of online-shopping addiction on academic procrastination and negative emotions. Methods The researchers conducted a correlation analysis followed by a mediation analysis and developed a mediation model. The study used stratified sampling and an online questionnaire as the data collection method. In this study, first, five freshman students at vocational and technical colleges in Guangdong Province, China, were contacted to distribute the questionnaire. Second, after communicating with them individually, first-year students of Guangdong origin were selected as participants. Finally, 423 freshman students participated by completing the questionnaire. The questionnaire consisted of 4 parts: demographic information, an online-shopping-addiction scale, an academic-procrastination scale and a negative-emotions scale. A total of 423 students, 118 males (27.9%) and 305 females (72.1%) from 10 vocational and technical colleges in Guangdong were surveyed. SPSS 25.0 was used to process and analyze the data. The data collected were self-reported. Results The results showed that: first, academic procrastination was significantly and positively associated with online-shopping addiction (r = 0.176, p < 0.01). Second, academic procrastination was significantly and positively associated with negative emotions (r = 0.250, p < 0.01). Third, online-shopping addiction was significantly and positively associated with negative emotions (r = 0.358, p < 0.01). In addition, academic procrastination had a significant positive predictive effect on online-shopping addiction (β = 0.1955, t = 3.6622, p < 0.001). Online-shopping addiction had a significant positive predictive effect on negative emotions (β = 0.4324, t = 7.1437, p < 0.001). Conclusion This study explored the relationship between students' academic procrastination, negative emotions, and online-shopping addiction during the COVID-19 pandemic. The results indicated that students' level of academic procrastination positively influenced their level of online-shopping addiction and negative emotions, and their level of online-shopping addiction increased their negative emotions. In addition, online-shopping addiction mediated the relationship between participants' academic procrastination and negative emotions during the pandemic. In other words, with the mediating effect of online-shopping addiction, the higher the level of a participant's academic procrastination, the more likely that the participant would have a high score for negative emotions.
INTRODUCTION The COVID-19 epidemic has posed a serious threat to people's health and to their lives (Zhang and Ma, 2020). As a result of the pandemic, the social networks that many people rely on have been disrupted. Many others have not had the luxury of social isolation while facing the threat of losing their jobs and even the loss of loved ones. Not surprisingly, depression and anxiety have been on the rise. In China, the characterization of novel coronavirus pneumonia by the National Health Commission has led to increased negative emotions such as anxiety and anger, decreased well-being, and increased sensitivity to social risks in the overall mindset of society. Therefore, the negative effects of COVID-19 have received widespread attention from the academic community. Epidemics can have many negative effects on people's mental health. One survey showed that increased fear of COVID-19 was directly associated with weakened mental health, which in turn was associated with decreased quality of life. Patients infected with COVID-19 have suffered from severe psychological problems, which may negatively affect their quality of life and their sleep. An infection can also trigger other psychological problems, such as panic attacks, anxiety, and depression. Therefore, mental-health issues have been considered an important topic during the COVID-19 pandemic. Academic procrastination and negative emotions have attracted the interest of researchers worldwide. Academic procrastination is a phenomenon whereby an individual intentionally delays some learning tasks that must be completed, without regard for the possible adverse consequences.
The challenges posed by the COVID-19 pandemic can increase procrastination in academic activities that a student either dislikes or is passionate about. The relationship between the two phenomena deserves more in-depth study. According to the existing literature, many learners have experienced negative emotions during the pandemic. A common feature of procrastination is the emphasis on repairing negative emotions at the expense of pursuing other important self-control goals (Solomon and Rothblum, 1984). Sirois and Pychyl argue that, as a form of self-regulation failure, procrastination has a great deal to do with short-term emotion repair and emotion regulation (Sirois and Pychyl, 2013). During the COVID-19 pandemic, levels of procrastination can partly mediate the link between academic anxiety and self-manipulation. However, few studies have examined the relationship between academic procrastination and negative emotions during the pandemic. In addition, learners' online-shopping addiction is often associated with their negative emotions. Researchers have found that compulsive shoppers often use shopping as a means to change their emotions for the better. Compulsive buyers satisfy their desires by shopping when faced with the frustration associated with a lack of shopping. The pleasurable feelings associated with shopping seem to temporarily overshadow the negative effects, thus perpetuating a compulsive-shopping cycle (Clark and Calleja, 2009). However, few studies have focused on the relationship between academic procrastination and online-shopping addiction or on the mediating role of online-shopping addiction between academic procrastination and negative emotions. To deepen the understanding of negative emotions in academic settings, this study explores the relationship between academic procrastination and negative emotions, as well as the mediating role of online-shopping addiction between the two. We have divided this paper into six parts. The next part of the paper contains the literature review and hypotheses, focusing on the concepts and related research on academic procrastination, negative emotions, and online-shopping addiction and hypothesizing the relationships between these variables. The third section describes the participants, instruments, and data-analysis methods for this study. This is followed by the results of the data analysis. The fifth section is a discussion of the results and of expectations for future research, and the final part is the conclusion. In summary, the following assumptions were made: (a) a positive relationship existed between students' academic procrastination and negative emotions; (b) a positive relationship existed between academic procrastination and online-shopping addiction; (c) a positive relationship existed between online-shopping addiction and negative emotions. Academic Procrastination Procrastination is the voluntary delay of an intended course of action despite the expectation of being worse off for the delay. At the same time, procrastination is an irrational act of delay and a failure of self-regulation, and people's attempts to regulate their emotions through procrastination can have counterproductive results (Tice and Bratslavsky, 2000). Procrastination is a form of self-regulatory failure rooted in unpleasant emotional states and an inability to control one's behavior in pursuit of short-term emotional fixes (Ramzi and Saed, 2019).
Steel found that procrastination is a common and harmful form of self-regulatory failure and applied temporal motivation theory to explain this self-regulatory behavior. Three researchers have illustrated the mechanisms by which procrastination occurs in terms of personal self-regulation and self-control failures. This fits in with the occurrence of adolescent procrastination that this study proposes to explore. Therefore, self-regulatory mechanisms and temporal motivation theory served as the theoretical basis for constructing this study's model. Some researchers have rethought the definition of procrastination and proposed that procrastination can be divided into active and passive procrastination. Passive procrastination is procrastination in its traditional sense, while active procrastination refers to planned behavior. Active procrastinators are more inclined to work under pressure and make the decision to procrastinate after deliberation (Chun Chu and Choi, 2005). Active procrastinators are also more emotionally stable and are comfortable with change when an unexpected situation arises, so they may be able to work more effectively than others (Chun Chu and Choi, 2005). In addition, active procrastination can indirectly influence creative thinking through creative self-efficacy, so researchers have argued that more attention could be paid to the positive effects of active procrastination. Academic procrastination occurs when a person is clearly aware of the need to complete an academic task but does not complete it within the expected time. This tendency and intention of an individual to postpone learning activities is believed to be a result of post-modern values in a post-industrial society. In addition, academic procrastination is related to self-regulated learning. Academic procrastination in college students has a significant negative impact on their subsequent academic performance. Academic procrastination also has a greater negative impact on younger students, and high levels of academic procrastination are more detrimental to academic performance (Goroshit and Hen, 2019). Academic procrastination is associated with the pursuit of perfectionism. Research suggests that levels of maladaptive perfectionism may be exacerbated if students are criticized for not meeting parental expectations. Students can develop a fear of failure, which ultimately leads to procrastination. Self-directed perfectionism and socially oriented perfectionism are two dimensions of perfectionism: self-directed perfectionism was negatively associated with academic procrastination, and socially oriented perfectionism was positively associated with procrastination (Closson and Boutilier, 2017). A study suggests that social and environmental factors also influence students' procrastination. Negative Emotions Positive emotions and negative emotions are the two main and relatively independent dimensions of the human emotion structure. Positive emotions broaden people's reserves of thinking and action and help them build sustained and long-lasting personal resources, including physical, intellectual, social, and psychological resources. Negative emotions are subjective painful emotions and a variety of aversive emotional states; they include anger, contempt, disgust, guilt, fear, and tension. These emotions can cause a person to have difficulty concentrating, paying attention to details, and understanding information (Rowe and Fitness, 2018).
Expressing positive emotions undoubtedly brings many benefits. Expressing negative emotions may also have positive outcomes: a person who expresses their negative emotions may be helped more by those around them. Researchers have studied the effects of negative emotions on human behavior. Negative emotions indirectly influence prosocial and aggressive behaviors by modulating emotional self-efficacy, while a person's depressive state exerts an inhibitory effect on the self-efficacy to express positive emotions. One researcher investigated the relationship between adolescents' sleep status and negative emotions. The results of the study showed that the shorter the sleep time, the lower the positive emotions and the stronger the negative emotions; based on this, it is suggested that improving the quality of sleep can reduce emotional disorders. Negative emotions are a trigger for emotional eating in female college students. Some research suggests that experiential avoidance may help in understanding the relationship between negative emotions and emotional eating in women. Experiential avoidance typically refers to the tendency to avoid maintaining contact with aversive private experiences. In addition, some studies have focused on the role of negative emotions in consumer behavior. It was found that negative emotions form part of the visitor experience. The experience of negative emotions can lead to a decrease in visitor satisfaction and also provide the possibility of transformation and self-development (Nawijn and Biran, 2018). Few studies have examined the relationship between academic procrastination and negative emotions. However, some studies have focused on the relationship between procrastination itself and emotions of anxiety and fear. A study by Solomon and Rothblum suggested that fear of task failure and aversion to academic tasks are major factors contributing to students' academic procrastination (Solomon and Rothblum, 1984). Therefore, schools should pay more attention to factors that can reduce student procrastination and cognitive avoidance behaviors, which is likely to reduce students' test anxiety. People who procrastinate frequently may become sensitive to the anxiety it causes, which may trigger panic attacks. Several studies have also explored the relationship between academic procrastination and anxiety. One study used statistical methods to demonstrate that there was a direct and significant relationship between these two phenomena: the longer the students in the sample procrastinated, the higher their anxiety levels, which may also lead to increased test anxiety (Araoz and Uchasara, 2020; Wang, 2021). Also, it has been noted that attention to and intervention in academic procrastination may help to reduce students' test anxiety. There are also studies that focus on the effect of negative emotions on academic procrastination. A study suggested that college students who have difficulty perceiving social support are more likely to have negative emotions in their daily lives, and they are more willing to engage in other, irrelevant activities in order to have positive emotional experiences, which eventually leads to procrastination. Other researchers believe that negative emotions motivate procrastination behaviors: students reported more procrastination behaviors after experiencing high levels of negative emotions, but procrastination behaviors did not predict changes in negative emotions (Pollack and Herres, 2020).
Therefore, we propose the following hypothesis: H1: There is a positive relationship between students' academic procrastination and negative emotions. Online-Shopping Addiction Internet addiction is an excessive or poorly controlled preoccupation, impulse, or behavior regarding Internet use that can lead to impairment or distress (Weinstein and Lejoyeux, 2010). Specific forms include the problematic use of the internet for such activities as the excessive viewing of internet videos or playing of online games. Scholars have developed different tools to assess such addictions; for example, the Chen Internet Addiction Scale (CIAS) was developed by Chen et al., and other assessment tools have been created to identify and measure internet addiction in adolescents and adults. Internet addiction is directly related to online-shopping addiction (Seung-Hee and Won, 2005). Online-shopping addiction is defined as purchasing behavior that is out of one's control due to a lack of self-monitoring (LaRose and Eastin, 2002). One study found that online shopping was the strongest predictor of Internet addiction. In addition, social applications all significantly increase the odds of students being addicted to Internet use. A concept relevant to the study of online-shopping addiction is impulse buying. A study distinguishes between traditional impulsive buying behavior and online buying behavior and suggests that online buying behavior will become an alternative to traditional buying and should also receive attention from researchers. A number of studies have now investigated the factors that influence online-shopping addiction. In one study, materialism was significantly and positively correlated with internet addiction, and a positive correlation was found between internet addiction and a tendency for impulsive online buying (Sun and Wu, 2011). There is also a relationship between online-shopping addictive behavior and hedonism (Gün and Doğan Keskin, 2016; Doğan Keskin and Gün, 2017). This hedonic impulsive behavior brings joy, relaxation, and happiness. Also, online platforms offering inexpensive products, a wide selection, and various promotions are related to addiction (Gün and Doğan Keskin, 2016). Several researchers have developed tools for assessing online-shopping addictive behaviors, for example, the Online Shopping Addiction Scale (OSAS), which is based on the general addiction model (Duong and Liaw, 2021). The research of Zhao et al. shows that the scale has good reliability. Few studies have focused on the relationship between academic procrastination and online-shopping addiction. However, some studies have focused on the relationship between general procrastination and Internet addiction or mobile-phone use problems. Procrastination mitigates the effects of depression on internet addiction, and a high level of procrastination has a significant positive association with internet addiction. At least one study has shown that procrastination is significantly and positively related to the problematic use of cell phones, and that students who procrastinated were more likely to use social media in class. Procrastination may be the proximal cause of cell phone addiction in adolescents (Wang J. et al., 2019). In a study exploring the relationship between stress, internet addiction, and procrastination, procrastination had a significant positive relationship with internet addiction.
From a different explanatory perspective, a study of student Facebook users (N = 345) suggested that procrastination is associated with low self-control. Both escapist values and procrastination lead individuals to choose temptations that produce pleasure, that is, to distract themselves with the internet and to become more addicted to it. Trait procrastination was positively associated with adolescents performing online multitasking and having inadequate control over Internet use, and adolescents with high levels of trait procrastination may be at increased risk for procrastination due to inadequate control over Internet use. In addition, a survey of 483 college students indicated that cell phone addiction had a direct predictive effect on academic procrastination and an indirect predictive effect through academic self-efficacy. Specifically, academic self-efficacy partially mediated and buffered the effect between cell phone addiction and academic procrastination. In summary, it is reasonable to offer the following hypothesis: H2: There is a positive relationship between academic procrastination and online-shopping addiction. Some studies have shown a relationship between online addiction and negative emotions, while others have shown a relationship between shopping addiction and negative emotions. Users often feel guilty about procrastinating when they overuse media. A study showed that smartphone addiction was positively related to adolescent depression. Hedonic online-shopping addiction can worsen a person's depression or even cause it to occur, and because of the monetary expenditures, interpersonal or family conflicts may occur (Doğan Keskin and Gün, 2017). Internet addiction is seen as a mental health problem, with social-anxiety avoidance behaviors being the strongest predictor of Internet addiction. A study has shown that internet addicts exhibit higher levels of social anxiety and depression, and lower self-esteem (Yucens and Uzer, 2018). Another study examined the effects of stress, social anxiety, and social class on adolescent Internet addiction and showed that Internet addiction was positively related to stress and social anxiety and negatively related to social class; social class indirectly influenced Internet addiction by moderating the relationship between stress and social anxiety. This review of the literature leads to the following hypothesis: H3: There is a positive relationship between online-shopping addiction and negative emotions. Research Design To further investigate the mediating and buffering effects of online-shopping addiction on academic procrastination and negative emotions, freshman students in Guangdong Province were selected as participants in this study. Participants were in the preparation period for the college entrance exams, a highly stressful phase, during the pandemic outbreak. The questionnaire included four parts: demographic information, the Online Shopping Addiction Scale, the Academic Procrastination Scale, and the Depression-Anxiety-Stress Scale, around which the participants had developed their initial manifestations. Due to epidemic prevention and control restrictions, an online questionnaire was administered to participants. We used SPSS 25.0 software, applied Harman's one-way test to check for common method bias, and calculated Pearson correlation coefficients to test the relationships among the variables. Participants This study was conducted in Guangdong Province, China.
The inclusion criterion for participants was adolescents who were preparing for college entrance examinations during the pandemic outbreak, and the exclusion criterion was non-Guangdong freshman students, so as to ensure the homogeneity and representativeness of the sample drawn. Therefore, we chose stratified sampling, a form of probability sampling. The sampling process was divided into three steps. Step 1: 5 students distributed 450 questionnaires to first-year students in 10 vocational and technical colleges. Step 2: 430 students completed the questionnaire, and the researchers communicated with them to confirm that they were from Guangdong Province. Step 3: The questionnaires of 423 students from Guangdong Province were retained for analysis. The ages of this cohort of students ranged from 15 to 22 years, with a mean age of 18.7 years (SD = 0.86). Of these, 118 (27.9%) were males and 305 (72.1%) were females. Before the study design was finalized, the researchers had conducted exploratory focus-group interviews with five volunteers to clarify the possible association between academic procrastination and negative emotions in students. A majority of participants indicated that they had exhibited some level of anxiety or stress during the COVID-19 outbreak. This study used a correlational design with an online questionnaire as the data collection method. The questionnaires were completed between November 10 and November 30, 2020. Five volunteers, either personally or by proxy, showed students the quick-response (QR) code for the questionnaire during breaks in an online university course. Students who volunteered scanned the QR code, went to the questionnaire screen, answered the questions, and then clicked on Submit. (A QR code is a readable barcode that contains information. A device such as a mobile phone or tablet scans the QR code with a camera, recognizes the binary data, and goes to a specific link. In China, QR codes are widely used to open specific link interfaces and various applications such as those for financial payments, identification, and information queries.) It is important to emphasize that the purpose of the research was described in detail by the researchers before the code was scanned, and all students filled out the questionnaire on a voluntary basis. Material The questionnaire used in this study consisted of four parts: demographic information, an Online-Shopping Addiction Scale, an Academic Procrastination Scale, and a Negative-Emotions Scale. The demographic information related to gender and age. The three scales were originally developed in English and were translated into Chinese for this study. To improve the quality of the translations, the back-translation method was used: the first researcher translated the English-language scale into Chinese, then the second researcher translated that version into English, after which the third researcher compared the original, translated, and back-translated versions to assess the accuracy of the translations. The translations were corrected and optimized before the questionnaire was finalized, which should have ensured the equivalence of the scales. Online Shopping Addiction Scale This study used the Online Shopping Addiction Scale developed by Zhao et al. The scale includes 17 self-report items. Each item is rated on a 5-point Likert scale (1 = totally disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = totally agree). Possible scores range from 18 to 90.
In this study, the internal consistency coefficient for the scale was 0.951. Tuckman Academic Procrastination Scale This study used the Tuckman Academic Procrastination Scale developed by Tuckman. This scale consists of 16 questions that measure a person's level of academic procrastination. In order to maintain consistency, this scale was converted from a 4-point to a 5-point scale, ranging from 1 (completely disagree) to 5 (completely agree). The conversion resulted in a range of possible scores from 16 to 80. In the current study, the internal consistency coefficient of the scale was 0.917. Depression-Anxiety-Stress Scale This study used the Depression-Anxiety-Stress Scale, as revised by Antony et al., to measure negative emotions. The scale has 21 items divided among three dimensions: depression, anxiety, and stress. To maintain consistency, this scale was shifted from a 4-point to a 5-point scale, ranging from 1 (completely disagree) to 5 (completely agree), so that possible scores ranged from 21 to 105. In the current study, the internal consistency coefficient of the scale was 0.959. Data Analysis SPSS 25.0 was used to process and analyze the data. The data collected were self-reported, and so, to ensure validity, common method bias was tested before data processing using Harman's one-way test. The testing examined the 54 items in the questionnaire related to the three variables. The results showed that 8 factors had eigenvalues greater than 1. The contribution of the 8 factors to the total variance was 70.14%, and the variance explained by the first factor was only 30.113%, which did not reach the critical criterion of 40% (Zhou and Long, 2004). Thus, there was no significant common method bias in this study. We next performed descriptive analysis, correlation analysis, and model testing of the data based on the research hypotheses. First, we examined the concentration and dispersion trends of the data through descriptive analysis. Then, we conducted correlation analysis among the variables to test the relationships among the independent, mediating, dependent, and moderating variables by calculating Pearson correlation coefficients. Based on the correlations, the research hypotheses were further tested by using the PROCESS (version 3.3) plug-in in SPSS to test the model's mediating and moderating effects. (The PROCESS plug-in was developed by Hayes specifically for path analysis-based moderating and mediating analyses and their combinations). Descriptive and Correlation Analysis The results of the descriptive analysis of academic procrastination, online-shopping addiction and negative emotions are shown in Table 1. The mean for academic procrastination was 45.5 with a standard deviation of 10.8; the mean for online-shopping addiction was 37.9 with a standard deviation of 12.0; and the mean for negative emotions was 48.5 with a standard deviation of 16.0. When the sample scores were above the mean, high levels of procrastination and addictive behavior were observed. When the sample scores were below the mean, there was a low level of procrastination and addictive behavior. Thus, the sample had varying degrees of procrastination and addictive behavior. The correlations of the three variables were assessed using Pearson's product-moment correlation coefficient, and the results are presented in Table 1. First, academic procrastination was significantly and positively correlated with online-shopping addiction (r = 0.176, p < 0.01).
Second, academic procrastination was significantly and positively correlated with negative emotions (r = 0.250, p < 0.01). Third, online-shopping addiction was significantly and positively associated with negative emotions (r = 0.358, p < 0.01). Analysis of Mediating Effects The PROCESS (version 3.3) software (model 4) was used to test the mediating effect of online-shopping addiction between the two variables, using academic procrastination as the independent variable and negative emotions as the dependent variable. The results indicated that academic procrastination significantly predicted negative emotions (β = 0.3716, t = 5.2888, p < 0.001). The predictive effect remained significant after adding online-shopping addiction as a mediating variable (β = 0.2819, t = 4.1744, p < 0.001). Academic procrastination had a significant positive predictive effect on online-shopping addiction (β = 0.1955, t = 3.8490, p < 0.001). The positive predictive effect remained significant after adding gender as a mediating variable (β = 4.1795, t = 3.2985, p < 0.001). Online-shopping addiction had a significant positive predictive effect on negative emotions (β = 0.4416, t = 7.2006, p < 0.001) (see Table 2). In addition, both the direct effect of academic procrastination on negative emotions and the mediating effect of online-shopping addiction had bootstrap confidence intervals (95%) with no zero between the lower and upper bounds (see Table 3). Discussion of the Results In this study, we developed a moderated mediation model of the relationship between academic procrastination and negative emotions in adolescents during the COVID-19 pandemic. It was found that (a) a positive relationship existed between students' academic procrastination and negative emotions; (b) a positive relationship existed between academic procrastination and online-shopping addiction; (c) a positive relationship existed between online-shopping addiction and negative emotions; and (d) online-shopping addiction mediated and buffered the relationship between academic procrastination and negative emotions. These results are consistent with the proposed hypotheses and previous findings. First, the findings are consistent with H1 and the existing literature, revealing that academic procrastination is positively associated with negative emotions. Sirois and Pychyl argued that procrastination may be best understood as a form of self-regulation failure that involves the primacy of short-term emotion repair and emotion regulation over the longer-term pursuit of intended actions. Rahimi and Vallerand demonstrated that, during the COVID-19 pandemic, negative emotions can be positively associated with academic procrastination, while positive emotions can prevent it, so people choose to focus on immediate happiness during a pandemic. On the other hand, strong and consistent predictors of procrastination were task aversiveness and the facets of self-control, distractibility, organization, and achievement motivation. If a person considers that they do not have the skills required to carry out a task and be successful, they will be more likely to postpone that task in order to prevent this skill deficit from manifesting (Quant and Sánchez, 2012).
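As a self-contained illustration of what the PROCESS model 4 analysis above computes (the a and b paths, the direct effect, and a percentile-bootstrap confidence interval for the indirect effect a×b), a minimal Python sketch is shown below. It is not the PROCESS macro and is run on synthetic placeholder data, so the variable names and generated numbers are hypothetical; with the real questionnaire totals it would reproduce the same style of output, where a 95% bootstrap CI that excludes zero corresponds to the "no zero between the lower and upper bounds" criterion reported in Table 3.

```python
import numpy as np

def ols_slope(x, y, covariate=None):
    """Slope coefficient(s) from an OLS regression of y on x (and an optional covariate)."""
    X = np.column_stack([np.ones_like(x), x] if covariate is None
                        else [np.ones_like(x), x, covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

def simple_mediation(x, m, y, n_boot=5000, seed=0):
    """Indirect effect a*b of x -> m -> y with a percentile-bootstrap 95% CI."""
    a = ols_slope(x, m)[0]                     # path a: x -> m
    b, c_prime = ols_slope(m, y, covariate=x)  # path b and direct effect c'
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    boots = np.array([ols_slope(x[i], m[i])[0] *
                      ols_slope(m[i], y[i], covariate=x[i])[0] for i in idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])  # mediation indicated if CI excludes zero
    return {"a": a, "b": b, "direct": c_prime, "indirect": a * b, "ci95": (lo, hi)}

# Placeholder data standing in for the three questionnaire totals (hypothetical values)
rng = np.random.default_rng(1)
procrastination = rng.normal(45.5, 10.8, 423)
shopping = 0.2 * procrastination + rng.normal(0, 11, 423)                 # mediator
emotions = 0.28 * procrastination + 0.44 * shopping + rng.normal(0, 14, 423)
print(simple_mediation(procrastination, shopping, emotions))
```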
Further, symptoms of panic disorder, social anxiety disorder, and health anxiety have been associated with the level of procrastination, and Araoz and Uchasara showed that the longer the period of procrastination, the higher the level of anxiety. The results of the present study also validated H2. First, the findings showed a positive relationship between academic procrastination and online-shopping addiction, which is consistent with the results of previous studies. Yang et al. suggested that destructive academic emotions and behaviors among university students, such as anxiety and procrastination, might partly be a consequence of poor self-control concerning their smartphone use. Procrastination may well be the result of an unwillingness to stop pleasure-seeking activities using smartphones online or offline. Individuals suffering from smartphone addiction may find it particularly difficult to stop using this device or to manage its disruptions (Zhang and Wu, 2020). This leads to procrastination before adolescents turn to their studies. Second, procrastination as a personality trait may be a risk factor for mobile-phone addiction. Procrastinators are characterized by low self-control and a preference for short-term rewards, and they are prone to become internet addicts because of the design features of their devices. More specifically, self-regulation can be used as an explanatory mechanism for this effect: procrastinators are unable to control their behavior and prefer to surrender important goals in favor of pleasurable short-term activities. Sirois and Pychyl argue that procrastination is strongly associated with short-term emotional repair and emotion regulation (Sirois and Pychyl, 2013). Aversive tasks can lead to anxiety, and avoiding them is a way to escape this negative emotion (Tice and Bratslavsky, 2000). The negative emotions that people experience after failure can also influence self-regulatory beliefs and behaviors, and performance goals, which would explain the different ability of different students to tolerate mistakes in the learning process. In conclusion, in order to prevent and reduce these behaviors, schools should focus on learning and test anxiety, motivation, and emotion-regulation issues in adolescents. The results are also consistent with H3. Studies have shown that hedonic shopping addiction can lead to negative emotions or even to depression. Because of online-shopping addiction and its associated financial costs, interpersonal or family conflicts may occur (Doğan Keskin and Gün, 2017). Impulse buying is fundamentally a problem of failing to delay gratification, and it leads to worse emotions, as it is still oneself who bears the consequences in the future (Sirois and Pychyl, 2013). In an experiment predicting a shopping excitement index, buying behavior was described as being used to regulate emotions and to alleviate or escape negative emotions; therefore, we think online shopping can be addictive. The desire to shop is replaced by feelings of guilt, and the pleasure gained from shopping quickly disappears: compulsive shoppers seem to cycle between full attention, ritualization, compulsive buying, and despair (Clark and Calleja, 2009). Finally, from the above we can see that the findings suggest that online-shopping addiction mediates and buffers the relationship between academic procrastination and negative emotions.
A study based on the compensatory-internet-use model and emotion-regulation theory concluded that procrastination mediates the relationship between perceived stress and internet addiction, which suggests that this article echoes the findings of previous studies. These results also provide a perspective on the possible cause of internet addiction among college students; namely, that individuals use the internet to avoid stress and to procrastinate. That is, individuals who have procrastinated are more likely to have a cell phone addiction caused by stress (Wang J. et al., 2019), because people experiencing negative emotions tend to engage in greater subsequent self-gratification and self-reward than people in neutral emotional states (Tice and Bratslavsky, 2000). As the theory of the primacy of short-term emotion regulation holds, the processes underlying procrastination are driven by a need to regulate the emotions of the present self at the expense of the future self (Sirois and Pychyl, 2013). Thus, online-shopping addiction can regulate and buffer against academic procrastination and negative emotions. Implications Theoretically, we have established a link between academic procrastination and negative emotions and deepened understanding of the impact of academic procrastination on negative emotions. In addition, we have shown that online-shopping addiction can mediate and buffer the effects of academic procrastination on negative emotions. This finding suggests that college students who are academic procrastinators can mitigate negative emotions by controlling their online-shopping behavior. This could enhance their ability to cope with challenges. Practically, the relationship between the three variables we proposed can help researchers better understand the mechanisms of academic procrastination and its impact on negative emotions among college students, thus providing support for the improvement of college students' learning effectiveness. Limitations and Future Directions There are some limitations in this study. First, this study used a cross-sectional design, and the study participants were all drawn from colleges in one province (Guangdong), so the study sample limits the generalizability of the findings. Future researchers could use a longitudinal research design and try to recruit participants from different regions and institutions. Second, the scale measures students' depression, anxiety, and stress; the present study treated negative emotional problems as the combination of these three. Future research could focus on the relationship between depression, anxiety or stress and the remaining two variables, which may lead to new findings. In addition, the range of negative emotions measured in this study was broad, and appropriate instruments could be selected to measure academic emotions in the future. Indirect and mediating effects are weakly represented in the study; therefore, optimizing the conceptual model structure is also a direction we will continue to investigate in depth in the future. Moreover, we could explore other possible mediating variables that have an impact on academic procrastination and negative emotions. Research is also needed to investigate how to guide students to minimize academic-procrastination behaviors in order to alleviate or reduce negative emotions. CONCLUSION We explored the relationship between the academic procrastination, negative emotions, and online-shopping addiction of college students during the COVID-19 pandemic.
The results indicate that students' level of academic procrastination positively influenced their levels of online-shopping addiction and negative emotions, and that their level of online-shopping addiction positively influenced their level of negative emotions. In addition, online-shopping addiction mediated the relationship between academic procrastination and negative emotions. In other words, with the mediating effect of online-shopping addiction, the higher the level of students' academic procrastination, the more likely they were to have a high score for negative emotions. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author(s). ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Capital Normal University. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS KW collected the data. QW and YX designed the research and analyzed the data. QW and ZK reviewed the literature and edited the manuscript. QW, YX, ZK, YD, and KW wrote the manuscript. All authors have read and agreed to the published version of the manuscript. FUNDING This research was funded by Beijing Office for Education Sciences Planning (Grant No. CEJA18064). |
Achieving an Efficiency Exceeding 10% for Fullerene-based Polymer Solar Cells Employing a Thick Active Layer via Tuning Molecular Weight Recently, the influence of molecular weight (Mn) on the performance of polymer solar cells (PSCs) has been widely investigated. However, the dependence of the optimal active-layer thickness of PSCs on Mn has not been reported yet, which is vital to solution printing technology. In this work, the effect of Mn on the efficiency, and especially on the optimal active-layer thickness, of PBTIBDTTS-based PSCs is systematically studied. The device efficiency improves significantly as the Mn increases from 12 to 38 kDa, and a remarkable efficiency of 10.1% is achieved, which is among the top efficiencies of wide-bandgap polymer:fullerene PSCs. Furthermore, the optimal thickness of the active layer is also greatly increased, from 62 to 210 nm, with increased Mn. Therefore, a device employing a thick (>200 nm) active layer with a power conversion efficiency exceeding 10% is achieved by manipulating Mn. This exciting result is attributed to both the improved crystallinity, and thus hole mobility, and the preferable polymer orientation, and thus active-layer morphology. These findings, for the first time, highlight the significant impact of Mn on the optimal active-layer thickness of PSCs and provide a facile way to further improve the performance of PSCs employing a thick active layer. |
Efficacy and safety of the SQ-standardized grass allergy immunotherapy tablet in mono- and polysensitized subjects The efficacy of single-allergen-specific immunotherapy in polysensitized subjects is a matter of debate. We therefore performed a post hoc analysis of pooled data from six randomized, double-blind, placebo-controlled trials (N = 1871) comparing the efficacy and safety of the SQ-standardized grass allergy immunotherapy tablet (AIT), Grazax (Phleum pratense 75 000 SQ-T/2800 BAU, ALK, Denmark), in mono- and polysensitized subjects. A statistically significant reduction in the mean total combined symptom/medication score (TCS) of 27% was demonstrated in actively treated subjects compared with placebo (P < 0.0001). This was not dependent on sensitization status (P = 0.5772), suggesting a similar treatment effect in mono- and polysensitized subjects (i.e. reductions of the TCSs of 28% and 26%, respectively, both P < 0.0001). Finally, a comparable and favourable safety profile of grass AIT was demonstrated in the two subgroups. Thus, no difference in efficacy and safety of single-allergen grass AIT was observed between mono- and polysensitized subjects. |
Homogeneously staining chromosome regions and double minutes in a mouse adrenocortical tumor cell line. We have used a number of banding techniques to analyze the chromosomal content of two sublines of the steroid-secreting mouse adrenocortical tumor cell line Y1. One of the sublines contains marker chromosomes with large "homogeneously staining regions" (HSRs). The other subline contains numerous double minutes, but no HSR. The presence of marker chromosomes shared by both sublines indicates their derivation from a common precursor. Both sublines are karyotypically stable, have similar growth rates, and exhibit positive staining for delta 5, 3 beta-hydroxysteroid dehydrogenase. |
Prevalence of Helmet-Induced Headache among Bikers Aim: To find the prevalence of helmet-induced headache among bikers. Methods: A cross-sectional study using convenience sampling was conducted on male bike riders in Lahore. After approval from the ethical committee, participants were selected on the basis of inclusion and exclusion criteria, and informed consent was taken. 102 participants filled out a self-made questionnaire. Data were analyzed using SPSS version 21 in the form of frequencies, means, standard deviations and pie charts. Results: The mean age of participants was 22.77±1.72 years. A total of 52 participants (51%) reported having headache. The prevalence of helmet-induced headache among bikers was 6 (11.5%). A total of 83% used a helmet occasionally and 18.17% reported using it always. For half of the participants (50%), the headache lasted 20-30 min after wearing a helmet, and 38.5% had headaches lasting for 1-5 days in a month. The majority had a stabbing type of pain (25%), and only 9.6% of them visited a hospital due to headache. Conclusion: There was a high prevalence of headache (stabbing and aching) among bikers, but most did not report headache specifically while wearing a helmet or after removing it. Keywords: Headache, helmet, prevalence, primary prevention |
The oil control valve is one of the key parts of a variable valve timing system. According to the way oil is supplied, existing oil control valves can be classified into two types: oil control valves with an end oil supply and oil control valves with a side oil supply.
As shown in FIG. 1 to FIG. 2, an existing oil control valve with an end oil supply includes two parts which are respectively a proportional electromagnet 1 and a hydraulic element 2. Wherein, the proportional electromagnet 1 includes a movable armature 10. The hydraulic element 2 includes: a valve body 20 with one end fixedly connected with the proportional electromagnet 1 and another end provided with an oil inlet I; a filter 21 having an annular body portion 21a and a strainer 21b fixedly disposed on an inner circumferential surface of the annular body portion 21a, wherein the annular body portion 21a is fixedly disposed on an outer circumferential surface of the end of the valve body 20 provided with the oil inlet I, the strainer 21b is located at an outer side of the oil inlet I so that hydraulic oil can flow into the valve body 20 from the oil inlet I after being filtered by the filter 21; and a pushrod 22, a movable piston 23, a compression spring 24, a spring retainer 25 and a circlip 26 successively disposed in the valve body 20 from inside out (i.e., along a direction C from the proportional electromagnet 1 to the oil inlet I).
Wherein, two ends of the pushrod 22 along an axial direction are fixedly connected with the movable armature 10 and the piston 23, respectively; two ends of the compression spring 24 stand against an end portion of the piston 23 which is far away from the pushrod 22 and the spring retainer 25, respectively, and the compression spring 24 is in a compressed state; and the circlip 26 is fixed on an inner circumferential surface of the valve body 20 and stands against the spring retainer 25 along the axial direction.
However, the existing oil control valve with an end oil supply has the following drawbacks: 1) the filter 21, the spring retainer 25 and the circlip 26 are separate from each other, which makes the structure of the oil control valve complicated and increases its cost. Further, the spring retainer 25, the circlip 26 and the filter 21 need to be assembled one by one, which makes the assembling process for the oil control valve cumbersome and time-consuming.
2) The annular body portion 21a of the filter 21 is made of steel, and the annular body portion 21a is in an interference fit with the valve body 20. Therefore, the annular body portion 21a needs to be pried up with tools to detach the filter 21 from the valve body 20. The filter 21 will be damaged during the detaching process, which makes the filter 21 unable to be reused.
3) The spring retainer 25 is formed by a punching process. However, it is hard to maintain high machining precision in the punching process. Therefore, in order to form a spring retainer 25 with high machining precision, the requirements for the punching process are very strict, which increases the manufacturing cost of the spring retainer 25. |
An overall performance index for wind farms: a case study in Norway Arctic region Wind farms (WFs) experience various challenges that affect their performance. Mostly, designers focus on the technical side of WF performance, mainly increasing the power production of WFs through improving their manufacturing and design quality, wind turbine capacity, and their availability, reliability, maintainability, and supportability. On the other hand, WFs induce impacts on their surroundings; these impacts can be classified as environmental, social, and economic, and can be described as the sustainability performance of WFs. A comprehensive tool that combines both sides of performance, i.e. the technical and the sustainability performance, is useful to indicate the overall performance of WFs. An overall performance index (OPI) can help operators and stakeholders rate the performance of WFs more comprehensively and locate the weaknesses in their performance. The performance model for WFs proposed in this study arranges a set of technical and sustainability performance indicators in a hierarchical structure. Due to the lack of historical data in certain regions where WFs are located, such as the Arctic, an expert judgement technique is used to determine the relative weight of each performance indicator. In addition, scoring criteria are predefined qualitatively for each performance indicator. The weighted sum method makes use of the relative weights and the predefined scoring criteria to calculate the OPI of a specific WF. The application of the tool is illustrated by a case study of a WF located in the Norwegian Arctic. Moreover, the Arctic WF is compared to another WF located outside the Arctic to illustrate the effects of Arctic operating conditions on the OPI. Introduction Wind energy investments in the Arctic region are appealing because of the higher availability of wind power, which is almost 10% higher than in other regions due to the higher density of air (Fortin et al.). Moreover, the Arctic region is sparsely populated, which makes it even more attractive for wind energy investments. However, the performance of wind farms (WFs) located in the Arctic faces a plethora of challenges. Most of these challenges are attributed to operating in severe weather conditions such as low temperatures, ice accretion on the blades and snow accumulation on roads. These weather-related challenges affect mainly the technical performance of WFs. For example, ice accretion on WT blades creates mass imbalances and instantaneous losses in power production, which, under certain conditions, can reach 30% of the power produced even in light icing events (Laakso and Peltola) or, in severe icing conditions, lead to a total shutdown of the wind turbine (WT). Technical performance is related to the technical functions of WFs, in terms of the amount of electricity generated (Koo et al.). It also refers to the quality of the power produced by the WF, as well as its capacity and availability performances. Availability performance can be described in terms of the reliability, maintainability and supportability of the wind farms (IEC). Figure 1 illustrates the proposed technical performance indicators. The quality performance indicator reflects the design and manufacturing quality of WTs and the WF layout (Zaki).
The availability performance indicator depends, for the most part, on the reliability, maintainability and supportability of the wind farm (IEC; Naseri and Barabady), and the capacity performance indicator reflects the maximum power delivered by the wind farm, considering the operating conditions in the respective region. The primary objective of this work is to devise a method for calculating the overall performance of WFs and to evaluate the mutual impacts between WTs and their surroundings, i.e. the impact of WTs on their surroundings and the impact of the surrounding environment on WTs. The impacts of WTs on their surroundings can be summarized into three categories, namely: social and safety impacts, environmental impacts, and economic impacts. According to Musango and Brent and to Kucukali, these three types of impacts can be grouped under the sustainability performance of WFs, as shown in Fig. 1. It is worth noting that many sustainability indicators could be included to describe the sustainability of WFs; however, these three indicators are described as the traditional pillars of sustainability (Diaz-Balteiro et al.). The social and safety impacts constitute hazards such as noise generated by the WTs during construction and operation, traffic on public roads caused by transporting large WT components, and ice fall and ice throw from WTs that can harm humans, animals and nearby structures (Mustafa et al.). Other concerns related to the social and safety impacts are, for example, the visual pollution that might detract from pristine views or hinder tourism, and concerns that WFs might interfere with the operation of military radar systems (Welch and Venkateswaran). In addition, there are claims that governments are violating the rights of indigenous communities by approving wind energy projects, causing cultural destruction. For example, constructing wind farms on Sámi lands in northern Scandinavia may be considered unethical and overtly political, because it might come across as a systematic dispossession of their lands and a lack of recognition of their rights (Lawrence and Moritz). The environmental impacts of WTs can be positive, such as carbon-free electricity production, no long-term waste and no need for cooling water; in these respects, WFs are environmentally benign. On the other hand, chemical de-icing used to remove ice from the blades of WTs, and bird and bat mortality caused by WTs, are examples of the negative impacts of WFs. However, the number of birds killed by WTs may be negligible compared to the number killed by fossil fuels and some other human activities (Sovacool). In addition, water pollution in some areas during the construction phase of WFs (Lu et al.) is another example of negative environmental impacts caused by WFs. The economic impacts are described as being crucial for wind energy investment in any country (Kucukali). Examples of these impacts are the job opportunities created by WF projects for local communities, and the stabilization and potential lowering of electricity prices, as the country will not be dependent on a single source to produce its electricity. This, however, depends on the cost of electricity produced by the WF. Most wind energy projects are subsidized by governments due to their high capital and operational costs. Without government subsidies, wind energy projects would yield negative returns, and investors would find it difficult to cover the cost of the risks involved (Welch and Venkateswaran).
However, if the capital costs of wind energy investments were reduced and the utilization rate of WTs, which is the percentage of time a WT can be in use during the 8760 h (365 × 24) of the year, were increased, wind energy projects would have positive returns on investment even without government subsidies. Furthermore, as sources of energy such as oil and natural gas become more expensive, wind energy becomes more competitive. Therefore, the accelerated technology development that we witness every day, together with the rise in oil and gas prices, will put wind energy on a short path to becoming financially self-sustaining and will have a positive economic impact on investors and societies. The proposed model combines the technical and sustainability performances and can be applied to model the performance of WFs located in cold-climate regions such as the Arctic, as well as other regions that are not characterized by cold-climate conditions. In this paper, this model is used to evaluate the overall performance of a WF in Arctic Norway. The majority of current studies on the performance of WFs in the Arctic focus on the effects of icing on WTs in terms of their structural behavior (Alsabagh et al.), the resulting power losses (Kilpatrick et al.), anti/de-icing technologies (Wei et al.; Dai et al.; Parent and Ilinca) and the risks caused by ice fall, ice throw and thrown blade parts (Bredesen and Refsum; Rastayesh et al.). These studies mostly focus on the technical performance of WTs. It is observed that an integrated approach covering both the technical and sustainability performances of WFs is lacking. The rest of this paper is organized as follows: in Sect. 2 the methodology adopted for calculating the OPI for WFs using the WSM, expert judgements, and the predefined scoring criteria is presented. Section 3 presents the application of the methodology to a WF located in Arctic Norway. The conclusions and findings of this work are presented in Sect. 4.

Weighted sum method for OPI calculation

There are several multiple-criteria decision-making methods that can be used in the decision-making process, such as the weighted sum method (WSM), the weighted product method (WPM), the analytical hierarchy process (AHP), the technique for order of preference by similarity to ideal solution (TOPSIS), etc. The common characteristic of these methods is that the analysis of the alternatives is based on determined criteria (Bgrc). The WSM, which is used in this paper, is one of the oldest and most widely used methods in multi-criteria decision-making (MCDM) (Triantaphyllou). For example, Stanujkic and Zavadskas used the WSM to introduce an approach that helps decision makers choose the best alternative, considering both the highest unit performance and the preferred performance, and Kucukali developed a risk scorecard to rank wind energy projects in Turkey using the WSM and expert judgement. In addition, Williamson et al. used the WSM to select the most appropriate low-head hydro-turbine alternatives by using quantitative and qualitative scoring. The basic idea of the WSM is to calculate the OPI as the sum of products of performance indicator relative weights and criteria scores, as in Eq. 1 (Stanujkic and Zavadskas): OPI = Σ_i (w_i × S_i), where w_i is the relative weight of performance indicator i and S_i is the criteria score for performance indicator i. Figure 2 shows the steps followed in calculating the OPI for WFs using the WSM. At first, the relative weight of each of the performance indicators shown in Fig. 1 needs to be determined.
In case of lack of such data, the relative weight of the performance categories is determined using the expert judgement technique, explained in Sect. 2.1. Secondly, a set of qualitative scoring criteria is developed to define the scores for each performance indicator. The scoring criteria reflect the different levels of performance at which a WF can operate. The scoring for each performance indicator can be divided into 4 levels, where level 1 reflects the minimum level of performance and level 4 is the highest. The scoring criteria are illustrated in Sect. 2.2. Thirdly, the performance index for each performance indicator is calculated using Eq. 1, where the relative weight is obtained from experts and the performance score is obtained from the scoring criteria table (Table 2 in Sect. 2.2), which is based on the characteristics of the selected WF. The same process is repeated to calculate the performance index for each indicator up to the overall performance index of the WF. Finally, we end up with an OPI value that reflects how good or degraded the performance of a specific WF is. This index helps WF operators and stakeholders identify weaknesses in performance, in order to take proper measures to alleviate them in cases where the overall performance index is below the acceptable limit. The steps are summarized in Fig. 2. A case study will be presented to demonstrate the application of this methodology.

Expert judgements

Wind energy applications in Arctic Norway are relatively new. For example, in 2010, the total installed wind energy capacity in Norway was 436 MW, with only 48 MW installed in the Arctic (Battisti). As such, long-term data on the performance of WTs in Arctic Norway is far from satisfactory, which emphasizes the need for experts' knowledge that can contribute significantly to determining the relative weight of each performance indicator. However, the expert judgement technique is indispensable even in situations where data is satisfactorily available, as the statistical treatment of data cannot replace expert judgement in the operational risk management process in hydropower plants (Mermet and Gehant), and the same holds for wind power plants. Expert judgement is recognized as a type of scientific data, and methods have been developed for treating it as such. This technique is typically applied when there is substantial uncertainty regarding the true values of certain variables (Colson and Cooke). It entails selecting experts with relevant experience (i.e. in wind energy) and communicating with them in order to elicit the needed information (i.e. the relative weight of each performance indicator). The elicitation process can involve simple correspondence, questionnaires, personal interviews (by telephone or in person) and various other combinations of interactions (Beaudrie et al.). In the elicitation process, each expert can be calibrated by giving his or her answer a certain weight that reflects the strength of that answer among the other answers. The calibration process can consider, for example, the number of years of experience the expert has: the more experience the expert has, the more weight his or her answer carries compared to other experts' answers; an example of this can be found in Naseri et al. In another approach, all experts can be treated the same, with equal importance given to their answers. For simplicity, the latter approach is the one used in this case study.
The selected group of experts in this study ranged from academics and professors at universities involved in wind energy technologies to operators, engineers, and managers at WFs in Arctic Norway. Experts were interviewed in person or through remote conference meetings. Other means of communication with experts were telephone and email. Experts were asked to participate in a questionnaire that aimed to assess the relative weights of the performance indicators defined in the proposed model in Fig. 1. In total, 12 experts participated in answering the questionnaire. It is extremely unlikely that experts will ever be in total agreement with one another when answering questions where uncertainty is substantial. The questionnaire consisted of 11 questions, covering all 11 performance indicators. The meaning and aspects of each performance indicator were explained to the experts for each question to avoid ambiguity. Experts were asked to assess the relative weight of each performance indicator qualitatively, by ranking each one from 1 to 10, where 1 indicated the lowest importance and 10 indicated the highest importance. Afterwards, the experts' rankings were summed for each performance indicator, as shown in Table 1. The average weight of each performance indicator (PI) was calculated by dividing the sum of the weight rankings from the experts by the number of experts (n), as presented in Eq. 2. To calculate the relative weight of each performance indicator, the resulting average weight for each indicator is divided by the total weight of its group of performance indicators. For example, the availability performance represents a group of performance indicators that includes the reliability, maintainability, and supportability performance indicators. In order to calculate the relative weight of reliability performance, the average weight of reliability, which is 8.08 as per Table 2, is divided by the sum of the average weights of reliability (R), maintainability (M) and supportability (S), which is equal to 23.54. The relative weight of reliability in that case is equal to 34%, as per Eq. 3: relative weight of reliability PI = (average weight of R) / Σ(average weights of R, M, S) = 8.08 / (8.08 + 7.64 + 7.82) ≈ 0.34. The same applies to the maintainability and supportability performance indicators, with a relative weight equal to 33% for each, and to the rest of the performance indicators. Figure 3 summarizes the relative weight of each performance indicator assessed by the experts. According to the experts, there is a slight difference between the technical and sustainability performances in terms of their relative weights; this was indicated by assigning a higher relative weight (54%) to technical performance. Through discussion, the experts explained that improving the technical performance will also improve the sustainability performance aspects, i.e. the social, economic, and environmental aspects. Therefore, the technical performance was assigned a higher relative weight. It can be seen from Fig. 3 that all three performance indicators under the availability performance, i.e. the reliability, maintainability, and supportability, have almost the same relative weight. The experts assigned the availability performance a higher relative weight (40%) compared to the capacity and quality performances, which had relative weights of 32% and 28%, respectively, as shown in Fig. 4.
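As a quick check of the arithmetic in Eqs. 2 and 3, the snippet below (Python) reproduces the relative weights for the availability group. The individual expert rankings are not reproduced in the text, so it starts from the reported average weights (8.08, 7.64 and 7.82); the small function for Eq. 2 only shows how those averages would be obtained from per-expert rankings if they were available.

```python
def average_weights(rankings_by_expert):
    """Eq. 2: average weight of an indicator = sum of expert rankings / number of experts."""
    # Not called below: the per-expert rankings are not published, only the averages.
    n = len(rankings_by_expert)
    indicators = rankings_by_expert[0].keys()
    return {name: sum(expert[name] for expert in rankings_by_expert) / n for name in indicators}

def relative_weights(avg_weights):
    """Eq. 3: relative weight = average weight / sum of average weights within the group."""
    group_total = sum(avg_weights.values())
    return {name: avg / group_total for name, avg in avg_weights.items()}

# Reported average weights for the availability group (reliability, maintainability, supportability).
availability_group = {"reliability": 8.08, "maintainability": 7.64, "supportability": 7.82}

for name, weight in relative_weights(availability_group).items():
    print(f"{name}: {weight:.3f}")
# reliability: 8.08 / 23.54 ≈ 0.34, matching the 34% quoted above;
# maintainability and supportability each come out at roughly one third.
```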
The experts have assessed that the environmental and economic performance indicators represent more than 70% of the total relative weight under sustainability performance, with the social performance indicator having a relative weight of 29%. The next step, after determining the relative weights, is to define the scoring criteria for each performance indicator. The selected score from the predefined criteria is mainly dependent on the performance characteristics of the selected WF.

Performance scoring criteria

A set of criteria was defined for each performance indicator, with specific scores from 1 to 4, as shown in Table 2, which was established based on a literature review, measured data, documented evidence, and human reasoning. The selection of criteria scores depends on the specifications and performance characteristics of the WF under study, which can include technical characteristics, location, and the WF's impact on its surroundings. An example of the use of scoring criteria was shown by the Japan International Cooperation Agency (JICA), in which scoring criteria were used to assess the environmental and societal impacts of infrastructure projects around the world. As can be seen from Table 2, the scores for the availability, technical and sustainability performance indicators are not defined. This is because these performance indicators are functions of the performance indicators under them. In order to obtain the scores of these undefined performance indicators, the WSM can be used. As an example, Eq. 4 shows the method for calculating the criteria score for availability performance, which is equal to the sum of products of the relative weights of the Reliability (R), Maintainability (M) and Supportability (S) indicators and their criteria scores, taken from Table 2 for a specific WF: S_availability = w_R × S_R + w_M × S_M + w_S × S_S, where w_R, w_M and w_S are the relative weights of reliability, maintainability, and supportability, respectively, and S_R, S_M and S_S are their criteria scores. Similarly, the overall score of a WF can be calculated as a function of its technical and sustainability performance indicators using Eq. 5: S_overall = w_tech × S_tech + w_sus × S_sus, where w_tech and w_sus are the relative weights of the technical and sustainability performance indicators, respectively, assessed by the experts, and S_tech and S_sus are the criteria scores calculated using equations similar to Eq. 4 for the technical and sustainability performances.

Calculating OPI for Fakken wind farm: a case study

The Arctic region considered in this case study is the northern part of Norway, which experiences warmer temperatures than locations further south in the overall Arctic region, such as in Canada or the United States. The coastal part of Arctic Norway is recognized to be ice free. Therefore, some WFs installed close to the coast, such as Fakken WF, do not need to equip their WTs with anti-icing systems to prevent ice accretion on the blades. Fakken WF is an onshore WF located on a small island called Vannøya in the north of Troms and Finnmark County, Norway. The WF is sited on a small hill at the southwestern edge of the island, at an altitude of 40 to 200 m above sea level (Birkelund et al.). A mountain range is located to the west of the WF and two large fjords lie to the south, forming a complex terrain surrounding the wind farm. (Table 2 excerpt: the environmental impact criteria range from the wind turbines being placed on birds' migration routes, reindeer grazing areas or near an ecologically sensitive area, through an Environmental Impact Assessment Report prepared as a desk study for a wind farm not located in the vicinity of wetlands, protected natural areas, caves or birds' migration routes, and a report or study supported by field studies, up to a detailed report in which biodiversity issues are addressed, the environmental analysis is supported by field surveys, and a monitoring system is established at the site for relevant environmental parameters. The economic impact criteria include the price of electricity generated by the wind farm being 26-50% or 1-25% higher than what households in the country usually pay to purchase electricity.)
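Before the indicator scores for Fakken WF are assigned below, the aggregation in Eqs. 1, 4 and 5, together with the 0-1 normalization introduced later as Eq. 6, can be sketched in a few lines of Python. This is an illustrative sketch rather than code from the study; the relative weights and criteria scores are those reported for Fakken WF in the following subsections, and because the published weights are rounded to whole percentages, the computed overall score and OPI come out close to, but not exactly at, the reported 2.84 and 61.3%.

```python
def weighted_sum(weights, scores):
    """Eqs. 1, 4 and 5: index = sum over indicators of (relative weight x criteria score)."""
    return sum(weights[name] * scores[name] for name in weights)

# Relative weights (Sect. 2.1, Figs. 3-4) and criteria scores (Sect. 3.1.1) for Fakken WF.
availability = weighted_sum(
    {"reliability": 0.34, "maintainability": 0.33, "supportability": 0.33},
    {"reliability": 2, "maintainability": 3, "supportability": 4},
)  # close to 3, as reported

technical = weighted_sum(
    {"availability": 0.40, "capacity": 0.32, "quality": 0.28},
    {"availability": availability, "capacity": 2, "quality": 4},
)  # close to 3, as reported

sustainability = weighted_sum(
    {"social": 0.29, "environmental": 0.36, "economic": 0.35},
    {"social": 3, "environmental": 3, "economic": 2},
)  # = 2.65, as reported

overall = weighted_sum(
    {"technical": 0.54, "sustainability": 0.46},
    {"technical": technical, "sustainability": sustainability},
)  # close to the reported 2.84

# Eq. 6: normalize the overall score (1..4 scale) to a 0..1 overall performance index.
opi = (overall - 1) / (4 - 1)
print(f"overall score = {overall:.2f}, OPI = {opi:.1%}")  # close to the reported 61.3%
```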
Fakken WF performance indicators scores

Through communication with the WF manager and operator, we were able to obtain 28 service reports and more than two years of alarm logs containing the operation and maintenance data of one wind turbine (WT No. 8), for the period from January 2018 until July 2020. Based on the analysis of this data, the performance indicator criteria scores were selected from Table 2 and calculated using variations of Eq. 1, similar to Eq. 4, as can be seen in Fig. 6. The justification for the selection and calculation of the scores is given in Sect. 3.1.1.

Justification of scores

Reliability. By reviewing the service reports for the reference WT (WT No. 8), it was found that the WT experienced three main failures during 2019 that led to its operation being halted: a hydraulic pump failure, a generator bearing failure, and a defective bearing on the generator's fan. Based on that, a score of 2 was assigned to the reliability performance of that WT. Moreover, an overall regular annual inspection of the WT took place twice during the period from January 2018 until July 2020. The regular inspections took place in August 2018 and August 2019, with no major failures reported in either of the inspections. Maintainability. According to the service reports, the mean times needed to replace the hydraulic pump, the generator's bearing and the generator's fan bearing were 10, 21 and 2 h, respectively. When referring to the scoring criteria in Table 2, each of these repair times corresponds to a different criterion score: the hydraulic pump has a score of 3, the generator's bearing is assigned a score of 2 and the generator's fan bearing is assigned a score of 4. Therefore, by taking the average of these scores, the maintainability of the WT can be assigned a value of 3. Supportability. Both the hydraulic pump and the generator bearing failures were repaired on the same day they occurred, which means that the mean downtime of the WT per failure is less than 25 h. Referring to Table 2, the supportability score is assigned a value of 4. Availability. The availability criteria score is a function of the relative weights and criteria scores of the reliability, maintainability and supportability performance indicators. By applying Eq. 4, the calculated availability criteria score is equal to 3. Quality. The quality of the manufactured WTs is high. Vestas, the WT manufacturer, is a well-known pioneer in manufacturing, selling, installing and servicing WTs. Fakken WF is monitored remotely by Vestas, and in case of failure, Vestas takes care of the required maintenance procedure. Therefore, the quality of the spare parts used is high.
The selected model of WT (V90-3 MW) is an improved design that provides more power without an appreciable increase in size, weight, and tower loads (Vestas). The design of the WF layout is based on research and measurements of wind speed, humidity, temperature, and other factors, which are still being monitored today. Moreover, highly efficient software was used to analyze the measured data. Therefore, the quality performance is assigned a score of 4. Capacity. The amount of energy produced by the WF throughout the year is estimated at 130 GWh (Troms Kraft). Divided by the maximum amount of energy the WF would have produced at full capacity, which is estimated at 473 GWh, this gives a capacity factor of 27.5%. Based on that, the capacity performance is assigned a score of 2. Technical performance. The technical performance score can be calculated as a function of the relative weights and criteria scores of the availability, capacity and quality indicators, by applying an equation similar to Eq. 4. The resulting technical performance score is 3. Environmental impact. The WF is not located on bird migration routes and does not represent a threat to endangered species in the Arctic. Still, the WF was built on an important winter grazing area for reindeer. However, the data showed that reindeer density within the wind farm area did not change significantly during and after the construction of the wind farm (Tsegaye et al.). The effects on reindeer spatial use during and after WF development were negligible, according to the same study. However, some significant changes in the reindeer's use of the area were noticed that might have been caused by human activities during certain construction stages of the WF. Based on that, the environmental impact score assigned to Fakken WF is 3. Economic impact. In the European Free Trade Association (EFTA) Surveillance Authority (ESA) report (Sanderud and Monauni-Tmrdy), dated 16 March 2011, regarding the funding offered to Troms Kraft Produksjon AS to construct Fakken WF, Enova SF, a company owned by the Ministry of Climate and Environment in Norway, announced that the price of electricity from Fakken WF, calculated based on a six-month average of three-year forward contracts, would be NOK 0.34/kWh. Comparing this price of electricity to the average price paid by households in Norway during the same period, i.e. the three years following the construction of the wind farm (2012, 2013 and 2014), as taken from Statistics Norway (SSB), the electricity generated by Fakken WF was found to be 8% more expensive. An estimate of the levelized cost of energy produced by Fakken WF was made by Mustafa et al. The cost estimate shows that the WF produces energy that is 25% more expensive than what households in Norway normally pay. However, households in Norway pay a unified price for electricity, whether it comes from wind energy or from hydropower, which is the main source of electricity in Norway. Therefore, the economic impact of Fakken WF has a score of 2. Social impact. The WF is located at a remote site away from residential areas, so the noise generated by the WTs does not affect the local community. The WTs are not equipped with anti/de-icing systems, as ice rarely accretes on them. Therefore, the risk of ice throw from the WTs is negligible; this was confirmed when speaking to the manager of the WF. Moreover, the WF does not stop or limit local communities' ability to utilize the surrounding lands and gain a livelihood.
However, some claims surfaced from the local community regarding the effects of the WTs on the reindeer's use of the WF area, but these claims were disproved by Tsegaye et al. Based on that, the social impact score is assigned a value of 3. Sustainability performance. The sustainability performance score can be calculated as a function of the relative weights and criteria scores of the environmental, economic, and social impacts, by applying an equation similar to Eq. 4. The resulting sustainability performance score is 2.65. Overall WF performance score. The overall performance score is a function of the relative weights and scores of the technical and sustainability performances. By using Eq. 5, the resulting overall performance score of Fakken WF is equal to 2.84.

Fakken WF overall performance index

The proposed OPI is a normalized value of the overall WF performance score, which was calculated using Eq. 5. The value of the overall performance score is normalized to lie between 0 and 1. This is done by subtracting the lowest attainable score, which is 1, from the calculated overall performance score and dividing the result by the difference between the highest and lowest attainable scores, as shown in Eq. 6: OPI = (overall performance score - minimum score) / (maximum score - minimum score). The resulting OPI represents an absolute value that can help operators and stakeholders at a specific WF decide whether the overall performance of that WF is acceptable or not. In case the resulting OPI is deemed unacceptable, the performance indicators that contribute to lowering the overall WF performance can be easily located. Moreover, the resulting OPI can be expressed qualitatively by defining a qualitative scale, as shown in Table 3. Based on that, the 61.3% OPI can be described as good performance. If the decision is made to improve the OPI of Fakken WF, it can be seen by referring to Fig. 5 that the sustainability performance scores lower than the technical performance. Therefore, improvements should be focused on the WF's sustainability performance. Moreover, it is the economic performance indicator that has the lowest score among the sustainability performance indicators. This can be attributed to the high operation and maintenance (O&M) costs, which increase the cost of energy produced by the WF. Based on that, it can be proposed that more effort is required to improve the O&M activities. Another advantage of using the OPI is that it can be calculated for multiple WFs that share similar characteristics, such as WT brand, capacity, location, etc. The OPI can help compare the overall performance of these WFs, or their specific performance indicators, and therefore rank them according to how high or low their performances are. For example, the OPI of Fakken WF can be compared with that of other WFs located in Arctic Norway, such as the Nygårdsfjellet and Kvitfjell/Raudfjell WFs. Based on the resulting OPI values, decision-makers can decide which WFs need to be improved to provide better performance and which performance indicators need more focus. In order to assess the effects of Arctic operating conditions on the calculated OPI of Fakken WF, the same OPI quantification methodology is applied to a WF located in a non-cold-climate region, in Turkey. The Kozbeyli WF in Turkey has a higher technical performance than Fakken WF, with a technical performance criterion score of 3.73 out of 4, due to higher reliability and capacity performances.
This would have led to an OPI value of nearly 75% if the sustainability performance of Kozbeyli WF were equal to that of Fakken WF, which is not the case. This is due to a lower environmental performance, as Kozbeyli WF is located close to an environmentally protected area, bird migration routes, and endangered species. In addition, Kozbeyli WF is 1.3 km away from a village of touristic value, which has reduced the social acceptance and performance of the WF (Kucukali) and, consequently, reduces the sustainability performance criteria score of the WF to 1.7 out of 4. As a result, the OPI of Kozbeyli WF is nearly 60%, which is mainly due to the lower sustainability performance of the WF.

Conclusions

The OPI is an important tool in providing a measure of the overall performance of WFs, especially in cases where performance data is scarce. The overall performance of WFs comprises the technical and sustainability performance indicators. The technical performance consists of the quality, capacity, and availability performance indicators. The weighted sum method (WSM) is one of the most widely used methods for multiple-criteria decision making (MCDM). The use of the WSM involves summing the products of the performance indicators' relative weights and their criteria scores. Due to data scarcity, the relative weight of each performance indicator was estimated using the expert judgement technique. Experts estimated that the technical performance had a higher relative weight (54%) than the sustainability performance (46%). The rest of the performance indicators had relative weights estimated by the experts as follows: Quality (28%), Capacity (32%), Availability (40%), Reliability (34%), Maintainability (33%), and Supportability (33%). Moreover, the sustainability performance indicators had the following relative weights: social and safety impacts (29%), environmental impacts (36%), and economic impacts (35%). The proposed methodology was applied to an onshore WF in Arctic Norway, Fakken WF. The assigned and calculated criteria scores for the performance indicators, using Table 2, were as follows: Reliability (2), Maintainability (3), Supportability (4), Availability (3), Quality (4), and Capacity (2). The calculated technical performance score is equal to 3. The sustainability performance indicators had the following criteria scores: social and safety impacts (3), environmental impacts (3), and economic impacts (2). The calculated sustainability criteria score is equal to 2.65. Consequently, the calculated total criteria score for the WF was found to be equal to 2.84. The calculated OPI of the WF is 61.3%, which was deemed good when compared against the proposed qualitative scale. The OPI indicated that the economic performance of the WF needs to be improved, which can be achieved by lowering the O&M costs and thus the cost of energy of the WF. Moreover, in order to understand the effects of Arctic operating conditions on the performance of WFs, the OPI of Fakken WF was compared to the OPI of Kozbeyli WF, a WF located in a non-cold-climate region. The comparison concluded that Kozbeyli WF had higher technical performance in terms of its reliability and capacity, due to the absence of Arctic operating conditions. However, the location of Kozbeyli WF lowers its sustainability performance, due to its negative impacts on the environment and society, resulting in an OPI value (60%) lower than that of Fakken WF.
Funding Open access funding provided by UiT The Arctic University of Norway (incl University Hospital of North Norway). This research is funded by UiT the Arctic University of Norway. Availability of data and material Data and material used in the paper are available upon request. Declarations Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The status of the former Mid-Cities Mall property will be discussed during a closed session by the Manitowoc Common Council at 5 p.m. Tuesday.
MANITOWOC – The Mid-Cities Mall and the debated county sales tax will be topics of discussion during this week’s government meetings.
Here are the top five agendas in Manitowoc County for this week.
Mid-Cities Mall. The status of the former Mid-Cities Mall property will be discussed under a closed session in the next Manitowoc Common Council Committee of the Whole meeting at 5 p.m. Tuesday. Council members will also discuss a contract with Lakeshore Humane Society and potential property acquisition during the closed session. The meeting will take place at City Hall in the Council Chambers.
County Sales Tax. The Manitowoc County Finance Committee will continue its discussion of the 0.5 percent sales tax during the next meeting at 4:30 p.m. Monday. The meeting will take place in the County Administration Building. Other items on the agenda include matters related to the sale of tax-deeded property and an update by County Comptroller Todd Reckelberg on the county finances.
Two Rivers School Board. The Two Rivers School Board will meet at 6:30 p.m. Monday in Room 218 of the Two Rivers High School. Items on the agenda include acceptance of donations, approval of the 2018 delegate assembly resolutions and updates from school administration.
Lester Library. The Board of Trustees for the Lester Public Library will meet at 6 p.m. Tuesday in the library’s Community Room. Items on the agenda include reports from city, county and school representatives and approval of the 2018 library budget.
Manitowoc Public Safety. Adult entertainment taverns, the plan for blighted properties and the status of properties identified as subject to acquisition and demolition will be the topics of discussion for Monday’s Public Safety Committee. The committee will meet at 6 p.m. at City Hall in the Council Chambers.
Synthesis and Antileukemic Activity of Novel 4-(3-(Piperidin-4-yl)Propyl)Piperidine Derivatives To explore the anticancer effect associated with the piperidine framework, several (substituted phenyl){4-piperidin-1-yl}methanone derivatives 3(a-i) were synthesized. Variation in the functional group at the N-terminal of the piperidine led to a set of compounds bearing an amide moiety. Their chemical structures were confirmed by 1H NMR, IR and mass spectral analysis. Among these, compounds 3a, 3d and 3e were endowed with antiproliferative activity. The most active compound in this series was 3a, with nitro and fluoro substitution on the phenyl ring of the aryl carboxamide moiety, which inhibited the growth of human leukemia cells (K562 and Reh) at low concentration. Comparison with the other derivative (3h), based on results from the LDH assay, cell cycle analysis and DNA fragmentation, suggested that 3a is more potent at inducing apoptosis.
1. Field of the Invention
This invention relates to photosensitive compositions. More particularly, the invention relates to photosensitive compositions which have a high resolving power and high reproducibility and which are suitable for use in the formation of relief images and in photo-etching processes.
2. Description of the Prior Art
Many systems are known in which a photosensitive synthetic resin is coated on a suitable support or substrate as a thin layer, and then the coated layer is exposed to light through a mask whereby the unexposed portion of the coated layer is removed by dissolution in a solvent leaving the exposed material behind. This type of procedure has been found useful in many industrial processes such as printing, precision processing, printed circuits, the manufacture of integrated circuits (IC) and the like. However, the conventional photosensitive resin systems have several disadvantages when used in these applications, and a number of improvements have been proposed to overcome these disadvantages. These improvements have been particularly applicable to processes in which fine line patterns must be reproduced such as in the photofabrication field, the electronics field, and particularly in the manufacture of integrated circuits, and LSI.
Certain factors affect the coating characteristics of the material which is coated. The most important is to achieve a coating which forms a thin uniform membrane on substrates such as metal plates, glass plates, silicon wafers and the like, without the presence of pin holes in the membrane. The key factor determining the characteristics of the coating step is the combination of the photosensitive resin used with the solvent in the photosensitive composition, and a need continues to exist for photosensitive resins which have a high reproducibility and high resolving power.
Interactions between fusidic acid and penicillins. Summary Interactions were studied between fusidic acid and each of several penicillins in their effect on both penicillinase-positive and penicillinase-negative strains of Staphylococcus aureus. Estimation of the number of staphylococci that survived overnight exposure to the antibiotics, alone and in combination, showed three types of interaction. In the commonest type, exhibited by more than half the penicillinase-positive and almost all the penicillinase-negative strains, there was two-way antagonism; more staphylococci survived overnight incubation in the presence of fusidic acid plus a penicillin than in the presence of either agent alone. Further evidence that penicillin antagonised the action of fusidic acid against these strains was provided by scanning electron microscopy, which revealed that the cell-wall collapse that followed the action of fusidic acid was inhibited by the presence of a penicillin. In the second type of interaction there was one-way antagonism of penicillin by fusidic acid; the fewest survivors were recovered after incubation with a penicillin alone, more from the mixture and most from fusidic acid alone. The remaining strains showed indifference, in that the effect of the more bactericidal agent, which against some strains was fusidic acid, prevailed. Even when the effect of penicillin on the bulk of the bacterial population was antagonised, the presence of penicillin always prevented the emergence of fusidic acid-resistant mutants.
Nokia has invited members of the press to a conference on May 14 where they plan to reveal the next chapter in the Lumia story. The Finnish handset maker didn’t outline exactly what we will see but rumors point to a Windows-based tablet, among other things.
Sources indicate the slate could run either Windows 8 or Windows RT. Other rumored specifications include a 10.1-inch display operating at a lowly 1,366 x 768 resolution, a 1GHz processor (likely a dual-core chip) and 1GB of RAM. All of this points to a mid-range device at best which may lean more towards Windows RT. The Lumia tablet reportedly measures 256.6×175.3×9.7mm and weighs in at 676g.
In the same respect, Nokia may use the London-based event to launch a new smartphone instead. Recent rumors suggest this could be the Lumia 920 successor, codenamed Catwalk. This phone is expected to be much lighter and thinner than current offerings as it will be constructed of aluminum instead of the typical polycarbonate.
If not the Catwalk, then perhaps we may see Nokia’s Windows Phone 8 handset with a true PureView image sensor inside, codename EOS. Such a device was first rumored to be in the works back in January. It would use an image sensor similar to the 41-megapixel unit found in the 808 PureView.
We’ll keep a close eye on Nokia’s event as they could be prepared to unveil any of these devices – or perhaps something totally new that we didn’t see coming. Any bets on what we might see next month?
Nissan in Europe has unveiled a version of the Leaf coated in paint that uses ultraviolet energy absorbed during daylight in order to glow at night. That's right. It's a glow-in-the-dark Nissan Leaf.
Nissan Leaf is an affordable electric car that launched in 2010, and now it is being showcased with glow-in-the-dark paint. Nissan has partnered with inventor Hamish Scott, who created the spray-applied coating that absorbs UV energy during the day, and their collaboration has resulted in the Leaf being able to glow for up to 10 hours at night.
Glowing car paint and wraps have long been available, but Nissan said its ultraviolet-energised paint was created especially for the Leaf and is unique due to its "secret formula" made up of entirely organic materials. It contains a rare natural product called Strontium Aluminate, for instance, which is solid, odorless, and chemically and biologically inert.
Nissan said it is the first car maker to ever directly apply glowing paint, but what's even more unique is that the paint can supposedly last for a quarter century. Unfortunately, though, you can't currently order a glowing Leaf from a dealer.
Watch Nissan's video above to learn more about the glowing Leaf. Also, for those who are interested, Nissan's new paint is technically called Starpath. Trippy, right?
Technical aspects of a new S-band atmospheric profiler A Transportable Atmospheric RAdar, TARA, has been developed at the International Research Centre for Telecommunications-transmission and Radar, IRCTR. State of the art but commercially available technology has been integrated to build a sensitive, versatile research radar primarily intended for atmospheric research. The system is based on the FM-CW principle and is fully solid-state. This yields improved system lifetime in comparison to tube-based systems. The antennas are dual linearly polarized for polarimetric applications. Multiple beams at 15° offset in two orthogonal directions in combination with the Doppler capabilities of the system are used for measuring three-dimensional wind fields. In this paper attention is paid to the technical realization of the system and some measurements are shown using the Doppler and multi-beam capabilities.
People are just not willing to pay for music anymore. According to IFPI (International Federation of the Phonographic Industry), recorded music revenue peaked at $38bn worldwide in 1999, collapsed down to $16 billion (2011), edged up somewhat the year after, only to fall back down again to $15bn last year (2013).
Total recorded music industry revenues in 2013 were less than half of their 1999 peak. Fewer artists are getting record deals, labels don't have as many resources to promote artists, and most artists don't earn nearly what they used to. Although the most successful artists still live lives of luxury, the music industry doesn't have the same glamorous, rockstar perception that it once did.
There is no doubt that piracy has contributed significantly to the decline of music sales. However, the movie industry has the same piracy problems and is booming, with further growth expected. According to PwC, filmed entertainment revenue is expected to grow from $88bn in 2013 to $110bn by 2018. Yes, it can be argued that people are more willing to pay for movies because of the type of content they are, and that fewer movies are produced each year than music releases, but the point is that the industry is both booming and growing. Additionally, according to an Ofcom report, those who illegally acquire content are the ones who actually spend more money on legal downloads!
During an average three-month period, infringers tend to spend more than non-infringers on legal digital content ($26 vs. $16). A similar pattern, although less pronounced, is seen when we include wider spend on offline content-related purchases ($110 vs. $83). Furthermore, the top 10% and top 20% of infringers tend to spend the most, in contrast to the bottom 80%, whose lower spend is more in keeping with non-infringers. - Ofcom
The ones who are "stealing" music are the ones who are actually paying more for it; they are your best customers. Still, the fact that music piracy is so prominent does contribute to lackluster revenues. However, if someone hadn't pirated certain music, it doesn't necessarily mean they would have paid for it instead. It most likely means they never would have heard it. We've reached a point where people expect music to be free.
There are lawsuits and complaints around the level of royalties that streaming services such as Pandora and on-demand services such as Spotify pay out to rights holders. The streaming services seem like an easy target as they are the primary channel for music consumption, but most of these services already pay so much in expenses that they are unprofitable! The main thing that is overlooked is that, at the end of the day, the money for music comes from the consumer. All you need to do is follow the money. When music industry revenues were sky high, it was because consumers were willing to pay for it. Now, the same consumers who used to shell out $13 for an album or even $1 for an iTunes song download aren't even willing to pay $10 a month for a Spotify premium account to access 25 million songs played on demand! So to save the music industry, the question is: what is going to get the consumer to be willing to actually pay for music again?
What do artists expect?
When you take into account the way that content is consumed now, what do artists expect?
Streaming/on demand services may be blamed for not paying enough royalties, but no one is forcing artists to put their songs on Spotify. It's just how the market now listens to music. If an artist could sell their music without the streaming services, they are more than welcome to. The streaming services are often looked at as the enemy, but they are the ones delivering the music to the consumers.
Many artists are more than happy to have their songs get airplay for free on terrestrial radio, but AM/FM radio stations only need an ASCAP/BMI/SESAC license to play music, meaning they only pay performance royalties to the songwriters and publishers, not royalties on the sound recording to the record labels/recording artists. In fact, most record labels give their music away for free to the terrestrial radio stations as promotion! In addition, terrestrial radio is potentially playing a song to millions of listeners, compared to streaming radio, which plays a single stream to a single listener. Airplay on terrestrial radio reaches a lot more ears.
There is a key difference between "streaming" radio and "on demand" radio. Streaming radio includes services such as Pandora where you set a radio station that plays music and you can not choose to play a song. On demand services such as Spotify allow you to actually choose which songs you want to play. So why is streaming "internet" radio perceived as any different than terrestrial radio? It is just the new way that people listen to the radio in the digital age.
So let's take a look at what artists are actually paid by streaming services. Gryffin Media put together this awesome infographic for Digital Music News that gives a great breakdown:
So let's break down the averages:
Artists, on average, earn $0.00217 per stream
A song needs to be streamed 456 times to make $0.99, the typical price of an iTunes download
According to the infographic, Spotify, which is a typical reference point for most artists, pays an average of $0.003 per stream, which equates to 330 streams being equal to one $0.99 download.
However, on Spotify's website, they claim the spread is $0.006 to $0.0084 for an average of $0.0072 or 137 streams per $0.99. Spotify also does not pay on a per stream basis. Their formula takes into account their monthly subscription revenues and an artist's overall popularity.
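Spotify does not publish the exact formula, but payout schemes like this are commonly described as pro-rata: a royalty pool (a share of subscription and advertising revenue) is divided among rights holders in proportion to their share of total streams. The Python sketch below illustrates that idea; the pool size, payout ratio and stream counts are made-up assumptions for illustration, not Spotify figures.

```python
def pro_rata_payouts(royalty_pool, streams_by_artist):
    """Split a royalty pool in proportion to each artist's share of total streams."""
    total_streams = sum(streams_by_artist.values())
    return {artist: royalty_pool * count / total_streams
            for artist, count in streams_by_artist.items()}

# Hypothetical month (all figures are assumptions, not Spotify's):
subscription_revenue = 1_000_000   # dollars collected from subscribers and ads
payout_ratio = 0.70                # assumed share of revenue paid out as royalties
pool = subscription_revenue * payout_ratio

streams = {
    "artist_a": 5_000_000,
    "artist_b": 1_000_000,
    "rest_of_catalogue": 94_000_000,
}

for artist, amount in pro_rata_payouts(pool, streams).items():
    print(f"{artist}: ${amount:,.0f}  (${amount / streams[artist]:.4f} per stream)")
```

One consequence of a pure pro-rata split is that the effective per-stream rate is the same for everyone in a given month (the pool divided by total streams), which is one reason quoted per-stream figures vary from month to month and from service to service.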
Now let's analyze what it means. There are two key differences between streams and purchases/downloads we need to look at:
When you purchase a download, you pay up front for the ability to listen to the song an unlimited number of times
When you listen to a stream on Spotify or other streaming service, the royalty is allocated on a pay as you go plan
If someone were to purchase the song for $0.99, would they listen to it at least 456 times? 330 times? 137 times? Also keep in mind that artists do not receive the full $0.99 from a download.
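A quick way to sanity-check those numbers is to compute how many streams it takes for streaming royalties to match a download. Here is a minimal Python sketch using the per-stream rates quoted above; the 60% artist share of a download is an assumption for illustration only (stores and labels take their cuts before the artist is paid), so the second column is indicative rather than exact.

```python
# Per-stream artist royalty rates quoted above.
rates = {
    "cross-service average": 0.00217,
    "Spotify (infographic)": 0.003,
    "Spotify (own stated average)": 0.0072,
}

download_price = 0.99
artist_share_of_download = 0.60  # assumption for illustration; not a quoted figure

for name, rate in rates.items():
    to_match_full_price = download_price / rate          # 456, 330 and ~137 streams
    to_match_artist_cut = download_price * artist_share_of_download / rate
    print(f"{name}: {to_match_full_price:.0f} streams to match $0.99, "
          f"about {to_match_artist_cut:.0f} to match a "
          f"${download_price * artist_share_of_download:.2f} artist cut")
```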
Let's compare the numbers in terms of RPMs, or revenue per 1,000 impressions, a common metric used across the digital advertising industry. According to the infographic, streaming RPMs earn artists an average of $2.17 and are as high as $8.10 to $10 on Deezer and Rdio, respectively. On Spotify, the average RPM is as high as $7.20, according to their website. The table below, from Monetize Pros, shows that average CPMs/RPMs for typical digital impressions ranged between $2.80 and $3.00 as of January 2014. So by comparison, music doesn't have the lowest RPMs for digital impressions and is roughly in line with industry standards.
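The RPM comparison is just the per-stream rate scaled to 1,000 plays. A short Python sketch, using the per-stream figures quoted in this post; the Deezer and Rdio per-stream rates are back-calculated from the stated RPMs rather than quoted directly, so treat them as approximations.

```python
# RPM (revenue per 1,000 plays) is simply the per-stream royalty times 1,000.
per_stream = {
    "average across services": 0.00217,      # -> $2.17 RPM, as quoted above
    "Spotify (own stated average)": 0.0072,  # -> $7.20 RPM
    "Deezer (implied)": 0.0081,              # approximation from the stated ~$8.10 RPM
    "Rdio (implied)": 0.0100,                # approximation from the stated ~$10 RPM
}

display_ad_rpm_range = (2.80, 3.00)          # typical digital CPM/RPM benchmark cited above

for name, rate in per_stream.items():
    rpm = rate * 1000
    low, high = display_ad_rpm_range
    verdict = "above" if rpm > high else ("below" if rpm < low else "within")
    print(f"{name}: RPM = ${rpm:.2f} ({verdict} the ${low:.2f}-${high:.2f} display-ad benchmark)")
```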
Here is a breakdown by industry:
In the end, however, the market dictates prices and allocates resources. The growing trend shows that people are clearly looking to "rent" or "subscribe" to music rather than buy it.
Simple supply and demand economics can also be used to help look at the situation. Because of the ability of artists and musicians to upload their music to any digital medium and market themselves virally, there is more music readily available than ever before. Economics tells us that as supply increases, price decreases. The internet allows any artist to get their music out there, but as a result there is a huge supply, and the competition of so much music forces prices lower. Now this isn't necessarily a bad thing. If an artist creates great music and markets themselves, they will still get a great fan base, and ultimately get paid very well. The internet allows artists with truly great music to reach enough fans that they will be able to sustain themselves.
With so much competition, the market will ultimately dictate which artists are popular and how much revenue they earn.
From TIME, here are the 10 most popular songs on Spotify from the week before Thanksgiving, 2013, with an estimate of how much money they’ve generated in royalties since they were released as of December, 2013:
The Monster (Eminem) – 35.1 million streams, $210,000 – $294,000
Timber (Pitbull) – 32.0 million streams, $192,000 – $269,000
Royals (Lorde) – 65.3 million streams, $392,000 – $549,000
Counting Stars (OneRepublic) – 57.7 million streams, $346,000 – $484,000
Hey Brother (Avicii) – 46.5 million streams, $279,000 – $391,000
Wrecking Ball (Miley Cyrus) – 60.4 million streams, $363,000 – $508,000
Roar (Katy Perry) – 64.6 million streams, $388,000 – $543,000
Wake Me Up (Avicii) – 152.1 million streams, $913,000 – $1.3 million
Hold On, We're Going Home (Drake) – 47.1 million streams, $283,000 – $396,000
Burn (Ellie Goulding) – 53.8 million streams, $323,000 – $452,000
When you keep in mind that these revenues are for individual songs, rather than albums, and are from just one platform, for a specific time period, it confirms that the most popular songs are still earning artists substantial revenues. Do these figures seem unfair to you?
What about live music?
There has been tremendous growth in live music and concerts, with many indicating that this is where artists will earn the bulk of their revenue. So much so that the record companies have added a large chunk of it into their infamous 360 deals. According to a Pollstar analysis, the concert industry grossed a record $5.1bn in North America in 2013.
Historically, the goal of touring was to promote album sales, but with an explosion of music festivals in recent years, live concerts have become a major focal point for artists to earn revenue. Revenues from live concerts are split between the concert promoters (such as Live Nation), artists, and labels.
Is this a sustainable approach? Live Nation, the biggest player in the concert promotion industry, earned a whopping $4.5bn in revenue from concerts in 2013, according to its annual report. However, it operated at a loss of $39.6 million. Concerts are very expensive to produce and often operate at break-even or at a loss. This is because, historically, touring was used as promotion rather than as a profit source.
Big-name artists, however, still get paid big bucks to perform at festivals. Jam bands and EDM DJs rely very heavily on revenues from their live shows. Phish is a prime example, with a strong cult-like following built on a touring philosophy modeled after The Grateful Dead. Jam bands and EDM artists are very similar in that they provide a different experience at every show. Phish actually turns every live show into a potential music sale by recording every show and selling the downloads. The Grateful Dead allowed anyone to record their live shows for free to share the music. They allowed free music sharing well before the days of Napster or the internet.
Artists who can master the art of touring and live shows can earn significant income, but in general, touring and live shows need to be taken in conjunction with music sales and other revenues. Up-and-coming artists, especially, are not making big bucks performing live. Live music on its own does not seem capable of restoring the entire industry.
So what is going to save the industry?
I wish I knew
Now here is the big question that has thousands of industry experts scratching their heads. Again, if you follow the money, you see that the real question is, what is going to get the consumer to pay for music again? Although I don't have the answer, we have posed a few questions that came out of some heated discussions around the Feature.fm office to help spark some brainstorming:
Keeping up with the changes in technology
As technology changes, history tells us that those who are resistant get left behind. Embrace new technology.
Consumers no longer need to buy full albums to get the songs they want. They are able to buy the songs they care about piecemeal. Is there an opportunity to save the album? When an artist creates an album, isn't that the product? Not the individual songs, but the collection of songs into one product? How can the industry entice people to purchase full albums again? Maybe there is an opportunity to revolutionize what individuals actually get with an album purchase to be more in touch with new technology and mediums. An album has the ability to provide way more than just a collection of songs: possibly music videos, maybe some commentary, and an artistic album cover. It can be so much more interactive.
In contrast, maybe artists need to re-evaluate what the product is that they're offering. If the market is telling you that it prefers to just snag the few songs people care about, you can repurpose your product offering and music promotion strategy to fit the current model. Singles used to be used as a draw to purchase the full album, but people don't need to do that anymore.
Because of the way users now consume music, it's necessary for artists to provide and promote music in platforms such as Spotify, Pandora, etc., but don't confuse streaming radio with on-demand music. Streaming radio is just the new way that people listen to the radio. It is really the platform that is best used for promotion of your music rather than actual sales. Now as users are asked to subscribe to these services, a portion of that revenue should absolutely go back to the artists. Is the price right though? How many albums did users used to purchase per year? Is it more or less than paying $10 per month for Spotify or Rdio? What about $5 per month for Pandora where you can't play music on demand? Is it possible that charging less for subscriptions will actually get more people to sign up, which will provide more total revenue from consumers, and ultimately more revenue to give out to artists?
A freemium model
We compare this to the millions of app developers. App developers may spend a year developing an app that they believe is an amazing product that everyone will use. They upload it to the app store for people to download for free. Now just because they created something they believe is great and put it out there, it doesn't mean the consumers necessarily want to use it. How much should they earn from a few thousand downloads realistically? However, if the app is a success and people love it, then potentially millions of people will catch on and the app developers can now use the app that they gave away for free to earn revenue! This is a perfect comparison to musicians.
Should you give your music away for free?
Windowed and exclusive releases
We just saw the first of its kind with Apple's iTunes getting exclusive releases from Beyonce and U2 to promote their music. Windowed releases can be compared to how movies are released and are a unique music promotion strategy. First, movies are exclusively offered in theaters for a premium price, then they move to DVD, then to On Demand, then to streaming services, then to premium channels such as HBO and Starz, and finally are available for free with commercials on cable and TV. Will it make sense for bigger artists who already have high demand to release albums exclusively on certain services such as iTunes and Spotify before releasing them elsewhere? What if a major artist only released an album to paying users of a streaming service?
A change in the psychology of paying for music
This goes along with a freemium model, but a change in the perception of paying for music could get consumers to pay far more than they currently do. Why do people readily give money away to Kickstarter campaigns? People are apt to give money away for causes they want to support, whether it's Kickstarter or charity. But when they are purchasing something, all of a sudden they start to analyze the value they get in return and whether the purchase is necessary. With the emotional connection that people have to music and artists, does it make sense to give them your music for free and ask them to help support you? On Kickstarter, someone is promised that they will receive one unit of a product for donating $X to help support the project. Is this any different from selling one unit of a product for $X? Maybe taking the approach of a street performer with an open guitar case to a grand scale will earn you more revenue if you create great music!
Add new revenue streams from brands
These are interesting opportunities to earn revenue from brands, but this route may carry the stigma of "selling out", as artists would be forced to partner with brands.
Where else can the industry earn revenue for artists? There is a growing trend for brands to get more involved in music. In addition to traditional sponsorships, we see brands that actually invest in artists' careers themselves and take a real interest in the growth of an artist. Whether it's Red Bull Records, Hardrock Records, Pepsi, or Coca-Cola, brands are all seeing the value in having more focused promotion on music. The brands have the money to spend. They are also investing in up-and-coming artists, as it is more cost-effective to sign a lower-level artist and then help them grow.
Packaging music with other products. Is there an opportunity to package a music download or album with another product that is in line with the artist? It would help to promote the product and to provide revenue to the artist for each unit sold.
Make yourself a business
Artists need to look at themselves as more than just music makers. They are their own businesses and can expand into additional areas to add revenue streams. Jay-Z is a prime example of an artist who opened his own record label, started a clothing company, started a sports agency, owns clubs, and has truly become an entrepreneur. When you are a growing artist, you are building a brand. You can use that brand to build other revenue streams.
Use live shows to really promote and add sales
As referenced above, jam bands and EDM artists have mastered earning big revenue from live shows. Taking a lesson from the historical use of live shows to promote music, you can take that promotion to new levels with the progress of technology. Allow fans to record your shows, whether professionally or on video, and to upload your music as much as possible. The more your music spreads, the more fans you will have! You can also record your live shows and sell downloads of every show you perform.
Final thought
It might be difficult to get consumers to pay what they were paying when the music industry was at its peak, but can the industry get consumers to at least pay more than what they currently pay?
Case Studies
Here are some great case studies from a report by Floor 64 called "The Sky Is Rising" published on Techdirt.
Here are two videos of Dispatch explaining why they loved Napster and crediting it for their success:
The Mental Capacity Act 2005: Implications for health and social care providers in England and Wales Abstract This paper considers the implementation of the Mental Capacity Act 2005. The focus is upon its application to providers of health or social care in the UK. Readers are invited to consider the policy intention to both protect and promote the interests of vulnerable people, while developing an understanding of their formal responsibilities under the new legislation. All providers of care or treatment will become increasingly involved in the assessment of mental capacity in relation to decision making and the delivery of day-to-day care to people who may lack mental capacity. The implementation stages of the Act offer a number of opportunities to prepare staff, revise governance arrangements and anticipate the present or future needs of service users and patients. |
Detection of circulating tumour DNA is associated with inferior outcomes in Ewing sarcoma and osteosarcoma: a report from the Childrens Oncology Group Background New prognostic markers are needed to identify patients with Ewing sarcoma (EWS) and osteosarcoma unlikely to benefit from standard therapy. We describe the incidence and association with outcome of circulating tumour DNA (ctDNA) using next-generation sequencing (NGS) assays. Methods A NGS hybrid capture assay and an ultra-low-pass whole-genome sequencing assay were used to detect ctDNA in banked plasma from patients with EWS and osteosarcoma, respectively. Patients were coded as positive or negative for ctDNA and tested for association with clinical features and outcome. Results The analytic cohort included 94 patients with EWS (82% from initial diagnosis) and 72 patients with primary localised osteosarcoma (100% from initial diagnosis). ctDNA was detectable in 53% and 57% of newly diagnosed patients with EWS and osteosarcoma, respectively. Among patients with newly diagnosed localised EWS, detectable ctDNA was associated with inferior 3-year event-free survival (48.6% vs. 82.1%; p=0.006) and overall survival (79.8% vs. 92.6%; p=0.01). In both EWS and osteosarcoma, risk of event and death increased with ctDNA levels. Conclusions NGS assays agnostic of primary tumour sequencing results detect ctDNA in half of the plasma samples from patients with newly diagnosed EWS and osteosarcoma. Detectable ctDNA is associated with inferior outcomes. INTRODUCTION Ewing sarcoma and osteosarcoma are the most common bone malignancies of childhood and adolescence. Approximately 70-75% of patients with either localised Ewing sarcoma or osteosarcoma are expected to survive their disease with multiagent chemotherapy regimens and local control of the primary tumour. 1-4 While a range of clinical prognostic factors (e.g., tumour site and response to therapy) have been evaluated in these diseases, identification of the 25-30% of patients with localised disease with inadequate outcomes remains challenging. Development of circulating prognostic biomarkers in patients with localised disease is a high priority. Ewing sarcoma is characterised by hallmark translocation events (most commonly EWSR1/FLI1). Prior studies have evaluated the prognostic impact of fusion transcript detection in the peripheral blood or bone marrow using reverse transcription-polymerase chain reaction, yet have not shown consistent prognostic value. 8,9 Likewise, measures of Ewing sarcoma tumour cells in the peripheral blood and bone marrow by flow cytometry were not prognostic. A range of circulating biomarkers have been evaluated in osteosarcoma, though none yet validated. Circulating tumour DNA (ctDNA)-based assays hold promise as potentially important peripheral biomarkers. Successful utilisation of ctDNA for disease prognostication and association with response to therapy in patients with carcinomas has relied on the identification of highly recurrent single-nucleotide variants (SNVs). 16 Pediatric solid tumours are less amenable to such approaches, because these malignancies lack recurrent SNVs. www.nature.com/bjc Ewing sarcoma and osteosarcoma are at opposite ends of the spectrum of cancer genomic complexity. Ewing sarcoma is characterised by a simple translocation-driven genome, with STAG2 and TP53 loss-of-function mutations found in a minority of tumours. 
The majority of Ewing sarcoma tumours express an EWSR1/ETS translocation with a patientspecific intronic breakpoint, precluding the use of an assay that detects a single breakpoint across patients. Prior groups have detected ctDNA in patients with Ewing sarcoma using patientspecific digital droplet PCR (ddPCR) or hybrid capture nextgeneration sequencing (NGS). In this study, we utilised a custom hybrid capture NGS assay, termed TranSS-Seq, which we previously validated to detect ctDNA from patients with Ewing sarcoma. 23 The osteosarcoma genome is characterised by complex translocations and copy number changes. 8q gains are relatively common, may reflect MYC copy number gain/amplification, and may confer an inferior prognosis. 24 Prior attempts to identify ctDNA in the peripheral blood of patients with osteosarcoma utilised tumour biopsy sequencing to create probes for ctDNA detection and targeted NGS of commonly mutated genes. 25,26 Ultra-low-pass whole-genome sequencing (ULP-WGS) is a NGS method capable of identifying the complex structural variants seen in osteosarcoma. 27 These divergent patterns of genomic aberration (translocation associated vs. complex structural changes) are common in pediatric malignancies and provide potential avenues for detection of ctDNA. In this context, we conducted a retrospective cohort study to evaluate two NGS ctDNA methods capable of ctDNA detection without available tumour sequencing in patients with these diseases. We hypothesised that ctDNA would be detectable in blood samples and that the presence and level of ctDNA would be associated with clinical outcomes in patients with newly diagnosed localised disease. When possible, we controlled for previously described clinical features associated with outcomes in these diseases. Finally, we leveraged these techniques to study additional tumour characteristics (STAG2 and TP53 mutations in Ewing sarcoma; 8q gain in osteosarcoma) in ctDNA. Patient eligibility and sample collection Ewing sarcoma cohort. Patients in the Ewing sarcoma cohort were required to have a pathologic diagnosis of Ewing sarcoma and be enrolled on the COG Ewing sarcoma biology study AEWS07B1. Patients included in the primary analysis were required to have newly diagnosed localised disease. Samples from patients who presented with newly diagnosed metastatic disease or recurrent disease were analysed as separate cohorts. For each patient, ctDNA was analysed from a single blood sample drawn within 28 days of diagnosis or relapse and prior to the start of therapy. Each participating centre obtained a blood sample in an EDTA tube that was shipped overnight at room temperature to the University of California, San Francisco. On arrival, all samples were centrifuged, plasma was isolated, and then frozen at −70°C until ctDNA analysis. The median plasma volume for this cohort was 2 mL (range, 0.75-2.0). Osteosarcoma cohort. Patients in the osteosarcoma cohort were required to have a new pathologic diagnosis of localised osteosarcoma and were enrolled on the COG osteosarcoma biology study AOST06B1. Each patient had a blood sample collected in an EDTA tube prior to the start of therapy. Plasma was isolated on-site and frozen at −70°C before being shipped to Nationwide Children's for storage prior to ctDNA analysis. The median plasma volume for this cohort was 2 mL (range, 0.75-6.75). Both cohorts. All patients signed written informed consent at the time of enrollment to AEWS07B1 or AOST06B1. 
Separate approval for this retrospective use of patient samples and clinical data was obtained from the Dana-Farber/Harvard Cancer Center institutional review board. ctDNA analysis Cell-free DNA was extracted from plasma samples using the QIAamp Circulating Nucleic Acid Kit (Qiagen). Quantification of total cell-free DNA in ng/mL was performed using Quant-iT PicoGreen dsDNA Assay Kit (Thermo Fisher Scientific). Contamination of the sample with high-molecular weight DNA was determined by Bioanalyzer (Agilent) and an SPRI clean-up was performed to select for ctDNA if necessary. In total, 37 (39%) samples from patients with Ewing sarcoma and 2 (3%) patients with osteosarcoma underwent SPRI clean-up. Up to 40 ng of DNA were used for library preparation using the KAPA Hyper Prep Kit (Kapa Biosystems). Barcoded adapters were ligated during manual library preparation. Libraries were assessed by Bioanalyzer and quantified for pooling using the MiSeq Nano flow cell. For detection of Ewing translocations, sequencing libraries were enriched using the Agilent SureSelectXT Hybrid Capture Kit and a validated custom bait set targeting intronic regions of genes commonly involved in sarcoma translocations, including EWSR1, FUS, CIC, and CCNB3 and the coding regions of TP53 and STAG2. This approach, termed TranSS-Seq, allows for the detection of translocations involving these genes and any translocation partner as well as coding mutations in TP53 and STAG2. Post-enrichment libraries were quantified and sequenced with intended unique coverage at target regions of >500. The average measured coverage at enrichment sites for all samples tested by this approach was 579 (range 151.8-1311.2). For ULP-WGS for osteosarcoma samples, barcoded sequencing libraries were pooled and sequenced on an Illumina HiSeq 2500 to achieve an anticipated average coverage between 0.2 and 1 for the whole human genome. Samples for both TranSS-Seq and ULP-WGS were de-multiplexed, aligned, and processed using Picard tools, BWA alignment tool, and GATK tool. Identification of targeted translocations by TranSS-Seq was performed using BreaKmer. 31 To quantify the number of translocation reads and wild-type reads for every sample, each sequencing read was realigned to either the human reference genome or a custom sequence containing the patientspecific EWSR1/ETS translocation based on sequence homology. Percent ctDNA was calculated based on the expectation that each cancer genome contains one translocated and one wild-type EWSR1 allele, while normal genomes contain two wild-type EWSR1 alleles: % ctDNA = T/(((W − T)/2) + T), where T is the number of translocation reads and W is the number of wild-type reads. In a previous study, we demonstrated that ctDNA levels determined with TranSS-Seq were highly correlated with experimental serial dilution experiments and levels measured by patient-specific ddPCR. We also demonstrated that TranSS-Seq has an estimated sensitivity for detection of Ewing sarcoma ctDNA levels at or below 1.5% of total cell-free DNA. 23 ULP-WGS analysis was performed using the Broad Institute's ichorCNA algorithm, with manual curation of results to confirm tumour percentages. 27 Previous studies demonstrate that ULP-WGS can be used to identify ctDNA in patients with copy numberaltered tumours. Serial dilution experiments validated that this approach can detect and accurately quantify ctDNA when constituting as little as 3% of a cell-free DNA sample. 
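For concreteness, the percent-ctDNA formula above can be expressed as a short function. This is only an illustrative sketch of the calculation described in the Methods, and the read counts in the example are hypothetical.

```python
def percent_ctdna(translocation_reads, wildtype_reads):
    """Tumour fraction from read counts at the EWSR1 locus.

    Assumes each tumour genome contributes one translocated and one wild-type
    allele, while each normal genome contributes two wild-type alleles,
    i.e. %ctDNA = T / (((W - T) / 2) + T).
    """
    t, w = translocation_reads, wildtype_reads
    if t == 0:
        return 0.0
    tumour_genomes = t                 # one translocated allele per tumour genome
    normal_genomes = (w - t) / 2.0     # remaining wild-type reads arrive in pairs
    return 100.0 * tumour_genomes / (tumour_genomes + normal_genomes)

# Hypothetical counts: 120 translocation reads, 1,480 wild-type reads -> 15.0% ctDNA
print(percent_ctdna(120, 1480))
```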
23 Independent variables The primary independent variable was ctDNA coded as positive or negative ('ctDNA positivity') for detectable fusion ctDNA in the Ewing sarcoma cohort or detectable copy number alterations in the osteosarcoma cohort. Percent ctDNA and total cell-free DNA Detection of circulating tumour DNA... DS Shulman et al. (ng/mL) were analysed as separate continuous variables and provided a secondary independent variables for analysis. Dependent variables The following variables obtained from AEWS07B1 and AOST06B1 were used to characterise patients: age at study enrollment; sex; whether the sample was drawn at the time of initial diagnosis or at relapse (relevant for Ewing sarcoma cohort only); stage; primary site; and vital status. Age was dichotomised for multivariate analysis to <18 or ≥18 for patients with Ewing sarcoma and <14 or ≥14 for patients with osteosarcoma. 6,32 Tumour size was measured as largest diameter in a subset of patients with Ewing sarcoma treated on a COG clinical trial (NCT01231906) and dichotomised according to established prognostic size criteria of <8 cm or ≥8 cm. 1 The primary endpoint was event-free survival (EFS) and was defined as time from enrollment to first episode of disease progression, second malignancy, or death, with patients without event censored at last follow-up. Overall survival (OS) was defined as time from enrollment to death, with alive patients censored at last follow-up Statistical analysis This binary predictor variable 'ctDNA positivity' was tested for association with clinical and demographic features using Fisher's exact tests and t tests as appropriate. Cell-free DNA quantities were tested for association with clinical features using Wilcoxon's rank-sum tests. Cell-free DNA was analysed as a continuous variable and tested for correlation with percent ctDNA using the Spearman's correlation coefficient. EFS and OS were estimated by Kaplan-Meier methods with 95% confidence intervals (CIs). Potential associations between EFS or OS with ctDNA positivity were tested with log-rank tests. We used Cox proportional hazards models of EFS and OS to assess the prognostic impact of the continuous ctDNA and cell-free DNA secondary predictors and to assess prognostic impact of ctDNA positivity independent of other prognostic factors in these diseases. A global test of proportional hazards was used to confirm the proportional hazards assumption. All p values are two-sided and a p value < 0.05 was considered statistically significant. All statistical analyses were performed with Stata ®. Patient characteristics We analysed ctDNA in 100 blood samples from 98 unique patients with Ewing sarcoma. Samples not drawn at the time of diagnosis or relapse (n = 4) and subsequent samples drawn from the same patient with a sample from an earlier timepoint (n = 2) were excluded. The Ewing sarcoma analytical cohort therefore included 94 patients (Table 1). We analysed ctDNA in 75 blood samples from 75 unique patients with newly diagnosed, localised osteosarcoma, each with a single blood sample taken at the time of diagnosis. Three patients presented with osteosarcoma as a secondary malignancy and were excluded a priori. The osteosarcoma analytical cohort therefore included 72 patients with primary osteosarcoma (Table 1). 
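The survival analyses themselves were run in Stata. Purely to illustrate the steps described above (Kaplan-Meier estimation, log-rank testing, and Cox proportional hazards modelling), a rough Python equivalent using the lifelines package might look like the sketch below; the data frame, column names, and values are hypothetical and not study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: time to event/censoring, event indicator,
# binary ctDNA detectability, and a dichotomised age covariate.
df = pd.DataFrame({
    "efs_months": [12, 36, 48, 20, 60, 15],
    "event":      [1, 0, 1, 1, 0, 0],
    "ctdna_pos":  [1, 0, 0, 1, 0, 1],
    "age_ge_18":  [0, 1, 0, 1, 0, 1],
})

# Kaplan-Meier estimates by ctDNA status
kmf = KaplanMeierFitter()
for status, sub in df.groupby("ctdna_pos"):
    kmf.fit(sub["efs_months"], sub["event"], label=f"ctDNA positive = {status}")

# Log-rank test between ctDNA-positive and ctDNA-negative patients
pos, neg = df[df["ctdna_pos"] == 1], df[df["ctdna_pos"] == 0]
lr = logrank_test(pos["efs_months"], neg["efs_months"],
                  event_observed_A=pos["event"], event_observed_B=neg["event"])

# Cox proportional hazards model with ctDNA positivity and age as covariates
cph = CoxPHFitter().fit(df, duration_col="efs_months", event_col="event")
print(lr.p_value, cph.hazard_ratios_)
```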
ctDNA is detectable at the time of diagnosis and relapse in patients with bone malignancies Within the Ewing sarcoma cohort, we detected ctDNA in 53.3% (41/77) of newly diagnosed patients and 47.1% (8/17) of patients with relapsed disease (p = 0.79; Table 2). Among patients with newly diagnosed Ewing sarcoma with detectable ctDNA, the median percent of total cell-free DNA that was ctDNA containing an EWSR1 translocation was 13.8% (range 1.4-43.2%). The median quantity of total cell-free DNA was 14.2 ng/mL (range 2.4-255.3). There was a weak correlation between total cell-free DNA and percent ctDNA (Supplemental Fig. 1). The 49 positive samples included EWSR1/FLI1 (n = 43), EWSR1/ ERG (n = 5), and one novel EWSR1/CSMD2 fusion. This novel fusion has not previously been described and we therefore obtained tumour material and confirmed the presence of this fusion using PCR (Supplemental Methods and Supplemental Fig. 2). Within the osteosarcoma cohort, 56.9% (41/72) had detectable ctDNA. Among patients with osteosarcoma with detectable ctDNA, the median percent of total cell-free DNA that was ctDNA was 11% (range 4.6-58%). The median quantity of total cell-free DNA was 4.5 ng/mL (range 1.7-318.2). There was a weak correlation between total cell-free DNA and percent ctDNA (Supplemental Fig. 3). Detection of ctDNA is associated with clinical features in Ewing sarcoma and osteosarcoma We compared binary detection of ctDNA with presenting patient and tumour characteristics ( Table 2). Among patients with newly diagnosed Ewing sarcoma, ctDNA was detected in 69.2% of patients with metastatic disease compared to 44.0% of patients with localised disease (p = 0.053). ctDNA was detected in 66.7% of patients with newly diagnosed pelvic Ewing sarcoma compared to 46.3% of patients with non-pelvic Ewing sarcoma (p = 0.18). Among patients with tumour size collected (n = 21), 83.3% of patients with tumour size ≥8 cm maximum diameter had detectable ctDNA compared to 33.3% of patients with tumour size <8 cm maximum diameter (p = 0.063, Table 2). In the osteosarcoma cohort, 71.0% of patients with femoral primary tumours had detectable ctDNA compared to 46.3% of patients with tumours originating from other sites (p = 0.054). Total cellfree DNA was higher among patients with newly diagnosed pelvic Ewing sarcoma compared to non-pelvic sites, but no other significant associations were seen between total cell-free DNA and clinical features in either Ewing sarcoma or osteosarcoma (Supplemental Tables 1 and 2). Presence of detectable ctDNA is associated with inferior outcomes in Ewing sarcoma Clinical outcome data and ctDNA results were available for 50 patients with newly diagnosed localised Ewing sarcoma (median follow-up 41 months). Patients with detectable ctDNA had inferior EFS and OS (Fig. 1) (Table 3). Clinical outcome data and ctDNA results were available for 23 patients with newly diagnosed metastatic Ewing sarcoma. Patients in this group with detectable ctDNA had inferior EFS compared to patients with no detectable ctDNA (3-year EFS: 34.1% (95% CI, 12.6-57.2; n = 8) vs. 85.7% (95% CI, 33.4-97.9; n = 18); p = 0.05; Supplemental Figure 4A). The observed difference in OS according to ctDNA positivity in this cohort was not statistically significant (p = 0.24, Supplemental Figure 4B). ctDNA level is associated with inferior outcomes in osteosarcoma Clinical outcome data and ctDNA results were available for 72 patients with newly diagnosed localised osteosarcoma (median follow-up 44.3 months). 
EFS and OS estimates were numerically lower for patients with detectable ctDNA, but these differences were not statistically significant (Fig. 2). After controlling for age (≥14 or <14 years) and sex, two variably reported prognostic factors in osteosarcoma, 6,7 ctDNA detection remained positively associated with inferior EFS and OS, but the results were also not statistically significant (Table 3). Evaluating percent ctDNA as a continuous variable, the HRs for each unit increase in percent ctDNA among patients with localised osteosarcoma (n = 72) were 1.06 (95% CI, 1.03-1.09; p < 0.001) and 1.09 (95% CI, 1.06-1.14; p < 0.001) for EFS and OS, respectively. When limiting the analysis to the 41 patients with detectable ctDNA, the analogous HRs were 1.07 (95% CI, 1.036-1.11; p < 0.001) and 1.10 (95% CI, 1.05-1.16; p < 0.001). Among all patients with newly diagnosed osteosarcoma, cell-free DNA levels were not associated with clinical outcomes (Supplemental Table 2). Identification of genetic features of Ewing sarcoma and osteosarcoma via ctDNA We attempted to determine whether potentially prognostic genetic features could be detected in ctDNA in patients with Ewing sarcoma and osteosarcoma. We were able to detect loss-offunction STAG2 mutations in three patients and TP53 mutations in four patients. The allelic fraction of these mutations correlated with the % ctDNA levels observed in the patient sample suggesting these are likely somatic events. Furthermore, as germline STAG2 loss-of-function mutations have not been described, these mutations are expected to be somatic. Although germline TP53 mutations in Ewing sarcoma are rare, 33 we cannot definitively confirm that these events were somatic in the absence of germline DNA. DISCUSSION Using two NGS ctDNA assays, we detected ctDNA in banked peripheral blood samples from 52.1% of patients with Ewing sarcoma and 56.9% of patients with osteosarcoma, all without the knowledge of tumour tissue sequencing results. Detectable ctDNA showed trends toward significant associations with metastatic disease and tumour size in Ewing sarcoma and with femoral primary site in osteosarcoma. Among patients with newly diagnosed localised Ewing sarcoma, binary detection of ctDNA was associated with an inferior outcome. In both diseases, an increased risk of event and death was significantly associated with an increase in ctDNA burden when evaluated as a continuous variable, a finding that was not seen when using total cell-free DNA as the marker of interest. Our study is therefore the first large study to demonstrate that qualitative and quantitative ctDNA detection provides prognostic information for patients with localised bone tumours. Finally, we identified additional genomic features from ctDNA including identification of a previously undescribed EWSR1 fusion, STAG2 loss in Ewing sarcoma, and 8q gain in osteosarcoma. Our results demonstrate that these two ctDNA assays may yield additional information about tumour biology. While ctDNA analysis has proven useful for patients with carcinomas, there have been relatively few studies evaluating the utility of ctDNA as a biomarker in patients with sarcomas. Four studies have demonstrated that detection of ctDNA using ddPCR or hybrid capture-based NGS is feasible in Ewing sarcoma but each effort had too few patients to demonstrate a prognostic value of these assays. A similar approach has been utilised in a small cohort of patients with chondrosarcoma, 34 and two studies including patients with osteosarcoma. 
25,26 Our study demonstrates the feasibility of detecting ctDNA nave of the tumour genome. Leveraging thematic genome alterations in these two genomically diverse tumours provided an avenue to efficiently detect ctDNA in diseases without highly recurrent SNVs. The fact that these two approaches do not rely upon first sequencing a patient's tumour has advantages for evaluation of ctDNA in large retrospective cohorts, and in multicentre studies where tumour biopsy tissue might not be readily available. Further, such approaches may be generalisable to other tumours that are translocation-driven or characterised by copy number variations. For patients with these diseases, risk stratification has historically depended on the presence of radiologically detected metastatic disease, and in some instances, primary disease site. Currently, there is no validated tool available at initial diagnosis to identify patients with localised disease at high risk of relapse. While the detection of specific highly recurrent SNVs using ctDNA in pre-treated, colorectal carcinomas has been associated with a b prognosis, 35 prior attempts to utilise pre-treatment circulating tumour markers of poor prognosis in Ewing sarcoma have failed to show a consistent association with outcome. 8,9 The most successful prior attempts to identify circulating prognostic biomarkers in osteosarcoma have utilised microRNA. 14, 15 We demonstrate the potential for ctDNA burden at initial diagnosis to be utilised as a prognostic biomarker of inferior outcomes in these two diseases. Given that in other diseases ctDNA has been associated with stage or disease burden, 36,37 it is possible that ctDNA levels may be associated with disease burden or occult metastatic disease in the context of these sarcomas. If validated, these assays may improve risk stratification through identification of patients with localised disease at high risk of relapse. All samples analysed in our study were collected as part of COG biology studies and banked. The samples were collected on these studies without a specific plan for future ctDNA evaluation. Therefore, the collection and handling strategies used in these studies were not ideal for maintaining the integrity of ctDNA samples. That ctDNA was detectable in nearly half of all samples speaks to the robust nature of these assays. Although it will be important to validate these findings in a prospective study, the use of previously banked samples provided the only opportunity to perform a timely evaluation of the prognostic value of ctDNA in these two rare diseases. Furthermore, this study now justified the development of a recently opened prospective study which will optimise sample collection and allow for prospective validation of our findings. Similarly, tumour size, a key clinical feature needed to assess tumour burden, was available only for a subset of patients and was assessed in a non-uniform fashion, potentially limiting the possibility for detecting a strong association with ctDNA positivity. Another limitation of our ctDNA assays is that they were not optimised to identify point mutations in samples with low cancer genome fraction, a problem which may have been compounded by sample quality in the context of this retrospective study. Yet, we could detect mutations in TP53 and STAG2 in a limited set of ctDNA Ewing sarcoma samples. We caution that these mutations, particularly mutations in TP53, may in fact be heterozygous germline mutations. 
Such analysis would be more robust with available germline sequencing and serial ctDNA samples, which were not available for these two cohorts of patients. In summary, the use of two NGS ctDNA assays provides a robust means of detecting ctDNA in the absence of tumour biopsy tissue for two rare and genomically diverse malignancies. Qualitative and quantitative detection of ctDNA in these diseases provides prognostic information that may ultimately be used to improve risk stratification approaches. In order to move this finding into the clinic, we are planning a larger prospective validation study that will also assess the clinical utility of serial ctDNA samples during treatment. We will investigate whether such data could provide an early indication of chemoresponsiveness and serve as a minimal residual disease marker. Finally, with refinement of our assays and improved sample collection, we will further explore the capacity of these two technologies to uncover important tumour characteristics in the peripheral blood, which may provide key information at diagnosis, and inform our understanding of the clonal evolution of these sarcomas during treatment. Ethics approval and consent to participate: Informed consent for initial collection of samples was obtained at the time each patient enrolled to the approved COG banking studies for Ewing sarcoma and osteosarcoma. The need for additional informed consent for use of banked samples was waived by Dana-Farber Cancer Institute Institutional Review Board. |
Microwave in Organophosphorus Syntheses The spread of professional microwave (MW) reactors in the last 25 years brought about a revolutionary change in synthetic organic chemistry. This methodology has also had a positive inpact on organophosphorus chemistry enhancing reluctant reactions, or just making the reactions more efficient in respect of rate, selectivity and yield. In special cases, MW irradiation may substitute catalyst, or may simplify catalytic systems. Introduction The use of the microwave (MW) technique spread fast in synthetic laboratories, and these days it knocks at the door of industry. At the beginning, only domestic MW ovens were available, but later, different variations of professional MW equipment were developed and utilized in many kind of syntheses, such as substitutions, additions, eliminations, condensations, acylations, esterifications, alkylations, CC coupling reactions, cycloadditions, rearrangements and the formation of heterocycles. The main problem with industrial application is the scale-up. On the one hand, there is a problem with the structural material, as the batch reactors may be made of only teflon or glass. On the other hand, the limited penetration depth of MWs into the reaction mixtures prevents the construction of bigger size batch reactors. Presently, the only possibility for a certain degree of scale-up is the use of continuous-flow reactors. A batch MW reactor may be supplied with a flow cell, where the mixture is moved by pumps. In another variation, a continuous tube reactor with a diameter of up to 69 mm was elaborated that makes possible the processment of ca 300 l/day. A capillary microreactor consisting of four parallel capillary tubes was also described. The above equipment may be used well in industrial research and development laboratories. The only criterion of such application is that the reaction mixtures must not to viscous and heterogeneous. The author of this paper believes that bundle of tubes reactors incorporating a number of glass tubes with a diameter of several mm-s may bring a breakthrough in the industry. Another good accomplishment is to apply assembly linetype equipment that transports the solid reaction components placed in suitable vessels into a tunnel, where the irradiation takes place. The most common benefits from MW irradiation is the considerable shortening of reaction times and the increase in the selectivities. However, the most valuable benefit is when a reaction can be performed that is otherwise rather reluctant under traditional thermal conditions. This may be the consequence of a so-called special MW effect. There are, of course, other advantages as well that will be shown below within the pool of organophosphorus chemistry that is a dynamically developing field. Organophosphorus compounds including P-hetereocycles find applications in synthetic organic chemistry as reactants, solvents (ionic liquids), catalysts and P-ligands and, due to their biological activity, also as components of drugs and plant protecting agents. The utilization of MW irradiation in organophosphorus chemistry is a relatively new field. In this article, the attractive features of the application of the MW technique in organophosphorus syntheses are summarized in four groups. Reactions those are Reluctant under Thermal Conditions The most common way to prepare esters is the acid catalyzed direct esterification of carboxylic acids with alcohols (Figure 1). 
The reaction is reversible, hence the alcohol should be applied in excess and/or the water formed should be removed by distillation, in most cases, in the form of binary or ternary azeotropes. Phosphinic acids, however, do not undergo esterification with alcohols to afford phosphinates, or the reaction is rather reluctant (Figure 2/A). For this, the esters of phosphinic acids are synthesized by the reaction of phosphinic chlorides with alcohols in the presence of a base (Figure 2/B). An alternative possibility is preparation by the Arbuzov reaction (Figure 2/C).
Reactions those are Reluctant under Thermal Conditions The most common way to prepare esters is the acid catalyzed direct esterification of carboxylic acids with alcohols ( Figure 1). The reaction is reversible, hence the alcohol should be applied in excess and/or the water formed should be removed by distillation, in most cases, in the form of binary or ternary azeotropes. Phosphinic acids, however, do not undergo esterification with alcohols to afford phosphinates, or the reaction is rather reluctant (Figure 2 Trends in Green Chemistry The preparations of phosphinates were summarized. The generally used esterification method (Figure 2/B) has the drawback of requiring the use of relatively expensive P-chlorides. Beside this, hydrogen chloride is formed as the by-product that must be removed by a base. Hence, the method is not too atomic efficient and is not environmentally friendly. We tried the direct esterification of phosphinic acids with alcohols under MW conditions. To our surprise, a series of phosphinic acids underwent esterification with alcohols with longer chain at around 200°C on MW irradiation (Figure 3). The esterification of cyclic phosphinic acids, such as 1-hydroxy-3-phospholene oxides, 1-hydroxy-phospholane oxides and 1-hydroxy-1, 2, 3, 4, 5, 6-hexahydrophosphinine oxides and phenylphosphinic acid was carried out in the presence of ca. 15-fold excess of the alcohols in a closed vessel to afford the phosphinates in acceptable to excellent yields. The method seems to be of general value. It was also found that the esterification of phosphinic acids is thermoneutral and kinetically controlled. Moreover, it was proved that the reaction under discussion is not reversible. Reactions that became More Efficient under MW Conditions There are many reactions in the field of organophosphorus chemistry that become more efficient on MW irradiation. The advantages include shorter reaction times and higher yields. Moreover, in a lot of cases, there is no need for solvents. Such reactions are, for example, Diels-Alder cycloadditions, fragmentation-related phosphorylations and inverse Wittig-type transformations. The MW-assisted synthesis of -hydroxyphosphonates from substituted arylaldehydes and dialkyl phosphites under solventfree conditions also belongs to this group (Figure 4). In a variation, dialkyl phosphites were added on the carbonyl group of -oxophosphonates. The hydroxy-methylenebisphosphonates were obtained selectively in the reaction of acetylphosphonates (R 1 =Me), but in the reaction of benzoylphosphonates (R 1 =Ph), the formation of mixed phosphonates-phosphates was inevitable as a result of a rearrangement (Figure 5). Reactions in which the Catalysts are Replaced by MW Irradiation We found that active methylene containing compounds underwent C-alkylation by reaction with alkyl halides in the presence of K 2 CO 3 under MW-assisted solvent-free conditions. In other words, the phase transfer catalyst could be substituted by MW irradiation. This method was then extended to the alkylation of tetraethyl methylenebisphosphonate, diethyl cyanomethylphosphonate and ethoxycarbonylmethylenephosphonate (11c) to afford the corresponding monoalkylated products in variable yields (Figure 6). Trends in Green Chemistry Earlier preparations utilized catalysts (e.g. BiNO 3, phthalocyanine, and Lantanoid (OTf) 3 ) that cannot be regarded "green" agents. We proved that under MW conditions, there is no need for any catalyst. 
Moreover, the syntheses could be performed without the use of any solvent. Double Kabachnik-Fields condensations were also elaborated applying two equivalents of the formaldehyde and the same quantity of the >P(O)H species (Figure 8). The bis (phosphinoxidomethyl) amines (Y=Ph) were useful precursors of bidentate P-ligands after double deoxygenation that could be used for the synthesis of ring platinum complexes. As the consequence of their diverse bioactivity, the -aminophosphonates are in the focus these days. Reactions in which the Catalysts may be Simplified under MW Conditions The Hirao reaction involves the P-C coupling of aryl bromides with dialkyl phosphites in the presence of Pd (PPh 3 ) 4 and a base in a solvent. We were successful in elaborating a P-ligand-free variation of the P-C coupling under MW conditions. Hence, a series of substituted aryl bromides were reacted with dialkyl phosphites in the presence of Pd(OAc) 2 catalyst, in the absence of any P-ligand to afford arylphosphonates in 69-93% yield. No solvent was used (Figure 9/ ). The reaction was then extended to couplings with alkyl phenyl-H-phosphinates and secondary phosphine oxides to give alkyl-diarylphosphinates and arylphosphine oxides, respectively (Figure 9/ & ). In the latter instance, solvent had to be used to overcome the problem of heterogeneity. A NiCl 2 -catalyzed version of the P-C coupling reactions was also developed. NiCl 2 is cheaper than Pd(OAc) 2, however, the previous is more toxic. The discovery that P-C coupling reactions may be carried out in the presence of P-ligand-free metal salts is important as decreases environmental burdens and costs. Summary In summary, the MW technique was shown to have an increasing potential in organophosphorus chemistry. It may make possible otherwise rather reluctant transformations, or, as in most cases, simply enhances the reactions, and makes them more efficient. In certain instances, MW irradiation may substitute catalysts, or may make possible the simplification of catalyst systems. Many |
(CNBC) — Mexico’s former president shared some choice words for President Donald Trump, describing the American leader as a “machine” with “no compassion” while at the World Government Summit in Dubai on Sunday.
“He doesn’t seem to be a human being, he just looks like a machine, he doesn’t have any compassion,” Vicente Fox told CNBC’s Hadley Gamble.
The former leader blasted Trump’s border wall plans and focused on the divide within the U.S., adding that Trump essentially has a “f— you” approach to the rest of the world. The White House did not immediately respond to a CNBC request for comment. |
ROBBINS — An early morning fire has forced the evacuation of about 400 residents of a mental health facility in the Chicago suburb of Robbins.
Robbins Fire Chief Charles Lloyd Jr. says the fire that broke out at the Lydia Healthcare Center at about 1 a.m. today started in a mattress and was probably caused by smoking. Firefighters doused the blaze, but the building had to be closed due to smoke and water damage.
Fire officials say three residents and an employee were taken to a hospital with smoke inhalation but their injuries don't appear to be life-threatening.
The displaced residents are being cared for at the Robbins Community Center. Officials from the American Red Cross of Greater Chicago are on site to help. |
When Spain's economy crashed hard a few years ago, the conservative party in power reacted by imposing severe spending cuts. Now voters have reacted by turning left. In yesterday's local elections, many voters elected members of a left-wing party that grew out of Spain's Occupy movement. That includes the incoming Mayor of Barcelona, Ada Colau. Lauren Frayer reports.
LAUREN FRAYER, BYLINE: One of the most tweeted photos today in Spain shows Ada Colau being hauled away by riot police. The photo's from July 2013 when she was trying to occupy a Barcelona bank that was foreclosing on homes - the caption, welcome, new mayor. Colau is a 41-year-old activist who made her name by physically trying to block police from serving eviction notices. She's been detained by police dozens of times. Testifying before parliament two years ago, she spoke right after a representative of Spain's banking industry.
FRAYER: "This man is a criminal, and he should be treated like one," she said, her voice shaking with rage. Lawmakers' jaws dropped, but her speech endeared Colau to millions of Spaniards hurt by layoffs and austerity.
ANTONIO ROLDAN: She's transparent. She's honest. She speaks the language of the people.
FRAYER: Analyst Antonio Roldan is at the Eurasia Group in London. He says Spaniards fed up with unemployment and corruption chose grassroots activists like Colau over the two parties that have ruled Spain for decades - the Socialists and the ruling conservative Popular Party.
ROLDAN: The Popular Party had absolute majorities in almost all regions, and now they have none. So it's a new period of somehow cleaning up the corrupt establishment.
FRAYER: The man who hopes to lead these grassroots activists to national power is Pablo Iglesias, a 36-year-old political science professor with a ponytail. He heads the new left-wing party Podemos - We Can in Spanish.
FRAYER: "We're not like other politicians supported by the banks. Our creditors are the people," Iglesias said at a rally for Colau in Barcelona. He hopes momentum from these local victories can propel left-wing activists to power in Spain's national elections later this year. For NPR News, I'm Lauren Frayer in Madrid. |
Multitask Network for Respiration Rate Estimation -- A Practical Perspective The exponential rise in wearable sensors has garnered significant interest in assessing the physiological parameters during day-to-day activities. Respiration rate is one of the vital parameters used in the performance assessment of lifestyle activities. However, obtrusive setup for measurement, motion artifacts, and other noises complicate the process. This paper presents a multitasking architecture based on Deep Learning (DL) for estimating instantaneous and average respiration rate from ECG and accelerometer signals, such that it performs efficiently under daily living activities like cycling, walking, etc. The multitasking network consists of a combination of Encoder-Decoder and Encoder-IncResNet, to fetch the average respiration rate and the respiration signal. The respiration signal can be leveraged to obtain the breathing peaks and instantaneous breathing cycles. Mean absolute error(MAE), Root mean square error (RMSE), inference time, and parameter count analysis has been used to compare the network with the current state of art Machine Learning (ML) model and other DL models developed in previous studies. Other DL configurations based on a variety of inputs are also developed as a part of the work. The proposed model showed better overall accuracy and gave better results than individual modalities during different activities. I. INTRODUCTION As the modern lifestyle of human beings is becoming hectic, it is imperative to estimate and monitor physiological parameters efficiently. Respiration Rate (RR) is one of the most critical parameters used for assessing the health conditions and performance during different activities. Given the current challenges presented by COVID-19, several research studies like have emphasized the importance of monitoring the respiration rate in point of care settings. Classical methods to estimate RR are pretty reliable but require an obtrusive setup for measurement. On the other hand, physiological signals like Electrocardiogram (ECG) and Photoplethysmography (PPG) are modulated by respiration and can extract RR. A detailed review of techniques to extract RR from ECG and PPG signal is presented in. Apart from ECG and PPG, accelerometer signals to extract the respiration signal and RR are also explored like the method based on Principal Component Analysis (PCA) is presented in. However, motion artifacts and other noises like Mayer waves makes these techniques error prone. These problems were addressed by through the concept of fusion, which compensates proposed to fuse the respiration rates obtained from different modulations of ECG and PPG using Machine Learning (ML) models to determine weights for each modality based on Respiratory Quality Indices (RQI). Bian et al. proposed a Convolutional Neural Network (CNN) to obtain the respiration rate from PPG waveforms. These studies were focused on estimating the average respiration rate for a window of fixed size. This problem formulation does not allow the estimation of individual respiration peak locations, which is required to estimate instantaneous respiration rate. This limitation was addressed in which proposed a RespNet (RN) to extract the respiration signal from PPG. However, this formulation required a considerable jump in run time, rendering it ineffective in a point of care settings. Moreover, all these works were directed at clinical data, and their efficacy on ambulatory datasets was not studied. 
We aim to obtain robust RR estimates using ECG and accelerometer signals derived from a chest-worn sensor while performing specific activities (walking, cycling, etc.), simultaneously optimizing it for inference speed. To this end, we propose a multitasking network that gives out both the average respiration rate and the instantaneous breathing cycles and leverages prior respiration estimates. The multitasking formulation ensures that the model optimizes actual breathing peak location while simultaneously predicting accurate breathing rates, enabling overall output improvement. Additionally, we extensively compare the model against other works and several other DL configurations. Furthermore, we have thoroughly evaluated the proposed model during various activities, using different plots. A. Data Set Description This study utilizes the PPG field study dataset which comprises of raw ECG, accelerometer, and respiration signal waveforms obtained from a chest-worn device. All the three signals were recorded at a sampling frequency of 700 Hz. The dataset also contains reference instantaneous heart rate for an 8-second window stridden by 2 seconds along with the corresponding R peak locations. Fifteen participants, seven males and eight females, were included in the study. The data collection protocol comprised Fig. 1. Description of blocks used to make DL architecture F 1 represents Encoder block, F 2 represents Decoder block, F 3 is down sampling block with dense layer. of eight different activities: sitting, table soccer, cycling, driving, lunch break, walking, working, ascending and descending stairs. Out of the 15 subjects, subject S6 is discarded from the study due to improper recording. B. Respiration Signal and Respiration Rate Extraction The variation in R-R interval (RRint) and R peak amplitudes (Rpeak) were used to extract the respiration signal from ECG as given in. The respiration signals were extracted for 32 second windows from the extracted R peak amplitude and R-R interval. Accelerometer-based respiration signal (ADR) was obtained using a process described in. RR from the obtained respiration signals and reference respiration signal was derived using an advanced counting algorithm. Based on the breathing peak locations, the duration between adjacent peaks and the duration between the first and last peak of the 32 second window can used to estimate the instantaneous and average RR respectively. C. Model Architecture and Configurations The inherent design of the architecture was inspired from. However, we fed the intermediate respiration signals as input to the model for adequate optimization of inference time rather than adopting an end-to-end DL formulation. The intermediate respiration signals were extracted using the methods given in Section II-B. Additionally, we used an IncResNet block followed by a dense layer to introduce the multitasking functionality. The 1D convolution layer of the IncResNet block has a kernel size 4 and stride 2. The number of filters in the Convolution layer was reduced by two, starting from 128. Furthermore a single unit dense layer was added to get the average RR. For the encoder, the 1D convolution with kernel size 3, stride 2 was used for downsampling. The filter count was advanced by two from 32 to 1024. For the decoder, the upsampling is performed by 1D transposed convolution of kernel size 3 and stride 2, and the filter count was reduced by two starting from 512 to get the output respiration signal. 
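To make the layer pattern above easier to picture, here is a simplified TensorFlow/Keras sketch of the strided 1D-convolution encoder, the dense head for average RR, and the transposed-convolution decoder (the batch normalisation and Leaky ReLU layers referenced in the next paragraph are included). It deliberately omits the Inception-Res blocks and the full filter schedule, and the assumed input shape of 128 samples x 3 respiration modalities is our reading of the data description, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, filters, kernel_size=3):
    # Strided Conv1D -> BatchNorm -> LeakyReLU(0.2), as described for the encoder
    x = layers.Conv1D(filters, kernel_size, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(0.2)(x)

# Assumed input: 32 s of respiration signals resampled to 4 Hz, 3 modalities
inp = layers.Input(shape=(128, 3))
x = inp
for filters in (32, 64, 128, 256):     # filter count doubled at each stage (truncated here)
    x = down_block(x, filters)

# Simplified stand-in for the IncResNet head: pooling + single-unit dense layer
avg_rr = layers.Dense(1, name="average_rr")(layers.GlobalAveragePooling1D()(x))

# Decoder: transposed Conv1D up-sampling back to the respiration waveform
y = x
for filters in (128, 64, 32):          # filter count halved at each stage (truncated here)
    y = layers.Conv1DTranspose(filters, 3, strides=2, padding="same")(y)
    y = layers.LeakyReLU(0.2)(y)
resp_wave = layers.Conv1DTranspose(1, 3, strides=2, padding="same", name="respiration")(y)

model = tf.keras.Model(inp, [resp_wave, avg_rr])
model.summary()
```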
The 1D convolution layer of the encoder, IncresNet, and the 1D convolution transpose layer of decoder were followed by Batch Normalization, Leaky ReLU activation with slope 0.2, and Inception-Res block. The kernel size of the Inception-Res block was fixed to 15 empirically with a stride of 1. Multiple DL architectures compatible with different types of input signals were developed as part of this work, which henceforth is referred to as configurations (CONF). All the configurations, including the proposed multitask network (CONF-E), employ either a fully convolutional Encoder-Decoder, an Encoder-IncResNet, or both. A brief description of input, output, and architecture combination used for each configuration is given in Figure ??. For configuration C, D and E, the encoder (F 1 ) takes the respiration waveforms obtained from the three modalities as input x (i) and subsequently downsamples it multiple times to produce a compressed feature vector Z (i). For all the other configurations, the inputs are the raw ECG and acceleration waveform. For configuration B, C and E, the decoder (F 2 ) upsamples the feature vector obtained from the bottleneck layer to produce the respiration waveform y pred 1. The average respiration rate for configuration A, B, D and E was obtained through further downsampling of the bottleneck Z (i) by IncRes units followed by a dense layer (F 3 ). Equations and further elucidate the above architecture for the proposed network CONF-E, where F 1, F 2 and F 3 are the representations of the encoder, decoder and combination of Incres units and dense layer with parameters 1, 2 and 3 respectively. The weights and biases in the proposed architecture were optimized by minimizing the SmoothL 1 loss between (ni) and y (ni). The Loss function L(X) is defined as: E. Experimental and Evaluation details The training and test dataset was taken in the ratio 80 : 20. The parameters of the DL model were randomly initialized during training. Adam optimizer and the SmoothL1 loss function has been utilized as this loss possesses the characteristics of both Mean Square Error (MSE) and Mean Absolute Error (MAE) when the absolute value of the argument varies from low to high as given in eq.. Each configuration was trained for 100 epochs. The learning rate (LR) for CONF-C was 0.0001, while for other configurations, an adaptive learning rate was used as given in eq.. Each 32 second window of raw signal input was downsampled from 22400 to 2048 samples, therefore CONF-A, and CONF-B takes the input shape of. The input respiration signal was downsampled to 4Hz as given in Section II-B and therefore CONF-C, CONF-D and CONF-E takes the input shape of. The input batch size for each configuration is kept as 128. The ML based smart fusion (SF), the CNN based model and RN are also developed for comparative purpose. The design specification for these models are adapted from their respective literature. The model was implemented in Tensorflow on a workstation housing an Nvidia GTX1080Ti 11GB GPU. Evaluation of average respiration rates was done by obtaining the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) between the obtained average RR and the ground truth RR. For configurations that provides the respiration waveform as output, the individual breathing cycles were obtained, after which the RMSE and MAE between the RR and the reference instantaneous RR were computed as given in Section II-B. A. 
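Because the SmoothL1 equation itself is not reproduced above, the standard form of the loss (quadratic below a threshold beta, linear above it) is sketched here for reference. The choice of beta = 1.0 is the common default and is an assumption; the authors' exact parameterisation is not stated.

```python
import numpy as np

def smooth_l1(y_pred, y_true, beta=1.0):
    # Quadratic (MSE-like) for small errors, linear (MAE-like) for large errors
    diff = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    quadratic = 0.5 * diff ** 2 / beta
    linear = diff - 0.5 * beta
    return float(np.mean(np.where(diff < beta, quadratic, linear)))

# Small error contributes 0.5 * 0.2**2 = 0.02; large error contributes 3.0 - 0.5 = 2.5
print(smooth_l1([0.2, 3.0], [0.0, 0.0]))   # -> 1.26
```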
A. Comparison of the Multitasking Model with Other ML and DL Configurations

In recent research, DL models have almost consistently outperformed ML models, especially as more data become available. However, a model's parameter count and inference time must be kept low for it to operate in point-of-care settings. Hence, it is essential to offer insights on parameter count and run time in addition to error scores. To this end, Table I consolidates the comparison between the proposed multitask model (CONF-E), the other DL configurations, SF, CNN, and RN, based on MAE, RMSE, time taken (milliseconds), and parameter count (PC, in millions). Among all configurations, the proposed multitask network (CONF-E) provides the lowest MAE and RMSE for the average RR. It performs marginally better than CONF-C in terms of error scores on the instantaneous RR, and its instantaneous-RR error is significantly lower than that of the other models. While our model has a higher parameter count and a longer inference time than CONF-C, CONF-A, CONF-D, SF, and CNN, it shows significantly lower error than these models. Our model also provides the average and the instantaneous RR simultaneously, functionality that is missing in SF, CNN, CONF-A, and CONF-D. Compared to RN and CONF-B, the proposed model shows lower error, a lower PC, and a shorter inference time. Notably, among the configurations that take the raw signals as input (CONF-A and CONF-B, as given in Figure ??), the multitasking configuration, CONF-B, shows lower error, but with a higher parameter count and inference time. Hence, the multitasking model yields lower error irrespective of the type of input. Between the two multitasking configurations, CONF-B and CONF-E, CONF-E operates at the better time and PC as given in Table I, which may be attributed to the smaller respiration signals that CONF-E takes as input. On a broad scale, considering the trade-off between accuracy and parameter count, CONF-E performs efficiently while taking a reasonable time for execution. The instantaneous RR output from CONF-E is therefore considered for further statistical analysis.

B. Bland-Altman Analysis

The box plot aids in understanding the errors with respect to the ground truth, but it does not account for possible errors in estimating the ground-truth RR itself. This aspect can be understood through the degree of agreement between the final RR output and the reference RR obtained from the Bland-Altman analysis shown in Figure 2. The limits of agreement obtained from the analysis are 7.90 BrPM and -6.14 BrPM, with a mean bias of 0.88 BrPM and 95.21% of the points lying within the limits of agreement. The outliers in the plot correspond to the most erroneous modality during certain activities.

Fig. 2. Bland-Altman analysis between the final respiration rate and the reference respiration rate.

Overall, the multitasking network provides reasonable accuracy for both the average RR and the instantaneous RR. Hence this network is a reliable choice for measuring RR in point-of-care settings.
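For reference, limits of agreement of this kind can be reproduced from paired RR estimates with a few lines of Python. The sketch below is generic; the arrays shown are synthetic example values, not the study's data.

import numpy as np

def bland_altman(estimates, references):
    # Mean bias and 95% limits of agreement between estimated and reference RR,
    # both given as 1-D arrays in breaths per minute (BrPM).
    estimates = np.asarray(estimates, dtype=float)
    references = np.asarray(references, dtype=float)
    diff = estimates - references
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    limits = (bias - half_width, bias + half_width)
    within = 100.0 * np.mean(np.abs(diff - bias) <= half_width)
    return bias, limits, within

est = np.array([14.2, 16.8, 12.5, 20.1, 18.0])   # synthetic example values
ref = np.array([13.8, 17.5, 12.0, 19.0, 18.6])
bias, (lo, hi), pct = bland_altman(est, ref)
print(f"bias = {bias:.2f} BrPM, limits of agreement = [{lo:.2f}, {hi:.2f}] BrPM, "
      f"{pct:.1f}% of points within the limits")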
IV. CONCLUSION

This study has proposed a multitasking DL-based network to estimate the RR and the respiration signal from ECG and accelerometer waveforms during ambulatory activities. While being relatively parameter-efficient and consuming reasonable time compared to other DL configurations, it also gives the best instantaneous RR. We have also conducted an extensive statistical analysis of the model during various activities and demonstrated its efficacy. Despite the robust performance of the proposed network, it still requires a relatively large number of parameters. Our future work will involve reducing the model complexity without compromising accuracy, through methods such as knowledge distillation. With the resulting reduction in parameter count and time complexity, we would then test the model's efficacy in a real-time scenario.
Confocal reflection microscopy: the "other" confocal mode. When most biomedical researchers think confocal microscopy, they usually have fluorescence imaging in mind. There is a very good reason for this connection. Many biomedical applications of the confocal microscope have utilized its optical sectioning power, combined with the exquisite specificity of immunofluorescence or fluorescence in situ hybridization (FISH) to produce improved images of multiple labeled cells and tissues. Confocal reflection microscopy (CRM) can be used to glean additional information from a specimen with relatively little extra effort, since it calls for minimal specimen preparation and instrument configuration. CRM provides information from unstained tissues, tissues labeled with probes that reflect light, and in combination with fluorescence. Examples of the latter would be for detecting unlabeled cells in a population of fluorescently labeled cells or for imaging the interactions between fluorescently labeled cells growing on opaque, patterned substrata (Figure 1). A major attraction of CRM for biomedical imaging is the ability to image unlabeled live tissue. In fact, CRM has been used to image many different tissues, including brain, skin, bone, teeth, and eye tissue. CRM works especially well for imaging the cornea and lens of the eye because they are transparent. For example, optical sections have been collected from as deep as 400 µm into the living cornea and lens using a long working distance water immersion lens. There has been a history of using CRM for imaging unstained specimens since it was one of the only modes available to the designers of the early confocal instruments in the time before epifluorescence. Both the laser scanning confocal microscope (LSCM) and the spinning disk microscope can be used for CRM. The spinning disk microscope has the advantage that images can be collected in real time, viewed in real color, and lack a reflection artifact that is sometimes present in the LSCM. The artifact appears as a bright spot in the image and is caused by reflection from one or more of the optical elements in the microscope. There are several remedies for this artifact. It can be avoided by scanning away from the optical axis of the microscope and zooming the bright spot out of the frame. It can be removed from the image by digitally subtracting a background image of the spot away from the image of interest. Alternatively, polarizing filters can be added to the instrument to eliminate the reflection from optical elements. A traditional biological application of wide-field reflected-light imaging is for observing the interactions between cells growing in tissue culture on glass coverslips using a technique called interference reflection microscopy. Here, the adhesions between the cell and its substratum are visible at the interface of the glass coverslip and the underside of the cell. These regions of cellular adhesion continue to be a research area of great interest. The proteins associated with the focal contacts are analyzed using immunofluorescence, and the contacts themselves can be viewed using interference reflection microscopy. Cell-substratum adhesions are viewed in a similar way using CRM (Figure 2). Again, the interface between the coverslip and the cell is imaged. This surface can be hard to find in the confocal microscope, but the highly reflective cov-
Editor's note: This article has been updated to correct the amount the Republican tax law is projected to add to the federal deficit.
U.S. Rep. Peter Roskam was one of the Congressional shepherds of the Republican tax bill who helped the legislation become law.
"As the tax policy subcommittee chair, I played a key role in ensuring this plan would be good for our district," Roskam said. "This bill effectively cuts taxes for middle- to lower-income Americans across the country."
Roskam said he worked to increase the amount of property taxes people can deduct on their federal returns and to increase the amount of the child tax credit.
"Another win for residents of the 6th District," he said.
But the seven Democrats hoping to run against the longtime congressman from Wheaton in the November general election strongly oppose the tax policy changes, saying they benefit the rich, create a "path to inequality," or represent a "big, bold failure."
The Democrats running against each other in the March 20 primary are all trying to position themselves as the left-leaning leader to take on Roskam and represent the large, suburban 6th District from a different perspective.
Here's what the Democratic candidates have to say about the tax code changes -- and the ideas they propose instead.
The tax code is a place for the country to show its morals and act on its values, Cheney told a crowd of 800 during a forum in Carol Stream. That's why she says she opposes the GOP tax bill: it's unfair.
"This tax plan is a path to inequality with a clear destination: an economy and society that empowers a wealthy few, leaving hardworking families stuck in neutral," said Cheney, 56, of Naperville, and a former district chief of staff for 11th District U.S. Rep. Bill Foster.
She says her plan would make the tax system more progressive.
"Simplifying the tax code doesn't mean eliminating or reducing the number of brackets," she said. "Actually, increasing the number of brackets would make the tax code more fair for individuals in the society."
It's a "heist," Huffman said of the recently approved tax policy changes.
"This plan delivers massive amounts of wealth to the people and corporations who need it least, while offering only scraps for the rest of us," the 31-year-old policy analyst from Palatine said.
Huffman said his approach to tax policy would not be top-down. He said he doubts the trickle-down philosophy that some say will benefit everyday workers because of lower tax rates for corporations. He said the tax structure instead should focus on allowing investment in roads and bridges, education and health care.
The tax plan wasn't written for the "average American," but for campaign donors, corporation leaders and wealthy corporations, Howland said.
Because the plan is likely to add $1 trillion to the federal deficit during the next decade, according to the nonpartisan Joint Committee on Taxation, Howland said it is "laying the groundwork for cuts to Social Security, Medicare and Medicaid."
Howland, a 65-year-old College of Lake County trustee and civil rights attorney who ran against Roskam in 2016, said she will fight for "a new, fairer tax code that does not prioritize the wealthy over middle- and working-class families."
Included in such a plan, she said, should be tax incentives for job creation, infrastructure investment and workforce training.
Roskam's support for the tax changes shows he "put corporations and the wealthiest Americans first," said Casten, a 46-year-old scientist, engineer and entrepreneur from Downers Grove.
Tax reform isn't on Casten's personal list of top issues because it creates more problems than it solves, he said during the Carol Stream forum. But if pressed, Casten said he would create tax incentives for businesses to invest in infrastructure.
One of the worst provisions of the Republican tax law, in Mazeski's eyes, is it "guts the state and local tax deduction and property tax deduction." She said the change will lead to "double taxation on Illinois middle-class families."
Indeed, the bill caps the amount of state and local taxes that can be deducted from federal returns at $10,000. The cap is expected to affect owners of higher-valued or higher-taxed homes throughout the suburbs and has led many to prepay next year's property taxes now.
Mazeski, 58, and a Barrington Hills plan commission member, said she's confident voters will hold Roskam accountable for his support of the tax overhaul. If elected, she would propose tax breaks on small businesses, especially for research and development, to ensure the nation's technological advances don't fall behind.
"A big, bold failure," is Zordani's description of the Republican tax law, which she says fell short of enacting a comprehensive, bipartisan plan for stability and fairness.
Under the changes in the tax code, Zordani said the nation won't be able to fix infrastructure, decrease the national debt or strengthen the social safety net for people in need.
Tax credits to businesses to create jobs would be her priority instead, said Zordani, a 53-year-old regulatory and financial services attorney of Clarendon Hills.
If more women and people of diverse viewpoints had negotiated the Republican tax law, Anderson Wilkins said, the result would have been different. Instead, she said the law "doesn't make sense" and leaves out small businesses, a major economic engine that also provides what she calls "a social capital."
Anderson Wilkins, a 59-year-old small business owner and Naperville City Council member, said the cap on state and local tax deductions will hurt many in the district as big corporations get "a handout."
"This is about our values and what we think about people and how we're going to help people," she said. "We need a progressive income tax to make sure the rich pay their fair share." |
Quiet Computing with BSD: Fan control with sysctl hw.sensors We will discuss the topic of fan control and introduce sysctl-based interfacing with the fan-controlling capabilities of microprocessor system hardware monitors on OpenBSD. The discussed prototype implementation reduces the noise and power consumption of fans in personal computers, especially of those PCs that are built from off-the-shelf components. We further argue that our prototype is easier, more robust, and more intuitive to use than solutions available elsewhere.
A Multi-Objective Graph-based Genetic Algorithm for image segmentation Image segmentation is one of the most challenging problems in Computer Vision. This process consists of dividing an image into different parts which share a common property, for example, identifying a specific object within a photo. Different approaches have been developed over recent years. This work focuses on Unsupervised Data Mining methodologies, especially on Graph Clustering methods, and their application to this problem. These techniques blindly divide the image into different parts according to a criterion. This work applies a Multi-Objective Genetic Algorithm to produce good clustering results compared with classical and modern clustering algorithms. The algorithm is analysed and compared against different clustering methods using a precision and recall evaluation, with the Berkeley Image Database used to carry out the experimental evaluation.
A typical fire hose acting at a flow rate of 125 gallons/minute and a hydrant pressure of 110 psi will create a net rearward thrust that is on the order of 40 to 70 lbs. of force or more. This force must be resisted by a fireman holding the hose.
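As a rough plausibility check on these figures (not part of the original disclosure), the rearward thrust can be estimated from the momentum flux of the jet; the 3/4-inch (19 mm) tip diameter assumed below is illustrative only.

% Momentum-flux estimate of the nozzle reaction, assuming a 3/4-inch smooth-bore tip.
% Q = 125 gal/min \approx 7.9 \times 10^{-3}\,\mathrm{m^3/s}, so \dot{m} = \rho Q \approx 7.9\,\mathrm{kg/s}.
\begin{align*}
A &= \tfrac{\pi}{4} d^{2} \approx \tfrac{\pi}{4}(0.019\,\mathrm{m})^{2} \approx 2.8 \times 10^{-4}\,\mathrm{m^2},\\
v &= Q/A \approx \frac{7.9 \times 10^{-3}}{2.8 \times 10^{-4}}\,\mathrm{m/s} \approx 28\,\mathrm{m/s},\\
F &\approx \rho Q v \approx 1000 \times 7.9 \times 10^{-3} \times 28 \approx 220\,\mathrm{N} \approx 50\,\mathrm{lbf}.
\end{align*}

This is consistent with the 40 to 70 lbs. range stated above; a smaller tip or higher pressure raises the figure.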
The effort required to control and contain such a hose is considerable. While the hose is on at full force, the thrust that must be absorbed is unremitting. Where a fireman is endeavouring to hold the hose directly in his hands, the muscles in the arms and hands of the fireman become fatigued. If a fireman's control over a hose declines sufficiently, a dangerous condition may arise when there is a risk that the hose may be dropped and thereby become freed to writhe and flail about.
Out of recognition of such risks a number of prior art patents have endeavoured to address this problem by providing firemen with devices to assist them in supporting a fire hose in operation:
U.S. Pat. No. 2,919,071--M. J. Dalton
U.S. Pat. No. 425,256--C. R. Robinson
U.S. Pat. No. 407,118--C. R. Robinson
U.S. Pat. No. 3,885,739--Phillip E. Tuttle
U.S. Pat. No. 1,829,621--G. L. Whiteford
U.S. Pat. No. 3,223,172--J. M. Moss
U.S. Pat. No. 5,052,624--Boers et al
U.S. Pat. No. 4,216,911--Huperz et al
The two Robinson references show the use of attachments in the form of anchored supports, that allow the thrust of a hose to be partially transmitted to the earth by a strut or post. Such arrangements, however, are of limited use where a fireman is continually moving to better reposition himself.
Whiteford shows the use of a convertible bar that can serve as a strut, as in the Robinson references, or permit two firemen to share in supporting a hose nozzle. While chains are provided in Whiteford to carry the weight of the hose, no provision is made to assist the firemen in absorbing rearwardly-directed thrust.
Dalton depicts a hand-held pistol grip that attaches to a fire hose just behind a nozzle, allowing a fireman to absorb the thrust of the hose through the palm of his hand, rather than through a grasping action.
Tuttle, like Dalton, depicts a pistol-grip arrangement, with a finger-operated valve added.
Moss addresses the problem of transferring thrust by providing a harness to be worn by a fireman, a series of nozzles being attached directly to the harness. This arrangement, however, lacks the flexibility of hand-control over a hose nozzle.
Two patents not relating to the fire hose application of the present invention are those to Huperz and Boers.
Huperz shows a nozzle with a pistol-grip and trigger arrangement as in Tuttle. Water is allowed to enter this nozzle system through the grip or handle. A cushioned "shoulder or arm rest" (19) is also shown as extending rearwardly from the nozzle/pistol-grip assembly, in the form of a single rod terminating in a curved and padded butt-end, which presumably may rest against a fireman's shoulder.
Boers also shows a nozzle with pistol-grip and trigger, water being fed through to the nozzle by a substantially linear rearward extension from the nozzle. Above this extension, a "shoulder rest" (26) is mounted at the end of an arm (27) that extends downwardly to fasten to the rearward extension behind the nozzle.
Both Huperz and Boers include a shoulder rest as part of an integral triggered nozzle assembly. However, this arrangement, in both cases contemplates applications involving small diameter, pressurized water-jets used for cleaning, rather than fire hose systems. One distinction, therefore, between these applications and that of the case of a fire hose is that the weight of water contained within the fire hose and the thrust developed will be far greater than that which would be present in a washing-jet nozzle.
A feature of the Huperz disclosure is that the "stock" on his nozzle-support system is mounted in-line with the nozzle. This directs the rearward thrust of the water jet directly against the shoulder of the fireman, without developing any twisting couple arising from such thrust. A twisting couple may, however, as is subsequently shown, arise from the deflection of water within the hose and support itself.
Further, the Huperz arrangement requires the nozzle-support system to be elevated to shoulder level--an arrangement which would be fatiguing to a fireman holding a fire hose. By reason of the height to which the jet-gun of Huperz must be raised off the ground, this design, if converted to pipes of the diameter required for fire hoses, would burden a fireman with lifting a heavy length of hose and water high off the ground.
In Boers, the shoulder rest is elevated above the reaction line of the nozzle, allowing the sprayer to be located below shoulder level. A very slight downward bend appears to be depicted in the drawings of the Boers sprayer, rearwardly of the shoulder bracing plate. No reference is made, however, to this bend in the written disclosure. Further, based on the normal proportions of a user of the Boers device, the hose coupling at the rearward end would appear to be positioned at a point which is behind the vertical plane defined by the rearward side of the user. These features will be contrasted with those of the present invention, discussed subsequently.
None of these references acknowledge that water changing direction as it flows around a bend will create an outward thrust. Depending on the location of this thrust, a twisting force or "couple" may be developed. This will be apparent to a user endeavouring to hold the fire hose as a tendency for the hose nozzle to deflect vertically from the direction of intended use. In the case of a fire hose these forces would be substantial.
There is a need, therefore, for a simple but robust type of support that can be readily adapted for use with typical existing fire hoses and nozzles and which will allow a fireman to absorb both the thrust created by the fire hose against his torso, and the torques that tend to cause the hose to twist, thereby reducing the fatigue experienced by the hands and arms of the fire fighter.
The invention in its general form will first be described, and then its implementation in terms of specific embodiments will be detailed with reference to the drawings following hereafter.
These embodiments are intended to demonstrate the principle of the invention, and the manner of its implementation. Such embodiments represent one example of how the benefits of the invention may be obtained. Other means for achieving the same effects will be apparent from an examination of the operations of the elements of such preferred embodiments.
The invention will then be further described, and defined, in each of the individual claims which conclude this specification. |
P.L. 94-142: Popular Welfare Bandwagon or a None-Too-Stable Educational Ship? the social-political surroundings that shroud them. What may be represented in the four responses is more than a difficulty in communicating (I to them, and they to me). It may be the inability of persons from different backgrounds (e.g., parents, legislators, officers of the judiciary), to perceive the same problem similarly. Therefore, it is not surprising that differing solutions are proposed. Simply then, three major issues loom in response to a federal law that requires state and local educational policy action. These issues are: 1. Is P.L. 94-142 achieving its purpose? 2. Will the Law set into motion a current that will ultimately result in a significant backlash, particularly by regular educators who view it as a threat? |
Q:
Empty values in the database
</tr>
<h3>Добавление оборудования</h3>
<form method="post" action = "Z:\home\localhost\www\Work_Log\page\journal1\">
<span class="label"> Название </span>
<input style="width: 5%;" type="text" class="input" name="name_equipment" >
<span class="label">Инвентарный </span>
<input style="width: 10%;" type="text" class="input" name="inv_number" >
<span class="label">Картинка </span>
<input style="width: 15%;" class="input" name="image" >
<span class="label">Примечание </span>
<input style="width: 15%;" class="input" name="description" >
<br/>
<br /><input type = "submit" name = "add" value ="Добавить"/><br />
<br />
</form>
<?php
getConect();
$name_equipment = ($_POST['name_equipment']);
$inv_number = ($_POST['inv_number']);
$image = ($_POST['image']);
$description = ($_POST['description']);
$query = mysql_query('INSERT INTO`equipment`(`name_equipment`,`inv_number`,`image`,`description`)
VALUES ("'.$name_equipment.'","'.$inv_number.'","'.$image.'","'.$description.'")');
}
?>
Here is a small form with an add function. The database connection works, but empty rows end up in the equipment table, and not when the button is clicked but when the page is refreshed.
A:
Before running the insert, you need to check the $_POST array for empty values.
Judging from your code, the INSERT query runs every time; you need to check the $_POST array, for example:
if (isset($_POST['name_equipment']) && isset($_POST['inv_number']) && isset($_POST['image']) && isset($_POST['description']))
{
    // Validate/escape the variables before the query (to avoid SQL injection)
    // Only then open the database connection and run the INSERT query
}
Timothy Radcliffe
Formation
Timothy Radcliffe was born into a Catholic family in London. He studied at Downside School and St John's College, Oxford. He entered the Dominican Order in 1965 and was ordained a priest in 1971.
Career
During the mid 1970s Timothy was based at the West London Catholic Chaplaincy at More House, Cromwell Road, London SW7.
Timothy Radcliffe taught Holy Scripture at Oxford University at Blackfriars, and was elected provincial of England in 1988. In 1992 he was elected Master of the Dominican Order and held that office until 2001. During his tenure as Master, he was ex officio Grand Chancellor of the Pontifical University of Saint Thomas Aquinas, Angelicum in Rome.
In 2001, after the expiration of his nine-year mandate as Master of the Dominican order, Timothy Radcliffe took a sabbatical year. Starting in 2002, he became again a simple member of the Dominican community of Oxford and does public speaking.
Timothy Radcliffe occasionally presided at the Mass for gay people at Our Lady of the Assumption, Warwick Street, which Cardinal Murphy O'Connor, the Archbishop of Westminster, recognised as part of the Archdiocese's mission to gay people.
In 2015 Radcliffe was named a consultor to the Pontifical Council for Justice and Peace. This caused controversy due to statements he had made about the "eucharistic" dimension of homosexual sexual activity.
The American television network EWTN dropped plans to cover an event in Ireland at which he was scheduled to speak because of Radcliffe's participation. A host at the station called Radcliffe's views "at sharp variance to Catholic teaching.”
Radcliffe had written, “Certainly [homosexual activity] can be generous, vulnerable, tender, mutual, and non-violent. So in many ways, I would think that it can be expressive of Christ's self-gift."
Honours
In 2003, Timothy Radcliffe was made an honorary Doctor of Divinity in the University of Oxford, the University's highest honorary degree. The Chancellor of the University of Oxford ended his citation with the following words: "I present a man distinguished both for eloquence and for wit, a master theologian who has never disregarded ordinary people, a practical man who believes that religion and the teachings of theology must be constantly applied to the conduct of public life: the Most Reverend Timothy Radcliffe, MA, sometime Master of the Dominican Order and Grand Chancellor of the Pontifical University of St Thomas Aquinas, for admission to the honorary degree of Doctor of Divinity."
He was the 2007 winner of The Michael Ramsey Prize for theological writing, for his book What Is the Point of Being A Christian?
Timothy Radcliffe is Patron of the International Young Leaders Network and helped launch Las Casas Institute, dealing with issues of ethics, governance and social justice. These are both projects of Blackfriars, Oxford.
He is also Patron of 'Catholic AIDS Prevention and Support', 'Christian Approaches to Defence and Disarmament', and 'Embrace the Middle East', as well as being on the Board of 'Fellowship and Aid to the Church in the East'.
Published: Jan. 24, 2017 at 01:15 p.m.
Updated: Jan. 24, 2017 at 01:33 p.m.
Ladarius Green's 2016 season was delayed by persistent headaches and ended prematurely due to another in a string of concussions.
While Pittsburgh Steelers quarterback Ben Roethlisberger plans to contemplate retirement this offseason, his tight end is not ready to follow suit.
Green insists he's not concerned with the short- or long-term effects of his concussions, which number at least four since the start of the 2014 season.
"I talked to all the doctors. They reassured me of that," Green explained Monday, via the Pittsburgh Post-Gazette. "I'm not too worried about that. I'm not focusing on that right now.
"I was able to play, show a little what I can do. I just hope it gets a lot better from there."
Signed to a four-year, $20 million contract in March, Green was expected to fill Heath Miller's rather sizable shoes in the Steelers' aerial attack. After starting the season on the physically unable to perform list, he ended up playing just six games, recording 304 yards and a touchdown on 18 receptions.
When healthy, Green made a difference with his speed down the seam, hauling in six passes for 110 yards and a score in a 24-14 victory over the Giants in Week 13. His presence was sorely missed in the season-ending loss to the Patriots.
"It was very frustrating," Green said of sitting out the AFC Championship Game. "Once I got going in the middle of the season, I was excited, I was happy to be out there. You can't plan stuff like that. I just had a setback. A frustrating season, but everybody has one. I hope this is the only one I have." |
Lima beans should not be eaten raw.
Lima beans (Phaseolus lunatus), also called butterbeans, are commonly grown as annual vegetables, but they need a long growing season to produce. Pole lima beans need from 80 to 100 days to mature, and bush varieties take 65 days, so they grow well in U.S. Department of Agriculture plant hardiness zones 6 to 9. Adding Epsom salt to lima beans does not affect their growth speed or cause them to ripen faster.
Epsom salt contains magnesium sulfate, which is a water-soluble form of magnesium. Lima beans need magnesium, but it does not regulate growth in the plant. Instead, magnesium is incorporated as a component of chlorophyll, which gives the lima beans their green color. Chlorophyll is created as part of the photosynthesis process the lima beans use to create sugars and carbohydrates from sunlight. While chlorophyll and photosynthesis are vital to the lima beans, they do not regulate the growth speed of the plants.
A green lima bean plant does not benefit from an Epsom salt application. Magnesium deficiency in lima beans first shows up as yellow leaves with green veins. If the deficiency continues, leaves can become tan or bronze. Leaf discoloration can be a symptom of deficiency in other nutrients including phosphorus, sulfur and zinc, however. Symptoms can also be a result of excess boron or manganese in the soil as well, so testing the soil before adding Epsom salt or other amendments is recommended.
Because Epsom salt is highly water-soluble, it is quickly washed from the soil and is not available to the lima bean roots for long. It provides a quick boost for magnesium deficient plants, but adding magnesium that will stay in the soil longer is preferable. Dolomitic lime or potassium magnesium sulfate adds more magnesium to the soil for the long term. Applying Epsom salt to lima bean plants as a spray can result in burned leaves, so it should be applied with caution.
Many fertilizers and soil amendments contain nitrogen. Like other beans, lima beans can fix nitrogen from the air into the soil. Because of this ability, lima beans do not normally benefit from adding nitrogen fertilizers. Applying nitrogen to normal soil gives lima beans a growth spurt, but the growth is in the leaves and stems. The plant produces fewer bean pods as it puts energy into producing the lush foliage.
Depending on your point of view about biotechnology, recent revelations about the effects of CRISPR on organisms could be good or bad. As a science writer for Bioscription, I take the news as a little of both.
Gene Drives
So, what happened? Recent research between the University of Kansas and Cornell University has discovered that the usage of a gene drive, a system whereby genetic modifications are spread to other members of a population down the genetic line, does not come without its complications.
This sort of technology has multi-fold uses, even if it is currently still being developed. It could be used to give a person and later a population resistance genes to a deadly disease or it could be used to wipe out a species of deadly mosquitoes, like Aedes aegypti that spread horrible pathogens like dengue fever, chikungunya, and Zika.
Obviously, with humans, it would be a very slow process indeed. Such a desired change would be far more advantageous if just applied directly to each member of the population.
But for faster breeding species like mosquitoes, a gene drive would prove far more effective. Or so we thought.
Driving Population
The original problem in challenging A. aegypti included the issue of introducing a gene change that would damage the fitness of the modified individual. The forces of natural selection would, pardon the pun, naturally result in the changed mosquito not passing on its genes. There are even genetic factors that control for this by purposefully stripping out genes that lower fitness. This was a primary concern when using genetic modification in A. aegypti.
More research, however, resulted in the idea of tying the modified gene to so-called "selfish genes", components that are far more likely to be passed on, dramatically increasing the spread of the modified gene. With this plan, scientists were able to overcome the fitness cost of the change by increasing the inheritance chance.
This is, essentially, what a gene drive is. A modification that isn’t positive for the individual or the population, but is forced to be passed on thanks to tying it to selfish genes.
The only other issue was being able to specifically place the genes next to the selfish genes to force this conservation. But you can likely guess what oft-discussed technological innovation solved that problem.
Yes, it’s our old friend, CRISPR-Cas9.
Resistance To Change
However, now we come to the new research and the new complication that has arisen. Using a genetic framework and modeling system, the researchers were able to determine that resistance to CRISPR modifications is not only possible, but inevitable, especially in fast-breeding organisms like mosquitoes.
So, does this mean that the idea of gene drives need to be tossed before they have even begun to be used? No, not quite. Not at all, in fact.
The same study also lays out other options, such as engineering methods with CRISPR that would significantly lower the resistance chance. It would still be inevitable, but it will take far longer. Next, the way in which the gene drive is introduced into a population will play a huge role in how quickly resistance forms. Models of the population to be modified are a necessity before any gene drive technology is to be rolled out.
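To give a feel for why such modeling matters, here is a toy, deterministic allele-frequency sketch in Python. It is not the model used in the study: the homing efficiency, the NHEJ-derived resistance rate, the fitness cost, and the random-mating assumption are all illustrative choices, but it shows how quickly a resistant allele can take over once it appears.

import numpy as np

def gene_drive_trajectory(c=0.95, r=0.02, s=0.05, p0_drive=0.01, generations=100):
    # Toy model with three alleles: W (wild type), D (drive), R (drive-resistant,
    # e.g. produced by nonhomologous end joining).  In the germline of W/D
    # heterozygotes the W allele is converted to D with probability c ("homing")
    # or mutated to R with probability r; otherwise it stays W.  Each copy of D
    # multiplies fitness by (1 - s).  All parameter values are illustrative.
    freqs = np.array([1.0 - p0_drive, p0_drive, 0.0])  # frequencies of [W, D, R]
    history = [freqs.copy()]
    index = {'W': 0, 'D': 1, 'R': 2}
    for _ in range(generations):
        pW, pD, pR = freqs
        # Hardy-Weinberg genotype frequencies from random mating
        geno = {('W', 'W'): pW * pW, ('D', 'D'): pD * pD, ('R', 'R'): pR * pR,
                ('W', 'D'): 2 * pW * pD, ('W', 'R'): 2 * pW * pR, ('D', 'R'): 2 * pD * pR}
        # Viability selection: multiplicative cost s per drive allele
        weighted = {g: f * (1 - s) ** sum(a == 'D' for a in g) for g, f in geno.items()}
        total = sum(weighted.values())
        # Gamete output per genotype (homing happens only in W/D heterozygotes)
        gametes = np.zeros(3)
        for (a, b), w in weighted.items():
            w /= total
            if {a, b} == {'W', 'D'}:
                gametes += w * np.array([(1 - c - r) / 2, 0.5 + c / 2, r / 2])
            else:
                gametes[index[a]] += w / 2
                gametes[index[b]] += w / 2
        freqs = gametes / gametes.sum()
        history.append(freqs.copy())
    return np.array(history)

traj = gene_drive_trajectory()
print("final W, D, R frequencies:", traj[-1].round(3))

Rerunning the sketch with a lower resistance rate r, or with several alternating constructs, gives a qualitative sense of why the mitigation strategies described above mainly buy time.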
A Directed Assault
Lastly, CRISPR techniques must be honed into one specific type of genetic repair when doing the modification. After inserting, changing, or removing the desired gene, CRISPR just allows the normal DNA repair systems to fix the broken strands of the genome.
What is desired is for the homology-directed repair option to be used by the cell, where it compares the genome to a template. This reduces the chance of mutations and resistance genes forming.
The other method is nonhomologous end joining, where the broken strands are just pasted back together with no care for genetic sequence. This has a huge chance of changing the genetic code and creating resistance.
Finally, multiple different CRISPR constructs should be created that can be swapped out via gene drives in a population as resistance inevitably forms, essentially repeatedly introducing modifications to cause the same desired effect every time an individual in the population develops a way around the prior modification.
That way, a change like making mosquitoes resistant to dengue fever so they can’t contract it and pass it on to humans will be retained in the population as a whole.
A Lengthier Consideration
At first glance, it would appear that this new research conclusion severely damages the potential for gene drives, but in reality, all it does is increase the need for prior thought and readiness to be made before using one.
And I think even the people against biotechnology can agree that increased planning is a good thing.
Photo CCs: Aedes aegypti adult ovaries from Wikimedia Commons
For the fifth Comic-Con in a row, ABC’s Once Upon A Time is coming back to the San Diego confab. It’s always one of the big panels in ballroom 20 that wows fans with exclusive peeks into the new season; last year they showed off some of their Frozen footage to big screams. In addition, there’s always a huge dais filled with the series’ stars. Showing up this year on Saturday, July 11 from 10-10:45 AM will be Ginnifer Goodwin, Jennifer Morrison, Lana Parrilla, Josh Dallas, Emilie de Ravin, Colin O’Donoghue, Robert Carlyle, Rebecca Mader, Sean Maguire as well as executive producers Edward Kitsis and Adam Horowitz. The session will be moderated by Yvette Nicole Brown. In addition to giving fans a sneak peek, there’s typically a special announcement during the Once Upon A Time panel about a new character or actor debuting in the fall.
In addition, ABC will be showing off their new series The Muppets on Saturday from 3-4 PM in Room 6A. Panel will include executive producers and co-writers Bill Prady and Bob Kushell, director Randall Einhorn and EP Bill Barretta who is also the voice of Pepe, Dr. Teeth and Rowlf. Other talented performers from the show will be in attendance. Damian Holbrook will be moderating.
Among some of ABC’s notable Comic-Con marketing during the confab, there will be Once Upon a Time and The Muppets-branded pedicabs buzzing around the Gas Lamp district. There’s also The Muppets branded trolleys running outside of the convention center. |
Nonequilibrium relaxation of the two-dimensional Ising model: Series-expansion and Monte Carlo studies We study the critical relaxation of the two-dimensional Ising model from a fully ordered configuration by series expansion in time t and by Monte Carlo simulation. Both the magnetization (m) and energy series are obtained up to 12th order. An accurate estimate of the dynamical critical exponent z from series analysis is difficult but compatible with 2.2. We also use Monte Carlo simulation to determine an effective exponent, z_eff(t) = -(1/8) d ln t / d ln m, directly from a ratio of a three-spin correlation to m. Extrapolation to t = infinity leads to the estimate z = 2.169 ± 0.003.

I. INTRODUCTION The pure relaxational dynamics of the kinetic Ising model with no conserved fields, designated as model A in the Hohenberg-Halperin review, has been studied extensively by various approaches. Unlike some of the other models, in which the dynamical critical exponent z can be related to the static exponents, it seems that z of model A is independent of the static exponents (however, see Ref. ). In the past twenty years, numerical estimates for the dynamical critical exponent z have scattered considerably, but recent studies seem to indicate a convergence of the estimated values. Our studies contribute further to this trend. We briefly review some of the previous work on the computation of the dynamical critical exponent, concentrating mostly on the two-dimensional Ising model. The conventional theory predicts z = 2 − η, where η is the critical exponent in the two-point correlation function, G(r) ∝ r^(−d+2−η). For the two-dimensional Ising model (η = 1/4), this gives z = 1.75. It is known that this is only a lower bound. It is very interesting to note that series expansions gave one of the earliest quantitative estimates of z. Dammann and Reger have the longest high-temperature series (20 terms) for the relaxation times so far, obtaining z = 2.183 ± 0.005. However, re-analysis of the series by Adler gives z = 2.165 ± 0.015. There are two types of field-theoretic renormalization group analysis: the ε-expansion near dimension d = 4 and an interface model near d = 1. It is not clear how reliable either is when interpolated to d = 2. Various schemes of real-space renormalization group were proposed in the early eighties, but it appears that there are controversies as to whether some of the schemes are well-defined. The results are not of high accuracy compared to other methods. Dynamic Monte Carlo renormalization group is a generalization of the equilibrium Monte Carlo renormalization group method. The latest work gives z = 2.13 ± 0.01 in two dimensions. The equilibrium Monte Carlo method is one of the standard methods to estimate z. However, long simulations (t ≫ L^z) are needed for sufficient statistical accuracy of the time-displaced correlation functions. The analysis is quite difficult due to the unknown nature of the correlation functions. Nonequilibrium relaxation, starting from a completely ordered state at T_c, has nice features. The analysis of the data is more or less straightforward. The lattice can be made very large, so that finite-size effects can be ignored (for t ≪ L^z). The catch here is that the correction to scaling due to finite t is large. Recently, the idea of damage spreading has also been employed.
Methods based on statistical errors in equilibrium Monte Carlo simulation, finite-size scaling of nonequilibrium relaxation, and finite-size scaling of the eigenvalues of the stochastic matrix are used to compute the exponent. A recent calculation with a variance-reducing Monte Carlo algorithm for the leading eigenvalues gives prediction z = 2.1665 ± 0.0012. This appears to be the most precise value reported in the literature. The high-temperature series expansions for the relaxation times are often used in the study of Ising dynamics. In this paper, we present a new series which directly corresponds to the magnetization (or energy) relaxation at the critical temperature. Our series expansion method appears to be the only work which uses time t as an expansion parameter. The generation of these series is discussed in Sections II and III. Dynamical scaling mentioned in Section IV forms the basis of the analysis, and the results are analyzed in Section V. We feel that the series are still too short to capture the dynamics at the scaling regime. We also report results of an extensive Monte Carlo simulation for the magnetization relaxation. We find that it is advantageous to compute an effective dynamical critical exponent directly with the help of the governing master equation (or the rate equation). The simulation and analysis of Monte Carlo data are presented in Section VI. We summarize and conclude in Section VII. II. SERIES EXPANSION METHOD In this section, we introduce the relevant notations, and outline our method of series expansion in time variable t. The formulation of single-spin dynamics has already been worked out by Glauber, and by Yahata and Suzuki long time ago. To our knowledge, all the previous series studies for Ising dynamics are based on high-temperature expansions of some correlation times. As we will see, expansion in t is simple in structure, and it offers at least a useful alternative for the study of Ising relaxation dynamics. We consider the standard Ising model on a square lattice with the energy of a configuration given by where the spin variables i take ±1, J is the coupling constant, and the summation runs over all nearest neighbor pairs. The thermal equilibrium value of an observable f () at temperature T is computed according to the Boltzmann distribution, The equilibrium statistical-mechanical model defined above has no intrinsic dynamics. A local stochastic dynamics can be given and realized in Monte Carlo simulations. The dynamics is far from unique; in particular, cluster dynamics differs vastly from the local ones. A sequence of Monte Carlo updates can be viewed as a discrete Markov process. The evolution of the probability distribution is given by where W is a transition matrix satisfying the stationary condition with respect to the equilibrium distribution, i.e., P eq = P eq W. A continuous time description is more convenient for analytic treatment. This can be obtained by fixing t = k/N, and letting t = 1/N → 0, where N = L 2 is the number of spins in the system. The resulting differential equation is given by where is a linear operator acting on the vector P (, t), which can be viewed as a vector of dimension 2 N, indexed by. If we use the single-spin-flip Glauber dynamics, we can write where and F j is a flip operator such that The flip rate w j ( j ) for site j depends on the spin value at the site j as well as the values of its nearest neighbor spins k. The full probability distribution clearly contains all the dynamic properties of the system. 
Unfortunately its high dimensionality is difficult to handle. It can be shown from the master equation, Eq., that any function of the state (without explicit t dependence) obeys the equation where and the average of f at time t is defined by Note that the time dependence of f is only due to P (, t). For the series expansion of this work, it is sufficient to look at a special class of functions of the form A = j∈A j, where A is a set of sites. In such a case we have With this set of equations, we can compute the n-th derivative of the average magnetization 0 t. A formal solution to Eq. is This equation or equivalently the rate equation, Eq., forms the basis of our series expansion in time t. A few words on high-temperature expansions are in order here. They are typically done by integrating out the time dependence-the nonlinear relaxation time can be defined as The equilibrium correlation time (linear relaxation time) can be expressed as where = N m 2 eq is the reduced static susceptibility. The average is with respect to the equilibrium distribution, P eq (). A suitable expansion in small parameter J/k B T can be made by writing L = L 0 + ∆L. It is clear that we can also perform the Kawasaki dynamics with a corresponding rate. Of course, since the magnetization is conserved, only energy and higher order correlations can relax. A very convenient form for the Glauber transition rate, Eq., on a two-dimensional square lattice is where the site 0 is the center site, and sites 1, 2, 3, and 4 are the nearest neighbors of the center site. At the critical temperature, tanh K c = √ 2 − 1, we have x = −5 √ 2/24 and y = √ 2/24. III. COMPUTER IMPLEMENTATION AND RESULTS A series expansion in t amounts to finding the derivatives evaluated at t = 0: The derivatives are computed using Eq. recursively. A general function is coded in C programming language to find the right-hand side of Eq. when the configuration A, or the set A, is given. The set A is represented as a list of coordinates constructed in an ordered manner. By specializing the flip rate as given by Eq., and considering each site in A in turn, the configurations on the right-hand side of the rate equation are generated in three ways: the same configuration as A, which contributes a factor (coefficient of a term) of −1; a set of configurations generated by introducing a pair of nearest neighbor sites in four possible directions, with one of the sites being the site in A under consideration, and making use of the fact 2 i = 1. We notice that the site in A under consideration always gets annihilated. Each resulting configuration contributes a factor of −x; and same as in but two more sites which are also the nearest neighbors of the site in A under consideration are introduced. This two extra sites form a line perpendicular to the line joined by the first pair of neighbor sites in. Each of this configuration has a factor of −y. It is instructive to write down the first rate equation, taking into account of the lattice symmetry (e.g. i = 0, for all i): The core of the computer implementation for series expansion is a symbolic representation of the rate equations. Each rate equation is represented by a node together with a list of pointers to other nodes. Each node represents a function A, and is characterized by the set of spins A. The node contains pointers to the derivatives of this node obtained so far, and pointers to the "children" of this node and their associated coefficients, which form a symbolic representation of the rate equations. 
The derivatives are represented as polynomials in y. Since each node is linked to other nodes, the computation of the n-th derivative can be thought of as expanding a tree (with arbitrary number of branches) of depth n. The traversal or expansion of the tree can be done in a depth-first fashion or a breadth-first fashion. Each has a different computational complexity. A simple depthfirst traversal requires only a small amount of memory of order n. However, the time complexity is at least exponential, b n, with a large base b. A breadth-first algorithm consumes memory exponentially, even after the number of the rate equations has been reduced by taking the symmetry of the problem into account. The idea of dynamic programming can be incorporated in the breadthfirst expansion where the intermediate results are stored and referred. To achieve the best performance, a hybrid of strategies is used to reduce the computational complexity: Each configuration (pattern) is transformed into its canonical representation, since all configurations related by lattice symmetry are considered as the same configuration. We use breadth-first expansion to avoid repeated computations involving the same configuration. If a configuration has already appeared in earlier expansion, a pointer reference is made to the old configuration. Each configuration is stored in memory only once. However, storing of all the distinct configurations leads to a very fast growth in memory consumption. The last few generations in the tree expansion use a simple depth-first traversal to curb the problem of memory explosion. Parallel computation proves to be useful. The longest series is obtained by a cluster of 16 Pentium Pro PCs with high speed network connection (known as Beowulf). The program is controlled by two parameters D and C. D is the depth of breadth-first expansion of the tree. When depth D is reached, we no longer want to continue the normal expansion in order to conserve memory. Instead, we consider each leave node afresh as the root of a new tree. The derivatives up to (n − D)th order are computed for this leave node. The expansion of the leave nodes are done in serial, so that the memory resource can be reused. The parameter C controls the number of last C generations which should be computed with a simple depth-first expansion algorithm. It is a simple recursive counting algorithm, which uses very little memory, and can run fast if the depth C is not very large. In this algorithm the lattice symmetry is not treated. The best choice of parameters is D = 6 and C = 2 on a DEC AlphaStation 250/266. The computer time and memory usage are presented in Table I. As we can see from the table, each new order requires more than a factor of ten CPU time and about the same factor for memory if memory is not reused. This is the case until the order D + C + 1, where no fresh leave-node expansion is made. There is a big jump (a factor of 60) in CPU time from 9th order to 10-th order, but with a much smaller increase in memory usage. This is due to the change of expansion strategy. Finally the longest 12-th order series is obtained by parallel computation on a 16-node Pentium Pro 200 MHz cluster in 12 days. The number of distinct nodes generated to order n is roughly 1 100 11 n. To 12-th order, we have examined about 10 10 distinct nodes. The series data are listed in Table II. IV. 
DYNAMICAL SCALING The traditional method of determining the dynamical critical exponent z is to consider the time-displaced equilibrium correlation functions. However, one can alternatively look at the relaxation towards thermal equilibrium. The basic assumption is the algebraic decay of the magnetization at T_c, m(t) ∝ t^(−β/(νz)). This scaling law can be obtained intuitively as follows. Since the relaxation time and the correlation length are related through τ ∝ ξ^z by definition, after time t the equilibrated regions are of size ξ ∝ t^(1/z). Each such region is independent of the others, so the system behaves as a finite system of linear length ℓ ∝ t^(1/z). According to finite-size scaling, the magnetization is of order ℓ^(−β/ν) on a finite system of length ℓ. Each region should have the same sign of the magnetization, since we started the system with all spins pointing in the same direction. The total magnetization is equal to that of a correlated region, giving m ∝ t^(−β/(νz)). The same relation can be derived from a more general scaling assumption: by requiring that m(t, ·) remain finite as the scaling argument tends to zero with fixed t, we get Eq.. Equation is true only asymptotically for large t. It seems that there is no theory concerning the leading correction to scaling. As a working hypothesis, we assume that the leading correction is of the form t^(−∆). The Monte Carlo simulation results as well as the current series analysis seem to support this, with ∆ near 1. Another possibility might be z = 2 with a logarithmic correction.

V. ANALYSIS OF SERIES A general method for extending the range of convergence of a series is Padé analysis, where the series is approximated by a ratio of two polynomials. We first look at the poles and zeros of the Padé approximants in the variable s = t/(t + 1) for m. Since t varies in the range [0, ∞), it is easier to look at s, which maps the interval [0, ∞) to [0, 1). There are clusters of zeros and poles in the part of the s-interval which corresponds to negative t, but the interval [0, 1) is clear of singularities, which gives us hope for analytic continuation to the whole interval [0, 1). If we assume the asymptotic behavior m ∝ t^(−a), then d ln m/dt = −a/t ≈ −a(1 − s) for large t, or s → 1. This means that the Padé approximant should give a zero around s = 1. We do observe zeros near 1, but typically a pair of zeros off the real axis together with a pole on the real axis near 1, or sometimes only a pair of real zeros. These complications make a quantitative analysis difficult. Since we know the exact singular point (corresponding to t = ∞), we use biased estimates by considering the function F(t) = d ln m/d ln t. An effective exponent z_eff(t) is defined by z_eff(t) = −(β/ν)/F(t) = −1/(8 F(t)). Again we prefer to use the variable s to bring the point at infinity to the finite value 1. Due to an invariance theorem, the diagonal Padé approximants in s and t are exactly equal. For off-diagonal Padé approximants, s is more useful since the approximants do not diverge at infinity. We use methods similar to those of Dickman et al. and Adler. The general idea is to transform the function m(t) into other functions which, one hopes, are better behaved than the original function. In particular, we require that, as t → ∞, the transformed function approaches a constant related to the exponent z. The first transformation is that of Eq.. A second family of transformations, G_p(t), involves a real positive parameter p; one can show that the two functions are related. The last transform, H(t), involves an adjustable parameter ∆, and F can also be replaced by G_p.
If the leading correction to the constant part is of the form t^(−∆), the transformation will eliminate this correction term. The transformation of the independent variable t to another variable is important to improve the convergence of the Padé approximants. We found it useful to consider a generalization of the Euler transform, u = 1 − (1 + t)^(−∆). The parameter ∆ is adjusted in such a way as to get the best convergence among the approximants. Since for t → ∞ we have u → 1, a Padé approximant near u = 1 is an analytic function in u, which implies that the leading correction is of the form t^(−∆). Note that ∆ = 1 corresponds to the Euler transformation (u = s when ∆ = 1). One of the fundamental difficulties of the transformation method is that one does not know a priori that a certain transformation is better than the others. Worse still, we can easily get misleading apparent convergence among different approximants. Thus, we need to be very careful in interpreting our data. Specifically, we found that Eq. gives less satisfactory results than Eq., where the independent variable t is transformed into u according to Eq.. Figure 1 is a plot of all the Padé approximants of order [N/D], with N ≥ 4, D ≥ 4, and N + D ≤ 12, as a function of the parameter ∆, for G_1(t = ∞). Good convergence is obtained at ∆ = 1.217, with z ≈ 2.170. The estimates of z vary only slightly with p, by about 0.005 as p varies from 0.5 to 2. Using F(t) of Eq., the optimal value is ∆ = 1.4 with z ≈ 2.26. Using the function H does not seem to improve the convergence. Even though the value 2.170 seems to be a very good result, we are unsure of its significance, since there are large deviations of the Padé approximation to the function F(t) for 1/t < 0.2 from the Monte Carlo result of Fig. 2. An objective error estimate is difficult to give. Estimates from the standard deviation of the approximants tend to give very small errors that are nevertheless incompatible among different methods of analysis. Different Padé approximants are definitely not independent; we found that some approximants are almost equal to one another to a high precision. A conservative error that we quote from the series analysis is 0.1. The analysis of the energy series is carried out similarly, with m replaced by ⟨σ_0 σ_1⟩ − √2/2, where the constant √2/2 is the equilibrium value. The large-t asymptotic behavior is t^(−1/z). Both the F and G functions give comparable results, and better convergence is obtained for ∆ > 1. The value for z is about 2.2, but good crossing of the approximants is not observed. We feel that a better analysis method or a longer series is needed.

VI. MONTE CARLO SIMULATION Our motivation for a Monte Carlo calculation was to check the series result. It turns out that the data are sufficiently accurate to be discussed in their own right. Such improved accuracy is achieved by using Eq., which permits a direct evaluation of the effective exponent z_eff(t). We compute the magnetization m = ⟨σ_0⟩, the energy per bond ⟨σ_0 σ_1⟩, and the three-spin correlation m_3 = ⟨σ_1 σ_2 σ_3⟩, where the three spins are nearest neighbors of a center site, with one of the neighbors missing from the product. With these quantities, the logarithmic derivative, Eq., can be computed exactly without resorting to finite differences. From Eq. we can write the corresponding relation at T = T_c. The above equation also defines the effective exponent z_eff(t), which should approach the true exponent z as t → ∞.
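For orientation, the sketch below is a toy Python version of this nonequilibrium measurement. It uses heat-bath (Glauber-type) single-spin-flip updates at T_c on a small lattice and extracts an effective exponent from the decay of m(t) via m ∝ t^(−1/(8z)); it does not implement the paper's exact flip rate or the three-spin-ratio estimator, and the lattice size, number of runs, and fit window are far smaller than in the paper, so the resulting number is only a rough indication.

import numpy as np

rng = np.random.default_rng(1)

L = 64                 # toy lattice; the paper uses 10^4 x 10^4
RUNS = 4               # the paper averages over 1868 runs
T_MAX = 40             # Monte Carlo time units (one unit = L*L attempted flips)
BETA_C = 0.5 * np.log(1.0 + np.sqrt(2.0))   # exact critical coupling (J = k_B = 1)

m_sum = np.zeros(T_MAX + 1)
for _ in range(RUNS):
    spins = np.ones((L, L), dtype=np.int8)  # fully ordered start, m(0) = 1
    m_sum[0] += 1.0
    for t in range(1, T_MAX + 1):
        for _ in range(L * L):               # random site selection, heat-bath update
            i, j = rng.integers(0, L, size=2)
            field = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                     + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * BETA_C * field))
            spins[i, j] = 1 if rng.random() < p_up else -1
        m_sum[t] += spins.mean()

m = m_sum / RUNS
t_fit = np.arange(10, T_MAX + 1)             # late-time window for the fit
slope = np.polyfit(np.log(t_fit), np.log(m[t_fit]), 1)[0]   # slope = -1/(8 z)
print(f"effective z ~ {-1.0 / (8.0 * slope):.2f}  (rough; the paper finds 2.169)")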
The estimates for the effective exponent based on the ratio of the one-spin to the three-spin correlation, Eq., have smaller statistical errors in comparison to a finite-difference scheme based on m(t) and m(t + 1). Error propagation analysis shows that the latter has an error 5 times larger. Both methods suffer from the same problem that the error in z_eff grows ∝ t. Thus, working with very large t does not necessarily lead to any advantage. In order to use this relation, we need exactly the same flip rate as in the analytic calculations, namely the Glauber rate, Eq.. The continuous-time dynamics corresponds to a random selection of a site in each step. Sequential or checker-board updating cannot be compared directly with the analytic results. However, it is believed that the dynamical critical exponent z does not depend on such details of the dynamics. We note that a Monte Carlo simulation is precisely described by a discrete Markov process, while the series expansion is based on the continuous master equation. However, the approach to the continuous limit should be very fast, since it is controlled by the system size: the discreteness in time is 1/L². We have used a system of 10⁴ × 10⁴, which is sufficiently large. Apart from the above consideration, we also checked finite-size effects. Clearly, when t > L^z, finite-size effects begin to show up. We start the system with all spins up, m = 1, and follow the system to t = 99. For t < 100, we did not find any systematic finite-size effect for L ≥ 10³. So the finite-size effect at L = 10⁴ and t < 100 can be safely ignored. Figure 2 shows the Monte Carlo result for the effective exponent as a function of 1/t. The quantities m, m₃, and ⟨σ₀σ₁⟩ are averaged over 1868 runs, each with a system of 10⁸ spins. The total amount of spin updating is comparable to the longest runs reported in the literature. Based on a least-squares fit from t = 30 to 99, we obtain z = 2.169 ± 0.003. The error is obtained from the standard deviation of a few groups of independent runs. An error estimate based on the residuals of the linear least-squares fit is only half of the above value, which is understandable since the points in Fig. 2 are not statistically independent. In Fig. 2, we also plot a series result for F(t), obtained from the Padé approximant of G₁(u) and Eq.. Substantial deviations are observed for 1/t < 0.2, even though in the 1/t → 0 limit both results are almost the same. This casts some doubt on the reliability of the series analysis. We note that the t → ∞ limit of the function F(t) is invariant under any transformation in t which maps t = ∞ to ∞. Thus, the discrepancy might be eliminated by a suitable transformation in the Padé analysis. VII. CONCLUSION We have computed series for the relaxation of the magnetization and the energy at the critical point. The same method can be used to obtain series at other temperatures or for other correlation functions. The analyses of the series are non-trivial. We may need many more terms before we can obtain results with accuracy comparable to the high-temperature series. We have also studied the relaxation process with Monte Carlo simulation. The ratio of the three-spin correlation to the magnetization is used to give a direct numerical estimate of the logarithmic derivative. This method gives a more accurate estimate for the dynamical critical exponent. ACKNOWLEDGMENT This work was supported in part by an Academic Research Grant No. RP950601. |
Medial gastrocnemius muscle growth during adolescence is mediated by increased fascicle diameter rather than by longitudinal fascicle growth Using a cross-sectional design, the purpose of this study was to determine how pennate gastrocnemius medialis (GM) muscle geometry changes as a function of adolescent age. Sixteen healthy adolescent males (aged 10-19 years) participated in this study. GM muscle geometry was measured within the mid-longitudinal plane obtained from a 3D voxel array composed of transverse ultrasound images. Images were taken at footplate angles corresponding to standardised externally applied footplate moments (between 4 Nm plantar flexion and 6 Nm dorsal flexion). Muscle activity was recorded using surface electromyography (EMG), expressed as a percentage of maximal voluntary contraction (%MVC). To minimise the effects of muscle excitation, EMG inclusion criteria were set at <10% of MVC. In practice, however, normalised EMG levels were much lower. For adolescent subjects of increasing age, GM muscle (belly) length increased due to an increase in the length component of the physiological cross-sectional area measured within the mid-longitudinal plane. No difference in fascicle length was found between ages, but the aponeurosis length and pennation angle increased by 0.5 cm per year and 0.5° per year, respectively. Footplate angles corresponding to externally applied 0 and 4 Nm plantar-flexion moments were not associated with different adolescent ages. In contrast, footplate angles corresponding to externally applied 4 and 6 Nm dorsal-flexion moments decreased by 10° between 10 and 19 years. In conclusion, we found that in adolescents' pennate GM muscle, longitudinal muscle growth is mediated predominantly by increased muscle fascicle diameter. |
Sustainable Retail Sector Growth: Role of State in Infrastructure Development in Recovering Economy Over the past decade, business sentiment in India has grown considerably, centred on the country's economic fundamentals, with businessmen from India and other parts of the world taking risks to explore different business models; the retail sector, the age-old largest employer, has become a buzzword for them. Organised retail in India has moved many steps ahead, making it one of the most lucrative sectors for big business houses to take their chances in. Though the retail sector is no doubt the biggest and most promising field that can give the Indian economy a good boost, it might face its biggest crisis in the future if the basic requirements of the sector are not given priority. Hence, for the sustainable and continuous growth of the retail sector, there is a need for the government to provide adequate infrastructural backup. This research paper examines the role of the state in infrastructure development in a recovering economy for the sustainable growth of the retail sector. In this paper we have used a standard differentiation technique, through which we observed that the role of the state in infrastructure development for sustainable growth of the retail sector is greatest in a recovering economy. |
Investigating Signatures of Phase Transitions in Neutron-Star Cores Neutron stars explore matter at the highest densities in the universe. In their cores, this matter might undergo a phase transition from hadronic to exotic phases, e.g., quark matter. Such a transition could be indicated by non-trivial structures in the density behavior of the speed of sound, such as jumps and sharp peaks. Here, we employ a physics-agnostic approach to model the speed of sound in neutron stars and study to which extent the existence of non-trivial structures can be inferred from existing astrophysical observations of neutron stars. For this, we exhaustively study different equations of state, including those with explicit first-order phase transitions. We conclude that astrophysical information to date does not necessarily require a phase transition to quark matter. While NS matter at densities n ≈ n_sat is composed primarily of nucleons, a change in the degrees of freedom to exotic forms of matter, such as quark matter, might occur at larger densities. Such a change is likely to manifest itself in terms of non-trivial structures in the sound-speed profile. For example, an abrupt first-order phase transition (FOPT) creates a discontinuous drop in the sound speed as a function of density. In particular, in a Maxwell construction for a FOPT, the speed of sound vanishes because the pressure is required to be constant in the mixed phase. Other forms of phase transitions, such as hyperonization or generic second-order phase transitions (SOPTs) such as kaon condensation, can also lead to a sudden reduction in pressure and therefore to a decrease in the speed of sound. In stark contrast to such softening phase transitions, a transition to quarkyonic matter could stiffen the EoS, leading to a sharp peak in the sound speed. Recently, studies have investigated non-trivial structures in the EoS above saturation density, such as bumps in the sound speed or a kink in the EoS. The authors of Ref., using a general extension scheme in the speed of sound constrained by chiral effective field theory (EFT) calculations at low densities, perturbative QCD (pQCD) calculations at large densities, astrophysical observations of pulsars with masses around 2 M_⊙, and the GW observation GW170817, observed a change in the polytropic index of the envelope of all EoS models. By comparing this to expectations for hadronic and quark matter, and by comparing central densities in heavy NSs for selected hadronic EoS models and EoSs constructed within their extension scheme, they concluded to have found evidence for a phase transition to quark matter in the heaviest NSs, appearing at an onset energy density of approximately 700 MeV fm⁻³. On the other hand, the authors of Ref. explicitly considered non-trivial structures in the speed of sound such as kinks, dips, and peaks. This allowed them to construct massive NSs consistent with the mass of the secondary component of GW190814. They concluded that such non-trivial structures are likely present at densities probed in very massive NSs (≈ 2.5 M_⊙). Here, we re-investigate whether non-trivial structures in the speed of sound can be inferred from present astrophysical data. For this, we employ general schemes for the EoS that are able to describe a wide range of density behaviors. We use a systematic approach to model the EoS in the speed of sound vs. density plane, employing a piecewise-linear model for the speed of sound which is a modified version of the scheme of Ref. and similar to Ref.
but with more model parameters. We then group different EoS realizations according to the slope in the speed of sound, the appearance of non-trivial structures, or explicit FOPTs, and analyze the effect of astrophysical NS observations. We compare our results with those presented in the literature and comment on the evidence for phase transitions linked to astrophysical data. Equation of State Model - We use an extension scheme for the EoS in the speed of sound, c_s. At low densities, up to nuclear saturation density n_sat, we fix our EoS to be given by the SLy4 energy-density functional, a phenomenological force that is well calibrated to nuclear-matter as well as finite-nuclei properties and commonly used in astrophysical applications. Beyond n_sat, for each EoS we create a non-uniform grid in density between n_sat and 12 n_sat by randomizing an initial uniform grid with a spacing of n_sat: at each grid point, a density shift drawn from a uniform distribution between −0.4 n_sat and 0.4 n_sat is added, defining the set {n_i}. [Fig. 1 caption: Mass-radius curves for all EoSs that we employ in our work (gray). The samples are divided into three panels corresponding to three EoS groups (see text). Note that for each group only 1000 samples (chosen randomly) out of the total 10000 EoSs are shown. We also show the observational constraints enforced in this work. EoSs that pass all observational constraints at the 90% confidence level are shown in green.] We then sample random values for c_s²(n_i) between 0 and c², with c being the speed of light (we set c = 1 in the following). Finally, we connect all points c_s²(n_i) using linear segments. We sort the resulting EoSs into three groups according to the maximal slope in the speed of sound, c_max = max{dc_s²/dn}: defining the slope at n_sat to be c_sat = dc_s²/dn(n_sat) = 0.55 fm³, group 1 contains EoSs whose maximal slope is less than three times the slope at n_sat, c_max ≤ 3 c_sat; group 2 contains all EoSs with 3 c_sat < c_max ≤ 6 c_sat; and group 3 contains all EoSs with 6 c_sat < c_max ≤ 9 c_sat. These groups are thus mutually exclusive. The upper limit in group 3 allows us to disregard EoSs for which the sound speed strongly oscillates with density, a case which is not observed in any EoS models besides those which incorporate a FOPT. We explicitly construct FOPTs later in the manuscript and analyze their impact on our results. We have generated 10,000 EoSs in each group. The sound-speed profiles c_s²(n) can be integrated to give the pressure p(n) and the energy density ε(n). By inverting the expression c_s² = dp/dε = (n dμ)/(μ dn), we obtain the chemical potential in the interval n_i ≤ n ≤ n_{i+1}, and from it the pressure and the energy density. Finally, we solve the TOV equations for each EoS to determine NS radii (R) and dimensionless tidal deformabilities (Λ) as functions of masses (M); see for instance Ref. for more details. [Fig. 2 caption: EoSs of this work that satisfy observational constraints. We show envelopes for EoSs without FOPT (red) and EoSs with FOPT with different onset density ranges (green and blue). The shaded bands correspond to stable NS configurations, whereas the solid lines show the EoSs extended beyond the maximally massive NS configurations. The black contour depicts the results of Ref., and the gray contour represents the perturbative QCD constraint.] In Fig.
1, we show the resulting mass-radius families together with the astrophysical observations that we consider in this work: the NICER observations of the millisecond pulsars J0030+0451 and J0740+6620, the gravitational-wave observation GW170817, and upper and lower limits on the maximum NS mass, M_TOV. For GW170817, we transformed the estimation of the tidal deformability, which is 222^{+420}_{−138} at the 90% confidence level (CL), into a single constraint on the radius for the mass m = 1.38 M_⊙. This constraint gives an upper value of the radius obtained by running over all our EoSs compatible with GW170817 and by fixing the mass ratio q = m_1/m_2 = 1. This upper radius is 12.9 km. The upper bound of 2.6 M_⊙ is consistent with the mass of the secondary object in the GW190814 event, which is likely a black hole. The EoSs that satisfy all constraints at the 90% CL are shown in green in Fig. 1. Results for the EoS - After the imposition of astrophysical data, the range of radii explored by our EoSs is about 11-13 km. The upper radius limit is set by GW170817 (see Fig. 1), while the lower radius limit is a result of both the NICER and maximum-mass constraints. The envelopes of EoSs that survive all imposed astrophysical constraints are shown in Fig. 2. The filled contours encompass EoSs on the stable NS branch, while the dashed contours encompass the EoSs on the unstable branch as well, i.e. for densities going above the maximum-mass NS, M_TOV. Fig. 2 illustrates that when we represent EoSs for densities above M_TOV we find a kink similar to the one from Annala et al. at ε_kink ≈ 700 MeV fm⁻³. More precisely, we observe that EoSs can be arbitrarily soft above ε_kink since this regime is not probed in stable NSs (with M ≤ M_TOV). At even larger densities, our EoS envelopes broaden compared to Ref., this time allowing stiff EoSs, but these differences have no impact on stable NSs. Note that, unlike Ref., we do not incorporate constraints from pQCD, since the objective of this work is to study the influence of astrophysical data alone. In a future work, we will address the theoretical constraints from pQCD. We also show results for EoSs with an explicit FOPT, built upon each original EoS. It is implemented in terms of three parameters: the transition energy density ε_t, the width of the transition ∆ε, and a constant sound speed after the transition c_s,t². For each EoS, random values are drawn from uniform ranges for ε_t (in MeV fm⁻³), ∆ε (a fraction of ε_t), and c_s,t², and the EoSs are compared with the astrophysical constraints as before. We further separate the EoSs into two subgroups according to the onset energy density ε_t: FOPT-1 and FOPT-2. For the group with the larger onset density, we again observe a kink of the EoS, while the other EoS group shows a smooth behavior. The softening of the EoSs without FOPT observed in Fig. 2 at ε_kink is somewhat similar to the softening of the FOPT EoSs with the corresponding ε_t. It is, therefore, tempting to conclude that this softening is a signal of a phase transition to exotic matter. The polytropic index γ ≡ d log p / d log ε = (ε/p) c_s² was computed in Ref., and a value of γ = 1.75 was chosen to distinguish between hadronic and quark-matter models. In the following, we study the behaviour of the polytropic index γ and the sound speed c_s² for all EoSs shown in Fig. 2. Results for the existence of phase transitions - In Fig. 3, we show sound-speed distributions as a function of energy density, for all EoSs fulfilling the observational constraints previously described.
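As a rough illustration of the extension scheme described above, the sketch below samples a piecewise-linear c_s²(n) on a randomised grid and integrates it into p(n) and ε(n) using the zero-temperature relations dε/dn = (ε + p)/n and dp/dn = c_s² (ε + p)/n. The matching values at n_sat are illustrative SLy4-like numbers, and the grid handling is simplified relative to the actual scheme used in this work.

```python
import numpy as np

def sample_cs2_profile(n_sat=0.16, n_max=12 * 0.16, rng=None):
    """Randomised density grid between n_sat and 12 n_sat with random c_s^2 values."""
    rng = rng or np.random.default_rng()
    grid = np.arange(n_sat, n_max + 1e-9, n_sat)
    grid[1:-1] += rng.uniform(-0.4 * n_sat, 0.4 * n_sat, size=grid.size - 2)
    cs2 = rng.uniform(0.0, 1.0, size=grid.size)      # 0 <= c_s^2 <= c^2, with c = 1
    return grid, cs2

def integrate_eos(grid, cs2_grid, eps_sat=152.0, p_sat=2.5, steps=4000):
    """Integrate deps/dn = (eps+p)/n and dp/dn = c_s^2 (eps+p)/n upward from n_sat.
    Units: n in fm^-3, eps and p in MeV fm^-3.  Matching values are illustrative."""
    n = np.linspace(grid[0], grid[-1], steps)
    eps = np.empty_like(n)
    p = np.empty_like(n)
    eps[0], p[0] = eps_sat, p_sat
    for k in range(1, steps):
        dn = n[k] - n[k - 1]
        mu = (eps[k - 1] + p[k - 1]) / n[k - 1]       # chemical potential: eps + p = mu * n
        eps[k] = eps[k - 1] + mu * dn
        p[k] = p[k - 1] + np.interp(n[k - 1], grid, cs2_grid) * mu * dn
    return n, eps, p

grid, cs2 = sample_cs2_profile(rng=np.random.default_rng(1))
n, eps, p = integrate_eos(grid, cs2)
gamma = (eps / p) * np.interp(n, grid, cs2)           # polytropic index used in the text
```

The TOV integration and the grouping by max{dc_s²/dn} would then be applied to each sampled profile.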
The model-averaged bands are terminated when the NSs enter the unstable branch. The precise value at which this happens, while slightly different for different models, has been fixed for the graphical representation shown in the figures. We separate each group into subgroups according to the density where the speed of sound reaches its maximum, n_cmax: panel (a) shows all the EoSs for which n_cmax is between 0.2-0.5 fm⁻³, in panel (b) n_cmax lies between 0.5-0.8 fm⁻³, and panel (c) shows those for which n_cmax is beyond 0.8 fm⁻³. As one can see, while some groups contain EoSs with clear peaks, astrophysical observations do not require the EoS to have significant structures in the speed of sound, in particular a decrease to c_s² ≈ 1/3; see Fig. 3(c). The more pronounced peaks in Fig. 3(a) are reminiscent of the quarkyonic model, whereas the broader peaks in Fig. 3(b) can potentially be interpreted as weaker phase transitions. We have found that the peaks in Fig. 3 disappear if we change the maximum-mass constraint from 2 M_⊙ < M_TOV < 2.6 M_⊙ to M_TOV > 2.6 M_⊙. Therefore, the upper limit on M_TOV is crucial for the appearance of a peak, as it requires the EoS to soften at sufficiently high densities. These findings are in slight tension with those presented in Ref., which claim that non-trivial structures in the sound speed are likely responsible for the existence of very massive (M ≈ 2.6 M_⊙) NSs. For the behavior below the peak, both the stiffness required by the 2 M_⊙ observations and the NICER observations and the softness required by GW170817 play an important role. This explains the agreement of the individual groups at low densities but their deviations at higher densities. We show the polytropic index in the bottom panels of Fig. 3. We find that basically all bands drop below γ = 1.75 at higher densities, corroborating the findings of Ref.. However, we stress that this is only a necessary but not a sufficient condition for the appearance of quark matter. Indeed, a straightforward comparison of the top and bottom panels in Fig. 3 shows that while γ asymptotically approaches 1 in all cases, the sound speeds exhibit no preferred asymptotic value that can be identified as the conformal quark-matter limit. As mentioned before, the softening at high density is strongly related to the upper boundary for M_TOV in our analysis. Also, contrary to the findings of Ref., we did not find that EoSs with γ > 2 in the maximally massive configuration show a significant softening at lower densities. We finally analyze the impact of explicitly including a FOPT on the sound-speed profiles. The results are shown in Fig. 4, where we compare (left panel) the EoSs without FOPT but with a strong peak (as shown in Fig. 3a) with (middle and right panels) EoSs with FOPTs separated into the two groups of onset density ε_t (FOPT-1 and FOPT-2). When including a FOPT, we always observe the formation of a clear peak in the speed-of-sound profile, located at an energy density inside the region indicated by the dashed brown lines, similar to what we observe in Fig. 4(a). This suggests that if a FOPT occurs in dense matter, it is preceded by a sharp increase in the sound speed beyond the conformal limit, reaching its maximum at approximately 400-500 MeV fm⁻³. However, we stress once again that present astrophysical data does not necessarily imply that the EoS undergoes a FOPT.
Conclusions - We have used a framework based on the speed-of-sound extension to address the question of phase transitions in dense matter in a systematic way. We have classified EoSs agreeing with present astrophysical data based on the behaviour of their sound-speed profiles. We were able to identify EoSs with sound-speed profiles resembling quarkyonic matter, while for others we explicitly included a FOPT. However, our analysis also revealed the presence of (sub)groups of EoSs that have neither a clear structure nor an asymptotic tendency towards the conformal limit. This leads us to conclude that present astrophysical data does not favour exotic matter over standard hadronic matter or a crossover feature. Nevertheless, our analysis also shows that, in the near future, new astrophysical data has the clear potential to be decisive, one way or the other, regarding the long-standing question of phase transitions in dense matter. We believe that the tools developed in this work - classifying the type of sound-speed density dependence in terms of simple properties - pave the way for further research in this direction. Finally, it might be interesting to systematically investigate whether pQCD calculations have an impact on the findings of this work. We aim to do so in the near future, along the lines suggested in Ref.. |
Review of the Omnipod® 5 Automated Glucose Control System Powered by Horizon™ for the treatment of Type 1 diabetes. Type 1 diabetes (T1D) is a medical condition that requires constant management, including monitoring of blood glucose levels and administration of insulin. Advancements in diabetes technology have offered methods to reduce the burden on people with T1D. Several hybrid closed-loop systems are commercially available or in clinical trials, each with unique features to improve care for patients with T1D. This article reviews the Omnipod® 5 Automated Glucose Control System Powered by Horizon™ and the safety and efficacy data to support its use in the management of T1D. |
Mortgages may be underwritten by evaluating a mortgage applicant's credit, collateral, and capacity to pay. A preliminary evaluation of credit is typically performed by mortgage originators (that is, loan representatives for mortgage brokers and mortgage lenders) prior to submitting the loan application to underwriting. This evaluation is typically done by examining a tri-bureau merged credit report, which is created for the mortgage originator by a credit reporting agency by merging the consumer files provided by the three dominant credit bureaus: Experian, TransUnion, and Equifax.
Nearly all mortgage applications list either a single applicant, or two applicants of which one is the primary applicant and the other is the co-applicant. In the case of two applicants, their credit is typically evaluated separately.
Nearly all mortgages are underwritten using credit scores. The credit score used for underwriting each applicant is the mid-score; that is, the median among the three credit scores computed from the three credit bureaus.
In addition to the mid-score, other credit information generally used in underwriting consists of negative payment history on mortgages, the presence of unpaid collection accounts on public records, and the presence of accounts in credit counseling.
Capacity to pay in mortgage underwriting is typically evaluated using debt-to-income ratios. By convention, underwriters do not consider installment loans with 10 months of payments or less remaining and authorized user accounts in the debt-to-income ratios.
Mortgage originators often are required by lenders to obtain a new credit report after the initial evaluation and prior to closing on the mortgage, so that the period between underwriting and closing is not so long such that the information could have changed significantly. As a result, mortgage originators are concerned not just about the mid-score as it is when they obtain a credit report, but also about the potential for it to drop prior to closing the loan.
Authorized user accounts are included in the credit score calculation. They can be easily removed from the credit report and score calculation if they are having a negative effect on the credit score.
Mortgage lenders will generally refuse to underwrite an application where any of the accounts on the credit report is in dispute.
Mortgage originators also find it useful to know whether their applicants have been shopping around for mortgages prior to coming to them. This can be determined from the credit report by the presence of credit inquiries from other mortgage originators.
In addition to credit information, consumer credit files can also contain alerts of various kinds of dangers to the underwriting process, including possible fraud and presence on the OFAC prohibited parties list. Mortgage originators need to pay attention to these alerts for legal and policy compliance.
Unfortunately, all of the foregoing disparate factors can have an impact on the mortgage originator's evaluation of an applicant's credit. However, trying to keep track of this data can be a significant challenge for the mortgage originator, particularly given the varied nature and distributed character of such data. It would therefore be advantageous to provide a system and method capable of analyzing, condensing, and reporting the most relevant portions of data that would affect the mortgage originator's evaluation into a single, human-readable report. |
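A minimal sketch of how the disparate factors discussed above might be condensed programmatically into a single summary; every field name, data structure and message here is a hypothetical illustration rather than part of any actual underwriting system or credit-bureau interface.

```python
from statistics import median
from dataclasses import dataclass, field

@dataclass
class ApplicantCredit:
    # Hypothetical structure for one applicant's merged tri-bureau data.
    bureau_scores: dict                # e.g. {"Experian": 701, "TransUnion": 688, "Equifax": 695}
    monthly_income: float
    debts: list = field(default_factory=list)    # (monthly_payment, months_remaining, is_authorized_user)
    disputed_accounts: int = 0
    alerts: list = field(default_factory=list)   # e.g. ["fraud", "OFAC"]

def mid_score(app):
    """Median of the three bureau scores - the score conventionally used in underwriting."""
    return int(median(app.bureau_scores.values()))

def debt_to_income(app):
    """DTI ratio, excluding installment loans with 10 or fewer payments remaining
    and authorized-user accounts, following the convention described above."""
    counted = sum(pmt for pmt, months_left, is_au in app.debts
                  if months_left > 10 and not is_au)
    return counted / app.monthly_income

def report(app):
    lines = [f"Mid-score: {mid_score(app)}",
             f"Debt-to-income: {debt_to_income(app):.1%}"]
    if app.disputed_accounts:
        lines.append("WARNING: account(s) in dispute - lender may refuse to underwrite")
    for alert in app.alerts:
        lines.append(f"ALERT: {alert} - review required for compliance")
    return "\n".join(lines)
```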
Dynamic electricity tariff definition based on market price, consumption and renewable generation patterns The increasing use of renewable energy sources and distributed generation has brought deep changes to power systems, namely with the operation of competitive electricity markets. With the imminent implementation of microgrids and smart grids, new business models able to cope with the new opportunities are being developed. Virtual Power Players are a new type of player that aggregates a diversity of entities, e.g. generation, storage, electric vehicles, and consumers, to facilitate their participation in the electricity markets and to provide a set of new services promoting generation and consumption efficiency, while improving players' benefits. In order to achieve this objective, it is necessary to define tariff structures that benefit or penalize agents according to their behavior. In this paper a method for determining tariff structures, optimized for different load regimes, is proposed. Daily dynamic tariff structures are defined and proposed on an hourly basis, 24 hours day-ahead, from the characterization of the typical load profile, the value of the electricity market price, and the renewable energy production. |
Optical response of the H2A-LRE satellite We designed the arrangement of the corner cube reflectors to be carried on the H2A-LRE satellite. There are 126 reflectors on its quasi-spherical surface; 66 are made of fused silica and the others are made of BK7 glass. Fused silica remains effective for more than a few decades, while BK7 degrades in a short time. The reflectors are arranged on the body faces so that the satellite can respond to most angles of incidence and so that the optical response does not change significantly after the BK7 reflectors degrade. The optical response from the H2A-LRE satellite was simulated based on the measured positions of all its reflectors. The centre-of-mass corrections for multi-photon systems were calculated to be 215.3 mm while the BK7 was active and 210.5 mm after it totally degraded. For single-photon systems, the corrections were 208.5 mm before degradation and 204.7 mm after degradation. |
The Importance of Integrated Healthcare in the Association Between Oral Health and Awareness of Periodontitis and Diabetes in Type 2 Diabetics. PURPOSE To assess the association of various factors, including education level and oral health, with type 2 diabetics' awareness of periodontitis and of the periodontitis/diabetes relationship, and to evaluate the importance of integrated healthcare in this association. Materials and Methods: 288 type 2 diabetics were evaluated through a validated structured questionnaire about oral hygiene habits, access and attendance to dental treatment, the presence of periodontitis, and previously received information on periodontitis and the periodontitis/diabetes relationship. Descriptive data were explored and both simple and multiple logistic regressions were performed. Results: The average age of participants was 62.24 (±10.93) years, 81.6% had previously been treated for periodontitis, and approximately 70% had never received information on periodontitis and its relationship with diabetes. A higher chance of participants having previously received information regarding periodontitis was associated with more than 8 years of schooling, a daily flossing habit, the presence of periodontitis, and prior treatment for periodontitis (p<0.005). Regarding previously received information about the periodontitis/diabetes relationship, statistically significant associations were observed for more than 12 years of schooling and diabetes diagnosed more than 8 years ago (p<0.05). Conclusion: The vast majority of participants had previously been treated for periodontitis without receiving proper oral health education, which means that access to costly dental treatment is provided while patient education is neglected. The influence of habits and living conditions on previously received information about these diseases was shown; therefore, particular attention to population characteristics is important to make the information accessible to everyone. |
A woman found dead on a bike trail known as the "black path" in Thunder Bay early Tuesday morning appeared to have been badly beaten, according to the passerby who found her body.
Devin Galloway, 23, said he was walking to buy cigarettes about 15 minutes after midnight when he found the woman face down in the middle of a section of the bike path near the Landmark Hotel.
"I seen the big lump on her face. I knew right away she got assaulted," he said.
Thunder Bay police are treating the death as a homicide.
The area where the woman was found has become locally known as the "black path" because of the number of deaths and assaults that have occurred along its stretches.
Marlan Patrick Chookomolin, 25, was found beaten and near death on the same bike path network shortly after midnight on June 25, 2017. He later died in hospital. His death remains unsolved.
In 2010, a 16-year-old girl was found dead in the area of the trail. Two teens were convicted in connection with her death.
Galloway said he thought the woman was unconscious at first and then realized it didn't seem that she was breathing.
"I ran back [home] as fast as I could and called the cops," he said.
He said he returned to the scene with a police officer who turned the woman over and tried to resuscitate her to no avail.
He was interviewed at the scene and re-interviewed by detectives at his home on Tuesday morning.
"I feel bad for the family, if she has any kids or anything," he said. |
Redeveloping a brownfield is usually a balancing act between the public and private sector, often necessitating creative partnerships to acquire land, funding, clean-up, and construction. It is a process that is often regulated through several levels of government, which only recently has become streamlined. Private practitioners have found niche markets within the process, and non-profit agencies have supported efforts when the government could not. This section aims to overview the various levels of government policy and programs in play in our region, the perspective of the private practitioner, and the roles of the regional non-profits. Several local programs are also described.
In 1980, the Federal government passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, 42 U.S.C. §§ 9601-9675), also known as Superfund. (US Code) In addition to providing funds for clean-up, CERCLA essentially gave governments the power to require clean-up costs from whoever they deemed responsible for the environmental contamination. In response, investors and banks, afraid of being held liable for costly environmental clean-ups, halted their support of redevelopment efforts that had a perception, however slight, of being a brownfield.
All levels of government soon realized that CERCLA, and the risk of liability that it created, was preventing redevelopment. In 1995, the US EPA introduced the Brownfields Action Agenda to help clarify the government's role, to make funds available for pilot projects to test redevelopment approaches, and to provide direct assistance to those interested in redeveloping high-risk sites (DeSousa, 2006b).
Around the same time, Illinois and the US EPA Region 5 were at the cutting-edge of a group of state and local governments who began implementing strategies for encouraging remediation and redevelopment. Section 128(a) of CERCLA authorizes a grant program to establish state response programs, and allowed Illinois to create the Site Remediation Program (SRP), a voluntary clean-up program which provides participants the opportunity to receive IEPA review, technical assistance, and "No Further Remediation" (NFR) letters. The NFR letters guarantee that the participant has successfully demonstrated that environmental conditions at their remediation site do not present a significant risk to human health or the environment. Due to a memorandum of agreement with the US EPA, the NFR letter from IEPA serves as a release from liability and/or further responsibilities under the Illinois Environmental Protection Act (IEPA).
In addition, Illinois has adopted a unique Tiered-Approach to Corrective Action Objectives (TACO), which allows for flexibility in site remediation objectives. Rather than demand a one-size-fits-all remediation requirement for all brownfield sites, TACO allows site owners and developers to clean up the site to the appropriate tier based on risk (e.g., a site redeveloped to a daycare center would have to meet higher standards than a site redeveloped to a parking lot). Baseline objectives exist, but time and remediation cost can be saved by cleaning up the site to an appropriate level.
Brownfields Clean-up Grants – direct funding for clean-up activities at certain properties with planned green space, recreational, or other nonprofit uses.
In addition, US EPA also administers the Targeted Brownfields Assessment (TBA) program, which is designed to give technical assistance to municipalities dealing with brownfields, but haven't received US EPA grant funding. The program is focused on minimizing the uncertainties of brownfield remediation. It offers assistance with (Phase 1) screening, including a background and historical investigation and preliminary site inspection, and (Phase 2) environmental assessment, including sampling the site, determining clean-up and redevelopment options, and estimating costs.
Other federal agencies also support brownfields redevelopment, especially the Department of Housing and Urban Development (HUD), through the Brownfields Economic Development Initiative (BEDI). This competitive grant program targets its funding to brownfield redevelopment projects that provide economic stimulus through job and business creation/retention and increases in the local tax base.
State Response Action Program – provides financial and administrative resources for clean-up of environmental contamination which presents a threat and isn't addressed by other federal or state clean-up programs.
All of these programs can be used in concert with several other traditional economic development tools such as CDBG funds or TIF districts. In fact, most successful projects rely upon a mixture of funding sources and government assistance. The federal Brownfields Tax Incentive allows for environmental cleanup costs at properties in targeted areas to be fully deducted in the year incurred, rather than having to be capitalized, as long as they were not responsible for the contamination. The Illinois Environmental Remediation Tax Credit allows a company or an individual to obtain an income tax credit for certain environmental clean-up costs. Municipalities throughout the region may also grant property tax relief to entice developers to redevelop specific brownfield sites. Furthermore, some local governments operate their own grant programs. Lake County is an example of this, as it operates a county-wide Brownfield Fund, established in 2000, which provides another source of grant funds to communities within the county.
The challenges of brownfield redevelopment include the risk of liability, uncertainty of cost and time-frame for environmental testing and clean-up, and the uncertainty of cost and time-frame for government intervention. Several government policies and programs have creatively managed these hurdles, limiting liability with NFR letters, providing a variety of grant and loan options and tax breaks to reduce costs, providing technical assistance and streamlining involvement with government agencies as much as possible to save time. However, each brownfield site is unique, and presents unique challenges. In addition, government efforts are focused on remediation, whereas redevelopment usually falls to the private sector or non-profits. Therefore, the following explores the roles of the private sector and non-profit groups, and how several local governments have established their own brownfields initiatives and pilot programs.
Has your community utilized any of these brownfield redevelopment programs? How successful was it? |
Wonder Woman. She has recently been dubbed the 20th greatest comic book character by Empire Magazine, and ranked fifth in IGN’s 2011 Top 100 Comic Book Heroes of All Time. She stands as one of the icons of the comic book world, and has been featured in dozens of comic titles since her debut in 1941. The character has also found success in other media, appearing in a popular live-action television series in the 70s, as well as several animated series (including Super Friends and Justice League). Now that DC Comics has produced several serious superhero films—Nolan’s Batman trilogy, Snyder’s Superman blockbuster, and presumably an upcoming Justice League film—the question on everyone’s mind is simple: when will we get a Wonder Woman movie?
Unfortunately, the answer isn’t simple. Wonder Woman is and has always been a problem for DC Comics, a company with a history of underutilizing, underwriting and just plain ignoring their biggest female icon. Now on the brink of launching their own motion picture juggernaut, they’re hitting the stumbling block they’ve been tripping over for years. Let’s analyze the problem of Wonder Woman—and maybe even talk some possible answers.
Wonder Woman is a complicated character with a complicated creation story. The brainchild of William Moulton Marston, Wonder Woman was created by the noted psychologist turned comic writer and his wife (some say with the help of their girlfriend too!) to be the epitome of gender equality and woman’s liberation in the 1940s. When she first appeared in All Star Comics #8 in 1941, Princess Diana was an Axis-fighting pro-democratic Amazon, leaving behind the island of her all-female society on Themyscira to help America crush its enemies. Over the years, Wonder Woman evolved as a character from her two-dimensional, somewhat bondage-heavy origins into a staple of DC Comics. She became a symbol for powerful, forward-thinking female characters in comic books as she espoused values like valor and honesty in the name of equality while living in a Man’s World.
It’s that last part that sets Wonder Woman apart from other comic book women. Wonder Woman is a radically feminist character packed in a stars and stripes bathing suit, a superwoman in need of no super man to qualify her. Where many other DC heroines are built on the legacies of popular male counterparts (Batgirl, Supergirl, Hawkgirl), Wonder Woman is a legend all her own. And while many things about the character have changed since her reinvention in 1987 after the Crisis On Infinite Earths storyline, her foundation as a powerful female character with staunchly feminist views has not changed.
That is one of the reasons Wonder Woman has had a difficult path in the comic world. She stands as an unapologetically feminist super heroine in an industry that often relegates women to sidekicks, damsels, and girlfriends. She’s also a character mired in a complicated backstory that is not only supernatural, but also steeped in a mythology that is difficult to translate to modern audiences. All of this has led to difficult years for Wonder Woman comics. One would think that the opportunity for a rewrite would have made the transition to modern comics a little easier. Yet the “reimagined” Wonder Woman featured in DC’s New 52 has done the character no favors.
The modern rewrite of Wonder Woman has suffered, like many of the New 52 characters, from a crisis of identity. She is a stern and often humorless character who sometimes takes a backseat in her own title to a myriad of ancillary characters. On the Justice League she serves as Superman’s new girlfriend, a super-powered relationship that has seen her agency as a character give way to lots of cheesecake cover shots. Even her newest comic line, entitled Superman & Wonder Woman, seems focused on a lot of super-necking rather than comic book adventure. This is what the New 52 has created—a Wonder Woman lost in the backdrop of her own comic book, relegated to the role of arm candy for her super boyfriend.
With this to build on, it’s no wonder Hollywood is having problems with our Princess Diana.
Comic book films have emerged from years of cheesy, almost parodic movies in the 80s and 90s to solidify themselves as legitimate, character-driven films thanks to good direction and blockbuster budgets. So it’s no wonder that Wonder Woman makes a dangerous gambit for DC Comics. Nobody wants to be the one to do the film incorrectly—whatever that means—and present the studio with a flop starring one of its major characters. Wonder Woman stands as an enigma for studios wondering how to properly package a pro-feminist, butt-kicking, Amazon warrior. Focusing on her strong equality message risks alienating one audience, but favoring sex appeal over substance risks betraying the essence of the character. And you could get laughed off screen altogether, like the atrocious 2011 NBC Wonder Woman pilot. It’s a catch-22 that has kept the film in limbo for years.
And the failures of other comic book films with female leads (Elektra, Catwoman) are used as examples to argue that superheroines wouldn’t draw an audience to make the effort worthwhile. There’s also the problem of the vanishing movie heroine in today’s cinema. Even outside of comic book films, fewer movies feature women as their lead characters every year. Go to a cineplex and you’ll find women taking a backseat everywhere. But you couldn’t do that to Wonder Woman and get away with it—her character (and fanbase) demand a starring role worthy of her.
So scripts have come up and been rejected. Directors have been attached to potential projects. The CW announces a potential TV series for Wonder Woman, and then we don’t hear anything again. And people speculate on who would make the “perfect” actress for Diana in the films, critically viewing Hollywood actresses for everything from acting chops, fighting capability, and of course the ability to fill out the spangled bathing suit. And while Zack Snyder has hinted he’d love to helm a project about Wonder Woman, the debate continues.
But is the problem of Wonder Woman so difficult? Not really—because it’s been solved before.
One only has to turn to DC’s animated films division to see the Hollywood problem answered. DC has been putting out well-written animated versions of Wonder Woman for years now, including her portrayal in the acclaimed Justice League and Justice League Unlimited cartoon series. There was even a fantastic 2009 Wonder Woman animated movie with Keri Russell voicing Diana alongside Nathan Fillion as Steve Trevor. These animated portrayals were able to capture the essence of Wonder Woman and provide quality comic book entertainment by adhering to one basic rule: they never forgot where they came from.
Wonder Woman is a comic book character with all the grandeur and earnestness that the medium holds. The animated versions have managed to embrace that characterization without getting too preoccupied with making the films realistic, which frees them up to be intense, fun, well-done stories. They don’t tiptoe around the fantastic, as live-action comic films seem to do, and transcend the hemming and hawing about what makes these films super so they can focus on just being good stories. Christopher Nolan understood that when he adapted Batman, opting to mix the modern sensibilities of a live-action film with a thoughtful homage to the comic book stories that made fans love the Dark Knight. Hollywood could take a lesson from this—or else just go and hire the animated Wonder Woman writers and be on their way.
And as for the controversy over who would portray the Amazon princess, there are plenty of talented actresses in Hollywood waiting patiently for a film that will finally put them front and center again. There is never going to be a “perfect” Diana because, in truth, she’s created as a comic book ideal. But Hollywood is full of capable women who could see the character done well. Names like Eva Green, Michelle Ryan, Katrina Law and Bridget Regan come to mind, or even an outlier like MMA fighter Gina Carano could fill the princess’ bracelets. Each of these women and plenty more could stand as great choices for one incarnation of Diana or another—if given half a chance.
So will we see Wonder Woman on the big screen sometime soon? I don’t doubt that we will. If DC wants to make a Justice League movie, they need a Wonder Woman. The question is, will they take a shortcut and make her just another member of the ensemble cast, or will they have the bravery to treat the character like they would her male compatriots in the Big Three and give her a vehicle for her own story? That remains to be seen.
Shoshana Kessock is a comics fan, photographer, game developer, LARPer and all around geek girl. She’s the creator of Phoenix Outlaw Productions and ReImaginedReality.com. |
Reduction of sleep bruxism using a mandibular advancement device: an experimental controlled study. PURPOSE The objective of this experimental study was to compare the effect on sleep bruxism and tooth-grinding activity of a double-arch temporary custom-fit mandibular advancement device (MAD) and a single maxillary occlusal splint (MOS). MATERIALS AND METHODS Thirteen intense and frequent bruxors participated in this short-term randomized crossover controlled study. All polygraphic recordings and analyses were made in a sleep laboratory. The MOS was used as the active control condition and the MAD was used as the experimental treatment condition. Designed to temporarily manage snoring and sleep apnea, the MAD was used in 3 different configurations: without the retention pin between the arches (full freedom of movement), with the retention pin in a slightly advanced position (< 40%), and with the retention pin in a more advanced position (> 75%) of the lower arch. Sleep variables, bruxism-related motor activity, and subjective reports (pain, comfort, oral salivation, and quality of sleep) were analyzed with analysis of variance and the Friedman test. RESULTS A significant reduction in the number of sleep bruxism episodes per hour (decrease of 42%, P <.001) was observed with the MOS. Compared to the MOS, active MADs (with advancement) also revealed a significant reduction in sleep bruxism motor activity. However, 8 of 13 patients reported pain (localized on mandibular gums and/or anterior teeth) with active MADs. CONCLUSIONS Short-term use of a temporary custom-fit MAD is associated with a remarkable reduction in sleep bruxism motor activity. To a smaller extent, the MOS also reduces sleep bruxism. However, the exact mechanism supporting this reduction remains to be explained. Hypotheses are oriented toward the following: dimension and configuration of the appliance, presence of pain, reduced freedom of movement, or change in the upper airway patency. |
Newly Emerged Populations of Plasmopara halstedii Infecting Rudbeckia Exhibit Unique Genotypic Profiles and Are Distinct from Sunflower-Infecting Strains. The oomycete Plasmopara halstedii emerged at the onset of the 21st century as a destructive new pathogen causing downy mildew disease of ornamental Rudbeckia fulgida (rudbeckia) in the United States. The pathogen is also a significant global problem of sunflower (Helianthus annuus) and is widely regarded as the cause of downy mildew affecting 35 Asteraceae genera. To determine whether rudbeckia and sunflower downy mildew are caused by the same genotypes, population genetic and phylogenetic analyses were performed. A draft genome assembly of a P. halstedii isolate from sunflower was generated and used to design 15 polymorphic simple sequence repeat (SSR) markers. SSRs and two sequenced phylogenetic markers measured differentiation between 232 P. halstedii samples collected from 1883 to 2014. Samples clustered into two main groups, corresponding to host origin. Sunflower-derived samples separated into eight admixed subclusters, and rudbeckia-derived samples further separated into three subclusters. Pre-epidemic rudbeckia samples clustered separately from modern strains. Despite the observed genetic distinction based on host origin, P. halstedii from rudbeckia could infect sunflower, and exhibited the virulence phenotype of race 734. These data indicate that the newly emergent pathogen populations infecting commercial rudbeckia are a different species from sunflower-infecting strains, notwithstanding cross-infectivity, and genetically distinct from pre-epidemic populations infecting native rudbeckia hosts. |
Application of Data Mining for high accuracy prediction of breast tissue biopsy results In today's world, where awareness of breast cancer is being promoted at a large scale, we still lack the diagnostic tools to suggest whether a person is suffering from breast cancer or not. Mammography remains the most significant method of diagnosing someone with breast cancer. However, mammograms sometimes are not definite, due to which a radiologist cannot pronounce a decision based solely on them and has to resort to a biopsy. This paper proposes a data mining technique based on an ensemble of classifiers following data pre-processing, to predict the outcomes of the biopsy using the features extracted from the mammograms. The results achieved in this paper on the Mammographic Masses dataset are highly promising, with an accuracy of 83.5% and an ROC (Receiver Operating Characteristic) area of 0.907, which is higher than existing approaches. |
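The abstract does not spell out which classifiers or preprocessing steps make up the ensemble, so the following is only a hedged sketch of the general approach on the UCI Mammographic Mass data (whose attributes are BI-RADS assessment, age, shape, margin and density); the file name, column names and classifier choices are assumptions made for illustration.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical file and column names for the UCI Mammographic Mass dataset.
df = pd.read_csv("mammographic_masses.csv", na_values="?")
X = df[["BI-RADS", "Age", "Shape", "Margin", "Density"]]
y = df["Severity"]                     # 0 = benign, 1 = malignant

# Soft-voting ensemble of three generic classifiers after imputation and scaling.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
model = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(), ensemble)

print("accuracy:", cross_val_score(model, X, y, cv=10, scoring="accuracy").mean())
print("ROC AUC :", cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean())
```

The reported figures of 83.5% accuracy and 0.907 ROC area come from the paper's own pipeline; a sketch like this would estimate its metrics by cross-validation in the same spirit.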
Cannon Hill Community Links to be Brisbane's first new public golf course in 70 years
A weekend on the fairways will be possible for more golfers in Brisbane with the first new public course in more than 70 years set to open by 2018.
Peter Castrisos, chairman of Golf Queensland, said the 18-hole course would see the sport reach new heights in Queensland.
A former tip site at Cannon Hill has been cleared in preparation for the course, which will be the only public golf course south of the Brisbane River.
Brisbane City Council recently formalised a land swap deal with developers BMD Group.
The Cannon Hill Community Links project will also include new housing and conservation areas on 125 hectares between Creek Road, Fursden Road and Bulimba Creek.
Mr Castrisos said the new course would enable more golfers of all skill levels the chance to spend time on the greens.
"There are only two other public golf courses in Brisbane — Victoria Park and St Lucia," he told 612 ABC Brisbane's Tim Cox.
"It's great to see another one on the scene as we haven't seen one in 70 years ... that's been a long time.
"The more golf courses we have the better chance we have to get people playing this great game."
It's all about access
Mr Castrisos said the need for the new course was huge, with thousands of golfers already registered in Queensland.
"There's 70,000 registered golfers in Queensland that play at members golf clubs, but there's 140,000 people who play the game of golf and need access to courses too," he said.
"We have in Brisbane many members golf courses ... they pay a lot of money to be members and therefore they demand the right to play when they want to play.
"The average hacker isn't able to get on those courses, so they need public golf courses to have access to the game.
"The more people we can get onto golf courses the more likely they are to join a golf club and support the industry."
Mr Castrisos said the new course had been a long time coming.
"These facilities have to be built by councils and they are reluctant to spend the money as they are not big money-earning ventures," he said.
"It's like every public facility, they have to wait in a queue and it's taken many years to get it through council."
Once built, Mr Castrisos said the Cannon Hill course would be highly utilised.
"There's a huge market of people looking for access to a golf course who don't want to be members of a club but want to take part in the great game," he said.
|
Physical Activity Level and Its Barriers among Patients with Type 2 Diabetes Mellitus, Qassim Province, Saudi Arabia Background: Diabetes mellitus is the most common chronic disease, affecting millions of people worldwide with a rapidly growing prevalence. The World Health Organization has announced that physical inactivity is one of the top 10 global principal causes of morbidity and mortality worldwide. On the other hand, physical activity is well known to be a cornerstone of managing diabetes mellitus. Objectives: To estimate the physical activity level among patients with type 2 diabetes mellitus in Qassim province and to explore its common barriers. Methods: We conducted a cross-sectional study among type 2 diabetic patients who attended the chronic diseases clinics of 10 primary health care centers in Buraidah city during 2021. We used two validated self-administered questionnaires: the Global Physical Activity Questionnaire (WHO) and the Barriers to Being Active Quiz (CDC). Results: We surveyed 357 type 2 diabetic patients. While the majority of patients (90%) recognized the importance of physical activity for health, only 29% of the participants met WHO recommendations for physical activity. Males and females had a slight difference in their activity level, 72% and 69% respectively. The top reported barriers to physical activity were lack of willpower, lack of resources and lack of energy. The most commonly reported domain of physical activity was recreational activity. Conclusion: Type 2 diabetic patients in Qassim province have a low physical activity level. We recommend more focus on physical activity by all diabetic care professionals. Further in-depth studies to evaluate prevalence and other relevant factors are suggested. |
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: This is Democracy Now! I’m Amy Goodman, with Juan González.
JUAN GONZÁLEZ: We turn now to a scathing new report on poverty in the United States that found the Trump administration and Republicans are turning the U.S. into the, quote, “world champion of extreme inequality.” Philip Alston, the United Nations special rapporteur on extreme poverty and human rights, announced his findings after conducting a two-week fact-finding mission across the country, including visits to California, Alabama, Washington, D.C., and Puerto Rico. Alston also warned the Republican tax bill would transfer vast amounts of wealth to the richest earners while making life harder for the 41 million Americans living in poverty.
AMY GOODMAN: Among other startling findings in Alston’s report, the U.S. ranks 36th in the world in terms of access to water and sanitation. Alston discussed the report’s findings Friday with independent Senator Bernie Sanders of Vermont, who focused on economic inequality during his presidential campaign.
Well, for more, Philip Alston joins us here in our New York studio. He’s the United Nations rapporteur on extreme poverty and human rights. He is also a professor at NYU Law School, New York University.
Welcome to Democracy Now!, Professor Alston. So you just came back from this tour. Congress is poised to vote on this tax bill. Your assessment and why you’re weighing in as the U.N. special rapporteur on extreme poverty?
PHILIP ALSTON: Well, my job is to try to highlight the extent to which people—to which the civil rights of people who are living in extreme poverty are jeopardized by government policies. What I see in the United States now is not just a tax reform bill, but a very clear indication by government officials with whom I met, by the Treasury, in their analysis, that this is going to be funded in part by cuts to welfare, to Medicare, Medicaid. And so what you’ve got is a huge effort to enrich the richest and to impoverish the poorest. That is going to have very dramatic consequences.
JUAN GONZÁLEZ: And from what you saw, how did race and poverty overlap on this issue?
PHILIP ALSTON: There’s a very complex relationship, actually, between race and poverty. First, it is true that if you are African-American, if you’re Hispanic, your situation, in terms of poverty, is often going to be pretty bad. But there’s also a racialized discourse, where if you speak to policymakers, they will say, “Yes, we’ve got to cut back on welfare, because those black families out there are really ripping off the system.” And so what they do is to try to get some sort of race warfare going almost, that white voters think, “Yeah, I’m not going to be ripped off by the blacks and the Hispanics.” But, of course, the terrible thing is that the cuts are actually nondiscriminatory. In other words, they impact the poor whites every bit as much as the poor people of color. So, the race dimension is deeply problematic.
AMY GOODMAN: I wanted to turn to Benita Garner, a mother living in poverty, who’s participating in a program through a nonprofit called LIFT. She landed seasonal work with UPS, but worries what will happen when the job ends.
BENITA GARNER: It’s scary. It’s stuff that you don’t really think about, but it’s scary. It’s just that I know it can happen. I’m like really not looking forward to the end of the month, because I’m like, ahh, you know, right now you’re getting money, but then it’s like it’s going to shut down again. So, you always have to constantly think every day, “What’s my next move?”
AMY GOODMAN: She spoke at your launch of your report. Talk about why Benita is so important.
PHILIP ALSTON: Well, because there are millions of people in exactly that position. One of the things that the current administration is pushing is that we need to get people off welfare and into work. First of all, it’s not clear that there are the jobs for people with those sort of skills. But secondly, those who do get the jobs that are available are going to end up in Benita’s situation. I spoke with a lot of Walmart employees who are working full-time, but who are still eligible for and totally dependent upon food stamps. So, working 35 hours a week at Walmart is not enough to make a living out of. And there’s a much bigger problem in the U.S., of course: The precariousness of employment, as we move to the gig economy and so on, means that there are going to be ever more people in Benita’s situation, where, yes, you get a job; yes, the benefits are cut; but you can’t survive.
JUAN GONZÁLEZ: How does the U.S. compare to other countries in the world? I think most Americans would be shocked. I mean, we mentioned in the lede to this piece, 36th in water quality in the world? Talk about comparing it to other major advanced—especially advanced industrial countries.
PHILIP ALSTON: Well, the United States is, of course, one of the very richest countries in the world. But all of the statistics put it almost at the bottom—doesn’t matter what it is. Whether it’s child mortality rates, whether it’s the longevity of adults, whether it’s the degree of adequacy of healthcare, the United States is very close to the bottom on all of these. What’s really surprising is that when I go to other countries, the big debate is that “We don’t have the money. We can’t afford to provide basic services to these people.” And yet, in the United States, they’ve got a trillion or a trillion and a half to give to the very rich, but they also don’t have any of the money to provide a basic lifestyle that is humane for 40 million Americans.
AMY GOODMAN: Professor Alston, you were in Alabama right around the time of the December 12th special election between Doug Jones and Roy Moore. And why were you talking about poverty at that time? Why did you see it as so significant, weighing in in this election?
PHILIP ALSTON: I didn’t weigh in in the election. I was very careful not to do that.
AMY GOODMAN: No, why poverty weighs in.
PHILIP ALSTON: Oh, right. Well, I think that—I mean, one of the best quotes I got during my two-week visit was from an official in West Virginia, where voting rates are extremely low. And I said, “Why is it that no one votes in West Virginia?” And the response was: “Well, you know, when people are very poor, they lose interest. They just don’t believe there’s any point.” And, of course, one begins to wonder if that’s actually a strategy, that you make people poor enough, you make them obsessed with working out where their next meal is going to come from, they’re not going to vote, and so you can happily ignore them.
AMY GOODMAN: You talked about a massive sewage crisis in rural Alabama and also talked about meeting people there, like Pattie Mae Ansley McDonald, who told you about how her house was shot up by white neighbors when she voted in 1965 after the Voting Rights Act became law.
PHILIP ALSTON: Right. That was a very touching meeting. I was meeting with people who really are struggling to make it. But the main focus was actually on water and sanitation. What’s shocking is that in a country like India today, there’s a huge government campaign to try to get sewerage to all people, make it available. In Alabama and West Virginia, where I went, I asked state officials, “So, what’s the coverage of the official sewerage system?” “I don’t know.” “Really? So, what plans”—
AMY GOODMAN: That’s what they said to you, they didn’t know.
PHILIP ALSTON: “So, what plans do you have then for extending the coverage, albeit slowly?” “Uh, none.” “So, do you think people can live a decent life if they don’t have access to sewerage, if the sewage is pouring out into the front garden, which is what I saw in a lot of these places?” “That’s their problem. If they need it, they can buy it for themselves.” In Alabama, where the soil—
AMY GOODMAN: The authorities said.
PHILIP ALSTON: Yeah. In Alabama, where the soil is very tough, it can cost up to $30,000 to put in your own septic system.
JUAN GONZÁLEZ: I wanted to ask you about Puerto Rico, which is grappling with a $74 billion debt and as much as $100 billion in storm damage. The Republican tax bill includes a 20 percent excise tax on goods produced there. This is Congresswoman Nydia Velázquez, originally from Puerto Rico, talking about that.
REP. NYDIA VELÁZQUEZ: Now, with the potential passage of the Republican tax scam bill, Puerto Rico faces an economic hurricane. … Under this bill, American subsidiaries operating in Puerto Rico will now face a 20 percent tax when they move their goods off the island. If this becomes law, you can expect to see more than 200,000 manufacturing jobs disappear from the island. And the government of Puerto Rico could lose one-third of its revenue.
JUAN GONZÁLEZ: You visited Puerto Rico during your trip. The idea that even with an unemployment rate that hovers around 16, 17 percent, depression levels, that there could be even greater unemployment as a result of this tax bill?
PHILIP ALSTON: Puerto Rico is getting what it deserves. In other words, you don’t have any votes in the Congress, you don’t get anything. There is just no willingness on the part of members of Congress to be seen to be giving anything to Puerto Rico. The result is, you’ve got extremely high levels of people on welfare, but those welfare payments are significantly lower than they are on the mainland. You’ve got a whole series of measures that are being proposed, in the tax bill and elsewhere. And you’ve got the PROMESA, the fiscal oversight board, poised to really impose a draconian austerity package. So, the situation is grim. But when you don’t have democratic representation, it’s very hard to defend yourself.
AMY GOODMAN: The New York Times has a piece, “The Next Crisis for Puerto Rico: A Crush of Foreclosures.” And it says, “Now Puerto Rico is bracing for another blow: a housing meltdown that could far surpass the worst of the foreclosure crisis that devastated Phoenix, Las Vegas, Southern California and South Florida … If the current numbers hold, Puerto Rico is headed for a foreclosure epidemic that could rival what happened in Detroit, where abandoned homes became almost as plentiful as occupied ones.” Professor Alston?
PHILIP ALSTON: So, what you’ve got is, the very poor—and I visited a lot of areas where people have no electricity still and are living in rubble, essentially—they won’t be going anywhere. They’ll be staying in their homes. Those who are reasonably well-off will be fleeing, because they can’t make it in Puerto Rico. They don’t have the electricity. They don’t have the economic support. And so there’s no point in staying. That housing market will then be terribly uninteresting for investors and others.
JUAN GONZÁLEZ: Could you talk about the positive side of your report, those areas that you saw communities organizing themselves and trying to deal with their problems directly?
PHILIP ALSTON: Well, I did—and particularly in Puerto Rico, it has to be said, I visited some of the cooperatives where people are really trying to reclaim the land in San Juan, trying to dredge the old canal that’s grown over. I met with a lot of people living around power plants who are very severely impacted by the daily flow of coal ash and so on. They’re really organized. They’re really focused. And that was terrific.
I think, in many other parts of the country, as well, what I saw was, you know, community health collectives in West Virginia. I saw the homeless in Skid Row in Los Angeles and so on. There is a real element of organization. But the bottom line, unfortunately, is that if you don’t have essential government services being provided, these people can’t do it on their own.
AMY GOODMAN: Well, we want to thank you very much for being with us. In 30 seconds, can you summarize—I mean, you’re not from this country. You’ve lived here for a long time. But you now take this different kind of tour, and you’re the special rapporteur on extreme poverty. Your thoughts on the United States after this two weeks?
PHILIP ALSTON: Well, the United States is unique. First of all, it doesn’t recognize what we call social rights at the international level—a right to healthcare, a right to housing, a right to food. The United States is unique in that, saying these are not rights.
Second, the issue with elimination of poverty always is around resources: “We don’t have the money.” The United States, again, uniquely, has the money. It could eliminate poverty overnight, if it wanted to. What we’re seeing now is the classic—it’s a political choice. Where do you want to put your money? Into the very rich or into creating a decent society, which will actually be economically more productive than just giving the money to those who already have a lot?
AMY GOODMAN: And what the tax bill does?
PHILIP ALSTON: That’s what the tax bill does, from what I’ve seen.
AMY GOODMAN: Philip Alston, United Nations special rapporteur on extreme poverty and human rights, a professor at New York University Law School, just completed a two-week tour examining extreme poverty and human rights in the U.S.
When we come back, the seven banned words, according to staff at the CDC, the Centers for Disease Control. Is this true that people at the CDC, when writing up budgets, are not supposed to use words like “transgender” or “science-based” or use the word “fetus”? Stay with us. |
Characteristics of intense pulsed heavy ion beam by bipolar pulse accelerator We have developed a new type of pulsed ion beam accelerator, named the bipolar pulse accelerator, to improve the purity of the intense pulsed ion beam. The system utilizes a magnetically insulated acceleration gap and was operated with the bipolar pulse. A coaxial gas puff plasma gun, placed inside the grounded anode, was used as the ion source. Source plasma (nitrogen) with a current density of ≈30 A/cm2 and a pulse duration of ≈1.0 s was injected into the acceleration gap. When the bipolar pulse, with a voltage of about ±110 kV and a pulse duration of about 70 ns, was applied to the drift tube, the ions were successfully accelerated from the grounded anode to the drift tube in the 1st gap by the negative pulse of the bipolar pulse. A pulsed ion beam with a current density of 70 A/cm2 and a pulse duration of ≈50 ns was obtained 50 mm downstream from the anode surface. The energy spectrum of the ion beam was evaluated with a magnetic energy spectrometer. The ion energy was in reasonably good agreement with the acceleration voltage, i.e., the 1st (negative) pulse voltage of the bipolar pulse. |
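For orientation, here is a small back-of-envelope sketch (our own, under the simplifying assumptions of singly charged N+ ions and a rectangular pulse shape) relating the figures quoted above: the ion energy expected from the roughly 110 kV negative pulse, and the per-pulse ion fluence implied by a 70 A/cm2, roughly 50 ns beam.

```python
# Back-of-envelope sketch (assumptions: singly charged N+ ions, rectangular
# pulse). Not part of the reported experiment; values are taken from the abstract.

ELEMENTARY_CHARGE = 1.602176634e-19  # C

def ion_energy_keV(accel_voltage_kV, charge_state=1):
    # Energy gained crossing the gap: q * V (numerically keV when charge is in units of e).
    return charge_state * accel_voltage_kV

def ions_per_cm2(current_density_A_cm2, pulse_duration_s, charge_state=1):
    # Charge delivered per unit area divided by charge per ion.
    return current_density_A_cm2 * pulse_duration_s / (charge_state * ELEMENTARY_CHARGE)

if __name__ == "__main__":
    print(f"Expected N+ energy:  {ion_energy_keV(110):.0f} keV")
    print(f"Fluence per pulse:   {ions_per_cm2(70, 50e-9):.2e} ions/cm^2")
```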
Any guy with serious grilling skills knows that the best BBQ depends on the proper equipment. To raise your grilling game to the next level consider these eight great grilling accessories.
Any guy with serious grilling skills knows that the best BBQ depends on having the proper tools and equipment. Unfortunately, many backyard cookout chefs step up to the flames armed with little more than a fork and a spatula. Where's the fun in that?
One of the great benefits of owning a grill--gas or charcoal--is that it provides the perfect excuse to buy some really cool tools.
For starters, you definitely need a set of long-handle barbecue utensils, some sort of side table or carrying tray and a work light for nighttime grilling. Clamp-on lights are popular, but I prefer a floor-standing model, which you can move around more easily. A motorized rotisserie is handy for roasting whole birds and large pieces of meat. And if you enjoy shish-kebab, invest in a set of steel skewers. However, to raise your grilling game to the next level--and to increase the chances of creating memorable meals--consider these eight great grilling accessories.
You gotta love the name of this cleverly designed barbecue fork that looks like a weapon from Braveheart. It has two long, stout tines--as any respectable BBQ fork should have--and a 13-in.-long hardwood handle. However, what makes this fork nasty (in a good way) is that protruding from each side of the main tines are two sharpened spikes designed for flipping over slabs of meat with just a twist of the wrist. This fork just might be the perfect gift for the carnivore in your life.
This versatile grill-top accessory takes the frustration out of cooking small and delicate foods, such as vegetables, seafood and fruit, that have a tendency to fall through cooking grates. It's also useful for searing all cuts of meat. The split-personality pan has a solid, ridged griddle on one side (for steaks, chicken and fish) and a perforated grill (for shellfish and vegetables) on the other. It measures 12 by 16 in., and is finished with nonstick porcelain enamel for easy cleaning.
As its name so proudly proclaims, the VCR holds a whole chicken--up to 12 pounds--upright as it roasts. This heavy-duty, stainless-steel skewer has a center reservoir that holds 12 ounces of beer, which adds flavor and moisture to the chicken as it cooks. (You could also use water, but why?) This particular beer chicken roaster stands out from the rest thanks to an integral 12-in.-dia. cooking pan, which allows you to simultaneously fire-roast vegetables.
It's a carpenter's tool belt designed for backyard grillers. Think Norm Abram meets Bobby Flay. Made of breathable, durable nylon, the belt has three pouches and comes equipped with a stainless-steel spatula and tongs. The insulated third pouch holsters a 12-ounce beverage can or bottle. The belt's quick-snap buckle adjusts to a generous 60 in. wide, and the price includes the personalized touch of a capital letter monogrammed onto one of the pouches.
Unlike any meat-branding tool I've ever seen, this kit comes with two branding irons and an entire alphabet of capital letters--plus an ampersand symbol--so you can personalize grilled meats for your guests. Simply clamp up to three letters into the branding iron, then set it over the grill. Once it's hot, press the iron against the cooked meat to brand it. Is this a great country or what?
This high-tech temperature taker pages you with audible updates as meat cooks on the grill. Simply insert the stainless-steel probe into the meat, select the meat type (beef, lamb, veal, hamburger, pork, turkey, chicken or fish), then choose how you want it cooked (rare, medium rare, medium or well done). Clip the remote receiver onto your belt and walk away to mingle with guests or work in the yard while dinner cooks. The wireless transmitter can send temperature readings up to 300 ft. away. A voice prompt will alert you when the meat is "almost ready," and then again when it's "ready." The transmitter requires two AA batteries; the receiver runs on two AAAs (batteries not included). If you're looking for something a little less sophisticated--in other words, less expensive--check out the Chef's Thermometer Fork ($30), which has an instant stab-it-and-read-it digital temperature display.
Go ahead and laugh, but the next time you singe your knuckles reaching over a hot grill, you'll wish you were wearing this 17-in.-long breakthrough in oven-mitt technology. Made of soft, pliable silicone, it's heat-resistant to 600 F for superior protection from high temperatures, flames and steam. The mitt's interior is lined with soft, stay-cool fabric to help eliminate sweating. Its grooved-silicone gripping surface offers greater flexibility and improved slip resistance, and the extra-long cuff protects your wrist and forearm. The Good Grips mitt fits on the right or left hand, and has a convenient hanging loop and an embedded magnet for attachment to metal surfaces.
Forever end the frustration and humiliation of running out of gas in the middle of a big cookout--because there's nothing sadder than a partially seared sirloin slowly turning cold. Simply thread the gauge onto your gas cylinder, and then connect the grill's regulator. Both connections are hand-tightened, so no wrench is needed. The needle gauge and color-coded display window give clear indications of the gas level: Green (plenty of fuel), Yellow (early warning) and Red (running on fumes). For improved accuracy, the gauge even compensates for the air temperature by use of temperature bands: Hot, Cool, and Cold. (Propane vaporizes at different rates depending on air temperature, and that wreaks havoc with lesser gas gauges.) For a precise reading, simply match the band with the day's approximate air temperature. Note that this particular gauge fits all propane tanks (up to 100 pounds) manufactured after 1995. |
JEDDAH — Swiss-Belhotel International (SBI) has announced its debut into the Holy City of Makkah with the signing of Swiss-Belhotel Al Aziziya Makkah in Saudi Arabia. Featuring 525 rooms, the upscale 4-star property is owned by Cardamom International Property Management LLC and is superbly located in the rapidly expanding hotel hub of Makkah only a few minutes' drive away from Al-Masjid Al-Haram in Al Aziziya. The hotel is expected to be ready for opening by the second quarter of 2018.
Included in the hotel’s 525 rooms are a good mix of rooms for groups and families tailored to meet the market needs. The hotel will also feature a restaurant, a lobby cafe and extensive banquet facilities. |
Nationalizing Realism in Dermot Bolger's The Journey Home Dermot Bolger's third novel, The Journey Home, emerged in 1990 in the author's home country of the Republic of Ireland, yet took 18 years to be republished in the United States in 2008. The novel's graphic depiction of an array of abuses, including sexual, physical, political, and economic, not only illustrated the author's intention to shock the reading public regarding the government's conscious disregard for these struggles, but its publication also elucidated the aftereffects of exposing the differences between experiences with abuse and the ways in which both national and socio-economic processes mediate their interpretations. In this paper, I will argue that Bolger's illustration of corruption and abuse displays not only a contrast between the public and those who represent their image, but also how socioeconomic paradigms are used to mediate perceptions of what constitutes reality. |
As used herein, the term “membrane switch” means a switch including a plurality of conductive regions with at least one of the conductive regions located on a layer of flexible material.
Current membrane switches may include a first conductive region on a first layer of material aligned over a second conductive region on a second layer of material. A flexible material may be used for one or both of the first and second layers. One of the conductive regions may include interdigitated fingers forming a pair of terminals for the switch. Normally, the conductive regions do not make contact with each other and the switch is open. When a user presses one of the conductive regions such that the two conductive regions touch, a circuit is completed across the interdigitated fingers to close the switch. A spacer material is typically located between the two layers to prevent inadvertent contact of the conductive regions and switch closure. Apertures in the spacer material leave exposed the conductive regions, so they may be selectively engaged to close the switch. The thickness of the spacer material is typically in the range of 0.006 inches to 0.012 inches.
Reducing the thickness of the spacer material may improve the feel of the switch to the user. For example, by reducing the thickness of the spacer material, the touching of a conventional membrane switch to close the switch may feel to the user more like touching of a capacitive touch switch, which is a higher-end, more expensive switch. However, it is currently impractical to reduce the spacer material thickness in a membrane switch below the currently-employed range, because in doing so, one would cause inadvertent switch operation due to temperature and/or pressure gradients.
Thus, there was a need to overcome these and other limitations in membrane switches, whether the improvements thereof are employed in membrane switches or in any other switch design. |
Malcolm Gladwell’s essay in The New Yorker misses some of the key components of activism. Picketing and twittering have the exact same effect when it comes to political activism. Stating one’s beliefs through Twitter, Facebook or marching in the streets, though an important part of activism, cannot create social change on its own. Most of the modern discussion of activism ignores the process that is necessary to create social change. Social change comes through exposing one’s humanity to one’s oppressor while acknowledging the common humanity one’s oppressor shares with oneself. Social change comes when we are able to see the truth of our common humanity. Gandhi called this satyagraha, or soul force.
Marching in the streets during the civil rights movement worked because when police released dogs and fire hoses on innocent black children, who were marched at the front of the picket lines, the world could not escape the humanity of these children. Their suffering, without seeking revenge against their oppressors, exposed their humanity. Marching on its own does not create social change, because it does not expose the problems that it seeks to solve. During the civil rights movement in parts of the United States where police brutality against blacks did not take place, integration did not happen. If the humanity of the oppressed people is not exposed to the oppressor, the oppressor will not be transformed. The point of the civil rights movement was not to depict racists as evil, but to show them the humanity of blacks and thus lay aside racism. The point of any social movement is to transform the beliefs of people who seek to oppress. If Facebook and Twitter can be employed as a tool to expose the humanity of the oppressed then they can be effective tools for social change.
Until Facebook and Twitter are utilized in this way, they are like protests — illustrating the desire for change, but completely missing the means through which it can be achieved. In order to create effective social movements we must look to the past, not at what physical actions (picketing, marching, sit-ins etc.) created change, but at the ideas behind them.
Results of the evaluation and preliminary validation of a primary LNG mass flow standard LNG custody transfer measurements at large terminals have been based on ship tank level gauging for more than 50 years. Flow meter application has mainly been limited to process control in spite of the promise of simplified operations, potentially smaller uncertainties and better control over the measurements for buyers. The reason for this has been the lack of LNG flow calibration standards as well as written standards. In the framework of the EMRP Metrology for LNG project, Van Swinden Laboratory (VSL) has developed a primary LNG mass flow standard. This standard is so far the only one in the world except for a liquid nitrogen flow standard at the National Institute of Standards and Technology (NIST). The VSL standard is based on weighing and holds a Calibration and Measurement Capability (CMC) of 0.12% to 0.15%. This paper discusses the measurement principle, results of the uncertainty validation with LNG and the differences between water and LNG calibration results of four Coriolis mass flow meters. Most of the calibrated meters do not comply with their respective accuracy claims. Recommendations for further improvement of the measurement uncertainty will also be discussed. |
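As a rough illustration of the comparison underlying a gravimetric (weighing-based) flow calibration, the sketch below computes a meter's relative error against the weighed reference mass and checks it against the claimed capability. The CMC range of 0.12% to 0.15% is taken from the abstract; the mass values and the simple coverage-factor check are invented for illustration only.

```python
# Hypothetical sketch of the check behind a gravimetric flow-meter calibration:
# meter error relative to the weighed reference mass, and whether the claimed
# capability (CMC of 0.12-0.15 % quoted in the abstract) plausibly covers it.
# The numerical mass values below are invented.

def relative_error_percent(meter_mass_kg, reference_mass_kg):
    """Meter error as a percentage of the weighed reference mass."""
    return 100.0 * (meter_mass_kg - reference_mass_kg) / reference_mass_kg

def within_capability(error_percent, cmc_percent=0.15, coverage_factor=2):
    """Crude consistency check: is |error| within k times the CMC?"""
    return abs(error_percent) <= coverage_factor * cmc_percent

if __name__ == "__main__":
    err = relative_error_percent(meter_mass_kg=1002.1, reference_mass_kg=1000.0)
    print(f"Meter error: {err:+.2f} %  ->  consistent with claimed CMC? {within_capability(err)}")
```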
CLEVELAND, Ohio -- A Summit County man collected hundreds of thousands of dollars in state and federal disability payments over more than three decades, even though he also worked during that time, according to the U.S. Attorney's Office.
Thomas Cannell, 62, faces charges of theft of government funds and wire fraud. He was charged Wednesday via a criminal information, which usually means a plea agreement is forthcoming.
Cannell, who lives in Northfield, stole more than $684,000 in disability payments from the Ohio Bureau of Workers' Compensation and the U.S. Social Security Administration, according to federal prosecutors.
Cannell developed lower back pain while working at Loopco Corp., according to the criminal information. He filed a disability claim with the Ohio Bureau of Workers' Compensation in July 1980.
He continuously claimed he could no longer work and started receiving permanent total disability payments in 1990. He received them until September of last year, authorities say.
Every time he was contacted, he told the Ohio bureau that he had not worked since 1982.
He also received Title II disability insurance benefits from the U.S. Social Security Administration from 1982 until July of last year. Again, he said he wasn't employed, officials say.
That wasn't true, though. Before 1990, he worked selling fireplaces in Ohio. At his request, the business owners did not pay him directly for his services, but rather cut him checks made out to "Cannell Marketing," a company registered in his wife's name, according to the U.S. Attorney's Office.
He sold fireplaces until September 2017, prosecutors say.
Cannell's attorney Donald Riemer declined to comment.
The case is assigned to U.S. District Judge Benita Pearson in Youngstown.
Click here to read the charging document on a mobile device. |
Professor Gill Bristow is Dean of Research for the College of Arts, Humanities and Social Sciences and Professor of Economic Geography at Cardiff University. Her research interests include regional economic development and policy.
With the launch of the new Industrial Strategy Green Paper, the UK Government has signalled its intent to develop a place-based approach to build upon distinct industrial assets and innovation potential of each part of the country. The Green Paper states that for Britain to achieve maximum prosperity and for the economy to work for everyone, then “all parts of the country must be firing on all cylinders”. So, a key question becomes: what is needed for the cities and regions of South East Wales and the South West of England to exploit the potential of this new place-based approach?
The first step for any region seeking to develop a placed-based industrial and innovation strategy is to undertake a thorough assessment and diagnosis of its particular industrial strengths, its economic assets and resources, and its new and developing areas of growth and innovation potential. In this respect, South West England and South East Wales have a strong foundation on which to build, demonstrated by the Science and Innovation Audit published last November. This provided a unique opportunity to identify areas of world-leading research and innovation, with the Audit highlighting the region’s strengths in advanced engineering, digital innovation, energy generation and environmental technologies. The Audit was also significant in bringing together some of the regions’ key institutions and industrial partners in the execution of this task. Thus the GW4 Alliance of the universities of Bath, Bristol, Cardiff and Exeter formed part of a wider Great West Taskforce which included the Universities of the West of England and Plymouth, as well as key businesses and Local Enterprise Partnerships.
Institutional collaboration of this kind will be critical to the development of an effective place-based industrial strategy and certainly will be vital for any emerging strategy to gain wider political traction. Indeed, the UK Government’s Department of Business, Energy and Industrial Strategy (BEIS) has clearly indicated that it expects networks of collaborating universities such as the GW4 Alliance and the N8 Research Partnership to be key players in the new place-based approach. This makes very good sense. Universities are uniquely placed to act as key connectors in their regional economies and innovation ecosystems. They are place-bound and do not move in and out of their localities. Yet through their networks of knowledge production and exchange, they have unique capacities to bind industrial sectors into places, and to connect local firms with global networks and markets.
There is good reason to expect that our universities will play a particularly important role in this region as we progress from debating the Green Paper’s proposals to implementing them in practice. A central question here becomes who will co-ordinate, design and deliver a place-based industrial and innovation strategy for the Great Western ‘region’? A place-based strategy demands effective place leadership, yet this is a territory or region which doesn’t have a co-ordinating administrative authority or any clearly aligned governance structures. The territory cuts across the Wales-England border and thus the boundaries of Welsh devolution, and includes a patchwork of metropolitan and local authorities and emerging city-region governance arrangements and plans. For the foreseeable future, this means this region will continue to be defined and shaped by the strategic relationships between its key institutions and stakeholders – the GW4 Alliance, the Great West Taskforce and the fledgling Great Western Cities collaboration between Cardiff, Newport and Bristol. As such, it is these alliances that will play a critical role in mobilising and co-ordinating the resources and policy levers of the region’s different authorities and their various devolution deals. For example, it is vital that any industrial strategy covering South East Wales and the South West of England complements the £1.2 billion City Deal for the nascent Cardiff Capital Region. The GW4 Alliance in particular, with its existing strategic partnerships, can play a critical role in making connections both horizontally amongst local authorities, and vertically to the Welsh and UK Governments.
The GW4 Alliance can also play a vitally important role in ensuring that any place-based industrial strategy for the region is suitably broad and does more than simply target key sectors or anchor firms. An effective place-based strategy must recognise that the fortunes of key sectors and firms are fundamentally shaped by the economic context within which they operate. Their ability to innovate and fully mobilise their latent potential will depend upon key factors in the local and regional economy such as levels of educational attainment and graduate retention, the management practices of firms and their propensity to export and innovate, as well as the wider physical and social infrastructures around them. Understanding the role and importance of these various people and place-based factors suggests an effective industrial strategy will need to mobilise and draw upon the GW4’s breadth of expertise in data science and analytics and the wider social sciences. This will be critical if we are to fully understand the transformative potential of key innovations, and be able to catalyse appropriately designed and targeted policy interventions.
Finally, and perhaps most importantly of all, an effective place-based industrial strategy will only be achieved if there is some overall vision and strategy agreed by all partners, and a clear set of objectives for collective action. Having an audit of strengths and opportunities is one thing. Having a clear roadmap for how to develop and maximise these is another. Our universities have enormous power to convene relevant stakeholders, orchestrate dialogue and partnership development and facilitate this activity. In the meantime, progress can be made with the development of a clear regional identity and brand. This will have important symbolic value in demonstrating the region’s coherence and strategic intent to external parties, as well as providing an identity around which the various stakeholders within the region can coalesce. Other UK regions are already seeing the advantages of mobilising in this way. We have seen the emergence of ‘the Northern Powerhouse’ and ‘the Midlands Engine’. Building on the GW4 Alliance and the Great West Taskforce, why not develop an identity under the banner of the ‘Great Western Force’? Deploying the word ‘force’ has some intuitive appeal, not least for conveying strength and a sense of forward momentum. It also usefully captures something of the region’s main industrial assets around energy, the environment and advanced engineering, and of the important alliances and taskforces which underpin its development and which will be key to its future.
All of this suggests that for the Great West region, as indeed for many other regions which have sought to develop effective place-based development strategies, it is not a question as to whether it has the ability to achieve its strategic development potential, but rather what needs to be done and by whom to ensure that it does. |
Six lambs were experimentally infected with Cysticercus ovis. Changes in the ECG were followed by means of radiotelemetry. The infection led to several important changes, including sinus tachycardia and arrhythmia, auricular fibrillation, sinoauricular block, atrial dissociation, the appearance of a pathological Q deflection, lowering of the R-deflection amplitude, and inversion of the T wave. The changes referred to were found to persist for a longer period (in the case of infarction) and could be made use of in dispensary studies. |
Q:
Developing Android apps for someone else
We have developed several apps and published them on Android Market. We are now writing an app that another company will brand and sell through their own publisher account. The other company has no experience with Android Market or with Android development. I'd appreciate any insights from others who have faced similar situations. I'm specifically concerned about the following areas:
Signing the app
The alternatives we see are: sign with our usual key; create a signing key pair specific to the other company and sign with that; or help the other company install a development system, generate a key pair, and do the signing themselves. The latter would require us sending them the project sources, which presents its own problems. Other than our concern about sending the source, does the choice matter in any way?
Licensing
Since the license check will be done against their account, the code will need to embed their public key for decrypting the license response. Is there any reason they should be concerned about sharing that key with us? Are there any alternatives to them sharing the key with us?
Publishing
The other company is responsible for all marketing and sales; we are responsible for the app development. From what we can tell, Android Market is not set up to allow a clean separation of these roles. (It assumes that the developer will also be the publisher.) This makes it difficult to work out a division of responsibilities for the publication process. Our initial thought was to deliver the .apk file to them and let them handle it from there. The licensing issue was the first indication that we were being naive about this. The publishing process itself is rather technical, and we see two alternatives: walk them through all the steps or ask them to give us access to their publisher account and do it ourselves. What do others do?
A:
Signing the app
I would generate a separate key for the company, and sign it yourself. The other company doesn't sound like it's at a technical level to appreciate the importance of a private key. Also, I'm not sure what your agreement is, but they could at a later date ask for the keys used to sign the app they are selling. If you sign it with your own key, that means they could potentially sign other apps with your key and market them, something I'm not sure you want. If you're fine with sending the sources to the other company (with all the associated support costs of helping them set up a development system), having them generate a key pair and sign the app themselves is also a good option.
Licensing
See above. If you have their key, you can sign apps as the other company, something they might not be ok with. Having each company handle their own keys presents the least potential for conflict.
Publishing
This is the area I'm least familiar with. I guess the answer would depend on your relationship - is this a one-off or the first of many? If it's a one-off, and you have a good relationship, you could ask for temporary access and do it yourselves; if you envision further work down the road, going through the pain of teaching them now would make things much easier later on.
|
AANP award for meritorious contributions to neuropathology presented to Fusahiro Ikuta, MD. American Association of Neuropathologists. It is my pleasure to make introductory remarks to the presentation of the Association's Award for Meritorious Contribution to Neuropathology to Dr. Fusahiro Ikuta. Dr. Ikuta graduated from Niigata University School of Medicine in Japan in 1955. After his internship, he joined the Neurosurgical Department of Niigata University, established by Dr. Mizuho Nakata. He soon became more interested in neuropathology than neurosurgery. In 1960 he came to America to receive training in neuropathology in Dr. Zimmerman's Neuropathology Department at Montefiore Hospital. Dr. Ikuta has been an active member of AANP since 1964, and his first presentation at the annual meeting of AANP received the Weil Award in 1964. |
As the European elections draw ever closer, EU leaders are quietly manoeuvring into position the candidates they want to install in the next European Commission. Here are some of the latest developments.
It’s anybody’s guess what the next European Parliament will look like, although many polls can give us a rough idea of how the ‘Camembert’ will shape up after May.
The make-up of the next Commission is also far from clear but the likes of Hungary, Luxembourg and even Denmark have shown or at least given us a glimpse of their cards. The really big players are still keeping them close to the chest.
Budapest has officially endorsed the current justice minister, László Trócsányi, as its pick, which will be an intriguing (at the very least) confirmation hearing to watch, given the low regard in which MEPs hold Hungary’s rule of law situation.
Would any Commission president sign off on Trócsányi’s nomination in the first place though?
Should Manfred Weber get the top job through the increasingly shaky-looking Spitzenkandidat process, the Hungarian lawyer could well get a portfolio. He’d probably inherit Tibor Navracsics’ all-important sport and culture duties.
Navracsics has successfully shed any notion that he is Viktor Orbán’s man in Brussels, so Trócsányi would probably have to make the same assurances of putting the Berlaymont first to stand any chance of getting a spot in the team.
Luxembourg, whose process for picking a Commissioner is far more entrenched in its political system (a nation that holds the record for most presidents, remember), has plumped for the experienced Nicolas Schmit.
An economist by trade, Schmit got the nod over the current deputy prime minister, Etienne Schneider, as well as Brussels’ former ‘Mr Energy’, Claude Turmes. Given he’ll follow in the footsteps of Jean-Claude Juncker, Schmit is unlikely to be awarded a portfolio of any note.
Denmark’s potential choice is far less clear-cut as current Commissioner Margrethe Vestager, although quite popular and highly regarded, can no longer count on the backing of the national government.
The Danes have an unwritten rule that their Commissioner should at least come from the ruling coalition [something Frans Timmermans, for example, does not have to tangle with when it comes to Dutch PM Mark Rutte].
But it’s unlikely that PM Lars Løkke Rasmussen would stand in Vestager’s way if the top job was on offer.
If the EU Council abandons the Spitzenkandidat procedure and embarks on its usual horse-trading behind closed doors, the Queen of EU competition law stands a chance of getting the presidency, because of her popularity, competence, experience and, surely, gender.
Vestager would be Denmark’s first EU executive chief and she has already given hints about her availability. This week she tweeted in support of Greens Spitzenkandidat Bas Eickhout, whose group’s backing could prove vital when crunch time comes.
But if Spitzenkandidat does inexplicably re-establish itself as the likely avenue, or if another candidate, such as Michel Barnier, emerges as the outright favourite, Vestager will probably say goodbye to Brussels.
Stepping into her shoes could be former PM Helle Thorning-Schmidt. Last week, she announced that she would quit her current CEO position in the summer, shortly after both the EU and Danish elections, which are earmarked for June.
Thorning-Schmidt’s social democrat party currently holds the most seats in the parliament, so it could regain power and give her the nod to go to Brussels.
EU institutions: a complex machine of many moving parts. Do your best to keep up.
The UK parliament votes on Theresa May’s Brexit deal later tonight. Although the EU side has insisted that the agreement is set in stone, Germany teased the prospect of more talks in case/when Westminster rejects the deal.
Madrid and Brussels disagree over Iberia’s Spanish ownership after Brexit as the flight operator tries to preserve its air carrier’s licence to operate intra-EU flights.
Talks between Brussels and Bern, ongoing since 2014, have hit an impasse, causing more Brexit confusion.
And who would have guessed – the jokes about no-deal are turning more and more towards dark British humour.
Greece and Russia, meanwhile, have exchanged furious statements over the Macedonia name dispute.
Berlin lacks the political will to bring the energy transition forward, according to the architect of Germany’s renewable energy law.
Last week, seven countries missed a December deadline to submit draft climate plans. Now the latecomers are getting their excuses in.
Single seat debate is back: the 5* Movement has decried the “waste of money” the Parliament’s Strasbourg seat represents, putting its closure at the centre of the party’s European election campaign.
MEPs, who are currently meeting in Strasbourg, have backed plans for artificial intelligence and robotics. But ethical concerns remain in place.
Lawmakers also criticised Austrian Chancellor Sebastian Kurz’s policy on migration during his country’s holding of the EU presidency last year. His Czech counterpart is currently under the EU’s spotlight, as auditors look for possible conflicts of interest.
Serbian citizens are set to rally on the streets next week, both those unhappy with the state of affairs in the Balkan country and those who just want to show support for Vladimir Putin, who will be in town on an official visit.
Our caption contest is still open! It involves Juncker and a man in an amazing hat. Send us your best entries here.
Crunch time with the ‘meaningful vote’ in the House of Commons tonight, the vote itself is scheduled at 8pm. Here is a nice explainer on the voting procedure. Please disregard the fake yet hilarious news that May has delayed the vote until 30 February. |
Sensor Data Preprocessing, Feature Engineering and Equipment Remaining Lifetime Forecasting for Predictive Maintenance Analytics based on sensor data is gradually becoming an industry standard in equipment maintenance. However, it involves several challenges, such as sensor data preprocessing, feature engineering and forecasting model development. As this is work in progress, this paper focuses mainly on sensor data preprocessing, which plays a crucial role in predictive maintenance because real-world sensing equipment usually provides data with missing values and a considerable amount of noise. Poor data quality can render all subsequent steps of data analysis practically useless. Thus, many missing-data imputation, outlier filtering and noise reduction algorithms have been introduced in the literature. Streaming sensor data can be represented in the form of a univariate time series. This paper provides an overview of common univariate time series preprocessing steps and the most appropriate methods, with consideration of the field of application. Sensor data from different sources come in different scales and should be normalized, so a comparison of univariate time series normalization techniques is given. Conventional algorithm quality metrics for each of the preprocessing steps are described. A basic sensor data quality assessment approach is suggested. Moreover, the architecture of a sensor data preprocessing module is proposed. An overview of time-series-specific feature engineering techniques is given, and the forecasting approaches considered are briefly enumerated.
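As a minimal sketch of the preprocessing chain the abstract enumerates (imputation of missing samples, outlier filtering, noise reduction and normalization of a univariate sensor series), the following pandas/numpy pipeline shows one plausible implementation. It is our illustration, not the module proposed in the paper; window sizes and thresholds are arbitrary choices.

```python
# Illustrative preprocessing pipeline for a univariate sensor series
# (not the module described in the paper): gap imputation, outlier
# filtering, noise reduction and z-score normalization.

import numpy as np
import pandas as pd

def preprocess(series: pd.Series) -> pd.Series:
    # Assumes a DatetimeIndex.
    # 1. Impute missing samples by time-based linear interpolation.
    s = series.interpolate(method="time").ffill().bfill()

    # 2. Filter outliers with a rolling-median rule: replace points that
    #    deviate from the local median by more than ~3 local MADs.
    med = s.rolling(window=15, center=True, min_periods=1).median()
    mad = (s - med).abs().rolling(window=15, center=True, min_periods=1).median()
    outliers = (s - med).abs() > 3 * (1.4826 * mad + 1e-9)
    s = s.mask(outliers, med)

    # 3. Reduce noise with a short centered moving average.
    s = s.rolling(window=5, center=True, min_periods=1).mean()

    # 4. Normalize to zero mean and unit variance (z-score).
    return (s - s.mean()) / s.std()

if __name__ == "__main__":
    idx = pd.date_range("2024-01-01", periods=200, freq="s")
    raw = pd.Series(np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.randn(200), index=idx)
    raw.iloc[20:25] = np.nan   # simulated sensor dropout
    raw.iloc[100] = 10.0       # simulated spike
    print(preprocess(raw).describe())
```

The same steps could, of course, be swapped for more sophisticated methods (e.g. model-based imputation or wavelet denoising) without changing the overall structure of the module.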