The Environmental Protection Agency’s assessments show that breathing ground-level ozone at levels of just above 0.070 parts per million can cause a range of harmful health effects, including asthma and emphysema.
The effects add up: Air pollution kills tens of thousands of Americans every year and costs the economy over $4 billion.
The damage, however, is not evenly spread. Your exposure to ground-level ozone depends, in no small part, on where you live, and where you live often depends on your economic status. So according to the EPA’s own data — who exactly is America dumping its air pollution burden on?
This map of national ground-level ozone levels shows a large amount of variation across the country, ranging from safe to hazardous levels. Click on any of the dots signifying monitoring locations to view ozone levels and the demographics of local residents.
Search for Astrophysical Nanosecond Optical Transients with the TAIGA-HiSCORE Array

The wide-angle Cherenkov array TAIGA-HiSCORE (FOV $\sim$0.6 sr) was originally created as a part of the TAIGA installation for high-energy gamma-ray astronomy and cosmic-ray physics. The array now consists of nearly 100 optical stations over an area of 1 km$^2$. Due to the high accuracy and stability ($\sim$1 ns) of the time synchronization of the optical stations, the accuracy of EAS arrival-direction reconstruction reaches 0.1$^\circ$. It was shown that the array can also be used to search for nanosecond events in the optical range. The report discusses the method of searching for optical transients with the HiSCORE array and demonstrates its performance on a real example of detecting signals from an artificial Earth satellite. A search for such short flares in the HiSCORE data of the winter season 2018--2019 is carried out. One candidate for a double repeater has been detected, but the estimated probability of random simulation of such a transient by background EAS events is not less than 10%, which does not allow us to say that the detected candidate corresponds to a real astrophysical transient. An upper bound on the frequency of optical spikes with a flux density of more than $10^{-4} \mathrm{erg/s/cm}^2$ and a duration of more than 5\,ns is established as $\sim 2 \times 10^{-3}$ events/sr/hour.

INTRODUCTION The astrophysical complex TAIGA (Tunka Advanced Instrument for cosmic ray physics and Gamma-ray Astronomy) is located in the Tunka valley, 50 km from Lake Baikal, and is designed for the study of cosmic-ray physics and gamma-ray astronomy at high and ultrahigh energies using a technique based on the observation of extensive air showers (EAS). The complex includes several instruments, including the TAIGA-HiSCORE wide-aperture Cherenkov array. The operating principle of the TAIGA-HiSCORE telescope is based on the idea presented in the article: the detector stations measure the amplitudes of the light signal caused by the Cherenkov light from an EAS, as well as the time evolution of the Cherenkov light front, reconstructed from the sequence of station triggers. The HiSCORE array (High Sensitivity COsmic Rays and gamma Explorer) consists of integrated optical atmospheric Cherenkov stations, each of which includes four PMTs with a total entrance pupil area of about 0.5 m² and a FOV of 0.6 steradian, pointed to the same sky direction; the stations form an array with a distance of 106 m between neighbouring stations (at present they are adapted for observing the Crab Nebula). The HiSCORE array is still under construction and currently consists of 120 stations located on an area of more than 1 km².
New Cherenkov stations were put into operation as soon as they were added to the telescope, so the instrument was taking data during its construction. The design and construction details of the HiSCORE telescope are presented in the articles. From these data it is possible to reconstruct the arrival direction of a cosmic-ray particle (a charged particle or a gamma quantum), its initial energy, and some other important characteristics of the EAS. The characteristic duration of EAS Cherenkov light pulses ranges from several nanoseconds to tens of nanoseconds; therefore, the HiSCORE detector stations are adapted to register just such flashes. The detection threshold is approximately 3000 quanta in the green region (∼530 nm) per square meter within an integration time of 10 ns. In principle, however, the array is capable of registering not only EAS Cherenkov light pulses but any flashes of light. In this case, by a number of characteristics, a flash from a distant point source (hereinafter referred to as an optical transient) can in the great majority of cases be easily distinguished from the passage of a Cherenkov front of an EAS. Indeed, the stations triggered by an EAS Cherenkov light pulse are distributed around the shower axis, so that the largest signals come from the triggered stations closest to the axis and the event looks like a relatively compact "spot" near the EAS axis. On the contrary, distant transient events generate a flat and homogeneous optical front that evenly covers the entire HiSCORE array, so that all optical stations receive the same optical pulse at the input within the statistical accuracy. Therefore, it should be expected that a distant transient event will illuminate the HiSCORE array uniformly over the installation area, with no pronounced dependence of the amplitude of a triggered station on its position. This idea is confirmed by comparing the structure of a typical EAS event and an event caused by a pulse of a laser installed on an orbiting satellite (Fig. 1). The signal from the satellite flying over the HiSCORE array was unexpectedly detected in the data for December 10, 2018 (see Fig. 2) and was subsequently identified as the signal from the lidar on the CALIPSO satellite. It was later found that the signal from CALIPSO has been present in the data since 2015 and can be used for the absolute pointing calibration of the HiSCORE array, just as was previously done using signals from the lidar on the ISS. The source of the optical pulse on the satellite is distant enough to generate an almost flat and uniform optical front near HiSCORE and thus well simulates an infinitely distant source. Fig. 1 clearly shows the difference between the structure of EAS events and satellite events. Optical transient events easily pass the primary event selection of the main HiSCORE working mode; therefore, the program for searching for transients in the HiSCORE data can be implemented in a purely concomitant mode. The task reduces to the implementation of some specific HiSCORE event-filtering algorithms. Thus, the HiSCORE Cherenkov array, originally designed for the study of cosmic-ray physics and for gamma-ray astronomy, can be relatively easily adapted to problems of conventional optical astronomy. An interesting question is what kind of distant astrophysical sources could lead to nanosecond flashes of light detectable by the HiSCORE apparatus.
If the source is incoherent, then the size of the emitter should be measured at most in units of meters, since in 1 ns light travels only 0.3 m. In principle, such compact incoherent emitters could be the remnants of evaporating relic black holes. If the source of the pulse is coherent, then there are no restrictions on the size of the source, but in this case we should speak of a quantum generator of either natural or artificial origin. Natural cosmic sources of coherent radiation, including optical lasers, are known, but they emit in a continuous mode. Natural sources of short coherent optical pulses are unknown and require further study. The most understandable hypothetical source of short optical pulses is a distant optical laser of artificial origin. The use of lasers for interstellar communication was proposed by R. N. Schwartz and C. H. Townes in 1961. Thus, the task falls within the framework of SETI (the Search for Extra-Terrestrial Intelligence). It is easy to see that even for very large distances, a laser with relatively modest parameters is sufficient for its radiation to be detected by the HiSCORE optical stations. For example, for a transmitter with an aperture of 1000 m emitting light with a wavelength of 530 nm (green), at a distance of 10^4 ly from the Solar System the diffraction radius of the light spot will be about 50 million km; for a threshold of 3000 quanta/m² per 10 ns we then obtain that the laser energy should be about 9 MJ for every 10 ns. Such energy and power are comparable to the lasers already used on Earth in thermonuclear fusion programs. An aperture of 1000 m implies the use of a laser phased array; prototypes of such devices also already exist on Earth. These estimates show that, using very modest laser transmitters and modest optical receivers, the transmission of information through an optical channel can, in principle, be established over virtually any intragalactic distance. Note that the idea of using large Cherenkov telescopes to search for optical transients of astrophysical origin was proposed as early as the beginning of the 2000s, including within the framework of the SETI problem. Since then the idea has been repeatedly discussed in various aspects, and some search programs have been implemented. However, all these works assumed and discussed the use of imaging Cherenkov telescopes, which have a small field of view of only a few degrees. In the present work, for the first time, an instrument of a completely different type is used for the purposes of optical astronomy. The HiSCORE Cherenkov array has a very wide field of view, which covers about one tenth of the visible sky. This is extremely important for SETI when it comes to detecting very rare events. Strictly speaking, not all stations will be triggered by a transient, since for some stations the optical pulse may turn out to be higher than their threshold, while for others it may be lower. However, the event will still show a small spread in the amplitudes of the different triggered stations and no concentration of triggered stations around any axis, as is the case in EAS events. The most important task of HiSCORE data processing is the reconstruction of the direction to the source, which is carried out using the measured response times of the different optical stations caused by the passage of the light front.
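Before turning to the direction reconstruction, the link-budget estimate above can be checked with a few lines of arithmetic. The sketch below is not from the paper; it assumes the diffraction spot radius is taken simply as (wavelength/aperture) times the distance, without the 1.22 Airy factor, which reproduces the quoted ~50 million km spot and ~9 MJ pulse energy:

```python
import math

# Back-of-the-envelope check of the laser link budget quoted in the text.
# Assumption: spot radius ~ (lambda / D) * distance (no 1.22 Airy factor);
# all other numbers are taken from the text.

wavelength = 530e-9            # m, green light
aperture = 1000.0              # m, phased-array transmitter
distance = 1e4 * 9.461e15      # m, 10^4 light years
threshold = 3000.0             # photons per m^2 per 10 ns (HiSCORE threshold)

h, c = 6.626e-34, 2.998e8
photon_energy = h * c / wavelength                    # ~3.7e-19 J per green photon

spot_radius = (wavelength / aperture) * distance      # ~5e10 m, i.e. ~50 million km
spot_area = math.pi * spot_radius ** 2

pulse_energy = threshold * photon_energy * spot_area  # energy needed per 10 ns pulse
print(f"spot radius  ~ {spot_radius / 1e9:.0f} million km")
print(f"pulse energy ~ {pulse_energy / 1e6:.1f} MJ per 10 ns")
```

Including the 1.22 Airy factor would enlarge the spot area and the required energy by roughly 50%, which does not change the conclusion that such lasers are within reach of present-day technology.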
In this work, the direction is reconstructed under the assumption of a plane light front, which is exactly correct for a distant optical transient but also gives the direction of the EAS axis, despite the fact that the front of the EAS Cherenkov light is not completely flat. This is possible because, although the EAS front is not flat, it is axially symmetric with respect to the shower axis. The problem of finding the direction to the source reduces to minimization, over the two angular coordinates φ (azimuth) and θ (polar angle), of a function of the timing residuals, where N is the number of triggered optical stations, x_i, y_i, z_i are the coordinates of the triggered stations, t*_i are the trigger times of the stations, and c is the speed of light. Several passes of the CALIPSO satellite through the HiSCORE aperture have been recorded. The satellite's orbital altitude is 700 km and the laser pulse duration is about 20 ns; therefore CALIPSO provides an almost ideal test of signals from a distant optical transient. The scatter of the trajectory points was used to determine the accuracy of the reconstruction of the angular coordinates for events near the zenith (where the satellite appears in the HiSCORE aperture), which was about 0.05°. As mentioned in the Introduction, the observation of lidar events also confirmed the idea of filtering events of distant optical transients from the EAS background (see Fig. 1).

RESULTS The answer to the question of whether a real transient can be singled out against the EAS background is "yes" if one looks not for single candidates for transients but for repeater candidates, where a repeater is understood as a group of events that came from one point in the sky. There is no reason other than pure chance for ordinary EAS events to come from one point, so repeated events with the same coordinates in the sky must mean the presence of a real repeating source. Finding a repeater among the 511 filtered candidates would mean finding a real candidate for a repeating optical transient, provided it is shown that the probability of random simulation of such a repeater by the EAS background is small. Obviously, by events that came from one point in the sky one should understand events for which the difference in angular coordinates is less than the errors in measuring those coordinates. Given the errors in determining the angular coordinates mentioned above, the probability of one double repeater being simulated by chance (Fig. 6) is 78%. That is, the appearance of one double repeater should be expected simply by chance; therefore we cannot consider the found repeater as a real candidate for an optical transient. Using Monte Carlo simulations, it is possible to understand what kind of events would lead to reliable detection of a repeater candidate under the same conditions. For example, the probability of chance simulation of one triple repeater by the EAS background is 0.002; that is, the reliability of such a candidate would be about three sigma. The likelihood of simulating a quadruple repeater is negligible, which would mean a virtually reliable transient detection. Thus, the main result of this work is that neither single candidates for optical transients under conditions of a low background near the zenith, nor candidate repeaters at large zenith angles against the EAS background, were found. The absence of candidates for optical transients allows us to set limits on the frequency of occurrence of such events.
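For illustration, the plane-front direction fit described above can be sketched in a few lines. This is not the HiSCORE production code; it is a minimal least-squares version under the stated plane-front assumption, with a synthetic 5×5 station grid (106 m spacing) standing in for real data:

```python
import numpy as np
from scipy.optimize import minimize

C = 0.2998  # speed of light, m/ns

def plane_front_residual(angles, xyz, t):
    """Sum of squared residuals of the plane-front condition c*t_i + r_i.n = const
    for a front arriving from direction (theta, phi); the unknown common offset t0
    is absorbed by subtracting the mean residual."""
    theta, phi = angles
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])              # unit vector towards the source
    r = C * t + xyz @ n
    return np.sum((r - r.mean()) ** 2)

# Synthetic check: a 5x5 station grid with 106 m spacing and an ideal front
# from theta = 0.3 rad, phi = 1.0 rad (no timing noise).
xx, yy = np.meshgrid(np.arange(5) * 106.0, np.arange(5) * 106.0)
xyz = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
n_true = np.array([np.sin(0.3) * np.cos(1.0),
                   np.sin(0.3) * np.sin(1.0),
                   np.cos(0.3)])
t = -(xyz @ n_true) / C                        # trigger times in ns, t0 = 0

fit = minimize(plane_front_residual, x0=[0.2, 0.8], args=(xyz, t))
print(fit.x)                                   # recovers approximately [0.3, 1.0]
```

With real data the trigger times carry the ~1 ns synchronization jitter mentioned above, so the residual at the minimum is finite rather than zero.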
Since the exposure for the 2018-2019 season is known (288 sr·hour) and no candidate was detected, then, taking into account the thresholds and signal durations that could be observed, the limit on the flux of the sought transients can be formulated as follows: for optical transients with an energy flux of more than 10^-4 erg/cm²/s and a pulse duration of more than 5 ns, the rate is less than 2×10^-3 events/sr/hour. In conclusion, we note that the data of the
Capacity of the Hopfield model For a given if neurons deviating from the memorized patterns are allowed, we constructively show that if and only if all stored patterns are fixed points of the Hopfield model. If neurons are allowed with then where is the distribution function of the normal distribution. The result obtained by Amit and co-workers only formally coincides with the latter case which indicates that the replica trick approach to the capacity of the Hopfield model is only valid in the case.
Oliphant v. Suquamish Indian Tribe
Facts
In August 1973 Mark David Oliphant, a non-Indian living as a permanent resident with the Suquamish Tribe on the Port Madison Indian Reservation in northwest Washington, was arrested and charged by tribal police with assaulting a tribal officer and resisting arrest during the Suquamish Tribe's Chief Seattle Days. Knowing that thousands of people would gather in a small area for the celebration, the Tribe asked Kitsap County and the Bureau of Indian Affairs for additional law enforcement assistance. The County sent just one deputy and the Bureau of Indian Affairs sent no one. When Oliphant was arrested, at 4:30 a.m., only tribal officers were on duty.
Oliphant applied for a writ of habeas corpus in federal court, claiming he was not subject to tribal authority because he was not Native American. He challenged the exercise of criminal jurisdiction by the tribe over non-Indians.
Procedural history
Oliphant's application for a writ of habeas corpus was rejected by the lower courts. The Ninth Circuit upheld Tribal criminal jurisdiction over non-Indians on Indian land because the ability to keep law and order within tribal lands was an important attribute of tribal sovereignty that was neither surrendered by treaty nor removed by the United States Congress under its plenary power over Tribes. Judge Anthony Kennedy, a judge of the Ninth Circuit Court of Appeals at the time, dissented from this decision, saying he found no support for the idea that only treaties and acts of Congress could take away the retained rights of tribes. According to Judge Kennedy the doctrine of tribal sovereignty was not "analytically helpful" in resolving this issue.
Court decision
The U.S. Supreme Court ruled in Oliphant's favor, holding that Indian tribal courts do not have criminal jurisdiction over non-Indians for conduct occurring on Indian land, reversing the Ninth Circuit's decision. More broadly, the Court held that Indian Tribes cannot exercise powers "expressly terminated by Congress" nor powers "inconsistent with their status" as "domestic dependent nations".
Analyzing the history of Congressional actions related to criminal jurisdiction in Indian Country, the Court concluded that there was an "unspoken assumption" that Tribes lacked criminal jurisdiction over non-Indians. While "not conclusive", the "commonly shared presumption of Congress, the Executive Branch, and lower federal courts that tribal courts do not have the power to try non-Indians carrie[d] considerable weight".
The Court incorporated this presumption into its analysis of the Treaty of Point Elliott, which was silent on the issue of Tribal criminal jurisdiction over non-Indians. The Court rejected the Ninth Circuit's approach, which interpreted the Treaty's silence in favor of tribal sovereignty, applying the "long-standing rule that legislation affecting the Indians is to be construed in their interest". Instead, the Court revived the doctrine of implicit divestiture. Citing Johnson v. M'Intosh and Cherokee Nation v. Georgia, the Court considered criminal jurisdiction over non-Indians an example of the "inherent limitations on tribal powers that stem from their incorporation into the United States", similar to Tribes' abrogated rights to alienate land.
The Court found that, by incorporating into the United States, Tribes "necessarily [gave] up their power to try non-Indian citizens of the United States except in a manner acceptable to Congress". Arguing that non-Indian citizens should not be subjected to another sovereign's "customs and procedure", the Court analogized to Crow Dog. In Crow Dog, which was decided before the Major Crimes Act, the Court found exclusive Tribal jurisdiction over Tribe-members because it would be unfair to subject Tribe-members to an "unknown code" imposed by people of a different "race [and] tradition" from their own.
Although the Court found no inherent Tribal criminal jurisdiction, it acknowledged the "prevalence of non-Indian crime on today's reservations which the tribes forcefully argue requires the ability to try non-Indians" and invited "Congress to weigh in" on "whether Indian tribes should finally be authorized to try non-Indians".
Dissenting opinion
Justice Thurgood Marshall dissented. In his view, the right to punish all individuals who commit crimes against tribal law within the reservation was a necessary aspect of the tribe's sovereignty. His dissent read, in full:
I agree with the court below that the "power to preserve order on the reservation ... is a sine qua non of the sovereignty that the Suquamish originally possessed." Oliphant v. Schlie, 544 F.2d 1007, 1009 (CA9 1976). In the absence of affirmative withdrawal by treaty or statute, I am of the view that Indian tribes enjoy, as a necessary aspect of their retained sovereignty, the right to try and punish all persons who commit offenses against tribal law within the reservation. Accordingly, I dissent.
Chief Justice Warren E. Burger joined the dissenting opinion.
Effects
In 1990, in Duro v. Reina, the U.S. Supreme Court extended the Oliphant decision to hold that tribes also lacked criminal jurisdiction over Indians who were not members of the tribe exercising jurisdiction. Within six months, however, Congress abrogated the decision by amending the Indian Civil Rights Act to affirm that tribes had inherent criminal jurisdiction over nonmember Indians. In 2004, the Supreme Court upheld the constitutionality of this legislation in United States v. Lara. Scholars have extensively criticized the decision. According to Professor Bethany Berger, "By patching together bits and pieces of history and isolated quotes from nineteenth century cases, and relegating contrary evidence to footnotes or ignoring it altogether, the majority created a legal basis for denying jurisdiction out of whole cloth." Rather than legal precedent, the holding was "dictated by the Court's assumptions that tribal courts could not fairly exercise jurisdiction over outsiders and that the effort to exercise such jurisdiction was a modern upstart of little importance to tribal concerns." Professor Philip Frickey describes Oliphant, along with the subsequent decisions limiting tribal jurisdiction over non-Indians, as rooted in a "normatively unattractive judicial colonial impulse", while Professor Robert Williams condemns the decision as "legal auto-genocide". According to Dr. Bruce Duthu, the case showed "that the project of imperialism is alive and well in Indian Country and that courts can now get into the action". Professor Duthu continues
The Oliphant Court essentially elevated a local level conflict between a private citizen and an Indian tribe into a collision of framework interests between two sovereigns, and in the process revived the most negative and destructive aspects of colonialism as it relates to Indian rights. This is a principal reason the decision has attracted so much negative reaction ... Oliphant's impact on the development of federal Indian law and life on the ground in Indian Country has been nothing short of revolutionary. The opinion gutted the notion of full territorial sovereignty as it applies to Indian tribes.
Evolution
Through the passage of the Violence Against Women Reauthorization Act of 2013 (VAWA 2013), signed into law on March 7, 2013, by President Obama, Congress gave tribal courts the right to hear cases in which a non-Indian man commits domestic violence against a Native American woman on the territory of a Native American tribe. This was motivated by the high percentage of Native American women assaulted by non-Indian men, who felt immune because tribal courts lacked jurisdiction over them. The new law allowed tribal courts to exercise jurisdiction over non-Indian domestic violence offenders, but it also imposed other obligations on tribal justice systems, including the requirement that tribes provide licensed attorneys to defend the non-Indians in tribal court. The new law took effect on March 7, 2015, but it also authorized a voluntary "Pilot Project" to allow certain tribes to begin exercising special jurisdiction sooner. On February 6, 2014, three tribes were selected for this Pilot Project: the Pascua Yaqui Tribe (Arizona), the Tulalip Tribes of Washington, and the Confederated Tribes of the Umatilla Indian Reservation (Oregon).
Canadian lawyer/author Jennifer Gold’s debut novel starts with such promise: a contemporary teenager’s discovery of the eponymous “soldier doll” is interwoven with the doll’s near-century-long journey from Europe to the U.S. to Vietnam, landing in a garage sale in Toronto to be bargained for a mere $1.75. Sounds intriguing, no?
While exploring a garage sale in her new neighborhood, 15-year-old Elizabeth buys a handmade baby doll painted with a soldier’s uniform as a birthday gift for her amateur collector father. He’s scheduled to serve a year in Afghanistan, shipping out sooner than later. When Elizabeth happens on a used book store, the friendly employee, Evan, serendipitously mentions a “Soldier Doll” poem by an author both teens admire.
Time shifts nine decades back, across the Pond to Devon, England, to the young woman for whom the doll was originally made – a gift from her father to comfort her after her mother's death. She paints a soldier's uniform onto the doll and presents it to her fiancé, who has just enlisted in the First World War. The tragic news of his death inspires her to write "The Soldier Doll" … and the rest, as they say …
While Elizabeth’s new-girl-in-town story unfolds, alternating chapters follow the doll to Germany with a decorated World War I soldier who lands in a concentration camp after Kristallnacht on the eve of World War II. He next passes it to a little girl who performs in the eerie musical Brundibar while imprisoned in Terezin and survives to donate the doll to an orphanage in what was Czechoslovakia. The doll continues to criss-cross the world with various owners, storing its silent history.
Somewhere about 2/3 through, the narrative falters, dissolving into an unconvincing melodramatic muddle by book’s end. Perhaps it starts with too many stereotypes and inaccuracies in the Asian chapters: American soldiers in Vietnam refer to their one African American compatriot as ‘Miles’ because his last name happens to be ‘Davis’; the Vietnamese immigrant wife annoys her hapa son by being more shadow than equal partner to her white Canadian husband (even as she’s apparently the more substantial breadwinner); pho gets mistaken for spring rolls in a hapa Vietnamese family (Google anyone?). In comparison to the better-researched European sections, the Asian chapters fall carelessly short.
Perhaps all is fair in love and war, but disappointingly, equitable representation on the page seems to be little more than wishful thinking.
Readers: Middle Grade, Young Adult
Published: 2014
Complement activation in shock associated with a surgically provoked bacteraemia. The activation of complement (C) factors in a patient exhibiting anaphylactoid shock 10 min after the initiation of transurethral coagulation of vesical papillomas was studied. Streptococcus faecalis was cultured from the urine before and after the shock and from the blood on two occasions within 2 days after the shock. The shock was associated with a marked but transient complement activation, as judged from serial determinations of C3d and C3c by double-decker rocket immunoelectrophoresis and of native C3 and C3c by crossed immunoelectrophoresis. Conventional total C3 quantitation by rocket immunoelectrophoresis did not reveal the activation. A decrease of native C4 to 20% of normal values and the appearance of C4 split products indicated classical pathway activation. Factor B conversion showed the alternative pathway to be involved as well. Total haemolytic complement activity was temporarily reduced and then increased, without reaching normal values, within 20 hours. The results indicated that the classical pathway was transiently activated, pointing to an acute antigen-antibody reaction, probably caused by the entrance of bacteria and bacterial products into the circulation of an immunized host.
Optimal Timing of Cholecystectomy for Patients with Concurrent Acute Cholecystitis and Acute Cholangitis after Successful Biliary Drainage by Interventional Endoscopic Retrograde Cholangiopancreatography Background: Concurrent acute cholecystitis and acute cholangitis is a unique clinical situation. We investigated the optimal timing of cholecystectomy after adequate biliary drainage under this condition. Methods: From January 2012 to November 2017, we retrospectively screened all hospitalized patients undergoing endoscopic retrograde cholangiopancreatography (ERCP) and then identified patients with concurrent acute cholecystitis and acute cholangitis from the cohort. The selected patients were stratified into two groups: a one-stage intervention (OSI) group (intended laparoscopic cholecystectomy during the same hospitalization) vs. a two-stage intervention (TSI) group (interval intended laparoscopic cholecystectomy). Outcomes of interest included recurrent biliary events (RBE), length of hospitalization, and surgical outcomes. Results: There were 147 patients ultimately enrolled for analysis (OSI vs. TSI, 96 vs. 51). Regarding surgical outcomes, there was no significant difference between the OSI group and the TSI group, including intraoperative blood transfusion (1.0% vs. 2.0%, p = 1.000), conversion to open procedure (3.1% vs. 7.8%, p = 0.236), postoperative complications (6.3% vs. 11.8%, p = 0.342), operation time (118.0 min vs. 125.8 min, p = 0.869), and postoperative days until discharge (3.37 days vs. 4.02 days, p = 0.643). In the RBE analysis, the OSI group presented a significantly lower incidence of overall RBE (5.2% vs. 41.2%, p < 0.001) than the TSI group. Conclusions: Patients with an initial diagnosis of concurrent acute cholecystitis and cholangitis undergoing cholecystectomy after ERCP drainage during the same hospitalization period may receive some benefit in terms of clinical outcomes. Introduction Acute cholecystitis (AC) and acute cholangitis (ACL) are common biliary diseases in daily practice. Both AC and ACL are regarded as forms of complicated symptomatic cholecystolithiasis. ACL is a sequential condition due to obstruction of the bile duct, mostly by biliary stones traveling down from the gallbladder. The demographic data, pathophysiology, and treatment of these patients have been extensively studied, and several clinical guidelines, such as the 2018 Tokyo Guidelines (TG18) and the World Society of Emergency Surgery (WSES) guidelines, have proposed integrated principles of management for both AC and ACL. Published studies have shown that in Western populations approximately 10% to 20% of patients who undergo cholecystectomy for cholecystolithiasis have coexisting choledocholithiasis, with an even higher percentage, up to 30%, in the Chinese population. In addition to coexisting cholecystolithiasis and choledocholithiasis, concurrent AC and ACL is another unique variant encountered in daily clinical practice, and correctly diagnosing the coexistence of both diseases is not easy because of overlapping clinical presentations in some aspects. While the clinical guidelines have addressed the management of AC and ACL well, management guidelines for AC complicated by ACL have not been proposed. Owing to the lack of diagnostic principles and the poor definition of this specific group, there have been no convincing data to indicate the actual incidence or to guide proper management of this complicated biliary condition.
Although cholecystectomy is the definitive treatment for symptomatic cholecystolithiasis, biliary drainage, especially by endoscopic retrograde cholangiopancreatography (ERCP), is an important modality for impacted bile duct stones, ACL, and biliary pancreatitis. For impacted common bile duct stones, cholecystectomy has been proposed within two weeks after ERCP. However, there is no evidence for the standard sequencing of cholecystectomy and ERCP for patients with concurrent AC and ACL. In the present study, we identified a group of patients with concurrent AC and ACL based on current clinical guidelines, all of whom had undergone successful biliary drainage via ERCP. We tried to investigate the optimal timing of cholecystectomy after biliary drainage in this selected cohort. Methods and Materials From January 2012 to November 2017, we retrospectively screened all hospitalized patients undergoing ERCP at Chang Gung Memorial Hospital (CGMH). This study was approved by the Internal Review Board of CGMH under reference number 201801210B0. Due to the retrospective design of our study, informed consent was waived by the ethics committee for the entire study. We then identified patients with concurrent AC and ACL from the aforementioned cohort. The patients of interest were clinically diagnosed with concurrent AC and ACL (cACC). The diagnostic criteria for cACC in the present study included clinical symptoms/signs such as tenderness over the right upper abdomen, Murphy's sign, and fever > 38 °C; laboratory findings such as white blood cell count < 4000 or > 10,000/μL, CRP ≥ 1 mg/dL, and total bilirubin ≥ 2 mg/dL; evidence of biliary dilatation, an etiology of biliary stone/stricture, and findings characteristic of AC on imaging studies (Figure 1); and pus-like or turbid drainage content observed under endoscopic view when ERCP was conducted. A diagnosis of biliary pancreatitis was also made, defined as epigastric pain, especially radiating to the back; initial serum lipase and amylase levels at least three times the upper limit of the normal range; and characteristic findings of acute pancreatitis on CT imaging. Biliary pancreatitis was identified before therapeutic management strategies, such as ERCP, antibiotics, or surgical intervention, were applied. Patients who underwent ERCP after cholecystectomy and those with dysfunction of at least one organ, a biliary or hepatic malignancy, a history of previous biliary tract infection or abdominal surgery, or Mirizzi's syndrome were all excluded. Since intended laparoscopic cholecystectomy (LC) is now the standard for symptomatic cholecystolithiasis, we also excluded patients who underwent intended open cholecystectomy. We then categorized the selected patients into a "one-stage intervention" (OSI) group, which received successful ERCP and urgent cholecystectomy during the same hospitalization, and a "two-stage intervention" (TSI) group, which received successful ERCP for ACL and conservative treatment for AC, with interval cholecystectomy later. Therapeutic Strategies for Patients with Concurrent Acute Cholangitis and Acute Cholecystitis Empiric antibiotics with adequate hydration were prescribed as initial treatment. The antibiotic regimen could be adjusted later based on the results of microbiology tests or an unsatisfactory clinical response to the initial empiric drugs. Every patient underwent ERCP with endoscopic papillotomy for successful biliary decompression and/or stone retrieval.
Once cholangitis had improved, cholecystectomy, percutaneous transhepatic gallbladder drainage (PTGBD), or conservative treatment with antibiotics only was then arranged. Preoperative Assessment and Surgical Procedures of Cholecystectomy After the cACC patients in our cohort underwent biliary drainage, cholecystectomy was arranged during the same hospitalization or performed later (interval cholecystectomy). Preoperative evaluations included plain chest film, electrocardiography, laboratory tests, information on underlying conditions, and anesthetic risks. Intended LC was arranged for each patient using a standardized four-port procedure, or a modified three-port procedure in selected patients. Conversion to the open procedure, namely laparotomy, was indicated based on the surgeon's judgment. Clinical Information Clinical data, including sex, age, Charlson Comorbidity Index (CCI), American Society of Anesthesiologists (ASA) score, clinical presentation, duration of the course, and disease severity, were all extracted from the medical records. Laboratory data were also collected, including complete blood count, C-reactive protein, bilirubin, hepatic enzymes, amylase, lipase, creatinine, and coagulation factors. The results of microbiological examination of pathogens were also obtained. Imaging studies, such as abdominal sonography, abdominal computed tomography, ERCP, and magnetic resonance cholangiopancreatography, were reviewed to confirm the diagnosis of the enrolled subjects. Outcome Evaluation General outcomes included recurrent biliary events (RBE), length of stay of the first hospitalization (1st LOS), and total length of hospitalization (tLOS). An RBE was defined as recurrent cholecystitis or recurrent cholangitis following the first admission. The tLOS is the total number of hospitalization days and includes the first admission, admission for interval LC, and admission for RBE during the follow-up period. We also assessed the surgical outcomes of the two groups in our cohort. The surgical outcomes consisted of the operative time of cholecystectomy, postoperative days until discharge, intraoperative blood transfusion, conversion to open procedure, and postoperative complications, including superficial surgical site infection, postoperative bile leakage, intra-abdominal abscess, postoperative sepsis, and intra-abdominal bleeding. Recurrent Biliary Events For patients in the OSI group, RBE or overall RBE was defined as the first identified RBE during the follow-up period. For patients in the TSI group, two different RBEs were determined based on the time point of surgery: overall RBE and RBE after interval surgery. Since RBE is one of our outcomes of interest, we compared the OSI group with the TSI group in terms of overall RBE vs. overall RBE and overall RBE vs. RBE after interval surgery. Independent risk factors for RBE were also analyzed. Statistical Analysis We used R and the Statistical Package for the Social Sciences (SPSS) for the analyses. Statistical methods included the chi-square test and Fisher's exact test for categorical variables and the Mann-Whitney U-test for continuous variables. Logistic regression was used to identify risk factors. Values of p < 0.05 were considered statistically significant. Results From January 2012 to November 2017 (Figure 2), 4537 patients diagnosed with cholelithiasis and biliary obstruction underwent ERCP at our institute, of whom 693 underwent intended LC. Among this subpopulation, 185 were diagnosed with cACC.
After excluding 3 patients with ERCP performed after cholecystectomy, 16 patients with dysfunction of at least one organ, 6 patients with biliary or hepatic malignancy, 4 patients with a history of biliary tract infection, 6 patients with a history of abdominal surgery, and 3 patients with Mirizzi syndrome, 147 patients were ultimately enrolled in our study. Ninety-six of them underwent OSI, and fifty-one underwent TSI. There was no significant difference in the clinical data between the two groups (Table 1). In the TSI group, the median duration from ERCP to definitive cholecystectomy was 2.5 months. Analysis of General Outcomes In the general outcome analysis (Table 1), the 1st LOS was significantly longer in the OSI group than in the TSI group (8.50 days vs. 7.18 days, p = 0.001). In contrast, the tLOS was significantly shorter in the OSI group than in the TSI group (9.00 days vs. 17.87 days, p < 0.001). There was no in-hospital mortality in either group. The causes of the five cases of overall mortality were aspiration pneumonia (n = 2), acute-on-chronic renal failure (n = 1), pulmonary embolism (n = 1), and terminal-stage lung adenocarcinoma (n = 1), and there was no significant difference in overall mortality (4.17% vs. 1.96%, p = 0.659) between the two groups. In the RBE analysis, the OSI group presented a significantly lower incidence of overall RBE (5.21% vs. 41.18%, p < 0.001) than the TSI group, but no significant difference in RBE was found between the two groups after cholecystectomy had been performed (5.21% vs. 13.73%, p = 0.111). Analysis of Surgical Outcomes Regarding surgical outcomes (Table 3), there was no significant difference between the OSI group and the TSI group in general, including in intraoperative blood transfusion (1.0% vs. 2.0%, p = 1.000), conversion to the open procedure (3.1% vs. 7.8%, p = 0.236), postoperative complications (6.3% vs. 11.8%, p = 0.342), operation time (118.0 min vs. 125.8 min, p = 0.869), and postoperative days until discharge (3.37 days vs. 4.02 days, p = 0.643). Table 3. Operative outcome analysis between the one-stage intervention (OSI) group and the two-stage intervention (TSI) group. At our institution, 1400 to 1500 patients (2015-2019) undergo LC annually, and the average conversion rate is approximately 1.5%, which is lower than that of the cACC group in our study (4.76%). Discussion Most cases of cholelithiasis, or biliary stones, originate from the gallbladder; de novo biliary stones arising in the bile duct are relatively rare. Gallbladder stones may elicit symptoms localized to the gallbladder, and symptoms resulting from stone migration, such as impacted common bile duct stones and biliary pancreatitis, can also occur. In the management of impacted common bile duct stones, ACL, biliary pancreatitis, and acute cholecystitis, cholecystectomy has been recognized as part of the treatment plan. Without cholecystectomy, the greatest concern is recurrent symptoms and biliary infection, a concept similar to RBE in the present study. Lee et al. reported on 100 patients with common bile duct stones who underwent ERCP for stone removal. With a mean observation period of 18 months, 17% of their cohort suffered from AC, and another 13% were diagnosed with recurrent common bile duct stones. In our previous work on patients with AC undergoing percutaneous cholecystostomy, patients suffered a cumulative incidence of biliary events of 29.8% within a median follow-up time of 4.27 months. Furthermore, a systematic review of 1841 patients conducted by Loozen et al. showed an overall recurrent gallstone-related disease rate of 22%. Therefore, cholecystectomy seems to be the mainstay of management for complicated cholelithiasis.
The European Society of Gastrointestinal Endoscopy has advised that LC be performed no more than two weeks after successful stone retrieval by ERCP for patients with impacted common bile duct stones, and this practice may optimize clinical outcomes. While simultaneous treatment, namely cholecystectomy, has also been suggested for mild ACL according to TG18, half of the experts agreed that the same approach could be used for moderate ACL. In general, cholecystectomy is a definitive treatment for a spectrum of biliary conditions, and the timing of surgery should be tailored to the situation. In the present work, we tried to determine the optimal timing of cholecystectomy under a specific condition of biliary infection, namely concurrent AC and ACL (defined as cACC in the present study). While strategies for managing complicated biliary tract infections or conditions have been proposed in several clinical guidelines or studies with high-level evidence, few have addressed cACC. Therefore, the present study aimed to investigate the appropriate strategy for this complicated biliary infection. Although there is scarce evidence related to the timing of cholecystectomy for cACC, one study from Japan conducted by Abe et al. in 2019 reported on 101 patients diagnosed with both AC and ACL and 151 patients diagnosed with AC only. In their cohort, 78.2% of patients with cACC and 82.1% of AC-only patients underwent LC. While there was no difference in surgical complications, patients with cACC had a longer hospital stay. Abe et al. claimed that early cholecystectomy within 14 days after symptom onset could be safely performed for patients with concomitant AC and ACL, showing surgical outcomes similar to those of patients with AC only. However, patients in their cohort were heterogeneous in several respects: a significant portion underwent an open procedure, biliary drainage was not performed for every individual due to disease severity, and the rate of conversion to open surgery was not provided. Although relevant evidence is scarce, this challenging clinical situation should be investigated, since it can be a scenario encountered in daily practice. In our work, we enrolled patients with cACC only and tried to determine the most suitable strategy: one-stage or two-stage treatment. Our results favored the one-stage intervention strategy. According to our results, the one-stage intervention strategy, i.e., intended LC after ERCP during the same hospitalization, may confer surgical outcomes similar to those of interval cholecystectomy. In addition, RBE was significantly lower in the OSI group. While the 1st LOS was longer in the OSI group, the tLOS was considerably shorter than in the TSI group. In addition, the present study also revealed a similar risk of RBE once patients in the TSI group had undergone cholecystectomy. Several studies have focused on the timing of cholecystectomy after successful treatment of ACL without AC. All these studies demonstrated surgical outcomes and RBE similar to our results under cACC conditions. Therefore, regardless of whether AC exists, cholecystectomy should be considered after successful bile duct clearance and drainage. If the patient is physically fit and suffers from a complicated biliary infection, cholecystectomy may be arranged as soon as successful medical treatment and drainage are achieved. There were several limitations in the present work.
First, this was a retrospective study, and individual surgeons made the decision on the timing of intended LC immediately after ERCP. Even without statistical differences in age, ASA score, and CCI, we believe that selection bias may still exist. Second, the diagnosis of cACC in our study was simply based on a modification of the current criteria for the diagnosis of AC and ACL. However, the clinical spectra of AC and ACL may overlap, which is the primary reason why we did not consider the severity of the individual conditions (AC and ACL) in our investigation. Third, we did not include the time from ERCP to intended LC in the analysis. Since there were few cases in the study, it was difficult to perform further subgroup analysis to precisely define the optimal surgical timing. We can only claim that patients could benefit from surgery during the same hospitalization, but we cannot specifically point out how many days after ERCP surgery should be arranged. Conclusions In conclusion, patients with an initial diagnosis of concurrent AC and ACL, namely cACC in the present study, who are physically capable of tolerating surgery and who undergo ERCP and cholecystectomy during the same hospitalization period may receive some benefit in terms of favorable outcomes. Further studies on a larger scale are necessary to investigate issues related to incidence and severity assessment and to validate the strategy of one-stage intervention proposed in the present study.
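The analyses above were carried out in R and SPSS; purely as an illustration, the headline RBE comparison can be reproduced from the published percentages (back-calculated to 5/96 events in the OSI group and 21/51 in the TSI group) with SciPy. This is a sketch, not the authors' analysis script:

```python
from scipy.stats import fisher_exact, chi2_contingency

# Overall recurrent biliary events (RBE): 5/96 (OSI) vs. 21/51 (TSI),
# counts back-calculated from the reported 5.21% and 41.18%.
table = [[5, 96 - 5],
         [21, 51 - 21]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"Fisher exact p = {p_fisher:.2e}, chi-square p = {p_chi2:.2e}")
# Both p-values are far below 0.05, consistent with the reported p < 0.001.
```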
Flash Flood Assessment for Wadi Mousa City, Jordan This paper aims to assess the risks due to potential flash flood hazards in Wadi Mousa, to determine the magnitude of flows for flash flood hazards, and to construct floodplain zone maps for the selected flood return periods of 25, 50, 75 and 100 years. Wadi Mousa is considered an ephemeral wadi with intermittent flash floods whose flows can exceed the 298 m³/s threshold. Its floods, however, do not flow every year. Nevertheless, in certain years the extent of flooding can be huge. The surface drainage may be broadly divided into sub-catchments, namely Wadi Als-Sader, Wadi Jelwakh with Wadi Khaleel, and Wadi Al-Maghir; Wadi Zarraba is the confluence of the three sub-catchments. The study covers an area of 53.3 km² comprising a semi-arid catchment subject to infrequent flash floods generated by heavy rainstorms, which flow to Wadi Araba. The average annual rainfall of Wadi Mousa was calculated as 178 mm, and the average annual evapotranspiration is 1300 mm per year. The runoff analysis indicates that only rainfall events exceeding 22 mm within a 24-hour period would generate runoff. Introduction Flood risk analysis provides a rational basis for flood management decision-making both at a national scale and locally. National-scale flood risk assessment can provide consistent information to support the development of flood management policy, the allocation of resources, and the monitoring of the performance of flood mitigation activities (Hall, J. W., et al. 2003). Wadi Mousa has a notorious history of extreme floods that have caused severe damage to installations located in its floodway and for considerable distances downstream. As documented in most of the available studies (e.g., Electricite de France, EDF 1995), the flood that occurred in April 1963 was an extreme event, probably with a 100-year return period (Al-Weshah, R. and F. El-Khoury, 1999). During this extreme event, the intense and sudden rainfall caused flood water to flow from all wadis into the Wadi Mousa outlet. The flood carried a huge sediment load which blocked most of the hydraulic structures in the wadi. The dam at the entrance of the Siq was filled with sediment; consequently, flood water overtopped the dam and entered the Siq. Despite the great emergency efforts, twenty French tourists lost their lives (Al-Weshah, R. and F. El-Khoury, 1999). In 1991 another flood, probably of around a 50-year return period, washed away two culverts upstream of the Siq and caused a serious problem for visitors and tourists. Although the flood water did not enter the Siq, traces of high water within Wadi Al-Matahah indicated that the water level reached an elevation of more than 12 m above the wadi bed. Other recent major floods in Petra occurred in January 1995 and November 1996. During these events, the Siq entrance area was flooded and tourists had to be rescued. In more recent times, many deaths have been recorded in the same area (Al-Weshah, R. and F. El-Khoury, 1999). Floods and flash floods have therefore historically posed a major threat to Wadi Mousa City, and hydrologists rank it as the wadi with the highest risk/damage in Jordan; geomorphological investigation of the wadi bed can easily prove that. Wadi Mousa is considered an ephemeral wadi with intermittent flash floods whose flows can exceed the 298 m³/s threshold.
Its floods, however, do not flow every year (historical data for Wadi Mousa show that 4 water years out of 22 had zero flow). Nevertheless, in certain years the extent of the flood can be huge (a daily discharge of 2.42 m³/s in January 1964). Its floods may be hydraulically categorized as a hybrid of debris and turbid flows. Study Area The study area covers 53.3 km² and comprises a semi-arid catchment with infrequent flash floods generated by heavy rainstorms over the catchment, flowing to Wadi Araba. The catchment area is considered small by Jordanian standards; the total watershed area amounts to around 80 km², of which only 53.3 km² effectively contributes to the floods upstream of Wadi Mousa city, as shown in Figure 1(a). Methodology and Approach Ten locations of the flood flow were determined to delineate the drainage boundary of the watershed, Figure 1(b). Proper flood risk protection requires a well-designed drainage system; therefore, depending on the extent and magnitude of flooding, it was decided to build an integrated computerized hydraulic open-channel model to determine the flood water level and discharge at every tributary of the critical reach, rather than estimating the discharge magnitude and water depth at single locations. This requires a determination of the quantity of runoff reaching the drainage structures and an accurate analysis of water flow through the structures in order to size them properly. The Haestad FlowMaster software was used to estimate the results. Data from various sources were collected in order to have representative and homogeneous information about the study catchment. Unfortunately, no instantaneous flood runoff data are available for Wadi Mousa. For this purpose, all the available information in terms of meteorological rainfall, topographical maps and flood flow records was used in conjunction with the best available tools to estimate the surface runoff. The raw data created by the Ministry of Water and Irrigation were collected and revised. Other data were collected from GTZ and Agrar- und Hydrotechnik (1977), who, in cooperation with the Natural Resources Authority, prepared the National Water Master Plan for Jordan study. Hydrological parameters related to most of the basins in the country were estimated and calculated. The average annual rainfall over the catchments is 225 mm, with a runoff coefficient of 1.2% and an average annual flood flow of 0.24 MCM. From the flood flow data analysis, the average runoff of Wadi Mousa at the culvert is 0.63 MCM per year, with a mean annual rainfall over the sub-catchment of 35.47 MCM; hence, given the catchment area of 21.4 km², the mean runoff coefficient can be calculated. The collected raw data were treated with different approaches and statistical techniques to determine the important factors in the examination of their quality. The surface drainage of the study area may be broadly divided into three sub-catchments according to drainage: the biggest tributary is the Wadi Als-Sader sub-catchment, the second is the Wadi Jelwakh with Wadi Khaleel sub-catchment, and the third is the Wadi Al Moghare sub-catchment. Wadi Zarraba is the confluence of the three sub-catchments and represents about 53.3 km².
All tributaries of Wadi Mousa are ephemeral except Wadi Als-Sader; a small amount of water from the Als-Sader spring can be seen in the wadi course all year round (Figure 1). The Unit Hydrograph The unit hydrograph theory, which is based on the property of proportionality and the principle of superposition, is used in this study to determine the peak discharge and its magnitude. The results obtained by applying these methods (average runoff of 0.63 MCM per year, average rainfall of 35.47 MCM, average runoff coefficient of 2.3%) are acceptable for hydrological simulation purposes. For each individual storm, the effective rainfall is calculated by applying the constant loss rate method. The effective rainfall can then be taken as the input for calculating the unit hydrograph by the matrix inversion method. The effective rainfall must be calculated before the runoff discharge can be computed. To calculate the effective rainfall, the Intensity-Duration-Frequency (IDF) curve was created, as shown in Figure 2(b). Using the annual duration series is common in the probabilistic approach of analysis because there is a theoretical basis for extrapolating annual series beyond the range of observation, and the return periods (Tr) of 25, 50, 75 and 100 years were selected. The data were analysed to establish the best fit for the sequences of random variables, so that the 100-year line represents rainfall events that have a probability of occurring once every 100 years. The event expected to occur on average once every N years, X_Tr, can be computed using the Gumbel distribution of extremes:

X_Tr = X̄ + K · σ_{n-1}

where X_Tr is the magnitude of the event reached or exceeded on average once in Tr years, X̄ is the mean value of the sample, K is the frequency factor, and σ_{n-1} is the standard deviation of the sample. The frequency factor K is a function of the recurrence interval (return period) Tr, as given by Gumbel. In this approach the extreme value distribution is used, so the maximum annual floods for the generalized data from the rainfall storms are assumed to be distributed in accordance with the distribution of extreme values. The K values are calculated using the equation given by Gumbel:

K = (Y_Tr − Ȳ_n) / S_n

where Ȳ_n is the reduced mean, S_n is the reduced standard deviation, and Y_Tr is the reduced variate. The Y_Tr parameter, related to the return period, is calculated by the equation given by Gumbel as follows:

Y_Tr = −ln[ ln( Tr / (Tr − 1) ) ]

After calculating the values of Y_Tr, they are combined with the values of Ȳ_n and S_n for the selected return periods to obtain K, which is then substituted into the extreme value equation to compute X_Tr. The short rainfall durations used in the computation were transformed into the corresponding intensities (mm/hr). Rainfall intensity can be calculated using an IDF relationship of the form

I = a / T^b

where I is the intensity of rainfall in mm/hr, T is the duration in hours (1, 2, 3, ...), and a and b are the coefficients of the IDF equation for the rain gauging station. By applying this equation and using the best-fit line, the regression equations were obtained, and the station coefficients for the IDF equation (a and b) were determined for each site for the selected return periods. The rainfall increment was calculated, and the intensity of the rainfall is obtained by applying this equation. Estimation Of Peak Discharge Characteristics of the watershed area directly affect the hydrologic analysis. Basic features of the watershed basin include size, shape, slope, land use, soil type, storage, and orientation.
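Before moving on to the peak-discharge estimation, the Gumbel frequency-factor chain above can be sketched in a few lines. The sketch uses the asymptotic reduced mean and reduced standard deviation rather than Gumbel's finite-sample tables, and the station statistics below are hypothetical placeholders, not the Wadi Mousa values:

```python
import math

def gumbel_reduced_variate(T):
    """Reduced variate Y_Tr = -ln(ln(Tr / (Tr - 1)))."""
    return -math.log(math.log(T / (T - 1)))

def gumbel_quantile(mean, std, T, y_n=0.5772, s_n=1.2825):
    """X_Tr = mean + K * std with K = (Y_Tr - y_n) / s_n.
    y_n and s_n are the reduced mean and reduced standard deviation;
    the asymptotic values are used here, whereas the paper reads them
    from Gumbel's tables for the actual record length."""
    K = (gumbel_reduced_variate(T) - y_n) / s_n
    return mean + K * std

# Hypothetical annual-maximum daily rainfall statistics (mm), for illustration only.
mean_p, std_p = 35.0, 12.0
for T in (25, 50, 75, 100):
    print(T, "yr:", round(gumbel_quantile(mean_p, std_p, T), 1), "mm")
```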
The size of the watershed basin is the most important characteristic affecting the determination of the total runoff, while the shape of the watershed primarily affects the rate at which water flows to the main channel. Peak flows may be reduced by the effective storage of drainage water. The geometric characteristics of each sub-catchment were identified in order to calculate the time of concentration, the time to base and other parameters used later in the unit hydrographs. The relief and the slope of the wadi bed were calculated for each sub-catchment in feet (then converted to metres) from the map, and the areas were determined in square miles (then converted to square kilometres). The information obtained from these calculations was used to estimate the peak runoff for each of the sub-catchments in this study.

Hydrological Model Construction

A hydrological model was constructed to determine the flood risk according to the runoff in the main tributaries of Wadi Mousa, and the following procedure was adopted. To estimate the peak discharge with the hydrologic model of the Wadi Mousa watershed, the sub-catchment parameters were determined from the available Digital Elevation Model (DEM), calculated and stored in the sub-catchment attribute table. To determine how the runoff is distributed over time, a time-dependent factor must be introduced: the time of concentration ($T_c$), which is the time required for the most remote drop of water to reach the outlet of the watershed, was calculated using Kirpich's formula. $T_c$ mainly depends on the length of the flow path and the elevation difference between its highest and lowest points. According to the hydrological parameters of the watersheds, the estimated peak discharge at the outlet of the Wadi Mousa catchment could reach 297 m$^3$/s, which is the peak discharge for wadi cross sections B2, A, B3 and B1.

Rainfall-Runoff Model Approach

There are two basic components to modelling flash flood scenarios: the first is to convert rainfall into runoff, and the second is to determine how that runoff is routed to the catchment outlet. The SCS curve number method (CN: 0-100) is used to establish the initial soil moisture condition and the infiltration characteristics. In this study a CN of 70 was chosen, and the standard SCS curve number method is based on the following relationship between rainfall depth $P$ and runoff depth $Q$:

$Q = \dfrac{(P - I_a)^2}{P - I_a + S}$ for $P > I_a$ (and $Q = 0$ otherwise),

where $I_a$ is the initial abstraction ($I_a = 0.2S$), $Q$ is the surface runoff (mm), $P$ is the storm rainfall (mm) and $S$ is the potential soil retention (mm). The initial abstraction (the losses due to infiltration and storage in depressions) was calculated for the selected return periods of 25, 50, 75 and 100 years, and the excess rainfall flowing into the sub-catchments was calculated with the equation above. The potential retention $S$ and the initial loss $I_a$ were computed as 109 mm and 22 mm respectively, and these values were used in the hydrological model of the Wadi Mousa watershed.

Calculations Of Flood Hydrographs and Discharge

The intensity of rainfall was obtained from the IDF curve of the Wadi Mousa autographic rain gauging station for the selected return periods of 25, 50, 75 and 100 years. The flood discharges were then computed using the effective rainfall of the 25, 50, 75 and 100 year return periods for the Wadi Mousa sub-catchments.
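A compact sketch of the two relationships referenced above is given below. The Kirpich coefficients and the CN-to-S conversion are the commonly used metric forms rather than expressions quoted in this paper, and the channel length, slope and storm depth are placeholders; with CN = 70 the sketch does, however, reproduce the potential retention of about 109 mm and the initial abstraction of about 22 mm stated above.

def kirpich_tc(length_m, slope):
    """Time of concentration in minutes (Kirpich); length in metres, slope in m/m."""
    return 0.0195 * length_m**0.77 * slope**-0.385

def scs_runoff_depth(p_mm, cn):
    """SCS curve-number runoff depth (mm) for a storm rainfall p_mm."""
    s = 25400.0 / cn - 254.0      # potential retention (mm)
    ia = 0.2 * s                  # initial abstraction (mm)
    q = 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)
    return s, ia, q

s, ia, q = scs_runoff_depth(p_mm=60.0, cn=70)
print(f"S = {s:.0f} mm, Ia = {ia:.0f} mm, Q = {q:.1f} mm")       # S ~ 109 mm, Ia ~ 22 mm
print(f"Tc = {kirpich_tc(length_m=8000, slope=0.05):.0f} min")   # placeholder geometry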
The total discharge values obtained from the calculation, 1.02, 1.18, 1.91 and 2.52 MCM for the selected return periods of 25, 50, 75 and 100 years respectively, were used as input to the FlowMaster software to calculate the water level at the selected sites.

Flash Flood Hazard Scenarios

The Haestad model (FlowMaster) was identified as the model to be used for calculating the water levels of the antecedent floods in the selected wadi courses. The flow regime of the 100-year flood in Wadi Mousa is supercritical, where the normal water surface elevation is lower than the critical water elevation. This means that any major obstacle in the wadi channels will result in hydraulic jumps, increasing the depth of flow to subcritical levels with a potential to reach depths of about 2 to 4 metres, further endangering the road. In the supercritical regime, the depths ranged from a minimum of 2 metres to a maximum of about 4.65 metres of critical depth. The 100-year flood velocities reach extremely high values of 12.3 m/s, and the flood discharge as calculated in this study can reach 298 m$^3$/s at the Wadi Mousa outlet.

Runoff Frequency Analysis

This section attempts to evaluate the frequency and magnitude of the flood flow at the selected sites. The approach used here, based on the daily series of the flood magnitudes occurring in each water year, is very common in hydrological studies. The runoff analysis indicates that only rainfall events exceeding 22 mm within a 24-hour period generate runoff, so all rainy days with values above this previously calculated threshold of 22 mm were used in this section. The series of flood magnitudes is ranked in order of descending magnitude. The recurrence period $T_r$ of a given event is the average number of years within which the event is expected to be equalled or exceeded, Chow. There are several methods to determine $T_r$; the most commonly used is given by Weibull's equation as:

$T_r = \dfrac{n + 1}{m}$    (10)

where $T_r$ is the return period of the event, $n$ is the number of years of record, and $m$ is the rank of the item in the series, $m$ being one for the largest. A short numerical illustration of this ranking is given after the floodplain zones below.

Discussion, Conclusions And Recommendations

Hydraulic cross sections for the selected sites, together with the analyses of long-term rainfall data and flood-flow discharge measurements, showed that the Wadi Mousa sub-catchments have experienced several floods. The analysis and computation of the flood hydraulic properties reveal that parts of the watercourse will be submerged in the 25, 50, 75 and 100-year flood return periods and that the surrounding area may be exposed to excessive flood velocities, putting the neighbouring land at high risk. Little correlation between the measured and the estimated runoff was found, owing to the non-homogeneity between the rainfall records and the flow records. Runoff coefficients calculated from the measured floods reveal a low value, only 2.3% of the rainfall. This could be due to the limited number of storms that are greater than the initial abstraction and to the nature of the Wadi Mousa watershed. Based on the results obtained from the unit hydrograph calculation, the floodplain zones along the main water paths were determined with the aid of ArcGIS software. Floodplains are geographical areas within which the likelihood of flooding is in a particular range.
Four types, or levels, of floodplain were defined for the purposes of this study, as follows.

25 year Floodplain Zone: the probability of flooding from the wadi is highest (greater than 4%, or 1 in 25 years), as can be seen in Figure (3a). Two geo-units are considered to have a high probability of flooding, Geo-Units 474 and 226; the land use of the first is commercial and that of the second is residential development. Most types of development would be considered inappropriate in this zone, and development in this zone should be avoided.

50 year Floodplain Zone: moderate probability of flooding (greater than 2%, or 1 in 50 years), as can be seen in Figure (3b). Three geo-units are considered to have a high probability of flooding, Geo-Units 226, 273 and 474, with commercial and residential development land uses. Highly vulnerable development, such as hotels, buildings and markets, would generally be considered inappropriate in this zone.

75 year Floodplain Zone: low to moderate probability of flooding (greater than 1.5%, or 1 in 75 years), as can be seen in Figure (3c). Six geo-units are considered to have a high probability of flooding: Geo-Units 247, 325 and 397, as well as the units mentioned in the 50 year Floodplain Zone. The land uses of these units are commercial, agriculture, tourist and tourist investment respectively.

100 year Floodplain Zone: low probability of flooding (1%, or 1 in 100 years), as can be seen in Figure (3d). Seven geo-units are considered to have a high probability of flooding: Geo-Units 438, 473, 474 and 485, as well as the units mentioned in the 75 year Floodplain Zone. The land uses of these units are commercial, agriculture, tourist and tourist investment respectively. Development in this zone is appropriate from a flood risk perspective.
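Both the Weibull ranking of equation (10) and the exceedance probabilities attached to the floodplain zones above can be illustrated with a short sketch. The flow values and the 30-year planning horizon below are placeholders chosen for the example, not data from this study.

# Weibull plotting position T_r = (n + 1) / m on a placeholder series of
# annual maximum flows, followed by the conversion of a return period into
# an annual exceedance probability and into the chance of at least one such
# flood during a planning horizon.
annual_max_flow = [2.42, 1.10, 0.85, 0.00, 1.75, 0.40, 0.95, 0.00, 1.30, 0.60]  # m^3/s, placeholders

n_years = len(annual_max_flow)
for m, q in enumerate(sorted(annual_max_flow, reverse=True), start=1):
    print(f"rank m = {m:2d}: Q = {q:4.2f} m^3/s, T_r = {(n_years + 1) / m:5.2f} yr")

horizon = 30  # years, arbitrary example
for tr in (25, 50, 100):
    p_annual = 1.0 / tr
    p_horizon = 1.0 - (1.0 - p_annual) ** horizon
    print(f"T_r = {tr:>3} yr: annual exceedance = {p_annual:.1%}, "
          f"chance of at least one event in {horizon} yr = {p_horizon:.1%}")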
This invention relates to data processing.
As an example of data processing, electronic games are well known and may be supplied on a variety of distribution media, such as magnetic and/or optical discs. General computers or more dedicated games consoles may be used to play these games.
It is known for games to make use of audio and/or video files that are stored on the distribution medium. For example, karaoke games such as the Sony® SingStar™ game are known in which a player sings into a microphone connected to the computer or games console. The computer or games console may display video and/or play an audio track, the intention being that the player sings along. Lyrics and/or notes may be displayed to the player so that the player knows what to sing and at what pitch.
Only a limited number of audio/video files may be stored on the distribution medium. As such, the player has only a limited number of tracks to select when playing the game.
One prior art game is known to allow (i) a “general” music CD (not related to the game) to be inserted into the computer or games console and (ii) the player to select a track from this general music CD as the audio file to be played.
Another prior art game is known to allow a player to insert an “expansion disc” into the computer or games console, the expansion disc being a medium that simply acts as an audio/video data storage medium.
It is known to release different versions of a game. Such different versions often have enhancements, such as new game functionality. Additionally, the different versions may have different audio/video files stored on them, for example to entice a user to purchase the game. However, when playing one version of a game, if the player wishes to play a track from another version of that game, it is necessary to exit the current game version and load that other game version into the computer or games console. This is often time consuming and detracts from the enjoyment of the game.
More Trouble Ahead for J.C. Penney?
J.C. Penney is striking back against recent reports, saying it's not having any problems with its vendors. Shares have been moving around as a result. David Benoit discusses on MoneyBeat. Photo: AP.
Inquiry-based Learning Environment Using Mobile Devices in Math Classrooms

SMILE (Stanford Mobile Inquiry-based Learning Environment) is an inquiry-based mobile learning framework designed to promote student-centered inquiry and reflection leveraging mobile media in the classroom setting. Students can quickly create their own inquiry items based on their own learning and knowledge using SMILE. This paper introduces seven phases of SMILE that are applicable to math classroom environments, and presents the findings from SMILE implementation studies in Argentina and Indonesia. The students had a mobile inquiry-based learning experience in math, facilitated by teachers and researchers who tapped into the affordances of mobile technologies to support the students' own question generation in class. SMILE was found to engage students in heterogeneous/mixed ability classes and promote both team collaboration and competition in learning math.
A longitudinal study on anthropometric and clinical development of Indian children adopted in Sweden. I. Clinical and anthropometric condition at arrival. One hundred and fourteen children (60% girls) adopted from India through five major adoption organizations, were recruited consecutively. This paper describes the environment of the children in India and in Sweden, discusses the certainty of the ages and reports their condition at arrival in Sweden. The median age at arrival was 9.3 months, 62% being below one year of age (range 3-72 months). Infectious diseases similar in kind and frequency to those noted in child populations in developing countries, were found. Height/age and weight/age mean values were approximately -2 standard deviation scores (SDS) of the NCHS/WHO standard, which is similar to the anthropometric status of Indian average children. There were no significant sex differences. Thirty-seven birth weights were known, the majority below 2,500 g. Psychomotor retardation was found in 29% of the children. In the children with stunting and in those with weight/age less than -3 SDS at arrival there were high percentages of psychomotor retardation, anaemia and combined wasting and stunting. Therefore these children should be regarded as a risk group and be followed up with special care. |
Three family members, including nine-year-old twin brothers, were found dead in Ireland yesterday, in what police suspect may be a double murder and suicide.
The twins, identified by the Irish Times as Tom and Paddy O’Driscoll, were discovered by their younger brothers, aged three and five, in their home just outside Charleville, north county Cork, shortly before 5pm.
The discovery sparked a manhunt for their 20-year-old half-brother Jonathan, whose body was later found by police at 7pm about 15km from the scene of the first deaths, by the Awbeg river in the ruins of Buttevant Castle, outside the town of Buttevant.
It would appear the cause of death was suicide.
Garda detectives are not seeking anyone in connection with his death. The deputy state pathologist, Dr Michael Curtis, is due in Cork today to carry out postmortems on the bodies of the three brothers.
It is believed older brother Jonathan was minding the boys after collecting them from school, while their parents were six miles away in Killmallock, County Limerick.
A local priest spent around two hours with the family after the bodies were discovered.
Friar Tom Naughton of the Holy Cross Parish in Charleville, speaking to the Mirror, said: “We prayed together. We comforted them and assured them, especially of the community here in the area, that we were with them.
“It’s clear that anybody who suffers a tragedy is going to be upset and hurt at this time but it is quite raw.”
The alarm was raised when a member of the family arrived home and discovered the youngsters in the detached, pink and white bungalow just off the main Cork-Limerick road, an area of Charleville known as Deerpark.
Arrangements are being made for post-mortem examinations to be carried out on all three bodies at Cork University Hospital.
If the murder-suicide suspicions are confirmed, it will be the second incident of its kind in Ireland in just over six weeks.
Two brothers died in a tragedy at their home in rural Sligo in late July - nine-year-old Brandon Skeffington was found with stab wounds in the family home at Banada, Tourlestrane near Tubbercurry, before the body of his elder brother Shane junior, 21, was found in a shed beside the property.
It was the first murder-suicide to occur in Ireland for more than a year.
Rice-husk-templated hierarchical porous TiO2/SiO2 for enhanced bacterial removal. To further enhance the bacterial removal capability, we synthesize a biotemplated hierarchical porous material coupling chemical components and hierarchical microstructure, which is derived from rice husk. The results show that the chemical components and hierarchical microstructure of the prepared material could both be factors in enhancing the bacterial removal capability. On the basis of the experimental results, we propose a hypothetical enhanced bacterial removal mechanism model of the prepared material. Furthermore, we propose a hypothetical method of inferring bacterial physical removal effects of samples by their dye adsorption results. Also, the hypothetical method has been proven to be reasonable by the experimental results. This work provides a new paradigm for bacterial removal and can contribute to the development of new functional materials for enhanced bacterial removal in the future.
Sarkozy or Hollande? A Critical Choice for the Future of the EU
Latest Update 5/6/ 2012 at 11.00 AM PST: According to exit polls Hollande has defeated Sarkozy with 51.9 percent of the votes against 48.1 percent for the incumbent. Hollande is the first Socialist to be elected president in almost two decades since Mitterand. Celebrations are underway in France, notably at the Place de la Bastille in Paris.
On Sunday May 6, France will elect its next president. Besides the consequences for the French people, this presidential election has geopolitical and economic ramifications for the policies, the cohesion, and even the survival of the European Union. President Sarkozy’s campaign slogan is “A strong France”, while his opponent, socialist Francois Hollande, is running on “The time for change is now”. If Hollande is elected, can he counterbalance the rising power of Germany and make sure the voices and interests of the people of Greece, Spain, Portugal and Italy are heard?

Sarkozy has been trailing consistently behind Hollande in the polls, and lost the first round of the elections, on April 22nd, to the socialist candidate. Many French voters have had a falling out with the hyper-active Sarkozy, who can arguably be called the first American-style French president. Sarkozy’s constant media exposure has become an invasion of privacy for many French people, who like their politicians a bit more subdued than Americans do. Mr Hollande is fully aware of this collective psychological element when he says that, if elected, he will be a “normal” president.
Sarkozy: More Petain than De Gaulle
The very first words of Sarkozy’s electoral pamphlet sent to French voters living overseas are: “In a world in crisis, while so many of our European neighbours are going through great difficulties putting in jeopardy their unity and social model, I heard your anxiety, your sufferings and your expectations.” Playing on voters’ fear and anxiety is, of course, the oldest political trick in the book, and it doesn’t only apply to France. But Sarkozy is using it to portray himself as both a benevolent father and a therapist: the only man who can “protect” France and the French people in this “time of crisis”.

Ever since the French Revolution, the motto of France has been “Liberty, Equality, Fraternity”. However, this changed for the few years of the shameful pro-German government of Petain during World War II, when it became “Work, Family, Motherland”. In Sarkozy’s electoral pamphlet, the three words (work, family and motherland) are used several times as key elements, sometimes in bold letters, as in “making the choice of work” and Sarkozy’s pledge to “protect the family”.

But despite projecting a sense of strength, and despite his eagerness to keep himself, if not France, in the international limelight, Sarkozy has arguably made France considerably weaker by being the docile vassal of both Washington and Berlin. If his mentor, former president Chirac, had the courage to vehemently oppose the war in Iraq during the Bush era, Sarkozy is a weak opportunist going where the wind blows. Not only has he failed to oppose the sort of economic Fourth Reich imposed by Germany on the rest of the European Union, but he has also betrayed the traditional Gaullist foreign policy of maintaining France’s independence and its own voice, even if that meant opposing Washington. For decades, France brought this kind of balance to foreign policy, especially in the Middle East, by being the only Western European country voicing criticism of Israel’s policies and actively defending Palestinian rights. Despite his claims to the contrary, Sarkozy has made France weaker, not stronger, on the world stage; but, if elected, can Hollande reverse this situation and oppose, if needed, either President Obama or Chancellor Merkel?
Hollande: Can he rise to the occasion and make a stand for social justice?
Curiously, Hollande’s dullness and lack of charisma are part of his appeal for a lot of French voters. If Sarkozy is a brash political “rock star”, Hollande is a technocrat claiming to have a passion for social justice. Even though one of his electoral promises is to pay off France’s national debt by 2017, Hollande does not think the austerity policies favored by Merkel are good. To resolve France’s debt crisis, Hollande plans to cancel the tax cuts for the wealthy and the tax exemptions put in place by Sarkozy. Income tax would be set at 75 percent for incomes topping one million euros. Hollande would also bring back the retirement age to 60, with full benefits, for people who have worked 42 years. The 60,000 jobs cut by Sarkozy in public education would be recreated. But perhaps the most progressive of Hollande’s electoral promises are two of his pledges: the first is to legalize gay marriage and adoption by gay couples; the second is to grant the right to vote in local elections to residents without EU passports, provided that they have been legal residents for at least five years.
Will xenophobia win the French election?
Unfortunately, the decisive voting bloc is likely to be that of Marine Le Pen’s far-right Front National party. She came third in the first round with a strong performance of around 20 percent of the vote. Even though Marine Le Pen has avoided the blunt xenophobic statements of her father, Jean-Marie Le Pen, the message is still the same. She wants immigration to be reduced by 95 percent, as well as “national preferences” for French citizens in access to jobs and social services. The Front National platform also includes a withdrawal from the euro zone and the European Union, as well as reinstating the death penalty. Needless to say, in the past few weeks Sarkozy has been tailoring his discourse to the Front National’s electorate by pledging to reduce immigration by 50 percent.
Reflex Seizures Triggered by Exposure to Characters With Numerical Value Reflex seizures can be triggered by a variety of stimuli. We present a case with drug-resistant complex partial seizures originating in right temporal lobe triggered extensively by visual, auditory, and mental exposure to multidigit numbers. The patient was investigated in video-EEG monitoring unit and seizures were triggered by numerical stimuli. Scalp EEG findings suggested a right temporal focus but ictal semiological findings suspicious for an extratemporal area necessitated the invasive EEG study. A right anterior temporal seizure focus was established with invasive monitoring and cortical stimulation studies. Magnetic resonance imaging showed a cortical dysplasia in right anterior temporal lobe and ictal single-photon emission computed tomography confirmed the epileptogenic focus, leading to a right temporal lobectomy and amygdalohippocampectomy and a pathological diagnosis of focal cortical dysplasia type Ia. The patient is seizure-free at the end of the second postoperative year despite repeated exposures to numbers. To our knowledge, this is the first report of seizures triggered by numbers. It is also of particular importance as the reflex seizures are associated with a cortical lesion and it may suggest involvement of right anterior temporal lobe in numerical processing. |
Assignment of the backbone 1H and 15N NMR resonances of bacteriophage T4 lysozyme. The proton and nitrogen (15NH-H alpha-H beta) resonances of bacteriophage T4 lysozyme were assigned by 15N-aided 1H NMR. The assignments were directed from the backbone amide 1H-15N nuclei, with the heteronuclear single-multiple-quantum coherence (HSMQC) spectrum of uniformly 15N enriched protein serving as the master template for this work. The main-chain amide 1H-15N resonances and H alpha resonances were resolved and classified into 18 amino acid types by using HMQC and 15N-edited COSY measurements, respectively, of T4 lysozymes selectively enriched with one or more of alpha-15N-labeled Ala, Arg, Asn, Asp, Gly, Gln, Glu, Ile, Leu, Lys, Met, Phe, Ser, Thr, Trp, Tyr, or Val. The heteronuclear spectra were complemented by proton DQF-COSY and TOCSY spectra of unlabeled protein in H2O and D2O buffers, from which the H beta resonances of many residues were identified. The NOE cross peaks to almost every amide proton were resolved in 15N-edited NOESY spectra of the selectively 15N enriched protein samples. Residue specific assignments were determined by using NOE connectivities between protons in the 15NH-H alpha-H beta spin systems of known amino acid type. Additional assignments of the aromatic proton resonances were obtained from 1H NMR spectra of unlabeled and selectively deuterated protein samples. The secondary structure of T4 lysozyme indicated from a qualitative analysis of the NOESY data is consistent with the crystallographic model of the protein. |
Quantification of sensibility to conductivity changes in bipolar rheoencephalography The authors' purpose is to quantify the rate of bipolar rheoencephalogram originated by changes in scalp and brain conductivities. They use a four-layer sphere model of the head, which is solved by numerical methods. Array of solutions obtained when brain and scalp conductivities change separately, shows that only 7% of the rheoencephalogram is caused by changes in brain conductivity. |
See the 2018 prize money winners and Joey Chestnut's net worth.
Joey Chestnut set a new world record and became the 2018 Nathan’s Hot Dog Eating Contest champion, winning his 11th Mustard Belt after consuming 74 hot dogs and buns (HDBs) in a thrilling 10 min. The previous record of 73.5 HDBs was also set by Chestnut. Miki Sudo won the 2018 women’s division title and Nathan’s Pink Belt for the fifth time in a row, scarfing down an astonishing 37 HDBs in 10 min.
Chestnut can relish his victory having put some real mustard on his effort. He ended up besting Carmen Cincotti’s second place effort of 63 HDBs by 11, meaning Cincotti will now have to avoid being sauerkraut about his loss.
Take a break from your barbecue to learn more about the Nathan’s Hot Dog Eating Contest’s competitors, their net worths and career earnings, and fun Nathan’s hot dog stats to share with your friends over the grill.
Table: Rank, Name, Prize, HDBs Consumed in 10 min.
Rankings according to Major League Eating as of 3:30 p.m. PT, July 4, 2018.
The annual 4th of July Nathan’s Hot Dog Eating Contest is one of the most famous eating contests in the world, where professional eaters compete for big money and competitive eating fame. You can watch replays of the event on the ESPN App.
Sudo — who, according to announcer George Shea, predates the existence of light and sound — crushed the competition, beating out Michelle Lesco with 27 HDBs, with Sonya Thomas and Juliet Lee tying for third with 25.
Read on to see the net worths of Joey Chestnut and other Nathan’s Hot Dog Eating Contest winners, their total career winnings, and more.
Major League Eating (MLE) — the organization that oversees professional eating contests — began paying cash prizes to top finishers in 2007. In 2018, the total prize money is $40,000: The men’s division winners get $20,000, and the women’s division winners get $20,000. Among those two groups, the $20,000 is broken down five ways according to how competitors place.
Additionally, the men’s champion is awarded the bejeweled mustard belt, and the women’s champion is awarded a bejeweled pink belt.
In 2017, an estimated 35,000 attendees and 1.11 million viewers on ESPN watched as competitive eaters Joey “Jaws” Chestnut and Miki Sudo devoured the most hot dogs and buns to come out on top.
Chestnut gobbled 72 HDBs for the 2017 men’s title — and fell just 1.5 HDBs short of his own record of 73.5 — and Sudo devoured 41 HDBs to conquer the women’s division, both in a mere 10 minutes.
ESPN’s Adam Amin provided the play-by-play for the 2018 eating competition. In-depth analysis was provided by Major League Eating’s George Shea.
Not just anyone can enter this eating contest. Contestants must compete in one of at least 10 qualifying events held from March to June in various cities across the U.S. to earn a chance at the Nathan’s Hot Dog Eating Contest prize money.
A handful of eaters have dominated the Nathan’s Hot Dog Contest in recent years, with reigning champion Joey Chestnut claiming 11 of the last 12 titles in the men’s division — the lone blemish was a loss to Matt Stonie in 2015. And Chestnut’s streak started when he broke a run of six straight titles for Takeru Kobayashi from 2001-2006.
Returning champion Joey Chestnut was heavily favored going into the 2018 event with a lot riding on him: ESPN estimated that more than $1 million would be bet on the hot dog eating contest by the time it kicked off on July 4th, with Chestnut coming in as a massive favorite at -700 — meaning you would have to wager $700 just to win $100.
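For readers unfamiliar with moneyline odds, the arithmetic behind that figure is simple: risking $700 to win $100 implies a win probability of 700 / (700 + 100), ignoring the bookmaker's margin. A short Python sketch:

def implied_probability(moneyline):
    """Implied win probability for an American moneyline (favorites are negative)."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100.0)
    return 100.0 / (moneyline + 100.0)

print(f"-700 favorite: {implied_probability(-700):.1%}")  # 87.5%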
At the pre-event weigh-in on July 3, Chestnut pointed out that — not only is he the 10-time champ — but the diminutive size of his main rival Carmen Cincotti could hurt his chances. Chestnut also said there was a potential he could break his own record this year, stating that he’s been eating 75 HDBs in his practice runs, according to the New York Post. Chestnut fasted in the days leading up to the contest, according to the event announcers.
Meanwhile, on the women’s side, only two people have ever won the title since MLE began awarding it in 2011. Sonya Thomas won the first three titles — setting the current women’s world record of 45 HDBs in 2012 — before giving way to Miki Sudo, who has won the last five Nathan’s Hot Dog Eating Contest titles.
Sudo expressed concern about the weather at her weigh-in, specifically the humidity, which the Las Vegas native has less experience with. She also said that she had no plans to alter her approach of eating two hot dogs separately before consuming two buns separately, citing her victory last year as proof of its efficacy.
Here’s everything you need to know about the competitive eating at the 2018 July 4th eating contest.
Joey “Jaws” Chestnut surprised no one by cruising to his 11th Mustard Belt in 12 years, extending a stretch of dominance that has defined his sport for over a decade. He might have surprised some, though, by consuming 74 HDBs for the new record despite a hot, humid day in Brooklyn. The top-ranked competitive eater in the world, Chestnut ate 72 HDBs to win the 2017 Nathan’s Hot Dog Eating Contest.
The California native and former construction manager has dominated the competition at Nathan’s Famous 4th of July Hot Dog Eating Contest, having won every single year he has entered, save for finishing second in 2015 by a mere two HDBs.
Joey Chestnut’s net worth is $800,000, according to Celebrity Net Worth.
Chestnut is arguably one of the most famous and successful professional competitive eaters. He just launched a new line of condiments in time for the 4th of July, reported USA Today. The new Joey Chestnut Select product line includes Chestnut’s Firecracker Mustard, Boardwalk Coney Sauce and Deli-Style Mustard. All profits from the online product sales through July 6, 2018, will go to benefit Hidden Heroes, the Elizabeth Dole Foundation’s program for assisting wounded warriors, according to PR Newswire.
Miki Sudo continued her abject dominance of the women’s hot dog eating field — winning her fifth title in a row — but she did fall short of matching last year’s 41 hot dogs or the record of 45 set by Sonya Thomas in 2012. Last year, Sudo’s 41 HDBs bested Michelle Lesco’s 32.5 HDBs and Thomas’ 30 HDBs.
Sudo first qualified for the annual 4th of July contest in 2013 by downing 40 hot dogs at the New York New York casino in Las Vegas, a feat that clearly didn’t take too much of a toll on her as she then downed 7.5 pounds of deep-fried asparagus in Stockton, Calif., the next week. And although she failed to top Thomas that year, she won the next year and hasn’t surrendered the Nathan’s Pink Belt since.
Sudo holds world records for eating 16.5 pints of vanilla ice cream in just six minutes at the Indiana State Fair in 2017 and for downing a stunning 8.5 pounds of kimchi at the Chicago Korean Festival in 2013.
Sudo’s Major League Eating career winnings total is $119,911.
Cincotti isn’t a one-trick pony, though. He set a world record by consuming a staggering 2.438 gallons of chili at the Orlando Chili Cookoff in February 2018 — the fourth world record he has set since October 2016.
He also holds the record for eating brats, downing 101 at the Linde Oktoberfest Tulsa in October 2016; croquetas, eating 158 in Miami in March 2017; and sweet corn, downing a stunning 61.75 ears just a month after setting the croquetas record.
Over the course of his MLE competitive career, he has won a total of $44,150.
The third-ranked eater in the world is Matt Stonie, and he remains the only person who has ever managed to best Chestnut in the 4th of July Hot Dog Eating Contest. He fell short this year, though, finishing off of the podium as Darron Breeden climbed into third by just one HDB.
Matt Stonie’s net worth is $700,000, according to Celebrity Net Worth.
The No. 2 female eater in the world, Michelle “Cardboard Shell” Lesco gave it her all this year but still failed to keep pace with the storming pace set by Sudo — missing victory and another $5,000 by a full nine hot dogs.
That marks the second year in a row she has finished second, losing to Sudo in 2017 as well. She is currently the second-ranked female competitive eater in the world and the ninth-ranked eater overall.
Lesco, who hails from Tucson, Ariz., is also MLE’s 2017 Humanitarian of the Year, working as the senior manager of youth services at the Volunteer Center of Southern Arizona and helping to fund wells in the Central African Republic and Ethiopia.
She has won $48,675 in career winnings as a competitive major league eater.
Takeru “The Prince” Kobayashi changed competitive hot dog eating forever when he guzzled 50 hot dogs in 2001 — double the number the previous year’s winner had eaten. That year, he debuted his new technique of dipping the buns in water to break down the starches, something most competitors now replicate.
Kobayashi won six consecutive Nathan’s Hot Dog Eating Contests (2001-2006) before he was overthrown by Chestnut in 2007. He tied with Chestnut by eating 59 hot dogs in 2008 but was declared the second-place winner after Chestnut was the first to finish a plate of five hot dogs in the first-ever overtime.
Kobayashi hasn’t competed in the 4th of July Nathan’s Hot Dog Eating Contest since 2009 due to a contract dispute with MLE, though he did manage to make it onto the stage in 2010, albeit briefly, storming the competition in protest, only to be arrested.
Takeru Kobayashi’s net worth is $3 million, according to Celebrity Net Worth.
1 million: The number of hot dogs that Nathan’s Famous has donated to New York food banks over the last 10 years.
$3.95: The price of a Nathan’s hot dog at its flagship restaurant in Brooklyn.
$0.05: The original cost of a Nathan’s hot dog on Coney Island in 1916, allowing Nathan Handwerker to undercut the competition that was selling their dogs for $0.10.
$292.30: The cost of the 74 HDBs that Joey Chestnut consumed to set the world record on July 4, 2018. However, back in 1916, he wouldn’t have needed to break a $5 bill — 74 hot dogs would have cost just $3.70.
40: The length in feet of Chestnut’s 74 hot dogs. Each Nathan’s hot dog is between 6.5 and 7 inches long, with a 6-inch bun.
515.25: Length in feet of all the hot dogs scarfed down by the eaters in the 2018 competition if laid end to end.
100: The number of times the Nathan’s Hot Dog Eating Contest has been held since 1916, with the 1941 edition canceled to protest the war in Europe and the 1971 contest not held to protest the war in Vietnam.
1971: The likely real year the first Nathan’s Hot Dog Eating Contest was held. Although Nathan’s and several publications will list the 1916 start date as fact, there are no records prior to 1971, and famed PR man Mortimer Matz has claimed that he and Max Rosey invented the contest from whole cloth and fabricated the history as a selling point.
500 million: The number of hot dogs that Nathan’s has sold since its launch.
50,000: The number of outlets selling Nathan’s Famous products, which are found in all 50 states and 10 countries.
$3,779.16: Total cost of the hot dogs consumed at the contest if purchased from the flagship Nathan’s restaurant in Coney Island.
22,200: The number of calories Joey Chestnut consumed in 10 min. by eating a record-setting 74 HDBs, according to ESPN’s Darren Rovell.
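Several of the figures in the list above are simple arithmetic on Chestnut's 74 HDBs. The sketch below reproduces them; the roughly 300 calories per HDB is inferred from the 22,200-calorie total, and 6.5 inches is the lower bound of the hot dog length quoted above.

hdbs = 74
price_each = 3.95          # current flagship price quoted above
price_1916 = 0.05          # original 1916 price quoted above
calories_total = 22_200    # total credited to Darren Rovell above
inches_each = 6.5          # lower bound of the quoted hot dog length

print(f"Cost today:   ${hdbs * price_each:,.2f}")        # $292.30
print(f"Cost in 1916: ${hdbs * price_1916:,.2f}")        # $3.70
print(f"Calories per HDB: {calories_total / hdbs:.0f}")  # ~300
print(f"Length: {hdbs * inches_each / 12:.1f} ft")       # ~40 ft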
Legend has it that the Nathan’s Hot Dog Eating Contest began in 1916 as a way for four immigrant men to determine who among them was the most patriotic. The winner was Jim Mullen, who ate 13 hot dogs, according to author Jason Fagone in his book “Horsemen of the Esophagus: Competitive Eatings and the Big Fat American Dream.” Of course, if you believe the claims of famed PR guru Mortimer Matz, that’s all an elaborate fiction that he and Max Rosey fabricated to give the event more character when they first started holding it in the early 1970s, according to the New York Times.
The contest is currently governed by Major League Eating and held at Nathan’s Famous flagship restaurant in Coney Island in the New York borough of Brooklyn. If you do believe the stories about the earlier start date, 2018 marks the 100th occasion that the competition has been held since 1916 — the contest was (hypothetically) canceled in protest in 1941 and 1971.
It’s certainly an unusual way to make money, but competitive eating can pay well. Like the Nathan’s Hot Dog Eating Contest, many eating contests offer cash prizes.
Joey Chestnut, for example, has won nearly $600,000 since his start in professional competitive eating in 2005, according to Eat Feats, an online competitive eating news resource and database. The Nathan’s Hot Dog Contest is his biggest payday, but other recent lucrative purses include the 2016 Hooters World Wing Eating Championship ($8,500) — where he ate 194 wings in 10 minutes — and the 2014 Foxwoods World Turkey-Eating Championship ($5,000) — where he consumed 9.35 pounds of turkey in 10 minutes.
Much newer to professional competitive eating are Miki Sudo and Matt Stonie. Sudo has earned almost $120,000 in prize money since 2011. Her most recent first-place prize was the 2017 World Ice Cream Eating Championship, where she scored $2,000 and set the world record for short-form ice cream eating by consuming 16.5 pints in just six minutes.
Stonie, meanwhile, has pulled in just more than $150,000 since 2009. Last year, he managed a $4,200 second-place payday for consuming 229 chicken wings in 10 minutes at the Hooters World Wing Eating Championship in Las Vegas less than a week after his third-place finish in the Nathan’s Hot Dog Eating Contest, besting Joey Chestnut in the process.
Qualifying events for the 2019 Nathan’s Hot Dog Eating Contest have not been announced yet. If you think you have what it takes, your 4th of July party might be the time to start practicing. Regional winners of the 2018 qualifying events consumed anywhere from six to 46.5 hot dogs.
Wednesday on Fox News Channel’s “The O’Reilly Factor,” Republican presidential front-runner Donald Trump downplayed the possibility of Democratic presidential front-runner Hillary Clinton’s White House hopes being spoiled by an indictment over transgressions stemming from her use of a private email server while secretary of state during President Barack Obama’s first term.
He did, however, tell host Bill O’Reilly he thought the emails would be fair game in a campaign situation for her opponents.
“I think the emails – I don’t think that’s playing dirty pool,” Trump said. “I think the emails are a big part of her life story right now. What she did is terrible and we’re going to see what happens.”
But Trump said he did not foresee an indictment because he thought Democrats were protecting her.
“No, because I think the Democrat Party is going to protect her,” he added. “I don’t think she is going to be indicted. I think that what she has done is very, very serious. I know for a fact what Gen. [David] Petraeus and others have done is much less. And it destroyed their lives. So, I do believe, Bill, that she is being protected.”
O’Reilly asked if it were possible the Democratic Party was working with a “non-corruptible” agency like the FBI to prevent the indictment. Trump answered by saying there was a human element to this. But he added the danger for Clinton would be if another president’s administration looked at it within the six-year statute of limitations.
“I hope they are not working together,” he replied. “You know, they are humans and they are people and they do talk, I would imagine. And I would like to think they don’t work together. But I would say that she is being protected. Now, what she can’t be protected from is the statute of limitations because on the assumption that somebody else got in – that’s a real dangerous situation.”
O’Reilly followed up by asking Trump if he would look at it if he were president, to which Trump acknowledged he “certainly” would.
“Certainly this falls within that period of time,” he added. “And you certainly have to look at it, very fairly. I would only do something if it was 100 percent fair. Certainly that is something that you would look at.”
Follow Jeff Poor on Twitter @jeff_poor |
You've heard that an incredibly influential economic paper by Reinhart and Rogoff (RR) - widely used to justify austerity - has been "busted" for "excel spreadsheet errors" and other flaws.
As Google Trends shows, there is a raging debate over the errors in RR's report:
Even Colbert is making fun of them.
Liberal economists argue that the "debunking" of RR proves that debt doesn't matter, and that conservative economists who say it does are liars and scoundrels.
Conservative economists argue that the Habsburg, British and French empires crumbled under the weight of high debt, and that many other economists - including Niall Ferguson, the IMF and others - agree that high debt destroys economies.
RR attempted to defend their work yesterday:
Researchers at the Bank of International Settlements and the International Monetary Fund have weighed in with their own independent work. The World Economic Outlook published last October by the International Monetary Fund devoted an entire chapter to debt and growth. The most recent update to that outlook, released in April, states: “Much of the empirical work on debt overhangs seeks to identify the ‘overhang threshold’ beyond which the correlation between debt and growth becomes negative. The results are broadly similar: above a threshold of about 95 percent of G.D.P., a 10 percent increase in the ratio of debt to G.D.P. is identified with a decline in annual growth of about 0.15 to 0.20 percent per year.” This view generally reflects the state of the art in economic research *** Back in 2010, we were still sorting inconsistencies in Spanish G.D.P. data from the 1960s from three different sources. Our primary source for real G.D.P. growth was the work of the economic historian Angus Madison. But we also checked his data and, where inconsistencies appeared, refrained from using it. Other sources, including the I.M.F. and Spain’s monumental and scholarly historical statistics, had very different numbers. In our 2010 paper, we omitted Spain for the 1960s entirely. Had we included these observations, it would have strengthened our results, since Spain had very low public debt in the 1960s (under 30 percent of G.D.P.), and yet enjoyed very fast average G.D.P. growth (over 6 percent) over that period. *** We have never advised Mr. Ryan, nor have we worked for President Obama, whose Council of Economic Advisers drew heavily on our work in a chapter of the 2012 Economic Report of the President, recreating and extending the results. In the campaign, we received great heat from the right for allowing our work to be used by others as a rationalization for the country’s slow recovery from the financial crisis. Now we are being attacked by the left — primarily by those who have a view that the risks of higher public debt should not be part of the policy conversation.
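Since the dispute turns largely on spreadsheet mechanics, which rows are included and how country averages are weighted, a tiny synthetic example makes the point concrete. The table below is invented for illustration; it is not the Reinhart-Rogoff dataset and the numbers carry no empirical meaning.

import pandas as pd

# Hypothetical countries in the same "high debt" bucket: a per-country mean
# growth rate and the number of country-years each spent in the bucket.
df = pd.DataFrame({
    "country": ["A", "B", "C"],
    "years_in_bucket": [19, 1, 5],
    "mean_growth": [2.5, -7.0, 1.0],
})

equal_country_weight = df["mean_growth"].mean()
year_weighted = (df["mean_growth"] * df["years_in_bucket"]).sum() / df["years_in_bucket"].sum()

print(f"One average per country, countries weighted equally: {equal_country_weight:+.2f}%")
print(f"Every country-year weighted equally:                 {year_weighted:+.2f}%")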
But whether you believe that the errors in the RR study are fatal or minor, there is a bigger picture that everyone is ignoring.
Initially, RR never pushed an austerity-only prescription. As they wrote yesterday:
The only way to break this feedback loop is to have dramatic write-downs of debt. *** Early on in the financial crisis, in a February 2009 Op-Ed, we concluded that “authorities should be prepared to allow financial institutions to be restructured through accelerated bankruptcy, if necessary placing them under temporary receivership.” Significant debt restructurings and write-downs have always been at the core of our proposal for the periphery European Union countries, where it seems to us unlikely that a mix of structural reform and austerity will work.
Indeed, the nation's top economists have said that breaking up the big banks and forcing bondholders to write down debt are essential prerequisites to an economic recovery.
Additionally, economist Steve Keen has shown that “a sustainable level of bank profits appears to be about 1% of GDP”, and that higher bank profits lead to a Ponzi economy and a depression. Unless we shrink the financial sector, we will continue to have economic instability.
Leading economists also say that failing to prosecute the fraud of the big banks is dooming our economy. Prosecution of Wall Street fraud is at a historic low, and so the wheels are coming off the economy.
Moreover, quantitative analyses provide evidence that private debt levels matter much more than public debt levels. But mainstream economists on both the right and the left wholly ignore private debt in their models.

Finally, the austerity-versus-stimulus debate cannot be taken in a vacuum, given that the Wall Street giants have gotten the stimulus and the little guy has borne the brunt of austerity.
Steve Keen showed that giving money directly to the people would stimulate much better than giving it to the big banks.
But the government isn't really helping people ... and has instead chosen to give the big banks hundreds of billions a year in hand-outs.
(Obama's policies are even worse than Bush's in terms of redistributing wealth to the very richest. Indeed, government policy is ensuring high unemployment levels, and Obama – despite his words – actually doesn’t mind high unemployment. Virtually all of the government largesse has gone to Wall Street instead of Main Street or the average American. And “jobless recovery” is just another phrase for a redistribution of wealth from the little guy to the big boys.)
If we stopped throwing money at corporate welfare queens, military and security boondoggles and pork, harmful quantitative easing, unnecessary nuclear subsidies, the failed war on drugs, and other wasted and counter-productive expenses, we wouldn't need to impose austerity on the people.
Indeed: |
Streptozotocin diabetes-induced changes in aorta, peripheral nerves and stomach of Wistar rats. In rats with diabetes induced by streptozotocin (STZ), we studied the reactivity of the aorta in response to vasoconstrictor and vasorelaxant agents, changes in conduction velocity in the sciatic nerve, and the glutathione (GSH) content in the gastric mucosa, as well as the occurrence of spontaneous gastric lesions. STZ-induced diabetes was found to be accompanied by endothelial injury, exhibited by diminished endothelium-dependent relaxation and by increased noradrenaline- and H2O2-induced contraction. Conduction velocity in the nerves from STZ-treated animals was significantly lower compared to that in nerves from control animals. Moreover, gastric hyperaemia, occasional gastric lesions, and a significant depletion of GSH in the gastric mucosa were observed in STZ-treated rats. Our experiments confirmed the suitability of Wistar rats for the model of STZ-induced diabetes.
The Effects of Timing of a Leucine-Enriched Amino Acid Supplement on Body Composition and Physical Function in Stroke Patients: A Randomized Controlled Trial

The combination of exercise and nutritional intervention is widely used for stroke patients, as well as for frail or sarcopenic older persons. As previously shown, supplemental branched-chain amino acids (BCAAs) or protein intended to build muscle mass have usually been given just after exercise. This study investigated the effect of the timing of supplemental BCAAs combined with an exercise intervention on physical function in stroke patients. The participants were randomly assigned to two groups based on the timing of supplementation: breakfast (n = 23) and post-exercise (n = 23). The supplement in the breakfast group was provided at 08:00 with breakfast, and in the post-exercise group it was provided just after the exercise session in the afternoon at 14:00-18:00. In both groups, the exercise intervention was performed with two sessions a day for two months. Main effects were observed in body fat mass (p = 0.02, confidence interval (CI): 13.2-17.7), leg press strength (p = 0.04, CI: 94.5-124.5), and the Berg balance scale (p = 0.03, CI: 41.6-52.6), but no interaction with intake timing was observed. Although the effect of the timing of supplementation on skeletal muscle mass was similar in both groups, BCAA intake with breakfast was effective for improving physical performance and decreasing body fat mass. The results suggest that the combination of BCAA intake at breakfast with an exercise program is effective for promoting the rehabilitation of post-stroke patients.

Introduction

In recent years, combinations of exercise and nutritional interventions for frail older persons or patients with sarcopenia have been actively investigated. For the nutritional intervention, protein or branched-chain amino acid (BCAA) supplements have attracted keen interest. Some systematic reviews have shown the efficacy of a combination of exercise and protein supplementation for improving muscle strength and physical function in frail older persons, and the same has also been reported in sarcopenic persons. Furthermore, a recent systematic review showed that protein supplementation alone did not improve muscle mass, muscle strength, or physical function in frail older persons. On the other hand, Foley et al. reported that energy and protein intake were 10-20% below the required amounts in patients after strokes, and Kokura et al. and Nishijima et al. reported that this deficiency limits the improvement of functional independence measure (FIM) scores. Furthermore, Yoshimura et al. showed that BCAA supplementation immediately after exercise led to improved FIM scores, grip strength, and skeletal muscle mass in stroke patients. Such combined therapy has been widely used, not only for frail or sarcopenic older persons, but also for stroke patients.

As previously shown, BCAA or protein supplementation for the purpose of gaining muscle mass has generally been given just after exercise, and a comparison study of supplementation before and after exercise showed that it was more effective post-exercise than pre-exercise. However, considering the bias introduced by meals that include protein and by the interval between them, a recent randomized study reported that protein supplementation with breakfast stimulated post-prandial muscle protein synthesis and increased muscle mass in healthy older persons. In addition, Pekmez et al.
showed that protein intake before a meal affected the post-prandial metabolic pattern in persons with metabolic syndrome. These findings suggest the hypothesis that protein or BCAA supplementation with breakfast is more effective for gaining muscle strength and muscle mass than supplementation immediately after exercise. This study, therefore, investigated the effect of the timing of BCAA supplementation with exercise therapy on improving skeletal muscle mass, muscle strength, and physical function in stroke patients.

Subjects

The eligible patients were 69 persons who were admitted to a ward for rehabilitation during convalescence after a stroke. The inclusion criteria were as follows: cerebral infarction, cerebral hemorrhage, or subarachnoid hemorrhage; age 40 years and older; no deglutition disorder; and able to walk independently or under supervision using an assistive device if needed. The exclusion criteria were as follows: deglutition disorder; renal disease or diabetes mellitus needing dietary restriction; and cardiac pacemaker, dementia, untreated cardiovascular disorder, depression, or a schizophrenic disorder. Recruitment was conducted at Showa University Fujigaka Rehabilitation Hospital from 15 April 2019 to 31 March 2020. The follow-up was conducted during the discharge period. All subjects gave their informed consent prior to participating in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Showa University Fujigaka Ethics Committee (ID: F2018C74).

Experimental Design

A single-blind randomized experimental design was used, with 2-month periods of supplementation at breakfast or immediately after exercise in the afternoon. One co-investigator (AK) created the assignment list using computer-generated random numbers in advance. Each participant was allocated a code number in order of recruitment. Randomization was performed using stratified randomization by age group as follows: 40-49 years old, 50-64 years old, and ≥65 years old. The subjects were randomly divided into two groups: the breakfast group (n = 23), who ingested a BCAA-rich nutritional supplement at breakfast; and the post-exercise group (n = 23), who ingested a BCAA-rich nutritional supplement immediately after exercise in the afternoon. The chief researcher (IK) was informed of the allocation by age group using the number container method by AK.

Demographic Data

Demographic data were collected from clinical records, including age, sex, body mass index (BMI), diagnosis, Brunnstrom recovery stage (BRS), and Charlson comorbidity index values.

Outcome Measures

Evaluations were conducted before and 2 months after starting supplementation. Investigators were not blind to group allocation. Investigators determined skeletal muscle mass, lower limb isometric strength, grip strength, timed up-and-go test (TUGT), Berg balance scale (BBS), functional independence measure (FIM), energy consumption and intake, number of combined therapy sessions, and nutritional status.

Skeletal muscle mass and body fat mass

Measurements of skeletal muscle mass and body fat mass were performed for all patients using the bioelectrical impedance method (InBody S10, InBody Japan, Tokyo, Japan). The participants were measured after 2 min of rest in a supine position, and were instructed not to move or speak during the measurement.
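The stratified randomization described above, computer-generated allocation lists prepared within three age strata by a co-investigator, can be illustrated with a short sketch. This is not the authors' actual allocation code; the block size, number of blocks and random seed are arbitrary assumptions for the example.

import random

random.seed(0)
strata = ("40-49", "50-64", ">=65")
block = ["breakfast", "breakfast", "post-exercise", "post-exercise"]

def allocation_list(n_blocks=6):
    """Shuffled-block allocation sequence for one age stratum (block size 4)."""
    seq = []
    for _ in range(n_blocks):
        b = block[:]
        random.shuffle(b)
        seq.extend(b)
    return seq

lists = {stratum: allocation_list() for stratum in strata}
print(lists["40-49"][:8])  # first eight allocations in the 40-49 stratum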
Muscle strength

The measurement of muscle strength was conducted by a co-investigator (RM) who was independent of the recruitment, intervention, and other data collection. Lower limb isometric strength measurement was performed on all patients using a muscle training machine (Wel-tonic L series, Minato Medical Science Co., Ltd., Osaka, Japan). Leg press strength was measured while the subject was sitting on the muscle training machine with knees and hips at 90° of flexion. After participants were familiarized with the test procedure, 2 trials at maximum effort were performed. The higher value was used for analysis. The grip strength of all patients was measured on the dominant side using a Smedley type grip dynamometer (Grip-D, Takei Scientific Instruments Co., Ltd., Niigata, Japan). Two trials at maximum effort were performed, and the higher value was used for analysis.

Balance ability

Dynamic balance ability was measured by the BBS and TUGT. Each test was done twice, and the higher value of the BBS and the lower value of the TUGT were used. The BBS is recommended for use in combination with another balance scale.

Energy consumption and intake

Energy consumption was measured using a triaxial accelerometer (Active style PRO HJA-750C, OMRON Corporation, Kyoto, Japan). The accelerometer was attached to the patient's trunk for 5 days; measurements over 24 h for 3 days were used (excluding the dates of attachment and collection), and the median value was used for analysis. Energy intake was evaluated by the nurse after each meal and divided into a staple food and a side meal. The intake ratio was recorded as a percentage in the nursing records. Energy intake per day was calculated from the prescribed energy amount and the intake ratio of staple foods and side dishes. Participants were provided nutritional assessment and advice by a dietitian. This study was conducted without restricting the intake of food and drink other than those provided by the hospital. The energy sufficiency ratio was calculated as the quotient of energy consumption and intake.

BCAA supplementation

A leucine-enriched nutritional supplement was provided every day for participants in the breakfast and post-exercise groups. Both groups ingested 125 mL of supplement (200 kcal; Hepas, CLINICO Co., Ltd., Tokyo, Japan). The supplement contained 3.5 g of amino acids and 6.5 g of protein, along with 40 IU of vitamin D per 125 mL. The ratio of amino acids was as follows: 1.6 g leucine, 0.9 g isoleucine, 1.1 g valine; the content percentage of leucine was 44.8%. Supplementation in the breakfast group was provided at 08:00 with breakfast, and in the post-exercise group just after the exercise therapy session in the afternoon at 14:00-18:00.

Exercise

The exercise intervention was performed in the rehabilitation hospital, with 2 sessions every day for 2 months for both groups. The rehabilitation sessions contained a 2-session exercise menu: 1 session of physical therapy (one 20-min set each of muscle conditioning exercises, range of motion exercises, and gait and activities of daily living training) and 1 session of occupational therapy (one 20-min set each of range of motion and muscle conditioning exercise, activities using the upper limb, and activities of daily living training).

Statistical Analysis

Statistical analysis was conducted by a co-investigator (AK) who was independent of the recruitment, intervention, and data collection.
Based on a study by Yoshimura, the minimal sample size was calculated using two-way repeated measures analysis of variance to examine significant differences between the groups (α = 0.05, power = 0.8, effect size = 0.5); 23 participants per group were required. The two groups were created by random assignment of the time of supplementation, including a breakfast group (n = 23) and post-exercise group (n = 23). An intention-to-treat analysis was conducted for the groups. The data for participants who dropped out of the intervention were replaced by the "last observation carried forward" method. An unpaired t-test was used to determine the significance of differences between the groups in age, BMI, comorbidity index, energy sufficient ratio, FIM score, number of exercise and nutritional interventions, and nutritional status. The chi-squared test was used to determine the significance of differences between the groups in sex, diagnosis, and BRS. Skeletal muscle mass, body fat mass, leg press strength, grip strength, BBS, TUGT, and FIM scores were analyzed by two-way repeated-measures ANOVA (group × time). The interaction was evaluated by time of supplementation and exercise therapy. Items for which a main effect in a group was observed were compared between the groups using the unpaired t-test. All data were analyzed using JMP software (version 15, SAS Institute Japan Ltd., Minato, Tokyo, Japan). Results Of the 69 eligible persons, 23 were excluded (20 met the exclusion criteria and 3 declined participation), so a total of 46 persons participated in this study. Six participants were unable to complete the study after randomization (Figure 1), and no participants had adverse events associated with BCAA supplementation. The demographic data for the participants were similar between the two groups (Tables 1 and 2). There were no significant interactions between the groups (Table 3). The amount of change from baseline body fat mass (breakfast group: -2.5 ± 2.6 kg; post-exercise group: -0.9 ± 2.1 kg; p = 0.03) was significantly greater in the breakfast group (Table 4), and the change in leg press strength (breakfast group: 26.7 ± 28.5 kgf; post-exercise group: 12.8 ± 18.1 kgf; p = 0.05) was marginally significantly higher in the breakfast group (Table 4). The others did not show significant differences between the two groups (Table 4). Discussion Recently, combined intervention has been viewed with keen interest; the effects of combined intervention on improving muscle strength or mass have gained a certain consensus [17,18], and the range of its use is expanding. Previous studies were done in patients with stroke, hip fracture, and artificial joint replacement. In the present study, whether the time of BCAA ingestion was at breakfast or post-exercise, there was no significant main effect or interaction with skeletal muscle mass. This result is in agreement with multiple previous studies [2,14, ...]. Abe et al. reported that stroke patients had decreased skeletal muscle mass on both the affected and unaffected sides of the body starting one month after onset.
Although skeletal muscle mass in the present study did not change significantly before or after the combined intervention, it seems that BCAA supplementation prevented a decrease of skeletal muscle mass in both groups. Therefore, the combined intervention for the purpose of increasing muscle strength in stroke patients appeared to be equally effective with breakfast or immediately after exercise in the afternoon. In the area of applied physiology, researchers have begun to find that BCAA or protein supplementation does not need to be given immediately after exercise, which was previously considered the gold standard. Regarding muscle protein synthesis, the effectiveness of supplementation has been reported when given at breakfast, pre-exercise, at dinner, or before sleep. Ikeda has shown the feasibility of pre-exercise supplementation for muscle strengthening in frail older persons. In these studies, it was common for supplementation to be combined with exercise intervention, although the time of supplementation varied (breakfast, pre-exercise, post-exercise, or pre-sleep). Atherton et al. reported the "muscle-full" phenomenon, in which muscle protein synthesis peaked despite sustained elevation of serum amino acids, and exercise combined with amino acid ingestion caused an increase in amino acid uptake response and sensitization of skeletal muscle to amino acids for up to 24 h. On the other hand, there were significant main effects on lower limb strength and balance ability according to the time of supplementation, which suggests the effectiveness of ingesting BCAA with breakfast for physical ability. In addition, recent research on metabolic syndrome has reported that protein intake before breakfast promotes post-prandial carbohydrate and lipid metabolism. In the present study, 31% of patients had obesity (BMI ≥ 25 kg/m²), while body fat mass in the breakfast group was significantly decreased. In Japanese patients, obesity is one of the risk factors for stroke. Considering the prevention of stroke recurrence, if the timing of supplementation does not affect muscle gain, it may be desirable to take BCAAs with breakfast, which is expected to reduce body fat mass. This study has several limitations, including: there was no control group only taking BCAA without exercise or doing exercise training; some participants dropped out during the follow-up; and for some patients, energy intake was controlled because of diabetes mellitus or obesity. First, the design of the present study is lacking in that it does not allow us to separate the effects of nutritional supplementation and exercise, because there was no control group only taking BCAA without exercise or doing exercise training. It has not been possible to examine the effects on body composition and physical function when consuming BCAAs alone, because it is ethically questionable to not provide rehabilitation. The same issue was seen in previous studies. Regarding frail and non-frail older persons, the recent systematic reviews showed that protein supplementation alone did not improve muscle mass, muscle strength, or physical function. The same may occur in post-stroke patients. Second, the data of participants who dropped out of the intervention were replaced by the last observation carried forward method. Even though an intention-to-treat (ITT) analysis was performed in this study, there might be some bias related to sample size.
Third, in regard to restricted energy intake, the energy sufficient ratio was approximately 90% in the present study. However, there were no cases of malnutrition in the groups. Therefore, the effect of energy deficiency was relatively small. Conclusions Although the effect of leucine-enriched BCAA supplementation on muscle mass was similar in both groups, a combination of BCAA intake with breakfast and an exercise program was effective at improving physical performance and decreasing body fat mass. The results suggest that ingestion of BCAAs with breakfast is effective for promoting rehabilitation of post-stroke patients. Further study is needed to investigate the ideal timing of BCAA supplementation for stroke. |
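For reference, the a priori sample-size calculation described in the Methods (two groups, α = 0.05, power = 0.8, effect size = 0.5) can be approximated in Python. The sketch below uses the one-way ANOVA power routine from statsmodels as a simplified stand-in for the two-way repeated-measures calculation actually used, so its result is indicative only and will not exactly reproduce the reported 23 participants per group.

```python
# pip install statsmodels
from statsmodels.stats.power import FTestAnovaPower

# Parameters reported in the Methods: alpha = 0.05, power = 0.8, effect size = 0.5
power_analysis = FTestAnovaPower()
total_n = power_analysis.solve_power(effect_size=0.5, nobs=None,
                                     alpha=0.05, power=0.8, k_groups=2)

print(f"Approximate total sample size: {total_n:.0f} "
      f"({total_n / 2:.0f} per group, before accounting for repeated measures)")
```

A dedicated repeated-measures power calculation (for example, in G*Power) additionally accounts for the correlation between the baseline and 2-month measurements, which is why the published requirement differs from this simple approximation.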
With all the hullabaloo about new bike lanes and the nascent bike share program, you would think New York would be a bicycle haven. And this being National Bike Month, and Bike to Work Week, you would think everyone would ditch their cars and ride to the office. But there’s a problem that plagues car and bicycle commuters alike: where do I park this damn thing? Midtown’s canyons of skyscrapers may wow the tourist, but to the bike commuter it’s a maze of 60-story tall “No Parking” signs. Where are health- and environmentally-conscious commuters supposed to put these things during the work day?
I got a new job in the big city and was pleased to find a little sign at our lobby sign-in desk that said “Bicycle Access Plan.” With nice weather on the way, I believed I could finally wean myself off super-expensive monthly subway passes by riding the 8 miles or so to work under my own power. But bike racks are sparse, the building won’t let me bring it in, and the cost for a bike parking spot (already an absurd idea) makes me feel like I might as well have a car. I took to the internet to do some investigating.
Much to the rejoicing of cyclists everywhere, the city passed the Bicycle Access to Office Buildings Law in 2009. The law requires all office buildings with a freight elevator to let bicycle commuters bring their rides up to their offices. In theory this is great, but there’s one catch: building management is required to let your bike in the building, but your employer isn’t. And if they don’t want to accommodate you, you’re out of luck. Mine doesn’t and they aren’t budging.
My building’s security guard suggested I park in the garage. When I asked him how much they charge, he kept repeating “It’s too much, if you ask me. It’s too much!” He never gave a figure.
“Garages allow bicycle parking? Great!” I thought. Well, that’s only because they’re legally required to. The city also passed a law (pdf) that requires “garages or lots that accommodate 100 or more vehicles [to] provide bicycle parking at a rate of at least one space for every ten vehicle spaces.” Sounds dandy, except most garages don’t want bicycle commuters. To discourage use, they charge outrageous prices. Icon Parking, the owner of the garage in my building, charges $75 (tax included) for a monthly spot for a bicycle. Most other garages in the area (Midtown West) charge that much or even more.
The only option is the street. The same security guard who refused to name a price suggested I try the city rack by the subway stop around the corner. I am hesitant; I’ve had wheels, brake lines, seats, even reflectors ripped off various bikes over the years, all in well under the typical 8-hour work day. A quick peek revealed the rusting carcass of one foolish enough to park at 53rd and Broadway (pictured above).
And so up and down the street desperate delivery boys and determined commuters alike illegally tie their bikes up to parking signs, trees, even trash cans, at the risk of vandals, thieves and zealous parking enforcers.
I put it to you, Brokelyn bicycle commuters: where do you park your bicycle in Manhattan? Do you risk the street? Do you pay the extortionate prices at garages? Does your office provide parking? Tell us below!
Follow Conal (but not to his bike rack) @conaldarcy.
Leukocytapheresis in inclusion body myositis A patient with inclusion body myositis was treated with a course of 22 leukocytaphereses combined with prednisone and azathioprine therapy. He improved clinically during an induction phase of frequent cytapheresis, which reduced the circulating levels of T lymphocytes and monocytes and decreased the ratio of the T4+ to T8+ lymphocyte subsets. During subsequent maintenance cytapheresis there was partial recovery of the T4+ population without recovery of T8+ lymphocytes, and the patient lost most of his clinical improvement. In contrast to T lymphocytes and monocytes, there was no persistent reduction in circulating B-lymphocyte levels during the course of therapy. T8+ lymphocyte populations may regenerate more slowly than T4+ lymphocytes following depletion with leukocytapheresis combined with prednisone and azathioprine therapy. A loss of T8+ suppressor-cell function relative to T4+ helper-cell function could lead to an intensification of autoimmune conditions.
Ecology safety crop production for obtaining high-quality products The author considers the development priorities of crop production in Russia. Highly productive and ecologically safe crop production based on information technologies is a main priority of Russian scientific development. To achieve this aim, scientists have developed highly productive and ecologically safe varieties and hybrids of agricultural and medicinal plants, together with plant protection and cultivation technologies. These plants are tolerant of a wide range of climatic conditions.
North Carolina's 12th congressional district
Re-establishment from 1990
The district was re-established after the 1990 United States Census, when North Carolina gained a House seat due to an increase in population. It was drawn in 1992 as one of two minority-majority districts, designed to give African-American voters (who comprised 22% of the state's population at the time) the chance to elect a representative of their choice; Section 2 of the Voting Rights Act prohibited the dilution of voting power of minorities by distributing them among districts so that they could never elect candidates of their choice.
In its original configuration, the district had a 64 percent African-American majority in population. The district, stretching from Gastonia to Durham, was so narrow at some points that it was no wider than a highway lane. It followed Interstate 85 almost exactly. One state legislator famously remarked, after seeing the district map, "if you drove down the interstate with both car doors open, you’d kill most of the people in the district."
The United States Supreme Court ruled in Shaw v. Reno (1993) that a racial gerrymander may, in some circumstances, violate the Equal Protection Clause of the United States Constitution.
The state legislature defended the two minority-majority districts as based on demographics, with the 12th representing people of the interior Piedmont area and the 1st the Coastal Plain. Subsequently, the 12th district was redrawn several times and was adjudicated in the Supreme Court on two additional occasions. The version created after the 2000 census was approved by the US Supreme Court in Hunt v. Cromartie. The district's configuration dating from the 2000 census had a small plurality of whites, and it was changed only slightly after the 2010 census. African Americans make up a large majority of registered voters and Hispanics constitute 7.1% of residents.
On February 5, 2016, U.S. Circuit Judge Roger L. Gregory ruled that the district, along with North Carolina's 1st congressional district, must be redrawn from its post-2010 configuration, and that race could not be a mitigating factor in drawing the district. This decision, in the case of Cooper v. Harris, was subsequently upheld by a unanimous U.S. Supreme Court in a decision by Justice Elena Kagan on May 22, 2017. In her decision, Justice Kagan noted that this marked the fifth time the 12th district had appeared before the Supreme Court, following Shaw v. Reno and Hunt v. Cromartie which had both been heard twice before the Court.
In all of its configurations, it has been a Democratic stronghold. Its previous incarnation was dominated by black voters in Charlotte, Greensboro, and Winston-Salem. The redrawn map made the 12th a compact district comprising nearly all of Mecklenburg County, except the southeast quadrant. Due to Charlotte's heavy swing to the Democrats in recent years, the reconfigured 12th is no less Democratic than its predecessor. |
How China's Resource Quest Is Changing China China's economic emergence has been a defining event for global energy and commodity markets. Over the past decade, China has accounted for more than half of the growth in global energy demand and today consumes more than half of the iron ore, cement, copper, and a host of other commodities produced around the world. The stock of global direct investment by Chinese companies has grown from $53 billion in 2004 to $609 billion in 2013, a significant share of which is in natural resources. In By All Means Necessary: How China's Resource Quest Is Changing the World, Elizabeth Economy and Michael Levi help us make sense of Chinese natural resource trade and investment. While the title suggests the book might succumb to the alarmism and hyperbole that often dominates both public and academic discussions of China's rise, it is in fact an exceedingly nuanced, balanced, and well-researched assessment. The combination of Economy's deep China expertise and Levi's natural resource market and energy policy acumen provides an extremely valuable contribution. The book's chief strength is its combination of compelling storytelling and in-depth analysis, a commodity rarer than any resource Chinese companies search for around the globe. The authors put recent Chinese natural resource trade and investment trends in context, discussing precedents from Chinese history as well as from the economic emergence of the United States and Japan. They debunk the view, widely held in the West, that Chinese companies work in lockstep with the government, seeking to improve the security of the Chinese state at the expense of companies and countries elsewhere in the world. The book shows instead that Chinese companies are primarily commercially driven and that their motivations and interests often differ from the government's, just as is the case in Europe, Japan, and the United States. Economy and Levi rightly note that the most significant global impact of China's resource quest to date has been the country's rapid growth in energy and commodity consumption. This has put upward pressure on global prices and helped enrich resource-producing states. But what Chinese demand giveth, it can also taketh away. The growth of Chinese resource consumption has slowed dramatically over the past few years, leading to a sharp correction in the price of many commodities. Iron ore and copper prices fell by more than one-third between 2011 and 2013, and aluminum, steel, and coal prices have declined sharply as well. This development has been painful for resource exporters, from Australia to Chile and Brazil. And if the leadership is successful in "rebalancing" the Chinese economy from investment and industrial production toward services and domestic consumption, the resource-intensity of Chinese growth will likely continue to decline. Economy and Levi judiciously evaluate the positive and negative impacts of global resource investment by Chinese companies, both for recipient countries and competing firms. As the authors point out, Chinese resource investment has had less impact than Chinese imports of foreign-produced goods, despite attracting equal if not greater attention by foreign policymakers, scholars, and the press. Chinese companies are late to the resource game and often must pick through the scraps that U.S., Japanese, and European companies do not want to touch.
Diplomatic support from Beijing and low-cost capital from state-owned banks help Chinese companies compete for investment opportunities in many developing countries. |
Being afraid of the dark is a big problem for an aspiring astronaut. In retired Canadian astronaut Chris Hadfield's new book, "The Darkest Dark" (Little, Brown and Co., 2016), a young Chris struggles with the alien-filled shadows of his bedroom the night before Neil Armstrong and Buzz Aldrin step down onto the moon.
Retired Canadian astronaut Chris Hadfield wrote "The Darkest Dark" to show a younger audience what it's like to face their fears.
The first moonwalkers were a strong inspiration for Hadfield, who has commanded the International Space Station and flown on three space missions. He was eager to spread that inspiration and share a message about dealing with fear that can apply to young non-astronauts, too.
Hadfield's two previous books, "An Astronaut's Guide to Life on Earth" (Little, Brown and Co., 2013) and "You Are Here" (Little, Brown and Co., 2014), were aimed at adults; they shared wisdom and photos, respectively, from his time in space. But for this newer book, he chose to speak to a younger audience about how to conquer a challenge everyone faces at some point in his or her life: overcoming fear.
To reach this younger crowd, Hadfield got help from a new tool: illustration. The book was illustrated by Terry and Eric Fan, known as the Fan Brothers, and its detailed art hides strange, menacing little aliens in Chris' bedroom shadows.
Young Chris Hadfield dreams about walking on the moon with his dog, Albert, in his new picture book "The Darkest Dark."
"It is so iterative, making a children's book," Hadfield said. After perfecting the story and syntax, "you start working with the artists, and of course they are going to have a different vision than you are … Then, you have to go back and change the words, because it all has to flow, and so it's both very delightful and very painstaking to get right."
Hadfield also wrote a song to tie in to the book, detailing the many strange shadows in a darkened bedroom.
Of course, a big part of being an astronaut is facing fear and danger, and Hadfield has explored that theme many times in his books and lectures, as well as in a TED Talk. Fear of the dark brought those elements to young readers in a relatable way, he said.
"The direct response in person of so many people [has shown that] the book achieves what I hoped it would," Hadfield said. "And that was for people to recognize that just because you're afraid doesn't mean you have to stop, that just because something makes you fearful doesn't necessarily mean that it's dangerous.
"[I wanted to show] that within your own fears, sometimes, lies great opportunity, and that you can turn yourself into something that you're dreaming about and that normally," he said, "you only dream in the dark." |
Consider what has happened at either end of the spectrum – the growth in poverty, on one side, and extreme wealth, on the other.
According to Luxembourg Income Study data, the share of Israel’s population living on less than half the country’s median income – a widely accepted definition of relative poverty – more than doubled, to 20.5 percent from 10.2 percent, between 1992 and 2010. The share of children in poverty almost quadrupled, to 27.4 percent from 7.8 percent. Both numbers are the worst in the advanced world, by a large margin.
And when it comes to children, in particular, relative poverty is the right concept. Families that live on much lower incomes than those of their fellow citizens will, in important ways, be alienated from the society around them, unable to participate fully in the life of the nation. Children growing up in such families will surely be placed at a permanent disadvantage.
At the other end, while the available data – puzzlingly – don’t show an especially large share of income going to the top 1 percent, there is an extreme concentration of wealth and power among a tiny group of people at the top. And I mean tiny. According to the Bank of Israel, roughly 20 families control companies that account for half the total value of Israel’s stock market. The nature of that control is convoluted and obscure, working through “pyramids” in which a family controls a firm that in turn controls other firms and so on. Although the Bank of Israel is circumspect in its language, it is clearly worried about the potential this concentration of control creates for self-dealing.
You might think that Israeli inequality is a natural outcome of a high-tech economy that generates strong demand for skilled labor – or, perhaps, reflects the importance of minority populations with low incomes, namely Arabs and ultrareligious Jews. It turns out, however, that those high poverty rates largely reflect policy choices: Israel does less to lift people out of poverty than any other advanced country – yes, even less than the United States.
Meanwhile, Israel’s oligarchs owe their position not to innovation and entrepreneurship but to their families’ success in gaining control of businesses that the government privatized in the 1980s – and they arguably retain that position partly by having undue influence over government policy, combined with control of major banks.
In short, the political economy of the promised land is now characterized by harshness at the bottom and at least soft corruption at the top. And many Israelis see Netanyahu as part of the problem. He’s an advocate of free-market policies; he has a Chris Christie-like penchant for living large at taxpayers’ expense, while clumsily pretending otherwise.
So Netanyahu tried to change the subject from internal inequality to external threats, a tactic those who remember the Bush years should find completely familiar. We’ll find out on Tuesday whether he succeeded. |
Mental Disorders of Bangladeshi Students During the COVID-19 Pandemic: A Systematic Review Background The unprecedented COVID-19 pandemic has become a global burden disrupting people's quality of life. As students are an important cohort of any country, their mental health during this pandemic has been recognized as a concerning issue. Therefore, the prevalence and associated risk factors of Bangladeshi students' mental health problems (ie, depression, anxiety, and stress) are systematically reviewed herein for the first time. Methods Adhering to the PRISMA guideline, a systematic search was performed from 1 to 5 April, 2021 in several databases including PubMed; and finally, a total of 7 articles were included in this review. Results The prevalence rates of mild to severe symptoms of depression, anxiety, and stress ranged from 46.92% to 82.4%, 26.6% to 96.82%, and 28.5% to 70.1%, respectively. The risk factors for mental health problems included factors related to (i) socio-demographics (younger age, gender, lower educational grade, urban residence, family size, currently living with family/parents, and having children in the family), (ii) behavior and health (smoking status, lack of physical exercise, more internet browsing time, and dissatisfaction with sleep), (iii) the COVID-19 pandemic (COVID-19-related symptoms, COVID-19-related perceptions, and fear of COVID-19 infection), and (iv) miscellaneous factors (losing a part-time teaching job, lack of study concentration, agitation, fear of getting assaulted or humiliated on the way to the hospital or home, financial problems, academic dissatisfaction, inadequate food supply, higher exposure to COVID-19 coverage on social and mass media, engaging in more recreational activities, and performing more household chores). Conclusion The overall prevalence rates of mental disorders in this cohort can be regarded as concerning. Thus, the authorities should consider setting up strategies to diminish the pandemic's effect on students' mental health. Introduction At the end of December 2019, COVID-19 originated in Wuhan, China. Within a short period, it spread globally, becoming the most challenging disaster since World War II. 1 Consequently, people's normal lives have reportedly been disrupted by its devastating effects. Therefore, on 11 March 2020, the World Health Organization declared this outbreak to be a pandemic due to its unprecedented and rapid spread. 2,3 As of 7 April 2021, 651,652 COVID-19 cases (including 9,384 deaths) had been identified in Bangladesh. To mitigate transmission of the outbreak at the community level, a number of public health measures, such as (i) imposing countrywide lockdowns, (ii) shutting down educational institutions, (iii) isolating infected cases, (iv) quarantining suspected cases, and (v) confining social and community movements, were implemented throughout the world. These measures are expected to be effective in suppressing transmission. For instance, incidence and death rates of 44% and 31% were identified for communities with quarantine measures, whereas they were 96% and 76% for non-quarantine communities. 7 Evidently, incidence and death rates decrease after imposing these preventive measures.
Although these measures are important for suppressing the outbreak, their subsequent mental health impacts cannot be escaped: they accumulate psychological stressors such as fear and panic, frustration and boredom, a paucity of basic supplies, a lack of authentic and reliable information, stigma, job loss, and financial recession, and these devastating issues have been implicated in subsequent suicide occurrences globally, including in Bangladesh. For instance, a systematic review 18 reported high rates of mental health outcomes (ie, up to 50.9% anxiety, 48.3% depression, 53.8% post-traumatic stress disorder, 38% psychological distress, and 81.9% stress) among the general population globally, consistent with other literature, including multi-centered cross-sectional studies 19,20 and systematic reviews. 6,21 On 8 March 2020, the first case of COVID-19 was confirmed in Bangladesh. 22 Being a country with limited resources in the healthcare setting, the authorities imposed a countrywide lockdown to lessen infection. 22,23 During the COVID-19 pandemic, however, students' psychological health became a particular concern, as all educational institutions were closed down and students' social circles, communication, and interaction processes were also changed. 24 Also, online schooling, the newly introduced method of teaching, can be unfavorable for a large number of students because of difficulties related to understanding materials, technical issues, lack of interest in attending classes, limited access to online schooling materials, and so forth. In an extreme case, unwillingness to take part in online schooling led to suicide in Bangladesh: an undergraduate student refused to take an online exam, which turned into conflict within the family and, as a result, a mother-son suicide pact occurred. 27 Thus, it is apparent that some students are facing pandemic-related obstacles such as social and economic disruptions, uncertainty about their future careers, loneliness, and fear of losing loved ones. 28,29 Considering Bangladeshi students' vulnerability to psychological stressors related to the current COVID-19 pandemic, several studies have been conducted assessing their mental health. However, generalizing the factors influencing mental health problems in this cohort requires a review, and to the best of the authors' knowledge, no attempt to review Bangladeshi students' mental health has been made. Therefore, a systematic review was conducted herein for the first time, considering the prevalence and risk factors of Bangladeshi students' common mental disorders (depression, anxiety, and stress), which is anticipated to help in adopting appropriate mental health strategies at the policy level. Search Strategy For this systematic review, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline 30 was adhered to in the present study. A systematic literature search was performed in PubMed from 1 to 5 April 2021. Within this time, additional searches were carried out in databases such as Scopus, PsycINFO, Global Health, Web of Science, CINAHL, and even Google Scholar and ResearchGate, to retrieve articles and preprints that were not indexed in PubMed.
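As an illustration only (the review itself queried each database through its own interface), the PubMed portion of such a search could be scripted with Biopython's Entrez module. The Boolean keyword combination below mirrors the one listed in the next paragraph; the e-mail address is a placeholder required by NCBI.

```python
# pip install biopython
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# Boolean strategy mirroring the keywords reported in the Methods
query = ('(depression OR anxiety OR stress OR "mental health" OR '
         '"psychological health" OR "psychological impact") '
         'AND (COVID-19 OR pandemic) AND (Bangladeshi student)')

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")
print("First PubMed IDs:", record["IdList"][:10])
```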
The search strategy used the following keywords: (depression OR anxiety OR stress OR mental health OR psychological health OR psychological impact); AND (COVID-19 OR pandemic); AND (Bangladeshi student). Study Selection Criteria First of all, each publication was screened based on its title and abstract. Then, the full-text article was evaluated to confirm whether it should be included. Articles were included in this review if they met the inclusion criteria: (i) being a Bangladeshi study concerned with student cohorts, (ii) being a cross-sectional study, (iii) being conducted after the pandemic began, (iv) utilizing established scales/tools for measurement, (v) reporting prevalence and/or risk factors of mental disorders (depression, anxiety, stress), (vi) being published in a peer-reviewed journal or as a preprint, and (vii) being published in the English language. Data Eligibility A total of 107 articles were retrieved from the databases, of which 103 articles remained after removing duplicates. The titles and abstracts of these articles were then screened, and 92 articles were eliminated. A total of 11 full-text articles were assessed for eligibility. Finally, 7 studies survived the process after the inclusion criteria were applied. Full-text articles (n=4) were excluded if they were (i) review articles, (ii) qualitative studies, or (iii) mixed-method studies (that is, both qualitative and quantitative) (Figure 1). Data Extraction A data extraction file was created in Microsoft Excel to record the information from the included studies. Data were extracted using the following criteria: (i) first author and publication year, (ii) specific group and sample size, (iii) sampling procedure, (iv) sample characteristics, (v) assessment tool, (vi) cutoff score, (vii) prevalence rate, (viii) associated risk factors, and (ix) prevalence assessment criteria. Description of the Included Studies A total of 7 cross-sectional online survey-based studies were included in this review after adhering to the inclusion criteria. All of the studies were conducted between April and May 2020. The number of participants ranged from 425 to 15,543, and only 3 studies reported participant mean age. The DASS-21 and GAD-7 were the most frequently used scales for detecting the mental disorder(s) (n=3), while other scales including the PHQ-9, HADS, and CESD-R-10 were also utilized. Three studies reported the prevalence rates of depression, anxiety, and stress and their associated factors, while the rest of the studies reported either depression and anxiety or only anxiety (Table 1). The Prevalence Rate of Mental Disorders The prevalence of depression, anxiety, and stress fluctuated across the included studies. All but one study considered the cutoff point for at least the presence of mild symptoms, which makes the prevalence estimates broadly comparable across studies. Depression A total of 6 studies reported the prevalence of depression, ranging from 46.92% to 82.4% among both college and university students, 37 and university students, 36 respectively. However, studies considering university students found the rate to be within 72% to 82.4%, 33,34,36 whereas 49.9% was reported for medical students. 32 Also, a 61.9% prevalence rate of depression was identified in one study, although details about student status were not mentioned in that study. 31 Anxiety The prevalence of anxiety was determined in all of the included articles, ranging from 26.6% to 96.82%.
Within university students, the anxiety rate was noted to be 40% to 96.82%; the lowest prevalence rate (40%) was detected using a moderate-to-severe cutoff scheme, which leads to a reasonably lower anxiety rate compared with the other studies. 33 However, 65.9% of medical students reported being anxious, 32 whereas it was 33.3% for both college and university students. 37 Stress The prevalence of stress was reported in three studies utilizing the same instrument (that is, the DASS-21), and ranged from 28.5% to 70.1%. Khan et al 37 reported the lowest stress prevalence rate among college and university students, whereas university students were identified as the most stressed. The remaining study of students (no specific information regarding student status was mentioned) found 57.05% of the participants to be suffering from stress. 31 Socio-Demographic Risk Factors Age Two studies identified age as a significant risk factor for psychological suffering. 31,34 Sayeed et al 31 found that students aged 22 years or younger (vs more than 22 years) were at 4.49-, 4.46- and 3-times higher risk of depression, anxiety, and stress, respectively, whereas students aged 18 to 24 years were observed to be more prone to these disorders than those who reported being aged 25 to 29 years. 34 Gender The relationship between gender and mental disorders was significant in a total of 4 studies, where all but one study found females to be at higher risk. 31,32,34,35 For instance, females were at 3.44-, 3.44- and 4.54-times higher risk of depression, anxiety, and stress, respectively, 31 which is similar to other studies. 32,34 The unusual finding was that male students were found to be more anxious in a study comprising a total of 15,543 university students. 35 Family Monthly Income Lower family income was significantly associated with an increased risk of mental health disorders. 31 The only study reported that students with a family income of 27,000 BDT or less were at 2.62- and 2.56-times higher risk of suffering from depressive and anxiety symptoms, respectively, than those with an income of more than 27,000 BDT. 31 Education The level of education was identified as a significant risk factor for mental disorders. 31 The only study depicted that students of secondary education status were at 11.03- and 11.15-fold higher risk of anxiety and depression, respectively, than graduate and higher-level students. 31 Residence A significant mental health effect was observed for the students' place of residence. 31,34,35 One study depicted that students residing in urban areas were at 3.22- and 3.22-times higher risk of depression and anxiety compared with those from rural areas. 31 Besides this, other studies reported that people living in urban areas were more vulnerable to psychological problems. 34,35 Family Size The number of family members was found to be a significant risk factor for mental health disorders. Sayeed et al reported that students who had a family size of 4 or fewer were at 1.89- and 1.91-times higher risk of anxiety and depressive symptoms than those with a family size greater than 4. 31 Conversely, another study found that students with five or more family members, compared with fewer than four family members, were more prone to mental disorders. 34 Number of Children in the Family The number of children in the family was identified as a risk factor for mental health problems. 31
More specifically, the study found that students from families with children under 5 years were at 2.32 times higher risk of developing depressive symptoms. 31 Living with Family or Parents Living with family or parents was also found to increase the risk of psychological suffering. 36 Participants who reported currently living with their family were at 2.6- and 1.8-times higher risk of depression and anxiety, respectively, 36 whereas not living with parents increased the rate of anxiety. 35 Behavior and Health-Related Risk Factors Smoking Status The habit of smoking was reported as a significant factor for mental health problems. 34 Islam et al 34 found that students who smoked were more prone to psychological suffering than those who did not. Physical Exercise Physical exercise was found to be a significant predictor of mental health. 34,37 Based on two studies, students who reported not engaging in physical exercise were at higher risk of mental disorders. 34,37 Internet Use Time Internet browsing time was found to be a predictive factor for psychological disorders in only one study. 34 That study reported that students who browsed the internet for 5 to 6 hours, or more than 6 hours, compared with less than 2 hours, were at increased risk of psychological problems. 34 Satisfaction with Sleep Sleep difficulties could also increase the risk of mental health problems. One study reported 34 that students who were dissatisfied with their sleep experienced greater psychological suffering compared with those who were satisfied. COVID-19 Related Perceptions Perceptions about the negative impact of the COVID-19 pandemic were reported as influential factors in psychological suffering. These perceptions included (i) that normal life had been disrupted by the COVID-19 pandemic, (ii) that the pandemic had negatively affected them, (iii) that the country's healthcare system would be overrun and people would not be able to get proper medical care, (iv) suspicion about the trajectory of COVID-19, (v) its negative impact on education, such as lagging academically, 31,36 and (vi) worrying about the effects of COVID-19. 33 Fear of COVID-19 Infection Fear of getting infected with COVID-19 was found to be a significant predictor of depression, anxiety, and stress in a couple of studies, 31,32,37 whereas relatives or friends being infected with COVID-19 also increased the level of anxiety. 35 Safa et al 32 reported that students severely tense about contact with COVID-19-infected individuals were at 3.5- and 2.75-times higher risk of being anxious and depressed, respectively, than students who had no or minimal contact. Similarly, contact with confirmed COVID-19 cases was associated with 4- and 3.17-times higher risk of stress and anxiety, respectively. 31 COVID-19 Related Symptoms Mental health suffering also increased when students reported experiencing COVID-19-related symptoms. For example, Sayeed et al 31 reported that experiencing one or more symptoms, and at least one symptom, increased the risk of stress by 1.60 and 3.06 times, whereas it was 3.02 and 4.96 times, respectively, for anxiety. Also, another study found that the symptoms of fever, dry cough, fatigue, sore throat, and difficulty breathing were influential factors for mental disorders. 37 Other Risk Factors Some other factors were also reported to increase the risk of psychological suffering.
These included (i) losing a part-time teaching job (that is, serving as a tutor), 36 (ii) reporting a lack of concentration on studies, 32 (iii) being agitated more easily, 32 (iv) fear of getting assaulted or humiliated on the way to the hospital or home, 32 (v) a worsening financial condition, 35,37 (vi) being dissatisfied with academic studies, 34 (vii) having an inadequate food supply, 37 (viii) being more exposed to COVID-19 news on social and mass media, 37 (ix) being more active in recreational activities (ie, watching TV series, reading storybooks, online and offline gaming, etc.), and (x) engaging in household chores. 37 Discussion During the pandemic, several public health measures were implemented in Bangladesh, which have been alleged to aggravate the risk of mental instability. Besides this, the online learning process, the alternative way of running educational activities after the inception of the pandemic, reportedly plays a significant role in exacerbating psychological suffering. Online schooling gives rise to issues related to lack of concentration on studies, agitation, and dissatisfaction with academic studies, which are reported to intensify psychological burdens. 32,34 Therefore, student cohorts are allegedly at risk of mental health problems because of lockdown-related issues. The present review is the first systematic approach considering students' mental health problems during the COVID-19 pandemic in Bangladesh, which is anticipated to be helpful for the mental health authorities. However, students were found to be at immense risk of psychological suffering; that is, the prevalence rates of depression, anxiety, and stress ranged from 46.92% to 82.4%, 26.6% to 96.82%, and 28.5% to 70.1%, respectively. These prevalence rates may vary across studies because of the use of different tools and different cutoff scores for the same instrument. 38 Besides this, pandemic-related issues such as the COVID-19 infection rate in the participants' area, a history of COVID-19 infection, financial difficulties, etc., may increase the probability of greater mental health suffering. 32,37 These issues were not identified across the included studies, which limits the generalizability of the findings. Despite these limitations of the included studies, the estimated prevalence rates of mental disorders should be regarded as indicating a concerning situation rather than as exact figures. Based on the present findings, risk factors for mental disorders related to basic socio-demographics included younger age, gender, lower family monthly income, lower educational grade, urban residence, family size, living with family/parents, and having children in the family. Also, behavior and health-related risk factors included being a smoker, lack of physical exercise, more internet use time, and dissatisfaction with sleep. Negative COVID-19 pandemic-related perceptions and social stressors, COVID-19-related symptoms, fear of COVID-19 infection, and fear of getting assaulted or humiliated on the way to hospital or home acted as triggers that escalated mental suffering. One study found that COVID-19 patients were at higher risk of psychological suffering than psychiatric patients and healthy control individuals. 43
Finally, other risk factors included losing a part-time teaching job, lacking concentration on studies, agitation, fear of getting assaulted or humiliated on the way to hospital or home, financial problems, dissatisfaction with academic studies, an inadequate food supply, greater exposure to COVID-19-related news, engaging in more recreational activities (ie, watching TV series, reading storybooks, online and offline gaming, etc.), performing more household chores, and so on. Being a resource-limited country, Bangladesh lacks the capability to combat the current awful situation. 22,44 Therefore, involving medical students in the health sector might be effective in fighting the COVID-19 pandemic in both hospital settings and at the community level. 45 A focus on mental wellbeing activities during the pandemic has possibly not yet been put into action in the country as expected. However, it has become urgent for the health authorities to initiate appropriate measures concerning the psychological suffering of this cohort, and the present review might have some potential implications. First of all, based on the present findings (eg, the risk factors for mental health problems), special attention should be given to at-risk individuals, considering their vulnerability to serious psychological disorders. Also, the mental health and educational authorities are advised to arrange frequent webinars on mental wellbeing to motivate and help students take care of their mental health problems. These webinars should also be concerned with the de-stigmatization of mental health issues. Besides, the government should provide beneficial funds considering the financial conditions of the respective students' families, which might reduce the mental suffering of students in need. Individual efforts such as avoiding smoking, taking part in regular physical exercise (at least 30 minutes daily of running, cycling, gym work, and so on), browsing the internet less, and sleeping adequately might decrease the psychological impact on students. In addition, the most evidence-based treatment is cognitive behavior therapy (CBT), and especially internet-based CBT can be helpful for mental health interventions during the pandemic. 46,47 Finally, the authorities should direct the media and news channels not to spread misinformation, because social media exposure also affected the mental state of the students. Conclusions Psychological suffering among Bangladeshi students during the pandemic is reported to be considerably high. The present review provides an initial overview of depression, anxiety, and stress prevalence rates and associated risk factors among Bangladeshi students during the COVID-19 pandemic. Thus, this systematic approach might help the country's policymakers take the necessary steps in light of the findings reported herein. Ethics Statements Being a review of secondary data, this study did not require ethical approval. Besides, there was no conflict of interest in relation to the present work, and informed consent was not applicable.
Model predictive control for series-parallel plug-in hybrid electrical vehicle using GPS system Aiming at the problem of the Hybrid Electric Vehicle (HEV) control strategy being sensitive to the road situation, a model predictive control strategy was designed using length information about the drive mission from a global positioning system (GPS) to improve robustness against the road situation, based on a Matlab/Simulink vehicle dynamic model for a series-parallel plug-in HEV. In this paper, a blended driving mode (BDM) was realized with the assistance of a variable SOC reference that depends on knowledge of the mission length and the distance the vehicle has driven in the cycle. The test drive cycles include data based on a standard test and the real drive cycle of bus route 600 in Tianjin. The simulation results showed that the control strategy could improve fuel consumption by 6% to 24% compared with the charge-depleting/charge-sustaining (CDCS) strategy.
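The abstract does not spell out the exact form of the distance-based SOC reference, so the following is only a minimal sketch of one common blended-depletion formulation: the reference SOC is interpolated linearly between an initial and a minimum value over the GPS-reported mission length, and the MPC cost function would then penalize deviation from this reference. The function and parameter names are illustrative, not taken from the paper.

```python
def soc_reference(distance_driven_km, mission_length_km,
                  soc_initial=0.9, soc_min=0.3):
    """Distance-based SOC reference for a blended driving mode (BDM).

    Linearly depletes the battery from soc_initial to soc_min over the
    whole mission, using the GPS-provided mission length. This is an
    illustrative formulation, not the exact reference from the paper.
    """
    if mission_length_km <= 0:
        return soc_initial
    progress = min(max(distance_driven_km / mission_length_km, 0.0), 1.0)
    return soc_initial - (soc_initial - soc_min) * progress

# Example: halfway through a 40 km route, the target SOC is 0.6
print(soc_reference(20.0, 40.0))
```

Under such a scheme, the controller is free to blend engine and battery power at every point of the route instead of depleting the battery first and sustaining it afterwards, which is the intuition behind the reported fuel-consumption advantage over the CDCS baseline.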
Evaluation and Management of Patients with Hematoma after Gynecologic and Obstetric Surgery Objective: Postoperative hematoma following abdominal surgery is relatively rare and mainly depends on the type of surgery. Specific treatment including surgery or interventional radiology is sometimes necessary. The aim of this study is to evaluate cases of postoperative hematoma after gynecologic and obstetric surgery. Study Design: This is a retrospective cohort study of 30 patients with hematomas that developed after gynecologic and obstetric surgery. We included patients who were hospitalized with a diagnosis of postoperative hematoma between June 2017 and April 2019 at Gazi Yasargil Training and Research Hospital of Health Sciences University. Hematomas occurring after endoscopic surgery and episiotomy were not included. The diagnosed cases were divided into three groups: wound hematoma, rectus sheath hematoma, and intra-abdominal hematoma (intraperitoneal and retroperitoneal). All cases were assessed for patient demographics and clinical findings, hematoma characteristics, treatment methods, and outcomes. Results: A total of 30 patients were included in the study with a mean age of 33.0±8.6 years. The incidence of hematoma was 0.2%. The mean C-reactive protein was 37.9±47.4 mg/dL at admission and 14.6±25.8 mg/dL at discharge, respectively. The decrease was statistically significant (p < 0.001). The mean hemoglobin was 10.6±2.1 g/dL at admission and 10.7±1.5 g/dL at discharge. Fever was detected in 7 (23.3%) patients. Only 12 patients (40%) were followed up by observation and symptom management. In 10 (33.3%) patients, antibiotics were included in the treatment due to infection. In addition, 4 patients (13.3%) had relaparotomy, 5 patients (16.7%) underwent percutaneous radiological drainage, and 8 (26.7%) received blood transfusion. The mean time to resorption of the hematoma was 4.6 ± 2.0 days. Evaluation of the hematoma locations revealed that 14 patients (46.7%) had wound hematoma, 7 patients (23.3%) had rectus sheath hematoma (type I: 2 cases, type II: 3 cases, type III: 2 cases), 8 patients (26.7%) had pelvic hematoma, and 2 patients (6.7%) had a retroperitoneal hematoma. The mean hematoma size was 68.1±15.18 mm. Conclusions: In cases of hematoma resistant to antibiotic treatment or non-resorbable hematoma, percutaneous catheter drainage can be considered as an alternative to surgical intervention.
Mr. Navid Hanif has been appointed to the post of Director, Office for ECOSOC Support and Coordination, where he had been Acting Director since September 2011 and also chaired the UN DESA Task Force on Peacebuilding.
In January 2010, Mr. Hanif became the Head of the newly established UN DESA Strategic Planning Unit and was also designated as the Secretary of the Executive Committee of Economic and Social Affairs effective 1 August 2010.
During 2004-2009, he served as the Chief of the Policy Coordination Branch and led the process of designing and launching new functions of ECOSOC, particularly the Annual Ministerial Review.
In 2005, he was sent on a special one year assignment as a Principal Officer in the Office of the United Nations Secretary-General. He worked there as a member of the team for the 2005 World Summit, which adopted a number of new initiatives, including new functions for ECOSOC.
Before joining UN DESA in 2001, Mr. Hanif served for 6 years at the Permanent Mission of Pakistan to the UN where he chaired and facilitated negotiations on a number of important resolutions, which led to the convening of major UN conferences.
Mr. Hanif holds a Master's in International Political Economy from Columbia University, New York, and a Master's in English Literature from Government College University, Lahore.
Promoter Specific Methylation of SSTR4 is Associated With Alcohol Dependence in Han Chinese Males Alcohol dependence (AD), a disease that can be affected by environmental factors through epigenetic modifications such as DNA methylation changes, is one of the most serious and complex public health problems in China and worldwide. Previous findings from our laboratory using the Illumina Infinium Human Methylation450 BeadChip suggested that methylation at the promoter of SSTR4 was one of the major forms of DNA modification in alcohol-dependent populations. To investigate whether DNA methylation levels of the SSTR4 promoter influence alcohol-dependent behaviors, genomic DNA was extracted from peripheral blood samples of 63 subjects with AD and 65 healthy controls, and pyrosequencing was used to verify the results of the BeadChip array. Linear regression was used to analyze the correlation between the methylation levels of the SSTR4 promoter and the scores of alcohol dependence scales. Gene expression of SSTR4 in brain tissue was obtained from the Genotype-Tissue Expression (GTEx) project and the Human Brain Transcriptome database (HBT). We found that the methylation levels of SSTR4 in the AD group were significantly lower than in healthy controls (two-tailed t-test, t = 14.723, p < 0.001). In addition, there were only weak to moderate correlations between the methylation levels of the SSTR4 promoter region and the scale scores of the Alcohol Use Disorders Identification Test (AUDIT), Life Events Scale (LES) and Wheatley Stress Profile (WSS) based on linear regression analyses (AUDIT: R² = 0.35, p < 0.001; LES: R² = 0.27, p < 0.001; WSS: R² = 0.49, p < 0.001). The hypomethylated status of SSTR4 may be involved in the development of AD and increase the risk of AD persistence in Han Chinese males. INTRODUCTION Alcohol dependence (AD) is a common chronic disorder which imposes a substantial burden on global health. According to World Health Organization (WHO) reports, there were approximately 3.3 million alcohol-related deaths worldwide in 2014, including 320,000 deaths among young individuals aged 15 to 29 (Organización Mundial de la Salud, 2014). It is estimated that more than 1.8 million persons were dependent on alcohol, and 1.6 million persons had a lifetime history of alcohol abuse, in Germany. Family, twin and adoption studies have indicated a genetic basis for AD susceptibility, with heritability estimates ranging from 40% to 70% (Enoch and Goldman, 2001; Agrawal and Lynskey, 2008). In addition, environmental factors may play important roles in AD development through epigenetic regulation of gene expression without DNA sequence alterations. Epigenetic regulatory mechanisms can induce stable changes in gene expression with a range of phenotypic outcomes via DNA methylation, histone acetylation, chromatin remodeling, and noncoding RNA regulation (Kouzarides, 2007). Cytosine methylation at CpG dinucleotide-rich regions (CpG islands) is a common epigenetic modification of DNA, where methylation plays a pivotal role in mediating the regulation of gene transcription by affecting transcription factor binding. Numerous studies have indicated that although most genomic CpGs are stably methylated, CpG islands near or within promoter regions commonly maintain low methylation levels to allow dynamic transcriptional activation of the related genes, and their dysregulated methylation contributes to disease progression under environmental challenges.
It has been thought that disturbances of epigenetic regulation also participate in the pathophysiological processes of AD (Basavarajappa and Subbanna, 2016). Other studies have also found hypomethylation of several genes such as GDAP1 correlated with increased alcohol consumption (), and elevated promoter methylation of the N-methyl-D-aspartate 2b receptor subunit gene () and the proopiomelanocortin gene () was detectable in DNA from the peripheral blood of patients with AD. These alterations in DNA methylation might impact the transcriptional profile and the susceptibility to AD (). For example, in animal models of AD, up-regulation of Gdnf expression due to altered methylation of the core promoter or a negative regulatory element has been observed in the nucleus accumbens, a key brain region associated with reward and addictive behaviors (). Moreover, specific genetic variants at methylation quantitative trait loci might also influence AD susceptibility by altering DNA methylation status (). However, evidence from laboratory-based data may not be conclusive, and epidemiological studies are required to better understand the biological mechanisms of alcohol addiction, which could aid in the clinical treatment or prevention of AD. Somatostatin receptor 4 (SSTR4) is a brain-specific G-protein-coupled receptor for somatotropin-release inhibitory factor implicated in the pathophysiological processes of anxiety and depression-like behavior (). Previous studies have shown that SSTR4 is expressed in areas involved in learning and memory processes, and that activation of hippocampal SSTR4 leads to a switch from hippocampus-based memory to dorsal striatum-based behavioral responses (). In addition, experimental data suggest SSTR4 might represent an important therapeutic target for the treatment of Alzheimer's disease and seizures, yet direct evidence for a role of SSTR4 in alcoholism is still lacking. Our previous genome-wide methylation study utilizing the Illumina Infinium Human Methylation450 (Illumina Inc., San Diego, California) on DNA extracted from the peripheral blood (PB) of 10 AD subjects and 10 paired siblings without AD revealed 1,581 differentially methylated CpG positions (including 865 hypomethylation islands and 716 hypermethylation islands), which were associated with 827 well-annotated reference genes (). Our data suggested novel potential epigenetic targets relevant to AD. DNA pyrosequencing has also been used to examine the two top-ranked hypo- or hypermethylated AD-related genes from the Illumina microarrays as determined by DAVID. Linear regression analysis showed good correlation between the DNA microarray and pyrosequencing results. In alcohol-dependent subjects, the most prominent hypomethylated CG dinucleotide sites were located in the promoter of SSTR4. The objective of the current research was to validate the demethylated status of SSTR4 in Han Chinese alcohol-dependent males. Subjects The current research utilized clinical and methylation microarray (Illumina Infinium Human Methylation450) data extracted from our previous analyses together with newly recruited subjects. This study was approved by the Ethics Committee of the Second Affiliated Hospital of Xinxiang Medical University (2015 Ethics number 27), and written or oral informed consent was obtained from each participant. The validation cohort comprised blood samples from 128 male participants (63 AD and 65 healthy controls) recruited from community or medical clinic settings in northern Henan Province.
A consistent diagnosis of AD was made by at least two psychiatrists according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) (American Psychiatric Association, 1994). The Alcohol Use Disorders Identification Test (AUDIT, score range 0-40) was utilized to measure the quantity and frequency of alcohol consumption, and an AUDIT score greater than or equal to 8 suggested problematic drinking and AD tendency (). The Life Events Scale (LES) () and Wheatley Stress Profile (WSS) were used to assess negative life events and possible stress factors associated with AD. Controls were screened to exclude those with alcohol or drug abuse or dependence. We also ruled out subjects with other substance misuse, comorbid major psychiatric disorders, serious medical complications, or severe neurological or somatic illnesses. DNA Extraction and Amplification The QIAmp DNA Blood Mini Kit (Qiagen, Hilden, Germany) was utilized to extract and purify genomic DNA from PB. Forty microliters of DNA solution were treated with the CT conversion reagent included in the EpiTect Plus LyseAll Bisulfite Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. The concentration of bisulfite-treated DNA was determined using a Thermo Nanodrop 2000 spectrophotometer, and the DNA volume was determined to be at least 3 µl. Because bisulfite conversion changes the nucleotide composition of DNA and yields smaller DNA fragments, the results of subsequent experiments can be suboptimal; therefore, the whole genome was amplified after bisulfite conversion while the converted sequence was maintained. One hundred nanograms of bisulfite-treated DNA were amplified using an EpiTect Whole Bisulfitome Kit (Qiagen, Hilden, Germany) under the following conditions: 8 h at 28°C and 5 min at 95°C. Pyrosequencing was performed using PyroMark Gold Q96 Reagents (Qiagen, Hilden, Germany). We applied the Biotage PyroMark MD System (Biotage) to conduct pyrosequencing reactions via sequential nucleotide additions in predetermined orders based on the manufacturer's instructions. Raw sequencing data were quantitatively analyzed using Pyro Q-CpG 1.0.9 software (Biotage). The methylation level of a CpG region was assessed as the percentage of methylated cytosines (M) over the total of methylated and unmethylated cytosines (M + U) in the genome. Statistical Methods The Illumina DNA methylation microarray data were utilized in reference to our previous findings. DNA methylation levels between 10 AD subjects and 10 paired siblings were compared using the two-tailed paired Student's t-test based on the unequal variance assumption. The methylation values of the SSTR4 promoter region in the validation cohort (63 subjects with AD and 65 healthy controls) were analyzed using a two-tailed unpaired t-test with unequal variance. Linear regression analysis was used to examine the associations between the methylation levels of the SSTR4 promoter region and the scale scores of AUDIT, LES and WSS. Gene expression of SSTR4 was confirmed using the Genotype-Tissue Expression database (GTEx, www.gtexportal.org) and the Human Brain Transcriptome database (HBT, www.hbatlas.org) (GTEx Consortium, 2015). A two-tailed p value less than 0.05 was considered statistically significant. An overview of subject recruitment and promoter methylation analysis is presented as a flow chart in Figure 1.
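For illustration, the group comparison and regression described above can be sketched in a few lines of Python with SciPy. This is a minimal sketch rather than the authors' analysis code; the arrays are synthetic placeholders standing in for the pyrosequencing methylation percentages (computed as 100 x M / (M + U)) and the AUDIT scores.

```python
# Minimal sketch (not the authors' analysis code): Welch's two-tailed t-test on
# methylation levels and a linear regression of methylation against AUDIT scores.
# All arrays below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
meth_ad = rng.normal(10.0, 2.0, 63)    # placeholder: % methylation, AD group (n = 63)
meth_ctrl = rng.normal(15.0, 2.0, 65)  # placeholder: % methylation, controls (n = 65)
audit = rng.integers(0, 41, 128).astype(float)  # placeholder: AUDIT scores (0-40), all 128 subjects

# Two-tailed unpaired t-test with unequal variances (Welch's t-test)
t_stat, p_val = stats.ttest_ind(meth_ad, meth_ctrl, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_val:.3g}")

# Linear regression of methylation level against AUDIT score
meth_all = np.concatenate([meth_ad, meth_ctrl])
slope, intercept, r, p, se = stats.linregress(audit, meth_all)
print(f"slope = {slope:.3f}, R^2 = {r**2:.2f}, p = {p:.3g}")
```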
The Correlation Between the Methylation Levels of the SSTR4 Promoter Region in Blood and the AUDIT, LES, WSS Scores Considering the potential gender effect on genome-wide DNA methylation, only male subjects were recruited in this study. Overall, the mean age of subjects was similar between the AD group (39.1 ± 7.3 years) and healthy controls (39.6 ± 8.1 years), with no significant difference (p = 0.722). The scale scores of AUDIT, LES and WSS were higher for AD subjects than for the control group (25.4 ± 7.4 vs. 8.4 ± 3.7, 22.2 ± 5.6 vs. 10.2 ± 3.5, 25.2 ± 5.1 vs. 9.2 ± 2.9, respectively, p-value < 0.001). The results are summarized in Table 1. Linear regression analysis revealed that the methylation levels of SSTR4 had only weak to moderate correlations with the scores of AUDIT, LES and WSS, as shown in Figure 2 (AUDIT: R^2 = 0.35, p < 0.001; LES: R^2 = 0.27, p < 0.001; WSS: R^2 = 0.49, p < 0.001). The Methylation Difference in the SSTR4 Promoter Region in 10 Paired Siblings Using Microarray Compared to Case-Controls With Pyrosequencing The results of the previous DNA methylation microarrays showed that the difference in methylation of the SSTR4 promoter region between cases and paired siblings was statistically significant (t = 2.348, p = 0.043, Figure 3A). Likewise, the difference in methylation of the SSTR4 promoter region between cases and controls confirmed by pyrosequencing was statistically significant (t = 14.723, p < 0.001), and hypomethylation of the SSTR4 promoter region was observed in AD cases (Figure 3B). Gene Expression of SSTR4 Using GTEx and HBT Expression of SSTR4 across various tissues revealed relatively strong expression in brain tissue. Although SSTR4 is highly expressed in the cerebellar hemisphere and cerebellum, it is moderately expressed in the nucleus accumbens (NAC), prefrontal cortex (PFC), amygdala (AMY) and hippocampus (HPC) (Figure 4A). These regions are related to the reward pathway of addiction in the brain. Temporal expression analyses showed that the expression level of SSTR4 was relatively stable across the lifespan (Figure 4B). DISCUSSION These findings revealed that, compared to controls, AD patients experienced more negative life events (LEs) and higher stress levels, which indicates that environmental factors play a role in the formation and maintenance of AD. This result is also consistent with our previous clinical research on 10 AD cases and 10 paired siblings without AD as controls (). The study of Linda Azucena Rodríguez Puente et al. showed that stressful events have the potential to trigger the consumption of substances such as alcohol, and that stressful events are more frequent in those who consume alcohol than in those who do not. Likewise, the study of Marketa Krenek showed that although alcohol use severity did not predict changes in recent LEs, the emergence of LEs is associated with subsequent increases in drinking severity. This article also provided partial support for the hypothesis that distal LEs influence changes in both LEs and heavy alcohol use over time (). Ethan H indicated that although LEs may not necessarily contribute to the maintenance of long-term alcohol abuse among heavy drinkers with high addiction severity, daily stressful events predicted increases in daily drinking throughout the study for all heavy drinkers, and stress may influence the emergence of early drinking behaviors (). These studies' findings are consistent with our research.
FIGURE 4 | Gene expression of SSTR4 by GTEx and HBT. (A) Spatial expression pattern of the SSTR4 gene in human brain regions from GTEx. TPM = transcripts per kilobase million. Expression threshold: >0.1 TPM and ≥6 reads in 20% or more of samples. Box plots are shown as median and 25th and 75th percentiles; points are displayed as outliers if they are above or below 1.5 times the interquartile range. Data source: GTEx Analysis Release V8 (dbGaP Accession phs000424.v8.p2). (B) Dynamic expression pattern of the SSTR4 gene in 6 human brain regions across the lifespan from HBT. NCX, neocortex; CBC, cerebellar cortex; MD, mediodorsal nucleus of the thalamus; STR, striatum; AMY, amygdala; HIP, hippocampus. Period 1, Embryonic development; Period 2, Early fetal development; Period 3, Early fetal development; Period 4, Early mid-fetal development; Period 5, Early mid-fetal development; Period 6, Late mid-fetal development; Period 7, Late fetal development; Period 8, Neonatal and early infancy; Period 9, Late infancy; Period 10, Early childhood; Period 11, Middle and late childhood; Period 12, Adolescence; Period 13, Young adulthood; Period 14, Middle adulthood; Period 15, Late adulthood.
In addition, the follow-up results of our study revealed that the lower the methylation value of SSTR4 was, the higher the AUDIT, LES and WSS values were. According to this result, stressful events (higher values of LES and WSS) may contribute to alcohol use disorder and AD (higher value of AUDIT), and then influence the methylation of SSTR4 (hypomethylation). In contrast, hypomethylation of SSTR4 may induce addictive behavior. It can be inferred that stressful events that lead to the hypomethylation of SSTR4 mediate alcohol abuse. A study by Scheich et al. revealed that activation of SSTR4 in the central nervous system plays a role in the modulation of behavioral responses to acute stress and of neuroendocrine changes induced by mild chronic stress in mice, suggesting involvement of SSTR4 in anxiety and depression-like behavior (), consistent with our research. Through these studies, we can better understand how LEs and higher stress act as risk factors for AD. This result suggests treatment options aimed at reducing the negative effects of LEs and higher stress, in order to reduce the emergence and maintenance of AD. AD and drugs of abuse have a moderate to high heritability component (). In addition to variation in the underlying sequence, epigenetic modification of gene sequences may also be associated with substance dependence (). The present study suggested that there was significantly lower DNA methylation of the SSTR4 promoter region in AD cases than in healthy controls. The sample sizes of AD cases and controls were increased for verification, confirming the results obtained from the initial study of 10 AD cases and 10 paired siblings without AD as controls. Somatostatin (SST), also known as somatotropin-release inhibitory factor, is a cyclopeptide that plays an important role in inhibiting hormone secretion and neuronal excitability (). Somatostatin receptor 4 (SSTR4) belongs to the SSTR family of G protein-coupled transmembrane receptors (GPCRs) comprising five members (SSTR1-5), which trigger various transmembrane signaling pathways (Reisine and Bell, 1995; Csaba and Dournaud, 2001). SSTR4 is expressed in areas involved in learning and memory processes (). Gastambide et al.
found that hippocampal SSTR4 is functionally involved in a switch from hippocampus-based memory to dorsal striatum-based behavioral responses. Through biological databases, we found that SSTR4 is highly expressed in brain tissue, and moderately expressed in the NAC, PFC, AMY and HPC. Psychostimulants act on major brain regions including the ventral tegmental area (VTA), NAC, PFC, AMY, and HPC (). Furthermore, a study by Moneta D indicated that SSTR4 enhanced α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor-mediated excitatory signaling () and that AMPA receptors are related to addiction (). These results suggest that SSTR4 may be related to reward and addiction. Temporal expression analyses showed that the expression level of SSTR4 was relatively stable over time. However, our study showed hypomethylation of SSTR4 in AD cases, which indicates potentially high expression of SSTR4. Accordingly, expression of SSTR4 might be an upstream regulator of alcohol abuse, as can be inferred from previous findings; alternatively, alcohol abuse may ultimately affect SSTR4 expression. At present, there are few reports about methylation of SSTR4 in relation to AD. Dominika Berent interviewed 176 AD cases and 127 healthy controls to assess genotyping for the SSTR4 rs2567608 polymorphism. The results revealed that AD cases and controls did not differ significantly in SSTR4 rs2567608 genotype and allele frequencies (a). This study addressed the relationship between the SSTR4 genotype and AD, but did not examine the methylation of SSTR4. Another study interviewing the same participants revealed that the SSTR4 promoter region was methylated in 21.6% of patients with AD and only 2.3% of controls (b), suggesting a difference in methylation levels of SSTR4 between AD cases and controls. This result is consistent with our present research in some respects. The present study has several limitations. First, the sample size was relatively small, so further research will enlarge the sample size to verify the methylation levels of SSTR4 by pyrosequencing. Second, we did not examine SSTR4 expression levels in blood samples because they were unavailable for RNA extraction; further research will analyze the correlation between DNA methylation and SSTR4 expression in blood samples. Third, it remains unclear whether epigenomic changes in peripheral cells can fully reflect the true DNA methylation status of the brain. Nevertheless, tissue biopsies in every alcohol-dependent subject are neither ethical nor practical, and previous studies have shown that methylation of CpG positions in PB might track part of the changes in the central nervous system. Last, males with AD were relatively easy to recruit, so subjects in this study were only males; in view of the potential effect of gender on methylation, this may be a limitation of this study, and future research may recruit females with AD. In summary, the methylation of the promoter region of SSTR4 differs between AD cases and controls. This study provides novel insights that heavy drinking likely results in alteration of epigenetic modification, which might in turn promote AD development. This hypothesis would integrate the understanding of methylation mechanisms in the process of gene-environment interactions in alcohol-dependent patients. In addition, the SSTR4 gene may represent a new biomarker for AD, which offers new ideas for the treatment of AD.
Given these findings, additional effective therapeutic options may be developed in the future. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author. ETHICS STATEMENT This study was approved by the Ethics Committee of the Second Affiliated Hospital of Xinxiang Medical University. |
Story highlights Social media made Steubenville, Ohio, a household name for the wrong reasons
When two boys were arrested there for rape, few in the small town wanted to talk about it
A school official is accused of covering up for them
Photos and videos that were made of the drunken victim enraged bloggers
The small town of Steubenville became a household name for the wrong reasons, thanks to social media, but when two teenage boys were arrested there, accused of raping a 16-year-old girl, very few people in the Rust Belt town in Ohio were eager to talk.
And someone may have tried to cover up for them. An Ohio school official was jailed Monday without bond after being indicted in connection with the case, the Ohio Attorney General's Office said.
William Rhinaman, 53, director of technology at Steubenville High School, faces four counts: tampering with evidence, obstructing justice, obstructing official business and perjury in connection with the case, Ohio Attorney General Mike DeWine said. Rhinaman was arrested Monday.
If convicted, he could face four years behind bars, more time than the two convicted boys will serve.
Details of the indictment, including what kind of evidence was allegedly tampered with, were not immediately available.
"This is the first indictment in an ongoing grand jury investigation," DeWine said in a prepared statement. "Our goal remains to uncover the truth, and our investigation continues."
Rape convictions
Authorities said star Steubenville High School football players Ma'lik Richmond and Trenton Mays, who were respectively 16 and 17 at the time, raped the girl during a series of end-of-summer parties in August 2012.
Photos and videos of the victim, sent out with lurid text messages, hit social media and attracted the attention of bloggers, who questioned everything from the behavior of the football team to the integrity of the investigation.
Richmond and Mays were convicted of rape in March after a trial that divided their football-crazed town of less than 20,000 souls. Mays also was found guilty of disseminating a nude photo of a minor.
At the heart of the case was the question of whether the victim, who testified that she remembered little, was too drunk to understand what was happening to her and too drunk to consent.
Richmond was sentenced to a minimum of one year in a juvenile correctional facility. Mays got two years.
Silent town
After the two teenagers were convicted, DeWine revealed that 16 people had refused to talk to investigators. A grand jury would determine whether other crimes had been committed.
Rhinaman will be arraigned in Steubenville at the Jefferson County Court House on Wednesday, attorney general's office spokesman Dan Tierney said. Court-appointed counsel will represent him.
"I am aware of the situation, and I will get you a press release on Tuesday, " Mike McVey, superintendent for Steubenville City Schools, said in an e-mail response.
Bob Fitzsimmons, the family attorney representing the 16-year-old victim who was raped, told CNN that the grand jury's first indictment is a significant first step.
"I think it's important that this shows some fruits from the investigative grand jury and also considers the importance of those responsible for reporting and/or preserving evidence after a crime is committed involving a child, in this case a girl 16 years of age," Fitzsimmons said.
CNN was unable to reach Rhinaman's attorney for comment Monday. |
Four men plead not guilty to plotting to kill staff of Jyllands-Posten after it published cartoons of Prophet Mohammed.
Four men on trial over a suspected plot to murder staff of a Danish newspaper that first published controversial cartoons of the Prophet Mohammed have pleaded not guilty.
The men appeared in court on Friday in the Danish capital Copenhagen. The prosecution named them as Sahbi Ben Mohamed Zalouti, Munir Awad and Omar Abdalla Aboelazm, all Swedish citizens of Tunisian, Lebanese and Moroccan origin respectively.
The fourth man, Mounir Ben Mohamed Dhahri, a Tunisian national living in Sweden who pleaded guilty to arms possession, faces charges of "attempted terrorism".
Prosecutors say the four were plotting to "kill a large number of people" at the Jyllands-Posten daily's offices in Copenhagen when they were arrested on December 29, 2010.
Jyllands-Posten published 12 cartoons in 2005 of the Prophet Mohammed that Muslims believed were insulting, sparking violent and sometimes deadly protests around the world.
The men were in possession of a machine gun with a silencer, a revolver, 108 bullets, reams of duct tape, and $20,000 when they were arrested.
Danish police, who had been collaborating with their Swedish counterparts and had been wiretapping the suspects, arrested them just after hearing them say they were "going to" the newspaper office.
Henrik Plaehn, one of the two prosecutors, told the Glostrup district court that a ceremony celebrating the Sporting Newcomer of the Year at the newspaper was likely the target of the suspected plot.
In addition to a number of sports celebrities, Danish Crown Prince Frederik was present at the ceremony.
"It appears this event was the target," Plaehn said, according to Jyllands-Posten. But he stressed the prosecution did not know if the four accused had known the prince was there, and did not think they had been after him.
Plaehn also said there was evidence linking the plot to Pakistan, but said he would provide more details later in the trial, which is set to last until June.
Sweden's foreign ministry told the AFP news agency last year that one of the men, Munir Awad, had been previously arrested for suspected links to terrorism groups.
Awad was arrested in Somalia by Ethiopian troops in 2007, and again in Pakistan two years later, when he was travelling with his wife, their two-year-old son and Mehdi Ghezali, a Swede who had spent two years at Guantanamo Bay.
Sahbi Ben Mohamed Zalouti had also previously been arrested in Pakistan for entering the country illegally.
The prosecution has not yet said what penalty it will be seeking beyond a request that the four, after serving their sentences, be expelled from Denmark and never allowed to return.
According to public broadcaster DR, they all risked "a historically severe punishment", with up to 14 years behind bars. |
Maternity leave is seen by some mothers as a time to reflect on their past career and think about making a change. For some 54,000 women in the UK who lose their jobs during pregnancy or new motherhood, it becomes imperative. Others feel they'd like more flexibility.
And yet, while the new mother isn't being paid to do her job - she is certainly still working. Looking after a baby requires all your time and energy. You don't have the privilege of ending the day at 5pm and heading home for an evening off; it continues through the night.
This means that career planning takes place while the baby naps - if indeed, the mother isn't having a synchronised nap, to catch up on lost sleep - or when a partner or grandparent can help out. But eventually, if she decides to make the leap into freelancing, she's faced with an issue.
The freelance (to be) mum needs time to work. She needs someone to look after her baby. But how can she pay for someone to care for her child if she's yet to launch her freelance career? It becomes a 'chicken and egg' situation; what comes first - payment or childcare?
So we’re finally trying to sort some childcare for the kids. I think it’s safe to say I’m actually self employed now and I have to stop living as if one day I’m going to wake up and everything will be gone. But trying to find a childminder who understands the uncertainty of working on the internet is proving a little bit tricky. For instance this week, I only have two engagements (none of which pay actual cash but in the internet industry you have to spend a lot of time building) so it’s a quiet week. But next week, I won’t have a moment to myself. @iam_papab is at capacity. Especially since I’ve stared sending him to some things on my behalf (he is cuter and way more sociable so it’s a win/win) so it really is time to trust someone else with our kids and that’s hard.
For Candice and her partner, each week is different. One may present plentiful freelance work opportunities, while the next will be more for networking, planning, preparing. So how do you decide on the childcare hours you need, when work hours are so changeable?
At first, it might work well to commit to fewer childcare hours, so that you're not spending out a lot more than you're earning. Perhaps one or two days a week, to kick it off. This will also help your child(ren) to settle in with their new childminder.
Now your working week has some structure. So when a job opportunity arises, you'll be able to say: I'm free to work on Tuesdays and Wednesdays, giving clear boundaries to the client. If they need you to work on a Thursday, perhaps you can arrange ad-hoc childcare for that day.
If you find these two days are being easily filled with paid work and you would like to earn more, you can commit an additional day of childcare. Gradually build it up so that you know on the days you have childcare, you'll be guaranteed to have work.
Of course, week to week, it may differ. But it's about building up your childcare commitments alongside your clients. It is also beneficial to secure repeat work so that you know, for instance, each Tuesday you're working with one specific client.
This may be your bread-and-butter work and will hopefully cover a second day of childcare, too, so that if you don't have paid work on that day, you'll be free to research, pitch, invoice, start your tax return - or do any of the other admin that is required of freelance parents.
It can be tricky, knowing when to start investing in your freelance career - and that's what childcare means; it's a commitment - but at some stage, if you really want to give it a go as a freelancer, you need to have the confidence to say: this is what I'm doing now, and make that call. |
Context-Aware Deep Spatio-Temporal Network for Hand Pose Estimation from Depth Images As a fundamental and challenging problem in computer vision, hand pose estimation aims to estimate the hand joint locations from depth images. Typically, the problem is modeled as learning a mapping function from images to hand joint coordinates in a data-driven manner. In this paper, we propose the Context-Aware Deep Spatio-Temporal Network (CADSTN), a novel method to jointly model the spatio-temporal properties for hand pose estimation. Our proposed network is able to learn representations of the spatial information and the temporal structure from image sequences. Moreover, by adopting an adaptive fusion method, the model is capable of dynamically weighting different predictions to lay emphasis on sufficient context. Our method is examined on two common benchmarks; the experimental results demonstrate that our proposed approach achieves the best or the second-best performance compared with state-of-the-art methods and runs at 60 fps. I. INTRODUCTION Hand pose estimation is a fundamental and challenging problem in computer vision, and it has a wide range of vision applications such as human-computer interfaces (HCI) and augmented reality (AR). So far, a number of methods have been proposed on this topic, and significant progress has been made with the emergence of deep learning and low-cost depth sensors. The performance of previous RGB-image-based methods is limited by cluttered backgrounds, while a depth sensor is able to generate depth images in low-illumination conditions and makes hand detection and hand segmentation simple. Nevertheless, it is still difficult to accurately estimate 3D hand pose in practical scenarios due to data noise, the high degree of freedom of hand motion, and self-occlusions between fingers. The typical pipeline for estimating hand joints from depth images can be separated into two stages: 1) extracting robust features from depth images; 2) regressing the hand pose based on the extracted features. Current approaches concentrate on improving the algorithms in each of these stages. Specifically, in, the features extracted by multi-view CNNs are utilized for regression; then the same authors in replace the 2D CNN by a 3D CNN to fully exploit 3D spatial information. Besides, hierarchical hand joint regression and iterative refinement for hand pose are deployed in several approaches. In this paper, we propose a novel method named CADSTN (Context-Aware Deep Spatio-Temporal Network) to jointly model the spatio-temporal context for 3D hand pose estimation. We adopt a sliced 3D volumetric representation, which keeps the structure of the hand, to explicitly model the depth-aware spatial context. Moreover, motion coherence between successive frames is exploited to enforce the smoothness of the predictions for images in a sequence. The model is able to learn representations of the spatial information and the temporal structure in image sequences. Moreover, the model is capable of combining the spatio-temporal properties for the final prediction. As shown in Figure 1, the architecture is composed of three parts: first, we capture the spatial information via Spatial Network; second, the dependency between frames is modeled via a set of LSTM nodes in Temporal Network; third, to exploit spatial and temporal context simultaneously, the predictions from the above two networks are utilized for the final prediction via Fusion Network.
The main contributions of our work are summarized in the following two aspects: First, we propose a unified spatio-temporal context modeling approach with group input and group output, which benefits from the inter-image correspondence structures between consecutive frames. The proposed approach extracts feature representations for both the unary image and successive frames, which generally leads to a performance improvement in hand pose estimation. Second, we present an end-to-end neural network model to jointly model the temporal dependency relationships among multiple images while preserving the spatial information of an individual image in a totally data-driven manner. Moreover, an adaptive fusion method enables the network to dynamically adapt to various situations. We evaluate our proposed method on two public datasets: NYU and ICVL. We provide a detailed analysis of the effects of the different components of our method, and the experimental results demonstrate that most results of our method are the best in comparison with state-of-the-art methods. Our method runs at 60 fps on a single GPU, which satisfies practical requirements. The rest of the paper is organized as follows. In Section II, we review some works on 3D hand pose estimation. In Section III, we introduce our proposed method. In Section IV, we present the experimental results of the comparison with state-of-the-art methods and an ablation study. Finally, the paper is concluded in Section V. A. Feature Representation Learning Most state-of-the-art hand pose estimation methods based on deep learning rely on visual representations learned from data. Given a depth image, different methods try to learn sufficient context and extract robust features. For example, in, the original segmented hand image is downscaled by several factors, and each scaled image is followed by a neural network to extract multi-scale features. Although increasing the number of scale factors may improve the performance, the computational complexity is also increased. In, Guo et al. propose a regional feature ensemble neural network and extend it to 3D human pose estimation in; Chen et al. propose a cascaded framework denoted as Pose-REN to improve the performance of the region ensemble network. In, various combinations of voxel and pixel representations are experimented with, and finally a voxel-to-voxel prediction framework is used to estimate the per-voxel likelihood. Furthermore, recovering full 3D information has caught researchers' attention. A multi-view CNN is introduced for hand pose estimation, which exploits depth cues to recover 3D information of hand joints. In, a 3D CNN and the truncated signed distance function (TSDF) are adopted to learn both the 3D feature and the 3D global context. Ge et al. take advantage of the above two methods and propose a multi-view 3D CNN to regress the hand joint locations. In addition, data augmentation is usually considered for extracting robust features. A synthesizer CNN is used to generate synthesized images to predict an estimate of the 3D pose by using a feedback loop in. Wan et al. use two generative models, a variational autoencoder (VAE) and a generative adversarial network (GAN), to learn the manifold of the hand pose; with an alignment operation, the pose space is projected into a corresponding depth map space via a shared latent space. Our method is different from the aforementioned architectures in the following ways: 1) we deploy the sliced 3D volumetric representation to recover 3D spatial information.
2) we extract the temporal property from an image sequence to enhance the consistency between images. B. Spatio-Temporal Modeling For video-based tasks (e.g., hand tracking, human pose tracking and action recognition), spatio-temporal modeling is widely used. For hand tracking problems, Franziska et al. propose a kinematic pose tracking energy to estimate the joint angles and to overcome challenging occlusion cases. In, Oberweger et al. propose a semi-automated method to annotate the hand pose in an image sequence by exploiting spatial, temporal and appearance constraints. In the context of human pose tracking, Fragkiadaki et al. propose an Encoder-Recurrent-Decoder (ERD) module to predict heat maps of frames in an image sequence. More recently, Song et al. introduce a spatio-temporal inference layer to perform message passing on general loopy spatio-temporal graphs. For the action recognition problem, Molchanov et al. employ an RNN and a deep 3D CNN on a whole video sequence for spatio-temporal feature extraction. Our approach is closely related to, who propose a two-stream architecture to encode the spatial and temporal information separately. Our approach is based on depth images, which leads us to learn spatio-temporal features via LSTM instead of optical flow. C. Regression Strategies The regression strategy is closely related to the performance of a method, and strategies can be roughly classified into three camps: 1) 2D heat-map regression and 3D recovery; 2) direct 3D hand joint coordinate regression; 3) cascaded or hierarchical regression. In, 2D heat-maps, which represent the 2D locations of hand joints, are regressed based on the extracted features, then the 3D positions are determined via recovery. Direct 3D hand joint coordinate regression predicts the hand joint locations in a single forward pass, which exhibits superior performance compared to the previous 2D heat-map regression. Cascaded and hierarchical regression has shown good performance in hand pose estimation. Oberweger et al. use an iterative refinement stage to increase the accuracy, and in, they use generated synthesized images to correct the initial estimation by using a feedback loop. Tang et al. introduce hierarchical regression strategies to predict the tree-like topology of the hand. Similarly, the strategy of first estimating the hand palm and then sequentially estimating the remaining joints is adopted in. Recently, Ye et al. combine an attention model with both cascaded and hierarchical estimation, and propose an end-to-end learning framework. Figure 1 illustrates the main building blocks of our method. Given an image sequence, two predictions are estimated by Temporal Network and Spatial Network, then the predictions are fused by the output of Fusion Network. A. Overview We denote a single depth image as I, and the corresponding K hand joints are represented as a vector J = {j_k}_{k=1}^{K} ∈ R^{3K}, where j_k is the 3D position of the k-th joint in the depth image I. Currently, most discriminative approaches concentrate on modeling a mapping function F from the input image to the hand pose. We define a cost function as C(F, J) = Σ_{n=1}^{N} ||F(I_n) - J_n||_2, where ||·||_2 denotes the l2 norm and N is the number of images. Then the expected mapping function F̂ is expressed as F̂ = argmin_F C(F, J), with F ranging over the hypothesis space. To obtain a robust mapping function, most approaches concentrate on the hand structure to capture features for estimation.
In this paper, we capture the spatial and temporal information simultaneously to learn features for prediction. As shown in Figure 1, the architecture is separated into three parts. The first part is denoted as Spatial Network, which is applied on a single frame to learn a mapping function F_spa. Inspired by the texture-based volume rendering used in medical imaging, we slice the depth image into several layers, which makes the different joints scatter across different layers. We feed the depth image and the corresponding sliced 3D volumetric representation into the network, and depth and spatial information are extracted hierarchically for hand pose estimation. The second part is denoted as Temporal Network. This network concentrates on the temporal coherence property of image sequences, and learns the mapping F_temp from an image sequence to a set of hand poses. By using an LSTM layer, this network takes the features from previous frames into account when estimating the pose in the current frame. The third part is named Fusion Network. For the sake of predicting the pose via a combination of information from individual and consecutive frames, we fuse the aforementioned two networks into an integrated model by an adaptive fusion method; the final prediction is calculated as the weighted summation of the different networks' outputs. By employing this approach, spatial information from the individual frame and the temporal property between frames are taken into account in a unified scheme. B. Proposed Method In this section, we present details of our method for hand pose estimation. Spatial Network In an individual frame, the spatial information is critical for hand pose estimation. By adopting the sliced 3D volumetric representation and a Deeply-Fusion network, we extract features to learn a robust mapping function. In the original depth image I, the value of each pixel encodes the depth information. We first crop the hand region out of the image; this step is the same as in. As shown in Figure 2, the cropped depth image is resized to M × M, and the depth D(x, y) is represented as a function of the (x, y) coordinates. Previous methods represent the 3D hand in different ways. In, the depth image is recovered as a binary voxel grid for training: the hand surface and occluded space are filled with 1, and the free space is filled with 0. In, TSDF and projective directional TSDF (D-TSDF) are adopted to encode more information in the 3D volumetric representation. However, the size of the 3D volume used in these methods is commonly set to 32 × 32. Different from previous methods, we employ a solution in which the hand surface is sliced into L pieces. Our sliced 3D volumetric representation is then an M × M × L voxel grid, where L represents the number of sliced layers along the z-axis. For a cropped depth image, the free space and occluded space are omitted, and we denote D_min and D_max as the depths of the closest and farthest surface points. As shown in Figure 2, from the top layer to the bottom layer, the fingertips and palm center scatter across different layers. The sliced 3D volumetric representation keeps the structure of the hand. To obtain the hand structure and depth information simultaneously, we input the depth image and the sliced 3D volumetric representation at the same time. As shown in Figure 3, the left part extracts features from the depth image, and the right part extracts features from the sliced 3D volumetric representation.
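As an illustration of the representation described above, the following minimal Python/NumPy sketch (not the released implementation) builds an M × M × L binary volume from a cropped depth map by assigning each hand-surface pixel to one of L equal-width depth bins between D_min and D_max; the equal-width binning rule and the background value of 0 are assumptions.

```python
# Minimal sketch of the sliced 3D volumetric representation: every hand-surface
# pixel of the M x M cropped depth map is assigned to one of L depth layers.
import numpy as np

def slice_depth(depth, L=8, background=0.0):
    """depth: (M, M) cropped depth map; returns an (M, M, L) binary volume."""
    M = depth.shape[0]
    volume = np.zeros((M, M, L), dtype=np.float32)
    hand = depth != background                 # pixels belonging to the hand surface
    if not hand.any():
        return volume
    d_min, d_max = depth[hand].min(), depth[hand].max()
    # map each surface depth to a layer index in [0, L-1]
    norm = (depth - d_min) / max(d_max - d_min, 1e-6)
    layer = np.clip((norm * L).astype(int), 0, L - 1)
    ii, jj = np.nonzero(hand)
    volume[ii, jj, layer[ii, jj]] = 1.0
    return volume

# usage (hypothetical input): volume = slice_depth(cropped_depth_128x128, L=8)
```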
We denote the features from the max-pooling layers as the depth and 3D features; the features are then hierarchically fused by the Deeply-Fusion network, where ⊕ is the element-wise mean of features, H is a feature transformation learned by a fully connected layer, and l is the layer index. Owing to the adoption of several fully connected layers, the network tends to overfit, so we use an auxiliary loss for regularization as well as dropout. In Figure 3, every fully connected layer is followed by a dropout layer, and nodes are dropped randomly with 30% probability. For the purpose of obtaining more representative capability for each input, auxiliary paths are added in the training stage. As shown in the green boxes, the layers in the auxiliary paths are the same as in the main network, and the layers connected by the blue dotted line share parameters. In the training stage, the total loss consists of three losses, i.e., the regression losses of the auxiliary paths and of the main network. At test time, the auxiliary paths are removed. Temporal Network As we know, the hand poses in successive frames are closely related (e.g., when grabbing, the finger joints are most likely to move closer together). The temporal property between frames is also critical for estimating hand joints. So we extend the mapping function from a single frame to a sequence of images, and the sequential prediction problem is reformulated accordingly, with T denoting the number of images in a sequence. Currently, due to its feedback loop structure, the recurrent neural network (RNN) possesses powerful long-term dependency modeling capability and is widely used in computer vision tasks. In general, RNNs are difficult to train due to the vanishing gradient and error blow-up problems. In this paper, we adopt LSTM, a variant of RNN which is expressive and easy to train, to model the temporal context of successive frames. As shown in Figure 4, an image sequence is taken as input and the network gives out the estimated poses. Without loss of generality, consider the t-th depth image I_t: the image is fed into the convolutional neural network, and the feature extracted from the first fully connected layer is denoted as x_t. We then feed the feature x_t into the LSTM layer and obtain the hidden state h_t as the new feature: i_t = σ(W_xi x_t + W_hi h_{t-1} + b_i), f_t = σ(W_xf x_t + W_hf h_{t-1} + b_f), o_t = σ(W_xo x_t + W_ho h_{t-1} + b_o), c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t-1} + b_c), h_t = o_t ⊙ tanh(c_t), where σ(·) is the sigmoid function, tanh(·) is the hyperbolic tangent function, ⊙ is the element-wise product, and W_h*, W_x*, b_* (* ∈ {i, f, o, c}) are the parameters of the gates in the LSTM layer. The feature h_t output from the LSTM layer and the original feature x_t are concatenated, and then fed into the last fully connected layer to regress the hand pose coordinates. Fusion Network The aforementioned Temporal Network and Spatial Network estimate the joints by placing emphasis on temporal and spatial information respectively; we denote the predictions from these two networks as J_temp and J_spa. Due to the importance of spatial and temporal information, we jointly model the spatio-temporal properties by adaptively integrating the different predictions for the final estimation. As shown in Figure 5, Fusion Network uses a sigmoid activation function after the last fully connected layer and then outputs w_1 and w_2. Fusion Network fuses the two predictors and gives out the final prediction as J_out = w_1 1 ⊙ J_temp + w_2 1 ⊙ J_spa, where w_1 + w_2 = 1 and 1 ∈ R^{3K} is a vector whose elements are all one.
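The two components above can be sketched in PyTorch as follows. This is a minimal sketch rather than the paper's Caffe configuration: the feature dimension, hidden size, joint count, and the choice to condition the fusion weight on the concatenated predictions (with w_2 = 1 − w_1 enforcing the sum-to-one constraint) are assumptions.

```python
# Minimal sketch: an LSTM over per-frame CNN features that outputs one pose per
# frame, and an adaptive fusion that mixes two pose predictions with weights
# that sum to one. Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalHead(nn.Module):
    def __init__(self, feat_dim=1024, hidden=512, num_joints=14):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(feat_dim + hidden, num_joints * 3)

    def forward(self, x):                  # x: (batch, T, feat_dim) per-frame features
        h, _ = self.lstm(x)                # h: (batch, T, hidden) hidden states
        return self.fc(torch.cat([x, h], dim=-1))  # (batch, T, 3K) poses, one per frame

class AdaptiveFusion(nn.Module):
    def __init__(self, num_joints=14):
        super().__init__()
        self.fc = nn.Linear(2 * num_joints * 3, 1)

    def forward(self, j_temp, j_spa):      # both: (batch, 3K)
        w1 = torch.sigmoid(self.fc(torch.cat([j_temp, j_spa], dim=-1)))
        w2 = 1.0 - w1                      # weights sum to one by construction
        return w1 * j_temp + w2 * j_spa    # J_out = w1*J_temp + w2*J_spa
```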
The weights w_1 and w_2 are learned as the confidences of the two predictions, and the final prediction J_out is estimated as the weighted summation of the two predictions. Because Temporal Network considers the temporal information and Spatial Network extracts the spatial information, the fused network infers the hand joint locations depending on both spatial and temporal features. Finally, we summarize our proposed method in Algorithm 1 (Summary for Context-Aware Deep Spatio-Temporal Network). IV. EXPERIMENTS In this section, we evaluate our method on two public datasets for comparison with state-of-the-art methods. In addition, we perform ablation experiments and analyse the performance of each component. A. Experiment Setting 1) Datasets: We evaluate our method on NYU and ICVL; details about the datasets are summarized in Table I, where Train is the number of training images, Test is the number of test images (the number in brackets is the number of sequences), Resolution is the resolution of the images, and Annotation is the number of annotated joints. The NYU dataset is challenging because of its wide pose variation and noisy images as well as limited annotation accuracy. On the ICVL dataset, the image scale is small and the discrepancy between training and testing sets is large, which makes estimation difficult. 2) Evaluation Metrics: The evaluation follows the standard metrics proposed in, including accuracy, per-joint error distance and average error distance. As mentioned above, we denote j_kn as the predicted k-th joint location in the n-th frame and j*_kn as the corresponding ground truth. The total number of frames is denoted as N, and K is the number of hand joints. The per-joint error distance is the average Euclidean distance between the predicted joint location and the ground truth in 3D space. The average error distance is the mean distance over all joints. The accuracy is the fraction of frames in which all predicted joints are within a given distance threshold of the ground truth. 3) Implementation Details: We implement the training and testing with the Caffe framework. The pre-processing step follows; the cropped image is resized to 128 × 128 with the depth values normalized to a fixed range, and for the sliced 3D volumetric representation, L is set to 8 in the experiments. Moreover, we augment the available dataset. Data augmentation On the NYU dataset, we perform augmentation by random rotation and flipping. On the one hand, we randomly rotate the images by an angle selected from 90°, −90° and 180°. On the other hand, we flip the image vertically or horizontally. For each image in the dataset, we generate one image by the above-mentioned augmentation method, so the size of the dataset is twice that of the original. On the ICVL dataset, in-plane rotation has already been applied and there are 330k training samples in total, so augmentation is not used on ICVL. Training configuration Theoretically, training the three networks jointly is worth considering, but it takes a longer training time to optimize the parameters of the three networks. Besides, we have tried to fine-tune the spatial and temporal networks, but there is not much difference compared with freezing the Spatial Network and Temporal Network, due to the limited size of the dataset. Practically, we strike a balance between training complexity and efficiency and adopt the simpler method as follows: we train Temporal Network and Spatial Network first, then train Fusion Network with the above two networks fixed.
The training strategy is the same for Spatial Network and Temporal Network: we optimize the parameters using back-propagation with the Adam algorithm; the batch size is set to 128, and the learning rate starts at 1e-3 and decays every 20k iterations. Spatial Network and Temporal Network are trained from scratch for 60k iterations. Fusion Network is trained for 40k iterations with the above two networks fixed. Our training takes place on machines equipped with a 12 GB Titan X GPU. The NYU dataset contains 72k training images from one subject and 8k testing images from two subjects. There are 36 annotated joints in total, and we evaluate only a subset of 14 joints, as used in other papers, for a fair comparison. The annotation for the hand is illustrated in Figure 7. We compare our method with several state-of-the-art approaches on the NYU dataset, including 3DCNN, DeepModel, Feedback, DeepPrior, Matrix, HeatMap and Lie-X. The accuracy and the per-joint error distances are shown in Figure 6; our proposed method has a performance comparable with state-of-the-art methods. In Figure 6a, our method outperforms the majority of methods. For example, the proportion of good frames is about 10% higher than Feedback when the distance threshold is 30 mm. 3DCNN adopts augmentation and the projective D-TSDF method, and trains a 3D convolutional neural network which maps the sliced 3D volumetric representation to the 3D hand pose; our method is not as accurate as 3DCNN from 15-60 mm due to the sufficient 3D spatial information captured by the 3D CNN. Lie-X infers the optimal hand pose in the manifold via a Lie-group-based method; it stands out over the range of 20-46 mm, but our method overtakes it at other thresholds. In Figure 6c, the per-joint error distances for five methods and our method are illustrated. Table II reports the average error distance for the different methods. Generally, the results show that our method has a performance comparable with competing methods; it outperforms several of them by a large margin and is comparable to the remaining ones. To give an intuitive sense, we present some examples in Figure 8. Our method can produce an acceptable prediction even in extreme circumstances. On the ICVL dataset, we compare our proposed method against five approaches: CrossingNet, LRF, DeepPrior, DeepModel and LSN. The quantitative results are shown in Figure 6b. Our method is better than LRF, DeepPrior 2, DeepModel and CrossingNet, and is roughly the same as LSN. CrossingNet employs a GAN and a VAE for data augmentation, and our method surpasses it when the threshold is larger than 9 mm. Compared to the hierarchical regression framework LSN, our method is not as accurate from 14-30 mm but outperforms it when the threshold is larger than 30 mm. Furthermore, Figure 6d reveals, likewise, that fingertip estimation is generally worse than estimation of the palm of the hand. As summarized in Table III, the mean error distance of our method is the lowest among the four compared methods, with a 1.4 mm error decrease compared to DeepPrior and a 0.16 mm smaller error than LSN. Ablation Study For Feature Representation In addition to Spatial Network and Temporal Network, we train two networks named Baseline Regression Network and Sliced 3D Input Network. As shown in Figure 9, these two networks are parts of Spatial Network. We show the results in Figure 10 and Table IV.
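For reference, the standard metrics reported in these comparisons (per-joint error distance, average error distance, and the fraction of good frames under a threshold) can be computed as in the following minimal sketch, assuming pred and gt are (N, K, 3) arrays of joint positions in millimetres.

```python
# Minimal sketch of the three evaluation metrics described in the text.
import numpy as np

def hand_pose_metrics(pred, gt, threshold_mm=30.0):
    """pred, gt: (N, K, 3) arrays of predicted and ground-truth joints in mm."""
    dist = np.linalg.norm(pred - gt, axis=-1)          # (N, K) per-joint Euclidean errors
    per_joint_error = dist.mean(axis=0)                # average error for each joint
    average_error = dist.mean()                        # mean over all joints and frames
    good_frames = float((dist.max(axis=1) <= threshold_mm).mean())  # all joints within threshold
    return per_joint_error, average_error, good_frames
```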
The experimental results reveal that Spatial Network and Temporal Network improve the performance by exploiting the spatial and temporal context respectively, and our final model performs best by utilizing the spatio-temporal context simultaneously. Baseline Regression Network is a simple network composed of convolution, max-pooling, and fully connected layers; it regresses the labels directly, and its average error is reported in Table IV. Spatial Network hierarchically fuses the features from the depth image input and the sliced 3D volumetric representation. We find that Sliced 3D Input Network is the worst among the compared networks; in our opinion, the poor performance is because depth information is sacrificed in the sliced 3D volumetric representation. Via the Deeply-Fusion network, Spatial Network borrows the spatial information from the Sliced 3D branch, and it achieves a better result than Baseline Regression Network and Sliced 3D Input Network. Temporal Network replaces the second fully connected layer with an LSTM layer and concatenates the hidden output with the input features. From the experimental results, we find that Temporal Network slightly improves the performance due to the temporal information; the value of T for the LSTM is set to 16 in training. CADSTN is our proposed method, which integrates Spatial Network and Temporal Network via Fusion Network. Fusion Network fuses the predictions from the two networks and yields the final prediction. The three networks are connected with implicitly adaptive weights and influence each other in the optimization process. Because both temporal coherence and spatial information are taken into account, our method performs best in the ablation experiments, and Figure 10b reveals that its error distance for every joint is the lowest. V. CONCLUSION In this paper, we propose a novel method for 3D hand pose estimation, termed CADSTN, which models the spatio-temporal context with three networks. The modeling of spatial context, temporal property and fusion is done separately by the three parts. The proposed Spatial Network extracts depth and spatial information hierarchically. Further, by making use of the temporal coherence between frames, Temporal Network outputs a sequence of joint predictions when fed a depth image sequence. We then fuse the predictions from the above two networks via Fusion Network. We evaluate our method on two publicly available benchmarks, and the experimental results demonstrate that our method achieves the best or the second-best result compared with state-of-the-art approaches and can run in real time on both datasets. Gang Wang received the BEng degree in electrical engineering from the Harbin Institute of Technology and the PhD degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign. He is an associate professor in the School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore. He had a joint appointment at the Advanced Digital Science Center (operated by UIUC) as a research scientist from 2010 to 2014. His research interests include deep learning, scene parsing, object recognition, and action analysis. He was selected as an MIT Technology Review Innovator Under 35 for Southeast Asia, Australia, New Zealand, and Taiwan. He is also a recipient of the Harriett & Robert Perry Fellowship, the CS/AI award, best paper awards from PREMIA (Pattern Recognition and Machine Intelligence Association) and top 10 percent paper awards from MMSP.
He is an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and an area chair of ICCV 2017. He is a member of the IEEE. Jianwei Yin received the PhD degree in computer science from Zhejiang University, in 2001. He is currently a professor in the College of Computer Science, Zhejiang University, China. He was a visiting scholar at the Georgia Institute of Technology, Georgia, in 2008. His research interests include service computing, cloud computing, and information integration. Currently, he is the associate editor of the IEEE Transactions on Service Computing. He is a member of the IEEE. |
The Way Forward for Personal Insolvency in the Indian Insolvency and Bankruptcy Code In 2016, the Indian Parliament passed the Insolvency and Bankruptcy Code (IBC). The Government has chosen to notify only the part on corporate insolvency. It is expected that the part on personal insolvency will be notified for individuals with business debt and personal guarantors. In this context, this paper describes the Indian credit market and presents an argument for the need for personal insolvency law. It provides a brief overview of the provisions on personal insolvency in the IBC. It makes suggestions on questions of policy that need to be addressed before the law can be meaningfully implemented as the success of the IBC depends on the design of the subordinate legislation as well as the evolution of the institutional infrastructure. |
Drug-induced QRS morphology and duration changes. Drug-induced ECG changes may affect all components of the ECG curve. The attention of regulatory agencies, researchers and clinicians has been directed towards drug-induced QT-interval prolongation and its well-documented proarrhythmia. This presentation focuses on drug-induced QRS changes, i.e., changes in QRS morphology, amplitude and QRS complex duration (QRSd). A great variety of pharmacological agents (e.g., class IA and IC antiarrhythmics, antihistamines, antidepressants, antipsychotics) exert an influence on the QRSd. The QRSd is assessed by a variety of ECG methodologies. Standardization of QRSd measurements ensures the comparability of results obtained by different ECG modalities, and of serial QRSd assessments. Some analgesics and hypoglycemic agents influence the amplitude of QRS complexes by way of their propensity to cause peripheral oedema (an extracardiac mechanism). Perhaps a new culture could evolve in which the entire ECG curve, from the onset of the P-wave to the offset of the U-wave, will be used in the evaluation and monitoring of drug safety, with emphasis primarily on the standard ECG. |
About
Honestly, we know there are a lot of online gamers who record every minute they spend online, on WoW or on ordinary games (we know no game is ordinary). Some people just sit in front of the camera and tell you their opinion, or stream on Twitch while they silently stare at the monitor. Some of them just pretend to be a weirdo, which is funny, and they do it well because that's their performance and their tool for getting higher view counts; but we don't learn anything about their experience or about the game, so we just laugh. There are also a few guys who use their sexy deep voice, or a company of friends plays against each other online and records every funny minute. We can agree that they are actually funny, professional and expert, but something is missing: the beginning of their career and some facts from behind the scenes! Yes, obviously we can find out which was their first video, what they looked like, whether they got fatter or lost a lot of kilograms. But you couldn't see how they started their online path, how they built everything up around them, how they gained their knowledge of the game market, and so on... I thought: what if I did all the same, but from the beginning, and sometimes in more depth...
So what if I told you I'd like to start everything from the beginning, live or sometimes from recordings! Discover the mainstream games and developers, start to play online, be an advisor or a potential target: be killed by everybody, every time, everywhere. I will also try to build a Facebook community with my posts, reviews, comparisons, opinions, questions, short videos, Vines, YouTube videos, meetings, competitions, explanations, information about everything and much more!
Why not? I think nobody has answered or shown it yet: how to start building a YouTube subscription empire, which camera was used at the very beginning, how he or she started an online gaming career, how lame he or she was at first, how they gathered so much information and made so many friends, how really angry or mad they got when they couldn't beat the boss on the fifth attempt and how they corrected themselves, and which programme turned out to be perfect for recording everything.
Now that's what I'd like to show you. I will try to answer and show you every one of these crucial questions from behind the scenes, while you just laugh at me, because I am obviously a complete debutant, lame and pathetic at gaming! Sure, you can find these answers in a lot of YouTube videos, but only separately. I think I can be the first non-professional player who can be followed from the very beginning across almost every topic: maintenance, walkthroughs, online play, diaries, experiences, opinions and reviews. So I think it could be great and funny if I develop myself while showing everybody my brand new gaming knowledge, and they can also follow how to build a YouTube channel, raise the number of subscribers, adjust your computer, earn more XP (or whatever you call it), or just learn something new from my failures.
The pre-conception is this: first of all, I have opened a Facebook and a Twitter page, where I have already posted a few short diary entries and opinions; next will come some Vines, short videos and game reviews. Then I'd like to move to YouTube, where I can show some hard parts of walkthroughs and easy game steps, tricks and how beginners can develop into semi-professionals, and answer a couple of questions. I have always thought professionalism is a good thing, but on YouTube it is often simply not believed, and on Twitch it is often uninspired. Personality and staying in contact with everybody should be the most important part for every YouTube user or channel lead, and that is what they always forget. Finally, the last step will be Twitch, where I can play, talk with subscribers and answer questions live. That will hopefully be the semi-professional part. That's the way I can truly show how anybody can easily become a semi-professional gamer.
Use easy, understandable, correct language, be friendly and stay in contact, because that's the only way to avoid becoming a repulsive "professional" or going from one extreme to the other. That's how I imagine measuring the development, by the number of subscribers and likes:
Every good vlog or YouTube channel uses the best equipment to record every crucial moment of the game, show their face and reactions, scream, or just commentate live. Kickstarter gives all of us a great opportunity to build up our plan, our wish. Kickstarter gives this plan an amazing chance to get the necessary equipment, such as cameras, microphones, new consoles, game capture hardware and a maintenance tool kit, while I can offer this plan directly to those who would enjoy it most. It also puts those who pledge in contact with us, so we can tailor The Game Blover to what people want. Please support this plan as much as you can, and be part of a new challenge, part of something new. So here is my first promise: if the project gets the right amount of help, I am going to make the same video again with the new equipment to prove to you guys the difference between a video made without anything and a video made with your help!
Every little bit of help counts, so please follow us on
https://www.facebook.com/gameblover
https://twitter.com/gameblover
and this will be the YouTube channel:
https://www.youtube.com/user/gameblover
and the soon-to-be Twitch channel:
http://www.twitch.tv/levytorday
Support this to become possible!
We are counting on you! |
Excerpted from the Human Iterations blog:
“Openness is antithetical to a core presupposition of advertising: people are susceptible to suggestion and anecdote because they don’t have enough information–or time to process that information–when it comes to purchasing choices. Forget everything you’ve learned about madison avenue manipulations. Those manipulations are only possible when people have any reason to pay attention. Build a box that delivers all the relevant information and perfectly sorts through it in an easily manageable way and any form of advertising starts to look like laughable shucksterism. Who are you trying to fool? Why aren’t you content to let your product speak for itself?
In this sense much of the fertile territory being seized by Google is detrimental in the long run to one of its core income sources. As search improves and our instincts adapt to it there’s simply no reason to click on the ‘featured product’ getting in the way of our actual results. The more intuitive, streamlined and efficient our product comparison the less need there is to pay any attention to anything else. And if the app providing our results is tampered with then we can swap to another app. Walk into any given store with its inventory already listed and analyzed on our phone. Of course advertising covers more than just price comparisons between laundry detergents, but there’s no end to what can be made immediately transparent. “How cool is this product with a certain subculture or circle of my friends?” “Give me a weighted aggregate of consumer reports highlighting the ups and downs.” “List common unforeseen complexities and consequences.” “How would I go about navigating the experience of changing checking accounts?” Et cetera. Every conceivable variable. With ease of interface and sufficient algorithmic rigor one can easily recognize a tipping point.
Algorithms trawling for greater targeting power on the part of advertisers are jumping at comparatively trivial increases in efficiency with serious diminishing returns. (And insofar as new understandings might inform actual development/policy, wouldn’t that be a good thing?) Further, taken in a broad view, the issues of complexity in such data-trawling and analysis lean in the favor of consumers because there’s simply far more of us than there are sellers. Relatively simple advances in consumer analysis of sellers would drastically turn the tables against advertisers and corporate bargaining advantage in general. In such light their current golden age of analysis is but one last rich gasp.
In no way do I mean to underplay the threat posed by governments themselves, who surely have a huge investment in the establishment of institutions like Facebook and/or projects like that of Palantir. At the end of the day they will remain a threat and continue working on these kinds of projects. But the context they’re operating in makes a big difference. The NSA isn’t going to cut Facebook a check to keep it afloat. The government simply doesn’t have the kind of money that the private sector is putting in to distort the development of norms in social networking / communications in the first place. Those are slippery cultural / user-interface issues that are far too complex for the state to navigate with requisite nuance.
The sooner we take it upon ourselves to kill the advertising industry the less time it’ll have to build weapons for the state.
Sure, like our current struggle to kill the IP Industry, it’ll be a fight that’ll last a while and involve complex cultural/political campaigns alongside purely technical ones. But at core it’ll be a downhill battle for us. Easier to spread information–both technologically and culturally–than to contain it.
Such a push would provide a number of agorist benefits too. Both through the integration of projects like this that empower the counter-economy, and through the further development of dual-power anarchist justice systems like those longstanding radical listservs that disseminate information on and track rapists and abusers, forcing them to accountability through organized dissociation or at the very least warning others. At the end of the day the wider availability of public information is a good thing. In any society we need to be able to convey and measure trust on various things in various ways. It’s an old truism: just because institutions of power have seized monopolistic control over certain functions of civil society, perverted them and threatened us with them, doesn’t always mean we should entirely turn against or seek to abolish those root functions themselves.” |
BigScholar 2019: The 6th Workshop on Big Scholarly Data Recent years have witnessed the rapid growth in the number of academics and practitioners who are interested in big scholarly data as well as closely-related areas. Quite a lot of papers reporting recent advancements in this area have been published in leading conferences and journals. Both non-commercial and commercial platforms and systems have been released in recent years, which provide innovative services built upon big scholarly data to the academic community. Examples include Microsoft Academic Graph, Google Scholar, DBLP, arXiv, CiteSeerX, Web of Knowledge, Udacity, Coursera, and edX. The workshop will contribute to the birth of a community having a shared interest around big scholarly data and exploring it using knowledge discovery, data science and analytics, network science, and other appropriate technologies. |
OBJECTIVE Based on the characteristics of hierarchical data, a multilevel model was used to analyze possible influencing factors of urinary cadmium levels in the population of one county, and to discuss the advantages of the multilevel model for processing hierarchical data in practical problems. METHODS In May 2013, 1 460 participants aged 20 and above in 12 administrative villages in one county in central China were recruited by cluster sampling. Urinary cadmium levels and their possible influencing factors were investigated, and the cadmium level in farmland soil of the survey area was also tested. A total of 1 410 participants completed the survey and met the inclusion criteria, and 318 farmland samples in the survey area were analyzed. Individuals were set as the level-one units and villages as the level-two units, and the data were analyzed with the MIXED procedure for hierarchical data in SAS 9.3. Without considering the hierarchy of the data, a general linear model was also fitted with SAS 9.3, and the fitting results of the two models were compared. RESULTS A total of 1 410 participants were finally included; their age was (55.2 ± 11.1) years, 645 (45.74%) were males and 765 (54.26%) were females. The household per capita consumption of rice was (100.9 ± 40.3) kg/y. In all, 18.65% (262/1 410) of the participants had mining and mineral separation work experience. The urinary cadmium level was (9.39 ± 2.16) μg/g Cr. Most of the village soil cadmium levels were greater than the tolerance value. The fitting results of the general linear model suggested that mining and mineral separation work experience made no significant difference (test statistic = 1.05, P = 0.305), while the village soil cadmium level, age, household per capita rice consumption and gender were significant (test statistics = 401.39, 34.9, 4.16 and 86.15, respectively; P < 0.01, < 0.01, = 0.041 and < 0.01, respectively). The fit of the empty model showed an ICC of 0.4355, indicating that urinary cadmium clustered at the village level. The results of the multilevel model showed that the explanatory variables village soil cadmium level, age, household per capita rice consumption and gender were significant (Wald values 2.55, 6.34, 2.37 and 10.32, respectively; P = 0.029, < 0.01, = 0.018 and < 0.01), while mining and mineral separation work experience was not (test statistic = 0.78, P = 0.438). For the fit indices used to compare the models, the values of the multilevel model were smaller than those of the general linear model. The regression coefficient of the level-two explanatory variable (village soil cadmium level) was 0.84, which explained 35.26% of the total variance. CONCLUSION The multilevel model can analyze hierarchical data more reasonably than the general linear model. Urinary cadmium levels are strongly influenced by village soil cadmium levels. |
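For readers who want to reproduce this kind of two-level analysis outside of SAS, the sketch below fits a random-intercept model with villages as the level-two units using statsmodels. The DataFrame `df` and its column names (urinary_cd, soil_cd, age, rice_kg, gender, mining_work, village) are hypothetical placeholders, not the study's variable names.

```python
# Hedged sketch: a two-level random-intercept model comparable in spirit
# to the PROC MIXED analysis described above, assuming a pandas DataFrame
# `df` with the hypothetical columns named in the lead-in.
import statsmodels.formula.api as smf

# Level-1 units: individuals; level-2 units: villages (random intercept).
model = smf.mixedlm(
    "urinary_cd ~ soil_cd + age + rice_kg + gender + mining_work",
    data=df,
    groups=df["village"],
)
result = model.fit()
print(result.summary())

# The empty (intercept-only) model gives the intraclass correlation (ICC),
# i.e. the share of total variance that lies between villages.
empty = smf.mixedlm("urinary_cd ~ 1", data=df, groups=df["village"]).fit()
between = empty.cov_re.iloc[0, 0]   # village-level variance
icc = between / (between + empty.scale)
```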
A Mixed Methods Approach Exploring Risk Factors Associated with Cyber Dating Victimization and Resilience in Adolescents and Emerging Adults ABSTRACT The use of technology as a communication tool among youth has fostered opportunities for harassment, stalking or pressuring in romantic relationships. There is a paucity of studies exploring the links between childhood victimization experiences and cyber dating violence (DV) and how youth positively adapt to such experiences. This mixed methods study documents the effects of individual, familial, and social risks associated with cyber DV while exploring strengths and protective factors in victimized youths' narratives. A sample of 332 heterosexual youths and emerging adults (mean age 19.9 years) completed a survey exploring romantic and sexual trajectories, and 16 participants reporting cyber victimization participated in semi-structured interviews. Results indicated that younger age, exposure to interparental violence, neglect, and a lower level of perceived support from a friend predicted cyber DV, whereas gender, emotional violence, sexual abuse, and perceived parental support were not significant predictors. When facing cyber DV and relational difficulties, participants expressed strengths illustrating their resilience: self-regulatory strength (e.g., considering a breakup as an opportunity for self-care), meaning-making strength (e.g., thinking that a partner may not have been the right person), and interpersonal strength (e.g., being thankful for support provided by a trusted person). Fostering social support and multiple strengths of youth constitute promising avenues for effective cyber DV interventions. |
The National Institute of Standards and Technology released a report in February identifying products available to enhance the electric grid’s cybersecurity. New technologies employed onto the grid are multiplying the number of access points for cyber threats.
An actual cyberattack on an electric grid occurred in December 2015 when Ukraine’s electricity was interrupted. A third party, widely suspected to be operating from Russia, conducted the attack -- which resulted in 225,000 customers losing power.
It is only a matter of time until another country experiences a cyberattack that shuts down the power. If that occurs, devastating economic and security consequences may result since electricity is needed to operate pipelines, medical facilities, telecommunications, military bases and other critical infrastructure.
At present, consistent cybersecurity controls for the distribution system, where utilities deliver electricity to customers, are lacking. If a cyberattack on a utility successfully causes a power outage, a ripple effect that destabilizes electricity in large areas could occur, possibly damaging parts of the interconnected system. So it is easy to understand why research firm Zpryme estimates that U.S. utilities will spend $7.25 billion on grid cybersecurity by 2020.
Cutting edge technologies are essential to help the electricity sector adapt to the evolving cyber threat landscape. Specific products that could help with cybersecurity on the grid are Siemens’ Ruggedcom Crossbow, Cisco’s 2950 (Aggregator) and Schneider Electric's Tofino Firewall. The National Institute of Standards and Technology found that these solutions can be integrated within a utility’s network to boost situational awareness and cope with attacks.
Utilidata and Raytheon are working together to detect and respond to cyberattacks on the grid. More specifically, Raytheon is creating products that provide warnings of possible cyberattacks and identify power grid data collection and communication issues. The company is also evaluating how to maintain emergency communication networks after a cyberattack has occurred.
The Sierra Nevada Corporation has created a product called Binary Armor that provides bidirectional security for communication layers on the grid. This allows tailored rules to be set for specific messages to enter the network. Other providers of smart grid cybersecurity include VeriSign, Raytheon, ViaSat Inc., Leidos, BAE Systems and IBM.
Collaborators at the University of California, Berkeley and Lawrence Berkeley National Laboratory have produced sensors that search for behavioral irregularities on the grid. Furthermore, Massachusetts Institute of Technology, Raytheon, Boeing, BAE Systems and other companies are teaming up to keep digital information safe from cyber threats.
While there is a plethora of commercial products available to protect the grid from cyber threats, states and utilities are experiencing difficulty funding cybersecurity efforts. Though the U.S. Department of Energy and the Department of Homeland Security offer grants to fund such initiatives, resources are limited. Utilities could partner with other companies to identify creative ways to fund cost-effective cybersecurity. For instance, utilities could update energy infrastructure with products that result in cost savings. Those savings could then be applied to fund and enhance cybersecurity efforts.
A risk assessment of cyber threats should be required of every utility. This will allow for clear cybersecurity goals, informed decision-making and the identification of steps to reduce threats. After utilities conduct a risk analysis, they should work with the private sector to incorporate relevant products to ensure that power is not shut down as it was in Ukraine.
Unfortunately, this is the kind of threat that tends to be neglected until a traumatic attack heightens awareness of consequences. |
An improved memory prediction strategy for dynamic multiobjective optimization In evolutionary dynamic multiobjective optimization (EDMO), memory strategies and prediction methods are considered effective and efficient. To handle dynamic multiobjective problems (DMOPs), this paper studies the behavior of environment changes and tries to make appropriate use of historical information. It then proposes an improved memory prediction model in which the memory strategy provides valuable information to the prediction model, so that the POS (Pareto-optimal set) of the new environment can be predicted more accurately. This memory prediction model is incorporated into a multiobjective evolutionary algorithm based on decomposition (MOEA/D). In particular, the resulting algorithm (MOEA/D-MP) adopts a sensor-based method to detect environment changes and to find a similar historical environment whose information is reused in the prediction process. The proposed algorithm is compared with several state-of-the-art dynamic multiobjective evolutionary algorithms (DMOEAs) on six typical benchmark problems with different dynamic characteristics. Experimental results demonstrate that the proposed algorithm can effectively tackle DMOPs. |
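A minimal sketch of the memory-plus-prediction idea summarized in the abstract above is given below; the change-detection rule, the similarity measure and the centroid-shift prediction are illustrative assumptions rather than the exact MOEA/D-MP procedure.

```python
# Hedged sketch: sensor-based change detection plus reuse of the most
# similar historical environment to seed the next population.
import numpy as np

class MemoryPredictor:
    def __init__(self):
        self.memory = []  # list of (environment_signature, pos_centroid)

    def detect_change(self, sensor_old, sensor_new, tol=1e-6):
        # Re-evaluate a few fixed "sensor" solutions and flag a change if
        # any of their objective values moved beyond a tolerance.
        return np.max(np.abs(np.asarray(sensor_new) - np.asarray(sensor_old))) > tol

    def store(self, signature, population):
        # Remember a compact summary (here, the centroid) of the POS found
        # in the environment characterized by `signature`.
        self.memory.append((np.asarray(signature), np.mean(population, axis=0)))

    def predict_population(self, population, signature):
        # Find the most similar historical environment and re-centre the
        # current population on the POS centroid remembered for it.
        if not self.memory:
            return population
        dists = [np.linalg.norm(np.asarray(signature) - s) for s, _ in self.memory]
        _, past_centroid = self.memory[int(np.argmin(dists))]
        shift = past_centroid - np.mean(population, axis=0)
        return population + shift
```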
Minimising Disruption Caused by Online FPS Game Server Discovery in a Wireless Network A key part of First Person Shooter network gaming is the game server discovery phase. Whilst probing for suitable servers from a wireless network, a large burst of network traffic is generated, potentially leading to detrimental effects on network capacity available to other wireless users. This can be minimised using an optimised algorithm to order the discovery probes and subsequently terminate the discovery process early. In this paper we explore further modifications to a previously proposed algorithm and examine its efficacy in further reducing the probe time/traffic during the server discovery phase. We show that it is possible to further reduce the overall discovery process duration by up to 13% while still presenting all suitable servers to the user for selection. |
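The following sketch illustrates the general idea behind the abstract above, i.e. ordering discovery probes and terminating early once enough suitable servers are confirmed; the scoring rule, the thresholds and the `probe` callback are assumptions for illustration, not the algorithm evaluated in the paper.

```python
# Hedged sketch: rank cached servers by how promising they look, probe them
# in that order, and stop as soon as a target number of suitable servers
# has been found, reducing probe traffic on the wireless network.
def order_probes(known_servers):
    """known_servers: list of dicts with cached 'rtt_ms' and 'players'."""
    return sorted(known_servers, key=lambda s: (s["rtt_ms"], -s["players"]))

def discover(known_servers, probe, wanted=20, rtt_limit_ms=150):
    """probe(server) -> measured RTT in ms, or None on timeout."""
    suitable = []
    for server in order_probes(known_servers):
        rtt = probe(server)
        if rtt is not None and rtt <= rtt_limit_ms:
            suitable.append((server, rtt))
        if len(suitable) >= wanted:
            break  # early termination: enough suitable servers found
    return suitable
```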
A senior member of special counsel Robert Mueller’s investigative team said he was in “awe” of former acting Attorney General Sally Yates the day she was fired for refusing to defend President Trump’s controversial travel ban, according to emails obtained by a conservative watchdog group.
Andrew Weissmann, a veteran Justice Department prosecutor who is one of Mueller’s top lieutenants on the special counsel probe into Russian interference in the 2016 election, sent a Jan. 30 email to Yates that appeared to laud her for standing up to Trump.
“I am so proud and in awe,” Weissmann wrote, according to emails obtained by Judicial Watch through a Freedom of Information Act request. “Thank you so much.” Trump fired Yates that same day and selected Dana Boente, a U.S. attorney for the Eastern District of Virginia, to replace her until the current attorney general, Jeff Sessions, could be confirmed.
On Monday, the Supreme Court ruled that a third version of the controversial travel ban could be fully reinstated after several court challenges.
The Judicial Watch emails are likely to give new political ammunition to conservatives, who have argued that the special counsel has been hopelessly compromised by Democratic prosecutors. Weissmann in particular has donated thousands of dollars to Democrats over the years.
“This is an astonishing and disturbing find. Andrew Weissman, a key prosecutor on Robert Mueller’s team, praised Obama DOJ holdover Sally Yates after she lawlessly thwarted President Trump,” said Judicial Watch president Tom Fitton. “How much more evidence do we need that the Mueller operation has been irredeemably compromised by anti-Trump partisans? Shut it down.”
A spokesperson for the special counsel declined to comment.
While President Trump has largely held his fire on Mueller, conservatives have been working furiously to discredit his efforts, drawing attention to Mueller’s relationship with fired former FBI Director James Comey and his hiring of Democrats.
Many on the right are also raising alarms that the investigation is a fishing expedition that has extended beyond its mandate.
Still, Mueller has a sterling reputation in Washington and GOP leaders on Capitol Hill have argued that he should have the space he needs to conduct his investigation without political pressure.
Weissmann has been a frequent target of the right and his role at the special counsel is likely to come under new scrutiny given the probe’s recent turns.
The veteran Justice Department lawyer has a reputation for “converting defendants into collaborators,” according to The New York Times.
Last week, Trump’s former national security adviser Michael Flynn pleaded guilty to lying to the FBI about his contacts with Russians and said he will be a cooperating witness for the broader investigation.
George Papadopoulos, a former low-level adviser to the Trump campaign, has similarly pleaded guilty to lying to federal prosecutors and is also working with the special counsel. |
Small Steps Approach to Conflict Settlement The article analyzes the phenomenon of the "small steps" tactic in the peace process. An attempt is made to demonstrate how the systematic interaction of the parties in resolving non-politicized issues makes it possible either to avoid another "freeze" of the negotiation process or, at least, to maintain an informal dialogue when negotiations are not being conducted at the political and diplomatic level. This approach is adjacent to Track II diplomacy or one-and-a-half track diplomacy, as well as to the concepts of sustainable dialogue and confidence-building measures. In protracted and smoldering conflicts, reconciliation and finding a reliable settlement formula are impossible without creating a sufficient level of mutual trust, at least between those social groups and representatives of the parties who shape the political agenda and sit at the negotiating table. The study allowed the author to identify the similarities between confidence-building measures and the "small steps" tactic, as well as the conceptual differences that allow us to speak of its innovative nature. The article reviews the positive narratives around the "small steps" tactic and identifies the limitations of its application. For this purpose, the author draws on archival documents of the negotiation process, reports of the OSCE and of the foreign ministries of the parties to the conflict, statements of the involved participants, and personal experience gained over a number of years in the negotiation process on the Transdnistrian settlement in the "5+2" format. The paper concludes that the "small steps" tactic cannot by itself resolve the conflict or build a settlement model, but, thanks to the principle of mutually safe models of behavior, it makes it possible to change the relations of the parties to the conflict from confrontational to cooperative, thus influencing the situation in the conflict zone. |
Wente Vineyards
History
The winery was established by Carl Wente in 1883 on 47 acres of land. Having received training in wines while working for Charles Krug of Napa Valley, Wente purchased a few vineyards and land of excellent soil. In 1934 his sons, Ernest and Herman, introduced California's first varietal wine label, Sauvignon Blanc. In 1936, they introduced the first varietally labeled Chardonnay. The efforts of the Wente family, including pioneering night-time mechanical harvesting and being a leader in sustainable winegrowing practices, have helped establish the Livermore Valley as one of the premier wine-growing areas of California. Since then, it has expanded to over 2,000 acres (8 km²), plus an additional 700 acres (3 km²) in Arroyo Seco.
Wente Clone
The Wente clone is budwood that is used to plant Chardonnay at many California vineyards. In 1912, 2nd Generation Winegrower Ernest Wente took cuttings from the University of Montpellier viticultural nursery in France. Cuttings from the Wente vineyard then spread to a number of other wineries before eventually being certified by the Foundation Plant Materials Service of the University of California, Davis. Clones taken from the certified vines are known as "Wente" or "heat-treated Wente," and clones taken from vines before certification are known as "Old Wente."
Estate
Wente Vineyards also offers a golf course, tasting rooms, a catering service, and a gourmet restaurant all nestled in the heart of the vineyards. During the summer, they host Concerts in the Vineyard, featuring performing artists from classics like The Doobie Brothers and Frankie Valli, to comedians like George Carlin and jazz artists such as Diana Krall, as well as current pop artists like The Band Perry. The catering service also provides food for these concerts for people to eat as they watch the performance. |
Traditional methods used by mothers living in different regions of Turkey for increasing breast milk supply and weaning Introduction: It is a known fact that the traditional practices mothers use to increase breast milk supply and to wean differ across regions of a country and even among communities sharing the same city. This study was conducted to determine the use of herbal teas and certain foods to increase breast milk, the traditional methods used for weaning and the factors influencing these. Materials and methods: This study is descriptive and cross-sectional. Three cities in Turkey with different levels of development, in terms of geographical and socio-economic regions, were chosen. The data were collected through a questionnaire developed by the researchers. Results: Mothers in the eastern region most often received training on increasing breast milk supply, while mothers in the western region fed their babies formula because they thought their milk was not enough; this difference was statistically significant (p<0.05). In our study, 42.1% of the mothers resorted to certain plants and foods to increase breast milk. When the mothers were asked how they weaned their babies, 38.2% pasted things like hair and wool or put tomato paste on the breast, 26.9% applied bitter food to the nipple, while 27.7% stated that the babies stopped breastfeeding spontaneously. Conclusion: Mothers resorted to traditional methods to increase breast milk and to wean, and there were regional differences. |
Leveraging Semantic Embeddings for Safety-Critical Applications Semantic Embeddings are a popular way to represent knowledge in the field of zero-shot learning. We observe their interpretability and discuss their potential utility in a safety-critical context. Concretely, we propose to use them to add introspection and error detection capabilities to neural network classifiers. First, we show how to create embeddings from symbolic domain knowledge. We discuss how to use them for interpreting mispredictions and propose a simple error detection scheme. We then introduce the concept of semantic distance: a real-valued score that measures confidence in the semantic space. We evaluate this score on a traffic sign classifier and find that it achieves near state-of-the-art performance, while being significantly faster to compute than other confidence scores. Our approach requires no changes to the original network and is thus applicable to any task for which domain knowledge is available. |
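A minimal sketch of a semantic-distance style confidence score is shown below, assuming a class-embedding matrix derived from domain knowledge and a network that predicts one embedding vector per input; the cosine-distance choice and the rejection threshold are illustrative, not the paper's exact definition.

```python
# Hedged sketch: confidence in the semantic space as the distance between
# the predicted embedding and the embedding of the predicted class.
import numpy as np

def semantic_distance(pred_embedding, class_embeddings):
    """Return (predicted_class, distance); larger distance suggests lower
    confidence in the prediction."""
    p = pred_embedding / np.linalg.norm(pred_embedding)
    C = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    sims = C @ p                      # cosine similarity to every class
    predicted_class = int(np.argmax(sims))
    return predicted_class, 1.0 - sims[predicted_class]

def flag_errors(pred_embedding, class_embeddings, threshold=0.3):
    # Simple error-detection scheme: reject predictions whose semantic
    # distance exceeds a validation-tuned threshold (0.3 is a placeholder).
    cls, dist = semantic_distance(pred_embedding, class_embeddings)
    return cls, dist, dist > threshold
```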
What’s been the big UK story of the last couple of months? The Icelandic ash cloud has certainly garnered some significant column inches and the General Election has been a definite headline hogger.
Yet one topic – the launch of the Apple iPad – has received attention way in excess of its news value. Not that the iPad isn’t a significant technology launch; as I detail below, the portable device requires a new approach to application development.
But how significant is a technology launch? At the time of writing, ash receives 50,000 Google News results; Apple receives 55,000. Could any other company receive so much media attention, eclipsing a once-in-a-lifetime volcanic eruption that has led to huge economic and social consequences?
The answer, of course, is no. While Apple devices are beloved by the media clique, they are only – lest we forget – well designed computers. And while the excitement surrounding their products is excessive, an element of sensible analysis that understands the potential impact of the iPad is required.
Much has been made of the device’s form; what actually is an iPad? Wikipedia – again, at the time of writing – refers to the iPad as a tablet computer that is “unlike many older laptops”. As understated definitions go, that takes some beating – particularly given the level of media hype associated to the device.
So, let’s be clear. What makes the iPad different is its resting place in the middle ground between smart phones and laptops. Its multi-touch screen provides a new way to interact with information in a portable device.
Its intuitive user interface, and Apple’s strong link-up with publishers, will help drive the next generation of electronic books. The device also has a different screen aspect – instead of a widescreen 16:9 aspect ratio, the iPad screen uses a traditional 4:3 TV ratio.
Analysts have suggested that Apple is trying to find a middle ground between the requirements for publishing, video and gaming. Once again, such converged thinking should be of concern to developers.
Apple is far from the only company taking an all-encompassing approach to software and services. Rival technology giant Microsoft has talked about its three-screen strategy, an attempt to ensure that Windows can give people access to information on the PC, television and mobile phone.
For developers, the message is clear: do not make the mistake of creating an application for a single platform. In the future, successful developers will have to accommodate applications to fit more than one screen size.
While the iPad isn’t the social and economic revolution suggested by the media hype, it is a significant evolution in technology and developers must be prepared. |
"MacGregor Tells the World," Elizabeth McKenzie's first novel, is a story for romantics. Its main character, 22-year-old MacGregor West, is a literature-loving lost soul whose mother has drowned herself in the Seine. MacGregor, or Mac, spends the book uncovering his scandalous family history while romancing a San Francisco heiress and teaching a little boy about great literature.
It's a heady mix, and has its share of pleasures. The heiress, Carolyn, is the child of aging literary lion Charles Ware. Charles is a wonderfully pretentious blowhard, and his circle of hangers-on is amusing and well rendered. His past, which may have included Mac's mother, is the novel's most interesting part.
As a young writer, Charles befriended a poor Italian American youth named Bill Galleotto, and the two began a decadeslong love affair that may or may not have been platonic. Charles immortalized their relationship in the barely fictionalized "Tangier," and he's now planning a sequel. Because he is estranged from Bill, however, he may lack for material.
With this groundwork in place, McKenzie seems poised to consider the importance of male friendship in literature and the relationship of a middle-class author to his working-class subject. She even throws in a parallel in Mac's own life -- a friend named Cesar, tragically killed in a motorcycle accident just after his high school graduation. She soon pushes these elements to the wayside, however, in favor of the growing love between Mac and Carolyn and Mac's attempts to find out about his mother.
Unfortunately, neither of these story lines is particularly satisfying. Carolyn is an unconventional character, combining a deep care for her younger sister with a tendency to throw money around. But her relationship with Mac is heavy on articulate banter and light on real passion. Mac's mother never emerges as a fully realized person. We learn about her tragic childhood and her dissolute youth, but Mac never succeeds in his quest to render her whole.
Or perhaps he does. In one of his first conversations with Carolyn, Mac likens his mother to the Colossus of Rhodes -- she's a "wonder of the world" and he wants to "rebuild" her. Yet, by the end of the novel, Mac's efforts fail to yield a person with whom the reader can empathize. Instead, what's presented is a kind of monument: the mistreated child who grows up damaged and abandons her own offspring. Perhaps we are meant to gaze upon her tragic history rather than to get inside her head.
At heart, "MacGregor Tells the World" is not a realist novel. Its characters -- particularly such supporting players as the drawling, toothless groundskeeper Glen -- are over-the-top. Its set pieces, such as Carolyn Ware's appearance on a fold-up bed, are straight out of daydreams. This is why the novel works best when it deals with the stuff of literary legend. The story of Charles Ware and Bill Galleotto -- hints of Jack Kerouac notwithstanding -- is larger than life. Globe-trotting celebrities with big egos and big appetites, these two mesh perfectly with McKenzie's sensibilities. They have far more in common with the Colossus of Rhodes than Mac's mother does, and they make far better monuments.
They have a lot of competition, not just with Mac's mother but with a host of other supporting characters. "MacGregor Tells the World" is chock-full of plotlines, some far-fetched and some decidedly not. The problems of this approach are twofold. First, the novel seems to straddle the line between the real world and McKenzie's world, raising uncomfortable questions. Take Mac's hyper-responsible cousin Fran, whose idea of fun is buying household appliances, for instance. Would this woman really let Mac stay with her rent-free and never pressure him to get a job?
The second problem is one of space. At 259 pages, "MacGregor Tells the World" doesn't quite have room for all of its stories. As a result, many tantalizing questions about Charles and Bill go unanswered. Cesar and his motorcycle almost completely drop from view, despite their seeming importance in Mac's life. And Filipo, the little boy with whom Mac reads Dickens, keeps popping up and getting forced down again.
Had McKenzie trimmed her more quotidian plotlines and allowed the zanier ones to flourish, "MacGregor Tells the World" might have been a very good book. McKenzie has a gift for thinking big and a knack for developing offbeat characters. Her deliciously nasty portrayal of the aging genius Charles Ware suggests that she might make a master satirist. Whatever she turns her pen to next, may she give her imagination free rein. "MacGregor Tells the World" is a novel with its head half in the clouds -- next time, let's hope McKenzie goes all the way. |
Could taking good care of gums and teeth also help to protect the brain? A recent study has added to growing evidence of a link between severe gum disease, or periodontitis, and a raised risk of dementia.
New research suggests that keeping your gums healthy may prevent dementia.
Using data from an extensive national health insurance screening program, investigators from Seoul National University in South Korea examined the relationship between chronic periodontitis and dementia.
In a paper that now features in the Journal of the American Geriatrics Society, the researchers describe how they found a modest link between severe gum disease and dementia, which is consistent with some previous studies.
The researchers also point out that their "retrospective cohort study" is likely the first to establish that lifestyle factors, such as alcohol consumption, smoking, and exercise, did not appear to have any effect on the connection.
The term dementia describes a decline in mental capacity – such as increasing difficulty with memory and reasoning – that becomes so severe that it disrupts daily living. Alzheimer's disease is the most common cause of dementia.
A joint 2012 report by the World Health Organization (WHO) and Alzheimer's Disease International stated that dementia is a global "public health priority."
The report stated that there were 35.6 million people worldwide living with dementia in 2012. It also estimated that the global prevalence of dementia would increase threefold by 2050.
In their study paper, the researchers discuss the potential impact that reducing dementia risk factors could make to this projected massive burden.
The researchers cite a 2014 study that suggested that decreasing dementia risk factors by 20 percent could reduce the anticipated 2050 prevalence of dementia by more than 15 percent. "One such risk factor," they suggest, "is chronic periodontitis."
Periodontitis is a common human disease in which the gums and the structures that support the teeth become inflamed due to bacterial infection. It usually starts as gingivitis, or inflammation of the gums.
Although the human mouth is home to a wide range of bacteria, when conditions are right, the bacteria populations can increase dramatically to cause inflammation. This usually happens when bits of food and bacteria deposit on tooth surfaces to form plaque.
The bacterial colonies in the plaque grow and produce toxins that trigger inflammation responses in the gums. If untreated, the inflammation becomes persistent and destroys bone, causing tooth loss.
Several animal and human studies have suggested links between chronic periodontitis and dementia. The authors of the new study refer to a retrospective investigation that found participants with chronic periodontitis had a "significantly higher risk" of developing Alzheimer's disease than those without it.
However, they also note that these previous studies have been limited by small sample sizes, and by the fact that they did not consider forms of dementia outside of Alzheimer's disease.
For the new investigation, the team analyzed 2005–2015 health data on 262,349 people aged 50 and older from South Korea's National Health Insurance Service-Health Screening Cohort.
The analysis revealed that people who had received a diagnosis of chronic periodontitis had a 6 percent higher risk of developing dementia than those who had not. The risk was particularly significant for those who developed Alzheimer's disease.
Due to the study's design limitations, the findings cannot prove that periodontitis causes dementia; they can only suggest a link.
This leaves open the possibility of reverse causality. For example, could it be that pre-diagnosed early stages of dementia cause lapses in oral hygiene that lead to gum disease?
If, however, the causal direction should be that periodontitis leads to dementia, the authors propose three biological ways in which it could come about.
The first mechanism through which periodontitis could cause dementia would involve bacteria from the infected gums entering the bloodstream and then crossing the blood-brain barrier into the brain. These could then trigger brain tissue inflammation and even spur production of the toxic proteins that are hallmarks of Alzheimer's disease.
Medical News Today recently reported research that makes a convincing case for such a causal link. In that study, researchers revealed that Porphyromonas gingivalis, a bacterium that drives gum disease, can also be present in the brains of people with Alzheimer's disease.
The second mechanism would be a similar process in that the gum infection could set up a "systemic inflammatory state" that releases agents that promote inflammation. These agents could also cross the blood-brain barrier to trigger inflammation in brain tissue, which, if prolonged, can also contribute to toxic protein buildup.
The researchers suggest that the third mechanism would occur through damage to the lining of blood vessels. They note that evidence from previous research showed that such damage has ties to an increase in toxic proteins in the brain.
"In conclusion, [chronic periodontitis] appeared to be associated with increased risk for dementia even after taking into consideration lifestyle behaviors including smoking, alcohol intake, and physical activity."
They call for further research to look into whether the prevention and treatment of chronic periodontitis could reduce the risk of developing dementia.
In a short editor's note, Drs. Joseph G. Ouslander and Mary Ganguli comment that these findings, "in combination with the recently published report on P. gingivalis, should make us all think more seriously about optimizing our own and our patients' oral hygiene practices and dental care, with the added potential of perhaps protecting our brain health as well." |
Primary reconstruction of complex midfacial defects with combined lip-switch procedures and free flaps. Free flaps are generally the preferred method for reconstructing large defects of the midface, orbit, and maxilla that include the lip and oral commissure; commissuroplasty is traditionally performed at a second stage. Functional results of the oral sphincter using this reconstructive approach are, however, limited. This article presents a new approach to the reconstruction of massive defects of the lip and midface using a free flap in combination with a lip-switch flap. This was used in 10 patients. One-third to one-half of the upper lip was excised in seven patients, one-third of the lower lip was excised in one patient, and both the upper and lower lips were excised (one-third each) in two patients. All patients had maxillectomies, with or without mandibulectomies, in addition to full-thickness resections of the cheek. A switch flap from the opposite lip was used for reconstruction of the oral commissure and oral sphincter, and a rectus abdominis myocutaneous flap with two or three skin islands was used for reconstruction of the through-and-through defect in the midface. Free flap survival was 100 percent. All patients had good-to-excellent oral competence, and they were discharged without feeding tubes. A majority (80 percent) of the patients had an adequate oral stoma and could eat a soft diet. All patients have a satisfactory postoperative result. Immediate reconstruction of defects using a lip-switch procedure creates an oral sphincter that has excellent function, with good mobility and competence. This is a simple procedure that adds minimal operative time to the free-flap reconstruction and provides the patient with a functional stoma and acceptable appearance. The free flap can be used to reconstruct the soft tissue of the intraoral lining and external skin deficits, but it should not be used to reconstruct the lip. |
l-Ascorbic acid oxygen-induced micro-electronic fields over metal-free polyimide for peroxymonosulfate activation to realize efficient multi-pathway destruction of contaminants Peroxymonosulfate (PMS) activation in heterogeneous processes for pollutant removal is a promising water treatment technique. However, the existing rate-limiting step of metal-containing systems greatly restrains its performance and increases the consumption of PMS and energy. Herein, we describe an efficient, metal-free strategy for solving these problems. In this study, L-ascorbic acid O-doped carbon-nitrogen-oxygen polymer nano-flowers (OVc-CNOP Nfs) were fabricated via a facile thermal polymerization process. The metal-free OVc-CNOP Nfs showed excellent activity and stability in PMS activation for the degradation of various organic pollutants. Even the refractory endocrine disruptor bisphenol A can be completely removed in just 1 min. Doping with O induced bidirectional transfer of electrons on the aromatic rings through C-O-C bonding, which led to the formation of micro-electronic fields on the catalyst surface. During the reaction, efficient oxidation of organic pollutants, utilizing the charges of the pollutants themselves around the electron-poor C areas, inhibited undesirable oxidation of PMS and promoted PMS reduction in the electron-rich O areas to generate hydroxyl (•OH) and sulfate (SO4•−) radicals. The reduction of dissolved oxygen to superoxide (O2•−) around the electron-rich O areas also destroyed the pollutants, and this further drove electron donation from the pollutants and accelerated catalytic oxidation. These multi-pathway synergistic processes, involving oxidation of the pollutants themselves and attack by free radicals, enabled rapid degradation of contaminants. |
Exelixis and Bristol-Myers Squibb Initiate Phase 3 Trial of Opdivo® in Combination with CABOMETYX™ or Opdivo and Yervoy® in Combination with CABOMETYX, Versus Sunitinib in Previously Untreated Advanced or Metastatic Renal Cell Carcinoma SOUTH SAN FRANCISCO, Calif. & NEW YORK--(BUSINESS WIRE)--Exelixis, Inc. (NASDAQ:EXEL) and Bristol-Myers Squibb Company (NYSE:BMY) today announced the initiation of the phase 3 CheckMate 9ER trial to evaluate Opdivo® (nivolumab) in combination with CABOMETYX™ (cabozantinib) tablets, a small molecule inhibitor of receptor tyrosine kinases, or Opdivo and Yervoy® (ipilimumab) in combination with CABOMETYX, versus sunitinib in patients with previously untreated, advanced or metastatic renal cell carcinoma (RCC). The primary endpoint for the trial is progression-free survival (PFS). |
Carmustine: Promising Drug for Treatment of Glioma Background: The treatment of glioma is challenging, and the survival rate is not more than one year after diagnosis. Carmustine is a non-specific antineoplastic agent that belongs to the nitrosourea group of compounds (bis-chloroethyl nitrosourea) and has various mechanisms of tumor cytotoxicity. Main body: As an alkylating agent, it can alkylate reactive sites on nucleoproteins, interfering with DNA and RNA synthesis and DNA repair. It can create interstrand crosslinks in DNA, preventing DNA replication and transcription. Under physiological conditions, carmustine undergoes spontaneous non-enzymatic decomposition, releasing reactive intermediates with alkylating and carbamoylating activities, which are thought to be responsible for carmustine's anticancer and cytotoxic properties. Human gliomas are the most common kind of brain tumor. Gliomas can be treated with a variety of chemotherapeutics, but carmustine has recently emerged as a promising treatment option. Conclusion: The adoption of a nose-to-brain medication delivery method has several advantages over efforts that try to breach the blood-brain barrier. The focused strategy also lowers the risk of cardiovascular harm. A significant advantage of nose-to-brain administration is the relatively high patient compliance. Research into new chemotherapeutic compounds and treatment delivery technologies is crucial for present and future patients. |
New light for old enigmas in epithelial transport? The maintenance of a layer of fluid at the surface of the airways and the gas exchange region of the lung is an essential activity of the respiratory epithelium, ensuring efficient gas exchange and mucociliary clearance-driven bacterial removal, among other functions. Regulation of the height of this airway or alveolar surface liquid layer is dependent on opposing secretory and absorptive fluid transport processes that, in turn, are driven by ion transport. Secretion into the alveolar surface liquid layer requires blood-to-lumen Cl− flow mediated by cystic fibrosis transmembrane conductance regulator (CFTR) channels, whose functional significance is highlighted by the fact that their impairment is associated with cystic fibrosis. Absorptive fluxes affecting the alveolar surface liquid layer are dependent on Na+ movement mediated by the apical membrane epithelial Na+ channel (ENaC) and the basolaterally located Na+/K+ pump. The relevance of the ENaC in the respiratory system can be gauged from the consequences of human mutations enhancing its activity or of its overexpression in transgenic mice, both of which lead to cystic fibrosis-like pathology. The ENaC and its relationship with the Na+/K+ pump are therefore important in the physiology of the respiratory epithelium and in potentially life-threatening pathological states. Our knowledge of ENaC physiology and of the cell biology determining its expression and localization is of great interest. Two remarkable observations are presented in a paper from Luis Galietta's laboratory in this issue of Experimental Physiology. First, immunolocalization |
Edward O'Shaughnessy
Edward O'Shaughnessy (16 November 1860 – 6 August 1885) was an English cricketer who played first-class cricket for Kent and the Marylebone Cricket Club (MCC) between 1879 and 1885. He was born in Canterbury, Kent and died at St John's Wood, London.
O'Shaughnessy was a professional cricketer and played as an all-rounder – a right-handed middle-order batsman, sometimes used as an opener, and a right-arm slow bowler who bowled in the roundarm style. Most of his first-class games were for Kent, though he was also on the ground staff at Lord's. His early impact was as a bowler: against Sussex in 1879 he took seven first-innings wickets for 16 runs in 35.2 four-ball overs, and followed that in the second innings with five for 24 for match figures of 12 for 40. In the same fixture in 1882, as an opening batsman he made 98, which was his highest score. His best for MCC was an innings of 89 against Somerset in 1883.
O'Shaughnessy played a couple of times for Kent early in the 1885 season, but in early August of the same year he died of tuberculosis in London. The match between Sussex and Kent at Brighton the following week was paused for the duration of his funeral. |
Knotting of linear DNA in nano-slits and nano-channels: a numerical study The amount and type of self-entanglement of DNA filaments are significantly affected by spatial confinement, which is ubiquitous in biological systems. Motivated by recent advancements in single DNA molecule experiments based on nanofluidic devices, and by the introduction of algorithms capable of detecting knots in open chains, we investigate numerically the entanglement of linear, open DNA chains confined inside nano-slits. The results regard the abundance, type and length of occurring knots and are compared with recent findings for DNA inside nano-channels. In both cases, the width of the confining region, D, spans the 30 nm-1 μm range and the confined DNA chains are 1 to 4 μm long. It is found that the knotting probability is maximum for slit widths in the 70-100 nm range. However, over the considered DNA contour lengths, the maximum incidence of knots remains below 20%, while for channel confinement it tops 50%. Further differences in the entanglement are seen for the average contour length of the knotted region, which drops significantly below D ~ 100 nm for channel confinement, while it stays approximately constant for slit-like confinement. These properties ought to reverberate in different kinetic properties of linear DNA depending on confinement and could be detectable experimentally or exploitable in nano-technological applications. I. INTRODUCTION Like other forms of entanglement, the abundance and type of knots in equilibrated DNA molecules depend on both intrinsic and extrinsic properties. The former include the chain contour length, the bending rigidity and chirality, while the latter include spatial constraints such as confinement in narrow spaces. A notable example of spatially-confined DNA is offered by the viral genome packaged inside capsids. Typically, the viral DNA contour length exceeds by orders of magnitude the capsid diameter, resulting in a near-crystalline density of the packaged genome. Over the years, several numerical studies have accordingly tried to understand not only how the DNA filament is packaged inside the virus but especially how it can be ejected into the host cell through the narrow capsid exit channel without being jammed by self-entanglement. A solution to this conundrum was proposed in ref., which reported that the ordering effect of DNA cholesteric self-interaction is responsible for keeping the entanglement at a minimum and compatible with an effective ejection process. The effects of spatial constraints on DNA self-entanglement and the possible implications for DNA condensation, packaging and translocation have been systematically addressed only recently, largely because of the introduction of suitable nano-devices and micro-manipulation techniques that allow for probing the properties of a few confined molecules at a time. In such contexts, a still largely unexplored research avenue is the characterization of the occurrence of knots in open, linear DNA molecules. In fact, theoretical and experimental studies of knot occurrence have largely focused on equilibrated chains where knots are trapped by a circularization reaction which ligates the two chain ends, thus forming a ring. The topology of such rings is clearly maintained until they open up, and therefore their knottedness is well-defined. This is not the case for open chains, where non-trivial entanglement cannot be permanently trapped because of the two free ends. 
Yet, we are all familiar with the fact that knots in open chains can be long-lived and can affect various physical and dynamical properties of polymers. In particular in lab-on-chip experiments the presence of knots in linear DNAs may interfere with the confinement elongation process of the molecules, an essential step for the detection of protein-DNA interactions and also towards genome sequencing by pore-translocation. These considerations have stimulated a number of efforts aimed at suitably extending the algorithmic notion of knottedness to linear, open chains. Building on these theoretical advancements and motivated by the upsurge of DNA nano-manipulation experiments here we report on a numerical study of the knotting properties of linear DNA chains confined in nanoslits and nano-channels. The investigation is based on a coarse-grained model of DNA and is a followup of two recent studies of the metric and entanglement properties that we carried out for closed and open chains in nano-slits and nano-channels. Specifically, the properties of knotted open chains confined in slits is reported here for the first time and is compared with the earlier results for channel confinement. The model Hereafter we provide a brief, yet self-contained, description of the coarse-grained DNA model and simulation and numerical techniques used to characterize the topological properties in slit-and channel-like confining geometries. By following the approach of ref., linear filaments of dsDNA are modeled as semi-flexible chains of N identical cylinders. The cylinders diameter is d = 2.5nm, corresponding to the dsDNA hydration diameter, and their axis length is equal to b = 10nm, i.e. a fraction of the nominal dsDNA persistence length, l p = 50nm. A chain configuration is fully specified by the location in space of the cylinders axes, t 1, t 2, t N and, in the unconstrained case, its energy is given by the sum of two terms: The first term accounts for the excluded volume interaction of the cylinders. It is equal to "infinity" if two non-consecutive cylinders overlap and zero otherwise. The second term is the bending energy, with T = 300K being the system temperature and K B the Boltzmann constant. It is assumed that the DNA is in a solution with high concentration of monovalent counterions, so that its electrostatic self-repulsion is effectively screened. Because non-local self-contacts of the DNA molecule are infrequent for two-and one-dimensional confinement we neglect both desolvation effects and the DNA cholesteric interaction. Finally, because linear chains can effectively relax torsion, the DNA torsional rigidity is neglected too. The confinement of the DNA inside slits is enforced by requiring that the chain maximum span perpendicular to the slit plane, ∆, is lower than a preassigned value, D. Likewise, for channel confinement, it is required that the maximum calliper (diameter), ∆, measured perpendicularly to the channel axis is smaller than D. The conformational space of confined chains was explored by means of a Monte Carlo scheme employing standard local and global moves (crankshaft and pivot moves). Following the Metropolis criterion, a newly-generated trial conformation is accepted or rejected with probability given by In the latter expression represents an auxiliar parameter that couples to the chain span (or calliper size), ∆. Accordingly, by using different values of it is possible to bias the sampling of the configurations towards configurations with different average values of ∆. 
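To make the biased sampling step concrete, the following minimal Python sketch shows the corresponding Metropolis acceptance test. The symbol of the auxiliary biasing parameter is not legible in the text, so it is called lam here; the energy and span changes of the crankshaft/pivot trial moves are assumed to be computed elsewhere, and canonical averages are recovered afterwards by reweighting, as described next.

```python
import numpy as np

def metropolis_accept(dE, d_span, lam, kT=1.0, rng=None):
    """Accept/reject a trial crankshaft or pivot move in the biased ensemble.

    dE     : change of chain energy (excluded volume + bending) for the move
    d_span : change of the slit span / channel calliper size Delta
    lam    : auxiliary biasing parameter coupled to Delta (name assumed here)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Biased Metropolis ratio: exp(-dE/kT - lam * d_span). Overlapping cylinders
    # give dE = inf and are always rejected. Canonical averages are recovered
    # a posteriori by thermodynamic reweighting.
    log_ratio = -dE / kT - lam * d_span
    return np.log(rng.random()) < log_ratio
```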
Next, because the biasing weight is set a priori, it is possible to remove it by using thermodynamic reweighting techniques and recover the canonical expectation values of the observables of interest. Advanced sampling and reweighting techniques (which are reviewed in detail in ref. ) is adopted here because a direct enforcement of the geometrical constraints in the Monte Carlo sampling would be inefficient due to the high Metropolis rejection rate. We considered chains of N =120, 240, 360, 400 and 480 cylinders, corresponding to contour lengths L c = N b in the range 1-4.8m. Across the various values of we collected ∼ 10 5 uncorrelated configurations of chains that could be accommodated inside channels or slits with width D in the 40 nm-1m range. Notice that, because of excluded volume effects, the minimal width achievable by slits and channels in presence of a linear chain is d. It is therefore convenient to profile all properties in terms of the effective width D eff = D − d. Chain size and knot detection To characterize the average size of the chain we consider its root mean square radius of gyration where runs over the three Cartesian components of the position vector r i of the center of the i-esim cylinder of the chain, r = 1 N N i=1 r i is the center of mass of the chain and the brackets denote the canonical average over chains in equilibrium an confined in slits or channels. Profiling the knot spectrum The entanglement properties of the confined model DNA filaments were characterised by establishing the knotted state of the open chains and by measuring the knot contour length. From a mathematical point of view only circular chains have a well-defined topological knotted state since it cannot be altered by distorting or changing the chain geometry as long as the chain connectivity is preserved. To extend the concept of knottedness to open chains, it is therefore necessary to close them in a ring with an auxiliar arc. The knotted state of the closed ring is then assigned to the open chain. The auxiliary arc must clearly be suitably defined to ensure the robustness of the topological assignment; in particular it must avoid interfering with the self-entanglement of the open chain. To this purpose we adopted the minimally-interfering closure scheme introduced in ref.. The position of the knot along the chain is next established by identifying the shortest chain portion that, upon closure has the same knotted state of the whole chain. To minimize the chance of detecting slipknots it is also required that the complementary arc on the closed chain is unknotted. Fig. 1 illustrates two knotted configurations of 2.4m-long open chains confined inside a slit and a channel. The knotted portion of the chains is highlighted. III. RESULTS AND DISCUSSION The metric properties of linear DNA chains for various degrees of slit-like confinement were systematically addressed in ref.. Such study indicated that for increasing confinement, the two principal axes of inertia of linear molecules first orient in the slit plane and next grow progressively as the chain spreads out in a quasi-two-dimensional geometry. The interplay of the increase of the chain size projected in the slit plane, R ||, and the concomitant decrease of the transverse size, R ⊥, results in the non-monotonic behaviour of R g, as illustrated in Fig. 2. 
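For reference, a minimal Python sketch of the chain-size observable defined above: it computes the root-mean-square radius of gyration over an ensemble of sampled configurations, assuming the cylinder centres of each configuration are stored as an (N, 3) array; the reweighting factors needed to turn the biased sample into canonical averages are omitted for brevity.

```python
import numpy as np

def radius_of_gyration(r):
    """R_g of a single configuration; r is an (N, 3) array of cylinder centres."""
    com = r.mean(axis=0)                        # centre of mass
    return np.sqrt(((r - com) ** 2).sum(axis=1).mean())

def rms_radius_of_gyration(configs):
    """Root-mean-square R_g over an ensemble (unweighted average for brevity)."""
    return np.sqrt(np.mean([radius_of_gyration(r) ** 2 for r in configs]))
```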
The data shows that R g attains a minimum at an effective channel width, D *, that is slightly larger then the average radius of gyration of the unconstrained, bulk case (marked by the black dashed line). For comparative purposes, in the same figure it is shown the average radius of gyration of equally-long linear DNA chains confined in channels. In this case too one observes the nonmonotonic dependence of R g on the width of the confining region, which attains its minimum at a channel width, D * slightly larger that the slit case (red dashed line). However, the increase of R g past the minimum is much more dramatic than for the slit case. This reflects the fact that the chain can only elongate in only one direction rather than in a plane. A relevant question regards the extent to which the dimensionality of the confining region (slits or channels) and its width can affect the incidence, complexity and size of knots in linear chains. We accordingly applied the minimallyinterfering closure scheme to establish the knotted state of equilibrated linear chains and to locate the knot along their contour. We first discuss the overall incidence of non-trivial knot topologies. The results for slit-confined chains are shown in Fig. 3(a). As expected, for each fixed value of D eff the knotting probability depends strongly on L c. In fact, going from L c = 1m to 4.8m the knotting probability increases by one order of magnitude. By comparison, the knotting probability variations on D eff (at fixed L c ) are smaller, though noticeable. More importantly, the knotting probability displays, as D eff decreases, a non-monotonic behaviour with a maximum enhancement peak at a width, D e that falls within the 50-100 nm range. In particular, the knotting probability varies by a factor of 2 going from the unconstrained case D ∼ 1m to D e ∼ 80nm. As consistently indicated by analogous results of slit-confined rings, this knotting enhancement should be measurable experimentally by circularizing dsDNA molecules with complementary single-stranded ends inside slits. As shown in Fig. 3(b), the confinement-induced enhancement of non-trivial knots is even stronger for the channels case: at the largest contour length, L c = 4.8m, the probability peak value is about 10 times larger than the bulk one. One may anticipate that the maximum knot enhancement is attained for the same channel width, D *, for which R g is minimum ( i.e. at the highest value of the overall chain density). This is, however, not the case since D e is about one third of D * at all considered chain lengths. Besides the overall knotting probability, it is of interest to examine in which proportion the various types of knots contribute to the overall knot population. The results for 4.8mlong chains inside slits are shown in Fig. 4. One notices that, at all explored values of D eff, prime and composite knots with up to six crossings in their minimal diagrammatic representation account for at least 90% of the observed knot population. In particular the simplest knot type, the 3 1 or trefoil knot, is by far the most abundant. For channel confinement the predominance of simple knots is even stronger. In particular the peak probability of the trefoil knot is, at D eff = D e about 23%, which is four times larger than for the unconstrained, bulk case. The result is noteworthy for several reasons. 
Firstly, at variance with the case of three-dimensional isotropic confinement (cavity, capsids) of DNA rings the knot spectrum of chains in slits and channels is dominated by the simplest knot types. This fact was recently established for closed chains in slits and channels and for open chains in channels only. The present results for open chains inside slits therefore complete the overall picture in a consistent way. Secondly, the percentage of simple knots (and in particular of the simplest one, the trefoil) found in two-dimensional con- To profile the finer characteristics of the chain selfentanglement, we finally analysed the average length of the chain portion that is spanned by the highly abundant trefoil knots, l 31. The results shown in Fig. 5 show that, at fixed L c, l 31 is non-monotonic on D eff for both slits and channels. However, one major difference is that is that the knot length decreases dramatically after the peak for channels while for slits it appears to approach a limiting value not dissimilar from the bulk one. For both types of confinement, l 31 depends strongly on L c. For instance, the confinement width associated with the maximum knot length varies noticeably with L c. In addition, the relative difference of the peak value of l 31 respect to the bulk case increases with L c too (it ranges from 10% to 22% for slits and from 16% to 36% for the channels). It is interesting to examine the observed dependence of l 31 on L c in connection with earlier scaling studies on closed rings in various conditions (unconstrained, collapsed, stretched etc. ). In particular, we recall that knots in unconstrained rings are known to be weakly localized in that their contour length scales sublinearly with L c. Specifically, for trefoils it was shown that for asymptotically-long rings, l 31 ∼ L t c with t ≈ 0.65. Motivated by these earlier findings we analysed the data in Fig. 5 and rescaled them so to collapse the l 31 curves for very weak confinement (D eff > 800nm). The optimal rescaling was obtained for the exponent, t = 0.62 ± 0.05 for both slits and channels. For stronger confinement, the rescaled curves depart from each other. By contrast with the slit case, the channel data show a systematic upward trend for increasing L c. As a matter of fact, the portion of the curve for D eff D e shows a good collapse for t = 0.74 ± 0.01. This suggests that, as confinement increases, the knotted subregion of the chain remains weakly localized but it further swells along the unconstrained dimensions. IV. CONCLUSIONS In polymer physics there is an ongoing effort to understand the extent to which spatial constraints affect the probability of occurrence, the complexity and size of topological defects in linear polymers. For DNA this problem has various implications both for the understanding of some biological elementary processes (such as translocation and viral ejection) and for the development of efficient setups for DNA nanomanipulation protocols (such as sorting or sequencing). Here we reported on a numerical study, based on advanced Monte Carlo simulations, thermodynamic reweighting and scaling analysis of the equilibrium topological properties of a coarse grained model of linear DNA confined in rectangular slits and cylindrical nanochannels. The investigation was carried out for linear chains with contour lengths ranging between 1.2 and 4.8m and confined within geometries whose transversal dimension D eff span continuously the 30 − 1000nm range. 
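Referring back to the scaling analysis of the trefoil knot length above, a minimal sketch of how the exponent t in l31 ~ Lc^t can be extracted by a least-squares fit in log-log space of the weak-confinement data; the numbers in the usage comment are purely illustrative and not data from this work.

```python
import numpy as np

def fit_knot_length_exponent(Lc, l31):
    """Fit l31 ~ A * Lc**t and return (t, A).

    Lc, l31 : arrays of contour lengths and mean trefoil knot lengths taken at
              weak confinement (e.g. D_eff > 800 nm).
    """
    t, logA = np.polyfit(np.log(Lc), np.log(l31), 1)
    return t, np.exp(logA)

# Illustrative usage with made-up numbers (micrometres):
# t, A = fit_knot_length_exponent([1.2, 2.4, 3.6, 4.8], [0.30, 0.47, 0.60, 0.72])
```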
We found that, both for slits and channels, the knotting probability is a non-monotonic function of D_eff, with a peak that occurs at a length-dependent confinement width D_e. Most importantly, and unlike DNA in capsids (i.e. under full confinement), the enhancement of the topological entanglement in slits and channels is not accompanied by a corresponding enhancement of the entanglement complexity. Indeed, even though the peak knotting probability exceeds the bulk value several times over, most of the observed knots belong to very simple knot types. This effect is particularly evident for channel confinement. It suggests that nano-fluidic devices based on this or similar one-dimensional geometries may be very effective for producing a population of linear DNA molecules with a simple knot tied in. Finally, by using a robust algorithm for locating knots in open chains, we show that the typical contour length of the knotted region displays a non-monotonic behaviour similar to the one observed for the knotting probability. Moreover, from its scaling behaviour as a function of the chain contour length, we find that knots remain weakly localized over the whole range of confinement, both for slits and channels. FIG. 4. Knot spectrum as a function of D_eff for 4.8 μm-long linear DNA chains confined within slits (a) and channels (b). The knotting data for linear chains inside channels are based on the study of ref. ACKNOWLEDGMENTS We thank L. Tubiana for useful discussions. We acknowledge support from the Italian Ministry of Education, grant PRIN 2010HXAW77. NOTE The final version of this manuscript will be available at http://link.springer.com as part of a special issue of the Journal of Biological Physics, DOI: 10.1007/s10867-013-9305-0.
Notch signaling induces cytoplasmic CD3 epsilon expression in human differentiating NK cells. It has been proposed that heterogeneity in natural killer (NK)-cell phenotype and function can be achieved through distinct thymic and bone marrow pathways of NK-cell development. Here, we show a link between Notch signaling and the generation of intracellular CD3epsilon (cyCD3)-expressing NK cells, a cell population that can be detected in vivo. Differentiation of human CD34(+) cord blood progenitors in IL-15-supplemented fetal thymus organ culture or OP9-Delta-like 1 (DL1) coculture resulted in a high percentage of cyCD3(+) NK cells that was blocked by the gamma-secretase inhibitor DAPT. The requirement for Notch signaling to generate cyCD3(+) NK cells was further illustrated by transduction of CD34(+) cord blood (CB) cells with either the active intracellular part of Notch or the dominant-negative mutant of mastermind-like protein 1 that resulted in the generation of NK cells with respectively high or low frequencies of cyCD3. Human thymic CD34(+) progenitor cells displayed the potential to generate cyCD3(+) NK cells, even in the absence of Notch/DL1 signaling. Peripheral blood NK cells were unable to induce cyCD3 expression after DL1 exposure, indicating that Notch-dependent cyCD3 expression can only be achieved during the early phase of NK-cell differentiation. |
The Infernal Cauldron
Plot
In a Renaissance chamber decorated with devilish faces and a warped coat of arms, a gleeful Satan throws three human victims into a cauldron, which spews out flames. The victims rise from the cauldron as nebulous ghosts, and then turn into fireballs. The fireballs multiply and pursue Satan around the chamber. Finally Satan himself leaps into the infernal cauldron, which gives off a final burst of flame.
Versions
Méliès's pre-1903 films, especially the popular A Trip to the Moon, were frequently pirated by American producers such as Siegmund Lubin. In order to combat the piracy, Méliès opened an American branch of his Star Film Company and began producing two negatives of each film he made: one for domestic markets, and one for foreign release. To produce the two separate negatives, Méliès built a special camera that used two lenses and two reels of film simultaneously.
In the 2000s, researchers at the French film company Lobster Films noticed that Méliès's two-lens system was in effect an unintentional, but fully functional, stereo film camera, and therefore that 3D versions of Méliès films could be made simply by combining the domestic and foreign prints of the film. Serge Bromberg, the founder of Lobster Films, presented 3D versions of The Infernal Cauldron and another 1903 Méliès film, The Oracle of Delphi, at a January 2010 presentation at the Cinémathèque Française. According to the film critic Kristin Thompson, "the effect of 3D was delightful … the films as synchronized by Lobster looked exactly as if Méliès had designed them for 3D." Bromberg screened both films again—as well as the 1906 Méliès film The Mysterious Retort, similarly prepared for 3D—at a September 2011 presentation at the Academy of Motion Picture Arts and Sciences. |
The number of criminal prosecutions of physicians for medical negligence related to gastroenterology is on the rise in Japan Dear Editor, Medical malpractice continues to be a topic of importance. We had previously reported that criminal prosecution of physicians for medical negligence is increasing in Japan as in Western countries. In that report, we retrieved data from the Japanese database of criminal prosecutions pertaining to medical negligence between 1974 and 2003. Prosecutions have continued to increase since then according to the database. This involves an increase in both the total number of physicians prosecuted and the prosecutions related to gastroenterology (Figure 1). The number of prosecuted physicians was 16 or less per 5 years before 1999 but has dramatically increased since 2000. Fifty-nine physicians were prosecuted between 2000 and 2004 and 48 between 2005 and 2009. The number of prosecuted physicians related to gastroenterology was 1 or less per 5 years before 1999 but rose to 5 between 2000 and 2004 and 7 between 2005 and 2007. Between 1975 and 2009, 161 physicians were prosecuted, with only 7 (4%) found innocent. Of the 154 convicted, 115 (75%) were convicted of professional negligence resulting in death, 27 (18%) of professional negligence resulting in bodily injury, 3 (2%) of professional negligence resulting in both death and bodily injury, and 9 (6%) of other offences. Fourteen physicians were prosecuted for medical negligence related to gastroenterology. For example, one physician was prosecuted for overlooking a colonic perforation after colonoscopy (Kaya Summary Court, 8 October 1999). The patient complained of lower abdominal pain the day after the colonoscopy, and her X-ray showed a free-gas shadow in the abdominal cavity. The physician overlooked the abnormal shadow, merely observed the patient and did not prepare for surgery. She died of multiple organ failure due to panperitonitis. The physician was fined 300,000 yen (US$ 3000). Another physician was prosecuted for administering an anticancer drug overdose to a patient with esophageal cancer, leading to the patient's death (Saitama District Court, 6 October 2006). The physician was sentenced to 1 year in prison with a 3-year suspended sentence. A third physician was prosecuted for incorrect placement of a percutaneous gastrostomy tube (Sendai Summary Court, 19 January 2006). The tube was placed in the abdominal cavity and not the gastric lumen. The patient died of panperitonitis after receiving nutrient solution. This physician was also fined 300,000 yen (US$ 3000). One reason for the increase in the number of physicians being prosecuted for medical negligence relating to gastroenterology may be the rapid increase in the population of elderly Japanese. For example, the number of patients receiving a percutaneous gastrostomy tube has increased because the number of elderly people with swallowing difficulties has increased rapidly. As a result, the number of tube exchanges has increased, and related accidents have occurred. The number of cancer patients has also increased, and physicians have accidentally given overdoses of anticancer drugs to their patients. Another reason for the increase in criminal prosecutions may be the growing concern in Japan about medical malpractice.
Linking ghost penalty and aggregated unfitted methods In this work, we analyse the links between ghost penalty stabilisation and aggregation-based discrete extension operators for the numerical approximation of elliptic partial differential equations on unfitted meshes. We explore the behavior of ghost penalty methods in the limit as the penalty parameter goes to infinity, which returns a strong version of these methods. We observe that these methods suffer locking in that limit. On the contrary, aggregated finite element spaces are locking-free because they can be expressed as an extension operator from well-posed to ill-posed degrees of freedom. Next, we propose novel ghost penalty methods that penalise the distance between the solution and its aggregation-based discrete extension. These methods are locking-free and converge to aggregated finite element methods in the infinite penalty parameter limit. We include an exhaustive set of numerical experiments in which we compare weak (ghost penalty) and strong (aggregated finite elements) schemes in terms of error quantities, condition numbers and sensitivity with respect to penalty coefficients on different geometries, intersection locations and mesh topologies. of the methods were originally proposed: A bulk penalty term that penalised the distance between the finite element solution in a patch and a polynomial of the order of the approximation; a face penalty term that penalised normal derivatives of the function (up to the order of approximation) on faces that were in touch with cut cells. The face-based GP has been the method of choice in the so-called CutFEM framework. These schemes were originally motivated for C 0 finite element spaces on simplicial meshes and later used in combination with discontinuous Galerkin formulations. The so-called cell aggregation or cell agglomeration techniques are an alternative way to ensure robustness with respect to cut location. This approach is very natural in DG methods, as they can be easily formulated on agglomerated meshes, and the method is robust if each cell (now an aggregate of cells) has enough support in the interior of the domain. However, the application of these ideas to C 0 Lagrangian finite elements is more involved. The question is how to keep C 0 continuity after the agglomeration. This problem was addressed in, and the resulting method was coined AgFEM. AgFEM combines an aggregation strategy with a map that assigns to every geometrical entity (e.g., vertex, edge, face) one of the aggregates containing it. With these two ingredients, AgFEM constructs a discrete extension operator from well-posed degrees of freedom (DOFs) (i.e., the ones related to shape functions with enough support in the domain interior) to ill-posed DOFs (i.e., the ones with small support) that preserves continuity. As a result, the basis functions associated with badly cut cells are removed and the ill-conditioning issues solved. The formulation enjoys good numerical properties, such as stability, condition number bounds, optimal convergence, and continuity with respect to data; detailed mathematical analysis of the method is included in for elliptic problems and in for the Stokes equation. AgFEM is amenable to arbitrarily complex 3D geometries, distributed implementations for large scale problems, error-driven ℎ-adaptivity and parallel tree-based meshes, explicit time-stepping for the wave equation and elliptic interface problems with high contrast. 
In this work, we aim to explore the links between the weak ghost penalty strategy and the strong aggregation-based strategy. In order to do this, we analyse the strong form limit of GP methods. In this process, we discuss the locking phenomenon of current GP methods. Next, we make use of the AgFEM machinery to define new GP formulations that converge to the classical (strong) AgFEM and thus, are locking-free. We propose two alternative expressions of the penalty method. The stabilisation term penalises the distance between the solution and its interpolation onto the AgFEM space. This distance can be expressed using a weighted 2 product or an 1 product. This work is structured as follows. We introduce the geometrical discretisation in Section 2 and the problem statement in Section 3, which include notations and definitions that are required to implement GP and AgFEM. Next, we introduce some GP formulations and analyse their strong limit in Section 4. After that, we present the AgFEM and its underlying discrete extension operator in Section 5. In this section, we propose a new family of GP methods that are a weak version of AgFEM. We comment on the implementation aspects of all the methods in Section 6. In Section 7, we provide a detailed comparison of all these different schemes, in terms of accuracy and condition number bounds, for different values of the penalty parameter, geometries, and intersection locations. We consider Poisson and linear elasticity problems on isotropic and anisotropic meshes. We draw some conclusions in Section 8. The original contributions of the article are: A discussion about the links between strong (AgFEM) and weak (GP) methods for solving the ill-conditioning of C 0 Lagrangian unfitted finite element methods; A discussion about the locking phenomenon of GP strategies in the strong limit and corrective measures; Design and analysis of GP schemes that are a weak versions of AgFEM and locking-free; A thorough numerical experimentation comparing GP methods and strong discrete extension methods in terms of error quantities, condition numbers, sensitivity with respect to penalty coefficients, etc. G Let us consider an open bounded polyhedral Lipschitz domain ⊂ R, being the space dimension, in which we pose our partial differential equation (PDE). Standard FE methods rely on a geometrical discretisation of in terms of a partition of the domain (or an approximation of it). This step involves so-called unstructured mesh generation algorithms. The resulting partition is a body-fitted mesh of the domain. Embedded discretisation techniques alleviate geometrical constraints, because they do not rely on body-fitted meshes. Instead, these techniques make use of a background partition T ℎ of an arbitrary artificial domain art ℎ such that ⊂ art ℎ. The artificial domain can be trivial, e.g., it can be a bounding box of. Thus, the computation of T ℎ is much simpler (and cheaper) than a body-fitted partition of. For simplicity, we assume that T ℎ is conforming, quasi-uniform and shape-regular; we represent with ℎ the diameter of a cell ∈ T ℎ and define the characteristic mesh size ℎ max ∈ T ℎ ℎ. We refer to for the extension to non-conforming meshes. The definition of unfitted FE discretisations requires some geometrical classifications of the cells in the background mesh T ℎ and their n-faces. We use n-face to denote entities in any dimension. E.g., in 3D, 0-faces are vertices, 1-faces are edges, 2-faces are faces and 3-faces are cells. 
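As a minimal illustration of the cell classification used throughout (and detailed below), the following Python sketch labels background cells as interior, cut or exterior from the sign of a level-set function sampled at the mesh vertices; the level-set description is an assumption consistent with the geometries used in the numerical section, and the vertex-sign test is only a first pass, since a boundary may cut a cell without changing the sign at its vertices.

```python
import numpy as np

def classify_cells(vertex_phi, cell_vertices):
    """Label background cells as 'in', 'out' or 'cut' w.r.t. Omega = {phi < 0}.

    vertex_phi    : (n_vertices,) level-set values at background-mesh vertices
    cell_vertices : (n_cells, n_local_vertices) connectivity array
    """
    phi = vertex_phi[cell_vertices]              # per-cell vertex values
    inside = (phi < 0.0).all(axis=1)             # candidate interior cells
    outside = (phi > 0.0).all(axis=1)            # candidate exterior cells
    # Cells with mixed signs are flagged as cut; a robust implementation would
    # also intersect the geometry with cells whose vertices share one sign.
    return np.where(inside, "in", np.where(outside, "out", "cut"))
```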
We use facet to denote an n-face of dimension -1, i.e., an edge in 2D and a face in 3D. The cells in the background partition with null intersection with are exterior cells. The set of exterior cells T out ℎ is not considered in the functional discretisation and can be discarded. T act is the active mesh. E.g., the XFEM relies on a standard FE space on T act ℎ. Unfortunately, the resulting discrete system can be singular (see the discussion below). This problem, a.k.a. small cut cell problem, is due to cells with arbitrarily small support on. This fact motivates the further classification of cells in T act ℎ. Let T in ℎ be the subset of cells in and T cut ℎ the cut cells (see Figure 1). The interior of in, cut}. Some of the methods below require the definition of aggregates. Let us consider T ag ℎ (usually called an agglomerated or aggregated mesh) obtained after a cell aggregation of T act ℎ, such that each cell in T in ℎ only belongs to one aggregate and each aggregate only contains one cell in T in ℎ (the root cell). Based on this definition, cell aggregation only acts on the boundary; interior cells that are not in touch with ill-posed cells remain the same after aggregation. Let T,ag ℎ T ag ℎ \ T in ℎ be the non-trivial aggregates on the boundary (see Figure 1). We refer the interested reader to for the definition of cell aggregation algorithms. The aggregation has been extended to non-conforming meshes in and interface problems in, and its parallel implementation described in. It is essential for convergence to minimise the aggregate size in these algorithms. In particular, the characteristic size of an aggregated cell must be proportional to the one of its root cell. The motivation of the cell aggregation is to end up with a new partition in which all cells (aggregates) have support in away from zero, are connected and are shape-regular. We represent with C # ℎ the (simplicial or hexahedral) exact complex of T # ℎ for # ∈ {act, in, out}, i.e., the set of all n-faces of cells in T # ℎ. C ag ℎ is the subset of n-faces in C act ℎ that lay on the boundaries of aggregates in T ag ℎ. We also define the set of ghost boundary facets F gh,cut ℎ as the facets in C cut ℎ that belong to two active cells in T act ℎ (cut or not). F gh,ag ℎ F gh,cut ℎ \ C ag ℎ is the subset of these facets that do not lay on aggregate boundaries, see Figure 1. P Let us consider first the Poisson equation in with Dirichlet boundary conditions on D ⊂ and Neumann boundary conditions on N \. After scaling with the diffusion term, the equation reads: find ∈ 1 () such that where is the source term, is the prescribed value on the Dirichlet boundary and the prescribed flux on the Neumann boundary. The following presentation readily applies to other second-order elliptic problems. In particular, we will also consider the linear elasticity problem: find ∈ 1 () such that One can consider a more subtle classification as follows. For any cell ∈ T ℎ, we define the quantity | ∩| | |, || being the measure of the set. For a given threshold 0 ∈ (0, 1], we classify cells ∈ T act ℎ as well-posed if ≥ 0 and ill-posed otherwise. The interior-cut classification is recovered for 0 = 1. In any case, the following discussion applies verbatim. Illustration of the main geometrical sets introduced in Sect. 2. where, : → R, are the strain tensor ( ) 1 2 (∇ + ∇ ) and stress tensor ( ) = 2 ( ) + tr( ( ))Id; Id denotes the identity matrix in R. (, ) are the the Lam coefficients. 
We consider the Poisson ratio /(2( + )) is bounded away from 1/2, i.e. the material is compressible. Since = 2 /(1 − 2 ), is bounded above by, i.e. ≤, > 0. The simplification of the geometrical discretisation, in turn, complicates the functional discretisation. Standard (body-fitted) FEMs cannot be straightforwardly used. First, the strong imposition of Dirichlet boundary conditions relies on the fact that the mesh is body-fitted. In an embedded setting, Dirichlet boundary conditions are weakly imposed instead. Second, the cell-wise integration of the FE forms is more complicated; integration must be performed on the intersection between cells and only. Third, naive discretisations can be arbitrarily ill-posed. Let V act ℎ be a standard Lagrangian FE space on T act ℎ. As stated above, we consider a weak imposition of boundary conditions with Nitsche's method. This approach provides a consistent numerical scheme with optimal convergence for arbitrary order FE spaces. According to this, we approximate the Poisson problem in with the Galerkin method: find ℎ ∈ V act ℎ such that ℎ ( ℎ, ℎ ) = ℎ ( ℎ ) for any ℎ ∈ V act ℎ, with with being the outward unit normal on. In the case of the linear elasticity problem in, the bilinear form and linear functional read: We note that the second term in all the forms above are associated with the weak imposition of Dirichlet boundary conditions with Nitsche's method. Stability of Lagrangian FEMs relies, e.g., for the Other weak imposition of boundary conditions involve the penalty method or a non-symmetric version of Nitsche's method. In any case, the penalty formulation is not weakly consistent for higher order methods and the non-symmetric formulation breaks the symmetry of the system and is not adjoint consistent. Poisson problem, on the following property: ∫ for some > 0 independent of ℎ. A value of that satisfies can be computed using a cell-wise eigenvalue problem. In shape-regular body-fitted meshes, one can simply use ℎ −1. Here, =2, witha large enough problemdependent parameter and the order of V act ℎ ; while ℎ is the diameter of. For XFEM, we only have stability over ∇ ℎ in the right-hand side of. In this case, the minimum value of that makes the problem stable tends to infinity in some cell configurations as | ∩ | → 0. As a result, unfitted FEMs like XFEM are not robust to cell cut locations (boundaries or interfaces). Condition number bounds for the resulting linear system are strongly linked to the stability issue commented above. A shape function of the finite element space must satisfy, in at least one cell for + > − > 0 independent of ℎ. This condition can be proved for Lagrangian FEs on body-fitted meshes ( ∩ = ). It implies that the mass matrix is spectrally equivalent to the identity matrix for quasi-uniform and shape-regular partitions. However, it is obvious to check that this condition fails as | ∩ | → 0 since ∫ 2 d → 0. The plain vanilla FEM on the background mesh leads to severely ill-conditioned linear systems. Recent research works have tried to solve this issue at the preconditioner level (see, e.g., ). In the coming sections, we will present methods that solve the previous problems by providing stability over full background mesh cells ∈ T act ℎ, i.e., instead of ∩. With these methods, we can use the same expression of as in body-fitted meshes, ℎ being the background cell size. We will often make abuse of notation, using ℎ to refer to both the scalar unknown in and the vector unknown in, making distinctions where relevant. 
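To make the role of the Nitsche terms concrete, here is a minimal body-fitted sketch in Python for the 1D Poisson problem with P1 elements; it is not the unfitted setting of this work, and the coefficient beta simply stands in for the penalty coefficient written above as a constant times the inverse mesh size. It only shows how the consistency, symmetry and penalty boundary terms enter the matrix and the right-hand side.

```python
import numpy as np

def poisson_1d_nitsche(n, f, g0, g1, beta=10.0):
    """P1 FE for -u'' = f on (0,1) with Dirichlet data g0, g1 imposed weakly
    through symmetric Nitsche terms (consistency + symmetry + penalty)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for k in range(n):                                   # bulk stiffness + load
        A[k:k + 2, k:k + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        b[k:k + 2] += 0.5 * h * f(0.5 * (x[k] + x[k + 1]))   # midpoint rule
    for p, q, g in [(0, 1, g0), (n, n - 1, g1)]:         # the two boundary points
        dn = {p: 1.0 / h, q: -1.0 / h}                   # outward normal derivatives
        tr = {p: 1.0, q: 0.0}                            # boundary traces of the hats
        for i in (p, q):
            for j in (p, q):
                A[i, j] += -dn[j] * tr[i] - dn[i] * tr[j] + beta / h * tr[i] * tr[j]
            b[i] += -dn[i] * g + beta / h * tr[i] * g
    return x, np.linalg.solve(A, b)

# e.g. u(x) = x**2 (so f = -2): x, uh = poisson_1d_nitsche(32, lambda s: -2.0, 0.0, 1.0)
```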
We use (resp. ) to denote ≥ (resp. ≤ ) for some positive constant that does not depend on ℎ and the location of the cell cuts. Uniform bounds irrespectively of boundary or interface locations is the driving motivation behind all these methods. G The ghost penalty formulation was originally proposed in to fix the ill-posedness problems of XFEM discussed above. The method adds a stabilising bilinear form ℎ to the formulation that provides an extended stability while preserving the convergence rates of the body-fitted method. () → ( act ℎ ) be a continuous extension operator. Let V (ℎ) V act ℎ + 2 ( act ℎ ). We endow V (ℎ) with the extended stability norm: A suitable GP term ℎ for problems and is a positive semi-definite symmetric form that satisfies the following properties uniformly w.r.t. the mesh size ℎ of the background mesh and interface intersection: (i) Extended stability: (ii) Continuity: (iii) Weak consistency: Under these conditions, the GP unfitted formulation is well-posed and exhibits optimal convergence rates. The following abstract results have been proved in the literature for specific definitions of ℎ that satisfy Def. 4 . Note that the norm includes control, not only over ∇ ℎ 2 () (which comes from the standard Laplacian term in the formulation), but ∇ ℎ 2 ( act ℎ ). This extra stabilisation in is provided by the GP term and essential for well-posedness independent of the boundary location. Proposition 4.2. Let ℎ satisfy Def. 4.1. It holds: for any ℎ, ℎ ∈ V act ℎ, ∈ V (ℎ). Thus, there is a unique Furthermore, the condition number of the resulting linear system when using a standard Lagrangian basis for V act ℎ holds ℎ −2. Proof. The main ingredient is the following trace inequality for continuous functions on cut cells (see ): where ( ∩ ) is the boundary of ∩. Using a standard discrete inequality on T act ℎ, namely Using, Cauchy-Schwarz and Young inequalities, and the expression for, we can prove. Thus, the GP provides a stronger coercivity result, namely for the same definition of that is needed in body-fitted meshes, i.e., the value of does not depend on the location of the cell cut. Continuity in is obtained invoking, the standard inverse inequality ∇ ℎ 2 ( ) ℎ −1 ℎ 2 ( ) and. The cut-independent condition number bound is a by-product of the enhanced coercivity in. Using the Poincar inequality, one can readily prove that. Using the inverse inequality on the background mesh and one can prove that ℎ V (ℎ) ℎ −1. Finally, one can use the standard bounds for the mass matrix on the background mesh to prove the result (see, e.g., ). Proposition 4.3. Let be the order of V act ℎ. If ∈ +1 (), ≥ ≥ 1, is the solution of or and ℎ ∈ V act ℎ is the solution of, then: Proof. The convergence analysis relies on standard approximation error bounds for E +1 ( ) in V act ℎ and the weak consistency in. In the following, we show some definitions of ℎ with these properties. Bulk ghost penalty (B-GP). The bulk ghost penalty stabilisation was originally proposed in. The idea of this formulation is to add the stabilisation term where () represents the 2 ( ) projection onto the polynomial space P ( ) (for simplicial meshes) or Q ( ) (for hexahedral meshes). > 0 is a numerical parameter that must be chosen. This method penalises the difference between the restriction of the FE solution in the aggregate and a local polynomial space on the aggregate. Since the aggregate-wise polynomial spaces have enough support in by construction, it provides stability on ill-posed cells. Face ghost penalty (F-GP). 
The most popular ghost stabilisation technique relies on the penalisation of inter-element normal derivatives on the facets in F gh,cut ℎ, where stands for the FE space order. This is the stabilisation that is being used in the so-called CutFEM framework. Let us consider linear FEs in the following exposition. Given a face ∈ F gh,cut ℎ and the two cells and sharing this face, we define ∇ | + ∇ |. ℎ is some average of ℎ and ℎ. We define the GP stabilisation term: The method weakly enforces C continuity across cells as soon as one of the cells is cut. This weak constraint of cut cells also provides the desired extended stability described above. is a suitable GP stabilisation that satisfies the conditions - in Def. 4.1. Proposition 4.5. The expression of ℎ in Proof. The analysis of the F-GP stabilisation can be found in [ 4.3. The limit → ∞. The GP method underlying idea is to rigidise the shape functions on the boundary zone. One question that arises is: what happens when the penalty parameter → ∞? Can we use the previous methods in a strong way? Can we build spaces that exactly satisfy the constraints in that limit? Let us consider the F-GP method first. In the limit → ∞, the stabilisation term in that case enforces full C continuity of the FE solution on F gh,cut ℎ. The only functions in V act ℎ that satisfy this continuity are global polynomials in cut ℎ. As a result, the method is not an accurate method that converges to the exact solution as ℎ → 0. Let us look at the B-GP now. In the limit → ∞, the solution must be a piecewise C 0 polynomial on the aggregated mesh T ag ℎ. The limit constraint is weaker than the one for F-GP. However, it is still too much to eliminate the locking phenomenon in general. In order to satisfy exactly the penalty term, the aggregate values are polynomial extensions of the root cell polynomials. In general, these extensions do not satisfy C 0 continuity (Aggregates are not tetrahedra or hexahedra anymore, but an aggregation of these polytopes that can have more general shapes, e.g., L-shaped.) As a result, the B-GP term is not just constraining the values of the ill-posed cell (as one could expect) but the ones of the well-posed cells too. Furthermore, these constraints are not local but can propagate across multiple aggregates, depending on the mesh topology. The previous observations motivate a slight improvement of F-GP (CutFEM). Instead of considering the face penalty on all faces in F gh,cut ℎ, one can only penalise the subset of intra-aggregate faces F gh,ag ℎ : It is easy to check that this variant of the F-GP method, referred to as intra-aggregate ghost penalty (A-GP), has the same behaviour in the limit as the bulk one, i.e., it is still affected by locking. The analysis in this case readily follows from Prop. 4.5. A The natural question that arises from this discussion is: how can we define a strong version of the GP stabilisation that is free of the locking described above? The answer to this question was the AgFEM proposed in. The underlying idea of this method is to define a new FE space that can be expressed in terms of an aggregate-wise discrete extension operator E Since V act ℎ is a nodal Lagrangian FE space, there is a one-to-one map between shape functions, nodes and DOFs. For each node in the mesh, we can define its owner as the lowest dimensional n-face (e.g., vertex, edge, face, cell) that contains it, to create a map O dof→nf ℎ. We define the set of ill-posed DOFs as the ones owned by cut/external n-faces, i.e., C cut ℎ \ C in ℎ. 
The rationale for this definition is the fact that the shape functions associated to these DOFs are the ones that can have an arbitrarily small support on. The definition of the discrete extension operator in the AgFEM requires to define an ownership map O nf→ag ℎ,ag ℎ from ill-posed cut/external n-faces to aggregates. This mapping is not unique for inter-aggregate DOFs and can be arbitrarily chosen. On the other hand, each aggregate has a unique root cell in T in ℎ. Composing all these maps, we end up with an ill-posed to root cell map i.e., the map that returns the DOFs owned by an n-face in C act ℎ. We also need a closed version O nf→dof ℎ of this ownership map, which given an n-face in C act ℎ returns the owned DOFs of all n-faces ∈ C act ℎ in the closure of, is the map that describes the locality of FE methods. The only DOFs that are active in a cell ∈ T act ℎ are the ones in O nf→dof ℎ ( ). Analogously, only the shape functions associated to these DOFs have support on. We are in position to define the discrete extension operator as follows. An ill-posed DOF in C cut ℎ \ C in ℎ is computed as a linear combination of the well-posed DOFs in the closure of the root cell that owns it. Using the notation introduced so far, these DOFs are the ones in O We can readily check that this expression extends the well-posed DOF values on interior cells to the ill-posed DOF values that only belong to cut cells. The application of to a FE function in V in ℎ provides the sought-after discrete extension operator E ag ℎ and the AgFE space V ag ℎ. The implementation of AgFEM simply requires the imposition of the constraints in. These constraints are cell-local (much simpler than the ones in ℎ-adaptive mesh refinement). On the other hand, the method does not require any modification of the forms in -. Instead, it is the FE space the one that changes. The method reads: find ℎ ∈ V ag ℎ such that ( ℎ, ℎ ) = ( ℎ ) for any ℎ ∈ V ag ℎ. The key properties of the AgFEM are: It is also important to note that, by construction, V ag ℎ ( ) for ∈ T ag ℎ is a subspace of P ( ) (for simplices) of Q ( ) for hexahedra, solving the issues in the → ∞ limit of F-GP and B-GP. As a result, the method preserves the convergence properties of V act ℎ. We summarise these observations in the following definition. ℎ →⊂ V act ℎ must satisfy the following properties: (ii) Approximability: The image V It holds: for any ℎ, ℎ ∈ V ag ℎ, ∈ V ag (ℎ). Thus, there is a unique Furthermore, the condition number of the resulting linear system when using a standard Lagrangian basis for V in ℎ holds ℎ −2. Proof. In order to prove, the Nitsche terms can readily be bounded as in Prop. 4.2 using. The condition number proof follows the same lines as the one for the GP in Prop. 4.2 (see ). Proposition 5.3. Let be the order of V act ℎ. If ∈ +1 (), ≥ ≥ 1, is the solution of or and ℎ ∈ V ag ℎ is the solution of, then: Proof. The convergence analysis is standard and relies on the approximability property. 5.1. Ghost penalty with discrete extension. In this section, we propose a novel scheme, which combines the discrete extension operator defined above for the AgFEM and the GP stabilisation ideas. This development is motivated by the locking phenomenon of GP methods as → ∞. The idea is to penalise the distance between the solution in the (unconstrained) V act ℎ space and the one in V ag ℎ. 
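The cell-wise constraints above can also be read algebraically: collecting their coefficients into a sparse extension matrix C that maps well-posed DOF values to all active DOF values, the AgFEM problem becomes a Galerkin reduction of the unconstrained system. The sketch below assumes such a C has already been built (by evaluating root-cell shape functions at the ill-posed nodes); the names are ours and not taken from the Gridap implementation.

```python
import scipy.sparse.linalg as spla

def solve_aggregated(A_act, b_act, C):
    """Solve on the aggregated space spanned by the columns of C.

    A_act, b_act : sparse matrix / vector assembled on the full active space V_act
    C            : sparse (n_act x n_wellposed) matrix with identity rows for
                   well-posed DOFs and constraint coefficients for ill-posed DOFs
    Returns the full active-DOF vector of the AgFEM solution.
    """
    A_ag = (C.T @ A_act @ C).tocsc()       # reduced (well-posed) system
    u_well = spla.spsolve(A_ag, C.T @ b_act)
    return C @ u_well                      # extend back to all active DOFs
```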
In Section 5, we have defined the discrete extension operator E Next, we define the following stabilisation term: where * can be chosen to be or \ (i.e., only adding stability in the exterior part). We refer to method as weak AgFEM by 2 product (W-Ag-2 ). We note that this term can be computed cell-wise, in the spirit of so-called local projection stabilisation methods. On the other hand, the term vanishes at the root cell of the aggregate by construction, i.e., it is only active on cut ℎ. Alternatively, one could consider an 2 -projection at each aggregate, as in the B-GP. It is also obvious to check that this method converges to the AgFEM in the limit → ∞. Alternatively, we can consider a stabilisation that relies on the 1 product: that is, a weak AgFEM by 1 product (W-Ag-∇), where again * can be or \. Its implementation is as easy as the previous one, but does not require to compute a characteristic mesh size ℎ. Proof. Let us start proving the extended stability as follows. Using the stability of the discrete extension operator and the triangle inequality, we get:, for any ℎ ∈ V act ℎ. The result for ℎ in can be obtained by bounding the second term in using the quasi-uniformity of the background mesh T act ℎ, the standard inverse inequality in T act ℎ and the assumption that aggregate size ℎ is proportional to the root cell size times a constant that does not depend on ℎ. For weak consistency, we need to extend the definition of ℎ for ∈ +1 ( act ℎ ). To this end, we invoke the standard Scott-Zhang interpolant sz ℎ : +1 ( act ℎ ) −→ V act ℎ, using the definition in. We letP ℎ is a projection. Hence, weak consistency of the proposed methods can readily be proved using the optimal approximation properties of V ag ℎ in. for ℎ in and for ℎ in. The proof of continuity relies on the 1 -stability of the Scott-Zhang interpolant and the discrete extension operator in. We observe that, for * \, W-Ag-∇ in can also be understood as an improvement of the so-called finite cell method that is both convergent and robust. Instead of substracting a projection, the finite cell method adds a grad-grad term on act ℎ \, premultiplied by a numerical parameter. When the parameter is large, convergence is deteriorated. When the parameter is small, robustness is affected. Subtracting the projection, these problems are solved, since the resulting method enjoys weak consistency. I The implementation of the previous methods have some non-standard requirements that are not present in body-fitted FE codes. First, there are some geometrical functions that have to be implemented. All methods, but the F-GP, require the computation of an aggregated mesh with the properties described in Sec. 2. Furthermore, all the GP methods, but the original bulk penalty in, require the -face to aggregate map described in Sec. 5. Besides, the GP method needs to identify the set of faces that are in touch with cut cells and intersect the domain. In any case, the implementation of all these steps is quite straightforward and can be easily implemented in distributed memory machines (see ). The implementation of F-GP involves jumps of normal derivatives up to the order of the FE space. This functionality is uncommon in FE software (see, e.g., ). 
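In matrix form, the W-Ag stabilisation terms defined above admit the following sketch, where E denotes an n_act x n_act matrix representation of the aggregation-based interpolation (identity rows on well-posed DOFs), M and K are mass and stiffness matrices assembled on the chosen region, and the exact scaling in h for the L2 variant is an assumption, since the factor is not legible in the text.

```python
import scipy.sparse as sp

def weak_ag_penalty_L2(M_star, E, beta, h):
    """Algebraic sketch of the W-Ag-L2 term ~ beta/h^2 * (u - Eu, v - Ev).

    M_star : mass matrix on the chosen region, assembled on V_act (n_act x n_act)
    E      : sparse n_act x n_act aggregation-based interpolation matrix
    The h-scaling is assumed; it vanishes on root cells because E is the identity
    on well-posed DOFs.
    """
    D = sp.identity(E.shape[0], format="csr") - E   # distance to the AgFE space
    return (beta / h**2) * (D.T @ M_star @ D)

def weak_ag_penalty_grad(K_star, E, beta):
    """W-Ag-grad variant: same structure with a stiffness matrix and no h factor."""
    D = sp.identity(E.shape[0], format="csr") - E
    return beta * (D.T @ K_star @ D)
```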
The implementation of the B-GP methods requires the computation of aggregate-wise 2 projections, which is a non-obvious task in FE software, since it breaks the standard procedure, namely to compute cell-local matrices, assemble the local matrices in a global sparse array and solve a global linear system. In order to avoid this implementation issue, we can consider an interpolator in the spirit of W-Ag-2. We replace the aggregate-wise 2 projection in by the interpolation of the FE function on the root cell over the aggregate. Let V #,− ℎ be the DG counterpart of V # ℎ, for # ∈ {in, cut}. We can define the DG discrete extension operator E ag,− ℎ using the definition above for the DG spaces. We note that the construction of the extension is much simpler in this case, since all DOFs belong to cells. Since V in ℎ ⊂ V in,− ℎ, we readily obtain E. Now, we can define an alternative B-GP stabilisation with interpolation (B-GP-i) as follows: The proof for the bulk penalty method readily applies for this version, since the aggregate-wise interpolant share the required continuity and error bounds of the aggregate-wise 2 projection. On the other hand, it is penalising the component that does not belong to the same space, thus it has the same locking problems. Since the interpolant-based version simplifies the implementation, this is the one to be used in Sect. 7. The GP and AgFE methods make use of an ill-posed to well-posed DOF that can be readily computed with the geometrical machinery described above and the local-to-global DOF map. These methods require the implementation of a discrete extension operator. In general, it can be stored as the set of linear constraints in that can be implemented cell-wise (traversing all root cells). The computation of this constraints is much simpler than in ℎ and ℎ -adaptivity (see, e.g., ). It requires to evaluate the shape functions of the root cell in the nodes of the ill-posed DOFs (see ). In most FE codes, it requires the implementation of the geometrical map inverse, in order to map points from the physical to the reference space, which is not standard in general. This problem is trivial for Cartesian background meshes, which is arguably the most natural and efficient choice. For the (strong) AgFEM, one must impose these constraints in the assembly process. This is readily available (for more complex constraints) in any code that allows adaptive mesh refinement with hanging nodes (see, e.g., ). For the novel weak versions W-Ag-2 and W-Ag-∇, the assembly process does not involve the imposition of constraints. However, the local matrix in cut cells does not only include the local DOFs but also the local DOFs of the root cell. E.g., for W-Ag-∇, we have to assemble the following entries: and the transpose of the second term, for any ∈ T cut ℎ. 7. N 7.1. Methods and parameter space. We collect in Table 1 all the numerical methods that will be analysed in this work, in terms of the stabilisation term ℎ and the FE space being used. We consider first the F-GP method defined in. We include the expression of ℎ for linear elements. Higher order approximations would require to evaluate inter-facet jumps of higher order derivatives that are not available in the numerical framework Gridap. We also consider the modification of this method, that only penalises intra-aggregate facets (A-GP). As indicated in Sect. 6, we consider the interpolation-based B-GP-i method. 
We also analyse the results for both versions of the weak AgFEM: W-Ag-2 and W-Ag-∇, i.e., using the 2 product and 1 product. In both cases, we set * =. All these methods make use of the standard FE space V act ℎ on the active mesh T act ℎ. Finally, we compare all these stabilised formulations against the strong version of AgFEM (S-Ag) in Sect. 5, which makes use of the FE space V ag ℎ. We show in Table 2 the parameter space being explored. We have reduced, as much as possible, this space, due to length constraints. We solve the Poisson problem and linear elasticity discrete problems for two different sets of boundary conditions, namely weak imposition of Dirichlet boundary conditions on the whole boundary and Neumann and strong Dirichlet boundary conditions. This way, we can analyse the behaviour of the methods with respect to Nitsche terms and Neumann boundary conditions separately. To compute errors analytically, we use polynomial manufactured solutions of the form (, ) = ( + ) +1 in 2D and (,, ) = ( + + ) +1 in 3D, with the order of the FE space. We have observed that the use of simplicial and hexahedral meshes leads to similar conclusions. So, we only show results for simplicial meshes below. We have considered the square in 2D and the cube, a tilted cube, the sphere and the 8-norm sphere in 3D. All geometries are represented in Figure 2. They are centred at the origin of coordinates. For weak Dirichlet tests, we simulate the whole geometry; while only the region in the first quadrant/octant for mixed boundary conditions. We have configured all geometries, in a similar way as in, to ensure that the small cut cell problem is present in all cases with min ∈ T act ℎ of the order of 10 −8. In the case of the square, cube and 8-norm sphere, we also enforce sliver cuts. We have considered both linear and quadratic C 0 Lagrangian FE spaces. We pay particular attention to the penalty parameter ; as we will see, some methods are highly sensitive to this value. 7.2. Experimental environment. All the algorithms have been implemented in the Gridap open-source scientific software project. Gridap is a novel framework for the implementation of grid-based algorithms for the discretisation of PDEs written in the Julia programming language. Gridap has a user interface that resembles the whiteboard mathematical statement of the problem. The framework leverages the Julia just-in-time (JIT) compiler to generate high-performant code. Gridap is extensible and modular and has many available plugins. In particular, we have extensively used and extended the GridapEmbedded plugin, which provides all the mesh queries required in the implementation of the embedded methods under consideration, level set surface descriptions and constructive solid geometry. We use the cond() method provided by Julia to compute condition numbers. Condition numbers have been computed in the 1-norm for efficiency reasons. It was not possible to compute the 2-norm condition number for all cases with the available computational resources. The numerical experiments have been carried out at the TITANI cluster of the Universitat Politcnica de Catalunya (Barcelona, Spain), and NCI-Gadi, hosted by the Australian National Computational Infrastructure Agency (NCI). NCI-Gadi is a petascale machine with 3024 nodes, each containing 2x 24-core Intel Xeon Scalable Cascade Lake processors and 192 GB of RAM. All nodes are interconnected via Mellanox Technologies' latest generation HDR InfiniBand technology. 7.3. Sensitivity analysis. 
We consider first log-log plots. They show values of the penalty parameter = 10 − for ∈ {−2, −1, 0, 1, 2, 4, 6, 8}, in the X-axis, and the value of the condition number (in 1-norm) ( ) 2D square and 3D cube on plane = 0 ( ) 3D tilted cube on plane = 0 ( ) 3D sphere on plane = 0 ( ) 3D 8-norm sphere on plane = 0 F 2. Geometries considered in the sensitivity analysis. We superpose different configurations of each geometry, with increasing opacity, to expose how we have shrinked the geometries to force cut volumes of the order of 10 −8. We refer the reader to for the level set descriptions of 2a, 2b and 2d. and the 2 and 1 errors, in the Y-axis. These plots allow us to easily compare the behaviour of the different methods and observe their dependency with respect to the penalty parameter. 7.3.1. 2D results. We show in Figure 3 the plots corresponding to the 2D square using both linear and quadratic FEs for the solution of the Poisson problem with weak imposition of Dirichlet boundary conditions. Let us consider first the condition number plots. For linear elements, we observe that all weak methods have the same behaviour. In all cases, the minimum is attained at = 1. The condition number increases as → 0. The stabilisation is not enough to fix the small cut cell problem effect on the condition number. The condition number linearly increases as → ∞. This phenomenon is well-understood. The minimum eigenvalue does not grow with, since eigenvectors can be in the kernel of the penalty term, i.e., they can belong to a subspace that cancels the penalty term. On the contrary, the maximum eigenvalue linearly increases with. The results for second order elements are similar, even though there is a more erratic behaviour for values of below 1. The condition number of the strong AgFEM (S-Ag) is constant (the method does not depend on ) and is below weak schemes in all cases. Now, we analyse the 2 and 1 error of the methods. For linear elements, we can observe the locking phenomenon predicted in Sect. 4.3 in the limit → ∞. We can observe that we have three types of weak methods. The worst method is F-GP, because it is the one that rigidises on all ghost skeleton facets (see Sect. 4.3). The methods that only introduce intra-aggregate rigidisation, i.e., the weaker facet-stabilisation A-GP and the bulk-based method B-GP-i, show slightly lower errors, but still exhibit locking and are not convergent methods in the limit. It is more serious for practical purposes the strong sensitivity of these methods to. The behavior is highly erratic. For linear elements, the best result is for = 10 −2, but this is not the case for quadratic elements. Besides, F-GP, A-GP and B-GP-i are worse than W-Ag-2, W-Ag-∇ and S-Ag in all cases but = 1, where B-GP-i has a similar behavior as the Ag-based methods. The novel weak versions of AgFEM are far less sensitive to. The methods can exhibit some instability as → 0, since the stabilisation is not enough. However, the methods are very robust for ≥ 1. On the other hand, as stated in Sect. 5.1, W-Ag-* show the same error as S-Ag, indicating that these algorithms do converge to S-Ag in the limit → ∞. Thus, these methods are not affected by any locking in this limit. Before reporting the 3D results, we take a closer look at the locking phenomenon. In Figure 4, we consider plots of the solution in the square for = 10000, using linear elements in a 20 20 structured triangular mesh. As expected, the F-GP solution is affected by severe locking and completely wrong. 
Returning to Figure 4, the A-GP and B-GP-i solutions are practically indistinguishable from each other; they are still polluted by locking, which constrains the values of interior DOFs away from the analytical solution. Finally, the W-Ag-* solutions are completely free of locking and coincide with the S-Ag solution.

7.3.2. 3D results. We show in Figure 5 the same kind of plots for 3D geometries. In this test, we consider three different geometries, in order to evaluate whether the geometry affects the behaviour of the different methods. We have considered the cube, a tilted cube and a sphere. We consider linear elements only. We observe essentially the same behaviour as in 2D. The results for all methods are quite insensitive to the geometry. The sphere is an easier geometry in general, since it is smoother and does not involve corners. However, the results are very similar to the ones for the cube and tilted cube. The only difference would be the slightly better results of the weak methods for penalty values below 1. In any case, the more important observations clearly apply. F-GP, A-GP and B-GP-i are very sensitive to the penalty parameter and exhibit locking. The W-Ag-* methods are very insensitive to it. The S-Ag method is close to the optimal value in most situations, while being parameter-free.

We now consider the linear elasticity problem with Neumann boundary conditions, using both linear and quadratic elements on the 3D cube. In this case, the W-Ag-* errors are totally independent of the penalty, also for low penalty values. The condition number plots show the same behaviour as above; for the largest penalty values the problem is so ill-conditioned that the solution is severely affected by rounding errors, but such a value is included for presentation purposes and is certainly not a value to be used in practice. The F-GP, A-GP and B-GP-i methods show the same problems discussed above. S-Ag exhibits optimal condition numbers and errors in all cases. The results are very similar for linear and quadratic FEs. The only difference is that the condition number of S-Ag is about one order of magnitude lower than that of the weak methods. In Figure 7, we analyse the behaviour of the methods for linear elasticity and Neumann boundary conditions on the sphere for both linear and quadratic FEs. Again, the W-Ag-* methods turn out to be very insensitive to the penalty parameter. F-GP, A-GP and B-GP-i show the same bad behaviour indicated in the previous examples. The main difference with respect to the above is the fact that the condition number of S-Ag for quadratic elements is about one order of magnitude larger than the minimum value obtained with the weak schemes.

7.4. Results under refinement. In this section, we analyse results in a more standard way. We consider the variation of the condition number and of the L2 and H1 errors as we refine the mesh. We show the results for three different penalty values, {1, 100, 10^8}. In Figure 8, we consider the Poisson problem with Nitsche's method for linear elements on the cube. For the condition number bounds, we observe that the weak methods are in some cases still in a pre-asymptotic regime and in all cases above S-Ag. The loss of convergence of F-GP with increasing penalty values is very clear. As indicated above, the results for A-GP and B-GP-i are slightly better, but they still show the loss of convergence in this limit. W-Ag-* and S-Ag practically show the same convergence behaviour in all cases. We show in Figure 9 the same plots for elasticity, Neumann boundary conditions and the 8-norm sphere using quadratic elements. Despite all the differences in the numerical setup, the results are strikingly similar and the same conclusions apply.
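A small, hedged helper for this kind of refinement study: it estimates empirical convergence orders from errors on successively refined meshes, to be compared against the theoretical slopes quoted in the figures (e.g., h^{k+1} in L2 and h^k in H1). The error values below are illustrative placeholders, not the paper's data:

```julia
# Empirical order between consecutive refinements: log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
hs  = [1/10, 1/20, 1/40, 1/80]                 # mesh sizes (illustrative)
el2 = [2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4]        # hypothetical L2 errors for linear elements

orders = [log(el2[i] / el2[i+1]) / log(hs[i] / hs[i+1]) for i in 1:length(hs)-1]
println(round.(orders; digits = 2))            # ≈ 2, the expected L2 rate for k = 1
```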
Figure 9. 1-norm condition number of the system matrix and L2(Ω) and H1(Ω) error norms of u − u_h vs. mesh size h for the elasticity problem with Neumann boundary conditions on the 8-norm sphere using quadratic elements. The dashed lines indicate the theoretical bounds, i.e., h^{-2} for the condition number, h^3 for the L2 error and h^2 for the H1 error.

Conclusions. In this work, we have analysed the link between weak ghost penalty methods, which rely on a stabilisation term, and strong AgFEMs, which rely on discrete extension operators for the definition of the FE spaces. When comparing these two families of methods, we observe that standard GP formulations, e.g., the one being used in the CutFEM method, do not lead to acceptable strong versions. The problem is that the kernel of the penalty term does not enjoy the right approximability properties. The stabilisation rigidises the solution too much and exhibits a locking phenomenon as the penalty parameter increases. As a result, we propose novel GP methods that explicitly use the discrete extension operator in AgFEM. The idea is to penalise the distance between the standard FE space and the AgFE space. The penalty term has a local stabilisation structure in which the distance between these two spaces is expressed in terms of the difference between the solution and the discrete extension of its value on interior cells. The kernel of the penalty term is the AgFE space. Thus, these methods converge to the strong AgFEM as the penalty parameter increases and are locking-free. We have carried out a thorough numerical analysis, in which we have compared all the methods with respect to the condition number and the L2 and H1 errors. We have considered linear and quadratic FEs, Poisson and linear elasticity problems, weak imposition of Dirichlet conditions using Nitsche's method and Neumann boundary conditions, and a set of geometries in 2D and 3D. Due to the problems commented on above, standard GP formulations are strongly sensitive to the penalty parameter, losing convergence properties for moderate to large values of this parameter. On the contrary, the new GP methods that are weak versions of AgFEM are much less sensitive to the penalty parameter and their convergence is preserved in all cases. They are a good alternative to the strong formulation of AgFEM and are systematically superior to existing GP formulations.
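To make the penalty structure just described concrete, here is a hedged sketch of the two weak-AgFEM stabilisation terms; the notation (E_h for the discrete extension operator onto the aggregated space, Ω_cut for the union of cut cells, β for the penalty parameter) and the omission of any mesh-dependent scaling are assumptions of this sketch, not necessarily the paper's exact definitions:

```latex
% Hedged sketch: L2-product (W-Ag-2) and gradient (W-Ag-nabla) versions of the penalty term
s_h^{L^2}(u_h,v_h) = \beta \int_{\Omega_{\mathrm{cut}}} \bigl(u_h - E_h u_h\bigr)\,\bigl(v_h - E_h v_h\bigr)\,\mathrm{d}x,
\qquad
s_h^{\nabla}(u_h,v_h) = \beta \int_{\Omega_{\mathrm{cut}}} \nabla\bigl(u_h - E_h u_h\bigr)\cdot\nabla\bigl(v_h - E_h v_h\bigr)\,\mathrm{d}x .
```

Since u_h = E_h u_h precisely when u_h already lies in the AgFE space, the kernel of either term is the AgFE space, which is the property that makes these methods converge to S-Ag and remain locking-free as the penalty grows.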
According to the California Department of Fish and Wildlife, mink are uncommon permanent residents of California, mostly in the northern part of the state. They are semi-aquatic and nocturnal carnivores that hunt small prey such as frogs, mice and crayfish.
Roberts said the mink is one of many species that have taken up residence on the 4,200 acres owned by the conservancy, which manages land set aside to compensate for habitat lost to development in the Natomas basin. Much of the land is flooded to create wetlands that approximate what once existed in Natomas, where seasonal flooding was the historical norm.
The conservancy has been able to keep its lands wet despite the drought by pumping groundwater from 11 wells drilled on its property since 1999. |
Where: Chapters Rideau, 47 Rideau St.
Chelsea writer David Sachs has published a new book of short stories inspired by Tragically Hip songs. Twisted: Illustrated Stories Inspired by Hip Songs will be available at Chapters locations in Ontario, and select flagship stores across the country. A portion of proceeds are to be donated to the Gord Downie Fund for Brain Cancer Research. Sachs, a 43-year-old husband and father of two, spoke to Lynn Saxberg about the project.
Q: Why did you start writing fiction inspired by Hip songs?
A: It was really an accidental idea I had 15 years ago. I was driving around and heard Locked in the Trunk of a Car on the radio, and this story came to me while I was listening, so I went home and wrote it. Then I started to think, you know, a lot of the Tragically Hip’s songs evoke stories to me. Not the songs that tell a story explicitly but the ones that hint at stories. So it became a small hobby I had. Over the course of 15 years I wrote about nine stories.
Q: What made you decide to self-publish a book?
A: Last summer, when they had their last tour, I thought it would be a nice tribute and hit upon the idea of making it a gift book. The stories are very Canadian, and of course, there’s that feeling in the Hip’s songs, so I added some photos of Canada. It’s a mix of short stories, Hip photos and Canadiana photos.
Q: Is this the sort of writing you usually do?
A: I’ve done a lot of different things. My first novel came out a couple of years ago, and I’ve done freelance writing on and off. I spent about eight years as a communications consultant and then I got into commercial real estate so I’m able to write what I want to write without having to worry about making money from it. I was able to devote the time, and invest some money in it. As hobbies go, it’s cheaper than skiing (laughs).
Q: You’re a fan of the band?
A: I’ve been a Hip fan since they first broke out, around ’88 or ’89. I had a summer job one year where we listened to Fully Completely all the time. It’s still my favourite album. Four of the stories are from Fully Completely.
Q: When did you last see the band perform?
A: I saw them the summer before last at Bluesfest, and I was blown away at how theatrical Gord (Downie) still was. I remembered that from when he was young but I hadn’t seen them in some years at that point. I was mesmerized watching him.
Q: What speaks to you about their music?
A: I have a theory about it now, after having written this and thinking a lot about why the Tragically Hip. Why are they telling me stories, not the Beatles or other bands? I think it has to do with how they have this kind of expressionistic quality in their lyrics where they give little snippets of things to make you feel something. They have this emotional nuance that other bands don’t have. Bob Dylan, maybe, has similarly abstract lyrics. The difference is, with the Hip, it’s more like scenes cut from something and you can’t quite figure out what they’re cut from, but there’s a story there that you can interpret in different ways. You can actually come up with an infinite number of stories from one song.
Q: So you’re filling in the holes?
A: Exactly. And that’s why I think Gord is able to do his interpretation. He’s doing performance art while listening to the song. He’s interpreting it in different ways. I think there is something about their music that lends itself to interpretation.
Q: Did you try to find out what inspired the songs?
A: No, I didn’t want to know what they’re about. I didn’t do any research into any of the songs. I just wanted to write what I felt it was about. I listened to the songs on repeat while I was writing so that it would carry some of the emotional feeling of the song, and the lyrics would find their way into the story. A word here, a phrase there.
Q: How do you describe your stories?
A: It’s a wide range. Some are nostalgic, some comic, some sad. There’s one historic adventure about Étienne Brûlé just before he was killed. Another one takes place at a high school cottage party outside Winnipeg. One is a scary thriller, a two-man psychodrama. That’s inspired by Scared, one of my favourite songs. I have no idea what Gord meant with that song so I made it about a fear consultant.
Q: What’s the most recent?
A: Put it Off, from Trouble at the Henhouse. It was really the emotional tone of the music, more than the lyrics, that came out. I listened to it and listened to it until the story came, and it’s about the regret of putting things off in life. |
Freilichtbühne Mülheim an der Ruhr
The Freilichtbühne Mülheim an der Ruhr (Mülheim an der Ruhr Open-Air Theatre) is an open-air amphitheatre in North Rhine-Westphalia, Germany, built in 1936 as a Nazi Thingplatz. It is the most important open-air theatre in the Rhine-Ruhr region and with 20,000 seats, one of the largest in Germany.
Location
The theatre is only a few hundred metres from the former centre of Mülheim, now known as the Kirchenhügel (church hill). It was constructed in a former quarry across the street from the old city cemetery, and is within a city park. It has both a large and a small stage.
Construction, use in the Third Reich
In the early 1930s, the parks director of Mülheim, Fritz Keßler, was seeking a way to make the former quarry into a park instead of letting it lapse into a dump. As a result of the Great Depression, there was not enough money to construct the park until the Volunteer Labour Service made it an unemployment relief project, which reduced the projected cost to the city from 29,800 to 14,100 RM. However, in May 1933 funds ran out and the project was left unfinished. To make up for the loss of funds, Keßler arranged for the footpaths to be finished by city relief recipients, who received extra money each month in payment.
Later the same year, the project was taken over by the Reich Labour Service and the open-air theatre was added to the plans in the context of the Thingspiel movement. The theatre was designed by the city itself, and work directed by Erich Schulzke of the parks department. The park opened three years later, the theatre being ceremonially opened with a performance of Shakespeare's A Midsummer Night's Dream on the small stage on 28 June 1936, a Sunday evening in midsummer. The enthusiasm for the production was strong, and the limit of 3,000 tickets necessitated a repeat performance, scheduled for the following night.
The open-air theatres built under the Third Reich as part of the Thingspiel movement were used for community-building and political events as well as traditional theatre. The Mülheim theatre was used, for example, for inductions into the League of German Girls. During wartime, there were no performances and people stripped the theatre of its wooden benches to use as firewood.
Modern use
After the war, the theatre was eventually repaired and reopened on 30 June 1954 with a performance of Bizet's opera Carmen that drew 2,300 people: standing-room-only tickets were sold for 50 pfennigs. From then until 1965, it was used for a total of 56 performances of operas, operettas and dramas. It then fell into disuse until summer 1971, when the only Karl May Festival in Mülheim took place as an addition to the festival in Bad Segeberg and Old Shatterhand und Winnetou - Geheimnis der Bonanza (Old Shatterhand and Winnetou - Secret of the Bonanza) was staged there.
In 2000, the newly founded "Freunde der Europa-Freilichtbühne Mülheim" (Friends of the Mülheim Europe Open-Air Theatre), the city and other cultural organisations decided to use the theatre to present a broad spectrum of entertainment including concerts and theatre of all kinds. In 2003 they were joined by Regler Produktion, which has since produced numerous events there. Improvements made in 2006 and 2008 include provision for permanent refreshment service. In 2006, events associated with the World Cup attracted a new audience, and since 2007 Regler have provided acoustic performances on summer Wednesday evenings under the title "Sunset Folks". At those and other Regler events at the theatre, the proceeds from sponsorships and refreshment sales are put back into the theatre and its equipment and the performers are paid by "passing the hat".
The city leases the theatre to the Friends of the Theatre organisation, which changed its name in 2009 from "Verein der Freunde der Europa-Freilichtbühne in Mülheim an der Ruhr e. V." to "Freunde der Freilichtbühne Mülheim an der Ruhr e. V." (Friends of the Mülheim an der Ruhr Open-Air Theatre"). As of February 2013, however, the possibility is being discussed of Regler leasing it autonomously rather than working with them.
Since 2012 Regler, the city culture office, the Theater an der Ruhr, the Ringlokschuppen (a former roundhouse also used for performances) and the Department for Children, Youth and Schools have together organised the Mülheimer Ruhrsommer (Mülheim Ruhr Summer). |
1. Field of the Invention
The present invention relates to a plasma display panel, and more particularly to a method of fabricating electrodes of a plasma display panel using a photo-peeling method, which can make the electrodes highly precise in correspondence with high resolution. Further, the present invention relates to a method of fabricating electrodes of a plasma display panel using a photo-peeling method that is environment-friendly, with which it is easy to recycle materials, and that is capable of reducing cost when forming the electrodes of the plasma display panel.
2. Description of the Related Art
A plasma display panel (hereinafter, PDP) displays a picture by exciting phosphors to emit light with the ultraviolet rays generated when an inert mixture gas such as He+Xe, Ne+Xe, or He+Xe+Ne discharges electricity. The PDP not only can easily be made into a thin, high-definition, large-scale screen, but its quality has also improved owing to recent technology development.
Referring to FIG. 1, a discharge cell of a three-electrode AC surface discharge PDP includes a pair of sustain electrodes having a scan electrode Y and a sustain electrode Z formed on an upper substrate 1, and an address electrode X formed on a lower substrate 2 crossing the sustain electrode pair perpendicularly. Each of the scan electrode Y and the sustain electrode Z includes a transparent electrode and a metal bus electrode formed on top of it. An upper dielectric substance 6 and an MgO protective layer 7 are deposited on the upper substrate 1 provided with the scan electrode Y and the sustain electrode Z. A lower dielectric layer 4 is formed on the lower substrate 2 provided with the address electrode X, to cover the address electrode X. Barrier ribs 3 are perpendicularly formed on the lower dielectric layer 4. Phosphor 5 is formed on the surface of the lower dielectric layer 4 and the barrier ribs 3. An inert mixture gas, such as He+Xe, Ne+Xe, or He+Xe+Ne, is injected into a discharge space provided between the upper substrate 1 and the lower substrate 2 and the barrier ribs 3. The upper substrate 1 and the lower substrate 2 are bonded together by a sealant (not shown).
Scan signals are applied to the scan electrode Y to select scan lines. And sustain signals are alternately applied to the scan electrode Y and the sustain electrode Z to maintain the discharge of the selected cells. Data signals are applied to the address electrode X to select cells.
The metal bus electrode of the scan electrode Y and the sustain electrode Z needs to be as narrow as possible, within the limit where its line resistance does not become too high, because it intercepts light from the phosphor and deteriorates brightness accordingly. Such a metal bus electrode is made by depositing a metal layer with a three-layered structure of Cr/Cu/Cr on the transparent electrode by a vacuum deposition method and then patterning the metal layer by photolithography and an etching process.
The address electrode X is formed on the lower substrate 2 by a pattern print method where silver Ag paste is printed on the lower substrate 2 through a screen after the screen for patterning is printed on the lower substrate 2, or by a photo method including photolithography and etching process after the silver paste is printed on the lower substrate 2.
However, there are the following problems with the pattern print method and the photo method. The pattern print method has an advantage in that the process is relatively simple and the metal electrode can be formed at low cost, but it has two disadvantages. First, it is difficult to use the method for the large size and high precision required for high resolution of the PDP, because the electrode width cannot be made smaller than a given limit. Second, material such as volatile solvent, which is harmful to humans, has to be used because the material has to be in the state of a paste. In comparison, the photo method has an advantage in that it can be applied to large size and high precision because a relatively small electrode pattern can be formed, but it too has two disadvantages. First, it is not environment-friendly because the material is in the state of a paste, and, second, the material is wasted and its cost is high because the entire surface of the substrate has to be printed with the material in paste form.
Accordingly, it is an object of the present invention to provide a method of fabricating electrodes of PDP using a photo peeling method, by which the electrode can be made highly precise according to high resolution.
It is another object of the present invention to provide a method of fabricating electrodes of PDP using a photo peeling method that is environment-friendly, with which it is easy to recycle materials and that is capable of reducing cost when forming the electrodes of the plasma display panel.
In order to achieve these and other objects of the invention, a method of fabricating an electrode of a plasma display panel using a photo peeling method according to an aspect of the present invention includes the steps of: forming a photo material layer on a substrate, wherein the adhesive strength of the photo material layer decreases when the photo material is exposed to light; exposing the photo material layer to light according to a desired pattern; forming an electrode material layer on the exposed and unexposed areas of the photo material layer; forming a peeling material layer on the electrode material layer, wherein the peeling material layer has higher adhesive strength for the electrode material than the exposed area of the photo material layer has for the electrode material; and removing the peeling material layer to leave the desired pattern of the electrode material layer on the unexposed areas of the photo material layer.
In the method, the exposed area of the electrode material layer is removed when removing the peeling material layer.
The method further includes the step of firing the remaining area except where the electrode material layer was removed by the peeling material layer.
The photo material layer includes binder of 20˜50 wt %; reactive monomer of 40˜70 wt %; photo initiator of 2˜5 wt %; and additive of 2˜5 wt %.
In the method, the binder includes at least one of polyurethane, polyester, polyacrylate, co-polymer with carboxylic -COOH and radical OH or tri-polymer with carboxylic -COOH and radical OH.
In the method, the reactive monomer includes at least one of a multi-functional monomer with 2˜5 reactive radicals, acrylic monomer or urethane monomer and oligomer.
In the method, the photo initiator includes at least one of 1-hydroxy-cyclohexyl-phenyl ketone, p-phenyl benzophenone, benzyldimethylketal, 2,4-dimethylthioxanthone, 2,4-diethylthioxanthone, benzoin ethyl ether, benzoin isobutyl ether, 4,4′-diethylaminobenzophenone, and p-dimethylamino benzoic acid ethyl ester.
In the method, the additive includes at least one of dispersing agent, stabilizer and polymerization prohibiting agent.
The electrode material layer includes silver Ag powder of 90˜99 wt %; and glass-frit of 1˜10 wt %.
The peeling material layer includes binder of 70˜80 wt %; and additive of 20˜30 wt %.
In the method, the binder includes at least one of polyurethane, polyester, polyacrylate, co-polymer with radical OH or tri-polymer with radical OH.
In the method, the additive includes at least one of dispersing agent, stabilizer and polymerization prohibiting agent. |
Building The Character Of The Millennial Generation In Revolution 5.0 To Realize The Unity Of The Indonesian Nation. Building the character of millennials in the era of Revolution 5.0, in order to fill independence with meaning, is an obligation for a large-population country such as the Unitary State of the Republic of Indonesia. Facing this era requires preparing a resilient generation of strong character, able to meet the challenges of Revolution 5.0 that are growing at this time, namely by applying Pancasila values that the younger generation can implement at the right momentum. In reality, some young people now suffer from damaged morals because of the various influences that affect them. These influences include the adverse effects of globalization, peers, increasingly sophisticated electronic media, drugs, liquor, and other harmful things. This research aims to direct the younger generation in building the nation. The research method is a literature study, a data collection technique that reviews books, literature, and other sources. The results of this research confirm that the younger generation in the time of Revolution 5.0 still needs direction and understanding in the life of the nation and state. The conclusion is that the younger generation, as the holder of the leadership relay, needs to be equipped with a national character imbued with the Pancasila spirit, so that the continuity of the nation entrusted to them preserves the unity of the country and realizes our national goals.
Published: June 12, 2012 at 04:47 p.m.
Updated: Oct. 25, 2012 at 05:02 p.m.
Jake Ballard was only an ex-New York Giant for 24 hours.
The New York Daily News noted that the tight end was back in the Giants facility Tuesday precisely at 4:02 p.m. ET, two minutes after he officially cleared waivers. The Giants waived Ballard on Monday after he failed a team physical.
Ballard is expected to be placed on the team's physically-unable-to-perform list. He is expected to miss the entire season and eventually end up on injured reserve.
"I'm on the road to recovery, and I believe I'll be in a Giants uniform in '13" Ballard wrote Monday on Twitter.
It looked bad that Ballard was waived on Monday, but ultimately it was a procedural move. His very significant knee injury will knock him out of football for a season. He should have a chance to come back with the Giants next year. |
· Jan. 7-8 - Northwest Quadrant: From the west side of North Potomac Street to the north side of West Washington Street (U.S. 40) and all areas in between.
· Jan. 9 - Northeast Quadrant: From the north side of East Washington Street (U.S. 40) to the east side of North Potomac Street and all areas in between.
· Jan. 10 - Southwest Quadrant: From the south side of West Washington Avenue (U.S. 40) to the east side of South Potomac Street and all areas in between.
· Jan. 11 - Southeast Quadrant: From the east side of South Potomac Street to the south side of East Washington Street (U.S. 40) and all areas in between.
All ornaments must be removed from the tree or it will not be collected. Residents may place their tree at the curb in front of their residence as early as 4 p.m. the day before their scheduled collection date, and no later than 7 a.m. on their scheduled collection date.
The Department of Public Works will be collecting Christmas trees only. |
Dexamethasone counteracts the anti-osteoclastic, but not the anti-leukemic, activity of TNF-related apoptosis-inducing ligand (TRAIL). We have analyzed the effect of the synthetic glucocorticoid dexamethasone, used alone or in combination with recombinant TRAIL, on in vitro osteoclastic differentiation of peripheral blood-derived macrophages cultured in the presence of macrophage-colony stimulating factor (M-CSF)+RANKL for 12–14 days. Dexamethasone exhibited different effects based on the concentration used. Indeed, while at 10^-7 M dexamethasone reduced the number of mature osteoclasts, at 10^-8 M it showed no significant effects and at 10^-9 M it significantly increased the number of mature osteoclasts, with respect to cells cultured with only M-CSF+RANKL. On the other hand, the addition in culture of recombinant TRAIL inhibited the output of mature osteoclasts induced by M-CSF+RANKL. However, the presence of dexamethasone (10^-8 or 10^-9 M) in the culture medium significantly counteracted the anti-osteoclastic activity of TRAIL. In order to ascertain whether dexamethasone might also interfere with the anti-leukemic activity of TRAIL, the degree of apoptosis induced by TRAIL was evaluated in several myeloid (OCI, MOLM, HL60) and lymphoid (SKW6.4, MAVER, BJAB) leukemic cell lines. The levels of TRAIL-triggered apoptosis were not significantly different between leukemic cells cultured in the absence or presence of dexamethasone. Concerning the molecular mechanism mediating the dexamethasone suppression of the TRAIL activity in pre-osteoclasts, but not in leukemic cells, we found that dexamethasone induced a significant down-regulation of the surface levels of TRAIL-R2 in cells of the osteoclastic lineage but not in leukemic cells. The ability of dexamethasone to counteract the TRAIL pathway envisions a novel mechanism mediating the pro-osteoclastic activity of dexamethasone in vivo. J. Cell. Physiol. 222: 357–364, 2010. © 2009 Wiley-Liss, Inc.
You might never spot an air marshal on your flight, but it’s reassuring to know one might be there. In a few weeks, though, many of them are likely to get pink slips — along with food safety inspectors, border patrol agents and countless other government employees who play a crucial if hidden role in everyone’s lives.
These and many other cuts to important programs like child nutrition and low-income housing are part of a giant buzz saw known as the sequester, which will indiscriminately slash $100 billion in discretionary spending beginning in January and will continue for nine years.
The cuts, mandated by the 2011 budget agreement, would not affect entitlement programs like Social Security, and have not received the same attention as the tax increases that are also scheduled to take effect in January. They are the rarely discussed part of the so-called fiscal cliff, and negotiators have little time left to prevent their impact. But all signs suggest that Republicans remain unrelenting in their insistence on preserving the full amount of the cuts, and adding many billions more.
The slogan that Republicans have relentlessly repeated on every talk show, and that Speaker John Boehner invoked Thursday — Washington has a spending problem, not a revenue problem — may sound superficially appealing to weary taxpayers, but those who mouth it never bother to give details on what their budget-cutting demands really mean. It is not an abstraction, like their vague calls for “entitlement cuts.” None of these brave budget-cutters want to go on television and say, cut the F.B.I. Or cut the Border Patrol. Or cut the Centers for Disease Control and Prevention. But that’s exactly what they’re doing by insisting on slashing discretionary spending. Republicans are so afraid of this reality that they won’t even detail their demands for cuts to the White House in the fiscal-cliff talks, instead waiting for President Obama to go first so they won’t be stuck with the blame.
The sequester cuts were balanced between domestic and military programs, to spread the pain evenly. The Republicans have sought to move all the cuts to the domestic side and make the president do the dirty work of choosing which popular programs go on the block. |
The Role of the Hematologist/Oncologist as a Primary Care Provider. To determine the role of the hematologist-oncologist as a primary care provider, a survey was administered to a consecutive sample of 238 hematology-oncology patients. Patients were selected at random from the outpatient hematology-oncology clinics at three institutions: 66.1% from a university medical center, 22.5% from a private hospital, and 11.4% from a health maintenance organization-affiliated clinic. A total of 73 (30.9%) respondents reported they would see their hematologist-oncologist (heme/onc) for routine illnesses, such as sinus or bladder infections. Of the respondents who did not have a family doctor, the percentage increased to 45.5%. Of those patients who had other medical problems, such as hypertension or diabetes, 46 (43.0%) were followed by their heme/onc for these other medical problems. If the respondent did not have a family doctor, the percentage increased to 66.0%. Patients at the university medical center more frequently did not have a family doctor and used their heme/onc for primary care to a greater degree than patients at the private hospital and health maintenance organization (HMO)-affiliated clinic. A total of 55.9% of the respondents reported having the best physician-patient relationship with their heme/onc, whereas only 8.5% had the best relationship with their family doctor. The heme/onc does provide primary care for their patients. If the patient does not have a family doctor, the amount of primary care administered by the heme/onc greatly increases. It is likely that patients rely on their heme/onc for this care because of the close physician-patient relationship that develops as a result of the frequent clinic visits required by many patients with cancer or blood disorders.
Shortest path - capacitated maximum covering problems. I study the shortest path capacitated maximum covering problem (SP-CMCLP). Current, ReVelle and Cohon first studied the un-capacitated version of this problem. The two objectives of the problem are the minimization of the path length from a predetermined starting node to a predetermined terminal node and the maximization of the total demand covered by the facilities located at the nodes in the path. They solved a special case in which a demand can be covered only if it is located on the path. I solve the general model. I also introduce facility capacity constraints, new algorithms and new demand coverage structures to this problem. I decompose the problem into a k-shortest path problem (kSP) and a capacitated maximum covering problem (CMCLP). The k-shortest path problem is solved by a path deletion algorithm (refer to Azevedo et al., 1993, 1994). The capacitated maximum covering problem is solved by various heuristics and meta-heuristics including Lagrangian relaxation, two versions of Tabu search and a simulated annealing method. To the knowledge of the author, the Tabu search and simulated annealing methods introduced are the first meta-heuristics developed for the capacitated maximum covering problem. In these meta-heuristics, I use four neighborhood structures. These are 1) one-interchange, which exchanges a selected facility with an unselected facility, 2) client shift, which shifts a satisfied demand from one selected facility to another selected facility, 3) demand swap (or demand reallocation), which swaps one (or more) assigned demand node (nodes) with one (or more) unassigned demand node (nodes) within the coverage distance of a selected facility site, and 4) demand addition, which adds one or more unassigned demands to a selected facility. These neighborhoods are at different levels and are used in different stages of the meta-heuristics. I design an embedded meta-heuristic procedure which has inner loops of single neighborhoods and an outer loop of multiple alternate inner loops for different neighborhoods. I design a heuristic method and a penalty method for the demand allocation sub-problem in the embedded Tabu search. In the penalty method, I use surrogate relaxation and add a penalty term to the objective function for the violated capacity constraints. An embedded simulated annealing method with temperature vibration is also designed using heuristic demand allocation. I also solve a new version of the shortest path capacitated maximum covering problem with tree coverage structure (SP-CMCLP-TREE). Demand is supplied by sub-paths on a minimum spanning tree constructed from an underlying network. A demand is counted as covered if the total arc length of a path from the demand to a facility site is within the coverage distance, and the demand can be satisfied only if all the intermediate demand nodes on the path are satisfied. Computational results for networks selected from the literature show the effectiveness of the heuristics designed. Tabu search performs the best in solution quality, while Lagrangian relaxation and simulated annealing generate solutions of satisfactory quality using less time. Different path-coverage structures are used based on the properties of the networks. The tree demand coverage structure works better than the traditional coverage structure for large partial networks. The impact of different network parameters is also studied.
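A hedged sketch (not the dissertation's code) of the one-interchange neighbourhood described above for the capacitated maximum covering part: swap one open facility for a closed one and greedily re-assign demand within facility capacities. The type names, the greedy allocation rule and the toy instance are illustrative assumptions:

```julia
# Toy capacitated maximum covering instance and a one-interchange move (illustrative only).
struct Instance
    cover::Matrix{Bool}        # cover[i, j]: facility j can cover demand node i
    demand::Vector{Float64}
    capacity::Vector{Float64}
end

# Greedy demand allocation for a given set of open facilities; returns total covered demand.
function covered_demand(inst::Instance, open::Vector{Int})
    remaining = inst.capacity[open]             # remaining capacities of the open facilities
    total = 0.0
    for i in sortperm(inst.demand; rev = true)  # assign large demands first
        for (k, j) in enumerate(open)
            if inst.cover[i, j] && remaining[k] >= inst.demand[i]
                remaining[k] -= inst.demand[i]
                total += inst.demand[i]
                break
            end
        end
    end
    return total
end

# One-interchange: try swapping each open facility with each closed one, keep the best move.
function one_interchange(inst::Instance, open::Vector{Int})
    nfac = size(inst.cover, 2)
    best_open, best_val = open, covered_demand(inst, open)
    for out in open, into in setdiff(1:nfac, open)
        cand = replace(open, out => into)
        val = covered_demand(inst, cand)
        if val > best_val
            best_open, best_val = cand, val
        end
    end
    return best_open, best_val
end

# 4 demand nodes, 3 candidate facility sites, 2 facilities open.
inst = Instance(Bool[1 1 0; 1 0 1; 0 1 1; 0 0 1], [3.0, 2.0, 4.0, 1.0], [5.0, 4.0, 6.0])
println(one_interchange(inst, [1, 2]))   # finds the swap that raises covered demand
```

In a Tabu search or simulated annealing wrapper of the kind described above, such moves would be accepted or rejected according to the tabu list or the temperature schedule rather than always greedily.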
The way Sidney Crosby and Brad Marchand are meshing at the World Cup, it's easy to envision Marchand in a Penguins' sweater. One problem. It'll probably never happen.
There’s no doubt that Sidney Crosby and Brad Marchand have talked about it. They skate together in the summer and are making magic in the World Cup of Hockey. Lots of it. And with just 263 shopping days left until Marchand stands to become an unrestricted free agent, it’s never too early to start envisioning Marchand playing alongside Crosby with the Pittsburgh Penguins.
After all, Marchand is the winger Crosby has never had. Throughout his international career, the challenge has been finding a winger that meshes with Crosby and that search has often been a challenging one. But here at the World Cup, Crosby, with some help from Marchand, delivered on the big stage and was the biggest reason Canada won its 14th straight best-on-best game dating back to 2010 in Vancouver and will advance to the final against either Sweden or Team Europe starting Tuesday night. There were a lot of contributors to Canada’s 5-3 win over Russia in the semifinal, but it was Crosby and Marchand who provided the spark.
“That’s a long ways away,” Marchand acknowledged when asked last night about the possibility of playing with Crosby. “There’s championship games here, we got to think about that first. But we’ll deal with whatever needs to be dealt with down the road. But it’s a lot of fun playing with Sid, there’s no question about that. But for now we’ll keep that to here.”
He kept that open-ended enough, didn’t he? So let the Brad Marchand Free Agent Watch officially begin. It makes so much sense on so many levels.
Except there’s almost no chance it’s going to happen. The Boston Bruins, who must be growing weary of losing star players as salary cap casualties, seem to finally have their financial house in order. There has been an ongoing dialogue between the Bruins and Marchand all summer and all signs point toward him signing a long-term deal in Boston, likely for eight years and somewhere in the range of $6 million per season. In fact, don’t be surprised if something gets done with Marchand before the start of next season.
And that’s a good thing for the Bruins. Smart call on their part. Because Marchand’s play in the World Cup has been nothing short of brilliant with Crosby. And if he has another season in 2016-17 like he did in 2015-16, the price would continue to rise. Marchand is on a very team-friendly deal in Boston and deserves a raise of at least $1.5 million on a long-term deal. In fact, the first couple of years of that deal might be a bargain for the Bruins still.
So we’ll have to be content with Crosby, Marchand and Patrice Bergeron being a marvel for Canada. And while both Marchand and Bergeron have been terrific, Crosby has been otherworldly. When asked why Crosby has been so good in the World Cup, Canadian defenseman Shea Weber mused, “Because he’s only had a month-and-a-half off? I don’t know. It looks like he just kept skating.”
Indeed. In fact, it looks as though, at the age of 29, Crosby might actually be getting better. The 2016 playoffs will be remembered as the point in his career when Crosby channeled his inner Steve Yzerman. His impressive two-way play was the main reason he won the Conn Smythe Trophy as playoff MVP last spring. But when you watch him strip the puck off Dmitry Kulikov, drive to the front of the net and make a poised 1-on-1 play the way he did on Canada’s first goal of the game, it sure looks as though he may not have actually hit his ceiling. When Crosby struggled through Vancouver, and to a lesser extent Sochi, he might not have scored a goal like that one. Add to the fact that he went 12-5 in the faceoff circle and you may be seeing a player who is actually approaching his apex. It’s little wonder that Crosby and Marchand are running 1-2 in tournament scoring at the moment.
“I just think he knows how good he is and he’s more patient with what he’s doing,” Team Canada coach Mike Babcock said of Crosby. “When things don’t go well, he doesn’t get frustrated. When people crosscheck him he doesn’t get riled up. He just knows he’s going to have success over time. The other thing that happens when he plays with Toews and not on the same line, but Toews does a lot of stuff so he can do what he does. So to me that’s a pretty good one-two punch.”
Crosby is part of a core group – Bergeron, Weber, Jonathan Toews, Drew Doughty, Corey Perry and Ryan Getzlaf are the others – who have been together through both the Vancouver and Sochi Games and on this team. That certainly helped Canada when it went down 2-1 in the second period and things were looking dicey for, oh, about five minutes there. Seventy-two seconds after Evgeny Kuznetsov put the Russians ahead 2-1 late in the second period, Crosby dug out a loose puck and sent it to Marchand for the easy goal.
And that Crosby vs. Ovechkin thing? Well, that’s becoming as big a rout as Canada vs. Russia, isn’t it? Crosby has bettered Alex Ovechkin in two playoff series and in international competition, his Canada teams are 4-0-0 and have outscored Russia by a 25-8 margin. “I don’t think it’s over,” Crosby said when asked whether Canada-Russia as a compelling game has run its course. “If you look at their team, they have some pretty special players, a lot of talent, a lot of skill, exciting guys to watch and it’s great hockey.”
Particularly when Crosby is playing in it. |
(Image: Gerhard Scholtz, Peter K. L. Ng and Stephen Moore)
Meet Blinky. This tiny freshwater crab has three eyes, just like its mutant fish namesake from The Simpsons. But unlike the fictional Blinky, whose deformity is blamed on nuclear waste, this crab may actually be a pair of conjoined twins, one of which is nothing but part of the head.
Gerhard Scholtz of the Humboldt University of Berlin in Germany found the Amarinus lacustris crab in the Hoteo river on New Zealand’s North Island in 2007. Instead of the usual two compound eyes, it has three. It also has a peculiar structure on its back, rather like an antenna. No animal has been seen with this particular pattern of deformities before.
When Scholtz and colleagues took a closer look, they found that the crab’s brain had not developed properly either. It was unusually small, and somewhat deformed.
It’s not clear how the crab ended up like this. Scholtz’s best guess is that it is actually a pair of conjoined twins. In this scenario, the crab grew up with an extra pair of eyes, one of which developed between the normal pair while the other was forced onto its back. Later, this fourth eye was damaged, and the tissue regenerated into the antenna-like structure Scholtz found.
Journal reference: Arthropod Structure & Development, DOI: 10.1016/j.asd.2013.10.007 |
A 7th grader in Rosenberg says he was forced by school administrators to cover up his Star Wars t-shirt. Joe Southern says his son, Colton, wore a shirt depicting the "Star Wars - The Force Awakens" logo, along with a Storm Trooper holding a weapon, to class Thursday at George Junior High School. He's apparently worn it to school several times before without any issue. On Thursday, though, school officials told Colton the shirt was banned because it has a gun, or at least a picture of what in the movie is a weapon. "It's political correctness run amok. You're talking about a Star Wars t-shirt, a week before the biggest movie of the year comes out. It has nothing to do with guns or making a stand. It's just a Star Wars shirt," Southern said. A spokesperson for Lamar Consolidated Independent School District says the LCISD secondary school handbook spells out potential violations of dress code. The list includes "symbols oriented toward violence." Administrators say they did not reprimand the student, though they could have required him to change or assigned him in-school suspension. They say they only required him to zip up his jacket. Southern says the incident, in his opinion, amounts to a violation of the First Amendment. He says the weapon shown is fictional, as is the character holding it, and that any implication his son would hurt anyone would be incorrect. "He's a Boy Scout, active in church, volunteers at Brazos Bend State Park. There's not a violent bone in his body. He's just an excited kid for the movie," said Southern.
Effects of Magnesia Fines on Physical Properties of Al2O3-MgO Unfired Bricks. Al2O3-MgO unfired bricks were prepared using brown corundum, white corundum, fused magnesia and Al2O3 micropowder as the main starting materials, with Al2O3-SiO2 gel powder as a binder. The effects of magnesia fines addition on the physical performance of Al2O3-MgO unfired bricks were investigated. The phase composition and microstructure were investigated by means of X-ray diffraction (XRD) and scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS). The results showed that with increasing magnesia fines addition, the bulk density of the Al2O3-MgO unfired bricks after drying decreased and the strength increased. After heat treatment at 1100 °C, the apparent porosity (AP) slightly decreased and the bulk density (BD) slightly increased, while the strength changed little. After heat treatment at 1500 °C, the AP first decreased and then increased, and the strength changed correspondingly. The hot modulus of rupture (HMOR) first increased and then decreased with increasing magnesia content. The optimum magnesia addition is 6.0 wt. %.
The Great Recession caused labor market devastation on a scale not seen for many decades. Millions of jobs were lost in the United States during 2008 and 2009, leaving the labor market with a hard road to recovery. Indeed, that recovery has required many years of job growth, and it was only in April 2014 that total employment reached its pre-recession level.
However, this milestone did not mark a return to pre-recession labor market conditions. Because the U.S. population is growing, simply reaching the previous number of jobs is not sufficient to return to pre-recession employment rates. At the same time, more baby boomers have entered retirement, somewhat offsetting the effects of population growth and reducing the number of jobs needed for a full economic recovery.
In order to accurately track the progress of the labor market recovery, The Hamilton Project developed a measure of labor market health—the “jobs gap”—that reflects changes in both the level and the demographic composition of the U.S. population (more details regarding the jobs gap methodology are provided in appendix A). Beginning in May of 2010, The Hamilton Project has calculated the number of jobs needed to return to the national employment rate prior to the Great Recession, accounting for population growth and aging.
With today’s employment report, we can report that the national jobs gap relative to November 2007 has closed (see figure 1). This indicates that, by our calculations, nearly a full decade after the start of the recession, employment has returned to its demographically adjusted pre-recession level. This does not mean that all harm to the labor market resulting from the Great Recession has dissipated, nor that the economy is at full employment. It does mean, though, that the economy has added enough jobs to make up for the losses during the Great Recession.
Because the population was growing while the labor market was shedding jobs, the trough of the jobs gap (more than 10 million jobs needed to recover to pre-recession employment rates) exceeded the actual decline in number of jobs (about 8.5 million). The average rate of recovery in the jobs gap after the trough of the Great Recession was 116 thousand jobs per month, and it took 89 months to close the gap.
To be sure, the closing of the jobs gap does not mean that the labor market scars of the Great Recession are entirely healed. Indeed, while some economic markers indicate a tight labor market—a low unemployment rate and relatively abundant job openings—others, like the depressed 25- to 54-year-old employment rate, an elevated share of people working part-time for economic reasons, and restrained wage growth, are consistent with a weaker labor market. The decline in the employment-to-population ratio for 25 to 54 year olds has been offset to some degree by rising employment rates for those 55 and older, helping to close the jobs gap.1 Since November 2007, the overall labor force participation rate has fallen from 66.0 percent to 62.9 percent. However, much of this drop was due to demographic change, and a slight reduction in the unemployment rate over that period helped to mitigate the impact on employment. Appendix A provides additional detail regarding the economic forces underlying movement in the jobs gap.
In figure 2, we apply the jobs gap methodology to three other recent recessions: 2001, 1990, and 1981. Compared to these recessions, the jobs gap during the Great Recession was much larger and took years longer to close. The recessions of 1981 and 1990 involved smaller and briefer jobs gaps, with recovery to the demographically adjusted, pre-recession employment rate after 40 and 48 months, respectively. The 2001 recession saw a more gradual decline in jobs, and a slower recovery; the jobs gap from the 2001 recession did not close before the Great Recession started. The fact that the labor market was not necessarily at full strength at the start of the Great Recession is one reason the closing of the jobs gap does not necessarily signal an end to “slack” in the labor market: the under-used labor that could be profitably employed.
The labor market recoveries depicted in figures 1 and 2 reflect the overall experience of the entire United States. However, not all regions of the country or demographic groups experienced the same recovery—while some groups have reached and substantially exceeded their pre-recession employment rates, others have lagged behind.2 Because the payroll employment data (from the Current Employment Statistics survey) do not include information on demographic characteristics, we use individual-level Current Population Survey (CPS) data in these calculations. These data are less current than the payroll data—we use individual-level data through May 2017—and the growth in employment measured in the CPS is somewhat lower. Given these differences and the distinct features of the CPS data, we will now implement the jobs gap concept as an “employment rate gap,” defined as the difference between the demographically adjusted 2007 employment-to-population ratio and the actual employment-to-population ratio at a given point in time.
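A toy illustration of the adjustment just described (all numbers are hypothetical, not the Hamilton Project's data): hold each demographic group's 2007 employment rate fixed, reweight by current population shares, and compare the result with the observed overall rate.

```julia
# Hypothetical two-group example of a demographically adjusted employment rate gap.
rate_2007 = [0.80, 0.38]    # assumed 2007 employment-to-population ratios by group
share_now = [0.62, 0.38]    # assumed current population shares of the same groups
rate_now  = 0.635           # assumed current overall employment-to-population ratio

adjusted_target = sum(rate_2007 .* share_now)   # demographically adjusted 2007 rate
gap = adjusted_target - rate_now                # employment rate gap
println("adjusted target = $(adjusted_target), gap = $(round(100 * gap, digits = 2)) percentage points")
```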
The employment rate gap recovery has been uneven in other respects: notably, women have outperformed men. Two male-dominated occupation groups—production and construction—were particularly hard hit during the Great Recession. Employment in these occupations remains low relative to other occupations, contributing to weaker employment growth for men over the last ten years. In addition, the employment rate of men aged 25 to 54 had been falling for several decades prior to the Great Recession, driven by forces that are still not entirely understood, but possibly contributing to the disparities between the employment trajectories of men and women.
Figure 4 shows the employment rate gap separately for men and women. The immediate employment loss from the recession was somewhat less severe for women, with the gap reaching a trough of -2.9 percentage points in 2011. By contrast, the employment rate gap for men reached a low point of -5.5 percentage points in 2010. Men have considerably more ground to make up than do women to regain their pre-recession employment rate: the gap for men stands at -1.6 percentage points, while it has closed entirely for women. However, it is important to note that men remain employed at a much higher rate than women, even with their relative decline over the past ten years: 65.7 percent of men and only 54.8 percent of women are employed.
The disparate labor market experiences of racial groups over the business cycle have recently received additional deserved attention. In figure 5, we show that, while whites were less hard hit in the immediate aftermath of the Great Recession than blacks and Hispanics, their recovery has been slower; whites are now in a worse position relative to their pre-recession employment rates than are blacks and Hispanics. The employment rate gap for blacks and Hispanics fell to a trough of -6.0 and -5.4 percentage points in 2011 before narrowing to 0.4 and -0.2 in 2017, respectively. By contrast, the trough was smaller and occurred earlier for whites, at -3.4 percentage points in 2010, and whites’ employment rate gap now stands at -0.6 percentage points. It is important to remember that each group’s employment rate gap is measured relative to its pre-recession level. Blacks still face higher levels of unemployment and lower levels of employment than Hispanics or whites.
In appendix B, we explore differences by race and gender in more depth, and find that the strong recovery among Hispanics is driven by women, while Hispanic males still face a -1.9 percentage point employment rate gap. Strikingly, the current gap for Hispanic women is 3.4 percentage points smaller than that of Hispanic men (appendix figure 4).
Finally, we examine the employment rate gaps for people with differing levels of education. Figure 6 shows that the employment rate gap has generally been largest for those with high school degrees or less, somewhat smaller for those with only some college or an associate’s degree, and much smaller for people with bachelor’s degrees or graduate degrees. At its trough, the employment rate gap for those with only a high school degree or less was -5.3 percentage points, as compared to -1.1 percentage points for those with graduate degrees. In fact, the employment rate gap for people with a high school degree or less is worse today than the gap was for those with a bachelor’s degree in 2010.
Ensuring that the gains from economic progress are broadly shared must be a primary goal of economic policy. Since its inception, The Hamilton Project has worked to this end, maintaining a special focus on employment and policies to promote both skills development and wage growth. From this perspective, the Great Recession was particularly damaging insofar as it disproportionately harmed disadvantaged individuals and families. Now that—by one important marker—American workers have returned to their pre-recession position, it is important for policy makers to have a better understanding of how the labor market evolved during the recession and recovery.
We are grateful for valuable comments from Adam Looney, Kriston McIntosh, Jay Shambaugh, and Louise Sheiner, as well as excellent research assistance from Patrick Liu, Karna Malaviya, Greg Nantz, and Rebecca Portman. In addition, we thank the many individuals associated with The Hamilton Project who have contributed to the development of the jobs gap project over the years.
Note that the Hamilton Project’s jobs gap is calculated using the Bureau of Labor Statistic’s payroll survey. This survey has shown stronger growth than the household survey, which is used to calculate employment-to-population estimates. For additional discussion of the Hamilton Project jobs gap analysis, including details of methodology and interpretation, please refer to prior publications and Appendix A.
Importantly, though our analysis of states and demographic groups takes the same basic approach as our national jobs gap calculation, the former differs in exclusively relying on the Current Population Survey. Because employment as measured by this dataset has recovered more slowly than payroll employment, recovery in the employment rate gap for state and demographic groups tends to be somewhat weaker. In addition, we describe the results that follow recovery in employment rates in percentage points rather than numbers of jobs. A final difference is that we present annual estimates for 2007-16. For 2017, we pool the available five months of data; seasonality does not appear to be an important factor in these calculations.
The case of North Dakota illustrates the employment rate gap methodology. There is a well-known economic boom in the state, which has gained 78,392 jobs since 2007. At the same time, though, migration to the state has boosted the population—particularly among younger workers with high levels of expected employment participation. The number of jobs created have not been enough to offset the change in demographic characteristics, especially since the fracking boom has subsided over the last few years. |
Antioxidant and Antibacterial Effect of Vitis labrusca, Vitis vinifera and Vitis vinifera Seeds Extract. Grape seed extract has therapeutic values including antimicrobial activity, antioxidant effect, wound healing and prevention of cardiovascular diseases. This study aimed to evaluate and compare the antibacterial activity of seeds of different grape species (Vitis labrusca, Vitis vinifera and Vitis vinifera) against some bacterial strains (Staphylococcus aureus, Streptococcus pneumoniae, Acinetobacter calcoaceticus, Klebsiella pneumoniae and Escherichia coli), and to determine the antioxidant effect of the grape seed extracts (qualitatively). Antibacterial testing was performed using the agar cup cut diffusion method for all bacterial species, followed by determination of the minimum inhibitory concentration (MIC) for the species shown to be inhibited by the grape seed extracts. The antioxidant assay was done using the DPPH scavenging test: a methanolic solution of each grape seed extract was spotted on TLC paper and sprayed with a 0.2% methanolic solution of diphenylpicrylhydrazyl (DPPH) reagent. Vitamin C was used as a positive control. From the results, none of the grape species had any effect on K. pneumoniae and E. coli; red and black grape seeds showed the highest inhibition zone (20 mm) on the Staph. aureus agar plate, and green grape had the highest effect on the Strep. pneumoniae agar plate (20 mm). The lowest effect was for the red grape seed extract (13 mm) on Acinetobacter calcoaceticus. In general, the three grape seed extracts had an effect on Staph. aureus, Strep. pneumoniae and Acinetobacter calcoaceticus. The red and black grape seed extracts were effective against the Strep. pneumoniae strain at MIC values of 7.8 mg/mL, and the black grape seed extract had an MIC of 7.8 mg/mL on Staph. aureus. However, the MIC values of the seed extracts for the rest of the bacterial species ranged between 15.62 and 87.5 mg/mL. The results showed that the black grape seed extract produced the largest change in spot color, indicating a strong antioxidant effect. The lowest effect was for the red grape seeds. From this result, black grape proved to be the best grape seed extract among the three chosen species in its antibacterial and antioxidant efficacy.

INTRODUCTION. Grape seed extract can be found as a dietary supplement in liquid form, tablets or capsules, generally containing 50 to 100 mg of the extract. Grape seeds consist of vitamins, minerals, lipid, protein, carbohydrates and 5-8% polyphenols. The most abundant phenolic compounds isolated from grape seeds are catechin, epicatechin and procyanidins, in addition to dimers and trimers. Grape seed proanthocyanidins constitute a complex mixture consisting of procyanidins and procyanidin gallates. The polyphenols of grape seeds have been known for their advantageous role in human health. The grape seed has been shown to exhibit therapeutic values such as antioxidant, anti-inflammatory, anti-bacterial, anti-cancer, antiviral, cardioprotective, hepatoprotective, neuroprotective, anti-aging and anti-diabetic effects. The oil extracted from grape seeds is used for cosmetic, culinary, pharmaceutical and medical purposes. The polyphenolic fractions and gallic acid derivatives are reported to have antibacterial activity, and grape seed proanthocyanidins have been reported to have a potent antioxidant effect. Extracts of grape seeds obtained from grapes of the Hasandede, Emir and Kalecik Karasi wine cultivars in Turkey showed that concentrations of 2.5%-5% exhibited the most inhibitory effect against a variety of microorganisms including E. coli, K.
pneumoniae, and S. aureus. A similar grape seed extract was tested against 21 gram positive strains and gram negative cocci and showed that gram positive cocci are more susceptible, especially S. aureus. Complete inhibition of 43 clinical strains of Methicillin resistant S. aureus was noted at concentration of 3 mg\mL crude grape seed extract. In other study performed on grape pomace (skin and seed) which produced during wine production and considered as a waste product of wine industries, antimicrobial activity of grape seed extract (GSE) was determined using agar well diffusion method. It was conducted to determine inhibitory effect of GSE against Staphylococcus aureus, Escherichia coli and Klebsiella pneumonia isolated from urinary tract infection. S. aureus and E. coli was inhibited to high and least extent respectively by GSE. GSE can be used in the treatment of urinary tract infections. Another study tested the antioxidant activity of grape seed extracts using -carotene-linoleate model system and linoleic acid peroxidation method. The results showed that different extracts had 65%-90% (scavenging rate) antioxidant activity at 100 ppm concentration, indicating that grape seed extracts might be used as preservation of food products as well as for health supplements and nutraceuticals. The present study aimed to determined antimicrobial activity of three spp. of grape seed extracts (GSE) against chosen bacteria in addition to the qualitative antioxidant effect. Collection and Preparation of Grape Seed Extract (GSE) Fresh grape fruits (Vitis labrusca or red, Vitis vinifera or black and Vitis vinifera or Green grape) were collected in the time between Augusts and October 2019. Seeds of black and green grape were collected house garden in Tripoli, while seeds of red grape were collected from local Libyan market. All seeds were manually separated and collected from its fruits, dried in oven at temperature of 50 C ( Figure 1). Grape seeds were weighed continuously until weight is fixed. The dried grape seeds were grinded to powder, weighed, labeled and kept in dry place. Fig-1: Grape seed drying in oven at 50 C Dried seed samples were extracted by cold maceration (Figure 2). Known weights of grape seeds powder were macerated in methanol in a closed container tank and left for 72 hrs x 3 with stirring from time to time. Methanolic seeds extract was filtered using filter paper. All extracts were concentrated using rotary evaporator at 64-65.5 C, 150 rpm to obtain the crude extract. Percentage of yield of all extracts was calculated using this formula: % of yield = (weight of crude extract / weight of original plant powder) x 100 Antibacterial activity of GSE Grape seeds crude extracts were tested for their antibacterial activities against Staphylococcus aureus, Streptococcus pneumonia, Acinetobacter Calcoaceticus, Klebsiella pneumoniae and Escherichia coli. Bacterial strains were obtained from the bacterial stock, Department of Microbiology, faculty of pharmacy, Tripoli University and respiratory disease hospital. Agar cup cut diffusion method was used for all bacterial species, followed by determination of minimum inhibitory concentration (MIC) for strains that were inhibited by grape seeds extracts. Bacterial suspensions were cultured in nutrient broth for 24 hrs, stocked on agar plate by cotton swab. Cuts were made on agar surface. 
One gram of each seed extract was weighed and dissolved in 1 mL of distilled water, then 100 µL of extract was poured into the agar cup cuts and the plates were incubated at 37 °C for 24 hrs. The diameter of the zone around the cup cuts with no bacterial growth was measured and recorded. The MIC test was applied for the bacteria with the highest diameters produced by the plant seed extracts. Minimum Inhibitory Concentration test (MIC) A series of broths is mixed with two-fold serially diluted antibiotic (Cephalexin) solutions and a standard inoculum is applied (Figure 3). After incubation, the minimum inhibitory concentration (MIC) was determined as the first broth in which growth of the organism was inhibited; the more resistant an organism, the higher its MIC will be. One mL of each bacterial strain, equal to 10⁸ CFU per mL, was transferred into a set of six test tubes and 1 mL of plant extract was added; two-fold dilutions were then made to obtain a concentration range of 500 mg/mL to 6.25 mg/mL. After 18 hr of incubation at 37 °C, 0.1 mL of each concentration was inoculated onto MHA agar using the streak plate method. The plates were then incubated at 37 °C for 24 hours. The least concentration that did not show growth of the test organism was recorded. Antioxidant (DPPH) scavenging activity A methanolic solution of each grape seed extract was spotted on a TLC plate, left to dry, eluted with a suitable solvent system and sprayed with a 0.2 % methanolic solution of diphenyl picryl hydrazyl (DPPH) reagent. A change in color from violet to yellow was considered a positive result. Vitamin C was used as a positive control. RESULTS AND DISCUSSION All grape seed samples were extracted by cold maceration for 72 hrs × 3, filtered and concentrated. The percentage of yield was calculated (Table 1) for all species samples using this formula: Percentage of yield = (weight of crude extract / weight of original powder) × 100. As shown in the above table, the percentage of yield of red grape seeds was higher than that of the other two grape species, although there was no big difference in the % of yield of the three species. Antibacterial activity of GSE Grape seed crude extracts were tested for their antibacterial activities. The results in Table 2 showed that all grape seed extracts had no effect on K. pneumoniae and E. coli; red and black grape seeds showed the highest inhibition zone (20 mm) on the Staph. aureus agar plate, and green grape had the highest effect on the Strep. pneumoniae agar plate (20 mm). The lowest effect was for the red grape seed extract (13 mm) on Acinetobacter calcoaceticus. In general, the three grape seed extracts had an effect on Staph. aureus, Strep. pneumoniae, and Acinetobacter calcoaceticus. Table-2: Antibacterial effect of three grape seed extracts against different bacterial species As the above-mentioned results showed, the three grape seed extracts exhibited an antibacterial effect on the chosen strains. This might be explained by the presence of phenolic compounds in grape seeds. Minimum Inhibitory Concentration test results (MIC) The MIC test was performed on the micro-organisms which were susceptible to the effects of the grape seed extracts, namely Staph. aureus, Strep. pneumoniae and Acinetobacter calcoaceticus. This test was performed to measure the lowest concentration of a reagent that inhibits the visible growth of the test microbes (MIC) (Table 3). The red and black grape seed extracts were effective against the Strep. pneumoniae strain at MIC values of 7.8 mg/mL, and the black grape seed extract had an MIC of 7.8 mg/mL on Staph. aureus. 
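Since the study reports MICs only as a final readout, a small helper like the one below shows how the two-fold dilution series and the MIC reading described above can be computed. This is a minimal sketch, not the authors' procedure in code form: the function names, the seven-tube series and the growth flags are illustrative assumptions, not the study's raw data.

```python
# Minimal sketch of the two-fold dilution / MIC readout described above.
# The starting concentration follows the text (500 mg/mL); the growth
# observations below are invented placeholders, not the study's data.

def twofold_series(start_mg_per_ml: float, n_tubes: int) -> list[float]:
    """Concentrations in a two-fold serial dilution, highest first."""
    return [start_mg_per_ml / (2 ** i) for i in range(n_tubes)]

def read_mic(concentrations: list[float], growth: list[bool]) -> float | None:
    """MIC = lowest concentration showing no visible growth after incubation.

    growth[i] is True if the tube/plate at concentrations[i] showed growth.
    Returns None if every concentration still allowed growth.
    """
    inhibited = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibited) if inhibited else None

concs = twofold_series(500, 7)   # 500, 250, 125, 62.5, 31.25, 15.625, 7.8125 mg/mL
growth_flags = [False, False, False, False, False, False, True]  # hypothetical readings
print([round(c, 2) for c in concs])
print(read_mic(concs, growth_flags))   # -> 15.625 mg/mL for this made-up example
```

The printed series lands close to the 7.8 and 15.62 mg/mL values reported in the text, which is consistent with a two-fold scheme, but the exact endpoints and readings here are placeholders only.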
However, the test for MIC of seeds extracts for the rest of bacterial species ranged between 15.62 and 87.5 mg/mL. As the tables 2 and 3 showed, grape seeds extract had antibacterial effect on most of the chosen strains. This might be due to the phenolic contents found in grape seed which are partially hydrophobic, and are considered to interact with the bacterial cell wall and lipopolysaccharide interfaces by decreasing membrane stability. The amount of phenolic content in grape seed extract has been directly correlated to the antibacterial properties. Antibacterial activity of grape seed extract is attributed to its content of Stigmasterol, which cause bacterial components degradation by surface interaction and pore formation in the cell wall of bacteria. It might also be accounted to the presence of tannins that has the ability to inactive microbial adhesions, enzymes and cell envelope transport proteins, their complexity with polysaccharide and their ability to modify the morphology of microorganisms. Therefore, this observation is suggestive of the antibacterial effect of grape seed extract. In our study, grape seed extract proved to be bactericidal and was able to produce zones of inhibition ranging from 13-20 mm against chosen bacterial strains with the concentrations of the extract ranging from 15.62 mg/mL and 87.5 mg/mL. Red and black grape seeds extracts showed the highest inhibition zone (20 mm) on Staph. Aureus, green grape extract had the highest effect on Sterp. Pneumonia (20 mm). Reagor L et al. carried out a study to determine the effectiveness of processed grape fruit seed extract as an antibacterial agent against sixty seven distinct biotypes. The results suggested that the antibacterial characteristic of grape seed extract is comparable to that of proven topical antibacterials. One study reported that the structure-activity correlation assays showed that the hydroxyl group of the phenolic compound was effective against E.coli and the benzene ring was effective against S. aureus. According to Al-Habeb A and Al-Saleh E et al. the antibacterial effect of grape seed extract against MRSA is due to disruption of bacterial cell wall membrane in scanning and transmission electron microscopy which could be accounted to the presence of potent polyphenolics in grape seed extract. Antioxidant (DPPH) scavenging activity test results The result showed that black grape seeds extract had the largest spot change in color. The lowest effect was by red grape seeds ( figure 4). All values are expressed as mean ± SD Fig-4: DPPH scavenging activity test of three grape species on TLC in different solvent system, Vit C was used as +ve control. R= Red, B= black, G= Green As the above figure showed, all grape seeds extracts exhibited a change in DPPH color indicating antioxidant properties. This had been studied extensively by many researchers. Antioxidants such as flavonoids, which act as free radical quenchers. Phenolic compounds from grape seeds have pharmacological and nutraceutical benefits showing antiviral and antimutagenic actions that are closely related to their antioxidant and singlet oxygen quenching ability. Recognition of such health benefits of catechins and procyanidins has led to the use of grape seed extract as a dietary supplement. Besides its antioxidant activity, the grape seed extract proved to act also as antibacterial agent. 
CONCLUSION In this study, seeds of different grape species were extracted by cold maceration and tested for antibacterial effects against different bacterial strains using the cup-cut agar diffusion method and the MIC method, in addition to their antioxidant activity using the DPPH scavenging activity test. Red grape (Vitis labrusca) gave the highest percentage of yield (12.9 %) compared to the other two tested grape species, while black grape resulted in the lowest percentage of yield (8.3 %). Black grape proved to be the most effective DPPH scavenger compared to the other two tested species, while the lowest effect was shown by red grape seeds. In general, the three tested grape extracts were effective against Staph. aureus, Strep. pneumoniae and Acinetobacter, while they had no effect on K. pneumoniae and E. coli. The results of the MIC test showed that black grape had the best effect against Staph. aureus and Strep. pneumoniae (7.8 mg/mL). Finally, black grape extract was the best among the three chosen species in its antibacterial and antioxidant efficacy.
The refs didn't do the Jets any favors during their late-game collapse against the Packers.
EAST RUTHERFORD -- The Jets made several inexcusable mistakes during their late game collapse Sunday.
But the referees didn't do them any favors in their 44-38 overtime loss to the Packers at MetLife Stadium. And Jets coach Todd Bowles uncharacteristically crushed the officials after the game.
"We can't play two teams," Bowles said. "It was one of those games. I haven't seen one like that in my 18, 19 years in the league. ... I thought we were playing two teams -- I thought we were playing the Packers and the striped shirts."
The Jets were flagged for 16 penalties for a total of 172 yards in the loss -- the most penalty yards in franchise history, according to ESPN Stats & Info.
Bowles, who was visibly angry after the Jets blew their third fourth-quarter lead in the last four games, doesn't usually criticize the officiating. But in the aftermath of Sunday's game, he went after the refs without even being asked about them.
"That's how bad it was," Bowles said. "I'm sure I'm getting fined already, so I care not to even say any more. But something's got to be done about that. That's ridiculous."
The Jets were called for eight penalties for a total of 109 yards in the final 12 minutes of regulation and overtime. And the calls always seemed to come at critical times.
The defense was flagged three times during Aaron Rodgers' game-winning drive in overtime. Three more penalties were called on the defense during a fourth-quarter scoring drive that pulled the Packers within one possession.
The Jets were also flagged on a two-point conversion attempt late in regulation, on which Rodgers threw an interception that cornerback Darryl Roberts returned all the way to the other end zone, a play that would have given the Jets a game-winning two points. But it was called back for holding on Roberts.
One of the most questionable calls was a pass interference in overtime, when cornerback Trumaine Johnson was flagged while defending receiver Marquez Valdes-Scantling. It appeared that both players were jockeying for position and shoving each other, but Johnson was flagged for the penalty.
"I thought it was a bad call as I did quite a few other calls," Bowles said.
We don't know how Johnson felt about the call. The man who signed a five-year, $72.5 million deal as a free agent in March blew past reporters without taking questions, leaving his teammates to answer questions about the call and another fourth-quarter meltdown.
But the players who stuck around and were accountable for the loss echoed what Bowles said about the officiating.
"I think it was bull--- myself. Period," said Jets cornerback Morris Claiborne, who was called for illegal use of the hands to the face late in the second quarter. "Even the receiver who I supposedly [fouled] looked at me and said, 'Wow, I'll take it. You didn't even do anything.'"
Linebacker Avery Williamson was still angry about 30 minutes after the game.
"I'm still pissed off," Williamson said. "I'm sure [the NFL] will come out tomorrow and say there were bad calls, because that's what they always do."
"I would agree with Coach Bowles," wide receiver Jermaine Kearse said. "It's already hard enough to win a NFL game each week, when you're dealing with some tough stuff out there it just makes it that much more frustrating."
One call Bowles didn't have a problem with: Leonard Williams' ejection in the second quarter for throwing a punch at Packers offensive lineman Bryan Bulaga.
"You can't throw a punch, obviously," Bowles said.
It wasn't all about penalties for the Jets. The offense had chances to put the game away early in the fourth quarter, but couldn't score a touchdown. The defense gave up eight plays of more than 20 yards, including four plays of 23 yards or longer in the fourth quarter. They gave up 540 yards for the game, their most allowed since 1998 when they gave up 557 against the 49ers in an overtime game.
But for a team that has invented ways to lose games this season, the penalties were an effective way to finish the job. |
INVESTIGATION OF THE CARBON MONOXIDE POST-COMBUSTION FLAME IN THE WORKING SPACE OF A STEELMAKING UNIT In order to optimize thermal mode of the steelmaking process and to bring down energy consumption, we examined effect of thermophysical parameters of the carbon monoxide post-combustion flame considering aerodynamic processes on the thermal-technological parameters during melting. Based on modern approaches and methods, we obtained data on the character of macro-physical processes that occur in the working space of the unit and in the reaction zone, taking into consideration the influence of aerodynamic processes in the bath of a steelmaking unit. We conducted a comparative analysis of the shape and temperature fields of the flame taking into consideration the influence of aerodynamic processes at different intensities of blowing the bath of a steelmaking unit with oxygen for different types of blowing devices. It was established that the shape and magnitude of temperature fields of the drigted flame varies depending on the content of carbon (from 4 to 0.1 %) in the melt, the intensity of blowing the bath with oxygen (from 1,800 to 2,400 m 3 /h), as well as design features of the blowing device (nozzle diameter and inclination angle). In this case, lances with an increase in the inclination angle of the nozzles to 50° and varying nozzle diameters from 10 to 20 mm, compared to lances of basic design (at the same inclination angles), make it possible to improve flame organization, to increase the length and temperature of the flame, to improve uniformity of the structure of flame, to increase heat exchanging surface between the flame and the bath, and to improve heating capability of the bath in a steelmaking unit. The studies reported in the present paper are applicable to industrial steelmaking units with the intensity of blowing the bath with oxygen in a range of 1,800 2,400 m 3 /h. The results obtained bring us closer to the development of a rational design of the blowing device to optimize the thermal mode of steelmaking process that will make it possible to reduce energy consumption in steel production. Introduction Development of ferrous metallurgy under contemporary conditions is characterized by a significant consumption of natural gas in the process of steelmaking. Relevant tasks in this regard are the development of theoretical and practical aspects of the new energy-and resource-saving techniques for steel smelting in steelmaking units with oxygen blowing (O 2 ) and post-combustion of carbon monoxide (CO). In order to meet these challenges, promising is the application of modes of steel smelting with an increased degree of CO post-combustion in the flue gases by jets of O 2 with the subsequent transfer of heat from the CO post-combustion flames to the melt and a steelmaking bath. The rational mode of CO post-combustion by jets of oxygen in the flow of high temperature gases discharged from the zone of blowing should ensure more efficient use of the heat released from the CO post-combustion in the system of gas flows in order to heat the bath, reduce natural gas consumption and to improve other technical-economic indicators without compromising resistance of the unit's lining. Literature review and problem statement In paper, authors studied effect of oxygen jet flow on the process of CO post-combustion when changing the inclination angles of a blowing device from 8° to 12°. 
Other ranges of change in the inclination angles and their influence on temperature fields of the CO post-combustion flame were not, however, examined. Article investigated influence of the blowing device's nozzle inclination angle, equal to 16°, on the process of CO post-combustion and agitation of the melt in the bath of a steelmaking unit. It, however, did not address other possible variations of inclination angles and their effect on the temperature field of a CO post-combustion flame. The blowing process of the bath of a steelmaking unit using a cold model simulation was studied in paper. The work, however, was limited to modeling the intensity of blowing up to 450 m 3 /h. Authors of article examined conditions of overheating reaction zones relative to the peripheral part of the bath and estimated temperature gradients. In this case, the research is limited only to surface measurements of the bath's temperature fields, without detailed examination of macrophysical processes in the reaction zone and of the effect of change in the intensity of blowing on the temperature fields of a CO post-combustion flame. In paper, authors theoretically studied physical-chemical processes in the reaction zone when blowing the melt with oxygen. Results of the work, however, were not tested under industrial conditions. Authors of article performed theoretical modeling of the effect of changing the intensity of blowing in the bath of 75 tons of arc furnace. Results of the study, however, were not tested at the industrial unit; formation of the temperature fields of a CO post-combustion flame was not studied. Parameters and shape of reaction zones in steelmaking units when designing and applying experimental multi-nozzle lance and their influence on the process of CO post-combustion were investigated in paper. In this case, the authors did not take into consideration the influence of aerodynamic processes in the bath and changes in the temperature fields of a CO post-combustion flame. INVESTIGATION The process of blowing the bath of a steelmaking unit using a cold model simulation was studied in. The authors, however, did not specify the effect of changing blowing intensity on the temperature fields in the unit. Intensive splashing and the existence of a post-combustion flame of carbon monoxide in the region of the lance in a general form was recorded by a photographing method excluding the impact of change in the intensity of blowing. It is worth noting that all the above studies consider behavior of a post-combustion flame of carbon monoxide and the effect on it resulting from the carbon monoxide flow discharged from the reaction zone, including the bubbles of CO formed when oxygen interacts with the melt. They do not take into consideration the influence of aerodynamic processes in the bath of an industrial steelmaking unit, formed under the influence of thrust produced by the fume collection vanes. Their influence on the temperature fields of a CO post-combustion flame was not examined either. The aim and objectives of the study The aim of present study is to examine a CO post-combustion flame in the working space of a two-bath steelmaking unit. This will make it possible to proceed to the optimization of thermal mode of steelmaking process, which would reduce energy consumption per unit. 
To achieve the set aim, the following tasks have been solved: -based on existing approaches and methods, to obtain data on the nature of macro-physical processes that occur in the workspace of the unit and in the reaction zone, taking into consideration the effect of aerodynamic processes in the bath of a steelmaking unit; -to establish dependences of thermophysical parameters of a post-combustion flame of carbon monoxide, taking into consideration the effect of aerodynamic processes on the thermotechnical parameters of steel melting; -to perform comparative analysis of the shape and temperature fields of the flame considering the influence of aerodynamic processes under different intensities of blowing the bath of a steelmaking unit with oxygen for various types of blowing devices. Methods applied for studying working space of the unit and the reaction zone The methods of theoretical research are the formalization and synthesis. The methods of empirical studies are the laboratory and field experiment (industrial tests). The most common methods for examining a reaction zone of the interaction between oxygen jets and the melt in a steelmaking unit are the methods of photo-and video recording and filming, which were widely used in paper. We propose to employ these methods to study the reaction zone and a post-combustion flame of carbon monoxide. Additionally, we used the thermal imaging camera NEC H2640 (NEC Avio Infrared Technologies Co. Ltd., Japan) and the infrared pyrometer Raynger (Raytek, USA) with additional blocks. Results of examining the working space of a steelmaking unit during interaction between oxygen jets and the melt While conducting balance melting during blowing the bath of a two-bath steelmaking unit (TSU) (Fig. 1) at the PAO ZMK (Ukraine), we obtained information on the character of macro-physical processes that occur in the reaction zone. The experiments were carried out using oxygen lance with an oxygen flow rate of 1,800 m 3 /h and above. The furnace operates in the following way: one bath (hot) is used for melting and finishing with intense blowing of the metal with oxygen while the second bath (cold) is used at the same time for filling and warming the hard charge. Gases under the influence of thrust created by the fume collection vanes, are directed from the "hot" part of the furnace to the "cold". In the cold part of the furnace, CO burns to CO 2 with warming the solid charge by the released heat. The heat lacking for the heating process is replenished by supplying natural gas through the burners installed in the roof of the furnace. Some fragments of imaging managed to record that the discharge of CO proceeded in several separate regions corresponding to the jets of oxygen that enter the bath from each nozzle of the lance. A diameter of the area of the released carbon monoxide from one such region is 0.25-0.31 m. There are metal splashes observed within the range of a near-lance flame on the circle with a radius of up to 0. At the same time, we registered, when oxygen is fed through the lance placed over the surface of the bath, the interaction modes between the blowing and the melt with an increase in the pressure from jet to the bath, which has a slag cover of insignificant thickness (0.2-0.3 m). The formation of a near-lance flame is observed during operation mode of blowing when the lance head is located at the slag-metal boundary and above, within a circle of radius up to 0.5-0.8 m (Fig. 2). 
The flame is formed as a result of the after-burning of the carbon monoxide flow released from the reaction zone in the flow of air streams. The oxidizer flow moves inside the TSU from one bath to another under the influence of the thrust produced by the fume collection vanes. The shape of the near-lance flame is typical for jets blown into an entraining flow (Fig. 3). When captured by a photo camera, it is possible to see that the flame is displaced to the right of the lance body (in the direction of the discharged gas motion). At the same time, the CO that is released from the melt is sucked into the flame, followed by its pulsating ignition at a distance of 0.4-0.5 m from the lance. The formation of the flame along the length of the bath, carried away by the flow of flue gases at different contents of carbon in the melt, is shown in Fig. 4. Based on the photo-recording (Fig. 4) and the measurement of temperatures in the visible part of the carried-away flame on the TSU throughout the entire melting, we constructed regression dependences of the change in the length and temperature of the drifted flame on the content of carbon in the melt (Fig. 5): for the length of the drifted flame, the correlation coefficient R reached 0.95, and the relative error when testing the regression equation against actual data amounted to 1.5-2 %; for the temperature of the drifted flame (with the carbon content in the melt, %, as the argument), the correlation coefficient R amounted to 0.87, and the relative error when testing the regression equation against actual data reached 1.7-2.5 %. In order to perform a detailed analysis of the temperature fields of the visible part of the flame, we split the flame lengthwise into 3 conditional sections (zones). Each zone was divided into 2 vertical parts (top and bottom) and 2 horizontal parts (beginning and end of the zone). According to a special test program, during balance melting we conducted 8 experimental measurements of the visible part of the temperature fields of the drifted flame: 4 experimental measurements on basic lances (a) with the same inclination angle of the nozzles relative to the melt (30°) and 4 experimental measurements on the tested lances (b) with a combined inclination angle of the nozzles relative to the melt (20/50°). During the tests, the intensity of blowing the bath with oxygen varied from 1,800 to 2,400 m³/h at a content of carbon in the melt of 3 %. The measurements were carried out using the infrared pyrometer Raynger 3i and the thermal imager NEC H2640. Results of the experiments are shown in Fig. 6-10 and are summarized in Tables 1 and 2. The blowing was conducted through basic six-nozzle lances (inclination angle of the nozzles to the vertical of 30°, nozzle diameter of 15 mm) and experimental lances (with inclination angles of the nozzles up to 50° to the vertical and nozzle diameters varied from 10 to 20 mm). Table 1: Results of measurements of temperature fields of the drifted flame (basic lances) at a carbon content of 3 %. Table 2: Results of measurements of temperature fields of the drifted flame (experimental lances) at a carbon content of 3 %.
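The regression diagnostics quoted above, the correlation coefficient R and the relative error of the fitted equation against the measured data, can be reproduced with a few lines of code. The sketch below is only an illustration of the computation: the carbon contents and flame temperatures in it are hypothetical placeholders, not values from the melting campaigns, and the second-order polynomial is an assumed functional form.

```python
# Sketch (not the authors' code) of how R and the relative error of a fitted
# regression against actual data can be computed. All numbers are invented.
import numpy as np

carbon = np.array([4.0, 3.0, 2.0, 1.0, 0.5, 0.1])            # % C in the melt (hypothetical)
flame_temp = np.array([1850, 1820, 1760, 1680, 1620, 1560])  # flame temperature, °C (hypothetical)

coeffs = np.polyfit(carbon, flame_temp, deg=2)   # assumed 2nd-order polynomial fit
predicted = np.polyval(coeffs, carbon)

R = np.corrcoef(flame_temp, predicted)[0, 1]                 # correlation coefficient
rel_err = np.abs(predicted - flame_temp) / flame_temp * 100  # relative error, %

print(f"R = {R:.2f}, relative error = {rel_err.min():.1f}-{rel_err.max():.1f} %")
```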
Abrupt withdrawal of atenolol in patients with severe angina: comparison with the effects of treatment. Sir, I have read the report by Walker et al (1985; 53: 276-82) on the results of a study on the effect of atenolol withdrawal in patients with chronic stable angina pectoris. As a result of their findings, they came to the rather dangerous conclusion that atenolol withdrawal can be expected to carry no appreciable risk of precipitating a coronary event in patients with little or no angina. In 1979, Meinertz et al suggested that the abrupt discontinuation ofany beta blocking agent should be expected to produce a withdrawal syndrome similar to that described for propranolol.' The point at which rebound phenomena occur can be delayed for as long as 21 days after withdrawal2'4; the duration of the study performed by Walker et al was therefore too short to permit a conclusion that atenolol is devoid of this risk. Furthermore, in a different study, there was evidence of rebound withdrawal phenomena in two of 14 patients after substitution ofatenolol by placebo.5 Others have shown no difference between the beta blockers propranolol, oxprenolol, atenolol, and acebutolol in their propensity to cause a rebound increase of heart rate under conditions of increased sympathetic drive after withdrawal.6 The statement that atenolol has not yet been associated with a withdrawal syndrome is therefore incorrect. Clearly there is a great deal of variability in the appearance of the beta blocker withdrawal syndrome and advice that treatment with any beta blocker should be withdrawn gradually, irrespective of the disease under treatment, still stands. |
Overview and Comparison of Screen Test Methods Used in Quantifying Ocular Motility Disorders In much the same way as a formal visual field test using a perimeter can document and quantify a visual field defect, so can plotting the limited range of motion of extraocular muscles on a screen at a fixed distance provide a graphic pattern for interpretation and a quantified baseline for subsequent comparison. Classic patterns may be recognized in both examples, such as a homonymous hemianopia in a visual field defect or an incomitant esotropia from abducens palsy in a motility defect. Both techniques permit the examiner to interpret the results in a graphic rather than strictly numerical fashion and aid in differential diagnosis. More accurate sequential follow-up is provided to document progression, recovery, or stability of the condition and assist in its medical or surgical management. |
A diphtheria toxin/fibroblast growth factor 6 mitotoxin selectively kills fibroblast growth factor receptor-expressing cell lines. The fibroblast growth factors (FGFs) constitute a family of nine polypeptides implicated in a number of physiological and pathological processes. They bind to at least three types of cell surface molecules, including low and high affinity receptor families. The role of FGFs and their receptors in human tumorigenesis has been suspected but not formally proven. FGF6 is an oncogene encoding a precursor protein of 208 amino acids that has been shown to bind to FGF receptors. Its normal function has not been identified, but its restricted pattern of expression suggests a role in muscle development or function. We have constructed, produced, and purified a diphtheria toxin/FGF6 mitotoxin that selectively kills FGF receptor-expressing cells. Interestingly, at least two cell lines that normally respond to FGF6 have been found resistant to DT/FGF6, suggesting that FGF6 acts on these cells through a transduction pathway that does not involve FGF receptor. |
Final-state rule vs the Bethe-Salpeter equation for deep-core x-ray absorption spectra The independent-electron approximation together with the final-state rule provides a well established method for calculating x-ray absorption spectra that takes into account both core-hole effects and inelastic losses. Recently a Bethe-Salpeter Equation approach based on Hedin's GW approximation that explicitly treats particle-hole excitations has been applied to the same problem. We discuss here the formal relationships between these approaches for deep-core x-ray spectra and show that they are similar, apart from differences in their practical implementations. Their similarity is illustrated with results from both theoretical approaches and compared with experiment. This comparison also suggests ways to improve both approaches. |
In reaction to Carol Griffin's letter (Viewers' Views, March 3), I, too, was greatly impressed by the Feb. 12 episode of "thirtysomething." I don't think the writers were necessarily trying to do the "opposite of what is expected." Rather, they were demonstrating how life itself is unpredictable.
This show really left me out of breath and with a sick feeling in the pit of my stomach. Kudos to all the actors. |
Q:
One data structure to both parse and stringify
You can create a data structure such as a Parsing Expression Grammar (PEG) that will be used to parse:
string -> object
You can then write a function that iterates through the object's properties and serializes back into a string, or will stringify it:
object -> string
I haven't seen any data structures for defining how stringifying would work (similar to a PEG), but I think this would be possible.
What I'm wondering is if you can create one data structure to do both. That is, define a grammar or something that will allow you to both parse and serialize some text. There would still be two functions (parse and stringify), but they would both take the same data structure / grammar thingy to figure out what to do automatically.
I am asking because I feel like I've read before that the whole original purpose of grammars was for language generation rather than parsing, and parsing only came after. I can imagine a data structure to generate strings, but I'm wondering if it can get more finely tailored so you can give it a data structure (say a parsed URL object), and it would generate the URL string from it without having to write the custom serialization code. The data structure / grammar thing would do the stringification for you. Wondering if that's possible.
Or maybe it's just better to have two data structures. Just trying to think in terms of reducing duplication.
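To make it concrete, here is a rough sketch of the kind of single shared structure I have in mind for the URL case. The spec format, field names and helpers are all invented for illustration and only handle the happy path:

```python
# Hypothetical illustration: one declarative "spec" drives both directions.
# Each entry is (field_name, separator_that_precedes_it); parse() splits on
# the separators, stringify() re-joins the fields in the same order.

URL_SPEC = [
    ("scheme", None),      # no preceding separator
    ("host",   "://"),
    ("path",   "/"),
    ("query",  "?"),
]

def parse(text: str, spec=URL_SPEC) -> dict:
    result, rest = {}, text
    # The text belonging to field i runs up to the *next* field's separator.
    for (name, _), nxt in zip(spec, spec[1:] + [(None, None)]):
        if nxt[1] is None:
            result[name], rest = rest, ""
        else:
            value, _, rest = rest.partition(nxt[1])
            result[name] = value
    return result

def stringify(obj: dict, spec=URL_SPEC) -> str:
    out = ""
    for name, sep in spec:
        out += (sep or "") + obj[name]
    return out

url = "https://example.com/some/path?x=1"
assert stringify(parse(url)) == url   # round-trips through the same spec
```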
A:
... that will be used to parse: string -> object
Em, no. The output of a parser is not an "arbitrary object". It is a parse tree (and a boolean indicating whether the input string matched the given grammar or not).
I haven't seen any data structures for defining how stringifying would work (similar to a PEG), but I think this would be possible.
That is because there is no data structure needed to stringify a parse tree (except the tree itself). Just do a depth-first in-order tree traversal and concatenate the string representations of the nodes - that should result in the same string you started with (assuming your parser did not swallow characters like whitespace from the tree).
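For what it's worth, here is a minimal sketch of that traversal. The tree below is hand-built for illustration rather than produced by any particular parser:

```python
# A parse tree whose leaves hold the original text. Stringifying is nothing
# more than an in-order walk that concatenates the leaves back together.
tree = ("url",
        ("scheme", "https"), ("sep", "://"),
        ("host", "example.com"),
        ("path", "/index.html"))

def stringify(node) -> str:
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return children[0]                               # leaf: emit its text verbatim
    return "".join(stringify(c) for c in children)       # inner node: concatenate children

print(stringify(tree))   # -> https://example.com/index.html
```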
The number of units the US will produce in one month would break a record.
But there are still shale issues, even with the boom.
Capacity is one factor to look out for.
Shale oil production in the United States will rise by a record-breaking 144,000 bpd from May to June, hitting 7.178 million bpd, the Energy Information administration estimated in its latest Drilling Productivity Report.
Hardly surprisingly, the Permian will lead the way with a 78,000-bpd increase in production, from 3.199 million bpd this month to 3.277 million bpd in June. The Permian will be followed by Eagle Ford, where average daily oil production will rise by 33,000 bpd from 1.354 million bpd to 1.387 million bpd.
The only shale play that will not register an increase in oil production will be Haynesville, where gas production, however, will grow by 201 million cu ft daily—the third-highest monthly gas production increase in the shale patch. The highest will be in Appalachia, where gas production will rise by 373 million cu ft between May and June.
But the record-breaking increase in production is not without problems, especially in the Permian. Production there is rising so fast, the transport infrastructure cannot keep up with it, and bottlenecks are beginning to emerge.
According to analysts, the current pipeline capacity in the Permian will be exceeded by the middle of the year, which means new ones need to be built urgently. There are already proposals for new capacity of a total 2.4 million bpd and strong interest from producers, who are concerned about the discount their oil would have to trade to other shale crudes because of the transport constraints.
One of these is the EPIC pipeline, which will be able to carry 590,000 bpd of Permian crude to refineries and other buyers. The company behind the project, EPIC Midstream Holdings, has received commitments for almost a third of the planned capacity from Apache Corp and Noble Energy, and chances are more will follow and soon to secure an outlet for their oil. |
Erik Verlinde doesn't believe in gravity. Normally that wouldn't matter. Lots of people believe unusual things. However, in this case, we're talking about a respected string theorist and professor of physics at the University of Amsterdam. According to the New York Times, he's caused a veritable "ruckus" in the scientific community.
Reversing the logic of 300 years of science, he argued in a recent paper, titled “On the Origin of Gravity and the Laws of Newton,” that gravity is a consequence of the venerable laws of thermodynamics, which describe the behavior of heat and gases.
“For me gravity doesn’t exist,” said Dr. Verlinde, who was recently in the United States to explain himself. Not that he can’t fall down, but Dr. Verlinde is among a number of physicists who say that science has been looking at gravity the wrong way and that there is something more basic, from which gravity “emerges,” the way stock markets emerge from the collective behavior of individual investors or that elasticity emerges from the mechanics of atoms.
Your hair frizzles in the heat and humidity, because there are more ways for your hair to be curled than to be straight, and nature likes options. So it takes a force to pull hair straight and eliminate nature’s options. Forget curved space or the spooky attraction at a distance described by Isaac Newton’s equations well enough to let us navigate the rings of Saturn, the force we call gravity is simply a byproduct of nature’s propensity to maximize disorder.
If that doesn't make sense, you're not alone. Apparently, some of the most well-respected physicists don't get Verlinde's paper either. |
Optokinetic set-point adaptation functions as an internal dynamic calibration mechanism for oculomotor disequilibrium Summary Experience-dependent brain circuit plasticity underlies various sensorimotor learning and memory processes. Recently, a novel set-point adaptation mechanism was identified that accounts for the pronounced negative optokinetic afternystagmus (OKAN) following a sustained period of unidirectional optokinetic nystagmus (OKN) in larval zebrafish. To investigate the physiological significance of optokinetic set-point adaptation, animals in the current study were exposed to a direction-alternating optokinetic stimulation paradigm that better resembles their visual experience in nature. Our results reveal that not only was asymmetric alternating stimulation sufficient to induce the set-point adaptation and the resulting negative OKAN, but most strikingly, under symmetric alternating stimulation some animals displayed an inherent bias of the OKN gain in one direction, and that was compensated by the similar set-point adaptation. This finding, supported by mathematical modeling, suggests that set-point adaptation allows animals to cope with asymmetric optokinetic behaviors evoked by either external stimuli or innate oculomotor biases. INTRODUCTION Neural development is organized according to several levels of regulatory mechanisms. Although genetically coded chemoaffinity cues are important for the large-scale organization of neuronal projections, fine tuning of neural circuits requires activity-dependent plasticity in response to real-life experience. Before the onset of vestibular function in zebrafish during early development (;Bever and Fekete, 2002;), freely swimming larvae largely rely on their visual system. Owing to technological advancements, larval zebrafish have evolved into an excellent model organism for studying neural circuitry related to visuomotor learning and control (Bollmann, 2019;). A novel optokinetic set-point adaptation identified recently in larval zebrafish may reflect a visual experience-dependent eye movement adjusting mechanism (); however, its physiological function remains unclear. Optokinetic nystagmus (OKN) is an eye-tracking behavior that uses a large moving field to stabilize images on the retina. OKN consists of both eye tracking and eye position resetting, i.e., slow phase and quick phase, respectively. Negative optokinetic afternystagmus (OKAN), on the other hand, is an eye movement aftereffect in which the eyes move in the opposite direction of the previous OKN. Studies conducted in various animal species have shown that an optokinetic stimulation period of longer than a minute is required to elicit negative OKAN (Barmack and Nelson, 1987;;;;. After two days of unidirectional stimulation, negative OKAN can last for up to 1-4 days in rabbits (Barmack and Nelson, 1987). In our previous study, we established an underlying optokinetic set-point adaptation mechanism that can be accounted for the subsequent negative OKAN. Briefly, the optokinetic system exerts a negative-feedback control to reduce the error between retinal slip velocity and its set point by commanding the tracking eye movements. The default set point is about 0 deg/sec without previous visual stimulation, and to stabilize moving images on the retina, the retinal slip velocity should be reduced to zero. 
However, persisting optokinetic stimulation in one direction creates a sustained error signal that with time leads to the set-point adjustment to dampen the continuous eye-tracking movement. As a In most of the previous investigations of negative OKAN, the behavior was induced by a prolonged period of unidirectional optokinetic stimulation (Barmack and Nelson, 1987;;Bures and Neverov, 1979;;;;Maioli, 1988;;;Henn, 1977, 1978;Waespe and Wolfensberger, 1985;). Such study designs are far from animals' visual experience during natural exploration. Since a persistent unidirectional optokinetic stimulus is not common outside the laboratory, it remains unclear what the physiological relevance of the optokinetic set-point adaptation is in everyday life. On the other hand, freely swimming larval zebrafish demonstrated spontaneous turning behaviors that consist of alternating states in which fish repeatedly turned in one direction before switching to the other (). Thus, in accordance with their swimming patterns, in the current study, we investigated OKN and negative OKAN under a direction-alternating optokinetic stimulation paradigm with the aim of developing a comprehensive understanding of the set-point adaptation and its functional significance in animals. Specifically, the set-point adaptation was measured under prolonged optokinetic stimulation, with the direction alternating every 15 s. Moreover, we studied the OKN and the subsequent OKAN by applying both symmetric and asymmetric stimulus velocities in different directions. Our results revealed that prolonged asymmetric direction-alternating optokinetic stimulation was sufficient to induce the optokinetic set-point adaptation and the resulting negative OKAN in darkness, which further suggests that the adaptation is a consequence of temporal integration and not an instantaneous effect of the visual experience. Most strikingly, the results showed that symmetric stimulation could lead to negative OKAN in individual larvae that inherently displayed asymmetric OKN gains in different directions. Ultimately, our mathematical model predicts that innate OKN biases will result in set-point adaptation and the subsequent negative OKAN. Thus, our study suggests that the functional relevance of the set-point adaptation is to adjust the inherent oculomotor disequilibrium in the optokinetic system. RESULTS Asymmetric direction-alternating optokinetic stimulation is sufficient to elicit set-point adaptation and the resulting negative optokinetic afternystagmus To understand how differences in visual experience may tune the underlying optokinetic set point, and how such a process would further impact the oculomotor behavior, we recorded eye movements before, during, and after optokinetic stimulation in larval zebrafish five days postfertilization. Figure 1 depicts the experimental setup and the different stimulus paradigms of the current study. Specifically, unidirectional, symmetric direction-alternating (SA), and asymmetric direction-alternating (AA) stimulation were applied ( Figure 1C). Figure S1A shows a representative eye-position trace and the corresponding slow-phase velocity (SPV) of zebrafish larvae under unidirectional stimulation-the typical experimental procedure to elicit negative OKAN. During the pre-stimulus dark period, the eyes of the fish spontaneously beat to an eccentric position, followed by centripetal eye drift owing to the leaky velocity-to-position integrator in larval zebrafish (). 
The beating directions were not biased toward either side. During the 10-min OKN period, the SPV decreased over time, indicating an ongoing set-point adaptation of the retinal slip velocity. During the post-stimulus dark period, there was robust negative OKAN which manifested as eye drifts, with SPV occurring in a direction opposite to the previous OKN. In contrast to the unidirectional stimulation, with symmetric direction-alternating stimulation of +/10 deg/sec stimulus velocities alternating every 15 s (10/10 SA stimulation, Figure 1Cii) did not lead to an observable OKAN in the majority of the animals (Figure 2A). To examine whether introducing different velocities in the two directions would give rise to negative OKAN, we applied two different asymmetric direction-alternating stimulation paradigms: 10 deg/sec in one direction and 5 deg/sec in the opposite direction (10/5 AA stimulation, Figure 1Ciii), and 20 deg/sec in one direction and 5 deg/sec in the opposite direction (20/5 AA stimulation, Figure 1Civ). After the 10/5 AA stimulation, in most larvae no obvious OKAN was observed ( Figure S1B). In contrast, after the 20/5 AA stimulation, most larvae displayed a robust negative OKAN with SPV directing opposite to the faster stimulus velocity ( Figure 2B). Collectively, these data reveal that unidirectional stimulation is not necessary. Rather, velocity asymmetry under the AA stimulation is sufficient to elicit negative OKAN. Figure 1. Experimental design to elicit optokinetic set-point adaptation in larval zebrafish (A and B) Experimental setup of eye recording in larval zebrafish. Individual larval zebrafish were restrained with agarose and placed in the middle of an optokinetic cylinder. Optokinetic nystagmus (OKN) was elicited by moving gratings projected on the cylinder and recorded by an infrared (IR) camera on top of the fish. (C) From top to bottom, schematic illustrations show optokinetic stimulations with (i) unidirectional +10 deg/sec, (ii) +/10 deg/sec symmetric alternating (10/10 SA), (iii) +10/5 deg/sec asymmetric alternating (10/5 AA), and (iv) +20/5 deg/sec asymmetric alternating (20/5 AA) stimulus velocities. Under all stimulus conditions, a 5-min prestimulus dark period and a 10-min post-stimulus dark period were included. Stimulus duration was 10 min for (i) and twice 20 min with a 5-min inter-stimulus dark period for (ii)-(iv). OPEN ACCESS iScience 25, 105335, November 18, 2022 3 iScience Article under sustained AA stimulation. Interestingly, the T-N asymmetry was only recorded under 10/10 SA and 10/5 AA stimulation and was no longer observable under 20/5 AA stimulation ( Figure S3), suggesting that significant stimulus velocity differences between left and right directions may prevail over the inherent T-N asymmetry. To gain an understanding of the nature of the mechanisms that enable negative OKAN under AA but not SA stimulation, we built a mathematical model that incorporated both sensory habituation and set-point adaptation () but was adapted to account for bidirectional stimulation ( Figure 3A, see STAR methods). Best-fit parameter values were obtained by using one-half of the behavioral dataset for a given stimulation condition, and quantifying goodness-of-fit using the other half of the dataset (see STAR methods). Overall, our model successfully reproduced experimentally observed eye movements under directionalternating stimulation (Figures 3 and S2). 
Specifically, Figure 3 shows the model-predicted SPV in comparison with the empirical data under both 10/10 SA ( Figures 3B-3D) and 20/5 AA ( Figures 3E-3G) stimulation. Generally, the set-point adaptation operator of the model is counteractively charged with opposite signs under the alternating cycles of the stimulus directions. Through SA stimulation, the inputs with opposite signs cancel each other out, resulting in an unchanged set point value of around 0 deg/sec. In comparison, under AA stimulus conditions, the adaptation integrator is charged more in the faster-stimulus direction than the other. As a result, a set point is built over time. Conceptually, the changed set point alone is expected to reduce the SPV in the direction of the faster stimulus but increase it with an equal value in the opposite direction. However, with the incorporation of sensory habituation, the model predicts a relatively stable SPV in the slower-stimulus direction yet an SPV with a significant velocity reduction in the faster-stimulus direction ( Figures 3E-3G). Furthermore, during the post-stimulation period after the 20/5 AA stimulus, the non-zero set point leads to negative OKAN ( Figure 3G). Similarly, our model also predicts SPV analogous to the experimental data under a 10/5 AA stimulus ( Figure S2). Notably, although our model also predicts negative OKAN after 10/5 AA stimulation (albeit weaker than the 20/5 AA condition), it was not obvious in the experimental data. Individual larvae displayed behavioral asymmetries under symmetric optokinetic stimulation Based on the population SPV median, only the asymmetric stimulation would elicit set-point adaptation and the resulting negative OKAN ( Figure 3). Surprisingly, however, eye-position traces revealed that some individual larvae did develop negative OKAN under symmetric stimulation. Figure 4A depicts a representative trace of a larva that displayed robust negative OKAN following 10/10 SA stimulation. We wondered whether this indicated that the preceding SPV was biased toward one direction. To overcome the challenge of spotting minor behavioral asymmetries in the left-right directions on top of the robust T-N asymmetry (;Braun and Gault, 1969;De';;;Huang and Neuhauss, 2008;Keng and Anastasio, 1997;Klar and Hoffmann, 2002;Mueller and Neuhauss, 2010;;Wallman, 1993;Wallman and Velez, 1985), we computed the SPV median of the population (n = 29), applied it as the standard response curve, then compared that with single larva's behavior ( Figures 4B-4D). Indeed, the analyzed results confirmed that, compared to the population median, the representative larva displays a faster SPV in the positive direction but a similar SPV in the other direction during early OKN ( Figure 4C), followed by negative OKAN in the negative direction ( Figure 4D). To better visualize the SPV difference, we subtracted the population median from the individual's SPV, denoted as DSPV ( Figures 4E-4G). The DSPV of the representative individual is biased in a positive direction during the early OKN ( Figure 4F); moreover, throughout the stimulation, the asymmetry becomes slightly mitigated, as shown during the late OKN ( Figure 4G). To quantify the asymmetric responses and the resulting adaptation under symmetric stimulation across individuals, we computed average DSPVs for each subject within a time window of 4 min before, at the. 
Continued sensory habituation operator, |u|, which produces the absolute value of the input (orange shaded area), together with a sign switch of output lead to a continuous habituating effect regardless of stimulus direction. The filtered V r passes through a nonlinear gain (T-N gain) that captures the T-N asymmetry. The error between the filtered V r and the set point is scaled by oculomotor gain (g) and added to the velocity storage mechanism (VSM) to control V e. VSM contains a leaky integrator with a time constant of T vsm and a gain of k vsm that contributes to the eye velocity. V e is then integrated with a time constant of T a and a gain of k a to adjust the set point. Considering the starting direction of alternating stimulation was randomized, we aligned the first stimulus direction as positive instead of left or right. Therefore, the plots show one eye started with a nasalward movement (blue trace) and another eye started with a temporalward movement (red trace). The stimulus image pattern (darkness or gratings) and the stimulus velocity are shown on the top as horizontal bars and lines, respectively. iScience Article beginning of, at the end of, and after the OKN, depicted as pre, early, late and post, respectively ( Figures 4E-4G). We skipped the first minute post-stimulus to avoid the influence of the velocity storage (). Probability distribution plots revealed that data of both early and late OKN display normal distributions centered at around 0 deg/sec ( Figure 4H). Notably, compared to the early OKN period, the bell curve of DSPV in the late OKN generates a taller and narrower shape, which corresponds to the mitigated behavioral asymmetry over prolonged stimulation and thus less DSPV data distributed at two tails. Next, we plotted the late DSPV over the early DSPV. A Pearson correlation test revealed a significant positive correlation, and the slope of regression line is significantly smaller than 1 ( Figure 4I), consistent with the reduced variance of the late DSPV distribution ( Figure 4H). In other words, fewer fish showed behavioral asymmetry after a prolonged period of symmetric stimulation. To avoid data misinterpretation owing to regression toward the mean, we additionally computed and compared our data with the null expectation derived from the temporally shuffled data ( Figures 4I and S4; see STAR methods). We then computed the changes in DSPV across stimulus phases (late DSPV -early DSPV, DDSPV) and plotted them over the early DSPV ( Figure 4J). Our data revealed a negative correlation with a significantly smaller regression-line slope compared to the regression-line slope of the shuffled data, implying that greater inherent behavioral asymmetries (i.e., early DSPV) would lead to a higher degree of the optokinetic setpoint adaptation. The gradual reduction of the behavioral asymmetry can be further seen in Figure S4. Finally, we also found a significant negative correlation between changes in eye movements in the dark before and after OKN (post-DSPV -pre-DSPV, DDSPV) and the early DSPV ( Figure 4K), which further suggests the dependency of negative OKAN on the inherent OKN asymmetry. Degrees of innate bias are accountable for different behavioral asymmetries and interindividual variations Based on the empirical data, we hypothesize that under symmetric stimulation, individual larvae that underwent set-point adaptation and developed the resulting negative OKAN may embrace an innate directional bias within the optokinetic system. 
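Before turning to the innate-bias variant below, it may help to write the operators named in the Figure 3 legend as explicit update rules. The following is one plausible formalization built only from the quantities named above (the filtered retinal-slip velocity, the set point, the gains g, k_vsm, k_a and the time constants T_vsm, T_a); it is a reading of the verbal description, not the authors' fitted equations:

```latex
% One plausible formalization of the operators described above
% (a reading of the verbal description, not the authors' fitted equations).
% v_r: habituation- and gain-filtered retinal-slip velocity;  A: adaptive set point;
% S: velocity stored in the VSM;  v_e: slow-phase eye velocity.
\begin{aligned}
  e(t) &= v_r(t) - A(t), \\
  v_e(t) &= g\,e(t) + k_{\mathrm{vsm}}\,S(t), \\
  T_{\mathrm{vsm}}\,\dot{S}(t) &= -S(t) + e(t), \\
  T_a\,\dot{A}(t) &= -A(t) + k_a\,v_e(t).
\end{aligned}
```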
To validate this conceptual model, we introduced a constant value for ''bias'' ( Figure 5A). Indeed, under symmetric stimulation, varying this value can predict either no asymmetry ( Figures 5B-5D) or robust asymmetry (Figures 5E-5G) in SPV in both directions, with the latter also predicting the resulting negative OKAN. Next, we optimized the parameter fitting for each larva to estimate the innate bias based on its individual empirical data. The distribution of the estimated innate bias is computed and depicted in Figure 5H. As the distribution is roughly symmetric and centers at 0 deg/sec with a few outliers biased in either direction, empirically the population median SPV does not show asymmetric OKN or negative OKAN. Furthermore, not only does the innate bias predict the DSPV (Figures 5I and 5J; the same DSPV data are shown in Figure 4I), but it reveals a higher correlation coefficient with the early DSPV ( Figure 5I) than with the late DSPV ( Figure 5J), suggesting that the innate bias-caused ocular motor asymmetry can be compensated by set-point adaptation. The bias also predicts the change in eye movements in the dark (post-DSPVpre-DSPV, DDSPV; Figure 5K), suggesting that negative OKAN is dependent on innate bias. In conclusion, adjusting the innate bias in the model will predict the magnitude of behavioral asymmetry and the resulting set-point adaptation. Finally, we applied the estimated biases to simulate DSPV ( Figures 6A-6F) in all subjects. Overall, the model simulation predicts all critical features revealed by the empirical data ( Figure 6, compared to Figure 4). Specifically, the variance of the DSPV distribution is reduced throughout the stimulation ( Figures 6G-6I and S5). We also found a significant negative correlation between the change in eye movements in the iScience Article dark (post-DSPV -pre-DSPV, DDSPV) and the early DSPV ( Figure 6J). Altogether, the model is generalizable across individuals, and the innate bias spectrum is accountable for the inter-individual variation. Finally, in addition to the proposed model, we further tested an alternative model by introducing an innate bias at the motor level ( Figure S6). However, the innate bias at the motor level drives a behavioral asymmetry prior to the optokinetic stimulus ( Figure S6F) which fails to predict our empirical observation. Also, this alternative model does not reproduce the compensation of behavioral asymmetry throughout the symmetric stimulation (Figures S6H-S6J). DISCUSSION In vertebrates, a stable oculomotor system relies on precise sensorimotor control and coordination, which largely involves both the visual and vestibular systems. The optokinetic system plays an essential role in stabilizing the retina with respect to the visual surroundings. Any disturbance or malfunction in the system may cause oculomotor disequilibrium and interfere with accurate sensorimotor transformation. In the current study, we describe two types of directional eye movement bias in the optokinetic system: the typical T-N asymmetry existing in most lateral-eyed animals (;Braun and Gault, 1969;De';;;Huang and Neuhauss, 2008;Keng and Anastasio, 1997;Klar and Hoffmann, 2002;Mueller and Neuhauss, 2010;;Wallman, 1993;Wallman and Velez, 1985) and a left-right asymmetry that can readily be triggered by an asymmetric stimulus in a laboratory setting. On top of that, we noted an innate bias in the optokinetic system that can equally cause leftright asymmetry in some animals. 
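The qualitative claim here, that a constant directional bias under symmetric alternating stimulation charges the set-point integrator and leaves a residual drift in darkness, can be reproduced with a toy simulation. The sketch below is a deliberately stripped-down reduction of the model: the parameter values, the 15-s alternation, the assumption k_a = 1 and the simplified wiring are all illustrative guesses, not the fitted model from the paper.

```python
# Toy discrete-time simulation: symmetric alternating stimulation plus a constant
# "innate bias" added to the sensed velocity. All parameters are hypothetical.
import numpy as np

dt, T_a, g, bias = 0.1, 60.0, 0.8, 1.0         # s, s, unitless, deg/s (all assumed)
t = np.arange(0, 40 * 60, dt)                  # 40 min: 20 min stimulus + 20 min darkness

stim = np.where(t < 20 * 60, 10.0 * np.sign(np.sin(2 * np.pi * t / 30)), 0.0)  # +/-10 deg/s, 15-s half-cycles
light = t < 20 * 60                            # retinal slip exists only in light

set_point = np.zeros_like(t)
eye_vel = np.zeros_like(t)
for i in range(1, len(t)):
    sensed = (stim[i] + bias) if light[i] else 0.0             # biased retinal-slip signal
    error = sensed - set_point[i - 1]
    eye_vel[i] = g * error if light[i] else -set_point[i - 1]  # in darkness only the set point drives drift
    # leaky integration of eye velocity charges the set point (k_a assumed to be 1)
    set_point[i] = set_point[i - 1] + dt * (eye_vel[i] - set_point[i - 1]) / T_a

print(f"set point at end of stimulation: {set_point[light][-1]:.2f} deg/s")
print(f"mean drift in darkness (negative OKAN-like): {eye_vel[~light].mean():.2f} deg/s")
```

Even in this crude form, the residual drift in darkness points opposite to the bias, which is the signature the analysis above attributes to set-point adaptation of an innate oculomotor asymmetry.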
DISCUSSION

In vertebrates, a stable oculomotor system relies on precise sensorimotor control and coordination, which largely involves both the visual and vestibular systems. The optokinetic system plays an essential role in stabilizing the retina with respect to the visual surroundings. Any disturbance or malfunction in the system may cause oculomotor disequilibrium and interfere with accurate sensorimotor transformation. In the current study, we describe two types of directional eye-movement bias in the optokinetic system: the typical T-N asymmetry existing in most lateral-eyed animals (Braun and Gault, 1969; Huang and Neuhauss, 2008; Keng and Anastasio, 1997; Klar and Hoffmann, 2002; Mueller and Neuhauss, 2010; Wallman, 1993; Wallman and Velez, 1985) and a left-right asymmetry that can readily be triggered by an asymmetric stimulus in a laboratory setting. On top of that, we noted an innate bias in the optokinetic system that can equally cause left-right asymmetry in some animals. The aim of this study was to scrutinize how the optokinetic system applies set-point adaptation to adjust for oculomotor asymmetries.

Set-point adaptation results from the temporal integration of visual experience

Compared to negative OKAN, positive OKAN refers to another aftereffect in which the eyes move in the same direction as the preceding OKN. This is attributed to the velocity storage mechanism (VSM), which helps estimate head motion (Laurens and Angelaki, 2011). VSM-associated positive OKAN is relatively short-lasting and can be evoked by just 2 s of optokinetic stimulation (), after which it lasts up to 1-2 min (Demer and Robinson, 1983; Waespe and Henn, 1977). Furthermore, the stored velocity is sensitive to, and can be rapidly discharged by, the cerebellar uvula in response to variation in visual and vestibular sensory inputs. In contrast to the VSM, optokinetic set-point adaptation has recently been proposed as a novel underlying mechanism that compensates for persisting asymmetric sensory input from the periphery and can manifest as another distinct, longer-lasting oculomotor aftereffect, the negative OKAN (). Except for a few pathological situations, however, it is not common for animals to consistently turn their bodies so that rotation is always perceived in the same direction. Intuitively, the brain should be capable of rendering set-point adjustment under dynamic visual conditions and thus extracting and estimating the net sensory asymmetry over an extended period of time. Our results indeed show that unidirectional stimulation is not necessary: direction-alternating stimulation is sufficient to generate negative OKAN as the result of a temporally integrated set-point adaptation. This finding demonstrates very distinct characteristics of set-point adaptation and the resulting negative OKAN compared to the VSM and the resulting positive OKAN ().

Figure 5. An innate bias in the optokinetic system accounts for the asymmetric OKN and the corresponding set-point adaptation under the symmetric stimulus. (A) A modified mathematical model to explain the asymmetric optokinetic nystagmus (OKN) gain under symmetric stimulation and the following optokinetic afternystagmus (OKAN). The design is based on the proposed model shown in Figure 3A; however, a constant innate bias (yellow box) is introduced to the optokinetic system. (B-G) The model-predicted slow-phase velocity (SPV, colored lines) superimposed on the empirical SPVs of individual subjects (black lines) with symmetric (B-D) and asymmetric (E-G) eye movements. (E) shows the same data as Figure 4A. Because the starting direction of the alternating stimulation was randomized, we aligned the first stimulus direction as positive instead of left or right. Therefore, the plots show one eye starting with a nasalward movement (blue trace) and the other eye starting with a temporalward movement (red trace). The stimulus image pattern (darkness or gratings) and the stimulus velocity are shown on top as horizontal bars and lines, respectively.

An innate bias in the oculomotor system can lead to optokinetic nystagmus asymmetry and negative optokinetic afternystagmus

Given that the optokinetic system is generally expected to be directionally symmetric, we asked ourselves what the physiological necessity of the underlying set-point adaptation is. We observed that not all animals developed a perfectly symmetric optokinetic system. On the contrary, some animals displayed OKN asymmetry under symmetric stimulation, followed by negative OKAN (Figure 4).
It is worth noting that, analogous to the asymmetric OKN and the subsequent negative OKAN in healthy animals, patients with manifest latent nystagmus (MLN) and infantile nystagmus syndrome (INS) (), as well as the zebrafish INS model belladonna strain (), exhibit direction-asymmetric spontaneous nystagmus in light followed by negative OKAN in darkness (for patients; unpublished data for zebrafish). To conceptualize the factors that affect oculomotor behavioral asymmetry, we introduced a constant "innate" directional bias into the mathematical model incorporating sensory habituation and set-point adaptation (Figure 5A). Indeed, the model predicts the behavioral asymmetry of OKN (Figures 5E-5G), which varies across animals (Figure 6G), in line with the varying innate biases within the population (Figure 5H). Furthermore, this model also successfully predicts the mitigated OKN asymmetry over the stimulus period and the post-stimulus negative OKAN (Figure 6), phenocopying the empirical data (Figure 4). Accordingly, we argue that an innate bias can cause an asymmetric OKN which is attenuated by set-point adaptation. Since all animals used in the present study were wild-type and carried no artificially induced factor that might cause behavioral asymmetry, we wondered why and in what situation an innate bias might evolve in animals. In the young brain, coarse neural wiring has to be in place under molecular guidance, but experience-dependent fine-tuning remains in effect and critical. Prior to this fine-tuning, it is conceivable that inherent structural and functional asymmetry could arise stochastically. In fact, unwanted developmental asymmetry during the early developmental stage is not uncommon. For example, the initial somite development in zebrafish embryos often shows asymmetry in length and position between left and right, and later, at the fine-tuning stage, this asymmetry can be adjusted by molecular signaling according to the surface tension (). Set-point adaptation may play an analogous role in fine-tuning the optokinetic system to continuously seek a state of equilibrium through symmetric visual experience. We might also expect an innate bias deriving from unilateral or asymmetric physical conditions associated with injuries or diseases. Comparable cases have also been reported in the vestibular system. To estimate head rotation in the horizontal plane, signals from both sides of the vestibular end organs are constantly compared. An imbalance of rotational signals from the two sides owing to a unilateral vestibular loss can lead to a vestibular behavioral asymmetry, the vestibular nystagmus (Fetter and Zee, 1988). The subsequent recovery process has been described as a set-point adaptation mechanism (). In the current study, a similar mathematical framework of vestibular set-point adaptation () has been constructed to simulate the adaptation process under optokinetic asymmetry, and the results validate the applicability of the model to explaining the empirical phenomena.

Multiple underlying mechanisms of set-point adaptation in the oculomotor system

Besides the behavioral aspects of optokinetic set-point adaptation, several valuable research works have used molecular markers and/or neural-activity recordings to investigate its underlying mechanisms. The results suggest multiple processing stages and pinpoint the involved brain areas. A recent study identified the pretectum as crucial for initiating negative OKAN in zebrafish larvae ().
The initial behavior-compensating mechanism may be relatively short and temporary; however, the underlying ongoing molecular events may further shape the anatomical and functional neural circuits to maintain equilibrium. Various biochemical experiments have demonstrated that prolonged unidirectional optokinetic stimulation leads to changes in the expression profile of molecules known to play roles in the neural plasticity of floccular Purkinje cells (Barmack and Qian, 2002; Barmack et al., 2014). Thus, it is conceivable that the corresponding molecular signaling cascades may underlie or consolidate the dynamic calibration process during oculomotor control. The link between negative OKAN and the vestibular nucleus was identified in the past (Waespe and Henn, 1977). Relevant studies of the vestibular nucleus may potentially shed light on the underlying mechanisms of optokinetic set-point adaptation. In laboratory rodents, unilateral labyrinthectomy has been introduced to model long-term vestibular imbalance and the subsequent functional restoration (Darlington and Smith, 2000). Through intrinsic () plasticity of the vestibular nucleus, animals recovered from the surgery-induced behavioral imbalance. However, in mice that underwent unilateral labyrinthectomy, the ipsilesional vestibular nucleus showed increased excitability within 24 h of the injury but returned to its normal firing state while the behavioral restoration remained (). This further suggests that, in addition to vestibular nucleus plasticity, multiple mechanisms were involved in the compensating tasks. To obtain a comprehensive understanding of the dynamic mechanisms underlying set-point adaptation in different sensory systems, further mechanistic studies, including a screening of larger-scale functional brain networks, will be indispensable.

Limitations of the study

In the current study, we reported a left-right asymmetry of optokinetic gain that naturally occurs at the early developmental stage of wild-type larval zebrafish and demonstrated a fine-tuning sensory adaptation mechanism in the optokinetic system that helps to compensate for the naturally occurring neurobehavioral disequilibrium. However, the alternating duration of stimuli in nature is dynamic, averaging around 6 s (), which was not perfectly reproduced by our optokinetic paradigm. Also, a relatively strong visual stimulus (i.e., high contrast and stimulus velocity) was given within a relatively short (i.e., 1 h) recording period. Experiments with milder stimuli but longer recording times, or with repeated exposures, could simulate natural conditions more closely and might generate a longer-lasting adaptation. Although set-point adaptation functioning as an internal calibration mechanism for oculomotor disequilibrium is evident in zebrafish following our current experimental procedure, understanding its role in neural development will require further cross-age comparisons of innate bias in zebrafish raised in a relatively natural environment.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

Lead contact

Further information and requests about this study should be directed to and will be fulfilled by the lead contact, Ting-Feng Lin ([email protected]).

Materials availability

This study did not generate new unique reagents.
Data and code availability

- Data presented in this study are saved as MAT-files (version 7.0) and are publicly available from Mendeley Data: https://data.mendeley.com/datasets/wgyyp4jw5w/1.
- The code of the mathematical model is written in MATLAB R2020a and is publicly available from GitHub: https://github.com/tingfenglin-ac/Optokinetic-set-point-adaptation-model.
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

Larval zebrafish

In accordance with the guidelines of the Federal Veterinary Office of Switzerland (FVO), TSchV art. 112, no ethical approval is required for studies on larvae under the age of 120 h/5 days post-fertilization, prior to independent feeding. Zebrafish embryos of the TU wild-type line were bred and maintained at 28°C in E3 solution (5 mM NaCl, 0.17 mM KCl, 0.33 mM CaCl2, and 0.33 mM MgSO4) () under a cycle of 14 h of light and 10 h of darkness. Larvae at the age of 5 days post-fertilization were used for the experiments.

Visual stimulation

To minimize the influence of circadian rhythms on zebrafish behavior, all experiments were performed between 8:00 AM and 7:00 PM. Zebrafish larvae were restrained with low-melting agarose (Sigma Type VII-A) in the center of an optokinetic cylinder without immobilizing the eyes (Arrenberg, 2016).

Stimulus procedure

Throughout this study, prestimulatory eye movements were recorded in darkness for 5 min. Unidirectional optokinetic stimulation at 10 deg/sec was given for 10 min, followed by 10 min of poststimulatory darkness. An alternating stimulation paradigm consisted of two stimulatory phases interspersed with pre-, inter-, and poststimulatory phases. Prestimulatory eye movements were recorded in 5 min of darkness, followed by 20 min of direction-alternating moving gratings, i.e., the first stimulatory phase. After the first stimulatory phase, the aftereffect was tested in the dark for 5 min (the interstimulatory phase), and then alternating stimulation was given for another 20 min (the second stimulatory phase). After the second session of alternating stimulation, the recording continued for 10 min in darkness (the poststimulatory phase). During both symmetric (10 deg/sec in both directions; 10/10 SA stimulation) and asymmetric (20 or 10 deg/sec in one direction and 5 deg/sec in the other; 20/5 and 10/5 AA stimulations) alternating optokinetic stimulation, a cycle of a 15-s stimulus in one direction followed by a 15-s stimulus in the other direction was repeated for the entire 20-min stimulatory phase (Figure 1C). The starting direction of the stimulus was random.

Eye movement recording and analysis

Zebrafish larvae at the age of 5 days post-fertilization were chosen and tested. The restrained larva was illuminated from below with infrared (IR)-emitting diodes (λpeak = 875 ± 15 nm, OIS-150 880, OSA Opto Light GmbH, Germany). Eye movements were recorded at a sampling rate of 40 frames/second by an IR-sensitive charge-coupled device (CCD) camera. The area around the eyes was manually selected as the region of interest. Data were analyzed using custom-developed programs written in MATLAB (MathWorks, Natick, Massachusetts, USA). Eye-position traces were smoothed using a Gaussian filter with a cutoff frequency of 5.5 Hz. Eye-movement velocity was calculated as the derivative of the eye-position traces. The SPV was estimated as the median velocity in the first second of each slow phase after discarding the quick-phase eye movements. Quick-phase eye movements were first selected by an algorithm with a velocity threshold of 20 deg/sec and an eye-dislocation threshold of 1 deg, after which the cursor was manually adjusted if necessary. The population median was obtained from the average SPV of every 1-s time bin in each individual fish. In this study, we collected data from both eyes, under the assumption that the inherent T-N asymmetry of the optokinetic system in lateral-eyed animals (Braun and Gault, 1969; Erickson and Barmack, 1980; Huang and Neuhauss, 2008; Keng and Anastasio, 1997; Klar and Hoffmann, 2002; Mueller and Neuhauss, 2010; Wallman, 1993; Wallman and Velez, 1985) may lead to different results for each eye.
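A minimal MATLAB sketch of the SPV extraction described above is given here for orientation. The Gaussian window length, the slow-phase segmentation, and the omission of the manual cursor adjustment are simplifying assumptions; the authors' custom programs may differ in detail.

```matlab
function spv = estimateSPV(eyePos, fs)
% Sketch of the SPV estimation described above (not the authors' code).
% eyePos: eye-position trace in deg; fs: sampling rate (40 frames/s).

% 1) Smooth the position trace (window chosen to roughly match a ~5.5 Hz cutoff; assumption).
posSmooth = smoothdata(eyePos, 'gaussian', 9);

% 2) Velocity as the derivative of the smoothed position.
vel = [0; diff(posSmooth(:))] * fs;                 % deg/s

% 3) Flag quick phases: velocity above 20 deg/s (the 1-deg dislocation
%    criterion and manual adjustment are omitted in this sketch).
isQuick = abs(vel) > 20;

% 4) SPV: median velocity over the first second of each slow phase.
slowStarts = find(diff([1; isQuick]) == -1);        % indices where a slow phase begins
spv = nan(numel(slowStarts), 1);
for k = 1:numel(slowStarts)
    idx = slowStarts(k) : min(slowStarts(k) + fs - 1, numel(vel));
    idx = idx(~isQuick(idx));                       % keep only slow-phase samples
    if ~isempty(idx)
        spv(k) = median(vel(idx));
    end
end
end
```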
Model

We used the model from our previous study () to explain how set-point adaptation, as well as negative OKAN, is induced during asymmetric stimulation. During stimulus presentation, the retinal slip, V_r(t), is computed by comparing the eye velocity, V_e(t), to the stimulus velocity, V_s(t), as V_r(t) = V_s(t) - V_e(t), and is fed into a habituation leaky integrator which captures habituation in retinal ganglion cells and downstream areas () (Figure 3A). As the previous model had been used to predict behavior under unidirectional stimulation, in order to preserve the proposed function of each operator under direction-alternating stimuli, we here adapted the model to the new stimuli. In particular, we added an absolute-value function and corrected the sign of the habituated retinal-slip velocity signal accordingly, as the habituation of the signal does not depend on the direction in which the stimulus is moving (see Figure 4Ai of Pérez-Schuster et al. ()). Thus, the output of the habituation integrator, H(t), is described by Equation 1, a leaky integrator driven by |V_r(t)|, where V_r(t) denotes the retinal-slip velocity, |.| denotes the absolute-value operator, and k_h and T_h denote the habituation integrator gain and time constant, respectively. Our behavioral data demonstrate asymmetric nasal and temporal gains, consistent with previous studies (Huang and Neuhauss, 2008; Mueller and Neuhauss, 2010), which are captured via a piecewise linear function whose value depends on whether the eye is moving in a nasalward or temporalward direction. Nasal/temporal gains are calculated from the data by averaging and scaling the eye velocity during nasal/temporal eye movements (see Table S1). The filtered retinal-slip signal, V_f(t), is then computed as follows:

V_f(t) = g_T/N [h V_r(t) - sign(V_r(t)) H(t)] (Equation 2)

where sign(.) is the sign function, and h and g_T/N are the habituation and temporal/nasal gains, respectively. V_f(t) is compared with the internal set-point signal (the output of the adaptation integrator), A(t), to define an error signal, E(t) = V_f(t) - A(t), that drives the oculomotor system in order to match the eye movement to the stimulus as V_e(t) = g E(t) + Q(t), in which Q(t) is the output of the velocity storage integrator and g is the oculomotor gain. Moreover, Q(t) and A(t) are computed by solving the differential equations given as Equations 3 and 4, where k_a, T_a, k_vsm, and T_vsm denote the adaptation gain, adaptation time constant, velocity storage gain, and velocity storage time constant, respectively. During darkness, V_f is zero and, therefore, the error signal is solely defined by inverting the sign of the set-point value. Equations 1, 3, and 4 constitute a system of ordinary differential equations with three variables, H(t), A(t), and Q(t), which was solved using the ode45 function in MATLAB for a given set of parameters (i.e., time constants and gains).
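Because the bodies of Equations 1, 3, and 4 did not survive text extraction, the sketch below reconstructs the model only from the verbal description above: each of H(t), A(t), and Q(t) is written as a first-order leaky integrator with the stated gain and time constant, driven by |V_r|, V_e, and the error signal E, respectively, and an optional constant bias term stands in for the innate bias of Figure 5A. These right-hand sides, the choice of the error signal as the VSM input, the placement of the bias, and the fixed-point resolution of the algebraic loop for V_e are assumptions; consult the authors' GitHub repository for the published implementation.

```matlab
function dydt = oknModelSketch(t, y, VsFun, p)
% Sketch of the set-point adaptation model described in the text; NOT the published code.
% States: y = [H; A; Q] = habituation, adaptation (set point), velocity storage.
% p: struct with fields kh, Th, ka, Ta, kvsm, Tvsm, g, h, gN, gT, bias.
H = y(1);  A = y(2);  Q = y(3);
Vs = VsFun(t);                               % stimulus velocity (darkness encoded as 0)

% The eye velocity appears on both sides of the loop (through the retinal slip),
% so we resolve the algebraic loop with a few fixed-point iterations.
Ve = Q;
for it = 1:10
    Vr  = Vs - Ve;                           % retinal slip
    gTN = p.gN*(Ve >= 0) + p.gT*(Ve < 0);    % piecewise T-N gain (positive taken as nasalward; assumption)
    Vf  = gTN*(p.h*Vr - sign(Vr)*H);         % filtered, habituated slip (assumed reading of Equation 2)
    if Vs == 0, Vf = 0; end                  % in darkness V_f is zero (see text)
    E   = Vf - A + p.bias;                   % error vs. set point, plus innate bias (assumed placement)
    Ve  = p.g*E + Q;                         % oculomotor gain plus velocity storage
end

dydt = [ (p.kh  *abs(Vr) - H) / p.Th  ;      % habituation integrator (assumed form of Equation 1)
         (p.ka  *Ve      - A) / p.Ta  ;      % set-point adaptation integrator (assumed form)
         (p.kvsm*E       - Q) / p.Tvsm ];    % velocity storage integrator (assumed form)
end
```

A protocol can then be simulated with, e.g., [tt, yy] = ode45(@(t,y) oknModelSketch(t, y, VsFun, p), [0 3600], zeros(3,1)), where the hypothetical VsFun encodes the darkness and alternating-grating phases of the stimulus.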
Parameter estimation

To estimate the parameters (i.e., k_a, T_a, k_vsm, T_vsm, k_h, T_h, g, and h) of the model for a given stimulation condition, we used the System Identification Toolbox in MATLAB. We used a nonlinear least-squares method and a trust-region algorithm with fixed time steps of 0.1 to find the set of parameters that maximizes the variance-accounted-for (VAF) of our model:

VAF = (1 - var(V_e,model - V_e,measured) / var(V_e,measured)) × 100% (Equation 5)

where V_e,model and V_e,measured denote the simulated and experimental values, and var(.) denotes the variance. A model with a perfect fit yields a VAF value of 100%, whereas any deviation of the simulated model from the experimental data results in VAF values less than 100%. For the median population behavior, we chose the data from the 10/10 SA stimulus condition to estimate the parameters of the model. Each stimulation protocol includes 5 min of darkness followed by two stimulatory phases. The first stimulatory phase consists of 20 min of stimulus followed by 5 min of darkness; the second stimulatory phase consists of 20 min of stimulus followed by 10 min of darkness. We estimated the parameters of the model using the data from one stimulatory phase and tested the performance of the model using the data from the other. Specifically, we built the first model using the data from the first stimulatory phase, then tested and validated it using the data from the second stimulatory phase. Similarly, we built the second model using the data from the second stimulatory phase, then tested and validated it using the data from the first stimulatory phase. Both models resulted in qualitatively and quantitatively similar performance, and the parameters were similar in both cases (Table S2). In all the representative simulations, we only predict the behaviors using the first model. When validating the model, we used the same parameters for all conditions except g_T/N, the temporal/nasal gain. Specifically, g_T/N is estimated separately for each stimulus condition (Table S1) by scaling the average SPV during OKN (i.e., the colored circles in Figures S3C, S3F, and S3I). After estimating the parameters using the data from a given stimulus condition, we quantified the goodness of fit for each stimulus condition. Here, we report the parameters estimated and tested using the symmetric ±10 deg/s stimulus and validated with the +20/-5 and +10/-5 deg/s stimuli (Table S2). Qualitatively similar results were obtained when using other stimulus conditions for parameter estimation (data not shown). Finally, we show that introducing an internal bias to the model can explain the observed adaptation and negative OKAN seen in the symmetric stimulation condition for particular individual fish (Figure 5A). The parameters of the model for individual fish were the same as those of the median population, except g_T/N for each fish.
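The VAF criterion and the least-squares fit can be sketched as follows. The use of lsqnonlin with the trust-region-reflective algorithm is an illustrative stand-in for the System Identification Toolbox workflow mentioned above, and simulateModel is a hypothetical wrapper that integrates the model of the previous section over the stimulus protocol; both are assumptions, not the published fitting code.

```matlab
% Sketch: fit model parameters by minimizing the residual (equivalently, maximizing VAF).
% VeMeas: measured eye velocity (column vector); tGrid: time grid with fixed 0.1-s steps;
% simulateModel(theta, tGrid): hypothetical wrapper returning the simulated V_e (column vector).
vaf = @(VeSim) (1 - var(VeSim - VeMeas) / var(VeMeas)) * 100;      % Equation 5

residFun = @(theta) simulateModel(theta, tGrid) - VeMeas;          % residual for least squares
opts     = optimoptions('lsqnonlin', 'Algorithm', 'trust-region-reflective');
theta0   = [1 10 1 100 1 10 1 1];   % illustrative initial guesses for [kvsm Tvsm ka Ta kh Th g h]
thetaHat = lsqnonlin(residFun, theta0, [], [], opts);

fprintf('VAF of fitted model: %.1f%%\n', vaf(simulateModel(thetaHat, tGrid)));
```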
Coordinated Control of Active Suspension and DYC for Four-Wheel Independent Drive Electric Vehicles Based on Stability

Active suspension control and direct yaw-moment control (DYC) are widely used in the vehicle control field. To resolve the coupling between these two controllers, a coordinated control of active suspension and DYC is proposed to further improve vehicle roll and yaw stability. To enhance the adaptive ability of the active suspension, a proportional-integral control optimized by a genetic fuzzy algorithm is introduced. DYC is designed based on sliding mode control, and to restrain chattering, the parameters of the sliding mode control are optimized by a genetic algorithm. Finally, a coordinated controller is presented based on the adaptive distribution of the anti-roll torque between the front and rear suspensions. The simulation results show that the proposed active suspension and DYC greatly improve roll and yaw stability, respectively, and the expected vehicle states are well tracked. In addition, the coordinated control is compared with simply using the two independent controllers under different tire-road friction coefficients and steering maneuvers. The results show that the coordinated control performs even better under each working condition.