Blends of polyetherimide and polyester resins derived predominantly from cyclohexanedimethanol and a carbocyclic dicarboxylic acid, such as, for example, a poly(cyclohexanedimethanol terephthalate) resin, that provide improved impact strength are disclosed in U.S. Pat. No. 5,439,987. Blends of polyetherimide resins and copolyesters of terephthalic acid and/or isophthalic acid, 1,4-cyclohexanedimethanol and ethylene glycol, that is, certain poly(cyclohexane-1,4-dimethylene-co-ethylene terephthalate) resins, that are said to exhibit a high flexural modulus are also disclosed in U.S. Pat. No. 5,439,987. Use of these polyetherimide-polyester blends has become prominent in areas such as microwave food containers, where visual clarity is desired and often demanded by consumers, and the articles formed from these blends are often subjected to significant stresses, including bending, such that tab-bending performance is important. This prominence is driving the need in the industry for improved blends. Consequently, polyetherimide-polyester blends that exhibit visual clarity, resistance to elevated temperature, and further improvements in thermal and hydrolytic stability, impact resistance, and tab-bending performance are desired.
{ "pile_set_name": "USPTO Backgrounds" }
Tomb of Tulak Hord

"Lord of Hate, Master of the Gathering Darkness and Dark Lord of the Sith. These are but a few of the titles worn by the great Tulak Hord. His command of the dark side and mastery of lightsaber techniques won Hord many battles, and each victory earned him enemies abroad and within the Sith ranks. Of the many who challenged his might, none were successful. Among Hord's greatest triumphs were the battles of Yn and Chabosh. With an army of dark side warriors and his faithful Dashade assassin at his side, he annihilated the rebels who defied the expansion of the Sith Empire and went on to conquer the Dromund system–setting the stage for Dromund Kaas to eventually become capital of the Empire. Imperial historians believe the worlds conquered by Hord number in the hundreds, but any records from his bygone era were lost in the Great Hyperspace War."

The tomb was infested with tuk'ata and shyracks, most likely drawn by the dark side power in the tomb. In the inner chamber of the tomb, inside Hord's coffin, Tulak Hord's Mask was still to be found.
{ "pile_set_name": "Pile-CC" }
Q: How to show the legend of a trend line?

Problem

It seems that I'm having difficulty showing the trend line generated using stat_smooth(). Before I used the argument show.legend = T, I had a graph that looks like this: [first image]. After adding the argument, I got something like this: [second image]. But you see, I want to show the trendline legend separately, like this: [third image]. How do I achieve this? My source code is here if you need it (I'd appreciate it if you can help me truncate the code to make it more concise):

library(ggplot2)
library(ggrepel)
library(ggthemes)
library(scales)
library(plotly)
library(grid)
library(extrafont)

# read data
econ <- read.csv("https://raw.githubusercontent.com/altaf-ali/ggplot_tutorial/master/data/economist.csv")
target_countries <- c("Russia", "Venezuela", "Iraq", "Myanmar", "Sudan",
                      "Afghanistan", "Congo", "Greece", "Argentina", "Brazil",
                      "India", "Italy", "China", "South Africa", "Spain",
                      "Botswana", "Cape Verde", "Bhutan", "Rwanda", "France",
                      "United States", "Germany", "Britain", "Barbados",
                      "Norway", "Japan", "New Zealand", "Singapore")
econ$Country <- as.character(econ$Country)
labeled_countries <- subset(econ, Country %in% target_countries)
vector <- as.numeric(rownames(labeled_countries))
econ$CountryLabel <- econ$Country
econ$CountryLabel[1:173] <- ''
econ$CountryLabel[c(labeled_countries$X)] <- labeled_countries$Country

# Data Visualisation
g <- ggplot(data = econ, aes(CPI, HDI)) +
  geom_smooth(se = FALSE, method = 'lm', colour = 'red', fullrange = T,
              formula = y ~ log(x), show.legend = T) +
  geom_point(stroke = 0, color = 'white', size = 3, show.legend = T)
g <- g + geom_point(aes(color = Region), size = 3, pch = 1, stroke = 1.2)
g <- g + theme_economist_white()
g <- g + scale_x_continuous(limits = c(1, 10), breaks = 1:10) +
  scale_y_continuous(limits = c(0.2, 1.0), breaks = seq(0.2, 1.0, 0.1)) +
  labs(title = 'Corruption and human development',
       caption = 'Source: Transparency International; UN Human Development Report')
g <- g + xlab('Corruption Perceptions Index, 2011 (10=least corrupt)') +
  ylab('Human Development Index, 2011 (1=best)')
g <- g + theme(plot.title = element_text(family = 'Arial Narrow', size = 14,
                                         margin = margin(5, 0, 12, 0)),
               plot.caption = element_text(family = 'Arial Narrow', hjust = 0,
                                           margin = margin(10, 0, 0, 0)),
               axis.title.x = element_text(family = 'Arial Narrow', face = 'italic',
                                           size = 8, margin = margin(10, 0, 10, 0)),
               axis.title.y = element_text(family = 'Arial Narrow', face = 'italic',
                                           size = 8, margin = margin(0, 10, 0, 10)),
               plot.background = element_rect(fill = 'white'),
               legend.title = element_blank()) +
  theme(legend.background = element_blank(),
        legend.key = element_blank(),
        legend.text = element_text(family = 'Arial Narrow', size = 10)) +
  guides(colour = guide_legend(nrow = 1))
g <- g + geom_text_repel(data = econ, aes(CPI, HDI, label = CountryLabel),
                         family = 'Arial Narrow', colour = 'grey10', force = 8,
                         point.padding = 0.5, box.padding = 0.3,
                         segment.colour = 'grey10')
g
grid.rect(x = 1, y = 0.996, hjust = 1, vjust = 0, gp = gpar(fill = '#e5001c', lwd = 0))
grid.rect(x = 0.025, y = 0.91, hjust = 1, vjust = 0, gp = gpar(fill = '#e5001c', lwd = 0))

Bonus Request

As a man of high aesthetic standard, I would like to know:

How to make country-label segments not straight? Refer to the third image; notice the segment line for 'China' is not straight.

How do I arrange my country labels so that they don't overlap the scatter points and the trendline? (I consulted this Stack Overflow post, and as you can see from my code, I created empty strings for the countries I don't need. However, the overlapping persists.)

How to convert the whole plot into an interactive plot that can be embedded on a website?

EDIT: Thanks @aosmith for the helpful suggestions. I followed this post and tried to override.aes my trend line.
This is what I added to the # Data Visualisation section:

g <- ggplot(data = econ, aes(CPI, HDI)) +
  geom_smooth(se = FALSE, method = 'lm', aes(group = 1, colour = "Trendline"),
              fullrange = T, linetype = 1, formula = y ~ log(x)) +
  scale_colour_manual(values = c("purple", "green", "blue", "yellow",
                                 "magenta", "orange", "red"),
                      guides(colour = guide_legend(override.aes = list(linetype = 1)))) +
  geom_point(...)
...

Thankfully it shows the trendline legend, but it is still not ideal. How do I improve the code?

A: The problem is in the guides statement. Here is the data visualization part of your code, somewhat fixed up:

# Data Visualisation
g <- ggplot(data = econ, aes(CPI, HDI)) +
  geom_smooth(se = FALSE, method = 'lm', aes(group = 1, colour = "Trendline"),
              fullrange = T, linetype = 1, formula = y ~ log(x)) +
  geom_point(stroke = 0, color = 'white', size = 3, show.legend = T) +
  scale_colour_manual(values = c("purple", "green", "blue", "yellow",
                                 "magenta", "orange", "red"))
g <- g + geom_point(aes(color = Region), size = 3, pch = 1, stroke = 1.2)
g <- g + theme_economist_white()
g <- g + scale_x_continuous(limits = c(1, 10), breaks = 1:10) +
  scale_y_continuous(limits = c(0.2, 1.0), breaks = seq(0.2, 1.0, 0.1)) +
  labs(title = 'Corruption and human development',
       caption = 'Source: Transparency International; UN Human Development Report')
g <- g + xlab('Corruption Perceptions Index, 2011 (10=least corrupt)') +
  ylab('Human Development Index, 2011 (1=best)')
g <- g + theme(plot.title = element_text(family = 'Arial Narrow', size = 14,
                                         margin = margin(5, 0, 12, 0)),
               plot.caption = element_text(family = 'Arial Narrow', hjust = 0,
                                           margin = margin(10, 0, 0, 0)),
               axis.title.x = element_text(family = 'Arial Narrow', face = 'italic',
                                           size = 8, margin = margin(10, 0, 10, 0)),
               axis.title.y = element_text(family = 'Arial Narrow', face = 'italic',
                                           size = 8, margin = margin(0, 10, 0, 10)),
               plot.background = element_rect(fill = 'white'),
               legend.title = element_blank()) +
  theme(legend.background = element_blank(),
        legend.key = element_blank(),
        legend.text = element_text(family = 'Arial Narrow', size = 10))
g <- g + geom_text_repel(data = econ, aes(CPI, HDI, label = CountryLabel),
                         family = 'Arial Narrow', colour = 'grey10', force = 8,
                         point.padding = 0.5, box.padding = 0.3,
                         segment.colour = 'grey10')
g + guides(colour = guide_legend(nrow = 1,
                                 override.aes = list(linetype = c(rep("blank", 6), "solid"),
                                                     shape = c(rep(1, 6), NA))))
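Distilled to its essentials, the fix above has two parts: map a constant label to colour inside aes() of the smoother so it earns its own legend key, then use override.aes to strip the line from the point keys and the point marker from the trendline key. A minimal, self-contained sketch of the same idea using R's built-in mtcars data (the dataset, colours, and labels here are illustrative, not from the original post):

```r
library(ggplot2)

# Map a constant string to `colour` inside aes() so the smoother
# gets its own legend entry alongside the grouped points.
p <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point(aes(colour = factor(cyl)), size = 3) +
  geom_smooth(aes(colour = "Trendline"), method = "lm",
              formula = y ~ x, se = FALSE) +
  scale_colour_manual(values = c("4" = "purple", "6" = "green",
                                 "8" = "blue", "Trendline" = "red")) +
  # override.aes: three point keys get no line, the trendline key
  # gets no point shape (levels "4", "6", "8" sort before "Trendline").
  guides(colour = guide_legend(
    override.aes = list(linetype = c(rep("blank", 3), "solid"),
                        shape = c(rep(16, 3), NA))))
p
```

The ordering of the linetype/shape vectors must match the legend's factor-level order, which is why the original answer uses rep(..., 6) before the "solid"/NA entries for its seven legend keys.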
{ "pile_set_name": "StackExchange" }
/*
 * This file is part of "SnipSnap Radeox Rendering Engine".
 *
 * Copyright (c) 2002 Stephan J. Schmidt, Matthias L. Jugel
 * All Rights Reserved.
 *
 * Please visit http://radeox.org/ for updates and contact.
 *
 * --LICENSE NOTICE--
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * --LICENSE NOTICE--
 */

package org.radeox.filter.context;

/**
 * InitialFilterContext is used to give the filter information after its
 * startup (e.g. locales).
 *
 * @author Stephan J. Schmidt
 * @version $Id: InitialFilterContext.java 7707 2006-04-12 17:30:19Z
 *          [email protected] $
 */
public interface InitialFilterContext {
}
{ "pile_set_name": "Github" }
Assessing the value of customized birth weight percentiles. Customized birth weight percentiles are weight-for-gestational-age percentiles that account for the influence of maternal characteristics on fetal growth. Although intuitively appealing, the incremental value they provide in the identification of intrauterine growth restriction (IUGR) over conventional birth weight percentiles is controversial. The objective of this study was to assess the value of customized birth weight percentiles in a simulated cohort of 100,000 infants aged 37 weeks whose IUGR status was known. A cohort of infants with a range of healthy birth weights was first simulated on the basis of the distributions of maternal/fetal characteristics observed in births at the Royal Victoria Hospital in Montreal, Canada, between 2000 and 2006. The occurrence of IUGR was re-created by reducing the observed birth weights of a small percentage of these infants. The value of customized percentiles was assessed by calculating true and false positive rates. Customizing birth weight percentiles for maternal characteristics added very little information to the identification of IUGR beyond that obtained from conventional weight-for-gestational-age percentiles (true positive rates of 61.8% and 61.1%, respectively, and false positive rates of 7.9% and 8.5%, respectively). For the process of customization to be worthwhile, maternal characteristics in the customization model were shown through simulation to require an unrealistically strong association with birth weight.
{ "pile_set_name": "PubMed Abstracts" }
/*
 * Copyright (c) 2002-2010 LWJGL Project
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 * * Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * * Redistributions in binary form must reproduce the above copyright
 *   notice, this list of conditions and the following disclaimer in the
 *   documentation and/or other materials provided with the distribution.
 *
 * * Neither the name of 'LWJGL' nor the names of
 *   its contributors may be used to endorse or promote products derived
 *   from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
 * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

package org.lwjgl.opencl;

import org.lwjgl.util.generator.opencl.CLDeviceExtension;

@CLDeviceExtension
public interface KHR_3d_image_writes {
}
{ "pile_set_name": "Github" }
Revisiting the overlap between autistic and schizotypal traits in the non-clinical population using meta-analysis and network analysis. The present study aimed to explore the relationship between autistic and schizotypal traits in the non-clinical population. We first conducted a meta-analysis to quantify the correlation between self-reported autistic traits and the three dimensions of schizotypal traits (positive, negative and disorganization). The strongest correlation was found between autistic traits and negative schizotypal traits (r = 0.536, 95% CI [0.481, 0.586]), followed by the disorganization (r = 0.355, 95% CI [0.304, 0.404]) and positive (r = 0.256, 95% CI [0.208, 0.302]) dimensions. To visualize the partial correlations between dimensional behavioural traits, we constructed a network model based on a large sample of college students (N = 2469). Negative schizotypal traits were strongly correlated with autistic social/communicative deficits, whereas positive schizotypal traits were inversely correlated with autistic-like traits, lending support to the psychosis-autism diametrical model. Disentangling the overlapping and diametrical structure of autism and schizophrenia may help to elucidate the aetiology of these two neurodevelopmental disorders.
{ "pile_set_name": "PubMed Abstracts" }
Edward William Bok (1863–1930). The Americanization of Edward Bok. 1921.

III. The Hunger for Self-Education

WITH school-days ended, the question of self-education became an absorbing thought with Edward Bok. He had mastered a schoolboy's English, but seven years of public-school education was hardly a basis on which to build the work of a lifetime. He saw each day in his duties as office boy some of the foremost men of the time. It was the period of William H. Vanderbilt's ascendancy in Western Union control; and the railroad millionnaire and his companions, Hamilton McK. Twombly, James H. Banker, Samuel F. Barger, Alonzo B. Cornell, Augustus Schell, William Orton, were objects of great interest to the young office boy. Alexander Graham Bell and Thomas A. Edison were also constant visitors to the department. He knew that some of these men, too, had been deprived of the advantage of collegiate training, and yet they had risen to the top. But how? The boy decided to read about these men and others, and find out. He could not, however, afford the separate biographies, so he went to the libraries to find a compendium that would authoritatively tell him of all successful men. He found it in Appleton's Encyclopædia, and, determining to have only the best, he saved his luncheon money, walked instead of riding the five miles to his Brooklyn home, and, after a period
{ "pile_set_name": "Pile-CC" }
Gillberg (wrestler)

Duane Gill (born July 10, 1959) is an American retired professional wrestler, best known for his appearances in the World Wrestling Federation (WWF) during the Attitude Era under the ring name Gillberg, a parody of then-rival promotion World Championship Wrestling's top star Goldberg. During his tenure in the WWF, Gill became a one-time Light Heavyweight Champion. He went on to hold the title for 15 months, becoming the longest-reigning Light Heavyweight Champion as recognized by WWE.

Professional wrestling career

Early career

Gill made his debut on the American independent scene as part of a masked tag team with Barry Hardy called The Lords of Darkness, with Hardy billed as Agony and Gill billed as Pain. On August 2, 1991, they defeated the Cream Team (Dino Casanova and Rip Sawyer) to become the Mid-Eastern Wrestling Federation's first-ever Tag Team Champions. The Lords participated in two of three 40-man battle royals held in 1992.

World Wrestling Federation (1991–1994)

Gill (sometimes teaming with Barry Hardy) became a jobber with the WWF in 1991, usually appearing on WWF Superstars of Wrestling and WWF Wrestling Challenge and losing to the likes of The Undertaker, Kamala, High Energy, The Texas Tornado, Sgt. Slaughter, Jim Duggan, The Bushwackers and The Beverly Brothers. Gill and Hardy competed in a battle royal as the second version of The Executioners and took part in a 40-man battle royal won by Tatanka. Gill and Hardy then went back to their real names and began competing on Monday Night Raw as well as WWF Superstars against several other tag teams, losing to the likes of The Quebecers and The Steiner Brothers. One night they fought as The Toxic Turtles (dressed up as the Teenage Mutant Ninja Turtles) and won a victory over jobbers, but the gimmick was dropped. The Executioners split in early 1994, and Hardy left the company on April 18. Gill then began competing as an enhancement talent, losing to the likes of Mr. Perfect, the 1-2-3 Kid, Razor Ramon, Doink, The British Bulldog, Tatanka and Adam Bomb. He left the company soon after.

Return to World Wrestling Federation/Entertainment

J.O.B. Squad (1998–1999)

In 1998, Gill made his return to the World Wrestling Federation at Survivor Series as Mankind's mystery opponent. Vince McMahon seemingly facilitated Mankind's route to victory in a tournament for the vacant WWF Championship, as Mankind appeared to be McMahon's favorite to win. McMahon built up the suspense before the entrance by referring to Gill as a wrestler with an unmatched win/loss record. Although the statement implied that Mankind's opponent had won more than he had lost in his career, the exact opposite was true, and Gill was squashed by Mankind. He later joined The J.O.B. Squad with Al Snow, Scorpio and Bob Holly. During this time, Gill became notable for "ending Marc Mero's career" when Mero challenged him to a match, announcing to the crowd that he would retire from wrestling if he could not beat Gill. Gill won the match with some help from the J.O.B. Squad, and Mero left the WWF, although he did not actually retire.

Light Heavyweight Champion (1998–2000)

On November 17, 1998, Gill won the Light Heavyweight Championship after defeating Christian on Raw. Shortly thereafter, Gill was given his most notable gimmick: "Gillberg", a parody of rival promotion World Championship Wrestling's top star Goldberg. When he became Gillberg, the bookers' original plan was reputedly to have him lose 173 consecutive matches, parodying Goldberg's winning streak of 173 matches. The Gillberg character parodied numerous other aspects of Goldberg's character, such as his entrance being accompanied by the pre-recorded sound of a crowd chanting "Gillberg" (an allusion to WCW's alleged use of pre-recorded chants in Goldberg's usual entrance) and stage hands who would hold up sparklers (parodying Goldberg's pyrotechnics) and then spray the entrance way with fire extinguishers.
He also had a dotted-line "tattoo" on his right arm (parodying Goldberg's tribal tattoo) and would use the catchphrase "Who's First?" in reference not only to Goldberg's catchphrase "Who's Next?", but also to the fact that Gill would lose to each and every one of his opponents. Gill made his Royal Rumble debut in 1999, but was immediately eliminated by Edge. Gill's only victory as Gillberg came on the February 8, 1999, edition of Raw, when he defeated Goldust with help from former J.O.B. Squad member The Blue Meanie, who was feuding with Goldust at the time. He competed for the WWF Championship against Triple H in a losing effort on the August 31, 1999, edition of SmackDown!. While he still came to the ring with the belt, the Light Heavyweight Championship was all but forgotten, as Gill seldom defended the title on television or at house shows. After being off WWF television for several months, Gill returned on the February 13, 2000, episode of Sunday Night Heat for one final match in order to lose the championship to the debuting Essa Rios. Upon losing the title, Gill's reign ended at 15 months, making him the longest-reigning Light Heavyweight Champion in WWF history. After leaving the WWF, Gill continued to use the Gillberg gimmick on the independent circuit, most prominently for Maryland Championship Wrestling.

Part-time appearances (2003–present)

When Goldberg came to WWE in 2003, his first feud was against The Rock, who on the April 21, 2003, episode of Raw brought in Gill, once again under his Gillberg gimmick, to mock Goldberg. After beating up The Rock's security guards, who were trying to apprehend him for interrupting a concert "dedicated" to him, Gillberg attacked Goldberg, which prompted Goldberg to begin choking him. The Rock then attacked Goldberg from behind, after which both Gillberg and The Rock quickly ran out of the arena to avoid further conflict.
On December 10, 2007, Gill, now sporting two new tattoos on his left deltoid, returned to WWE television under his Gillberg name and gimmick for the 15th Anniversary of Raw. During the show, he participated in a 15-man battle royal against fourteen other former Raw wrestlers, but was the first man eliminated, thrown out by every other competitor only a few seconds into the match. In 2016, Gill returned to WWE, making a brief appearance on The Edge and Christian Show. Gillberg made a surprise appearance on the February 13, 2017, episode of Raw, coming to the ring in place of Goldberg before being attacked by Kevin Owens.

Return to the independent circuit (2018–present)

On February 28, 2018, Gill won the IWC High Stakes Championship, before being challenged by James Ellsworth to a match on March 17, which Gill lost. On April 1, 2018, Gill teamed with Ellsworth to win the ACW Tag Team Championship. On February 28, 2020, Gill wrestled his last match, against Ellsworth, at an Adrenaline Wrestling show.

Personal life

Gill is married and has two adult children and a granddaughter. Gill operated an independent wrestling school in Severn, Maryland named Gillberg's Pro Wrestling Academy that opened in July 2010.

Championships and accomplishments

Adrenaline Championship Wrestling: ACW Tag Team Championship (1 time, current) – with James Ellsworth
Atlantic States Wrestling Alliance: ASWA Tag Team Championship (2 times) – with Agony
East Coast Pro Wrestling: ECPW Tag Team Championship (1 time) – with Executioner #2
Eastern Wrestling Federation: EWF Tag Team Championship (1 time) – with Agony
International Wrestling Cartel: IWC High Stakes Championship (1 time)
Maryland Championship Wrestling: MCW Hall of Fame (Class of 2009)
Mid-Eastern Wrestling Federation: MEWF Tag Team Championship (1 time) – with Agony
NWA New Jersey: NWA New Jersey Junior Heavyweight Championship (1 time)
Pro Wrestling Illustrated: Ranked No. 120 of the top 500 singles wrestlers in the PWI 500 in 1999
World Wrestling Alliance: WWA World Tag Team Championship (5 times) – with Barry Hardy (3) and Wayne Gill (2)
World Wrestling Federation: WWF Light Heavyweight Championship (1 time)
Other titles: ASWA Tag Team Championship (1 time) – with Wayne Gill
{ "pile_set_name": "Wikipedia (en)" }
Communicating Science in the Developing World, But How? Villagers collect water in Koraro, Ethiopia. Photo: Jeffrey Marlow Scientific research grants are highly prized pots of money that allow scientists to collect data and pursue new discoveries, the professional and literal currency that makes the enterprise tick. But there’s often another stipulation that comes with a grant: public outreach – the opportunity to engage with non-scientists and convey the importance of the research. The nature of this outreach component can vary dramatically, from straightforward education to more dynamic interactions that build on different perspectives for the ultimate benefit of both the science and the society. The latter approach is generally preferable, but in a recent study published in Public Understanding of Science, Sarah Palmer and Renato Schibeci noted that developing world funding agencies in particular focus heavily on the more didactic brand of outreach. The study’s authors called for more active participation from the public in developing countries – something that funding bodies in places like the UK and Australia have begun to encourage more explicitly. These sorts of participatory engagements work best when the public has a stake in the results, a fact that could end up encouraging certain types of research projects. The Square Kilometer Array – a massive astronomy project that will be built largely in southern Africa – will no doubt produce fascinating breakthroughs, but not ones that will directly affect the lives of those living in the area. Projects that take a more applied approach, such as development-oriented improvements of water quality, sanitation practices, or crop yields will likely get more impassioned local input. A few years ago, I visited the village of Koraro, Ethiopia, where Jeffrey Sachs and his Earth Institute team were several years into a sustained intervention aimed at achieving the Millennium Development Goals. 
These eight benchmarks target clearly delineated gains in health, education, and economics, and while critics have questioned many aspects of the program, it's difficult to deny that the effort has been a positive force for the global poor. In Koraro, arid conditions make farming difficult, and every drop of rain is a critical resource. The Earth Institute team observed local farmers, watching them plant crops fastidiously, water carefully, and harvest at a particular time of year. But one thing looked a little unusual: on the steep hills surrounding Koraro, farmers often planted rows of crops running up and down the slope rather than along the contour as parallel terraces. This caused water to trickle down the hill rather than collect in pools around the plants and seep into the soil. This isn't necessarily science that will grace the pages of Nature, but it will engage local citizens and, if interventions can be linked to better results, build a culture of evidence-based decision making. If such a mindset scales up, Western input itself may no longer be necessary as trial-and-error tinkerers take over. The role of science in this context is promoting a mindset of evidence-based decision making, teaching the language of the scientific method, and establishing platforms to communicate and apply findings at scale in some of the most remote, rural places on the planet. Understanding how such an approach can benefit career-oriented scientists living the "publish or perish" lifestyle is a challenge. Many researchers are reluctant to devote much time to outreach, seeing little incentive in terms of career advancement or funding opportunities. Nonetheless, if this formula – in which scientists engage locals as intellectual partners in projects that are mutually beneficial – can be incorporated into the developing world's standard operating procedure, researchers and citizens can build a strong foundation for a scientifically literate society.
{ "pile_set_name": "Pile-CC" }
Blog

Your strengths are your superpowers and will help you move through the transition of managing work and home life. Strengths aren't just things you're good at; first and foremost they're things that energise you. They should leave you feeling brilliant when you use them. If you want to figure out what yours are, start by asking yourself: What are you doing when you feel positive and relaxed? What do you find fun? What do you love doing? What keeps your focus and attention?

Your strengths are driven by your values, so you're more likely to stick to doing something if it's in line with your own beliefs. When you have a list of things to do, what are you drawn to first? What makes you pick those things? Is it who you're working with, the environment, a particular way of working (creatively, being in the detail, collaborating with others etc.), the type of activity? Have a really good look over 7–10 days and you'll start to notice patterns that will help you answer those questions and pin down when you feel great. Consciously tuning into how you're feeling when you're doing something is a really good way to figure out what energises you. It's so easy to let things become transactional and background noise, but paying attention to your energy levels is a sure-fire way to pinpoint those oh-so-important energisers.

Using your strengths more

Let's use an example. If you found from your list that you really love working with other people, it's time to do a bit of an audit on where you're spending your time. Are you working with others enough? If you spend most of your day solo but you know you'd be more energised with other people around, how can you make that happen? It might be changing where you work – if you're home based, can you go into an office a couple of days a week? If you can't, do you want to put in some virtual meetings?
You might need to speak to your employer if it comes down to logistics outside of your control, but explaining that you need a bit more collaboration and human interaction will make them sit up and listen. If you're a one-person band, maybe co-working spaces or even pitching up to a coffee shop for a couple of hours is the way to go. It doesn't have to be costly; lots of places offer free trials and it's amazing how long you can stretch a drink out for! Beyond the work context, you might want to put in some more outings for either you and your hangers-on, or just you by yourself when you can. It doesn't have to mean costly baby groups; it can be a walk where you know you'll see some different faces and get some fresh air – whatever works for you.

What skills can you share?

If the goal is to do more of what you love, you can get a bit creative with how you do that, because regardless of who you're helping, at the centre of it, it will always directly link back to your wellbeing. You might be one of those people who is great at talking to others and pitching ideas, at your best when you need to persuade someone to come around to your way of thinking. There's always someone who is drained by this particular type of interaction, and you could absolutely make their day by offering to help – you get to use your strengths and someone else gets help...winner winner. You can insert pretty much any other strength into this example!

A personal favourite of mine is planning your day around your usual energy levels and your strengths. For me there are peak times in the day when I know I'm at my best (energy- and productivity-wise), so I use those times to pick off the tasks that don't fire me up – that way I'm not on a double whammy of feeling low energy plus doing something that is going to leave me drained – and vice versa. If you consciously observe your energy patterns for about a week to 10 days, you'll start to tune in to how you fluctuate.
What about when strengths go too far?

That phrase about having too much of a good thing really comes into play here. If you use your strengths in the wrong situation or just generally go too far, you’re likely to switch other people off and end up getting the gift of ‘constructive’ feedback. For example, if you feel energised by getting to a result but you get there at the expense of engaging other people, what was a good thing (getting the outcome) becomes overshadowed by the desire to achieve. Make sense? Here are some tips and questions to help you get to grips with when this kind of thing happens… Feedback. Now, I think feedback and knowing how you’re doing is incredibly important, however I’ve seen some truly horrific examples of ‘helping’ with apparently constructive thoughts. Someone once said to me that feedback is yours to do what you want with, and you can be open to receiving feedback without agreeing all the time – so please, hold on to this before we go any further!! It’s not always easy to spot in yourself where you’re going too far or to figure out the consistent triggers, which is why it can be helpful to ask someone else for input. Being specific about what you want help with is really important though, so just asking ‘how am I doing?’ will rarely get you any quality stuff to work with. Try phrasing your request with something like: ‘I’m working on xxxx (insert whatever is relevant; things like working with bigger teams, building relationships, coaching others etc) – what do you see me doing well with this at the moment? What could I do more of? Is there anything I need to do less of?’. That way, you get feedback that should directly impact what’s important.

What brings you balance?

Thinking about your strengths, are there any that can work together to stop you going too far? I gave the example of being too focused on results before, so let’s use that to bring this to life.
You might like getting stuff done, but if you also like working with other people and being collaborative, then reminding yourself that delivering as part of a team will also energise you will help you balance the need for the result. That’s one example, there are thousands of combinations of getting your strengths to work together, but hopefully that gives you a flavour for it. Focusing on 3 is your start point – of course you’ve got more than 3 strengths, but starting small will limit any overwhelm. Doing more of what you love isn’t about needing to take up loads of time, it’s about looking at what you’re already doing and changing your mindset on how you approach it. If you’re heading back to work after any sort of extended leave, this growing list of strengths is a great conversation to have with a line manager. You can let them have a handy reminder of where you’ll add lots of value and be at your best, and ultimately it has a mega impact on your engagement (which, for those of you who are interested, matters because employees who are engaged in what they’re doing will work harder and results will improve!) Explain to them / casually drop in what energises you, what leaves you feeling great and what you’d love to do more of. In turn you can ask them the same thing, always making sure you frame it as what makes them feel good rather than the things they’re skilled at first and foremost (remember you can become great at the things that motivate you though if you’re not already). Then have the other side of the conversation – the things that drain you. You’ll start to find that you tune into them more, the longer you work with your strengths, and suddenly conversations will open up that could get some of those drainers off your list. One person’s trash is another person’s treasure and all that…there could be something that you put off doing that your friend / work mate / partner loves to do and would welcome taking it on for you.
Again, if you’ve had some time out of the workplace and projects / tasks etc have shifted while you’ve been away, it can feel incredibly daunting, especially if you’re asked what ‘role’ you want to go back to. However, take out the competencies and replace them with what you’re energised by, and suddenly you’ll have a brighter picture of where you’re going to be happy and performing at your peak.

Plan your day around your strengths

I can’t say I do it all the time, however when I do I really feel the difference. It takes a bit of planning but it’s totally worth it… Have a look at how you work during the day, whatever ‘work’ is for you. You’ll have peaks and troughs of energy, yes? If you don’t already know when you work at your best, plot it over 7-10 days to get a good idea. You know when people describe themselves as a morning person, or a night owl, or always needing a sugar hit at 3pm? Yeah, all of those clichés are clichés for a reason! Now you have a picture of your energy levels, you can overlap your tasks and strengths in two ways. The method I use is to pick off the things I’m drained by and usually put off during the times of the day I’m most energised – I’m not going from a low base and there’s much less chance I’ll get distracted – and vice versa. The link between strengths, values and beliefs means that you’re more likely to stick at doing something you enjoy (sounds obvious doesn’t it), so for those times when I know I’ll do anything other than want to work, I save the best bits for then! There is a school of thought that would suggest doing what you love when you’re at your most awake and energised because you’ll be massively efficient and flying high already, so you can always give that a go too and see which way round works best for you.
{ "pile_set_name": "Pile-CC" }
(* TEST
 * toplevel
*)

(* Correct escapes and their encoding *)
let () =
  assert ("\xF0\x9F\x90\xAB" = "\u{1F42B}");
  assert ("\xF0\x9F\x90\xAB" = "\u{01F42B}");
  assert ("\x00" = "\u{0}");
  assert ("\x00" = "\u{00}");
  assert ("\x00" = "\u{000}");
  assert ("\x00" = "\u{0000}");
  assert ("\x00" = "\u{00000}");
  assert ("\x00" = "\u{000000}");
  assert ("\xC3\xA9" = "\u{E9}");
  assert ("\xC3\xA9" = "\u{0E9}");
  assert ("\xC3\xA9" = "\u{00E9}");
  assert ("\xC3\xA9" = "\u{000E9}");
  assert ("\xC3\xA9" = "\u{0000E9}");
  assert ("\xC3\xA9" = "\u{0000E9}");
  assert ("\xF4\x8F\xBF\xBF" = "\u{10FFFF}");
  ()
;;

(* Errors: each of these escapes must be rejected by the lexer *)
let invalid_sv = "\u{0D800}" ;;
let invalid_sv = "\u{D800}" ;;
let invalid_sv = "\u{D900}" ;;
let invalid_sv = "\u{DFFF}" ;;
let invalid_sv = "\u{110000}" ;;
let too_many_digits = "\u{01234567}" ;;
let no_hex_digits = "\u{}" ;;
let illegal_hex_digit = "\u{u}" ;;
{ "pile_set_name": "Github" }
Males of egg-laying chicken breeds are of little value to producers because only a few roosters are required for reproduction. A day after they’re hatched, chicks’ sex is determined, with unfortunate males heading to the grinder for use as animal feed. David Paul Morris / HSUS

The short, brutal life of male chickens

Hundreds of millions of newly hatched males are killed each year because they’re no good for egg laying or meat

When a chick hatches in Arne Block’s and Agnes Block’s henhouse in the southern Swedish province of Småland, it can look forward to a long life. If the chick is a female, she’ll grow up to lay eggs. “With 21 chickens, we get five to seven eggs per day,” says Arne Block. “Some are more diligent than others in laying eggs.” And if the chick is a male, he grows up to become a chicken that the Blocks and their five young children use for meat. But such a harmonious life is rare for 21st century chickens. For the past 50 years or so, farmers and the poultry industry have begun to breed chickens to be either egg layers or meat. By breeding species optimized for one of the two, they’ve been able to create chickens able to lay up to 350 eggs per year and broiler chickens that can reach their slaughter weight in a speedy four weeks. The winner? Apart from the producers, us consumers. At Tesco, Britain’s largest supermarket chain, an egg can be had for 12 cents.
Here’s the problem: Males of the egg-laying breeds are of little value, as only a few roosters are required for reproduction. A day after they’re hatched, chicks are sexed (their gender determined), with the unfortunate males heading straight to the grinder for use as animal feed. In the United States alone, several hundred million newly hatched chicks are killed this way each year, while Germany estimates its annual day-old-chick death figure at about 50 million. “We’ve pushed chickens to the point where they have to suffer,” notes Carlos Gonzalez Fischer, scientific officer at Compassion in World Farming, a British animal-welfare organization. “Broiler chickens grow so heavy so fast that many can’t stand, and egg-laying chickens have been bred to lay so many eggs that the eggshells consume calcium from their bones and they get bone fractures. And they’re the lucky ones, because they’ve survived past one day.” In 2013, Germany’s most populous state, North Rhine-Westphalia, passed a pioneering law banning the practice, starting this year. Earlier this month, an appeals court ruled that the law violated the rights of businesses granted by Germany’s constitution. Indeed, legislating an end to the practice has proved to be difficult. But now a motley crew of animal-rights groups and academic researchers at institutions such as the University of Leipzig in Germany are working on innovative alternatives. Their most practical solution, which may come to a factory farm near you in just a couple of years’ time, is essentially the chicken version of gender-selective abortion. The technology, which has been successfully tested in labs, allows hatcheries to determine with extreme accuracy a chick’s gender even before it hatches. This is how it works: Nine days into an egg’s 21-day incubation period, the farmer — or more likely, a machine — makes a tiny hole in the egg and extracts a small amount of fluid. 
A quick genetic analysis resembling the amniocentesis performed on human embryos to discover infections and genetic abnormalities determines whether the egg will become a female chick, in which case it will be allowed to incubate until it hatches. If it would become a male, the egg is discarded and can be used as animal feed. Because 9-day-old eggs don’t experience pain, the practice causes fewer ethical dilemmas than the killing of chicks.

Mature broiler chickens in a large poultry house in Iowa. Scott Sinklier / Corbis

At Catholic University in the Belgian city of Leuven, a team of researchers added an additional twist with an egg-gender test that doesn’t involve extracting fluid. “Male and female chickens’ feathers have different colors, so we’ve developed a technology using special light rays that illuminate the eggs and shows which ones are male and female,” reports team leader Dr. Bart De Ketelaere. “After nine days incubation, we can determine the gender of the egg with 95 percent accuracy. After 11 days, the accuracy is 99 percent.” The catch? “It only works for brown eggs. Our technology is ready to go on the market if we find hatcheries that are fine with just gender-testing brown eggs.” Here’s another catch: A gene-testing machine costs money, and hatcheries are unlikely to buy one simply to prevent chicks’ suffering. But consumer-products giant Unilever, which owns major brands such as Ben & Jerry’s ice cream and Hellman’s mayo and buys some 350 million eggs each year, took a first step last year, announcing that it will push its egg suppliers to stop male chick culling. “Eggs are a vital ingredient used in many of our products, ranging from mayonnaises to dressings, sauces and ice cream,” says a representative for the company.
“We’re working with egg producers and the wider industry, the animal-welfare community and R&D companies to find tangible ways to address this important issue.” Yet no other major buyer or producer of eggs has taken similar action. And in the absence of consumer boycotts of eggs produced on the backs of dead male chicks, why would they? But the in ovo sexing pioneers have a trump card: In the long run, the technology saves money. With the male eggs removed from incubation machines 12 days earlier, heating the remaining half requires much less energy. Gender-selected eggs are not the only innovative solution the industry is pursuing. Last year Lohmann Animal Breeding, the German-based world leader in chicken genetics, which is also experimenting with egg-gender testing, presented its “dual-purpose chicken,” the result of five years of genetic experimentation. The Lohmann Dual lays 250 eggs per year and reaches a respectable 5-pound slaughter weight in 56 days. (Current broiler chickens reach a slaughter weight of 7 pounds.) A male chick born as a Lohmann Dual would, in other words, face the more promising prospect of growing up a broiler, much like the Blocks’ male chicks. In Switzerland, meanwhile, the supermarket chain Coop and two leading farms have launched a pilot project to develop their own dual-use chicken. Compassion in World Farming, for its part, advocates resurrecting the Beijing-You, an almost-extinct Chinese chicken breed that excels both at egg laying and growing a large, broilerlike body. “The result is the same. The chicken will die,” says Gonzalez Fischer. “But at least it won’t have lived for nothing.” But since dual-purpose chickens eat more and lay fewer eggs, their meat and eggs are more expensive. The obvious question is whether consumers will buy expensive eggs and meat that don’t involve the killing of male chicks or close their eyes and go for the cheap version.
And there’s more experimentation underway, with researchers exploring a genetic-modification technology that changes the gender of eggs as well as one in which male eggs take on a slightly different color. But such futuristic engineering may prove too off-putting to consumers who eventually have to eat the eggs and meat. Researchers are already trying to up the egg-amniocentesis ante by creating a gender test that could be used on two- to three-day-old eggs, allowing the male eggs to be used as ordinary eggs. Eggs sexed at nine days can’t be used as eggs because the chicken embryo has started developing. Would the Blocks, who like most other hobbyist hen keepers don’t sex their eggs, now consider doing so? “In our small henhouse, it doesn’t really make sense,” says Arne Block. “In an ideal world, male chicks of the laying-hen breed should be allowed to live and become broilers, but I do realize that that’s not viable in large companies. Sorting out male eggs before they hatch seems more merciful than killing the chicks.”
{ "pile_set_name": "Pile-CC" }
@echo off pyinstaller --noconfirm artisan-win.spec rem # rem # Don't make assumptions as to where the 'makensis.exe' is - look in the obvious places rem # if exist "C:\Program Files (x86)\NSIS\makensis.exe" set NSIS_EXE="C:\Program Files (x86)\NSIS\makensis.exe" if exist "C:\Program Files\NSIS\makensis.exe" set NSIS_EXE="C:\Program Files\NSIS\makensis.exe" if exist "%ProgramFiles%\NSIS\makensis.exe" set NSIS_EXE="%ProgramFiles%\NSIS\makensis.exe" if exist "%ProgramFiles(x86)%\NSIS\makensis.exe" set NSIS_EXE="%ProgramFiles(x86)%\NSIS\makensis.exe" rem # rem # rem # %NSIS_EXE% setup-install3-pi.nsi
{ "pile_set_name": "Github" }
CHRIS McCALL

WHILE work progresses on the new Queensferry Crossing over the Forth, plans are being drawn up to deliver the next generation of bridges across the Clyde. From a small footbridge linking two historic districts of Glasgow to an ambitious multi-million road link downstream, local authorities along Scotland’s second longest river are examining new ways of bridging the gap between communities.

Mark Macmillan, leader of Renfrewshire Council (centre), joins Stuart Bloomfield (left) and Neil Cooper of Aird Geomatics on the banks of the Clyde at Renfrew, where a new road bridge will eventually be built

The former county town of Renfrew, on the south bank, has been linked with Yoker, a suburb on the edge of the city limits, by a ferry crossing for more than two centuries. Given the two settlements’ close proximity to several major shipyards, as well as the Clyde Tunnel, it was previously considered uneconomic to build a bridge so far upstream. Now Renfrewshire Council is advancing plans to build a 200m road bridge, costing £78m, as part of the Glasgow and Clyde Valley City Deal, which will deliver major infrastructure improvements across the region. The crossing would be capable of opening to accommodate river traffic heading to and from the nearby BAE yards at Scotstoun and Govan.

The Renfrew crossing will be an exciting addition to the Clyde

“It’s easy to imagine this bridge spanning the Clyde and opening to allow ships to navigate the river and it’s heartening to envisage the potential growth it will unlock in the immediate area and for the Renfrewshire as a whole,” said council leader Mark Macmillan. “The Renfrew crossing will be an exciting addition to the Clyde - and its only opening road bridge. “It will also bring a unique engineering distinction to Renfrewshire. The Bascule Bridge and the new crossing mean that Renfrewshire will be the only place in Scotland to have two opening road bridges in such close proximity.
Both bridges illustrate the importance of rivers, engineering and connectivity in Renfrewshire’s past and in its future.” Downstream, there are plans to open a third footbridge on a stretch of the Clyde that was until the late 20th century dominated by commercial shipping.

The historic White Cart bascule bridge near Renfrew was designed by Sir William Arrol, who also oversaw construction of the Forth Bridge

Residents in Govan and Partick, districts in the west of Glasgow that were independent burghs until 1912, have expressed their desire to see improved connections between the two. A charette - a community discussion group - that took place in March last year found strong support for a footbridge to be built, a plan Glasgow City Council is taking forward. Although such a crossing is likely to be two-three years away from opening, it would most likely be built near the Riverside Museum at Kelvinhaugh. Two footbridges at Finnieston and Stobcross - the Bell’s and Millennium bridges - have proved popular since their openings in 1988 and 2002 respectively. They replaced a series of passenger ferries that operated along the Clyde until the late 1970s. Meanwhile, Glasgow City Council is to approach the Scottish Government to request funding to complete a refurbishment of the Clyde Tunnel, which opened in 1963 and links Whiteinch with Linthouse. Members of the local authority’s sustainability and environment committee voted this month by a majority to approach Holyrood for cash to help pay for some of the urgent repair work required on the tunnel, which is used by 25 million cars annually. Councillor Paul Carey, committee convener, told The Scotsman that as the link is not classed as a trunk road - despite high levels of traffic - the council must pay for the majority of its up-keep. The number of vehicles using the tunnel has soared in recent years following the opening of the new Queen Elizabeth University Hospital in Govan and the Hydro events venue in Finnieston.
{ "pile_set_name": "Pile-CC" }
Edward Augustus Dickson Edward Augustus Dickson (1879–1956) was an American educator. He co-founded the University of California, Los Angeles. Biography Early life Edward Augustus Dickson was born in Sheboygan, Wisconsin on August 29, 1879. He moved to California in 1885 with his family. He graduated from the University of California, Berkeley in 1901. Career He taught in Japan in 1901-1902. Back in California, he worked as a journalist for the Sacramento Record-Union, the San Francisco Chronicle, and the Los Angeles Express. In 1919, he purchased the Los Angeles Express and became its editor. In 1912, at the age of thirty-three, he was appointed to the Board of Regents of the Los Angeles State Normal School, the precursor to UCLA. On October 25, 1917, he had lunch with Ernest Carroll Moore (1871-1955) at the Jonathan Club, a private member's club in Los Angeles. Together, they decided to establish the Southern Branch in Westwood, Los Angeles, which eventually became the new campus of UCLA. He served as a Regent for forty-three years, until 1956. He also served as the President of the Board of Regents in 1948. He served as President of the Western Federal Savings and Loan Association from 1931 to 1956. He also sat on the Board of Directors of the Central Investment Corporation. He was a member of the California Republican Party. Moreover, he co-founded the Lincoln–Roosevelt League and served as a delegate to the 1932 Republican National Convention. He also served on the Board of Directors of the Olympic Games Association for the 1932 Summer Olympics in Los Angeles. Furthermore, he was involved with the Los Angeles Art Association, the Los Angeles County Art Institute and the UCLA Art Council. He was featured in Who's Who in America. Personal life He married Wilhelmina de Wolff in 1907. Death He died on February 22, 1956, at the age of seventy-six. 
Bibliography The University of California at Los Angeles: Its Origin and Formative Years (1955) References Category:1879 births Category:1956 deaths Category:People from Sheboygan, Wisconsin Category:People from Los Angeles Category:University of California, Berkeley alumni Category:University of California, Los Angeles faculty Category:California Republicans
{ "pile_set_name": "Wikipedia (en)" }
Preliminary experience with epidural and perineural catheter localization with pulsed wave Doppler ultrasonography. Various methods for peripheral nerve and epidural catheter location assessment exist, with varying degrees of ease of use, utility, and accuracy. Pulsed wave Doppler (PWD) evaluates the presence of fluid flow and is a possible modality for assessing the location of a percutaneously inserted perineural catheter. A retrospective chart review was conducted in which PWD ultrasonography was used to confirm the position of nerve catheters for regional anesthesia. Data were collected to assess 24-hour postoperative pain scores, opioid consumption, complications, and the incidence of catheter replacement. Eighty-six patients were included; the average age was 58 years, and there was a 27% incidence of chronic pain. These catheters were left in place based on the PWD images. Three catheters failed and a total of 16 catheters were repositioned. In the first 24 hours, average pain scores ranged between 3.5 and 5.9, and median postoperative opioid consumption ranged from 11.3 mg to 60.8 mg. For epidural catheters, PWD changes were more obvious with air injection and there was only one episode of hemodynamic instability. Our preliminary experience with PWD ultrasonography suggests that it may offer the ability to selectively assess flow at different locations to identify the proper location of epidural and perineural catheters. Future randomized, controlled investigations are warranted to further evaluate the effectiveness and safety of this modality.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to create a grouped TableView without using a TableViewController I have a UITableView and I would like it to have 2 sections. I now know that you can only have grouped sections if you're using a UITableViewController and if you're using static cells, neither of which I am. Is what I want to do possible? If so where can I turn for help on setting this up. It seems like every tutorial I have found is for the example of using a UITableViewController. A: Initializing the table view with the .Grouped style (the style property can only be set at creation time) and returning 2 from the numberOfSections...-method should yield a good result. Where does this standard approach fail?
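To make the accepted approach concrete, here is a minimal sketch in modern Swift syntax (the answer above uses Swift 2's `.Grouped` spelling). The class name and sample data are made up for illustration:

```swift
import UIKit

// A plain UIViewController hosting a grouped, dynamically populated
// table view — no UITableViewController, no static cells.
class TwoSectionViewController: UIViewController, UITableViewDataSource {

    // Hypothetical sample data: one inner array per section.
    let items = [["Apple", "Banana"], ["Carrot", "Potato"]]

    override func viewDidLoad() {
        super.viewDidLoad()
        // `style` is read-only after creation, so .grouped must be
        // passed to the initializer.
        let tableView = UITableView(frame: view.bounds, style: .grouped)
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
        tableView.dataSource = self
        view.addSubview(tableView)
    }

    func numberOfSections(in tableView: UITableView) -> Int {
        return items.count  // 2 sections
    }

    func tableView(_ tableView: UITableView,
                   numberOfRowsInSection section: Int) -> Int {
        return items[section].count
    }

    func tableView(_ tableView: UITableView,
                   cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell",
                                                 for: indexPath)
        cell.textLabel?.text = items[indexPath.section][indexPath.row]
        return cell
    }
}
```

The grouped appearance comes entirely from the style passed at init plus the data source methods, so neither a UITableViewController nor static cells are required.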
{ "pile_set_name": "StackExchange" }
This table lists the transmission codes actually found on transmissions used in the applications below. To find which specifications were found with each Build Code, go to C4/C5 Transmission Specifications.
{ "pile_set_name": "Pile-CC" }
Angelo Carasale Angelo Carasale (died 1742) was an Italian architect, active mainly in Naples. He held the primary responsibility for designing the elaborate furnishings of the Teatro di San Carlo, which was the new opera house in Naples in 1737. Alexandre Dumas recounts the commonly repeated, yet likely apocryphal, tale that the king was so taken by the beauty of the theatre that he personally presented Carasale to the public for applause, remarking that the only thing lacking from the new theatre was a private passageway for royalty from the adjacent Royal Palace. The anecdote continues by stating that, a few hours later, at the end of the performance of the opera Achille in Sciro by Domenico Sarro, Carasale approached the king and notified him that the passageway was ready. Carasale subsequently served as impresario of the San Carlo opera house for the first four years of its existence. Earlier, Carasale had been the architect given the task of redesigning San Carlo's predecessor, the small San Bartolomeo theatre, in order that it might be converted into a church. He also worked on the interiors of a number of Neapolitan churches. Apparently, Carasale was imprisoned in the fortress of Sant'Elmo on charges of embezzling funds meant for the San Carlo house. Some sources say that he "died in disgrace."
{ "pile_set_name": "Wikipedia (en)" }
With the creation of the Ti (Titanium) line NVIDIA pursued several objectives: from responding to the still-unreleased Radeon 8500/7500 to establishing definite performance figures for these products. The situation is quite awkward - the NVIDIA experts have to wait for the definite final figures of the G2 Radeon to decide into which niches and at what prices they should throw their products onto the market. The figures are still unclear, and the shareholders and the governing body of the company are urged on by the 6-month term. The only way out is new drivers, which were released to fill an awkward pause during which ATI and NVIDIA each try to let the other go first to the autumn market. However, it was not rational to launch the NV25 or NV17, the killers of the Radeon 8500 and 7500, until mass sales of the latter started. But a proactive blow had to be struck. The situation was solved by the release of three nonexistent chips - GeForce2 Ti, GeForce3 Ti200 and GeForce3 Ti500. GeForce2 Ti - you have already seen this chip in the GeForce2 Ultra (NV15A, the rated 250 MHz core frequency), but here it is coupled with a slower 200 (400) MHz DDR memory. It is meant to replace the GeForce2 Pro (NV15) by offering a bit higher performance at the same amount of money and taking a market position a little lower than the Radeon 7500. It is obvious that the considerable dependence of GeForce2 chips on memory bandwidth will provide a performance gain only at low resolutions and in games (tests) with heavy geometry. GeForce3 Ti200 is just a slowed-down GeForce3 (NV20) with a 175 MHz core and 200 (400) MHz memory. The cards on it will be cheaper than the GeForce3 and are meant for direct competition with the Radeon 7500 (while taking a position a bit higher than the latter). What the company wanted to say is "Take DirectX 8.0 at the price of the competitors' DirectX 7.0".
GeForce3 Ti500 is a new top model of the GeForce3 family based on the new NV20 stepping and having a 240 MHz core and 250 (500) MHz DDR memory. This set is positioned at the same niche as the competitor's top model and will directly compete with the Radeon 8500. These releases are accompanied by three solid announcements.

Announcement 1 - DirectX 8.1 support

NVIDIA simply made Microsoft declare the pixel shaders of v1.2 and v1.3 sufficient for compatibility with DirectX 8.1 (see the ATI R200 review). And while the Radeon 8500 supports considerably improved pixel shaders (a longer shader code, a lot of new instructions etc.), the NVIDIA products have only minor alterations (several new instructions; the number of registers and the shader length are the same). Moreover, in the current drivers (21.85) the pixel shaders were still of v1.1. I suspect that v1.2 and v1.3 will be available only in the NV25 and NV17. The NV25 is said to support the proprietary shaders of the Radeon 8500 (1.4), but I think that is impossible. This support would require a considerable redesign of the whole pipeline and of the system of texture value sampling, for which NVIDIA has no time.

Announcement 2 - 3D texture support

The 3D texture support is realized for now only in OpenGL; in the current Direct3D it is still locked, although there are some traces of it in the D3DCAPS. It has been absent from Direct3D for a long time, although it was supposed to appear back with the NV20. It seems that there is some error in the chip, and while it could be worked around on the driver level in OpenGL, which passes all data by creating lists of requests to the accelerator according to the API calls, it was impossible in Direct3D, where most of the data structures are handled directly by the chip. ATI, however, has provided 3D texture support both in the Direct3D and in the OpenGL drivers since the Radeon 1. But NVIDIA provides much better support for 3D textures than is done in the Radeon 1.
It supports truly compressed texture formats (compressed in three dimensions, with compression factors of 1:4 and 1:8), 3D texture MIP mapping, procedural generation of 3D textures, and their usage as 3D- and 4D-parametrized tables for the calculation of some effects. Compression support is essential, since a single 256x256x256 texture takes 64 MBytes if uncompressed. When compressed, it takes 8 MBytes. But we still only dream of scenes where a lot of objects have their own 3D textures; the local memory of accelerators allows us to use only several 3D textures to create impressive effects such as 3D fog or complex illumination. MIP mapping also allows decreasing the data volume (making the texture smaller as the distance from an object grows) and improving the visual quality of objects with 3D textures. Procedural textures allow generating data on the fly according to a certain formula, which is, however, calculated by the CPU. This approach works for special effects with quickly changing 3D textures, but it is not justified for saving memory in the case of a great heap of various objects with 3D textures - the performance will be too low. Today there is only one game application which uses 3D textures - DroneZ: it is quite a primitive game in plot, but it is rich in special effects and uses all the latest innovations available in OpenGL (including 3D textures). But there are also imposters - flat stripes which are usually drawn quite fast (because they have a constant Z and don't require per-pixel interpolation of parameters) with rectangular 2D textures in use. Such stripes are used to optimize the displaying of a lot of similar small objects, for example, a system of particles. Usage of 3D textures here opens new prospects - we can animate these objects, for example, in respect of the position of an imaginary source of light.
If we use a set of precalculated images, we can create the illusion of a great number of truly 3D objects displayed at great speed, since in reality they are only 2D sprites. For now, 3D textures remain a resource-hungry tool useful only for creating special effects. Their implementation in the GeForce3 family would look really good if proper Direct3D support were not absent.

Announcement 3 - Windows XP support

The driver supports both the complete set of 2D features necessary for proper acceleration of Windows XP and, at a lower level, the new 2D API Microsoft GDI+. It provides fast image building and more effective use of hardware acceleration. GDI+ is an attempt to overcome the architectural drawbacks of the old API; besides, it contains several new capabilities such as gradient shading.

Cards

I will introduce only the GeForce3 Ti500, since only this card has frequencies increased relative to the GeForce3. Operation of the GeForce3 Ti200 was emulated on it as well, with the frequencies decreased to 175/200 (400) MHz. Operation of the GeForce2 Ti was estimated on a Leadtek WinFast GeForce2 Ultra video card with the memory frequency decreased to 200 (400) MHz. The reference NVIDIA GeForce3 Ti500 card has an AGP 2x/4x interface and 64 MB of DDR SDRAM in 8 chips on the right side of the PCB. The memory works at 250 (500) MHz, and the chips are covered with traditional heatsinks. The memory frequency is below the rated one, obviously following the recommendations of the makers of such fast memory, but I hope it will be possible to overclock it up to the speed its access time suggests. The GPU works at 240 MHz. This is not a big increase, but considering how well balanced GeForce3 cards are, the performance gain should be quite large. If you look through our related reviews and 3Digest you will see that the GeForce3 has already been overclocked up to 255/280 (560) MHz.
This is higher than the Ti500 offers, and there are also many GeForce3 cards that can operate above 240/250 (500) MHz. The GeForce3 Ti500 looks very close to the reference card, but there are some differences. The most considerable changes are made near the VGA connector: on the left you can see the GeForce3 Ti500 card, and on the right the GeForce3. Because of a different core power unit, the logic that controls 2D quality (filters etc.) was moved from the rear side of the PCB to the front.

Overclocking

With sufficient cooling, the NVIDIA GeForce3 Ti500 raised its frequencies to 260/290 (580) MHz. While the memory overclocking is impressive (although it is only 10 MHz above the rated frequency), the 20 MHz gain on the GPU is moderate. Note: in the course of overclocking you must provide additional cooling, in particular for the card (first of all, for its memory); overclocking depends on the individual sample, and you shouldn't generalize the results of one card to all video cards of this mark or series. Overclocked frequencies are not guaranteed characteristics of a video card.

Test results

2D quality is traditionally high. At 1600x1200, 85 Hz you can work comfortably with a high-quality monitor that supports such modes. I noticed no changes despite the fact that the PCB was redesigned. For estimation of 3D quality we used several tests: id Software Quake3 v1.17 - a game test that demonstrates operation of a card in OpenGL with the standard demo benchmark demo002; MadOnion 3DMark2001 Pro - a synthetic test that shows how a card works in DirectX 8.0.

Quake3 Arena demo002, standard modes

The tests were carried out in two modes: Fast (16-bit color) and High Quality (32-bit color). Operation of the GeForce2 Ti was emulated with the Leadtek WinFast GeForce2 Ultra card by setting a 250/200 (400) MHz frequency.
Operation of the GeForce3 Ti200 was emulated with the NVIDIA GeForce3 Ti500 card by setting a 175/200 (400) MHz frequency. As the GeForce2 Ti and GeForce3 Ti cards belong to different market niches, I have divided the diagrams into two groups for the performance analysis.

NVIDIA GeForce2 Ti

NVIDIA GeForce3 Ti200/500

The Ti200 takes an intermediate position between the GeForce2 Ultra and GeForce3 (in 16-bit color there is almost no difference between this card and the GeForce2 Ultra).

demo002, highest quality and load modes

The detail levels of geometry and textures were set to maximum, and the objects were made extremely complex with curved surfaces (r_subdivisions "1", r_lodCurveError "30000").

NVIDIA GeForce2 Ti

The GeForce2 Ti beats its predecessor in 16-bit color thanks to a higher core speed, but in 32-bit color it is put in its place by memory bandwidth.

NVIDIA GeForce3 Ti200/500

Due to the considerable drop in core speed, the Ti200 lags behind the GeForce2 Ultra in 16-bit color; in 32-bit color the situation is different.

demo002, anti-aliasing and anisotropic filtering tests

The GeForce3, as you know, offers two important 3D features: anti-aliasing and anisotropic filtering. The most optimal AA mode for the GeForce3 is Quincunx, and the best image quality is obtained at anisotropy Level 8, which uses up to 32 texture samples. The performance drop is quite big. Even 1024x768 is not playable with Quincunx AA and Level 8 anisotropy enabled simultaneously. Let's see how much the Ti500 can boost the performance: the drop is still considerable, and even overclocking the Ti500 hardly saves the situation in high resolutions. But there are many games that do not require hundreds of FPS, and at 1024x768 in 32-bit color one can play excellently with the highest AA and anisotropy. In our 3Digest you can look at the anisotropy quality of the GeForce3.
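As a quick aside, the 3D-texture memory figures quoted in the texture-compression discussion above (64 MB uncompressed for a 256x256x256 texture, 8 MB with the 1:8 compressed format) are easy to verify. The sketch below is ours, not from the review, and assumes 4 bytes per texel for the uncompressed case, which the article does not state explicitly:

```ruby
# Sanity check for the 3D-texture sizes quoted in the review.
# Assumption: 4 bytes per texel (32-bit color) for the uncompressed format.

def texture_bytes(size, bytes_per_texel = 4)
  # Size in bytes of a cubic 3D texture with edge length `size`.
  size * size * size * bytes_per_texel
end

MB = 1024 * 1024

uncompressed = texture_bytes(256)   # 256^3 texels * 4 bytes
compressed   = uncompressed / 8     # the 1:8 compression ratio from the article

puts uncompressed / MB  # 64
puts compressed / MB    # 8
```

This is why the review calls compression obligatory: at 64 MB, a single uncompressed 256x256x256 texture would fill the entire local memory of the Ti500.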
3DMark2001

As the test consists of several subtests and there are, therefore, a lot of diagrams, I didn't split the GeForce2 Ti and GeForce3 Ti200/500 into separate diagrams.

3D Marks

The overall 3DMark2001 results show that the GeForce2 Ti takes an intermediate position between the GeForce2 Pro and GeForce2 Ultra. The Radeon 7500, however, outscores them. In the GeForce3 family the performance gradually increases from the GeForce3 Ti200 to the Ti500.

Game1, Low details

Game1 is a scene from car races where you can practice shooting; there are a lot of effects and detailed objects. In 16-bit color the cards form a neat ladder, while in 32-bit color the GeForce2 Ti loses to the ATI RADEON 7500.

Game1, High details

This is rather a processor test because of the overly complex scenes. In 16-bit color all cards go on a par within their niches, and in 32-bit color the RADEON 7500 again outshines the GeForce2 Ti.

Game2, Low details

Here the RADEON 7500 comes very close to the GeForce3 Ti200 in 32-bit color. But this scene has a high overdraw factor, and traditional accelerators without Z-buffer optimizations perform a lot of unnecessary work (they draw many invisible surfaces). The GeForce3 is able to perform such optimizations, which is why its performance is much higher.

Game2, High details

The situation is similar.

Game3, Low details

This scene shows the expected ordering of performance within the GeForce2 clan in 16-bit color, and an advantage for the ATI RADEON 7500 in 32-bit color.

Game3, High details

The competition in the GeForce2 Pro niche shows that the RADEON 7500 excels in 32-bit color.

Game4

Despite its lower frequencies, the GeForce3 Ti200 copes excellently with this scene thanks to the new Detonator XP driver.

Conclusion

We have just studied the new Titanium series from NVIDIA, which includes 3 cards: GeForce2 Ti, GeForce3 Ti200 and GeForce3 Ti500.
The new 0.15-micron process allows the GeForce2 Ti chip to reach well over 250 MHz. But the positioning of this card doesn't permit the memory to work above 200 (400) MHz, so running the chip above 250 MHz makes little sense. NVIDIA says that the GeForce2 Ti outscores the GeForce2 Ultra and costs as much as the GeForce2 Pro. That is an exaggeration: in 32-bit color it doesn't reach the Ultra's speed, but it will, indeed, cost much less than current GeForce2 Pro cards. The GeForce3 Ti200/500 line is meant to make a powerful accelerator with DirectX 8.0 capabilities affordable (Ti200) and to beat the ATI RADEON 8500 (Ti500). Time will tell whether the GeForce3 Ti500 can outdo the ATI RADEON 8500, whose capabilities are not fully known yet. I think the Ti200 will be quite popular, while the Ti500 will hardly be so at the beginning. The current GeForce3 will be replaced with the Ti line, so there will be no choice. But it is possible that NVIDIA will have some new solution ready by that time. That is why GeForce3-card owners should wait for the NV25, while those who still lack such a powerful accelerator should look at the Ti200 or wait for the RADEON 8500 to clarify the price situation. Today it is quite difficult to recommend one product over another, since prices are not available yet.

Highs:
- Replacement of the old GeForce2 Pro with the new GeForce2 Ti will let you get Ultra-level performance by overclocking the memory, for quite a small sum of money;
- The relatively cheap GeForce3 Ti200 will allow those who can spend up to $200 on a powerful video card to get, in effect, a normal GeForce3;
- Replacement of the GeForce3 with the Ti500 will reduce prices for the latter, so you will be able to buy a more powerful accelerator;
- All advantages of the GeForce3 also apply to the Ti200/Ti500 line.

Lows:
- None.
Just before publication we received some information on retail prices, which will be up to $349 for the GeForce3 Ti500, $199 (!) for the GeForce3 Ti200 and $149 for the GeForce2 Ti. The prices look quite attractive considering the capabilities and performance of these cards. Let's wait for the ATI RADEON 8500 and ATI RADEON 7500, which will obviously start a new price war. Our testers will publish reviews of production cards of the new Titanium line very soon.
Joseph Trimpont

Joseph Trimpont (born 24 September 1918) was a Belgian wrestler. He competed at the 1948 Summer Olympics and the 1952 Summer Olympics.

References

Category:1918 births
Category:Possibly living people
Category:Belgian male sport wrestlers
Category:Olympic wrestlers of Belgium
Category:Wrestlers at the 1948 Summer Olympics
Category:Wrestlers at the 1952 Summer Olympics
Category:Sportspeople from Brussels
Artinvest is a 20-year-old company and one of the leading companies in the Serbian market, selling both materials for furniture production and finished furniture. "We have ten shops dislocated from the central workshop; customers come to our shop with some ideas or a specification of elements and fittings. They can say 'I need this element with this color, these elements with this edgebanding', or we can even help them if they are not familiar with that kind of business, so they can produce their own furniture. Then we use Optiplanning, Biesse's program for cutting optimization: we collect different orders from customers, put them automatically into the system here in the headquarters of our company, and then put them into production. We need to finish everything, to produce the elements and then deliver them to customers without any mistakes and right on time. The whole system prepares the boards for the next cutting with the look-ahead function or during the night, and cutting is automated in such a way that the operator cannot make mistakes or choose some other color. When we finish, we go on to drilling on a Skipper machine or on a Rover. With the software we have now from Biesse, and some other software we want to integrate together with it, I think that our advantage over the competition will go to a higher level." "When we started to think about this investment and recognized that we needed something like this, we contacted the five biggest producers in Europe. There are many elements to a decision like this: trust, price, quality of the equipment, even delivery, and, very importantly, after-sales service. Biesse really listened to us; we know that Biesse has service in Serbia with many technicians, and it is very important for us to have really good support in after-sales.
In these few months after installation we have had really good support from Biesse; the machines are working properly, everything is OK, and I can say that we are satisfied with our choice."

"Without the help of software we would be blind: we cannot do anything."
Sasa Kostic, General manager

Great ideas need a great partner. Discover how you can transform your business with Biesse by your side.
114 Pear and Cinnamon Frangipane Tart

I have a thing for cinnamon. I don’t know exactly what it is about this spice that makes it feel so homely and comforting. And although I now associate cinnamon with much more than just Christmas baking, it still holds a big corner of my heart, being one of my earliest memories of home. Cinnamon, now one of the best known and most versatile spices, is a name given to several species of Cinnamomum, but only a few of them are grown commercially for spice. Although not grown in Europe, cinnamon has been well known since antiquity. It was first imported to Egypt somewhere around 2000 BC, but the source of this aromatic spice was kept secret in the Mediterranean world for centuries by the middlemen who handled the spice trade, protective of their business. Cinnamon is native to India, Sri Lanka, Bangladesh and Myanmar, but is now also grown in other countries, with Indonesia, China, Sri Lanka, Vietnam and Madagascar being the biggest producers of this spice. It’s probably worth knowing that what most of us have tasted as ‘cinnamon’ is a different species called Cinnamomum cassia, or simply cassia, often referred to as Chinese cinnamon. Cinnamomum verum, mostly grown in Sri Lanka, is often considered the ‘true cinnamon’. Cinnamon has appeared under many names throughout history. The English word cinnamon has been in use since around the 15th century, and is derived from the Greek word κιννάμωμον (kinnámōmon). Early modern English also used the terms canel and canella, which are still in use in other European languages (the French name for cinnamon is cannelle, and the Italian – cannella). The word is borrowed from the Latin word cannella, a diminutive of canna meaning ‘tube’, describing the way cinnamon bark curls up as it dries. This beautifully warming and aromatic spice has many uses. And although most of us associate cinnamon with autumn and winter, especially Christmas bakes, it is also used in many beverages, preserves, jams and pickles.
Cinnamon also appears as an addition to many meat dishes, especially lamb and chicken. Many Indian savoury dishes use cinnamon to add extra flavour and aroma. The list is endless, and it certainly doesn’t end with the well-known apple pie, which without cinnamon would somehow be incomplete. As I am warming myself up for the craze of Christmas baking, I made this Pear and Cinnamon Frangipane Tart. Maybe not a Christmas classic, but it has definitely proved that it can become one. Not too difficult to make and not too time-consuming, this tart can definitely be your go-to emergency bake, for the times when family or friends decide to give you only a few hours’ notice of their arrival in your home. That is, if you want to treat them with something nice ^^ If you have more time to spare, try this Apple Frangipane Tart, żubrek’s favourite!

HOW TO MAKE?

1. Start with making the pastry. Mix the flour and salt, and add it to the pastry board. Add butter and cut it into small cubes with a knife, mixing the butter with the flour at the same time. Add egg yolk, water, sugar and almond extract, and mix everything together.

2. Quickly knead a smooth dough. It should be quite soft, not too dry. Wrap your ready dough in some cling film and put it in the fridge for about half an hour.

3. Prepare the pears. Add water with whisky, cinnamon and sugar into a pot and bring it to a boil. Once boiled, turn the heat down, add the pears into the mixture and keep cooking them for about 15 min with the lid on. After this time, take the lid off and cook the pears for another 5 minutes. Drain the pears.

4. Prepare the frangipane. Cream butter and sugar with your electric mixer until light and fluffy. Add egg and egg yolk, one at a time. Add Amaretto or almond extract, flour and almonds. Mix until combined.

5. Grease a tart tin with butter, set aside. Take the dough out of the fridge and roll it out on the pastry board, until big enough to cover the bottom and sides of the tart tin.
Transfer the rolled-out pastry to the tin, and prick the bottom with a fork. Transfer the frangipane to the baking tin with the pastry, spread evenly. Decorate with the drained cinnamon pears.

6. Bake in the oven, preheated to 180 degrees Celsius, for about 25-30 minutes.

7. To add a ‘glossy’ effect, you can spread some apricot jam (heated with some water) on top of the baked pastry. Not too much though, as the tart is sweet as it is.
Q: Did God change from a wrathful God to a loving God between Old Testament and New Testament? This question is mainly to the evangelical Christians and Bible believing Christians, who believe that God doesn't change and His nature is Love. The Old Testament is filled with accounts that describe how God poured out His wrath on people, including His chosen people, the Israelites. However, when we read the New Testament particularly the life, teachings and message of the Lord Jesus Christ we don't see the outpouring of God's wrath on people. Instead, we read about God's grace, mercy, and love. How do we square these two seemingly opposite manifestations of God's nature? A: There was a gap of about 400 years between the two Testaments, with the OT covering a vast time span, from creation till then. Taking the time from after the Flood, that alone has been variously calculated as 2,454 years to 2,518 years. This means that the OT deals with about two and a half thousand years of history after the Flood, whereas the NT only covers less than seventy-five years of history! The NT does not detail the horrific destruction of Jerusalem and its temple in A.D. 70 as all but its last book was completed before then. (The last book named Revelation may have used coded language to infer that event but it deals a lot with future events where the wrath of God will be poured out on the nations.) It is unbalanced to compare the historical dealings of God with his people and the nations over thousands of years, with a mere 75 years history in the NT. This is especially so when the NT does not hold back from warnings about the coming wrath of God, both on individuals who continue in rebellion against him, and the various “bowls of wrath” coming on the whole world before Christ returns in judgment. 
The idea that God must have changed in nature between the two testaments may indicate some ignorance of what those two testaments state, on the matter of God’s nature and his dealings with mankind. In both testaments, the immense patience and love of God is demonstrated, yet without holding back from clear evidence of God’s holiness, righteousness and sovereign judgements. There may be a bit of ‘cherry-picking’ going on, selecting gruesome events in the OT (which tells things the way they were) while only citing nice sentiments expressed in the NT. Finally, you addressed your question to evangelical, Bible believing Christians, “who believe that God doesn't change and His nature is Love.” As one such Christian I would point out that the Bible does not limit God’s nature to love, but that his love is perfectly balanced with his holiness, his righteousness and his justice. It’s imbalanced to focus only on God’s love, as if a loving God would sweep sin under the carpet without judging sin and sinners. In his love, God has done everything we could never do to spare repentant sinners the punishment due their sin, by pouring it out on the sinless Son of God instead. But if people disregard what God has lovingly done, they will have to bear that punishment. Then they will know the righteous wrath of God. That was the pattern in the OT because forgiveness and time to repent was always available to those seeking to please God, and that continues in the NT. No change there, God be praised! A: If you read the whole Bible from Genesis to Revelation, you'll notice: God's consistent character, who is compassionate and merciful to those who love and fear Him but who pours out His wrath to those who are rebellious, unthankful, unfaithful, and disobey His commandments. In the OT He revealed his character to Abraham, Moses, David, the prophets, etc; in the NT He revealed his SAME character to Jesus, Paul, the Apostles, etc. 
His commandments (both in OT and NT) were meant to protect us from harm and to enable us to flourish. In the OT He revealed the famous 10 commandments; in the NT Jesus recapitulates them into the great 2 commandments. God kept renewing His covenant starting with His chosen people Israel and later with the whole world (the Gentiles), exemplifying his faithfulness toward all his creation, and asks us to also be faithful to the covenant. In the OT it was the Mosaic covenant; in the NT it was the covenant with Jesus. In both the OT and the NT the covenant has the same structure: God blesses those who are faithful and obey, and God punishes, judges, and curses those who do not (see Deuteronomy for OT and Revelation for NT). God is especially angry at those who are not only proud (meaning pursuing their own standard instead of God's standard) but also persecute the weak (the poor, the widows, and the orphans). In the OT the Kings and the Jerusalem elite were some of the ones that God was angry with; in the NT it was the Jerusalem leaders and the Pharisees. But to those who were faithful but oppressed and cried out to God, in both the OT and the NT God promised vindication, deliverance, and reward, which we can read in many places such as the Psalms (OT) and in Revelation (NT). I hope from the above you see how God's nature doesn't change between OT and NT: loving to the righteous but wrathful to the wicked. Jesus came to save the sinners who WANT to be righteous (because it is impossible to be righteous without God's help). But on the Day of Judgment when Jesus comes again, He comes as a judge who will cast the wicked to hell. In between the 2 comings, the door is still open for us to take the offer of salvation. A: Wrath is an important part of God's nature. I think a good way into answering this question is to ask the question, 'What did Jesus save us from?'
They tell how you turned to God from idols to serve the living and true God, and to wait for his Son from heaven, whom he raised from the dead – Jesus, who rescues us from the coming wrath. (1 Thessalonians 1:9-10) There are many passages in the NT which talk about God's wrath. The Father loves the Son and has placed everything in his hands. Whoever believes in the Son has eternal life, but whoever rejects the Son will not see life, for God’s wrath remains on them. (John 3:35-36) For of this you can be sure: no immoral, impure or greedy person – such a person is an idolater – has any inheritance in the kingdom of Christ and of God. Let no one deceive you with empty words, for because of such things God’s wrath comes on those who are disobedient. (Ephesians 5:5-6) And one more, this wonderful description of Jesus coming again: I saw heaven standing open and there before me was a white horse, whose rider is called Faithful and True. With justice he judges and wages war. ... Coming out of his mouth is a sharp sword with which to strike down the nations. ‘He will rule them with an iron sceptre.’ He treads the winepress of the fury of the wrath of God Almighty. (Revelation 19:11-16) In other words, God's nature has not changed between the Old Testament and the New Testament. Sin still provokes God's wrath and one day it will be punished. Jesus came to save us from sin, to save us from God's wrath. As he said: "God did not send his Son into the world to condemn the world, but to save the world through him. Whoever believes in him is not condemned, but whoever does not believe stands condemned already because they have not believed in the name of God’s one and only Son." (John 3:17-18) There are many other ways in which God's consistency and character are displayed through the Old and New Testaments and I think the other answers do a good job picking up some of those too. 
I'd just like to mention - as suggested in a comment - one thing extra, which is that the OT does display God as loving and merciful. This aspect of his character also has not changed. For example, God's self-description in Exodus 34, one of the most famous descriptions of him: Then the Lord came down in the cloud and stood there with him and proclaimed his name, the Lord. And he passed in front of Moses, proclaiming, ‘The Lord, the Lord, the compassionate and gracious God, slow to anger, abounding in love and faithfulness, maintaining love to thousands, and forgiving wickedness, rebellion and sin. Yet he does not leave the guilty unpunished; he punishes the children and their children for the sin of the parents to the third and fourth generation.’ (Exodus 34:5-7) So here in the OT we have a description of God as loving and compassionate, slow to anger, yet not leaving sin unpunished. This is the same God of the New Testament: the God who is so loving that he refuses to leave us to our sins, and yet the God who also is so just that he cannot leave sin unpunished - so Christ is punished in our place. His character does not change.
Q: Setting tabBarController.selectedIndex/selectedViewController when it's a UINavigationController

I've got 5 views in my tabBarController, and all of them are embedded in separate navigation controllers, i.e. every view has its own navigation controller. I did this to make it easier to push segues; I know it's probably not the best solution, but it works fine. Now to my question: I'm trying to set the initial view with the following code:

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        tabBarController.selectedIndex = 2;
        return YES;
    }

However, it's not working at all; the app simply starts at index 0 (the left-most view). I've searched thoroughly through threads like this and tried many different ways to solve this without any success... The closest I got was when I checked the box "Is initial view controller" in MainStoryboard_iPhone.storyboard for the view controller I want to start with. This way I got the correct starting view controller, but the tab bar wasn't showing.

A: Since you're using storyboard, do this:

1) Give your tabBarController a storyboard identifier (say tbc);

2) In your appDelegate's didFinishLaunching, do this:

    UITabBarController *tbc = [[UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil] instantiateViewControllerWithIdentifier:@"tbc"];
    [tbc setSelectedIndex:1];
    [self.window setRootViewController:tbc];
    [self.window makeKeyAndVisible];
    return YES;

PS: This is just one of many ways to make it work.
require 'aws-sdk-apigateway'
require 'aws-sdk-cloudformation'
# stdlib dependencies used below
require 'json'
require 'net/http'
require 'uri'

STAGE_NAME = "<%= @stage_name %>"

def lambda_handler(event:, context:)
  puts("event['RequestType'] #{event['RequestType']}")
  puts("event: #{JSON.dump(event)}")
  puts("context: #{JSON.dump(context)}")
  puts("context.log_stream_name #{context.log_stream_name.inspect}")

  mimic = event['ResourceProperties']['Mimic']
  physical_id = event['ResourceProperties']['PhysicalId'] || "PhysicalId"
  puts "mimic: #{mimic}"
  puts "physical_id: #{physical_id}"

  if event['RequestType'] == 'Delete'
    if mimic == 'FAILED'
      send_response(event, context, "FAILED")
    else
      mapping = BasePathMapping.new(event)
      mapping.delete(true) if mapping.should_delete?
      send_response(event, context, "SUCCESS")
    end
    return # early return
  end

  mapping = BasePathMapping.new(event)
  mapping.update

  response_status = mimic == "FAILED" ? "FAILED" : "SUCCESS"
  response_data = { "Hello" => "World" }
  send_response(event, context, response_status, response_data, physical_id)
# We rescue all exceptions and send a message to CloudFormation so we dont have to
# wait for over an hour for the stack operation to timeout and rollback.
rescue Exception => e
  puts e.message
  puts e.backtrace
  sleep 10 # provide a delay to make sure that the log gets sent to CloudWatch
  send_response(event, context, "FAILED")
end

def send_response(event, context, response_status, response_data={}, physical_id="PhysicalId")
  response_body = JSON.dump(
    Status: response_status,
    Reason: "See the details in CloudWatch Log Stream: #{context.log_stream_name.inspect}",
    PhysicalResourceId: physical_id,
    StackId: event['StackId'],
    RequestId: event['RequestId'],
    LogicalResourceId: event['LogicalResourceId'],
    Data: response_data
  )
  puts "RESPONSE BODY:\n"
  puts response_body

  url = event['ResponseURL']
  uri = URI(url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.open_timeout = http.read_timeout = 30
  http.use_ssl = true if uri.scheme == 'https'

  # must use url to include the AWSAccessKeyId and Signature
  req = Net::HTTP::Put.new(url) # url includes the query string and uri.path does not, so use url
  req.body = response_body
  req.content_length = response_body.bytesize

  # set headers
  req['content-type'] = ''
  req['content-length'] = response_body.bytesize

  res = http.request(req)
  puts "status code: #{res.code}"
  puts "headers: #{res.each_header.to_h.inspect}"
  puts "body: #{res.body}"
end

class BasePathMapping
  def initialize(event)
    @event = event
    @rest_api_id = get_rest_api_id
    @domain_name = get_domain_name
    @base_path = ''
  end

  def update
    # Cannot use update_base_path_mapping to update the base mapping because it doesnt
    # allow us to change the rest_api_id. So we delete and create.
    delete(true)
    create
  end

  # Dont delete the newly created base path mapping unless this is an operation
  # where we're fully deleting the stack
  def should_delete?
    deleting_parent?
  end

  def delete(fail_silently=false)
    apigateway.delete_base_path_mapping(
      domain_name: @domain_name, # required
      base_path: '(none)',
    )
  # https://github.com/tongueroo/jets/issues/255
  # Used to return: Aws::APIGateway::Errors::NotFoundException
  # Now returns: Aws::APIGateway::Errors::InternalFailure
  # So we'll use a more generic error
  rescue Aws::APIGateway::Errors::ServiceError => e
    raise(e) unless fail_silently
  end

  def create
    apigateway.create_base_path_mapping(
      domain_name: @domain_name, # required
      base_path: @base_path,
      rest_api_id: @rest_api_id, # required
      stage: STAGE_NAME,
    )
  end

  def get_domain_name
    param = deployment_stack[:parameters].find { |p| p.parameter_key == 'DomainName' }
    param.parameter_value
  end

  def deployment_stack
    @deployment_stack ||= cfn.describe_stacks(stack_name: @event['StackId']).stacks.first
  end

  def get_rest_api_id
    param = deployment_stack[:parameters].find { |p| p.parameter_key == 'RestApi' }
    param.parameter_value
  end

  def deleting_parent?
    stack = cfn.describe_stacks(stack_name: parent_stack_name).stacks.first
    stack.stack_status == 'DELETE_IN_PROGRESS'
  end

  def parent_stack_name
    deployment_stack[:root_id]
  end

private

  def apigateway
    @apigateway ||= Aws::APIGateway::Client.new
  end

  def cfn
    @cfn ||= Aws::CloudFormation::Client.new
  end
end
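The send_response function above follows the standard CloudFormation custom-resource callback protocol: assemble a JSON document and PUT it to the pre-signed ResponseURL. The sketch below isolates just the body-building step so it can be exercised without any network or AWS calls. It is our own illustration, not part of the handler file: the method name build_response_body and the FakeContext stand-in (which mimics only the single context field the handler reads) are ours.

```ruby
require 'json'

# Minimal stand-in for the Lambda context; send_response only reads
# log_stream_name from it.
FakeContext = Struct.new(:log_stream_name)

# The body-building step of send_response, restated in isolation
# (the field names follow the CloudFormation custom-resource protocol).
def build_response_body(event, context, status, data = {}, physical_id = "PhysicalId")
  JSON.dump(
    Status: status,
    Reason: "See the details in CloudWatch Log Stream: #{context.log_stream_name.inspect}",
    PhysicalResourceId: physical_id,
    StackId: event['StackId'],
    RequestId: event['RequestId'],
    LogicalResourceId: event['LogicalResourceId'],
    Data: data
  )
end

event = {
  'StackId' => 'arn:aws:cloudformation:us-east-1:123456789012:stack/demo/abc',
  'RequestId' => 'req-1',
  'LogicalResourceId' => 'BasePathMapping'
}
body = build_response_body(event, FakeContext.new('stream-1'), 'SUCCESS', { 'Hello' => 'World' })
puts JSON.parse(body)['Status']  # SUCCESS
```

Returning this document with Status "FAILED" is what lets the handler's rescue clause short-circuit a stack operation instead of letting CloudFormation wait out its timeout.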
Atomic Joker's Detonate

Atomic Joker's Detonate has a shorter Cooldown. Deal 40 Power Damage to all Champions and drones near target enemy drone, creature or environmental object. Creatures in the area (including the one targeted) are dealt 160 True Damage instead, and environmental objects are destroyed.

Hot Potato (Q)
Cooldown: 7.0s
Cost: 20 Will
Atomic Joker fires a potato at his target that deals 50% Total Attack Damage. If there are any enemies standing behind that target, it will bounce to one of them and deal 120% Total Attack Damage.
Damage: 50% / 60% / 70% / 80%
Bounce Damage: 120% / 140% / 160% / 180%
Cost: 20 / 25 / 30 / 35

Plutonium Peel (W)
Cooldown: 1.0s
Cost: 35 Will
Atomic Joker uses a charge to toss a peel to the ground. The first enemy Champion or creature to touch it is Knocked Down for 0.75s. Targets are immune to peels for 3s after being Knocked Down. The peel lasts for 25s. Up to 3 can be active at a time. A charge is granted every 20s.
Duration: 25 / 30 / 35 / 40
Recharge Time: 20 / 18 / 16 / 14

Acid Balloon (E)
Cooldown: 12.0s
Cost: 35 Will
Atomic Joker launches a balloon at the ground, dealing 50 Attack Damage over 4s to targets in the area, and Revealing them.

Jack in the Box (R)
Cooldown: 120.0s
Atomic Joker fires a salvo of rockets at the 2 closest Champions every 1s, up to 4 times. Each rocket deals 60 Attack Damage. Each Champion can be hit by 1 rocket per salvo, up to a total of 240 Attack Damage per target.

When they broke into an abandoned military bunker, those blighted survivors of the nuclear war couldn't have realized what they were about to unleash on themselves. Within its derelict walls, they discovered a hibernating computer system. Sensing their movements, the system activated and they found attached to it the head of the Joker suspended in a preservation vessel. The Joker welcomed them with a cackle and told them it was time to build the future.
Sensing their movements, the system activated, and the survivors found attached to it the head of the Joker, suspended in a preservation vessel. The Joker welcomed them with a cackle and told them it was time to build the future.

The Joker told them that the military had wanted to use his unique mind to come up with nuclear war scenarios. Displaying a brutal pragmatism that the Joker appreciated, they put his head in a preservation vessel and got rid of the superfluous parts. Now that the serious world was dead and gone, he told the survivors, it was time to build a new, crazy one.

So desperate were the survivors that they decided to listen to the Joker and began building a new community within the bunker. As the community prospered, the survivors increasingly followed the Joker's orders without question, until they eventually had no will of their own. When the Joker commanded that his vessel be attached to the back of one of his minions, so that he could better see the world he was building, many vied for the honor. The gruesome combat that took place left but one survivor: a scarred, voiceless brute whom the Joker named Gaggy.

From his perch on Gaggy's back, the Joker now rides through the nuclear wasteland, always looking to expand the borders of Jokertown. The Joker of Earth-17 is the ultimate combination of brains and brawn. His minion Gaggy blindly follows any mad command that comes from the Joker, without concern for himself.
Bryan cervical disc prosthesis: 12-month clinical outcome. A prospective observational clinical study was carried out to determine whether Bryan disc replacement surgery is a suitable alternative to arthrodesis for cervical disc disease.
Tenofovir-induced Fanconi syndrome in chronic hepatitis B monoinfected patients that reverted after tenofovir withdrawal. Tenofovir disoproxil fumarate (TDF) is a nucleotide reverse transcriptase inhibitor widely used to treat patients with human immunodeficiency virus (HIV) and hepatitis B virus (HBV) infection. Despite the excellent safety records of this regimen, a few cases of acute renal failure and Fanconi syndrome have been reported among HIV patients exposed to TDF. In the HBV monoinfection scenario, only two cases of TDF-associated Fanconi syndrome have been reported thus far. Here, we describe two additional patients with chronic hepatitis B (CHB) who developed a TDF-induced Fanconi syndrome that reverted after TDF withdrawal and had viral replication fully suppressed upon switching to entecavir (ETV). Though the overall risk of TDF associated severe renal toxicity in HBV patients appears to be negligible, both glomerular and tubular function should be monitored in patients exposed to TDF, especially when other renal risk factors or a history of previous exposure to adefovir dipivoxil (ADV) are present.
PTEN deletion leads to up-regulation of a secreted growth factor pleiotrophin. Tumor suppressor gene PTEN is highly mutated in a wide variety of human tumors. To identify unknown targets or signal transduction pathways that are regulated by PTEN, microarray analysis was performed to compare the gene expression profiles of Pten null mouse embryonic fibroblasts (MEFs) cell lines and their isogenic counterparts. Expression of a heparin binding growth factor, pleiotrophin (Ptn), was found to be up-regulated in Pten-/- MEFs as well as Pten null mammary tumors. Further experiments revealed that Ptn expression is regulated by the PTEN-PI3K-AKT pathway. Knocking down the expression of Ptn by small interfering RNA resulted in the reduction of Akt and GSK-3beta phosphorylation and suppression of the growth and the tumorigenicity of Pten null MEFs. Our results suggest that PTN participates in tumorigenesis caused by PTEN loss and PTN may be a potential target for anticancer therapy, especially for those tumors with PTEN deficiencies.
Great for busy bars, the 160ES Undercounter Bottle Cooler from Osborne's eCold range has a lockable glass fronted door for display and boasts a 120 beer bottle capacity. Designed to be highly efficient, this energy saving chiller has a low voltage fan, LED lighting and efficient compressors. Please Note: This item allows curbside delivery only. Once the item is delivered, it is the responsibility of the customer to transport it further. For more information regarding this, please contact us on 01763 264 280.
--- abstract: 'We derive a general formulation of the laws of irreversible thermodynamics in the presence of electromagnetism and gravity. For the handling of macroscopic material media, we use as a guide the field equations and the Noether identities of fundamental matter as deduced in the framework of gauge theories of the Poincaré$\otimes U(1)$ group.' author: - Romualdo Tresguerres title: Thermodynamics in dynamical spacetimes --- Introduction ============ The present work is based on our previous paper [@Tresguerres:2007ih]. There we studied jointly gravitation and electrodynamics in the form of a gauge theory of the Poincaré group times the internal group $U(1)$. Following the approach of Hehl et al. to gauge theories of gravity [@Hehl:1974cn]–[@Obukhov:2006ge], we made use of a Lagrangian formalism to get the field equations and the Noether identities associated to the gauge symmetry, devoting special attention to energy conservation. This latter aspect of [@Tresguerres:2007ih], where exchange between different forms of energy plays a central role, strongly suggests to look for a thermodynamic interpretation of the corresponding formulas, although this aim remains unattainable as only single matter particles are involved. For this reason, we are interested in extending similar energetic considerations to macroscopic matter in order to be able to construct an approach to thermodynamics compatible with gauge theories of gravity. In this endeavor, our starting point is provided by the dynamical equations found for a particular form of fundamental matter, namely Dirac matter, with the help of the principle of invariance of the action under local Poincaré$\otimes U(1)$ transformations. Our main hypothesis is that the equations still hold for other forms of matter with the same $U(1)$, translational and Lorentz symmetry properties, and we assume that these are possessed by macroscopic matter. 
Accordingly, we consider that material media obey equations of a form known to us, even when we have to reinterpret several quantities involved in them (in particular the matter sources) in order to account for macroscopic features which are not present in the original formulation. Moreover, a major alteration of the almost purely geometrical approach to physical reality characteristic of gauge theories occurs with the introduction of thermodynamic variables. Briefly stated, we proceed with the latter as follows. From the original gauge theoretically defined matter energy current $\epsilon ^{\rm matt}$, we define a modified matter energy current $\epsilon ^{\rm u}$ with an energy flux component $q$ identified as heat flux, and a further component $\mathfrak{U}$ representing the internal energy content of a volume element. As a requirement of the transition to macroscopic matter [@Callen], we postulate $\mathfrak{U}$ to depend, among others, on a new macroscopic variable $\mathfrak{s}$ with the meaning of the entropy content of an elementary volume. (Contrary to other authors [@Landau:1958]-[@Priou:1991], we do not introduce an additional entropy flow variable.) The definition of temperature as the derivative of $\mathfrak{U}$ with respect to $\mathfrak{s}$ completes the set of fundamental thermal variables. We are going to prove that they satisfy the first and second laws of thermodynamics. In our approach, the energy and entropy forms, as well as the temperature function, are Lorentz invariants, as in Eckart’s pioneering work [@Eckart:1940te]. There, as in our case, the first principle of thermodynamics is derived from the energy-momentum conservation law not as the zero component of this vector equation, but as a scalar equation. The paper is organized as follows. In Sections II and III we present the gauge-theoretically derived field equations and Noether identities. 
After introducing in IV a necessary spacetime foliation, Section V is devoted to defining total energy and its various constitutive pieces, and to studying the corresponding conservation equations. In VI, explicit Lagrangians for electrodynamics and gravity are considered, while VII deals with some aspects of the energy-momentum of macroscopic matter. In Section VIII we argue on the most suitable way to include the features of material media in the dynamical equations. Lastly, the main results are presented in Section IX, where we deduce the laws of thermodynamics in two different scenarios. The paper ends with several final remarks and the conclusions. Field equations =============== The results of [@Tresguerres:2007ih] relevant for the present paper are summarized in what follows with slight changes needed to replace the fundamental Dirac matter by macroscopic matter. Interested readers are referred to [@Tresguerres:2007ih] for technical details, in particular those concerning the handling of translations. A complementary study of the underlying geometry of dynamical spacetimes of Poincaré gauge theories can be found in Refs. [@Tresguerres:2002uh] and [@Tresguerres:2012nu]. Our point of departure is a Lagrangian density 4-form $$L=L(\,A\,,\vartheta ^\alpha\,,\Gamma ^{\alpha\beta}\,;F\,,T^\alpha\,,\,R^{\alpha\beta}\,;{\rm matter\hskip0.2cm variables}\,)\,,\label{totalLag}$$ invariant under local Poincaré$\otimes U(1)$ symmetry. Its arguments, along with matter fields, are the following. On the one hand, we recognize the connection 1-forms of $U(1)$, of translations and of the Lorentz subgroup respectively: that is, the electromagnetic potential $A$, the (nonlinear) translational connections $\vartheta ^\alpha$ geometrically interpreted as tetrads, and the Lorentz connections $\Gamma ^{\alpha\beta}$ required to guarantee gauge covariance, being antisymmetric in their indices. 
On the other hand, further arguments are the covariantized derivatives of the preceding connections. The differential of the electromagnetic potential is the familiar electromagnetic field strength $$F:= dA\,,\label{Fdef}$$ and analogously, torsion [@Hehl:1995ue] defined as the covariant differential of tetrads $$T^\alpha := D\,\vartheta ^\alpha = d\,\vartheta ^\alpha + \Gamma _\beta{}^\alpha\wedge\vartheta ^\beta\,,\label{torsiondef}$$ together with the Lorentz curvature $$R^{\alpha\beta} := d\,\Gamma ^{\alpha\beta} + \Gamma _\gamma{}^\beta\wedge \Gamma ^{\alpha\gamma}\,,\label{curvdef}$$ play the role of the field strengths associated respectively to translations and to the Lorentz group. Lorentz indices are raised and lowered with the help of the constant Minkowski metric $o_{\alpha\beta}= diag(-+++)$. The derivatives of (\[totalLag\]) with respect to the connections $A$, $\vartheta ^\alpha $ and $\Gamma ^{\alpha\beta}$ are the electric four-current 3-form $$J :={{\partial L}\over{\partial A}}\,,\label{definition03a}$$ the total energy-momentum 3-form $$\Pi _\alpha :={{\partial L}\over{\partial \vartheta ^\alpha}}\,,\label{definition03b}$$ (including, as we will see, electrodynamic, gravitational and matter contributions), and the spin current[^1] $$\tau _{\alpha\beta} :={{\partial L}\over{\partial \Gamma ^{\alpha\beta}}}\,.\label{definition03c}$$ Finally, derivatives of (\[totalLag\]) with respect to the field strengths (\[Fdef\]), (\[torsiondef\]) and (\[curvdef\]) yield respectively the electromagnetic excitation 2-form $$H:=-{{\partial L}\over{\partial F}}\,,\label{definition01}$$ and its translative and Lorentzian analogs, defined as the excitation 2-forms $$\quad H_\alpha :=-{{\partial L}\over{\partial T^\alpha}}\,,\quad H_{\alpha\beta}:=-\,{{\partial L}\over{\partial R^{\alpha\beta}}}\,.\label{definition02}$$ With these definitions at hand, the principle of extremal action yields the field equations $$\begin{aligned} dH &=&J\,,\label{covfieldeq1} \\ 
DH_\alpha &=&\Pi _\alpha\,,\label{covfieldeq2}\\ DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}&=&\tau _{\alpha\beta}\,.\label{covfieldeq3}\end{aligned}$$ As we will see below, suitable explicit Lagrangians uncover respectively (\[covfieldeq1\]) as Maxwell’s equations and (\[covfieldeq2\]) as a generalized Einstein equation for gravity, whereas (\[covfieldeq3\]) completes the scheme taking spin currents into account. Notice that Eqs. (\[covfieldeq1\])–(\[covfieldeq3\]) are explicitly Lorentz covariant[^2]. In addition, they are invariant with respect to translations as much as to $U(1)$ as a consequence of the (nonlinear) symmetry realization used in [@Tresguerres:2007ih]. Noether identities ================== Following [@Hehl:1995ue], we separate the total Lagrangian density 4-form (\[totalLag\]) into three different pieces $$L=L^{\rm matt}+L^{\rm em}+L^{\rm gr}\,,\label{Lagrangedecomp}$$ consisting respectively in the matter contribution $$L^{\rm matt} = L^{\rm matt}(\,\vartheta ^\alpha\,;{\rm matter\hskip0.2cm variables}\,)\,,\label{mattLagcontrib}$$ (in the fundamental case, matter variables consisting of matter fields $\psi$ and of their covariant derivatives including connections $A$ and $\Gamma ^{\alpha\beta}$), together with the electromagnetic part $L^{\rm em}(\,\vartheta ^\alpha\,,\,F\,)\,$ and the gravitational Lagrangian $L^{\rm gr}(\,\vartheta ^\alpha\,,\,T^\alpha\,,\,R_\alpha{}^\beta\,)$. 
According to (\[Lagrangedecomp\]), the energy-momentum 3-form (\[definition03b\]) decomposes as $$\Pi _\alpha =\Sigma ^{\rm matt}_\alpha +\Sigma ^{\rm em}_\alpha +E_\alpha\,,\label{momentdecomp}$$ with the different terms in the right-hand side (rhs) defined respectively as $$\Sigma ^{\rm matt}_\alpha :={{\partial L^{\rm matt}}\over{\partial \vartheta ^\alpha}}\,,\quad \Sigma ^{\rm em}_\alpha :={{\partial L^{\rm em}}\over{\partial \vartheta ^\alpha}}\,,\quad E_\alpha :={{\partial L^{\rm gr}}\over{\partial \vartheta ^\alpha}}\,.\label{momentdecompbis}$$ Starting with the matter Lagrangian part $L^{\rm matt}\,$, let us derive the Noether type conservation equations for the matter currents associated to the different symmetries, that is $$J={{\partial L^{\rm matt}}\over{\partial A}}\,,\quad \Sigma ^{\rm matt}_\alpha = {{\partial L^{\rm matt}}\over{\partial \vartheta ^\alpha }}\,,\quad\tau _{\alpha\beta} = {{\partial L^{\rm matt}}\over{\partial \Gamma ^{\alpha\beta}}}\,.\label{mattcurrdefs}$$ Provided the field equations (\[covfieldeq1\])–(\[covfieldeq3\]) are fulfilled, as much as the Euler-Lagrange equations for matter fields (non explicitly displayed here), from the invariance of $L^{\rm matt}$ under vertical (gauge) Poincaré $\otimes$ $U(1)$ transformations follow the conservation equations for both, the electric current $$dJ =0\,,\label{elcurrcons}$$ and the spin current $$D\,\tau _{\alpha\beta} +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}=0\,.\label{spincurrconserv}$$ On the other hand, the Lie (lateral) displacement ${\it{l}}_{\bf x} L^{\rm matt}$ of the Lagrangian 4-form along an arbitrary vector field $X$ yields the identity $$D\,\Sigma ^{\rm matt}_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge\Sigma ^{\rm matt}_\beta +(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge\tau _{\beta\gamma} +(\,e_\alpha\rfloor F\,)\wedge J\,,\label{sigmamattconserv}$$ with the matter energy-momentum 3-form given by $$\Sigma ^{\rm matt}_\alpha 
=-(\,e_\alpha\rfloor\overline{D\psi}\,)\,{{\partial L^{\rm matt}}\over{\partial d\overline{\psi}}} +{{\partial L^{\rm matt}}\over{\partial d\psi}}\,(\,e_\alpha\rfloor D\psi\,) + e_\alpha\rfloor L^{\rm matt}\label{sigmamatt}$$ (for Dirac matter, and thus to be modified for the case of macroscopic matter). In the rhs of (\[sigmamattconserv\]) we recognize, besides the proper Lorentz force 4-form in the extreme right, two additional terms with the same structure, built with the field strengths and the matter currents of translational and Lorentz symmetry respectively. Next we apply the same treatment to the remaining constituents of (\[Lagrangedecomp\]). The gauge invariance of the electromagnetic Lagrangian piece implies $$\vartheta _{[\alpha}\wedge\Sigma ^{\rm em}_{\beta ]} =0\,,\label{Symem-emt}$$ while in analogy to (\[sigmamattconserv\]) we find $$D\,\Sigma ^{\rm em}_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge\Sigma ^{\rm em}_\beta -(\,e_\alpha\rfloor F\,)\wedge dH\,,\label{sigmaemconserv}$$ being the electromagnetic energy-momentum $$\Sigma ^{\rm em}_\alpha =(\,e_\alpha\rfloor F\,)\wedge H + e_\alpha\rfloor L^{\rm em}\,.\label{sigmaem}$$ Finally, regarding the gravitational Lagrangian part, its gauge invariance yields $$D\,\Bigl( DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}\,\Bigr) +\vartheta _{[\alpha}\wedge\Bigl( DH_{\beta ]} -E_{\beta ]}\,\Bigr)=0\,,\label{redund}$$ (derivable alternatively from (\[spincurrconserv\]) with (\[covfieldeq2\]), (\[covfieldeq3\]), (\[momentdecomp\]) and (\[Symem-emt\])), and the (\[sigmamattconserv\]) and (\[sigmaemconserv\])– analogous equation reads $$\begin{aligned} &&D\,\Bigl( DH_\alpha -E_\alpha\,\Bigr) -(\,e_\alpha\rfloor T^\beta )\wedge\Bigl( DH_\beta -E_\beta\,\Bigr)\nonumber\\ &&\hskip0.2cm -(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge\Bigl( DH_{\beta\gamma}+\vartheta _{[\beta }\wedge H_{\gamma ]}\,\Bigr)=0\,,\label{ealphaconserv}\end{aligned}$$ with the pure gravitational energy-momentum given by 
$$\begin{aligned} E_\alpha =(\,e_\alpha\rfloor T^\beta )\wedge H_\beta +(\,e_\alpha\rfloor R^{\beta\gamma}\,)\wedge H_{\beta\gamma} +e_\alpha\rfloor L^{\rm gr}\,.\label{ealpha}\end{aligned}$$ Eq.(\[ealphaconserv\]) is also redundant, being derivable from (\[sigmamattconserv\]) and (\[sigmaemconserv\]) together with the field equations (\[covfieldeq1\])–(\[covfieldeq3\]) and (\[momentdecomp\]). Spacetime foliation =================== General formulas ---------------- The definition of energy to be introduced in next section, as much as its subsequent thermodynamic treatment, rests on a foliation of spacetime involving a timelike vector field $u$ defined as follows. (For more details, see [@Tresguerres:2012nu].) The foliation is induced by a 1-form $\omega = d\tau $ trivially satisfying the Frobenius’ foliation condition $\omega\wedge d\omega =0$. The vector field $u$ relates to $d\tau$ through the condition $u\rfloor d\tau =1$ fixing its direction. This association of the vector $u$ with $\tau $, the latter being identified as [*parametric time*]{}, allows one to formalize time evolution of any physical quantity represented by a $p$-form $\alpha$ as its Lie derivative along $u$, that is $${\it{l}}_u\alpha :=\,d\,(u\rfloor\alpha\,) + u\rfloor d\alpha \,.\label{Liederdef}$$ (Notice that the condition $u\rfloor d\tau =1$ itself defining $u$ in terms of $\tau$ means that ${\it l}_u\,\tau := u\rfloor d\tau =1$.) With respect to the direction of the time vector $u$, any $p$-form $\alpha$ decomposes into two constituents [@Hehl-and-Obukhov], longitudinal and transversal to $u$ respectively, as $$\alpha = d\tau\wedge\alpha _{\bot} +\underline{\alpha}\,,\label{foliat1}$$ with the longitudinal piece $$\alpha _{\bot} := u\rfloor\alpha\,,\label{long-part}$$ consisting of the projection of $\alpha$ along $u$, and the transversal component $$\underline{\alpha}:= u\rfloor ( d\tau\wedge\alpha\,)\,,\label{trans-part}$$ orthogonal to the former as a spatial projection. 
The foliation of exterior derivatives of forms is performed in analogy to (\[foliat1\]) as $$d\,\alpha = d\tau\wedge\bigl(\,{\it{l}}_u\underline{\alpha} -\,\underline{d}\,\alpha _{\bot}\,\bigr) +\underline{d}\,\underline{\alpha }\,,\label{derivfoliat}$$ with the longitudinal part expressed in terms of the Lie derivative (\[Liederdef\]) and of the spatial differential $\underline{d}$. For its part, the Hodge dual (\[dualform\]) of a $p$-form $\alpha$ decomposes as $${}^*\alpha =\,(-1)^p\, d\tau\wedge {}^{\#}\underline{\alpha} - {}^{\#}\alpha _{\bot}\,,\label{foliat2}$$ being $^\#$ the Hodge dual operator in the three-dimensional spatial sheets. Foliation of tetrads -------------------- Let us apply the general formulas (\[Liederdef\])–(\[foliat2\]) to the particular case of tetrads $\vartheta ^\alpha $, which, as universally coupling coframes [@Tresguerres:2007ih], will play a significant role in what follows. Their dual vector basis $\{e_\alpha\}$ is defined by the condition $$e_\alpha\rfloor \vartheta ^\beta = \delta _\alpha ^\beta\,.\label{dualitycond}$$ When applied to tetrads, (\[foliat1\]) reads $$\vartheta ^\alpha = d\tau\,u^\alpha + \underline{\vartheta}^\alpha\,,\label{tetradfoliat}$$ where the longitudinal piece $$u^\alpha := u\rfloor\vartheta ^\alpha\label{fourvel}$$ has the meaning of a four-velocity. 
In terms of it, the time vector $u$ can be expressed as $u =u^\alpha e_\alpha$, being the requirement for $u$ to be timelike fulfilled as $$u_\alpha u^\alpha = -1\,.\label{form01}$$ In terms of (\[fourvel\]), let us define the projector $$h_\alpha{}^\beta :=\delta _\alpha ^\beta + u_\alpha u^\beta\,.\label{form03}$$ Replacing (\[tetradfoliat\]) in (\[dualitycond\]) and making use of (\[form03\]) we find $$e_\alpha\rfloor \Big(\,d\tau\,u^\beta + \underline{\vartheta}^\beta\,\Bigr) = \delta _\alpha ^\beta =-u_\alpha u^\beta +h_\alpha{}^\beta \,.\label{dualitycondbis}$$ implying $$e_\alpha \rfloor d\tau = -\,u_\alpha\,,\label{form02}$$ and $$e_\alpha\rfloor \underline{\vartheta}^\beta = h_\alpha{}^\beta\,.\label{dualitycondbis}$$ On the other hand, let us generalize the definition (\[Liederdef\]) of Lie derivatives by considering covariant differentials instead of ordinary ones [@Hehl:1995ue]. In particular, we will make extensive use of the covariant Lie derivative of the tetrads, defined as $$\begin{aligned} {\cal \L\/}_u\vartheta ^\alpha &:=& D\left( u\rfloor\vartheta ^\alpha\right) + u\rfloor D\vartheta ^\alpha\nonumber\\ &=& D u^\alpha + T_{\bot}^\alpha \,,\label{thetaLiederiv01}\end{aligned}$$ where $${\cal \L\/}_u\vartheta ^\alpha = {\it{l}}_u\vartheta ^\alpha +{\Gamma _{\bot}}_\beta{}^\alpha\wedge\vartheta ^\beta\,,\label{thetaLiederiv02}$$ with (\[thetaLiederiv01\]) decomposing into the longitudinal and transversal pieces $$\begin{aligned} ({\cal \L\/}_u\vartheta ^\alpha )_{\bot} &=& {\cal \L\/}_u u^\alpha\,,\label{thetaLiederiv03}\\ \underline{{\cal \L\/}_u\vartheta ^\alpha} &=& \underline{D} u^\alpha + T_{\bot}^\alpha\nonumber\\ &=& {\cal \L\/}_u\underline{\vartheta}^\alpha\,.\label{thetaLiederiv04}\end{aligned}$$ For what follows, we also need complementary formulas concerning the foliation of the eta basis. Since they require more space, we introduce them in Appendix A. 
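As a short consistency check (a worked step added here, not part of the original derivation), the defining normalization $u\rfloor d\tau =1$ is recovered from the frame expansion $u=u^\alpha e_\alpha$ together with (\[form01\]) and (\[form02\]):

```latex
u\rfloor d\tau \,=\, u^\alpha\, e_\alpha\rfloor d\tau \,=\, -\,u^\alpha u_\alpha \,=\, 1\,.
```

So the longitudinal components (\[fourvel\]) and (\[form02\]) are mutually consistent with the timelike condition (\[form01\]).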
Definition and conservation of energy ===================================== In Ref.[@Tresguerres:2007ih] we discussed the definition of the total energy current 3-form $$\epsilon := -\left(\,u^\alpha\,\Pi _\alpha + Du^\alpha\wedge H_\alpha\,\right)\,.\label{energycurr}$$ By rewriting it as $$\epsilon =-d\left( u^\alpha H_\alpha\right) + u^\alpha \left( DH_\alpha -\Pi _\alpha \right)\,,\label{exactform01}$$ and making use of (\[covfieldeq2\]), we find that it reduces to an exact form $$\epsilon =-d\left( u^\alpha H_\alpha\right)\,,\label{exactform02}$$ automatically satisfying the continuity equation $$d\,\epsilon =0\,.\label{energyconserv01}$$ The interpretation of (\[energycurr\]) as total energy, and thus of (\[energyconserv01\]) as local conservation of total energy, becomes apparent with the help of (\[momentdecomp\]). The energy (\[energycurr\]) reveals to be the sum of three pieces $$\epsilon =\epsilon ^{\rm matt}+\epsilon ^{\rm em}+\epsilon ^{\rm gr}\,,\label{energydec}$$ defined respectively as $$\begin{aligned} \epsilon ^{\rm matt} &:=& -u^\alpha\,\Sigma ^{\rm matt}_\alpha\,,\label{mattenergy}\\ \epsilon ^{\rm em} &:=& -u^\alpha\,\Sigma ^{\rm em}_\alpha\,,\label{emenergy}\\ \epsilon ^{\rm gr} &:=& -\left(\,u^\alpha\,E_\alpha + D u^\alpha\wedge H_\alpha\,\right)\,.\label{grenergy}\end{aligned}$$ On the other hand, decomposing (\[energycurr\]) into its longitudinal and transversal components $$\epsilon = d\tau\wedge\epsilon _{\bot} +\underline{\epsilon}\,,\label{energyfol01}$$ the foliated form of the local energy conservation equation (\[energyconserv01\]) reads $${\it l}_u\,\underline{\epsilon}-\underline{d}\,\epsilon _{\bot}=0\,,\label{conteq}$$ showing (when integrated) that the rate of increase of the energy $\underline{\epsilon}$ contained in a small volume equals the amount of energy flowing into the volume over its boundary surface as the result of the balance of inflow and outflow of the energy flux $\epsilon _{\bot}$ crossing through the closed 
surface. Conservation of total energy is the result of exchanges between the different forms of energy. Let us write the continuity equations of the different pieces (\[mattenergy\])–(\[grenergy\]). As we will see immediately, in all these equations, when considered separately, sources and sinks of energy are involved, reflecting the fact that, inside the small volume considered, energy is produced or consumed, whether on account of work or of any other manifestation of energy. These terms only cancel out when all forms of energy are considered together, that is, in (\[energyconserv01\]) with (\[energydec\]). Regarding the matter contribution to energy (\[mattenergy\]), using (\[sigmamattconserv\]) we find $$d\,\epsilon ^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J\,.\label{mattender}$$ The interpretation of this conservation equation when its validity is extended to macroscopic matter constitutes the main task of the present work. Actually, Eq. (\[mattender\]) provides the basis for our approach to thermodynamics. In analogy to (\[mattender\]), definition (\[emenergy\]) of electromagnetic energy with (\[sigmaemconserv\]) yields the Poynting equation $$d\,\epsilon ^{\rm em} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm em}_\alpha + F_{\bot}\wedge dH\,,\label{emender}$$ generalized to take into account spacetime as defined in Poincaré gauge theories. In (\[emender\]), the energy flux (or intensity of flowing energy) is represented by the Poynting 2-form $\epsilon ^{\rm em}_{\bot}$, and the last term in the rhs is related to Joule’s heat. 
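Returning to the total energy current, the passage from (\[energycurr\]) to the exact form (\[exactform02\]) deserves a one-line justification (added here for clarity): since $u^\alpha H_\alpha$ carries no free Lorentz index, its exterior and covariant differentials coincide, and the Leibniz rule gives

```latex
d\left(\,u^\alpha H_\alpha\,\right)
 = D\left(\,u^\alpha H_\alpha\,\right)
 = Du^\alpha\wedge H_\alpha + u^\alpha\, DH_\alpha\,,
```

so that $\epsilon = -\,u^\alpha\,\Pi _\alpha - Du^\alpha\wedge H_\alpha = -\,d\left( u^\alpha H_\alpha\right) + u^\alpha\left( DH_\alpha -\Pi _\alpha\right)$, reproducing (\[exactform01\]); the second term then drops out by the field equation (\[covfieldeq2\]).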
Finally, from the gravitational energy definition (\[grenergy\]) with (\[ealphaconserv\]) we get $$\begin{aligned} d\,\epsilon ^{\rm gr} &:=& -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\left(\,E_\alpha -DH_\alpha\right)\nonumber\\ &&+R_{\bot}^{\alpha\beta}\wedge \left(\,DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}\right)\,.\label{grender}\end{aligned}$$ The field equations (\[covfieldeq1\])–(\[covfieldeq3\]) guarantee that the sum of (\[mattender\]), (\[emender\]) and (\[grender\]) is conserved, in agreement with (\[energyconserv01\]). Electrodynamical and gravitational Lagrangians ============================================== In the present Section we introduce explicit Lagrangian pieces (\[Lagrangedecomp\]) describing electrodynamics and gravity. We do so in order to calculate in particular the excitations defined in (\[definition01\]) and (\[definition02\]), which extend to the macroscopic arena without alterations, as will be discussed in Section VIII. We also derive the electromagnetic and gravitational energy-momentum contributions to (\[momentdecomp\]) as defined in (\[momentdecompbis\]), and the corresponding energies (\[emenergy\]) and (\[grenergy\]). The form found for (\[emenergy\]), namely (\[explemen1\]), and in particular that of its transversal part (\[emendh\]), provides us with a criterion to choose the way to extend the [*microscopic*]{} fundamental equations to macroscopic material media. (See Section VIII.) Electrodynamics --------------- In the context of fundamental matter in vacuum, we consider the Maxwell Lagrangian $$L^{\rm em}=-{1\over 2}\,F\wedge\,^*F\,.\label{emlagrang1}$$ From it follows a field equation of the form (\[covfieldeq1\]) where the excitation (\[definition01\]) is given by the Maxwell-Lorentz electromagnetic spacetime relation $$H={}^*F\,,\label{emmom}$$ involving (\[Fdef\]), which identically satisfies $$dF =0\,.\label{vanfder}$$ Eqs. 
(\[covfieldeq1\]) and (\[vanfder\]) complete the set of Maxwell’s equations for fundamental matter in vacuum. On the other hand, the electromagnetic part (\[sigmaem\]) of energy-momentum derived from the explicit Lagrangian (\[emlagrang1\]) reads $$\Sigma ^{\rm em}_\alpha = {1\over 2}\,\left[\,\left( e_\alpha\rfloor F\right)\wedge H -F\wedge\left( e_\alpha\rfloor H\right)\,\right]\,,\label{emenergymom}$$ so that (\[emenergy\]) becomes $$\epsilon ^{\rm em} = -{1\over 2}\,\bigl(\,F_{\bot}\wedge H -F\wedge H_{\bot}\,\bigr)\,,\label{explemen1}$$ obeying Eq.(\[emender\]). The transversal component $\underline{\epsilon}^{\rm em}$ of the electromagnetic energy current 3-form (\[explemen1\]) is the energy 3-form representing the amount of electric and magnetic energy contained in a small volume, and the longitudinal part $\epsilon ^{\rm em}_{\bot}$ is the energy flux or Poynting 2-form. Gravity ------- For the gravitational action, we consider a quite general Lagrangian density taken from Ref. [@Obukhov:2006ge], including a Hilbert-Einstein term with cosmological constant, plus additional contributions quadratic in the Lorentz-irreducible pieces of torsion and curvature as established by McCrea [@Hehl:1995ue] [@McCrea:1992wa]. The gravitational Lagrangian reads $$\begin{aligned} L^{\rm gr}&=&{1\over{\kappa}}\,\left(\,\,{a_0\over 2}\,\,R^{\alpha\beta}\wedge\eta_{\alpha\beta} -\Lambda\,\eta\,\right)\nonumber\\ &&-{1\over 2}\,\,T^\alpha\wedge \left(\sum_{I=1}^{3}{{a_{I}}\over{\kappa}}\,\,{}^{*(I)} T_\alpha\right)\nonumber\\ &&-{1\over 2}\,\,R^{\alpha\beta}\wedge\left(\sum_{I=1}^{6}b_{I}\,\, {}^{*(I)}R_{\alpha\beta}\right)\,,\label{gravlagr}\end{aligned}$$ with $\kappa$ as the gravitational constant, and $a_0$, $a_{I}$, $b_{I}$ as dimensionless constants. 
From (\[gravlagr\]) we calculate the translational and Lorentz excitations (\[definition02\]) to be respectively $$\begin{aligned} H_\alpha &=& \sum_{I=1}^{3}{{a_{I}}\over{\kappa}}\,\,{}^{*(I)} T_\alpha\,,\label{torsmom}\\ H_{\alpha\beta}&=&-{a_0\over{2\kappa}}\,\eta_{\alpha\beta} +\sum_{I=1}^{6}b_{I}\,\, {}^{*(I)}R_{\alpha\beta}\,,\label{curvmom}\end{aligned}$$ and we find the pure gravitational contribution (\[ealpha\]) to the energy-momentum $$\begin{aligned} E_\alpha &=& {a_0\over {4\kappa}}\,e_\alpha\rfloor \left(\,R^{\beta\gamma}\wedge\eta_{\beta\gamma}\,\right)-{\Lambda\over{\kappa}}\,\eta _\alpha\nonumber\\ &&+{1\over 2}\,\left[\,\left( e_\alpha\rfloor T^\beta\right)\wedge H_\beta -T^\beta\wedge\left( e_\alpha\rfloor H_\beta \right)\,\right]\nonumber\\ &&+{1\over 2}\,\left[\,\left( e_\alpha\rfloor R^{\beta\gamma}\right)\wedge H_{\beta\gamma} -R^{\beta\gamma}\wedge\left( e_\alpha\rfloor H_{\beta\gamma}\right)\,\right]\,.\nonumber\\ \label{gravenergymom}\end{aligned}$$ (Notice the resemblance between (\[gravenergymom\]) and (\[emenergymom\]).) The gauge-theoretical equations (\[covfieldeq2\]) with (\[gravenergymom\]) and (\[momentdecomp\]) constitute a generalization of Einstein’s equations. Actually, for $a_0=1\,$, $a_{I}=0\,$, $b_{I}=0\,$ and vanishing torsion, (\[gravenergymom\]) reduces to $$E_\alpha = {1\over{\kappa}}\,\left(\,\,{1\over 2}\,\,R^{\beta\gamma}\wedge\eta_{\beta\gamma\alpha} -\Lambda\,\eta _\alpha\,\right)\,,\label{H-Egravenergymom}$$ which is simply an exterior calculus reformulation of Einstein’s tensor plus a cosmological constant term. 
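The claimed reduction of (\[gravenergymom\]) to (\[H-Egravenergymom\]) can be checked explicitly (a worked verification added here). For $a_0=1\,$, $a_{I}=0\,$, $b_{I}=0$ and vanishing torsion, (\[torsmom\]) and (\[curvmom\]) give $H_\alpha =0$ and $H_{\alpha\beta}=-\,\eta_{\alpha\beta}/2\kappa\,$, so that (\[gravenergymom\]) becomes

```latex
E_\alpha = \frac{1}{4\kappa}\,e_\alpha\rfloor\bigl(\,R^{\beta\gamma}\wedge\eta_{\beta\gamma}\,\bigr)
          -\frac{\Lambda}{\kappa}\,\eta _\alpha
          -\frac{1}{4\kappa}\,\Bigl[\,\bigl(\,e_\alpha\rfloor R^{\beta\gamma}\,\bigr)\wedge\eta_{\beta\gamma}
          -R^{\beta\gamma}\wedge\eta_{\beta\gamma\alpha}\,\Bigr]
        = \frac{1}{2\kappa}\,R^{\beta\gamma}\wedge\eta_{\beta\gamma\alpha}
          -\frac{\Lambda}{\kappa}\,\eta _\alpha\,,
```

where we used $e_\alpha\rfloor\eta_{\beta\gamma}=\eta_{\beta\gamma\alpha}$ and the antiderivation property $e_\alpha\rfloor\bigl( R^{\beta\gamma}\wedge\eta_{\beta\gamma}\bigr) = \bigl( e_\alpha\rfloor R^{\beta\gamma}\bigr)\wedge\eta_{\beta\gamma} + R^{\beta\gamma}\wedge\eta_{\beta\gamma\alpha}$, valid since $R^{\beta\gamma}$ is a 2-form. This is precisely (\[H-Egravenergymom\]).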
Using the general expression (\[gravenergymom\]), we calculate the gravitational energy (\[grenergy\]) to be $$\begin{aligned} \epsilon ^{\rm gr} &=& -{a_0\over {4\kappa}}\,\bigl(\,R^{\alpha\beta}\wedge\eta_{\alpha\beta}\,\bigr)_{\bot}+{\Lambda\over{\kappa}}\,u^\alpha\eta _\alpha\nonumber\\ &&-{1\over 2}\,\bigl(\,T_{\bot}^\alpha\wedge H_\alpha -T^\alpha\wedge H_{{\bot}\alpha}\,\bigr)\nonumber\\ &&-{1\over 2}\,\bigl(\,R_{\bot}^{\alpha\beta}\wedge H_{\alpha\beta} -R^{\alpha\beta}\wedge H_{{\bot}\alpha\beta}\,\bigr)\nonumber\\ &&-D u^\alpha\wedge H_\alpha\,,\label{explgren}\end{aligned}$$ (compare with (\[explemen1\])), obeying Eq.(\[grender\]).

Energy-momentum 3-form of macroscopic matter
============================================

Contrary to the former cases of electromagnetism and gravity, we do not propose a Lagrangian for macroscopic matter. Instead, we focus our attention on the matter energy-momentum 3-form $\Sigma ^{\rm matt}_\alpha $, postulating that the dynamical equation (\[sigmamattconserv\]), as well as any other equation in which it appears, holds macroscopically. The energy-momentum (\[sigmamatt\]) found for Dirac matter does not play any role when considering macroscopic systems. The description of each kind of material medium requires the construction of a suitably chosen energy-momentum 3-form adapted to it. In the present Section we merely present a useful decomposition applicable to any $\Sigma ^{\rm matt}_\alpha$, and we consider the form of the simplest of all mechanical energy-momentum contributions, namely that due to pressure, which we explicitly separate from the whole macroscopic matter energy-momentum.
By using projectors (\[form03\]) and definition (\[mattenergy\]), we find $$\begin{aligned} \Sigma ^{\rm matt}_\alpha &&\equiv ( -u_\alpha u^\beta + h_\alpha{}^\beta ) \Sigma ^{\rm matt}_\beta\nonumber\\ &&=: u_\alpha\,\epsilon ^{\rm matt} +\widetilde{\Sigma}^{\rm matt}_\alpha\,,\label{enmom02}\end{aligned}$$ making apparent the pure energy content of the matter energy-momentum. On the other hand, to account for pressure, we separate the pressure term from an energy-momentum 3-form as $$\begin{aligned} \Sigma ^{\rm matt}_\alpha &=& p\,h_\alpha{}^\beta\,\eta _\beta +\Sigma ^{\rm undef}_\alpha\nonumber\\ &=&-d\tau\wedge p\,\overline{\eta}_\alpha +\Sigma ^{\rm undef}_\alpha\,,\label{enmom01}\end{aligned}$$ with $\overline{\eta}_\alpha$ as defined in (\[3deta07\]), while $\Sigma ^{\rm undef}_\alpha $ is left undefined. By decomposing (\[enmom01\]) according to (\[enmom02\]), we get $$\Sigma ^{\rm matt}_\alpha = u_\alpha\,\epsilon ^{\rm matt} -d\tau\wedge p\,\overline{\eta}_\alpha +\widetilde{\Sigma}^{\rm undef}_\alpha\,.\label{enmom03}$$ The piece $\widetilde{\Sigma}^{\rm undef}_\alpha $ present in (\[enmom03\]) after the separation of the energy term can be chosen in different manners to describe, as the case may be, viscosity, elasticity, plasticity, etc. Actually, (\[enmom03\]) resembles the energy-momentum 3-form of a fluid plus additional contributions responsible for different mechanical features. Notice that, since (\[sigmamattconserv\]) is a dynamical equation of the form $$D\,\Sigma ^{\rm matt}_\alpha = f_\alpha\,,\label{force01}$$ where the 4-form $f_\alpha$ is a generalized Lorentz force, substitution of (\[enmom03\]) into it yields (at least formally) an extended Navier-Stokes equation.
Electrodynamic equations in material media
==========================================

Looking for a general criterion about the most suitable procedure to include phenomenological matter in the fundamental equations, let us examine electromagnetism in particular, in order to find out how to generalize (\[covfieldeq1\]) as well as (\[emender\]) in such a manner that they become applicable macroscopically while preserving their form. As a matter of fact, Maxwell’s equations in matter admit two alternative formulations, depending on how the electric and magnetic properties of material media are taken into account [@Hehl-and-Obukhov] [@Obukhov:2003cc]. Actually, polarization and magnetization can be described, in seemingly equivalent ways, either as due to modifications of the electromagnetic excitations $H$ or as the result of the existence inside such materials of generalized currents $J$ including both free and bound contributions. With the latter approach in mind, we define the total current density $J^{\rm tot}$ as the sum of a current $J^{\rm free}$ of free charge and a matter-bound contribution $J^{\rm matt}$ characteristic of the medium, that is $$J^{\rm tot} = J^{\rm free} + J^{\rm matt}\,,\label{totcurr01}$$ with the assumption that they are conserved separately as $$dJ^{\rm free}=0\,,\qquad dJ^{\rm matt}=0\,,\label{totcurrconserv}$$ so that, although both types of charge can coexist, no exchange occurs between them. From the second conservation condition in (\[totcurrconserv\]), we infer the existence of an independent excitation 2-form, which we denote as $H^{\rm matt}$, such that $$J^{\rm matt}= -dH^{\rm matt}\,.\label{indepexcits}$$ For the longitudinal and transversal pieces of $H^{\rm matt}$ we introduce the notation $$H^{\rm matt}= -d\tau\wedge M + P\,,\label{matexcit01}$$ where $M$ is the magnetization 1-form and $P$ the polarization 2-form.
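As an illustration of the notation (\[matexcit01\]) (a hedged sketch; the susceptibilities $\chi_{\rm e}$ and $\chi_{\rm m}$ are labels introduced here and are not used elsewhere in the text), a linear isotropic medium at rest may be described by taking

```latex
% Linear, isotropic constitutive ansatz for the bound excitation H^{matt}:
% P (2-form) proportional to the spatial dual of E (1-form),
% M (1-form) proportional to the spatial dual of B (2-form)
P = \chi_{\rm e}\,{}^{\#}E\,,\qquad M = \chi_{\rm m}\,{}^{\#}B\,.
```

With this ansatz the foliated total excitations introduced below become ${\cal D}=(1+\chi_{\rm e})\,{}^{\#}E$ and ${\cal H}=(1-\chi_{\rm m})\,{}^{\#}B$, so that in conventional terms $\varepsilon _r = 1+\chi_{\rm e}$ and $\mu _r^{-1} = 1-\chi_{\rm m}$.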
The extension of Maxwell’s equations (\[covfieldeq1\]) to include the contribution (\[indepexcits\]) of the material medium without altering their form can then be performed in either of the alternative ways mentioned above. Let us define $$H^{\rm bare} :={}^*F\,,\label{macMax05}$$ (where we call [*bare fields*]{} the fields in vacuum) in analogy to the Maxwell-Lorentz spacetime relation (\[emmom\]). Then, according to the first procedure, which consists in treating the electromagnetic effects of the medium as a modification of the electromagnetic excitations, both $H$ and $J$ in (\[covfieldeq1\]) are to be understood respectively as $$H = H^{\rm tot} := H^{\rm bare} +H^{\rm matt}\quad{\rm and}\quad J= J^{\rm free}\,,\label{secondcase}$$ while in the second case such effects are characterized in terms of bound currents, so that the same equation (\[covfieldeq1\]) is to be read taking in it now $$H = H^{\rm bare}\quad{\rm and}\quad J = J^{\rm tot} := J^{\rm free} - dH^{\rm matt}\,.\label{firstcase}$$ Let us show that, despite appearances, the two formulations are not trivially interchangeable. Actually, only one of them can be easily adjusted to our program of generalizing the [*microscopic*]{} formulas (\[mattender\]) and (\[emender\]) to include the contributions of the medium. Our main argument to decide in favor of one of the two alternatives (in the present context) is that the electromagnetic energy (\[explemen1\]) is different in each case, in such a way that, for arbitrary $P$ and $M$, Eq.(\[emender\]) is compatible with only one of the possible choices.
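A quick way to see this is that, at the level of the inhomogeneous field equation alone, the two readings coincide trivially:

```latex
% Equivalence of the two bookkeepings at the field-equation level
d\bigl(H^{\rm bare}+H^{\rm matt}\bigr)=J^{\rm free}
\quad\Longleftrightarrow\quad
dH^{\rm bare}=J^{\rm free}-dH^{\rm matt}=J^{\rm tot}\,.
```

The two choices differ, however, in the electromagnetic energy (\[explemen1\]), which is shifted under $H\to H+H^{\rm matt}$ by $-{1\over 2}\,\bigl(F_{\bot}\wedge H^{\rm matt}-F\wedge H^{\rm matt}_{\bot}\bigr)$, a term that does not vanish for generic $P$ and $M$.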
Making use of (\[foliat1\]), we decompose the electromagnetic excitation 2-form $H$, the electromagnetic field strength 2-form $F$ and the current $J$ of Maxwell’s equations (\[covfieldeq1\]) and (\[vanfder\]) as $$\begin{aligned} H &=& d\tau\wedge {\cal H} + {\cal D}\,,\label{Max01}\\ F &=& -d\tau\wedge E + B\,,\label{Max02}\\ J &=& -d\tau\wedge j + \rho\,.\label{Max03}\end{aligned}$$ Accordingly, the foliation of (\[covfieldeq1\]) yields $$\begin{aligned} {\it{l}}_u {\cal D} -\underline{d}\,{\cal H} &=& -j\,,\label{Max07}\\ \underline{d}\,{\cal D}&=& \rho\,,\label{Max08}\end{aligned}$$ while that of (\[vanfder\]) gives rise to $$\begin{aligned} {\it{l}}_u B +\underline{d}\,E &=& 0\,,\label{Max09}\\ \underline{d}\,B &=& 0\,.\label{Max10}\end{aligned}$$ In Eqs. (\[Max07\])–(\[Max10\]) we do not prejudge which of the two interpretations is to be given to the different fields. In order to decide, we express (\[macMax05\]) in terms of the Hodge dual (\[foliat2\]) of (\[Max02\]) $$^*F = d\tau\wedge{}^\#B + {}^\#E\,.\label{Max04}$$ So we see that (\[secondcase\]) corresponds to the choice $${\cal D} ={}^\#E +P\,,\quad {\cal H}={}^\#B -M\,,\quad J=J^{\rm free}\,,\label{elmagexcits02}$$ in the Maxwell equations (\[Max07\])–(\[Max10\]), with $$J^{\rm free}= -d\tau\wedge j^{\rm free} + \rho ^{\rm free}\,,\label{freecurr}$$ while (\[firstcase\]) gives rise to $${\cal D} ={}^\#E\,,\quad {\cal H}={}^\#B\,,\quad J=J^{\rm tot}\,,\label{elmagexcits01}$$ with $$J^{\rm tot}= -d\tau\wedge ( j^{\rm free} +{\it{l}}_u P +\underline{d}\,M\,) + (\rho ^{\rm free}-\underline{d}\,P\,)\,,\label{totcurr}$$ as calculated from (\[totcurr01\]) with (\[indepexcits\]) and (\[matexcit01\]).
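For orientation only: in a frame where the Lie derivative ${\it l}_u$ reduces to $\partial _t$ and the spatial differential $\underline{d}$ acts as a curl on 1-forms and as a divergence on 2-forms, Eqs. (\[Max07\])–(\[Max10\]) translate into the familiar vector-calculus form (a schematic dictionary, with forms identified with vector fields via ${}^{\#}$):

```latex
% Ampere-Maxwell, Gauss, Faraday and no-monopole laws, respectively
\partial _t\mathbf{D}-\nabla\times\mathbf{H}=-\mathbf{j}\,,\qquad
\nabla\cdot\mathbf{D}=\rho\,,\qquad
\partial _t\mathbf{B}+\nabla\times\mathbf{E}=0\,,\qquad
\nabla\cdot\mathbf{B}=0\,.
```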
Now, in order to check the compatibility either of (\[elmagexcits02\]) or (\[elmagexcits01\]) with (\[emender\]), we add (\[Max07\]) multiplied by $E$ to (\[Max09\]) multiplied by ${\cal H}$, obtaining $$E\wedge{\it{l}}_u {\cal D} + {\it{l}}_u B\wedge{\cal H} +\underline{d}\,(E\wedge {\cal H}) = -E\wedge j\,,\label{Poynting01}$$ and on the other hand, we rewrite the transversal part of (\[explemen1\]) as $$\underline{\epsilon}^{\rm em} ={1\over 2}\,( E\wedge{\cal D} + B\wedge {\cal H}\,)\,.\label{emendh}$$ We can see that, in general, for unspecified $P$ and $M$, the step from (\[Poynting01\]) to (\[emender\]) with $\epsilon ^{\rm em}$ given by (\[emendh\]) is only possible with the choice (\[elmagexcits01\]) for the excitations. Indeed, notice that the first term in the rhs of (\[emender\]) has its origin in the relation $$\begin{aligned} &&{\it{l}}_u \underline{\epsilon}^{\rm em} := {\it{l}}_u\,{1\over 2}\left( E\wedge{}^\#E + B\wedge {}^\#B\,\right)\nonumber\\ &&\hskip1.0cm \equiv E\wedge {\it{l}}_u {}^{\#}E + {\it{l}}_u B\wedge {}^{\#}B -({\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm em}_\alpha )_{\bot}\,,\nonumber\\ \label{ident02}\end{aligned}$$ derived with the help of the identities $$\begin{aligned} {\it{l}}_u {}^{\#}E &\equiv &\,{}^{\#}\Bigl(\,{\it{l}}_u E -{\cal \L\/}_u\underline{\vartheta}^\alpha\, e_\alpha\rfloor E\,\Bigr) +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge\left( e_\alpha\rfloor {}^{\#}E\,\right)\,,\nonumber\\ \label{formula01}\\ {\it{l}}_u {}^{\#}B &\equiv &\,{}^{\#}\Bigl(\,{\it{l}}_u B -{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge e_\alpha\rfloor B\,\Bigr) +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge\left( e_\alpha\rfloor {}^{\#}B\,\right)\,.\nonumber\\ \label{formula02}\end{aligned}$$ (Compare with (\[dualvar\]).)
Thus, although (\[Poynting01\]) holds in both approaches, it can only be brought to the form (\[emender\]) within the scope of choice (\[elmagexcits01\]), or equivalently of (\[firstcase\]); the latter thus proves necessary in order to guarantee the general applicability of the fundamental formulas found for microscopic matter. Accordingly, we choose option (\[firstcase\]), which in practice means that, in order to apply the original formula (\[covfieldeq1\]) of the fundamental approach, we have to keep in it the excitation $H =H^{\rm bare}={}^*F$ built from bare fields, and to include all contributions of the medium in the matter current by replacing $J$ by $J^{\rm tot}=J -dH^{\rm matt}$, where the new $J$ in $J^{\rm tot}$ is understood to be $J^{\rm free}$. In the following, we generalize this criterion of strict separation between bare electromagnetic fields (say, radiation in vacuum) and matter, in such a way that it also applies to the gravitational case. So, in all field equations and Noether identities established in Sections II and III, we have to leave untouched the excitations $H$, $H_\alpha$, $H_{\alpha\beta}$ built from bare fields as in Section VI, while modifying the matter currents $J$, $\Sigma ^{\rm matt}_\alpha $, $\tau _{\alpha\beta}$. The matter contributions separated from the bare fields will enter $\epsilon ^{\rm matt}$ and thus $\epsilon ^{\rm u}$ as defined in Section IX, so that they will play a role in the thermodynamic relations to be established there.
Deduction of the laws of thermodynamics
=======================================

First approach, in an electromagnetic medium
--------------------------------------------

In view of the discussion of the previous section, we identify $H$ with $H^{\rm bare}$ and, in order to adapt Eq.(\[mattender\]) to a macroscopic medium with electromagnetic properties, we replace in it (as everywhere) $J$ by $J^{\rm tot}$, that is $$d\,\epsilon ^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J^{\rm tot}\,.\label{emmattender}$$ Taking into account the explicit form (\[totcurr\]) of $J^{\rm tot}$, we find that (\[emmattender\]) can be rewritten as $$\begin{aligned} &&\mkern-60mu d\,\bigl(\,\epsilon ^{\rm matt} +F\wedge M\,\bigr)\nonumber\\ &&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta}-F_{\bot}\wedge J\nonumber\\ &&\quad + d\tau\wedge\Bigl\{ -F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\Bigr\}\,,\label{diff02}\end{aligned}$$ where we use simply $J$ instead of $J^{\rm free}$.
Let us define the modified matter energy current in the left-hand side (lhs) of (\[diff02\]) as $$\epsilon ^{\rm u} := \epsilon ^{\rm matt} + F\wedge M\,.\label{intenergycurr01}$$ Then, from (\[diff02\]) and (\[intenergycurr01\]) and using the notation (\[Max02\]) for $F$, we find the more explicit version of (\[diff02\]) $$\begin{aligned} d\,\epsilon ^{\rm u} &=&d\tau\wedge\Bigl\{{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\ &&\hskip1.5cm +E\wedge j +E\wedge{\it l}_u P +B\wedge{\it l}_u M \Bigr\}\,,\label{diff01}\end{aligned}$$ where we recognize in the rhs, among other forms of energy, the electric and magnetic work contributions $E\wedge{\it l}_u P$ and $B\wedge{\it l}_u M$ respectively. Let us now decompose (\[intenergycurr01\]) foliating it according to (\[foliat1\]) and introducing a suitable notation for the longitudinal and transversal pieces, namely $$\begin{aligned} \epsilon ^{\rm u} &=& d\tau\wedge\epsilon ^{\rm u}_{\bot} + \underline{\epsilon}^{\rm u}\nonumber\\ &=:& d\tau\wedge q + \mathfrak{U}\,.\label{intenergycurr02}\end{aligned}$$ As we are going to justify in the following (in view of the equations satisfied by these quantities), $q$ will play the role of the heat flux 2-form and $\mathfrak{U}$ that of the internal energy 3-form. From (\[intenergycurr02\]) with (\[derivfoliat\]) we get $$d\,\epsilon ^{\rm u}= d\tau\wedge\left(\,{\it l}_u\,\mathfrak{U} - \underline{d}\,q\,\right)\,.\label{energycurrder01}$$ At this point, we claim as a characteristic of macroscopic matter systems [@Callen] the dependence of the internal energy 3-form $\mathfrak{U}$ on a certain new quantity $\mathfrak{s}$ –the entropy– which we take to be a spatial 3-form (representing the amount of entropy contained in an elementary volume). 
Eq.(\[secondlaw\]) to be found below confirms [*a posteriori*]{} that $\mathfrak{s}$ actually behaves as expected for entropy. Moreover, the structure of (\[diff01\]) suggests promoting a shift towards a fully phenomenological approach by considering $\mathfrak{U}$ to possess [@Callen] the following general functional dependence $$\mathfrak{U} = \mathfrak{U}\,(\mathfrak{s}\,,P\,,M\,,\underline{\vartheta}^\alpha \,, u^\alpha\,)\,.\label{uargs}$$ In (\[uargs\]), as in the matter Lagrangian piece (\[mattLagcontrib\]), tetrads are still taken as arguments of $\mathfrak{U}$, while new variables replace the fundamental matter fields $\psi$ and their covariant derivatives $D\psi$. Connections involved in the derivatives $D\psi$ are thus excluded together with the fields. Besides the new entropy variable and the polarization and magnetization of the medium (induced by external fields), we find the components (\[tetradfoliat\]) of the tetrads, in terms of which the volume 3-form (\[3deta06\]) with (\[3deta09\]) is defined.
Accordingly, the Lie derivative of (\[uargs\]) present in (\[energycurrder01\]) takes the form $$\begin{aligned} {\it l}_u\,\mathfrak{U} &=& {{\partial\mathfrak{U}}\over{\partial\mathfrak{s}}}\,{\it l}_u\mathfrak{s} +{{\partial\mathfrak{U}}\over{\partial P}}\wedge{\it l}_u P +{{\partial\mathfrak{U}}\over{\partial M}}\wedge{\it l}_u M\nonumber\\ &&+{{\partial\mathfrak{U}}\over{\partial\underline{\vartheta}^\alpha}}\wedge{\it l}_u\underline{\vartheta}^\alpha +{{\partial\mathfrak{U}}\over{\partial u^\alpha}}\,{\it l}_u u^\alpha\,,\label{uLiederiv01}\end{aligned}$$ where we identify the derivatives [@Callen] as $$\begin{aligned} && {{\partial\mathfrak{U}}\over{\partial\mathfrak{s}}} =T\,,\quad {{\partial\mathfrak{U}}\over{\partial P}} =E\,,\quad {{\partial\mathfrak{U}}\over{\partial M}} =B\,,\label{Uder01}\\ &&{{\partial\mathfrak{U}}\over{\partial\underline{\vartheta}^\alpha}}={\Sigma _{\alpha}}_{\bot}^{\rm matt}\,,\quad {{\partial\mathfrak{U}}\over{\partial u^\alpha}}=-\underline{\Sigma}^{\rm matt}_\alpha\,.\label{Uder02}\end{aligned}$$ Let us call attention to the temperature defined in (\[Uder01\]) as the derivative of the internal energy with respect to the entropy. On the other hand, a plausibility argument to justify the identifications we make in (\[Uder02\]) can be found in Appendix B. 
Replacing (\[Uder01\])–(\[Uder02\]) in (\[uLiederiv01\]) we get $$\begin{aligned} {\it l}_u\,\mathfrak{U} &=& T\,{\it l}_u\mathfrak{s} +E\wedge{\it l}_u P +B\wedge{\it l}_u M\nonumber\\ &&+{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha\,.\label{uLiederiv02}\end{aligned}$$ In order to rearrange the terms of (\[uLiederiv02\]) that are not explicitly invariant into invariant expressions, we replace the ordinary Lie derivatives by covariant Lie derivatives of the form (\[thetaLiederiv02\]), so that the last terms in (\[uLiederiv02\]) become $$\begin{aligned} {\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha &\equiv& {\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\ &&+\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot}\,.\label{identity01}\end{aligned}$$ Replacing (\[identity01\]) in (\[uLiederiv02\]) we finally arrive at $$\begin{aligned} {\it l}_u\,\mathfrak{U} &=& T\,{\it l}_u\mathfrak{s} +E\wedge{\it l}_u P +B\wedge{\it l}_u M\nonumber\\ &&+{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\ &&+\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot}\,.\label{uLiederiv03}\end{aligned}$$ In the rhs of (\[uLiederiv03\]), the term containing explicitly the Lorentz connection is obviously noninvariant. Its emergence is due to an inherent limitation of the phenomenological approach, namely the absence of explicit dependence of $\mathfrak{U}$ on the fundamental matter fields and their derivatives, together with the connections.
Indeed, if matter fields $\psi$ with derivatives $d\psi$ were present, connections would be required to define covariant derivatives preserving local symmetry. However, in the phenomenological case, $\mathfrak{U}$ depends neither on $\psi$ nor on $d\psi$, so that (since $d\psi$ and connections need each other) it cannot give rise to invariant expressions, whether or not one takes it to depend on the connections. The noninvariant term in (\[uLiederiv03\]), reflecting the lack of invariance of the terms in the lhs of (\[identity01\]), will be dragged along to equations (\[energycurrder02\]) and (\[secondlaw\]) below. (We will find a similar situation in (\[uLiederiv04bis\]) and (\[diff01tot\]).) In any case, let us mention that invariance is restored in the particular case in which the macroscopic free spin current $\tau _{\alpha\beta}$ vanishes. Making use of (\[uLiederiv03\]), Eq.(\[diff01\]) reduces to $$\begin{aligned} d\,\epsilon ^{\rm u} &=& d\tau\wedge\Bigl[\,{\it l}_u\,\mathfrak{U} - T\,{\it l}_u\mathfrak{s} + E\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\ &&\hskip1.5cm -\Gamma _{\bot}^{\alpha\beta}\bigl(\,\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]}\bigr)_{\bot} \,\Bigr]\,,\label{energycurrder02}\end{aligned}$$ and finally, comparison of (\[energycurrder02\]) with (\[energycurrder01\]), making use of (\[spincurrconserv\]), yields $${\it l}_u\mathfrak{s} -{{\underline{d}\,q}\over T} = {1\over T}\,\bigl[\,E\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +\Gamma _{\bot}^{\alpha\beta}\bigl(\,D\,\tau _{\alpha\beta}\bigr)_{\bot}\,\bigr]\,.\label{secondlaw}$$ In the lhs of (\[secondlaw\]) we find the rate of change of the entropy 3-form combined in a familiar way with heat flux and temperature.
The interpretation of the first term in the rhs is facilitated by the fact that, according to Ohm’s law $j=\sigma\,{}^\# E$, it is proportional to $E\wedge j ={1\over\sigma} j\wedge{}^\# j \geq 0$, so that it is responsible for entropy growth. The second term is analogous to the first one. If we suppose that all terms in the rhs of (\[secondlaw\]) are $\geq 0$, or, in any case, for vanishing macroscopic free spin current $\tau _{\alpha\beta}$, we can consider (\[secondlaw\]) to be a particular realization of the second law of thermodynamics. On the other hand, the first law is none other than the conservation equation (\[emmattender\]) for matter energy, rewritten as (\[diff01\]) in terms of the internal energy current 3-form (\[intenergycurr01\]). This reformulation is necessary in order to bring to light the components of $\epsilon ^{\rm u}$ defined in (\[intenergycurr02\]), that is, heat flux and internal energy respectively, thus making it possible to compare the first law with the second law (\[secondlaw\]) deduced above. (By the way, notice that the inversion of (\[intenergycurr01\]) to express $\epsilon ^{\rm matt}$ in terms of $\epsilon ^{\rm u}$ suggests interpreting $\epsilon ^{\rm matt}$ as a sort of enthalpy current 3-form.) Making use of (\[energycurrder01\]), the first law (\[diff01\]) can be brought to the more compact form $$\begin{aligned} {\it l}_u\,\mathfrak{U} -\underline{d}\,q &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha \bigr)_{\bot} +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\ &&+E\wedge j +E\wedge{\it l}_u P +B\wedge{\it l}_u M\,.\label{uLiederiv03bis}\end{aligned}$$ The first term in the rhs of (\[uLiederiv03bis\]), that is, the longitudinal part of ${\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha $, encloses information about mechanical work, whose form depends on the explicit matter energy-momentum 3-form we consider.
In particular, by taking it to consist of a pressure term plus an undefined part, as in (\[enmom01\]), we find $$\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha\,\bigr)_{\bot} = \bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha\,\bigr)_{\bot} +{\cal \L\/}_u\underline{\vartheta}^\alpha\wedge p\,\overline{\eta}_\alpha\,,\label{presscontrib}$$ where the last term, in view of (\[volLieder\]), turns out to be $${\cal \L\/}_u\underline{\vartheta}^\alpha\wedge p\,\overline{\eta}_\alpha = p\,{\it l}_u\overline{\eta}\,,\label{pressderiv}$$ and is thus identifiable as the ordinary pressure contribution to work, namely pressure times the rate of change of the volume. It is worth remarking that this pressure contribution to the first law does not occur through differentiation of $\mathfrak{U}$ with respect to the volume $\overline{\eta}$ (which is not an independent variable by itself, being defined from the tetrads as (\[3deta06\])), but with respect to the tetrad components, as in (\[Uder02\]). Replacing (\[presscontrib\]) with (\[pressderiv\]) in the first law equation (\[uLiederiv03bis\]), we get for it the more explicit formulation $$\begin{aligned} {\it l}_u\,\mathfrak{U} -\underline{d}\,q &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha \bigr)_{\bot} +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +E\wedge j\nonumber\\ &&-p\,{\it l}_u\overline{\eta}+E\wedge{\it l}_u P +B\wedge{\it l}_u M \,,\label{firstlaw01}\end{aligned}$$ where one recognizes the familiar contributions of internal energy, heat flux and work \[including $\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha\,\bigr)_{\bot}$ among the latter\], together with additional terms. In particular, $E\wedge j$ and the formally similar quantity $R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}$ are present in (\[firstlaw01\]) due to irreversibility, as read off from (\[secondlaw\]).
General approach
----------------

Let us extend the previous results to the most general scenario, in which we modify all matter currents in analogy to $J^{\rm (tot)}$ in order to take into account further possible contributions of a medium. In an attempt to expand the electromagnetic model, we introduce –associated with gravitational interactions– translational and Lorentz generalizations of the electromagnetic polarization and magnetization of macroscopic matter. This may constitute a merely formal exercise. However, it can also be understood as a proposal to look for new properties of material media, since we are going to consider the hypothesis of certain new phenomenological matter contributions to the sources of gravity, acting perhaps as dark matter. Generalizing (\[firstcase\]), we propose to modify the complete set of field equations (\[covfieldeq1\])–(\[covfieldeq3\]) as $$\begin{aligned} dH &=&J^{\rm (tot)}\,,\label{covfieldeq1bis} \\ DH_\alpha &=&\Pi ^{\rm (tot)}_\alpha\,,\label{covfieldeq2bis}\\ DH_{\alpha\beta} +\vartheta _{[\alpha }\wedge H_{\beta ]}&=&\tau ^{\rm (tot)}_{\alpha\beta}\,,\label{covfieldeq3bis}\end{aligned}$$ with bare excitations and total currents consisting of the sum of free and bound contributions, defined respectively as $$\begin{aligned} J^{\rm (tot)} &=& J-dH^{\rm matt}\,,\label{Jtot} \\ \Pi ^{\rm (tot)}_\alpha &=& \Pi _\alpha -DH^{\rm matt}_\alpha \,,\label{Pitot}\\ \tau ^{\rm (tot)}_{\alpha\beta} &=& \tau _{\alpha\beta} - ( DH^{\rm matt}_{\alpha\beta} +\vartheta _{[\alpha }\wedge H^{\rm matt}_{\beta ]})\,,\label{Tautot}\end{aligned}$$ where we introduce generalizations of the electromagnetic polarization and magnetization (\[matexcit01\]) as $$\begin{aligned} H^{\rm matt} &=& -d\tau\wedge M + P\,,\label{matexcit01bis}\\ H_\alpha ^{\rm matt} &=& -d\tau\wedge M_\alpha + P_\alpha \,,\label{matexcit02}\\ H_{\alpha\beta}^{\rm matt} &=& -d\tau\wedge M_{\alpha\beta} + P_{\alpha\beta}\,,\label{matexcit03}\end{aligned}$$ whatever the
physical correspondence of these quantities may be. Since, as discussed above, only matter currents are to be modified, we understand (\[Pitot\]) in the sense that only the matter part is altered, that is $$\Pi ^{\rm (tot)}_\alpha = \Sigma ^{\rm matt}_{{\rm (tot)}\alpha } +\Sigma ^{\rm em}_\alpha +E_\alpha\,,\label{totmomentdecomp}$$ with $$\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } = \Sigma ^{\rm matt}_\alpha -DH^{\rm matt}_\alpha\,.\label{totmattmom}$$ In view of (\[totmattmom\]), we extend (\[mattenergy\]) as $$\epsilon _{\rm (tot)}^{\rm matt} := -u^\alpha\,\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } =\epsilon ^{\rm matt} + u^\alpha DH^{\rm matt}_\alpha\,,\label{totmattenergy}$$ and, as a generalization of (\[mattender\]) to include macroscopic matter, we postulate the formally analogous equation $$d\,\epsilon _{\rm (tot)}^{\rm matt} = -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_{{\rm (tot)}\alpha } -R_{\bot}^{\alpha\beta}\wedge\tau ^{\rm (tot)}_{\alpha\beta} -F_{\bot}\wedge J^{\rm (tot)}\,,\label{genmattender01}$$ as the law of conservation of total matter energy. Eq.(\[genmattender01\]) can be rearranged as $$\begin{aligned} &&\mkern-60mu d\,\bigl(\,\epsilon ^{\rm matt} +F\wedge M + T^\alpha\wedge M_\alpha + R^{\alpha\beta}\wedge M_{\alpha\beta}\,\bigr)\nonumber\\ &&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm matt}_\alpha -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta} -F_{\bot}\wedge J\nonumber\\ &&\quad + d\tau\wedge\Bigl\{ -F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\ &&\hskip1.6cm -T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\ &&\hskip1.6cm -R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u M_{\alpha\beta}\Bigr\}\,.\nonumber\\ \label{diff03bis}\end{aligned}$$ (Compare with (\[diff02\]).) Without going into details, we proceed in analogy to the former case.
We define a similar internal energy current 3-form $$\widehat{\epsilon}^{\rm u} := \epsilon ^{\rm matt} +F\wedge M + T^\alpha\wedge M_\alpha + R^{\alpha\beta}\wedge M_{\alpha\beta}\,,\label{totintenergy}$$ decomposing as $$\widehat{\epsilon}^{\rm u} =: d\tau\wedge \widehat{q} + \widehat{\mathfrak{U}}\,.\label{intenergycurr02bis}$$ Supposing the functional form of $\widehat{\mathfrak{U}}$ to be $$\widehat{\mathfrak{U}} = \widehat{\mathfrak{U}}\,(\widehat{\mathfrak{s}}\,,P\,,M\,,P_\alpha\,,M_\alpha\,,P_{\alpha\beta}\,,M_{\alpha\beta}\,,\underline{\vartheta}^\alpha \,, u^\alpha\,)\,,\label{uargsbis}$$ and with the pertinent definitions analogous to (\[Uder01\]) and (\[Uder02\]), we first get $$\begin{aligned} {\it l}_u\,\widehat{\mathfrak{U}} &=& \widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\it l}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\it l}_u u^\alpha\nonumber\\ &&-F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\ &&-T_{\bot}^\alpha\wedge{\it l}_u P_\alpha +\underline{T}^\alpha\wedge{\it l}_u M_\alpha\nonumber\\ &&-R_{\bot}^{\alpha\beta}\wedge{\it l}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\it l}_u M_{\alpha\beta}\,,\label{uLiederiv02bis}\end{aligned}$$ and finally, suitably rearranging the noncovariant quantities in (\[uLiederiv02bis\]) into covariant ones defined in analogy to (\[thetaLiederiv01\]), we arrive at $$\begin{aligned} {\it l}_u\,\widehat{\mathfrak{U}} &=& \widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha\nonumber\\ &&-F_{\bot}\wedge{\it l}_u P +\underline{F}\wedge{\it l}_u M\nonumber\\ &&-T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\ &&-R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u
M_{\alpha\beta}\nonumber\\ &&+\Gamma _{\bot}^{\alpha\beta}\Bigl[\,D\,\bigl(\tau ^{\rm (tot)}_{\alpha\beta} -\tau _{\alpha\beta}\bigr) +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]{\rm (tot)}}\Bigr]_{\bot} \,.\label{uLiederiv04bis}\end{aligned}$$ Assuming that the analogue of (\[spincurrconserv\]) holds for generalized matter, that is $$D\,\tau ^{\rm (tot)}_{\alpha\beta} +\vartheta _{[\alpha}\wedge\Sigma ^{\rm matt}_{\beta ]{\rm (tot)}} =0\,,\label{totspinconserv}$$ it follows from (\[diff03bis\]) with (\[totintenergy\]) and (\[uLiederiv04bis\]) that $$\begin{aligned} d\,\widehat{\epsilon}^{\rm u} &=& d\tau\wedge\Bigl[\,{\it l}_u\,\widehat{\mathfrak{U}} -\widehat{T}\,{\it l}_u\widehat{\mathfrak{s}} -F_{\bot}\wedge j + R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot}\nonumber\\ &&\hskip1.5cm +\Gamma _{\bot}^{\alpha\beta}\bigl(\,D\,\tau _{\alpha\beta}\bigr)_{\bot}\,\Bigr]\,,\label{diff01tot}\end{aligned}$$ giving rise, when compared with the differential of (\[intenergycurr02bis\]), to the second law of thermodynamics with exactly the same form as (\[secondlaw\]).
Regarding the first law (\[diff03bis\]) with (\[totintenergy\])–(\[uargsbis\]), taking (\[enmom01\]) as before and using the notation (\[Max02\]), it takes the form $$\begin{aligned} {\it l}_u\,\widehat{\mathfrak{U}} -\underline{d}\,\widehat{q} &=& -\bigl(\,{\cal \L\/}_u\,\vartheta ^\alpha\wedge\Sigma ^{\rm undef}_\alpha \bigr)_{\bot} +R_{\bot}^{\alpha\beta}\wedge{\tau _{\alpha\beta}}_{\bot} +E\wedge j\nonumber\\ &&-p\,{\it l}_u\overline{\eta}+E\wedge{\it l}_u P +B\wedge{\it l}_u M \nonumber\\ &&-T_{\bot}^\alpha\wedge{\cal \L\/}_u P_\alpha +\underline{T}^\alpha\wedge{\cal \L\/}_u M_\alpha\nonumber\\ &&-R_{\bot}^{\alpha\beta}\wedge{\cal \L\/}_u P_{\alpha\beta} +\underline{R}^{\alpha\beta}\wedge{\cal \L\/}_u M_{\alpha\beta}\,,\label{firstlaw01bis}\end{aligned}$$ which only differs from (\[firstlaw01\]) in the additional work contributions corresponding to the gravitational generalizations of polarization and magnetization.

Final remarks
=============

Gravity and conservation of total energy
----------------------------------------

Let us examine the role played by gravity in the conservation of energy. In our approach, the first law of thermodynamics can take alternatively the forms (\[emmattender\]) or (\[diff01\]), concerning the matter energy current either in the form $\epsilon ^{\rm matt}$ or $\epsilon ^{\rm u}$. Differentiation of such matter energy currents generates work expressions, which act physically by being transformed into different forms of energy. Thus mechanical work can produce electric effects, etc. However, these subsequent transformations are not explicitly shown by the thermodynamic equation (\[diff01\]).
Nor is the sum of the matter and electromagnetic energy currents conserved separately, since the addition of (\[mattender\]) and (\[emender\]) yields $$\begin{aligned} d\,(\epsilon ^{\rm matt}+\epsilon ^{\rm em}) &&= -{\cal \L\/}_u\,\vartheta ^\alpha\wedge (\,\Sigma ^{\rm matt}_\alpha +\Sigma ^{\rm em}_\alpha \,) -R_{\bot}^{\alpha\beta}\wedge\tau _{\alpha\beta}\nonumber\\ &&\neq 0\,.\label{energyconserv02}\end{aligned}$$ Conservation of energy in an absolute sense, with all possible transformations of different forms of energy into each other taken into account, requires including the gravitational energy as well. Indeed, from (\[energyconserv01\]) with (\[energydec\]) we get $$d\,(\epsilon ^{\rm matt}+\epsilon ^{\rm em}+\epsilon ^{\rm gr})=0\,.\label{energyconserv03}$$ This conservation equation, concerned with all forms of energy simultaneously, completes the first law of thermodynamics (\[diff01\]), which concentrates on the behavior of only the matter energy current $\epsilon ^{\rm u}$. The total energy flux $\epsilon _{\bot}$ in (\[energyconserv03\]) includes heat flux, Poynting flux in a strict sense and other Poynting-like contributions. The integrated form (\[exactform02\]) of (\[energyconserv03\]) can be seen as a sort of generalized Bernoulli’s principle. Thermal radiation ----------------- The formalism is not necessarily restricted to gauge-theoretically derived forms of energy. It is flexible enough to deal with other thermodynamic approaches, as is the case for thermal radiation, the latter being described not in terms of electromagnetic fields but as a photon gas [@Prigogine] [@Demirel]. A body in thermal equilibrium is modeled as a cavity filled with a gas of thermal photons in continuous inflow and outflow. 
The number of photons, the internal energy and the entropy contained in the cavity, the pressure of thermal radiation on the walls and the chemical potential are all functions of the temperature, being respectively given by $$\begin{aligned} \mathcal{N} &=& \alpha\,T^3\,\overline{\eta}\,,\label{photgas01}\\ \mathfrak{U} &=& \beta\,T^4\,\overline{\eta}\,,\label{photgas02}\\ T \mathfrak{s} &=& {4\over 3}\,\mathfrak{U}\,,\label{photgas03}\\ p\,\overline{\eta} &=& {1\over 3}\,\mathfrak{U}\,,\label{photgas04}\\ \mu &=& 0\,.\label{photgas05}\end{aligned}$$ The quantities (\[photgas01\])–(\[photgas05\]) automatically satisfy the relation $${\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} -p\,{\it l}_u \overline{\eta}\,,\label{uLiederiv09}$$ which constitutes a particular case of the thermodynamic equations found above. Indeed, Eq. (\[uLiederiv03\]) with vanishing $P$, $M$ and $\tau _{\alpha\beta}$ reduces to $$\begin{aligned} {\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} +{\Sigma _{\alpha}}_{\bot}^{\rm matt}\wedge{\cal \L\/}_u\underline{\vartheta}^\alpha -\underline{\Sigma}^{\rm matt}_\alpha\,{\cal \L\/}_u u^\alpha \,.\label{uLiederiv07}\end{aligned}$$ By handling the photon gas as matter, and taking for it an energy-momentum (\[enmom03\]) with $\widetilde{\Sigma}^{\rm undef}_\alpha =0$ as $$\Sigma ^{\rm matt}_\alpha = u_\alpha\,\epsilon ^{\rm matt} -d\tau\wedge p\,\overline{\eta}_\alpha\,,\label{enmom04}$$ replacement of (\[enmom04\]) in (\[uLiederiv07\]) yields $${\it l}_u\,\mathfrak{U} = T\,{\it l}_u\mathfrak{s} -p\,{\it l}_u \overline{\eta} +\epsilon ^{\rm matt}_{\bot}\wedge u_\alpha\,T_{\bot}^\alpha\,,\label{uLiederiv08}$$ from where, for vanishing torsion, (\[uLiederiv09\]) follows. 
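As a purely illustrative numerical check (not part of the derivation), the photon-gas relations (\[photgas01\])–(\[photgas05\]) satisfy the Euler relation per unit volume, $\mathfrak{U} = T\,\mathfrak{s} - p + \mu\,\mathcal{N}$, and reproduce an entropy of roughly $3.6\,k_B$ per photon; a short sketch using SI values of the constants:

```python
import math

# SI constants (CODATA values); illustrative check only, not part of the paper
k_B = 1.380649e-23          # Boltzmann constant [J/K]
h = 6.62607015e-34          # Planck constant [J s]
c = 2.99792458e8            # speed of light [m/s]
zeta3 = 1.2020569031595943  # Riemann zeta(3)

# coefficients alpha, beta as quoted in the text
alpha = 16 * math.pi * k_B**3 * zeta3 / (c**3 * h**3)
beta = 8 * math.pi**5 * k_B**4 / (15 * c**3 * h**3)

T = 300.0            # arbitrary temperature [K]
N = alpha * T**3     # photon number density
U = beta * T**4      # internal energy density
s = 4 * U / (3 * T)  # entropy density
p = U / 3            # radiation pressure
mu = 0.0             # chemical potential of the photon gas

# Euler relation per unit volume: U = T s - p + mu N
assert abs(U - (T * s - p + mu * N)) <= 1e-12 * U

# entropy per photon, s / (k_B N) = 4 beta / (3 alpha k_B)
print(round(s / (k_B * N), 2))  # 3.6
```

The Euler relation holds identically here because $T\,\mathfrak{s} - p = \tfrac{4}{3}\mathfrak{U} - \tfrac{1}{3}\mathfrak{U} = \mathfrak{U}$ and $\mu = 0$.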
On the other hand, for thermal radiation, the second law (\[secondlaw\]) reduces [@Prigogine] to that of reversible processes $${\it l}_u\mathfrak{s} -{{\underline{d}\,q}\over T} = 0\,,\label{revsecondlaw}$$ and since the number of photons (\[photgas01\]) inside the cavity is in general not constant, we propose for this quantity the continuity equation $${\it l}_u\mathcal{N} +\underline{d} j_{_N} = \sigma _{_N}\,,\label{photnumber}$$ where we introduce $j_{_N}$ as the photon flux and $\sigma _{_N}$ as the rate of photon creation or destruction. Now, from (\[photgas01\])–(\[photgas03\]), replacing the values $$\alpha ={{16\,\pi\,k_B^3\,\zeta (3)}\over{c^3\,h^3}}\,,\qquad \beta ={{8\,\pi ^5\,k_B^4}\over{15\,c^3\,h^3}}\,,\label{alphabeta01}$$ where $\zeta $ is the Riemann zeta function, with $\zeta (3)\approx 1.202$, and $k_B$ is the Boltzmann constant, we get the relation $$\mathfrak{s} = {4\over 3}\,{\mathfrak{U}\over T} = {{4\beta}\over{3\alpha}}\,\mathcal{N}\approx 3.6\,k_B\,\mathcal{N}\,,\label{alphabeta02}$$ so that (\[revsecondlaw\]) with (\[alphabeta02\]) yields $$\underline{d}\,q = T\,{\it l}_u\mathfrak{s} \approx 3.6\,k_B\,T\,{\it l}_u\mathcal{N}\,.\label{photheatflux}$$ With (\[photnumber\]), Eq. (\[photheatflux\]) transforms into $$\underline{d}\,q \approx 3.6\,k_B\,T\,(\sigma _{_N} -\underline{d} j_{_N})\,.\label{fluxrelat}$$ According to (\[fluxrelat\]), the divergence of the heat flux $q$ of thermal radiation is proportional to the divergence of the photon flux $j_{_N}$ continuously emitted and absorbed by a body, and it also depends on possible additional contributions $\sigma _{_N}$ due to photon production or destruction. Conclusions =========== We propose an approach to thermodynamics compatible with gauge theories of gravity and beyond. 
Indeed, the formalism developed in the present paper is explicitly covariant under local Lorentz transformations except for the symmetry-breaking terms present in (\[secondlaw\]) and (\[diff01tot\]) (which vanish for $\tau _{\alpha\beta}=0$). Moreover, both local translational symmetry and local $U(1)$ symmetry are also present in our equations as hidden symmetries, due to the particular realization of the Poincaré$\otimes U(1)$ gauge group used to derive the field equations and Noether identities which constituted our starting point [@Tresguerres:2007ih] [@Tresguerres:2002uh] [@Tresguerres:2012nu]. In particular, the thermodynamic equations, concerned with the exchange between different forms of energy, are both Poincaré and $U(1)$ gauge invariant. The laws of thermodynamics deduced here concentrate on the conservation of the matter energy current $\epsilon ^{\rm matt}$ (or, equivalently, $\epsilon ^{\rm u}$), but in addition we complete the scheme by accounting for the conservation of total energy, as discussed in Sec. X. In this way we synthesize the total energy balance in classical physics of material media. Eta basis and its foliation =========================== Four-dimensional formulas ------------------------- The eta basis consists of the Hodge duals of exterior products of tetrads. 
One defines $$\begin{aligned} \eta &:=&\,^*1 ={1\over{4!}}\,\eta _{\alpha\beta\gamma\delta}\,\vartheta ^\alpha\wedge\vartheta ^\beta\wedge\vartheta ^\gamma\wedge\vartheta ^\delta\,,\label{eta4form}\\ \eta ^\alpha &:=&\,^*\vartheta ^\alpha ={1\over{3!}}\,\eta ^\alpha{}_{\beta\gamma\delta} \,\vartheta ^\beta\wedge\vartheta ^\gamma\wedge\vartheta ^ \delta\,,\label{antisym3form}\\ \eta ^{\alpha\beta}&:=&\,^*(\vartheta ^\alpha\wedge\vartheta ^\beta\,)={1\over{2!}}\,\eta ^{\alpha\beta}{}_{\gamma\delta}\,\vartheta ^\gamma\wedge\vartheta ^\delta\,,\label{antisym2form}\\ \eta ^{\alpha\beta\gamma}&:=&\,^*(\vartheta ^\alpha\wedge\vartheta ^\beta\wedge\vartheta ^\gamma\,)=\,\eta ^{\alpha\beta\gamma}{}_\delta\,\vartheta ^\delta\,,\label{antisym1form}\end{aligned}$$ with $$\eta ^{\alpha\beta\gamma\delta}:=\,^*(\vartheta ^\alpha\wedge\vartheta ^\beta\wedge\vartheta ^\gamma\wedge\vartheta ^ \delta\,)\,,\label{levicivita}$$ as the Levi-Civita antisymmetric object, and where (\[eta4form\]) is the four-dimensional volume element. With tetrads $\vartheta ^\alpha$ chosen to be a basis of the cotangent space, an arbitrary $p$-form $\alpha$ takes the form $$\alpha ={1\over{p\,!}}\,\vartheta ^{\alpha _1}\wedge ...\wedge\vartheta ^{\alpha _p}\,(e_{\alpha _p}\rfloor ... e_{\alpha _1}\rfloor\alpha\,)\,.\label{pform}$$ Its Hodge dual is expressed in terms of the eta basis (\[eta4form\])–(\[levicivita\]) as $$\,{}^*\alpha ={1\over{p\,!}}\,\eta ^{\alpha _1 ... \alpha _p}\,(e_{\alpha _p}\rfloor ... e_{\alpha _1}\rfloor\alpha\,)\,.\label{dualform}$$ Comparison of the variations of (\[pform\]) with those of (\[dualform\]) yields the relation $$\delta \,{}^*\alpha =\,{}^*\delta\alpha -{}^*\left(\delta\vartheta ^\alpha\wedge e_\alpha\rfloor\alpha\,\right) +\delta\vartheta ^\alpha\wedge\left( e_\alpha\rfloor {}^*\alpha\,\right)\,,\label{dualvar}$$ analogous to the three-dimensional identities (\[formula01\]) and (\[formula02\]) used in the main text. 
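The contraction formulas built on the eta basis ultimately rest on the algebra of the Levi-Civita symbol. As an illustrative flat-space sketch (an assumption made only for this check: the three-dimensional Euclidean symbol with Kronecker deltas in place of the projector), the basic contraction identity and the vanishing of an expression antisymmetrized over four indices in three dimensions can be verified numerically:

```python
import numpy as np

# 3D Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

delta = np.eye(3)

# contraction identity: eps_abc eps_dec = delta_ad delta_be - delta_ae delta_bd
lhs = np.einsum('abc,dec->abde', eps, eps)
rhs = np.einsum('ad,be->abde', delta, delta) - np.einsum('ae,bd->abde', delta, delta)
assert np.allclose(lhs, rhs)

# antisymmetrizing four indices in three dimensions gives zero:
# delta^m_a eps_bcd - delta^m_d eps_abc + delta^m_c eps_dab - delta^m_b eps_cda = 0
zero = (np.einsum('ma,bcd->mabcd', delta, eps)
        - np.einsum('md,abc->mabcd', delta, eps)
        + np.einsum('mc,dab->mabcd', delta, eps)
        - np.einsum('mb,cda->mabcd', delta, eps))
assert np.allclose(zero, 0)
```

The second assertion is the flat-space counterpart of the four-term identity that appears among the foliated contraction relations: with four antisymmetrized indices ranging over only three values, at least two must coincide, so every term cancels.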
Foliated eta basis ------------------ Let us now make use of (\[foliat1\]) and (\[tetradfoliat\]) to calculate $$\begin{aligned} \vartheta ^\alpha &=& d\tau\,u^\alpha + \underline{\vartheta}^\alpha\,,\label{teth01}\\ \vartheta ^\alpha\wedge\vartheta ^\beta &=& d\tau\,\Bigl( u^\alpha\,\underline{\vartheta}^\beta - u^\beta\,\underline{\vartheta}^\alpha \Bigr) + \underline{\vartheta}^\alpha\wedge\underline{\vartheta}^\beta\,,\label{teth02}\end{aligned}$$ etc. Taking then the Hodge duals of (\[teth01\]), (\[teth02\]) etc., we find the foliated version of (\[eta4form\])–(\[levicivita\]), that is $$\begin{aligned} \eta &=& d\tau\wedge\overline{\eta}\,,\label{eta04}\\ \eta ^\alpha &=& -d\tau\wedge\overline{\eta}^\alpha - u^\alpha\,\overline{\eta}\,,\label{eta03}\\ \eta ^{\alpha\beta} &=& d\tau\wedge\overline{\eta}^{\alpha\beta} -\Bigl( u^\alpha\,\overline{\eta}^\beta - u^\beta\,\overline{\eta}^\alpha \Bigr)\,,\label{eta02}\\ \eta ^{\alpha\beta\gamma} &=& -d\tau\,\epsilon ^{\alpha\beta\gamma} -\Bigl( u^\alpha\,\overline{\eta}^{\beta\gamma} + u^\gamma\,\overline{\eta}^{\alpha\beta} + u^\beta\,\overline{\eta}^{\gamma\alpha}\Bigr)\,,\nonumber\\ \label{eta01}\\ \eta ^{\alpha\beta\gamma\delta}&=& -\Bigl( u^\alpha\,\epsilon ^{\beta\gamma\delta}-u^\delta\,\epsilon ^{\alpha\beta\gamma}+ u^\gamma\,\epsilon ^{\delta\alpha\beta}- u^\beta\,\epsilon ^{\gamma\delta\alpha}\Bigr)\,,\nonumber\\ \label{eta00}\end{aligned}$$ where $$\begin{aligned} \overline{\eta}&:=& \Bigl( u\rfloor \eta \Bigr) ={1\over{3!}}\,\epsilon _{\alpha\beta\gamma}\, \underline{\vartheta}^\alpha\wedge\underline{\vartheta}^\beta\wedge\underline{\vartheta}^\gamma ={}^{\#}1 \,,\label{3deta06}\\ \overline{\eta}^\alpha &:=&-\Bigl( u\rfloor \eta ^{\alpha}\Bigr) ={1\over{2!}}\,\epsilon ^\alpha{}_{\beta\gamma}\,\underline{\vartheta}^\beta\wedge\underline{\vartheta}^\gamma ={}^{\#}\underline{\vartheta}^\alpha \,,\label{3deta07}\\ \overline{\eta}^{\alpha\beta}&:=&\Bigl( u\rfloor \eta ^{\alpha\beta}\Bigr) =\,\epsilon 
^{\alpha\beta}{}_{\gamma}\,\underline{\vartheta}^\gamma ={}^{\#}(\underline{\vartheta}^\alpha\wedge\underline{\vartheta}^\beta\,)\,,\label{3deta08}\\ \epsilon ^{\alpha\beta\gamma}&:=&-\Bigl( u\rfloor \eta ^{\alpha\beta\gamma}\Bigr) =\,u_\mu\,\eta ^{\mu\alpha\beta\gamma}={}^{\#}(\underline{\vartheta}^\alpha\wedge\underline{\vartheta}^\beta\wedge\underline{\vartheta}^\gamma\,)\,,\nonumber\\ \label{3deta09}\end{aligned}$$ being (\[3deta06\]) the three-dimensional volume element, such that $\overline{\eta} = u^\alpha\,\eta _\alpha$. Making use of (\[thetaLiederiv01\])–(\[thetaLiederiv04\]), (\[3deta06\]) and (\[3deta07\]), one can prove that the Lie derivative of this volume can be decomposed as $$\begin{aligned} {\it l}_u \overline{\eta} ={\cal \L\/}_u\underline{\vartheta}^\alpha\wedge\overline{\eta}_\alpha\,.\label{volLieder}\end{aligned}$$ On the other hand, the contractions between tetrads and eta basis in four dimensions (see for instance [@Hehl:1995ue]), when foliated reduce to $$\begin{aligned} \underline{\vartheta}^\mu\wedge\overline{\eta}_\alpha &=& h^\mu{}_\alpha\,\overline{\eta}\,,\label{rel04bis}\\ \underline{\vartheta}^\mu\wedge\overline{\eta}_{\alpha\beta} &=& -h^\mu{}_\alpha\,\overline{\eta}_\beta +h^\mu{}_\beta\,\overline{\eta}_\alpha\,,\label{rel03bis}\\ \underline{\vartheta}^\mu\,\epsilon _{\alpha\beta\gamma} &=& h^\mu{}_\alpha\,\overline{\eta}_{\beta\gamma} +h^\mu{}_\gamma\,\overline{\eta}_{\alpha\beta} +h^\mu{}_\beta\,\overline{\eta}_{\gamma\alpha}\,,\label{rel02bis}\\ 0 &=& -h^\mu{}_\alpha\,\epsilon _{\beta\gamma\delta} + h^\mu{}_\delta\,\epsilon _{\alpha\beta\gamma} -h^\mu{}_\gamma\,\epsilon _{\delta\alpha\beta} + h^\mu{}_\beta\,\epsilon _{\gamma\delta\alpha}\,.\nonumber\\ \label{rel01bis}\end{aligned}$$ Taking (\[dualitycondbis\]) into account, we also find $$\begin{aligned} e_\alpha\rfloor\overline{\eta}&=&\overline{\eta}_\alpha\,,\label{contract02}\\ e_\alpha\rfloor\overline{\eta}_{\beta}&=&\overline{\eta}_{\beta\alpha}\,,\label{contract03}\\ 
e_\alpha\rfloor\overline{\eta}_{\beta\gamma}&=&\epsilon _{\beta\gamma\alpha}\,.\label{contract04}\end{aligned}$$ In view of definition (\[3deta09\]), the contraction of all objects (\[3deta07\])-(\[3deta09\]) with $u_\alpha$ vanishes. From (\[3deta07\]) it then follows that $0=u_\alpha\,\overline{\eta}^\alpha ={}^{\#}(u_\alpha\,\underline{\vartheta}^\alpha )\,$, thus implying $u_\alpha\,\underline{\vartheta}^\alpha =0\,$. Plausibility argument ===================== Let us argue here against the seemingly [*ad hoc*]{} character of Eqs. (\[Uder02\]), namely $${{\partial\mathfrak{U}}\over{\partial\underline{\vartheta}^\alpha}}={\Sigma _{\alpha}}_{\bot}^{\rm matt}\,,\qquad {{\partial\mathfrak{U}}\over{\partial u^\alpha}}=-\underline{\Sigma}^{\rm matt}_\alpha\,,\label{condit1bbb}$$ showing that, in fact, the internal energy 3-form $\mathfrak{U}$ inherits properties of the original matter Lagrangian, in particular of $L^{\rm matt}_{\bot}$. First we notice that, according to (\[intenergycurr02\]), $\mathfrak{U}$ is the transversal part of the internal energy current $\epsilon ^{\rm u}$ defined in (\[intenergycurr01\]) as proportional to $\epsilon ^{\rm matt}$. On the other hand, from the fundamental matter energy-momentum 3-form (\[mattenergy\]) with (\[sigmamatt\]) it follows that $$\epsilon ^{\rm matt} =\overline{{\cal \L\/}_u\psi}\,\,{{\partial L}\over{\partial d\overline{\psi}}} -{{\partial L}\over{\partial d\psi}}\,\,{\cal \L\/}_u\psi -L^{\rm matt}_{\bot}\,,\label{expmattenergy}$$ so that, at least for Dirac matter, we get $\mathfrak{U}= -L^{\rm matt}_{\bot} +$ additional terms. According to this relation, Eqs. (\[condit1bbb\]) should resemble the analogous derivatives of $L^{\rm matt}_{\bot}$. In order to calculate them, we make use of the following result proved in [@Tresguerres:2007ih]. 
When considering the foliated Lagrangian density form $L = d\tau\wedge L_{\bot}\,$, depending on the longitudinal and transversal parts of any dynamical variable $Q = d\tau\wedge Q_{\bot} + \underline{Q}\,$, Eq.(D14) of [@Tresguerres:2007ih] establishes that $${{\partial L}\over{\partial Q}} = (-1)^p\, d\tau\wedge{{\partial L_{\bot}}\over{\partial\underline{Q}}}+{{\partial L_{\bot}}\over{\partial Q_{\bot}}}\,,\label{condit1}$$ with $p$ standing for the degree of the $p$-form $Q$. In view of (\[condit1\]), the matter energy-momentum 3-form defined in (\[momentdecompbis\]) decomposes as $$\begin{aligned} \Sigma ^{\rm matt}_\alpha := {{\partial L^{\rm matt}}\over{\partial \vartheta ^\alpha}} = -d\tau\wedge{{\partial L_{\bot}^{\rm matt}}\over{\partial\underline{\vartheta}^\alpha}}+{{\partial L_{\bot}^{\rm matt}}\over{\partial u^\alpha}}\,,\label{condit1b}\end{aligned}$$ implying $${{\partial L_{\bot}^{\rm matt}}\over{\partial\underline{\vartheta}^\alpha}} = -{\Sigma _{\alpha}}_{\bot}^{\rm matt}\,,\qquad {{\partial L_{\bot}^{\rm matt}}\over{\partial u^\alpha}} = \underline{\Sigma}^{\rm matt}_\alpha\,,\label{condit1bb}$$ which reproduce the form of (\[condit1bbb\]), provided $\mathfrak{U}= -L^{\rm matt}_{\bot}$ as suggested above. R. Tresguerres, Translations and dynamics, Int.J.Geom.Meth.Mod.Phys. [**05**]{} (2008) 905-945, arXiv:gr-qc/0707.0296. F.W. Hehl, G.D. Kerlick and P. Von der Heyde, General relativity with spin and torsion and its deviations from Einstein’s theory, Phys. Rev. [**D10**]{} (1974) 1066-1069. F.W. Hehl, P. Von der Heyde, G.D. Kerlick and J.M. Nester, General Relativity with spin and torsion: Foundations and prospects, Rev. Mod. Phys. [**48**]{} (1976) 393-416. F.W. Hehl, Four lectures on Poincaré gauge field theory, given at 6th Course of Int. School of Cosmology and Gravitation, Erice, Italy, 6-18 May 1979, eds. P.G. Bergmann and V. de Sabbata (New York: Plenum, 1980). F. Gronwald, Metric-affine gauge theory of gravity. 
I: Fundamental structure and field equations, Int. J. Mod. Phys. [**D 06**]{} (1997) 263-304, gr-qc/9702034. F.W. Hehl, J.D. McCrea and E.W. Mielke and Y. Neeman, Metric affine gauge theory of gravity: Field equations, Noether identities, world spinors, and breaking of dilation invariance, Phys. Rept. [**258**]{} (1995) 1-171, gr-qc/9402012. R.D. Hecht, Conserved quantities in the Poincaré gauge theory of gravitation (in German), Ph.D. Thesis, University of Cologne, 1993. Y.N. Obukhov, Poincaré gauge gravity: Selected topics, Int. J. Geom. Meth. Mod. Phys. [**03**]{} (2006) 95-138, gr-qc/0601090. H.B. Callen, Thermodynamics and an introduction to thermostatistics, (John Wiley $\&$ Sons, New York, Chichester, Brisbane, Toronto, Singapore, 1985). L.D. Landau and E. M. Lifshitz, Fluid Mechanics, Addison Wesley, Reading, Mass. (1958). W. Israel, Nonstationary irreversible thermodynamics: A causal relativistic theory, Annals Phys. [**100**]{} (1976) 310-331. W. Israel and J.M. Stewart, Transient relativistic thermodynamics and kinetic theory, Annals Phys. [**118**]{} (1979) 341-372. B. Carter, Convective variational approach to relativistic thermodynamics of dissipative fluids, Proc. Roy. Soc. Lond. [**A433**]{} (1991) 45. D. Priou, Comparison between variational and traditional approaches to relativistic thermodynamics of dissipative fluids, Phys. Rev. [**D43**]{} (1991) 1223. C. Eckart, The Thermodynamics of irreversible processes, III. Relativistic theory of the simple fluid, Phys.Rev. [**58**]{} (1940) 919-924. R. Tresguerres, Unified description of interactions in terms of composite fiber bundles, Phys. Rev. [**D66**]{} (2002) 064025. R. Tresguerres, Motion in gauge theories of gravity, Int.J.Geom.Meth.Mod.Phys. [**10**]{} (2013) 1250085, arXiv:gr-qc/1202.2569. F.W. Hehl and Y.N. Obukhov, Foundations of Classical Electrodynamics, (Birkhauser Boston, Basel, Berlin, 2003). J.D. 
McCrea, Irreducible decompositions of non-metricity, torsion, curvature and Bianchi identities in metric-affine spacetimes, Class. Quant. Grav. [**9**]{} (1992) 553-568. F.W. Hehl and Y.N. Obukhov, Electromagnetic energy-momentum and forces in matter, Phys.Lett. [**A311**]{} (2003) 277-284, arXiv:physics/0303097v1 D. Kondepudi and I. Prigogine, Modern thermodynamics, From heat engines to dissipative structures, (John Wiley $\&$ Sons, New York, 1998). Y. Demirel, Nonequilibrium thermodynamics: Transport and rate processes in physical and biological systems, (Elsevier Science $\&$ Technology Books, Amsterdam, 2002). [^1]: The definition of spin current given in Eq.(61) of Reference [@Tresguerres:2007ih] differs from the present one due to the fact that there we considered an internal structure for the tetrads, with a particular dependence on $\Gamma ^{\alpha\beta}$, giving rise to additional terms. The latter ones are not present when the internal structure of the tetrads is ignored, as is the case here. [^2]: The covariant differentials in (\[covfieldeq2\]) and (\[covfieldeq3\]) are defined as $$DH_\alpha := dH_\alpha -\Gamma _\alpha{}^\beta\wedge H_\beta\,,$$ and $$DH_{\alpha\beta} := dH_{\alpha\beta} -\Gamma _\alpha{}^\gamma\wedge H_{\gamma\beta} -\Gamma _\beta{}^\gamma\wedge H_{\alpha\gamma}\,,$$ respectively.
Mandy Colleran Mandy Colleran (born 7 July 1962) is a comic, writer, actress and disability arts activist. Career Mandy Colleran has been involved in disability arts since the 1980s. She is a member of the comedy trio No Excuses along with Mandy Redvers-Rowe and Ali Briggs. In 1986 Colleran became Joint Development Officer of Arts Integration Merseyside (AIM) with John McGrath; it later became North West Disability Arts Forum (NWDAF). In 1990 Colleran became a director of NWDAF. Credits Stage 2009 DaDaFest Awards, Liverpool (co-presenter) 2017 In Water I'm Weightless by Kaite O'Reilly. Directed by John McGrath, movement by Nigel Charnock. Television 1995 The Alphabet Soup Show (BBC) Film Awards 2007 Lifetime Achievement award from DaDaFest Further reading References External links Category:1962 births Category:20th-century English actresses Category:21st-century English actresses Category:21st-century British women writers Category:Actresses from Liverpool Category:British feminists Category:British people with disabilities Category:British stand-up comedians Category:Disability rights activists from the United Kingdom Category:English stage actresses Category:English women comedians Category:Living people Category:People from Liverpool Category:Socialist feminists
Hyaluronan tetrasaccharide in the cerebrospinal fluid is associated with self-repair of rats after chronic spinal cord compression. The objective of this study was to explore changes in hyaluronan levels in the cerebrospinal fluid (CSF) in a spinal cord compression model, to investigate whether hyaluronan tetrasaccharide was involved in this process, and to test the effects of hyaluronan tetrasaccharide on neuron and oligodendrocyte repair. We developed a chronic spinal cord compression model with various sizes of polymer sheets (1.5×0.7×0.3 mm(3); 5×1.5×0.7 mm(3)) that were implanted microsurgically underneath the C(5-6) laminae. The rats were divided into three groups: a sham group, a mildly compressed (MC) group, and a widely compressed (WC) group. Locomotor functional evaluations revealed that the behavioral function of the MC and WC groups dropped to their lowest level from the fourth to fifth week and gradually recovered thereafter. The hyaluronan levels in the CSF gradually increased after spinal cord compression. Furthermore, hyaluronan tetrasaccharide was involved in the hyaluronan change. In addition, we found that nuclear factor kappa B (NF-κB) and cellular inhibitor-of-apoptosis protein 2 (c-IAP(2)) were co-expressed in neurons and oligodendrocytes, and caspase-3 expression gradually decreased in the compression model. The brain-derived neurotrophic factor (BDNF) and vascular endothelial growth factor (VEGF) expression was upregulated in astrocytes at the fourth week post-compression. Hyaluronan tetrasaccharide (HA(4)) induced NF-κB and c-IAP(2) to suppress the H(2)O(2)-induced apoptosis in primary neuronal cultures and increased BDNF and VEGF expression in astrocytic cultures in vitro. These findings suggest that HA(4) in the CSF may associate with behavioral recovery by increasing the levels of NF-κB, c-IAP(2), and neurotrophic factors after chronic spinal cord compression.
// sst.h
// Copyright (c) 2014 - 2017, zhiayang.
// Licensed under the Apache License Version 2.0.

#pragma once

#include "defs.h"
#include "sst_expr.h"

#include "mpreal/mpreal.h"

namespace fir
{
	struct Type;
	struct ClassType;
	struct FunctionType;

	struct Function;
	struct ConstantValue;
}

namespace cgn
{
	struct CodegenState;
}

namespace ast
{
	struct FuncDefn;
	struct TypeDefn;
}

namespace sst
{
	//! ACHTUNG !
	//* note: this is the thing that everyone calls to check the mutability of a slice of something
	//* defined in typecheck/slice.cpp
	bool getMutabilityOfSliceOfType(fir::Type* ty);

	struct StateTree;

	struct Block;
	struct HasBlocks
	{
		HasBlocks() { }
		virtual ~HasBlocks() { }
		virtual std::vector<Block*> getBlocks() = 0;

		bool elideMergeBlock = false;
	};

	struct TypeDefn : Defn
	{
		TypeDefn(const Location& l) : Defn(l) { this->readableName = "type definition"; }
		~TypeDefn() { }

		ast::TypeDefn* original = 0;
	};

	struct TypeExpr : Expr
	{
		virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

		//* allows us to intern this, so we don't leak memory.
static TypeExpr* make(const Location& l, fir::Type* t); TypeExpr(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "<TYPE EXPRESSION>"; } ~TypeExpr() { } }; struct RawValueExpr : Expr { RawValueExpr(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "<RAW VALUE EXPRESSION>"; } ~RawValueExpr() { } virtual CGResult _codegen(cgn::CodegenState*, fir::Type* = 0) override { return this->rawValue; } CGResult rawValue; }; struct ArgumentDefn; struct Block : Stmt { Block(const Location& l) : Stmt(l) { this->readableName = "block"; } ~Block() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; Location closingBrace; bool isSingleExpr = false; std::vector<Stmt*> statements; std::vector<Stmt*> deferred; std::function<void ()> preBodyCode; std::function<void ()> postBodyCode; }; struct IfStmt : Stmt, HasBlocks { IfStmt(const Location& l) : Stmt(l) { this->readableName = "if statement"; } ~IfStmt() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; virtual std::vector<Block*> getBlocks() override; struct Case { Expr* cond = 0; Block* body = 0; std::vector<Stmt*> inits; Case(Expr* c, Block* b, const std::vector<Stmt*>& i) : cond(c), body(b), inits(i) { } }; std::vector<Case> cases; Block* elseCase = 0; }; struct ReturnStmt : Stmt { ReturnStmt(const Location& l) : Stmt(l) { this->readableName = "return statement"; } ~ReturnStmt() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; Expr* value = 0; fir::Type* expectedType = 0; }; struct WhileLoop : Stmt, HasBlocks { WhileLoop(const Location& l) : Stmt(l) { this->readableName = "while loop"; } ~WhileLoop() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; virtual std::vector<Block*> getBlocks() override; Expr* cond = 0; Block* body = 0; bool isDoVariant = false; }; struct VarDefn; struct ForeachLoop : Stmt, HasBlocks { ForeachLoop(const Location& l) : 
Stmt(l) { this->readableName = "for loop"; } ~ForeachLoop() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; virtual std::vector<Block*> getBlocks() override; VarDefn* indexVar = 0; DecompMapping mappings; Expr* array = 0; Block* body = 0; }; struct BreakStmt : Stmt { BreakStmt(const Location& l) : Stmt(l) { this->readableName = "break statement"; } ~BreakStmt() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; }; struct ContinueStmt : Stmt { ContinueStmt(const Location& l) : Stmt(l) { this->readableName = "continue statement"; } ~ContinueStmt() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; }; struct SizeofOp : Expr { SizeofOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "sizeof expression"; } ~SizeofOp() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; fir::Type* typeToSize = 0; }; struct TypeidOp : Expr { TypeidOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "sizeof expression"; } ~TypeidOp() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; fir::Type* typeToId = 0; }; struct FunctionDefn; struct AllocOp : Expr { AllocOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "alloc statement"; } ~AllocOp() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; fir::Type* elmType = 0; std::vector<Expr*> counts; std::vector<FnCallArgument> arguments; Defn* constructor = 0; VarDefn* initBlockVar = 0; VarDefn* initBlockIdx = 0; Block* initBlock = 0; bool isMutable = false; }; struct DeallocOp : Stmt { DeallocOp(const Location& l) : Stmt(l) { this->readableName = "free statement"; } ~DeallocOp() { } virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override; Expr* expr = 0; }; struct BinaryOp : Expr { BinaryOp(const Location& l, fir::Type* t) : Expr(l, t) { 
	this->readableName = "binary expression"; }
	~BinaryOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	Expr* left = 0;
	Expr* right = 0;

	std::string op;

	FunctionDefn* overloadedOpFunction = 0;
};

struct UnaryOp : Expr
{
	UnaryOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "unary expression"; }
	~UnaryOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	Expr* expr = 0;
	std::string op;

	FunctionDefn* overloadedOpFunction = 0;
};

struct AssignOp : Expr
{
	AssignOp(const Location& l);
	~AssignOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	std::string op;

	Expr* left = 0;
	Expr* right = 0;
};

//* for the case where we assign to a tuple literal, to enable (a, b) = (b, a) (or really (a, b) = anything)
struct TupleAssignOp : Expr
{
	TupleAssignOp(const Location& l);
	~TupleAssignOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	std::vector<Expr*> lefts;
	Expr* right = 0;
};

struct SubscriptDollarOp : Expr
{
	SubscriptDollarOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "dollar expression"; }
	~SubscriptDollarOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct SubscriptOp : Expr
{
	SubscriptOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "subscript expression"; }
	~SubscriptOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	Expr* expr = 0;
	Expr* inside = 0;
};

struct SliceOp : Expr
{
	SliceOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "slice expression"; }
	~SliceOp() { }

	virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

	Expr* expr = 0;

	Expr* begin = 0;
	Expr* end = 0;
};

struct FunctionCall : Expr
{
	FunctionCall(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "function call"; }
    ~FunctionCall() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string name;
    Defn* target = 0;
    std::vector<FnCallArgument> arguments;
    bool isImplicitMethodCall = false;
};

struct ExprCall : Expr
{
    ExprCall(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "function call"; }
    ~ExprCall() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* callee = 0;
    std::vector<Expr*> arguments;
};

struct StructDefn;
struct ClassDefn;

struct StructConstructorCall : Expr
{
    StructConstructorCall(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "struct constructor call"; }
    ~StructConstructorCall() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    StructDefn* target = 0;
    std::vector<FnCallArgument> arguments;
};

struct ClassConstructorCall : Expr
{
    ClassConstructorCall(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "class constructor call"; }
    ~ClassConstructorCall() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    ClassDefn* classty = 0;
    FunctionDefn* target = 0;
    std::vector<FnCallArgument> arguments;
};

struct BaseClassConstructorCall : ClassConstructorCall
{
    BaseClassConstructorCall(const Location& l, fir::Type* t) : ClassConstructorCall(l, t) { this->readableName = "base class constructor call"; }
    ~BaseClassConstructorCall() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct VarDefn;

struct VarRef : Expr
{
    VarRef(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "identifier"; }
    ~VarRef() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string name;
    Defn* def = 0;
    bool isImplicitField = false;
};

struct SelfVarRef : Expr
{
    SelfVarRef(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "this"; }
    ~SelfVarRef() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct ScopeExpr : Expr
{
    ScopeExpr(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "<SCOPE EXPRESSION>"; }
    ~ScopeExpr() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<std::string> scope;
};

struct FieldDotOp : Expr
{
    FieldDotOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "field access"; }
    ~FieldDotOp() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* lhs = 0;
    std::string rhsIdent;
    bool isMethodRef = false;
    bool isTransparentField = false;
    size_t indexOfTransparentField = 0;
};

struct MethodDotOp : Expr
{
    MethodDotOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "method call"; }
    ~MethodDotOp() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* lhs = 0;
    Expr* call = 0;
};

struct TupleDotOp : Expr
{
    TupleDotOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "tuple access"; }
    ~TupleDotOp() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* lhs = 0;
    size_t index = 0;
};

struct BuiltinDotOp : Expr
{
    BuiltinDotOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "dot operator"; }
    ~BuiltinDotOp() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* lhs = 0;
    std::string name;
    bool isFunctionCall = false;
    std::vector<Expr*> args;
};

struct EnumDefn;

struct EnumDotOp : Expr
{
    EnumDotOp(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "enum case access"; }
    ~EnumDotOp() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string caseName;
    EnumDefn* enumeration = 0;
};

struct LiteralNumber : Expr
{
    LiteralNumber(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "number literal"; }
    ~LiteralNumber() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    mpfr::mpreal num;
};

struct LiteralString : Expr
{
    LiteralString(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "string literal"; }
    ~LiteralString() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string str;
    bool isCString = false;
};

struct LiteralNull : Expr
{
    LiteralNull(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "null literal"; }
    ~LiteralNull() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct LiteralBool : Expr
{
    LiteralBool(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "boolean literal"; }
    ~LiteralBool() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    bool value = false;
};

struct LiteralChar : Expr
{
    LiteralChar(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "character literal"; }
    ~LiteralChar() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    uint32_t value = 0;
};

struct LiteralTuple : Expr
{
    LiteralTuple(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "tuple literal"; }
    ~LiteralTuple() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<Expr*> values;
};

struct LiteralArray : Expr
{
    LiteralArray(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "array literal"; }
    ~LiteralArray() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<Expr*> values;
};

struct RangeExpr : Expr
{
    RangeExpr(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "range expression"; }
    ~RangeExpr() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* start = 0;
    Expr* end = 0;
    Expr* step = 0;
    bool halfOpen = false;
};

struct SplatExpr : Expr
{
    SplatExpr(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "splat expression"; }
    ~SplatExpr() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* infer = 0) override;

    Expr* inside = 0;
};

struct TreeDefn : Defn
{
    TreeDefn(const Location& l) : Defn(l) { this->readableName = "<TREE DEFINITION>"; }
    ~TreeDefn() { }

    virtual std::string getKind() override { return "namespace"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    StateTree* tree = 0;
};

struct NamespaceDefn : Stmt
{
    NamespaceDefn(const Location& l) : Stmt(l) { this->readableName = "namespace"; }
    ~NamespaceDefn() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string name;
    std::vector<Stmt*> statements;
};

struct VarDefn : Defn
{
    VarDefn(const Location& l) : Defn(l) { this->readableName = "variable definition"; }
    ~VarDefn() { }

    virtual std::string getKind() override { return "variable"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* init = 0;
    bool immutable = false;

    FunctionDefn* definingFunction = 0;
};

struct ArgumentDefn : VarDefn
{
    ArgumentDefn(const Location& l) : VarDefn(l) { this->readableName = "<ARGUMENT DEFINITION>"; this->immutable = true; }
    ~ArgumentDefn() { }

    virtual std::string getKind() override { return "argument"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct FunctionDecl : Defn
{
    std::vector<FnParam> params;

    fir::Type* returnType = 0;
    fir::Type* parentTypeForMethod = 0;
    bool isVarArg = false;

    virtual std::string getKind() override { return "function"; }

protected:
    FunctionDecl(const Location& l) : Defn(l) { this->readableName = "function declaration"; }
    ~FunctionDecl() { }
};

struct FunctionDefn : FunctionDecl
{
    FunctionDefn(const Location& l) : FunctionDecl(l) { this->readableName = "function definition"; }
    ~FunctionDefn() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<ArgumentDefn*> arguments;

    // bleh, this exists so we can go *into* the scope to inspect stuff if necessary
    StateTree* insideTree = 0;

    Block* body = 0;
    bool needReturnVoid = false;

    bool isVirtual = false;
    bool isOverride = false;
    bool isMutating = false;

    ast::FuncDefn* original = 0;
};

struct ForeignFuncDefn : FunctionDecl
{
    ForeignFuncDefn(const Location& l) : FunctionDecl(l) { this->readableName = "foreign function definition"; }
    ~ForeignFuncDefn() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    bool isIntrinsic = false;
    std::string realName;
};

struct OperatorOverloadDefn : FunctionDefn
{
    OperatorOverloadDefn(const Location& l) : FunctionDefn(l) { this->readableName = "operator overload definition"; }
    ~OperatorOverloadDefn() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct DecompDefn : Stmt
{
    DecompDefn(const Location& l) : Stmt(l) { this->readableName = "destructuring variable definition"; }
    ~DecompDefn() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* init = 0;
    bool immutable = false;
    DecompMapping bindings;
};

struct StructFieldDefn : VarDefn
{
    StructFieldDefn(const Location& l) : VarDefn(l) { }
    ~StructFieldDefn() { }

    virtual std::string getKind() override { return "field"; }
    virtual CGResult _codegen(cgn::CodegenState*, fir::Type* = 0) override { return CGResult(0); }

    TypeDefn* parentType = 0;
    bool isTransparentField = false;
};

struct ClassInitialiserDefn : FunctionDefn
{
    ClassInitialiserDefn(const Location& l) : FunctionDefn(l) { }
    ~ClassInitialiserDefn() { }

    virtual std::string getKind() override { return "initialiser"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override { return this->FunctionDefn::_codegen(cs, inferred); }
};

struct BareTypeDefn : TypeDefn
{
    BareTypeDefn(const Location& l) : TypeDefn(l) { this->readableName = "type definition"; }
    ~BareTypeDefn() { }

    virtual std::string getKind() override { return "type"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;
};

struct TraitDefn : TypeDefn
{
    TraitDefn(const Location& l) : TypeDefn(l) { this->readableName = "trait definition"; }
    ~TraitDefn() { }

    virtual std::string getKind() override { return "trait"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<FunctionDecl*> methods;
};

struct StructDefn : TypeDefn
{
    StructDefn(const Location& l) : TypeDefn(l) { this->readableName = "struct definition"; }
    ~StructDefn() { }

    virtual std::string getKind() override { return "struct"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::vector<StructFieldDefn*> fields;
    std::vector<FunctionDefn*> methods;
    std::vector<TraitDefn*> traits;
};

struct ClassDefn : StructDefn
{
    ClassDefn(const Location& l) : StructDefn(l) { this->readableName = "class definition"; }
    ~ClassDefn() { }

    virtual std::string getKind() override { return "class"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    ClassDefn* baseClass = 0;

    std::vector<TypeDefn*> nestedTypes;
    std::vector<VarDefn*> staticFields;
    std::vector<FunctionDefn*> staticMethods;
    std::vector<FunctionDefn*> initialisers;

    FunctionDefn* deinitialiser = 0;
    FunctionDefn* copyInitialiser = 0;
    FunctionDefn* moveInitialiser = 0;
};

struct EnumCaseDefn : Defn
{
    EnumCaseDefn(const Location& l) : Defn(l) { this->readableName = "enum case definition"; }
    ~EnumCaseDefn() { }

    virtual std::string getKind() override { return "enum case"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    Expr* val = 0;
    size_t index = 0;
    EnumDefn* parentEnum = 0;
    fir::ConstantValue* value = 0;
};

struct EnumDefn : TypeDefn
{
    EnumDefn(const Location& l) : TypeDefn(l) { this->readableName = "enum definition"; }
    ~EnumDefn() { }

    virtual std::string getKind() override { return "enum"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    fir::Type* memberType = 0;
    util::hash_map<std::string, EnumCaseDefn*> cases;
};

struct RawUnionDefn : TypeDefn
{
    RawUnionDefn(const Location& l) : TypeDefn(l) { this->readableName = "raw union definition"; }
    ~RawUnionDefn() { }

    virtual std::string getKind() override { return "raw union"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    util::hash_map<std::string, StructFieldDefn*> fields;
    std::vector<StructFieldDefn*> transparentFields;
};

struct UnionVariantDefn;

struct UnionDefn : TypeDefn
{
    UnionDefn(const Location& l) : TypeDefn(l) { this->readableName = "union definition"; }
    ~UnionDefn() { }

    virtual std::string getKind() override { return "union"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    util::hash_map<std::string, UnionVariantDefn*> variants;
};

struct UnionVariantDefn : TypeDefn
{
    UnionVariantDefn(const Location& l) : TypeDefn(l) { this->readableName = "union variant definition"; }
    ~UnionVariantDefn() { }

    virtual std::string getKind() override { return "union variant"; }
    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    std::string variantName;
    UnionDefn* parentUnion = 0;
};

struct UnionVariantConstructor : Expr
{
    UnionVariantConstructor(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "union constructor"; }
    ~UnionVariantConstructor() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    size_t variantId = 0;
    UnionDefn* parentUnion = 0;
    std::vector<FnCallArgument> args;
};

struct RunDirective : Expr
{
    RunDirective(const Location& l, fir::Type* t) : Expr(l, t) { this->readableName = "run directive"; }
    ~RunDirective() { }

    virtual CGResult _codegen(cgn::CodegenState* cs, fir::Type* inferred = 0) override;

    // mutually exclusive!
    Block* block = 0;
    Expr* insideExpr = 0;
};
}
{ "pile_set_name": "Github" }
Cheluvamba Hospital

Cheluvamba Hospital for Women and Children (formerly Vanivilas Hospital) in Mysore was established in 1889 by Nalwadi Krishnaraja Wadiyar. It is a tertiary referral center and teaching hospital attached to the Mysore Medical College and Research Institute. It is located on Irwin Road in Mysore, opposite the medical college and in the same campus as Krishnarajendra Hospital.

The hospital offers services in obstetrics, gynecology and paediatrics, and has specialized units providing neonatal care, paediatric surgery, diarrhoeal disease treatment, immunizations and others. It has about 410 beds, including 130 paediatric beds and 280 beds in obstetrics and gynecology, as well as a designated neonatal ward and a diarrhoeal diseases unit. About 40–45 babies are delivered here each day. There are two neonatal intensive care units (NICUs) accommodating 30 infants at a time. The hospital's out-patient department sees 500 to 600 out-patients from Mysore and surrounding districts every day.

See also
List of Heritage Buildings in Mysore

References

Sources

External links
Cheluvamba Hospital - official website

Category:Children's hospitals in India
Category:Hospitals in Mysore
{ "pile_set_name": "Wikipedia (en)" }
347 S.W.2d 131 (1961) CITY OF UNIVERSITY CITY ex rel. and to the Use of Edwin D. MACKEY, et al., Plaintiffs-Appellants, v. FRANK MICELI & SONS REALTY & BUILDING CO., a Corporation, and Travelers Indemnity Company, a Corporation, Defendants, Travelers Indemnity Company, Respondent. No. 48336. Supreme Court of Missouri, Division No. 2. May 8, 1961. Motion for Rehearing or to Transfer Denied June 12, 1961. H. Jackson Daniel and Martin Schiff, Jr., Husch, Eppenberger, Donohue, Elson & Jones, Maurice Schechter, St. Louis, for appellants. Evans & Dixon, John F. Evans, St. Louis, for respondent. Motion for Rehearing or to Transfer to Court en Banc Denied June 12, 1961. BARRETT, Commissioner. In this suit, prosecuted at the relation of University City, the individually named plaintiffs are ten owners of residence property in an area known as Mayflower Court, a subdivision in University City platted as McKnight Downs. The defendants are Frank Miceli & Sons Realty and Building Company and the Travelers Indemnity Company. In 1953 Miceli subdivided and platted McKnight Downs and eventually sold the eighteen lots or plots of ground in the subdivision to the individually named plaintiffs, and others, or their predecessors *132 in title. In connection with the platting of the subdivision, the Municipal Code required the subdivider to make certain improvements. In lieu of final completion of the improvements the subdivider was permitted to post a surety bond. This Miceli did and the Travelers Indemnity Company is his surety in the principal sum of $18,000. There is a drainage ditch across the north and west side of the subdivision and one of the improvements required of Miceli was the paving, grading and concreting of the ditch, within two years. Miceli did not pave the ditch and the consequence has been that large sections, approximately one-fifth to one-third, of the plaintiffs' lots have washed away. 
The injury to the individual lots varied, the damages were estimated from $1,500 to $3,500, but the total damages to the ten lots was said to be $18,000. The object of this suit, according to the prayer of the petition, is to have the court "declare forfeit the bond issued by the defendant Travelers Indemnity Company and for judgment in accordance with their damages" in the sum of $18,000 against both defendants. Miceli defaulted, nevertheless, the trial court did not enter judgment against him and the plaintiffs do not complain of the fact upon this appeal. Other than on crossexamination of the plaintiffs' witnesses Travelers Indemnity Company offered no evidence, and at the close of the plaintiffs' evidence offered a separate motion to dismiss the action. The trial court entered this judgment in favor of Travelers Indemnity Company "against each of the plaintiffs as a cause of action is not stated upon the surety bond executed by defendants upon the finding that it does not come within the provisions of Section 522.020, and Section 522.050, R.S.Mo.1949; although upon all other controverted issues, plaintiffs should prevail." Therefore the trial court dismissed the plaintiffs' action with prejudice and they have appealed. The plaintiffs have briefed and argued the single point that the court erred in dismissing their action "because sections 522.020, 522.050, 522.080 and 522.150 * * * authorize the plaintiffs, as aggrieved parties, to maintain their action against defendants to forfeit the surety bond." Briefly, the statutes, R.S.Mo.1959, provide that persons injured by the neglect or misfeasance of any officer may proceed against the principal or his sureties in any proceeding authorized by law against such officer "for official neglect or injury." Section 522.010. 
Sections 522.020 and 522.030, mentioned by the court in its judgment, authorize the prosecution of suits by a "person so suing" in the name of the "obligee named in the bond," or, as here, at the relation of the obligee city. Sections 522.050 and 522.080 provide that "Any other party aggrieved may, in like manner, prosecute an action on such official bond, * * *." And section 522.150 provides that "The provisions of this chapter in relation to suits on official bonds shall apply as well to suits on bonds of executors * * * and others required by law to give bond, with conditions for the performance of any duty or trust, as to suits on bonds of officers; and the persons aggrieved may prosecute suits in the same manner, and with like effect, and shall be subject, in all respects, to the provisions herein contained in respect to suits on official bonds, and the court shall possess the same power in relation to such suits." The plaintiffs point to these statutes and say that they are "aggrieved parties," that Miceli was a person required by law (the municipal code) to give bond, that the bond was executed to secure the faithful performance of his obligation to pave the drainage ditch and having failed to do so they meet the requirements of these statutes and are entitled to maintain this action. The respondent bonding company contends that these statutes are not applicable "to the type of bond involved" and that plaintiffs acquired no rights by reason of the statutes to recover on the bond. The respondent says that these statutes apply only to "official bonds" insuring faithful performance of the duties of public officials *133 and quasi public officers such as executors. Section 522.300 permits persons "furnishing material or performing labor" to sue on the bonds of contractors performing public works for the state, county or cities. 
The respondent contends that the bond involved here is a "public improvement bond" to secure completion of a public work and is therefore governed by section 522.300 but that plaintiffs may not maintain the action under that section because their claim is not for labor or materials furnished. And, finally, the respondent contends that the plaintiffs are not "obligees or beneficiaries under the bond" and may not maintain an action against the surety for damages "on the theory of breach of contract of the principal." In summary, the respondent contends that the purpose of this bond is to indemnify "the city alone" against the duty and expense of providing drainage or sewers and that only the city as sole obligee could enforce it. After thus, perhaps unnecessarily, elaborately setting forth the contentions of the parties, it is not essential to a determination of this appeal to consider the history and applicability of the statutes. It is assumed for the purposes of this opinion, if the plaintiffs have a cause of action, that they could avail themselves of the remedy afforded by the statutes. See and consider Cooper v. Massachusetts Bonding & Ins. Co., 239 Mo.App. 67, 186 S.W.2d 549; State ex rel. Patterson v. Collins, Mo.App., 172 S.W.2d 284, 289; City of Chillicothe ex rel. Matson v. Raynard, 80 Mo. 185; 63 C.J.S. Municipal Corporations § 1026, p. 623, § 1172, p. 858 and the annotations 47 A.L.R. 5, 170 A.L.R. 1299. It is also sufficient for the purposes of this opinion to summarily say that section 522.300, despite the descriptive catch phrase "Bonds Of Contractors For Public Works," affords relief to "those persons furnishing labor and material on public work, which cannot be subjected to a mechanic's lien" (City of St. Louis, to Use of Stone Creek Brick Co. v. Kaplan-McGowan Co., 233 Mo.App. 789, 794, 108 S.W.2d 987, 989; Camdenton Consolidated School District, etc. v. New York Casualty Co., 340 Mo. 
1070, 104 S.W.2d 319) and has no bearing on this action or its subject. The plaintiffs' basic difficulty here, as in several of the actions brought under these statutes (to illustrate see State ex rel. Funk v. Turner, 328 Mo. 604, 42 S.W.2d 594), is that they have not established a substantive cause of action. Miceli, admittedly, did not pave the drainage ditch and the city has not attempted to compel performance of his contract, nor has the city taken any action on the bond. In addition to the action's being instituted at the relation of the city, the city attorney and the city engineer were the plaintiffs' principal witnesses. The plaintiffs offered two types of proof to establish the damages to each of the ten lots. First, a witness described the erosion and injury to each lot, the measures necessary to correct the injury and then the witness testified to the cost of repairing the loss. To illustrate, the Schwartz lot lost a strip of ground 20 by 30 feet by erosion and the cost to that lot of paving the ditch, replacing and leveling the dirt was said to be $2,712, and to this was added $350 for sodding the yard and $150 for a fence. Second, the plaintiffs proved the total value of their properties on the assumption that the improvements had been made and the value without the improvements. In the case of the Schwartzes the value of their property with the paved channel was $27,000, without the paved ditch its value was $23,000. The ordinance, under which the subdivision was platted, provided that "No subdivision plat shall be approved by either the planning commission or by the council unless it conforms to the following minimum standards and requirements." 
The subdivider was to make certain improvements but in lieu of final completion and before approval of his plat "the subdivider may post a surety bond, approved by the council, which bond will insure to the city that the improvement will be completed by the subdivider within two years after the final approval of the plan. The amount of *134 the bond shall not be less than the estimated cost of improvements, and the amount of the estimate must be approved by the director of public works. If the improvements are not completed within the specified time, the council may use the bond or any necessary portion thereof to complete same." Also under the ordinance, the subdivider was required, according to approved standards and specifications, to "install storm sewers to provide drainage for the development." On May 18, 1953, University City enacted an ordinance approving the platting of McKnight Downs and attached to the ordinance is the "Land Subdivision Improvement Bond" with Miceli as principal and the Travelers Indemnity Company as surety. The bond recites that the principal and surety "are held and firmly bound unto the City of University City" in the sum of $18,000. Among other things, the bond recites that whereas the principal "proposes to improve and develop a certain tract of land" and has filed a proposed subdivision and plat "showing certain improvements," including "storm water sewers," the principal in lieu of completion of the improvements has filed this "Surety Bond" in favor of the city. It is then recited that this bond "shall indemnify said City and secure to said City the actual construction of such improvements and utilities in a manner satisfactory to said City, in the event said Principal shall fail to install said improvements and utilities within two (2) years." (Italics supplied throughout the quotations.) 
While the bond is in the principal sum of $18,000, the director of public works informed Miceli that it "may be broken down into three bonds as follows:" for construction of pavement "$11,500, construction of sidewalks $3,000, and "for construction of creek paving $3,500.00." As previously indicated, "in a proper case" third persons, for whose benefit or protection a contract has been made by a municipal corporation with a private contractor, may maintain an action on the contract (63 C.J.S. Municipal Corporations § 1026, p. 623) and this includes, of course, a bond "to secure the performance of a municipal improvement contract." 63 C.J.S. Municipal Corporations § 1172, p. 858. It is not necessary that the property owners be named as obligees, the problem is whether the contract and bond were for their benefit and protection. Statutes and bonds, as those involved here, are not, however, a substitute for public liability insurance and in the absence of specific agreement do not cover the principal contractor's tort liability to adjoining property owners or other third persons. 63 C.J.S. Municipal Corporations § 1172, p. 858; annotation 67 A.L.R. 990; State ex rel. Leatherman v. Harris, 229 Mo.App. 304, 77 S.W.2d 846; Kansas City ex rel. Blumb v. O'Connell, 99 Mo. 357, 12 S.W. 791; Gerber v. Kansas City, 304 Mo. 157, 263 S.W. 432. There is some analogy in the cases, but tort liability aside, the contractor, here Miceli, and his surety could contract to pay for any injury to adjoining property. 63 C.J.S. Municipal Corporations § 1259(2), p. 994. The difficulty here is that they have not done so and the contract, ordinance and bond are not reasonably subject to the construction that they were intended for the protection of adjoining property owners. The ordinance required a bond, in lieu of completion of improvements, in a sum not less "than the estimated cost of improvements," which bond "will insure to the city that the improvements will be completed." 
In the event the improvements were not completed, the council was authorized to resort to the bond "to complete same." The bond, plainly, indemnified and "secure(d) to said City the actual construction of such improvements * * * in a manner satisfactory to said City." The improvements, and in a sense the contract and bond, were for the benefit of property owners in the subdivision, but the contract and bond did not in terms protect third persons or adjoining owners against either torts or breach of contract (Compare Schnaier v. Bradley Contracting Co., 181 *135 App.Div. 538, 169 N.Y.S. 88) and they are not reasonably subject to the construction that the parties intended that they should indemnify these plaintiffs for these particular injuries or losses. City of St. Louis v. G. H. Wright Contracting Co., 202 Mo. 451, 101 S.W. 6; Royal Indemnity Co. v. Independence Indemnity Co., 9 Cir., 29 F.2d 43. Compare Cooper v. Massachusetts Bonding & Insurance Co., supra; Hardware Dealers Mutual Ins. Co. v. R. H. Hidey, Inc., 349 Mich. 490, 84 N.W.2d 795, and see Coley v. Cohen, 289 N.Y. 365, 45 N.E.2d 913, and Freigy v. Gargaro Company, Inc., 223 Ind. 342, 60 N.E.2d 288. For these indicated reasons the judgment is affirmed. BOHLING and STOCKARD, C.C., concur. PER CURIAM. The foregoing opinion by BARRETT, C., is adopted as the opinion of the Court. All concur.
{ "pile_set_name": "FreeLaw" }
[Endometrioid endometrial cancer--the prognostic value of selected clinical and pathological parameters]. To assess the relationship between selected clinical and pathological factors and disease free survival (DFS) and overall survival (OS) in endometrioid endometrial cancer patients. A retrospective review of 262 patients aged 37-86 (6.0 +/- 9.0) was performed. Selected clinical and pathological data were correlated with DFS and OS. Follow-up was 8-123 months (64.9 +/- 27.1). In 4 patients (1.5%) clinical progression was diagnosed during the treatment. In 43 patients (16.4%) relapse was diagnosed 2-61 months (23.9 +/- 15.7) after commencing treatment. DFS and OS were 82.1% and 81.3% respectively. In univariate analysis worse DFS was related to older patients (p = 0.007) and non-radical surgery (p < 0.001). In multivariate analysis worse DFS was related to older patients (HR = 1.058; 95% CI = 1.024-1.093; p < 0.001), younger at menopause (HR = 0.910; 95% CI = 0.851-0.973; p = 0.006), with higher staging (HR = 2.639; 95% CI = 1.968-3.539; p < 0.001) operated non-radically (HR = 0.220; 95% CI = 0.096-0.504; p < 0.001). In univariate analysis worse OS was connected with older patients (p = 0.018), diabetes type II (p = 0.019) and non-radical surgery (p < 0.001). In multivariate analysis worse OS was related to younger age at menopause (HR = 0.932; 95% CI = 0.873-0.996; p = 0.039), diabetes type II (HR = 2.372; 95% CI = 1.260-4.466; p = 0.008), higher staging (HR = 2.053; 95% CI = 1.482-2.845; p < 0.001), and non-radical surgery (HR = 0.240; 95% CI = 0.091-0.636; p = 0.004). Relapsed endometrial cancer developed in 90.7% during four years after commencing treatment. In 79.1% of these patients distant metastases were present. The most significant prognostic factors were radicality of surgery, age of patients, and staging. The presence of diabetes type II and early menopause were connected with worse prognosis.
{ "pile_set_name": "PubMed Abstracts" }
See, all along it's been the women who were trying to get the men drunk... I knew I had the wrong approach; I was always trying to get the women drunk. So now you're telling me if I had just sat back and let the women come to me, they would actually bring me beer? What a revelation! Gosh, if I had only known that in college... could've saved a lot of money... We obviously have too much time on our hands. Who's ready for another happy hour gathering of ex-account managers??? Let me know. Steve
{ "pile_set_name": "Enron Emails" }
Saturated hydrocarbons are obtained from petroleum, natural gas reservoirs, and other petroliferous deposits. Relative to other hydrocarbons, they are available in comparatively large supply. They have many uses in addition to being suitable as fuels. One of the most valuable of those uses is as a raw material in chemical reactions, when they can be made to react in an efficient, economical and predictable, if not selective, fashion. Particularly desirable is the ability to prepare terminally-substituted compounds, because terminally-substituted, or primary functional, compounds are in the greatest demand commercially. However, saturated hydrocarbons have strong C--H and C--C bonds which make the necessary reactions difficult for one or more reasons. Various approaches to reaction of hydrocarbons have been studied over the years, including thermal, chemical and photochemical. Examples of these are set forth in Janowicz and Bergman, J. Am. Chem. Soc. 105, 3929-3939 (1983). Most of these prior methods have consumed large amounts of energy in one form or another and, importantly, have lacked selectivity. In addition, or separately, the prior methods have suffered other disadvantages. Unsaturated compounds, besides being a valuable raw material for reactions for which functionalized alkanes are not suitable, do not always form terminally-substituted compounds but instead form 2-substituted derivatives according to Markovnikoff's rule. Recently we found that certain organo-iridium complexes are capable of intermolecular oxidative addition to single C--H bonds in saturated hydrocarbons, leading to hydridoalkyl iridium complexes which can be used to convert alkanes to alkyl halides. This is reported in Janowicz and Bergman, J.A.C.S. 104, 352 (1982). While this procedure enjoys a degree of benefits over the prior art, it leaves room for improvement in several respects. 
One important drawback of the iridium process is the need to pass through an organomercurial intermediate. The iridium process also provides much less selectivity than is theoretically possible and desirable.
{ "pile_set_name": "USPTO Backgrounds" }
Even the storm could be just a rain once.⚡#FourYearsOld. That was the time when I have made a decision of becoming a herpetologist (soon later choosing a writer path was added)...Remaining quite identical by the views and values (of course, gaining another levels of progress over time)...🦕 After just few years from that, I have made a paleontological theory about blood temperature in some of the dinosaur species and just few years ago from now, scientists had discovered that some indeed, were mesothermal. Ah, so many hypothesis still are left in the skull of mine yet silence I give...Although illness invaded this physis at the age of two and passing through existence equalled experiencing multum of accidents and gaining more sicknesses & ailments than ten common folks (at least that's what others judge <nothing contagious nor nasty>), I shall be filled with power and am in faith that intellectual #strength will bring some enlightenment both in the aspect of science and philosophical comprehension thus becoming a source of society's responsibility.❓Did your #childhood's #decisions of who to become as an adult met reality?🐉 ✒️ #brat#memories#past#blackandwhite#カワイ#bw#bwphoto#bwportrait#child#아이#throwbackthursday#nocolour#nocolours#blackwhite#makeup#kid#junior#childhoodmemories#pasttime#tbt#children#elfling#4yearsold#子供#timepass#stillsame
divided by s. 34 Let l(x) = -x**3 - 4*x**2 + 5*x + 4. Let j be l(-5). Suppose 3*q = q - i + 10, 16 = j*q + i. What is the remainder when 7 is divided by q? 1 Suppose -4*b + 116 = 4*u, -b + 3*u + 99 = 2*b. Calculate the remainder when 90 is divided by b. 28 Let i(t) = t + 4. Let o = -4 - -1. What is the remainder when 2 is divided by i(o)? 0 Suppose l + 20 = 5*l. Let n(o) = 3*o - 1. Calculate the remainder when 40 is divided by n(l). 12 Let p = 31 + 19. What is the remainder when p is divided by (-3)/(-6)*-2 + 14? 11 Suppose 5*q - 317 + 102 = 0. Calculate the remainder when q is divided by 15. 13 Let c be 1/(-3)*(3 + 0). Let j = 4 + c. Suppose 5*q - 30 = 2*q - 3*y, 0 = 5*q - 2*y - 15. Calculate the remainder when q is divided by j. 2 Let c = 0 - -3. Suppose -2*b = -0*b. Suppose -q - a = -b*a, 0 = -a - 4. What is the remainder when q is divided by c? 1 Suppose 0 = -5*w + 55 - 5. Let o = 33 - 68. What is the remainder when 7/14*(-1 - o) is divided by w? 7 Let b(i) = -12*i - 8. Let d be b(-10). Let g = d + -74. Calculate the remainder when g is divided by 13. 12 Suppose -3*h - 9 = -3*n, n = 6*n - 2*h - 12. Let s = n + 1. Calculate the remainder when 8 is divided by s. 2 Let v(j) = 6*j**3 - 6*j**2 + 8*j - 14. Let f(a) = 5*a**3 - 6*a**2 + 7*a - 13. Let w(g) = -7*f(g) + 6*v(g). What is the remainder when w(-6) is divided by 7? 6 Let v = 27 - 21. Calculate the remainder when 23 is divided by v. 5 Let w(b) = -3*b - 22. What is the remainder when 99 is divided by w(-14)? 19 Let u = -20 + 48. Let t be (-296)/(-7) + 4/(-14). Suppose 4*r - 32 = -2*n, -3*r + 6*n = 3*n - t. Calculate the remainder when u is divided by r. 8 Suppose 0 = 3*a - q - 7, -6*a - 4*q + 11 = -5*a. Let l = -20 - -33. What is the remainder when 10/a*(-15)/(-2) is divided by l? 12 Let j be (-60)/(-14) - 12/42. Suppose -h - 60 = -j*h. What is the remainder when 79 is divided by h? 19 Let r = -9 + 14. Let b be (-2)/5 - (-27)/r. Calculate the remainder when 30 is divided by ((-16)/(-5))/(1/b). 
14 What is the remainder when -8*(1 + (9/(-2))/3) is divided by 2? 0 Suppose -4*w + 5*w = 8. Let v = 5 - -16. What is the remainder when v is divided by w? 5 Let t(a) = 2*a**2 - 18*a - 2. Calculate the remainder when t(10) is divided by 7. 4 Let c = -32 - -68. Suppose -2*p = 2*p + c. What is the remainder when p/(-12)*(1 + 27) is divided by 12? 9 Suppose -140 = -2*q - 5*y, 4*q - 3*q - 64 = -y. Calculate the remainder when q is divided by 21. 18 Suppose 0 = -2*j + j - 4*m + 49, 5*j - 245 = 3*m. Calculate the remainder when j is divided by 13. 10 Calculate the remainder when 2/6 + (-1127)/(-21) + 8 is divided by 21. 20 Suppose -4*o + 35 = 3*q, -4*q - 4*o - 7 + 47 = 0. Suppose -q*r = -r - 64. What is the remainder when 31 is divided by r? 15 Let h(w) = -3*w - 14. Let o = 16 - 1. Calculate the remainder when o is divided by h(-6). 3 Let i(y) = 8*y + 6. What is the remainder when 119 is divided by i(3)? 29 Suppose 28 = -z - 55. Let i = -53 - z. What is the remainder when i is divided by 11? 8 Suppose 0 = -2*n - 2*d + 42, -2*n + 2*d = -18 - 40. Let s(t) = 10*t**2 + 10*t - 12. What is the remainder when s(2) is divided by n? 23 Suppose 0 = -3*d + 5*t - 15, 3*d - 4*t + 24 = 6. Let r(u) = -u**3 - 10*u**2 - u - 4. What is the remainder when 9 is divided by r(d)? 3 Let b(m) = m**2 + 3*m - 5. Let a be (-3)/9 - (-76)/3. Suppose -6*u - a = -u. What is the remainder when b(u) is divided by 2? 1 Let t(u) = -3*u + 0 + 2 - u. Calculate the remainder when 41 is divided by t(-3). 13 Suppose 2*y + y - 48 = 0. Let f be 2*-12*(-1)/2. Let z = y + f. Calculate the remainder when z is divided by 15. 13 Let v(y) = -15*y + 3. Let h be v(2). Let n = -20 - h. What is the remainder when 19 is divided by n? 5 Suppose 0 = -5*c + 2*r - 4*r + 79, -3*c + 48 = r. Calculate the remainder when 31 is divided by c. 14 Let t = 1 + 1. Suppose 5*a - 4*n = 347, 100 = t*a + 3*n - 25. What is the remainder when a is divided by 23? 21 Suppose 6*d = -d + 35. Calculate the remainder when 8 is divided by d. 
3 What is the remainder when 18 is divided by (5/30)/((-3)/(-30))*6? 8 Suppose 0 = 4*d - 11 - 57. What is the remainder when d is divided by 9? 8 Let f(x) = 28*x**2 + x. Let h be f(-1). Let z = -5 + 5. Suppose -45 = -5*k - 5*w, -w = -3*k - z*w + h. Calculate the remainder when 26 is divided by k. 8 Let c(m) = m + 3. Let k be c(0). Calculate the remainder when 19 is divided by (k + -2)*(-2 + 7). 4 Suppose 0 = 10*f - 8*f - 6. What is the remainder when (-2)/f - (-214)/6 is divided by 18? 17 Suppose 2*c - r = 72, 0 = -c + 5*c - 3*r - 142. What is the remainder when c is divided by 27? 10 Suppose 3*s + 5 = -1. Let p be 5/s*72/(-20). Suppose 15 + p = 4*o. Calculate the remainder when 15 is divided by o. 3 Let q be (-2)/3*(-684)/8. Suppose -v + q = 2*v. Calculate the remainder when v is divided by 7. 5 Let p(z) = 2*z - 12. Let y(h) = -2*h + 11. Let i(j) = -5*p(j) - 6*y(j). Let a be i(7). Let d = a - -10. Calculate the remainder when d is divided by 10. 8 Suppose -2*f - 3*k = -97, -4*k + 5 = -3*k. Suppose 2*w + 10 = x, -80 = -5*x - 0*x - 5*w. What is the remainder when f is divided by x? 13 Let a be ((-3)/2)/((-3)/(-20)). Suppose 0 = -5*v + 31 + 49. What is the remainder when v is divided by 0 - 0 - (1 + a)? 7 Calculate the remainder when 78 is divided by 1 + (6 - 4) + 13. 14 Let p(i) = i**3 - 14*i**2 - 16*i - 1. Let m be p(15). Suppose 2*o - 46 = 4*o. Let v = m - o. Calculate the remainder when 12 is divided by v. 5 Suppose h = -0*h + 33. Suppose q - 6 = -3. Suppose -o = -i - 11, -h = -3*o - q*i + i. Calculate the remainder when 31 is divided by o. 9 Suppose -75 - 98 = -3*j + u, 0 = 3*j + 4*u - 163. Calculate the remainder when j is divided by (36/(-14))/((-7)/((-686)/(-21))). 9 Let i = -21 - -24. What is the remainder when 7 is divided by i? 1 Let u be (-3 + 2)*(4 - -1). Suppose -4*z + 2 - 62 = 0. Let o = u - z. Calculate the remainder when 28 is divided by o. 8 Let g be (8/10)/((-1)/(-10)). Let m = -6 + g. Suppose -m*n + 18 = 4*d, 4*n = -15 + 3. 
Calculate the remainder when d is divided by 4. 2 Let x = 45 + -43. What is the remainder when (9*-1)/((-3)/2) is divided by x? 0 Suppose 5*h - 38 = -w, -3*w = -w - 3*h - 37. Calculate the remainder when 67 is divided by w. 21 Let a(t) = 3*t**3 - 4*t**2 - 4*t + 12. What is the remainder when a(3) is divided by 6? 3 Let a(x) = -x**2 - 6*x + 1. Let n be a(-6). Calculate the remainder when 2*(3/n + -1) is divided by 3. 1 Let z(k) = -k**2 + k + 4. Let u be z(3). Let p be u*(-7*1)/1. Suppose i + i - p = 0. Calculate the remainder when 19 is divided by i. 5 Let x(t) = -t**2 + 15*t - 14. Let u be x(10). Suppose 4*d = -0*d + u. Let q = -4 + 21. Calculate the remainder when q is divided by d. 8 Let a = -21 - -36. Calculate the remainder when 37 is divided by a. 7 Suppose -4*s + 7 = -3*s - b, b - 48 = -4*s. Calculate the remainder when s is divided by 6. 5 Let t = 16 + 36. What is the remainder when t is divided by 18? 16 Let q be 0 - 2 - 1 - -1. Let m(u) = -14*u - 4. Calculate the remainder when m(q) is divided by 13. 11 Let m = -11 + 15. Suppose 3*d - m = -1. Calculate the remainder when 3 is divided by d. 0 Let i = 42 + -14. Let v(x) = -6*x - 3. Calculate the remainder when v(-2) is divided by 147/i + (-2)/8. 4 Let l be (-35 + 4)*3 - -4. What is the remainder when 68 is divided by l/(-4) + 27/36? 22 Let h = -28 + 48. Suppose 99 = 4*m + 3*v, 4*m - 2*v - 116 = -22. Let z = m + -17. Calculate the remainder when h is divided by z. 6 Let r(x) be the third derivative of -x**6/120 - 7*x**5/60 - x**4/8 - 7*x**3/6 + x**2. Calculate the remainder when 67 is divided by r(-7). 11 Suppose -5*a = 5, 0*o = -3*o + 4*a + 13. Let x be o + 4/(-1) + 3. Suppose -42 = -x*p + 32. Calculate the remainder when p is divided by 13. 11 Suppose -3*l = 4*b - 148, 238 = 3*l + 2*l - 2*b. Suppose f = -3*f + l. Calculate the remainder when 34 is divided by f. 10 Let z = 4 - -45. Calculate the remainder when z is divided by 13. 10 Suppose -4*i - 1 = 5*j - 2, -7 = 4*i - 3*j. 
What is the remainder when 17 is divided by 10*(i + 2) - 1? 8 Suppose -2*x - 11 = -33. Calculate the remainder when 31 is divided by x. 9 Let z(g) = -5*g. Let a be z(-1). Suppose 165 = 6*d - d + 3*y, 0 = -4*d - a*y + 132. Calculate the remainder when d is divided by 2 + (16 - (
/*===------------ avx512bf16intrin.h - AVX512_BF16 intrinsics --------------=== * * Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. * See https://llvm.org/LICENSE.txt for license information. * SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception * *===-----------------------------------------------------------------------=== */ #ifndef __IMMINTRIN_H #error "Never use <avx512bf16intrin.h> directly; include <immintrin.h> instead." #endif #ifndef __AVX512BF16INTRIN_H #define __AVX512BF16INTRIN_H typedef short __m512bh __attribute__((__vector_size__(64), __aligned__(64))); typedef short __m256bh __attribute__((__vector_size__(32), __aligned__(32))); typedef unsigned short __bfloat16; #define __DEFAULT_FN_ATTRS512 \ __attribute__((__always_inline__, __nodebug__, __target__("avx512bf16"), \ __min_vector_width__(512))) #define __DEFAULT_FN_ATTRS \ __attribute__((__always_inline__, __nodebug__, __target__("avx512bf16"))) /// Convert One BF16 Data to One Single Float Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic does not correspond to a specific instruction. /// /// \param __A /// A bfloat data. /// \returns A float data whose sign field and exponent field keep unchanged, /// and fraction field is extended to 23 bits. static __inline__ float __DEFAULT_FN_ATTRS _mm_cvtsbh_ss(__bfloat16 __A) { return __builtin_ia32_cvtsbf162ss_32(__A); } /// Convert Two Packed Single Data to One Packed BF16 Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNE2PS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \param __B /// A 512-bit vector of [16 x float]. /// \returns A 512-bit vector of [32 x bfloat] whose lower 256 bits come from /// conversion of __B, and higher 256 bits come from conversion of __A. 
static __inline__ __m512bh __DEFAULT_FN_ATTRS512 _mm512_cvtne2ps_pbh(__m512 __A, __m512 __B) { return (__m512bh)__builtin_ia32_cvtne2ps2bf16_512((__v16sf) __A, (__v16sf) __B); } /// Convert Two Packed Single Data to One Packed BF16 Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNE2PS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \param __B /// A 512-bit vector of [16 x float]. /// \param __W /// A 512-bit vector of [32 x bfloat]. /// \param __U /// A 32-bit mask value specifying what is chosen for each element. /// A 1 means conversion of __A or __B. A 0 means element from __W. /// \returns A 512-bit vector of [32 x bfloat] whose lower 256 bits come from /// conversion of __B, and higher 256 bits come from conversion of __A. static __inline__ __m512bh __DEFAULT_FN_ATTRS512 _mm512_mask_cvtne2ps_pbh(__m512bh __W, __mmask32 __U, __m512 __A, __m512 __B) { return (__m512bh)__builtin_ia32_selectw_512((__mmask32)__U, (__v32hi)_mm512_cvtne2ps_pbh(__A, __B), (__v32hi)__W); } /// Convert Two Packed Single Data to One Packed BF16 Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNE2PS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \param __B /// A 512-bit vector of [16 x float]. /// \param __U /// A 32-bit mask value specifying what is chosen for each element. /// A 1 means conversion of __A or __B. A 0 means element is zero. /// \returns A 512-bit vector of [32 x bfloat] whose lower 256 bits come from /// conversion of __B, and higher 256 bits come from conversion of __A. static __inline__ __m512bh __DEFAULT_FN_ATTRS512 _mm512_maskz_cvtne2ps_pbh(__mmask32 __U, __m512 __A, __m512 __B) { return (__m512bh)__builtin_ia32_selectw_512((__mmask32)__U, (__v32hi)_mm512_cvtne2ps_pbh(__A, __B), (__v32hi)_mm512_setzero_si512()); } /// Convert Packed Single Data to Packed BF16 Data. 
/// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNEPS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \returns A 256-bit vector of [16 x bfloat] come from conversion of __A. static __inline__ __m256bh __DEFAULT_FN_ATTRS512 _mm512_cvtneps_pbh(__m512 __A) { return (__m256bh)__builtin_ia32_cvtneps2bf16_512_mask((__v16sf)__A, (__v16hi)_mm256_undefined_si256(), (__mmask16)-1); } /// Convert Packed Single Data to Packed BF16 Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNEPS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \param __W /// A 256-bit vector of [16 x bfloat]. /// \param __U /// A 16-bit mask value specifying what is chosen for each element. /// A 1 means conversion of __A. A 0 means element from __W. /// \returns A 256-bit vector of [16 x bfloat] come from conversion of __A. static __inline__ __m256bh __DEFAULT_FN_ATTRS512 _mm512_mask_cvtneps_pbh(__m256bh __W, __mmask16 __U, __m512 __A) { return (__m256bh)__builtin_ia32_cvtneps2bf16_512_mask((__v16sf)__A, (__v16hi)__W, (__mmask16)__U); } /// Convert Packed Single Data to Packed BF16 Data. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VCVTNEPS2BF16 </c> instructions. /// /// \param __A /// A 512-bit vector of [16 x float]. /// \param __U /// A 16-bit mask value specifying what is chosen for each element. /// A 1 means conversion of __A. A 0 means element is zero. /// \returns A 256-bit vector of [16 x bfloat] come from conversion of __A. static __inline__ __m256bh __DEFAULT_FN_ATTRS512 _mm512_maskz_cvtneps_pbh(__mmask16 __U, __m512 __A) { return (__m256bh)__builtin_ia32_cvtneps2bf16_512_mask((__v16sf)__A, (__v16hi)_mm256_setzero_si256(), (__mmask16)__U); } /// Dot Product of BF16 Pairs Accumulated into Packed Single Precision. 
/// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VDPBF16PS </c> instructions. /// /// \param __A /// A 512-bit vector of [32 x bfloat]. /// \param __B /// A 512-bit vector of [32 x bfloat]. /// \param __D /// A 512-bit vector of [16 x float]. /// \returns A 512-bit vector of [16 x float] comes from Dot Product of /// __A, __B and __D static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_dpbf16_ps(__m512 __D, __m512bh __A, __m512bh __B) { return (__m512)__builtin_ia32_dpbf16ps_512((__v16sf) __D, (__v16si) __A, (__v16si) __B); } /// Dot Product of BF16 Pairs Accumulated into Packed Single Precision. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VDPBF16PS </c> instructions. /// /// \param __A /// A 512-bit vector of [32 x bfloat]. /// \param __B /// A 512-bit vector of [32 x bfloat]. /// \param __D /// A 512-bit vector of [16 x float]. /// \param __U /// A 16-bit mask value specifying what is chosen for each element. /// A 1 means __A and __B's dot product accumulated with __D. A 0 means __D. /// \returns A 512-bit vector of [16 x float] comes from Dot Product of /// __A, __B and __D static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_mask_dpbf16_ps(__m512 __D, __mmask16 __U, __m512bh __A, __m512bh __B) { return (__m512)__builtin_ia32_selectps_512((__mmask16)__U, (__v16sf)_mm512_dpbf16_ps(__D, __A, __B), (__v16sf)__D); } /// Dot Product of BF16 Pairs Accumulated into Packed Single Precision. /// /// \headerfile <x86intrin.h> /// /// This intrinsic corresponds to the <c> VDPBF16PS </c> instructions. /// /// \param __A /// A 512-bit vector of [32 x bfloat]. /// \param __B /// A 512-bit vector of [32 x bfloat]. /// \param __D /// A 512-bit vector of [16 x float]. /// \param __U /// A 16-bit mask value specifying what is chosen for each element. /// A 1 means __A and __B's dot product accumulated with __D. A 0 means 0. 
/// \returns A 512-bit vector of [16 x float] that comes from Dot Product of /// __A, __B and __D. static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_maskz_dpbf16_ps(__mmask16 __U, __m512 __D, __m512bh __A, __m512bh __B) { return (__m512)__builtin_ia32_selectps_512((__mmask16)__U, (__v16sf)_mm512_dpbf16_ps(__D, __A, __B), (__v16sf)_mm512_setzero_si512()); } /// Convert Packed BF16 Data to Packed float Data. /// /// \headerfile <x86intrin.h> /// /// \param __A /// A 256-bit vector of [16 x bfloat]. /// \returns A 512-bit vector of [16 x float] that comes from conversion of __A. static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_cvtpbh_ps(__m256bh __A) { return _mm512_castsi512_ps((__m512i)_mm512_slli_epi32( (__m512i)_mm512_cvtepi16_epi32((__m256i)__A), 16)); } /// Convert Packed BF16 Data to Packed float Data using zeroing mask. /// /// \headerfile <x86intrin.h> /// /// \param __U /// A 16-bit mask. Elements are zeroed out when the corresponding mask /// bit is not set. /// \param __A /// A 256-bit vector of [16 x bfloat]. /// \returns A 512-bit vector of [16 x float] that comes from conversion of __A. static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_maskz_cvtpbh_ps(__mmask16 __U, __m256bh __A) { return _mm512_castsi512_ps((__m512i)_mm512_slli_epi32( (__m512i)_mm512_maskz_cvtepi16_epi32((__mmask16)__U, (__m256i)__A), 16)); } /// Convert Packed BF16 Data to Packed float Data using merging mask. /// /// \headerfile <x86intrin.h> /// /// \param __S /// A 512-bit vector of [16 x float]. Elements are copied from __S when /// the corresponding mask bit is not set. /// \param __U /// A 16-bit mask. /// \param __A /// A 256-bit vector of [16 x bfloat].
/// \returns A 512-bit vector of [16 x float] that comes from conversion of __A. static __inline__ __m512 __DEFAULT_FN_ATTRS512 _mm512_mask_cvtpbh_ps(__m512 __S, __mmask16 __U, __m256bh __A) { return _mm512_castsi512_ps((__m512i)_mm512_mask_slli_epi32( (__m512i)__S, (__mmask16)__U, (__m512i)_mm512_cvtepi16_epi32((__m256i)__A), 16)); } #undef __DEFAULT_FN_ATTRS #undef __DEFAULT_FN_ATTRS512 #endif
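The conversions in this header exploit the fact that bfloat16 is simply the upper 16 bits of an IEEE-754 binary32 value: `_mm512_cvtpbh_ps`, for example, widens each 16-bit element and shifts it left by 16. Below is a scalar model of both directions in plain Python — the helper names are ours, and the NaN/denormal corner cases of the hardware instructions are omitted:

```python
import struct

def bf16_to_f32(b: int) -> float:
    # Widening is exact: place the bfloat16 bits in the high half of a
    # float32 word (what _mm512_cvtpbh_ps does with a 16-bit left shift).
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

def f32_to_bf16(x: float) -> int:
    # Keep the high 16 bits with round-to-nearest-even, the rounding
    # used by VCVTNEPS2BF16 (NaN handling omitted in this sketch).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return ((bits + 0x7FFF + ((bits >> 16) & 1)) >> 16) & 0xFFFF

print(bf16_to_f32(0x3F80))         # 1.0
print(f32_to_bf16(1.0) == 0x3F80)  # True
```

Because widening is exact, round-tripping any value already representable in bfloat16 (such as 0.5) returns it unchanged.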
--- abstract: | A C-coloring of a hypergraph $\cH=(X,\cE)$ is a vertex coloring $\vp:X\to\enn$ such that each edge $E\in\cE$ has at least two vertices with a common color. The related parameter $\UU (\cH)$, called the upper chromatic number of $\cH$, is the maximum number of colors that can be used in a C-coloring of $\cH$. A hypertree is a hypergraph which has a host tree $T$ such that each edge $E \in \cE$ induces a connected subgraph in $T$. Notations $n$ and $m$ stand for the number of vertices and edges, respectively, in a generic input hypergraph. We establish guaranteed polynomial-time approximation ratios for the difference $n-\overline{\chi}({\cal H})$, which is $2+2 \ln (2m)$ on hypergraphs in general, and $1+ \ln m$ on hypertrees. The latter ratio is essentially tight as we show that $n-\overline{\chi}({\cal H})$ cannot be approximated within $(1-\epsilon) \ln m$ on hypertrees (unless [NP]{}$\subseteq$[DTIME]{}$(n^{\cO(\log\log n)})$). Furthermore, $\overline{\chi}({\cal H})$ does not have an ${\cal O}(n^{1-\epsilon})$-approximation and cannot be approximated within additive error $o(n)$ on the class of hypertrees (unless ${\sf P}={\sf NP}$). [**Keywords:**]{} approximation ratio, hypergraph, hypertree, C-coloring, upper chromatic number, multiple hitting set. **AMS 2000 Subject Classification:** 05C15, 05C65, 05B40, 68Q17 author: - | Csilla Bujtás $^{1}$   Zsolt Tuza $^{1,2}$\ $^1$ Department of Computer Science and Systems Technology\ University of Pannonia, Veszprém, Hungary\ $^2$ Alfréd Rényi Institute of Mathematics\ Hungarian Academy of Sciences, Budapest, Hungary title: '-1.5cm [   ]{} Approximability of the upper chromatic number of hypergraphs[^1]' --- Introduction ============ In this paper we study a hypergraph coloring invariant, termed upper chromatic number and denoted by $\UU(\cH)$, which was first introduced by Berge (cf. [@B]) in the early 1970’s and later independently by several further authors [@ABN; @Vol2] from different motivations.
The present work is the very first one concerning approximation algorithms on it. We also consider the complementary problem of approximating the difference $n-\UU$, the number of vertices minus the upper chromatic number. One of our main tools to prove a guaranteed upper bound on it is an approximation ratio established for the 2-transversal number of hypergraphs. As problems of this type are of interest in their own right, we also prove an approximation ratio in general for the minimum size of multiple transversals, i.e., sets of vertices intersecting each edge in a prescribed number of vertices at least. Earlier results allowed to select a vertex into the set several times; we prove bounds for the more restricted scenario where the set does not include any vertex more than once. Notation and terminology ------------------------ A *hypergraph* $\cH=(X, \cE)$ is a set system, where $X$ denotes the set of vertices and each edge $E_i\in \cE$ is a nonempty subset of $X$. Here we also assume that for each edge $E_i$ the inequality $|E_i|\ge 2$ holds, moreover we use the standard notations $|X|=n$ and $|\cE|=m$. A hypergraph $\cH$ is said to be *$r$-uniform* if $|E_i|=r$ for each $E_i \in \cE$. We shall also consider hypergraphs with restricted structure, where some kind of host graphs are assumed. A hypergraph $\cH=(X,\cE)$ admits a *host graph* $G=(X,E)$ if each edge $E_i \in \cE$ induces a connected subgraph in $G$. The edges of the host graph $G$ will be referred to as *lines*. Particularly, $\cH$ is called *hypertree* or *hyperstar* if it admits a host graph which is a tree or a star, respectively. Note that under our condition, which forbids edges of size 1, $\cH$ is a hyperstar if and only if there exists a fixed vertex $c^*\in X$ (termed the center of the hyperstar) contained in each edge of $\cH$. A *C-coloring* of $\cH$ is an assignment $\vp:X\to\enn$ such that each edge $E\in\cE$ has at least two vertices of a common color (that is, with the same image). 
The *upper chromatic number* $\UU(\cH)$ of $\cH$ is the maximum number of colors that can be used in a C-coloring of $\cH$. We note that in the literature the value $\UU(\cH)+1$ is also called the ‘cochromatic number’ or ‘heterochromatic number’ of $\cH$ with the terminology of Berge [@B p. 151] and Arocha *et al.* [@ABN], respectively. A C-coloring $\vp$ with $|\vp(X)|=\UU(\cH)$ colors will be referred to as an *optimal coloring* of $\cH$. The *decrement* of $\cH=(X,\cE)$, introduced in [@proj-plane], is defined as $\dec(\cH)=n-\UU(\cH)$. Similarly, the decrement of a C-coloring $\vp:X\to\enn$ is meant as $\dec(\vp)=|X|-|\vp(X)|$. For results on C-coloring see the recent survey [@BT-JGeom]. A *transversal* (also called hitting set or vertex cover) is a subset $T \subseteq X$ which meets each edge of $\cH=(X, \cE)$, and the minimum cardinality of a transversal is the *transversal number* $\tau(\cH)$ of the hypergraph. An *independent set* (or stable set) is a vertex set $I \subseteq X$, which contains no edge of $\cH$ entirely. The maximum size of an independent set in $\cH$ is the *independence number* (or stability number) $\aaa(\cH)$. It is immediate from the definitions that the complement of a transversal is an independent set and vice versa, so the Gallai-type equality $\tau(\cH)+\aaa(\cH)=n$ holds for each hypergraph. Remark that selecting one vertex from each color class of a C-coloring yields an independent set, therefore $\UU(\cH)\le\aaa(\cH)$ and, equivalently, $\dec(\cH) \ge \tau (\cH)$. More generally, a *$k$-transversal* is a set $T\subseteq X$ such that $|E_i \cap T|\ge k$ for every $E_i \in \cE$. A 2-transversal is sometimes called double transversal or strong transversal, and its minimum size is the *2-transversal number* $\tau_2(\cH)$ of the hypergraph. 
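To make these definitions concrete, the C-coloring condition and the decrement of a given coloring can be checked mechanically. The sketch below is ours (plain Python with illustrative names, not part of the paper); it uses the observation that an edge has two vertices of a common color exactly when the number of distinct colors on it is smaller than its size.

```python
def is_c_coloring(edges, phi):
    # C-coloring: every edge carries some repeated color.
    return all(len({phi[v] for v in E}) < len(E) for E in edges)

def decrement(vertices, phi):
    # dec(phi) = |X| minus the number of colors actually used.
    return len(vertices) - len({phi[v] for v in vertices})

X = [0, 1, 2, 3]
edges = [{0, 1, 2}, {2, 3}]
phi = {0: 1, 1: 1, 2: 2, 3: 2}     # two color classes: {0,1} and {2,3}
print(is_c_coloring(edges, phi))   # True
print(decrement(X, phi))           # 2
```

Since this coloring uses two colors, $\UU \ge 2$ for the toy hypergraph above, and hence its decrement is at most $2$.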
For an optimization problem and a constant $c>1$, an algorithm $\cA$ is called a *$c$-approximation algorithm* if, for every feasible instance $\cI$ of the problem, if the value has to be minimized, then $\cA$ delivers a solution of value at most $c\cdot Opt(\cI)$; if the value has to be maximized, then $\cA$ delivers a solution of value at least $Opt(\cI)/c$. Throughout this paper, an approximation algorithm is always meant to be one with polynomial running time on every instance of the problem. We say that a value has guaranteed approximation ratio $c$ if it has a $c$-approximation algorithm. In the other case, when no $c$-approximation algorithm exists, we say that the value cannot be approximated within ratio $c$. For a function $f(n,m)$, an $f(n,m)$-approximation algorithm and the related notions can be defined similarly. A polynomial-time approximation scheme, abbreviated as PTAS, means an algorithm for every fixed $\eps > 0$ which is a $(1+\eps)$-approximation and whose running time is a polynomial function of the input size (but any function of $1/\eps$ may occur in the exponent). For further terminology and facts we refer to [@B; @BM; @Vaz] in the theory of graphs, hypergraphs, and algorithms, respectively. The notations $\ln x$ and $\log x$ stand for the natural logarithm and for the logarithm in base 2, respectively. Approximability results on multiple transversals ------------------------------------------------ The transversal number $\tau(\cH)$ of a hypergraph can be approximated within ratio $(1+\ln m)$ by the classical greedy algorithm (see e.g. [@Vaz]). On the other hand, Feige [@Fei] proved that $\tau(\cH)$ cannot be approximated within $(1-\epsilon) \ln m$ for any constant $0<\epsilon <1$, unless [NP]{}$\subseteq$[DTIME]{}$(n^{\cO(\log \log n)})$. As relates to the $k$-transversal number, in [@Vaz] a $(1+ \ln m)$-approximation is stated under the less restricted setting which allows multiple selection of vertices in the $k$-transversal. 
In the context of coloring, however, we cannot allow repetitions of vertices. For this more restricted case, when the $k$-transversal consists of pairwise different vertices, we prove a guaranteed approximation ratio $(1+ \ln (km))$. In fact we consider a more general problem, where the required minimum size of the intersection $E_i \cap T$ can be prescribed independently for each $E_i \in \cE$. \[multiple\] Given a hypergraph $\cH=(X,\cE)$ with $m$ edges $E_1,\dots,E_m$ and positive integers $w_1,\dots,w_m$ associated with the edges, the minimum cardinality of a set $S\sst X$ satisfying $|S\cap E_i|\ge w_i$ for all $1\le i\le m$ can be approximated within $\sum_{i=1}^{W} 1/i < 1+\ln W$, where $W= \sum_{i=1}^m w_i$. This result, proved in the next section, implies a guaranteed approximation ratio $(1+ \ln 2m)$ for $\tau_2(\cH)$. Approximability results on the upper chromatic number ----------------------------------------------------- The problem of determining the upper chromatic number is [NP]{}-hard, already on the class of 3-uniform hyperstars. On the other hand, the problems of determining $\overline{\chi}({\cal H})$ and finding a $\overline{\chi}({\cal H})$-coloring are fixed-parameter tractable in terms of maximum vertex degree on the class of hypertrees [@BT-cejor]. A notion closely related to our present subject was introduced by Voloshin [@Vol93; @Vol2] in 1993. A *mixed hypergraph* is a triple $\cH =(X, \cC, \cD)$ with two families of subsets called $\cC$-edges and $\cD$-edges. By definition, a coloring of a mixed hypergraph is an assignment $\vp:X\to\enn$ such that each $\cC$-edge has two vertices of a common color and each $\cD$-edge has two vertices of distinct colors. Then, the minimum and the maximum possible numbers of colors that can occur in a coloring of $\cH$ are termed the lower and the upper chromatic number of $\cH$ and denoted by $\chi(\cH)$ and $\UU(\cH)$, respectively.
For detailed results on mixed hypergraphs we refer to the monograph [@Volmon]. Clearly, the C-colorings of a hypergraph $\cH=(X, \cE)$ are in one-to-one correspondence with the colorings of the mixed hypergraph $\cH'=(X, \cE, \es)$, and also $\UU(\cH)= \UU(\cH')$ holds. The following results are known on the approximation of the upper chromatic number of mixed hypergraphs: For mixed hypergraphs of maximum degree 2, the upper chromatic number has a linear-time $\frac{5}{3}$-approximation and an $O(m^3+n)$-time [@KKV-degree Theorem 14 and Theorem 15] There is no PTAS for the upper chromatic number of mixed hypergraphs of maximum degree 2, unless [P]{}$=$[NP]{}. [@KKV-degree Theorem 20] There is no $o(n)$-approximation algorithm for the upper chromatic number of mixed hypergraphs, unless [P]{}$=$[NP]{}. [@K-spect Corollary 5] All these results assume the presence of $\cD$-edges in the input mixed hypergraph. In this paper we investigate how hard it is to estimate $\UU$ for C-colorings of hypergraphs. On the positive side, we prove a guaranteed approximation ratio for the decrement of hypergraphs in general; furthermore, we establish a better ratio on the class of hypertrees. \[appr-gen\] The value of $\dec(\cH)$ is $(2+2\ln (2m))$-approximable on the class of all hypergraphs. \[appr-htree\] The value of $\dec(\cH)$ is $(1+\ln m)$-approximable on the class of all hypertrees. These theorems are essentially best possible concerning the ratio of approximation; moreover, the upper chromatic number turns out to be inherently non-approximable already on hypertrees with rather restricted host trees, as shown by the next result. \[ratio\] For every $\epsilon > 0$, $\dec(\cH)$ cannot be approximated within $(1-\epsilon)\ln m$ on the class of hyperstars, unless [NP]{}$\subseteq$[DTIME]{}$(n^{\cO(\log \log n)})$. For every $\epsilon > 0$, $\UU(\cH)$ cannot be approximated within $n^{1-\epsilon}$ on the class of $3$-uniform hyperstars, unless [P]{}$=$[NP]{}.
As regards the *difference* between a solution determined by a polynomial-time algorithm and the optimum value, the situation is even worse. \[additive\] Unless [P]{}$=$[NP]{}, neither of the following values can be approximated within additive error $o(n)$ for hypertrees of edge size at most 7: $\UU(\cH)$, $\dec(\cH)$, $\aaa(\cH)-\UU(\cH)$, $\tau(\cH)-\dec(\cH)$, $\dec(\cH)-\tau_2(\cH)/2$. The relevance of the last quantity occurs in the context of Proposition \[decr-tau2\] of Section \[decr-transv\]. We prove the positive results with guaranteed approximation ratio in Section 3, and the negative non-approximability results in Section 4. Lemmas on connected colorings of hypertrees ------------------------------------------- Suppose that $\cH$ is a hypergraph over a host graph $G$, and $\vp$ is a C-coloring of $\cH$. We say that $\vp$ is a *connected coloring* if each color class of $\vp$ induces a connected subgraph of $G$. We will use the following two lemmas concerning connected C-colorings of hypertrees, both established in [@BT-cejor]. A line $uv$ of the host tree $G$ is termed a *monochromatic line* for a C-coloring $\vp$ if $\vp(u)=\vp(v)$. ([@BT-cejor Proposition 2]) \[conn\] If a hypertree admits a C-coloring with $k$ colors, then it also has a connected C-coloring with $k$ colors over any fixed host tree. ([@BT-cejor Proposition 3]) \[mono-lines\] If $\vp$ is a connected C-coloring of a hypertree $\cH$ over a fixed host tree $G$, then the decrement of $\vp$ equals the number of monochromatic lines in $G$. Multiple transversals ===================== In this section, we describe a variation of the classical greedy algorithm, with the goal of producing a multiple transversal with pairwise different elements. Analyzing the greedy selection we will prove Theorem \[multiple\]. We recall its statement. **Theorem \[multiple\]**.
*Given a hypergraph $\cH=(X,\cE)$ with $m$ edges $E_1,\dots,E_m$ and positive integers $w_1,\dots,w_m$ associated with its edges, the minimum cardinality of a set $S\sst X$ satisfying $|S\cap E_i|\ge w_i$ for all $1\le i\le m$ can be approximated within $\sum_{i=1}^{W} 1/i < 1+\ln W$, where $W= \sum_{i=1}^m w_i$.* Denote by $\cS$ the collection of all feasible solutions, that is, the sets $S\sst X$ such that $|S\cap E_i|\ge w_i$ holds for all $i=1,\dots,m$. By definition, the optimum of the problem is the integer $$M:=\min_{S\in\cS} |S| .$$ We will show that the greedy selection always yields an $S^*\in\cS$ with $$|S^*| \le M \cdot \left(1 + 1/2 + \dots + 1/W\right).$$ To prove this, for any $Y\sst X$ and any $1\le i\le m$ we define $$w_{i,Y} := \max \left(0, \, w_i - |E_i\cap Y|\right)$$ which is the reduced number of elements still to be picked from $E_i$, once the set $Y$ has already been selected. Moreover, to any vertex $x\in X\smin Y$ we associate its usefulness $$u_{x,Y} := | \{E_i \mid x\in E_i, \ w_{i,Y} > 0 \}|.$$ The greedy algorithm then starts with $Y_0=\es$ and updates $Y_k := Y_{k-1}\cup\{x_k\}$ where $x_k\in X\smin Y_{k-1}$ has maximum usefulness among all values $u_{x,Y_{k-1}}$ in the set $X\smin Y_{k-1}$, as long as this maximum is positive. Reaching $u_{x,Y_t}=0$ for all $x\in X\smin Y_t$ (for some $t$), we set $S^* := Y_t$; we will prove that this $S^*$ satisfies the requirements. It is clear by the definition of $u_{x,Y}$ that $S^*$ meets each $E_i$ in at least $w_i$ elements, i.e. $S^*\in\cS$. We need to prove that $S^*$ is sufficiently small. For this, consider the following auxiliary set of cardinality $W$: $$Z := \{ z(i,j) \mid 1\le i\le m, \ 1\le j\le w_i \}.$$ At the moment when $Y_k$ is constructed by adjoining an element $x_k$ to $Y_{k-1}$, we assign weight $1/u_{x_k,Y_{k-1}}$ to all elements $z(i,w_{i,Y_{k-1}})$ such that $x_k\in E_i$ and $w_{i,Y_{k-1}}>0$.
Note that $w_{i,Y_{k}}=w_{i,Y_{k-1}}-1$ will hold after the selection of $x_k$. Moreover, total weight 1 is assigned in each step, hence the overall weight after finishing the algorithm is exactly $|S^*|$. We put the elements $z(i,j)$ in a sequence $Z^*=(z_1,z_2,\dots,z_W)$ such that the elements of $Z$ occur in the order as they are weighted (i.e., those for $x_1$ first in any order, then the elements weighted for $x_2$, and so on). Just before the selection of $x_k$, the number of elements $z(i,j)$ to which a weight has been assigned is precisely $m_{k-1} := \sum_{\ell=1}^{k-1} u_{x_\ell,Y_{\ell-1}}.$ We are going to prove that $u_{x_k,Y_{k-1}} \ge (W-m_{k-1})/M$. Assuming that this has already been shown, it follows that each $z_q$ in $Z^*$ has weight at most $M/(W+1-q)$ and consequently $|S^*| \le M \cdot \left(1 + 1/2 + \dots + 1/W\right)$ as required. Let now $S_0\in\cS$ be any fixed optimal solution. Consider the bipartite incidence graph $B$ between the sets $E_i$ and the elements of $S_0$. That is, the first vertex class of $B$ has $m$ elements $a_1,\dots,a_m$ representing the sets $E_1,\dots,E_m$ while the second vertex class consists of the elements of $S_0$; we denote the latter vertices by $b_1,\dots,b_M$. There is an edge joining $a_i$ with $b_j$ if and only if $b_j\in E_i$. Since $S_0\in \cS$, each $a_i$ has degree at least $w_i$. Moreover, considering the moment just before $x_k$ is selected, if we remove the vertices of $S_0\cap Y_{k-1}$, in the remaining subgraph still each $a_i$ has degree at least $w_{i,Y_{k-1}}$. We take a subgraph $B'$ of this $B-Y_{k-1}$ (possibly $B$ itself if $Y_{k-1}\cap S_0=\es$) such that each $a_i$ has degree *exactly* $w_{i,Y_{k-1}}$. The number of edges in $B'$ is then equal to $W-m_{k-1}$; hence, some $b_j$ has degree at least $(W-m_{k-1})/M$. 
It follows that this $b_j$ has usefulness at least $(W-m_{k-1})/M$ at the moment when $x_k$ is selected; but $x_k$ is chosen to have maximum usefulness, hence $u_{x_k,Y_{k-1}} \ge (W-m_{k-1})/M$. This completes the proof. \[k-transv\] For each positive integer $k$, the $k$-transversal number $\tau_k$ has a $(1+\ln (km))$-approximation on the class of all hypergraphs. Guaranteed approximation ratios for the decrement ================================================= In this section we establish a connection between the parameters $\dec(\cH)$ and $\tau_2(\cH)$, and then we prove our positive results stated in Theorems \[appr-gen\] and \[appr-htree\]. Decrement vs. 2-transversal number {#decr-transv} ---------------------------------- First, we give an inequality valid for all hypergraphs without any structural restrictions and then, using this relation, we prove Theorem \[appr-gen\]. \[decr-tau2\] For every hypergraph $\cH$ we have $\tau_2(\cH)/2\le\dec(\cH)\le\tau_2(\cH)-1$, and both bounds are tight. In particular, $\tau_2(\cH)$ is a 2-approximation for $\dec(\cH)$. [*Lower bound:*]{} If $\UU(\cH) \le n/2$, then $\dec(\cH)\ge n/2 \ge \tau_2(\cH)/2$ automatically holds. If $\UU(\cH) > n/2$, then every $\UU$-coloring contains at least $2\UU(\cH)-n$ singleton color classes, therefore the total size of non-singleton classes is at most $n-(2\UU(\cH)-n)= 2(n-\UU(\cH))$. Since the union of the latter meets all edges at least twice, we obtain $2\dec(\cH)\ge\tau_2(\cH)$. [*Upper bound:*]{} If $S$ is a 2-transversal set of cardinality $\tau_2(\cH)$, we can assign the same color to the entire $S$ and a new dedicated color to each $x\in X\smin S$. This is a C-coloring with $n-|S|+1$ colors and with decrement $\tau_2(\cH)-1$. [*Tightness:*]{} The simplest example for equality in the upper bound is the hypergraph in which the vertex set is the only edge, i.e. $\cH=(X,\{X\})$. Many more examples can be given. 
For instance, we can specify a proper subset $S\sst X$ with $|S|\ge 2$, and take all triples $E\sst X$ such that $|E\cap S|=2$ and $|E\smin S|=1$. If $|S|\le n-2$, then $S$ is the unique smallest 2-transversal set, and every C-coloring with more than two colors makes $S$ monochromatic, hence the unique $\UU$-coloring uses $n-|S|+1$ colors. For the lower bound, we assume that $n=3k+1$. Let $X=\{1,2, \dots, 3k+1\}$ and $$\begin{aligned} \cE&=&\{\{3r+1, 3r+2, 3r+3\}\mid 0\le r\le k-1\} \nonumber \\ & &\cup~ \{\{3r+2, 3r+3, 3r+4\}\mid 0\le r\le k-1\}. \nonumber\end{aligned}$$ Then $\tau_2(\cH)=2k$ because the $k$ edges in the first line are mutually disjoint and hence need at least $2k$ vertices in any 2-transversal set, while the $2k$-element set $\{3r+2\mid 0\le r\le k-1\}\cup \{3r+3\mid 0\le r\le k-1\}$ meets all edges twice. On the other hand, there exists a unique C-coloring with decrement $k$, obtained by making $\{3r+2,3r+3\}$ a monochromatic pair for $r=0,1,\dots,k-1$ and putting any other vertex in a singleton color class. This verifies equality in the lower bound. Now, we are ready to prove Theorem \[appr-gen\]. Let us recall its statement. **Theorem \[appr-gen\]**. *The value of $\dec(\cH)$ is $(2+2\ln (2m))$-approximable on the class of all hypergraphs.* By Corollary \[k-transv\], we have a $(1+ \ln (2m))$-approximation algorithm $\cA$ for $\tau_2$. Hence, given a hypergraph $\cH=(X,\cE)$, the algorithm $\cA$ outputs a 2-transversal $T$ of size at most $(1+ \ln (2m))\tau_2(\cH)$. Then, assign color 1 to every $x\in T$, and color the $n-|T|$ vertices in $X\setminus T$ pairwise differently with colors $2,3,\dots, n-|T|+1$. As each edge $E_i\in \cE$ contains at least two vertices of color 1, this results in a C-coloring $\vp$ with decrement satisfying $$\dec(\vp) = |T|-1 \le (1+ \ln (2m))\tau_2(\cH) -1 < 2(1+ \ln (2m))\dec(\cH),$$ where the last inequality follows from Proposition \[decr-tau2\]. 
Therefore, algorithm $\cA$ together with the simple construction of coloring $\vp$ is a $(2+2\ln (2m))$-approximation for $\dec(\cH)$. Guaranteed approximation ratio on hypertrees -------------------------------------------- In this short subsection we prove Theorem \[appr-htree\]. We recall its statement. **Theorem \[appr-htree\]**. *The value of $\dec(\cH)$ is $(1+\ln m)$-approximable on the class of all hypertrees.* Given a hypertree $\cH=(X, \cE)$ and $G=(X,L)$ which is a host tree of $\cH$, construct the auxiliary hypergraph $\cH^*=(L^*, \cE^*)$ such that each vertex $l_i^* \in L^*$ represents a line $l_i$ of the host tree, moreover each edge $E_i^* \in \cE^*$ of the auxiliary hypergraph corresponds to the edge $E_i \in \cE$ in the following way: $$E_i^*=\{l_j^* \mid l_j \subseteq E_i\}.$$ Now, consider any connected C-coloring $\vp$ of $\cH$. This coloring determines the set $S \subseteq L$ of monochromatic lines in the host tree, moreover the corresponding vertex set $S^* \subseteq L^*$ in $\cH^*$. By Lemma \[mono-lines\], $\dec(\vp)=|S|=|S^*|$. As $\vp$ is a connected C-coloring, each edge of $\cH$ contains a monochromatic line and, consequently, $S^*$ is a transversal of size $\dec(\vp)$ in $\cH^*$. Similarly, in the opposite direction, if a transversal $T^*$ of $\cH^*$ is given and the corresponding line-set is $T$ in the host tree, then every edge $E_i$ of $\cH$ contains two vertices, say $u$ and $v$, such that the line $uv$ is contained in $T$. Then, the vertex coloring $\phi$, whose color classes correspond to the components of $(X, T)$, is a connected C-coloring of $\cH$, and in addition $\dec(\phi)=|T|=|T^*|$ holds. By Lemma \[conn\], $\cH$ has a connected C-coloring $\vp$ with $\dec(\vp)=\dec(\cH)$, therefore the correspondence above implies $\dec(\cH)= \tau(\cH^*)$. 
As $\cH^*$ can be constructed in polynomial time from the hypertree $\cH$, and since a transversal $T^*$ of size at most $(1+ \ln m)\tau(\cH^*)$ can be obtained by greedy selection, a C-coloring $\phi$ of $\cH$ with $$\dec(\phi)=|T^*| \le (1+ \ln m)\tau(\cH^*)=(1+ \ln m)\dec(\cH)$$ can also be constructed in polynomial time. This yields a guaranteed approximation ratio $(1+ \ln m)$ for the decrement on the class of hypertrees. Approximation hardness ====================== The bulk of this section is devoted to the proof of Theorem  \[additive\] on non-approximability for hypertrees. Then, we prove a lemma concerning parameters $\UU(\cH)$ and $\dec(\cH)$ of hyperstars. The section is closed with the proof of Theorem \[ratio\] and with some remarks. Additive linear error --------------------- Our goal in this subsection is to prove Theorem \[additive\]. This needs the following construction, which was introduced in [@perf-htree]. (We note that a similar construction was given already in [@KKPV].) #### Construction of $\cH(\Phi)$. Let $\Phi= C_1 \wedge \cdots \wedge C_m$ be an instance of 3-SAT, with $m$ clauses of size 3 over the set $\{x_1,\dots,x_n\}$ of $n$ variables, such that the three literals in each clause $C_j$ of $\Phi$ correspond to exactly three distinct variables. We construct the hypertree $\cH=\cH(\Phi)$ with the set $$X = \{ c^* \} \cup \{ x'_i,\, t_i,\, f_i \mid 1 \le i \le n \}$$ of $3n+1$ vertices, where the vertices $x'_i,t_i,f_i$ correspond to variable $x_i$. First, we define the host tree $T=(X,E)$ with vertex set $X$ and line-set $$E = \{ c^*x_i',\, x_i't_i,\, x_i'f_i \mid 1\leq i\leq n \}.$$ Hypergraph $\cH$ will have 3-element “variable-edges” $H_i=\{x_i',t_i,f_i\}$ for $i=1,\dots,n$, and 7-element “clause-edges” $F_j$ representing clause $C_j$ for $j=1,\dots,m$. All the latter contain $c^*$ and six further vertices, two for each literal of $C_j$: If $C_j$ contains the positive literal $x_i$, then $F_j$ contains $x_i'$ and $t_i$. 
If $C_j$ contains the negative literal $\neg x_i$, then $F_j$ contains $x_i'$ and $f_i$. Since $H_1,\dots,H_n$ are disjoint edges, it is clear that $\dec(\cH)\ge n$ and $\UU(\cH)\le 2n+1$. We shall see later that equality holds if and only if $\Phi$ is satisfiable. In addition, since $x_1',\dots,x_n'$ is a transversal set of $\cH$, the equalities $\tau(\cH)=n$ and $\aaa(\cH)=2n+1$ are valid for all $\Phi$, no matter whether satisfiable or not. Also, $\tau_2(\cH)=2n$ for all $\Phi$. #### Optimal colorings of $\cH$. By Lemma \[conn\], we may restrict our attention to colorings where each color class is a subtree in $T$. This excludes colorings that 2-color a variable-edge in such a way that $\{t_i,f_i\}$ is monochromatic but $x_i'$ has a different color. Hence, at least one of the lines $x_i't_i$ and $x_i'f_i$ is monochromatic (maybe both) for each $i$. Moreover, we may assume the following further simplification: there is no monochromatic line $c^*x_i'$. Indeed, if the entire $H_i$ is monochromatic, then we would lose a color by making the line $c^*x_i'$ monochromatic. On the other hand, if, say, the monochromatic pair inside $H_i$ is $x_i't_i$, then every clause-edge $F_j$ containing $c^*x_i'$ but avoiding $t_i$ also contains the line $x_i'f_i$, therefore we get a coloring with the same number of colors if we assume that $x_i'f_i$ is monochromatic instead of $c^*x_i'$. Summarizing, we search for an optimal coloring $\vp:X\to\enn$ with the following properties for all $i=1,\dots,n$:

- $\vp(c^*)\ne\vp(x_i')$;

- $\vp(x_i')=\vp(t_i)$ or $\vp(x_i')=\vp(f_i)$.

In the rest of the proof we assume that all vertex colorings occurring satisfy these conditions. #### Truth assignments. Given a coloring $\vp$, we interpret it in the following way for truth assignment and clause deletion: If $H_i$ is monochromatic, delete all clauses from $\Phi$ which contain literal $x_i$ or $\neg x_i$. 
Otherwise, assign truth value $x_i\mapsto\ttt$ if $\vp(x_i')=\vp(t_i)$, and $x_i\mapsto\fff$ if $\vp(x_i')=\vp(f_i)$. It follows from the definition of $\cH(\Phi)$ that this truth assignment satisfies the modified formula after deletion if and only if $\vp$ properly colors all edges of $\cH$. Also conversely, if $\Phi'$ is obtained from $\Phi$ by deleting all clauses which contain $x_i$ or $\neg x_i$ for a specified index set $I\ssq\{1,\dots,n\}$, then a truth assignment $a:\{x_i \mid i\in \{1,\dots,n\}\smin I \}\to\{\ttt,\fff\}$ satisfies $\Phi'$ if and only if the following specifications for the monochromatic lines yield a proper coloring $\vp$ of $\cH$: If $i\in I$, then $\vp(x_i')=\vp(t_i)=\vp(f_i)$. Otherwise, let $\vp(x_i')=\vp(t_i)$ if $a(x_i)=\ttt$, and $\vp(x_i')=\vp(f_i)$ if $a(x_i)=\fff$. The observations above imply the following statement: For any instance $\Phi$ of [3-SAT]{}, the value of $\dec(\cH(\Phi))$ is equal to the minimum number of variables whose deletion from $\Phi$ makes the formula satisfiable. To complete our preparations for the proof of the theorem, let us quote an earlier result on formulas in which every positive and negative literal occurs in at most four clauses. The problem [Max 3Sat$(4,\overline 4)$]{} asks to maximize the number of satisfied clauses in such formulas. The following assertion states that this optimization problem is hard to approximate, even when the input is restricted to satisfiable formulas. ([@bounded-sat Corollary 5]) \[sat-bounded\] Satisfiable [Max 3Sat$(4,\overline 4)$]{} has no PTAS, unless [P]{} $=$ [NP]{}. Now we are ready to verify Theorem \[additive\], which states: **Theorem \[additive\]**. *Unless [P]{} $=$ [NP]{}, none of the following values can be approximated within additive error $o(n)$ for hypertrees of edge size at most 7:* $\UU(\cH)$,  $\dec(\cH)$, $\aaa(\cH)-\UU(\cH)$,  $\tau(\cH)-\dec(\cH)$, $\dec(\cH)-\tau_2(\cH)/2$. We apply a reduction from Satisfiable [Max 3Sat$(4,\overline 4)$]{}. 
For each instance $\Phi$ of this problem, we construct the hypergraph $\cH=\cH(\Phi)$. Since $\Phi$ is required to be satisfiable, no variables have to be deleted from it to admit a satisfying truth assignment. That is, an optimal coloring has precisely one monochromatic line inside each variable-edge. Hence, the above observations together with Lemma \[conn\] imply that $\dec(\cH)=n$ and $\UU(\cH)=2n+1$. On the other hand, Lemma \[sat-bounded\] implies the existence of a constant $c>0$ such that it is [NP]{}-hard to find a truth assignment that satisfies all but at most $cm$ clauses in a satisfiable instance of [Max 3Sat$(4,\overline 4)$]{} with $m$ clauses. Since each literal occurs in at most four clauses, this may require the deletion of at least $cm/8\ge c'n$ variables. Thus, for the coloring $\vp$ determined by a polynomial-time algorithm, $\dec(\vp)-\dec(\cH)=\Theta(n)$ may hold, and hence also $\UU(\cH)-|\vp(X)|=\Theta(n)$. No efficient approximation on hyperstars ---------------------------------------- Proposition \[decr-tau2\] established a relation between $\dec(\cH)$ and $\tau_2(\cH)$, valid for all hypergraphs. Here we show that for hyperstars there is a stronger correspondence between the parameters. After that, we prove Theorem \[ratio\], which states non-approximability results on hyperstars. Given a *hyperstar* $\cH=(X,\cE)$, let us denote by $c^*$ the center of the host star. Hence, $c^*\in E$ holds for all $E\in\cE$. We shall use the following notation: $$E^- = E \smin \{c^*\}, \quad \cE^- = \{E^-\mid E\in\cE\}, \quad \cH^- = (X\smin\{c^*\},\cE^-) .$$ \[star-decrem\] If $\cH$ is a hyperstar, then $\dec(\cH) = \tau(\cH^-) = \tau_2(\cH) -1$ and $\UU(\cH)=\aaa(\cH^-)+1$. If a 2-transversal set $S$ does not contain $c^*$, then we can replace any $s\in S$ with $c^*$ and obtain another 2-transversal set of the same cardinality. This implies $\tau(\cH^-) = \tau_2(\cH) -1$. 
Let us observe next that the equalities $\UU(\cH)=\aaa(\cH^-)+1$ and $\dec(\cH) = \tau(\cH^-)$ are equivalent, due to the Gallai-type equality for $\aaa+\tau$ in $\cH^-$. Now, the particular case of Lemma \[conn\] for hyperstars means that there exists a $\UU$-coloring of $\cH$ such that all color classes but that of $c^*$ are singletons. Those singletons form an independent set in $\cH^-$, because the color of $c^*$ is repeated inside each $E^-$. Thus, we necessarily have $\UU(\cH)\le\aaa(\cH^-)+1$. Conversely, if $S$ is a largest independent set in $\cH^-$, i.e. $|S| = \aaa(\cH^-) = |X|-1-\tau(\cH^-)$ and $E^-\smin S\ne\es$ for all $E^-$, then making $X\smin S$ a color class creates a monochromatic pair inside each $E\in\cE$ because the color of $c^*$ is repeated in each $E^-$. Hence, assigning a new private color to each $x\in S$, we obtain that $\UU(\cH)\ge\aaa(\cH^-)+1$, consequently $\UU(\cH)=\aaa(\cH^-)+1$ and $\dec(\cH)=\tau(\cH^-)$. The following non-approximability results concerning $\UU(\cH)$ and $\dec(\cH)$ are valid already on the class of hyperstars. We recall the statement of Theorem \[ratio\]. **Theorem \[ratio\]**. *Unless [NP]{} $\subseteq$ [DTIME]{}$(n^{\cO(\log \log n)})$, the value of $\dec(\cH)$ cannot be approximated within ratio $(1-\eps)(\log m)$ on hyperstars; and unless [P]{} $=$ [NP]{}, the value of $\UU(\cH)$ cannot be approximated within ratio $n^{1-\eps}$ on 3-uniform hyperstars.* By Proposition \[star-decrem\], the equalities $\UU(\cH)=\aaa(\cH^-)+1$ and $\dec(\cH)=\tau(\cH^-)$ hold whenever $\cH$ is a hyperstar. If $\cH$ is a generic hyperstar (with no restrictions on its edges), then $\cH^-$ is a generic hypergraph. Thus, approximating $\dec(\cH)$ on hyperstars is equivalent to approximating $\tau(\cH^-)$ on hypergraphs, which is known to be intractable within ratio $(1-\eps)(\log m)$ unless [NP]{} $\subseteq$ [DTIME]{}$(n^{\cO(\log \log n)})$, by the result of Feige [@Fei]. If $\cH$ is a generic 3-uniform hyperstar, then $\cH^-$ is a generic graph. Thus, approximating $\UU(\cH)$ on 3-uniform hyperstars is equivalent to approximating $\aaa(\cH^-)+1$ on graphs, which is known to be intractable within ratio $n^{1-\eps}$ unless [P]{} $=$ [NP]{}, by the result of Zuckerman [@Zuck]. 
In a similar way, we also obtain the following non-approximability result concerning $\tau_2$. The value $\tau_2(\cH)$ does not have a polynomial-time $((1-\eps)\ln m)$-approximation on hyperstars, unless [NP]{} $\subseteq$ [DTIME]{}$(n^{\cO(\log \log n)})$. By Proposition \[star-decrem\], the approximation of $\tau_2(\cH)$ on hyperstars $\cH$ is as hard as that of $\tau(\cH^-)$ on general hypergraphs $\cH^-$. In connection with Theorem \[ratio\] one may observe that, even if we restrict the problem instances to 3-uniform hypergraphs in which each vertex pair is contained in at most three edges, $\UU(\cH)$ does not admit a PTAS. This follows from the fact that the determination of $\aaa(G)$ is [APX]{}-complete on graphs of maximum degree 3, by the theorem of Berman and Fujito [@BF]. Concluding remarks ================== Our results on hyperstars show that $\dec(\cH)$ admits a much better approximation than $\UU(\cH)$ does. In a way, this fact is analogous to the following phenomenon in graph theory: The independence number $\aaa(G)$ is not approximable within $n^{1-\eps}$, but $\tau(G)=n-\aaa(G)$ admits a polynomial-time 2-approximation because $\nu(G)\le\tau(G)\le 2\nu(G)$, and the matching number $\nu(G)$ can be determined in polynomial time. In this way, both comparisons $\dec(\cH)$ with $\UU(\cH)$ and $\tau(G)$ with $\aaa(G)$ demonstrate that there can be a substantial difference between the approximability of a graph invariant and its complement. Perhaps hypertrees with not very large edges admit some fairly efficient algorithms: Determine the largest integer $r$ such that there is a PTAS to approximate the value of $\UU(\cH)$ for hypergraphs $\cH$ in which every edge has at most $r$ vertices. Our results imply that $r\le 6$ is necessary. From below, a very easy observation shows that for $r=2$ there is a linear-time algorithm, because for graphs $G$, the value of $\UU(G)$ is precisely the number of connected components. 
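The last observation can be made concrete in a few lines. The sketch below is only an illustration (not part of the paper); it assumes the graph is given with vertices labeled $0,\dots,n-1$ and an edge list:

```python
from collections import deque

def upper_chromatic_number_of_graph(n, edges):
    """U(G) for a graph G, viewed as a 2-uniform hypergraph: every edge
    must contain a monochromatic pair, so both endpoints of each edge get
    the same color; hence each connected component is monochromatic, and
    one color per component is optimal."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = [False] * n
    components = 0
    for s in range(n):
        if not seen[s]:
            components += 1
            seen[s] = True
            queue = deque([s])
            while queue:  # breadth-first search over one component
                u = queue.popleft()
                for w in adj[u]:
                    if not seen[w]:
                        seen[w] = True
                        queue.append(w)
    return components
```

A single pass over the adjacency lists suffices, so the running time is linear in $n$ plus the number of edges.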
For hypertrees with non-restricted edge size, the following open question seems to be the most important one: Is there a polynomial-time $o(n)$-approximation for $\UU$ on hypertrees? [99]{} J. L. Arocha, J. Bracho and V. Neumann-Lara, On the minimum size of tight hypergraphs. [*Journal of Graph Theory*]{}, 16 (1992), 319–326. G. Bacsó and Zs. Tuza, Upper chromatic number of finite projective planes. [*Journal of Combinatorial Designs*]{}, 16:3 (2008), 221–230. C. Bazgan, M. Santha and Zs. Tuza, On the approximation of finding a(nother) Hamiltonian cycle in cubic Hamiltonian graphs. [*Journal of Algorithms*]{}, 31 (1999), 249–268. C. Berge, [*Hypergraphs*]{}. North-Holland, 1989. P. Berman and T. Fujito, On approximation properties of the Independent Set problem for degree 3 graphs. In: [*Algorithms and Data Structures*]{}, 4th International Workshop, WADS ’95, Lecture Notes in Computer Science 955 (1995), 449–460. J. A. Bondy and U. S. R. Murty, [*Graph Theory*]{}. Graduate Texts in Mathematics 244, Springer, 2008. Cs. Bujtás and Zs. Tuza, Voloshin’s conjecture for C-perfect hypertrees. [*Australasian Journal of Combinatorics*]{}, 48 (2010), 253–267. Cs. Bujtás and Zs. Tuza, Maximum number of colors: C-coloring and related problems. [*Journal of Geometry*]{}, 101 (2011), 83–97. Cs. Bujtás and Zs. Tuza, Maximum number of colors in hypertrees of bounded degree. Manuscript, 2013. U. Feige, A threshold of $\ln n$ for approximating set cover. [*J. ACM*]{}, 45 (1998), 634–652. D. Král’, On feasible sets of mixed hypergraphs. [*Electronic Journal of Combinatorics*]{}, 11 (2004), \#R19, 14 pp. D. Král’, J. Kratochvíl, A. Proskurowski and H.-J. Voss, Coloring mixed hypertrees. [*Discrete Applied Mathematics*]{}, 154 (2006), 660–672. D. Král’, J. Kratochvíl and H.-J. Voss, Mixed hypergraphs with bounded degree: edge-coloring of mixed multigraphs. [*Theoretical Computer Science*]{}, 295 (2003), 263–278. F. Sterboul, A new combinatorial parameter. 
In: [*Infinite and Finite Sets*]{} (A. Hajnal et al., eds.), Colloq. Math. Soc. J. Bolyai 10, Vol. III, Keszthely 1973 (North-Holland/American Elsevier, 1975), 1387–1404. V. Vazirani, [*Approximation Algorithms*]{}, Springer-Verlag, 2001. V. I. Voloshin, The mixed hypergraphs. [*Computer Sci. J. Moldova*]{}, 1 (1993), 45–52. V. I. Voloshin, On the upper chromatic number of a hypergraph. [*Australas. J. Combin.*]{}, 11 (1995), 25–45. V. I. Voloshin, [*Coloring Mixed Hypergraphs: Theory, Algorithms and Applications*]{}, Fields Institute Monographs 17, Amer. Math. Soc., 2002. D. Zuckerman, Linear degree extractors and the inapproximability of Max Clique and Chromatic Number. [*Theory of Computing*]{}, 3 (2007), 103–128. [^1]:  Research supported in part by the Hungarian Scientific Research Fund, OTKA grant T-81493, and by the European Union and Hungary, co-financed by the European Social Fund through the project TÁMOP-4.2.2.C-11/1/KONV-2012-0004 – National Research Center for Development and Market Introduction of Advanced Information and Communication Technologies.
Work The prayer of a righteous person is powerful and effective (James 5:16). I work hard at prayer because I believe prayer works. Or maybe I should say God works in response to prayer. How God’s will and my prayers work together is a mystery. But James is clear. Prayer is to be our first response in all of life’s situations. Prayer is about our relationship with God. But it’s also productive. It actually accomplishes something. I’ll say it again. Prayer works. Just how does prayer work? What are the conditions for power-filled prayers? James gives us some tips. He talked earlier about how the “prayer of faith” will heal the sick. But now he raises the bar. It’s the prayer of faith offered by the “righteous” person that works best. I especially like the Amplified version here: The heartfelt and persistent prayer of a righteous man (believer) can accomplish much [when put into action and made effective by God—it is dynamic and can have tremendous power](James 5:16 AMP). If we glance over this verse too quickly, we can become weighed down with the idea that we have to be “good enough” to earn the answers to our prayers. Nothing could be further from the truth. A “works-based” prayer is not at all what’s implied. But we must look closely at this verse to fully absorb its powerful message. First, we need to be firmly grounded in what it means to be “righteous.” What we could never do for ourselves, Jesus did for us through the cross. And it’s only by faith that we have access to that free gift: We’re saved by grace through faith, and not our works, lest any man should boast (Ephesians 2:8). Even the Old Testament saints were made “just through faith,” a concept that could never be grasped by the works-obsessed Pharisees. Second, let’s look at the Greek word for “prayer” as it’s used in this verse. Deesis, a different form of prayer than James previously described, is an urgent prayer. 
It comes from a word that means “to be impoverished.” This is desperate prayer—more like begging. When you pray in this way, you’re coming “needy” to God. A sinner, saved by grace. But you’re wearing Christ’s robe of righteousness, so you can approach God with bold faith that He can do anything. Let’s dig a little deeper. The word in this verse translated “effective,” or energeo, is where we get the word energy. It means to “set in motion; to cause something to happen.” So you see, this kind of prayer is not only desperate, it’s active. It gets results. In short, it works. James wants to shake us free from lazy prayers and low expectations. Old “camel knees” knew the extraordinary power available through prayer. He wants us to know this power, too! Lord, I come boldly to You today, made confident because of my righteousness in YOU. And because I have faith in what You can do—though my neediness is ever before me—I can expect great things! Give me a steadfast heart to believe the promises in Your Word. Give me alertness to watch for signs of You at work. And when I notice the answers to prayer (and even when I am still waiting), help me remember to give glorious praise and thanks to You.
Q: To implement registration page with Vaadin or not? This is a tactical implementation question about whether to use Vaadin in some part of my application. Vaadin is a great framework for logging in users and implementing sophisticated web applications with many pages. However, I think it is not very well suited to designing pages to register new users for my application. Am I right? Or am I wrong? It seems to me that a simple HTML/CSS/JavaScript login + email registration + confirmation email with confirmation link cannot be implemented easily with Vaadin. It seems like Vaadin would be overkill. Do you agree? Or am I missing something? I am looking for feedback from experienced Vaadin users. A: Login/registration can be implemented with Vaadin, but there are good arguments for implementing the login page as a JSP too. It is often a question of whether you also have a traditional web site and how you want to integrate with it. A: I had to make the same decision and went for a simple HTML login using plain servlets and templates. The rationale was: 1) We're using OpenID and I experienced some difficulty catching redirects from providers in a Vaadin app. 2) By managing security at the servlet level there is a reduced surface area for attack. You can just override getNewApplication in AbstractApplicationServlet to control access to the app. This approach is recommended in this article: Creating Secure Vaadin Applications using JEE6
Q: edit contact information in iphone I am developing an app in which I have to allow the user to edit contacts programmatically. I googled it and found that ABPersonViewController should be used, but I am not able to find out how to implement it. The Address Book Programming Guide for iPhone OS also didn't work for me. Can you suggest a way to do it? Thanks in advance. A: OK, in the end I had to find the solution myself. Here it is:

    -(IBAction)showPicker:(id)sender {
        ABAddressBookRef addressBook = ABAddressBookCreate();
        CFArrayRef allPeople = ABAddressBookCopyArrayOfAllPeople(addressBook);
        ABRecordRef person = CFArrayGetValueAtIndex(allPeople, 0);

        ABPersonViewController *personController = [[ABPersonViewController alloc] init];
        personController.displayedPerson = person;
        personController.addressBook = addressBook;
        personController.allowsEditing = YES;
        personController.personViewDelegate = self;

        UINavigationController *contactNavController = [[UINavigationController alloc]
            initWithRootViewController:personController];
        [personController release];
        CFRelease(allPeople); // owned via the Copy rule, so release it
        // person comes from CFArrayGetValueAtIndex (Get rule): it is not
        // owned here, so it must NOT be released

        [self presentModalViewController:contactNavController animated:YES];
        [contactNavController release]; // retained by the presentation
    }

    -(void)personViewControllerDidCancel:(ABPersonViewController *)peoplePicker {
        [self dismissModalViewControllerAnimated:YES];
    }

    -(BOOL)personViewController:(ABPersonViewController *)peoplePicker
        shouldPerformDefaultActionForPerson:(ABRecordRef)person
                                   property:(ABPropertyID)property
                                 identifier:(ABMultiValueIdentifier)identifier {
        return YES;
    }
A system for measuring complex dielectric properties of thin films at submillimeter wavelengths using an open hemispherical cavity and a vector network analyzer. Quasi-optical (QO) methods of dielectric spectroscopy are well established in the millimeter and submillimeter frequency bands. These methods exploit standing wave structure in the sample produced by a transmitted Gaussian beam to achieve accurate, low-noise measurement of the complex permittivity of the sample [e.g., J. A. Scales and M. Batzle, Appl. Phys. Lett. 88, 062906 (2006); R. N. Clarke and C. B. Rosenberg, J. Phys. E 15, 9 (1982); T. M. Hirovnen, P. Vainikainen, A. Lozowski, and A. V. Raisanen, IEEE Trans. Instrum. Meas. 45, 780 (1996)]. In effect the sample itself becomes a low-Q cavity. On the other hand, for optically thin samples (films of thickness much less than a wavelength) or extremely low loss samples (loss tangents below 10(-5)) the QO approach tends to break down due to loss of signal. In such a case it is useful to put the sample in a high-Q cavity and measure the perturbation of the cavity modes. Provided that the average mode frequency divided by the shift in mode frequency is less than the Q (quality factor) of the mode, the perturbation should be resolvable. Cavity perturbation techniques are not new, but there are technological difficulties in working in the millimeter/submillimeter wave region. In this paper we will show applications of cavity perturbation to the dielectric characterization of semiconductor thin films of the type used in the manufacture of photovoltaics in the 100 and 350 GHz range. We measured the complex optical constants of hot-wire chemical vapor deposition grown 1-μm-thick amorphous silicon (a-Si:H) film on borosilicate glass substrate. The real part of the refractive index and dielectric constant of the glass substrate varies from frequency-independent to linearly frequency-dependent. 
We also see power-law behavior of the frequency-dependent optical conductivity from 316 GHz (9.48 cm(-1)) down to 104 GHz (3.12 cm(-1)).
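The resolvability criterion stated above (the shift is resolvable when the mode frequency divided by the frequency shift is below the mode's Q, since the fractional line width of a mode is roughly 1/Q) can be sketched as a quick check; the numbers below are hypothetical, not measured values from this work:

```python
def shift_resolvable(f_mode_hz, delta_f_hz, q_factor):
    """Cavity-perturbation resolvability: a mode of quality factor Q has a
    fractional line width of roughly 1/Q, so a frequency shift delta_f is
    resolvable when delta_f / f exceeds 1/Q, i.e. when f / delta_f < Q."""
    return f_mode_hz / delta_f_hz < q_factor

# Hypothetical example: a 100 GHz mode with Q = 1e5 and a 5 MHz shift.
print(shift_resolvable(100e9, 5e6, 1e5))  # True: 100e9 / 5e6 = 2e4 < 1e5
```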
Q: Regular expressions to segregate between good words and bad ones I have a blacklist of bad words that are blocked if they are used on any web search engine. Examples of those words are: anal, ass, bum, bra, butt, cum, dick, lust, tit. Normal words that contain any of these previous words as part of their structures are accordingly blocked. Examples of those good words are: analog, canal, analysis, asset, compass, album, brand, button, circumstance, dickson, illustrate, repetition. My question: is there a regular expression (or bash shell script) that enables me to use the normal words without being blocked because of their blacklisted parts? I appreciate your interest in helping. Thank you. A: You can use \b, which means "word boundary": 
{ "pile_set_name": "StackExchange" }
382 So.2d 190 (1980) Benny RICARD v. STATE of Louisiana et al. No. 12951. Court of Appeal of Louisiana, First Circuit. January 21, 1980. Rehearing Denied March 31, 1980. Dennis R. Whalen, Baton Rouge, for plaintiff-appellant Benny Ricard. Emile C. Rolfs, III, Howard P. Elliot, Jr., Baton Rouge, for defendants-appellees State of Louisiana, through the Department of Public Safety, Division of State Police, and Steve Jones. Before COVINGTON, LOTTINGER and COLE, JJ. COLE, Judge. This is a suit by Benny Ricard under 42 U.S.C. § 1983[1] (Civil Rights Act) against the State of Louisiana; Steve Jones, a state *191 trooper; and, the Department of Public Safety. Plaintiff allegedly was pistol-whipped by a state trooper and sustained injuries. From a partial summary judgment dismissing his claim for punitive or exemplary damages only, plaintiff has appealed. The issue before the court is whether an award of punitive or exemplary damages may be made in a suit brought in Louisiana under 42 U.S.C. § 1983. Plaintiff argues that when a federal cause of action is brought in state court, federal substantive and state procedural law apply, citing Presley v. Upper Mississippi Towing Corp., 141 So.2d 411 (La.App. 1st Cir. 1961), and on remand and reappeal, 153 So.2d 416 (La.App. 1st Cir. 1963). The point is made that "federal common law" substantively allows punitive damages in civil rights actions. Defendants concede that "when a cause of action that is alleged to have arisen under federal law is brought in the state court, federal law must apply pursuant to the Supremacy Clause of the United States Constitution." They argue, however, that 42 U.S.C. § 1983 is silent as to whether punitive damages are recoverable, and thus we must look to 42 U.S.C. § 1988[2], which in essence provides that where the laws of the United States do not provide "suitable remedies," laws of the state in which the court having jurisdiction is located must be applied. In support thereof they cite Baggett v. 
Richardson, 473 F.2d 863 (5th Cir. 1973), as authority that a federal court will not award punitive damages if such damages are not allowed under state law. While we agree that this case stands for that holding, we find it inapposite because it was a maritime tort case applying Louisiana tort law, under which there are no punitive damages. As an apparent indication that punitive damages are not allowed in § 1983 actions, defendants cite Carey v. Piphus, 435 U.S. 247, 98 S.Ct. 1042, 55 L.Ed.2d 252 (1978) which in part states: "To the extent that Congress intended that awards under § 1983 should deter the deprivation of constitutional rights, there is no evidence that it meant to establish a deterrent more formidable than that inherent in the award of compensatory damages." (98 S.Ct. 1048-49). The footnote to that statement, however, in part states: "This is not to say that exemplary or punitive damages might not be awarded in a proper case under § 1983 with the specific purpose of deterring or punishing violations of constitutional rights. See, e. g., Silver v. Cormier, 529 F.2d 161, 163-164 (CA10 1976); Stengel v. Belcher, 522 F.2d 438, 444 n. 4 (CA6 1975), cert. dismissed, 429 U.S. 118, 96 S.Ct. 1505, 47 L.Ed.2d 760 (1976); Spence v. Staras, 507 F.2d 554, 558 (CA7 1974); Caperci v. Huntoon, 397 F.2d 799, 801 (CA1), cert. denied, 393 U.S. 940, 89 S.Ct. 299, 21 L.Ed.2d 276 (1968); Mansell v. Saunders, 372 F.2d 573, 576 (CA5 1967); Basista v. Weir, 340 F.2d 74, 84-88 (CA3 1965). Although *192 we imply no approval or disapproval of any of these cases, we note that there is no basis for such an award in this case. The District Court specifically found that petitioners did not act with malicious intention to deprive respondents of their rights or to do them other injury, see n. 6, supra and the Court of Appeals approved only the award of `non-punitive' damages, 545 F.2d, at 31." 
An examination of the cases cited in the above quoted footnote reveals that all were decided by federal courts in traditional common law states (Colorado, Ohio, Illinois, Massachusetts, Florida and Pennsylvania) applying in most instances the common law majority rule that punitive damages may be awarded where actual damages have been suffered. Admittedly, in Basista, supra, the progenitor of the other cases cited, the federal court sitting in Pennsylvania awarded only punitive damages contrary to the majority rule which that state follows. However, the case does not stand for the proposition that there existed some mystical "federal common law" which supremely negated Pennsylvania law. The actual holding of the case in this regard is found at page 85 of the opinion: "In this court Scalese's counsel raises the issue, seeking to apply the law of Pennsylvania, that there can be no exemplary or punitive damage where actual damage is not shown. (citation omitted.) But Scalese's counsel made no objection to the court's submission to the jury of the issue of exemplary damages, and, therefore, must be deemed to have waived any objection to this portion of the court's instructions. See Rule 51, Fed.R.Civ.Proc." After deciding the issue thusly, and noting that § 1988 had apparently not previously been construed by any court with respect to the issue of punitive damages, the Basista court then engaged in a discussion of the history of the Civil Rights Act and, by dictum, espoused the view that Congress intended the Act to be applied uniformly throughout the United States. The court reasoned that "federal common law" must apply to effect that uniformity and "federal common law" allows punitive damage awards. Interestingly, all of the subsequent cases cited Basista as authority for the proposition that "federal common law" decrees the award of punitive damages in Civil Rights cases. 
In chronological order, Mansell cited Basista; Caperci cited Mansell and Basista; Spence cited Mansell and Basista; Stengel cited Basista; and, Silver cited Spence and Basista. It is little wonder that the United States Supreme Court chose to neither approve nor disapprove of these cases. Perhaps, aside from a recognition that the cases subsequent to Basista represent no more than jurisprudential fission, the court remembered the words of Mr. Justice Brandeis in Erie R. Co. v. Tompkins, 304 U.S. 64, 58 S.Ct. 817 at p. 822, 82 L.Ed. 1188 (1938): "Except in matters governed by the Federal Constitution or by acts of Congress, the law to be applied in any case is the law of the state. And whether the law of the state shall be declared by its Legislature in a statute or by its highest court in a decision is not a matter of federal concern. There is no federal general common law." (Emphasis added.) We acknowledge that the above noted federal cases are persuasive. The same result has been reached under 42 U.S.C. § 1981, another provision of the Civil Rights Act. See Claiborne v. Illinois Central Railroad, 583 F.2d 143 (5th Cir. 1978). However, we agree with the United States Supreme Court's pronouncement in Carey, supra, that with respect to actions under § 1983, there is no evidence that Congress meant to establish a deterrent more formidable than that inherent in the award of compensatory damages. We have found no provisions of law enacted by Congress mandating punitive damages in Civil Rights cases. Nor does the United States constitution require the imposition of punitive damages. In such instance, § 1988 requires "the common law, as modified and changed by the constitution and statutes of the State wherein the court having jurisdiction of such civil or criminal cause is held. . . shall . . . govern the said courts in the trial and disposition of the cause . . ." 
*193 A careful reading of § 1988 leads to the unmistakable conclusion that there is no impediment to the application of Louisiana law in this instance. In fact, it is required. It is academic that Louisiana has never embraced the common law. By contrast our people, upon acquiring statehood, retained the civil law heritage. However, within the context of 42 U.S.C. § 1988 it can be said that we have indeed modified and changed the common law as regards the award of exemplary or punitive damages. Art. 2315 of the Civil Code of Louisiana in part provides: "Every act whatever of man that causes damage to another obliges him by whose fault it happened to repair it." (Emphasis added.) This provision is the cornerstone of our tort law and may readily be traced to the Code Napoleon, promulgated in 1804. It is found in the Civil Code of 1808, the basic law of the Louisiana territory, and has remained intact throughout our history. As so often explained, it contemplates simple reparation, a just and adequate compensation for injuries. It suggests no idea of revenge or punishment. As explained in Post v. Rodrigue, 205 So.2d 67, at p. 70 (La.App. 4th Cir. 1968): "The settled law of Louisiana is that vindictive, punitive or exemplary damages are not allowed in civil cases unless specifically provided for; in the absence of such a specific provision only compensatory damages may be recovered." (Citations omitted.) We conclude that exemplary or punitive damages are not available in a § 1983 action brought in this state. In this regard, Louisiana law is applicable pursuant to the provisions of § 1988. For the above and foregoing reasons the judgment of the trial court is affirmed. All costs of this appeal are to be paid by plaintiff. AFFIRMED. LOTTINGER, J., dissents and assigns written reasons. LOTTINGER, Judge, dissenting. I respectfully dissent. 
The majority cites six federal appellate court cases which held that punitive damages are awardable in a § 1983 action, but then disregards them in deference to a recent United States Supreme Court opinion which arguably would allow punitive damages under the facts alleged in the case at bar. In Carey v. Piphus, 435 U.S. 247, 98 S.Ct. 1042, 55 L.Ed.2d 252 (1978), the case upon which the majority bases its holding, the Supreme Court did make the general statement that there is no evidence that Congress meant to establish a deterrent more formidable than compensatory damages in § 1983 actions. However, the footnote to that general statement is all important. Indeed, the majority cites the footnote in its opinion. The footnote limits the general statement made in the text of the opinion to the facts of that case. The court suggests that punitive damages might be recoverable in the "proper case" under § 1983 if the purpose of awarding punitive damages is to deter or punish violations of constitutional rights, particularly when the court finds a "malicious intention to deprive respondents of their rights or to do them other injury." 98 S.Ct. 1048-1049. The petitioner in this case alleges that he was pistol-whipped by a state trooper and that he sustained injuries because of the pistol-whipping. If the plaintiff proves these allegations at trial, then I think that there has been a malicious intention to deprive him of his constitutional rights or to do him other injury. Perhaps more important than its disregard of the six federal appellate cases cited in its opinion is the majority's disregard of the underlying rationale of Basista v. Weir, 340 F.2d 74 (CA3 1965). Basista stands for, and has been cited as standing for, the proposition that the Federal Civil Rights Act should be applied uniformly throughout the United States. The majority makes only short mention of the uniformity argument and does nothing to dispute it. 
Six *194 federal circuit courts, including the Fifth Circuit Court of Appeals, have held that punitive damages are awardable in the proper case under a § 1983 action. These federal cases represent an interpretation of a congressional statute and are therefore binding upon us. The Basista argument that the Civil Rights Act should be applied uniformly throughout the United States is a substantial argument that should not be rejected without careful consideration. Confusion would be the rule if the act were given a different interpretation depending upon the tort law of the state in which an action was brought. Congress surely did not intend this when it enacted § 1983, and the federal courts which have interpreted § 1983 have attempted to give it some uniformity. There is no need in this case to rely upon § 1988 of Title 42 because federal courts have interpreted § 1983 to allow punitive damages. As noted earlier, we are bound by this interpretation and we should at least allow the issue to go to trial in this case. Therefore, I respectfully dissent from the majority opinion in this case. NOTES [1] 42 U.S.C. § 1983 reads: "Every person who, under color of any statute, ordinance, regulation, custom, or usage, of any State or Territory, subjects, or causes to be subjected, any citizen of the United States or other person within the jurisdiction thereof to the deprivation of any rights, privileges, or immunities secured by the Constitution and laws, shall be liable to the party injured in an action at law, suit in equity, or other proper proceeding for redress." [2] 42 U.S.C. 
§ 1988 provides: "The jurisdiction in civil and criminal matters conferred on the district courts by the provisions of this chapter and Title 18, for the protection of all persons in the United States in their civil rights, and for their vindication, shall be exercised and enforced in conformity with the laws of the United States, so far as such laws are suitable to carry the same into effect; but in all cases where they are not adapted to the object, or are deficient in the provisions necessary to furnish suitable remedies and punish offenses against law, the common law, as modified and changed by the constitution and statutes of the State wherein the court having jurisdiction of such civil or criminal cause is held, so far as the same is not inconsistent with the Constitution and laws of the United States, shall be extended to and govern the said courts in the trial and disposition of the cause, and, if it is of a criminal nature, in the infliction of punishment on the party found guilty. In any action or proceeding to enforce a provision of sections 1981, 1982, 1983, 1985, and 1986 of this title, title IX of Public Law 92-318, or in any civil action or proceeding, by or on behalf of the United States of America, to enforce, or charging a violation of, a provision of the United States Internal Revenue Code, or title VI of the Civil Rights Act of 1964, the court, in its discretion, may allow the prevailing party, other than the United States, a reasonable attorney's fee as part of the costs."
Astrid Gunnestad

Astrid Synnøve Gunnestad (19 August 1938 – 18 May 2016) was a Norwegian journalist and radio presenter. She was born in Asker. She started her journalistic career in Morgenbladet in 1971, and was hired by Norsk Ukeblad in 1972. From 1987 to 2007 she was their editorial manager; then, from 2007 until her death in 2016, she presented her own radio show for P4. The Association of Norwegian Editors' Honorary Award was bestowed upon her in 2015.

References

Category:1938 births
Category:2016 deaths
Category:People from Asker
Category:Norwegian magazine editors
Category:Norwegian radio personalities
Category:Norwegian women writers
Category:Women magazine editors
Q: Can I use nested queries for database entities in LINQ? I'm working on an ASP.NET application which uses a SQL Server DB and database entities. Furthermore, I have three database entities which are dependent on each other. This is the dependency hierarchy:

Instance (Key: InstanceID)
CustomField (Keys: CustomFieldID, InstanceID)
CustomFieldData (Keys: CustomFieldDataID, CustomFieldID)
CustomFieldData_Person (Keys: CustomFieldData_PersonID, CustomFieldDataID)

I can find the entries from the entity CustomField by the InstanceID like this:

var customFieldEntries = DB_Instance_Singleton.getInstance.CustomField.Where(x => x.InstanceID == instanceId);

Now I want to find all entries from CustomFieldData_Person which belong to the hierarchy with the InstanceID as key. In SQL I would write something like this:

SELECT * FROM CustomFieldData_Person WHERE CustomFieldDataID IN (
    SELECT CustomFieldDataID FROM CustomFieldData WHERE CustomFieldID IN (
        SELECT CustomFieldID FROM CustomField WHERE InstanceID = instanceId))

Unfortunately I'm absolutely new to LINQ. So my question is, how can I write such a nested query in LINQ (according to the first code example above)? Thanks in advance!

A: Firstly, if you create your ER model correctly you will have most of that logic already set up for you: Person would have a navigation property Person.CustomData, which would have properties for Field and Value, so you can just navigate the object structure. However, if you don't have that, then you can convert the IN statements to Contains:

CustomFieldData_Person.Where(cfdp =>
    CustomFieldData.Where(cfd =>
            CustomField.Where(cf => cf.InstanceID == instanceId)
                       .Select(cf => cf.CustomFieldID)
                       .Contains(cfd.CustomFieldID))
        .Select(cfd => cfd.CustomFieldDataID)
        .Contains(cfdp.CustomFieldDataID));
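If you want to sanity-check the nested-membership logic outside of Entity Framework, the same shape can be sketched against plain in-memory arrays. The sketch below is TypeScript rather than C#, and the helper name and sample data are hypothetical; only the entity and key names come from the question:

```typescript
interface CustomField { customFieldID: number; instanceID: number; }
interface CustomFieldData { customFieldDataID: number; customFieldID: number; }
interface CustomFieldDataPerson { customFieldDataPersonID: number; customFieldDataID: number; }

// Hypothetical helper: mirrors the SQL's nested IN-subqueries by filtering
// each level against the set of keys selected one level up.
function personsForInstance(
  instanceID: number,
  fields: CustomField[],
  data: CustomFieldData[],
  persons: CustomFieldDataPerson[],
): CustomFieldDataPerson[] {
  // CustomFieldIDs belonging to the instance.
  const fieldIDs = new Set(
    fields.filter(f => f.instanceID === instanceID).map(f => f.customFieldID),
  );
  // CustomFieldDataIDs belonging to those fields.
  const dataIDs = new Set(
    data.filter(d => fieldIDs.has(d.customFieldID)).map(d => d.customFieldDataID),
  );
  // Person rows belonging to that data.
  return persons.filter(p => dataIDs.has(p.customFieldDataID));
}
```

Each `Set`/`has` pair plays the role of one `Contains` in the LINQ version; reading the function from the inside out reproduces the SQL subquery nesting.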
No need to Focus on more power

If you have driven the Ford Focus RS there may be a couple of things you might change. The seating position is too high for example and the carbon-look interior borders on tacky. There is one thing the RS certainly doesn’t need though, and that’s a fat slice of extra power and torque. The RevoKnuckle is a wonderful thing, reining in the torque admirably but it still struggles. 300bhp through the front wheels is simply a big ask of any car and as you accelerate in the RS there is still a tug of the wheel as it is battered by the 324lb ft of torque.

Ford Focus RS tuned to 340bhp

With this in mind tuning specialist Graham Goode Racing has launched a 340bhp and 397lb ft Focus RS performance upgrade package. Yep that’s over 70lb ft more, not to mention 40bhp extra. This is only an ECU conversion too, so don’t expect a tweaked and fettled chassis to keep it all on the tarmac. It will be interesting to see how the RS handles all this power and if it does miraculously stay on the road it will go from being very fast to devastatingly rapid. I’m just not sure if the already fantastic Focus really needs it.

Join the debate

RevoKnuckle has been in development for a very long time. And I'm sure it was originally meant to help with the growing torque levels that diesels are putting out. This is just the first car it's been used on. Also a mid-engined or 4wd car would hit 30k+ and who's going to pay that much for a Focus?

Pity they didn't make it 4wd though, or better still mid-engined, such a car would be totally awesome. Then there would be no need to have gone to huge effort and expense developing the RevoKnuckle, which can't solve the inherent problems of transverse-engined fwd cars. No fancy suspension system ever can.
This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator. The use of functional MRI has been applied to a number of brain systems such as language, sensory (motor, visual), pain, and higher level cognitive functions such as memory or spatial attention. Typically, there is a known model of the experiment (i.e. the subject moves their fingers at the appropriate time); however, in the case of acupuncture this may not hold true. There is a known model of the presence of the needling, but the brain's response may differ. This response may be variable from subject to subject. We are proposing to use a combination of principal components analysis followed by independent component analysis to identify the primary brain responses to acupuncture of the visual system. These experiments have a large number of voxels (64x64x32) for a large number of time points (500), making this analysis difficult on standard computational hardware. Furthermore, we plan to investigate how the independent components interact across a population of subjects to identify the population-based time course of acupuncture. We seek the use of the super-computing resources (storage, processing and memory) to conduct our large-scale analysis of within- and across-subject responses to acupuncture as measured by functional MRI collected using NIH funds from the National Center for Complementary and Alternative Medicine.
Q: Adding 3rd condition to ngClass is not working I am trying to add a 3rd condition to my ngClass. At first, I got the following two classes to work in my ngClass to alternate the color of the rows:

[ngClass]="{ totalrow:i%2 != 0, odd:i%2 == 0}"

I am trying to add a 3rd class and condition so that the mat-list will show a border line at the top of the mat-list-item. However, when I add the 3rd condition, it gives me an error:

[ngClass]="{ totalrow:i%2 != 0, odd:i%2 == 0, borderTopClass : operator === 'fas fa-equals'}"

I get the following error, which is confusing to me: Parser Error: Missing expected : at column 47 in [{ totalrow:i%2 != 0, odd:i%2 == 0, borderTopClass : operator === 'fas fa-equals'}] in Here is the code with the ngFor: <div class="ng-container" *ngFor="let operator of operatorList; let i = index"> <mat-list-item fxLayoutAlign="start" style="padding-left: 0px; padding-right: 0px;" [ngClass]="{ totalrow:i%2 != 0, odd:i%2 == 0, borderTopClass : operator === 'fas fa-equals'}"> <i class="{{operator}}"></i> </mat-list-item> </div> Any help is appreciated.

A: I guess a commenter needs a deeper explanation of how this works. <div [ngClass]="{ 'is-active': condition, 'is-inactive': !condition, 'multiple': condition && anotherCondition }"> The multiple class will apply when the two conditions are both met. Not just one but both. You could add a third like this: 'multiple': condition && anotherCondition && thirdCondition Here's a StackBlitz of the OP's code working as he expected and without error. If I can help more please let me know.
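The object literal passed to [ngClass] is just a map from class name to boolean, so the three conditions can be pulled out of the template into a small component method that is easy to unit-test. The helper below is a hypothetical sketch, not something Angular requires; the class names and the operator value come from the question:

```typescript
// Hypothetical helper: computes the class map that [ngClass] would receive
// for row index i and the given operator icon string.
function rowClasses(i: number, operator: string): Record<string, boolean> {
  return {
    totalrow: i % 2 !== 0,                        // style for odd row indices
    odd: i % 2 === 0,                             // alternate colour for even row indices
    borderTopClass: operator === 'fas fa-equals', // top border on the "equals" row
  };
}
```

In the template this would be used as `[ngClass]="rowClasses(i, operator)"`, keeping the template expression trivial and moving the condition logic into testable TypeScript.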
John Truscott (1936–1993) John Truscott was born on February 23, 1936 in Melbourne, Victoria, Australia as John Edward Truscott. He was an actor and costume designer, known for The Spy Who Loved Me (1977), Camelot (1967) and Paint Your Wagon (1969). He died on September 5, 1993 in Melbourne.
Jean-Baptiste de La Croix de Chevrières de Saint-Vallier

Jean-Baptiste de la Croix de Chevrière de St. Vallier (November 14, 1653 – December 26, 1727) is best known as Quebec’s second bishop. Born in the southeastern French city of Grenoble in 1653 to a wealthy land-owning family, Saint-Vallier swiftly became a community figure, known for founding a hospital in St. Valier. His officious and dominating personality led him to accept the position of bishop in 1685 at the call of Louis XIV and François de Laval, former bishop of Quebec. Often referred to as Abbé Saint-Vallier, he was a controversial figure as Bishop of Quebec, since he rarely listened to advice. He spent large amounts of money that left the seminary in great debt at the time of his death in 1727. He was deeply involved in the Catholic reform tradition and promoted several missions throughout Canada. He was seen as a very strict leader for most of his reign. He refused demands for his resignation by both the King and the religious of New France. He was suspected of Jansenism, and his administration of the diocese led to popular revolts and struggles with various religious groups. Accomplishments during his 42-year reign include: the founding of the Hôpital-Général de Québec (1692), the edifice for the bishop (1688), and the installation of religious reformist communities in the Montreal area. The development of the Roman Catholic Archdiocese of Quebec and the Roman Catholic faith was his utmost priority and interest; he was particularly sensitive on the point of morality, which he believed was failing in his see. He was also greatly involved with the Society of Foreign Missions of Paris.

Biography

Born November 14, 1653 to Jean de La Croix de Chevrières de Saint-Vallier and Marie de Sayve, Jean-Baptiste was part of the La Croix family, known to be ranked among the best in Dauphiné with prestigious posts such as country noblemen, officers, magistrates and ambassadors.
Jean-Baptiste's father was a Grenoble magistrate and worked for the diplomatic services and his grandfather was a lawyer and poet, then a judge at the Parliament of Grenoble. The La Croixs owned a large amount of land including the castle of Saint-Vallier in the Rhone, which previously belonged to King Henry II's mistress, Diane de Poitiers. This was where Jean-Baptiste spent most of his childhood. However, little is known about him during that period besides his charitable deeds and his education at the Jesuit College in Grenoble. The La Croix children were much influenced by religion; three out of ten entered religious life. Jean-Baptiste entered the seminary of Saint-Sulpice in Paris and obtained his licentiate in theology in 1672 at 19 years of age. In 1676, he was appointed almoner-in-ordinary to King Louis XIV, a promotion that can be attributed to his family's connections. He was ordained priest in 1681. He personally funded a small hospital in Saint-Vallier in 1683. Jean-Baptiste was known for his austerity, his strong will and his dynamism. He was a close friend of the bishop of Grenoble, Le Camus, and would regularly visit hospitals, prisons and country parishes. At the court of the "Sun King", he kept his religious attire. Ideology Saint-Valier was a supporter of the Counter Reformation. His initial intent in the New World was to engage in the conversion of the indigenous residents. He introduced Jesuits and Recollects in an attempt to evangelize New France. Many of these missions (Illinois, Louisiana, and Mississippi) resulted in conflicts between Bishop Saint-Vallier, the Jesuits and the seminary of Quebec. His various construction projects reflect a desire to restore and renew the authority in the Catholic Church as the main institution of administrative organization. In 1697, Saint-Valier built a palace in Quebec for his clergy and as a place of hospitality. 
During the same year, he also established a nuns' monastery in Trois-Rivières. Saint-Vallier’s zeal for religious activities and establishments stretched from Quebec, Montreal, Acadia and Louisiana. His way of life embodied the ideals of the Council of Trent.

Diocese of Quebec

The Diocese of Quebec was vast and its population diverse and widespread. It included the whole of French North America, or what was called New France, divided into seven colonies: Newfoundland, Acadia, Île Royale, Louisiana, Illinois, Upper Country and Canada, inhabited by Indigenous people and the European settlers. During the tenure of Saint-Vallier, immigration from France was mostly over; the European colonists were farmers, fishermen, sailors, merchants and ‘coureurs des bois’, overseen by a small elite of aristocratic leaders, but a great demographic explosion occurred between 1685 and 1730, the white population in New France jumping from c. 12,000 inhabitants to c. 41,500. During the same time, the number of Amerindians fell from c. 163,500 to c. 61,500. That loss, mainly in the tribes of Louisiana, was attributed to warfare and diseases brought to the valley of the Mississippi. The number of Aboriginals compared to white settlers is one reason for the presence of so many religious orders in New France. The missions and conversions to Christianity were deemed very important. Priests of the Missions Étrangères of Paris, the Jesuits, the Recollets and the Sulpicians often worked in collaboration with the nuns from different orders like the Congrégation de Notre-Dame or the Canonesses of St. Augustine of the Mercy of Jesus at l’Hôtel-Dieu de Québec. The arrival of Saint-Vallier and his strong views on what should be the duties of the priests created a shock wave in the orders, especially for the Seminary of Quebec, newly founded by his predecessor Bishop Laval.
Beginnings as Bishop Advancing quickly in the religious and social hierarchies, it was but a matter of time before Saint-Vallier would be elevated to the rank of bishop. In 1685, Mgr de Laval, Bishop of Quebec, gave his resignation to the King and proposed Saint-Vallier to replace him. His entourage first pushed him to refuse the see, since the Diocese of Quebec was relatively new, poor, far from court and at that time "perhaps the most wretched and difficult of all the dioceses in mission lands". Abbot Saint-Vallier finally decided to accept the position, and left France for a sojourn in his future see with the title of vicar general of Bishop Laval, since the ceremony of his investiture had to be postponed due to the difficult relationship between the Pope Innocent XI and Louis XIV. His first stay in Canada lasted a year and a half. Saint-Vallier surprised the clergy with his passion and energy. His trip started in Quebec, down to the parishes along the St. Lawrence River, Montreal and then to Acadia. During this time, he preached to both the French and the Indians. In 1686, he debated going further into the Great Lakes in order to continue his investigations. However, his strong personality intimidated people. The superiors of the seminary later wrote to Bishop Laval that they believed he wasn't a suitable candidate for the task of governing the Quebec diocese. Laval sided with them and requested that Saint-Vallier leave his post. This of course offended him and he refused this request, backed by the King, who ‘exiled’ Mgr Laval in France and refused to permit his return to Quebec. Disappointed and angry, as he had expected to die at the Quebec church he had co-founded, Laval made many accusations that portrayed Saint-Vallier as a manipulative traitor. Saint-Vallier was consecrated bishop at Saint-Sulpice on January 25, 1688 and allowed his predecessor to go back to Canada. 
However, this would prove to be detrimental for him as upon his return in the summer of 1688, there was a disagreement between him and the seminary of Quebec. Three priests and the Bishop Laval conspired together in order to undermine Saint-Vallier’s authority and "three quarters of the clergy in Canada […] [had] already escaped the direct authority of the bishop, who found himself, in addition, obliged to share his jurisdiction over his own secular clergy with his seminary." Autumn of 1688, Bishop Saint-Vallier initiated a turnover of the old system and replaced it with new changes in the organization of the seminary which the latter rejected with backing from the Bishop Laval. "Mgr de Saint-Vallier worked on establishing more strict and clear pastoral norms […] the directives that he fixed throughout his episcopate concentrate mainly on the administration of the sacraments, especially the sacrament of penitence, and on the preaching" At that time, the Iroquois started attacking the French again and the impending approach of the English loomed ahead. Attacked on every side and called a tyrant and a jansenist, he decided to seek for arbitration by higher religious authorities, in this case the Archbishop of Paris and the private confessor of the King, who "both decided in favour of the bishop on the essential points […], the seminary of Quebec lost its privileges and came [back] under the usual rule." Nevertheless, by the end of 1694, Saint-Vallier’s relation with his diocese had deteriorated to the point that Louis XIV was forced to recall him to Paris. While Saint-Vallier defended his actions, he was asked to resign, which he refused to do. After being kept in France until 1697, without consenting to resign, Saint-Vallier was allowed to return to Canada after agreeing to be more "prudent" and moderate in his ways. He returned to his see and authorized a new establishment of Ursulines at Trois-Rivières. 
Quarrels With Different Institutions Saint-Vallier's tenure as bishop was defined by interminable quarrels with governmental and religious institutions in French North America. Even before he was officially consecrated as bishop, Saint-Vallier's active leadership style brought him into conflict with various groups, who perceived him as, at times, domineering and micromanaging. He quarreled with Governor Frontenac over their respective social standing, going so far as to threaten to place an interdict on the Recollet order for giving the Governor precedence. He also clashed with the female religious order of the Congrégation de Notre-Dame. The order was active in teaching and nursing, and the Bishop sought to impose upon them a stricter cloistered lifestyle. In addition, he demanded they assent to dowry payments, solemn vows, and that they swear obedience to him as bishop. While the Congregation resisted, they were eventually forced to accept many of Saint-Vallier’s dictates. Upon his return from France, Saint-Vallier quickly became entangled in more intra-religious disputes. Further conflict arose in regard to competing claims to evangelization rights. In 1698, the seminary of Quebec requested permission to send a mission to the Tamaroa tribe. Saint-Vallier, who, after the "great quarrel" with the seminary, was eager to remain on good terms, consented. This was a slap in the face to the Jesuits, who felt their evangelizing efforts were under pressure worldwide from the secular church. Claiming the Tamaroas were included in the Illinois tribe, whose conversion had been entrusted to them, they objected. When the dispute was put to his arbitration, Saint-Vallier decided in favour of the Seminary. When the Jesuits appealed to the King Louis XIV in 1700, the Bishop returned to France to defend his decision. Although it was upheld, the damage done to his relation with the Jesuits was lasting. 
While subject to much criticism, Saint-Vallier was also admired in his diocese for his dedication and self-sacrifice. Rather than staying in Quebec or Montreal, he tirelessly traveled the back-country. The founding of the Hôpital Général and installation of Jesuits and Recollets at Montreal were also to his credit. Saint-Vallier and Jansenism There was a very strong suspicion in the colonies and in France that the Bishop of Quebec was in fact a follower of Jansenism. Named for Cornelius Jansen, a Dutch Catholic Bishop, Jansenism was characterized by a very strict and austere Christianity, a rigorism in the practice of religion and a certain individualism. The Dictionnaire critique de théologie explains the broad meaning of Jansenism thus: "an internal movement of Catholicism that rejects the necessity of certain condemnations and limits their scope, and tries to present Christianity in its original form, closer to its objectives." Opposed to the centralization of power and to absolutism, this religious movement was seen as a plague by the court of King Louis XIV and in New France, where the government system was strongly based on absolutism. If Saint-Vallier presented Jansenist ideas, it was in certain aspects of his writing and in his austerity and deep orthodoxy, but he was certainly not a Jansenist. At the beginning of the 18th century the Bishop wrote three books: the Ritual, the Catechism and the ‘Statuts et ordonnances’. Because of his quarrels with the Jesuits, the Superior of the order decided to attack Saint-Vallier’s authority by writing a long critique of these three books, seeing them as a "lapse into Arianism, Pelagianism, Jansenism, Lutheranism, and Calvinism". Father Bouvart based his accusations on different passages of the works of the Bishop, for example this extract from the Catechism: "Le nombre des réprouvez sera-t-il bien plus grand que celui des bienheureux ? 
Oui, le chemin de la perdition est large, au lieu que le chemin qui conduit à la vie éternelle est étroit." (Will the number of the damned be much greater than the number of the blessed? Answer: Yes, the road to perdition is broad, whereas the road that leads to the everlasting life is narrow.) Bishop Saint-Vallier eventually appealed to the Sorbonne to have his works rehabilitated. The doctors of the Faculty of Theology declared the Ritual and the Catechism perfectly orthodox and censured Bouvart's critique. Nevertheless, in 1713 Saint-Vallier decided to re-edit the Ritual so as to dispel all doubts about his alleged Jansenist ideas. This book remained in use in the parishes until the middle of the 19th century. Capture and Detention On his return to New France, Msgr de Saint-Vallier’s vessel, along with other ships from the convoy sailing to New France, was attacked by English naval forces and sent to England. There he was made a diplomatic prisoner and placed under house arrest, as France was at war with England in the War of the Spanish Succession. With Saint-Vallier unable to rule from custody, the religious dimension of the diocese of Quebec fell into decay. The problem in the eyes of the Bishop and many of the priests was the lack of morality in the colony. They encountered much reluctance from the population, especially from the Natives, who disagreed with the clergy's fight against alcoholism and ‘indecency and immorality’, and with their attempt to instill Christian practices in the tribes while ridding them of their own customs. The dispute over the sale of alcohol also created waves in the colonial population since the government and especially the merchants sought to use spirits as a way to maintain good relations with the Amerindian tribes. The Bishop remained a prisoner in London for five years while Queen Anne ruled. During this time, the King of France and the war council deliberately slowed negotiations for his release. 
Many people were happy to be rid of Saint-Vallier and his incessant disputes, while the Queen of England demanded in exchange for the Bishop of Quebec the return of the Baron de Méan, "a dangerous man for France’s interests". It wasn't until 1709 that the king decided to set the dean of Liège free, and in turn the English returned Saint-Vallier. At that time, Saint-Vallier's diocese had deteriorated greatly, especially after Bishop Laval's death in 1708. Despite his pleas, the king was reluctant to let him go back to New France, fearing new religious conflicts. Thus Saint-Vallier underwent a 'forced exile' for four years (1709-1713) before he could return. Late Life, Death & Epilogue After thirteen years of absence, Saint-Vallier finally returned to Quebec, having persuaded the king to give consent to his departure. He arrived in his Diocese tired and worn by the torments of the last 20 years of constant infighting. The disputes with the religious orders of New France, the government and the merchants gave way to a more peaceful period that lasted until his death, although he retained some of his old habits. He refused, for example, to ring the bell of the cathedral for the death of the governor Rigaud de Vaudreuil, and "grudges subsisted between [him] and his seminary". Austere throughout his life, he became more and more humble in his way of living and turned toward contemplation and simple duties. As Timothy Pearson explained in Becoming holy in early Canada: "Charity, both the love one bore for God and the public acts of altruistic gift-giving […] became the prominent trope of holiness after 1650". Saint-Vallier, following the example of the ‘Saints’, showed his generosity by helping the poor and the Hôpital Général of Quebec. He also took very seriously his duties as Bishop and developed parishes in the farthest corners of the diocese. Weak from sickness, he died on 26 December 1727 in the Hôpital Général, which he had founded. 
His last words showed his charity, for he said: "Forget me, but do not forget my poor". The Abbot Gosselin, who wrote about Bishop Saint-Vallier in the late 19th century, said of him: "especially by his great virtues and the holiness of his life, he is revealed in history with the halo of charity and disinterestedness: his memory shall be eternal" (surtout par ses grandes vertus […] et la sainteté de sa vie, […] il nous apparaît dans l’histoire avec l’auréole de la charité et du désintéressement : sa mémoire sera immortelle) See also Michel Bertier Michel Sarrazin References Bibliography Biography at the Dictionary of Canadian Biography Online the Catholic Encyclopedia - Jean-Baptiste de Saint-Vallier Saint-Vallier, Jean-Baptiste de La Croix de Chevrières de. Catéchisme du diocèse de Québec par Monseigneur l’illustrissime & reverendissime Jean de La Croix de Saint Valier, évêque de Québec. Paris, Urbain Coustelier, 1702. Saint-Vallier, Jean-Baptiste de La Croix de Chevrières de. Estat present de l’Eglise et de la colonie francoise dans la Nouvelle France, par M. l’Evêque de Quebec. Paris, Robert Pepie, 1688. Saint-Vallier, Jean-Baptiste de La Croix de Chevrières de. Rituel du diocèse de Québec, publié par l’ordre de Monseigneur de Saint-Valier, évêque de Québec. 1re édition. Paris, Simon Langlois, 1703. Saint-Vallier, Jean-Baptiste de La Croix de Chevrières de. Rituel du diocèse de Québec, publié par l’ordre de Monseigneur l’évêque de Québec. 2e édition. Paris, Simon Langlois, 1703 [vers 1713]. Saint-Vallier, Jean-Baptiste de La Croix de Chevrières de. Statuts, ordonnances et lettres pastorales de Monseigneur de Saint-Valier évêque de Québec pour le reglement de son diocese. Paris, Simon Langlois, 1703 Blouin, Annie. 1999. Les exigences pastorales de Mgr de Saint-Vallier envers ses prêtres, 1685-1727. Mémoire (M.A.)—Université de Laval, 1999. Campeau, Lucien. "Bouvart, Martin" in Dictionary of Canadian Biography, vol. 2, University of Toronto/Université Laval, 2003. 
(accessed February 22, 2015) <http://www.biographi.ca/en/bio/bouvart_martin_2E.html.> Choquette, Robert. Canada’s Religion: An Historical Introduction. Ottawa: University of Ottawa Press, 2004. Cliche, Marie-Aimée. 1988. Les pratiques de dévotion en Nouvelle-France: comportements populaires et encadrement ecclésial dans le gouvernement de Québec. Québec: Presses de l'Université Laval. Fay, Terence. "A History of Canadian Catholics: Gallicanism, Romanism, and Canadianism : Volume 20 of History of religion". McGill-Queen's Press - MQUP, 2002. Foley, Mary Anne, ""We Want No Prison Among Us": The Struggle for Ecclesiastical Recognition in Seventeenth-Century New France," Beyond the Walls: Women Religious in American Life 14 (Winter 1996); pp. 1-18. (Accessed February 5, 2015). <https://www.jstor.org/stable/25154538> Gosselin, August. "Mgr. de Saint-Vallier et son temps". Nos Racines/Our Roots. (Accessed February 6, 2015). <http://www.ourroots.ca/f/toc.aspx?id=1702> Greer, Allan. 1985. Peasant, lord, and merchant: rural society in three Quebec parishes, 1740-1840. Toronto: University of Toronto Press. Grès-Gayer, Jacques M., « Jansénisme », dans Jean-Yves Lacoste (dir.), Dictionnaire critique de théologie, Paris, Presses universitaires de France, 2002, p. 708-710. La Charité, Claude, «Les deux éditions du Rituel du diocèse de Québec de Mgr de Saint-Vallier, datées de 1703 : de l’édition janséniste à l’édition revue et corrigée par la Compagnie de Jésus», Revue de Bibliothèque et Archives nationales du Québec, No. 3 : pp. 74–85. Pearson, Timothy G. Becoming holy in early Canada. McGill-Queen’s University Press, 2014. Pearson, Timothy G. Becoming holy in early Canada: performance and the making of holy persons in society and culture. Thesis (Ph. D.)--McGill University, 2008. Pritchard, James S. 2004. In search of empire: the French in the Americas, 1670-1730. Cambridge, UK: Cambridge University Press. Rambaud, Alfred. 
"La Croix de Chevrières de Saint-Vallier, Jean-Baptiste De." Dictionary of Canadian Biography. (Accessed February 1, 2015).<http://www.biographi.ca/en/bio/la_croix_de_chevrieres_de_saint_vallier_jean_baptiste_de_2E.> Scalberg, Daniel Allen. 1990. Religious life in New France Under the Laval and Saint-Vallier bishoprics : 1659-1727. Thesis (Ph. D.)-- University of Oregon, 1990 Scott, M. Eileen. "Barbier, Marie, de l’Assomption" in Dictionary of Canadian Biography, vol. 2, University of Toronto/Université Laval, 2003. (accessed February 20, 2015) <http://www.biographi.ca/en/bio/barbier_marie_2E.html.> Tallon, Alain. 1997. La France et le Concile de Trente, 1518-1563. [Rome]: École française de Rome. Thomas, James H., "Quebec's BIshop as Pawn: Sait-Vallier's Imprisonment in England 1704-1709," CCHA Historical Studies 64 (1998), pp. 151–160. (Accessed February 1, 2015). <http://www.cchahistory.ca/journal/CCHA1998/THOMAS.pdf> Valois, Jacques. "Denys, Joseph" in Dictionary of Canadian Biography, vol. 2, University of Toronto/Université Laval, 2003. (accessed February 20, 2015) <http://www.biographi.ca/en/bio/denys_joseph_2E.html.> Category:Roman Catholic Bishops of Quebec Category:17th-century Roman Catholic bishops Category:18th-century Roman Catholic bishops Category:1653 births Category:1727 deaths Category:Burials at Notre-Dame de Québec Cathedral Category:Persons of National Historic Significance (Canada)
{ "pile_set_name": "Wikipedia (en)" }
Australians' wealth doubled in seven years: study Australians are now richer than ever before, new figures showed today, thanks to a buoyant sharemarket and high house prices. Private wealth rose 18 per cent to a new record high of $5 trillion in the year to June, or $250,000 a head, while debt averaged $19,000 a head. Stock broker CommSec's analysis of Treasury and Australian Bureau of Statistics data found wealth doubled in the past seven years and rose almost 120 per cent in the past decade, beating the record set in the late 1980s. "The gains in wealth over the past decade have not been equalled in at least 40 years," CommSec chief equities analyst Craig James said. "Rising income and wealth levels combined with historically low interest rates and unemployment near 23-year lows will ensure that consumers keep spending in coming months of record petrol prices." Mr James said the rise in wealth was due to the sharemarket - which hit successive record highs this week - and house prices, despite a dip in the early part of this year. Separate figures today indicated the building industry was firing in the June quarter, reaching record highs. Total building work done rose 0.7 per cent to a high of $12.86 billion and home building was the second-highest on record, pipped only by the period before the introduction of the GST in June 2000. Renovation activity also hit a fresh high, while the amount of unfinished work on builders' books was also in uncharted territory at $23.7 billion by the end of June. Treasurer Peter Costello welcomed a 3.5 per cent fall in personal borrowing in August, with lending on credit cards and overdrafts falling 6.5 per cent. He said the figures, combined with a steadying in retail trade and new car sales, pointed to a slowing in previously heady consumption growth. "With incomes remaining strong and consumption easing from high levels, it may well be that households are strengthening their balance sheets," he said. 
However, crude oil prices reached above $US54 a barrel in New York overnight for the first time, pointing to a further spike in petrol prices at the bowser. Prices retreated marginally during Asian trade but traders were still nervously monitoring developments in strike-hit Nigeria and recovery efforts in the hurricane-battered Gulf of Mexico. Meanwhile, an official measure of future employment fell in October for the third month in a row, but the Department of Employment and Workplace Relations said it was too early to pick a trend.
{ "pile_set_name": "Pile-CC" }
Lisnagarvey Hockey Club Lisnagarvey Hockey Club is a field hockey club based in Hillsborough, County Down, Northern Ireland. The club was founded in 1901 and was originally based in Lisburn. The club was named after Lisnagarvey, the townland that eventually expanded into Lisburn. The club's senior men's team plays in the Men's Irish Hockey League, the Men's Irish Senior Cup, the Kirk Cup and the Anderson Cup. They have previously played in the Ulster Senior League. The men's reserve team plays in the Men's Irish Junior Cup. Lisnagarvey has also represented Ireland in European competitions, winning the 1991 EuroHockey Club Trophy. Lisnagarvey also fields various men's and women's teams in junior, senior and veterans leagues and cup competitions affiliated to the Ulster Hockey Union. History Early years Lisnagarvey Hockey Club was founded in September 1901, following a meeting held at the Temperance Institute on Railway Street, Lisburn. An earlier Lisburn Hockey Club was founded in 1897 so the new club was named after Lisnagarvey, the townland that eventually expanded into Lisburn. In 1903–04 the club joined a league for the first time and in 1904–05 the club won its first trophy, the Mulholland Shield. In 1905–06 Lisnagarvey reached the final of the Irish Junior Cup. After the first game against Monkstown finished 2–2 after extra time, they lost the replay 5–0. In 1922–23 Lisnagarvey won their first senior trophy when they won the Anderson Cup, defeating Antrim in the final. In 1924–25 Lisnagarvey won a quartet of trophies. In addition to winning the Anderson Cup for a second time, they also won the Irish Senior Cup, the Kirk Cup and the Ulster Senior League, all for the first time. Men's Irish Senior Cup Lisnagarvey are the Irish Senior Cup's most successful team. They won the cup for the first time in 1924–25, defeating Limerick PMYA over three games. 
Between 1987–88 and 1993–94, with a team that included Jimmy Kirkwood, Lisnagarvey won the cup for seven successive seasons. Notes Ulster Senior League Men's Irish Junior Cup In 1905–06 Lisnagarvey reached the final of the Irish Junior Cup for the first time. After the first game against Monkstown finished 2–2 after extra time, they lost the replay 5–0. In 1954–55 Lisnagarvey won the Irish Junior Cup for the first time after defeating UCD 4–0 in the final. Notes Kirk Cup Notes Anderson Cup Notes Men's Irish Hockey League In 2008–09 Lisnagarvey were founder members of the Men's Irish Hockey League. Regular season Notes EY Champions Trophy Lisnagarvey in Europe Lisnagarvey has also represented Ireland in European competitions. After winning both the 1969–70 Irish Senior Cup and the 1969–70 British Club Championship, Lisnagarvey were invited to play in the 1971 EuroHockey Club Champions Cup. After retaining both the Irish Senior Cup and the British Club Championship in 1970–71, Lisnagarvey were invited to play in the 1972 EuroHockey Club Champions Cup. Women's section Lisnagarvey first formed a women's section in 1903–04. The original women's section was suspended during the First World War but was reformed in 1920. During the 1920s at least two Lisnagarvey women's players – Sylvia Kirkwood and K. Kirkwood – represented Ireland. Women's Irish Junior Cup Grounds Lisnagarvey originally played their home games at two separate pitches in Lisburn – one at Magheralave Road and the other at Antrim Road. Lisnagarvey took over the Magheralave Road pitch from the original Lisburn Hockey Club after it disbanded around 1907–08. They continued to use this pitch until 1933–34. In the early 1950s Lisnagarvey purchased ground in Blaris, near the Lisnagarvey transmitting station. The club members subsequently built their own pitch and pavilion. In the 1980s the club established an artificial pitch complex at a completely new venue nearby. The new home was named New Blaris. 
In 2002 New Blaris was sold and the club temporarily played its home games at Queen's University. Work on a new home at Comber Road, Hillsborough, County Down was started in 2004. This facility, featuring a new clubhouse and two water-based artificial turf pitches, was completed in time for the start of the 2005–06 season. Notable players Men's field hockey internationals In 1908 Fred Hull became the first Lisnagarvey player to play for Ireland. He made his debut as a substitute in a match against Wales. Steven Johnson Jimmy Kirkwood Stephen Martin Men's cricket internationals Jack Bowden Jimmy Kirkwood Nelson Russell Women's field hockey internationals K. Kirkwood Sylvia Kirkwood Recipients of the Military Cross During the First World War forty-three club members served with the British Armed Forces. Of these, four were killed and four were wounded. Four others received the Military Cross. E. B. B. Hamilton R. P. McGregor Hugh Morrow Nelson Russell Honours Men EuroHockey Club Trophy Winners: 1991: 1 Runners Up: 1989: 1 British Club Championship Winners: 1969–70, 1970–71: 2 Men's Irish Hockey League Winners: 2011–12, 2015–16, 2018–19: 3 Runners Up: 2009–10, 2010–11: 2 Irish Senior Cup Winners: 1924–25, 1926–27, 1940–41, 1944–45, 1945–46, 1950–51, 1951–52, 1957–58, 1959–60, 1961–62, 1965–66, 1969–70, 1970–71, 1987–88, 1988–89, 1989–90, 1990–91, 1991–92, 1992–93, 1993–94, 1996–97, 2002–03, 2004–05: 23 Runners Up: 1942–43, 1948–49, 1958–59, 1977–78, 1980–81, 1995–96, 1999–2000, 2005–06, 2015–16, 2018–19: 10 Irish Junior Cup Winners: 1954–55, 1955–56, 1957–58, 1958–59, 1959–60, 1961–62, 1962–63, 1966–67, 1969–70, 1971–72, 1972–73, 1973–74, 1976–77, 1986–87, 1989–90, 2002–03, 2010–11: 17 Runners Up: 1905–06, 1953–54, 1974–75, 1988–89, 1992–93, 1998–99, 2000–01, 2003–04: 8 EY Champions Trophy Winners: 2016: 1 Runners Up: 2019: 1 Ulster Senior League Winners: 1924–25, 1933–34, 1937–38, 1938–39, 1944–45, 1949–50, 1950–51, 1951–52, 1952–53, 1953–54, 1954–55, 1959–60, 1960–61, 
1962–63, 1964–65, 1965–66, 1969–70, 1971–72, 1976–77, 1977–78, 1980–81, 1989–90, 1990–91, 1991–92, 1993–94, 1994–95, 1996–97, 1998–99, 1999–2000, 2000–01, 2001–02, 2010–11: 32 Kirk Cup Winners: 1922–23, 1923–24, 1924–25, 1933–34, 1938–39, 1941–42, 1942–43, 1944–45, 1945–46, 1947–48, 1952–53, 1953–54, 1955–56, 1960–61, 1961–62, 1963–64, 1970–71, 1972–73, 1973–74, 1977–78, 1979–80, 1981–82, 1989–90, 1994–95, 1995–96, 1996–97, 1997–98, 1998–99, 2000–01, 2001–02, 2011–12: 31 Runners Up: 1936–37, 1948–49, 1965–66, 1974–75, 1983–84, 1984–85, 1990–91, 1991–92, 1999–2000, 2004–05, 2006–07, 2007–08, 2008–09, 2012–13, 2013–14: 15 Anderson Cup Winners: 1922–23, 1924–25, 1933–34, 1934–35, 1937–38, 1942–43, 1945–46, 1946–47, 1951–52, 1953–54, 1954–55, 1955–56, 1957–58, 1959–60, 1960–61, 1963–64, 1975–76, 1979–80, 1980–81, 1986–87, 1993–94, 1995–96, 1996–97, 2007–08, 2018–19: 25 Runners Up: 1926–27, 1943–44, 1952–53, 1964–65, 1976–77, 1977–78, 2005–06, 2009–10, 2013–14, 2014–15 : 10 Notes In 1940–41 bad weather originally delayed the final and when a date was eventually arranged it was again cancelled following the Belfast Blitz. A number of Lisnagarvey players served as A.R.P.s and were unable to travel to the final. The Irish Hockey Union and Limerick PMYA both agreed the trophy should be shared. In 1961–62 Three Rock Rovers and Lisnagarvey shared the Irish Senior Cup after two replays. First game finished 2–2; Second game finished 1–1. In 1955–56 Lisnagarvey and Parkview shared the Kirk Cup. Women Irish Junior Cup Runners Up: 2010–11, 2012–13 References External links Lisnagarvey Hockey Club on Facebook Lisnagarvey Hockey Club on Twitter Category:Field hockey clubs in Northern Ireland Category:Men's Irish Hockey League teams Category:Sports clubs in County Down Category:1901 establishments in Ireland Category:Field hockey clubs established in 1901 Category:Civil parish of Hillsborough, County Down
{ "pile_set_name": "Wikipedia (en)" }
Q: Why do my components appear next to each other? I'm currently trying to create a program that moves a rectangle over a background image with keyboard keys. The problem I'm facing is that when I draw the components they are simply placed next to each other, instead of the square overlaying the background image. Here's the code to display both components:

JLayeredPane panel = new JLayeredPane();
panel.setLayout(new FlowLayout());
add(panel);
paintBackground pb = new paintBackground(bimg);
panel.add(pb, 1, 0);
paintPlayer cc = new paintPlayer(startX, startY);
panel.add(cc, 2, 0);
pack();
setVisible(true);

I believe the problem is that the paintPlayer component is set to full size, and there seems to be a background. The paintPlayer component code looks like this:

public Dimension getMinimumSize() {
    return new Dimension(800,600);
}

@Override
public Dimension getPreferredSize() {
    return new Dimension(800,600);
}

@Override
public Dimension getMaximumSize() {
    return new Dimension(800,600);
}

@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    g.setColor(Color.red);
    System.out.println(startX + startY);
    g.fillRect(startX, startY, 30, 30);
}

I've had a go at setting the component size to just the size of the rectangle, but that way I can't move the rectangle by using the first two values in fillRect. The background for the rest of the space filled by the component (800x600) seems to be opaque. When added, the components just display next to each other, like this: https://gyazo.com/57245c518e02778c36ffc89ba75d5a81. How do I go about adding the paintPlayer on top of the paintBackground, so that it only covers the rectangle on the background image? I've done a fair bit of searching but I can't seem to work it out. Perhaps something to do with the layout? One other thing I've noticed is that by doing this, neither the frame nor the pane benefits from a setBackground, as it's not visible. Cheers for any help. 
A: This is the default constructor of JLayeredPane:

public JLayeredPane() {
    setLayout(null);
}

As you can see, it normally uses absolute positioning (a null layout). And if you read the documentation: "Note: that these layers are simply a logical construct and LayoutManagers will affect all child components of this container without regard for layer settings." you should understand what is wrong: by calling setLayout(new FlowLayout()) you hand placement over to the layout manager, which lays the two components out side by side regardless of their layers. Check OverlapLayout.
{ "pile_set_name": "StackExchange" }
This is the slideshow I made from Tricia & Nick’s engagement session last November. We went to the playground where Nick proposed and did some photos in the tunnel there. Had to clear out all the kids playing and got some weird looks, but it all worked out well in the end. My favorites are toward the end at this outdoor art exhibit we found that had these white pillars scattered around. Also, their dog was fun and made me think dogs should be mandatory on engagement shoots because they make every photo better. Looking forward to the wedding later this month, and hope my readers enjoy the slideshow!

Kimberly & Kevin’s Wedding (17 Mar 2011)

Here’s Kimberly & Kevin’s wedding, start to finish! I loved the venue for this one (Art Foundry Loft). It made for some great naturally lit ceremony shots and some of my favorite portraits from last year. I wish every wedding was in a loft with massive windows. It’s a photographer’s dream… Congrats Kimberly & Kevin, and thanks for the awesome wedding!

Hope & Michael’s Wedding (2 Mar 2011)

Long winter – no blog posts : ( But to kick off Spring, here’s a wedding from late last year – Hope & Mike. At the awesome Georgian Terrace, this was one super nice wedding, filled with great people. The venue was amazingly beautiful, inside and out, and I shot with mainly natural light at the reception to capture the ambience. It’s nice when indoor lighting actually enhances the image, rather than detracting from it! Anyway, enough talk – enjoy the images! And thanks Hope & Mike for the great wedding!

1 Pic: Kimberly & Kevin (26 Jan 2011)

Time for some one pic action. This picture is from Kimberly & Kevin’s wedding last month. I loved the emotion in this shot. It was right after she got her dress on, finished makeup and everything and looked at herself for the first time with friends by her side. More pics from that awesome wedding to come!

Elizabeth & Seth’s Wedding (15 Jan 2011)

Elizabeth & Seth’s wedding took place at the Gardens at Great Oaks in Roswell in October. Amazing light during the ceremony! And overall a great wedding with lots of dancing, dad jamming with the band, fun with a spiral staircase, and a shot of the guys looking very sophisticated while playing chess and reading leather bound books. The Wedding Photojournalist Association (wpja.com) holds quarterly contests for all its members and in the last contest I won 9th place in the Humor category, thanks to Jessica and her awesome dancing seen here:

1 Pic: Twyla & Shaun (31 Jul 2010)

There were lots of good ones from this session but this was one of my favorites because I liked the pattern of lights on the ceiling and the windows behind them. Also their outfits made this shoot feel super classy!
{ "pile_set_name": "Pile-CC" }
Q: How to run an action when clicking on an appindicator I'm looking at writing a simple app indicator and I need it to update its information whenever it's clicked on to open the menu. Is there any kind of on_click action thing? Let me rephrase it then: How to perform an action (any action) when the user clicks on the appindicator to open its menu? A: An app indicator can only open its menu. It can't perform any other action, and your program doesn't get notified when the menu is displayed. You could either include some kind of "Update" menu item or find other events that trigger the update.
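As a minimal sketch of that "Update" menu-item workaround, using the GTK 3 / AppIndicator3 introspection bindings: the indicator id, icon name and `fetch_status()` below are made-up placeholders, not part of any real API.

```python
# Workaround sketch: since there is no "menu opened" signal, expose an
# explicit menu entry that refreshes the indicator's state on demand.

def fetch_status():
    """Placeholder for whatever information the indicator should show."""
    return "3 new items"

def build_menu_spec(on_update, on_quit):
    """Describe the menu as plain (label, callback) pairs."""
    return [("Update now", on_update), ("Quit", on_quit)]

def main():
    import gi
    gi.require_version("Gtk", "3.0")
    gi.require_version("AppIndicator3", "0.1")
    from gi.repository import Gtk, AppIndicator3

    indicator = AppIndicator3.Indicator.new(
        "example-indicator",        # app id (placeholder)
        "dialog-information",       # themed icon name
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)

    menu = Gtk.Menu()

    def on_update(_item):
        # Refresh on demand instead of reacting to the menu being opened.
        indicator.set_label(fetch_status(), "")

    def on_quit(_item):
        Gtk.main_quit()

    for label, callback in build_menu_spec(on_update, on_quit):
        item = Gtk.MenuItem(label=label)
        item.connect("activate", callback)
        menu.append(item)
    menu.show_all()
    indicator.set_menu(menu)
    Gtk.main()
```

Calling `main()` in a desktop session shows the indicator; choosing "Update now" re-reads the state. When no user-triggered event is available, a periodic `GLib.timeout_add_seconds` timer is the usual alternative.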
{ "pile_set_name": "StackExchange" }
Introduction {#Sec1}
============

Sustainable growth in the Atlantic salmon (*Salmo salar*) aquaculture sector depends on good fish health and welfare. Currently, low-cost open sea cages are predominantly used for on-growth of salmon. However, there are concerns related to salmon lice (*Lepeophtheirus salmonis*), escapees, nutrient discharge and fish mortalities^[@CR1]^. Development of semi-closed containment technologies (S-CCS) in sea and closed containment systems (CCS) in land-based facilities are promising strategies aiming to solve these problems and provide further expansion of the Atlantic salmon production in Norway^[@CR2]^. CCS and S-CCS are mainly intended for the production of post-smolt during a limited period after seawater transfer. To date, no regulations exist for maximum fish density in CCS^[@CR3]^. In contrast, the maximum allowed fish density in sea cages is 25 kg/m^3^. Crowding and high fish densities may weaken the skin and increase the risk of mechanical damage^[@CR4]--[@CR8]^. Damage to the skin may threaten the barrier function of the fish, resulting in reduced animal welfare^[@CR9]^. If the skin is severely wounded (epidermal and dermal damage), a well-conserved wound healing cascade is activated in order to restore tissue integrity. The wound healing cascade is initiated by the re-epithelialization processes, accompanied by inflammation and later onset of tissue repair and remodeling^[@CR10]^. In salmonids, re-epithelialization is initiated immediately upon wounding^[@CR11],[@CR12]^, while inflammation lasts more than two weeks, accompanied by tissue repair which may be active more than 100 days post wounding^[@CR13]^. Cutaneous diseases and wounds are common for many farmed fish species^[@CR14],[@CR15]^. Hence, there have been a few studies reporting effects of environmental factors, hormones and dietary components on the healing rate of deep cutaneous wounds. 
In Atlantic salmon, low water temperature results in delayed epidermal repair^[@CR11],[@CR16]^, while the stress hormone cortisol delays the dermal repair processes^[@CR12]^. In contrast, dietary intake of zinc enhanced epidermal repair in Atlantic salmon^[@CR16]^, while the dermal repair was promoted in Rainbow trout (*Oncorhynchus mykiss*) with high dietary levels of vitamin C^[@CR17]^. Other factors such as therapeutics and immunostimulants may also affect the wound healing rate in fish^[@CR18]--[@CR20]^. To the best of our knowledge, there is no study investigating the relationship between densities and deep cutaneous wound healing in fish. The present study was designed to test the hypothesis that high fish density delays wound repair in post-smolt Atlantic salmon. The fish were wounded with a 5 mm biopsy needle and stocked at two different densities, a high fish density treatment (HFD) (100 kg/m^3^) and a low fish density treatment (20 kg/m^3^) serving as control. A 15k oligonucleotide array was used to observe changes in gene transcripts, while histology and photography were used to assess changes in wound morphology and contraction. Overall, our results show that HFD induces prolonged activation of inflammation and transient repression of tissue repair, which results in alterations in wound contraction.

Results and Discussion {#Sec2}
======================

Cortisol levels {#Sec3}
---------------

The HFD treatment did not affect mortality. The overall mortality rate was low, \<1% in the control and \<5% in the HFD treatment. Similarly, Calabrese *et al*. 2017 did not observe increased mortality in fish reared at high densities (25--125 kg/m^3^). Significant differences in plasma cortisol levels (2-way ANOVA, p \< 0.001) were found both between time points and treatments (Fig. [1](#Fig1){ref-type="fig"}). However, the post-hoc analysis only showed significant differences between groups at 43 days post wounding (dpw). 
Similar observations were made in our previous experiment investigating animal welfare and the effect of five different fish densities (25, 50, 75, 100 and 125 kg/m^3^)^[@CR4],[@CR5]^. In that study, plasma cortisol levels peaked in the intermediate density treatment (75 kg/m^3^) after four weeks, whereas plasma cortisol levels in the highest density treatment (125 kg/m^3^) peaked after eight weeks. Other studies have also demonstrated that fish exposed to crowding have a limited cortisol response. Basal levels of plasma cortisol in unstressed salmonid fish are normally in the range 0--5 ng/mL, while crowding resulted in an elevation of plasma cortisol to 10 ng/mL^[@CR21]^. These results suggest that cortisol as a sole indicator of animal performance during long-term intensive rearing conditions may be misleading, and other parameters should be included in order to assess animal welfare.Figure 1Plasma cortisol response to the treatment. The bars show mean plasma cortisol levels and error bars SEM, HFD (black bars) and control (white bars). A significant difference in plasma cortisol levels between time points and treatments was indicated by 2-way ANOVA (p \< 0.001). Differences between groups (for each time point, Tukey *post-hoc* test) are indicated with a star (p \< 0.05). N = 12 for treatment and time point. Wound contraction {#Sec4} ----------------- To investigate how HFD affected the overall wound morphology and wound contraction, the length, width, total wound area and non-pigmented inner area of the wounds were measured. The results showed that HFD altered all measured parameters, except the total wound area (Figs [2](#Fig2){ref-type="fig"} and [3](#Fig3){ref-type="fig"}). As a measure of wound morphology, the length/width (l/w) ratios of the wounds were calculated. Wounds from the control fish had a higher l/w ratio (p \< 0.01) compared to the wounds of HFD treated fish from 36 dpw and onward.
Thus, wounds from the HFD treatment were contracting in a more circular manner compared to control wounds. Differences were also observed in the inner non-pigmented wound area, which was larger at the last three sampling points in the HFD treated samples. Fish weight and wound position did not have any significant impact on wound contraction (ANOVA, p-value \> 0.05, data not presented).Figure 2Wound measurements and body weight. Comparison of body weight, wound width, wound length and length/width ratios of the wounds. Solid bars represent the group mean and error bars SEM. 2-way ANOVA indicated significant differences between groups (p \< 0.001) and time points (p \< 0.001) for all measurements. Lower-case letters mark differences between groups (Tukey *post-hoc* test). Groups which do not share a letter were significantly different (p \< 0.05). N = 12 for treatment and time point.Figure 3Wound morphology and wound contraction. (**a**) Representative photos of wound development at 36, 43 and 57 dpw in HFD and control treated fish. (**b**) The graphics present the total wound area as an elliptic figure. The length (mm) and width (mm) measurements are indicated in the figure. The solid lines are the mean length (L) and width (W) while the dashed lines indicate SEM. The white circle represents the whole wound area, the blue circle represents the inner non-pigmented area (NP). Significance levels of pairwise comparisons of wounds in the same position at each sampling point are indicated with stars (p-value \*0.05, \*\*0.01, \*\*\*0.001) according to the Kruskal-Wallis rank test. N = 12 for treatment and time point. Microarray {#Sec5} ---------- To better understand the molecular processes behind the alterations in wound contraction, the transcriptomic response in the wounds was measured with a 15k oligonucleotide microarray.
The effect of HFD was strongest at 3 and 7 dpw, with 254 differentially expressed genes (DEG) at 3 dpw, and 206 DEG at 7 dpw (Table [1](#Tab1){ref-type="table"}). It should be mentioned that the general transcriptomic profiles in the two treatments were similar. None of the DEG changed direction, up vs. down, because of HFD. The effect of HFD was therefore only on the magnitude of transcription.Table 1High fish density changes the transcriptional response in the wounds.

| dpw | 0  | 1  | 3   | 7   | 14  | 36 | 43 | 57 |
|-----|----|----|-----|-----|-----|----|----|----|
| DEG | 14 | 58 | 254 | 206 | 140 | 46 | 72 | 41 |

The table shows the total number of differentially expressed genes (DEG), HFD -- control, at 0--57 days post wounding (dpw). Day 0 represents intact skin. Genes with p \< 0.05 and \|log~2~FC\| \> 0.8 (fold change 1.75) were considered significantly different. Clustering of DEG with known roles (N = 652) was performed for functional interrogation of transcriptomic differences between the two treatments (Fig. [4a](#Fig4){ref-type="fig"}). The majority of genes in the first cluster were downregulated by HFD during the first two weeks of the experiment. Most of these genes were involved in secretion, DNA replication and immunity (acute phase, chemokines and immunoglobulins); genes encoding components of mucus and collagens were also found in this cluster (Fig. [4b](#Fig4){ref-type="fig"}). Genes in the second cluster were in general downregulated by HFD during the whole experimental period. Genes in this cluster were involved in secretion and exocytosis. Genes within the third cluster were in general enhanced by HFD. Most of these genes were involved in immune functions such as eicosanoids, lectins, proteases, cytokines and chemokines. Overall, the cluster analyses indicate that HFD in general enhanced immune responses while tissue regeneration was repressed during the first two weeks after wounding.Figure 4Transcriptomic responses to HFD. (**a**) The heat map shows the transcription profile of 652 differentially expressed genes (DEG).
The colors represent log~2~FC of HFD vs. control samples. Red color indicates enhanced transcription in HFD samples whereas blue color represents repressed transcription. To the right, three clusters were drawn based on the transcriptional profiles of the DEG. The transcription profile for each gene within the respective cluster is presented as a thin grey line; blue lines represent the average within the cluster. (**b**) The plot shows the enrichment results for functional categories found within each of the three clusters (cluster 1, 2 and 3, respectively). The sizes of the dots indicate the Fisher-test p-values (0.05, 0.01 and 0.001), and the color indicates enrichment category: blue for "cell", red for "immune" and green for "tissue". N = 5 for treatment and time point. Inflammation {#Sec6} ------------ As suggested by the cluster analysis, a wide range of immune genes were up-regulated in the HFD group, including eicosanoids, lectins, proteases, cytokines and chemokines (Fig. [5](#Fig5){ref-type="fig"}). Most of the inflammatory genes showed a transient response to HFD, with enhanced transcription levels at 3 and/or 7 dpw. This included multiple genes involved in the metabolic pathway of leukotriene B4 (*cytosolic phospholipase A2*, *prostaglandin G/H synthase*, *arachidonate 5-lipoxygenase*, *lipoxygenase 3*, *leukotriene A4 hydrolase*, *cytochrome P450 4F3*, *leukotriene B4 receptor 1*). Leukotriene B4 is known to be a major product of activated neutrophils and macrophages, with the ability to recruit and activate a range of immune effector cells^[@CR22]^.Figure 5Selected genes regulated by HFD. The plot to the left shows an overview of selected genes with immune functions, the right plot shows selected genes involved in tissue repair. Red color indicates up-regulation and blue color down-regulation relative to control samples.
Genes with a p-value \< 0.05 and \|log~2~FC\| \> 0.8 were considered significantly different and are indicated by their log~2~FC. N = 5 for treatment and time point. Transcription of several proteases was enhanced by HFD at multiple time points. *Matrix metalloproteinases* 9 and 13 (*mmp9* and *mmp13*) showed a four-fold higher transcription during the first two weeks of the experiment (Fig. [5](#Fig5){ref-type="fig"}). These proteases were the immune genes most strongly and most persistently induced by the HFD treatment. Matrix metalloproteinases are secreted both by keratocytes and macrophages, and they are essential components of several wound healing processes (Schultz *et al*., 2005). Since *mmp9*, *mmp13* and *leukotriene B4* are produced by leukocytes, these results may indicate stronger recruitment of inflammatory cells in the HFD wounds. In this context, it is relevant to mention that the transition from inflammation to tissue regeneration is dependent on matrix metalloproteinase activity^[@CR23]--[@CR25]^. Enhanced proteolytic activity, in particular of *mmp9* and *mmp13*, is reported as a key factor causing chronic and delayed wound healing in mammals^[@CR26],[@CR27]^. Tissue repair {#Sec7} ------------- Transcription of genes involved in tissue repair was temporarily repressed by HFD. Several genes involved in DNA replication were repressed at 3 dpw (Fig. [5](#Fig5){ref-type="fig"}), including *proliferating cell nuclear antigen* (*PCNA*). At 1, 3 and 7 dpw, several collagens were down-regulated by HFD, and this effect was further enhanced at 14 dpw with downregulation of multiple collagens, growth factors and mitochondrial genes. This transcriptional response shows several similarities with our previous study, where a panel of collagen genes and growth factors were down-regulated in the skin of cortisol-injected Atlantic salmon^[@CR28]^. Several genes involved in secretory functions and mucus responses were also dampened by HFD.
The genes most strongly down-regulated by HFD were several transcripts of *zymogen granule membrane protein 16*. In mammals, this protein is found in mucous-secreting cells of the digestive system^[@CR29]^, and it is therefore likely to be involved in mucous secretion in fish skin. Glycosyltransferases, which are involved in glycosylation of mucins, and two *giant mucus proteins* were also down-regulated at 7 dpw in the HFD treatment. Further, our results indicate that transcription of *muc5ac.1* was affected by treatment (p \< 0.05), with lower transcription in the HFD treated fish at six out of seven time points (Fig. [6](#Fig6){ref-type="fig"}). Transcription of *muc5b* and *muc5ac.2* changed during the healing process, but was not affected by treatment. These results contrast with our previous findings, where high biomass led to increased mucin transcription in the skin^[@CR4],[@CR30]^. However, acute handling stress had the opposite effect, reducing mucin transcription^[@CR30]^. Synthesis and secretion of large amounts of high molecular weight proteins with heavy glycosylation represent a significant metabolic commitment of the cell^[@CR31],[@CR32]^. Hence, the observed reduction in transcripts related to mucus production in the HFD treatment may be an allostatic response to a challenging environment.Figure 6HFD alters mucin transcription. The bars show the mean transcription levels of three measured mucin genes and error bars ± SEM. Fold changes are relative to mean values of intact skin (n = 10) and log~2~ transformed. The HFD treatment is represented by black bars and the control treatment by white bars. Lower-case letters mark differences between groups (two-way ANOVA, Tukey *post-hoc* test). Groups which do not share a letter were significantly different (p \< 0.05). N = 5 for treatment and time point.
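The mucin levels in Fig. 6 are expressed as log~2~ fold changes relative to the mean expression in intact skin (n = 10). A minimal sketch of this normalization step is given below; the function name and all numeric values are illustrative assumptions, not the study's data:

```python
import math

def log2_fold_change(sample_expr, baseline_exprs):
    """Log2 ratio of one sample's relative expression to the mean of a
    baseline group (here: mean expression in intact skin)."""
    baseline_mean = sum(baseline_exprs) / len(baseline_exprs)
    return math.log2(sample_expr / baseline_mean)

# Hypothetical relative expression values (not the study's measurements):
intact_skin = [1.0, 1.2, 0.9, 1.1, 0.8, 1.0, 1.1, 0.9, 1.0, 1.0]  # n = 10 baseline fish
hfd_wound = 0.5  # one hypothetical muc5ac.1 measurement in an HFD wound

fc = log2_fold_change(hfd_wound, intact_skin)
# negative value -> lower transcription than intact skin
```

A negative value indicates transcription below the intact-skin baseline, which is the pattern reported for *muc5ac.1* in the HFD group.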
Histology {#Sec8} --------- In concordance with the wound measurements and the transcriptional results, histological analysis of tissue samples further confirmed that HFD resulted in delayed epidermal repair, scale mineralization and dermis formation. Epidermal repair {#Sec9} ---------------- We observed a clear effect of the treatment on the epidermal layer at 3 dpw (Fig. [7](#Fig7){ref-type="fig"}). Severe epidermal spacing was observed in five out of six HFD samples, compared to one out of five control samples (Supplementary File [1](#MOESM1){ref-type="media"}). The epidermal spaces were caused by keratocytes with extended pseudopods, resulting in extracellular spaces in the epidermis (Fig. [7d--f](#Fig7){ref-type="fig"}). Immunohistochemistry with PCNA further showed that cell proliferation at 3 dpw was mainly located in the epidermal layer (Fig. [7g,h](#Fig7){ref-type="fig"}). The observations also suggest less proliferative activity in the epidermis in the HFD treatment, a finding also supported by the transcriptional results (Fig. [5](#Fig5){ref-type="fig"}). The observed epithelial spacing at 3 dpw could be a side effect of reduced epidermal cell proliferation. Another factor that may have contributed to the reduced epidermal repair could be enhanced protease activity in the wounds. In murine models, excessive protease activity provoked type-IV collagen degradation and resulted in delayed epithelial migration^[@CR26]^.Figure 7HFD and temperature stress induce epidermal spacing. (**a**--**c**) Dense epidermal layer at 3 dpw in the control treatment. The same picture is displayed at three different magnifications (10, 40 and 60×). Note the extension of small pseudopods in c. (**d**--**f**) The epidermal layer at 3 dpw in the HFD treatment. The same picture is displayed at three different magnifications (10, 40 and 60×). Note the long extended pseudopods in f. (**g**) Numerous dividing cells (PCNA+) were found in the epidermal layer of control fish.
(**h**) PCNA+ cells in the epidermal layer in the HFD treatment. (**i**--**k**) Primary cell cultures of fish keratocytes incubated at three different temperatures: 4 °C, 12 °C and 16 °C. Symbols: epidermis (e), basement membrane (bm), scale (sc), black arrows (pseudopods), white arrows (PCNA+ cells). Hematoxylin and eosin stained tissue sections (**a**--**f**), N = 5 control samples, N = 6 HFD samples. Immunohistochemistry with PCNA, nuclei of dividing cells stain brown (**g**,**h**), N = 3 for both treatments. Cell culture experiment, N = 9. Similar observations of epidermal spacing have also been reported in Atlantic salmon with 5 mm punch biopsy wounds reared at low temperatures (4 °C)^[@CR16]^. In order to further investigate the relationship between the environment and the morphology of the keratocytes, a cell culture experiment with primary keratocytes was performed. The stress factors in this experiment were low (4 °C) and high (16 °C) temperatures. The results showed that keratocytes cultured at both high and low temperatures had longer pseudopods compared to the control cells (12 °C) (Fig. [7i--k](#Fig7){ref-type="fig"}). This morphology was similar to that of the epidermal cells at 3 dpw in the HFD treated fish (Fig. [7d--f](#Fig7){ref-type="fig"}). It is known that the mitotic rate of keratocytes is temperature dependent^[@CR11]^; however, it is unclear whether the observed epidermal spacing emerged due to reduced cell proliferation in the wounds or as a response to the environment. Mucus response {#Sec10} -------------- A clear difference in the mucus response between the control and HFD treatments was observed at 7 dpw. At this time point there were less mucus and fewer mucous cells on HFD samples (Fig. [8](#Fig8){ref-type="fig"}). The average mucus score of the wounds from the HFD treatment was 1.7 (SEM ± 0.34) while the average score of the control samples was 3 (SEM ± 0.31) (Supplementary File [1](#MOESM1){ref-type="media"}).
Combined with the transcriptional results showing down-regulation of mucins, zymogens, transferases and giant mucus proteins (Figs [5](#Fig5){ref-type="fig"} and [6](#Fig6){ref-type="fig"}), these findings strongly suggest that HFD dampens the mucus response at 7 dpw. The poor mucus response may be an indirect effect of the delayed organization of the epidermal layer at 3 dpw (Fig. [7](#Fig7){ref-type="fig"}). The rapid re-epithelialization process, followed by formation of a neo-epidermis and a mucus plug, is believed to be essential in order to protect the healing wound^[@CR16],[@CR33]^. Here we show that increased epidermal spacing and reduced mucus production are early consequences of HFD, which in turn may affect the ability of the wound to withstand secondary infections.Figure 8Mucus response in the HFD treatment 7 days post wounding. (**a**--**c**) The mucus response in the control samples at 7 dpw. (**d**--**f**) The mucus response in the HFD treatment at 7 dpw. Each photo represents one individual and the mean mucus score is indicated. The tissue sections (5 µm) were stained with periodic acid-Schiff to detect mucus and mucous cells (pink color). N = 5 for each treatment. Scale formation {#Sec11} --------------- At 14 dpw, formation of new scales at the wound margins was observed in all the analyzed samples (Fig. [9a,d](#Fig9){ref-type="fig"}). In the HFD treatment, none of the scales contained mineralized matrix. In contrast, three out of five control samples contained scales with mineralized matrix. This trend continued at 36 dpw, with weaker staining of both the right and left scales in the HFD treated fish (Fig. [9b,c,e,f](#Fig9){ref-type="fig"}). As fish scales consist of an upper mineralized layer of hydroxyapatite and a lower fibrous layer of un-mineralized matrix and collagen fibers^[@CR34]^, delays in collagen transcription, as shown by the array results (Fig. [5](#Fig5){ref-type="fig"}), may cause a delay in scale formation and mineralization.
In the European eel (*Anguilla anguilla*), vertebral bone demineralization has been observed after chronic cortisol treatment^[@CR35]^. Further, Atlantic salmon vertebrae are less mineralized when exposed to increased temperature stress, such as 16 °C compared to 12 °C^[@CR36]^. The same has also been shown for salmon mesenchymal stem cells differentiating to bone cells *in vitro*^[@CR37]^. Overall, HFD may have a negative effect on scale development and mineralization, which may impair the mechanical barrier of the fish.Figure 9HFD delays scale mineralization. (**a**) Control sample at 14 days post wounding (dpw) with mineralized matrix (red color) in a newly formed scale. (**b**,**c**) Mineralized collagen plates from two different individuals, control samples at 36 dpw. Degree of mineralization is indicated as high or low. (**d**) HFD treated sample with newly formed scale. Note that the matrix does not stain red with Alizarin red. (**e**,**f**) Mineralized collagen plates from two different individuals, HFD samples at 36 dpw. 5 µm tissue sections stained with Alizarin red, N = 6 at 14 dpw, N = 3 at 36 dpw for each treatment. Fibrous repair and pigmentation {#Sec12} ------------------------------- Fibrous repair and restoration of skin pigmentation were delayed by HFD (Fig. [10](#Fig10){ref-type="fig"}). In normal fish skin, the pigment cells are localized below the epidermal and dermal layers (Fig. [10](#Fig10){ref-type="fig"}). At 36 dpw, four out of six control samples and only one out of six HFD samples had a layer of melanocytes below the epidermal layer (Fig. [10b,g](#Fig10){ref-type="fig"}). At 43 dpw, five out of six control samples had pigment cells organized in two layers, under the epidermal and the dermal layer, while none of the HFD samples had this organization (Fig. [10c,h](#Fig10){ref-type="fig"}). These data support our finding that wounds from HFD treated fish retain a larger non-pigmented area in the wound center (Fig. [3](#Fig3){ref-type="fig"}).
Furthermore, the dermal layer looked more organized in the control samples; thus, the pigment cells appear to follow the formation of connective tissue. At 57 dpw all samples had melanocytes organized beneath the epidermal and dermal layers, suggesting that tissue repair in the HFD treated fish was catching up with the control (Fig. [10d,i](#Fig10){ref-type="fig"}). The transcriptomic results also support this finding, with higher collagen transcription in wounds from HFD treated fish at 43 dpw (Fig. [5](#Fig5){ref-type="fig"}).Figure 10Delayed formation of pigmentation and dense connective tissue. Representative photos of unstained tissue samples from control and HFD treatment, 14--57 days post wounding. (**a**--**d**) Healing wounds from control fish. (**f**--**i**) Healing wounds from HFD treated fish. Epidermis (e), dermis (d), inflammation (i), granulation tissue (gr). Arrows point at pigmentation beneath the epidermal layer, and beneath the dense connective tissue. Stereoscope pictures (40×), N = 6 for each treatment and time point. Cortisol treatment in Atlantic salmon and zebrafish (*Danio rerio*)^[@CR10],[@CR12]^, and chronic stress in mice and humans, are associated with reduced dermal repair and delayed wound contraction^[@CR38]--[@CR43]^. Given the results presented in this article, we believe that chronic stress and the associated physiological responses are the best explanation for the delayed wound healing in the HFD treatment. Conclusion {#Sec13} ========== The results presented in this article show that HFD interferes with the wound healing capacity of Atlantic salmon. At the transcriptional level, the HFD treatment enhances inflammatory reactions in the wound while repressing tissue repair (cell proliferation, tissue secretion, and collagen production) (Fig. [11](#Fig11){ref-type="fig"}).
The observed alterations in gene transcripts had a lag time, manifesting themselves in the morphological appearance of the wounds at later time points. These morphological differences included poor epidermal organization, delayed scale mineralization, delayed formation of fibrous tissue and altered wound contraction. Combined, our findings suggest that HFD interferes with the wound healing capacity of the fish, resulting in delayed epidermal and dermal repair.Figure 11Summary of events that are altered by HFD in the healing wounds of Atlantic salmon. Inflammation and tissue repair were the two dominating transcriptional responses to wounding. In general, HFD enhanced transcription of genes related to diverse inflammatory responses, while tissue repair was repressed at most time points. This resulted in several transient morphological changes in the wound and permanent alterations in wound contraction. Materials and Methods {#Sec14} ===================== Fish stock, rearing conditions and sampling procedure {#Sec15} ----------------------------------------------------- This study was carried out at the Industrial Laboratory (ILAB, Bergen, Norway) from November 4^th^ 2014 to January 30^th^ 2015. Smolts (mean size 80 g) were distributed randomly in two 500 L tanks. The fish were stocked at 20 kg/m^3^ (N = 125) in the control treatment and at 100 kg/m^3^ (N = 625) in the HFD treatment. From the 4^th^ to the 6^th^ of November the fresh water in each tank was gradually replaced with seawater. The specific water flow was adjusted to 5 L/min in the control tank and 25 L/min in the HFD tank, corresponding to 0.5 L/kg/min at the start of the experiment. Water velocity (10 cm/sec) in each tank was kept stable and equal by adjusting the angle of the inlet water pipe. The oxygen level in the outlet water was kept above 80% saturation by automatic oxygenation of the water in the header tanks (Oxyguard Commander).
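The specific flow of 0.5 L/kg/min follows directly from the tank volume and the stocking densities; a quick arithmetic check (function and variable names are ours, for illustration only):

```python
def specific_flow(flow_l_per_min, density_kg_per_m3, tank_volume_m3):
    """Water flow per kilogram of fish (L/kg/min)."""
    biomass_kg = density_kg_per_m3 * tank_volume_m3
    return flow_l_per_min / biomass_kg

TANK_VOLUME_M3 = 0.5  # 500 L tanks

control = specific_flow(5.0, 20.0, TANK_VOLUME_M3)   # 5 L/min over 10 kg
hfd = specific_flow(25.0, 100.0, TANK_VOLUME_M3)     # 25 L/min over 50 kg
# both evaluate to 0.5 L/kg/min, matching the text
```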
Both temperature (ranging from 9.4--10 °C) and oxygen saturation were measured once daily (YSI 550, Xylem Inc., Yellow Springs, USA). Following transfer to full strength seawater the fish were exposed to a 12 h light:12 h dark regime. The fish were fed a commercial dry diet (EWOS, size 2--3 mm, Oslo, Norway) in 10% excess throughout the study. The main experimental period lasted from the 28^th^ of November 2014 until the 30^th^ of January 2015. The biomass was not adjusted, in order to avoid adding additional stressors to the experiment. As a result, fish growth in the HFD treatment exceeded the biomass removed at sampling, causing the biomass to increase over the time course of the study. The fish density in the HFD group ranged from 116 to 146 kg/m^3^, with a mean fish density of 126 kg/m^3^. Fish growth in the control treatment did not compensate for the biomass lost at sampling; thus, the fish density was reduced over time, ranging from 6 to 22 kg/m^3^ with a mean fish density of 14 kg/m^3^. Four weeks after transfer to seawater, three biopsies (N = 90 per treatment) were excised with a 5 mm biopsy needle (Integra^TM^Miltex^TM^) as described by several authors^[@CR16],[@CR44]^. Prior to wounding the fish were fully anesthetized with MS-222 (Sigma-Aldrich). Skin samples were taken at 1, 3, 7, 14, 36, 43 and 57 dpw (N = 12 for time point and treatment). Fish for sampling were killed with an overdose of anesthetic (MS-222). Individual fish were weighed (g) and their length measured (cm). Blood was sampled with a heparinized syringe (Omnifix-F) from the caudal blood vessels and centrifuged (10 min at 4 °C and 4000 rpm). Plasma was stored at −80 °C for further analysis. Skin samples were collected from a standardized 1 cm^2^ area around each wound. Samples for gene transcriptional analyses were snap frozen in liquid nitrogen and transferred to −80 °C for storage.
The skin samples were fixed in 4% paraformaldehyde solution (Electron Microscopy Sciences) overnight and then washed in 1× PBS (Sigma-Aldrich), before stepwise dehydration to 70% ethanol and transfer to −20 °C for storage. *In vitro* study, primary cell culture {#Sec16} -------------------------------------- In order to investigate the effect of temperature stress on keratocyte morphology, an *in vitro* study with primary cell cultures was established. Keratocytes were cultured from scale explants, a method modified from previous work^[@CR45],[@CR46]^. In brief, Atlantic salmon (N = 9, weight \~500 g) were killed by a blow to the head and transported from the rearing site (NIVA Research Station, Solbergstrand, Norway) to Nofima's research facilities (Aas, Norway) in transport tanks with seawater. Single scales were picked (using forceps) and placed in 6 well tissue culture plates (Falcon Multiwell™, Becton Dickinson, NJ, USA) containing L-15 supplemented with 10% fetal bovine serum (FBS) (Sigma), 25 μg amphotericin B, 10 mL/L antibiotics (Sigma), 10 mL/L antimycotics (Sigma) and 0.01 M HEPES (Sigma). Each well contained three scales, and for each fish three plates were used. Each plate was cultured at one of three different rearing temperatures: control (12 °C), low temperature (4 °C) or high temperature (16 °C). The temperatures were chosen based on the optimum (12--14 °C) rearing temperature for Atlantic salmon, as both lower and higher temperatures are associated with reduced fish growth^[@CR47],[@CR48]^. After four days the cells were microscopically analyzed (Leica). Cortisol analysis {#Sec17} ----------------- A direct enzyme immunoassay (EIA) was used to measure plasma cortisol^[@CR49]^. Samples were added to 96 well plates coated with rabbit anti-cortisol (Cat\# 20-CR50, Fitzgerald Ind. Int'l, North Acton, MA, USA; diluted 1:30000).
Color development was measured at 650 nm with an automatic plate reader (Sunrise Basic™, software: Magellan™ V6.5, Tecan Group Ltd, Männedorf, Switzerland). Maximum binding (B0 = 150 µl EIA + 100 µl cortisol--HRP conjugate) and non-specific binding (NSB = 150 µl EIA − 100 µl cortisol--HRP conjugate) were determined. All standards were run in triplicate and samples in duplicate. Photography {#Sec18} ----------- Photographs were taken with a Cyber-shot DSC-RX100 (Sony) with an internal calibration standard in each picture. The length, width, non-pigmented area and total wound area were measured with Image J (Image J. Inc), (N = 6 at 1--7 dpw and N = 12 at 14--57 dpw for each treatment). Histology {#Sec19} --------- Skin samples for histology were embedded in paraffin using the program 70%, 96% and 3× 100% ethanol, 3× xylene and 2× paraffin, with a total duration of 10 h (Leica TP1020). Following embedding, samples were cut into 5 µm sections. All sections were stained with haematoxylin-eosin (Sigma-Aldrich) in an automatic staining machine (Leica Autostainer XL). Staining with periodic acid-Schiff was done by oxidizing the sections in 0.5% periodic acid solution (Sigma-Aldrich) for 5 min, followed by Schiff reagent (Merck) for 15 min, and counterstaining in Mayer's hematoxylin (Sigma-Aldrich). Staining with alizarin red (Sigma-Aldrich) was done in a solution of 2 g alizarin red in 100 mL dH~2~O, pH 4.3, for 2 min. Samples were dehydrated in an increasing alcohol gradient (50--100%) and cleared in xylene. The slides were mounted with a fully automated glass coverslipper (Leica CV5030). Staining of proliferating cell nuclear antigen (PCNA) was done with mouse anti-PCNA IgG2a (Millipore) and the VECTASTAIN® ABC-HRP kit, anti-mouse IgG (Vector Laboratories), according to the manufacturer's instructions (N = 3).
Three independent researchers analyzed the samples blind for mucus, epithelial spacing and scale mineralization; all scores may be found in Supplementary File [1](#MOESM1){ref-type="media"}. The amount of mucus present on the samples was scored in a 10× magnification area on a scale from 0--4. Score values were defined as 0 (no mucous cells), 1 (less than 15 mucous cells), 2 (more than 15 mucous cells partly forming a continuous layer), 3 (one continuous layer of mucous cells) and 4 (two continuous layers of mucous cells). Epidermal spacing was scored on a scale from 0--3 in a 5× magnification area, with the following score values: 0 (normal epidermis with no spacing between keratocytes), 1 (little epidermal spacing), 2 (occurrence of epidermal spacing) and 3 (severe epidermal spacing). Since quantification of alizarin in sections is complicated, alizarin-stained scales with the strongest staining were given the maximum score ("high"), while those with less staining were classified as "low". RNA extraction {#Sec20} -------------- Frozen skin sections with wounds were cut in half by a diagonal section, the cut was repeated, and approximately one-fourth of each wound was transferred directly to 1 mL TRIzol (Thermo Fisher Scientific). Samples were homogenized in a Precellys®24 homogenizer. RNA was extracted from the homogenized tissues using the PureLink™ Pro 96 well purification kit (Thermo Fisher Scientific) with on-column DNase (Qiagen) digestion according to the protocol for TRIzol-homogenized samples. The concentration of extracted total RNA was measured with a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific) and RNA integrity was determined with an Agilent 2100 Bioanalyzer with RNA Nano kits (Agilent Technologies). Samples with an RNA integrity number (RIN) of 8 or higher were accepted.
Microarray {#Sec21} ---------- Analyses were performed with Nofima's Atlantic salmon DNA oligonucleotide microarray SIQ-6 (custom design, GPL16555) containing 15 K probes of genes selected by annotations and expression profiles. Microarrays were fabricated by Agilent Technologies; all reagents and equipment were purchased from the same source. All kits were used according to the manufacturer's protocols. In brief, RNA amplification and labelling with Cy3 were performed with Low Input Quick Amp Labeling Kits (200 ng of total RNA per reaction), and Gene Expression Hybridization Kits were used for fragmentation of labelled RNA and preparation of the hybridization setup. Microarrays were hybridized for 17 h in a hybridization oven at 65 °C and a rotation speed of 10 rpm, washed for one minute with Gene Expression Wash Buffer I at room temperature, and for one minute with Gene Expression Wash Buffer II at 37 °C. Washed slides were scanned with an Agilent SureScan Microarray scanner. Nofima's bioinformatics package STARS^[@CR50]^ was used for data processing and mining. Five replicates per group and time point were included in the analyses, plus four biological replicates per group from intact skin; in total, 78 arrays were used. qPCR {#Sec22} ---- Synthesis of cDNA was performed on 500 ng RNA with the SuperScript® VILO cDNA Synthesis Kit and Master Mix (Thermo Fisher Scientific). qPCR was performed in duplicate in 384-well optical plates on a QuantStudio 5-384w (Applied Biosystems) in default "fast mode". Each well had a final reaction volume of 10 μl (5 μl PowerUp™ SYBR™ Green Master Mix (Applied Biosystems), 4 μl of 1:10 diluted cDNA, and 0.5 μl each of 10 µM forward and reverse primers). Quantification cycle (Ct) values were calculated using the second derivative method in the QuantStudio™ Design and Analysis Software v1.4.3. The efficiency of the RT-qPCR reactions was estimated for all primer pairs by an eight-step 2-fold dilution series.
RT-qPCR primers for *muc5ac.1*, *muc5ac.2/4* and *muc5b* and the housekeeping genes *elf1a* and *etif3* were as described by^[@CR30]^. The two reference genes were evaluated for stability using the web-based comprehensive tool RefFinder, which integrates the computational programs geNorm, Normfinder, BestKeeper and the comparative delta-Ct method^[@CR51]^. *etif3* and *elf1a* obtained similar stability scores, while the mean of the two genes was most stable and was therefore used in the normalization procedure. Five replicates per group and time-point were included in the analyses. Relative expression ratios to mean expression levels in intact skin (N = 10, equal amounts of control and HFD) were calculated and the data were log~2~-transformed before analysis and plotting.

Statistics {#Sec23}
----------

Data analyses were performed in R (version 3.3.1, [www.r-project.org](http://www.r-project.org)). Data series were tested for normal distribution (Shapiro-Wilk normality test, R function shapiro.test()). If the test was passed (p-value \> 0.05), data were analyzed by ANOVA (R function aov()) and, in case significant differences were found (p-value \< 0.05), a Tukey *post-hoc* test was calculated (R function TukeyHSD()). If the normality test was not passed, Kruskal-Wallis rank tests (R function kruskal.test()) were used. For evaluations of microarray results, the differentially expressed genes (HFD-control) were selected by the following criteria: \|log~2~-Expression Ratio\| \> 0.8 (1.75-fold) and p \< 0.05. A complete list of DEGs, their gene identifiers and their respective STARS categories^[@CR52]^ can be found in Supplementary File [1](#MOESM1){ref-type="media"}. The Euclidean distances were calculated, and the complete linkage clustering was drawn as a heat map. The dendrogram was pruned in order to identify 3 clearly defined sub-clusters. For each cluster, one-tailed Fisher tests for significant over-representation of functional categories (STARS) were run.
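The normalization and DEG-selection rules above can be sketched as follows. The delta-delta-Ct-style calculation is an assumption about how relative ratios were formed (the text states only that ratios were taken relative to mean intact-skin levels and log2-transformed), and the function names are hypothetical:

```python
import math

def log2_relative_expression(ct_target, ct_refs, calib_ct_target, calib_ct_refs):
    """Delta-delta-Ct-style ratio, normalized to the mean Ct of the two
    reference genes (etif3, elf1a) and to a calibrator (intact skin),
    returned on the log2 scale as in the text."""
    d_ct = ct_target - sum(ct_refs) / len(ct_refs)
    d_ct_cal = calib_ct_target - sum(calib_ct_refs) / len(calib_ct_refs)
    return math.log2(2.0 ** -(d_ct - d_ct_cal))

def is_deg(log2_ratio, p_value, lfc_cutoff=0.8, alpha=0.05):
    """Microarray selection criteria from the text:
    |log2-expression ratio| > 0.8 (~1.75-fold) and p < 0.05."""
    return abs(log2_ratio) > lfc_cutoff and p_value < alpha
```

For example, a target one cycle later (relative to the reference-gene mean) than in the calibrator gives a log2 ratio of -1, i.e. half the calibrator's expression.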
Filtering, statistical analyses and plotting of results were performed in R.

Animal statement {#Sec24}
----------------

This study was approved by the local responsible laboratory animal science specialist under the surveillance of the Norwegian Animal Research Authority (NARA) and registered by the national ethics committee (the Norwegian Food Safety Authority, ID7058). The methods were carried out in accordance with the relevant guidelines and regulations.

Electronic supplementary material {#Sec25}
=================================

Supplementary file 1

**Supplementary information** accompanies this paper at 10.1038/s41598-018-35002-5.

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors wish to thank Marianne Helén Selander Hansen, Mads Alexsander Haneborg, Christian Karlsen and Vibeke Høst for laboratory assistance, and Tom Ole Nilsen, Lars Ebbesson and Bendik Fyhn Terjesen for contributing to the initial proposal. The research was supported by the Research Council of Norway ("SalmoFutura", grant \#233870/E40), the CtrlAQUA SFI, Centre for Closed-Containment Aquaculture, funded by the Norwegian Research Council (grant \#237856/O30), and the Norwegian Research Council-funded project "ImCom" (grant \#267644). The funders provided support in the form of salaries for authors, laboratory equipment, analysis and the fish trial, but did not have any additional role in the study design, data collection, analyses, interpretation of results, decision to publish or preparation of the manuscript.

Writing original draft: L.R.S. Laboratory work, data processing and mining: A.K., E.Y., G.T. and L.R.S. Conceptualization: L.R.S., H.T., E.Y., S.H. Funding acquisition and project administration: E.Y., S.H., S.O.S. and H.T. All authors were involved in reviewing the manuscript.
Data were submitted to Gene Expression Omnibus (GSE122142).

Competing Interests {#FPar1}
===================

The authors declare no competing interests.
{ "pile_set_name": "PubMed Central" }
Enzymatic assembly of the bis-indole core of rebeccamycin. Rebeccamycin is a member of the family of indolocarbazole antibiotics with broad spectrum antitumor activity. The indolocarbazole framework is derived from two molecules of tryptophan, but very little is known about the enzymes involved in rebeccamycin biosynthesis. Here, we show that RebD is responsible for all catalytic steps forming the central pyrrole ring of chlorochromopyrrolic acid from two molecules of chloroindolepyruvic acid. This transformation does not require any additional cofactors and constitutes the first step of bis-indole formation in the biosynthesis of rebeccamycin.
{ "pile_set_name": "PubMed Abstracts" }
Drinking beer, so you don't have to.

Societe Urchin, photo has potato adjuncts

Listen, they can't all be some deft expression of the AWA realm, and this one missed the mark for me. Taking an [altbier?] base and adding cranberries to it, I don't think even BFM could pull that shit off. The beer is clean and honestly lacks expression from the fruit. You heard that right, the cranberry wasn't pronounced enough. Those of you still undergoing maxillofacial surgery from Cranberry Cascade can involuntarily gape jawless at that statement. The underlying beer is extremely clean and well crafted, but the entire affair just seems ill conceived. It's like walking away from a Tinder date that on paper has no deficiencies but lacks punch beyond "what major did you study at Cranberry State?" And I get it, a darker underpinning with arguably the hardest fruit profile to successfully wrangle, the fact that this didn't end up tasting like some Deschutes BBXXVIII dogshit is astounding. I just kinda ejaculated a blithe lil "ehhhhhh" and got into costume. It was deece to deece mas.
{ "pile_set_name": "Pile-CC" }
Aldose reductase prevents aldehyde toxicity in cultured human lens epithelial cells. Aldehydes are widespread environmental and industrial compounds, which cause cytotoxicity, tissue damage, mutagenicity, and carcinogenicity leading to various disease conditions such as cardiovascular, bronchial, and visual complications. We have shown earlier that aldose reductase (AR) besides reducing glucose to sorbitol, efficiently reduces various toxic lipid-derived aldehydes, generated under oxidative stress, with K(m) in the physiological range. We have identified the role of AR in the prevention of various lipid aldehyde-induced cytotoxic signals leading to apoptosis in human lens epithelial cells (HLEC). HLEC were cultured without or with AR inhibitors followed by addition of various saturated and unsaturated lipid aldehydes with a carbon chain length varying from C3 to C10. The cell viability was assessed by cell counts and MTT assay, and apoptosis was measured by evaluating nucleosomal degradation and caspase-3 activation using specific ELISA kits. Although all the aldehydes caused apoptosis of HLEC, the unsaturated aldehydes were more toxic than saturated aldehydes. Inhibition of AR by sorbinil potentiated while the over-expression of AR prevented the apoptosis induced by various lipid aldehydes. AR over-expression also prevented the lipid aldehyde-induced activation of caspase-3, MAPK, JNK and the expression of Bcl-2 family of proteins in HLEC. The results indicate that the lipid aldehydes generated under oxidative stress are cytotoxic to HLEC leading to apoptosis and that the reduction of lipid aldehydes by AR would prevent it.
{ "pile_set_name": "PubMed Abstracts" }
Q: KeyError generated when editing site setting with foreign key set

Django Version: 2.1.5
Python Version: 3.6.8
Wagtail Version: 2.4

I have a template with four columns of links in the footer. I have set up the following models which consist of a BaseSetting object and footer link objects for each column of links. The footer link objects each ForeignKey to the TemplateItems object.

@register_setting
class TemplateItems(BaseSetting):
    page_banner = models.OneToOneField('wagtailimages.Image', null=True, blank=True,
                                       on_delete=models.SET_NULL, related_name='+',
                                       help_text='Banner image that shows below menu on pages other than home page')
    footer_link_col1_header = models.CharField(max_length=25, default='', verbose_name='Footer Link Column 1 Header')
    footer_link_col2_header = models.CharField(max_length=25, blank=True, default='', verbose_name='Footer Link Column 2 Header')
    footer_link_col3_header = models.CharField(max_length=25, blank=True, default='', verbose_name='Footer Link Column 3 Header')
    footer_link_col4_header = models.CharField(max_length=25, blank=True, default='', verbose_name='Footer Link Column 4 Header')

    panels = [
        ImageChooserPanel('page_banner'),
        MultiFieldPanel([
            FieldPanel('footer_link_col1_header'),
            InlinePanel('footer_links_col_1', label='Column 1 Links'),
            FieldPanel('footer_link_col2_header'),
            InlinePanel('footer_links_col_2', label='Column 2 Links'),
            FieldPanel('footer_link_col3_header'),
            InlinePanel('footer_links_col_3', label='Column 3 Links'),
            FieldPanel('footer_link_col4_header'),
            InlinePanel('footer_links_col_4', label='Column 4 Links'),
        ], heading='Footer Links'),
        InlinePanel('social_media_links', label="Social Media Links"),
    ]

class FooterLink(Orderable):
    name = models.CharField(max_length=60, default='')
    url = models.CharField(max_length=200, default='')

    panels = [
        FieldRowPanel([
            FieldPanel('name'),
            FieldPanel('url'),
        ])
    ]

    class Meta:
        abstract = True

    def __str__(self):
        return f'{self.name}'

class FooterLinkCol1(FooterLink):
    template_items = ForeignKey('TemplateItems', related_name='footer_links_col_1', null=True, on_delete=models.SET_NULL)

class FooterLinkCol2(FooterLink):
    template_items = ForeignKey('TemplateItems', related_name='footer_links_col_2', null=True, on_delete=models.SET_NULL)

class FooterLinkCol3(FooterLink):
    template_items = ForeignKey('TemplateItems', related_name='footer_links_col_3', null=True, on_delete=models.SET_NULL)

class FooterLinkCol4(FooterLink):
    template_items = ForeignKey('TemplateItems', related_name='footer_links_col_4', null=True, on_delete=models.SET_NULL)

Migrations are created and migrated successfully, but when I go to the TemplateItems settings object in the Wagtail admin in order to add footer links, I receive the following error:

KeyError at /admin/settings/main/templateitems/2/
'footer_links_col_1'

If I comment out any of the footer_links_col_X items, then I receive the error for the first one that is not commented out. There are no existing footer links in the database for any of the columns. I wondered if the problem was coming because the ForeignKey is to a BaseSetting object, but when I declare these models in the Django admin (including the inlines for each of the column links), it displays and allows me to add links just fine.

Traceback:

File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
  34. response = get_response(request)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
  126. response = self.process_exception_by_middleware(e, request)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
  124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
  44.
response = view_func(request, *args, **kwargs)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/urls/init.py" in wrapper
  102. return view_func(request, *args, **kwargs)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/decorators.py" in decorated_view
  34. return view_func(request, *args, **kwargs)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/contrib/settings/views.py" in edit
  83. instance=instance, form=form, request=request)
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in bind_to_instance
  153. new.on_instance_bound()
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in on_instance_bound
  295. request=self.request))
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in bind_to_instance
  153. new.on_instance_bound()
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in on_instance_bound
  295. request=self.request))
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in bind_to_instance
  153. new.on_instance_bound()
File "/opt/virtualenvs/MY_SITE-a0hNfZxl/lib/python3.6/site-packages/wagtail/admin/edit_handlers.py" in on_instance_bound
  692.
self.formset = self.form.formsets[self.relation_name]

Exception Type: KeyError at /admin/settings/main/templateitems/2/
Exception Value: 'footer_links_col_1'

A: InlinePanel requires the corresponding foreign key to be a ParentalKey:

from modelcluster.fields import ParentalKey

class FooterLinkCol1(FooterLink):
    template_items = ParentalKey('TemplateItems', related_name='footer_links_col_1', null=True, on_delete=models.SET_NULL)

In turn, ParentalKey requires the parent model to inherit from ClusterableModel (which is automatically true for Wagtail Page models):

from modelcluster.models import ClusterableModel

class TemplateItems(BaseSetting, ClusterableModel):

(There's some explanation of the motivation for ClusterableModel / ParentalKey in the readme for django-modelcluster.)
{ "pile_set_name": "StackExchange" }
import pydoc
import keyword

from jedi._compatibility import is_py3
from jedi import common
from jedi.evaluate import compiled

try:
    from pydoc_data import topics as pydoc_topics
except ImportError:
    # Python 2.6
    import pydoc_topics

if is_py3:
    keys = keyword.kwlist
else:
    keys = keyword.kwlist + ['None', 'False', 'True']


def keywords(string='', pos=(0, 0), all=False):
    if all:
        return set([Keyword(k, pos) for k in keys])
    if string in keys:
        return set([Keyword(string, pos)])
    return set()


def keyword_names(*args, **kwargs):
    kwds = []
    for k in keywords(*args, **kwargs):
        start = k.start_pos
        kwds.append(KeywordName(k, k.name, start))
    return kwds


def get_operator(string, pos):
    return Keyword(string, pos)


class KeywordName(object):
    def __init__(self, parent, name, start_pos):
        self.parent = parent
        self.names = [name]
        self.start_pos = start_pos

    @property
    def end_pos(self):
        return self.start_pos[0], self.start_pos[1] + len(self.name)


class Keyword(object):
    def __init__(self, name, pos):
        self.name = name
        self.start_pos = pos
        self.parent = compiled.builtin

    def get_parent_until(self):
        return self.parent

    @property
    def names(self):
        """ For a `parsing.Name` like comparision """
        return [self.name]

    @property
    def docstr(self):
        return imitate_pydoc(self.name)

    def __repr__(self):
        return '<%s: %s>' % (type(self).__name__, self.name)


def imitate_pydoc(string):
    """
    It's not possible to get the pydoc's without starting the annoying pager
    stuff.
    """
    # str needed because of possible unicode stuff in py2k (pydoc doesn't work
    # with unicode strings)
    string = str(string)
    h = pydoc.help
    with common.ignored(KeyError):
        # try to access symbols
        string = h.symbols[string]
        string, _, related = string.partition(' ')

    get_target = lambda s: h.topics.get(s, h.keywords.get(s))
    while isinstance(string, str):
        string = get_target(string)

    try:
        # is a tuple now
        label, related = string
    except TypeError:
        return ''

    try:
        return pydoc_topics.topics[label] if pydoc_topics else ''
    except KeyError:
        return ''
{ "pile_set_name": "Github" }
If you were to describe Signal Ops in high concept fashion, you might call it a squad-based, first-person, strategy shooter, with elements of Rainbow Six and, more aptly, Space Hulk thrown into the mix. The basic premise is relatively simple: you start out in a muddy, grimy bunker, surrounded by gentlemen who speak as if plucked from Tinker, Tailor, Soldier, Spy, and you're tasked with handling several covert operations. Instead of hitting the ground running yourself, you take up a position in front of a large monitor, dictating the actions of your agents via multiple displays. It's a neat idea, and Space Hulk managed to pull off something similar back in the mid-Nineties, but the multiple perspectives frequently get muddled and confused. The game's graphical stylings - a washed-out masterclass in all of the colours you might class under the umbrella of "murky grime" - don't help, and though objects are relatively distinct in each of your displays, the shadows are not, which is not ideal for a game that provides much of its satisfaction from successfully completing missions in a stealthy fashion. Missions will, more often than not, see you presented with a relatively simple objective - retrieve something on the other side of the map, assassinate a designated target, plant incriminating evidence and then hightail it out of there - carried out by two or three agents. Those agents will typically have different roles to fulfil. The Wrench agent, for example, is a mechanical fixer of sorts, and useful for opening doors. The Shield agent is your tank, able to soak up a fair amount of damage and ideal for direct confrontation. The Scope is, as one might expect, useful from long-range, and the Demo likes to blow things up. And then there's the Bolt agent, also known as The Fun Killer. You see, in most strategy games, you'll have intel going into an area, or a blanket fog-of-war that clears as you explore.
In Signal Ops, you have to manage your own fog-of-war, as an agent's screen will turn to the white noise and dancing particles of a TV without signal if you move beyond the marked borders of your Bolt agent's radio. You can unplug the radio to move it, but its power will deplete rather quickly, and you have to shuffle your other agents around inside the transmission boundaries just to make sure everyone can see. The existence of the radio, rather than having a radar, injects proceedings with constant, needless doses of tedium, and makes missions more of a chore than they should be. You can issue commands to agents you're not directly controlling, using the perspectives of other agents in the field to help move 'dark' agents back into the field of the radio's transmission, but it's all so fiddly, unintuitive, and laborious that you'll quickly get bored. Instead of working out strategic ways of completing the mission, Signal Ops wastes a lot of goodwill by forcing you into micromanagement. It's a problem compounded by a control scheme that's clearly designed around a gamepad rather than a keyboard and mouse, with the interface mapped in a four-button cross. There are some strange choices in terms of default button mapping (which can't be changed), and it took me five minutes to open the very first door, pressing every single button on my keyboard before realising that the doors respond to the scroll wheel on the mouse. There's a tutorial that goes some way to explaining the game's control system, but it's so convoluted that you'll almost certainly end up issuing move commands rather than interaction orders, accidentally deploying agents into uncharted territory, and skipping to control another agent at the very moment you want to deliver a decisive order. 
In most squad-based games, you have either a 'commander's' view of the battlefield that allows you certain strategic benefits, or (as in Rainbow Six) a certain level of friendly AI that means you're not constantly holding everybody's hand, because babysitting is hardly the most riveting pastime. Unfortunately, Space Bullet have made it the entire focus of their game. Stealth is an exercise in frustration because the inky cel-shaded look provides little by way of detailed visual feedback, so shadow-creeping either leads you into a black hole from which you can see nothing, or indistinct cover of darkness where you think you're safe, but aren't. I actually love the game's aesthetics, from a purely visual standpoint. But practically, they provoke violence against computer accessories. Direct combat is dependent on some of the worst, fuzziest shooting mechanics I've seen in a long time, and any strategic planning using the more interesting agents requires you first to move the Bolt into position, time and time again. It's nice to have the freedom of total control, but not from this perspective, where every step causes the camera to wobble in a fashion that would make even Paul Greengrass blow chunks. It's a real shame because there are some lovely little touches, especially back at the base. The dialogue that plays out amongst the spymasters is often chuckle-inducing, and the fact that you're representing a cartoonish, bumbling branch of Orwellian totalitarianism is worked well into the script and mined for humour. You'll laugh when you first unlock the Spy agent, who distracts guards by pretending to be a pensioner and blathering on "about the war". The mission settings are nicely varied, and you do have the freedom to go about your objectives however you wish, but you're hamstrung by the game's systems themselves. 
It's a nice conceit that the archaic machinery and distance from the base places heavy reliance on the radio, but it just doesn't translate into fun gameplay.

Pros
- Open missions allow for players to take whatever approach they choose
- Humorous script
- The devs have been pumping out patches to account for the game's numerous glitches and bugs
- The watercolour, cartoonish look is awesome...

Cons
- ... But not exactly practical
- Awkward controls and interface
- First-person perspective ruins the strategic element
- The Bolt agent makes things terribly tedious
- Awful AI, even when you're constantly hand-holding

The Short Version: Signal Ops manages to combine FPS action together with RTS tactical gameplay, but it does so in a manner which manages to squeeze the goodness out of both of those genres, leaving players with a clunky game, crippled by its own impositions. Nice ideas, but ultimately tedious and laborious in execution.

Well, I like the idea of controlling doors with the mousewheel. If anything, someone should run with this concept, perhaps using it to move various scenery elements up and down such as platforms, barriers, lifts etc - might make for an enjoyable platformer (perhaps you could control your character with WASD, jump with LMB and manipulate the environment with the wheel). Shame about most of the other design decisions though, by the looks of things.
{ "pile_set_name": "Pile-CC" }
418 P.2d 549 (1966)
The STATE of Montana, Plaintiff-Respondent,
v.
Rick TULLY, Defendant-Appellant.
No. 10933.
Supreme Court of Montana.
Submitted September 14, 1966.
Decided September 28, 1966.
Rehearing denied October 25, 1966.

Vernard C. Anderson, Jr. (argued), Billings, for appellant.

Forrest H. Anderson, Atty. Gen., Helena, John L. Adams, Jr., Deputy Co. Atty., Billings, Alfred Coate, Asst. Atty. Gen. (argued), Helena, for respondent.

HARRISON, Chief Justice.

This is an appeal from a judgment entered in the District Court of Yellowstone County following a jury verdict of guilty of the crime of uttering and delivering a fraudulent check. The defendant was sentenced to four years in prison. Defendant makes two specifications of error. First, defendant contends that the trial court erred in admitting into evidence certain checks other than the check upon which the defendant was being tried. Second, defendant contends that admitting into evidence facts tending to prove a prior felony conviction was prejudicial to the defendant when the State was unable to prove in fact that the defendant was convicted of a felony. Section 94-2702, R.C.M. 1947, establishes the elements of the crime of uttering and delivering a fraudulent check.
It provides in part: "Any person who for himself * * * wilfully, with intent to defraud shall make * * * any check * * * for the payment of money upon any bank * * * knowing at the time of such making * * * that the maker * * * has no funds * * * with such bank * * * for the payment of such check * * * in full upon its presentation, although no express representation is made with reference thereto, shall upon conviction be punished as follows: If there are no funds in * * * such bank * * * for the payment of any part of such check * * * upon presentation, then in that case the person convicted shall be punished by imprisonment in the state prison not exceeding five (5) years, or by a fine not exceeding five thousand dollars ($5,000.00) or by both such fine and imprisonment * * *." The record before this court reveals the following facts: The State's witnesses *550 testified that on January 7, 1964, the defendant had purchased some groceries in a Billings grocery market; that he had paid for the groceries with a $20.00 check, drawn on the Billings State Bank, receiving some $11.00 or $12.00 in change; that the check had been returned three days later marked "no account"; that the defendant had never had an account at the Billings State Bank; that a search for the defendant conducted by an employee of the grocery market had been fruitless; and that the check had been turned over to the county attorney's office for prosecution on January 15, 1964. At the trial, defendant testified in his own behalf. The defendant admitted making the check, giving the check in payment for the groceries, and not having any account at the Billings State Bank. However, upon cross-examination defendant denied any intent to defraud. Thus, defendant put in issue one of the prime elements of the crime, namely, the wilful intent to defraud.
To prove this wilful intent to defraud, the State cross-examined the defendant concerning ten other checks that had been drawn by the defendant upon the Security Trust & Savings Bank in Billings in which he again had no account. These checks totaled $225.00. They were cashed in three different Billings business establishments from December 18, 1963, to January 3, 1964. Defendant did not deny writing the checks or cashing them. Defendant was further asked if he had written eight other checks on the Billings State Bank in January, 1964, before his arrest. Defendant's answer to the question was vague, but there was no further questioning as to these individual checks. This brings us to a consideration of defendant's first specification of error. Defendant contends that admitting over objection the evidence concerning the other checks which he wrote from December 18, 1963, until the time of his arrest on January 16, 1964, was prejudicial error. With this contention we do not agree. In State v. Hollowell, 79 Mont. 343, 349, 256 P. 380, 382, this court commented as follows: "* * * The general rule is that evidence of crimes other than the one for which a defendant is on trial is not admissible, but to this rule there are exceptions, and one is where evidence is material as tending to show the intent or motive of the defendant in the commission of the offense for which he is on trial, notwithstanding the fact that it also tends to prove the commission by him of another offense." (Citing previous cases.) (Emphasis supplied.) Later in State v. Simpson, 109 Mont.
198, 208, 95 P.2d 761, 764, this court further commented: "* * * * The rule which Montana has followed, and to which we now adhere, is succinctly stated in 20 American Jurisprudence, page 289, as follows: `Evidence of other crimes is always admissible when such evidence tends directly to establish the particular crime and it is usually competent to prove the motive, the intent, the absence of mistake or accident, a common scheme or plan embracing the commission of two or more crimes so related to each other that proof of one tends to establish the others, or the identity of the person charged with the commission of the crime on trial.'" (Emphasis supplied.) In a period of about three weeks, the defendant had written eleven checks on two nonexistent bank accounts for a total of $275.00. The amount of the eight other checks was not shown. This evidence was properly received for consideration by the jury concerning whether defendant had the wilful intent to defraud, a subject which defendant had affirmatively denied. We now consider defendant's second specification of error. Section 93-1901-11, R.C.M. 1947, provides that a witness may be impeached if he has ever been convicted of a felony and that this may be shown by examination of the witness or the record of the judgment. Upon cross-examination the defendant was *551 asked if he had ever been convicted of a felony. He replied, "That I cannot answer to on advice of counsel because we can't find out whether I have or not * * *." Further questioning of defendant revealed that he had received a five-year deferred sentence in the State of Washington for grand larceny. Upon redirect examination, the defendant was allowed to further explain the details of the State of Washington incident. Defendant's testimony on cross-examination concerning the incident was without objection. His explanation of the incident on redirect examination was made over various objections of the State. The State called W.E. 
McConnell, a probation and parole officer for the State of Montana, in an attempt to prove the prior felony conviction. His testimony only substantiated what the defendant had testified to, inasmuch as he stated that the crime was grand larceny and that the defendant was on probation to him from the State of Washington. When an uncertified copy of the Judgment and Order Deferring Sentence and Granting Probation that was in Mr. McConnell's possession was attempted to be entered into evidence by the State, it was refused upon the timely objection of defendant's counsel. From State v. Coloff, 125 Mont. 31, 231 P.2d 343, defendant quotes the rule that if a witness denies a prior conviction, then the only evidence concerning the conviction that can be allowed is the record of the judgment. However, the defendant fails to see the distinction in the rule of the Coloff case, and the happenings in this case. Here defendant neither denied nor affirmed a prior felony conviction, but instead attempted to explain the State of Washington incident. As we read the transcript of the trial, whatever prejudice that may have occurred to the defendant came from his own lips. Mr. McConnell's testimony only confirmed the defendant's testimony concerning his sentence and probation from the State of Washington. The defendant explained to the jury his reasons for giving the check when he had no bank account. He offered witnesses to substantiate his story. He was afforded a complete opportunity to explain fully his brush with the law in the State of Washington. However, the jury did not believe his story. They found the necessary wilful intent to defraud. Finding no error in the record, the judgment is affirmed. MR. JUSTICES JOHN CONWAY HARRISON, ADAIR and CASTLES, concur.
{ "pile_set_name": "FreeLaw" }
Q: How to navigate divs in a custom built single page app? I'm not sure how JS frameworks work as far as single page app functionality. I've got a pseudo single page app, with no framework. I have 3 tabs that will toggle visibility for 3 different hidden div's on the same page. So far I've been able to hide two and display one on click to allow the page to "navigate" without changing pages. I'm running into some complications, however, because I'd like to run some ajax calls to keep the data on my div's updated when visible. I'd also like to be able to pass which page I want visible in the URL for links, etc. Basically I'm wondering what the best way is to identify what "screen" is visible, so I know what ajax calls to make, but I'd prefer if I didn't have to check the css on the element for visibility for these types of things. Can this be done with anchor href's in the url? I could use URL variables, but again I don't want to reload the page, and I could probably make a JS variable to look at and change as I click my tabs, but I wouldn't really be able to pass this in the url. Here is some code. The app is for a dice game, to add some context. The three tabs are simple empty divs with background images that sit on the left hand side of my screen for nav. 
$('#chatTab').click(animateChat);
$('#timelineTab').click(animateTimeline);
$('#rollTab').click(animateTheTable);

// opens and closes chat box
function animateChat () {
    $('#stats').fadeOut('slow', function(){ $('#chat').delay(800).fadeIn('slow'); });
    $('#theTable').fadeOut('slow', function(){ $('#chat').delay(800).fadeIn('slow'); });
}

// opens and closes timeline box
function animateTimeline () {
    $('#chat').fadeOut('slow', function(){ $('#stats').delay(800).fadeIn('slow'); });
    $('#theTable').fadeOut('slow', function(){ $('#stats').delay(800).fadeIn('slow'); });
}

// opens and closes the roll table
function animateTheTable () {
    $('#stats').fadeOut('slow', function(){ $('#theTable').delay(800).fadeIn('slow'); });
    $('#chat').fadeOut('slow', function(){ $('#theTable').delay(800).fadeIn('slow'); });
}

A: This is a very open-ended question, so the answer depends on how far you want the app to go.

If your page will only ever have a few UI elements and only one layer of navigation, you are probably better off doing it with straight jQuery and avoiding extra complication. jQuery can handle URL tracking by getting window.location on page load and then performing your animations above. Read about handling URLs with JavaScript here: https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Manipulating_the_browser_history

But keep in mind that this only works in modern browsers; old versions will not handle dynamic URL changes well. To save having to set up all the logic to check the URL, you could use a location framework like history.js or jQuery Address.

If you intend the game to become very complex, with multiple screens and some kind of database, go with Angular or another JS framework. This will handle all your routing and animations, including URL tracking, plus heaps of other features you may or may not need down the track. The learning curve is steep, but once you are there you can make ANYTHING.
Be careful though: it's easy to jump in headfirst with the whizz-bang frameworks and end up spending weeks on something you could have barrelled out in a few days with straight JS, CSS and HTML. Complexity kills completion.
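To make the framework-free approach concrete: one lightweight pattern is to treat the URL hash as the single source of truth for which pane is visible, so you never have to inspect CSS visibility. Below is a sketch; the pane ids (#chat, #stats, #theTable) come from the question, while `paneFromHash` and `refreshPane` are hypothetical names for illustration.

```javascript
// Map each tab's hash to the pane it shows (ids from the question's markup).
var PANES = { chat: 'chat', timeline: 'stats', roll: 'theTable' };

// Derive the visible pane from a location hash like "#chat".
// Unknown or empty hashes fall back to the roll table as the default screen.
function paneFromHash(hash) {
  var key = (hash || '').replace(/^#/, '');
  return PANES[key] || PANES.roll;
}

// In the page itself you would wire it up roughly like this (jQuery, as in
// the question) -- shown as comments because it needs a DOM to run:
//
// function showPane(id) {
//   $('#chat, #stats, #theTable').not('#' + id).fadeOut('slow', function () {
//     $('#' + id).delay(800).fadeIn('slow');
//     refreshPane(id);               // run the ajax calls for this pane here
//   });
// }
// $(window).on('hashchange', function () {
//   showPane(paneFromHash(location.hash));
// });
// $('#chatTab').click(function () { location.hash = 'chat'; });
```

Because clicking a tab only sets `location.hash`, links like `mygame.html#timeline` open straight onto the right screen, and the `hashchange` handler is the one place that decides which ajax calls to fire.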
{ "pile_set_name": "StackExchange" }
Q: Is every group isomorphic to some nontrivial quotient group?

For any group $G$, does there exist a group $H$ and a nontrivial normal subgroup $N$ of $H$ such that $H/N\cong G$?

A: Yes: for example, take $H:=G\times G$ and $N:=G\times e$. (If $G$ is trivial, just take any nontrivial group $H$ and set $N=H$, for example $H=N=\Bbb Z$.)

A: Yes. If $N$ is any nontrivial group (for instance the one with two elements), then the projection $H=G\times N\to G$ onto the first factor has kernel $N\subseteq H$, so $G\cong H/N$.
{ "pile_set_name": "StackExchange" }
Planning portal: The problem with PPS5

Current planning guidance for the historic environment is urgently in need of a robust review, says Paul Velluet.

The proposals for fundamental change in the structure and content of central government’s planning policy might be viewed at best with benign scepticism and at worst with outright cynicism. With the prospect of the demise of the Planning Policy Statements, the abandonment of Regional Spatial Strategies and the creation of a single national planning framework, offering a ‘shorter, more decentralised and less bureaucratic’ approach to policy, it is reasonable to ask whether this will lead to greater consistency and coherence in planning policy. Some have expressed fears that much practical and legally tested policy could be jettisoned, leaving a significant policy vacuum at a strategic level. Others have argued that the proposal offers the opportunity to rationalise and update existing policy and remove inconsistencies.

Assuming that policies for sustaining the historic built environment will be retained, a useful starting point should be a robust and searching review of that most controversial policy document, Planning Policy Statement 5: Planning for the Historic Environment (PPS5), issued by the Department for Communities and Local Government last March. If there is one policy document that needs attention before its inclusion in any national planning framework, it must be PPS5 – not so much in order to shorten it, but rather to give it the clarity and coherence it lacks and to recover some of the usefulness and certainty of the guidance that it superseded.

It is now 12 months since the sudden – and arguably premature – publication of PPS5, and it is time to review the document and to question whether it has aided or hindered the delivery of sound decisions and successful conservation outcomes.
While some have welcomed those positive features of PPS5 that have improved upon the guidance contained in the earlier Planning Policy Guidance 15 (PPG15), many across the private sector and in local government have viewed key areas of PPS5 with a substantial degree of scepticism.

The statement of government’s broad objectives is clearly most welcome, in particular, the three stated parts of the ‘overarching aim’ and the specific recognition that ‘intelligently managed change may…be necessary if heritage assets are to be maintained for the long term’, and the clear policies linking the protection of ‘historic assets’ to sustainability and climate change issues. Welcome, too, is the clear acknowledgement of ‘proportionality’ in the level of detail required in any description of the significance of heritage assets affected by planning proposals, and the need to recognise that the greater the harm to the significance of a heritage asset, the greater will be the justification needed for any loss.

Much of the wording of PPS5 reflects the aspirations of the legislative reforms advanced, but then abandoned, by the last government. In this regard the authors of PPS5 have sought to bundle together policy relating to all kinds of ‘assets’ of architectural, artistic, historical, archaeological and landscape interest into one all-embracing statement, leading to the adoption of obscure language that is substantially inconsistent with that of existing legislation and offering scope for challenges in the courts.

Anomalously, the terms ‘preservation’, ‘special architectural or historic interest’, ‘preservation or enhancement’ and ‘character or appearance’ are conspicuous by their absence or sparing use. Instead, the term ‘significance’, which has little, if any, statutory basis, is repeated endlessly. Sadly, the terms ‘judgement’ and ‘discernment’ are missing altogether.
Above all, much of the advice in PPS5 presupposes and depends crucially upon the existence and free availability of readily accessible, accurate and up-to-date records and adequately resourced, experienced and knowledgeable conservation staff in local planning authorities and in bodies such as English Heritage – surely an increasingly questionable assumption today.

Troubling, too, are the inclusion in the document of a complex and awkwardly worded series of inter-related policies applicable to the demolition of unlisted buildings in conservation areas and the failure to provide any distinction in the relevant policies between ‘working buildings’ and what might be described as ‘cultural monuments’ with little, if any, beneficial use.

Further major change in planning policy would offer the opportunity to secure the urgent resolution of the deficiencies of PPS5. Such a revision is essential if certainty and consistency in decision-making in relation to the built heritage are to be recovered.
{ "pile_set_name": "Pile-CC" }
Item reduction of the patient-rated wrist evaluation using decision tree modelling.

The aim of this study is to assess the viability of a decision tree version of an often used questionnaire to measure wrist pain and disability, the Patient Rated Wrist Evaluation (PRWE). PRWE scores were collected from a cohort of 10,394 patients who are part of a routine outcome measurement system, and a decision tree version of the PRWE was created. The intraclass correlation was used to evaluate the inter-version reliability between the original PRWE and the decision tree version. The decision tree reduced the number of questions from 5 to 3 for the pain subscale, and from 10 to 3 for the disability subscale. The intraclass correlation between the original PRWE and the decision tree version was 0.97. The mean difference between the original and the decision tree PRWE total sum score was 0.35 (95% CI −9.92 to 10.62). We found that the decision tree was successful at reducing the items of the PRWE from fifteen to only six questions, with very high similarity to the scores of the full questionnaire.

Implications for rehabilitation: The PRWE can reliably be used with 6 instead of 15 questions. Decision trees are useful statistical tools for shortening lengthy questionnaires, especially when large amounts of data are available. Having a shortened PRWE saves patients and clinicians time in answering this specific questionnaire.
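The abstract does not include its computations, but the two agreement statistics it reports — an intraclass correlation and a mean difference with a 95% interval — can be sketched in a few lines. The sketch below is illustrative only: it assumes the two-way random-effects, absolute-agreement, single-measures ICC (often written ICC(2,1)) and Bland-Altman-style limits of agreement; the abstract does not state which ICC variant was actually used, and the function names are mine.

```python
# Hypothetical sketch: agreement between full-questionnaire scores and
# shortened (decision-tree) scores for the same patients. The ICC form
# (two-way random effects, absolute agreement, single measures) is an
# assumption -- the abstract does not specify its variant.
from statistics import mean, stdev

def icc_2_1(full, short):
    """ICC(2,1) treating the full and shortened scores as two 'raters'."""
    n, k = len(full), 2
    grand = mean(full + short)
    row_means = [(f + s) / 2 for f, s in zip(full, short)]
    col_means = [mean(full), mean(short)]
    ss_total = sum((x - grand) ** 2 for x in full + short)
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)
    msr = ss_rows / (n - 1)                       # between-subjects mean square
    msc = ss_cols / (k - 1)                       # between-versions mean square
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman(full, short):
    """Mean difference and 95% limits of agreement (mean diff +/- 1.96 SD)."""
    diffs = [f - s for f, s in zip(full, short)]
    d, sd = mean(diffs), stdev(diffs)
    return d, (d - 1.96 * sd, d + 1.96 * sd)
```

On the paper's cohort these quantities came out as an ICC of 0.97 and a mean difference of 0.35 (95% CI −9.92 to 10.62); the wide interval around a near-zero mean is what the agreement analysis is meant to expose, even when the ICC is high.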
{ "pile_set_name": "PubMed Abstracts" }
Q: Linking dates with many folders

I have a CSV file containing dates (from 7/8/2005 to 9/27/2013), so this date.csv contains only one column, the date.

I have 50 other files with the same structure. Their columns are: Date [structure of the date is the same as in my date.csv], Open, High, Low, Close, Volume, Adj. Close, Symbol.

I've bolded the columns I'm interested in to get my final output. Two examples of those files: AI.PA.csv and ALV.DE.csv.

My aim (question) is to get a final new file with these columns: Date (same structure as date.csv), Symbol AI.PA, Symbol ALV.DE, and the symbols of all the other files I have.

So each column should have the symbol as its header and contain the closing price if there is a closing price for the ad hoc date. And if there is no closing price it should contain nothing.

I really don't know how to solve the issue. I'm open to any "open source" solution (ideally SQL, Python, R).

A: Actually I just solved my issue by using a simple pivot table in Excel.
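For anyone who wants a scriptable alternative to the Excel pivot table, here is a standard-library Python sketch (Python being one of the options the asker listed). It assumes each symbol file has at least Date and Close columns, as described in the question; the function names are mine.

```python
# Sketch: build one wide table -- a row per date, a Close column per symbol,
# and an empty cell when a symbol has no price for that date.
import csv

def load_close(fileobj):
    """Return {date: close} from a symbol CSV that has Date and Close columns."""
    return {row["Date"]: row["Close"] for row in csv.DictReader(fileobj)}

def merge_close(dates, closes_by_symbol):
    """Combine per-symbol {date: close} dicts into rows for the final CSV.

    dates            -- master date list from date.csv, in order
    closes_by_symbol -- e.g. {"AI.PA": {...}, "ALV.DE": {...}}
    """
    header = ["Date"] + sorted(closes_by_symbol)
    rows = [header]
    for d in dates:
        # .get(d, "") leaves the cell empty when there is no closing price
        rows.append([d] + [closes_by_symbol[s].get(d, "") for s in header[1:]])
    return rows
```

In practice you would loop over the 50 files, call `load_close` on each (keyed by the file's symbol), then write `merge_close(...)` out with `csv.writer` to produce the final file.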
{ "pile_set_name": "StackExchange" }
Details

"He withdraws his finger briefly and with tender care, inserts the balls one at a time, pushing them deep inside me." - Anastasia

Place the lightest balls in the silicone cradle and advance to weightier ball combinations as your muscles strengthen from extended wear. Reap the benefits of toned muscles, control, enhanced sensitivity and heightened orgasms. Suitable for complete beginners through to advanced users.

Select the lightest ball combination (15g and 25g) to start and place the balls in the silicone cradle. Apply a generous amount of water-based lubricant and gently insert the balls one at a time, leaving the retrieval cord outside of the vagina. Your muscles automatically contract to keep the weighted balls in place, and this is where your pleasurable workout begins. Wear them for extended periods of time, starting off with just 15 minutes at a time, and progress to longer periods as your muscles increase in tone and strength. Advance your training further by experimenting with different weight options. Once you're used to wearing the balls and cradle, why not wear the balls with nothing but the addition of lubrication for new and exciting sensations?

Part of the Fifty Shades of Grey The Official Pleasure Collection approved by author E L James.

Cleaning // Wash your Pelvic Floor Exercisers each time with water that is neither hot nor cold. Avoid immersing them or leaving them under the faucet with a strong stream of water. Apply soap with your hands using a gentle massage, and avoid wetting any electronic parts on toys that have them, particularly those that are not splash-proof. To clean your Pelvic Floor Exercisers, use a mild soap, a natural antibacterial solution or specific cleaning wipes. Once rinsed and clean, always let your toys air-dry, so that no residue from rags, paper or towels comes into direct contact with the material.

If an electronic part accidentally gets wet, disassemble your Pelvic Floor Exerciser into as many parts as possible and leave it in a dry, ventilated place until it loses any trace of moisture.

Conservation // Keep your Pelvic Floor Exerciser in a cool, dry place, isolated from direct sunlight and extreme temperature changes. Check the contact material it is made of to avoid possible allergic reactions to its components, or deterioration caused by lubricants, creams, fluids or soaps that are unsuitable or incompatible with its composition. Remember that water-based lubricants are compatible with all materials. If you will not use your Pelvic Floor Exerciser (for non-rigid contact materials) for a long period of time, store it under a thin layer of natural talc, free of fragrances or additives, to better preserve its qualities.
{ "pile_set_name": "Pile-CC" }
In the United States Court of Appeals For the Seventh Circuit

No. 99-1092

Cheryl K. McPhaul, Plaintiff-Appellant,
v.
Board of Commissioners of Madison County, Indiana, Arleen Horine, in her official and individual capacity, and Madison County Board of Health, Defendants-Appellees.

Appeal from the United States District Court for the Southern District of Indiana, Indianapolis Division. No. 97 C 97--Sarah Evans Barker, Chief Judge.

Argued February 18, 2000--Decided August 16, 2000

Before Bauer, Posner, and Manion, Circuit Judges.

Manion, Circuit Judge. Cheryl McPhaul sued her former employer, the Madison County Board of Commissioners, alleging that the County failed to accommodate her disability in violation of the Americans with Disabilities Act (ADA). She also brought an individual capacity suit, under 42 U.S.C. sec. 1983, against her former supervisor, Arleen Horine, alleging that Horine discriminated against her because of her race, in violation of the Equal Protection Clause of the Fourteenth Amendment. The defendants moved for summary judgment. The district court granted the motion, concluding that McPhaul failed to establish a prima facie case for her ADA and section 1983 claims. McPhaul appeals, and we affirm.

I.

Cheryl McPhaul is a black woman who worked as a registered nurse for the Women, Infants and Children (WIC) program in Madison County, Indiana. WIC is a federally-funded program that provides health care and nutrition assistance for pregnant women, infants and children. McPhaul's supervisor was Arleen Horine, a registered nurse who coordinates the WIC program in Madison County. McPhaul began working for WIC as a nurse nutritionist in April of 1994, where her responsibilities included counseling WIC clients about nutrition and certifying them for program benefits like food supplements.
In May 1995, Horine concluded that McPhaul’s performance as a nutritionist was deficient because she was writing the same information on the charts of WIC clients regardless of their varying situations, including the infants, a practice that Horine described as "totally inappropriate." Thus, Horine transferred McPhaul to the position of intake clerk in May 1995. Intake clerks certify clients for the WIC program in order to secure federal funding. They record the heights and weights of clients so that the nurse nutritionists can properly advise clients about their diets. As an intake clerk, McPhaul continued to receive the same benefits and pay that she received as a nutritionist. In September 1995, McPhaul received her first performance evaluation as an intake clerk, in which Horine rated her performance "Below Average," the second lowest rating on a scale of five. Horine’s evaluation states that McPhaul was having "great difficulty in doing her job," that she was making "gross errors" in charting the heights and weights of clients, and that she was having trouble remembering shot schedules for infants and children and how to certify clients. Although McPhaul was retrained after her initial evaluation, she fared no better on her second evaluation in November 1995. According to Horine’s notes, McPhaul’s performance was still "Below Average" because she continued to make "gross errors" in plotting the heights and weights of clients, and was still unable to understand the certification process. In January 1996, Horine completed McPhaul’s third (and last) performance review, in which McPhaul received the lowest possible rating of "Unsatisfactory." Horine stated that McPhaul was making "numerous errors" in the routine tasks of the job, and that she was still failing to accurately record the heights, weights, and even the ages of clients. Horine recommended to the WIC administrator that McPhaul should be discharged. 
The administrator and the Health Officer approved Horine’s recommendation, and McPhaul was terminated on January 22, 1996. After her termination, McPhaul sued the Board of Commissioners, alleging that she was disabled and that the Board failed to accommodate her disability, in violation of the ADA. She also sued Horine in her individual capacity, under section 1983, alleging that Horine discriminated against her because of her race, thus affecting the terms and conditions of her employment. McPhaul also claimed that Horine failed to protect her from an alleged campaign of racial harassment by her white co-worker, Marcia Shock. Concerning her ADA action, McPhaul claims that she had been suffering from fibromyalgia since February 1995 (before Horine transferred her from the nutritionist position to the intake clerk position in May 1995). Fibromyalgia is a disease that is similar to chronic fatigue syndrome; its cause is unknown, there is no cure, and the symptoms are entirely subjective and usually involve chronic pain and fatigue. McPhaul’s fibromyalgia symptoms included fatigue, insomnia, shortness of breath and muscle pain, including sore hands and joints. She claims that her condition made it difficult for her to concentrate, bathe, walk, write and work, and that in September 1995 she requested Horine to accommodate her alleged disability by allowing her to arrive at work one hour later or to leave one hour earlier, or both. According to McPhaul, her request was denied. Horine claims that McPhaul never made the request. On January 11, 1996, McPhaul saw Dr. Van Dellen at the Mayo Clinic. He concluded that it was "possible" that McPhaul had fibromyalgia, and he gave her a card that instructed her to participate in an education program about the disease. McPhaul allegedly presented the card to Horine, but Horine asserts that she was never informed of McPhaul’s disease. 
McPhaul was not diagnosed with fibromyalgia until February 1, 1996, several days after she was terminated. McPhaul’s disparate treatment claim under section 1983 is based on several allegations that Horine discriminated against her because of her race by demoting her to the intake clerk position, terminating her from that position, and by treating her differently in regards to other terms and conditions of her employment. Horine disputes these allegations. In support of her hostile environment claim under section 1983, McPhaul alleges that she was harassed by Shock’s discussion of racially sensitive subjects and her repeated use of the word "nigger" in McPhaul’s presence. McPhaul also alleges that Horine knew about and tolerated Shock’s conduct, and is thus liable in her individual capacity. Horine disputes these allegations as well. The defendants moved for summary judgment, arguing that McPhaul failed to establish a prima facie case to support her claim under the ADA, or to support her disparate treatment and hostile environment claims under section 1983. The district court granted the motion, concluding that McPhaul’s ADA claim failed because she did not present sufficient evidence that she was disabled; that her disparate treatment claim failed because she presented no evidence that Horine was motivated by discriminatory intent; and that her hostile environment claim failed because she produced no evidence that her work environment was objectively hostile, or that Horine knew or consented to Shock’s conduct. "We review the district court’s entry of summary judgment de novo," Miller v. American Family Mut. Ins. Co., 203 F.3d 997, 1003 (7th Cir. 2000), and we will view all of the facts and draw all reasonable inferences in favor of the nonmoving party. See id. Summary judgment is proper if the evidence shows that "there is no genuine issue as to any material fact and that the moving party is entitled to a judgment as a matter of law." Fed. R. Civ. P. 56(c). 
McPhaul cannot merely allege the existence of a factual dispute to defeat summary judgment. Skorup v. Modern Door Corp., 153 F.3d 512, 514 (7th Cir. 1998). She must supply evidence sufficient to allow a jury to render a verdict in her favor. Ross v. Indiana State Teacher's Association, 159 F.3d 1001, 1012 (7th Cir. 1998).

II.

A. The ADA Claim

McPhaul's first argument on appeal is that the district court erred in concluding that her reasonable accommodation claim fails because she was not disabled under the ADA. The ADA proscribes discrimination "against a qualified individual with a disability because of the disability of such individual in regard to job application procedures, the hiring, advancement, or discharge of employees, . . . and other terms, conditions and privileges of employment." 42 U.S.C. sec. 12112(a). The Act also provides that an employer discriminates against a qualified individual with a disability by "not making reasonable accommodations to the known physical or mental limitations of an otherwise qualified individual with a disability . . . ." 42 U.S.C. sec. 12112(b)(5)(A).

To establish a prima facie case for failure to accommodate under the ADA, McPhaul must show that: (1) she was disabled; (2) the Board was aware of her disability; and (3) she was a qualified individual who, with or without reasonable accommodation, could perform the essential functions of the employment position. Feldman v. American Memorial Life Ins. Co., 196 F.3d 783, 789 (7th Cir. 1999). Although the district court held that McPhaul failed to establish that she was disabled, we reserve opinion on that determination because we find it dispositive that McPhaul has failed to present sufficient evidence to show that she was a "qualified individual" under the ADA. See id.
A "qualified individual with a disability" is "an individual with a disability who, with or without reasonable accommodation, can perform the essential functions of the employment position that such individual holds or desires." 42 U.S.C. sec. 12111(8). McPhaul has the burden of proof on this issue, as she must show that she could perform the essential functions of the nutritionist and intake clerk jobs either with or without a reasonable accommodation. Bultemeyer v. Fort Wayne Community Schools, 100 F.3d 1281, 1284 (7th Cir. 1996); 29 C.F.R. sec. 1630.2(m). The evidence clearly demonstrates that McPhaul was not able to perform the essential functions of the nutritionist and intake clerk positions. Horine concluded that McPhaul’s performance as a nutritionist was deficient because she was recording the same information on the charts of all of her patients, regardless of the various facts each presented, including the infants. For obvious reasons, Horine described this practice as "totally inappropriate." McPhaul does not dispute Horine’s conclusion. Moreover, McPhaul does not dispute Horine’s three evaluations that thoroughly documented McPhaul’s performance deficiencies as an intake clerk./1 And McPhaul presents no medical evidence to show that her performance deficiencies at either job were due to her alleged disability of fibromyalgia. McPhaul responds by claiming that she would have been able to perform the essential functions of the nutritionist and intake clerk jobs if Horine accommodated her request to arrive at work one hour later, or to leave one hour earlier. 
Aside from the fact that Horine claims that McPhaul never requested reduced hours, McPhaul provides no medical evidence to support her claim that her requested accommodation would have improved her performance, as none of her physicians ever recommended any work restrictions or accommodations due to her condition./2 All that McPhaul can present in support of her reasonable accommodation claim is her own self-serving testimony, and in this case, that is just not sufficient for a reasonable jury to find that she is a qualified individual with a disability under the ADA. See Slowiak v. Land O'Lakes, Inc., 987 F.2d 1293, 1295 ("Self-serving affidavits without factual support in the record will not defeat a motion for summary judgment."). Therefore, McPhaul's ADA claim fails.

B. The Section 1983 Claims

McPhaul also argues that Horine is personally liable for discriminating against her because of her race, in violation of the Equal Protection Clause of the Fourteenth Amendment and 42 U.S.C. sec. 1983. According to McPhaul, Horine treated her differently regarding the terms and conditions of her employment, and failed to act to stop Shock's alleged campaign of racial harassment.

To state a prima facie case under the Equal Protection Clause of the Fourteenth Amendment, a plaintiff must demonstrate that she: (1) is a member of a protected class; (2) is otherwise similarly situated to members of the unprotected class; (3) suffered an adverse employment action; (4) was treated differently from members of the unprotected class; and (5) the defendant acted with discriminatory intent. Greer v. Amesqua, 212 F.3d 358, 370 (7th Cir. 2000); Jackson v. City of Columbus, 194 F.3d 737, 751-52 (6th Cir. 1999). Regarding the fifth element, McPhaul must show that Horine "acted [or failed to act] with a nefarious discriminatory purpose," and discriminated against McPhaul because of her membership in a definable class (because she is black). Nabozny v. Podlesny, 92 F.3d 446, 453 (7th Cir.
1996) (internal citations omitted).

1. Disparate treatment.

McPhaul first contends that Horine discriminated against her because of her race by treating her differently in regards to the terms and conditions of her employment by: (1) transferring her to the intake clerk position; (2) terminating her from that position; (3) neglecting to train her for the intake clerk position while Shock, a white intake clerk, received more sufficient training; (4) denying her request to work reduced hours while granting Shock's request for the same accommodation; (5) requiring her to see more clients than Shock; and (6) prohibiting her from wearing a nurse's uniform while allowing Shock to wear one.

McPhaul's claims regarding her transfer and termination clearly fail because she does not establish the second and fifth elements of a prima facie case. She does not establish the second element--that she was otherwise similarly situated to other nutritionists or intake clerks who are members of an unprotected class--because she does not identify any co-worker with a similar "Below Average" or "Unsatisfactory" performance rating./3 See O'Connor v. Chicago Transit Authority, 985 F.2d 1362, 1371 (7th Cir. 1993) ("To make a prima facie case, O'Connor would have to show that another grossly insubordinate worker was treated better than him.") (citation omitted). And because McPhaul presents no evidence to indicate that Horine's transfer and termination decisions were motivated by any reason other than McPhaul's performance deficiencies (which are undisputed), she clearly fails to show that Horine's decisions were motivated by racial animus. Nabozny, 92 F.3d at 453.

On her claim about inadequate training, McPhaul essentially argues that Horine set her up for failure by neglecting to prepare her for the intake clerk position while Horine ensured that Shock was well prepared before she started the job.
Horine disputes McPhaul’s claim, and the record contains no evidence that Shock received better (or more timely) preparation for the position. See Slowiak, 987 F.2d at 1295. Moreover, McPhaul does not dispute Horine’s notes that McPhaul was "retrained fully for the job" after her first evaluation, but her performance still deteriorated to the "Unsatisfactory" level. Because the record discredits McPhaul’s argument, and she presents no evidence that Horine acted with racial animus, this claim fails. McPhaul’s next contention is that Horine discriminated against her when she allegedly denied her request to work a reduced schedule, but granted Shock’s request for the same accommodation. According to McPhaul, Horine’s reason for denying her request was that she already reduced hours for Shock and could not grant the same favor to McPhaul./4 But McPhaul’s actual testimony was that Shock’s time away from work "varied," and not that she was regularly allowed to work a reduced schedule, which corroborates Horine’s testimony that Shock never requested a reduced schedule, but occasionally took sick leave and vacation days. McPhaul presents no evidence to dispute that Shock used her accrued sick or vacation time when Horine allowed her to take a portion of a day off. And the record demonstrates that by January 1996, McPhaul had used all of her vacation and sick time. Nevertheless, Horine’s decision to allow Shock to take accrued leave, and not to allow McPhaul to take leave that had not been accrued, does not evince that Horine was motivated by a "nefarious discriminatory purpose," and this claim fails./5 McPhaul also contends that Horine required her to see more WIC clients than Shock on a daily basis. In support of her contention, McPhaul relies solely on her own observations through a window to Shock’s office, and fails to challenge the scheduling book in the record that demonstrates that the WIC receptionist distributed WIC clients equally to McPhaul and Shock. 
Thus, McPhaul provides no evidence that Horine intentionally assigned more clients to McPhaul, or did so because of her race.

McPhaul's last claimed instance of disparate treatment is that Horine prohibited her from wearing a nursing uniform while she allowed Shock to wear one. According to McPhaul, Horine told her not to wear a uniform because WIC clients feel more comfortable when WIC staff are dressed in casual clothes. McPhaul does not indicate that she requested to wear a uniform, or that she was ever punished for wearing a uniform, or that she ever asked why Shock was apparently allowed to wear a uniform. The uniform was not a factor in her transfer or her termination, and there is no evidence that the uniform was an important issue at WIC. McPhaul just does not show that Horine's policy on uniforms was an adverse employment action. See Southard v. Texas Bd. of Criminal Justice, 114 F.3d 539, 555 (5th Cir. 1997) ("Not every negative employment decision or event is an adverse employment action that can give rise to a discrimination or retaliation cause of action under section 1983."); see also Silk v. City of Chicago, 194 F.3d 788, 800 (7th Cir. 1999). McPhaul also provides no evidence that Horine's policy was motivated by racial animus.

We conclude that McPhaul's claimed instances of discrimination (considered individually and collectively) do not constitute sufficient evidence for a reasonable jury to conclude that Horine discriminated against her because of her race. Thus, McPhaul's disparate treatment claim fails.

2. Hostile environment.

McPhaul also contends that Horine is personally liable for failing to act to stop Shock's alleged campaign of racial harassment. McPhaul does not allege any harassment by Horine, but that Shock, her co-worker, harassed her by making racially sensitive and derogatory remarks in her presence while Horine failed to intervene to rectify the situation.
To establish an individual capacity claim under section 1983 against a supervisory official, there must be a showing that the official was directly responsible for the improper conduct, Wolf-Lillie v. Sonquist, 699 F.2d 864, 869 (7th Cir. 1983), and "knowingly, willfully, or at least recklessly caused the alleged deprivation by [her] action or failure to act." Rascon v. Hardiman, 803 F.2d 269, 274 (7th Cir. 1986). However: [A] defendant’s direct participation in the deprivation is not required. An official satisfies the personal responsibility requirement of section 1983 if she acts or fails to act with a deliberate or reckless disregard of plaintiff’s constitutional rights, or if the conduct causing the constitutional deprivation occurs at her direction or with her knowledge and consent. Id. (quoting Smith v. Rowe, 761 F.2d 360, 369 (7th Cir. 1985)). The plaintiff must also show that the supervisor acted (or failed to act) because of the plaintiff’s race. See Nabozny, 92 F.3d at 453. To prevail on a hostile environment racial harassment claim, the plaintiff must also show that her work environment was both subjectively and objectively hostile./6 See Adusumilli v. City of Chicago, 164 F.3d 353, 361 (7th Cir. 1998) (citing Harris v. Forklift Systems, 510 U.S. 17, 21 (1993)). An objectively hostile environment is one that a reasonable person would find hostile or abusive. [Harris, 510 U.S. at 21]. In determining whether a plaintiff has met this standard, courts must consider all the circumstances, including "the frequency of the discriminatory conduct; its severity; whether it was physically threatening or humiliating; or a mere offensive utterance; and whether it unreasonably interferes with an employee’s work performance." [Id. at 23]. Adusumilli, 164 F.3d at 361. We shall evaluate McPhaul’s claims according to these standards. 
McPhaul alleges that Shock harassed her by discussing racially sensitive subjects and by repeatedly using the racial epithet "nigger" in McPhaul’s presence. Although McPhaul alleges that Shock’s comments occurred on a weekly basis, she presents three specific instances on appeal. In the first instance, Shock repeated to McPhaul a comment (made to Shock by a WIC client) that Horine looked like "a little nigger lady." The second instance involved Shock calling McPhaul’s attention to the fact that a client was a dark-skinned mother who had a lighter-skinned baby. And lastly, Shock told McPhaul that Shock’s family was once harassed by the Ku Klux Klan. According to McPhaul, she complained to Horine about Shock’s derogatory and racially insensitive remarks, and that Horine advised her to "ignore it." But McPhaul also admitted that Horine later separated her from Shock by moving her to her own office. Horine testified that McPhaul never complained to her about Shock’s alleged harassment, and that she never witnessed Shock using the word "nigger." We first consider whether Shock’s remarks created an objectively hostile environment for McPhaul. Shock allegedly used the word "nigger" when she repeated a comment made by a WIC client about Horine,/7 and thus Shock did not direct that epithet at McPhaul or anyone else. When such harassment is directed at someone other than the plaintiff, the "impact of [such] ’second hand harassment’ is obviously not as great as the impact of harassment directed at the plaintiff." Gleason v. Mesirow Financial, Inc., 118 F.3d 1134, 1144 (7th Cir. 1997). Although McPhaul also alleges that Shock used the word "nigger" on a weekly basis, she never claims that Shock directed it at McPhaul or anyone else, which indicates that Shock tended to repeat the epithet out of her own immaturity and insensitivity, rather than racial animus. 
Moreover, McPhaul stated twice in her deposition that she considered Shock’s remarks (especially her use of the word "nigger") to be "offensive," but she never claimed that they interfered with her work performance, or were physically threatening or humiliating. Thus, the "mere utterance of an . . . epithet which engenders offensive feelings in an employee" is not sufficient to establish a hostile working environment. Harris, 510 U.S. at 21 (quoting Meritor Savings Bank, FSB v. Vinson, 477 U.S. 57, 67 (1986)). Shock’s comment about the child’s skin color was understandably offensive to McPhaul, but it was not about McPhaul, and merely demonstrates Shock’s ignorance of the probable consequences of her careless chatter rather than racial hostility. And Shock’s claim that the Ku Klux Klan once harassed her family does not implicate any hostile intent. We conclude, therefore, that McPhaul fails to present sufficient evidence to support a reasonable inference that Shock’s remarks created an objectively hostile working environment. See Adusumilli, 164 F.3d at 361. Moreover, there is insufficient evidence to indicate that Horine deliberately or recklessly intended or allowed Shock’s alleged conduct, or that Horine failed to act because she was motivated by racial animus against McPhaul. The record does not indicate that Horine intended or directed any of Shock’s comments, as they appear to have involved Shock’s spontaneous (and inconsiderate) reactions to what she had observed or heard. And McPhaul admits that Shock’s comments decreased after Horine gave McPhaul her own office. Therefore, McPhaul presents insufficient evidence to indicate that Horine was responsible for Shock’s alleged campaign of harassment, and the hostile environment claim fails./8 We conclude that McPhaul has failed to establish a prima facie case under the ADA because she is not a qualified individual with a disability. 
She has also failed to establish a prima facie case under section 1983 because she has not made a sufficient showing that Horine discriminated against her because of her race. Accordingly, we AFFIRM the district court. /1 While McPhaul does not dispute her performance evaluations directly, she does claim that Horine failed to sufficiently train her for the intake clerk position, and required her to see more clients than other intake clerks. But as we explain in our analysis of McPhaul’s disparate treatment claim, she fails to present any evidence to support these allegations, and the record actually discredits them. /2 The record does contain, however, a January 17, 1996 note from Dr. Van Dellen of the Mayo Clinic that simply states that McPhaul "could return to work January 15, 1996." There is no indication of any work restrictions or of any need for a work accommodation. /3 McPhaul only identifies Marcia Shock, a white intake clerk, as a member of an unprotected class who was allegedly treated more favorably by Horine. But Shock was not similarly situated to McPhaul because Horine rated Shock’s performance as "Average," which is a superior rating to McPhaul’s "Below Average" and "Unsatisfactory" ratings. McPhaul does not challenge Horine’s performance evaluations. Also, at the time of her discharge, McPhaul was paid over $14.00 per hour while Shock was paid $11.00 per hour. /4 Horine claims that neither McPhaul nor Shock made such a request, and thus no such accommodation was granted at all. We note that even if Horine did grant Shock’s request on a first come, first served basis, that would be a legitimate business decision that is beyond our purview. See McCoy v. WGN Continental Broadcasting Co., 957 F.2d 368, 373 (7th Cir. 1992) (this court does not sit as a super personnel department to review an employer’s business decisions). 
/5 And we have already established that McPhaul provided no medical evidence to support her request for a reduced schedule, and thus Horine had no compelling reason to grant it. /6 Because section 1983 claims generally follow "the contours of Title VII claims," we will apply the same "hostile environment" standard that is applied in Title VII cases. King v. Board of Regents of University of Wisconsin System, 898 F.2d 533, 537 (7th Cir. 1990). /7 Horine is white. /8 McPhaul also argues that we must consider Horine’s alleged failure to protect her from Shock’s offensive remarks as further evidence of McPhaul’s disparate treatment claim. Because we conclude that no reasonable jury could find that Shock’s remarks created an objectively hostile environment, or that Horine was somehow motivated by racial animus to endorse them, our consideration of these allegations (individually, and collectively with the other six alleged instances of disparate treatment) does not change our conclusion that McPhaul’s disparate treatment claim fails.
Order entered June 10, 2019 In The Court of Appeals Fifth District of Texas at Dallas No. 05-19-00038-CV BASIL BROWN, Appellant V. ROBERT HAWKINS, Appellee On Appeal from the County Court At Law No. 1 Kaufman County, Texas Trial Court Cause No. 16C-0127 ORDER Appellant has been declared a vexatious litigant and is required to obtain permission from the local administrative judge to file this appeal. See TEX. CIV. PRAC. & REM. CODE ANN. § 11.103(a). Before the Court is appellant’s May 28, 2019 motion to proceed in this appeal. Appellant states that he attempted to obtain an order from the Honorable Casey Blair, Presiding Judge of the 86th Judicial District Court. According to appellant, Judge Blair informed appellant that he was not the appropriate local administrative judge to grant appellant permission to appeal. Under section 25.1312(f) of the Government Code, a district judge serves as the local administrative judge for the district and statutory county courts in Kaufman County. See TEX. GOV’T CODE ANN. § 25.1312(f). Although the webpage for the 422nd Judicial District Court states that Judge Michael Chitty is the local administrative judge, the current local administrative district judge for Kaufman County is Judge Blair. See https://www.txcourts.gov/judicial-directory/. Accordingly, we ORDER Judge Blair to consider and sign a written order on appellant’s request for permission to appeal WITHIN TWENTY DAYS of the date of this order. If Judge Blair signs an order denying appellant permission to appeal, appellant may apply for a writ of mandamus with this Court not later than the thirtieth day after the date of the order. See TEX. CIV. PRAC. & REM. CODE ANN. § 11.102(f). We ORDER Rhonda Hughey, Kaufman County District Clerk, to file, WITHIN TWENTY-FIVE DAYS of the date of this order, a supplemental clerk’s record containing Judge Blair’s order. We DIRECT the Clerk of this Court to send a copy of this order to Judge Blair; Ms. Hughey; and all parties. 
/s/ BILL WHITEHILL JUSTICE
Introduction {#Sec1} ============ Implant dentures are supported by dental implants, which are inserted into the alveolar bone and connect directly to the bone without intervening soft tissue, following osseointegration^[@CR1]--[@CR3]^. Osseointegration is very important for an implant to maintain its stability and provide occlusal support^[@CR4]^. Occlusal force is transmitted from the implant to the alveolar bone by functional loading during mastication. The biomechanical response of the bone to proper occlusal force determines the osseointegration of the implant and bone remodeling after implantation^[@CR5]--[@CR8]^. A failed osseointegration would decrease the stability of the implant and cause the failure of the restoration. Implant dentures may suffer from impact, particularly during activities such as sports and physical training. Impact is a complex phenomenon that occurs when two or more bodies undergo a collision^[@CR9]^. Impact is characterized by a very brief duration, high peak forces, rapid dissipation of energy, and large accelerations and decelerations^[@CR9]^. When the pulse duration of the impact load is in the range of microseconds, the force is transmitted and reflected, usually in the form of stress waves, through the composite structure consisting of the implant, the interface and the alveolar bone, each component of which has a different impedance. If the dental implant and alveolar bone are unable to buffer the impact through deformation of their structure, the osseointegration at the implant-bone interface and the bone microstructure around the implant would be damaged. However, the mechanism of trauma is still unclear. Bone is a functional, remodeling biomaterial. Bone remodeling is usually defined as a process where bone gradually alters its morphology when mechanical signals are sensed and conducted by osteocytes through the lacunar-canalicular system to adapt to the biomechanical environment^[@CR10]--[@CR16]^. 
Bone remodeling includes two opposing processes: resorption and deposition. Sclerostin, secreted by osteocytes, is a glycoprotein encoded by the SOST gene^[@CR17],\ [@CR18]^. Several studies have reported that sclerostin plays important roles in the anabolic response of bone to mechanical stimulation through the Wnt/β-catenin pathway and the catabolic response of bone to mechanical stimulation through the RANK/RANKL pathway^[@CR19]--[@CR22]^. However, the expression of sclerostin in peri-implant bone following impact is still unclear, and the correlation between the expression of sclerostin and bone remodeling needs to be studied. In order to address these questions, we established an animal model with implants under impact load. The characteristics of the microdamage and the expression of sclerostin, β-catenin and RANKL were investigated to explore the mechanism of impact damage and the subsequent repair of bone around the implant. These results could help in the evaluation of alveolar bone trauma around implants and provide guidance for the management of dental implants following impact. Materials and Methods {#Sec2} ===================== Animal Studies {#Sec3} -------------- ### Animal Preparation {#Sec4} Thirty 20- to 22-week-old female New Zealand rabbits weighing approximately 3.5--4.0 kg were purchased from the animal centre of the Fourth Military Medical University (Shaanxi, China) and housed for 1 month. The animals were allowed access to water and pelleted commercial diet ad libitum. The weights were monitored weekly throughout the study. ### Implant Insertion {#Sec5} Femoral distal condyles were chosen as a standard site to insert implants as previously described^[@CR23]^, and the implantation surgeries were performed under general anesthesia. A skin incision was made over the lateral femur-tibia joint, and the femoral distal condyle was exposed after the muscle and fascia tissues were peeled away. 
The implant, which was 2.3 mm in diameter and 5 mm in length, was inserted into the prepared hole with an implant placement torque of 5--10 N·cm. The incision was closed with sutures in a single layer. An intramuscular injection of penicillin (200 thousand IU kg^−1^) was given to each animal once daily for 5 days postoperatively to prevent infection, and the animals were allowed unrestricted activity. ### Impact Loading {#Sec6} Three months later, impact experiments were performed after favorable osseointegration occurred. The animals were randomly divided into two experimental groups subjected to impact load and one control group without impact. The impact protocol^[@CR24],\ [@CR25]^ is illustrated in the Supplementary Fig. [S1](#MOESM1){ref-type="media"}. During impact loading, a 25 g or 50 g impact mass was dropped from a height of 1 m onto a pressure sensor attached to an implant. The masses were arrested immediately to prevent multiple impacts. The voltage signals received from the pressure sensor were magnified by a charge amplifier (10 N/V), and the impulse waveforms were captured with an oscilloscope. Animals in the experimental groups received a final impact load of 500 or 1000 N in 0.2 ms, chosen on the basis of our pre-experiments to produce observable trabecular microdamage around the implant while preventing detachment of the implant (Fig. [1](#Fig1){ref-type="fig"}). Then, animals were sacrificed on day 0, 7, 14, or 28, and the implants and surrounding bone were harvested for micro-CT, histomorphometry, immunofluorescence (IF) and RT-qPCR analysis. The sample sizes at each time point are illustrated in Table [1](#Tab1){ref-type="table"}. In the experiment, one femur was randomly selected for micro-CT, histomorphometry and immunofluorescence, and the other femur from the same animal was used for RT-qPCR. Thus, the total sample size was 60, and n (Table [1](#Tab1){ref-type="table"}) indicates the number of femurs from different rabbits. 
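As a rough illustration of the quantities involved in this drop-test protocol, the sketch below converts the drop height to impact velocity and kinetic energy, and converts a digitized oscilloscope trace to force via the 10 N/V amplifier gain. Only the drop height, masses, and gain come from the text; the sample voltage trace and its timing are invented purely for illustration.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
HEIGHT = 1.0      # drop height, m (from the protocol above)
GAIN = 10.0       # charge-amplifier gain, N per volt

def impact_velocity(h):
    """Velocity of the free-falling mass at the moment of impact."""
    return math.sqrt(2 * G * h)

def kinetic_energy(mass_kg, h):
    """Energy delivered by the falling mass, neglecting friction."""
    return 0.5 * mass_kg * impact_velocity(h) ** 2

def force_trace(voltages):
    """Convert oscilloscope voltages to forces via the amplifier gain."""
    return [v * GAIN for v in voltages]

def impulse(forces, dt):
    """Trapezoidal integral of the force pulse, in N*s."""
    return sum((forces[i] + forces[i + 1]) / 2 * dt
               for i in range(len(forces) - 1))

v = impact_velocity(HEIGHT)
print(f"impact velocity: {v:.2f} m/s")                     # ~4.43 m/s
print(f"25 g mass energy: {kinetic_energy(0.025, 1):.3f} J")
print(f"50 g mass energy: {kinetic_energy(0.050, 1):.3f} J")

# A made-up triangular 0.2 ms pulse peaking at 100 V (i.e., 1000 N),
# sampled every 0.05 ms:
forces = force_trace([0, 50, 100, 50, 0])
print(f"peak force: {max(forces):.0f} N")
print(f"impulse: {impulse(forces, 0.05e-3):.4f} N*s")
```

The impulse of such a short pulse is small in absolute terms, which is why the damage concentrates at the stiff implant-bone interface rather than producing gross displacement.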
In order to investigate the characteristics of microdamage, the samples of the two experimental groups at 0d were only used for micro-CT and histomorphometry analysis. The 1000 N group was chosen for the study of peri-implant bone remodeling following impact due to its more typical microdamage. The sample size of the control group for statistical analysis was the sum of that at every time point, as the control group was not treated with an impact load.

Figure 1. Impulse waveforms. Animals received a final impact load of 500 N (**a**) or 1000 N (**b**) in 0.2 ms.

Table 1. Sample Sizes at Each Time Point.

| Day | Analysis                          | 500 N | 1000 N | Control |
|-----|-----------------------------------|-------|--------|---------|
| 0   | Micro-CT & Histology & IF/RT-qPCR | 6/−   | 6/−    | 2/2     |
| 7   | Micro-CT & Histology & IF/RT-qPCR | −     | 5/5    | 2/2     |
| 14  | Micro-CT & Histology & IF/RT-qPCR | −     | 5/5    | 2/2     |
| 28  | Micro-CT & Histology & IF/RT-qPCR | −     | 6/6    | 2/2     |

^n^ indicates the number of femurs from different rabbits.

Micro-computed Tomography Analysis {#Sec7} ---------------------------------- Bones containing implants were dissected, fixed in 4% paraformaldehyde and scanned by micro-CT (Inveon Research Workplace 2.2, Siemens Inc., Germany) at a resolution of 19.64 μm with an X-ray voltage of 80 kV and a current of 50 μA. The scanner software (Inveon Acquisition Workplace 2.2, Siemens Inc., Germany) was applied for image reconstruction and analyses. The bone around the implant with 0.5 mm thickness and 5.5 mm length (including the 5 mm length of the implant and 0.5 mm of bone under the implant) was chosen as the region of interest (ROI) in each sample for microstructure analyses^[@CR26]^ (Fig. [2a](#Fig2){ref-type="fig"}), and parameters such as trabecular bone volume/total volume (Tb.BV/TV, %), trabecular number (Tb.N, mm^−1^), trabecular thickness (Tb.Th, mm), trabecular separation (Tb.Sp, mm) and bone mineral density (BMD, mg/cc) were measured. Figure 2. Results of Micro-CT scan and analyses. (**a**) Micro-CT images of implant and bone 3 months after implantation show a favorable osseointegration. 
(**b**) Damage of peri-implant bone under impact load derived from micro-CT scan. Red arrows indicate fractured trabeculae (experiment n = 6, control n = 8). (**c**) Micro-CT analysis of BV/TV, Tb.Th, Tb.Sp and BMD in the ROI. Values are expressed as means ± SD. \*P \< 0.05 vs. the control group and ^\#^P \< 0.05 vs. the 500 N group. Histomorphometry and Immunofluorescence {#Sec8} --------------------------------------- Undecalcified samples were prepared for subsequent histomorphometry studies. Samples were cleaned of soft tissue, dehydrated in graded alcohols and embedded in methyl methacrylate. Then, sections of 200 μm in thickness were cut longitudinally, ground to 20 μm and stained with Van Gieson (VG) and Hematoxylin-Eosin (H&E) for microscopic examination. Images were observed and captured with a light microscope (DMI6000, Leica Inc., Germany). For sclerostin immunofluorescence, sections were washed in phosphate buffered saline (PBS) and blocked in goat serum for 10 min at room temperature. Endogenous peroxidases were quenched with 3% H~2~O~2~, and sections were incubated in rabbit sclerostin polyclonal antibody (Bioss Inc., China) for 24 h at 4 °C. After washing in PBS, sections were incubated with secondary antibody labeled with FITC. Images were captured with a confocal laser scanning microscope (CLSM) (Fluo View FV-1000, Olympus Inc., Japan). Fluorescence intensity was measured in five areas of each section using the CLSM software, and the average was calculated for statistical analyses. RNA Extraction and RT-qPCR {#Sec9} -------------------------- Bone tissue at the base of the implant was snap frozen and crushed in liquid N~2~. According to the manufacturer's instructions, total RNA was extracted using Trizol reagent and was dissolved in DEPC H~2~O. cDNA was synthesized using a 10 μl reverse transcription reaction mixture composed of 0.5 μg total RNA, 2 μl 5× PrimeScriptTM Buffer, 0.5 μl PrimeScriptTM RT Enzyme Mix I and 0.5 μl random primer. 
Partial sequences of SOST, β-catenin, RANKL and β-actin in the reverse transcribed cDNA were amplified using a fluorescence RT-qPCR instrument (RG-3000, Gene Inc., Australia). The forward and reverse primer sequences used for amplification are listed in Supplementary Table [S2](#MOESM1){ref-type="media"}. Relative mRNA expression levels were standardized to β-actin expression for quantified analyses using the ∆Ct method. Ethics {#Sec10} ------ This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the Animal Center of the Fourth Military Medical University. The protocol was approved by the Laboratory Animal Care & Welfare Committee, Fourth Military Medical University (No: 20150201). All surgeries were performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering. Statistical Analysis {#Sec11} -------------------- The data were presented as mean ± standard deviation and analyzed using SPSS19.0 (IBM Inc., USA) for one-way ANOVA statistical analyses. Statistical significance was considered as p \< 0.05. Results {#Sec12} ======= Bone-implant Osseointegration {#Sec13} ----------------------------- Implants inserted into the rabbit femoral distal condyle were stable by 3 months after implantation. Micro-CT scan was performed to study the osseointegration at this time. The image showed that the areas around the implant were full of trabeculae. The intact and thick trabeculae were arranged regularly and distributed around and at the bottom of the implant. A good interface was observed between the implant and the bone (Fig. [2a](#Fig2){ref-type="fig"}). The Microdamage of Bone around the Implant and the Failure of the Implant-bone Interface {#Sec14} ---------------------------------------------------------------------------------------- ### Histological Study {#Sec15} No obvious change in the cortical bone around the implant was observed after impact (Fig. 
[3a](#Fig3){ref-type="fig"}). However, histological study showed that the osseointegration failed at the interface between the bone and the implant. Fractured trabeculae were observed around and at the bottom of the implant. Figure 3. The results of bone damage under impact. (**a**) Morphology of the cortical bone around the implant under impact load. (**b**) Damage to the osseointegration among the threads and at the bottom of the implant with VG staining (10×). Yellow arrows show trabecular fracture at the bottom of the implant. (**c**) Histomorphometry of impacted peri-implant bone with H&E staining (10×). Black arrows indicate trabecular fracture. The implant and bone stained with VG are shown in Fig. [3b](#Fig3){ref-type="fig"}. In this figure, new bone appeared around the implant threads and at the bottom of the implant by 3 months after insertion, and trabeculae were regularly arranged in the control group, which indicated that the osseointegration was favorable. After impact loading, bone tissues in some areas were broken away from the implant, and the trabeculae at the bottom of the implant were fractured. This demonstrated that the interface between the implant and the surrounding tissues was incomplete in the experimental groups. Similar microdamage in peri-implant bone is shown in Fig. [3c](#Fig3){ref-type="fig"} with H&E staining. Trabeculae around the implant were fractured and disordered under impact loading, while trabeculae in the control group were regularly arranged. Damage to the interface and surrounding bone was more severe in the 1000 N group than in the 500 N group. ### Micro-CT Analyses {#Sec16} The image of micro-CT shows similar failed osseointegration and fractured trabeculae around and at the bottom of the implant (Fig. [2b](#Fig2){ref-type="fig"}). To describe the microdamage quantitatively, micro-CT analyses were performed in this study. 
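As a sketch of how such morphometric parameters can be derived from a segmented scan, the toy example below computes BV/TV as the fraction of bone voxels in the ROI and estimates Tb.Th with the classical parallel-plate stereology model (Tb.Th = 2·BV/BS). This is an illustration only, not the algorithm used by the Inveon analysis software; the 4×4×4 mask is invented.

```python
VOXEL = 0.01964  # voxel size in mm (19.64 um, matching the scan resolution)

def bv_tv(mask):
    """Bone volume fraction: bone voxels / total voxels in the ROI."""
    total = bone = 0
    for plane in mask:
        for row in plane:
            for v in row:
                total += 1
                bone += v
    return bone / total

def surface_faces(mask):
    """Count exposed faces of bone voxels (a crude bone-surface estimate)."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    faces = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    zz, yy, xx = z + dz, y + dy, x + dx
                    out = not (0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx)
                    if out or not mask[zz][yy][xx]:
                        faces += 1
    return faces

def tb_th(mask):
    """Parallel-plate model: Tb.Th = 2 * BV / BS, in mm."""
    bone = sum(v for plane in mask for row in plane for v in row)
    bv = bone * VOXEL ** 3
    bs = surface_faces(mask) * VOXEL ** 2
    return 2 * bv / bs

# Toy 4x4x4 ROI with a solid 2x2x2 "trabecula" in one corner.
mask = [[[1 if (z < 2 and y < 2 and x < 2) else 0 for x in range(4)]
         for y in range(4)] for z in range(4)]

print(f"BV/TV = {bv_tv(mask):.3f}")          # 8 / 64 = 0.125
print(f"Tb.Th = {tb_th(mask) * 1000:.1f} um")
```

Production tools use more robust estimators (e.g., sphere-fitting for Tb.Th), but the voxel-counting logic above captures what BV/TV and the plate-model thickness mean.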
The results demonstrated that BV/TV and Tb.Th decreased, while Tb.Sp and BMD increased compared with the control group (p \< 0.05). The results indicated that the bone mass decreased after impact damage, but BMD increased when the trabeculae were fractured and compressed. There were statistically significant differences in BV/TV and Tb.Sp between the test groups (p \< 0.05), which indicated that the damage differed under the two impact loads (Fig. [2c](#Fig2){ref-type="fig"}). The Peri-implant Bone Healing Process: From Impact Damage to Remodeling {#Sec17} ----------------------------------------------------------------------- To investigate the remodeling of peri-implant bone after impact damage, bone structures along the interface between the implant and the bone were studied at different time points. Cancellous bones around the implant are shown in Fig. [4](#Fig4){ref-type="fig"}. Figure 4. Histomorphometry of the osseointegration among the threads and peri-implant bone in the remodeling process following impact injury with VG and H&E staining (10×). Black arrow indicates trabecular fracture. At day 7 after impact, the gap between the implant and the bone was observed in VG-stained sections and the structure of the trabeculae stained with H&E was broken (Fig. [4](#Fig4){ref-type="fig"}). The gaps attributed to impact damage were extended, and the trabeculae were scattered at day 14, which indicated that bone resorption may have occurred in this area. By the last time point, the gaps had disappeared, and regularly arranged trabeculae were observed. There was no significant difference in the microstructure compared with the control group, which demonstrated that the osseointegration had completely reformed by day 28. Micro-CT scanning and analyses were applied to quantify the changes in trabecular microstructure (Fig. [5](#Fig5){ref-type="fig"}). It could be observed that BV/TV and Tb.N decreased while Tb.Sp increased 14 days after impact (p \< 0.05). 
BMD decreased at days 7 and 14 after a temporary increase at day 0. There was no significant difference in these parameters between the experimental and control group by day 28. These results were consistent with the histomorphological results. Figure 5. Micro-CT scan and analyses of peri-implant bone during remodeling after impact. (**a**) Remodeling of peri-implant bone at different time points after impact derived from micro-CT scan. (**b**) Micro-CT analysis of BV/TV, Tb.N, Tb.Sp and BMD in the ROI at different time points (7d and 14d n = 5, 28d n = 6, control n = 8). Values are expressed as means ± SD. \*P \< 0.05 vs. the control group, ^\#^P \< 0.05 vs. 7d, and ^&^P \< 0.05 vs. 14d. Immunofluorescence Staining of Sclerostin in Trabeculae after Impact {#Sec18} -------------------------------------------------------------------- Immunofluorescence staining was used to study the expression of sclerostin during the process of remodeling following bone impact damage. Immunofluorescence images are shown in Fig. [6a](#Fig6){ref-type="fig"}. In this figure, the sclerostin protein staining is indicated by the green fluorescence points, and sclerostin protein expression is measured as fluorescence intensity in Fig. [6d](#Fig6){ref-type="fig"}. From the results (Fig. [6d](#Fig6){ref-type="fig"}), it can be observed that the expression of sclerostin was higher than that in the control group on days 7 and 14 after impact (p \< 0.05). During the process of remodeling after damage, it increased significantly from day 7 to day 14 (p \< 0.05). However, there was no obvious difference in the expression of sclerostin between the experimental and control group by day 28. Figure 6. The expression of sclerostin, β-catenin and RANKL in peri-implant bone during remodeling after impact. (**a**) Immunofluorescence staining of sclerostin in trabeculae at different time points after impact. White arrow indicates the expression of sclerostin. 
(**b**) Extracted total RNA in agarose gel electrophoresis. In this figure, three bands representing 28S, 18S and 5S rRNA are clearly visible, which demonstrates that the extracted total RNA was not degraded. (**c**) The SOST PCR amplification curve. The marker in this figure represents the threshold of SOST in PCR amplification. (**d**) The fluorescence intensity of sclerostin immunofluorescence staining. (**e**) The expression of SOST, β-catenin and RANKL mRNA (7d and 14d n = 5, 28d n = 6, control n = 8). Values are expressed as means ± SD. \*P \< 0.05 vs. the control group, ^\#^P \< 0.05 vs. 7d, and ^&^P \< 0.05 vs. 14d. The Expression of SOST, β-catenin and RANKL mRNA in Bone Tissue around the Implant after Impact {#Sec19} ----------------------------------------------------------------------------------------------- RT-qPCR was used to quantify the expression of SOST and to correlate sclerostin expression with β-catenin and RANKL. Figure [6b](#Fig6){ref-type="fig"} shows the extracted total RNA in agarose gel electrophoresis and Fig. [6c](#Fig6){ref-type="fig"} illustrates the SOST PCR amplification curve. The expression of SOST, β-catenin and RANKL mRNA was quantified using the value of 2^−∆Ct^ in Fig. [6e](#Fig6){ref-type="fig"}. In the figure, the expression of SOST and RANKL mRNA increased rapidly after impact compared with the control group, and the values reached a maximum at day 14 (p \< 0.05). Then, the expression of SOST and RANKL mRNA decreased gradually to control group levels at day 28. The expression of SOST mRNA was similar to sclerostin protein expression based on the immunofluorescence staining results. The expression of β-catenin mRNA was opposite to that of SOST and RANKL mRNA, and it did not return completely to the level of the control group until day 28 (p \< 0.05). 
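The 2^−∆Ct^ normalization described above can be sketched in a few lines: each target Ct is referenced to the β-actin Ct from the same sample, and a lower target Ct (earlier amplification) yields a higher relative expression. The Ct values below are hypothetical and chosen only to illustrate the arithmetic.

```python
# Delta-Ct relative quantification: expression = 2^-(Ct_target - Ct_reference).
# All Ct values here are invented for illustration; they are not data
# from this study.

def relative_expression(ct_target, ct_actin):
    d_ct = ct_target - ct_actin
    return 2 ** (-d_ct)

samples = {
    # sample: (SOST Ct, beta-actin Ct) -- hypothetical values
    "control": (26.0, 18.0),
    "day 14":  (24.0, 18.0),
}

for name, (ct_sost, ct_actin) in samples.items():
    print(f"{name}: 2^-dCt = {relative_expression(ct_sost, ct_actin):.5f}")

# Each PCR cycle roughly doubles product, so a Ct lower by 2 cycles
# corresponds to a ~4-fold higher starting template.
fold = relative_expression(24.0, 18.0) / relative_expression(26.0, 18.0)
print(f"fold change vs control: {fold:.1f}")   # 4.0
```

This is why normalized 2^−∆Ct^ values, rather than raw Ct values, are compared across groups.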
Discussion {#Sec20} ========== When an implant denture experiences an impact load, there will be microdamages in the peri-implant bone and failure of the implant-bone interface. In addition, the expression of proteins will be changed to modify the bone structure correspondingly. In this study, implants were inserted into the femoral distal condyles of New Zealand white rabbit, and an impact load was applied to the implants after osseointegration occurred. The features of bone damage around the implant and the remodeling of bone were studied through micro-CT analyses and hard tissue slicing with VG and H&E staining. Further, the expression of sclerostin, β-catenin and RANKL were analyzed by immunofluorescence and RT-qPCR during the process of remodeling following bone damage. The results showed that there was no significant change in the cortical bone around the implant, but debonding at the interface and impaired osseointegration in specific areas around the implant were observed. Microdamage in cancellous bone was also observed around the implant. The expression of sclerostin, β-catenin and RANKL correlated with the bone damage and process of remodeling. These data indicate that sclerostin may be involved in bone formation and resorption caused by impact through regulating the Wnt/β-catenin and RANKL/RANK pathways. The results of this study reveal the characteristics of the impact damage to the bone around the implant, and provide a reference for damage assessment and clinical treatment of patients with impact loading. Impact load is a transient load that transmits through or reflects between an implant and bone in stress waves when the pulse duration of the load is in the microsecond range. The stress waves not only spread through the implant-bone interface but also reflect at the interface when they pass through the anisotropic composite structure of the implant and bone^[@CR9],\ [@CR27]^. 
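The role of the impedance mismatch in this reflection can be illustrated with the standard one-dimensional acoustic relations, where the amplitude reflection coefficient at an interface is R = (Z2 − Z1)/(Z2 + Z1). The impedance inputs below are rough literature-range figures for titanium and cortical bone, not measurements from this study.

```python
# One-dimensional stress-wave transfer at a bimaterial interface.
# Material properties are illustrative ballpark values only.

def acoustic_impedance(density, wave_speed):
    """Z = rho * c, in kg/(m^2 s) (Rayl)."""
    return density * wave_speed

def reflection(z1, z2):
    """Amplitude reflection coefficient for a wave going from 1 into 2."""
    return (z2 - z1) / (z2 + z1)

def transmitted_energy_fraction(z1, z2):
    """Fraction of incident wave energy crossing the interface."""
    return 1 - reflection(z1, z2) ** 2

# Illustrative values: titanium (~4500 kg/m^3, ~6100 m/s) and
# cortical bone (~1900 kg/m^3, ~3500 m/s).
z_ti = acoustic_impedance(4500, 6100)     # ~27.5 MRayl
z_bone = acoustic_impedance(1900, 3500)   # ~6.7 MRayl

r = reflection(z_ti, z_bone)
print(f"amplitude reflection coefficient: {r:.2f}")
print(f"energy transmitted into bone: "
      f"{transmitted_energy_fraction(z_ti, z_bone):.0%}")
```

With this mismatch, a substantial part of the wave energy is reflected back into the implant and dissipated along the interface, which is consistent with the damage concentrating there rather than in the cortical shell.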
The impact energy is spread and dissipated quickly along the interface, which may cause the failure of the implant-bone interface and surrounding bone when the energy cannot be absorbed and buffered by this composite structure. However, the characteristics of impact damage in the bone around the implant have not been reported. In this study, microdamage of peri-implant bone under impact was investigated. The results showed that the bone tissue had broken away from the implant, and fractured trabeculae around the implant were observed histologically, although no obvious change was found in the cortical bone around the implant. Micro-CT analyses demonstrated that trabecular thickness (Tb.Th) decreased while trabecular space (Tb.Sp) increased after the cancellous bone was fractured and exposed to the impact compression. This change in trabecular structure also led to a decrease in bone volume (BV) and its percentage of total volume (BV/TV). Meanwhile, BMD increased accordingly. These damages correlated significantly with the impact energy. The results indicate that the failure of the interface and cancellous bone may be attributed to the transmission of stress waves, and they reveal the characteristics of bone damage caused by impact in this composite structure. Together, these results suggest that injuries at the implant-bone interface and in the cancellous bone around the implant are invisible but should be given more attention to evaluate the condition of the impact damage and provide guidelines for clinical treatment. To clinically maintain implants after they sustain an impact load, it is also important to study the processes of bone remodeling and osseointegration formation. In the micro-CT analysis, trabecular number (Tb.N) is defined as the number of trabecular intersections between bone and other tissues in the ROI. Thus, it is usually used to describe changes in trabeculae during bone remodeling. 
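The morphometric quantities above (BV, TV, BV/TV) fall directly out of a segmented micro-CT volume. As a minimal sketch, with made-up voxel labels and voxel size (micro-CT software computes these from the real segmented image stack):

```python
def bone_morphometry(mask, voxel_volume_mm3):
    """Bone volume (BV), total volume (TV) and the BV/TV fraction for a
    region of interest given as a flat list of 0/1 voxel labels (1 = bone)."""
    bv = sum(mask) * voxel_volume_mm3   # bone volume, mm^3
    tv = len(mask) * voxel_volume_mm3   # total ROI volume, mm^3
    return bv, tv, bv / tv

# Toy 8-voxel ROI, half of it bone, assuming a 10 um isotropic voxel.
bv, tv, bvtv = bone_morphometry([1, 1, 1, 1, 0, 0, 0, 0], voxel_volume_mm3=1e-6)
print(bvtv)  # 0.5
```

Fractured, thinned trabeculae reduce the bone voxel count, which is exactly the drop in BV and BV/TV that the micro-CT analysis reports.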
In this study, fractured trabeculae were observed first after impact, which would lead to an increase in the number of intersections and thus in Tb.N. However, Tb.N decreased as Tb.Sp increased in this study. The main reason may be that bone loss occurred around the implant, a phenomenon indicated by the reduced BV/TV. The change in BMD also reflected the structural alteration of the trabeculae: BMD decreased at days 7 and 14 after a temporary increase at day 0. All these results demonstrate that bone remodeling may present mainly as resorption in the first 14 days after impact. The process of remodeling was also confirmed by the histological results, in which the implant-bone interface was damaged and marrow was found growing into the gap between the implant and the bone. From day 14 to 28, the bone mass and BMD surrounding the implant increased, with increases in both Tb.N and BV/TV, and intact, regularly arranged trabeculae were gradually observed histologically. By day 28, there was no significant difference in bone mass or BMD compared with the control group. The microstructure of the peri-implant bone returned to normal morphology, and favorable osseointegration re-formed. These results indicate that bone formation dominates the process of bone remodeling from day 14 to 28. From these results, it can be observed that the characteristics of the impact damage to bone around the implant differed from those of clinically normal surgical insertion, but the processes of bone remodeling were similar with respect to the formation of osseointegration around the implant. Therefore, the principles of treatment after insertion surgery could serve as a reference for the management of impact-loaded patients. Sclerostin, encoded by the SOST gene, is a glycoprotein secreted by osteocytes^[@CR28]--[@CR30]^. 
Previous studies have reported that there are alterations in its expression in animal models of bone defects or fractures^[@CR31]--[@CR34]^. Changes in the microstructure could cause fluid flow in the bone matrix, which would change the mechanical environment surrounding the osteocytes^[@CR11]^. This mechanical signal is sensed and conducted by osteocytes through the lacunar-canalicular system^[@CR15],\ [@CR35],\ [@CR36]^, and sclerostin is then secreted by osteocytes to adapt to the biomechanical environment^[@CR37]--[@CR39]^. In this study, the expression of sclerostin continued to increase after impact and reached a maximum at day 14. It then decreased gradually to normal levels at day 28. From these results, it can be observed that the expression of sclerostin is related to the process of bone damage and remodeling. Other studies have reported that sclerostin is an effective antagonist of bone formation^[@CR40]--[@CR42]^. The cystine-knot domain in sclerostin allows the protein to bind competitively to the Wnt co-receptor LRP5/6 of the osteoblast. This reduces β-catenin nuclear translocation. As a result, osteoblast activity is decreased, and new bone formation and mineralization are inhibited^[@CR43],\ [@CR44]^. Meanwhile, sclerostin can promote the secretion of RANKL from osteoblasts, which stimulates the differentiation of osteoclasts from precursors and accelerates bone resorption^[@CR22]^. The relationship of sclerostin with the Wnt/β-catenin and RANKL/RANK pathways is illustrated in Fig. [7](#Fig7){ref-type="fig"}. Figure 7. The relationship of sclerostin with Wnt/β-catenin and RANKL/RANK. In this study, the expression of RANKL mRNA continued to increase with the increase of SOST, while β-catenin decreased over the first 14 days after impact. Then, the expression of RANKL and SOST mRNA decreased gradually to the level of the control group, while, in contrast, β-catenin increased by day 28. 
The aforementioned changes in protein expression and mRNA levels were consistent with the behavior of peri-implant bone resorption and formation in the micro-CT and histological results. These results suggest that sclerostin may be involved in both bone anabolism and catabolism in response to mechanical stimulation by regulating the Wnt/β-catenin and RANKL/RANK pathways, respectively. The expression of β-catenin remained lower than that of the control group at day 28, indicating that it had not yet returned to the normal value. The main reason may be that the cancellous bone needs longer to return to normal completely, and β-catenin may continue to increase during this process. In future experiments, the expression of β-catenin and RANKL should be investigated with sclerostin expression down-regulated, and the change in bone microstructure should be analyzed, to determine whether sclerostin serves as a potential target for regulating bone remodeling after damage through the Wnt/β-catenin and RANKL/RANK pathways. This would provide a new therapeutic target for dental implant patients, improving osseointegration by regulating the expression of sclerostin. Electronic supplementary material ================================= {#Sec21} Supplementary Information **Electronic supplementary material** **Supplementary information** accompanies this paper at doi:10.1038/s41598-017-06867-9 **Publisher\'s note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This study was supported by the National Natural Science Foundation of China (Contract no. 11672327). D.X. and X.H. contributed to the design of the study. D.X., L.Z., A.B., F.F. and D.C. contributed to the acquisition, collection and assembly of data. D.X. and X.H. contributed to the statistical analyses of data and wrote the main manuscript text. D.X., X.H., W.Y. and L.K. contributed to revising the manuscript. 
All authors reviewed the manuscript and approved the final version to be submitted. Competing Interests {#FPar1} =================== The authors declare that they have no competing interests.
{ "pile_set_name": "PubMed Central" }
Description Yield: 4,200* * Page yield is an approximation of the number of standard pages that can be printed with one cartridge, usually measured at 5% page coverage. 5% coverage corresponds to a standard 8.5 x 11 inch page with a light letterhead, an address and three paragraphs of double-spaced text. If you print mixed content such as text and graphics or webpages, your page coverage is closer to 15%, so the number of pages printed with each cartridge will be much lower. If you are printing photos, your page coverage is closer to 100%, and the number of printed photos will be far smaller than the number of pages you could print with standard text documents.
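A back-of-the-envelope way to read that rating, assuming (simplistically) that ink consumption scales linearly with page coverage, so the figures below are rough estimates only:

```python
def estimated_pages(rated_yield, actual_coverage_pct, rated_coverage_pct=5.0):
    """Rescale a cartridge's rated page yield to a different ink coverage.
    Assumes ink use is proportional to coverage (a simplification)."""
    return rated_yield * rated_coverage_pct / actual_coverage_pct

print(estimated_pages(4200, 5.0))    # 4200.0 standard text pages
print(estimated_pages(4200, 15.0))   # 1400.0 mixed text/graphics pages
print(estimated_pages(4200, 100.0))  # 210.0 near-full-coverage photo prints
```

Real-world yields also vary with printer model and print settings, so this linear estimate is only a rough guide.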
{ "pile_set_name": "Pile-CC" }
When sexual freedom comes under the gun by politicians in our nation’s capital, who is it that speaks to defend our freedoms? Quite often it is the Woodhull Sexual Freedom Alliance. But who are they? What do they do? And how can we support them? Metis Black interviews Woodhull's Executive Director, Ricci Joy Levy, for the inside scoop. Asking for something new in the bedroom is a vulnerable position to be in. This is especially true when a couple has been together with a relatively stable or ‘normal’ sex life for many years. How do you start a new sexual conversation with a long-time partner? All too many times we find ourselves looking for simple solutions to make things in our lives easier - especially when it comes to sex! Things should feel great, be safe, and be simple for us to get to climax. It’s easy to get overwhelmed when looking at the array of sex toys available on the market, but here at Tantus we design toys with care. Every design element has been created with the intention of driving sensation and orgasms. Here’s our guide to exploring just a few of the anal toys we have in our line. For 15 years I’ve talked to anyone and everyone about toxic chemicals in our sex toys. Many in the novelty industry, manufacturers, store owners, and educators alike heard me mention this funny word and then spell it for them: p-h-t-h-a-l-a-t-e-s, and it may have been the first time they had heard about it at all. Metis Black started Tantus with the goal of shaping the pleasure products industry into a safer, educated marketplace. As a pioneer in silicone toy manufacturing, Black says she experienced resistance but now sees the fruit of her labor in the increased usage of body-safe materials. When you first decide to try out sex toys, you are most likely going to feel overwhelmed. The first step is to accept that with your first few sex toy purchases, there will inevitably be some choices that you end up not liking. 
To minimize the chances of that, here are some tips for buying your first sex toys. A doctor is sworn to keep your confidence. Any issue you bring to them, sex or otherwise, will be private. This makes it a bit easier to confide the most personal things with them. But sex can complicate any conversation. Ever blurt "I love you" in the middle of sex with someone you barely knew? Are you a consummate creator of 6 week relationships? Do you stay years in a relationship that's bad for you? If any of these are true, you can blame your brain chemistry. Tantus, Inc. just refined their prostate toy offerings, taking prostate massage to the next level. “The Prostate Play by Tantus is evolution in motion,” said Mr. Will of mrwillshouseofthrills.com. “Different than...
{ "pile_set_name": "Pile-CC" }
Post Microsoft's uneventful Windows 8.1 launch, the company began selling its Surface 2, Surface Pro 2, and the barrage of accessories that go with them, in the US. The lineup begins with the Surface 2, a 10-inch class tablet running Windows 8.1 RT atop Qualcomm Snapdragon 800-based hardware, which starts at $449.99 for the 32 GB variant, and $549.99 for the 64 GB variant. A notch above the Surface 2 is the Surface Pro 2 series, which really picks up the mantle from the original Surface. It runs standard Windows 8.1 atop Intel Core i5 "Haswell" based hardware, with up to 8 GB of RAM, and up to 512 GB of SSD-based storage. The lineup starts at $899.99, and promises "the full PC experience" on a 10-inch class tablet. Surface 2 and Surface Pro 2 can be heavily customized, with accessories that push them either close to a laptop, or a desktop. The lineup of accessories includes the Touch Cover 2, a cover with a soft-touch keyboard and LED backlighting, which is priced at $119.99. Next up is the Type Cover 2, which is pretty much the Touch Cover 2 with membrane keys that offer tactile feedback; priced at $129.99. The Power Cover builds on the Type Cover 2 by offering an additional battery that steps up your tablet's battery life by 50 percent. It's priced at $199.99. Also priced at $199.99 is the Docking Station for Surface Pro, a kit that converts your Surface Pro 2 into a compact AIO desktop, optimized for AC power, and featuring a few additional connectivity options. The Wireless Adapter for Touch and Type Covers lets you detach the two from the tablet, and run them over Bluetooth. It's priced at $59.99. Next up is the Car Charger with USB, which, well, does what its name reads. It's priced at $49.99. Lastly, there's the Arc Touch Mouse Surface Edition, a highly ergonomic wireless optical mouse for Surface, priced at $69.99.
{ "pile_set_name": "Pile-CC" }
Diệp Minh Châu

Diệp Minh Châu (10 February 1919 - 12 July 2002) was a Vietnamese painter and sculptor. He studied at the University of Hanoi. He was awarded the Ho Chi Minh Prize for fine art in 1996.

Works

"Passing the mangrove forest" (gouache, 1947), Vietnam National Museum of Fine Arts
Comparison of the same statues made from different materials

References

Category:Vietnamese sculptors
Category:1919 births
Category:2002 deaths
Category:20th-century Vietnamese sculptors
Category:20th-century Vietnamese painters
{ "pile_set_name": "Wikipedia (en)" }
J.I. PACKER WROTE OF SCHAEFFER, “His communicative style was not that of a cautious academic who labors for exhaustive coverage and dispassionate objectivity. It was rather that of an impassioned thinker who paints his vision of eternal truth in bold strokes and stark contrasts. Yet it is a fact that MANY YOUNG THINKERS AND ARTISTS…HAVE FOUND SCHAEFFER’S ANALYSES A LIFELINE TO SANITY WITHOUT WHICH THEY COULD NOT HAVE GONE ON LIVING.” Francis Schaeffer in Art and the Bible noted, “Many modern artists, it seems to me, have forgotten the value that art has in itself. Much modern art is far too intellectual to be great art. Many modern artists seem not to see the distinction between man and non-man, and it is a part of the lostness of modern man that they no longer see value in the work of art as a work of art.” Many modern artists are left in this point of desperation that Schaeffer points out, and it reminds me of the despair that Solomon speaks of in Ecclesiastes. Christian scholar Ravi Zacharias has noted, “The key to understanding the Book of Ecclesiastes is the term ‘under the sun.’ What that literally means is you lock God out of a closed system, and you are left with only this world of time plus chance plus matter.” THIS IS THE EXACT POINT SCHAEFFER SAYS SECULAR ARTISTS ARE PAINTING FROM TODAY, BECAUSE THEY BELIEVE WE ARE A RESULT OF MINDLESS CHANCE. Schaeffer noted: How Do We Know We Know? During the early stages of modern philosophy (as distinguished from medieval philosophy) – that is, around the seventeenth century in Europe – the question that was troubling philosophers was this: how do we know that we know? The early modern scientists had made advances in the physical sciences by rejecting previous human authority. For example, they rejected much of what had been inherited from the science of the Middle Ages. At that time, investigation had been governed and restrained by the concepts of Aristotle. 
In the field of astronomy, this had meant that the Ptolemaic system held sway. Suddenly, observations were made which cast doubt on that entire system of understanding the heavenly bodies. The result was, of course, the Copernican revolution: the discovery that the sun does not move around the earth but, rather, the earth around the sun. Thus, a general attitude was developed toward the ideas which had prevailed till then. The scientists said, “We must not accept the ideas passed down to us or derived from various previous authorities. We must start from scratch and simply observe the world and see how it works. Otherwise, we may be hampered from seeing what is there.” The early modern scientists did not, however, reject the knowledge that God gave in the Bible as they rejected previous human authority and opinion. For example, in Novum Organum (1620), Francis Bacon wrote: “To conclude, therefore, let no man out of weak conceit of sobriety, or an ill applied moderation, think or maintain that a man can search too far or be too well studied in the book of God’s word, or in the book of God’s works.”81 “The book of God’s word” is the Bible. “The book of God’s works” is the world which God has made. Modern scientists in general lived, thought, and worked in the framework of rejecting human authority, while respecting what was taught in the Bible in regard to the cosmos – right up to the time of Michael Faraday and James Clerk Maxwell in the second half of the nineteenth century. The philosophers (and later the materialistic scientists) went further. Their error was to confuse the escape from past human authority (which was indeed confining) with putting man at the center and rejecting God’s authority as well. They wanted to reject all outside authority. They wanted to establish everything only on human observation. That was how the question of epistemology (how we know we know) became so important in modern philosophy. It has remained so right up to our own day. 
_______________________________________ The philosopher who first raised these questions was Rene Descartes (1596-1650). Descartes wrote in Meditations on First Philosophy: How often it happened to me that in the night I dreamt that I found myself on this particular place … whilst in reality I was lying on my bed! At this moment it does seem that it is with eyes awake that I am looking at this paper …. But in thinking over this I remind myself that on many occasions I have in sleep been deceived by similar illusions, and in dwelling carefully on this reflection I see so manifestly that there are no certain indications by which we may clearly distinguish wakefulness from sleep that I am lost in astonishment. And my astonishment is such that it is almost capable of persuading me that I now dream.82 Here is the modern epistemological problem expressed three centuries ago! All knowledge comes through the senses, but how can we rely on our own senses? Sometimes, as in dreaming, we seem to be experiencing things very really, yet the reality is only in our heads. ______________________________ We are reminded of the 1966 film by Michelangelo Antonioni called Blow-Up, in which one of the central issues was this same question. A photographer had taken a picture of a murdered man in a park in London and then became uncertain whether this was, in fact, part of reality or an experience of fantasy similar to a drug trip. Within the humanist world-view there is no final way of telling. And Antonioni ends his film by making the point graphically. Tennis players play the game without a ball. The invisible “ball” goes back and forth and the spectators watch its “path” from side to side until finally the “ball” (which does not exist) goes out over the surrounding wire and “falls” at the photographer’s feet. He pauses for a moment, uncertain about what he should do. (Is observation simply a matter of the majority? 
Does the reality of things come from the general agreement in society and nothing more?) Then the photographer stoops down, picks up the “ball,” and throws it back onto the court. Here, depicted brilliantly, is the problem of any system which builds its epistemology on man alone. This film was a philosophic statement of the period in which we are living. Julio Cortázar is an Argentinian writer of incredible style; his story “Las babas del diablo” was the basis for the film. The film also features The Yardbirds performing “Stroll On”, a stylish, raging Mod song and a permitted version of “Train Kept A-Rollin’”. Originally, The Who were approached, but they declined, and The In-Crowd were then planned, but they were unable to attend the filming. The Yardbirds filled in at short notice, and the guitar that Beck smashes at the end of their set is a replica of Steve Howe’s instrument. Antonioni instructed Beck to smash his guitar in emulation of The Who’s Pete Townshend. In 1967 Antonioni won the Golden Palm at Cannes for this film, and in 1968 the Critics’ Award for Best Foreign Film. Watch this movie. Bergman and Antonioni Published on Aug 4, 2012 In this archived episode from the Movie Geeks United podcast, the hosts pay tribute to Ingmar Bergman and Michelangelo Antonioni days after their deaths on July 30, 2007, with guests Peter Burnett and NY Times writer Adam Bernstein. Dr. Francis Schaeffer – The Biblical Flow of Truth & History (part 2) Francis Schaeffer liked to talk about two aspects of the human experience that every person has to wrestle with. These are constants—every person who has ever lived has encountered these two things. The first (which I will explore in this post) is the existence of the external world. The second (which I will explore tomorrow) is what Schaeffer referred to as “the mannishness of man.” We live in the midst of a world. We can’t deny it. We keep bumping into it. It’s everywhere we look. 
Try as we might, we can’t see beyond it, nor can we quite manage to see it differently than it is, though we often try. We can’t get its smell out of our nostrils or its feel away from our nerve endings. It’s just there. Unavoidable. Undeniable. Of course, people being what they are, some have tried to deny the existence of the external world. Or at least cast doubt upon its existence. Rene Descartes’ famous dictum “I think therefore I am” was the conclusion of his experiment of systematic doubt. How do I really know anything at all? How do I know I even exist? Could not my senses or some evil spirit be deceiving me about everything I’ve ever known? The only thing that Descartes could not doubt was the fact that he was doubting. Some of the eastern religions teach that this world is nothing more than an illusion. The trick is to call it out and realize that all of the distinctions we make between individual objects (I am not you, you are not a tree, the land is not the sea) are misguided. These distinctions are illusions. So we must let go of the illusion of an external world and mindlessly meld with everything. How do I know I exist? How do I know you’re not a figment of my imagination? We can certainly ask ourselves these questions. But at the end of the day, we’re still living in the real world. Go ahead and believe that this world is an illusion. You still can’t escape it. You still have to follow the dictates of gravity. You still come into contact with real people. You still see things like beauty and understand things like truth. Believe what you want, but we all know—truly and deeply—that the external world is real. Literally every thing points to the reality of the external world. As Christians, the inescapable reality of the external world works in our favor. We can have a discussion with a Buddhist, for example, about the whole world being an illusion. And we can try to convince him intellectually. 
He will argue against us, but then he must go about his day living as though this world is a real place. In other words, he can say what he wants, but at this point—if he wants to function in the world that exists—he must live inconsistently with regard to his stated beliefs. Or talk to the person who denies the existence of a Creator. She will explain that the existence of God is improbable or even impossible. But then she has to face the fact that this world is here. Why should it be here? She can appeal to concepts like “deep time” and talk about what could happen when time and chance work together over billions of years, but still—something is here! Where did it come from? That question must persist like a thorn in the brain when the only available answer is, “Well, who knows what could happen when you give it enough time and chance?” The beauty of this whole thing is that the God who gave us the gospel is also the God who fashioned the external world. And he knows what he’s talking about. So when we speak to people about the truth of the Christian worldview, we can have full confidence that our worldview matches the world that exists completely. No one else has this advantage. So we have both truth and reality on our side—both working together to point people to the truth and power of the gospel. But even more powerful than the existence of the external world is “the mannishness of man”—a concept that we will explore tomorrow. Mark Beuving Mark has worked in youth, college, and worship ministry since 1999, and now serves at Eternity Bible College as the Associate Professor of Interdisciplinary Studies. He is passionate about building up the body of Christ, training future leaders for the Church, and writing. Though he is interested in many areas of theology and philosophy, Mark is most fascinated with practical theology and exploring the many ways in which the Bible can speak to and transform our world. 
He is the author of “Resonate: Enjoying God’s Gift of Music” and the co-author with Francis Chan of “Multiply: Disciples Making Disciples.” Mark lives in Simi Valley with his wife and two daughters. RC Sproul : The Illusion Of Descartes – Defending Your Faith Part 17 Published on Mar 3, 2012 The illusion of Descartes MESSAGE INTRODUCTION Rene’ Descartes was a French philosopher and mathematician, born in La Haye, France. In Bavaria, in the winter of 1619, he took on the mission to re-create the philosophical world by doubting every assumption and building a philosophy based on math. It may seem as though he was a wild-eyed mystic, but he was actually very quiet and careful, keeping many of his books from publication because Roman Catholicism was in the very act of condemning Galileo’s work. But after his works were released, they caused a storm in philosophy and apologetics that still troubles and amazes us. LEARNING OBJECTIVES 1. To begin a critique of the four explanations of reality. 2. To discuss the philosophy of Descartes and its impact on apologetics. QUOTATIONS AND THOUGHTS I can only trace the lines that flow from God. (Albert Einstein) Sin has gotten man into more trouble than science can get him out of. (Vance Havner) The scientific way of looking at the world is not wrong any more than the glassmaker’s way of looking at the window. This way of looking at things has its very important uses. Nevertheless the window was placed there not to be looked at, but to be looked through; and the world has failed of its purpose unless it too is looked through and the eye rests not on it, but on its God. (B.B. Warfield) LECTURE OUTLINE I. We start with four possibilities to explain reality. a. Illusion: Reality is not real. b. Self-Created: Reality came into existence through itself. c. Self-Existent: Reality exists by its very nature. d. Created: Reality is created by a self-existent being. II. 
Descartes’ Critique of Reality as Illusion a) Rene’ Descartes (1596-1650), a mathematician, was confronted by a wave of irrationality, an epistemological breakdown. b) The controversies of Copernicus and the Reformation and Galileo created a crisis of authority. c) Descartes attempted to restore certitude. “Clear and distinct ideas” were his goal, ideas that could reconstruct man’s search for knowledge. d) Illustration: What are ten things that I know for sure? e) Descartes doubted everything that he could conceivably doubt, and whatever was left, that is where he would begin. Perhaps everything was just the dream of a demon, he offered. f) He found that the one thing he could not doubt was that he was doubting. There is no way to escape the reality of doubt and the underlying reality that there is a doubter. III. Assumptions of Self-Consciousness: Cogito, Ergo Sum a) If Descartes is right, then whatever else is in doubt, our existence is not in doubt. b) Going a bit further, if a piece of chalk actually exists, then a self-existent Creator must exist. c) The two major assumptions of Descartes in this formula are the law of non-contradiction and the law of causality. The philosopher, mathematician and natural scientist René Descartes du Perron (Latin: Renatus Cartesius) was born in La Haye near Tours, in the Touraine region of France, on March 31, 1596. He stemmed from an old French aristocratic family. His mother died one year after his birth, so René Descartes grew up with a nurse and his grandmother. From the age of eight, René Descartes attended the Jesuit college in La Flèche as a boarder. At 16, René Descartes successfully completed his education. He studied law in Poitiers. In 1616 he graduated with a degree in law. That same year Descartes worked under the famous general Moritz von Nassau in the Dutch town of Breda, where he met the doctor and natural scientist Isaac Beekman. 
He awakened René Descartes’ interest in physics and was also the person to whom Descartes dedicated his first work on mathematics and physics, “Musicae compendium”, which was published in 1618. Between 1619 and 1620 Descartes entered into the service of Duke Maximilian of Bavaria. He was a soldier in the Thirty Years’ War and participated in the siege of Prague on the side of the emperor and the Catholics. In 1625 Descartes settled in Paris, having received a considerable inheritance. There, he soon came in contact with intellectuals and members of wealthy society. In 1628 he wrote “Regulae ad directionem ingenii” (“Rules for the Direction of the Mind”), with which he earned significant acclaim and recognition. One year later René Descartes moved to Holland, where he spent the next 18 years of his life. In Holland he worked on a treatise on metaphysics, which he left unfinished in order to write another natural scientific piece, a work entitled “Traité du Monde”, which he also left incomplete when he found out about the fate of Galileo Galilei. In 1637 René Descartes published his most important popular scientific work, “Discours de la méthode”; he wrote on very intricate subjects but still in a style that “even women” were able to understand. In his works he incorporated epistemology, ethics, metaphysics and the general laws of physics. His “Meditations on First Philosophy”, which offer proofs for the existence of God and the immortality of the soul, were first printed in Latin in 1641 and later also in French; his “Principles of Philosophy” was published in 1644. These works by Descartes led to such aggressive attacks by Dutch theologians that in 1645 Descartes considered moving to England. It might have been this experience that inspired Descartes to write a treatise on the “Passions of the Soul” in 1649, a work on human emotions. In 1649 René Descartes followed the invitation of his long-time pen pal Queen Christine of Sweden and visited Stockholm. 
There, he fell ill with pneumonia in early 1650 and died. Some theories, however, say that René Descartes might not have died of natural causes but might have been poisoned with arsenic. Olafur Eliasson (Icelandic: Ólafur Elíasson; born 1967) is a Danish–Icelandic artist known for sculptures and large-scale installation art employing elemental materials such as light, water, and air temperature to enhance the viewer’s experience. In 1995 he established Studio Olafur Eliasson in Berlin, a laboratory for spatial research. Eliasson represented Denmark at the 50th Venice Biennale in 2003 and later that year installed The Weather Project in the Turbine Hall of Tate Modern, London. In 2004, Eliasson told Berlin magazine 032c that his father was also an artist; in the same interview he also said that at one time he considered his “break-dancing” during the mid-1980s to be his first artworks.[1] In 1990, when he was awarded a travel budget by the Royal Danish Academy of Arts, Eliasson went to New York where he started working as a studio assistant. He received his degree from the academy in 1995, after having moved in 1993 to Cologne for a year, and then to Berlin, where he has since maintained a studio.[2] First located in a warehouse right next door to the Hamburger Bahnhof, the studio moved to a former brewery in Prenzlauer Berg in 2008. In 1996, Eliasson started working with Einar Thorsteinn, an architect and geometry expert 25 years his senior as well as a former friend of Buckminster Fuller’s.[3] The first piece they created, called 8900054, was a stainless-steel dome 30 feet (9.1 m) wide and 7 feet (2.1 m) high, designed to be seen as if it were growing from the ground. Though the effect is an illusion, the mind has a hard time believing that the structure is not part of a much grander one developing from deep below the surface. 
Thorsteinn’s knowledge of geometry and space has been integrated into Eliasson’s artistic production, often seen in his geometric lamp works as well as his pavilions, tunnels and camera obscura projects.[4] For many projects, the artist works collaboratively with specialists in various fields, among them the architects Thorsteinn and Sebastian Behmann (both of whom have been frequent collaborators), author Svend Åge Madsen (The Blind Pavilion), landscape architect Gunther Vogt (The Mediated Motion), architecture theorist Cedric Price (Chaque matin je me sens différent, chaque soir je me sens le même), and architect Kjetil Thorsen (Serpentine Gallery Pavilion, 2007). Today, Studio Olafur Eliasson is a laboratory for spatial research that employs a team of c. 30 architects, engineers, craftsmen, and assistants who work together to conceptualize, test, engineer, and construct installations, sculptures, large-scale projects, and commissions.[5]

Works and Projects

Ventilator pieces

Early works by Eliasson consist of oscillating electric fans hanging from the ceiling. Ventilator (1997) swings back and forth and around, rotating on its axis.[6] Quadrible light ventilator mobile (2002–2007) is a rotating electrically powered mobile comprising a searchlight and four fans blowing air around the exhibition room and scanning it with the light cone.[7]

The weather project

The weather project was installed at London’s Tate Modern in 2003 as part of the popular Unilever series. The installation filled the open space of the gallery’s Turbine Hall. Eliasson used humidifiers to create a fine mist in the air via a mixture of sugar and water, as well as a semi-circular disc made up of hundreds of monochromatic lamps which radiated yellow light. The ceiling of the hall was covered with a huge mirror, in which visitors could see themselves as tiny black shadows against a mass of orange light. 
Many visitors responded to this exhibition by lying on their backs and waving their hands and legs. Open for six months, the work reportedly attracted two million visitors, many of whom were repeat visitors.[8]

Light installations

Eliasson has been developing various experiments with atmospheric density in exhibition spaces. In Room For One Colour (1998), a corridor lit by yellow monofrequency tubes, the participants find themselves in a room filled with light that affects the perception of all other colours. Another installation, 360 degrees Room For All Colours (2002), is a round light-sculpture where participants lose their sense of space and perspective, and experience being subsumed by an intense light.[9] Eliasson’s later installation Din blinde passager (Your blind passenger) (2010), commissioned by the Arken Museum of Modern Art, is a 90-metre-long tunnel. Entering the tunnel, the visitor is surrounded by dense fog. With visibility at just 1.5 metres, museumgoers have to use senses other than sight to orient themselves in relation to their surroundings.[10] For Feelings are facts, the first time Eliasson worked with Chinese architect Yansong Ma as well as his first exhibition in China, Eliasson introduces condensed banks of artificially produced fog into the gallery of the Ullens Center for Contemporary Art, Beijing. Hundreds of fluorescent lights are installed in the ceiling as a grid of red, green, and blue zones.

Your black horizon

This project, a light installation commissioned for the Venice Biennale by Thyssen-Bornemisza Art Contemporary in collaboration with British architect David Adjaye, was shown from 1 August to 31 October 2005 on the island of San Lazzaro in the lagoon near Venice, Italy. 
A temporary pavilion was constructed on the grounds of the monastery to house the exhibit, consisting of a square room painted black with one source of illumination – a thin, continuous line of light set into all four walls of the room at the viewer’s eye level, serving as a horizontal division between above and below. From June 2007 through October 2008, the pavilion was reopened on the island of Lopud, Croatia, near the city of Dubrovnik.

Your mobile expectations: BMW H2R project

Eliasson was commissioned by BMW in 2007 to create the sixteenth art car for the BMW Art Car Project. Based on the BMW H2R concept vehicle, Eliasson and his team removed the automobile’s alloy body and replaced it with a new interlocking framework of reflective steel bars and mesh. Layers of ice were created by spraying approximately 530 gallons of water upon the structure over a period of several days. On display, the frozen sculpture glowed from within. Your mobile expectations: BMW H2R project was on special display in a temperature-controlled room at the San Francisco Museum of Modern Art from 2007–08[11] and at the Pinakothek der Moderne, Munich, in 2008.

The Parliament of Reality

Dedicated on 15 May 2009, this permanent sculpture stands at Bard College, Annandale-on-Hudson, NY. The installation is based on the original Icelandic parliament, the Althingi, one of the world’s earliest democratic forums. The artist envisions the project as a place where students and visitors can gather to relax, discuss ideas, or have an argument. The parliament of reality emphasizes that negotiation should be the core of any educational scheme. The man-made island is surrounded by a 30-foot circular lake, 24 trees, and wild grasses. The 100-foot-diameter (30 m) island is composed of a cut-bluestone, compass-like floor pattern (based upon meridian lines and navigational charts), on top of which 30 river-washed boulders create an outdoor seating area for students and the public to gather. 
The island is reached by a 20-foot-long stainless steel lattice-canopied bridge, creating the effect that visitors are entering a stage or outdoor forum. Frogs gather in this wiry mesh at night, creating an enjoyable symphony.

Harpa

Eliasson designed the facade of Harpa, Reykjavík‘s new concert hall and conference centre, which was completed in 2011. In close collaboration with his studio team and Henning Larsen Architects, the designers of the building, Eliasson designed a unique facade consisting of large quasi bricks, a stackable twelve-sided module in steel and glass. The facade reflects the city life and the different light composed by the movements of the sun and varying weather. During the night the glass bricks are lit up by different colored LED lights. The building was opened on 13 May 2011.

Your rainbow panorama

In 2007, Eliasson’s proposal for an artwork to complete ARoS Aarhus Kunstmuseum in Aarhus was chosen over five other proposals in a bidding process judged by a panel. Eliasson’s artwork, called “Your rainbow panorama”, consists of a circular corridor, 150 meters long and three meters wide, made of glass in every color of the rainbow. The work has a diameter of 52 meters and is mounted on slender pillars 3.5 meters above the museum’s roof. At night the artwork is lit from the inside by spotlights in the floor. The project cost 60 million Danish kroner; construction began in May 2009 and was completed in May 2011.[13]

Other projects

Commissioned by Louis Vuitton in 2006, lamps titled Eye See You were installed in the Christmas windows of Louis Vuitton stores; a lamp titled ‘You See Me’ went on permanent display at Louis Vuitton Fifth Avenue, New York.[14] All fees from the project were donated to 121Ethiopia.org, a charitable foundation established by Eliasson and his wife. 
Along with James Corner‘s landscape architecture firm Field Operations and architecture firm Diller Scofidio + Renfro, Eliasson was part of the design team for New York’s High Line park.[15] Eliasson was originally supposed to create an outdoor-based artwork for the 2012 Summer Olympics; however, his proposed £1m project Take A Deep Breath was rejected due to funding problems.[16] E P I S O D E 9 Dr. Francis Schaeffer – Episode IX – The Age of Personal Peace and Affluence 27 min T h e Age of Personal Peace and Afflunce I. By the Early 1960s People Were Bombarded From Every Side by Modern Man’s Humanistic Thought II. Modern Form of Humanistic Thought Leads […] E P I S O D E 8 Dr. Francis Schaeffer – Episode VIII – The Age of Fragmentation 27 min I saw this film series in 1979 and it had a major impact on me. T h e Age of FRAGMENTATION I. Art As a Vehicle Of Modern Thought A. Impressionism (Monet, Renoir, Pissarro, Sisley, […] E P I S O D E 7 Dr. Francis Schaeffer – Episode VII – The Age of Non Reason I am thrilled to get this film series with you. I saw it first in 1979 and it had such a big impact on me. Today’s episode is where we see modern humanist man act […] E P I S O D E 6 How Should We Then Live 6#1 Uploaded by NoMirrorHDDHrorriMoN on Oct 3, 2011 How Should We Then Live? Episode 6 of 12 ________ I am sharing with you a film series that I saw in 1979. In this film Francis Schaeffer asserted that was a shift in […] E P I S O D E 5 How Should We Then Live? Episode 5: The Revolutionary Age I was impacted by this film series by Francis Schaeffer back in the 1970′s and I wanted to share it with you. Francis Schaeffer noted, “Reformation Did Not Bring Perfection. But gradually on basis of biblical teaching there […] Dr. Francis Schaeffer – Episode IV – The Reformation 27 min I was impacted by this film series by Francis Schaeffer back in the 1970′s and I wanted to share it with you. Schaeffer makes three key points concerning the Reformation: “1. 
Erasmian Christian humanism rejected by Farel. 2. Bible gives needed answers not only as to […] Francis Schaeffer’s “How should we then live?” Video and outline of episode 3 “The Renaissance” Francis Schaeffer: “How Should We Then Live?” (Episode 3) THE RENAISSANCE I was impacted by this film series by Francis Schaeffer back in the 1970′s and I wanted to share it with you. Schaeffer really shows why we have so […] Francis Schaeffer: “How Should We Then Live?” (Episode 2) THE MIDDLE AGES I was impacted by this film series by Francis Schaeffer back in the 1970′s and I wanted to share it with you. Schaeffer points out that during this time period unfortunately we have the “Church’s deviation from early church’s teaching in regard […] Francis Schaeffer: “How Should We Then Live?” (Episode 1) THE ROMAN AGE Today I am starting a series that really had a big impact on my life back in the 1970′s when I first saw it. There are ten parts and today is the first. Francis Schaeffer takes a look at Rome and why […]
{ "pile_set_name": "Pile-CC" }
VIDEO How to Make Your Own Shampoo with Baking Soda and Lemon Juice http://www.paleolifestylemagazine.com I started doing this a while back as a means of saving money, but as I became Paleo I realized this was a great way to keep extra chemicals off of my body! WEB RESULTS How To Make Dandruff Control Shampoo at Home | Shampoos … How To Make Dandruff Control Shampoo at Home Here’s an easy How To you can do at ... Make your own chemical ... Mix lemon juice and baking soda until you have ... https://www.pinterest.com/pin/568438784188655450/ How To Make Soap - About.com Education Video embedded · How To Make Soap. Make Your Own ... Stir in the lemon juice and fragrance oil ... Make Your Own Shampoo With This Simple Recipe; How Saponification Makes … http://chemistry.about.com/cs/howtos/ht/makesoap.htm DIY Shampoo: The Baking Soda Experiment - Wise Bread Make shampoo with baking soda and ... fizz went the sound as the baking soda and vinegar mixed together to create a ... I switched to using lemon juice in the same ... http://www.wisebread.com/diy-shampoo-the-baking-soda-experiment How to Make Your Own Hair Lightening Shampoo - HubPages Here's how to make your own, which doesn't damage your hair. HubPages. Sign In; Help; ... You can use either actual lemon juice or this kind. ... Baking soda with a ...
{ "pile_set_name": "Pile-CC" }
Q: How to iterate python windowed() to last element? According to the more_itertools.windowed specification, you can do: list(windowed(seq=[1, 2, 3, 4], n=2, step=1)) >>> [(1, 2), (2, 3), (3, 4)] But what if I want to run it all the way to the end? Is it possible to get: >>> [(1, 2), (2, 3), (3, 4), (4, None)] A: A workaround, though not the best solution, is to append None to the sequence. list(windowed(seq=[1, 2, 3, 4, None], n=2, step=1))
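For the pairwise case (n=2) specifically, the same padding can be obtained from the standard library's itertools.zip_longest, which fills the shorter of its arguments with None by default. A sketch, assuming the input is a list or another sliceable sequence (the helper name is ours):

```python
from itertools import zip_longest

def pairs_padded(seq):
    # Pair each element with its successor; zip_longest pads the
    # shorter argument (seq[1:]) with None, so the final element
    # ends up paired with None.
    return list(zip_longest(seq, seq[1:]))

print(pairs_padded([1, 2, 3, 4]))  # [(1, 2), (2, 3), (3, 4), (4, None)]
```

This avoids mutating the input list, at the cost of only covering the n=2, step=1 case.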
{ "pile_set_name": "StackExchange" }
Q: Is it possible to configure both pppoe and dhcp on the same interface? I would like to set up, on a debian 9 machine, both the dhcp and pppoe config inside the /etc/interfaces file, so that I can move my machine with its eth0 and attach it: to a friend's house that has pppoe, in which case the machine sees pppoe available and establishes a connection with it (I have the provider config file already set up on the machine); to my house where I have my router with dhcp, and I benefit from dhcp for getting my IP address. Is it possible to do that? Should I specify inside interfaces both pppoe and inet dhcp? A: To answer your question, yes it should be possible to use both dynamic and static IP interface configurations. You do this by creating virtual interfaces that use the same physical interface. Each virtual interface will need to be configured properly to your network's needs. I am not as familiar with PPPoE but I have found some links that could help you. This post covers how to configure having both static and dynamic interfaces. Here is the Official Debian Wiki on how to set up PPPoE. Again I suggest you read through the Debian Wiki on how to do network configuration using different interface settings. According to the aforementioned links, your /etc/network/interfaces should look something like this:

auto lo eth0 eth0:0
iface lo inet loopback

iface eth0 inet dhcp

iface eth0:0 inet manual

auto dsl-provider
iface dsl-provider inet ppp
    pre-up /sbin/ifconfig eth0:0 up
    provider dsl-provider

Don't forget to run pppoeconf to generate and/or modify the /etc/ppp/peers/dsl-provider, /etc/ppp/*ap-secrets and /etc/network/interfaces files. It is best, in most cases, to keep the suggested answers; I would substitute eth0 with whatever name your device actually appears under by default, to keep things simple. However I highly suggest you read through the Debian manual on how to set up networking before you do anything. 
Remember to figure out where your network is getting its configuration information from and make the appropriate changes there. Best of Luck!
{ "pile_set_name": "StackExchange" }
The Curse of Milk Sickness - samclemens https://www.appalachianhistory.net/2019/02/the-curse-of-milk-sickness-part-1-of-2.html ====== taneq Part 2: [https://www.appalachianhistory.net/2019/02/the-curse-of- milk...](https://www.appalachianhistory.net/2019/02/the-curse-of-milk- sickness-part-2-of-2.html) ------ alexandercrohde Interesting how important an animal's diet can be, something I didn't fully appreciate. Also it's nice to see accounts of how science semi-functioned historically. I'm fascinated by the role of human-factors in our attempts at objective- science. ~~~ cainxinth You are what you eat eats. ~~~ jjtheblunt That's cute, but partially false, and is the main reason animals eating other animals exists. Obligate carnivores, as an example, eat other animals not as a shortcut to the diets of the other animals, but because the other animals' livers manufacture amino acids that the obligate carnivores' livers do not. ------ goda90 I still distinctly remember milk sickness being used as the "story" behind some experiments they had us do in early science classes in middle or high school. Basically, a teenager was sick/had died and we had to figure out where the milk came from(then the experiment was to study density of different plastics to determine which dairy had bottled it). One thing that stood out to me was that the parents in the story described the victim's breathe as smelling like nail polish. One of the symptoms is the build up of ketone bodies like acetone, which might be noticeable in the breathe. ~~~ mrpoptart Friendly correction: you meant "breath" not "breathe." ------ loblollyboy Pretty interesting story. A little history, a little science. hi-sci? ------ kupiv I like it, very interesting information. Thank you for sharing this:) ------ symmitchry [https://en.wikipedia.org/wiki/Ageratina_altissima](https://en.wikipedia.org/wiki/Ageratina_altissima) ------ eps ... 
_continued tomorrow_ Perhaps could use a repost tomorrow then :) ~~~ aaron_oxenrider [https://www.appalachianhistory.net/2019/02/the-curse-of- milk...](https://www.appalachianhistory.net/2019/02/the-curse-of-milk- sickness-part-2-of-2.html) It's already posted.
{ "pile_set_name": "HackerNews" }
Kinetics of the conformational changes of hemopexin in acid media. Under the action of acid media the hemopexin molecule unfolds with resulting heme expulsion from the binding site, followed by heme dimerization and reassociation of dimeric heme with the unfolded protein molecule. The rate of the reaction is pH dependent and the whole process is fully reversible for a certain time interval. Prolonged treatment of hemopexin at acidic conditions, however, leads to the irreversible denaturation of this protein.
{ "pile_set_name": "PubMed Abstracts" }
Q: Cocoa Touch Question. Should [NSMutableArray array] be retained? Here is the gist of some code I'm writing. I'm concerned that I am not properly addressing the retain/release issues with the array class method on NSMutableArray. Is the following actually leaking memory?

for (a while) {
    // do stuff
    NSMutableArray *a = nil;
    // do stuff
    if (!a) {
        a = [NSMutableArray array];
    }
} // for(a while)

A: You wouldn't leak memory in this code, and releasing the array yourself will cause a crash when the array is autoreleased at the end of the run loop. Most Cocoa classes provide a couple of ways of making a new object, and are very consistent with this convention:

[[NSSomeObject alloc] init] : you are responsible for releasing the object (instance method).
[NSSomeObject someObject] : the object will be autoreleased for you, usually at the end of the run loop (class method). It's roughly equivalent to [[[NSSomeObject alloc] init] autorelease].

The proper use of the instance method would be:

a = [[NSMutableArray alloc] init];
// do stuff
[a release];

The proper use of the class method would be:

a = [NSMutableArray array];
// do stuff, array is in the autorelease pool

Note that Apple has recommended you stay away from the convenience methods as much as possible to improve performance. This is controversial advice, may not save much processor time, and separates the alloc-init from the release on an object you may not actually care much about keeping. A: From the Cocoa Memory Management Rules: You take ownership of an object if you create it using a method whose name begins with "alloc" or "new" or contains "copy" (for example, alloc, newObject, or mutableCopy), or if you send it a retain message. You are responsible for relinquishing ownership of objects you own using release or autorelease. Any other time you receive an object, you must not release it. 
Therefore, with the line:

a = [NSMutableArray array];

you do not take ownership of the array, and it will be passed to you autoreleased. The memory will be handled for you automatically by the autorelease pool, and once it is no longer being used, it will be released for you. If you want to keep the array outside the current event, however, you must retain it, otherwise it will be released for you.
{ "pile_set_name": "StackExchange" }
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>hsweb-system-workflow</artifactId> <groupId>org.hswebframework.web</groupId> <version>3.0.11</version> <relativePath>../pom.xml</relativePath> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>hsweb-system-workflow-starter</artifactId> <dependencies> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-system-workflow-local</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> </dependency> <!-- test --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid</artifactId> <version>1.0.26</version> <scope>test</scope> </dependency> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-spring-boot-starter</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-system-authorization-starter</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-system-organizational-starter</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-tests</artifactId> <version>${project.version}</version> <scope>test</scope> 
</dependency> <dependency> <groupId>org.hswebframework.web</groupId> <artifactId>hsweb-system-dynamic-form-starter</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> </dependencies> </project>
{ "pile_set_name": "Github" }
Friday, July 11, 2014 Poh Eng expecting no less than seven gold COMMONWEALTH Games chef-de-mission Datuk Ong Poh Eng is expecting Malaysian athletes to deliver no less than seven gold medals in Glasgow on July 23-Aug 3. Poh Eng said it would be tough to replicate the 12-gold haul achieved in Delhi four years ago but is nevertheless hoping the younger athletes will rise to the occasion. “Obviously we want to win as many medals as we can but we also have to look at the current situation as well,” said Poh Eng, when met at Bata Malaysia’s official shoe presentation for the contingent at the Olympic Council of Malaysia (OCM) in Kuala Lumpur yesterday. “I believe we will definitely win at least seven and equal or better our performance in Manchester (2002) and Melbourne (2006).
{ "pile_set_name": "Pile-CC" }
/** * Copyright (c) 2010-2020 Contributors to the openHAB project * * See the NOTICE file(s) distributed with this work for additional * information. * * This program and the accompanying materials are made available under the * terms of the Eclipse Public License 2.0 which is available at * http://www.eclipse.org/legal/epl-2.0 * * SPDX-License-Identifier: EPL-2.0 */ package org.openhab.binding.airvisualnode.internal.config; /** * Configuration for AirVisual Node. * * @author Victor Antonovich - Initial contribution */ public class AirVisualNodeConfig { public static final String ADDRESS = "address"; public String address; public String username; public String password; public String share; public long refresh; }
{ "pile_set_name": "Github" }
Europolemur klatti

Europolemur klatti was a medium- to large-sized adapiform primate that lived on the continent of Europe from the early to middle Eocene. One possible relative of this species is Margarita stevensi, whose type specimen is about the size of a white-footed sportive lemur (Lepilemur leucopus). Characteristic of most adapines are the reduction or absence of the paraconid and the morphology of the paracristid. These and a few other features are synapomorphies that were used to link E. klatti with Leptadapis priscus and Microadapis sciureus, as well as Smilodectes.

Morphology

Europolemur klatti is part of a group of long-digited fossils, and most likely approximates early euprimate hand proportions. E. klatti had a grasping hallux, and there is evidence that E. klatti may have had nails instead of claws. This suggests that stabilizing the tips of the digits and hand must have in some way been an important function in their lifestyle and habitat. Relative to the forearm, the hand of E. klatti was large, which may be related to vertical climbing or posture. The shape of the calcaneus (heel) resembles that found in Smilodectes and Notharctus. E. klatti had an average body mass of 1.7 kilograms.

Dentition

In 1995, two isolated upper molars belonging to E. klatti were found in an old lake deposit during excavations by the Natural History Museum of Mainz (Naturhistorisches Museum Mainz/Landessammlung für Naturkunde Rheinland-Pfalz). The museum determined that the molars—as well as a mandible with nearly complete dentition belonging to another cercamoniine, Periconodon—were representative of the first primates from the Middle Eocene Eckfeld maar in the Southwest Eifel, Germany. E. klatti has a dental formula of 2:1:3:3; the milk dentition of this species consisted of four premolars, while the adults had only three premolars. 
References External links Mikko's Phylogeny Archive Category:Prehistoric strepsirrhines Category:Eocene primates Category:Prehistoric mammals of Europe Category:Prehistoric mammals of North America
{ "pile_set_name": "Wikipedia (en)" }
Q: Why does the wallet contract convert addresses to integers? In the multiowned part of the wallet contract, owners are stored in a uint array, such as in line 59 in the constructor. Here is a relevant excerpt:

contract multiowned {
    // METHODS
    function multiowned(address[] _owners, uint _required) {
        m_numOwners = _owners.length + 1;
        m_owners[1] = uint(msg.sender);
        m_ownerIndex[uint(msg.sender)] = 1;
        for (uint i = 0; i < _owners.length; ++i) {
            m_owners[2 + i] = uint(_owners[i]);
            m_ownerIndex[uint(_owners[i])] = 2 + i;
        }
        m_required = _required;
    }

    // FIELDS
    uint[256] m_owners;
    mapping(uint => uint) m_ownerIndex;
    // why not address[] m_owners and mapping(address => uint) m_ownerIndex ?
}

Why not store them in address-type variables? Is there a special reason for this? Does it make the storage lighter? Thanks, A: I think the code uses uint instead of address because, as you know, an array needs an integer as its index, and the idea behind the snippet, if I understand it correctly, is to return the index of a participant or owner (while there are multiple owners) from its address without using a loop. For example, if the first sender is 0x123 and the nth address is 0x555, but we don't know the order n, we just need to call m_ownerIndex[uint(0x555)] to get the value of n without a loop. If you used an address array for the same example, you would need something like:

for (uint i = 0; i < m_owners.length; i++) {
    if (m_owners[i] == uint(0x555)) {
        return i;
    }
}
{ "pile_set_name": "StackExchange" }
INTRODUCTION
============

In fixed implant-supported prostheses, the load applied to the occlusal surface of artificial teeth is transmitted along the framework and the abutment to the surrounding bone, where most of it is absorbed at the expense of bone deformation. According to Frost[@B8] (2004), the bone reacts to forces according to the intensity of the tension. Bone responses to tension can then be divided into four intervals or *windows*: 1: *The acute disuse window*, with tensions below 50 µε (microstrain), resulting in bone loss because of an increase in the remodeling process; 2: *The adaptation window*, with tensions between 50 µε and 1500 µε, where physiological adaptation occurs with a balance between resorption and formation; 3: *The mild overload window*, with tensions between 1500 µε and 4000 µε, where an increase in the modeling process occurs, improving bone structure; and 4: *The pathologic overload window*, characterized by tensions above 4000 µε, when bone resorption takes place. According to Chang, et al.[@B6] (2013), knowledge regarding the response of the peri-implant bone when the dental implant is excessively loaded is limited and the level of evidence is poor. With animal experimental studies showing conflicting results, it is unclear whether occlusal overload might cause marginal bone loss or total loss of osseointegration to already osseointegrated dental implants when the applied load exceeds the biologically-acceptable limit. This biological limit is also unknown. Furthermore, higher remodeling activity of the peri-implant bone is found around implants subjected to high loading forces. The strain values that can actually cause biological changes are not completely known[@B30]. Certain hormones and biochemical agents can also change the system, causing changes to the limits of tolerance[@B8]. 
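Frost's four windows amount to a simple threshold classification of strain magnitude. As an illustrative sketch only (the function name and the handling of readings falling exactly on the 50, 1500, and 4000 µε boundaries are our assumptions, not the paper's):

```python
def frost_window(strain_ue):
    """Classify bone response to a strain magnitude given in
    microstrain, using the four intervals described above."""
    e = abs(strain_ue)  # compressive readings are negative
    if e < 50:
        return "acute disuse"         # remodeling increases; bone loss
    if e <= 1500:
        return "adaptation"           # resorption/formation in balance
    if e <= 4000:
        return "mild overload"        # modeling improves bone structure
    return "pathologic overload"      # bone resorption

print(frost_window(300))   # adaptation
print(frost_window(5000))  # pathologic overload
```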
The implant-supported fixed prosthesis with distally extended lever arms presents peculiar characteristics of force distribution, since all the force applied in the posterior region of the cantilever is transmitted to the implants and consequently to the adjacent bone[@B29]. The findings of other studies, such as Benzing, et al.[@B5] (1995) and Lewinstein, et al.[@B16] (1995), demonstrate that increasing the cantilever arm increases the stress concentration around the terminal implant. A cantilever arm of 10-20 mm is considered acceptable depending on the quality of the bone where implants are placed[@B14] ^,^ [@B20] ^,^ [@B21] ^,^ [@B27]. According to Benzing, et al.[@B5] (1995), load application on the cantilever arm of an implant-supported framework produces deformation energy in the system that causes bending, depending on the differences in elastic modulus of the several materials and components. Studies have demonstrated that the pattern of stress distribution among abutments depends, among other factors, on the alloy type used for the framework[@B2] ^,^ [@B10] ^,^ [@B11]. According to some authors, Benzing, et al.[@B5] (1995), Geng, et al.[@B9] (2001) and Duyck & Naert[@B7] (2002), a material with a smaller elastic modulus offers less flexural resistance; frameworks made with rigid base alloys suffer less deformation, being less prone to fatigue and, consequently, not overloading the screws. Some clinical[@B12] and laboratory[@B1] ^,^ [@B13] ^,^ [@B23] ^,^ [@B26] ^,^ [@B29] studies have used CoCr alloys for implant-supported prosthesis frameworks. The clinical success of osseointegrated implants is largely influenced by the manner in which mechanical stresses are transferred from the implant to the surrounding bone without generating forces of a magnitude that would jeopardize the longevity of implants and prostheses[@B25]. The force applied on cantilevered implant-supported fixed prostheses is transmitted to the peri-implant area. 
However, the magnitudes of the resultant stresses, considering the elasticity of the bone, are underestimated. The aim of this *in vitro* study was to verify the mechanical stress generated on the peri-implant bone of an implant prosthodontic system when: (1) a load is applied at different cantilever lengths and (2) alloys of different elastic modulus (E) are used to fabricate the framework.

MATERIAL AND METHODS
====================

A "U" shaped polyurethane model (PU, Axson -- Cergy, St. Ouen l'Aumône, France) with the following dimensions: 100 mm in length, 13 mm in width, 19 mm in height, 46 mm in internal diameter, and 59 mm in external diameter was used to simulate the mandibular bone[@B18] ^,^ [@B19]. Two external hexagon Brånemark System^®^ Mk III Groov (Nobel Biocare -- Göteborg, Västra Götaland, Sweden) implants of 3.75 mm in diameter and 13 mm in length were embedded in the model during the pouring of the liquid polyurethane into a matrix. After polyurethane hardening, two multi-unit abutments (Nobel Biocare -- Göteborg, Västra Götaland, Sweden) of 5 mm in length were manually screwed onto the implants. A previously calibrated electronic torque controller device (Nobel Biocare Torque Controller™, Göteborg, Västra Götaland, Sweden) was used to tighten the abutment screws to 20 Ncm torque. Eight strain gauges (KFG-02-120-C1-11, Strain Gages -- Kyowa Electronic Instruments Co., Ltd., Tokyo, Honshu, Japan) were bonded with cyanoacrylate on the surface of the polyurethane model on the distal (D), lingual (L), mesial (M), and buccal (B) sides of implant 1 (distal) and implant 2 (mesial), as can be seen in [Figure 1](#f01){ref-type="fig"}. Strain gauges are able to measure the tension suffered by an object or structure with which they are in close contact. The tension (ε) represents the amount of deformation of a body when submitted to a given force, which can be tensile (+) or compressive (-). 
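Since strain gauges report ε directly, converting a reading into stress uses Hooke's law, σ = E·ε. A minimal sketch of that conversion follows; the elastic modulus used in the usage line is purely illustrative and is not a value taken from this study:

```python
def stress_mpa(microstrain, elastic_modulus_gpa):
    # Hooke's law: sigma = E * epsilon. Microstrain is 1e-6 of the
    # dimensionless strain; E in GPa is converted to MPa (x1000),
    # so the result is in MPa.
    return elastic_modulus_gpa * 1e3 * microstrain * 1e-6

# A gauge reading of 300 microstrain in a material with an assumed
# (illustrative) E of 3.6 GPa corresponds to about 1.08 MPa:
print(round(stress_mpa(300, 3.6), 2))
```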
Figure 1. Positioning of the loading application point for application of the static 300 N load in the framework

The strain gauges were connected to a data acquisition device (NIcDAQ-9172 -- National Instruments Corp., Austin, Texas, USA) that sent a signal to a LabVIEW 8.1 program for Windows (National Instruments Corp., Austin, Texas, USA) installed in a computer, where inputs from the eight strain gauges were analyzed. Two frameworks simulating a cantilevered implant-supported fixed partial denture, made of different alloys (CoCr and PdAg), were used in the study. The implants in the PU model were transferred and a gypsum model (Durone IV -- Dentsply Ind. and Com., Petrópolis, RJ, Brazil) was obtained. Prosthetic cylinders were attached to the abutment replicas to construct an acrylic resin pattern (Durallay -- Reliance Dental Mfg. Co., Alsip, Illinois, USA) with the following dimensions: 55 mm in length, 4 mm in width, and 4 mm in height. The cantilever arm measured 27 mm on the distal side of the bars. A silicone matrix helped to keep the same dimensions for all frameworks. The framework patterns were cast in one piece, one in cobalt-chromium alloy (Rexillium^®^ N.B.F. -- Jeneric^®^/Pentron^®^ Incorporated, Wallingford, Connecticut, USA) cast on cobalt-chromium abutments and one in palladium-silver alloy (Pors-on 4 -- Degussa S.A., São Paulo, SP, Brazil) cast on palladium-silver abutments. To allow the correct positioning of the loading application point, a dimple was made on the upper side of the framework at 5 mm, 10 mm and 15 mm distal to the center of the terminal abutment. The frameworks were positioned on the PU model abutments and tested manually; only frameworks that adapted well to the abutments were to be approved for the tests. The two frameworks met this criterion and therefore there was no need to repeat the casts.
Then, titanium screws were tightened to 10 Ncm using an electronic torque controller (Nobel Biocare Torque Controller™, Göteborg, Västra Götaland, Sweden). The PU model was adapted and stabilized on a cylindrical steel base. The use of this rigid metallic base aimed at not interfering with the deformation of the PU model and not absorbing the load applied during the tests. Six test groups were formed (CoCr-5mm, PdAg-5mm, CoCr-10mm, PdAg-10mm, CoCr-15mm and PdAg-15mm), according to the framework alloy and to the point of load application. Test specimens were taken to a universal testing machine (model K-2000 MP -- Kratos Equipamentos Industriais Ltda., São Paulo, SP, Brazil) and baseline readings of the absolute specific deformation values developed on each strain gauge were carried out prior to load application (reading precision on the order of 1×10^-6^). Before initiating the readings of the deformation caused by loading the frameworks, the output of the measuring system was set to zero to isolate it from the deformation caused by abutment/prosthetic screw tightening. A round steel point was fixed to the load cell and adjusted to the pre-determined reference point in the framework ([Figure 1](#f01){ref-type="fig"}). The testing machine was then set to compression at a cross-head speed of 0.5 mm/min until it reached 300 N, where it stopped for one minute. The 300 N load was used to run the test according to the maximal occlusal bite force values found by Akça, et al.[@B3] (2006) for implant-supported prostheses in opposition to natural teeth. Deformation readings were taken at each one of the eight strain gauges for the duration of load application and for 1 minute after load stabilization. Only the last 30 values of deformation were taken into account to ensure the maximum and stable levels of deformation were recorded for each site. Load application was repeated 5 times to calculate the mean and the standard deviation.
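The averaging protocol above (last 30 readings of each of 5 load applications, pooled into 150 values per gauge) can be sketched as follows. The readings below are made-up illustrative numbers, not data from the study.

```python
# Hedged sketch of the averaging protocol: for each strain gauge, keep only
# the last 30 readings of each of the 5 load applications (150 values in
# total) and report their mean and sample standard deviation.
from statistics import mean, stdev

def summarize(load_applications, keep_last=30):
    pooled = []
    for readings in load_applications:
        pooled.extend(readings[-keep_last:])  # stable plateau values only
    return mean(pooled), stdev(pooled)

# Five simulated applications, each a ramp followed by a 30-reading plateau
apps = [[0.0] * 10 + [-2000.0 + i] * 30 for i in range(5)]
m, sd = summarize(apps)
print(round(m, 1), round(sd, 2))
```

Pooling across repetitions like this means the reported standard deviation captures both within-application noise and between-application variation, which is relevant to the high deviations discussed later.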
The two-way ANOVA statistical test was applied with the first variable being the type of alloy (CoCr and PdAg) and the second variable being the peri-implant region (D, L, M and B), which confirmed the presence of statistically significant differences. The Tukey test was applied to compare groups regarding the effect of the two types of alloy. There was a statistically significant difference (p\<0.05) in peri-implant regions for the force applied at the 5 mm cantilever (D1, M1, B1, D2 and B2), the 10 mm cantilever (D1, L1, M1, B1, D2, L2 and M2) and the 15 mm cantilever (D1, L1, M1, B1, D2, L2 and M2). The Pearson correlation test was applied to correlate the distance of load application on the cantilever with the values of deformation in each peri-implant region.

RESULTS
=======

The final mean deformation and standard deviation values for each strain gauge are the result of 150 deformation readings. The numerical values obtained are expressed as tension (positive values) and compression (negative values), as seen in [Table 1](#t1){ref-type="table"} and represented in [Figure 2](#f02){ref-type="fig"}.
Table 1
Final mean and standard deviation of deformation values for each strain gauge with the CoCr and PdAg alloy frameworks in three conditions of load application (in µε)

  Group   Load    D1                 L1                 M1               B1                D2                L2                 M2                 B2
  CoCr    5 mm    -2181.89 ±215.48   -2004.14 ±459.63   -178.39 ±72.39   398.77 ±99.60     -343.7 ±109.99    -2276.29 ±485.69   1118.65 ±1874.19   -17.51 ±66.37
          10 mm   -3113.64 ±70.00    -3160.51 ±93.10    -633.56 ±77.28   547.47 ±7.08      -215.146 ±65.30   -3885.74 ±240.98   90.39 ±9.50        -163.74 ±53.17
          15 mm   -4302.05 ±81.58    -5538.95 ±101.36   -773.88 ±80.82   1018.86 ±114.05   -43.17 ±26.09     -6003.71 ±250.94   120.29 ±9.80       -252.24 ±46.24
  PdAg    5 mm    -1397.19 ±35.68    -1704.65 ±119.33   56.28 ±14.21     1231.87 ±14.56    -33.97 ±12.11     -1963.47 ±56.90    118.02 ±39.03      -112.62 ±16.90
          10 mm   -2180.87 ±62.86    -2400.27 ±113.45   -102.24 ±13.25   1351.35 ±83.30    11.29 ±33.72      -3364.77 ±139.45   138.38 ±8.36       -183.15 ±9.34
          15 mm   -3960.32 ±56.65    -5034.83 ±271.49   419.63 ±47.96    2009.83 ±52.13    25.42 ±7.56       -5019.2 ±312.71    127.31 ±16.73      -249.93 ±16.01

D1-B1: strain gauges on implant 1 (distal); D2-B2: strain gauges on implant 2 (mesial).

Figure 2. Graphic of the deformation means captured by the strain gauges in the CoCr and PdAg alloy groups

With the load applied on the cantilever, for all six groups the most relevant compression results occurred on the distal (D1) and lingual (L1) sides of implant 1 (distal), and on the lingual (L2) side of implant 2 (mesial). Tension occurred on the buccal (B1) side of implant 1. According to Suedam, et al.[@B29] (2009), we cannot sum the deformation measured in every peri-implant region of each implant and consider this value as the deformation of the whole, because each component of the prosthesis/abutment/implant/bone system can be found under various conditions of adaptation and load. As a result, a quantitative and qualitative evaluation of the results based on the statistical tests becomes necessary, which gives us a view of the biomechanical behavior of the entire system involved, and not only of the strain gauges or of the peri-implant regions individually.
The results of the Tukey test ([Table 2](#t2){ref-type="table"}) demonstrated that the difference in the frameworks' elastic modulus influenced the intensity of deformation that occurred in the peri-implant region, as can be noted in [Table 1](#t1){ref-type="table"} and in [Figure 2](#f02){ref-type="fig"}. The Pearson correlation test showed significant correlations ([Table 2](#t2){ref-type="table"}).

Table 2
Tukey test for comparisons between groups and Pearson correlation test (distance × deformation) for each group[^1]

  Comparison            Load/Stat   D1          L1          M1          B1          D2          L2          M2          B2
  CoCr × PdAg (Tukey)   5 mm        0.000247*   0.196314    0.000296*   0.000223*   0.000433*   0.190638    0.266985    0.014714*
                        10 mm       0.000223*   0.000223*   0.000223*   0.000223*   0.000318*   0.003219*   0.000237*   0.445044
                        15 mm       0.000260*   0.004769*   0.000223*   0.000223*   0.000660*   0.000754*   0.441627    0.9187
  CoCr (Pearson)        r           -0.9875     -0.967      -0.9232     0.9183      0.8741      -0.9772     -0.3776     -0.8797
                        p value     0.000*      0.000*      0.000*      0.000*      0.000*      0.000*      0.165       0.000*
  PdAg (Pearson)        r           -0.9748     -0.9417     0.674       0.9182      0.767       -0.9887     0.1591      -0.9742
                        p value     0.000*      0.000*      0.006*      0.000*      0.001*      0.000*      0.571       0.000*

DISCUSSION
==========

Knowledge of the amount of mechanical stress generated in the peri-implant area when load is applied along the cantilever arm is essential for the planning, execution and longevity of treatment with implant-supported prostheses. This study showed that the pattern of bone deformation generated by applying a static force of 300 N varied according to:

1- The position where the strain gauges were located in the peri-implant region (D1, L1, M1, B1, D2, L2, M2 and B2);
2- The point of load application (with 5 mm, 10 mm and 15 mm cantilevers);
3- Implant position relative to load application (I1 and I2);
4- Type of alloy used for making the frameworks (CoCr and PdAg).
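The distance × deformation correlation in Table 2 can be sketched in plain Python. This is an illustration only: it uses the three CoCr D1 group means from Table 1, whereas the published r of -0.9875 was computed over the repeated readings, so the value differs slightly.

```python
# Hedged sketch of the Pearson correlation between cantilever load distance
# and peri-implant deformation. Inputs are the CoCr D1 group means from
# Table 1; the study's own r was computed over all repeated readings.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

distances_mm = [5.0, 10.0, 15.0]               # load application points
d1_means_ue = [-2181.89, -3113.64, -4302.05]   # CoCr D1 means (microstrain)
r = pearson_r(distances_mm, d1_means_ue)
print(round(r, 3))  # strongly negative: compression grows with distance
```

The strongly negative r matches the pattern in Table 2: at the compressive sites, deformation magnitude increases monotonically with cantilever length.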
For all studied groups the behavior was singular, with tension forces present to a larger degree in the strain gauge located at the buccal region (B1) of implant 1, and compressive forces present to a larger degree in the strain gauges at the distal and lingual regions (D1, L1) of implant 1. This behavior can be due to the curved shape of the framework, where the resulting force tends to rotate the whole system to the distal and buccal sides. The mean and standard deviation were calculated after five load applications for each group, the result of 150 partial readings. After each load application, each component of the prosthesis/abutment/implant/bone system suffered deformation under stress. This condition, associated with different degrees of fit among components, is a possible cause of the high standard deviation values found in this study. According to data from other experiments[@B4] ^,^ [@B5] ^,^ [@B13] ^,^ [@B21] ^,^ [@B23] ^,^ [@B28] ^,^ [@B29] with cantilevered prostheses, the most distal implants represent the fulcrum and, therefore, are subjected to compression forces, while intermediary abutments suffer tension. In this study, the peri-implant regions were divided into four (D, L, M and B), allowing the observation that the distal and lingual sides of the most distal implant (I1) were subject to higher compression forces and that these values increased as the cantilever increased, as verified by the Pearson correlation test ([Table 2](#t2){ref-type="table"}). The numerical values expressed as tension and compression for both alloys are the result of framework behavior under load application, where the alloy's elastic modulus influences the type of deformation and consequently the tension transmitted to the bone[@B2] ^,^ [@B10] ^,^ [@B11] ^,^ [@B13] ^,^ [@B17] ^,^ [@B29].
Because of the lower elastic modulus of the palladium-silver alloy compared to the cobalt-chromium alloy, and consequently its smaller flexural resistance, the results expressed in [Table 1](#t1){ref-type="table"} and in [Figure 2](#f02){ref-type="fig"} demonstrated that when the load was applied to the CoCr alloy groups, larger compression values were recorded compared to the PdAg alloy groups for the same cantilever distances. According to Rubo & Souza[@B23] (2009) and Suedam, et al.[@B29] (2009), the PdAg alloy deflects more, absorbing part of the load applied to the cantilever, resulting in compression forces of lower intensity being transmitted to the surrounding bone. On the other hand, because of its greater deflection when compared to the CoCr alloy, the PdAg alloy presented the largest tension values on the buccal side of implant 1 (I1), as confirmed by the Tukey test. The load is transmitted to the surrounding bone, where most of it is absorbed at the expense of deformation of the bone structure, which is the least rigid structure in the system. Physiologic levels of tension also serve the purpose of bone remodeling; this mechanism would help maintain bone structural integrity indefinitely[@B22]. Nevertheless, mechanical overload can lead to biological failure[@B24]. When a pathological overload is applied to an osseointegrated implant, tension exceeds the physiological threshold tolerated by the bone and microfractures may occur at the implant-bone interface. Repeated overload can lead to fatigue failure of the implant-bone interface, reducing peri-implant bone density and leading to the formation of bone defects such as craters. The pathologic overload window of Frost's theory represents this situation, when bone undergoes tensions above 4000 µε, being prone to resorption.
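A rough way to see why the stiffer framework transmits more load is the classic end-loaded cantilever-beam formula, δ = FL³/(3EI), in which tip deflection is inversely proportional to the elastic modulus E. The sketch below is illustrative only: it treats the bar as a uniform Euler-Bernoulli beam with the 4 mm × 4 mm cross-section from the methods, and the moduli are assumed textbook values (≈220 GPa for CoCr, ≈95 GPa for PdAg), not quantities measured in this study.

```python
# Hedged sketch: tip deflection of an end-loaded cantilever beam,
# delta = F * L**3 / (3 * E * I). Moduli are ASSUMED textbook values,
# not measurements from this study.
def tip_deflection(force_n, length_m, e_pa, inertia_m4):
    return force_n * length_m**3 / (3 * e_pa * inertia_m4)

b = h = 0.004                 # 4 mm x 4 mm bar cross-section (from methods)
I = b * h**3 / 12             # second moment of area, rectangular section
F, L = 300.0, 0.015           # 300 N applied at the 15 mm cantilever point

E_COCR = 220e9                # assumed ~220 GPa for CoCr
E_PDAG = 95e9                 # assumed ~95 GPa for PdAg

d_cocr = tip_deflection(F, L, E_COCR, I)
d_pdag = tip_deflection(F, L, E_PDAG, I)
ratio = d_pdag / d_cocr       # deflection scales exactly as 1/E
print(round(d_cocr * 1000, 3), round(d_pdag * 1000, 3), round(ratio, 2))
```

Because δ ∝ 1/E, the lower-modulus PdAg bar deflects roughly 2.3 times more than the CoCr bar under the same load, consistent with the greater flexure, greater buccal tension, and lower transmitted compression described above.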
It is important to note that inflamed peri-implant tissue reacts differently to occlusal overload, promoting increased bone resorption, as demonstrated by Kozlovsky, et al.[@B15] (2007). The measurement of tension generated in the peri-implant area gave us the possibility of correlating these values with the bone remodeling theory[@B8] in an attempt to clarify the biological process that takes place in that area, considering an ideal clinical condition. According to the polyurethane model validation studies made by Moretti, et al.[@B19] (2011) and Miyashiro, et al.[@B18] (2011), the homogeneity of polyurethane (PU) could favor its use in biomechanical studies of force distribution on implant-supported prostheses, aimed at establishing correlations between strains generated in the peri-implant region and physiological strains as proposed by Frost's theory. Nevertheless, it is known that considerable differences exist between this study and clinically integrated implants. Although polyurethane can present an elastic modulus similar to bone, other features, such as anisotropy, are difficult to mimic. This study does not claim that the strains found in the polyurethane model match precisely the *in vivo* situation, but acknowledges the biomechanical process of load transmission in an attempt to understand how bone tissue processes these transmitted loads. A strain diagram was used as a graphic representation of the deformation readings generated on each side of the peri-implant region. This diagram consists of a circular figure in a target shape with scales of 0 µε to 7000 µε, where readings of the deformations generated on the distal, lingual, mesial and buccal sides of each implant of the groups CoCr-15 mm and PdAg-15 mm are visualized ([Figure 3](#f03){ref-type="fig"}). In these diagrams, tensions above 4000 µε can be seen for the CoCr-15 mm group (D1=-4302.05 µε, L1=-5538.95 µε); the same occurring with the PdAg-15 mm group (D1=-3960.32 µε and L1=-5034.83 µε).
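The threshold comparison above can be sketched as a simple check of strain magnitudes against the ~4000 µε limit of Frost's pathologic overload window, using the 15 mm cantilever means from Table 1 for implant 1. Note that with a strict cutoff, PdAg D1 (-3960.32 µε) sits just below the nominal threshold, even though it is grouped with the overloaded sites in the text.

```python
# Illustrative check of peri-implant strain magnitudes against the ~4000
# microstrain pathologic overload threshold from Frost's theory.
# Values: 15 mm cantilever means from Table 1, implant 1 sides only.
FROST_OVERLOAD_UE = 4000.0

strain_15mm_ue = {
    "CoCr": {"D1": -4302.05, "L1": -5538.95, "M1": -773.88, "B1": 1018.86},
    "PdAg": {"D1": -3960.32, "L1": -5034.83, "M1": 419.63, "B1": 2009.83},
}

def overloaded_sites(group):
    """Sites whose strain magnitude strictly exceeds the Frost threshold."""
    return [site for site, ue in group.items() if abs(ue) > FROST_OVERLOAD_UE]

for alloy, sites in strain_15mm_ue.items():
    print(alloy, overloaded_sites(sites))
```

With a strict cutoff the check flags D1 and L1 for CoCr but only L1 for PdAg; the PdAg D1 value narrowly misses the threshold, which is worth keeping in mind when reading the resorption-risk conclusion drawn below.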
Based on the literature, these results show that the two groups presented peri-implant regions within the pathologic overload window, being prone to bone resorption. According to this finding, cantilever arms smaller than 15 mm should be considered during the treatment planning of mandibular implant-supported fixed partial dentures.

Figure 3. Strain diagrams with score label for groups CoCr-15mm (a) and PdAg-15mm (b)

CONCLUSIONS
===========

Under the limited conditions of this *in vitro* study, the following conclusions were drawn: (1) The point of load application on the cantilever arm influenced the deformation of the peri-implant regions; (2) The type of alloy used for fabricating the framework influenced the biomechanical behavior and the deformations of the peri-implant regions; (3) Cantilever arms smaller than 15 mm must be considered for mandibular implant-supported fixed partial dentures.

This study was financially supported by CAPES (Coordination of Higher Education and Graduate Training) and FAPESP (São Paulo Research Foundation) grants 99/01402-6 and 05/56182-3.

[^1]: \* statistically significant difference for p\<0.05
487 F.2d 1205
159 U.S.App.D.C. 334

UNITED STATES of America
v.
Michael H. HINKLE, Appellant.

No. 72-1990.

United States Court of Appeals, District of Columbia Circuit.

Argued Sept. 5, 1973.
Decided Nov. 7, 1973.
Rehearing Denied Dec. 4, 1973.

Robert L. Weinberg, Washington, D. C., and John F. Mathews, appointed by this Court, for appellant. Lee Cross, Asst. U. S. Atty., with whom Harold H. Titus, Jr., U. S. Atty., John A. Terry, and Warren L. Miller, Asst. U. S. Attys., were on the brief, for appellee. Before BAZELON, Chief Judge, and LEVENTHAL and ROBINSON, Circuit Judges. PER CURIAM: 1 Appellant was indicted for second degree murder. The evidence showing that Hinkle stabbed the decedent was undisputed. Hinkle, himself, testified that he had no recollection of the events that took place on the evening in question because he was intoxicated. His defense rested on claims of self-defense, provocation, lack of malice, and a contention that the fatal wound was not the one he administered, but one that occurred during the surgery occasioned by the initial wound. The jury found him guilty as charged, and he was sentenced to five to twenty years, to run concurrently with sentences in two other cases. 2 On appeal, Hinkle raises several challenges to his conviction: failure to hold a coroner's inquest into the cause of death; improper jury instructions on the definition of malice; failure to allow the jury to consider appellant's intoxication in deciding whether he acted with sufficient "recklessness" to justify a finding of second degree murder; and failure to grant a subpoena duces tecum for production of the deceased's juvenile records. 3 We do not address the issues of whether appellant's first and last contentions constitute error, for we find that even if they were error, in the context of this case they were harmless. Although appellant was entitled to a coroner's inquest, Crump v. Anderson, 122 U.S.App.D.C.
173, 352 F.2d 649 (1965), the likelihood that the inquest if held would have produced evidence tending to exculpate him is so remote that we see no justification for reversing, and in effect (since a coroner's inquest is now impossible),1 dismissing his homicide charge. Similarly, even if Hinkle were entitled to subpoena the deceased's juvenile records, it seems inconceivable that they would be of any assistance to him. He claims that they might show prior acts that indicate a propensity toward violence, and thereby buttress his claim of self-defense.2 But appellant's case on self-defense was virtually non-existent, and thus it does not appear that he was harmed by being denied the juvenile records. 4 Appellant's claim that the trial court gave an improper jury instruction on malice is a troubling one. He requested the proper instruction as set forth in our decision in United States v. Bush, 135 U.S.App.D.C. 67, 416 F.2d 823 (1969). Bush prohibited use of the phrase " 'malice' is a state of mind showing a heart regardless of social duty," and in subsequent cases we have advised that the Bush instruction should be used "to avoid a claim of reversible error." Carter v. United States, 141 U.S.App.D.C. 259, 437 F.2d 692, 697 (1970).3 In the face of these decisions, defense counsel's request, and his subsequent objection, the court still gave the improper "social duty" instruction. The Government now admits that the court's instruction was error, but argues it was harmless. We are concerned that the court would simply ignore the proper instruction in these circumstances, but since death was caused by a knife wound, we do not find that appellant was harmed by the erroneous instruction.4 We hope that in the future, trial courts do not place us in this sort of difficult situation. 
5 We take this occasion to amplify on Bush by condemning interrelated portions of the "old" standard instruction: 6 "Malice" is a state of mind showing a heart regardless of social duty, a mind deliberately bent on mischief, a generally depraved, wicked and malicious spirit. 7 In Bush, as indicated above, we set forth the need for eliminating the phrase whereby any violation of "social duty" or "duty" might be equated to malice, even though not dangerous to life or limb. On further reflection, we conclude that similar problems of over-reach are presented by the segment that defines malice in terms of "a mind deliberately bent on mischief, a generally depraved, wicked and malicious spirit." Juries are to determine whether specific acts have been committed with requisite culpability, not whether defendants have generally depraved, wicked and malicious spirits. A sound replacement for the original sentence would be simply this: 8 "Malice" is a state of mind showing a heart that is without regard for the life and safety of others. 9 Here again we recognize that there are cases where the old instruction could lead a jury to misconstrue its role or be otherwise prejudicial; however, the facts before us do not present such a case. Although we do not reverse Hinkle's conviction, we trust that our comments on the deficiency of the old "standard" instruction will be given heed. 10 Appellant also alleges error in the failure of the trial court to instruct the jury as to the difference in the nature of recklessness required for second degree murder, and that required for manslaughter. Although we do not foreclose consideration of this issue in an appropriate case, the facts here do not justify serious consideration of the matter at this time. 11 Otherwise we find appellant's trial without error. His conviction is therefore 12 Affirmed. 1 The office of coroner and the statutory requirement of an inquest have been abolished in the District of Columbia 2 See Evans v.
United States, 107 U.S.App.D.C. 324, 277 F.2d 354 (1960) 3 See also United States v. Lumpkins, 141 U.S.App.D.C. 387, 439 F.2d 494 (1970) 4 See United States v. Johnson, 140 U.S.App.D.C. 54, 433 F.2d 1160, 1164 n. 27 (1970); United States v. McCall, 148 U.S.App.D.C. 444, 460 F.2d 952, 958 (1972)
NCAA Coaches Among 10 Charged With Fraud and Corruption

Since 2015, the FBI has been investigating the criminal influence of money on coaches and players in the NCAA, federal authorities said. John Chandler reports. (Published Tuesday, Sept. 26, 2017)

Four college basketball assistant coaches charged in a bribery scheme were among eight people indicted Tuesday by a federal grand jury in New York City. The charges and accusations in three indictments largely mirrored the facts found in criminal complaints filed against the men when they were arrested in late September. An indictment, though a procedural step, is a document prosecutors rely upon at trial. Prosecutors said the men were accused of using bribes to influence star athletes' choice of schools, shoe sponsors and agents. They face fraud and other charges that carry potential penalties upon conviction of decades in prison. The assistant coaches charged were Chuck Person, 53, of Auburn, Emanuel "Book" Richardson, 44, of Arizona, Tony Bland, 37, of Southern California and Lamont Evans, 40, of Oklahoma State. After their arrests, Person and Evans were suspended and Bland was placed on administrative leave. Richardson was suspended and is appealing the school's effort to dismiss him. The time to return an indictment was extended for a month for two defendants: Brad Augustine, the AAU program director who stepped down, and financial adviser Munish Sood. Augustine was accused in a criminal complaint in September with brokering and facilitating corrupt payments in exchange for a promise from players to retain the services of Sood and a sports agent also charged in the case while Sood was described as paying bribes to the coaches. In late October, prosecutors said in court papers that it was continuing discussions with lawyers for Sood and Augustine to bring about a possible disposition of the charges against them before indictment.
"Chuck Person did not commit a crime and we're confident he will be vindicated at trial after a jury hears all the evidence," attorney Theresa Trzaskoma said. Attorney Jeffrey Lichtman said Bland was a hardworking and well-regarded assistant coach who was "being scapegoated for all the ills of college basketball — all due to an alleged $13,000 payment." "No multi-millionaire head coach was charged, or any multi-billion dollar sneaker company after years of investigation. It's not fair and anyone who knows anything about college basketball knows this to be true," Lichtman added.
George Boone figured that 70 yards away was the closest he could get to his late wife's grave site during his visit to Arlington National Cemetery. At 96, the WWII veteran lacked the strength to make the trek from the car on his own. But then two observers stepped up to help him. Boone, a former B-24 pilot who was shot out of the sky and held as a prisoner of war by the Nazis during his service, traveled to Washington, D.C. from North Carolina on an Honor Flight. Although his wife Alma's grave deviated from the planned tour, volunteers wanted to make sure Boone could visit his beloved, who was laid to rest in April 2008. As the group rushed to the site on Saturday, Boone's son, Jon, realized they'd left a crucial thing behind. "I said, 'Dad, I forgot the wheelchair. Do you think you can walk with assistance?'" he told CNN. His father said no. But the two other people in the car, who the younger Boone described as volunteers, weren't giving up on their mission. They made a chair out of their arms and lifted Boone all the way to his wife's resting place, where they held him up for 10 minutes while he paid his respects. The male volunteer who assisted him was so moved by the exchange that he offered to carry the veteran back to the car on his own. Although Boone refused at first, his son says the man insisted. The stranger's gesture left him speechless, as did the entire Honor Flight's recognition of what veterans have done for their country. "Without a doubt, it gives you so much pride to be an American," Jon Boone said. "It's not all what we see on the news. There are incredible people out there waiting to do good things and show acts of kindness."

(Photo caption: WWII veteran George Boone is carried to his wife's grave site during his visit to Arlington National Cemetery. Courtesy Jon Boone via CNN)