Q:
Get the H-Axis (X-Axis) value of a Google area chart
In the Google Charts DataTable object, I can get the currently selected point's value (V-Axis/Y-Axis) using datatable.getValue(...). However, I want to get the time/date/year from the X-Axis (see screenshot), and I did not find any DataTable function to achieve that. Does anyone know how?
This is my code
google.visualization.events.addListener(chart, 'select', function () {
    var selectedItem = chart.getSelection()[0];
    if (selectedItem) {
        // Get the current Y-axis value, which is 1120
        var value = data.getValue(selectedItem.row, selectedItem.column);
        alert('The user selected ' + value);
        // How can I get the value of 2015, which is the X-axis value?
    }
});
A:
In most cases, your axis value will be in column 0, so just change out selectedItem.column for 0, and you will have the axis value:
var axisValue = data.getValue(selectedItem.row, 0);
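For reference, here is a minimal sketch of a fuller select handler that reads both values at once. It assumes, as in the question, that data is the google.visualization.DataTable backing the chart and that column 0 holds the domain (X-axis) values; the alert text is illustrative.

google.visualization.events.addListener(chart, 'select', function () {
    var selectedItem = chart.getSelection()[0];
    if (selectedItem && selectedItem.row != null && selectedItem.column != null) {
        // Column 0 holds the domain (X-axis) value, e.g. 2015
        var axisValue = data.getValue(selectedItem.row, 0);
        // The selected column holds the series (Y-axis) value, e.g. 1120
        var value = data.getValue(selectedItem.row, selectedItem.column);
        // The column label gives the series name
        var seriesName = data.getColumnLabel(selectedItem.column);
        alert('The user selected ' + seriesName + ' = ' + value + ' at ' + axisValue);
    }
});

Note that chart.getSelection() returns an empty array when the user deselects, which the guard above handles.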
Top seeds
The odds of Super Bowl LII featuring the top seed from each conference are a long shot.
Since 1975, when the NFL's playoff format began basing home field advantage on teams' regular-season winning percentages, Super Bowl LII (2017 season) will mark just the thirteenth time the game showcases each conference's top seed.
Following the NFL-AFL merger in 1970 and through 1974, the realigned NFL set a predetermined postseason schedule for a four-year period based on specific divisions hosting playoff games regardless of the records of that season’s playoff teams. That explains why, in 1972, the 14-0 Miami Dolphins had to travel to Pittsburgh for the AFC championship game against the 11-3 Steelers. The format that year had the AFC Central Division champions hosting the game if they won the divisional playoff one week earlier.
Since home field advantage began to count in the post-merger playoffs, 24 of the NFC's top seeds have reached the Super Bowl. The 2016 New England Patriots were just the 23rd top-seeded AFC team to reach the Super Bowl, and only eight of their predecessors were crowned world champions.
Here is a look at the year-by-year summary of how the top seeded teams in the AFC and NFC fared since 1975.
From 1970–1974, the NFL used a predetermined playoff format in which winning percentage was not considered. (Teams listed in capitals won the Super Bowl.)

Season – Top Seeded AFC Team – Result
2017 – New England Patriots – Lost Super Bowl LII
2016 – NEW ENGLAND PATRIOTS – Won Super Bowl LI
2015 – DENVER BRONCOS – Won Super Bowl 50
2014 – NEW ENGLAND PATRIOTS – Won Super Bowl XLIX
2013 – Denver Broncos – Lost Super Bowl XLVIII
2012 – Denver Broncos – Lost Divisional Playoff Game
2011 – New England Patriots – Lost Super Bowl XLVI
2010 – New England Patriots – Lost Divisional Playoff Game
2009 – Indianapolis Colts – Lost Super Bowl XLIV
2008 – Tennessee Titans – Lost Divisional Playoff Game
2007 – New England Patriots – Lost Super Bowl XLII
2006 – San Diego Chargers – Lost Divisional Playoff Game
2005 – Indianapolis Colts – Lost Divisional Playoff Game
2004 – Pittsburgh Steelers – Lost Championship Game
2003 – NEW ENGLAND PATRIOTS – Won Super Bowl XXXVIII
2002 – Oakland Raiders – Lost Super Bowl XXXVII
2001 – Pittsburgh Steelers – Lost Championship Game
2000 – Tennessee Titans – Lost Divisional Playoff Game
1999 – Jacksonville Jaguars – Lost Championship Game
1998 – DENVER BRONCOS – Won Super Bowl XXXIII
1997 – Kansas City Chiefs – Lost Divisional Playoff Game
1996 – Denver Broncos – Lost Divisional Playoff Game
1995 – Kansas City Chiefs – Lost Divisional Playoff Game
1994 – Pittsburgh Steelers – Lost Championship Game
1993 – Buffalo Bills – Lost Super Bowl XXVIII
1992 – Pittsburgh Steelers – Lost Divisional Playoff Game
1991 – Buffalo Bills – Lost Super Bowl XXVI
1990 – Buffalo Bills – Lost Super Bowl XXV
1989 – Denver Broncos – Lost Super Bowl XXIV
1988 – Cincinnati Bengals – Lost Super Bowl XXIII
1987 – Denver Broncos – Lost Super Bowl XXII
1986 – Cleveland Browns – Lost Championship Game
1985 – Los Angeles Raiders – Lost Divisional Playoff Game
1984 – Miami Dolphins – Lost Super Bowl XIX
1983 – LOS ANGELES RAIDERS – Won Super Bowl XVIII
1982 – Los Angeles Raiders – Lost Second Round Playoff
1981 – Cincinnati Bengals – Lost Super Bowl XVI
1980 – San Diego Chargers – Lost Championship Game
1979 – San Diego Chargers – Lost Divisional Playoff Game
1978 – PITTSBURGH STEELERS – Won Super Bowl XIII
1977 – Denver Broncos – Lost Super Bowl XII
1976 – OAKLAND RAIDERS – Won Super Bowl XI
1975 – PITTSBURGH STEELERS – Won Super Bowl X
As set out in the above referenced applications/patents, the Applicant has spent a substantial amount of time and effort in developing printheads that incorporate micro electro-mechanical system (MEMS)-based components to achieve the ejection of ink necessary for printing.
As a result of the Applicant's research and development, the Applicant has been able to develop printheads having one or more printhead chips that together incorporate up to 84,000 nozzle arrangements. The Applicant has also developed suitable processor technology that is capable of controlling operation of such printheads. In particular, the processor technology and the printheads are capable of cooperating to generate resolutions of 1600 dpi and higher in some cases. Examples of suitable processor technology are provided in the above referenced patent applications/patents.
Common to most of the printhead chips that the Applicant has developed is a component that moves with respect to a substrate to eject ink from a nozzle chamber. This component can be in the form of an ink-ejecting member that is displaceable in a nozzle chamber to eject the ink from the nozzle chamber.
A particular difficulty that the Applicant has been faced with is to achieve a suitable interface between a prime mover in the form of an actuator and the moving component. This interface is required to permit the moving component to be displaced in the nozzle chamber and to inhibit leakage of ink from the nozzle chamber.
As set out in the above referenced patents/patent applications, the printhead chip is manufactured using integrated circuit fabrication techniques. This is the usual manner in which MEMS-based devices are fabricated. Such forms of fabrication are subject to constraints since they involve successive deposition and etching techniques. It follows that MEMS-based devices are usually formed in layers and that components having relatively complex shapes are difficult and expensive to fabricate.
In FIG. 1, reference numeral 10 generally indicates part of a nozzle arrangement of a printhead chip. The part 10 shown illustrates an actuator 12 and an ink-ejecting member 14. The actuator 12 includes an elongate actuator arm 16 that extends from an anchor 18. The actuator arm 16 is configured so that, when it receives a drive signal, the actuator arm 16 bends towards a substrate 20 as indicated by an arrow 22. A connecting formation 24 is interposed between the actuator arm 16 and the ink-ejecting member 14. Thus, when the actuator arm 16 is bent towards the substrate 20, the ink-ejecting member 14 is displaced in the direction of an arrow 26 to eject ink from the nozzle chamber.
It would be intuitive simply to use the arrangement 10 together with a suitable sealing structure to achieve effective ink ejection and sealing. The reason for this is that it would appear that the actuator arm 16, the connecting formation 24 and the ink-ejecting member 14 could be in the form of a unitary structure. However, the Applicant has found that it is not possible to achieve a working configuration as shown by using MEMS-based fabrication techniques. In particular, it has been found by the Applicant that such a unitary structure does not lend itself to such fabrication techniques.
It follows that the Applicant has been led to conceive the present invention.
1. Field of the Invention
The present invention relates to a method for manufacturing float glass.
2. Discussion of the Background
A molten metal bath used in the method of manufacturing float glass is generally divided into three regions along the direction of movement of a glass ribbon. A first region, called the fire polishing region, is adapted to receive molten glass on the surface of the molten metal bath; there, a glass ribbon having an equilibrium thickness is formed while the width of the glass ribbon is expanded and, at the same time, the surface of the ribbon is made flat. Generally, soda-lime glass is used for the molten glass, and it is kept at a temperature of 1,110 °C–950 °C. A second region is used for forming the glass ribbon to a predetermined thickness. The second region is constructed especially so that, when glass having a thickness lower than the equilibrium thickness is to be formed, a pulling force is applied to the glass ribbon in its longitudinal direction while top rollers are engaged with both edges of the glass ribbon to suppress contraction of the glass ribbon in its width direction, thereby forming the glass ribbon to a predetermined thickness. In the second region, the top rollers are placed so as to be engageable with the glass ribbon. Further, the second region is kept at a temperature sufficient to change the thickness of the glass ribbon when it is pulled by a pulling force; namely, the glass ribbon is formed at a temperature range of about 950 °C–800 °C when soda-lime glass is used. A third region is adapted so that the glass ribbon formed to a predetermined thickness can be drawn from the molten metal bath and cooled to a temperature suitable for transfer by means of rollers. The third region is kept at a temperature ranging from about 800 °C to 600 °C when soda-lime glass is used.
A temperature distribution formed in the molten metal bath in its longitudinal direction has been attained by changing the depth of the metal bath, as shown in Japanese Examined Patent Publication No. 18353/1966, or by arranging a barrier at the boundary of each region. However, in the method of obtaining a predetermined temperature distribution by changing the depth of the metal bath, it is necessary to use a molten metal bath having a depth of 40 mm in order to avoid reduction in processability. Accordingly, when a temperature distribution is formed in the metal bath in its longitudinal direction, a strong convection current takes place in the molten metal bath, so that the gradient of temperature in the bath becomes flat. Therefore, it is necessary to obtain a predetermined temperature distribution by increasing the length of the molten metal bath. However, this increases the amount of released heat, and a large-sized apparatus is required to manufacture a glass ribbon.
In the method of using a barrier in the molten metal bath, a large temperature difference is produced between the upstream side and the downstream side of the barrier, and a strong convection in a spiral form takes place along the barrier. This convection current changes the temperature distribution in the molten metal bath, whereby small stripe-like ridges and recesses, i.e. a so-called distortion, result in the glass ribbon.
Further, in the latter method, it is necessary to set the upper end of the barrier 20 mm–30 mm lower than the bath surface of the molten metal in order to avoid contact between the barrier and the glass ribbon. Therefore, an effect of interrupting heat from the molten metal cannot be obtained. In the conventional method, the depth of the metal bath is determined so that the longest thin glass ribbon can be produced. Accordingly, when a glass ribbon having a relatively large thickness is manufactured, there is a useless glass ribbon forming region, which results in increased heat loss.
On the other hand, the use of a movable barrier system has been proposed. However, a movable barrier cannot be used for a vessel in which the cross-sectional area in its width direction is not uniform.
Same goes for the actual product. At a fraction of the money ask, Checkpoint Asia does nearly as much quantity-wise, and picks content that, in my opinion, is smarter and more worthwhile for the long run than what anybody else does.
Thus far 33 of you are backing the site for the next 3 months with $805. Thank you!
Italy Becomes First G7 Nation to Sign Up for China’s Belt and Road
“…if Berlin and Paris have so far refrained from explicitly endorsing the BRI, the reasons are fundamentally different from those of Washington.”
‘Xi in Rome: The escort fit for kings’ — that was the headline of one of Italy’s oldest and most read newspapers, Corriere della Sera. The Chinese president Xi Jinping was greeted at the presidential Quirinale Palace by guards on horseback — an honour that was last bestowed on then-Pope Benedict XVI in 2010. Xi also attended a gala dinner at the Quirinale, with a concert by singer Andrea Bocelli. (Celine Dion once said, “If God would have a singing voice, he must sound a lot like Andrea Bocelli.”)
To be sure, Italy has pulled out the red carpet. Italy, being the cradle of Western civilisation, has a profound sense of history, and is intensely conscious that Xi’s visit is destined to be a seminal event of the 21st century that marks the resurgence of Asia and dawn of the era of Chinese influence.
Once again, Italy, which had given birth to the Renaissance triggering a combination of economic and political transformations in Europe that eventually led to nearly five centuries of Western dominance, is positioning itself as the vanguard and arbiter of a new global order that is taking shape. It is going to be a new era of ‘contested modernity’ where the central player will be China (to borrow an expression from Martin Jacques’When China Rules the World: The End of the Western World and the Birth of a New Global Order.)
Perhaps it is just as well that Italy, which is a civilisation-state itself, is assuming the role of the arbiter between the West and China. For, as Martin Jacques argues in his book, China will remain highly distinctive and will not become a Western-style society; its multi-faceted influence goes far beyond its growing economic dominance and is bound to be felt as political and cultural influence, with imperatives, priorities and values that are quite different from the West’s.
No doubt, China’s rise signals not only the end of the global dominance of the West. What is less obvious but eventually more poignant is that it also marks the emergence of a world which China is destined to shape in myriad ways, much of it increasingly disconcerting and unfamiliar to the West.
Therefore, the enormous resistance in the West, led by the United States, to Italy’s formal decision to join the Belt and Road Initiative (BRI) needs to be put in perspective.
Indeed, Italy has its motivations, too. Italy has decided to break ranks with the G7 since it needs money for infrastructure upgrading. A highlight of Xi’s visit was expected to be the signing of a memorandum of understanding, which should lead to increased Chinese participation in the historic port city of Trieste and perhaps three other harbours, including Genoa and Palermo.
It is what China would call a ‘win-win’. The BRI is a centre piece of Chinese foreign policy strategy and carries Xi’s imprimatur. Besides, Trieste, situated at the northern tip of the Adriatic Sea, is an ideal duty-free port in Europe for China to get easy access to European markets if it invests in ‘Make in Italy’ enterprises — assembling Chinese products with value added. For Italy, of course, that brings jobs to the region.
However, China also appreciates that Italy is exercising strategic autonomy when it turns its back on the US and the European Union by endorsing the BRI. A furious White House statement said, “Endorsing BRI lends legitimacy to China’s predatory approach to investment and will bring no benefits to the Italian people.”
But Italy has ignored the criticism. Italy is not in any mortal danger of falling into a debt trap, as interest rates on Chinese loans are not excessive for viable governments.
Also, Italy has no reason to identify with the US efforts to contain China’s expanding global influence, particularly with regard to strategic infrastructure projects such as harbours, airports and 5G networks.
The EU’s reservations are an entirely different affair. For EU, China is a ‘systemic rival’ (as against the US’ description of China as ‘strategic rival’), which poses threat to their industries. The EU shares some of the US concerns such as the lack of access to the Chinese market, forced technology transfer and exclusion from major Chinese projects. The EU’s trade deficit with China in 2017 was around US$200 billion or 30.7 per cent of total trade in goods between the two sides. But this is not a foreign policy issue for the EU.
In fact, on Tuesday, German Chancellor Angela Merkel pushed back against American calls to impose a blanket ban on Chinese telecoms equipment vendor Huawei, which would prevent it from providing gear for Germany’s 5G networks. Merkel said in Berlin:
“There are two things I don’t believe in. First, to discuss these very sensitive security questions publicly, and second, to exclude a company simply because it’s from a certain country. The (German) government has said our approach is not to simply exclude one company or one actor, but rather we have requirements of the competitors for this 5G technology.”
That is to say, while Germany is weighing strong requirement limits for Chinese telecoms equipment vendor Huawei and its regulators are drafting stricter requirements for operators and vendors on cybersecurity, Berlin has dismissed calls for blanket bans on Huawei and ZTE. Calling China a “systemic” competitor, Merkel said on Tuesday that “the answer can’t be that we fight those who are economically strong. We must stand up for fair, reciprocal rules and not give up on multilateralism.”
Interestingly, European leaders have an EU-China summit scheduled early April to debate trade relations, and the issue of restrictions on Chinese telecoms equipment vendors Huawei and ZTE is high on the agenda!
In sum, if Berlin and Paris have so far refrained from explicitly endorsing the BRI, the reasons are fundamentally different from those of Washington. As the well-known Sinologist Ambassador Chas Freeman noted this week in the context of Xi’s visit to Italy:
“From the US point of view, the objection to Italian outreach to China is just part of hysteria about China that has seized Washington. The US is treating the Belt and Road as a military strategic challenge. The Europeans are treating it as an economic issue that they need to be cautious about.”
Ambassador Freeman added:
“The Europeans are scrambling to come to grips with the fact that China is now a global great power, economically … the debate for them is less about Belt and Road than it is about the terms of Chinese investment and competition in the technology area. In the US, there is no debate. There is pretty much an anti-China consensus now.”
Suffice to say, by investing heavily in Greece, Italy and other EU countries, China would hope for more European endorsement of the BRI that would also incrementally undermine European resistance to Chinese influence. As a matter of fact, the ‘16+1’ group that was formed in 2012 bringing together China and the relatively poor countries of central and eastern Europe is already a testing ground for the BRI.
Italy’s MoU on BRI is the first of its kind. But nearly half of EU members have already agreed to participate in the BRI. Italy is just the first large economy to do so. The Italy-China MoU could be a template for other EU countries and improve the negotiated standards between the EU and China. In a nutshell, Italy has opened the door for BRI to step into the heart of Europe.
Italy has signed up for China’s multibillion-dollar “Belt and Road Initiative”, becoming the first Western European nation to jump on board despite scepticism from its EU counterparts and Washington.
Italian Prime Minister Giuseppe Conte and Chinese President Xi Jinping witnessed the signing of a memorandum of understanding on Beijing’s trade and infrastructure scheme on Saturday in Rome.
Among the 29 other agreements signed were two port management deals between China Communications Construction and the ports of Trieste, situated in the northern Adriatic Sea, and Genoa, Italy’s biggest seaport.
While Genoa is a long-established port, Trieste has the most potential for China, Italian government sources earlier told the South China Morning Post.
The port is strategically important for China because it offers a link from the Mediterranean to landlocked countries such as Austria, Hungary, the Czech Republic, Slovakia and Serbia, all of which are markets Beijing hopes to reach through its belt and road programme.
Other deals signed cover areas including satellites, e-commerce, agriculture, beef and pork imports, media, culture, banking, natural gas and steel. The two countries also agreed to boost cooperation on innovation and science, increase bilateral trade and set up a finance ministers’ dialogue mechanism.
Although full details of the contracts were not given, a government source told Reuters the deals could be worth up to €20 billion (US$22.64 billion). The value was estimated at around €5 billion by Italian media.
CSE Evening Update – Friday December 16th, 2016 – Keeping the Plates Spinning
TLDR: If you want to jump to the top-ten-ish, skip down to the first bold text. If you want to skip to the weekend’s testing update, skip to the second section of bold text. If you want to read about a very productive week, read on…
Folks,
It has been a very busy week. All good, all good! Don’t let your imagination get away from you. Let me take you on a short journey of my week. Each Monday morning Brittany and I have a list of concerns, tasks, and details that each of us wants to cover, as well as a long list of what everyone is working on, in case it becomes relevant in our morning conversation with Mark, Andrew, Tim, and Scott, as we prep for the week. For both of us, the primary concern is to make sure everyone is able to work well, make sure any possible problems are rectified quickly, and make sure the week’s goals as a team are met to the best of our ability, Murphy’s law notwithstanding. (We’ll come back to this guy later.) One thing on the top of my list was this, “To Do: Autumn/forest – sticks, logs, rocks, bushes, tree variations, moar – then build that into an update on the mod itself.” Now just because it was near the top doesn’t mean it was a priority; it’s just something I really want to get working on. (We’ll come back to this guy too.)
As much as I’d like to go in order here, let’s just hit the highlights. Monday and Tuesday fly by. Being a Producer requires you to jump from issue to issue quickly, solving and moving on. Add in the environment art oversight I do and the days tend to fill quickly. During Wednesday morning stand-ups I find myself saying, “Now that I’ve gotten all those Producer things out of the way, I’m going to try and just be head’s down in art today.” Of course Wednesday is also the day Mark is flying out to join our Seattle team members for a general get together, meet some Backers while he’s there, look for a good CSE Seattle office space (Which we may have found.), as well as possibly stream the week’s update, remotely. So there’s an extra bit of “Is everything taken care of before Mark leaves?” running around in my head. Small things like, “Does he have his laptop? Is he set to stream from it? etc.” (We’ll come back to that last one.)
Wednesday evening, Dionne and I check in on tasking we set up Monday morning on the place of power. As a reminder, we’re working from Michelle’s concept:
Boom! – This sounds cooler in my head. But look at that! I’m jealous I’ve handed this off to Dionne. So she has been working on that this week, with a focus on roughing it out so we can get it in the world early for testing combat on and around it. Starting from Ben’s block model, and Jon’s and my rough assets, she’s making some good progress…
Boom!? I know it’s not as flashy, but if you want to see the progress of how we go from concept to in-game asset, there you go. WIP in Maya. If you’d like to actually watch this being worked on, you can catch Dionne’s stream from Thursday.
Thursday morning comes around and I think to myself, “Man, I really want to get those updated screenshots knocked out so we can update the website galleries.” First order of business for screenshots is to re-take all those shots I took of Necromaniak’s buildings in C.U.B.E. before George did his HDR and bloom push. Definite thanks are due to Necro not only for his clear enthusiasm for C.U.B.E. but also for these amazing creations. And thank you, George, for your first and second passes on our lighting improvements.
Boom!
So let me just head off where I know some of you may be going with this. No this is not the final lighting in the game. No this is not the final look of the game, and no, that thing you’re looking at right now wondering why it looks so good, or maybe so bad, will probably not stay that way. Trust me when I say we see the things you see and probably some you don’t, and have plans to make them better. This is something we’ve said before, proven, and continue to iterate not only in context here, but in general practice on the team. What I do want you to see is the wider range of color and brightness in the after shot. We actually overcompensated our brightness and saturation levels on our materials/textures in the previous lighting. So of course things look much more vibrant in the after shot. What this means is we not only have more accurate color representation between Photoshop and the game, but we now have a wider range to work with. We can now push vibrant colors in one area, like the current Autumn forest, or mute and desaturate things for something like a foggy swamp. Think of this as a wider range of possibilities. That’s a good thing!
Thursday: While Scott is working up a sweat updating and auditing last week’s armor and character work, Michelle, Jon, and I are checking in on progress of the Place of Power statues Jon streamed earlier this week HERE and HERE.
Boom! Check out the scale of these things! Plan is to put these before the “bridges” to the place of power, or the POP as we call it in Trello. Oh, and did I say bridges? While not a plan for the start of Beta, Mark wanted to make sure the place of power was something we could “turn off” if need be, or could possibly support realm ownership. Just today Michelle reworked her original concept to include this.
Giant-floating-rune-bridge-magic-things! Boom! Okay, maybe you’re tired of that. I’ll stop. As you can see this is an evolving concept that will still continue to see changes as we put it in, test it, optimize it, slap it around, etc. Can’t wait!
Mid day I finally get a moment to stop and work on some art. The reason being, that despite Brittany still being relatively new, she and the engineering team are working great together. It’s also odd that she and I seem to already share a “Psychic Oversight” connection. But I digress. There’s a whole lot of coordinating that happens on a daily basis outside of individuals completing tasks. Without that oversight, there’s the potential for things simply going wrong, any number of ways. I think every time there’s some conflict a Producer is born.
This is the part where I put in a picture of what I was working on, with credit to Mark for the idea. You’ll have to go find it on the north island, on Hatchery this weekend. I’ll just say that it’s important for us to make fun of ourselves, while we’re busting ass.
Remember that guy Murphy I mentioned earlier? Apparently testing the laptop in the office for our Seattle stream really didn’t matter once we plugged in a different webcam. You know those USB plug-and-play things? From forty-five minutes before the stream until fifteen minutes after we were supposed to start, we wrestled with it before finally getting things going. For those of you who were gracious with your patience, thank you. For those of you who were patient enough to wait through the pre-show, I doubly thank you.
For those that tuned in Thursday, Mark streamed from George’s house in Seattle, Washington (https://www.youtube.com/watch?v=VqFjjhITPSA), accompanied by Brittany, George, Brad, Colin, and Matt, as well as his son Michael and Lady J, at least at the start. If this stream doesn’t convince you we’re focused on making this game, and not flashy production value, nothing will. (It should also show you there’s a strong family vibe Mark brings to this team, regardless of miles between offices.) Fun was had by all during the stream, and afterwards, as they all, including respective family members, went out for food and drink.
For those of you jumping past all my words, here’s the top-ten-ish for the week, covered during the stream:
1. WIP – More stabilization of networking: One of the most interesting aspects of programming is learning how different languages manipulate memory. But when delays start showing up at intervals, it’s time to go in and work some magic. Marc, Tim, and Rob have been hard at work continuing to stabilize these spikes, bending C#’s memory management to their will.
2. Particle performance improvements: In a continuation of last week’s optimizations, George found a way to make particles three times faster! This will make it much easier to compare performance as we test different methods of optimizing our shaders.
3. Fixed drowning: Through all of our bot testing, we found a performance hit when a large number of bots all tried to drown in the same area. As our bots stress things as players would, fixing the issue gives everyone a nice performance gain, bots or not.
4. WIP – Banes and Boons Improvements: With each new Bane or Boon we implement, we find new ways we can extend and debug the in-progress framework we’re building. Once we add a UI on top of everything, we will be much more confident in our foundation.
5. WIP sound + abilities: Playing sounds with abilities continues to make steady progress. dB and Gabe have been getting SFX hooked up to primary and secondary components. Just one step closer to achieving a more in-depth feel to our ability system.
6. WIP Animation System: Because Andrew’s animation changes are such a large improvement to our existing system, he’s been adding additional ways to test current work while also better preparing us for our animators’ needs moving forward.
7. WIP Tools: One of the unsung heroes of any project is the guy building and managing tools for the team. Bull has been creating a robust tagging system, making it easier for us to work smarter, not harder, with our ever-growing asset library.
8. WIP Art: We’ve begun work in earnest on the Place of Power. We’ve had three streams this week with both Jon and Dionne working on assets for it, including some awesome, realm themed statues. This is all off some amazing concept art Michelle completed which was in our previous updates.
9. WIP Art: This week Tyler continues working on new assets and materials for our environments. Part of his goal is to have fun things to come across in the world while exploring.
10. Art VFX: As part of our improvements to our VFX engine, Cross continues to make new VFX to use not only in the current biomes, but in future ones as well.
11. Art Armor: Sandra has begun rigging our TDD Fall Court heavy armor for not only the female TDD, but also the male and female luchorpan.
Before the stream I asked Sandra if she could take all the CSE Seattle folks and add ’em to a great image to accompany our stream update:
Friday. Do you think I’ve made any sticks, twigs, or bushes yet? Nope. Friday morning is typically very busy as we are finishing up any code we want for weekend testing. In this case we’ve started off with three spinning plates to watch, as we coordinated some ongoing improvements related to physics from Colin, sound on abilities from Gabe and dB, ability fixes from several people, small threading improvements from Matt, as well as a stretch goal to get in the first part of a server performance fix for our 1000+ bot tests we’ve been working on. All of this work has been ongoing, it’s just that last push from Wednesday to Friday that can demand attention. While Brittany, Tim, and Marc, work to push these things along, I was able to get some time with Michelle to really work out improvements in not only our communication, but how we want to improve the process of designing realm ownership visuals of biomes. I’d love to show you some of the art that came out of those discussions, but not just yet. Maybe next week.
Remember that guy Murphy? Of course he had to come throw a wrench into things this evening, which sorta slowed us down. Tim, Marc, and Matt stayed late trying to figure out the problem on our Hatchery server and landed that last-minute clutch band-aid. That’s a thing, right? Two for three though, really really good! So on to this weekend’s testing notes:
Weekend Testing Information:
Where:
Wyrmling Prep – IT and Alpha level Backers.
Hatchery – IT level Backers.
When:
Now until Monday, unless something goes boom. This goes for both servers, and doubly so for Hatchery as it is our live dev environment and contains more changes than Prep.
What:
Wyrmling Prep has a few content changes we’d love tested. Gabe and dB have been hard at work on hooking up sound effects to ability components and Ben has been tweaking numbers here and there.
Hatchery – Hatchery includes all the changes noted above for Wyrmling Prep, as well as some networking backend improvements. As mentioned previously, Colin is working on moving our physics server to its own process, which will give us huge performance gains when he finishes this task. The first part of that change is live, and we want to make sure everything is stable before more large changes make their way in early next week.
Please post your feedback and bugs here:
IT Hatchery testing – https://forums.camelotunchained.com/topic/15517-weekend-it-testing-feedback-on-hatchery-121616/
IT and Alpha Wyrmling Prep testing – https://forums.camelotunchained.com/topic/15518-weekend-testing-feedback-it-alpha-backers-wyrmling-prep-server-121616/
Instructions:
If you’re headed into either Hatchery or Wyrmling Prep: Run around on the available island Create abilities using different components. Use abilities to try and kill other players and dummies. If you run out of munitions, use the command /refillammo to give yourself more. Use the below ability parts, as they all have sound effects. BlackKnight: Melee – Broad Cleave, Crushing Blow, Deft Thrust, Desperation Blow, Disorienting Pummel, Impact Strike, Overpowering Strike, Precision Slash, Vital Strike Shout – Empower Tenacity, Exceed Limits, Final Stand, Noble Virtue, Rallying Call, Resolute Charge Fianna: Melee – Arcing Slash, Deep Laceration, Disorienting Blow, Dominating Crush, Extended Lunge, Precision Blow, Slowing Strike, Sudden Strike, Swift Stab Shout – Arduous Exertion, Disorienting Shout, Overwhelming Fury, Second Wind Mjolnir: Magic – Furious Shock, Thunder Strike Melee – Blade Twist, Charged Burst, Crippling Blow, Driving Lunge, Lighting Shockwave, Pulverizing Impact, Startling Slash, Static Smash, Immeasurable Brutality Physician: Bottle – Careless Splash, Vial Toss Stone Healer: Magic – Alteration of Life, Creeping Petrification, Curative Influence, Fingers of Earth, Languid Rejuvenation, Rejuvenation Recovery, Sand Blast, Violent Tremor, Imbued Shard Stone – Drop Stone, Stone Cast, Terrestrial Transference
Things to look out for: (Please make sure and note which server you were in!)
1. If you hit an assert, please post either the text or a screenshot to the forums.
2. Report anything that crashes, general slow-downs, or anything that produces weird behavior.
3. Take note of anything that looks incredibly unexpected – players warping around, instant death, players rendering inside out, etc.
4. Does a skill part that SHOULD have a sound effect not play one? Let us know!
It’s been a long and very productive week here. I know I’ve said that often, and it’s not me twisting the facts or adding embellishment. (Maybe with the “Boom.”) It’s just not how we do things here. We work hard, we’re honest, and we own up to our stumbles and our mistakes. That makes it all the better when I can simply say, it was a good week. Not only were we productive in the office here stateside, but one of our team members has been most productive abroad!
Congrats Charles on your new team member!
– t
P.S. – It’s just me in the office auditing my spelling and grammar, with a bit of help from Mark sitting in traffic in Seattle. Sorry for any errors.
P.P.S. – Scott took this great shot of our male TDD in the Fall Court armor this afternoon. Had to share.
Q:
CSS - why don't my browsers reflect any changes made on the server?
I was working on some CSS code and then suddenly the site stopped reflecting any changes made on the server. At first I thought it was a caching problem, but I disabled caching in my browser and even tried using different browsers; they still use the old version of my CSS file.
If I download my CSS file from the server and open it in a text editor, it shows all the changes I made to the code, but they don't show up on my website at all. The site is using an old version of the CSS file that doesn't even exist anymore on the server.
What on Earth is happening with my server? Could it be a router caching problem?
A:
I can't explain the cause, but try adding a version query string where you include the CSS in your HTML: ?v1.
Example: <link rel="stylesheet" href="resources/yourDir/style.css?v1">
This will force the browser to download the new CSS.
You can add anything you like after the ?: a timestamp, a number, or words. The point is just to change the URL whenever the file changes.
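To illustrate, here is a minimal sketch of the same idea taken one step further for development: loading the stylesheet from JavaScript with a timestamp appended, so every page load bypasses the cached copy. The path resources/yourDir/style.css is carried over from the example above; everything else is illustrative.

// Development-only cache buster: append the current time as a query
// string so the browser treats each request as a brand-new URL.
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = 'resources/yourDir/style.css?' + Date.now();
document.head.appendChild(link);

For production, a fixed version number (?v1, ?v2, ...) bumped on each release is the usual compromise, since a per-load timestamp would defeat caching for every visitor.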
Here you go, Lawrence. Thanks again for the invitation. We would like to participate in hearings such as this. I may be at the hearing on Monday, but I'm not the FERC expert.
Let me know if you have any questions.
Mona
(See attached file: skean FERC comments.doc)
- skean FERC comments.doc
Juan González (baseball)
Juan Alberto González Vázquez (born October 20, 1969) is a former Major League Baseball outfielder. During his 16 years in the league, González played for four teams, but is most identified with the Texas Rangers baseball club (1989–1999, 2002–2003). One of the premier run producers and most feared hitters of the 1990s and early 2000s, González hit over 40 home runs five times and amassed at least 100 runs batted in eight times. He also had a batting average of .310 or higher in five seasons. In his career as a whole, González averaged an impressive 42 home runs, 135 RBI, and 81 extra-base hits per 162 games, placing him well within the top ten all-time in these season-adjusted statistics.
González was known as a line drive hitter, not a fly-ball home-run hitter as were many power hitters of the 1990s. He was a full-time player at the age of 21 and a two-time MVP before his 30th birthday. González explained his propensity for bringing runners home by saying, "I concentrate more when I see men on base."
Biography
González grew up in a rough area of Puerto Rico, where as a young boy he learned to hit bottlecaps and corks with a broomstick handle in the Alto de Cuba barrio. In the Puerto Rico youth league, González batted cleanup behind future Yankee center fielder Bernie Williams, where both competed against González's future teammate Iván Rodríguez. When the Yankees scouted the teenage Williams, he requested that they also bring his friend González to their scouting camp on the east coast; however, due to a lack of funding, González would remain in Puerto Rico.
The Texas Rangers signed González as an amateur free agent on May 30, 1986, at the age of 16. González has always wanted to serve as a role model for the kids of Puerto Rico, as they are faced with the downfalls of drugs and prostitution frequently. González avoided such temptations growing up. His father, a math teacher, and mother, a housewife, made sure González and his two sisters behaved properly and stayed away from negative influences. González moved his family out of the barrio early in his MLB career. He paid utility bills for down-on-their-luck friends and plans on working to construct recreation facilities and a baseball diamond in his home town. One of Juan's managers, Johnny Oates, believed that until you've walked where Juan González has walked, you just won't understand. Speaking from experience, as Oates has walked the streets of Vega Baja, Puerto Rico, during visits multiple times, he had this to say: "I don't think you can appreciate how far he's come until you've been there", Oates said. "We might be making choices between going to the movies or going to the skating rink. But look at the choices the kids there were faced with growing up – do you want to do drugs or get beaten up? I think it says so much about him that he was able to rise above the peer pressure in Vega Baja. He had enough intelligence to say, 'I don't want to do that.'"
In Puerto Rico he is known as "Igor", the nickname he has carried since he was a nine-year-old fascinated by the professional wrestler "Igor the Magnificent."
"I watched wrestling all the time and I still like it", González said. "One day when I was nine, I told another guy, 'I'm Igor.' And he said,'Okay, your name is Igor from now on.' And I've been Igor since then."
Career in the major leagues
1986–1990: minor leagues
González debuted with the 1986 GCL Rangers and finished with a .240 batting average, a .303 on-base percentage, and a .266 slugging percentage in 60 games. He had only five extra-base hits (none of them home runs) in 233 AB and struck out 57 times. He tied Harvey Pulliam by grounding into a Gulf Coast League-leading 9 double plays.
In 1987, González showed some improvement with the Gastonia Rangers, though Mark Whiten and Junior Felix were deemed better outfield prospects in the South Atlantic League. In ratings by Baseball America, González tied Ryan Bowen for 10th place on the prospect listing. He finished with .265 AVG, .306 OBP, and .401 slugging percentage with 14 home runs and 74 RBI.
González spent 1988 with the Charlotte Rangers and batted .256/~.327/.415 with 8 home runs in 277 AB. One of his outfield teammates that year was Sammy Sosa. The next year, he showed more improvement with the Tulsa Drillers hitting .293/~.322/.506 with 21 home runs and led the Texas League with 254 total bases. He outhomered Sosa by 14 and was third in the League in home runs, behind teammate Dean Palmer (25) and Chris Cron (22). González was rated the league's No. 4 prospect by Baseball America, behind Ray Lankford, Andy Benes and José Offerman. Lankford and Warren Newson joined him in the TL All-Star outfield. He was called up by the Texas Rangers in September of that year, but only hit .150/.227/.250. During his time with the Rangers that year, González only hit 1 HR. That HR was the first HR ever hit by a teenager (19 yrs old) for the Rangers.
In 1990, González – playing with the Oklahoma City 89ers – led the American Association in home runs (29), RBI (101) and total bases (252). He made the AA All-Star outfield alongside Lankford and Bernard Gilkey and was named the league MVP. Baseball America named him the top prospect in the league in a poll of managers. He finished with .258/~.343/.508 for the 89ers. In the AAA All-Star Game, González hit 4th for the AL prospects and played as a designated hitter. He went 2 for 5 with a double, one of the game's two homers, two runs and two RBI in the AL's 8–5 loss. González was again called up by the Rangers and did far better this time, batting .289/.316/.522.
1991–1999: Texas Rangers
In 1991, Texas gave González a chance to be an everyday player. He batted .264 while hitting 27 home runs and recording 102 runs batted in (RBIs). González came up as a center fielder, as did teammate Sammy Sosa, but the Rangers opted to keep González and trade Sosa. González split his time in the outfield between CF (93 games) and LF (92 games). González thrilled the club in his first full season at the young age of 21, as his 27 HRs led the Rangers. His 102 RBI were good enough for 2nd on the club and 7th in the AL. Two of those homers were walk-offs for the young González: the first came off Steve Searcy and the Tigers on May 15, the second off Rick Honeycutt and the Athletics on October 6.
In 1992, González finished with a .260 batting average, 43 home runs, and 109 RBIs. He spent most of his time in CF in '92, playing 123 games there, 31 in LF and making just one appearance in RF, while DH-ing 4 games. He was the American League home run champion (one more than Mark McGwire) while also ranking 3rd in TB (309), 4th in Extra Base Hits (69), 5th in SLG (.529%), 7th in RBIs (109) while winning his first Silver Slugger Award. Winning the HR Crown at the age of 22 made him the youngest player to lead the majors since Johnny Bench in 1970.
In 1993, González broke through to true stardom. He led the AL for the second consecutive year with 46 bombs, while raising his batting average an impressive 50 points to .310, all this to go along with a league-leading slugging percentage of .632. That production garnered González an invite to his first All-Star team. During the 1993 All-Star Weekend, he participated in the only Home Run Derby of his career. González and Ken Griffey, Jr. put on an amazing display of raw power, golfing 7 homers apiece. González, however, wowed the national audience even more, becoming the first player to hit a homer into the facade of the upper deck in left field (estimated 473 feet) at Oriole Park at Camden Yards and the green wall behind the center-field fence (estimated 455 feet). González then defeated Griffey in a winner-take-all playoff for the individual Home Run Derby title, 5–4. When asked about the title, González responded: "It was very exciting to surprise everybody. I never thought in my mind that I'd win the Home Run Derby. I even surprised myself." He also finished fourth in voting for the 1993 AL MVP and earned his second consecutive Silver Slugger Award.
In 1994, the Rangers moved from Arlington Stadium to The Ballpark in Arlington. González batted 19 home runs in 1994 due to injuries, but belted 27 home runs in 1995, in just 90 games.
From 1995–98, González was an RBI machine, averaging more than an RBI per game (514 RBI, 511 G). This made him the first player since World War II to drive in a run per game for any four-year period. He won two MVP awards in this stretch (1996 and 1998). The New Bill James Historical Baseball Abstract listed him as the player with the highest ratio of slugging percentage to on-base percentage in baseball history at that time, ahead of Dave Kingman and Tony Armas, and as 4th in RBI per game by an outfielder (behind Sam Thompson, Joe DiMaggio and Babe Ruth). James also ranked González as the 52nd-best right fielder in baseball history as of mid-2000.
In 1996, González had one of his best seasons, hitting .314 with a .643 slugging percentage. He edged Alex Rodríguez by one first-place vote (11–10) and 3 award points (290–287) in a very close vote to win the American League MVP. He won his third Silver Slugger as an outfielder and was second in the AL in slugging (87 points behind McGwire). He was selected to the Associated Press Major League All-Star Team and The Sporting News A.L. All-Star squad at season's end. González was also named the Puerto Rico Pro Athlete of the Year by the Associated Press and the DFW Metroplex Pro Athlete of the Year by the Dallas All Sports Association. He was named American League Player of the Month in July, leading the majors in batting (.407), homers (15), RBI (38), slugging (.917) and total bases (99), and was the A.L. Player of the Week for July 29 – August 4. González had a pair of 21-game hitting streaks, June 25 – July 19 and August 8–31, matching the 3rd-longest hitting streak in team history; Mickey Rivers (1980) was the only other Ranger with two 20-game hitting streaks in the same season. On July 30, González went 5–5 vs. New York, a career best, and tied the club record for hits in a game. González was also chosen as a member of the Major League Baseball All-Star Team that traveled to Japan for an 8-game exhibition series in November, batting .500 (10–20) with one homer and 3 RBI in 7 games. That year, the Texas Rangers made the playoffs, and in the 1996 American League Division Series, González homered five times in four games and batted .438/.526/1.375 with 9 RBI. Texas ended up losing in four games to the New York Yankees. González tied Jeffrey Leonard's 1987 NLCS record by homering in four straight post-season games and joined Reggie Jackson and Ken Griffey, Jr. as the only players to hit five home runs in a single post-season series. González, however, accomplished the feat in fewer games (4) than Leonard, Jackson and Griffey Jr., all of whom needed at least 5 games to do so. Combining the regular season and postseason, González hit .315 with 52 home runs, 153 RBIs, and a .664 slugging percentage in 1996.
In 1997, González batted .296/.335/.589 as a DH-RF for the Rangers, winning his fourth Silver Slugger Award. In 133 games he was 4th in slugging, 6th in total bases (314), third in homers (42) and RBI (131), 10th in extra-base hits (69) and tied for 6th with 10 sacrifice flies. González missed the first month of the season and was not activated from the DL until May 2 due to a torn ligament in his left thumb. Despite the injury, he still managed to earn American League Player of the Month honors in September (.337, 10 HR, 26 RBI) and was the Rangers Player of the Month in both August and September. González was selected to Baseball America's American League All-Star Team.
In 1998, he reached the 100 RBI mark before the All-Star break (101), becoming the first player (and still the most recent) to do so since Hank Greenberg 63 years earlier. He hit cleanup for the AL in the 1998 All-Star Game and decisively won the AL MVP award. González was 10th in the 1998 AL in batting average, second in slugging, fourth in OPS, 6th in hits (193), 4th in total bases (382), first in doubles (50), tied for fourth in home runs (45), first in RBI (157) in 154 games, tied for 8th in OPS+ (149), second in extra-base hits (97), tied for third in sac flies (11), tied for sixth in intentional walks (9) and tied for third in double plays grounded into (20). In April, he drove in 35 runs, a major league record for the month that still stands today. González produced the 5th season ever of at least 50 doubles and 40 home runs. González started 115 games in right field and 36 as the DH.
He became the first 5-time winner of the Rangers Player of the Year Award and was also named the A.L.'s Most Valuable Player by USA Today and USA Today Baseball Weekly. González was selected to major league all-star teams chosen by the Associated Press (OF) and Baseball America (DH) and to The Sporting News A.L. all-star squad (OF). He was named as an outfielder on the A.L. Silver Slugger Award team for the 5th time in his career, his 3rd consecutive year. González shared Rangers Player of the Month honors with Iván Rodríguez in April and won the award outright in May. González was also American League Player of the Week for August 31 – September 6. He received 21 of 28 first-place MVP votes and 7 second-place votes for 357 total points to defeat Boston's Nomar Garciaparra, who had 5 first-place votes and 232 points. González also became the first native of Latin America to win multiple MVPs since the award was instituted in 1931, and the 16th player to capture 2 MVPs in a 3-year span. The Rangers reached the playoffs, only to be swept by the Yankees. The Rangers offense was miserable in the Division Series, scoring just one run, on a Pudge Rodriguez single following a leadoff double.
In 1999, he was 9th in the AL in average, 4th in slugging, 6th in OPS, 10th in runs (114), 6th in total bases (338), 6th in home runs (39), 5th in RBI (128), 7th in extra-base hits (76) and 2nd in sacrifice flies (12). However, he and the Rangers wound up being swept for the second consecutive year by the Yankees in the Division Series. González wasn't able to do much in the 3-game series, hitting .182/.250/.455 with 1 HR, but his solo bomb was the only run the Rangers scored in the series.
González announced just before the 1999 All-Star Game that if the fans did not elect him to the starting lineup, he would refuse an invitation to be added to the roster (as a result, he was not invited). González believed that the system was flawed; he thought managers and players should vote for the starters, not the fans. A few weeks later, González didn't dress for the Hall of Fame exhibition game because (according to the media) the uniform pants the Rangers brought for him were too large. González later had this to say about the incident: "I couldn't play because my right wrist was sore. The pants they gave me were size 40. I wear 34. They were clown pants."
2000–2001: Detroit and Cleveland
Following the 1999 season, with one year left on his contract, the slugger was traded by the Texas Rangers, along with Danny Patterson and Gregg Zaun, in a blockbuster nine-player deal with the Detroit Tigers for Frank Catalanotto, Francisco Cordero, Bill Haselman, Gabe Kapler, Justin Thompson, and Alan Webb. He became the first two-time MVP to be traded since Dale Murphy was sent from Atlanta to Philadelphia in 1990. Detroit Tigers general manager Randy Smith paid a high price for González by trading six young players, but he couldn't pass up acquiring a player whom he referred to as "a two-time MVP and future Hall of Famer", even though González would more than likely be a one-year rental (and was).
Gambling that they would be able to extend his contract past the 2000 season, the Tigers reportedly offered González an eight-year, $140 million contract soon after the deal was struck. González refused, which turned out to be the bigger gamble. He began the season badly, hobbled by foot pain and unable to adjust to the spacious dimensions of Detroit's new Comerica Park, where the left-center field fence stood nearly 400 feet from home plate. By mid-season he had announced that the Tigers would have to bring the fences in if they wanted to re-sign him as a free agent.
Detroit shopped González before the trading deadline, but a deal that would have sent him to the Yankees for outfielder Ricky Ledée and two minor leaguers was scuttled when the outfielder made it clear that he didn't want to play in New York. The Puerto Rico native stumbled through the rest of the season and saw his production dip to an all-time low (22 HR, 67 RBI in 115 games). After missing the last weeks of the 2000 season, he was granted free agency on November 1.
On January 9, 2001, he signed a one-year $10 million contract with the Cleveland Indians. González opened the season with a great start, batting .388 (40–103) with 9 homers and 32 RBIs in the season's first 25 games through May 2. González completed the first half at a torrid pace. He was voted in as an All-Star starter and batted 5th in the 2001 All-Star Game. González hit .347 with 23 HR and 83 RBI in 79 games (.640 SLG / 1.031 OPS) in the first half.
He appeared to be on his way to easily capturing the RBI title, but an RBI drought at the end of the season (0 RBI in his last 10 games) allowed Bret Boone to pass him by one. González hit over .300 in each of the season's first 5 months before dropping to .299 for the month of September. His top months were .387 (36–93) in April and .356 (26–73) in July. González was hitting as high as .360 on June 5, then went 17–64 (.266) in the next 17 contests, dropping to .338 through June 26. He had a .351 (73–208) mark in the next 56 games and was at .344 overall, 2nd in the A.L., through September 9. After this, he hit just .130 (6–46) in the final 13 games, going 3–34 (.088) in the last 10 contests. González was hitless in his final 15 trips after his single on September 24. Despite his cold streak over the last week and a half of the season, he still finished with a .325 BA/.370 OBP/.590 SLG and a 147 OPS+, close to his MVP seasons. He also won his sixth Silver Slugger and finished fifth in MVP voting. His .325 average was one point shy of his career high (1999) and marked his 5th .300 season, his 3rd in the last 4 years.
He was sixth in the 2001 AL in batting average, 5th in slugging, 6th in OPS, 9th in home runs (35), second in RBI (140 in 140 games, one behind leader Bret Boone), 8th in OPS+, tied for third in double plays grounded into (18), and led the league with 16 sacrifice flies. González was also a second-team selection on Baseball America's Major League all-star squad and was named the Indians' player of the year by Baseball America. This proved to be the last season in which González averaged an RBI a game. Although González finished the regular season rather slowly, he showed up in a big way in the playoffs, hitting .348 BA / .348 OBP / .739 SLG for Cleveland in the Division Series with 3 doubles, 2 homers, and 5 RBI in 5 games. Despite this, Cleveland was still defeated.
González had a season-best 15-game hitting streak from August 29 – September 19 at .345 (20–58) and hit safely in 10 straight games from April 17–27. He also had a 4-hit game on April 11 at the Chicago White Sox. González batted .368 (43–117) vs. left-handers, 3rd best in the A.L., and had a .335 (53–158) mark with runners in scoring position, the 8th highest. As the DH, he hit .392 (31–79), the highest average in the A.L. among players with 35 or more DH at-bats, with 8 homers and 33 RBI in 21 games.
Through 11 full major league seasons (1991–2001), González had 392 homers and 1,263 RBI, an average of 36 homers and 115 RBI per year. His RBI total was the most in MLB during that time frame by 40, despite his having 1,000 fewer plate appearances than the player with the second-most RBI over that span, Jeff Bagwell (who was inducted into the Hall of Fame in 2017).
2002–2003: Return to Texas
On January 8, 2002, González made his return to Arlington by signing a two-year, $24 million contract with the Texas Rangers. He hit .282/.324/.451 (94 OPS+) in 70 games the first year. On June 18, he participated in the first MLB game ever to feature four players with 400+ home runs to that point: Rafael Palmeiro and Fred McGriff joined Sosa and González in a game that Texas lost to the Chicago Cubs, 4–3. In his first season back in Arlington he had a .358 (29–81) average versus left-handers and hit .328 (21–64) with runners in scoring position while posting a .307 mark (42–137) in Arlington. He hit just .171 (6–35) with 2 homers and 4 RBI as the DH. He had Texas' only hit, a leadoff double in the 8th, off Cory Lidle on July 19 at Oakland.
In 2003, González started the first few weeks rather slowly, with a .230 average, 4 homers, and 8 RBI in his first 18 games through April 20. He quickly picked it up, though, going on a .349 (29–83) tear with 9 homers and 24 RBI over his next 21 games and improving to .293 by May 5. As of May 7, González was tied for the major league lead in HR with 12. He followed that up by going just 8-for-39 (.205) in his next 9 games, falling to .276 through May 25, then started yet another hot streak, hitting .321 (42–131) with 10 homers and 36 RBI over the next 34 games. But his season was cut short by a tear in his calf muscle on July 19. At the time, González was hitting .294 and ranked 3rd in HR (24), 4th in SLG (.572), and 7th in RBI (70) in the AL. González was on pace to recapture his 2001 Indians form, but the tear lingered and the injury proved to be the end of his season.
González hit 2 homers in a game 4 times: April 5 vs. Seattle; April 29 and May 1 at Toronto; and July 10 against Minnesota. His 47 career multi-homer games are the 12th most all-time. He also hammered 5 homers in 3 games, April 29 – May 1 at Toronto, the 4th time in Rangers history that feat had been accomplished. He had a season-best 5 RBI on April 29 at Toronto and drove in 4 runs in a game on 3 occasions. González had 18 RBI in a 9-game span, April 22 – May 1, including 10 in the 3-game series at Toronto, April 29 – May 1. He was selected as A.L. co-player of the week for April 28 – May 4. He also had a season-high 9-game hitting streak, June 3–17.
He started 57 games in right field and 24 games as the designated hitter. He did not make an error in 108 total chances in the outfield and was tied for 6th in the league in outfield assists (10), despite his short season. He ranked 5th on the club in home runs (24), and completed his 11th season with 20 or more home runs. The Rangers, however, were preparing for a youth movement and on October 26, 2003, he was granted free agency.
2004 to 2008: Ending of MLB career
On January 6, 2004, González was signed by the Kansas City Royals to a one-year, $4.5 million deal with an option for the next season. However, his back worsened in the middle of May and his season came to an end after May 21. He ended up hitting .276/.326/.441 with five home runs and 17 RBI in 33 games. The Royals declined to renew his option, making him a free agent.
He was signed by the Cleveland Indians for the 2005 season, and was activated in May. Despite a thorough workout regimen, González suffered a major hamstring injury (he tore his right medial hamstring totally off the bone at the knee joint) in his first plate appearance of the season while running out a grounder. This put him out for the season after just one at-bat.
González signed on with the independent Atlantic League in 2006, playing for the Long Island Ducks. He hit .323/.377/.515 in 36 games, with 6 HR and 23 RBI. His time was again limited by injuries.
The St. Louis Cardinals invited González to spring training prior to the 2008 season. He was one of 26 non-roster invitees, participating in full roster workouts that began on February 19, 2008. He hit .308 with a .462 SLG% in spring training with 1 home run, 1 double and 5 RBI in 9 games. However, he was put on the inactive list with an abdominal strain and he returned to Puerto Rico with an invitation to rejoin the Cardinals once he was healthy. González decided to stay in Puerto Rico, and did not rejoin the Cardinals.
In June 2013, González was invited to become a member of the Texas Rangers Hall of Fame. He declined the invitation at the time, saying, "I closed the Texas Rangers chapter in my life a long time ago." A couple years later, though, he accepted the invitation and was inducted on July 11, 2015. González is the Rangers' all-time leader with 372 home runs, 1,180 RBIs and a .565 slugging percentage. His 157 RBIs in 1998 and .643 slugging percentage in 1996 are also club records. González ranks in the top 5 in club history in most every other major offensive category.
Career statistics
In four American League Division Series covering 15 games, González hit .290 (18-for-62) with 11 runs scored, 8 home runs, and 15 RBI.
Career in Puerto Rico
In the 1989–1990 Puerto Rican Professional Baseball League, González hit .269/~.345/.500 for the Criollos de Caguas with 9 home runs, one fewer than league leader Greg Vaughn.
During the 1992–1993 season, he batted .333 for the Santurce Crabbers and won the league MVP award despite not playing until after the All-Star break. He hit 7 home runs and led the league despite playing in only 66 games. González did not accompany Santurce to the 1993 Caribbean Series. The next season, he ended up hitting .268 with 7 homers, 3 behind Phil Hiatt.
In 1995, González joined the San Juan Senators for the 1995 Caribbean Series and hit .375 with 6 RBI as the Puerto Rican "Dream Team" won the title. González hit 5th, between Carlos Delgado and Rubén Sierra on a team that also boasted Roberto Alomar, Bernie Williams, Carlos Baerga and Edgar Martínez. San Juan outscored their opponents 49–15.
During the 2006–2007 Puerto Rican League, in 33 games playing for the champion Carolina Giants, González hit .281 with 18 RBIs and 4 homers. In 12 playoff games, he batted .369 with 3 home runs and 5 RBIs. González claims he is healthy and no longer feels pain in his legs. He was 10 for 26 (.385) in the 2007 Caribbean Series and made the All-Star team at DH.
Presently, he is the owner of the baseball team in his hometown, Vega Baja, in the Confederative Baseball League in Puerto Rico, where he also plays as a DH. Aside from baseball, he focuses on helping the community, with the condition that no attention from the media occurs when he becomes involved in a cause, stating "What value does it have to help someone and then publicizing it in newspapers? That is not giving. I help, but I ask them to please not say anything."
For the 2015-2016 season, González served as coach of the Double A Vega Baja team, the Caimanes del Melao Melao. However, after a 3-11 record, he was fired.
Steroid allegations
González was one of several players, including popular Rangers catcher Iván Rodríguez, whom Jose Canseco claimed to have introduced to steroids. Canseco leveled these allegations in his book Juiced: Wild Times, Rampant 'Roids, Smash Hits & How Baseball Got Big. González was also briefly mentioned in the Mitchell Report regarding a 2001 incident in which an unmarked bag in the Indians' team luggage was detained by customs in Toronto, Canada. González's assistant said that the bag belonged to Angel Presinal, a prominent personal trainer for a number of professional players, but Presinal claimed that the bag belonged to González. It was also disputed whether or not the bag actually contained steroids. Although Presinal claimed the bag was not his, he said that he was aware of its contents and that they were not in fact steroids. He stated that the bag contained Soladek (a painkiller), Dolo-neurobion (a vitamin B complex used in fighting the flu), and Clenbuterol (a stimulant similar to ephedrine, which is believed by some to promote muscle tone and weight loss). González immediately cut ties with the trainer following the incident. In 2007, ESPN published an article on its website about Presinal, describing him as "fitness guru, massage therapist and personal trainer to baseball's Latino elite." In the same article, ESPN asked John Hart, the Indians' former general manager, about the 2001 incident involving Presinal. Hart said that the team looked into the matter and ultimately exonerated González.
In 2007, Rangers' owner Tom Hicks also speculated that González had used steroids, saying in an interview, "Juan González for $24 million after he came off steroids, probably, we just gave that money away." He later acknowledged that he did not personally know whether González had used them or not, saying, "The way his body broke down at a young age and his early retirement makes me suspicious." Luis Mayoral, a former Rangers employee and good friend of González, speculates that Hicks's comments were why González declined induction into the Texas Rangers Hall of Fame in 2013.
Like his former teammate, Hall of Famer Iván Rodríguez, who was also accused of steroid use, González has consistently stated over the years that he has never taken steroids, and is in fact a vegetarian. "I have nothing to hide," said González. "Nothing. And I offered to be tested, whenever they wanted. If you have nothing to hide, there is nothing to worry [about]," González said. In Rodríguez's case, similar unproven allegations from baseball's "steroid" era did not prevent him from being elected to the Hall of Fame on the first ballot.
Personal life
González has been married four times. He was married to Puerto Rican volleyball player Elaine López, sister of fellow major leaguer Javy López, during the early 1990s. This marriage broke down when a local newspaper released a cover photo of singer Olga Tañón kissing González during a concert in San Juan. A scandal followed, with González divorcing Elaine López and marrying Tañón, who said she had no idea González was married to López when she kissed him. González and Tañón had a daughter together, Gabriela González Tañón, in 1998. González and Tañón divorced less than two years later. His daughter later became one of only 50 people in the world (and the first Puerto Rican) ever to have been diagnosed with Sebastian syndrome, a mild blood clotting disorder.
González has a friendship with George W. Bush which began when González debuted with the Texas Rangers, who at the time were owned by Bush.
González stated that "a friendship that goes beyond baseball was created between them", and during his time in office Bush invited González to the White House twice. The first of these reunions took place on April 16, 2001 and the second on December 3, 2007; at the latter he was accompanied by historian Luis Rodriguez Mayoral. The discussion lasted 35 minutes and involved González's future in the Major Leagues and other baseball-related topics, as well as the happenings of their respective careers. During this visit to Washington, D.C., González was also involved in a meeting with Rudy Giuliani and a visit to Walter Reed Army Medical Center in order to see Puerto Rican soldiers who were injured in the Iraq War.
After a history of personal setbacks, González stated in a 2007 interview that his personal life was now in order. "I'd rather have health and my family, my relationship with God than money", he said. "How many people who can buy whatever they want have committed suicide? God is first, then your kids, your family, good health."
Accomplishments
3-time All-Star (1993, 1998, 2001)
2-time American League MVP (1996, 1998)
5-time Top 10 MVP (4th, 1993; 1st, 1996; 9th, 1997; 1st, 1998; 5th, 2001)
5 40+ HR Seasons (1992, 43; 1993, 46; 1996, 47; 1997, 42; 1998, 45) and 1 39-HR Season (1999)
His .561 slugging percentage ranks 15th on the all-time list
His 434 career home runs ranks 47th on the all-time list
Ranks 4th all-time in plate appearances/HR with 16.49 (#1 Mark McGwire – 13.14; #2 Babe Ruth – 14.87; #3 Sammy Sosa – 16.25)
Ranks 7th all-time in RBI/game with .831.
Ranks 15th all-time in AB per HR with 15.1 AB/HR.
6 Silver Slugger awards (1992, 1993, 1996, 1997, 1998, 2001)
2-time American League Home Run Champion (1992, 1993)
Finished Top 5 in RBI 6 times. (1993, 4th, 118; 1996, 2nd, 144; 1997, 3rd, 131; 1998, 1st, 157; 1999, 5th, 128; 2001, 2nd, 140)
Finished Top 5 in slugging percentage 7 times. (1992, 5th, .561%; 1993, 1st, .632%; 1996, 2nd, .643%; 1997, 4th, .589%; 1998, 2nd, .630%; 1999, 4th, .601%; 2001, 5th, .590)
Became just the 2nd player in major league history to have at least 100 RBI before the All-Star break (101 in 1998, second to Hank Greenberg, who had 103)
Holds all-time record for RBI in the month of April (35 in 1998)
One of only six players post-1950 with 150+ RBI in a single season
Hit his 300th home run in the fewest games in American League history (1,096)
9th Youngest ever to hit 300 Career HR (28 years, 334 days)
Tied for 1st in postseason history in home runs in a single Division Series with Ken Griffey, Jr. (González – 5 HR in 4 games in 1996; Griffey – 5 HR in 5 games in 1995)
Tied for 2nd in most HR in a single playoff series with 5 HR in just 4 games in 1996 (Reggie Jackson, 1977, 5 HR in 6 games; Chase Utley, 2009, 5 HR in 6 games; Ken Griffey, Jr., 1995, 5 HR in 5 games). Nelson Cruz is 1st with 6 HR in 6 games in 2011.
Ranks 2nd in postseason history in slugging percentage in a single playoff series (1.375 in 1996)
Ranks 2nd in postseason history in OPS in a single Division Series (1.901 in 1996)
Ranks 5th in postseason history in OPS in a single playoff series among qualified leaders (1.901 in 1996)
Tied for 2nd with 10 other players in extra base hits in a single Division Series (5 in 1996 & 2001)
Ranks 3rd in postseason history in total bases in a single Division Series (22 in 1996)
Ranks 7th in postseason history in RBI in a single Division Series (9 in 1996)
Tied for 2nd in postseason history in career HR in the Division Series (8 HR)
Ranks 4th in postseason history in career slugging percentage in the Division Series (.742)
Ranks 7th in postseason history in career extra base hits in the Division Series (12)
Ranks 8th in postseason history in career OPS in the Division Series (1.075)
See also
List of Major League Baseball career home run leaders
List of Puerto Ricans
List of Major League Baseball career runs scored leaders
List of Major League Baseball career runs batted in leaders
List of Major League Baseball annual runs batted in leaders
List of Major League Baseball annual home run leaders
List of Major League Baseball players named in the Mitchell Report
References
External links
Category:1969 births
Category:Living people
Category:American League All-Stars
Category:American League home run champions
Category:American League RBI champions
Category:American League Most Valuable Player Award winners
Category:Buffalo Bisons (minor league) players
Category:Caribbean Baseball Hall of Fame inductees
Category:Cleveland Indians players
Category:Detroit Tigers players
Category:Kansas City Royals players
Category:Long Island Ducks players
Category:Major League Baseball designated hitters
Category:Major League Baseball players from Puerto Rico
Category:Major League Baseball right fielders
Category:People from Vega Baja, Puerto Rico
Category:Texas Rangers players
Category:Tulsa Drillers players
Category:Silver Slugger Award winners
Category:American Association (20th century) MVP Award winners | {
"perplexity_score": 189.6,
"pile_set_name": "Wikipedia (en)"
} |
<?xml version="1.0" encoding="UTF-8"?>
<!--
create table foo(a int, b int)
partition by list (b)
(partition p1 values(1,3), partition p2 values(4,2), default partition other);
select * from foo where b = 1 union all
select * from foo where b > 3 union all
select * from foo where b in (0,1) union all
select * from foo where b is not null;
-->
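<!--
  Plan overview: each branch of the UNION ALL is planned as a Sequence
  of a PartitionSelector (which resolves the list partitions to scan)
  feeding a DynamicTableScan. The four predicates map to the selector
  filters as follows: b = 1 uses PartEqFilters with the constant 1;
  b > 3 and b IN (0,1) use PartFilters over PartListValues, OR-ed with
  the default partition; and b IS NOT NULL uses a PartListNullTest.
-->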
<dxl:DXLMessage xmlns:dxl="http://greenplum.com/dxl/2010/12/">
<dxl:Thread Id="0">
<dxl:OptimizerConfig>
<dxl:EnumeratorConfig Id="0" PlanSamples="0" CostThreshold="0"/>
<dxl:StatisticsConfig DampingFactorFilter="0.750000" DampingFactorJoin="0.010000" DampingFactorGroupBy="0.750000"/>
<dxl:CTEConfig CTEInliningCutoff="0"/>
<dxl:WindowOids RowNumber="7000" Rank="7001"/>
<dxl:CostModelConfig CostModelType="1" SegmentsForCosting="3">
<dxl:CostParams>
<dxl:CostParam Name="NLJFactor" Value="1.000000" LowerBound="0.500000" UpperBound="1.500000"/>
</dxl:CostParams>
</dxl:CostModelConfig>
<dxl:Hint MinNumOfPartsToRequireSortOnInsert="2147483647" JoinArityForAssociativityCommutativity="2147483647" ArrayExpansionThreshold="25" JoinOrderDynamicProgThreshold="10" BroadcastThreshold="10000000"/>
<dxl:TraceFlags Value="103027,101013,102120,103001,103014,103015,103022,104004,104005,105000"/>
</dxl:OptimizerConfig>
<dxl:Metadata SystemIds="0.GPDB">
<dxl:GPDBScalarOp Mdid="0.521.1.0" Name=">" ComparisonType="GT" ReturnsNullOnNullInput="true">
<dxl:LeftType Mdid="0.23.1.0"/>
<dxl:RightType Mdid="0.23.1.0"/>
<dxl:ResultType Mdid="0.16.1.0"/>
<dxl:OpFunc Mdid="0.147.1.0"/>
<dxl:Commutator Mdid="0.97.1.0"/>
<dxl:InverseOp Mdid="0.523.1.0"/>
<dxl:Opfamilies>
<dxl:Opfamily Mdid="0.1976.1.0"/>
<dxl:Opfamily Mdid="0.3027.1.0"/>
</dxl:Opfamilies>
</dxl:GPDBScalarOp>
<dxl:ColumnStatistics Mdid="1.322247.1.1.8" Name="gp_segment_id" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:ColumnStatistics Mdid="1.322247.1.1.1" Name="b" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:ColumnStatistics Mdid="1.322247.1.1.0" Name="a" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:Type Mdid="0.16.1.0" Name="bool" IsRedistributable="true" IsHashable="true" IsMergeJoinable="true" IsComposite="false" IsFixedLength="true" Length="1" PassByValue="true">
<dxl:EqualityOp Mdid="0.91.1.0"/>
<dxl:InequalityOp Mdid="0.85.1.0"/>
<dxl:LessThanOp Mdid="0.58.1.0"/>
<dxl:LessThanEqualsOp Mdid="0.1694.1.0"/>
<dxl:GreaterThanOp Mdid="0.59.1.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.1695.1.0"/>
<dxl:ComparisonOp Mdid="0.1693.1.0"/>
<dxl:ArrayType Mdid="0.1000.1.0"/>
<dxl:MinAgg Mdid="0.0.0.0"/>
<dxl:MaxAgg Mdid="0.0.0.0"/>
<dxl:AvgAgg Mdid="0.0.0.0"/>
<dxl:SumAgg Mdid="0.0.0.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:Type Mdid="0.23.1.0" Name="int4" IsRedistributable="true" IsHashable="true" IsMergeJoinable="true" IsComposite="false" IsFixedLength="true" Length="4" PassByValue="true">
<dxl:EqualityOp Mdid="0.96.1.0"/>
<dxl:InequalityOp Mdid="0.518.1.0"/>
<dxl:LessThanOp Mdid="0.97.1.0"/>
<dxl:LessThanEqualsOp Mdid="0.523.1.0"/>
<dxl:GreaterThanOp Mdid="0.521.1.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.525.1.0"/>
<dxl:ComparisonOp Mdid="0.351.1.0"/>
<dxl:ArrayType Mdid="0.1007.1.0"/>
<dxl:MinAgg Mdid="0.2132.1.0"/>
<dxl:MaxAgg Mdid="0.2116.1.0"/>
<dxl:AvgAgg Mdid="0.2101.1.0"/>
<dxl:SumAgg Mdid="0.2108.1.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:Type Mdid="0.26.1.0" Name="oid" IsRedistributable="true" IsHashable="true" IsMergeJoinable="true" IsComposite="false" IsFixedLength="true" Length="4" PassByValue="true">
<dxl:EqualityOp Mdid="0.607.1.0"/>
<dxl:InequalityOp Mdid="0.608.1.0"/>
<dxl:LessThanOp Mdid="0.609.1.0"/>
<dxl:LessThanEqualsOp Mdid="0.611.1.0"/>
<dxl:GreaterThanOp Mdid="0.610.1.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.612.1.0"/>
<dxl:ComparisonOp Mdid="0.356.1.0"/>
<dxl:ArrayType Mdid="0.1028.1.0"/>
<dxl:MinAgg Mdid="0.2118.1.0"/>
<dxl:MaxAgg Mdid="0.2134.1.0"/>
<dxl:AvgAgg Mdid="0.0.0.0"/>
<dxl:SumAgg Mdid="0.0.0.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:Type Mdid="0.27.1.0" Name="tid" IsRedistributable="true" IsHashable="false" IsMergeJoinable="false" IsComposite="false" IsFixedLength="true" Length="6" PassByValue="false">
<dxl:EqualityOp Mdid="0.387.1.0"/>
<dxl:InequalityOp Mdid="0.402.1.0"/>
<dxl:LessThanOp Mdid="0.2799.1.0"/>
<dxl:LessThanEqualsOp Mdid="0.2801.1.0"/>
<dxl:GreaterThanOp Mdid="0.2800.1.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.2802.1.0"/>
<dxl:ComparisonOp Mdid="0.2794.1.0"/>
<dxl:ArrayType Mdid="0.1010.1.0"/>
<dxl:MinAgg Mdid="0.2798.1.0"/>
<dxl:MaxAgg Mdid="0.2797.1.0"/>
<dxl:AvgAgg Mdid="0.0.0.0"/>
<dxl:SumAgg Mdid="0.0.0.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:Type Mdid="0.29.1.0" Name="cid" IsRedistributable="false" IsHashable="true" IsMergeJoinable="false" IsComposite="false" IsFixedLength="true" Length="4" PassByValue="true">
<dxl:EqualityOp Mdid="0.385.1.0"/>
<dxl:InequalityOp Mdid="0.0.0.0"/>
<dxl:LessThanOp Mdid="0.0.0.0"/>
<dxl:LessThanEqualsOp Mdid="0.0.0.0"/>
<dxl:GreaterThanOp Mdid="0.0.0.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.0.0.0"/>
<dxl:ComparisonOp Mdid="0.0.0.0"/>
<dxl:ArrayType Mdid="0.1012.1.0"/>
<dxl:MinAgg Mdid="0.0.0.0"/>
<dxl:MaxAgg Mdid="0.0.0.0"/>
<dxl:AvgAgg Mdid="0.0.0.0"/>
<dxl:SumAgg Mdid="0.0.0.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:Type Mdid="0.28.1.0" Name="xid" IsRedistributable="false" IsHashable="true" IsMergeJoinable="false" IsComposite="false" IsFixedLength="true" Length="4" PassByValue="true">
<dxl:EqualityOp Mdid="0.352.1.0"/>
<dxl:InequalityOp Mdid="0.0.0.0"/>
<dxl:LessThanOp Mdid="0.0.0.0"/>
<dxl:LessThanEqualsOp Mdid="0.0.0.0"/>
<dxl:GreaterThanOp Mdid="0.0.0.0"/>
<dxl:GreaterThanEqualsOp Mdid="0.0.0.0"/>
<dxl:ComparisonOp Mdid="0.0.0.0"/>
<dxl:ArrayType Mdid="0.1011.1.0"/>
<dxl:MinAgg Mdid="0.0.0.0"/>
<dxl:MaxAgg Mdid="0.0.0.0"/>
<dxl:AvgAgg Mdid="0.0.0.0"/>
<dxl:SumAgg Mdid="0.0.0.0"/>
<dxl:CountAgg Mdid="0.2147.1.0"/>
</dxl:Type>
<dxl:ColumnStatistics Mdid="1.322247.1.1.3" Name="xmin" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:ColumnStatistics Mdid="1.322247.1.1.2" Name="ctid" Width="6.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:RelationStatistics Mdid="2.322247.1.1" Name="foo" Rows="0.000000" EmptyRelation="true"/>
<dxl:Relation Mdid="0.322247.1.1" Name="foo" IsTemporary="false" HasOids="false" StorageType="Heap" DistributionPolicy="Hash" DistributionColumns="0" Keys="7,8,2" PartitionColumns="1" PartitionTypes="l" NumberLeafPartitions="3">
<dxl:Columns>
<dxl:Column Name="a" Attno="1" Mdid="0.23.1.0" Nullable="true" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="b" Attno="2" Mdid="0.23.1.0" Nullable="true" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="ctid" Attno="-1" Mdid="0.27.1.0" Nullable="false" ColWidth="6">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="xmin" Attno="-3" Mdid="0.28.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="cmin" Attno="-4" Mdid="0.29.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="xmax" Attno="-5" Mdid="0.28.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="cmax" Attno="-6" Mdid="0.29.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="tableoid" Attno="-7" Mdid="0.26.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
<dxl:Column Name="gp_segment_id" Attno="-8" Mdid="0.23.1.0" Nullable="false" ColWidth="4">
<dxl:DefaultValue/>
</dxl:Column>
</dxl:Columns>
<dxl:IndexInfoList/>
<dxl:Triggers/>
<dxl:CheckConstraints/>
<dxl:PartConstraint DefaultPartition="0" Unbounded="true">
</dxl:PartConstraint>
</dxl:Relation>
<dxl:ColumnStatistics Mdid="1.322247.1.1.5" Name="xmax" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:ColumnStatistics Mdid="1.322247.1.1.4" Name="cmin" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:GPDBScalarOp Mdid="0.97.1.0" Name="<" ComparisonType="LT" ReturnsNullOnNullInput="true">
<dxl:LeftType Mdid="0.23.1.0"/>
<dxl:RightType Mdid="0.23.1.0"/>
<dxl:ResultType Mdid="0.16.1.0"/>
<dxl:OpFunc Mdid="0.66.1.0"/>
<dxl:Commutator Mdid="0.521.1.0"/>
<dxl:InverseOp Mdid="0.525.1.0"/>
<dxl:Opfamilies>
<dxl:Opfamily Mdid="0.1976.1.0"/>
<dxl:Opfamily Mdid="0.3027.1.0"/>
</dxl:Opfamilies>
</dxl:GPDBScalarOp>
<dxl:GPDBScalarOp Mdid="0.96.1.0" Name="=" ComparisonType="Eq" ReturnsNullOnNullInput="true">
<dxl:LeftType Mdid="0.23.1.0"/>
<dxl:RightType Mdid="0.23.1.0"/>
<dxl:ResultType Mdid="0.16.1.0"/>
<dxl:OpFunc Mdid="0.65.1.0"/>
<dxl:Commutator Mdid="0.96.1.0"/>
<dxl:InverseOp Mdid="0.518.1.0"/>
<dxl:Opfamilies>
<dxl:Opfamily Mdid="0.1976.1.0"/>
<dxl:Opfamily Mdid="0.1977.1.0"/>
<dxl:Opfamily Mdid="0.3027.1.0"/>
</dxl:Opfamilies>
</dxl:GPDBScalarOp>
<dxl:ColumnStatistics Mdid="1.322247.1.1.7" Name="tableoid" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
<dxl:ColumnStatistics Mdid="1.322247.1.1.6" Name="cmax" Width="4.000000" NullFreq="0.000000" NdvRemain="0.000000" FreqRemain="0.000000" ColStatsMissing="true"/>
</dxl:Metadata>
<dxl:Query>
<dxl:OutputColumns>
<dxl:Ident ColId="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Ident ColId="2" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:OutputColumns>
<dxl:CTEList/>
<dxl:UnionAll InputColumns="1,2;28,29" CastAcrossInputs="false">
<dxl:Columns>
<dxl:Column ColId="1" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="2" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:Columns>
<dxl:UnionAll InputColumns="1,2;19,20" CastAcrossInputs="false">
<dxl:Columns>
<dxl:Column ColId="1" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="2" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:Columns>
<dxl:UnionAll InputColumns="1,2;10,11" CastAcrossInputs="false">
<dxl:Columns>
<dxl:Column ColId="1" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="2" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:Columns>
<dxl:LogicalSelect>
<dxl:Comparison ComparisonOperator="=" OperatorMdid="0.96.1.0">
<dxl:Ident ColId="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Comparison>
<dxl:LogicalGet>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="1" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="2" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="3" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="4" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="5" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="6" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="7" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="8" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="9" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:LogicalGet>
</dxl:LogicalSelect>
<dxl:LogicalSelect>
<dxl:Comparison ComparisonOperator=">" OperatorMdid="0.521.1.0">
<dxl:Ident ColId="11" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="3"/>
</dxl:Comparison>
<dxl:LogicalGet>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="10" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="11" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="12" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="13" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="14" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="15" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="16" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="17" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="18" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:LogicalGet>
</dxl:LogicalSelect>
</dxl:UnionAll>
<dxl:LogicalSelect>
<dxl:ArrayComp OperatorName="=" OperatorMdid="0.96.1.0" OperatorType="Any">
<dxl:Ident ColId="20" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Array ArrayType="0.1007.1.0" ElementType="0.23.1.0" MultiDimensional="false">
<dxl:ConstValue TypeMdid="0.23.1.0" Value="0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Array>
</dxl:ArrayComp>
<dxl:LogicalGet>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="19" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="20" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="21" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="22" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="23" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="24" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="25" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="26" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="27" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:LogicalGet>
</dxl:LogicalSelect>
</dxl:UnionAll>
<dxl:LogicalSelect>
<dxl:IsNotNull>
<dxl:Ident ColId="29" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:IsNotNull>
<dxl:LogicalGet>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="28" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="29" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="30" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="31" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="32" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="33" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="34" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="35" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="36" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:LogicalGet>
</dxl:LogicalSelect>
</dxl:UnionAll>
</dxl:Query>
<dxl:Plan Id="0" SpaceSize="1">
<dxl:GatherMotion InputSegments="0,1,2" OutputSegments="-1">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="1724.000307" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="0" Alias="a">
<dxl:Ident ColId="0" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="1" Alias="b">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter/>
<dxl:SortingColumnList/>
<dxl:Append IsTarget="false" IsZapped="false">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="1724.000277" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="0" Alias="a">
<dxl:Ident ColId="0" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="1" Alias="b">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter/>
<dxl:Sequence>
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="0" Alias="a">
<dxl:Ident ColId="0" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="1" Alias="b">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:PartitionSelector RelationMdid="0.322247.1.1" PartitionLevels="1" ScanId="1">
<dxl:Properties>
<dxl:Cost StartupCost="10" TotalCost="100" Rows="100" Width="4"/>
</dxl:Properties>
<dxl:ProjList/>
<dxl:PartEqFilters>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:PartEqFilters>
<dxl:PartFilters>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:PartFilters>
<dxl:ResidualFilter>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:ResidualFilter>
<dxl:PropagationExpression>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:PropagationExpression>
<dxl:PrintableFilter>
<dxl:Comparison ComparisonOperator="=" OperatorMdid="0.96.1.0">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Comparison>
</dxl:PrintableFilter>
</dxl:PartitionSelector>
<dxl:DynamicTableScan PartIndexId="1">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="0" Alias="a">
<dxl:Ident ColId="0" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="1" Alias="b">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter>
<dxl:Comparison ComparisonOperator="=" OperatorMdid="0.96.1.0">
<dxl:Ident ColId="1" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Comparison>
</dxl:Filter>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="0" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="1" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="2" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="3" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="4" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="5" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="6" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="7" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="8" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:DynamicTableScan>
</dxl:Sequence>
<dxl:Sequence>
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="9" Alias="a">
<dxl:Ident ColId="9" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="10" Alias="b">
<dxl:Ident ColId="10" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:PartitionSelector RelationMdid="0.322247.1.1" PartitionLevels="1" ScanId="2">
<dxl:Properties>
<dxl:Cost StartupCost="10" TotalCost="100" Rows="100" Width="4"/>
</dxl:Properties>
<dxl:ProjList/>
<dxl:PartEqFilters>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:PartEqFilters>
<dxl:PartFilters>
<dxl:Or>
<dxl:ArrayComp OperatorName="<" OperatorMdid="0.97.1.0" OperatorType="Any">
<dxl:ConstValue TypeMdid="0.23.1.0" Value="3"/>
<dxl:PartListValues Level="0" ResultType="0.1007.1.0" ElementType="0.23.1.0"/>
</dxl:ArrayComp>
<dxl:DefaultPart Level="0"/>
</dxl:Or>
</dxl:PartFilters>
<dxl:ResidualFilter>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:ResidualFilter>
<dxl:PropagationExpression>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="2"/>
</dxl:PropagationExpression>
<dxl:PrintableFilter>
<dxl:Comparison ComparisonOperator=">" OperatorMdid="0.521.1.0">
<dxl:Ident ColId="10" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="3"/>
</dxl:Comparison>
</dxl:PrintableFilter>
</dxl:PartitionSelector>
<dxl:DynamicTableScan PartIndexId="2">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="9" Alias="a">
<dxl:Ident ColId="9" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="10" Alias="b">
<dxl:Ident ColId="10" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter>
<dxl:Comparison ComparisonOperator=">" OperatorMdid="0.521.1.0">
<dxl:Ident ColId="10" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="3"/>
</dxl:Comparison>
</dxl:Filter>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="9" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="10" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="11" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="12" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="13" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="14" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="15" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="16" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="17" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:DynamicTableScan>
</dxl:Sequence>
<dxl:Sequence>
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="18" Alias="a">
<dxl:Ident ColId="18" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="19" Alias="b">
<dxl:Ident ColId="19" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:PartitionSelector RelationMdid="0.322247.1.1" PartitionLevels="1" ScanId="3">
<dxl:Properties>
<dxl:Cost StartupCost="10" TotalCost="100" Rows="100" Width="4"/>
</dxl:Properties>
<dxl:ProjList/>
<dxl:PartEqFilters>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:PartEqFilters>
<dxl:PartFilters>
<dxl:Or>
<dxl:Or>
<dxl:ArrayComp OperatorName="=" OperatorMdid="0.96.1.0" OperatorType="Any">
<dxl:ConstValue TypeMdid="0.23.1.0" Value="0"/>
<dxl:PartListValues Level="0" ResultType="0.1007.1.0" ElementType="0.23.1.0"/>
</dxl:ArrayComp>
<dxl:DefaultPart Level="0"/>
</dxl:Or>
<dxl:Or>
<dxl:ArrayComp OperatorName="=" OperatorMdid="0.96.1.0" OperatorType="Any">
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
<dxl:PartListValues Level="0" ResultType="0.1007.1.0" ElementType="0.23.1.0"/>
</dxl:ArrayComp>
<dxl:DefaultPart Level="0"/>
</dxl:Or>
</dxl:Or>
</dxl:PartFilters>
<dxl:ResidualFilter>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:ResidualFilter>
<dxl:PropagationExpression>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="3"/>
</dxl:PropagationExpression>
<dxl:PrintableFilter>
<dxl:Or>
<dxl:Comparison ComparisonOperator="=" OperatorMdid="0.96.1.0">
<dxl:Ident ColId="19" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="0"/>
</dxl:Comparison>
<dxl:Comparison ComparisonOperator="=" OperatorMdid="0.96.1.0">
<dxl:Ident ColId="19" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Comparison>
</dxl:Or>
</dxl:PrintableFilter>
</dxl:PartitionSelector>
<dxl:DynamicTableScan PartIndexId="3">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="18" Alias="a">
<dxl:Ident ColId="18" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="19" Alias="b">
<dxl:Ident ColId="19" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter>
<dxl:ArrayComp OperatorName="=" OperatorMdid="0.96.1.0" OperatorType="Any">
<dxl:Ident ColId="19" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Array ArrayType="0.1007.1.0" ElementType="0.23.1.0" MultiDimensional="false">
<dxl:ConstValue TypeMdid="0.23.1.0" Value="0"/>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="1"/>
</dxl:Array>
</dxl:ArrayComp>
</dxl:Filter>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="18" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="19" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="20" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="21" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="22" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="23" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="24" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="25" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="26" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:DynamicTableScan>
</dxl:Sequence>
<dxl:Sequence>
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="27" Alias="a">
<dxl:Ident ColId="27" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="28" Alias="b">
<dxl:Ident ColId="28" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:PartitionSelector RelationMdid="0.322247.1.1" PartitionLevels="1" ScanId="4">
<dxl:Properties>
<dxl:Cost StartupCost="10" TotalCost="100" Rows="100" Width="4"/>
</dxl:Properties>
<dxl:ProjList/>
<dxl:PartEqFilters>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:PartEqFilters>
<dxl:PartFilters>
<dxl:Or>
<dxl:PartListNullTest Level="0" IsNull="false"/>
<dxl:DefaultPart Level="0"/>
</dxl:Or>
</dxl:PartFilters>
<dxl:ResidualFilter>
<dxl:ConstValue TypeMdid="0.16.1.0" Value="true"/>
</dxl:ResidualFilter>
<dxl:PropagationExpression>
<dxl:ConstValue TypeMdid="0.23.1.0" Value="4"/>
</dxl:PropagationExpression>
<dxl:PrintableFilter>
<dxl:Not>
<dxl:IsNull>
<dxl:Ident ColId="28" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:IsNull>
</dxl:Not>
</dxl:PrintableFilter>
</dxl:PartitionSelector>
<dxl:DynamicTableScan PartIndexId="4">
<dxl:Properties>
<dxl:Cost StartupCost="0" TotalCost="431.000069" Rows="1.000000" Width="8"/>
</dxl:Properties>
<dxl:ProjList>
<dxl:ProjElem ColId="27" Alias="a">
<dxl:Ident ColId="27" ColName="a" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
<dxl:ProjElem ColId="28" Alias="b">
<dxl:Ident ColId="28" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:ProjElem>
</dxl:ProjList>
<dxl:Filter>
<dxl:Not>
<dxl:IsNull>
<dxl:Ident ColId="28" ColName="b" TypeMdid="0.23.1.0"/>
</dxl:IsNull>
</dxl:Not>
</dxl:Filter>
<dxl:TableDescriptor Mdid="0.322247.1.1" TableName="foo">
<dxl:Columns>
<dxl:Column ColId="27" Attno="1" ColName="a" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="28" Attno="2" ColName="b" TypeMdid="0.23.1.0"/>
<dxl:Column ColId="29" Attno="-1" ColName="ctid" TypeMdid="0.27.1.0"/>
<dxl:Column ColId="30" Attno="-3" ColName="xmin" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="31" Attno="-4" ColName="cmin" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="32" Attno="-5" ColName="xmax" TypeMdid="0.28.1.0"/>
<dxl:Column ColId="33" Attno="-6" ColName="cmax" TypeMdid="0.29.1.0"/>
<dxl:Column ColId="34" Attno="-7" ColName="tableoid" TypeMdid="0.26.1.0"/>
<dxl:Column ColId="35" Attno="-8" ColName="gp_segment_id" TypeMdid="0.23.1.0"/>
</dxl:Columns>
</dxl:TableDescriptor>
</dxl:DynamicTableScan>
</dxl:Sequence>
</dxl:Append>
</dxl:GatherMotion>
</dxl:Plan>
</dxl:Thread>
</dxl:DXLMessage> | {
"perplexity_score": 2011.1,
"pile_set_name": "Github"
} |
The Northern state is one of Tesla Motors' largest markets thanks to its subsidies, and it appears that it also wants to be ahead in the race to get autonomous cars into the hands of the average Joe.

Enabling autonomous vehicle testing on public roads will be done through a bill, which the Norwegian government intends to pass in the spring of 2017.

As you already know, several companies have begun testing self-driving vehicles on public roads, and most of these tests have been conducted in the USA. China has a strategy that focuses on the development of driverless vehicles, and Japan is also taking similar steps to allow its automakers to offer qualified products on the market as soon as possible, Norway Today notes.

Norway wants to pass a bill that would enable companies like Alphabet (Google's parent), Uber, Ford, Tesla Motors, and many other corporations to test self-driving automobiles on its roads.

Evidently, the brands mentioned above have not explicitly announced their desire to perform these kinds of tests on Norwegian roads, but the country does bring a few challenges that are of interest to those who develop driverless automobiles.

Norway is known for its climate, and some of its infrastructure often makes drivers face inclement weather out of the blue. Autonomous cars will have to be prepared to withstand whatever nature throws at their sensors, and performing tests in remote regions with extreme temperatures is not enough to be sure that a product is capable of handling itself in any environment.

Evidently, Norway does not want anyone to bring a prototype and let it drive itself on public roads, because the goal of the project focuses on advanced vehicles that have already proven themselves in controlled conditions. In other words, technologically mature autonomous vehicles will easily get approval for testing on Norwegian roads, while experiments will have to stay on closed roads.
"perplexity_score": 266.2,
"pile_set_name": "OpenWebText2"
} |
Manuel Garcia Sees Physics That Don't Exist
by KEVIN RYAN
December 27, 2006
Another Opportunity to Understand Our Predicament
Over the years we've heard from a few educated people who claim
to understand and support the latest story given by the US government
for the unprecedented destruction of the WTC buildings.
Unfortunately, those folks usually turn out to either work for
the Bush Administration directly, like FEMA and NIST,
or are in some other way profiting from the War on Terror.
Some people accept what these Bush scientists say because
they have PhDs in scientific fields,
or because certain media sources promote the official myths.
In a way, the curious behavior of these scientists and media sources
allows us to better see the predicament we all face.
With the case of Manuel Garcia,
and his three recent, rapid-fire articles in Counterpunch,
we appear to have another opportunity to examine the phenomenon
of Bush science.
Here we see a fully educated scientist making strong supportive statements
of the Bush Administration's 9/11 theories,
despite the fact that he must know those theories are based on false
or unsubstantiated claims.
For our own understanding,
let's take a closer look at Manuel Garcia and his efforts.
Garcia not only works for the government,
he works for a very interesting organization in terms of
the best hypothesis for what happened that day.
Lawrence Livermore National Laboratory (LLNL), Garcia's employer,
appears to be where explosive thermite was invented,
and it continues to be a focus of research there.
(1)
At LLNL, government scientists have learned how to combine
the exothermic power of the thermite reaction with organic moieties
to produce a thermite reaction that can do pressure/volume work
(i.e. turn massive quantities of concrete
and other building materials into dust).
From the research of Steven Jones,
we know that the thermite reaction likely played a role
in bringing the towers down,
and it would not be surprising if technology developed by LLNL was involved.
Could that be why Manuel Garcia is so intent on seeing Physics
that don't exist,
in order to avoid seeing links to technology developed by his employer?
There may be more to it than that.
Notice that there are many aspects about the official story
of 9/11/01 that strain credulity, to say the least,
but none more so than the "collapse" of the WTC buildings.
As with the air defense failures,
we've been given several contradictory stories
about these events over the years, none of which have panned out.
The first was an urban legend that grew,
as a result of the long delays in official commitment,
from media reports of extreme temperatures and melting steel.
We were given other stories for the destruction of these buildings,
but the Pancake Theory,
which was the primary explanation offered by FEMA
and was the central explanation in numerous media stories,
lasted for a period of more than three years.
The Pancake Theory recently died a quiet death with the FAQ responses
offered by NIST, but as with the urban legend media stories,
we have been offered no apologies from those who propped-up
the ongoing 9/11 Wars with these false claims.
So when a US weapons scientist, like Manuel Garcia,
offers more wild speculation in support of the Bush Administrations'
ever-changing stories, we must first recognize that this is not really
about serious researchers quibbling over minor details of physical evidence.
We must realize that Garcia and his ilk have already given us
several false stories for the destruction of these buildings,
and those lies have resulted not only in the deaths
of hundreds of thousands of people,
but have also supported the systematic conversion of America
into a totalitarian state waiting to happen.
What will become of those Bush scientists if honest people ever achieve
the awareness and political will to call for a real investigation?
Quantum Behavior
In his "Physics of 9/11", Garcia offered his new "twisted joints" theory,
adding more conjecture to the miasma that NIST spent three years crafting.
Garcia may have twisted a few joints himself before writing these articles,
but it is clear that he did not put much time into
reviewing NIST's WTC report before putting his reputation,
and perhaps much more, on the line to defend it.
NIST did not actually describe the all-important forces that supposedly
pulled in the tower's external columns.
In their computer model, these forces were phantom forces,
applied to the external columns by sagging floors that had,
paradoxically, been disconnected from those columns.
Garcia's talk of twisting joints is, therefore,
only imaginative conjecture at best.
Garcia seems to admit his own sloppy dishonesty by claiming
that high temperatures in the impact zone were sufficient to soften the steel,
and that floors in the impact zone sagged.
One only has to read the summary of the NIST report to know
that the impact zones were far from where NIST says the buildings failed.
However,
there could be another explanation for this "spooky action at a distance."
Garcia may have stumbled upon a new demonstration of
the principle of Physics known as non-locality,
one in which steel heated in one place causes steel located in another,
far away place to soften and fail. That would be amazing if true.
The greater part of this first article is simply wild speculation,
a crime in itself at this point.
Although NIST admits that many scientists,
given millions of dollars and several years to work on it,
could not describe the dynamics of collapse for the towers,
Garcia makes a one-man job of it in order
to put those silly conspiracy theorists in their place.
He offers equations and terms like "wave trains"
to ensure that those of us who need to "expand [our] range
of rationality and hence [our] political maturity" can do so,
if only we can follow his superior thought processes.
In Garcia's dreamlike world of superiority,
strange things happen.
Floors vanish and buildings begin crashing to the ground,
as a result of fire-softened steel,
at an initial speed of 16 mph.
That is, there is no zero point at which such a building begins to drop.
These skyscrapers actually exhibit quantum behavior,
as large multi-floor sections go from rest to a speed of
16 mph instantaneously!
Garcia appears humble when describing this monumental discovery,
telling us little about the "rippling wavelets" that make it so.
He finally sums up his findings by simply stating that "The towers shattered,
and the pieces fell to the ground." Perhaps someone should call
the Nobel committee.
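As a quick check on the kinematics at issue here,
the sketch below (my own, in Python; nothing in it comes from
Garcia or NIST) computes what a 16 mph starting speed implies
under simple free fall; the story height is an assumed typical value.

import math

g = 9.81                  # m/s^2, gravitational acceleration
v16 = 16 * 0.44704        # 16 mph converted to m/s (~7.2 m/s)
h = v16**2 / (2 * g)      # free-fall drop needed to reach that speed
print(f"16 mph = {v16:.1f} m/s, reached only after a ~{h:.1f} m free fall")

story = 3.7               # m, assumed typical story height
v_story = math.sqrt(2 * g * story)   # from v^2 = 2*g*h
print(f"one-story free fall: {v_story:.1f} m/s = {v_story/0.44704:.1f} mph")

The point of the sketch is only that a block already moving at 16 mph
has, by the ordinary equations of motion, already fallen about 2.6 meters --
a prior drop that the quoted account leaves unexplained.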
The Energy Crisis Solved
Garcia's second article,
"The Thermodynamics of 9/11", does not improve upon the first.
Here he repeats his ridiculous claim of the importance of high temperatures
in the impact zone (far from the failure zone),
and then states that the "fatal element in the WTC Towers story
is that enough of the thermal insulation was
banged off the steel frames by the airplane jolts…".
Of course,
those of us who have actually followed NIST's investigation
know that they could not produce any "robust criteria"
to establish that fireproofing was lost through forces of vibration.
Instead, NIST performed a shotgun test to see if the fireproofing
could have been lost through shearing forces.
The shotgun test not only failed to support NIST's pre-determined
conclusions, as was the case for all of their other physical tests,
but it actually proved that the fireproofing could not have been sheared off
because too much energy would be needed.
This did not deter NIST,
as they simply proceeded by filling their computer model with vague,
sweeping assumptions like suggesting that the fireproofing
was completely removed wherever the office furnishings were damaged
(i.e. if a cube wall fell or a pencil was broken,
thousands of square meters of fireproofing must have been sheared off too).
(2)
If it was not already clear that Garcia never read NIST's WTC report,
we might think that he got his quantum leaps from them.
Garcia's analysis of the WTC thermodynamics then begins
with the removal of all of the fireproofing from all the steel,
an unsupported assumption at best.
In any case, to consider temperature increases,
an honest scientist would take the materials and the energy sources involved,
and perform some straightforward calculations to evaluate
the available energies.
In the case of the WTC towers,
we know from FEMA and NIST that about 4,500 gallons of jet fuel
were available to feed the fires on the impact and failure floors,
giving an energy value of approximately 600 GJ,
considering moderate combustion.
And we know the buildings had a fire load of 20 Kg/m^2,
which would provide an energy value of about 500 GJ for the furnishings
on several floors in the vicinity of the failure zones.
(3)
These realistic values give a total energy of about 1,100 GJ
that would be available to heat one building,
but Garcia uses 8,000 GJ and 3,000 GJ,
values NIST created through their deceptive, pretzel-logic manipulations.
Maybe this incredible energy yield means that Garcia and NIST
have solved the energy crisis,
and we can end the 9/11 Wars and bring our troops home.
If not, maybe Garcia can help us understand where all that additional energy
came from, instead of just spouting off with so much arrogance.
We really would appreciate it.
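For readers who want to retrace the arithmetic behind
the 600 GJ, 500 GJ, and 1,100 GJ figures above, here is a
minimal Python sketch (mine, not the article's, FEMA's, or NIST's).
The fuel energy density and heat of combustion are textbook values;
the floor area, floor count, and burned fraction are assumptions
I introduce only to illustrate the order of magnitude.

GAL_TO_L = 3.785          # liters per US gallon
JET_FUEL_MJ_PER_L = 35.0  # approximate energy density of jet fuel

jet_fuel_gal = 4_500      # quantity cited from FEMA/NIST above
jet_fuel_GJ = jet_fuel_gal * GAL_TO_L * JET_FUEL_MJ_PER_L / 1_000
print(f"jet fuel: ~{jet_fuel_GJ:.0f} GJ")        # ~600 GJ

fire_load_kg_m2 = 20      # fire load cited above
floor_area_m2 = 4_000     # assumed: a WTC floor is roughly an acre
floors = 4                # assumed: "several floors" near the failure zone
heat_MJ_per_kg = 16.0     # approximate heat of combustion of furnishings
burned_fraction = 0.10    # assumed fraction consumed before collapse
furnishings_GJ = (fire_load_kg_m2 * floor_area_m2 * floors
                  * heat_MJ_per_kg * burned_fraction / 1_000)
print(f"furnishings: ~{furnishings_GJ:.0f} GJ")  # ~500 GJ

total_GJ = jet_fuel_GJ + furnishings_GJ
print(f"total: ~{total_GJ:.0f} GJ, versus the 3,000-8,000 GJ used above")

Under these stated assumptions the realistic total comes out
near 1,100 GJ, several times below the values Garcia adopts.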
In the absence of this explanation,
Garcia proceeds to apply this tremendous amount of mysterious energy
to the heating of the materials involved.
But instead of taking the quantities of steel, concrete,
and other materials into account (don't forget the aircraft itself)
Garcia helps us to "expand our range of rationality"
by dumbing-down the scenario using a "fictitious homogenized"
substance called "ironcrete".
Garcia muddies the water with his ironcrete because,
although he doesn't give the calculations,
this allows him to use a sleight of hand,
giving a value for specific heat that is less than that
of any of the starting materials.
Few would notice, but this means that, in support of Garcia's purposes,
it takes less heat to increase the temperature of each kilogram of ironcrete
than it would to increase the temperature of the steel and concrete
used in the WTC towers.
Since he's using eight times more energy than could have been available anyway,
this minor scam doesn't seem worth the effort.
But note that Garcia also suggests all the available heat became trapped
in his ironcrete, thereby assuming that no hot gases left the impact zone,
that no heat escaped by conduction,
and that the steel and concrete had an unlimited amount of time
to absorb all the heat.
He also conveniently ignores all the other materials in the aircraft
and the buildings,
including the Aluminum, all the office furnishings,
and the vast amount of air and water vapor,
all of which would be heated too, absorbing energy.
Considering his quantum mechanical collapse dynamics
and magical fireproofing loss,
these distributions of heat energy may not seem so strange,
that is until Garcia needs that energy back to support his later claims
of melting Aluminum, plastic "cracking" to create dense pockets
of hydrocarbon vapors that mimic high-explosives,
and even a replay of the beginnings of life on earth (no kidding).
The Dark Matter of Intelligent Fuel
By the time we get to Garcia's third article,
we're either believing this guy is the greatest scientist in history,
or we're understanding that his series in Counterpunch may be something
more of a sucker punch.
In this third article, "Dark Fire",
Garcia claims to have single-handedly solved the problem
that baffled FEMA and NIST,
as well as all objective people around the world -- the collapse of WTC 7.
Now if Garcia had proven the quantum behavior of large objects,
and had developed a method for extracting eight times the normal amount
of energy from a gallon of hydrocarbon fuel,
as his previous articles suggest, we might be intrigued.
Maybe the title of this third article is a reference to the combustion
of the elusive dark matter that Physicists have long sought after.
Let's take a look.
The challenge of explaining the collapse of WTC 7 was described by
Jim Hoffman with the following three points.
* This steel-framed skyscraper collapsed into its footprint with all
the characteristics of a standard controlled demolition.
* No other steel-framed building that collapsed for any reason has ever
shown any of those features -- let alone all those features.
* No other tall steel-framed building has ever collapsed from fires --
the primary cause of WTC 7's collapse according to the official story.
Additionally,
we know that FEMA spent eight months of exhaustive work on the investigation,
finally claiming that "The performance of WTC 7 is of significant interest
because it appears the collapse was due primarily to fire,
rather than any impact damage from the collapsing towers."
They then added that their
"best hypothesis has only a low probability of occurrence."
NIST has spent several years trying to come up with a legitimate,
non-explosive explanation, so far without success.
But our hero Garcia takes FEMA's low probability scenario,
embellishes it a bit, and boldly declares "This is what happened",
leaving no room for doubt.
The initiating event of Garcia's low probability theory is a "hot volley"
of "incandescent metal and heated stone" that destroyed a fuel oil
distribution pipe in the southwest corner of WTC 7.
Where the stone came from we'll never know (maybe it was an asteroid),
but the probability that the temperature of the steel from the towers,
which NIST measured at 250 C, could cause that steel to glow in incandescence,
is essentially nil.
FEMA pointed out a string of improbable events that would
need to come together for the rest of this story to pan out,
but they made clear that the biggest problem was that WTC 7
first began to collapse on the east side,
far from the distribution pipe that Garcia builds his story on.
This means that the fuel oil released from any such breakage
would need to exhibit the features of an intelligent migration,
pumping up from multiple tanks on the ground floor,
through the damaged pipe in the southwest corner of the building,
traversing a distance of 250 feet across the fifth floor
without any spreading or transfer loss,
to pool very selectively beneath truss #2 in a mechanical room
on the east side of the building.
This theory depends on much more creative guesswork,
including the following.
* None of this intelligently migrating fuel oil found its way
to the containment vessel that was designed for such an event,
and therefore never triggered the safety mechanism
that would automatically de-energize the pumps.
* The authorities that decided not to fight the fires in WTC 7
also decided not to cut the power to these pumps,
allowing them to spray oil within this burning skyscraper,
for up to seven hours, in the middle of Manhattan.
* The pooled fuel oil was somehow heated to a sufficient temperature
for ignition, at which point an unknown ignition source initiated an efficient,
multi-hour burn.
* Although now situated in an enclosed room with limited space,
the oil found limitless Oxygen in order to extract every bit of energy
from the assumed maximum amount of 12,000 gallons.
* The fires generated by this burning fuel oil centered
in a highly specific formation directly beneath that critical truss,
and the heat produced was perfectly contained and directed
at the truss itself but nowhere else.
* This truss-specific fire raged for up to seven hours
but was never visible from any external view.
* This miraculous fire then caused the failure of that one critical truss,
which somehow initiated the total collapse of this 47 story building
in just 6.6 seconds.
Garcia's "Dark Fire" piece might as well have been about the combustion
of that elusive dark matter,
because even if we really wanted to believe his extended string
of astounding events,
he doesn't address the primary problems of the collapse dynamics.
Instead, he simply states
"a progressive collapse propagates up and material falls freely."
And as with the work of NIST,
we're expected to believe that just saying so makes it true.
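For what it's worth, the quoted 6.6-second figure can be put in
context with a one-line free-fall calculation. The average story
height below is an assumption, but for any plausible value the result
is the same: 6.6 seconds is barely slower than an object falling the
same height in a vacuum, which is what makes the bare assertion above
so unsatisfying.

# Free-fall time from the roof height of a 47-story building, for
# comparison with the quoted 6.6-second collapse time.
import math

STORY_HEIGHT_M = 3.9            # assumed average story height
height = 47 * STORY_HEIGHT_M    # ~183 m
g = 9.81                        # m/s^2

t_freefall = math.sqrt(2 * height / g)
print(f"free fall from {height:.0f} m: {t_freefall:.1f} s")   # ~6.1 s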
Begging Off and Catching On
When questioned more closely about his speculative articles,
Garcia claimed in an email that he was done talking and writing
about these issues,
and that folks would need to rely on the authoritative information
provided by Frank Greening and Popular Mechanics.
In this plea, Garcia says that Greening's results jibe with his,
although he was unaware of Greening before he wrote his series.
It is clear from his "thermodynamics" piece that Garcia might subscribe
to Greening's "Fruit Crumble" theory,
where ultra-fine Aluminum and Iron Oxide spontaneously form
in the buildings to produce "natural thermite",
distributed in just the right forms and places.
It could be that Garcia would buy Greening's
"They Just Forgot to Use the Bolts" theory as well,
if it comes to that.
(4)
Government scientists get paid to support government policies,
particularly in this era of "Bush Science",
and clearly Garcia is willing to play along.
But why would political news organizations, like Counterpunch,
that present themselves as alternatives to the corporate media,
promote these false claims?
Consider for a moment the implications of a breakthrough
in the truth about 9/11.
If the official story about 9/11 is completely false,
as it has proven to be,
that fact should call into question those media sources
who have helped to cover up the details over the last five years,
even if only through gross negligence of the facts.
Whether or not collusion with alternative media was involved,
if there is a possibility that the neo-cons actually helped in planning
or executing the attacks,
then the fact that they pulled it off means that Alexander Cockburn
and other (ostensibly) liberal leaders might no longer enjoy the
"irreverent and biting" superiority that they identify themselves with.
It could be very distressing for some of these rebel leaders
to realize that instead of "muckraking with a radical attitude"
they have spent years meekly bolstering the status quo.
It appears that these kinds of realizations are inevitable,
and actually offer us a chance to improve our situation.
In the US, we'll soon have more opportunity to notice
the default states in which we are expected to accept scientific authority
no matter how illogical,
and accept a cartoonish political framework no matter how impotent.
In the next few months,
these opportunities will come like "hot volleys" from Manuel Garcia,
providing stark examples of how pretentious "experts",
and other types of fictitious,
homogenized (ironcrete) leaders give no real alternatives
to the problems we've seen in the last five years.
If our new Democratic Congress will not call for impeachment
or a new 9/11 investigation,
will they at least repeal the Military Commissions Act,
or the Patriot Act?
Will they stop construction of Halliburton's secretive "detention centers"
or put an end to the illegal 9/11 Wars?
Will our government's efforts to protect us from unexplained terrorism
ever stop looking exactly like the efforts they would take
to protect themselves from us? In other words,
will these new leaders, in practice, be any different than the neo-cons?
Over the next few months we will realize the answers to these questions,
and perhaps then we can begin taking more responsibility
for the deception in our lives. | {
"perplexity_score": 673.2,
"pile_set_name": "Pile-CC"
} |
Response of auditory units in the barn owl's inferior colliculus to continuously varying interaural phase differences.
1. We studied the response of single units in the central nucleus of the inferior colliculus (ICc) of the barn owl (Tyto alba) to continuously varying interaural phase differences (IPDs) and static IPDs. Interaural phase was varied in two ways: continuously, by delivering tones to each ear that varied by a few hertz (binaural beat, Fig. 1), and discretely, by delaying in fixed steps the phase of sound delivered to one ear relative to the other (static phase). Static presentations were repeated at several IPDs to characterize interaural phase sensitivity. 2. Units sensitive to IPDs responded to the binaural beat stimulus over a broad range of delta f (Fig. 4). We selected a 3-Hz delta f for most of our comparative measurements on the basis of constraints imposed by our stimulus generation system and because it allowed us to reduce the influence of responses to stimulus onset and offset (Fig. 3A). 3. Characteristic interaural time or phase sensitivities obtained by the use of the binaural beat stimulus were comparable with those obtained by the use of the static technique (Fig. 5; r^2 = 0.93, Fig. 6). 4. The binaural beat stimulus facilitated the measurement of characteristic delay (CD) and characteristic phase (CP) of auditory units. We demonstrated that units in the owl's inferior colliculus (IC) include those that are maximally excited by specific IPDs (CP = 0 or 1.0) as well as those that are maximally suppressed by specific IPDs (CP = 0.5; Figs. 7 and 8). 5. The selectivity of units sensitive to IPD or interaural time difference (ITD) was weakly influenced by interaural intensity difference (IID). (ABSTRACT TRUNCATED AT 250 WORDS)
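A binaural beat stimulus of the kind described above is straightforward to sketch in code: two tones differing by delta f produce an interaural phase difference that sweeps through a full cycle every 1/delta f seconds. The carrier frequency and sample rate below are illustrative choices, not the study's actual parameters.

# Illustrative binaural beat: the IPD sweeps through 360 degrees
# once every 1/delta_f seconds (parameters are examples only).
import numpy as np

fs = 48_000        # sample rate, Hz (example)
f_left = 5_000.0   # tone delivered to the left ear, Hz (example)
delta_f = 3.0      # beat frequency, as used for most measurements
f_right = f_left + delta_f

t = np.arange(int(fs / delta_f)) / fs     # one full IPD cycle
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * f_right * t)

ipd_deg = (360.0 * delta_f * t) % 360.0   # instantaneous IPD, degrees
print(ipd_deg[0], ipd_deg[len(t) // 2])   # 0.0 ... ~180.0

Presenting left and right to the two ears yields the continuously varying IPD described above. | {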
"perplexity_score": 560.6,
"pile_set_name": "PubMed Abstracts"
} |
WASHINGTON — Special Counsel Robert Mueller devoted a good chunk of his brief, and presumably only, public press conference Wednesday to explaining why he didn’t, or rather, couldn’t charge President Trump with a crime.
It boils down to a Watergate-era memo from 1973 written by the DOJ’s internal Office of Legal Counsel, or OLC. That memo concluded that charging a sitting president with a crime would unduly interfere with the functioning of the entire executive branch.
Indeed, to do so, Mueller said, would be “unconstitutional.”
The thing is, plenty of legal experts don’t actually buy that argument. They say the policy, and the memo from which it was born, represent an overly creative interpretation of the Constitution and probably wouldn’t hold up if challenged in the Supreme Court.
“There’s nothing in the Constitution that says directly that a president can’t be indicted,” said Ric Simmons, professor of law at the Ohio State University Moritz College of Law. “I think Mueller’s hands were tied. But there’s another question: Is the memo correct? I think it’s not.”
The question is far from academic. If not for that DOJ policy, Trump could easily have been charged with obstruction of justice based on the findings in Mueller’s final report, according to a public letter signed by over 1,000 former prosecutors.
And at least one candidate for president — Sen. Elizabeth Warren, Democrat of Massachusetts — has pledged to reverse the Nixon-era memo if elected.
Back to 1973
The memo was originally drafted by lawyers working for former president Richard Nixon’s Department of Justice in the midst of the Watergate scandal, during which Nixon himself was facing potential legal jeopardy.
“The president is the symbolic head of the nation,” Nixon’s DOJ declared. “The spectacle of an indicted president still trying to serve as chief executive boggles the imagination.”
Three decades later, the Office of Legal Counsel reaffirmed the decision once again at a moment when another president, Bill Clinton, was facing potential charges for lying under oath about having an affair.
“Every OLC opinion I’ve ever read, with one exception, has favored the executive over the legislative”
“The indictment or criminal prosecution of a sitting President would unconstitutionally undermine the capacity of the executive branch to perform its constitutionally assigned functions,” the Clinton-era follow-up memo from the year 2000 said.
Yet both were drafted by an internal DOJ body — the Office of Legal Counsel — that has a long history of taking positions favorable to the expansion of power by its ultimate boss: the president.
“Every OLC opinion I’ve ever read, with one exception, has favored the executive over the legislative, or the executive over the courts,” said Paul Rosenzweig, a former member of Independent Counsel Ken Starr’s investigation into Clinton. “They are the apotheosis of the unitary executive view, the commander-in-chief authority.”
Rosenzweig said he doesn’t think the OLC opinion would withstand a court challenge.
He’s hardly alone. His former boss, Starr, told VICE News earlier this year that he also doesn’t agree with the memo’s conclusion.
“My own view is the president can be indicted, but that’s not the Justice Department’s view,” Starr said — while correctly predicting Mueller would follow DOJ policy anyway, and decline to do so.
The source of tension lies in the memo’s legal argument. The original 1973 memo acknowledges that nothing in the Constitution explicitly says the president can’t be charged with a crime. But it argues that the Constitution gives the president a job to do, and he should be allowed to do it — without having the threat of criminal prosecution standing in his way.
The memo concludes that “criminal proceedings against a president in office should not go beyond a point where they could result in so serious a physical interference with the president’s performance of his official duties that it would amount to an incapacitation.”
That conclusion, however, could easily be viewed differently by the Supreme Court, said Frank Bowman, author of the forthcoming book, High Crimes and Misdemeanors: A History of Impeachment for the Age of Trump.
“The memo takes some reasonable inferences about how such an indictment would be troublesome for the president, and then spins them out into conclusions that aren’t supportable,” he said. “I’m not convinced the OLC memo was based on Constitutional grounds.”
Others have similarly argued that the conclusion is insupportable.
“I’m not convinced the OLC memo was based on Constitutional grounds”
In 1974, lawyers working for former Watergate prosecutor Leon Jaworski wrote a memo concluding that president Nixon could be indicted, although Jaworski decided to make Nixon an unindicted co-conspirator instead, alongside his formally-charged underlings.
In 1998, Starr got a law professor named Ronald Rotunda to draft a memo saying Clinton could be indicted while in office. But Rotunda cautioned that “it may be the case” that if Clinton were convicted, he might not be able to be sent to prison until “after he leaves office.”
The Supreme Court has given some clues as to how it would view this memo. In a unanimous 1997 ruling, the court held that suing a sitting president in civil court is fair game.
But we may never see a similar courtroom challenge over the OLC policy.
Since a prosecutor working for the Department of Justice is forbidden by DOJ policy from charging the president, a legal dispute over a presidential indictment seemingly wouldn’t be able to get going in the first place, legal scholars said.
“There’s no means to challenge that opinion outside the Justice Department itself, and no means of compelling the department to change the rule if they don’t want to,” Bowman said.
Mueller’s view
In his brief remarks, however, Mueller appeared to endorse not just the DOJ policy, but also the OLC’s interpretation of the Constitutionality of indicting a sitting president.
“Under long-standing Department policy, a President cannot be charged with a federal crime while he is in office. That is unconstitutional,” Mueller said. “Even if the charge is kept under seal and hidden from public view — that too is prohibited.”
Mueller took the DOJ official position one step further — and argued that since he could not indict the president, it would be improper to say he broke the law.
“It would be unfair to potentially accuse somebody of a crime when there can be no court resolution of an actual charge,” Mueller said on Wednesday.
U.S. Attorney General William Barr, right, listens to concerns raised about public safety in rural Alaska during at a roundtable discussion at the Alaska Native Tribal Health Consortium on Wednesday, May 29, 2019, in Anchorage, Alaska. (AP Photo/Mark Thiessen)
Yet that view, too, is up to debate — and has been disputed by Mueller’s boss, Attorney General William Barr.
Barr, who’s been criticized for misleading the public on the contents of the special counsel’s report, said this week he thought Mueller should have reached a decision on whether Trump’s behavior broke the law or not.
“The opinion says you can’t indict a president while he’s in office,” Barr told CBS News in an interview released Thursday. “But he could have reached a decision as to whether it was criminal activity.”
Because Mueller didn’t, Barr said, it was up to him and Deputy AG Rod Rosenstein, to reach their own decision. The rest, as they say, is history. | {
"perplexity_score": 323,
"pile_set_name": "OpenWebText2"
} |
Q:
Kivy Plyer Notification
I am very new to kivy and I just tried Plyer for the App I am making. But for some reason I cannot get the notify method to work and as soon as the Clock method runs, it gives me this Error: TypeError: notify() missing 1 required positional argument: 'self'
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.anchorlayout import AnchorLayout
from kivy.uix.switch import Switch
from kivy.clock import Clock
from kivy.uix.label import Label
import datetime
from kivy.event import EventDispatcher
import plyer
count = 0
class ConnectPage(GridLayout):
def __init__(self, **kwargs):
super(ConnectPage, self).__init__(**kwargs)
self.cols = 1
self.switch = Switch()
self.add_widget(self.switch)
self.label = Label(text="0")
self.add_widget(self.label)
def manager(self):
global count
count += 1
print("[", count, "]")
plyer.facades.Notification.notify(title='hehe', message='huhu')
Clock.schedule_interval(manager, 1 / 1.)
class TestService(App):
def build(self):
return ConnectPage()
TestService().run()
A:
notify() is a method of the class Notification, and it is not marked @staticmethod. So you need an instance of the class to call it.
According to the documentation, the proper way to create a notification is:
from plyer import notification
notification.notify(title='hehe', message='huhu')
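For completeness, here is a minimal sketch of the corrected class, assuming plyer is installed and the platform provides a notification backend. The clock callback is scheduled as a bound method from __init__, so Kivy passes only the elapsed time dt:

from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.switch import Switch
from kivy.uix.label import Label
from kivy.clock import Clock
from plyer import notification  # module-level instance, not the facade class

class ConnectPage(GridLayout):
    def __init__(self, **kwargs):
        super(ConnectPage, self).__init__(**kwargs)
        self.cols = 1
        self.count = 0
        self.add_widget(Switch())
        self.label = Label(text="0")
        self.add_widget(self.label)
        # Bound method: Kivy calls it as self.manager(dt) once per second.
        Clock.schedule_interval(self.manager, 1.0)

    def manager(self, dt):
        self.count += 1
        self.label.text = str(self.count)
        notification.notify(title='hehe', message='huhu')

class TestService(App):
    def build(self):
        return ConnectPage()

TestService().run()

On platforms without a notification backend, plyer raises NotImplementedError. | {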
"perplexity_score": 3470.5,
"pile_set_name": "StackExchange"
} |
Saturday, January 3, 2009
Sisterhood of the Travelling Pants II
Sisterhood of the Traveling Pants II
Dir: Sanaa Hamri
Starring: Amber Tamblyn, America Ferrera, Alexis Bledel
Runtime: 1 hr 57 mins
Film buffs across the country rejoiced when we heard Hollywood was remaking Samuel Goldwyn's classic 1954 film, Women Can Wear Pants Now. No one can forget the tale of the four lovable housewives, who (with their husbands' permission) start wearing pants. Even more memorable than the film itself was the ensuing controversy, as some women actually took to wearing pants! Many film historians believe that the loose morality and ultra-liberal stereotype of Hollywood began with this very film.
As revered and groundbreaking as it was in 1954, some elements of Women Can Wear Pants Now comes across as dated. For instance, the film was shot in technicolor, and the sound is primitive. Also, several scenes are blatant anti-Soviet propaganda, and have nothing to do with the plot.
There was a social revolution following Women Can Wear Pants Now as women everywhere fought for more rights. In the years following the film, women's rights groups fought for, and achieved the right to refuse sex to anyone, and women's bathrooms (before 1955, there were only men's bathrooms. It wasn't much of an issue, because said bathrooms were only in public places anyway).
They also fought for the right to vote, and in a particularly emotional part of the month in September of 1956, this issue was taken before the Supreme Court, where it was realized, due to a legal loophole created by an obscure constitutional amendment in 1920, women technically already had the right to vote! And wouldn't you know it, black people could vote too.
Although everyone learned a lot that day, they all agreed to leave these events out of the history books, because it made everyone feel very silly.
The remake, Sisterhood of the Traveling Pants II, is a success in that it stays true to the original while also managing to breathe fresh, modern air into it. The quartet of edgy, pants-wearing sisters includes Ugly Betty star America Ferrera. Her performance adds plenty of vaguely ethnic spice to the otherwise snow-white cast.
Thankfully, all the anti-Soviet propaganda of the original has been removed, and replaced with anti-Chinese propaganda. | {
"perplexity_score": 231.3,
"pile_set_name": "Pile-CC"
} |
Q:
C++ - "Most important const" doesn't work with expressions?
According to Herb Sutter's article http://herbsutter.com/2008/01/01/gotw-88-a-candidate-for-the-most-important-const/, the following code is correct:
#include <iostream>
#include <vector>
using namespace std;
vector<vector<int>> f() { return {{1},{2},{3},{4},{5}}; }
int main()
{
const auto& v = f();
cout << v[3][0] << endl;
}
i.e. the lifetime of the temporary returned by f() is extended to match the lifetime of the const reference v.
And indeed this compiles fine with gcc and clang and runs without leaks according to valgrind.
However, when I change the main function thusly:
int main()
{
const auto& v = f()[3];
cout << v[0] << endl;
}
it still compiles but valgrind warns me of invalid reads in the second line of the function due to the fact that the memory was free'd in the first line.
Is this standard compliant behaviour or could this be a bug in both g++ (4.7.2) and clang (3.5.0-1~exp1)?
If it is standard compliant, it seems pretty weird to me... oh well.
A:
There's no bug here except in your code.
The first example works because, when you bind the result of f() to v, you extend the lifetime of that result.
In the second example you don't bind the result of f() to anything, so its lifetime is not extended. Binding to a subobject of it would count:
[C++11: 12.2/5]: The second context is when a reference is bound to a temporary. The temporary to which the reference is bound or the temporary that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference except: [..]
…but you're not doing that: you're binding to the result of calling a member function (e.g. operator[]) on the object, and that result is not a data member of the vector!
(Notably, if you had an std::array rather than an std::vector, then the code† would be absolutely fine as array data is stored locally, so elements are subobjects.)
So, you have a dangling reference to a logical element of the original result of f() which has long gone out of scope.
† Sorry for the horrid initializers but, well, blame C++. | {
"perplexity_score": 674,
"pile_set_name": "StackExchange"
} |
Artist Daniel LuVisi posted tryout concept art for the movie adaptation of the zombie saga World War Z. It's a veritable Where's Waldo of splattered zombie carnage, mutilated New Yorkers, and other hidden gems.
This gorgeously gory work, created by the incredibly gifted concept artist and production designer LuVisi, was put together as a try-out to secure his role in Marc Forster's World War Z. Titled "The Battle Of Yonkers," the piece showcases how seriously LuVisi is taking his work. There are millions of details, from the camera crew to the red tape wrapped around one soldier's piece (click to enlarge in the gallery).
According to the artist:
This is the image I did, to get on the film World War Z with Marc Forster, director of Quantum of Solace. Can't say whats going on or what the outcome is right now, but it's not in the negative zone :) This was one of the most difficult images I've ever done. Incredibly challenging to the point where I wanted to quit. But through thick and thin I forced myself to complete it.
Because we're now officially in love with Dan LuVisis art, we've also rounded up a few of his other stellar pieces. And to those of you worried that World War Z is too intricate to be truncated into a mere feature, if they make the movie as compelling as this piece of art (and should LuVisi truly be connected to the film) then they're winning me over.
UPDATE: We just got the chance to hear from Dan LuVisi himself and he let us in on a few arty details going on inside this masterpiece; it sounds like our new favorite artist is going to be busy, busy. While he couldn't officially confirm he's working on the movie, all signs point to yes. And thank the gods, because it looks and sounds like he's got the goods to help make this picture a success.
What is your medium?
I work in Photoshop CS4, with a WACOM Cintiq tablet. And my right hand of doom.
How long have you been a concept artist?
Since I was 3 :) Professionally, since I got out of high school at 18. Got hired by a German company, Acony Games..then ever since then, just been busting my ass off.
What movies have you worked on?
Not as many as I want to.. Two foreign films, over in Japan. FOX's They Came From Upstairs, and a new one for Universal which I can't disclose yet. I feel very blessed for what I've done so far, but I don't feel like I'm there yet. When I'm working with geniuses such as Richard Taylor, WETA, ILM, Ryan Church, Dylan Cole and the other 50 of my inspirations...then I'll know I've done something right.
What can you tell us about this piece "The Battle Of Yonkers"?
What can't I tell of it? Let's start with the book, my girlfriend's sister was reading World War Z one day, and I asked what it was. She told me, sounded cool, but I never put too much interest into it. Few months passed, read it...and just instantly fell in love with it. Max Brook's take on Yonkers literally blew my mind. I'm a huge action fan and military nut (don't support the wars, but love technology and design) and I just creamed when I read those pages. Instantly, I saw the whole battle in my head. Then my manager told me he had a connection with Forster and we could potentially get some work in front of him.
Right there my eyes shot open and I knew I had to do it. So I sketched out my idea and started working. Two hours into it, I knew I was in deep waters and felt as if I couldnt finish it. Way too many characters (Sorry for the ones who expected me to draw every single marine!) and just so much chaos, it just intimidated me. Thankfully for two awesome friends, a very supporting girlfriend, a German manager and my cat's undying company I just worked my heart off. And hopefully it shows. As for whats going on with the movie and myself? I've heard back recently, but I can't disclose any information at this time as I feel it is not my place. Yet. :)
What are the little bits and pieces that people might miss (there's so much amazing work going on inside the piece)?
The camera man is my friend Reid Southen, a great young artist you can check out at www.rahll.deviantart.com. Much potential.
I wrote WAINO, the last name of the character who explains the battle on the main MARINE's helmet. (I know, he's in a humvee half the time)
If you look closely, you'll see a little homage to Shaun of the Dead's zombies.
Quantum of Solace is advertised on the billboard in the right corner.
A lot of the marines are left handed. I don't know why I do this, I'm a right handed artist..but it's some weird habit.
That's about it, there's other things... but I'd rather you guys look for them!
How long did it take you to make it?
A week, straight. Literally, every day from about 9am till 3 am. Just working and eating. Huge picture-size wise. About 6500x4500 pixels.
Are you currently working for the movie World War Z at all and did that art help you get in the door?
We'll see. :)
How has it been going so far? Where are they in production?
See Above.
[Deviant Art] | {
"perplexity_score": 425.8,
"pile_set_name": "OpenWebText2"
} |
Microembolism of single cortical arterioles can induce spreading depression and ischemic injury; a potential trigger for migraine and related MRI lesions.
Increasing epidemiological evidence suggests an association between migraine with aura (MA) and cardiovascular events. There is experimental as well as clinical evidence implying cerebral microembolism as a potential trigger for MA attacks. Microembolism may also account for some of the ischemic MRI lesions more commonly observed in MA than in the general population. The limited size of clinically silent MRI lesions suggests isolated occlusion of a small vessel. However, it is not known whether selective thrombosis of a small arteriole (e.g. a single mouse penetrating arteriole, PA) can induce cortical spreading depression (CSD), the putative cause of migraine aura, and, hence, trigger an MA attack. For this, we mimicked thrombosis of a small vessel caused by microembolism by selectively occluding a PA just before it dives into the cortex (radius: 10-25 µm) in the mouse. Clotting was induced with FeCl3 applied focally over the PA by a glass micropipette for 3 min. DC potential changes were recorded and the alterations in cortical blood flow were monitored by laser speckle contrast imaging. Mice were kept alive for 1-4 weeks and brain sections were stained with H&E or luxol fast blue to evaluate changes induced by PA occlusion. We found that single PA occlusion consistently triggered a CSD originating from the tissue around the PA soon after occlusion and induced delayed, small ischemic lesions within the territory of the affected vessel a few weeks later. These findings suggest that cerebral microembolism can lead to MA attacks and may account for some of the silent brain lesions. | {
"perplexity_score": 435.7,
"pile_set_name": "PubMed Abstracts"
} |
1. Field of the Invention
The present invention relates to an integrated circuit having protection from electrostatic discharge (ESD).
2. Description of the Prior Art
The protection of integrated circuits from electrostatic discharge has been a significant design issue, especially as transistor electrode dimensions shrink below the 1.5 micron level. An excessively high ESD voltage conducted from a package terminal to the integrated circuit bond pad can easily damage input or output circuitry, unless protection techniques are adopted. It appears that the use of the lightly-doped drain (LDD) structure and silicided source/drain regions has increased ESD susceptibility, especially in output buffers that utilize n-channel field effect transistors. One recent study by C. Duvvury and C. Diaz, "Dynamic Gate Coupling of NMOS for Efficient Output ESD Protection," Proceedings of the IRPS (1992), indicates that improved ESD performance can be obtained using a field oxide capacitor to couple the gate of the output transistor to the bond pad; see FIG. 6 therein. In that technique, the output transistor is made to carry the ESD current. However, the field oxide capacitor undesirably increases the capacitive load on the bond pad, requiring a larger output transistor.
A somewhat similar prior-art technique is shown in FIG. 1, wherein an output buffer 10 is connected to the bond pad 11. A protective n-channel transistor 13 is connected to the bond pad for conducting ESD current (I) to the power supply conductor (V.sub.SS). The ESD voltage is conducted to the gate of transistor 13 by capacitor 12, typically about 10 picofarads in one design. This conduction tends to allow transistor 13 to conduct by means of bipolar break-down action during an ESD event, allowing the current I to flow. The resistor 14, typically about 2 kilohms, causes the positive charge on the gate of transistor 13 to be conducted to V.sub.SS, thereby turning transistor 13 off after the ESD event has dissipated. In this manner, transistor 13 does not conduct during normal operation of the output buffer. However, the circuitry of FIG. 1 requires that the protective transistor be sufficiently large so as to be able to carry the relatively large ESD current. This requirement increases the area required to implement the output buffer. In addition, the transistor 13 presents an additional capacitive load to the buffer 10, which again undesirably requires that the buffer have additional drive capability, and hence increased size.
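As a rough, order-of-magnitude illustration of that turn-off behavior, the RC time constant formed by coupling capacitor 12 and resistor 14 can be computed from the "typical" values quoted above; the nanosecond-scale result is only a sketch, not a statement about any particular device:

# Order-of-magnitude sketch: gate discharge time constant for the
# prior-art circuit of FIG. 1, using the typical values quoted above.
C = 10e-12   # coupling capacitor 12: ~10 picofarads
R = 2e3      # resistor 14: ~2 kilohms

tau = R * C  # time constant for the gate charge to bleed to V_SS
print(f"tau = {tau * 1e9:.0f} ns")   # -> tau = 20 ns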
In some cases, protection against positive ESD voltages is improved by the presence of a p-channel output transistor. In that case, the p-n junction of the drain electrode, which is connected to the bond pad, provides for clamping positive ESD voltages to a power supply conductor. However, some designs use only n-channel output transistors. For example, TTL output buffers typically use n-channel transistors for both the pull-up and pull-down devices. More recently, the Small Computer System Interface (SCSI) chips have output buffers that typically use only n-channel transistors. It is therefore desirable to have an improved ESD protection technique that is effective with output buffers, and which mitigates certain problems associated with the prior-art techniques. | {
"perplexity_score": 539.1,
"pile_set_name": "USPTO Backgrounds"
} |
Q:
PHP preg_quote() and less-than sign
echo preg_quote("aaa<bbb");
should write:
aaa\<bbb
but I get:
aaa\
This is the only sign that makes problems.
A:
The function is working correctly; the browser is hiding everything from the < onward because it parses <bbb as the start of an HTML tag. Use htmlspecialchars to escape the <:
echo htmlspecialchars(preg_quote("aaa<bbb"));
If you also want whitespace preserved, wrap the escaped output in a <pre> tag:
echo '<pre>'.htmlspecialchars(preg_quote("aaa<bbb")).'</pre>'; | {
"perplexity_score": 2805.4,
"pile_set_name": "StackExchange"
} |
So it isn’t often we think ourselves as that awkward L-shaped brick from classic computer game Tetris, but fundamentally that’s how aircraft seat designers see us. Unfortunately-shaped wedges that have to fit neatly into a rectangular shape. In a constant fight to give airline passengers more space, more comfort and a better quality experience, the war of the seat configuration continues. British Airways’ latest patent application shows that perhaps the ideal future of front of the plane comfort isn’t as clearly cut as we once thought.
The Past
Originally, in 1999, British Airways brought the flat bed concept to the skies with its Club World seat. It was seen as a quantum leap in Business Class comfort; until then, that kind of space was considered only for the super wealthy who could afford First Class opulence. Since then many carriers have offered similar comfort, but the forward and backward concept took into consideration the ergonomics of the body, offering more space to the wider upper body. This was done by creating interconnecting forward-backward seats that operated as a singular unit, reducing seat costs and increasing space where it was needed - around the shoulders.
The original Club World seat was then fairly quickly redesigned into what we see on BA's fleet today. The modern seats offer more privacy, more space, and more technological advancement. But the seat concept is sound; even the older seats can still be found on BA's subsidiary OpenSkies' 757 fleet, now titled 'Biz Bed'.
The forward backward concept was new, and whilst open to initial scepticism, proved a success, and was quickly admired by business travellers, who enjoyed the extra comfort, for little extra price, due to the LOPA (the seat’s real estate on the plane) being hardly compromised compared to the big bucket recliners that the rest of the industry enjoyed.
Virgin Atlantic, British Airways’ younger upstart though, turned the market on its head, by offering significantly more space, a dedicated sleeping surface, and all-aisle access. Virgin, started by Richard Branson competed against BA (Who still offered first class) by marketing their Upper Class ‘Suite’ at the same price point of the Club World product. With extra elements, from pyjamas and an onboard bar, the war was well and truly on.
Until recently, there was little improvement on either of these concepts, and the herringbone format was taken on by more and more carriers, from Delta, Air Canada, Cathay Pacific to Air New Zealand. The all aisle access concept proved to be increasingly appealing to solo passengers, whose main gripe was having to clamber over their fellow traveller to use the restroom. In comparison, British Airways passengers liked the seating configuration, as it was more socially interactive for those travelling as a couple.
The Present
Zodiac Aerospace turned the market on its head with the launch of its Cirrus and Aries seats, which basically turned the seats around, facing the passengers away from the aisle to increase the sense of privacy and sense of space. These became increasingly popular, with Cathay Pacific and American Airlines becoming two of the major carriers to adopt the new seating for their business class cabins.
Even Virgin Atlantic reworked their seats to be able to stagger their herringbone and fit more passengers in without taking away space from the paying guests. As for the forward backward configuration, American Airlines managed to combine the best of both worlds on their 777-200s, offering a forward backward herringbone concept, which actually has been proven as the most effective way of creating space for the passenger.
On their A380s, thanks to the space and width of the aircraft, British Airways have been able to adapt their business class seats to provide a little more space for their passengers and improve the experience. But based on their latest patents, this wasn't going to be enough for the one airline everyone looks to for the benchmark of airline business class comfort.
The Future
As originally announced by our good friends at Australian Business Traveller, British Airways have filed a patent application for their brand new business class seats, designed by the award-winning, London-based Priestmangoode. What's unique and interesting is that British Airways have finally (potentially) surrendered their forward/rearward seating concept and adopted the herringbone seating configuration, perhaps conceding their competitor had a superior product.
The forward/rear seating concept isn't dead though, as the EDC created the forward/rear business class seat for Etihad's 787 and A380, which is about to be released. The difference with this product is that every seat offers aisle access and, more importantly, with the curving of the corridor it has even managed to offer increased space to those in their 787's first class cabin.
Unlike their OneWorld companions American, Cathay Pacific and Qatar Airways, British Airways has decided to have their passengers sleep with their feet to the aisle, which seems an interesting choice (in some cultures, showing the sole of the foot is seen as an insult). However, to us here at thedesignair.net, whilst we appreciate the privacy of the seats that face away from the aisle, we prefer the sense of space that comes from looking into the cabin rather than at a wall. Priestmangoode have attempted to address both options by offering two seating configurations for just the one seat, either facing toward or away from the aisle.
Carriers such as JAL, Asiana, and even Etihad and Airberlin all offer a staggered forward facing concept, which has also proved successful, all with aisle access, and increased privacy for certain seats. The forward facing seats are more conventional, yet offer a less economical seat configuration with increased square foot coverage per passenger.
So what is the future of the business class seat? It seems the writing may be on the wall for the forward/rear facing seats that changed the business class experience some 15 years ago, yet carriers like Etihad have proved the seating concept still has great merit and isn't going anywhere just yet. But the herringbone format that has proved more popular with both seat designers and consumers alike is still being refined, with no clear consensus on what is best for both the airline and the passenger. Will we end up seeing more interconnecting seats, or will we as consumers end up voting with our feet on whether aisle- or window-facing seats are the way forward? Whilst BA's seat is still only at the patent stage, one thing is for sure: space, comfort and the passenger are at the heart of seat designers' minds when it comes to the future of business class seats. And that's all good. | {
"perplexity_score": 450.8,
"pile_set_name": "OpenWebText2"
} |
Over the seven months and 18,000 miles I’ve spent with my Titanium Beige Quest LE, I’ve come to love all its pros (easy ingress, great cargo flexibility, and blind-spot monitor) and excuse its few cons (OK fuel economy, big doors low to the curb, and slow turn-in). I’ve driven it through 10 states, subjected it to Colorado snow and Arizona heat, and used it as a movie theater, storage closet, and airport shuttle. But for a recent road trip, I decided to pass on my Quest and spend some time in Julia LaPalme’s long-term Honda Odyssey Touring Elite, to see how the two compare. Head-to-head foes, the $44,030 Odyssey Touring Elite costs a smidge more than my $43,790 Quest LE, but offers similar features — blind-spot monitor, 3.5-liter V-6, rear-seat entertainment, leather, navigation, 18-inch wheels, power doors and hatch — with the main differences being the Odyssey uses a six-speed automatic (to the Quest’s CVT) and, well, it feels a lot different. My thoughts on how they compare:
Steering. Honda's steering is very light on-center compared to the Nissan's, and is comparatively quick just off-center. The Nissan's steering offers more (and better) feel but is a bit slower.
Driver Feel. The Honda feels more like a crossover while the Nissan feels like a big van, even though the two are relatively similar in size (the Nissan is 3.1 inches taller, 1.6 inches narrower, and 2.1 inches shorter from bumper to bumper). The Nissan's seating position is lower and more enveloping (feels like you're resting in a comfy La-Z-Boy) with the dash sitting higher up, while the Honda's seating position seems elevated, offering a lower cowl and the feel of a more focused driving experience.
Ride and Handling. The Honda’s ride is firmer, so you feel more of the road, but it also suffers from slightly worse ride quality. On the other hand, the Honda feels like it stays flatter through turns than the Nissan, which doesn’t handle as sharply.
Acceleration. The Honda is quite a bit quicker on the test sheet (0-60 mph in 7.4 vs. Nissan’s 8.0) but doesn’t feel noticeably quicker on the street. Power output is close (244 hp for the Honda, 260 hp for Nissan) and feels like it on the road. The Honda’s engine note is sportier, though.
Transmission. The Honda’s 6A is plenty good but I actually prefer the Nissan’s CVT — it’s smoother (no shifts) and seems appropriate in a minivan. The Nissan’s overdrive-off button is essentially a sport button — keeps the revs in the powerband and provides useful engine braking. The Honda offers a D4 button that limits top gear to fourth — useful but not as versatile as the Nissan’s. Both feel lazy down low in the rpm band when accelerating normally from a stop.
Convenience. The Nissan wins hands down in terms of entry convenience — its Intelligent Key system allows the front doors to be unlocked by just pressing a button on a door handle; the Honda requires taking the key fob out of your pocket and pressing a button. Similarly, to have the Nissan's power sliding doors work their automatic magic requires the press of a door-handle button (or a key-fob button), while the Honda requires gripping and pulling the door handle (or pressing a key-fob button) — neither of which is easy if your hands are full of groceries or kids. Back-up cameras are comparable, although the Honda has parking sensors to give it a slight edge. The Nissan has two power sunroofs to the Honda's one.
Infotainment. Both the Honda’s and Nissan’s navigation systems seem similar. Each provides real-time traffic updates, although the Honda’s also offers the Zagat restaurant guide. The Honda has an ultra-wide 16-in split-screen rear-seat video display; the Nissan has a taller 11-in display — edge to Honda.
Seating. I prefer Nissan’s seating system, especially the fold-flat second row, which lies down with a simple lever pull. Honda’s second row has to be removed (seats aren’t exactly light) and stored, which can be a pain on the back (literally). Nissan’s third row folds flat with a simple tug of a leash; Honda’s folds flat with a pull of leash, too, but requires more effort, as the seats are articulating more. Room is essentially a push — Honda offers more legroom for second and third rows but Nissan provides more headroom for the second row and more hip and shoulder room for the third row.
Storage. Nissan’s covered storage bin aft of the third row is great and still usable even when the third row is folded flat. When Honda’s third row is flat, storage bin (uncovered) is not even available. Honda does offer more storage room when the second row is removed and the third is flat (148.5 cubic feet), but the Nissan’s 108.4 cubic feet seems more than enough to me.
Styling. Honda’s styling is much sportier and thus looks hunkered down compared to the Nissan. Honda’s “lightning bolt” profile cue gives it a fast-forward appearance. Nissan looks big and stately in comparison.
My preference? As much as I like the Honda's extended range, flashier styling, and sportier feel, what makes a minivan really attractive to me are its conveniences and ease of use; thus, I'll take the Nissan. The Quest offers an easier, better entry system, handy fold-flat second and third rows, and ample storage space that includes a covered bin that's always available. For my trip, it's Quest over Odyssey. | {
"perplexity_score": 508.7,
"pile_set_name": "Pile-CC"
} |
JWorks
Are you a fan of technical and cultural trends such as Microservices and DevOps? Do you want to help business in their transformation to cloud-native market disruptors? Are you eager to use Angular in your next project? To turn your hobby into your profession and implement end-to-end Internet of Things solutions for our customers, with Raspberry Pis or Arduinos? To participate in workshops, organised by your colleagues, or attend well-known international conferences to further enhance your level of competence as a developer? Then JWorks is the ideal place for you! You will join a creative, professional and dynamic team! Feel free to discover our Accelerator program.
Note: Because our training courses are a mix of Dutch and English, you will find the program description entirely in English.
Introduction JWorks Technology Radar & preferred stack
You'll get acquainted with the technologies we love and put into practice in our greenfield projects.
Development
Spring recap & advanced
Over the years, the Spring framework by Pivotal has become the de facto standard in developing Java applications. We'll deep-dive into the different projects of this large ecosystem.
Backend
Advanced Security Principles
We believe that every Ordina consultant should have security in his or her DNA. During this course, you'll learn to identify potential security risks as early as possible in a development process.
Development
TypeScript with Angular
TypeScript ensures maintainable software... if it is well used. You'll learn TypeScript on the basis of Angular, our standard choice when building slick frontend applications.
Frontend
Frontend build tools, testing, package managers etc
The number of possible tools for testing and packaging a frontend application have grown significantly over the last few years. When finished with these courses, you won't be overwhelmed anymore and you'll know which tool to use for your newest project.
Frontend
Ionic 2
Our preferred technology for building mobile applications is the Ionic framework. Ionic and Angular are two peas in a pod. We'll teach you how to build and deploy a mobile application.
Frontend
Pragmatic Project Estimations
Estimating projects is very difficult and not an exact science. However, there are some tips and tricks which can help you estimate an upcoming project. You'll be an expert in breaking up a large project into smaller pieces, identifying risks and estimating the unexpected.
Softskill
Test development best practices
There is no excuse for not testing your software. If you want to avoid regression in your application, you have to test your application thoroughly. This course is a deep-dive into developing unit tests, integration tests and end-2-end testing.
Development
Team leadership
Which styles of leadership are acceptable in which situation? What sort of leader are you? How to coach and when? How to handle difficult situations? In this three-day course you'll discover answers to these questions.
Softskill
Storytation
It's inevitable: everyone has to give a presentation someday. But what makes a presentation stick with your audience? You'll discover what characteristics an intriguing story must have, that you need to know your audience and how you can tell the story with great confidence.
Softskill
Software architecture
Software architecture is getting a lot of attention. It looks beyond the details of today's technologies to the underlying trends, techniques, and principles that underpin lasting success in our fast-moving field. It is critical to today's business success; yet it requires technical, business and organizational talents and skills that warrant their own path of career development, education, and research.
Architecture
Domain-Driven Design, CQRS and Event Sourcing
We believe in microservices. Microservices and DDD principles are thick as thieves. You'll learn about bounded contexts, event sourcing and the frameworks you can use to implement these principles.
Architecture
Microservice architecture design
Distributed systems and microservices are a hot topic these days. It's all about developing smaller applications that work together. It's all about the operational measures that need to be taken into account when putting a distributed system into production. It's all about faster business value delivery. This in-depth course is an A-Z journey through the microservices landscape.
Architecture
Reactive Programming
Asynchronous programming is coming our way: RxJS, RxJava, Spring Reactor, Spring 5, and more. Working with observable streams requires a shift in mindset, but as you start with the basics and work your way up, you'll get a grip on this way of programming.
DevOps & CI/CD
We believe that a developer shouldn't live on an island, only doing development stuff. A developer with operational skills is of more value to an organization, because he or she can bridge the gap between project teams and infrastructure teams. Knowing how to ship your application is an important aspect. This course will take you from feature branching and Git to CI tools and orchestration tools.
Development & Operations
PaaS + PCF/Openshift
The "cloud"... Another buzzword. But investigating Platform as a Service can be of great value to your organization. Building and maintaining server racks in your basement is not always necessary anymore. You have different levels of abstracting the infrastructure side of things. You'll learn about the basic principles of cloud-native applications and PaaS platforms and we'll dig deeper into Pivotal Cloud Foundry and OpenShift.
Development
JWorks projects - lessons learned & best practices
Alright, cool. You've been granted a project by a customer and the development team can start in a couple of weeks! But there are a lot of things to think about when scoring and managing a project: presales activities, scope management, team coaching, architecture, backlog prioritization, etc. This course is about all of those topics and is essential in becoming a good team leader or project manager. | {
"perplexity_score": 523.1,
"pile_set_name": "Pile-CC"
} |
SS Alpena
The SS Alpena was a sidewheel steamer built by Thomas Arnold of Gallagher & Company at Marine City, Michigan in 1866. She was operated by the Goodrich Line after being purchased from Gardner, Ward & Gallagher in April 1868. The Alpena sank in Lake Michigan in the "Big Blow" storm on October 15, 1880, with the loss of all on board.
Construction
Built in 1866 by Thomas Arnold of Gallagher & Company of Marine City, Michigan, the Alpena was 197 feet in length, 27 feet in breadth, with a depth of 12 feet. It was rated at 654 tons displacement. The vessel was driven by a steam engine, and photographs of the vessel show its walking beam suspended above the paddlewheels.
Sinking
At least 80 people died when the ship, also carrying a large cargo of apples, capsized in the middle of the lake. The ship was on a trip from Grand Haven, Michigan, to Chicago, Illinois, and was spotted at 8:00 am on October 16 in heavy seas. Some time later, probably due to a shift in the cargo on deck caused by the waves, it capsized and drifted northwest. On the 17th, debris including a piano came ashore in Holland, Michigan, while apples and wood debris were found at Saugatuck. A section of beach near Holland where debris was found is still called Alpena Beach.
Similarly named ships
Another ship named Alpena was a freighter built in 1874 and burned to the waterline in 1891. Another Alpena was a tugboat which sank in 1943 at Huron, Ohio.
SS City of Alpena was a paddlewheel steamboat operating between Detroit and Mackinac Island by the Detroit & Cleveland Line from 1893 to 1921. She was long, carried 400 passengers, and was powered by steam engines.
There is also a Great Lakes ship named Alpena, formerly the Leon Fraser, owned by Inland Lakes Management, an affiliate of Lafarge. It is used as a bulk freighter to haul cement. Built in 1942 and equipped with a steam turbine engine, it was originally 639 feet long, 67 feet in breadth with a depth of 35 feet. It has a 15,550 ton capacity. It was renamed, shortened and converted to a bulk cement carrier in 1991. The Alpena is a moderate sized ship in the Great Lakes fleet; the largest Lakers are almost twice its length and breadth and carry four times its cargo. She is able to transit the canals of the St. Lawrence Seaway due to her small size.
See also
List of maritime disasters in the 19th century
List of storms on the Great Lakes
Sea Wing disaster
References
External links
1880 Alpena sinking
Michigan Shipwrecks.org - Alpena
Protection for the purse is paramount to halt elder abuse
Financial abuse of older people is on the rise; a trustworthy, competent attorney is an essential safeguard.
With at least 5 per cent of Australia’s elderly reporting some form of physical, social, financial, emotional or sexual mistreatment, elder abuse is not the rare and incomprehensible act you would naturally presume.
In fact it can be systematic, it can be hurtful and for some, it is very, very real.
With family members frequently the perpetrators of elder abuse, the topic is often shrouded in secrecy and shame. By its definition, elder abuse is the mistreatment of an older person committed by someone with whom the person has a relationship of trust. This simple fact makes the abuse difficult for a third party to identify and even harder for the victim to admit, accept and report.
As the population ages, the number of vulnerable people increases, meaning that elder abuse is becoming one of the most serious social issues affecting older Australians today. In the 10 years to June 2011, the number of people aged 85 and over in Australia increased by more than 40 per cent.
From community education and awareness through to advice on spotting the early warning signs, it’s time a light was shone on this dark and difficult topic.
Financial exploitation: the most common elder abuse
A recent study found that financial abuse is the most common form reported. This is a fact to which State Trustees can attest, having handled more than 120 cases involving financial elder abuse in the past month alone.
These cases generally include incidents in which a person in a position of trust, such as a son or daughter, takes or misuses their parent’s money, property or assets for their own personal gain.
From grandchildren manipulating their grandparents to prevent the sale of their property, to children hijacking their elderly parents' pensions and life savings to pay for their own living expenses, the lengths abusers go to in exploiting the vulnerability of older people are extraordinary.
A combination of shame, fear of not being believed, and some form of reliance on the perpetrator means that this type of abuse is chronically under-reported.
There are a number of circumstances that may increase the likelihood an individual will fall victim to financial elder abuse. These include:
Diminished capacity due to dementia and related illnesses
Isolation and dependence on others
Reliance on others for transactions and services relating to the management of finances, particularly if they are from a non-English speaking background
Being a woman over the age of 80, although in some cases victims are as young as 65
There are also a number of early warning signs that someone may be suffering from financial elder abuse. These include:
Suspicious changes in wills or powers of attorney
Lack of money for day-to-day items
Financial activity the person could not have carried out alone; for example, withdrawals from your bedridden mother’s bank account
Large and unexplained sums of money missing from a bank account
Bills not being paid
Unusual purchases
What can be done to prevent financial elder abuse?
State Trustees strongly believes prevention and early intervention are the keys to making a difference for Australia’s ageing population. We urge anyone who thinks they could be at risk of elder abuse now or in the future to take a hands-on approach.
Recommended steps for prevention:
Appoint an independent attorney for financial matters or, if appointing family members, appoint more than one attorney for financial matters
Get independent advice
Keep your will up to date
Make loans legally binding
Formally document living arrangements.
An enduring power of attorney (financial) lets someone else help you with your financial and legal decisions. Your financial attorney should be someone who can be impartial and trustworthy, is able to manage any possible conflict and has the time to carry out your wishes accordingly.
Although a friend or family member might like to help out, it may be simpler and less stressful to appoint a professional organisation. The most important thing is your confidence that the person or organisation you choose will act in accordance with your wishes and best interests.
Case study: how a financial attorney can help
Joyce (not her real name) was aged in her 70s and had built a large investment portfolio worth more than $1 million with her late husband. She had reached the point where she had trouble managing her financial affairs and her son suggested she nominate him as her financial attorney.
Believing this was a good idea, Joyce was happy to appoint her son. Her son convinced Joyce that she would be unable to have a pension if the house remained in her name, and persuaded her to sign the property over to him.
Unfortunately, soon after she did this, her son’s marriage broke down. With the house now in her son’s name, his wife was entitled to half of its value. As a result, the property was sold with the proceeds being split evenly between both parties. Joyce was then forced to move into care, as she no longer had a home to live in.
Joyce decided to appoint State Trustees as her new financial attorney, revoking the appointment of her son. State Trustees was then able to step in to protect Joyce’s investments from further depletion. Joyce’s remaining assets were secured and negotiations were commenced with Centrelink for a pension.
Where to get help
To help prevent financial elder abuse, all those at risk should ensure they have an updated will and enduring power of attorney in place, using a professional organisation.
If you are the victim of financial elder abuse or know of someone who is at risk, contact the relevant organisation in your state.
2 comments
This is a very real and important topic. But, of course, elder abuse extends well beyond financial abuse. One of the key risks in the move to complete package portability will be the loss of the inside advocate in good case management. Previously, good case managers identified the early signs of elder abuse and could intervene. With package portability, how easy is it for offenders to simply apply pressure to shift the package to a “less troublesome agency”?
Consumer empowerment is a big step forward, but let's not lose sight of one of the greatest safeguards in our Homecare system so far – an *independent* Case Manager.
Elder Abuse is the unseen issue of our time. The fundamental element of elder abuse is the shift in the power balance between an ageing person and their children (or other person in a position of trust). It is the awareness of elder abuse that is rising, not necessarily the incidence, because the extent and scope of elder abuse is unknown and little national research has been conducted to pin down this information. We certainly need the research so that we can better target our limited resources for avoidance and recovery as appropriate.
Even worse, the East’s silver back looked a lot like the West’s white front, so it often looked like a player was being guarded by a member of his own team. There were also some surreal moments when two players came together and a composite third player seemed to form in between them. Was this the proverbial sixth man?
Hardwood Paroxysm was on the scene in New Orleans this weekend, and let it be said that no one else in the mainstream media has published nearly as many candid photographs of Taylor Hicks.
5 Responses to “NBA All-Star Uniforms : Not Since The Heyday Of The Red Rockers, Has N’awlins Seen Such Ugly Threads”
I thought Dwyer was out at Yahoo? Anyway, I’m glad he’s not. He’s really good. And while it hardly needs to be said, the With Leather dude is the worst. He makes the Deadspin weekend subs look like Red Smith riding on Grantland Rice’s back, Master Blaster style.
this (2008) was the best ASG I can remember. The first one I remember was probably my favorite: the 1988 turn. I can’t remember the 1987 ASG in its initial presentation, and that’s the one that is probably the finest ever.
You must’ve missed the boy genius from 2 weeks ago on Deadspin Saturdays talking about popping a chubby (which I hope relates to boners) and going gay and nutting over some dude’s back and all sorts of homoerotic bro-down Highlights-level ass slapping – otherwise, your mythical Thunderdome would be rockin’, summer school style.
In my (non-existent) contract with CSTB, it's actually mandated that if I use the word "nut" as a verb, I'm to be fired and then fined $50. I know it doesn't seem that bad to most people, but that's about a month's salary for me.
Q:
Get user's home folder under Python process started by supervisord
I would like to store login secrets in a file from the current user's account in a Django configuration. I'm using the recommended portable way to get the home folder, as in:
os.path.expanduser("~")
This worked in all environments, both locally and when started with gunicorn -D config.wsgi on a server.
My problem, however, is that I introduced supervisord to control the gunicorn process, and now this function doesn't work; it simply returns /.
This is the relevant section of supervisord.conf
[program:kek_django]
command=.../venv/bin/gunicorn config.wsgi
directory=.../django
user=testuser
Under this environment, os.path.expanduser("~") becomes /.
Can you tell me how to fix this problem, either by fixing the environment or by fixing the function used to detect the home directory?
note: OS is FreeBSD 10, if that is relevant
update: os.environ reports the following under the running process:
'SUPERVISOR_SERVER_URL': 'unix:///var/run/supervisor/supervisor.sock',
'RC_PID': '84177',
'SERVER_SOFTWARE': 'gunicorn/19.3.0',
'SUPERVISOR_ENABLED': '1',
'SUPERVISOR_PROCESS_NAME': 'test_django',
'PWD': '/',
'DJANGO_SETTINGS_MODULE': 'config.settings.production',
'SUPERVISOR_GROUP_NAME': 'test_django',
'PATH': '/sbin:/bin:/usr/sbin:/usr/bin',
'HOME': '/'
A:
As supervisord's docs for Subprocess Environment say:
No shell is executed by supervisord when it runs a subprocess, so environment variables such as USER, PATH, HOME, SHELL, LOGNAME, etc. are not changed from their defaults or otherwise reassigned. This is particularly important to note when you are running a program from a supervisord run as root with a user= stanza in the configuration. Unlike cron, supervisord does not attempt to divine and override “fundamental” environment variables like USER, PATH, HOME, and LOGNAME when it performs a setuid to the user defined within the user= program config option. If you need to set environment variables for a particular program that might otherwise be set by a shell invocation for a particular user, you must do it explicitly within the environment= program config option. An example of setting these environment variables is as below.
[program:apache2]
command=/home/chrism/bin/httpd -c "ErrorLog /dev/stdout" -DFOREGROUND
user=chrism
environment=HOME="/home/chrism",USER="chrism"
So, that's the actual fix. (If you construct the supervisord.conf file dynamically and need to know how to look those values up dynamically, I can explain that, but it's pretty easy, and I don't think you need it anyway.)
[program:kek_django]
command=.../venv/bin/gunicorn config.wsgi
directory=.../django
user=testuser
environment=HOME="/home/testuser"
If this doesn't make sense to you, consider:
If you're running supervisord as root, it doesn't have testuser's HOME or anything else. And all it does is setuid(testuser), which just changes its user ID; it doesn't give the shell, or any other part of the system, any opportunity to set up the variables for testuser. Most similar tools have workarounds to fake it, following in the well-worn footsteps of how cron works, but supervisord intentionally chose not to do that.
Alternatively, as the docs for expanduser say:
On Unix, an initial ~ is replaced by the environment variable HOME if it is set; otherwise the current user’s home directory is looked up in the password directory through the built-in module pwd. An initial ~user is looked up directly in the password directory.
And a quick look at the source shows that it does this in the most obvious way possible.
So there are three obvious workarounds from within your code (the third is sketched just after this list):
Use ~testuser instead of ~ (and you can even generate that programmatically from the username if you want).
Write your own expanduser function that just does the pwd.getpwuid(os.getuid()).pw_dir without checking for HOME.
Manually set HOME to pwd.getpwuid(os.getuid()).pw_dir at startup if it's /.
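A minimal sketch of the third workaround, assuming a Unix-like host (the pwd module is Unix-only, and the helper name ensure_home is illustrative, not part of supervisor or Django):

# Normalize HOME at process startup so os.path.expanduser("~") behaves,
# even when supervisord started the process without a login environment.
import os
import pwd

def ensure_home():
    home = os.environ.get("HOME")
    if not home or home == "/":
        # Fall back to the password database for the effective user.
        os.environ["HOME"] = pwd.getpwuid(os.getuid()).pw_dir
    return os.environ["HOME"]

ensure_home()
print(os.path.expanduser("~"))  # now the real home directory, e.g. /home/testuser

Calling ensure_home() once, early in config/wsgi.py, would keep the rest of the codebase unchanged.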
I have a Springfield scout scope mount that I no longer need. It is in great shape and comes with 2 hand guards. One came with it (Springfield synthetic) and is black and the other is a brown GI guard I cut to fit.
Sold shipped. I can take postal money order or PayPal (add 3%).
I can send pics if someone wants to see.
Thanks Chuck.
Q:
How to get a related object sorted with Entity Framework for ASP.NET MVC
Having two classes like Blog and Post, in Entity Framework (and LINQ-to-Entities), how can you get the blogs with the posts sorted by date? I was getting the blogs with the posts this way:
from blog in db.BlogSet.Include("Posts") select blog
and now I'm forced to do this:
public class BlogAndPosts {
public Blog Blog { get; set; }
public IEnumerable<Post> Posts { get; set; }
}
from blog in db.BlogSet
select new BlogAndPosts () {
Blog = blog,
Posts = blog.Posts.OrderByDescending(p => p.PublicationTime)
}
which is very convoluted and ugly. The reason why I'm creating a BlogAndPosts class is that now, since I have to pass two variables, Blog and Posts, to MVC, I need a view model.
I'm even tempted to try this hack:
from blog in db.BlogSet
select new Blog(blog) {
Posts = blog.Posts.OrderByDescending(p => p.PublicationTime)
}
but what's the correct way to do it? Is Entity Framework not the way to go with MVC?
A:
I generally create a presentation model type wholly unaware of the Entity Framework and project into that. So I would do something like:
public class PostPresentation {
public Guid Id { get; set; }
public string Title { get; set; }
public DateTime PostTime { get; set; }
public string Body { get; set; }
}
public class BlogHomePresentation {
public string BlogName { get; set; }
public IEnumerable<PostPresentation> RecentPosts { get; set; }
}
from blog in db.BlogSet
select new BlogHomePresentation
{
BlogName = blog.Name,
RecentPosts = (from p in blog.Posts
orderby p.PublicationTime descending
select new PostPresentation
{
Id = p.Id,
Title = p.Title,
PostTime = p.PublicationTime,
Body = p.Body
}).Take(10)
}
Does this seem like a lot of work? Consider the advantages:
Your presentation is entirely ignorant of your persistence. Not "ignorant as in having to make all of the properties public virtual," but entirely ignorant.
It is now possible to design the presentation before designing the database schema. You can get your client's approval without doing so much work up front.
The presentation model can be designed to suit the needs of the page. You don't need to worry about eager loading or lazy loading; you just write the model to fit the page. If you need to change either the page or the entity model, you can do either one without affecting the other.
Model binding is easier with simple types. You will not need a custom model binder with this design.
Antibody-based proteomics for esophageal cancer: Identification of proteins in the nuclear factor-kappaB pathway and mitotic checkpoint.
To identify the molecular background of esophageal cancer, we conducted a proteomics study using an antibody microarray consisting of 725 antibodies and surgical specimens from three cases. The microarray analysis identified 24 proteins with aberrant expression in esophageal cancer compared with the corresponding normal mucosa. The overexpression of 14 of the 24 proteins was validated by western blotting analysis of the same samples. These 14 proteins were examined by immunohistochemistry, in which nine proteins showed consistent results with those obtained by western blotting. Among the nine proteins, seven were localized in tumor cells, and two in infiltrating cells. The former included proteins associated with mitotic checkpoint control and the nuclear factor (NF)-kappaB pathway. Although mitotic checkpoint gene products (budding uninhibited by benzimidazoles 1 homolog beta (BubR1) and mitotic arrest deficient-like 1 (Mad2)) have previously been reported to be involved in esophageal cancer, the association of NF-kappaB-activating kinase, caspase 10, and activator protein-1 with esophageal cancer has not been previously reported. These proteins play a key role in the NF-kappaB pathway, and NF-kappaB is a signal transduction factor that has emerged as an important modulator of altered gene programs and malignant phenotype in the development of cancer. The association of these proteins with esophageal cancer may indicate that mitotic checkpoint gene products and NF-kappaB play an important part in the carcinogenesis of esophageal cancer.
Enjoy Laura dore with her sexy American Big Ass, America Love it Babe!
If you're a big, big whale with a gigantic tail swimming through water, nothing gets in your way, not the water, not the other fish, not nothin'. You are so much bigger than the water molecules around you, you move through the sea the way humans move through the air on a calm day — you just go. Whales, I imagine, don't think much about water.
But now, imagine yourself smaller. Much, much, much smaller. Instead of a sperm whale, let's make you, say — a human sperm, a teeny little critter with hardly any mass, and a very skinny, beating tail. All of a sudden you are much closer to the size of the water molecules around you.
Now that you're little, getting through water is a huge headache. You are wedged in among equal sized H2O thingies that slow you way, way down. A human sperm in water is like a car in an enormous traffic jam — barely able to move.
So how do the littlest things in life — sperm, bacteria, pond scum — get where they need to go? How do they find food? Cows don't sit in meadows waiting for grass to grow next to their mouths ...
No, little things are cleverer than that. Much, much cleverer, as you will see in this video, elegantly illustrated by Brad Purnell, narrated by Addison Anderson and thought up by Aatish Bhatia. Here's how they do it.
I guess of all the moving strategies I've ever encountered, this one — used by a little Broadway billboard-ish critter that dazzles with trills of rainbow-colored light in the shallow ocean — is the craziest. It uses the same strategies Brad and Aatish describe ... but it seems to think it's a Rockette at Radio City Music Hall.
Abstract
In a market with informationally connected traders, the dynamics of volume, price informativeness, price volatility, and liquidity are severely affected by the information linkages every trader experiences with his peers. We show that in the presence of information linkages among traders, volume and price informativeness increase. Moreover, we find that information linkages improve or damage market depth, and lower or boost the traders' profits, according to whether these linkages convey positively or negatively correlated signals. Finally, our model predicts patterns of trade correlation consistent with those identified in the empirical literature: trades generated by “neighbor” traders are positively correlated and trades generated by “distant” traders are negatively correlated.
Worldwide
Russian Federation and Iran have rejected Turkey's call for a ceasefire in Syria's last rebel enclave despite President Erdogan's warnings that civilians would be massacred in any attack. At least o...
Trump has shown increasing concern about the stakes for Republicans and, by extension, himself in the elections that will determine political control of Congress for the next two years. "They're the o...
She added that the nerve agent attack was likely approved at a " senior level of the Russian state". Thanks in no small part to the pervasive surveillance cameras deployed by the Brits, they have a n...
It is the first time anybody has been convicted for same-sex relations in the state and the first time a caning has been carried out in public there, Satiful Bahri Mamat, a member of the state executi...
The confirmation hearings are set to start on Tuesday and will go through the week. With his decades of work as a Washington power player - as a Bush lawyer, White House Staff Secretary, and then Ap...
The court ordered lawmakers to redraw the state map by October 30. The panel determined that lawmakers illegally packed African-American voters into some districts to make surrounding districts more R...
According to the charging document, both men repeatedly attempted to tamper with witness testimony in Mueller's Russian Federation probe related to Manafort's political consulting work. Berman Jacks...
Senate Majority Leader Mitch McConnell, House Speaker Paul Ryan and Vice President Mike Pence are expected to speak. What the U.S. people are also realising is that Mr Trump is pushing friends of the ...
UNRWA has faced a cash crisis since the United States, long its biggest donor, slashed funding earlier this year, saying the agency needed to make unspecified reforms and calling on the Palestinians t...
The issue alone compels me to resign the whip. The meeting was called following Field's shock resignation from the party whip on Thursday. The MP said he intends to remain MP for Birkenhead as an in...
Shoigu had earlier announced the start of immediate conflict readiness checks in the central and eastern military areas before the planned exercises. All Russia's airborne units and two of its naval...
According to the head of the state, 96-percent of Google Search results for the term "Trump news" lead to left-leaning media outlets critical of his presidency. Constitution: "If government tried to d...
In what appeared to be an urgent bid for support from conservative Christians across the country, Mr Trump told the leaders that the elections were a referendum "on your religion" and "on free speec...
The lawyers argued that the sanctions violated the 1955 Treaty of Amity, Economic Relations, and Consular Rights between Iran and the U.S., which grants the ICJ jurisdiction over disputes. Iran's cu...
McCain was one of Trump's sharpest critics, and made clear in one of his final wishes as he struggled with brain cancer that he did not want the president to attend his funeral. The Washington Post ...
It specifically faults Aung San Suu Kyi, the Nobel laureate who is the country's effective civilian leader, for failing to use her position, or her "moral authority, to stem or prevent the unfolding...
Events
An explosion damaged a restaurant in Manbij, Syria, Wednesday, shown in a screengrab from the Kurdish Hawar News agency, or ANHA. Graham's dramatic statement associating Trump's words with the bloody attack that killed Americans came from a loyalist who has golfed frequently with the president, although he has also tried to talk the president out of a policy he has called unsafe to Americans.
Edition choice
Known for being an author of both lifestyle and children's books, she is also the eldest daughter of Arnold Schwarzenegger and ex-wife Maria Shriver . Pratt got engaged to Anna Faris in 2008, a year after meeting her on the set of Take Me Home Tonight . But the exes have managed to remain friends, even trick-or-treating with Jack and their new partners on Halloween last October.
The photo was posted on January 4, according to CNBC. The cute little battle between Kylie and the egg has gone viral on the internet. When did the egg break the record for most-liked image on Instagram? And with the entire profile being dedicated to one photo, you could say they put all their eggs in one basket.
While it seems very few people were in attendance, royal observers are reading quite a bit into the fact that the Duchess of Cambridge's sister-in-law, Meghan Markle, was not among the guests. While Kensington Palace declined to comment on the assistant's departure, a spokesperson reportedly described her as a "hugely talented person" who "played a pivotal role in the success of the royal wedding".
While other shows in Elizabeth Newman’s inaugural repertory season have emphasised nostalgic musical theatre and light comedy, Newman’s only entirely self-directed piece is Arthur Miller’s play of claustrophobic insularity.
In many ways, this staging of The Crucible is also a powerful statement of intent from Newman, not least in her use of a full ensemble of 17 performers throughout.
While this often makes for a busy stage, the intensity of focus upon the core story is maintained; that of God-fearing but critical 17th-century Massachusetts farmers John and Elizabeth Proctor’s efforts not to be swept away by a hysterical collective fever.
Harry Long and Claire Dargo invest these central roles with just the right balance of stoicism and growing disbelief in the face of institutional madness, with Fiona Wood’s conniving servant girl Abigail, Ali Watt’s troublingly fanatical clergyman Reverend Parris and Deirdre Davis’ all-powerful judge Danforth – an implacable evocation of the human vanity that can unbalance the scales of justice – lined up against them.
Adrian Rees’ set is very effective, consisting of an open stage and an old iron bridge that raises and lowers to become a wall, walkway or elevated pedestal, while there’s subtly subdued live musical accompaniment from Ben Occhipinti.
It’s Newman’s mustering of her varied resources to create a work of propulsive dramatic tension and pinpoint contemporary relevance that impresses most, however. Six decades on from McCarthy, stages sadly seem ripe once more for such a perfectly refreshed treatise on the unreliability of weaponised truth.
Deliveryman steals Laptop delivered to apt by another Amazon delivery man in San Jose. Security video gets him fired pic.twitter.com/ppDaw4QiA4 — Vic Lee (@vicleeabc7) July 18, 2016
SAN FRANCISCO (KGO) -- There's been a rash of packages that have been stolen after being delivered to doorsteps. One San Jose woman had enough and finally did something about it. There have been a lot of these so-called doorstep thefts. Usually, you file a police report and tell the online store about it and they send you a replacement. But in this case, that particular thief turned out to be a real delivery man and that angered a particular victim who said: "That's not right." Having a laptop stolen is not something you should expect from Amazon delivery service. The woman ABC7 spoke with did not want to be identified for security reasons, so for this story, we'll call her Jennifer. Security video taken last Wednesday showed an Amazon carrier delivering her laptop by placing the box on the apartment office doorstep. A photo taken from another camera showed a different delivery man making his rounds at the apartment complex. Then later, a video revealed him stealing the box with her laptop. Jennifer contacted Amazon and the police. Amazon sent her another laptop, but she decided she had had enough. "I actually had another Amazon order a few months ago that disappeared and this would be the second time," she said. This time she had the goods - the video. "Hopefully someone will recognize him and he'll get caught," she said. She sent the video and photographs to Amazon and got a reply from Amazon's delivery unit. "The manager actually recognized his employee from the video and he contacted me and let me know that he has fired the employee seen in the video," she said. There has been a spike in the number of packages stolen from doorsteps. Jennifer did her part in exposing a thief who's probably stolen before. She's sending a message to others. "If you see a wrong, do what you can to fix it," she said.
Re: The Video Post Discussion Thread
Originally Posted by Naomidee
It all started when I saw that I had a Spongebob shirt. And then I was like I CAN BE BOB
Awesome Spongebob t-shirt by the way. And the video really is awesome; you do a great Adam and Orgz. More impressions next time; do Charlie/Milly! Your Nana made me laugh out so loud. Episode 2 is out when?
Re: The Video Post Discussion Thread
I might... I just don't know what to say. And when I hear my own English, I'm like "oh gawd no". xD
Btw, you should do some vlogging!
That's ok! It can be short and you can maybe just say hi or play us one of your favorite songs or pour a beer while playing some dance techno music.
Oh my gosh! My vlogging would be SO boring!
Originally Posted by Ancy
Yup I agree with Adi...I really liked the video..you're awesome...besides, you have a cat that can fly....there's nothing epicer than that
yes yes Adi...do one
Flying cat, totally interesting and epic. And I'm glad you liked it.
She should totally do one! But you should too, so don't think you're off the hook or anything.
Originally Posted by Adorien
If only Teub could see your impression of him, he'd be very happy.
I know. I might send it to him. XD
Originally Posted by naruto-niichan
buhahaha laughed hard, that's my favourite shemale
please try to imitate Herm's accent next time
Yeah, I might do one in the future possibly. There were a lot of ones that I didn't get to.
Originally Posted by shinsengumi
excellent work naomi, i laughed enough for today but i really felt like garry's thumb was missing. cant wait to see ep 2
Yeah! Gary's vost was a little later on and I was doing a lot of the beginning ones.
Originally Posted by goldb
Awesome Spongebob t-shirt by the way. And the video really is awesome; you do a great Adam and Orgz. More impressions next time; do Charlie/Milly! Your Nana made me laugh out so loud. Episode 2 is out when?
EDIT: oh and the Harry Potter music in the background when you were in the kitchen:
Thanks.
I loved doing Orgz the most because his hair is similar to mine and it looks more credible.
I was going to try and sound like Nana, but her accent was too hard to do!
And I was wondering who might catch the Harry Potter music playing. That's what makes you so top notch Bob.
Re: The Video Post Discussion Thread
At first I didn't read the text about the video and just watched it.... was wondering "why is this girl trying on a british accent?.." I had missed the bit where you said my nick... then I put the t-shirt and accent together.
Re: The Video Post Discussion Thread
The best parody was of Nana awkwardly trying to get comfortable. That is probably the hallmark moment of this thread. When I saw the original of her doing that and then the look on her face that made it seem she might cry...I didn't know whether to reach through the screen and hug her or die of laughter.
30 Ill. App.2d 114 (1961)
173 N.E.2d 850
Joe Evans, Doing Business as Joe Evans and Sons, Plaintiff-Appellee,
v.
Frank S. Owens, Defendant-Appellant.
Gen. No. 10,331.
Illinois Appellate Court Third District.
April 17, 1961.
Rehearing denied May 2, 1961.
Costigan, Wollrab & Yoder, of Bloomington, for appellant.
Chester Thomson and John W. Biggers, of Bloomington, Substitute Attorneys for plaintiff-appellee.
(Abstract of Decision.)
Opinion by JUDGE REYNOLDS.
Reversed and remanded with directions.
Not to be published in full.
Q:
jquery iframe a links into new div / tab / window?
I have a tricky question.
All pages are within same server / portal.
I have a page that embeds iframe from the other page using the following code:
$('#mydiv').append('<p id="loading">Loading ...</p>');
$('#mydiv').append('<iframe id="myframe" name="myframe" class="myframe" onload="myframe()" style="display:none;"></iframe>');
$('#myframe').attr('src', 'https://mywebsite.com/page2');
function myframe() {
var $myframesearch = $('#myframe').contents();
$myframesearch.find("a").attr('target','_parent');
}
$('#myframe').load(function(){
$('#loading').remove();
$('#myframe').fadeIn();
});
All of the links within iframe have no href (but href="javascript:void(0)") and uses scripts within iframe to process the action dynamically.
Some links do open in a new window, some do not.
I would like to force all links to either open in new Tab, Window, or append to new Div, but none of the methods work, like base / parent, onclick / new window, _top, _parent, etc.
However, my idea was to hide and wait till the content of the iframe is loaded after a click, then append the loaded content to a new hidden div and fade it in. When doing so, the loaded iframe content resets back to its default instead of showing the new content.
Does anyone know how this can be solved?
Thank you all!
A:
So I checked, and it appears that other JavaScript overwrites the "a" tag action with some "data" field in the tag, but only for the "a" tags that contain "Open:" in their link.
I found a solution to the problem below by loading the link for this "a" tag from another page, bypassing the JavaScript overwriting:
$(function(){
// Show a loading message while the hidden iframe fetches page2.
$('#mydiv').append('<p id="loading">Loading ...</p>');
$('#mydiv').append('<iframe id="myframe" name="myframe" class="myframe" src="https://mywebsite.com/page2" onload="myframe()" style="display:none;"></iframe>');
$('#myframe').load(function(){
// Once the iframe has loaded, swap the loading message for the frame.
$('#loading').hide();
$('#myframe').fadeIn();
});
// Invisible 1x1 helper div used later as a scratch area for $.load().
$('<div id="popup" style="display:none; width:1px; height:1px; position:absolute; top:0; left:0;"></div>').appendTo('html');
});
function myframe() {
var $myframesearch = $('#myframe').contents();
// Make ordinary links navigate the parent window instead of the iframe.
$myframesearch.find("a").attr('target','_parent');
// Intercept only the "Open:" links whose behavior the page's own script overwrites.
$myframesearch.find('a:contains("Open:")').on('click',function(){
$(this).attr('data',''); // clear the "data" field so the original handler is bypassed
var $texta = $(this).text();
var $text = $texta.replace(/Open: /,"");
// Re-fetch page2 and load only the matching anchor into the hidden #popup div.
$('#popup').load('https://mywebsite.com/page2' + ' a:contains("'+$text+'")', function(){
$('#popup a').each(function(){
// Repair the href and trigger a native click to open the link.
this.href = this.href.replace(/https:\/\/mywebsite\.com\/page2\/a/, "https://mywebsite.com/page2");
this.click();
});
});
});
}
Hope this helps :)
This spot was clearly a low water crossing. There was a flood gauge and several signs that indicated this road crossing the river was subject to flash flooding. There is a dam before the road crosses the river and this is different from other low water crossings and can pose a greater risk.
Hong Kong will not follow Fed rate cut of 0.5%
China's interest rate is on the rise and Hong Kong inflation is climbing. There is no way that Hong Kong will follow the US interest rate. The US Federal Reserve cut rates sharply by half a point, further weakening the appeal of the US dollar; the dollar fell to a 15-year low against a basket of currencies, and high-yield currencies are in strong demand. But HSBC will cut its prime rate by 0.25% tomorrow. HSBC followed the US with a quarter-point cut, lowering its prime lending rate to 7.5% from tomorrow; deposit rates were trimmed correspondingly, with the passbook savings rate on balances above HK$150,000 falling to 2.25%. Several Hong Kong banks followed the US with quarter-point cuts: HSBC, Hang Seng Bank (0011.HK) and BOC Hong Kong (2388.HK) will lower their prime lending rates to 7.5% from tomorrow, while Standard Chartered and Bank of East Asia cut theirs to 7.75%. HSBC general manager Margaret Leung said that IPO activity has kept local interbank rates high, which is why the bank did not match the full US cut this time; she also expects the US to cut rates by another quarter point within the year to ease the credit squeeze caused by subprime mortgages.
Project Summary/Abstract The National Cancer Institute (NCI) established the Cancer Genetics Network (CGN) in 1998 as a multi-centered national project designed to support collaborative investigations on the genetic basis of cancer susceptibility and to explore mechanisms to integrate this new knowledge into medical practice. A fundamental issue in this high-risk population is how to optimally screen for early detection of disease. Recent discovery of new biomarkers that change with early stages of disease offer promising non-invasive strategies for screening. However, two aspects that are fundamental for determining an optimal strategy are: for what cancers is the individual at elevated risk and how can longitudinal measurements on a biomarker be used to predict risk of disease. To focus on these issues in the context of the Cancer Genetics Network project, there are two aims in this proposal. The first aim will develop and apply methods for analyzing data on family history to discover what cancers tend to aggregate in individuals and families. We will develop methodology that can be applied to the CGN Registry (family history) data and appropriately accounts for age and mode of ascertainment of participants. We will apply these methods to identify clusters of cancer sites that occur together in families. The second aim of this proposal focuses on the development and application of a method to assess the relationship between longitudinally collected biomarkers and early evidence of disease onset. The method will allow us to analyze data from the CGN Ovarian Cancer (CA125) Biomarker Screening study, as it will appropriately handle this longitudinal and event time data even though the screening schedule for these women varied due to missed or delayed visits. The PI of this proposal is the PI on the Statistical Coordinating Center of the Cancer Genetics Network, and is responsible for overseeing data acquisition and analysis of CGN studies, but is not funded for statistical methods research. This grant would provide the support needed to complete the research for and apply the required statistical methods. This research has the potential of a major impact on the vulnerable population of individuals with family history of cancer, as it will allow the screening strategy to be more focused on cancers to which the individual is at elevated risk, and potentially improve the chances that the disease will be detected in a treatable stage.
Great Winch
Comments On Mar 10, 2015:Installed this winch with very little struggle. Takes some time because you have to take off the front fairing and brush guards if you have one. Otherwise fit perfect and looks great. Highly recommend.
winch mount and contactor mount plate
Comments On Feb 05, 2015:Winch mount was great. Needed to expand the upper support bracket but other than that it worked great.
Plate sent for the winch contactor didn't work with the supplied part from warn. Looks like design changed. Little modifying and another hole drilled and it works.
Excellent
BRIAN in UT
Comments On Dec 25, 2014:Item is very well made... Great fitment :)
Fits like a glove
Comments On Dec 04, 2014:Great product, easy instructions and all the hardware was included. The instructions tell you how to put on the plate, add the winch, run the wiring and has lots of pictures. The fairlead that I bought bolted right onto it and everything looks like its supposed to be there.
winch mounting plate had to be redrilled.
Todd in OR – Machine this part was bought for: 2002 POLARIS SPORTSMAN 400 4X4
Comments On Sep 12, 2014:The motor on the winch comes in contact with the radiator guard using the holes provided in the winch mount. I had to drill new holes 1-1/2" to the right to get it to fit. Otherwise it seems to be a good unit.
winch mount
Dan in ID
Comments On Sep 10, 2014:Great fit, thick metal, shouldn't have any worries with this part.
Great Winch Mount
Comments On Jul 15, 2014:This Winch mount fitted up perfectly as if it was an OEM product. RMATV sent it out to me in Australia promptly, arriving in just a few days from ordering. fantastic service. Thanks Guys.... keep up the great work.
Mark
Can't beat Rocky Mountain
Comments On Mar 02, 2013:Can't put this anywhere else so a big shout out to Rocky Mountain & its staff, always had the best service from them. I ordered items located in Australia where I live the same day that I ordered the winch and I got the winch first. Keep it up guys. By the way the winch is great, found it very easy to install no problems.
Good fit for the machine.
Comments On Dec 10, 2012:I ended up with a XT40(4000lb) Warn winch that was incompatable with the mount,and Warn did not offer another option, although the mount fit the machine perfectly.
Works well. Solid.
Douglas in NE
Comments On Oct 28, 2012:Excellent instructions. Not too bad to install and the way it mounts leaves me no doubt that it will be sturdy.
Originally shipped the wrong part (issue with Warn not having enough info on my model to catalog correctly). Both Warn and Rocky Mountain were quick to fix the issue and were VERY helpful on the phone. Will continue to business with both companies due to excellent customer service.
2012 Razor 4 800.
Rincon Mount
DON in NY
Comments On Oct 20, 2012:Mounted the Warn 25 winch no problems...
Fits perfect
Jimmy in NC
Comments On Oct 07, 2012:easy to install , i went back and forth trying to decide to buy this winch plate or the cheaper one, i am glad that i got this one, well worth the $
Warn winch mount plate
Scott in TN
Very detailed instructions and well built
Kevin in IN
Comments On Mar 12, 2012:I was very impressed with the mount and the instructions. It's obvious that warn has a good relationship with OEMs as this fit my Rancher 420 perfectly. Don't buy anything else!
Perfect Fit
David in TN
Comments On Dec 06, 2011:Slipped right into frame on 2011 Grizzly 700 and bolted right up. Great product at great price.
A OK
Robert in WA
Comments On Dec 02, 2011:Went right on the machine and is in keeping with the overall quality of Warn products.
perfect
Aaron in OH
Comments On Dec 14, 2010:did the job!
So easy.
PATRICK in MI
Comments On Nov 23, 2010:Just buy the plate. This thing makes it so easy. This was my first winch install and it was easy. Mount the winch and the fairlead. Install the wires to the winch. Then just slide the whole thing in and torque it down.
great
Lorin in NE
Comments On Aug 05, 2010:perfect fit and built to last
MOUNTING PLATE
toney in MO
Comments On Nov 02, 2009:Perfect Fit For My 2009 Honda Recon ES.
WARN® Winch Mount Plate
charles in PA
Comments On Sep 23, 2009:not too bad. no complaints here
2009 Honda Rancher AT Winch Mount
BRADLY in GA
Comments On Sep 17, 2009:The good and the not so good.
Good:
1. Fits perfectly.
2. Requires only minor trimming of skid plate.
Bad:
1. There is no way to visually inspect the spooling drum to see if the wire rope is wrapping properly. This increases the potential for winch and rope damage.
2. The fairlead mounting is held by only two 90 degree sheet metal tabs and is subject to bending from minor front end hits.
3. High price compared to other mounts.
!!!
Mary Pat in MI
Comments On Aug 11, 2009:easy to install and really sturdy
Warn Good Plate Right Fit
Jeffery in UT
Comments On Nov 02, 2008:I purchased the Warn winch plate to replace a competitors winch plate that didn't fit my KingQuad very well as I had trouble with the cable rubbing on the mounting plate. The Warn winch plate fits great. All holes line up to my Mile Marker winch and the fits is great, so there was no drilling or modifications made to the plate. Kudos to Warn Engineering as the winch plate fits as if it was factory installed. Quality of the winch plate is first rate as the steel is heavy and does not flex or bend under heavy loads. I know this winch plate will not fail me as it was well worth the extra money. I only wished I had bought the Warn winch plate in the first place.
WARN, The Industry Standard
BRIAN in TX
Comments On Oct 01, 2008:Easy installation.
Good fit
Ron in CO
Comments On Jun 16, 2008:Had all the holes in the right spots and good powder coating finish.
WARN WINCH MOUNT PLATE
BOB in PA
Comments On Apr 03, 2008:FIT PERFECT... EASY INSTALL. HEAVY DUTY.
Works great
Russ in WA
Comments On Feb 05, 2008:Easy install. Comes complete.
a+
MICHAEL in UT
Comments On Jan 29, 2008:a+
Great
Doug in UT
Comments On Jan 13, 2008:Very well made and fit great. No installation problems at all.
RAYMOND in FL
Winch mount plate
WILLIAM in PA
Comments On Nov 03, 2007:Loved it. Fit right in to the machine and was simple to tighten the hardware up.
Warn Winch Mount Plate
Jason in UT
Comments On Nov 02, 2007:This was so easy to use and install. It required very little experience! I wouldn't buy any other BRAND!
like it
Stephen in MO
Comments On Oct 26, 2007:it is a good fit and is well made
Easy to install and tough
Eric in MS
Comments On Oct 23, 2007:It was easy to install with the winch. It is a little pricey but with WARN you know it was built right.
amazing
CHRIS in NH
Comments On Oct 22, 2007:amazing product fits perfectly with the stock bumper and is extremely durable.
Couldn't ask for more.
Wade in RI
Comments On Oct 21, 2007:Fit perfect. went on easily, directions were awesome. couldn't ask for more
Mount it Right
AL in NV
Comments On Oct 20, 2007:This product is of high quality. I had no problem mounting my Warn winch on my 2006 Kodiak. The holes all lined up like they should. The mounting plate itself is powder coated black for long protection from the elements.
President Obama’s tack on Syria looks a lot like President George W. Bush’s handling of Iraq and “sounds an awful lot like how Vietnam started,” former Rep. Ron Paul argues in his weekly column.
Mr. Paul, Texas Republican, argues that the recent announcement from the Obama administration that the Syrian government has used poison gas on rebels follows already-made decisions to transfer weapons to the Syrian rebels.
“The process was identical to the massive deception campaign that led us into the Iraq war,” Mr. Paul writes. “Remember the famous quote from the leaked ‘Downing Street Memo,’ where representatives of British Prime Minister Tony Blair’s administration discussed Washington’s push for war on Iraq?”
“Here the head of British intelligence was reporting back to his government after a trip to Washington in the summer of 2002:
‘Military action was now seen as inevitable. Bush wanted to remove Saddam, through military action, justified by the conjunction of terrorism and WMD. But the intelligence and facts were being fixed around the policy.’
“That is exactly what the Obama Administration is doing with Syria: fixing the intelligence and facts around the already determined policy,” Mr. Paul continues. “And Congress just goes along, just as they did the last time.”
Mr. Paul asks if anyone sees the irony in Mr. Obama’s plans to send weapons to the Syrian rebels, many of whom have pledged loyalty to al Qaeda, after 12 years of the “war on terror” and the struggle against al Qaeda.
“The Obama administration promises us that this is to be a very limited operation, providing small arms only, with no plans for a no-fly zone or American boots on the ground,” Mr. Paul writes. “That sounds an awful lot like how Vietnam started. Just a few advisers. When these few small arms do not achieve the predetermined U.S. policy of regime change in Syria what is the administration going to do? Admit failure and pull the troops out, or escalate? History suggests the answer and it now appears to be repeating itself once again. The president has opened a can of worms that will destroy his presidency and possibly destroy this country. Another multi-billion dollar war has begun.”
[Clinical analysis of a new triphasic estroprogestational combination with gestodene].
New steroidal compounds have been recently synthesised and analysed in order to improve the metabolic tolerance of oral contraception. Gestodene, a new progestogen from the 19 nortestosterone series, seems to fulfil these conditions. It also has high antigonadotropic effects. The authors report the clinical results of a study with a triphasic combination containing Gestodene. Despite the small doses of hormones, this triphasic combination with Gestodene provides excellent contraceptive efficacy and good cycle control and clinical tolerance.
1. Introduction {#sec1-1}
===============
Orthokeratology (OK) is also known as *corneal reshaping* or *corneal refractive therapy* that uses the programmed application of rigid contact lenses to reshape the cornea and temporarily reduce refractive errors.[@ref1][@ref2] Since the introduction of reverse-geometry lens design with high oxygen transmissibility that allows its overnight use for accelerated OK, many studies have shown that OK lenses provide effective means of temporarily reducing low to moderate myopia to \~3.00 D in power.[@ref3][@ref4][@ref5][@ref6]
Most OK studies limited the use of high refractive error (\< −6.00 D), probably because high-myopia participants seemed to be relatively unsuccessful in OK treatment. There is a significant amount of residual refractive error that results in poor unaided vision after the treatment. Fan et al[@ref7] included young adolescents with pretreatment spherical-equivalent refraction up to −10.75 D in a mix of daily wear and overnight OK lens wear in their study. Over the 6 months' duration, the average myopia reduction was only 3.00 D with maximum reduction \< 5.00 D in their participants. The higher pretreatment refractive error and the lens design may explain why the percentage of myopia reduction estimated from their study was relatively low.
Wang et al,[@ref8] in their retrospective study of 46 patients undergoing OK for up to 12 months' period, reported the mean reduction in mild myopia (defined as from −0.50 to −3.50 D) and moderate myopia (defined as from −3.75 to −7.00 D) groups were 0.77 ± 1.14 D and 2.90 ± 1.42 D, respectively. Visual acuity (VA) improved to ≥ 20/40 (6/12) in 95% of their participants in the low-myopia group compared to 75% in the moderate myopic group. Spherical changes were stable after the third month of OK lens wear. There was also a significant increase in astigmatism after 12 months.
In OK, the unaided VA and refractive error follow a pattern of change and recovery similar to that seen in corneal curvature, given that refractive error depends on corneal curvature and unaided VA is based on refractive error.[@ref9] Many studies have shown that most children and young adolescents achieved uncorrected logarithm of the minimum angle of resolution (logMAR) VA of 0.10 (6/7.5 Snellen) or better at the end of their study, an effect that lasts all day after the OK treatment.[@ref3][@ref6][@ref10]
Myopia reduction is achieved by flattening of the anterior corneal curvature, and this occurred within the first few hours wearing reverse-geometry OK lenses.[@ref11] The flattening resulted in the redistribution of corneal epithelium and stroma tissue that contributed to the refractive changes. Recently, it was reported that myopia reduction is mainly effected by the flattening of the anterior corneal curvatures without the contribution of posterior corneal curvatures.[@ref12]
Studies of OK using reverse-geometry lenses have not reported significant increases in corneal toricity,[@ref3][@ref13][@ref14] possibly because lens centration is maintained more reliably due to the steeper secondary curve of these lenses. Indeed, Soni et al[@ref9] have claimed that OK with reverse-geometry lenses can reduce with-the-rule astigmatism by up to 60%. Mountford and Pesudovs[@ref15] reported an average reduction in corneal toricity of 50% with accelerated OK using reverse-geometry lenses.
In a recent study using OK lens to slow down myopia, Charm and Cho[@ref16] reported a successful retardation in myopia progression of the participants having myopia higher than 5.00 D, but their target reduction was only 4.00 D, and the highest spherical-equivalent refraction was −5.75 D.
To date, there is limited information in the literature regarding changes in visual parameters after OK treatment in high-myopia participants with a sphere equivalent \< −6.00 D. Because there were no reports available on Malaysian studies on the effects of OK treatment among low myopes, we decided to examine and compare the results obtained with high and low myopia in terms of refractive error, corneal curvature, and VA changes after wearing OK lenses for a duration of 6 months.
2. Methods {#sec1-2}
==========
This was a retrospective study. A total of 30 files of children and adolescents undergoing OK treatment at an optometry clinic were reviewed. Data collected from the patients' files included participant demographics and visual parameters: refractive errors, corneal curvatures \[simulated keratometry (Sim K) readings along the flattest and steepest meridians\], and uncorrected and best-corrected logMAR VA. These parameters were documented before the commencement of overnight OK treatment and at 1 day, 1 week, 3 months, and 6 months post-OK treatment. This study was approved by the Universiti Kebangsaan Malaysia Ethical Committee (Kuala Lumpur, Malaysia; UKM 1.5.3.5/244/NN-175-2009), and followed the tenets of the Declaration of Helsinki.
The inclusion criteria were: age between 7 years and 17 years, first-time fitting with OK lenses, maximum with-the-rule astigmatism of −2.50 DC, ability to achieve 0.1 logMAR VA or better in each eye, no systemic or ocular disease affecting ocular health, no use of systemic or topical medications that could affect ocular physiology or contact-lens fitting, and no ocular lid or anterior-segment abnormalities for which contact-lens wear would be contraindicated. Participants also had to have been successfully fitted with OK lenses for a period of 6 months and to have worn their lenses for at least 6 hours nightly, especially before the follow-up visits. Of all the files reviewed, 25 participants met the inclusion criteria. For the analysis, the participants were arbitrarily divided into two groups: a high-myopia group with spherical equivalents \< −6.00 D and a low-to-moderate-myopia group with spherical equivalents ≥ −6.00 D.
The OK data were collected by the practitioner within 2 hours after lens removal on the 1^st^ day of overnight wear and at the following post-OK visits. The manifest refractive error and residual subjective-refraction results were recorded at every visit. VA was measured using Early Treatment of Diabetic Retinopathy Study charts on a light box (luminance 85 cd/m^2^). This was a high-contrast logMAR VA chart with a testing distance of 4 m. VA was recorded as logMAR with Snellen equivalents. Corneal-topography measurements were taken using a corneal topographer, the Tomey TMS-4 (Tomey Co., Aichi, Japan), with at least three measurements obtained from each eye before OK treatment and at each follow-up visit during OK lens wear. Data retrieved included Sim K readings along both the flattest (Sim K~flat~) and steepest (Sim K~steep~) meridians. The average Sim K readings were also recorded at every visit.
All participants used the same lens type and design throughout the OK treatment. The OK lens was manufactured by Global-OK Vision, San Diego, USA. It was made of Optimum Extra (Roflufocon D) material with a high oxygen permeability (DK) value {oxygen permeability: 100 × 10^−11^ cm^2^/sec \[mL O~2~/(mL × mmHg)\]}.
The data recorded for this study were analyzed using SPSS version 20.0 (IBM Corp., New York, NY, USA). All data were tested for normality using Shapiro--Wilk tests before statistical analysis (*p* \> 0.05). Repeated-measures analysis of variance (ANOVA) with *p* = 0.05 was used to examine visual-parameter changes from baseline over the 6-month study period. Paired *t* tests with Bonferroni correction (to minimize type 1 error) were used to test for differences between any two consecutive visits; for all parameters (5 comparisons), *p* \< 0.01 (0.05/5) was considered significant. The relationship between change in refractive error and change in unaided VA was examined by Pearson correlation analysis.
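To make the pairwise testing procedure concrete, the following is a minimal Python/SciPy sketch of the consecutive-visit comparisons and the correlation step. The arrays are randomly generated placeholders rather than the study's measurements, the group size and visit labels are assumptions, and the repeated-measures ANOVA step is omitted:

    # Sketch of the consecutive-visit comparisons (placeholder data only).
    import numpy as np
    from scipy import stats

    visits = ["baseline", "1 day", "1 week", "3 months", "6 months"]
    rng = np.random.default_rng(0)
    # ser[v]: spherical-equivalent refraction (D) of each participant at visit v
    ser = {v: rng.normal(-7.0 + 1.6 * i, 0.8, 10) for i, v in enumerate(visits)}

    corrected_alpha = 0.05 / 5  # Bonferroni: 5 comparisons -> p < 0.01

    # Paired t tests between consecutive visits
    for prev, curr in zip(visits, visits[1:]):
        t, p = stats.ttest_rel(ser[prev], ser[curr])
        verdict = "significant" if p < corrected_alpha else "not significant"
        print(f"{prev} vs {curr}: t = {t:.2f}, p = {p:.4f} ({verdict})")

    # Pearson correlation between change in refraction and change in unaided VA
    delta_ser = ser["6 months"] - ser["baseline"]
    delta_va = -0.18 * delta_ser + rng.normal(0, 0.05, 10)  # placeholder VA changes
    r, p = stats.pearsonr(delta_ser, delta_va)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")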
3. Results {#sec1-3}
==========
The demographic data are presented in [Table 1](#T1){ref-type="table"}. There were no significant differences between data from the right and left eyes in spherical-equivalent refraction (SER), uncorrected visual acuity, best-corrected visual acuity, anterior Sim K~flat~, or Sim K~steep~ (*t* tests, *p* \> 0.05 for SER, best-corrected visual acuity, anterior Sim K~flat~, and Sim K~steep~; paired *t* test, *p* \> 0.05 for uncorrected visual acuity); therefore, only data from the right eye were used in the subsequent analysis. Baseline refraction values differed significantly between the high-myopia and low-to-moderate-myopia groups (*p* \< 0.05; *t* test).
######
Demographical data of the participants (*n* = 25) at the baseline visit.^a^
  |                         | High-myopia group             |               | Low-to-moderate-myopia group  |               |
  |-------------------------|-------------------------------|---------------|-------------------------------|---------------|
  | Age (y)                 | 13.60 ± 3.10 (range: 8--17 y) |               | 13.00 ± 3.25 (range: 7--17 y) |               |
  | Sex (male/female)       | 2/8                           |               | 5/10                          |               |
  | Race                    | 8 Chinese, 2 Indian           |               | 15 Chinese                    |               |
  | Eye                     | Right                         | Left          | Right                         | Left          |
  | SER (D)                 | −7.11 ± 0.79                  | −7.19 ± 1.05  | −3.91 ± 1.01                  | −4.33 ± 1.81  |
  | Refractive sphere (D)   | −6.63 ± 0.66                  | −6.70 ± 1.16  | −3.65 ± 1.14                  | −4.00 ± 2.27  |
  | Refractive cylinder (D) | −0.98 ± 0.79                  | −1.08 ± 0.82  | −0.52 ± 0.62                  | −0.93 ± 1.50  |
  | UCVA                    | 1.34 ± 0.11                   | 1.33 ± 0.14   | 0.83 ± 0.18                   | 0.96 ± 0.38   |
  | BCVA                    | 0.11 ± 0.52                   | 0.11 ± 0.51   | −0.05 ± 0.08                  | −0.01 ± 0.13  |
  | Sim K~steep~ (D)        | 44.20 ± 1.50                  | 44.26 ± 1.47  | 44.02 ± 1.19                  | 44.09 ± 1.34  |
  | Sim K~flat~ (D)         | 42.58 ± 1.77                  | 42.55 ± 1.69  | 42.77 ± 1.17                  | 42.70 ± 1.17  |
  | Corneal toricity (D)    | −1.62 ± 1.11                  | −1.71 ± 1.05  | −1.25 ± 0.57                  | −1.39 ± 1.14  |
Data are presented as mean ± standard deviation.
BCVA = best-corrected visual acuity; SER = spherical-equivalent refraction; UCVA = uncorrected visual acuity.
^a^ Spherical-equivalent refraction, uncorrected and best-corrected logarithm-of-the-minimum-angle-of-resolution visual acuity, and simulated-keratometry readings along the flattest (Sim K~flat~) and steepest (Sim K~steep~) meridians.
3.1. Refractive error {#sec2-1}
---------------------
The average change in refraction (n~total~ = 25 participants) during the 6-month treatment period for the high-myopia (*n* = 10) and low-to-moderate-myopia (*n* = 15) groups is shown in [Fig. 1](#F1){ref-type="fig"}. After 6 months of OK lens wear, the refractive error in both groups was significantly reduced from baseline (ANOVA, *p* \< 0.001). High myopes showed a significantly greater myopia reduction from baseline than low to moderate myopes (ANOVA, *p* \< 0.05).
![Change of refractive error for both high-myopia and low-to-moderate-myopia groups at every visit. Error bars indicate standard deviation. \*,\*\* Significances of *p* \< 0.01 for the high-myopia and low-to-moderate-myopia groups, respectively (the residual refraction is significantly different compared with previous visits).](TJO-5-164-g001){#F1}
In the high-myopia group, the spherical-equivalent refractive error was reduced from an average of −7.11 ± 0.79 D to −0.18 ± 0.31 D after 6 months of overnight lens wear. In this group, the largest spherical-equivalent reduction was observed after one night of wear and continued until the 1^st^ week, when the spherical equivalent reached −0.41 ± 1.13 D (paired *t* tests, *p* \< 0.01). No further significant change was observed over subsequent visits (paired *t* tests, *p* \> 0.01). In the low-to-moderate-myopia group, myopia was reduced from a baseline value of −3.91 ± 1.01 D to −0.27 ± 0.75 D after 6 months. The largest spherical-equivalent reduction was observed after the 1^st^ night of lens wear (paired *t* tests, *p* \< 0.01), and refraction appeared to stabilize thereafter, since no significant myopia reduction was observed over subsequent visits (paired *t* tests, *p* \> 0.01).
3.2. Visual acuity {#sec2-2}
------------------
[Fig. 2](#F2){ref-type="fig"} shows the uncorrected logMAR VA for both the high-myopia and low-to-moderate-myopia groups, which improved after different periods of OK lens wear. Statistically significant improvements in unaided VA relative to baseline were found in all participants over the 6 months of lens wear in both groups (ANOVA, *p* \< 0.05). In both groups, the largest VA gain occurred after 1 day of wearing OK lenses (*p* \< 0.01), after which the VA stabilized; no further significant changes were seen between subsequent visits (*p* \> 0.01). The unaided VA after 6 months of lens wear was −0.07 ± 0.10 in high myopes and 0.01 ± 0.16 in low to moderate myopes.
![Unaided high-contrast logarithm-of-the-minimum-angle-of-resolution visual acuity for both high-myopia and low-to-moderate-myopia groups at every visit. \*,\*\* Significances of *p* \< 0.01 for the high-myopia and low-to-moderate-myopia groups, respectively (the residual refraction is significantly different compared with previous visits). logMAR = logarithm of the minimum angle of resolution; HCVA = high contrast visual acuity.](TJO-5-164-g002){#F2}
There was a strong negative correlation between the change in spherical equivalent and the change in unaided VA after 6 months of overnight lens wear ([Fig. 3](#F3){ref-type="fig"}), which was statistically significant (Pearson correlation coefficient, *r* = −0.94, *n* = 14, *p* \< 0.01).
![The relationship between the change in spherical equivalent and the change in unaided visual acuity after 6 months of overnight lens wear. logMAR = logarithm of the minimum angle of resolution.](TJO-5-164-g003){#F3}
3.3. Corneal topography {#sec2-3}
-----------------------
[Fig. 4](#F4){ref-type="fig"} shows the reduction in corneal power along the steepest and flattest meridians for both the high-myopia and low-to-moderate-myopia groups after different periods of OK lens wear. In both groups, the Sim K along the steepest (Sim K~steep~) and flattest (Sim K~flat~) meridians flattened significantly over time compared with baseline (ANOVA, *p* \< 0.001). In the high-myopia group, corneal flattening along both meridians was significant by the first week of lens wear (paired *t* test, *p* \< 0.01) and appeared to stabilize thereafter, with no further significant flattening observed beyond 1 week (*p* \> 0.01). In the low-to-moderate-myopia group, Sim K~flat~ flattened and appeared to stabilize after one night of lens wear, whereas Sim K~steep~ stabilized within 1 week of lens wear (paired *t* test, *p* \< 0.01); no further significant flattening was observed at subsequent visits (*p* \> 0.01).
![Simulated keratometry along the steepest and flattest meridians (D) in both high-myopia and low-to-moderate-myopia groups at every visit. Error bars indicate standard deviation. \*,\*\* Significances of *p* \< 0.01 for the high-myopia and low-to-moderate-myopia groups, respectively (paired *t* test; the residual refraction is significantly different compared with the previous visit).](TJO-5-164-g004){#F4}
3.4. Topographic corneal toricity {#sec2-4}
---------------------------------
The change in corneal toricity over the 6 months of OK lens wear is shown in [Fig. 5](#F5){ref-type="fig"}. There was no significant change in corneal toricity over time in either group (ANOVA, *p* \> 0.05). In all participants, corneal toricity did not differ significantly from the mean at previous visits during OK lens wear (paired *t* test, *p* \> 0.01). Corneal toricity after 6 months of lens wear was −1.86 ± 1.12 D in high myopes and −1.41 ± 0.75 D in low to moderate myopes.
![Corneal toricity (D) in both high-myopia and low-to-moderate-myopia groups at every visit. Error bars indicate standard deviation.](TJO-5-164-g005){#F5}
4. Discussion {#sec1-4}
=============
The effect of OK lenses on low myopia is well documented. The results of our study on low myopia up to −6.00 D concur with most previous studies: myopic power was reduced to almost zero by the end of the 6-month period of OK lens wear. Notably, the results of our study on high myopia followed the same pattern as low myopia, with most of the change occurring within 24 hours of overnight OK lens wear. In this study, the mean change in refractive error of the high-myopia participants was 6.93 ± 0.92 D at 6 months of post-OK lens wear, from a baseline value of −7.11 ± 0.79 D; these participants achieved a final residual spherical equivalent of −0.18 ± 0.31 D. For the high myopes, the largest reduction in myopia occurred after one night of OK lens wear, and this stabilized after 1 week. Our study showed that high myopes in the power range of −6.25 to −8.25 D can be successfully treated with OK lenses worn nightly. Koffler and Smith[@ref17] used the Paragon HDS 100 paflufocon D OK lens, which has the same DK value as our lens, and included myopic participants with a refractive-error range similar to ours; they reported a myopia reduction of 5.80 ± 1.80 D. However, only two participants in their study had a refractive error \< −6.00 D. At baseline, their participants' refractive errors ranged from −1.00 D to −7.75 D. They also concluded that participants with refractive errors between −1.00 D and −6.00 D and up to 1.50 D of astigmatism can expect a good outcome with OK lenses. The outcomes in our low-myopia group concurred with their results and with many other studies reported earlier.[@ref4][@ref6][@ref18]
In a retrospective study, Wang et al[@ref8] examined the data of participants who had undergone overnight OK with the Contex OK-3 design, with spherical power up to −7.00 D and astigmatism \< 2.00 D. For the analysis, they divided their data into two groups: mild myopia (0 to −3.50 D) and moderate myopia (−3.75 to −7.00 D). They noted a mean change of 0.77 ± 1.14 D in the low-myopia group and 2.90 ± 1.42 D in the moderately myopic group after 1 year of OK treatment. The reduction in sphere reached its maximum only in the 3^rd^ month and fluctuated slightly thereafter. The mean spherical change was only 51% of the baseline value. In both our high- and low-myopia groups, the mean change in refractive error relative to baseline was \> 93%, and almost all of this change occurred within 1 day (low myopes) or 1 week (high myopes) of OK lens wear. Although our outcomes appear better than those of Wang et al,[@ref8] the studies cannot be compared directly, since our stricter inclusion criteria, different classifications of low and high myopia, and different lens designs and materials may each have contributed in part to the outcome.
The reduction in refractive error in both the high- and low-myopia groups was also reflected in the final VA achieved by the end of 6 months. The high myopes achieved a final acuity of −0.07 logMAR (6/4.8 Snellen), compared with 1.38 ± 0.09 at baseline. This is almost the same as the final acuity achieved by the low to moderate myopes (0.01 ± 0.16 logMAR) in our study. After 6 months of OK lens wear, 88% of the high-myopia participants achieved a VA of 0.0 logMAR (6/6 Snellen) or better, whereas 82% of the low-to-moderate-myopia participants achieved a VA of 0.0 logMAR (6/6 Snellen). Our results appear better than those of Wang et al,[@ref8] who reported that 95% of their low-myopia group (defined as −0.75 to −3.50 D) achieved a VA of 0.3 logMAR (6/12 Snellen), and only 75% of the moderately myopic group (−3.75 to −7.00 D) achieved 0.3 logMAR by the end of 1 year of lens wear. This could be due to improvements in highly oxygen-permeable lens materials and a different lens design, which led to successful OK lens fitting in our groups of participants. Walline et al[@ref3] used Paragon corneal refractive therapy lenses, which have the same DK value as ours (DK value of 100), and their participants also achieved an unaided logMAR VA of 0.08 ± 0.15 (\~6/7.5 Snellen) at 6 months of lens wear. Lum and Swarbrick[@ref19] demonstrated that an increase in lens Dk/t not only provides physiological advantages but, most importantly, enhances the clinical outcomes of overnight OK.
In this study, we found a good correlation between uncorrected VA and refractive error ([Fig. 3](#F3){ref-type="fig"}). This is in agreement with many studies reported previously where flattening of the cornea resulted in improvement in VA.
The change in refractive error reflects the change in corneal shape. The average changes in corneal power along the flat meridian after 6 months were −4.57 ± 1.39 D in the high myopes and −2.28 ± 1.01 D in the low to moderate myopes. Corneal flattening stabilized by 1 week of lens wear in all participants, which concurs with many other studies.[@ref4][@ref8][@ref11][@ref20][@ref21]
The mean changes in corneal toricity were −0.67 ± 1.48 DC and −0.01 ± 0.60 DC in the high- and low-to-moderate-myopia groups, respectively. There were no significant differences in toricity over the 6 months of OK lens wear in either group. Our finding agrees with Kang et al,[@ref22] Sridharan and Swarbrick,[@ref23] Cheung et al,[@ref24] and, more recently, Chou et al,[@ref25] who reported no significant change in corneal toricity after overnight OK wear. The spherical central base-curve radii of the contact lens flatten both principal meridians of the cornea almost equally; thus, corneal toricity was not significantly altered throughout the treatment (Soni et al[@ref9]). However, Wang et al[@ref8] reported an increase in astigmatism in their moderately myopic group. Many studies of astigmatism change in OK lens wear are not comparable, because they vary in the nature of the astigmatism investigated (corneal vs. refractive), the parameters used to describe astigmatism (refractive error vs. aberration), and the participant inclusion criteria.
In summary, OK lens wear significantly reduced the refractive error and flattened the corneal curvature, resulting in improved VA in both the high- and low-myopia participants. The end point of residual refraction in both groups appeared to occur at the same time, despite the difference in initial myopic power. Overnight OK using modern reverse-geometry lens designs is an effective nonsurgical method for refractive correction and clear unaided vision in high-myopia children (range, −6.25 D to −8.25 D) with maximum refractive astigmatism of −2.50 DC. The spherical OK lens designs used in this study neither induced nor reduced corneal toricity. The main limitation of this study is the small number of participants in the high-myopia group. Further research with larger sample sizes is needed to provide a better understanding of the efficacy and safety of OK treatment, particularly among high-myopia children.
The authors acknowledge receipt of the Ministry of Science, Technology and Innovation grant 060102-SF0604.
Conflict of interest: The authors declared no conflict of interest. | {
"perplexity_score": 638.7,
"pile_set_name": "PubMed Central"
} |
Brachial plexopathy following herpes zoster infection: two cases with MRI findings.
There are few reports of brachial plexopathy following the onset of a herpes zoster skin rash. Moreover, the MRI findings of zoster-induced brachial plexopathy have rarely been described. In the present study, we describe two cases of zoster brachial plexopathy and their MRI findings. MRI of the brachial plexus demonstrated T2 hyperintensity and contrast enhancement in the part of the brachial plexus that was compatible with both the clinical symptoms and the electrophysiological findings. Especially, MR imaging reflected the functional impairments more accurately than electrophysiological studies in the acute phase, during which MRI showed more extensive inflammatory involvement of the brachial plexus. MRI findings in the present cases suggest that, in addition to electrophysiological studies, MRI of the brachial plexus could provide valuable information for evaluating the location and extent of lesions and for understanding the pathophysiological mechanisms of zoster brachial plexopathy. | {
"perplexity_score": 155,
"pile_set_name": "PubMed Abstracts"
} |
The lasagna is a great example of why meat, or the absence of it, is a non-issue at Zinc Café. A mixture of ricotta, ginger, shallots, garlic and spinach is lavished between the noodles, making it rich and filling. Served on a soft bun with all the trimmings, the vegetarian Zinc burger imparts that certain carnivorous satisfaction that few meatless burgers do.
Let's just give up hope that our beautiful, multicultural county will ever get an accurate photoplay treatment. But can't The O.C. and Laguna Beach at least feature grubberies other than upscale places most of us Central Countians c... | {
"perplexity_score": 489.1,
"pile_set_name": "Pile-CC"
} |
Background
==========
Locally aggressive bone tumors are a group of commonly recurrent and metastatic bone tumors that predominantly occur in the epiphyses of long bones adjacent to joints; they include giant-cell tumor (GCT) of bone, aneurysmal bone cyst (ABC), chondroblastoma (CBT) and osteoblastoma \[[@B1]\]. According to Enneking surgical staging, progression of these tumors can be understood in terms of tumor stage \[[@B2]\]. Using this scale, stage I represents the latent phase. At stage II, tumors become active, exhibiting expansive growth and thinning of the bone cortex within the compartment. Stage III is the most aggressive, with lesions piercing through the bone cortex and involving soft tissues surrounding the compartment. Routine treatment includes surgical therapy with extended curettage (IC) of lesions, local adjuvant treatment, graft implantation and effective internal fixation \[[@B1]\]. Because most lesions lie adjacent to the joint and the muscle (tendon) insertion, IC of lesions may damage muscle (tendon) insertions, with detrimental effects on postsurgical limb function. If protection is limited to the muscle (tendon) insertion, a sufficient operative field is difficult to obtain, preventing effective curettage. Therefore, improved methods for treatment selection are required for more effective and successful treatment of locally aggressive musculoskeletal tumors. The current status of treatment and recurrence of these tumors, as discussed in the present study, is briefly reviewed below.
Treatment of aneurysmal bone cysts
----------------------------------
ABCs are relatively rare in primary bone tumors, constituting only 1% to 2% of all primary bone tumors. These tumors exhibit rapid growth and high invasiveness, and are often destructive to surrounding tissues \[[@B3]\]. Ubiquitin-specific protease 6 has been implicated in the development of ABCs \[[@B3],[@B4]\], and treatment with curettage and bone grafting or bone cementation has been reported to achieve 70% to 90% success with 10% to 30% recurrence rates \[[@B3],[@B5]\]. IC, or aggressive curettage, applies drills, local adjuvant therapies (such as cauterization), phenolic therapies or cryotherapies to reduce the rate of local recurrence \[[@B3],[@B5],[@B6]\]. ABCs of specific sites, such as the vertebral body and pelvis, can be treated with sclerosing therapies using percutaneous injections of Ethibloc, ethanol and methylprednisolone. In addition, selective arterial embolization has been recommended as an alternative therapy \[[@B7]\].
Treatment of chondroblastomas
-----------------------------
A CBT is a rare cartilage-derived bone tumor that constitutes 1% of all benign bone tumors. It predominantly affects young men and often exhibits invasiveness or malignant behavior \[[@B8],[@B9]\]. Following recommended surgical treatments, the two- to three-year recurrence rate is as high as 10% to 20%, which may partially be due to wide application of inappropriate surgical methods \[[@B9],[@B10]\]. IC, however, has been demonstrated to reduce local recurrence rates \[[@B9]-[@B11]\]. IC has been employed for the treatment of bone CBT, producing a recurrence rate of only 11% (2 out of 18) in one study \[[@B11]\]. In another study of 25 patients with a CBT, IC produced a recurrence rate of only 4.2% (1 out of 24) over an eight-year follow-up period \[[@B8]\].
Treatment of giant-cell tumors of bone
--------------------------------------
GCTs of bone constitute 4% to 8% of all primary bone tumors and predominantly occur in 20- to 40-year-old (middle-aged) individuals \[[@B12]\]. Recently, biotherapy with receptor activator of nuclear factor kappa-B ligand monoclonal antibodies and colony-stimulating factor 1 signaling pathway inhibitor has been introduced for clinical treatment of GCT \[[@B13]\], though optimal clinical treatment still depends on surgery. Conventional intralesional curettage has been reported to result in recurrence rates higher than 30% in GCT \[[@B12],[@B14]\], whereas wide excision or segmental tumor resection has been reported to produce much lower recurrence rates of only 7% to 10%. These methods, however, require complex bone and soft tissue repair and reconstruction combined with revision surgery following segmental resection of tumors. Thus, the high rate of complications associated with these procedures necessitates careful consideration of the risk of recurrence versus maximum conservation of joint function \[[@B14],[@B15]\].
Both primary and recurrent GCT of bone that are classified as Enneking stage II are conventionally treated with IC, and joint function surrounding the lesion is generally conserved to the highest possible extent \[[@B2],[@B13],[@B16]\]. In addition, IC has been widely recommended for the treatment of cyst walls \[[@B2],[@B12],[@B16],[@B17]\]. A retrospective analysis of 349 GCT cases demonstrated that IC significantly reduced the rate of local recurrence of Enneking stage II and partial stage III tumors to only 11.1% \[[@B17]\].
Although extended resection with conservation of the bone cortex has previously been applied to treat locally aggressive bone tumors of the extremities, the treatment strategy proposed in this study mainly addresses the problems of bone cortex preservation and conservation of the muscle (tendon) insertion. The clinical efficacy of extended resection with osteotomy, fenestration and conservation of the muscle (tendon) insertion was investigated in patients with locally aggressive musculoskeletal tumors, including ABC, CBT and GCT types. Rates of recurrence and functional restoration were assessed, providing potential prognostic indicators that may aid clinical treatment selection for patients with locally aggressive musculoskeletal tumors.
Methods
=======
Participants
------------
From 2004 through 2009, a total of 29 patients with locally aggressive bone tumors of the extremities were admitted to our hospital. Of these, only those with tumors adjacent to the muscle (tendon) insertion of the proximal humerus or femur were included in the present study. All included patients also had tumors of Enneking stage II. Patients were excluded if presurgical pathological examination using fine-needle aspiration cytology or open biopsy showed signs of a malignant bone tumor.
For each included patient, bone cortex injury at the lesion site was identified according to presurgical X-rays and computed tomography (CT) scans, and the extent of each lesion was identified using magnetic resonance imaging (MRI).
Surgical procedures
-------------------
For all patients, brachial plexus anesthesia, lumbar anesthesia or general anesthesia was administered according to the lesion site, and intravenous antibiotics were administered 30 minutes prior to the surgery.
Each patient exhibiting a tumor of the proximal femur was placed in the lateral position, and the insertion of the abductor muscles (gluteus medius and minimus muscles) and vastus lateralis muscle were exposed using the lateral femoral approach (resection of biopsy tissues). The insertion of the abductor muscle was maintained intact, and osteotomy was performed via the greater trochanter. The greater trochanter bone flap was uncovered to expose lesions. Lesion tissues in the intertrochanteric area and femoral neck were completely excised by scraping with a curet at various angles until the bursa wall was reached, and the grooved region of the bony crest was progressively removed with a high-speed drill. The bursa wall is the inner wall or bony crest of the lesion, representing the boundary of aggressive bone tumors. Extended curettage requires thorough curettage of the lesion tissue including the residual lesions in the groove of the bursa to achieve complete excision of the lesion.
The wound was cauterized using an argon gas knife followed by normal saline washing. This procedure was repeated three times. Femoral head and neck areas beyond the reach of drill and knife techniques were further treated by immersion in anhydrous ethanol for 15 min. Greater trochanter bone flap lesions on the medial side and bony surface were treated similarly (Figure [1](#F1){ref-type="fig"}B,C).
![**Male patient, 19 years old with an aneurysmal bone cyst of the right proximal femur. (A)** Presurgical X-ray and computed tomography scan showing expansive cystic change in lesions in the right proximal femur, characterized by bony partition and thinning bone cortex. Presurgical magnetic resonance imaging indicates that lesions exhibit unequal signal intensities with lobular changes. **(B)** Sketch map of presurgical osteotomy line with osteotomy line (red) showing the greater trochanteric osteotomy and conserved insertion of the gluteus medius and minimus muscles. **(C)** Intraoperative cauterization using an argon gas knife, anhydrous ethanol immersion, mixed bone graft with autogenous bone, and artificial bone and reduction osteotomy. **(D)** X-ray at postsurgical month 24 revealing scattered calcifications on the graft regions in the proximal femur without right hip pain. **(E)** Internal fixation removed after postsurgical month 36. X-ray revealing scattered calcifications on the graft regions of the right hip, not significantly different from 24-month X-rays. No hip pain, normal weight-bearing, good Enneking score and normal hip joint activity were reported.](1477-7819-11-54-1){#F1}
Each patient exhibiting a tumor of the proximal humerus involving the intertubercular regions was placed in the beach chair position, and an incision was made on the anterior third of the deltoid muscle prior to resection of biopsy tissues. The rotator cuff insertion was maintained intact, and osteotomy and fenestration of the greater trochanter were performed. The muscle bone flap was uncovered and lesions were excised using the same method previously described for a tumor of the proximal femur (Figure [2](#F2){ref-type="fig"}B,C).
![**Male patient, 19 years old, with chondroblastomas of the left proximal humerus. (A)** Presurgical X-ray and computed tomography scan showing expansive lesion growth in the proximal humerus and thinning of the bone cortex with bony septum and scattered calcifications. Presurgical magnetic resonance imaging indicates T2-weighted images of tumors with inhomogeneous moderate signals and scattered high signals complicated by edema and swelling of the surrounding soft tissues and the rotator cuff insertion adjacent to tumors. **(B)** Sketch map of osteotomy of the greater tuberosity of the humerus showing osteotomy line (red) without injury to the subscapularis muscle, infraspinous muscle, or insertion of the supraspinatus muscle. **(C)** Intraoperative treatment showing osteotomy and fenestration performed via the greater tuberosity of the humerus and repeated cauterization with an argon gas knife. Autogenous and artificial bones were grafted. The defective region of the aneurysm shell was then covered with autologous iliac bone containing cortical bone and treated with internal fixation using an anatomical titanium alloy plate. **(D)** X-ray at postsurgical months 12 and 24 revealing no local recurrence. **(E)** Internal fixation was removed after 24 months, resulting in normal left shoulder joint function, good Enneking score and no pain.](1477-7819-11-54-2){#F2}
After complete lesion curettage of tumors of either the proximal femur or humerus, autogenous iliac cancellous bone grafts and/or artificial bones were implanted immediately after tumor resection in the same surgery (stage I), rather than performing functional reconstruction in a separate operation. Bone cement augmentation was routinely administered according to standard protocols, particularly for patients with GCT. The proximal femur was fixed with a titanium alloy plate. The proximal humerus was fixed with anatomical humeral plates and screws (Figures [1](#F1){ref-type="fig"}C and [2](#F2){ref-type="fig"}C).
Postsurgical treatment
----------------------
All patients were administered continuous intravenous antibiotics for 24 h after surgery, and drainage tubes were removed between 24 h and 48 h after surgery. Postsurgical bracing was applied for four to six weeks after surgery. Isometric muscle strength training of the upper extremity was performed within 24 h after surgery in most patients.
Follow-up
---------
All patients underwent postsurgical follow-up for a minimum of two years, and the time to full weight-bearing was recorded for each patient. Periodic postsurgical X-rays were performed to monitor bone healing and track local recurrence. Imaging investigations, including MRI, were used to confirm recurrence of lesions. During the follow-up period, CT and MRI were performed only if deemed necessary because of abnormal X-ray or functional assessments. Musculoskeletal function was evaluated using the Enneking postoperative Musculoskeletal Tumor Society score, with a maximum of 30 points \[[@B1]\]. Functional recovery of more than 70% was rated very good, 60% to 70% good, and 50% to 60% moderate; less than 50% recovery, amputation or death was rated poor.
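A minimal sketch of this rubric as code (the function name is ours, not from the Musculoskeletal Tumor Society; the handling of the 60% boundary follows the ratings in Table 1, where a score of 18/30 = 60% was rated moderate):

    # Sketch of the rating rubric described above (function name is ours).
    def enneking_rating(score: float, max_score: float = 30.0) -> str:
        pct = 100.0 * score / max_score
        if pct > 70:
            return "very good"
        elif pct > 60:
            return "good"
        elif pct >= 50:
            return "moderate"
        else:
            return "poor"  # <50% recovery, amputation, or death

    # Consistent with Table 1: 29/30 -> "very good", 20/30 -> "good",
    # 18/30 (exactly 60%) -> "moderate".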
Statistical analysis
--------------------
All data were analyzed using SPSS version 11.0 (SPSS Inc., Chicago, IL, USA). Quantitative data for age, follow-up duration and Enneking score are expressed as mean ± SD.
Results
=======
Demographic and clinical characteristics of included patients
-------------------------------------------------------------
Of 29 patients treated, 15 presented with locally aggressive tumors of the proximal femur (nine patients) or humerus (six patients), including eight men and seven women with a mean age of 29 ± 7.75 years (range, 18 to 42). An ABC was diagnosed in seven patients (five primary and two recurrent); CBT was diagnosed in three patients; and GCT of bone was observed in five patients. Tumors were located in the proximal humerus in six patients and in the proximal femur in nine patients (Table [1](#T1){ref-type="table"}).
######
Characteristics and follow-up outcomes of 15 malignant musculoskeletal tumor patients
  | Gender | Age (years) | Site | Diagnosis | Treatment | Follow-up (months) | Recurrence (months after surgery) | Enneking score | Rating of score |
  |---|---|---|---|---|---|---|---|---|
  | F | 22 | PH | ABC | IC+BG | 37 | - | 29 | Very good |
  | M | 19 | PF | ABC | IC+BG | 47 | - | 29 | Very good |
  | M | 22 | PH | ABC | IC+BG | 36 | - | 29 | Very good |
  | F | 27 | PF | ABC-R | IC+BG | 72 | 9 | 20 | Good |
  | M | 21 | PF | ABC | IC+BG | 35 | - | 28 | Very good |
  | F | 28 | PF | ABC-R | IC+BG | 45 | - | 29 | Very good |
  | M | 27 | PF | CBT | IC+BG | 52 | - | 29 | Very good |
  | M | 37 | PF | GCT | IC+BC | 46 | - | 29 | Very good |
  | M | 19 | PH | CBT | IC+BG | 54 | - | 29 | Very good |
  | F | 33 | PF | GCT | IC+BC | 43 | - | 28 | Very good |
  | F | 38 | PF | GCT | IC+BC | 66 | 12 | 18^a^ | Moderate^a^ |
  | F | 42 | PH | GCT | IC+BC | 53 | - | 29 | Very good |
  | M | 38 | PH | CBT | IC+BG | 45 | - | 28 | Very good |
  | F | 26 | PH | ABC | IC+BG | 25 | - | 19 | Good |
  | M | 36 | PF | GCT | IC+BC | 67 | - | 29 | Very good |
  | **Totals:** M 8/15 (53%), F 7/15 (47%) | 29 ± 7.75 (18 to 42)^b^ | PF: 9/15 (60%), PH: 6/15 (40%) | ABC: 7/15 (47%)^c^, CBT: 3/15 (20%), GCT: 5/15 (33%) | IC+BC: 5/15 (33%), IC+BG: 10/15 (66%) | 48 ± 12.95 (25 to 72)^b^; PH: 42 ± 11.2 (25 to 67)^b^; PF: 53 ± 12.7 (35 to 72)^b^ | Two | 27 ± 4.07 (18 to 29)^b^; PH: 27 ± 4.3 (19 to 29)^b^; PF: 27 ± 4.0 (18 to 29)^b^ | Very good: 12/15 (80%) |
ABC, aneurysmal bone cyst; ABC-R, recurrent aneurysmal bone cyst; BC, bone cement augmentation; BG, bone grafting; CBT, chondroblastoma; F, female; GCT, giant-cell tumor of bone; IC, extended curettage; M, male; PH, proximal humerus; PF, proximal femur.
^a^Postsurgical scores and ratings after recurrence prior to secondary surgery; ^b^mean ±SD (range); ^c^two recurrent cases.
Proximal tumor and bone flap characteristics
--------------------------------------------
In the proximal femur, most lesions were widely distributed in the intertrochanteric area and femoral neck, whereas proximal humerus tumors demonstrated expansive growth in the humeral head, leading to complications related to thinning of the bone cortex (Figures [1](#F1){ref-type="fig"}A and [2](#F2){ref-type="fig"}A). No lesions penetrated the bone cortex, and the surface of the articular cartilage remained intact in all patients. Soft-tissue edema without lesion infiltration was observed in some cases. The bone flap with tendon insertion was reduced in all cases, and small bone defects remained in three patients following lesion curettage.
After the tumor-shell bone was treated at the osteotomy site, remaining small bone defects were covered with sheets of autogenous iliac bone containing the cortical bone. For the cases with GCT of bone, the cavity was filled with bone cement, whereas the cavity of other cases were filled with autogenous cancellous bone and/or artificial bone grafts.
Postsurgical outcomes
---------------------
The mean follow-up for all patients was 48 ± 12.95 months (range, 25 to 72). During this period, no short-term complications, such as hematoma or infection, were observed. Furthermore, no long-term complications were observed, including pathological fractures, local malignant transformation or distant metastasis. Hip and shoulder joint activity gradually improved in patients treated with bracing during the two weeks following surgery, and all braces of the upper extremity were removed between four and six weeks after surgery. Most patients were able to initiate isometric muscle strength training of the lower extremity within 24 h of surgery. Patients with femoral tumors began partial weight-bearing and walking with the aid of crutches at eight weeks, and all braces were removed by postsurgical week 12.
One case of recurrent ABC had previously been treated with curettage of lesions via fenestration underneath the tuberosity prior to IC with fenestration, and developed hip pain within six months of surgery. X-ray follow-up at postsurgical month 12 revealed local bone-graft absorption, and slight hip pain, particularly when walking, persisted at postsurgical month 36. A second surgery was advised, but the patient considered the hip pain tolerable and could walk normally with weight-bearing; the patient declined surgery and is currently being followed up.
Tumor recurrence
----------------
Local recurrence was observed in only 2 of 15 patients (13%): at nine months in one patient with an ABC of the proximal femur, who underwent a second surgery, and at 12 months in one patient with a GCT. The patient with recurrent GCT underwent secondary IC and bone-cement augmentation. During postsurgical follow-up of the secondary surgeries, no additional recurrence was reported. No local recurrence was detected in any of the other 13 patients (Table [1](#T1){ref-type="table"}).
Enneking scores
---------------
The mean Enneking score for all patients was 27 ± 4.07 (range, 18 to 29): 27 ± 4.3 (19 to 29) for the upper limb and 27 ± 4.0 (18 to 29) for the lower limb. The lowest Enneking score was recorded in the patient with recurrent GCT. With the exception of this patient, all patients reported good or very good outcomes. Very good outcomes were reported in 80% of all patients (12 out of 15) and in 92% of patients without recurrence (12 out of 13) (Table [1](#T1){ref-type="table"} and Figure [3](#F3){ref-type="fig"}).
![**Postsurgical follow-up of Enneking score and recurrence.***Enneking score ratings*: Very good (\>70); good (60 to 70); moderate (50 to 60); and poor (\<50).](1477-7819-11-54-3){#F3}
Discussion
==========
Patients with locally aggressive bone tumors such as ABC, GCT and CBT were effectively treated with IC (aggressive curettage) using high-speed drilling in combination with local adjuvant measures, including repeated cauterization of the cyst wall with an argon gas knife and anhydrous-alcohol immersion of deep regions. These findings indicate that successful treatment of locally aggressive bone tumors of Enneking stage II is achievable using these methods in practical clinical settings. In the current study, the postsurgical recurrence rate was 13.3% (2 out of 15), much lower than the recurrence rates reported in several similar studies \[[@B3],[@B11],[@B12],[@B14]\]. Therefore, IC may effectively reduce postsurgical local recurrence of bone tumors of each of these types.
Following IC treatment, satisfactory functional restoration was achieved by all patients in the current study. Using IC to treat CBT of bone, van der Geest *et al*. \[[@B11]\] reported a postoperative Musculoskeletal Tumor Society score of 93% (28 out of 30). In another study, IC for the treatment of 24 cases of CBT resulted in very good or good functional outcomes (Enneking scores 28 to 30) in 87.5% of patients \[[@B8]\]. These treatments, however, did not consider the importance of the muscle (tendon) insertion. The current method allows optimal conservation of the bone cortex and avoids intraoperative damage to the muscle (tendon) insertion. In the proximal femur, for example, when the muscle insertion is protected during surgery, traditional curettage can be performed with fenestration underneath the tuberosity or on the posterior femoral neck; however, the relatively small operating field and the complexity of the operation can result in incomplete resection of tumors that widely involve the intertrochanteric area and femoral neck. As a result, the risk of postsurgical recurrence and osteonecrosis of the femoral head is high.
The current study employed the surgical technique described by Ganz *et al*. \[[@B18]\], who suggested that lesion exposure can be effectively achieved through osteotomy of the greater trochanter. This strategy not only protects the muscle insertion but also provides a larger bone window (about 5 cm × 3 cm) that enlarges the operating field of view, and it is optimal for achieving IC of lesions in the intertrochanteric area and femoral head (Figure [1](#F1){ref-type="fig"}B,C). Similarly, for tumors involving the proximal humerus, curettage of lesions with osteotomy and fenestration of the greater tuberosity of the humerus achieved good exposure while conserving the anatomical structures of the rotator cuff insertion, as confirmed by imaging studies (Figure [2](#F2){ref-type="fig"}B,C).
After complete treatment of the lesions using the technique applied in the current study, the bone flap bearing the muscle insertion was reduced and secured with effective internal fixation. This IC method was designed to preserve the integrity of the muscle insertion, enabling reconstruction with bone-to-bone fixation rather than tendon-to-bone fixation following curettage. As a result, postsurgical healing was primarily bone healing instead of tendon-bone healing, which can produce more pronounced functional impairments.
In the present study, postsurgical follow-up revealed that about 93% of patients achieved very good or good functional results according to Enneking scores, which is similar or superior to the results of previous studies \[[@B8],[@B11],[@B19]\]. Although these results indicate that the current surgical strategy is superior for conservation of joint function and postsurgical rehabilitation in patients with bone tumors of several types, it is important to consider that this study also has some limitations. In particular, the small number of patients and the use of only a single-center patient population, which may not be representative of general patient populations due to the specialized nature of the treatment facility, must be considered. Furthermore, early functional exercise following surgery and brace protection may have also impacted these positive findings. These techniques, however, merit further study in larger cohorts and more varied patient populations due to their promising potential for dramatically reducing recurrence rates and enhancing functional outcomes. In addition, application of this technique for secondary surgery for recurrent tumors should be more carefully explored, though positive results can be expected based on these preliminary findings.
Conclusions
===========
Effective treatment of locally aggressive bone tumors requires comprehensive analysis of the characteristics, site and aggressiveness (Enneking stage) of tumors. When determining an appropriate treatment strategy, clinicians should carefully weigh the benefits of function conservation in the proximal limb and joint with complete tumor resection. The results of previous studies and the currently reported study suggest that extended resection of tumors with osteotomy, fenestration and conservation of muscle (tendon) insertion can be combined with effective internal fixation to reduce postsurgical recurrence and protect joint function. Thus, this strategy may be effective and feasible for treating primary or recurrent locally aggressive bone tumors classified as Enneking stage II in the proximal extremities, particularly those of the femur and humerus.
Consent
=======
Written informed consent was obtained from the patients for publication of this report and any accompanying images.
Abbreviations
=============
ABC: aneurysmal bone cyst; CBT: chondroblastoma; CT: computed tomography; GCT: giant-cell tumor; IC: extended curettage; MRI: magnetic resonance imaging.
Competing interests
===================
The authors declare that they have no competing interests.
Authors' contributions
======================
Xia Jun and FeiYan Chen designed the study protocol, Xia Jun, YiBing Wei, SiQun Wang and FeiYan Chen performed the operations. SiQun Wang, JianGu Wu, and GangYong Huang participated the postoperative care, follow-up work and data statistics. Jie Chen and JingSheng Shi performed the literature research and manuscript preparation. All authors read and approved the final manuscript. | {
"perplexity_score": 370.6,
"pile_set_name": "PubMed Central"
} |
2011 South American Championships in Athletics
The 2011 South American Championships in Athletics were the 47th edition of the South American Championships, organised under the supervision of the CONSUDATLE. They were held at the National Center of High Performance Athletics (Centro Nacional de Alto Rendimiento Deportivo, CeNARD) in Buenos Aires, Argentina from 2 to 5 June 2011. Forty-four track and field events were contested, with the number of contests split evenly between the sexes. A total of 345 athletes participated at the championships.
It was the first time since 1967 that the city had hosted the event. Brazil continued its dominance at the continental competition, winning the most medals of the fourteen participating countries (51 in total, 21 of them gold). It also retained both the men's and women's title on points. Colombia was the next most successful nation, taking twelve gold medals and thirty-three overall, while the host nation Argentina came third with five golds and twenty medals altogether.
In the events, two South American records were set in the men's and women's 20,000 m track walk competition. Although cold weather conditions affected performances, a total of eight Championships records were improved over the course of the four-day competition, which also saw ten national records beaten.
On the first day, Brazil's Fabiana Murer won the women's pole vault in a championship record, while Argentine Jennifer Dahlgren achieved the same feat in the women's hammer throw. Reigning Olympic champion Maurren Maggi won her sixth title in the long jump. On day two, Juan Ignacio Cerra won his ninth hammer throw gold medal in the history of the event, while Luiz Alberto de Araújo made his breakthrough in the men's decathlon – a championship record of 7944 points made him the fourth-best South American of all time.
The women's track events on day three saw Ana Cláudia Silva complete a sprint double over 100 and 200 metres. Rosibel García did the middle-distance equivalent, taking the titles over 800 and 1500 metres. On the final day, Simone da Silva of Brazil won the women's 10,000 metres in 31:59.11 minutes, making her the second fastest South American runner over the distance.
Records
Medal summary
For full event details see 2011 South American Championships in Athletics – Results
Men's results
Track
Field
Women's results
Track
Field
Medal table
Points table
Note: Points are scored by athletes' finishing positions in event finals. All data from the official website.
Participating nations
(71) (Host nation)
(1)
(11)
(78)
(34)
(56)
(22)
(3)
(25)
(15)
(1)
(13)
(15)
References
Day reports
Biscayart, Eduardo (2011-06-02). Murer vaults to world season leading 4.70m in Buenos Aires - South American Championships Day 1. IAAF. Retrieved on 2011-06-05.
Biscayart, Eduardo (2011-06-04). Cerra wins ninth Hammer Throw title in Buenos Aires – South American Champs Day 2. IAAF. Retrieved on 2011-06-05.
Biscayart, Eduardo (2011-06-05). Windy 14.59m Triple Jump for Ibargüen in Buenos Aires – South American Champs, Day 3. IAAF. Retrieved on 2011-06-05.
Biscayart, Eduardo (2011-06-06). Brazil retains South American title in Buenos Aires – Final Day. IAAF. Retrieved on 2011-06-06.
External links
Official website
CONSUDATLE Official website
2011
Category:Sport in Buenos Aires
South American
Athletics
Category:International athletics competitions hosted by Argentina
Category:2011 in South American sport | {
"perplexity_score": 200.5,
"pile_set_name": "Wikipedia (en)"
} |
Kaszewska Wola
Kaszewska Wola is a village in the administrative district of Gmina Przytyk, within Radom County, Masovian Voivodeship, in east-central Poland. It lies approximately north-east of Przytyk, north-west of Radom, and south of Warsaw.
References
Kaszewska Wola | {
"perplexity_score": 33.5,
"pile_set_name": "Wikipedia (en)"
} |
S-Isopetasin, a sesquiterpene of Petasites formosanus, allosterically antagonized carbachol in isolated guinea pig atria.
We investigated the antimuscarinic effect of S-isopetasin in isolated guinea pig atria to clarify whether it preferentially acts on muscarinic M2 or M3 receptors. Tension changes of the isolated atria were recorded isometrically on a polygraph. S-Isopetasin at 50 and 100 microM significantly inhibited baseline contractile tension and heart rate, whereas atropine at 1 microM enhanced both. S-Isopetasin (10-100 microM) did not significantly alter the concentration-negative inotropic response curves of carbachol (CCh) in left atria. S-Isopetasin (10-100 microM) allosterically antagonized the negative inotropic and chronotropic responses induced by CCh in spontaneously beating right atria, since the slopes of the Schild plots differed significantly from unity. In contrast, atropine (0.01-1 microM) competitively antagonized all of the above responses to CCh. The pA2 values of S-isopetasin in atria were significantly less than that of S-isopetasin in guinea pig trachealis, suggesting that S-isopetasin may preferentially act on tracheal muscarinic M3, but not cardiac muscarinic M2, receptors. However, atropine showed no preference for either receptor. This finding suggests that S-isopetasin may be of benefit in the treatment of asthma.
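For context, the Schild analysis behind the "slope differing from unity" claim can be sketched as follows; the concentrations and dose ratios below are invented placeholders, not the study's data:

    # Schild regression: log(DR - 1) vs log[antagonist concentration].
    # Competitive antagonism predicts a slope of ~1, with pA2 at the
    # x-intercept; a slope significantly different from unity points to
    # non-competitive (e.g., allosteric) antagonism.
    import numpy as np
    from scipy import stats

    conc = np.array([10e-6, 50e-6, 100e-6])   # antagonist, mol/L (assumed)
    dose_ratio = np.array([2.5, 6.0, 9.0])    # EC50 shifts (assumed)

    x = np.log10(conc)
    y = np.log10(dose_ratio - 1.0)

    slope, intercept, r, p, stderr = stats.linregress(x, y)
    pA2 = -intercept / slope  # x-intercept of the Schild plot
    t_crit = stats.t.ppf(0.975, df=len(x) - 2)
    lo, hi = slope - t_crit * stderr, slope + t_crit * stderr
    print(f"slope = {slope:.2f} (95% CI {lo:.2f} to {hi:.2f}), pA2 = {pA2:.2f}")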
"perplexity_score": 707.2,
"pile_set_name": "PubMed Abstracts"
} |
Praying God’s Word for Your Marriage
Praying God’s Word for your marriage is the best way to redirect your thoughts about your circumstances and claim the promises of God for your marriage and your spouse. Whenever I find myself discouraged about my marriage or wanting to encourage my husband, I turn to the Scriptures and begin to pray God’s word for my husband and my marriage.
Praying the Fruit of the Spirit for Your Husband/Wife
According to Galatians 5:22-23, the fruit of the Spirit is love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control (ESV). When you pray the fruit of the Spirit for your husband/wife, you are praying that the Holy Spirit would encourage him/her to walk according to God’s Word for his/her life.
Here are some great Bible verses to pray for your husband/wife specifically related to the fruit of the Spirit:
Praying God’s Word for Your Husband/Wife
When you don’t know how best to pray for your husband/wife, the Bible is a great place to start. Here we’ve gathered some common Scriptures to pray for your marriage, and we’ve included our month-long praying for your spouse challenge:
Praying for Your Husband/Wife’s Struggles
Is your husband/wife struggling with a specific area in his/her life? Here are a few common areas of struggle we’ve gathered Scripture to pray over our marriage, asking God to work in our marriage specifically related to these issues:
"perplexity_score": 325.8,
"pile_set_name": "Pile-CC"
} |
I spent five years trying to learn all these bits I need to build full stack - andrewstuart
Here's what I know in some depth following a tangible decision five years ago to invest my free time in learning all the technologies I need to build my own applications end to end:

* back end programming: Python 3
* database: Postgres
* Python database: SQLAlchemy, psycopg2
* Python web server: Falcon
* JavaScript version: ES2015 / ES7 (async & await)
* browser front end development: ReactJS (without Redux!)
* browser framework: Bootstrap 3
* desktop application development: Electron with ReactJS
* operating system: Linux
* cloud: AWS primarily, have developed with all the major cloud platforms.
* cloud services: S3, EC2, SQS, Cognito, SES, Lambda

I am happy to say after 5 years I now have a level of competence sufficient in each of these to be able to assemble the parts into a whole application.

There's a real joy in knowing that, for the most part, I have already solved most of the major problems and learning challenges required to get a substantial application built.

Five years in, my productivity is now dramatically higher than when I started on my mission.

Along the way, many, many other technologies were tried and discarded because they didn't appeal to me at a personal level.
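For readers unfamiliar with some of those pieces, a minimal hedged sketch of how Falcon and SQLAlchemy typically fit together is shown below; the route, table, and connection string are invented for illustration and this is not the poster's code:

    # Minimal Falcon + SQLAlchemy sketch (Python 3); all names are invented.
    import falcon
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://user:pass@localhost/appdb")

    class NotesResource:
        def on_get(self, req, resp):
            # Parameterized read; rows come back as named tuples.
            with engine.connect() as conn:
                rows = conn.execute(text("SELECT id, body FROM notes")).fetchall()
            resp.media = [{"id": r.id, "body": r.body} for r in rows]

    app = falcon.App()  # falcon.API() on Falcon < 3
    app.add_route("/notes", NotesResource)
    # Serve with any WSGI server, e.g.: gunicorn myapp:app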
======
tmnvix
Congratulations. It's good to look back and appreciate how far you've come.
What is it that you have built (or plan to build) with these technologies?
What you describe is very similar to my own experience - in terms of the
timeframe (previous five years) and technologies. It's been incredibly
rewarding and satisfying. About two years in I was able to make an actual
living from my new skills and knowledge.
I am about to start a project that will bring together all of the various
parts of my preferred 'stack'; Nginx, Django, React, AWS (though looking
closely at GCP), Redis, Postgres, etc... I'm also currently trying to evaluate
whether graphql would be a worthwhile addition (most likely graphene +
apollo).
~~~
andrewstuart
I've built about seven major projects, most of which are either internal or no
longer online.
www.lunikernel.com is freshly complete.
bootrino is complete but not yet released - video here:
[https://www.youtube.com/watch?v=jB4oan18MpI](https://www.youtube.com/watch?v=jB4oan18MpI)
Another one should be complete within a week or so.
------
k__
I always get asked why I won't call myself a full-stack developer when they hear
what I built in my 10 years as a dev, but it's not just a question of job
availability to me.
Yes, I got "forced" to build and deploy services at some jobs, and I even had to
work with some low-level MQTT message queues for IoT devices, but I enjoy
building front-ends and doing UX much more.
------
darth_mastah
Well done. I'm just wondering why React "without Redux!". Is it one of those
bits which did not appeal to you on a personal level? | {
"perplexity_score": 502.6,
"pile_set_name": "HackerNews"
} |
// Load the core-js polyfill modules for web timers (setTimeout/setInterval
// argument handling), setImmediate, and iterable DOM collections, then
// re-export the shared core-js namespace object.
require('../modules/web.timers');
require('../modules/web.immediate');
require('../modules/web.dom.iterable');
module.exports = require('../modules/$.core');
"perplexity_score": 1235.9,
"pile_set_name": "Github"
} |
from typing import Type

from psqlextra.models import PostgresMaterializedViewModel

from .view import PostgresViewModelState


class PostgresMaterializedViewModelState(PostgresViewModelState):
    """Represents the state of a :see:PostgresMaterializedViewModel in the
    migrations."""

    @classmethod
    def _get_base_model_class(cls) -> Type[PostgresMaterializedViewModel]:
        """Gets the class to use as a base class for rendered models."""
        return PostgresMaterializedViewModel
"perplexity_score": 2227.4,
"pile_set_name": "Github"
} |
Q:
Ubuntu linux takes longer time for incorrect passwords
When I log into my Ubuntu 8.10 box with a correct password the system figures out almost instantaneously that the password is correct and logs me in. However, if I supply an incorrect password, it takes significantly longer to figure out that the password is incorrect and to show me the login screen.
Why is this? Should it not take the same amount of time in both cases?
A:
It's a security feature to slow down people who are trying to guess your password. It takes Ubuntu the same amount of time to see if it's correct or not, but then it waits for a few seconds before letting you try again.
A:
As Dentrasi has explained - this is to make it more difficult for an attacker to carry out a brute-force attack on the password store. In almost all circumstances, you don't want to change this behavior.
If you have a good reason to (which I can't think of), you can modify it via /etc/login.defs - See the login.defs(5) man page.
FAIL_DELAY (number)
Delay in seconds before being allowed another attempt after a login failure.
Hmmm... At the end of the manpage...
Much of the functionality that used to be provided by the shadow password suite
is now handled by PAM. Thus, /etc/login.defs is no longer used by passwd(1), or
less used by login(1), and su(1). Please refer to the corresponding PAM
configuration files instead.
The appropriate PAM entry instead...
# Enforce a minimal delay in case of failure (in microseconds).
# (Replaces the `FAIL_DELAY' setting from login.defs)
# Note that other modules may require another minimal delay. (for example,
# to disable any delay, you should add the nodelay option to pam_unix)
auth optional pam_faildelay.so delay=3000000 | {
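For example, to shorten the delay to one second you would just edit that line (a sketch; on Ubuntu the entry normally lives in /etc/pam.d/login):

auth optional pam_faildelay.so delay=1000000

The delay value is given in microseconds, so 1000000 corresponds to one second.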
"perplexity_score": 571.7,
"pile_set_name": "StackExchange"
} |
Smart nose of the hand held explosive detector
Unlike a human nose, the hand held explosive detector cannot be fooled by other smells that might permeate the surrounding environment. A detector is not a nose in the sense we as people normally understand one to work. Human and animal olfactory glands react to the presence of molecules in the air, and the brain interprets that reaction as a smell, usually as good or bad. Animals, dogs especially, have more of these olfactory sensors, which is why they can smell things that people cannot. But technology has no brain to teach, so how does it know that there are explosives present? It uses a chemical detector that samples all the chemical molecules in the air.
These sensors are optimised for specific chemical compounds. In the case of the hand held explosive detector, the sensors used are optimised for the chemical compounds associated with explosives. While some sensors might detect more than one compound, it is more effective to apply technology for the exact compounds found in explosives.
Some of the sensors are capable of detecting only a few parts per million of the chemical in the air or on a surface such as clothing. The hand held explosive detector makes it possible for law enforcement personnel to search vehicles, or luggage at airports, or the contents of containers unloaded from cargo ships quickly and efficiently.
You might think that these compounds could also be found in other chemicals or products, and you would be correct. In some instances it might be possible to fool specially trained dogs, but a sensor is that much harder to fool. It does not matter if one compound is overwhelming in smell, a sensor cannot smell. It reacts purely to the chemicals that are present. Law enforcement personnel are trained to know how criminals are trying to fool dogs and technology, and they selectively employ that knowledge to find the explosives.
You will agree that law enforcement would rather check a container to ensure the chemicals are not intended for explosives than take a single reading on faith, only to be killed in an explosion. Thus it makes much more sense to use a hand held explosive detector.
"perplexity_score": 377,
"pile_set_name": "Pile-CC"
} |
A DOCTOR has said she is prepared to go to jail rather than pay a €150 fine for parking in a disabled space in the course of her job, claiming she was “forced” to by the HSE.
Dr Maeve White, a south Dublin-based community medical officer said she was left with no choice but to park in the space outside a health centre. She said an ongoing parking problem in the area meant the disabled space was her only option and she had a duty of care to her patients.
"I will never, ever pay this fine," she told Dun Laoghaire District Court.
Judge Anne Watkin imposed the fine and warned she would go to prison if she did not pay it.
Dr White, of Brookvale Road, Rathfarnham, pleaded guilty after being summonsed for parking illegally in a disabled space.
The court heard the offence happened at St Brigid’s, Church Road, Stillorgan at 2.10pm on February 26 last.
Dr White parked her car outside Stillorgan Health Centre, where she was due to carry out a clinic that afternoon, and maintains she could find no legal parking spaces.
“I did knowingly park in the disabled space, so I am guilty, yes,” Dr White told Judge Watkin. “The situation is I was forced by my employer, the HSE, to park there. The only other option is to drive away.”
“Nobody forced you,” the judge replied. “Did they have a gun to your head?”
The judge appreciated that the doctor was “between a rock and a hard place.” She said she was “in luck” as the fine had not increased and was €150.
If it happened again, she said, it would be €250.
“I will never, ever pay this fine,” Dr White said, to which the judge replied: “I don’t need to hear that you are going to be in contempt of court.”
Dr White told the court the HSE had “abused” her.
“This is not a place to start dealing with your gripes against somebody else,”the judge said. “This is a court of law.”
Dr White was there because she had committed an offence, the judge said.
Judge Watkin said Dr White could go to jail for not paying the fine and that she should take advice “if you think you are making a statement against the HSE by not paying a fine owed to another organisation because you have inconvenienced disabled people.”
Dr White said she believes the HSE is responsible for paying the fine, but they have refused.
She said there are no designated parking spaces for staff or patients at Stillorgan Health Centre and the disabled spaces in front of the clinic are “rarely used.”
Parking at Stillorgan Shopping Centre is nearly 1km away and she had bags with heavy equipment, as well as having undergone knee surgery in recent months.
“I was faced with a dilemma; should I break the law and park in the disabled parking space and do the clinic, or should I drive off and abandon children with appointments, whose parents take time off work to be there,” she said. “I chose to put patients first and I parked in the space.”
The HSE cannot comment on an individual case, a spokesperson said.
Online Editors | {
"perplexity_score": 385.7,
"pile_set_name": "OpenWebText2"
} |
Amyotrophic lateral sclerosis (ALS) and spinal muscular atrophy (SMA) are fatal neurological disorders that involve the selective degeneration of spinal motor neurons. SMA, the most common genetic cause of infant mortality, is a monogenic disorder caused by widespread deficiency in the survival motor neuron (SMN) protein due to deletion of the SMN1 gene. In contrast, ALS is predominantly a sporadic disorder, but in a minority of familial cases, mutations in over 20 different genes cause motor neuron degeneration. Genetic and molecular studies increasingly suggest that ALS and SMA may share common underlying mechanisms of disease. This project focuses on one form of familial ALS caused by mutations in the RNA binding protein fused in sarcoma (FUS) - which are associated with a broad range of clinical phenotypes including some of the most aggressive, juvenile-onset forms of the disease - and the possible role of SMN biology in the pathogenesis of FUS-dependent motor neuron degeneration. SMN has a well-established function in the assembly of small nuclear ribonucleoproteins (snRNPs) involved in diverse mRNA processing pathways and increasing evidence links SMN-dependent RNA dysregulation with the etiology of SMA. Remarkably, recent studies in cultured mammalian cells and ALS patients' fibroblasts have shown that FUS depletion or expression of ALS-linked FUS mutations disrupt the normal localization of SMN to nuclear bodies known as Gems. Furthermore, FUS has been shown to associate with SMN as well as specific snRNPs whose biogenesis is SMN-dependent and might be disrupted by ALS-linked FUS mutations. Together, these findings suggest that FUS and SMN are functionally linked through a shared molecular pathway(s) and support the view that SMA and ALS are related motor neuron diseases. However, the normal requirement of FUS for snRNP biogenesis and the pathogenic impact of FUS mutations on SMN biology have not yet been defined mechanistically, and the contribution of SMN dysfunction to FUS-ALS pathology remains unknown. To address these outstanding questions directly, our project takes a systematic, multi-disciplinary approach involving novel mouse models of FUS-dependent ALS to explore potential SMN-dependent mechanisms of FUS-mediated motor neuron degeneration. In Aim 1, we will investigate the phenotypic effects of both reduced and increased SMN expression on FUS-dependent motor neuron pathology in mouse models of ALS. In Aim 2, we will employ a comprehensive set of molecular approaches to establish the functional relevance of normal and pathogenic FUS-SMN interactions in the pathway(s) of snRNP biogenesis in motor neurons using a combination of cellular and animal model systems. Collectively, these studies aim to establish convergent mechanisms in ALS and SMA and will yield a more complete understanding of the biology of FUS and SMN that is relevant to motor neuron survival. Identification of shared molecular pathways contributing to death and dysfunction of motor neurons in SMA and ALS may also expand the range of therapeutic targets for these diseases.
"perplexity_score": 254.2,
"pile_set_name": "NIH ExPorter"
} |
The incorporation of prior genomic information does not necessarily improve the performance of Bayesian linkage methods: an example involving sex-specific recombination and the two-point PPL.
We continue statistical development of the posterior probability of linkage (PPL). We present a two-point PPL allowing for unequal male and female recombination fractions, thetaM and thetaF, and consider alternative priors on thetaM, thetaF. We compare the sex-averaged PPL (PPLSA), assuming thetaM = thetaF, to the sex-specific PPL (PPLSS) in (thetaM, thetaF), in a series of simulations; we also compute the PPLSS using alternative priors on (thetaM, thetaF). The PPLSS based on a prior that ignores prior genomic information on sex specific recombination rates performs essentially identically to the PPLSA, even in the presence of large thetaM, thetaF differences. Moreover, adaptively skewing the prior, to incorporate (correct) genomic information on thetaM, thetaF differences, actually worsens performance of the PPLSS. We demonstrate that this has little to do with the PPLSS per se, but is rather due to extremely high levels of variability in the location of the maximum likelihood estimates of (thetaM, thetaF) in realistic data sets. Incorporating (correct) prior genomic information is not always helpful. We recommend that the PPLSA be used as the standard form of the PPL regardless of the sex-specific recombination rates in the region of the marker in question. | {
"perplexity_score": 597.6,
"pile_set_name": "PubMed Abstracts"
} |
Q:
Is there a way to use FINDSTR with non-ASCII (in this case Japanese/Chinese) characters in batch?
I have a list of Japanese Kanji and their pronunciations saved in a text file (JouyouKanjiReadings.txt) like this
亜 ア
哀 アイ,あわれ,あわれむ
愛 アイ
悪 アク,オ,わるい
握 アク,にぎる
圧 アツ
(each gap is made by pressing TAB)
and I have a script like this
@echo off
set /p text=Enter here:
echo %text%>Search.txt
echo.
findstr /G:"Search.txt" JouyouKanjiReadings.txt || echo No Results && pause > nul && exit
pause > nul
However, when I run the script, I always get "No Results". I tried with English characters and it worked fine. I also tried the same script with this
findstr "%text%" JouyouKanjiReadings.txt || echo No Results && pause > nul && exit
but got the same results. Is there any ways to get around this? Also, I'm displaying the these characters correctly in the command prompt by using
chcp 65001
and a different font.
A:
You need to use find (which supports Unicode but not regex) instead of findstr (which supports regex but not Unicode). See Why are there both FIND and FINDSTR programs, with unrelated feature sets?
D:\kanji>chcp
Active code page: 65001
D:\kanji>find "哀" JouyouKanjiReadings.txt
---------- JOUYOUKANJIREADINGS.TXT
哀 アイ,あわれ,あわれむ
Redirect to NUL to suppress the output if you don't need it
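Applied to your script, that could look something like this (an untested sketch; it assumes the console is already set to code page 65001 with a suitable font, as in your setup):

@echo off
set /p text=Enter here:
echo.
find "%text%" JouyouKanjiReadings.txt || echo No Results && pause > nul && exit
pause > nul

Note there is no need for the intermediate Search.txt file, since find takes the search string directly.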
That said, find isn't a good solution either. Nowadays you should use PowerShell instead of cmd with all of its quirks due to compatibility legacy issues. PowerShell fully supports Unicode and can run any .NET framework methods. To search for strings you can use the cmdlet Select-String or its alias sls
PS D:\kanji> Select-String '握' JouyouKanjiReadings.txt
JouyouKanjiReadings.txt:5:握 アク,にぎる
If fact you don't even need to use UTF-8 and codepage 65001. Just store the file in UTF-16 with BOM (that'll result in a much smaller file because your file contains mostly Japanese characters), then find and sls will automatically do a search in UTF-16
Of course if there are a lot of existing batch code then you can call PowerShell from cmd like this
powershell -Command "Select-String '哀' JouyouKanjiReadings.txt"
But if it's entirely new then please just avoid the hassle and use PowerShell | {
"perplexity_score": 1598.7,
"pile_set_name": "StackExchange"
} |
IN THE SUPREME COURT OF THE STATE OF IDAHO
Docket No. 40461-2012
STATE OF IDAHO, )
) Boise, August 2013 Term
Plaintiff-Respondent, )
) 2013 Opinion No. 101
v. )
) Filed: October 2, 2013
JOSEPH RICHARD CLINTON, )
) Stephen W. Kenyon, Clerk
Defendant-Appellant. )
)
Appeal from the District Court of the Fourth Judicial District of the State of
Idaho, in and for Ada County. The Hon. Deborah A. Bail, District Judge.
The judgment of the district court is affirmed.
Shawn F. Wilkerson, Deputy State Appellate Public Defender, Boise, for appellant.
Jason M. Gray, Deputy Attorney General, Boise, for respondent.
EISMANN, Justice.
This is an appeal out of Ada County contending that the district court erred in failing to
order a mental health evaluation for a child molester prior to sentencing and abused its discretion
in sentencing the defendant to prison. We affirm the judgment of the district court.
I.
Factual Background.
Joseph Richard Clinton was indicted for the felony crime of lewd conduct with a minor
under sixteen years of age. Although he was initially found incompetent to stand trial, he was
determined to be competent after a reassessment. He pleaded guilty to the charge, and he
underwent a psychosexual evaluation prior to sentencing. He did not request an evaluation of his
mental condition pursuant to Idaho Code section 19-2522, nor did the district court sua sponte
order one. The court sentenced him to serve twenty years in the custody of the Idaho Board of
Correction, with three years of the sentence fixed and the remainder indeterminate. Clinton filed
a motion for reduction of his sentence pursuant to Idaho Criminal Rule 35, which the court
denied. Clinton then timely appealed.
His appeal was first heard by the Idaho Court of Appeals. He contended that the district
court erred in failing to sua sponte order a mental health evaluation and that it abused its
discretion in imposing the sentence. The Court of Appeals affirmed the sentence. In doing so, it
held that a trial court’s unobjected-to failure to order a mental examination prior to sentencing
would be reviewed under a manifest disregard standard rather than the fundamental error
standard announced by this Court in State v. Perry, 150 Idaho 209, 245 P.3d 961 (2010). The
State filed a petition for review regarding that issue, and we granted the State’s petition. In cases
that come before this Court on a petition for review of a Court of Appeals decision, we directly
review the decision of the lower court as if the appeal initially had come directly to this Court.
State v. Suriner, 154 Idaho 81, 83, 294 P.3d 1093, 1095 (2013). 1
II.
Did the District Court Err in Failing to Sua Sponte Order a Mental Evaluation?
Idaho Code section 19-2522(1) provides that “[i]f there is reason to believe the mental
condition of the defendant will be a significant factor at sentencing and for good cause shown,
the court shall appoint at least one (1) psychiatrist or licensed psychologist to examine and report
upon the mental condition of the defendant.” Clinton did not request a mental health evaluation
prior to his sentencing, and did not object to the failure to have that evaluation. Because the
district court’s failure to sua sponte order the evaluation did not violate a constitutional right, it
does not constitute a fundamental error that is reviewable on appeal. State v. Carter, No. 39927,
2013 WL 4398863 (Idaho Aug. 16, 2013).

1 In deciding that the manifest disregard standard survived our Perry decision, the Court of
Appeals stated:

    Initially, we note that, despite the language and apparent scope of Perry, the Perry Court neither
    addressed nor expressly overruled the manifest disregard standard this Court has consistently
    applied for many years. In other words, there was no effort by our Supreme Court to explicitly
    invalidate the manifest disregard standard, despite the opportunity to do so in Perry and in
    subsequent cases where Perry was applied to post-trial issues.

This statement reflects a misunderstanding of our standard of review. When we issue an opinion that announces a
rule of law, we do not search opinions of the Court of Appeals to see if our decision conflicts with a rule of law
previously announced by that court. Rather, we simply expect lower courts, including the Court of Appeals, to
follow decisions of this Court when there is a conflict between our decisions on an issue of law and those of the
Court of Appeals. If, in an appeal to this Court, a party relies upon the reasoning in an opinion of the Court of
Appeals, we may agree with or reject that reasoning, but even if we reject the reasoning we will not expressly
overrule the decision of the Court of Appeals because it was not our opinion. Even when we grant review in a case
that was initially decided by the Court of Appeals, we do not reverse its decision when we disagree with it, because
we hear the case anew and do not review the decision of the Court of Appeals.
III.
Did the District Court Abuse Its Discretion In Imposing the Sentence?
Clinton contends that the district court abused its discretion in imposing the sentence of
twenty years in the custody of the Idaho Board of Correction with three years fixed. “We review
the length of a sentence under an abuse of discretion standard.” State v. Al–Kotrani, 141 Idaho
66, 70, 106 P.3d 392, 396 (2005). “When a sentence is challenged as being excessively harsh,
we independently review the record on appeal, having due regard for the nature of the offense,
the character of the offender, and the protection of the public interest.” State v. Jeppesen, 138
Idaho 71, 76, 57 P.3d 782, 787 (2002). “[W]hen doing so we consider the defendant's entire
sentence.” State v. Oliver, 144 Idaho 722, 726, 170 P.3d 387, 391 (2007). However, “[w]e
presume that the fixed portion of the sentence will be the defendant’s probable term of
confinement. That is because whether or not a defendant serves longer than the fixed portion of
the sentence is a matter left to the sole discretion of the parole board . . . .” Id. (citation omitted).
“When determining whether the sentence is excessive, we must consider: (1) the protection of
society; (2) deterrence of the defendant and others; (3) the possibility of the defendant’s
rehabilitation; and (4) punishment or retribution for the defendant.” State v. Strand, 137 Idaho
457, 460–61, 50 P.3d 472, 475–76 (2002). “In order to show that the sentence imposed was
unreasonable, the defendant must show that the sentence, in light of the governing criteria, is
excessive under any reasonable view of the facts.” State v. Cannady, 137 Idaho 67, 73, 44 P.3d
1122, 1128 (2002).
When imposing sentence, the district court considered that in this case Clinton had lured
a seven-year-old girl into his mobile home, where he sexually assaulted her; that Clinton had a
prior conviction for lewd conduct with a minor stemming from an incident in which Clinton
sexually abused three young boys who were seven, nine, and eleven years of age; that in the
earlier criminal case, Clinton admitted to having molested fifty children; and that in the opinion
of the psychologist who performed the psychosexual evaluation in this case, Clinton is a
pedophile who does not have the capacity to contain his desires to sexually abuse children and is
a high risk to reoffend.
Clinton argues that the district court abused its discretion for several reasons. He asserts
that “the district court concluded, without any evidence to support its conclusion, that Mr.
Clinton’s dementia should be a [sic] aggravating factor because his sexual desires will continue
while his dementia will reduce his ability to understand his actions.” During the sentencing
hearing, the district court stated that “the defendant is suffering from dementia, which will
probably worsen his ability to understand and internalize additional counseling” and that
“[u]nfortunately, sexual impulses tend to survive dementia.”
Dementia is the progressive deterioration of cognitive function. The psychologist who
conducted the psychosexual evaluation of Clinton stated that he had low intellectual functioning
and that “individuals who have low intellectual functioning could sometimes act on their
inappropriate sexual impulses purely based on incapacity to contain sexual desires.” The
psychologist stated that Clinton’s “insight into his sexual issues appeared quite poor, and he did
not present with having the tools necessary to manage them.” With respect to Clinton’s potential
to benefit from treatment, the psychologist stated that considering that Clinton “had previously
participated in ten years of sexual offender treatment and still re-offended, in addition to taking
into account his limited intellect, there [are] concerns regarding how much more he could learn
from sexual offender treatment.” The district court did not err in its evaluation that Clinton’s
dementia increased the risk of him reoffending.
Clinton also points out that he stated he wanted to reinitiate sexual offender treatment and
that the psychologist concluded that Clinton “would be considered amenable for sexual offender
treatment.” However, the psychologist added, “Based on risk level, it was recommended
treatment took place in a structured environment.” In its judgment, the district court
recommended that Clinton “participate in Sex Offender treatment while incarcerated.” The
sentence is certainly consistent with that recommendation.
Finally, Clinton argues that he receives support from his friends, has a place to live, and
has a positive employment background. The existence of these factors does not show that the
district court abused its discretion in imposing the sentence. Just before announcing the
sentence, the court stated that “the risk in this case is quite high, and the practical solutions are
non-existent.” There is no indication that the district court abused its discretion in imposing the
sentence in this case.
IV.
Conclusion.
We affirm the judgment of the district court.
Chief Justice BURDICK, Justices J. JONES, W. JONES, and HORTON CONCUR.
"perplexity_score": 365.5,
"pile_set_name": "FreeLaw"
} |
Pararena
Mouse-controlled sci-fi ball sports game set in a parabolic arena, inspired by the 1975 movie Rollerball. Players glide on hover-boards trying to get a ball into the opponent's goal. The game was released in 1990 by John Calhoun as shareware for Mac OS and saw a commercial re-release in 1992 with 16-bit color and local multiplayer. In 2016, the developer released the full source code and game assets on GitHub.
"perplexity_score": 184.7,
"pile_set_name": "OpenWebText2"
} |
Lead at trace concentrations is quantitatively extracted with 4.7 x 10^(-4) M cryptand 222B in toluene at pH 5.5 in the presence of 5 x 10^(-4) M eosin as the counter ion. Lead from the organic phase is stripped with 1 M hydrochloric acid and determined by atomic absorption spectroscopy at 217 nm. Lead is separated from calcium, magnesium, zinc, and copper, which are associated with it in the aquatic environment. It has also been separated from multicomponent mixtures containing lead, cadmium, and zinc; lead, cadmium, potassium, and magnesium; and lead, cadmium, caesium, and zinc. The method has been extended to the separation of lead from environmental samples including effluents, sediments, and aerosols.
"perplexity_score": 209.6,
"pile_set_name": "Pile-CC"
} |
Can anyone give me an update on where things stand with our ability to enter
into ISDA's with Brazilian counterparties? (i.e., can we do it? Are there
any limitations?)
Carol St. Clair
EB 3889
713-853-3989 (Phone)
713-646-3393 (Fax)
[email protected] | {
"perplexity_score": 883.1,
"pile_set_name": "Enron Emails"
} |
---
abstract: 'We generalize the standard quantum adiabatic approximation to the case of open quantum systems. We define the adiabatic limit of an open quantum system as the regime in which its dynamical superoperator can be decomposed in terms of independently evolving Jordan blocks. We then establish validity and invalidity conditions for this approximation and discuss their applicability to superoperators changing slowly in time. As an example, the adiabatic evolution of a two-level open system is analyzed.'
author:
- 'M.S. Sarandy'
- 'D.A. Lidar'
title: Adiabatic approximation in open quantum systems
---
Introduction
============
The adiabatic theorem [@Born:28; @Kato:50; @Messiah:book] is one of the oldest and most useful general tools in quantum mechanics. The theorem posits, roughly, that if a state is an instantaneous eigenstate of a sufficiently slowly varying Hamiltonian $H$ at one time, then it will remain an eigenstate at later times, while its eigenenergy evolves continuously. Its role in the study of slowly varying quantum mechanical systems spans a vast array of fields and applications, such as energy-level crossings in molecules [@Landau:32; @Zener:32], quantum field theory [@Gellmann:51], and geometric phases [@Berry:84; @Wilczek:84]. In recent years, geometric phases have been proposed to perform quantum information processing [@ZanardiRasseti:99; @ZanardiRasseti:2000; @Ekert-Nature], with adiabaticity assumed in a number of schemes for geometric quantum computation (e.g., [@Pachos:00; @Duan-Science:01; @Pachos:02; @Fazio:03]). Moreover, additional interest in adiabatic processes has arisen in connection with the concept of adiabatic quantum computing, in which slowly varying Hamiltonians appear as a promising mechanism for the design of new quantum algorithms and even as an alternative to the conventional quantum circuit model of quantum computation [@Farhi:00; @Farhi:01].
Remarkably, the notion of adiabaticity does not appear to have been extended in a systematic manner to the arena of *open* quantum systems, i.e., quantum systems coupled to an external environment [@Breuer:book]. Such systems are of fundamental interest, as the notion of a closed system is always an idealization and approximation. This issue is particularly important in the context of quantum information processing, where environment-induced decoherence is viewed as a fundamental obstacle on the path to the construction of quantum information processors (e.g., [@LidarWhaley:03]).
The aim of this work is to systematically generalize the concept of adiabatic evolution to the realm of open quantum systems. Formally, an open quantum system is described as follows. Consider a quantum system $S$ coupled to an environment, or bath $B$ (with respective Hilbert spaces $\mathcal{H}_{S},\mathcal{H}_{B}$), evolving unitarily under the total system-bath Hamiltonian $H_{SB}$. The exact system dynamics is given by tracing over the bath degrees of freedom [@Breuer:book] $$\rho (t)=\mathrm{Tr}_{B}[U(t)\rho _{SB}(0)U^{\dag }(t)], \label{system}$$where $\rho (t)$ is the system state, $\rho _{SB}(0)=\rho (0)\otimes \rho
_{B}(0)$ is the initially uncorrelated system-bath state, and $U(t)=\mathcal{T}\mathsf{\exp }[-i\int_{0}^{t}H_{SB}(t^{\prime })dt^{\prime }]$ ($\mathcal{T}$ denotes time-ordering; we set $\hbar =1$). Such an evolution is completely positive and trace preserving [@Breuer:book; @Kraus:71; @Alicki:87]. Under certain approximations, it is possible to convert Eq. (\[system\]) into the convolutionless form $$\begin{aligned}
{\dot{\rho}}(t) &=& \mathcal{L}(t) \rho (t).
\label{eq:t-Lind}\end{aligned}$$ An important example is $$\begin{aligned}
{\dot{\rho}}(t) &=&
-i\left[ H(t),\rho (t) \right] +\frac{1}{2}
\sum_{i=1}^{N}\left([\Gamma _{i}(t),\rho (t) \Gamma^{\dagger }_{i}(t)] \right.
\nonumber \\
&&\left.+[\Gamma_{i}(t)\rho (t), \Gamma^{\dagger }_{i}(t)]\right).
\label{eq:t-Lind2}\end{aligned}$$Here $H(t)$ is the time-dependent effective Hamiltonian of the open system and $\Gamma _{i}(t)$ are time-dependent operators describing the system-bath interaction. In the literature, Eq. (\[eq:t-Lind2\]) with time-*in*dependent operators $\Gamma _{i}$ is usually referred to as the Markovian dynamical semigroup, or Lindblad equation [@Breuer:book; @Alicki:87; @Gorini:76; @Lindblad:76] \[see also Ref. [@Lidar:CP01] for a simple derivation of Eq. (\[eq:t-Lind2\]) from Eq. (\[system\])\]. However, the case with time-dependent coefficients is also permissible under certain restrictions [@Lendi:86]. The Lindblad equation requires the assumption of a Markovian bath with vanishing correlation time. Equation (\[eq:t-Lind\]) can be more general; for example, it applies to the case of non-Markovian convolutionless master equations studied in Ref. [@Breuer:04]. In this work we will consider the class of convolutionless master equations (\[eq:t-Lind\]). In a slight abuse of nomenclature, we will henceforth refer to the time-dependent generator $\mathcal{L}(t)$ as the Lindblad superoperator, and the $\Gamma
_{i}(t)$ as Lindblad operators. Returning to the problem of adiabatic evolution, conceptually, the difficulty in the transition from closed to open systems is that the notion of Hamiltonian eigenstates is lost, since the Lindblad superoperator – the generalization of the Hamiltonian – cannot in general be diagonalized. It is then not *a priori* clear what should take the place of the adiabatic eigenstates. Our key insight in resolving this difficulty is that this role is played by *adiabatic Jordan blocks of the Lindblad superoperator*. The Jordan canonical form [@Horn:book], with its associated left and right eigenvectors, is in this context the natural generalization of the diagonalization of the Hamiltonian. Specifically, we show that, for slowly varying Lindblad superoperators, the time evolution of the density matrix, written in a suitable basis in the state space of linear operators, occurs separately in sets of Jordan blocks related to each Lindblad eigenvalue. This treatment for adiabatic processes in open systems is potentially rather attractive as it can simplify the description of the dynamical problem by breaking down the Lindblad superoperator into a set of decoupled blocks. In order to clearly exemplify this behavior, we analyze a simple two-level open system for which the exact solution of the master equation (\[eq:t-Lind\]) can be analytically determined.
The paper is organized as follows. We begin, in Sec. \[closed\], with a review of the standard adiabatic approximation for closed quantum systems. In Sec. \[open\] we describe the general dynamics of open quantum systems, review the superoperator formalism, and introduce a strategy to find suitable bases in the state space of linear operators. Section \[adiabatic\] is devoted to deriving our adiabatic approximation, including the conditions for its validity. In Sec. \[example\], we provide a concrete example which illustrates the consequences of the adiabatic behavior for systems in the presence of decoherence. Finally, we present our conclusions in Sec. \[conclusions\].
The adiabatic approximation in closed quantum systems {#closed}
=====================================================
Condition on the Hamiltonian
----------------------------
To facilitate comparison with our later derivation of the adiabatic approximation for open systems, let us begin by reviewing the adiabatic approximation in closed quantum systems, subject to unitary evolution. In this case, the evolution is governed by the time-dependent Schrödinger equation $$H(t)\,|\psi (t)\rangle =i\,|{\dot{\psi}}(t)\rangle , \label{se}$$where $H(t)$ denotes the Hamiltonian and $|\psi (t)\rangle $ is a quantum state in a $D$-dimensional Hilbert space. For simplicity, we assume that the spectrum of $H(t)$ is entirely discrete and nondegenerate. Thus we can define an instantaneous basis of eigenenergies by $$H(t)\,|n(t)\rangle =E_{n}(t)\,|n(t)\rangle , \label{ebh}$$with the set of eigenvectors [$|n(t)\rangle $]{} chosen to be orthonormal. In this simplest case, where to each energy level there corresponds a unique eigenstate, *adiabaticity is then defined as the regime associated to an independent evolution of the instantaneous eigenvectors of* $H(t)$. This means that instantaneous eigenstates at one time evolve continuously to the corresponding eigenstates at later times, and that their corresponding eigenenergies do not cross. In particular, if the system begins its evolution in a particular eigenstate $|n(0)\rangle$, then it will evolve to the instantaneous eigenstate $|n(t)\rangle $ at a later time $t$, without any transition to other energy levels. In order to obtain a general validity condition for adiabatic behavior, let us expand $|\psi (t)\rangle $ in terms of the basis of instantaneous eigenvectors of $H(t)$, $$|\psi (t)\rangle =\sum_{n=1}^{D}a_{n}(t)\,e^{-i\int_{0}^{t}dt^{\prime
}E_{n}(t^{\prime })}\,|n(t)\rangle , \label{ep}$$with $a_{n}(t)$ being complex functions of time. Substitution of Eq. ([ep]{}) into Eq. (\[se\]) yields $$\sum_{n}\left( {\dot{a}}_{n}|n\rangle +a_{n}|{\dot{n}}\rangle \right)
\,e^{-i\int_{0}^{t}dt^{\prime }E_{n}(t^{\prime })}=0, \label{an1}$$where use has been made of Eq. (\[ebh\]). Multiplying Eq. (\[an1\]) by $\langle k(t)|$, we have $${\dot{a}}_{k}=-\sum_{n}a_{n}\langle k|{\dot{n}}\rangle
\,e^{-i\int_{0}^{t}dt^{\prime }g_{nk}(t^{\prime })}, \label{an2}$$where $$g_{nk}(t)\equiv E_{n}(t)-E_{k}(t). \label{eq:g}$$A useful expression for $\langle k|{\dot{n}}\rangle $, for $k\neq n$, can be found by taking the time derivative of Eq. (\[ebh\]) and multiplying the resulting expression by $\langle k|$, which reads $$\langle k|{\dot{n}}\rangle =\frac{\langle k|{\dot{H}}|n\rangle }{g_{nk}}\quad (n\neq k). \label{knee}$$Therefore, Eq. (\[an2\]) can be written as $${\dot{a}}_{k}=-a_{k}\langle k|{\dot{k}}\rangle -\sum_{n\neq k}a_{n}\frac{\langle k|{\dot{H}}|n\rangle }{g_{nk}}\,e^{-i\int_{0}^{t}dt^{\prime
}g_{nk}(t^{\prime })}. \label{anf}$$Adiabatic evolution is ensured if the coefficients $a_{k}(t)$ evolve independently from each other, i.e., if their dynamical equations do not couple. As is apparent from Eq. (\[anf\]), this requirement is fulfilled by imposing the conditions $$\max_{0\le t\le T} \left\vert \frac{\langle k|{\dot{H}}|n\rangle }{g_{nk}}\right\vert \,\ll\,
\min_{0\le t\le T} \left\vert{g_{nk}}\right\vert, \label{vcc}$$ which serves as an estimate of the validity of the adiabatic approximation, where $T$ is the total evolution time. Note that the left-hand side of Eq. (\[vcc\]) has dimensions of frequency and hence must be compared to the relevant physical frequency scale, given by the gap $g_{nk}$ [@Messiah:book; @Mostafazadeh:book]. For a discussion of the adiabatic regime when there is no gap in the energy spectrum see Refs. [@Avron:98; @Avron:99]. In the case of a degenerate spectrum of $H(t)$, Eq. (\[knee\]) holds only for eigenstates $|k\rangle $ and $|n\rangle $ for which $E_{n}\neq E_{k}$. Taking into account this modification in Eq. (\[anf\]), it is not difficult to see that the adiabatic approximation generalizes to the statement that each degenerate eigenspace of $H(t)$, instead of individual eigenvectors, has independent evolution, whose validity conditions given by Eq. (\[vcc\]) are to be considered over eigenvectors with distinct energies. Thus, in general one can define adiabatic dynamics of closed quantum systems as follows:
\[defc\] A closed quantum system is said to undergo adiabatic dynamics if its Hilbert space can be decomposed into decoupled Schrödinger eigenspaces with distinct, time-continuous, and noncrossing instantaneous eigenvalues of $H(t)$.
It is conceptually useful to point out that the relationship between slowly varying Hamiltonians and adiabatic behavior, which explicitly appears from Eq. (\[vcc\]), can also be demonstrated directly from a simple manipulation of the Schrödinger equation: recall that $H(t)$ can be diagonalized by a unitary similarity transformation $$H_{d}(t)=U^{-1}(t)\,H(t)\,U(t), \label{hdc}$$where $H_{d}(t)$ denotes the diagonalized Hamiltonian and $U(t)$ is a unitary transformation. Multiplying Eq. (\[se\]) by $U^{-1}(t)$ and using Eq. (\[hdc\]), we obtain $$H_{d}\,|\psi \rangle _{d}=i\,|{\dot{\psi}}\rangle _{d}-i\,{\dot{U}}^{-1}|\psi \rangle , \label{sed}$$where $|\psi \rangle _{d}\equiv U^{-1}|\psi \rangle $ is the state of the system in the basis of eigenvectors of $H(t)$. Upon considering that $H(t)$ changes slowly in time, i.e., $dH(t)/dt\approx 0$, we may also assume that the unitary transformation $U(t)$ and its inverse $U^{-1}(t)$ are slowly varying operators, yielding $$H_{d}(t)\,|\psi (t)\rangle _{d}=i\,|{\dot{\psi}}(t)\rangle _{d}.
\label{eq:Had}$$Thus, since $H_{d}(t)$ is diagonal, the system evolves separately in each energy sector, ensuring the validity of the adiabatic approximation. In our derivation of the condition of adiabatic behavior for open systems below, we will make use of this semi-intuitive picture in order to motivate the decomposition of the dynamics into Lindblad-Jordan blocks.
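To get a feeling for the estimate in Eq. (\[vcc\]), the ratio can be evaluated numerically. The following is a minimal sketch in Python, assuming a Landau-Zener-type Hamiltonian $H(t)=\Delta \sigma _{x}+vt\,\sigma _{z}$ chosen here purely for illustration:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def adiabatic_ratio(t, Delta=1.0, v=0.1):
        # H(t) = Delta*sx + v*t*sz, so dH/dt = v*sz
        H = Delta * sx + v * t * sz
        E, V = np.linalg.eigh(H)       # instantaneous eigenvalues/eigenvectors
        g = E[1] - E[0]                # gap g_nk between the two levels
        elem = V[:, 1].conj() @ (v * sz) @ V[:, 0]   # <k|dH/dt|n>
        return abs(elem) / g**2        # must be << 1 for adiabaticity

    print(adiabatic_ratio(0.0))        # worst case: the gap is smallest at t = 0

For this model the ratio at $t=0$ is $v/(4\Delta ^{2})$, so slow sweeps (small $v$) or large gaps (large $\Delta $) favor adiabatic evolution, in agreement with Eq. (\[vcc\]).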
Condition on the total evolution time
-------------------------------------
The adiabaticity condition can also be given in terms of the total evolution time $T$. We shall consider for simplicity a nondegenerate $H(t)$; the generalization to the degenerate case is possible. Let us then rewrite Eq. (\[anf\]) as follows [@Gottfried:book]: $$e^{i\gamma _{k}(t)}\,\frac{\partial }{\partial t}[
a_{k}(t)\,e^{-i\gamma _{k}(t)}] =
-\sum_{n\neq k}a_{n}\frac{\langle k|{\dot{H}}|n\rangle }{g_{nk}}\,e^{-i\int_{0}^{t}dt^{\prime }g_{nk}(t^{\prime })},
\label{adtti}$$ where $\gamma _{k}(t)$ denotes the Berry’s phase [@Berry:84] associated to the state $|k\rangle $, $$\gamma _{k}(t)=i\int_{0}^{t}dt^{\prime }\langle k(t^{\prime })|{\dot{k}}(t^{\prime })\rangle .$$Now let us define a normalized time $s$ through the variable transformation $$t=sT,\,\,\,\,\,0\leq s\leq 1. \label{nt}$$Then, by performing the change $t\rightarrow s$ in Eq. (\[adtti\]) and integrating, we obtain $$\begin{aligned}
&&a_{k}(s)\,e^{-i\gamma _{k}(s)}= \nonumber \\
&&a_{k}(0)-\sum_{n\neq k}\int_{0}^{s}ds^{\prime }\frac{F_{nk}(s^{\prime })}{g_{nk}(s^{\prime })}e^{-iT\int_{0}^{s^{\prime }}ds^{\prime \prime
}g_{nk}(s^{\prime \prime })}, \label{akint}\end{aligned}$$where $$\begin{aligned}
F_{nk}(s)=a_{n}(s)\,\langle k(s)|\frac{dH(s)}{ds}|n(s)\rangle \,e^{-i\gamma
_{k}(s)}. \end{aligned}$$ However, for an adiabatic evolution as defined above, the coefficients $a_{n}(s)$ evolve without any mixing, which means that $a_{n}(s)\approx
a_{n}(0)\,e^{i\gamma _{n}(s)}$. Therefore, $$\begin{aligned}
F_{nk}(s)=a_{n}(0)\,\langle k(s)|\frac{dH(s)}{ds}|n(s)\rangle \,e^{-i[\gamma
_{k}(s)-\gamma _{n}(s)]}. \end{aligned}$$ In order to arrive at a condition on $T$, it is useful to separate out the fast oscillatory part from Eq. (\[akint\]). Thus, the integrand in Eq. (\[akint\]) can be rewritten as $$\begin{aligned}
&&\frac{F_{nk}(s^{\prime })}{g_{nk}(s^{\prime })}e^{-iT\int_{0}^{s^{\prime }}ds^{\prime \prime }g_{nk}(s^{\prime \prime })}=
\nonumber \\
&&\frac{i}{T}\left[ \frac{d}{ds^{\prime }}\left( \frac{F_{nk}(s^{\prime })}{g_{nk}^{2}(s^{\prime })}e^{-iT\int_{0}^{s^{\prime
}}ds^{\prime \prime }g_{nk}(s^{\prime \prime })}\right) \right. \nonumber \\
&&\left. -\,e^{-iT\int_{0}^{s^{\prime }}ds^{\prime \prime
}g_{nk}(s^{\prime \prime })}\frac{d}{ds^{\prime }}\left( \frac{F_{nk}(s^{\prime })}{g_{nk}^{2}(s^{\prime })}\right) \right] . \label{ricc}\end{aligned}$$Substitution of Eq. (\[ricc\]) into Eq. (\[akint\]) results in $$\begin{aligned}
&&a_{k}(s)\,e^{-i\gamma _{k}(s)}= \nonumber \\
&&a_{k}(0)+\frac{i}{T}\sum_{n\neq k}\left( \frac{F_{nk}(0)}{g_{nk}^{2}(0)}-\frac{F_{nk}(s)}{g_{nk}^{2}(s)}e^{-iT\int_{0}^{s}ds^{\prime
}g_{nk}(s^{\prime })}\right. \nonumber \\
&&\left. +\,\int_{0}^{s}ds^{\prime }\,e^{-iT\int_{0}^{s^{\prime }}ds^{\prime
\prime }g_{nk}(s^{\prime \prime })}\frac{d}{ds^{\prime }}\frac{F_{nk}(s^{\prime })}{g_{nk}^{2}(s^{\prime })}\right). \label{akfinal}\end{aligned}$$A condition for the adiabatic regime can be obtained from Eq. (\[akfinal\]) if the integral in the last line vanishes for large $T$. Let us assume that, as $T\rightarrow \infty $, the energy difference remains nonvanishing. We further assume that $d\{F_{nk}(s^{\prime })/g_{nk}^{2}(s^{\prime
})\}/ds^{\prime }$ is integrable on the interval $\left[ 0,s\right] $. Then it follows from the Riemann-Lebesgue lemma [@Churchill:book] that the integral in the last line of Eq. (\[akfinal\]) vanishes in the limit $T\rightarrow \infty $ (due to the fast oscillation of the integrand) [@RiemannLebesgue]. What is left are therefore only the first two terms in the sum over $n\neq k$ of Eq. (\[akfinal\]). Thus, a general estimate of the time rate at which the adiabatic regime is approached can be expressed by $$\begin{aligned}
T\gg \frac{F}{g^{2}}, \label{timead}\end{aligned}$$ where $$\begin{aligned}
&&F=\max_{0\leq s\leq 1}|a_{n}(0)\,\langle k(s)|\frac{dH(s)}{ds}|n(s)\rangle
|, \nonumber \\
&&g=\min_{0\leq s\leq 1}|g_{nk}(s)|\, ,\end{aligned}$$with $\max $ and $\min $ taken over all $k$ and $n$. A simplification is obtained if the system starts its evolution in a particular eigenstate of $H(t)$. Taking the initial state as the eigenvector $|m(0)\rangle $, with $a_{m}(0)=1$, adiabatic evolution occurs if $$\begin{aligned}
T\gg \frac{\mathcal{F}}{\mathcal{G}^{2}}, \label{timead2}\end{aligned}$$ where $$\begin{aligned}
&&\mathcal{F}=\max_{0\leq s\leq 1}|\langle k(s)|\frac{dH(s)}{ds}|m(s)\rangle
|\,, \nonumber \\
&&\mathcal{G}=\min_{0\leq s\leq 1}|g_{mk}(s)|\,.\end{aligned}$$Equation (\[timead2\]) gives an important validity condition for the adiabatic approximation, which has been used, e.g., to determine the running time required by adiabatic quantum algorithms [@Farhi:00; @Farhi:01].
The dynamics of open quantum systems {#open}
====================================
In this section, we prepare the mathematical framework required to derive an adiabatic approximation for open quantum systems. Our starting point is the convolutionless master equation (\[eq:t-Lind\]). It proves convenient to transform to the superoperator formalism, wherein the density matrix is represented by a $D^{2}$-dimensional coherence vector $$|\rho \rangle \rangle =\left(
\begin{array}{cccc}
\rho _{1} & \rho _{2} & \cdots & \rho _{D^{2}}\end{array}\right) ^{t}, \label{vcv}$$and the Lindblad superoperator $\mathcal{L}$ becomes a $(D^{2}\times D^{2})$-dimensional supermatrix [@Alicki:87]. We use the double bracket notation to indicate that we are not working in the standard Hilbert space of state vectors. Such a representation can be generated, e.g., by introducing a basis of Hermitian, trace-orthogonal, and traceless operators \[e.g., su($D$)\], whence the $\rho _{i}$ are the expansion coefficients of $\rho $ in this basis [@Alicki:87], with $\rho _{1}$ the coefficient of $I$ (the identity matrix). In this case, the condition $\mathrm{Tr}\rho ^{2}\leq 1$ corresponds to $\left\Vert |\rho \rangle \rangle \right\Vert \leq 1$, $\rho =\rho ^{\dag }$ to $\rho _{i}=\rho _{i}^{\ast }$, and positive semidefiniteness of $\rho $ is expressed in terms of inequalities satisfied by certain Casimir invariants \[e.g., of $su(D)$\] [@byrd:062322; @Kimura:2003-1; @Kimura:2003-2]. A simple and well-known example of this procedure is the representation of the density operator of a two-level system (qubit) on the Bloch sphere, via $\rho =(I_{2}+\overrightarrow{v}\cdot \overrightarrow{\sigma })/2$, where $\overrightarrow{\sigma }=(\sigma _{x},\sigma _{y},\sigma _{z})$ is the vector of Pauli matrices, $I_{2}$ is the $2\times 2$ identity matrix, and $\overrightarrow{v}\in \mathbb{R}^{3}$ is a three-dimensional coherence vector of norm$\leq 1$. More generally, coherence vectors live in Hilbert-Schmidt space: a state space of linear operators endowed with an inner product that can be defined, for general vectors $u$ and $v$, as $$(u,v)\equiv \langle \langle u|v\rangle \rangle \equiv \frac{1}{{\mathcal{N}}}{\text{Tr}}\left( u^{\dagger }v\right) , \label{ip}$$where ${\mathcal{N}}$ is a normalization factor. Adjoint elements $\langle
\langle v|$ in the dual state space are given by row vectors defined as the transpose conjugate of $|v\rangle \rangle $: $\langle \langle
v|=(v_{1}^{\ast },v_{2}^{\ast },...,v_{D^{2}}^{\ast })$. A density matrix can then be expressed as a discrete superposition of states over a complete basis in this vector space, with appropriate constraints on the coefficients so that the requirements of Hermiticity, positive semidefiniteness, and unit trace of $\rho $ are observed. Thus, representing the density operator in general as a coherence vector, we can rewrite Eq. (\[eq:t-Lind\]) in a superoperator language as $$\mathcal{L}(t)\,|\rho (t)\rangle \rangle =|{\dot{\rho}}(t)\rangle \rangle ,
\label{le}$$where $\mathcal{L}$ is now a supermatrix. This master equation generates nonunitary evolution, since $\mathcal{L}(t)$ is non-Hermitian and hence generally nondiagonalizable. However, it is always possible to obtain an elegant decomposition in terms of a block structure, the Jordan canonical form [@Horn:book]. This can be achieved by the similarity transformation $$\mathcal{L}_{J}(t)=S^{-1}(t)\,\mathcal{L}(t)\,S(t), \label{jd}$$where $\mathcal{L}_{J}(t)=\mathrm{diag}(J_{1},...,J_{m})$ denotes the Jordan form of $\mathcal{L}(t)$, with $J_{\alpha }$ representing a Jordan block related to an eigenvector whose corresponding eigenvalue is $\lambda
_{\alpha }$, $$J_{\alpha }=\left(
\begin{array}{ccccc}
\lambda _{\alpha } & 1 & 0 & \cdots & 0 \\
0 & \lambda _{\alpha } & 1 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & \lambda _{\alpha } & 1 \\
0 & \cdots & \cdots & 0 & \lambda _{\alpha }\end{array}\right) . \label{ljmatg}$$The number $m$ of Jordan blocks is given by the number of linearly independent eigenstates of $\mathcal{L}(t)$, with each eigenstate associated to a different block $J_{\alpha }$. Since $\mathcal{L}(t)$ is in general non-Hermitian, we generally do not have a basis of eigenstates, whence some care is required in order to find a basis for describing the density operator. A systematic procedure for finding a convenient discrete vector basis is to start from the instantaneous right and left eigenstates of $\mathcal{L}(t)$, which are defined by $$\begin{aligned}
\mathcal{L}(t)\,|\mathcal{P}_{\alpha }(t)\rangle \rangle &=&\lambda
_{\alpha }(t)\,|\mathcal{P}_{\alpha }(t)\rangle \rangle , \label{rleb0} \\
\langle \langle Q_{\alpha }(t)|\,\mathcal{L}(t) &=&\langle \langle Q_{\alpha
}(t)|\,\lambda _{\alpha }(t), \label{rleb}\end{aligned}$$where, in our notation, possible degeneracies correspond to $\lambda
_{\alpha }=\lambda _{\beta }$, with $\alpha \neq \beta $. In other words, we reserve a different index $\alpha $ for each independent eigenvector since each eigenvector is in a distinct Jordan block. It can immediately be shown from Eqs. (\[rleb0\]) and (\[rleb\]) that, for $\lambda _{\alpha }\neq
\lambda _{\beta }$, we have $\langle \langle Q_{\alpha }(t)|\mathcal{P}_{\beta }(t)\rangle \rangle =0$. The left and right eigenstates can be easily identified when the Lindblad superoperator is in the Jordan form $\mathcal{L}_{J}(t)$. Denoting $|\mathcal{P}_{\alpha }(t)\rangle \rangle
_{J}=S^{-1}(t)\,|\mathcal{P}_{\alpha }(t)\rangle \rangle $, i.e., the right eigenstate of $\mathcal{L}_{J}(t)$ associated to a Jordan block $J_{\alpha }$, then Eq. (\[rleb0\]) implies that $|\mathcal{P}_{\alpha }(t)\rangle
\rangle _{J}$ is time-independent and, after normalization, is given by $$\left. |\mathcal{P}_{\alpha }\rangle \rangle _{J}\frac{{}}{{}}\right\vert
_{J_{\alpha }}=\left(
\begin{array}{c}
1 \\
0 \\
\vdots \\
0 \\
\end{array}\right) , \label{pj}$$where only the vector components associated to the Jordan block $J_{\alpha }$ are shown, with all the others vanishing. In order to have a complete basis we shall define new states, which will be chosen so that they preserve the block structure of $\mathcal{L}_{J}(t)$. A suitable set of additional vectors is $$\left. |\mathcal{D}_{\alpha }^{(1)}\rangle \rangle _{J}\frac{{}}{{}}\right\vert _{J_{\alpha }}=\left(
\begin{array}{c}
0 \\
1 \\
0 \\
\vdots \\
0 \\
\end{array}\right) ,\,...\,,\,\left. |\mathcal{D}_{\alpha }^{(n_{\alpha }-1)}\rangle
\rangle _{J}\frac{{}}{{}}\right\vert _{J_{\alpha }}=\left(
\begin{array}{c}
0 \\
0 \\
0 \\
\vdots \\
1 \\
\end{array}\right) , \label{dj}$$where $n_{\alpha }$ is the dimension of the Jordan block $J_{\alpha }$ and again all the components outside $J_{\alpha }$ are zero. This simple vector structure allows for the derivation of the expression $$\mathcal{L}_{J}(t)\,|\mathcal{D}_{\alpha }^{(j)}\rangle \rangle _{J}=|\mathcal{D}_{\alpha }^{(j-1)}\rangle \rangle _{J}+\lambda _{\alpha }(t)\,|\mathcal{D}_{\alpha }^{(j)}\rangle \rangle _{J}, \label{ldj}$$with $|\mathcal{D}_{\alpha }^{(0)}\rangle \rangle _{J}\equiv |\mathcal{P}_{\alpha }\rangle \rangle _{J}$ and $|\mathcal{D}_{\alpha }^{(-1)}\rangle
\rangle _{J}\equiv 0$. The set $\left\{ |\mathcal{D}_{\alpha }^{(j)}\rangle
\rangle _{J},\text{\thinspace with}\,j=0,...,(n_{\alpha }-1)\right\} $ can immediately be related to a right vector basis for the original $\mathcal{L}(t)$ by means of the transformation $|\mathcal{D}_{\alpha }^{(j)}(t)\rangle
\rangle =S(t)\,|\mathcal{D}_{\alpha }^{(j)}\rangle \rangle _{J}$ which, applied to Eq. (\[ldj\]), yields $$\mathcal{L}(t)\,|\mathcal{D}_{\alpha }^{(j)}(t)\rangle \rangle =|\mathcal{D}_{\alpha }^{(j-1)}(t)\rangle \rangle +\lambda _{\alpha }(t)\,|\mathcal{D}_{\alpha }^{(j)}(t)\rangle \rangle . \label{ldo}$$Equation (\[ldo\]) exhibits an important feature of the set $\left\{ |\mathcal{D}_{\beta }^{(j)}(t)\rangle \rangle \right\} $, namely, it implies that Jordan blocks are invariant under the action of the Lindblad superoperator. An analogous procedure can be employed to define the left eigenbasis. Denoting by $_{J}\langle \langle \mathcal{Q}_{\alpha }(t)|=\langle \langle
\mathcal{Q}_{\alpha }(t)|S(t)$ the left eigenstate of $\mathcal{L}_{J}(t)$ associated to a Jordan block $J_{\alpha }$, Eq. (\[rleb\]) leads to the normalized left vector $$\left. _{J}\langle \langle \mathcal{Q}_{\alpha }|\frac{{}}{{}}\right\vert
_{J_{\alpha }}=\left( \frac{{}}{{}}0,\,.\,.\,.\,,0,1\frac{{}}{{}}\right) .
\label{qj}$$The additional left vectors are defined as $$\begin{aligned}
\left. _{J}\langle \langle \mathcal{E}_{\alpha }^{(0)}|\frac{{}}{{}}\right\vert _{J_{\alpha }} &=&\left( \frac{{}}{{}}1,0,0,\,.\,.\,.\,,0\frac{{}}{{}}\right) , \nonumber \\
\vspace{0.1cm} &.\,\,.\,\,.& \nonumber \\
\vspace{0.1cm}\left. _{J}\langle \langle \mathcal{E}_{\alpha }^{(n_{\alpha
}-2)}|\frac{{}}{{}}\right\vert _{J_{\alpha }} &=&\left( \frac{{}}{{}}0,\,.\,.\,.\,,0,1,0\frac{{}}{{}}\right) , \label{rj}\end{aligned}$$which imply the following expression for the left basis vector $\langle
\langle \mathcal{E}_{\alpha }^{(i)}(t)|=\,_{J}\langle \langle \mathcal{E}_{\alpha }^{(i)}|\,S^{-1}(t)$ for $\mathcal{L}(t)$: $$\langle \langle \mathcal{E}_{\alpha }^{(i)}(t)|\,\mathcal{L}(t)=\langle
\langle \mathcal{E}_{\alpha }^{(i+1)}(t)|+\langle \langle \mathcal{E}_{\alpha }^{(i)}(t)|\,\lambda _{\alpha }(t). \label{lro}$$Here we have used the notation $_{J}\langle \langle \mathcal{E}_{\alpha
}^{(n_{\alpha }-1)}|\equiv \,_{J}\langle \langle \mathcal{Q}_{\alpha }|$ and $_{J}\langle \langle \mathcal{E}_{\alpha }^{(n_{\alpha })}|\equiv 0$. A further property following from the definition of the right and left vector bases introduced here is $$\langle \langle \mathcal{E}_{\alpha }^{(i)}(t)|\mathcal{D}_{\beta
}^{(j)}(t)\rangle \rangle =\,_{J}\langle \langle \mathcal{E}_{\alpha }^{(i)}|\mathcal{D}_{\beta }^{(j)}\rangle \rangle _{J}=\delta _{\alpha \beta }\delta
^{ij}. \label{lrr}$$This orthonormality relationship between corresponding left and right states will be very useful in our derivation below of the conditions for the validity of the adiabatic approximation.
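To make the superoperator construction concrete, the following minimal sketch in Python builds the supermatrix for a single qubit with $H=\frac{\omega }{2}\sigma _{z}$ and one Lindblad operator $\Gamma =\sqrt{\gamma }\,\sigma _{-}$ (an example chosen here only for illustration), using the column-stacking vectorization $\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\,\mathrm{vec}(\rho )$, which is related to the coherence-vector representation above by a fixed linear transformation:

    import numpy as np

    omega, gamma = 1.0, 0.2
    sz = np.diag([1.0, -1.0]).astype(complex)
    sm = np.array([[0, 0], [1, 0]], dtype=complex)    # lowering operator

    def lindblad_supermatrix(H, lindblad_ops):
        d = H.shape[0]
        I = np.eye(d, dtype=complex)
        L = -1j * (np.kron(I, H) - np.kron(H.T, I))   # -i[H, .] part
        for G in lindblad_ops:
            GdG = G.conj().T @ G
            L += (np.kron(G.conj(), G)
                  - 0.5 * (np.kron(I, GdG) + np.kron(GdG.T, I)))
        return L

    L = lindblad_supermatrix(0.5 * omega * sz, [np.sqrt(gamma) * sm])
    print(np.round(np.linalg.eigvals(L), 6))   # non-Hermitian in general
    # Expected eigenvalues: 0, -gamma, -gamma/2 +/- i*omega. For these generic
    # parameters L is diagonalizable (all Jordan blocks are one-dimensional);
    # defective cases require a symbolic Jordan form, e.g.
    # sympy.Matrix(...).jordan_form().

The zero eigenvalue corresponds to the stationary state, while the nonzero eigenvalues govern relaxation and dephasing; degeneracies of these eigenvalues are precisely the situations in which nontrivial Jordan blocks can appear.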
The adiabatic approximation in open quantum systems {#adiabatic}
===================================================
We are now ready to derive our main result: an adiabatic approximation for open quantum systems. We do this by observing that the Jordan decomposition of $\mathcal{L}(t)$ \[Eq. (\[jd\])\] allows for a nice generalization of the standard quantum adiabatic approximation. We begin by defining the adiabatic dynamics of an open system as a generalization of the definition given above for closed quantum systems:
\[def:open-ad\] An open quantum system is said to undergo adiabatic dynamics if its Hilbert-Schmidt space can be decomposed into decoupled Lindblad–Jordan eigenspaces with distinct, time-continuous, and noncrossing instantaneous eigenvalues of $\mathcal{L}(t)$.
This definition is a natural extension for open systems of the idea of adiabatic behavior. Indeed, in this case the master equation (\[eq:t-Lind\]) can be decomposed into sectors with different and separately evolving Lindblad-Jordan eigenvalues, and we show below that the condition for this to occur is appropriate slowness of the Lindblad superoperator. The splitting into Jordan blocks of the Lindblad superoperator is achieved through the choice of a basis which preserves the Jordan block structure as, for example, the sets of right $\left\{ |\mathcal{D}_{\beta }^{(j)}(t)\rangle \rangle \right\} $ and left $\left\{ \langle
\langle \mathcal{E}_{\alpha }^{(i)}(t)|\right\} $ vectors introduced in Sec. \[open\]. Such a basis generalizes the notion of Schrödinger eigenvectors.
Intuitive derivation
--------------------
Let us first show how the adiabatic Lindblad-Jordan blocks arise from a simple argument, analogous to the one presented for the closed case \[Eqs. (\[hdc\])-(\[eq:Had\])\]. Multiplying Eq. (\[le\]) by the similarity transformation matrix $S^{-1}(t)$, we obtain $$\begin{aligned}
\mathcal{L}_{J}\,|\rho \rangle \rangle _{J}=|{\dot{\rho}}\rangle \rangle
_{J}-{\dot{S}}^{-1}\,|\rho \rangle \rangle , \label{ljinter}\end{aligned}$$ where we have used Eq. (\[jd\]) and defined $|\rho \rangle \rangle
_{J}\equiv S^{-1}|\rho \rangle \rangle $. Now suppose that $\mathcal{L}(t)$, and consequently $S(t)$ and its inverse $S^{-1}(t)$, changes slowly in time so that ${\dot{S}}^{-1}(t)\approx 0$. Then, from Eq. (\[ljinter\]), the adiabatic dynamics of the system reads $$\mathcal{L}_{J}(t)\,|\rho (t)\rangle \rangle _{J}=|{\dot{\rho}}(t)\rangle
\rangle _{J}. \label{ljrj}$$Equation (\[ljrj\]) ensures that, choosing an instantaneous basis for the density operator $\rho (t)$ which preserves the Jordan block structure, the evolution of $\rho (t)$ occurs separately in adiabatic blocks associated with distinct eigenvalues of $\mathcal{L}(t)$. Of course, the conditions under which the approximation ${\dot{S}}^{-1}(t)\approx 0$ holds must be carefully clarified. This is the subject of the next two subsections.
Condition on the Lindblad superoperator
---------------------------------------
Let us now derive the validity conditions for open-system adiabatic dynamics by analyzing the general time evolution of a density operator under the master equation (\[le\]). To this end, we expand the density matrix for an arbitrary time $t$ in the instantaneous right eigenbasis $\left\{ |{\mathcal{D}_{\beta }^{(j)}(t)\rangle \rangle }\right\} $ as $$|\rho (t)\rangle \rangle =\frac{1}{2}\sum_{\beta =1}^{m}\sum_{j=0}^{n_{\beta
}-1}r_{\beta }^{(j)}(t)\,|\mathcal{D}_{\beta }^{(j)}(t)\rangle \rangle ,
\label{rtime}$$where $m$ is the number of Jordan blocks and $n_{\beta }$ is the dimension of the block $J_{\beta }$. We emphasize that we are assuming that there are no eigenvalue crossings in the spectrum of the Lindblad superoperator during the evolution. Requiring then that the density operator Eq. (\[rtime\]) evolves under the master equation (\[le\]) and making use of Eq. (\[ldo\]), we obtain $$\begin{aligned}
\sum_{\beta =1}^{m}\sum_{j=1}^{n_{\beta }-1}r_{\beta }^{(j)}\,\left( |\mathcal{D}_{\beta }^{(j-1)}\rangle \rangle +\lambda _{\beta }\,|\mathcal{D}_{\beta }^{(j)}\rangle \rangle \right) = \nonumber \\
\sum_{\beta =1}^{m}\sum_{j=0}^{n_{\beta }-1}\left( {\dot{r}}_{\beta
}^{(j)}\,|\mathcal{D}_{\beta }^{(j)}\rangle \rangle +r_{\beta }^{(j)}\,|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle \right) . \label{lindg1}\end{aligned}$$Equation (\[lindg1\]) multiplied by the left eigenstate $\langle \langle
\mathcal{E}_{\alpha }^{(i)}|$ results in $${\dot{r}}_{\alpha }^{(i)}=\lambda _{\alpha }\,r_{\alpha }^{(i)}+r_{\alpha
}^{(i+1)}-\sum_{\beta =1}^{m}\sum_{j=0}^{n_{\beta }-1}r_{\beta
}^{(j)}\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle , \label{rdot}$$with $r_{\alpha }^{(n_{\alpha })}(t)\equiv 0$. Note that the sum over $\beta
$ mixes different Jordan blocks. An analogous situation occurred in the closed system case, in Eq. (\[anf\]). Similarly to what was done there, in order to derive an adiabaticity condition we must separate this sum into terms related to the eigenvalue $\lambda _{\alpha }$ of $\mathcal{L}(t)$ and terms involving mixing with eigenvalues $\lambda _{\beta }\neq \lambda
_{\alpha }$. In this latter case, an expression can be found for $\langle
\langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta
}^{(j)}\rangle \rangle $ as follows: taking the time derivative of Eq. (\[ldo\]) and multiplying by $\langle \langle \mathcal{E}_{\alpha }^{(i)}|$ we obtain, after using Eqs. (\[lro\]) and (\[lrr\]), $$\begin{aligned}
\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta
}^{(j)}\rangle \rangle =\frac{1}{\omega _{\beta \alpha }}\,\left(
\,\langle \langle \mathcal{E}_{\alpha }^{(i)}|\,{\dot{\mathcal{L}}}\,|\mathcal{D}_{\beta }^{(j)}\rangle \rangle \,\right. \nonumber
\\
\left. +\,\langle \langle \mathcal{E}_{\alpha }^{(i+1)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle -\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j-1)}\rangle \rangle \,\right) ,\,\,\,\,
\label{rddi}\end{aligned}$$where we have defined $$\omega _{\beta \alpha }(t)\equiv \lambda _{\beta }(t)-\lambda _{\alpha }(t)
\label{eq:omab}$$and assumed $\lambda _{\alpha }\neq \lambda _{\beta }$. Note that, while $\omega _{\beta \alpha }$ plays a role analogous to that of the energy difference $g_{nk}$ in the closed case \[Eq. (\[eq:g\])\], $\omega _{\beta
\alpha }$ may be complex. A similar procedure can generate expressions for all the terms $\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j-k)}\rangle \rangle $, with $k=0,...,j$. Thus, an iteration of Eq. (\[rddi\]) yields $$\begin{aligned}
\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta
}^{(j)}\rangle \rangle &=&\sum_{k=0}^{j}\frac{(-1)^{k}}{\omega _{\beta
\alpha }^{k+1}}\left( \langle \langle \mathcal{E}_{\alpha }^{(i)}|\,{\dot{\mathcal{L}}}\,|\mathcal{D}_{\beta }^{(j-k)}\rangle \rangle \,\right.
\nonumber \\
&&\left. +\,\langle \langle \mathcal{E}_{\alpha }^{(i+1)}|{\dot{\mathcal{D}}}_{\beta }^{(j-k)}\rangle \rangle \right) . \label{rddi3}\end{aligned}$$ From a second recursive iteration, now for the term $\langle \langle
\mathcal{E}_{\alpha }^{(i+1)}|{\dot{\mathcal{D}}}_{\beta }^{(j-k)}\rangle
\rangle $ in Eq. (\[rddi3\]), we obtain $$\begin{aligned}
&& \langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta
}^{(j)}\rangle \rangle = \nonumber \\
&& \sum_{p=1}^{(n_{\alpha }-i)}\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) \frac{\langle \langle
\mathcal{E}_{\alpha }^{(i+p-1)}|{\dot{\mathcal{L}}}|\mathcal{D}_{\beta
}^{(j-S_{p})}\rangle \rangle }{(-1)^{S_{p}}\,\omega _{\beta \alpha
}^{p+S_{p}}}, \label{rdd}\end{aligned}$$where $$S_{q}=\sum_{s=1}^{q}k_{s}\,\,\,\,,\,\,\,\,\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right)
=\sum_{k_{1}=0}^{j-S_{0}}\cdots \sum_{k_{p}=0}^{j-S_{p-1}},$$with $S_{0}=0$. We can now split Eq. (\[rdot\]) into diagonal and off-diagonal terms $$\begin{aligned}
&&{\dot{r}}_{\alpha }^{(i)}=\lambda _{\alpha }r_{\alpha
}^{(i)}+r_{\alpha }^{(i+1)}-\sum_{\beta \,|\,\lambda _{\beta }=\lambda
_{\alpha }}\sum_{j=0}^{n_{\beta }-1}r_{\beta }^{(j)}\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle \hspace{0.3cm} \nonumber \\
&&-\sum_{\beta \,|\,\lambda _{\beta }\neq \lambda _{\alpha
}}\sum_{j=0}^{n_{\beta }-1}r_{\beta }^{(j)}\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle ,
\label{rfinal}\end{aligned}$$where the terms $\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle $, for $\lambda _{\beta }\neq \lambda
_{\alpha }$, are given by Eq. (\[rdd\]). In accordance with our definition of adiabaticity above, the adiabatic regime is obtained when the sum in the second line is negligible. Summarizing, by introducing the normalized time $s$ defined by Eq. (\[nt\]), we thus find the following from Eqs. (\[rdd\]) and (\[rfinal\]).
\[t1\] A sufficient condition for open quantum system adiabatic dynamics as given in Definition \[def:open-ad\] is: $$\begin{aligned}
&&\max_{0\le s\le 1}\,\left\vert \sum_{p=1}^{(n_{\alpha }-i)}\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) \frac{\langle \langle
\mathcal{E}_{\alpha }^{(i+p-1)}|\frac{d\mathcal{L}}{ds}|\mathcal{D}_{\beta
}^{(j-S_{p})}\rangle \rangle }{(-1)^{S_{p}}\,\omega _{\beta \alpha
}^{p+S_{p}}}\right\vert \nonumber \\
&&\ll 1,
\label{vc}\end{aligned}$$with $\lambda _{\beta }\neq \lambda _{\alpha }$ and for arbitrary indices $i$ and $j$ associated to the Jordan blocks $\alpha $ and $\beta $, respectively.
The condition (\[vc\]) ensures the absence of mixing of coefficients $r_{\alpha
}^{(i)}$ related to distinct eigenvalues $\lambda _{\alpha }$ in Eq. (\[rfinal\]), which in turn guarantees that sets of Jordan blocks belonging to different eigenvalues of $\mathcal{L}(t)$ have independent evolution. Thus the accuracy of the adiabatic approximation can be estimated by the computation of the time derivative of the Lindblad superoperator acting on right and left vectors. Equation (\[vc\]) can be simplified by considering the term with maximum absolute value, which results in:
\[c1os\] A sufficient condition for open quantum system adiabatic dynamics is $$\begin{aligned}
&&{\cal N}_{ij}^{n_\alpha n_\beta} \max_{0\le s\le 1}\,\left\vert \frac{\langle \langle
\mathcal{E}_{\alpha }^{(i+p-1)}|\frac{d\mathcal{L}}{ds}|\mathcal{D}_{\beta
}^{(j-S_{p})}\rangle \rangle }{\omega _{\beta \alpha
}^{p+S_{p}}}\right\vert \ll 1, \end{aligned}$$where the $\max$ is taken for any $\alpha \ne \beta$, and over all possible values of $i\in\{0,...,n_{\alpha}-1\}$, $j\in\{0,...,n_{\beta}-1\}$, and $p$, with $$\begin{aligned}
&&{\cal N}_{ij}^{n_\alpha n_\beta} = \sum_{p=1}^{(n_{\alpha }-i)}\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) 1
\label{numberterms} \\
&&= \binom{n_\alpha-i+1+j}{1+j}-1 = \frac{(n_\alpha-i+1+j)!}{(1+j)!\,(n_\alpha-i)!}-1. \nonumber\end{aligned}$$
Observe that the factor ${\cal N}_{ij}^{n_\alpha n_\beta}$ defined in Eq. (\[numberterms\]) is just the number of terms of the sums in Eq. (\[vc\]). We have included a superscript $n_\beta$, even though there is no explicit dependence on $n_\beta$, since $j\in\{0,...,n_{\beta}-1\}$.
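As a sanity check on this counting, the nested sums in Eq. (\[numberterms\]) can be enumerated by brute force and compared with the binomial closed form. A short Python sketch (the parameter ranges are arbitrary, chosen only for illustration):

```python
from math import comb

def n_terms(n_alpha: int, i: int, j: int) -> int:
    """Brute-force count of sum_{p=1}^{n_alpha-i} prod_q sum_{k_q=0}^{j-S_{q-1}} 1."""
    total = 0
    for p in range(1, n_alpha - i + 1):
        # enumerate tuples (k_1, ..., k_p) whose partial sums never exceed j
        stack = [(0, 0)]                      # (depth q, partial sum S_q)
        while stack:
            q, s = stack.pop()
            if q == p:
                total += 1
            else:
                stack.extend((q + 1, s + k) for k in range(j - s + 1))
    return total

for n_alpha in range(1, 5):
    for i in range(n_alpha):
        for j in range(4):
            assert n_terms(n_alpha, i, j) == comb(n_alpha - i + 1 + j, 1 + j) - 1
print("closed form of Eq. (numberterms) verified")
```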
Furthermore, an adiabatic condition for a slowly varying Lindblad superoperator can be obtained directly from Eq. (\[vc\]), yielding the following.
A simple sufficient condition for open quantum system adiabatic dynamics is ${\dot{\mathcal{L}}}\approx 0$.
Note that this condition is in a sense too strong, since it need not be the case that $\dot{\mathcal{L}}$ is small in general (i.e., for all its matrix elements). Indeed, in Sec. \[example\] we show via an example that adiabaticity may occur due to the *exact* vanishing of relevant matrix elements of ${\dot{\mathcal{L}}}$. The general condition for this to occur is the presence of a *dynamical symmetry* [@Bohm:88].
Let us end this subsection by mentioning that we can also write Eq. (\[vc\]) in terms of the time variable $t$ instead of the normalized time $s$. In this case, the natural generalization of Eq. (\[vc\]) is $$\begin{aligned}
&&\max_{0\le t\le T}\,\left\vert \sum_{p=1}^{(n_{\alpha }-i)}\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) \frac{\langle \langle
\mathcal{E}_{\alpha }^{(i+p-1)}|{\dot{\mathcal{L}}}|\mathcal{D}_{\beta
}^{(j-S_{p})}\rangle \rangle }{(-1)^{S_{p}}\,\omega _{\beta \alpha
}^{p+S_{p}}}\right\vert \nonumber \\
&&\ll \,\min_{0\le t\le T} \, |\omega_{\beta \alpha}|.
\label{vch}\end{aligned}$$ Note that, as in the analogous condition (\[vcc\]) in the closed case, the left-hand side has dimensions of frequency, and hence must be compared to the natural frequency scale $\omega_{\beta \alpha}$. However, unlike the closed systems case, where Eq. (\[vcc\]) can immediately be derived from the time condition (\[timead\]), we cannot prove here that $\omega_{\beta\alpha}$ is indeed the relevant physical scale. Therefore, Eq. (\[vch\]) should be regarded as a heuristic criterion.
Condition on the total evolution time {#sec:tot-t}
-------------------------------------
As mentioned in Sec. \[closed\], for closed systems the rate at which the adiabatic regime is approached can be estimated in terms of the total time of evolution, as shown by Eqs. (\[timead\]) and (\[timead2\]). We now provide a generalization of this estimate for adiabaticity in open systems.
### One-dimensional Jordan blocks
Let us begin by considering the particular case where $\mathcal{L}(t)$ has only one-dimensional Jordan blocks and each eigenvalue corresponds to a single independent eigenvector, i.e., $\lambda _{\alpha }=\lambda _{\beta
}\Rightarrow \alpha =\beta $. Bearing these assumptions in mind, Eq. (\[rfinal\]) can be rewritten as $${\dot{r}}_{\alpha }=\lambda _{\alpha }r_{\alpha }-r_{\alpha }\langle \langle
\mathcal{E}_{\alpha }|{\dot{\mathcal{D}}}_{\alpha }\rangle \rangle
-\sum_{\beta \neq \alpha }r_{\beta }\langle \langle \mathcal{E}_{\alpha }|{\dot{\mathcal{D}}}_{\beta }\rangle \rangle , \label{rfsc}$$where the upper indices $i,j$ have been removed since we are considering only one-dimensional blocks. Moreover, for this special case, we have from Eq. (\[rdd\]) $$\langle \langle \mathcal{E}_{\alpha }|{\dot{\mathcal{D}}}_{\beta }\rangle
\rangle =\frac{\langle \langle \mathcal{E}_{\alpha }|\,{\dot{\mathcal{L}}}\,|\mathcal{D}_{\beta }\rangle \rangle }{\omega _{\beta \alpha }}. \label{edsc}$$In order to eliminate the term $\lambda _{\alpha }r_{\alpha }$ from Eq. ([rfsc]{}), we redefine the variable $r_{\alpha }(t)$ as $$r_{\alpha }(t)=p_{\alpha }(t)\,e^{\int_{0}^{t}\lambda _{\alpha }(t^{\prime
})dt^{\prime }},$$which, applied to Eq. (\[rfsc\]), yields $${\dot{p}}_{\alpha }=-p_{\alpha }\,\langle \langle \mathcal{E}_{\alpha }|{\dot{\mathcal{D}}}_{\alpha }\rangle \rangle -\sum_{\beta \neq
\alpha }p_{\beta }\,\langle \langle \mathcal{E}_{\alpha }|{\dot{\mathcal{D}}}_{\beta }\rangle \rangle \,e^{\Omega _{\beta \alpha }}, \label{eq:rfsc2}$$with $$\Omega _{\beta \alpha }(t)=\int_{0}^{t}dt^{\prime }\,\omega _{\beta \alpha
}(t^{\prime }). \label{omos}$$Equation (\[eq:rfsc2\]) is very similar to Eq. (\[anf\]) for closed systems, but the fact that $\Omega _{\beta \alpha }$ is in general complex-valued leads to some important differences, discussed below. We next introduce the scaled time $s=t/T$ and integrate the resulting expression. Using Eq. (\[edsc\]), we then obtain $$\begin{aligned}
&&p_{\alpha }(s)\,=\,p_{\alpha }(0)-\int_{0}^{s}ds^{\prime
}p_{\alpha }(s^{\prime })\,\Phi _{\alpha }(s^{\prime }) \nonumber \\
&&-\sum_{\beta \neq \alpha }\int_{0}^{s}ds^{\prime }\frac{V_{\beta \alpha }(s^{\prime })}{\omega _{\beta \alpha }(s^{\prime })}\,e^{T\,\Omega _{\beta \alpha }(s^{\prime })}, \label{scint}\end{aligned}$$where $\Phi _{\alpha }(s)$ is defined by $$\Phi _{\alpha }(s)=\langle \langle \mathcal{E_{\alpha }}(s)|\frac{d}{ds}|\mathcal{D_{\alpha }}(s)\rangle \rangle$$and $V_{\beta \alpha }(s)$ by $$V_{\beta \alpha }(s)=p_{\beta }(s)\,\langle \langle \mathcal{E_{\alpha }}(s)|\frac{d\mathcal{L}(s)}{ds}|\mathcal{D_{\beta }}(s)\rangle \rangle .
\label{vba}$$The integrand in the last line of Eq. (\[scint\]) can be rearranged in a similar way to Eq. (\[ricc\]) for the closed case, yielding $$\begin{aligned}
&&\frac{V_{\beta \alpha }(s)}{\omega _{\beta \alpha }(s)}\,e^{T\,\Omega _{\beta \alpha }(s)} \nonumber \\
&=&\frac{1}{T}\left[ \frac{d}{ds}\left( \frac{V_{\beta \alpha }}{\omega
_{\beta \alpha }^{2}}\,e^{T\,\Omega _{\beta \alpha }(s)}\right)
-e^{T\,\Omega _{\beta \alpha }(s)}\frac{d}{ds}\frac{V_{\beta \alpha }}{\omega _{\beta \alpha }^{2}}\right] . \label{ninte}\end{aligned}$$Therefore, from Eq. (\[scint\]) we have $$\begin{aligned}
&&p_{\alpha }(s)\,=\,p_{\alpha }(0)-\int_{0}^{s}ds^{\prime
}p_{\alpha }(s^{\prime })\,\Phi _{\alpha }(s^{\prime }) \nonumber \\
&&+\frac{1}{T}\sum_{\beta \neq \alpha }\left( \frac{V_{\beta \alpha }(0)}{\omega _{\beta \alpha }^{2}(0)}-\frac{V_{\beta \alpha }(s)}{\omega _{\beta
\alpha }^{2}(s)}\,e^{T\,\Omega _{\beta \alpha }(s)}\right. \nonumber \\
&&\left. +\int_{0}^{s}ds^{\prime }\,e^{T\,\Omega _{\beta \alpha }(s^{\prime
})}\frac{d}{ds^{\prime }}\frac{V_{\beta \alpha }(s^{\prime })}{\omega
_{\beta \alpha }^{2}(s^{\prime })}\right) .
\label{afsc}\end{aligned}$$ Thus a condition for adiabaticity in terms of the total time of evolution can be given by comparing $T$ to the terms involving indices $\beta \neq
\alpha $. This can be formalized as follows.
\[t2\] Consider an open quantum system whose Lindblad superoperator $\mathcal{L}(s)$ has the following properties: $(a)$ The Jordan decomposition of $\mathcal{L}(s)$ is given by one-dimensional blocks. $(b)$ Each eigenvalue of $\mathcal{L}(s)$ is associated to a unique Jordan block. Then the adiabatic dynamics in the interval $0\leq s\leq 1$ occurs if and only if the following time conditions, obtained for each Jordan block $\alpha$ of $\mathcal{L}(s)$, are satisfied: $$\begin{aligned}
T &\gg& \max_{0\leq s\leq 1} \left\vert \,\sum_{\beta \neq \alpha }\left(
\frac{V_{\beta \alpha }(0)}{\omega _{\beta \alpha }^{2}(0)}-\frac{V_{\beta
\alpha }(s)}{\omega _{\beta \alpha }^{2}(s)}\,e^{T\,\Omega _{\beta \alpha
}(s)}\right. \right. \nonumber \\
&&\left. \left. +\int_{0}^{s}ds^{\prime }\,e^{T\,\Omega _{\beta \alpha
}(s^{\prime })}\frac{d}{ds^{\prime }}\frac{V_{\beta \alpha }(s^{\prime })}{\omega _{\beta \alpha }^{2}(s^{\prime })}\right) \right\vert .
\label{eq:Tad}\end{aligned}$$
Equation (\[eq:Tad\]) simplifies in a number of situations.
- Adiabaticity is guaranteed whenever $V_{\beta \alpha }$ vanishes for all $\alpha \neq \beta $. An example of this case will be provided in Sec. \[example\].
- Adiabaticity is similarly guaranteed whenever $V_{\beta \alpha }(s)$, which can depend on $T$ through $p_{\beta }$, vanishes for all $\alpha
,\beta $ such that $\mathrm{Re}(\Omega _{\beta \alpha })>0$ and does not grow faster, as a function of $T$, than $\exp (T|\,{\mathrm{Re}}\Omega
_{\beta \alpha }|)$ for all $\alpha ,\beta $ such that $\mathrm{Re}(\Omega
_{\beta \alpha })<0$.
- When $\mathrm{Re}(\Omega _{\beta \alpha })=0$ and $\mathrm{Im}(\Omega
_{\beta \alpha })\neq 0$, the integral in inequality (\[eq:Tad\]) vanishes in the infinite time limit due to the Riemann-Lebesgue lemma [@Churchill:book], as in the closed case discussed before. In this case, again, adiabaticity is guaranteed provided $p_{\beta }(s)$ \[and hence $V_{\beta \alpha }(s)$\] does not diverge as a function of $T$ in the limit $T
\rightarrow \infty$.
- When $\mathrm{Re}(\Omega _{\beta \alpha })>0$, the adiabatic regime can still be reached for large $T$ provided that $p_{\beta }(s)$ contains a decaying exponential which compensates for the growing exponential due to $\mathrm{Re}(\Omega _{\beta \alpha })$.
- Even if there is an overall growing exponential in inequality (\[eq:Tad\]), adiabaticity could take place over a finite time interval $[0,T_{\ast }]$ and, afterwards, disappear. In this case, which would be an exclusive feature of open systems, the crossover time $T_{\ast }$ would be determined by an inequality of the type $T\gg a+b\exp (cT)$, with $c>0$. The coefficients $a,b$ and $c$ are functions of the system-bath interaction. Whether the latter inequality can be solved clearly depends on the values of $a,b,c$, so that a conclusion about adiabaticity in this case is model dependent (a numerical scan of this inequality is sketched below).
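To illustrate the last point, the self-consistency condition $T\gg a+b\exp (cT)$ can be scanned numerically. In the sketch below, the values of $a$, $b$, $c$ and the margin standing in for "$\gg$" are assumptions chosen purely for illustration; for these values an adiabatic window exists and then closes at a finite crossover time:

```python
import numpy as np

a, b, c = 1.0, 1e-3, 0.05   # hypothetical coefficients set by the system-bath coupling
margin = 10.0               # read "T >> x" as "T > margin * x"

T = np.linspace(0.01, 400.0, 100_000)
ok = T > margin * (a + b * np.exp(c * T))
if ok.any():
    print(f"adiabatic window: {T[ok].min():.1f} < T < {T[ok].max():.1f}")
else:
    print("no adiabatic window for these parameters")
```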
### General Jordan blocks
We show now that the hypotheses $(a)$ and $(b)$ can be relaxed, providing a generalization of Proposition \[t2\] for the case of multidimensional Jordan blocks and Lindblad eigenvalues associated to more than one independent eigenvector. Let us redefine our general coefficient $r_{\alpha }^{(i)}(t)$ as $$r_{\alpha }^{(i)}(t)=p_{\alpha }^{(i)}(t)\,e^{\int_{0}^{t}\lambda _{\alpha
}(t^{\prime })dt^{\prime }},$$which, applied to Eq. (\[rfinal\]), yields $$\begin{aligned}
&&{\dot{p}}_{\alpha }^{(i)}\,=\,p_{\alpha }^{(i+1)} \nonumber
\\
&&-\sum_{\beta \,|\,\lambda _{\beta }=\lambda _{\alpha
}}\sum_{j=0}^{n_{\beta }-1}p_{\beta }^{(j)}\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle
\,e^{\Omega _{\beta \alpha }} \nonumber \\
&&-\sum_{\beta \,|\,\lambda _{\beta }\neq \lambda _{\alpha
}}\sum_{j=0}^{n_{\beta }-1}p_{\beta }^{(j)}\langle \langle \mathcal{E}_{\alpha }^{(i)}|{\dot{\mathcal{D}}}_{\beta }^{(j)}\rangle \rangle
\,e^{\Omega _{\beta \alpha }}. \label{rfscg}\end{aligned}$$The above equation can be rewritten in terms of the scaled time $s=t/T$. The integration of the resulting expression then reads $$\begin{aligned}
&&p_{\alpha }^{(i)}(s)\,=\,p_{\alpha
}^{(i)}(0)+T\int_{0}^{s}ds^{\prime }p_{\alpha }^{(i+1)}(s^{\prime })
\nonumber \\
&&\hspace{-0.9cm}-\sum_{\beta \,|\,\lambda _{\beta }=\lambda _{\alpha
}}\sum_{j}\int_{0}^{s}ds^{\prime }p_{\beta }^{(j)}(s^{\prime })\,\Phi
_{\beta \alpha }^{(ij)}(s^{\prime })\,e^{T\,\Omega _{\beta \alpha
}(s^{\prime })} \nonumber \\
&&\hspace{-0.9cm}-\sum_{\beta \,|\,\lambda _{\beta }\neq \lambda _{\alpha
}}\sum_{j,p}\int_{0}^{s}ds^{\prime }\frac{(-1)^{S_{p}}\,V_{\beta \alpha
}^{(ijp)}(s^{\prime })}{\omega _{\beta \alpha }^{p+S_{p}}(s^{\prime })}\,e^{T\,\Omega _{\beta \alpha }(s^{\prime })}, \label{scintg}\end{aligned}$$where use has been made of Eq. (\[rdd\]), with the sum over $j$ and $p$ in the last line denoting $$\sum_{j,p}\equiv \sum_{j=0}^{n_{\beta }-1}\sum_{p=1}^{(n_{\alpha }-i)}\left(
\prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) .$$The function $\Phi _{\beta \alpha }^{(ij)}(s)$ is defined by $$\Phi _{\beta \alpha }^{(ij)}(s)=\langle \langle \mathcal{E}_{\alpha
}^{(i)}(s)|\frac{d}{ds}|{\mathcal{D}}_{\beta }^{(j)}(s)\rangle \rangle ,
\label{phabij}$$and $V_{\beta \alpha }^{(ijp)}(s)$ by $$V_{\beta \alpha }^{(ijp)}(s)=p_{\beta }^{(j)}(s)\langle \langle \mathcal{E}_{\alpha }^{(i+p-1)}(s)|\frac{d\mathcal{L}(s)}{ds}|\mathcal{D}_{\beta
}^{(j-S_{p})}(s)\rangle \rangle .\, \label{vbapj}$$The term $T\int_{0}^{s}ds^{\prime }p_{\alpha }^{(i+1)}(s^{\prime })$ in the first line of Eq. (\[scintg\]), which was absent in the case of one-dimensional Jordan blocks analyzed above, has no effect on adiabaticity, since it does not cause any mixing of Jordan blocks. Therefore, the analysis can proceed very similarly to the case of one-dimensional blocks. Rewriting the integral in the last line of Eq. (\[scintg\]), as we have done in Eqs. (\[akfinal\]) and (\[afsc\]), and imposing the absence of mixing of the eigenvalues $\lambda _{\beta }\neq \lambda _{\alpha }$, i.e., the negligibility of the last line of Eq. (\[scintg\]), we find the following general theorem ensuring the adiabatic behavior of an open system.
\[t3\] Consider an open quantum system governed by a Lindblad superoperator $\mathcal{L}(s)$. Then the adiabatic dynamics in the interval $0\leq s\leq 1$ occurs if and only if the following time conditions, obtained for each coefficient $p_\alpha^{(i)}(s)$, are satisfied:
$$\begin{aligned}
T &\gg& \max_{0\leq s\leq 1} \left\vert \,\sum_{\beta \,|\,\lambda _{\beta
}\neq \lambda _{\alpha }}\sum_{j,p}\,(-1)^{S_{p}}\right. \nonumber \\
&&\times \left[ \frac{V_{\beta \alpha }^{(ijp)}(0)}{\omega _{\beta \alpha
}^{p+S_{p}+1}(0)}-\frac{V_{\beta \alpha }^{(ijp)}(s)\,e^{T\,\Omega _{\beta
\alpha }(s)}}{\omega _{\beta \alpha }^{p+S_{p}+1}(s)}\right. \nonumber \\
&&\left. \left. +\int_{0}^{s}ds^{\prime }\,e^{T\,\Omega _{\beta \alpha
}(s^{\prime })}\frac{d}{ds^{\prime }}\frac{V_{\beta \alpha
}^{(ijp)}(s^{\prime })}{\omega _{\beta \alpha }^{p+S_{p}+1}(s^{\prime })}\right] \right\vert . \label{eq:tadscg}\end{aligned}$$
Theorem \[t3\] provides a very general condition for adiabaticity in open quantum systems. The comments made about simplifying circumstances, in the case of one-dimensional blocks above, hold here as well. Moreover, a simpler sufficient condition can be derived from Eq. (\[eq:tadscg\]) by considering the term with maximum absolute value in the sum. This procedure leads to the following corollary:
\[ct3\] A sufficient time condition for the adiabatic regime of an open quantum system governed by a Lindblad superoperator $\mathcal{L}(t)$ is $$\begin{aligned}
T &\gg& \mathcal{M}_{ij}^{n_\alpha n_\beta} \, \max_{0\le s\le 1} \left\vert
\, \frac{V_{\beta \alpha }^{(ijp)}(0)}{\omega _{\beta \alpha}^{p+S_{p}+1}(0)}
-\frac{V_{\beta \alpha }^{(ijp)}(s)\,e^{T\,\Omega_{\beta \alpha }(s)}}{\omega _{\beta \alpha}^{p+S_{p}+1}(s)} \right. \nonumber \\
&&\left.+\int_{0}^{s}ds^{\prime }\,e^{T\,\Omega _{\beta \alpha }(s^{\prime
})}\frac{d}{ds^{\prime }} \frac{V_{\beta \alpha }^{(ijp)}(s^{\prime })}{\omega _{\beta\alpha }^{p+S_{p}+1}(s^{\prime })} \right\vert,
\label{eq:tadcol}\end{aligned}$$ where $\max $ is taken over all possible values of the indices $\lambda_\alpha \neq \lambda_\beta $, $i$, $j$, and $p$, with $$\begin{aligned}
&&\mathcal{M}_{ij}^{n_\alpha n_\beta} = \sum_{\beta \,|\,\lambda _{\beta
}\neq \lambda _{\alpha}} \sum_{j=0}^{(n_{\beta}-1)}\sum_{p=1}^{(n_{\alpha
}-i)}\left( \prod_{q=1}^{p}\sum_{k_{q}=0}^{(j-S_{q-1})}\right) 1 \nonumber
\\
&&= \Lambda_{\beta\alpha} \left[ \frac{(n_\alpha+n_\beta-i+1)!}{(n_\alpha-i+1)!n_\beta!}-n_\beta-1 \right], \label{Nlt}\end{aligned}$$ where $\Lambda_{\beta\alpha}$ denotes the number of Jordan blocks such that $\lambda_\alpha \neq \lambda_\beta$.
Physical interpretation of the adiabaticity condition
-----------------------------------------------------
There are various equivalent ways in which to interpret the adiabatic theorem for *closed* quantum systems [@Messiah:book]. A particularly useful interpretation follows from Eq. (\[timead2\]): the evolution time must be much longer than the ratio of the norm of the time derivative of the Hamiltonian to the square of the spectral gap. In other words, either the Hamiltonian changes slowly, or the spectral gap is large, or both. It is tempting to interpret our results in a similar fashion, which we now do.
The quantity $V_{\beta \alpha }^{(ijp)}$, by Eq. (\[vbapj\]), plays the role of the time derivative of the Lindblad superoperator. However, the appearance of $\exp [T\,\mathrm{Re}\,\Omega _{\beta \alpha }(s)]$ in Eq. (\[eq:tadscg\]) has no analog in the closed-systems case, because the eigenvalues of the Hamiltonian are real, while in the open-systems case the eigenvalues of the Lindblad superoperator may have imaginary parts. This implies that adiabaticity is a phenomenon which is not guaranteed to happen in open systems even for very slowly varying interactions. Indeed, from Theorems \[t2\] and \[t3\], possible pictures of such system evolutions include the decoupling of Jordan blocks only over a finite time interval (disappearing afterwards), or even the case of complete absence of decoupling for any time $T$, which implies no adiabatic evolution whatsoever.
The quantity $\omega _{\beta \alpha}$, by Eq. (\[eq:omab\]), clearly plays the role of the spectral gap in the open-system case. There are two noteworthy differences compared to the closed-system case. First, the $\omega _{\beta \alpha }$ can be complex. This implies that the differences in decay rates, and not just in energies, play a role in determining the relevant gap for open systems. Second, for multidimensional Jordan blocks, the terms $\omega_{\beta\alpha}$ depend on distinct powers for distinct pairs $\beta,\alpha$. Thus certain $\omega _{\beta \alpha }$ (those with the higher exponents) will play a more dominant role than others.
The conditions for adiabaticity are best illustrated further via examples, one of which we provide next.
Example: The adiabatic evolution of an open quantum two-level system {#example}
====================================================================
In order to illustrate the consequences of open quantum system adiabatic dynamics, let us consider a concrete example that is analytically solvable. Suppose a quantum two-level system, with internal Hamiltonian $H=(\omega
/2)\,\sigma _{z}$, and described by the master equation (\[eq:t-Lind\]), is subjected to two sources of decoherence: spontaneous emission $\Gamma
_{1}(t)=\epsilon (t)\,\sigma _{-}$ and bit flips $\Gamma _{2}(t)=\gamma
(t)\,\sigma _{x}$, where $\sigma _{-}=\sigma _{x}-i\sigma _{y}$ is the lowering operator. Writing the density operator in the basis $\left\{
I_{2},\sigma _{x},\sigma _{y},\sigma _{z}\right\} $, i.e., as $\rho =(I_{2}+\overrightarrow{v}\cdot \overrightarrow{\sigma })/2$, Eq. (\[le\]) results in $$|{\dot{\rho}}(t)\rangle \rangle =\frac{1}{2}\left(
\begin{array}{c}
0 \\
-\omega v_{y}-2\epsilon ^{2}v_{x} \\
\omega v_{x}-2(\gamma ^{2}+\epsilon ^{2})v_{y} \\
-4\epsilon ^{2}-2(\gamma ^{2}+2\epsilon ^{2})v_{z} \\
\end{array}\right) =\frac{1}{2}\left(
\begin{array}{c}
0 \\
{\dot{v}}_{x} \\
{\dot{v}}_{y} \\
{\dot{v}}_{z} \\
\end{array}\right) , \label{rhoex}$$where $v_{x}(t)$, $v_{y}(t)$, and $v_{z}(t)$ are real functions providing the coordinates of the quantum state $|\rho (t)\rangle \rangle $ on the Bloch sphere. The Lindblad superoperator is then given by $$\mathcal{L}(t)=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & -2\,\epsilon ^{2} & -\omega & 0 \\
0 & \omega & -2\epsilon ^{2}-2\gamma ^{2} & 0 \\
-4\,\epsilon ^{2} & 0 & 0 & -4\epsilon ^{2}-2\gamma ^{2}\end{array}\right) . \label{lmat}$$In order to exhibit an example that has a nontrivial Jordan block structure, we now assume $\gamma ^{2}=\omega $ (which can in practice be obtained by measuring the relaxation rate $\gamma $ and correspondingly adjusting the system frequency $\omega $). We then have three different eigenvalues for $\mathcal{L}(t)$, $$\begin{aligned}
\lambda _{1} &=&0, \\
\lambda _{2} &=&-2\epsilon ^{2}-\gamma ^{2}\text{ }\mathrm{(twofold\,\, degenerate)} \\
\lambda _{3} &=&-4\epsilon ^{2}-2\gamma ^{2},\end{aligned}$$which are associated with the following three independent (unnormalized) right eigenvectors: $$|\mathcal{D}_{1}^{(0)}\rangle \rangle =\left(
\begin{array}{c}
f(\gamma ,\epsilon ) \\
0 \\
0 \\
1 \\
\end{array}\right) ,|\mathcal{D}_{2}^{(0)}\rangle \rangle =\left(
\begin{array}{c}
0 \\
1 \\
1 \\
0 \\
\end{array}\right) ,|\mathcal{D}_{3}^{(0)}\rangle \rangle =\left(
\begin{array}{c}
0 \\
0 \\
0 \\
1 \\
\end{array}\right) , \label{dex}$$with $f(\gamma ,\epsilon )=-1-(\gamma ^{2}/2\epsilon ^{2})$. Similarly, for the left eigenvectors, we find $$\begin{aligned}
\langle \langle \mathcal{E}_{1}^{(0)}| &=&\left( \frac{{}}{{}}1/f(\gamma
,\epsilon ),0,0,0\frac{{}}{{}}\right) , \nonumber \\
\vspace{0.1cm}\langle \langle \mathcal{E}_{2}^{(1)}| &=&\left( \frac{{}}{{}}0,\gamma ^{2},-\gamma ^{2},0\frac{{}}{{}}\right) , \nonumber \\
\vspace{0.1cm}\langle \langle \mathcal{E}_{3}^{(0)}| &=&\left( \frac{{}}{{}}-1/f(\gamma ,\epsilon ),0,0,1\frac{{}}{{}}\right) . \label{rex}\end{aligned}$$The Jordan form of $\mathcal{L}(t)$ can then be written as $$\mathcal{L}_{J}(t)=\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & -2\epsilon ^{2}-\gamma ^{2} & 1 & 0 \\
0 & 0 & -2\epsilon ^{2}-\gamma ^{2} & 0 \\
0 & 0 & 0 & -4\epsilon ^{2}-2\gamma ^{2}\end{array}\right) , \label{ljmat}$$(observe the two-dimensional middle Jordan block), with the transformation matrix leading to the Jordan form being $$S(t)=\left(
\begin{array}{cccc}
f(\gamma ,\epsilon ) & 0 & 0 & 0 \\
0 & 1 & \gamma ^{-2} & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 1\end{array}\right) . \label{smat}$$Note that, in our example, each eigenvalue of $\mathcal{L}(t)$ is associated to a unique Jordan block, since we do not have more than one independent eigenvector for each $\lambda _{\alpha }$. We then expect that the adiabatic regime will be characterized by an evolution which can be decomposed by single Jordan blocks. In order to show that this is indeed the case, let us construct a right and left basis preserving the block structure. To this end, we need to introduce a right and a left vector for the Jordan block related to the eigenvalue $\lambda _{2}$. As in Eqs. (\[dj\]) and (\[rj\]), we define the additional states as $$|\mathcal{D}_{2}^{(1)}\rangle \rangle _{J}=\left(
\begin{array}{c}
0 \\
0 \\
1 \\
0 \\
\end{array}\right) ,\,\,\,_{J}\langle \langle \mathcal{E}_{2}^{(0)}|=\left( \frac{{}}{{}}0,1,0,0\frac{{}}{{}}\right) . \label{drjex}$$We then obtain, after applying the transformations $|\mathcal{D}_{2}^{(1)}(t)\rangle \rangle =S(t)\,|\mathcal{D}_{2}^{(1)}\rangle \rangle
_{J}$ and $\langle \langle \mathcal{E}_{2}^{(0)}(t)|=\,_{J}\langle \langle
\mathcal{E}_{2}^{(0)}|\,S^{-1}(t)$, the right and left vectors $$|\mathcal{D}_{2}^{(1)}\rangle \rangle =\left(
\begin{array}{c}
0 \\
\gamma ^{-2} \\
0 \\
0 \\
\end{array}\right) ,\,\,\,\langle \langle \mathcal{E}_{2}^{(0)}|=\left( \frac{{}}{{}}0,0,1,0\frac{{}}{{}}\right) . \label{drex}$$Expanding the coherence vector in the basis $\left\{ |\mathcal{D}_{\alpha
}^{(j)}(t)\rangle \rangle \right\} $, as in Eq. (\[rtime\]), the master equation (\[le\]) yields $$\begin{aligned}
&&\hspace{-0.3cm}f(\gamma ,\epsilon )\,{\dot{r}}_{1}^{(0)}+{\dot{f}}(\gamma
,\epsilon )\,r_{1}^{(0)}=0, \nonumber \\
&&\hspace{-0.3cm}{\dot{r}}_{2}^{(0)}-2\frac{{\dot{\gamma}}}{\gamma ^{3}}r_{2}^{(1)}+\frac{{\dot{r}}_{2}^{(1)}}{\gamma ^{2}}=-\left( 2\epsilon
^{2}+\gamma ^{2}\right) r_{2}^{(0)}-2\frac{\epsilon ^{2}}{\gamma ^{2}}r_{2}^{(1)}, \nonumber \\
&&\hspace{-0.3cm}{\dot{r}}_{2}^{(0)}=r_{2}^{(1)}-\left( 2\epsilon
^{2}+\gamma ^{2}\right) r_{2}^{(0)}, \nonumber \\
&&\hspace{-0.3cm}{\dot{r}}_{1}^{(0)}+{\dot{r}}_{3}^{(0)}=\left( -4\epsilon
^{2}-2\gamma ^{2}\right) \,r_{3}^{(0)}. \label{glee}\end{aligned}$$It is immediately apparent from Eq. (\[glee\]) that the block related to the eigenvalue $\lambda _{2}$ is already decoupled from the rest. On the other hand, by virtue of the last equation, the blocks associated to $\lambda _{1}$ and $\lambda _{3}$ are coupled, implying a mixing in the evolution of the coefficients $r_{1}^{(0)}(t)$ and $r_{3}^{(0)}(t)$. The role of adiabaticity will then be the suppression of this coupling. We note that in this simple example, the coupling between $r_{1}^{(0)}(t)$ and $r_{3}^{(0)}(t)$ would in fact also be eliminated by imposing the probability conservation condition $\mathrm{Tr}\rho =1$. However, in order to discuss the effects of the adiabatic regime, let us permit a general time evolution of all coefficients (i.e., probability leakage) and analyze the adiabatic constraints. The validity condition for adiabatic dynamics, given by Eq. (\[vch\]), yields $$\left\vert \frac{\langle \langle \mathcal{E}_{3}^{(0)}|\,{\dot{\mathcal{L}}}\,|\mathcal{D}_{1}^{(0)}\rangle \rangle }{\lambda _{1}-\lambda _{3}}\right\vert =\left\vert \frac{2\gamma ^{2}{\dot{\epsilon}}/\epsilon -2\gamma
{\dot{\gamma}}}{\gamma ^{2}+2\epsilon ^{2}}\right\vert \ll
\left\vert \lambda_{1}-\lambda_{3} \right\vert.
\label{exvc}$$ We first note that we have here the possibility of an adiabatic evolution even without ${\dot{\mathcal{L}}}(t)\approx 0$ in general (i.e., for all its matrix elements). Indeed, solving $\gamma ^{2}{\dot{\epsilon}}/\epsilon
=\gamma {\dot{\gamma}}$, Eq. (\[exvc\]) implies that independent evolution in Jordan blocks will occur for $\epsilon (t)\propto \gamma (t)$. Since $f(\gamma ,\epsilon )=-1-(\gamma ^{2}/2\epsilon ^{2})$ is then constant in time, it follows, from Eq. (\[glee\]), that $r_{1}^{(0)}(t)$ is constant in time, which in turn ensures the decoupling of $r_{1}^{(0)}(t)$ and $r_{3}^{(0)}(t)$. In this case, it is a *dynamical symmetry* (constancy of the ratio of magnitudes of the spontaneous emission and bit-flip processes), rather than the general slowness of ${\dot{\mathcal{L}}}(t)$, that is responsible for the adiabatic behavior. The same conclusion is also obtained from the adiabatic condition (\[vc\]). Of course, Eq. (\[exvc\]) is automatically satisfied if $\mathcal{L}(t)$ is slowly varying in time, which means ${\dot{\gamma}}(t)\approx 0$ and ${\dot{\epsilon}}(t)\approx 0$. Assuming this last case, the following solution is found: $$\begin{aligned}
&&r_{1}^{(0)}(t)=r_{1}^{(0)}(0), \nonumber \\
&&r_{2}^{(0)}(t)=\left[ r_{2}^{(1)}(0)\,t+r_{2}^{(0)}(0)\right] \,{e}^{(-2\epsilon ^{2}-\gamma ^{2})\,t}, \nonumber \\
&&r_{2}^{(1)}(t)=r_{2}^{(1)}(0)\,{e}^{(-2\epsilon ^{2}-\gamma ^{2})\,t},
\nonumber \\
&&r_{3}^{(0)}(t)=r_{3}^{(0)}(0)\,{e}^{(-4\epsilon ^{2}-2\gamma ^{2})\,t}.
\label{solex}\end{aligned}$$It is clear that the evolution is independent in the three distinct Jordan blocks, with functions $r_{\alpha }^{(i)}(t)$ belonging to different sectors evolving separately. The only mixing is between $r_{2}^{(0)}(t)$ and $r_{2}^{(1)}(t)$, which are components of the same block. The decoupling of the coefficients $r_{1}^{(0)}(t)$ and $r_{3}^{(0)}(t)$ in the adiabatic limit is exhibited in Fig. \[f1\]. Observe that the adiabatic behavior is recovered as the dependence of $\epsilon (t)$ and $\gamma (t)$ on $t$ becomes negligible.
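The block structure and the solution (\[solex\]) can be reproduced symbolically. The following sympy sketch uses the sample constant rates $\epsilon =3/10$ and $\gamma =1/2$ (assumed values, with $\omega =\gamma ^{2}$ as in the text):

```python
import sympy as sp

t = sp.symbols("t", positive=True)
eps, gam = sp.Rational(3, 10), sp.Rational(1, 2)   # sample decoherence rates

# Lindblad superoperator of Eq. (lmat) with omega = gamma**2.
L = sp.Matrix([
    [0,          0,         0,                      0],
    [0,         -2*eps**2, -gam**2,                 0],
    [0,          gam**2,   -2*eps**2 - 2*gam**2,    0],
    [-4*eps**2,  0,         0,     -4*eps**2 - 2*gam**2],
])

# J reproduces Eq. (ljmat) up to block ordering: eigenvalue 0, a 2x2 block
# with -2*eps**2 - gam**2, and the simple eigenvalue -4*eps**2 - 2*gam**2.
P, J = L.jordan_form()
print(J)

# For constant rates, |rho_J(t)>> = exp(J t) |rho_J(0)>>: each Jordan block
# evolves independently, and only r_2^(0), r_2^(1) mix (the t*exp entry of
# the 2x2 block), reproducing Eq. (solex).
print(sp.simplify((J * t).exp()))
```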
The original coefficients $v_{x}$, $v_{y}$, and $v_{z}$ in the Bloch sphere basis $\left\{ I_{2},\sigma _{x},\sigma _{y},\sigma _{z}\right\} $ can be written as combinations of the functions $r_{\alpha }^{(i)}$. Equation (\[solex\]) yields $$\begin{aligned}
v_{x}(t) &=&\left( v_{x}(0)+(v_{x}(0)-v_{y}(0))\gamma ^{2}\,t\right)
\,e^{(-2\epsilon ^{2}-\gamma ^{2})\,t}, \nonumber \\
v_{y}(t) &=&\left( v_{y}(0)+(v_{x}(0)-v_{y}(0))\gamma ^{2}\,t\right)
\,e^{(-2\epsilon ^{2}-\gamma ^{2})\,t}, \nonumber \\
v_{z}(t) &=&\left( v_{z}(0)-\frac{1}{f(\gamma ,\epsilon )}\right)
e^{(-4\epsilon ^{2}-2\gamma ^{2})\,t}+\frac{1}{f(\gamma ,\epsilon )}
\label{evs}\end{aligned}$$with the initial conditions $$\begin{aligned}
&&v_{x}(0)=r_{2}^{(0)}(0)+\gamma ^{-2}r_{2}^{(1)}(0), \nonumber \\
&&v_{y}(0)=r_{2}^{(0)}(0), \nonumber \\
&&v_{z}(0)=\frac{1}{f(\gamma ,\epsilon )}+r_{3}^{(0)}(0),\end{aligned}$$where now $r_{1}^{(0)}(0)=1/f(\gamma ,\epsilon )$ has been imposed in order to satisfy the $\mathrm{Tr}\rho =1$ normalization condition. The Bloch sphere is then characterized by an asymptotic decay of the Bloch coordinates $v_{x}$ and $v_{y}$, with $v_{z}$ approaching the constant value $1/f(\gamma
,\epsilon )$.
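A direct numerical integration of Eq. (\[rhoex\]) confirms this asymptotic behavior. The sketch below uses forward Euler with the same assumed constant rates as above:

```python
import numpy as np

eps, gam = 0.3, 0.5
f = -1.0 - gam**2 / (2.0 * eps**2)

def rhs(v):
    # Bloch-vector equations of motion from Eq. (rhoex), with omega = gamma**2.
    vx, vy, vz = v
    return np.array([
        -gam**2 * vy - 2 * eps**2 * vx,
        gam**2 * vx - 2 * (gam**2 + eps**2) * vy,
        -4 * eps**2 - 2 * (gam**2 + 2 * eps**2) * vz,
    ])

v, dt = np.array([0.5, -0.2, 0.8]), 1e-3
for _ in range(200_000):                    # integrate to t = 200
    v += dt * rhs(v)

print("v(200) =", v, " expected v_z ->", 1.0 / f)   # v_x, v_y -> 0
```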
Finally, let us comment on the analysis of adiabaticity in terms of the conditions derived in Sec. \[sec:tot-t\] for the total time of evolution. Looking at the matrix elements of ${\dot{\mathcal{L}}}(t)$, it can be shown that, for $\beta\ne \alpha$, the only term $V_{\beta \alpha }^{(ijp)}$ defined by Eq. (\[vbapj\]) which can be *a priori* nonvanishing is $V_{13}$. Therefore, we have to consider the energy difference $\omega _{13}=4\epsilon ^{2}+2\gamma ^{2}$. Assuming that the decoherence parameters $\epsilon $ and $\gamma $ are nonvanishing, we have $\omega _{13}>0$ and hence $\Omega _{13}>0$. This signals the breakdown of adiabaticity, unless $V_{13}=0$. However, as we saw above, $V_{13}\propto \langle \langle
\mathcal{E}_{3}^{(0)}|\,{\dot{\mathcal{L}}}\,|\mathcal{D}_{1}^{(0)}\rangle
\rangle =2\gamma ^{2}{\dot{\epsilon}}/\epsilon -2\gamma {\dot{\gamma}}$ and thus $V_{13}=0$ indeed implies the adiabaticity condition $\epsilon
(t)\propto \gamma (t)$, in agreement with the results obtained from Theorem \[t1\]. In this (dynamical symmetry) case adiabaticity holds exactly, while if $\epsilon (t)$ is *not* proportional to $\gamma (t)$, then there can be no adiabatic evolution. Thus, the present example, despite nicely illustrating our concept of adiabaticity in open systems, does not present us with the opportunity to derive a nontrivial condition on $T$; such more general examples will be discussed in a future publication.
Conclusions and outlook {#conclusions}
=======================
The concept of adiabatic dynamics is one of the pillars of the theory of closed quantum systems. Here we have introduced its generalization to open quantum systems. We have shown that under appropriate slowness conditions the time-dependent Lindblad superoperator decomposes into dynamically decoupled Jordan blocks, which are preserved under the adiabatic dynamics. Our key results are summarized in Theorems \[t1\] and \[t3\], which state sufficient (and necessary in the case of Theorem \[t3\]) conditions for adiabaticity in open quantum systems. In particular, Theorem \[t3\] also provides the condition for breakdown of the adiabatic evolution. This feature has no analog in the more restricted case of closed quantum systems. It follows here from the fact that the Jordan eigenvalues of the dynamical superoperator – the generalization of the real eigenvalues of a Hamiltonian – can have an imaginary part, which can lead to unavoidable transitions between Jordan blocks. It is worth mentioning that all of our results have been derived considering systems exhibiting gaps in the Lindblad eigenvalue spectrum. It would be interesting to understand the notion of adiabaticity when no gaps are available, as similarly done for the closed case in Refs. [@Avron:98; @Avron:99]. Moreover, two particularly intriguing applications of the theory presented here are to the study of geometric phases in open systems and to quantum adiabatic algorithms, both of which have received considerable recent attention [@Farhi:00; @Farhi:01; @Thomaz:03; @Carollo:04; @Sanders:04]. We leave these as open problems for future research.
M.S.S. gratefully acknowledges the Brazilian agency CNPq for financial support. D.A.L. gratefully acknowledges financial support from NSERC and the Sloan Foundation. This material is partially based on research sponsored by the Defense Advanced Research Projects Agency under the QuIST program and managed by the Air Force Research Laboratory (AFOSR), under agreement F49620-01-1-0468 (to D.A.L.).
[10]{}
, [Z. Phys.]{} [**51**]{}, 165 (1928).
, [J. Phys. Soc. Jpn.]{} [**5**]{}, 435 (1950).
, [*[Quantum Mechanics]{}*]{} ([North-Holland]{}, [Amsterdam]{}, 1962), Vol. 2.
, Phys. Z. Sowjetunion [**2**]{}, 46 (1932).
, Proc. R. Soc. London Ser. A [**137**]{}, 696 (1932).
, Phys. Rev. [**84**]{}, 350 (1951).
, Proc. R. Soc. London Ser. A [**392**]{}, 45 (1984).
, Phys. Rev. Lett. [**52**]{}, 2111 (1984).
, Phys. Lett. A [**264**]{}, 94 (1999).
, Phys. Rev. A [**61**]{}, 010305 (2000).
, Nature (London) [**403**]{}, 869 (2000).
, Phys. Rev. A [**62**]{}, 052318 (2000).
, Science [**292**]{}, 1695 (2001).
, Phys. Rev. A [**66**]{}, 022102 (2002).
, Phys. Rev. Lett. [**90**]{}, 028301 (2003).
, e-print quant-ph/0001106.
, Science [**292**]{}, 472 (2001).
, [*[The Theory of Open Quantum Systems]{}*]{} ([Oxford University Press]{}, Oxford, 2002).
, in [*[Irreversible Quantum Dynamics]{}*]{}, Vol. 622 of [*[Lecture Notes in Physics]{}*]{}, edited by [F. Benatti and R. Floreanini]{} ([Springer]{}, [Berlin]{}, 2003), p. 83 \[e-print quant-ph/0301032 (2003)\].
, Ann. Phys. (N.Y.) [**64**]{}, 311 (1971).
, [*[Quantum Dynamical Semigroups and Applications]{}*]{}, No. 286 in [*[Lecture Notes in Physics]{}*]{} ([Springer-Verlag]{}, Berlin, 1987).
, J. Math. Phys. [**17**]{}, 821 (1976).
, Commun. Math. Phys. [**48**]{}, 119 (1976).
, Chem. Phys. [**268**]{}, 35 (2001).
K. Lendi, Phys. Rev. A [**33**]{}, 3358 (1986).
H.-P. Breuer, Phys. Rev. A [**70**]{}, 012106 (2004).
, [*[Matrix Analysis]{}*]{} ([Cambridge University Press]{}, [Cambridge, UK]{}, 1999).
, [*[Dynamical Invariants, Adiabatic Approximation, and the Geometric Phase]{}*]{} ([Nova Science Publishers]{}, [New York]{}, 2001).
J. E. Avron and A. Elgart, Phys. Rev. A [**58**]{}, 4300 (1998).
J. E. Avron and A. Elgart, Commun. Math. Phys. [**203**]{}, 445 (1999).
, [*[Quantum Mechanics: Fundamentals]{}*]{} ([Springer]{}, [New York]{}, 2003).
, [*[Fourier Series and Boundary Value Problems]{}*]{} ([McGraw-Hill]{}, [New York]{}, 1993).
The Riemann-Lebesgue lemma can be stated through the following proposition: Let $f: [a,b]
\rightarrow {\bf C}$ be an integrable function on the interval $[a,b]$. Then $\int_a^b\,dx\,e^{inx}f(x)\rightarrow 0$ as $n\rightarrow \pm \infty$.
, Phys. Rev. A [**68**]{}, 062322 (2003).
G. Kimura, Phys. Lett. A [**314**]{}, 339 (2003).
G. Kimura, J. Phys. Soc. Jpn. Suppl. C [**72**]{}, 185 (2003).
, [*[Dynamical Groups and Spectrum Generating Algebras: Vol. I]{}*]{} (World Scientific, Singapore, 1988).
, J. Phys. A [**36**]{}, 7461 (2003).
, Phys. Rev. Lett. [**92**]{}, 020402 (2004).
I. Kamleitner, J. D. Cresser, and B. C. Sanders, Phys. Rev. A [**70**]{}, 044103 (2004). | {
"perplexity_score": 500.1,
"pile_set_name": "ArXiv"
} |
Wood County Airport (Ohio)
Wood County Airport is a county-owned, public-use airport located one nautical mile (1.85 km) northeast of the central business district of Bowling Green, in Wood County, Ohio, United States on the campus of Bowling Green State University. It is owned by the Wood County Airport Authority and is also known as Wood County Regional Airport (WCRA). As per the FAA's National Plan of Integrated Airport Systems for 2009-2013, it is classified as a general aviation airport.
History
Bricker Field
The airport was established in 1939 and purchased by Bowling Green State University in 1942 for use in the V-12 Navy College Training Program. On its acquisition it was named Bricker Field after Ohio governor John W. Bricker. After the war, traffic at the airport decreased well below capacity. A Lockheed T-33 was added as a gate guardian between 1965 and 1967. Bricker Field was transferred from the university to the local government in 1970.
Accidents and incidents
A Stearman biplane crashed on a nearby farm during an attempted emergency landing on July 31, 1946, at 6:35 p.m., killing its pilot.
A Vultee BT-13 Valiant crashed during landing at 3:30 p.m. on September 23, 1950, killing its pilot, a Custar resident who was not a student.
A Cherokee 140 crashed into the Frazee Apartments a half mile from the airport on May 1, 1982, at 10:40 a.m. Four people aboard the plane were killed, but there were no ground fatalities. In the immediate aftermath, the crash was attributed to the plane being overloaded for a flight bound for Columbus. The victims included the pilot, who was a BGSU junior; two people from Napoleon, Ohio; and a student of Northwest State Community College.
The roof of a hangar was destroyed by a storm in July 2003. Several planes were damaged.
$60,000 was stolen from the airport between May 2007 and March 2008.
The right landing gear of a Piper PA-28R-201 collapsed while the aircraft was taxiing after landing on September 13, 2016, at 8:40 a.m. The aircraft was damaged, but its two occupants were uninjured.
An MD-369 helicopter performing power line inspection for FirstEnergy crashed at 11:36 a.m. on January 15, 2018, 78 minutes after takeoff, killing the power line inspector and the pilot. The cause of the crash was identified as a loss of engine power at low altitude, with winter weather a contributing factor.
Facilities and aircraft
Wood County Airport sits at an elevation of 673 feet (205 m) above mean sea level. It has two asphalt-paved runways: 10/28 is 4,199 by 75 feet (1,280 x 23 m) and 18/36 is 2,628 by 50 feet (801 x 15 m).
For the 12-month period ending February 23, 2007, the airport had 27,405 aircraft operations, an average of 75 per day: 98.5% general aviation, 1% air taxi and 0.5% military. At that time there were 47 aircraft based at this airport: 87% single-engine, 9% multi-engine and 4% helicopter.
The university fleet at this airport consisted of 19 aircraft in 2019, all either single or multi engine propeller aircraft.
The airport has an AWOS IIIP/T in operation.
The Bowling Green Flight Center is a 16,800-square-foot aviation education facility at the airport, run as part of the Bowling Green State University aviation program. It opened on April 27, 2015.
References
External links
Wood County Regional Airport
Aerial photo as of 22 March 1994 from USGS The National Map
Category:Airports in Ohio
Category:Transportation in Wood County, Ohio
Category:Buildings and structures in Wood County, Ohio
Category:University and college airports | {
"perplexity_score": 124.4,
"pile_set_name": "Wikipedia (en)"
} |
The Wonderful Gatsby: Key Treasure is a hidden object adventure game in which you take on the role of a young female architect who has been given the task of restoring the old Gatsby residence to its former glory.
You travel to the mansion and, while looking around, find out that Gatsby has been dealing with some very dangerous men. Your friend gets caught up in all this, and the criminals kidnap her.
You will have to search every inch of the mansion and uncover its dark secrets, all while trying to get your friend back. Use your puzzle-solving skills to succeed!
Note: This is a downloader for the game. Once run, the software will automatically download and install the game on your computer.
To buy the game you need to sign up with Big Fish. Registration is absolutely free, and the account lets you earn free games, play community games, take part in game forums, and write reviews, as well as giving you access to major discounts.
Here are some key features of "The Wonderful Gatsby: Key Treasure":
"perplexity_score": 997.4,
"pile_set_name": "Pile-CC"
} |
[Clinical use of transiently evoked otoacoustic emissions in therapeutic follow-up].
The suitability of transiently evoked otoacoustic emissions (TEOAE) for the observation of changes in inner ear function was examined in 28 normal hearing subjects and 25 patients with sudden deafness. The measurements were performed with the ILO88 system using a nonlinear sequence of click stimuli. The TEOAEs of both ears of each control subject were measured in 7 sessions at 4 different stimulus levels. The evaluation involved visual inspection of the time-dependent records and an analysis of response amplitudes and cross correlational coefficients. Minor changes in curve appearance and variations of the response level in the order of +/- 4 dB could be detected which were attributed to the noise floor and variations in probe application. Within the patient group, at least three TEOAE measurements were performed during the course of rheologic therapy. Pure-tone audiograms were recorded prior to each TEOAE session for comparison. The most significant result from the data analysis is that response amplitudes increase significantly in almost all cases in which pure-tone audiometry reveals normalization of hearing threshold. This indicates a recovery of outer hair cells. Furthermore, comparison of TEOAE records obtained in different sessions shows an initial growth of correlation coefficients followed by a final saturation. These findings indicate that the combination of amplitude evaluation and correlation analysis is suitable for observing changes in intensity and waveform of cochlear emissions. | {
"perplexity_score": 593.6,
"pile_set_name": "PubMed Abstracts"
} |
Defending Marriage by Defining Marriage
8/1/2013
Alton J. Pelowski
Today, when the issue of same-sex marriage is discussed in the courts, academy, media and public square, the debate is usually framed in terms of “marriage equality.” But as many have pointed out, including Justice Samuel Alito in his United States v. Windsor dissent, such rhetoric belies a deeper, underlying debate: the question of what marriage actually is.
From this perspective, Columbia editor Alton Pelowski recently interviewed Ryan T. Anderson, a fellow at the Heritage Foundation in Washington, D.C., and founder and editor of Public Discourse, the online journal of the Witherspoon Institute in Princeton, N.J. Anderson is co-author, with Sherif Gergis and Robert P. George, of What is Marriage? Man and Woman: A Defense (Encounter Books, 2012).
Columbia: You point out that in the marriage debate today, there are two fundamentally different understandings of marriage. How do these views differ?
Ryan Anderson: As Justice Alito helpfully explained, there’s the conjugal conception of marriage and then there’s the consent-based, or revisionist, view of marriage. In the consent-based view, marriage is simply an intense, loving relationship between consenting adults. It is gender blind and not based on the sexual differences between man and woman.
Our account of marriage, the conjugal account, is that marriage unites a man and a woman to be mother and father to any children that they conceive. It’s based on a more comprehensive union a union of hearts and minds, but also a union of bodies. What’s significant here is that the act that unites a man and a woman is the same act that creates new life.
Marriage needs to be sexually exclusive because this type of relationship can produce new life. It needs to be a permanent because it’s a comprehensive union and because children need a stable environment with a mother and father. With the revisionist view, on the other hand, there’s no real justification as to why it should be between only two people, exclusive or permanent.
For the state, neutrality on this question is really impossible. The law will enshrine one vision of marriage or another, and no matter what, one vision of the good, the true and the beautiful will be advanced. So the question is, which vision is the true vision? We want to get law reflecting reality as much as possible.
Columbia: How does the issue of equality relate to the definition of marriage?
Ryan Anderson: In some sense, everyone in the debate is in favor of equality. We all want the government to allow people to enter marriages equally. The question, then, is what is marriage? Only if we know what marriage is can we know if the law is treating marriages and spouses equally or not.
Every marriage policy will draw a line between what is and what isn’t a marriage. The revisionist account of marriage is no different; it draws a line at the number two. But if equality demands redefining marriage to include the same-sex couple, then does equality demand redefining marriage to include a three-person relationship? This is an open question in some people’s minds. The government needs to justify why it recognizes a certain type of relationship as marriage and not others.
Columbia: Why does the state have an interest in regulating marriage in the first place?
Ryan Anderson: Government is not in the marriage business because it cares about the love lives of consenting adults, apart from the fact that a certain type of loving relationship produces children. So the issue here is the government’s interest in ensuring that every child’s right to a mother and father is protected in the least coercive and least intrusive way possible.
Instead of encouraging a mother and father to raise their children as husband and wife, the government could try to raise children themselves, as with Plato’s thought experiment in The Republic. But with the breakdown of marriage in recent decades, we’ve seen how the welfare state has grown with disastrous results for children. We have seen an increase in crime, prison population and child poverty, and a decrease in social mobility.
Social justice and freedom are better served by the government getting marriage right, so that civil society can do the work that government can’t. When the marriage culture falls apart, we are left with a big government welfare program to pick up the pieces.
Columbia: What about the case of infertile couples and the fact that even heterosexual couples today dissociate marriage from procreation?
Ryan Anderson: No one has ever thought that every marriage will produce a child. We’ve never had fertility requirements in marriage law. But everyone knew that every child was the result of a male-female union. Marriage laws and policies maximize the likelihood that a child will grow up to know the love and care of a mother and father.
Public policy is based on the rule rather than the exception to the rule. The government’s interest is mainly in all of the marriages that will produce children and in male-female relationships that will produce children but are not yet marriages.
The question before us is this: Do we want to make marriage reforms that encourage marriage between husbands and wives, mothers and fathers, or do we want to double down on the idea that marriage is really just about adult desires?
Columbia: What is at stake? What effect could legally changing the definition of marriage have?
Ryan Anderson: First of all, redefining marriage makes it all about the desires of adults and eliminates from law and public policy any institution that upholds the ideal that a child deserves a mother and a father.
Secondly, the redefinition of marriage won’t stop here. If we reduce marriage to be just about someone you love, and see the male-female aspect as irrational or arbitrary, what’s magical about the number two?
The principled reason for why marriage is understood to be monogamous, sexually exclusive and permanent is precisely based on the male-female dimension. Apart from that, various scholars and activists have argued that marriage should just be about a contract between consenting adults. There’s no reason, they say, that it couldn’t be a temporary contract that can be renewed. Likewise, you see people saying that monogamy is unnatural and that extramarital affairs should be allowed, provided there’s no deceit or coercion.
Such proposals are a nightmare for the public policy interest, which is to get men and women to commit to each other permanently and exclusively. A greater number of sexual partners and short-lived relationships brings a greater chance of fatherless children and fragmented families.
The third consequence of redefining marriage relates to religious liberty. We’ve already seen Catholic Charities forced out of the adoption services that they have provided in places like Massachusetts, Illinois and Washington, D.C., because they wanted to place the children in their care in homes with a mom and dad. We have also seen florists, bakers, photographers and innkeepers who have been sued for refusing to participate in same-sex weddings.
Columbia: Why do you think that public opinion has swayed in support of same-sex marriage in recent years?
Ryan Anderson: We haven’t been making the argument. It’s not surprising that one side has more influence when it is well-organized, well-funded and very outspoken, while the other side is largely silent.
It’s also the fact that the past 40 years have been a nightmare for marriage in general. Same-sex marriage is only plausible in a world that has already done so much damage to marriage and human sexuality. The elimination of the male-female aspect of marriage follows the sexual revolution’s train of bad consequences: pornography, non-marital sex, extramarital sex, non-marital childbearing, divorce and so on. Young people don’t hear arguments in favor of the conjugal view of marriage, and they haven’t seen it lived out.
Columbia: How do you respond to those who dismiss the conjugal view of marriage as arbitrary and irrational, in part because it is associated with religion?
Ryan Anderson: Religious people also have views about things like murder and property rights. The question is whether or not the view itself commands rational support.
Consider all of the great thinkers who have considered the question of marriage: ancient Greeks and Romans; leaders of Judaism, Christianity and Islam; Enlightenment thinkers like Immanuel Kant and John Locke; Eastern thinkers like Mahatma Gandhi. These thinkers disagree about so much in their philosophies, theologies and political theories, but they all agree that marriage is a male-female institution. It wasn’t until the year 2000 that any political community on the face of the earth defined marriage as anything other than a male-female relationship.
To say that the conjugal view of marriage is somehow irrational and arbitrary belies history and the reasoned arguments that support it.
Columbia: How do you recommend that people who hold the conjugal view of marriage relate to homosexual persons?
Ryan Anderson: I think this is the big question going forward, and it’s something that younger generations have to wrestle with in a way that previous generations have not. How do we show love and respect to our gay and lesbian friends, family members, and fellow citizens without redefining marriage? One cannot deny that over the course of American history gays and lesbians have been mistreated and abused in various ways, but marriage law is not one of those ways.
We need to find an appealing way of presenting the truth of marriage to our gay friends and family members, while affirming real opportunities for meaningful relationships and human flourishing. In previous times and cultures, people better understood close, healthy friendships; they did not seek emotional fulfillment only through marriage. Recognizing the uniqueness of marriage opens up broad horizons of possibility for deep relationships that are non-marital.
Columbia: What role can Knights of Columbus and other concerned citizens of good will play in preserving and promoting marriage in society?
Ryan Anderson: The first thing that they can do is live out the truth about marriage and human sexuality in their own families. Be faithful husbands and faithful wives. Be good mothers and good fathers. Young people, live out the virtue of chastity and prepare yourself now for your future marriage. Long before the debate we are facing today, marriage was falling apart because heterosexuals bought into a false, liberal ideology about sex.
The second thing is to work to protect religious liberty with regard to marriage law. We must make sure our elected officials and fellow citizens respect institutions and individuals who believe that marriage is between a man and a woman.
The third thing is to work now at making the case for marriage with increased vigor and commitment. We need arguments in social media, entertainment and broader culture, in as many different ways as possible, that explain what marriage is and why it matters.
"perplexity_score": 260.6,
"pile_set_name": "Pile-CC"
} |
John Reed Named BND Commission Chairman
BROWNSVILLE, Texas—John Reed was unanimously elected to serve as chairman of the Brownsville Navigation District (BND) during the board’s meeting Wednesday, May 16, 2018. Local businessman Esteban Guerra also was sworn in as the BND’s newest commissioner, along with John Wood, who served as the previous chairman and was reelected to his second four-year term on the commission on May 5.
Reed is chairman for two years and continues to serve alongside BND Vice Chairman Sergio Tito Lopez and Secretary Ralph Cowen, who were also unanimously elected to serve as part of the executive board.
Outgoing Commissioner Carlos Masso, who did not seek reelection, welcomed Commissioner Guerra by placing a Port of Brownsville ceremonial lapel pin onto his jacket. Masso served as a BND commissioner for 12 years, including as chairman 2008-2010. | {
"perplexity_score": 351.5,
"pile_set_name": "Pile-CC"
} |
The present disclosure relates to continuous board (e.g., wallboard) manufacturing processes and, more particularly, to a mold and a method for making a slurry distributor for the distribution of an aqueous cementitious slurry, such as aqueous calcined gypsum slurry, for example.
It is well-known to produce gypsum board by uniformly dispersing calcined gypsum (commonly referred to as “stucco”) in water to form an aqueous calcined gypsum slurry. The aqueous calcined gypsum slurry is typically produced in a continuous manner by inserting stucco and water and other additives into a mixer which contains means for agitating the contents to form a uniform gypsum slurry. The slurry is continuously directed toward and through a discharge outlet of the mixer and into a discharge conduit connected to the discharge outlet of the mixer. An aqueous foam can be combined with the aqueous calcined gypsum slurry in the mixer and/or in the discharge conduit. The stream of slurry passes through the discharge conduit from which it is continuously deposited onto a moving web of cover sheet material supported by a forming table. The slurry is allowed to spread over the advancing web. A second web of cover sheet material is applied to cover the slurry and form a sandwich structure of a continuous wallboard preform, which is subjected to forming, such as at a conventional forming station, to obtain a desired thickness. The calcined gypsum reacts with the water in the wallboard preform and sets as the wallboard preform moves down a manufacturing line. The wallboard preform is cut into segments at a point along the line where the wallboard preform has set sufficiently, the segments are flipped over, dried (e.g., in a kiln) to drive off excess water, and processed to provide the final wallboard product of desired dimensions.
Prior devices and methods for addressing some of the operational problems associated with the production of gypsum wallboard are disclosed in commonly-assigned U.S. Pat. Nos. 5,683,635; 5,643,510; 6,494,609; 6,874,930; 7,007,914; and 7,296,919, which are incorporated herein by reference.
The weight proportion of water relative to stucco that is combined to form a given amount of finished product is often referred to in the art as the “water-stucco ratio” (WSR). A reduction in the WSR without a formulation change will correspondingly increase the slurry viscosity, thereby reducing the ability of the slurry to spread on the forming table. Reducing water usage (i.e., lowering the WSR) in the gypsum board manufacturing process can yield many advantages, including the opportunity to reduce the energy demand in the process. However, spreading increasingly viscous gypsum slurries uniformly on the forming table remains a great challenge.
Furthermore, in some situations where the slurry is a multi-phase slurry including air, air-liquid slurry separation can develop in the slurry discharge conduit from the mixer. As WSR decreases, the air volume increases to maintain the same dry density. The degree of air phase separated from the liquid slurry phase increases, thereby resulting in the propensity for larger mass or density variation.
It will be appreciated that this background description has been created by the inventors to aid the reader and is not to be taken as an indication that any of the indicated problems were themselves appreciated in the art. While the described principles can, in some aspects and embodiments, alleviate the problems inherent in other systems, it will be appreciated that the scope of the protected innovation is defined by the attached claims and not by the ability of any disclosed feature to solve any specific problem noted herein. | {
"perplexity_score": 364.3,
"pile_set_name": "USPTO Backgrounds"
} |
Q:
Making column sum of adjacency matrix even.
Let $G$ be a connected graph with $V$ vertices, and say I have its adjacency matrix of order $N$. How can I make the sum of each column even?
For example, I have a graph with $4$ vertices and $4$ edges:
$\begin{bmatrix}0 & 1 & 0 &0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \end{bmatrix}$
And I want to convert it like this,
$\begin{bmatrix}0 & 0 & 0 &0 \\
1 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 1 & 0 \end{bmatrix}$
For reference, if the $i^{th}$ element of the $j^{th}$ row is $1$, then the edge is directed from $j$ to $i$.
You can reverse any edge between two vertices such that the graph remains connected. And the adjacency matrix's indexing starts from 1 rather than from 0.
A:
As mentioned in the comments, this depends on what operations you allow, because you are clearly allowing your underlying graph to change --- the graph after your transformation isn't isomorphic to the first graph.
For your first matrix, you have the directed cycle $1 \to 2 \to 3 \to 4 \to 1$.
For your second matrix, you have the edges $2 \to 1$, $2 \to 3$, $4 \to 1$ and $4 \to 3$.
Taken as directed graphs, these are not isomorphic (however, if you relax them to merely undirected, they are). As such, it's difficult to determine what operations you're allowing as legal transformations. | {
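To make the allowed operation concrete, here is a minimal Python sketch (mine, not part of the original answer) that checks the column sums of the question's first matrix and applies the two edge reversals ($1 \to 2$ becomes $2 \to 1$, and $3 \to 4$ becomes $4 \to 3$) that produce the second matrix:

def column_sums(A):
    return [sum(row[i] for row in A) for i in range(len(A))]

A = [[0, 1, 0, 0],   # edge 1 -> 2
     [0, 0, 1, 0],   # edge 2 -> 3
     [0, 0, 0, 1],   # edge 3 -> 4
     [1, 0, 0, 0]]   # edge 4 -> 1

print(column_sums(A))          # [1, 1, 1, 1] -- every column sum is odd

# Reverse edges 1 -> 2 and 3 -> 4 (0-indexed pairs (0, 1) and (2, 3)):
# a reversal moves a 1 from A[j][i] to A[i][j], i.e. from column i to column j.
for j, i in [(0, 1), (2, 3)]:
    A[j][i], A[i][j] = 0, 1

print(column_sums(A))          # [2, 0, 2, 0] -- every column sum is even

Note that each reversal flips the parity of exactly two column sums (the tail's and the head's), which is why two well-chosen reversals suffice to fix all four odd columns here.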
"perplexity_score": 465.8,
"pile_set_name": "StackExchange"
} |
You're lucky it was just your neighbor; most would have gotten a visit from the local police. Heck, even being a drummer, I probably would have called on you if it was that late. If you truly want to play that late you really need to either get an e-kit or build one heck of a studio space to contain the sound better.
It's stuff like what you're doing, playing that late, that gives drummers a bad name...
"perplexity_score": 422.9,
"pile_set_name": "Pile-CC"
} |
Accessories
Pay your tributes to the racers and the mechanics of old with this uniquely cool wool blend driver’s cap. The cap is a perfect accessory for those who love to go on long winter drives on the weekend. Enjoy a snug and comfortable fit as you swerve through corners and cruise down the freeway. This driver’s cap is as lightweight as they come, which creates a smooth and easy fit. Made from a versatile wool blend you can rest assured that your head stays warm even on the coldest of days. The lining is made from cotton and an inner sweatband completes the cap. The brim is sewn down which helps the cap retain its original shape.
A casual design that stays true to its classic heritage, the wool Ascot cap adds a hint of sophistication to your appearance. Pair it with the right ensemble and you’re as good as royalty. The cap is made from 100% wool which is of a superior quality like no other. The ascot cap suits anybody and everybody, making it a trendsetting option for the season. In fact, it is a favorite among celebrities and aristocrats alike. Choose from a wide range of colors that are specifically developed for the season.
A blanket scarf sparks your imagination as it lets you wrap it around in various different ways to achieve many different looks. Wear this 82” x 26” cable stitch blanket scarf as an outerwear piece or as a shawl. The scarf features gorgeous metallic and gold stitching throughout its body which gives it an exquisite and rich feel. Woven in a nylon and polyester blend, it is a perfect fall and winter accessory and a must have in every woman’s wardrobe.
Here is one hat that will look fabulous on you! This Western Fedora is a stunning addition in every woman's wardrobe. With a 2” brim and the perfect blend of fabrics, this gorgeous fedora will take your style quotient up by notches. Jeans or dresses, there is probably no outfit you cannot pair this fedora with! The high crown of the hat ensures that it does not mess with your hairstyle, while the colorful jacquard band adds to its southwestern appeal, which also serves as the hat's design inspiration!
What is more perfect than a pair of gloves that molds to the shape of your hands perfectly and are so stylish that you would like to wear them everywhere? These two tone gloves are the answer to that. With their perfect cut, they fit onto your hands smoothly, and the soft, buttery lambskin leather makes them a delight to wear. The gloves are provided with a 3M Thinsulate lining that gives you added warmth and comfort without adding to the weight of the gloves. You can choose these two tone gloves to go with the coat of your choice, and to brighten up your ensemble!
A timeless design that has stayed relevant through the times and seasons, you can't afford not to have one of these stylish tweed Ivy Caps in your wardrobe! Crafted from the finest tweed by Woolrich, the ivy cap brings so much sophistication with its minimalistic flat-cap cut and fine wool blend that you'll want to wear it for more than just keeping the chill away. The cap has a handsome suede peak with an antique metal buckle on the back strap for a smart style and a comfortable fit. The Tweed Ivy Caps make a great gift for the sophisticated gentleman, and it's a good pick for you as well! MADE IN THE USA.
Leather backpacks will never go out of style. They add a dash of youth and pizzazz that everyone wants to feel. This leather backpack contains all the essentials of a handbag, starting with multiple compartments, inside zipped pockets, back slide pockets and adjustable straps. With lots of capacity and stylish design these leather backpacks are a must have in every woman's wardrobe. The bags measure 10.75 inches in length, 12 inches in height, and 5 inches in depth.
The derby, which is characterized by its stiff, rounded crown and narrow brim, is now popular among all ages. It features a satin band, feather, and hatpin, and can be worn in a dressy or casual mode. Satin lined and beautifully crafted. Stylish on women as well. Imported.
Chic is the word that comes to mind when one sees these classic suede European driving caps. Flattering on both men and women alike, these stunning caps are made out of 8 pieces of nappa suede stitched together and topped with a button detail. The water and stain resistant suede adds to the utility of the cap, making it stylish as well as a practical choice. Pair it up with leather, tweed, or denim casuals; and you can never go wrong with these attractive versatile caps.
We give you style, charm, and warmth packed in these stylish “Blue Brothers” replica hats that always look great. Made of 100% wool felt these light to wear hats are not only fashionable but water repellent too. So you do not have to worry about winter rains or snow flurries because this hat will protect you no matter what the weather. Hats take up too much space, you think? Not this one, as you can fold and store it away without it losing its shape. The feather and the hat pin that complement the trendy band are detachable.
"perplexity_score": 493.7,
"pile_set_name": "Pile-CC"
} |
Mass spectrometric approach for the analysis of food proteins.
In the study of food proteins, the need for accurate protein structural analysis has been acknowledged because of the fact that nucleotide sequencing alone is of limited analytical value if not combined with relevant information regarding the specific protein expressed and the occurrence of phosphorylation, glycosylation and disulphide bridges, and with the modification induced by the technological treatment. Mass spectrometry, whether used alone or to complement the traditional molecular-based techniques has become fundamental to the structural analysis of proteins. It is, moreover, virtually irreplaceable in determining post-translational modifications as conventional methods cannot deliver reliable data. What lies at the root of this methodological breakthrough is the combination of high-resolution separation techniques such as two-dimensional electrophoresis or capillary reverse- phase high-performance liquid chromatography with mass spectrometric analysis, what is termed "proteomic" analysis. Thus, it appears appropriate to state that the new mass spectrometric techniques have been established as a valuable and efficient tool for protein and peptide analysis in complex mixtures, like those from food matrices, enabling us therefore to provide accurate information on molecular weight and also to put forth a structural assessment at a low-picomole level of material. Thus, a series of alternative approaches have been developed based on advanced mass spectrometric analysis in conjunction with classic protein chemistry in order to provide an in-depth view of food protein structure. This review outlines several of these novel methodologies as they apply to structural characterization of food products. | {
"perplexity_score": 235.2,
"pile_set_name": "PubMed Abstracts"
} |
Thy Art Is Murder: Death Metal (7 votes), Metalcore (3 votes)
Thy Art Is Murder is a five-piece technical deathcore band hailing from Sydney, Australia. The band formed in 2006 and have released two EPs and two full-length albums. They are signed to Nuclear Blast globally but are signed to Distort in Canada. Their 2008 EP Infinite Death reached position no. 10 on the AIR Charts upon release, and their second full-length album Hate debuted at no. 35 on the ARIA Charts, making them the first extreme metal band to ever reach the Top 40 of this chart. The album also reached no. 1 on AIR and both no. 2 and no. 4 respectively on the USA and Canadian iTunes metal charts in its week of release.
"perplexity_score": 1239.8,
"pile_set_name": "OpenWebText2"
} |
December 8, 1949

Hon. Moyne L. Kelly
Executive Director
Board for Texas State Hospitals and Special Schools
Austin, Texas

Opinion No. V-956

Re: The agency vested with authority to appoint the Superintendent of the Confederate Woman's Home.

Dear Sir:

We quote from your inquiry as follows:

"Recently we have had one of our Superintendents of a State Institution to pass away; namely, Mrs. Susie Hale Butler of the Confederate Woman's Home here in Austin.

"There has been some discussion backwards and forwards between the Governor's Office, Mr. Claud Gilmer, Chairman of this Board, myself and applicants for this position as to who was to make the appointment to this vacancy. The Governor's Office has accepted the status of House Bill 1, 51st Legislature, which seems to give the power of appointment of this position to this Board.

"Mr. Claud Gilmer and I are not sure of this status so we would like an opinion from your department at an early date concerning the law governing the appointment of the Superintendent of the Texas Confederate Women's Home at Austin, Texas."

Section 1 of Article 3174b, Vernon's Civil Statutes (H.B. 1, 51st Leg.) provides for the creation of the Board for Texas State Hospitals and Special Schools. Section 3 reads in part:

"The term 'Texas State Hospitals and Special Schools' . . . shall mean The Austin State Hospital, Austin State School, Austin State School Farm Colony, The Confederate Home for Women, The Texas Confederate Home for Men, The Texas Blind, Deaf and Orphan School, The Texas School for the Blind, the Texas School for the Deaf, and the State Dairy and Hog Farm, all located in or adjacent to the City of Austin, Texas . . ." (Emphasis added)

Section 2 of Article 3174b provides in part:

". . . Effective September 1, 1949, the control and management of, and all rights, privileges, powers, and duties incident thereto . . . which are now vested in and exercised by the State Board of Control shall be transferred to, vested in, and exercised by the Board for Texas State Hospitals and Special Schools. Provided, that the Board of Control shall continue to handle purchases for such Institutions . . ." (Emphasis added)

At the time House Bill 1 was enacted, the Confederate Home for Women was an eleemosynary institution subject to the powers of the State Board of Control under Article 3219, V.C.S. This statute provides in part:

". . . The Board shall appoint a superintendent for the Confederate Woman's Home, with the approval of the Governor."

By virtue of the underscored provisions of Section 2 of Article 3174b, considered in connection with Article 3219, the authority to appoint the superintendent of the Confederate Home for Women is vested in the Board for Texas State Hospitals and Special Schools, subject to the approval of the Governor.

In its Opinion No. V-929, this office held that the State Board of Control is the appointing authority as to the superintendents of the Texas School for the Blind and the Texas School for the Deaf. But that holding was required by the provisions of House Bill 370, 51st Legislature, which was enacted later than House Bill 1. House Bill 370 has no relevancy with respect to the appointment of the superintendent for the Confederate Home for Women.

SUMMARY

The Board for Texas State Hospitals and Special Schools is vested with the authority to appoint the superintendent of the Confederate Home for Women, subject to the approval of the Governor. Art. 3174b, Sec. 2 and Art. 3219, V.C.S.

Yours very truly,

ATTORNEY GENERAL OF TEXAS

Chester E. Ollison
Assistant

CEO:mw

APPROVED
ATTORNEY GENERAL
"perplexity_score": 1851.1,
"pile_set_name": "FreeLaw"
} |
An individual mammal's immune system functions through recognition of certain cell surface proteins, some of which are termed major histocompatibility complex proteins, or MHC proteins. Additional minor histocompatibility proteins exist which can also contribute to immunological recognition events. The individual mammal's immune system recognizes its own MHC proteins, or those of its identical twin, as self and thus does not destroy its own cells or those of its identical twin. Members of the same species may share major and/or minor histocompatibility antigens, and thus an individual may not recognize the cells of another member of its species as non-self, depending on the degree of the differences between the MHC proteins of the two individuals. When an individual's immune system recognizes the cells of other members of the same species as non-self, the first individual's immune system may proceed to destroy the cells of the second individual. In humans, the major histocompatibility proteins are known as “HLA” antigens.
When tissues such as bone marrow, blood cells, or solid organs are transplanted from one individual to another, normally the recipient will recognize the donor's cells as non-self and the recipient's immune system will destroy the donor's cells as described above. For this reason, in a tissue transplantation, the recipient is normally subjected to immunosuppressive drugs and/or irradiation. However, transplantation patients are also subject to immunologic recognition in the opposite direction, that is, the donor tissue may contain immunologically competent cells which proceed to destroy the recipient's cells, a condition termed “graft-versus-host disease” or “GVHD”.
Graft-versus-host disease can develop when bone marrow, blood products, or solid organs containing immunocompetent cells are transferred from a donor to a recipient. Thus, when MHC antigenic differences exist between the donor and recipient, the recipient is at risk for the development of graft-versus-host disease. Graft-versus-host disease may also develop when there are antigenic differences between donor and recipient for the minor histocompatibility antigens. Thus, graft-versus-host disease can also develop between MHC-matched persons. Moreover, surgery patients who receive directed blood transfusions, for example, transfusion of blood from an HLA homozygous child to a heterozygous parent, may also develop graft-versus-host disease.
Current approaches to preventing graft-versus-host disease include attempts to eliminate immunocompetent donor cells, for example, by in vitro manipulation of the donor tissue. For example, immunocompetent T cells may be removed from donor bone marrow through physical separation such as by lectin agglutination, or by treatment of the bone marrow with monoclonal antibodies directed to T cells. However, use of bone marrow depleted of T cells is associated with a higher rate of graft failure, which is frequently fatal. Use of T cell depleted bone marrow grafts is also associated with an increased incidence of relapse among the recipients, particularly recipients having chronic myelocytic leukemia.
Another approach to preventing immune-mediated injury is to interrupt the complement cascade (e.g., by depleting C3 with cobra venom factor or by inhibiting the C3 convertase with recombinant soluble CR1). However, antibody depletion has unacceptable risks of over-immunosuppression (i.e., infection), and experimental studies of inhibition of the complement cascade with cobra venom factor or sCR1 show incomplete inhibition. An additional drawback to the use of cobra venom is the prospect of systemic effects due to the large amounts of vasoactive and chemotactic C3a and C5a produced.
Another common practice for inhibiting immune-mediated disorders is to subject the recipient to immunosuppressive therapy after transplantation. Such immunosuppression may occur by use of glucocorticoids, cyclosporin, methotrexate, or combinations of such drugs. However, immunosuppression also results in increased incidence of infection, and even when immunosuppressant drugs are used, immune-mediated cytotoxicity may still occur.
Although many approaches to controlling immune-mediated disorders have been attempted, none of these approaches have been particularly successful. Thus there remains a need for an effective, clinically applicable means of preventing or treating GVHD and CTL- and/or complement-dependent rejection of organ or tissue transplants. | {
"perplexity_score": 161.2,
"pile_set_name": "USPTO Backgrounds"
} |
Kinugasa (surname)
Kinugasa (written: 衣笠) is a Japanese surname. Notable people with the surname include:
, Japanese military officer
Sachio Kinugasa, Japanese baseball player
, Japanese swimmer
Teinosuke Kinugasa, Japanese actor and film director
Category:Japanese-language surnames | {
"perplexity_score": 123.7,
"pile_set_name": "Wikipedia (en)"
} |
In this article we are going to try and help you remove Moosjs.cn . Our instructions cover all Windows versions as well as most browsers – Chrome, Firefox, Internet Explorer etc.
Browser hijackers are normally harmless ad-producing programs. Below we introduce one specific hijacker program – Moosjs.cn. You are going to find out many details about this kind of software in general and about this particular version as well. For instance, we are going to explain how browser hijackers can affect all the popular browsers such as Firefox, Chrome and Opera, how they may get installed on your PC, and how they make those browsers start generating various online ads like banners and pop-ups.
All you need to know about the term “browser hijackers”:
First of all, you need to accept the fact that you are not dealing with a kind of malware. Instead, you are facing one very commonly spread version of ad-generating software with strictly advertising functions. What makes Moosjs.cn different from any virus – a Ransomware-based one, for example, is the following:
Viruses infect your entire machine and could access any file on your disks and drives. For instance, a version of Ransomware will access all your data, select the particular files you use most and encrypt them, thus making them inaccessible to you. After completing the encryption process, the virus will generate a very disturbing ransom-requiring notification on your computer screen, letting you know that your encoded files are in great danger unless you complete the demanded payment.
On the other hand, any browser hijacker, including Moosjs.cn, may only infect your browsers and access their search request databases. The changes that may later occur as a result of the activities of a hijacker inside your system are the following:
– your default search engines and homepages may disappear and some new, often unfamiliar, ones could appear;
– your browsers, no matter whether you use Firefox or Chrome, or any of the other most common ones, could begin producing a lot of annoying ads and banners while you are browsing;
– some redirecting may begin – you could be sent to various web pages every time you try to load the desired ones.
Another crucial difference between Moosjs.cn and Ransomware is that the Ransomware-based product will NOT need your approval to become a part of your system, while the browser hijacker will ALWAYS need it and could trick you into unknowingly agreeing to install the program on your computer.
How and where we might come across browser hijackers:
As exemplary versions of advertising software, most browser hijackers could be found everywhere on the web – inside shareware, torrents, streaming websites and other contagious web pages. However, their most likely hiding places are the so-called program bundles. If you are not aware of this term, we are going to explain it for you: software bundles represent sets of various apps, programs and games, which are spread together for free. As a result, any user with Internet access could download such a set. Nonetheless, to download a bundle doesn’t mean to install one. Just downloading a free mix of software cannot automatically infect your browsers with browser hijackers. To your surprise, the infection could only occur in case you decide to knowingly or unknowingly install the entire content of the bundle on your computer. Here is how hijackers and Adware developers could trick you into installing their advertising software on your PC:
Some of the aforementioned bundles could contain at least one very interesting or new program/ game and you can be very excited to try it. Consequently, you may become a little careless when it comes to performing a safe installation process and go with the easiest, quickest or the automatic wizard features, such as the Default/ the Recommended/ or the Easy one. Generally, choosing such a feature almost always means incorporating the whole bundle into your system, along with Moosjs.cn or the other promoting programs, maybe lurking inside it.
In order to be able to try new software for free but minimize the chances of catching Adware or browser hijackers, you need to perform an ideal installation process via the wizard steps marked as Custom or Advanced. Only these features ensure your control over the entire process and that you will get the chance to opt out of unnecessary programs (such as Moosjs.cn ) and their unwanted features.
What to do in case Moosjs.cn has already infected your PC?
This may surprise you, but Moosjs.cn is not among the programs that are hard to remove. You will simply need to trust a removal guide that has been tested and has proven to successfully fight such infections. Luckily, we have one which we are offering to you for free – just scroll down and you will see it. Good luck with removing Moosjs.cn and don’t forget to implement all the uninstallation steps carefully!
Moosjs.cn Removal
Many types of malware will restrict your access to their core files. It is highly recommended that you reboot your PC in safe mode before attempting to use this guide.
WARNING! If you are using Windows 8,0 or later and/or your operating system is installed on a fast SSD drive this may fail to work. In this case click here to see how to start your PC in Safe Mode.
#1: Uninstall the malicious program from your control panel
[bannerMiddle]
Enter control panel to look for any suspicious programs, which may have installed on your PC. To do that:
Navigate to your Desktop
Press the Win and R keys simultaneously (Win+R)
In the Run window that just opened type appwiz.cpl
Go through the list of programs and find Moosjs.cn or anything else that may seem suspicious. Right-click on it and choose the uninstallation option
WARNING! Carefully read any confirmation messages that may be created in the process. Sometimes you may get offers to download more Adware applications and this can be linked to either the Yes or the No answer depending on the wording!
Optional:
Go through the list of programs again and check online for any potentially unwanted programs. We have an article that covers this awesome free software that makes sure that your computer is free from bloatware and programs that you don’t need.
#2: Remove Moosjs.cn From Chrome
Now we’ll remove the extensions that the malware has attached to your browser.
Open your Google Chrome browser.
Type chrome://extensions/ in the URL address bar and press Enter.
Click on “Developer Mode” on the top right and look for the extension installed by Moosjs.cn and anything that might be related to it. Copy their IDs (the string of letters), then remove them by clicking on the trash bin icon.
Type Regedit in the Windows Start Menu and press Enter. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Google\Chrome\Extensions and delete the entries corresponding to the suspicious IDs you recorded. A scripted version of this step is sketched below.
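For readers comfortable with a script, here is a hedged Python sketch of that registry step. It only illustrates the idea: the extension ID below is a placeholder (not a real Moosjs.cn ID), deleting under HKEY_LOCAL_MACHINE requires running Python from an administrator prompt, and the key may not exist at all on your machine:

import winreg

SUSPICIOUS_IDS = {"aaaabbbbccccddddeeeeffffgggghhhh"}  # placeholder -- use the IDs you recorded
PATH = r"SOFTWARE\Google\Chrome\Extensions"

try:
    # First collect the subkey names (extension IDs) under the Extensions key.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH) as key:
        ids, i = [], 0
        while True:
            try:
                ids.append(winreg.EnumKey(key, i))
                i += 1
            except OSError:          # raised when there are no more subkeys
                break
    for ext_id in ids:
        print("found extension entry:", ext_id)
        if ext_id in SUSPICIOUS_IDS:
            winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, PATH + "\\" + ext_id)
            print("deleted:", ext_id)
except FileNotFoundError:
    print("no Chrome Extensions key found under HKLM")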
#3: Remove Moosjs.cn From Firefox
Open Mozilla Firefox browser.
Type “about:support” in the URL address bar and press Enter.
Click on the “Refresh Firefox” button on the right and confirm.
#4 Remove Moosjs.cn From Internet Explorer
Open your Internet Explorer internet browser.
Click on the Gear icon on the up right, then on manage add-ons.
Go through the list disable any suspicious extensions.
#5 Remove any leftover parasitic processes
From the task manager:
Use Ctrl + Shift + Esc to open the Task Manager, then click on the Processes tab.
Go through the list of processes and look for unknown or otherwise suspicious entries.
If you see anything suspicious right click on the process and choose Open File Location, then terminate the process and delete any files you find in the directory.
WARNING! If the directory you open from this menu has no files inside of it, it's probably because the malware has hidden them. You need to reveal hidden files and folders in order to be able to see them. Click here if you don't know how to do that.
From the start menu:
Press the Win and R keys simultaneously (Win+R)
In the Run window that just opened type msconfig
Click on the Startup tab.
This menu controls which programs are loaded when Windows starts after a reboot. Disable anything that seems suspicious. Optionally, you can also disable any program that you don't need that has a high impact on your startup time. The read-only sketch below lists the same kind of auto-start entries straight from the registry.
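If you prefer to inspect these from a script first, here is a small read-only Python sketch that lists the common auto-start entries from the registry Run keys, the same kind of entries the Startup tab shows (on Windows 8 and later the tab lives in Task Manager rather than msconfig). Being read-only, it is safe to run before deciding what to disable:

import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for root, path in RUN_KEYS:
    try:
        with winreg.OpenKey(root, path) as key:
            i = 0
            while True:
                try:
                    name, command, _ = winreg.EnumValue(key, i)  # (name, data, type)
                    print(f"{path}: {name} -> {command}")
                    i += 1
                except OSError:      # no more values under this key
                    break
    except FileNotFoundError:
        pass                         # key absent on this machine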
Feel free to write to us in the comment section with any questions that you may have. Also if we have been helpful to you please share this article to help us reach more people like you. | {
"perplexity_score": 564.2,
"pile_set_name": "Pile-CC"
} |
Background and Qualifications
I have a background in nursing and midwifery and 20 years working in the voluntary sector, as well as many years in the NHS as a psychological therapist. I have specialisms in working with trauma and attachment difficulties in adults. Over the last few years I have worked a lot with young parents and infants using a Parent-Infant Psychotherapy approach, and I offer an attachment-based parenting programme.
In 1988 I gained a First Class Honours Degree in Psychology and Human Sciences and then trained in 1990 in Humanistic Psychotherapy. I studied for a BACP-recognised (British Association for Counselling and Psychotherapy) Diploma in Humanistic Counselling at the Northern Guild for Psychotherapy, qualifying in 1994. I then qualified in Integrative Psychotherapy (IIPA) at the Manchester Institute for Psychotherapy (MIP), and I have held a Diploma in Casework Supervision since 2003. Other studies include a programme of Mindfulness training at Breathworks, Mindfulness-based CBT at Masters level at Leeds University, Parent-Infant Psychotherapy, Watch, Wait and Wonder infant observation, children and adolescent studies (MIP), and work on sexual abuse and personality disorders. I am also a certified Circle of Security facilitator and hold Level One training in Sensorimotor Psychotherapy.
Professional membership:
I am a Senior Accredited member of the BACP (British Association for Counselling and Psychotherapy), the largest professional body in the UK for counselling and psychotherapy. For more information please use the link to their website. My membership number is 503532 and is renewed every year. I practise within the BACP's Ethical Framework and am subject to its complaints procedure.
Personal Approach:
As well as many years of practice and training, I believe that making sense of my own life experience informs much of my way of being with my clients. My work on myself through personal therapy and daily meditation and yoga enables me to offer a confident, congruent and secure space for you. I am attentive and compassionate and use my head as well as my heart to navigate through the mystery of another person's inner world. I work from a premise that I do not know your experience but rather invite you to show me and explore, at your pace, how you see the world, yourself and others, how you feel in your body, how you protect yourself and sometimes limit yourself. I practice in a way that seeks to accept rather than judge and to be open to feedback and challenge.
"perplexity_score": 501.2,
"pile_set_name": "Pile-CC"
} |
Minnesota men’s golf is currently in eighth place at the NCAA Raleigh Regional at Lonnie Poole Golf Course. Play has been suspended due to weather, and the first round is now scheduled to be completed Friday.
The Gophers, who started on the 10th, all made it through at least 10 holes before the stoppage of play. Freshman Jose Mendez is tied for second at 4 under through 10 holes. He has made four birdies and six pars.
Georgia Tech (-14) leads the team competition. The Yellow Jackets’ Richard Werenski leads the field at 5 under with one hole to finish in the first round. The top five teams from each NCAA Regional advance to the NCAA Championships. Minnesota is currently four shots out of fifth. The top individual not on one of the advancing teams will also advance.
The plan is to finish the first round beginning at 10:30 a.m. CT Friday. | {
"perplexity_score": 254.5,
"pile_set_name": "Pile-CC"
} |
Year in review: 2017's Top 10 discoveries
Audrey Leon looks at some of 2017's 10 largest offshore discoveries. First published in the December 2017 OE.
While exploration hasn’t been as hot as it used to be, the finds this year have been big and plentiful. Wood Mackenzie has provided the year’s 10 biggest finds by volume.
#1 – Yakaar, Senegal
In May this year, Kosmos and partner BP discovered gas at the Yakaar-1 prospect, saying that the find could support the development of a second LNG hub. Yakaar-1 was drilled using the Atwood Achiever drillship. The well may contain 15 Tcf gross Pmean gas resource, according to Kosmos.
Yakaar, which is in the Cayar Offshore Profond block about 95km northwest of Dakar, was drilled in nearly 2550m water depth to 4700m total depth.
The well intersected a 120m gross hydrocarbon column in three pools within the primary Lower Cenomanian objective and 45m of net pay.
Kosmos estimates that Yakaar-1 discovered a gross Pmean gas resource of approximately 15 Tcf. An appraisal program is being planned to delineate the Yakaar discovery, the company said back in May.
“The result confirms our view of the potential scale of the petroleum system offshore Mauritania and Senegal, in particular the basin floor fan systems which have now been further derisked, with the well demonstrating that reservoir and trap both work in these previously untested fairways,” said Andrew G. Inglis, chairman and CEO, in May this year.
#2 – Zama, Mexico
Since Mexico’s historic oil reform was passed in 2013, the country has made significant strides by allowing in private and foreign investment. In July this year, Talos Energy backed by partners Sierra Oil and Gas and Premier Oil touted the 1+ billion bbl discovery at their Zama-1 exploration well, offshore Mexico.
The find has been described as one of the 20 largest shallow water finds in the past 20 years and the first private sector oil discovery in Mexico. Zama-1 was spud in May this year and drilled in 166m water depth, about 37mi off Tabasco, in Block 7 in the Sureste Basin, using the Ensco 8503 semisubmersible.
The well reached an initial shallow target vertical depth of approximately 11,100ft (3383m). Talos says it hit a 1100ft (335m) oil bearing interval, with 558-656ft (170-200m) of net oil pay in Upper Miocene sandstones with no water contact. Oil samples indicate light oil, with API gravities between 28° and 30° and some associated gas.
Talos reported later that same month that Zama-1 failed to find further volumes in a deeper target. Zama-1 was drilled to a total depth of 4108m (13,478ft).
Map from Premier Oil.
According to partner Premier Oil, the estimated recoverable P90-P10 gross unrisked resources are in the range of 400-800 MMboe, including the volumes that extend into the neighboring block.
#4 – Snoek, Guyana
In March this year, supermajor ExxonMobil confirmed a new discovery offshore Guyana at the Snoek well, in the southern portion of the Stabroek block – the same block containing the major Liza discovery currently under development.
Exxon encountered more than 82ft (25m) of high-quality, oil-bearing sandstone reservoirs. The well was spud in February this year by the Stena Carron drillship, and drilled to 16,978ft (5175m) at 5128ft (1563m) water depth.
Snoek is about 5mi (9km) to the southeast of the 2015 Liza-1 discovery. Stabroek covers 6.6 million acres (26,800sq km). Exxon said Snoek targeted similar aged reservoirs as encountered in previous discoveries at Liza and Payara. In March, Wood Mackenzie said Snoek adds another 220-370 MMbbl to its estimate of the block.
Exxon and its partners are continuing to have success at Stabroek. In October 2017, ExxonMobil confirmed further potential with a fifth oil discovery in the Turbot-1 well, which is in the southeastern portion of the block, approximately 30mi (50km) to the southeast of the Liza phase one project.
#6 – Neptune, Russia
In early October, Gazprom Neft subsidiary Gazpromneft-Sakhalin completed drilling at its Neptune appraisal well at the Ayashsky license in the Sea of Okhotsk. Gazprom Neft reports initial in-place reserves estimated at 255 million tonnes of oil equivalent. A detailed assessment of these reserves will be prepared by mid-2018.
The Ayashsky block in the Okhotsk Sea forms part of the Sakhalin-3 project. The block is in the northeastern part of Sakhalin Island’s continental shelf, 55km from the coast. Water depth at the field is 62m. Gazprom Neft says that 2,150 sq km of 3D seismic has been shot inside the Ayashsky block.
#8 – Macadamia, Trinidad
BP's Juniper platform, offshore Trinidad and Tobago. Photo from BP.
In May this year, BP Trinidad & Tobago (bpTT) found success at the Macadamia wildcat, which was drilled to test exploration and appraisal segments below the existing SEQB discovery, which sits 10km south of the producing Cashima field, offshore Trinidad.
The well penetrated hydrocarbon-bearing reservoirs in seven intervals with approximately 600ft of net pay. Combined with the shallow SEQB gas reservoirs, the Macadamia discovery is expected to support a new platform within the post-2020 timeframe, bpTT said at the time.
“Savannah [another discovery made at the time] and Macadamia demonstrate that with the right technology we can continue to uncover the full potential of the Columbus Basin,” said Norman Christie, regional president for bpTT, back in June. | {
"perplexity_score": 458.7,
"pile_set_name": "Pile-CC"
} |
I don't always ask for a to-go box But when i do i forget it in the restaurant
"perplexity_score": 621.1,
"pile_set_name": "OpenWebText2"
} |
Q:
Function behavior with very large variables
Whenever I think about how a function behaves, I always try to identify a general pattern of behavior with some common numbers (somewhere between 5 and 100 maybe) and then I try to see if anything interesting happens around 1, 0 and into negative numbers if applicable.
If that all works out, I essentially assume that I know that the function is going to behave similarly for very large numbers as it does for those relatively small numbers.
Are there notable (famous, clever or common) functions where very large numbers would cause them to behave significantly differently than would initially be thought if I followed my regular experimental pattern? If so, are there any warning signs I should be aware of?
A:
The Griewank function,
$$ f(\mathbf x) = \frac1{4000}\sum_{i=1}^n x_i^2 - \prod_{i=1}^n \cos\left(\frac{x_i}{\sqrt i}\right) + 1 $$
which is one of the objective functions used in testing optimization algorithms, looks completely different at large scale (dominated by $x_i^2$) and at small scale (dominated by $\cos x_i$).
(Plots of the Griewank function at successively smaller scales; source: geatbx.com)
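To see the two regimes numerically, here is a short Python sketch (mine, not the answerer's) evaluating the $n = 2$ Griewank function at a few scales; near the origin the value is governed by the cosine product, while for large inputs the quadratic term takes over completely:

import math

def griewank(x):
    # (1/4000) * sum of squares  minus  product of cos(x_i / sqrt(i)), i = 1..n, plus 1
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

for scale in (0.1, 1, 10, 100, 600):
    print(scale, griewank([scale, scale]))

At scale 600 the output is roughly 180 plus an oscillation of size at most 1, so the cosine "bumps" that dominate the picture near zero become invisible, which is exactly the kind of large-argument behavior change the question asks about.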
"perplexity_score": 1010.6,
"pile_set_name": "StackExchange"
} |
What is the remainder when 196214573 is divided by 29743?
2
What is the remainder when 89860 is divided by 89850?
10
What is the remainder when 1657039 is divided by 127417?
618
Calculate the remainder when 32455941 is divided by 621.
618
Calculate the remainder when 3171804 is divided by 16434.
42
What is the remainder when 12198 is divided by 5865?
468
What is the remainder when 1041577 is divided by 104157?
7
What is the remainder when 801805 is divided by 89067?
202
What is the remainder when 1518253 is divided by 303634?
83
Calculate the remainder when 401368 is divided by 9835.
7968
Calculate the remainder when 164863 is divided by 1616.
31
Calculate the remainder when 133147 is divided by 66421.
305
Calculate the remainder when 20894917 is divided by 1826.
1825
Calculate the remainder when 103447198 is divided by 40.
38
What is the remainder when 145846982 is divided by 141599?
12
Calculate the remainder when 39569 is divided by 1443.
608
What is the remainder when 1278419 is divided by 1078?
989
What is the remainder when 103716 is divided by 24412?
6068
Calculate the remainder when 485443 is divided by 13.
10
Calculate the remainder when 247555 is divided by 62.
51
Calculate the remainder when 3069339 is divided by 4347.
357
What is the remainder when 551607 is divided by 451?
34
What is the remainder when 17677134 is divided by 68?
58
Calculate the remainder when 926827 is divided by 8275.
27
Calculate the remainder when 383271 is divided by 4674.
3
Calculate the remainder when 551631 is divided by 68857.
775
Calculate the remainder when 2193459 is divided by 109672.
19
What is the remainder when 1573246 is divided by 1487?
0
Calculate the remainder when 19901883 is divided by 4375.
8
What is the remainder when 7687 is divided by 7563?
124
Calculate the remainder when 213946 is divided by 904.
602
Calculate the remainder when 1422024 is divided by 736.
72
What is the remainder when 5114197 is divided by 2123?
2013
Calculate the remainder when 582736 is divided by 26.
24
Calculate the remainder when 3829085 is divided by 57.
53
Calculate the remainder when 4873348 is divided by 159.
157
Calculate the remainder when 9682514 is divided by 77.
72
Calculate the remainder when 4037590 is divided by 19.
14
Calculate the remainder when 1468728 is divided by 7235.
23
Calculate the remainder when 799303 is divided by 1127.
260
What is the remainder when 14563093 is divided by 98?
97
Calculate the remainder when 9828026 is divided by 9828025.
1
What is the remainder when 559506 is divided by 62165?
21
Calculate the remainder when 11895247 is divided by 10290.
7
What is the remainder when 6111672 is divided by 316?
232
Calculate the remainder when 168285 is divided by 17.
2
What is the remainder when 15069768 is divided by 1291?
1216
What is the remainder when 6141889 is divided by 19314?
37
Calculate the remainder when 30587881 is divided by 1222.
1221
Calculate the remainder when 588153 is divided by 247.
46
What is the remainder when 871414 is divided by 465?
4
Calculate the remainder when 33504 is divided by 10358.
2430
What is the remainder when 1496587 is divided by 372670?
5907
What is the remainder when 480221284 is divided by 2467?
2465
What is the remainder when 134473 is divided by 33408?
841
What is the remainder when 12963237 is divided by 40?
37
What is the remainder when 17304157 is divided by 145?
2
Calculate the remainder when 1734109 is divided by 52.
13
What is the remainder when 65204543 is divided by 7?
5
Calculate the remainder when 230189 is divided by 12107.
156
What is the remainder when 29934 is divided by 2373?
1458
Calculate the remainder when 20588842 is divided by 299.
1
What is the remainder when 2277274 is divided by 325236?
622
Calculate the remainder when 412967541 is divided by 4323.
4320
What is the remainder when 191584 is divided by 223?
27
Calculate the remainder when 5914475 is divided by 2254.
2233
Calculate the remainder when 10469519 is divided by 11.
5
Calculate the remainder when 1069520 is divided by 581.
480
What is the remainder when 19492 is divided by 19487?
5
What is the remainder when 740208 is divided by 37010?
8
What is the remainder when 4774408 is divided by 3972?
64
What is the remainder when 299526 is divided by 32977?
2733
Calculate the remainder when 424098 is divided by 268.
122
What is the remainder when 82651643 is divided by 46?
39
What is the remainder when 79855 is divided by 39?
22
Calculate the remainder when 473735 is divided by 111.
98
Calculate the remainder when 38985 is divided by 6331.
999
What is the remainder when 306963795 is divided by 22?
17
What is the remainder when 59438 is divided by 6505?
893
What is the remainder when 9160843 is divided by 1526807?
1
Calculate the remainder when 176951 is divided by 44190.
191
Calculate the remainder when 54738 is divided by 18182.
192
Calculate the remainder when 1271702 is divided by 660.
542
What is the remainder when 2054101 is divided by 55?
16
What is the remainder when 2508416 is divided by 769?
707
What is the remainder when 7800038 is divided by 639?
404
What is the remainder when 3469714 is divided by 221?
14
What is the remainder when 8493237 is divided by 673?
650
Calculate the remainder when 71079710 is divided by 23693230.
20
Calculate the remainder when 4103723 is divided by 124355.
8
Calculate the remainder when 9824 is divided by 16.
0
What is the remainder when 32623723 is divided by 1733?
1731
Calculate the remainder when 130887 is divided by 1031.
981
What is the remainder when 114575544 is divided by 1308?
1284
What is the remainder when 17427 is divided by 2979?
2532
Calculate the remainder when 555185 is divided by 18.
11
What is the remainder when 201957 is divided by 2266?
283
Calculate the remainder when 56299 is divided by 7788.
1783
What is the remainder when 4182427 is divided by 88987?
38
What is the remainder when 152656 is divided by 13808?
768
What is the remainder when 141386445 is divided by 228?
225
What is the remainder when 667231 is divided by 667224?
7
Calculate the remainder when 1183853 is divided by 394612.
17
What is the remainder when 7072124 is divided by 1785?
1739
What is the remainder when 18357 is divided by 17428?
929
Calculate the remainder when 86958 is divided by 1035.
18
Calculate the remainder when 6549619 is divided by 3775.
3769
What is the remainder when 28832 is divided by 2873?
102
What is the remainder when 12989 is divided by 5942?
1105
What is the remainder when 891240 is divided by 297045?
105
Calculate the remainder when 2864783 is divided by 11.
9
What is the remainder when 2617601 is divided by 17686?
73
What is the remainder when 2195320 is divided by 5434?
5418
Calculate the remainder when 1504559 is divided by 8047.
7817
What is the remainder when 521614 is divided by 173870?
4
Calculate the remainder when 22111 is divided by 312.
271
Calculate the remainder when 206716 is divided by 25827.
100
Calculate the remainder when 381224 is divided by 54304.
1096
Calculate the remainder when 87916 is divided by 2677.
2252
Calculate the remainder when 35296931 is divided by 118446.
23
Calculate the remainder when 8646812 is divided by 49.
27
Calculate the remainder when 153467769 is divided by 308.
301
What is the remainder when 17216549 is divided by 45?
44
Calculate the remainder when 184598647 is divided by 1768.
1767
Calculate the remainder when 4267938 is divided by 1891.
1842
What is the remainder when 40477090 is divided by 879?
19
Calculate the remainder when 33649 is divided by 1020.
1009
What is the remainder when 5982621 is divided by 6618?
6567
What is the remainder when 89318 is divided by 220?
218
Calculate the remainder when 1291368 is divided by 465.
63
Calculate the remainder when 2088352 is divided by 629.
72
Calculate the remainder when 1449588 is divided by 48316.
108
Calculate the remainder when 5180214 is divided by 1086.
1080
Calculate the remainder when 43120 is divided by 3104.
2768
What is the remainder when 503605 is divided by 649?
630
Calculate the remainder when 658909 is divided by 51.
40
Calculate the remainder when 4973565 is divided by 2486779.
7
Calculate the remainder when 7489456 is divided by 559.
533
Calculate the remainder whe | {
"perplexity_score": 1512.6,
"pile_set_name": "DM Mathematics"
} |
Paralyzed woman uses her mind to control robot arm
By Malcolm Ritter
Associated Press
Posted:
05/16/2012 04:49:06 PM PDT
Updated:
05/16/2012 05:48:52 PM PDT
NEW YORK -- Using only her thoughts, a Massachusetts woman paralyzed for 15 years directed a robotic arm to pick up a bottle of coffee and bring it to her lips, researchers report in the latest advance in harnessing brain waves to help disabled people.
In the past year, similar stories have included a quadriplegic man in Pennsylvania who made a robotic arm give a high-five and stroke his girlfriend's hand, and a partially paralyzed man who remotely controlled a small robot that scooted around in a Swiss lab.
It's startling stuff. But will the experimental brain-controlled technology ever help paralyzed people in everyday life?
Experts in the technology and in rehabilitation medicine say they are optimistic that it will, once technology improves and the cost comes down.
The latest report, which was published online Wednesday in the journal Nature, comes from scientists at Brown University, the Providence VA Medical Center in Rhode Island, Harvard Medical School and elsewhere.
It describes how two people who lost use of their arms and legs because of strokes years before were able to control free-standing robotic arms with the help of a tiny sensor implanted in their brains.
The sensor, about the size of a baby aspirin, eavesdropped on the electrical activity of a few dozen brain cells as the study participants imagined moving their arms. The chip then sent signals to a computer, which translated them into commands to the robotic arms.
The computer was taught how to interpret the brain patterns through practice as the paralyzed participants watched the robot arms move and then imagined that they were moving their own arms the same way.
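The article does not spell out the decoding algorithm, but a common baseline in the brain-computer-interface literature is a linear decoder: a learned weight matrix maps a vector of unit firing rates to arm or cursor velocities. The TypeScript sketch below shows only that mapping step; the dimensions, weights, and firing rates are made-up placeholders, not values from the study.

// Illustrative linear decoding step: unit firing rates -> 2-D arm velocity.
// The weight matrix would be fit during the imagined-movement practice
// sessions described above; every number here is a hypothetical placeholder.
function decodeVelocity(rates: number[], weights: number[][], bias: number[]): number[] {
  return weights.map((row, i) =>
    row.reduce((acc, w, j) => acc + w * rates[j], bias[i]),
  );
}

const rates = [12.0, 3.5, 8.1];        // spikes/s from three recorded units
const weights = [
  [0.02, -0.01, 0.03],                 // weights for the x velocity
  [-0.04, 0.05, 0.01],                 // weights for the y velocity
];
console.log(decodeVelocity(rates, weights, [0, 0])); // [vx, vy]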
In one task to test the system, the two participants tried to direct a robot arm to reach out and squeeze foam balls in front of them. The man succeeded in less than half his attempts, but the woman was able to do it about 60 percent of the time.
The woman, Cathy Hutchinson of East Taunton, Mass., also was asked to use the arm to drink the coffee. That involved picking up the bottle, bringing it to her lips so she could sip from a straw, and putting the bottle back on the table. She succeeded in four out of six tries with the arm, which was specially programmed for this task.
"The smile on her face ... was just a wonderful thing to see," said Dr. Leigh Hochberg, a researcher with the Providence VA, Brown and Massachusetts General Hospital.
Researchers said that in Hutchinson's case the results show the implanted chip still worked after five years, and that her brain was still generating useful signals even though she hadn't moved her arms in almost 15 years.
The ultimate goal, researchers said, is an implanted device that would reactivate a person's own paralyzed limbs. Another goal is to operate high-tech prostheses for amputees. | {
"perplexity_score": 321.5,
"pile_set_name": "Pile-CC"
} |
In Escape Rooms, Video Games Meet Real Life - ahamilton
http://www.nytimes.com/2014/06/04/arts/video-games/in-escape-rooms-video-games-meet-real-life.html
======
schoen
I did the two permanent room escapes run by Real Escape Game/SCRAP in San
Francisco (in the New People mall in Japantown), namely Escape from the
Mysterious Room and Escape from the Time Travel Lab. They were great fun. (My
teams didn't manage to escape from either of them.)
I also did their Escape from the Bank (themed after the aftermath of a bank
robbery), where I think my team was the only one to make it out. That event is
possibly less awesome because you're seated at a table in a big hall with a
lot of other teams around you, rather than exploring a small room all by
yourselves.
Now I'm looking forward to trying the games in New York City!
~~~
fallinghawks
I did Escape from the Mysterious Room as well, and really enjoyed it. We
probably needed another 15 minutes to complete because we got hung up on one
of the puzzles that needed a piece we hadn't found yet.
I'd like to do it again but would like to go with people who have actually
played escape games (esp. Japanese) before.
------
Udo
It's interesting how much LARP ideas are beginning to diffuse into general
culture. Lately I was talking to someone who basically organized themed mini-
LARPs for corporate teams. Since these are audiences who generally aren't
familiar with the medium, they're always amazed.
I think as our natural environment continues to become safer and more
virtualized, these immersive adventures and ARGs will become more popular and
mainstream.
------
wzsddtc
These have been really popular in mainland China as well for about 2 years now.
People just create rooms at their own places and put ads on WeiBo to get
people to come.
------
prawn
Are they the same every time or is there an element of randomness in the
puzzles and codes?
It'd be interesting if they could be random enough that someone couldn't spoil
it for others, and people could use AR or just wi-fi to research clues?
~~~
jevinskie
I don't think many people are paying money to go to these just to "cheat".
Unless... there are competitions.
~~~
prawn
If there was randomness, you could offer "Free if you can escape in an hour!"
Otherwise someone could get the full experience by going in with the
instructions written down and pulling them out of their pocket in the last five
minutes if they'd failed to escape.
------
briggers
These are awesome. I did a couple in Warsaw, one in Budapest and now one in
Prague.
I use it mostly as a 2nd/3rd date to find out how people handle
stress/cooperate, but they're really fun too.
------
martinshen
We've been working with "escape room" game event organizers like SCRAP for a
while now. They're incredibly popular on our "Netflix for Events" service.
I've done a handful and can certainly attest to this "video games in real
life" trend in events from traditional scavenger hunts to a maze that you have
to solve from the third person. I love this intersection of technology and
real life entertainment. Folsom Street Foundry in SoMA has even started
hosting weekly social game nights on Tuesdays.
------
personlurking
There's an entertaining Spanish film called La Habitación de Fermat (Fermat's
Room) which deals with this.
"Four mathematicians who do not know each other are invited by a mysterious
host on the pretext of resolving a great enigma. The room in which they find
themselves turns out to be a shrinking room..."
Here's the trailer (w/ subs)
[https://www.youtube.com/watch?v=f8fS74Y-qBs](https://www.youtube.com/watch?v=f8fS74Y-qBs)
------
brador
This escape-the-room-as-a-live-event concept started in Hong Kong a few years ago.
Glad to see it finally come here.
------
justjimmy
These are popular here in Taiwan, I think there's like 1 every day - held by
many different organizations. The concept's the same - solve puzzles before
the time runs out.
Different organizations go to different lengths to make the activity feel more
immersive, some are great, some are meh. Sometimes the group is so big, it can
get very chaotic with everyone running around looking for clues.
The only downside is that once they reveal the clues/answers, it can be frustrating
if they were impossible to solve in the first place.
------
austinl
I was at the Escape from the Moon Base [1] in SF two weeks ago and it was a
lot of fun. I went with some coworkers, but I'd also recommend going with
friends, and would definitely participate again.
The puzzles are fairly challenging (no one in my session of 30 teams/180
people finished with an entirely correct solution), so it's satisfying when
your team solves certain parts.
[1] [http://realescapegame.com/sf07_mb/](http://realescapegame.com/sf07_mb/)
------
lukas
I played the Escape from Time Travel Lab as a team building exercise and it
was an awesome experience - I totally recommend it. I just wish they would put
out more games!
------
nitrogen
Sounds somewhat like murder mystery dinner parties. Also: why does NYT hijack
the left and right arrows to take me away to another article?
------
nschuett
One of the hardest things about these "escape from the room" games is keeping
all the puzzles and clues organized, and sharing progress across the whole
team. It's a pretty great exercise in project mgmt and teamwork.
------
nnnnni
It's not exactly the same, but TrueDungeon has a similar premise of "a small
group of people attempts to figure out puzzles together to get through
something".
------
kqr2
For a zombie themed escape, check out:
[http://roomescapeadventures.com/](http://roomescapeadventures.com/)
~~~
seeken
I did this a couple weeks ago. The puzzles are a bit contrived but it is
challenging and fun.
------
antonmaju
This reminds me of a popular visual novel game, "9 Hours 9 Persons 9 Doors".
------
jsemrau
Boring, in "In Shadows" (www.inshadows.asia) video games meet real life. | {
"perplexity_score": 458.2,
"pile_set_name": "HackerNews"
} |
The effect of post-mortem ageing and heating on water retention in bovine muscles.
The muscles semitendinosus (ST) and psoas major (PM) were removed from chilled young bull carcasses 24 h after slaughter and stored at 4°C. On the 1st, 6th and 12th day of post-mortem ageing, the chemical composition (moisture, fat, protein, collagen) and the contents of free, immobilized and unfreezable water in the muscles were estimated. The muscle steaks were boiled at 100°C, roasted at 170°C or fried at 160°C to an internal temperature of 75°C, and the amounts of total, free, immobilized, and unfreezable water in the heated muscles were evaluated. The unfreezable water was estimated by DSC. In the raw muscles, immobilized water constituted 74-75%, free water 16.6-17.6% and unfreezable water 7-8% of the total water. Independent of the time of ageing, the PM muscle contained significantly more free water than the ST muscle. During post-mortem ageing, changes in free, immobilized and unfreezable water in the muscles were not significant. The level of free water was highest in boiled and lowest in fried meat; however, the amount of immobilized water was highest in fried and lowest in boiled meat. The amount of unfreezable water in muscles heated after 12 days of post-mortem ageing decreased.
"perplexity_score": 409.5,
"pile_set_name": "PubMed Abstracts"
} |
An efficient method to achieve high data-rate coverage in wireless communication is to use multiple antennas both at the transmitter and the receiver, since it makes it possible to exploit the spatial degrees of freedom offered by multipath fading inside the wireless channel in order to provide a substantial increase in data rates and reliability of wireless transmission.
In the downlink, there are three basic approaches for utilizing the antenna: diversity, multiplexing and beamforming. With beamforming, the radiation pattern of the antennas may be controlled by transmitting a signal from a plurality of elements with an element specific gain and phase. In this way, radiation patterns with different pointing directions and beam widths in both elevation and azimuth directions may be created.
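As a concrete illustration of element-specific gain and phase (an added sketch, not part of the original text), the TypeScript below computes the normalized far-field array factor of a uniform linear array whose elements are phased to steer the main lobe toward a target angle. The element count, half-wavelength spacing, and angles are hypothetical example values.

// Steering a uniform linear array by per-element phase: the pattern peaks
// where the per-element contributions add coherently. All parameters are
// hypothetical examples.
function arrayFactorDb(numElements: number, spacingWavelengths: number,
                       steerDeg: number, observeDeg: number): number {
  const k = 2 * Math.PI;                      // wavenumber, in 1/wavelength units
  const steer = (steerDeg * Math.PI) / 180;
  const obs = (observeDeg * Math.PI) / 180;
  let re = 0, im = 0;
  for (let n = 0; n < numElements; n++) {
    // Progressive phase chosen so contributions align toward steerDeg.
    const phase = k * spacingWavelengths * n * (Math.sin(obs) - Math.sin(steer));
    re += Math.cos(phase);
    im += Math.sin(phase);
  }
  const mag = Math.sqrt(re * re + im * im) / numElements; // normalize to 1 at peak
  return 20 * Math.log10(Math.max(mag, 1e-12));
}

console.log(arrayFactorDb(8, 0.5, 30, 30)); // ~0 dB at the steering angle
console.log(arrayFactorDb(8, 0.5, 30, 0));  // far below 0 dB off the main lobe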
The gains from adjusting the beam shapes used for transmissions come from both increased received power (increased SNR) as well as a possibly lower interference (increased SINR) in a multi cell scenario. However, how much of these gains may be realized depends on how well the transmitting antenna system can direct the energy to the target users, and how well it avoids emitting energy to the interfered users.
The area of beamforming is usually divided in two parts, namely user specific beamforming (UE-BF) and cell specific beamforming (CS-BF). With user specific beamforming, the transmit beam used is chosen to optimize the channel between an eNB and a single user, which is the method to use when transmitting user specific data. With CS-BF, beams are chosen to support all users within the cell, which is a method suitable for transmitting control information or other broadcast signals. Hence a cell-specific beam will generally cover a larger solid angle than a user-specific beam.
In present wireless communication systems and frequency division duplexing FDD systems in particular, the user specific beamforming is typically implemented through the use of codebooks. There are both proprietary codebooks as well as standardized. When using codebook based transmissions, each user (which knows the codebook prior to transmission) may estimate what the gain would be for each code word and then feedback information of this to the eNB.
Cell specific beamforming, on the other hand, is standard transparent. Further, since the beams are supposed to suit all users within a cell, the best beam shape cannot be measured and optimized with limited feedback from a few selected users. Therefore, one commonly assumed method to optimize cell specific beams is through the use of self-organizing network (SON) algorithms, sometimes called reconfigurable antenna system self-organizing networks (RAS-SON) algorithms. Such algorithms may typically measure some second-order effect of changes in beam shapes, and optimize the beam shapes based on these. For example, one node may form some candidate cell specific beams, then try these settings/beams in the network during a limited period of time, and evaluate which of these settings/beams gives the best capacity or system throughput. This procedure is then repeated for various nodes/areas throughout the network to tune the overall setting and thus increase the overall network performance.
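A minimal sketch of the trial-and-evaluate loop described above, in TypeScript. The candidate settings and the measureThroughput callback are hypothetical stand-ins (a real evaluation would run the live network for a statistically representative period), so this illustrates the shape of the loop rather than the patent's algorithm.

// RAS-SON-style search: try each candidate cell-specific beam setting for a
// limited period and keep the one with the best measured score.
interface BeamSetting { tiltDeg: number; beamWidthDeg: number; }

async function pickBestBeam(
  candidates: BeamSetting[],
  measureThroughput: (b: BeamSetting) => Promise<number>, // hypothetical probe
): Promise<BeamSetting> {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const candidate of candidates) {
    // Each trial must run long enough to be statistically representative of
    // the traffic, which is exactly why such blind loops are slow.
    const score = await measureThroughput(candidate);
    if (score > bestScore) { bestScore = score; best = candidate; }
  }
  return best;
}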
These types of RAS-SON algorithms are blind/semi-blind and hence they become relatively slow (depending on the amount of time for which each setting is evaluated). This will particularly be the case when the beam shapes of multiple cells are to be improved, as is typically the case in cellular networks.
Cell specific beamforming, and specifically optimization of the cell specific beam shapes, is typically done to define and isolate the cells from each other. Well-isolated cells make it easier for the UE to choose a serving cell for communication.
Thus, current cell shaping methods are typically blind/semi-blind in the sense that the antenna patterns at one or more sites are changed slightly, and then they are evaluated for some period of time. To avoid instability in systems, this period has to be long enough to be statistically representative of the traffic situation. This results in slow algorithms.
Further, since the set of arbitrary weight combinations in an array that generate arbitrary beam shapes is far too large (for large arrays) to test exhaustively, only a smaller restricted subset is usually considered. Such beam shapes, for example a fixed beam width and certain tilt settings, may be optimal for neither received signal strength nor interference suppression.
"perplexity_score": 451.3,
"pile_set_name": "USPTO Backgrounds"
} |
// RUN: %clang_cc1 -triple x86_64-unknown-linux -O0 -fsanitize-cfi-cross-dso \
// RUN: -fsanitize=cfi-icall,cfi-nvcall,cfi-vcall,cfi-unrelated-cast,cfi-derived-cast \
// RUN: -fsanitize-trap=cfi-icall,cfi-nvcall -fsanitize-recover=cfi-vcall,cfi-unrelated-cast \
// RUN: -emit-llvm -o - %s | FileCheck %s
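// Added summary comment (not in the original test): this exercises the
// cross-DSO CFI slow path. __cfi_check_fail dispatches on the check-kind
// byte of its data argument: it traps for the -fsanitize-trap kinds
// (cfi-icall, cfi-nvcall), calls the recoverable
// __ubsan_handle_cfi_check_fail for the -fsanitize-recover kinds
// (cfi-vcall, cfi-unrelated-cast), and calls the aborting handler for the
// remaining kind (cfi-derived-cast), as the CHECK lines below verify.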
void caller(void (*f)()) {
f();
}
// CHECK: define weak_odr hidden void @__cfi_check_fail(i8*, i8*)
// CHECK: store i8* %0, i8** %[[ALLOCA0:.*]], align 8
// CHECK: store i8* %1, i8** %[[ALLOCA1:.*]], align 8
// CHECK: %[[DATA:.*]] = load i8*, i8** %[[ALLOCA0]], align 8
// CHECK: %[[ADDR:.*]] = load i8*, i8** %[[ALLOCA1]], align 8
// CHECK: %[[ICMP_NOT_NULL:.*]] = icmp ne i8* %[[DATA]], null
// CHECK: br i1 %[[ICMP_NOT_NULL]], label %[[CONT0:.*]], label %[[TRAP:.*]],
// CHECK: [[TRAP]]:
// CHECK-NEXT: call void @llvm.trap()
// CHECK-NEXT: unreachable
// CHECK: [[CONT0]]:
// CHECK: %[[A:.*]] = bitcast i8* %[[DATA]] to { i8, { i8*, i32, i32 }, i8* }*
// CHECK: %[[KINDPTR:.*]] = getelementptr {{.*}} %[[A]], i32 0, i32 0
// CHECK: %[[KIND:.*]] = load i8, i8* %[[KINDPTR]], align 4
// CHECK: %[[VTVALID0:.*]] = call i1 @llvm.type.test(i8* %[[ADDR]], metadata !"all-vtables")
// CHECK: %[[VTVALID:.*]] = zext i1 %[[VTVALID0]] to i64
// CHECK: %[[NOT_0:.*]] = icmp ne i8 %[[KIND]], 0
// CHECK: br i1 %[[NOT_0]], label %[[CONT1:.*]], label %[[HANDLE0:.*]], !prof
// CHECK: [[HANDLE0]]:
// CHECK: %[[DATA0:.*]] = ptrtoint i8* %[[DATA]] to i64,
// CHECK: %[[ADDR0:.*]] = ptrtoint i8* %[[ADDR]] to i64,
// CHECK: call void @__ubsan_handle_cfi_check_fail(i64 %[[DATA0]], i64 %[[ADDR0]], i64 %[[VTVALID]])
// CHECK: br label %[[CONT1]]
// CHECK: [[CONT1]]:
// CHECK: %[[NOT_1:.*]] = icmp ne i8 %[[KIND]], 1
// CHECK: br i1 %[[NOT_1]], label %[[CONT2:.*]], label %[[HANDLE1:.*]], !nosanitize
// CHECK: [[HANDLE1]]:
// CHECK-NEXT: call void @llvm.trap()
// CHECK-NEXT: unreachable
// CHECK: [[CONT2]]:
// CHECK: %[[NOT_2:.*]] = icmp ne i8 %[[KIND]], 2
// CHECK: br i1 %[[NOT_2]], label %[[CONT3:.*]], label %[[HANDLE2:.*]], !prof
// CHECK: [[HANDLE2]]:
// CHECK: %[[DATA2:.*]] = ptrtoint i8* %[[DATA]] to i64,
// CHECK: %[[ADDR2:.*]] = ptrtoint i8* %[[ADDR]] to i64,
// CHECK: call void @__ubsan_handle_cfi_check_fail_abort(i64 %[[DATA2]], i64 %[[ADDR2]], i64 %[[VTVALID]])
// CHECK: unreachable
// CHECK: [[CONT3]]:
// CHECK: %[[NOT_3:.*]] = icmp ne i8 %[[KIND]], 3
// CHECK: br i1 %[[NOT_3]], label %[[CONT4:.*]], label %[[HANDLE3:.*]], !prof
// CHECK: [[HANDLE3]]:
// CHECK: %[[DATA3:.*]] = ptrtoint i8* %[[DATA]] to i64,
// CHECK: %[[ADDR3:.*]] = ptrtoint i8* %[[ADDR]] to i64,
// CHECK: call void @__ubsan_handle_cfi_check_fail(i64 %[[DATA3]], i64 %[[ADDR3]], i64 %[[VTVALID]])
// CHECK: br label %[[CONT4]]
// CHECK: [[CONT4]]:
// CHECK: %[[NOT_4:.*]] = icmp ne i8 %[[KIND]], 4
// CHECK: br i1 %[[NOT_4]], label %[[CONT5:.*]], label %[[HANDLE4:.*]], !nosanitize
// CHECK: [[HANDLE4]]:
// CHECK-NEXT: call void @llvm.trap()
// CHECK-NEXT: unreachable
// CHECK: [[CONT5]]:
// CHECK: ret void
// CHECK: define weak void @__cfi_check(i64, i8*, i8*)
// CHECK-NOT: }
// CHECK: call void @llvm.trap()
// CHECK-NEXT: ret void | {
"perplexity_score": 1790.6,
"pile_set_name": "Github"
} |
Thursday, April 27, 2017
One Last Pick Thru the Bins Volume 24: Menomena. Who Might Be My Favorite Band
Large intestines, c'mon forward! Shake it!
I first heard Menomena in…well, looks like that happened in
August of 2012, but I actually pulled it from PDX Pop Now!’s 2010 compilation.
The specific song was “Five Little Rooms.” I have been listening to that same
song, often late at night and sufficiently loose (or tight…still struggling
with which way to go there), since then, trying to decipher its meaning. I’m no
Charlie Manson, some clown sifting the sound and lyrics for instructions on how
to bring about Racist Doomsday (true story). Still, everything about that song – the drums pounding
as if on the walls of those rooms, the vaguely funereal sound of the piano, the
soaring buzz of the guitars toward the end, the mocking chorus – somehow recalls sensations
of panic. Or, as one part of the lyrics puts it, “Click your heels and get the
hell away.” Look, it sounds much more artful in the song...
We’re all familiar with the record store routine, the act of
walking in there with a head full of ideas about what to buy, only to have every
last idea leak out one aisle to the next, each successive band’s name taking
you further and further from those original thoughts. When music collecting
went mostly online, I stopped going to record stores – probably due to the
above, too – but the day I finally did go, though, Menomena stayed front and
center until I walked it to the register. “Five Little Rooms” had everything to
do with that.
The above might reek of obsessive fandom, but, until this
week, I’ve never quite pored over Menomena’s albums; and it’s rare that I pull
a song apart like I did (and do) with “Five Little Rooms.” It’s not my style,
for one (see: this entire goddamn project), but there’s also no world in which
Menomena makes for easy listening. It’s not even that they don’t do infectious
beats and addictive hooks – though they don’t do them much. Menomena is a study
of details, the process of trying to catch and piece out the way they construct
each song. They’re a band built not just for volume, but for the era of
earbuds; listening to them almost requires sound-blocking, because it’s too
easy miss an instrument, or some accent, or fail to note how, in one song, they
built one bridge on a choppily pulsing saxophone, and the next bridge by
plucking twinkles out of strings (see: “Weird”). That one also has one of my
favorite lyrical phrases: “There’s no love lost that I can’t find again.” (Damn
it! Typing that in plain text strangles the lyrics. Delivery matters. The way
one communicates a thought or a feeling will always change it. Obviously.)
It feels right to tackle Menomena from the angle of
approach, because it’s a big part of their sound. It starts with their songwriting
process, something covered on the band’s Wikipedia page:
"First, we set the tempo of the click, which is played
through a pair of headphones. We then take turns passing a single mic around
the room. One of us will hold the mic in front of an instrument, while another
one of us will lay down a short improvised riff over the click track. We
usually start with the drums. Once the drums begin looping, we throw on some
bass, piano, guitar, bells, sax, or whatever other sort of noisemaker happens
to be in the room. Deeler keeps the process democratic, which is the only way
we can operate."
Also worth noting: they compose their songs slowly and,
apparently, over email. It takes them a while, too, and maybe that painstaking process
lands on something about them: they don’t write bad songs. No “I Was Made for Loving You, Baby,” for these guys. Don’t get me wrong: their songs aren’t my
children, perfect little things I’m obliged to love equally, because I don’t.
When I say they don’t write bad songs, I mean there’s nothing cheap in their
music, nothing easy. It’s possible that oversimplifies things, because it’s not
like the band has to reinvent the wheel every time they record: Menomena has a
sound, something their fans (like me) respond to, and, because there’s so much
in what they do, they can keep mining that massive goddamn vein till they die
and compulsive twits like me will keep coming back to pull apart the puzzle so
we can see how this one fits together.
For all the parts that comprise most Menomena songs, their
particular genius comes with their talent for giving each of those parts space
to breathe and be heard. Again, so long as one listens with earbuds; try this
in a car, and you’re fucked, you’ll miss half of it at a minimum. Some of it comes
by way of contrasts so vivid you can’t miss them: a heavy, rhythmic baseline
thudding at the floor, while a piano twinkles in all the open space above it
(again, God bless the piano). No less often, they seem to dial back every other
sound in order to let the element they want step the foreground. The band hauls
the guts of what they’re doing front and center, basically, sort of like one of
those old see-through anatomy statuettes: the listener only has to pay
attention to hear with decent clarity how the component parts fit together to
make the sound.
Insofar as Menomena has a sound, they also possess the
talent, precision and boldness to excavate every last possibility within that
sound. And, as with most of your better bands, they pulled their sound to slightly
different ground as they needed to in order to expand it. Like most acts these
days, their oeuvre gets a little sloppy between remixes and remastered
editions, but I relied on four albums to produce this post: I Am the Fun Blame
Monster! (2003; also, anagram, or whatever the fuck those things are called), Friend And Foe (2007), Mines (2010), and Moms (2012). The
progression from beginning to end isn't so rare – e.g., Blame Monster sounds
spare and experimental next to Moms' tidied-up and, frankly, lighter, cleaner
sound – and that's not an insult for once. It's the opposite, actually; Moms might be my favorite. The close second? Friend And Foe. Or vice versa. It's
close.
To make a distinction, Friend And Foe feels like an anchor
for Menomena’s “sound”: they're as broad musically on that album as they feel
anywhere, but there’s more assurance in the music than on Blame Monster. Moms,
meanwhile, shows what they can do in the same space when they want to, for lack
of a better word, have fun. There's something I picked up in a music theory video
that I can’t stop hearing when I listen to Moms, even if someone could tell me
I’m 100% full of shit on a closer listen: the brighter notes on that album
make me wonder if the band didn't just switch to major chords for that one.
But listen to “Plumage” and “Skintercourse” next to, say, “Muscle’n Flo” and “Running”
and tell me the former isn’t lighter. And poppier. And there’s nothing wrong
with that.
Every Menomena album has enough to 1) fan-crush over and 2) pull
apart that I could go on for four more pages (one for each album; oh yeah!). Because
I like everything, basically, I don’t know where to start except to talk about
everything. Rather than lard up the rest of this post with links to Youtube videos of
uneven quality, instead I’m going to close out this post by listing all the
songs that I’ll be posting to a Spotify playlist (yes, another one) and posting
to twitter. There are a couple things I want to touch on before that, though.
First, I want to return to the idea about not loving all
Menomena songs equally. I spent today (at work; built-in limitation) specifically
listening for songs I liked well enough, but not a lot, never mind loved. I
only got all the way through two albums – Blame Monster and Friend And Foe…the
latter of which, it feels relevant to point out, might be my favorite. Even so,
I pulled out “Boyscout’n” and “Running” as two songs that sort of fell short;
the latter feels a little like parody. Blame Monster (A & B sides) has
more: “Strongest Man in the World,” “Shirt” (which, per my notes, “all the
elements, just a little flatter”), “Nebali”, and “Monkey’s Back” (but even that
wandering mess has a heavy middle section that blows my damn mind). When I
listened to Mines later, I found a couple more that leave me flat - “BOTE” and “Oh, Pretty Boy, You’re Such a Big Boy” – so it’s not like they’re batting 1.000. Though,
for me, their average clearly floats on the high end. (Guys! It's, like, .750!)
The second point gets at something central to what I love
about Menomena. Disappointment, alienation and generally picking at the
protective devices that scab over raw emotional truth are some favorite themes.
And, to be clear, I love that stuff. It is my fucking jam. I savor songs that
push on raw nerves in the same way I spend all day pushing on an aching tooth
and for exactly the same reason. Even if I can’t wrap my thoughts around every
piece of why that feels like life’s realest pleasure, that’s a lot of what I’m listening
for in lyrics: some version of truth.
And, that’s enough for this one. Below are the 20 (edit; see below) songs that
I either love or that I feel best represent them as a band. My guess is that
those two sentiments cross over, and multiple times. At any rate, the songs are
below, and the playlist is on Spotify, with a link on my twitter feed.
"perplexity_score": 473,
"pile_set_name": "Pile-CC"
} |
Q:
jQuery $.post only works when some function is executed after it.
I have this jQuery project that is almost complete. For the final portion, when the user's data has been successfully validated, it executes an external PHP script using the $.post function, passing the user-entered info as parameters.
My problem though is that my callback function only works if I put an alert function after it.
Works:
$.post("backend.php", dataString,
function(response) {
if (response=="1") {
alert("ok");
}
}, 'html');
alert('test');
But if I don't put that alert after the closing tag, my script doesn't do the alert "ok", or anything else in the callback.
Do any of you see what's wrong w/ my code?
Thanks!
A:
My guess is that there's something that comes after the $.post() script that either causes the page to change or causes the post to be stopped or fail.
Fundamentally, all the alert('test') does is block your code waiting for the OK button to be pressed. The only way that could make your code suddenly work is if there is something executing in your code after that second alert that messed up the post.
Remember the post is asynchronous. It will happen in the background while any other code you have runs and only when it succeeds will the alert('ok') run.
So, my guess is that something else in your page is running and messing up the post. | {
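One common way this happens: the $.post runs from a submit button or link, so the browser starts navigating, unloads the page, and aborts the in-flight request; the blocking alert('test') merely kept the page alive long enough for the response to arrive. A minimal sketch of the usual fix is below; the #myForm selector and the commented-out redirect target are assumptions, since the question doesn't show how the post is triggered.

// Assumed setup: the $.post call runs from a form's submit handler.
// Cancelling the default submit keeps the page alive until the asynchronous
// POST finishes; any navigation happens inside the success callback.
declare const $: any; // jQuery global, assumed to be loaded on the page

$("#myForm").on("submit", function (event: Event) {
  event.preventDefault(); // don't navigate away and abort the request
  const dataString = $(this).serialize();
  $.post("backend.php", dataString, function (response: string) {
    if (response === "1") {
      alert("ok");
      // window.location.href = "success.php"; // hypothetical redirect on success
    }
  }, "html");
});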
"perplexity_score": 1038.2,
"pile_set_name": "StackExchange"
} |
; RUN: opt < %s -instcombine -S | FileCheck %s
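; Added explanatory comment (not in the original test): the !range !0
; metadata bounds the loaded byte to [0, 2), i.e. the value is 0 or 1, so
; instcombine can drop the redundant `and i8 %a, 1` mask and compare %a
; against zero directly, which is what the CHECK lines verify.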
define zeroext i1 @_Z3fooPb(i8* nocapture %x) {
entry:
%a = load i8, i8* %x, align 1, !range !0
%b = and i8 %a, 1
%tobool = icmp ne i8 %b, 0
ret i1 %tobool
}
; CHECK: %a = load i8, i8* %x, align 1, !range !0
; CHECK-NEXT: %tobool = icmp ne i8 %a, 0
; CHECK-NEXT: ret i1 %tobool
!0 = !{i8 0, i8 2} | {
"perplexity_score": 1468.1,
"pile_set_name": "Github"
} |
Leigh Whannell is currently filming Insidious Chapter 3 and not only do we have our first look at the film, we've got new plot information and a chance for fans to visit the set. The film, which features new characters played by Dermot Mulroney and Stefanie Scott as well as returning characters played by Lin Shaye, Angus Sampson and Whannell himself, opens May 29, 2015. It will show events that "predate" what happens in the first two films. Yes, it's a prequel. Read more about the Insidious Chapter 3 plot and more below.
Here’s your first look at Insidious Chapter 3.
And here’s the full press release on both the film and the contest. I’ll bold the plot details.
Focus Features, Entertainment One (eOne), Sony Pictures Worldwide Acquisitions (SPWA), and Blumhouse Productions announced today that Insidious: Chapter 3 has begun production in Los Angeles. Leigh Whannell, co-creator of the terrifying horror franchise, is writing and directing the new movie.
To commemorate the start of production of the series’ newest chapter, Focus is launching a sweepstakes on the official Insidious Facebook (www.facebook.com/InsidiousMovie) and Twitter (www.twitter.com/InsidiousMovie) pages. Insidious buffs and fans will have the chance to win a trip for two to Los Angeles to visit the set of Insidious: Chapter 3. The contest begins on Tuesday, July 22nd, and ends on Friday, July 25th. The contest’s complete official rules can be accessed at www.focusfeatures.com/article/insidious_set_visit.
Insidious: Chapter 3 stars Dermot Mulroney (of August: Osage County) and Stefanie Scott (of Blumhouse’s upcoming Jem and the Holograms) alongside Lin Shaye, Angus Sampson, and Mr. Whannell, with the latter trio reprising their roles from the first two movies in the franchise.
In Insidious: Chapter 3, a twisted new tale of terror begins for a teenage girl and her family, predating the haunting of the Lambert family in the earlier movies and revealing more mysteries of the otherworldly realm The Further.
Focus Features will release Insidious: Chapter 3 domestically nationwide on Friday, May 29th, 2015. eOne will distribute the picture in Canada, U.K., and Spain; and Sony will distribute the picture in the rest of the world. Jason Blum of Blumhouse, who produced both previous movies in the series, is producing the next installment with returning producer Oren Peli and franchise co-creator James Wan, who directed the two earlier films written by Mr. Whannell with story by Mr. Wan and Mr. Whannell. Brian Kavanaugh-Jones, Steven Schneider, Charles Layton, and Xavier Marchand are executive-producing Insidious: Chapter 3.
Insidious, released in 2011, and Insidious: Chapter 2, released in 2013, grossed a combined $257 million worldwide.
All the contest stuff aside, I like the idea of the film being a prequel. It opens the door to show how Shaye and her team know the ins and outs of The Further and will help to expand some of the series' mythology.
What do you think of the angle? | {
"perplexity_score": 204.7,
"pile_set_name": "OpenWebText2"
} |