We Cherish Interactions Because We Know Your Time Is Precious, So Get In Touch Now. Here at YouPals we are here to help you, so if there is anything that you need to know, or if you have any problems or questions, please mail us here or at [email protected]. We want to make sure that this site is friendly, fun and helps you find new people, make new friends or expand your social network. We welcome feedback and suggestions and hope to see you online soon. Meanwhile, use search to find a list of people who match your criteria. Have fun, keep sharing things of interest and enjoy yourselves! If you want to ask us a question directly, please submit your message with the following form.
{ "pile_set_name": "Pile-CC" }
That cricket sound you hear - that is what happens when facebook threads of us freaks go some 100 comments or more long. As much as I love my daily fix of FB caffeine, I think the forum needs a way to get less stuffy and more laid back - like we are on the forum. The medium itself is of course part of the problem - but what do you all think?

Not sure I follow your post. Which forum is too stuffy/not laid back enough? Facebook threads or the "Israeli and Kosher Wine Forums" here? There isn't really anything too stuffy about 99% of what my facebook feed serves up when I clock in. The degree to which facebook is or isn't engendering laid-back discussion on a zillion and one different threads has as much to do with your facebook connections and groups as anything else. As you said, the medium is part of the issue. Facebook is a social media space for lots of (possible) things, not just one bundled topic of interest (like "wine"). Facebook is programmed to keep one's attention and prolong one's facebook activity. Integration with smart phones makes it even easier, and so more likely to be used on the fly. Just my two cents.

We could create a private group on FB that does not show up on anyone's FB page when they post - it would have the "less stuffy" advantage of FB but is much more organized and less invasive than a message thread with a bunch of people.

David Raccah wrote: That cricket sound you hear - that is what happens when facebook threads of us freaks go some 100 comments or more long. As much as I love my daily fix of FB caffeine, I think the forum needs a way to get less stuffy and more laid back - like we are on the forum. The medium itself is of course part of the problem - but what do you all think?

While English is my first language, I am not sure I understand the post. Let me know if I have it right: Facebook is taking up people's attention that might otherwise be on the forum. The forum needs to adapt in some way to survive as anything meaningful, as evidenced by the lack of traffic here.

Indeed - I was worried I was not clear. The forum has become less and less relevant. It is used often when looking for particular notes, and of course the weekly weekend thread. On Facebook, a bunch of us are pounding away on the keyboard, IM'ing and blabbering away about the world of wine - which is hilarious and definitely not for the open spaces of the forum, which feels more formal. The forum needs to add a less formal location - like FB - and allow people to just hang out. There is a weekly chat that the other forum uses - but that, while less formal, is still formal in the sense that I have no idea who any of the people are.

Gee, I'm not sure any of this really matters. 90% of my forum time is spent on the other one, which is more robust and vibrant. And I don't really do Facebook all that much. It's nice to have this place for kosher talk.

I have a life, and I go onto Facebook maybe once every 2 or 3 days, whereas this place I have been accessing a few times a day (nothing of interest lately, though). Too much meaningless chatter on Facebook in other areas of my friends' lives, and not much of interest there. This place is easy to monitor for interesting topics, Facebook less so. I'm not sure a facebook group would be any better, especially if it takes a jump from my home news feed. There is a trade-off to trying to make this more of a social club for a couple of dozen folks (or less) around the globe.
The more informal and core-group-driven any medium is, the greater the 'closed-ranks' or 'circled-wagon' feel to anyone not in the group. Traffic here is as much about having something to say as saying it - and it is partly the degree to which there has been any value added, with minimal barrier to new entrants, that drives folks to dip in or tune out. Think of it this way: if I stumbled on this forum and discovered that most threads seemed like informal facebook conversations between a group of pals, I'd probably move on because it's just chatter - I wouldn't be learning anything about kosher or Israeli wine, about the market, about new wines or trends or whatever, and so it would add nothing to my own wine appreciation or knowledge - and it wouldn't really be fun for anyone who isn't already "in it." I guess my views here should be taken with a healthy grain of salt as I'm hardly an active member, but part of what has kept me away since the forum migrated over from Stratsplace was that Rogov allowed it to become more and more about less and less, and then his death reduced a large chunk of what was left. The community that remained and/or has since joined or become active has filled some of that void, but only so much... hence Raccah's "cricket sound." I frankly don't see how making this forum less formal and even more chatty will add value to what remains a fairly narrow space. Again, just my two cents.

David Raccah wrote: The forum needs to add a less formal location - like FB - and allow people to just hang out. There is a weekly chat that the other forum uses - but that, while less formal, is still formal in the sense that I have no idea who any of the people are.

Actually, I've participated in the weekly chat on the other forum several times myself, and all I can say is that it is pretty much like the crazy chatting a bunch of us are having on facebook lately. It is even less wine-related, as many of the folks there chat about everything and nothing all at the same time, with wine-related topics jumping in here and there from time to time.

I pretty much agree with everything Josh said. At the risk of sounding a bit snobby, I like that the forum has an unspoken entry barrier - quality content (or at least it used to). While Facebook is a more informal medium that allows for freer "conversation"-like discussion, there is no barrier to what is said or discussed, resulting in a large ratio of meaningless drivel for every bit of quality (in this case) wine-related information. Since Rogov's passing, in addition to the obvious loss of a big chunk of the forum's professional/quality content, the level of discourse has been reduced from a more professional discussion of wines, varietals, trends in the industry, etc. to more entry-level discourse. While this has benefited the forum's declining participating membership by opening up the forum to more and more participants (which is a very positive thing), it has also resulted in the fact that there is less quality wine-related information being discussed/offered here, further driving away many of the old-time regulars...

Since Rogov's passing, in addition to the obvious loss of a big chunk of the forum's professional/quality content, the level of discourse has been reduced from a more professional discussion of wines, varietals, trends in the industry, etc. to more entry-level discourse. So there is no reason that participants cannot add such posts to the mix.
I try, for instance, when I run across interesting articles, to provide a link for forum members. The Facebook conversations are definitely not something we want to have posted onto this forum - it's a different style of conversation, and as Yossie said, there is a lot of conversation that is not directly relevant to wine. I don't see why the two can't coexist - the forum is what it is, and regardless of whether participation is dwindling or not, I don't think it's because there is no way to have conversations like on Facebook. And even if that were true, then who cares, as long as we have a place to talk about wine. If people want a "chat room" for this group to have those informal conversations, I'm sure that can easily be done on facebook or google plus - I think people would be more inclined to use one of those more frequently than enter a chat room on this site.

Agreed. My concern is that we are losing some of the wine conversation to FB. As we have discussed in the past, with Rogov gone, each of us needs to work a bit harder at contributing and maintaining the forum IF we want it to survive.

It ought not to be an either/or - they are very different media. I think many were active on this forum as a place to interact with Rogov and the discussion he facilitated. At the same time, even 5 years ago, social media was very different - there were very few English blogs about Israeli wine (I just started my own in 2007, and yes, much content was taken from scoops here), Twitter was almost non-existent and Facebook only recently opened to non-students. The medium has changed.

Dropping in here out of respect for this conversation, let me say that I enjoy Facebook too, but view it as a rather different experience. I use my Facebook page (Robin Garr, where you are all more than welcome to "friend" me) in a rather different way, where I connect with a variety of friends old and new and bring together all the different circles I live in at one central place where I can share whatever is on my mind at the time (that doesn't exceed my personal bounds of privacy), whether it be food or wine or liberal politics or progressive theology or music and art and literature or something else. The forums are much more constrained to specific topics and incorporate a smaller circle of people who come together to talk about food and wine - and in the case of this forum, Israeli and kosher wines. I see room for both, and can only repeat my frequent reassurance: even with Rogov gone, this community is warmly welcome here, and I am happy to provide an online home for you. All that said, if you would like to have an "off topic" forum added within this section as a place where only registered members can post on topics other than food and wine, I can easily do that. If you decide you'd like to have another room added on, just let me know.

Having seen the Wine Insanity chatline on Facebook, I can definitely see why it is compelling. Many of the regulars have migrated to that forum, where recognition is instantaneous for those who require dialogue. Although only a few are on it, it bustles with activity. As opposed to this forum, where fewer are active participants now, and we slog through 1 or 2 posts per day. For me, it doesn't matter because I have little to post anyway, but I wonder at the value of either (a lot of the chat there has no value, but neither does the dead space here). Come on, people, let's make this relevant or fold it. Ideas would be welcome.
The high point here recently seems to have been Rogov's drinking windows.
{ "pile_set_name": "Pile-CC" }
A huge storm stretching from Newfoundland to Portugal is lingering over the ocean and will eventually weaken before moving on. NBC's Brian Williams reports.

>>> Look at the size of this storm over the Atlantic Ocean. While it's far from the continental U.S., it stretches the width of the ocean, links two continents together. It was captured by satellite stretching from Newfoundland to Portugal. Look at its southern tip, stretching from almost North Africa back to the Caribbean. And at its center, the pressure is as intense as a category 3
{ "pile_set_name": "Pile-CC" }
Altruism in dental students. Altruistic dentists play a central role in treating minority populations, the poor, the uninsured, and those living in underserved communities. This study examines factors associated with graduating dental students' altruistic attitudes. We use a nationally representative dataset, the 2007 American Dental Education Association Survey of Dental School Seniors (n=3,841), and a comprehensive framework to investigate individual, school, and community characteristics that may influence altruism. Student characteristics were the most significant predictors: women, African Americans, Hispanics, Asian/Pacific Islanders, and students with low socioeconomic status expressed greater altruism than their counterparts. These results inform dental educators and administrators to expand efforts to recruit underrepresented racial/ethnic and low-income students into dentistry. Additionally, we found that students with altruistic personalities attend schools where the social context is more accepting and respectful of diversity. This suggests that schools can promote altruism in their students by creating a positive culture and environment for diverse populations.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to debug `Error while processing function` in `vim` and `nvim`?

TL;DR: How do I find where exactly a vim or nvim error started (which file?) when I'm interested in fixing the actual issue and not just removing the bad plugin? Is there anything better than strace and guesswork to find the error's origin?

Issue: I often add a plugin to my vim or nvim config and end up getting errors on hooks (buffer open, close, write):

"test.py" [New] 0L, 0C written
Error detected while processing function 343[12]..272:
line 8:
E716: Key not present in Dictionary: _exec
E116: Invalid arguments for function get(a:args, 'exec', a:1['_exec'])
E15: Invalid expression: get(a:args, 'exec', a:1['_exec'])

The problem is, I have no idea where those come from; I only get a line number in some unknown file, and it's not my vim/nvim config file.

A: This particular plugin has been written in an object-oriented style. The 343[12]..272 refers to an anonymous (numbered) function stored in a Dictionary object. If you know which (recently installed) plugin is at fault, you can add :breakadd file */pluginname.vim to your ~/.vimrc to stop when one of its scripts is sourced, then step through it line by line with the debugger's next command. Alternatively, you can capture a full log of a Vim session with vim -V20vimlog. After quitting Vim, examine the vimlog log file for the error message and the suspect commands just before it.
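For concreteness, here is a minimal Vim script sketch of both approaches from the answer above. The plugin name badplugin and the function number 343 are placeholders taken from the example error, not real names, so adjust them to your setup.

" Break when a script of the suspect plugin is sourced, then step through it
" at the debug prompt using the debugger commands: next, step, cont.
breakadd file */badplugin.vim

" Alternatively, log an entire session from the shell:
"   vim -V20vimlog test.py
" then search the resulting 'vimlog' file for E716; the lines just before
" the error name the script and the numbered function being processed.

" Once you know the function number, this should print its body and, with
" the 'verbose' prefix, the script it was last defined in:
"   :verbose function {343}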
{ "pile_set_name": "StackExchange" }
Download A Tale of Two Fractals by A.A. Kirillov PDF

Since Benoit Mandelbrot's pioneering work in the late 1970s, scores of research articles and books have been published on fractals. Despite the volume of literature in the field, the general level of theoretical understanding has remained low; most work is aimed either at too mainstream an audience to achieve any depth or at too specialized a community to achieve widespread use. Written by celebrated mathematician and educator A.A. Kirillov, A Tale of Two Fractals is intended to help bridge this gap, offering an original treatment of fractals that is at once accessible to beginners and sufficiently rigorous for serious mathematicians. The work is designed to give young, non-specialist mathematicians a solid foundation in the theory of fractals and, in the process, to equip them with exposure to a variety of geometric, analytical, and algebraic tools with applications across other areas.

The epitome of commercial jet airliner travel, the Boeing 707 served with all of the principal carriers, bringing new standards of comfort, speed and efficiency to airline passengers. Pan Am was the first major airline to order it and flew its fleet emblazoned with the famous Clipper names. BOAC placed a substantial order and insisted on Rolls-Royce Conway engines rather than the Pratt & Whitney JT series engines favored by American buyers.

Students love Schaum's--and this new guide will show you why! Graph theory takes you straight to the heart of graphs. As you study along at your own pace, this guide shows you step by step how to solve the kind of problems you are going to find on your exams. It gives you hundreds of completely worked problems with full solutions.

An exploration of regression graphics through computer graphics. Recent developments in computer technology have stimulated new and exciting uses for graphics in statistical analyses. Regression Graphics, one of the first graduate-level textbooks on the subject, demonstrates how statisticians, both theoretical and applied, can use these exciting techniques.

This in-depth coverage of important areas of graph theory maintains a focus on symmetry properties of graphs. Standard topics on graph automorphisms are presented early on, while in later chapters more specialized topics are tackled, such as graphical regular representations and pseudosimilarity. The final four chapters are devoted to the reconstruction problem, and here special emphasis is given to those results that involve the symmetry of graphs, many of which are not to be found in other books.

This property has an important corollary. Corollary 1 (Maximum principle). Assume that Ω is a connected domain with boundary ∂Ω. Then any nonconstant real harmonic function on Ω attains its maximal value only on the boundary. For the Dirichlet problem one looks for a harmonic function f on Ω such that f|_{∂Ω} = φ; it can be written f(x) = ∫_{∂Ω} φ(y) dμ_x(y). The measure μ_x is called the Poisson measure, and in the case of a smooth boundary it is given by a density p(x, y) that is a smooth function of x ∈ Ω and y ∈ ∂Ω; μ_x(A) can be read as the probability of reaching the boundary in a set A starting from x and moving randomly along Ω. … (1 + F). Here, as always, when an arithmetic operation is applied to a set, it means that it is applied to each element of the set. A picture of this set is shown in Fig.
8 (taken from the book [Edg90]).

3. Let ω = e^{2πi/3}, a cube root of 1. What is … for such a system?

3. Continued Fractions. There is one more interesting numerical system related to the notion of continued fraction. Let k = {k_1, k_2, …} be a finite or infinite system of positive integers. … as n → ∞ if the sequence k is infinite. Consider the triangular piece of the infinite gasket that is based on the segment [k − 1, k + 1]. It is shown in Fig. 4. We denote the values of … at the points k − 1, k, k + 1 by a_−, a, a_+ respectively. Then the values b_+, b_−, c in the remaining vertices shown in Fig. … (l) is an integer when l < 2^n. The result is c = …, b_+ = …, b_− = … . Consider now the functions g_± … (k ± …). Knowing the boundary values of the corresponding harmonic functions on pieces of S, we can write (a_± + b_±)/2 … g_± …
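The displayed formulas for c and b_± above did not survive extraction, so their exact coefficients cannot be recovered here. For orientation only, the standard harmonic-extension ("1/5–2/5") rule on a single Sierpiński gasket triangle, written in the passage's notation with corner values a_−, a, a_+, reads as follows; the book's own expressions may differ from this generic rule.

\[
c = \frac{a + 2a_- + 2a_+}{5}, \qquad
b_+ = \frac{a_- + 2a + 2a_+}{5}, \qquad
b_- = \frac{a_+ + 2a + 2a_-}{5},
\]

where c lies on the edge joining the corners with values a_− and a_+, and b_± lie on the edges joining the corner with value a to those with values a_±.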
{ "pile_set_name": "Pile-CC" }
Breast cancer screening and biomarkers. Annual screening mammograms have been shown to be cost-effective and are credited with the decline in mortality from breast cancer. New technologies including breast magnetic resonance imaging (MRI) may further improve early breast cancer detection in asymptomatic women. Serum tumor markers such as CA 15-3, carcinoembryonic antigen (CEA), and CA 27-29 are ordered in the clinic mainly for disease surveillance, and are not useful for detection of localized cancer. This review will discuss blood-based markers and breast-based markers, such as nipple/ductal fluid, with an emphasis on biomarkers for early detection of breast cancer. In the future, it is likely that a combination approach to simultaneously measure multiple markers would be most successful in detecting early breast cancer. Ideally, such a biomarker panel should be able to detect breast cancer in asymptomatic patients, even in the setting of normal mammogram and physical examination results.
{ "pile_set_name": "PubMed Abstracts" }
Thursday, September 29, 2016 IRS REPORT: Bruh, Joe the Plumber Was Right; Obamacare Just Spread $11 Billion of Wealth Around If folks don't like their healthcare then they can give us all their money so we can give it to other folks. New IRS disclosures from the 2014 tax year reveal the specifics of how the so-called "Affordable Care Act" helped to facilitate Obama's desire to, as he famously told "Joe the Plumber", "spread the wealth around." To be precise, in 2014, Obamacare spread $11.2BN of wealth around, in the form of healthcare premium tax credits, with nearly 80% going to taxpayers reporting less that $35,000 of adjusted gross income. Moreover, the average tax filer received $3,600 of healthcare premium support with those in the lowest tax bracket receiving over $5,500 per person. Equally disturbing is this fact: 8.1mm tax filers, those who elected to forgo health insurance, were hit with $1.7BN in Obamacare penalties...call it the "young and healthy tax". Ironically, 40% of the penalties fell upon people making less than $35,000 per year...the very same people that Obama apparently intended to "help". Here's how the subsidies and penalties broke down by tax bracket (the original IRS table can be found here): Of course, the real tragedy of Obamacare is that even if those 8.1mm young and healthy people wanted to buy health insurance, many of them have now likely been priced out of the market as premiums have soared and coverage "options" have vanished as insurers have pulled out of exchanges all over the country (something we discussed at length in a post entitled "Obamacare On "Verge Of Collapse" As Premiums Set To Soar Again In 2017"). In essence, while the bill has seemingly "helped" the 3.1mm people receiving subsidies in the chart above it has trapped the 8.1mm young and health people with a permanent tax increase as they are now even less likely to buy health insurance after Obamacare has driven up the rates astronomically. But, of course, the Obamacare penalties will only get even worse from here. According to The Washington Free Beacon, in 2014, uninsured individuals were required to pay the greater of either a flat penalty of $95 for each uninsured adult or 1% of their household’s adjusted gross income. That said, the penalties are set to increase in 2016 to the greater of a flat fee of $695 or 2.5% of AGI. According to the Congressional Budget Office, taxpayers are expected to pay penalties of $4BN in 2016 and $5BN annually from 2017 through 2024. Senator Tom Cotton (R-Arkansas), also pointed out the irony in the fact that Obamacare is now penalizing many taxpayers who can no longer afford healthcare simply because Obamacare itself has driven up premiums to such an extent they've been rendered completely unaffordable. “It’s not surprising that the Obamacare mandate numbers are worse than the administration first claimed,” said Sen. Tom Cotton (R., Ark.). “Obamacare penalizes taxpayers who can no longer afford insurance that Obamacare made unaffordable...” ...“As Obamacare continues to unravel, things will only get worse,” Cotton said. “The legacy of Obamacare is skyrocketing premiums, unaffordable deductibles, the destruction of the individual insurance market, and tax penalties on Obamacare’s victims.” 5 comments: There's another tax consequence of Obamacare. For people who PAY taxes AND have high medical expenses, they raised the threshold one has to meet before they can deduct those expenses. It used to be you could deduct everything over 7% of income. 
They raised it to a 10% hurdle. In short, they penalize those with the highest health care burden. And---predictably---no one in the media or anywhere else noticed. Except me....because I have those high expenses.
{ "pile_set_name": "Pile-CC" }
Subject: [RFC 0/3] Unify CPU topology across ARM64 & RISC-V
From: Atish Patra
Date: 2018-11-09 01:50 UTC
To: linux-kernel, linux-riscv
Cc: mark.rutland, devicetree, Damien.LeMoal, juri.lelli, anup, palmer, jeremy.linton, robh+dt, sudeep.holla, mick, linux-arm-kernel

The cpu-map DT entry in ARM64 can describe the CPU topology in a much better way compared to other existing approaches. RISC-V can easily adopt this binding to represent its own CPU topology. Thus, both the cpu-map DT binding and the topology parsing code can be moved to a common location so that RISC-V or any other architecture can leverage that.

The relevant discussion regarding unifying cpu topology can be found in [1].

arch_topology seems to be a perfect place to move the common code. I have not introduced any functional changes in the moved code. The only downside in this approach is that the capacity code will be executed for RISC-V as well. But it will exit immediately after failing to find the appropriate DT node. If the overhead is considered too much, we can always compile out the capacity-related functions under a different config for the architectures that do not support them.

The patches have been tested for RISC-V and compile tested for ARM64.

The socket changes [2] can be merged on top of this series or vice versa.

[1] https://lkml.org/lkml/2018/11/6/19
[2] https://lkml.org/lkml/2018/11/7/918

Atish Patra (3):
  dt-binding: cpu-topology: Move cpu-map to a common binding.
  cpu-topology: Move cpu topology code to common code.
  RISC-V: Parse cpu topology during boot.

 Documentation/devicetree/bindings/arm/topology.txt | 475 -------------------
 .../devicetree/bindings/cpu/cpu-topology.txt       | 526 +++++++++++++++++++++
 arch/arm64/include/asm/topology.h                  |  23 +-
 arch/arm64/kernel/topology.c                       | 305 +----------
 arch/riscv/Kconfig                                 |   1 +
 arch/riscv/kernel/smpboot.c                        |   6 +-
 drivers/base/arch_topology.c                       | 303 ++++++++++++
 include/linux/arch_topology.h                      |  23 +
 include/linux/topology.h                           |   1 +
 9 files changed, 864 insertions(+), 799 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/arm/topology.txt
 create mode 100644 Documentation/devicetree/bindings/cpu/cpu-topology.txt

--
2.7.4

Subject: Re: [RFC 0/3] Unify CPU topology across ARM64 & RISC-V
From: Jeffrey Hugo
Date: 2018-11-15 18:31 UTC

On 11/8/2018 6:50 PM, Atish Patra wrote:
> The cpu-map DT entry in ARM64 can describe the CPU topology in a much
> better way compared to other existing approaches. RISC-V can easily
> adopt this binding to represent its own CPU topology.
[...]

I was interested in testing these on QDF2400, an ARM64 platform, since this series touches core ARM64 code and I'd hate to see a regression. However, I can't figure out what baseline to use to apply these. Different patches cause different conflicts on a variety of baselines I attempted.

What are these intended to apply to?

Also, you might want to run them through checkpatch next time. There are several whitespace errors.

--
Jeffrey Hugo
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.

Subject: Re: [RFC 0/3] Unify CPU topology across ARM64 & RISC-V
From: Atish Patra
Date: 2018-11-19 17:46 UTC

On 11/15/18 10:31 AM, Jeffrey Hugo wrote:
[...]
> What are these intended to apply to?

I had rebased them on top of 4.20-rc1.

> Also, you might want to run them through checkpatch next time. There
> are several whitespace errors.

Sorry, I missed a couple of them. Thanks for trying to test the patches. I will send a next version as Rob suggested. Please test that.

Regards,
Atish

Subject: Re: [RFC 0/3] Unify CPU topology across ARM64 & RISC-V
From: Sudeep Holla
Date: 2018-11-20 11:11 UTC

On Thu, Nov 15, 2018 at 11:31:33AM -0700, Jeffrey Hugo wrote:
[...]
> I was interested in testing these on QDF2400, an ARM64 platform, since
> this series touches core ARM64 code and I'd hate to see a regression.
> However, I can't figure out what baseline to use to apply these.
> Different patches cause different conflicts on a variety of baselines I
> attempted.

Good to know that we can test the DT configuration on QDF2400. I always assumed it was ACPI only.

> What are these intended to apply to?

The series alone may not get the package/socket ids correct on QDF2400. I have not yet added support for that, as I wanted to get initial feedback on the DT bindings. The movement of the DT binding and the corresponding code should not regress, and you should be able to validate only that part.

--
Regards,
Sudeep

Subject: Re: [RFC 0/3] Unify CPU topology across ARM64 & RISC-V
From: Jeffrey Hugo
Date: 2018-11-20 15:28 UTC

On 11/20/2018 4:11 AM, Sudeep Holla wrote:
> Good to know that we can test the DT configuration on QDF2400. I always
> assumed it was ACPI only.

It is ACPI only in the production configuration. I suppose we could hack things up to do basic DT sanity, but I expect it would be nasty and non-trivial.

> The series alone may not get the package/socket ids correct on QDF2400.
> I have not yet added support for that, as I wanted to get initial
> feedback on the DT bindings. The movement of the DT binding and the
> corresponding code should not regress, and you should be able to
> validate only that part.

On a cursory glance, it looks like some of the reorganized code would also be used in the ACPI path (things that are common between DT and ACPI). I do not expect problems, but I still feel it's prudent to do a sanity check on actual hardware.

--
Jeffrey Hugo
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
{ "pile_set_name": "Pile-CC" }
Abstract

One of the core symptoms of autism spectrum disorders (ASD) is the need for consistency, repetition, rituals, and rigid patterns of play. The need for sameness may be extended to include videogame play and interaction with others. Unfamiliar social interactions and disruption of repetitive patterns of interests and behavior may easily exacerbate anxiety-related stress in individuals with ASD, whereas videogame play can induce distraction from the stress response. We examined the effects of videogame play on plasma cortisol in a social situation in which participants were asked questions about daily events. We conducted a structured interview consisting of declarative memory recall of daily events during videogame play. Before the start of the interviews, each participant played a videogame as the default context. Two serial contexts followed in which participants were exposed to different social stimuli. Two types of stimulators, an unfamiliar female and an unfamiliar male, asked the participants questions about unpleasant daily events. Immediately after these interviews, the participants were permitted to resume videogame play. Blood specimens for plasma cortisol determination were collected twice: once 28 days before and again 5 minutes after the interviews. There were no significant differences in plasma cortisol levels between before and after the interview questions in the 10 children with ASD or in the 7 normal healthy controls. Disruption of videogame play, questions asked by unfamiliar adults, and memory retrieval of unpleasant daily life events could be expected to increase plasma cortisol levels. However, the 10 children with ASD showed no significant increase in the plasma cortisol response. Considering that videogame play has been found to induce distraction and to decrease or abolish the stress response, videogame play can distract the cortisol response to stressors.

1. Introduction

Deficits in social relatedness and communication, and circumscribed interests and behaviors, are characteristic of autism spectrum disorders (ASD) [1]. In particular, an encompassing preoccupation with one or more stereotyped and restricted patterns of interest is a characteristic manifestation in individuals with ASD [1]. It is well known that the core features of ASD induce the need for consistency, repetition, rituals, predictability and rigid patterns of play and problem solving [2]. The constantly changing nature of unfamiliar social interactions and social situations that require spontaneous intuitive adjustment may exacerbate socially related anxiety for these children with ASD [2]. Social unfamiliarity may therefore easily induce anxiety and stress in individuals with ASD [2]. There is accumulating evidence indicating that individuals with ASD are hypersensitive to psychosocial stress. For the extent to which a situation is perceived as stressful, one useful biological marker is salivary [2, 3] or plasma [4, 5] cortisol. In order to avoid the stress associated with collection of blood samples, many previous studies measured cortisol in saliva. Twenty children with ASD aged 3 to 10 years showed a significantly higher salivary cortisol response to a blood draw stressor compared to 28 age-matched children without ASD [6]. Psychosocial stressors such as a public speaking task [7], and the close relationship between increased sensory sensitivity and variable cortisol secretion (Corbett et al., 2009) [8], have also been examined in children with ASD.
Moreover, in many previous studies examining the effect of self-reported social anxiety/stress in relation to peer interaction, salivary cortisol levels were used as a biological marker of stress [2, 8, 9]. For example, salivary cortisol levels were significantly increased in interaction with unfamiliar peers in 33 children with ASD compared to interaction with familiar peers [2]. The peer interaction paradigm resulted in significantly higher levels of salivary cortisol in 21 children with ASD aged 8 to 12 years compared to normal controls, suggesting that the ASD children easily activate hypothalamic-pituitary-adrenal (HPA) responses in social situations [9]. Forty-five prepubescent male children with ASD aged 8 to 12 years maintained an elevated cortisol level in response to a standardized social-evaluative performance task compared to children with typical development [10]. The children who were exposed to the stressful condition showed pronounced increases of salivary cortisol [11]. Meanwhile, ASD patients (mean age 21.8±2.0 years) showed a dissociation between heart rate and cortisol responses due to a physiological dysfunction in ASD [7]. Extrapolating from these findings, children with ASD may be easily stressed, and may show increased salivary cortisol levels in response to human interaction.

It is well known that salivary cortisol and total plasma cortisol show a strong concordance of results [12]. Moreover, saliva is generally sampled by swabbing the mouth with a soft cotton roll, which retains 135-450 μl of saliva after centrifugation [13]. These volumes can make the crucial difference between being able to measure a reliable cortisol concentration and having to throw away a valuable sample because of a lack of sufficient material [13]. We thus used plasma cortisol levels as a biomarker of the stress response.

The majority of youths with ASD (64.2% of the 860 youth with ASD) aged 13 to 17 years spent most of their free time using non-social media (television, video games), while only 13.2% spent time on social media (email, internet chatting) [14]. Compared with other disability groups (speech/language impairments, learning disabilities, intellectual disabilities), rates of non-social media use were higher among the ASD group, and rates of social media use were lower [14]. Moreover, 202 children with ASD aged 8 to 18 years spent approximately 62.0% more time watching television and playing video games than in all non-screen activities combined. Compared with typically developing siblings, children with ASD spent more hours per day playing videogames (2.4 vs. 1.6 for boys, and 1.8 vs. 0.8 for girls) [14]. Some effects of video games are harmful (such as the effects of violent videogames on aggression and the effect of screen time on poorer school performance), whereas others are beneficial (e.g., the effects of action games on visual-spatial skills) [15]. Videogame effects are complex and would be better understood as multiple dimensions rather than a simplistic "good-bad" dichotomy [15]. Videogame play did not up-regulate serum [16] or salivary [17] cortisol levels. Moreover, casual videogame play decreased physiological stress responses [18]. Videogame play may thus not increase plasma or salivary cortisol levels, and may even decrease physiological stress responses. As described above, one of the core symptoms of ASD is the need for consistency and repetitive patterns of play.
This need for sameness may be extended to include videogame play and interaction with others. Social interactions with unfamiliar adults, and disruption of repetitive patterns of interests and behavior, may easily exacerbate anxiety-related stress in individuals with ASD, whereas videogame play can induce distraction from such stress responses. It is possible that the plasma cortisol response to different types of psychosocial stress, such as interruption of videogame play and being asked questions, may be attenuated by videogame play in individuals with ASD. The purpose of this study was to untangle this possibility.

2. Materials and Methods

2.1. Participants

This study included 10 individuals with ASD (8 male and 2 female) aged 6-13 years (mean ± SD, 10.90 ± 4.04 years), and 7 normal healthy controls (3 male and 4 female) aged 6-19 years (mean ± SD, 11.71 ± 4.11 years). The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) [20] was employed for diagnoses. For diagnoses of ASD, a semi-structured clinical interview based on the Autism Diagnostic Interview-Revised (ADI-R) [20] was also used. The agreement of two independent, experienced psychiatrists was required for a diagnosis. At screening, physical examinations (resting blood pressure and heart rate) and clinical laboratory examinations (hematology, plasma chemistry including plasma cholesterol and triglyceride) of the individuals were performed. Among the 10 children with ASD, 6 were recruited through a local advertisement of the Research Institute of Ashiya University between February 2009 and October 2012. The other 4 children with ASD were recruited through a local advertisement of the Research Institute of Ashiya University in November 2011.

2.2. Study Procedures

Figure 1. The experimental time sequence used to assess the stress response of individuals with ASD and normal controls.

We conducted a structured interview consisting of declarative memory recall during videogame play for 40 minutes. Before the start of the interviews, each participant played a TV-based videogame as the default context. Two serial contexts followed in which participants were exposed to different social and emotional stimuli for 7 minutes each (a total of 14 minutes), during which time all participants ceased videogame play at the oral request of two types of stimulators. The two stimulators, an unfamiliar female and an unfamiliar male, asked the participants questions about unpleasant daily events. For example, "What was your most pleasant experience recently?" The unpleasant questions asked were, "What was your most unpleasant experience recently?" and "Did you have a difficult time at school today?" Immediately after these interviews, the participants were permitted to resume the role-play videogame (see Figure 1).

2.3. Measures

Clinical outcome evaluations were carried out at baseline and 4, 8, 12 and 16 weeks after the intervention, using the Social Responsiveness Scale (SRS) [22] and the ABC [22]. The SRS and ABC subscales were completed by the parent.

2.3.1. Social Responsiveness Scale (SRS)

The SRS is a 65-item quantitative measure of autistic social impairment completed by an informant who had regularly observed the subject in naturalistic social contexts over a period of at least 2 months. SRS scores are unrelated to age in the range from 7 years to 18 years and do not vary as a function of race, ethnicity, or the rater's level of education [23].
The ABC, which was originally developed to measure problem behaviors in developmentally disabled populations, is a good measure of the problem behaviors associated with ASD and mental retardation, and has emerged as one of several important end points for assessing treatment effects in psychopharmacologic and behavioral intervention trials in children and adolescents with ASD, both in those with intellectual disability [22] and in those with normal IQ levels [24]. IQ scores were therefore needed in this study. The ABC includes five subscales (irritability, social withdrawal, stereotypy, hyperactivity and inappropriate speech) with a clearly established and validated factorial structure [22].

2.4. Cortisol Sampling and Assay Procedures

Following an overnight fast, blood sampling for plasma cortisol determination was conducted twice: once 28 days before and again 5 minutes after the interviews. All blood samples were obtained between 13:00 and 15:00, with the participant seated after at least 15 minutes of rest. Plasma was obtained within 5 minutes of collection by centrifuging whole blood and was stored at -70°C until further analysis. Analysis was carried out by SRL, Inc., Tokyo, Japan. Plasma cortisol levels were measured by radioimmunoassay (Amersham Pharmacia Biotech, TFB Co., Tokyo, Japan). The intra- and inter-assay coefficients of variation were 7% or less and 9% or less, respectively (SRL, Inc., Tokyo, Japan).

3. Results

3.1. Descriptive Characteristics

There was no significant difference in age between the 10 individuals with ASD and the 7 normal healthy controls (U=30.50, p=0.67). The awareness, cognition, communication, motivation, and mannerisms subscale scores of the SRS in the 10 children with ASD were significantly higher than those of the 7 normal controls (p<0.05). None of the 17 individuals had abnormalities on physical or laboratory examination. The ABC subscale scores of irritability, hyperactivity and inappropriate speech were also significantly higher than those of the 7 normal controls (p<0.05). The total scores of the SRS and ABC were 61.10±32.82 and 24.70±19.97, respectively. Earlier studies have reported total SRS and ABC scores of 101.7±22.1 [25] and 85.6±27.3 [26], respectively, for children and adolescents with ASD. Our 10 children with ASD were thus considered to have mild conditions. However, they had delayed social skills and/or extreme difficulties with organizational skills, and they displayed restricted and repetitive patterns of behavior, interests, and activities. As a result, they found public school or work settings more difficult than typically developing peers did (Table 1).

Table 1. Demographics of subjects

There were no significant differences in plasma cortisol levels between the 10 children with ASD and the 7 normal controls either 28 days before or 5 minutes after the interviews. There was also no significant difference in plasma cortisol levels between 28 days before and 5 minutes after the interviews, either in the 10 children with ASD or in the 7 normal controls (Table 2).

Table 2. Plasma levels of cortisol (ng/ml)
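The nonparametric between-group comparison reported above (U = 30.50, p = 0.67 for age) is consistent with a Mann-Whitney U test. The following minimal sketch (Python with SciPy, not part of the original analysis) shows how such a comparison can be computed; the age values are made-up illustrative numbers, not the study data in Table 1.

    from scipy.stats import mannwhitneyu

    # Hypothetical ages in years: n = 10 for the ASD group and n = 7 for the
    # control group. Illustrative values only, not the data reported in Table 1.
    asd_ages = [6, 7, 9, 10, 11, 12, 12, 13, 13, 13]
    control_ages = [6, 8, 10, 12, 13, 15, 19]

    # Two-sided Mann-Whitney U test comparing the two independent samples.
    u_stat, p_value = mannwhitneyu(asd_ages, control_ages, alternative="two-sided")
    print("U = %.2f, p = %.2f" % (u_stat, p_value))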
4. Discussion

The constantly changing nature of unfamiliar social interactions and social situations may exacerbate socially related stress in individuals with ASD [2]. The need for sameness may extend further to include patterns of familiarity and interactions with others [2]. Taking these considerations together, the 10 individuals with ASD in this study might be expected to be easily stressed by the interruption of videogame play and by communication with novel adults. Salivary cortisol levels have often been used as a biomarker of the psychosocial stress response [2, 3]. Saliva assays, however, have the drawback that sample material may be insufficient [13], while salivary cortisol and total plasma cortisol show a strong concordance of results [12]. Drawing these considerations together, we examined plasma cortisol levels as a biomarker of the stress response to the interview questions.

According to previous studies on elevated salivary cortisol responses to psychosocial stress, twenty children with ASD aged 3 to 10 years showed a significantly higher salivary cortisol response to a blood draw stressor compared with 28 age-matched children without ASD, suggesting increased reactivity of the hypothalamic-pituitary-adrenal (HPA) axis to novel stimuli in children with ASD [6]. In another study, 10 children with ASD (mean age, 9.4 ± 1.4 years) showed a significantly elevated salivary cortisol response to psychosocial stress consisting of a public speaking task compared with 12 normal healthy controls (mean age, 9.4 ± 1.5 years), indicating disturbed stress reactions in the children with ASD [7]. Significant correlations were found between salivary cortisol levels and self-reported distress ratings with both familiar and unfamiliar peers, indicating a more complex relationship involving, for example, the role of anticipation in the stress response or acute coping differences among the children in each of the counterbalanced conditions [2]. A peer interaction paradigm resulted in significantly higher levels of salivary cortisol in 21 children with ASD aged 8 to 12 years compared with 24 age-matched normal controls, suggesting activation of HPA responses in social situations [9]. An additional study by the same group revealed that an ASD group consisting of 27 male individuals aged 8 to 12 years maintained an elevated cortisol level compared with 32 typically developing children in response to a standardized social-evaluative performance task [10]. A recent study measured cortisol levels in children with and without autism (1) at rest, (2) in a novel environment, and (3) in response to a blood draw stressor; a significantly higher serum cortisol response was found in the group of children with ASD [6]. These findings suggest increased reactivity of the hypothalamic-pituitary axis to novel stimuli in children with ASD [6]. According to a previous study comparing cortisol, stress and sensory sensitivity in children with ASD aged 6 to 12 years (mean age, 9.08 years), increased sensory sensitivity was related to variable cortisol secretion, which may in turn be related to developmental factors [8]. Extrapolating from these findings, children and also adults with ASD may be easily stressed by human interaction, and individuals with ASD appear hypersensitive to human interaction as measured by increases in salivary cortisol levels. Although salivary cortisol levels are usually used as a biomarker of the stress response in children with ASD in order to avoid the stress associated with the collection of blood samples, many studies use the plasma cortisol response as a biomarker of psychosocial stress. For example, public speaking induced significantly increased plasma cortisol levels before and up to 90 minutes after the stressor in 106 healthy adolescents aged 18 to 19 years [4].
Public speaking also evoked significantly increased plasma cortisol levels in 79 healthy adolescents and young adults aged 18 to 27 years compared with a non-demanding task in 30 age-matched subjects [27]. Drawing these strands together, the 10 individuals with ASD could have been expected to show an increased cortisol response.

In this study, two stimulators, an unfamiliar female and an unfamiliar male, asked the questions at the time the participants were asked to cease videogame play. The structured interviews encompassed memory retrieval of the most unpleasant daily events in the participants' personal relationships with their peers or teachers at school. This setting therefore included three kinds of stress: interruption of game play, interaction with novel adults, and retrieval of distressing memories. Considering that insistence on sameness and a limited understanding of social situations are among the most consistent characteristics of ASD, personal interaction with unfamiliar adults might have been expected to induce a stress response in the 10 individuals with ASD. With respect to the effects of memory retrieval on the cortisol response, an inverted U-shaped dose-response relationship between salivary cortisol levels and recall performance has been observed, with moderate elevation of salivary cortisol resulting in the best recall performance [28]. Moreover, memory retrieval testing after a learning session was strongly associated with urinary cortisol secretion [29] or the salivary cortisol response [30]. Therefore, memory retrieval of unpleasant daily events might also have been expected to induce an increased plasma cortisol response. Collectively, the disruption of game play, the questions asked by the unfamiliar adults, and memory retrieval of life events or daily happenings might have induced an HPA-related stress response, increasing plasma cortisol levels. However, plasma cortisol levels in the 10 children with ASD were not significantly increased in the test setting described above.

Videogame play increased sympathetic tone, mental workload, and energy expenditure, but did not up-regulate serum cortisol levels [16]. Violent and non-violent TV videogames did not induce significant differences in salivary cortisol levels before and after gaming [17]. Interestingly, casual videogame play induced autonomic nervous system relaxation and decreased physiological stress responses, as reflected in electroencephalographic changes and heart rate variability [18]. A videogame also successfully distracted patients during a dental procedure that was accompanied by an increase in physiologic arousal [31]. Drawing these strands together, videogame play may distract from the plasma cortisol response to a stressful setting.

The psychoendocrinological factors associated with the flat cortisol response in the 10 children with ASD in this study are not clearly understood. Three factors may be considered. First, the psychosocial stress induced by the potentially stressful situation described above (the disruption of videogame play, questions requiring memory retrieval of unpleasant daily events, and communication with novel persons) may have been attenuated by videogame play.
Second, considering the previous finding that younger children with ASD show an enhanced willingness to approach others, seem to do so without apparent stress, and show a lower cortisol response in a setting that provides behavioral structure to free play by permitting key interactive sequences [32], the 10 individuals with ASD in this study may have been willing to approach the unfamiliar adults and thus showed no change in plasma cortisol response. Third, our interview questions may simply have had little impact on the plasma cortisol response.

The present findings might provide useful information on the distracting effect of videogame play on physiological or psychological stress during social interaction in ASD. However, this study lacks data on the plasma cortisol response to the same interviews conducted without videogame play. The distracting effect of videogame play on stress reduction therefore needs to be examined in further studies. In conclusion, the plasma cortisol response to stressful situations, such as the disruption of videogame play, questions requiring memory retrieval of unpleasant daily events, and communication with novel persons, may be attenuated by videogame play.

Authors' Contributions

Kunio Yui wrote the article. Masako Ohnishi contributed to the collection of clinical characteristics from the subjects with autism spectrum disorders. All authors read and approved the final manuscript.
{ "pile_set_name": "Pile-CC" }
516 F.3d 1189 (2008) Jack E. BRADFORD and Colleen Bradford, Plaintiffs-Appellants, v. Kent WIGGINS and Scott R. Womack, Defendants-Appellees. No. 06-4287. United States Court of Appeals, Tenth Circuit. February 20, 2008. *1190 *1191 D. Bruce Oliver, D. Bruce Oliver, L.L.C., Salt Lake City, UT, for Plaintiffs-Appellants. Linette B. Hutton, Winder & Haslam, P.C., Salt Lake City, UT, for Defendants-Appellees. Before HENRY, Chief Judge, SEYMOUR, and GORSUCH, Circuit Judges. PER CURIAM. Jack E. and Colleen Bradford, faced with the charge of rioting, pleaded nolo contendere in abeyance to the lesser charge of disorderly conduct under Utah Code Ann. 1953 § 76-9-102. They then brought suit under 42 U.S.C. § 1983, alleging that Deputies Kent Wiggins and Scott R. Womack unlawfully seized them and caused their unlawful arrest, false imprisonment, and prosecution in violation of the Fourth, Fifth, Sixth, and Fourteenth Amendments. They also sought relief pursuant to 42 U.S.C. §§ 1981 and 1981a, along with conspiracy claims pursuant to § 1985(2), and pendent state tort claims. The District Court granted the deputies' motion for summary judgment, finding that the Bradfords' claims are barred by judicial estoppel and qualified immunity. We exercise jurisdiction under 28 U.S.C. § 1291, and affirm. I. BACKGROUND A. Factual Background: The Confrontation Between the Bradfords and the Deputies On August 16, 2003, Deputy Wiggins observed Debra Bradford, the Bradfords' daughter-in-law, allegedly speeding. Debra refused to stop, despite Deputy Wiggins's lights and siren, and finally pulled into the driveway of the home she shared with her spouse, Michael Bradford (Jack and Colleen Bradford's son). Debra refused to give Deputy Wiggins her driver's license or registration or get out of the car, screaming for Michael, who was inside. Michael, who has a history of weapons and assault offenses, a fact with which local police, including Deputy Wiggins, were familiar, emerged from the house screaming profanities. Deputy Wiggins instructed Michael to return inside, and proceeded to *1192 call for back-up. After Michael returned inside the home, he called his mother, Colleen Bradford, and asked her to come witness the events. Deputy Wiggins's first back-up — Deputy Womack and another officer — arrived and assisted Deputy Wiggins in getting Debra out of the car. (DVD, Title 2, dash cam time 20:07:00.[1]) Shortly thereafter, Colleen and Jack Bradford arrived on the scene. By that time, several other armed officers and police vehicles had positioned themselves outside the home. The videos, officer incident reports, and the Bradfords' plea hearing testimony show that the officers repeatedly ordered the Bradfords to leave. The video also shows the Bradfords animatedly waving their arms as they spoke to the officers about drawing Michael out of the residence. Michael eventually emerged from the home, approaching the officers with his hands in the air, saying, "Shoot me." Aplts' App. at 195 (Plea Hearing, dated Feb. 11, 2004). As Michael approached the police with hands still in the air, an officer then aggressively ran from the back and side of Michael, tackled him, and hand-cuffed him (see id. at 196; DVD, Title 1, 20:21:00). The DVD is not clear, and the parties contest exactly what occurred during Michael's take-down and arrest. The Bradfords allege that while they were "stunned by the attack [on Michael]" they stepped aside to get a view of Michael and the officer on the ground. Aplt's Br. at 20.
"A second later" they allege Deputy Wiggins started pushing them back from the scene, yelling, "Back off, back off, now! You both want to go to jail! . . . Back off!" Id. The Bradfords claim that Deputy Womack helped Deputy Wiggins in restraining them, pushing Colleen to the ground, while Deputy Wiggins body-slammed Jack. They maintain that nothing in the video suggests they "were even remotely tumultuous or violent towards anyone." Id. Deputies Wiggins and Womack offer a very different version of events. They allege that when Michael was tackled, the Bradfords tried to push their way past the officers. Aples' Br. at 7. Deputy Womack claims that he extended his arm to prevent Ms. Bradford from getting any closer. They further state that they put Mr. Bradford in a wrist lock and took Ms. Bradford by the elbow and started pulling them away from Michael and the officers arresting him. Ms. Bradford, they claim, resisted and tripped, then fell to her backside on the ground, where another police officer placed her right arm in a twist lock and escorted her to the car. Id. at 7-8. Michael was then placed in the police car for transport, and everyone left the scene. The dashboard camera videos from Deputy Wiggins's and Deputy Womack's cars are hard to see and have intermittent sound. However, the tapes appear to show Michael calmly coming out of the house with his hands in the air, a police officer tackling him to the ground from behind, and the Bradfords running towards their son and being pushed back, out of frame, by the police. As the district court noted, for the purposes of summary judgment, we review the evidence in the light most favorable to the Bradfords. Simpson v. Univ. of Colo., 500 F.3d 1170, 1179 (10th Cir.2007); Aplts' App. at 402 (Dist. Ct. Order at 2, dated June 16, 2006). B. Procedural Background: The Bradfords' § 1983 action The Box Elder County prosecutor filed an information charging the Bradfords with rioting, a third degree felony, in violation *1193 of Utah Code Ann. § 76-9-101. Following their arraignment, on February 11, 2004, the Bradfords entered no contest pleas in abeyance to disorderly conduct, in violation of Utah Code Ann. § 76-9-102. The plea agreement provided that following successful completion of twelve months' probation and payment of a fine, the charges would be dismissed. At the plea hearing, the court inquired as to what the Bradfords had done wrong. They both admitted — albeit less than enthusiastically — that they disobeyed officers' commands to leave the area. Mr. Bradford, when asked by the judge what he had done wrong, answered, "I thought we should have left when he asked me, but I did call [Michael] out." Aplt.App. at 196-97. Ms. Bradford stated, "[The police] wanted us to get back in our car and leave. Well I'm sorry, that's my son. I'm not going to leave." Aplt.App. at 197. Thus, both admitted that the police indicated they should have left the scene. In March 2005, following completion of the terms of the plea agreement, the Bradfords filed this § 1983 action. The Bradfords now contend that the deputies have violated and conspired to violate the Fourth, Fifth, Sixth, and Fourteenth Amendments when the deputies (1) made contact with them; (2) seized them; (3) detained them without reasonable suspicion; (4) caused their arrest/booking without probable cause; and (5) caused their prosecution without probable cause.
On June 23, 2006, the district court granted the deputies' motion for summary judgment, holding the Bradfords' false arrest and baseless prosecution claims barred by judicial estoppel, and their unlawful seizure, detention, and contact claims barred by qualified immunity. The court reasoned that applying judicial estoppel is necessary here to protect the integrity of the courts under Johnson v. Lindon City Corp., 405 F.3d 1065 (10th Cir.2005), because (1) the Bradfords' false arrest and baseless prosecution claims are "clearly inconsistent" with testimony at their plea hearing; (2) the Utah court accepted the Bradfords' plea, so judicial acceptance of their § 1983 claims would "create the perception that either the first or the second court was misled"; (3) the Bradfords "would derive an unfair advantage if not estopped." Aplt's App. at 414-19 (Dist. Ct. Order at 14-19). See Johnson, 405 F.3d at 1069 (citing New Hampshire v. Maine, 532 U.S. 742, 750-51, 121 S.Ct. 1808, 149 L.Ed.2d 968 (2001)). As to qualified immunity, the district court held that the Bradfords had not met their burden of showing that the deputies violated their constitutional rights. Aplt's App. at 412 (Dist. Ct. Order at 12). Further, the court held that the Bradfords failed to plead sufficient facts to support a Fifth or Sixth Amendment violation and thus dismissed those claims outright; the Bradfords have not challenged this decision. II. DISCUSSION On appeal, the Bradfords argue only two issues. They argue first, that judicial estoppel does not bar their false arrest and baseless prosecution claims, because they have consistently claimed innocence, and second, that the deputies are not entitled to qualified immunity as they clearly violated the Bradfords' Fourth Amendment rights. We review a judicial estoppel decision for abuse of discretion.[2]Eastman v. *1194 Union Pac. R.R. Co., 493 F.3d 1151, 1156 (10th Cir.2007). "A court abuses its discretion only when it makes a clear error of judgment, exceeds the bounds of permissible choice, or when its decision is arbitrary, capricious or whimsical, or results in a manifestly unreasonable judgment" Id. We review a district court's grant of summary judgment based on qualified immunity de novo, in the light most favorable to the nonmoving party. Ward v. Anderson, 494 F.3d 929, 934 (10th Cir. 2007). Summary judgment is appropriate if there is no genuine issue as to any material fact and the moving party is entitled to judgment as a matter of law. Id. A. Judicial Estoppel of the False Arrest and Baseless Prosecution Claims As noted, the district court held that judicial estoppel barred the Bradfords' claims of false arrest and baseless prosecution. Until the Supreme Court first held, in New Hampshire v. Maine, 532 U.S. at 749, 121 S.Ct. 1808, that the doctrine is applicable in federal court, the Tenth Circuit had historically rejected judicial estoppel. The case on which the district court relied — Johnson v. Lindon City Corp., 405 F.3d 1065 — constitutes our first application of the doctrine following the Supreme Court's decision. The facts in Johnson are similar to those in the present case: Two plaintiffs entered pleas in abeyance and, in the course of pleading, admitted to certain facts that they later denied in a § 1983 claim. We held that the plaintiffs were judicially estopped from pursuing their § 1983 case against their arresting officers. 
The doctrine of judicial estoppel is based upon protecting the integrity of the judicial system by "prohibiting parties from deliberately changing positions according to the exigencies of the moment." New Hampshire, 532 U.S. at 749-50, 121 S.Ct. 1808. Though there is no precise formula, in order to determine whether to apply judicial estoppel, courts typically inquire as to whether: 1) a party's later position is clearly inconsistent with its earlier position; 2) a party has persuaded a court to accept that party's earlier position, so that judicial acceptance of an inconsistent position in a later proceeding would create "the perception that either the first or second court was misled"; and 3) the party seeking to assert the inconsistent position would derive an unfair advantage if not estopped. Johnson, 405 F.3d at 1069 (citing New Hampshire, 532 U.S. at 750, 121 S.Ct. 1808). "Because of the harsh results attendant with precluding a party from asserting a position that would normally be available to the party, judicial estoppel must be applied with caution." Lowery v. Stovall, 92 F.3d 219, 224 (4th Cir.1996).[3] *1195 The first inquiry the court must answer is whether the Bradfords' § 1983 claims are clearly inconsistent with an earlier proceeding — in this case, the hearing at which they pleaded nolo contendere in abeyance. The first reason for the estoppel in Johnson was that the plea hearing admissions were clearly inconsistent with the § 1983 claims. So too in the Bradfords' case. At the plea hearing, during which they pleaded no contest to disorderly conduct, both Mr. and Mrs. Bradford admitted that the police asked them to leave, and they refused. Mr. Bradford said, "I thought we should have left when he asked me, but I did call [Michael] out." Aplt.App. at 196-97. Mrs. Bradford stated, "They wanted us to get back in our car and leave. Well I'm sorry, that's my son. I'm not going to leave." Aplt.App. at 197. In contrast, in the district court they made no such concessions. In fact, in their § 1983 complaint, the Bradfords claimed that no probable cause existed to arrest them. Aplt's App. at 19 (Complaint, at ¶ 41). Further, and importantly, the Bradfords now explicitly maintain that they "parked and left when ordered to." Aplt's Br. at 41. These claims are clearly inconsistent with their admission at the plea hearing that they refused to leave when ordered to. Second, a court must determine whether the party has persuaded a court to accept its earlier position so that judicial acceptance of the inconsistent position would create the perception that either the first or the second court was misled. Johnson, 405 F.3d at 1069. The Utah court accepted the Bradfords' plea after specifically inquiring into whether they had refused the deputies' requests to leave. Therefore, acceptance by this court of the inconsistent position the Bradfords now maintain would create the perception that one court or the other was misled. Finally, we must determine whether the Bradfords would derive an unfair advantage over the deputies if not estopped. Id. We held in Johnson that by entering pleas in abeyance, the plaintiffs received a substantial benefit. Id. at 1070. In exchange for entering pleas in abeyance, the State agreed to substitute disorderly conduct charges for rioting, a third degree felony, and then to dismiss even the disorderly conduct charges as long as the Bradfords successfully completed twelve months' probation and paid a fine.
In Johnson, we held that a party who accepts the benefit of such a plea and then makes inconsistent statements in a subsequent Section 1983 action would derive an unfair advantage if not estopped from pursuing these claims. Id. As the present case satisfies the three New Hampshire inquiries, the district court did not abuse its discretion in finding that the Bradfords, because of their plea and their plea hearing statements, are judicially estopped from pursuing their Section 1983 claims of false arrest and baseless prosecution in violation of the Fourth and Fourteenth Amendments. B. Qualified Immunity from the Seizure, Detention, and Contact Claims In granting summary judgment to the deputies, the district court held that the Bradfords' seizure, detention, and contact claims were barred by qualified immunity. In Saucier v. Katz, 533 U.S. 194, 201, 121 S.Ct. 2151, 150 L.Ed.2d 272 (2001), the Supreme Court set forth a definitive test for review of summary judgment motions raising that defense. Under Saucier, we must consider whether "[t]aken in the light most favorable to the party asserting the injury, . . . the facts alleged show the officer's conduct violated a constitutional right." Id. at 201, 121 S.Ct. 2151. If so, we must then determine *1196 whether the right was clearly established. Id. In order to answer the threshold question of Saucier, the court must decide whether, if the evidence is taken in the light most favorable to the party asserting the injury, the alleged facts show that the deputies violated the Bradfords' Fourth Amendment rights. The Fourth Amendment protects individuals from "unreasonable searches and seizures." U.S. Const. amend. IV. To establish a violation of the Fourth Amendment in a Section 1983 action, the claimant must demonstrate "both that a `seizure' occurred and that the seizure was `unreasonable.'" Childress v. City of Arapaho, 210 F.3d 1154, 1156 (10th Cir.2000) (citing Brower v. County of Inyo, 489 U.S. 593, 599, 109 S.Ct. 1378, 103 L.Ed.2d 628 (1989)). A Fourth Amendment seizure occurs when a police officer restrains the liberty of an individual through physical force or show of authority. Terry v. Ohio, 392 U.S. 1, 20 n. 16, 88 S.Ct. 1868, 20 L.Ed.2d 889 (1968). Assuming, without deciding, that the Bradfords were seized, to establish a Fourth Amendment violation, we must find that the seizure was unreasonable. Brower, 489 U.S. at 599, 109 S.Ct. 1378. In determining reasonableness, courts must look to "the balancing of competing interests." Holland ex rel. Overdorff v. Harrington, 268 F.3d 1179, 1185 (10th Cir.2001). The determination of reasonableness takes into account that officers are frequently forced to make split-second decisions under stressful and dangerous conditions. While there is no ready test, reasonableness is determined by balancing "the governmental interest which allegedly justifies official intrusion" against "the constitutionally protected interests of the private citizen." Terry, 392 U.S. at 20-21, 88 S.Ct. 1868. In this case, the governmental interest at stake was the successful arrest of Michael Bradford. When the Bradfords rushed toward their son upon his arrest, it was reasonable of the officers to make the split-second decision that the Bradfords' actions could possibly interfere with the arrest. Therefore the brief seizure of the Bradfords was reasonable.
While the Bradfords' concern for their son's wellbeing may be understandable — given how aggressively he was tackled — we hold that the deputies' actions were reasonable in light of the totality of the circumstances, and the circumstances were unquestionably escalated by Debra and Michael's behavior. Having concluded that any seizure that occurred was reasonable and therefore did not violate the Fourth Amendment, we need not address the second Saucier question to determine qualified immunity — whether the constitutional right was clearly established. See, e.g., Wilder v. Turner, 490 F.3d 810, 813 (10th Cir.2007) ("If the officer's conduct did not violate a constitutional right, the inquiry ends and the officer is, entitled to qualified immunity."). The answer to the threshold inquiry — that a constitutional right was not violated — is enough to conclude that the deputies are entitled to qualified immunity from the seizure, detention, and contact claims. III. CONCLUSION Accordingly, because this imposition of judicial estoppel was not an abuse of discretion, and since the seizure of the Bradfords was reasonable, we AFFIRM the district court's grant of summary judgment to Deputies Wiggins and Womack. HENRY, Chief Judge, concurring. I write separately to note that although we do not decide the issue in the main opinion, in my view, the Bradfords were seized, albeit reasonably. *1197 A Fourth Amendment seizure occurs when a police officer "restrains [one's] liberty." Terry v. Ohio, 392 U.S. 1, 19, 88 S.Ct. 1868, 20 L.Ed.2d 889 (1968). The deputies claim, and the district court agreed, that the test for whether an action constitutes a Fourth Amendment seizure is more specific than the simple "restraint of liberty" — rather, they claim, it is whether the plaintiffs felt "free to leave." Aple's Br. at 17-21; Aplt's App. at 409-11 (Dist. Ct. Order at 9-11). The district court held, "A person is seized within the meaning of the Fourth Amendment when a reasonable person would believe that he or she is not free to leave . . . [N]othing in the record indicates to this court that the Bradfords were not free to leave, the touchstone for a Fourth Amendment seizure." Aplt's App. at 409-10, 411 (Dist. Ct. Order at 9-10, 11). The court reasoned that not only were the Bradfords free to leave, they were reportedly ordered to do just that. However I do not agree that the inquiry is that simple. Seizure does not necessarily imply any physical restraint. See, e.g., United States v. Place, 462 U.S. 696, 712 n. 1, 103 S.Ct. 2637, 77 L.Ed.2d 110 (1983) (Brennan, J. concurring) (noting that although the seizure at issue in Terry was physical restraint, "the Court acknowledged . . . that `seizures' may occur irrespective of the imposition of actual physical restraint."). Under Terry, "Only when the officer, by means of physical force or show of authority, has in some way restrained the liberty of a citizen may we conclude that a `seizure' has occurred." 392 U.S. at 20 n. 16, 88 S.Ct. 1868 (emphasis added). Ordering the Bradfords to leave and then physically removing them from the scene no doubt restrains their liberty, if the one thing they want to do — and otherwise would have the liberty to do — is to remain on the premises. We touched on this issue in Roska ex rel. Roska v. Peterson, 328 F.3d 1230 (10th Cir.2003), addressing the narrow question before the panel, but the "free to leave" inquiry set forth in that case is not always the end of the matter. 
While it may have been the end of the matter as to the way Roska was argued, read any more broadly than that, the language would be at direct odds with this language from Bostick, which held that whether an individual is "free to leave" is not always dispositive: The state court erred, however, in focusing on whether Bostick was "free to leave" rather than on the principle that those words were intended to capture. When police attempt to question a person who is walking down the street or through an airport lobby, it makes sense to inquire whether a reasonable person would feel free to continue walking. But when the person is seated on a bus and has no desire to leave, the degree to which a reasonable person would feel that he or she could leave is not an accurate measure of the coercive effect of the encounter. . . . . . . . Bostick's freedom of movement was restricted by a factor independent of police conduct — i.e., by his being a passenger on a bus. Accordingly, the "free to leave" analysis on which Bostick relies is inapplicable. In such a situation, the appropriate inquiry is whether a reasonable person would feel free to decline the officers' requests or otherwise terminate the encounter. This formulation follows logically from prior cases and breaks no new ground. We have said before that the crucial test is whether, taking into account all of the circumstances surrounding the encounter, the police conduct would "have communicated to a reasonable person that he was not at liberty to ignore the police presence and go about his business." *1198 Florida v. Bostick, 501 U.S. 429, 435-37, 111 S.Ct. 2382, 115 L.Ed.2d 389 (1991) (quoting Michigan v. Chesternut, 486 U.S. 567, 569, 108 S.Ct. 1975, 100 L.Ed.2d 565 (1988)) (emphasis added). This broad formulation of "the principle that th[e] words [`free to leave'] were intended to capture," might very well cover a case like the Bradfords'. Taking into account all the surrounding circumstances, the Bradfords would not have felt free to ignore the police presence and go about the business of staying in front of their son's home — public property. Therefore, it is my view that they were seized under Bostick. Even if a seizure did occur, as I believe it did, the deputies' actions did not violate the Bradfords' Fourth Amendment rights since that seizure was reasonable. NOTES [1] The encounter was taped by the deputies' dash board cameras, and the DVD recording is part of our record. [2] Most circuits review appeals of summary judgment based on judicial estoppel for abuse of discretion. See, e.g., Abercrombie & Fitch Co. v. Moose Creek, Inc., 486 F.3d 629, 633 (9th Cir.2007); Stephens v. Tolbert, 471 F.3d 1173, 1175 (11th Cir.2006); Thom v. Howe, 466 F.3d 173, 182 (1st Cir.2006); Stallings v. Hussmann Corp., 447 F.3d 1041, 1046 (8th Cir.2006); Jethroe v. Omnova Solutions, Inc., 412 F.3d 598, 599-600 (5th Cir.2005); Lampi Corp. v. Am. Power Prods., Inc., 228 F.3d 1365, 1377 (Fed.Cir.2000); Klein v. Stahl GMBH & Co. Maschinefabrik, 185 F.3d 98, 108 (3d Cir.1999); King v. Herbert J. Thomas Memorial Hosp., 159 F.3d 192, 196 (4th Cir. 1998). But see, Eubanks v. CBSK Fin. Group, Inc., 385 F.3d 894, 897 (6th Cir.2004) (applying a de novo standard). [3] Applying judicial estoppel both narrowly and cautiously, as we must, we do not hold it to be dispositive that the Bradfords simply entered a no contest plea. See Thore v. 
Howe, 466 F.3d 173, 187 (1st Cir.2006) (rejecting a per se rule that judicial estoppel always applies or never applies to facts admitted during a guilty plea). Sometimes a civil action following a plea is justified, most commonly when a party's previous position was based on a mistake. Thore, 466 F.3d at 185. But see, Zinkand v. Brown, 478 F.3d 634, 638 (4th Cir.2007) ("[B]ad faith is the determinative factor.") (internal quotation marks omitted). However, though the plea itself is not dispositive, we hold that the Bradfords' plea and their plea hearing statements that they refused the officers' requests to leave are sufficient to justify judicial estoppel in this case.
{ "pile_set_name": "FreeLaw" }
Outcome Enter a dangerous post-apocalyptic world in this challenging platform runner. Make your way through the wasteland to reach survival camps. Be careful and avoid all obstacles, use your skills and collect as many coins as possible to unlock new levels.
{ "pile_set_name": "Pile-CC" }
Nocentelli: Live in San Francisco Nocentelli: Live in San Francisco is a live album by guitarist Leo Nocentelli of The Meters. The album was recorded at Slim's nightclub in San Francisco. It was released by DJM Records in November 1997. Background Nocentelli performed regularly in a quartet with drummer Ziggy Modeliste. On this recording the quartet included keyboardist Kevin Walsh and bassist Nick Daniels. The performance was recorded by an audience member at Slim's nightclub in San Francisco. Unaware of the recording, Nocentelli received a Digital Audio Tape from the audience member. He asked a friend to convert it to analog tape. During conversion, a DJM Records executive heard the music and arranged for its release as an album. Reception Don Snowden of AllMusic noted that the album "could have been a great glimpse of two masters revisiting past highlights" and found it not fully satisfying. Tony Green of JazzTimes wrote, "the extended jams (...) give Nocentelli the space to step out of his rhythm guitar role for some smoldering classic fusion-leaning solos." John Koetzner of Blues Access had a positive review and said the album is a great way to discover The Meters and Nocentelli's talent. Track listing Personnel Credits adapted from AllMusic. Leo Nocentelli – primary artist, guitar, vocals, liner notes, producer, composer Zigaboo Modeliste – drums, vocals, composer Nick Daniels – bass, vocals Kevin Walsh – keyboards Dan Morehouse – digital mastering, editing Composition (track 3) – Jay Livingston, Ray Evans Additional composition (tracks 1, 2, 5, 8, 9) – George Porter Jr., Art Neville References Category:1997 live albums Category:Leo Nocentelli albums Category:DJM Records live albums
{ "pile_set_name": "Wikipedia (en)" }
Q: Show $ \int_0^\infty\left(1-x\sin\frac 1 x\right)dx = \frac\pi 4 $ How to show that $$ \int_0^\infty\left(1-x\sin\frac{1}{x}\right)dx=\frac{\pi}{4} $$ ? A: Use $$ \int \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x = x - \int \sin\left(\frac{1}{x}\right) \mathrm{d} \frac{x^2}{2} = x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{1}{2} \int \cos\left(\frac{1}{x}\right) \mathrm{d}x $$ Integrating by parts again $\int \cos\left(\frac{1}{x}\right) \mathrm{d}x = x \cos\left(\frac{1}{x}\right) - \int \sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} $: $$ \int \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x = x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{x}{2} \cos\left(\frac{1}{x}\right) + \frac{1}{2} \int \sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} $$ Thus: $$ \begin{eqnarray} \int_0^\infty \left(1-x \sin\left(\frac{1}{x}\right)\right) \mathrm{d} x &=& \left[x - \frac{x^2}{2}\sin\left(\frac{1}{x}\right) - \frac{x}{2} \cos\left(\frac{1}{x}\right)\right]_{0}^{\infty} + \frac{1}{2} \int_0^\infty\sin\left(\frac{1}{x}\right) \frac{\mathrm{d}x}{x} = \\ &=& 0 + \frac{1}{2} \int_0^\infty \frac{\sin{u}}{u} \mathrm{d} u = \frac{\pi}{4} \end{eqnarray} $$ where the last integral is the Dirichlet integral. A: Sasha's answer concisely gets the answer in terms of the Dirichlet integral, so I will evaluate this integral in the same way that the Dirichlet integral is evaluated with contour integration. First, change variables to $z=1/x$: $$ \int_0^\infty\left(1-x\sin\left(\frac1x\right)\right)\,\mathrm{d}x =\int_0^\infty\frac{z-\sin(z)}{z^3}\,\mathrm{d}z\tag{1} $$ Since the integrand on the right side of $(1)$ is even, entire, and vanishes as $t\to\infty$ within $1$ of the real axis, we can use symmetry to deduce that the integral is $\frac12$ the integral over the entire line and then shift the path of integration by $-i$: $$ \int_0^\infty\frac{z-\sin(z)}{z^3}\,\mathrm{d}z =\frac12\int_{-\infty-i}^{\infty-i}\frac{z-\sin(z)}{z^3}\,\mathrm{d}z\tag{2} $$ Consider the contours $\gamma^+$ and $\gamma^-$ below. Both pass a distance $1$ below the real axis and then circle back along circles of arbitrarily large radius. $\hspace{4.4cm}$ Next, write $\sin(z)=\frac1{2i}\left(e^{iz}-e^{-iz}\right)$ and split the integral as follows $$ \frac12\int_{-\infty-i}^{\infty-i}\frac{z-\sin(z)}{z^3}\,\mathrm{d}z =\frac12\int_{\gamma^-}\left(\frac1{z^2}+\frac{e^{-iz}}{2iz^3}\right)\,\mathrm{d}z -\frac12\int_{\gamma^+}\frac{e^{iz}}{2iz^3}\,\mathrm{d}z\tag{3} $$ $\gamma^-$ contains no singularities so the integral around $\gamma^-$ is $0$. The integral around $\gamma^+$ is $\color{#00A000}{2\pi i}$ times $\color{#00A000}{-\dfrac{1}{4i}}$ times the residue of $\color{#C00000}{\dfrac{e^{iz}}{z^3}}$ at $\color{#C00000}{z=0}$; that is, $\color{#00A000}{-\dfrac\pi2}$ times the coefficient of $\color{#C00000}{\dfrac1z}$ in $$ \frac{1+iz\color{#C00000}{-z^2/2}-iz^3/6+\dots}{\color{#C00000}{z^3}}\tag{4} $$ Thus, the integral around $\gamma^+$ is $\color{#00A000}{\left(-\dfrac\pi2\right)}\color{#C00000}{\left(-\dfrac12\right)}=\dfrac\pi4$. Therefore, combining $(1)$, $(2)$, and $(3)$ yields $$ \int_0^\infty\left(1-x\sin\left(\frac1x\right)\right)\,\mathrm{d}x=\frac\pi4\tag{5} $$ As complicated as that may look at first glance, with a bit of practice, it is easy enough to do in your head. 
A: Let's start out with the variable change $\displaystyle x=\frac{1}{u}$ and then turn the integral into a double integral: $$\int_{0}^{\infty} {\left( {1 - \frac{\sin u}{u}} \right)\frac{1}{u^2}} \ du=$$ $$ \int_{0}^{\infty}\left(\int_{0}^{1} 1 - \cos (u a) \ da \right)\frac{1}{u^2} \ du=$$ By changing the integration order we get $$ \int_{0}^{1}\left(\int_{0}^{\infty} \frac{1 - \cos (a u)}{u^2} \ du \right)\ \ da=\int_{0}^{1} a \frac{\pi}{2} \ da=\frac{\pi}{4}.$$ Note that by using a simple integration by parts at $\displaystyle \int_{0}^{\infty} \frac{1 - \cos (a u)}{u^2} \ du$ we immediately get $\displaystyle a\int_{0}^{\infty} \frac{\sin(au)}{u} \ du = a\int_{0}^{\infty} \frac{\sin(u)}{u}\ du$ that is $\displaystyle a\frac{\pi}{2}$. The last integral is the famous Dirichlet integral. Hence the result follows and the proof is complete. Q.E.D. (Chris)
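As a quick numerical sanity check (not part of the original question or answers), the value can also be verified with standard numerical libraries. The sketch below assumes Python with SciPy is available; it uses the substitution $u = 1/x$ from the answers above, a Taylor expansion of the integrand near $u = 0$ to avoid floating-point cancellation, and an explicit bound for the neglected tail.

    import math
    from scipy.integrate import quad

    # After u = 1/x the integral becomes I = int_0^inf (u - sin u)/u^3 du,
    # whose integrand tends to 1/6 at u = 0 and decays like 1/u^2 for large u.
    def g(u):
        if u < 1e-3:
            # Taylor expansion avoids catastrophic cancellation in u - sin(u)
            return 1.0 / 6.0 - u * u / 120.0
        return (u - math.sin(u)) / u ** 3

    T = 200.0
    head, _ = quad(g, 0.0, T, limit=500)

    # Tail: int_T^inf (u - sin u)/u^3 du = 1/T - int_T^inf sin(u)/u^3 du,
    # and the neglected oscillatory piece is bounded by 1/(2*T**2) ~ 1.3e-5.
    approx = head + 1.0 / T

    print(approx)        # ~ 0.78540, agreeing with pi/4 to about 4 decimal places
    print(math.pi / 4)   # 0.7853981633974483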
{ "pile_set_name": "StackExchange" }
692 F.Supp. 1354 (1988) Harry A. BENDIBURG, Individually and as Administrator of the Estate of Carl Bendiburg, Deceased, Plaintiff, v. Pamela S. DEMPSEY, et al., Defendants. Civ. A. No. 1:87-CV-1774-JOF. United States District Court, N.D. Georgia, Atlanta Division. July 14, 1988. Harold Dennis Corlew, Atlanta, Ga., for plaintiff. William C. Joy, Victoria H. Tobin, Office of State Atty. Gen., Bruce McCord Edenfield, Hicks, Maloof & Campbell, Atlanta, Ga., Jerry Lovvorn Gentry, Sams Glover & Gentry, Marietta, Ga., Alan F. Herman, Freeman & Hawkins, J. Caleb Clarke, III, Culbreth & Clarke, Earl W. Gunn, Sidney F. Wheeler, Long, Weinberg, Ansley & Wheeler, Lawrie E. Demorest, Ralph Jerry Kirkpatrick, Wendy L. Hagenau, Randall L. Hughes, Powell, Goldstein, Frazer & Murphy, Atlanta, Ga., for defendants. ORDER FORRESTER, District Judge. This matter is before the court on (1) defendant Adventist Health Systems/Sunbelt, *1355 Inc.'s[1] motion for summary judgment, Fed.R.Civ.P. 56; (2) defendant Sallie T. Walker's motion for summary judgment, id.; (3) defendant Walker's motion for imposition of sanctions, Fed.R.Civ.P. 11; and (4) defendant Cobb County's motion to dismiss. Fed.R.Civ.P. 12(b)(6). Defendant Adventist Health Systems' motion for summary judgment was filed October 23, 1987. By stipulation between the parties filed December 14, 1987, however, defendant Adventist Health Systems was dismissed from this action with prejudice. Accordingly, the motion for summary judgment is now moot and DENIED as such. Defendant Walker's motions for summary judgment and for imposition of sanctions and defendant Cobb County's motion to dismiss shall be considered separately following a recitation of the relevant facts. I. STATEMENT OF FACTS. The following facts are based upon the court's review of the relevant pleadings of record as well as the affidavit and deposition testimony and the various exhibits submitted by the parties. The parties to this action are (1) plaintiff Harry D. Bendiburg who brings this action both in his individual capacity as well as in his capacity as administrator of the estate of his son, Carl Austin Bendiburg (hereinafter "plaintiff's decedent" or "Carl"), Complaint, ¶ 3; (2) defendant Cobb County Department of Family and Children Services (DFACS), an instrumentality of the State of Georgia, id., ¶ 7; (3) defendant Cobb County, a political subdivision of the State of Georgia, id., ¶ 9; (4) defendant Medical Personnel Pool of Atlanta, Inc. (Med Pool), a corporation organized and existing under the laws of the State of Georgia, id., ¶ 11; (5) defendant Drs. Klaus, Cohen and Weil Orthopaedic Associates, P.C., a professional corporation organized and existing under the laws of the State of Georgia, id., ¶ 13; (6) defendant Pamela S. Dempsey, an official of defendant DFACS, id., ¶ 4; (7) defendant Sue Terry, a supervisory official of defendant DFACS, id., ¶ 5; (8) defendant Nancy J. Pendergraft, also a supervisory official of defendant DFACS, id., ¶ 6; (9) defendant Sallie T. Walker, a judicial officer of the Cobb County Juvenile Court, id., ¶ 8; (10) defendant Nancy Harrison, an employee of defendant Med Pool, id., ¶ 10; (11) defendant Richard Cohen, M.D., an employee of defendant Klaus, Cohen and Weil Orthopaedic Associates, id., ¶ 12; and (12) defendant Baheeg Shadeed, M.D., a resident of the State of Georgia, id., ¶ 14. The court's jurisdiction is predicated upon 28 U.S.C. §§ 1331 & 1343. Id., ¶ 2. This action, in which plaintiff alleges violations of 42 U.S.C. 
§ 1983 as well as the state law tort of battery, arose out of the following series of events. On September 15, 1985, plaintiff's decedent, Carl Bendiburg, was seriously injured in an automobile accident and admitted to the Smyrna Hospital for treatment. Defendant Walker's Statement of Material Facts, ¶ 2; Plaintiff's Statement of Material Facts, ¶ 2. Among the injuries suffered by Carl was a compound fracture of the left leg. On November 9, 1985, Carl was discharged from Smyrna Hospital. Because of a continuing infection of the left leg contracted during his stay in the hospital, however, Carl continued to receive at-home nursing care furnished by defendant Med Pool. Defendant Walker's Statement of Material Facts, ¶ 3; Plaintiff's Statement of Material Facts, ¶ 3. Among the responsibilities of these nurses was to ensure that Carl was administered the proper dosage of antibiotics prescribed for his continuing infection. Id. When the nurses experienced increased difficulty in administering the prescribed medication intravenously,[2] plaintiff was asked to consent to the insertion of a Hickman catheter[3] into Carl which would *1356 allow for direct administration of the antibiotics. Defendant Walker's Statement of Material Facts, ¶ 5; Plaintiff's Statement of Material Facts, ¶ 5. Plaintiff refused to consent to the procedure. Id. On November 27, 1985, defendant Harrison, a registered nurse employed by defendant Med Pool, contacted defendant Dempsey of the Cobb County Department of Family and Children Services (DFACS). Defendant Walker's Statement of Material Facts, ¶ 8; Plaintiff's Statement of Material Facts, ¶ 8. Defendant Harrison informed defendant Dempsey of the circumstances regarding Carl's infection, the need for intravenous medication, the perceived need for the Hickman catheter, and plaintiff's refusal to consent to insertion of the Hickman catheter. Id. As a result of this conversation, defendant Dempsey prepared and presented to the Juvenile Court of Cobb County an ex parte deprivation petition pursuant to O.C.G.A. § 15-11-23, et seq. Defendant Walker's Statement of Material Facts, ¶ 9; Plaintiff's Statement of Material Facts, ¶ 9. The petition was originally presented to and to be heard by Judge B. Wayne Phillips of the Cobb County Juvenile Court. Defendant Walker's Statement of Material Facts, ¶ 10; Plaintiff's Statement of Material Facts, ¶ 10. Judge Phillips declined to hear the petition, however, after learning that it involved plaintiff, with whom he was acquainted. Id. For this reason, Judge Phillips directed that the petition instead be presented to defendant Walker. As will be discussed at length below, defendant Walker's exact legal status is disputed; however, the parties agree that she heard the petition under the title "Judge Pro Tempore" and pursuant to a standing order[4] issued by Judge Phillips July 12, 1985. Defendant Walker's Statement of Material Facts, ¶¶ 11-13; Plaintiff's Statement of Material Facts, ¶¶ 11-13. In any event, after hearing testimony from defendant Dempsey, defendant Walker entered an order granting the ex parte deprivation petition and, in so doing, caused custody of plaintiff's decedent to be placed temporarily in defendant DFACS. Defendant Walker's Statement of Material Facts, ¶ 13; Plaintiff's Statement of Material Facts, ¶ 13. Pursuant to this order, Carl was readmitted to Smyrna Hospital where the controversial Hickman catheter was inserted.
Defendant Walker's Statement of Material Facts, ¶ 15; Plaintiff's Statement of Material Facts, ¶ 15. Carl was released from Smyrna Hospital November 29, 1985. On December 2, 1985, custody of Carl was restored to plaintiff by and through a consent agreement between plaintiff and defendant DFACS. Complaint, ¶ 27. On December 14, 1985, Carl died as a result of a massive pulmonary embolus.[5]Id., ¶¶ 28-30. This action followed. II. DISCUSSION A. Defendant Walker's Motion for Summary Judgment. 1. Fed.R.Civ.P. 56. Before turning to the merits of defendant Walker's motion for summary judgment, the court will set forth the standard controlling practice under Fed.R.Civ.P. 56. Courts may grant motions for summary judgment when "there is no genuine issue as to any material fact and ... the moving party is entitled to judgment as a matter of law." Fed.R.Civ.P. 56(c). The party seeking summary judgment bears the burden of demonstrating that no genuine issue of material fact exists in the case. Hines v. State Farm Fire & Casualty Company, 815 F.2d 648 (11th Cir.1987). This burden may be discharged by demonstrating that *1357 there is an absence of evidence to support the non-moving party's case. Celotex Corp. v. Catrett, 477 U.S. 317, 106 S.Ct. 2548, 91 L.Ed.2d 265 (1986). In determining whether this burden is met, courts should review the evidence of record and all factual inferences in the light most favorable to the party opposing the motion. Hines. Summary judgment should be entered "against a party who fails to make a showing sufficient to establish the existence of an element essential to that party's case, and on which that party will bear the burden of proof at trial." Celotex. 2. Judicial Immunity. Plaintiff concedes "that if [defendant Walker] was in fact a legally appointed judge pro tempore of the Juvenile Court of Cobb County, then she is entitled to judicial immunity and the action against her must be dismissed." Response at 1. It is plaintiff's contention, however, that defendant Walker was not a lawfully appointed judge pro tempore and therefore is not immune from damages in this action. In this regard, plaintiff presents two arguments. First, plaintiff argues that Judge Phillips' July 12, 1985 standing order by which defendant Walker was appointed was entered without regard to the statutory framework established for such appointments and thus is a nullity. Second, it is argued that regardless of the validity of the July 12, 1985 standing order, jurisdiction over the Bendiburg matter was never vested in defendant Walker because Judge Phillips improperly declined to hear the petition himself. (a) The July 12, 1985 Standing Order. As mentioned above, it is plaintiff's contention that Judge Phillips' July 12, 1985 standing order by which defendant Walker was appointed judge pro tempore is of no legal effect. As a consequence, the argument goes, defendant Walker's granting of defendant DFACS' ex parte deprivation petition was effected in the complete absence of jurisdiction over the matter. It is for this reason that plaintiff contends defendant Walker is not entitled to judicial immunity for the alleged unconstitutional deprivation of plaintiff's parental rights.[6] The court's consideration of this matter must necessarily begin with O.C.G.A. § 15-11-63. 
This code section provides, In the event of the disqualification, illness, or absence of the judge of the juvenile court, the judge of the juvenile court may appoint any attorney at law resident in the judicial circuit in which the court lies, any judge or senior judge of the superior courts, or any duly appointed juvenile court judge to serve as judge pro tempore of the juvenile court. In the event the judge of the juvenile court is absent or unable to make such appointment, the judge of the superior court of that county may so appoint. The person so appointed shall have the authority to preside in the stead of the disqualified, ill or absent judge.... Judge Phillips' July 12, 1985 order, expressly entered pursuant to section 15-11-63, provides in relevant part, Sallie T. Walker is hereby appointed as judge pro tempore in the absence of the judge of the Juvenile Court of Cobb County, Georgia, to serve during any period of disqualification, illness, or absence of the undersigned and to fully act in the happening of such an event in the place of the undersigned in any and all matters within the jurisdiction of this court. Phillips Depo., Exhibit P-109. Plaintiff argues that this order purports to create the permanent position of judge pro tempore, an act which section 15-11-63 does not authorize. Defendant Walker, on the other hand, argues that the juvenile court judge's inherent power "to carry on its business" carries with it the authority to issue the standing order in question, and cites the obvious benefits of giving effect to such an order.[7] Thus, the first question *1358 the court must address is whether defendant Walker was appointed to hear the Bendiburg matter in a manner inconsistent with state law. For the proposition that unless a temporary judge is appointed in strict compliance with state law, he is wholly without jurisdiction to act, plaintiff cites the opinions of the Georgia Supreme Court in Chambers v. Wynn, 217 Ga. 381, 122 S.E.2d 571 (1961); Adams v. Payne, 219 Ga. 638, 135 S.E.2d 423 (1964); and Trammell v. Trammell, 220 Ga. 293, 138 S.E.2d 562 (1964), as well as of the Georgia Court of Appeals in Bedingfield v. First National Bank, 4 Ga.App. 197, 61 S.E. 30 (1908) and Lamas v. Baldwin, 128 Ga.App 715, 197 S.E.2d 779 (1973). In Chambers, Adams, and Trammell, the Georgia Supreme Court was confronted with judgments, orders and other judicial acts entered by superior court judges emeritus. The state law in effect at the time of these cases provided for four ways in which a judge emeritus could assume jurisdiction to hear a particular case: (1) appointment by the Governor in the event the judge is "unable to serve;" (2) when selected by the litigants; (3) when selected by the clerk of court; and (4) when appointed by the judge in writing and for a specified time, place and duration.[8] In all three instances, the judge emeritus was not appointed in strict compliance with the appropriate legislation. In Adams, the judge emeritus had been requested to serve by a superior court judge. This request, however, was not made in writing and further failed to specify the time, place and duration of the service. Similarly, in Trammell, the judge emeritus had been appointed to serve by the Governor in the place of a disqualified judge. The Governor further appointed the judge emeritus to hear any other matters arising while he served on the bench. 
Noting that the applicable statute allowed the Governor to make such appointments only where a particular judge "is unable to serve," the court held the latter portion of the Governor's appointment to be invalid.[9] In all three cases, the judicial acts taken by the improperly appointed judges emeritus were ruled nullities for want of jurisdiction. In Bedingfield and Lamas, the court of appeals was confronted with improperly appointed judges pro hac vice. The law in effect when Bedingfield was decided provided for the appointment of such judges either by request of the litigants or by the clerk of court. When the court of appeals was made aware that the judge pro hac vice in that case has been appointed by the trial judge, the judgment appealed from was ruled to be void as having been rendered in the complete absence of jurisdiction. By the time Lamas was decided, the pertinent law had been changed to allow for the appointment of a judge pro hac vice by the chief judge to preside over a pending case whenever necessary by reason of the disqualification of the judge. Despite the clear wording of this law, the chief judge appointed an attorney judge pro hac vice to hear cases arising while several judges were absent from the circuit. Again, the court of appeals vacated all judicial acts taken by the improperly appointed judge pro hac vice on the grounds of lack of jurisdiction. To the extent that plaintiff cites the above authorities for the proposition that temporary judges improperly appointed are without jurisdiction to act, the court agrees. The court does not find these cases dispositive of the case at bar, however. All five cases describe judicial appointments made in complete disregard for the clear requirements of the laws in effect at the relevant time. In the present case, the challenged judicial appointment was expressly made pursuant to the appropriate statute; i.e., O.C.G.A. § 15-11-63. Furthermore, by its own terms, the July 12, 1985 standing order limits defendant Walker's authority to act judge pro tempore to those instances specifically provided for in section 15-11-63. Thus, it cannot be said that Judge Phillips purported to exercise *1359 more authority in executing this order than the law of Georgia allows. That the entry of a standing order is not provided for within the text of section 15-11-63 is of no consequence; it was merely the means chosen by Judge Phillips to give effect to the purpose behind the statute when the appropriate time arose. Though termed a "standing order" by the parties, the July 12, 1985 order was expressly made effective only in the event of Judge Phillips' "disqualification, illness or absence" and thus tracks the exact language of section 15-11-63. It provides for no other circumstances under which defendant Walker could assume the role of judge pro tempore and thus clearly does not purport to create an additional judgeship or any other office within the Cobb County Juvenile Court system. For these reasons, the court finds that Judge Phillips' July 12, 1985 order by which defendant Walker was appointed judge pro tempore over the Bendiburg matter was entered in full compliance with the terms of O.C.G.A. § 15-11-63. (b) Judge Phillips' "Disqualification." 
Plaintiff argues in the alternative that, regardless of whether the July 12, 1985 standing order was entered in compliance with section 15-11-63, defendant Walker was never vested with jurisdiction to hear the Bendiburg petition because no statutory condition precedent to her appointment occurred. Specifically, it is argued that Judge Phillips' decision not to hear the petition was not the legal equivalent of a "disqualification"[10] and thus defendant Walker's appointment to preside over the Bendiburg matter was a nullity. Plaintiff asserts that the court must distinguish "between a judge disqualifying himself, a legal act which must be predicated upon legal grounds, and a judge simply declining to hear the matter for personal reasons." Response at 14.

As previously mentioned, Judge Phillips' asserted reason for declining to hear the deprivation petition was that he was acquainted with plaintiff. Plaintiff argues that by this decision, Judge Phillips violated "the long standing rule that a judge has a duty to perform the judicial role mandated by statute and that he cannot voluntarily recuse himself without a legal basis for disqualification." Response at 15 (citations omitted). It is further argued that "merely knowing the party does not disqualify a judge." Id. The court agrees in principle with both of these propositions. As will be seen, however, Judge Phillips' decision not to hear the Bendiburg petition does have a legal basis which goes beyond being a mere acquaintance of a party to the action. Judge Phillips testifies as follows:

As a result of Mr. Bendiburg's and my past professional relationship, he, as a magistrate and I, as chief magistrate of the Cobb Judicial Circuit, I was concerned as to whether my impartiality might reasonably be questioned. I had not reappointed Mr. Bendiburg as a magistrate when his term ended due to certain problems I had with him in carrying out his duties, and had several confrontations with him about how and when he was to carry out his duties.

(Emphasis supplied). Phillips Aff., ¶ 6.[11] The portion of Judge Phillips' testimony underscored above indicates that his decision not to hear the Bendiburg matter was based upon the language of Canon 3(C)(1)(a) of the Georgia Code of Judicial Conduct,[12] which provides, "a judge should[13] disqualify himself in a proceeding in which his impartiality might reasonably be questioned, including but not limited to instances where: (a) he has a personal bias or prejudice concerning a party.... (Emphasis supplied)."

In light of the undisputed evidence of record wherein the past relationship between *1360 Judge Phillips and plaintiff is described, the court finds that Canon 3(C)(1)(a) of the Georgia Code of Judicial Conduct provides a sufficient legal basis for Judge Phillips' decision not to hear the Bendiburg petition. It follows, therefore, that Judge Phillips' decision constituted a legal disqualification and thus satisfied the requirements of section 15-11-63 and the July 12, 1985 standing order. Accordingly, the court finds that defendant Walker was a lawfully appointed judge pro tempore of the Juvenile Court of Cobb County when she entered the November 27, 1985 order granting defendant DFACS' ex parte deprivation petition and, as such, is entitled to full judicial immunity for her actions. Dykes v. Hosemann, 776 F.2d 942 (11th Cir.1985). For this reason, defendant Walker's motion for summary judgment is GRANTED.

B. Defendant Walker's Motion for Imposition of Sanctions.
Defendant Walker seeks the imposition of sanctions against plaintiff on the grounds that the facts available to him prior to instigation of this action sufficiently demonstrated that defendant Walker is immune from damages liability for her actions taken as judge pro tempore. While the foregoing analysis illustrates that the court agrees with defendant Walker's judicial immunity defense, the court finds that plaintiff's prosecution of this action does not rise to the level of a Rule 11 violation. The case law cited by plaintiff and discussed supra, as well as the arguments based thereon provide a certain measure of support for plaintiff's theory of liability against defendant Walker. For this reason, defendant Walker's motion for imposition of sanctions is DENIED.

C. Defendant Cobb County's Motion to Dismiss.

1. Fed.R.Civ.P. 12(b)(6).

Defendant Cobb County moves to dismiss on the grounds that plaintiff's complaint fails to state a claim against it upon which relief can be granted. Fed.R.Civ.P. 12(b)(6). Under Rule 12(b)(6), the burden of demonstrating that no claim has been stated is upon the movant. Jackam v. HCA Mideast, Ltd., 800 F.2d 1577 (11th Cir.1986). In addition, the court must construe the complaint liberally in favor of the plaintiff, taking the facts as alleged as true; all reasonable inferences are made in favor of the plaintiff. Blum v. Morgan Guaranty Trust Company of New York, 709 F.2d 1463 (11th Cir.1983). Unless it is clear that the plaintiff can prove no set of facts in support of his claim which would entitle him to relief, defendant Cobb County's motion to dismiss must be denied. Jackam.

2. Municipal Liability.

The Supreme Court has determined that local governments may be the targets of section 1983 actions where official policy or governmental custom is responsible for deprivation of rights protected by the Constitution. Monell v. New York City Department of Social Services, 436 U.S. 658, 98 S.Ct. 2018, 56 L.Ed.2d 611 (1978). In this regard, plaintiff contends that defendant Cobb County has a custom or policy of depriving parents of their parental rights without due process of law and further that this custom or policy is executed by the defendant county through the juvenile court system acting through defendant Walker. As grounds for its motion to dismiss, defendant Cobb County argues that defendant Walker, acting as judge pro tempore of the Juvenile Court of Cobb County, is a state official and not an official of Cobb County and thus there exists no nexus between the Cobb County Juvenile Court and the county itself. The grant or denial of defendant Cobb County's motion to dismiss thus turns on whether under Georgia law, defendant Walker is a county official for the purposes of municipal liability within the meaning of Monell and its progeny. See Pembaur v. City of Cincinnati, 475 U.S. 469, 483, 106 S.Ct. 1292, 1300, 89 L.Ed.2d 452 (1986) (whether an official is a final policy making authority is a question of state law); City of St. Louis v. Praprotnik, ___ U.S. ___, ___, 108 S.Ct. 915, 922, 99 L.Ed.2d 107 (1988) (the *1361 identification of policy making officials is a question of state law).

In support of its argument that defendant Walker is a state official, defendant Cobb County first points to several provisions of the Constitution of the State of Georgia. Specifically, it is noted that counties are prohibited from taking any action affecting any court, including the juvenile court, or the personnel thereof.
Constitution of the State of Georgia, Article 9, Section 2, ¶ 1(c)(7), and further that the judicial power of the State of Georgia is expressly vested in several classes of courts, including the juvenile courts. Id., Article 6, Section 1, ¶ 1. It is likewise noted that the Georgia Constitution provides for the juvenile courts to have "uniform jurisdiction, powers, rules of practice and procedure, and selection, qualification, terms and discipline of judges." Id., Article 6, Section 1, ¶ 5. Finally, defendant Cobb County points out that O.C.G.A. § 15-11-1, et seq., entitled "judicial proceedings," provides for (1) the creation of a juvenile court in each county, O.C.G.A. § 15-11-3(a); (2) the appointment of juvenile court judges by the appropriate superior court, O.C.G.A. § 15-11-3(b); and (3) the appointment of clerks and other personnel by the juvenile court judge. O.C.G.A. § 15-11-9. These enactments, it is argued, are consistent with the above-described constitutional provisions and show further that defendant Walker was at all relevant times a state officer.

Plaintiff likewise relies upon certain aspects of the Georgia juvenile court system in support of his position that defendant Walker is a county official. In particular, plaintiff notes that, like other county officials, a juvenile court judge's salary, as well as that of a judge pro tempore, is paid from the county treasury. County Code Section 2-5-38; O.C.G.A. § 15-11-63. Similarly, plaintiff points out that a juvenile court judge's compensation is set by the superior court with the approval of the governing authority of the county. O.C.G.A. § 15-11-3(d)(1). In addition, a juvenile court judge is appointed by "the judge or a majority of the judges of the superior court" in the circuit in which the county is situated. O.C.G.A. § 15-11-3(b). Indeed, defendant Walker herself was appointed by a duly appointed judge of the Cobb County Juvenile Court. Finally, it is noted that (1) all expenditures of the juvenile court are "payable out of the county treasury with the approval of the governing authority ... of the county," O.C.G.A. § 15-11-3(i); (2) the compensation of the juvenile court employees is fixed by the juvenile court judge with county approval, O.C.G.A. § 15-11-9; (3) the salaries of such employees are paid out of county funds, id; (4) all such employees are appointed by the juvenile court judge "from eligible lists secured from the local merit boards in the county," id; and (5) the appointment, salary, tenure and all other conditions of employment of the employees [are] in accordance with the laws and regulations governing the [county] merit system. Id.

In further support of his argument, plaintiff cites the Fifth Circuit Court of Appeals opinion in Crane v. State of Texas, 766 F.2d 193 (5th Cir.1985). In Crane, the issue before the court was whether a district attorney was an officer of the State of Texas or of the county in which he served. The court first noted that:

A Texas district attorney has numerous ... attributes of a state official. [T]he geographic extent of his office's authority is created by a specific state statute for each territory within the state, some few of which comprise more than one county. In the event of a vacancy in his office, the Governor appoints his interim successor. His bond for faithful performance of his duties was to the Governor of the state. The state administrative body, the prosecutor's counsel, exists to discipline and assist the holders of his office.
The district attorney is required by statute to make reports to the state attorney general upon his request. His office is created by ... the state constitution.... (Citations omitted).

Crane at 194-95. Despite these significant elements of state office, the court was more impressed that (1) a Texas district attorney is elected by the voters of his district, usually one county; (2) a district *1362 attorney's powers and duties are limited to the territory of his district; (3) a district attorney is paid by county funds, though the county is partly reimbursed by the state; and (4) the fact that the office of the district attorney is created by the state constitution is diminished by the fact that other local offices — and the county itself — are similarly created by the Texas Constitution. Id. at 195. Based on these facts, the court concluded that the Texas district attorney was "properly viewed as a county official." Id.

The court finds Crane unpersuasive. It is strictly an application of Texas law and plaintiff makes little attempt at drawing pertinent comparisons between the state law relied upon by the Fifth Circuit and the law of the State of Georgia. In any event, one court in this district has recently had occasion to hold that, under Georgia law, a district attorney is a state, rather than county, official. Owens v. Fulton County, 690 F.Supp. 1024 (N.D.Ga.1988) (Hall, J.). In so holding, Judge Hall effectively distinguished the Crane opinion and noted several significant differences between Texas and Georgia law. For these reasons, the court declines to follow the Fifth Circuit Court of Appeals and instead, for the reasons set forth below, finds that defendant Walker, in her capacity as judge pro tempore of the Juvenile Court of Cobb County, is an officer of the State of Georgia. As will be seen, this conclusion rests on the various state constitutional and statutory provisions which establish and define both the judicial power and the political subdivisions of the State of Georgia.

By legislative enactment, the State of Georgia is divided into 159 counties. O.C.G.A. § 36-1-1. The power of the General Assembly to create and define the state's counties is derived generally from Article III of the Constitution of the State of Georgia (legislative power) and specifically from Article IX (counties and municipal corporations). Thus created by the state, counties can exercise no powers not conferred upon them by the state. As the Georgia Supreme Court has stated, "Counties can exercise only such powers as are conferred on them by law, and a county can exercise no powers except such as are expressly given or necessarily implied from express grant of other powers." DeKalb County v. Atlanta Gas Light Company, 228 Ga. 512, 513, 186 S.E.2d 732 (1972). See also McCray v. Cobb County, 251 Ga. 24, 27, 302 S.E.2d 563 (1983) (county only has the powers given to it by the legislature). This is a state constitutional principle: "Each county shall [have] such powers and limitations as are provided in this constitution and as provided by law." Constitution of the State of Georgia, Article IX, section 1, para. 1.[14] In the context of defendant Cobb County's motion to dismiss, the following limitation on the power of a county is most significant: "The power granted to counties ... shall not be construed to extend to ... action affecting any court [including the juvenile court] or the personnel thereof." Id., Article IX, section 2, para. 1(c)(7).
The manner in which the counties and the judicial system of the State of Georgia coexist and interrelate likewise leads to the conclusion that a juvenile court judge pro tempore is a state official. All courts of the State of Georgia, including the juvenile courts, are components of a unified state judicial system. Id., Article VI, section 1, para. 2. "The judicial power of the state [is] vested exclusively in" these courts, id., Article VI, section 1, para. 1, and each has "uniform jurisdiction, powers, [and] rules of practice and procedure." Id., Article VI, section 1, para. 5. In regard to the relationship between these courts and the several counties of the State of Georgia, it is provided that "the state shall be divided into judicial circuits, each of which shall consist of not less than one county. Each county shall have at least one superior court ... and, where needed, a juvenile *1363 court."[15] Id., Article VI, section 1, para. 6. This paragraph does no more than provide for the orderly arrangement of the several courts of the state; no right of power, control or authority over the courts situated within their boundaries is thereby conferred upon the counties.

It is thus clear that the judicial acts of a judge sitting within a particular county are the exercise of the judicial power of the state and are taken by authority of an office created by the state. Put another way, when defendant Walker entered the order granting the controversial ex parte deprivation petition, she did so pursuant to the judicial power of the state as vested within the Juvenile Court of Cobb County and in her capacity as judge pro tempore of the Juvenile Court of Cobb County, an office created by state law. This fact, coupled with the limitations expressly placed upon the counties with regard to "action affecting any court or the personnel thereof," clearly indicates that the several courts of the State of Georgia, including the juvenile courts, are organs of the state and that their judicial officers are state officials. That such responsibilities as compensating and approving the amount of compensation of a juvenile court judge fall on the counties is of no consequence; such provisions constitute no more than allocations of fiscal and other administrative responsibilities to the counties,[16] and no right of control or other power over the juvenile courts is expressly or impliedly conferred upon them.

In sum, the court concludes that defendant Walker, in her capacity as judge pro tempore of the Juvenile Court of Cobb County, is an officer of the State of Georgia. As such, it is clear that she cannot be the "official policymaker" responsible for establishing the alleged unconstitutional custom or policy on behalf of defendant Cobb County. Pembaur v. City of Cincinnati, 475 U.S. 469, 480, 106 S.Ct. 1292, 1300, 89 L.Ed.2d 452 (1986). Inasmuch as this is the sole basis of plaintiff's theory of liability against defendant Cobb County, defendant Cobb County's motion to dismiss for failure to state a claim is GRANTED.

III. CONCLUSION

In sum, defendant Adventist Health Systems/Sunbelt, Inc.'s motion for summary judgment is DENIED as moot. Defendant Walker's motion for summary judgment is GRANTED. Defendant Walker's motion for imposition of sanctions is DENIED. Defendant Cobb County's motion to dismiss is GRANTED.

NOTES

[1] D/b/a Smyrna Hospital.

[2] Due to loss of venous access or collapsed veins.
[3] This device is described as "a long silicone rubber catheter which is inserted into either the subclavian (collarbone) or jugular (neck) vein and then threaded through the patient's upper venous system to the juncture of the superior vena cava and the right atrium of the patient's heart." Plaintiff's Response to Defendant Shadeed's Motion for Summary Judgment at 4. [4] This standing order served to appoint defendant Walker as judge pro tempore of the juvenile court in the event Judge Phillips was ill or absent or was disqualified from hearing a particular case. Plaintiff challenges Judge Phillips' authority to enter such a standing order and therefore defendant Walker's jurisdiction to hear the petition in question. Plaintiff likewise challenges the manner in which Judge Phillips "disqualified" himself from hearing the Bendiburg petition. These issues shall be discussed infra. [5] "A clot or other plug brought by the blood from another vessel and forced into a smaller one, thus obstructing the circulation." Dorland's Illustrated Medical Dictionary, 25th Edition (1974). [6] For the purposes of this order only, the court will assume that this latter portion of plaintiff's theory of liability against defendant Walker is accurate. The court wishes to point out, however, that it has found and plaintiff has cited no authority for the proposition that judicial acts taken in the absence of jurisdiction necessarily fall outside the scope of judicial immunity. [7] E.g., the avoidance of unnecessary delay. [8] This latter method of appointing a judge emeritus was apparently not available until soon after the opinion in Chambers was entered. [9] The Chambers court did not specify why the appointment in that case was improper. [10] Or, for that matter, of an "illness" or "absence." [11] Attached as Exhibit C to the present motion. [12] The text of the Georgia Code of Judicial Conduct is located in the Appendix of Volume 231 of the Georgia Reports. [13] The word "should" has been interpreted by the Georgia courts to mean "shall." Savage v. Savage, 234 Ga. 853, 218 S.E.2d 568 (1975) (cited by defendant Walker). [14] The authority of the General Assembly to "broaden, limit, or otherwise regulate" the county's exercise of the powers conferred upon them is likewise recognized. Id., Article IX, section 2, para. 1. [15] O.C.G.A. § 15-11-3(a) establishes a juvenile court in every county. [16] In any event, the State of Georgia contributes toward the salaries of juvenile court judges. O.C.G.A. § 15-11-3(d)(1).
753 F.Supp.2d 1163 (2010)

Jonathan Paul BOYD, Plaintiff,
v.
Carol H. STECKEL, in her official capacity as Commissioner of the Alabama Medicaid Agency, Defendant.

Case No.: 2:10-cv-688-MEF.
United States District Court, M.D. Alabama, Northern Division.
November 12, 2010.

James Patrick Hackney, James Arnold Tucker, Lonnie Jason Williams, Alabama Disabilities Advocacy Program, Tuscaloosa, AL, Stephen F. Gold, Attorney at Law, Philadelphia, PA, for Plaintiff.

James William Davis, Margaret Lindsey Fleming, Misty S. Fairbanks, William G. Parker, Jr., State of Alabama, Office of the Attorney General, Stephanie McGee Azar, *1164 The Alabama Medicaid Agency, Montgomery, AL, for Defendant.

MEMORANDUM OPINION AND ORDER

MARK E. FULLER, Chief Judge.

This cause is before the Court on the Amended Motion for Preliminary Injunction and Expedited Hearing, (Doc. #15), filed on September 29, 2010 by Plaintiff Jonathan Paul Boyd ("Boyd"). The Court has carefully considered all submissions and argument in support of and in opposition to the motion and has convened a hearing on the matter. For the reasons set forth below, the motion for a preliminary injunction is due to be DENIED.

JURISDICTION AND VENUE

This Court has jurisdiction over the case pursuant to 28 U.S.C. §§ 1331 and 1343(a). Declaratory and injunctive relief is authorized by 28 U.S.C. §§ 2201 and 2202 as well as Federal Rule of Civil Procedure 65. Venue is proper in this district pursuant to 28 U.S.C. § 1391(b) because Defendant Carol H. Steckel, in her official capacity as Commissioner of the Alabama Medicaid Agency ("Commissioner Steckel"), resides in this district.

FACTS[1] AND PROCEDURAL HISTORY

On September 29, 2010, Boyd sued Commissioner Steckel for alleged violations of Title II of the Americans with Disabilities Act ("ADA"), 42 U.S.C. § 12132, as well as its implementing regulations, and violations of Section 504 of the Rehabilitation Act, 29 U.S.C. § 794(a), and its implementing regulations. (Doc. #14, at 11-12, ¶¶ 57, 60). Specifically, Boyd alleges that Commissioner Steckel has failed to properly assess and provide the Medicaid services needed to permit Boyd to live in the community, as opposed to the nursing home in which he resides. Id. at 12-13, ¶¶ 58, 61.

On September 29, 2010, Boyd also filed an Amended Motion for Preliminary Injunction and Expedited Hearing. (Doc. #15). This motion was granted to the extent that it sought an expedited hearing. (Doc. #17). On October 12, 2010, the United States of America filed a statement of interest and brief in support of Boyd's motion for a preliminary injunction. (Doc. #25). The hearing for the preliminary injunction motion was held on October 13, 2010.

A. Medicaid

Title XIX of the Social Security Act of 1965 established Medicaid. 79 Stat. 343, as amended, 42 U.S.C. §§ 1396 et seq. "Medicaid is a joint [S]tate-[F]ederal funding program for medical assistance in which the Federal Government approves a [S]tate plan for the funding of medical services for the needy and then subsidizes a significant portion of the financial obligations the State has agreed to assume." Alexander v. Choate, 469 U.S. 287, 289 n. 1, 105 S.Ct. 712, 83 L.Ed.2d 661 (1985). Medicaid is a voluntary program whereby the States need not participate. Id. However, should a State choose to participate, then it "must comply with the requirements of Title XIX and applicable regulations." Id. Under the Medicaid Act, states may choose to operate home and community-based waiver programs for individuals to avoid institutionalization.
42 U.S.C. § 1396n(c). Pursuant to this section:

The Secretary may by waiver provide that a State plan approved under this title may include as `medical assistance' under such a plan payment for part or *1165 all of the cost of home or community-based services (other than room and board) approved by the Secretary which are provided pursuant to a written plan of care to individuals with respect to whom there has been a determination that but for the provision of such services the individuals would require the level of care provided in a hospital or a nursing facility ... the cost of which could be reimbursed under the State plan.

Id. § 1396n(c)(1). Such waiver programs "are intended to provide the flexibility needed to enable States to try new or different approaches to the efficient and cost-effective delivery of health care services, or to adapt their programs to the special needs of particular areas or groups of recipients." 42 C.F.R. § 430.25(b). However, these waiver programs must be cost-neutral in the aggregate—i.e. the cost of operating the waiver system must not exceed what the cost would be to provide Medicaid services without the waiver program. 42 U.S.C. § 1396n(c)(2)(D) ("[U]nder such [a] waiver the average per capita expenditure estimated by the State in any fiscal year for medical assistance provided with respect to such individuals does not exceed 100 percent of the average per capita expenditure that the State reasonably estimates would have been made in that fiscal year for expenditures under the State plan for such individuals if the waiver had not been granted ...."); see also 42 C.F.R. § 441.302(e)-(f).

The Medicaid Act also provides that States may deviate from certain other Medicaid requirements. 42 U.S.C. § 1396n(c)(3). For example, an approved waiver program may also include a waiver of the Medicaid requirements of "statewideness," "comparability," and "income and resource rules applicable in the community." Id. More specifically, under the applicable federal regulations, "the State may exclude those individuals [from waiver programs] for whom there is a reasonable expectation that home and community-based services would be more expensive than the Medicaid services the individual would otherwise receive." 50 Fed. Reg. 10,013 (Mar. 13, 1985). Similarly, the State "can choose to provide home and community-based services to a limited group of eligibles, such as the developmentally disabled" and need not "provide the services to all eligible individuals who require an ICF [intermediate care facility] or SNF [skilled nursing facility] level of care." Id.

The Medicaid statutes and regulations also provide for caps on the number of persons served under a waiver program for a given year—that is, they "contemplate that State waiver plans will limit the number of eligible participants in any year." (Doc. #20, at 23) (citing 42 U.S.C. § 1396n(c)(9) ("In the case of any waiver under this subsection which contains a limit on the number of individuals who shall receive home or community-based services, the State may substitute additional individuals to receive such services to replace any individuals who die or become ineligible for services under the State plan."); 42 C.F.R. § 441.303(f)(6) ("The State must indicate the number of unduplicated beneficiaries [to] which it intends to provide waiver services in each year of its program.
This number will constitute a limit on the size of the waiver program unless the State requests and the Secretary approves a greater number of waiver participants in a waiver amendment.") (emphasis added)).

B. Alabama's Waiver Programs

The State of Alabama ("Alabama") has chosen to participate in Medicaid and to provide certain waiver programs. (Doc. #20, at 20). Currently, Alabama operates six waiver programs with varying purposes, qualifying criteria, services provided, *1166 and enrollment limits: (1) the Elderly & Disabled ("E & D") Waiver; (2) the Intellectual Disabilities ("ID") Waiver;[2] (3) the Living at Home ("LAH") Waiver;[3] (4) the State of Alabama Independent Living ("SAIL") Waiver; (5) the HIV/AIDs Waiver;[4] and (6) the Technology Assisted ("TA") Waiver for Adults.[5] Doc. #20, at 24-25; see also Doc. #16 Ex. D.

The SAIL Waiver program provides numerous services for persons with specific medical diagnoses, which includes quadriplegia. Doc. #16 Ex. D. Such services include personal care, personal assistance service, environmental accessibility adaptations, medical supplies, and assistive technology. Id. However, there are limitations on the extent of such services. For example, "reimbursement for in-home personal care and assistance is limited to 25 hours per week." (Doc. #19 Ex. C, Chappelle Aff. ¶ 13). Moreover, while personal care is covered to some extent under the SAIL waiver, "skilled nursing care is not available at all under the SAIL Waiver...." Id. The SAIL Waiver program is capped at 660 persons, although the record is unclear as to whether the program is full.

The E & D Waiver program also provides numerous services "to individuals that would otherwise require the level of care available in an intermediate care facility." (Doc. #16 Ex. D). Such services include case management, homemaker services, personal care, adult day health, and respite care (skilled and unskilled). Id. As with the SAIL waiver, there are limitations on the extent of these services. For example, "`skilled' care (provided by a nurse or other health-care professional), is not available on a regular basis under the E & D Waiver, but may only be provided as respite care (relief for a regular caregiver)." (Doc. #19 Ex. C, Chappelle Aff. ¶ 14). Additionally, this respite care is limited to 720 hours per year. Although there is no hourly limit on homemaker services, personal care and adult companion services, these "would not include administration of medicine...." Id. The E & D Waiver is capped at 9,205 people and has remained at the cap since 2008. (Doc. #16, at 5). Additionally, according to the Kaiser Commission, there are over 7,000 persons on the E & D Waiver waiting list. Id.

C. Boyd's Facts

According to the Amended Complaint, Boyd is a 34 year-old man who became paralyzed after an accident in October of 1995, which broke his spine and rendered him tetraplegic—i.e. leaving him without the use of his arms and legs. (Doc. #14, at 1, ¶ 1-2; Doc. #20, at 8). Following his accident, Boyd lived with his mother and stepfather for eleven years, with his mother acting as his primary care giver. (Doc. #14, at 1, ¶ 3; Doc. #20, at 8). During this time, Boyd "was eligible for and received community-based Medicaid waiver services to complement the care being provided *1167 by his mother." (Doc. #14, at 1, ¶ 3; Doc. #20, at 8).
However, after his mother was no longer able to provide the required care, Boyd entered the nursing facility—Chandler Health and Rehab Center in Alabaster, Alabama—where he has lived since December of 2006. (Doc. #14, at 1, ¶ 3; Doc. #20, at 8). Because community-based services and reimbursement for nursing home care are mutually exclusive alternatives, the community-based Medicaid waiver services were discontinued when Boyd entered the nursing facility. (Doc. #14, at 4, ¶ 19; Doc. #20, at 8-9). Currently, Boyd is eligible for and receives Medicaid, which pays for his nursing home services. (Doc. #14, at 4, ¶ 20; Doc. #20, at 9). For ambulation, Boyd uses a motorized wheelchair which he controls with a "sip and puff" device. (Doc. #14, at 4, ¶ 21). At the nursing home, Boyd "receives assistance with his activities of daily living, including assistance with taking medications, bathing, dressing, toileting, feeding, and transferring from and to his bed and into and out of his wheelchair." Id. at 4-5, ¶ 22. He also receives assistance with basic household chores, for his bowel program (twice weekly), and for changing his catheter (twice monthly). Id. After his accident, Boyd returned to college and graduated in 2007 with a bachelor of fine arts from the University of Montevallo. (Doc. #14 at 5, ¶ 25; Doc. #20, at 9). His nursing home is 13 miles from the university. (Doc. #14, at 5, ¶ 27). While earning his bachelor's degree, Boyd took public transportation (ClasTran) to and from classes. (Doc. #14, at 5, ¶ 27; Doc. #20, at 9). The Alabama Department of Rehabilitation Services ("Rehab Services") paid for this use of ClasTran. (Doc. #20, at 9). Boyd alleges that this public transportation was, and still is, available only until 3:30 p.m. (Doc. #14, at 5-6, ¶ 27). In 2010, Boyd was admitted to a University of Montevallo graduate program seeking a Master's degree in community counseling. (Doc. #14, at 5, ¶ 26; Doc. #20, at 9). He began this program in September of 2010. (Doc. #14, at 5 ¶ 26). Because the graduate program offers classes only at night, Boyd alleges that he is unable to take the ClasTran and must "rely upon and pay a nursing home maintenance worker all of his scholarship funds ($500 for the semester) to transport him to and from campus for his classes." Id. at 6, ¶ 28; see also Doc. #20, at 9.[6] Additionally, Boyd claims that he borrows money from his brother to pay others $20 per trip for six additional trips per semester, which are needed in order to complete required assignments. (Doc. #14, at 6, ¶ 29). Essentially, Boyd wishes to receive community-based services necessary for him to live in the community, to be able to take more than two classes per semester towards his graduate degree, and to enjoy other University functions.[7]Id. ¶¶ 30-31. He has *1168 located rental housing near campus which meets his accessibility needs but is unable to secure the rental unless he knows that the necessary community-based services will be provided. Id. at 6-7, ¶ 32.[8] Boyd also complains of the conditions and atmosphere of the nursing home. Id. at 7-8, ¶¶ 34-38.[9] Boyd applied for a Medicaid waiver program in October of 2008 and has been on a waiting list for services since that time. Id. at 8, ¶ 41. 
He also "recently renewed his request for services by asking [Commissioner Steckel] to make reasonable modifications to her waiver programs and provide him with waiver services including 10 hours per day of assistance with activities of daily living, assistance with his bowel program twice per week, assistance with changing his catheter twice per month and necessary equipment and care supplies." Id. at 8-9, ¶ 41. Because Commissioner Steckel has failed to provide such community-based services, Boyd alleges that he is forced to "continue to reside in a Medicaid-funded nursing facility instead of the community."[10]

DISCUSSION

A. Preliminary Injunction Standard

The purpose of a typical preliminary injunction is prohibitive in nature in that it is "`merely to preserve the relative positions of the parties until a trial on the merits can be held.'" United States v. Lambert, 695 F.2d 536, 539 (11th Cir.1983) (quoting Univ. of Tex. v. Camenisch, 451 U.S. 390, 395, 101 S.Ct. 1830, 68 L.Ed.2d 175 (1981)); see also Mercedes-Benz U.S. Int'l, Inc. v. Cobasys, LLC, 605 F.Supp.2d 1189, 1196 (N.D.Ala.2009) ("Typically, a preliminary injunction is prohibitory and generally seeks only to maintain the status quo pending a trial on the merits.") (citations omitted). The burden on the party seeking a typical, prohibitive *1169 injunction is particularly high. All Care Nursing Serv., Inc. v. Bethesda Mem. Hosp., Inc., 887 F.2d 1535, 1537 (11th Cir. 1989) ("Preliminary injunctions are issued when drastic relief is necessary to preserve the status quo.") (citing Cate v. Oldham, 707 F.2d 1176 (11th Cir.1983); Bannum, Inc. v. City of Fort Lauderdale, Fla., 657 F.Supp. 735 (S.D.Fla.1986)), cert. denied, Quality Prof'l Nursing, Inc. v. Bethesda Mem'l Hosp., Inc., 526 U.S. 1016, 119 S.Ct. 1250, 143 L.Ed.2d 347 (1999); see also Lambert, 695 F.2d at 539 ("[A preliminary injunction's] grant is the exception rather than the rule, and plaintiff must clearly carry the burden of persuasion.") (emphasis added). However, where, as here, "a preliminary injunction goes beyond the status quo and seeks to force one party to act, it becomes a mandatory or affirmative injunction and the burden placed on the moving party is increased." Mercedes-Benz, 605 F.Supp.2d at 1196 (citing Exhibitors Poster Exchange, Inc. v. Nat'l Screen Serv. Corp., 441 F.2d 560, 561 (5th Cir.1971), reh'g denied, 520 F.2d 943 (5th Cir.1975), cert. denied, 423 U.S. 1054, 96 S.Ct. 784, 46 L.Ed.2d 643 (1976)).[11] For such mandatory injunctions, relief should be granted "[o]nly in rare instances." Harris v. Wilters, 596 F.2d 678, 680 (5th Cir.1979) (emphasis added); see also Mercedes-Benz, 605 F.Supp.2d at 1196.

A preliminary injunction is "an extraordinary and drastic remedy" that cannot be granted unless the moving party clearly establishes the following four prerequisites: (1) it has a substantial likelihood of success on the merits; (2) irreparable injury will be suffered unless the injunction issues; (3) the threatened injury to the movant outweighs whatever damage the proposed injunction may cause the opposing party; and (4) if issued, the injunction would not be adverse to the public interest. Siegel v. LePore, 234 F.3d 1163, 1176 (11th Cir.2000) (en banc), reh'g denied, 234 F.3d 1218 (11th Cir.2000). If the moving party cannot clearly establish any one of the four required elements, then a preliminary injunction should not be granted. Bethel v. City of Montgomery, No. 2:04cv743-MEF, 2010 WL 996397 at *4, 2010 U.S. Dist.
LEXIS 24949 at *11-12 (M.D.Ala. Mar. 2, 2010) ("A preliminary injunction is an extraordinary and drastic remedy which should not be granted unless the movant clearly carries the burden of persuasion as to all prerequisites.") (emphasis in original) (Coody, J.) (citations omitted); see also Church v. City of Huntsville, 30 F.3d 1332, 1342 (11th Cir.1994) (holding that the moving party's failure to demonstrate a substantial likelihood of success on the merits defeated the party's motion for a preliminary injunction, regardless of the party's ability to establish any of the other elements). Because this Court finds that Boyd has failed to establish a substantial likelihood of success on the merits sufficient to justify the use of such an extraordinary remedy as a mandatory preliminary injunction, this motion is due to be DENIED.

B. Substantial Likelihood of Success on the Merits

i. The Statutes and Regulations

Section 504 of the Rehabilitation Act and Title II of the ADA contain similar provisions and are enforced by similar implementing regulations. Section 504 provides, in part, that "[n]o otherwise qualified individual with a disability in the United States ... shall, solely by reason of his or her disability, ... be subjected to discrimination under any program or activity receiving Federal financial assistance...."[12] 29 U.S.C. § 794(a). Its implementing regulation states that "[r]ecipients shall administer programs and activities in the most integrated setting appropriate to the needs of qualified handicapped persons." 28 C.F.R. § 41.51(d). Finally, Section 504 contains a fundamental-alteration defense for the recipient of federal funds. Id. § 41.53 ("A recipient shall make reasonable accommodation to the known physical or mental limitations of an otherwise qualified handicapped applicant or employee unless the recipient can demonstrate that the accommodation would impose an undue hardship on the operation of its program.").

Similarly, Title II of the ADA also prohibits discrimination in the provision of public services. It provides, in part, that "no qualified individual with a disability shall, by reason of such disability, ... be subjected to discrimination by any [public] entity." 42 U.S.C. § 12132. Under Title II, "Congress instructed the Attorney General to issue regulations implementing provisions of Title II, including § 12131's discrimination proscription." Olmstead v. L.C. by Zimring, 527 U.S. 581, 591, 119 S.Ct. 2176, 144 L.Ed.2d 540 (1999) (citing 42 U.S.C. § 12134(a)). The Olmstead Court further explained:

One of the § 504 regulations requires recipients of federal funds to `administer programs and activities in the most integrated setting appropriate to the needs of qualified handicapped persons.' 28 CFR § 41.51(d) (1998). As Congress instructed, the Attorney General issued Title II regulations ..., including one modeled on the § 504 regulation just quoted; called the `integration regulation,' it reads: `A public entity shall administer services, programs, and activities in the most integrated setting appropriate to the needs of qualified individuals with disabilities.' 28 C.F.R § 35.130(d) (1998)

Id. at 591-92, 119 S.Ct. 2176. Like § 504, Title II of the ADA provides for a fundamental-alteration defense. 28 C.F.R.
§ 35.130(b)(7) ("A public entity shall make reasonable modifications in policies, practices, or procedures when the modifications are necessary to avoid discrimination on the basis of disability, unless the public entity can demonstrate that making the modifications would fundamentally alter the nature of the service, program, or activity."). Indeed, Congress stated in the ADA that "[t]he remedies, procedures, and rights set forth in section 505 of the Rehabilitation Act of 1973 shall be the remedies, procedures, and rights this title provides to any person alleging discrimination on the basis of disability...." 42 U.S.C. § 12133. "Because the same standards govern discrimination claims under the Rehabilitation Act and the ADA, [this Court will] discuss those claims together and rely on cases construing those statutes interchangeably." Allmond v. Akal Sec. Inc., 558 F.3d 1312, 1316 n. 3 (11th Cir.2009), reh'g en banc denied, 347 Fed.Appx. 555 (11th Cir.2009), cert. denied, ___ U.S. ___, 130 S.Ct. 1139, 175 L.Ed.2d 972 (2010).

*1171 ii. Analysis

The Supreme Court's fragmented decision[13] in Olmstead remains the seminal case on the ADA's—and therefore the Rehabilitation Act's—anti-discrimination provision. In Olmstead, the disabled persons were two women with mental illnesses—schizophrenia and a personality disorder, respectively. 527 U.S. at 593, 119 S.Ct. 2176. Both women were voluntarily confined for treatment in a Georgia hospital's psychiatric unit. Id. Eventually, their treating psychiatrists concluded that one of the community-based programs would be appropriate to meet their treatment needs. Id. However, after they remained institutionalized, they sued the State under Title II of the ADA, alleging "that the State's failure to place [them] in a community-based program, once [their] treating professionals determined that such placement was appropriate, violated, inter alia, Title II of the ADA." Id. at 593-94, 119 S.Ct. 2176. The women requested, amongst other forms of relief, that "the State place [them] in a community care residential program, and that [they] receive treatment with the ultimate goal of integrating [them] into the mainstream of society." Id. at 594, 119 S.Ct. 2176.[14]

a. Determining Qualification for Community-Based Services: the Olmstead Majority

The majority opinion in Olmstead addressed only two issues: (1) whether the women were discriminated against "by reason of" their disability and (2) whether discrimination under the ADA required a showing that the State treated similarly situated individuals outside of the protected class differently. Id. at 598, 119 S.Ct. 2176. With regards to the second issue, the Court merely stated that it was "satisfied that Congress had a more comprehensive *1172 view of the concept of discrimination advanced in the ADA." Id. As to whether there was discrimination "by reason of" disability, the Court emphasized that the ADA specifically identifies "`segregation' of persons with disabilities `as a form of discrimination.'" Id. at 600, 119 S.Ct. 2176 (citing 42 U.S.C. § 12101(a) ("[H]istorically, society has tended to isolate and segregate individuals with disabilities, and, despite some improvements, such forms of discrimination against individuals with disabilities continues to be a pervasive social problem."); Id. § 12101(a)(5) ("[I]ndividuals with disabilities continually encounter various forms of discrimination, including ... segregation.")).
Thus, the Court held that "unjustified institutional isolation of persons with disabilities is a form of discrimination." Id. This holding reflected "two evident judgments:" (1) that unnecessary institutional isolation "perpetuates unwarranted assumptions that persons so isolated are incapable or unworthy of participating in community life" and (2) that such confinement "severely diminishes everyday life activities of individuals, including family relations, social contacts, work options, economic independence, educational advancement, and cultural enrichment." Id. at 600-01, 119 S.Ct. 2176. The Court further noted that discrimination also existed because disabled persons "must, because of their disabilities, relinquish participation in community life they could enjoy given reasonable accommodations" in order to "receive needed medical services" whereas non-disabled persons need not make the same sacrifice. Id. at 601, 119 S.Ct. 2176. However, the Court also stressed that the ADA does not require deinstitutionalization when the person would be incapable of managing or benefitting from it. Id. at 601-02, 119 S.Ct. 2176 ("[N]othing in the ADA or its implementing regulations condones termination of institutional settings for persons unable to handle or benefit from community settings."). Thus, as the Court stated: The State generally may rely on the reasonable assessments of its own professionals in determining whether an individual `meets the essential eligibility requirements' for habilitation in a community-based program. Absent such qualification, it would be inappropriate to remove a patient from the more restrictive setting. See 28 C.F.R. § 35.130(d) (public entity shall administer services and programs in "the most integrated setting appropriate to the needs of the qualified individuals with disabilities" (emphasis added)); cf. School Bd. of Nassau Cty. v. Arline, 480 U.S. 273, 288, 107 S.Ct. 1123, 94 L.Ed.2d 307 (1987) ("Courts normally should defer to the reasonable medical judgments of public health officials."). Id. at 602, 119 S.Ct. 2176 (emphasis added). Because there was no genuine dispute regarding the qualifications of the women for community-based services—indeed, the State's own professionals determined that such services would be appropriate—the Court found discrimination in the failure to deinstitutionalize. Id. Here, this Court finds that Boyd cannot establish a substantial likelihood of success at this early juncture as to whether he is qualified for community-based services. In his brief in support of the summary judgment motion, Boyd addresses Olmstead's holding that "`unjustified isolation... is properly regarded as discrimination based on disability.'" (Doc. # 16, at 10) (quoting Olmstead, 527 U.S. at 597, 119 S.Ct. 2176). He then goes on to argue that being in a nursing home severely limits his everyday life activities. Id. at 10-13. However, the key in Olmstead is that the institutionalization must be unjustified *1173 and unnecessary. 527 U.S. at 596-597, 119 S.Ct. 2176. Hence, the Olmstead majority required a showing that the women qualified for community-based services—i.e. that community-based services were appropriate for them.[15]Id. at 602-03, 119 S.Ct. 2176. This burden was met in that case because neither party disputed it. Id. In the instant case, Boyd has declared what his needs would be should he be provided with community-based services. 
Specifically, he states that he will require "ten hours per day of assistance with activities of daily living, assistance with his bowel program twice weekly, assistance with replacement of his catheter twice monthly and necessary equipment and care supplies." (Doc. # 16, at 14). He also asserts that Commissioner Steckel "does not dispute that Plaintiff Boyd is a qualified person with a disability who meets the eligibility requirements for Alabama's Medicaid nursing home `level of care' as well as for its waiver and Medicaid programs." Id. at 7. In response, Commissioner Steckel admits that Boyd is eligible for nursing home level of care but argues that "the issue of whether [Boyd] is qualified for Medicaid Waiver services, insofar as the ADA and Rehab[ilitation] Act define that term, is contested." (Doc. # 20, at 52 n. 31). Commissioner Steckel has put evidence before this Court in the form of an affidavit by Dr. Robert Moon, ("Dr. Moon"), Medical Director and Deputy Commissioner of Health Systems for the Alabama Medicaid Agency. (Doc. # 22 Ex. B). After reviewing Boyd's medical records, Dr. Moon contends that numerous additional services would be needed to ensure that Boyd's needs are met. Id. ¶ 7. He also points to several of Boyd's past health issues, one of which required hospitalization. Id. ¶ 8. Essentially, Dr. Moon *1174 states that more care and more expertise than that requested by Boyd would be needed in order to monitor for and remedy these health issues should they occur again. Commissioner Steckel is entitled to rely on Dr. Moon's assessment and conclude that the community-based services requested by Boyd are inappropriate for his needs. Olmstead, 527 U.S. at 602, 119 S.Ct. 2176 ("The State generally may rely on the reasonable assessments of its own professionals in determining whether an individual `meets the essential eligibility requirements' for habilitation in a community-based program.").[16] Thus, "[i]t would be inappropriate to remove [Boyd] from the more restrictive setting"—at least until Boyd can demonstrate, at summary judgment or trial, that Dr. Moon's assessment is unreasonable or that he is still qualified for community-based services even under Dr. Moon's assessment.[17]Id. Without more at this stage, this Court cannot find that Boyd has established a substantial likelihood of proving his qualification for the community-based services requested— i.e. that they are appropriate to meet his needs.[18] Furthermore, according to the federal regulations, Alabama is entitled to exclude individuals from waiver programs where *1175 "there is a reasonable expectation that home and community-based services would be more expensive than Medicaid services the individual would otherwise receive." 50 Fed. Reg. 10,013. Attempting to prove that community-based services are cheaper than nursing home care, Boyd points to data showing that "Alabama's Medicaid nursing home reimbursement is approximately $33,700 a year" whereas the "Medicaid waiver for home and community-based services [under the E & D Waiver] is approximately $10,365." (Doc. # 16, at 5). Thus, Boyd claims that the use of community-based services saves Alabama and the federal government approximately $22,000 a year. However, the data relied upon by Boyd refers to cost-neutrality in the "average per capita expenditures," not cost-neutrality as it relates to him in particular. (Doc. # 16 Ex. E, at 8). 
Commissioner Steckel contends that Boyd would need significantly more hours of care, more expertise in care, and more services and equipment than requested by Boyd or provided under the E & D Waiver in order to live in the community. Given this dispute, the evidence provided by Dr. Moon, and the lack of any evidence from a medical professional supporting Boyd's contentions as to his needs, Boyd cannot establish a substantial likelihood of success on the issue of whether it would be more cost-efficient to treat him in the community.

b. The Fundamental-Alteration Defense: the Olmstead Plurality

Additionally, Boyd has failed to establish a substantial likelihood of success on the allegedly reasonable modifications requested by him. The plurality in Olmstead discussed the fundamental-alteration defense advanced by the State—namely, that all of its available funds were already being used to provide services to other disabled persons—and rejected the Court of Appeals' holding that the State must show that the cost of providing community care to the women was unreasonable in comparison to its entire mental health budget. 527 U.S. at 604, 119 S.Ct. 2176. The plurality further clarified:

[Such an interpretation] would leave the State virtually defenseless once it is shown that the plaintiff is qualified for the service or program she seeks. If the expense entailed in placing one or two people in a community-based treatment program is properly measured for reasonableness against the State's entire mental health budget, it is unlikely that a State, relying on the fundamental-alteration defense, could ever prevail.... Sensibly construed, the fundamental-alteration component of the reasonable-modifications regulation would allow the State to show that, in the allocation of available resources, immediate relief for the plaintiffs would be inequitable, given the responsibility the State has undertaken for the care and treatment of a large and diverse population of persons with ... disabilities.

Id. at 603-04, 119 S.Ct. 2176 (emphasis added). Noting that deinstitutionalization might never be appropriate for some persons, the plurality made clear that the ADA was not designed to eradicate institutions or to force deinstitutionalization on persons when it would be inappropriate. Id. at 604, 119 S.Ct. 2176 ("[T]he ADA is not reasonably read to impel States to phase out institutions, placing patients in need of close care at risk. Nor is it the ADA's mission to drive States to move institutionalized patients into an inappropriate setting...."). In emphasizing the "leeway" that must be given to States to "maintain a range of facilities and to administer services with an even hand," the plurality highlighted the unfairness associated with ordering a State to deinstitutionalize one person under certain circumstances. Id. at 605, 119 S.Ct. *1176 2176. For example, if a State demonstrates a "comprehensive, effectively working plan for placing qualified persons with... disabilities in less restrictive settings, and a waiting list that moved at a reasonable pace not controlled by the State's endeavors to keep its institutions fully populated, the reasonable-modifications standard would be met." Id. at 605-06, 119 S.Ct. 2176. The plurality stated that courts could not allow one to essentially line-jump such a program for providing community-based services. Id. at 606, 119 S.Ct.
2176 ("In such circumstances, a court would have no warrant effectively to order displacement of persons at the top of the community-based treatment waiting list by individuals lower down who commenced civil actions."). Thus, the plurality concluded that a State is required to provide community-based services for disabled persons when several factors are met: (1) "the State's treatment professionals determine that such placement is appropriate"; (2) "the affected persons do not oppose such treatment"; and (3) "the placement can be reasonably accommodated, taking into account the resources available to the State and the needs of others with ... disabilities." Id. at 607, 119 S.Ct. 2176. In addition to disputing the first element, Commissioner Steckel argues that Boyd's placement into the community would not be a reasonable accommodation, but rather would result in a fundamental alteration of Alabama's Medicaid system. See Doc. # 20, at 61-65. Assuming that Boyd does qualify for an existing waiver program or for a modified waiver program, this Court must be mindful of the limitations on the Olmstead plurality's discussion of the fundamental-alteration defense. Boyd has averred that the E & D Waiver—which Commissioner Steckel disputes whether Boyd is qualified to be under—is capped at 9,205 people and has been at that cap since 2008. The DOJ contends that Alabama need only request an increase in the cap for a particular waiver program in order to comply with the ADA. (Doc. # 25, at 9). However, the Medicaid waiver program at issue in Olmstead had unused slots open. 527 U.S. at 601, 119 S.Ct. 2176. Thus, the Olmstead Court "did not consider whether a forced change in the waiver program's cap would constitute a fundamental alteration, because the [S]tate's program in that case was far from full." Arc. of Wash. State Inc. v. Braddock, 427 F.3d 615, 619 (9th Cir.2005). Although there is no applicable precedent from the Eleventh Circuit, other circuits have addressed the issue of what would constitute a fundamental alteration, with seemingly conflicting results. For example, the First Circuit has stated that "in no event is the [State] required to undertake measures that would pose an undue financial or administrative burden... or effect a fundamental alteration in the nature of the service." Toledo v. Sanchez, 454 F.3d 24, 39 (1st Cir.2006) (emphasis added), cert. denied, Univ. of P.R. v. Toledo, 549 U.S. 1301, 127 S.Ct. 1826, 167 L.Ed.2d 356 (2007). On the other hand, the Third Circuit has held that budgetary constraints, "[t]hough clearly relevant," are alone "insufficient to establish a fundamental alteration defense." Pa. Prot. & Advocacy, Inc. v. Pa. Dep't of Pub. Welfare, 402 F.3d 374, 380 (3rd Cir.2005) (citations omitted). Going further, the Ninth Circuit has held that "[o]ne basis for finding a `fundamental alteration' would have been for the [S]tate to demonstrate that the remedy would force it `to apply for additional Medicaid waivers in order to provide community-based services'" to the plaintiffs. Id. (quoting Townsend v. Quasim, 328 F.3d 511, 519 (9th Cir.2003)); see also Bruggeman v. Blagojevich, 219 F.R.D. 430, 435 (N.D.Ill.2004) (rejecting the argument that a court can consider the *1177 fact that the State can "request additional waiver slots to expand community-based services" as part of the State's available resources because it is "beyond the scope of inquiry permissible under Olmstead"). 
Requiring Alabama to seek more waiver slots could very well be a fundamental alteration as the Ninth Circuit has held. Were this Court to grant a preliminary injunction here, nothing would prevent these other thousands of persons on the waiting lists from filing lawsuits and being granted preliminary injunctions that essentially increase the waiver cap. Cf. Long v. Benson, No. 4:08cv26-RH/WCS, 2008 WL 4571903 at *2 (N.D.Fla. Oct. 14, 2008) ("[C]ommon sense and experience suggest there is nothing that can be done for [the plaintiff] in a nursing home that cannot also be done in his apartment complex. Indeed, this is true of most if not all services provided in nursing homes for most if not all patients."), affirmed, 383 Fed.Appx. 930 (11th Cir.2010). Such a result would hardly render preliminary injunctions a drastic and extreme remedy. Furthermore, it could potentially disrupt the entire balance of the Alabama Medicaid program, rendering the permissible caps illusory and requiring Alabama to provide community-based services to anyone and everyone who qualifies for a particular Medicaid waiver program or a modified version of that program.[19] See Olmstead, 527 U.S. at 604, 119 S.Ct. 2176 ("[T]he ADA is not reasonably read to impel States to phase out institutions, placing patients in need of close care at risk").[20] Additionally, nothing in the record establishes whether the waiting list "move[s] at a reasonable pace not controlled by *1178 [Alabama's] endeavors to keep its institutions fully populated." Olmstead, 527 U.S. at 605-06, 119 S.Ct. 2176. Simply stating that the waiver program is capped, which is permitted under the Medicaid Act, does not mean that this is anything but "a comprehensive, effectively working plan." Id. Although Alabama bears the burden of establishing the existence of such a program, this Court cannot find—on the record before it—that there is a substantial likelihood that Alabama will not be able to meet this burden. Cf. Townsend, 328 F.3d at 519 (reversing summary judgment for the State because the "current record [did] not provide [the court] with sufficient information to evaluate the ... fundamental alteration defense"). Finally, Boyd has not pointed to anything—nor can this Court find anything—that would distinguish him from the other thousands of persons on waiting lists for community-based services under Alabama's Medicaid program. Permitting Boyd to jump ahead of others on the waiting list merely because he filed a lawsuit goes against the express language of the Olmstead plurality, which this Court will not do. See Olmstead, 527 U.S. at 606, 119 S.Ct. 2176 ("In such circumstances, a court would have no warrant effectively to order displacement of persons at the top of the community-based treatment waiting list by individuals lower down who commenced civil actions.") (emphasis added). Given the fragmented nature of the Olmstead opinion, the lack of guidance as to what constitutes a fundamental alteration, and the potential conflict between the Medicaid Act and the ADA and Rehabilitation Act, this Court cannot find that Boyd has established a substantial likelihood of success on the merits as to whether the relief he seeks would constitute a reasonable modification or a fundamental alteration. The uncertainty is heightened by the fact that Boyd seeks a mandatory preliminary injunction requiring him to satisfy a heightened burden.[21]

CONCLUSION

For the foregoing reasons, it is hereby ORDERED that the motion for a preliminary injunction, (Doc. # 15), is DENIED.
NOTES [1] This recitation of facts is based on the allegations in the Amended Complaint, (Doc. #14), and the evidence and testimony submitted by the parties in support of and in opposition to the motion for a preliminary injunction. [2] The ID Waiver (formerly known as the Mental Retardation ("MR") Waiver) is available only for persons with intellectual disabilities. (Doc. #19 Ex. C, Chappelle Aff. ¶ 9). As such, it is inapplicable in the instant case. [3] The LAH Waiver is available only for persons with intellectual disabilities. (Doc. #19 Ex. C, Chappelle Aff. ¶ 9). As such, it is inapplicable in the instant case. [4] The HIV/AIDS Waiver is available only for persons diagnosed with HIV, AIDS, and related illnesses. (Doc. #19 Ex. C, Chappelle Aff. ¶ 10). As such, it is inapplicable in the instant case. [5] The TA Waiver for Adults is "available only for persons who received private duty nursing services through the EPSDT Program under the Medicaid State Plan prior to turning 21 years of age." Doc. #19 Ex. C, Chappelle Aff. ¶ 11. Because there is no evidence on the record establishing whether Boyd received such services, the TA Waiver for Adults is inapplicable at this juncture. Id. [6] Commissioner Steckel argues that Boyd's "assertion that he has been `forced' into this position is disingenuous, however, as he failed to request assistance from Rehab Services—which has a history and current practice of assisting him with transportation." (Doc. #20, at 10). At the October 13, 2010 hearing, Boyd admitted that he was satisfied with his current transportation arrangements. However, he also claimed that these arrangements could not continue past the spring semester, after which he alleges that he will lose the $500 per semester scholarship he currently uses to pay the nursing home employee. [7] Specifically, Boyd contends that he "is experiencing an increase in time, effort and expense for him to complete his graduate degree and is being prevented from participating in aspects of college life enjoyed by other graduate students." Id. at 6, ¶ 31. He is currently taking two nighttime classes for his graduate program, but he would like to take four. Id. ¶ 30. Boyd also points out that he is unable to attend other University functions such as athletic events, author readings, theatrical performances, and musical performances. Id. [8] Because "accessible rental housing is difficult to locate and secure," Boyd alleges that he "needs to act as soon as possible in order to secure rental housing." Id. at 7-8, ¶ 32. [9] Specifically, Boyd alleges that, since he moved into the nursing home in December of 2006, he has had five decubitus ulcers ("pressure sore[s]"). Id. at 7, ¶ 34. In the eleven years when his mother was his primary care giver, he only had one. Id. Furthermore, Boyd contends that he "must return to the facility by a specific time in the evening, which limits his socializing with friends and having overnight stays with friends." Id. ¶ 35. Because he is a Medicaid resident in a nursing home, only $30 of his $897 monthly Social Security Disability check is available for his personal expenses, again limiting his ability to socialize or have snacks. Id. He claims that "virtually all of the [other] residents are disabled and most are much older than [him]" taking away the "simple pleasure of being around people his own age who have similar interests and activities." Id. at 8, ¶ 37. 
As such, he does not take part in many of the nursing home's activities because they are "geared toward octogenarians." Id. He also points out that he "must eat when and what the facility provides and must transfer to and from his bed, shower, toilet, and dress on the staff's schedule." Id. at 7, ¶ 36. To accommodate others with their morning routines, the nursing home staff often pushes Boyd's morning routine back until 11 a.m. or later. Id. Thus, he contends that he "is unable to get out of bed until 11 a.m. unless he schedules his activities a few days in advance." Id. at 7-8, ¶ 36. This also applies to his evening routine. Id. at 8, ¶ 36. Finally, Boyd complains that "[t]here is little or no privacy" in the nursing home, where "[t]here is constant noise, screaming and crying." Id. ¶ 38. He claims that the nursing home residents "are often not dressed and are exposed" and "[t]here is a pervasive unpleasant disinfectant odor." [10] He describes this as being "unnecessarily institutionalized." Id. at 8, ¶¶ 40-41. [11] Unless subsequently overruled, decisions of the old Fifth Circuit before October 1, 1981 are binding on the Eleventh Circuit. Bonner v. City of Prichard, 661 F.2d 1206, 1209 (11th Cir. 1981) (en banc). [12] Neither party disputes "that the Federal Government funds a substantial portion of what Alabama spends on Medicaid." (Doc. #20, at 18); see also Alexander, 469 U.S. at 301, 105 S.Ct. 712 ("Medicaid is a joint state-federal funding program for medical assistance in which the Federal Government approves a state plan for the funding of medical services for the needy and then subsidizes a significant portion of the financial obligations the State has agreed to assume."). [13] The Olmstead decision consisted of a majority of five justices—Justices Ginsburg, Stevens, O'Connor, Souter, and Breyer—joining in Parts I, II, and III.A. However, only a plurality of four—Justices Ginsburg, O'Connor, Souter, and Breyer—joined in part III.B, which is the main focus of the instant case. Furthermore, Justice Stevens filed an opinion concurring in part and concurring in the judgment. Justice Kennedy filed an opinion concurring only in the judgment, in which Justice Breyer joined as to Part I. Finally, Justice Thomas dissented, joined by Justices Rehnquist and Scalia. [14] Procedurally, the District Court granted partial summary judgment in favor of the women, holding that the failure to provide community-based services violated Title II of the ADA. Olmstead, 527 U.S. at 594, 119 S.Ct. 2176. The District Court also rejected the State's argument that the failure to provide community-based services was "by reason of" lack of funds, not the women's disabilities. Id. The court concluded that "unnecessary institutional segregation of the disabled constitutes discrimination per se, which cannot be justified by lack of funding." Id. Finally, the District Court rejected the State's fundamental-alteration defense—namely that it was already using all of its funds to provide services to other disabled persons—noting that the State already provided services of the kind which the women sought and that the provision of such services to the women would cost considerably less than institutionalization. Id. at 595, 119 S.Ct. 2176. The Eleventh Circuit affirmed the judgment, but remanded for reconsideration of the State's lack-of-funds defense. Id.
The appeals court held that, when the treating physician finds community-based services appropriate to meet the needs of a disabled person, then the State must provide such services under the ADA. Id. Absent such a finding, the Eleventh Circuit held that the ADA does not require deinstitutionalization. Id. However, the duty to deinstitutionalize was "not absolute" because "fundamental alterations [to the State's Medicaid program] were not demanded." Id. As such, the Eleventh Circuit remanded so that the State could attempt to prove that "the additional expenditures necessary to treat [the women] would be unreasonable given the demands of the State's mental health budget." Id. [15] The Olmstead Court's language on this issue is somewhat confusing, since the Medicaid statute refers to a "qualified individual with a disability," which is "an individual with a disability who, with or without reasonable modifications to rules, policies, or practices... meets the essential eligibility requirements for receipt of services or the participation in programs or activities provided by the public entity." 42 U.S.C. § 12131(2). In its brief in support of Boyd's preliminary injunction motion, the United States Department of Justice ("DOJ") contends that Commissioner Steckel's argument on Boyd's qualifications "conflates the question of eligibility with the question of whether the relief sought is a reasonable modification." (Doc. # 25 at 12). However, a closer reading of Olmstead reveals that the Court's requirement of being qualified for community-based services means that the services are appropriate to meet the individual's needs. 527 U.S. at 602, 119 S.Ct. 2176 ("The State generally may rely on the reasonable assessments of its own professionals in determining whether an individual `meets the essential eligibility requirements' for habilitation in a community-based program. Absent such qualification, it would be inappropriate to remove a patient from the more restrictive setting.") (emphasis added) (citing 28 C.F.R. § 35.130(d) (public entity shall administer services and programs in "the most integrated setting appropriate to the needs of the qualified individuals with disabilities") (emphasis added by the Supreme Court)). Otherwise, the citation to the integration provision and the emphasis on the appropriateness language would be unnecessary and irrelevant. Additionally, a contrary interpretation would mean that a person who qualifies for Medicaid generally would then automatically qualify for community-based services as a "qualified individual with a disability" even if it were not the most appropriate setting for his needs. Such a result would be contrary to the integration provision's express language. See 28 C.F.R. § 35.130(d); 28 C.F.R. § 41.51(d); see also Olmstead, 527 U.S. at 607, 119 S.Ct. 2176 (holding that one element necessary for a State to be required to provide community-based services is that "the State's treatment professionals determine that such placement is appropriate") (emphasis added). [16] Boyd himself has presented no evidence from a medical professional that supports his views of what his needed medical services would be in the community setting. Instead, he merely states what he believes his needs would be and asserts that his treating physician at the nursing home supports his decision to move out. Even without the conflicting evidence presented by Dr.
Moon, this Court could not find these bare assertions sufficient to prove that the requested community-based services would be appropriate to meet Boyd's needs. [17] The DOJ contends that the fact that Boyd lived in the community for eleven years demonstrates that community-based services are appropriate for his needs. (Doc. # 25 at 12). However, the record before this Court does not contain sufficient information to determine (1) what services Boyd actually received while his mother was his primary caregiver; (2) whether those services alone would be enough without his mother acting as primary caregiver; (3) what additional services, if any, would be needed for Boyd to live in the community; and (4) whether Boyd's medical needs have changed since he lived in the community. On such a barren record, this Court cannot find that the fact that Boyd lived in the community for eleven years with a relative acting as primary caregiver, in and of itself, establishes that community-based services are appropriate for his needs now. [18] In support of Boyd's motion for a preliminary injunction, the DOJ places much emphasis on a recent case from the Middle District of Florida. (Doc. # 25, at 6 n. 6) (citing Haddad v. Arnold, No. 3:10-cv-00414-MMH-TEM (M.D.Fla. July 9, 2010)); see also id. at 11. In Haddad, the court issued a preliminary injunction requiring the State to provide community-based services for a woman with quadriplegia. However, the facts of Haddad are easily—and pertinently—distinguishable from the instant case. In Haddad, the plaintiff pointed to a specific waiver, already in existence, which appeared to have open slots and provided the services requested by her. Haddad, at 27. The State did not dispute that she was qualified for that program. Id. Similarly, in Olmstead, the women qualified for a specific waiver program, in which slots were still available. 527 U.S. at 601, 119 S.Ct. 2176. Here, Boyd has failed to establish which specific waiver, if any, already provides the services he requests. Even looking to the E & D Waiver that Boyd discussed briefly, he has failed to establish how it provides all of the services he requested and/or would need to live in the community. Indeed, Commissioner Steckel contends that none of the existing waivers covers the community-based services requested by Boyd. See Doc. # 20, at 64; see also Doc. # 19 Ex. C, Chappelle Aff. ¶¶ 5-15. She also specifically contends that the E & D Waiver does not cover the equipment requested by Boyd or the skilled care that Dr. Moon determined he would need. (Doc. #19 Ex. C, Chappelle Aff. ¶ 14). Boyd has done nothing to rebut this evidence at this stage. [19] In Haddad, the Middle District of Florida considered similar arguments about the apparent conflict between the ADA and the Rehabilitation Act's anti-discrimination provisions—as applied by the Olmstead Court to prevent unnecessary institutionalization—and the Medicaid Act—which has an extensive regulatory and statutory scheme that permits capped waiver programs. Rejecting the State's contentions that there was a conflict, the court concluded that the plaintiff's claim "simply addresses the question of whether these Defendants, having opted to provide particular services via the mechanism of a Medicaid Waiver Program, may be required, under the ADA, to provide those same services to [the plaintiff] if necessary to avoid imminent, unnecessary institutionalization." Haddad at 29.
Because Haddad is factually distinguishable from this case—neither party disputed the plaintiff's qualifications for a particular waiver program which may have had open slots—this Court is not convinced that the issue is so narrow in the instant case, particularly given the Olmstead Court's holding that the courts must look at the big picture. 527 U.S. at 606, 119 S.Ct. 2176 (requiring courts to consider whether "the placement can be reasonably accommodated, taking into account the resources available to the State and the needs of others with ... disabilities") (emphasis added). Thus, the potential impact of a grant of preliminary injunctive relief on the State's Medicaid program, and its waiver programs in particular, is an appropriate consideration. To be clear, this Court is not holding that the ADA and the Rehabilitation Act do not apply to a State who chooses to have Medicaid. However, this Court is not convinced that the intended interaction between the statutes is such that States who choose to have Medicaid and who choose to use optional waiver programs must therefore provide such community-based services to all persons who could benefit from them even when the waiver programs are full. [20] For these same reasons, this Court finds that the balance of hardships does not favor granting a preliminary injunction and that it would not be in the public interest to grant such injunctive relief at this stage. Without a more developed record and an opportunity to more fully brief the issues, the grant of preliminary injunctive relief poses a grave risk of setting precedent which could undermine Alabama's Medicaid scheme, negatively impacting those other disabled persons receiving Medicaid funds. For these additional reasons, the motion is due to be DENIED. [21] Even if his pleadings could be construed as seeking a typical prohibitory injunction, Boyd still cannot establish the substantial likelihood of success on the merits sufficient to upset the status quo in this case.
{ "pile_set_name": "FreeLaw" }
Browse the latest Pharmaprix Flyer, valid February 23 – March 1, 2019. Don’t miss the Pharmaprix Specials and beauty sales from the current flyer. As with many other retailers, you can get benefits when you shop using the Optimum program, which comes in the form of a plastic or digital card or key fob. The […] Pharmaprix Flyer. Browse the Pharmaprix weekly flyer, online shopping specials, latest deals, sales and offers. View all the specials from Pharmaprix for the coming week right here. Browse your local flyer from the comfort of your home. Pharmaprix is a subsidiary of Canada’s biggest pharmaceutical retailer, Shoppers Drug Mart Corporation. In 2013, Shoppers Drug Mart was acquired by Loblaw Companies, which is now the parent company of both Pharmaprix and Shoppers Drug Mart. Pharmaprix carries three main departments, namely health and wellbeing, beauty, and food & home, besides offering a range of typical pharmaceutical services such as filling prescriptions, renewing prescriptions and others. Some of the brands available at Pharmaprix stores are Biotherm, Chanel Fragrances, DuWop, Ducray, and Dolce & Gabbana. Products within the beauty category vary between general makeup and skin care to nails products, bath & body, and hair care. Pharmaprix also carries gifts & sets at convenient prices. In addition, Pharmaprix deals offer specially priced items and further savings are available through the Pharmaprix weekly flyer. The flyer usually includes senior’s day savings, 2-day sale deals and various discounts on select items. Why Canada Flyers? If you want the best specials and sales, then the Canada grocery & retail store flyers are great for saving money on food, appliances, electronics, household products, groceries, home decor, toys, clothing, footwear, furniture, tools, beauty products, and more. Everything you need to know before going to shop, regarding current specials, promotions & sales, can be found in this week's flyers.
{ "pile_set_name": "Pile-CC" }
/* Instrument.java -- A MIDI Instrument Copyright (C) 2005 Free Software Foundation, Inc. This file is part of GNU Classpath. GNU Classpath is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. GNU Classpath is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNU Classpath; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Linking this library statically or dynamically with other modules is making a combined work based on this library. Thus, the terms and conditions of the GNU General Public License cover the whole combination. As a special exception, the copyright holders of this library give you permission to link this library with independent modules to produce an executable, regardless of the license terms of these independent modules, and to copy and distribute the resulting executable under terms of your choice, provided that you also meet, for each linked independent module, the terms and conditions of the license of that module. An independent module is a module which is not derived from or based on this library. If you modify this library, you may extend this exception to your version of the library, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version. */ package javax.sound.midi; /** * The abstract base class for all MIDI instruments. * * @author Anthony Green ([email protected]) * @since 1.3 * */ public abstract class Instrument extends SoundbankResource { // The instrument patch. private Patch patch; /** * Create a new Instrument. * * @param soundbank the Soundbank containing the instrument. * @param patch the patch for this instrument * @param name the name of this instrument * @param dataClass the class used to represent sample data for this instrument */ protected Instrument(Soundbank soundbank, Patch patch, String name, Class<?> dataClass) { super(soundbank, name, dataClass); this.patch = patch; } /** * Get the patch for this instrument. * * @return the patch for this instrument */ public Patch getPatch() { return patch; } }
{ "pile_set_name": "Github" }
Hunters take part in a wolf hunt near the town of Blace, southern Serbia, January 27, 2018. — AFP pic

BLACE (Serbia), Feb 15 — Rifle fire rips through the silence of the forest and fields on the slopes of Jastrebac mountain in southern Serbia. Two wolves have just fallen victim to a legal hunt. Forbidden in most of western Europe, the blood sport is allowed from July to April in this Balkan country, where wolves are not endangered. Around 800 of them roam the wild and depopulated mountains of southern Serbia, a region of mostly poor farmers and herders. It is not uncommon for wolves to attack livestock, especially in winter. “Last year they slaughtered four of my sheep in just five minutes,” said farmer Ivan Milenkovic, who keeps around 60 sheep in the village of Dresnica. “I installed spotlights that light up every night to deter them,” he told AFP.

Silent chase

Other mountain residents take up arms during the hunting season to counter the wolf attacks. Local hunting associations that monitor the wolf population set quotas. On a recent cold winter’s dawn, more than 400 hunters gathered near Blace, a town of about 5,000 people between the mountains of Jastrebac and Kopaonik. After swigging some rakija (local fruit brandy), the hunters split into two groups, the trackers and the watchers, and exchanged their traditional greeting: “Good vision and calm hand!” The silent watchers spread out in a line through the woods, while the trackers form another line a couple of kilometres away and walk towards the watchers, squeezing the gap between them which holds their prey. As they wait amid the trees, the watchers examine fresh wolf prints in the snow. According to regional hunting quotas, six wolves can be killed in the Blace area in one hunting season. “You wait your whole life to kill a wolf,” said Dejan Pantelic, one of the hunters. “It’s extremely rare, many never see it. I’ve been hunting for 24 years and I’ve not killed one.” The wolf is smart with an exceptional sense of smell and hearing, the 42-year-old explained. And few animals are more mobile — the wolf can easily travel between 50 and 100 kilometres a day. “An isolated hunter has practically no chance of killing a wolf, only an organised hunt can yield results,” Pantelic said.

After the hunt

The Blace hunt has become a social event which has run for the past two decades, culminating with a feast that brings together hunters and villagers. Adults and children greet the hunters on their return, looking at them curiously. On the day AFP was invited to the hunt, the men who took part had killed two wolves and three foxes, whose bodies were then roped onto car bonnets. Local people had photographs taken of themselves and their children with the dead animals. Nikola Milincic, 24, was proud of shooting down a light-furred wolf just six years after his hunting debut. “I saw her at about 50 steps, I shot and I was successful,” he said. “Some people wait for this moment for their whole hunting life, without success... Luck was with me today.” The other successful hunter, Borica Vukicevic, had been waiting 38 years for “his wolf”. The 63-year-old stood beside his catch, a she-wolf covered in dark grey fur and bearing sharp fangs that remained exposed in death. In the Balkans, wolf hunting is also allowed in Montenegro, Bosnia and Macedonia, but prohibited in Albania, Croatia and Kosovo.
According to estimates from hunting associations, there are up to 3,000 wolves in Macedonia, about 800 in Bosnia, 500 in Montenegro, 400 in Kosovo, 300 in Albania and 200 in Croatia. — AFP
{ "pile_set_name": "Pile-CC" }
Poly alumni in the arts As long as there have been alumni of the school, there have been alumni who found success as professional artists. The list below is just a few of the alumni that have gone on to capture a wider audience’s attention.
{ "pile_set_name": "Pile-CC" }
Freeze Frame - Back to Amerks Hockey

Earlier this month I returned to where it began for me as a professional photographer. I returned to the Rochester War Memorial to shoot a Rochester Americans hockey game. As a former Amerks team photographer, the last time I covered a game there was about thirty years ago. I was still shooting film back then! As I approached the War Memorial pass gate, I came upon a familiar face. The same Steve who was there, checking in media members all those years ago. He was surprised to see me and when he started to fill in my last name on the pass, I began to spell it for him. He stopped me and said with a smile, “I still remember how to spell it.” It took me about a period to shake the rust off but once I got going, it was as though I never left. As I focused in on the players, I saw number 21 come into my frame and I couldn’t help but think about the little speedster Claude Verret, but now Zac Dalpe is wearing his number. As number 16, Matt Pelech, skated by me, I flashed back to the helmetless veteran, Yvon Lambert, who didn’t have to abide by the AHL helmet rule because of a grandfather clause at the time. Then there was number 15, Chris Langevin, and the reckless way he would sacrifice his body to make a play. What Langevin lacked in skating ability he made up for with pure grit. I didn’t see a number 9 out on the rink like I used to because that number now resides on a banner, high above the ice surface, hanging from the rafters. That number will forever belong to Amerk great, Jody Gage. Like most professional photographers, my career started as a hobby. In the early days I gravitated to shooting what I was most interested in and during the 1980s, it was hockey. The ultimate was when I started getting paid for my hobby. The reason I requested a media credential from the Amerks this month was mostly out of a nostalgic desire to go back to my photographic roots. To go back to the days when my work was more of a hobby than a business. To go back to the days when Val James patrolled the ice at the War Memorial like he owned the place and Jim Hofford delivered punishing hip checks at the red line and I had the opportunity to capture it all on film.
{ "pile_set_name": "Pile-CC" }
Hezbollah's Gambit in Lebanon

The Lebanese government has collapsed after Hezbollah and its allies resigned their seats in the cabinet, leaving it without a governing majority. Hezbollah had threatened to resign over an ongoing United Nations investigation that is expected to blame the group for the 2005 bombing that killed then-Prime Minister Rafik Hariri. His son Saad Hariri, the current prime minister, refuses to condemn the investigation as Hezbollah demands. Here's what Lebanon-watchers are saying about this developing political crisis and what it means for the country.

Hezbollah's Mission to Dominate Lebanon

Thanassis Cambanis writes in the New York Times that this is "the final stage in Hezbollah’s rise from resistance group to ruling power. While Hezbollah technically remains the head of the political opposition in Beirut, make no mistake: the Party of God has fully consolidated its control in Lebanon, and will stop at nothing--including civil war--to protect its position." However, "Hezbollah cannot afford the blow to its popular legitimacy that would occur if it is pinned with the Hariri killing. The group's power depends on the unconditional backing of its roughly 1 million supporters. Its constituents are the only audience that matters to Hezbollah."

Can Lebanese People Resist Hezbollah?

The Council on Foreign Relations' Elliott Abrams writes that "the majority of Lebanese who oppose Hizballah, and who are mostly Maronite Catholics, Druze, and Sunni, must demonstrate that they have the will to keep their country from complete domination by the Shia terrorist group. This is asking quite a bit, to be sure, but Lebanese should have learned from the impact of their March 14, 2005 demonstrations that world support can be rallied and their opponents can [be] pushed back. But they must take the lead." He concludes, "Those who wish Lebanon well must also hope that its political leaders and its populace show the considerable courage that this crisis demands of them."

Dilemma for Hezbollah and Its Sponsor, Syria

"Neither Hizbullah nor Syria is pleased with what is going on," The Beirut Daily Star's Michael Young writes. "For the party, all the contentious means of crippling the tribunal have grave shortcomings. A serious political or security escalation would only harden discord at a moment when Hizbullah’s primary goal is to show that Lebanon is united in its rejection of the special tribunal. As for [Syrian President Bashar] Assad, if he pushes too hard, he may lose for good the Lebanese Sunni card, which he has worked for years to regain. Hariri alone can issue Hizbullah with a certificate of innocence, and if the prime minister decides to sit the coming period out of office, it is difficult to see how any opposition-led government would function properly."

No Good Options for U.S.

Here, The New York Times' Mark Landler and Robert Worth explain: "In contrast to the Iranian case, where the Obama administration doggedly stitched together a sanctions campaign that it claims has delayed Iran’s pursuit of a nuclear bomb, the United States has fewer options in Lebanon." While the U.S. supports Hariri and opposes Hezbollah, there's not much they can actually do. "The American role has largely been confined to advising Mr. Hariri to stand firm in his support for the tribunal."

Hezbollah's Distorted View of Itself

The Center for New American Security's Andrew Exum points out that Hezbollah is in part motivated by a misguided sense of "insecurity."
While the group is very powerful, it "sees itself as so very weak." He explains, "To an outsider, Hizballah looks like the big bully in Lebanon--which it most certainly is. But from within the organization, all many can see are enemies: Saudi Arabia, Israel, March 14th, the United States, etc. Just because you're paranoid does not mean people are not out to get you, and we know that Hizballah's domestic enemies have conspired with forces outside Lebanon to weaken Hizballah's standing."

How This Might End

"In the end, and this may well have been Hezbollah's intention," predicts Steven Heydemann in Foreign Policy, "the collapse of the government could pave the way for an exit from the current stalemate." In the short term, "A new Lebanese Prime Minister-with Saudi and Syrian backing-concedes to Hezbollah's demands and rejects the findings of the Special Tribunal, while a neutered Hariri and his supporters rail against the injustice from the back benches of Lebanese Parliament." This would "avoid outright bloodshed" and "reinforce [the UN tribunal's] already politicized image in the Arab world." In the long term, "It would decisively consolidate Hezbollah's standing as Lebanon's dominant political force, and signal with a whimper rather than a bang, the final demise of Lebanon's Cedar Revolution."
{ "pile_set_name": "Pile-CC" }
All kinds of cool ways to get your movie made without breaking the bank... Saturday, July 31, 2010 Matthew Sorrels' FF DSLR Camera Rig Here's a photo of my version of your camera rig (I mentioned it in your comment section). As you can see I changed the back elbow so it can rest on the table flat. I also made the left side a bit larger using 6" pipes rather than 4" so I can get my hand in there to adjust the focus ring on the GH1. I actually think 5" would have been better, I just needed a little more room. For the bolt that holds the camera I ended up with a lot of extra at the top, which is why I added the nut you can see on the bottom. I thought you said you used a 2.5" bolt in the video, but perhaps a 2" bolt would work better? The GH1 is most likely the heaviest DSLR you could mount on this though. I doubt the Canons would work as well. It fits pretty nicely, but for a larger DSLR you might need a real base plate. Next up is some hockey tape I think or maybe I'll paint it black. Thanks for putting up the video showing how to make this. I've been wanting a hand held rig for a while now but didn't really want to spend hundreds to get one. This was a great, easy to build solution. We want your pictures! Send your shots of a gadget created with Frugal Filmmaker directions or inspiration and we'll post it. Don't forget to tell everyone how you did it and give us insight on what works and what didn't.
{ "pile_set_name": "Pile-CC" }
Leandro Lessa Azevedo

Leandro Lessa Azevedo, (born 13 August 1980 in Ribeirão Preto), simply known as Leandro, is a Brazilian former footballer who played as a striker.

Club statistics

Honours

Corinthians
- Brazil Cup: 2002
- Tournament Rio - São Paulo: 2002

Fluminense
- Rio de Janeiro State League: 2005

São Paulo
- Brazilian League: 2006, 2007

Grêmio
- Rio Grande do Sul State League: 2010

Vasco da Gama
- Brazil Cup: 2011

External links
- globoesporte.globo.com
- CBF
- sambafoot

Category:1980 births Category:Living people Category:Brazilian footballers Category:Brazilian expatriate footballers Category:Botafogo Futebol Clube (SP) players Category:Sport Club Corinthians Paulista players Category:FC Lokomotiv Moscow players Category:Expatriate footballers in Russia Category:Goiás Esporte Clube players Category:Fluminense FC players Category:São Paulo FC players Category:Tokyo Verdy players Category:Grêmio Foot-Ball Porto Alegrense players Category:CR Vasco da Gama players Category:Fortaleza Esporte Clube players Category:Expatriate footballers in Japan Category:Campeonato Brasileiro Série A players Category:Russian Premier League players Category:J1 League players Category:J2 League players Category:Association football forwards
{ "pile_set_name": "Wikipedia (en)" }
Distal biceps tendon ruptures: a historical perspective and current concepts. Distal biceps tendon rupture is a relatively rare injury most commonly seen in the dominant extremity of men between 40 and 60 years of age. It occurs when an eccentric extension force is applied to a contracting biceps muscle. The hallmark finding is a palpable defect in the distal biceps, which is accentuated by elbow flexion. Radiographic evaluation is usually not necessary. Acute surgical repair is advocated for optimal return of function by either a one-incision or a modified two-incision muscle-splitting technique. The arm is protected for 6 to 8 weeks after surgery. Unrestricted range of motion and gentle strengthening may begin after the 6- to 8-week protection period. Return to unrestricted activity is usually allowed by 5 months after surgery.
{ "pile_set_name": "PubMed Abstracts" }
In 2008 the ENCJ and the Ibero-American Judicial Summit set up a joint commission (rules for the creation and operation of the joint commission). The aim of the joint commission is to generate communication and dialogue between the two networks. The purpose of the commission is to support the strengthening of the Judicial Power, to develop projects and activities of common interest, and to facilitate the exchange of experiences and information between the networks. The ENCJ is represented in the joint commission by the Conseil Supérieur de la Magistrature of France and the Consiglio Superiore della Magistratura of Italy.
{ "pile_set_name": "Pile-CC" }
NUMBER 13-09-00578-CR COURT OF APPEALS THIRTEENTH DISTRICT OF TEXAS CORPUS CHRISTI - EDINBURG SIMON ORTIZ, Appellant, v. THE STATE OF TEXAS, Appellee. On appeal from the 319th District Court of Nueces County, Texas. MEMORANDUM OPINION Before Justices Rodriguez, Benavides, and Vela Memorandum Opinion by Justice Rodriguez Appellant Simon Ortiz appeals the revocation of his community supervision. See Tex. Penal Code Ann. § 22.02 (Vernon Supp. 2009). Following a plea of true at his revocation hearing, the trial court found that Ortiz committed violations of his terms of community supervision and assessed his punishment at five years' confinement. By two issues, which we renumber and reorganize as one, Ortiz complains that he received ineffective assistance of counsel at the revocation stage because his counsel failed to investigate his mental health history. We affirm.I. Background Ortiz was indicted for aggravated assault as follows: . . . on or about January 29, 2006, in Nueces County, Texas, [Ortiz] did then and there intentionally, knowingly, or recklessly cause bodily injury to Vawn Hue Hunter by striking Vawn Hue Hunter and did then and there use or exhibit a deadly weapon, to-wit: a hammer, during the commission of said assault . . . . See id. Prior to trial, Ortiz's counsel filed a motion for psychiatric examination. In the motion, counsel contended that Ortiz was unable to assist in his own defense. On March 9, 2006, the motion for psychiatric evaluation was granted by the trial court. Raul Capitaine, M.D., a member of the American Board of Forensic Examiners, performed the evaluation on April 5, 2006, and submitted his findings to the trial court. Dr. Capitaine found Ortiz to be competent to stand trial and that medication was necessary to "attain or maintain competence." On April 19, 2006, Ortiz signed a plea agreement in which he pleaded guilty to aggravated assault and received seven years' community supervision. See Tex. Code Crim. Proc. Ann. art. 42.12 § 5 (Vernon Supp. 2009). On August 18, 2009, the State filed a motion to revoke probation, alleging that Ortiz violated his probation by committing an assault in Harris County, Texas, on July 25, 2009. See Tex. Penal Code Ann. § 22.01 (Vernon Supp. 2009). On September 23, 2009, Ortiz requested that he be appointed an attorney because he did not have the means to retain one, and the court assigned Ortiz new counsel to represent him in the revocation proceeding. On February 24, 2010, at his revocation hearing, Ortiz pleaded true to the allegations contained in the motion to revoke. The trial court found that Ortiz violated the terms of his community supervision, revoked Ortiz's probation, and sentenced Ortiz to five years' confinement in the Institutional Division of the Texas Department of Criminal Justice.II. Standard of Review & Applicable Law For Ortiz to establish a claim of ineffective assistance of counsel, he must show: (1) that his counsel's representation fell below an objective standard of reasonableness; and (2) that, but for his attorney's unprofessional errors, there is a reasonable probability that the result of the proceeding would have been different. Strickland v. Washington, 466 U.S. 668, 688-94 (1984). A reasonable probability is one that is sufficient to undermine confidence in the outcome of the case. Id. Whether the two-pronged test has been met depends upon the totality of the representation and is not determined by isolated acts or omissions. Rodriguez v. State, 899 S.W.2d 658, 665 (Tex. Crim. App. 
1995); Jaynes v. State, 216 S.W.3d 839, 851 (Tex. App.-Corpus Christi 2006, no pet.). The burden is on appellant to prove ineffective assistance of counsel by a preponderance of the evidence. Jaynes, 216 S.W.3d at 851. We presume that counsel gave his client reasonable professional assistance, and our review of counsel's representation is highly deferential. See Mallett v. State, 65 S.W.3d 59, 63 (Tex. Crim. App. 2001) (citing Tong v. State, 25 S.W.3d 707, 712 (Tex. Crim. App. 2000)); Jaynes, 216 S.W.3d at 851. For an appellant to defeat the presumption of reasonable professional assistance, an allegation of ineffectiveness "must be firmly founded in the record, and the record must affirmatively demonstrate the alleged ineffectiveness." Mallett, 65 S.W.3d at 63 (citing Thompson v. State, 9 S.W.3d 808, 814 (Tex. Crim. App. 1999)). Generally, the trial record will be underdeveloped and will not adequately reflect the errors of trial counsel. Thompson, 9 S.W.3d at 813-14; Kemp v. State, 892 S.W.2d 112, 115 (Tex. App.-Houston [1st Dist.] 1994, pet. ref'd). On direct appeal, a defendant cannot usually rebut the presumption that counsel's performance was the result of sound or reasonable trial strategy because the record is normally silent as to counsel's decision-making process. Strickland, 466 U.S. at 688; see Jaynes, 216 S.W.3d at 851. III. Discussion By his sole issue on appeal, Ortiz complains that his counsel was ineffective because he failed to investigate Ortiz's mental health history prior to the hearing on the State's motion to revoke. It is Ortiz's contention that had his counsel investigated his mental health history, counsel would have discovered that Ortiz had been previously examined by a psychiatrist following a court-ordered competency evaluation in April 2006. Ortiz argues that his mental defects were evident and that he was not able to understand the proceedings during his probation revocation hearing due to his mental illness. Ortiz refers this Court to two instances that occurred during the hearing where he gave incorrect answers to the trial court when he was allegedly confused. The first example occurred during the following exchange between the court and Ortiz: [The Court]: Did you in fact sign the paperwork? [Ortiz]: No. [Ortiz's Counsel]: Oh, yeah, yeah, yeah. [Ortiz]: Yes. [The Court]: Okay. Did anybody force you to sign it? [Ortiz]: No. [The Court]: And did anybody promise you anything to get you to sign it? [Ortiz]: No. [The Court]: Okay. Did you sign everything here freely and voluntarily? [Ortiz]: Yes, sir. Ortiz contends that this example of him having to be reminded of the paperwork he had just signed shows that there was a clear lack of understanding on his part. Moments later, Ortiz was asked another question by the court: [The Court]: To the allegations in the motion to revoke, how do you plead, true or not true? [Ortiz]: Not true. [Ortiz's Counsel]: We went over - we went over this. The assault. [Ortiz]: Okay. [Ortiz's Counsel]: And you told me it was true. [Ortiz]: True. [The Court]: Is anybody forcing you to plead true? [Ortiz]: No. [The Court]: And are you pleading true here freely and voluntarily? [Ortiz]: Yes. Ortiz argues that he forgot how he was pleading to the probation violation allegations and had to, again, be reminded by counsel to plead true. Prior to his plea agreement for aggravated assault in 2006, Ortiz underwent a court-ordered competency hearing. See Tex. Code Crim. Proc. Ann. art. 46B.004 (Vernon 2006). The opinion of Dr. 
Capitaine, the psychiatrist who evaluated Ortiz, was that medications were necessary to maintain Ortiz's competency and that he believed Ortiz was able to "[r]ationally understand the charges and potential consequences of the pending proceedings." Moreover, Dr. Capitaine noted that Ortiz was able to testify but that "[q]uestions may have to be repeated and phrased in basic language." There is no evidence in the record that indicates whether Ortiz was medicated at the time of his plea bargain for the 2006 aggravated assault or at his probation revocation hearing on October 6, 2009, and we cannot guess as to the matter because ineffective assistance of counsel claims must be firmly established by the record, not built on retrospective speculation. (1) See Bone v. State, 77 S.W.3d 828, 835 (Tex. Crim. App. 2002). Because Ortiz did not file a motion for new trial on ineffective assistance of counsel grounds or elicit testimony concerning counsel's reasons for not seeking a competency hearing or ascertaining whether or not counsel made any investigation into Ortiz's mental health, there is no evidence in the record to suggest that the actions of Ortiz's revocation counsel were not the result of sound and reasonable trial strategy. See Jaynes, 216 S.W.3d at 855. Accordingly, Ortiz has not rebutted the strong presumption that his counsel provided professional and objectively reasonable assistance. See Mallett, 65 S.W.3d at 62; Thompson, 9 S.W.3d at 813. Because Ortiz has not established that his counsel's performance fell below an objectively reasonable standard, he has not met the first prong of Strickland. (2) See Jaynes, 216 S.W.3d at 855 (citing Mallet, 65 S.W.3d at 67). Ortiz's issue is overruled. IV. Conclusion The judgment of the trial court is affirmed. NELDA V. RODRIGUEZ Justice Do not publish. Tex. R. App. P. 47.2(b). Delivered and filed the 16th day of July, 2010. 1. Ortiz did not file a motion for new trial, and the appellate record is silent as to evidence regarding the strategy of Ortiz's counsel at his revocation hearing. In fact, we cannot assume that Ortiz's counsel did not investigate Ortiz's competency when the record is silent as to the depth of counsel's investigation. See Hernandez v. State, 726 S.W.2d 53, 57 (Tex. Crim. App. 1986); Brown v. State, 129 S.W.3d 762, 767 (Tex. App.-Houston [1st Dist.] 2004, no pet.). Because Ortiz raised the ineffective assistance of counsel claim on direct appeal, his revocation counsel has not had the opportunity to respond to Ortiz's concerns; the reasonableness of the choices made by Ortiz's counsel may involve facts that do not appear in the appellate record. See Rylander v. State, 101 S.W.3d 107, 111 (Tex. Crim. App. 2003). Ortiz's counsel should be afforded an opportunity to explain his actions before being denounced as ineffective. See id. at 111. 2. Because Ortiz failed to meet the first prong of Strickland, we need not consider the second prong, that is, whether the result of the proceeding would have been different. See Garcia v. State, 57 S.W.3d 436, 440 (Tex. Crim. App. 2001).
{ "pile_set_name": "FreeLaw" }
Friday, March 30, 2007

There are films that are intentionally hilarious and there are films that are unintentionally hilarious. 'End of Days' is one such film. It tickles every sensibility till it gets offensive. 'End of Days' was an independent venture starring Arnold Schwarzenegger and Gabriel Byrne. Released very aptly in the year 1999, it came up with yet another bizarre doomsday theory, outdoing 'Rosemary's Baby', 'The Omen' and other Lucifer-infested films at apocalyptic histrionics. The film swept the U.S. box office and 'The Razzies' with equal panache. Arnie plays Jericho, a decent yet drunken cop, disillusioned with God after the death of his wife and daughter. Gabriel Byrne plays a suave industrialist who is possessed by Satan, two days before the 1st of January 2000 (the supposed beginning of doom). Gabriel Byrne's Satan is an Armani-clad smooth talker with an eye for the ladies. He is particularly interested in Christine York, the young woman ill-fated enough to bear the progeny of Lucifer (it has been decided well in advance, we know because she was born with a special mark on her arm). Christine York (played by Robin Tunney) is a troubled young woman who sees visions of being seduced by Byrne's character. She loathes him because she fears that she might end up liking him. As I mentioned earlier, the film is a winner for its idiotic premise. Satan is slated to reproduce and swarm the world with baby Lucifers on the 31st of December 1999. It is important to note that 1999 has special significance. Since 666 is the ultimate Satanic number, when inverted it reads 999 as in 1999. Thus at the start of the new millennium, the world is in for a sinful treat! The Catholic Church is on a hunt to find Christine York and kill her so that she doesn't abet Satan in his quest to taint the saintly globe. Jericho becomes aware of all this through an illogical and grotesque sequence of events and vows to protect Christine from the devil and his advances. The film has its share of one-liners. 'I have come for my wife, Christine, come to me!' seems to be the best pick-up line Satan can come up with. Satan is anatomically very similar to the Terminator. He is entitled to automatic recovery from gunshots, punches and grenade attacks. Arnie tries very hard to make it appear that he has been taking acting lessons. The dialogue is painfully crass and the plot even more so. The acting is wooden and had me rolling with laughter. It is sad that an actor of Mr. Byrne's caliber felt compelled to be a part of such a mindless box-office bungle. The ending is predictable and lame. Thankfully, as one of my girlfriends said, 'Gabriel Byrne looks hot!' He is definitely worth all the eye-candy. About Me Disclaimer All text and some images on this blog are the property of the blogger. The views expressed in this blog are not intended to cause harm or offense to anyone. The comments are the property of their writers.
{ "pile_set_name": "Pile-CC" }
Background {#Sec1} ========== Fermented beverages have a long history of preparation and use globally for medicinal, social, and ritualistic purposes \[[@CR1]--[@CR4]\]. In China, different socio-linguistic groups in regions throughout the country have developed their own characteristic fermented beverages that are associated with cultural identity and social aspects of communities \[[@CR3], [@CR4]\]. For example, *Guyuelongshan* is a rice wine from Shaoxing in Zhejiang Province, *Hejiu* is a rice wine from Shanghai, and koumiss is a Mongolian liquor \[[@CR5]\]. In addition, Tibetan communities prepare barley wine and there are many types of sweet rice wine from southwestern China including "*nuomi*" that are consumed during weddings, hospitality, funerals, ancestor worship, and other ceremonies \[[@CR5]\]. Rice wine is among the most common and oldest fermented beverages in China. It is fermented using a fermentation starter, also known as *koji* (or *jiuqu* in Mandarin) \[[@CR6]\]. *Koji* can be made with staple crops such as wheat, rice, millet, and maize that consist of microorganisms that support the fermentation process \[[@CR7]\]. For example, communities in Shaoxing prepare *koji* as a raw material for rice wine from wheat that harbors many microorganisms including *Absidia*, *Acetobacteria*, *Aspergillus*, *Bacillus*, *Mucor*, *Lactobacillus*, and *Rhizopus* \[[@CR8]\]. Some of these microorganisms are also used as single strains for the industrial manufacture of rice wine. Zhang et al. \[[@CR9]\] highlighted that *Aspergillus oryzae* SU16, as a single strain, could be used in the production of *koji*. In addition to common staple grains such as wheat, rice, millet, and maize for the preparation of fermentation starters, indigenous groups in mountainous regions of China have a long history of using a wide diversity of local plants for making *koji*. We previously documented a total of 103 species in 57 botanical families of wild plants that are traditionally used as starters for preparing fermented beverages by Shui communities in southwestern China \[[@CR4]\]. The Dong are a socio-linguistic group (also known as the Kam) of southeast Guizhou that also have a long history of using *koji* for producing glutinous wine as a source of livelihood. Our previous studies demonstrate that the Dong people cultivate many varieties of glutinous rice \[[@CR10], [@CR11]\] which they use as their staple food. However, there remains a lack of documentation regarding the plants used as fermentation starters by Dong communities. This study seeks to address this knowledge gap by identifying the diversity of plants used as fermentation starters (*koji*) by Dong communities and associated knowledge and values. Findings have the potential to inform the conservation of natural resources associated with a culturally-relevant beverage of Dong communities while preserving traditional ecological knowledge. Methods {#Sec2} ======= Study area {#Sec3} ---------- Research was carried out in three Dong villages in Qiandongnan Miao and Dong Autonomous Prefecture in the southeastern part of Guizhou Province (longitude 108°50.3′ E--109°58.5′ E, latitude 25°53.7′ N--26°24.2′ N), located near Hunan and Guangxi provinces. These villages are in the core zone of Dong socio-linguistic group and include Xiaohuang of Congjiang County, Huanggang of Liping County, and Nongwu of Rongjiang County (Fig. [1](#Fig1){ref-type="fig"}, Table [1](#Tab1){ref-type="table"}). 
The three villages have a combined area of 51.22 km^2^ and are located between 630 and 780 m above sea level. The climate is characterized as subtropical monsoon humid with an annual average temperature of 18.4 °C, an average precipitation of 1200 mm, average sunlight time of 1300 h, and a frost-free period of 310 days per year.

Fig. 1 Geographic location of the study area: Xiaohuang, Huanggang, and Nongwu in three counties of Congjiang, Liping, and Rongjiang, respectively (Qiandongnan Miao and Dong Autonomous Prefecture, China)

Table 1 Study area (three Dong villages in Qiandongnan Miao and Dong Autonomous Prefecture)

| Village name | No. of families | Population | Area (km^2^) | Altitude (m) | Geographic location |
| --- | --- | --- | --- | --- | --- |
| Xiaohuang (Congjiang) | 740 | 3800 | 16.53 | 630 | 25°53.7′ N, 109°58.5′ E |
| Huanggang (Liping) | 325 | 1629 | 29.70 | 780 | 26°24.2′ N, 109°14.6′ E |
| Nongwu (Rongjiang) | 135 | 550 | 4.99 | 740 | 25°94.1′ N, 108°50.3′ E |

The three study site villages are dominated by members of the Dong and Miao socio-linguistic groups. A traditional rice-fish co-culture system predominates in these villages and integrates with animal husbandry, forestry management, and medicinal plant collection and trade \[[@CR10], [@CR11]\]. In this study, we chose to focus on interviewing Dong households because of their longer history of cultivating glutinous rice (*Oryza sativa* var. *glutinosa*) compared to the Miao, as well as their subsistence lifestyle for procuring food. Glutinous rice wine is a very popular fermented beverage in local communities. The Dong, like many indigenous communities, rely on their environment for a range of wild and cultivated crops for preparing food, beverages, and medicine \[[@CR12], [@CR13]\]. The above information indicates that these villages are ideal areas for studying the traditional knowledge of plants used as fermentation starters for traditional glutinous rice wine.

Ethnobotanical data collection {#Sec4}
------------------------------

Ethnobotanical surveys were carried out from September 2017 to July 2018. A total of 217 informants (including 126 male and 91 female) were interviewed from the three study sites (Table [2](#Tab2){ref-type="table"}). Semi-structured interviews were carried out using a snowballing approach of meeting Dong community members in fields, around fish ponds, in canteens, in artisanal workshops, in farmhouses, and in village squares. The semi-structured interviews involved open-ended questions and conversations with informants in the above settings. The major questions were as follows:

1. Do you know about "*Jiuqu*" (fermentation starters for brewing traditional glutinous rice wine)?
2. Do you know the technology of koji-making?
3. If yes, which plants did you choose, and which parts of the plants, to make the fermentation starters?
4. Where do you usually collect these plants?
5. Can you take us to collect these plants?
(Field identification or local plant flora).Do you know these plant names?Can you read these names in Dong language?Why do not you choose a commercial "*Jiuqu*" for brewing traditional glutinous rice wine?Would you consider passing this knowledge to your children or other people?What other interesting things can you share with us?Table 2Demographic details of interviewed informantsCategorySubcategoryNumber of informants% of informantsGenderMale12658.06Female9141.94Age20--402210.1440--6011753.9260 and older7835.94Education statusIlliterate15270.05Primary4420.28Secondary167.37Higher52.30OccupationFarmer13361.29Migrant workers7132.72Local officials135.99Knowledge about koji-making plantsYes19388.94No2411.06 Interviews were carried out in either the Dong language with the assistance of a local Dong translator (Fig. [2](#Fig2){ref-type="fig"}) or in simplified Mandarin.Fig. 2Indigenous knowledge of traditional glutinous rice wine koji-making plants: **a** A local guide to helping identification of glutinous rice wine koji-making plants. **b** One of face-to-face interview. **c** The koji for brewing glutinous rice liquor/wine. **d** Glutinous rice wine made from koji In the local area, people with primary and higher education tend to go out to work as migrant workers in non-agriculture times, and those with higher education have the opportunity to find permanent jobs in the provincial and prefectural capital cities, or county towns nearby. Interviews in Mandarin were primarily with individuals with primary education or above including migrant workers and local government officials. All interview procedures involved in this study were in accordance with the International Society of Ethnobiology Code of Ethics including procuring prior informed consent before interviews \[[@CR14]\]. The demographic characteristics (age, educational status, and occupation) were identified and recorded in all face-to-face interviews (Fig. [2](#Fig2){ref-type="fig"}). In addition to interviews, we carried out participatory observation in the study site communities. Specifically, we focused on observing the process of collecting plants and preparing *koji*. These observations were supplemented by key informant interviews on the type of plant species. All of the plants mentioned by key informants were identified in the field and collected to prepare voucher specimens. We checked the scientific names of our field collections with *The Plant List* \[[@CR15]\]. Botanical specimens were further examined at the Herbarium of Jishou University, Hunan Province, China. The specimens were assigned voucher numbers and deposited at the Herbarium of Jishou University. Data analysis {#Sec5} ------------- Classical ethnobotanical descriptive statistics were used to summarize ethnobotanical data in Excel 2013. The association between indigenous knowledge of koji-making with participant's demographic factors including gender, age, educational status, and occupation was tested with Chi-square analysis. Statistical analysis was carried out using SPSS version 20 (SPSS, Chicago) at 5% level of significance (*P* \< 0.05). Use Value (UV) index \[[@CR16]\] was calculated to evaluate the botanical species with the greatest use across the study site communities. 
The UV of each plant mentioned was calculated using the following formula:$$\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{UV}=\frac{\sum \mathrm{UP}}{\ n} $$\end{document}$$where UP is the number of uses mentioned by each informant for a given plant use and *n* is the total number of informants. Results {#Sec6} ======= Socio-demographic characteristics of respondents {#Sec7} ------------------------------------------------ Table [2](#Tab2){ref-type="table"} describes the demographic characteristics of the 217 study informants. Informants comprised of 58.06% (*N* = 126) males and 41.94% (*N* = 91) females. In addition, informants were between the ages of 20 and 96 years (the majority were between 40 and 60 years old). Most of the surveyed respondents (70.05%) are illiterate, and only five (2.30%) of the interviewed respondents had completed higher education (Table [2](#Tab2){ref-type="table"}). The majority of the respondents were farmers (61.29%, *N* = 133) and migrant workers (32.72%, *N* = 71), except for a few local government officials (5.99%, *N* = 13). Most respondents (*N* = 193; 88.94%) demonstrated average knowledge about *koji* plants in general (Tables [2](#Tab2){ref-type="table"} and [3](#Tab3){ref-type="table"}).Table 3Knowledge about koji-making plants in relation with gender, age, educational status, and occupation of the respondentsCharacteristicsTotal number of respondentsKnowledge about koji-making plants*X* ^2^*P* valueYesNoGender*X*^2^ = 1.807, df = 1*P* = 0.179Male12610917Female91847Age*X*^2^ = 58.668, df = 2*P \<* 0.00120--402291340--60117108960 and older78762Education status*X*^2^ = 13.443, df = 3*P* = 0.004None15214111Primary44386Secondary16115Higher532Occupation*X*^2^ = 5.664, df = 2*P* = 0.059Farmers13311914Migrant workers71656Local officials1394 Diversity of plants used for *koji* {#Sec8} ----------------------------------- A total of 60 plant species were documented for preparing *koji*, belonging to 58 genera and 36 families (Table [4](#Tab4){ref-type="table"}). The most prevalent botanical families were Asteraceae and Rosaceae (*N* = 6, respectively), followed by Lamiaceae (*N* = 4); Asparagaceae, Menispermaceae, and Polygonaceae (*N* = 3, respectively); Lardizabalaceae, Leguminosae, Moraceae, Poaceae, and Rubiaceae (*N* = 2, respectively); and the other botanical families represented in our collections each consisted of a single species (Table [4](#Tab4){ref-type="table"}).Table 4Inventory of plants traditionally used for koji-making in the study area (species are listed alphabetically)Scientific nameVoucher numberFamily nameDong nameChinese nameHabitPart usedUV*Actinidia eriantha* Benth.KJBT0040ActinidiaceaeSangp buc donglMao Hua Mi Hou TaoShrubBranch1.51*Adiantum flabellulatum* L.KJBT0052PteridaceaeKaok naemlShan Ye Tie Xian JueHerbLeaf0.79*Agrimonia pilosa* LedebKJBT0029RosaceaeDemh Meix SaisLu Bian HuangHerbRoot0.47*Akebia quinata* (Houtt.) Decne.KJBT0064LardizabalaceaeGueel nyanl badsBa Yue GuaShrubFruit0.67*Arctium lappa* L.KJBT0027AsteraceaeMal kap gueecNiu BangHerbAerial part0.45*Artemisia annua* L.KJBT0019AsteraceaeMal yaems sulHe HaoHerbRoot1.19*Asarum forbesii* Maxim.KJBT0033AristolochiaceaeNaos max ticMa Ti XiangHerbLeaf0.77*Asparagus cochinchinensis* (Lour.) Merr.KJBT0050AsparagaceaeSangp begs sangp laoxTian Men DongHerbRoot0.56*Bauhinia brachycarpa* Wall. 
ex Benth.KJBT0059LeguminosaeJaol bavYe Guan MenShrubRoot0.47*Cayratia trifolia* (L.) DominKJBT0023VitaceaeJaol meixguvSan Ye Wu Lian MeiShrubFruit1.25*Cirsium japonicum* DC.KJBT0044AsteraceaeMal sax bav laoxDa JiHerbRoot0.31*Clerodendrum cyrtophyllum* Turcz.KJBT0009LamiaceaeBav sup geel kuenpDa Qing YeShrubAerial part0.44*Codonopsis pilosula*KJBT0011CampanulaceaeDemh Gaams YousDang ShenClimberRoot0.40*Cunninghamia lanceolata* (Lamb.) Hook.KJBT0047TaxodiaceaeMeix beensSha Mu YeTreeLeaf1.24*Cyclea racemosa* Oliv.KJBT0002MenispermaceaeJaol enl sup danglLun Huan TengHerbBranch0.79*Diospyros cathayensis* StewardKJBT0048EbenaceaeMeix bav mincShi Zi YeTreeLeaf0.86*Elaeagnus pungens* Thunb.KJBT0051ElaeagnaceaeDemh nyox sencHu Tui ZiShrubAerial part0.78*Fallopia multiflora* (Thunb.) Harald.KJBT0018PolygonaceaeJaol maenc yeexHe Shou WuClimberRoot0.28*Ficus pumila* L.KJBT0006MoraceaeJaol liangc fenxCheng Tuo GuoTreeLeaf0.27*Ficus tikoua* Bur.KJBT0013MoraceaeJaol demh xeensDi Gua TengClimberWhole plant0.47*Gardenia jasminoides* EllisKJBT0022RubiaceaeWap lagx ngocHuang Zhi ZiShrubFlower1.03*Gaultheria leucocarpa* Bl. var. *crenulata* (Kurz) T. Z. HsuKJBT0053EricaceaeMelx demh miuusBai Zhu ShuHerbLeaf1.25*Gentiana rhodantha* Franch. ex Hemsl.KJBT0028GentianaceaeNyangt boy liongcLong Dan CaoHerbWhole plant1.21*Gerbera piloselloides* (L.) Cass.KJBT0034AsteraceaeSangp mal kap gavMao Da Ding CaoHerbWhole plant1.00*Geum macrophyllum* Willd.KJBT0030RosaceaeYangh muic naemxLu Bian QingHerbAerial part0.92*Glochidion puberum* (Linn.) Hutch.KJBT0049PhyllanthaceaeMeix sonp poncSuan Pan ZiTreeFruit1.07*Gonostegia hirta* (Bl.) Miq.KJBT0038UrticaceaeMal kgoux lailNuo Mi TuanHerbWhole plant0.92*Hedera nepalensis* var. *sinensis* (Tobl.) Rehd.KJBT0005AraliaceaeJaol bav yaopChang Chun TengShrubAerial part0.40*Houttuynia cordata* Thunb.KJBT0063SaururaceaeSangp wadcZhe Er GenHerbRoot1.46*Imperata cylindrica* (L.) Beauv.KJBT0003PoaceaeSangp nyangt bagxBai Mao GenHerbRoot1.12*Kadsura longipedunculata* Finet et Gagnep.KJBT0046SchisandraceaeJaol dangl bogl padtShan Wu Wei ZiShrubBark1.47*Kalimeris indica* (L.) Sch.-Bip.KJBT0032AsteraceaeMal langxNi Qiu ChuanHerbAerial part0.76*Leonurus japonicus* Houtt.KJBT0060LamiaceaeMal semp beengcYi Mu CaoHerbWhole plant0.96*Ligularia fischeri* (Ledeb.) Turcz.KJBT0042AsteraceaeBav dinl maxTi Ye Tuo WuHerbBranch0.46*Melastoma dodecandrum* Lour.KJBT0014MelastomataceaeMal demh xeensDi ShenShrubLeaf0.79*Mentha canadensis*KJBT0043LamiaceaeNaos suic yeexBo HeHerbLeaf1.46*Oryza sativa* var. *glutinosa* Matsum.KJBT0037PoaceaeOuxNuo HeHerbStem1.50*Paris polyphylla* SmithKJBT0039MelanthiaceaeWap bar YealQi Ye Yi Zhi HuaHerbWhole plant0.55*Polygala sibirica* L.KJBT0017PolygalaceaeSangp jeml meec anghGua Zi JinHerbAerial part0.82*Polygonatum cyrtonema* HuaKJBT0021AsparagaceaeXingp mant jencHuang JingHerbRoot1.00*Polygonum hydropiper* L.KJBT0026PolygonaceaeMeix bavLa LiaoHerbLeaf1.42*Portulaca oleracea* L.KJBT0016PortulacaceaeMal NguedcGua Zi CaiHerbWhole plant1.00*Pteridium aquilinum* (L.) Kuhn var. *latiusculum* (Desv.) Underw. ex HellerKJBT0062DennstaedtiaceaeKaokJue CaiHerbStem0.92*Pueraria lobata* var. *montana* (Lour.) van der MaesenKJBT0015LeguminosaeSangp nieengvGe TengClimberBranch1.74*Frangula crenata* (Siebold & Zucc.) Miq.KJBT0024RhamnaceaeMeix liuucliicKu Li YeShrubLeaf1.22*Rohdea japonica* (Thunb.) 
RothKJBT0054AsparagaceaeMal nyinc supWan Nian QingHerbRoot1.12*Rosa laevigata* MichxKJBT0065RosaceaeOngv kuaotJin Ying ZiShrubFruit1.38*Rosa roxburghii* Tratt.KJBT0007RosaceaeSunl ongv kuaotCi LiHerbFruit1.44*Rubus pluribracteatus* L.T.Lu & Boufford.KJBT0008RosaceaeDemh bav daemh galDa Hei MeiClimberFruit1.42*Sanguisorba officinalis* L.KJBT0020RosaceaeSangp lagx lugx yakHong Di YuHerbRoot0.81*Sargentodoxa cuneata* (Oliv.) Rehd. et Wils.KJBT0057LardizabalaceaeJaol bogl padt yak magsXue TengHerbBranch1.20*Solanum americanum* Mill.KJBT0025SolanaceaeLianh yeexYe Hai JiaoHerbFruit1.29*Stephania cepharantha* Hay.KJBT0045MenispermaceaeSunl maenc jincJin Xian Diao Wu GuiHerbRoot0.88*Teucrium quadrifarium* Buch.-Ham. ex D. DonKJBT0036LamiaceaeNyangt ousNiu Wei CaoHerbWhole plant0.46*Thalictrum microgynum* Lecoy. ex Oliv.KJBT0056RanunculaceaeWangc lieenc naemxXiao Guo Tang Song CaoHerbWhole plant0.45*Tinospora sagittata Gagnep*.KJBT0004MenispermaceaeSangp juc saengcQing Niu DanShrubLeaf0.47*Uncaria rhynchophylla* (Miq.) Miq. ex Havil.KJBT0010RubiaceaeSangp jaol kgoul daovDa Ye Gou Teng YeClimberBranch1.32*Verbena officinalis* L.KJBT0031VerbenaceaeNyangt piudt max bieenhMa Bian CaoHerbLeaf0.79*Viola philippica* Cav.KJBT0012ViolaceaeMal mac keipDi Cao GuoHerbWhole plant0.47*Zanthoxylum bungeanum* Maxim.KJBT0066RutaceaeSangp siul yanlHua JiaoShrubFruit0.92 Analysis of the life forms of koji-making plants showed that 60.0% of the reported species are herbaceous plants (*N* = 36), 23.3% are shrubs (*N* = 14), 10.0% are lianas (10.0%), and 6.7% are trees (*N* = 4) (Table [4](#Tab4){ref-type="table"}). The root was the most commonly used plant part (21.7%, *N* = 13 citations), followed by the leaf (20.0%, *N* = 12), whole plant (16.7%, *N* = 10), fruit (13.3%, *N* = 8), aerial part (11.7%, *N =* 7), branch (10.0%, *N =* 6), stem (3.3%, *N* = 2), bark, and flower (1.7%, *N* = 1, both) (Table [4](#Tab4){ref-type="table"}, Fig. [3](#Fig3){ref-type="fig"}).Fig. 3Percentage of koji-making plant parts used Traditional knowledge on koji-making plants {#Sec9} ------------------------------------------- Results of the Chi-square test showed that there was no significant association between knowledge of the koji-making plants and gender (*X*^2^ = 1.807, df = 1, *P* value = 0.179) and occupation (*X*^2^ = 5.664, df = 2, *P* value = 0.059). However, there was a significant association between knowledge of *koji* plants with age (*X*^2^ = 58.668, df = 2, *P* value \< 0.001) and educational status (*X*^2^ = 13.443, df = 3, *P* value = 0.004) (Table [3](#Tab3){ref-type="table"}). Informants older than 40 years and those with lower educational status were the most knowledgeable regarding plants for making *koji* (Table [3](#Tab3){ref-type="table"}). Frequently utilized species {#Sec10} --------------------------- The use values (UV) calculated for this study range from 0.27 to 1.74, with a higher UV indicating the plant was more frequently reported to be used by informants. The plant species most frequently utilized by informants for making *koji* are *Pueraria lobata* var. *montana* (Lour.) van der Maesen (1.74), *Actinidia eriantha* Benth. (1.51), and *Oryza sativa* L. var. *glutinosa* Matsum (1.5). There were 23 other species with a UV value greater than 1 including *Kadsura longipedunculata* Finet et Gagnep, *Houttuynia cordata* Thunb., *Mentha canadensis* L., *Rosa roxburghii* Tratt, *Polygonum pubescens* (Meissn.) Steward, *Rubus pluribracteatus* [L.T. 
Lu & Boufford, *Rosa laevigata* Michx, *Uncaria rhynchophylla* (Miq.) Miq. ex Havil, *Solanum americanum* Mill., *Cayratia trifolia* (L.) Domin, *Gaultheria leucocarpa* Bl. var. *crenulata* (Kurz) T. Z. Hsu, *Cunninghamia lanceolata* (Lamb.) Hook, *Frangula crenata* (Siebold & Zucc.) Miq., *Gentiana rhodantha* Franch. ex Hemsl, *Sargentodoxa cuneata* (Oliv.) Rehd. et Wils, *Artemisia annua* L., *Imperata cylindrica* (L.) Beauv, *Rohdea japonica* (Thunb.) Roth, *Glochidion puberum* (L.) Hutch, *Gardenia jasminoides* Ellis, *Gerbera piloselloides* (L.) Cass, *Polygonatum cyrtonema* Hua, and *Portulaca oleracea* (L.) (Table [4](#Tab4){ref-type="table"}).

Discussion {#Sec11}
==========

The technique of using plants as fermentation starters is a prevalent traditional method for preparing many well-known fermented foods and beverages in China \[[@CR17], [@CR18]\]. This study highlights the diversity of plants used by Dong communities as fermentation starters for making rice wine, as well as the associated knowledge and the use values of the most frequently reported *koji* plants. We documented a total of 60 plant species and associated plant parts used by informants in the Dong study site communities as fermentation starters for making glutinous rice wine. Our results further showed that 88.94% of respondents had knowledge about plants used as fermentation starters. This finding indicates the rich indigenous ecological knowledge regarding plants in Dong communities, which contributes to sustaining livelihoods and well-being along with biodiversity. Many informants claimed that "People who cannot make glutinous rice wine are not real Dong people, because drinking and singing are part of our daily life." This widely shared view clearly emphasizes the importance of fermented beverages in Dong communities and suggests that *koji* for brewing glutinous rice wine is widely used in the area. Our results further showed that there was no significant difference in knowledge of *koji* plants across gender or occupation. These results suggest that *koji* plants are generally known by local people irrespective of their gender or job. An older informant (the elderly woman in the red shirt in Fig. [2](#Fig2){ref-type="fig"}b) said, "Glutinous rice wine is easy to brew, but making koji is a profound knowledge that young people won't understand." This statement was cross-validated with several other informants. Interestingly, the results of this survey showed a significant association between knowledge of *koji* plants and the respondent's age, indicating that older people have more knowledge about *koji* plants than younger people. Although our results showed a significant negative correlation between the education level of respondents and the traditional knowledge of *koji* plants they possess, findings from this study are in line with another study showing that educational status does not contribute to the mastery of traditional ecological knowledge \[[@CR19]\]. However, we cannot conclude that educational status itself reduces this traditional knowledge, because the proportion of educated informants was small and education is closely related to age (younger people are more educated than older ones). It is worth mentioning that, in the study area, many young community members intend to go to distant cities for higher education from an early age. Thus, their communication with elders about traditional glutinous rice wine *koji* plants is limited.
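For readers who wish to reproduce the statistical analysis described in the Data analysis subsection, the following is a minimal sketch (our own illustration; the original analysis was performed in Excel 2013 and SPSS 20). It re-runs the chi-square test of association between age group and knowledge of koji-making plants using the counts in Table 3, and illustrates the UV index, $\mathrm{UV}=\sum \mathrm{UP}/n$, for a hypothetical set of use reports.

```python
# Minimal sketch (assumed workflow; the authors used Excel 2013 and SPSS 20).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age groups 20-40, 40-60, 60 and older; columns: knows koji plants (yes, no).
# Counts are taken from Table 3.
age_table = np.array([[9, 13],
                      [108, 9],
                      [76, 2]])
chi2, p, dof, expected = chi2_contingency(age_table, correction=False)
print(f"age vs. knowledge: X^2 = {chi2:.3f}, df = {dof}, P = {p:.3g}")
# The result should be close to the reported X^2 = 58.668, df = 2, P < 0.001.

# Use Value index: UP is the number of uses mentioned by each informant for a
# given plant; n is the total number of informants (217 in this study).
def use_value(uses_per_informant, n_informants=217):
    return sum(uses_per_informant) / n_informants

# Hypothetical citation counts for one species (for illustration only):
print(f"UV = {use_value([1, 2, 1, 1]):.3f}")
```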
The 60 species documented in this study represent a diverse range of botanical genera; specifically, the *koji* plants belong to 58 genera and 36 families with the dominant families including Asteraceae, Rosaceae, Lamiaceae, Asparagaceae, Menispermaceae, and Polygonaceae. A comparison of findings from this study with other regional surveys on plants used as fermentation highlights how species composition and diversity notably varies on the basis of cultural group. A survey by Hong et al. \[[@CR4]\] with the Shui socio-linguistic group, also in Guizhou Province, documented that respondents harvested 103 wild plant species in 88 genera and 57 families used as starters for preparing fermented beverages. The majority of plants belonged to the families Asteraceae, Rosaceae, Fabaceae, Melastomaceae, Moraceae, and Rutaceae. For example, Shui communities have been shown to use 9 species in the Rosaceae as fermentation starters (*Agrimonia pilosa* Ledeb., *Geum aleppicum* Jacq., *Rosa roxburghii* Tratt., *Rosa laevigata* Michx., *Rubus alceaefolius* Poir., *Rubus corchorifolius* L., *Rubus ellipticus* Sm., *Rubus xanthocarpus* Bureau & Franch., and *Rubus niveus* Thunb.) while Dong communities use 6 species in the family for *koji* (*Agrimonia pilosa* Ledeb., *Geum macrophyllum*, *Rosa laevigata* Michx., *Rosa roxburghii* Tratt., *Rubus pluribracteatus* L., and *Sanguisorba officinalis* L.). This comparison demonstrates the distinctiveness in species composition among different socio-linguistic groups within the same region (Guizhou Province) of China. Through our interviews, we got a general understanding of traditional technology of local starter-making. They roughly mashed the cleaned plants and plant parts with a wooden hammer, then stirred the powder of the glutinous rice shell into the mixture until mixing, and then rubbed or rolled the mixture into a bolus between hands. After wetting the surface of the bolus with water from mountain springs, they put the mixture in a barrel and let it ferment naturally, and then place it in indoors for air drying after the surface of bolus has grown white mold. At the same time, a comparison of findings from this study with other regional surveys on plants highlights how species composition and diversity may also show convergence between cultural groups. Specifically, the species composition found in this study has notable congruence to the general floristic profile of Miao community reported by Liu et al., which revealed that the Rosaceae, Asteraceae, Poaceae, and Liliaceae were dominant botanical families in Puding, Guizhou Province \[[@CR20]\]. The analysis of the community structure of local plants in the study area confirms the rationality of the versatility hypothesis of Gaoue et al. \[[@CR21]\]. The traditional practice of plant uses, along with the enhancement of the brewing technology, contributes to the diversity and complexity in the use of *koji* plants by the Dong. As species and family level alone are not enough to comprehensively understand the keystone ethnobotanical species of *koji* plants, a quantitative evaluation method of calculating use values (UV) was applied in this study. UV is a commonly used indicator in the fields of ethnobotany and ethnoecology \[[@CR15]\]. The evaluation of UV has the potential to reveal the utilization value of plant species and identify culturally-important plant resources \[[@CR18]\]. Findings on UV in Dong communities showed that some parts of plant species had very restricted uses. 
For example, stems of *Pueraria lobata* var. *montana*, *Actinidia eriantha*, and glutinous rice were not reported in any published ethnobotanical studies as food or food raw materials. Conversely, we found some *koji* plants that are widely reported in the literature as edible wild vegetables or fruits but have limited commercial use in the study area. Examples of these plants include *Artemisia annua* \[[@CR22]\], *Elaeagnus pungens* \[[@CR23]\], *Houttuynia cordata* \[[@CR24]\], *Portulaca oleracea* \[[@CR25]\], *Pteridium aquilinum* var. *latiusculum* (bracken fern) \[[@CR26]\], *Rosa laevigata*, and *Rosa roxburghii* \[[@CR27]\]. Additionally, we identified multiple other plants used by study informants for *koji* that have not been reported for this use in other geographical and sociocultural contexts, including "Naos suic yeex" (*Mentha canadensis*) and "Sangp siul yanl" (*Zanthoxylum bungeanum*). *Mentha canadensis* is widely used for the extraction of essential oil \[[@CR28]\] and is also consumed in China for medicinal purposes, to treat human diseases and to enhance appetite. The fruit of *Zanthoxylum bungeanum* is popular as a seasoning and a traditional Chinese herbal medicine, and is widely distributed in China and some Southeast Asian countries \[[@CR29]\].

Conclusion {#Sec12}
==========

This study highlights that the majority of Dong informants in the study site communities continue to use a wide diversity of plants as fermentation starters for brewing glutinous rice wine, a tradition that is over a thousand years old. In addition, this study highlights that elders in the study site communities continue to hold richer traditional ecological knowledge regarding plants used as fermentation starters and that this knowledge is not being transmitted to the younger generation. The most prevalent *koji* plants reported in this study include the stem of *Pueraria lobata* var. *montana* (Lour.) van der Maesen, *Actinidia eriantha* Benth., and the stem of *Oryza sativa* var. *glutinosa* Matsum. Findings of this study can be used to inform programs focused on the preservation of botanical resources used for preparing traditional glutinous rice wine. Similar to our findings on dye plants in the Dong area \[[@CR30]\], we suggest supporting educational workshops and training focused on transmitting the traditional ecological knowledge of community elders to the younger generation. It is expected that such efforts will not only support the cultural identity of communities through the preservation of knowledge and practices, but will also help conserve the surrounding biodiversity that is embedded in traditional ecological knowledge.

The authors acknowledge the local people in the study area. We also thank Professor Daoying Lan, Jishou University, China, for his critical reading and extensive comments on this manuscript.
Funding {#FPar1} ======= This study was financed by the National Natural Science Foundation of China (31761143001, 31870316 & 31560088), Key Laboratory of Ethnomedicine (Minzu University of China) of Ministry of Education of China (KLEM-ZZ201806), Minzu University of China (Collaborative Innovation Center for Ethnic Minority Development, YLDXXK201819), Ministry of Education of China and State Administration of Foreign Experts Affairs of China (B08044), the Special Funds Project for Central Government Guides Local Science and Technology Development (2018CT5012), Open Programme of Center of Tujia Medical Research in Hunan Province, China (2017-6), and the Research Platform Foundation of Jishou University (JD201605, NLE201708). Availability of data and materials {#FPar2} ================================== The data for this study may be availed upon request. CLL conceived and designed the study. JWH and RFZ collected the data. QYL performed the statistical analysis. JWH, RFZ, GXC, KGL, and QYL participated in discussions. SA and CLL finalized the manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate {#FPar3} ========================================== We followed ethical guidelines adopted by the International Society of Ethnobiology (2008). Permissions were verbally informed by all participants in this study. All people appeared in Fig. [2](#Fig2){ref-type="fig"} agreed to publish the photos. Consent for publication {#FPar4} ======================= Not applicable Competing interests {#FPar5} =================== The authors declare that they have no competing interests. Publisher's Note {#FPar6} ================ Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{ "pile_set_name": "PubMed Central" }
Calcifying odontogenic cyst associated with odontoma: report of two cases. The calcifying odontogenic cyst (COC) is a nonneoplastic lesion whose possible association with other odontogenic lesions such as odontoma has been considered improbable by some authors. This paper reports two cases of true odontoma found concurrently with the COC.
{ "pile_set_name": "PubMed Abstracts" }
---
abstract: 'We study the stability of gas accretion in Active Galactic Nuclei (AGN). Our grid-based simulations cover a radial range from 0.1 to 200 pc, which may enable us to link galactic/cosmological simulations with small-scale black hole accretion models that extend to within a few hundred Schwarzschild radii. Here, as in previous studies by our group, we include gas radiative cooling as well as heating by a sub-Eddington X-ray source near the central supermassive black hole of $10^8 M_{\odot}$. Our theoretical estimates and simulations show that for the X-ray luminosity, $L_X \sim 0.008~L_{Edd}$, the gas is thermally and convectively unstable within the computational domain. In the simulations, we observe that very tiny fluctuations in an initially smooth, spherically symmetric accretion flow grow first linearly and then non-linearly. Consequently, an initially one-phase flow relatively quickly transitions into a two-phase (cold/hot) accretion flow. For $L_X = 0.015~L_{Edd}$ or higher, the cold clouds continue to accrete, but in some regions of the hot phase, the gas starts to move outward. For $L_X < 0.015~L_{Edd}$, the cold phase contribution to the total mass accretion rate only moderately dominates over the hot phase contribution. This result might have some consequences for cosmological simulations of the so-called AGN feedback problem. Our simulations confirm the previous results of Barai et al. (2012), who used smoothed particle hydrodynamics (SPH) simulations to tackle the same problem. However, here, because we use a grid-based code to solve the equations in 1-D and 2-D, we are able to follow the gas dynamics at much higher spatial resolution and for a longer time than in the 3-D SPH simulations. One of the new features revealed by our simulations is that the cold condensations in the accretion flow initially form long filaments, but at later times, those filaments may break into smaller clouds advected outwards within the hot outflow. Therefore, these simulations may serve as an attractive model for the so-called Narrow Line Region in AGN.'
author:
- 'M. Mo[ś]{}cibrodzka$^{1,\dagger}$, D. Proga$^{1}$'
title: Thermal and dynamical properties of gas accreting onto a supermassive black hole in an AGN
---

Introduction
============

Physics within the central parsecs of a galaxy is dominated by the gravitational potential of a compact supermassive object. In the classical theory of spherical accretion by @bondi:1952, the Bondi radius $R_B$ determines the zone of gravitational influence of the central object and is given by $R_{B} \approx 150 (M_{BH}/10^8 M_{\odot}) (T_{\infty}/10^5 {\rm K})^{-1} \, {\rm pc}$, where $M_{BH}$ is the central object mass and $T_{\infty}$ is the temperature of the uniform surrounding medium. At radii smaller than the Bondi radius, $R_{B}$, the interstellar medium (ISM), or at least part of it, is expected to turn into an accretion flow. The physics of any part of a galaxy is complex. However, near the Bondi radius it is particularly so, because there several processes compete to dominate not only the dynamical state of the matter but also its other states, such as the thermal and ionization states. Therefore, studies of the central parsec of a galaxy require the incorporation of processes and interactions that are typically considered separately in specialized areas of astrophysics, e.g., black hole accretion, the physics of the ISM, and galaxy formation and evolution.
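For a quick sense of the scales involved, the Bondi-radius scaling quoted above can be evaluated directly. The following is a minimal sketch of that arithmetic (our own illustration, not code from the paper):

```python
# Minimal sketch: evaluate R_B ~ 150 pc (M_BH / 1e8 M_sun) (T_inf / 1e5 K)^-1.
def bondi_radius_pc(m_bh_in_1e8_msun=1.0, t_inf_in_1e5_k=1.0):
    return 150.0 * m_bh_in_1e8_msun / t_inf_in_1e5_k

# For the fiducial black hole mass of 1e8 M_sun and T_inf = 1e5 K,
# R_B is ~150 pc, i.e., comparable to the 0.1-200 pc range studied here.
print(bondi_radius_pc())         # -> 150.0
print(bondi_radius_pc(1.0, 2.0)) # a hotter ambient medium halves R_B
```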
One of the main goals of studying the central region of a galaxy is to understand various possible connections between a supermassive black hole (SMBH) and its host galaxy. Electromagnetic radiation provides one such connection. For example, the powerful radiation emitted by an AGN, as it propagates throughout the galaxy, can heat up and ionize the ISM. Subsequently, accretion could be slowed down, stopped, or turned into an outflow if the ISM becomes unbound. Studies of heated accretion flows have a long history. Examples of early and key works include @ostriker:1976, @cowie:1978, @mathews:1978, @stell:1982, @bisnovatyi:1980, @krolik:1983, and @balbus:1989. Accretion flows and their related outflows are very complex phenomena. It is likely that several processes are responsible for driving an outflow, i.e., not just the energy of the radiation, as mentioned above, but also, for example, the momentum carried by the radiation. Therefore, our group explored the combined effects of radiation energy and momentum on accretion flows and on the production of outflows (e.g. @proga:2007; @proga:2008; @kurosawa:2008; @kurosawa:2009a; @kurosawa:2009b; @kurosawa:2009c). These papers reported on results from simulations carried out using Eulerian finite difference codes in which the effects of gas rotation and other complications, such as non-spherical and non-azimuthal effects, were included (see e.g. @janiuk:2008). To identify the key processes determining the gas properties (here, we are mainly concerned with thermal properties) and to establish any code limitations in modeling an accretion flow, in this paper we adopt a relatively simple physical set-up. Namely, the modeled system consists of a central SMBH of mass $M_{BH}= 10^8 M_{\odot}$ and a spherical shell of gas inflowing to the center. The simulations focus on regions between 0.1 and 200 pc from the central object, where the outer boundary is outside of $R_B$. The key difference compared to the Bondi problem is the assumption that the central accretion flow is a point-like X-ray source. The X-rays illuminate the accreting gas, and the gas itself is allowed to cool radiatively under optically thin conditions. To keep the problem as simple as possible, the radiation luminosity is kept fixed instead of being computed based on the actual accretion rate for an assumed radiation efficiency (see also @kurosawa:2009a). To model the presented problem, one needs to introduce extra terms into the energy equation to account for energy losses and gains. The physics of an optically thin gas that is radiatively heated and cooled, in particular its thermal and dynamical stability, has been analyzed in great detail by @field:1965. Therefore, to study the thermal properties of accretion flows or the dynamical properties of thermally unstable gas, it is worthwhile to combine the theories of @bondi:1952 and @field:1965. Notice that our set-up is very similar to that used in the early works from the 1970s and 1980s mentioned above. Some degree of complexity and time variability in a heated accretion flow is expected based on the 1-D results from this early work. A dynamical study of the introduced physical problem requires resolving many orders of magnitude in radial distance from the black hole. Our goal is not only to cover as large a radial span as possible but also to resolve any small-scale structure of the infalling gas. This is a challenging goal.
To study the dynamics of gas in a relatively well controlled computer experiment, we use the Eulerian finite difference code ZEUS-MP [@hayes:2006]. We systematically address the numerical requirements needed to adequately treat the problem of thermally unstable accretion flows. We introduce an accurate heating-cooling scheme that incorporates all relevant physical processes of X-ray heating and radiative cooling. Low optical thickness is assumed, which decouples the fluid and radiation evolution. We resolve three orders of magnitude in the radial range by using a logarithmic grid where the logarithm base is adjusted to the physical conditions. We solve the hydrodynamical equations in one and two spatial dimensions (1-D and 2-D). We follow the flow dynamics over long time scales in order to investigate the non-linear phase of the gas evolution. Notice that most of the earlier work from the 1970s and 1980s focused on linear stability analysis and the early stages of the evolution of the solutions, and considered only 1-D cases. As useful as our group's past studies are, we keep in mind that any result should be confirmed by using more than one technique or approach. Therefore, @barai:2012 (see also @barai:2011) began a parallel effort to model accretion flows including the same physics, but instead of performing simulations with a grid-based code we used the smoothed particle hydrodynamics (SPH) code GADGET-3 [@springel:2005]. Overall, the 3-D SPH simulations presented by @barai:2012 showed that despite this very simple set-up, accretion flows heated by even a relatively weak X-ray source (i.e., with a luminosity around 1% of the Eddington luminosity) can undergo a complex time evolution and can have a very complex structure. However, the exact nature and robustness of these new 3-D results have not been fully established. @barai:2012 mentioned some numerical issues; in particular, artificial viscosity and the relatively poor spatial resolution in SPH, caused by the use of a linear length scale (as opposed to the logarithmic grid in ZEUS-MP), limit the ability to perform a stability analysis in which one wishes to introduce perturbations to an initially smooth, time-independent solution with well controlled amplitude and spatial distribution (SPH simulations have intrinsic limitations in realizing a smooth flow). Therefore, the robustness and stability of the solutions found in the SPH simulations are hard to assess due to the mixing of physical processes and numerical effects. Here, we aim to clarify the physics of these flows and to measure the role of numerical effects in altering the effects of physical processes. Our ultimate goal is to provide insights that could help to interpret observations of AGN. We explore the conditions under which a two-phase, hot and cold, medium near an AGN can form and exist. Such two-phase accretion flows can provide a hint for explaining the modes of accretion observed in galactic nuclei, as well as the formation of the broad and narrow lines that define AGN. We also measure the so-called covering and filling factors and other quantities in our simulations in order to relate the simulations to the origin of the broad and narrow line regions (BLRs and NLRs, respectively). The connection of this work to galaxy evolution and cosmology is that we resolve smaller spatial scales, and hence can probe which physical processes affect the accretion flow. In our models, we can directly observe where the hot phase of accretion turns into a cold one or where an eventual outflow is launched.
In most of the current simulations of galaxies (e.g. @dimatteo:2012 and references therein) these processes are assumed or modeled by simple, so called sub-resolution, approximations because, contrary to our simulations, the resolution is too low to capture the flow properties on adequately small scales. The article is organized as follows. In § \[sec:equation\], we present the basic equations describing the physical problem. In § \[sec:num\_setup\], we show the details of the numerical set up. Results are in § \[sec:results\_1d\] and in § \[sec:results\_2d\]. We summarize the results in § \[sec:discussion\]. Basic Equations {#sec:equation} =============== We solve equations of hydrodynamics: $$\frac{D\rho}{Dt} + \rho {\bf \nabla \cdot v}=0 \label{eq:mass}$$ $$\rho \frac{D{\bf v}}{Dt} = -\nabla P + \rho {\bf g} \label{eq:mom}$$ $$\rho \frac{D}{Dt} (\frac{e}{\rho})= -P {\bf \nabla \cdot v} + \rho {\mathcal L} \label{eq:energy}$$ where $D/Dt$ is Lagrangian derivative and all other symbols have their usual meaning. To close the system of equations we adopt the $P=(\gamma-1)e$ equation of state where $\gamma = 5/3$. Here $g$ is the gravitational acceleration near a point mass object in the center. The equation for the internal energy evolution has an additional term $\rho {\mathcal L}$, which accounts for gas heating and cooling by continuum X-ray radiation produced by an accretion flow near the central SMBH. The heating/cooling function contains four terms which are: (1) Compton heating/cooling ($G_{Compton}$), (2) heating and cooling due to photoionization and recombination ($G_X$), (3) free-free transitions cooling ($L_{b}$) and (4) cooling via line emission ($L_{l}$) and it is given by (@blondin:1994, @proga:2000): $$\rho {\mathcal L} = n^2 (G_{Compton} + G_X - L_b - L_l ) \, \, {\rm [erg \,\, cm^{-3} s^{-1} ]}\label{eq:HC_full}$$ where $$G_{Compton}=\frac{k_b \sigma_{TH}}{4 \pi m_e c^2} \xi T_X \left(1- \frac{4T}{T_X}\right)$$ $$\label{eq:HC_2} G_X= 1.5 \times 10^{-21} \xi^{1/4} T^{-1/2} \left(1-\frac{T}{T_X}\right)$$ $$L_b= \frac{2^5 \pi e^6}{ \sqrt{27} h m_e c^2 } \sqrt{\frac{2\pi k_b T}{m_ec^2}}$$ $$\label{eq:HC_4} L_l= 1.7 \times 10^{-18} \exp\left(- \frac{1.3 \times 10^5}{T}\right) \xi^{-1} T^{-1/2} - 10^{-24}$$ where $T_X$ is the radiative temperature of X-rays and $T$ is the temperature of gas. We adopt a constant value $T_X = 1.16 \times 10^8$ K ($E=10$ keV) at all times. The numerical constants in Equation \[eq:HC\_2\] and \[eq:HC\_4\] are taken from an analytical formula fit to the results from a photoionization code XSTAR [@kallman:2001]. XSTAR calculates the ionization structure and cooling rates of a gas illuminated by X-ray radiation using atomic data. The photoionization parameter $\xi$ is defined as: $$\xi \equiv \frac{4 \pi F_X}{n} = \frac{L_X} {n r^2} = \frac{f_X L_{Edd}} {n r^2} = \frac{f_X L_{Edd} m_p \mu} {\rho r^2} \,\, {\rm [ergs \,\, cm \,\, s^{-1}]}$$ where $F_X$ is the radiation flux, $n=\rho/(\mu m_p)$ is the number density, and $\mu$ is a mean molecular weight. Given $\xi$ definition, notice that $\mathcal L$ is a function of thermodynamic variables but also strongly depends on the distance from the SMBH. The luminosity of the central source $L_X$ is expressed in units of the Eddington luminosities, $f_X \equiv L_X/L_{Edd}$. 
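Since the net heating/cooling rate ${\mathcal L}$ defined above enters everything that follows, we note how it can be evaluated in practice. The sketch below is an illustrative re-implementation of the fitting formulae in Equations \[eq:HC\_full\]--\[eq:HC\_4\] (in cgs units, with the assumed physical constants spelled out); it is not the actual ZEUS-MP heating/cooling module used in the paper.

```python
import numpy as np

# Physical constants in cgs (assumed values)
KB      = 1.380649e-16   # Boltzmann constant [erg/K]
SIGMA_T = 6.6524e-25     # Thomson cross-section [cm^2]
ME_C2   = 8.1871e-7      # electron rest-mass energy [erg]
E_ESU   = 4.80320e-10    # electron charge [esu]
H       = 6.62607e-27    # Planck constant [erg s]
T_X     = 1.16e8         # radiation temperature of the X-rays [K] (10 keV)

def net_heating_cooling(xi, T, n):
    """Return rho*L = n^2 (G_Compton + G_X - L_b - L_l) in erg cm^-3 s^-1,
    following the fitting formulae quoted in the text."""
    g_compton = KB * SIGMA_T / (4.0 * np.pi * ME_C2) * xi * T_X * (1.0 - 4.0 * T / T_X)
    g_x = 1.5e-21 * xi**0.25 / np.sqrt(T) * (1.0 - T / T_X)
    l_b = (2.0**5 * np.pi * E_ESU**6 / (np.sqrt(27.0) * H * ME_C2)
           * np.sqrt(2.0 * np.pi * KB * T / ME_C2))
    l_l = 1.7e-18 * np.exp(-1.3e5 / T) / (xi * np.sqrt(T)) - 1e-24
    return n**2 * (g_compton + g_x - l_b - l_l)

# Example: gas with n = 1 cm^-3, T = 1e5 K, and xi = 10 erg cm s^-1
print(net_heating_cooling(xi=10.0, T=1e5, n=1.0))
```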
The reference Eddington luminosity for the supermassive black hole mass considered in this work is $$L_{Edd} \equiv \frac{ 4 \pi G M_{BH} m_p c}{\sigma_{TH}} = 1.25 \times 10^{46} \left(\frac{M }{10^8 M_{\odot}}\right) \,\, {\rm [ergs \,\, s^{-1}]}$$

Method and Initial Setup {#sec:num_setup}
========================

To solve Equations \[eq:mass\], \[eq:mom\], and \[eq:energy\], we use the numerical code ZEUS-MP [@hayes:2006]. We modify the original version of the code; in particular, we use a Newton-Raphson method to numerically find the roots of Equation \[eq:energy\] at each time step. We have successfully tested the numerical method against an analytical model with heating and cooling. We describe the numerical code tests in the Appendix, showing the development of thermal instability (TI) in a uniform medium. We solve the equations in spherical-polar coordinates. Our computational domain extends in radius from 0.1 to 200 pc. A useful reference unit is the radius of the innermost stable circular orbit of the central black hole: $r_*= 6 GM_{BH}/c^2$. We assume the fiducial black hole mass $M_{BH}=10^8 M_{\odot}$, for which $r_*=8.84 \times 10^{13} {\rm cm}$. The computational domain in these units ranges from $r_i=3484.2 r_*$ to $r_o=6.9683 \times 10^6 r_*$ (or $r_i = 6.6 \times 10^{-4} R_B$ and $r_o=1.3 R_B$, where $R_B = 152$ pc). Since $r_i$ is relatively large in comparison to the BH horizon, we cannot model here the compact regions near the black hole where the X-ray emission is produced. Instead, we parameterize the X-ray luminosity using $f_X$, so that $L_X=f_X L_{Edd}$. We solve the equations for five values of $f_X$=0.0005, 0.008, 0.01, 0.015, and 0.02 (these numbers correspond to the models later labeled as A, B, C, D, and E). As initial conditions for model A (the lowest luminosity), we use an adiabatic, semi-analytical solution from @bondi:1952. For higher luminosities, the integration starts from the last data of the model with the next lower luminosity, provided that the lower-$f_X$ solution is time-independent. This procedure is adopted in order to increase the luminosity gradually rather than suddenly. [^1] Only for steady state solutions (with the assumption that the mass accretion rate is constant from $r_i$ to $r_*$) is the efficiency of conversion of gravitational energy into radiation, $\eta$, related to $f_X$ as $$\frac{\eta}{\eta_r} = \frac{f_X}{ \dot{m}}$$ where $\dot{m}$ is the mass accretion rate in Eddington units ($\dot{M}_{Edd}=L_{Edd}/\eta_r c^2$, and $\eta_r=0.1$ is a reference efficiency), measured from the model data. In our steady state models, $\dot{m} \approx 1$; therefore, the energy conversion efficiency in these cases is approximately $\eta=0.1f_X$. Our boundary conditions put a constraint on the density at $r_o$, which is set to $\rho_o=10^{-23} {\rm g \, cm^{-3}}$. For the other variables, we use outflow boundary conditions at the inner and outer radial boundaries. In the 2-D models, our computational domain extends over $\theta \in (0,90^\circ)$. At the symmetry axis and at the equator we use appropriate reflection boundary conditions. The numerical resolution used depends on the number of dimensions, i.e., in 1-D $N_r=256, 512, 1024, 2048, 4096$; in 2-D $(N_r,N_\theta)=(256,64), (512,128), (1024,256)$. The spacing of the radial grid is set as $dr_i/dr_{i+1}$=1.023, 1.01, 1.0048, 1.002, 1.0008 for $N_r$=256, 512, 1024, 2048, and 4096, respectively.
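For concreteness, a logarithmically spaced radial grid of this kind can be constructed as in the following sketch (our own illustration; the ratio is treated as the zone-size ratio between neighbouring cells, and the exact ZEUS-MP grid generator may differ in detail):

```python
import numpy as np

def log_radial_grid(r_in, r_out, n_r):
    """Build a radial grid with a constant ratio of neighbouring zone sizes, q,
    chosen so that n_r zones span [r_in, r_out] (approximately logarithmic spacing)."""
    q = (r_out / r_in) ** (1.0 / n_r)
    # Geometric series of zone sizes: r_out - r_in = dr0 * (q**n_r - 1)/(q - 1)
    dr0 = (r_out - r_in) * (q - 1.0) / (q**n_r - 1.0)
    edges = r_in + dr0 * np.concatenate(([0.0], np.cumsum(q ** np.arange(n_r))))
    return edges, q

PC = 3.0857e18  # parsec in cm
edges, q = log_radial_grid(0.1 * PC, 200.0 * PC, 256)
print(q)                              # ~1.03 for N_r = 256; the paper quotes ~1.023,
                                      # the exact value depends on the grid generator
print(edges[0] / PC, edges[-1] / PC)  # 0.1 ... 200
```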
The number of grid points in the second dimension are chosen so that the linear size of the grid zone in all directions is similar (i.e., $r_i \Delta \theta_j \approx \Delta r_i$). Results: 1-D models {#sec:results_1d} =================== 1-D Steady Solutions -------------------- We begin with presenting the basic characteristics of 1-D solutions. Table \[tab:1d\] shows a list of all our 1-D simulations. Each simulation was performed until $t_{f}=20$ Myr equivalent to 4.7 dynamical time scales at the outer boundary $r_o=200$pc ($t_{dyn}=t_{ff}=\sqrt{r_o^3/2GM_{BH}}=4.21$ Myr). Only some of the numerical solutions settled down to a time-independent state at $t_{f}$. We focus on analyzing two representative solutions, that are steady-state at $t_{f}$: 1D256C and 1D256D, with the X-ray luminosity of the former $f_X=10^{-2}$, and of the latter $f_X=1.5 \times 10^{-2}$. Note that these solutions were obtained using the lowest resolution. We find these two solutions instructive in showing the thermal properties of the gas. ---------- ---------------------- ------- --------- -------------------------------- ---------------------------- ------------------------------- -------------------- --------- Model ID $f_X$ $N_r$ $t_f$ $\langle\dot{M}\rangle_t$ $\langle\chi\rangle_{r,t}$ $\langle\tau_{X,sc}\rangle_t$ Max($\tau_{X,sc}$) comment \[Myr\] ${\rm [M_{\odot} \, yr^{-1}]}$ 1D256A $5 \times 10^{-4}$ 256 20 2.0 3.9 0.44 0.49 s 1D512A $5 \times 10^{-4}$ 512 20 2.0 6.1 0.45 0.51 s 1D1024A $5 \times 10^{-4}$ 1024 20 2.0 6.8 0.46 0.52 s 1D2048A $5 \times 10^{-4}$ 2048 20 2.0 6.8 0.46 0.53 s 1D4096A $5 \times 10^{-4}$ 4096 20 2.0 6.8 0.46 0.53 s 1D256B $8 \times 10^{-3}$ 256 20 1.8 0 0.1 0.12 s 1D512B $8 \times 10^{-3}$ 512 20 1.8 0 0.1 0.13 s 1D1024B $8 \times 10^{-3}$ 1024 20 1.8 0 0.1 0.13 s 1D2048B $8 \times 10^{-3}$ 2048 20 1.9 0 0.1 1.7 s 1D4096B $8 \times 10^{-3}$ 4096 20 1.95 0.05 0.13 5.8 ns 1D256C $1 \times 10^{-2}$ 256 20 1.7 0 0.09 0.09 s 1D512C $1 \times 10^{-2}$ 512 20 1.8 0 0.09 0.09 s 1D1024C $1 \times 10^{-2}$ 1024 20 1.8 0 0.11 10.4 s 1D2048C $1 \times 10^{-2}$ 2048 20 1.8 0.11 0.13 9.3 ns 1D256D $1.5 \times 10^{-2}$ 256 20 1.5 0.7 0.2 24 s 1D512D $1.5 \times 10^{-2}$ 512 20 1.5 1.9 0.23 28 ns 1D1024D $1.5 \times 10^{-2}$ 1024 20 2.1 5.8 0.55 61 ns 1D256E $2 \times 10^{-2}$ 256 20 1.23 4.1 0.2 30 ns ---------- ---------------------- ------- --------- -------------------------------- ---------------------------- ------------------------------- -------------------- --------- Figure \[fig:st1d\] presents the overall structure of model $1D256~C$ and $D$ (model C and D in the left and right column, respectively). Panels from top to bottom in Figure \[fig:st1d\] display: radial profiles of gas density, gas temperature overplotted with the Mach number (red line with the labels on the right hand side of the panels), the net heating/cooling rate plotted together with contribution from each physical process (see Equation \[eq:HC\_full\]), the entropy S, and the bottom row shows gas temperature as a function of $\xi$. In the bottom panels, the red line indicates the T-$\xi$ relation for radiative equilibrium (i.e. solving ${\mathcal L}(\xi,T)=0$ for each $T$). The green line indicates a $T-\xi$ relation for a gas being adiabatically compressed due to the geometry of the spherical accretion ($T \propto \xi^2$), while the blue line for a constant pressure gas ($T \propto \xi^1$). 
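The radiative equilibrium curve shown in these panels (the red line, defined by ${\mathcal L}(\xi,T)=0$) can be traced numerically with a simple root finder. The sketch below is our own illustration; it assumes the `net_heating_cooling` helper from the earlier sketch and uses $n=1$, since the density enters ${\mathcal L}$ only through $\xi$ and an overall positive factor.

```python
import numpy as np
from scipy.optimize import brentq

# Assumes net_heating_cooling(xi, T, n) from the earlier sketch.
def equilibrium_temperature(xi, t_min=1e3, t_max=1e9):
    """Solve L(xi, T) = 0 for T at fixed xi (the radiative-equilibrium curve in
    the T-xi plane). If several roots exist, this returns one of them."""
    f = lambda T: net_heating_cooling(xi, T, n=1.0)
    return brentq(f, t_min, t_max)

xi_grid = np.logspace(-1, 4, 50)
T_eq = [equilibrium_temperature(xi) for xi in xi_grid]
```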
The 1D256C and D solutions differ mainly in the position of the sonic point and in the fact that the model 1D256D is strongly time dependent for a short period of time during initial evolution (see below). However, in most part the solution share several common properties. In particular, in both solutions, the gas is nearly in radiative equilibrium at large radii whereas, at small radii (below $r\approx2\times10^{19}$cm, where $T>2 \times10^6$K) they depart from the equilibrium quite significantly. In the inner and supersonic parts of the solutions, $T$ scales with $\xi$ as if gas was under constant pressure. At large radii where the solutions are nearly in the radiative equilibrium, the net heating/cooling is not exactly zero. One can identify, four zones where either cooling or heating dominates. In the most inner regions where the gas is supersonic, adiabatic heating is very strong and the dominant radiative process is cooling by free-free emission. At the outer radii, the cooling in lines and heating by photoionization dominates. For models considered in this paper the Compton cooling is the least important. ![ Structure of 1-D accretion flow in run 1D256C (left column, $f_X=1\times10^{-2}$) and 1D256D (right column, $f_X=1.5\times10^{-2}$). Each panel is a snapshot taken at t=20 Myr. Panels from top to bottom show: density, temperature with Mach number (Mach number scale is on the right hand side), heating/cooling rates, and entropy S. The dashed vertical line in top panels marks the position of the sonic point. In panel with heating/cooling rates the black solid line is a net heating/cooling and color lines indicate particular physical process included in the calculations: Compton heating (red line), photoionization heating (green line), bremsstrahlung cooling (magenta-line), cooling through line emission (blue-line). The bottom panels display the gas temperature as a function of photoionization parameter; color lines indicate gas in radiative equilibrium (red), constant pressure conditions (blue) and free-fall compression (green).[]{data-label="fig:st1d"}](fig1_col.eps) Inspecting the bottom panels in Figure \[fig:st1d\], one can suspect the gas is in the middle section of the computational domain to be thermally unstable because the slope of the $T-\xi$ relation (in the log-log scale) is larger than 1. Notice also that in both solutions the entropy is a non-monotonic function of radius. The regions where the entropy decreases with increasing radius correspond to the regions where there is net heating and the Schwarzschild criterion indicates convective instability at these radii. We therefore conclude that both solutions could be unstable. We first check more formally the thermal stability of our solutions. Thermal Stability of Steady Accretion Flows ------------------------------------------- The linear analysis of the growth of thermal modes under the radiative equilibrium conditions (${\mathcal L}(\rho_0,T_0) = 0$) has been examined in detail by @field:1965 (see Appendix for basic definitions). In Figure \[fig:tibv\], in the top panels (left and right column correspond again to model 1D256C and D), we show the radial profiles of various mode timescales. The timescales, $\tau=1/n$, are calculated using definitions \[eq:Np\] and \[eq:Nv\]. The growth timescale of short wavelength, isobaric condensations $\tau_{TI}=-1/N_p$ is positive (thermally unstable zone marked with the dotted line) in a limited radial range between about 10 and 100 pc. 
The location of the thermally unstable zone depends on the central source luminosity, and it moves outward with increasing $f_X$. The long wavelength, isochoric perturbations are damped, at all radii, on timescales of $\tau_{v}=-1/N_v$ (faster than TI development). The short wavelength nearly adiabatic, acoustic waves are damped as well, and $\tau_{ac}=-2/(N_v-N_p)$. In Figure \[fig:tibv\], the dashed line is the accretion timescale $\tau_{acc}=r/v$. Within the thermally unstable zone, $\tau_{TI}$ is short in comparison to $\tau_{acc}$, in both models. ![ Left and right panels correspond, respectively to runs 1D256 C and D. Top panels show the instability growth rates in comparison to the accretion time scale ($\tau_{acc}=r/v$, dashed line). The time scale for the short wavelength isobaric mode growth is displayed as the dotted line while the damping rate as solid line ($\tau_{N_p}$). Other two lines show the long-wavelength isochoric mode damping rate $\tau_{N_v}$ (heavy line) and the effective acoustic waves damping time scale $\tau_{N_v}$ (light line). Middle panel: The dashed line is $\tau_{acc}$ and solid line is $t_{BV}=1/\omega_{BV}$, where $\omega_{BV}^2>0$ is the ${\rm Brunt-V\ddot{a}is\ddot{a}l\ddot{a}}$ oscillation frequency for a spherical system. Solid lines show the regions which are unstable convectivelly. The dotted line indicates region where $\omega_{BV}^2<0$ and oscillations are possible. Bottom panels: the derivative $d \ln T/ d \ln \xi$ as a function of radius is shown as a solid line. $(d \ln T/ d \ln \xi)_{ad}$ for an adiabatic inflow is marked as dotted line, and dashed line is the same derivative for radiative equilibrium conditions. Horizontal one indicates slope of 1. \[fig:tibv\]](fig2_col.eps) @balbus:1986, @balbus:1989, @mathews:1978, (and also @krolik:1983) extended the analysis by @field:1965 to spherical systems with gravity, in more general case when initially the gas is not in the radiative equilibrium. Their approximate solution gives the formula for linear evolution of the short wavelength, isobaric, radial perturbation as it moves with smooth background accretion flow (Equation 23 in @balbus:1986 or Equation 4.12 in @balbus:1989). Since the two presented solutions are close to radiative equilibrium, the approximate formula for the growth of a comoving perturbation given by @balbus:1989 reduces to $$\delta (r) = \frac{\delta \rho}{\rho} =\delta_s \exp \left( \int_{r_s}^{r_f} - \frac{N_p(r')}{v(r')} dr' \right), \label{eq:amp}$$ where $N_p(r')$ is a locally computed growth rate of a short wavelength, isobaric perturbation as defined in the Appendix or @field:1965, $r'$ is radius where $N_p(r') < 0$, and $\delta_s$ is an initial amplitude of a perturbation at some starting radius $r=r_s$. Using Equation \[eq:amp\], the isobaric perturbation amplification factors are $\delta/\delta_s \approx 10^{10}, 10^{16}, 10^{19}$ and $10^{33}$, for models 1D256A, B, C, and D, respectively. Notice that these amplification factors are calculated for the asymptotic, maximum physically allowed growth rate, $n=-N_p$, which might not be numerically resolved. To quantify the role of TI in our simulations we ought to address the following question. What is the minimum amplitude and wavelength of a perturbation in our computer models? The smallest amplitude variability is due to machine precision errors, $\epsilon_{machine} \approx 10^{-15}$ (for a double float computations). 
The typical $\lambda$ of these numerical fluctuations are of the order of the numerical resolution, $\Delta r_i$. The discretization of the computational domain affects the TI growth rates in our models in two ways: (1) the numerical grid refinement limits the size of the smallest fragmentation that can be captured; (2) the rate at which the condensation grows in the numerical simulations depends on number of points resolving a condensation. As shown in Appendix the perturbation of a given $\lambda$ has to be resolved by 20, or more, grid points. A wavelength $\lambda_0$ for which $n = - 0.9973 \times N_p$ is shown in Figure \[fig:res\] together with $\Delta r_i $ as a function of radius for models with $N_r$=256, 512, 1024, 2048, and, 4096 grid points. In low resolution models we marginally resolve $\lambda_0$. We therefore expect the TI fragmentations to grow slower than theoretical estimates. Reduction of the growth rate due to these numerical effects even by a factor of a few is enough to suppress variability because of the strong exponential dependence. ![Grid spacing (red lines) in models with $N_r$=256, 512, 1024, 2048, and 4096 points and $\lambda_0$ (black, dotted line) as a function of radius in models 1D256 C (left panel) and D (right panel). \[fig:res\]](fig3_col.eps) Thermal mode evolution depends not only on the numerical effects but also other processes affecting the flow. Figure \[fig:time\_scales\] shows the comparison of time scales of physical processes involved: the compression due to geometry of the inflow and stretching due to accretion dynamics. We expect that any eventual condensation formed from the smooth background which leaves the thermally unstable zone, would accrete with supersonic background velocity. From the continuity equation, the co-moving density evolution is a balance of two terms $(1/\rho) (D\rho/Dt)=-2v/r - \partial v/\partial r$, i.e., the compression and tidal stretching. The amplitude of condensation grows in regions where there is compression due to geometry and decreases in regions where fluid undergoes acceleration - it stretches the perturbation. In the models 1D256D and C interior of the TI zone, the evolution of the perturbation is dominated by compression because the compression time scale is the shortest. ![Time scales in 1-D, stationary models 1D256 C (left panel) and 1D256 D (right panel): accretion time scale ($\tau_{acc}$, dashed line), compression time scale ($\tau_c$, solid line), tidal stretching time scale ($\tau_s$, dotted-dashed line), and condensation growth time scale ($\tau_{TI}$, dotted line). \[fig:time\_scales\]](fig4_col.eps) Convective Stability of Steady Accretion Flows ---------------------------------------------- In this subsection, we examine in more detail convective stability of our solutions. In Figure \[fig:tibv\], (middle panels), we compare the accretion time scale $\tau_{acc}$ and the ${\rm Brunt-V\ddot{a}is\ddot{a}l\ddot{a}}$ time scale $\tau_{BV}=\frac{1}{\omega_{BV}}$ associated with the development of convection. The frequency $\omega_{BV}$ is defined as $\omega_{BV}^2 \equiv (-\frac{1}{\rho} \frac{\partial P}{\partial r}) \frac{\partial \ln S}{dr}$. The convectivelly unstable regions are marked as solid lines ($\omega_{BV}^2 >0$). The convectivelly unstable zones overlap with the thermally unstable zones. Since $\tau_{acc} \ll \tau_{BV}$ convective motions might not develop, at least at the linear stage of the development of TI. 
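For reference, the Brunt-Väisälä frequency entering this comparison can be computed directly from the 1-D radial profiles. The following is a minimal post-processing sketch (our own illustration); it assumes snapshot arrays of $r$, $\rho$, and $P$, takes $S$ to be the entropy function $P/\rho^\gamma$, and follows the convention of the text, in which $\omega_{BV}^2>0$ marks convectively unstable zones.

```python
import numpy as np

GAMMA = 5.0 / 3.0

def brunt_vaisala_squared(r, rho, P):
    """omega_BV^2 = (-1/rho dP/dr) * dlnS/dr, with S assumed to be P / rho^gamma;
    following the text's convention, omega_BV^2 > 0 flags convective instability."""
    lnS = np.log(P) - GAMMA * np.log(rho)
    dP_dr = np.gradient(P, r)
    dlnS_dr = np.gradient(lnS, r)
    return (-dP_dr / rho) * dlnS_dr

# Example usage on arrays r, rho, P loaded from a 1-D snapshot:
# w2 = brunt_vaisala_squared(r, rho, P)
# unstable = w2 > 0                      # convectively unstable zones
# tau_BV = 1.0 / np.sqrt(np.abs(w2))     # time scale to compare with tau_acc = r/v
```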
In the bottom panels of Figure \[fig:tibv\] we show the logarithmic derivatives of $d \ln T / d \ln \xi$ that could be used to graphically assess the stability of the flow. This can be done by comparing the derivatives (the slopes of the $ln T - \ln \xi$ relation) for three cases: model data (solid line where $T$ and $\xi$ are taken directly from the simulations), purely adiabatic inflow (dotted line, assuming that the velocity profile is same as in the numerical solution), and radiative equilibrium conditions (dashed line). In particular, the regions where the solid line is above the red line correspond to the potentially TI zones. The regions where the dotted line is below the solid line correspond with the zone where the flow is potentially convectivelly unstable. The conclusion regarding the flow stability is consistent with the conclusion reached by analyzing the time scales shown in the top and middle panels of Fig. 2. Other Physical Consequences of Radiative Heating and Cooling - obscuration effects ---------------------------------------------------------------------------------- ![ Fraction of central illuminating source radiative energy intrinsically absorbed (upper panels) and emitted (bottom panels) by gas per second as a function of time. We show the steady state solutions $1D256C$ and $1D256D$ in left and right panels, respectively. Solid lines show the net absorption/emission and dashed lines indicate the intrinsic absorption and emission. \[fig:en1d\]](f5.eps) The growth of the thermal instability leads to the development of a dense cold clouds (shells in 1-D models; e.g. variable phase in model 1D256D). The enhanced absorption in the dense condensations may make them optically thick. Here we check if the time-dependent models are self-consistent with our optically thin assumption. Figure \[fig:en1d\] shows the amount of energy absorbed and emitted by the gas (heating and cooling rates integrated over a volume at each time moment) in comparison to the luminosity of the central source in models 1D256C and D. Solid lines show the net rate of the energy exchange between radiation and matter (cooling function ${\mathcal L}$ integrated over the simulation volume) while dashed lines indicate the intrinsic absorption and emission (heating and cooling terms used in ${\mathcal L}$ are integrated independently). The net heating-cooling rate is mostly much lower than unity reflecting the fact that in the steady state the gas is nearly in a radiative equilibrium. During a variable phase (part of model 1-D256D) the energy absorbed by the accretion flow (black solid line) becomes comparable to the X-ray luminosity of the central black hole. During this variable phase the optical thickness of accreting shells can increase up to $\tau_{X,sc} \approx 20$ where the majority contribution to opacity is due to photoionization absorption. The average optical thickness increases in models with higher resolution indicating that the flow is more variable and condensations are denser. This increase in optical depth is related to shells condensating much faster in runs with higher resolutions. The dense condensations falling towards the center could reduce the radiation flux in the accretion flow at larger distances. It is beyond the scope of the present paper to investigate the dependence of the flow dynamics on the optical thickness effects and we leave it to the future study. Significant X-ray absorption is related also to transfer of momentum from radiation field to the gas. 
To estimate the importance of the momentum exchange between radiation and matter, one can compute a relative radiation force: $$f_{force} \equiv \frac{\sigma_{sc}+\sigma_{X}}{\sigma_{TH}} f_X \label{eq:rforce}$$ where $\sigma_X$ is the energy-averaged X-ray cross-section. The momentum transfer is significant when $f_{force}>1$. Using our expression for the heating rate due to X-ray photoionization, we have $\sigma_X/\sigma_{TH}=H_X / (n F_X) = 2.85\times 10^4 \xi^{-3/4} T^{-1/2}$ (see § \[sec:equation\]). Even for a dense cold shell, $f_{force}$ is at most 0.1 (in the case when $\tau_{max}\approx60$). Therefore the radiation force is not likely to directly launch an outflow. However, the situation may change when optical depth effects are taken into account. $\dot{M}$ Evolution ------------------- We end our presentation of the 1-D results with a few comments on the time evolution of the mass accretion rate, $\dot{M}$. Figure \[fig:mdot1d\] displays $\dot{M}$ vs time measured for all of our 1-D models. One can divide the solutions into two subcategories: steady and unsteady, where $\dot{M}$ varies from small fluctuations to large changes. For a given $f_X$, the time behavior of the solution depends on the resolution, due to the effects described above. In the variable models a fraction of the accretion proceeds in the form of a cold phase, defined as all gas with $T < 10^5$ K. Column 6 in Table \[tab:1d\] shows the ratio of cold to hot mass accretion rates, $\chi$, computed by averaging $\dot{M}$'s over the radius and simulation time. We average $\dot{M}$'s over $r<100$ pc because the cadence of our data dumps is comparable to the dynamical time scale at 100 pc. The larger the luminosity, the more matter is accreted via the cold phase. However, the maximum value of $\chi$ is of the order of a few, so the dominance of the cold gas is not very strong. ![ Mass accretion rate in 1-D solutions for $f_X=0.0005, 0.008, 0.01$, and $0.015$. Different colors show $\dot{M}$ for various numbers of grid points: $N_r$=256 (blue), 512 (red), 1024 (green), 2048 (magenta), 4096 (black). \[fig:mdot1d\]](f6.eps)

  ------------- ---------------------- ------- ------------ --------------- --------------------------------------------------------- ------------------------------ ---------------------------- ---------------------------------------- -------------------- -------------------
  Model ID      $f_X$                  $N_r$   $N_\theta$   $t_f$ \[Myr\]   $\langle\dot{M}\rangle$ ${\rm [M_{\odot} \, yr^{-1}]}$    $\langle f_{Vol}\rangle_{t}$   $\langle\chi\rangle_{r,t}$   $\langle\tau_{X,sc}\rangle_{\theta,t}$   Max($\tau_{X,sc}$)   final state
  2D256x64A     $5 \times 10^{-4}$     256     64           20              2.0                                                       $1\times10^{-6}$               0                            0.45                                     0.46                 smooth
  2D256x64B     $8 \times 10^{-3}$     256     64           15.4            2.04                                                      $1\times10^{-6}$               $10^{-6}$                    0.1                                      0.99                 smooth
  2D512x128B    $8 \times 10^{-3}$     512     128          20              1.95                                                      $1\times10^{-4}$               0.02                         0.11                                     4.7                  clouds
  2D1024x256B   $8 \times 10^{-3}$     1024    256          1.83            1.84                                                      $1.5\times10^{-3}$             0.39                         0.19                                     24.                  clouds
  2D256x64C     $1 \times 10^{-2}$     256     64           11.8            1.94                                                      $7\times 10^{-5}$              0.15                         0.11                                     2.5                  smooth
  2D512x128C    $1 \times 10^{-2}$     512     128          20              1.88                                                      $5\times10^{-4}$               0.09                         0.13                                     17.3                 clouds
  2D1024x256C   $1 \times 10^{-2}$     1024    256          1.12            1.95                                                      $4\times10^{-3}$               0.5                          0.24                                     70                   clouds
  2D256x64D     $1.5 \times 10^{-2}$   256     64           12              1.57                                                      $5\times10^{-3}$               0.3                          0.14                                     37.8                 clouds
  2D512x128D    $1.5 \times 10^{-2}$   512     128          11              1.6                                                       $3\times10^{-3}$               0.43                         0.12                                     13.2                 outflow, filaments
  ------------- ---------------------- ------- ------------ --------------- --------------------------------------------------------- ------------------------------ ---------------------------- ---------------------------------------- -------------------- -------------------

Results: 2-D models {#sec:results_2d} =================== Seeding the TI -------------- To investigate the growth of instabilities in 2-D, we solve eqs. \[eq:mass\], \[eq:mom\], and \[eq:energy\] for the same parameters $M_{BH}$ and $f_X$ as in § \[sec:results\_1d\], but on a 2-D, axisymmetric grid with the $\theta$ angle ranging from 0 to $90 \deg$. We use the three sets of numerical resolutions described in § \[sec:num\_setup\]. To set the initial conditions in the axisymmetric models, we copy the solutions found in the 1-D models onto the 2-D grid. In the case of the time-independent 1-D models, our starting point is the data from $t=t_{f}$. We checked, for example, that the runs 2D256x64 A, B, C, and D (which are the 2-D versions of models 1D256A, B, C, and D) are time-independent at all times, as expected. In the case of the higher resolution models, for which 1-D steady state models do not exist, we adopt quasi-stationary data from the early evolution of the 1-D run (at $t$ of a fraction of a Myr), during which the flow has already relaxed from its initial conditions but the TI fluctuations have not yet developed. Models which are time varying in 1-D develop dynamically evolving spherical shells in 2-D, as expected, also indicating that our numerical code preserves the symmetry in higher dimensions. To break the symmetry in the 2-D models, we perturb the smooth solutions adopted as initial conditions. The perturbation of the smooth flow is seeded everywhere and has a small amplitude randomly chosen from a uniform distribution. The new density at each point is $\rho=\rho_0 (1+ Amp*rand)$, where $rand$ is a random number, $rand \in (-1,1)$, and the maximum amplitude is $Amp=10^{-3}$. To seed isobaric, divergence-free fluctuations, the hydrodynamical variables other than $\rho$ are left unchanged (a minimal code sketch of this seeding procedure is given below). The amplitude $Amp$ is chosen to be much higher than $\epsilon_{machine}$, in order to investigate the development and evolution of the strongly non-linear TI on relatively short time scales, starting directly from the linear regime. The list of all perturbed, 2-D models is given in Table \[tab:2d\]. Formation of Clouds, Filaments, & Rising Bubbles ------------------------------------------------ For luminosities $L_X < 0.015~L_{Edd}$, the 2-D models show similar properties to the 1-D models. The gas is thermally and convectively unstable within the computational domain, and we observe that very tiny fluctuations in an initially smooth, spherically symmetric accretion flow grow first linearly and then non-linearly. Since the symmetry is broken, the cold phase of the accretion forms many small clouds. For $L_X = 0.015~L_{Edd}$ or higher, the cold clouds continue to accrete, but in some regions a hot phase of the gas starts to move outward.
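Returning to the seeding procedure of the previous subsection, the sketch below applies the random density perturbation $\rho=\rho_0 (1+ Amp \, rand)$ to a 2-D $(r,\theta)$ grid. It is only an illustration: the array names and grid shape are assumptions, and, as in the text, only the density field is modified so that the fluctuations are seeded as isobaric and divergence free.

```python
import numpy as np

def seed_density_perturbation(rho0, amp=1.0e-3, seed=None):
    """Return a perturbed copy of the density array rho0.

    rho0 : ndarray of unperturbed densities on the (r, theta) grid
    amp  : maximum relative amplitude (10^-3 in the text)
    Velocity and internal energy are intentionally left untouched.
    """
    rng = np.random.default_rng(seed)
    rand = rng.uniform(-1.0, 1.0, size=rho0.shape)   # rand in (-1, 1)
    return rho0 * (1.0 + amp * rand)

# Toy example standing in for the axisymmetric domain
nr, ntheta = 512, 128
rho0 = np.ones((nr, ntheta))                   # placeholder smooth solution
rho = seed_density_perturbation(rho0, amp=1.0e-3, seed=42)
print(np.abs(rho / rho0 - 1.0).max())          # bounded by amp = 1e-3 by construction
```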
![image](fig7_reduced.eps) In Figure \[fig:st2d\], we show three snapshots of representative 2-D model 2D512x128D at various times (t = 3, 6 and 11.8 Myrs). This model has the best resolution and the highest luminosity for which we are able to start the evolution from nearly steady state conditions. Columns from left to right show density, temperature, and total gas velocity overplotted with the arrows indicating the direction of flow. Initially (at t=3 Myr) the smooth accretion flow fragments into many clouds, which are randomly distributed in space. The cooler, denser regions are embedded in a warm background medium. The colder clouds are stretched in the radial direction and they have varying sizes. This initial phase of the evolution is common for all models in Table \[tab:2d\]. The phase where many cold clouds accrete along with the warm background inflow is transient. At a later stage (t=6 Myr, middle panels), model 2D512x128D shows a systematic outflow in form of rising, hot bubbles. The outflow is caused by the pressure imbalance between the cold and hot matter and buoyancy forces. The hot bubbles expand at speeds of a few hundreds km/s. Despite of the outflow, the accretion is still possible. During the rising bubble phase, the smaller clouds merge and sink towards the inner boundary as streams/filaments. However, even this phase is relatively short-lived. Bottom panels in Figure \[fig:st2d\] show the later phase of evolution when some of the filaments occasionally break into many clouds (this process takes place between 10 and 50 pc). These ’second generation’ clouds occasionally flow out together with a hot bubble. Along the X-axis, we see an inflow of a dense filament. To quantify the properties of clumpy accretion flow, we measure the volume filling factor of a cold gas $f_{vol}$, defined as: $$f_{Vol}=V_{cloud}/V_{tot}$$ where $V_{cloud}$ is the volume occupied by gas of $T<10^{5}$ K, and $V_{tot}$ is the total volume of the computational domain. In model 2D512x128D, the time-averaged $f_{Vol}$ is $\langle f_{Vol}\rangle=3\times10^{-3}$. The time evolution of $f_{vol}$ within 60 pc is shown in Figure \[fig:vol\] (black, solid line). The $f_{Vol}$ is variable and at the moment of the outflow formation, $f_{Vol}$ suddenly decreases by a factor of about 4. For comparison $f_{Vol}$ calculated during run 2D256x64D is also shown (blue, dashed line). Run 2D256x64D has the same physical parameters as 2D512x128D, however, no outflow forms. In the latter case, $f_{vol}$ is less variable and larger. In Table \[tab:2d\], we gather the time averaged $\langle f_{Vol}\rangle$ for all 2-D solutions. Measuring $f_{Vol}$ allows to quantify whether the perturbed accretion flow returns to its original, smooth state. We find that this happens when the $\langle f_{Vol}\rangle\approx 10^{-5}$ or smaller (models 2D256x64A, B, and C). ![Evolution of the volume filling factor $f_{vol}$ in model 2D512x128D (black, solid line) and 2D256x64D (dashed, blue line). \[fig:vol\]](f8.eps) $\dot{M}$ Evolution ------------------- Figure \[fig:mdot2d\] presents $\dot{M}$ through the inner boundary measured as a function of time. In most cases (except model 2D256x64A), $\dot{M}$ becomes stochastic instantly with spikes corresponding to the accretion of colder but denser clouds similar to those in Figure \[fig:st2d\] (upper panels). Similar to 1-D models, one can divide the solution into two types: steady and unsteady state. In the latter, $\dot{M}$ fluctuates on various levels depending on $f_X$. 
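Returning to the volume filling factor defined above, $f_{Vol}=V_{cloud}/V_{tot}$ is straightforward to evaluate from gridded data. The sketch below assumes an axisymmetric spherical-polar grid and illustrative array names; it is not taken from the simulation code.

```python
import numpy as np

def cold_filling_factor(T, r_edges, theta_edges, T_cold=1.0e5):
    """Volume filling factor of cold gas, f_Vol = V(T < T_cold) / V_tot.

    T           : temperatures, shape (nr, ntheta)
    r_edges     : radial cell edges, length nr+1
    theta_edges : polar-angle cell edges in radians, length ntheta+1
    Assumes axisymmetry (full 2*pi in azimuth).
    """
    dr3 = np.diff(r_edges**3) / 3.0              # radial factor of the cell volume
    dcos = -np.diff(np.cos(theta_edges))         # polar factor
    vol = 2.0 * np.pi * np.outer(dr3, dcos)      # cell volumes, shape (nr, ntheta)

    cold = T < T_cold
    return vol[cold].sum() / vol.sum()
```

Restricting `r_edges` to radii within 60 pc reproduces the selection used for Figure \[fig:vol\].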
![Mass accretion rate in 2-D models with initially seeded random perturbations. Various colors code the $\dot{M}$ in models calculated with various grid resolutions: 2D256x64 (blue), 512x128 (red), 1024x256 (green). Results are sensitive to the resolution same as in 1-D models. \[fig:mdot2d\]](f9.eps) Table \[tab:2d\] lists several characteristics of our 2-D simulations, for example, the ratio $\langle\chi\rangle_t$ averaged over time. In 2-D models this variable is smaller in comparison to 1-D due to geometry of the clouds. The maximum value of $\langle\chi\rangle_t$ is less than unity. This indicates that multi-dimensional effects (specifically development of convection) promote hot phases accretions. We plan to investigate this issue in future by carring out 3-D simulations. We find that a large scale outflow forms only in run 2D512x128D. But even in this case, the $\dot{M}$ is not significantly affected by the outflow. Figure \[fig:mdotinout\] shows the mass outflow rate (dashed line), inflow rate (dotted lines) and total mass flow rate (solid line) as a function of radius. The same types of lines show $\dot{M}$ for various times of the simulation (t=3, 6 and 11.8 Myr, green, blue and black lines, respectively) and they are averaged over $\theta$ angle. The rising bubble originates at about 10 pc in this case. We anticipate that large scale outflow are common and significant for high luminosity cases (i.e., for $f_X > 0.02$). ![Model 2D512x128D: In-, out- and total mass flow rate as a function of radius for three time moments shown in Figure \[fig:st2d\] (green, blue, and black correspond to t=3, 6 and 11.8 Myr). The solid lines mark the inflow rates while the dashed line - outflow rate. The dotted line is the total mass flow. \[fig:mdotinout\]](f10.eps) Obscuration Effects ------------------- Here we again check whether the cloud opacity might affect our results. The averaged optical thickness of the filaments and clouds is similar (see Table \[tab:2d\], columns 8 and 9). In Figure \[fig:energy2d\_all\] we show how much energy is absorbed and emitted in run 2D512x128D during the evolution. The figure shows the intrinsic absorption and emission integrated over the entire computational domain. We next calculate $\langle\tau_{X,cs}\rangle$ (optical thickness due to absorption, averaged over angles and times) and maximum value of $\tau_{X,cs}$ that occurred during the evolution. In case of the largest optical depth of $\tau \approx 70$ (in run 2D1024x256C) the radiation force coefficient from Equation \[eq:rforce\]: $f_{force} \approx 0.2$ which, as in 1-D models, is small but might not be negligible. Therefore, we are planning to explore the effects of optical depth in a follow-up paper. ![Fraction of central illuminating source radiative energy intrinsically absorbed (upper panel) and emitted (bottom panel) by gas per second as a function of time in models 2D512x128 (red, solid line) and 2D256x64D (blue, dotted line). \[fig:energy2d\_all\]](f11.eps) Summary and Discussions {#sec:discussion} ======================= In this work we show the evolution of thermal instabilities in gas accreting onto a supermassive black hole in an AGN. A simplified assumptions made in this work, in particular constant X-ray luminosity emitted near the central SMBH regardless of the $\dot{M}$, allows to follow the development of TI from the linear to strongly non-linear and dynamical stage up to luminosities of $L \approx 1.5 \times 10^{-2}~L_{Edd}$. 
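As an aside on the radial mass-flux decomposition shown in Figure \[fig:mdotinout\], the inflow, outflow, and net rates can be obtained by integrating $\rho v_r r^2$ over solid angle and splitting the integrand by the sign of $v_r$. This is a minimal sketch under the assumption of an axisymmetric grid; the array names are illustrative.

```python
import numpy as np

def mass_flow_rates(r, theta, rho, vr):
    """Angle-integrated mass flow rates as a function of radius.

    r     : zone-centre radii, shape (nr,)
    theta : zone-centre polar angles in radians, shape (ntheta,)
    rho   : density, shape (nr, ntheta)
    vr    : radial velocity (negative for inflow), shape (nr, ntheta)

    Returns (mdot_in, mdot_out, mdot_net), each of shape (nr,).
    With this sign convention the inflow rate is negative; plots of the kind
    shown in the figure typically use its magnitude.
    """
    dtheta = np.gradient(theta)
    weight = 2.0 * np.pi * np.sin(theta) * dtheta      # solid-angle weights (axisymmetry)
    flux = rho * vr * r[:, None] ** 2                  # rho v_r r^2 per steradian

    mdot_in = (np.where(vr < 0.0, flux, 0.0) * weight).sum(axis=1)
    mdot_out = (np.where(vr > 0.0, flux, 0.0) * weight).sum(axis=1)
    return mdot_in, mdot_out, mdot_in + mdot_out
```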
In our 1-D models the TI is seeded by numerical errors which might be non-isobaric and are initially under-resolved. In the initial phase, the TI growth rate is smaller than predicted by theory. The rate is affected by grid resolution which leads to the formation of cold clouds of various sizes and density contrasts. This is reflected in the mass accretion rate fluctuating at different amplitude and rate for the same physical conditions but different resolutions. One cannot avoid dealing with these numerical difficulties in the numerical models. Nevertheless, we find the under-resolved, 1-D models very useful in quick checking where the thermally unstable zone exists and what type of fluctuation could cause the smooth to turn into a two-phase medium. For given physical conditions, Figure \[fig:res\] shows the wavelength $\lambda_0$ and Equation \[eq:amp\] gives the amplitude of an isobaric perturbation required to break the smooth flow into a two-phase, time-dependent model. In 2-D models, although the models depend on the resolution effects same way as in 1-D setup, we can observe an outflow formation. The convectivelly unstable gas buoyantly rises and, as found in this work, controls the later evolution of the two-phase medium and mass accretion rate. Given a simple set up with minimum number of processes included, our models display the three major features needed to explain some of the AGN observations: cold inflow, hot outflow and cold, dense clouds which occasionally escape, advected with the hot wind. We show that an accretion flow at late, non-linear stages, thus most relevant to observations, are dominated by buoyancy instability not TI. This suggests that the numerical resolution might not have to be as high as that needed to capture the small scale TI modes and it is sufficient to capture significantly larger and slower buoyancy modes. We plan to check the consistency of the models with the observations by calculating the synthetic spectra, including emission and absorption lines based on our simulation following an approach like the one in @sim:2012. Here, we only briefly comment on the main outflow properties and compare them some observations of outflows in Seyfert galaxies. Space Telescope Imaging Spectrograph (STIS) on board the [*Hubble Space Telescope*]{} allows us to map the kinematics of the Narrow Line Regions in some nearby Seyfert Galaxies (e.g. for NGC 4151 @das:2005; NGC 1068; @das:2006; Mrk 3, @crenshaw:2010; Mrk 573, @fisher:2010; and Mrk 78, @fisher:2011). Position-dependent spectra in \[O III\] $\lambda$ 5007 and $H_{\alpha}$ $\lambda$ 6563, and the measurements of the outflow velocity profiles show the following general trend: the outflow has a conical geometry and the \[O III\] emitting gas accelerates linearly up to some radius and then decelerates. The velocities typically reach up to about 1000 km/s and a turnover radius is on one hundred to a few hundred parsec scales. To compare our results with the observations, Figure \[fig:vr\_scatter\] shows the radial velocity of hot and cold gas versus the radius at t=11.8 Myr for model 2D512x128D (the data correspond to a snapshot shown in the right panels in Figure \[fig:st2d\]). We reiterate that our model is quite simplified (e.g., no gas rotation) and the outer radius is relatively small (i.e., 200 pc). Therefore, our comparison is only illustrative. 
We find that the hot outflow originates at around 10 pc and accelerates up to about $v_{max} \approx 200~{\rm km/s}$, which is comparable to the escape velocity from 10 pc, $v_{esc}=314~{\rm km/s}$. At larger distances, r = 100-200 pc, we see a signature of deceleration, which is consistent with the observations of Seyfert outflows. We note that the geometry of the simulated flow is affected by our treatment of the boundaries of the computational domain; specifically, along the pole and the equator we use reflection boundary conditions (see § \[sec:num\_setup\]). The scatter plot also indicates that the cold clouds appear at about 20-80 pc. Their maximum velocity is about $v_{max}=100$ km/s, which is smaller than the velocity of the hot outflow. The plot does not show a clear indication of a linear acceleration of the outflowing cold gas. However, it is possible that the cold clouds seen in this snapshot will continue to be dragged by the hot outflow and eventually will reach higher velocities. We also measured the column density of the hot and cold gas for the same representative snapshot at t=11.8 Myr. The typical column densities vary with the observer inclination, $N_{H}=5 \times 10^{22} - 10^{24} \, {\rm cm^{-2}}$ for gas with $T > 10^5$ K, and $N_{H}=10^{20}-10^{23} \, {\rm cm^{-2}}$ for gas with $T<10^5$ K. This is roughly consistent with column densities estimated from observations of AGN (e.g., for NGC 1068 $N_H= 10^{19}-10^{21} \, {\rm cm^{-2}}$, @das:2007 and references therein). Our results are similar in many respects to the previous findings presented in @barai:2012, i.e., the accretion evolution depends on the $f_X$ luminosity; we also observe clouds, filaments, and outflow. The outflow appears at $f_X=0.015$, which is consistent with the $f_X=0.02$ found by @barai:2012. Here, we are able to calculate models for about 10 times longer than the 3-D SPH models. We confirm the previous result that the cold phase accretion rate can be only a few times larger than the hot one. Similar models have been investigated in the past by, e.g., @krolik:1983. Our work is on the one hand a simplified, and on the other hand an extended, version of these previous works. The key extension here is that our new results cover the non-linear phase of the evolution. Our analysis adds two new conclusions to the previous investigations. First, the 2-D models with outflows are possibly governed by instabilities other than TI, mainly convection. Second, another non-linear effect found in our 1-D and 2-D models is that the fragmentation of the flow makes it optically thick to photoionization. Further investigation of shadowing effects is required. Some sub-resolution models of AGN feedback in galaxy formation (@dimatteo:2008; @dubois:2010; @lusso:2011) assume that BH accretion is dominated by an unresolved cold phase, in order to boost the accretion rate obtained in simulations. Our results indicate that cold phase accretion is unlikely to be dominant, as even in well-developed and well-resolved multi-phase cases the accretion is typically dominated by the hot phase. However, we note that the cold phase of our solution might be an upper branch of some more complicated multi-phase medium (i.e., a mixture of molecular, atomic and dusty gas). This work was intentionally focused on a very limited number of processes and effects. Its results suggest that future work should include a more self-consistent approach, accounting not only for shadowing effects but also for the radiation force.
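The hot- and cold-gas column densities quoted above can be estimated by integrating the hydrogen number density along a radial ray at a fixed inclination. The sketch below is an illustration only; it assumes the density is available as a number density in ${\rm cm^{-3}}$ on the $(r,\theta)$ grid and uses illustrative array names.

```python
import numpy as np

def column_densities(r, n_H, T, itheta, T_split=1.0e5):
    """Hot and cold column densities along the radial ray at polar index itheta.

    r      : zone-centre radii [cm], shape (nr,)
    n_H    : hydrogen number density [cm^-3], shape (nr, ntheta)
    T      : temperature [K], shape (nr, ntheta)
    itheta : index of the polar angle treated as the observer inclination

    Returns (N_hot, N_cold) in cm^-2.
    """
    dr = np.gradient(r)
    n_ray, T_ray = n_H[:, itheta], T[:, itheta]

    N_hot = np.sum(n_ray * dr * (T_ray >= T_split))
    N_cold = np.sum(n_ray * dr * (T_ray < T_split))
    return N_hot, N_cold
```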
Our next step would be to investigate the non-axisymmetric effects via fully 3-D simulations. The latter is challenging and one may not be able to see very fine details of the gas dynamics as in 2-D models due to resolution effects. ![ Scatter plot of radial velocity of hot ($T>10^5 K$, smaller red symbols) and cold ($T < 10^5 K$, larger blue symbols) phase of the flow in model 2D512x128D at t=11.8 Myr (model shown in right panels in Figure \[fig:st2d\]).[]{data-label="fig:vr_scatter"}](f13_reduced.eps) This work was supported by NASA under ATP grant NNX11AI96G and NNX11AF49G. DP thanks J. Ostriker, J. Stone, and S. Balbus for discussions and also Department of Astrophysical Sciences, Princeton University for its hospitality during his sabbatical. DP also acknowledges the UNLV sabbatical assistance. Authors would like to thank Paramita Barai, Ken Nagamine and Ryuichi Kurosawa for their comments on the manuscript. [37]{} natexlab\#1[\#1]{} , S. A. 1986, , 303, L79 , S. A. & [Soker]{}, N. 1989, , 341, 611 , P., [Proga]{}, D., & [Nagamine]{}, K. 2011, , 418, 591 —. 2012, , 3200 , G. S. & [Blinnikov]{}, S. I. 1980, , 191, 711 , J. M. 1994, , 435, 756 , H. 1952, , 112, 195 , L. L., [Ostriker]{}, J. P., & [Stark]{}, A. A. 1978, , 226, 1041 , D. M., [Kraemer]{}, S. B., [Schmitt]{}, H. R., [Jaff[é]{}]{}, Y. L., [Deo]{}, R. P., [Collins]{}, N. R., & [Fischer]{}, T. C. 2010, , 139, 871 , V., [Crenshaw]{}, D. M., [Hutchings]{}, J. B., [Deo]{}, R. P., [Kraemer]{}, S. B., [Gull]{}, T. R., [Kaiser]{}, M. E., [Nelson]{}, C. H., & [Weistrop]{}, D. 2005, , 130, 945 , V., [Crenshaw]{}, D. M., & [Kraemer]{}, S. B. 2007, , 656, 699 , V., [Crenshaw]{}, D. M., [Kraemer]{}, S. B., & [Deo]{}, R. P. 2006, , 132, 620 , T., [Colberg]{}, J., [Springel]{}, V., [Hernquist]{}, L., & [Sijacki]{}, D. 2008, , 676, 33 , T., [Khandai]{}, N., [DeGraf]{}, C., [Feng]{}, Y., [Croft]{}, R. A. C., [Lopez]{}, J., & [Springel]{}, V. 2012, , 745, L29 , Y., [Devriendt]{}, J., [Slyz]{}, A., & [Teyssier]{}, R. 2010, , 409, 985 , G. B. 1965, , 142, 531 , T. C., [Crenshaw]{}, D. M., [Kraemer]{}, S. B., [Schmitt]{}, H. R., [Mushotsky]{}, R. F., & [Dunn]{}, J. P. 2011, , 727, 71 , T. C., [Crenshaw]{}, D. M., [Kraemer]{}, S. B., [Schmitt]{}, H. R., & [Trippe]{}, M. L. 2010, , 140, 577 , J. C., [Norman]{}, M. L., [Fiedler]{}, R. A., [Bordner]{}, J. O., [Li]{}, P. S., [Clark]{}, S. E., [ud-Doula]{}, A., & [Mac Low]{}, M.-M. 2006, , 165, 188 , A., [Proga]{}, D., & [Kurosawa]{}, R. 2008, , 681, 58 , T. & [Bautista]{}, M. 2001, , 133, 221 , J. H. & [London]{}, R. A. 1983, , 267, 18 , R. & [Proga]{}, D. 2008, , 674, 97 —. 2009, , 397, 1791 —. 2009, , 693, 1929 , R., [Proga]{}, D., & [Nagamine]{}, K. 2009, , 707, 823 , E. & [Ciotti]{}, L. 2011, , 525, A115 , W. G. & [Bregman]{}, J. N. 1978, , 224, 308 , J. P., [Weaver]{}, R., [Yahil]{}, A., & [McCray]{}, R. 1976, , 208, L61 , E. N. 1953, , 117, 431 , D. 2007, , 661, 693 , D., [Ostriker]{}, J. P., & [Kurosawa]{}, R. 2008, , 676, 101 , D., [Stone]{}, J. M., & [Kallman]{}, T. R. 2000, , 543, 686 , F. H. 1992, [Physics of Astrophysics, Vol. II]{} (University Science Books) , S. A., [Proga]{}, D., [Kurosawa]{}, R., [Long]{}, K. S., [Miller]{}, L., & [Turner]{}, T. J. 2012, ArXiv e-prints , V. 2005, , 364, 1105 , R. F. 1982, , 260, 768 Growth rate of a condensation mode in a uniform medium -code tests {#app1} ================================================================== @field:1965 formulated a linear stability analysis of a gas in thermal and dynamical equilibrium. 
Here, we briefly recall the equations most important for our analysis. We disregard thermal conduction effects. The dispersion relation, derived from the linearized local fluid equations with heating/cooling described by the ${\mathcal L}$ function and perturbed by a periodic, small amplitude wave of the form $\exp(nt+ikx)$, is: $$n^3 + N_v n^2 + k^2 c_s^2 n + N_p k^2 c_s^2 = 0 \label{eq:cube}$$ where $k$ is the perturbation wave number ($k=2 \pi /\lambda$) and the functions $N_p$ and $N_v$ are defined as $$N_p \equiv \left .\frac{1}{c_p} \left(\frac{\partial {\mathcal L} }{\partial T}\right)\right|_P \label{eq:Np}$$ and $$N_v \equiv \left. \frac{1}{c_v} \left(\frac{\partial {\mathcal L}}{\partial T}\right)\right|_\rho \label{eq:Nv}$$ with $c_p$ and $c_v$ being the specific heats under constant pressure and constant volume conditions, respectively, and $T$ being the gas temperature. The vertical line means that the derivative is taken at a constant value of the indicated thermodynamical variable. The dispersion Equation \[eq:cube\] has three roots. In the short wavelength regime ($\lambda \ll 2\pi c_s / |N_p|$), the two complex roots form a conjugate pair corresponding to two nearly adiabatic sound waves, and the third, real one is an isobaric condensation mode (the gas density and temperature change in anti-phase so that the pressure remains constant). The sign of the real part of the root gives the stability criterion. The sound wave will grow if $\left. \partial{\mathcal L}/\partial T \right|_S < 0$ (known as Parker’s criterion, @parker:1953). The condensation mode will grow if $\left. \partial {\mathcal L}/\partial T \right|_P < 0$ (Field’s criterion). In the short wavelength limit, the growth rates asymptote to $n=-0.5(N_v-N_p)$ (for sound waves) and $n=-N_p$ (for condensation modes). Isochoric modes ($n \rightarrow -N_v$) and effective acoustic waves are the eigenmodes of long wavelength perturbations. The perturbation growth/damping time scale is $\tau_{TI}=1/n$. We use the above @field:1965 theory to show that our numerical scheme for solving the modified energy conservation equation (Equation \[eq:energy\]) together with the two other fluid dynamics equations is accurate. The test calculations are carried out in 1-D Cartesian coordinates within the range $x\in(0,L)$, where $L$ is the size of the computational domain in dimensionless units. The boundary conditions for all variables are periodic. In the unperturbed state, the gas density ($\rho_0=1$) and internal energy density ($e_0=1$) are constant in the entire computational domain. The velocity of the gas is set to zero. We assume that the gas is heated by an external source of radiation and cools due to free-free transitions. The test cooling function is simple: $${\mathcal L}= C \rho T^{1/2} - H \label{testcool}$$ The normalization constants $H$ (for heating) and $C$ (for cooling) are set so that in the unperturbed state the gas is in radiative equilibrium, i.e., ${\mathcal L}(\rho_0,e_0)=0$. In this test the functions $N_p$ and $N_v$ have explicit, analytical forms $$N_p \equiv \left .\frac{1}{c_p} \left(\frac{\partial {\mathcal L} }{\partial T}\right)\right|_P \equiv \frac{1}{c_p} \left( \left . \frac{\partial{\mathcal L}}{\partial T} \right |_\rho - \frac{\rho}{T} \left . \frac{\partial{\mathcal L}}{\partial \rho} \right |_T \right) =- \frac{1}{2 c_p} C \rho_0 T_0^{-1/2}$$ and $$N_v \equiv \left.
\frac{1}{c_v} \left(\frac{\partial {\mathcal L}}{\partial T}\right)\right|_\rho = \frac{1}{2 c_v} C \rho_0 T_0^{-1/2} = -\gamma N_p.$$ The numerical values of limiting growth/damp rates are $N_p=-0.04$, and $N_v=0.067$, while the speed of sound is: $c_s^2=1.11$ ($\gamma=5/3$). The domain sound crossing time is much shorter than the perturbation growth time scale which allows to keep the constant pressure. Our numerical scheme implemented into ZEUS-MP code correctly reproduces the expected growth rates of small amplitude perturbation of the uniform medium. The perturbation is an eigen mode of TI, and its properties depend on the assumed $\lambda$. Eigen modes are realized by first applying a cosine perturbation to the gas density $\rho= \rho_0 + A \rho_0 \cos(k x)$ and calculating profiles of $e$ and $v$ from e.g. Equations 11 and 14 in [@field:1965], for a given $k$ and corresponding theoretical value of $n$ (given by Equation \[eq:cube\]). Next we measure how fast the perturbation grows while it is in the linear regime. Figure \[fig:app\] (left panel), shows the analytical solution of the theoretical dispersion relation $n(\lambda)$ (solid line, third root of Equation \[eq:cube\]), and the numerical growth rates calculated with ZEUS-MP (points). For very short $\lambda$’s the eigen mode of this root is converging to the isobaric condensation mode and grows at $n=-N_p$ rate, as expected. The long $\lambda$ modes grow slower in comparison to the very short $\lambda$ condensations, as predicted by theory. For relatively large $\lambda$, the third root changes into an effective acoustic wave, it becomes complex with the real part negative meaning that the waves are damped (see @shu:1992, Equation 41 in the Problem Set No 3). ![image](f12a.eps) ![image](f12b.eps) In the second test, we measure the growth rate of a condensation mode that has a finite size (i.e. smaller than the domain length). We are interested in how many numerical grid points is required to resolve the correct $n$. We set $\lambda=0.1$ while $L=1$. Figure \[fig:app\] (right panel) shows the same time snapshots of the growing condensation mode density, calculated with various numerical resolutions. Models with lower resolution evolve slower. When $\lambda$ resolved with 16 points it starts converging to the right solution. We conclude that about 20 or more grid points per $\lambda$ is required to resolve the isobaric condensation. [^1]: We also decouple $L_X$ from $\dot{M}$ in order to avoid introducing additional parameters into the equations. While coupling these quantities not only a radiative efficiency of gravitational to radiative energy has to be assumed but one also needs to know how to calculate the mass accretion rate at the very compact region way below $r_i=0.1 pc$. Another reason for decoupling $L_X$ and $\dot{M}$ is that we are interested in caring out a stability analysis and perturb a steady state solutions with all model parameters fixed.
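For reference, the cubic dispersion relation (Equation \[eq:cube\]) can be solved numerically to recover the growth rates compared in Figure \[fig:app\]. The sketch below uses the test-problem values quoted above ($N_p=-0.04$, $N_v=0.067$, $c_s^2=1.11$) and a standard polynomial root finder; it is an illustration, not the analysis code used for the figure.

```python
import numpy as np

def dispersion_roots(lam, Np=-0.04, Nv=0.067, cs2=1.11):
    """Roots n of n^3 + Nv n^2 + k^2 cs^2 n + Np k^2 cs^2 = 0 (Field 1965)."""
    k = 2.0 * np.pi / lam
    return np.roots([1.0, Nv, k**2 * cs2, Np * k**2 * cs2])

for lam in (0.01, 0.1, 1.0):
    n = dispersion_roots(lam)
    print(f"lambda={lam}: n = {np.round(n, 4)}")
# The real root is the condensation mode; in the short wavelength limit it
# approaches -Np = 0.04, while the complex-conjugate pair corresponds to the
# two nearly adiabatic sound waves.
```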
{ "pile_set_name": "ArXiv" }
[Determination of rubidium and cesium in chloride type oilfield water by flame atomic absorption spectrometry]. Flame atomic absorption spectrometry (FAAS) was applied to the determination of rubidium and cesium in chloride type oilfield water, taking into account the interferences of the coexistent K+, Na+, Ca2+, and Mg2+ ions. The standard curve method and the standard addition method were compared for the determination of rubidium and cesium in simulated oilfield water and in real oilfield water from the Nanyishan region in the Qaidam Basin. Although rubidium and cesium have similar physical-chemical properties, they behave differently when analyzed with the FAAS technique. When the standard addition method was used for the determination of rubidium and cesium in the simulated oilfield water, the results for rubidium were very poor, whereas the results for cesium were satisfactory. When the standard curve method was used for the determination of rubidium and cesium in the simulated oilfield water, the results for both rubidium and cesium were satisfactory within the linear ranges of the standard curves. For the real oilfield water, the standard addition method is likewise only applicable to the determination of cesium, with a recovery ranging from 90% to 110%, while the standard curve method is applicable to the determination of both rubidium and cesium with a recovery ranging from 90% to 110%.
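For readers unfamiliar with the two calibration approaches compared in the abstract, the standard addition method reduces to a linear extrapolation: the sample is spiked with known amounts of analyte, the absorbance-versus-added-concentration line is fitted, and the sample concentration is read off at the x-intercept. The sketch below uses made-up, illustrative numbers only.

```python
import numpy as np

# Standard addition: absorbance of the sample spiked with known analyte amounts.
added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                 # added conc., mg/L (illustrative)
absorbance = np.array([0.110, 0.165, 0.221, 0.274, 0.330])  # illustrative readings

slope, intercept = np.polyfit(added, absorbance, 1)
c_sample = intercept / slope        # |x-intercept| = concentration in the unspiked sample
print(f"estimated concentration: {c_sample:.2f} mg/L")
```

A recovery check of the kind reported in the abstract then compares a spiked-and-remeasured concentration against the expected value, with 90% to 110% taken as acceptable.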
{ "pile_set_name": "PubMed Abstracts" }
In order to reap the potential benefits of telehealth technologies, the delivery system has to be usable for both patients and clinicians. Usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use ([@b9-ijt-pg03]). This construct is a key to making systems easy to learn and easy to use ([@b12-ijt-pg03]). Measuring the usability of telehealth technology offers a way to evaluate and improve the effectiveness of both the technology and services delivered. Traditionally, telehealth and telemedicine have been conducted over videoconferencing technologies such as Cisco-Tandberg and Polycom. Systems such as Cisco-Tanberg and Polycom are designed for the sole purpose of videoconferencing. However, recent advances in technologies allow videoconferencing to be conducted using multi-purpose computer technologies. Examples include VSee, Adobe Connect, and Cisco WebEx. These multipurpose technologies allow consumers to deliver telehealth using computer software applications instead of traditional videoconferencing systems which are primarily hardware based. In addition to multi-purpose technology, there have been software applications developed specifically for telehealth purposes. These include the Versatile Integrated System for Telerehabilitation (VISYTER) ([@b14-ijt-pg03]) and EHAB ([@b15-ijt-pg03]) systems. A review of the literature found a number of telehealth questionnaires developed mostly in the early 2000s, including the Telemedicine Satisfaction Questionnaire (TSQ) ([@b19-ijt-pg03]), Telemedicine Patient Questionnaire (TMPQ) ([@b4-ijt-pg03], [@b6-ijt-pg03]), and Telemedicine Satisfaction and Usefulness Questionnaire (TSUQ) (Bakken, 2009). However, the questionnaires developed in these previous studies were designed to evaluate special purpose videoconferencing technologies. In order to reap the potential benefits of current telehealth technologies, the delivery system has to be usable for both patients and clinicians. The questionnaires commonly employed to measure usability of a telehealth system, including user experience and satisfaction with various aspects of the technology or service, typically use a Likert scale approach to assessment ([@b11-ijt-pg03]). Since telehealth technology connects clinicians and patients over a distance, the usability questionnaire is designed to measure the quality of the interactions between two sites (e.g., audiovisual quality, quality of communications, ease of use of the equipment) (Houston & Burton, 1997), and the overall impression of the service (i.e., level of comfort and satisfaction with the telemedicine encounter) ([@b2-ijt-pg03]; [@b19-ijt-pg03]). The objective of the present study is to report on the development and reliability assessment of a new usability tool, the Telehealth Usability Questionnaire (TUQ). The TUQ combines items from existing telehealth questionnaires with those from computer usability questionnaires, and was designed to be a comprehensive questionnaire that covers all usability factors (i.e., usefulness, ease of use, effectiveness, reliability, and satisfaction). The TUQ is intended for both clinicians and patients. In addition, the TUQ is intended for use with various types of telehealth systems, including the traditional videoconferencing systems, computer-based systems, and the new generation of mobile telehealth systems. 
As such, the TUQ utilizes questions that can be modified to correctly address the participants (clinicians or patients) and the telehealth system. METHODS ======= QUESTIONNAIRE DEVELOPMENT ------------------------- Development of the TUQ consisted of four phases: (1) literature review; (2) construct development; (3) item development; and (4) examination of reliability. Each of these phases will be covered in the subsequent sections. BACKGROUND ON EXISTING QUESTIONNAIRES ------------------------------------- A literature review was conducted to identify existing questionnaires that have been widely used in the evaluation of telemedicine and computer/information technology. Identified questionnaires that were used as models for the TUQ were primarily from two fields: telemedicine and computer and information technology. In the field of telemedicine the following questionnaires were identified: the Telemedicine Satisfaction Questionnaire (TSQ) ([@b19-ijt-pg03]), Telemedicine Patient Questionnaire (TMPQ) ([@b4-ijt-pg03], [@b6-ijt-pg03]), and Telemedicine Satisfaction and Usefulness Questionnaire (TSUQ) (Bakken, 2009). Telemedicine questionnaires focus on three factors of usability: usefulness, satisfaction, and interaction quality between patient and clinician over telemedicine technology. The TSQ clearly addresses the three usability factors central to telehealth. For example, it includes items unique to telemedicine such as audio and video quality. TSQ is a questionnaire designed specifically for telemedicine systems. TSQ was also designed for traditional interactive videoconferencing systems such as Polycom or Cisco Tandberg. One main difference between traditional videoconferencing systems and new generation computer-based systems is that the former type of system does not have a user interface that clinicians and patients interact with, which is the case with computer-based systems such as VSee. The traditional videoconferencing systems are usually setup by a technician, and the user (patient and clinician) does not need to know how to setup and interact with the system. This means that the TSQ lacks the items related to interface quality that are important for computer/software-based telehealth. However, because items of the TSQ so clearly address the usability factors central to telehealth, it was identified as a primary source of questionnaire items for the TUQ. In the field of information and computer technology the following questionnaires were identified: the Technology Acceptance Model (TAM) ([@b3-ijt-pg03]), and the IBM Post-Study System Usability Questionnaire (PSSUQ) developed by [@b10-ijt-pg03]. The TAM ([@b3-ijt-pg03]) describes the relationships between perceived qualities of system usage, affective attitude, and behavioral responses to the system. This questionnaire is used widely in the business information arena. We derived questions related to the usability factors of usefulness and ease of use from the TAM. The PSSUQ measures system usability via a multitude of aspects, including system function, information and interface quality, to users' satisfaction level. The evaluation covers the standards of effectiveness, efficacy and satisfaction ([@b10-ijt-pg03]). From the PSSUQ, we derived items for ease of use, interface quality, reliability, and satisfaction. 
USABILITY ATTRIBUTES OF A TELEHEALTH SYSTEM ------------------------------------------- The TUQ was designed to be a comprehensive questionnaire that covers all usability factors, including usefulness, ease of use, effectiveness, reliability, and satisfaction. The following is a brief description of each of the usability factors assessed in the TUQ. ### USEFULNESS Usefulness refers to the users' perception of how the telehealth system functions to provide a healthcare interaction/service similar to the traditional in-person encounter. The system is useful when it works and has positive effects on clinical outcomes or reduces clinical cost ([@b7-ijt-pg03]). ### EASE OF USE AND LEARNABILITY The system should be easy to learn and use to facilitate rapid work completion (Chen et al., 2009). A system that is easy to learn allows users to build on their knowledge without deliberate effort. For example, a system with intuitive icons is easier to use and to learn than command-based system. ### INTERFACE QUALITY Interface quality measures the interaction between the patient and the telemedicine technology or computer system. This includes the quality of the graphical user interface, the ease of navigation, and an overall impression of how the patient interacts with the telehealth system. This usability attribute was not part of previous telemedicine questionnaires because the telemedicine systems in the past did not have an interface; they were primarily just hardware that needed to be turned on. The Interface Quality sub-scale deals with how pleasant the system was to use for the consumer. It measures if s/he liked the system and if the system had all the functionality and capabilities s/he expected. For example, systems like VSee have a graphical user interface, while most old videoconferencing systems primarily consist of hardware. ### INTERACTION QUALITY Interaction quality measures patient interactions with the clinician, including the quality of the audio and the video, and the similarity of the telehealth interaction between patient and clinician to an in-person interaction. This construct is unique to telehealth and has been the focus of many of the telemedicine questionnaires ([@b4-ijt-pg03]; [@b19-ijt-pg03]). ### RELIABILITY Reliability refers to how easily the user can recover from an error and how the system provides guidance to the user in the event of error. For example, if a user clicks a wrong button, the system provides a means to undo the error or to back track. Ideally, telehealth systems should be as reliable as in-person service. Reliability and validity of data transmission are essential to the safety of patients ([@b17-ijt-pg03]). ### SATISFACTION AND FUTURE USE This factor is related to overall satisfaction of the user with the telehealth system and how willing the user would be to use the system in the future. [Table 1](#t1-ijt-pg03){ref-type="table"} shows the usability components of the TUQ and questionnaire items for each. The table also shows the source of the questionnaire items. The telehealth-related questionnaires are primarily taken from TSQ ([@b19-ijt-pg03]). These items are related to usefulness, interaction quality, and satisfaction and future use. The items related to computer and user interface are primarily taken from PSSUQ ([@b10-ijt-pg03]); these items elicit information about ease of use and learnability, interface quality, satisfaction and future use. 
The items in the reliability component that are taken from TSQ are related to the reliability of telehealth service, and those from PSSUQ are related to system reliability. The satisfaction and future use component includes questions from both TSQ and PSSUQ. Questionnaire items on ease of use and learnability were also borrowed from TAM, and items on usefulness were also taken from TAM and use similar wording. USABILITY FACTORS ================= The TUQ uses a broader definition of usability that takes into account the utility and the usability of the technology. Utility refers to whether the technology's functionality does what users need ([@b13-ijt-pg03]). Usability is the extent to which a product can be used by users to achieve specified goals with effectiveness, efficiency and satisfaction ([@b9-ijt-pg03]). Early work in telehealth usability evaluation was primarily focused on patient satisfaction ([@b1-ijt-pg03]; [@b8-ijt-pg03]), while later work incorporated satisfaction, usefulness, ease of use, and interaction quality ([@b2-ijt-pg03]; [@b4-ijt-pg03], [@b5-ijt-pg03], [@b6-ijt-pg03]; [@b19-ijt-pg03]), all of which are measures of effectiveness. The TUQ usability factors include usefulness, ease of use, effectiveness, reliability, and satisfaction. The relationship among the TUQ usability components, questionnaire items, and usability factors is depicted in [Table 2](#t2-ijt-pg03){ref-type="table"}. CONTENT VALIDITY ---------------- Because the questionnaire items included in the TUQ were combined from existing sources in telemedicine and computer software interface, the content validity of the questionnaire items was reported in previous studies ([@b2-ijt-pg03]; Lewis, 1994; [@b19-ijt-pg03]). Additionally, the content validity of the TUQ has been shown in previous studies (Parmanto et al., 2011; Schutte et al., 2013; [@b20-ijt-pg03], [@b21-ijt-pg03]). CONTENT RELIABILITY ------------------- ### PARTICIPANTS Fifty-three participants (21 males and 32 females) took part in this study. Participants were recruited from the University of Pittsburgh and included individuals with (56.6%) and without (43.4%) experience utilizing telehealth technology. To be included in the study participants could not have completed the TUQ within the last three months. [Table 3](#t3-ijt-pg03){ref-type="table"} provides characteristics of the participants. PROCEDURE ========= Basic demographic information was collected via interview, and the TUQ was completed independently by participants. All participants were directed to complete the TUQ based on their experience with the VISYTER system. Two participant perspectives were needed (clinicians and clients). Participants who regularly used telehealth technology and identified as "clinicians" were asked to complete the TUQ based on their most recent interaction with the VISYTER application. Participants who had never or did not regularly use telehealth technology were asked to take part in a simulated telehealth session as a client. The simulated telehealth session was designed to approximate an initial session that would occur between a rehabilitation practitioner and a new client. This allowed simulation participants to complete the TUQ from the "client" perspective, as they assumed the client role. These participants interacted with study staff, who assumed the clinician role via VISYTER. 
After completing the simulated telehealth session, both sets of participants (clinicians based on their prior experience and clients based on their experiences with the simulation) were asked to complete the TUQ. DATA ANALYSIS ============= The TUQ ratings by factor were compared using Cronbach's coefficient alpha. This statistic is most often used as a measure of internal consistency and is used to evaluate if items in a scale are measuring the same construct ([@b16-ijt-pg03]). Guidelines for evaluating Cronbach's coefficient alpha are presented in [Table 4](#t4-ijt-pg03){ref-type="table"}. RESULTS ======= All usability attributes of the TUQ were found to have "good" to "excellent" reliability. Raw and standardized Cronbach's coefficient alpha values for each are presented in [Table 5](#t5-ijt-pg03){ref-type="table"}. Refer back to [Table 2](#t2-ijt-pg03){ref-type="table"} to view the subscales of the TUQ. DISCUSSION ========== The TUQ was developed in response to the ever-evolving technology within telehealth today. There is a need for a usability measure that exhibits the attributes of usefulness, ease of use and learnability, interface quality, interaction quality, reliability, satisfaction, and future use. These salient features must be present in measuring computer-based telehealth technologies. Building on the best measures currently available in telehealth and in information technology and computer science, we have designed the TUQ to include the above-mentioned attributes and to be psychometrically robust. The TUQ has strong content validity because it incorporates items from the best current measures in telehealth, which have withstood the scrutiny of rigorous prior validation studies, and it has been used in previous studies conducted by several of these authors. The TUQ's reliability, specifically internal consistency, is more than adequate. Its internal consistency was evaluated directly as a part of this investigation. The first step in the reliability examination was to develop a protocol to vary the TUQ's standard language when administered orally to a targeted respondent group to ensure optimal understanding. Secondly, a "systems check" was performed by administering the TUQ using various telecommunication systems to ensure its operational integrity and quality when used in various technology contexts. Finally, we tested the TUQ's internal consistency using Cronbach's coefficient alpha. It performed remarkably well, with alpha values that substantially exceeded the acceptable range in all domains: usefulness, ease of use, effectiveness, reliability, and satisfaction. Additionally, the associated eigenvalues, factor loadings, and percentages of variation explained by each domain were strong. Our development efforts and the analyses implicit in those efforts have revealed that the TUQ is a solid, robust, and versatile measure. It is based on the best usability questionnaires available, able to respond to the latest technology changes in telehealth, incorporates both client and clinician interface needs in the delivery of clinical services via telehealth, and addresses all of the relevant dimensions of usability. Its prognosis is strong; more widespread use in telehealth studies going forward will be the ultimate test of its effectiveness. 
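Cronbach's coefficient alpha, used above to quantify the internal consistency of each TUQ subscale, can be computed directly from the item-response matrix. The sketch below is illustrative only; the example responses are invented and are not data from this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

# Invented 7-point Likert responses for a 3-item subscale (illustration only)
responses = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [7, 7, 7],
    [4, 5, 4],
    [6, 6, 5],
])
print(round(cronbach_alpha(responses), 2))   # compare against the cut-offs in Table 4
```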
Given the increase in pervasiveness of telehealth in the delivery of clinical services from a distance, along with the rise in use of computer-based systems that rely on software and a computer interface as the model of delivering telehealth, the TUQ will be valuable for measuring usability. This research was supported in part by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) grant \#H133E090002 (RERC on Telerehabilitation), \#90RE5018 (RERC From Cloud to Smartphone: Empowering and Accessible ICT) and grant \#90DP0064 (DRRP Promoting Independence & Self-management using mHealth). ###### Usability Components and Questionnaire Items and Their Source Components Questionnaire Items TAM TSQ PSSUQ ------------------------------ ----------------------------------------------------------------------------------------- ----- ----- ------- Usefulness 1 Telehealth improves my access to healthcare services S Y 2 Telehealth saves me time traveling to a hospital or specialist clinic S Y 3 Telehealth provides for my healthcare needs S Y Ease of Use and Learnability 1 It was simple to use this system Y S Y 2 It was easy to learn to use the system Y Y 3 I believe I could become productive quickly using this system Y Y Interface Quality 1 The way I interact with this system is pleasant Y 2 I like using the system Y 3 The system is simple and easy to understand S Y 4 This system is able to do everything I would want it to be able to do S Y Interaction Quality 1 I could easily talk to the clinician using the telehealth system Y 2 I could hear the clinician clearly using the telehealth system Y 3 I felt I was able to express myself effectively Y 4 Using the telehealth system, I could see the clinician as well as if we met in person Y Reliability 1 I think the visits provided over the telehealth system are the same as in-person visits Y 2 Whenever I made a mistake using the system, I could recover easily and quickly S Y 3 The system gave error messages that clearly told me how to fix problems Y Satisfaction and Future Use 1 I feel comfortable communicating with the clinician using the telehealth system Y Y 2 Telehealth is an acceptable way to receive healthcare services S Y Y 3 I would use telehealth services again Y 4 Overall, I am satisfied with this telehealth system Y Y Note. Y = taken from the questionnaire with no or slight change; S = Similar item with different wording exists in the questionnaire. 
###### Usability Components, Questionnaire Items, and Usability Factors Components Factors Usefulness Ease of use Effectiveness Reliability Satisfaction --------------------------------- ----------------------------------------------------------------------------------------- ------------ ------------- --------------- ------------- -------------- **Usefulness**  1 Telehealth improves my access to healthcare services X  2 Telehealth saves me time traveling to a hospital or specialist clinic X  3 Telehealth provides for my healthcare needs X **Ease of Use & Learnability**  1 It was simple to use this system X  2 It was easy to learn to use the system X  3 I believe I could become productive quickly using this system X **Interface Quality**  1 The way I interact with this system is pleasant X  2 I like using the system X  3 The system is simple and easy to understand X  4 This system is able to do everything I would want it to be able to do X **Interaction Quality**  1 I could easily talk to the clinician using the telehealth system X  2 I could hear the clinician clearly using the telehealth system X  3 I felt I was able to express myself effectively X  4 Using the telehealth system, I can see the clinician as well as if we met in person X **Reliability**  1 I think the visits provided over the telehealth system are the same as in-person visits X  2 Whenever I made a mistake using the system, I could recover easily and quickly X  3 The system gave error messages that clearly told me how to fix problems X **Satisfaction and Future Use**  1 I feel comfortable communicating with the clinician using the telehealth system X  2 Telehealth is an acceptable way to receive healthcare services X  3 I would use telehealth services again X  4 Overall, I am satisfied with this telehealth system X ###### Participant Characteristics Characteristic n Percent (%) ------------------------------ ---- ------------- Gender  Male 21 39.6  Female 32 60.4 Highest Level of Education  Completed some college 17 32.1  Associate's degree 1 1.9  Bachelor's degree 6 11.3  Completed some postgraduate 11 20.8  Master's degree 11 20.8  PhD, law or medical degree 7 13.2 Race/Ethnicity  African American 3 5.6  Asian 5 9.5  Hispanic 2 3.7  Caucasian 42 79.2  Other 1 2 Experience with Telehealth  None 23 43.4  Less than 3 months 8 15.1  3 to 6 months 5 9.5  6 months to 1 year 2 3.7  More than 1 year 15 28.3 ###### Cronbach's Coefficient Alpha Ranges of Acceptability Scale Rating ---------------- -------------- α ≥ 0.9 Excellent 0.8 ≤ α \< 0.9 Good 0.7 ≤ α \< 0.8 Acceptable 0.6 ≤ α \< 0.7 Questionable 0.5 ≤ α \< 0.6 Poor α \< 0.5 Unacceptable ###### Internal Consistency of the TUQ Subscales Variable Cronbach Coefficient Alpha --------------- ---------------------------- ---------------- *Raw* *Standardized* Usefulness 0.83 0.85 Ease of Use 0.92 0.93 Effectiveness 0.86 0.87 Reliability 0.79 0.81 Satisfaction 0.91 0.92
{ "pile_set_name": "PubMed Central" }
Please visit ourCommunity Links section for other Sullivan County Organizations Contributed Photo A Voice Link from Verizon. Verizon, union cross wires Story by Dan Hust MONTICELLO — July 5, 2013 — Verizon’s plan to launch a new wireless service nationally has also launched a local controversy over its commitment to its wired services. In the past week, Verizon, the NYS Attorney General’s Office, the Communications Workers of America (CWA) union, and AARP have sent arguments to the NYS Public Service Commission (PSC) over Verizon’s new Voice Link offering. As detailed in a story in Tuesday’s Democrat and a variety of media reports – including the New York Times – Voice Link provides Verizon customers with a home phone system that operates on the cell network, without directly connecting to the company’s copper or fiber-optic landlines. Lower costs and improved service have been cited as advantages, while concerns have been raised about reliability and usefulness. But the PSC fight seems to more be over implementation and has been localized by the fact that Voice Link equipment was delivered to Verizon’s Monticello facility and that the AG’s Office found a Monticello resident to provide testimony. Here’s what Verizon and the CWA, the union that represents Verizon’s lineworkers, had to say about the issue in interviews Wednesday with the Democrat.Verizon“There’s been a lot of misinformation in the press,” said Tom Maguire, Verizon’s senior vice president of National Operations Support and a key part of Voice Link’s development. For one, he said Verizon is not abandoning its landline services throughout the state, including in Sullivan County. “I don’t think it’s something people need to be worried about,” he said. “... If we are providing service to people today, our intention is to continue providing service tomorrow.” But Maguire said a combination of factors – including marketplace demands and the company’s responsibility to customers – are bringing about a new telecommunications world. “At the end of the day, we’re going to have a combination of three different networks,” he predicted. He’s speaking of traditional copper telephone lines, fiber-optic wire (branded as FiOS by Verizon), and wireless. FiOS is not yet a widespread option in Sullivan County, but wireless coverage has been growing in recent years. “I don’t think it’s a matter of us removing copper,” Maguire said. “I think it’s more of what’s going to happen in the marketplace.” He noted that around a decade ago, Verizon had 53 million landline customers, which as of last year had decreased to 19 million. “Where did all the people go?” he asked, answering that with cable, Internet and wireless companies now offering phone services. “People have been leaving the copper infrastructure by themselves,” he said. “... Voice Link is our reaction to those changes in the marketplace.” Down on Fire Island, the PSC is considering Verizon’s desire to make Voice Link the only option due to Superstorm Sandy’s decimation of the wired system – and the fact that the majority of Verizon’s calls on that barrier island are already made wirelessly. As Maguire put it, it doesn’t make sense “to employ a gazillion people to maintain an infrastructure that no one’s using.” But people in Sullivan County still rely on those copper lines, even if they are more expensive to maintain. So Maguire said his company is only offering Voice Link as an optional replacement. Being a wireless home phone user himself, he feels customers won’t notice a difference. 
“We thought a lot about trying to follow this notion of ‘sameness,’” he said, referring to Voice Link offering the same quality as a landline. Nevertheless, he did admit that the current generation of Voice Link does not offer data services, meaning that home security, health alert, fax, credit card processing and Internet devices won’t function with it. (He estimated that data services may be offered with the next generation of Voice Link, now under development.) So Verizon customer service reps are trained to determine whether a customer could switch to Voice Link or should stay with a copper/fiber connection, he said.
Voice Link is ready to be launched nationally, but the reason the equipment was seen earlier in Monticello, indicated Maguire, is the region’s number of seasonal summer communities. “Some of the camps up there, the outside infrastructure gets iffy,” he explained, adding that residents of such communities often “are not data-centric.”
Maguire said he doesn’t agree with the AG’s objections, noting that the affidavit from the Monticello resident confirmed that his copper-based phone system was restored the same day he rejected the Voice Link service. As for employees worried whether they’ll have a job, he acknowledged that the workforce has shrunk and may continue to shrink, but that new job opportunities will arise as a result of the technological changes. “I think there will be an evolution,” Maguire said.
CWA
CWA’s NYS legislative director, Pete Sikora, views Verizon’s PSC request as a thinly veiled attempt to transfer as many people over to wireless as it can, more for its bottom line than for the benefit of its customers or employees. “It’s because Verizon wants to abandon its landline network,” he said, pointing to comments made by Verizon’s CEO, Lowell McAdam, last year that strongly indicated the company is eager to dump its copper network wherever it can.
“In other areas that are more rural and more sparsely populated, we have got LTE built that will handle all of those services, and so we are going to cut the copper off there. We are going to do it over wireless,” McAdam was quoted as saying to financial analysts in June of 2012.
Sikora sees that future not as one of innovation but of regression. “Without landline service, there is no DSL, so the local cable service becomes the monopoly,” he pointed out. In addition, customers who rely on other forms of data transmission through their phone lines – the aforementioned health alert, home security and credit card processing services, for example – would be out of luck (until Verizon upsells them on its more expensive 4G data services, he predicted).
“The biggest problem,” Sikora said, “is the public safety problem.” In the current Voice Link system, if power is lost, a battery backup will last at most two days, and even then, not every cell tower has an emergency generator to remain able to relay calls during outages, he explained. Should another multi-day disaster like Sandy or Irene hit, “people are not going to be able to communicate with 911, and that’s a scary thought.”
It’s a scary time for the workers he represents, as well. “They’ve cut the head count in our membership by half over the past decade,” said Sikora, who felt Verizon is too narrowly focused on profit and not on the hard-working employees who have contributed to its success. Indeed, a switch to wireless has the potential to further cut into the lineworkers’ ranks, Sikora acknowledged.
He characterized Verizon’s attempt to expand Voice Link beyond Fire Island as incompatible with the current PSC rules, a stance the AG’s Office supports. “Verizon is now violating the law,” Sikora argued, claiming that in some cases, people are feeling pressured into accepting Voice Link by Verizon. “... They’re not actually offering it as an optional service. Customers are being told they realistically won’t get any service if they don’t take Voice Link.” (Verizon refutes this charge.) So he’s hoping the PSC will halt further rollout of Voice Link and is urging customers to contact the AG’s Office if offered that service as the only option for their home phones. He’s also hoping Verizon puts more effort into updating and maintaining its wired network, as Sandy may not be the last devastating storm to hit New York, upstate or downstate. “[Verizon] doesn’t maintain the network properly,” he charged. “Imagine if nobody had landline service, without backup power, and a disaster struck.” * * * The Public Service Commission is accepting comments on Verizon’s Voice Link proposal through September. For more information and a complete list of documents (updated daily), visit the PSC’s website at www.dps.ny.gov and search for Case 13-C-0197.
{ "pile_set_name": "Pile-CC" }
Tuesday, 20 May 2014 Matt Smith, former DOCTOR WHO actor shows his biceps in Ryan Gosling's LOST RIVER. From SCREENRANT In Ryan Gosling’s upcoming fantasy thriller Lost River (formerly titled How to Catch a Monster), Christina Hendricks plays Billy, single mother to Bones (Agents of S.H.I.E.L.D.‘s Iain De Caestecker) and his younger brother. The small family is trying to stay afloat in a dilapidated town when Bones one day discovers a secret road to a magical underwater town. Doctor Who star Matt Smith, who also has a significant role, has described the film as having a “wonderful Lynch-ian quality.” In case that synopsis hasn’t already made it apparent, Gosling’s directorial debut isn’t going to shy away from elements of surrealism. As Lost River is screening at Cannes this month, the first extract has now made its way online and shows Bones in an odd confrontation with Smith’s character, Bully, who wears a glittery open jacket and invites the audience to look at his muscles. It’s a little unclear exactly what is going on in this clip. There are several different things on fire, and Bones seems to be unhappy about this fact. It’s possible that the graffitied building from which he’s emerging at the start of the clip is the entrance to the underwater town, but it’s anyone’s guess what’s in that bag that he’s carrying, or why the appearance of Bully was enough to make him drop it and flee. Perhaps he was just intimidated by all those muscles. Gosling himself chose to stay behind the camera for his directing project, but Lost River nonetheless has an impressive cast that also includes Saoirse Ronan as Rat, the girl next door, and Eva Mendes as a character called Cat. Lost River certainly looks interesting, but it also seems to be targeting a specific tone that’s tough to get right and can come across as either pretentious, confusing, or a mixture of the two – if done wrong, that is. At least we won’t have too long to wait before getting an idea of how much talent Gosling has as a writer and director. Lost River will premiere at Cannes on May 20th as part of the “Un Certain Regard” selection, and the critical response will likely help determine how many theaters it opens in upon its US release, which has not yet been set.
{ "pile_set_name": "Pile-CC" }
Anita Sleeman Anita Sleeman (née Andrés) (December 12, 1930 – October 18, 2011) was a Canadian contemporary classical music composer. She was also a conductor, arranger, educator, and performer. Biography Life Born Anita Andrés December 12, 1930 in San Jose, California to Alejandro Andrés from Salamanca, Spain and Anita Dolgoff from Stavropol, Russia. Sleeman began taking piano lessons at age three and took up trumpet and French horn at school in San Francisco. While there, her music teachers noted her exceptional abilities at an early age (she began to show a talent for composition at age eight). Sleeman attended Placer Junior College as a music student. She met her future husband, Evan Sleeman, in Placer County and they married in 1951. They purchased a ranch in Elko County, Nevada and along with their six children immigrated to Canada in 1963. They lived on a ranch in the remote Anahim Lake area near Bella Coola. In 1967, the couple relocated to Tsawwassen, Metropolitan Vancouver. Throughout her life she played the French horn in a variety of stage and concert bands and performed as a keyboardist in jazz ensembles. Career At age 19 Sleeman composed a march that was played at her community college's commencement in 1950 (the first public performance of her work). Sleeman taught music appreciation at the Anahim Lake elementary school. While in Anahim Lake she played piano and organ at many community gatherings. Sleeman resumed music studies at the University of British Columbia, earning a BMus in 1971, and MMus (on a graduate fellowship) in 1974. At UBC she was a pupil of Jean Coulthard and during that time she taught at the electronic music lab, co-founded the Delta Youth Orchestra, and was involved in the establishment of the music program at the Capilano College in North Vancouver as a member of its music faculty. She returned to California to complete her doctorate (1982) at the University of Southern California attending master classes with Luciano Berio, Luigi Nono, and Charles Wuorinen. She also attended the Dick Grove School of Jazz. For 17 years she served as musical director and conductor of West Vancouver's Ambleside Orchestra, retiring in 2010. Her compositions have been premiered in London, England and Fiuggi, Italy as well as in Ottawa, Windsor and Vancouver; commissions include CBC Radio, Vancouver Community College, the Delta Youth Orchestra, the Galiano Trio, and others. At an early age Sleeman was introduced to the music of Olivier Messiaen, whose inspiration has been important in her development. Other influences are Varèse, Stravinsky, Koechlin, Lígeti, and Bartók. Her diversity of style has also been enhanced by her Spanish and Russian background and her love of jazz. She admired the work of Frank Zappa, to whose memory she dedicated selected performances of her work. List of additional performances February 1997: The Galiano Trio (flute, clarinet, bassoon) presented a concert of Sleeman's works, as part of the Little Chamber Series That Could season. This performance featured her Legend of the Lions and was enhanced by dance, and projected scene design by her daughter Cynthia Sleeman. September 1997: Sleeman was selected to represent Canada at the Donna in Musica festival in Fiuggi, Italy. September 1999: Picasso Gallery II was chosen for performance at the International Association of Women in Music Festival in London, England. 
January 2002: Cantigas (commissioned by ACWC) was premiered in Ottawa by the Quatuor Arthur- Leblanc at the Then, Now and Beyond series sponsored jointly by Association of Canadian Women Composers and the Ottawa Chamber Music Society. The performance piece was repeated August 6, 2002 at the Ottawa Chamber Music Festival, again performed by the Quatuor Arthur-Leblanc, in the presence of Her Excellency the Governor-General Adrienne Clarkson. July 2006: a new piece commissioned for the CBC, Rhapsody on Themes by Dohnányi, was premiered in Ottawa, Ontario at the Ottawa Chamber Music Festival, and performed again in 2007. Death Sleeman died early in the morning of October 18, 2011 at her home in North Vancouver, British Columbia. A memorial service for her was held on November 26, 2011 at St. Christopher's Anglican Church, West Vancouver. Critical reception Critic Ken Winters of The Globe and Mail praised Sleeman's work Cantigas as "remarkable", continuing, "It's as resourceful as Bartók in exploiting string techniques and sound potentials, and just as vigorous musically." See also Music of Canada List of Canadian composers List of Canadian musicians References Citations Category:1930 births Category:2011 deaths Category:Musicians from San Jose, California Category:Thornton School of Music alumni Category:University of British Columbia alumni Category:Canadian conductors (music) Category:Canadian classical composers Category:21st-century classical composers Category:Musicians from Vancouver Category:Capilano University faculty Category:21st-century American musicians Category:Female classical composers Category:20th-century Canadian composers Category:20th-century American musicians Category:20th-century American women musicians Category:21st-century American women musicians Category:20th-century conductors (music)
{ "pile_set_name": "Wikipedia (en)" }
infinias® Server 50 S-SVR50-8 For your convenience, 3xLOGIC offers a 1U server to support any access control application. All servers come with Microsoft Windows and infinias Access Control Management software pre-installed and preconfigured. All servers come with an infinias Essentials unlimited door license. They’re ready to go – straight out of the box! The SVR50 design provides all the network components required to deploy your VMS. This series includes 8 embedded PoE ports for powering your eIDC32 door controllers. The infinias Server 50 provides support for up to 50 doors, and has built in PoE for the first 8 doors. The SVR50 also comes with pre-installed special utilities, designed to support automatic backup and database management capabilities. infinias Access Control Management software is also pre-installed on the SVR50 series. Simply add a switch to the embedded PoE switch to add additional doors.
{ "pile_set_name": "Pile-CC" }
Inhibition of caspases alleviates gentamicin-induced cochlear damage in guinea pigs. The efficacy of caspase inhibitors for protecting the cochlea was evaluated in an in vivo study using guinea pigs, as the animal model system. Gentamicin (12 mg/ml) was delivered via an osmotic pump into the cochlear perilymphatic space of guinea pigs at 0.5 microl/h for 14 days. Additional animals were given either z-Val-Ala-Asp (Ome)-fluoromethyl ketone (z-VAD-FMK) or z-Leu-Glu-His-Asp-FMK (z-LEHD-FMK), a general caspase inhibitor and a caspase 9 inhibitor, respectively, in addition to gentamicin. The elevation in auditory brain stem response thresholds, at 4, 7, and 14 days following gentamicin administration, were decreased in animals that received both z-VAD-FMK and z-LEHD-FMK. Cochlear sensory hair cells survived in greater numbers in animals that received caspase inhibitors in addition to gentamicin, whereas sensory hair cells in animals that received gentamicin only were severely damaged. These results suggest that auditory cell death induced by gentamicin is closely related to the activation of caspases in vivo.
{ "pile_set_name": "PubMed Abstracts" }
CELP speech coders typically use codebooks to store excitation vectors that are intended to excite synthesis filters to produce a synthetic speech signal. For high bit rates these codebooks contain a large variety of excitation vectors to cope with a large spectrum of sound types. However, at low bit rates, for example around 4–7 kbits/s, the number of bits available for the codebook index is limited, which means that the number of vectors to choose from must be reduced. Therefore low bit rate coders will have a codebook structure that is compromise between accuracy and richness. Such coders will give fair speech quality for some types of sound and barely acceptable quality for other types of sound. In order to solve this problem with low bitrate coders a number of multi-mode solutions have been presented [1–5]. References [1–2] describe variable bitrate coding methods that use dynamic bit allocation; where the type of sound to be encoded controls the number of bits that are used for encoding. References [3–4] describe constant bitrate coding methods that use several equal size codebooks that are optimized for different sound types. The sound type to be encoded controls which codebook is used. These prior art coding methods all have the drawback that mode information has to be transferred from encoder to decoder in order for the decoder to use the correct decoding mode. Such mode information, however, requires extra bandwidth. Reference [5] describes a constant bitrate multi-mode coding method that also uses equal size codebooks. In this case an already determined adaptive codebook gain of the previous subframe is used to switch from one coding mode to another coding mode. Since this parameter is transferred from encoder to decoder anyway, no extra mode information is required. This method, however, is sensitive to bit errors in the gain factor caused by the transfer channel.
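The scheme attributed to reference [5] can be made concrete with a small sketch. The following C++ fragment is illustrative only: the two-mode split, the threshold value, the codebook sizes and all names are assumptions introduced here, not details taken from the cited method. The point it demonstrates is that the mode decision depends only on a parameter both sides already share, so no extra mode bits cross the channel.

#include <array>
#include <cstddef>

enum class Mode { Voiced, Unvoiced };            // hypothetical two-mode split

// Hypothetical equal-size fixed codebooks optimized for different sound types.
constexpr std::size_t kVectorsPerCodebook = 64;  // assumed size
constexpr std::size_t kSubframeSamples    = 40;  // assumed 5 ms subframe at 8 kHz
using Excitation = std::array<float, kSubframeSamples>;
using Codebook   = std::array<Excitation, kVectorsPerCodebook>;

// Encoder and decoder both call this with the adaptive-codebook gain decoded
// for the previous subframe, so they always agree on the mode without any
// transmitted mode information.
Mode select_mode(float previous_adaptive_gain) {
    constexpr float kGainThreshold = 0.5f;       // assumed value
    return previous_adaptive_gain > kGainThreshold ? Mode::Voiced
                                                   : Mode::Unvoiced;
}

const Codebook& pick_codebook(Mode mode,
                              const Codebook& voiced_codebook,
                              const Codebook& unvoiced_codebook) {
    return mode == Mode::Voiced ? voiced_codebook : unvoiced_codebook;
}

As the passage notes, the weakness of keying the decision to a decoded parameter is that a channel bit error in that gain silently flips the mode choice on the decoder side.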
{ "pile_set_name": "USPTO Backgrounds" }
With the development of rapid, accurate, high-throughput autoantibody and potentially HLA assay systems, we are rapidly achieving the ability to screen and detect pre-diabetic subjects in the general population. Intervention studies in the NOD mouse suggest that antigen specific therapy can be effective in preventing diabetes. However, our current knowledge of the cellular response in human type 1 diabetes has not yet progressed enough to define relevant antigens to use in a similar fashion. The overall goal of this program project is to identify and assess peptide-specific immunomodulation strategies suitable for intervention therapy in patients with new-onset type 1 diabetes and pre-clinical islet autoimmunity. It rests on the hypothesis that by utilizing a novel approach of determining peptide immunogenicity in human HLA-transgenic mice bearing alleles associated with type 1 diabetes, we can establish the dominant epitopes of pre pro-insulin, GAD65 and ICA512 for a particular HLA allele. This analysis has been used by several groups including Project 1 to identify novel peptides that appear to be reactive in human type 1 subjects. This particular project will attempt to validate the analysis of peptide immunogenicity in HLA-transgenic mice (Project 1) by testing recognition of these peptides in HLA-defined new onset diabetic subjects and other unique pre-diabetic populations. We will test the hypothesis that identification of peptide specific responses is dependent on the HLA peptide interaction (identified in Project 1 and quantitated in Project 4). We will attempt to define this reactivity in proliferation and cytokine assays performed on selected antigen and peptide-specific T cell lines. We will extend these initial observations to include development of stable T cell lines and clones which can then be used to define the TCR alpha and beta chain usage of these antigen specific cells. Efforts will also be directed at obtaining a greater understanding of the immunologic phenotype associated with disease development by analyzing the differences in cellular and humoral immunity between individuals with high risk and protective HLA phenotypes identified by the genetics core (Project 6). This information will be important in defining hard immunological endpoints for potential future intervention trials. Additionally, we will determine the optimum dose, timing and frequency of administration of peptide in vaccination protocols in an animal model of type 1 diabetes. This type of information will be critically important to design and implement future clinical trials of peptide immunotherapy in human subjects. Together with our collaborators we hope to identify peptides of islet autoantigens which are immunodominant and recognized in human type 1 diabetes populations which in native or modified form might eventually form the basis of a trial of immunotherapy for this disease.
{ "pile_set_name": "NIH ExPorter" }
Q: How to expose WhenAny etc I'm sure I've missed something or backed myself into some strange frustrated corner, but here is what I'm trying to do. I have a WPF application, using Unity as IoC. I have a number of services that have an interface. I deal with my services via the interfaces so the services can be swapped out easily or so that I can offer a choice to the end-user. All standard interface programming stuff. In reality, all my services implement ReactiveObject. I am now wanting to do some command handling stuff and am trying to get the CanExecute behaviour working. My basic problem is I cannot use WhenAny unless I cast the interface to a physical implementation (thus get the full type hierarchy for compilation, which can see WhenAny). However, this cast violates interfaces and means I lose the ability to swap out implementations. Is there a ReactiveUI interface that exposes WhenAny etc that I could derive my service interfaces from and thus be able to use the great features of ReactiveUI whilst remaining non-type specific? A: Why can't you use WhenAny on an instance that is an interface? As of ReactiveUI 4.x, WhenAny should be on every object. If you're still using 3.x, you can write your interfaces like this: interface ICoolInterface : IReactiveNotifyPropertyChanged { /* ... */ }
{ "pile_set_name": "StackExchange" }
Effect of respiration on the QT interval. This clinical study was undertaken to investigate the effect of respiration on the QT interval. The QT interval is affected by a variety of factors, including steady changes in heart rate, instantaneous changes in heart rate as in atrial fibrillation, and changes in autonomic tone. Respiration gives rise to cyclical changes in the instantaneous heart rate and autonomic tone. The effect of respiration on the QT interval was analyzed in 25 subjects in sinus rhythm. Cosinor analysis was used to estimate the amplitude of its change from the mean value, its statistical significance, and the timing of the maximum change. Thirteen (52%) subjects revealed significant respiratory change in the QT interval, being the shortest during inspiration in 10 of them. Its amplitude correlated positively with respiratory cycle length (r = .58, P < .01), but not with age, mean heart rate, or the amplitude of change in the RR interval. The mean amplitude of change in the QT interval was 0.8% compared to a change of 2.6% in the RR interval. There is a respiratory variation in the QT interval in subjects in sinus rhythm that is more prominent during slower respirations. However, the amplitude of change in the QT interval is small compared to the change in the RR interval.
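As a point of reference for the method named here (a generic textbook form, not an equation taken from this paper), single-component cosinor analysis fits the measured series to

QT(t) \approx M + A \cos\left( \frac{2\pi t}{\tau} + \phi \right)

where \tau is the known respiratory period, M is the rhythm-adjusted mean (MESOR), A is the fitted amplitude reported as the change from the mean value, and the acrophase \phi gives the timing of the maximum change.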
{ "pile_set_name": "PubMed Abstracts" }
Microendoscopic lumbar discectomy: technical note. The microendoscopic discectomy (MED) technique was initially developed in 1997 to treat herniated lumbar disc disease. Since then, thousands of cases have been successfully performed at more than 500 institutions. This article discusses the technical aspects of this procedure and presents a consecutive case series. A total of 150 consecutive patients underwent MED. MED is performed by a muscle-splitting approach using a series of tubular dilators with consecutively increasing diameters. A tubular retractor is then inserted over the final dilator, and a specially designed endoscope is placed inside the tubular retractor. The microdiscectomy is performed endoscopically while the surgeon views the procedure on a video monitor. Clinical outcomes were determined using a modified MacNab criteria, which revealed that 77% of patients had excellent, 17% had good, 3% had fair, and 3% had poor outcomes. The average hospital stay was 7.7 hours. The average return to work period was 17 days. Complications primarily included dural tears, which occurred in 8 patients (5%) and were seen early on in the patient series. Complication rates diminished as the surgeon's experience with this technique increased. MED for lumbar herniated disc disease can be performed safely and effectively, resulting in a shortened hospital stay and faster return to work; however, there is a learning curve to this procedure.
{ "pile_set_name": "PubMed Abstracts" }
Q: How to fix error C2664 which only occurs when namespaces used I am getting this error: main.cpp(10) : error C2664: 'lr::codec::codec(protocol_decoder *)' : cannot convert parameter 1 from 'proto::protocol_decoder *' to 'protocol_decoder *' If I remove the use of the proto namespace then this error goes away. How do I fix this and still retain the use of the proto namespace. Here is the code: main.cpp: #include "protocol_decoder_a.hpp" #include "codec.hpp" int main() { //factory function create protocol decoder proto::protocol_decoder* pro = new proto::protocol_decoder_a; lr::codec cdc(pro); return 0; } codec.hpp: #ifndef __CODEC_HPP__ #define __CODEC_HPP__ #include <map> #include <string> class protocol_decoder; //log replay namespace namespace lr { typedef bool (*c_f)(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len); // generic codec interface will use specific class codec { public: codec(protocol_decoder* decoder); ~codec() {} bool get_response(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len); const char* get_monitored_dn(const char* id, unsigned char* rq, size_t rq_length); void load_msgs_from_disk(); protocol_decoder* decoder_; }; } //namespace lr codec.cpp: #include "codec.hpp" using namespace lr; codec::codec(protocol_decoder* decoder) : decoder_(decoder) { load_msgs_from_disk(); } void codec::load_msgs_from_disk() { //use specific protocol decoder here } bool codec::get_response(const char* id, unsigned char* rq, size_t rq_length, unsigned char*& response, size_t& resp_len) { return true; } const char* codec::get_monitored_dn(const char* id, unsigned char* rq, size_t rq_length) { return 0; } protocol_decoder.hpp: #ifndef __PROTOCOL_DECODER_HPP__ #define __PROTOCOL_DECODER_HPP__ namespace proto { enum id_type { UNKNOWN_ID, INT_ID, STRING_ID }; struct msg_id { msg_id() : type(UNKNOWN_ID) {} id_type type; union { const char* s_id; size_t i_id; }; }; class protocol_decoder { public: virtual const char* get_monitored_dn(unsigned char* msg, size_t msg_len) = 0; virtual bool get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len) = 0; virtual bool get_msg_id(unsigned char* rq, size_t rq_len, msg_id id) = 0; }; } //namespace proto #endif //__PROTOCOL_DECODER_HPP__ protocol_decoder_a.hpp: #ifndef __PROTOCOL_DECODER_A_HPP__ #define __PROTOCOL_DECODER_A_HPP__ #include "protocol_decoder.hpp" namespace proto { class protocol_decoder_a : public proto::protocol_decoder { public: virtual const char* get_monitored_dn(unsigned char* msg, size_t msg_len); virtual bool get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len); virtual bool get_msg_id(unsigned char* rq, size_t rq_len, proto::msg_id id); }; } //namespace proto #endif //__PROTOCOL_DECODER_A_HPP__ protocol_decoder_a.cpp: #include "protocol_decoder_a.hpp" using namespace proto; const char* protocol_decoder_a::get_monitored_dn(unsigned char* msg, size_t msg_len) { //specific stuff here return 0; } bool protocol_decoder_a::get_response(unsigned char* rq, size_t rq_len, unsigned char* response, size_t resp_len) { return true; } bool protocol_decoder_a::get_msg_id(unsigned char* rq, size_t rq_len, proto::msg_id id) { return true; } A: You've accidentally declared two protocol_decoder classes. One in the global namespace, and one in the proto namespace. Change this declaration: class protocol_decoder; To this: namespace proto { class protocol_decoder; }
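One follow-on detail, offered as a sketch under assumption rather than a fix verified against this exact project: once the forward declaration moves into namespace proto, the places in codec.hpp and codec.cpp that mention the type generally need to be qualified as well (or pull the name in with a using-declaration), roughly like this:

// codec.hpp (excerpt) -- assumed adjustment; other members unchanged
namespace proto { class protocol_decoder; }   // forward declaration in its real namespace

namespace lr {
class codec {
public:
    codec(proto::protocol_decoder* decoder);  // qualified parameter type
    proto::protocol_decoder* decoder_;        // qualified member type
};
} // namespace lr

// codec.cpp (excerpt) -- with the file's existing "using namespace lr;" still in place,
// the definition must match the qualified declaration
codec::codec(proto::protocol_decoder* decoder) : decoder_(decoder)
{
    load_msgs_from_disk();
}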
{ "pile_set_name": "StackExchange" }
Cord blood concentrations of leptin, zinc-α2-glycoprotein, and adiponectin, and adiposity gain during the first 3 mo of life. Adipose tissue development starts in intrauterine life and cytokines are involved in this process. Therefore, understanding the role of cytokines in the fat mass gain of infants is crucial to prevent obesity later in life. Furthermore, recent evidence indicates a sex-specific link between cytokines and adipose tissue development. The objective of this study was to assess sex-specific relationships of cord blood concentrations of the cytokines leptin, zinc-α2-glycoprotein (ZAG), and adiponectin with infant adiposity during the first 3 mo of life. This was a prospective cohort study of 104 mother-infant pairs that were selected from a maternity hospital in Sao Paulo, Brazil. Cord blood leptin, ZAG, and adiponectin were determined by enzyme-linked immunosorbent assays. The body composition of the infants was assessed monthly by air displacement plethysmography. A multiple linear regression analysis was conducted with the average fat mass gain from birth to the third month of life as the outcome and cord blood leptin, ZAG, and adiponectin as the variables of interest. Leptin was inversely associated with fat mass gain in the first 3 mo of life (P = 0.003; adjusted R2 = 0.09). There were inverse associations of leptin (P = 0.021), ZAG (P = 0.042), and maternal body mass index (P = 0.04) with fat mass gain in girls (adjusted R2 = 0.29) but fat mass gain in boys was positively associated with gestational age (P = 0.01; adjusted R2 = 0.15). The results of this study suggest that adiposity programming is sex-specific, which highlights the need to investigate the different metabolic mechanisms that are involved in adipogenesis.
{ "pile_set_name": "PubMed Abstracts" }
Haerts (album) Haerts is the debut studio album by American indie pop band Haerts, released on October 27, 2014 by Columbia Records. The album was produced by Haerts and Jean-Philip Grobler (better known as St. Lucia), with additional production from Andy Baldwin. It features three songs that were previously released on the band's debut extended play, Hemiplegia, which was released on October 8, 2013. Promotion Haerts embarked on a small promotional tour around North America in mid-2014. They performed already released music from their first EP Hemiplagia, and unreleased music from their then-upcoming debut album. On September 2, 2014, the band released the first single from their album, "Giving Up", with an accompanying music video on October 30, 2014. The band has embarked on a US tour hitting New York City, Los Angeles, Washington, D.C., Philadelphia, San Francisco, and several other cities, starting on November 7 until December 20, 2014. Track listing Charts References Category:2014 debut albums Category:Columbia Records albums Category:Haerts albums
{ "pile_set_name": "Wikipedia (en)" }
HMS Savage (1805) HMS Savage was a 16-gun brig-sloop of the Seagull class of the British Royal Navy, launched in July 1805. She served during the Napoleonic Wars and captured a privateer. She grounded in 1814 but was salved. The Navy sold her in 1819. Career Commander James Wilkes Maurice arrived in Liverpool on 3 August 1805 with dispatches after his courageous, though ultimately unsuccessful defence of Diamond Rock. The Admiralty greeted him warmly and within the month gave him the task of commissioning the newly launched sloop Savage for the Irish Station. While he was fitting her out at Portsmouth and assembling a crew, Admiral Lord Nelson met with Maurice and expressed his regrets that he had not been able to arrive in time to save Diamond Rock. However, Nelson expressed his admiration for Maurice's conduct and informed Maurice that at his, Nelson's, particular request, Maurice and Savage were to serve under Nelson's command. At the time Nelson was preparing to resume command of the Mediterranean fleet. Unfortunately, Maurice was not able to get Savage ready in time and so was not able to be present at the battle of Trafalgar. Having missed the battle, Savage instead spent from December 1805 to June 1807 primarily in convoying vessels from various ports in the St George's Channel to The Downs, and back. During this service, Savage never lost a vessel. Savage sailed with a convoy from Cork to Jamaica on 30 August 1807. There he served on the Jamaica station under Vice-Admiral Dacres. On 12 December, Savage captured the Spanish privateer Quixote off Porto Cavallo. Quixote carried eight guns and a crew of 99 men. She was "a Vessel of a large Class, and fitted out for the Annoyance of the Trade bound to [Jamaica]". In July 1808, Maurice joined Admiral Alexander Cochrane at Barbados. Cochrane appointed Maurice governor of Marie-Galante, a post he took up on 1 October. Commander William Robilliard then replaced Maurice. In 1810, Commander William Ferrie replaced Robilliard. He sailed for Jamaica on 2 July 1810. Savage underwent repairs at Sheerness between September 1811 and March 1812. Commander William Bissel recommissioned her in February. He then sailed with a convoy to Quebec on 18 May 1812. On 20 January 1814 Bissel stranded Savage on Guernsey. After three days of thick weather she grounded on Rock North on the north most end of the island. Some pilots came aboard and eventually, with their assistance, Savage reached Great Harbour, where she again grounded. The next day she was brought to the Pier Head, and then to a port where she could be repaired. The court martial board dismissed Bissel from the Navy on the grounds that he had sailed southward for too long, had neglected to use the lead and to keep a reckoning, and not insisted that his officers do likewise. By February Savage was back at Portsmouth. C. Mitchell replaced Bissel. Fate The Navy offered Savage for sale at Portsmouth on 3 February 1819. She was sold to a Mr. John Tibbut on that day for £950. Citations References Grocott, Terence (1997) Shipwrecks of the revolutionary & Napoleonic eras (Chatham). Paget, Sir Arthur, and Sir Augustus Berkeley Paget (1896) The Paget papers:diplomatic and other correspondence of the Right Hon. Sir Arthur Paget, G.C.B., 1794-1807. With two appendices 1808 & 1821-1829, Volume 2. (W. Heinemann). Parliament proceedings (1809) Naval papers respecting Copenhagen, Portugal, and the Dardanelles, presented to parliament in 1808. 
Category:Brig-sloops of the Royal Navy Category:1805 ships Category:Maritime incidents in 1814
{ "pile_set_name": "Wikipedia (en)" }
Certainly by no means a bad level. Perhaps a tad bit on the cramped side. It plays decently enough, though I'm actually a bit disappointed that it ended relatively quickly. It also seems you have the mapinfo set up for an act 2, so I presume you plan to do more than one map? You have some potential here.
{ "pile_set_name": "Pile-CC" }
Q: Is there a better way to concatenate variables in a string with no spaces? I have this (simplified) code: $hostname = "127.0.0.1" $aaa= "http://$hostname:8001" Write-Host $aaa Output is http:// The problem is the colon following the $hostname variable, so I fixed it this way: $hostname = "127.0.0.1" $aaa= "http://$hostname" + ":8001" Write-Host $aaa I was wondering if is there any better way of doing it using any PowerShell technology I am not aware of. A: Two way: "http://$($hostname):8001" or "http://$hostname`:8001" The colon is reserved in variable names: it associate the variable with a specific scope or namespace: $global:var or $env:PATH The part before the ':' can be a scope or a PSDrive.
{ "pile_set_name": "StackExchange" }
Q: Unable to locate the model you have specified: modelName I am trying to load this model: class Menu { function show_menu() { $obj =& get_instance(); $obj->load->helper('url'); $menu = anchor("start/hello/fred","Say hello to Fred |"); $menu .= anchor("start/hello/bert","Say hello to Bert |"); $menu .= anchor("start/another_function","Do something else |"); return $menu; } } This is where my controller is: function hello($name) { $this->load->model('Menu'); $mymenu = $this->Menu->show_menu(); } Why do I get this error? Unable to locate the model you have specified: menu A: CodeIgniter can't find the file of the model. If you named your model Menu, make sure the file name is menu.php and not something else like menu_model.php.
{ "pile_set_name": "StackExchange" }
Traditional DNA sequencing techniques share three essential steps in their approaches to sequence determination. First, a multiplicity of DNA fragments are generated from a DNA species which it is intended to sequence. These fragments are incomplete copies of the DNA species to be sequenced. The aim is to produce a ladder of DNA fragments, each a single base longer than the previous one. For example, with the Sanger method (Sanger et al., Proc. Natl. Acad. Sci. USA 74:5463, 1977), the target DNA is used as a template for a DNA polymerase to produce a number of incomplete clones. These fragments, which differ in respective length by a single base, are then separated on an apparatus which is capable of resolving single-base differences in size. The third and final step is the determination of the nature of the base at the end of each fragment. When ordered by the size of the fragments which they terminate, these bases represent the sequence of the original DNA species. Automated systems for DNA sequence analysis have been developed, such as discussed in Toneguzzo et al., 6 Biotechniques 460, 1988; Kanbara et al., 6 Biotechnology 816, 1988; and Smith et al., 13 Nuc. Acid. Res. 13: 2399, 1985; U.S. Pat. No. 4,707,237 (1987). However, all these methods still require separation of DNA products by a gel permeation procedure and then detection of their locations relative to one another along the axis of permeation or movement through the gel. These apparatuses used in these methods are not truly automatic sequencers. They are merely automatic gel readers, which require the standard sequencing reactions to be carried out before samples are loaded onto the gel. The disadvantages of the above methods are numerous. The most serious problems are caused by the requirement for the DNA fragments to be size-separated on a polyacrylamide gel. This process is time-consuming, uses large quantities of expensive chemicals, and severely limits the number of bases which can be sequenced in any single experiment, due to the limited resolution of the gel. Sanger dideoxy sequencing has a read length of approximately 500 bp, a throughput that is limited by gel electrophoresis (appropriately 0.2%). Other methods for analyzing polynucleotide sequences have been developed more recently. In some of these methods broadly termed sequencing by synthesis, template sequences are determined by priming the template followed by a series of single base primer extension reactions (e.g., as described in WO 93/21340, WO 96/27025, and WO 98/44152). While the basic scheme in these methods no longer require separation of polynucleotides on the gel, they encounter various other problems such as consumption of large amounts of expensive reagents, difficulty in removing reagents after each step, misincorporation due to long exchange times, the need to remove labels from the incorporated nucleotide, and difficulty to detect further incorporation if the label is not removed. Many of these difficulties stem directly from limitations of the macroscopic fluidics employed. However, small-volume fluidics have not been available. As a result, these methods have not replaced the traditional gel-based sequencing schemes in practice. The skilled artisans are to a large extent still relying on the gel-based sequencing methods. Thus, there is a need in the art for methods and apparatuses for high speed and high throughput analysis of longer polynucleotide sequences which can be automated utilizing the available scanning and detection technology. 
The present invention fulfills this and other needs.
{ "pile_set_name": "USPTO Backgrounds" }
Trp-Trp Cross-Linking: A Structure-Reactivity Relationship in the Formation and Design of Hyperstable Peptide β-Hairpin and α-Helix Scaffolds. Using model peptide β-hairpin scaffolds, the facile formation of a remarkably stable covalently cross-linked modification is reported in the tryptophan side chain, which confers hyperstability to the scaffold and displays a unique structure-reactivity relationship. This strategy is also validated to obtain a thermostable α-helix. Such imposition of conformational constraints can have versatile applications in peptide-based drug discovery, and this strategy may improve peptide bioavailability.
{ "pile_set_name": "PubMed Abstracts" }
Bulldogs v Eels: Five key points Share on social media The Eels did their best to hang on but were overrun by the Bulldogs after suffering several injuries and getting through a mountain of defence. Here are five key points to take from the Bulldogs' 32-12 victory. Defence-minded Eels have come to play Parramatta were the punching bags of the NRL for two years, winning just 11 games across two seasons in back-to-back wooden spoon efforts in 2012 and 2013, leaking the most points in the NRL in each season at almost 30 per game. Those days are gone now. And it may sound strange to say, given the 32 points they let in against Canterbury is technically higher than they averaged in that stretch, but look at how the game panned out. They were starved of possession with just 45 per cent for the game (yes, often it was their own doing), and ravaged by injuries which took their toll as two late tries – one of them against a 12-man defensive line with stricken centre Beau Champion sitting helplessly on the ground nursing a bung knee – blew out the score. Most crucially they were able to defend their line for long stretches, and not only that, seemingly relished the challenge, urging each other on and growing in confidence each time they turned their opponents away. This is what Brad Arthur brings to a side. And while we're not getting carried away and tipping the Eels for the top four, under Arthur's watch they have come a long way from the rabble of recent seasons. Brett Morris can play fullback He's only played two games there, and only two games total in the past six months, but the former Dragons winger looks right at home at the back. He'll never be the ball-player that Jarryd Hayne or Billy Slater were and are, or even one-time five-eighth Greg Inglis is. He's probably more in the Anthony Minichiello mould – a brilliant support player and evasive runner who is at his most dangerous lurking around the back of an attacking raid or returning the ball in broken play. We're not going to go down the Des Hasler path of putting him in the NSW No.1 jersey just yet but he has started well and he's getting better. It will be interesting to see where this trend ends up. Backline injuries are shaping up as match-enders Parramatta have been on both sides of it now. Last week they ran riot against a Manly side that lost a bench player and had to move a forward to the centres when Clint Gutherson went down with an ACL injury early on. On Friday it was their turn, as centre Brad Takairangi shifted to the wing and back-rower Manu Ma'u to the centres following an untimely injury to winger Semi Radradra. The problem was exacerbated by halfback Chris Sandow needing to spend time off the field managing an ankle injury, and Arthur had run out of interchanges by the time Beau Champion broke down five minutes before full-time. Both Manly last week and the Eels on Friday hung in valiantly to be in front shortly before or after half-time but got blown off the park late. With just 10 interchanges and no luxury for coaches to carry a spare outside back on the bench, a match-ending injury to any of the back five is likely to prove pivotal in most games in which they occur this year. Danny Wicks will be an asset There had been plenty of speculation around how ex-Knight Danny Wicks would fare in his attempted NRL return after five years out of the game due to drugs offences. Wicks spent more time on field than planned due to the Eels injuries, but he looked far fitter than at any previous point in his NRL career. 
Wicks has shed around 20kg from his previous playing weight and showed incredible athleticism. He was tough to tackle and ran the ball ferociously. With Junior Paulo suspended and Richie Fa'aoso still no guarantee to get clearance to return, Wicks could prove a seriously astute signing for the Eels in 2015. Corey Norman ready to lead Fresh off a man-of-the-match effort last week, Eels five eighth Corey Norman was brilliant again in a losing cause. Norman was steady at best for the Eels last year; he provided much-needed stability alongside livewire Chris Sandow and had some good moments and good games. He was quite good, without being amazing. Not so this year, when he has been best on field for Parramatta in the first two games, setting up all three of his side's tries on Friday. First, an utterly pinpoint no-look catch-and-pass with Tim Lafai sprinting up at him set up Semi Radradra for the side's first try. After half-time a decision one-two helped makeshift centre Manu Ma'u to two tries in three minutes on almost the same blade of grade. For the first, running to the left, he held up the ball then delivered a short late pass to send Ma'u crashing over. The second was set to be an action replay until he dropped it on the toe at the last instant and the Bulldogs were unable to adapt, handing Ma'u a double. As the man seemingly most in danger from the club's possible recruitment of Kieran Foran, Norman is quickly making himself indispensable.
{ "pile_set_name": "Pile-CC" }
Q: Android In-App Billing error You need to sign into your google account I am implementing in-app purchase using https://github.com/anjlab/android-inapp-billing-v3 . But when the in-app purchase popup opens, it shows "Error Authentication is required. You need to sign into your google account". I tested using different devices with the same result.
A: Important! I've spent a lot of time trying to find out why I'm getting the error "Error Authentication is required. You need to sign into your google account". And after a lot of hours I found out - I was trying to access the wrong item id from the console. In the developer console the subscription item had the id "premium" and I had tried to access the "premium_version" item. Stupid mistake, but the error from google is absolutely not informative. Hope this helps
A: In my case what happened was that the right product was all set up at the console, however it wasn't activated.
A: If anybody is getting the above popup, you can re-check the following steps, because unfortunately this google popup is not informative enough to give a clue. Make sure you are using the product ID correctly (it should be the same as what you've put on the developer console). Make sure you've activated the product on the developer console before testing. It may take a while, so wait till it's ready. Make sure the version of your app is in a published state on Beta, Alpha or Production. Remember to add testing emails under the testers list (Settings -> Testers List -> Create List). The testing email should be different from the publisher account. If nothing works in the above, try removing the google account on your phone, adding it back, and clearing data in Play Store. Hope this helps :)
{ "pile_set_name": "StackExchange" }
Q: How to get all element which is inside certain character using js regular expression I have 1 string in variable a which contain array parsing like variable in string format as shown below in example , I want to get all those index which is bounded by [ and ] var a = 'info.name[0][1][5].data[0]', collect = []; a.split(']').reverse().forEach(function(a) { if (a.indexOf('[') !== -1) { var splits = a.split('['); collect.push(splits[splits.length - 1]) } }) console.log(collect); my code shown above works fine I know it fails sometime so ,I am looking more better program if possible please help me to solve this problem with regular expression. **Please Dont Use Jquery or any other libs to solve this problem ** A: You could use the match method: const a = 'info.name[0][1][5].data[0]'; const collect = a.match(/[^[]+?(?=\])/g) || []; console.log(collect); The regular expression consists of: [^[]+?: capture one or more characters that are not [. The ? makes that capturing stop as soon as the next condition is satisfied: (?=\]): require that the next character is a ], but don't capture it. The g modifier will ensure that all matches are returned. || [] is added for the case that there are no matches at all. In that case match returns null. This addition will output an empty array instead of that null, which may be more practical. NB: I am not sure why you used reverse, but if you really need the output array in reversed order, you can of course apply reverse to it.
{ "pile_set_name": "StackExchange" }
[Neuronal linkage of the cerebral cortex and the striatum]. The striatum is the major input station of the basal ganglia. It receives a wide variety of inputs from all areas of the cerebral cortex. In particular, there are several parallel loop circuits, such as the motor, oculomotor, dorsolateral prefrontal, lateral orbitofrontal, and anterior cingulate loops, linking the frontal lobe and the basal ganglia. With respect to the motor loop, the motor-related areas, including the primary motor cortex, supplementary motor area, dorsal and ventral premotor cortices, presupplementary motor area, and rostral and caudal cingulate motor areas, send inputs to sectors of the putamen in combination via separate (parallel) and overlapping (convergent) pathways. Such signals return to the cortical areas of origin via the globus pallidus/substantia nigra and then the thalamus. The somatotopical representation is maintained in each structure that constitutes the motor loop. Employing retrograde transsynaptic transport of rabies virus, we have recently investigated the arrangement of multisynaptic pathways linking the basal ganglia to the caudal aspect of the dorsal premotor cortex (the so-called F2). F2r, the rostral sector of F2, has been shown to be involved in motor planning, whereas F2c, the caudal sector of F2, has been shown to be involved in motor execution. We analyzed the origins of multisynaptic inputs to F2r and F2c in the basal ganglia. Our results indicate that the 2 loop circuits connecting the F2r and F2c with the basal ganglia may possess a common convergent window at the input stage, while they have parallel divergent channels at the output stage.
{ "pile_set_name": "PubMed Abstracts" }
Relationship between plasma and hepatic cytosolic levels of ornithine decarboxylase (ODC) and thymidine kinase (TK) in 70% hepatectomized rats. Ornithine decarboxylase (ODC) and thymidine kinase (TK) are enzymes important for DNA synthesis, a process that is critical for cell renewal and regeneration. As such, they already have been used as surrogate markers of regeneration in tissue. In the present study, the activity of these two enzymes in plasma of rats and regenerating hepatic tissue following a 70% hepatectomy were determined. The results demonstrate that the changes in these enzyme activities in plasma reflect the changes obtained in the liver tissue. Thus, blood levels of ODC and TK can be used as a less invasive and nondestructive means of monitoring the regenerative response of the liver and possibly other tissues.
{ "pile_set_name": "PubMed Abstracts" }
Q: In Sitecore 8 can I get a field value in the language version of the page I want to get a field value in the language version of the page. For example I have an item called Search Placeholder in en-us with the the field value "Select.." on the en-us page it shows that value. But using the code below when I create Search Placeholder in en-gb and I put in the value "Select2..." It shows up blank on the en-gb page. string fieldName = "Search Placeholder Text"; Sitecore.Data.Items.Item someItem = Sitecore.Context.Database.GetItem("/sitecore/content/site/shared-content/Search Placeholder"); Sitecore.Data.Fields.Field someField = someItem.Fields[fieldName]; string searchPlace = someField.Value; Is there a way to check if Search Placeholder has a language version for a page? A: First of all, you can pass chosen language to GetItem method: Sitecore.Context.Database.GetItem(path, language) Then you can check if item has any version in that language using: someItem.Versions.Count > 0 If item has more than 0 versions and the field is null it means that either this item has not been publish after field was added to the template or the field item itself has not been published.
{ "pile_set_name": "StackExchange" }
[Scanning electron microscopic studies of the odontoblasts and the pulpodentinal border in domestic sheep (O. ammon aries Linnaeus, 1758)]. In order to investigate odontoblasts and predentin surfaces using SEM techniques, teeth of sheep with one or several roots were subjected to critical point drying. The odontoblasts of the root pulp are distinguished in their shape and arrangement pattern from those in the crown pulp. They regularly are detaching only one process of Tomes, which is extending up to the border between dentin and enamel and which shows a dendritic ramification in the dentin next to the enamel. The distal cell portions of the odontoblasts are joined together by a system of terminal bars. The collagen structures observed between the odontoblasts and predentin are considered to be von Korff's fibres as found in man.
{ "pile_set_name": "PubMed Abstracts" }
kind: Namespace apiVersion: v1 metadata: name: harbor
{ "pile_set_name": "Github" }
FIG. 12 is a block diagram of a conventional radio-frequency (RF) receiver. An RF signal having a frequency ranging from 55.25 MHz to 801.25 MHz is input to an input terminal 1. A single-tuned filter 2 is implemented by a signal variable capacitance diode and receives the RF signal input to the input terminal 1. The single-tuned filter 2 has a tuning frequency varying within a UHF band (367.25 MHz to 801.25 MHz) in response to a tuning voltage input to a frequency variable port 2a. An RF amplifier 3 amplifies a signal of the UHF band output from the single-tuned filter 2. An output of the RF amplifier 3 is connected to a double-tuned filter 4 composed of two variable capacitance diodes and having a tuning frequency varying in response to a tuning voltage supplied to a frequency variable port 4a. A signal output from the double-tuned filter 4 is supplied to one input port of a mixer 5. The other input port of the mixer 5 receives a signal output from a local oscillator 6 via a frequency divider 7. The mixer 5 mixes the UHF signal from the double-tuned filter 4 with an oscillation signal from the local oscillator 6 to convert the signal output from the double-tuned filter 4 into an intermediate-frequency signal at 45.75 MHz. An intermediate-frequency filter 8 is connected to an output port of the mixer 5 to attenuate undesired components of a signal outside of its range of 6 MHz. A signal output from the intermediate-frequency filter 8 is then amplified by an intermediate-frequency amplifier and output from an output terminal 9. The single-tuned filter 2, the RF amplifier 3, the double-tuned filter 4, the mixer 5, and the intermediate-frequency filter 8 constitute an UHF signal receiver section 10. A VHF signal receiver section 11 receives signals of a VHF band from 55.25 MHz to 361.25 MHz through the input terminal 1, and composed of a single-tuned filter 12, an RF amplifier 13, a double-tuned filter 14, and a mixer 15. The single tuned filter 12 is composed of a single variable capacitance diode and has a tuning frequency varying in response to a tuning voltage supplied to a frequency variable port 12a. The RF amplifier 13 amplifies a signal at the VHF band output from the single-tuned filter 12. The double-tuned filter 14 is connected to an output port of the RF amplifier 13 and composed of two variable capacitance diodes and has a tuning frequency varying in response to a tuning voltage supplied to a frequency variable port 14a. The mixer 15 has one input port receiving a signal output from the double-tuned filter 14 and has the other input port receiving a signal output from the local oscillator 6 via a frequency divider 16. The mixer 15 mixes the VHF signal passing through the double-tuned filter 14 with the oscillation signal from the local oscillator 6 to convert the VHF signal from the double-tuned filter 14 into an intermediate-frequency signal at 45.75 MHz. A signal output from the mixer 15 is transmitted to an input port of the intermediate-frequency filter 8. A tuning section 18 is connected between input ports 17a and 17b of an oscillator 17. The tuning section 18 includes a series assembly 21 including a variable capacitance diode 19 and a capacitor 20 connected in series with each other and an inductor 22 connected in parallel with the series assembly 21. The output port of the oscillator 17 is connected to an input port of a phase-locked-loop (PLL) circuit 23. 
The PLL circuit 23 supplies tuning voltages from an output port 23a to the variable capacitance diode 19 in the tuning section 18 and variable capacitance diodes in the single-tuned filter 2, the double-tuned filter 4, the single-tuned filter 12, and the double-tuned filter 14 for controlling the oscillation frequency of the local oscillator 6 and the tuning frequencies of the single-tuned filter 2, the double-tuned filter 4, the single-tuned filter 12, and the double-tuned filter 14. In the conventional receiver, the mixers 5 and 15 output intermediate-frequency signals at 45.75 MHz. This requires frequencies of signals passing through the single-tuned filters 2 and 12 and the double-tuned filters 4 and 14 to be separated by the range of the intermediate-frequency (45.75 MHz) from the frequencies of the signals output from the frequency dividers 7 and 16. Such conventional receiver receives a wide frequency range from the VHF band to the UHF band with the single local oscillator 6. It is hence not easy to separate the frequencies of signals passing through the single-tuned filters 2 and 12 and the double-tuned filters 4 and 14 by the range of the intermediate frequency from the frequency of the signals output from the frequency dividers 7 and 16. Accordingly, the passing frequencies of the tuned filters may shift from a receiving channel, hence reducing attenuation of any interference signal. As a result, an interference signal may be received directly by the mixers 5 and 15, hence causing image interruption. Conventional RF receivers similar to the receiver explained above are disclosed in Japanese Patent Laid-Open Publication Nos.2000-295539, 2002-118795, and 1-265688.
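The image-interference mechanism described above can be made concrete with a small worked example. The sketch below is purely illustrative and not part of the patent text: it assumes high-side injection of the local oscillator and reuses the 45.75 MHz intermediate frequency and the band-edge channel frequencies quoted above.

# Illustrative sketch (not from the patent): relation between the wanted RF
# channel, the local-oscillator (LO) frequency, and the image frequency,
# assuming high-side LO injection and the 45.75 MHz IF quoted above.

IF_MHZ = 45.75

def mixing_products(rf_mhz):
    """Return (LO frequency, image frequency) in MHz for a wanted channel."""
    lo_mhz = rf_mhz + IF_MHZ           # high-side injection: LO sits one IF above RF
    image_mhz = rf_mhz + 2 * IF_MHZ    # the image also mixes down to the same IF
    return lo_mhz, image_mhz

for rf in (55.25, 367.25, 801.25):     # band edges mentioned in the description
    lo, image = mixing_products(rf)
    print(f"RF {rf:7.2f} MHz -> LO {lo:7.2f} MHz, image {image:7.2f} MHz")

If the tuned filters drift off the wanted channel, attenuation at that image frequency drops and the mixer converts the image into the 45.75 MHz intermediate frequency as well, which is the interference path the passage describes.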
{ "pile_set_name": "USPTO Backgrounds" }
Prognostic significance of early lymphocyte recovery after post-autografting administration of GM-CSF in non-Hodgkin's lymphoma. The purpose of this study was to analyze the prognostic significance of early lymphocyte recovery after autologous SCT (ASCT) in the setting of routine post transplant administration of GM-CSF in patients with non-Hodgkin's lymphoma (NHL). This is a single-institution retrospective comparative outcome analysis in a cohort of 268 relapsed chemosensitive NHL patients divided into two groups (early and late lymphocyte recovery) based on absolute lymphocyte counts (ALC) obtained on post transplant day +15 (ALC ≥ 500, n=151 (56%) and ALC < 500, n=117 (44%)). Patients' characteristics were well-balanced between the two groups with regard to age, sex, preparative regimen, prior therapy, time from diagnosis to transplant and number of CD34+ cells infused. Post transplant complications were comparable in the two groups. Late lymphocyte recovery (ALC < 500 on day +15) was independently associated with a delay in platelet recovery (29 vs 21 days, P=0.0003) in patients who had not received pre-transplant rituximab. With a median follow-up of 22 months, no associations between early lymphocyte recovery and improvement of disease-free and overall survival were observed for either low- or intermediate-grade NHL. In conclusion, in this large single-centered retrospective analysis, where patients received routine post transplant GM-CSF, early lymphocyte recovery was not associated with favorable outcomes.
{ "pile_set_name": "PubMed Abstracts" }
The inter-optic course of a unique precommunicating anterior cerebral artery with aberrant origin of an ophthalmic artery: an anatomic case report. Some variations of the cerebral arterial circle of Willis, such as an inter-optic course of the anterior cerebral artery are exceedingly rare. Imaging of very rare anatomical features may be of interest. In a 67-year-old male individual, the unique precommunicating part of the left anterior cerebral artery was found to course between both optic nerves. There was an agenesis of the right precommunicating cerebral artery. This variation was associated with an aberrant origin of the ophthalmic artery, arising from the anterior cerebral artery. The anatomic features, the possible high prevalence of associated aneurysms of the anterior communicating artery complex as well as implications for surgical planning or endovascular treatments are outlined and embryologic considerations are discussed. To the best of our knowledge, this is a very rare illustrated case of an inter-optic course of a unique precommunicating anterior cerebral artery with aberrant origin of an ophthalmic artery.
{ "pile_set_name": "PubMed Abstracts" }
Point & Shoot Films, a micro-budget horror studio based out of New England, is entering the final stages of their first feature film The Carnage Collection, with a trailer set to drop on Halloween. A number of gore-filled stories are told within The Collection, including Slay Bells, “a Christmas tale starring everyone’s favorite holiday fat guy”, Stuffed, “about a girl, her stuffed sloth, and sexual obsessions”, and VCR, “about a lonely man and his obsolete technology.” Written by Point & Shoot co-founders Bob and Derek Ferreira, and directed by the duo and Kimball Rowell, the film’s practical effects shown so far are pretty sick.
Splitting Headaches
The idea is definitely intriguing, and the filmmakers have an obvious passion for their craft. We’ll have to wait and see the trailer before making too many judgments, but right now The Carnage Collection looks like a hell of a good time.
Jeff is a writer for the Blood-Shed, and eats, drinks, and bleeds horror. You can help him write his first book here
{ "pile_set_name": "Pile-CC" }
/*
 * Copyright 2015 Google Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.google.maps.android.clustering;

import com.google.android.gms.maps.model.LatLng;
import com.google.maps.android.clustering.algo.StaticCluster;

import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotSame;

public class StaticClusterTest {

    @Test
    public void testEquality() {
        StaticCluster<ClusterItem> cluster1 = new StaticCluster<>(new LatLng(0.1, 0.5));
        StaticCluster<ClusterItem> cluster2 = new StaticCluster<>(new LatLng(0.1, 0.5));

        assertEquals(cluster1, cluster2);
        assertNotSame(cluster1, cluster2);
        assertEquals(cluster1.hashCode(), cluster2.hashCode());
    }

    @Test
    public void testUnequality() {
        StaticCluster<ClusterItem> cluster1 = new StaticCluster<>(new LatLng(0.1, 0.5));
        StaticCluster<ClusterItem> cluster2 = new StaticCluster<>(new LatLng(0.2, 0.3));

        assertNotEquals(cluster1, cluster2);
        assertNotEquals(cluster1.hashCode(), cluster2.hashCode());
    }
}
{ "pile_set_name": "Github" }
It’s important that given a project configuration, two checkouts of the configuration in the same environment (operating system, Python version) should produce the same result, regardless of their history. For example, if someone has been working on a project for a long time, and has committed their changes to a version control system, they should be able tell a colleague to check out their project and run buildout and the resulting build should have the same result as the build in the original working area. We believe that software should be self-contained, or at least, that it should be possible. The tools for satisfying the software responsibilities should largely reside within the software project itself. Some examples: Software services should include tools for monitoring them. Operations, including monitoring is a software responsibility, because the creators of the software are the ones who know best how to assess whether it is operating correctly. It should be possible, when deploying production software, for the software to configure the monitoring system to monitor the software. Software should provide facilities to automate its configuration. It shouldn’t be necessary for people to create separate configuration whether it be in development or deployment (or stages in between). Software deployment should be highly automated. It should be possible to checkout a project with a single simple command (or two) and get a working system. This is necessary to achieve the goals of repeatability and componentization and generally not to waste people’s time.
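One way to make the repeatability goal above concrete is to compare a digest of the build output produced by two independent checkouts of the same configuration. The sketch below is only an illustration of that idea, not part of buildout or any other tool mentioned here; the directory names are assumptions for the example.

# Illustrative sketch: verify that two independent checkouts built the same thing
# by hashing every file under each build directory (directory names are assumed).

import hashlib
import os

def tree_digest(root):
    """Return one SHA-256 digest over all relative paths and file contents under root."""
    digest = hashlib.sha256()
    for dirpath, _dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as handle:
                digest.update(handle.read())
    return digest.hexdigest()

if __name__ == "__main__":
    first = tree_digest("checkout-a/parts")   # assumed build output of one checkout
    second = tree_digest("checkout-b/parts")  # assumed build output of a colleague's checkout
    print("repeatable build" if first == second else "builds differ")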
{ "pile_set_name": "Pile-CC" }
Q: Elmah does not work with asp.net mvc I spent countless hours trying to get Elmah working with ASP.NET MVC, but can't get it working 100%. Right now all the logging works fine, but the HttpHandlers are all screwy. Every time I try to log into an admin account I automatically get redirected to Elmah's listings page. It makes no sense because the path for Elmah is just elmah.axd (that's what I use for the HTTP handler in the web.config) and my admin path is something like /MyAdmin/login, so I don't see the connection. I have also set up the ignore-routes entry in my routes table for Elmah.
To sum it up: Elmah logging works and so do the error display pages. When I try to log in to my admin account it automatically redirects to Elmah's error display page. I have no idea why. If I comment out routes.IgnoreRoute("elmah.axd"); my login works. If I leave it in there, it always redirects to Elmah.
A: I finally figured it out. No one would have got this one... I had a reference to RouteDebugger.dll, which I got from the book "Asp.net MVC Framework Unleashed", and for some reason this DLL messed up all my POST requests if Elmah was enabled. It was pure dumb luck that I figured it out. I couldn't get the RouteDebugger working, so I deleted the reference and added a different one and then everything worked.
{ "pile_set_name": "StackExchange" }
Q: How to show the current play time of a video file in JTextField? I calculate the time of the current play time of video: public show_time_of_vedio_file(MediaPanel mediaPanel,JFrame_of_subtitle frame) { // for(;;) { double second=mediaPanel.mediaPlayer.getMediaTime().getSeconds(); int second1=(int) second; int hour=second1/3600; second1=second1-hour*3600; int minute=second1/60; second1=second1-minute*60; double milisecond=(second-(int)second)*1000; int milisecond_1=(int) milisecond; String milisecond_string=String.valueOf(milisecond_1); String hour_string=String.valueOf(hour); String minute_string=String.valueOf(minute); String second_string=String.valueOf(second1); if(hour_string.length()==1) hour_string="0".concat(hour_string); if(minute_string.length()==1) minute_string="0".concat(minute_string); if(second_string.length()==1) second_string="0".concat(second_string); if(milisecond_string.length()==2) milisecond_string="0".concat(milisecond_string); else if(milisecond_string.length()==1) milisecond_string="0".concat("0".concat(milisecond_string)); frame.show_time_jTextField.setText(String.format("%s:%s:%s,%s", hour_string,minute_string,second_string,milisecond_string)); } } Now I want to show the this time in JTextField all the time when the video is playing and when the video is not play I want to show 00:00:00,000. Can anyone tell me how can I do this? A: ..want to show the this time in JTextField.. Use a JProgressBar for this instead. E.G. See How to Use Progress Bars for more details. ..not play I want to show 00:00:00,000. See JProgressBar.setString(String). The progress-bar in the upper right of this GUI shows use of a more 'media friendly' string.
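The conversion in the question is plain integer arithmetic on the elapsed time. As an illustration of that arithmetic only (written in Python rather than the question's Java/Swing code), the same HH:MM:SS,mmm formatting can be expressed with divmod; in the Swing program itself a javax.swing.Timer would typically drive the periodic refresh of the text field on the event dispatch thread.

# Illustrative sketch of the HH:MM:SS,mmm arithmetic used in the question (not Swing code).

def format_media_time(seconds):
    """Convert an elapsed time in seconds (float) to an 'HH:MM:SS,mmm' string."""
    millis = int(round(seconds * 1000))
    secs, millis = divmod(millis, 1000)
    mins, secs = divmod(secs, 60)
    hours, mins = divmod(mins, 60)
    return f"{hours:02d}:{mins:02d}:{secs:02d},{millis:03d}"

print(format_media_time(0.0))        # 00:00:00,000 -- what to show when nothing is playing
print(format_media_time(3754.208))   # 01:02:34,208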
{ "pile_set_name": "StackExchange" }
Brad Meltzer’s Decoded: Season One
Please be advised. Unless otherwise stated, all BLU-RAY are REGION A and all DVD are REGION 1 encoding. Before purchasing, please ensure that your equipment can play back these regions.
Product Notes
What if everything you learned in history books were only half true? Best-selling author and history buff Brad Meltzer has studied and written about some of the most revered institutions and documents in human history including the Supreme Court, the presidency, the Secret Service, Wall Street, and the bible. Along the way he's uncovered countless clues, stories and theories he hasn't been able to fully scrutinize until now. In BRAD MELTZERS DECODED: SEASON ONE, Meltzer separates fact from fiction as he investigates the hidden history and coded truths behind everything from the Statue of Liberty to the dollar bill to the assassination of President Lincoln. Mysteries explored in this captivating HISTORY series include: The real story behind the White House cornerstone, which has been missing for two centuries; the location of the lost Confederate treasury; the hidden messages of the Statue of Liberty; and whether Lady Liberty is anti-religion.
{ "pile_set_name": "Pile-CC" }
The invention relates to a washer for an anchor rod, wherein the anchor rod is fastened in a bore hole by a mortar having an opening for the passage there-through of the anchor rod and a passage for the introduction of the mortar mass into the bore hole. Known washers of the type described above are used, for example, with a mountain rock anchor. A rod-shaped anchoring means, for example, an anchor rod, is introduced in a surface that includes a bore hole . The washer is positioned on the end opposite the setting end of the anchoring means, wherein the washer has an opening for the reception of the anchor. A filler mass, particularly, a mortar mass, is introduced into an intermediate space, formed by the wall of the bore hole and the outer contour of the anchoring means, using a through passage arranged in the washer. The filler mass, contained, for example, in cartridges, can be filled into the bore hole using a cartridge compression device. The washer at least partially seals the opening of the bore hole because the washer lies on the top surface. DE-A1-2102391 discloses a washer comprising an opening for the through passage of an anchor rod and a through passage for the introduction of a mortar mass. The advantage of the known prior art is that it is simple to set the anchor rod in the bore hole because a known washer is used. The disadvantage of the known prior art is that the mortar mass must be located very close to the opening for receiving the anchor rod, when the space between the anchor rod and the wall of the bore hole through passage is small. In such an assembly, the risk of breakage of the washer increases drastically. The object of the present invention is to create a washer that is also suitable for filling small spaces between the anchor rod and the wall of the bore hole through passage. Furthermore, the washer should have a high break strength. In accordance with the invention, the opening for receiving the anchor rod and the through passage for filling of the filler mass are connected by at least one channel, whereby the course of the channel deviates from a common axis of the through passage and the opening. The filler mass is conveyed via the channel to the space between the anchor rod and the wall of the bore hole through passage. Optionally, in such an embodiment, the through passage can be arranged on the washer. A blind bore hole can be substituted for the through passage and can be in angular communication with the channel. The channel can be configured closed or even open. Fracture along the channel is prevented by the configuration of the channel out of the common axis of the through passage and the opening. The advantage of such an embodiment is the non-straight linear structure of the channel which provides increased stability. The opening and the through passage are preferably connected by at least two channels to prevent greater weakening of the washer near the channel. Moreover, such an arrangement prevents closure of the channels, for example, by the anchor rod. In a further preferred embodiment, the channels are situated arc-shaped along a disc plane to provide a channel geometry that affords optimal fracture strength. The curved design of the channels prevents a fracture along the channel. Advantageously, the channels are arranged on the side of the washer facing the bore hole and are at least partially open on that side. Such an arrangement provides a more economical production of the washer. 
Further, in such an embodiment, there is no clogging of the channels, which prevents the filler mass from passing through prior to the setting process. The user can, thus, easily check the functionality of the channels and undertake cleaning without significant effort. The sum of the inner diameter of the channels is, preferably, approximately equal to the inner diameter of the through passage, so that no excessive resistance is generated when the filler mass is being filled, via the channels. The inside diameter of the filling apparatus used is particularly relevant; overall, the channels have the same conveyance capacity as the filling apparatus. Preferably, the side of the washer remote from the bore hole has a conical recess arranged coaxial with the opening to receive a high tensile load. A nut threaded onto the anchor rod has, on a side adjacent to the bore hole, an end region complementary to the conical recess. The channels are, preferably, formed using a stamping process to assure economic manufacture of the washer.
{ "pile_set_name": "USPTO Backgrounds" }
Q: textwrangler: how can I run without opening files? TextWrangler has always worked great, but now it's hanging every time I open it. I suspect one of the files it's trying to open causes a problem. Is there a way to run it without opening any files?
A: You can press and hold the Shift key while launching TextWrangler to suppress all normal startup actions, including reopening files.
A: Your TextWrangler preferences are stored in ~/Library/Preferences/ in:
the file com.barebones.textwrangler.plist
the files in the directory com.barebones.textwrangler.PreferenceData
I don't know which of the files contains the open documents. Move the files/folder to your desktop and try starting it, then put them back one after another to find the source of your problem.
{ "pile_set_name": "StackExchange" }
Title Author Degree Type Dissertation Date of Award 2009 Degree Name Doctor of Philosophy Department Chemical and Biological Engineering First Advisor Brent H. Shanks Abstract Potassium-promoted iron oxide is the primary catalyst for dehydrogenating ethylbenzene to styrene. Due to an increasing demand for saving energy, there is a strong incentive to operate the reaction at reduced steam/ethylbenzene molar ratio, since a large amount of steam is used in the process. However, the catalyst experiences short-term deactivation under low S/EB conditions. Active site blocking by surface carbon and iron oxide reduction by either surface carbon or H2 are two possible deactivation mechanisms. However, the relative importance of these two mechanisms is not understood. It is very important to understand which deactivation mechanism dominates as different mechanism will lead to different development approaches. In this study, phase transitions of iron oxide based catalyst samples were investigated with TGA and XRD to understand the intrinsic deactivation mechanism. The effects of various promoters on iron oxide activity and stability were also studied. Hydrogen and carbon dioxide were utilized as the gas environment individually to avoid convolution of effects. Ethylbenzene was then applied to characterize the combined effects of hydrogen, carbon dioxide, and surface coke. Potassium efficiently increases the activity of iron oxide and its effect on phase stability was examined. The active potassium ferrite phase and potassium polyferrite, which has been considered a storage phase of potassium and iron (III), can be converted to each other when exposed to carbon dioxide or hydrogen. It was also found that the deposited surface carbon was a stronger reductant than hydrogen. Other minor promoters are also used in dehydrogenation catalysts to enhance stability, enhance activity, or increase the styrene selectivity. Therefore, their effects on the catalyst were also examined in this study. Chromium, calcium, and cerium were found to have a positive effect on iron oxide stability, while vanadium and molybdenum had negative impacts on iron oxide stability. Activity enhancement could be achieved by doping with chromium, calcium, molybdenum, and cerium. Vanadium greatly reduced the activity of catalyst, since it inhibited formation of the active phase.
{ "pile_set_name": "Pile-CC" }
Q: Apache ExpiresDefault: can it reside in a directive? The 2.2 docs state that ExpiresDefault can be placed in server config, virtual host, directory, and .htaccess. It doesn't mention Location. I have a mod_perl server, and I'd like most, or all, of the non-dynamic content (jpg, css, js, etc.) to expire "infrequently". But I want all mod_perl generated pages to expire "now". My configuration appears to be working, but I want to make sure I'm not missing something, since it's undocumented.
ExpiresActive on
ExpiresDefault "access plus 1 month"
<LocationMatch ^/app/.*>
    ExpiresDefault "now"
</LocationMatch>
A: <Location> falls under directory context. So, yes.
{ "pile_set_name": "StackExchange" }
Developing a new model of care for patients with chronic musculoskeletal pain. To evaluate the impact of a nurse consultant in developing a new model of care for patients with chronic musculoskeletal pain. Patients with chronic musculoskeletal pain experience fragmented care and long waits to have their symptoms assessed [Clinical Standards Advisory Group (2000) Services for Patients with Pain. Department of Health, London]. A nurse consultant post was created to implement a chronic musculoskeletal pain service and prevent inappropriate referrals to other services. Seven peers participated in a semi-structured, qualitative, audio-taped interview to evaluate the impact of the nurse consultant's role. Data were analysed using content analysis. A retrospective audit of 60 patients was conducted to determine utilization of hospital services following attendance at the pain clinic. Two main themes were identified from the interview data included: (1) the influence of the nurse consultant in implementing a chronic pain service and (2) the clinical leadership skills of the nurse consultant. The audit demonstrated that majority of patients (n = 53) were utilizing less hospital specialities. The nurse consultant's role was pivotal in the implementation of the chronic pain service.
{ "pile_set_name": "PubMed Abstracts" }
export default function isDate(input) {
  return input instanceof Date || Object.prototype.toString.call(input) === '[object Date]';
}
{ "pile_set_name": "Github" }
Omaha Dog Parks The Omaha Dog Park Advocates is a non-profit, 501c3 organization. Our goal is to provide safe and clean off-leash dog parks in the City of Omaha. Our dog parks are fenced, off-leash areas where dog owners can bring their dogs to play, exercise, and socialize with other dogs and their owners. With the help of generous donors and friends, and in cooperation with the City of Omaha, the Hefflinger and Hanscom Dog Parks give the dogs who love us so unconditionally a special place to go to let their inner spirit run free.
{ "pile_set_name": "Pile-CC" }
Just Plain Cool-ey! JOSH COOLEY IS AN ARTIST AT PIXAR STUDIOS IN CALIFORNIA. One day, he got the idea to illustrate famous and memorable movie scenes in his style. Accidental Mysteries gives Josh the big “Me Likey Award” — and you can learn more about his cool art here.
{ "pile_set_name": "Pile-CC" }
Q: Floating floor invisible transition Can caulk or similar be used to create invisible transition between floating wood flooring and vinyl floor tiling - the floor under the tiling has been built up to match level of floating wood floor) Thanks for any advice. A: You are not supposed to secure a floating floor in any fashion, and caulking an edge may just do that. If it is a small section of floor, like a small 3X5 ft. powder room, you may get by, (not that I would put a floating floor in a powder room, this is just for example) but a larger floor, you need to use the transition or cap strip made for the laminate floor.
{ "pile_set_name": "StackExchange" }
Effects of precipitation and temperature on crop production variability in northeast Iran. Climate variability adversely impacts crop production and imposes a major constraint on farming planning, mostly under rainfed conditions, across the world. Considering the recent advances in climate science, many studies are trying to provide a reliable basis for climate, and subsequently agricultural production, forecasts. The El Niño-Southern Oscillation phenomenon (ENSO) is one of the principle sources of interannual climatic variability. In Iran, primarily in the northeast, rainfed cereal yield shows a high annual variability. This study investigated the role played by precipitation, temperature and three climate indices [Arctic Oscillation (AO), North Atlantic Oscillation (NAO) and NINO 3.4] in historically observed rainfed crop yields (1983-2005) of both barley and wheat in the northeast of Iran. The results revealed differences in the association between crop yield and climatic factors at different locations. The south of the study area is a very hot location, and the maximum temperature proved to be the limiting and determining factor for crop yields; temperature variability resulted in crop yield variability. For the north of the study area, NINO 3.4 exhibited a clear association trend with crop yields. In central locations, NAO provided a solid basis for the relationship between crop yields and climate factors.
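The association analysis the abstract describes, relating yearly rainfed yields to precipitation, temperature and indices such as NINO 3.4, NAO and AO, amounts to correlating time series. The snippet below is an illustrative sketch only; the file name and column names are assumptions, not taken from the study.

# Illustrative sketch of a yield-climate association analysis like the one described above.
# The CSV file and its column names are assumed for the example.

import pandas as pd

df = pd.read_csv("northeast_iran_yields.csv")   # assumed columns: year, wheat_yield,
                                                # barley_yield, precip, tmax, nino34, nao, ao

predictors = ["precip", "tmax", "nino34", "nao", "ao"]
for crop in ("wheat_yield", "barley_yield"):
    print(crop)
    for name in predictors:
        r = df[crop].corr(df[name])             # Pearson correlation across years
        print(f"  {name:>7}: r = {r:+.2f}")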
{ "pile_set_name": "PubMed Abstracts" }
Q: Javascript array of dates - not iterating properly (jquery ui datepicker) I have some code which builds an array of date ranges. I then call a function (from the jquery UI datepicker), passing it a date, and compare that date with dates in the array. I'm doing it this way because the dates are stored in a cms and this is the only way I can output them. Unfortunately my code only checks the first date range in the array - and I can't figure out why! I think it's probably something simple (/stupid!) - if anyone can shed some light on it I'd be extremely grateful! The code is below - the june-september range (ps1-pe1) works fine, the december to jan is totally ignored... <script type="text/javascript" language="javascript"> var ps1 = new Date(2010, 06-1, 18); // range1 start var pe1 = new Date(2010, 09-1, 03); // range1 end var ps2 = new Date(2010, 12-1, 20); // range2 start var pe2 = new Date(2011, 01-1, 02); // range2 end var peakStart = new Array(ps1,ps2); var peakEnd = new Array(pe1,pe2); function checkDay(date) { var day = date.getDay(); for (var i=0; i<peakStart.length; i++) { if ((date > peakStart[i]) && (date < peakEnd[i])) { return [(day == 5), '']; } else { return [(day == 1 || day == 5), '']; } } } </script> A: Yaggo is quite right, but apparently too terse. You want to move the second return statement outside of the loop. function checkDay(date) { var day = date.getDay(); for (var i=0; i<peakStart.length; i++) { if ((date > peakStart[i]) && (date < peakEnd[i])) { return [(day == 5), '']; } } // it's not during a peak period return [(day == 1 || day == 5), '']; }
{ "pile_set_name": "StackExchange" }
Q: Memory efficient sort of massive numpy array in Python I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = (868940742, 3) which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13' MacBook Pro with 16GB of RAM, 500GB solid state HD and an 3.1 GHz intel i7 processor. Just loading the array overflows to virtual memory but not to the point where my machine suffers or I have to stop everything else I am doing. I build this VERY large array step by step from 22 smaller (N, 2) subarrays. Function FUN_1 generates 2 new (N, 1) arrays using each of the 22 subarrays which I call sub_arr. The first output of FUN_1 is generated by interpolating values from sub_arr[:,0] on array b = array([X, F(X)]) and the second output is generated by placing sub_arr[:, 0] into bins using array r = array([X, BIN(X)]). I call these outputs b_arr and rate_arr, respectively. The function returns a 3-tuple of (N, 1) arrays: import numpy as np def FUN_1(sub_arr): """interpolate b values and rates based on position in sub_arr""" b = np.load(bfile) r = np.load(rfile) b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1]) rate_arr = np.searchsorted(r[:,0], sub_arr[:,0]) # HUGE efficiency gain over np.digitize... return r[rate_r, 1], b_arr, sub_arr[:,1] I call the function 22 times in a for-loop and fill a pre-allocated array of zeros full_arr = numpy.zeros([868940742, 3]) with the values: full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1 In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes. Here is the sorting routine (there are two consecutive sorts) for idx in range(2): sort_idx = numpy.argsort(full_arr[:,idx]) full_arr = full_arr[sort_idx] # ... # <additional processing, return small (1000, 3) array of stats> Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, more fine resolution table of [X, F(X)] values for the interpolation step above in FUN_1 that returns b_arr and now the SORT really slows down, although everything else remains the same. Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...? bigger, slower file: 17399307 99.4 17493652 98.8 17570460 98.2 17575180 97.6 17577127 97 17578255 96.4 17580576 95.8 17583028 95.2 17583699 94.6 17584172 94 smaller, more uniform regular file: 1 24 1001 24 2001 24 3001 24 4001 24 5001 24 6001 24 7001 24 I'm not sure what could be causing this issue and I would be interested in any suggestions or just general input about sorting in this type of memory limiting case! A: At the moment each call to np.argsort is generating a (868940742, 1) array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of full_arr you are generating another (868940742, 1) array of floats, since fancy indexing always returns a copy rather than a view. 
One fairly obvious improvement would be to sort full_arr in place using its .sort() method. Unfortunately, .sort() does not allow you to directly specify a row or column to sort by. However, you can specify a field to sort by for a structured array. You can therefore force an inplace sort over one of the three columns by getting a view onto your array as a structured array with three float fields, then sorting by one of these fields: full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) In this case I'm sorting full_arr in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns ('f8') - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. full_arr.flags.C_CONTIGUOUS == True. Credit for this method should go to Joe Kington for his answer here. Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using np.argsort to generate an index array, as you mentioned in the comments below (see this previous question). If you use np.argsort to obtain a set of indices to sort by, you might see a modest performance gain by using np.take rather than direct indexing to get the sorted array: %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() x[idx] # 1 loops, best of 100: 148 µs per loop %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort() np.take(x, idx, axis=0) # 1 loops, best of 100: 42.9 µs per loop However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy. Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10: 5 1 4 8 10 2 6 9 7 3 There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. Now suppose there are just 5 unique digits: 4 4 3 2 3 1 2 5 1 5 Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering. By default, np.ndarray.sort() uses Quicksort. The qsort variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array. For example: %%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000) x.sort() # 1 loops, best of 100: 2.3 ms per loop %%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000) x.sort() # 1 loops, best of 100: 4.62 ms per loop In this example the dtypes of the two arrays are the same. If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller. A: EDIT: In case anyone new to programming and numpy comes across this post, I want to point out the importance of considering the np.dtype that you are using. 
In my case, I was actually able to get away with using half-precision floating point, i.e. np.float16, which reduced a 20GB object in memory to 5GB and made sorting much more manageable. The default used by numpy is np.float64, which is a lot of precision that you may not need. Check out the doc here, which describes the capacity of the different data types. Thanks to @ali_m for pointing this out in the comments. I did a bad job explaining this question but I have discovered some helpful workarounds that I think would be useful to share for anyone who needs to sort a truly massive numpy array. I am building a very large numpy array from 22 "sub-arrays" of human genome data containing the elements [position, value]. Ultimately, the final array must be numerically sorted "in place" based on the values in a particular column and without shuffling the values within rows. The sub-array dimensions follow the form: arr1.shape = (N1, 2) ... arr22.shape = (N22, 2) sum([N1..N2]) = 868940742 i.e. there are close to 1BN positions to sort. First I process the 22 sub-arrays with the function process_sub_arrs, which returns a 3-tuple of 1D arrays the same length as the input. I stack the 1D arrays into a new (N, 3) array and insert them into an np.zeros array initialized for the full dataset: full_arr = np.zeros([868940742, 3]) i, j = 0, 0 for arr in list(arr1..arr22): # indices (i, j) incremented at each loop based on sub-array size j += len(arr) full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) ) i = j return full_arr EDIT: Since I realized my dataset could be represented with half-precision floats, I now initialize full_arr as follows: full_arr = np.zeros([868940742, 3], dtype=np.float16), which is only 1/4 the size and much easier to sort. Result is a massive 20GB array: full_arr.nbytes = 20854577808 As @ali_m pointed out in his detailed post, my earlier routine was inefficient: sort_idx = np.argsort(full_arr[:,idx]) full_arr = full_arr[sort_idx] the array sort_idx, which is 33% the size of full_arr, hangs around and wastes memory after sorting full_arr. This sort supposedly generates a copy of full_arr due to "fancy" indexing, potentially pushing memory use to 233% of what is already used to hold the massive array! This is the slow step, lasting about ten minutes and relying heavily on virtual memory. I'm not sure the "fancy" sort makes a persistent copy however. Watching the memory usage on my machine, it seems that full_arr = full_arr[sort_idx] deletes the reference to the unsorted original, because after about 1 second all that is left is the memory used by the sorted array and the index, even if there is a transient copy. A more compact usage of argsort() to save memory is this one: full_arr = full_arr[full_arr[:,idx].argsort()] This still causes a spike at the time of the assignment, where both a transient index array and a transient copy are made, but the memory is almost instantly freed again. @ali_m pointed out a nice trick (credited to Joe Kington) for generating a de facto structured array with a view on full_arr. The benefit is that these may be sorted "in place", maintaining stable row order: full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0) Views work great for performing mathematical array operations, but for sorting it is far too inefficient for even a single sub-array from my dataset. In general, structured arrays just don't seem to scale very well even though they have really useful properties. If anyone has any idea why this is I would be interested to know. 
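The memory figures quoted in this answer follow directly from the element sizes, which is worth checking with a line or two of arithmetic (illustrative only):

# Quick arithmetic check of the memory figures quoted above: 868,940,742 rows x 3 columns.
import numpy as np

rows, cols = 868940742, 3
for dt in (np.float64, np.float32, np.float16):
    nbytes = rows * cols * np.dtype(dt).itemsize
    print(f"{np.dtype(dt).name:>7}: {nbytes / 1e9:5.1f} GB")

# float64 gives ~20.9 GB (matching full_arr.nbytes above) and float16 ~5.2 GB,
# matching the "20GB ... to 5GB" reduction.  An int64 index array from np.argsort
# adds another rows * 8 bytes, roughly 7.0 GB, matching the earlier estimate.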
One good option to minimize memory consumption and improve performance with very large arrays is to build a pipeline of small, simple functions. Functions clear local variables once they have completed so if intermediate data structures are building up and sapping memory this can be a good solution. This a sketch of the pipeline I've used to speed up the massive array sort: def process_sub_arrs(arr): """process a sub-array and return a 3-tuple of 1D values arrays""" return values1, values2, values3 def build_arr(): """build the initial array by joining processed sub-arrays""" full_arr = np.zeros([868940742, 3]) i, j = 0, 0 for arr in list(arr1..arr22): # indices (i, j) incremented at each loop based on sub-array size j += len(arr) full_arr[i:j, :] = np.column_stack( process_sub_arrs(arr) ) i = j return full_arr def sort_arr(): """return full_arr and sort_idx""" full_arr = build_arr() sort_idx = np.argsort(full_arr[:, index]) return full_arr[sort_idx] def get_sorted_arr(): """call through nested functions to return the sorted array""" sorted_arr = sort_arr() <process sorted_arr> return statistics call stack: get_sorted_arr --> sort_arr --> build_arr --> process_sub_arrs Once each inner function is completed get_sorted_arr() finally just holds the sorted array and then returns a small array of statistics. EDIT: It is also worth pointing out here that even if you are able to use a more compact dtype to represent your huge array, you will want to use higher precision for summary calculations. For example, since full_arr.dtype = np.float16, the command np.mean(full_arr[:,idx]) tries to calculate the mean in half-precision floating point, but this quickly overflows when summing over a massive array. Using np.mean(full_arr[:,idx], dtype=np.float64) will prevent the overflow. I posted this question initially because I was puzzled by the fact that a dataset of identical size suddenly began choking up my system memory, although there was a big difference in the proportion of unique values in the new "slow" set. @ali_m pointed out that, indeed, more uniform data with fewer unique values is easier to sort: The qsort variant of Quicksort works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted, so intuitively, the fewer unique values there are in the array, the smaller the number of swaps there are that need to be made. On that note, the final change I ended up making to attempt to resolve this issue was to round the newer dataset in advance, since there was an unnecessarily high level of decimal precision leftover from an interpolation step. This ultimately had an even bigger effect than the other memory saving steps, showing that the sort algorithm itself was the limiting factor in this case. Look forward to other comments or suggestions anyone might have on this topic, and I almost certainly misspoke about some technical issues so I would be glad to hear back :-)
{ "pile_set_name": "StackExchange" }
Sermon May 5, 2019 So Ananias went and entered the house. He laid his hands on Saul and said, ‘Brother Saul, the Lord Jesus, who appeared to you on your way here, has sent me so that you may regain your sight and be filled with the Holy Spirit.’ And immediately something like scales fell from his eyes, and his sight was restored. – Acts 9:17 & 18i Derek Black learned at a young age how to hate. Born into a white supremacist family he was taught that people of color were to be shunned, separated from a nation founded by and for whites. He was home schooled not only in the Three “R’s” but taught who God loved and who God rejected. He decided to run for public office and became gifted at spreading his message. He would not speak directly but roundabout in order to make an opening for his position. He commented on running for political office in Florida and how he would subtly win people over to his ideology. He writes about that effort and tells the reader what he used to say: “‘Don’t you think all these Spanish signs on the highway are making everything worse? And don’t you think political correctness is just not letting you talk about things that are real?’ And getting people to agree on that would be the way forward.1” He was, like Saul, blind to love and the common humanity shared by us all. But like Saul, his story does not end with blindness. He attended New College of Florida where he met several Jewish students. Their tolerance made room for him. “Black’s new friends invited him over for Shabbat dinner week after week. Gradually, he began to rethink his views. After much soul-searching, a 22-year-old [Derek] Black wrote an article, published by the Southern Poverty Law Center in 2013, renouncing white nationalism.” The scales of injustice, the long years of hate, fell from his eyes and Derek Black could see anew. In Damascus, long ago, Ananias greeted a man who had sworn to have Ananias killed for Ananias’s beliefs. He spoke to Saul not with words of condemnation or even fear. He said to Saul what would have been hard for me to say to a white nationalist: “Brother Saul….” And so I pondered: are my prejudices so selective that I do not need a blinding light? Am I so confident that my judgment of others is the same as the judgment of God? Have I forgotten the unconditional love of God, who “so loved the world that God gave God’s only Son?” Am I so sure of my virtue that I need not see anew? When my brother Doug was a sophomore in high school, a car with two teenage boys stopped as Doug was walking home from school. He was a half block from our front door. The boys got out of the car and struck Doug in the left eye. So powerful was the blow that the bone beneath his eye as shattered. They had to take a portion of one of Doug’s ribs and place it below his eye to repair the damage. At the time, I had no idea why anyone would want to hurt my brother. All he was doing was walking home from school. I learned latter that there may have been a reason for their hatred – my brother was gay. Had he somehow let his secret be known? Had they stopped the car and smashed his face because they hated not only what Doug was but themselves as well? Yet many of us have seen progress made in civil rights, strides made in gender equality, a new found openness to those born with a different sexual orientation than ourselves. It takes time to remove the scales of hate, but God brings them low. Our blindness is not forever if we but share our mutual humanity with all. 
So in this era when Americans are tempted to fear others who are not like themselves, when we have forgotten how to be civil to those who do not share our political bias, when Muslims hate Jews and Jews Muslims and Christians are persecuted for following Christ, know this: the blindness of our age, the blindness we may personally possess, cannot withstand the love of God. Blinded on the road to Damascus, Saul would be met by a man who called him “brother.” The grace of God calls us to expand our love towards others. Any time we are told to hate, to fear the foreigner in our midst, to deny human rights to those not our own, we are not on the side of God. To exclude others from the circle of God’s grace is to play at being God. We must always seek to bring others into the circle of grace. As Edwin Markham wrote: He drew a circle that shut me out — Heretic, rebel, a thing to flout. But love and I had the wit to win: We drew a circle that took him in. And so, my friends, must we. Blinded, Saul came stumbling into Damascus, a man on a mission of hate and exclusion now blinded by God. One of those Saul would have put in chains was given the courage to great his would-be jailer with the words, “Brother Saul…” The scales fell from Saul’s eyes and this hater of the church became the greatest evangelist for Christ the world has ever known. May they fall from my eyes, too, that I, like Saul, might see anew. Let us pray…. 1Meanwhile Saul, still breathing threats and murder against the disciples of the Lord, went to the high priest2and asked him for letters to the synagogues at Damascus, so that if he found any who belonged to the Way, men or women, he might bring them bound to Jerusalem.3Now as he was going along and approaching Damascus, suddenly a light from heaven flashed around him.4He fell to the ground and heard a voice saying to him, ‘Saul, Saul, why do you persecute me?’5He asked, ‘Who are you, Lord?’ The reply came, ‘I am Jesus, whom you are persecuting.6But get up and enter the city, and you will be told what you are to do.’ [7The men who were travelling with him stood speechless because they heard the voice but saw no one.8Saul got up from the ground, and though his eyes were open, he could see nothing; so they led him by the hand and brought him into Damascus.9For three days he was without sight, and neither ate nor drank. 10 Now there was a disciple in Damascus named Ananias. The Lord said to him in a vision, ‘Ananias.’ He answered, ‘Here I am, Lord.’11The Lord said to him, ‘Get up and go to the street called Straight, and at the house of Judas look for a man of Tarsus named Saul. At this moment he is praying,12and he has seen in a vision*a man named Ananias come in and lay his hands on him so that he might regain his sight.’13But Ananias answered, ‘Lord, I have heard from many about this man, how much evil he has done to your saints in Jerusalem;14and here he has authority from the chief priests to bind all who invoke your name.’15But the Lord said to him, ‘Go, for he is an instrument whom I have chosen to bring my name before Gentiles and kings and before the people of Israel;16I myself will show him how much he must suffer for the sake of my name.’17So Ananias went and entered the house. He laid his hands on Sauland said, ‘Brother Saul, the Lord Jesus, who appeared to you on your way here, has sent me so that you may regain your sight and be filled with the Holy Spirit.’18And immediately something like scales fell from his eyes, and his sight was restored. 
Then he got up and was baptized,19and after taking some food, he regained his strength. For several days he was with the disciples in Damascus,20and immediately he began to proclaim Jesus in the synagogues, saying, ‘He is the Son of God.’]
{ "pile_set_name": "Pile-CC" }
Q: in Immutablejs Map push into List I am a beginner developer who is studying redux. I am using immutablejs to add object type data to the state. When you press the button on the react component, the test data (Map ()) is pushed to the List(). But there is a problem. When the button is pressed, the following type of data is input, and when the page is refreshed, it is updated with normal data. Why is this happening? I really appreciate your help. Before Refresh After Refresh import { handleActions } from 'redux-actions' import axios from 'axios' import { Map, List } from 'immutable' let token = localStorage.token if (!token) token = localStorage.token = Math.random().toString(36).substr(-8) let instance = axios.create({ baseURL: 'http://localhost:5001', timeout: 1000, headers: {'Authorization': token} }) const GET_POST_PENDING = 'GET_POST_PENDING' const GET_ALL_POST_SUCCESS = 'GET_ALL_POST_SUCCESS' const CREATE_POST_SUCCESS = 'CREATE_POST_SUCCESS' const GET_POST_FAILURE = 'GET_POST_FAILURE' //actions export const getPost = (postId) => dispatch => { dispatch({type: GET_POST_PENDING}); return instance.get('/posts').then( response => { dispatch({ type: GET_ALL_POST_SUCCESS, payload: response }) } ).catch((error) => { dispatch({ type: GET_POST_FAILURE, payload: error }) }) } export const createPost = () => dispatch => { dispatch({type: GET_POST_PENDING}) return instance.post('/posts',{ id: Math.random().toString(36).substr(-10), timestamp: Date.now(), title: 'test title', body: 'test body', category: 'redux', author: 'minwoo', deleted: false, voteScore: 1 }).then( response => { console.log(response) //check data dispatch({ type:CREATE_POST_SUCCESS, payload: response }) } ).catch((error) => { dispatch({ type: GET_POST_FAILURE, payload: error }) }) } const initialState = Map({ posts: List([]), comments: List([]) }) I know that the console should not be here. However, when I press the button, I want to check if the response data is transmitted correctly. //reducer export default handleActions({ [GET_POST_PENDING]: (state, action) => { return state; }, [GET_ALL_POST_SUCCESS]: (state, action) => { console.log(action.payload.data)//for check data return state.set('posts', List([...action.payload.data])) }, [CREATE_POST_SUCCESS]: (state, action) => { const posts = state.get('posts') return state.set('posts', posts.push( Map(action.payload.date) )) }, [GET_POST_FAILURE]: (state, action) => { return state } }, initialState) The code below is the React component mentioned above. import React from 'react'; import PropTypes from 'prop-types' import { List } from 'immutable'; const PostList = ({posts, PostActions: {getPost}}) => { const postList = posts.map((post,i) => ( <div key={i}> {post.title} <button>edit</button> <button>delete</button> </div> )) return ( <div className="PostList"> {postList} </div> ) } PostList.proptypes = { posts: PropTypes.instanceOf(List), getPost: PropTypes.func } PostList.defaultProps = { posts:[], getPost: () => console.log('getPost is not defined') } export default PostList A: List does not deeply convert your data, so when you pass an array of objects to List you will get a List object containing plain JS object. Use fromJS instead of List in GET_ALL_POST_SUCCESS
{ "pile_set_name": "StackExchange" }
> On 11 Aug 2018, at 11:33, Pablo Rodriguez <[email protected]> wrote:
>
> On 08/09/2018 10:20 PM, Hans Åberg wrote:
>>> On 9 Aug 2018, at 21:20, Pablo Rodriguez wrote:
>>> [...]
>>> My background is in humanities and I don’t understand the exponent for
>>> being a float ("10²" contains an exponent
>>> [https://www.m-w.com/dictionary/exponent], but I would say is an integer
>>> in all possible worlds [or all the worlds I know ]).
>>
>> It may refer to a floating point number syntax as in C++ [1], where the
>> three cases top there say that there must be a point '.' preceded or
>> followed by at least one digit, or at least one digit followed by an
>> exponent starting with 'e' or 'E'.
>
> Many thanks for your explanation, Hans.

You are welcome.

> I thought there should be some kind of restriction when referring to the
> exponent, but this is why technical explanations aren’t always very
> clear. I mean, they have too many restrictions attached to them.

The C++ description allows for fast reading with just one lookahead character at a time, which was important in C when it appeared in 1972 on the not so powerful computers of the day.

Then roundoff may make floating point numbers look like integers even when they are not. For example, 1.0 could be 0.99999999 or 1.0000001. So don't check floating point numbers for equality; instead use, say, abs(x - y) < a for some small number a.

When adding exact integers and inexact floating point numbers, it is easiest to always make the result inexact, as exactness cannot always be guaranteed.
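A short Python illustration of the two points made above: roundoff making floats look like (or unlike) the integers they approximate, and tolerance-based comparison instead of equality. The tolerance value is an arbitrary choice for the example.

# Roundoff makes exact equality tests on floats unreliable, as noted above.
import math

x = 0.1 + 0.2
print(x)          # 0.30000000000000004 -- not exactly 0.3
print(x == 0.3)   # False

a = 1e-9                                   # arbitrary small tolerance for this example
print(abs(x - 0.3) < a)                    # True: compare with a tolerance instead of ==
print(math.isclose(x, 0.3, rel_tol=1e-9))  # True: the standard-library equivalent

print(2 + 0.5)    # 2.5 -- mixing an exact integer with a float yields an (inexact) float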
{ "pile_set_name": "Pile-CC" }
Vince, You were a most gracious guest, and we were honored to have you in our home. I am happy that you are Tony's friend, and it was a great pleasure for me to get to know you also. And again, thank you very much for the lovely bouquet of roses. Elisabeth.
{ "pile_set_name": "Enron Emails" }
Modulating the Electrochemical Performances of Layered Cathode Materials for Sodium Ion Batteries through Tuning Coulombic Repulsion between Negatively Charged TMO2 Slabs. Exploiting advanced layered transition metal oxide cathode materials is of great importance to rechargeable sodium batteries. Layered oxides are composed of negatively charged TMO2 slabs (TM = transition metal) separated by Na+ diffusion layers. Herein, we propose a novel insight, for the first time, to control the electrochemical properties by tuning Coulombic repulsion between negatively charged TMO2 slabs. Coulombic repulsion can finely tailor the d-spacing of Na ion layers and material structural stability, which can be achieved by employing Na+ cations to serve as effective shielding layers between TMO2 layers. A series of O3-type NaxMn1/3Fe1/3Cu1/6Mg1/6O2 (x = 1.0, 0.9, 0.8, and 0.7) have been prepared, and Na0.7Mn1/3Fe1/3Cu1/6Mg1/6O2 shows the largest Coulombic repulsion between TMO2 layers, the largest space for Na ion diffusion, the best structural stability, and also the longest Na-O chemical bond with weaker Coulombic attraction, thus leading to the best electrochemical performance. Meanwhile, the thermal stability depends on the Na concentration in pristine materials. Ex situ X-ray absorption (XAS) analysis indicates that Mn, Fe, and Cu ions are all electrochemically active components during insertion and extraction of sodium ion. This study enables some new insights to promote the development of advanced layered NaxTMO2 materials for rechargeable sodium batteries in the future.
{ "pile_set_name": "PubMed Abstracts" }
Q: check if context has some table then add to this table I have multiple DbContexts and each context has some DbSets like public class fooContext : DbContext { DbSet<fooA> fooA { get; set; } DbSet<fooB> fooB { get; set; } } public class barContext : DbContext { DbSet<barA> barA { get; set; } DbSet<barB> barB { get; set; } } and an excel file with multiple excel sheets structured properly for linqtosql to work with (having sheet names as fooA,fooB..., first row is property names and remaining rows are data) I can see that if I know which context has fooA I can use something like this function inside the context public DbSet Set(string name) { return base.Set(Type.GetType(name)); } but I don't know which context has fooA to add this to. To clarify this: normally when you want to add fooARecord to the fooA table in fooContext you do fooContext.fooA.Add(fooARecord); but I only have fooA as a string and fooARecord P.S: can't use linqtosql since it's Oracle and I can't simply import the excel file into Oracle because there are too many tables and users need to be able to alter this data before this process A: To check if a fooContext has a DbSet of a specific type only by name, you can do this: var fooContext = new FooContext(); // the context to check var dbSets = fooContext.GetType().GetProperties() .Where(p => p.PropertyType.IsGenericType && p.PropertyType.GetGenericTypeDefinition() == typeof(DbSet<>)).ToArray(); // all DbSet<T> properties var fooA = "fooA"; // the table to search for var dbSetProp = dbSets.SingleOrDefault(x => x.PropertyType.GetGenericArguments()[0].Name == fooA); if (dbSetProp != null) { // fooContext has a fooA table var dbSet = fooContext.Set(dbSetProp.PropertyType.GetGenericArguments()[0]); // or via dbSetProp.GetValue(fooContext) as DbSet dbSet.Add(fooARecord); }
{ "pile_set_name": "StackExchange" }
Q: Sample instant app requires newer SDK I keep getting the error at the bottom of the question even though I followed the official emulator setup guide and sample project setup guide to the letter. Using: - Android Studio 3.0-Alpha7 - Pixel emulator with SDK 23 Provisioning succeeds and I was able to enable instant apps in Settings > Google > Instant Apps Side loading instant app failed: Failure when trying to read bundle. Instant App com.instantappsample requires an SDK version which is newer than the one installed on device. Please update the SDK on the device. Error while Uploading and registering Instant App A: Creating an API 26 (aka O) emulator allowed me to successfully install the Instant App, while otherwise following the guide. Hat-tip to donly from Github project android-instant-apps Workarounds I tried unsuccessfully first: Uninstalling "Google Play Services for Instant Apps" (from the other answer) Downgrading to Android Studio 3.0 Canary 5 Using a physical device that can run instant apps (Galaxy S6 SM-G920V, Android 7.0)
{ "pile_set_name": "StackExchange" }
One-week regimens containing ranitidine bismuth citrate, furazolidone and either amoxicillin or tetracycline effectively eradicate Helicobacter pylori: a multicentre, randomized, double-blind study. The metronidazole resistance of Helicobacter pylori strains has increased rapidly. To evaluate the efficacy and safety of new 1-week regimens containing ranitidine bismuth citrate, furazolidone and either amoxicillin or tetracycline. One hundred and twenty patients with H. pylori-positive inactive duodenal ulcer or non-ulcer dyspepsia diagnosed by endoscopy were recruited randomly to receive one of two regimens for 7 days: ranitidine bismuth citrate, 350 mg b.d., furazolidone, 100 mg b.d., and either amoxicillin, 1000 mg b.d. (n=60), or tetracycline, 500 mg b.d. (n=60). H. pylori infection was identified by rapid urease testing and histology. 13C-Urea breath test was performed to evaluate the cure of H. pylori infection at least 4 weeks after completion of triple therapy. The eradication rates of H. pylori by ranitidine bismuth citrate-furazolidone-amoxicillin and ranitidine bismuth citrate-furazolidone-tetracycline regimens were 82% and 85% (P > 0.05), respectively, by intention-to-treat analysis, and 85% and 91% (P > 0.05), respectively, by per protocol analysis. Adverse effects were mild in both ranitidine bismuth citrate-furazolidone-amoxicillin and ranitidine bismuth citrate-furazolidone-tetracycline groups. One-week regimens containing ranitidine bismuth citrate, furazolidone and amoxicillin or tetracycline are well tolerated and effective for the eradication of H. pylori.
{ "pile_set_name": "PubMed Abstracts" }
Health in Tonga Life expectancy in Tonga, which was once in the mid-70s, has fallen to 64. Up to 40% of the population is said to have type 2 diabetes. Obesity The people of Tonga are the most obese in the world, with 52.4% of men and 67.2% of women diagnosed as obese. This is thought to be a result of dietary changes. The consumption of imported meat, particularly mutton flaps, has replaced the islanders' traditional diet of fish, root vegetables and coconuts. Tāufaʻāhau Tupou IV, who died in 2006, holds the Guinness World Record for being the heaviest-ever monarch - 200 kg. See also Obesity in the Pacific References
{ "pile_set_name": "Wikipedia (en)" }
Sociodemographic, physical, mental and social factors in the cessation of breastfeeding before 6 months: a systematic review. The World Health Organization recommends exclusive breastfeeding as the main source of nutrition for infants during their first 6 months of life. However, despite this well-known recommendation, not all mothers breastfeed, whether partly or fully, during this time. The aim of this systematic literature review was to compile evidence regarding sociodemographic, physical, mental and social factors that influence breastfeeding mothers to stop breastfeeding before the infant reaches 6 months. A systematic search was conducted in four databases. Studies with quantitative research were included. In total, 186 abstracts were read, 83 seemed relevant but 18 were found to be duplicates. Finally, 27 articles met the inclusion criteria and were included. The quality assessment was carried out with a quality assessment template from the Swedish Council on Technology and Assessment, and the grading of the result was carried out according to GRADE. The association of breastfeeding cessation with the mother's young age, low level of education, return to work within 12 weeks postpartum, caesarean birth and inadequate milk supply was found to have a low level of evidence. The link found between depression among the mothers and the cessation of breastfeeding was found to have a very low level of evidence. Sociodemographic factors appeared to have caused cessation of breastfeeding in some of the included articles. The preventive work should focus on how to improve the knowledge of healthcare professionals, and targeted interventions must address mothers who are at risk of ceasing breastfeeding before the recommended time.
{ "pile_set_name": "PubMed Abstracts" }
Q: .htaccess not triggered So I have the following .htaccess in my /var/www/site RewriteEngine on RewriteRule ^([^/]+)/?$ parser.php?id=$1 [QSA,L] I have allowed override in my vhost: <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/site> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> UPDATE: Now I got it to work, however when I visit site.com, it also redirects me to this parser.php, which I don't want as this is my homepage. My homepage should be redirected to index.php and if I go to mysite/NASKDj, it should be redirected to parser.php?pid=NASKDj. How do I fix this? A: You have 'AllowOverride None' in the '/var/www/site' directory - this will override the one specified in the '/' directory. If your site is in /var/www/site you need to change this one to All too.
{ "pile_set_name": "StackExchange" }
Q: Upload multiple files using AJAX and servlet I've tried almost everything available on the internet but nothing seems to be working. I have a HTML5 filereader code which will get me all the files read from client side directory var f = $('#fileUpload')[0].files; Next thing I want to upload all these files with an AJAX request to a JAVA servlet POST method. for that I tried below code- var data = new FormData(); $.each(f, function(key, value) { data.append(key, value); }); postFilesData(data); //some code.. function postFilesData(data) { $.ajax({ url: 'serv2', type: 'POST', //enctype: 'multipart/form-data', data: data, cache: false, processData: false, mimetyep: 'multipart/form-data', contentType: 'multipart/form-data', success: function(data) { //success }, error: function(textStatus) { console.log('ERRORS: ' + textStatus); } }); } servlet code doPOst method- System.out.println("Hi what request:"+ServletFileUpload.isMultipartContent(request)); System.out.println("hi bro"); // awsUpload.uploadData(foldername); System.out.println("outside aws"); DiskFileItemFactory factory = new DiskFileItemFactory(); ServletFileUpload upload = new ServletFileUpload(factory); String uuidValue = ""; FileItem itemFile = null; try { // parses the request's content to extract file data List formItems = upload.parseRequest(request); Iterator iter = formItems.iterator(); // iterates over form's fields to get UUID Value while (iter.hasNext()) { FileItem item = (FileItem) iter.next(); if (item.isFormField()) { } // processes only fields that are not form fields if (!item.isFormField()) { itemFile = item; } } } catch (Exception e) { // TODO: handle exception } //System.out.println(path); // response.sendRedirect(path+"/user"+"/home.html"); // System.out.println("done"); if(itemFile==null) { System.out.println("File Empty Found"); } System.out.println("The File Name is"+itemFile.getName()); } HTML code: <form method="POST" enctype="multipart/form-data" > <input type="file" class="input-file" name="file[]" id="fileUpload" onchange="fileChanged();" multiple mozdirectory="" webkitdirectory="" directory=""/> <br/> it prints "File Empty Found" and crashes on below line with NullPointerException. I understand it's not getting any data. Could you please point out piece of code which is wrong or code missing which needs to be added. A: Thank you for your response guys..!! To answer @BalusC's questions, Yes it is multipart upload. It enters in while loop but no data was transferred from ajax call and code just broke at line- System.out.println("The File Name is"+itemFile.getName()); since there was no item to get FileName of. Only exceptions I got was on console "NullPointerException" and on UI-console(while debugging in JS) "500 internal server error" I was able to fix the code and was able to transfer data through AJAX call to Servlet. Below is the code. 
pretty much changed/restructured code for AJAX call and servlet code-- AJAX request-- var fd = new FormData(); //fd.append( 'file', $('#fileUpload')[0].files);//.files[0]); $.each($('#fileUpload')[0].files, function(k, value) { fd.append(k, value); }); $.ajax({ url: 'serv2', data: fd, processData: false, contentType: false, type: 'POST', success: function(data){ alert(data); } }); Servlet code-doPost method-- if (!ServletFileUpload.isMultipartContent(request)) { PrintWriter writer = response.getWriter(); writer.println("Request does not contain upload data"); writer.flush(); return; } // configures upload settings DiskFileItemFactory factory = new DiskFileItemFactory(); factory.setSizeThreshold(THRESHOLD_SIZE); ServletFileUpload upload = new ServletFileUpload(factory); //upload.setFileSizeMax(MAX_FILE_SIZE); //upload.setSizeMax(MAX_REQUEST_SIZE); String uuidValue = ""; FileItem itemFile = null; try { // parses the request's content to extract file data List formItems = upload.parseRequest(request); Iterator iter = formItems.iterator(); // iterates over form's fields to get UUID Value while (iter.hasNext()) { FileItem item = (FileItem) iter.next(); if (item.isFormField()) { if (item.getFieldName().equalsIgnoreCase(UUID_STRING)) { uuidValue = item.getString(); } } // processes only fields that are not form fields if (!item.isFormField()) { itemFile = item; } } System.out.println("no of items: " + formItems.size()); System.out.println("FILE NAME IS : "+itemFile.getName()); } } I was able to print no of file objects passed from UI which were correct. Thank you for your time guys..!! :)
{ "pile_set_name": "StackExchange" }
Q: Access SmartSheet API behind corporate firewall .Net C# I have just started development to update a Smartsheet document using the API. Using the example (csharp-read-write-sheet) in the SDK reference I can update the document as long as I am on an open internet connection; however, I cannot when I am connected to the company LAN, as it is reporting a proxy authentication issue. This is the code from the SDK string accessToken = ConfigurationManager.AppSettings["AccessToken"]; if (string.IsNullOrEmpty(accessToken)) accessToken = Environment.GetEnvironmentVariable("SMARTSHEET_ACCESS_TOKEN"); if (string.IsNullOrEmpty(accessToken)) throw new Exception("Must set API access token in App.conf file"); // Get sheet Id from App.config file string sheetIdString = ConfigurationManager.AppSettings["SheetId"]; long sheetId = long.Parse(sheetIdString); // Initialize client SmartsheetClient ss = new SmartsheetBuilder().SetAccessToken(accessToken).Build(); // Load the entire sheet Sheet sheet = ss.SheetResources.GetSheet(sheetId, null, null, null, null, null, null, null); Console.WriteLine("Loaded " + sheet.Rows.Count + " rows from sheet: " + sheet.Name); Can you please advise how I can configure the API to provide a System.Net.WebProxy object to the client API so that authentication is routed through the company proxy? A: @Steve Weil's answer does not allow you to provide user credentials.... Further research based on it, though, led me to Is it possible to specify proxy credentials in your web.config? which has now solved my issues
{ "pile_set_name": "StackExchange" }
Kenny Sebastian Bio, Profile | Contact details (Phone number, Email Id, Website, Address Details)- Kenny Sebastian is a comedian, filmmaker, and musician who has toured around the world as a stand-up comedian. Kenny has also released an acoustic album, “Balance”. In 2014, he wrote and hosted the entire season of the sketch comedy “The Living Room”, which aired on the channel Comedy Central. He owns a production studio called “Superhuman studioz”. Kenny was featured as the number one Indian to watch out for in 2016 on “Buzzfeed India.” Kenny has been touring all over India performing a comedy show named “Don’t be that guy” since January 2017. Here, we are showing you all possible ways to contact him.
{ "pile_set_name": "Pile-CC" }
Malamute got into run Our Alaskan Malamute broke into the run today by pulling the wire away from the wood it was attached to and got one of our young 14-week-old pullets, and my daughters are very upset. Is there anything I can do to prevent him from going back for more? All I hear is that once they get one they don't stop going back for more. My husband is on a 10 day Mission in Guatemala and I am at a loss here! I like for my animals to live in harmony and teach my dogs what's off limits while rewarding good behavior of that sort. Like chasing rats, snakes, bears, coons, etc.. So when my Aussie/Heeler decided to take out the neighbor's chickens and bring them home as a gift... I tied a dead rotting chicken to his neck and left it on for a week. My old timer Grandpa told me to do this ( I was 16) and that dog NEVER attacked another chicken. PROBLEM was my mom was out of town on business when I did it and then brought her UBER RICH NY boss and her associates by the house on their layover. When they got out of the limo there was my incredibly rancid dog running around their legs with the flopping half rotted chicken brushing up against them. The entire yard and the house smelled like a corpse and I was grounded for weeks...
{ "pile_set_name": "Pile-CC" }
/* * Written by Dr Stephen N Henson ([email protected]) for the OpenSSL project * 2006. */ /* ==================================================================== * Copyright (c) 2006 The OpenSSL Project. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * 3. All advertising materials mentioning features or use of this * software must display the following acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit. (http://www.OpenSSL.org/)" * * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to * endorse or promote products derived from this software without * prior written permission. For written permission, please contact * [email protected]. * * 5. Products derived from this software may not be called "OpenSSL" * nor may "OpenSSL" appear in their names without prior written * permission of the OpenSSL Project. * * 6. Redistributions of any form whatsoever must retain the following * acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (http://www.OpenSSL.org/)" * * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * ==================================================================== * * This product includes cryptographic software written by Eric Young * ([email protected]). This product includes software written by Tim * Hudson ([email protected]). * */ #include <stdio.h> #include "cryptlib.h" #include <openssl/x509.h> #include <openssl/asn1.h> #include <openssl/dh.h> #include <openssl/bn.h> #include "asn1_locl.h" #ifndef OPENSSL_NO_CMS # include <openssl/cms.h> #endif extern const EVP_PKEY_ASN1_METHOD dhx_asn1_meth; /* * i2d/d2i like DH parameter functions which use the appropriate routine for * PKCS#3 DH or X9.42 DH. 
*/ static DH *d2i_dhp(const EVP_PKEY *pkey, const unsigned char **pp, long length) { if (pkey->ameth == &dhx_asn1_meth) return d2i_DHxparams(NULL, pp, length); return d2i_DHparams(NULL, pp, length); } static int i2d_dhp(const EVP_PKEY *pkey, const DH *a, unsigned char **pp) { if (pkey->ameth == &dhx_asn1_meth) return i2d_DHxparams(a, pp); return i2d_DHparams(a, pp); } static void int_dh_free(EVP_PKEY *pkey) { DH_free(pkey->pkey.dh); } static int dh_pub_decode(EVP_PKEY *pkey, X509_PUBKEY *pubkey) { const unsigned char *p, *pm; int pklen, pmlen; int ptype; void *pval; ASN1_STRING *pstr; X509_ALGOR *palg; ASN1_INTEGER *public_key = NULL; DH *dh = NULL; if (!X509_PUBKEY_get0_param(NULL, &p, &pklen, &palg, pubkey)) return 0; X509_ALGOR_get0(NULL, &ptype, &pval, palg); if (ptype != V_ASN1_SEQUENCE) { DHerr(DH_F_DH_PUB_DECODE, DH_R_PARAMETER_ENCODING_ERROR); goto err; } pstr = pval; pm = pstr->data; pmlen = pstr->length; if (!(dh = d2i_dhp(pkey, &pm, pmlen))) { DHerr(DH_F_DH_PUB_DECODE, DH_R_DECODE_ERROR); goto err; } if (!(public_key = d2i_ASN1_INTEGER(NULL, &p, pklen))) { DHerr(DH_F_DH_PUB_DECODE, DH_R_DECODE_ERROR); goto err; } /* We have parameters now set public key */ if (!(dh->pub_key = ASN1_INTEGER_to_BN(public_key, NULL))) { DHerr(DH_F_DH_PUB_DECODE, DH_R_BN_DECODE_ERROR); goto err; } ASN1_INTEGER_free(public_key); EVP_PKEY_assign(pkey, pkey->ameth->pkey_id, dh); return 1; err: if (public_key) ASN1_INTEGER_free(public_key); if (dh) DH_free(dh); return 0; } static int dh_pub_encode(X509_PUBKEY *pk, const EVP_PKEY *pkey) { DH *dh; int ptype; unsigned char *penc = NULL; int penclen; ASN1_STRING *str; ASN1_INTEGER *pub_key = NULL; dh = pkey->pkey.dh; str = ASN1_STRING_new(); if (!str) { DHerr(DH_F_DH_PUB_ENCODE, ERR_R_MALLOC_FAILURE); goto err; } str->length = i2d_dhp(pkey, dh, &str->data); if (str->length <= 0) { DHerr(DH_F_DH_PUB_ENCODE, ERR_R_MALLOC_FAILURE); goto err; } ptype = V_ASN1_SEQUENCE; pub_key = BN_to_ASN1_INTEGER(dh->pub_key, NULL); if (!pub_key) goto err; penclen = i2d_ASN1_INTEGER(pub_key, &penc); ASN1_INTEGER_free(pub_key); if (penclen <= 0) { DHerr(DH_F_DH_PUB_ENCODE, ERR_R_MALLOC_FAILURE); goto err; } if (X509_PUBKEY_set0_param(pk, OBJ_nid2obj(pkey->ameth->pkey_id), ptype, str, penc, penclen)) return 1; err: if (penc) OPENSSL_free(penc); if (str) ASN1_STRING_free(str); return 0; } /* * PKCS#8 DH is defined in PKCS#11 of all places. It is similar to DH in that * the AlgorithmIdentifier contains the paramaters, the private key is * explcitly included and the pubkey must be recalculated. 
*/ static int dh_priv_decode(EVP_PKEY *pkey, PKCS8_PRIV_KEY_INFO *p8) { const unsigned char *p, *pm; int pklen, pmlen; int ptype; void *pval; ASN1_STRING *pstr; X509_ALGOR *palg; ASN1_INTEGER *privkey = NULL; DH *dh = NULL; if (!PKCS8_pkey_get0(NULL, &p, &pklen, &palg, p8)) return 0; X509_ALGOR_get0(NULL, &ptype, &pval, palg); if (ptype != V_ASN1_SEQUENCE) goto decerr; if (!(privkey = d2i_ASN1_INTEGER(NULL, &p, pklen))) goto decerr; pstr = pval; pm = pstr->data; pmlen = pstr->length; if (!(dh = d2i_dhp(pkey, &pm, pmlen))) goto decerr; /* We have parameters now set private key */ if (!(dh->priv_key = ASN1_INTEGER_to_BN(privkey, NULL))) { DHerr(DH_F_DH_PRIV_DECODE, DH_R_BN_ERROR); goto dherr; } /* Calculate public key */ if (!DH_generate_key(dh)) goto dherr; EVP_PKEY_assign(pkey, pkey->ameth->pkey_id, dh); ASN1_STRING_clear_free(privkey); return 1; decerr: DHerr(DH_F_DH_PRIV_DECODE, EVP_R_DECODE_ERROR); dherr: DH_free(dh); ASN1_STRING_clear_free(privkey); return 0; } static int dh_priv_encode(PKCS8_PRIV_KEY_INFO *p8, const EVP_PKEY *pkey) { ASN1_STRING *params = NULL; ASN1_INTEGER *prkey = NULL; unsigned char *dp = NULL; int dplen; params = ASN1_STRING_new(); if (!params) { DHerr(DH_F_DH_PRIV_ENCODE, ERR_R_MALLOC_FAILURE); goto err; } params->length = i2d_dhp(pkey, pkey->pkey.dh, &params->data); if (params->length <= 0) { DHerr(DH_F_DH_PRIV_ENCODE, ERR_R_MALLOC_FAILURE); goto err; } params->type = V_ASN1_SEQUENCE; /* Get private key into integer */ prkey = BN_to_ASN1_INTEGER(pkey->pkey.dh->priv_key, NULL); if (!prkey) { DHerr(DH_F_DH_PRIV_ENCODE, DH_R_BN_ERROR); goto err; } dplen = i2d_ASN1_INTEGER(prkey, &dp); ASN1_STRING_clear_free(prkey); prkey = NULL; if (!PKCS8_pkey_set0(p8, OBJ_nid2obj(pkey->ameth->pkey_id), 0, V_ASN1_SEQUENCE, params, dp, dplen)) goto err; return 1; err: if (dp != NULL) OPENSSL_free(dp); if (params != NULL) ASN1_STRING_free(params); if (prkey != NULL) ASN1_STRING_clear_free(prkey); return 0; } static void update_buflen(const BIGNUM *b, size_t *pbuflen) { size_t i; if (!b) return; if (*pbuflen < (i = (size_t)BN_num_bytes(b))) *pbuflen = i; } static int dh_param_decode(EVP_PKEY *pkey, const unsigned char **pder, int derlen) { DH *dh; if (!(dh = d2i_dhp(pkey, pder, derlen))) { DHerr(DH_F_DH_PARAM_DECODE, ERR_R_DH_LIB); return 0; } EVP_PKEY_assign(pkey, pkey->ameth->pkey_id, dh); return 1; } static int dh_param_encode(const EVP_PKEY *pkey, unsigned char **pder) { return i2d_dhp(pkey, pkey->pkey.dh, pder); } static int do_dh_print(BIO *bp, const DH *x, int indent, ASN1_PCTX *ctx, int ptype) { unsigned char *m = NULL; int reason = ERR_R_BUF_LIB, ret = 0; size_t buf_len = 0; const char *ktype = NULL; BIGNUM *priv_key, *pub_key; if (ptype == 2) priv_key = x->priv_key; else priv_key = NULL; if (ptype > 0) pub_key = x->pub_key; else pub_key = NULL; update_buflen(x->p, &buf_len); if (buf_len == 0) { reason = ERR_R_PASSED_NULL_PARAMETER; goto err; } update_buflen(x->g, &buf_len); update_buflen(x->q, &buf_len); update_buflen(x->j, &buf_len); update_buflen(x->counter, &buf_len); update_buflen(pub_key, &buf_len); update_buflen(priv_key, &buf_len); if (ptype == 2) ktype = "DH Private-Key"; else if (ptype == 1) ktype = "DH Public-Key"; else ktype = "DH Parameters"; m = OPENSSL_malloc(buf_len + 10); if (m == NULL) { reason = ERR_R_MALLOC_FAILURE; goto err; } BIO_indent(bp, indent, 128); if (BIO_printf(bp, "%s: (%d bit)\n", ktype, BN_num_bits(x->p)) <= 0) goto err; indent += 4; if (!ASN1_bn_print(bp, "private-key:", priv_key, m, indent)) goto err; if (!ASN1_bn_print(bp, "public-key:", 
pub_key, m, indent)) goto err; if (!ASN1_bn_print(bp, "prime:", x->p, m, indent)) goto err; if (!ASN1_bn_print(bp, "generator:", x->g, m, indent)) goto err; if (x->q && !ASN1_bn_print(bp, "subgroup order:", x->q, m, indent)) goto err; if (x->j && !ASN1_bn_print(bp, "subgroup factor:", x->j, m, indent)) goto err; if (x->seed) { int i; BIO_indent(bp, indent, 128); BIO_puts(bp, "seed:"); for (i = 0; i < x->seedlen; i++) { if ((i % 15) == 0) { if (BIO_puts(bp, "\n") <= 0 || !BIO_indent(bp, indent + 4, 128)) goto err; } if (BIO_printf(bp, "%02x%s", x->seed[i], ((i + 1) == x->seedlen) ? "" : ":") <= 0) goto err; } if (BIO_write(bp, "\n", 1) <= 0) return (0); } if (x->counter && !ASN1_bn_print(bp, "counter:", x->counter, m, indent)) goto err; if (x->length != 0) { BIO_indent(bp, indent, 128); if (BIO_printf(bp, "recommended-private-length: %d bits\n", (int)x->length) <= 0) goto err; } ret = 1; if (0) { err: DHerr(DH_F_DO_DH_PRINT, reason); } if (m != NULL) OPENSSL_free(m); return (ret); } static int int_dh_size(const EVP_PKEY *pkey) { return (DH_size(pkey->pkey.dh)); } static int dh_bits(const EVP_PKEY *pkey) { return BN_num_bits(pkey->pkey.dh->p); } static int dh_cmp_parameters(const EVP_PKEY *a, const EVP_PKEY *b) { if (BN_cmp(a->pkey.dh->p, b->pkey.dh->p) || BN_cmp(a->pkey.dh->g, b->pkey.dh->g)) return 0; else if (a->ameth == &dhx_asn1_meth) { if (BN_cmp(a->pkey.dh->q, b->pkey.dh->q)) return 0; } return 1; } static int int_dh_bn_cpy(BIGNUM **dst, const BIGNUM *src) { BIGNUM *a; if (src) { a = BN_dup(src); if (!a) return 0; } else a = NULL; if (*dst) BN_free(*dst); *dst = a; return 1; } static int int_dh_param_copy(DH *to, const DH *from, int is_x942) { if (is_x942 == -1) is_x942 = ! !from->q; if (!int_dh_bn_cpy(&to->p, from->p)) return 0; if (!int_dh_bn_cpy(&to->g, from->g)) return 0; if (is_x942) { if (!int_dh_bn_cpy(&to->q, from->q)) return 0; if (!int_dh_bn_cpy(&to->j, from->j)) return 0; if (to->seed) { OPENSSL_free(to->seed); to->seed = NULL; to->seedlen = 0; } if (from->seed) { to->seed = BUF_memdup(from->seed, from->seedlen); if (!to->seed) return 0; to->seedlen = from->seedlen; } } else to->length = from->length; return 1; } DH *DHparams_dup(DH *dh) { DH *ret; ret = DH_new(); if (!ret) return NULL; if (!int_dh_param_copy(ret, dh, -1)) { DH_free(ret); return NULL; } return ret; } static int dh_copy_parameters(EVP_PKEY *to, const EVP_PKEY *from) { return int_dh_param_copy(to->pkey.dh, from->pkey.dh, from->ameth == &dhx_asn1_meth); } static int dh_missing_parameters(const EVP_PKEY *a) { if (a->pkey.dh == NULL || a->pkey.dh->p == NULL || a->pkey.dh->g == NULL) return 1; return 0; } static int dh_pub_cmp(const EVP_PKEY *a, const EVP_PKEY *b) { if (dh_cmp_parameters(a, b) == 0) return 0; if (BN_cmp(b->pkey.dh->pub_key, a->pkey.dh->pub_key) != 0) return 0; else return 1; } static int dh_param_print(BIO *bp, const EVP_PKEY *pkey, int indent, ASN1_PCTX *ctx) { return do_dh_print(bp, pkey->pkey.dh, indent, ctx, 0); } static int dh_public_print(BIO *bp, const EVP_PKEY *pkey, int indent, ASN1_PCTX *ctx) { return do_dh_print(bp, pkey->pkey.dh, indent, ctx, 1); } static int dh_private_print(BIO *bp, const EVP_PKEY *pkey, int indent, ASN1_PCTX *ctx) { return do_dh_print(bp, pkey->pkey.dh, indent, ctx, 2); } int DHparams_print(BIO *bp, const DH *x) { return do_dh_print(bp, x, 4, NULL, 0); } #ifndef OPENSSL_NO_CMS static int dh_cms_decrypt(CMS_RecipientInfo *ri); static int dh_cms_encrypt(CMS_RecipientInfo *ri); #endif static int dh_pkey_ctrl(EVP_PKEY *pkey, int op, long arg1, void *arg2) { switch (op) 
{ #ifndef OPENSSL_NO_CMS case ASN1_PKEY_CTRL_CMS_ENVELOPE: if (arg1 == 1) return dh_cms_decrypt(arg2); else if (arg1 == 0) return dh_cms_encrypt(arg2); return -2; case ASN1_PKEY_CTRL_CMS_RI_TYPE: *(int *)arg2 = CMS_RECIPINFO_AGREE; return 1; #endif default: return -2; } } const EVP_PKEY_ASN1_METHOD dh_asn1_meth = { EVP_PKEY_DH, EVP_PKEY_DH, 0, "DH", "OpenSSL PKCS#3 DH method", dh_pub_decode, dh_pub_encode, dh_pub_cmp, dh_public_print, dh_priv_decode, dh_priv_encode, dh_private_print, int_dh_size, dh_bits, dh_param_decode, dh_param_encode, dh_missing_parameters, dh_copy_parameters, dh_cmp_parameters, dh_param_print, 0, int_dh_free, 0 }; const EVP_PKEY_ASN1_METHOD dhx_asn1_meth = { EVP_PKEY_DHX, EVP_PKEY_DHX, 0, "X9.42 DH", "OpenSSL X9.42 DH method", dh_pub_decode, dh_pub_encode, dh_pub_cmp, dh_public_print, dh_priv_decode, dh_priv_encode, dh_private_print, int_dh_size, dh_bits, dh_param_decode, dh_param_encode, dh_missing_parameters, dh_copy_parameters, dh_cmp_parameters, dh_param_print, 0, int_dh_free, dh_pkey_ctrl }; #ifndef OPENSSL_NO_CMS static int dh_cms_set_peerkey(EVP_PKEY_CTX *pctx, X509_ALGOR *alg, ASN1_BIT_STRING *pubkey) { ASN1_OBJECT *aoid; int atype; void *aval; ASN1_INTEGER *public_key = NULL; int rv = 0; EVP_PKEY *pkpeer = NULL, *pk = NULL; DH *dhpeer = NULL; const unsigned char *p; int plen; X509_ALGOR_get0(&aoid, &atype, &aval, alg); if (OBJ_obj2nid(aoid) != NID_dhpublicnumber) goto err; /* Only absent parameters allowed in RFC XXXX */ if (atype != V_ASN1_UNDEF && atype == V_ASN1_NULL) goto err; pk = EVP_PKEY_CTX_get0_pkey(pctx); if (!pk) goto err; if (pk->type != EVP_PKEY_DHX) goto err; /* Get parameters from parent key */ dhpeer = DHparams_dup(pk->pkey.dh); /* We have parameters now set public key */ plen = ASN1_STRING_length(pubkey); p = ASN1_STRING_data(pubkey); if (!p || !plen) goto err; if (!(public_key = d2i_ASN1_INTEGER(NULL, &p, plen))) { DHerr(DH_F_DH_CMS_SET_PEERKEY, DH_R_DECODE_ERROR); goto err; } /* We have parameters now set public key */ if (!(dhpeer->pub_key = ASN1_INTEGER_to_BN(public_key, NULL))) { DHerr(DH_F_DH_CMS_SET_PEERKEY, DH_R_BN_DECODE_ERROR); goto err; } pkpeer = EVP_PKEY_new(); if (!pkpeer) goto err; EVP_PKEY_assign(pkpeer, pk->ameth->pkey_id, dhpeer); dhpeer = NULL; if (EVP_PKEY_derive_set_peer(pctx, pkpeer) > 0) rv = 1; err: if (public_key) ASN1_INTEGER_free(public_key); if (pkpeer) EVP_PKEY_free(pkpeer); if (dhpeer) DH_free(dhpeer); return rv; } static int dh_cms_set_shared_info(EVP_PKEY_CTX *pctx, CMS_RecipientInfo *ri) { int rv = 0; X509_ALGOR *alg, *kekalg = NULL; ASN1_OCTET_STRING *ukm; const unsigned char *p; unsigned char *dukm = NULL; size_t dukmlen = 0; int keylen, plen; const EVP_CIPHER *kekcipher; EVP_CIPHER_CTX *kekctx; if (!CMS_RecipientInfo_kari_get0_alg(ri, &alg, &ukm)) goto err; /* * For DH we only have one OID permissible. If ever any more get defined * we will need something cleverer. 
*/ if (OBJ_obj2nid(alg->algorithm) != NID_id_smime_alg_ESDH) { DHerr(DH_F_DH_CMS_SET_SHARED_INFO, DH_R_KDF_PARAMETER_ERROR); goto err; } if (EVP_PKEY_CTX_set_dh_kdf_type(pctx, EVP_PKEY_DH_KDF_X9_42) <= 0) goto err; if (EVP_PKEY_CTX_set_dh_kdf_md(pctx, EVP_sha1()) <= 0) goto err; if (alg->parameter->type != V_ASN1_SEQUENCE) goto err; p = alg->parameter->value.sequence->data; plen = alg->parameter->value.sequence->length; kekalg = d2i_X509_ALGOR(NULL, &p, plen); if (!kekalg) goto err; kekctx = CMS_RecipientInfo_kari_get0_ctx(ri); if (!kekctx) goto err; kekcipher = EVP_get_cipherbyobj(kekalg->algorithm); if (!kekcipher || EVP_CIPHER_mode(kekcipher) != EVP_CIPH_WRAP_MODE) goto err; if (!EVP_EncryptInit_ex(kekctx, kekcipher, NULL, NULL, NULL)) goto err; if (EVP_CIPHER_asn1_to_param(kekctx, kekalg->parameter) <= 0) goto err; keylen = EVP_CIPHER_CTX_key_length(kekctx); if (EVP_PKEY_CTX_set_dh_kdf_outlen(pctx, keylen) <= 0) goto err; /* Use OBJ_nid2obj to ensure we use built in OID that isn't freed */ if (EVP_PKEY_CTX_set0_dh_kdf_oid(pctx, OBJ_nid2obj(EVP_CIPHER_type(kekcipher))) <= 0) goto err; if (ukm) { dukmlen = ASN1_STRING_length(ukm); dukm = BUF_memdup(ASN1_STRING_data(ukm), dukmlen); if (!dukm) goto err; } if (EVP_PKEY_CTX_set0_dh_kdf_ukm(pctx, dukm, dukmlen) <= 0) goto err; dukm = NULL; rv = 1; err: if (kekalg) X509_ALGOR_free(kekalg); if (dukm) OPENSSL_free(dukm); return rv; } static int dh_cms_decrypt(CMS_RecipientInfo *ri) { EVP_PKEY_CTX *pctx; pctx = CMS_RecipientInfo_get0_pkey_ctx(ri); if (!pctx) return 0; /* See if we need to set peer key */ if (!EVP_PKEY_CTX_get0_peerkey(pctx)) { X509_ALGOR *alg; ASN1_BIT_STRING *pubkey; if (!CMS_RecipientInfo_kari_get0_orig_id(ri, &alg, &pubkey, NULL, NULL, NULL)) return 0; if (!alg || !pubkey) return 0; if (!dh_cms_set_peerkey(pctx, alg, pubkey)) { DHerr(DH_F_DH_CMS_DECRYPT, DH_R_PEER_KEY_ERROR); return 0; } } /* Set DH derivation parameters and initialise unwrap context */ if (!dh_cms_set_shared_info(pctx, ri)) { DHerr(DH_F_DH_CMS_DECRYPT, DH_R_SHARED_INFO_ERROR); return 0; } return 1; } static int dh_cms_encrypt(CMS_RecipientInfo *ri) { EVP_PKEY_CTX *pctx; EVP_PKEY *pkey; EVP_CIPHER_CTX *ctx; int keylen; X509_ALGOR *talg, *wrap_alg = NULL; ASN1_OBJECT *aoid; ASN1_BIT_STRING *pubkey; ASN1_STRING *wrap_str; ASN1_OCTET_STRING *ukm; unsigned char *penc = NULL, *dukm = NULL; int penclen; size_t dukmlen = 0; int rv = 0; int kdf_type, wrap_nid; const EVP_MD *kdf_md; pctx = CMS_RecipientInfo_get0_pkey_ctx(ri); if (!pctx) return 0; /* Get ephemeral key */ pkey = EVP_PKEY_CTX_get0_pkey(pctx); if (!CMS_RecipientInfo_kari_get0_orig_id(ri, &talg, &pubkey, NULL, NULL, NULL)) goto err; X509_ALGOR_get0(&aoid, NULL, NULL, talg); /* Is everything uninitialised? 
*/ if (aoid == OBJ_nid2obj(NID_undef)) { ASN1_INTEGER *pubk; pubk = BN_to_ASN1_INTEGER(pkey->pkey.dh->pub_key, NULL); if (!pubk) goto err; /* Set the key */ penclen = i2d_ASN1_INTEGER(pubk, &penc); ASN1_INTEGER_free(pubk); if (penclen <= 0) goto err; ASN1_STRING_set0(pubkey, penc, penclen); pubkey->flags &= ~(ASN1_STRING_FLAG_BITS_LEFT | 0x07); pubkey->flags |= ASN1_STRING_FLAG_BITS_LEFT; penc = NULL; X509_ALGOR_set0(talg, OBJ_nid2obj(NID_dhpublicnumber), V_ASN1_UNDEF, NULL); } /* See if custom paraneters set */ kdf_type = EVP_PKEY_CTX_get_dh_kdf_type(pctx); if (kdf_type <= 0) goto err; if (!EVP_PKEY_CTX_get_dh_kdf_md(pctx, &kdf_md)) goto err; if (kdf_type == EVP_PKEY_DH_KDF_NONE) { kdf_type = EVP_PKEY_DH_KDF_X9_42; if (EVP_PKEY_CTX_set_dh_kdf_type(pctx, kdf_type) <= 0) goto err; } else if (kdf_type != EVP_PKEY_DH_KDF_X9_42) /* Unknown KDF */ goto err; if (kdf_md == NULL) { /* Only SHA1 supported */ kdf_md = EVP_sha1(); if (EVP_PKEY_CTX_set_dh_kdf_md(pctx, kdf_md) <= 0) goto err; } else if (EVP_MD_type(kdf_md) != NID_sha1) /* Unsupported digest */ goto err; if (!CMS_RecipientInfo_kari_get0_alg(ri, &talg, &ukm)) goto err; /* Get wrap NID */ ctx = CMS_RecipientInfo_kari_get0_ctx(ri); wrap_nid = EVP_CIPHER_CTX_type(ctx); if (EVP_PKEY_CTX_set0_dh_kdf_oid(pctx, OBJ_nid2obj(wrap_nid)) <= 0) goto err; keylen = EVP_CIPHER_CTX_key_length(ctx); /* Package wrap algorithm in an AlgorithmIdentifier */ wrap_alg = X509_ALGOR_new(); if (!wrap_alg) goto err; wrap_alg->algorithm = OBJ_nid2obj(wrap_nid); wrap_alg->parameter = ASN1_TYPE_new(); if (!wrap_alg->parameter) goto err; if (EVP_CIPHER_param_to_asn1(ctx, wrap_alg->parameter) <= 0) goto err; if (ASN1_TYPE_get(wrap_alg->parameter) == NID_undef) { ASN1_TYPE_free(wrap_alg->parameter); wrap_alg->parameter = NULL; } if (EVP_PKEY_CTX_set_dh_kdf_outlen(pctx, keylen) <= 0) goto err; if (ukm) { dukmlen = ASN1_STRING_length(ukm); dukm = BUF_memdup(ASN1_STRING_data(ukm), dukmlen); if (!dukm) goto err; } if (EVP_PKEY_CTX_set0_dh_kdf_ukm(pctx, dukm, dukmlen) <= 0) goto err; dukm = NULL; /* * Now need to wrap encoding of wrap AlgorithmIdentifier into parameter * of another AlgorithmIdentifier. */ penc = NULL; penclen = i2d_X509_ALGOR(wrap_alg, &penc); if (!penc || !penclen) goto err; wrap_str = ASN1_STRING_new(); if (!wrap_str) goto err; ASN1_STRING_set0(wrap_str, penc, penclen); penc = NULL; X509_ALGOR_set0(talg, OBJ_nid2obj(NID_id_smime_alg_ESDH), V_ASN1_SEQUENCE, wrap_str); rv = 1; err: if (penc) OPENSSL_free(penc); if (wrap_alg) X509_ALGOR_free(wrap_alg); return rv; } #endif
{ "pile_set_name": "Github" }
Q: using two classes in xml gives unbound prefix error I want to use two classes in XML <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" xmlns:wave="http://schemas.android.com/apk/res-auto" android:layout_width="fill_parent" android:layout_height="fill_parent"> <com.example.tesst.MaskableFrameLayout android:id="@+id/frm_mask_animated" android:layout_width="100dp" app:porterduffxfermode="DST_IN" app:mask="@drawable/animation_mask" android:layout_height="100dp"> <com.john.waveview.WaveView android:id="@+id/wave_view" android:layout_width="300dp" android:layout_height="300dp" wave:above_wave_color="@android:color/white" wave:blow_wave_color="@android:color/white" wave:progress="80" android:layout_gravity="center" wave:wave_height="little" wave:wave_hz="normal" wave:wave_length="middle" /> </com.example.tesst.MaskableFrameLayout> </FrameLayout> What is wrong with it? The error Error parsing XML: unbound prefix shows up! I do not know what the problem is. Help please A: You're using the "app:" prefix, and that's not defined. Change <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" xmlns:wave="http://schemas.android.com/apk/res-auto" To <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" xmlns:wave="http://schemas.android.com/apk/res-auto" xmlns:app="http://schemas.android.com/apk/res-auto"
{ "pile_set_name": "StackExchange" }
NCAA Division II Wrestling Championships The NCAA Division II Wrestling Championships for individuals and teams were first officially sponsored in 1963 and have since been held annually. The NCAA Division II Wrestling Championships is a double-elimination tournament for individuals competing in ten weight classes. Sixteen wrestlers in each class qualify through four "Super Regional" tournaments. During the championships, individual match winners earn points based on the level and quality of the victory, which are totaled to determine the team championship standings. In addition to determining the national championship, the NCAA Division II Wrestling Championships also determine the Division II All-America team. The top eight finishers in each weight class qualify for Division II All-American status. Team champions Prior to 1963, only a single national championship was held for all members of the NCAA; Division II competition began in 1963, with Division III following in 1974. Names used are those current in the years listed. Note: Shaded scores = Closest margin of victory, point in 1979 & widest margin of victory, 88 points in 1982. Team titles Source A = known as Cal State Poly College & Cal Poly State University B = known as Cal State College, Bakersfield & CSU Bakersfield C = known as Central State College (OK) & Central Oklahoma D = known as Portland State College & Portland State U. E = was Western State College of Colorado F = known as State College of Iowa & UNI G = known as Mankato State College, Mankato State University & Minnesota State–Mankato Team title winning streaks (more than 2) Source Division II wrestlers to Division I championships Sources Through 1989, the Division II finalists advanced to the Division I championships, held the following week, where many athletes earned All-American recognition in two divisions during the same season. This practice was discontinued after Carlton Haselrig of the University of Pittsburgh at Johnstown won the Division II heavyweight title and advanced to Division I, where he also won the heavyweight title three years in a row, 1987–89. Former Division II team champions now in Division I Source Notes See also NCAA Division I Wrestling Championships NCAA Division III Wrestling Championships NAIA national wrestling championship Pre-NCAA Wrestling Champion U Sports (Canada) Intercollegiate women's wrestling champions References External links NCAA Division II wrestling Category:NCAA Wrestling Championship Championship Wrestling Category:Recurring sporting events established in 1963 Category:1963 establishments in Iowa
{ "pile_set_name": "Wikipedia (en)" }
--- abstract: 'We show that the deformation space of complex parallelisable nilmanifolds can be described by polynomial equations but is almost never smooth. This is remarkable since these manifolds have trivial canonical bundle and are holomorphic symplectic in even dimension. We describe the Kuranishi space in detail in several examples and also analyse when small deformations remain complex parallelisable' address: | Sönke Rollenske\ Department of Mathematics\ Imperial College London\ SW7 2AZ London\ United Kingdom author: - Sönke Rollenske title: 'The Kuranishi-space of complex parallelisable nilmanifolds' --- Introduction ============ Left-invariant geometric structures on nilmanifolds, i.e., compact quotients of (real) nilpotent Lie groups, have proved to be both very rich and accessible for an in depth study. Thus many examples and counter-examples in (complex) differential geometry are of this type. In this paper we are concerned with deformations of complex structures for complex parallelisable nilmanifolds, which are the compact quotient of complex nilpotent Lie groups. The study of deformations of complex structures on compact complex manifolds has been an important topic since it was first developed by Kodaira and Spencer in [@Kod-sp58]. A deformation of a given compact complex manifold $X$ is a flat proper map $\pi:\kx\to \kb$ of (connected) complex spaces such that all the fibres are smooth manifolds together with an isomorphism with $X\isom \ky_0=\inverse\pi(0)$ for a point $0\in \kb$. If $\kb$ is a smooth $\pi$ is just a holomorphic submersion. Kodaira and Spencer showed that first order deformations correspond to elements in $H^1(X, \Theta_X)$ where $\Theta_X$ is the sheaf of holomorphic tangent vectors. A key result is now the theorem of Kuranishi which, for a given compact complex manifold $X$, guarantees the existence of a locally complete space of deformations $\kx\to{\mathrm{Kur}}(X)$ which is versal at the point corresponding to $X$. In other word, for every deformation $\ky\to \kb$ of $X$ there is a small neighbourhood $\ku$ of $0$ in $\kb$ yielding a diagram $$\xymatrix{ \ky\restr{\ku}\isom f^*\kx \ar[d]\ar[r] & \kx \ar[d]\\ \ku\ar[r]^f&{\mathrm{Kur}}(X),}$$ and in addition the differential of $f$ at $0$ is unique. The Kuranishi family ${\mathrm{Kur}}(X)$ hence parametrises all sufficiently small deformations of $X$. In general the map $f$ will not be unique which is roughly due to the existence of automorphisms. Another point of view, which we will mainly adopt in this paper, is the following: consider $X$ as a differentiable manifold together with an integrable almost complex structure $(M,J)$, i.e., $J:TM\to TM$, $J^2=-\id_{TM}$ and the Nijenhuis integrability condition holds (see [[(\[nijenhuis\])]{}]{} below). A deformation of $X$ can be viewed as a family of such complex structures $J_t$ depending on some parameter $t\in \kb$ with $J=J_0$. The construction of the Kuranishi space can then be made explicit after the choice of a hermitian metric on $M$. We will go through this construction for our special case of complex parallelisable nilmanifolds in Section \[kurani\]. In general the Kuranishi space can be arbitrarily bad but we can hope for better control over the deformations if we restrict our class of manifolds. If, for example, $X$ is Kähler and has trivial canonical bundle, i.e., $X$ is a Calabi-Yau manifold, then the Tian-Todorov Lemma implies that the Kuranishi space is indeed smooth; we say that $X$ has unobstructed deformations. 
These manifolds are very important both in physics and in mathematics for example in the context of mirror symmetry. This result fails if we drop the Kähler condition [@ghys95]. The only nilmanifolds which can carry a Kähler structure are tori but it was proved by Cavalcanti and Gualtieri [@caval-gual04] and, independently, by Babaris, Dotti and Verbitsky [@0712.3863v3] that nilmanifolds with left-invariant complex structure always have trivial canonical bundle. In addition, all known examples in the context of left-invariant complex structures on nilmanifolds, e.g., complex tori, the Iwasawa manifold [@nakamura75], Kodaira surfaces [@borcea84], abelian complex structures [@con-fin-poon06; @mpps06] (see Section \[definitions\] for a definition), had unobstructed deformations. Therefore it was speculated if this holds for all left-invariant complex structures. This was supported by results on weak homological mirror symmetry for nilmanifolds [@poon06; @0708.3442v2]. On the other hand Catanese and Frediani observed in their study of deformations of principal holomorphic torus bundles, which are in particular nilmanifolds with left-invariant complex structure, that the Kuranishi space can be singular [@cat-fred06]. In this article we want to study the Kuranishi space of complex parallelisable nilmanifolds. These were very intensively studied by Winkelmann [@winkelmann98] and they enjoy many interesting properties. We will only be concerned with their deformations. We will show in particular, that the Kuranishi space of a complex parallelisable nilmanifold is *almost always* singular thus showing that no analog of the Tian-Todorv theorem can exist for nilmanifolds. Nevertheless, the Kuranishi space can not become too ugly: If $X=\Gamma\backslash G$ is a complex parallelisable nilmanifold and $G$ is $\nu$-step nilpotent, then ${\mathrm{Kur}}(X)$ is cut out by polynomial equations of degree at most $\nu$. In Section \[nonexam\] we will give an example that the bound on the degree does not remain valid for general nilmanifolds but, as far as we know, there could be a larger bound depending on the step-length and the dimension only. We believe that the Lie-algebra ${\ensuremath{\gothg}}$ of $G$ cannot be too far from being free if the Kuranishi space is smooth and all examples that we found were actually free. Unfortunately, the analysis of the obstructions of higher order becomes very complicated but we can at least prove the following: Let $X=\Gamma\backslash G$ be a complex parallelisable nilmanifold and let ${\ensuremath{\gothg}}$ be the Lie-algebra of $G$. If ${\ensuremath{\gothg}}\slash [{\ensuremath{\gothg}},[{\ensuremath{\gothg}},{\ensuremath{\gothg}}]]$ is not isomorphic to a free 2-step nilpotent Lie-algebra then there is a non-vanishing obstruction in degree 2 and the Kuranishi space is singular. In particular, if ${\ensuremath{\gothg}}$ is 2-step nilpotent then ${\mathrm{Kur}}(X)$ is smooth if and only if ${\ensuremath{\gothg}}$ is a free 2-step nilpotent Lie-algebra. It is a natural question which infinitesimal deformations in $H^1(X,\Theta_X)$ integrate to a 1-parameter family of complex parallelisable complex structure and we show in Section \[remainparall\] that this is the case if and only if they are infinitesimally complex parallelisable. The same results holds for abelian complex structures [@con-fin-poon06]. 
From this we can also deduce that every complex parallelisable nilmanifold which is not a torus has small deformations which are no longer complex parallelisable (Corollary \[nondef\]). On the other hand it is known that small deformations at least remain in the category of nilmanifolds with left-invariant complex structure (see Section \[kurani\] or [@rollenske07b]). In Section \[examples\] we will give several explicit examples, mostly in small dimension. As far as we know, these are the first examples of compact complex manifolds with trivial canonical bundle (or even holomorphic symplectic structure) which have non-reduced Kuranishi-space. ### Acknowledgements {#acknowledgements .unnumbered} This research was carried out at Imperial College London supported by a DFG Forschungsstipendium and I would like to thank the Geometry group there for their hospitality. Fabrizio Catanese, Fritz Grunewald, Andrey Todorv and Jörg Winkelmann made several useful comments during a talk at the University of Bayreuth. Complex parallelisable nilmanifolds and nilmanifolds with left-invariant complex structure {#definitions} ========================================================================================== Let $G$ be a simply connected, complex, nilpotent Lie-group with Lie-algebra ${\ensuremath{\gothg}}$ and $\Gamma\subset G$ a lattice, i.e., a discrete cocompact subgroup. By a theorem of Mal’cev [@malcev51] such a lattice exists if and only if the real Lie-algebra underlying ${\ensuremath{\gothg}}$ can be defined over $\IQ$. The most important invariant attached to a nilpotent Lie-algebra (or Lie-group) is its nilpotency index, also called step length. It is defined as follows: consider the descending central series, inductively defined by $$\kc_0{\ensuremath{\gothg}}:={\ensuremath{\gothg}}, \qquad \kc_{k+1}{\ensuremath{\gothg}}=[\kc_k{\ensuremath{\gothg}},{\ensuremath{\gothg}}].$$ Then ${\ensuremath{\gothg}}$ is nilpotent if and only if there exists a $\nu$ such that $\kc^\nu{\ensuremath{\gothg}}=0$. The smallest such $\nu$ is called the nilpotency index. Since the multiplication in $G$ is holomorphic we can act with elements of $\Gamma$ on the left; the quotient $X:=\Gamma\backslash G$ is a complex parallelisable compact nilmanifold. The nilpotent complex Lie-group $G$ acts transitively on $X$ by multiplication on the right and this is in fact an equivalent characterisation of $\IC$-parallelisable nilmanifolds [@wang54]. As already remarked by Nakamura [@nakamura75] not all deformations of $\IC$-parallelisable nilmanifolds are again $\IC$-parallelisable but, as we will discuss in section \[kurani\], we can describe all deformations in the slightly more general framework of nilmanifolds with left-invariant complex structures which we will now explain. Let $H$ be a simply connected, real, nilpotent Lie-group with Lie-algebra ${\gothh}$ and containing a lattice $\Gamma$. Taking the quotient yields a real nilmanifold $M:=\Gamma\backslash H$. An almost complex structure $J: {\gothh}\to {\gothh}$ defines an almost complex structure on $H$ by left-translation and this almost complex structure is integrable if and only if the Nijenhuis condition $$\label{nijenhuis} [x,y]-[Jx,Jy]+J[Jx,y]+J[x,Jy]=0$$ holds for all $x,y\in {\gothh}$. In this case we call the pair $({\gothh}, J)$ a Lie-algebra with complex structure. The action of $\Gamma$ on the left is then holomorphic and we get an induced complex structure on $M$. We call $(M,J)$ a nilmanifold with left-invariant complex structure. 
Note that the multiplication in $H$ induces an action on the left on $M$ if and only if $\Gamma$ is normal if and only if $H=\IR^n$ is abelian; there is always an action on the right which is holomorphic if and only if $(H,J)$ is a complex Lie-group. By abuse of notation we will call a tensor, e.g., a vector field, differential form or metric, on $M$ left-invariant if its pullback to the universal cover $H$ is left-invariant. The complexified Lie-algebra ${\gothh}_\IC={\gothh}\tensor_\IR\IC$ decomposes as $${\gothh}_\IC={{{{\gothh}}^{1,0}}}\oplus{{{{\gothh}}^{0,1}}}$$ where ${{{{\gothh}}^{1,0}}}$ is the $i$-eigenspace of $J$ and ${{{{\gothh}}^{0,1}}}=\overline{{{{{\gothh}}^{1,0}}}}$ is the $(-i)$-eigenspace. It is not hard to see that the complex structure is integrable if and only if ${{{{\gothh}}^{1,0}}}$ is a (complex) Lie-subalgebra of ${\gothh}_\IC$. The complex structure $J$ makes $({\gothh},J)$ into a complex Lie-algebra if and only if the bracket is $J$-linear, i.e., for all $x,y\in {\gothh}$ we have $$\label{Clie} [Jx,y]=J[x,y].$$ In this case $H$ is a complex Lie-group and $(M,J)$ is $\IC$-parallelisable as above. The following equivalent characterisation is also well known. \[parallchar\] A Lie-algebra with complex structure $({\gothh},J)$ is a complex Lie-algebra if and only if $[{{{{\gothh}}^{1,0}}}, {{{{\gothh}}^{0,1}}}]=0$. In this case the canonical projection $$\pi: ({\gothh},J)\to {{{{\gothh}}^{1,0}}}, \qquad z\mapsto \frac{1}{2}(z-iJz)$$ is an isomorphism of complex Lie algebras. Let $x,y\in {\gothh}$ and consider $X:=\frac{1}{2}(x-iJx)\in {{{{\gothh}}^{1,0}}}$ and $\bar Y:=\frac{1}{2}(y+iJy)\in {{{{\gothh}}^{0,1}}}$. Then $$\begin{aligned} [X,\bar Y]&= \frac{1}{4}[x-iJx, y+iJy]\\&= \frac{1}{4}([x,y]-i^2[Jx,Jy]-i([Jx,y]-[x,Jy])\\&=\frac{1}{4}([x,y]+[Jx,Jy])-i([Jx,y]-[x,Jy])\end{aligned}$$ and we see that this vanishes if and only if $$[x,y]=-[Jx,Jy] \text{ and } [Jx,y]=[x,Jy].$$ If we combine these two equations with the Nijenhuis tensor [[(\[nijenhuis\])]{}]{} then we get the identity $-2[x,y]=2J[Jx,y]$ which becomes [[(\[Clie\])]{}]{} after applying $J$ to it and dividing by $-2$. On the other hand the equations are certainly fulfilled if [[(\[Clie\])]{}]{} holds and we have shown the claimed equivalence. The second claim is proved by a similar computation: since $\pi$ is an isomorphism of complex vector spaces it remains to show that $\pi$ is a homomorphism of Lie-algebras. Indeed for $x,y\in{\gothh}$ we have using [[(\[Clie\])]{}]{} $$[\pi(x), \pi(y)]= \frac{1}{4}[x-iJx, y-iJy]= \frac{1}{4}([x,y]+i^2[Jx,Jy]-2iJ[x,y])=\pi([x,y]).$$ \[notation\] In order to make our notation more transparent ${\gothh}$, $H$ and $M$ will always denote a real Lie-algebra, Lie-group or nilmanifold, often equipped with a (left-invariant) complex structure $J$. We will only consider integrable complex structures. The notations ${\ensuremath{\gothg}}$, $G$ and $X$ will be reserved for their complex parallelisable counterparts. If we need to access the underlying real object with left-invariant complex structure we will write for example ${\ensuremath{\gothg}}=({\gothh},J)$. By the above Lemma we can then identify $${\ensuremath{\gothg}}_\IC={\gothh}_\IC={\ensuremath{\gothg}}\oplus \bar {\ensuremath{\gothg}}$$ where the bracket on $\bar {\ensuremath{\gothg}}$ is given by $[\bar x, \bar y]=\overline{[x,y]}$ and $[{\ensuremath{\gothg}}, \bar{\ensuremath{\gothg}}]=[{{{{\gothh}}^{1,0}}}, {{{{\gothh}}^{0,1}}}]=0$. 
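To fix ideas we sketch the simplest non-abelian example; the basis notation is chosen here only for illustration. Let ${\ensuremath{\gothg}}$ be the complex Heisenberg algebra, the Lie-algebra of the group $G$ of upper triangular complex $3\times 3$ matrices with $1$'s on the diagonal. It has a basis $X_1, X_2, X_3$ whose only non-trivial bracket is $$[X_1,X_2]=X_3,$$ so $\kc_1{\ensuremath{\gothg}}=[{\ensuremath{\gothg}},{\ensuremath{\gothg}}]=\langle X_3\rangle$ and $\kc_2{\ensuremath{\gothg}}=0$; in other words ${\ensuremath{\gothg}}$ is 2-step nilpotent and is in fact the free 2-step nilpotent Lie-algebra on two generators. The matrices in $G$ whose entries are Gaussian integers form a lattice $\Gamma$ and the quotient $X=\Gamma\backslash G$ is the Iwasawa manifold; by Nakamura's computation [@nakamura75], in accordance with the theorem stated in the introduction, its Kuranishi space is smooth.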
Another important class of left-invariant complex structures are so-called abelian complex structures, which are characerised by $[{{{{\gothh}}^{1,0}}},{{{{\gothh}}^{1,0}}}]=0$ or, equivalently, $[Jx,Jy]=[x,y]$ for all $x,y\in {\gothh}$. In some sense this is the opposite condition to being a complex Lie-algebra and their deformations have been studied in [@mpps06; @con-fin-poon06]. As we pointed out in the introduction, deformations behave much more nicely in this case. Dolbeault cohomology ==================== In this section we will describe how the Dolbeault cohomology of a nilmanifold with left-invariant complex structure $(M,J)$ is completely controlled by the Lie-algebra with complex structure $({\gothh}, J)$. This reduces many problems in the study of nilmanifolds to finite dimensional linear algebra. We will soon concentrate on the complex parallelisable case. Let $(M,J)$ be a nilmanifold with left-invariant complex structure and ${\gothh}$ be the Lie-algebra of the corresponding Lie-group. We can identify elements in $$\Lambda ^{p,q}:=\Lambda^{p,q}({\gothh}^*,J)=\Lambda^p{{{{\gothh}^*}^{1,0}}}\tensor\Lambda^q{{{{\gothh}^*}^{0,1}}}$$ with left-invariant differential forms of type $(p,q)$ on $M$. The differential $d=\del+\delbar$ restricts to $$\Lambda^*{\gothh}_\IC^*=\bigoplus\Lambda ^{p,q}$$ and can in fact be defined in terms of the Lie bracket only: for $\alpha \in {\gothh}^*$ and $x,y\in {\gothh}$ considered as differential form and vectorfields we have $$\label{differential} d\alpha(x,y)=x(\alpha(y))-y(\alpha(x))-\alpha([x,y])=-\alpha([x,y])$$ since all left-invariant functions are constant. Let $H^k({\gothh}, \IC)$ be the $k$-th cohomology group of the complex $$\Lambda^*{\gothh}^*_\IC: \quad 0\to \IC\overset{0}{\to} {\gothh}_\IC^*\overset{d}{\to}\Lambda^2{\gothh}_\IC^* \overset{d}{\to}\Lambda^3{\gothh}_\IC^* \overset{d}{\to}\dots$$ and $H^{p,q}({\gothh},J)$ be the $q$-th cohomology group of the complex $$\Lambda^{p,*}:\quad 0\to \Lambda^{p,0}\overset{\delbar}{\to}\Lambda^{p,1}\overset{\delbar}{\to}\Lambda^{p,2}\overset{\delbar}{\to}\dots$$ In fact, the first complex calculates the usual Lie-algebra cohomology with values in the trivial module $\IC$ while the second calculates the cohomology of the Lie-algebra ${{{{\gothh}}^{0,1}}}$ with values in the module $\Lambda^{p,0}$ (see [@rollenske07b]). \[cohom\] Let $M=\Gamma\backslash H$ be a real nilmanifold with Lie-algebra ${\gothh}$. 1. The inclusion of $\Lambda^*{\gothh}^*_\IC$ into the de Rham complex induces an isomorphism $$H_{\mathrm{dR}}^*(M, \IC) \isom H^*({\gothh}, \IC)$$ in cohomology. (Nomizu, [@nomizu54]) 2. The inclusion of $\Lambda^{p,*}$ into the Dolbeault complex induces an inclusion $$\label{iota} \iota_J:H^{p,q}({\gothh},J)\to H^{p,q}(M,J)$$ which is an isomorphism if $(M,J)$ is complex parallelisable (Sakane, [@sakane76]) or if $J$ is abelian (Console and Fino, [@con-fin01]). Moreover, there exists a a dense open subset $U$ of the space of all left-invariant complex structures on $M$ such that $\iota$ is an isomorphism for all $J\in U$ ([@con-fin01]) Other work in this direction was done by Cordero, Fernándes, Gray and Ugarte [@cfgu00]. Conjecturally $\iota$ is an isomorphism for all left-invariant complex structures; in particular no counterexample is known. For further reference we describe some cohomology groups in these terms. Let ${\ensuremath{\gothg}}$ be a complex Lie-algebra. Let us denote by $K^k:=\im(d:\Lambda^{k-1}{\gothh}_\IC^*{\to}\Lambda^{k}{\gothh}_\IC^*)$ the space of $k$-boundaries. 
Then $$\begin{gathered} H^0({\ensuremath{\gothg}}, \IC)=\IC,\\ H^1({\ensuremath{\gothg}},\IC)=\Ann(\kc_1{\ensuremath{\gothg}})=\Ann([{\ensuremath{\gothg}},{\ensuremath{\gothg}}]),\\ K^2=\Ann(\ker([-,-]:\Lambda^2{\ensuremath{\gothg}}\to {\ensuremath{\gothg}})).\end{gathered}$$ Moreover, $H^{0,1}({\ensuremath{\gothg}})=\overline{H^1({\ensuremath{\gothg}},\IC)}$ and $\im(\delbar:\bar{\ensuremath{\gothg}}^*\to \Lambda^2\bar{\ensuremath{\gothg}}^*)=\bar K^2$. All assertions follow immediately from the fact that the differential $d:{\ensuremath{\gothg}}^*\to\Lambda^2{\ensuremath{\gothg}}^*$ is the dual of the Lie bracket $[-,-]:\Lambda^2{\ensuremath{\gothg}}\to {\ensuremath{\gothg}}$ and from the identification ${\ensuremath{\gothg}}_\IC={\ensuremath{\gothg}}\oplus \bar {\ensuremath{\gothg}}$. Since we are interested in deformations, the cohomology of the holomorphic tangent bundle (resp. tangent sheaf) $\Theta_{(M,J)}$ is of particular interest. It has been calculated in [@rollenske07b] for left-invariant complex structures for which [[(\[iota\])]{}]{} is an isomorphism, generalising results on abelian complex structures in [@mpps06; @con-fin-poon06]. But for a complex parallelisable nilmanifold $X$ we can calculate it directly (as observed by Nakamura [@nakamura75]). Any element of the complex Lie-algebra ${\ensuremath{\gothg}}$ gives rise to a holomorphic vector field. Hence the tangent sheaf is isomorphic to $\ko_X\tensor {\ensuremath{\gothg}}$ and in cohomology we have a natural isomorphism $$H^q(X,\Theta_X)=H^q(X, \ko_X\tensor {\ensuremath{\gothg}})\isom H^q(X, \ko_X)\tensor {\ensuremath{\gothg}}=H^{0,q}(X)\tensor {\ensuremath{\gothg}}\isom H^{0,q}({\ensuremath{\gothg}})\tensor {\ensuremath{\gothg}}.$$ Combining this with the previous results we get \[cohomcalc\] Let $X=\Gamma\backslash G$ be a complex parallelisable nilmanifold. Then the tangent sheaf $\Theta_X\isom \ko_X\tensor {\ensuremath{\gothg}}$ and its cohomology is calculated by the complex $$0\to {\ensuremath{\gothg}}\overset{0}{\to} \bar{{\ensuremath{\gothg}}}^*\tensor {\ensuremath{\gothg}}\overset{\delbar}{\to}\Lambda^{2} \bar{{\ensuremath{\gothg}}}^*\tensor {\ensuremath{\gothg}}\overset{\delbar}{\to}\dots$$ where the differential of $\bar\alpha\tensor X \in \Lambda^{p}\bar{{\ensuremath{\gothg}}}^*\tensor {\ensuremath{\gothg}}$ is given by $ \delbar (\bar\alpha\tensor X)=(\delbar\bar\alpha) \tensor X$. In particular we have $$\begin{gathered} H^0(X, \Theta)={\ensuremath{\gothg}}\\ H^1(X,\Theta)=H^1(X, \ko_X)\tensor {\ensuremath{\gothg}}=\overline {\Ann([{\ensuremath{\gothg}},{\ensuremath{\gothg}}])}\tensor {\ensuremath{\gothg}}\end{gathered}$$ Kuranishi theory {#kurani} ================ In [@kuranishi62] Kuranishi showed that for every compact complex manifold $X$ there exists a locally complete family of deformations which is versal at $X$. He constructs this family explicitly as a small neighbourhood of zero in the space of harmonic $(0,1)$-forms with values in the holomorphic tangent bundle after choosing some hermitian metric on $X$ (which always exists). We will now apply his construction to complex parallelisable nilmanifolds using the results of the last section. Let $(M,J)=(\Gamma\backslash H,J)$ be the real nilmanifold with left-invariant complex structure underlying a complex parallelisable nilmanifold $X=\Gamma\backslash G$. The complex structure $J:{\gothh}\to {\gothh}$ is uniquely determined by the eigenspace decomposition ${\gothh}_\IC={{{{\gothh}}^{1,0}}}\oplus {{{{\gothh}}^{0,1}}}$.
A (sufficiently small) deformation of this decomposition ${\gothh}_\IC=V\oplus \bar V$ can be encoded in a map $\Phi:{{{{\gothh}}^{0,1}}} \to {{{{\gothh}}^{1,0}}}$ such that $\bar V=(\id+\Phi) {{{{\gothh}}^{0,1}}}$, i.e., the graph of $\Phi$ in ${\gothh}_\IC$ is the new space of vectors of type $(0,1)$. This decomposition then determines a unique almost complex structure $J_V$ which is integrable if and only if $[V,V]\subset V$. So far we have only described deformations of $J$ which remain left-invariant; this will be justified in a moment. The integrability condition is most conveniently expressed using the so-called Schouten bracket: for $X,Y\in {{{{\gothh}}^{1,0}}}$ and $(0,1)$-forms $\bar \alpha, \bar\beta\in{{{{\gothh}^*}^{0,1}}}$ we set $$\label{Schouten} [\bar\alpha\tensor X, \bar\beta\tensor Y]:=\bar \beta \wedge L_{Y}\bar \alpha \tensor X+ \bar\alpha \wedge L_{X}\bar\beta\tensor Y+\bar \alpha\wedge \bar \beta \tensor [X,Y]$$ where $L_{X}\bar\beta=i_Xd\bar\beta+d(i_X \bar\beta)$ is the Lie derivative and $i_X$ is the contraction with $X$. One can then show that the new complex structure is integrable if and only if $\Phi$ satisfies the Maurer-Cartan equation $$\label{MC} \delbar \Phi +[\Phi, \Phi]=0$$ and it is well known that infinitesimal deformations, which correspond to first-order solutions, are parametrised by classes in $H^1(X,\Theta_X)$ (see for example [@catanese88] or [@Huybrechts] for an overview). But different solutions may well yield isomorphic deformations. In order to single out a preferred solution we choose a hermitian structure on ${\ensuremath{\gothg}}$ which induces a left-invariant hermitian structure on $X$. Using the Hodge star operator associated to the hermitian metric we can define the formal adjoint $\delbar^*$ to $\delbar$ and the Laplace operator $$\Delta:=\delbar\delbar^*+\delbar^*\delbar.$$ Defining the space of harmonic forms to be $\kh^k=\ker (\Delta:\Lambda^{k} \bar{{\ensuremath{\gothg}}}^*\to \Lambda^{k} \bar{{\ensuremath{\gothg}}}^* )$ there is an orthogonal decomposition $$\Lambda^{k} \bar{{\ensuremath{\gothg}}}^*=B^k\oplus\kh^k\oplus V^k$$ where $B^k=\im(\delbar:\Lambda^{k-1} \bar{{\ensuremath{\gothg}}}^*\to \Lambda^{k} \bar{{\ensuremath{\gothg}}}^*)$ and $V^k=\im(\delbar^*:\Lambda^{k+1} \bar{{\ensuremath{\gothg}}}^*\to \Lambda^{k} \bar{{\ensuremath{\gothg}}}^*)$; this is just the intersection of the usual Hodge decomposition with the subcomplex of left-invariant differential forms. The main point is that all harmonic forms are left-invariant in our setting. Since $\ker(\delbar)=B^k\oplus \kh^k$ we get an isomorphism $$H^k(X,\Theta_X)\isom H^k(X,\ko_X)\tensor {\ensuremath{\gothg}}\isom \kh^k\tensor {\ensuremath{\gothg}}.$$ We are especially interested in the first two cohomology groups. By Lemma \[cohomcalc\] we have $B^1=0$ which yields a commutative diagram $$\xymatrix{ \bar{\ensuremath{\gothg}}^*\tensor {\ensuremath{\gothg}}\ar[rr]^\delbar \ar@{=}[d]&& \ar@{=}[d]\Lambda^2\bar{\ensuremath{\gothg}}^*\tensor {\ensuremath{\gothg}}\\ (\kh^1\tensor{\ensuremath{\gothg}})\oplus (V^1 \tensor {\ensuremath{\gothg}})\ar[rr]^\delbar \ar[d]^{\mathrm{pr}} && (B^2\tensor {\ensuremath{\gothg}})\oplus( \kh^2\tensor {\ensuremath{\gothg}})\oplus( V^2\tensor {\ensuremath{\gothg}}) \ar[dl]^P\ar[d]_H\\ V^1\tensor {\ensuremath{\gothg}}& \ar[l]^{\delta}_{\isom} B^2\tensor{\ensuremath{\gothg}}&\kh^2\tensor {\ensuremath{\gothg}}.
}$$ We denote by $\delta$ the inverse of the isomorphism $P\circ \delbar: V^1\tensor{\ensuremath{\gothg}}\to B^2\tensor{\ensuremath{\gothg}}$. We will now use these operators to describe the Kuranishi space: let $X_1,\dots, X_n$ be a basis of ${\ensuremath{\gothg}}$ and $\bar\omega^1, \dots \bar\omega^m$ be a basis for $\kh^1$. Then $\{\bar\omega^i\tensor X_j\}$ is a basis of $H^1(X,\Theta_X)$ and we define recursively $$\label{Phi} \begin{split} \Phi_1(\underline t)&=\sum_{i,j} t_i^j \bar\omega^i\tensor X_j, \\ \Phi_2(\underline t)&:=-\delta\circ P [\Phi_1(\underline t), \Phi_1(\underline t)],\\ \Phi_k(\underline t)&:=-\delta\circ P \sum_{1\leq i<k} \left[\Phi_i(\underline t), \Phi_{k-i}(\underline t)\right] \quad (k\geq 2), \end{split}$$ obtaining a formal power series $$\Phi(\underline t)=\sum_{k\geq1} \Phi_k(\underline t).$$ We see that $\Phi_k$ is a homogeneous polynomial of degree $k$ in the variables $t_i^j$ and it is easy to verify that $$\delbar\Phi+[\Phi,\Phi]=H[\Phi,\Phi].$$ The map $\Phi$ does not depend on the choice of the basis and we can define the obstruction map $${\mathrm{obs}}:\kh^1\tensor {\ensuremath{\gothg}}\to \kh^2\tensor {\ensuremath{\gothg}},\qquad \mu=\sum_{i,j} t_i^j \bar\omega^i\tensor X_j\mapsto H[\Phi(\underline t),\Phi(\underline t)].$$ We can now formulate Kuranishi’s theorem in our context. The formal power series $\Phi(\underline t)$ converges for sufficiently small values of $\underline t$ and there is a versal family of deformations of $X$ over the space $${\mathrm{Kur}}(X):=\{\mu\in \kh^1(\Theta_X)\mid \|\mu\|<\epsilon; {\mathrm{obs}}(\mu)=0\},$$ where $\kh^1(\Theta_X)=\kh^1\tensor {\ensuremath{\gothg}}$ is the space of harmonic 1-forms with values in $\Theta_X$. ${\mathrm{Kur}}(X)$ is called the Kuranishi space of $X$. By construction $\Phi$ is left-invariant and hence the new complex structure will also be left-invariant. In fact, the new subbundle of tangent vectors of type $(0,1)$ in $TM_\IC$ is obtained by translating the subspace $(\id+\Phi){{{{\ensuremath{\gothg}}}^{0,1}}}\subset {\ensuremath{\gothg}}_\IC$. We have reproved that all sufficiently small deformations of our complex parallelisable nilmanifold carry a left-invariant complex structure. Note that the construction involved the choice of a hermitian structure so ${\mathrm{Kur}}(X)$ is not defined in a canonical way. Nevertheless for different choices of a metric (the germs of) the resulting spaces are (non canonically) isomorphic. The values of $\underline t$ have to be small for two different reasons. First of all we need to ensure the convergence of the formal power series $\Phi(\underline t)$ and secondly $(\id+\Phi)\bar {\ensuremath{\gothg}}$ should be the space of $(0,1)$ vectors for an integrable almost complex structure, in other words we need $(\id+\Phi)\bar {\ensuremath{\gothg}}\oplus\overline{(\id+\Phi)\bar {\ensuremath{\gothg}}}={\ensuremath{\gothg}}_\IC$. We will see that the first issue will not arise in our setting. Usually the terms of the formal power series $\Phi$ are described using Green’s operator, which inverts the Laplacian on the orthogonal complement of harmonic forms, setting $$\Phi_k(\underline t):=-\delbar^*G \sum_{1\leq i<k} \left[\Phi_i(\underline t), \Phi_{k-i}(\underline t)\right].$$ It is straightforward to check that this agrees with our definition above using the identities $G\circ\Delta+H=\Delta\circ G+H=\id$ and the definition of the Laplacian.
Our formula involves only $\delta=\inverse \delbar$ and the projection $P$ which will facilitate the computation of examples in Section \[examples\]. Now that we have seen how the Kuranishi space is constructed we want to investigate its structure in detail for complex parallelisable nilmanifolds. The key result is the following: \[schoutennil\] Let $\bar\alpha\tensor X, \bar\beta\tensor Y \in \bar{\ensuremath{\gothg}}^*\tensor {\ensuremath{\gothg}}$. Then their Schouten bracket is $$[\bar\alpha\tensor X, \bar\beta\tensor Y ]=\bar\alpha\wedge \bar \beta\tensor[X,Y].$$ Comparing the expression with the general formula [[(\[Schouten\])]{}]{} it suffices to show that for $X\in {\ensuremath{\gothg}}$ and $\bar\alpha\in\bar{\ensuremath{\gothg}}^*$ the Lie-derivative $L_X\bar\alpha=i_Xd\bar\alpha+d(i_X \bar\alpha)=0$. But $\bar\alpha$ is of type $(0,1)$ and $d\bar\alpha$ is of type $(0,2)$ (since $[{\ensuremath{\gothg}}, \bar{\ensuremath{\gothg}}]=0$) so both vanish when contracted with a vector of type $(1,0)$. This gives us For $\Phi$ as in the recursive description [[(\[Phi\])]{}]{} we have $[\Phi_k, \Phi_l] \in \Lambda^2\bar{{\ensuremath{\gothg}}}^*\tensor \kc_{k+l-1}{\ensuremath{\gothg}}\subset \Lambda^2\bar{{\ensuremath{\gothg}}}^*\tensor{\ensuremath{\gothg}}$. We prove our claim by induction: for $k=l=1$ there is nothing to prove since $\kc_0{\ensuremath{\gothg}}={\ensuremath{\gothg}}$. Note that, by the Jacobi identity, $[\kc_k{\ensuremath{\gothg}}, \kc_l{\ensuremath{\gothg}}]\subset \kc_{k+l+1}{\ensuremath{\gothg}}$. Since the Schouten bracket is the Lie bracket on the vector part and the map $\delta=\inverse\delbar$ acts only on the form part our claim follows. We deduce immediately that the Kuranishi space cannot be too complicated: \[polynom\] If ${\ensuremath{\gothg}}$ is $\nu$-step nilpotent and $\Phi$ as in [[(\[Phi\])]{}]{} then $${\mathrm{obs}}(\underline t)=\sum_{\stackrel{1\leq i,j< \nu,}{ i+j\leq \nu}}H[\Phi_i, \Phi_j].$$ In particular ${\mathrm{Kur}}(X)$ is cut out by polynomial equations of degree at most $\nu$. Since ${\ensuremath{\gothg}}$ is $\nu$-step nilpotent $\kc_k{\ensuremath{\gothg}}=0$ for $k\geq \nu$. By the previous Lemma this implies that $[\Phi_i, \Phi_j]=0$ whenever $i+j-1\geq \nu$ and hence the only possibly non-vanishing terms of ${\mathrm{obs}}=H[\Phi,\Phi]$ are the ones given above. For further reference we use Lemma \[schoutennil\] to calculate the second order obstructions, i.e., the quadratic term of the obstruction map ${\mathrm{obs}}$: let as before $\bar\omega^1, \dots, \bar\omega^{m}$ be a basis of $\kh^1=\overline {\Ann([{\ensuremath{\gothg}},{\ensuremath{\gothg}}])}$ and $X_1, \dots, X_n$ be a basis of ${\ensuremath{\gothg}}$.
Then we can represent any element in $H^1(X, \Theta_X)$ as $$\Phi_1(\underline t)=\sum_{i,j} t_i^j \bar\omega^i\tensor X_j$$ and consequently $$\label{deg2} \begin{split} [\Phi_1(\underline t),\Phi_1(\underline t)]&=[\sum_{i,k} t_i^k \bar\omega^i\tensor X_k, \sum_{j,l} t_j^l \bar\omega^j\tensor X_l]\\ &= \sum_{i,j,k,l}(t_i^k t^l_j)[ \bar\omega^i\tensor X_k, \bar\omega^j\tensor X_l]\\ &= \sum_{i,j,k,l}(t_i^k t^l_j)\bar\omega^i\wedge\bar\omega^j\tensor [X_k, X_l]\\ & = \sum_{1\leq i<j\leq m}\sum_{k,l}(t_i^k t^l_j-t_j^k t^l_i)\bar\omega^i\wedge\bar\omega^j\tensor [X_k, X_l]\\ & = \sum_{1\leq i<j\leq m}\sum_{1\leq k<l\leq n}2(t_i^k t^l_j-t_j^k t^l_i)\bar\omega^i\wedge\bar\omega^j\tensor [X_k, X_l]\\ & = 2 \sum_{1\leq i<j\leq m}\sum_{1\leq k<l\leq n} \det\begin{pmatrix}t_i^k & t_i^l\\t_j^k & t_j^l \end{pmatrix} \bar\omega^i\wedge\bar\omega^j\tensor [X_k, X_l]. \end{split}$$ We deduce from this formula a necessary condition for the Kuranishi space to be smooth: \[lambda2\] If the subspace $\Lambda^2 \kh^1 \subset \Lambda^2 \bar{\ensuremath{\gothg}}^*$ is not contained in $B^2$ and ${\ensuremath{\gothg}}$ is not abelian then there is a non-vanishing obstruction in degree 2 and the Kuranishi space is singular. Assume that $\Lambda^2 \kh^1 \subset \Lambda^2 \bar{\ensuremath{\gothg}}^*$ is not contained in $B^2$. Then there is some basis vector $\bar\omega^i\wedge\bar\omega^j$ which is not contained in the image of $\delbar$; since it is $\delbar$-closed it lies in $B^2\oplus\kh^2$ and therefore has non-vanishing harmonic part. Since ${\ensuremath{\gothg}}$ is not abelian there are vectors $X_k, X_l$ such that $[X_k, X_l]\neq 0$. Setting $t_p^q=0$ if $p\neq i,j$ or $q\neq k,l$ and choosing the remaining coefficients such that $\det\begin{pmatrix}t_i^k & t_i^l\\t_j^k & t_j^l \end{pmatrix}\neq 0$ we have found an obstructed element in $H^1(X, \Theta_X)$. The condition that the Kuranishi space be smooth is very strong. To make this more precise we need to recall the definition of the free 2-step Lie-algebra: let $m\geq2$, $V=\IC^m$ and $\gothb_m:=V\oplus \Lambda^2 V$. Then $\gothb_m$ with the Lie bracket $$[ \cdot, \cdot]: \gothb_m\times \gothb_m\to \gothb_m, \qquad [a+b\wedge c, a'+b'\wedge c']:=a\wedge a'$$ is the free 2-step nilpotent Lie-algebra. \[freecond\] If ${\ensuremath{\gothg}}$ is not abelian then there is a non-vanishing obstruction in degree 2 if and only if ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}$ is not isomorphic to a free 2-step nilpotent Lie-algebra. Hence, if ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}$ is not free then the Kuranishi space is singular. The vanishing of all obstructions in degree 2 is not a sufficient condition for the Kuranishi space to be smooth. A 4-dimensional example where ${\mathrm{Kur}}(X)$ is cut out by a single cubic equation can be found in Section \[explicit4\]. All examples with smooth Kuranishi space which we could find were actually free Lie algebras and at least in the 2-step nilpotent case there are no others: \[b\_m\] If ${\ensuremath{\gothg}}$ is 2-step nilpotent then the Kuranishi space is smooth if and only if ${\ensuremath{\gothg}}$ is a free 2-step nilpotent Lie-algebra, i.e., ${\ensuremath{\gothg}}\isom \gothb_m$ with $m=h^{0,1}(X)$. This follows immediately from the theorem since for a 2-step nilpotent Lie-algebra we have $\kc_2{\ensuremath{\gothg}}=0$, hence ${\ensuremath{\gothg}}/\kc_2{\ensuremath{\gothg}}\isom{\ensuremath{\gothg}}.$ Note that $\gothb_2$ is the complex Heisenberg algebra, which is the Lie-algebra of the universal cover of the Iwasawa manifold.
So we have reproved the smoothness of the Kuranishi space of the Iwasawa manifold first observed by Nakamura. It is very easy to produce examples with singular Kuranishi space: If ${\ensuremath{\gothg}}\isom {\ensuremath{\gothg}}'\oplus \gotha$ where $\gotha\isom \IC^n$ is an abelian Lie-algebra and ${\ensuremath{\gothg}}'$ is not abelian, then the Kuranishi space is singular. In particular, if $X$ is any complex parallelisable nilmanifold which is not a torus and $T$ is a complex torus then $X\times T$ has obstructed deformations. We have ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}={\ensuremath{\gothg}}'\slash\kc_2{\ensuremath{\gothg}}'\oplus \gotha$ which is not free. An application of the theorem proves the assertion. Before we can address the proof of Theorem \[freecond\] we need a technical lemma. Let ${\ensuremath{\gothg}}=\kc_0 {\ensuremath{\gothg}}\supset \kc_1{\ensuremath{\gothg}}\supset \dots \supset \kc_{\nu}{\ensuremath{\gothg}}=0$ be the descending central series and let $\kc^k{\ensuremath{\gothg}}^*=\Ann\kc_k{\ensuremath{\gothg}}$. We get a filtration $$0=\kc^0{\ensuremath{\gothg}}^*\subset \kc^1{\ensuremath{\gothg}}^*=\Ann(\kc_1{\ensuremath{\gothg}})\subset \dots\subset \kc^\nu{\ensuremath{\gothg}}^*={\ensuremath{\gothg}}^*.$$ \[dck\] Setting $$W^k=\langle \alpha\wedge \beta \in \Lambda^2{\ensuremath{\gothg}}^*\mid \alpha \in \kc^i{\ensuremath{\gothg}}^*, \beta\in \kc^j{\ensuremath{\gothg}}^*, i+j\leq k\rangle \subset \Lambda^2{\ensuremath{\gothg}}^*$$ we have $$d\alpha \in W^k \iff \alpha \in \kc^k{\ensuremath{\gothg}}^*.$$ Assume that there is $\alpha \notin \kc^k{\ensuremath{\gothg}}^*$ with $d\alpha \in W^k$. By the Jacobi-identity $\kc_k{\ensuremath{\gothg}}$ is generated by elements of the form $X=[Y,Z]$ where $Y\notin \kc_1{\ensuremath{\gothg}}$ and $Z\in \kc_{k-1}{\ensuremath{\gothg}}$ and hence $\alpha(X)\neq 0$ for one such element. By the definition of $\kc^i{\ensuremath{\gothg}}^*$ we have $\beta(Y,Z)=0$ for all $\beta \in W^k$. On the other hand $$d\alpha(Y,Z)=-\alpha([Y,Z])=-\alpha(X)\neq 0$$ so $d\alpha\notin W^k$ – a contradiction. The other direction is a well known fact for nilpotent Lie algebras. It can easily be seen by picking a basis adapted to the descending central series (often called Malcev or Engel basis) and writing $\alpha$ as a linear combination of the elements of the dual basis. *Proof of Theorem \[freecond\].* Let ${\ensuremath{\gothg}}$ be a non-abelian Lie-algebra. By Lemma \[lambda2\] it suffices to show that $\Lambda^2\kh^1\subset B^2$ if and only if ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}$ is a free 2-step Lie-algebra. Recalling that $\kh^1=\overline{\kc^1{\ensuremath{\gothg}}^*}$ and $B^2=\overline{\im(d)}$ we have to prove that $\Lambda^2\kc^1{\ensuremath{\gothg}}^*=W^2$ is in the image of the differential if and only if ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}$ is free. The Lie bracket in ${\ensuremath{\gothg}}$ can also be considered as a linear map $$b:\Lambda^2 {\ensuremath{\gothg}}\to \kc_1{\ensuremath{\gothg}},$$ which is, by definition, surjective. Dualising we get (the restriction of) the differential $$d: (\kc_1{\ensuremath{\gothg}})^* \to \Lambda^2 {\ensuremath{\gothg}}^*,$$ which is now injective. Let $A$ be the annihilator of $\kc_2{\ensuremath{\gothg}}$ in $(\kc_1{\ensuremath{\gothg}})^*$.
Then we infer from Lemma \[dck\] that $d\restr{A}:A\to W^2=\Lambda^2\kc^1{\ensuremath{\gothg}}^*$, in fact, $$dA=\im (d) \cap W^2=\im(d)\cap \Lambda^2\kc^1{\ensuremath{\gothg}}^*.$$ The dual map $$b': \left(\Lambda^2\kc^1{\ensuremath{\gothg}}^*\right)^*=\Lambda^2({\ensuremath{\gothg}}\slash\kc_1{\ensuremath{\gothg}}) \to A^*=\kc_1{\ensuremath{\gothg}}\slash\kc_2{\ensuremath{\gothg}}$$ gives an anti-symmetric bilinear form on ${\ensuremath{\gothg}}\slash\kc_1{\ensuremath{\gothg}}$ with values in $\kc_1{\ensuremath{\gothg}}\slash\kc_2{\ensuremath{\gothg}}$ which is exactly the Lie bracket in the quotient Lie-algebra ${\ensuremath{\gothg}}\slash\kc_2{\ensuremath{\gothg}}$. Hence we see that $\Lambda^2\kc^1{\ensuremath{\gothg}}^*$ is in the image of $d$ if and only if $d: A \to W^2$ is surjective if and only if $b'$ is injective. But $b'$ is by definition surjective so it is injective if and only if it is bijective in which case $\Lambda^2({\ensuremath{\gothg}}\slash\kc_1{\ensuremath{\gothg}})\isom\kc_1{\ensuremath{\gothg}}\slash\kc_2{\ensuremath{\gothg}}$ and the Lie-algebra ${\ensuremath{\gothg}}\slash \kc_2{\ensuremath{\gothg}}$ is indeed free. Deformations remaining complex parallelisable {#remainparall} ============================================= It is a natural question whether there are conditions which guarantee that a given small deformation of our complex parallelisable manifold $X$ is again complex parallelisable. So let $\mu \in H^1(X, \Theta_X)=\kh^1\tensor {\ensuremath{\gothg}}$ be an infinitesimal deformation and $\Phi$ the corresponding iterative solution of the Maurer-Cartan equation as in [[(\[Phi\])]{}]{}. The new space of $(0,1)$-vectors is $(\id+\Phi)\bar {\ensuremath{\gothg}}$. (Recall that we identified ${\ensuremath{\gothg}}_\IC={\ensuremath{\gothg}}\oplus\bar{\ensuremath{\gothg}}$.) By Lemma \[parallchar\] the new complex structure is again complex parallelisable if and only if $$[(\id+\bar\Phi) X, (\id+\Phi)\bar Y]=0$$ for all $X,Y\in {\ensuremath{\gothg}}$. Looking at the terms up to first order yields $$[X, \bar Y]+[\bar\mu X,\bar Y] +[ X,\mu\bar Y]=[\bar\mu X,\bar Y] +[ X,\mu\bar Y]=0.$$ The first of these terms is in $\bar{\ensuremath{\gothg}}$ while the second is in ${\ensuremath{\gothg}}$ and they are complex conjugate to each other up to sign and renaming. Thus we call $\mu$ an infinitesimally complex parallelisable deformation if $$[ X,\mu\bar Y]=0 \quad\text{for all } X,Y \in {\ensuremath{\gothg}},$$ which is the case if and only if $\mu \in \kh^1\tensor \kz{\ensuremath{\gothg}}$. Such infinitesimal deformations are always unobstructed: if $\mu \in \kh^1\tensor \kz{\ensuremath{\gothg}}$ then $[\mu,\mu]\in \Lambda^2\bar{\ensuremath{\gothg}}^*\tensor [\kz{\ensuremath{\gothg}}, \kz{\ensuremath{\gothg}}]=0$. Hence in the recursive definition [[(\[Phi\])]{}]{} all higher order terms vanish, $\Phi=\mu$ and ${\mathrm{obs}}(\mu)=0$. We have proved \[remaining\] For an element $\mu \in H^{1}(X, \Theta_X)=H^1(X,\ko_X)\tensor {\ensuremath{\gothg}}$ the following are equivalent: 1. $\mu \in H^1(X, \ko_X)\tensor \kz{\ensuremath{\gothg}}$. 2. $\mu$ defines an infinitesimally complex parallelisable deformation. 3. $t\mu$ induces a 1-parameter family of complex parallelisable manifolds for $t$ small enough, i.e., provided that $(\id+t\mu)\bar{\ensuremath{\gothg}}\oplus(\id+t\bar\mu){\ensuremath{\gothg}}={\ensuremath{\gothg}}_\IC$. Hence the Kuranishi family is (locally) a cylinder over an analytic subset of $H^1(X, \ko_X)\tensor ({\ensuremath{\gothg}}/\kz{\ensuremath{\gothg}})$.
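As an illustration of Theorem \[remaining\] (a dimension count that uses only the results above; the basis is chosen here for convenience), let $X$ be the Iwasawa manifold, so that ${\ensuremath{\gothg}}\isom\gothb_2$ is the complex Heisenberg algebra, and pick a basis $X_1,X_2,X_3$ of ${\ensuremath{\gothg}}$ with $[{\ensuremath{\gothg}},{\ensuremath{\gothg}}]=\kz{\ensuremath{\gothg}}=\langle X_3\rangle$ together with a basis $\bar\omega^1,\bar\omega^2$ of $\kh^1=\overline{\Ann([{\ensuremath{\gothg}},{\ensuremath{\gothg}}])}$. Then $$\kh^1\tensor \kz{\ensuremath{\gothg}}=\langle \bar\omega^1\tensor X_3,\ \bar\omega^2\tensor X_3\rangle\subset H^1(X,\Theta_X)\isom \IC^6,$$ so exactly a 2-dimensional subspace of the 6-dimensional space of infinitesimal deformations consists of directions along which the deformed manifolds remain complex parallelisable, and in these directions the deformations are unobstructed.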
Since ${\ensuremath{\gothg}}=\kz{\ensuremath{\gothg}}$ if and only if ${\ensuremath{\gothg}}$ is abelian we deduce: \[nondef\] If ${\ensuremath{\gothg}}$ is not abelian then there are small deformations of $X$ which are not complex parallelisable. Examples ======== We continue to use the notation from Remark \[notation\]. The deformation theory of the complex parallelisable nilmanifold $X$ is completely determined by the Lie-algebra ${\ensuremath{\gothg}}$ and we have already discussed two series of examples where ${\mathrm{Kur}}(X)$ is smooth. - If ${\ensuremath{\gothg}}=\mathfrak a_k$ is the $k$-dimensional abelian Lie-algebra then $X$ is a torus and ${\mathrm{Kur}}(X)$ is smooth of dimension $k^2$. - If ${\ensuremath{\gothg}}=\mathfrak b_m$ is the free 2-step nilpotent Lie-algebra on $m$ generators, which has dimension $\frac{m(m+1)}{2}$, then ${\mathrm{Kur}}(X)$ is smooth of dimension $\frac{m^2(m+1)}{2}$ (see Corollary \[b\_m\]). Examples in low dimension – overview ------------------------------------ Nilpotent complex Lie algebras are classified up to dimension 7 [@magnin86] and partial results are known in dimension 8. Starting from dimension 7 there are infinitely many non-isomorphic cases. We will now describe the Kuranishi space of complex parallelisable nilmanifolds up to dimension 5. There is a convenient way to describe a nilpotent Lie-algebra ${\ensuremath{\gothg}}$ using the differential $d: {\ensuremath{\gothg}}^*\to \Lambda^2{\ensuremath{\gothg}}^*$. The expression $${\ensuremath{\gothg}}=(0,0,0,0,12+34)$$ means the following: with respect to a basis $\omega^1, \dots , \omega^5$ of ${\ensuremath{\gothg}}^*$ the differential is given by $$d\omega^1=d\omega^2=d\omega^3=d\omega^4=0 \text{ and } d\omega^5=\omega^1\wedge\omega^2+\omega^3\wedge\omega^4.$$ This determines the Lie bracket, which is the dual map (see [[(\[differential\])]{}]{}). More precisely, if we denote by $X_1, \dots, X_5$ the dual basis then the only non-zero Lie brackets are $[X_1, X_2]=[X_3,X_4]=-X_5$. Table 1 lists all Lie-algebras up to dimension 5 in this notation together with some information on the Kuranishi space of an associated complex parallelisable nilmanifold. We denote the nilpotency index by $\nu$. Note that all Lie-algebras with smooth Kuranishi space are either free or abelian. One can check that also the free 4-step nilpotent Lie-algebra on 2 generators $(0,0,12,13,23,14,25,24+15)$ has smooth Kuranishi space. \[alle\]

  $\dim$   Lie-algebra           $\nu$   $h^1(\Theta_X)$   smooth    irreducible   reduced
  -------- --------------------- ------- ----------------- --------- ------------- ---------
  1        $\gotha_1$            1       1                 $\surd$   $\surd$       $\surd$
  2        $\gotha_2$            1       4                 $\surd$   $\surd$       $\surd$
  3        $\gotha_3$            1       9                 $\surd$   $\surd$       $\surd$
  3        $\gothb_2$            2       6                 $\surd$   $\surd$       $\surd$
  4        $\gotha_4$            1       16                $\surd$   $\surd$       $\surd$
  4        $(0,0,0,12)$          2       12                $-$       $-$           $\surd$
  4        $(0,0,12,13)$         3       8                 $-$       $-$           $\surd$
  5        $\gotha_5$            1       25                $\surd$   $\surd$       $\surd$
  5        $(0,0,0,12,13)$       2       15                $-$       $-$           $\surd$
  5        $(0,0,0,0,12+34)$     2       20                $-$       $-$           $\surd$
  5        $(0,0,12,13,23)$      3       10                $\surd$   $\surd$       $\surd$
  5        $(0,0,0,12,13+24)$    3       15                $-$       $-$           $-$
  5        $(0,0,12,13,14)$      4       10                $-$       $-$           $-$
  5        $(0,0,12,13,14+23)$   4       10                $-$       $-$           $-$

  : Kuranishi spaces up to dimension 5.

Examples in low dimension – explicit descriptions ------------------------------------------------- In this section we will give explicit equations for the Kuranishi space of some examples. In order to avoid cumbersome notation we will only consider the germ of the Kuranishi space at zero which will be denoted by ${\mathrm{Kur}}(X)_0$.
Since nothing interesting happens in dimensions 1, 2, and 3 we start in dimension 4. ### Computations in dimension 4 {#explicit4} We will now compute the Kuranishi space explicitly for the two singular examples in dimension 4. The structure equations of the considered Lie-algebras are given with respect to the bases $X_1, \dots, X_n$ and $\omega^1, \dots, \omega^n$ as described at the beginning of this section. Thus we will always start the computation of the iterative solution of the Maurer-Cartan equation with the element $$\Phi_1(\underline t)=\sum_{i=1}^{m}\sum_{j=1}^{n} t_i^j \bar\omega^i\tensor X_j$$ where $n=\dim{\ensuremath{\gothg}}$ and ${m}=\codim\kc_1{\ensuremath{\gothg}}=h^{0,1}(X)$. In order to use harmonic forms we equip ${\ensuremath{\gothg}}$ with the unique hermitian metric such that the $X_i$ form an orthonormal basis. In every step of the recursion [[(\[Phi\])]{}]{} we will decompose $[\Phi_k, \Phi_l]=\beta+\chi$ where $\chi$ is harmonic and $\beta$ is exact. Then $\chi$ will contribute to the obstruction map and $\delta(\beta)=\inverse{(\delbar)}\beta$ will, if necessary, be used to compute the next iterative step. #### The Lie-algebra ${\ensuremath{\gothg}}=(0,0,0,12)$ {#the-lie-algebra-ensuremathgothg00012 .unnumbered} Since ${\ensuremath{\gothg}}$ is 2-step nilpotent we only have to look at obstructions in degree 2, i.e., ${\mathrm{obs}}=H[\Phi_1, \Phi_1]$. Since $[X_1,X_2]=-X_4$ is the only non-zero bracket we deduce from [[(\[deg2\])]{}]{} that $$\begin{aligned} [\Phi_1(\underline t),\Phi_1(\underline t)]&= - 2 \sum_{1\leq i<j\leq 3}\det\begin{pmatrix}t_i^1 & t_i^2\\t_j^1 & t_j^2 \end{pmatrix} \bar\omega^i\wedge\bar\omega^j\tensor X_4\\ &=-2\det\begin{pmatrix}t_1^1 & t_1^2\\t_3^1 & t_3^2 \end{pmatrix} \bar\omega^1\wedge\bar\omega^3\tensor X_4-2\det\begin{pmatrix}t_2^1 & t_2^2\\t_3^1 & t_3^2 \end{pmatrix} \bar\omega^2\wedge\bar\omega^3\tensor X_4\\ &\qquad-\delbar\left(2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^4\tensor X_4\right).\end{aligned}$$ Hence $$\begin{aligned} {\mathrm{Kur}}(X)_0&=\{ \underline t \in \IC^{12}\mid \det\begin{pmatrix}t_1^1 & t_1^2\\t_3^1 & t_3^2 \end{pmatrix}=\det\begin{pmatrix}t_2^1 & t_2^2\\t_3^1 & t_3^2 \end{pmatrix}=0\}_0\\ &=\left(\IC^6\times Y\right)_0\end{aligned}$$ where $$Y= \{ t_3^1 = t_3^2=0\}\cup \{ \rk\begin{pmatrix}t_1^1 &t_2^1 & t_3^1 \\t_1^2 & t_2^2& t_3^2 \end{pmatrix}\leq 1\}.$$ In particular we see that the Kuranishi space is a cylinder over the reducible space $Y$.
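This is consistent with the general results of the previous sections (a cross-check rather than a new computation): the only non-zero bracket is $[X_1,X_2]=-X_4$, so $\langle X_1,X_2,X_4\rangle$ is a copy of the complex Heisenberg algebra and $X_3$ spans a central abelian factor, i.e., $${\ensuremath{\gothg}}=(0,0,0,12)\isom \gothb_2\oplus\gotha_1, \qquad {\ensuremath{\gothg}}\slash\kc_2{\ensuremath{\gothg}}={\ensuremath{\gothg}}\not\isom\gothb_m \text{ for any } m,$$ and both Theorem \[freecond\] and the corollary on sums with an abelian factor predict a singular Kuranishi space. Indeed, the two components of $Y$ above are distinct and both pass through the origin, so ${\mathrm{Kur}}(X)_0$ is singular at $0$.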
#### The Lie-algebra ${\ensuremath{\gothg}}=(0,0,12,13)$ {#the-lie-algebra-ensuremathgothg001213 .unnumbered} We infer from [[(\[deg2\])]{}]{} that $$\begin{aligned} [\Phi_1(\underline t),\Phi_1(\underline t)]&= - 2 \det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^1\wedge\bar\omega^2\tensor X_3 - 2 \det\begin{pmatrix}t_1^1 & t_1^3\\t_2^1 & t_2^3 \end{pmatrix} \bar\omega^1\wedge\bar\omega^2\tensor X_4\\ &=-\delbar\left(2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^3\tensor X_3 + 2 \det\begin{pmatrix}t_1^1 & t_1^3\\t_2^1 & t_2^3 \end{pmatrix}\bar\omega^3\tensor X_4\right)\end{aligned}$$ and by the recursion formula we set $$\Phi_2:=2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^3\tensor X_3 + 2 \det\begin{pmatrix}t_1^1 & t_1^3\\t_2^1 & t_2^3 \end{pmatrix}\bar\omega^3\tensor X_4.$$ We see that there are no obstructions of second order and calculate (noting that $X_4$ is in the centre and that $[X_2, X_3]=0$) $$\begin{aligned} [\Phi_1(\underline t),\Phi_2(\underline t)]&=[t^1_1\bar\omega^1\tensor X_1 + t^1_2 \bar\omega^2\tensor X_1, 2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^3\tensor X_3]\\ &= -2 \det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \left( t^1_1\bar\omega^1\wedge \bar\omega^3\tensor X_4 +t^1_2\bar\omega^2\wedge \bar\omega^3\tensor X_4\right)\\ &= -2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \left( t^1_2\bar\omega^2\wedge \bar\omega^3\tensor X_4+t^1_1\delbar \bar\omega^4\tensor X_4 \right)\\ &=-2 t^1_2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix} \bar\omega^2\wedge \bar\omega^3\tensor X_4 \mod B^2\tensor{\ensuremath{\gothg}}\end{aligned}$$ Hence we have $${\mathrm{Kur}}(X)_0=\{ \underline t \in \IC^2\tensor \IC^4 =\IC^8\mid t^1_2\det\begin{pmatrix}t_1^1 & t_1^2\\t_2^1 & t_2^2 \end{pmatrix}=0\}_0,$$ in other words, ${\mathrm{Kur}}(X)_0$ is a cylinder over the cone over the union of a plane and a quadric in $\IP^3$. ### Remarks on dimension 5 {#dim5sect} The computations in dimension 5 proceed along the same lines as in dimension 4 but are, as one might imagine, much more involved. Thus, we will only present the results. In view of Theorem \[remaining\] the Kuranishi space is a cylinder over an analytic subset of the vector space $H^1(\bar {\ensuremath{\gothg}},\IC)\tensor ({\ensuremath{\gothg}}\slash \kz{\ensuremath{\gothg}})$ whose dimension we denote by $d$. The germ of the Kuranishi space at 0 is cut out by polynomial functions and we will give the primary decomposition, computed using the program Singular [@GPS05], of the ideal $I$ of all these functions. Different ideals in the decomposition correspond to different irreducible components. If some component is set-theoretically contained in another component we call it an embedded component; this can only happen if the component is not reduced. Non-reduced components occur if there are infinitesimal deformations which can be lifted up to a certain order but not to actual deformations. In all examples the Kuranishi space has several irreducible components. We denote by $k$ the number of components of the reduced space and by $e$ the number of embedded components. Note that in the case ${\ensuremath{\gothg}}=(0,0,12,13,14+23)$ there are two non-reduced components which are not embedded, both supported on linear subspaces.
To simplify the description of the ideals we introduce the notation $$\begin{gathered} \delta_{ij}^{kl}:=\det\begin{pmatrix}t_i^k & t_i^l\\t_j^k & t_j^l \end{pmatrix},\\ \Delta_{ijk}^{lmn}:=\det\begin{pmatrix} t_i^l &t_j^l &t_k^l\\ t_i^m &t_j^m &t_k^m\\ t_i^n &t_j^n &t_k^n \end{pmatrix}.\end{gathered}$$ The results can now be found in Table 2 where we also give the codimension and the degree of the various components. \[dim5\] [ccccccX]{} $\mathbf{{\ensuremath{\gothg}}}$ & $\mathbf d$ & $\mathbf{(k,e)}$ & **Codim.** &**Degree** & **reduced?** & **Ideal (primary decomposition)**\ &\ $(0,0,0,12,13)$ &9&$(2,0)$&$(2,2)$&$(3,1)$& $(\surd,\surd)$&$(\delta_{23}^{23}, \delta_{23}^{13}, \delta_{23}^{12})\cap (t_3^1, t_2^1)$\ &\ $(0,0,0,12,13+24)$ & 12 &$(3,2)$ & $(4, 5, 4);( 5, 5)$ &$(9, 3, 3);(2, 4)$ & $(\surd, \surd, \surd)$& [ $$\begin{aligned} &(\delta^{13}_{23}+\delta^{24}_{23}, \delta_{23}^{12}, \delta_{13}^{14}+\delta_{13}^{24}, \delta_{13}^{12}, \delta_{12}^{13}+\delta_{12}^{24}, \delta^{12}_{12})\\ \cap &( t_3^2, t_3^1,t_1^2, t_2^1t_3^3+t^2_2t_3^4, 2(t^2_2)^2-t_3^3, 2t_2^1t_2^2+t_3^4)\\ \cap&( t_3^2, t_3^1, t_2^1t_3^3+t^2_2t_3^4, t^1_1 + t_1^2t_3^4, \delta_{12}^{12})\\ \cap&(t_3^2, t_1^2, (t_3^3)^2, t_3^1t_3^3, t_2^2t_3^3, (t_3^1)^2, \delta_{23}^{13}+t_2^2t_3^4, t_2^2t_3^1, \delta_{13}^{13}, (t_2^2)^2)\\ \cap&(t_1^2, t_3^1t_3^3+t_3^2t_3^4, (t_3^2)^2, t_3^1t_3^2, t_1^1t_3^2,(t_3^1)^2, \\ &\qquad\delta^{13}_{23}+\delta^{24}_{23}, \delta_{23}^{12}, t_1^1t_3^1,(t_1^1)^2, \delta^{13}_{13}+t_1^4t_3^2+2t_1^1(t_2^2)^2)\end{aligned}$$]{} \ &\ $(0,0,12,13,14)$ & 8 & $(2,1)$ & $(2,1);(2)$ & $(3,1);(2)$ & $(\surd,\surd)$ & $$(\delta_{12}^{23}, \delta_{12}^{13}, \delta_{12}^{12})\cap (t_2^1)\cap (\delta^{12}_{12}, t_1^1t_2^1, (t_1^1)^2, (t_2^1)^2)$$ \ &\ $(0,0,12,13,14+23)$ & 8 & $(4,0)$ & $(2, 2, 2, 2)$ & $(3,2,2,3)$ & $(\surd, \surd,-, -)$ & [$$\begin{aligned} & (\delta_{12}^{23}, \delta_{12}^{13}, \delta^{12}_{12}) \cap (t_2^1, 2(t_1^1)^2-t_2^2)\\ \cap &((t_2^2)^2,t_2^1t_2^2, (t_2^1)^2, 2t_1^1\delta_{12}^{12}-t_2^1t_2^3)\\ \cap& ( (t_2^1)^3, t_2^2\delta_{12}^{12}+t_2^1\delta_{12}^{13}, t_2^1\delta_{12}^{12},\\ & \qquad t_1^1(t_2^1)^2,t_1^2\delta_{12}^{12} +t_1^1\delta_{12}^{13}, t_1^1\delta_{12}^{12}, (t_1^1)^2t_2^1, (t^1_1)^3)\end{aligned}$$]{} \ &\ $(0,0,0,0,12+34)$ & 16 & $(2,0)$ & $(5,5)$ & $(20,12)$ & $(\surd, \surd)$ & [$$\begin{aligned} (&\delta^{12}_{34}+\delta_{34}^{34}, \delta^{12}_{24}+\delta^{34}_{24}, \delta^{12}_{14}+\delta^{34}_{14}, \delta^{12}_{23}+\delta^{34}_{23}, \delta^{12}_{13}+\delta^{34}_{14},\\ & \delta^{12}_{12}+\delta^{34}_{12}, \Delta^{234}_{234}, \Delta^{234}_{134}, \Delta^{234}_{124}, \Delta^{134}_{234}, \Delta^{134}_{134}, \Delta^{134}_{124}, \Delta_{123}^{234}, \Delta^{134}_{123})\\ \cap&(\delta^{12}_{24}+\delta^{34}_{23}, \delta^{12}_{14}+\delta^{34}_{14}, \delta^{12}_{23}+\delta^{34}_{23}, \delta^{12}_{13}+\delta^{34}_{13}, \delta^{12}_{34}-\delta^{34}_{12},\\ &\delta^{24}_{12}+\delta^{24}_{34}, \delta^{23}_{12}+\delta^{23}_{34}, \delta^{14}_{12}+\delta^{14}_{34}, \delta^{13}_{12}+\delta^{13}_{34},\delta^{12}_{12}-\delta^{34}_{34},\\ & t^1_2\delta^{12}_{34}-t^1_4\delta^{34}_{23}+t^1_3\delta^{14}_{34}, t^1_1\delta^{12}_{34}-t^1_4\delta^{34}_{13}+t^1_3\delta^{34}_{14})\end{aligned}$$]{} \ A non-parallelisable example {#nonexam} ---------------------------- The Kuranishi space of a nilmanifold which is neither complex parallelisable nor carries an abelian complex structure can be much more complicated. 
We will illustrate this fact by describing a 2-step nilpotent Lie-algebra such that the Kuranishi space of an associated nilmanifold is singular but not cut out by quadrics, i.e., there are non-vanishing obstructions of higher order. We use here an alternative way to describe a real Lie-algebra with complex structure: consider the complex vector space $V:=\langle X_1, \dots, X_7\rangle_\IC$. There is a natural real vector space ${\gothh}\subset V\oplus \bar V$ invariant under complex conjugation such that ${\gothh}_\IC=V\oplus \bar V$. This decomposition defines a complex structure $J$ on ${\gothh}$ via ${{{{\gothh}}^{1,0}}}:=V$. Let $\omega^1, \dots, \omega^7$ be the basis of $V^*$ dual to the $X_i$. Then, by the formula for the differential [[(\[differential\])]{}]{}, a Lie bracket on ${\gothh}$ is uniquely determined by $$\begin{gathered} d\omega^1=d\omega^2=d\omega^3=d\omega^4=d\omega^5=0,\\ d\omega^6=\omega^1\wedge\omega^2,\\ d\omega^7=\omega^3\wedge\omega^4+ \bar \omega^1\wedge\omega^5,\end{gathered}$$ and the complex conjugate equations. For example, we have $[\bar X_5, X_1]=\bar X_7$. Then $$d {{{{\gothh}^*}^{1,0}}}\subset \Lambda^{2,0}\oplus \Lambda^{1,1}$$ which means $d=\del+\delbar$ and the complex structure is integrable with respect to this Lie bracket. But since the image of $d$ is not contained in one of the components $\Lambda^{1,1}$ and $\Lambda^{2,0}$ the complex structure is neither abelian nor is $({\gothh}, J)$ a complex Lie-algebra. Our Lie-algebra with complex structure $({\gothh},J)$ is defined over $\IQ$ and by the theorem of Mal’cev [@malcev51] there exists a lattice $\Gamma$ in the corresponding real simply connected nilpotent Lie-group $H$. We obtain a nilmanifold with left-invariant complex structure $(M,J)=(\Gamma\backslash H, J)$. Now let $$\mu:= \bar \omega^3\tensor X_1 +\bar \omega^4\tensor X_2.$$ Recall that for $X\in {{{{\gothh}}^{1,0}}}$ and $\bar Y \in {{{{\gothh}}^{0,1}}}$ we have $\delbar X (\bar Y)= {{{ [\bar Y, X]}^{1,0}}}$ where ${{{x}^{1,0}}}$ is the image of $x\in {\gothh}_\IC$ under the projection to the $(1,0)$-part. In particular we see that $\delbar X_1=\delbar X_2=0$. This implies $\delbar\mu=0$ and $\mu$ defines a class in $H^1((M,J), \Theta_{(M,J)})$. Since every left-invariant function is constant and the contraction of a vector of type $(1,0)$ with a form of type $(0,2)$ is zero the Schouten bracket is given by $$[\bar\alpha\tensor X, \bar\beta\tensor Y]:=\bar\beta\wedge (i_Y\del\bar \alpha) \tensor X+ \bar\alpha \wedge (i_X\del\bar\beta)\tensor Y+\bar\alpha\wedge \bar \beta\tensor [X,Y].$$ We compute the first two steps of the iterative solution $\Phi$ with $\Phi_1=\mu$ of the Maurer-Cartan equation. Since $\del\bar\omega^3=\del\bar\omega^4=0$ we get $$\begin{aligned} [\mu,\mu]&= 2\bar\omega^3\wedge\bar\omega^4\tensor [X_1, X_2]\\ &=-\delbar(2\bar \omega^7\tensor X_6).\end{aligned}$$ We see that the obstruction in degree 2 vanishes. Following the recursion [[(\[Phi\])]{}]{} we set $\Phi_2=2\bar\omega^7\tensor X_6$ and hence $$\begin{aligned} [\Phi_1, \Phi_2]&= [\bar \omega^3\tensor X_1, 2\bar\omega^7\tensor X_6] +[\bar \omega^4\tensor X_2, 2\bar\omega^7\tensor X_6]\\ &= 2\bar\omega^3\wedge(i_{X_1}\del\bar\omega^7)\tensor X_6 +2\bar\omega^4\wedge(i_{X_2}(\omega^1\wedge\bar\omega^5))\tensor X_6\\ &= 2\bar\omega^3\wedge\bar\omega^5\tensor X_6.\end{aligned}$$ It is immediate from the equations that this $2$-form with values in the tangent bundle is not exact and hence there is a non-vanishing obstruction in degree three.
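One way to spell out this last step (a verification inside the subcomplex of left-invariant forms, which is the complex used throughout this example): since $\delbar(\bar\alpha\tensor X)=\delbar\bar\alpha\tensor X-\bar\alpha\wedge\delbar X$ on such forms, it suffices to note that the only left-invariant $(0,1)$-forms with non-vanishing $\delbar$ are $\bar\omega^6$ and $\bar\omega^7$, with $$\delbar\bar\omega^6=\bar\omega^1\wedge\bar\omega^2, \qquad \delbar\bar\omega^7=\bar\omega^3\wedge\bar\omega^4,$$ while the only basis vector field with non-vanishing $\delbar$ is $X_5$, with $\delbar X_5=-\bar\omega^1\tensor X_7$. Hence the $X_6$-component of any exact left-invariant $(0,2)$-form with values in the tangent bundle lies in the span of $\bar\omega^1\wedge\bar\omega^2\tensor X_6$ and $\bar\omega^3\wedge\bar\omega^4\tensor X_6$, which does not contain $\bar\omega^3\wedge\bar\omega^5\tensor X_6$.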
Maria Laura Barberis, Isabel G. Dotti, and Misha Verbitsky. Canonical bundles of complex nilmanifolds, with applications to hypercomplex geometry, 2007, arXiv:0712.3863v3 [math.DG].
Ciprian Borcea. Moduli for Kodaira surfaces. Compositio Math. 52 (1984), no. 3, 373–380.
F. Catanese. Moduli of algebraic surfaces. In Theory of moduli (Montecatini Terme, 1985), volume 1337 of Lecture Notes in Math., pages 1–83. Springer, Berlin, 1988.
S. Console and A. Fino. Dolbeault cohomology of compact nilmanifolds. Transform. Groups 6 (2001), no. 2, 111–124.
Fabrizio Catanese and Paola Frediani. Deformation in the large of some complex manifolds. II. In Recent progress on some problems in several complex variables and partial differential equations, volume 400 of Contemp. Math., pages 21–41. Amer. Math. Soc., Providence, RI, 2006.
Luis A. Cordero, Marisa Fernández, Alfred Gray, and Luis Ugarte. Compact nilmanifolds with nilpotent complex structures: Dolbeault cohomology. Trans. Amer. Math. Soc. 352 (2000), no. 12, 5405–5433.
S. Console, A. Fino, and Y. S. Poon. Stability of abelian complex structures. Internat. J. Math. 17 (2006), no. 4, 401–416.
Gil R. Cavalcanti and Marco Gualtieri. Generalized complex structures on nilmanifolds. J. Symplectic Geom. 2 (2004), no. 3, 393–410.
Richard Cleyton and Yat Sun Poon. Differential Gerstenhaber algebras associated to nilpotent algebras, 2007, arXiv:0708.3442v2 [math.AG].
Étienne Ghys. Déformations des structures complexes sur les espaces homogènes de $\mathrm{SL}(2,\mathbb C)$. J. Reine Angew. Math. 468 (1995), 113–138.
G.-M. Greuel, G. Pfister, and H. Schönemann. Singular 3.0. A computer algebra system for polynomial computations, Centre for Computer Algebra, University of Kaiserslautern, 2005. http://www.singular.uni-kl.de.
Daniel Huybrechts. Complex geometry: An introduction. Universitext. Springer-Verlag, Berlin, 2005.
K. Kodaira and D. C. Spencer. On deformations of complex analytic structures. I, II. Ann. of Math. (2) 67 (1958), 328–466.
M. Kuranishi. On the locally complete families of complex analytic structures. Ann. of Math. (2) 75 (1962), 536–577.
L. Magnin. Sur les algèbres de Lie nilpotentes de dimension $\leq 7$. J. Geom. Phys. 3 (1986), no. 1, 119–144.
A. I. Malcev. On a class of homogeneous spaces. Amer. Math. Soc. Translation 1951 (1951), no. 39, 33 pp.
C. Maclaughlin, H. Pedersen, Y. S. Poon, and S. Salamon. Deformation of 2-step nilmanifolds with abelian complex structures. J. London Math. Soc. (2) 73 (2006), no. 1, 173–193.
Iku Nakamura. Complex parallelisable manifolds and their small deformations. J. Differential Geometry 10 (1975), 85–112.
Katsumi Nomizu. On the cohomology of compact homogeneous spaces of nilpotent Lie groups. Ann. of Math. (2) 59 (1954), 531–538.
Yat Sun Poon. Extended deformation of Kodaira surfaces. J. Reine Angew. Math. 590 (2006), 45–65, arXiv:math.DG/0402440.
Sönke Rollenske. Nilmanifolds: Complex structures, geometry and deformations, 2007, arXiv:0709.0467v1 [math.AG].
Yusuke Sakane. On compact complex parallelisable solvmanifolds. Osaka J. Math. 13 (1976), no. 1, 187–212.
Hsien-Chung Wang. Complex parallisable manifolds. Proc. Amer. Math. Soc. 5 (1954), 771–776.
Jörg Winkelmann. Complex analytic geometry of complex parallelizable manifolds. Mém. Soc. Math. Fr. (N.S.) 72–73 (1998), x+219 pp.
{ "pile_set_name": "ArXiv" }