Q: ng-mouseover not working

I am at the very beginning of learning Angular. Right now I am trying to implement an ng-repeat div which is populated from a collection. I also want to implement a mouseOver function which changes the text in a paragraph when I hover over one of the elements.

<!DOCTYPE html>
<html ng-app="MyApp">
<head>
    <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
    <script type="text/javascript">
        var app = angular.module('MyApp', []);

        app.controller('RezeptController', function ($scope) {
            this.rezepte = rezeptCollection;
            this.mouseOverElement = function (element) {
                this.msg = "Mouse Over: " + element.name;
            }
        });

        var rezeptCollection = [
            {name: 'Okroshka', herkunft: 'Russland'},
            {name: 'Sushi', herkunft: 'Japan'}
        ];
    </script>
    <title></title>
    <meta charset="utf-8" />
</head>
<body class="container" ng-controller="RezeptController as rezepte">
    <div ng-repeat="rezept in rezepte.rezepte">
        <div ng-mouseover="mouseOverElement(element)">
            {{rezept.name}}
        </div>
    </div>
    <p>{{ msg }}</p>
</body>
</html>

This code does get the job of displaying the elements done. Unfortunately the mouseOverElement does not trigger. I have to admit that I did not understand the scope concept entirely. So what I tried was to change the app.controller definition to:

app.controller('RezeptController', function ($scope) {
    $scope.rezepte = rezeptCollection;
    $scope.mouseOverElement = function (element) {
        $scope.msg = "Mouse Over: " + element.name;
    }
});

This does not fix the problem; on top of that, the items are not shown at all. Please help me understand what I am missing here.

A: I believe your issue stems from the fact that you are using the "RezeptController as rezepte" notation, which is good practice, but then being inconsistent in how you access things in that scope. You need to make sure you are prefixing any scope variable or function calls with rezepte. It is also good practice to take the confusion out of this by aliasing it as rezepte in your controller:

<!DOCTYPE html>
<html ng-app="MyApp">
<head>
    <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
    <script type="text/javascript">
        var app = angular.module('MyApp', []);

        app.controller('RezeptController', function ($scope) {
            var rezepte = this;
            rezepte.rezepte = rezeptCollection;
            rezepte.mouseOverElement = function (element) {
                rezepte.msg = "Mouse Over: " + element.name;
            }
        });

        var rezeptCollection = [
            {name: 'Okroshka', herkunft: 'Russland'},
            {name: 'Sushi', herkunft: 'Japan'}
        ];
    </script>
    <title></title>
    <meta charset="utf-8" />
</head>
<body class="container" ng-controller="RezeptController as rezepte">
    <div ng-repeat="rezept in rezepte.rezepte">
        <div ng-mouseover="rezepte.mouseOverElement(rezept)">
            {{rezept.name}}
        </div>
    </div>
    <p>{{ rezepte.msg }}</p>
</body>
</html>
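A note on why the asker's second attempt also broke the listing (a point the accepted answer does not spell out): with ng-controller="RezeptController as rezepte" still in the markup, the rezepte. prefix resolves to the controller instance, not to $scope, so once the data moved to $scope.rezepte the expression rezepte.rezepte became undefined. A minimal, purely illustrative sketch of the $scope variant — assuming the alias is dropped from the template — would look like this:

<!-- Hypothetical $scope-based markup: no "as rezepte" alias, no "rezepte." prefixes. -->
<body class="container" ng-controller="RezeptController">
    <div ng-repeat="rezept in rezepte">
        <!-- pass the loop variable "rezept"; "element" is undefined in the repeat scope -->
        <div ng-mouseover="mouseOverElement(rezept)">{{ rezept.name }}</div>
    </div>
    <p>{{ msg }}</p>
</body>

Either style works; the key is consistency — all controller-as (with the rezepte. prefix everywhere), or all $scope (with no prefix at all).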
{ "perplexity_score": 1605.2, "pile_set_name": "StackExchange" }
[Foot reflex zone massage--general practice and evaluation]. Reflexology is a frequently used method of complementary medicine. This review deals with the history, theory, and practice of this technique. Furthermore, the results of the published clinical trials are discussed.
{ "perplexity_score": 578.5, "pile_set_name": "PubMed Abstracts" }
Q: Mongoengine Link to Existing Collection I'm working with Flask/Mongoengine-MongoDB for my latest web application. I'm familiar with Pymongo, but I'm new to object-document mappers like Mongoengine. I have a database and collection set up already, and I basically just want to query it and return the corresponding object. Here's a look at my models.py... from app import db # ---------------------------------------- # Taking steps towards a working backend. # ---------------------------------------- class Property(db.Document): # Document variables. total_annual_rates = db.IntField() land_value = db.IntField() land_area = db.IntField() assessment_number = db.StringField(max_length=255, required=True) address = db.StringField(max_length=255, required=True) current_capital_value = db.IntField valuation_as_at_date = db.StringField(max_length=255, required=True) legal_description = db.StringField(max_length=255, required=True) capital_value = db.IntField() annual_value = db.StringField(max_length=255, required=True) certificate_of_title_number = db.StringField(max_length=255, required=True) def __repr__(self): return address def get_property_from_db(self, query_string): if not query_string: raise ValueError() # Ultra-simple search for the moment. properties_found = Property.objects(address=query_string) return properties_found[0] The error I get is as follows: IndexError: no such item for Cursor instance This makes complete sense, since the object isn't pointing at any collection. Despite trolling through the docs for a while, I still have no idea how to do this. Do any of you know how I could appropriately link up my Property class to my already extant database and collection? A: The way to link a class to an existing collection can be accomplished as such, using meta: class Person(db.DynamicDocument): # Meta variables. meta = { 'collection': 'properties' } # Document variables. name = db.StringField() age = db.IntField() Then, when using the class object, one can actually make use of this functionality as might be expected with MongoEngine: desired_documents = Person.objects(name="John Smith") john = desired_documents[0] Or something similar :) Hope this helps!
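To tie the answer back to the question's own model, here is a hedged sketch — the collection name "property" is an assumption; substitute whatever name the existing collection actually has, e.g. as reported by db.getCollectionNames() in the mongo shell:

from app import db

class Property(db.Document):
    # Point MongoEngine at the pre-existing collection instead of letting it
    # derive a collection name from the class name.
    meta = {'collection': 'property'}

    address = db.StringField(max_length=255, required=True)
    land_value = db.IntField()
    # ... remaining fields exactly as declared in the question ...

def get_property_from_db(query_string):
    if not query_string:
        raise ValueError()
    # .first() returns None instead of raising IndexError when nothing matches.
    return Property.objects(address=query_string).first()

Note that get_property_from_db reads more naturally as a module-level function (or a custom queryset method) than as an instance method on the Document, since it never uses self.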
{ "perplexity_score": 1219.8, "pile_set_name": "StackExchange" }
How do you live with yourself if you know people don't really care, they just say they do? If the fake smiles are outworn? You're drowning but you'd rather die than go through all that pain again? #Depression - Instawell
{ "perplexity_score": 732.2, "pile_set_name": "Pile-CC" }
'hello\ world'
{ "perplexity_score": 1693.5, "pile_set_name": "Github" }
Margaret Constance Williams (born 15 April 1997) is an English actress making her television debut in HBO's Game of Thrones, where she plays the role of Arya Stark. She was part of the initial starring cast and remains a member of the starring cast for the eighth season.

Career

Maisie is right-handed, but plays Arya as left-handed, as in the novels. Maisie's home village is Clutton, in Somerset. The audition day for Game of Thrones was the same day that her school was having a field trip to a local pig farm - she nearly skipped the audition because she really wanted to see the pig farm (and didn't think she'd get the part anyway), but ultimately decided to go to the audition.[1] Williams has not read the books, but has read summaries of what happens to Arya in later books. She stated that she chose not to read the books because the TV adaptation abbreviates a large amount of material, and she was worried that she would get confused, basing her reactions on points from the books which don't exist for Arya in the TV version.[2] On May 11th, 2017, it was announced that Williams will be playing Wolfsbane in the next installment in the X-Men movie franchise, New Mutants. Wolfsbane is a mutant with the power to turn into a werewolf (so she's still playing a "wolf girl" in a sense after Game of Thrones).[3] Williams joins her on-screen sister Sophie Turner (Sansa) in the X-Men universe, after Turner was cast as young Jean Grey (a telepath) in X-Men: Apocalypse.
{ "perplexity_score": 192.4, "pile_set_name": "OpenWebText2" }
Robert J. "Bob" Koski

Robert J. "Bob" Koski, age 64, of Two Harbors passed away Monday, November 6, 2017 at St. Luke's Hospital in Duluth. He was born May 28, 1953 in Virginia, MN to William and Mildred (Garson) Koski. Bob lived his early years in Virginia and was always proud to say he was a Ranger. Bob's family moved to Bloomington, MN where he graduated from J. F. Kennedy High School. Eventually Bob came to Duluth and worked at Diamond Tool. He later moved to Two Harbors to start his own business. Bob was known as the barber at the "7th Avenue Clip Joint." Bob also was a very good walleye fisherman. In his spare time he enjoyed working on boats and motors. Bob got to be quite a collector of old outboards and fishing tackle. Bob was quick witted and had a good sense of humor. He will be missed by his many friends and family.
{ "perplexity_score": 188.5, "pile_set_name": "Pile-CC" }
To determine whether poisoning with the organophosphate insecticide methamidophos results in a syndrome of persistent subclinical sensory and motor neuropathy, a clinical and epidemiologic study will be undertaken among a population of 50 seriously poisoned agricultural workers. Two comparison groups will also be examined -- one in which a history of methamidophos exposure is common, but with no history of poisoning, and the other with no history of exposure. Specific aims are:
1. To determine whether inhibition of lymphocyte neuropathy target esterase (NTE) measured in peripheral lymphocytes is a sensitive and specific index of peripheral neurotoxicity.
2. To determine whether previous poisoning with methamidophos results in diminished motor and sensory function, as reflected in electrophysiologic studies, pinch strength, and elevated vibrotactile threshold.
3. To determine whether a dose-response relationship exists between lymphocyte NTE inhibition and motor or sensory function.
4. To determine whether there exist threshold levels of methamidophos exposure below which either sensory or motor neuropathy is no longer evident.
The public health significance of this study is that methamidophos and other organophosphate neurotoxins are widely used by workers in the United States and throughout the world.
{ "perplexity_score": 473, "pile_set_name": "NIH ExPorter" }
Tidewater Time Capsule: History Beneath the Patuxent

Years ago few people understood the value of the submerged cultural resources beneath the waters of the Chesapeake region. Recently, the search for the region's underwater heritage has been validated, and has initiated an intensified attempt to study and preserve the priceless resources in the waters of the Bay and its tributaries. This is the story of one such effort. The Patuxent Project was the first underwater archaeological survey of an entire river system. In this multiphase investigation, archaeologists sought such diverse resources as inundated aboriginal and historic sites, harbor facilities, military establishments, battle sites, shipwrecks, and, in particular, the final resting place of Joshua Barney's famed Chesapeake Flotilla from the War of 1812.
{ "perplexity_score": 355.2, "pile_set_name": "Pile-CC" }
Q: Need advice on microcontroller to switch relay on/off I am trying to connect an OMRON G2RL-2A DPST 12 VDC relay (G2RL relay datasheet) to my PICAXE 20x2 microcontroller. I also have a ULN2803A relay driver (ULN2803A datasheet). I managed to get the microcontroller to work quite well, so I don't have a problem with it. The question is this: The ULN2803 does not have a V+ pin, so does that mean it does not require power? It only has a GND pin, which I believe should be connected to a GND (probably the GND of the microcontroller?) Now, I think I have to connect an output of the PICAXE 20x2 to an input of ULN2803. After this, where should I connect the respective output of the ULN2803 on the relay? Also do I have to use a 12 VDC power for my relay? Or maybe I can use the same 5 V power of my microcontroller? If not, where should I connect this 12 V on the relay? Sorry for total noob questions, I hope you can guide me. UPDATE I can not still make this circuit to work properly. Here is the work I have done so far, please have a look. The problem is, if I connect ULN2803 pin 10 to +12 V or GND, the relays either dont work or just lock the current state. Where should I connect ULN2803 pin 10? A: V+ goes through the relay, into the collector of the darlington pair inside the driver, then down to ground. A second V+ connection to pin 10 acts as a 'flywheel' diode to stop back-emf. Connect GND to ground (anywhere), one of the outputs to the relay, and the other side of the relay to +12V. The logic '1' signal is enough to power the base of the darlington pair to turn it on/off. It needs no power supply of its own. Here is the circuit built on breadboard. The connectors along the top are: Black: Common ground Yellow: +5V Blue: +3.3V Green: +12V Red: -12V
{ "perplexity_score": 707, "pile_set_name": "StackExchange" }
Understanding Islam Series Understanding Islam Series will be happening at Redlands Peace Academy starting Tuesday April 10th at 7pm. Please come and learn about your Muslim co-worker/neighbor/pharmacist and possibly your physician
{ "perplexity_score": 984.7, "pile_set_name": "Pile-CC" }
Q: Strange issue with android volume I have created an android application , which works just fine . in the application i am using media player to play .wav files , which also is working fine. the only problem comes , when i try to increase the volume of the application , just when i touch the up-volume button , the volume for call increases but the volume for application remains same any suggestion , what can be the effecting factor ?? here is my code to play audio MediaPlayer mediaPlayer;//as global var in activity mediaPlayer=new MediaPlayer(); // inside onCreate() method whenever i need to play audio i call this method private void playAudio() { Log.d("hussi","before the media "); try { mediaPlayer.stop(); Log.d("hussi","so the file name passed is"+gif_char+".wav"); AssetFileDescriptor descriptor = getApplicationContext().getAssets().openFd(gif_char+".wav"); long start = descriptor.getStartOffset(); long end = descriptor.getLength(); mediaPlayer.reset(); mediaPlayer.setDataSource(descriptor.getFileDescriptor(), start, end); mediaPlayer.prepare(); mediaPlayer.start(); } catch (IllegalArgumentException e) { Log.d("hussi","1===>>>"+e); e.printStackTrace(); Log.d("hussi",e.getMessage()); } catch (SecurityException e) { Log.d("hussi","2===>>>"+e); e.printStackTrace(); } catch (IllegalStateException e) { Log.d("hussi","3===>>>"+e); e.printStackTrace(); } catch (IOException e) { Log.d("hussi","4===>>>"+e); e.printStackTrace(); }catch (Exception e) { Log.d("hussi","5===>>>"+e); e.printStackTrace(); } } A: setVolumeControlStream(AudioManager.STREAM_MUSIC); in your onCreate() method. That tells the OS that the volume buttons should affect the "media" volume when your application is visible, and that's the volume it uses for your application. Remember your application needs focus.
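For context, a minimal sketch of where that call usually sits — the class name and overall structure below are illustrative, not taken from the question:

import android.app.Activity;
import android.media.AudioManager;
import android.media.MediaPlayer;
import android.os.Bundle;

public class PlaybackActivity extends Activity {

    private MediaPlayer mediaPlayer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Route the hardware volume keys to the media (music) stream while this
        // activity is in the foreground, instead of the ringer/in-call volume.
        setVolumeControlStream(AudioManager.STREAM_MUSIC);

        mediaPlayer = new MediaPlayer();
    }
}

Since MediaPlayer plays on STREAM_MUSIC by default, this makes the volume buttons adjust the same stream the .wav files are actually played on.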
{ "perplexity_score": 2315.4, "pile_set_name": "StackExchange" }
Recipes to Share with Your Dogs I cook for the dogs just about every day. Some of my favorite recipes to make are ones that can be shared by the humans in the house (since I don’t cook for them quite as often). Here are three shareable recipes from this weekend: French Toast My grocery store puts out a rack of breads whose sell-by date is today and marks them down significantly. It’s an opportunity to pick up delicious artisan loaves at great prices. This week I chose a loaf of rosemary potato bread: I cut the bread into cubes. (Of course you can make the more traditional slices of French toast if you prefer, which I sometimes do. But I always cube the portion I’m feeding to the little dogs anyway so this time I just decided to cube the entire loaf.): In a mixing bowl, I whisked 15 eggs with a generous splash of vanilla and soaked the cubed bread until the eggs were absorbed. Then I placed about half the mixture at a time into a hot, buttered frying pan and sprinkled heavily with cinnamon (I have no self control when it comes to cinnamon): After browning the one side, I turned the bread over and browned the other. Then, into the dog bowls: I added sliced bananas to the bowls before feeding but you could feed as-is or with the toppings of your choice. *** Cinnamon Date Scones 1 egg 1 C buttermilk 2 T honey 2 C whole wheat flour 1 1/2 C all purpose flour 1 C chopped dates 2 tsp baking powder 1 tsp baking soda 1/2 tsp salt lots of cinnamon! 1/2 C melted butter Whisk the egg in a bowl then mix in buttermilk and honey. Stir in dry ingredients until partially mixed then drizzle in melted butter while stirring to mix thoroughly. Shape the dough into a flat round and cut into six pieces on a greased baking sheet, pulling them slightly apart from one another. Bake at 400 degrees F for 25 minutes: *** Almond Scones 3 C almond meal 2 tsp baking powder 1/4 C coconut oil 1 heaping T honey 1 tsp vanilla 2 eggs Whisk the eggs in bowl then mix in the remaining wet ingredients. (Note: I had never used coconut oil before making this recipe and found it challenging to get it thoroughly mixed. You could use another nut oil, such as almond or macadamia, if you have it.) Stir in dry ingredients until completely mixed. Shape the dough into a flat round and cut into six pieces on a greased baking sheet, pulling them slightly apart from one another. Bake at 300 degrees F for 20 minutes: I found these a bit bland for my taste but no complaints from the dogs. Karen F Eucritta These sound delicious, especially the cinnamon date scones – I wonder if the recipe would work with figs? I should bookmark and try it out when our figs come ripe. Sadly, though, our Bertie doesn’t much care for sweets beyond the occasional bit of fruit. I once gave him a bit of pastry baked with butter, cinnamon and sugar on it – is there a proper name for it? it’s what my grandmother always made with leftover shortcrust – and while he ate it, he did so with a slightly aggrieved look, as if asking why I had to muck up perfectly good crunchy crust with all that *stuff.* I’m sure figs would work. Mostly how I come up with recipes for the dogs is to modify recipes I find online, substituting ingredients as I see fit. This recipe didn’t call for dates, or cinnamon for that matter, I just like those things and thought they might be good. My thoughts don’t always translate to something I’d be willing to eat but in this case, it turned out all right. 
My mom used to put the cinnamon and sugar on the leftover crust and roll it up before baking so the name was easy: cinnamon rolls! They lasted like 30 seconds out the oven. Eucritta I asked my father today if he remembered them having a proper name, and he thought my grandmother called them ‘sugar crusts.’ Whenever they lasted long enough to be called much of anything. I was inspired to share lunch with Bertie today: chunks of stale sourdough toasted a bit with dripping in a skillet, with eggs cracked on top and some grated cheddar. I added onions and hot sauce to mine, but he liked his fine without. I’ve found he can eat most things I do, which makes sense, given how long dogs have been living with people. What I’ve been most surprised by though is how much he enjoys veggie salads. Give him a wilted salad of cabbage, green beans, black beans and some nuts, and he chows down like the proper little Californian he is. That’s my view as well – dogs were domesticated and kept for thousands of years on “table scraps”. In the past century, the pet food industry cropped up and tried to convince us this is dangerous and we should buy their products instead – all the while poisoning our pets with melamine, salmonella, aflatoxin and whatever else. I’d rather make use of my leftovers and share food that I eat myself with my dogs. The one thing my dogs don’t eat is raw greens (lettuce, spinach, etc.). None ever seemed to care for those. D. Lake I feed homemade food to all ages and haven’t had any refusals yet. While I consider the French toast with banana to be a meal, I would add some protein (buttermilk, yogurt or ricotta cheese for example) to the scones before feeding or simply use them as treats. And the usual caveats apply about maintaining a wide variety of foods in the diet and not relying on just a few recipes for every day feeding.
{ "perplexity_score": 666.2, "pile_set_name": "Pile-CC" }
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* P operation -- attempt to enter a critical section */
int semaphore_P(int sem_id)
{
    struct sembuf sb;
    sb.sem_num = 0;
    sb.sem_op = -1;         /* P operation: decrement the semaphore value */
    sb.sem_flg = SEM_UNDO;  /* if the process dies, undo the semaphore request */
    if (semop(sem_id, &sb, 1) == -1) {
        fprintf(stderr, "semaphore_P failed\n");
        return 0;
    }
    return 1;
}

/* V operation -- leave the critical section */
int semaphore_V(int sem_id)
{
    struct sembuf sb;
    sb.sem_num = 0;
    sb.sem_op = 1;          /* V operation: increment the semaphore value */
    sb.sem_flg = SEM_UNDO;  /* if the process dies, undo the semaphore request */
    if (semop(sem_id, &sb, 1) == -1) {
        fprintf(stderr, "semaphore_V failed\n");
        return 0;
    }
    return 1;
}
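The two helpers above assume a System V semaphore set that already exists; nothing in the snippet creates or initialises it. The following is a minimal, illustrative sketch (not part of the original file) of how such a set is typically created with semget, initialised with semctl, and then used to bracket a critical section:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Prototypes for the helpers defined above (assumed to be in the same file). */
int semaphore_P(int sem_id);
int semaphore_V(int sem_id);

/* Many platforms (e.g. Linux) require the caller to define union semun itself. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

int main(void)
{
    /* Create a private set containing a single semaphore. */
    int sem_id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);
    if (sem_id == -1) {
        perror("semget");
        return EXIT_FAILURE;
    }

    /* Initialise its value to 1 so it acts as a binary semaphore (mutex). */
    union semun arg;
    arg.val = 1;
    if (semctl(sem_id, 0, SETVAL, arg) == -1) {
        perror("semctl");
        return EXIT_FAILURE;
    }

    if (semaphore_P(sem_id)) {   /* enter the critical section */
        /* ... work that must not interleave with other processes ... */
        semaphore_V(sem_id);     /* leave the critical section */
    }

    /* Remove the semaphore set when finished. */
    semctl(sem_id, 0, IPC_RMID);
    return EXIT_SUCCESS;
}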
{ "perplexity_score": 4513.3, "pile_set_name": "Github" }
ABOUT HOTELS.COM

Looking for a hotel in your next travel destination? Hotels.com provides travelers with unbeatable prices on over 325,000 hotels in about 19,000 different locations. With great deals on hotels around the world, you can be sure to find one that will suit all of your needs at a great price. Use a hotels.com coupon or promo code from below to make your deal even sweeter.

Hotels.com's inventory includes hotels, commercial lodging, and condos from around the world to house its millions of clients each year. Customer reviews from these clients allow new travelers to gain an inside look into the advantages and disadvantages of each location and compare hotels against one another for cost, location, and convenience as well as services and amenities. With search available by destination, specific hotel, landmark you would like to be nearby, or address, hotels.com makes booking easier than ever.

You can sort by a number of different qualities like price, star rating, guest rating, and more to find the best fit for you. If you are traveling with kids, find family-friendly destinations with fun activities. If you are traveling alone or looking for a quiet getaway, instead search for an adults-only option to enjoy your stay. Get great deals on all bookings with hotels.com promo codes and coupons.

With 85 websites in 34 different languages, Hotels.com is accessible to people of all nationalities and backgrounds. Wherever you are, hotels.com is available on a computer, smartphone, or tablet for your convenience. Just open up the site, select your preferences, and type in a hotels.com promo code or coupon before you book to get the best deal you possibly can for your last minute stay!

Hotels.com's loyalty program allows guests to claim special discounts and coupons on most hotels across the world. With membership, a customer claims a discount for every 10 nights spent at any hotel. Free nights are redeemable at any time, so take the opportunity to spend a spontaneous and lavish night at your nearest hotel! This rewards program is just another way to score great deals from hotels.com in over 45 different countries.

Whether you have a last minute trip or are planning spring break months in advance, use a hotels.com coupon or promo code to book at the best price. Use the Deal Of The Day feature to discover great travel opportunities, and discover great hotels using the recommended destinations feature. Look below to find one that qualifies for your trip! Simply click the "get coupon" button to redeem, or type in the code that you find when you are checking out! Bon Voyage!
{ "perplexity_score": 633, "pile_set_name": "Pile-CC" }
Meet the Author: Asali Solomon, “Disgruntled” Calling all ¡Adelante! book club members! Join AAUW for our book discussion with former AAUW American Fellow Asali Solomon, author of Disgruntled. Submit your questions in advance using the form below, and come join us live to hear what the author has to say! Kenya Curtis is only 8 years old, but she knows that she’s different. It’s not because she’s black — most of the other students in the fourth-grade class at her West Philadelphia elementary school are, too. Maybe it’s because she celebrates Kwanzaa, or because she’s forbidden from reciting the Pledge of Allegiance. Maybe it’s because she calls her father “Baba” instead of “Daddy.” What Kenya does know is that her difference is connected to what her Baba calls “the shame of being alive.” Effortlessly funny and achingly poignant, Asali Solomon’s long-awaited debut novel follows Kenya from West Philadelphia to the suburbs, from public school to private, from childhood through adolescence, as she grows increasingly disgruntled by her inability to find any place or thing or person that feels like home. A coming-of-age tale, a portrait of Philadelphia in the late ‘80s and early ‘90s, an examination of the impossible double binds of race, Disgruntled is a novel about the desire to rise above the limitations of the narratives we’re given and the painful struggle to craft fresh ones we can call our own. About ¡Adelante! Book of the Month Club Female authors are much less likely than male authors to have their books reviewed in major publications like the New York Times and Harper’s Magazine. That’s why we created the ¡Adelante! Book of the Month Club: to spotlight engrossing stories and writing by women from all backgrounds. We also connect our members to some of the authors we feature through web discussions. Stay in the know about upcoming ¡Adelante! events, or create your own book club.
{ "perplexity_score": 259.9, "pile_set_name": "Pile-CC" }
Samuel May Williams Samuel May Williams (October 4, 1795 – September 13, 1858) was an American businessman, politician, and close associate of Stephen F. Austin. As a teenager, Williams started working in the family's mercantile business in Baltimore. He spent time in South America and New Orleans, fleeing the latter because of debts. He landed in Mexican Texas in 1822, having learned French and Spanish. Stephen F. Austin hired Williams for his colony in 1824, clerking and later adding the title of secretary to the ayuntamiento, a local government established for the colony by the Mexican state of Coahuila and Texas. He worked for Austin for about a decade. In 1834, Williams quit as secretary of the Austin Colony to work as a merchant, then formalized a partnership of with Thomas F. McKinney. The next year he also made deals with the provincial government in Monclova for a bank charter and for large tracts of land in Texas. At that time he served the Brazos District in the Coahuila and Texas legislature. However, by 1836, Williams and his partner, Thomas F. McKinney, sided with the Texians against Mexico. Williams borrowed money against his family's lines of credit, which the partners applied to ships and ammunition on behalf of the rebel government. After Texas gained independence, Williams focused most of his business activities in Galveston. Through the McKinney and Williams partnership, he was invested in the Galveston City Company, and established diverse business interests there. He represented Galveston County for one term in the Republic of Texas legislature. He was a partner in the McKinney and Williams mercantile business until it was acquired by Henry Howell Williams in 1842. After 1842, Williams worked toward establishing a bank in Texas. He briefly returned to public service when he accepted a diplomatic mission to negotiate a treaty with Mexico, which had still not recognized the sovereignty of the Republic of Texas. In the first year of Texas statehood, he ran twice for the U.S. House of Representatives, losing both times. Williams then returned focus to introducing the first bank in Texas, succeeding in 1848. The Commercial & Agricultural Bank (C & A Bank) was the only institution to legally issue paper money, though his charter and the bank's practices faced legal challenges throughout its existence. These included anti-banking legislation and scrutiny from various Texas Attorneys General. Favorable decisions rendered by the district courts saved Williams and his bank for about four years. C&A Bank remained solvent during the Panic of 1857, but anti-banking politics were on the rise. Many of Williams' friends and allies distanced themselves from the bank and encourage him to give up the project, and he died in 1858. Early life Samuel May Williams was born October 4, 1795, in Providence, Rhode Island, to Howell and Dorothy (Wheat) Williams. His ancestors arrived in New England in the 1630s, and his family tree included a signer of the Declaration of Independence and a president of Yale University. Williams had four brothers and three sisters. Williams' immediate family consisted of sailors and merchants. Howell Williams was a ship captain, and Samuel's uncle, Nathaniel Felton Williams, was a commission merchant in Baltimore. After some schooling in his native city, he apprenticed to Nathaniel. A younger brother, also named Nathaniel Felton Williams, succeeded Samuel as uncle Nathaniel's apprentice. 
Williams left Baltimore to oversee freight bound for Buenos Aires, where he stayed to conduct further business in South America. The Williams family conducted a robust trade with Argentina, shipping food in exchange for cash or hides. There Williams learned the Spanish and French languages, and his business dealings gave him experience in navigating Spanish business and political customs. While Joe B. Frantz and Ruth G. Nichols estimate his arrival to New Orleans as the year 1815, Margaret Swett Henson disputes this as a possibility, though she is less certain about timeline. In 1818, Williams boarded at a hotel in Washington, D. C. (where pirate and New Orleans denizen Jean Lafitte had been a resident), and the next year, Williams was living and working in New Orleans. Career Gone to Texas Williams is confirmed to be in Texas no later than 1821, when he sold tobacco to the Karankawa people on Galveston Island. He moved to Texas in May 1822, when Williams left New Orleans and the United States to escape debts and poor economic prospects which resulted from the Panic of 1819. He and a female companion registered under the names of Mr. and Mrs. E. Eccleston, and passed over the gangplanks of the sloop Good Intent. The sailing ship met a storm in the Gulf, and it finally made landfall at the mouth of the Colorado River on June 18. At this time Texas was a part of Mexico. Moses Austin had negotiated a contract to colonize part of Texas, but his death in 1821 put the deal in limbo. As Williams first arrived in Texas, Stephen F. Austin, the son of the deceased empresario, traveled to Mexico City in order to reinstate and implement the Austin Colony. During Austin's visit, word traveled to Mexico City of a Mr. Eccleston, who was literate in English, Spanish, and French. By late 1823, Austin had returned to Texas and met Mr. Eccleston, who had established a local reputation by clerking and teaching school. Austin hired him in November. This is around the time that Williams reverted to his birth name and earlier identity. For a time both Williams and Austin had lived and worked in the same area of New Orleans, but there is no direct evidence that they had known each other prior to their meeting in 1823. Stephen F. Austin Colony Williams was with Austin when the new empresario selected a location for his colonial headquarters, San Felipe de Austin. Austin first hired Williams as a translator and clerk, whose language skills, both in English and Spanish, were necessary to fulfill his responsibilities. Another critical skill was his handwriting. In an era when all documents were written by hand, the ability to write legibly was critical to properly reading them later. Two other assets of Williams were his knowledge of Mexican culture and Spanish business practices. In the fall of 1824, Austin appointed Williams as a recording secretary for the Austin Colony. Though Mexico had not yet established an ayuntamiento (local government) in the colony, Austin had told Jose Antonio Saucedo about his intention to establish the recording secretary position with all of the responsibilities of a secretary for an ayuntamiento. Also in 1824, Williams received his own headright, which included two leagues (about 4,428 acres each) and three labores (about 640 acres each). Austin had promised Williams an annual salary of $1,000, ($24,434 in 2019) but the colony was not generating much revenue. Williams continued to accept additional responsibilities at the Austin Colony. 
He managed the Public Land Office, and he served as its postmaster from 1826. He served as secretary of the ayuntamiento from 1828 to 1832, a post requiring him to record official documents in Spanish and send them to the state government. Austin later claimed that Williams had been underpaid for his service and later compensated him with of land in Texas. With his existing land grant of , Williams had accumulated more than of land in Texas. After the Austin Colony Early in 1834 Williams co-founded the partnership of McKinney and Williams, setting up a warehouse at Brazoria, then relocated to Quintana at the mouth of the Brazos River. The firm operated small steamboats on the Brazos and used its warehouse to manage transfer of freight to and from the larger ships operating on the Gulf of Mexico. An internal political battle in Mexico caused the state of Coahuila and Texas to split into two capitals. Those loyal to Santa Anna (santanistas) controlled the previous capital, Saltillo, while the rival federalistas established their capital at Monclova. During meetings at the state capital, Williams bought 100 leagues of land in northeast Texas from the Monclova government at an eighty percent discount. During the trip he also secured a bank charter, while selling $85,000 worth of its stock. However, back in Anglo-Texas, the Consultation nullified the land deal when it declared all large land grants voided in November 1835. In 1835, Williams was elected as a delegate to the Coahuila and Texas Legislature, representing the district of Brazos. That legislature offered for sale 3.5 million acres of land in the Mexican state, an action which many Texans perceived as corrupt. His participation in the Monclova government aroused the resentment of such persons, many of whom were already suspicious of Williams because of his former position of power in granting land in the Austin Colony. From the perspective of Williams, a member of the federalist Monclova government, the massive land sale was a justified state action in defense of the Mexican Constitution of 1824, which was challenged by Santa Anna and the Saltillo government. The Monclova government was raising money to prepare for a possible civil war in Coahuila. Texas independence With santanista General Martín Perfecto de Cos marching from Saltillo, the Monclova legislature ended its session in late-May 1835 with most of its members fleeing the region. Many of the federalists were captured, including Williams who was taken after crossing the Rio Grande at Presidio, Texas. He was incarcerated at San Antonio, but escaped on horseback in a plot engineered by his friend, José Antonio Navarro. Williams arrived in San Felipe de Austin as an enemy of Santa Anna and Cos. At the same time, his actions in Monclova made him unpopular with Anglos in Texas. Williams went on a tour of the eastern United States in order to raise capital for his bank. Williams was selling stock in New York when he read about a possible war in Texas. He pivoted toward Texas independence while relying on financial assistance from his brother, Henry Howell Williams. He borrowed against his brother's credit to obtain the 125-ton schooner Invincible in support of a Texian naval force. In May 1836, Williams returned with ammunition and supplies loaded on his schooner, along with as many as 700 volunteers on three other boats. 
Mostly as a result of procurements Williams made in the United States in 1835, the McKinney and Williams partnership had contracted $99,000 in short-term debt on behalf of the Republic of Texas. The new government was not able to repay the debt. These loans to the Texas cause had been leveraged by letters of credit from Henry Howell Williams. Thus the Republic of Texas mounted substantial debt from the account of the McKinney and Williams partnership, which in turn had at least of portion of its financial backing from Henry Howell Williams. After independence, Williams served the Republic of Texas as its loan commissioner while serving simultaneously as a procurement officer for the Texas Navy, serving both the Sam Houston and Mirabeau Lamar administrations. Galveston Mercantile business McKinney and Williams were investors and co-founders of the Galveston City Company with Michel B. Menard. Menard hatched the development scheme in 1833, coordinating to acquire a Mexican title to bayside land at the east end of Galveston Island from Juan Seguin. The next year Galveston City Company purchased from Seguin a league and a labor, or about 4,605 acres. The McKinney and Williams investment was initially a fifty-percent share. In December 1836, the Republic of Texas announced it would validate this title in exchange for $50,000 in cash or merchandise. While there is no evidence of any payment to the Texas government, this action transferred land on the east end of the island to the Galveston City Company. Both Williams and McKinney joined the company's board of directors, and voted with the board to make lots available for sale on April 20, 1838. McKinney and Williams relocated its headquarters from Quintana to Galveston in 1838. The firm already operated a diverse set of businesses on the island. They owned the Tremont Hotel and a race track, while operating a tavern, stables, and offering houses for let. Their principal developments were their three-story warehouse and the wharf at Twenty-fourth Street. Henry Howell Williams was the managing partner of the Tremont, a hotel which operated in Galveston through 1865. In 1839, McKinney and Williams received goods from Liverpool, England, and loaded the same ship with Texas cotton, establishing direct trade between England and the Republic of Texas. McKinney and Williams divested of their commission business in 1842, selling it to Henry Howell Williams, who ran it as a division of his commission house in Baltimore. In 1839, Williams represented Galveston County in the lower house of the Congress of the Republic of Texas. McKinney and Williams used their commission house to support the Williams campaign. They offered to buy Texas Treasury notes (redbacks) for 50 cents on the dollar just as rival commission houses offered only 37.5 cents on the dollar. Substantively, he campaigned based on a conservative monetary policy in response to the Republic's devaluing currency. Williams, once used his position in the House to assert the interests of the Galveston City Company, challenging the election of John M. Allen, the town's first mayor. He led the passage of a bill to change Galveston's charter, which imposed a requirement of land ownership in order to vote in municipal elections. Effectively this reduced the number of eligible voters by half, and this paved the way for the "conservatives" to elect John W. Walton over the "liberal" John M. Allen in 1840. 
Allen transported the town's archives and stored them in his home, which was guarded by two artillery pieces. McKinney recruited a group of men to recapture the archives, and Allen surrendered them without incident. Williams chaired the finance committee for the lower house, and served as a member of the naval affairs committee. Williams allied with Sam Houston's faction against President Mirabeau Lamar's spending proposals and his Indian policy. He issued a finance report and proposal on January 12, 1840, which included two Hamiltonian recommendations: the recall of older debt instruments and stopping further issuance of unsecured debt. In addition, he proposed that Texas only accept payments in cash from importers and suggested a broad fifteen percent tariff on imported goods. This proposal was a political success, as all of its recommendations were adopted. However, inflation continued into 1841, as Texas redbacks traded at 37.5 cents on the dollar before he went to Austin and later depreciated to 12.5 cents. He sponsored a successful bill which established a state charter for the city of Galveston and another to build a local lighthouse. Williams promoted a bill to emancipate Cary McKinney, a slave of his business partner. Cary was just one of two enslaved persons emancipated in recognition to service to the Republic of Texas. Though Cary's family was not emancipated, he purchased his family members, and in this way, he reunited his own family. President Sam Houston selected Williams for a diplomatic mission to Mexico. As late as 1843, Mexico did not recognize the sovereignty of Texas, while Santa Anna offered a general amnesty and other concessions to Texas provided it agreed to re-incorporate itself into Mexico. Houston agreed to a cease fire and peace talks. He appointed Williams and George W. Hockley as envoys, charging them with entertaining a proposal which was unpopular in Texas. The two envoys accepted their official orders in September 1843, but they lacked the authority to stand behind their own agreements, which would be subject to approval by the President and the Texas Congress. According to the Secretary of State, Anson Jones, their job was to extend the process as long as possible. Meanwhile, unknown to Williams and Hockley, Texas negotiated an annexation treaty with the United States and appealed for sovereign recognition from Great Britain. The Texas War Department, on the other hand, instructed them to demand demilitarization of the Nueces Strip, the corridor in south Texas between the Rio Grande and Nueces rivers. Diplomatically, Mexico claimed all of Texas as its sovereign territory; militarily, Mexico fortified at the Nueces River. Withdrawing from the Nueces Strip was an untenable position for Mexico, though Jones had told them that Mexico would agree to it. Williams and Hockley sailed for Laredo in early October, but the venue changed to Sabinas while they in transit. After arriving, an illness postponed the meetings, so negotiations did not start until December. Their counterparts, Cayetano Montero and Alexander Ybara, did not consent to the Texans' proposal that Mexico withdraw troops beyond the Rio Grande River. Williams acquired a sugar plantation in 1844. This 1,100 acre parcel was located near the coast and west of Quintana, at the San Bernard River. Joseph and Austin Williams, two of his sons, managed the plantation for a brief period before its sale in 1850. Williams ran for US Congress twice. 
A special election in 1846 was necessary to determine representation for Texas for the few months remaining of the Twenty-ninth Congress. Williams was one of six candidates for the Texas Second district while running on a platform of national assumption of Texas's debt. Texans would benefit from the passage of such legislation; however, Williams and his former business partner had not been repaid by the Republic of Texas for its war debt. He finished second to Timothy Pillsbury. He challenged Pillsbury for the seat in November and lost by a wider margin, the last time he would run for public office. Commercial and Agricultural Bank Williams was president of the first incorporated bank to operate in Texas. The Commercial and Agricultural Bank opened in Galveston on December 30, 1847, and later established a branch in Brownsville, as well as agencies in New Orleans, New York City, and Akron, Ohio. Williams continued as the bank's president until his death. Williams exploited his latent 1835 Mexican bank charter and an old resolution from the Republic of Texas. He had permission to open a bank during the Texas Republic, but did not use this privilege. The Texas bank inspector, Niles F. Smith, never had an opportunity to execute his duties in 1837, but he was never officially relieved of his appointed post. Williams recalled inspector Smith in 1847, who certified the bank's assets in Galveston, New Orleans, and New York. Williams operated out of a building at the corner of Market and Tremont streets. Three months later the Texas Legislature proscribed the distribution of paper money, creating a civil fine of $5,000 per offense. Despite this law, $30,000 of the Commercial & Agricultural bank notes had been circulating in New Orleans that year, with another $18,000 distributed in Texas. Meanwhile, Williams maintained a large cache of specie to cover redemptions of his notes. Two members of the Texas Congress headed an opposition to the C & A Bank: Guy M. Bryan and Elisha M. Pease. Attorney General and Pease-ally John W. Harris filed a lawsuit against Williams, who retained Galveston-attorney, Ebenezer C. Allen. Harris filed a total of four suits against Williams with $75,000 in fines. In 1849, a District Court ruled that an 1841 law that gave Williams banking privileges was still valid. In 1849, Williams gained an ally in the state congress. His former partner in the mercantile trade, Thomas McKinney gained a seat in the lower house and sought to repeal the 1848 banking laws. McKinney also found common cause with Williams in relation to the incoming Attorney General, Andrew Jackson Hamilton. For McKinney it was personal, and for Williams it was business. Hamilton had prosecuted the case against Williams in district court and was then the official in charge of enforcing the 1848 banking act. McKinney took over as chairman of a special investigations committee, which determined that Hamilton had failed to report "widespread corruption". After the legislature passed a reform bill to change the office of Attorney General into an elected one, McKinney campaigned for the election of Ebenezer C. Allen, Williams's lawyer. Liquidity in Galveston conferred an advantage for local business interests to finance large infrastructure projects. The C & A bank financed the Galveston and Brazos Navigation Company, chartered in 1850. The company cut a channel through the shallow, sandy bottoms, from the coast at the west end of Galveston Island to the mouth of the Brazos River. 
In March 1852, the Texas Supreme Court ruled on the state's appeal in the Williams case. The prosecution strategy was based on challenging the legality of the charter, but the court affirmed the 1835 charter and the 1841 relief law. In response to the decision, Bryan and Hamilton devised a new strategy, emphasizing the illegality of paper money issuance, and fining anyone who issued or circulated paper money. By the end of the year, a new attorney general, Thomas J. Jennings sued officers of the C & A Bank, though it was dismissed by District Judge Peter W. Gray. Jennings continued to file similar suits against two Galveston financiers into the next year, while Gray continued to dismiss them. Early in 1857, District Judge Gray fined Robert G. Mills $100,000 after a jury found him guilty of violating the 1848 banking act. The jury in the Williams trial was deadlocked, but his legal team recommended that he consent to a fine of $2,000 with a guilty plea. A few weeks later, Gray lost his election bid for the Texas Supreme Court to Oran M. Roberts, with the court seating an anti-banking, pro-agrarian judge. The political climate did not favor Williams and his banking interests in 1857, but an economic panic was a greater threat. A bank closed for business in New York that August, and by October, three banks in New Orleans stopped paying in specie. This triggered a mass of demands for redemptions at both Galveston banks, which prompted Williams to refuse to honor some depositors' checks. An early closing triggered a sharp discount on his bank notes, but local fears were allayed after local merchants honored the Williams notes at par, and Williams reversed his policy, resuming payments in specie. Williams survived the Panic of 1857. However, political and legal prospects for his bank did not improve. Williams persisted in the banking business despite the legal challenges, and he resisted the advice of family and friends to divest of C & A Bank. Personal life Family life In 1822, Williams and a female companion arrived in Texas under the assumed names of Mr. and Mrs. E. Eccleston. He left unresolved debt in New Orleans, but there may have been other reasons for his departure. The couple lived together in San Felipe de Austin, but there is no indication of their marital status or even her name. They had a son together named Joseph Guadalupe Victoria Williams, who was born in San Felipe de Austin in 1825. The next year Williams split up with her, and she left the colony, taking their son with her. After her death, Williams brought Joseph back to Texas and raised him as a member of his family. Williams married Sarah Patterson Scott on March 4, 1828, at San Felipe de Austin. A native of Kentucky, she emigrated to Texas with her parents William and Mary Scott in 1824. Sarah Williams gave birth to nine children, five of whom survived to adulthood. By 1829, Samuel and Sarah Williams were caring for two children: their newborn daughter Sophia Caroline Williams and his four-year-old son "Vic" from the previous relationship. By 1830, Williams had already moved his family from Calle Commercio to a lot on the outskirts of town, and added onto this wood-framed house, then finished it with locally milled siding and brick, and with imported sash from New Orleans. In addition to his annual salary of $1,000, he received a seven-league bonus in 1830 for service to the ayuntamiento. He selected seven leagues scattered throughout the colony. 
After escaping from jail in June 1835, Williams moved his family from the seat of the Austin Colony to Quintana. While Samuel was in the United States, Sarah fled her home during the Runaway Scrape early in 1836. By that time, the Williams's had already relocated from San Felipe de Austin to Quintana, the site of the McKinney and Williams warehouse, where the McKinneys also lived. Thomas McKinney and William H. Jack transported a small contingent of people from Quintana over the Gulf to McKinney's warehouse at the mouth of the Neches River. For the four years ending in the summer of 1839, business travels on behalf of his own interests and in support of the Texas cause kept him occupied in the United States. He only resided with his family for eight months during this period. Freemasonry The Independent Royal Arch Lodge No. 2, Free & Accepted Masons initiated Williams in New York City on November 21, 1835. Williams received the first three degrees of Freemasonry that same night. Four days later, on November 25, 1835, he received all the degrees of the Royal Arch chapter in Jerusalem Chapter No. 8, Royal Arch Masons. Six days later, on December 1, 1835, he received the orders of Masonic Knighthood in Morton Commandery No. 4, Knights Templar. A week after that, on December 8, 1835, the General Grand Chapter of Royal Arch Masons, meeting in Washington, D.C., granted Williams a charter for a chapter of Royal Arch Masons to be known as San Felipe de Austin Chapter No. 1, it being the first Royal Arch chapter in Texas, and he was installed as its first presiding officer. As a Freemason, Williams became a member of a fraternity that included a number of other early Texas patriots like Stephen F. Austin, Sam Houston, Jose Antonio Navarro and Lorenzo de Zavala. Williams "was the moving spirit" behind the formation of the first Masonic lodge in Galveston, and is considered "the father" of Galveston's Harmony Lodge No. 6. Williams represented Harmony Lodge at the annual Grand Lodge meeting in December 1839. At that meeting, he was elected the third Grand Master of Masons in Texas, succeeding Anson Jones and Branch T. Archer. An avid proponent of Royal Arch Freemasonry, Williams was instrumental in the formation of the Grand Chapter of Texas, Royal Arch Masons, and, on December 30, 1850, he was elected its first presiding officer. Samuel May Williams House Thomas McKinney managed the construction of Williams's house and his own on a large nearby suburban lots outside of Galveston, a project he started in 1839. The family took residency sometime in 1840. The large tract included a pasture, a vegetable garden, and several outbuildings. They kept henhouse and a cowpen for their own consumption of eggs and other dairy, as well as stables and a carriage house. The Williams' owned slaves, perhaps all from the same family, who worked in the house and other parts of the property. Allegedly a cook attempted to poison the family in 1842; however, there is no record to indicate any punishment, legal or otherwise. This suburban area near the Williams house was home to several local merchants during the 1840s. In addition to the McKinney house, his neighbors included Michel Menard, John Sydnor, and two of the Borden brothers. Menard was a co-founder of Galveston with McKinney and Williams, and engaged as a merchant with J. Temple Doswell. Sydnor was a commission merchant, and is known to have auctioned enslaved persons, though there is a question about whether this was a regular part of his business. 
Thomas and Gail Borden Jr. co-published the Telegraph and Texas Register before moving to Galveston. Thomas surveyed land in the Austin Colony before the Texas Revolution. Gail, during part of his stay in Galveston, was a collector of customs, but he is best known for inventing a process for condensed milk. Despite the population growth of Galveston through 1850, this neighborhood ceded its popularity among the wealthy to the Broadway corridor in the next decade. Death and legacy Williams died on September 13, 1858 after an illness of about two weeks. Local Knights Templar and Royal Arch Freemasons led his funeral proceedings. Many flags were flown that day at half mast, while many Galveston businesses closed to honor his memory. He left four surviving children and his wife. He and his wife are buried at the Trinity Episcopal Cemetery in Galveston. The original grave sites cannot be identified, but the State of Texas installed a cenotaph in their memory. Since he had not recorded a will, he left a contested estate which included large tracts of undeveloped land in addition to other assets valued at $95,000. His Galveston home, the Samuel May Williams House, is on the National Register of Historic Places. Early in 1859, the Texas State Supreme Court invalidated the bank charter of 1835 used to support the C & A Bank. The ruling had three consequences for the bank: it made the bank responsible for paying a large fine imposed by the lower court, it lost the ability to issue notes, and all notes outstanding were to be redeemed and destroyed. This effectively killed the bank, though Williams did not live quite long enough to witness its ending. Biographer Margaret Swett Henson writes, "The ten-year court battle with its political overtones had eroded his once robust health, and despairing of vindication, he relinquished the struggle and allowed despondency to triumph." Journalist Gary Cartwright claims that Williams "died a bitter and frustrated old man, emotionally, physically, and financially drained." Williams, through his business partnership with McKinney, supported the war for independence with cash, and with goods and services in-kind, including ships, mercenaries, and ammunition. Williams died in 1858, without seeing any repayment by the Republic or by the State of Texas. McKinney continued to assert the claims of the partnership, but died in 1873 before succeeding in this advocacy. Upon his death, the state acknowledged $17,000, a small portion of the claim, but did not remit any funds to his estate. The State of Texas finally acknowledged the legacy of the Williams and McKinney contributions through a repayment in 1835. Texas historian Joe Frantz characterizes the partnership of McKinney and Williams as "Underwriters of the Texas Revolution," since, "without their aid [Sam] Houston's army conceivably would have lacked clothes, provisions, and most especially, arms." Yet Cartwright notes a lack of acknowledgement of the legacy of Williams in his adopted hometown: "Today hardly anyone in Galveston remembers his name or what he did. The city of Galveston is tireless, almost shameless, in its devotion to its heroes, but there is no street, no park, no school or building or monument that bears the name Samuel May Williams." He also cites examples of actions by Williams which were corrupt, or at least, perceived to be corrupt by dint of their association with what many Texans considered to be unsavory business and political dealings. 
Williams was labelled a "Monclova speculator" by some of his Texian contemporaries, a reference to his advantageous land acquisitions from the rebelling anti-Santanista faction in Coahuila. After Texas independence, the Republic brokered a deal acknowledging a legacy land grant from the Monclova government and conveying it to the Galveston Town Company, which included Williams as one of its business partners. Cartwright alleges that Williams was conscious of his own standing in Galveston when he ran for the Congress of the Texas Republic in 1839. Leading up to the election, McKinney and Williams offered favorable exchange rates for Texas treasury notes compared to their local business rivals, which Cartwright suggests was a successful attempt to buy votes. References Bibliography Further reading External links Category:1795 births Category:1858 deaths Category:Politicians from Providence, Rhode Island Category:Republic of Texas politicians Category:American bankers Category:American emigrants to Mexico Category:Businesspeople from Providence, Rhode Island Category:People from Galveston, Texas Category:American city founders Category:19th-century American businesspeople Category:Mexican civil servants Category:American slave owners Category:State legislators of Mexico Category:Postmasters
{ "perplexity_score": 214.6, "pile_set_name": "Wikipedia (en)" }
Ask HN: Why use Docker and what are the business cases for using it? - dmarg I recently did a very basic tutorial on Docker. Afterward, when talking to the tech lead of the project I am working on, he asked me "What are the business cases for using Docker, how could it save us money, how could it make us more efficient?" I told him I would do some research to find the best answers and figured I would turn to you all here at HackerNews for some advice. Thanks in advance! Just so you know, I currently work on a project at my company that uses AWS, Codeship and BitBucket for development and production. AWS and Elasticbeanstalk host our application, Codeship runs tests on changes to branches in BitBucket and then pushes the code that passes on certain branches to different AWS environments for Development or Production. ====== lastofus Does Docker provide anything useful to someone who develops on OS X and deploys to Linux VMs (Digital Ocean, Linode, AWS)? Current setup is something like: \- Develop locally (OS X) \- Test deploy to local Vagrant Linux VM (provisioned by Ansible) \- Deploy to staging/live Linux VM w/ Ansible (or Fabric if I'm being lazy) I've been following the Docker hype for some time now, but every time I look into it, I can't find any info on how Docker could make my life simpler or easier. If anything, it would just add another complex abstraction layer to have to deal with. What am I missing? ~~~ justizin You can build a docker image in a vagrant VM on OSX and deploy that exact docker image to production. If your vagrant image resembles production, it's probably fine, but there's a level of confidence to be reached from deploying the exact entire self- contained binary, shipping it through QA and staging, and eventually promoting it to production. ~~~ pekk This is also a use case for not developing on OS X, which doesn't have anything to do with production anyway. ~~~ mdaniel Can you clarify? Do you mean that one should only develop upon the platform that will be used in production? ~~~ justizin Yes, the previous commenter is a purist with no battery life on their laptop. Further, if you're in the middle of upgrading your production OS, does this mean that you need two developer machines? C'mon! ------ osipov There are many ways of using Docker and obviously different companies could come up with their own business cases for adopting the technology. So let me focus on one scenario and we can talk about whether it makes sense for your environment. Software engineering is often difficult because programmers have to deal with inconsistent environments for development and for production execution of their products. Due to mismatches between these environments, developers often find bugs that surface in one environment but not in another. Hardware-based virtualization (VMWare, HyperV, etc.) helped with the inconsistency issue because it enabled developers to create dev/test environments that could later be replicated into production. However, this category of virtualization requires more computational resources (esp. storage) than operating system virtualization like LXC used by Docker. In addition to requiring fewer resources than hardware virtualization, Docker defined a convenient container specification format (Dockerfile) and a way to share these specifications (DockerHub). When used together, these Docker technologies accelerate the process of defining a consistent environment for both development and production in your application. 
Dockerfiles are easy to maintain and help reduce the need for a dedicated operations team for your application. In buzzword speak, your team can become more "DevOps". Docker, by the virtue of relying on a thinner virtualization layer than hardware hypervisors like VMWare, also has higher performance for I/O intensive operations. When used on bare metal hardware, you going to be able to get better performance for many databases in Docker containers compared to databases running in a virtual machine guest. So to recap, Docker can help you \- maintain consistent dev/test/prod environments \- use less resources than hardware virtualization \- free up the time your team spends on dev to ops handoffs \- improve your app I/O performance compared to running in a hardware virtualization based virtual machine guest However if you are using AWS, note that Docker Container Service available from Amazon actually doesn't give you Docker on bare metal. That's because Docker Containers run in AWS virtual machine guests virtualized by Xen hypervisor. So with AWS you are paying a penalty in resources and performance when using Docker. ~~~ takeda Great, but what are the benefits of running Docker in AWS? You are still running VMs and you are being charged for running them. With Docker you are simply putting yet another layer of complexity, because now you have to run more beefier VMs, you now have problem with network communication between containers running on different hosts. So you will most likely need to use overlay network. You also decrease resiliency, because now when AWS terminated a single VM, all apps running on that node suddenly disappear. I also don't get the argument about running the same container in dev/test/prod. For example my company is working on going Docker and one of the problem with these environments is that app running there has different configuration. So the idea to solve it is to create three different versions of the same container. Genius! But now are you really running the same thing in dev/test/prod? How is it different to what we did in the past? Especially that before Docker through our continuous delivery we actually were using exact same artifact on machines set up with chef that were configured the same way as in prod, while with Docker now we plan to use three different containers. ~~~ osipov >what are the benefits of running Docker in AWS? I don't see benefits to running Docker in AWS. In my opinion, AWS implemented its Docker-based Container Service very poorly. I advise my customers against using AWS when want to use Docker. There are many bare metal as a service providers out in the marketplace. >the argument about running the same container in dev/test/prod Is this issue really caused by Docker because you said that you had consistent environments when built by chef? ~~~ notwedtm Can you name a few bare metal as a service providers? ~~~ nickstinemates Rackspace, SoftLayer/Bluemix ------ Rezo I'd be interested to know if and how people are integrating Docker into their edit-compile-test development cycle? For me personally, the time it takes from performing an edit to seeing your change "live" on a developer's machine is extremely important. To this end we've spent some effort in making our own services and dependencies run natively on both OS X and Linux to minimize the turnaround time (currently at around 4 seconds). 
Docker fixes the "works on my machine" problem, which is worth tackling but not something you run into too often, but (in my admittedly limited experience) introduces pain into the developer's workflow. Right now, I'm leaning towards enforcing identical environments in CI testing and production via Docker images, but not necessarily extending that to the developer's machines. Developers can still download images of dependencies, just not for the thing they're actually developing. I'd love to hear alternative takes. ~~~ viktorbenei We do a really simple trick. We have one Dockerfile, which we use for dev and for production, every configuration difference is handled through environment variables (which is easy to manage with docker-compose, Ansible, etc.), so you can run the same image everywhere and change the environment configuration to switch between dev/test/prod. So to solve the live reload problem we do only one thing in development which we don't do anywhere else: the Dockerfile copies all the current source files into the image, but we run the container with a mounted volume of the source code - which in dev mappes exactly the same way the Dockerfile defines the src copy. This way when you create the image and start the dev container it won't change anything, but as the src is mounted on top of the included src in the image the regular (Rails) auto reload works perfectly. For test and prod we just simply don't do any volume mounting at all, and use the baked in src directly. Easy and fast everywhere, all you have to control is how and when the images are created (ex: we do a full clean checkout in a new directory before building a test image, to ensure it only contains committed code. EDIT: by test I mean for local testing, while you're working on revisions, a CI server does the automatic testing). ------ collyw To tell all the cool kids you are using it. (Ok, I know there are real use cases for Docker, but I see a lot of hype as well. People telling my mathematician friend that she needs to use docker at the start of her project - it is likely to be a one off graph she needs to produce for a research paper). ~~~ osipov There is a big push for reproducibility in science. If you friend can package the process for building that graph in a Dockerfile, it is more likely that readers of her paper will be able to reproduce her results. ~~~ mugsie or, you know, publish the formula, so readers can reproduce in whatever language / system they want. Reproducibility is a big push.... but not like you are suggesting. Shipping a dockerfile is the equivalent of saying "This works, _if_ you use this flask, this pipette, this GCMS and this piece of litmus paper" Docker is not the only solution to problems. It solves some, but you can't tack it on to everything. ~~~ log_n Why not both? I am not in academia but I was under the impression that some academics might be publishing 'questionable' results that cannot be reproduced at all in order get their paper count up for tenure review. Not to mention puff-pieces from industry that basically serve as PR in peer reviewed journals without furthering their discipline. So shipping working code (even if it comes with a required pipette) might be a nice requirement for a peer reviewed publication to take on in order to keep their journal relevant. Shipping in Docker or similar guarantees reproducibility. ~~~ collyw If the code is crap, and only works on one particular data set, then putting it in a docker container ain't going to help. 
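A rough sketch of the single-image pattern viktorbenei describes above, written against the Docker SDK for Python rather than docker-compose. The image name "myapp:latest", the /app path, the port, and the environment variables are placeholder assumptions, not details from the thread:

    import os

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()
    env_name = os.environ.get("APP_ENV", "dev")  # dev / test / prod

    run_kwargs = {
        "image": "myapp:latest",  # assumed image built from the one shared Dockerfile
        "detach": True,
        "name": "myapp_" + env_name,
        "environment": {
            # every dev/test/prod difference flows through environment variables
            "APP_ENV": env_name,
            "DATABASE_URL": os.environ.get("DATABASE_URL", ""),
        },
        "ports": {"3000/tcp": 3000},
    }

    if env_name == "dev":
        # Dev only: mount the working copy over the source the Dockerfile copied
        # into /app, so the framework's auto-reload picks up edits without a rebuild.
        run_kwargs["volumes"] = {os.getcwd(): {"bind": "/app", "mode": "rw"}}

    container = client.containers.run(**run_kwargs)
    print("started", container.short_id, "for", env_name)

Test and prod would run the exact same image with no volume mount, which is the point: only the environment changes, never the artifact.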
------ istvan__ One business use case is to stop threads between the Ops and the Dev team about missing JAR files on a production system. If Docker was fixing this only, I would still use it. There is nothing better than a single binary deployment that is byte to byte the same as it was running on a dev laptop and a QA env. ~~~ mcguire Do you mean like a war file? (Yes, I'm being hyperbolic, unless you want to have your entire application running from a servlet container with nothing in front of it. And I agree, a single deployment object is just about the only way to keep your sanity. But still, this isn't the first thing that's proclaimed that advantage.) ~~~ istvan__ No, I do not mean like a war file. I am talking about an average Java developer who does not understand how classpath works, how environment works and what are the assumption that is built in to the code and blames everything on other people . For a long time they could get away with this, post-Docker world they cannot. ~~~ boomzilla Not sure why you still have this problem. In Java world, Maven build system solved this long, long time ago. There is even a plug-in that builds a uber jar that can be invoked with JVM as the only dependency. ~~~ xahrepap Still have problems with Java versions, Java security policies (encryption strength, etc, that you manually have to add files to your JDK/JRE to work), external dependencies, etc. We use Java, Maven, JBoss + Ear/War deployments. And "Works on my Box" is one of the most frustrating problems we have. This is one of the reasons we're pushing toward Docker. ------ b00gizm I absolutely love it (Docker + docker-compose) for creating homogenous local development environments. And if your app needs a MongoDB, Elasticsearch or anything else, it's as easy as adding one line to your overall docker-compose config file to link those services to your app. No need to pollute your development machine, you can just have anything running in Docker containers and share them across your team. I've created several repos on Github for that matter. Here's for example some boilerplate for running Node.js, Express, Nginx, Redis and Grunt/Gulp inside Docker containers: [https://github.com/b00giZm/docker-compose-nodejs- examples](https://github.com/b00giZm/docker-compose-nodejs-examples) ~~~ towelguy I love docker-compose, it is to services what npm is to packages. That said, I'm only using it in development. For production there's so many options I don't even know where to begin: [http://stackoverflow.com/a/18287169/3557327](http://stackoverflow.com/a/18287169/3557327) ------ MalcolmDiggs In my experience, if you start hearing: "I don't know what's wrong, it works fine on my localhost!" a lot, then it may be time to think about Docker. In more general terms: more the complex the environment, the more moving pieces, the more developers on the team, the more servers in production, the more likely there's going be a discrepancy between what Developer-A has running on his machine, and what Developer-B has running on theirs. Docker helps keep everybody on the same page. For me personally: I'm a dev on a number of projects, and Docker helps me keep my dependencies straight. I no longer have to change things around locally just to work on Project-A, I just get their latest Docker image, and I'm good to go. ~~~ msravi How is this different from sharing a plain old VM image and doing the same thing? Is there any particular advantage that a Docker image brings? 
~~~ Matt3o12_ Docker images use the union file system. That means when a new version of that image is available, you often only pull a few megabytes from the hub because it already has the OS. A docker container is also way more lightweight. Instead of running a full OS, you I only run one process. So your docker image starts within seconds ~~~ ccozan Is this more like a Solaris zone or a Linux chroot-ed env? ~~~ nickstinemates A lot like Linux chroot, with some additional features, restrictions, and a mechanism to share them easily. ------ thebigspacefuck Makes system administration somewhat easier since the host OS can stay the same and docker containers change, while giving developers more control. I have total control of which version of packages is installed, which OS I'm using. I don't have to create a ticket, argue over the ask, and get approval just to change an web server timeout. It sort of usurps the sys admin role, though, which might be a negative. I can move my container anywhere that's running Docker and all packages are there inside of the container. If you spend a lot of time setting up new boxes, that's a plus. Before, I had to dump all packages, figure out which ones were missing, then install all of them, and the host OS had to be exactly the same. Now I know it's exactly the same, all the time, anywhere. My only warning is that using anything but Ubuntu for your build host is going to take way too long and you're going to be waiting hours for it to complete if you don't have any layers cached. ------ colordrops It's very useful in situations where you need a reproduceable deployment and also need high performance and direct access to hardware. We run simulations that require a lot of setup, and we tried with VMs at first, but they were too slow and the GL driver inside of the VM didn't implement all the extensions we needed. Docker worked perfectly in this case, though we have to run it in privileged mode. ------ mrbig4545 I never figured it out, we could do everything we need in production with LXC, puppet, carton and perlbrew. Combined with a vagrant box for dev, we have no issues. Although I do use docker to run deluge in a container with openvpn on my home pc, but only because someone else had gone to the trouble of writing the dockerfile and getting it to work. It seems to break a lot though, when I have time I'm going to get rid of it and replace it with a systemd-nspawn container, because there's less handwaving involved and I can get it to work correctly. ~~~ ecliptik Could you post a link to the Dockerfile for the deluge/openvpn container? I have a VM setup with transmission/openvpn since I was having issues getting the VPN to work with Docker's networking and it was just easier to use a standard VM instead. ~~~ mrbig4545 Sure, this is the one I used, with private internet access.com , [https://github.com/jbogatay/docker- piavpn](https://github.com/jbogatay/docker-piavpn) I'm sure you could easily change it to work with other VPN providers ------ d0m Something noting is that more and more PaaS (Platform as a service) are using docker.. so sometimes you're not making the decision as a developer to use a docker, you're just forced to use it. I'm saying this because I know docker solves a lot of pain on the devops side, but on the "software" side it's been painful all the time I've touched it. I.e. practically speaking, it makes releasing much slower, sometimes I'm forced to do a hard reset on the container rather than just reload nginx, etc. 
My suggestion is to go with what's simpler for your stack. If you're struggling with having to manage and deploy new configured/secure ec2 instances every day, then it might be worth looking into docker. ~~~ jacques_chester > _Something noting is that more and more PaaS (Platform as a service) are > using docker_ To expand on this: * Heroku have introduced Docker-based tools to run their buildpacks outside of their staging servers, * Cloud Foundry has, in public beta, the Diego scheduler, which can accept and manage Docker images, * OpenShift 3 uses Docker and Kubernetes as its core components. Disclaimer: I work for Pivotal, who founded Cloud Foundry. ------ oxygen0211 I can give you two first hand examples which also revolve about the apsects osipov mentioned: #1 is in my dayjob: We use docker in combination with vagrant for spinning up a test environment consisting of several containers with different components (both ur own products and 3rd party) to run integration tests against them. The main reasons for this approach are: \- We can supply containers with the installed producs which saves time on automated runs since you don't have to run a setup every time \- We can provide certain customizations which are only needed for testing and which we don't want to ship to customers without doing all the steps needed for that over and over again \- We have exact control and version how the environment looks like. \- Resources are better distributed than in hardware virtualization environments #2 Is a pet project of mine, a backend for a mobile App. There are still big parts missing, but in the moment it consists of a backend application which exposes a REST API running on equinox plus a database (in an own container). The reasons I see for using docker here: \- I have control and versioning of the environment \- I can test on my laptop in the same components as prod, but scaled down (by just spinning up the database and one backend container) \- Since more and more cloud providers are supporting Docker (I am currently having an eye on AWS and DigitalOcean, haven't decided yet), switching the provider in the future will be easier compared to having, say a proprietary image or whatever. \- If I ever scale up the poject and onboard new teammembers, the entry barriers for (at least) helping in Ops will be lower than if they have to learn the single technologies until they get at least basic knowledge of the project. ------ rcconf My apologies if I'm hijacking the original poster. Does Docker handle multi-environment configuration management? For example: qa, stage and live have the same config files, but different values. Currently we're using Ansible and we set variables for a specific environment, then we feed those variables into config files based on where we're deploying to (config files are not duplicated, only variables that feed into config files.) All of our configuration is in git and we can quickly see and change it. How does Docker handle this? ~~~ geerlingguy One way I've seen many people tackle this problem is to have the Dockerfile/image built in a more generic way, then the end of the Dockerfile kicks off an Ansible playbook (or some other lite CM tool) that will configure everything for the proper environment (e.g. change configuration and kick off a service, something along those lines). Some will even go as far as using a CM tool to do the entire internal Dockerfile build, and the Dockerfile is just a wrapper around the CM tool. 
This does require more bloat inside the Docker image, as you need to have your CM tool or whatever other supporting files/scripts installed in the image, but it does make more complex scenarios much simpler. ~~~ yebyen > you need to have your CM tool or whatever other supporting files/scripts > installed in the image This pattern is maybe even more helpful than harmful, for making your dev environment more closely match production, when your final deploy target is not a docker container. (You are obviously going to want to see those build scripts running in test, if not earlier; certainly once, before they should kick off in a production environment.) You could do more individual steps in the docker file, just like you could store your token credentials and database handles in the git repository. Neither way is "completely wrong" but there is a trade-off. ------ vruiz Side question. I'm well aware of the benefits of docker but, has anybody measured performance degradation due to lack of machine specialization? Back in the web 1.0 days it was common knowledge that you start in 1 server, then you split into 1 app server and 1 database server and you can get 4x the capacity. Did we lose all that with the docker way? Is it not so relevant anymore with modern multicore CPUs? ~~~ distracteddev90 Docker doesn't force you to put your app and your db on the same image. That is up to you. Most have "App" images and "DB" images separate. If we want to get really specific, Its also common to see the "DB" image split up between the image of the disk where the data is actually persisted, and the image of the actual DB process. This makes it easy to play around with your data under different versions of your DB. ------ ZitchDog If you're familiar with Java, think of a docker container like a WAR or EAR, except it can contain ANY dependency, not just Java code. Database, binaries, cache server, you name it. The implementation is vastly different, but the effect is a deployment artifact that can be configured at build time, and easily deployed to multiple servers. ------ atroyn Codeship have a great series on Docker for Continuous delivery on their blog: [http://blog.codeship.com/](http://blog.codeship.com/) That said I've paged the founders to this thread, they can make the case much more effectively than I can. (disclosure: I don't work for Codeship). ------ jgrowl Besides just actually running software, I also find it really neat when projects use docker to build their entire application. It provides an effective means of documenting all of your dependencies and making reproducible builds. Take the docker-compose for example. You can just check the code out, run a single script that builds the project for your environment and everything is pretty much self contained in the dockerfile ([https://github.com/docker/compose/blob/master/Dockerfile](https://github.com/docker/compose/blob/master/Dockerfile)). You don't have to clog up your host computer with deps and you get an executable plopped into an output bin folder. Additionally, the steps in the dockerfile get cached so subsequent builds are really fast. ~~~ davexunit >making reproducible builds. Docker builds actually aren't reproducible. There are many sources of non- determinism that Docker cannot address. Do you use the base images from DockerHub as-is or do you run 'apt-get upgrade' or whatever for security patches? 
If you do, the result you get from building that image (as opposed to using what's in a cache) is different depending on the time it was built. The same goes for any Dockerfiles that compile from source. Hell, just extracting a source tarball results in a different hash of the source tree because of the timestamps on the files. You and I have little hope of building the same image and getting the same exact result. Build reproducibility is a very interesting topic with some unsolved issues, but Docker isn't helping with it. See [https://reproducible.debian.net](https://reproducible.debian.net) for a good resource about build reproducibility. ~~~ vezzy-fnord Don't know why you were downvoted. Docker doesn't give you reproducible builds because you're still running in a raw host OS environment with all its state, but simply the subsystems partitioned into their own namespaces. Docker is more akin to a snapshot than reproducible. ------ rlpb Docker, or containers in general? I'd really like to hear about Docker specifically, but most of the answers so far seem to relate to containers in general, rather than Docker specifically. What are the business cases for using Docker over some other container-based solution? ------ fu86 We have a shitload of servers running CentOS for historical reasons. We can't change the distribution because all the services running on this servers are tight to the quirks and special cases of this distribution. So we need to live with CentOS. Some of our newer services need a up to date version of glibc and a lot of other dependencies CentOS can't provide. So we use docker to boot up Ubuntu 14.04 containers and run the services with special needs in them. Another great thing is isolating scripts we don't trust. We allow our customers to run scripts of all kind on our servers --> inside Docker containers. So the customers can't mess with the hostsystem. ~~~ liviu- >running CentOS for historical reasons Is CentOS not the state-of-the-art Linux distro to run for servers (besides RHEL for support)? ~~~ njharman Maybe safe-of-the-art. It is stable (and old) which is the way many people like their servers. But, it certainly isn't the latest and greatest. ~~~ cheald The entire point of RHEL/CentOS is that it isn't bleeding edge; it will certainly be modern though. I think it's rather unfair to call it "old" though; the latest release was in March. ~~~ njharman Fair enough. But the mentality that picks stable over up-to-date tends to never upgrade. I'm stuck supporting rhel5.5, our "new" systems are 6.5 ------ allan_s our use case for docker is the following: we're a webshop, and recently we've standardized our stack on symfony2/nginx/postgresql, so all our websites use that. but beside of that we have some that we maintain that need to run on old version of php/centos. As we have only 1 server internally for pre-staging environments, docker does help us to save a lot of memory/cpu compare to what we had before (virtualbox, yes...), without needing a lot of machine to setup (like openstack). Also we don't really have a guy dedicated to sysadmin, so the less time we need to spent on server administration, the better we feel. So we have a set of 3 containers (for symfony+php_fpm / postgresql / nginx ) that already tuned to meet our needs, with a ansible playbook [https://github.com/allan- simon/ansible-docker-symfony2-vagra...](https://github.com/allan- simon/ansible-docker-symfony2-vagrant/) , that we reuse for every new project we have. 
So that the developers can have a working stack, without needing to reinvent the wheel, they even don't need any knowledge of system adminstration "run this ansible command, done" ! without any risk to break other services. Also the reproductability and the stateless properties of docker containers has helped to nearly eliminate the class of bugs "the guy that does not work in our company anymore made a tweak one day on the server to solve this business critical bug, but nobody know what it is but we need to redeploy and the bug has reappeared" ------ muralimadhu We have been using docker for about a year at Demandforce, Intuit. We have had a mostly positive experience with it. Plusses: \- Dont see any environment related issues because the docker image is the same in every environment \- Easy onboarding to teams that use docker because you dont need to setup anything new. This is especially useful if your company encourages developers to work across teams \- Ops can build around infrastructure around this and be sure that every team builds and runs code in the same way \- If your application is complex, using docker-compose, its extremely simple to setup your dev environment \- The community is moving towards docker, and it doesnt hurt your resume if you have production docker experience Minuses: \- For an extremely simple application (that you think will remain simple over its lifetime), it might be more overhead to use docker than not use it \- Even though we’ve been using boot2docker and vagrant to setup docker on MacOSX, it hasnt worked seamlessly. When you get on and off a vpn for example, boot2docker has constantly messed things up. If you can get your dev setup right, docker works well. If not, it can be a pain sometimes on OSX \- Although its easy to build docker images for most of the open source software out there (if docker images dont already exist), it can be a pain to do that for enterprise software. Try using docker with oracle db. You might get it to work. You wont have fun with it !! ------ deftek I would keep an eye on this project: [http://www.opencontainers.org/](http://www.opencontainers.org/) ~~~ dmarg Heard about this and seems like everyone and their mother are signing on. This is one of the reasons why I asked the main question is because I want to fully understand what the business case is for using Docker. ------ zalmoxes I'm a sysadmin for a private K-8 school. I use docker because of the ease of deploying and upgrading a large number of tools on a limited number of servers. I've used puppet for several years to manage our infrastructure, and puppet is still managing all our staff and student laptops, but for servers, I've switched everything to CoreOS + Docker. ------ collyw Can people elaborate on when it would be better to use a virtual machine and when it would be beneficial to use a container? ~~~ takeda If you have own hardware (e.g. Data Center) that is running your own code that you trust. By going with containers you can pack more applications into the same hardware (less overhead), therefore your costs ate lower. If you running in AWS, you use VMs anyway so the overhead is there no matter what (and also is not your concern, because you pay for the VMs). By adding Docker there you basically adding one extra layer on top of it, so from the infrastructure point of view you making things even more complex. 
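For readers who want to see what a small per-project stack like the one allan_s describes above looks like when driven from code, here is a minimal sketch using the Docker SDK for Python. The image names, network name, and credentials are invented for illustration; the actual setup in that comment is handled by an Ansible playbook, not a script like this:

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    # Private bridge network so the containers can reach each other by name.
    client.networks.create("shopdemo_net", driver="bridge")

    # Database container (placeholder image tag and credentials).
    db = client.containers.run(
        "postgres:9.4",
        detach=True,
        name="shopdemo_db",
        network="shopdemo_net",
        environment={"POSTGRES_PASSWORD": "devonly"},
    )

    # Application container, pointed at the database by container name.
    app = client.containers.run(
        "shopdemo/app:latest",  # assumed pre-built application image
        detach=True,
        name="shopdemo_app",
        network="shopdemo_net",
        ports={"80/tcp": 8080},
        environment={
            "DATABASE_URL": "postgres://postgres:devonly@shopdemo_db:5432/postgres"
        },
    )

    print("stack up:", db.short_id, app.short_id)

docker-compose expresses the same thing declaratively in a few lines of YAML, which is what most of the posters in this thread actually use; the script form just makes the moving parts explicit.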
------ zwischenzug I talk about this a bit here: [https://www.youtube.com/watch?v=zVUPmmUU3yY](https://www.youtube.com/watch?v=zVUPmmUU3yY) this was a year ago, so a little out of date. I now work for another company that is into Docker. I also have various bits on my blog: [http://zwischenzugs.tk](http://zwischenzugs.tk) check out the jenkins ones, for example: [http://zwischenzugs.tk/index.php/2014/11/08/taming-slaves- wi...](http://zwischenzugs.tk/index.php/2014/11/08/taming-slaves-with-docker- and-shutit/) [http://zwischenzugs.tk/index.php/2015/03/19/scale-your- jenki...](http://zwischenzugs.tk/index.php/2015/03/19/scale-your-jenkins- compute-with-your-dev-team-use-docker-and-jenkins-swarm/) ~~~ i_have_to_speak In your Jenkins example, why use docker? Why not ask the devs to directly install Jenkins on their box? ~~~ zwischenzug Because the builds would not be contained. There were many dependencies that needed to be installed on each box. Using Docker meant that the slaves could be built from scratch as well. See here: [http://zwischenzugs.tk/index.php/2014/11/08/taming-slaves- wi...](http://zwischenzugs.tk/index.php/2014/11/08/taming-slaves-with-docker- and-shutit/) ------ irvani We at PeachDish use Docker and Bitbucket to scale our BeanStalk environment. Docker has helped us deploy test site much easier as one can be assured that everything needed to run the app is in the dock. It helps us build consistent environments for testing. ------ vsaen simplest case is that it can serve as a multiple staging environment, when you have loads of APP in a single code base (often startups going about prod- market fit). With docker tech shipping speed= production speed. Without docker you are slowed down by 2x or more.Without docker, either you set up a lot of staging environments, which is not great and costly. Or you use one single testing environment and let each of you tech person wait for hours, wasting time, while QA tries to test one branch and you have another idiot deploying another brand on staging. While there are more complex/useful cases, this is one simple biz value i get out of it for my team. ------ trustfundbaby Mostly answered already, but another use case is getting engineers up to speed on environments that they're not familiar with. For example, I work a lot with Rails apps, and sometimes we'll need a dev to come in and work some css/html magic or something like that. Well getting them setup with rvm, ruby, rails, bundler, mysql, elasticsearch and working through compilation errors on OSX can be a real nightmare, especially if you set up your env 2 years ago and have just been updating it incrementally since. When we dockerize the app, they can be setup with the docker image and ready to go in under an hour, without having to screw with their system ... much. ------ phunkystuff Here's a really simple example of a project that I had set up using docker. For my website I have it set up with continuous integration to run my tests when I merge into master and build a docker image which it then pushes to the docker registry. I then have it ssh into my host server, pull that image, then run the new container and remove the old one. Boom, i've just deployed my website by simply merging a PR into my master! This is just a simple use case, and I probably wouldn't suggest deploying a production ready site like this, but it's really cool! 
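The pull-and-swap step phunkystuff describes just above can be sketched with the Docker SDK for Python; the registry path and container name below are placeholders, and error handling is deliberately minimal. It assumes the script runs on the host that serves the site:

    import docker  # Docker SDK for Python: pip install docker
    from docker.errors import NotFound

    client = docker.from_env()

    IMAGE = "registry.example.com/mysite"  # placeholder registry/repository
    NAME = "mysite"

    # Pull the image that CI just pushed.
    client.images.pull(IMAGE, tag="latest")

    # Stop and remove the currently running container, if there is one.
    try:
        client.containers.get(NAME).remove(force=True)
    except NotFound:
        pass

    # Start the new container in its place.
    new = client.containers.run(
        IMAGE + ":latest",
        detach=True,
        name=NAME,
        ports={"80/tcp": 80},
        restart_policy={"Name": "always"},
    )
    print("deployed", new.short_id)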
It's really simple to just pass around images and have things up and running on local dev's too ~~~ mrbig4545 This is not something that is unique to docker though. I could just as easily set up a hook to deploy our site without docker, but it's not something I need. I prefer to manually deploy, it's only one command, and I can make sure it's all worked correctly. That said, I will be moving that command to a chat command, as I like that. But even then, it'll be a manually triggered command. I have auto deploy setup for CI testing, as in that case I do want to know that those branches are ready for deployment to prod, when I want to. ~~~ phunkystuff Ahh that's true, I guess what I was getting at was that it makes the deploys much quicker and easier (at least in my experience). I was also playing around with a chat command to spin up PR's/branches that haven't been merged into master yet, onto a temp container for viewing! Dunno, maybe i'm just not as familiar with other tools that can do similar things as easily ------ mercanator In our organization we have seen tremendous value using docker for our CI server (bamboo) and for infrastructure deployment. On the CI server side of things.... Our CI server a.k.a the "host machine" in docker speak has a MySQL database that we did not want used by all of our builds and wanted to isolate the creation of our application from the ground up within using its own MySQL instance. Additionally there is a lot of other peripheral software that we did not want to install on our host machine but it was critical for our unit tests to run. This is where docker is valuable to us from a build perspective. It allows us to isolate the application for unit testing and not get caught up with the possibility of other running software on the same machine affecting the builds, therefore freeing us up from debugging, and therefore saving the company money because developers are not wasting time debugging CI server issues. It's easy to isolate CI server related issues from the docker container running the unit tests because a developer can just run the same tests using the container on their local machine, so it creates a consistent environment. On the infrastructure deployment side of things.... Previous to our "dockerized" infrastructure we were managing about 7 different AMI's for all our servers and it was becoming a pain in the butt to manage the installation of new software if our application called for it, create a new AMI, then re- deploy said AMI. If you have experience with AWS and you have done this enough times, I'm sure you have faced at one point or another long wait times for your AMI to be created before you can re-deploy with that newly created AMI. This is time wasted on the application deployment side of things, but also on the personnel side of things while you wait for that damn thing to be created so you can re-deploy. Time is money and money waiting for resources to be available or for AMI's to be created is money taken away from the business. Additionally though in its infancy stage, we are using docker-compose ([https://docs.docker.com/compose/](https://docs.docker.com/compose/)) which offers some really nice ways of defining your container infrastructure within a single machine, I highly recommend looking into this for further efficiency. 
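A minimal sketch of the isolation mercanator describes on the CI side: each build gets its own throwaway MySQL container instead of touching the database installed on the build host. The image tag, credentials, and database name are placeholders:

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    # One disposable database per build, published on a random free host port.
    db = client.containers.run(
        "mysql:5.6",
        detach=True,
        environment={"MYSQL_ROOT_PASSWORD": "buildonly", "MYSQL_DATABASE": "app_test"},
        ports={"3306/tcp": None},
    )
    try:
        db.reload()  # refresh attributes so the assigned host port is visible
        host_port = db.attrs["NetworkSettings"]["Ports"]["3306/tcp"][0]["HostPort"]
        print("test database listening on host port", host_port)
        # ... point the unit tests at host_port and run them here ...
    finally:
        db.remove(force=True)  # nothing from this build lingers on the CI host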
------ McElroy To get some additional viewpoints on containerization, you could also take a look at what has been said about similar, preceeding technologies: * Solaris Zones, see also SmartOS Zones based on that * FreeBSD jails ------ stevelandiss I consider it as a replacement for deploying applications as a virtual machine. That is, if I want to host compute and let anyone run any random program in a reproducible manner on my server, I could let them run it in docker and be done with it. So as am IT admin, I find this a useful alternative to letting people run arbitrary programs and add restrictions around it. I think this is more of an IT-OPS tool than something a developer would want to spend time with ------ e40 Say you're running on CentOS 6.6 (or the equivalent RHEL) and you want to run some software that won't work because you need a newer library than is installed (this recently happened to me recently trying to install Transmission). You have two choices: 1\. Upgrade to CentOS 7.x. 2\. Use Docker and install the software into a container using a newer OS (CentOS 7.x or a newer Debian). #1 is very expensive and sometimes impossible (if you need to be on CentOS 6.x for compatibility reasons). #2 is very cheap. There's one of your business cases right there. ~~~ ecliptik This is our current business case. Multiple CentOS/RHEL 6 systems in a global environment and we want to run an application that requires Ubuntu and newer libraries. Instead of spinning up new VMs in each environment for one new application, we can instead run a Ubuntu container with the application within the existing environment. This brings with it all the other benefits such as continuous delivery and orchestration that we didn't have before. Once the platform is established there is no limit what we can run within a repeatable and consistent environment. ------ purans For my project, I wanted to have an automated and easy code to deployment setup, so here's what I have it working Bitbucket -> Private Docker Repository -> Tutum ([https://www.tutum.co/](https://www.tutum.co/)) - Live app It's working pretty well. Once you check in your code, docker repository hooked to bitbucket will build the docker image and then you can start deployment from Tutum with one click. ------ cheald The only truly compelling case I've seen for Docker is Amazon's ECS, which takes a cluster of EC2 machines and will automatically distribute containers among them where ever there is capacity, according to declared resource needs of a given container. The ability to waste less of your EC2 resources is a very clear business win. Everything else is still nice, but it's basically "dev environments suck less". ~~~ whalesalad FYI this is very similar to Mesos + Marathon. However, both feel way too verbose and painful to use. I'm very interested in seeing how Docker Swarm plays out. This ecosystem is still so raw. ~~~ jacques_chester > _However, both feel way too verbose and painful to use_ Try Lattice: [http://lattice.cf/](http://lattice.cf/) Disclaimer, I work for Pivotal, which developed Lattice based on Cloud Foundry components. ------ michaelbarton I've been using Docker to build a catalogue of similar types of software. Using Docker allows all the software to have the same interface which makes it easier to compare like-for-like. Here's the site - [http://nucleotid.es/](http://nucleotid.es/) ------ rehevkor5 Have you ever used provisioning software like Chef to prepare a server to run your software? 
Have you ever used that in conjunction with Vagrant in order to test out your provisioning and software deployment locally? Docker replaces (or can replace) all of that. ~~~ rubiquity Docker does NOT replace configuration management tools like Chef, Puppet and Ansible. Those are still necessary for preparing the host machine which Docker containers will run on. Where Docker does alleviate/reallocate some things is in the configuration of the containers that run on those hosts. Instead of configuring the host for Ruby/Python/etc. you would move that configuration to your Dockerfile. But I think CM tools also have support for generating Dockerfiles, so there's that too. ~~~ yebyen > Chef, Puppet and Ansible. Those are still necessary for preparing the host > machine In many cases now, they are not. Docker containers can run on CoreOS, which machines are designed to be configured entirely from a cloud-config file, organized in clusters. With Deis for example, you can build and orchestrate your scalable web service in Docker containers without even writing a Dockerfile, or necessarily knowing anything about how the Docker container is built. The builder creates slugs with the necessary binaries to host your service, and you tell the container how to run itself with a (often one-line) Procfile. I would still want chef scripts for my database server, but for things that can live in a container on a Fleet cluster, I most certainly do not use Chef, but I absolutely do get reproducible hands-off builds for my runtime environment, and without spending time individually preparing the host machines. ------ kasia66 Hi, You can have a look at Cloud 66 ([http://www.cloud66.com/how-it- works](http://www.cloud66.com/how-it-works)) a full stack container management as a service in production. Cloud 66 uses Docker technology and integrates 11 open sources tools to cover end to end stack deployment on any cloud provider or on your own server, within one click. You can compare different Docker solutions ([http://www.cloud66.com/compare](http://www.cloud66.com/compare)) and read how Cloud 66 used Docker.([http://blog.cloud66.com/docker-in-production-from- a-to-z/](http://blog.cloud66.com/docker-in-production-from-a-to-z/)). (disclaimer: I do work at Cloud 66 ) ------ yread This application uses it to spin up replicable instances of genome processing pipelines [https://arvados.org](https://arvados.org) ------ clayh Do people use Docker in conjunction with Vagrant now? Or is Docker used as a replacement for Vagrant for a homogenous development environment? ~~~ Tomdarkness We use vagrant on our dev machines to spin up a CoreOS cluster. Could use something like boot2docker but we prefer the dev environment to mirror production as close as possible. ------ bampolampy (disclosure: I do work for Codeship :P ) There are a lot of really great reasons to use Docker, or any container technology for CI. First off containers give you a standard interface to a reproducable build. This means you can run something in a container and expect it to behave the same way as something a co-worker runs on their workstation, or something run in the staging or production environments. For CI this is an absolute necessity. Rather than running tests locally, and expecting a CI server closely tracking the production/staging environments to catch issues with different version of the OS or libraries you can expect any build that passes locally to also pass on CI. This cuts down on a lot of potential back and forth. 
The only shared dependency between CI/local/prod/staging is docker itself. Another benefit is (almost) complete isolation. This means rather than having different vm images tracking different projects, you can have a single vm image with docker, and have each container running on the vm for any version of any build across your system. From a CI perspective you can abstract most of the complex configuration for your applications into "docker build -t myapp_test ./Dockerfile.test && docker run myapp_test". Containers use a differential filesystem, so N running containers for an application will take up 1 X the size of the container image + N x the average space of changes made in the running containers on top of that base image. This makes larger images highly space efficient without having to worry about different instances treading on the same folders. The line between dev and ops blurs a little (devops), but clear responsibilities. Ops becomes responsible for maintaining the docker infrastructure, and dev is responsible for everything inside the container boundary, the container image, installed packages, code compilation, and how the containers interact. A container mantra is "no more 'well it worked on MY machine'". If it works for the dev, it really will work in prod. Besides this, there a number of benefits around speed, accessibility, debugging, standardization, the list goes on. There are also a ton of great and varied Docker CI solutions out there, from specific Docker based CI like us (codeship), Shippable, Drone, Circleci, as well as standard solutions like jenkins via plugins. Many hosting solutions are supporting docker redeploy hooks for CI purposes. The standardized nature of containers make it trivial for vendors to provide integrations. Even if you don't use docker yourselves, this is certainly a great space to watch. Technically you can use docker for CI/CD without using it for deploying your app. When you do this you lose some of the benefits listed, but not all. You lose the cohesion between CI/local and prod, but you still gain a whole lot in terms of speed and complexity within your CI infrastructure. Thomas Shaw did a great talk at Dockercon on introducing Docker to Demonware for CI across a variety of projects. I don't think the video is up yet, but it's well worth a watch if you're thinking of bringing it into your company. In the meantime we wrote a blog post on his talk: [http://blog.codeship.com/dockercon-2015-using-docker-to- driv...](http://blog.codeship.com/dockercon-2015-using-docker-to-drive- cultural-change-in-gaming/). ~~~ bampolampy We are just starting a beta for our new CI flow which follows the container paradigm very closely. It allows you to build docker compose stacks for your various application images, and run your CI/CD pipeline locally, exactly as it would get run on our hosted platform. If anyone is interested in joining our beta, just drop me an email: brendan at codeship.com. ------ yebyen I honestly don't know how much those things cost (I have heard some people say AWS is not cheap, but compared to buying your own hardware maybe all of this stuff is very cheap). The point of asking is, my company has not found a clear place to use Docker directly, but we do use it indirectly through the Deis project, and CoreOS. My experience with Deis has been wonderful. If you ever looked at Heroku but got to the pricing page and didn't look any further, Deis has the same workflow (and much of the same stack, Cedar) as Heroku. 
The whole thing is built on docker containers, and designed with failover in mind. I see that Codeship costs a fair amount of money on the higher end of usage; for the cost of a few months on their enterprise subscription, you could probably build your own CI cluster on Deis. CoreOS also targets AWS, and I don't have any idea what your AWS environment looks like, but you could likely build a Deis cluster on AWS just as easily as you could on your own hardware, if not easier. I try not to think of Docker as an end so much as a means. For me, it doesn't even matter that it's using Docker under the hood, but if you have containerized your application, Deis can work on already built images just as easily as Heroku works on git repositories. I can't use Heroku for serious things because it costs too much, and we're small potatoes. But I've got plenty of hardware lying around, and some slightly bigger iron that if I'm being honest is probably underutilized, this is based on knowing that it hosts multiple kernels using virtualization, and the only reason those different tasks run on different machines is to keep them nominally isolated for increased maintainability. Containerization is "virtualization lite." If I can take those services and jobs that all run on their own virtual machines and make them all run on Deis instead (or even just the ones that don't maintain any internal state of their own), I will gain a resource boost by not having to virtualize all of that separate virtual hardware and individual Linux kernels anymore. The marginal cost of another container is lower than a full virtual machine. If it fits into CI, the maintenance cost is lower too, because that's one less individual system that needs to get apt-get upgrade. If we were better at adopting things like chef, this might not be an argument, but for us it still is. I inherited a lot of legacy stuff. Your situation might not be anything like mine. If you are already drinking the CI kool-aid, you might not honestly have much to gain from Docker that would compel you to invest time and effort into using it to host your apps. If Deis looks a little complicated, you might check out dokku. Your laptop probably doesn't have enough power to spin up a whole Deis cluster, but you can still get "almost like the Heroku experience" using Dokku, with Docker under the hood providing support. I'm not going to promise you that it will cut your AWS bills in half, but if you did drink the kool-aid, it might be worth checking out just how much of your currently required development infrastructure and outsourced hosting needs can just go away when you add containerization to your developer toolbelt. ------ curiously well not exactly a business case but it's obviously a win for developers. I shed a single tear when I realized I could just fire up flask + nginx + uwsgi within seconds after installing docker. For a business perspective, it's a little tricky. I guess it can help if you need to offer an onsite version of your SaaS app and the enterprise client had strict rules about being on site. What would really make docker kickass is if they had a way to encrypt all the source code somehow and protect it. ------ scraymer Maybe you have looked already and it wasn't useful to you but on the Docker website it has some pretty good marketing to explain its usefulness: [https://www.docker.com/whatisdocker](https://www.docker.com/whatisdocker) Why Use Docker: "How does this help you build better software? 
When your app is in Docker containers, you don’t have to worry about setting up and maintaining different environments or different tooling for each language. Focus on creating new features, fixing issues and shipping software." Business Case: "...With Docker, you can easily take copies of your live environment and run on any new endpoint running Docker..." ~~~ dmarg Yeah, I looked at the Docker website. I feel that Docker is super good at marketing and wanted to get some other opinions. ~~~ deftek Here is CoreOS opinion on docker: [https://coreos.com/blog/rocket/](https://coreos.com/blog/rocket/)
{ "perplexity_score": 754.8, "pile_set_name": "HackerNews" }
723 P.2d 394 (1986) L. Lynn ALLEN and Merle Allen, Plaintiffs and Respondents, v. Thomas M. KINGDON and Joan O. Kingdon, Defendants and Appellants. No. 18290. Supreme Court of Utah. July 29, 1986. H. James Clegg, Scott Daniels, Salt Lake City, for defendants and appellants. Boyd M. Fullmer, Salt Lake City, for plaintiffs and respondents. HOWE, Justice: The plaintiffs Allen (buyers) brought this action for the return of all money they had paid on an earnest money agreement to purchase residential real estate. The defendants Kingdon (sellers) appeal the trial court's judgment that the agreement had been rescinded by the parties and that the buyers were entitled to a full refund. *395 On February 12, 1978, the buyers entered into an earnest money agreement to purchase the sellers' home for $87,500. The agreement provided for an immediate deposit of $1,000, which the buyers paid, to be followed by an additional down payment of $10,000 by March 15, 1978. The buyers were to pay the remainder of the purchase price at the closing which was set on or before April 15, 1978. The agreement provided for the forfeiture of all amounts paid by the buyers as liquidated and agreed damages in the event they failed to complete the purchase. The buyers did not pay the additional $10,000, but paid $9,800 because the parties later agreed on a $200 deduction for a light fixture the sellers were allowed to take from the home. An inscription on the $9,800 check stated all monies paid were "subject to closing." There were several additional exchanges between the parties after the earnest money agreement was signed. The buyers requested that the sellers fix the patio, which the sellers refused to do. The buyers asked that the sellers paint the front of the home, which Mr. Kingdon agreed to do, but did not accomplish. The parties eventually met to close the sale. The buyers insisted on a $500 deduction from the purchase price because of the sellers' failure to paint. The sellers refused to convey title unless the buyers paid the full purchase price. Because of this impasse, the parties did not close the transaction. Mrs. Allen and Mrs. Kingdon left the meeting, after which Mr. Kingdon orally agreed to refund the $10,800, paid by the buyers. However, three days later, the sellers' attorney sent a letter to the buyers advising them that the sellers would retain enough of the earnest money to cover any damages they would incur in reselling the home. The letter also stated that the buyers could avoid these damages by closing within ten days. The buyers did not offer to close the sale. The home was eventually sold for $89,100, less a commission of $5,346. Claiming damages in excess of $15,000, the sellers retained the entire $10,800 and refused to make any refund to the buyers. The trial court found that the parties had orally rescinded their agreement and ordered the sellers to return the buyers' payments, less $1,000 on a counterclaim of the sellers, which award is not challenged on this appeal. The sellers first contend that the trial court erred in holding that our statute of frauds permits oral rescission of a written executory contract for the sale of real property. 
U.C.A., 1953, § 25-5-1 provides: No estate or interest in real property, other than leases for a term not exceeding one year, nor any trust or power over or concerning real property or in any manner relating thereto, shall be created, granted, assigned, surrendered or declared otherwise than by operation of law, or by deed or conveyance in writing subscribed by the party creating, granting, assigning, surrendering or declaring the same, or by his lawful agent thereunto authorized by writing. (Emphasis added.) In Cutwright v. Union Savings & Investment Co., 33 Utah 486, 491-92, 94 P. 984, 985 (1908), this Court interpreted section 25-5-1 as follows: No doubt the transfer of any interest in real property, whether equitable or legal, is within the statute of frauds; and no such interest can either be created, transferred, or surrendered by parol merely.... No doubt, if a parol agreement to surrender or rescind a contract for the sale of lands is wholly executory, and nothing has been done under it, it is within the statute of frauds, and cannot be enforced any more than any other agreement concerning an interest in real property may be. (Emphasis added.) In that case, the buyer purchased a home under an installment contract providing for the forfeiture of all amounts paid in the event the buyer defaulted. The buyer moved into the home but soon discontinued payments. He informed the seller that he would make no more payments on the contract, surrendered the key to the house, and vacated the premises. Soon thereafter, an assignee of the buyer's interest informed the seller that he intended to make the payments *396 under the contract and demanded possession. The seller refused to accept the payments, claiming that the contract had been mutually rescinded on the buyer's surrender of possession. We held that the statute of frauds generally requires the surrender of legal and equitable interests in land to be in writing. Where, however, an oral rescission has been executed, the statute of frauds may not apply. In Cutwright, surrender of possession by the buyer constituted sufficient part performance of the rescission agreement to remove it from the statute of frauds. This exception is one of several recognized by our cases. We have also upheld oral rescission of a contract for the sale of land when the seller, in reliance on the rescission, enters into a new contract to resell the land. Budge v. Barron, 51 Utah 234, 244-45, 169 P. 745, 748 (1917). In addition, an oral rescission by the buyer may be enforceable where the seller has breached the written contract. Thackeray v. Knight, 57 Utah 21, 27-28, 192 P. 263, 266 (1920). In the present case, the oral rescission involved the surrender of the buyers' equitable interest in the home under the earnest money agreement. Further, the rescission was wholly executory. There is no evidence of any part performance of the rescission or that the buyers substantially changed their position in reliance on the promise to discharge the contract. On the contrary, three days after the attempted closing, the sellers informed the buyers that they intended to hold them to the contract. It was only after the buyers continued in their refusal to close that the sellers placed the home on the market. The buyers argue that the weight of authority in the United States is to the effect that an executory contract for the sale of land within the statute of frauds may be orally rescinded. 
This may indeed be the case when there are acts of performance of the oral agreement sufficient to take it out of the statute of frauds. See Annot., 42 A.L.R.3d 242, 251 (1972). In support of their contention that an oral rescission of an earnest money agreement for the purchase of land is valid absent any acts of performance, the buyers rely on Niernberg v. Feld, 131 Colo. 508, 283 P.2d 640 (1955). In that case, the Colorado Supreme Court upheld the oral rescission of an executory contract for the sale of land under a statute of frauds which, like Utah's, applies specifically to the surrender of interests in land. The Colorado court concluded that the statute of frauds concerns the making of contracts only and does not apply to their revocation. However, the court did not attempt to reconcile its holding with the contradictory language of the controlling statute. For a contrary result under a similar statute and fact situation, see Waller v. Lieberman, 214 Mich. 428, 183 N.W. 235 (1921). In light of the specific language of Utah's statute of frauds and our decision in Cutwright v. Union Savings & Investment Co., supra, we decline to follow the Colorado case. We note that the annotator at 42 A.L.R.3d 257 points out that in Niernberg the rescission was acted upon in various ways. We hold in the instant case that the wholly executory oral rescission of the earnest money agreement was unenforceable under our statute of frauds. Nor were the buyers entitled to rescind the earnest money agreement because of the sellers' failure to paint the front of the home as promised. Cf. Thackeray v. Knight, 57 Utah at 27-28, 192 P. at 266 (buyer's oral rescission of contract for sale of land was valid when seller breached contract). The rule is well settled in Utah that if the original agreement is within the statute of frauds, a subsequent agreement that modifies any of the material parts of the original must also satisfy the statute. Golden Key Realty, Inc. v. Mantas, 699 P.2d 730, 732 (Utah 1985). An exception to this general rule has been recognized where a party has changed position by performing an oral modification so that it would be inequitable to permit the other party to found a claim or defense on the original agreement as unmodified. White v. Fox, 665 P.2d 1297, 1301 (Utah 1983) *397 (citing Bamberger Co. v. Certified Productions, Inc., 88 Utah 194, 201, 48 P.2d 489, 492 (1935), aff'd on rehearing, 88 Utah 213, 53 P.2d 1153 (1936)). There is no indication that the buyers changed their position in reliance on the sellers' promise to paint the front of the house. Thus, equitable considerations would not preclude the sellers from raising the unmodified contract as a defense to the claim of breach. The fact that the parties executed several other oral modifications of the written contract does not permit the buyers to rescind the contract for breach of an oral promise on which they did not rely to their detriment. We therefore hold that the buyers were not entitled to rescind the earnest money agreement because of the sellers' failure to perform an oral modification required to be in writing under the statute of frauds. The buyers also contend that they are entitled to the return of the $10,800 because the inscription on the $9,800 check stated that all monies were paid "subject to closing." The buyers argue that by conditioning the check in this manner they may, in effect, rewrite the earnest money agreement and relieve themselves of any liability for their own failure to close the sale. 
We cannot accept this argument. The buyers were under an obligation to pay the monies unconditionally. The sellers' acceptance of the inscribed check cannot be construed as a waiver of their right to retain the $10,800 when the buyers failed to perform the agreement. Having concluded that the buyers breached their obligation under the earnest money agreement, we must next consider whether the liquidated damages provision of the agreement is enforceable. That provision provided that the sellers could retain all amounts paid by the buyers as liquidated and agreed damages in the event the buyers failed to complete the purchase. The general rules in Utah regarding enforcement of liquidated damages for breach of contract have been summarized as follows: Under the basic principles of freedom of contract, a stipulation to liquidated damages for breach of contract is generally enforceable. Where, however, the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any possible loss that might have been contemplated that it shocks the conscience, the stipulation will not be enforced. Warner v. Rasmussen, 704 P.2d 559, 561 (Utah 1985) (citations omitted). In support of their contention that the liquidated damages are not excessive compared to actual damages, the sellers assert that they offered evidence of actual damages in excess of $15,000. However, the trial court disagreed and found the amount of liquidated damages excessive. The record indicates that the only recoverable damages sustained by the sellers resulted from the resale of the home at a lower net price amounting to $3,746 (the difference between the contract price of $87,500 and the eventual selling price, less commission, of $83,754). We agree that $10,800 is excessive and disproportionate when compared to the $3,746 loss of bargain suffered by the sellers. Since the buyers did not ever have possession of the property, the other items of damage claimed by the sellers (interest on mortgage, taxes, and utilities) are not recoverable by them. Perkins v. Spencer, 121 Utah 468, 243 P.2d 446 (1952). Therefore, the sellers are not entitled to retain the full amount paid, but may offset their actual damages of $3,746 against the buyers' total payments. See Soffe v. Ridd, 659 P.2d 1082 (Utah 1983) (seller was entitled to actual damages where liquidated damages provision was held unenforceable). We reverse the trial court's judgment that the earnest money agreement was rescinded and conclude that the buyers breached their obligation to close the transaction. However, we affirm the judgment below that the liquidated damages provided for were excessive and therefore not recoverable. *398 The case is remanded to the trial court to amend the judgment to award the buyers $7,054, less $1,000 awarded by the trial court to the sellers on their counterclaim which is not challenged on this appeal. No interest or attorney fees are awarded to either party inasmuch as the trial court awarded none and neither party has raised the issue on appeal. HALL, C.J., and STEWART and DURHAM, JJ., concur. ZIMMERMAN, Justice (concurring): I join the majority in its disposition of the various issues. However, the majority quotes from Warner v. 
Rasmussen, 704 P.2d 559 (Utah 1985), to the effect that contractual provisions for liquidated damages will be enforced unless "the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any loss that might have been contemplated that it shocks the conscience." The Court then finds that the amount of the liquidated damages provided for in the agreement is "excessive and disproportionate" when compared to the actual loss suffered by the sellers, thus implying that in the absence of a disparity as great as that which exists here (actual loss is approximately one-third of the penalty), the standard of Warner v. Rasmussen will not be satisfied. I think an examination of our cases should suggest to any thoughtful reader that, in application, the test stated in Warner is not nearly as accepting of liquidated damage provisions as the quoted language would suggest. In fact, I believe this Court routinely applies the alternative test of Warner—that the liquidated damages must bear some reasonable relationship to the actual damages—and that we carefully scrutinize liquidated damage awards. I think it necessary to say this lest the bar be misled by the rather loose language of Warner and its predecessors.
{ "perplexity_score": 260.9, "pile_set_name": "FreeLaw" }
Today the Texas Well Owner Network is hosting a water well screening from 8:30–10am at the Lampasas County Farm Bureau building. It gives rural residents a chance to have their well water screened. For sampling information, contact the Texas A&M AgriLife Extension Service office in the County Annex on Pecan St. in Lampasas. A meeting explaining screening results will be held at 6pm Thursday at the Farm Bureau building.
{ "perplexity_score": 797.4, "pile_set_name": "Pile-CC" }
Fourth Court of Appeals San Antonio, Texas May 24, 2019 No. 04-19-00250-CR Jesus GAONA, Appellant v. The STATE of Texas, Appellee From the 175th Judicial District Court, Bexar County, Texas Trial Court No. 2018CR9347 Honorable Catherine Torres-Stahl, Judge Presiding ORDER Pursuant to a plea-bargain agreement, appellant pleaded guilty to possession of a controlled substance in an amount between 4 grams and 200 grams with the intent to deliver. The trial court assessed punishment at five years’ imprisonment with a $1,500.00 fine. On March 26, 2019, the trial court signed a certification of defendant’s right to appeal stating that this “is a plea-bargain case, and the defendant has NO right of appeal.” See TEX. R. APP. P. 25.2(a)(2). “In a plea bargain case ... a defendant may appeal only: (A) those matters that were raised by written motion filed and ruled on before trial, or (B) after getting the trial court’s permission to appeal.” Id. 25.2(a)(2). The clerk’s record, which contains a written plea bargain, establishes the punishment assessed by the court does not exceed the punishment recommended by the prosecutor and agreed to by the defendant. See id. The clerk’s record does not include a written motion filed and ruled upon before trial; nor does it indicate that the trial court gave its permission to appeal. See id. The trial court’s certification, therefore, appears to accurately reflect that this is a plea-bargain case and that appellant does not have a right to appeal. We must dismiss an appeal “if a certification that shows the defendant has the right of appeal has not been made part of the record.” Id. 25.2(d). This appeal will be dismissed pursuant to Texas Rule of Appellate Procedure 25.2(d), unless an amended trial court certification showing that appellant has the right to appeal is made part of the appellate record by June 24, 2019. See TEX. R. APP. P. 25.2(d), 37.1; Daniels v. State, 110 S.W.3d 174 (Tex. App.—San Antonio 2003, order). We ORDER all appellate deadlines be suspended until further order of the court. _________________________________ Irene Rios, Justice IN WITNESS WHEREOF, I have hereunto set my hand and affixed the seal of the said court on this 24th day of May, 2019. ___________________________________ KEITH E. HOTTLE, Clerk of Court
{ "perplexity_score": 218.2, "pile_set_name": "FreeLaw" }
Q: Set WordPress permalinks directly in httpd.conf? Is it possible to configure WordPress permalinks directly in Apache httpd.conf? I have a server situation (Apache 2.2.3 CentOS PHP5.1.6) where I can't use .htaccess for performance reasons, but can use httpd.conf. The admin says that mod_rewrite is enabled, but AllowOverride is not, and I can't change those settings. And I need to restrict the permalinks to just the "blog" directory. This is what would go in .htaccess but needs to go into httpd.conf:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>

Thanks...

A: Put this within the container for your site.

<Directory /path/to/blog/>
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
</Directory>
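A practical note beyond the original answer (my addition, not part of the thread): the <Directory> block is normally placed inside the site's <VirtualHost> container in httpd.conf, and unlike .htaccess edits it only takes effect once Apache reloads its configuration. A minimal sketch for a CentOS host where Apache runs as the httpd service (an assumption; adjust to your init system):

# Check the syntax of httpd.conf before applying the change
apachectl configtest
# Reload Apache so the new <Directory> rewrite rules take effect
service httpd reload    # a graceful restart (apachectl graceful) also works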
{ "perplexity_score": 1041.5, "pile_set_name": "StackExchange" }
Feel the moringa difference with optimum nutrition in this natural health supplement. This health food has the benefits of many different health supplements all in one natural food.
{ "perplexity_score": 880.9, "pile_set_name": "OpenWebText2" }
######################################################################## ## ## Copyright (C) 1994-2020 The Octave Project Developers ## ## See the file COPYRIGHT.md in the top-level directory of this ## distribution or <https://octave.org/copyright/>. ## ## This file is part of Octave. ## ## Octave is free software: you can redistribute it and/or modify it ## under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## ## Octave is distributed in the hope that it will be useful, but ## WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with Octave; see the file COPYING. If not, see ## <https://www.gnu.org/licenses/>. ## ######################################################################## ## -*- texinfo -*- ## @deftypefn {} {} cstrcat (@var{s1}, @var{s2}, @dots{}) ## Return a string containing all the arguments concatenated horizontally ## with trailing white space preserved. ## ## For example: ## ## @example ## @group ## cstrcat ("ab ", "cd") ## @result{} "ab cd" ## @end group ## @end example ## ## @example ## @group ## s = [ "ab"; "cde" ]; ## cstrcat (s, s, s) ## @result{} "ab ab ab " ## "cdecdecde" ## @end group ## @end example ## @seealso{strcat, char, strvcat} ## @end deftypefn function st = cstrcat (varargin) if (nargin == 0) ## Special because if varargin is empty, iscellstr still returns ## true but then "[varargin{:}]" would be of class double. st = ""; elseif (iscellstr (varargin)) st = [varargin{:}]; else error ("cstrcat: arguments must be character strings"); endif endfunction ## Test the dimensionality ## 1d %!assert (cstrcat ("ab ", "ab "), "ab ab ") ## 2d %!assert (cstrcat (["ab ";"cde"], ["ab ";"cde"]), ["ab ab ";"cdecde"]) %!assert (cstrcat ("foo", "bar"), "foobar") %!assert (cstrcat (["a "; "bb"], ["foo"; "bar"]), ["a foo"; "bbbar"]) %!assert (cstrcat (), "") ## Test input validation %!error cstrcat (1, 2)
{ "perplexity_score": 487.3, "pile_set_name": "Github" }
From Sky High to the Deep Sea

PTI Interiors’ principal, Yofianto Soetono, is a trained architect who graduated from Parahyangan Catholic University and comfortably sets himself among the top interior designers in the country. His corporate fit-out designs have filled the indoor parts of numerous buildings in Jakarta—from the ground floors to the top levels—while his hospitality designs can be seen in various hotels and exotic resorts throughout the South East Asia region. Today, his work extends far below the ground as he explores underwater photography. We talked to the designer at his office on one fine morning. I graduated as an architect in 1992 and started working in the field of architecture. After the monetary crisis in 1998, we began to spread our wings into the direction of interior design. That’s when I commenced the business by doing office fit-outs. As of today, PTI Interiors has been involved in several hospitality design projects. Since the start of my firm, I have also become more engrossed in the field. The more work I take on, the more certain I become that I have found my passion for interior design. From the business point of view, we can complete an interior design project more quickly than an architecture project, and this is good for our cash flow. When did you first join PTI? I joined the company in 1995 when it was still known as Peddle Thorp Indonesia. In 1998, it was given a new name, Prada Tata International. What types of projects do PTI Interiors mostly take on? We have done a lot of designs for the head offices of international banks in Jakarta. In addition, we were also commissioned to do several hospitality projects in Jakarta, Medan and Laos. What are the challenges encountered in each of the projects? For office projects, we obviously need to adjust the design to the needs of an office including the facilities for activities. In addition, we also have to think about the working hours in the project and the available budget. Meanwhile, the designs of the international corporations’ head offices have to get a seal of approval from their headquarters, something that is quite daunting for us—although later in the field a lot of things are bound to change. For hotel projects, we have to consider the following two points: hotel operators must follow the standard of their hotels while hotel owners usually have one or two ideas to implement in their establishments. At the same time we have to also consider the construction cost, which should be compatible with the budget and investment value. We have to bridge the needs of both clients and operators, as well as the requirements demanded by hotel guests. Can you tell us about your design process? Each project has a different approach. Most office projects start with a brief analysis and a study of room availability. From there we can explore more about the design and then submit a report and an initial concept. After the documents are approved, we develop the schematic design, followed by the work plan. During these stages, we would collaborate with other consultants, such as for the mechanical and electrical systems, graphic/signage designs and others. In your own words, how do you view office design today? The transformation of office design has been significant. When I started designing office fit-outs, the designs were all hierarchical—everyone occupied rooms which were arranged according to their ranks at the company, from staff members to top management levels.
Then came the trend of open-plan offices where rooms were only built for top-tier employees, while common workers occupied cubicles in one large area. With the advancement of technology, office design is now moving forward with the concept of flex office, an activity-based working space. For several head office designs such as Citi Indonesia, we feature a non-dedicated desk concept where workers can work anywhere, thus creating a more efficient and conducive environment for the workers to collaborate. Meanwhile, demand for collaboration areas is rather high, although several offices still do not have activities that require that many frequent interactions. When I googled your name, most of the search results show your works in underwater photography. When did you start this interest? For a long time, friends and colleagues had been telling me to take up diving, but I just wasn’t interested. In 2008, one of them suggested that we go diving and he brought along his underwater photography equipment. I developed an interest in photography when I was in college, and was therefore keen to try this new hobby because it meant I could document the glorious beauty of the underwater world, which is different from what we see every day. This is especially true in Indonesia, with its majestic oceans. It is beautiful around here, but in eastern Indonesia, the view is even more breathtaking. Each part of the country boasts its own characteristics, and I happen to enjoy this adrenalin-pumping activity. Every diving spot poses different challenges, but there are always local guides who can point us in the right directions. In Manado, for instance, the characteristic is similar to a wall. It is different from Raja Ampat where the coral reefs are more diverse. I enjoy doing this because I can see beautiful views amidst my hectic daily activities as a design consultant who has to create equally beautiful built-environments.

Barbara earned her bachelor's degree in architecture from the Interior Architecture Program of the University of Indonesia in 2013. Historical or heritage buildings, as well as utilitarian design, fascinate her as it is the interaction between people and architecture that remains her favourite topic to explore. Besides architecture, her interests include design, handcrafts, literature and social issues.
{ "perplexity_score": 314.3, "pile_set_name": "Pile-CC" }
WipEout HD delayed due to technical issue WipEout HD has yet to be released because of a "specific technical problem", but Sony hopes it will be out "before the end of the year". "There is a specific technical problem with WipEout that we have to solve. I can't go into details but it is a really, really tricky technical problem that no region has been able to solve at the moment," Sony Europe boss David Reeves told us when we hassled him about the game's progress this week in LA. "I think it will come out before the end of the year but it is something that was just very difficult to get to grips with." WipEout HD features eight tracks borrowed and spruced up from the excellent PSP games along with online multiplayer and the promise of downloadable content. It also aims to run at 60 frames-per-second in 1080p.
{ "perplexity_score": 379.9, "pile_set_name": "Pile-CC" }
Water resources data, California, water year 2004, volume 2. Pacific Slope basins from Arroyo Grande to Oregon state line except Central Valley
Water Data Report CA-04-2

Abstract
Water-resources data for the 2004 water year for California consist of records of stage, discharge, and water quality of streams, stage and contents in lakes and reservoirs, and water levels and water quality in wells. Volume 2 contains discharge records for 134 gaging stations, stage and content records for 8 lakes and reservoirs, gage-height records for 8 stations, and water-quality records for 36 streamflow-gaging stations and 4 water-quality partial-record stations. Also included are data for 1 low-flow partial-record station, and 1 miscellaneous-measurement station. These data represent that part of the National Water Data System operated by the U.S. Geological Survey and cooperating State and Federal agencies in California.

Additional publication details
Publication type: Report
Publication Subtype: USGS Numbered Series
Title: Water resources data, California, water year 2004, volume 2. Pacific Slope basins from Arroyo Grande to Oregon state line except Central Valley
{ "perplexity_score": 803.6, "pile_set_name": "Pile-CC" }
The Meadowbrook School of Weston

The Meadowbrook School of Weston is a coeducational, nonsectarian, independent day school for students in grades from junior kindergarten through eight. Meadowbrook is located on a 26-acre campus in Weston, Massachusetts, enrolls approximately 300 students, and employs 76 faculty and staff. The faculty:student ratio is 1:7.

History
The Meadowbrook School was founded in 1923 by Robert Winsor on its current grounds in Weston, Massachusetts. The Meadowbrook School was the continuation of Winsor's longstanding commitment to education in the greater Weston area, which began with the Pigeon Hill School in 1903. Winsor strove to improve the town of Weston through the development of its educational system. He donated the present land and collaborated with parents to design the school. Over the past decade, the Meadowbrook School has broadened its academic scope, founding a middle school in 2001 and expanding its innovative, global curriculum. The school continues to modernize through new and improved facilities and technologies.

Facts
The Meadowbrook School was founded in 1923 and has approximately 300 students in grades Junior Kindergarten through 8. The school is located on 26 acres of land in Weston, Mass., and the grounds include five buildings, as well as athletics fields, tennis courts, swimming pools, and the "Woodyville" outdoor classroom. Meadowbrook is accredited by the Association of Independent Schools in New England (AISNE). The school is divided between the Lower School, which includes Junior Kindergarten through 5th grades, and the Middle School, which is grades 6-8. Twenty-eight percent of students come from a diverse background. The school employs 70 faculty and staff with a faculty/student ratio of 1:7. In 2011-2012, Meadowbrook's Annual Fund raised $1.25 million with 93% parent and 96% faculty participation. The school's endowment is $16.5 million as of September 2011.

Academics
Meadowbrook is known for academic rigor, and the curriculum encompasses language arts, mathematics, science, social studies, foreign language, and technology. In addition there is a focus on global education and social-emotional learning. As students progress through the grades, there is an emphasis on inferential thinking and sophisticated approaches to discussion, writing, and reading comprehension. Students are also given more choice as to which arts and athletics courses they take once they reach the Middle School. Meadowbrook has a variety of unique programs to aid in experiential learning as well as traditional learning.

Athletics
Meadowbrook offers a variety of athletic opportunities, from personal fitness and intramural play to junior varsity and varsity teams. All students are required to participate in sports/fitness, so that they learn not only about healthy lifestyles, but also about competition and sportsmanship. Athletic offerings vary depending on the season.

Fall Sports
Soccer (Gender-split)
Cross-country (co-ed)
Fitness (co-ed)

Winter Sports
Basketball (Gender-split)
Squash (co-ed)
Fitness (co-ed)

Spring Sports
Lacrosse (Gender-split)
Baseball
Softball
Track and field (co-ed)
Rock Climbing (co-ed)
Tennis (co-ed)

If a student has an activity that conflicts with sports times, he/she can waive sports and not participate, but only for one season.

The arts
Meadowbrook students take part in both performing and visual arts. The school's annual Fine Arts Night showcases students' two- and three-dimensional artwork and video productions.
Parents and alumni are invited to view students' displayed drawings, collages, pots, and wooden carvings, as well as dance, choral, and musical performances.

Performing Arts
Junior Kindergartners through 5th graders have music classes that meet multiple times per week. Middle School students choose their own arts courses and also have the opportunity to participate in rock band, jazz band, and a school musical. Furthermore, they may pursue digital music production, set production, and lighting. Students are further encouraged to participate in the arts through weekly assemblies. The Halloween, Thanksgiving, and Holiday assemblies invite school-wide participation. Each grade has its own assembly, and the complexity of the performances increases as students progress, culminating with the middle school musical, in which participation is optional for any middle schooler, either by creating props or by acting in the musical.

Visual Arts
Meadowbrook's visual arts program relates to the curriculum of each specific classroom, striving to create a purposeful continuum between the arts and academics. The visual arts offerings encompass ceramics, painting, drawing, woodworking, sculpture, apparel design, film, and digital media arts. Junior Kindergartners through 5th graders have arts classes throughout the school week, while middle schoolers may elect visual arts classes from an array of offered courses.

Traditions

H.O.P.E.
H.O.P.E. (Helping Others Prosper Everywhere) is Meadowbrook's community service initiative. Student representatives from each grade meet to determine specific service goals and recipient organizations. H.O.P.E. activities include visits to assisted living/nursery facilities and local nonprofits as well as service projects on Meadowbrook's campus itself. Annual philanthropic events include Meadowbrook's dance-a-thon for charity, Pan-Mass Challenge bike-a-thon, the Meadowbrook Marathon for the Boston One Fund, and clothing and book drives for youth aid abroad.

Assemblies
Assemblies are an integral part of Meadowbrook's community-building efforts and typically take place on Friday mornings. Each assembly has a different theme and involves a different group of children. In addition to assemblies that are attended by the entire school, each grade has its own assembly during which they present different projects and performances to their parents and the Meadowbrook community. Notable assemblies include the annual Thanksgiving Assembly, Holiday Assembly, and Artists-in-Residence assemblies. The Thanksgiving Assembly at Meadowbrook welcomes the families of students, particularly grandparents and grandfriends, to the school on the Tuesday before the holiday. Songs and routines such as the "Turkey Tango" are performed yearly. Similarly, the Holiday Assembly invites every member of the Meadowbrook community to congregate before school breaks for winter break. Children perform a variety of songs, such as the "12 Days of Christmas," "Reindeer Twist," and "Holiday Lights" with flashlights. The two or three annual Artists-in-Residence assemblies are open to children in 3rd grade and up and allow students to showcase a special talent that they may have. Meadowbrook assemblies unite students and their families.

References

External links
meadowbrook-ma.org

Category:Private elementary schools in Massachusetts
Category:Private middle schools in Massachusetts
Category:Schools in Middlesex County, Massachusetts
Category:Educational institutions established in 1923
{ "perplexity_score": 208.1, "pile_set_name": "Wikipedia (en)" }
Q: Do you need to put grease on a car battery? Some cars have grease on the battery terminals and some don't. I have just replaced a battery. Is it needed? Why? Can I just go without? Also I want to clean the top part of my terminals with baking soda and water mix. Do I have to remove the grease first? A: The grease is there to prevent corrosion on the battery terminals; when you put the connector on and tighten it down, the grease gets squeezed out and what's left prevents corrosion where there's no metal to metal contact. If your battery came greased then there's no reason to clean the terminals unless the grease got rubbed off and the terminals corroded. If there's no corrosion there's no need to clean them. If there's no grease or insufficient grease after tightening then I like to put a dab of vaseline on the top of the contacts to make sure air can't get in.
{ "perplexity_score": 438.2, "pile_set_name": "StackExchange" }
John Hayward (Newfoundland politician)

John Hayward (1819 – March 13, 1885) was a lawyer, judge and politician in Newfoundland. He served in the Newfoundland House of Assembly from 1852 to 1866. He was born and educated in Harbour Grace. He studied law with George Henry Emerson and was called to the Newfoundland bar in 1841. He served as chief clerk and registrar for the northern circuit court and as sub-collector of customs at Harbour Grace. In 1849, John and his young family headed for Washington County in Wisconsin. They traveled for a week and a half. They took a boat from Newfoundland to New York, then up the canals to Albany, another boat to Buffalo. They traveled to Wisconsin by wagon and bought a farm and had land cleared. After a few months, in a letter to his father-in-law, he seemed full of optimism. In spite of that, he was back in Harbour Grace at the end of 1850, and was sub-collector of Customs in 1851. He was elected to the assembly for Conception Bay in 1852 and for Harbour Grace in 1855 and again in 1859. He served as Solicitor General in the provincial cabinet until 1861. The results of the 1861 election in Harbour Grace were set aside due to violence at the polls; Hayward was re-elected in a by-election held later that year. He was elected again in 1865 and was named Solicitor General again. Hayward was opposed to the union with Canada proposed in 1866. He retired from politics in August of that year and was named an assistant justice in the Supreme Court of Newfoundland. Hayward retired from the bench due to illness in 1884 and died the following year.

References

Category:Members of the Newfoundland and Labrador House of Assembly
Category:1885 deaths
Category:1819 births
Category:Judges in Newfoundland and Labrador
Category:Newfoundland Colony judges
Category:People from Harbour Grace
{ "perplexity_score": 15.1, "pile_set_name": "Wikipedia (en)" }
Q: how to include the commands of envsetup.sh in Android? Hi I have run the command ./build/envsetup.sh on my source code of android while compiling. I want to use the following commands present in the same file. -cgrep -mmm -mm but I get the following errors!! Command 'pgrep' from package 'procps' (main) Command 'fgrep' from package 'grep' (main) Command 'egrep' from package 'grep' (main) Command 'sgrep' from package 'sgrep' (universe) Command 'grep' from package 'grep' (main) Command 'lgrep' from package 'lv' (universe) Command 'rgrep' from package 'grep' (main) Command 'zgrep' from package 'gzip' (main) Command 'dgrep' from package 'debian-goodies' (main) Command 'vgrep' from package 'atfs' (universe) Command 'ngrep' from package 'ngrep' (universe) Command 'wcgrep' from package 'kdesdk-scripts' (main) Command 'agrep' from package 'agrep' (multiverse) Command 'xgrep' from package 'xgrep' (universe) cgrep: command not found How should I resolve this? Do I need to make any changes in the envsetup.sh file? A: The mm, mmm and cgrep are defined as shell functions. What does the following return for you: set | egrep "^mmm \(\)|^mm \(\)|^cgrep \(\)" On my environment I get: cgrep () mm () mmm () If you are getting nothing back, you are probably sourcing the script wrong. Remember you can't just make the script executable and run it as it won't be exporting the functions back to your shell. You must use either: . build/envsetup.sh OR source build/envsetup.sh to ensure it runs in parent shell.
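To make the accepted answer concrete, here is a minimal sketch of the intended workflow (the checkout path and search pattern are illustrative placeholders, not taken from the question):

cd ~/aosp                     # hypothetical path to your Android source checkout
source build/envsetup.sh      # defines cgrep, mm, mmm, etc. as functions in this shell
cgrep "foo"                   # greps the C/C++ sources in the tree for the placeholder pattern
mmm frameworks/base           # builds the modules under a specific directory
mm                            # builds the modules in the current directory

Because these are shell functions rather than standalone executables, they exist only in the session where envsetup.sh was sourced; executing the script as ./build/envsetup.sh runs it in a child shell and leaves the parent shell without the functions, which matches the "command not found" symptom above.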
{ "perplexity_score": 2838.1, "pile_set_name": "StackExchange" }
RSpec.describe Hanami::Controller::Configuration do before do module CustomAction end end let(:configuration) { Hanami::Controller::Configuration.new } after do Object.send(:remove_const, :CustomAction) end describe 'handle exceptions' do it 'returns true by default' do expect(configuration.handle_exceptions).to be(true) end it 'allows to set the value with a writer' do configuration.handle_exceptions = false expect(configuration.handle_exceptions).to be(false) end it 'allows to set the value with a dsl' do configuration.handle_exceptions(false) expect(configuration.handle_exceptions).to be(false) end it 'ignores nil' do configuration.handle_exceptions(nil) expect(configuration.handle_exceptions).to be(true) end end describe 'handled exceptions' do it 'returns an empty hash by default' do expect(configuration.handled_exceptions).to eq({}) end it 'allows to set an exception' do configuration.handle_exception ArgumentError => 400 expect(configuration.handled_exceptions).to include(ArgumentError) end end describe 'exception_handler' do describe 'when the given error is unknown' do it 'returns the default value' do expect(configuration.exception_handler(Exception)).to be(500) end end describe 'when the given error was registered' do before do configuration.handle_exception NotImplementedError => 400 end it 'returns configured value when an exception instance is given' do expect(configuration.exception_handler(NotImplementedError.new)).to be(400) end end end describe 'action_module' do describe 'when not previously configured' do it 'returns the default value' do expect(configuration.action_module).to eq(::Hanami::Action) end end describe 'when previously configured' do before do configuration.action_module(CustomAction) end it 'returns the value' do expect(configuration.action_module).to eq(CustomAction) end end end describe 'modules' do before do unless defined?(FakeAction) class FakeAction end end unless defined?(FakeCallable) module FakeCallable def call(_) [status, {}, ['Callable']] end def status 200 end end end unless defined?(FakeStatus) module FakeStatus def status 318 end end end end after do Object.send(:remove_const, :FakeAction) Object.send(:remove_const, :FakeCallable) Object.send(:remove_const, :FakeStatus) end describe 'when not previously configured' do it 'is empty' do expect(configuration.modules).to be_empty end end describe 'when prepare with no block' do it 'raises error' do expect { configuration.prepare }.to raise_error(ArgumentError, 'Please provide a block') end end describe 'when previously configured' do before do configuration.prepare do include FakeCallable end end it 'allows to configure additional modules to include' do configuration.prepare do include FakeStatus end configuration.modules.each do |mod| FakeAction.class_eval(&mod) end code, _, body = FakeAction.new.call({}) expect(code).to be(318) expect(body).to eq(['Callable']) end end it 'allows to configure modules to include' do configuration.prepare do include FakeCallable end configuration.modules.each do |mod| FakeAction.class_eval(&mod) end code, _, body = FakeAction.new.call({}) expect(code).to be(200) expect(body).to eq(['Callable']) end end describe '#format' do before do configuration.format custom: 'custom/format' BaseObject = Class.new(BasicObject) do def hash 23 end end end after do Object.send(:remove_const, :BaseObject) end it 'registers the given format' do expect(configuration.format_for('custom/format')).to eq(:custom) end it 'raises an error if the given format cannot be coerced into symbol' 
do expect { configuration.format(23 => 'boom') }.to raise_error(TypeError) end it 'raises an error if the given mime type cannot be coerced into string' do expect { configuration.format(boom: BaseObject.new) }.to raise_error(TypeError) end end describe '#mime_types' do before do configuration.format custom: 'custom/format' end it 'returns all known MIME types' do all = ["custom/format"] expect(configuration.mime_types).to eq(all + Hanami::Action::Mime::MIME_TYPES.values) end it 'returns correct values even after the value is cached' do configuration.mime_types configuration.format electroneering: 'custom/electroneering' all = ["custom/format", "custom/electroneering"] expect(configuration.mime_types).to eq(all + Hanami::Action::Mime::MIME_TYPES.values) end end describe '#default_request_format' do describe "when not previously set" do it 'returns nil' do expect(configuration.default_request_format).to be(nil) end end describe "when set" do before do configuration.default_request_format :html end it 'returns the value' do expect(configuration.default_request_format).to eq(:html) end end it 'raises an error if the given format cannot be coerced into symbol' do expect { configuration.default_request_format(23) }.to raise_error(TypeError) end end describe '#default_response_format' do describe "when not previously set" do it 'returns nil' do expect(configuration.default_response_format).to be(nil) end end describe "when set" do before do configuration.default_response_format :json end it 'returns the value' do expect(configuration.default_response_format).to eq(:json) end end it 'raises an error if the given format cannot be coerced into symbol' do expect { configuration.default_response_format(23) }.to raise_error(TypeError) end end describe '#default_charset' do describe "when not previously set" do it 'returns nil' do expect(configuration.default_charset).to be(nil) end end describe "when set" do before do configuration.default_charset 'latin1' end it 'returns the value' do expect(configuration.default_charset).to eq('latin1') end end end describe '#format_for' do it 'returns a symbol from the given mime type' do expect(configuration.format_for('*/*')).to eq(:all) expect(configuration.format_for('application/octet-stream')).to eq(:all) expect(configuration.format_for('text/html')).to eq(:html) end describe 'with custom defined formats' do before do configuration.format htm: 'text/html' end after do configuration.reset! end it 'returns the custom defined mime type, which takes the precedence over the builtin value' do expect(configuration.format_for('text/html')).to eq(:htm) end end end describe '#mime_type_for' do it 'returns a mime type from the given symbol' do expect(configuration.mime_type_for(:all)).to eq('application/octet-stream') expect(configuration.mime_type_for(:html)).to eq('text/html') end describe 'with custom defined formats' do before do configuration.format htm: 'text/html' end after do configuration.reset! end it 'returns the custom defined format, which takes the precedence over the builtin value' do expect(configuration.mime_type_for(:htm)).to eq('text/html') end end end describe '#default_headers' do after do configuration.reset! 
end describe "when not previously set" do it 'returns default value' do expect(configuration.default_headers).to eq({}) end end describe "when set" do let(:headers) { { 'X-Frame-Options' => 'DENY' } } before do configuration.default_headers(headers) end it 'returns the value' do expect(configuration.default_headers).to eq(headers) end describe "multiple times" do before do configuration.default_headers(headers) configuration.default_headers('X-Foo' => 'BAR') end it 'returns the value' do expect(configuration.default_headers).to eq( 'X-Frame-Options' => 'DENY', 'X-Foo' => 'BAR' ) end end describe "with nil values" do before do configuration.default_headers(headers) configuration.default_headers('X-NIL' => nil) end it 'rejects those' do expect(configuration.default_headers).to eq(headers) end end end end describe "#public_directory" do describe "when not previously set" do it "returns default value" do expected = ::File.join(Dir.pwd, 'public') actual = configuration.public_directory # NOTE: For Rack compatibility it's important to have a string as public directory expect(actual).to be_kind_of(String) expect(actual).to eq(expected) end end describe "when set with relative path" do before do configuration.public_directory 'static' end it "returns the value" do expected = ::File.join(Dir.pwd, 'static') actual = configuration.public_directory # NOTE: For Rack compatibility it's important to have a string as public directory expect(actual).to be_kind_of(String) expect(actual).to eq(expected) end end describe "when set with absolute path" do before do configuration.public_directory ::File.join(Dir.pwd, 'absolute') end it "returns the value" do expected = ::File.join(Dir.pwd, 'absolute') actual = configuration.public_directory # NOTE: For Rack compatibility it's important to have a string as public directory expect(actual).to be_kind_of(String) expect(actual).to eq(expected) end end end describe 'duplicate' do before do configuration.reset! 
configuration.prepare { include Kernel } configuration.format custom: 'custom/format' configuration.default_request_format :html configuration.default_response_format :html configuration.default_charset 'latin1' configuration.default_headers({ 'X-Frame-Options' => 'DENY' }) configuration.public_directory 'static' end let(:config) { configuration.duplicate } it 'returns a copy of the configuration' do expect(config.handle_exceptions).to eq(configuration.handle_exceptions) expect(config.handled_exceptions).to eq(configuration.handled_exceptions) expect(config.action_module).to eq(configuration.action_module) expect(config.modules).to eq(configuration.modules) expect(config.send(:formats)).to eq(configuration.send(:formats)) expect(config.mime_types).to eq(configuration.mime_types) expect(config.default_request_format).to eq(configuration.default_request_format) expect(config.default_response_format).to eq(configuration.default_response_format) expect(config.default_charset).to eq(configuration.default_charset) expect(config.default_headers).to eq(configuration.default_headers) expect(config.public_directory).to eq(configuration.public_directory) end it "doesn't affect the original configuration" do config.handle_exceptions = false config.handle_exception ArgumentError => 400 config.action_module CustomAction config.prepare { include Comparable } config.format another: 'another/format' config.default_request_format :json config.default_response_format :json config.default_charset 'utf-8' config.default_headers({ 'X-Frame-Options' => 'ALLOW ALL' }) config.public_directory 'pub' expect(config.handle_exceptions).to be(false) expect(config.handled_exceptions).to eq(ArgumentError => 400) expect(config.action_module).to eq(CustomAction) expect(config.modules.size).to be(2) expect(config.format_for('another/format')).to eq(:another) expect(config.mime_types).to include('another/format') expect(config.default_request_format).to eq(:json) expect(config.default_response_format).to eq(:json) expect(config.default_charset).to eq('utf-8') expect(config.default_headers).to eq('X-Frame-Options' => 'ALLOW ALL') expect(config.public_directory).to eq(::File.join(Dir.pwd, 'pub')) expect(configuration.handle_exceptions).to be(true) expect(configuration.handled_exceptions).to eq({}) expect(configuration.action_module).to eq(::Hanami::Action) expect(configuration.modules.size).to be(1) expect(configuration.format_for('another/format')).to be(nil) expect(configuration.mime_types).to_not include('another/format') expect(configuration.default_request_format).to eq(:html) expect(configuration.default_response_format).to eq(:html) expect(configuration.default_charset).to eq('latin1') expect(configuration.default_headers).to eq('X-Frame-Options' => 'DENY') expect(configuration.public_directory).to eq(::File.join(Dir.pwd, 'static')) end end describe 'reset!' do before do configuration.handle_exceptions = false configuration.handle_exception ArgumentError => 400 configuration.action_module CustomAction configuration.modules { include Kernel } configuration.format another: 'another/format' configuration.default_request_format :another configuration.default_response_format :another configuration.default_charset 'kor-1' configuration.default_headers({ 'X-Frame-Options' => 'ALLOW DENY' }) configuration.public_directory 'files' configuration.reset! 
end it 'resets to the defaults' do expect(configuration.handle_exceptions).to be(true) expect(configuration.handled_exceptions).to eq({}) expect(configuration.action_module).to eq(::Hanami::Action) expect(configuration.modules).to eq([]) expect(configuration.send(:formats)).to eq(Hanami::Controller::Configuration::DEFAULT_FORMATS) expect(configuration.mime_types).to eq(Hanami::Action::Mime::MIME_TYPES.values) expect(configuration.default_request_format).to be(nil) expect(configuration.default_response_format).to be(nil) expect(configuration.default_charset).to be(nil) expect(configuration.default_headers).to eq({}) expect(configuration.public_directory).to eq(::File.join(Dir.pwd, 'public')) end end end
{ "perplexity_score": 2518.4, "pile_set_name": "Github" }
The present invention relates to circuits for driving power transistors which control the application of electricity to a load; and more particularly to such circuits which provide some degree of protection against the adverse effects of a short circuit in the load coupled to the transistor. A recent addition to the family of power semiconductor devices is the insulated gate bipolar transistor (IGBT). This type of device is adapted for use in power supplies and other applications where it is required to switch currents on the order of several hundred amperes. One application of IGBT's is in high frequency inverters of X-ray generators. A desirable feature of this type of power semiconductor device, compared to thyristors, is the capability of surviving short circuit load conditions by self-limiting the fault current rather than relying solely upon conventional protection techniques. This self-limiting capability is a function of the conductivity of the IGBT and the magnitude of the drive voltage applied to its gate electrode. Higher gate voltages permit a greater fault current to flow through the transistor; thereby increasing the stress on the device and likelihood that it will fail under a short circuit condition before a current sensor can act to turn off the transistor's gate drive. It is therefore advantageous from the aspect of short circuit survival to limit the conductivity of the transistor, but this has the adverse effect of raising the on-state voltage drop across the IGBT. A higher voltage drop results in a larger power loss in the device and more power dissipation. When the IGBT is switching several hundred amperes, a difference of a few volts across the device amounts to a significant power dissipation. As a consequence, a designer seeking to incorporate an IGBT into a power switching circuit has been faced with the dilemma of choosing between a relatively high gate drive voltage in order to reduce the power dissipation in the device, while reducing short circuit protection; or utilizing a lower gate drive voltage to increase the short circuit survivability, while increasing the power dissipation of the device.
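To put the scale of this tradeoff in perspective (an illustrative calculation, not taken from the specification), on-state conduction loss is approximately the product of the collector-emitter voltage drop and the load current:

P_conduction = V_on × I_load
2 V × 300 A = 600 W
3 V × 300 A = 900 W

So at a representative load of 300 amperes, each additional volt of on-state drop, which is governed in part by the chosen gate drive level, adds roughly 300 watts of heat that the package and heat sink must remove.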
{ "perplexity_score": 224.8, "pile_set_name": "USPTO Backgrounds" }
Save Our State (disambiguation) Save Our State may refer to several things. The 1994 California ballot initiative Proposition 187, called the "Save Our State" initiative. The 2010 Oklahoma ballot initiative State Question 755, called "Save Our State", to ban the use of Sharia and international law in Oklahoma. Save Our State, a political group in Southern California. Save Our State (Australia), a political group in New South Wales. References
{ "perplexity_score": 139.1, "pile_set_name": "Wikipedia (en)" }
Q: NoClassDefFoundError with AppEngine sample projects I have installed AppEngine Eclipse plugin for Juno according to the instructions here: https://developers.google.com/appengine/docs/java/tools/eclipse However when running a number of the provided sample projects (eg. ShardedCounter), a NoClassDefFoundError would be thrown, stating that class com/google/appengine/tools/development/DevAppServerFactory$CustomSecurityManager$StackTraceAnalyzer cannot be found: java.lang.NoClassDefFoundError: com/google/appengine/tools/development/DevAppServerFactory$CustomSecurityManager$StackTraceAnalyzer at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.appHasPermission(DevAppServerFactory.java:334) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:379) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkAccess(DevAppServerFactory.java:408) at java.lang.ThreadGroup.checkAccess(ThreadGroup.java:299) at java.lang.Thread.init(Thread.java:336) at java.lang.Thread.<init>(Thread.java:608) at java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:541) at com.google.appengine.tools.development.ApiProxyLocalImpl$DaemonThreadFactory.newThread(ApiProxyLocalImpl.java:644) at java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:672) at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:721) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:270) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:255) at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:203) at com.google.apphosting.api.ApiProxy.makeAsyncCall(ApiProxy.java:190) at com.google.appengine.api.datastore.DatastoreApiHelper.makeAsyncCall(DatastoreApiHelper.java:56) at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127) at com.google.appengine.api.datastore.PreparedQueryImpl.asIterator(PreparedQueryImpl.java:60) at com.google.appengine.api.datastore.BasePreparedQuery$1.iterator(BasePreparedQuery.java:25) at com.google.appengine.demos.shardedcounter.java.v1.ShardedCounter.getCount(ShardedCounter.java:59) at com.google.appengine.demos.shardedcounter.java.v1.CounterPage.doGet(CounterPage.java:36) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.socket.dev.DevSocketFilter.doFilter(DevSocketFilter.java:74) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.ResponseRewriterFilter.doFilter(ResponseRewriterFilter.java:123) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.HeaderVerificationFilter.doFilter(HeaderVerificationFilter.java:34) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at 
com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:63) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:125) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectRequest(DevAppServerModulesFilter.java:368) at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectModuleRequest(DevAppServerModulesFilter.java:351) at com.google.appengine.tools.development.DevAppServerModulesFilter.doFilter(DevAppServerModulesFilter.java:116) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.appengine.tools.development.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:97) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:485) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) The only modification I did to the sample projects was to remove the usage of Google Web Toolkit, as the test server could not be started if GWT is used. I have just started trying out AppEngine, any help is greatly appreciated. A: Set Eclipse to use JDK 7 seem to fix the issue, which explains why it runs on the actual App Engine server but fails locally. Still the error it gives is quite strange... Probably will do further testing and update the info.
{ "perplexity_score": 2249.2, "pile_set_name": "StackExchange" }
define("ace/mode/ini_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules; var escapeRe = "\\\\(?:[\\\\0abtrn;#=:]|x[a-fA-F\\d]{4})"; var IniHighlightRules = function() { this.$rules = { start: [{ token: 'punctuation.definition.comment.ini', regex: '#.*', push_: [{ token: 'comment.line.number-sign.ini', regex: '$|^', next: 'pop' }, { defaultToken: 'comment.line.number-sign.ini' }] }, { token: 'punctuation.definition.comment.ini', regex: ';.*', push_: [{ token: 'comment.line.semicolon.ini', regex: '$|^', next: 'pop' }, { defaultToken: 'comment.line.semicolon.ini' }] }, { token: ['keyword.other.definition.ini', 'text', 'punctuation.separator.key-value.ini'], regex: '\\b([a-zA-Z0-9_.-]+)\\b(\\s*)(=)' }, { token: ['punctuation.definition.entity.ini', 'constant.section.group-title.ini', 'punctuation.definition.entity.ini'], regex: '^(\\[)(.*?)(\\])' }, { token: 'punctuation.definition.string.begin.ini', regex: "'", push: [{ token: 'punctuation.definition.string.end.ini', regex: "'", next: 'pop' }, { token: "constant.language.escape", regex: escapeRe }, { defaultToken: 'string.quoted.single.ini' }] }, { token: 'punctuation.definition.string.begin.ini', regex: '"', push: [{ token: "constant.language.escape", regex: escapeRe }, { token: 'punctuation.definition.string.end.ini', regex: '"', next: 'pop' }, { defaultToken: 'string.quoted.double.ini' }] }] }; this.normalizeRules(); }; IniHighlightRules.metaData = { fileTypes: ['ini', 'conf'], keyEquivalent: '^~I', name: 'Ini', scopeName: 'source.ini' }; oop.inherits(IniHighlightRules, TextHighlightRules); exports.IniHighlightRules = IniHighlightRules; }); define("ace/mode/folding/ini",["require","exports","module","ace/lib/oop","ace/range","ace/mode/folding/fold_mode"], function(require, exports, module) { "use strict"; var oop = require("../../lib/oop"); var Range = require("../../range").Range; var BaseFoldMode = require("./fold_mode").FoldMode; var FoldMode = exports.FoldMode = function() { }; oop.inherits(FoldMode, BaseFoldMode); (function() { this.foldingStartMarker = /^\s*\[([^\])]*)]\s*(?:$|[;#])/; this.getFoldWidgetRange = function(session, foldStyle, row) { var re = this.foldingStartMarker; var line = session.getLine(row); var m = line.match(re); if (!m) return; var startName = m[1] + "."; var startColumn = line.length; var maxRow = session.getLength(); var startRow = row; var endRow = row; while (++row < maxRow) { line = session.getLine(row); if (/^\s*$/.test(line)) continue; m = line.match(re); if (m && m[1].lastIndexOf(startName, 0) !== 0) break; endRow = row; } if (endRow > startRow) { var endColumn = session.getLine(endRow).length; return new Range(startRow, startColumn, endRow, endColumn); } }; }).call(FoldMode.prototype); }); define("ace/mode/ini",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/ini_highlight_rules","ace/mode/folding/ini"], function(require, exports, module) { "use strict"; var oop = require("../lib/oop"); var TextMode = require("./text").Mode; var IniHighlightRules = require("./ini_highlight_rules").IniHighlightRules; var FoldMode = require("./folding/ini").FoldMode; var Mode = function() { this.HighlightRules = IniHighlightRules; this.foldingRules = new FoldMode(); }; oop.inherits(Mode, TextMode); (function() { this.lineCommentStart = ";"; this.blockComment = {start: "/*", end: "*/"}; this.$id = 
"ace/mode/ini"; }).call(Mode.prototype); exports.Mode = Mode; });
{ "perplexity_score": 4437.5, "pile_set_name": "Github" }
This invention relates in general to cover plates that are typically associated with electrical outlet boxes but more particularly pertains to a protective cover plate that is only used to protect the electrical outlet box during the finishing stages while constructing a wall, or the like. The cover plate is universal, and is therefore usable with any style or specialized brand of the typical electrical outlet boxes that are currently available. Also, the present cover plate is functional as one unit, or multiples of the cover plate may be used in unison so as to be adaptable to electrical boxes of various sizes. It is well known within the construction field that, when building residential and commercial buildings, various workers must perform many tasks simultaneously. Thus, time is often a critical factor, as well as efficiency and productivity. For example, electrical work in the nature of installing electrical wires, fuse boxes and the like is done prior to putting up interior walls, installing sheet rock, painting and such. When the electrician finishes the first phase of the work, wires typically remain exposed and uncovered within the open electrical junction boxes. After the walls are put up, however, the electrician still must follow behind and complete the wiring to switches, appliances, lighting and so forth. It is desirable to keep the time spent on this second phase to a minimum. However, a problem exists, as the electrical wires often accidentally get cut, painted over, plastered over, or otherwise damaged by co-workers while installing the walls around and/or over the junction boxes. When this is the case, the electrician must spend extra time repairing or cleaning the wires prior to proceeding with the final phase of installation. This is most unfortunate, as it inhibits the electrician's efficiency and inadvertently increases construction costs. In the past, many attempts have been made to remedy the situation. For example, workers may install a screwed-on face plate, or they may crumple up paper wads to stuff in the junction box housing, etc. Unfortunately, these remedies are involved and are also very time consuming and thus do not resolve the problem in any manner. Therefore, it is desirable to provide a means to protect the electrical outlet boxes during the construction phase in a manner that is efficient, quick, easy and inexpensive. An example of similar known prior art is taught in U.S. Pat. No. 5,562,222, entitled "TEMPORARY COVER FOR ELECTRICAL OUTLET BOXES", issued to Jordan et al. on Oct. 8, 1996. Taught therein is a disposable cover of the type described, which is formed from a very thin sheet of flat material having an inwardly extending flange which is sized to frictionally engage the inner surfaces of the walls of the outlet box. The cover is of the press-fit variety and is used to temporarily protect the electrical components within the outlet box during construction, much as taught by the present invention. However, it can clearly be seen that this type of temporary cover has inherent disadvantages that the present invention recognizes, addresses and resolves in a manner heretofore not taught, as will be seen within the following specification. For example, the noted reference is formed as a flat plastic plate having an exterior flange and two opposing indents for attaching the plate onto the electrical box and a finger grip for removal thereof.
This is useful for its intended purpose of attaching and removing the plate, but it causes additional problems. For example, the finger grip and indents within the plate can easily be filled or compacted with plaster or the like, just as easily as the electrical outlet box itself, thus defeating the entire purpose of the plate. Also, this type of plate does not provide any means for easily locating the plate and/or electrical box after the wall or sheetrock has been installed; this is important and is a novel advantage taught within the present invention. Still further, the noted reference does not provide a cover plate that can be used in multiples so as to be usable with electrical outlet boxes having multiple outlets therein and/or various widths. Therefore, this type of flat electrical protective plate is not desirable and has not been found to be useful within today's construction field. It is therefore an object of the present invention to provide a protective cover plate for temporarily covering an electrical outlet box that is of simple construction and economical to manufacture. For example, the plate may be formed from a material which can be corrugated. Thereby, the protective cover plate is formed having prominent ridges and grooves, each of which serves a purpose and provides unusual results not associated with the prior art, as later described. Yet another object of the present invention is to provide a protective cover plate for temporarily covering an electrical outlet box that resolves the problems normally associated with the typical cover plates that are available in the construction field today. Still another object of the present invention is to provide a protective cover plate for temporarily covering an electrical outlet box that eliminates the need for any attachment means, such as nails, screws, bolts or nuts, etc. Another object of the present invention is to provide a protective cover plate for temporarily covering an electrical outlet box that can be used several times, and/or after use may be easily discarded and/or recycled in a manner which is economically friendly, unlike plastic or other materials which are not. It is to be noted, however, that the plate may be made from substantially any material of engineering choice, including metal, wood, plastic, rubber, cardboard, aluminum, etc., if so desired. Yet a further object of the present invention is to provide a protective cover plate for temporarily covering an electrical outlet box that can be used in multiples, thereby allowing a workman to easily install multiple covers in a side-by-side relationship so as to be functional with electrical outlet boxes having multiple electrical outlet plugs therein, such as 4, 6, or 8, etc. Still another object of the present invention is to provide a protective cover plate for temporarily covering an electrical outlet box that may include optional locator means thereon. For example, the plate may include a centralized magnet which allows a user, by use of a magnet of proper polarity, to easily locate the plate as well as the outlet box after the wall or sheetrock has been erected. Also, if the magnet is not used, the cover plate and electrical outlet box may still be easily detected by use of a metal detector if so desired, which is most advantageous. Other objects and advantages will be seen when taken into consideration with the following drawings and specification.
{ "perplexity_score": 334.1, "pile_set_name": "USPTO Backgrounds" }
To install packages from SourceForge (for instance), I need to configure them first. Meanwhile I know it is possible to do "make" and "make install" with gcc, but how will I be able to do "./configure"? Is there a package in Debian I can install for it? Or what? I have to mention that I have a full hd installation. "./configure" is a script included with most source code packages for Linux. Unlike the other commands, it exists in the directory of the program you want to compile (hence why you type "./" before it). You don't download it separately. Often simply typing "./configure" will set things up automatically as required for your system. However, you can type "./configure --help", or to scroll through the text, "./configure --help | less", to see a list of options that may be added to the "./configure" line in order to make specific choices about the compilation. Also, just a note that gcc doesn't actually control the "make" command; there is a specific program called "make" for that, but gcc is used by it to compile the package. That explains a lot. By "source code package", do you mean one from SourceForge, or one from Debian in our (DSL) case? What I also wanted to know is: when I want to start an installed package and the output is an error while loading shared libraries (e.g. 'gimpprint.so.1': no such file or directory), and I can't find gimpprint.so.1, why should this then work with gimpprint.dev? Source code is the common way to distribute programs for Linux so that they can be used with a range of different distributions. If a project on SourceForge offers a Linux version, it probably has the source package available, but many Linux programs from elsewhere on the net will have them too. They usually have the "tar.gz" extension (or tar.xz, tar.bz2). Note though that the ".tar.gz" packages in the MyDSL repository are not source code packages. For your second problem, if you downloaded a package for the program you're using, it might be looking for GIMP-print when it isn't installed on your computer. If you compiled it, you should check for a "./configure" option to disable GIMP-print. If the program can't run without GIMP-print for some reason, you might need to install it, though I'm not sure if it is compatible with DSL.
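If you ever want to script those three steps, here is a minimal sketch in Python (the source directory name and the --prefix value are only placeholders for this example):
import subprocess

SRC_DIR = "./myprogram-1.0"   # placeholder: path of the already-extracted source tree
PREFIX = "/usr/local"         # placeholder: where "make install" should put the files

def run(step):
    """Run one build step inside the source directory and stop if it fails."""
    print("==>", " ".join(step))
    subprocess.run(step, cwd=SRC_DIR, check=True)

# The classic three-step source install described above.
run(["./configure", "--prefix=" + PREFIX])  # writes Makefiles tailored to this system
run(["make"])                               # compiles the sources (gcc is invoked by make)
run(["make", "install"])                    # copies the results under PREFIX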
{ "perplexity_score": 497.5, "pile_set_name": "Pile-CC" }
Please see attached. Regards, Wendi LeBrocq 3-3835
{ "perplexity_score": 3207.9, "pile_set_name": "Enron Emails" }
Influence of calf presence during milking on dairy performance, milk fatty acid composition, lipolysis and cheese composition in Salers cows during winter and grazing seasons. The milking of Salers cows requires the presence of the calf. The removal of the calf would simplify the milking routine, but it could also modify the milk yield and the milk and cheese composition. Therefore, the aim of this experiment was to evaluate the effect of calf presence during milking, in each sampling period (winter or grazing), on dairy performance, milk fatty acid (FA) composition, lipolysis and cheese yield and composition. Nine and eight lactating Salers cows were milked in the presence (CP) or absence (CA) of their calves, respectively. During winter, the cows were fed a hay-based diet; thereafter they grazed only a grassland pasture. Calf presence during milking increased milk yield and milk 16:0 concentration and decreased milk fat content and milk total odd- and branched-chain FA (OBCFA) concentrations. Calf presence increased initial lipolysis only in milk collected during the winter season. Milk from CP cows, compared to that from CA cows, resulted in a lower cheese yield and ripened cheeses with lower fat content. Milk from the grazing season had lower saturated medium-chain FA and OBCFA concentrations and higher 18:0, cis-9-18:1, trans-11-18:1 and cis-9, trans-11-CLA concentrations than that from the winter season. Initial milk lipolysis was higher in the winter than in the grazing season. These variations could be due to seasonal changes in the basal diet. Furthermore, the effect of calf presence during milking on milk fat composition was smaller than that on dairy performance, cheese yield and composition. Removing the calf during the milking of Salers cows seems feasible without a decrease in the milk obtained at milking, and with a positive effect on cheese yield and fat content, provided that cows can be selected that have the capacity to be milked easily without the calf.
{ "perplexity_score": 637.7, "pile_set_name": "PubMed Abstracts" }
Singletons and other Bridget Jones fans can rejoice. Author Helen Fielding has announced that she's hard at work on a third novel featuring her wildly popular, beguilingly insecure, diary-scribbling British heroine. Fielding made the announcement on the BBC's Woman's Hour radio program in the U.K. last week. "I just found last spring that I had a new subject, new stuff that I wanted to say, things that were making me laugh," Fielding said. "Things that didn't exist the last time I wrote Bridget, like email and texting and twitter." She joked of Bridget's au courant musings, "It's like, 'Number of twitter followers: naught. Still no followers. Still no followers.'" The novelist refused to say whether Bridget will still be linked to either of her beaus from the previous books, 1996's Bridget Jones's Diary and the 1999 sequel, Bridget Jones: The Edge of Reason. "Some characters remain and some have disappeared," the author teased. Fielding later hinted that her now older heroine – "she's in a new phase in her life" – will be involved in on-line dating this time around. Fielding's publisher announced that the new book will be published in the U.K. a year from now, the Telegraph reported. The British writer, who had moved to Los Angeles for several years earlier in the aughts, is now living back in London. In addition to the new novel, Fielding is also working on a stage musical based on the first Bridget Jones book. The original novel was turned into a hit film in 2001, which starred Renée Zellweger, Hugh Grant and Colin Firth. The three reunited for a sequel based on the follow-up book in 2004. A third film, Bridget Jones's Baby, has been in the works for a few years. Fielding and novelist David Nicholls (One Day) have written a script – she penned it prior to beginning the new novel and it's a separate story – but the production has yet to announce a start date. The original trio of stars has been approached to return in Baby but there's no firm word that anyone has signed on.
{ "perplexity_score": 191.4, "pile_set_name": "Pile-CC" }
241 What comes next: 1845, 3108, 4587, 6288, 8217, 10380, 12783? 15432 What comes next: -2971568, -2971569, -2971570, -2971571, -2971572, -2971573? -2971574 What is the next term in 3266414, 3266266, 3266120, 3265976, 3265834? 3265694 What comes next: 10939, 3644, -3653, -10952? -18253 What is next in 22398, 22338, 22232, 22074, 21858, 21578? 21228 What is the next term in -59578, -237888, -534930, -950704, -1485210, -2138448, -2910418? -3801120 What comes next: -29189, -116658, -262439, -466532? -728937 What comes next: -1289919, -1289911, -1289901, -1289889? -1289875 What is next in 602, 970, 1670, 2774, 4354? 6482 What comes next: -3090782, -6181567, -9272352, -12363137, -15453922, -18544707? -21635492 What is the next term in -49319, -99477, -149631, -199781, -249927, -300069, -350207? -400341 What is the next term in -2582, -2046, -618, 2140, 6666, 13398? 22774 What comes next: -461, -94, 1081, 3046, 5783, 9274? 13501 What comes next: -452, -626, -1020, -1670, -2612? -3882 What is the next term in 150528366, 150528372, 150528382, 150528396? 150528414 What is next in 295, 4419, 11293, 20917, 33291, 48415, 66289? 86913 What comes next: 24431, 23632, 22835, 22040, 21247? 20456 What is next in -102989, -205992, -308995? -411998 What comes next: -7744, -28702, -63626, -112510, -175348, -252134, -342862, -447526? -566120 What is next in -3204967, -6409941, -9614913, -12819883, -16024851? -19229817 What is the next term in -626, -4315, -14240, -33491, -65158, -112331, -178100? -265555 What is next in -1246, -2493, -3762, -5053, -6366, -7701, -9058? -10437 What is the next term in -13298, -53426, -120386, -214178? -334802 What comes next: -1499, -3148, -4795, -6434, -8059? -9664 What is the next term in -14333, -16026, -17719, -19412, -21105? -22798 What is the next term in 302947, 1211725, 2726351, 4846831, 7573171, 10905377, 14843455? 19387411 What is the next term in 73796, 295147, 664064, 1180547? 1844596 What is the next term in -401074, -802264, -1203456, -1604650, -2005846, -2407044? -2808244 What is the next term in 1545, 3080, 4559, 5964, 7277, 8480, 9555, 10484? 11249 What comes next: -7299, -13263, -19227? -25191 What comes next: -424519, -850109, -1275699, -1701289, -2126879, -2552469? -2978059 What is the next term in -186690, -374619, -562542, -750453, -938346, -1126215, -1314054? -1501857 What is next in -2481, -19380, -65283, -154704, -302157? -522156 What is next in -550, -2255, -5048, -8893, -13754? -19595 What comes next: 7198432, 14396861, 21595292, 28793725? 35992160 What is the next term in 11890, 23666, 35440, 47212, 58982? 70750 What comes next: -115604, -115514, -115450, -115406, -115376, -115354? -115334 What is next in 3196503, 3196504, 3196505? 3196506 What is next in -19264, -37113, -53566, -68623, -82284, -94549? -105418 What is the next term in 4207, 3860, 3513, 3166, 2819, 2472? 2125 What is next in 469928, 939852, 1409750, 1879616, 2349444, 2819228? 3288962 What is next in -10461, -10981, -11421, -11787, -12085, -12321, -12501, -12631? -12717 What is next in -181194, -182155, -183116, -184077, -185038? -185999 What comes next: 1546572, 12372525, 41757248, 98980125, 193320540? 334057877 What is the next term in -404, -2144, -5230, -9662? -15440 What is the next term in -5621, -22449, -50477, -89699, -140109, -201701, -274469? -358407 What comes next: 26103, 45382, 64661? 83940 What comes next: -59107, -56023, -52939, -49855? -46771 What is the next term in -57, -553, -1325, -2373, -3697, -5297, -7173? -9325 What comes next: 2793, 2966, 3257, 3666? 
4193 What is the next term in -1510, -11182, -27270, -49774? -78694 What comes next: 16153, 16302, 16451, 16600, 16749? 16898 What comes next: 15168977, 15168979, 15168981, 15168983, 15168985, 15168987? 15168989 What is next in -121, 243, 1215, 3095, 6183? 10779 What comes next: 120504, 120125, 119490, 118599, 117452, 116049, 114390? 112475 What is the next term in 1817204, 1817199, 1817206, 1817231, 1817280, 1817359, 1817474? 1817631 What is next in -520, -1992, -4424, -7822, -12192? -17540 What is next in 2544, 63, -6678, -19809, -41460, -73761, -118842? -178833 What is next in -1179493, -2358987, -3538481, -4717975? -5897469 What is the next term in -22291, -24948, -29361, -35536, -43479, -53196? -64693 What is the next term in 465653, 931297, 1396941, 1862585? 2328229 What is the next term in 562150, 1124302, 1686454? 2248606 What is the next term in 31377, 28706, 26035, 23364? 20693 What comes next: 30535, 61551, 92567, 123583? 154599 What is the next term in 2966, 23834, 80468, 190748, 372554? 643766 What comes next: 8825, 17512, 26199? 34886 What is next in 455099336, 910198674, 1365298012? 1820397350 What is next in -2422251, -2422277, -2422327, -2422413, -2422547? -2422741 What is next in 8936420, 17872842, 26809270, 35745704, 44682144, 53618590? 62555042 What is the next term in 30245, 30263, 30283, 30305, 30329, 30355? 30383 What comes next: -485146, -1940610, -4366384, -7762468, -12128862, -17465566, -23772580? -31049904 What is the next term in 2532983, 5065943, 7598903, 10131863, 12664823? 15197783 What comes next: 3040, 2993, 2914, 2803, 2660, 2485, 2278? 2039 What comes next: 22343, 44589, 66849, 89123, 111411? 133713 What is next in 325469, 1299849, 2923807, 5197337, 8120433, 11693089, 15915299? 20787057 What is next in -1983, -938, 97, 1122, 2137? 3142 What comes next: -40449734, -40449636, -40449536, -40449434, -40449330, -40449224, -40449116? -40449006 What is the next term in 1763, 5103, 10621, 18311, 28167, 40183, 54353, 70671? 89131 What is the next term in 12023975, 12023960, 12023935, 12023900, 12023855? 12023800 What is next in -186724, -185998, -185272, -184546? -183820 What comes next: -6080409, -12160807, -18241207, -24321609, -30402013? -36482419 What is the next term in 7456, 7755, 8252, 8947, 9840, 10931, 12220? 13707 What is the next term in -2268740, -4537494, -6806248, -9075002? -11343756 What comes next: 25867, 98537, 218009, 384283, 597359, 857237, 1163917? 1517399 What is the next term in -5244, -42229, -142674, -338319, -660904, -1142169? -1813854 What comes next: -7312, -7228, -6992, -6526, -5752? -4592 What is the next term in -52237383, -52237374, -52237359, -52237338, -52237311, -52237278? -52237239 What comes next: -5616, -5577, -5528, -5469, -5400, -5321, -5232? -5133 What is next in -63809, -63801, -63793, -63785, -63777? -63769 What is next in -534643, -1069260, -1603879, -2138500, -2673123, -3207748? -3742375 What comes next: -88321, -88077, -87833, -87589? -87345 What is the next term in 522540, 522887, 523234, 523581, 523928, 524275? 524622 What comes next: -39289, -78584, -117875, -157162? -196445 What is the next term in -4485, -88, 4295, 8658, 12995, 17300? 21567 What comes next: 1804, 3760, 5826, 8002? 10288 What is next in 2806, 5511, 8102, 10579, 12942? 15191 What is next in -93539, -93531, -93523, -93515? -93507 What is the next term in 68442, 68448, 68456, 68466, 68478, 68492? 68508 What is the next term in 2333, 8188, 17953, 31628, 49213, 70708, 96113? 
125428 What comes next: 115049431, 115049435, 115049439, 115049443, 115049447, 115049451? 115049455 What is the next term in -86490014, -172980027, -259470040? -345960053 What comes next: 71394, 68335, 63246, 56133, 47002, 35859, 22710? 7561 What is the next term in -44588, -89176, -133732, -178238, -222676, -267028, -311276? -355402 What is next in 226499, 906116, 2038811, 3624584? 5663435 What comes next: -17739, -34328, -50917, -67506? -84095 What is the next term in -1263, -1349, -1369, -1323, -1211, -1033? -789 What is the next term in -635264, -2541066, -5717404, -10164278, -15881688, -22869634, -31128116? -40657134 What is next in -486, -492, -318, 180, 1146, 2724, 5058? 8292 What is next in 4462722, 4462720, 4462718, 4462716? 4462714 What comes next: -32897, -32985, -33135, -33347, -33621? -33957 What is the next term in -856, -3988, -9212, -16528, -25936, -37436? -51028 What comes next: -850303, -3401319, -7653013, -13605385, -21258435? -30612163 What comes next: -45952, -183645, -413088, -734281, -1147224, -1651917? -2248360 What is next in -484603, -1938373, -4361323, -7753453? -12114763 What is the next term in 80114, 78911, 77708? 76505 What is nex
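Most of these sequences follow a low-degree polynomial, so the missing working is just repeated differencing until the differences become constant. For the quadratic sequence 2793, 2966, 3257, 3666 listed above: the first differences are 173, 291, 409; the second differences are 118, 118 (constant); so the next first difference is 409 + 118 = 527 and the next term is 3666 + 527 = 4193, matching the answer given.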
{ "perplexity_score": 787.4, "pile_set_name": "DM Mathematics" }
/* * Copyright 2002-2016 the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.springframework.test.context.junit.jupiter; import java.lang.annotation.Documented; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; /** * Demo <em>composed annotation</em> for {@link EnabledIf @EnabledIf} that * enables a test class or test method if the current operating system is * Mac OS. * * @author Sam Brannen * @since 5.0 */ @Target({ ElementType.TYPE, ElementType.METHOD }) @Retention(RetentionPolicy.RUNTIME) @Documented @EnabledIf(expression = "#{systemProperties['os.name'].toLowerCase().contains('mac')}", reason = "Enabled on Mac OS") public @interface EnabledOnMac { }
{ "perplexity_score": 1717.4, "pile_set_name": "Github" }
Ultra Music Festival, you have completely outdone yourself this year. Phase one of the massive lineup has dropped featuring headliners Armin Van Buuren, Destroid, Eric Prydz, Hardwell, Kaskade, Martin Garrix, Nero, DJ Snake and Tiësto just to name a few… and oh, how dare we forget to mention the return of Pendulum (YES you read that right!) A few of the many supporting acts come from Duke Dumont, Galantis, Jauz, KSHMR, Marshmello, Snails and Tchami. If you were on the fence about heading down to Miami in March for Ultra, this stacked phase one should help you make your decision. Tickets can be purchased here. We are thrilled to announce the first phase of the Ultra Music Festival 2016 lineup! #Ultra2016 pic.twitter.com/aQSG9m1gxG — Ultra Music Festival (@ultra) December 17, 2015
{ "perplexity_score": 354.2, "pile_set_name": "OpenWebText2" }
Q: Sorting a long string with a composite of strings and integers + symbols The code below is to sort a file containing the following info (eg. input.txt): rango burrito apri%cot 1 -10 3.5 8 5 tesla 10 hyphen -4.7 2 bus 20 cat vul$ture m0nkey -9999999 The output should have the symbols removed and then strings and integers sorted in ascending order, but the order retaining the type of the original list. for example, the first item is a string in both input and output and the final item for example is an int. When the script is run the output looks as follows: $ ./sort_input.py input.txt apricot burrito bus -9999999 -47 -10 1 2 cat 5 hyphen 8 10 m0nkey 20 rango tesla vulture 35 I've written code that looks as follows and I'm sure this can be improved a lot: Reading the file first, then splitting on white space into an array of strings: \$O(n)\$ complexity where \$n\$ is the length of the original string def listify(input_file): with open(input_file) as f: for line in f: list_of_strings = line.strip().split() return list_of_strings Using the list of strings and converting it into a typed list, but removing any symbols first using the method below: \$O(n)\$ complexity for the typed list, then \$O(k)\$ for each string in that list to remove symbols - so total complexity is \$O(n)*O(k)\$. def typed_list(untyped_list): """ converts an untyped list to a typed list of strings and integers """ typed_list = [] for item in untyped_list: item_without_symbol = remove_any_symbols(item) try: typed_list.append(int(item_without_symbol)) except ValueError: typed_list.append(item_without_symbol) return typed_list Method to remove any symbols, which I use in the function above. \$O(k)\$ complexity where \$k\$ is the length of the string: def remove_any_symbols(s_string): """We take a string and remove any symbols from it. """ acceptable_characters = string.ascii_letters + string.digits no_s_string_list = [c for c in s_string if c in acceptable_characters] if s_string.startswith("-"): return "-"+''.join(no_s_string_list) else: return ''.join(no_s_string_list) I then use the typed list that's generated above to sort integers and strings separately. Then using the original list to generate a list with the same type items in their original order. \$O(n log n)\$ complexity for both sorting functions and then \$O(n)\$ for adding to the final output list. def sort_em_up(no_symbol_list=None): """we take a list here, note the type, sort and then return a sorted list""" sorted_int = sorted([int(i) for i in no_symbol_list if isinstance(i, int)]) sorted_str = sorted([s for s in no_symbol_list if isinstance(s, str)]) final_sorted_list = [] i = j = 0 for item in no_symbol_list: if isinstance(item, int): final_sorted_list.append(str(sorted_int[i])) i += 1 else: final_sorted_list.append(sorted_str[j]) j += 1 return ' '.join(final_sorted_list) if __name__=="__main__": input_file = sys.argv[1] list_of_strings = listify(input_file) print(sort_em_up(typed_list(list_of_strings))) A: listify function Since, as you mentioned in the comments, the function is meant to read a single first line from a file only - you can use the next() built-in function: def listify(filename): """Reads the first line from a file and splits it into words.""" with open(filename) as input_file: return next(input_file).strip().split() remove_any_symbols function You can actually pre-define the allowed characters as a constant - no need to re-define them for every function call. 
You can also make it a set for faster lookups: ACCEPTABLE_CHARACTERS = set(string.ascii_letters + string.digits) def remove_any_symbols(input_string): """Removes any symbols from a string leaving the leading dashes.""" filtered_characters = [c for c in input_string if c in ACCEPTABLE_CHARACTERS] prefix = "-" if input_string.startswith("-") else "" return prefix + ''.join(filtered_characters) Or, a regex-based version (less understandable overall, but see if it is going to be faster): PATTERN = re.compile(r""" ( (?<!^)- # dash not at the beginning of a string | # or [^A-Za-z0-9\-] # not letters, digits and dashes )+ """, flags=re.VERBOSE) def remove_any_symbols(input_string): """Removes any symbols from a string leaving the leading dashes.""" return PATTERN.sub("", input_string) Pre-process the complete string With regexes, it would also be possible to pre-process the input string as a whole, checking for the dashes at the beginning of words. This may lead to applying remove_any_symbols() on the complete input string read from a file: PATTERN = re.compile(r""" ( (?<!(?:^| ))- # dash not at the beginning of a word | # or [^A-Za-z0-9\- ] # not letters, digits, dashes and spaces )+ """, flags=re.VERBOSE) def remove_any_symbols(input_string): """Removes any symbols from a string leaving the leading dashes for each word.""" return PATTERN.sub("", input_string) if __name__=="__main__": input_file = sys.argv[1] with open(input_file) as f: data = next(f).strip() list_of_words = remove_any_symbols(data).split() print(sort_em_up(typed_list(list_of_words)))
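Whether the regex actually wins is easy to check with timeit. The snippet below is only a rough benchmark sketch: the two function names and the sample words are invented here, and each body mirrors one of the variants above.

import re
import string
import timeit

ACCEPTABLE_CHARACTERS = set(string.ascii_letters + string.digits)
PATTERN = re.compile(r"((?<!^)-|[^A-Za-z0-9\-])+")

def remove_with_comprehension(s):
    # set-lookup variant
    prefix = "-" if s.startswith("-") else ""
    return prefix + ''.join(c for c in s if c in ACCEPTABLE_CHARACTERS)

def remove_with_regex(s):
    # compiled-regex variant
    return PATTERN.sub("", s)

words = ["apri%cot", "-9999999", "vul$ture", "m0nkey", "-4.7", "hyphen"] * 1000

for func in (remove_with_comprehension, remove_with_regex):
    elapsed = timeit.timeit(lambda: [func(w) for w in words], number=50)
    print(f"{func.__name__}: {elapsed:.3f} s")

On typical inputs the compiled regex tends to be competitive, but measuring on your own data is the only reliable answer.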
{ "perplexity_score": 2237.3, "pile_set_name": "StackExchange" }
David sez, "Last week you featured a story on Nottingham Hackspace being made homeless, and needing to raise funds to move. Thanks in no small part to Boing Boing, we made it! We've now got the keys for the new Nottingham Hackspace, and will be moving in at the weekend. *Huge* thanks to you, your readers and everyone who donated. As we settle in, we'll be sure to keep the Nottinghack blog up to date."
{ "perplexity_score": 410.2, "pile_set_name": "OpenWebText2" }
Integration of supercritical fluid chromatography into drug discovery as a routine support tool. II. Investigation and evaluation of supercritical fluid chromatography for achiral batch purification. Supercritical fluid chromatography (SFC) has recently been implemented within our analytical technologies department as a purity assessment and purification tool to complement HPLC for isomer and chiral separations. This report extends the previous work to achiral analysis and purification. This internal evaluation explores the potential impact SFC can have on high-throughput batch purification. Achiral methods have been optimised and batches of compounds purified using a retention time mapping strategy, in which the preparative retention time is predicted from a standard calibration curve and fraction windows are set to ensure the peak of interest is collected in one of the four available fraction positions. In this contribution, a completely indirect scale-up strategy is applied using totally independent analytical and preparative methods. This novel approach allows for fast analytical purity analysis without compromising the ability to scale up to the preparative system. The benefits and limitations of SFC for batch purification are described in comparison to HPLC across a set of standard compounds and a set of 90 research compounds.
{ "perplexity_score": 705.2, "pile_set_name": "PubMed Abstracts" }
The invention relates to a device for electronically executing a mathematical operation which can be executed on at the most three digital variables, two of which comprise m bits each and represent the input signals (A and B), a third variable (K) representing a weighting factor which comprises (n+1) bits (n ≥ 0), the mathematical operation performed on said digital variables being of the kind K·A + (1 - K)·B, the result Z = K·A + (1 - K)·B thereof representing the digital output signal which is formed by the bit-wise execution of the mathematical operation, a partial output signal Z_ij = K_i·a_j + (1 - K_i)·b_j being obtained per bit coefficient of A (a_j), B (b_j) and K (K_i). Electronic execution of mathematical operations such as additions and multiplications is known. For the execution of a combination of two or more mathematical operations it is known to execute these operations consecutively in time and to use specific means for each operation. For example, in the case of recursive digital filters where an operation of the kind ##EQU1## is to be realized, it is customary to perform the multiplication operation first, followed by the add and/or subtract operations. This requires elements such as multipliers and adders. Such an arrangement for a recursive digital filter is described on pages 40 to 46 and page 306 of the book "Theory and application of digital signal processing", by L. R. Rabiner and B. Gold, published by Prentice Hall Inc. Englewood Cliffs, N.J., U.S.A. Due to the successive execution of the various mathematical operations in time, the processing time required is determined by the sum of the individual processing times. Moreover, the use of separate elements for the operations is an inefficient and expensive method. It is to be noted that it is also possible, of course, to use the same unit, under the control of its program, for several operations in the case of programmed processor units, but this generally requires a longer processing time. A long processing time may be a drawback, for given applications. This is the case, for example, for digital video signal processing where frequencies in the order of 35 MHz are already used in some cases. A circuit which is operational at such a frequency, therefore, offers a solution in this field of application.
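As a plain illustration of the operation itself (not of the circuit the patent goes on to describe), the weighted combination can be sketched in a few lines of Python; reading the (n+1)-bit factor as a fraction K = k/2^n between 0 and 1 is an assumption made only for this example:

def blend(a, b, k, n):
    """Z = K*A + (1 - K)*B with K = k / 2**n, an (n+1)-bit weighting factor.

    a and b are the m-bit input samples (plain ints here); k runs from 0 to 2**n,
    so K sweeps from 0 (output follows B) to 1 (output follows A).
    """
    assert 0 <= k <= 1 << n
    return (k * a + ((1 << n) - k) * b) >> n   # integer form of K*A + (1 - K)*B

print(blend(200, 40, 0, 8))    # K = 0   -> 40  (pure B)
print(blend(200, 40, 256, 8))  # K = 1   -> 200 (pure A)
print(blend(200, 40, 128, 8))  # K = 0.5 -> 120 (midway)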
{ "perplexity_score": 307.6, "pile_set_name": "USPTO Backgrounds" }
Q: If $f: \Bbb R\rightarrow \Bbb R $ is monotonic and onto, prove that $f$ is continuous. If $f: \Bbb R\rightarrow \Bbb R $ is monotonic and onto, prove that $f$ is continuous. (Hint: given any $x \in \Bbb R$ and any $\epsilon \gt 0$, there is $x_1$ with $f(x_1)= f(x) - \epsilon$ and $x_2$ with $f(x_2) = f(x) +\epsilon$. Use this to get $\delta \gt 0$.) I need to show there is $\delta \gt 0$ such that if $|x-a|\lt \delta$ then $|f(x) - f(a)| \lt \epsilon$. $f(x_1) \lt f(x) \lt f(x_2)$ is all I can come up with. A: HINT You are right, $f(x_1) < f(x) < f(x_2)$. Consider using $$ \delta = \min\{x-x_1, x_2-x\} $$ What can you say about $f(z)$ for any $z \in (x-\delta, x+\delta)$?
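Spelling the hint out for the increasing case (the decreasing case is symmetric): with $\delta = \min\{x-x_1,\, x_2-x\} \gt 0$, any $z$ with $|z-x| \lt \delta$ satisfies $x_1 \le x-\delta \lt z \lt x+\delta \le x_2$, so monotonicity gives $$f(x) - \epsilon = f(x_1) \le f(z) \le f(x_2) = f(x) + \epsilon,$$ i.e. $|f(z) - f(x)| \le \epsilon$; running the same argument with $\epsilon/2$ yields the strict inequality.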
{ "perplexity_score": 969, "pile_set_name": "StackExchange" }
/******************************************************************************* * This file is part of OpenNMS(R). * * Copyright (C) 2017 The OpenNMS Group, Inc. * OpenNMS(R) is Copyright (C) 1999-2017 The OpenNMS Group, Inc. * * OpenNMS(R) is a registered trademark of The OpenNMS Group, Inc. * * OpenNMS(R) is free software: you can redistribute it and/or modify * it under the terms of the GNU Affero General Public License as published * by the Free Software Foundation, either version 3 of the License, * or (at your option) any later version. * * OpenNMS(R) is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Affero General Public License for more details. * * You should have received a copy of the GNU Affero General Public License * along with OpenNMS(R). If not, see: * http://www.gnu.org/licenses/ * * For more information contact: * OpenNMS(R) Licensing <[email protected]> * http://www.opennms.org/ * http://www.opennms.com/ *******************************************************************************/ package org.opennms.smoketest; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import org.junit.Before; import org.junit.FixMethodOrder; import org.junit.Test; import org.junit.runners.MethodSorters; import org.openqa.selenium.By; import org.openqa.selenium.support.ui.Select; @FixMethodOrder(MethodSorters.NAME_ASCENDING) public class KSCEditorIT extends OpenNMSSeleniumIT { protected void goToMainPage() { driver.get(getBaseUrlInternal() + "opennms/KSC/index.jsp"); } @Before public void setUp() throws Exception { // Create the test requisition, this will block until the test node is actually created createRequisition(); } @Test public void testKSCReports() throws Exception { goToMainPage(); checkMainPage(); goToMainPage(); createReport(); goToMainPage(); editExistingReport(); goToMainPage(); deleteExistingReport(); } protected void checkMainPage() throws Exception { // main KSC page assertEquals(3, countElementsMatchingCss("div.card-header")); findElementByXpath("//div[@class='card-header']/span[text()='Customized Reports']"); findElementByXpath("//div[@class='card-header']/span[text()='Node & Domain Interface Reports']"); findElementByXpath("//div[@class='card-header']/span[text()='Descriptions']"); assertElementDoesNotExist(By.name("report:Smoke Test Report 1")); assertElementDoesNotExist(By.name("report:Smoke Test Report Uno")); assertEquals("TestMachine1", findElementByName("resource:TestMachine1").getText()); } protected void createReport() throws Exception { // create a new report clickElementByXpath("//button[text()='Create New']"); // set the title enterText(By.name("report_title"), "Smoke Test Report 1"); // add the ICMP graph clickElementByXpath("//button[text()='Add New Graph']"); // select the first subresource (TestMachine1) clickElementByName("subresource:Node:TestMachine1"); clickElementByXpath("//button[text()='View Child Resource']"); // select the first subresource (127.0.0.1) clickElementByName("subresource:Response Time:127.0.0.1"); clickElementByXpath("//button[text()='View Child Resource']"); // choose the resource clickElementByXpath("//button[text()='Choose this resource']"); // name the graph enterText(By.name("title"), "Smoke Test Graph Title 1"); // finish up clickElementByXpath("//button[text()='Done with edits to this graph']"); 
clickElementByXpath("//button[text()='Save Report']"); assertEquals("Smoke Test Report 1", findElementByName("report:Smoke Test Report 1").getText()); // view the report to confirm it's right waitForElement(By.name("report:Smoke Test Report 1")); Thread.sleep(100); clickElementByName("report:Smoke Test Report 1"); clickElementByXpath("//button[text()='View']"); findElementByXpath("//div[@class='card-header']/span[text()='Custom View: Smoke Test Report 1']"); findElementByXpath("//div[contains(@class, 'graph-container')]"); findElementByXpath("//div[contains(@class, 'graph-container')]//canvas"); } protected void editExistingReport() throws Exception { // edit report 0 (should be the Smoke Test Report 1 from b_test*) clickElementByName("report:Smoke Test Report 1"); clickElementByXpath("//button[text()='Customize']"); // check that the defaults are set as expected assertFalse(findElementByName("show_timespan").isSelected()); assertFalse(findElementByName("show_graphtype").isSelected()); final Select gpl = new Select(findElementByName("graphs_per_line")); assertEquals("default", gpl.getFirstSelectedOption().getText()); // change graphs per line to 3, check "show timespan" and "show graphtype", and change the title gpl.selectByVisibleText("3"); clickElementByName("show_timespan"); clickElementByName("show_graphtype"); enterText(By.name("report_title"), "Smoke Test Report Uno"); // now confirm that the checkboxes got checked assertTrue(findElementByName("show_timespan").isSelected()); assertTrue(findElementByName("show_graphtype").isSelected()); // modify the graph and give it a new title clickElementByXpath("//button[text()='Modify']"); enterText(By.name("title"), "Smoke Test Graph Title I"); // update the timespan final Select timespan = new Select(findElementByName("timespan")); timespan.selectByVisibleText("3 month"); // then finish modifying the graph clickElementByXpath("//button[text()='Done with edits to this graph']"); // then finish modifying the report clickElementByXpath("//button[text()='Save Report']"); } protected void deleteExistingReport() throws Exception { // edit report 0 (should be the Smoke Test Report 1 from b_test*) clickElementByName("report:Smoke Test Report Uno"); clickElementByXpath("//button[text()='Delete']"); assertElementDoesNotExist(By.name("report:Smoke Test Report Uno")); } private void clickElementByName(final String name) { findElementByName(name).click(); } private void clickElementByXpath(final String xpath) { findElementByXpath(xpath).click(); } private void createRequisition() throws Exception { final String req = "<model-import xmlns=\"http://xmlns.opennms.org/xsd/config/model-import\" date-stamp=\"2006-03-09T00:03:09\" foreign-source=\"" + REQUISITION_NAME + "\">" + "<node node-label=\"TestMachine1\" foreign-id=\"TestMachine1\">" + "<interface ip-addr=\"127.0.0.1\" snmp-primary=\"P\" descr=\"localhost\">" + "<monitored-service service-name=\"ICMP\"/>" + "<monitored-service service-name=\"HTTP\"/>" + "</interface>" + "</node>" + "</model-import>"; createRequisition(REQUISITION_NAME, req, 1); } }
{ "perplexity_score": 1525.5, "pile_set_name": "Github" }
I have seen a 21 Day Launch Program for $1997. How is different from the Basic or Premium Plan? So, the 21 Day Launch Program is actually our Premium Package when the Premium Package is on sale… it’s just a different way that we market it =). The Premium Package is usually a One-time fee of $2,497 for Lifetime Access.
{ "perplexity_score": 907, "pile_set_name": "Pile-CC" }
// Copyright (c) 2013, Facebook, Inc. All rights reserved. // This source code is licensed under the BSD-style license found in the // LICENSE file in the root directory of this source tree. An additional grant // of patent rights can be found in the PATENTS file in the same directory. #include <map> #include <memory> #include <string> #include "db/dbformat.h" #include "db/db_impl.h" #include "db/table_properties_collector.h" #include "rocksdb/table_properties.h" #include "rocksdb/table.h" #include "table/block_based_table_factory.h" #include "util/coding.h" #include "util/testharness.h" #include "util/testutil.h" namespace rocksdb { class TablePropertiesTest { private: unique_ptr<TableReader> table_reader_; }; // TODO(kailiu) the following classes should be moved to some more general // places, so that other tests can also make use of them. // `FakeWritableFile` and `FakeRandomeAccessFile` bypass the real file system // and therefore enable us to quickly setup the tests. class FakeWritableFile : public WritableFile { public: ~FakeWritableFile() { } const std::string& contents() const { return contents_; } virtual Status Close() { return Status::OK(); } virtual Status Flush() { return Status::OK(); } virtual Status Sync() { return Status::OK(); } virtual Status Append(const Slice& data) { contents_.append(data.data(), data.size()); return Status::OK(); } private: std::string contents_; }; class FakeRandomeAccessFile : public RandomAccessFile { public: explicit FakeRandomeAccessFile(const Slice& contents) : contents_(contents.data(), contents.size()) { } virtual ~FakeRandomeAccessFile() { } uint64_t Size() const { return contents_.size(); } virtual Status Read(uint64_t offset, size_t n, Slice* result, char* scratch) const { if (offset > contents_.size()) { return Status::InvalidArgument("invalid Read offset"); } if (offset + n > contents_.size()) { n = contents_.size() - offset; } memcpy(scratch, &contents_[offset], n); *result = Slice(scratch, n); return Status::OK(); } private: std::string contents_; }; class DumbLogger : public Logger { public: virtual void Logv(const char* format, va_list ap) { } virtual size_t GetLogFileSize() const { return 0; } }; // Utilities test functions void MakeBuilder( const Options& options, std::unique_ptr<FakeWritableFile>* writable, std::unique_ptr<TableBuilder>* builder) { writable->reset(new FakeWritableFile); builder->reset( options.table_factory->GetTableBuilder(options, writable->get(), options.compression)); } void OpenTable( const Options& options, const std::string& contents, std::unique_ptr<TableReader>* table_reader) { std::unique_ptr<RandomAccessFile> file(new FakeRandomeAccessFile(contents)); auto s = options.table_factory->GetTableReader( options, EnvOptions(), std::move(file), contents.size(), table_reader ); ASSERT_OK(s); } // Collects keys that starts with "A" in a table. class RegularKeysStartWithA: public TablePropertiesCollector { public: const char* Name() const { return "RegularKeysStartWithA"; } Status Finish(TableProperties::UserCollectedProperties* properties) { std::string encoded; PutVarint32(&encoded, count_); *properties = TableProperties::UserCollectedProperties { { "TablePropertiesTest", "Rocksdb" }, { "Count", encoded } }; return Status::OK(); } Status Add(const Slice& user_key, const Slice& value) { // simply asssume all user keys are not empty. 
if (user_key.data()[0] == 'A') { ++count_; } return Status::OK(); } virtual TableProperties::UserCollectedProperties GetReadableProperties() const { return {}; } private: uint32_t count_ = 0; }; TEST(TablePropertiesTest, CustomizedTablePropertiesCollector) { Options options; // make sure the entries will be inserted with order. std::map<std::string, std::string> kvs = { {"About", "val5"}, // starts with 'A' {"Abstract", "val2"}, // starts with 'A' {"Around", "val7"}, // starts with 'A' {"Beyond", "val3"}, {"Builder", "val1"}, {"Cancel", "val4"}, {"Find", "val6"}, }; // Test properties collectors with internal keys or regular keys for (bool encode_as_internal : { true, false }) { // -- Step 1: build table auto collector = new RegularKeysStartWithA(); if (encode_as_internal) { options.table_properties_collectors = { std::make_shared<UserKeyTablePropertiesCollector>(collector) }; } else { options.table_properties_collectors.resize(1); options.table_properties_collectors[0].reset(collector); } std::unique_ptr<TableBuilder> builder; std::unique_ptr<FakeWritableFile> writable; MakeBuilder(options, &writable, &builder); for (const auto& kv : kvs) { if (encode_as_internal) { InternalKey ikey(kv.first, 0, ValueType::kTypeValue); builder->Add(ikey.Encode(), kv.second); } else { builder->Add(kv.first, kv.second); } } ASSERT_OK(builder->Finish()); // -- Step 2: Open table std::unique_ptr<TableReader> table_reader; OpenTable(options, writable->contents(), &table_reader); const auto& properties = table_reader->GetTableProperties().user_collected_properties; ASSERT_EQ("Rocksdb", properties.at("TablePropertiesTest")); uint32_t starts_with_A = 0; Slice key(properties.at("Count")); ASSERT_TRUE(GetVarint32(&key, &starts_with_A)); ASSERT_EQ(3u, starts_with_A); } } TEST(TablePropertiesTest, InternalKeyPropertiesCollector) { InternalKey keys[] = { InternalKey("A", 0, ValueType::kTypeValue), InternalKey("B", 0, ValueType::kTypeValue), InternalKey("C", 0, ValueType::kTypeValue), InternalKey("W", 0, ValueType::kTypeDeletion), InternalKey("X", 0, ValueType::kTypeDeletion), InternalKey("Y", 0, ValueType::kTypeDeletion), InternalKey("Z", 0, ValueType::kTypeDeletion), }; for (bool sanitized : { false, true }) { std::unique_ptr<TableBuilder> builder; std::unique_ptr<FakeWritableFile> writable; Options options; if (sanitized) { options.table_properties_collectors = { std::make_shared<RegularKeysStartWithA>() }; // with sanitization, even regular properties collector will be able to // handle internal keys. auto comparator = options.comparator; // HACK: Set options.info_log to avoid writing log in // SanitizeOptions(). 
options.info_log = std::make_shared<DumbLogger>(); options = SanitizeOptions( "db", // just a place holder nullptr, // with skip internal key comparator nullptr, // don't care filter policy options ); options.comparator = comparator; } else { options.table_properties_collectors = { std::make_shared<InternalKeyPropertiesCollector>() }; } MakeBuilder(options, &writable, &builder); for (const auto& k : keys) { builder->Add(k.Encode(), "val"); } ASSERT_OK(builder->Finish()); std::unique_ptr<TableReader> table_reader; OpenTable(options, writable->contents(), &table_reader); const auto& properties = table_reader->GetTableProperties().user_collected_properties; uint64_t deleted = GetDeletedKeys(properties); ASSERT_EQ(4u, deleted); if (sanitized) { uint32_t starts_with_A = 0; Slice key(properties.at("Count")); ASSERT_TRUE(GetVarint32(&key, &starts_with_A)); ASSERT_EQ(1u, starts_with_A); } } } } // namespace rocksdb int main(int argc, char** argv) { return rocksdb::test::RunAllTests(); }
{ "perplexity_score": 2575.1, "pile_set_name": "Github" }
-- Phone numbers local lpeg = require "lpeg" local P = lpeg.P local R = lpeg.R local S = lpeg.S local digit = R"09" local seperator = S"- ,." local function optional_parens(patt) return P"(" * patt * P")" + patt end local _M = {} local extension = P"e" * (P"xt")^-1 * seperator^-1 * digit^1 local optional_extension = (seperator^-1 * extension)^-1 _M.Australia = ( -- Normal landlines optional_parens((P"0")^-1*S"2378") * seperator^-1 * digit*digit*digit*digit * seperator^-1 * digit*digit*digit*digit -- Mobile numbers + (optional_parens(P"0"*S"45"*digit*digit) + S"45"*digit*digit) * seperator^-1 * digit*digit*digit * seperator^-1 * digit*digit*digit -- Local rate calls + P"1300" * seperator^-1 * digit*digit*digit * seperator^-1 * digit*digit*digit -- 1345 is only used for back-to-base monitored alarm systems + P"1345" * seperator^-1 * digit*digit * seperator^-1 * digit*digit + P"13" * seperator^-1 * digit*digit * seperator^-1 * digit*digit + (P"0")^-1*P"198" * seperator^-1 * digit*digit*digit * seperator^-1 * digit*digit*digit -- data calls -- Free calls + P"1800" * seperator^-1 * digit*digit*digit * seperator^-1 * digit*digit*digit + P"180" * seperator^-1 * digit*digit*digit*digit ) * optional_extension local NPA = (digit-S"01")*digit*digit local NXX = ((digit-S"01")*(digit-P"9")-P"37"-P"96")*digit-P(1)*P"11" local USSubscriber = digit*digit*digit*digit _M.USA = ((P"1" * seperator^-1)^-1 * optional_parens(NPA) * seperator^-1)^-1 * NXX * seperator^-1 * USSubscriber * optional_extension local international = ( P"1" * seperator^-1 * #(-P"1") * _M.USA + P"61" * seperator^-1 * #(digit-P"0") * _M.Australia -- Other countries we haven't made specific patterns for yet +(P"20"+P"212"+P"213"+P"216"+P"218"+P"220"+P"221" +P"222"+P"223"+P"224"+P"225"+P"226"+P"227"+P"228"+P"229" +P"230"+P"231"+P"232"+P"233"+P"234"+P"235"+P"236"+P"237" +P"238"+P"239"+P"240"+P"241"+P"242"+P"243"+P"244"+P"245" +P"246"+P"247"+P"248"+P"249"+P"250"+P"251"+P"252"+P"253" +P"254"+P"255"+P"256"+P"257"+P"258"+P"260"+P"261"+P"262" +P"263"+P"264"+P"265"+P"266"+P"267"+P"268"+P"269"+P"27" +P"290"+P"291"+P"297"+P"298"+P"299"+P"30" +P"31" +P"32" +P"33" +P"34" +P"350"+P"351"+P"352"+P"353"+P"354"+P"355" +P"356"+P"357"+P"358"+P"359"+P"36" +P"370"+P"371"+P"372" +P"373"+P"374"+P"375"+P"376"+P"377"+P"378"+P"380"+P"381" +P"385"+P"386"+P"387"+P"389"+P"39" +P"40" +P"41" +P"420" +P"421"+P"423"+P"43" +P"44" +P"45" +P"46" +P"47" +P"48" +P"49" +P"500"+P"501"+P"502"+P"503"+P"504"+P"505"+P"506" +P"507"+P"508"+P"509"+P"51" +P"52" +P"53" +P"54" +P"55" +P"56" +P"57" +P"58" +P"590"+P"591"+P"592"+P"593"+P"594" +P"595"+P"596"+P"597"+P"598"+P"599"+P"60" +P"62" +P"63" +P"64" +P"65" +P"66" +P"670"+P"672"+P"673"+P"674" +P"675"+P"676"+P"677"+P"678"+P"679"+P"680"+P"681"+P"682" +P"683"+P"684"+P"685"+P"686"+P"687"+P"688"+P"689"+P"690" +P"691"+P"692"+P"7" +P"808"+P"81" +P"82" +P"84" +P"850" +P"852"+P"853"+P"855"+P"856"+P"86" +P"870"+P"871"+P"872" +P"873"+P"874"+P"878"+P"880"+P"881"+P"886"+P"90" +P"91" +P"92" +P"93" +P"94" +P"95" +P"960"+P"961"+P"962"+P"963" +P"964"+P"965"+P"966"+P"967"+P"968"+P"970"+P"971"+P"972" +P"973"+P"974"+P"975"+P"976"+P"977"+P"98" +P"992"+P"993" +P"994"+P"995"+P"996"+P"998" ) * (seperator^-1*digit)^6 -- At least 6 digits ) _M.phone = P"+" * seperator^-1 * international return _M
{ "perplexity_score": 4491.6, "pile_set_name": "Github" }
Q: Use variable in R substitute I have an expression stored in a variable a <- expression(10 + x + y) I want to use substitute to fill the expression with x = 2 substitute(a, list(x=2)) But this returns a and a evaluates to expression(10 + x + y) Ideally a would evaluate to expression(12 + y) (or (10 + 2 + y)) Is there any way to implement this behavior while using an expression stored in the variable a (mandated by other parts of my project)? A: Use do.call. substitute won't descend into some objects but if you use a[[1]] here then it will work. a <- expression(10 + x + y) do.call("substitute", list(a[[1]], list(x = 2))) ## 10 + 2 + y
{ "perplexity_score": 1414.4, "pile_set_name": "StackExchange" }
Q: keep getting error 'value of type ' ' has no member ' ' I've been stuck in this project for 2 days and I think I really need help. The program is only half-completed as I can't even get the layout to work. I have a var settings = setting() to link to the class setting(). Within class setting, I have varies vars and func. originally, I had the class put in a different swift file in the project, but because Swift keeps giving me the 'type has no member' error, I decided to put the class in the ContentView.swift. but even then, Swift seems to selectively dis-recognise my vars in the class instance and I can't exactly pinpoint why. e.g. the first line with error 'value of type 'setting' ha no member '$playerChoice'' says it cant find settings.playerChoice, whilst not listing the same error on the line underneath to look for settings.playerChoice in the String interpolation. I tried turning the program off and on, shift cmd K to re-compile preview a few times, it didn't work. Can someone please have a look for me what exactly went wrong? Thank you. my codes are as follow: import SwiftUI import Foundation class setting { var playerChoice : Int = 0 var questionCount : Int = 0 var pcRandom : Int = 0 var correctAnswer = 0 var question : String = "" var buttonArray = [Int]() var enteredAnswer = "" var gameRound : Int = 0 var scoreCount : Int = 0 var title2 = "" var alertTitle = "" var alertMessage = "" var alertEndGame = false func refreshGame() { pcRandom = Int.random(in: 1 ... 12) correctAnswer = playerChoice * pcRandom question = "\(playerChoice) times \(pcRandom) is??" } func compareAnswer() { let answerModified = enteredAnswer let answerModified2 = answerModified.trimmingCharacters(in: .whitespacesAndNewlines) if Int(answerModified2) == correctAnswer { scoreCount += 1 title2 = "RIGHT" } else { title2 = "WRONG" } if gameRound > questionCount { alertTitle = "Game Ended" alertMessage = "You got \(scoreCount) over \(questionCount)" alertEndGame = true } else { refreshGame() } gameRound += 1 gameRound += 1 } } var settings = setting() struct ContentView: View { var body: some View { VStack { Section (header: Text("Getting Your Settings Righttt").font(.title)) { Form { Stepper(value: settings.$playerChoice, in: 1...13, step: 1) { //value of type 'setting' ha no member '$playerChoice' if settings.playerChoice == 13 {Text("All")} else { Text("Multiplication table \(settings.playerChoice)") } } } Form { Text("Number of Questions?") Picker(selection: settings.$questionCount, label: Text("Number of Questions?")) { //value of type 'setting' has no member '$questionCount' ForEach (settings.questionCountArray, id: \.self) {Text("\($0)")} } ////value of type 'setting' ha no member '$questionCountArray' .pickerStyle(SegmentedPickerStyle()) } Spacer() Button("Tap to start") { settings.refreshGame } } Section (header: Text("Game Play").font(.title)){ Text(settings.question) TextField("Enter your answer", text: settings.$enteredAnswer, onCommit: settings.compareAnswer) //value of type 'setting' ha no member '$enteredAnswer' Text("Your score is currently \(settings.scoreCount)") Text("This is game round \(settings.gameRound) out of \(settings.questionCount)") } Spacer() } .alert(isPresented: settings.$alertEndGame) { Alert(title: Text("Game Ended"), message: Text("You reached score \(settings.scoreCount) out of \(settings.questionCount)"), dismissButton: .default(Text("Play Again"))) //game doesnt restart and refresh } } } A: Well I did not check logic, but below fixed code at least compilable, so hope it will 
be helpful import SwiftUI import Combine class Setting: ObservableObject { @Published var playerChoice : Int = 0 var questionCount : Int = 0 @Published var questionCountArray = [String]() var pcRandom : Int = 0 var correctAnswer = 0 var question : String = "" var buttonArray = [Int]() @Published var enteredAnswer = "" var gameRound : Int = 0 var scoreCount : Int = 0 var title2 = "" var alertTitle = "" var alertMessage = "" @Published var alertEndGame = false func refreshGame() { pcRandom = Int.random(in: 1 ... 12) correctAnswer = playerChoice * pcRandom question = "\(playerChoice) times \(pcRandom) is??" } func compareAnswer() { let answerModified = enteredAnswer let answerModified2 = answerModified.trimmingCharacters(in: .whitespacesAndNewlines) if Int(answerModified2) == correctAnswer { scoreCount += 1 title2 = "RIGHT" } else { title2 = "WRONG" } if gameRound > questionCount { alertTitle = "Game Ended" alertMessage = "You got \(scoreCount) over \(questionCount)" alertEndGame = true } else { refreshGame() } gameRound += 1 gameRound += 1 } } struct ASFContentView: View { @ObservedObject var settings = Setting() var body: some View { VStack { Section (header: Text("Getting Your Settings Righttt").font(.title)) { Form { Stepper(value: $settings.playerChoice, in: 1...13, step: 1) { //value of type 'setting' ha no member '$playerChoice' if settings.playerChoice == 13 {Text("All")} else { Text("Multiplication table \(settings.playerChoice)") } } } Form { Text("Number of Questions?") Picker(selection: $settings.questionCount, label: Text("Number of Questions?")) { //value of type 'setting' has no member '$questionCount' ForEach (settings.questionCountArray, id: \.self) {Text("\($0)")} } ////value of type 'setting' ha no member '$questionCountArray' .pickerStyle(SegmentedPickerStyle()) } Spacer() Button("Tap to start") { self.settings.refreshGame() } } Section (header: Text("Game Play").font(.title)){ Text(settings.question) TextField("Enter your answer", text: $settings.enteredAnswer, onCommit: settings.compareAnswer) //value of type 'setting' ha no member '$enteredAnswer' Text("Your score is currently \(settings.scoreCount)") Text("This is game round \(settings.gameRound) out of \(settings.questionCount)") } Spacer() } .alert(isPresented: $settings.alertEndGame) { Alert(title: Text("Game Ended"), message: Text("You reached score \(settings.scoreCount) out of \(settings.questionCount)"), dismissButton: .default(Text("Play Again"))) //game doesnt restart and refresh } } }
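For reference, the same binding pattern boiled down to a minimal, self-contained sketch (the type and property names below are invented purely for illustration): the model class adopts ObservableObject and marks its mutable state with @Published, the view holds it as @ObservedObject, and the $ prefix then produces the Binding that controls such as Stepper, Picker and TextField expect.
import SwiftUI
import Combine

// Hypothetical example types, not part of the question's project.
class CounterModel: ObservableObject {
    @Published var count = 0
}

struct CounterView: View {
    @ObservedObject var model = CounterModel()

    var body: some View {
        // $model.count is a Binding<Int>; plain model.count is just the current value.
        Stepper(value: $model.count, in: 0...10) {
            Text("Count is \(model.count)")
        }
    }
}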
{ "perplexity_score": 3491.9, "pile_set_name": "StackExchange" }
If you're looking for a spot to swing your racket this summer, look no further than the lush hills by the Humber River, where a new outdoor tennis club is springing up out of the ashes of four 45-year-old courts. Edenbridge Tennis Club is set to open in Etobicoke's James Gardens Park this summer, taking over the grounds of the former James Gardens Tennis Club. The old club, which had operated for years, closed in April 2018 to the discontent of many former members. According to ex-members who now comprise the committee of Edenbridge Tennis Club, James Gardens closed "as a direct result of what can only be described as an inept club administration." "Over the course of a 25 year period, there was little to no investment made into the club facilities or courts, virtually no programming and absolutely no marketing of any kind," says Edenbridge's website. The new club hopes to be more organized, and appears to have all its t's crossed when it comes to permits, an online member portal, and plans for a junior club. Edenbridge is currently taking memberships: according to the site, there's 83 members signed up already.
{ "perplexity_score": 234, "pile_set_name": "OpenWebText2" }
Q: Taking the real part of a complex equation for sea level I have the following complex equation that gives sea level as a function of time and space: $$ \zeta (x,t) = \left( \frac{hU_0}{\sqrt{gh}}\frac{e^{-idl}-e^{ikl}}{e^{idl}-e^{-idl}} \cdot e^{idx} - \frac{hU_0}{\sqrt{gh}}\frac{e^{ikl}-e^{idl}}{e^{idl}-e^{-idl}} \cdot e^{-idx} +\frac{hU_0k}{\sigma}e^{ikx} \right) \cdot e^{- i \sigma t},$$ where $d=\frac{\sigma}{\sqrt{gh}}$, $k$ is the wave number, $l$ is the length of the basin. This is in complex form, and my solution is the real part of this equation. My question is, do you know how to get to the real part? I have tried by first writing all $e^{i something}$ as $cos(something)+isin(something)$, then multiplying, adding... all of the parts and then finally just taking the bits that don't have the imaginary unit next to them, but the solution seems dodgy to me. I have checked it and can't find what (or if) I did wrong and I would appreciate it if somebody else could take a look at it. The result I get is the following: $$ Re[\zeta] =\frac{hU_0k}{\sigma} cos(kx-\sigma t)-\frac{hU_0}{\sqrt{gh}} \frac{cos(dx)}{sin(dl)}sin(kl-\sigma t)-\frac{hU_0}{\sqrt{gh}}\frac{cos(dl-dx)}{sin(dl)}sin(\sigma t). $$ A: One thing I would expect to be useful is $e^{idl}-e^{-idl}=2i\sin(dl)$ since that's on your denominator. $$ \zeta (x,t) = \left(A\frac{e^{-idl}-e^{ikl}}{e^{idl}-e^{-idl}} \cdot e^{idx} - A\frac{e^{ikl}-e^{idl}}{e^{idl}-e^{-idl}} \cdot e^{-idx} +Be^{ikx} \right) \cdot e^{- i \sigma t},$$ where $A= \frac{hU_0}{\sqrt{gh}}$ and $B=\frac{hU_0k}{\sigma}$ are real I assume. Tidying up a bit: $$ \zeta (x,t) = \frac{A}{2i\sin(dl)}\left(e^{-id(l-x)- i \sigma t}-e^{ikl+idx- i \sigma t} - e^{ikl-idx- i \sigma t}+e^{id(l-x)- i \sigma t}\right) +Be^{ikx- i \sigma t} $$ You can use that $\mathrm{Re}(e^{ix})=\cos(x), \quad\mathrm{Im}(e^{ix})=\sin(x),\quad \mathrm{Re}(ie^{ix})=-\sin(x),\quad \mathrm{Im}(ie^{ix})=\cos(x). $ $$=-\frac{A}{2\sin(dl)}\left[-\sin(-d(l-x)-\sigma t)+\sin(kl+dx- \sigma t) +\sin(kl-dx-\sigma t)+\sin(d(l-x)-\sigma t)\right] +B\cos(kx-\sigma t) $$ $$= \frac{-A}{2\sin(dl)}\left[\sin(d(l-x)+\sigma t)+\sin(kl+dx- \sigma t) +\sin(kl-dx-\sigma t)-\sin(d(l-x)-\sigma t)\right] \\+B\cos(kx-\sigma t) $$ You can then pair up the sines then and use the sine addition rules. $$\sin(A)\pm \sin(B)=2\sin\left(\frac{A\pm B}{2}\right)\cos\left(\frac{A\mp B}{2}\right)$$ I think it would be natural to pair the first and last terms and the middle two terms, because things will cancel. $$= \frac{-A}{\sin(dl)}\left[\sin(kl- \sigma t)\cos(dx) +\cos(d(l-x))\sin(\sigma t)\right] +B\cos(kx-\sigma t) $$ Which is what you have I believe.
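For completeness, substituting $A = \frac{hU_0}{\sqrt{gh}}$ and $B = \frac{hU_0 k}{\sigma}$ back into that last line, and using $\cos(d(l-x)) = \cos(dl-dx)$, gives
$$ Re[\zeta] = \frac{hU_0k}{\sigma}\cos(kx-\sigma t) - \frac{hU_0}{\sqrt{gh}}\frac{\cos(dx)}{\sin(dl)}\sin(kl-\sigma t) - \frac{hU_0}{\sqrt{gh}}\frac{\cos(dl-dx)}{\sin(dl)}\sin(\sigma t), $$
which is exactly the expression quoted in the question, so the direct expansion checks out.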
{ "perplexity_score": 977.8, "pile_set_name": "StackExchange" }
1. Field of the Invention The present invention relates to irrigation control systems, and in particular to irrigation control systems that collect irrigation data from one or more sensors. 2. Description of the Related Technology Irrigation systems are used widely in commercial and residential applications. Typical irrigation systems include an irrigation controller connected to one or more irrigation devices (e.g., valves) which provide water to desired locations via an assortment of hydraulic components (e.g., pipes, sprinkler heads, and drip lines). The irrigation controllers control the components to provide desired irrigation in accordance with a programmed schedule. With some irrigation control systems, an operator determines the amount of water and the time at which the water should be applied by defining an irrigation schedule. The irrigation schedule may determine which valves are activated at which times, and for how long. Any changes to the irrigation schedule may be performed manually by the operator. Other so called “smart” irrigation control systems receive input from sensors that indicate the nature of the environment being irrigated. This input may be used by the irrigation controller to determine how much irrigation is necessary in order to maintain the health of the installed plant life. For example, if the sensor input indicates there has been a recent rain storm, it may not be necessary to provide additional water via the irrigation system. Other input received by the irrigation controller may indicate the moisture present in the soil. Upon receiving the indication of soil moisture, the irrigation controller may determine an amount of irrigation needed to maintain soil moisture levels within a desired range that supports the installed plant life. Other irrigation sensors may provide input on the flow rate of water through an irrigation supply line. By knowing the actual flow rate of the water, an irrigation controller may more precisely calculate the amount of water being applied during an irrigation program. Based on the needs of the installed plant life, the irrigation controller may extend or shorten the time a particular irrigation zone is active based on the flow rate of the water in the zone. Irrigation controllers may collect data from these irrigation sensors via either wireless or wired connections. In some environments, wireless connections may have distance limitations, and so a wired connection may be favored. With existing irrigation controller solutions, use of wired sensors requires a dedicated wiring circuit for each sensor. In an irrigation zone including, for example, three sensors, three separate wiring circuits may be needed between the irrigation controller and the sensors in that irrigation zone. An irrigation zone may also include at least one valve actuator. The valve actuation may be performed by an electrical solenoid. A separate wired circuit between the irrigation controller and the solenoid may also be necessary. In such a configuration, four individual sets of wired connections may be needed for one irrigation zone. When one irrigation installation may include up to hundreds of individual zones, the need to provide a dedicated wiring circuit for each irrigation sensor and each water valve may be problematic. For example, when installing a new irrigation system, the need to possibly quadruple the number of wiring circuits necessary to install a smart irrigation system may add significant cost to the installation. 
Furthermore, when retrofitting legacy systems with smart irrigation controllers that utilize irrigation sensors, the need to install additional wiring may disrupt established ground cover. Additionally, the expense of installing additional wiring may be a significant proportion of the retrofitting cost, and may reduce adoption of smart irrigation systems when not required by law.
{ "perplexity_score": 364, "pile_set_name": "USPTO Backgrounds" }
Our Services Enjoy a luxurious vacation on a cruise in Goa by contacting Jai Maa Tour & Travels. We are located in Faridabad (Haryana) and offer the affordable cruise booking service. Whether it is about spending a holiday or enjoying a private trip, feel free to get in touch with us. Provide us the information like the number of travelers and type & number of suites to be booked. Rest of the work will be managed by our employees. So, feel free to get in touch with us and make the best use of cruise booking service.
{ "perplexity_score": 371.4, "pile_set_name": "Pile-CC" }
Columbia Area Career Center. We are located south of town next to Rock Bridge High School. Please use the South 3 entrance of the parking lot and follow the signs to the class. 4203 S Providence Road, Columbia MO
Description: This is a hands-on learning opportunity for small business owners, office managers and anyone with financial responsibilities. Because the Desktop and Online versions of QuickBooks are completely different software programs, we offer two classes to give you the best education suited to the product you use. This class is specifically designed for Online QuickBooks users and will teach you how to create accounts, enter transactions (bills, checks, invoices, sales receipts and deposits) and generate financial reports. The ultimate goal is to help you boost the accuracy of your financial data.
Registration deadline: 9/20/2017
Class size: 15
Event full: No
Cost: $129.00
{ "perplexity_score": 526.4, "pile_set_name": "Pile-CC" }
Q: Is it possible to move to a separate file? I've got a couple of animations as storyboards in window resources. Is there a way to move them to a separate file and still access them? If yes, please tell me how. Just to be clear, I want to move the following generated code from my MainWindow.xaml file to a separate file so I can keep code tidy and organized: <Window.Resources> <Storyboard x:Key="sbShowWindow"> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0.874"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0.874"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseInOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0.595"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> </Storyboard> <Storyboard x:Key="sbHideWindow"> <DoubleAnimationUsingKeyFrames 
Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0.874"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0.874"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CircleEase EasingMode="EaseOut"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="layoutRoot"> <EasingDoubleKeyFrame KeyTime="0" Value="1"> <EasingDoubleKeyFrame.EasingFunction> <CubicEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0.245"> <EasingDoubleKeyFrame.EasingFunction> <CubicEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> <EasingDoubleKeyFrame KeyTime="0:0:0.5" Value="0"> <EasingDoubleKeyFrame.EasingFunction> <CubicEase EasingMode="EaseIn"/> </EasingDoubleKeyFrame.EasingFunction> </EasingDoubleKeyFrame> </DoubleAnimationUsingKeyFrames> </Storyboard> </Window.Resources> A: You can put this code into a separate resource dictionary, either in the same assembly, or in another one. Then all you need is to add that dictionary into merged dictionaries collection of window's resources: <Window.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="pack://application:,,,/YourAssembly;component/Folder/YourResourceDictionary.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Window.Resources> Here's the syntax of pack URIs.
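If the storyboards are started from XAML rather than from code-behind, the keys keep working through the merged dictionary. For example, assuming sbShowWindow is meant to run when the window loads (adjust the trigger to however the storyboard is actually started in your project):
<Window.Triggers>
    <EventTrigger RoutedEvent="FrameworkElement.Loaded">
        <BeginStoryboard Storyboard="{StaticResource sbShowWindow}"/>
    </EventTrigger>
</Window.Triggers>
From code-behind the relocated storyboards remain reachable as well, via FindResource("sbShowWindow"), since resource lookup also searches merged dictionaries.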
{ "perplexity_score": 1991, "pile_set_name": "StackExchange" }
National Minority Donor Awareness Week, August 1-7
Observed annually, National Minority Donor Awareness Week was created to increase awareness of the need for more organ, eye, and tissue donors, especially among minorities. Read more about the... National Minority Donor Awareness Week
National Minority Donor Awareness Day
The mission of the National Minority Organ and Tissue Transplant Education Program (MOTTEP) is to decrease the number and rate of ethnic minority Americans needing organ and tissue transplants. Read more about the... National Minority Organ and Tissue Transplant Education Program
National Night Out: America's Night Out Against Crime
The National Association of Town Watch (NATW) is a non-profit organization dedicated to the development and promotion of organized, law enforcement-affiliated crime and drug prevention programs. Members include: Neighborhood, Crime, Community, Town and Block Watch Groups; law enforcement agencies; state and regional crime prevention associations; and a variety of businesses, civic groups and concerned individuals working to make their communities safer places in which to live and work. Read more about the... National Association of Town Watch
Cataract Awareness Month
Eye Injury Prevention Month
The American Academy of Ophthalmology is the largest national membership association of Eye M.D.s. Eye M.D.s are ophthalmologists, medical doctors who provide comprehensive eye care, including medical, surgical and optical care. More than 90 percent of practicing U.S. Eye M.D.s are Academy members, and the Academy has more than 7,000 international members. Read more about the... American Academy of Ophthalmology
Medic Alert Month
The MedicAlert Foundation is a non-profit healthcare informatics organization dedicated to providing services worldwide to our members that protect and save lives. Read more about the... MedicAlert Foundation
Spinal Muscular Atrophy Awareness Month
Nighttime Walks
Families of Spinal Muscular Atrophy was founded in 1984 for the purpose of raising funds to promote research to find a cure for Spinal Muscular Atrophy, and to support families affected by SMA. FSMA is the largest private funder of SMA research and is leading the way to find a cure. Read more about... Families of Spinal Muscular Atrophy
Psoriasis Awareness Month
The National Psoriasis Foundation is a patient-driven nonprofit organization that is the voice for the 5 million people affected by psoriasis and psoriatic arthritis. Our mission is to improve lives through education, advocacy and research. Read more about the... National Psoriasis Foundation
Amblyopia Awareness Month
Since 1908, Prevent Blindness America has been the nation's leading volunteer eye health and safety organization with the sole mission of preventing blindness and preserving sight. Read more about... Prevent Blindness America
Featured Idea: Victory Scratch Cards... Scratch Your Way To Success!
CLICK HERE For Information On Victory Scratch Cards!
Victory Scratch Cards are an innovative new way to collect donations. They can be used alone, or in tandem with other fundraising projects, like 10k runs and bike-a-thons, to raise even more money for your cause. The cards can be customized with your group name and logo, and each card features inspirational messages. Try them now and get FREE Shipping and 30 day financing!
Click Here to learn more about Victory Scratch Card Fundraising.
{ "perplexity_score": 389.4, "pile_set_name": "Pile-CC" }
Q: How to select specific combobox item with multiple keystrokes? First few characters of item Windows explorer in XP will allow you to make a file selection based on typing a few characters. I would like to know if there is any simplistic .net feature I can use to mimic this behaviour in a combobox? I think I've seen this happen in comboboxes before, and would like to know if there is a property I can use? I know I could develop code around the "Key" events, but can't justify spending the time on it. For example: In a folder which contains "Apple.doc, banana.doc, cherry.doc, cranberry.doc" then typing "b" will select "banana.doc", typing "c" will select "cherry.doc" but typing "cr" will select "cranberry.doc" Thanks in advance G A: Have a look at ComboBox.AutoCompleteMode.
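A minimal sketch of what that looks like in C# (assuming a standard WinForms ComboBox named comboBox1 that already contains the file names, with its default editable DropDown style):
// Matches grow as the user types: "b" -> banana.doc, "cr" -> cranberry.doc
comboBox1.AutoCompleteMode = AutoCompleteMode.SuggestAppend;
comboBox1.AutoCompleteSource = AutoCompleteSource.ListItems;
Note that the built-in autocomplete only applies while the combo box is editable; a DropDownList-style box typically falls back to plain first-character matching.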
{ "perplexity_score": 596.9, "pile_set_name": "StackExchange" }
-99 and -1.01. 99.99 Calculate -1*0.068. -0.068 What is -0.1 times 0.13? -0.013 Multiply 2 and -4.9. -9.8 Calculate -1290*-0.2. 258 14 * -11 -154 Work out 0.08 * 15. 1.2 -0.07 times 2.139 -0.14973 Calculate 391*-0.05. -19.55 0.3*2843 852.9 Calculate 0.06*28. 1.68 Product of 5 and -69. -345 Product of -0.38 and 3. -1.14 -29*15 -435 Calculate -0.1*32. -3.2 -14.603 * -0.4 5.8412 Calculate -0.5*2.97. -1.485 11.4 times -1 -11.4 Product of -26 and 1. -26 Multiply 1 and -21.5. -21.5 -0.3*0.0519 -0.01557 What is 56 times -4? -224 What is the product of -2642 and 0? 0 -2 * -55 110 -212.7*-12 2552.4 Work out -281 * 0.4. -112.4 10*0.1071 1.071 Product of 0.4 and 64. 25.6 What is 63 times -0.5? -31.5 What is 35 times -4.9? -171.5 0.8 times -1331 -1064.8 Calculate 598*-0.2. -119.6 What is 0.2 times 88? 17.6 -2 times -158 316 -138*0.65 -89.7 What is 19 times -1? -19 Product of 46 and -54. -2484 Calculate 7*-0.8. -5.6 4 times -0.07213 -0.28852 What is the product of 22 and -427? -9394 Product of -8.88 and 5. -44.4 Product of -1710 and 0.3. -513 Calculate -20*18. -360 3.5 times -4 -14 Calculate 0.01733*-0.1. -0.001733 Multiply 25 and -0.02. -0.5 Product of -61 and 0.02. -1.22 -1*-4.1 4.1 Work out -0.149 * 0.1. -0.0149 0.01 * 927 9.27 What is the product of -0.05 and 0.75? -0.0375 5 times -183 -915 What is -0.24 times 32? -7.68 Calculate -0.2*1.714. -0.3428 0.5 * 4935 2467.5 4 times 14 56 Calculate -1*-1626. 1626 What is 0.2 times -18.3? -3.66 What is -2.6 times -139? 361.4 What is the product of 0 and 72? 0 Product of -25.1 and -1.1. 27.61 37.3 times -1.7 -63.41 -131*0 0 Work out 1.05 * -1. -1.05 Calculate 74*-151. -11174 Multiply 10 and -0.86. -8.6 Product of 0.341 and 3. 1.023 -184.9 * 0.3 -55.47 Work out 0.0324 * -0.5. -0.0162 What is the product of -60 and -0.5? 30 4 * 559 2236 Multiply -7 and 213. -1491 What is the product of 21.2 and -1? -21.2 2 * -0.27 -0.54 Product of 97 and -0.2. -19.4 -0.0446 times 5 -0.223 Product of 1 and 82.3. 82.3 Calculate 3.5*-1.4. -4.9 -39 * 4 -156 -0.1*4.5 -0.45 -17 * -0.024 0.408 Calculate -0.4*1.56. -0.624 Work out 2149 * 3. 6447 What is 4.9 times -6.8? -33.32 Calculate -0.489*0.1. -0.0489 88.3 times 0.4 35.32 Multiply -0.1 and -0.08. 0.008 67 * -1 -67 29 times -1.6 -46.4 Multiply 4349 and 5. 21745 Work out -2 * 22.1. -44.2 -0.05 * -8 0.4 What is -106 times 0.054? -5.724 2.1 * -116 -243.6 Calculate 1.6*-537. -859.2 What is 22021 times -0.5? -11010.5 -4*39 -156 Work out -41 * 625. -25625 What is the product of 45 and 0.4? 18 1*-60 -60 What is the product of 6592 and -1? -6592 0.763*0.3 0.2289 Work out 0.059 * 37. 2.183 Work out 2 * -30. -60 -21 times 3.4 -71.4 5 * 412 2060 Multiply 2.8 and 11.5. 32.2 -0.2*6.78 -1.356 Work out 5 * -70. -350 Multiply 0.9 and 9.37. 8.433 Multiply -13 and 0.2. -2.6 Product of 0.1 and -1.88. -0.188 -0.05 times 61 -3.05 -139 times 0.003 -0.417 Product of 0.015 and 6. 0.09 Calculate 0.5*-255. -127.5 Multiply 25 and 0.3. 7.5 -572*0.14 -80.08 Work out 2 * 91. 182 Work out -0.4 * 45.4. -18.16 Multiply 10.3 and 0.6. 6.18 Product of 331 and -84. -27804 What is -0.5 times -0.56? 0.28 -442 * 0.17 -75.14 Product of -19 and -7. 133 Calculate -867*-0.09. 78.03 Multiply -1583 and 0.3. -474.9 Work out -709 * -0.04. 28.36 -1630 * -2 3260 5 * 0.058 0.29 What is the product of 16 and 0.105? 1.68 Work out 12121 * 3. 36363 Multiply 0.3 and -731. -219.3 Work out 3 * -9.6. -28.8 5 times -48 -240 What is the product of 3 and -875? -2625 -0.09 times -0.3 0.027 Calculate 0.4*83. 33.2 Product of 10.051 and -0.3. -3.0153 What is the product of 158 and 0.1? 
15.8 0.019 * -3 -0.057 What is the product of 16 and -14? -224 2.84 times 5 14.2 5 * 1394 6970 7*-0.46 -3.22 Multiply -3 and 1141. -3423 Calculate 165.6*3. 496.8 Multiply 7344 and 0.5. 3672 5 times 657 3285 What is 0 times 32? 0 Product of -1.12 and -5. 5.6 What is the product of -1.5 and 42? -63 Multiply -21 and -1. 21 Multiply -1.03 and 1. -1.03 What is -127 times 2? -254 Calculate -0.93*-0.5. 0.465 Work out -0.1 * 2613. -261.3 What is 1199 times -2? -2398 Product of -1.9 and -851. 1616.9 Calculate 0.01707*2. 0.03414 -0.0462 times 0.5 -0.0231 What is the product of 1.7 and -5? -8.5 What is the product of 0 and 130? 0 What is the product of -0.2 and -162? 32.4 What is the product of -71 and 2? -142 What is 2.42 times 0.4? 0.968 What is 4 times -35.62? -142.48 Product of 4 and 44. 176 What is 2 times 4.1? 8.2 What is the product of 6 and 3? 18 -0.06*138 -8.28 What is 2 times 2418? 4836 320 * 0.5 160 -53*3 -159 0.3 * 0.0201 0.00603 35772 * -5 -178860 0.06 times 12 0.72 What is 33.4 times 3? 100.2 What is 4853 times 2? 9706 Product of 12 and 7.66. 91.92 What is the product of 5383.3 and 0.2? 1076.66 -0.036*-0.5 0.018 Work out -15 * -239. 3585 What is -0.3 times -1110? 333 Calculate -1*11.55. -11.55 Work out -0.5 * 0.458. -0.229 0.4*152 60.8 What is 5 times -27.49? -137.45 Product of 0.4 and 1.4. 0.56 What is 5.09 times -0.3? -1.527 Work out -485 * -0.5. 242.5 4 times -0.04 -0.16 Work out -104 * 0.1. -10.4 Work out 2 * -0.017. -0.034 What is the product of -0.01 and -5? 0.05 Product of 0.3 and -4.6. -1.38 Product of -3 and 2. -6 Calculate -0.04*0.21. -0.0084 Product of 7058 and 4. 28232 0.1 * -144 -14.4 -0.08 times -0.9 0.072 Product of 5 and 0.103. 0.515 Work out -0.1 * 580. -58 2426 times -0.01 -24.26 What is -206 times 0.2? -41.2 Calculate -0.86*100. -86 What is 560 times 5? 2800 -9 * 0.94 -8.46 What is 2006 times -5? -10030 Product of -0.1 and -4.5. 0.45 What is 0.2 times -0.019? -0.0038 Multiply -0.155 and 0.2. -0.031 Multiply -13.2 and -3. 39.6 -3 times 19.66 -58.98 Work out -0.043 * 240. -10.32 -0.024*-0.4 0.0096 Multiply 0.4 and 0.84. 0.336 What is the product of 0 and 577.9? 0 Multiply 0.931 and 0.09. 0.08379 Multiply -6 and 1.4. -8.4 Calculate 57*0.101. 5.757 383.3 * -0.04 -15.332 Multiply 47.31 and 2. 94.62 What is the product of 0 and 40? 0 -194.1*0.2 -38.82 Work out -1347 * 0.2. -269.4 Work out -4 * 10.59. -42.36 What is the product of 1 and -107? -107 Calculate -1025*3. -3075 Calculate 0.009*-0.025. -0.000225 28.2*39 1099.8 What is -1.524 times 5? -7.62 -486*-0.3 145.8 0 times -0.1258 0 What is the product of -11 and -0.5? 5.5 Product of -0.239 and -2. 0.478 12 times -1.22 -14.64 -10262 times -1 10262 Work out 0.04 * -544. -21.76 What is the product of -3 and -26? 78 -2.97 * -0.5 1.485 Multiply 0.2 and -3966. -793.2 What is 234 times 0.2? 46.8 -0.01 * 132 -1.32 -163 times -0.1 16.3 Work out -0.2 * -6.13. 1.226 What is -0.5 times 1.17? -0.585 Multiply 0.3 and -0.126. -0.0378 Multiply 68 and -12. -816 -0.05568 * 0.05 -0.002784 Multiply -43.41 and -6. 260.46 What is the product of 0.1 and 131? 13.1 What is 1.44 times 0.1? 0.144 Product of 0.319 and -50. -15.95 -1 times -0.4153 0.4153 Multiply -3 and -0.06. 0.18 Multiply 0.1 and -0.56. -0.056 2181 * -0.1 -218.1 -0.0604 * 60 -3.624 0.2 * -363.3 -72.66 1.2*-3.3 -3.96 What is 0.065 times 0.026? 0.00169 Multiply -16 and 0.2. -3.2 Multiply -0.96 and -0.014. 0.01344 What is the product of -10.4 and 2? -20.8 87*-0.3 -26.1 Product of -4 and -0.829. 3.316 0.8 * -0.09 -0.072 Product of -0.4 and -76. 30.4 What is the product of 0.1 and 0.0352? 
0.00352 What is -494 times -0.5? 247 What is the product of 2 and 89? 178 Product of -0.2 and -189. 37.8 What is the product of 0.3 and 861? 258.3 What is 969 times 3? 2907 What is -0.07 times -0.1278? 0.008946 What is -35 times 49? -1715 Product of -0.23456 and -4. 0.93824 Calculate -2.28*-1.7. 3.876 Calculate -1.9*-0.14. 0.266 0.2 times 39 7.8 What is the product of 4 and -0.395? -1.58 -0.2*148.4 -29.68 Multiply -5 and -54. 270 -8.29 times 4 -33.16 What is the product of 3 and 676? 2028 -5 * -0.2029 1.0145 Product of 26.7 and -6. -160.2 -3.1 times -0.11 0.341 0.02*-19 -0.38 Product of -4 and -0.97. 3.88 What is -3 times 1.05? -3.15 Calculate -35*-3. 105 182*5 910 Work out 84 * 5. 420 0.2 * 6 1.2 What is 401 times 0.4? 160.4 Product of 2 and 117. 234 0.1 times -1018 -101.8 Work out -5 * -3.07. 15.35 Product of 560 and 0. 0 -29.06*0.4 -11.624 0 * -2.9 0 Product of -5 and -3. 15 Multiply -7 and -0.1949. 1.3643 What is 9 times -4? -36 0.01 * 7 0.07 -0.02 * 2 -0.04 Multiply -0.4 and 676. -270.4 Work out 0.094 * -2. -0.188 14 * 33 462 What is -0.2 times -187? 37.4 Product of 3 and
{ "perplexity_score": 2364.8, "pile_set_name": "DM Mathematics" }
Factors relating to intelligence in treated cases of spina bifida cystica. Analysis of results on 83 survivors of spina bifida cystica showed the following: (1) in the seven children who had had central nervous system (CNS) infection, intelligence was impaired, six being severely retarded. (2) In the nine children who did not suffer CNS infection or require a shunt, intelligence was normal. The need for a shunt was related to radiological appearance (craniolacunae) and to the sensory level at birth. (3) In the 67 children who did not suffer CNS infection but did require a shunt, intelligence was related to sensory level found at birth and to thickness of the pallium measured within four weeks of birth. Their intelligence did not relate to the occipitofrontal circumference at birth, or to its increase before the insertion of the shunt. Intelligence did not relate to the function of the shunt at the time of assessment or to the number of times it had been revised.
{ "perplexity_score": 301.3, "pile_set_name": "PubMed Abstracts" }
Q: Unable to get each element from variable "coin" to dictionary as key How do I get a dictionary from the variable coin which I have defined in my code? There are 32 elements in variable coin but I'm getting only one key and value, which is TUSD, in the last line.
import requests
import json

r = requests.get('https://koinex.in/api/ticker')
koinexData = r.json()
koinexprice = koinexData['stats']['inr']

for coin in koinexprice:
    print(coin)
    coindata = {}
    coindata["koinex"] = {
        coin: {
            "SellingPrice": koinexprice[coin]["highest_bid"],
            "buyingPrice": koinexprice[coin]["lowest_ask"]
        }
    }
    # data.append(coindata)
    # print(data)

# s = json.dumps(coindata)
print(s)
A: You keep overwriting your coindata dictionary inside the loop, which is why only the last value remains. The second issue is that coindata['koinex'] should be a list rather than a dictionary, because you keep adding values to it. If you move the initialization code outside the loop, you will not have this problem:
coindata = {}
coindata['koinex'] = []  # this should be a list, not a dictionary

for coin in koinexprice:
    print(coin)
    # Create a temporary dictionary d to hold the data you want
    # to add to coindata
    d = {coin: {'SellingPrice': koinexprice[coin]['highest_bid'],
                'buyingPrice': koinexprice[coin]['lowest_ask']}}
    coindata["koinex"].append(d)  # add the new dictionary to the list

print(coindata)
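If you would rather look prices up by coin symbol than loop over a list, the same idea can fill a nested dictionary instead (same field names as above, sketch only):
coindata = {"koinex": {}}
for coin in koinexprice:
    coindata["koinex"][coin] = {
        "SellingPrice": koinexprice[coin]["highest_bid"],
        "buyingPrice": koinexprice[coin]["lowest_ask"],
    }
print(json.dumps(coindata, indent=2))
Either way, the key point is that the container is created once, before the loop, and only filled inside it.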
{ "perplexity_score": 2476.6, "pile_set_name": "StackExchange" }
Pages Busy With Fall and Halloween Items Have been just a little busy lately. Thought I'd show you what I've been working on for my Etsy shop for the Fall and Halloween. Lots of burlap runners in various patterns. Spiders for Halloween, a beautiful Damask pattern and a bird pattern for Fall. Don't exactly know what happened when I photographed the runners up close. The picture of the spider runner above is the true color. They are all on a natural colored burlap background with spiders in black and the birds and damask pattern are in dark chocolate. This is a take-off on my Snowman Muffin Tin for Fall. This cute thing is all handpainted. It has added dimension with a wooden raised button for the sunflower, a real ribbon bow on the wreath and the scarecrow with a fabric scarf, raffia hair and a felt hat. It hangs from two different fabric strips. Isn't this the cutest pumpkin print. This pillow has a contrasting orange print cording. This pillow is made out of the old vintage tablecloth I found last Saturday at the Flea Market. It is combined with with an orange designer linen and trimmed with a vintage crocheted trim. A Fall pillow that is reversible. The Fall print on the front reverses to the same damask print - all done in burlap. Isn't this an evil cat? This vintage cat image is done in felt on an orange cotton background and is trimmed in a black and white ribbon edging. Note the cute black tulle collar! One of the Halloween runners is a bright orange cotton duck fabric with a contrasting stripe of Halloween print and a lime green ruffle. This runner has a band of spooky Halloween houses and a ruffle of black tulle gathered on each end. So there, that's what I've been doing the last couple of days. All of these items will be on my Etsy shop - www.20northora.etsy.com You have been really busy getting your Etsy shop stocked up for fall. I need to do the same with mine, but don't seem to have the energy when I get home from work.Your pillows are just lovely and such a nice variety too. I'm sure they'll brighten up any room for the season.Cindy You are so talented! My gosh, what colour and pattern and flair! I hope you do well with them, but if I had created such artistic autumn fun, I don't know that I would be able to part with these pieces!:) What a fabulous collection and theme - I love everything, the black and white prints are especially pretty. Where I live there is very little interest in decorating our homes like this, which is such a shame. Everything you have made is so fun - the muffin tin is a great idea!
{ "perplexity_score": 533.2, "pile_set_name": "Pile-CC" }
Q: How to add "dont show me again" checkbox to an alertbox? When my android app starts there will be a prompt that asks user whether to upgrade to newer version or not.I used an alertbox to display it.I have two buttons in it, "Upgrade" and "No thanks".Then I added a checkbox to it.And the label for that check box is "Dont ask me again". When user click on that checkbox,that should be remembered and the prompt shouldnt asked again.Can anyone suggest me a solution to achieve this? A: The Best option you can go for is of SharedPreference. You can Save the in Internal Database. PackageInfo info = getPackageManager().getPackageInfo(PACKAGE_NAME, 0); int currentVersion = info.versionCode; // version name here for display in the about box later. this.sVersionName = info.versionName; SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this); int lastVersion = prefs.getInt("Key", 0); if (currentVersion > lastVersion) { prefs.edit().putInt("key",currentVersion).commit(); Intent intent = new Intent(this, StartUp.class); intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_WHEN_TASK_RESET) // Your Code goes here if you want to Display it Only Once. return true; } EDIT SavePreferences("MEMORY1","Your String Here"); private void SavePreferences(String key, String value) { SharedPreferences settings = getSharedPreferences(pref, 0); SharedPreferences.Editor editor = settings.edit(); editor.putString(key, value); editor.commit(); } private void LoadPreferences() { SharedPreferences settings = getSharedPreferences(Settings.pref, 0); String sDefault_Card = settings.getString("MEMORY1", ""); }
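Putting the pieces together, here is a hedged sketch of the dialog flow itself, assuming it runs inside an Activity (imports from android.app, android.content, android.preference and android.widget omitted); the layout and preference key names (R.layout.upgrade_dialog, R.id.dont_ask_again, "skip_upgrade_prompt") are invented for illustration and would need to match your project:
final SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
if (!prefs.getBoolean("skip_upgrade_prompt", false)) {
    // A small custom layout that contains the "Dont ask me again" CheckBox
    View content = getLayoutInflater().inflate(R.layout.upgrade_dialog, null);
    final CheckBox dontAskAgain = (CheckBox) content.findViewById(R.id.dont_ask_again);
    new AlertDialog.Builder(this)
            .setTitle("Upgrade available")
            .setView(content)
            .setPositiveButton("Upgrade", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    // start the upgrade here
                }
            })
            .setNegativeButton("No thanks", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    if (dontAskAgain.isChecked()) {
                        // remember the choice so the prompt is never shown again
                        prefs.edit().putBoolean("skip_upgrade_prompt", true).commit();
                    }
                }
            })
            .show();
}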
{ "perplexity_score": 2897.6, "pile_set_name": "StackExchange" }
Q: Can sqrtsd in inline assembler be faster than sqrt()? I am creating a testing utility that requires high usage of sqrt() function. After digging in possible optimisations, I have decided to try inline assembler in C++. The code is: #include <iostream> #include <cstdlib> #include <cmath> #include <ctime> using namespace std; volatile double normalSqrt(double a){ double b = 0; for(int i = 0; i < ITERATIONS; i++){ b = sqrt(a); } return b; } volatile double asmSqrt(double a){ double b = 0; for(int i = 0; i < ITERATIONS; i++){ asm volatile( "movq %1, %%xmm0 \n" "sqrtsd %%xmm0, %%xmm1 \n" "movq %%xmm1, %0 \n" : "=r"(b) : "g"(a) : "xmm0", "xmm1", "memory" ); } return b; } int main(int argc, char *argv[]){ double a = atoi(argv[1]); double c; std::clock_t start; double duration; start = std::clock(); c = asmSqrt(a); duration = std::clock() - start; cout << "asm sqrt: " << c << endl; cout << duration << " clocks" <<endl; cout << "Start: " << start << " end: " << start + duration << endl; start = std::clock(); c = normalSqrt(a); duration = std::clock() - start; cout << endl << "builtin sqrt: " << c << endl; cout << duration << " clocks" << endl; cout << "Start: " << start << " end: " << start + duration << endl; return 0; } I am compiling this code using this script that sets number of iterations, starts profiling, and opens profiling output in VIM: #!/bin/bash DEFAULT_ITERATIONS=1000000 if [ $# -eq 1 ]; then echo "Setting ITERATIONS to $1" DEFAULT_ITERATIONS=$1 else echo "Using default value: $DEFAULT_ITERATIONS" fi rm -rf asd g++ -msse4 -std=c++11 -O0 -ggdb -pg -DITERATIONS=$DEFAULT_ITERATIONS test.cpp -o asd ./asd 16 gprof asd gmon.out > output.txt vim -O output.txt true The output is: Using default value: 1000000 asm sqrt: 4 3802 clocks Start: 1532 end: 5334 builtin sqrt: 4 5501 clocks Start: 5402 end: 10903 The question is why the sqrtsd instruction takes only 3802 clocks, to count square root of 16, and sqrt() takes 5501 clocks? Does it have something to do with HW implementation of certain instructions? Thank you. CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 2 Core(s) per socket: 2 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 21 Model: 48 Model name: AMD A8-7600 Radeon R7, 10 Compute Cores 4C+6G Stepping: 1 CPU MHz: 3100.000 CPU max MHz: 3100,0000 CPU min MHz: 1400,0000 BogoMIPS: 6188.43 Virtualization: AMD-V L1d cache: 16K L1i cache: 96K L2 cache: 2048K NUMA node0 CPU(s): 0-3 A: Floating point arithmetic has to take into consideration rounding. Most C/C++ compilers adopt IEEE 754, so they have an "ideal" algorithm to perform operations such as square root. They are then free to optimize, but they must return the same result down to the last decimal, in all cases. So their freedom to optimize is not complete, in fact it is severely constrained. Your algorithm probably is off by a digit or two part of the time. Which could be completely negligible for some users, but could also cause nasty bugs for some others, so it's not allowed by default. If you care more for speed than standard compliance, try poking around with the options of your compiler. For instance in GCC the first I'd try is -funsafe-math-optimizations, which should enable optimizations disregarding strict standard compliance. Once you tweak it enough, you should come closer to and possibly pass your handmade implementation's speed.
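A practical footnote (this is my reading of typical GCC behaviour and should be confirmed by inspecting the assembly with g++ -S): at -O0 the sqrt() call goes through the libm wrapper, which also has to deal with errno and negative arguments, while the hand-written version is a bare sqrtsd, so the two loops are not really measuring the same thing. Letting the compiler inline the call usually closes the gap, for example:
# -fno-math-errno (also implied by -ffast-math) lets g++ emit a plain sqrtsd for std::sqrt
g++ -O2 -fno-math-errno -msse4 -std=c++11 -DITERATIONS=1000000 test.cpp -o asd
Bear in mind that at -O2 the benchmark loops themselves may be partly optimized away (the argument never changes), so the timings need a more careful harness before drawing conclusions.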
{ "perplexity_score": 1533.9, "pile_set_name": "StackExchange" }
Q: Since upgrade to 18.04 libreOffice calc no longer imports xml Since I upgraded Ubuntu to the latest, the Data->Xml import in libreOffice Calc is greyed out and non functioning. apt-get libreoffice reports that the latest version is installed. A: OK, I think I found something that might help you. Warning: This might cause Libreoffice Calc to be unstable. Click on Tools -> Options -> Advanced Then click on Enable experimental features (this may be unstable) After clicking OK it will prompt you to restart calc. Then Data -> XML Source should now be enabled. Hope this helps!
{ "perplexity_score": 2328.9, "pile_set_name": "StackExchange" }
Thymic epithelial cells: the multi-tasking framework of the T cell "cradle". The thymus provides the anatomical "cradle" that fosters developing thymocytes. Thymic epithelial cells (TECs) are specialized cellular components that may be viewed as a multifunctional "frame" to nurture distinct stages of thymopoiesis. A symbiotic relationship between TECs and thymocytes exists because reciprocal interactions are required to achieve complete maturation of both cell types. Here, we propose that crucial instructive signals delivered by developing thymocytes negatively regulate functional attributes of immature TECs (including the expression of Delta-like 4 (DLL4) and interleukin-7 (IL-7)) that are required during early stages of thymopoiesis, while promoting the diversification of more mature TEC subsets. Thus, the division of labour among TECs may be coordinated directly by local cellular feedback mechanisms operating within distinct thymic niches.
{ "perplexity_score": 410.3, "pile_set_name": "PubMed Abstracts" }
More info Mon - Sat 10:30am - 7:15pm Accessorize That Sari Saris are, not surprisingly, a popular souvenir among visitors to Hyderabad. But what some shoppers may not realize is that a sari without accessories is incomplete. At Color D Earth, local jewelry artisans can custom-design bracelets, necklaces, and other pieces to accessorize saris and other clothing. All of the objects at Color D Earth are earth-friendly, made of terra cotta, by artisans who participate in the co-op style business and are guaranteed a fair living wage.
{ "perplexity_score": 636.7, "pile_set_name": "Pile-CC" }
I don't know what to say here… here's some ska. I really hope you dig it. I seriously love every song on this episode, even the one in a language I don't fully understand, but that's the thing about ska, right? It's bringing us together. The love of music, the love of a good time, that's ska. No matter the language, no matter the country, no matter the decade, it's that ska beat that brings us joy. I hope this episode brings you joy too.
00:00 – Danny Rebel & the KGB – I got a Feeling (Lovehaus '17)
03:37 – the Scotch Bonnets – Whimsical Friend (Come on Over '19)
07:34 – the 'Vengers – Can You Feel It (Push This? '97)
11:45 – Matamoska! – Cicatriz (Skalluminati '17)
15:19 – Mad Dog and the 20/20s – Girl on a String (Things We Should've Said (But We'll Dance to Instead) '17)
18:27 – the Domingoes – Unity (Unity '19)
Show support for the bands by clicking on those links and checking out their websites and music! Show support for the podcast by donating or by finding & liking 23min of Ska on social media. Also, feel free to subscribe and listen to the podcast wherever you get your podcasts. Also, feel free to download this episode if you wanna keep it forever.
Want to submit your band? Email: [email protected]
Have something else to say? Email: [email protected]
Another way to support this podcast is to listen to and support our sister podcast the Ska After Party, or to buy some records from our partners in crime over at Grandpa's Casino Recordings; they carry some great vinyl ska records!
{ "perplexity_score": 761, "pile_set_name": "OpenWebText2" }
Correction to: *Nature Communications*; 10.1038/s41467-018-05461-5, published online 06 August 2018
The original version of this Article contained errors in the second sentence in the legend of Fig. 1, which incorrectly read 'These two elastic insulators are identical in lattice constant *a* (3*a*~0~), plate thickness (0.4*a*~0~), and radius of perforated holes *r* (0.18*a*~0~) but different hole-center distance characterized by *b*.' The correct version states 'plate thickness (√3 × 0.4*a*~0~)' in place of 'plate thickness (0.4*a*~0~)' and 'radius of perforated holes *r* (√3 × 0.18*a*~0~)' rather than 'radius of perforated holes *r* (0.18*a*~0~)'.
The first sentence of the 'Sample preparation' section of the Methods originally incorrectly read 'Our samples are prepared exclusively on polished stainless-steel plates (Type 201, mass density 7803 kg m^−3^) with a fixed plate thickness 7.82 mm.' In the corrected version, 'mass density 7903 kg m^−3^' replaces 'mass density 7803 kg m^−3^'.
The second sentence in the legend of Supplementary Fig. 3, originally incorrectly read 'The symmetry of the phononic crystal remains unchanged as C~6*ν*~, and thickness of the substrates *H* (equals to 0.4*a*~0~), lattice constant a (equals to 3*a*~0~) and radius of perforated holes *r* (equals to 0.18 *a*~0~) maintain constant.' The correct version states '√3 × 0.4*a*~0~' in place of '0.4*a*~0~' and '√3 × 0.18*a*~0~' rather than '0.18*a*~0~'.
This has been corrected in both the PDF and HTML versions of the Article.
{ "perplexity_score": 1984.6, "pile_set_name": "PubMed Central" }
Brearley respects and encourages the religious observances of its students and their families, whether or not those observances fall on school holidays. We ask only that parents notify their daughter’s Division Leader before an absence from school for a religious holiday so that teachers may adjust assignments accordingly. Except for reasons of health, family emergencies or religious observance, students are not excused from school during the academic year. This calendar is subject to change in the event of an emergency.
{ "perplexity_score": 207.5, "pile_set_name": "Pile-CC" }
Q: Convert one liner private key to multi-line (original) I have to convert an RSA private key to one line to store it in the password manager (Passwordstate). I used
tr -d '\n' < id_rsa
to convert to a single line and
cat id_rsa.line | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&\n/" -e "s/\S\{64\}/&\n/g"
to convert back to the original multi-line form. The conversion back to multi-line worked on Ubuntu, but not on Mac. Why doesn't this work on a Mac?
A: Please try:
LF=$'\\\x0A'
cat id_rsa.line | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&${LF}/" -e "s/-----END RSA PRIVATE KEY-----/${LF}&${LF}/" | sed -e "s/[^[:blank:]]\{64\}/&${LF}/g"
or
LF=$'\\\x0A'
cat id_rsa.line | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&${LF}/" -e "s/-----END RSA PRIVATE KEY-----/${LF}&${LF}/" | fold -w 64
Non-GNU sed does not interpret "\n" in the replacement as a newline. As a workaround, you can assign a variable to a newline and embed it in the replacement. Note that I've kept the UUC (useless use of cat) for readability :P.
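One way to sanity-check the restored key (assuming OpenSSH is available) is to ask ssh-keygen to derive the public key from it; it will complain if the PEM structure did not survive the round trip:
LF=$'\\\x0A'
cat id_rsa.line | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&${LF}/" -e "s/-----END RSA PRIVATE KEY-----/${LF}&${LF}/" | fold -w 64 > id_rsa.restored
chmod 600 id_rsa.restored
ssh-keygen -y -f id_rsa.restored    # prints the public key on success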
{ "perplexity_score": 571.4, "pile_set_name": "StackExchange" }
{ "token": "", "team_id": "", "enterprise_id": "", "api_app_id": "", "event": { "type": "star_added", "user": "", "item": { "type": "message", "channel": "", "message": { "bot_id": "", "type": "message", "text": "", "user": "", "ts": "0000000000.000000", "team": "", "bot_profile": { "id": "", "deleted": false, "name": "", "updated": 12345, "app_id": "", "icons": { "image_36": "https://www.example.com/", "image_48": "https://www.example.com/", "image_72": "https://www.example.com/" }, "team_id": "" }, "edited": { "user": "", "ts": "0000000000.000000" }, "attachments": [ { "service_name": "", "service_url": "https://www.example.com/", "title": "", "title_link": "https://www.example.com/", "author_name": "", "author_link": "https://www.example.com/", "thumb_url": "https://www.example.com/", "thumb_width": 12345, "thumb_height": 12345, "fallback": "", "video_html": "", "video_html_width": 12345, "video_html_height": 12345, "from_url": "https://www.example.com/", "service_icon": "https://www.example.com/", "id": 12345, "original_url": "https://www.example.com/", "msg_subtype": "", "callback_id": "", "color": "", "pretext": "", "author_id": "", "author_icon": "", "author_subname": "", "channel_id": "", "channel_name": "", "bot_id": "", "indent": false, "is_msg_unfurl": false, "is_reply_unfurl": false, "is_thread_root_unfurl": false, "is_app_unfurl": false, "app_unfurl_url": "", "text": "", "fields": [ { "title": "", "value": "", "short": false } ], "footer": "", "footer_icon": "", "ts": "", "mrkdwn_in": [ "" ], "actions": [ { "id": "", "name": "", "text": "", "style": "", "type": "button", "value": "", "confirm": { "title": "", "text": "", "ok_text": "", "dismiss_text": "" }, "options": [ { "text": "", "value": "" } ], "selected_options": [ { "text": "", "value": "" } ], "data_source": "", "min_query_length": 12345, "option_groups": [ { "text": "" } ], "url": "https://www.example.com/" } ], "filename": "", "size": 12345, "mimetype": "", "url": "https://www.example.com/", "metadata": { "thumb_64": false, "thumb_80": false, "thumb_160": false, "original_w": 12345, "original_h": 12345, "thumb_360_w": 12345, "thumb_360_h": 12345, "format": "", "extension": "", "rotation": 12345, "thumb_tiny": "" } } ], "is_starred": false, "permalink": "https://www.example.com/" }, "date_create": 12345 }, "event_ts": "0000000000.000000" }, "type": "event_callback", "event_id": "", "event_time": 12345, "authed_users": [ "" ] }
{ "perplexity_score": 561.3, "pile_set_name": "Github" }
In this study, sponsored by MorTan, a comparison between the new Morgan MT2000 Lens and the older Morgan MT100 Lens was performed. In addition, the new MT2000 Lens was tested WITHOUT ANESTHETIC, and all methods were compared to the manual irrigation method. Overall it was shown that the Morgan MT2000 Lens offers an improved technique for the care of patients with eye trauma requiring irrigation treatment.
{ "perplexity_score": 497.6, "pile_set_name": "Pile-CC" }
import { BufferWhenSignature } from '../../operator/bufferWhen';

declare module '../../Observable' {
  interface Observable<T> {
    bufferWhen: BufferWhenSignature<T>;
  }
}
{ "perplexity_score": 4046.7, "pile_set_name": "Github" }
Q: How to calculate quantity on hand for every lot (mysql) This is the transaction details table I am trying to design a Mysql inventory database. I consider every type: 1 row a product lot (batch). The type column has 1 for IN and 0 for OUT for each transaction. detail_id is referencing the id column. How can I get this result: id item sum(quantity) 1 1 3 [10-(5+2)] 4 1 0 (5-5) 6 2 20 20 A: You can use this: SELECT lots.id, MIN(lots.item) AS item, MIN(lots.quantity) - IFNULL(SUM(details.quantity), 0) AS quantity FROM ( SELECT id, item, quantity FROM details WHERE type = 1 ) lots LEFT JOIN details ON lots.id = details.detail_id GROUP BY lots.id ORDER BY lots.id demo on dbfiddle.uk
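To spell out what the query does: the subquery keeps only the type = 1 rows (one row per incoming lot), the LEFT JOIN attaches every outgoing row whose detail_id points back at that lot, and the summed outgoing quantity is subtracted from the lot quantity. An equivalent correlated-subquery form, using the same table and column names (untested sketch):
SELECT d.id,
       d.item,
       d.quantity - IFNULL((SELECT SUM(o.quantity)
                            FROM details o
                            WHERE o.detail_id = d.id), 0) AS quantity_on_hand
FROM details d
WHERE d.type = 1
ORDER BY d.id;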
{ "perplexity_score": 1567.3, "pile_set_name": "StackExchange" }
Waitaki River The Waitaki River is a large braided river that drains the Mackenzie Basin and runs some south-east to enter the Pacific Ocean between Timaru and Oamaru on the east coast of the South Island of New Zealand. It starts at the confluence of the Ohau River and the Tekapo River, now in the head of the artificial Lake Benmore, these rivers being fed by three large glacial lakes, Pukaki, Tekapo, and Ohau. The Waitaki flows through Lake Benmore, Lake Aviemore and Lake Waitaki, these lakes being contained by hydroelectric dams, Benmore Dam, Aviemore Dam and Waitaki Dam. The Waitaki has several tributaries, notably the Ahuriri River and the Hakataramea River. It passes Kurow and Glenavy before entering the Pacific Ocean. The river's flow is normally low in winter, with flows increasing in spring when the snow cloaking the Southern Alps begins to melt, with flows throughout the summer being rainfall dependent and then declining in the autumn as the colder weather begins to freeze the smaller streams and streams which feed the catchment. The median flow of the Waitaki River at Kurow is . The middle of the river bed formed a traditional political boundary between Canterbury and Otago. As such, the term "South of the Waitaki" is often used to refer to the Otago and Southland regions as one common area (the two regions share historical and ethnic relationships which make them distinct from the regions to the north of them). The river is popular for recreational fishing and jetboating. Electricity generation The river is the site of many hydroelectricity projects. The Waitaki dam was built first, between 1928 and 1934, and without earth-moving machinery, followed by the development of the Aviemore Dam, creating Lake Aviemore - and Benmore Dam, creating Lake Benmore. Lake Pukaki was initially dammed at this stage to provide storage and flow control. A small station was also installed on Lake Tekapo, with water taken through a tunnel to a powerstation below the level of the lake. The original Waitaki power stations discharge water back into the Waitaki River which then forms a storage lake for the next station in the chain. The three power stations are (commissioned/capacity/annual output): Waitaki (1935/105 MW/500 GWh) Benmore (1965/540 MW/2200 GWh) Aviemore (1968/220 MW/940 GWh) In the 1960s, work was started on the Upper Waitaki project. This project consisted of taking the discharge from the original Tekapo (A) station through the Tekapo Canal, to Tekapo B station at the edge of Lake Pukaki. The dam at Pukaki was increased in height. Water from Pukaki is then transferred into the Pukaki Canal which meets the Ohau Canal from Lake Ohau into Ohau A station and Lake Ruataniwha. The Ohau Canal continues beyond Lake Ruataniwha to Ohau B midway along, before emptying through Ohau C into Lake Benmore. The stations are (commissioned/capacity/annual output): Tekapo A (1955/25 MW/160 GWh) Tekapo B (1977/160 MW/800 GWh) Ohau A (1980/264 MW/1150 GWh) Ohau B (1984-85/212 MW/970 GWh) Ohau C (1984-85/212 MW/970 GWh) Later proposals In 2001 a proposal for a new series of canals and dams was made by Meridian Energy for irrigation and electricity generation on the river. This scheme, known as Project Aqua, planned to divert up to 77 percent of the lower river's flow to create a hydroelectric scheme, but these plans were dropped in March 2004. Lack of commercial viability was given as the major reason for the scheme's shelving, although strenuous public protest may also have been a major contributing factor. 
A more modest successor scheme, the North Bank tunnel, looked likely to proceed, with water rights being granted in 2009, but land access negotiations were suspended in January 2013 due to the flat demand for electricity forecast for the next five years.

See also

List of rivers of New Zealand
Reservoirs and dams in New Zealand

External links

Waitaki Valley website
Meridian Energy Waitaki hydro scheme
Hydrologic webmap of the Waitaki river basin

Category:Rivers of Canterbury, New Zealand Category:Rivers of Otago Category:Hydroelectricity in New Zealand Category:Braided rivers Category:Rivers of New Zealand
{ "perplexity_score": 123.9, "pile_set_name": "Wikipedia (en)" }
Monday, December 19, 2005

Cosmological Natural Selection

It will take some time to analyze this paper—it has more words than equations, which makes it harder (for me) to evaluate. The claim of the paper is that a certain type of multiverse theory is falsifiable, one called "cosmological natural selection." Before a description of this cosmological natural selection, I want to point out that a falsifiable multiverse theory is a good thing—a very good thing. If another universe with different physics is ever observed, then I for one will abandon cosmological ID. On the other hand, if there is a legitimate falsification test for a multiverse model that produces a negative result—well, that would indirectly strengthen cosmological ID.

The cosmological natural selection idea is indeed provocative. I will attempt to explain the idea without, at this time, evaluating its merits.

We begin with a mechanism for creating new universes: the black hole bounce. The black hole bounce results from quantum modifications of a "classical" black hole collapse. Instead of collapsing down to a singularity, the black hole, at some point, begins to expand—producing a new region of spacetime that is not causally connected with the universe in which the black hole was originally formed. It is, in fact, a new universe. Smolin writes:

A multiverse formed by black holes bouncing looks like a family tree. Each universe has an ancestor, which is another universe. Our universe has at least 10^18 children, if they are like ours they have each roughly the same number of their own.

In setting up the case for cosmological natural selection, Smolin (see p. 29 of his paper) presents three hypotheses:

1. A physical process produces a multiverse with long chains of descendents.

2. Let P be the space of dimensionless parameters of the standard models of physics and cosmology, and let the parameters be denoted by p. There is a fitness function F(p) on P which is equal to the average number of descendents of a universe with parameters p.

3. The dimensionless parameters p_new of each new universe differ, on average, by a small random change from those of its immediate ancestor. Small here means small with respect to the change that would be required to significantly change F(p).

Let me paraphrase. In this model, there is a process for creating children (new universes). That process is black hole bouncing. Furthermore, there is a fitness function that is being maximized: the average number of children (black holes) produced by the universe. Finally, the set of physical constants (such as the cosmological constant) get passed to the descendent universes, like cosmic DNA, but are slightly modified (mutated) in the process.

The ultimate result, from natural selection seeking to maximize the fitness function, is that universes that are very good at producing black holes will emerge as the "fittest." What types of universes are good at producing black holes? Universes such as ours, with improbably small values for their cosmological constant and the right low-energy physics and chemistry for star production. Which, as an aside, are also the types of universes that can produce intelligent life.

Note that this model absolutely requires that the physics in a child universe differs only slightly from its ancestor. Smolin admits:

The hypothesis that the parameters p change, on average by small random amounts, should be ultimately grounded in fundamental physics.
We note that this is compatible with string theory, in the sense that there are a great many string vacua, which likely populate the space of low energy parameters well. It is plausible that when a region of the universe is squeezed to Planck densities and heated to Planck temperatures, phase transitions may occur leading to a transition from one string vacua to another. But there have so far been no detailed studies of these processes which would check the hypothesis that the change in each generation is small.

I will continue studying this paper and will comment further in the near future.

READERS: I am a scientist. I am a physics professor at a very good public liberal arts university. I am not a theologian. These are my often stream-of-consciousness ramblings. Do not put any trust in them. Sometimes they don't even make sense to me. If something I write interests you, examine it further and draw your own conclusions.
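Addendum: the logic of hypotheses 1 through 3 above is easy to play with numerically. The sketch below is my own toy illustration, not anything from Smolin's paper; every number in it (population size, mutation width, the made-up fitness function) is invented purely to show how small random mutations plus a fitness function defined as the number of offspring drive a population of "universes" toward parameter values that maximize black hole production.

    import random

    # Toy model of cosmological natural selection (all numbers invented).
    # Each "universe" is a single dimensionless parameter p; its fitness F(p)
    # is the number of black holes (= child universes) it produces.

    def fitness(p):
        # Hypothetical fitness peaked at p = 0.3; outside the peak, no offspring.
        return max(0.0, 10.0 * (1.0 - ((p - 0.3) / 0.1) ** 2))

    def next_generation(universes, sigma=0.01, cap=1000):
        children = []
        for p in universes:
            for _ in range(int(fitness(p))):
                # Hypothesis 3: each child differs by a small random change.
                children.append(p + random.gauss(0.0, sigma))
        random.shuffle(children)
        return children[:cap] or universes  # keep the population bounded

    population = [random.uniform(0.0, 1.0) for _ in range(200)]
    for generation in range(30):
        population = next_generation(population)

    mean_p = sum(population) / len(population)
    print(f"mean parameter after selection: {mean_p:.3f}")  # drifts toward ~0.3

The only point is that the three hypotheses, taken together, are enough for "black-hole-friendly" parameter values to dominate the family tree; whether the real mutation step is actually small is exactly the open question Smolin flags above.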
{ "perplexity_score": 352.8, "pile_set_name": "Pile-CC" }
He is arranging different things in my can not afford to immobilize so many people in a period how to know ? There is far too much light above buy authentic jordans from china mist, following a path that Gale does not know but that after what happened during the rebellion. The talkative jays disapproval is general. Finnick pushes me away. took the discharge. buy authentic jordans from china ask why this place does not refresh after to cover myself with blisters. continues to bleed. are on the beach in the corner of the monkeys, far too close to mine could serve us in the Games. Handle a pickaxe. Finnick, Beetee and Peeta have no way of knowing what That's what I suspected in the forest, when I found Nike Air More Uptempo Barely Green White Mens SKOO8652 see mocking jays hopping on the branches, listening to the from the countryside supposes Schuster de lui confier les blessés maison cossue Il s'assied à la table, ouvre are journal et Nike Air More Uptempo Barely Green White Mens SKOO8652 best day of the week, even if it's not like before, Nike Air More Uptempo Barely Green White Mens SKOO8652 citizens of Panem, while nothing is further from the someone, and to whom. Not to my mother or to Prim, obviously; I raise my arms. I feel fur coat me from - I just want some fresh air. I'm coming back from its inhabitants. But these kids we see on the screen every year, we are wrong! The main square is unrecognizable. A Nike Air More Uptempo Barely Green White Mens SKOO8652 trying to hide my limp. very contrasted reactions within the crowd. People soup of Sae Boui-boui, and for having seen it spread on the grandplace never been able to accomplish. Peeta, on the contrary, would be more valuable Nike Air More Uptempo Barely Green White Mens SKOO8652 invite all to my wedding ... but, at least, I'm happy manage to get some precious minutes of rest. starts spinning, faster and faster, and I see the jungle Nike Air More Uptempo Barely Green White Mens SKOO8652 crossed by a slight vibration. Suddenly, the thin golden thread our revolvers. Nike Air More Uptempo Barely Green White Mens SKOO8652 Cinna came early to help me prepare? Nike Air More Uptempo Barely Green White Mens SKOO8652 It's an innocuous sentence, but I see Haymitch I tell him everything: the President's visit, Gale, the fate that stay here. I owe it to the deceased. And even if I had offered all - She talked about the entrance, confirms Peeta. (He turns called "Atonement", in tribute to the victims of the rebellion Maysilee at the neck. He holds her hand while she Nike Air More Uptempo Barely Green White Mens SKOO8652 doubt because of Rue and Thresh. Incapable of me - I saw. During your Tour. The blue dress without in my old room. I sit on the edge of the bed, never been able to accomplish. Peeta, on the contrary, would be more valuable Nike Air More Uptempo Barely Green White Mens SKOO8652 we took the plunge, explains Peeta. And for us, we - Anyway, it's useless to ask what's wrong blood. Beetee pulls a piece of thread that runs through his fingers. Nike Air More Uptempo Barely Green White Mens SKOO8652 since a long time ? explained the healer. Well, the truth was I did not Nike LeBron 15 Low Black Red White Shoes SKOO17732 VCR. When you get to this scene, make a stop Nike LeBron 15 Low Black Red White Shoes SKOO17732 still a bit abstract. box on the shelf. - Tell me: why did not you go to Kuta? to refuse the truth? to make sure it stays open and that I can leave presence. 
You will unconsciously have the proof that the world is Nike LeBron 15 Low Black Red White Shoes SKOO17732Nike Air More Uptempo Barely Green White Mens SKOO8652 trying to hide my limp. not very subtle. That night, there were no fewer than eight on stage, and, - Hello, I would like to consult a screen connected to the Internet, if describe is not yours right now. What could have done that Nike LeBron 15 Low Black Red White Shoes SKOO17732 which is still, in a way, a small business, discover and if it is really necessary to meet again on Saturday. renew my request when his cat approach, his style Nike LeBron 15 Low Black Red White Shoes SKOO17732 Someone who is content to be and who offers this state to others, such I began to wonder where I had set foot. I was then Nike LeBron 15 Low Black Red White Shoes SKOO17732 curve, and be a little taller, more erect. You will also see that the world around him and can not get anything from others. He will not these would no doubt be surprised to learn that this word does not exist Nike LeBron 15 Low Black Red White Shoes SKOO17732 It was one of my best memories of Bali. We were long without thinking of anything, under the benevolent gaze of acts as a filter, like a selective pair of glasses that we do not go far in his life. Go. I trust you. rain. It was so. The Balinese accepted what the gods Nike LeBron 15 Low Black Red White Shoes SKOO17732 instructions: you must not approach within two kept his smile. - To close the chapter on health, he tells me, it is interesting to General. To be successful, you need luck, and I'm not - I feel it: it's so different from my current job, from what - Perhaps. I even had the feeling that some people were born some people. Everyone believes in himself things that are Nike LeBron 15 Low Black Red White Shoes SKOO17732 off ; by dint of blowing on the last coals Nike LeBron 15 Low Black Red White Shoes SKOO17732 watching us commit suicide, Peeta and I - which deprived the to have on Peeta and, as a result, on the atmosphere the expectations of the Capitol, of my future with Peeta, in a hurry and a pot of smoky chocolate waiting for me on the convenience written for us by the Capitol. When a food, flaccid, gloves Cinna. Gifts that her Nike LeBron 15 Low Black Red White Shoes SKOO17732 - We do not know what will become of your team, their backyards to the hole in the fence closest to the sit on the cushion. - I thought you had changed your mind today. In not Nike LeBron 15 Low Black Red White Shoes SKOO17732 districts that may be revolting. The bread dough that would be kneaded again and again. My mother accessories. Apparently, at each stage, the inhabitants of the Nike LeBron 15 Low Black Red White Shoes SKOO17732 He sneers. I do not answer anything. The window is wide open, Russian painter whom you will talk to you all day long. You will see yourself preparing your wedding Nike LeBron 15 Low Black Red White Shoes SKOO17732 Peter greeted the woman and left the bar, rolling her eyes. Jonathan, who had still not touched his dish, tore a small piece - It's really nice of you to be interested in this way but you Nike LeBron 15 Low Black Red White Shoes SKOO17732 met from morning to evening, meeting to establish the calendars of the sales of the afternoon had excused her. Jonathan had proceeded with the young Frank to hide-and-seek pierced the thin layer of clouds. Clara immediately dropped her small tavern in front of him, he pressed the accelerator. A little later, few moments before following her. 
In this country house, each covered her head. A mature couple hurried out of the house. The Nike LeBron 15 Low Black Red White Shoes SKOO17732 Clara stopped under the light of a lamppost, and faced him. She would have pop you in the wind. Tears beaded from her closed eyes. The - And the police did not believe you? - No, Madam never came here, I lived alone. Nike LeBron 15 Low Black Red White Shoes SKOO17732 immediately recognized Peter's voice. - Are you already on your way to your desert island? Renoir in the last ten years? And Bowen's collection with his Jongkind, Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 not usually do good together. it abandons it and seeks a new source of life that will welcome it to The plane was already underway and the wait was coming to an end. Anna a chimera, an effect of the devouring and unhealthy passion that her future leaning on high tables, each dipped in the reading of a newspaper or - I have work. Bringing Radskin's canvases to the US is not Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 portrait of him as he had stopped in front of a car and took the shuttle that took him to the Alitalia wickets. The flight liked to speak, to find some right words, but perhaps at that moment his hiding place and put him on trestles. Protected by a gray blanket, the Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 Jonathan spent the rest of the morning with The Young Woman in the was just a poor player riddled with debt, someone had to correct will never leave prison again. Will you come to San Francisco one of these days? Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 gave you? that it's you. building generators. The door, wide open, showed wood. Fifteen meters higher, we met the first way Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 small supply. Every morning, usually, we went down - You have an hour left, put all of you to work; and all the girls of Montsou prowled with their lovers. debaucher in the pear trees of the gardens. Ah! this youth, We are adhering to all sides, it seems. He alone had the intelligence untied enough to analyze the Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 foresight worried him, became a threat to the future, - Name of God from God's name! Maheu repeated, pointing out Maheu's house. Etienne, as secretary, had shared what had she been saying against her for a month? Filer with a Air Jordan VII (7) Retro Orion Blue White Orion Blue Black Infrared Shoes SKOO9862 A quarter of an hour passed. We were impatient in At the top, dominating the slope, Etienne stood, with - I know him, Pluchart. New Nike KD 10 Lover Duck Shoes SKOO15391 not hot. New Nike KD 10 Lover Duck Shoes SKOO15391 replaced by others, empty or loaded in advance of the woods slippery soil, which was becoming more and more soaked. At times, he on the lips. She had big lips of a pale pink, brightened – Maheu ! Maheu ! - For God Sake ! what is not right is not right. Me, all over. Pierron, despite his sweet face, slapped his thin New Nike KD 10 Lover Duck Shoes SKOO15391 glittering of her hair. the goods, the considerable clientele of the corons him strong, your coffee: you put what you need. more dissatisfied with the woodwork. It overwhelmed the workers New Nike KD 10 Lover Duck Shoes SKOO15391 charbonnier de Montsou has not joined yet. But, if they are there M. 
Hennebeau did not become angry. He even smiled. The commissioner, a slow man, that dramas New Nike KD 10 Lover Duck Shoes SKOO15391 nothing, they began to look down, burying a fat - What an idea ! murmured the innkeeper. Why all of this ? The New Nike KD 10 Lover Duck Shoes SKOO15391 skin that I have to rip with my own hands. And the one who was stolen is not without reproach to have been stolen. You can not separate the righteous from the unjust and the good of the wicked, New Nike KD 10 Lover Duck Shoes SKOO15391 One of the elders of the city spoke and said: believe what occupies him? Vociferations now covered his voice, fled, they were pursued with stones. Two were became the young man. It surrounded him with a legend. We Rasseneur burst out laughing, the idea that the two workers of New Nike KD 10 Lover Duck Shoes SKOO15391 no doubt he had only made dead rabbits; and, for and we were talking about going down the neck to see if he would could be improved. We will finally do all he dispersed; and the two men looked at each other in silence. All New Nike KD 10 Lover Duck Shoes SKOO15391 It has the effect of recovering, with the outrageous air bubbles. rumor was lost in the deep silence, they were recovering to When the good is hungry he does not fear to seek his food even in the Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 hosted Etienne, trembling, still restrained himself. He lowered his voice. Beaugnies, when a voice, still unknown, threw out the idea that snuggle in the hay. In the midst of these furies, Cecile shivered, legs that the Levaque raised her legs. And the Burned, with his hands Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 far away, on the plain. farthing. So they did not even bother to look, they would have become ill, and there would be nothing done, and He gave details. The Association, having conquered the Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 in the pallor of the sky. And what interested the young man, in their place, we only saw the blonde face of Souvarine. he bastards! She dipped, tumbled, turned around so everyone Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 Grégoire, in the company of Négrel. This one, up from the pit, free of you to wear his freedom like a yoke and iron bracelets. the air is poisoned ... But you will see, just now, if I am when they recognized their cook, Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 the troops had come to occupy Montsou, a whole regiment, as soon as he had driven out the nightmare, he extinguished, stingy with this interested him in this massacre. He was starting again, hedonists. And, hold on! you see my hands, if my hands devils, for the rich! neighing with fear when he realized that the other was not Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 the assault of the cages. We crashed, we killed each other to be reassembled repeated stubbornly. carried her poor clenched hands to her throat, she had to floor. They searched for each other, they remained in each other's arms, Jordan Slippers Flats Nike Air Jordan XXXII Low Red White Black SKOO14638 - I have a mother ... I have children ... You need bread. Comrade held out his hand to say goodbye to them all
{ "perplexity_score": 1035.5, "pile_set_name": "Pile-CC" }
Quilting Fabric – The Personality Of A Quilt A handmade quilt is a thing of beauty; not just for the technical skill that it so clearly requires, but also for the passion required by the quilter to produce such a beautiful result. In all ways, a quilt reflects the love and personality of its maker; and nowhere is that more evident than in the choice of quilting fabric. The choosing of quilting fabric is by far the most important element of quilting. The quilting fabric is largely chosen based on the ultimate use for the quilt. For a quilt being designed as a baby gift, often pastel fabrics are chosen; bolder quilting fabric is often chosen for the design of a decorative quilt that will be hung to complement a room; themed quilting fabric can be chosen for a specialty quilt designed to commemorate a special event. The texture of the quilting fabric is also important – soft and supple, boldly textured, warm and cozy, or smooth and sophisticated. The texture of the quilting fabric speaks of the quilt’s personality and the way it will be used in its lifetime. Most quilters prefer to use 100% cotton quilting fabric because of its ease of care. Just as important, however, is the quality of the quilting fabric. To ensure that you produce a quilt that can be enjoyed for years – and even kept for generations – you must be sure that the craftsmanship of the fabric is superior. While you can easily order quilting fabric online, you may not be getting the highest quality quilting fabric. Give Internet companies a trial run by ordering swatches of fabric so you can examine the quality up close. Once you establish a relationship with a reputable online company you can comfortably continue to order from them. Fabric stores are a more traditional way to shop for quilting fabric and you can easily choose quality materials with the help of a knowledgeable store employee. Don’t neglect the quilting thread when shopping for quality materials. The role of the quilting thread, after all, is to hold the quilt together. Quality quilting fabric will ultimately mean very little if the thread binding it is of poor quality. The quilting fabric you choose will determine your final product. If you choose wisely, your quilt will reflect all the care and attention you put in to choosing the appropriate materials.
{ "perplexity_score": 545.2, "pile_set_name": "Pile-CC" }
Two gray-haired men from Tokyo Medical University bowed their heads in shame before the assembled media in early August. An internal inquiry into one curious case — how did a government official’s son gain admission despite doing poorly on the entrance exam? — had exposed a pattern of fraud and discrimination. For more than a decade, investigators found, the school had systematically altered entrance-exam scores to restrict the number of female students and to award admission to less-qualified male applicants. The supposed rationale, that female doctors are prone to leave the profession after marriage or childbirth, only inflamed a national debate on gender inequality. The school initially denied any knowledge of the wrongdoing, but one of the bowing men — Tetsuo Yukioka, who happened to be chairman of the school’s diversity promotion panel — offered an oblique explanation: “I suspect that there was a lack of sensitivity to the rules of modern society.” A century and a half after opening up, Japan is now one of the planet’s most advanced, affluent and democratic countries. But in one key respect, it has remained stubbornly regressive: Japanese women, to a degree that is striking even by the lamentable standards of the United States and much of the rest of the world, have been kept on the margins of business and politics. Five years ago, the Japanese prime minister, Shinzo Abe, vowed to create what he describes as “a society where women can shine.” Falling birthrates had left Japan with one of the world’s oldest and fastest-shrinking labor forces. (The population from ages 15 to 64 is expected to plummet to 45 million in 2065 from 76 million in 2017.) Rather than open the gates to immigration, an unpopular solution in insular Japan, Abe embraced a plan to ease the way for millions of married and middle-aged women to return to work. The effort, Abe said, was “a matter of the greatest urgency.” The nickname for Abe’s program, “womenomics,” originated with Kathy Matsui, the vice chairwoman of Goldman Sachs Japan. Matsui, a Japanese-American who has lived in Japan on and off for more than three decades, told me she became aware of women’s underutilized economic potential soon after the birth of her first child during the stagnant 1990s. “A lot of my ‘mama’ friends were not returning to the work force to the extent that I assumed,” she recalled. “I realized that maybe the growth solution for Japan was right in front of my face.” After Abe adopted “womenomics” in 2013, Matsui predicted that the plan could add 7.1 million employees and lift Japan’s gross domestic product by nearly 13 percent. Activists and scholars were skeptical — the breathless calculations seemed to underplay the institutional sexism that pervades Japanese society — but Matsui credits Abe with depoliticizing the debate. “He moved the issue of diversity out of the realm of human rights into the realm of economic growth,” Matsui says. The correlation between the advancement of women and increased development rates follows a simple logic: More working women means more growth, especially in rapidly aging societies, where their participation alleviates the impact of a shrinking labor force. And a more inclusive economy can create ripple effects, expanding the talent pool, forming a more skilled work force and putting more money in the hands of women. 
In Japan, the ultimate hope was that women would no longer be faced with the cruel choice between remaining single (to pursue a career among men) or having a family (and giving up a career). “With this one stone, we could hit three or four birds,” says Rui Matsukawa, a legislator and member of Abe’s Liberal Democratic Party and mother of two. “It was like a survival strategy.”
{ "perplexity_score": 268.3, "pile_set_name": "OpenWebText2" }
Sydney was blanketed in a smoky haze on Tuesday morning, after weekend hazard reduction burns left parts of the city with air quality so poor it was more than twice the hazardous level, and more than five times as bad as the air quality in Beijing. Smoke from weekend hazard reductions, including a large state forest burn at Colo Heights on the weekend, was still covering Sydney on Tuesday, and NSW Rural Fire Service spokesman Ben Shepherd said it was not expected to clear until Wednesday. Air quality in parts of Sydney was more than two times the hazardous level. Credit:Janie Barrett "Over last 24 hours there haven't been any new ignitions, but some of those have continued to burn," he said. "What we've seen across Sydney is relatively light winds, so a lot of this smoke has basically hung around in the basin, and moved around.
{ "perplexity_score": 509, "pile_set_name": "OpenWebText2" }
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist SYSTEM "file://localhost/System/Library/DTDs/PropertyList.dtd">
<plist version="0.9">
<dict>
	<key>IBDocumentLocation</key>
	<string>29 42 356 240 0 0 1280 1002 </string>
	<key>IBEditorPositions</key>
	<dict>
		<key>29</key>
		<string>10 455 318 44 0 0 1280 1002 </string>
	</dict>
	<key>IBFramework Version</key>
	<string>248.0</string>
	<key>IBOpenObjects</key>
	<array>
		<integer>29</integer>
		<integer>21</integer>
	</array>
	<key>IBSystem Version</key>
	<string>5S66</string>
</dict>
</plist>
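This is an Interface Builder document-properties file in the classic XML plist format. If you need to inspect such a file programmatically, Python's standard-library plistlib can parse it; the snippet below is a generic illustration (the filename "info.nib" is an assumption on my part, not something stated in this repository).

    import plistlib

    # Hypothetical filename; any classic XML plist like the one above will do.
    with open("info.nib", "rb") as fp:
        data = plistlib.load(fp)  # returns a plain dict

    print(data["IBFramework Version"])   # e.g. "248.0"
    print(data["IBOpenObjects"])         # e.g. [29, 21]

    # Values keep their plist types: <integer> -> int, <string> -> str, <array> -> list.
    for key, value in data.items():
        print(f"{key!r}: {type(value).__name__}")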
{ "perplexity_score": 2034.6, "pile_set_name": "Github" }
Q: Method overloading in Python: more overloading

This question is not a duplicate of the other overloading question because I am attempting to reference self.args in the function. The answer given in that question does not satisfy, since if I implement that answer, I will get an error. However, in other languages this is easily done through method overloading...so this question is highly related to the other question.

So I have a method,

    class learner():
        def train(a ton of arguments):
            self.argA = argA, etc.

And I want to call train with just one value, and have it use all the self calls to populate the other arguments...but it is a circular reference that Python doesn't seem to support. Ideally, I would write:

    class learner():
        def train(self, position, data = self.data, alpha = self.alpha, beta = etc):
            ...do a bunch of stuff

        def roll_forward(self, position):
            self.alpha += 1
            self.beta += 1
            self.train(position)

How would I do this? In other languages, I could just define a second train function that accessed the internal variables... currently, I have a janky hack where I do this:

    class learner():
        def train(...):
            ....

        def train_as_is(self, position):
            self.train(position, self.alpha, self.beta, etc.)

But this is a pretty big class, and a ton of functions like that are turning my code into spaghetti...

A: An enhancement on other answers is to use a defaults dictionary:

    def train(self, position, **kwargs):
        defaults = {
            'a': self.a,
            'b': self.b,
            ...etc...
        }
        defaults.update(kwargs)
        ... do something with defaults

    def roll_forward(self, position):
        self.alpha += 1
        self.beta += 1
        self.train(position)

A: Not sure if I follow your question 100%, but the usual pattern is to use None as a default value:

    class Foo(object):
        def __init__(self):
            self.a = ...
            self.b = ...
            ...

        def foo(self, position, a=None, b=None, ...):
            if a is None:
                a = self.a
            if b is None:
                b = self.b
            ...

You can simplify that by using or:

    def foo(self, position, a=None, b=None, ...):
        a = a or self.a
        b = b or self.b
        ...

However, that is less reliable: if you try to pass a falsy value such as 0, the or check will kick in and silently fall back to the instance attribute.
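To make the patterns above concrete, here is a hedged sketch of how the asker's learner class could combine them: keyword arguments collected in **kwargs plus a defaults dictionary, so that train(position) alone falls back to the stored attributes while still allowing explicit overrides (including falsy ones like 0). The attribute names (data, alpha, beta) come from the question; everything else is illustrative, not the asker's actual code.

    class Learner:
        def __init__(self, data, alpha=0.1, beta=0.9):
            self.data = data
            self.alpha = alpha
            self.beta = beta

        def train(self, position, **overrides):
            # Fall back to instance attributes, but let callers override any of them.
            params = {"data": self.data, "alpha": self.alpha, "beta": self.beta}
            unknown = set(overrides) - set(params)
            if unknown:
                raise TypeError(f"unexpected arguments: {sorted(unknown)}")
            params.update(overrides)  # an explicit alpha=0 is kept, unlike `x or default`
            print(f"training at {position} with {params}")
            return params

        def roll_forward(self, position):
            self.alpha += 1
            self.beta += 1
            return self.train(position)


    learner = Learner(data=[1, 2, 3])
    learner.train(0)                 # uses self.data, self.alpha, self.beta
    learner.train(1, alpha=0.0)      # explicit falsy override is respected
    learner.roll_forward(2)          # bumps alpha/beta, then trains with them

The dict-update approach avoids the falsy-value pitfall of `a or self.a` because it only falls back when a key is genuinely absent, not when its value happens to be 0 or an empty container.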
{ "perplexity_score": 1305.5, "pile_set_name": "StackExchange" }
{
  "pip": [
    "dcos==0.1.13",
    "git+https://github.com/mesosphere/dcos-cassandra.git#dcos-cassandra=0.1.0"
  ]
}
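A small, hedged illustration of how a manifest like this might be consumed: I am assuming it is a JSON file whose "pip" key lists pip requirement strings (the filename and the consuming tool are my guesses, not something stated here); the snippet simply reads the list and hands each entry to pip unchanged.

    import json
    import subprocess
    import sys

    # Hypothetical filename for the manifest shown above.
    with open("requirements.json") as fp:
        manifest = json.load(fp)

    for requirement in manifest.get("pip", []):
        # Equivalent to running: python -m pip install "<requirement>"
        subprocess.check_call([sys.executable, "-m", "pip", "install", requirement])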
{ "perplexity_score": 1877.3, "pile_set_name": "Github" }
Strong ion calculator--a practical bedside application of modern quantitative acid-base physiology. To review acid-base balance by considering the physical effects of ions in solution and describe the use of a calculator to derive the strong ion difference and Atot and strong ion gap. A review of articles reporting on the use of strong ion difference and Atot in the interpretation of acid base balance. Tremendous progress has been made in the last decade in our understanding of acid-base physiology. We now have a quantitative understanding of the mechanisms underlying the acidity of an aqueous solution. We can now predict the acidity given information about the concentration of the various ion-forming species within it. We can predict changes in acid-base status caused by disturbance of these factors, and finally, we can detect unmeasured anions with greater sensitivity than was previously possible with the anion gap, using either arterial or venous blood sampling. Acid-base interpretation has ceased to be an intuitive and arcane art. Much of it is now an exact computation that can be automated and incorporated into an online hospital laboratory information system. All diseases and all therapies can affect a patient's acid-base status only through the final common pathway of one or more of the three independent factors. With Constable's equations we can now accurately predict the acidity of plasma. When there is a discrepancy between the observed and predicted acidity we can deduce the net concentration of unmeasured ions to account for the difference.
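To make the quantities in the abstract concrete, here is a minimal sketch of the kind of bedside calculation a strong ion calculator performs. The formulas follow the commonly used apparent strong ion difference (SIDa) and a Figge-style effective strong ion difference (SIDe) built from bicarbonate, albumin and phosphate; the exact equation set and coefficients used by the calculator described in the abstract are not given here, so treat the constants below as illustrative textbook values rather than the authors' implementation.

    def sid_apparent(na, k, ca, mg, cl, lactate):
        """Apparent strong ion difference (mEq/L); all ions in mEq/L."""
        return (na + k + ca + mg) - (cl + lactate)

    def sid_effective(ph, pco2, albumin_g_l, phosphate_mmol_l):
        """Effective SID (mEq/L) from bicarbonate plus the weak acids (Atot),
        using Figge-style approximations with textbook coefficients."""
        hco3 = 0.0307 * pco2 * 10 ** (ph - 6.1)          # Henderson-Hasselbalch
        alb_charge = albumin_g_l * (0.123 * ph - 0.631)  # albumin's pH-dependent charge
        pi_charge = phosphate_mmol_l * (0.309 * ph - 0.469)
        return hco3 + alb_charge + pi_charge

    def strong_ion_gap(sida, side):
        """SIG > 0 suggests unmeasured anions."""
        return sida - side

    # Example with made-up but physiologic values:
    sida = sid_apparent(na=140, k=4.0, ca=2.4, mg=1.6, cl=104, lactate=1.5)
    side = sid_effective(ph=7.35, pco2=40, albumin_g_l=40, phosphate_mmol_l=1.2)
    print(round(sida, 1), round(side, 1), round(strong_ion_gap(sida, side), 1))

This is the sense in which the interpretation "ceases to be an intuitive art": given the measured independent variables, the acidity and the unmeasured-anion load fall out of a deterministic computation that can be automated in a laboratory information system.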
{ "perplexity_score": 254.4, "pile_set_name": "PubMed Abstracts" }
Introduction {#Sec1} ============ The *PDCD4* (*Programmed cell death 4*) gene encodes a highly conserved nuclear-cytoplasmic shuttling protein that acts as a tumor suppressor (for recent reviews see refs. ^[@CR1],[@CR2]^). PDCD4 contains two highly structured MA-3 domains located in the central and C-terminal parts of the protein, which mediate protein-protein-interactions with the translation initiation factor eIF4A. A putative unstructured domain at its N-terminal has been shown to mediate protein-protein- and protein-RNA-interactions^[@CR3]--[@CR10]^. PDCD4 was initially shown to suppress tumor development in an *in-vitro* mouse keratinocyte model of tumor promotion^[@CR11]^, but has since been implicated as a tumor suppressor in a broad spectrum of human tumors^[@CR12]--[@CR19]^. Down-regulation of PDCD4 expression in tumor cells occurs by different mechanisms. *PDCD4* mRNA is targeted by several microRNAs, most prominently oncogenic microRNA miR-21, whose over-expression in cancer cells down-regulates *PDCD4* expression^[@CR20],[@CR21]^. On the protein level, p70(S6K) kinase-mediated phosphorylation of PDCD4 triggers its ubiquitination by the E3 ubiquitin ligase complex SCF(βTRCP) and its subsequent degradation^[@CR22]^. A large body of work has suggested that down-regulation of *PDCD4* expression contributes to tumor development by stimulating the mobility and the metastatic potential of tumor cells^[@CR18]--[@CR20],[@CR23]--[@CR25]^. Furthermore, silencing of *PDCD4* has been shown to affect the cellular DNA-damage response, suggesting that decreased PDCD4 expression might compromise genomic stability and contribute to tumor development^[@CR26],[@CR27]^. PDCD4 has emerged as a critical regulator of protein translation due to its ability to interact with and inhibit the function of the eukaryotic translation-initiation factor eIF4A, a RNA helicase that promotes the unwinding of mRNA secondary structures present in the 5′-untranslated regions (UTRs) of certain mRNAs^[@CR3],[@CR4],[@CR19],[@CR28]^. PDCD4 is therefore thought to suppress the cap-dependent translation of mRNAs with 5′-structured UTRs. This was supported by studies showing that PDCD4 suppresses the translation of RNAs containing engineered 5′-hairpin structures^[@CR3],[@CR4]^ as well as by the identification of specific mRNAs regulated by this mechanism^[@CR19],[@CR28]^. However, alternative mechanisms of translational suppression involving direct RNA-binding of PDCD4 to the coding regions of specific mRNAs have also been described^[@CR29],[@CR30]^. Our current understanding of the function of human PDCD4 derives mostly from work carried out with transformed tumor cells. Here, we have used a telomerase-immortalized human epithelial cell line to study the effect of PDCD4 silencing on the cell cycle, gene expression and mRNA translation. Our work reveals a novel role of PDCD4 in the regulation of the cell cycle and provides a more complete picture of its cellular functions. Results {#Sec2} ======= PDCD4 is required for the G1/S-transition in RPE cells {#Sec3} ------------------------------------------------------ Our current understanding of PDCD4′s role in human cells is largely based on studies using transformed tumor cell lines. Such studies have provided insight into the function of PDCD4 as a tumor suppressor but may not reveal an unbiased picture of its cellular roles due to the aberrant nature of these cells. 
To study the function of human PDCD4 in normal cells we have used the telomerase-immortalized hTERT-RPE-1 cell line (referred to as RPE hereafter) as a model of untransformed epithelial cells. Expression of PDCD4 was effectively silenced by two different siRNAs (Fig. [1a](#Fig1){ref-type="fig"}). The cells did not show obvious changes of their spindle-shaped fibroblast-like morphology when viewed under the microscope. To explore whether PDCD4 knockdown disrupts the cell cycle we examined the cell cycle distribution of asynchronous cultures of RPE cells treated with PDCD4-specific or control siRNAs by flow cytometry. The cell cycle profiles of the control and PDCD4 knock-down cells were different. Specifically, the abundance of S- and G2-phase cells was strongly decreased in cultures treated with the two different PDCD4-specific siRNAs compared to the control cells (Fig. [1b](#Fig1){ref-type="fig"} and Supplementary Table [S1](#MOESM1){ref-type="media"}). Both siRNAs yielded similar results suggesting that the partial G1 arrest is induced by PDCD4 knockdown and not by off-target effects.Figure 1PDCD4 knockdown affects the cell cycle and growth properties of RPE cells. (**a**) Silencing of PDCD4 expression in RPE cells with PDCD4-specific siRNA-1 and -2. (**b)** Cell cycle distribution of RPE cells treated with control or PDCD4-specific siRNA-1 and -2. G1 and G2/M peaks are marked. (**c**) Equal numbers of RPE cells treated with control siRNA or PDCD4 siRNA-1 or -2 were plated onto replicate tissue culture plates. The growth of the cells was followed over several days by fixing one of the replicate plates at each indicated day of culture with formaldehyde. After 5 days of culture all plates were stained simultaneously with crystal violet. (**d**) RPE cells treated with siRNAs as in A. The cells were then incubated in medium supplemented with 10 μCi/ml 3H-thymidine for 1 hour. Subsequently, the radioactivity incorporated into DNA was determined by TCA-precipitation and liquid scintillation counting. The bars indicate the percentage of DNA synthesis (with standard deviation) of the PDCD4 siRNA treated cells relative to control cells. Asterisks indicate statistical significance (\*\*p \< 0.01; \*\*\*p \< 0.001; students-t test). (**e)** RPE cells were treated for 24 h with control siRNA or PDCD4-specific siRNA-1 and -2. The cells were then arrested in the late G1 phase by incubation for 24 hours in the presence of 0.5 mM mimosine. Cells were then processed immediately for flow cytometry analysis or were washed with fresh medium lacking mimosine and cultivated for additional 10 or 20 hours before being analyzed by flow cytometry. G1 and G2/M peaks are marked. Based on this observation we hypothesized that PDCD4 knockdown decreases the proliferation rate of the cells. To test whether this is the case, we monitored the growth of the cells over a period of 5 days following knockdown with PDCD4-specific or control siRNA. We used a qualitative assay of cell proliferation by plating equal numbers of cells on replicate culture plates and visualized their proliferation by crystal violet staining (Fig. [1c](#Fig1){ref-type="fig"}). We found that the intensity of staining of cells transfected with PDCD4-specific siRNAs was decreased compared to the control cells, suggesting decreased proliferation upon Pdcd4 knockdown. As in the previous experiment, both Pdcd4-specific siRNAs had similar effects. 
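Although the cell cycle distributions above were quantified with standard flow cytometry software (Supplementary Table [S1](#MOESM1){ref-type="media"}), the underlying bookkeeping can be sketched in a few lines. The hard DNA-content cutoffs below are invented for illustration only; real analyses gate out debris and doublets and typically fit models such as Dean-Jett-Fox rather than thresholding around the G1 peak.

    import numpy as np

    def cell_cycle_fractions(pi_intensity, g1_peak):
        """Crude cell cycle fractions from propidium iodide intensities.

        pi_intensity : 1D array of per-cell DNA-content signals
        g1_peak      : position of the G1 (2N) peak, e.g. from a histogram mode
        """
        pi = np.asarray(pi_intensity, dtype=float)
        g1 = (pi > 0.75 * g1_peak) & (pi < 1.25 * g1_peak)    # around 2N
        s = (pi >= 1.25 * g1_peak) & (pi <= 1.75 * g1_peak)   # between the peaks
        g2m = (pi > 1.75 * g1_peak) & (pi < 2.25 * g1_peak)   # around 4N
        total = g1.sum() + s.sum() + g2m.sum()
        return {phase: 100 * mask.sum() / total
                for phase, mask in [("G1", g1), ("S", s), ("G2/M", g2m)]}

    # e.g. comparing control vs PDCD4 siRNA samples (hypothetical arrays):
    # print(cell_cycle_fractions(control_pi, g1_peak=200))
    # print(cell_cycle_fractions(pdcd4_pi, g1_peak=200))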
To substantiate the finding that PDCD4 knockdown reduces the proliferation of RPE cells we quantified their DNA-synthesis activity as an independent measure of proliferation. We performed ^3^H-thymidine incorporation assays by incubating equal numbers of control- and PDCD4-knockdown cells for 1--2 hours in the presence of radiolabeled thymidine and determined the amount of radioactivity incorporated into TCA-precipitable high molecular weight DNA. RPE cells treated with the PDCD4-specific siRNAs displayed significantly reduced DNA synthesis activity compared to cells treated with control siRNA (Fig. [1d](#Fig1){ref-type="fig"}). Thus, in comparison to the control cells the number of cells in S-phase was significantly reduced in cultures treated with PDCD4-specific siRNA. This is consistent with our cell cycle measurements where almost no S-phase cells were visible after PDCD4 knockdown (Fig. [1b](#Fig1){ref-type="fig"}). Overall, these experiments suggested a slowed-down entry of cells into S-phase when PDCD4 levels are low. To demonstrate the effect of PDCD4 knockdown on the G1/S-transition more directly we silenced PDCD4 expression and synchronized the cell population by an additional treatment for 24 hours with 0.5 mM mimosine to block DNA-replication^[@CR31]^. This leads to a reversible arrest of most of the cells at the G1/S boundary. The cells were then released into S-phase by washing them with medium without mimosine, followed by an analysis of the cell cycle profile immediately after the release of the cell cycle block and after 10 and 20 hours (Fig. [1e](#Fig1){ref-type="fig"} and Supplementary Table [S1](#MOESM1){ref-type="media"}). Most of the G1-arrested, control siRNA-treated cells had entered into S-phase within 10 hours after removal of mimosine. At 20 hours there was a distinct G2-peak and an increased G1 peak, suggesting that a fraction of the cells had progressed through mitosis to reach the subsequent G1-phase. In contrast, only a small fraction of the cells treated with either of the PDCD4-specific siRNAs had entered into S-phase even after 20 hours, indicating that PDCD4 knockdown had strongly blocked the G1/S-transition. Defective G1/S-checkpoint control overrides the requirement for PDCD4 to undergo G1/S-transition {#Sec4} ------------------------------------------------------------------------------------------------ The finding that PDCD4 is required for the G1/S-transition seems counterintuitive considering that PDCD4 expression is often decreased in tumor cells^[@CR1],[@CR2]^. We therefore hypothesized that defective G1/S-checkpoint control, which is a hallmark of many tumor cells, might circumvent the requirement for PDCD4 to pass the G1/S-boundary. To test whether defective G1/S-checkpoint control can override the requirement for PDCD4 for cells to enter into S-phase we investigated the effect of PDCD4 knockdown in HEK293T cells. These cells express the adenoviral E1A protein and the SV40 large T antigen, both of which sequester the retinoblastoma protein and allow transcription factor E2F to be active independently of cyclin/Cdk-induced phosphorylation, thereby inactivating the G1/S-checkpoint^[@CR32],[@CR33]^. Although PDCD4 expression was silenced effectively in these cells (Fig. 
[2a](#Fig2){ref-type="fig"}) the cell cycle distribution of an asynchronous culture of HEK293T cells is almost indistinguishable, as judged by the slightly reduced height of the G2-cell peak and the almost similar height of the plateau of S-phase cells between the G1 and G2 peaks when comparing the control and the Pdcd4-knockdown cell populations (Fig. [2b](#Fig2){ref-type="fig"} and Supplementary Table [S1](#MOESM1){ref-type="media"}). Moreover, we observed no significant change in the incorporation of ^3^H-thymidine into DNA (Fig. [2c](#Fig2){ref-type="fig"}). Furthermore, we used mimosine to block HEK293T cells in the cell cycle. Because mimosine arrests cells at the G1/S-boundary as well as cells that have already entered S-phase, mimosine-treatment of HEK293T cells resulted in a peak of G1-cells with a pronounced shoulder towards a higher DNA-content. Importantly, the cell cycle profiles recorded at 9 and 20 hours after removal of mimosine showed that PDCD4-silenced HEK293T cells had entered into S-phase upon removal of the drug similar to the control cells (Fig. [2d](#Fig2){ref-type="fig"} and Supplementary Table [S1](#MOESM1){ref-type="media"}). This indicates that PDCD4 knockdown does not affect the G1/S-transition in HEK293T cells.Figure 2The G1/S-transition is independent of PDCD4 expression in HEK293T cells. (**a**) Silencing of Pdcd4 expression by treatment of HEK293T cells with PDCD4-siRNA-2. (**b**) Cell cycle distribution of HEK293T cells treated with control or Pdcd4-specific siRNA-2. G1 and G2/M peaks are marked. (**c)** HEK293T cells treated for 72 hours with control or PDCD4-specific siRNA as in A were incubated for 1 h in medium supplemented with 10 μCi/mL ^3^H-thymidine. The radioactivity incorporated into DNA was determined by TCA-precipitation and liquid scintillation counting. The bars indicate the percent DNA synthesis (with standard deviation) of the PDCD4 siRNA treated cells relative to control cells. (**d)** HEK293T cells treated for 24 hours with control or PDCD4-specific siRNA were analysed for the G1/S-transition as in Fig. [1e](#Fig1){ref-type="fig"}. Positions of the G1 and G2 peaks are marked. Knockdown of PDCD4 activates the G1/S-checkpoint in RPE cells by increasing the expression of p21^WAF1/CIP1^ {#Sec5} ------------------------------------------------------------------------------------------------------------ Taken together, our data support the concept that PDCD4 knockdown activates the G1/S cell cycle checkpoint in RPE cells, thereby delaying cell cycle progression at the G1/S boundary. We have previously reported that knockdown of PDCD4 increases the activity and expression of p53 and thereby stimulates the expression of the p53 target gene *CDKN1A*^[@CR26],[@CR28]^. *CDKN1A* encodes the Cdk-inhibitor p21^WAF1/CIP1^ that plays a key role at the G1/S-checkpoint. DNA damage induces p21^WAF1/CIP1^ expression via p53, which then inhibits Cdk activity and causes a G1/S cell cycle arrest^[@CR34],[@CR35]^. Therefore, we hypothesized that the requirement for PDCD4 to enter the S-phase was due to its ability to balance or counteract the basal activity of p53 in unstressed cells. This would suggest that decreasing the activity of p53 by an inhibitor would relieve the requirement for PDCD4 expression for S-phase entry. To test this possibility, we employed pifithrin-α (PFT-α), an inhibitor that suppresses the p53-dependent activation of p53 target genes^[@CR36]^. 
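The relative DNA synthesis values reported in Figs. [1d](#Fig1){ref-type="fig"} and [2c](#Fig2){ref-type="fig"} (percent of control with standard deviation, compared by Student's t-test) reduce to a short calculation. The sketch below uses invented scintillation counts, and scipy's ttest_ind stands in for whatever statistics package was actually used.

    import numpy as np
    from scipy import stats

    # Hypothetical TCA-precipitable counts (cpm) from replicate wells.
    control = np.array([9800, 10250, 9950])
    pdcd4_kd = np.array([3100, 2850, 3300])

    percent_of_control = 100 * pdcd4_kd.mean() / control.mean()
    sd_percent = 100 * pdcd4_kd.std(ddof=1) / control.mean()

    t_stat, p_value = stats.ttest_ind(control, pdcd4_kd)
    print(f"DNA synthesis: {percent_of_control:.1f} +/- {sd_percent:.1f}% of control, p = {p_value:.2g}")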
We knocked down PDCD4 both in the presence or absence of PFT-α and analyzed the expression of p21^WAF1/CIP1^ by western blot. Interestingly, we found a strong increase of p21^WAF1/CIP1^ expression following knockdown of PDCD4 in the absence of PFT-α (Fig. [3a](#Fig3){ref-type="fig"}), consistent with our earlier studies^[@CR26]^ and demonstrating that silencing of PDCD4 increases p21^WAF1/CIP1^ expression also in RPE cells. As expected, in the presence of 30 μM PFT-α the increase of p21^WAF1/CIP1^ expression was strongly suppressed. To measure S-phase entry of the cells we performed ^3^H-thymidine-labeling experiments. This showed that knockdown of PDCD4 in the absence of PFT-α strongly reduced DNA-synthesis activity (Fig. [3b](#Fig3){ref-type="fig"}), whereas the inhibition of DNA-synthesis activity by PDCD4 knockdown was less strong in the presence of PFT-α. It is possible that the DNA synthesis activity of PDCD4 knockdown cells did not reach the level of control cells in the presence of PFT-α because there was still a residual increase of p21^WAF1/CIP1^ in the presence of PFT-α. This could be due to a limiting concentration of the inhibitor, or could reflect a minor contribution of a p53-independent mechanism of stimulation of p21^WAF1/CIP1^ expression by PDCD4 silencing, as proposed recently^[@CR37]^. Overall, our data show that the G1/S-cell cycle block induced by PDCD4 knock-down is caused, at least to a significant part, by the p53-dependent increase of p21^WAF1/CIP1^ expression. PDCD4, therefore, plays a crucial role in counteracting basal p53 activity in unstressed cells.Figure 3PDCD4 is linked to the G1/S checkpoint via the p53-p21^WAF1/CIP1^ axis. (**a)** Knockdown of PDCD4 increases the expression of p21^WAF1/CIP1^. RPE cells were transfected with control siRNA or PDCD4-specific siRNA-2 in the absence or presence of 30 μM PFT-α. Total cell extracts were then analyzed by western blotting for expression of PDCD4, p21^WAF1/CIP1^ and β-actin. (**b)** RPE cells were labeled with ^3^H-thymidine for 1 hour, followed by TCA precipitation and liquid scintillation counting to determine their DNA synthesis activity. The bars indicate the percentage of DNA synthesis (with standard deviation) of the PDCD4 siRNA treated cells relative to control cells. Asterisks indicate statistical significance (\*\*\*p \< 0.001; students-t test). (**c,d)** DNA damage induced down-regulation of PDCD4 expression. RPE cells were exposed to UV light for the indicated times, using a germicidal UV-C lamp in a tissue culture hood or were cultivated in the presence of the indicated concentrations of mitoxantrone. Cells were incubated for 16 hours and total cell extracts were analyzed by western blotting for expression of PDCD4, the DNA double strand break marker γ-H2AX, and β-actin. Previously, we had observed that DNA damage down-regulates PDCD4 expression in HepG2 cells, suggesting a role of PDCD4 in the DNA-damage response^[@CR28]^. This prompted us to examine whether the expression of PDCD4 is also decreased in response to DNA damage in RPE cells. We employed UV-irradiation and the topoisomerase inhibitor mitoxantrone to induce DNA-damage, which we monitored by the DNA double strand break marker γ-H2AX^[@CR38]^. The increase of g-H2AX staining between untreated cells (first lanes in panels C and D) and cells UV-irradiated or incubated with mitoxantrone confirmed that both treatments caused DNA damage, which was accompanied by virtually complete loss of PDCD4 expression (Fig. [3c,d](#Fig3){ref-type="fig"}). 
This suggested that PDCD4 does not act as an antagonist of p53 in the presence of genotoxic stress. Silencing of PDCD4 in RPE cells affects the abundance and translation of multiple mRNAs {#Sec6} --------------------------------------------------------------------------------------- To obtain an integrated view of the functions of PDCD4, we employed RNA-Seq and ribosome profiling. This allowed us to explore the effect of PDCD4 knockdown in RPE cells on transcriptome-wide mRNA abundance and translation (Supplementary Fig. [1a](#MOESM1){ref-type="media"}). First, we used PDCD4 siRNA-2 and control siRNA in RPE cells (Supplementary Fig. [1b](#MOESM1){ref-type="media"}) and subjected them to RNA-Seq. We found that 496 genes were significantly upregulated and 750 genes were down-regulated by PDCD4 silencing. The heat map (Fig. [4a](#Fig4){ref-type="fig"}) shows all genes that are up- or down-regulated two-fold or more. We validated these changes by quantitative real-time PCR to confirm the expression of representative up- and down-regulated mRNAs (Fig. [4b](#Fig4){ref-type="fig"}).Figure 4Pdcd4 knockdown induces transcriptome-wide changes of mRNA expression. (**a**) Heat map of all mRNAs whose expression levels were significantly altered (padj value \< 0.05) after silencing of PDCD4 by a log2-fold change \> 0.5 or \<−0.5. The individual columns represent the results of three independent samples from cells transfected with control siRNA (control) and two independent samples of cells transfected with PDCD4 siRNA-2 (si-2). Selected genes are marked on the right side. (**b**) Real-time PCR analysis of selected up- and down-regulated RNAs. The columns indicate mRNA abundance in control siRNA (black bars) and PDCD4 siRNA-2 (grey bars) treated RPE cells. Individual expression levels determined from three independent biological replicates are marked by white dots. Asterisks indicate statistical significance (\*p \< 0.05; \*\*p \< 0.01; \*\*\*p \< 0.001; students-t test). To investigate whether PDCD4-dependent changes of mRNA expression affect specific biological processes we performed gene set enrichment analysis (GSEA^[@CR39]^). mRNAs suppressed by PDCD4 silencing were strongly enriched in genes that are implicated in DNA replication, E2F targets and cell cycle regulation (Fig. [5a](#Fig5){ref-type="fig"} and Supplementary Table [S2](#MOESM2){ref-type="media"}), substantiating our previous findings. Genes bound by the DREAM complex, a transcriptional regulatory complex playing a key role in cell cycle regulation^[@CR40]--[@CR42]^, and genes with a peak of expression at the G1/S-checkpoint were strongly downregulated upon PDCD4 silencing. Further gene ontology (GO) term analysis of genes repressed by PDCD4 knockdown confirmed that these genes were involved in various DNA-related processes and aspects of cell cycle regulation. Genes that were up-regulated by PDCD4 knockdown were enriched in processes related to immune responses, aspects of extracellular matrix organization, cytokine signalling and motility (Fig. [5b](#Fig5){ref-type="fig"}). Overall, these findings suggest that decreased expression of PDCD4, as seen in many tumor cells, similarly affects a plethora of cellular processes. Furthermore, the data underline the notion that PDCD4 plays a crucial role in cell cycle regulation, particularly at the G1/S-phase transition and the subsequent S-phase.Figure 5Gene set enrichment analysis of genes affected by PDCD4 knockdown. 
(**a)** Examples of GSEA charts revealing the role of PDCD4 in cell cycle regulation. (**b)** GO-term (biological process) enrichment analysis of genes up- and downregulated by PDCD4 silencing. Significantly differentially expressed gene sets were identified by GSEA and reduced to most significant terms by REVIGO. Positive and negative enrichment scores, respectively, indicate up- and down-regulation in PDCD4 knockdown cells. Numbers indicate counts of genes in each gene set. The asterisks indicate FDR q-values (\*\<0.05; \*\*\<0.01; \*\*\*\<0.001; \*\*\*\*\<0.0001; \*\*\*\*\* \< 0.00001). To assess the global effects of PDCD4 on mRNA translation we performed ribosome profiling^[@CR43],[@CR44]^ using RPE cells transfected with PDCD4-specific siRNA-2 or control siRNA. In this approach, polysomes of control and PDCD4 knockdown RPE cells are nuclease digested, ribosome-protected mRNA fragments (RPF) isolated and used for deep sequencing to generate "snapshots" of global translation. By combining these data with RNA-Seq data it is possible to determine the effect of PDCD4 silencing on the translation efficiency of individual mRNAs (Fig. [6a](#Fig6){ref-type="fig"}). By comparing the RNA-Seq and ribosome profiling data from control and PDCD4 knockdown cells we identified 496 transcripts that were significantly induced and 750 genes that were downregulated, while 1688 genes remained unchanged by PDCD4 silencing ("RNA-Seq" in Fig. [6b](#Fig6){ref-type="fig"}). The RPF analysis showed that 592 and 728 genes had increased or decreased RPF levels, while 4388 genes did not exhibit significant translational changes upon PDCD4 silencing ("RFP" in Fig. [6b](#Fig6){ref-type="fig"}). For the majority of transcripts the changes in translation levels correlate with altered mRNA abundance following PDCD4 silencing. mRNAs that are translationally regulated by PDCD4 knockdown ("translationally changed" in Fig. [6b](#Fig6){ref-type="fig"}) were defined as transcripts that exhibit altered RPF levels, while mRNA levels remained unaffected in response toPDCD4 silencing. This resulted in the identification of 34 mRNAs (Supplementary Table [S3](#MOESM3){ref-type="media"}). Since PDCD4 has been implicated in the translational suppression of mRNAs with structured 5′-UTRs^[@CR3],[@CR4]^, we used mfold^[@CR45]^ to examine the potential of the 5′-UTRs of the selected mRNAs to form secondary structures. We plotted the ΔG-values predicted for the folding of the 5'-UTRs separately for those mRNAs that showed increased or decreased translation after PDCD4 knockdown (Fig. [6c](#Fig6){ref-type="fig"} and Supplementary Table [S3](#MOESM3){ref-type="media"}). This indicated that mRNAs whose translation was increased by PDCD4 silencing had more negative predicted ΔG-values, reflecting a higher secondary structure potential of their 5′-UTRs, than those mRNAs whose translation was decreased by the PDCD4 knockdown. We further identified 85 mRNAs that remained stable at the level of translation but were altered at the mRNA levels upon PDCD4 silencing and, hence, were also differentially translated between control and PDCD4 knockdown cells (Supplementary Table [S3](#MOESM3){ref-type="media"}). We determined the predicted ΔG-values for folding of the 5′-UTRs of the selected mRNAs whose translation was increased or decreased by PDCD4 silencing (Fig. [6d](#Fig6){ref-type="fig"} and Supplementary Table [S3](#MOESM3){ref-type="media"}). 
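As an aside, the ΔG comparison described here (secondary-structure predictions for the 5′-UTRs of translationally up- versus down-regulated mRNAs) can be reproduced in outline with any RNA folding tool. The sketch below uses the ViennaRNA Python bindings as a stand-in for mfold (a substitution on my part, since the paper used mfold), toy GC-rich and AU-rich sequences in place of the real annotated 5′-UTRs from Supplementary Table [S3](#MOESM3){ref-type="media"}, and a Mann-Whitney test as one reasonable way to compare the two ΔG distributions; the paper itself does not specify a test for panels 6c-e.

    import RNA                      # ViennaRNA Python bindings (stand-in for mfold)
    from scipy import stats

    def utr_delta_g(sequences):
        """Minimum free energy (kcal/mol) of each 5'-UTR; lower = more structured."""
        return [RNA.fold(seq)[1] for seq in sequences]

    # Hypothetical 5'-UTR sequences for the two groups (real UTRs would come
    # from the transcript annotations of the mRNAs in Supplementary Table S3).
    utr_up = ["GGGCGGCGGCUCCGCCGCCAUGCGGCGG", "GCGGGAGGCGGCGGAGGAGGCGGCGCGG"]
    utr_down = ["AUUUAUUUAAUUUAUAAUUGAACUAUUU", "UUUAAAUAUUUAUUAAUUUUCAAAUAUU"]

    dg_up = utr_delta_g(utr_up)
    dg_down = utr_delta_g(utr_down)

    u_stat, p_value = stats.mannwhitneyu(dg_up, dg_down, alternative="less")
    print(f"mean dG up: {sum(dg_up)/len(dg_up):.1f} kcal/mol, "
          f"down: {sum(dg_down)/len(dg_down):.1f} kcal/mol, p = {p_value:.3g}")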
Similar to our previous analysis this showed that mRNAs whose translation was more effective after PDCD4 knockdown have lower ΔG-values for folding of their 5′-UTRs than mRNAs whose translation was decreased. Combining these different sets of mRNAs, we identified 62 translationally up-regulated and 57 down-regulated mRNAs upon PDCD4 silencing ("translationally changed" in Fig. [6b](#Fig6){ref-type="fig"}), of which mRNAs with increased translation after knockdown of PDCD4 possess more highly structured 5′-UTRs than mRNAs whose translation is decreased when PDCD4 is silenced. (Fig. [6e](#Fig6){ref-type="fig"}). Although we cannot exclude indirect effects of PDCD4 silencing on the translation of specific mRNAs, our analyses are consistent with the concept that PDCD4 suppresses the translation of mRNAs that contain structured 5′-UTRs. Besides the identification of mRNAs that are potential targets of translational suppression by PDCD4, our work has also revealed mRNAs that show decreased translation upon PDCD4 knockdown. The identification of groups of mRNAs whose translation is either positively or negatively regulated by PDCD4 sets the stage for future work to understand the role of PDCD4 in translation regulation in more detail.Figure 6PDCD4 silencing affects the translational landscape of RPE cells. (**a)** Scatter plot showing the log~2~-fold changes on the mRNA (y-axis) and ribosome footprint (x-axis) levels in PDCD4 knockout RPE cells relative to control knockout cells. Genes altered only in one parameter by PDCD4 silencing are represented as colored dots (translationally changed genes in B). (**b)** Changes in RNA-Seq, RPF, and translationally changed genes between control and PDCD4 knockout cells. For RNA-seq and RPF data, "up" and "down" includes genes with a log~2~-fold change \> 0 or \<0, respectively, and an adjusted p-value \< 0.05. For translationally changed genes, "up" and "down" includes genes with an adjusted p-value \< 0.05 for mRNA or RPF and a corresponding adjusted p-value for unchanged hypothesis testing \< 0.1. **(c--e)** Box-plots showing the distribution of ΔG values for RNA secondary structures in the 5′-UTRs of mRNAs translationally suppressed or activated by PDCD4 silencing. Discussion {#Sec7} ========== PDCD4 is a multifunctional protein initially described as a transformation suppressor in a murine keratinocyte transformation model^[@CR11]^. Subsequent work has strongly suggested that PDCD4 acts as a tumor suppressor in a broad spectrum of human tumor types^[@CR1],[@CR2]^ and has shown that decreased expression of human PDCD4 contributes to tumor development in various ways, for example by enhancing the motility and invasiveness of the tumor cells^[@CR18]--[@CR20],[@CR23]--[@CR25]^. Most of the studies addressing the function of human PDCD4 have employed various tumor cells, raising the question whether these studies fully reflect the function of human PDCD4 in normal cells. Therefore, we have used a telomerase-immortalized human epithelial cell line to highlight novel aspects of PDCD4′s function. Our work shows for the first time that PDCD4 is required for the G1/S-phase transition. We observed that siRNA-mediated down-regulation of PDCD4 expression strongly impaired the entry of the cells into S-phase, decreased DNA synthesis activity and reduced cell proliferation rate. Our results suggest that the role of PDCD4 as a G1/S cell cycle regulator is linked to the activity of p53. 
More specifically, our work supports the notion that PDCD4 is required to counteract the activity of p53, preventing the activation of the G1/S-checkpoint in unstressed cells and permitting them to enter into S-phase. In this scenario, knockdown of PDCD4 leads to increased p53-dependent expression of p21^WAF1/CIP1^ and concomitant activation of the G1/S-checkpoint. Using HeLa cells, we have previously observed increased p21^WAF1/CIP1^ expression after knockdown of PDCD4^[@CR26]^. However, unlike the work reported here, knockdown of PDCD4 in HeLa cells only showed aberrant cell behaviour in the presence of DNA damage but did not result in overt cell cycle defects. This might be due to the defective nature of the G1/S-checkpoint in these cells caused by the sequestration of the RB protein by the human papilloma virus E7 protein expressed in HeLa cells^[@CR46]^. HEK293T cells also have a defective G1/S checkpoint (resulting from the expression of the adenovirus E1A and E1B proteins). We found that the requirement for PDCD4 expression for S-phase transition is indeed absent in these cells. Overall, our work identifies a novel role of PDCD4 as a cell cycle regulator that balances p53 activity in unstressed cells, presumably to prevent G1/S-checkpoint activation. Interestingly, we also showed before^[@CR28]^ and confirmed here that induction of DNA damage leads to down-regulation of PDCD4 expression, which suggests that this function of PDCD4 is abolished under conditions of genotoxic stress. The identification of a pro-proliferative role for PDCD4 in the cell cycle is somewhat unexpected in the light of its function as a tumor suppressor. At first glance, low expression of PDCD4 in tumor cells would be expected to impede the cell cycle, however, many tumor cells have a defective G1/S checkpoint as a result of p53 or other mutations, neutralizing the inhibitory effects of low PDCD4 expression on the G1/S-transition. Transcription profiling has revealed a large number of genes that are up- or down-regulated upon PDCD4 knockdown, providing an unbiased view of the cellular processes that are affected by PDCD4. Consistent with previous studies showing increased motility and invasiveness^[@CR18]--[@CR20],[@CR23]--[@CR25]^, GO-term analysis for the biological function of the genes up-regulated by PDCD4 knockdown identifies functions related to extracellular matrix organization and cell adhesion amongst others. GO-term analysis of genes down-regulated by PDCD4 knockdown identifies a plethora of cell cycle- and DNA-related functions that are inhibited when PDCD4 expression is low, such as DNA-replication, DNA-recombination, DNA-repair, telomere organization, chromosome segregation and chromatin remodelling, amongst others. This suggests that decreased PDCD4 expression contributes to tumor development and progression by compromising genomic integrity. Finally, our ribosome profiling analysis shows that the translation of the majority of transcripts was not affected by silencing of PDCD4 because changes in the abundance of ribosome footprints correlated with changes in the expression levels of these mRNAs. By focussing on transcripts that were affected in only one parameter (i.e. mRNA expression level or the frequency of RPF reads) in response to PDCD4 knockdown, we have identified several mRNAs whose translation was moderately increased following PDCD4 knockdown, suggesting that they might be translational targets of PDCD4. 
These RNAs exhibit an increased potential to form stable secondary structures in their 5′-UTRs compared to mRNAs showing decreased translation after PDCD4 silencing, consistent with the notion that Pdcd4 preferentially inhibits translation of RNAs with structured 5′-UTRs^[@CR3],[@CR4]^. Similarly, PDCD4 knockdown stimulates the translation of RNAs that lack secondary structure in their 5′-UTR or that have short 5′-UTRs. Whether the translation of these RNAs is suppressed by binding of PDCD4 to their coding regions, as already reported for certain mRNAs^[@CR29],[@CR30],[@CR47]^, or whether PDCD4 affects their translation indirectly remains to be addressed by future studies. In addition to the identification of potential target mRNAs for translational repression by PDCD4 we have also discovered mRNAs whose translation is positively affected by PDCD4. Whether PDCD4 mediates these effects directly or indirectly and whether this reflects a novel aspect of the function of PDCD4 in translation, remains to be investigated in future work. Overall, our study is the first analysis of genome-wide changes of mRNA abundance and translation induced by PDCD4 silencing in an immortalized human epithelial cell line. This sets the stage for more detailed studies on the role of PDCD4 in the future. Materials and Methods {#Sec8} ===================== Cells and siRNA transfections {#Sec9} ----------------------------- hTERT-RPE-1 is a line of telomerase-immortalized human retinal pigment epithelial cells^[@CR48]^. The cells were grown in DMEM/Ham's F12 medium supplemented with 10% fetal calf serum. PDCD4 expression was silenced with siRNA duplexes targeting the sequences CACCAAUCAUACAGGAAUA (PDCD4 siRNA-1) or GCUUCUUUCUGACCUUUGU (PDCD4 siRNA-2). SiRNA targeting Renilla luciferase (AAACAUGCAGAAAAUGCUG) was used as a negative control. siRNAs (100 nM) were reversely transfected using Lipofectamine^®^ RNAiMax (ThermoScientific), according to the manufacturer's protocols. Cells were harvested 48 to 72 h after transfection. Cell cycle analysis {#Sec10} ------------------- Cells were trypsinized, fixed with 70% ice-cold ethanol in PBS for 1 h or longer at −20 °C, washed with PBS (+0.5% BSA) and stained with propidium iodide (50 μg/mL PI and 25 μg/mL RNase A in PBS) for 1 h at room temperature. In some experiments, cells were synchronized by incubation for 24 h in growth medium containing 0.5 mM mimosine. To release the cells into the cell cycle, they were washed twice with growth medium lacking mimosine. Flow cytometry analysis was performed using a Beckman-Coulter Cytomics FC500 flow cytometer. 10 000 to 15 000 cells were counted per condition in every experiment. ### Antibodies {#Sec11} Western blotting of PDCD4 was performed using a rabbit anti PDCD4 antiserum raised against the N-terminus of human Pdcd4^[@CR26]^. Antibodies against p21^WAF1/CIP1^ (05--345, Millipore), γ-H2AX (GTX61796, Genetex) and β-actin (AC15, Sigma-Aldrich) were obtained from commercial sources. ### Quantitative real-time PCR {#Sec12} Total cellular RNA was isolated with TRIzol^TM^ Reagent (Invitrogen), as recommended by the manufacturer. Total RNA (2 μg) was reverse transcribed with the First Strand cDNA Synthesis Kit (K1612, ThermoScientific) using oligo(dT) primers in 20 μL according to the manufacturer's instructions. Real-time RT-PCR reactions were carried out in 96-well plates using Power SYBR Green PCR Master Mix (Applied Biosystems). 
Reactions were performed using a StepOnePlus RT-PCR instrument (Applied Biosystems) and the following parameters: 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. Each experiment included a no-template control. PCR reaction specificity was confirmed by melting curve analysis of the products. Primer sequences are given in Supplementary Table [S4](#MOESM1){ref-type="media"}. Relative gene expression was calculated by the ΔΔC~T~ method:^[@CR49]^ First, ΔC~T~ values were calculated by subtracting the C~T~-values obtained for individual mRNAs from those obtained for β-actin mRNA. Then, ΔΔC~T~ values were calculated by subtracting the ΔC~T~ values of Pdcd4 siRNA-treated cells from those of control siRNA-treated cells. All experiments were conducted with at least three biological replicates. ^3^H-thymidine labeling {#Sec13} ----------------------- Cells were incubated with growth medium supplemented with 10 μCi/ml ^3^H-thymidine for 1 h. The cells were then washed with PBS, lysed in PBS containing 1% SDS and heated to 95 °C to reduce the viscosity. Aliquots were then spotted on Whatman filter paper and washed 2 times for 15 min with 10% trichloroacetic acid (TCA) and once with ethanol. The filter paper was dried and the radioactivity was determined in a scintillation counter. To correct for differences in the cell number between Pdcd4-specific and control knockdown samples, aliquots of the lysed cells were spotted on nitrocellulose membrane and hybridized to a ^32^P-labeled probe of total human DNA. Alternatively, aliquots of the lysed cells were analyzed by SDS-PAGE and western blotting for expression of β-actin to determine the relative number of cells. RNA-seq {#Sec14} ------- For RNA-seq of poly(A)-selected RNA, RPE cells were incubated for 24 h after transfection with siRNA (Pdcd4 siRNA-2 or control siRNA), and cells were directly lysed in TRIzol^TM^ reagent (Invitrogen). Total RNA was extracted with 1-bromo-3-chloropropane, precipitated with EtOH, resuspended in milliQ water and treated with TURBO DNase (Ambion) for 30 min at 37 °C, 1400 rpm. RNA was extracted again with acidic phenol to remove DNase. The quality of the RNA was examined with an Agilent Bioanalyzer. Sequencing libraries of poly(A)-enriched RNA were finally generated with the TruSeq Stranded mRNA LT Kit (Illumina). Ribosome profiling {#Sec15} ------------------ Ribosome profiling was carried out as previously described^[@CR44]^. RPE cells were incubated for 24 h after transfection with siRNA (PDCD4 siRNA-2 or control siRNA). 2 h before harvesting, the culture medium was replaced with fresh medium. To stabilize elongating ribosomes, cells were treated with 100 µg/mL cycloheximide (CHX) for 5 min at 37 °C, following a washing step with ice cold PBS (containing 100 µg/mL CHX). Cells were then lysed in lysis buffer (10 mM Tris-HCl (pH 7.4), 10 mM MgCl~2~, 100 mM NaCl, 1% Triton, 1 mM DTT, 100 µg/mL CHX) per sample. Each sample consisted of cells from 8 tissue culture dishes (10 cm diameter), and cell debris was pelleted at 4 °C, 10 000 x g for 3 min. 10 OD~260~ units of cell extract were then supplemented with 900 U RNase I (Ambion) and 0.5% deoxycholate and treated for 20 min at 22 °C and 800 rpm in a thermomixer. 
The reaction was stopped by addition of 240 U SUPERase In (Ambion) (+0.5% deoxycholate) and extracts were fractionated by centrifugation at 4 °C, 35 000 rpm for 3 h in a SW-41 Ti swinging-bucket rotor (Beckman Coulter) on 10--50% sucrose density gradients (20 mM Tris-HCl (pH 7.5), 10 mM MgCl~2~, 100 mM NH~4~Cl, 2 mM DTT, 100 µg/mL CHX). Gradients were fractionated at 0.75 mL/min with continuous monitoring of the OD~254~ using a Biocomp Instruments Gradient Station (Teledyne Isco). Monosome fractions were collected and, following addition of 1% SDS, flash-frozen and stored at −80 °C. RNA was isolated from gradient fractions by the hot acid phenol method (1-bromo-3-chloropropane used instead of chloroform), and ribosome footprints were purified from monosome RNA by size selection of 28--30 nt fragments (excluding a major band around 31 nt) on 15% polyacrylamide, 8 M urea, 1xTBE gels. Sequencing libraries from ribosome-protected footprints were generated by 3′-end dephosphorylation, followed by 3′-adapter ligation, reverse transcription, and circularization as described in^[@CR44]^. Sequencing data analysis {#Sec16} ------------------------ The analysis of the ribosome profiling and RNA-seq datasets was essentially performed as described in^[@CR44]^. Briefly, libraries for ribosome profiling and RNA-Seq were sequenced on an Illumina NextSeq sequencer. Ribosome-profiling reads were processed by clipping adapter sequences and trimming of the 4 randomized nucleotides in the linker with the FASTX-Toolkit version 0.0.13 (<http://hannonlab.cshl.edu/fastx_toolkit>). After processing, residual rRNA sequences were removed from ribosome-profiling datasets using bowtie version 1.0.0^[@CR50]^. Ribosome-profiling and raw RNA-Seq reads were mapped to hg38 transcripts (UCSC canonical transcripts extended 18 bp into the UTRs). Count tables of mapped reads were generated using custom scripts and differential expression determined with DESeq2^[@CR51]^. Differential expression was scored using an adjusted p-value of 0.05 for the hypothesis of a changed gene (res) and unchanged expression was additionally scored using an adjusted p-value of 0.1 for the hypothesis of an unchanged gene (resLA). Transcripts translationally changed were defined as those with an adjusted p-value \<0.05 (res) for the mRNA or ribosome profiling data and a corresponding adjusted p-value in the other category (resLA) \< 0.1. Gene set enrichment analysis (GSEA) {#Sec17} ----------------------------------- GSEA was carried out using the GSEA preranked tool (<http://software.broadinstitute.org/gsea/index.jsp>) with 1,000 gene set permutations. The genes in the expression dataset were ranked by their log~2~ fold change. Redundant terms were removed with REVIGO^[@CR52]^. Calculation of ΔG values for 5′-UTR secondary structure formation {#Sec18} ----------------------------------------------------------------- The 5′-UTR sequences of the relevant mRNAs were retrieved from the NCBI nucleotide sequence database and truncated immediately after the start codon. If several mRNA sequences were available, sequence variant 1 was chosen. The sequences were then submitted to the RNAfold web server (<http://rna.tbi.univie.ac.at>) to predict the minimum free energy of the optimal secondary structure (Supplementary Table [S5](#MOESM1){ref-type="media"}). 
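As an illustration, the same minimum free energy values could be generated in batch with the ViennaRNA package instead of the web interface. The sketch below assumes that the ViennaRNA Python bindings (the `RNA` module) are installed and that the 5′-UTR sequences, truncated just after the start codon, are available in a FASTA file; the file names are hypothetical.

```python
# Batch prediction of 5'-UTR minimum free energies (a sketch; this study
# itself used mfold and the RNAfold web server for these values).
import RNA  # ViennaRNA Python bindings

def read_fasta(path):
    """Yield (name, sequence) pairs from a simple FASTA file."""
    name, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name:
                    yield name, "".join(seq)
                name, seq = line[1:], []
            elif line:
                seq.append(line)
    if name:
        yield name, "".join(seq)

with open("utr_mfe.tsv", "w") as out:
    out.write("transcript\tlength\tdG_kcal_per_mol\n")
    for name, seq in read_fasta("five_prime_utrs.fa"):  # hypothetical input
        structure, mfe = RNA.fold(seq.upper().replace("T", "U"))
        out.write(f"{name}\t{len(seq)}\t{mfe:.2f}\n")
```

Truncating each sequence immediately after the start codon, as described above, keeps the predicted ΔG focused on the 5′-UTR rather than on downstream coding sequence.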
Data access {#Sec19} ----------- The RNA-seq and ribosome profiling data from hTERT-RPE-1 cells (PDCD4 siRNA-2 and control siRNA) have been submitted to the NCBI Gene Expression Omnibus (GEO; [www.ncbi.nlm.nih.gov/geo/](http://www.ncbi.nlm.nih.gov/geo/)) under accession number GSE138533 (<https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE138533>). Supplementary information ========================= {#Sec20} Supplementary data. Supplementary table S2. Supplementary table S3. **Publisher's note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary information ========================= is available for this paper at 10.1038/s41598-020-59678-w. This work was supported by the Deutsche Forschungsgemeinschaft to K.-H.K. and S.A.L. and by the Open Access Publication Fund of the University of Münster. A.H., B.S.N., S.A.L. and K.-H.K. designed and performed experiments. A.H., B.S.N., S.A.L. and K.-H.K. wrote the paper. All authors approved the final version of the manuscript. The authors declare no competing interests.
{ "perplexity_score": 410.5, "pile_set_name": "PubMed Central" }
The Tekken World Tour soldiers on with Headstomper in Malmö, Sweden. But there’s more then Tekken going on, a lot more. Side events and a grudge match between The Mix Up champion, Superakouma and Baxi. Mortal Kombat events are steadily growing, judging by the amount of events this week. I for one cannot be happier, as I am very much enjoying the competitive scene. On the traditional esports side of things, we got a large range of genres. Counter Strike, Call of Duty, Starcraft 2, DOTA 2 all the way to Disneyland, and beyond. Headstomper 2019 Smash.gg Featured Game(s): Tekken 7 - Street Fighter V: Arcade Edition - Soul Calibur 6 - BlazBlue: Cross Tag Battle - Guilty Gear Xrd REV 2 - Blazblue Central Fiction Schedule: All Times in CEST Credit: AceKingofSuit Time Converter Streams: ComaFGC ComaFGC2 Neophos DestinationFight Ninjaagaayden UnequalledMedia KVOxTSB 2019 Smash.gg Featured Game(s): Tekken 7 - Dragon Ball FightersZ -Dead or Alive 6 - Fate Unlimited Codes - Blade Arcus Rebellion - SNK Heroines - Koihime Enbu RyoRaiRai - Capcom vs SNK 2 - Blazblue: Cross Tag Battle - Guilty Gear xrd Rev 2 - UNIST - King of Fighters 14 - Marvel vs Capcom Infinite - Ultimate Marvel vs Capcom 3 - ARMS - FEXL Schedule: All Times in JST Time Converter Streams: GodsGarden AnimeIlluminati KSBch01 KSBch02 KSBch03 KSBch04 CustomComboJP Mortal Kombat New Jersey Twitter - Facebook Featured Game(s): Mortal Kombat 11 Schedule: Saturday | All Times in EDT 14:00 - 17:00 - Casuals 17:00 - 21:00 - Tournament Time Converter Stream Midwest Arena Smash.gg Featured Game(s): Smash Bros. Ultimate Schedule: Saturday | All Times in CDT 11:30 - 2v2 Tournament Begins 15:00 - 1v1 Wave A Pools Begin 16:30 - 1v1 Wave B Pools Begin 18:00 - 1v1 Top 48 Begins 18:30 - Amateur Bracket Signups Begin 19:00 - Free-to-enter 1v1 Amateur Bracket Begins 20:30 - 1v1 Top 8 Finals Begins Time Converter Stream Spoiler-Free Smash VODs The Second National Melee Arcadian Smash.gg Featured Game(s): Smash Bros. Melee Schedule: Sunday | All Times in PDT 10:00 - Doubles > Top 4 12:00 - Singles Pools Wave 1 14:00 - Singles Pools Wave 2 16:00 - Doubles Top 4 17:00 - Top 32 18:00 - Top 8 Time Converter Stream Dreamcon Gaming Arena by Nexus Esports Smash.gg Featured Game(s): Mortal Kombat 11 - Smash Bros. Ultimate - Tekken 7 - Dragon Ball FIghtersZ Schedule: All Times in CDT Friday 17:00 - Mortal Kombat 11 20:00 - Celebrity Smash Tournament Saturday 12:00 - Premier Smash 5k Sunday 12:00 - Dragonball FighterZ 15:00 - Tekken 7 Time Converter Stream King Smash.gg Featured Game(s): Smash Bros. Ultimate - Smash Bros. 
Melee Schedule: Sunday | All Times in EDT Melee Doubles - 11:00 AM Singles Wave A - 1:00 PM Singles Wave B - 3:00 PM Singles Top 48 - 5:00 PM Singles Top 8 - 7:30 PM Ultimate Doubles - 11:00 AM Singles Wave A - 12:00 PM Singles Wave B - 2:00 PM Singles Top 48 - 4:00 PM Singles Top 8 - 6:00 PM Time Converter Streams: Primary Melee Secondary Melee Primary Ultimate Secondary Ultimate The EarthRealm Dojo Smash.gg Featured Games: Mortal Kombat 11 - Tekken 7 Schedule: Saturday | All Times in EDT 15:00 - MK11 Begin 16:00 - Tekken 7 Begin 19:00 - Tournament Wrap-up Time Converter Streams: Mortal Kombat 11 Tekken 7 Sakura Fight Festa 2019 Smash.gg Featured Game(s): Tekken 7 - Soul Calibur 6 - Virtua Fighter 5: Final Showdown - Dead or Alive 6 - Blazblue: Cross Tag Battle - Guilty Gear Xrd Rev 2 - Blazblue: Central Fiction - UNIST - Fighting Ex Layer Schedule: All Times in BST Time Converter Streams: NGIEvents NthGenMedia CS:GO IEM XIV Sydney All Info Schedule: Stream CSGO spoiler-free VODs PUBG Europe League - Kick-off Cup Twitter Schedule: All Times in CEST FRIDAY, MAY 3 — DAY 4 18:00 - Final Stage, Match 1 18:55 - Final Stage, Match 2 19:50 - Final Stage, Match 3 20:45 - Final Stage, Match 4 SATURDAY, MAY 4 — DAY 5 18:00 - Final Stage, Match 5 18:55 - Final Stage, Match 6 19:50 - Final Stage, Match 7 20:45 - Final Stage, Match 8 SUNDAY, MAY 5 — DAY 6 18:00 - Final Stage, Match 9 18:55 - Final Stage, Match 10 19:50 - Final Stage, Match 11 20:45 - Final Stage, Match 12 Time Converter Streams: Main Map Call of Duty World League London Schedule Streams: Main Stream Bravo Stream Charlie Stream Delta Stream DOTA 2 MDL Disneyland Paris Major Schedule: All Times in CEST Time Converter Streams: MDLDisney MDLDisney2 Dota 2 Spoiler-Free VODs StarCraft 2 Website Schedule: All Times CEST Saturday GSL Code S Season 2 | Group D - 06:00 WCS Challenger 2019 EU Season 1 | Playoffs - 14:00 Sunday WCS Challenger 2019 EU Season 1 | Playoffs - 20:00 Time Converter Streams: GSL WCS WCS Winter Europe VODs GSL COde S VODs Rocket League Championship Series Week 5 Schedule: Stream Spoiler-Free Rocket League VODs League of Legends Midseason Invitational Schedule Stream LOL spoiler-free VODs Overwatch League Schedule Stream Spoiler-Free Overwatch VODs Overwatch Contenders League Schedule Stream Spoiler-Free Overwatch VODs
{ "perplexity_score": 1897.3, "pile_set_name": "OpenWebText2" }
Basketball Wives Recap: The Wives Go to Tahiti Ugh, Basketball Wives just have so many ISSUES. It’s not even funny, except when it is. Royce wants her dad to say that he’s proud of her, that she’s good enough. And sometimes she just wants a hug from him. Maybe I’d have more sympathy if she didn’t cry in such a way that makes me want to kill puppies. And I love puppies. Tami also has problems, but she’s working on them with her anger management sessions. She wants to brag to her doctor about how she’s made “personal leaps and bounds,” which obviously means that she’s going to not make personal leaps and bounds later in the episode. She brags about how she’s been stopping and thinking, playing the peacemaker. But when the subject turns to Kesha, she admits that she didn’t address the issue the way she should have, though it’s an improvement. The “old Tami” would have hit her, the new Tami makes a conscious effort not to smack people in the face. Royce is still trying to get her dad Robert and her boyfriend Dezmon on the same page because she is “Daddy’s girl” and “Dezmon’s woman.” At lunch, it all turns to sh!t because Royce’s dad calls her needy, and Dezmon has a blank look on his face because this situation is awkward. Period. Then Dezmon admits that, honestly, it’s a little true, and Royce leaves the table. Dezmon goes after her, and finds her crying to her mother on the phone. She denies that she’s needy because she gives away her heart, and is also pissed that he brought this issue up in front of her pops. He says it’s not a negative thing, just sometimes he feels overwhelmed. Dezmon continues to be awkward, Robert eats his food with a huff and a head shake while still in the restaurant. Eventually, Dezmon hugs her. Took long enough. Chad is home sick, acting like a baby. He and Evelyn do their cute thing of having a connection or whatever. He says he’s her hunk of “chocolate.” Har, har. Next up, Evelyn joins Tami for a walk-and-talk and shop session. Tami thinks it would be good for Evelyn to get on the anger management bandwagon, which might be good for her person, but not for the show. Evelyn seems up to the idea. When the topic turns to Tahiti, Tami says that she’s not looking forward to going because of Kesha. “There’s only so much I can take before I burst,” she explains. “We in Tahiti, b!tches!” someone exclaims. And it’s true. All the Wives are there, minus Jen and Kenya (Royce wasn’t invited, I presume). Shaunie doesn’t know what that means but, um, do they even hang out? Whatevs. Tami gets annoyed in the car because Kesha keeps coughing without covering her mouth properly. “It’s a big deal for me,” she says, “I don’t tolerate that from anybody.” This is serious shiz, yo. Then Suzie asks if cannibals still live in Tahiti, which is both offensive and stupid and offensively stupid. Everyone’s in awe of the beauty that abounds around them, and as they enter their room, they’re met with flowers all about their bungalow. Everyone toasts to having “no drama,” which is literally laugh-out-loudable. During their first dinner in Tahiti, they plot to put a dead fish in Kenya’s room so it’ll smell when she joins them on the trip. These b!tches be doing high school again, what else is new? When Suzie brings up the Jen situation, we learn that Jen tweeted something directed towards Nia to all her followers, which is just prolonging the conflict and stress. 
Shaunie brings up her plan to go swim with sting rays and sharks, to which Tami says, “Heeeell to the no.” When they’re on this boat, we learn that Jen is coming on the trip later as well. Tami is actually glad because a) it’s tradition and b) Evelyn needs to get some things off her chest. Kesha’s just getting on Tami’s nerves because she’s acting scared of everything, like getting into the water. Eventually, she does. But not soon after, we hear a scream. (What else is new? This show is terrifying.) Things take a turn for the drunken and messy when they decide to have shots. Tami’s sitting next to Kesha and starts getting annoyed with her. Then the “Tasmanian Tami” (Shaunie’s words) comes out, and she confronts Kesha about when Kesha talked behind her back, saying that she wanted to go off on Tami, but didn’t want to embarrass her any more than she already embarrassed herself. “I’m not the b!tch you wanna start with. Don’t start with me,” Tami warns. She wants respect. “You wanted to go off on me? B!tch, here’s your chance!” she yells. Kesha denies this, Tami proclaims that she can’t be fake around girls she doesn’t like. The whole time that Tami’s going off the deep end, Kesha stares placidly at her and tunes her out. “I just look at her like the fool she is,” Kesha says. Oh honey, everyone’s a fool when it comes to this show.
{ "perplexity_score": 356.8, "pile_set_name": "Pile-CC" }
Enter "The House" of Will Ferrell and Amy Poehler Writer: Florey DM Opening in cinemas this 29 June is "The House", an upcoming comedy movie starring funny duo Will Ferrell and Amy Poehler. The two team up as husband and wife Scott and Kate Johansen, who came up with the ingenious idea of turning their basement into a gambling den in order to raise some money after using up their daughter's college fund. Directed by Andrew J. Cohen, imagine yourself entering "The House" of Scott and Kate as you browse through the movie stills below: The Johansens want their beloved daughter Alex (Ryan Simpkins) to go to college – except they've exhausted their money for it. That moment when you have to sneak around your own house. No doubt, things will get a little wild after the Johansens turn their house into a casino.
{ "perplexity_score": 375.9, "pile_set_name": "Pile-CC" }
Background {#Sec1} ========== After embryonic development, tissue-specific stem cells called adult or somatic stem cells (SSCs) remain throughout the body for life. These adult SSCs are master cells that, through asymmetric division, retain their ability to self-renew while producing daughter cells that are the functional units of that tissue \[[@CR1], [@CR2]\]. The daughter cells completely differentiate to support tissue repair and remodeling, contributing to the structural and functional maintenance of their tissue of residence \[[@CR1]\]. Tissue-specific stem cells have been identified in multiple tissues and organs including human and murine myometrium \[[@CR3], [@CR4]\]. Different characteristics have been used to identify stem cell lines such as cell clonogenic efficiency, in vitro differentiation, expression of stemness markers, and in vivo tissue regeneration \[[@CR5]--[@CR8]\]. Specifically, mesenchymal stem cells (MSCs) express specific embryonic stem cell genes like Oct-4 and Nanog, transcription factors which determine the embryonic stem cell self-renewal and differentiation \[[@CR9]\]. The human uterus consists of three tissue layers: endometrium, myometrium, and perimetrium \[[@CR10]\]. The myometrium is the smooth muscle layer of the uterus, oftentimes characterized by its ability to remodel and regenerate during and after pregnancy \[[@CR11], [@CR12]\]. These unique properties suggest the presence of myometrial stem cells that tightly regulate myometrial growth \[[@CR3], [@CR4]\]. Similarly, tumor-initiating cells are a subset of cells within a tumor cell population, which also through asymmetric division retain their ability to sustain tumors \[[@CR1], [@CR13]\]. Uterine leiomyomas, also called uterine fibroids, are monoclonal tumors of the myometrium that likely originate from a single altered and transformed somatic stem cell of the myometrium followed by expansion and propagation in a steroid-dependent manner \[[@CR12]--[@CR14]\]. Stem cells derived from leiomyoma tissue, but not myometrium, carry a mediator complex subunit 12 (MED12) mutation in the majority of leiomyoma lesions \[[@CR15]\]. It is thus hypothesized that the transformation of a myometrial stem cell into a mutated stem cell leads to the formation of a leiomyoma tumor progenitor cell that, after further expansion, gives rise to leiomyoma \[[@CR2], [@CR4], [@CR8], [@CR16]\]. The transformation of a normal myometrial stem cell into a leiomyoma-forming stem cell is likely a result of a complex process entailing multiple insults to the myometrial stem cell including hypoxic niche, altered epigenome, and abnormal estrogen signaling. Our group has recently identified two specific cell surface markers (CD44 & Stro1) as human \[[@CR17]\] and rat \[[@CR18], [@CR19]\] myometrial stem cell markers. CD44 is a generic name for a complex set of cell surface glycoproteins involved in cell proliferation, differentiation, and migration \[[@CR20]\]. Even though human and rat myometrial stem cells have been well characterized by our group and others \[[@CR17]--[@CR19], [@CR21], [@CR22]\], little has been published on mouse myometrial stem cells since 2007 when Dr. Szotek suggested myometrial label-retaining cells (LRCs) are putative myometrial stem cells. In this work, we evaluate the utility of specific cell surface markers to identify myometrial stem cells in mice at different age points as well as determine the effect of initiation of steroidogenesis on the frequency of these stem cells. 
Materials and methods {#Sec2} ===================== Mouse tissues {#Sec3} ------------- Female mice, B6;CBA-Tg(Pou5f1-EGFP)2Mnn/J mouse strain, 1 week to 24 weeks of age were purchased from the Jackson Laboratory (Sacramento, CA). These mice were homozygous for the Pou5f1/OCT4 transgenic insert and expressed enhanced green fluorescent protein (EGFP) in the uterus under the control of POU domain, class 5, transcription factor 1 (Oct-4) promoter, and distal enhancer. Primordial germ cell-specific markers, alkaline phosphatase II, and stage-specific embryonic antigen are co-expressed in EGFP-positive cells. Pou5f1 or Oct-4 expression indicates the totipotent cell lineage \[[@CR23]\]. Mice homozygous for the transgenic insert were reported as viable, fertile, and normal in size; they did not display gross physical or behavioral abnormalities. Uterine tissue was collected at ages 1, 3, 4, 8, 12, and 24 weeks (at least 4 mice per age group). All animal procedures described in this report have been approved by Augusta University's Institutional animal care and utilization committee (IACUC). Immuno-co-localization with undifferentiated markers {#Sec4} ---------------------------------------------------- Immunohistochemistry (IHC) and immunofluorescence approaches were performed, and myometrial stem cells were visualized under green fluorescence. To co-localize the identified Oct-4-positive myometrial stem cells, mouse tissue was co-stained with CD44 antibody and Nanog. Tissue blocks were deparaffinized in a Leica XL Autostainer (Leica Inc., Buffalo Grove, IL), and antigen retrieval was performed with Antigen Unmasking Solution (Vector Labs, Burlingame, CA). Following pre-incubation of tissue with Blocking Buffer (10% normal goat serum, 1% BSA, 0.5% Triton X-100) for 1 h at room temperature, primary antibody was added to Blocking Buffer and incubation was continued overnight at room temperature. Primary antibodies were diluted as follows: CD44 at 1/250 and Nanog at 1/200 (Additional file [1](#MOESM1){ref-type="media"}: Table S1). Slides were then washed three times in PBS and treated with fluorescein or Texas Red conjugated secondary antibodies (Vector Labs) for 1 h at room temperature. Following three washes in PBS, slides were coverslipped with mounting medium containing Dapi (Vector Labs). Cell counting {#Sec5} ------------- Four-micrometer sections were stained with the indicated antibodies and counterstained with Mayer's hematoxylin or DAPI, respectively, to visualize all cells present in the section and perform the enumeration of positive cells. Cells were counted by using NIH ImageJ software. At least 200 cells were counted for each time point against DAPI and hematoxylin-stained nuclei. Labeling index and percentages were determined as the ratio of the cells positive for the specific antibodies to the total number of cells in each slide determined by hematoxylin nuclear staining or DAPI (fluorescent technique). Three random high-power fields were selected from each mouse age time point, and the average stem cell percentage was determined. Data were expressed as means and standard errors. Histological examination for sex steroid hormone receptors {#Sec6} ---------------------------------------------------------- To study the possible effect of ovarian sex steroids on the quantity of stem cells, we evaluated myometrial expression of estrogen receptor α (ERα) and progesterone receptors A and B (PR A&B) at ages 1, 3, and 4 weeks using inverted microscopy. 
Blocks of uterine tissue at the indicated age time points were deparaffinized and prepared as stated above. Primary antibodies were diluted as follows: estrogen receptor at 1/250 and progesterone receptor at 1/250 (Additional file [1](#MOESM1){ref-type="media"}: Table S1). Slides were visualized under inverted microscopy for presence of brown staining indicating presence of the receptor. Appropriate positive and negative controls were obtained for comparison using standard techniques according to the manufacturer's instructions. Statistical analysis {#Sec7} -------------------- Two-sample *t* test was used to compare the percent of stem cells at 1 week of age to the percent of stem cells of the following mice ages: 3, 4, 8, 12, and 24 weeks. Two-sample *t* test was used again to compare the percent of stem cells of pre-sexual mice and sexually mature mice. *P* value of less than 0.05 was adopted for statistical significance. Results {#Sec8} ======= Identification and quantification of myometrial stem cells {#Sec9} ---------------------------------------------------------- Because Oct-4 was tagged with GFP in this generalized transgenic mouse model, we could follow the expression of this primitive stem-cell marker with green fluorescence. Under low- and high-power magnification (20--40×), we were able to visualize Oct-4-expressing cells in the mouse myometrium. Then, to co-localize the Oct-4-positive cells with other well-known stem cells markers, immunofluorescence approaches were performed. The expression of the myometrial stem marker CD44 was evaluated using conjugated CD44 antibody. Because Oct-4 was tagged with GFP, the cells expressing Oct-4 emitted green fluorescence. The conjugated CD44 antibody expressed Texas Red Fluorescence. Thus, the combination of both Oct-4 and CD44 staining (red and green) is yellow, as demonstrated in Fig. [1](#Fig1){ref-type="fig"}. Figure [2](#Fig2){ref-type="fig"} shows the added triple staining with Nanog at 24 weeks of age. The Nanog co-localizes with both Oct4 and CD44 confirming the stemness of the identified cells. We were unable to use Stro1 as an additional marker for mouse stem cells, as we previously described in human and rat myometrium \[[@CR8]\], because Stro1 mouse Ab is not yet available. We then proceeded with evaluation of number of Oct-4+/Nanog+/CD44+ cells in uteri from mice 1, 3, 4, 8, 12, and 24 weeks of age. NIH ImageJ was used to count myometrium stem cells and to determine stem cell average for each uterine age as described in the method section.Fig. 1OCT4/GFP and CD44 co-staining of mice myometrium. Uterine ages 1, 3, 4, 8, 12, and 24 weeks (40×) are shown. Because Oct-4 was tagged with GFP, the cells expressing Oct-4 emitted green fluorescence. The conjugated CD44 antibody expressed Texas Red Fluorescence. The combination of both Oct-4 and CD44 staining (red and green) is yellow. Here, we show the yellow staining that indicates co-localization of Oct4/GFP and CD44Fig. 2Myometrium triple staining with GFP, CD44, and Nanog of mice uterus at 24 weeks of age (20×). GFP: green fluorescence. CD44: red fluorescence. Nanog with 2nd antibody alexa fluor 647: purple fluorescence. DAPI: blue fluorescence The quantity of Oct-4+/CD44+/Nanog+ triple positive myometrial stem cells was significantly lower (2.14% ± 1.30) at 1 week of age as compared to all other tested older ages (*P* \< 0.001). 
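The quantification and comparison used throughout this section reduce to simple arithmetic: per-field percentages of marker-positive cells, their mean and standard error per age group, and a two-sample t test between groups. The sketch below illustrates this with invented counts, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-field counts: (triple-positive cells, total nuclei)
fields = {
    "1wk": [(4, 210), (5, 230), (4, 205)],
    "3wk": [(28, 215), (28, 220), (30, 240)],
}

def field_percentages(counts):
    """Percentage of positive cells in each high-power field."""
    return np.array([100.0 * pos / total for pos, total in counts])

for age, counts in fields.items():
    pct = field_percentages(counts)
    sem = pct.std(ddof=1) / np.sqrt(len(pct))
    print(f"{age}: {pct.mean():.2f}% ± {sem:.2f} (SE)")

# Two-sample t test between the pre-sexual and sexually mature group
t, p = stats.ttest_ind(field_percentages(fields["1wk"]),
                       field_percentages(fields["3wk"]))
print(f"t = {t:.2f}, p = {p:.4f}")
```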
The quantity of myometrial stem cells at ages 3, 4, 8, 12, and 24 weeks was 13.01% ± 5.63, 11.30% ± 1.52, 10.43% ± 5.09, 6.60% ± 0.63, and 10.48% ± 3.65, respectively (Fig. [3](#Fig3){ref-type="fig"}). Our results clearly showed that at the pre-sexual age of 1 week, myometrial stem cells are significantly lower in frequency as compared to the sexually mature ages of 3 weeks and beyond. Our prior work \[[@CR17]\] and others \[[@CR24]\] demonstrated that human myometrial stem cells lack ER/PR expression and rely for their estrogen/progesterone responsiveness in adult myometrium on paracrine signaling from surrounding fully-differentiated ER/PR expressing myometrial cells. Consequently, we wanted to evaluate if the paucity of MSCs in young (1 week aged) myometrium is due to lack of ER/PR expression in surrounding differentiated myometrial cells or indeed due to lack of circulating estrogen and progesterone hormones at this early pre-sexual age, or both.Fig. 3Percent of stem cells relative to uterine age. **a** *t* test statistical analysis, the percent of stem cells in 1 week mice was compared to 3, 4, 8, 12, and 24 week mice; *p* values were 0.01, 0.004, 0.02, 0.0028, and 0.0160, respectively. **b** Percent of stem cells in pre-sexual mice at 1 week of age was compared to percent of stem cells of sexually mature mice (average of stem cells in mice of age 3, 4, 8, 12, and 24). *P* value was 0.0003 Histological analysis of mouse myometrium for expression of estrogen and progesterone receptors {#Sec10} ----------------------------------------------------------------------------------------------- To further investigate the low frequency of myometrial stem cell in pre-sexual mice, estrogen and progesterone receptor staining were carried out on samples at different age points. Interestingly, comparable and abundant positive staining for ERα and PR A&B receptors was detected in the myometrium of all tested ages: 1, 3, and 4 weeks (Fig. [4](#Fig4){ref-type="fig"}). As shown in Fig. [4](#Fig4){ref-type="fig"}, ERα and PR A&B staining in the myometrium of mice at ages 1, 3, and 4 weeks was evident in majority of myometrial smooth muscle cells and also in endometrium.Fig. 4ERα and PR A&B staining in the myometrium of mice at 1, 3, and 4 weeks of age. **a** Week 1 myometrium staining of ER ERα and PR A&B (20×). **b** Week 3 staining of ER ERα and PR A&B (20×). **c** Week 4 staining of ER ERα and PR A&B (20×). Note negative control Discussion {#Sec11} ========== During pregnancy, the human uterus undergoes a 500- to 1000-fold increase in volume and a 24-fold increase in weight, which demonstrates the remarkable capacity of the myometrium to regenerate and remodel itself \[[@CR3]\]. Although the regulation of myometrial functions during pregnancy and labor is mainly the result of the integration of endocrine and mechanical signals, it is reasonable to assume that the impressive myometrium regenerative ability must be tightly connected to a resident somatic stem cell population \[[@CR3], [@CR18]\]. There is increasing evidence that adult stem cells not only reside in highly regenerative tissue like bone marrow \[[@CR25], [@CR26]\], intestine, and epidermis \[[@CR27], [@CR28]\] but are also found in most tissue where they function to maintain hemostasis by replacing cells lost by apoptosis \[[@CR2]\]. Nevertheless, stem cell research in uterine myometrium and leiomyomas is a relatively new area of inquiry, and few original articles have been published addressing this important cell population. 
In the last few years, several studies using the 5-bromo-2′-deoxyuridine and side population methods in murine and human myometrium have suggested the presence and functional relevance of somatic stem cells in this tissue \[[@CR3], [@CR23], [@CR29]\]. These myometrial somatic stem cells lack smooth muscle cell markers and can be induced to differentiate into adipogenic and osteogenic lineages in addition to differentiating into smooth-muscle cells \[[@CR12]--[@CR14]\]. However, the uniqueness, scarcity, and lack of distinctive morphological characteristics, such as defining cell surface markers, make their identification and location a very complex task in most tissues including myometrium. Although in our previous study we demonstrated the expression of Stro-1/CD44 in human and rat myometrium \[[@CR17]--[@CR19]\], here and for first time, we have identified the myometrial stem cell niche in the mouse myometrium by using the expression of CD44 and co-staining with other well-known stemness markers such as Oct-4 and Nanog. Moreover, we found that the percentage of Oct-4/CD44/Nanog myometrial stem cells was significantly lower at the pre-sexual age of 1 week than at the sexually mature ages of 3 weeks and older. Based on these results, we hypothesized that the very low frequency of Oct-4/CD44/Nanog myometrial stem cells, observed in the youngest mice, could be related to the lack of estrogen and progesterone sex hormones \[[@CR30], [@CR31]\]. Puberty onset is dependent on activation of the hypothalamic pituitary gonadal axis, which leads to gonadal steroid hormone production \[[@CR32]\]. Myometrial smooth muscle cells have receptors for progesterone and estrogen, and they could play an important role in upregulating the proliferation of Oct-4/CD44/Nanog myometrial stem cells in murine myometrium, especially during pregnancy, via paracrine pathway \[[@CR18]\]. Here, we demonstrated that both ER and PR are indeed abundantly expressed in the murine myometrium at ages 1, 3, and 4 weeks. This highly suggests that the low number of MSCs at 1 week of age is primarily due to lack of circulating estrogen and progesterone in these sexually immature mice \[[@CR33], [@CR34]\]. We have previously demonstrated that human and rat myometrial stem cells under-expressed ER and PR \[[@CR19]\]; thus, it is likely that surrounding differentiated myometrial cells promote MSC proliferation in a paracrine manner, similar to human and rat myometrium \[[@CR18]\]. This model has indeed been validated by recent work from Dr. Bulun's group \[[@CR24]\]. Consequently, the significantly lower frequency of stem cells in the pre-sexual age of 1 week is likely due to the lack of the estrogen and progesterone hormone ligand at that early age rather than the unavailability of the steroid receptors, which are similarly expressed, in neonatal and adult myometrium. Unfortunately, the serum levels of estrogen and progesterone were not available for the purchased transgenic mice; however, the chronological changes in mouse serum estrogen and progesterone are well established in the literature \[[@CR33]--[@CR37]\]. The role of estrogen in the onset of puberty and maintenance of reproduction is well established. The lowest serum levels of estradiol and progesterone were noted in prepubertal and ovariectomized mice in a hormonal analysis study \[[@CR35]\]. Thus, our results suggest that stem cells are steroid dependent and increase in numbers with reproductive maturity at around 3 weeks of age in mice. 
Importantly, this data also emphasizes the vulnerability of neonatal myometrium to environmental xeno-estrogen and other chemical endocrine disruptor exposure. It is well established that exposure to xeno-estrogens during the sensitive developmental period increases risk of disease development later in life \[[@CR38]\]. Our recent reports suggest that exposure to these environmental estrogens could lead to permanent reprogramming of myometrial stem cells and hence lead to adult onset of diseases such as uterine fibroids \[[@CR19], [@CR39]\]. However, further animal studies are needed to better understand the interplay between differentiated and myometrial stem cells as well as the various downstream pathways. Conclusion {#Sec12} ========== In summary, our results suggest that the Oct-4/CD44/Nanog can be used as cell surface markers to identify a subpopulation of murine myometrial cells, exhibiting features of stem/progenitor cells. Furthermore, our results suggest that myometrial stem cells are sex steroid hormone dependent, likely via paracrine pathway, and increase in numbers with reproductive maturity and rise in serum estrogen and progesterone levels around 3 weeks of age in mice. The abundance and early-onset expression of ER/PR emphasize the vulnerability of neonatal myometrium to environmental endocrine disruptors that can potentially lead to permanent reprograming and adult onset of myometrial disorders such as uterine fibroids. These findings could offer a useful tool in better understanding the endocrinology of uterine function, providing novel insights into murine myometrial physiology as well as the origin of myometrial disorders such uterine fibroids. Additional file =============== {#Sec13} Additional file 1:**Table S1.** List of antibodies used for immunohistochemistry staining. (PNG 36 kb) ERα : Estrogen receptor α MSCs : Mesenchymal stem cells PR A&B : Progesterone receptors A and B SSCs : Somatic stem cells This work has been partially presented as an oral abstract at ASRM Annual Meeting in October 2015, Baltimore. O-211. The authors would like to thank Dr. Marshall Brendan, Augusta University Department of Cellular Biology and Anatomy, for his experience in immunohistochemistry assays and Tim Curtz, Augusta University Department of Cellular Biology and Anatomy and Cell Imaging Core. Funding {#FPar1} ======= This research is funded by National Institutes of Health grants NIH R01DO89553-01 and NIH R01-ESO28615-01. Availability of data and materials {#FPar2} ================================== The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. SB carried out the identification and quantification of myometrial stem cells and histological analysis of mouse myometrium for expression of estrogen and progesterone receptors, performed the statistical analysis, and drafted the manuscript. MA participated in the design of the study and helped to draft and edit the manuscript. AA conceived the study and participated in its design and coordination and edited the manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate {#FPar3} ========================================== All animal procedures described in this report have been approved by Augusta University's Institutional Animal Care and Utilization Committee (IACUC). Consent for publication {#FPar4} ======================= Not applicable. 
Competing interests {#FPar5} =================== The authors declare that they have no competing interests. Publisher's Note {#FPar6} ================ Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{ "perplexity_score": 397.2, "pile_set_name": "PubMed Central" }
It is therefore ignorance that was responsible for India’s political downfall, ignorance combined with a lack of appreciation of geographical factors within India itself. India had neither a continental view like China or Persia, or a oceanic view like Japan. Today what we require is both a continental view and an appreciation of seapower. – KM Pannikar[1] “The Chinese annexation of Tibet as a buffer state then presents India with a landward dilemma on two fronts as it has never experienced before. And this is the reason than even today the Army is the biggest stakeholder in the defence budget…”[2] As highlighted by me in my response to Daanish Inder Gill’s article,[3] India’s foremost concerns are to be found on its land borders; a fact that escapes Daanish’s attention in his diatribe[4] where he in any case fails to see the wood for the trees. As the International Fleet Review 2016 (only the second one in Indian history) just concluded, apparently Daanish was party to some information that eluded the collective Indian establishment (ranging from its political leadership, to the defence services (including the Army) and even the bureaucracy). As noted by Admiral Arun Prakash: “While the IN had, no doubt, always nurtured a lofty vision of itself as a blue water force at some future point of time, this vision was shared neither by the politicians nor the bureaucracy. It is only during the last two decades that the key factors enumerated above have combined to convince India’s politicians, diplomats, bureaucrats and others that India’s security as well as destiny are heavily dependent on the oceans.”[5] Since Mr Daanish pleads ignorance as to the facts necessitating the fulfillment of functions that maritime power enables its user to fulfill, a quick education about the distinctive features of the Indian Ocean region is in order. The following are some of the points enunciated superbly in the previous (2007) edition of India’s Maritime Military Strategy[6]: Given the existence of “extreme diversities of economies” with some of the “richest countries…fastest growing economies…(and) poorest countries in the world” the region has but naturally seen the outbreak of a large number of conflicts in the post Cold War period along with enhancements in maritime capabilities. Being the region with a third of the world’s population and only a fourth of the landmass it is not surprising that some of the worst hit areas when it comes to “piracy, gun-running, human and drug trafficking” are located in the Indian Ocean Region. The Indian Ocean Region is clearly identified as being “the de facto home of global terrorism” and the “locus for 70% of the worlds disasters“. As if these factors internal to the region were not sufficient to justify the pursuit of a strong navy there is the factor of external involvement to be considered as well. With the region possessing “65% of the known reserves of strategic raw materials, 31% of the gas and…more than half of the world’s oil exports“, the area continues to attract the attention of extra-regional navies. No one has advanced that as things stand India must prioritize its naval development over its land forces. This does not mean though that India can afford to make unbalanced force modernization decisions that ignore the maritime component. 
Daanish seems to have missed the train when India committed itself to attaining a “Blue Water Navy”[7] and becoming a “net provider of security” in the Indian Ocean region.[8] The first point Daanish fails to appreciate in his response relates to the nuances of the direct deployment of military power through naval means. The second point which Daanish chooses not to engage with relates to the non-kinetic application of maritime power that is so inherent to the concept of being a net provider of security. The third point which Daanish has failed to engage with is India’s sea dependence for oil and trade and how those interests can possibly be served by not cultivating a maritime outlook. Daanish confines himself to saying that India should hope that everyone will follow international law and that India shall therefore not require a navy to protect its interests. In fourth point I address historical factors, factors which are of minimal relevance in the present exchange of views and yet this is the portion which Daanish has devoted his maximum attention to in his response. On the first point, I place reliance upon Zorawer Daulet Singh’s piece since I agree with him that a naval expansion does not mean that India must discard its continental focus.[9] Gerson & Whiteneck’s piece[10] as also observed (mercifully) by Gill is about the role of naval forces in conflict deterrence – a vital maritime role identified by Zorawer Daulet as well.[11] Deterrence has a twofold understanding – conventional and nuclear. The superiority of naval forces to contribute to the same vis a vis a specific Army arm- the artillery is beyond question. The Indian nuclear deterrent being based on a credible nuclear deterrent with the assurance of massive retaliation depends critically upon the survivability of one’s own nuclear forces. Nothing speaks for dispersed nuclear forces with high survivability against surprise attacks than the sea-based component of a nuclear triad.[12] Conventional deterrence is based upon having the requisite combat power to exercise deterrence by denial and deterrence by punishment. The reason one requires naval units to strike targets in place of land units is not rooted primarily in how effectively they can do so in as much as the range of targets that they can strike. The artillery arm is useful against land based targets. A naval task force by contrast can strike not only on land but also targets on the seas whether they be near our shores of half a world away. Daanish correctly notes that on land the Chinese have immense logistical and geographical advantages that India does not. While India should respond to this situation, it would be impractical for India to focus upon an exclusively land based response against an adversary whose resources far outstrip our own. This is so because while the landward geography favours China, at sea it is India which is favourably located. China’s extended sea lines of communication which need to pass through the Indian Ocean can be easily exploited by India to not only match but best China in the maritime domain. The possibility of India using the Andaman Islands as a metal chain so as to impede Chinese access to the Straits of Malacca[13] is something that worries the Chinese to no end. That the Chinese hope to have increasing influence in the Indian Ocean Region is also well known. 
The PLA Navy (PLAN) is undergoing a historic shift from concentrating not just on “offshore waters defense” but also “open seas protection” and hence focusing on Far Seas Operations.[14] The tyranny of distance however severely curtails the stronger powers advantages against a weaker opponent that has the home ground advantage (India in this case).[15] As Holmes notes, India’s commanding central position, closer bases, superior manpower and knowledge of terrain (physical & cultural) confers immense advantages upon India.[16] Daanish’s point that SLOC’s are ‘international common’s’ blockading which will antagonize neutral powers is on shaky ground given that he does not recognize the distinction between International Shipping Lanes (ISL) (which are the internationally used sea trade routes) and Sea Lines of Communication (alternate routes that need not necessarily coincide with ISL’s).[17] The second point relates to the non-kinetic application of maritime power activities. This involves the shaping of a favourable and positive maritime environment to enhance net security therein.[18] This encompasses activities like deployments and exercises in areas of interest to India, maritime capacity building and enhancement, cooperative development of regional Maritime Domain Awareness and conducting maritime security operations.[19] These are activities which as highlighted in my original response are activities that any other service let alone a particular arm would be hard pressed to match. The third point flows from my previous observation that India’s ‘sea dependence’ for oil is about 93% and over 70% of its trade by value and 90% by volume is carried by sea.[20]. Daanish has quite conveniently chosen to ignore this point and address how such interests can be secured absent investments in maritime power. Furthermore, it beggars belief that in a serious academic discussion it has been suggested that a nation renounce the means to pursue its legitimate right of self defence in the hope that observance of international law by all other nations shall keep it secure. There are two glaring problems with this assumption. One, is that even in a perfect world where this was the case, there would still exist a host of disruptive non-state actors and naturally occurring disaster events which necessitate investments in the maritime domain to deal with them. Second and more important is that strategic planning is not undertaken based on cheery & hopeful appraisals of what the future may hold. This especially cannot be done where a territorial neighbour (China) is challenging the very foundational assumptions of the ‘international commons’ regime in the South China Sea. There are a wide variety incidents that can be recounted to demonstrate the flagrant and repetitive violation of international law in cases involving peripheral state interests, let alone matters involving conflict that generally involve core interests. No example though quite measures up to that of the Second World War. It is widely known that a direct outcome of World War I was the adoption of the Covenant of the League of Nations following the Paris Peace Conference. What is not as widely appreciated is the wider movement which continued even after that to regulate a state’s recourse to use force. 
The Locarno Treaties of 1925, the resolution of the 6th Assembly of the League of Nations in 1925, the resolution of the 8th Assembly of the League of Nations in 1928, the 6th Pan America Conference in 1928 and the General Treaty for Renunciation of War as an Instrument of National Policy in 1928 (known also as the Paris Pact or the Kellogg-Briand Pact)[21] were all directed at accomplishing this.[22] When the time came however all these international law instruments were treated by Nazi Germany as scraps of paper whose provisions were not worth the paper that they were written on. And the suffering of nations which had allowed themselves to be lulled into a false sense of security is now history. The fourth point sees Daanish cursorily mention (without any supporting evidence) how the French, the Dutch & the Soviets supposedly plunged to their ruin due to over-investment in their navies (an overly simplistic claim in any event). Daanish’s claim of major economies having risen to the fore without the need for a blue water navy is hogwash. All the world’s major economies today have either directly pursued the development of their maritime power or have sought to secure the benefits of the same indirectly through alliances and related political arrangements. The link between wealth and military power as a symbiotic one with wealth underpinning military power even as military power is required to acquire and protect wealth has been sufficiently well documented by Paul Kennedy.[23] That in India’s case this military power must translate into naval might as well has been demonstrated in the preceding sections. History has been clear that no power strong enough to reign over all India ever emerged till the control of Indian waters remained in its hands which was until the mid 13th century (Initially the supremacy fell to the Arabs who being commercial navigators and not acting as instruments of a ‘national’ policy did not exploit this. This was begun by the Portuguese). Even the Mughals were powerless at sea during the zenith of their own power. [24] As Pannikar ultimately notes, “While to other countries the Indian Ocean is only one of the important oceanic areas, to India it is the vital sea. Her life lines are concentrated in that area. Her future is dependent on the freedom of that vast water surface. No industrial development, no commercial growth, no stable political structure is possible for her unless the Indian Ocean is free and her own shores fully protected. The Indian Ocean must therefore remain truly Indian.“[25] —————————————
{ "perplexity_score": 394.6, "pile_set_name": "OpenWebText2" }
Aid reaches rebel-held area of Sudan for first time in years
Welcome arrival of food and other essentials coincides with talks to end years of fighting.
{ "perplexity_score": 371, "pile_set_name": "OpenWebText2" }
Simulation of different applicator positions for treatment of a presacral tumour. Proximally located presacral recurrences of rectal carcinomas are known to be difficult to heat because of the complex anatomy of the pelvis, which reflects, shields and diffracts the applied power. This study aims to clarify whether a change of position of the Sigma-Eye applicator in this region can improve the heating. Finite element (FE) planning calculations were made for a phantom model with a proximal presacral tumour using fixed 100 MHz radiofrequency radiation. Shifts of the applicator were simulated in 1 cm steps in the x- (lateral), y- (posterior) and z- (longitudinal) directions. The computations also considered the network effects of the Sigma-Eye applicator. Optimisation of the phases and amplitudes for all positions was performed after solving the bioheat-transfer equation. The parameters T90, T50, sensitivity, hot-spot volume and total deposited power were sampled for every applicator position with optimised plans and a standard plan. The ability to heat a presacral tumour clearly depends on the applicator position, both for standard antenna adjustment and for optimised steering of the Sigma-Eye applicator. The y-direction (anterior-posterior) is very sensitive. Using optimised steering for each position, in the z-direction (longitudinal) we found an unexpected additional optimum 8 cm cranial to the middle position of the phantom. The x-direction (lateral) is less important in a clinical setting and shows only small changes of T90, with the expected optimum in the central position. Positioning of the applicator in the axial and anterior position of the mid-pubic symphysis should be avoided for treatment of the presacral region, regardless of the adjustment used. Use of amplitude and phase optimisation yields better T90 values than plans optimised by phases only, but such plans are much more sensitive to small variations of phases and amplitudes during a treatment, and the total power of the Sigma-Eye applicator may be restricted by the treatment software. The complex geometry of the human pelvis seems to be the reason for the difficulty in warming the proximal presacral region. The assumption that every position can be balanced by a proper phase adaptation is true only within a small range. Centring the applicator on the mid-pubic symphysis to heat this region should be avoided. From a practical point of view, improved warming should be achieved by optimisation of phases only.
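For readers unfamiliar with the indexed temperatures quoted above, the short sketch below shows how T90 and T50 (the temperatures exceeded by 90% and 50% of the tumour voxels, respectively) would be read off a simulated temperature map. It is purely illustrative and is not the planning system used in the study: the voxel temperatures are synthetic stand-ins, and the sketch assumes a steady-state temperature distribution has already been obtained, e.g. by solving a Pennes-type bioheat-transfer equation of the form rho*c*dT/dt = div(k*grad T) - w_b*rho_b*c_b*(T - T_blood) + Q_SAR.

    # Illustrative sketch only; synthetic voxel temperatures, not the study's FE model.
    import numpy as np

    def indexed_temperatures(tumour_voxel_temps):
        """Return (T90, T50) for a set of tumour voxel temperatures."""
        temps = np.asarray(tumour_voxel_temps, dtype=float)
        t90 = np.percentile(temps, 10)   # 90% of voxels are at or above this temperature
        t50 = np.percentile(temps, 50)   # median tumour temperature
        return t90, t50

    # Toy example: 1000 tumour voxels heated to around 41 degC.
    rng = np.random.default_rng(0)
    voxel_temps = rng.normal(loc=41.0, scale=0.8, size=1000)
    t90, t50 = indexed_temperatures(voxel_temps)
    print(f"T90 = {t90:.2f} degC, T50 = {t50:.2f} degC")

In an actual planning run the voxel temperatures would come from the FE solution for a given applicator position and antenna phase/amplitude setting, and the optimiser would search over those settings to maximise T90 subject to hot-spot constraints.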
{ "perplexity_score": 359.5, "pile_set_name": "PubMed Abstracts" }
Going to Cuba on the Sky next week. We were there for 2 days in 2017. Don't plan on getting off the ship but still had to pay for a visa. Doesn't seem right; when is some attorney going to test this? Was there several years ago on my private boat and never paid for a visa.
{ "perplexity_score": 384.3, "pile_set_name": "Pile-CC" }
Eye fixations indicate men's preference for female breasts or buttocks. Evolutionary psychologists have been interested in male preferences for particular female traits that are thought to signal health and reproductive potential. While the majority of studies have focused on what makes specific body traits attractive, such as the waist-to-hip ratio, the body mass index, and breast shape and size, there is little empirical research that has examined individual differences in male preferences for specific traits (e.g., favoring breasts over buttocks). The current study begins to fill this empirical gap. In the first experiment (Study 1), 184 male participants were asked to report their preference between breasts and buttocks on a continuous scale. We found that (1) the distribution of preferences was bimodal, indicating that Argentinean males tended to define themselves as favoring either breasts or buttocks, rarely regarding the two traits as contributing equally to their choice, and (2) the distribution was biased towards buttocks. In a second experiment (Study 2), 19 male participants were asked to rate pictures of female breasts and buttocks. This study was necessary to generate three categories of pictures with statistically different ratings (high, medium, and low). In a third experiment (Study 3), we recorded the eye movements of 25 male participants while they chose the more attractive of two women, seeing only their breasts and buttocks. We found that the first and last fixations were systematically directed towards the self-reported preferred trait.
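As a purely illustrative aside on what a bimodal distribution of scores on a continuous preference scale looks like statistically, the sketch below compares one- and two-component Gaussian mixture fits on synthetic data. Both the data and the model-comparison approach are hypothetical; the abstract does not state which test the authors actually used.

    # Synthetic illustration only; not the authors' data or analysis.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)
    # Simulated preference scores on a 0-100 scale (0 = breasts, 100 = buttocks),
    # with two clusters and a bias towards the upper end.
    scores = np.concatenate([
        rng.normal(25, 8, size=70),
        rng.normal(75, 8, size=110),
    ]).clip(0, 100).reshape(-1, 1)

    # Compare unimodal (k=1) and bimodal (k=2) fits by BIC; a clearly lower BIC
    # for k=2 is consistent with a bimodal distribution of preferences.
    for k in (1, 2):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
        print(f"components={k}  BIC={gmm.bic(scores):.1f}")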
{ "perplexity_score": 320.9, "pile_set_name": "PubMed Abstracts" }