From the Best and Brightest Files: Another Alleged 'Canadian', Another Bonafide Jihadi Shirdon, who was enrolled in the Southern Alberta Institute of Technology until at least 2012, appears in an ISIS video released two months ago. Before burning his Canadian passport, Shirdon, in full view of the camera lens, issues a threat to Canada, the U.S. and "all oppressors." "We are coming and we will destroy you by the will of God," Shirdon says on the video. He comes from a prominent and well-educated Somali family. His father’s brother, Abdi Farah Shirdon, was a former prime minister of Somalia who has survived numerous attempts on his life by al-Shabab militants fighting for an Islamic state in Somalia under the banner of al-Qaeda. Shirdon’s mother and sister live in Calgary and are deeply involved in the religious life of their community. CBC News reached out to them repeatedly, but they would only say they are "confused and pained by Farah’s choice," before asking for privacy. Though it’s unclear how real his threats are, Shirdon is the latest young man from Calgary to be identified by CBC News as a Canadian fighting overseas. Hey, the guy's Somali so cut him a break. You weren't expecting much from him anyway were you? He's from a people who couldn't be bothered to learn and remember their alleged ancient script so had to adopt the English Latin alphabet in the 1970s just to catch up with the rest of the world. How about that? Somalia didn't have a functioning alphabet until the late 20th century yet our immigration department thinks that somehow importing thousands of Somalis will enrich the country and give Canada a competitive edge on the world stage. 7 comments: Cassandra said... this is not left vs right, Tories vs Liberals, Socialism vs liberty. This is war against White people. Why do hostile elite defend Israel as a Jewish ethnostate with Jewish only immigration, but ravage White majority Europe/North America into a multi-ethnic, multi-cultural Gulag with non-White colonization? The world is 93% non-White, only 7% White. But 3rd world colonizers, Muslims, Punjabis, Chinese, are aggressively advancing their agenda to annihilate gullible Whites, just as China annihilates Tibet. Pax, they don't even bother selecting the immigrants any more. They must have stopped selecting them in the late 1980's. Every time I visit Toronto and environs, I see tonnes of morbidly obese blacks from the Caribbean, elderly Hindus and Sikhs from India, Somali Muslims and other vile, diseased trash from the third world. It's like they're scooping them up and dumping them here, as if Canada were some kind of dumping ground for the third world. And these people are supposed to take over Canada in the next 2 or 3 decades. It's embarrassing! Looks like a third world Canada is in the works. Either we stop all third world immigration NOW or we're all immigrating to some place in eastern Europe in the next 10 or 20 years. Heaven help us all! We don't attract the best and brightest. This should be obvious to anyone who lives in a major Canadian city. When the Philippines tops the list of source countries it's clear you don't seem to give a shit anymore. All that matters now is that they vote for you political party and can carry a mortgage. can you delete my comments and the reply threads ,from this post and the previous post. Fine. But you're going to have to tell me which anon is you. 
If you're the anon trying to excuse non-Europeans of the crime of native land theft then I can understand why because deep down you have no argument. If Europeans are imperialist land thieves then so are non-Europeans right down to their modern day Canadian born descendants.
{ "pile_set_name": "Pile-CC" }
/**
 * Created by martin on 19.02.2017.
 */
import * as path from 'path';

// Locate this package's package.json: first one directory up, then two directories up
// (e.g. when running from a nested build output folder).
let pkg = null;
try {
    pkg = require(path.resolve(__dirname, '..', 'package.json'));
} catch (e) {
    try {
        pkg = require(path.resolve(__dirname, '..', '..', 'package.json'));
    } catch (e) {
        pkg = null;
    }
}

// Version string exported for the rest of the package; 'unknown' if no package.json was found.
export const VERSION = (pkg ? pkg.version : 'unknown');
{ "pile_set_name": "Github" }
Common allelic variants of exons 10, 12, and 33 of the thyroglobulin gene are not associated with autoimmune thyroid disease in the United Kingdom. Thyroglobulin (Tg) is a major autoantigen for autoimmune thyroid disease (AITD). The Tg gene (Tg) has been mapped to chromosome 8q24, which has recently been linked in two independent studies to AITD. Association of specific alleles of microsatellite markers within Tg itself supports a role for Tg as a good candidate susceptibility locus for AITD. Resequencing of the Tg exons has led to the identification of a number of novel single nucleotide polymorphisms, four of which have been reported to be associated with AITD. Resequencing of Tg in Caucasian subjects in the United Kingdom (UK) has confirmed the presence of four single nucleotide polymorphisms in exons 10, 12, and 33. However, in the largest case-control association study to date with adequate power to detect the reported effect if present, we found no evidence for association of the Tg DNA variants with AITD in the UK. These data suggest that the recently identified single nucleotide polymorphisms do not have a causal role for AITD in the UK. At this stage, we cannot exclude the Tg region as harboring a susceptibility locus for AITD, and only large scale sequencing and fine mapping of the region, including neighboring genes, will allow us to identify any potential causal variants within this region.
{ "pile_set_name": "PubMed Abstracts" }
Dutch politician honored at funeral; party apparently gathering strength AMSTERDAM, Netherlands -- Tens of thousands of mourners threw flowers, wept and chanted the name of Pim Fortuyn on Friday as the body of the slain politician was driven in a hearse to his funeral. The peal of church bells in Rotterdam was drowned out as crowds roared when Fortuyn's remains were carried out of the 16th-century Laurentius and Elisabeth Cathedral after a Roman Catholic Mass broadcast on television. The atmosphere of the procession was at times more like a sporting event or a mass protest than a solemn funeral cortege, with thousands of people raising their hands in the air, chanting "Pim Fortuyn, Pim Fortuyn," and singing "You'll Never Walk Alone," a support song for the Rotterdam soccer team. The outpouring of public sentiment could affect voters next week in elections for a new government. Before his death, Fortuyn's populist, anti-immigration party ranked among the top three parties, and it seemed to have gathered strength since his assassination on Monday. Meanwhile in Amsterdam, prosecutors indicated Fortuyn's suspected killer may have been plotting against three other members of his anti-immigration party. The names of the party members and maps of their neighborhoods were found in the suspect's car, said a spokeswoman for the public prosecutor. The identities of the targeted members were not released. Though the suspect's name has not been officially released, he has been identified by former colleagues as Volkert van der Graaf, an environmental and animal rights activist.
{ "pile_set_name": "Pile-CC" }
'use strict';

var Promise = require('sporks/scripts/promise'),
  sporks = require('sporks');

var Config = function (slouch) {
  this._slouch = slouch;
};

Config.prototype._couchDB2Request = function (node, path, opts, parseBody) {
  opts.uri = this._slouch._url + '/_node/' + node + '/_config/' + path;
  opts.parseBody = parseBody;
  return this._slouch._req(opts);
};

// Warning: as per https://github.com/klaemo/docker-couchdb/issues/42#issuecomment-169610897, this
// isn't really the best approach as a more complete solution would implement some rollback
// mechanism when a node fails after several attempts. (Retries are already attempted by the
// request).
Config.prototype._couchDB2Requests = function (path, opts, parseBody, maxNumNodes) {
  var self = this,
    promises = [],
    i = 0;

  return self._slouch.membership.get().then(function (members) {
    members.cluster_nodes.forEach(function (node) {
      if (typeof maxNumNodes === 'undefined' || i++ < maxNumNodes) {
        // Clone the opts as we need a separate copy per node
        var clonedOpts = sporks.clone(opts);
        promises.push(self._couchDB2Request(node, path, clonedOpts, parseBody));
      }
    });

    // Only return a single promise when there is a single promise so that the return value
    // is consistent for a single node.
    return promises.length > 1 ? Promise.all(promises) : promises[0];
  });
};

Config.prototype._couchDB1Request = function (path, opts, parseBody) {
  opts.uri = this._slouch._url + '/_config/' + path;
  opts.parseBody = parseBody;
  return this._slouch._req(opts);
};

Config.prototype._request = function (path, opts, parseBody, maxNumNodes) {
  var self = this;
  return self._slouch.system.isCouchDB1().then(function (isCouchDB1) {
    if (isCouchDB1) {
      return self._couchDB1Request(path, opts, parseBody);
    } else {
      return self._couchDB2Requests(path, opts, parseBody, maxNumNodes);
    }
  });
};

Config.prototype.get = function (path) {
  return this._request(path, {
    method: 'GET'
  }, true);
};

Config.prototype.set = function (path, value) {
  return this._request(path, {
    method: 'PUT',
    body: JSON.stringify(this._toString(value))
  });
};

Config.prototype.unset = function (path) {
  return this._request(path, {
    method: 'DELETE'
  });
};

Config.prototype.unsetIgnoreMissing = function (path) {
  var self = this;
  return self._slouch.doc.ignoreMissing(function () {
    return self.unset(path);
  });
};

Config.prototype.setCouchHttpdAuthTimeout = function (timeoutSecs) {
  // Convert timeout value to a string
  return this.set('couch_httpd_auth/timeout', timeoutSecs + '');
};

Config.prototype._toString = function (value) {
  if (typeof value === 'boolean') {
    return value ? 'true' : 'false';
  } else if (typeof value === 'string') {
    return value;
  } else {
    return value + ''; // convert to string
  }
};

Config.prototype.setCouchHttpdAuthAllowPersistentCookies = function (allow) {
  return this.set('couch_httpd_auth/allow_persistent_cookies', allow);
};

Config.prototype.setLogLevel = function (level) {
  return this.set('log/level', level);
};

Config.prototype.setCompactionRule = function (dbName, rule) {
  return this.set('compactions/' + encodeURIComponent(dbName), rule);
};

Config.prototype.setCouchDBMaxDBsOpen = function (maxDBsOpen) {
  return this.set('couchdb/max_dbs_open', maxDBsOpen);
};

Config.prototype.setHttpdMaxConnections = function (maxConnections) {
  return this.set('httpd/max_connections', maxConnections);
};

module.exports = Config;
{ "pile_set_name": "Github" }
/*
 * Copyright (C) 2009 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.xyoye.player.danmaku.danmaku.model.objectpool;

public interface PoolableManager<T extends Poolable<T>> {
    T newInstance();

    void onAcquired(T element);

    void onReleased(T element);
}
{ "pile_set_name": "Github" }
Molecular dynamics simulations of the acyl-enzyme and the tetrahedral intermediate in the deacylation step of serine proteases. Despite the availability of many experimental data and some modeling studies, questions remain as to the precise mechanism of the serine proteases. Here we report molecular dynamics simulations on the acyl-enzyme complex and the tetrahedral intermediate during the deacylation step in elastase catalyzed hydrolysis of a simple peptide. The models are based on recent crystallographic data for an acyl-enzyme intermediate at pH 5 and a time-resolved study on the deacylation step. Simulations were carried out on the acyl enzyme complex with His-57 in protonated (as for the pH 5 crystallographic work) and deprotonated forms. In both cases, a water molecule that could provide the nucleophilic hydroxide ion to attack the ester carbonyl was located between the imidazole ring of His-57 and the carbonyl carbon, close to the hydrolytic position assigned in the crystal structure. In the "neutral pH" simulations of the acyl-enzyme complex, the hydrolytic water oxygen was hydrogen bonded to the imidazole ring and the side chain of Arg-61. Alternative stable locations for water in the active site were also observed. Movement of the His-57 side-chain from that observed in the crystal structure allowed more solvent waters to enter the active site, suggesting that an alternative hydrolytic process directly involving two water molecules may be possible. At the acyl-enzyme stage, the ester carbonyl was found to flip easily in and out of the oxyanion hole. In contrast, simulations on the tetrahedral intermediate showed no significant movement of His-57 and the ester carbonyl was constantly located in the oxyanion hole. A comparison between the simulated tetrahedral intermediate and a time-resolved crystallographic structure assigned as predominantly reflecting the tetrahedral intermediate suggests that the experimental structure may not precisely represent an optimal arrangement for catalysis in solution. Movement of loop residues 216-223 and P3 residue, seen both in the tetrahedral simulation and the experimental analysis, could be related to product release. Furthermore, an analysis of the geometric data obtained from the simulations and the pH 5 crystal structure of the acyl-enzyme suggests that since His-57 is protonated, in some aspects, this crystal structure resembles the tetrahedral intermediate.
{ "pile_set_name": "PubMed Abstracts" }
An unlikely pair.....
{ "pile_set_name": "Enron Emails" }
Now, I know I can’t find a way to connect that beautiful pink hair that LunaLamb is wearing to furries, but I’ll see what can be done. Until then, this beautiful pale babe is bouncing around in her room, sometimes on her back, sometimes on a dildo. Either way, I love it when she bounces and it’s so fun to watch! LunaLamb’s adorable face will have you in chains, submitting to her godliness, especially when she shows you those perky little nipples. That collection of Pokémon in the background is impressive, but it won’t even come into the top 10 list of things you’ll be looking at in LunaLamb’s room, with the first spots occupied by everything that this beautiful babe does, is and creates. Go see her!
{ "pile_set_name": "OpenWebText2" }
Constitution of 1838 In December 1838, delegates convened at St. Joseph to form Florida’s first state constitution. The convention completed its work in January 1839, although Florida was not officially admitted to the Union as a state until March 3, 1845. Transcript Section 13. That no person shall, for the same offense, be twice put in jeopardy of life or limb. Section 14. That private property shall not be taken or applied to public use, unless just compensation be made therefor. Section 15. That in all prosecutions and indictments for libel, the truth may be given in evidence; and if it shall appear to the jury that the libel is true, and published with good motives and for justifiable ends, the truth shall be a justification; and the jury shall be the judges of the law and facts. Section 16. That no person shall be put to answer any criminal charge, but by presentment, indictment or impeachment. Section 17. That no conviction shall work corruption of blood, or forfeiture of estate. Section 18. That retrospective laws, punishing acts committed before the existence of such laws, and by them only declared penal, or criminal, are oppressive, unjust, and incompatible with liberty; wherefore, no ex post facto law shall ever be made. Section 19. That no law impairing the obligation of contracts shall ever be passed. Section 20. That the people have a right, in a peaceable manner, to assemble together to consult for the common good; and to apply to those invested with the powers of government, for redress of grievances, or other proper purposes, by petition, address, or remonstrance. Section 21. That the free white men of this State shall have the right to keep and to bear arms, for their common defense. Section 22. That no soldier in time of peace, shall be quartered in any house without the consent of the owner; nor in time of war but in a manner prescribed by law. Section 23. That no standing army shall be kept up without the consent of the Legislature: and the military shall in all cases and at all times, be in strict subordination to the civil power. Section 24. That perpetuities and monopolies are contrary to the genius of a free State, and ought not to be allowed. Section 25. That no hereditary emoluments, privileges, or honors, shall ever be granted or conferred in this State. Section 26. That frequent recurrence to fundamental principles, is absolutely necessary to preserve the blessings of liberty. Section 27. That to guard against transgressions upon the rights of the people, we declare that every thing in this article is excepted out of the general powers of government, and shall forever remain inviolate; and that all laws contrary thereto, or to the following provisions, shall be void. _______________ ARTICLE II. Distribution of the Powers of Government. Section 1. The powers of the Government of the State of Florida, shall be divided into three distinct departments, and each of them confided to a separate body of Magistracy, to wit: Those which are Legislative to one; those which are Executive to another; and those which are Judicial to another. Section 2. No person, or collection of persons, being one of those departments, shall exercise any power properly belonging to either of the others, except in the instances expressly provided in this Constitution. ____________ ARTICLE III. Executive Department. Section 1. The Supreme Executive Power shall be vested in a Chief Magistrate, who shall be styled the Governor of the State of Florida. Section 2. 
The Governor shall be elected for four years, by the qualified electors, at the time and place where they shall vote for Representatives, and shall remain in office until a successor be chosen and qualified, and shall not be eligible to re-election until the expiration of four years thereafter. Section 3. No person shall be eligible to the office of Governor unless he shall have attained the age of thirty years, shall have been a citizen of the United States ten years, or an inhabitant of Florida at the time of the adoption of this Constitution, (being a citizen of the United States,) and shall have been a resident of Florida at least five years next preceding the day of election. Section 4. The returns of every election for Governor shall be sealed up and transmitted to the seat of Government, directed to the Speaker of the House of Representatives, who shall, during the first week of the session, open and publish them in the presence of both Houses of the General Assembly, and the person having the highest number of votes, shall be Governor; but if two or more shall be equal and highest in votes, one of them shall be chosen Governor by the joint vote of the two Houses; and contested elections for Governor shall be determined by both Houses of the General Assembly, in such manner as shall be prescribed by law. Section 5. He shall, at stated times, receive a compensation for his services, which shall not be increased or diminished during the term for which he shall have been elected. Florida Memory is funded under the provisions of the Library Services and Technology Act, from the Institute of Museum and Library Services, administered by the Florida Department of State's Division of Library and Information Services.
{ "pile_set_name": "Pile-CC" }
Chemical induction of presumed dominant-lethal mutations in postcopulation germ cells and zygotes of mice. II. Sensitivity of different postcopulation-precleavage stages to three alkylating chemicals. The relative sensitivities of various postcopulation-precleavage and pronuclear stages to dominant-lethal effects of isopropyl methanesulfonate (IMS), ethyl methanesulfonate (EMS), and triethylenemelamine (TEM) were investigated. The pattern of sensitivity differed with the chemical. IMS was most effective when pronuclear formation was already completed and the majority of the zygotes were presumably undergoing DNA synthesis. EMS, on the other hand, induced its most pronounced effects when eggs in the course of second meiotic division and zygotes in early pronuclear stages were treated. The greatest effect of TEM was observed when zygotes were treated at the early pronuclear stage. EMS and TEM, in contrast to IMS, are similar to radiations in that zygotes undergoing DNA synthesis are more resistant to them than are the early pronuclear stages. In the case of IMS, effects induced in the most sensitive postcopulation-precleavage stage were 6 to 9 times greater than in the most sensitive precopulatory dictyate oocytes or male germ cells. On the other hand, in the case of EMS and TEM, the most sensitive precopulatory male germ cells, but not the dictyate oocytes, were more sensitive than the most sensitive postcopulation stages.
{ "pile_set_name": "PubMed Abstracts" }
<?php
/**
 * Copyright © Magento, Inc. All rights reserved.
 * See COPYING.txt for license details.
 */
namespace Magento\Framework\DB\Sql;

/**
 * Class LimitExpression
 */
class LimitExpression extends Expression
{
    /**
     * @var string
     */
    protected $sql;

    /**
     * @var int
     */
    protected $count;

    /**
     * @var int
     */
    protected $offset;

    /**
     * @param string $sql
     * @param int $count
     * @param int $offset
     */
    public function __construct(
        $sql,
        $count,
        $offset = 0
    ) {
        $this->sql = $sql;
        $this->count = $count;
        $this->offset = $offset;
    }

    /**
     * @inheritdoc
     */
    public function __toString()
    {
        $sql = $this->sql;
        $count = (int)$this->count;
        if ($count <= 0) {
            /** @see Zend_Db_Adapter_Exception */
            #require_once 'Zend/Db/Adapter/Exception.php';
            throw new \Zend_Db_Adapter_Exception("LIMIT argument count=$count is not valid");
        }

        $offset = (int)$this->offset;
        if ($offset < 0) {
            /** @see Zend_Db_Adapter_Exception */
            #require_once 'Zend/Db/Adapter/Exception.php';
            throw new \Zend_Db_Adapter_Exception("LIMIT argument offset=$offset is not valid");
        }

        $sql .= " LIMIT $count";
        if ($offset > 0) {
            $sql .= " OFFSET $offset";
        }

        return trim($sql);
    }
}
{ "pile_set_name": "Github" }
Q: Meaning and Usage of 'Раз уж' I translated a sentence of mine into Russian, which originally read: "I want to hear all about your news and what took place today! Once I have my coffee, of course." Here is my translation: "Я хочу услышать все о твоих новостях и сегодняшних происшествиях! Раз уж у меня есть кофе." At first, I translated the last sentence quite directly as 'Раз у меня есть кофе,' but was changed to 'раз уж...' Could you provide a brief explanation of 'раз уж' (though I can intuit from context, I want to be sure) and an example or two of how it is used? A: "Раз уж у меня есть кофе" translates into something like "...since I already have my coffee", or, to put it another way, "Now that I have got my coffee [I'm ready to listen to you]" -- it implies that you have a cup of coffee in your hand and are ready to listen. There is not much difference here between "раз" and "раз уж" -- the particle "уж" (reduced "уже") simply reinforces completeness of what follows ("у меня есть чашка кофе"). Your English phrase sounds more like you haven't had your coffee yet and are not ready to listen until you have, so it would be better translated as "Расскажи мне, что у тебя нового и что происходило сегодня, но только после того, как я выпью кофе, конечно".
{ "pile_set_name": "StackExchange" }
994 F.2d 1433 Bankr. L. Rep. P 75,277In re Gilbert G. BEEZLEY, Debtor.Gilbert G. BEEZLEY, Appellant,v.CALIFORNIA LAND TITLE COMPANY, Appellee. No. 91-55809. United States Court of Appeals,Ninth Circuit. Submitted Oct. 6, 1992.*Decided June 4, 1993. Gilbert G. Beezley, pro se. Mark E. Rohatiner, Ellen L. Frank, Schneider, Goldberg, Rohatiner & Yuen, Beverly Hills, CA, for appellee. Appeal from the Ninth Circuit Bankruptcy Appellate Panel. Before O'SCANNLAIN and RYMER, Circuit Judges, and ZILLY,** District Judge. PER CURIAM: 1 Debtor Gilbert G. Beezley appeals the decision of the Ninth Circuit BAP, affirming the bankruptcy court's denial of his motion to reopen his bankruptcy case under 11 U.S.C. § 350(b). We have jurisdiction pursuant to 28 U.S.C. § 158(d), and we affirm. 2 Beezley argues that the bankruptcy court abused its discretion by failing to grant his motion to reopen his case. See In re Herzig, 96 B.R. 264, 266 (9th Cir. BAP 1989) (bankruptcy court's refusal to reopen a closed case under 11 U.S.C. § 350(b) reviewed for an abuse of discretion). We disagree. Based on the assumption that amendment was necessary to discharge the debt, Beezley sought to add an omitted debt to his schedules. Beezley's, however, was a no asset, no bar date Chapter 7 case. After such a case has been closed, dischargeability is unaffected by scheduling; amendment of Beezley's schedules would thus have been a pointless exercise. See American Standard Ins. Co. v. Bakehorn, 147 B.R. 480, 483 (N.D.Ind.1992); In re Stecklow, 144 B.R. 314, 317 (Bankr.D.Md.1992); In re Tucker, 143 B.R. 330, 334 (Bankr.W.D.N.Y.1992); In re Peacock, 139 B.R. 421, 422 (Bankr.E.D.Mich.1992); In re Thibodeau, 136 B.R. 7, 10 (Bankr.D.Mass.1992); In re Hunter, 116 B.R. 3, 5 (Bankr.D.D.C.1990); In re Mendiola, 99 B.R. 864, 865 (Bankr.N.D.Ill.1989). If the omitted debt is of a type covered by 11 U.S.C. § 523(a)(3)(A), it has already been discharged pursuant to 11 U.S.C. § 727. If the debt is of a type covered by 11 U.S.C. § 523(a)(3)(B), it has not been discharged, and is non-dischargeable.1 In sum, reopening here in order to grant Beezley's request would not have "accord[ed] relief to" Beezley; thus, there was no abuse of discretion. 3 AFFIRMED. O'SCANNLAIN, Circuit Judge, concurring: 4 The simple question with which we are presented--whether the bankruptcy court abused its discretion by denying the debtor's motion to reopen--requires, in my view, more than a simple answer. I write separately to address certain matters that the per curiam opinion does not discuss, but which are squarely presented on the record before us and implicate important principles of bankruptcy law. 5 * Beezley filed for bankruptcy under Chapter 7 on June 10, 1987. Because he had no assets available for distribution to his creditors in bankruptcy, no bar date was set by the court establishing a deadline for creditors to file proofs of claim. 6 Three years earlier, California Land Title Co. ("Cal Land") had obtained a default judgment against Beezley in California state court arising out of a 1979 transaction in which Beezley was the seller and Cal Land the title insurer of certain real property. Beezley made no mention of Cal Land's claim or of its judgment against him in any of his schedules. Consequently, Cal Land did not receive notice of Beezley's bankruptcy. Beezley received his discharge on November 6, 1987, and his case was thereafter closed. 
7 In January 1990, Beezley moved to reopen his bankruptcy case for the purpose of amending his schedules to add the omitted debt to Cal Land. Cal Land filed a memorandum with the bankruptcy court in opposition to Beezley's motion to reopen, advising the court that Cal Land would seek to establish that its claim was nondischargeable. The bankruptcy court held a hearing, at the conclusion of which it denied Beezley's motion, citing the case of In re Stark, 717 F.2d 322 (7th Cir.1983) (per curiam). The Bankruptcy Appellate Panel ("BAP") subsequently affirmed by memorandum, citing the same authority. II 8 The source of the bankruptcy court's power to reopen a closed case is section 350(b).1 This section gives the court discretion to reopen a case "to administer assets, to accord relief to the debtor, or for other cause." The question posed by this appeal is whether the bankruptcy court abused that discretion in denying Beezley's motion to reopen. See In re Herzig, 96 B.R. 264, 266 (9th Cir. BAP 1989) (decision on motion to reopen reviewed for abuse of discretion). Answering this question is a complicated affair, and requires close attention to the difficult language of sections 523 and 727 of the Bankruptcy Code. 9 * Section 727(b) of the Bankruptcy Code states in part: "Except as provided in section 523 of this title, a discharge under subsection (a) of this section discharges the debtor from all debts that arose before the date of the order for relief under this chapter [i.e., the date of the bankruptcy filing]...." "The operative word is 'all'. There is nothing in Section 727 about whether the debt is or is not scheduled. So far as that section is concerned, a pre-bankruptcy debt is discharged, whether or not it is scheduled." In re Mendiola, 99 B.R. 864, 865 (Bankr.N.D.Ill.1989). See In re Stecklow, 144 B.R. 314, 317 (Bankr.D.Md.1992) ("breadth of the discharge" under § 727 is "comprehensive"); In re Thibodeau, 136 B.R. 7, 8 (Bankr.D.Mass.1992) ("s 727(b) itself makes no exception for unlisted debts"). Thus, unless section 523 dictates otherwise, every prepetition debt becomes discharged under section 727. Section 523(a) provides in part: 10 (a) A discharge under section 727 ... of this title does not discharge an individual debtor from any debt-- 11 (3) neither listed nor scheduled ... in time to permit-- 12 (A) if such debt is not of a kind specified in paragraph (2), (4), or (6) of this subsection, timely filing of a proof of claim, unless such creditor had notice or actual knowledge of the case in time for such timely filing; or 13 (B) if such debt is of a kind specified in paragraph (2), (4), or (6) of this subsection, timely filing of a proof of claim and timely request for a determination of dischargeability of such debt under one of such paragraphs, unless such creditor had notice or actual knowledge of the case in time for such timely filing and request[.] 14 Unscheduled debts are thus divided into two groups: those that are "of a kind specified in paragraph (2), (4), or (6) of this subsection," and those that are not. Loosely speaking, the paragraphs in question describe debts arising from intentional wrongdoing of various sorts (respectively, fraud, fiduciary misconduct, and the commission of malicious torts). What distinguishes these from all other debts is that, under section 523(c) and rule 4007(c), a creditor must file a complaint in the bankruptcy court within 60 days after the date established for the first meeting of creditors in order to assert their nondischargeability. 
Failure to litigate the dischargeability of these sorts of debts right away disables the creditor from ever doing so; an intentional tort debt will be discharged just like any other. 15 Section 523(a)(3) threatens nondischargeability in order to safeguard the rights of creditors in the bankruptcy process. The difference between subparagraphs (A) and (B) reflects the different rights enjoyed by and requirements imposed upon different kinds of creditors. For most creditors, the fundamental right enjoyed in bankruptcy is to file a claim, since this is the sine qua non of participating in any distribution of the estate's assets. Section 523(a)(3)(A) safeguards this right by excepting from discharge debts owed to creditors who did not know about the case in time to file a claim. By contrast, for creditors holding intentional tort claims the salient rights are not only to file a claim but also to secure an adjudication of nondischargeability. Thus, section 523(a)(3)(B) excepts intentional tort debts from discharge notwithstanding the creditor's failure to file a timely complaint under section 523(c) if the creditor did not know about the case in time to file such a complaint (even if it was able to file a timely proof of claim). 16 With this in mind, the convoluted language of section 523(a)(3) can be paraphrased as follows: 17 (a) A discharge does not cover-- 18 (3) an unscheduled debt if-- 19 (A) with respect to a debt not covered by § 523(c), the failure to schedule deprives the creditor of the opportunity to file a timely claim, or 20 (B) with respect to an intentional tort debt covered by § 523(c), the failure to schedule deprives the creditor of the opportunity to file a timely claim or a nondischargeability complaint. B 21 In applying section 523(a)(3) to the case before us, it is preferable to begin with subsection (A). 22 As noted, the entire thrust of subparagraph (A) is to protect the creditor's right to file a proof of claim, and so to participate in any distribution of the assets of the estate. However, "[i]n a case without assets to distribute the right to file a proof of claim is meaningless and worthless." Mendiola, 99 B.R. at 867. The bankruptcy rules therefore permit the court to dispense with the filing of proofs of claim in a no-asset case. 23 In a chapter 7 liquidation case, if it appears from the schedules that there are no assets from which a dividend can be paid, the notice of the meeting of creditors may include a statement to that effect; that it is unnecessary to file claims; and that if sufficient assets become available for the payment of a dividend, further notice will be given for the filing of claims. 24 Bankr.Rule 2002(e). 25 When a no-dividend notice under Rule 2002(e) is sent out, an exception is made to the basic rule requiring proofs of claim to be filed within 90 days after the date established for the first meeting of creditors. Under this exception, creditors need not file a proof of claim unless and until the clerk sends notice that non-exempt assets have been located which may permit a dividend to be paid. Bankr.Rule 3002(c)(5). In practice, "[t]he exception has now subsumed the rule, so that in most cases there is no time limit (bar date) set by the Clerk's office for creditors to file their proofs of claim." In re Corgiat, 123 B.R. 388, 389 (Bankr.E.D.Cal.1991). See In re Tucker, 143 B.R. 330, 332 (Bankr.W.D.N.Y.1992). 
26 The critical point here is that in most cases filed under Chapter 7 (i.e., no asset, no bar date cases), "the date to file claims is never set and thus § 523(a)(3)(A) is not triggered." In re Walendy, 118 B.R. 774, 775 (Bankr.C.D.Cal.1990). That is, in a no asset, no bar date case, section 523(a)(3)(A) is not implicated "because there can never be a time when it is too late 'to permit timely filing of a proof of claim.' " Mendiola, 99 B.R. at 867. See In re Tyler, 139 B.R. 733, 735 (D.Colo.1992); In re Peacock, 139 B.R. 421, 424 (Bankr.E.D.Mich.1992); Walendy, 118 B.R. at 776. 27 "Thus, in the typical no asset Chapter 7 case, where the no dividend statement of [rule] 2002(e) is utilized by the clerk and no claims bar date set, the prepetition dischargeable claim of an omitted creditor, being otherwise unaffected by § 523, remains discharged. In other words, in the typical Chapter 7 case, the debtor's failure to list a creditor does not, in and of itself, make the creditor's claim nondischargeable." Corgiat, 123 B.R. at 391. Stated differently, where section 523 does not except a prepetition debt from discharge, the debt remains within the scope of the discharge afforded by section 727. Scheduling, per se, is irrelevant. See Mendiola, 99 B.R. at 867 ("since Section 523(a)(3)(A) does not apply, the debts the Debtor seeks to add to the schedules are already discharged, even though they were not listed or scheduled"); accord American Standard Ins. Co. v. Bakehorn, 147 B.R. 480, 487 (N.D.Ind.1992); Tyler, 139 B.R. at 735; Stecklow, 144 B.R. at 315; Tucker, 143 B.R. at 334; Peacock, 139 B.R. at 424; Thibodeau, 136 B.R. at 8. Since dischargeability is unaffected by scheduling in a no asset, no bar date case, "reopening the case merely to schedule the debt is for all practical purposes a useless gesture." In re Hunter, 116 B.R. 3, 5 (Bankr.D.D.C.1990); accord American Standard, 147 B.R. at 483 (of "no legal effect"); Stecklow, 144 B.R. at 317 ("futile"); Tucker, 143 B.R. at 334 ("unnecessary" and "unwarranted"); Peacock, 139 B.R. at 422 ("pointless"); Thibodeau, 136 B.R. at 10 ("meaningless"). 28 Similarly, even if an omitted debt falls under section 523(a)(3)(B), no purpose is served by reopening solely in order to amend the schedules; scheduling, per se, is irrelevant to dischargeability even under this subparagraph once a case is closed. As noted above, section 523(a)(3)(B) provides that, if the debt flows from an intentional tort "of a kind specified" in the relevant paragraphs, the debtor's failure to schedule in time to provide notice to the creditor of the need to seek an adjudication of dischargeability is conclusive (at least in the absence of actual knowledge of the bankruptcy on the part of the creditor). The debt is not discharged. "Scheduling makes no difference to outcome. 'Reopening a case does not extend the time to file complaints to determine dischargeability. Either the creditor had actual, timely notice of the [case] or he didn't. Amending the schedules will not change that.' " Mendiola, 99 B.R. at 868 (quoting In re Karamitsos, 88 B.R. 122, 123 (Bankr.S.D.Tex.1988)); accord American Standard, 147 B.R. at 484; Thibodeau, 136 B.R. at 10. III 29 Beezley moved to reopen his bankruptcy case in order to add the omitted debt to Cal Land to his schedules, apparently in the mistaken belief that by amending his schedules he would discharge the debt. 
Cal Land, upon receiving notice of Beezley's motion, vigorously opposed it, also, apparently, under the mistaken impression that the listing of the previously omitted debt would accomplish its discharge. As the analysis set forth above shows, however, because Beezley's was a no-asset, no-bar-date Chapter 7 proceeding, the amendment of Beezley's schedules, in and of itself, could not possibly have had any effect on the status of his obligation to Cal Land. Either the debt was long ago discharged by the operation of sections 523 and 727 or it was not. 30 Beezley's request for leave to amend his schedules was therefore a request for that which is legally irrelevant. The bankruptcy court was surely not required to involve itself in such a pointless exercise. The court thus could, without abuse of discretion, have simply rejected Beezley's motion out of hand. See Mendiola, 99 B.R. at 867. 31 Were this what the bankruptcy court did in fact, I would feel no need to add to what is said in our per curiam opinion. But it did not do so, and the substance of the bankruptcy court's actual ruling (and the BAP's affirmance) reveals, I submit, a misconception that we should not allow to pass uncorrected. 32 The bankruptcy court denied Beezley's motion only after it concluded that the omission of Cal Land from Beezley's schedules was not inadvertent, but was the result of an "intentional design" on Beezley's part. The court reached this conclusion based on the evidence provided by a letter that Beezley had written in 1983 and sent to the state court in which Cal Land's suit against him was then pending. The letter, signed by Beezley, is addressed "To Whom it May Concern," and bears the caption, "Re: Ventura County Superior Court Filing No. 74389, Cal Land Title v. G. Beezley or Air Trans Systems." 33 The bankruptcy court observed that "the existence of the lawsuit and your reference to the lawsuit [in the letter] evidences your knowledge that [Cal Land] want[ed] money from you. It's clear that you knew they had a claim against you." It was this that persuaded the court that the case should not be reopened. "There is other authority from other circuits that states that amending--reopening this case--reopening the case to amend the schedules to add omitted creditors is appropriate where there is no evidence of fraud or intentional design behind the omission. And that's In Re: Stark out of the 7th Circuit. It's a circuit level case."Whatever else might be said, it is incontrovertible that the bankruptcy court did not rely on the reasoning that underlies the per curiam opinion in concluding that Beezley's motion should be denied. Rather, both the bankruptcy court in denying the motion, and the BAP in affirming the denial, treated the rule in Stark as authoritative. Why did the bankruptcy court not simply reject Beezley's motion out of hand as a pointless waste of time? Why did the court feel the need to rely upon authority from another circuit to decide Beezley's motion? 34 The answer, I believe, is that the bankruptcy court thought it was adjudicating the dischargeability of Beezley's debt when it denied his motion to reopen and amend his schedules. That is, the bankruptcy court, just like Beezley and Cal Land, proceeded here on the basis of the erroneous assumption that it would be necessary (and sufficient) for Beezley to reopen the case and add Cal Land to his schedules in order to discharge the omitted debt. 35 This is apparent from examining In re Stark itself. 
In June 1980, the Starks incurred certain hospital bills. In August 1980, they filed a bankruptcy petition. No bar date was set, and no assets distributed. Because the Starks believed that the hospital bills would be paid by their insurance company, they did not include the hospital in their schedule of creditors. The Starks received their discharge in November 1980. As it happened, however, the hospital bills were not paid by the insurance company. The hospital obtained a judgment against the Starks in November 1981. The Starks then moved to reopen their bankruptcy case to amend their schedule of creditors to include the hospital. The Seventh Circuit ruled that they should be permitted to do so. 36 As explained above, there was no need whatsoever to "permit" the Starks to amend their schedules. Since theirs was a no-asset, no-bar-date case, the Stark's debt to the hospital was discharged by the operation of section 727 along with all their other prepetition debts in November 1980. The Seventh Circuit panel that decided the case failed to recognize this. Indeed, the panel believed that if section 523 were literally applied, the Starks' debt would have been excepted from discharge. In this respect, the panel stated that it agreed with the district court that "section 523(a) should not be mechanically applied to deprive a debtor of a discharge in a no asset case...." Id. at 323. 37 Thus the Stark panel believed that it had to "exercise its equitable powers" in order to allow the debtors to discharge their omitted debt. Id. Further the panel believed that exercising those powers to permit the debtors to amend their schedules would achieve the desired end. This explains the holding in the case: "In a no-asset bankruptcy where notice has been given [that no bar date will be set], a debtor may reopen the estate to add an omitted creditor where there is no evidence of fraud or intentional design." Id. at 324. 38 The analysis presented above clearly demonstrates that Stark misstates the law. Stark treats the question whether to reopen a closed no-asset, no-bar-date case to amend the schedule of creditors as equivalent to the question whether to permit discharge of the omitted debt.2 But, again, scheduling, per se, is irrelevant. The legal standard articulated in Stark is simply incorrect, and I would disapprove reliance on it in the bankruptcy courts of this circuit. IV 39 The damage done by an incautious reliance on Stark is far from trivial. By applying Stark, both the bankruptcy court and the BAP effectively held that Beezley was not entitled to litigate the question whether his debt to Cal Land had been discharged by the operation of sections 523 and 727 unless his omission of Cal Land from his schedules was in good faith. Such a holding interposes an equitable barrier between the debtor and his discharge that Congress simply did not enact in the Bankruptcy Code. Nowhere in section 523(a)(3) is the reason why a debt was omitted from the bankruptcy schedules made relevant to the discharge of that debt.3 Courts are not free to condition the relief Congress has made available in the Bankruptcy Code on factors Congress has deliberately excluded from consideration.4 40 It cannot be overemphasized that we deal here with matters that are absolutely fundamental to the integrity of the Bankruptcy Code: the balance struck between the rights of creditors on the one hand, and the policy of affording the debtor a fresh start on the other. 
How to strike that balance is an inordinately difficult question--a question of public policy--as to which reasonable minds may and quite frequently do differ. Our task is, perhaps, a relatively easier one, for we have only to apply the law as Congress has written it. What Congress deemed a proper balancing of the equities as between debtor and creditor with respect to unlisted debts it has enacted in section 523(a)(3) of the Bankruptcy Code. It is not for the courts to restrike that balance according to their own lights. 41 Yet this, albeit inadvertently, is what the panel in Stark did. Stark stated that a debtor must prove his good faith before the discharge of an omitted debt will be recognized. There, this rule passed unnoticed as a sort of boilerplate--the Starks' good faith was never in question. As applied by the bankruptcy court in the circumstances of this case, however, this rule operated to supplant the analysis mandated by section 523, and to substitute in its stead a test involving equitable considerations wholly foreign to that section. See Peacock, 139 B.R. at 427 ("whether or not the debtor was reckless in omitting [the] claim is of no moment" with respect to the discharge of the omitted debt). The result is fundamental error affecting significant rights under the Bankruptcy Code.5 42 The analysis the Code requires is, I submit, as follows: Because Beezley's was a noasset, no-bar-date case, section 523(a)(3)(A) does not bar the discharge of his debt to Cal Land under section 727(b). Cal Land has alleged, however, that Beezley committed fraud in connection with the transaction that was the subject of its lawsuit against him, and that the debt evidenced by the default judgment it obtained against Beezley is therefore nondischargeable under section 523(a)(3)(B). Had Beezley listed this debt in his bankruptcy schedules, Cal Land would have been required under Bankruptcy Rule 4007(c) to litigate this nondischargeability question "within 60 days following the first date set for the meeting of creditors," which had long since passed when this litigation commenced. However, because Beezley failed to schedule the debt, Bankruptcy Rule 4007(b) affords Cal Land the right to litigate dischargeability outside the normal time limits, again in accordance with section 523(a)(3)(B). See American Standard, 147 B.R. at 484 ("In effect, a debtor who fails to list a creditor loses the jurisdictional and time limit protections of Section 523(c) and Rule 4007(c)."). See also In re Lochrie, 78 B.R. 257, 259-60 (9th Cir. BAP 1987). 43 This is the only right Cal Land can claim by virtue of its omission from Beezley's schedules. In particular, Cal Land cannot escape the need to prove nondischargeability merely because Beezley's failure to list his debt to Cal Land may have been intentional or may have prejudiced its ability to show that Beezley committed fraud years ago, as the holding in Stark would suggest. Stark has no place in the analysis of the matter at hand. IV 44 Faced with Beezley's motion on the one hand, and Cal Land's opposition on the other, I believe the bankruptcy court could have construed the matter as a request under Bankruptcy Rule 4007(b) for a determination of dischargeability--for this, as the court itself recognized, was really what both parties wanted.6 This, however, is now of little moment from the standpoint of the litigants. The important point is that whether Beezley's debt to Cal Land is in fact nondischargeable remains to be adjudicated. 
45 In sum, Stark introduces a notion of "good faith" into the Bankruptcy Code's finely tuned system for determining the dischargeability of omitted debts. Because adequate and explicit means for determining dischargeability are provided in the Code itself, the bankruptcy courts of this circuit should place no reliance on Stark. * The panel unanimously finds this case suitable for submission on the record and briefs and without oral argument. Fed.R.App.P. 34(a), Ninth Circuit Rule 34-4 ** The Honorable Thomas S. Zilly, United States District Judge for the Western District of Washington, sitting by designation 1 We express no opinion as to whether the omitted debt was or was not discharged 1 All references are to the Bankruptcy Code, Title 11, United States Code 2 That the Stark case proceeds on this erroneous premise has been repeatedly recognized in the bankruptcy courts. See In re Peacock, 139 B.R. 421, 426 & n. 9 (Bankr.E.D.Mich.1992) (warning against "misplaced reliance on confusing comments in Stark ": "[T]he train began to run off the track when the lawyers in Stark misperceived the issue. The Seventh Circuit failed to put the train back on the track in time to prevent the analytical chaos which has ensued."); In re Thibodeau, 136 B.R. 7, 10 (Bankr.D.Mass.1992) (Stark "is based upon the unexamined assumption ... that in a no-asset case where no claim filing deadline has been fixed, a debt must be listed in order to be discharged"); In re Guzman, 130 B.R. 489, 491 n. 4 (Bankr.W.D.Tex.1991) (Stark "erroneously assumed that, unless the case were re-opened as the debtor requested, the creditor's claim would not be discharged"); In re Musgraves, 129 B.R. 119, 121 n. 6 (Bankr.W.D.Tex.1991) (same); In re Bulbin, 122 B.R. 161, 161 (Bankr.D.D.C.1990) (refusing to follow "dicta in [Stark ] which assumed for purposes of decision and without discussion that listing of an omitted creditor was necessary to make the omitted creditor's claim dischargeable"); In re Hunter, 116 B.R. 3, 5 (Bankr.D.D.C.1990) (same); In re Crull, 101 B.R. 60, 61 (Bankr.W.D.Ark.1989) (Stark "incorrectly assume[d] that if a case is reopened and an omitted creditor's claim is listed by amendment, the discharge automatically and retroactively applies"); In re Mendiola, 99 B.R. 864, 868 (Bankr.N.D.Ill.1989) ("it is clear from the opinion in Stark that the Court assumed that the purpose that would be served by the reopening and addition of the omitted creditor was the discharge of that creditor's claim"); In re Anderson, 72 B.R. 495, 496 (Bankr.D.Minn.1987) (Stark is "based on false premises regarding the nature and effect of a discharge") 3 There need be no concern that applying section 523(a)(3) according to its terms will encourage debtors to ignore their obligation to list all claims in their schedules. A debtor must declare under penalty of perjury that the statements made in his schedules are true and correct. A debtor who knowingly and fraudulently omits a creditor thus risks global denial or revocation of his discharge--that is, the withholding of all bankruptcy relief--under section 727 of the Bankruptcy Code. See 11 U.S.C. §§ 727(a)(4)(A), 727(d)(1). In addition, knowing and fraudulent misstatements in connection with a bankruptcy proceeding may be penalized by up to five years in prison and a $5,000 fine. See 18 U.S.C. § 152 4 That this was a deliberate congressional choice is plain from the legislative history of the Bankruptcy Reform Act of 1978, Pub.L. No. 95-598, 92 Stat. 2549. 
The Senate Report notes that the new section 523(a)(3) "follows current law, but clarifies some uncertainties generated by the case law construing 17a(3) [of the old Bankruptcy Act]." S.Rep. No. 95-989, 95th Cong., 2d Sess. 78-79, reprinted in 1978 U.S.C.C.A.N. 5787, 5864. The formal statements of both the House and Senate leaders responsible for the final shape of the new Bankruptcy Code leave no doubt as to which "uncertainties" were intended to be clarified: "Section 523(a)(3) ... is intended to overrule Birkett v. Columbia Bank, 195 U.S. 345, 25 S.Ct. 38, 49 L.Ed. 231 (1904)." 124 Cong.Rec. H11089 (Sept. 28, 1978), reprinted in 1978 U.S.C.C.A.N. 6436, 6522 (statement of Rep. Edwards); 124 Cong.Rec. S17406 (Oct. 6, 1978), reprinted in 1978 U.S.C.C.A.N. 6505, 6522 (statement of Sen. DeConcini) In Birkett, the Supreme Court construed the predecessor of section 523(a)(3), which excepted from discharge any debt "not ... duly scheduled in time for proof and allowance, ... unless [the] creditor had notice or actual knowledge of the proceedings in bankruptcy." The Court stated that: Actual knowledge of the proceedings contemplated by the section is a knowledge in time to avail a creditor of the benefits of the law--in time to give him an equal opportunity with other creditors--not a knowledge that may come so late as to deprive him of participation in the administration of the affairs of the estate or to deprive him of dividends.... That the law should give a creditor remedies against the estate of a bankrupt, notwithstanding the neglect or default of the bankrupt, is natural. The law would, indeed, be defective without them. It would also be defective if it permitted the bankrupt to experiment with it--to so manage and use its provisions as to conceal his estate, deceive or keep his creditors in ignorance of his proceeding without penalty to him. 195 U.S. at 350, 25 S.Ct. at 39 (emphasis added). The legislative history of section 523(a)(3) declares unambiguously that Birkett was intended to be overruled. Assuming that we require such an explicit directive before we will be moved to heed the clear command of the Bankruptcy Code itself, I see no way to avoid the force of this one. Congress has expressly disapproved the importation of equitable notions of a debtor's good faith or a creditor's fair opportunity to participate in the bankruptcy process into the interpretation and analysis of section 523(a)(3). See Mendiola, 99 B.R. at 869-70 ("[T]he clear language of Section 523(a) is not an aberration, but represents a Congressional policy choice. Congress could have excepted from the debtor's discharge debts that were omitted, intentionally or otherwise, from the schedules. Congress might simply have continued pre-Code law.... Instead, the legislative history shows that Congress expressly overruled that prior law and created the narrow exception found in § 523(a)(3)...."). 5 The equitable rule applied in Stark to a no-asset, no-bar-date case was originally developed for use in a very different kind of bankruptcy. The incautious use of such a standard outside the context in which it originated is at the heart of the problems we confront here The typical Chapter 7 bankruptcy is the no-asset, no-bar-date case. In some instances, however, the debtor has no assets to distribute to creditors, but a bar date is set by the clerk's office. See In re Corgiat, 123 B.R. 388, 390-91 (Bankr.E.D.Cal.1991) (recognizing the importance of this distinction); In re Walendy, 118 B.R. 
774, 775-76 (Bankr.C.D.Cal.1990) (same). In such a case, section 523(a)(3)(A) operates with respect to an omitted creditor as follows: a deadline for filing claims is established; the omitted creditor receives no notice of the debtor's bankruptcy; the deadline for filing claims passes; the debtor's case is closed, with no assets having been distributed; the omitted creditor, technically, has been deprived of the right protected by section 523(a)(3)(A), i.e., the right to file a timely proof of claim; thus, by operation of the plain language of the Bankruptcy Code, the omitted debt would appear to be excepted from discharge. Many courts, however, have felt that this is an inequitable result. After all, since no assets were distributed, the omitted creditor has suffered no real prejudice because of its inability to file a timely proof of claim. Such a creditor is in exactly the same situation as the creditors that did file. Allowing this creditor to retain its pre-bankruptcy claim against the debtor seems to amount to an undeserved windfall, for the creditor is left in a better position than all other creditors merely by virtue of having been left off the debtor's schedules. These courts have thus recognized an equitable exception to the operation of section 523(a)(3). The exception, usually associated with the case of Robinson v. Mann, 339 F.2d 547 (5th Cir.1964), provides that in a no-asset bankruptcy where a bar date was set, a debtor may reopen the case to add an omitted creditor to its schedules nunc pro tunc where there is no evidence of fraud or intentional design, or any material prejudice to the creditor. In this context, technically, it is indeed necessary to reopen the case and add the omitted creditor to the schedules, for only this permits the relation back nunc pro tunc of the scheduling. This procedure is obviously a legal fiction, but it provides a means of avoiding the results of a literal application of section 523(a)(3)(A), thus discharging the omitted debt and fostering the debtor's fresh start. A comparison of Stark and Robinson shows that the rules they announce are, in fact, identical. Yet there is no need for such a rule in a no-asset, no-bar-date Chapter 7, hence no justification for its application. In the Robinson-type case, the debtor, in effect, asks the bankruptcy court to do him a favor, to intercede on his behalf so as to shield him from the operation of the plain language of the Code, and so permit the discharge of his omitted debt. It is entirely appropriate in this context to impose an equitable requirement of good faith on the debtor: if a court is to invoke its equity powers to do the debtor a favor it is not too much to ask that his hands be clean. In a no-asset, no-bar-date case like this one, however, the debtor needs no favors from the bankruptcy court, since his omitted debt will be discharged by the straightforward operation of section 523(a)(3). Applied here, what developed as an equitable condition precedent to the court's granting the debtor additional relief beyond that afforded by the Bankruptcy Code becomes an equitable barrier to the debtor's receiving the relief the Code itself expressly grants. I express no opinion on the propriety of the equitable exception announced in Robinson as applied in its proper context. A debate is currently raging among the bankruptcy courts of this circuit regarding this very issue. Compare In re Laczko, 37 B.R. 676, 678-79 (9th Cir. 
BAP 1984) (rejecting Robinson and adopting "strict" view of § 523(a)(3)), aff'd without op., 772 F.2d 912 (9th Cir.1985), with In re Brosman, 119 B.R. 212, 213-16 (Bankr.D.Alaska 1990) (refusing to follow Laczko ). My point is simply that, whereas Robinson contravenes the plain language of the Code for what is perhaps a good reason, Stark contravenes the Code for no reason whatsoever. 6 The Memorandum filed by Beezley (acting, let us recall, pro se) in support of his motion to reopen in the bankruptcy court requested "relief from a judgment by court after default ... by reopening the estate and permitting scheduling and listing of this debt." So styled, I must agree that the denial of this motion by the bankruptcy court did not constitute an abuse of discretion, for the reasons stated in the per curiam opinion--that is, that the "relief" requested (amendment of the schedule of creditors) was no relief at all
Governor David A. Paterson’s Recovery Stimulus Cabinet is conducting Information Sessions across the State regarding the provisions of the Recovery Bill. An Information Session on the Broadband Provisions of the Recovery Bills will be jointly hosted by the NYS Chief Information Officer and New York State Office for Technology (CIO/OFT) and the NYS Public Service Commission (PSC).

Come hear about the Broadband Initiatives funded by the American Recovery and Reinvestment Act (ARRA) (“Stimulus”) of 2009, and learn how these provisions align with Governor David A. Paterson’s Universal Broadband Strategy for New Yorkers. This meeting is intended for members of the broadband service provider community, the digital literacy training community in the public and private sectors, local/county/state entities, not-for-profit organizations, foundations, schools, community technology centers, libraries and other organizations that provide either Internet services or digital literacy and consumer education programs.

Hear about the two main broadband programs: the Broadband Technology Opportunities Program and the Rural Utilities Service Broadband provisions of the Stimulus Bill. Included in the session will be a question and answer segment. Please email your questions no later than 48 hours before the session begins to [email protected] to enable a productive session.

Invited parties include:
· State and local officials and agencies interested in extending Broadband access to their unserved and underserved urban and rural communities;
· Not-for-profit organizations interested in deploying Broadband access to the populations they serve;
· Libraries, educational and research institutions, foundations or digital literacy education programs interested in building out broadband infrastructure or providing digital literacy training programs to increase broadband demand and adoption; and
· Broadband service providers interested in partnering with other public and private organizations and companies to accelerate the build-out of broadband infrastructures and training programs which will create jobs.

To make a reservation: please send an email to [email protected]. With limited space, the state is asking each organization to limit the number of attendees to no more than four, as a courtesy, to allow more organizations to participate in the first of a series of information sessions.

Thank you for supporting the library budget: The Friends of the Albany Public Library thank you for voting YES on the library budget in 2016.

Welcome: The Friends of the Albany Public Library have meetings several times a year, to which the public is always invited, 5:00 to 6:00 pm, in Community Room 1 on the second floor of the Main Branch, APL. Book reviews and other events are held every Tuesday at noon in the Main Branch.

Quote: "I must say that I find television very educational. The minute somebody turns it on, I go to the library and read a book."

Consider a gift to the Annual Appeal: The Albany Public Library changes lives, answering life's most complex questions and serving as the public's destination for social, intellectual and cultural discourse. As our world changes, Albany's libraries face new challenges with increased demand for more books, materials, and programs housed in aging and outdated facilities. That's why your annual contribution is so important. Your gifts to the library through The Albany Public Library Foundation mean the difference between adequate libraries and great libraries.
With private support, we can expand our collections and services to include more of what customers want in materials, technology, and programs. So, look for the Annual Appeal information in the mail. We thank you. Donations can be sent to the Albany Public Library Foundation, 161 Washington Avenue, Albany, New York 12210. Credit card contributions can be processed by calling 518-427-4346.
Seroprevalence study of Toxocara canis in selected Egyptian patients. This study was conducted to determine the seroprevalence of T. canis infection in 150 selected Egyptian patients with presumptive clinical syndromes: 128 children with respiratory symptoms or pyrexia of unknown origin (PUO) and 22 adults with PUO. Anti-Toxocara antibodies (IgG) were detected in sera by ELISA. The results showed 6.2% positivity in children. Positivity was more frequent in males, in those living in rural areas, and in the 6-12-year age group compared with the 1-6-year group; it was 4% in children with respiratory symptoms and 13.3% in those with PUO. Positivity in adults was 18%. Male gender and residence in rural regions could therefore be considered risk factors for transmission of toxocariasis in children.
Mutt fucked himself 1 year ago today. thanks mutt, I would have never found SFSN without your help & never found this place. I want to thank "bethsucks" for SFSN & Dawg, Monk, Elias & Spazz; for what we have today. you fuckers rock our world. I'd pay big money to see one interviewer ask Howard some embarrassing questions like that. Things like "What exactly IS that thing on your head?" or "Do you ever feel guilty about getting rich off your fans and then pissing on them?".
Q: Select the fields that ever fulfil a condition

Employee table:

    NameId  Name
    1       Andy
    2       Peter
    3       Jason
    4       Thomas
    5       Clark

Employee - Supervisor relations (SupervisorId refers to an employee Id):

    NameId  SupervisorId
    1       4
    1       2
    2       3
    5       4

How can I write a select query that returns all names whose supervisor was ever 'Thomas'? The result I want looks like this:

    Name    Supervisor
    Andy    Thomas
    Andy    Peter   (valid because Andy's supervisors include 'Thomas')
    Clark   Thomas

A: It looks like the employee-supervisor relation doesn't need a separate table, so the query can be simpler:

    select emp.name as Name, spv.name as Supervisor
    from employee emp
    inner join employee spv on emp.spv_id = spv.id
    where spv.name like 'Thomas'
    order by emp.name
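The accepted answer assumes the supervisor id is stored directly on the employee row. If the separate relations table from the question is kept (so an employee can have several supervisors), the expected output, including Andy's other supervisor Peter, can be produced with a subquery. The following is only a minimal sketch using Python's sqlite3 module; the table and column names (employee, relations, name_id, supervisor_id) are assumptions based on the question's description, not a quoted schema.

```python
import sqlite3

# In-memory database with the schema and sample data from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE relations (name_id INTEGER, supervisor_id INTEGER);
    INSERT INTO employee VALUES (1,'Andy'),(2,'Peter'),(3,'Jason'),(4,'Thomas'),(5,'Clark');
    INSERT INTO relations VALUES (1,4),(1,2),(2,3),(5,4);
""")

# List every supervisor row for employees who have Thomas among their supervisors.
query = """
    SELECT emp.name AS Name, spv.name AS Supervisor
    FROM relations r
    JOIN employee emp ON emp.id = r.name_id
    JOIN employee spv ON spv.id = r.supervisor_id
    WHERE r.name_id IN (
        SELECT r2.name_id
        FROM relations r2
        JOIN employee t ON t.id = r2.supervisor_id
        WHERE t.name = 'Thomas'
    )
    ORDER BY emp.name, spv.name;
"""

for name, supervisor in conn.execute(query):
    print(name, supervisor)
# Expected output:
# Andy Peter
# Andy Thomas
# Clark Thomas
```

The subquery keeps only employees who have Thomas among their supervisors, and the outer joins then list every supervisor of those employees, which reproduces the three rows shown in the question.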
1/5, to the nearest integer? 13 What is the square root of 211269 to the nearest integer? 460 What is the square root of 645374 to the nearest integer? 803 What is the third root of 165953 to the nearest integer? 55 What is the square root of 37700846 to the nearest integer? 6140 What is the cube root of 913621 to the nearest integer? 97 What is the sixth root of 362703 to the nearest integer? 8 What is the cube root of 836630 to the nearest integer? 94 What is 20217 to the power of 1/2, to the nearest integer? 142 What is 871654 to the power of 1/3, to the nearest integer? 96 What is 247467 to the power of 1/2, to the nearest integer? 497 What is the cube root of 9023174 to the nearest integer? 208 What is 219611 to the power of 1/2, to the nearest integer? 469 What is 2289298 to the power of 1/8, to the nearest integer? 6 What is the third root of 7869015 to the nearest integer? 199 What is the ninth root of 70854 to the nearest integer? 3 What is the third root of 23870 to the nearest integer? 29 What is the square root of 248331 to the nearest integer? 498 What is 891280 to the power of 1/9, to the nearest integer? 5 What is the square root of 1067670 to the nearest integer? 1033 What is the third root of 7259398 to the nearest integer? 194 What is the fourth root of 2987878 to the nearest integer? 42 What is the cube root of 1071012 to the nearest integer? 102 What is the square root of 2907156 to the nearest integer? 1705 What is the third root of 124250 to the nearest integer? 50 What is the square root of 2069599 to the nearest integer? 1439 What is the ninth root of 387599 to the nearest integer? 4 What is 23206491 to the power of 1/2, to the nearest integer? 4817 What is the fifth root of 13496715 to the nearest integer? 27 What is 2542451 to the power of 1/10, to the nearest integer? 4 What is the cube root of 1865361 to the nearest integer? 123 What is the square root of 1539493 to the nearest integer? 1241 What is 12821452 to the power of 1/5, to the nearest integer? 26 What is 10309967 to the power of 1/2, to the nearest integer? 3211 What is the tenth root of 2228147 to the nearest integer? 4 What is 8620800 to the power of 1/9, to the nearest integer? 6 What is 1479093 to the power of 1/10, to the nearest integer? 4 What is 11528893 to the power of 1/4, to the nearest integer? 58 What is 374712 to the power of 1/4, to the nearest integer? 25 What is 277721 to the power of 1/2, to the nearest integer? 527 What is 298806 to the power of 1/3, to the nearest integer? 67 What is 3685 to the power of 1/5, to the nearest integer? 5 What is 69203 to the power of 1/3, to the nearest integer? 41 What is 12049149 to the power of 1/10, to the nearest integer? 5 What is 2227170 to the power of 1/4, to the nearest integer? 39 What is the cube root of 10587 to the nearest integer? 22 What is the third root of 283550 to the nearest integer? 66 What is 3682142 to the power of 1/2, to the nearest integer? 1919 What is 934357 to the power of 1/3, to the nearest integer? 98 What is 185745 to the power of 1/10, to the nearest integer? 3 What is the third root of 92772 to the nearest integer? 45 What is 125391 to the power of 1/10, to the nearest integer? 3 What is the square root of 14880738 to the nearest integer? 3858 What is the cube root of 1864878 to the nearest integer? 123 What is the cube root of 1430971 to the nearest integer? 113 What is the fifth root of 320182 to the nearest integer? 13 What is the tenth root of 8795944 to the nearest integer? 
5 What is the eighth root of 73241 to the nearest integer? 4 What is 5245569 to the power of 1/5, to the nearest integer? 22 What is 222212 to the power of 1/8, to the nearest integer? 5 What is 783451 to the power of 1/3, to the nearest integer? 92 What is the fourth root of 6322907 to the nearest integer? 50 What is the ninth root of 4723626 to the nearest integer? 6 What is 4954939 to the power of 1/2, to the nearest integer? 2226 What is 625583 to the power of 1/3, to the nearest integer? 86 What is 1105849 to the power of 1/3, to the nearest integer? 103 What is the fourth root of 4820344 to the nearest integer? 47 What is the seventh root of 243476 to the nearest integer? 6 What is the third root of 1164032 to the nearest integer? 105 What is 446693 to the power of 1/2, to the nearest integer? 668 What is 5822009 to the power of 1/10, to the nearest integer? 5 What is 1857604 to the power of 1/9, to the nearest integer? 5 What is the third root of 233532 to the nearest integer? 62 What is the fifth root of 19896305 to the nearest integer? 29 What is the ninth root of 1594430 to the nearest integer? 5 What is the sixth root of 8996299 to the nearest integer? 14 What is 90191 to the power of 1/9, to the nearest integer? 4 What is 17870 to the power of 1/8, to the nearest integer? 3 What is 611308 to the power of 1/2, to the nearest integer? 782 What is the fourth root of 1118653 to the nearest integer? 33 What is 14745 to the power of 1/2, to the nearest integer? 121 What is the square root of 108392 to the nearest integer? 329 What is 122853 to the power of 1/2, to the nearest integer? 351 What is the cube root of 84341 to the nearest integer? 44 What is the third root of 332050 to the nearest integer? 69 What is 16016358 to the power of 1/2, to the nearest integer? 4002 What is 93009 to the power of 1/2, to the nearest integer? 305 What is the eighth root of 863969 to the nearest integer? 6 What is 4200385 to the power of 1/4, to the nearest integer? 45 What is the ninth root of 1397307 to the nearest integer? 5 What is 15064640 to the power of 1/10, to the nearest integer? 5 What is the square root of 99390 to the nearest integer? 315 What is the third root of 83089 to the nearest integer? 44 What is 8697449 to the power of 1/6, to the nearest integer? 14 What is the tenth root of 1026978 to the nearest integer? 4 What is the cube root of 40133 to the nearest integer? 34 What is the fifth root of 565274 to the nearest integer? 14 What is the seventh root of 4589455 to the nearest integer? 9 What is the tenth root of 1654237 to the nearest integer? 4 What is the ninth root of 734041 to the nearest integer? 4 What is 20226801 to the power of 1/3, to the nearest integer? 272 What is 9587236 to the power of 1/2, to the nearest integer? 3096 What is the third root of 18469 to the nearest integer? 26 What is the square root of 835249 to the nearest integer? 914 What is 2651104 to the power of 1/3, to the nearest integer? 138 What is the fifth root of 2029594 to the nearest integer? 18 What is 164562 to the power of 1/2, to the nearest integer? 406 What is the ninth root of 215580 to the nearest integer? 4 What is the square root of 6433795 to the nearest integer? 2536 What is 258492 to the power of 1/2, to the nearest integer? 508 What is the eighth root of 211862 to the nearest integer? 5 What is 6178804 to the power of 1/2, to the nearest integer? 2486 What is 427546 to the power of 1/9, to the nearest integer? 4 What is 2090494 to the power of 1/2, to the nearest integer? 
1446 What is 589068 to the power of 1/3, to the nearest integer? 84 What is the tenth root of 790372 to the nearest integer? 4 What is 68526 to the power of 1/10, to the nearest integer? 3 What is 3417875 to the power of 1/4, to the nearest integer? 43 What is the seventh root of 973961 to the nearest integer? 7 What is 29868635 to the power of 1/3, to the nearest integer? 310 What is the cube root of 7032568 to the nearest integer? 192 What is the third root of 4636413 to the nearest integer? 167 What is the seventh root of 78796 to the nearest integer? 5 What is 1927912 to the power of 1/2, to the nearest integer? 1388 What is the third root of 1101573 to the nearest integer? 103 What is 20871018 to the power of 1/4, to the nearest integer? 68 What is 16522206 to the power of 1/3, to the nearest integer? 255 What is 10664864 to the power of 1/10, to the nearest integer? 5 What is the ninth root of 58986 to the nearest integer? 3 What is the square root of 519582 to the nearest integer? 721 What is 307460 to the power of 1/5, to the nearest integer? 13 What is the tenth root of 34942112 to the nearest integer? 6 What i
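Every question-and-answer pair above amounts to computing an n-th root and rounding it to the nearest integer. As a rough illustration (not part of the original exercise set), here is a small Python sketch that reproduces a few of the stated answers:

```python
def nearest_root(x, n):
    """Integer nearest to the real n-th root of a non-negative integer x."""
    r = int(round(x ** (1.0 / n)))          # float estimate of the root
    while r > 0 and r ** n > x:             # correct the estimate downwards...
        r -= 1
    while (r + 1) ** n <= x:                # ...or upwards, so r = floor(x ** (1/n))
        r += 1
    # Round half up: the real root is >= r + 0.5 exactly when x * 2**n >= (2*r + 1)**n.
    return r + 1 if x * 2 ** n >= (2 * r + 1) ** n else r

# A few of the prompts above, checked against the stated answers.
assert nearest_root(211269, 2) == 460
assert nearest_root(913621, 3) == 97
assert nearest_root(362703, 6) == 8
print(nearest_root(8795944, 10))  # prints 5, matching the stated answer
```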
Ethereum入門 (Introduction to Ethereum)
=============

[Ethereum入門](http://book.ethereum-jp.net) is a technical introduction to Ethereum, the decentralized application platform. It explains how Ethereum works and how to develop decentralized applications with it.

The Ethereum project has released its first stable version (Homestead), but the specification may still change; this book will be updated as the specification evolves.

**As of 2018/3/10, this document is still a work in progress.**

This document is an open project, and we are looking for contributors. The source of the book is published [on GitHub](https://github.com/a-mitani/mastering-ethereum). If you have additions or corrections, please open an Issue or submit a Pull request on the [GitHub](https://github.com/a-mitani/mastering-ethereum) repository above.

This document is published under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />
The historical commission was given unfettered access to ministry documents which showed down to smallest detail how the Nuremberg race laws – which after 1934 transformed Jews into an underclass without rights – allowed the bureaucrats to pillage and steal on an unprecedented scale from their victims, especially after the war began.
A leading jeweller is looking to use digital ledgers, pioneered by Bitcoin, to record the history of precious gems in a bid to increase the transparency of their stones' history and weed out thieves. Leanne Kemp, an international director at Edgelogix, told the Financial Times that she is working with UK insurance firm Aviva to create an online record called Blocktrace to help police authorities and insurers trace the history of precious stones such as diamonds. “I’m not excited about bitcoin. It’s the underlying technology that really excites me,” Kemp said, relating to how Blocktrace could help verify the history of Edgelogix's products on a decentralised online ledger - like that utilised by crypto-currencies such as Bitcoin. If the transactions were recorded on an online ledger system, the firm could break its dependency on third party group such as banks to authenticate gem transactions, which would instead be available on Blocktrace's open record. Kemp added that such technology could be used to map digital ledger certificates onto precious gems. Last September, the Bank of England praised Bitcoin's ledger system in its first crypto-currency report: “The application of decentralised technology to this platform of digital information could have far-reaching implications; other industries whose products were digitised have been reshaped by new technology.” Adding that the distributed ledger could have a much broader use than just in the financial industry.
SJP and MB: Lights, Camera, Relationship 7/31/2008 1:05 PM PDT What infidelity rumors? Sarah Jessica Parker and Matthew Broderick had dinner in NYC last night despite rampant Internet rumors about Matthew cheating. If they go out in public together, they must be happy! Looks like neither has broken things off yet, via Post-it or otherwise.
Apple today uploaded eight new "Shot on iPhone" videos to its YouTube channel, showcasing the video-capture capabilities of the iPhone 6s and iPhone 6s Plus through videos taken by actual iPhone users around the world. Each video clip lasts for 16 seconds and is accompanied by music. Video content ranges from a rain storm in Los Angeles to penguins in Antarctica to a hippopotamus in Botswana. Several of the videos showcase iPhone 6s video features like Slo-Mo, while others are played in reverse or are sped up. Apple's "Shot on iPhone" campaign began in early 2015 following the launch of the iPhone 6 and the iPhone 6 Plus, sharing photos taken with the iPhone 6 in ads and on billboards across the globe. Later in 2015, Shot on iPhone expanded to encompass video imagery collected in a World Gallery. Apple re-launched the "Shot on iPhone" campaign in early 2016 to focus on the camera improvements in the iPhone 6s and the iPhone 6s Plus.
Spatial expression of the alternatively spliced EIIIB and EIIIA segments of fibronectin in the early chicken embryo. Using domain-specific antibodies, we have analyzed the tissue distribution of fibronectins (FNs) containing the alternatively spliced EIIIB and EIIIA segments relative to total FN in early chicken embryos. The results show a selective loss of EIIIA+ FN staining in the notochordal sheath and in cartilaginous structures between 4.5 and 7.0 days of development. In other regions, EIIIB+ and EIIIA+ FNs are extensively codistributed in and around mesoderm-derived structures (somites, notochord, heart, and blood vessels), in basal laminae of endoderm and ectoderm-derived structures, as well as within the vicinity of neural crest formation and migration. We also noted that EIIIA staining overlaps with spatial patterns of distribution that have previously been described for the alpha4 integrin subunit, a component of the EIIIA receptor alpha4beta1.
Libellés Thursday, November 25, 2010 An inviting grave Call me weird, but sometimes, when I want to take a little break from the rush, I take a walk in one of the many cemeteries that we have inside Paris. And for the first time in my life I visited the one of Montmartre, right at the bottom of the "Butte", near the Place de Clichy. A very interesting one, with many famous people. I took a few photos while I was walking without knowing exactly whose grave I was taking. It's only when I came back home that I discovered that I had taken the tomb of a famous French actor who died recently: Jean-Claude Brialy. The Paris cemeteries amaze me. Ours do not intrigue so because they are not dominated by above-ground crypts. (New Orleans is an exception, but they are dangerous due to violent criminals robbing people.) Your cemeteries are historical and artistic journeys. I recommend buying a guide to find the famous people. I've not walked the Montmartre one, but I would like to. By the way, I love the photo of the Opera yesterday. It reminds me of escorting a certain English/Brazilian person back to her safe house after our PDP dinner. We emerged from the Metro and bang! there was Opera Garnier at night. Always stunning at night, no matter what colors. beautiful grave, if appropriate to say. I liked J-C Brialy, saw many old films (black and white) with him. You are not weird Eric, I know many people who like to walk or just sit on cemetaries just because they are so quiet and peaceful. I love the almost closed eyes. Modest and soft tone. Walking in cemetaries surely helps to think about the very quintessence of life! Sometimes we just feel like it has to be thought a little about! I do not think you are weird nor even that your title is. An inviting grave to keep in mind all our remembers and to feel the way ahead. The way to meet new people, to simply look peacefully at our environment, animals, trees, dead leaves, snow, and then the yellow, the blue, the red, etc... The way to forget all and any disagreable parts. The way between those who sound really happy and those who sound less. And the truth behind that...Flore You're not weird. Sometimes I go to cemetaries to get away and to think. For me, It's an escape to another time and place. I read the dates of birth and death, the names of the dead and try to imagine what their lives were like. For some people, it's the only remaining trace of their existence and I like to think that for the few minutes that I spend at their grave wondering about them, I'm somehow keeping their story alive. That's WEIRD!!! Beautiful shot. If wandering in cemeteries is weird, I'm a weirdo too. Happy Thanksgiving to all the PDPers who are celebrating today. I will be giving thanks for the talent and dedication of Mr. Eric Tenin, who makes every day of the year special for me with his wonderful photos. Hi,keep visiting this blog often, but commenting for the first time. I dont find it wierd if you go to a cemetery to take mind off things, where else will you go? the metro:PIs this the same cenetery in Montmarte that is seen in 'Paris J'et aime'? I have never been to Paris, but Montmarte is one location i keep a keen eye on thanks to the French comedy - Amelie:) take careciao
Q: SSH Permission denied for Mininet

I am new to SDN and was trying to learn Mininet. I have installed Debian (64-bit) and Mininet on VirtualBox. When I try to connect to the Mininet VM from Debian I have to run the following command: ssh -X [email protected] It asks for the mininet password, but after entering the default mininet password it shows the error "Permission denied, please try again". Both my Debian and Mininet VMs have the same IP address. How can I eliminate the SSH error? Also, is it fine having the same IP address for two different VMs, and is the SSH error a result of this? Thanks

A: In the VirtualBox settings, under the Network tab, click on Advanced, then Port Forwarding, and add a rule with name: ssh, protocol: tcp, host port: 3022 and guest port: 22. Then execute: sudo ssh -p 3022 [email protected]
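To confirm that the forwarded port works and the credentials are accepted, one option is a short Python check using the paramiko library. This is only an illustrative sketch, not part of the original answer; the host, port and the default mininet/mininet login mirror the answer above and should be adjusted to your own setup.

```python
import paramiko

# Connection details from the answer above: VirtualBox forwards host port 3022
# to guest port 22, and the Mininet VM ships with the default mininet/mininet login.
HOST, PORT = "127.0.0.1", 3022
USER, PASSWORD = "mininet", "mininet"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

try:
    client.connect(HOST, port=PORT, username=USER, password=PASSWORD, timeout=10)
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode().strip())
except paramiko.AuthenticationException:
    print("Authentication failed: check the username/password on the Mininet VM.")
finally:
    client.close()
```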
All photos courtesy Europe Comics

Death of Stalin, the French graphic novel detailing two days of chaos between Joseph Stalin’s stroke and the announcement of his death, will get a new reprint ahead of the upcoming film adaptation. Perfect timing too, because (for some weird reason) we should remind ourselves what actual totalitarian governments are like. The novel, written by Fabien Nury and illustrated by Thierry Robin, focuses on the events between March 2 and March 4, 1953. According to the publisher, it was “two days that encapsulated all the insanity, the perversity, and the inhumanity of totalitarianism.” The graphic novel reveals the scary reality behind Stalin’s control in the days before his death. For example, it opens with an orchestra being forced by the military to play for hours so Stalin could get a recorded version of their performance. After Stalin’s heart attack, the novel goes into the attempts to keep the dictator alive, while members of his government conspire against one another to secure power after his inevitable death. Everyday people are treated like pawns; they live in fear and are ordered to do unspeakable things. Many are imprisoned or killed for slight offenses. It’s a story of violence and greed in a society that’s been corrupted by absolute power. It’s a stern reminder of what dictatorships are really like, taking propaganda away to show the true effect they have on people. However, the creators did say it’s technically historical fiction, even though it’s based on real events, because of how patchy and incomplete accounts from those days are. Titan Comics will be printing a new English version of the graphic novel, as announced at ComicsPRO this week. In part, it’s because of the film adaptation by Veep creator Armando Iannucci, which has gotten U.S. distribution by IFC. The reprint should be available this fall. However, for those who want to check it out sooner, there’s a translated version available on Amazon Kindle. [The Hollywood Reporter]
INTRODUCTION {#s1} ============ Upper track urothelial carcinoma (UTUC) is less common than bladder urothelial carcinoma and accounts for 5--10% of all urothelial carcinoma.^[@b1]^ The incidence of ureteral urothelial carcinoma (UUC) is approximately half that of pyelocaliceal urothelial carcinoma.^[@b2]^ UUC has a worse prognosis than pyelocaliceal urothelial carcinoma.^[@b3],[@b4]^ Owing to different anatomical considerations and oncological outcomes in UTUC, these different malignant entities must be evaluated independently.^[@b5]^ The gold standard treatment for UTUC is radical nephroureterectomy with excision of the bladder cuff, regardless of the tumour location.^[@b6]^ Recently, conservative surgery such as endoscopic ablation or segmental ureteral resection, which allows preservation of the upper urinary renal unit, has also been applied.^[@b7]^ However, preoperative histological evaluation through biopsy of the upper urinary tract is difficult, because ureteroscopy is an invasive procedure and usually requires general anaesthesia. Furthermore, the accuracy of ureteroscopic biopsy in predicting tumour stage and grade is limited, and the limitations of endoscopic biopsy must be balanced against the possible advantage of avoiding radical surgery.^[@b8]^ Thus, accurate preoperative prediction of tumour grade could be helpful in selecting more appropriate therapeutic options. CT urography (CTU) is an imaging modality with high diagnostic accuracy in the detection of UTUC and has replaced intravenous excretory urography and ultrasonography as the first-line imaging test for investigating high-risk patients.^[@b9]^ Even though several studies have investigated diffusion-weighted MRI (DW-MRI) as an imaging assessment for predicting tumour grade of UTUC,^[@b10]--[@b11]^ characteristic CTU findings that can predict tumour grade of UUC have not been identified, to the best of our knowledge. In this study, we aimed to evaluate the correlation between CTU imaging variables, including tumour size and imaging features, and histological grade of UUC, and to identify CTU imaging features that allow prediction of high-grade UUC, which should be treated by radical surgery. METHODS AND MATERIALS {#s2} ===================== Patients -------- This retrospective single-centre study was approved by the institutional review board at and written informed consent was not required. We searched institutional patient information systems to identify all consecutive patients with UUC who had undergone nephroureterectomy between January 2005 and July 2016. A total of 79 consecutive patients who underwent surgery with removal of a surgical specimen for histological analysis were registered. The inclusion criteria for this study were as follows: (i) tumours only located in the ureter, (ii) patients had undergone CTU scan prior to surgery and (iii) histologicalal confirmation of UUC with clear statement of histological grade according to the WHO 2004 classification system. Four patients were excluded because histological grade was not available in the pathological reports, and two patients did not undergo a CTU scan. Ultimately, 73 patients (52 males and 21 females; mean age, 68.92 ± 9.08 years) with 81 UUCs were included in our study. All pathological data were reviewed by a board-certificated pathologist, and all tumours were classified into low-grade and high-grade groups according to the WHO 2004 classification system and pathologic T stage of the tumours was assessed according to the TNM staging system. 
CTU technique ------------- All CTU examinations were performed using various CT scanners from 16-channel to 128-channel MDCT scanners (Somatom Sensation 16, Siemens Healthcare, Brilliance 64, Philips Medical Systems, Best, Netherlands or Somatom Definition Flash 128, Siemens Healthcare Forchheim, Germany). Scanning parameters of the most frequently used CT scanner (Brilliance 64, Philips Medical Systems, Best, Netherlands) were as follows: tube voltage, 120 kVp; effective tube current, 300 mAs; section thickness, 5 mm; pitch and speed, 0.891:1; rotation time, 0.75 s and collimation, 64 × 0.625 mm for 64-channel MDCT. Before acquisition of contrast-enhanced scans, simple unenhanced scans were obtained, after which 2 ml kg^--1^ non-ionic contrast material containing 300--350 mg ml^−1^ of iodine \[iomeprol (Iomeron 300, Bracco Altana Pharma, Konstanz, Germany), iopamidol (Pamiray 300, Dongkook Pharmaceutical, Seoul, Republic of Korea) or iobitridol (Xenetix 300, Guerbet, Villepinte, France)\] was intravenously administered at a rate of 3.0 ml s^−1^ using a standard power injector. For CTU, in addition to the unenhanced scan, two-phase studies were performed with combinations of corticomedullary and excretory phases at our institution. The corticomedullary phase began 30--40 s after contrast administration, and excretory phases began 300 s after contrast administration, respectively. Image analysis -------------- Two radiologists (DJS and STH with 17 and 3 years of experience, respectively, in interpreting genitourinary images) independently reviewed all CTU images on a picture archiving and communication system workstation (INFINITT PACS, INFINITT Healthcare, Seoul, Republic of Korea). The readers knew all patients had been diagnosed with UUC, but were informed of neither the histological grade nor the findings listed in the initial radiological report. They evaluated the following CTU imaging features: tumour location, tumour size, tumour enhancement value, multiplicity, periureteral infiltration, enlarged retroperitoneal lymph nodes with a short axis of more than 1 cm, and hydronephrosis grade. Tumour location was categorized into three groups (proximal, middle, and distal) according to anatomic ureteral segmentation. Tumour size was determined as the maximal length or diameter of the whole tumour presenting as ureteral soft-tissue mass or enhancing wall thickening on the axial, sagittal, or coronal CTU images. In patients with multiple lesions, the largest one was selected for size measurement. Tumour enhancement value was calculated as the difference between attenuation values in the corticomedullary phase and unenhanced phase. On corticomedullary phase images, the readers drew a circular ROI that included the enhancing solid portion of the tumour, avoiding adjacent mesenteric fat. The ROI was as large as possible to minimize noise. A ROI of the same size was placed in the corresponding location on the unenhanced scan image. The readers also reported hydronephrosis grade according to the modified version of the Society for Foetal Urology Hydronephrosis Grading System ([Table 1](#t1){ref-type="table"}). 
###### Modified version of the Society for Fetal Urology Hydronephrosis Grading System Grade 0 1 2 3 4 --------------------------------- --------------- ------------------------------ -------------------------------------- -------------------------------------------------------------- -------------------------------------------------- Ureter and pelvocalyceal system No dilatation Local dilation of the ureter Ureteral and renal pelvis dilatation Ureteral and renal pelvis dilatation plus calices dilatation Further dilatation of ureter, pelvis and calices Renal parenchymal thickness Normal Normal Normal Normal Thin Statistical analysis -------------------- Descriptive statistics of means, standard deviations and frequencies were used to describe patient characteristics. Univariate logistic regression modelling, Mann--Whitney *U* tests, and *Χ*^2^ tests were used to assess the correlation between CTU imaging variables and histological tumour grade. Multiple logistic regression analysis using a backward selection method was performed to identify significantly independent CTU imaging variables that could predict high-grade tumours. Spearman correlation analysis was used to assess the correlation between tumour size and hydronephrosis grade. *Χ*^2^ test and linear-by-linear association were used to investigate the correlation of hydronephrosis grade and peritumoural intfiltration with pathologic T stage. A receiver operating characteristic (ROC) curve was constructed to identify the cut-off value of effective factors that provided the best diagnostic accuracy. Interobserver agreement was calculated using kappa statistics for nominal values, including hydronephrosis grade, peritumoural infiltration, multiplicity and presence of enlarged retroperitoneal lymph nodes. Intraclass correlation was calculated for continuous values including tumour size and contrast enhancement value. The scores were used to define agreement as follows: 0.41--0.60 denoted moderate agreement; 0.61--0.80, good agreement and greater than 0.81, excellent agreement. Statistical analysis was done using IBM SPSS Statistics version 22.0 for Windows (IBM Corp., Armonk, NY). A *p* value of less than 0.05 was considered statistically significant. RESULTS {#s3} ======= Images of the 73 patients with 81 UUCs were reviewed. The lesions were unilateral in all cases. 15 patients (20.5%) had low-grade UUCs ([Figure 1](#f1){ref-type="fig"}) and 58 patients (79.5%) had high-grade UUCs ([Figure 2](#f2){ref-type="fig"}). 22 (27.1%) lesions were located in the proximal ureter, 14 (17.2%) in the middle ureter, and 45 (55.5%) in the distal ureter. Eight (5.8%) patients had multiple lesions in the ipsilateral ureter. Clinicopathological characteristics of the patients are summarized in [Table 2](#t2){ref-type="table"}. ![A 74-year-old male with a low-grade tumour in the right distal ureter. Axial (a) and coronal (b) contrast-enhanced CT images demonstrate a soft tissue tumour (arrow) in the right distal ureter without hydronephrosis in the right kidney. The tumour was 16 mm in length and was pathologically proven to be low-grade urothelial carcinoma after radical nephroureterectomy.](bjr.20170159.g001){#f1} ![A 80-year-old male with high-grade tumour in the right middle ureter. Axial (a) and coronal (b) contrast-enhanced CT images demonstrate a soft tissue tumour (arrow) in the right middle ureter. 
Coronal CT images (b and c) show the dilated right upper ureter (arrow head) and Grade 4 hydronephrosis (arrow head) in the right kidney, respectively. The tumour was 7 mm in length and was pathologically proven to be high-grade urothelial carcinoma after radical nephroureterectomy.](bjr.20170159.g002){#f2} ###### Clinicopathological characteristics of enrolled patients Characteristic Data -------------------------------------------------------- ----------------- Age (years)[*^a^*](#tb2fn1){ref-type="fn"} 68 ± 9 (43--86) Sex[*^b^*](#tb2fn2){ref-type="fn"}  Male 52 (71.2)  Female 21 (28.8) Hitologic grade of UTUC[*^b^*](#tb2fn2){ref-type="fn"}  High grade 58 (79.5)  Low grade 15 (20.5) Data are presented as mean (range) values. Data are presented as number (percentage) of patients. CTU imaging variables (tumour size, multiplicity, peritumoural infiltration, hydronephrosis grade, contrast enhancement value, presence of enlarged retroperitoneal lymph nodes) with respect to histological grade of UUCs are summarized in [Table 3](#t3){ref-type="table"}. The readers had excellent agreement for the other CT variables (*к* = 0.862 for hydronephrosis grade, intraclass correlation = 0.829 for tumour size, intraclass correlation = 0.892 for contrast enhancement value). In addition, there were good or moderate interobserver agreements for the other subjective assessments (*к* = 0.748 for multiplicity, *к* = 0.546 for periureteral infiltration). Tumour size was significantly larger in the high-grade group than in the low-grade group according to reader 1 (*p* = 0.028). Hydronephrosis grade was significantly higher in the high-grade group than in the low-grade group (*p* \< 0.001 for both readers). There was no significant difference in multiplicity, peritumoural infiltration, contrast enhancement value, or presence of enlarged retroperitoneal lymph nodes between the two groups. 
###### Clinical characteristics of the enrolled patients according to histological grade Grade -------------------------------------- ----------------------- ---------------------- ---- ---------------------------------------- **Reader 1** Tumour size (mm) 39.7 (10--140) 23.3 (1--41) 0.028^[*b*](#tb3fn2){ref-type="fn"}^ Hydronephrosis grade \<0.001^[*c*](#tb3fn3){ref-type="fn"}^ 4 22 (37.9) 1 (6.7) 23 3 27 (46.6) 2 (13.3) 29 2 6 (10.3) 5 (33.3) 11 1 2 (3.4) 2 (13.3) 4 0 1 (1.7) 5 (33.3) 6 Enhancement value 56.4 (2--120) 51.2 (6--92) 0.508 Peritumoural infiltration 0.07^[*c*](#tb3fn3){ref-type="fn"}^ Present 17 (29.3) 1 (6.7) 18 Absent 41 (70.7) 14 (93.3) 55 Multiplicity 0.55^[*c*](#tb3fn3){ref-type="fn"}^ Present 7 (12.1) 1 (6.7) 8 Absent 51 (87.9) 14 (93.3) 65 Enlarged retroperitoneal lymph nodes 0.611^[*c*](#tb3fn3){ref-type="fn"}^ Present 11 (19.0) 2 (13.3) 13 Absent 47 (81.0) 13 (86.7) 57 Reader 2 Tumour size(mm) 43.10 (11--140) 34.50 30.14 (15--58) 27.50 0.234^[*b*](#tb3fn2){ref-type="fn"}^ Hydronephrosis grade \<0.001^[*c*](#tb3fn3){ref-type="fn"}^ 4 22 (37.9) 1 (6.7) 23 3 27 (46.6) 4 (26.7) 31 2 4 (7.0) 2 (13.3) 6 1 4 (7.0) 2 (13.3) 6 0 1 (1.7) 6 (40.0) 7 Enhancement value 55.5 (2--121) 58.9 (19--101) 0.793 Peritumoural infiltration 0.127^[*c*](#tb3fn3){ref-type="fn"}^ Present 8 (13.8) 0 (0.0) 8 Absent 50 (86.2) 15 (100.0) 65 Multiplicity 0.239^[*c*](#tb3fn3){ref-type="fn"}^ Present 5 (8.6) 0 (0.0) 5 Absent 53 (91.4) 15 (100.0) 68 Enlarged retroperitoneal lymph nodes 0.611^[*c*](#tb3fn3){ref-type="fn"}^ Present 11 (19.0) 2 (13.3) 13 Absent 47 (81.0) 13 (86.7) 57 Pathologic T stage \<0.001^[*c*](#tb3fn3){ref-type="fn"}^ Ta 4 (6.9) 6 (40.0) 10 T1 14 (24.1) 8 (53.3) 22 T2 11 (19.0) 0 (0.0) 11 T3 29 (50.0) 1 (6.7) 30 Data are presented as number (percentage) of patients. Mann--Whitney *U* test. Pearson's *Χ*^2^ test. Univariate logistic regression analysis revealed that hydronephrosis of Grade 3 or higher was significantly associated with high-grade tumour for both readers, and tumour size was significantly associated with high-grade tumour for reader 1. Multivariate logistic regression analysis using a backward selection method demonstrated that only hydronephrosis of Grade 3 or higher was a significant independent predictor of high-grade tumour for both readers ([Table 4](#t4){ref-type="table"}). Other CTU imaging variables, including tumour size, were omitted as independent variables in multivariate logistic regression analysis. In addition, there was no significant correlation between tumour size and hydronephrosis grade according to Spearman correlation analysis. 
###### Results of the multivariate logistic regression analysis with backward selection of independent variables predictive of high-grade tumours Univariate logistic Multivariate logistic with variable selection -------------------------------------- ---------------------- ----------------------------------------------- -------------------- ------- **Reader 1** Tumour size (mm) 1.050 (1.006--1.096) 0.025 Grade of hydronephrosis 4 110 (5.83--2074.45) 0.002 72 (3.67--1411.89) 0.005 3 67.50 (5.09--893.63) 0.001 48 (3.48--661.60) 0.004 2 6 (0.51--69.75) 0.152 6 (0.47--75.34) 0.165 1 5 (0.27--91.51) 0.278 8 (0.31--206.37) 0.21 0 0.16 0.097 Enhancement value 1.01 (0.98--1.03) 0.536 Peritumoural infiltration 5.81 (0.71--47.69) 0.102 Multiplicity 1.92 (0.22--16.95) 0.557 Enlarged retroperitoneal lymph nodes 1.47 (0.29--7.53) 0.646 **Reader 2** Tumour size (mm) 1.027 (0.99--1.06) 0.146 Grade of hydronephrosis 4 126 (6.82--2328.09) 0.001 68 (3.46--1336.27) 0.005 3 40.5 (3.81--430.28) 0.002 24 (2.11--273.59) 0.009 2 12 (0.80--180.97) 0.073 16 (0.72--354.80) 0.08 1 12 (0.780--180.97) 0.073 16 (0.72--354.80) 0.08 0 0.01 Enhancement value 0.99 (0.97--1.02) 0.718 Peritumoural infiltration 5.22 (0.24--113.30) 0.293 Multiplicity 3.25 (0.12--81.44) 0.4737 Enlarged retroperitoneal lymph nodes 1.47 (0.29--7.53) 0.646 Values in parentheses are 95% confidence intervals. Pathologic T stage did not significantly correlate with peritumoral infiltration and hydronephrosis grade, respectively ([Table 5](#t5){ref-type="table"}). ###### Pathologic T stage correlation with periureteral infiltration and hydronephrosis grade -------------------- ------------------------------ ---------------------------------------- ----------- ------------------ ----------- ------------ ---------------------------------------------------------------------------- **Reader 1** **Peritumoral infiltration** **Hydronephrosis grade (*****n*****)** **Total** ***p*****value** Pathologic T stage Present Absent 0 1, 2 3, 4 0.194^[*a*](#tb5fn1){ref-type="fn"}^, 0.308^[*b*](#tb5fn2){ref-type="fn"}^ Ta-T1 5 (15.6) 27 (84.4) 4 (12.5) 9 (28.1) 19 (59.4) 32 (100.0) T2 4 (36.3) 7 (63.7) 0 (0.0) 4 (36.4) 7 (63.7) 11 (100.0) T3-4 9 (30.0) 21 (70.0) 2 (6.7) 2 (6.7) 26 (86.7) 30 (100.0) Total 18 (24.7) 55 (75.3) 6 (8.2) 15 (20.6) 52 (71.2) 73 (100.0) **Reader 2** **Peritumoral infiltration** **Hydronephrosisgrade (*n*)** **Total** ***p*value** Pathologic T Stage Present Absent 0 1, 2 3, 4 0.403^[*a*](#tb5fn1){ref-type="fn"}^, 0.173^[*b*](#tb5fn2){ref-type="fn"}^ Ta-T1 1 (3.1) 31 (96.9) 5 (15.6) 7 (21.9) 20 (62.5) 32 (100.0) T2 1 (9.0) 10 (91.0) 0 (0.0) 3 (27.3) 8 (72.7) 11 (100.0) T3-4 6 (20.0) 24 (80.0) 2 (6.7) 2 (6.7) 26 (86.7) 30 (100.0) Total 8 (11.0) 65 (89.0) 7 (9.7) 12 (16.6) 54 (73.7) 73 (100.0) -------------------- ------------------------------ ---------------------------------------- ----------- ------------------ ----------- ------------ ---------------------------------------------------------------------------- Data in parentheses are percentages. *p* value from the *Χ* ^2^ test for correlation of peritumoural intfiltration with pathologic T stage. *p* value from the *Χ* ^2^ test for correlation of hydronephrosis grade with pathologic T stage. ROC curve analysis showed that the best cut-off point of hydronephrosis grade was 2.5 for the prediction of high-grade tumour. The area under the curve (AUC) using the final model was 0.856 for reader 1 and 0.813 for reader 2 ([Figure 3](#f3){ref-type="fig"}). 
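As a rough illustration of how a cut-off like this is usually derived (this sketch is not part of the study and uses invented patient-level values, since the real per-patient data are not reported here), hydronephrosis grade can be treated as the score in a standard ROC analysis and the threshold chosen by maximising Youden's J statistic, for example with Python and scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-patient data: hydronephrosis grade (0-4) as the score,
# 1 = high-grade tumour, 0 = low-grade tumour. Not the real study data.
grade = np.array([4, 3, 3, 4, 2, 3, 0, 1, 2, 4, 3, 0, 1, 2, 3, 4, 0, 3, 2, 4])
high_grade = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1])

auc = roc_auc_score(high_grade, grade)
fpr, tpr, thresholds = roc_curve(high_grade, grade)

# Youden's J = sensitivity + specificity - 1; the maximising threshold is the cut-off.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"best cut-off: grade >= {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```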
For clinical application in practice, the optimal cut-off grade of hydronephrosis was set at Grade 3, which corresponded to a prediction of high-grade UUC with an AUC of 0.830 and sensitivity and specificity of 88 and 79%, respectively, for reader 1, and AUC of 0.763 and sensitivity and specificity of 86 and 80%, respectively, for reader 2 ([Figure 4](#f4){ref-type="fig"}). ![Receiver operating characteristic curve for predicting high tumour grade, with the best hydronephrosis grade cut-off point being 2.5. The AUC was 0.856 for reader 1, and 0.813 for reader 2, respectively. The diagonal line represents an AUC of 0.50. AUC, area under the curve.](bjr.20170159.g003){#f3} ![Receiver operating characteristic curve for predicting high-grade tumour at a cut-off point of Grade 3 hydronephrosis. The AUC was 0.833 for reader 1, and 0.754 for reader 2, respectively. The diagonal line represents an AUC of 0.50. AUC, area under the curve.](bjr.20170159.g004){#f4} DISCUSSION {#s4} ========== As with most other malignancies, the most accurate independent predictors of prognostic outcome in UUC are tumour stage and grade.^[@b14]^ However, preoperative tumour staging is difficult in UUC because the accuracy of imaging and endoscopic biopsy for T categorization remains unsatisfactory. It is not possible to differentiate a T1 lesion from T2 UUC on CTU, and it is also difficult to obtain representative muscularis tissue with ureteroscopic biopsy. Even though T3 lesions can be characterized by periureteral infiltration, current imaging modalities cannot reliably identify microscopic invasion. Periureteral infiltration, which represents the invasiveness of UUC on CT, can cause overstaging due to additional inflammatory changes, while understaging can occur due to microscopic invasion.^[@b15]^ In our study, there was no significant correlation between periureteral infiltration on CT and tumour grade. In addition, periureteral infiltration on CT did not significantly correlate with pathologic T stage. In clinical practice, tumour grade is a crucial factor in determining whether radical surgery or endoscopic conservative treatment is optimal for UUC, because accurate tumour staging is only available postoperatively based on the pathological evaluation of radical nephroureterectomy specimens. Ureteroscopic evaluation and biopsy definitively set up the diagnosis of UUC and provide fundamental information for risk stratification and clinical management. 
Several studies have reported that biopsy tumour grade accurately predicts surgical tumour grade in 78--91.6% of patients.^[@b16]--[@b18]^Contrary to these reports, it has been shown that ureteroscopic biopsy performance is inadequate in predicting final pathological grade.^[@b19],[@b20]^ Tumour grade is misinterpreted in more than one third of patients with conservatively managed UTUC,^[@b19]^ and 15% of high-grade tumours are underestimated as low-grade urothelial carcinoma.^[@b20]^ DW-MRI has shown potential as an biomarker in oncological imaging practice, and apparent diffusion coefficient (ADC) values obtained from DW-MRI may help predict tumour invasiveness and metastatic potential of UTUC.^[@b21]^ Some researchers report that high-grade UTUCs have significantly lower ADC values than low-grade tumours.^[@b10],[@b11]^ More recently, however, others have found no significant correlation between ADC value and histological grade of UTUC.^[@b12],[@b13]^ Furthermore, different imaging sequences, parameters, and MRI scanners can cause inconsistency in ADC measurement. Thus, our study aimed to determine whether CTU imaging features reproducible in routine practice could preoperatively predict the histological grade of UUC. There have been a number of studies demonstrating an association between hydronephrosis and advanced clinicopathological features and poor oncologic outcomes in UTUC.^[@b22]--[@b27]^Pyelocaliceal urothelial carcinomas usually do not result in urinary tract obstruction except in tumours involving the ureteropelvic junction. In contrast, UUCs are more likely to have hydronephrosis compared to pyelocaliceal urothelial carcinomas.^[@b4]^ A few studies that focused on UUC alone also reported a predictive role of hydronephrosis in advanced pathological features.^[@b5],[@b28]^ Cho et al found that 86% of patients with hydronephrosis of Grade 3 or 4 had an invasive tumour of T2 stage or greater.^[@b28]^ However, their research was based on various imaging assessments using CT, excretory urography, and renal ultrasonography. In our study, there was no significant correlation between hydronephrosis grade and pathologic T stage. Luo et al reported that hydronephrosis of Grade 2 or higher was associated with non-organ-confined disease,^[@b5]^ although their imaging review was not performed either in consensus or independently, and the specificity was limited to 37.3%. Chung et al assumed that hydronephrosis may cause outward expansion and longitudinal thinning of the already narrow ureter or renal pelvis wall, which may facilitate the seeding of cancer cells to regional or distant organs.^[@b22]^ Even so, the mechanism of the development of hydronephrosis and its relationship with tumour invasiveness is not fully understood.^[@b5]^ To the best of our knowledge, however, our study is the first evaluation of the association between hydronephrosis grade and tumour grade in pure UUC, and adequate diagnostic performance (sensitivity and specificity over 79%) was obtained at a cut-off point of hydronephrosis Grade 3 in the prediction of high-grade tumours. Cho et al reported that the tumour diameter of UUC correlated with pathological T stage and 80% of patients with a tumour diameter of 1.5 cm or greater had invasive UUC.^[@b28]^ In their study, however, tumour diameter was measured on axial CT images and was classified as less than 1.5 cm, greater than or equal to 1.5 cm but less than 2.5 cm, and 2.5 cm or greater. 
In our study, in which the largest tumour size was measured on multireconstructed images, tumour size did not independently predict tumour grade. In addition, our study showed no significant association between tumour size and hydronephrosis grade. At the time of diagnosis, patients with UTUC and a contralateral normal kidney can be classified as having low-risk UTUC or high-risk UTUC.^[@b29]^ Preoperative clinical factors associated with low-risk UTUC include low-grade ureteroscopic biopsy, low-grade cytology, tumour size \<1 cm, no invasive features on cross-sectional imaging, unifocal disease, and the availability of feasible close follow-up.^[@b29]^ According to the current European guidelines on UTUC,^[@b7]^ diagnostic ureteroscopy with biopsy should be performed in the preoperative assessment of UTUC. On the other hand, the routine use of ureteroscopy is not advocated for the confirmation of UTUC.^[@b30]^ Based on the results of our study, the need for ureteroscopy and biopsy may be obviated in patients with UUC causing hydronephrosis of Grade 3 or higher. The current study has limitations. First, the study population was relatively small due to the rarity of UUC, and because the study was conducted retrospectively at a single institution, the possibility of selection bias should be considered. Prospective multicentre studies with larger sample size are needed to validate our results. Second, the direct imaging-pathological correlation was not obtained in tumour size assessed on CTU. Consequently, tumour size could have been overestimated if there was concomitant inflammation. In conclusion, high-grade hydronephrosis on preoperative CTU was significantly associated with high-grade UUC. The results of the current study may help develop algorithms for risk stratification of patients with pure UUC. Radical surgical treatment should be considered in patients with UUC causing hydronephrosis of Grade 3 or higher regardless of tumour size and absence of peritumoural infiltration on CTU.
### Updated timestamp (Updated)

The `updated` tag lets xorm automatically set a marked field in the database to the current time whenever a record is inserted or updated. Add the `updated` tag in the xorm struct tag as shown below; the corresponding field can be `time.Time`, a custom `time.Time` type, or an integer type such as `int` or `int64`.

```Go
type User struct {
	Id        int64
	Name      string
	UpdatedAt time.Time `xorm:"updated"`
}
```

When the Insert(), InsertOne(), or Update() method is called, the field marked `updated` is automatically set to the current time, as shown below:

```Go
var user User
engine.Id(1).Get(&user)
// SELECT * FROM user WHERE id = ?
engine.Id(1).Update(&user)
// UPDATE user SET ..., updated_at = ? WHERE id = ?
```

If you want to temporarily skip setting the time automatically, you can chain the NoAutoTime() method:

```Go
engine.NoAutoTime().Insert(&user)
```

This is useful when copying fields from one table to another.
Multiple Image Layout

Below is a working example of this template. You'll find simple instructions on how to use it beneath it. This particular template provides interaction with its viewer. All four images have links attached to them; they just need to be specified/customized by right-clicking on the image and clicking "Insert/edit link". This also applies to the links below the images.

You can customize:
- Images
- Text
- 'href' links

You cannot customize:
- Image size
- Layout
- Spacing between images

Note: for images to display properly in this template, use these sizes:
We all know that RiRi and Breezy had one of the most colorful histories among celebrity couples. The pair had a roller coaster romance and have broken up and rekindled their relationship a couple of times before permanently calling it quits in 2014. Now some fans are panicking that Rihanna may be missing her ex-boyfriend and could rekindle their old flame. The Bajan pop beauty posted a pic of herself a few weeks back and captioned it, “When you hang up on em, then call right back. #firstofallimcrazy #secondofalliwasntdone.” Chris Brown has a song on his new album Heartbreak On a Full Moon titled “Other Ni**as” where he sings some similar lines. “Oh girl, why you gotta be like that? (Girl, like that) / Why gotta hang up on me then you call right back? (Back) / How you compare me to them ni**as that you gave your heart to? (Heart to),” Chris Brown sings. Rihanna made her Instagram post a few days before Brown dropped the album, and some fans are now saying that he might have given her a copy of the project before it was released, which would mean that they are secretly communicating with each other. On the other hand, it could be just sheer coincidence that RiRi posted a quote identical to Breezy’s lyrics just a few days before he dropped an album, but just how likely is that? Listen to the full song “Other Ni**as” below.
Need to sell your home fast? Hire a professional interior designer
24th April 2018

Whether you’ve just bought a new property, or are planning on selling, designing and decorating a whole home can be a mammoth task. Where do you start? What colours should you go for? And what fabrics will look fantastic? Most of all, how much will it all cost? Hiring an interior designer can give you the talent, experience and perspective you need to help save you time, effort and money. From a resale point of view, homes that have had the touch of a talented professional are generally more attractive and easier to sell.

Brands spend thousands of pounds trying to get the right look and feel for their products. Whole teams are dedicated to this process, from the product development and packaging design right through to the marketing. When your home is likely to be one of your biggest investments, it makes sense that your approach should be the same. At Mood Interiors, we believe that hiring a professional interior designer can help sell your home for a higher price and more quickly. Here’s why:

Boost your home’s appeal
In America, many home sellers realise the value of investing in interior design before they put their house up for sale. However, in the UK, this isn’t necessarily the case. It only takes a quick glance at the online property portals to see lots of dreary houses for sale. Many of them are poorly decorated, unstyled and photographed in a bad light. An interior designer can make your home stand out from the crowd and cause people to stand up and take notice when they spot it for sale. What’s more, when they come to view, they won’t be disappointed.

Accentuate the benefits
Interior designers can work the space available to the best of its ability and are likely to spot plus points you might not have noticed. This will allow you to accentuate every benefit your property offers and potentially ask a higher price.

Create a coherent home
A house that is a mishmash of styles, furniture and colours can be jarring on the eye and turn buyers off. That’s why before putting it on the market, it’s important to ensure that everything in your home flows together seamlessly. An interior designer can help you use styles, colours and patterns to help achieve this.

Fetch a higher price
It might sound odd, but the phrase ‘spend money to make money’ rings true when it comes to employing an interior designer to help sell your home. Not only will they be able to frame your home in the best light possible, they’re likely to increase its perceived value. Overall, you’re likely to see a real return on investment that pays the interior designer fee many times over.

Create a more universal look
As a home owner, it’s easy to get caught up in the interiors that you love and suit your personality. But naturally, not all people will feel the same about your baroque wallpaper or family photo wall. Good interior designers aren’t just skilled in creating interiors that suit your personality, they also have a knack for styling homes in a universally appealing way. Because you never know what people will like or dislike, this ability to create a show home vibe can be highly valuable and significantly broaden your pool of potential purchasers.

At Mood Interiors, we help people like you frame your home in the best light possible – whether you’re selling or staying put.
If you’re looking to add that special something to your property, why not contact us?
{ "pile_set_name": "Pile-CC" }
Introduction {#S1} ============ Despite significant improvements in outcome,([@R1]--[@R3]) relapse remains the leading cause of treatment failure for children with acute lymphoblastic leukemia (ALL) and occurred in 11 to 36% of those with high-risk B-precursor ALL.([@R4]--[@R10]) Mechanisms by which genomic variation influence relapse risk could involve somatically acquired mutations or inherited genetic variations, which could affect intrinsic resistance to chemotherapy([@R11]--[@R13]) or host pharmacokinetics of anti-leukemic agents.([@R14]--[@R16]) Some studies report that black and Hispanic children with ALL have inferior outcomes to non-Hispanic white children.([@R17]--[@R21]) Reasons for these differences are likely multifactorial, including differences in treatment adherence and access to therapy,([@R22]--[@R24]) in the incidence of favorable and unfavorable presenting features and cytogenetics,([@R25]--[@R27]) and in the frequency of genetic variants affecting pharmacokinetics and pharmacodynamics of antileukemic agents which segregate with ancestry.([@R28]) It remains uncertain whether racial disparities persist with modern intensive ALL regimens. We performed a genome wide association study (GWAS) in a large cohort of children with high-risk B-ALL to identify inherited genetic variations associated with relapse. We performed an analysis adjusting for both treatment and ancestry to identify single nucleotide polymorphisms (SNPs) which increased risk across ancestries (ancestry-agnostic SNPs). Because racial disparities in relapse persisted in this trial, we also performed analyses within each of the three largest ancestral groups (white, black, Hispanic) to identify ancestry-specific variations associated with relapse. We also interrogated relapse SNPs for associations with risk of central nervous system (CNS) relapse, relapse among patients randomized to receive either escalating-dose methotrexate and asparaginase (i.e., Capizzi regimen) or high-dose methotrexate during the first interim maintenance (IM1), and for associations with the pharmacokinetics of antileukemic agents or the intrinsic sensitivity of leukemia cells to chemotherapy. Finally, to assess robustness of relapse SNPs across different therapies, we tested for replication in an independent cohort. Methods {#S2} ======= Patients and treatment {#S3} ---------------------- For the discovery cohort, germline DNA was obtained at remission in children and young adults with newly diagnosed B-precursor ALL enrolled on COG AALL0232 (NCT00075725, <https://clinicaltrials.gov/ct2/show/NCT00075725>).([@R8]) This protocol involved a 2×2 factorial randomization for induction steroid (prednisone ×28 days vs. dexamethasone ×14 days) and interim maintenance 1 regimen (Capizzi escalating-dose methotrexate with pegylated-asparaginase vs. high-dose methotrexate). Exclusion criteria are described in [Figure 1](#F1){ref-type="fig"} and the [Supplementary Methods](#SD1){ref-type="supplementary-material"}. The replication cohort comprised children treated on prior generation protocols who would have met the eligibility criteria of AALL0232 ([Supplementary Methods and Supplementary Table 1](#SD1){ref-type="supplementary-material"}). All studies were approved by the institutional review boards of participating institutions, and all patients and/or guardians provided age appropriate consent/assent in accordance with the Declaration of Helsinki. 
Genotyping and genetic ancestry {#S4} ------------------------------- Genotyping and genetic imputation was performed as described in the [Supplementary Methods](#SD1){ref-type="supplementary-material"}. Genetic ancestry was defined using STRUCTURE v2.2.3.([@R29]) For categorization of patients into discrete ancestral groups, individuals were classified based on inferred genetic ancestry as white \[Northern European (CEU) \>90%\], black \[West African (YRI) \>70%\], Hispanic \[Native American([@R30]) \>10% and Native American greater than West African\], or Other, including Asian \[East Asian (CHB/JPT) \>90%\]. Quality control steps for both patients and SNPs are detailed in the [Supplementary Methods](#SD1){ref-type="supplementary-material"}. Identification of relapse associated SNPs {#S5} ----------------------------------------- The approaches to perform GWASs for relapse are detailed in the [Supplementary Methods](#SD1){ref-type="supplementary-material"}. GWASs were performed to identify SNPs using an ancestry-agnostic ([Supplementary Table 2 and Supplementary Figure 1](#SD1){ref-type="supplementary-material"}) and an ancestry-specific approach ([Supplementary Figures 2a--c](#SD1){ref-type="supplementary-material"}). Treatment arm and site specific annotation of relapse SNPs {#S6} ---------------------------------------------------------- SNPs associated with relapse were further characterized in subsets of patients based on their IM1 randomization (the Capizzi arm with escalating-dose methotrexate plus pegylated-asparaginase vs. the high-dose methotrexate arm) while adjusting for induction randomization, rapid early response, and ancestry as categorical variables. Additionally, SNPs were tested for their association with CNS relapse (isolated or combined with other sites), with isolated hematologic or other extramedullary relapse treated as competing risks. Significant association thresholds for all analyses were determined by profile information criteria (Ip),([@R31]) which balances false positives and negatives while addressing the effects of multiple testing. Association with orthogonal pharmacologic data {#S7} ---------------------------------------------- SNPs associated with relapse (ancestry-specific or ancestry-agnostic) were evaluated for association with drug resistance in HapMap cells lines (prednisone, asparaginase, mercaptopurine, methotrexate polyglutamate accumulation), primary ALL cells from newly diagnosed patients (prednisone, vincristine, mercaptopurine, asparaginase, *in vivo* leukocyte count decrease following methotrexate), or for association with increased drug clearance (asparaginase allergy, methotrexate clearance, dexamethasone clearance), as described in the [Supplementary Methods](#SD1){ref-type="supplementary-material"}. SNPs were considered supported by orthogonal data if the risk allele for relapse was associated (at P\<0.05) with *in vitro* drug resistance, decreased methotrexate polyglutamate accumulation, smaller leukocyte decrease after methotrexate, more rapid drug clearance, or greater incidence of asparaginase allergy. Evaluation of relapse-associated SNPs in replication cohort {#S8} ----------------------------------------------------------- Relapse-associated SNPs were evaluated in an independent replication cohort (n=719) for their association with relapse using a Cox proportional hazard regression, with patients censored at the time of competing events (i.e. 
remission death, second malignancy) or last follow-up and adjusting for treatment categorized into 6 groups ([Supplementary Table 2](#SD1){ref-type="supplementary-material"}).([@R5], [@R9], [@R32]) AALL0232 ancestry-agnostic SNPs were evaluated in all patients while adjusting for treatment and ancestry. AALL0232 ancestry-specific SNPs were evaluated in the same ancestry subset of the replication cohort while adjusting for treatment and, in blacks and Hispanics, percent ancestry. The replication cohort SNPs were evaluable if they passed quality control steps as described for the discovery cohort ([Supplementary Methods](#SD1){ref-type="supplementary-material"}). Differences in genotyping platforms between the discovery and replication cohorts, as well as the smaller size of the replication cohort, resulted in 595 of the 1,017 relapse SNPs from the discovery cohort being evaluable in the replication cohort. Validated SNPs were those associated with relapse at P\<0.05 and with identical risk alleles. Quantitative contribution of SNPs to ancestral differences in relapse {#S9} --------------------------------------------------------------------- To identify SNPs which most contributed to ancestry-associated differences in relapse risk, a classification and regression tree analysis was performed separately in blacks and Hispanics considering treatment arm and validated ancestry-agnostic and ancestry-specific SNPs as potential branches. Branches were limited to two levels with each new branch needing to contain at least 20% of the initial ancestral patient group (representing \~1% or at least 22 patients from the discovery cohort for the smallest group, those with black ancestry). The impact of these SNPs on the risk of relapse associated with black or Hispanic ancestry was then evaluated in a competing risk regression model of relapse including the SNPs, treatment, and ancestry. Statistical analysis {#S10} -------------------- Statistical and bioinformatics analyses were performed using R versions 3.2.2, including the "survival", "cmprsk", "rpart", and "forestplot" packages. Association studies of orthogonal phenotypes were performed either in R or PLINK version 1.07. Results {#S11} ======= Patient Characteristics {#S12} ----------------------- Of 3,084 children and young adults enrolled on AALL0232, germline genotype and relapse data were available for 2,652, and 2,225 were included in the GWAS for relapse ([Figure 1](#F1){ref-type="fig"}). To identify covariates to include in the GWAS, we examined the importance of treatment group and ancestry on relapse risk. Consistent with findings in the entire randomized cohort,([@R8]) patients treated with Capizzi-methotrexate had a higher relapse risk than those treated with high-dose methotrexate ([Supplementary Table 3](#SD1){ref-type="supplementary-material"}). Because patients with slow early response did not differ by their induction steroid assignment but did differ by IM1 randomization, patients with slow early response were combined for multivariable and GWAS analyses ([Supplementary Table 2](#SD1){ref-type="supplementary-material"}). Blacks \[P=2.66×10^−4^, hazard ratio (HR)=2.31\] and Hispanics (P=2.17×10^−5^, HR=1.77) had an increased relapse risk compared to whites ([Supplementary Table 3](#SD1){ref-type="supplementary-material"}). The effects of ancestry and treatment groups remained significant in multivariate analyses ([Supplementary Table 3](#SD1){ref-type="supplementary-material"}, [Figure 2](#F2){ref-type="fig"}). 
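As a concrete illustration of the modeling steps just described in the Methods, the sketch below shows how STRUCTURE-derived ancestry proportions could be collapsed into the discrete groups defined above and how a single candidate SNP might then be tested in a Cox model that censors patients at competing events. The language (R, with the survival package named under Statistical analysis) matches the paper, but the data frame, column names, and coding are hypothetical placeholders, not the study's actual pipeline.

```r
library(survival)

# Hypothetical data frame `cohort` with columns:
#   time_yrs  - follow-up time in years
#   event     - "relapse", "remission_death", "second_malignancy", or "censored"
#   treatment - factor with the 6 treatment groups used for adjustment
#   ceu, yri, nam - STRUCTURE ancestry proportions (CEU, YRI, Native American)
#   snp_dose  - 0/1/2 copies of the candidate risk allele

# Discrete ancestry groups as defined in the Methods
classify_ancestry <- function(ceu, yri, nam) {
  if (ceu > 0.90) "white"
  else if (yri > 0.70) "black"
  else if (nam > 0.10 && nam > yri) "Hispanic"
  else "other"
}
cohort$ancestry <- factor(mapply(classify_ancestry,
                                 cohort$ceu, cohort$yri, cohort$nam))

# Competing events (remission death, second malignancy) are censored at the
# time they occur; only relapse counts as the event of interest.
cohort$relapse <- as.integer(cohort$event == "relapse")

fit <- coxph(Surv(time_yrs, relapse) ~ snp_dose + treatment + ancestry,
             data = cohort)
summary(fit)  # a SNP would "replicate" at P < 0.05 with the same risk allele
```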
Blacks and Hispanics also had a higher risk of any CNS relapse than whites (P=0.016, HR=2.54 for blacks; P=0.0018, HR=2.08 for Hispanics). Association of SNPs with relapse {#S13} -------------------------------- Following quality control steps, 11,180,806 SNPs were evaluated for their association with relapse. A total of 302 SNPs representing 175 unique genetic loci (LD blocks) were associated with relapse in an analysis adjusting for both treatment and percent genetic ancestry (i.e. their association with relapse was "agnostic" to ancestry; [Supplementary Table 4, Supplementary Figure 4](#SD1){ref-type="supplementary-material"}). An additional 715 SNPs representing 424 unique genetic loci were associated with relapse in ancestry-specific analyses, with 280 SNPs (179 loci) associated with relapse in Hispanics, 258 SNPs (167 loci) in blacks, 173 SNPs (72 loci) in whites, 2 SNPs (2 loci) in both blacks and whites, and 2 SNPs (1 locus) in both blacks and Hispanics ([Supplementary Tables 5--7, Supplementary Figures 3, 5--7](#SD1){ref-type="supplementary-material"}). Of the 1,017 relapse SNPs, 192 were associated with relapse in patients treated on the Capizzi arm, 186 in patients treated on the high-dose methotrexate arm, and 18 in both treatment groups; 621 SNPs were not associated with relapse in either group alone but were associated with relapse in the combined cohort ([Supplementary Tables 4--7, Supplementary Figures 4--7](#SD1){ref-type="supplementary-material"}). Of the 302 ancestry-agnostic SNPs, 54 were also associated with an increased risk of CNS relapse ([Supplementary Table 4, Supplementary Figure 4](#SD1){ref-type="supplementary-material"}). Of these, 25 were associated with increased CNS relapse in patients treated on the Capizzi arm, 14 in patients on the high-dose methotrexate arm, and 4 in patients on both arms. Because of the association between ancestry and CNS relapse risk, we evaluated ancestry-specific SNPs for their association with CNS relapse and identified 18 SNPs associated with increased CNS relapse risk in whites, 38 SNPs in blacks, and 52 SNPs in Hispanics ([Supplementary Tables 5--7, Supplementary Figures 5--7](#SD1){ref-type="supplementary-material"}). Because of the importance of minimal residual disease (MRD) in defining high-risk patients,([@R33]) we also evaluated relapse SNPs for their adverse impact in the 1,931 patients with end of induction (day 29) MRD less than 0.1%. 617 SNPs remained significant at the previously defined significance threshold, including 209 ancestry-agnostic SNPs ([Supplementary Tables 4--7, Supplementary Figures 4--7](#SD1){ref-type="supplementary-material"}). Association of relapse SNPs with orthogonal pharmacologic data {#S14} -------------------------------------------------------------- To explore possible mechanisms underlying the 1,017 SNPs associated with relapse, we tested for their association with orthogonal phenotypes including *in vitro* resistance to chemotherapy, decreased response to methotrexate *in vivo*, increased chemotherapeutic drug clearance *in vivo*, and asparaginase allergy *in vivo.* Of the 302 ancestry-agnostic SNPs, 54 were associated with one resistance/clearance phenotype and 10 were associated with more than one such phenotype ([Supplementary Table 4](#SD1){ref-type="supplementary-material"}). Of the 715 ancestry-specific SNPs, 128 were associated with one resistance/clearance phenotype and 32 with more than one phenotype ([Supplementary Tables 5--7](#SD1){ref-type="supplementary-material"}). 
36 of the 162 relapse SNPs associated with CNS relapse were associated with at least one resistance/clearance phenotype. Of the 54 relapse SNPs associated with intrinsic leukemic asparaginase resistance (N=24 SNPs) or asparaginase allergy (N=30 SNPs), 20 were associated with relapse in the Capizzi arms, which included additional doses of asparaginase, compared to only eight associated with relapse in the high-dose methotrexate arms (Fisher's P=0.015). In contrast, relapse SNPs associated with decreased intracellular methotrexate polyglutamates (N=15 SNPs), rapid methotrexate clearance (N=19 SNPs), or decreased *in vivo* response to methotrexate (N=42 SNPs) were balanced equally in their association across IM randomization arm (19 of 76 SNPs significant in the Capizzi arm, 13 of 76 significant in the high-dose arm; Fisher's P=0.32). Relapse SNPs were associated with both pharmacokinetic and pharmacodynamic phenotypes. For example, the relapse SNP rs10496350 was associated with asparaginase allergy (which results in decreased exposure to asparaginase), and patients carrying at least one copy of the C risk allele had a higher (P adjusted for treatment and ancestry =2.94×10^−5^) five-year cumulative incidence of relapse (CIR, 37.5%) than did patients with GG genotype (five-year CIR 13.3%) as well as double the risk (P=0.006) of allergy (23.3% for CC or CG genotype vs. 10.8% for GG genotype, [Figure 3](#F3){ref-type="fig"}). Relapse SNPs were also associated with resistance to chemotherapeutic agents: for example, rs743535 (intronic within *CYP2E1*) was associated with both vincristine resistance (median lethal concentration for 50% of cells 0.27 μM for GG genotype vs. 2 μM for GA/AA genotypes, P=0.016) and increased five-year CIR (12% for the GG genotype vs. 20.6% for the GA or AA genotypes, P=2.42×10^−4^, [Figure 4](#F4){ref-type="fig"}). Replication cohort {#S15} ------------------ Of the 1,017 relapse SNPs, 595 were evaluable in the independent replication cohort of 719 patients and 32 replicated (representing 19 loci). Of 138 evaluable ancestry-agnostic SNPs, seven were associated with increased relapse in the replication cohort. 25 of the 457 evaluable ancestry-specific SNPs were also associated with increased relapse in the replication cohort in the same ancestry as was identified in the AALL0232 cohort, including three which increased relapse risk in blacks, 18 in Hispanics, and four in whites ([Table 1](#T1){ref-type="table"}). Of the 32 replicated SNPs, four were associated with an increased relapse in patients treated with high-dose methotrexate, two in patients treated with Capizzi-methotrexate, and two in both cohorts. Of the seven replicated ancestry-agnostic SNPs, four were associated with at least one unfavorable pharmacological phenotype: rs41530849 in *PTPN14* with both rapid methotrexate clearance and *in vitro* asparaginase resistance, rs743535 in *CYP2E1* with *in vitro* vincristine resistance, intergenic SNP rs2463380 with rapid methotrexate clearance, and the missense SNP rs16843643 in *FARP2* with a diminished *in vivo* response to high-dose methotrexate. Additionally, four ancestry-agnostic SNPs and 12 Hispanic-specific SNPs that were associated with CNS relapse in the discovery cohort were replicated in the independent replication cohort, and 23 SNPs were significant among MRD negative patients ([Table 1](#T1){ref-type="table"}). 
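The five-year cumulative incidence comparisons quoted above (for example, 37.5% versus 13.3% for rs10496350) are the kind of estimate the cmprsk package listed under Statistical analysis produces. A minimal sketch, with hypothetical column names and genotype coding, might look like this:

```r
library(cmprsk)

# Hypothetical columns: time_yrs (follow-up in years), fstatus (1 = relapse,
# 2 = competing event such as remission death or second malignancy, 0 = censored),
# geno (carrier status, e.g. "GG" vs "CG/CC" for a SNP like rs10496350).
ci <- cuminc(ftime   = cohort$time_yrs,
             fstatus = cohort$fstatus,
             group   = cohort$geno)

# Cumulative incidence of each event type, by genotype group, at 5 years
# (drop the test component before tabulating the estimates)
timepoints(ci[names(ci) != "Tests"], times = 5)$est

# Competing-risks cumulative incidence curves by genotype
plot(ci, xlab = "Years from study entry", ylab = "Cumulative incidence")
```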
SNP contribution to excess relapse risk in black and Hispanic patients {#S16} ---------------------------------------------------------------------- Using classification and regression trees, we identified two SNPs in blacks (rs4710143 and rs16843643), and in Hispanics (rs9325870 and rs743535) most contributing to their excess relapse risk. In a multivariate model, these four SNPs attenuated the adverse risk associated with black (P=0.79) and Hispanic (P=0.065) ancestry group status ([Figure 5a](#F5){ref-type="fig"}). Additionally, ancestry did not improve the ability to predict relapse if SNPs and treatment group were already known (ANOVA P=0.19 comparing a model with treatment and SNPs as covariates to a model with treatment, SNPs, and ancestry). Patients carrying at least one risk allele for any of the four SNPs had higher relapse than did patients without any risk alleles, regardless of their ancestry ([Figure 5b](#F5){ref-type="fig"}). These variants were less prevalent in whites, with the average white patient carrying 0.21 risk alleles (of a possible eight, range in whites 0--2) compared to a mean of 1.28 in black patients (range 0--5), 0.79 in Hispanics (range 0--4), and 0.63 in patients of other ancestry (range 0--4; Mann-Whitney P\<1×10^−15^). Discussion {#S17} ========== Relapse in high-risk B-ALL remains a significant problem, and most patients who relapse do not survive. Although evaluation of early treatment response and MRD identifies many patients at high risk for relapse, many patients who relapse do not carry these adverse features.([@R33], [@R34]) Further identification of adverse biologic features is needed to allow further refinements in therapy. In this study, we focused on three primary implications of this genetic analysis: whether host genetic variation explained ancestry-related differences in relapse, whether the importance of genetic variation differed by major treatment arms, and how genetic variations were replicated for orthogonal pharmacologic phenotypes and in an independent ALL cohort. In this GWAS, we identified 1,017 SNPs associated with increased relapse risk in children with high-risk B-ALL. We identified both SNPs associated with relapse risk regardless of patient ancestry (ancestry-agnostic) as well as SNPs associated with relapse in an ancestry-specific fashion. Of these relapse SNPs, 7 ancestry-agnostic and 25 ancestry-specific SNPs were also associated with an increased relapse risk in an independent replication cohort ([Table 1](#T1){ref-type="table"}). Importantly, we identified genetic variants associated with increased relapse risk in an ancestry-specific manner across two generations of B-ALL protocols. The identified SNPs contribute to the higher risk of relapse in blacks and Hispanics but also identify patients in each ancestral group at high risk of relapse. Using only four SNPs (rs4710143, rs16843643, rs9325870, and rs743535), we identified 73% of blacks and 57% of Hispanics at high-risk of relapse ([Figure 5b](#F5){ref-type="fig"}). These SNPs were also associated with relapse risk in whites and patients of other ancestry. However, more than 50% of blacks and Hispanics carry at least one risk allele in these SNPs compared to 20% of whites, suggesting the increased relapse risk attributable to these SNPs is disproportionately distributed to blacks and Hispanics, simply on the basis of racial differences in allele frequency. 
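A sketch of the tree-based step described above, using the rpart package named under Statistical analysis, is shown below. The two-level depth limit and the requirement that each node retain at least 20% of the ancestral group are written out explicitly; the data frame, the genotype coding, and the restriction of candidates to the four named SNPs (rather than the full list of validated SNPs considered in the actual analysis) are simplifying assumptions for illustration only.

```r
library(rpart)

# One ancestral group at a time (here, patients classified as black); relapse is a
# 0/1 indicator and candidate branches are treatment arm plus validated SNP
# genotypes coded as risk-allele counts (0/1/2). All names are hypothetical.
grp <- subset(cohort, ancestry == "black")

tree <- rpart(factor(relapse) ~ treatment + rs4710143 + rs16843643 +
                                rs9325870 + rs743535,
              data    = grp,
              method  = "class",
              control = rpart.control(
                maxdepth  = 2,                              # at most two levels of branches
                minbucket = ceiling(0.20 * nrow(grp)),      # each node keeps >= 20% of the group
                cp        = 0))

print(tree)  # the splits suggest which SNPs most separate relapse risk in this group
```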
The addition of ancestry group to a model including these SNPs and treatment group failed to improve the model (ANOVA P=0.19) suggesting these SNPs attenuate the adverse impact of ancestry on relapse. These data mirror findings in other malignant([@R35], [@R36]) and non-malignant diseases([@R37]--[@R42]) in which variants strongly associated with ancestry may be the cause of discrepant disease outcomes in different ancestral populations. Such variants offer the opportunity for therapy modification and risk stratification when their effects are stable across multiple settings, as are the replicated ancestry-specific variants identified in this study ([Table 1](#T1){ref-type="table"}). One of the principles of discovery research in pharmacogenomics is that the variants identified in any study will be influenced by the therapy that has been given. Because the randomly assigned methotrexate treatment arm had a significant effect on treatment outcome in AALL0232, we had a unique opportunity to test whether some genomic variants associated with relapse were more important in those receiving one treatment arm (high-dose methotrexate) versus the other (Capizzi methotrexate plus asparaginase). Interestingly, the SNPs directly associated with methotrexate pharmacology did not differentially distribute between the two treatment arms (Fisher's P=0.32), but relapse SNPs associated with asparaginase resistance or asparaginase allergy did cluster in the Capizzi arm (Fisher's P=0.015). Those in the Capizzi arm received more asparaginase but less methotrexate than those in the high-dose methotrexate arm. The association with asparaginase resistance/allergy in the Capizzi arm suggests that asparaginase exposure was more critical to preventing relapse among the patients whose methotrexate exposure was low (Capizzi treatment), and that treatment with high-dose methotrexate diminishes the importance of maximizing asparaginase. Therapeutic and patient differences may also explain differences in the SNPs associated with relapse in this cohort compared to prior analyses. In prior GWAS of ALL relapse risk and MRD,([@R43], [@R44]) the majority of patients were NCI standard-risk, in contrast to the high-risk population studied here. Moreover, all patients in the discovery cohort of this study also received delayed intensification and MRD-directed therapy intensification, whereas many patients in the prior GWASs([@R43], [@R44]) did not. In a review of the SNPs previously associated with relapse or MRD,([@R43], [@R44]) we identified five (rs35229355, rs7517671, rs10883699, rs7350429, and rs6773449) that associated with relapse (P\<0.05) after adjusting for both treatment and ancestry in the current discovery cohort. However, these SNPs did not reach the Ip selected P value threshold, nor were they replicated at least 20 times during iterative resampling. This finding highlights the importance of population and therapeutic differences on the association of pharmacogenomic variants and outcome. It is encouraging that many of the SNPs identified in the current GWAS were associated with relapse among patients treated on both the high-dose methotrexate and the Capizzi escalating-methotrexate/asparaginase arms, suggesting that some of these variants may be prognostic across therapies. The analysis of relapse SNPs' association with orthogonal pharmacologic phenotypes suggests mechanisms through which some relapse SNPs may be exerting their effects on relapse risk. 
Relapse SNPs were associated with both pharmacokinetic and pharmacodynamic phenotypes. For example, rs6786341 (an intronic variant in lactoferrin) was associated with more rapid methotrexate clearance, a phenotype which has previously been associated with decreased methotrexate polyglutamate accumulation and increased relapse risk.([@R45], [@R46]) The rs743535 variant in *CYP2E1* was associated with resistance to vincristine ([Figure 4](#F4){ref-type="fig"}). Variants in this gene have previously been implicated in inferior survival in non-Hodgkin’s lymphoma([@R47]) and non-small cell lung cancer,([@R48]) potentially due to resistance to chemotherapeutic agents used in those diseases. Variants near *LZTS1*, which include promoter and enhancer marks in neural tissues,([@R49]) were associated with CNS relapse in Hispanics. Suppression of this gene has previously been implicated in metastatic potential in multiple solid tumors,([@R50]--[@R52]) suggesting these variants may alter leukemic trafficking into the CNS, thereby altering CNS relapse risk. Other identified CNS relapse SNPs likely contribute to CNS relapse through alterations in leukemic drug resistance or rapid drug clearance, as 36 of 162 CNS relapse SNPs were also associated with pharmacokinetic or drug resistance phenotypes. We identified several novel inherited risk variants for relapse in a large population of children with high-risk B-precursor ALL. Several of these are associated with the increased relapse risk specific to black and Hispanic ancestry and may contribute to the adverse outcomes attributed to “race.” Many of these variants are associated with “inherited” leukemic resistance or rapid clearance of chemotherapy. These findings may allow personalized therapy to further improve outcomes for children with high-risk B-ALL. Supplementary Material {#S18} ====================== **Funding/Support** The work was supported by the National Institutes of Health \[grant numbers GM 92666, GM 115279, CA142665, CA 21765, CA 36401, CA98543 (COG Chair’s grant), CA98413 (COG Statistical Center), CA114766 (COG Specimen Banking), U01-HG04603, RC2- GM092618, R01-LM010685, 5T32-GM007569\]; Leukemia Lymphoma Society (grant number 6168); and by the American Lebanese Syrian Associated Charities. **Role of funding source** The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. **Original Data Statement** Drs. Mary Relling and Seth Karol had full access to all data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. **Authors’ contributions** MVR and JJY contributed to the conception and design of the study. EL, LBR, CAF, JRM, SWP, RJA, ELL, BD, SJ, C-HP, EAR, NJW, WLC, SPH, MLL, MD, WEE, JJY, and MVR contributed to the provision of study materials, patient recruitment, or acquisition of data. SEK, CC, XC, and MVR contributed to data analysis and interpretation. All authors contributed to the drafting and reviewing of the manuscript and gave their final approval to submit for publication. **Conflicts of interest** The authors declare no competing financial interests.
**Data Availability:** Detailed information on the primary clinical trial (COG AALL0232) for the discovery cohort is available from: <https://clinicaltrials.gov/ct2/show/results/NCT00075725?term=0232&rank=3> ![Consort diagram of AALL0232 discovery cohort](nihms842426f1){#F1} ![Association of non-white genetic ancestry with increased relapse risk\ Non-whites had an increased risk of relapse in the discovery cohort. The five-year cumulative incidence of relapse was higher in blacks \[23.7%, 95% confidence interval (CI) 14.7--32.7%, P=2.27×10^−4^, HR=2.32\] and Hispanics (19.3%, 95% CI 15.7--22.9%, P=8.23×10^−5^, HR=1.7) than whites (10.3%, 95% CI 8.9--12.8%). P values are adjusted for treatment.\ White: \>90% CEU; black: \>70% YRI; Hispanic: \>10% Native American and Native American \>YRI](nihms842426f2){#F2} ![*NPAS2* SNP rs10496350 is associated with asparaginase allergy and increased relapse risk\ Patients in the discovery cohort carrying the at least one copy of the C risk allele of rs10496350 had a higher five-year cumulative incidence of relapse (37.5%) than did those with the GG genotype (13.3%, P adjusted for treatment and ancestry =2.94×10^−5^). Patients carrying the risk allele also experienced a higher rate of allergy (23%) than did patients carrying the GG genotype (11%, P=0.006).](nihms842426f3){#F3} ![*CYP2E1* SNP rs743535 associated with both *in vitro* vincristine resistance and increased relapse risk\ rs743535 was associated with increased relapse risk (multivariate P=2.42×10^−4^). In primary patient lymphoblasts, presence of one or more A risk alleles decreased sensitivity to vincristine (median LC50 with A allele 2 μM, median LC50 with GG genotype 0.27 μM, P=0.016).](nihms842426f4){#F4} ###### Relapse SNPs attenuate the adverse impact of black and Hispanic ancestry a: **Forest plot of relapse risk comparing multivariable models with and without four relapse SNPs** b: **Presence of a risk allele in any of the four SNPs confers high relapse risk regardless of ancestry** Risk alleles in any of four SNPs (rs4710143, rs16843643, rs9325870, and rs743535) confer increased relapse risk regardless of ancestry. (A) In multivariate models, these SNPs largely attenuate the adverse effect of black or Hispanic ancestry, while leaving unchanged the association between treatment arm and relapse. Treatment arms are described in [Supplementary Table 2](#SD1){ref-type="supplementary-material"}: For the rapid early response patients, Dex/Capizzi, Pred/Capizzi, Dex/HD, Pred/HD refer to the induction steroid (dex = dexamethasone, pred = prednisone) and the interim maintenance (Capizzi=escalating dose methotrexate plus asparaginase, HD = high-dose methotrexate). For the slow early response patients (SER), induction steroid groups were combined. (Hazard ratio (HR) from model without SNPs shown in blue, models with SNPs shown in red). \(B\) Whites, blacks, and Hispanics carrying risk alleles for any of these SNPs (dashed lines) have higher five-year relapse risks than do those without any risk alleles (solid lines) \[15.3% vs. 9.7% (P=0.025) for whites, 32.3% vs. 0% (P=1.28×10^−4^) for blacks, and 25.5% vs. 10.7% (P=3.72×10^−6^) for Hispanics\]. P values are adjusted for treatment. 
![](nihms842426f5a) ![](nihms842426f5b)

###### SNPs associated with relapse in discovery (n=2,225) and replication (n=719) cohorts

| rsID | Gene | Risk allele | RAF | P: discovery cohort | Hazard Ratio (95% CI): discovery cohort | P: replication cohort | Ancestry | Additional phenotypes |
|---|---|---|---|---|---|---|---|---|
| rs41530849 | *PTPN14* | T | 0.006 | 4.26E-06 | 3.87 (2.17--6.89) | 0.019 | agnostic | HD arm, MTX clearance, PPL ASP |
| **rs10205940** | | A | 0.224 | 6.85E-06 | 1.42 (1.22--1.66) | 0.044 | agnostic | HD arm, Capizzi arm, CNS |
| **chr23: 9863426** | *SHROOM2* | T | 0.008 | 1.04E-05 | 2.45 (1.64--3.64) | 0.049 | agnostic | HD arm, Capizzi arm, CNS |
| rs2463380 | | G | 0.22 | 3.98E-05 | 1.52 (1.24--1.86) | 0.045 | agnostic | HD arm, CNS, MTX clearance |
| **rs2710418** | *NELL2* | T | 0.031 | 4.99E-05 | 1.98 (1.42--2.76) | 0.021 | agnostic | HD arm |
| rs743535 | *CYP2E1* | A | 0.124 | 5.00E-05 | 1.54 (1.25--1.9) | 0.045 | agnostic | PPL Vinc |
| **rs16843643** | *FARP2* | C | 0.012 | 0.000226 | 2.95 (1.66--5.25) | 0.031 | agnostic | Capizzi arm, CNS, MTX WBC response |
| **rs775491** | *BEST3* | A | 0.304 | 0.000265 | 1.61 (1.24--2.07) | 0.0086 | white | |
| rs156008 | *PCSK1* | A | 0.158 | 0.000297 | 1.65 (1.26--2.16) | 0.024 | white | |
| **rs4710143** | *RNASET2* | G | 0.074 | 0.000579 | 4.92 (1.98--12.2) | 0.014 | black | Capizzi arm |
| **rs202408** | | C | 0.144 | 0.000789 | 3.56 (1.7--7.49) | 0.021 | black | |
| **rs7860525** | | T | 0.134 | 0.00175 | 2.79 (1.47--5.31) | 0.016 | black | |
| **rs9325870** | *LZTS1* | C | 0.205 | 1.84E-05 | 2 (1.46--2.75) | 0.036 | Hispanic | CNS |
| rs16999479 | *DSCAM* | G | 0.016 | 0.000219 | 4.02 (1.92--8.42) | 0.046 | Hispanic | CNS |
| **rs141707566** | *GRIN2A* | C | 0.014 | 0.000289 | 2.76 (1.59--4.77) | 0.037 | Hispanic | HD arm |
| **rs12535024** | *DDC* | C | 0.181 | 0.00103 | 1.76 (1.26--2.47) | 0.0499 | Hispanic | |
| **rs6786341** | *LTF* | T | 0.012 | 0.00173 | 4.14 (1.7--10.1) | 0.038 | Hispanic | MTX clearance |
| rs16945138 | *DNAH9* | T | 0.007 | 0.00186 | 7.5 (2.11--26.7) | 0.014 | Hispanic | |
| rs6651255 | *GSDMC* | C | 0.425 | 0.00222 | 1.59 (1.18--2.13) | 0.0029 | Hispanic | |

RAF: Risk allele frequency; CI: confidence interval. Characteristics of validated SNPs are shown for the discovery cohort, with one SNP for each locus shown (SNPs removed through LD pruning are shown in [Supplementary Tables 4--7](#SD1){ref-type="supplementary-material"}). Bolded SNPs were significant at the Ip determined significance threshold when evaluated among patients who were end-induction minimal residual disease negative. SNPs are ordered by ancestry of discovery, with SNPs associated with relapse while adjusting for both treatment and ancestry (i.e. "ancestry agnostic") labeled as agnostic and ancestry-specific SNPs labeled with their associated ancestry group. Additional phenotypes include: association with relapse among patients treated on either first interim maintenance arm \[Capizzi arm, HD (high-dose methotrexate) arm\], association with CNS relapse (CNS), as well as association with *in vitro* resistance among primary patient lymphoblasts to asparaginase (PPL ASP) or vincristine (PPL Vinc), more rapid methotrexate clearance (MTX clearance), or diminished white blood cell decrease after *in vivo* methotrexate treatment (MTX WBC response).
{ "pile_set_name": "PubMed Central" }
Developmental characteristics of vessel density in the human fetal and infant brains. We demonstrated the developmental characteristics of vessel density in the human brain, using an antibody against CD31, which specifically reacts with endothelium. In the cerebral cortex and subcortical white matter, the vessel density was low at 16-28 weeks of gestation (GW), and then increased after 36 GW. In the deep white matter, the vessel density was high in the middle fetal period (16-24 GW), transiently decreased at 28-36 GW, and then increased again after 39 GW. In the putamen, the vessel density was high at 20-21 GW, remained high throughout the fetal period, and then rapidly increased after birth. In the basis pontis, the number of vessels increased after 28 GW, and after 32 GW was greater than in the pontine tegmentum. These alterations in vessel density may correlate with the pathogenesis of perinatal brain injury. Thus, the transient decrease of vessel density in the deep white matter may predispose to periventricular leukomalacia in cerebral hypoperfusion. Similarly, the well-developed vascularity in the basis pontis may predispose its relatively immature neurons to neuron necrosis produced by free radical injury.
{ "pile_set_name": "PubMed Abstracts" }
Yankassia Yankassia is a village in the Bassar Prefecture in the Kara Region of north-western Togo. References External links Satellite map at Maplandia.com Category:Populated places in Kara Region Category:Bassar Prefecture
{ "pile_set_name": "Wikipedia (en)" }
Curating Vital Relationships SALESFORCE VIEW Available on the Appexchange and Salesforce1 Features Map your important relationships with a simple inline list view, and powerful advanced features Simple and Intuitive Relate is a simple and efficient inline list that lives right on the Contact, Account and Opportunity record in your Salesforce screen. Relate works on enterprise and unlimited editions. License includes access for all users. Relationship Visualization is independently licensed.
{ "pile_set_name": "Pile-CC" }
PM Cameron says he hopes to keep UK border controls in Calais LONDON, June 27 (Reuters) - Prime Minister David Cameron on Monday said he wanted to keep in place arrangements with France that allow British border controls to be based in the French port of Calais, following last week's decision to leave the European Union.
{ "pile_set_name": "OpenWebText2" }
Q: Summing up digits of a very long binary number? I was asked by a friend: if 2^10 = 1024, we can take 1024 and sum its digits: 1+0+2+4 = 7. This is easy. However, when the input is 2^30000 (the input actually is a long string "1000...") there is no .NET type which can hold this value. So there must be a trick to sum its digits (the digits of the decimal value)....
Edited: Related trick (for finding 10^20 - 16): 100 = 10^2 (a one and two zeros), and 10^20 is a one and 20 zeros, hence 10^20 - 16 = 18 nines, an eight and a four, and 18*9+8+4 = 174. But I haven't succeeded in converting this solution to my problem (I tried quite a lot). *I'm tagging this question as .net because I can use string functions and math functions from the .NET library.*
Question: Is there any trick which would allow me to sum the digits of a very large number of the form x^n? What is the trick here?
Edited #2: Added the .net2 tag (where BigInteger is unavailable) - I'm wondering how I could do it without BigInteger (I'm looking for the hidden trick).
A: You can leverage the BigInteger structure to do this. As it's written in MSDN: "The BigInteger type is an immutable type that represents an arbitrarily large integer whose value in theory has no upper or lower bounds." Basically, after creating a BigInteger instance and evaluating the exponent, you can convert it to a string. After that you iterate over each character of that string, convert each char to an int, and add all those ints up to get your answer.
BigInteger bi = new BigInteger(2);
var bi2 = BigInteger.Pow(bi, 30000);
BigInteger sum = new BigInteger();
foreach (var ch in bi2.ToString())
    sum = BigInteger.Add(sum, new BigInteger(int.Parse(ch.ToString())));
MessageBox.Show(bi2.ToString() + " - " + sum.ToString());
A: There is no general trick I'm aware of for finding the base 10 digit sum of a number. However, there is an easy trick for finding the base 10 digit root of a number. The digit sum is, as you say, simply the sum of all the digits. The base 10 digit sum of 1024 is 1 + 0 + 2 + 4 = 7. The base 10 digit sum of 65536 is 6 + 5 + 5 + 3 + 6 = 25. The digit root is what you get when you repeat the digit sum until there's only one digit. The digit sum of 65536 is 25, so the digit root is 2 + 5 = 7. The trick is: if you have Z = X * Y then DigitRoot(Z) = DigitRoot(DigitRoot(X) * DigitRoot(Y)). (Exercise to the reader: prove it! Hint: start by proving the same identity for addition.) If you have an easily-factored number -- and the easiest number to factor is 2^n -- then it is easy to figure out the digit root recursively: 2^16 = 2^8 * 2^8, so DigitRoot(2^16) = DigitRoot(DigitRoot(2^8) * DigitRoot(2^8)) -- we just made the problem much smaller. Now we don't have to calculate 2^16, we only have to calculate 2^8. You can of course use this trick with 2^30000 -- break it down to DigitRoot(DigitRoot(2^15000) * DigitRoot(2^15000)). If 2^15000 is too big, break it down further; keep breaking it down until you have a problem small enough to solve. Make sense?
A: From http://blog.singhanuvrat.com/problems/sum-of-digits-in-ab:

public class Problem_16 {
    public long sumOfDigits(int base, int exp) {
        int numberOfDigits = (int) Math.ceil(exp * Math.log10(base));
        int[] digits = new int[numberOfDigits];
        digits[0] = base;
        int currentExp = 1;
        while (currentExp < exp) {
            currentExp++;
            int carry = 0;
            for (int i = 0; i < digits.length; i++) {
                int num = base * digits[i] + carry;
                digits[i] = num % 10;
                carry = num / 10;
            }
        }
        long sum = 0;
        for (int digit : digits)
            sum += digit;
        return sum;
    }

    public static void main(String[] args) {
        int base = 2;
        int exp = 3000;
        System.out.println(new Problem_16().sumOfDigits(base, exp));
    }
}

C#:

public class Problem_16 {
    public long sumOfDigits(int base1, int exp) {
        int numberOfDigits = (int) Math.Ceiling(exp * Math.Log10(base1));
        int[] digits = new int[numberOfDigits];
        digits[0] = base1;
        int currentExp = 1;
        while (currentExp < exp) {
            currentExp++;
            int carry = 0;
            for (int i = 0; i < digits.Length; i++) {
                int num = base1 * digits[i] + carry;
                digits[i] = num % 10;
                carry = num / 10;
            }
        }
        long sum = 0;
        foreach (int digit in digits)
            sum += digit;
        return sum;
    }
}

void Main() {
    int base1 = 2;
    int exp = 3000000;
    Console.WriteLine(new Problem_16().sumOfDigits(base1, exp));
}
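As a quick check on the digit-root answer above: the identity digit_root(n) = 1 + (n - 1) mod 9 (for n > 0) means the digit root of 2^30000 can be found with nothing more than modular exponentiation, with no big-integer type at all. The sketch below is written in R purely for illustration (the same logic ports directly to C# or any other language), and note that it yields the digit root, not the full digit sum the question asks for.

```r
# Digital root of base^exp without arbitrary-precision arithmetic:
# digit_root(n) = 1 + (n - 1) %% 9 for n > 0, and (base^exp) %% 9 is
# computed by square-and-multiply so the intermediate values stay tiny.
pow_mod <- function(base, exp, mod) {
  result <- 1
  base <- base %% mod
  while (exp > 0) {
    if (exp %% 2 == 1) result <- (result * base) %% mod
    base <- (base * base) %% mod
    exp <- exp %/% 2
  }
  result
}

digit_root_of_power <- function(base, exp) {
  1 + (pow_mod(base, exp, 9) + 8) %% 9   # same as 1 + ((base^exp) - 1) mod 9
}

digit_root_of_power(2, 30000)  # digit root of 2^30000 (not the digit sum)
```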
{ "pile_set_name": "StackExchange" }
Contact & Information: The information within this interactive and searchable application has been researched extensively by the House Clerk’s Office. As with any historical records of this age and breadth, there may be discrepancies and/or inconsistencies within records obtained from a variety of credible sources. Any feedback is encouraged at [email protected].
Counties Cities and Towns; Labor and Commerce; Militia and Police; Roads and Internal Navigation
1995 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Counties Cities and Towns; Labor and Commerce; Militia and Police; Roads and Internal Navigation
1996 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Counties Cities and Towns; Labor and Commerce; Militia and Police; Transportation
1997 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Counties Cities and Towns; Labor and Commerce; Militia and Police; Transportation
1998 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Counties Cities and Towns; Labor and Commerce; Militia and Police; Science and Technology; Transportation
1999 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Counties Cities and Towns; Labor and Commerce; Militia and Police; Science and Technology; Transportation
2000 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Corporations Insurance and Banking; Counties Cities and Towns; Science and Technology; Transportation
2001 | County of Henrico (part); City of Richmond (part) | 70 | Democrat | Corporations Insurance and Banking; Counties Cities and Towns; Science and Technology; Transportation
2002 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2003 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2004 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2005 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2006 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2007 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Counties Cities and Towns; Transportation
2008 | Counties of Chesterfield (part) and Henrico (part); City of Richmond (part) | 70 | Democrat | Commerce and Labor; Transportation
{ "pile_set_name": "Pile-CC" }
#ifndef __PMU_H
#define __PMU_H

#include <linux/bitops.h>
#include "../../../include/linux/perf_event.h"

enum {
	PERF_PMU_FORMAT_VALUE_CONFIG,
	PERF_PMU_FORMAT_VALUE_CONFIG1,
	PERF_PMU_FORMAT_VALUE_CONFIG2,
};

#define PERF_PMU_FORMAT_BITS 64

struct perf_pmu__format {
	char *name;
	int value;
	DECLARE_BITMAP(bits, PERF_PMU_FORMAT_BITS);
	struct list_head list;
};

struct perf_pmu {
	char *name;
	__u32 type;
	struct list_head format;
	struct list_head list;
};

struct perf_pmu *perf_pmu__find(char *name);
int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr,
		     struct list_head *head_terms);

int perf_pmu_wrap(void);
void perf_pmu_error(struct list_head *list, char *name, char const *msg);

int perf_pmu__new_format(struct list_head *list, char *name,
			 int config, unsigned long *bits);
void perf_pmu__set_format(unsigned long *bits, long from, long to);

int perf_pmu__test(void);
#endif /* __PMU_H */
{ "pile_set_name": "Github" }
Amber Guzak sized the moment up right. ``My husband [Ray] caught a fish [July 3] on Lake Michigan that’s pretty impressive-- a 17.8-pound coho (not a king),’’ she messaged. ``He’s been doing this for many years and has never come across a coho this size before.’’ That’s enough of a freak of a coho that I emailed Ben Dickinson, Indiana’s Lake Michigan fisheries biologist, to make sure it was a coho and not an oddly colored Chinook. ``Sure looks like a big coho,’’ he responded. ``I’ve seen a few that size caught this year. Probably a few others caught that weren’t widely shared as most people assumed they were kings.’’ Guzak caught his big coho just north of the Indiana state line, his wife messaged. They keep their boat, Reel Distraction, at Marina Shores in Burns Harbor, Ind. Something big is going on in southern Lake Michigan and I don’t just mean the occasional big Chinook, even 30-pound kings have already been reported in Illinois. But there are coho of a size not seen in decades. In my weekly reports from Capt. Bob Poteshman, he has been consistently calling them, ``big chunky coho.’’ In early June, Poteshman’s Confusion charters had already turned up a 17-pound coho, caught by Tom Zurek of Orland Hills, out of North Point Marina. This week in his weekly report, Capt. Scott Wolfe reported, ``Again coho dominated the catch with some big kings in the mix too--6- to 7-pound coho make up most of the catch with some this week up to 13 pounds.’’ As of Wednesday afternoon, there were already eight Chinook of 30 pounds or heavier weighed from boats in Salmon-A-Rama, based in Racine, Wis. and running through Sunday. All 10 top spots for coho were 10 pounds or heavier, the heaviest going 14.18. ``I would say they seem a bit larger than last year, yes – but really they have been larger than average for the last two or three years,’’ Dickinson emailed. Illinois Lake Michigan Program manager Vic Santucci emailed, ``I have been hearing some anecdotal reports of big salmon being caught again this year, but we will not see any data until after our fall harbor surveys and the summer creel data is tabulated over the winter. ``I think the bigger salmon are the result of better predator/prey balance in the lake.’’ Dickinson put the credit in the same spot. ``I feel fairly confident in saying that the biggest reason is the stocking reductions over the past few years have resulted in a much better predator prey balance in the lake – the reduction alleviated the predation pressure on the baitfish, so as a result we’re seeing more bait, and the silver fish have all improved in size and body conditions,’’ he emailed. ``We’ve seen very nice steelhead size in addition to the big coho and very large kings. There’s more bait to go around for the salmon in the lake. It’s quite the turnaround since the small, skinny fish in 2015!’’ I’ve been doing the outdoors for the Sun-Times for more than two decades and never figured that the coho records in Indiana or Illinois would ever be challenged again. I’m beginning to wonder if those coho records might not just be challenged but perhaps surpassed this year. ``Breaking our 20-pound, 9-ounce coho record would be quite a feat and a fun challenge for our local anglers,’’ Santucci emailed. ``We have had several Lake Michigan records broken for other species in recent years, it would be great if we could add a new coho record to the list.’’ Carry VandeVusse caught that Illinois record on May 24, 1972. 
That was back in the early years of the experimental introduction of salmon into Lake Michigan to control alewives. Significantly, 1972 was the same year that John Beutner caught the Indiana record (20-12) in LaPorte County. ``I would not be shocked to see a record, but I would be mildly surprised,’’ Dickinson emailed. ``Given the number of mid-teen coho I have seen it certainly seems to be in striking distance by the end of the year or maybe even next year if there is another good year for alewife and coho.’’ Last word goes to Amber Guzak on her husband Ray, ``He’s been fishing Lake Michigan since he was a young boy and said he has never seen one this big.’’
{ "pile_set_name": "OpenWebText2" }
My View: Funding for the future Created on Friday, 16 January 2015 00:00 | Written by Joe Robertson |
For decades, medical research — and the cures and treatments it has discovered — have meant hope for millions of Americans living with disease and disability. But in recent years, those hopes have been clouded as Congress continues to significantly underfund the National Institutes of Health. The funding agreement for the 2015 fiscal year that Congress approved recently does little to improve the situation. The agreement appropriately includes a boost in funding for Ebola research, but it provides only a small increase in funding for the rest of the NIH’s budget — which was cut by a disabling $1.7 billion two years ago and has been largely flat over the past 10 years. The small increase for 2015 — about one-half of 1 percent — won’t allow NIH funding to even keep pace with inflation. All of us associated with Oregon Health & Science University — health care providers, scientists and patients — are relieved that Congress at least avoided a government shutdown in approving the agreement. But we are disappointed that the spending bill fails to adequately fund the lifesaving research the NIH supports. And we will continue to call on Congress to fund the NIH appropriately, and give U.S. medical researchers the support they need to lead the world. For nearly 70 years, the nation’s research investment through the NIH has improved our understanding of the causes of disease, increased life expectancy, and enhanced the health and well-being of Americans everywhere. Every day, health care professionals and scientists throughout the nation, including at OHSU, see the hope that medical research brings. NIH-funded research has led to a decline of more than 60 percent in deaths from heart disease and stroke. NIH-supported advances also have led to a test to predict breast cancer recurrence, the discovery of genetic markers for complex illnesses, improved asthma treatments, and the near-elimination of HIV transmission between mother and child. At OHSU, NIH funding has allowed us to make medical and scientific breakthroughs in cancer, stem cell research and infectious diseases, among many other areas. With funding from the NIH, Brian Druker, director of the OHSU Knight Cancer Institute, helped develop Gleevec, a breakthrough drug for chronic myeloid leukemia that also served as proof that targeted cancer treatment could work. Patients who once were expected to live three to five years now have a normal life expectancy because of the drug. In recent years, other OHSU scientists have used NIH funding to make significant advances in the quest to employ stem cells to cure disease, in uncovering the epigenetic basis for chronic diseases, and in developing a vaccine that may someday wipe out HIV infection from the body. Rebuilding the NIH budget also would be good for our country’s fiscal health. The research funded by NIH that mostly occurs at our nation’s medical schools and teaching hospitals creates skilled jobs, new products, and improved technologies. In 2012, NIH-funded research supported more than 400,000 jobs across the country. Last year, the federal government provided OHSU scientists with $272 million in support for medical research, including $232 million from the NIH.
That money not only helps our scientists continually make advances in treating and curing disease, it also provides a significant economic impact for the state of Oregon — an economic impact that was measured at more than $600 million for the 2009 fiscal year. The economic impact is undoubtedly larger today. But this issue is about much more than economic impact, of course. This issue is about treating disease, curing disease — and providing hope. Americans want cures, not cuts. OHSU and the nation’s medical schools and teaching hospitals urge Congress to restore the NIH budget and reaffirm medical research as a national priority.
{ "pile_set_name": "Pile-CC" }
bcl-xL and RAG genes are induced and the response to IL-2 enhanced in EmuEBNA-1 transgenic mouse lymphocytes. We have described transgenic mice expressing Epstein-Barr virus (EBV) nuclear antigen-1 (EBNA-1) in B-cells which show a predisposition to lymphoma. To investigate the underlying oncogenic mechanisms, we have cross bred transgenic strains of mice, examined the pre-tumour B-cell phenotype and investigated the expression levels of selected cellular genes as a response to EBNA-1 expression. We have found that bcl-xL and the recombination activating genes (RAG) 1 and 2 are induced in pre-neoplastic samples of EBNA-1 expressing mice. Induction of bcl-xL may explain the observed redundancy in lymphomagenesis between transgenic EBNA-1 and bcl-2. In addition, bone marrow cells derived from the EmuEBNA-1 mice show a greater capacity for cultured growth compared to controls, particularly in the presence of IL-2. Notably, bcl-xL expression is responsive to IL-2. These data shed new light on the potential contribution of EBNA-1 to EBV associated tumorigenicity as well as to the viral life cycle and open a potential avenue for therapeutic intervention.
{ "pile_set_name": "PubMed Abstracts" }
The White House offered a mixed reaction Tuesday to an apparent diplomatic overture from North Korean leader Kim Jong Un to neighboring South Korea, and to Seoul’s proposal to begin talks directly with Pyongyang next week, a move that could sideline the United States in the volatile region. After staying mum for two days about Kim’s offer, President Trump issued a tweet early Tuesday that repeated his favorite insult for the North Korean ruler, and then seemed to take partial credit for any thaw on the Korean peninsula while staying ambivalent about possible outcomes. “Rocket man now wants to talk to South Korea for first time. Perhaps that is good news, perhaps not — we will see!” Trump wrote. But on Tuesday night, Trump added a truculent nuclear taunt in response to Kim’s claim that the United States is “within the range of our nuclear strike and a nuclear button is always on the desk of my office.” Moments after Fox News highlighted the quote, Trump tweeted: “Will someone from his depleted and food starved regime please inform him that I too have a Nuclear Button, but it is a much bigger & more powerful one than his, and my Button works!” If Washington was wary, Seoul appeared eager to accept Kim’s offer, which was part of a New Year’s speech that is closely analyzed each year for clues to the enigmatic leader’s thinking. The two longtime adversaries have not held direct talks for more than two years. Cho Myoung-gyon, South Korea’s minister for unification, proposed Tuesday that negotiators meet on Jan. 9 at the divided border village of Panmunjom to discuss cooperation at next month’s Winter Olympics in Pyeongchang, South Korea, and how to improve overall ties. So far, no North Korean athlete has qualified for the Games, which start on Feb. 9. But South Korean officials have said they are working with the International Olympic Committee to grant wild cards to North Korean athletes in a sign of inter-Korean reconciliation. The Jan. 9 talks, should they take place, notably would not include the United States, China, Japan or Russia, which have dealt with North Korea in unsuccessful multi-party negotiations in the past. Nor would they include U.S. demands that Pyongyang give up its growing nuclear arsenal, and stop testing long-range ballistic missiles. That raised red flags for U.S. officials who questioned Kim’s motives, his sincerity and South Korea’s ability to deal with the wily ruler. “We won’t take any talks with North Korea seriously if they don’t do something to ban their nuclear weapons,” Nikki Haley, the U.S. ambassador to the United Nations, said at the U.N. on Tuesday. She said Pyongyang was a “reckless regime” that could not be counted on to enter talks in good faith. “We don’t need a Band-Aid,” she said. “We don’t need to stop and take a picture.” The State Department was less openly critical even as it urged caution. “We are close allies, and if [South Korea] wants to sit down and have a conversation with North Korea, that’s fine, that’s their right,” State Department spokeswoman Heather Nauert said. “But we aren’t necessarily going to believe that Kim Jong Un is sincere.” White House Press Secretary Sarah Huckabee Sanders insisted that the U.S. alliance with South Korea is “stronger than it ever has been,” with both countries working toward a denuclearized Korean peninsula. U.S. strategy continues to be “maximum pressure” to convince Pyongyang to end its nuclear program, she said. 
“We are going to keep all of our options on the table.” Chinese Foreign Ministry spokesman Geng Shuang, at a regular news briefing on Tuesday, said China “welcomes and supports” an opportunity for the two Koreas to improve relations, ease tensions and denuclearize the peninsula. “This is a good thing,” he said. Some analysts suggested that Kim was attempting to exploit recent divisions between Washington and Seoul. Relations between the long-standing allies have been strained under Trump, who has openly clashed with South Korean President Moon Jae-in. Trump, who visited Seoul in November, has repeatedly threatened to scrap a bilateral free trade deal with South Korea, and last summer condemned what he called Seoul’s “talk of appeasement” with the North. “Talks are not the answer!” he tweeted on Aug. 30. Moon, in turn, appeared to rebuke Trump’s threat to unleash “fire and fury” against North Korea, saying any military actions on the Korean peninsula required consultation and agreement from Seoul. He has publicly suggested military talks with the North in an effort to ease the growing impasse. On Tuesday, Moon appeared to align with the U.S. view about the long-term goal of any negotiations, suggesting talks with Pyongyang this month might be a first step. “The improvement of relations between North and South Korea cannot go separately [from] resolving North Korea’s nuclear program,” Moon said ahead of a Cabinet meeting. Kim may feel he can offer talks from a position of strength. In September, his government tested its sixth and most powerful nuclear device. In November it tested a long-range ballistic missile that U.S. officials said could potentially reach anywhere in America. In his New Year’s speech, Kim declared his nation had achieved the “historic feat of completing” its nuclear force and that the entire United States was now within range. He also warned that the “nuclear button” was on his desk, although it appears more symbolic than strategic. North Korea still has not developed a nuclear weapon that can survive a missile launch and reentry, though U.S. officials say that’s probably a matter of time. And the country still uses liquid-fueled ballistic missiles that take hours or days to launch. North Korean Leader Kim Jong Un just stated that the “Nuclear Button is on his desk at all times.” Will someone from his depleted and food starved regime please inform him that I too have a Nuclear Button, but it is a much bigger & more powerful one than his, and my Button works! — Donald J. Trump (@realDonaldTrump) January 3, 2018 In his speech, Kim appeared conciliatory toward South Korea, saying the two countries “should lower the military tensions on the Korean peninsula to create a peaceful environment.” Rather than showing strength, Kim may be showing his skill at making the best of a weak hand, said Sue Mi Terry, a former CIA analyst who now holds the Korea Chair at the Center for Strategic and International Studies, a nonpartisan Washington think tank. “With his outreach offer, he has the potential to drive a wedge between Washington and Seoul at no cost to himself,” she said, adding he might even demand concessions in return for participation in the Olympics. It would be a “propaganda gold medal for Kim,” Terry said. But if the Moon administration makes unilateral concessions to the North, she added, “it significantly risks straining the alliance” with the Trump administration. 
Over the last year, the United Nations Security Council and the Trump administration have both imposed trade sanctions on North Korea, curtailing its ability to buy oil and gas, sell agricultural products, use overseas workers to raise foreign capital, or conduct other business in international markets. Ian Bremmer, president of the Eurasia Group, a risk analysis consultancy, said Tuesday that the chances of a significant breakthrough, forged on the back of the Olympic Games, have elevated. But the unknown and potential spoiler is Trump’s reaction. Trump could launch an “enormously dangerous” Twitter firestorm, Bremmer said, or “take a 180-degree turn,” take credit for any progress, as his tweet Tuesday seemed to do, and then revive talk of a possible deal with Pyongyang that only he could cut. “We are at a bigger chance than during any time in the Obama administration for dialogue,” Bremmer said. “And we are at a bigger chance for war.” [email protected] For more on international affairs, follow @TracyKWilkinson on Twitter UPDATES: 7:38 a.m.: This article was updated with a new headline. 5:55 p.m.: This article was updated with a tweet from President Trump. 4:45 p.m.: This article was updated with reaction from China. This article was originally published at 4 p.m.
{ "pile_set_name": "OpenWebText2" }
Q: How to manage data sharing in a little Intranet on Windows

I need to set up a network for a small business. The requirements can be summarized as follows: all the "business" applications have to be stored on a single computer (let's call it "the server"). The other computers (in this case there should be at most 2 or 3 of them) execute these applications through the server, so there are no local copies of the applications on the clients, only on the server. The main computer (the server) also shares a printer. All the computers in the network are interconnected through a single wi-fi modem/router; some of them are connected through the wi-fi interface and others through an ethernet cable. Here is the most troublesome problem I'm dealing with: the server, in order to perform some special procedures, has to connect to a special modem which connects it to a remote private network. In order to do so, the server disconnects from the local business network. In the meantime, the clients are no longer able to run the applications and cannot print anything. PAY ATTENTION: while the server is connected to the private network, it still needs to run the "business" applications. So, my questions are: Is it possible to keep the server connected to both networks without denying anyone access to the applications, data and printer? If yes, how? If not, how can I design the topology of the network so that the applications and other data are shared among all the computers (server and clients)? It's important that when the server connects to the private network, every computer (including the server itself) can still access the applications, data and printer. I hope I was clear. Thanks.

A: You say that: in order to connect to the remote private network, the server has to disconnect from the local network. Then you say that you also need to keep it connected at all times so that all the clients can run software that lives only on the server. By the terms you've defined, this is impossible. You need to split these functions across two separate servers, or talk to the people who run the special private network and find out what in their software requires the server to disconnect from the local network.
{ "pile_set_name": "StackExchange" }
#!/usr/bin/python
#
# Copyright 2018-2020 Polyaxon, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# coding: utf-8

"""
    Polyaxon SDKs and REST API specification.

    Polyaxon SDKs and REST API specification.  # noqa: E501

    The version of the OpenAPI document: 1.1.9-rc4
    Contact: [email protected]
    Generated by: https://openapi-generator.tech
"""

import pprint
import re  # noqa: F401

import six

from polyaxon_sdk.configuration import Configuration


class V1Queue(object):
    """NOTE: This class is auto generated by OpenAPI Generator.
    Ref: https://openapi-generator.tech

    Do not edit the class manually.
    """

    """
    Attributes:
      openapi_types (dict): The key is attribute name
                            and the value is attribute type.
      attribute_map (dict): The key is attribute name
                            and the value is json key in definition.
    """
    openapi_types = {
        "uuid": "str",
        "agent": "str",
        "name": "str",
        "description": "str",
        "tags": "list[str]",
        "priority": "int",
        "concurrency": "int",
        "created_at": "datetime",
        "updated_at": "datetime",
    }

    attribute_map = {
        "uuid": "uuid",
        "agent": "agent",
        "name": "name",
        "description": "description",
        "tags": "tags",
        "priority": "priority",
        "concurrency": "concurrency",
        "created_at": "created_at",
        "updated_at": "updated_at",
    }

    def __init__(
        self,
        uuid=None,
        agent=None,
        name=None,
        description=None,
        tags=None,
        priority=None,
        concurrency=None,
        created_at=None,
        updated_at=None,
        local_vars_configuration=None,
    ):  # noqa: E501
        """V1Queue - a model defined in OpenAPI"""  # noqa: E501
        if local_vars_configuration is None:
            local_vars_configuration = Configuration()
        self.local_vars_configuration = local_vars_configuration

        self._uuid = None
        self._agent = None
        self._name = None
        self._description = None
        self._tags = None
        self._priority = None
        self._concurrency = None
        self._created_at = None
        self._updated_at = None
        self.discriminator = None

        if uuid is not None:
            self.uuid = uuid
        if agent is not None:
            self.agent = agent
        if name is not None:
            self.name = name
        if description is not None:
            self.description = description
        if tags is not None:
            self.tags = tags
        if priority is not None:
            self.priority = priority
        if concurrency is not None:
            self.concurrency = concurrency
        if created_at is not None:
            self.created_at = created_at
        if updated_at is not None:
            self.updated_at = updated_at

    @property
    def uuid(self):
        """Gets the uuid of this V1Queue.  # noqa: E501

        :return: The uuid of this V1Queue.  # noqa: E501
        :rtype: str
        """
        return self._uuid

    @uuid.setter
    def uuid(self, uuid):
        """Sets the uuid of this V1Queue.

        :param uuid: The uuid of this V1Queue.  # noqa: E501
        :type: str
        """
        self._uuid = uuid

    @property
    def agent(self):
        """Gets the agent of this V1Queue.  # noqa: E501

        :return: The agent of this V1Queue.  # noqa: E501
        :rtype: str
        """
        return self._agent

    @agent.setter
    def agent(self, agent):
        """Sets the agent of this V1Queue.

        :param agent: The agent of this V1Queue.  # noqa: E501
        :type: str
        """
        self._agent = agent

    @property
    def name(self):
        """Gets the name of this V1Queue.  # noqa: E501

        :return: The name of this V1Queue.  # noqa: E501
        :rtype: str
        """
        return self._name

    @name.setter
    def name(self, name):
        """Sets the name of this V1Queue.

        :param name: The name of this V1Queue.  # noqa: E501
        :type: str
        """
        self._name = name

    @property
    def description(self):
        """Gets the description of this V1Queue.  # noqa: E501

        :return: The description of this V1Queue.  # noqa: E501
        :rtype: str
        """
        return self._description

    @description.setter
    def description(self, description):
        """Sets the description of this V1Queue.

        :param description: The description of this V1Queue.  # noqa: E501
        :type: str
        """
        self._description = description

    @property
    def tags(self):
        """Gets the tags of this V1Queue.  # noqa: E501

        :return: The tags of this V1Queue.  # noqa: E501
        :rtype: list[str]
        """
        return self._tags

    @tags.setter
    def tags(self, tags):
        """Sets the tags of this V1Queue.

        :param tags: The tags of this V1Queue.  # noqa: E501
        :type: list[str]
        """
        self._tags = tags

    @property
    def priority(self):
        """Gets the priority of this V1Queue.  # noqa: E501

        :return: The priority of this V1Queue.  # noqa: E501
        :rtype: int
        """
        return self._priority

    @priority.setter
    def priority(self, priority):
        """Sets the priority of this V1Queue.

        :param priority: The priority of this V1Queue.  # noqa: E501
        :type: int
        """
        self._priority = priority

    @property
    def concurrency(self):
        """Gets the concurrency of this V1Queue.  # noqa: E501

        :return: The concurrency of this V1Queue.  # noqa: E501
        :rtype: int
        """
        return self._concurrency

    @concurrency.setter
    def concurrency(self, concurrency):
        """Sets the concurrency of this V1Queue.

        :param concurrency: The concurrency of this V1Queue.  # noqa: E501
        :type: int
        """
        self._concurrency = concurrency

    @property
    def created_at(self):
        """Gets the created_at of this V1Queue.  # noqa: E501

        :return: The created_at of this V1Queue.  # noqa: E501
        :rtype: datetime
        """
        return self._created_at

    @created_at.setter
    def created_at(self, created_at):
        """Sets the created_at of this V1Queue.

        :param created_at: The created_at of this V1Queue.  # noqa: E501
        :type: datetime
        """
        self._created_at = created_at

    @property
    def updated_at(self):
        """Gets the updated_at of this V1Queue.  # noqa: E501

        :return: The updated_at of this V1Queue.  # noqa: E501
        :rtype: datetime
        """
        return self._updated_at

    @updated_at.setter
    def updated_at(self, updated_at):
        """Sets the updated_at of this V1Queue.

        :param updated_at: The updated_at of this V1Queue.  # noqa: E501
        :type: datetime
        """
        self._updated_at = updated_at

    def to_dict(self):
        """Returns the model properties as a dict"""
        result = {}

        for attr, _ in six.iteritems(self.openapi_types):
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = list(
                    map(lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value)
                )
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            elif isinstance(value, dict):
                result[attr] = dict(
                    map(
                        lambda item: (item[0], item[1].to_dict())
                        if hasattr(item[1], "to_dict")
                        else item,
                        value.items(),
                    )
                )
            else:
                result[attr] = value

        return result

    def to_str(self):
        """Returns the string representation of the model"""
        return pprint.pformat(self.to_dict())

    def __repr__(self):
        """For `print` and `pprint`"""
        return self.to_str()

    def __eq__(self, other):
        """Returns true if both objects are equal"""
        if not isinstance(other, V1Queue):
            return False

        return self.to_dict() == other.to_dict()

    def __ne__(self, other):
        """Returns true if both objects are not equal"""
        if not isinstance(other, V1Queue):
            return True

        return self.to_dict() != other.to_dict()
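# --- Usage sketch (not part of the generated file) ---------------------------
# Field values below are invented for illustration; the import assumes the
# package re-exports V1Queue at the top level, as OpenAPI-generated SDKs
# usually do -- otherwise import it from the module that defines the class.
import polyaxon_sdk

queue = polyaxon_sdk.V1Queue(
    name="gpu-queue",  # hypothetical queue name
    priority=5,
    concurrency=2,
    tags=["gpu", "training"],
)

# to_dict() walks `openapi_types`, so attributes left unset appear as None.
print(queue.to_dict())

# __eq__ compares dict representations, so models with identical fields are equal.
assert queue == polyaxon_sdk.V1Queue(
    name="gpu-queue", priority=5, concurrency=2, tags=["gpu", "training"]
)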
{ "pile_set_name": "Github" }
Q: Can Laravel handle high traffic apps? I am working on a PHP/MySQL social network project that will consist of many module/sections including: user system (permissions, profiles, settings, etc...) stackoverflow style badge and reputation point system Wall/stream of friends posts forums message system portfolio blog code snippets bookmarks and several other sections... Originally I had planned to build everything using Laravel framework since it's simply awesome and also does a lot of the work already. I am now questioning that though. I have not began any code yet so that is not a factor in the decision. Also my time that it takes to build any part of the site/app does not out-weigh performance. So if Laravel leads to less performance vs building from scratch but saves a ton of time. I would then prefer to spend a ton of extra time building from scratch if it means better performance and better long term. Back around 2006 I built a social network hybrid of MySpace and Facebook and did not use a framework. It gave me 100% control of every aspect of everything and greater performance as I was able to really tweak and optimize everything as my network grew in size and traffic. I think you lose some of that low level optimizing capability when using a big framework? My question could easily be mistaken as an opinion based question. To some extent it is however the core of it should be legit as far as which in theory would be the better route if performance is the priority over time to build. I have only built low traffic app with a framework like Laravel so I have no experience building a high traffic app with a framework like Laravel so I do not know how well they perform compared to without a framework. All my high traffic apps have been without a framework. Based on the type of modules/sections I listed above. Can Laravel handle these type of apps on a high traffic and large scale level? A: This question is a little vague - for a start, what's your definition of high traffic? Where I work we run a combination of hand built from the ground up code, and areas that are served by a laravel application (this is embedded in the main site and serves as much traffic as the rest of the old application code). There's been no slowdown in the areas built with laravel at all (same database sources are used and it runs on the same web servers - so useful to benchmark on). Caveats: The original hand built code is older, and doesn't always take advantage of newer PHP methods / design types. This means that it's not as efficient as it could be. Then you have overhead with laravel doing things you might not always need/want to have going on. Summing Up What it comes down to is to mockup what you think would be the heaviest part of your application within laravel, and then again with custom ground up code. Then benchmark the crap out of it. You'll more than likely find that (good) hand built work is going to be quicker. Is it worth those milliseconds? Well thats down to personal choice. Laravel is more than capable of handling large volumes of traffic, but sure, you might shave a small amount of time by not using it. Just how important is that to what you're doing? If something is slowing it down and causing you problems within Laravel - change it. It's open source after all. For reference (up to you if you count this as high traffic or not - I would): This is a UK based SASS that generally serves UK based visitors. 
At 9pm tonight (Friday evening - actually one of our quietest times) we currently have around 250,000 active PHP sessions going on. The system is served via 6 web servers (load balanced, for redundancy, traffic loads etc.) for the PHP application.
{ "pile_set_name": "StackExchange" }
Mover How Can You Avoid Being Scammed by a Mover? How do you pick the right mover, especially when the industry is known for discreditable companies looking to scam their next customer? Avoid being the next victim of a disreputable mover by learning the tricks of these fraudulent companies and educating yourself on the smart way to choose a reputable mover in your area. The Today Show recently featured a segment on "How to Avoid Being Scammed by movers." In order to avoid these scams, it's critical that you know how to spot them. Get to know the top scams that can be attempted by a mover as identified by The Today Show: The Hostage Bait and Switch Trumped Up Delivery Charges Late (or Never) Delivery Reckless Abandonment Now that you are familiar with the nicknames of common moving scams, let's go into detail about what each of these terms mean. The Today Show segment explained the way a disreputable mover gets away with these popular scams as follows: The Hostage: A mover provides you with an estimate, only to add extras on once they have your belongings in their possession. Basically, their estimate is not valid, and they can tack on as many additional costs as they'd like - doubling or tripling the cost of your move. If you want your furniture back, you have to conceded and pay the additional cost. Bait and Switch: A mover will provide you with an estimate for the cost of your move, and then switch this arranged deal at the last minute. They sell you on a low price, but in the end, the cost of your move ends up being nothing close to what you agreed to. Trumped Up Delivery Charges: A mover will tack on additional charges based on unfounded reasons. Say the mover gave you an estimate based on weight. After your valuables are on their truck, they then charge you extra claiming the cubic feet have exceeded the weight estimate of your goods. Since this is impossible to calculate, you're stuck paying the fees or forsaking your goods. Other common trumped up charges include saying packing was not included in your estimate, charging more because your goods weren't totally packed and ready for the move and so on. Late (or Never) Delivery: This scam may be the worst of the bunch. A mover will come and pack, load and promise to deliver your belongings "on-time." Then, they call saying your goods are in the back of a truck behind two other peoples' belongings, so you can't receive your furniture until theirs is delivered first. Or, if the mover has a licensing violation and their truck is impounded in transit by the Department of Transportation, all your valuables are stuck on board until the truck is released. Either of these situations result in your goods being delivered weeks late...or not at all. Reckless Abandonment: This happens often with a rogue mover. A fly by the night mover will take your money, load your belongings, then close up shop and flee, abandoning your shipment either on the truck or in a private storage facility. This scam allows the mover to take off with your money and your belongings. If you are lucky enough to find where your belongings ended up, expect to pay exorbitant amounts of money to retrieve them from the storage locale. Don't Fall Into the Traps Set by a Disreputable Mover! So, now that you know the common rip-offs and scams executed by a shady mover, how can you avoid these common pitfalls many individuals and families fall into when moving? Do not despair; not all movers are despicable. 
Having a successful, affordable relocation performed by a professional mover does not have to be a pipe dream. Below are some tips for choosing a mover for a happy, successful relocation: Check the Company's Address: A REAL mover will have a REAL address. Once you get the address, make sure you Google it or drive to the location to verify its legitimacy. Ask for Recommendations from Friends, Family and Neighbors: One of the best ways to find a great mover? Word of mouth from people they have moved before. Ask your Real Estate Agent: Real estate agents help people move all the time. They are a great source for a reputable mover. Get Three Competitive In-Home Estimates: Get three estimates from three different moving companies. If there is a significant disparity, this will help you to easily identify a fraud. Choose a mover who Bases Price off Weight, Not Cubic Feet: This will help guarantee and lock in the estimate they provide you initially. Check the mover's Complaint History: If a mover has more than eight complaints on a given complaint website (Better Business Bureau, etc.), then you might want to rethink electing them as your mover. How to Protect Yourself from a Mover's Scams There are some simple steps you can take to prevent falling into scams with a mover, and certain things you should never do when moving. Heed this advice, and benefit from a happier (and cheaper) move: Don't Fall for a Front Company: Double check that the mover has a real address, and is not just some rogue mover representing its website as an actual business. Do Not Give a Deposit: A mover that demands a deposit upfront likely has an agenda other than securely moving your belongings - like taking your money and running. If a mover demands a deposit, move on to a different company. Do NOT Pay Cash: Paying cash is asking for trouble. When you pay cash, there is no evidence of a transaction. Therefore, if your things aren't moved, or even worse, you don't get them back, you have no evidence of ever having paid for service. Make Sure the Truck is Branded: Real moving companies have real moving trucks, complete with branding and logos. To make sure crooks do not drive off with your valuables, check the truck for a company logo. Do Not Sign a Partial Contract: You would never sign a loan agreement, pre-nup or binding contract of other sorts with blanks; the same rule applies for moving contracts. Make sure the contract is complete and all filled in before signing anything. Do Not Agree to a Skinny Contract: Make sure you sign a complete moving contract, or one that is more than two pages. All of your goods should be listed on the contract. Buy Extra Insurance: A reputable mover will offer additional types of moving insurance to ensure you can have the highest protection should something happen to your valuables during the move. Allied Van Lines offers Full Valuation Coverage that totally protects your shipment should damage or loss occur. Ask About the Mover's Claims Policy: Find out more about how the company processes claims in the event you should need to file one. Should you file a claim, you want to make sure it is handled quickly and properly. You Deserve a Great Relocation Experience with a Professional Mover Everyone should be entitled to a secure, sound relocation at a fair price. By knowing the common scams carried out by a mover, using the tips to selecting a reputable mover and avoiding the things NOT to do during your move, you too, can enjoy a hassle-free, economic move experience. 
Use these three checklists and the valuable information provided by contributors on The Today Show to avoid common moving scams, saving you money and headaches. Allied Van Lines - Your Choice for a Reputable, Professional Mover Allied wants you to know that moving does not have to be overwhelming. Now, more than ever, we believe people should be able to have a professional mover handle their relocation while still receiving a fair price. Begin by allowing yourself plenty of time to move, and Allied can help you get organized regarding the rest. Choosing a Reputable Mover: How To Today's economic climate is anything but certain. Foreclosures are upwards of 30%, and people are being smarter, and more conscious, about where their money is going. People who are not relishing in this economic downturn are the most likely to become victims of moving scams. They search for the mover with the best price, not knowing that these "too good to be true" movers often are. With more than 85 years of experience in the moving business, Allied Van Lines knows the ins and outs of the industry. And, since we are a reputable moving company, we can tell you exactly how to choose a professional mover. We know you want a great move at a fair price. Here are some tips on how to choose a mover who can provide you with just that: Begin the search for the perfect mover 8 weeks prior to your move. If you aren't lucky enough to have that much time, start as soon as possible. Ask for references. Check with others around you who have recently moved for their recommendations on a great mover. Make a list of the services you need. Be sure to consider everything, from temporary storage to shipping your car to ensure you get the most accurate price, and more importantly, choose a mover who can accommodate your needs. Research companies thoroughly. Look for companies in your area that can provide the specific services you need. Eliminate those companies that do not meet your requirements. Get multiple in-home estimates. Make sure the estimate is done at your home, and have multiple companies perform them to compare costs. Use these helpful tips to help find the best mover. Just Remember: Cheapest is not always best. What may seem like a great deal upfront can actually end up costing you MUCH more in the end. Now that you know how to start the search for a superior mover, here are some tips on what to look for in a reputable mover: The mover's credentials: Make sure the mover is licensed, insured and bonded; registered with the Department of Transportation; check its standing with the Better Business Bureau; check to make sure its address is legitimate. A physical address: Visit the company's location or double-check that their address is valid on the internet. Branded trucks and uniformed men: If a plain, rental looking truck shows up at your door, be wary. Clean complaint history: Asking for a spotless complaint history may be excessive, but make sure there is not too much negative press about the company. Bill of Lading and Rights and Responsibilities: A reputable mover will give you a Bill of Lading and your Rights and Responsibilities. You are entitled to these documents and a copy of your contract during your move. If your mover is professional, you will receive copies of these documents. Finding a Mover is Easy When You Know What to Look For... 
and What to Avoid Now, you're educated about moving scams thanks to The Today Show's segment "How to Avoid Being Scammed by movers," and you have nearly fail-proof tips to choosing the right mover and being protected during the move. Arm yourself with these tips to avoid being scammed by a mover! Don't make a relocation nightmare become your reality by unknowingly choosing a rogue or disreputable mover. Now, more than ever, watching where every penny goes is of the utmost importance. Help protect your bank account - and your personal belongings - by working with a professional mover whose first priority is your successful relocation. Allied Van Lines - Your Mover of Choice At Allied Van Lines, we are committed to providing stress-free relocations at fair prices. We realize how you need a secure move, but you also need an affordable price. Contact us today to arrange your FREE IN-HOME MOVING ESTIMATE, and get your move started off on the right foot. We look forward to handling all your moving needs as your mover of choice. Submit request for a FREE moving quote Uncheck this box if you do not agree to our privacy policy or be contacted at a number you provided (including by automated dialing.) By submitting my information, I agree to the Term of Use and Privacy Policy. By submitting my information, I consent to being contacted by Allied Van Lines regarding moving opportunities at the phone number I provided, including mobile number, using an automated telephone dialing system. X What our customers say Allied Van Lines Everything went perfectly. The driver did not load. The driver was not available to load so the local agent's crew did the loading. The driver picked up everything from the warehouse. Allied Van Lines The delay in the delivery of my goods, from the big spread that was given originally. I performed the majority but the agent's crew packed the fragile. I was told the driver making the delivery was going to be loading, but an intermediate group did that. Allied Van Lines Allied Van Lines All segments of the move were made extremely easy by personnel. My contact with the company was excellent. She was on top of everything and kept in perfect communication with me. They were efficient and they were as good as they could be. The driver was excellent and he was also very efficient to where I wanted special care taken. Allied Van Lines Allied Van Lines The conduct and professionalism of all of the crew. The local agent's crew was great. They even unpacked some things for me. They were very helpful . He fixed a crack in the track for the drawer & put it back together. They were very flexible. Allied Van Lines We moved from a different state. The movers themselves, how accommodating they were, we've never done a big move, movers were terrific, long day for them who said do you want us to stay, nice people. The driver was the only one at both locations. We had to change up the time, that backed things up a bit, instead of meeting at 8 we met at noon. They were flexible in accommodating. Exchanged #'s.
{ "pile_set_name": "Pile-CC" }
I PERSONALLY hate the Palace Pier in its current form. It is a blot on the seafront that perpetuates a culture that brings Brighton down and entrenches its reputation as a cheap, out-of-date seaside destination. Today there are very much two Brightons: the inland one of vibrant creative industries, modern restaurants and a dynamic population – and the seafront of tacky sideshows, fish and chips, rock and assorted paraphernalia. Unfortunately for Brighton, a large proportion of outsiders see it primarily as a destination for the latter rather than the former. I have been working in Brighton now for five years, while still living in London, and I can say that this is pretty much universally the impression that Londoners have of the city. This is a massive public relations problem. Luckily though, it is still a big draw, otherwise commercially it would fail (as many other British seaside resorts have). Indeed, whenever Brighton Fringe happens, people ask me how we compete with Brighton Festival – I reply that Brighton Festival is quite irrelevant really as the biggest competitor to Brighton Fringe is actually the seafront. However, this is a ticking time bomb and, in good time, I believe that the Brighton seafront will go the same way as other faded Victorian seaside resorts before it and become an embarrassment. There have been some attempts to turn the arches by the Pier into an artists’ hub but it hardly makes a dent in what is currently on offer, which is more akin to West Street than anything else. The current Brighton Pier is a beautiful photo opportunity on the outside and a disappointingly poor amusement arcade on the inside, surviving as a result of the endless day trippers coming from all over the country, and indeed the world, to try it out just once before going away for ever. Brighton has the largest number of day trip visitors of anywhere in the UK after London, mostly due to the pier and pavilion, so there seems to be no shortage of people willing to spend their two pence on those machines, via Sports Direct and Primark on their way back to their coaches. I see the parades of them every day from my office in the Old Steine. Brighton needs attractions that are dynamic and more ahead of the curve. The pier is a golden opportunity to create a destination that fits in with the times. Get that right and the rest of the seafront will follow suit: proper restaurants, bars, shops, galleries, a decent performance venue. Yes, maybe still some of the same sideshows but that should not be the sole raison d’être as it currently is. In short, a chance to move into the 21st century. The i360, controversial as it may be in certain quarters, is the start of something that can change the perception of that part of the city’s seafront. It should not be a shiny corporate entity either but at least it will be bringing a new angle to attract visitors to the city. One just has to look at the lamentable state of the promenade above Madeira Drive as a forerunner for the way that the rest of the seafront could well go otherwise. But for the endless stream of car rallies and charity events, Madeira Drive would be an utter wasteland. Brighton is extremely lucky to be located where it is and to have a (relatively to the UK) good rail connection but the council is complacent if it relies on the visitor number successes the city has so far – successes that the council has presided over largely as an outsider. It’s time for the Brighton seafront to learn from the rest of the city and move along. 
Why not a Michelin star restaurant on the pier or a location for local artists to establish themselves, a theatre, cinema or a small conference venue? I believe it is still there for the taking and hasn’t been fixed properly anywhere else along the local seafront, (except perhaps with Riddle and Finns). Brighton Marina had the chance to become something the rest of Brighton seafront never was, but instead became a tacky, cynical, empty, commercial pastiche of what is already everywhere else. What a waste. Even though I still happen to live in London, due to what I do, I consider myself a proud, passionate Brightonian and I long for a seafront that reflects what the rest of the city has woken up to in terms of visitor and local resident provision.”
{ "pile_set_name": "OpenWebText2" }
o? True Suppose -3*t = 1 + 8. Let s(d) = d**3 + 6*d**2 + 2*d + 1. Let u be s(t). Suppose 10 = 5*z, 5*a + 0*z = -z + u. Is 4 a factor of a? True Suppose 5*l = r - 35, -2*r + 5*l - 15 = -70. Is r a multiple of 4? True Suppose 2*l + 11 - 1 = 0. Does 15 divide (-2)/l - 118/(-5)? False Suppose 3*k - 3*f + 0*f - 72 = 0, -25 = -5*f. Is 9 a factor of 2/(-4) + k/2? False Suppose 6*w + 25 = w. Let t(c) = c + 9. Let u be t(w). Suppose -u*z = -3*z - 10. Is z a multiple of 5? True Let j = 81 + -139. Let i = j + 101. Is 11 a factor of i? False Let q(s) = s**3 + 4*s**2 - s + 2. Let u be q(-4). Let o(w) = w**2 + w - 6. Let t be o(u). Suppose -3*l - 39 = -3*d - 2*l, 0 = 3*d - 2*l - t. Does 9 divide d? False Suppose -2*b + 39 + 13 = 0. Is b a multiple of 14? False Let q = -7 + 12. Suppose 8*l = q*l + 81. Suppose 129 = 4*f - l. Is 13 a factor of f? True Suppose 0 = -4*n + j + 33, 4*n - n + 4*j = 20. Let c = 5 - n. Is 35*1 - (-6)/c a multiple of 11? True Let g(m) = m**2 - 2*m - 3. Let k be g(3). Let j be 1 - 0 - (1 - k). Does 12 divide 23 - (j + 1 + -2)? True Suppose -5*s + 98 = 4*u - 45, -137 = -4*u - 3*s. Is 13 a factor of u? False Let q = 3 + 2. Suppose 5*n + 6 = 5*d - 19, 29 = 4*d + q*n. Is 29 + 2/(d/9) a multiple of 11? False Suppose -5*d = -15, 2*d - 9 = 4*m + 5. Let n(j) = -2*j + 1. Let t be n(m). Suppose t*l = -2*f + 216, 7*l - 2*l - 5*f - 195 = 0. Does 15 divide l? False Let w = 137 + -98. Is w a multiple of 10? False Let k = -17 - -31. Is 2 a factor of k? True Let r(z) = 3*z**3 - 3*z**2 + 3. Is 41 a factor of r(4)? False Suppose 0*d + 12 = -2*d. Does 6 divide ((-12)/(-7))/(d/(-21))? True Let u(z) = 30*z - 2. Let x be u(-2). Suppose -g = -4*g + 12. Is 13 a factor of 1/2 - x/g? False Let v = 125 - 2. Is v a multiple of 11? False Let z(u) = -u**3 + 5. Let h be z(0). Suppose -v + 3*l = -l, h*l = -2*v + 26. Does 6 divide v? False Suppose -2*q + 1 + 7 = 0. Suppose -v - 5*j = 2 - q, 0 = 3*j. Let r(b) = 13*b - 2. Is 6 a factor of r(v)? True Suppose -448 - 280 = -2*a. Suppose -4*k = l - a, 91 = k - l + 2*l. Does 25 divide k? False Suppose -4*p = 2*u - 0*p - 288, -3*p - 668 = -5*u. Does 17 divide u? True Suppose 0 = -3*c - 2*v + 298, 3*c + 4*v - 297 = v. Is c a multiple of 4? True Let z be 1/(2*3/30). Suppose z*a - 15 = 2*a. Does 2 divide a? False Let j(h) = h - 9. Let i be j(14). Let a be ((-4)/(-2))/(2/73). Suppose -i*g = -27 - a. Is 10 a factor of g? True Let n be 4/6 + (-46)/6. Does 20 divide (-219)/n + 4/(-14)? False Let m(z) = z**3 - 21*z**2 + 3*z + 43. Is 23 a factor of m(21)? False Let r = 39 + 39. Suppose 0 = 6*q - 4*q - r. Does 20 divide q? False Let w = 19 + -13. Is 3 + 6/(-3 + w) even? False Let h(a) = -8*a**3 - 6*a**2 - 7*a + 2. Does 28 divide h(-2)? True Let p = -1 - -4. Let z(i) = -i + 2. Let x be z(p). Is 5 a factor of 2 + x + (13 - 4)? True Suppose 5*m - 35 = -2*l, -2*m - 5 = 3*m. Suppose s - 3*s = -l. Does 5 divide s? True Suppose g = -5*g + 144. Let f = 14 + g. Does 10 divide f? False Suppose -5*g = -20 + 230. Does 12 divide g/70 - (-88)/5? False Let q(d) = 36*d - 27. Is 18 a factor of q(4)? False Let u(v) = 2*v**2 - 3*v - 1. Does 6 divide u(5)? False Suppose 5*g - 6 + 21 = 0. Let x = g + 14. Is x a multiple of 3? False Let q = 120 + 9. Is 20 a factor of q? False Let d = -34 - -67. Suppose 5*c - 18 = 2*n - 1, -2*c = -4*n + 6. Let z = n + d. Does 15 divide z? False Let c(d) = d**2 - 2*d - 9. Let x be c(-7). Let s = 82 - x. Does 14 divide s? True Let y(v) = -v**2 - 6*v - 5. Let m be y(-4). Suppose 9 = m*d + 12. Is 9 a factor of 26 + 3 + d + -1? True Suppose -x + 0*x - 1 = 0. 
Does 14 divide ((-14)/(-5))/(x/(-10))? True Let m = 390 + -257. Is 26 a factor of m? False Let u(p) = p**3 + 5*p**2 - 8*p - 8. Let w be u(-6). Suppose -40 = -0*t - w*t. Does 9 divide t? False Let t(z) = 11*z**2 - z - 2. Is 7 a factor of t(2)? False Let t be (-1)/4 + 25/4. Suppose t*g = 2*q + 3*g - 91, 4*g - 170 = -5*q. Does 14 divide q? False Suppose -3*m + 5*u = 2*m - 70, -30 = -2*m + u. Is m a multiple of 16? True Suppose -3*d - 2*d + 220 = 0. Let l(t) = 5*t**2 - 2*t + 1. Let z be l(1). Suppose z*h = 20 + d. Does 8 divide h? True Is (-4 + (-368)/(-6) - 4)*3 a multiple of 20? True Suppose 14 + 22 = b. Is b a multiple of 36? True Let q = -75 - -42. Let u = 13 - q. Is u a multiple of 13? False Let i be 2/3 + 20/6. Let p(t) = -t**3 + 7*t**2 - 5*t + 3. Let r be p(6). Suppose r = i*k - 51. Does 8 divide k? False Suppose 0*m = 3*m - 822. Let d = m + -165. Is d a multiple of 24? False Suppose 259 + 77 = 4*p. Is p a multiple of 21? True Suppose 5*v - x + 0*x = 123, 18 = v + 2*x. Suppose 4*o - v = o. Is o a multiple of 3? False Suppose 3*u + s + 288 = -117, -3*s - 405 = 3*u. Is u/(-12) + 1/(-4) a multiple of 11? True Let o = -105 + 414. Is 45 a factor of o? False Suppose 0 = 5*c - 3*f + 21, -2*f = 5*c - 4*f + 24. Suppose -l - 60 = -6*l. Let j = c + l. Does 3 divide j? True Let l(u) = -u + 3. Let p be l(3). Suppose -v = -p*v + 3, 3*n = -2*v + 6. Let t(s) = s**3 - 2*s**2 - 5*s + 3. Does 5 divide t(n)? True Suppose -4*d = -5*o - 221, -o + d - 3*d - 47 = 0. Let q = o + 84. Is q a multiple of 13? True Let z = -19 - -49. Does 11 divide z? False Let j(b) = -1 + 3*b + 3*b - 3*b. Let v be j(4). Let h = 18 - v. Is 7 a factor of h? True Let a be (-1 + 1)/(3 + -4). Suppose -6 = 4*k - 22. Suppose 2*d - l + 5 = k*l, a = 2*d - 3*l - 1. Does 5 divide d? True Let j be (-1 + 3)/(-2)*-100. Is 9 a factor of (-3)/(-6)*j/2? False Let w be -1 - -3 - 1*-33. Let q = -50 + 31. Let l = w + q. Is 6 a factor of l? False Suppose 17*o - 5*o - 48 = 0. Is 4 a factor of o? True Let o = 51 + -5. Is o a multiple of 28? False Let a(z) = -7*z**3 + 2*z**2 + 2*z. Let t be a(-2). Suppose -s = -4*s + t. Is 11 a factor of s? False Suppose -5*x + 1123 = -4*u - 132, -2*u = -4*x + 1010. Is 17 a factor of x? True Suppose -x + 88 = 5*q, 0 = 2*x + x - 2*q - 264. Does 12 divide x? False Suppose 0 = -2*z + z + 4. Suppose 0 = v - 11 - z. Does 5 divide v? True Let u(b) = -b**3 + 11*b**2 - 11*b + 2. Let f be u(8). Suppose -i = i - f. Does 18 divide i? False Let a(x) be the first derivative of 10*x**4 - x**3/3 + x + 2. Is 20 a factor of a(1)? True Suppose -24 = -o + 2*j, 2*j = -2*o - 0*o + 42. Is o a multiple of 6? False Let q(x) = x - 2. Let l be q(1). Is 3 + l/(3/(-54)) a multiple of 9? False Suppose t = 67 + 8. Is 6 a factor of t? False Suppose -s + 431 = -2*y, 5*s - 10*s + 2165 = -5*y. Is s a multiple of 46? False Suppose 16 + 19 = -5*t. Let z = t + 12. Suppose -2*l = d - 15, -2*d + 5*l = -z - 7. Is d a multiple of 9? False Suppose 5*y = 10 + 15. Suppose -y*u + 20 = -30. Is 5 a factor of u? True Let m(q) = 2*q + q + q - 1 + 13*q. Is 5 a factor of m(1)? False Let s(j) = -j**2 - 7*j + 3. Let i be s(-7). Suppose f - 4 = -i. Is 4 a factor of (f - (2 - 3)) + 6? True Let k = 2 - 2. Suppose k = -3*t + 21 + 15. Let q = t - 7. Is q a multiple of 2? False Suppose 1 = 3*g - 5. Let m(a) = 0*a - g*a - 4 + 2 + 4*a**2. Is m(-2) a multiple of 18? True Let g(x) = 5*x**2 - 4*x + 3. Does 11 divide g(3)? False Suppose -4*f - v + 1008 = 0, 0*v = -2*f - v + 504. Is f a multiple of 36? True Suppose -4*u + 3*u = -10. Is u a multiple of 3? False Let r(o) = o - 6. 
Let m be r(6). Let z(f) = f**3 + f**2 + 45. Is 20 a factor of z(m)? False Suppose 0 = 2*q - q - 16. Is 9 a factor of q? False Let k(a) = a**3 - 5*a**2 - 6*a - 5. Let i be k(6). Let j = i - -9. Does 4 divide j? True Let c(f) = 2*f - 4. Let y be c(5). Let n be y*(1 - (-2)/(-4)). Is 2 + -1 + n + 11 a multiple of 15? True Let b = 7 + -3. Suppose -k = -b*y + 12, 3*y + 30 = -5*k + 8*y. Let h = k + 6. Is h even? True Suppose 17 + 0 = x. Is 9 a factor of x? False Let v = -1 - -9. Let d(h) = h**3 - 7*h**2 - 8*h - 4. Let b be d(v). Is 2 + (b + 3)*-29 a multiple of 15? False Suppose -2*a - 3 = -7. Let r be 24/20*15/6. Suppose a*g = -r*g + 110. Is g a multiple of 11? True Let t be (-384)/27 - 6/(-27). Is (-6)/21 - 102/t a multiple of 2? False Let y be 3 - 3/(-3) - 2. Suppose 0*f + y*f - 37 = -m, -f + 5*m + 46 = 0. Does 11 divide f? False Let s = -16 - -16. Suppose -4*m = -2*n - 135 - 217, -m - n + 82 = s. Is m a multiple of 24? False Is 52 a factor of 8/(-10)*-5*13? True Suppose -2*u + 3*u = 5*p - 1110, 3*p = -u + 674. Is p a multiple of 23? False Let z be 7*3*100/42. Suppose 7*t - 2*t - z = 0. Is 4 a fac
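Every exercise in this block boils down to an integer divisibility test; a tiny Python sketch (written here purely for illustration, not part of the original problem set) makes the check explicit:

def is_multiple(n: int, k: int) -> bool:
    """Return True when k divides n exactly, i.e. n is a multiple of k."""
    return n % k == 0

# Two of the exercises above, restated in code:
w = 137 + -98              # w = 39
print(is_multiple(w, 10))  # False, matching the stated answer
k = -17 - -31              # k = 14
print(is_multiple(k, 2))   # True, matching the stated answer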
{ "pile_set_name": "DM Mathematics" }
Q: Using while loop for nested if statements

Although I know I'm missing something, I can't seem to get my head around a simple loop.

while (true){
    if (n < 0)
        System.out.print("less than 0");
    else if (n > 35)
        System.out.print("greater than 35");
    else
        calc(n);
}

I'm trying to use a while loop to keep asking for input until the user enters a value greater than 0 and less than 35. I have tried using continue but to no avail. Thanks in advance. I have added a screenshot of the full code; the while loop will go after the input is requested at the bottom of the code.

A:

// if using JDK 1.7 or above; otherwise close the scanner in a finally block
try (Scanner s = new Scanner(System.in)) {
    int n;
    while (true) {
        n = s.nextInt();
        if (n < 0) {
            // ask for a value greater than 0
            System.out.print("Please enter a value greater than 0 ");
            // extra print statement so input will be printed on the next line after the message
            System.out.println();
        } else if (n > 35) {
            // ask for a value less than 35
            System.out.print("Please enter a value less than 35");
            System.out.println();
        } else {
            // process the input and break once correct input has been received
            calc(n);
            break;
        }
    }
}
{ "pile_set_name": "StackExchange" }
Q: Making bootstrap navbar sticky only when navbar-collapse show

I want to make my navbar into a fixed position only if the collapsed menu is shown. It seems I can only make it permanently fixed regardless of the collapse function trigger, which is not what I want. This is what I have

<nav class="navbar navbar-expand-lg navbar-light bg-white align-items-stretch">
    <a href="{{ url('/') }}" class="navbar-brand">
        <img class="navbar-logo img-fluid" src="{{ asset('img/generic.png') }}">
    </a>
    <button class="navbar-toggler collapsed" data-toggle="collapse" data-target="#navbar_collapse" aria-expanded="false">
        <span class="navbar-toggler-icon "></span>
    </button>
    <div class="navbar-collapse collapse align-items-stretch bg-white" id="navbar_collapse">
        <!--collapse menu code-->
    </div>
</nav>

and in my css file to specify the navbar only on device version

@media (max-width: 992px) {
    .navbar-fix {
        position: fixed;
        top: 0;
        right: 0;
        left: 0;
        z-index: 10;
    }
}

and my script

$( document ).ready(function() {
    $('.navbar').click(function(){
        $('.navbar.navbar-fixed').removeClass('navbar-fixed');
        $(this).addClass('navbar-fixed');
        console.log( "nav fix" );
    });
});

Which doesn't load it back to a relative position when the collapse is hidden. And how can I specify so it's only fixed when I click on the toggler?

A: It's a little difficult to discern exactly what you are asking, but I'll give it a shot. So, when you say...

I want to make my navbar into a fixed position only if the collapsed menu is shown. It seems I can only make it permanently fixed regardless of the collapse function trigger.

It seems as though you are having difficulty changing the navbar position attribute at the lg (992px) breakpoint. Without more content on the page, it's difficult to determine what's actually happening upon hitting the breakpoint. So, I inserted your snippet into my IDE, added some filler text and played around with Chrome's dev tools to see what was happening. Let's break down the components at work here...

For the navbar, the class "navbar-expand-lg" says to expand/show the navbar when the screen is 992px or more. So the collapsed version will display only when the size is less than 992px. Now, your CSS snippet has a media query for the lg breakpoint (992px). Therefore, the styles inside the @media code block will apply when the screen is 992px or less. Since the position attribute is being set to "fixed" inside this @media query, the navbar is being set to fixed when the screen is 992px or less.

Putting it all together: you want to make the navbar fixed only when the collapsed menu is shown. Your collapsed menu is shown when the screen size is less than 992px. Your @media query is setting the navbar to fixed when the screen size is less than 992px. What may fix your issue is setting the navbar position attribute specifically for when the screen is bigger than 992px. If I didn't answer the right question, or if you were trying to remove the navbar completely except when collapsed is showing, look into the display setting to remove it at the lg breakpoint. Hope this helps!
{ "pile_set_name": "StackExchange" }
package webrtc

const (
	// Unknown defines default public constant to use for "enum" like struct
	// comparisons when no value was defined.
	Unknown = iota

	unknownStr = "unknown"
	ssrcStr    = "ssrc"
	receiveMTU = 8192
)
{ "pile_set_name": "Github" }
Measuring the ratio of someone's waist to their height is a better way of predicting their life expectancy than body mass index (BMI), the method widely used by doctors when judging overall health and risk of disease, researchers said. BMI is calculated as a person's weight in kilograms divided by the square of their height in metres, but a study found that the simpler measurement of waistline against height produced a more accurate prediction of lifespan. People with the highest waist-to-height ratio, whose waistlines measured 80 per cent of their height, lived 17 years fewer than average. Keeping your waist circumference to less than half of your height can help prevent the onset of conditions like stroke, heart disease and diabetes and add years to life, researchers said. For a 6ft man, this would mean having a waistline smaller than 36in, while a 5ft 4in woman should have a waist size no larger than 32in.
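The half-your-height rule in the last paragraph is simple arithmetic; a short Python sketch (illustrative only, not part of the study and not medical advice) applies it to the two examples given:

def waist_to_height_ratio(waist: float, height: float) -> float:
    """Both measurements must be in the same unit (inches here)."""
    return waist / height

def within_guideline(waist: float, height: float) -> bool:
    """True when the waist is under half the height, per the article's rule of thumb."""
    return waist_to_height_ratio(waist, height) < 0.5

# A 6ft (72in) man should have a waistline smaller than 36in:
print(within_guideline(35, 72))  # True  (ratio ~0.49)
# A 5ft 4in (64in) woman should have a waist size no larger than 32in:
print(within_guideline(33, 64))  # False (ratio ~0.52)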
{ "pile_set_name": "Pile-CC" }
This invention relates to a surgical closure that can be repeatedly opened and closed, especially for the abdominal wall. More particularly the invention relates to a surgical closure having fabric of plate-like securing elements that can be tightly but detachably connected to the body tissue and has a closure which can be repeatedly opened and closed. Such a surgical closure is known, for example, from German Patent 34 44 782. This surgical closure is used especially as a temporary closure for the abdominal cavity, preferably for postoperative treatment of peritonitis. Peritonitis, as a secondary form that develops as a result of a perforation of a hollow organ or as a postoperative complication, still has, even today, a high lethality. With increasing incidence, it represents a central surgical problem. The abdominal cavity is subject to a physiological, regulated fluid stream that drains mainly by small openings in the peritoneal diaphragm underside. In this way, bacteria are fed by the lymph tracts to the systemic defense mechanism. The absorption capacity of the intraperitoneal fluid is increased by the mobility of the diaphragm and intraperitoneal pressure. During peritonitis, this drainage is blocked by the pathophysiological development of fibrin and bacteria and circulation is hindered by fibrin-induced adhesions. The defense system is disrupted and a rise in bacterial counts, or their toxins and fibrin, results. If the progression of peritonitis is not stopped promptly, a pathophysiological cascade gets started whose dynamics constantly grow and, after a certain point, can no longer be stopped. To cleanse the abdominal cavity, washing with physiological saline solution is already done during the operation until the wash fluid stays clear. With this mechanical cleansing, bacterial counts, fibrin, dead tissue, toxins and also residual blood (even hemoglobin promotes the start of an infection) are to be removed as completely as possible, to provide, along with surgical removal of septic focus, an optimal condition for healing. In the postoperative phase, in which the fate of the patient is mainly determined, it is decisive to recognize a worsening of the condition as early as possible, and optionally, to remove the cause (e.g., correction of an inadequate suture after oversewing a gastric ulcer) and, by effective lavage, if possible from the first postoperative day forward, to make sure conditions are clean (blood that reappears, fibrin and bacteria are to be rinsed away). In postoperative lavage, the strategy of the open abdomen with periodic washing and the wash treatment with a closed abdomen are known. This so-called open abdomen is made possible by the sliding splint closure and by the snap closure as a temporary closure for the abdominal cavity, with the advantages that repeated intra-abdominal accessibility is guaranteed and the technician, during each washing, can be convinced of the success of the removal of septic focus, and thus, can control the course of peritonitis. In doing so, postoperative, intra-abdominal adhesions can be detached and coatings of fibrin can be removed. The typical drainage complications are eliminated. (Plugging of drainage for the abdominal wall, blockage or obstruction of drainages, infection sources.) A relaparotomy is no longer necessary. Here, the drawback is that right after the operation, washing cannot be performed and no continuous washing is possible. 
But then, periodic washing is relatively frequent and also a burden for the patient, when the patient is in critical condition. Periodic washing must be prepared carefully; it is performed in the operating room (the abdomen is open during washing) and under general anesthesia. The advantages of the principle of peritoneal dialysis must be done without, since previous temporary abdominal cavity closures do not close the abdomen tightly. The wash effect remains limited, since a desired intra-abdominal pressure is not maintained, and the wash fluid flows, preferably, only in preformed wash channels. Further, after the temporary closure of the abdominal wall, part of the wash fluid oozes into the bed which, in addition to being another source of infection, means ineffective washing, additional burden for the patient, and considerable additional expense for the nursing staff. Patients with an open abdomen belong, at that time, to the most care-intensive patients. If a so-called snap closure or sliding splint closure, as a temporary abdominal cavity closure, is infolded, another drawback comes to bear. Once cut and infolded, adaptation to the tension conditions of the abdominal wall is no longer possible. But, because of edematous swelling of inner organs during the course of peritonitis, the tension of the abdominal wall can increase considerably, with the danger that the sutures tear out. On the other hand, the edges of the incision must be brought together again gradually to the final suture of the abdominal wall later, during the healing phase in which the swelling of the inner organs decreases. Further, the typical complications of snap closures must be taken into account (constriction, tenaculum). There is no particular edge structure to infold into the fascia, so that only the individual sutures provide support. They are often not secure and tear out easily. Continuous peritoneal lavage with a closed abdomen offers the advantage that an effective washing treatment can be started immediately after the operation, and thus, the purpose of the usual Redon suction drainage can be replaced considerably more effectively. The latter has a weak suction capacity, suctioning only right at the spot where it lays. Further, it easily becomes clogged and misleads to the assumption that the incision area has already been suctioned empty. With the sealed system, an intraperitoneal pressure can be built up and dosed. In doing so, the wash fluid (possibly with antibiotic added) also reaches the critical "atmospheric corners" of the abdomen. But not only are wash channels created, as when seepage through without pressure occurs. Fewer fibrin-induced adhesions are formed, since the abdomen contents "swim" and fibrin (among other things) is effectively washed out (thus, simultaneous ileus prophylaxis). Peritoneal dialysis is possible. With it, an increase in the retention values (creatine, urea, potassium) with a threat of renal failure can conceivably be halted simply by using a commercially available dialysis fluid as the wash fluid. The patient can easily be dialyzed, without having to be put into the expensive program of hemodialysis. The associated drawbacks are also eliminated, as they can sometimes occur during the handling of blood volume. Washing can be taken over by a machine according to a desired program; thus, clear relief for the nursing personnel is possible. A chamber count of the leukocytes in the wash fluid makes it possible to monitor simply the response of the peritonitis. 
A judgment of the efflux can conceivably be performed simply by inspection (cloudiness, fibrin or blood admixtures). Wash fluid sensors to determine the resistance of bacteria can be removed by the catheter at any time, just as other substances can be administered (e.g., electrolytes, protein, heparin). The drawback here is that the abdomen is no longer accessible and thus no direct visual monitoring exists any more, which is important, when the efflux changes pathologically or the clinical condition of the patient worsens. Packing and infection source of the drainage passage points, as well as clogging or obstruction of the drainages represent typical complications. If the abdomen must be accessed again, a relaparotomy must be performed. Despite promising starts, the strategy of the closed abdomen has not been able to be used in practice, since the drawbacks predominate.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Writing a Groovy closure or some pattern to take care of transactions

I want to write some type of closure or template-method pattern in Groovy that takes care of DB transactions in one place. What I want is something like this pseudocode:

    def txMethod(codeOrSetStatementsToExecute) {
        // begin transaction
        // execute the given codeOrSetStatementsToExecute
        // end transaction by commit or rollback
    }

    def createSomething(args) {
        txMethod({
            create1statement(args)
            create2statement()
        })
    }

    def deleteSomething() {
        txMethod({
            delete1statement()
            doSomethingElse()
        })
    }

A: I've written something akin to that using JPA, some time ago. IIRC, it turned into something like this:

    import javax.transaction.UserTransaction

    class DB<T> {

        def em // the JPA EntityManager, obtained from the persistence context

        void save(T t) {
            transactional {
                em.persist(t)
                em.flush()
            }
        }

        void delete(T t) {
            transactional {
                em.remove(t)
            }
        }

        void update(T t) {
            transactional {
                em.merge(t)
                em.flush()
            }
        }

        protected UserTransaction getTransaction() {
            // get the transaction from JPA/JNDI/context/you-name-it
        }

        protected void transactional(Closure<?> closure) {
            def utx = getTransaction()
            try {
                utx.begin()
                closure.call()
                em.flush()
                utx.commit()   // commit only when the closure completed normally
            } catch (Throwable t) {
                utx.rollback() // roll back and rethrow on any failure
                throw t
            }
        }
    }
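If JPA is more machinery than you need, note that Groovy's own groovy.sql.Sql already ships a withTransaction method, which is essentially the wrapper the question asks for. The following is a minimal, self-contained sketch of my own, not taken from the question or the answer above; the in-memory H2 JDBC URL, the table name and the txMethod/createSomething names are placeholders, and the H2 driver is assumed to be on the classpath.

    import groovy.sql.Sql

    // Placeholder connection: an in-memory H2 database (driver assumed to be available).
    def sql = Sql.newInstance('jdbc:h2:mem:demo', 'sa', '', 'org.h2.Driver')

    // Generic wrapper: run the given statements in one transaction,
    // committing on success and rolling back if the closure throws.
    def txMethod = { Closure work ->
        sql.withTransaction {
            work()
        }
    }

    def createSomething = { String name ->
        txMethod {
            sql.execute 'create table if not exists things(name varchar(64))'
            sql.executeInsert 'insert into things(name) values(?)', [name]
        }
    }

    createSomething('widget')

With this in place, deleteSomething and friends are just further closures handed to txMethod, and any exception raised inside the closure rolls the whole batch back.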
{ "pile_set_name": "StackExchange" }
Even as US-based Kraft Foods has entered into a deal to acquire the global biscuit business of Groupe Danone, the arrangement does not include Groupe Danone's stakes in biscuit businesses in India......
{ "pile_set_name": "Pile-CC" }
Q: Extract array from JSON using sed and regex

I'm trying to write a script for commissioning embedded devices; they retrieve a JSON object from an API that contains an array of scripts that must be run to commission the device.

    {
      "status":"wait",
      "settings":{
        "serialNo": "123456",
        "macAddress":"ff:ff:ff:ff:ff:ff",
        "ipMode": "static|dhcp",
        "ipAddress": "192.168.0.1",
        "ipSubnet": "255.255.255.0",
        "ipGateway": "192.168.0.10",
        "ipDns": "192.168.0.10"
      },
      "scripts":[
        "https://www.google.co.uk/1",
        "https://www.google.co.uk/2",
        "https://www.google.co.uk/3"
      ]
    }

As the devices run minimal Linux installs with BusyBox, I am using sed to "parse" the JSON and retrieve the values from the object. This works fine for single parameters such as

    mac=$(echo $reply | sed -ne 's/^.*"macAddress":"\([^"]*\)".*$/\1/p')
    echo $mac
    ff:ff:ff:ff:ff:ff

I try to use a similar regex to match the contents of the array between [ and ], but when I run it through sed it returns nothing.

    scripts=$(echo $reply | sed -ne 's/"scripts":\(\[[^\[\]]*\]\)/\1/p')
    echo $scripts

What I would like it to result in is this:

    echo $scripts
    ["https://www.google.co.uk/1","https://www.google.co.uk/2","https://www.google.co.uk/3"]

A: With jq you can issue the following command:

    jq -r '.scripts[]' the.json

If you want to put this into an array, use command substitution:

    arr=( $(jq -r '.scripts[]' a.json ) )

Now you can access the individual urls using:

    echo "${arr[0]}"
    echo "${arr[1]}"
    echo "${arr[2]}"
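For the BusyBox case, here is a hedged sketch of both routes. The jq pipeline assumes jq is actually installed on the device and reads the JSON from the $reply variable used in the question; the sed one-liner is a plain POSIX BRE alternative that I have not verified on every BusyBox build, so treat it as an assumption to test on the target image.

    # jq route (assumes jq is available): iterate over the scripts array, one URL per line
    printf '%s\n' "$reply" | jq -r '.scripts[]' | while read -r url; do
        echo "would fetch and run: $url"
    done

    # sed-only fallback: capture from the '[' after "scripts": up to the first ']'.
    # Newlines are stripped first so the array is matched on a single line;
    # the captured text keeps the spacing of the pretty-printed JSON.
    scripts=$(printf '%s' "$reply" | tr -d '\n' | sed -n 's/.*"scripts": *\(\[[^]]*]\).*/\1/p')
    echo "$scripts"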
{ "pile_set_name": "StackExchange" }
<!-- YAML
added: v0.1.97
changes:
  - version: v10.0.0
    pr-url: https://github.com/nodejs/node/pull/12562
    description: The `callback` parameter is no longer optional. Not passing
                 it will throw a `TypeError` at runtime.
  - version: v7.6.0
    pr-url: https://github.com/nodejs/node/pull/10739
    description: The `path` parameter can be a WHATWG `URL` object using the
                 `file:` protocol. The support is currently still experimental.
  - version: v7.0.0
    pr-url: https://github.com/nodejs/node/pull/7897
    description: The `callback` parameter is no longer optional. Not passing
                 it will emit a deprecation warning with id DEP0013.
-->

* `path` {string|Buffer|URL}
* `uid` {integer}
* `gid` {integer}
* `callback` {Function}
  * `err` {Error}

Asynchronously changes the owner and group of a file. No arguments other than a possible exception are given to the completion callback.

See also chown(2).
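For illustration only (this snippet is not part of the manual entry above), a minimal call matching the parameter list; the file path and the numeric uid/gid values are placeholders.

    const fs = require('fs');

    // Change the owner and group of a file; the callback only receives a possible error.
    fs.chown('/tmp/example.txt', 1000, 1000, (err) => {
      if (err) throw err;
      console.log('ownership changed');
    });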
{ "pile_set_name": "Github" }
To What Have I Been Up?

I should blog more. I know. It's good for business.

Why have I not been blogging? I'm distracted by work.

What work? Improving the Rapid Software Testing methodology and training. I am going through an interesting transformation with it. I feel like I am nearing the top of a mountain, and soon I will be able to see down the other side. Not yet, though. Not quite yet. The grand book, toward which my career has been building, will have to wait, as will the big blog posts.

The most up-to-date material and ideas I put into my classes and talks. I try them out on students who come to me over Skype. I explore better ways of testing. But the real focus of my work, my passion, is to clearly understand and precisely explain the processes of deep testing as I and the best testers I know already do it. That knowledge is largely tacit, but we find ways to illuminate it and methods to help people grow it. To some degree I can make it explicit, and the parts I can't make explicit I can still make relatable.

For what it's worth, here are some of the things I've been working on:

- Exploratory processes are not merely a form of search; they are nearly always processes of personal transformation.
- The Bootstrap Approach: begin in confusion; end in precision.
- The mentalities of a tester are much more important than the techniques of testing.
- The Test Management Lens: a heuristic for getting clear on the status of testing.
- An integrated view of regression testing that wraps up all the common ideas about it.
{ "pile_set_name": "Pile-CC" }
Understanding Physically Based Rendering in Arnold

Designing materials based on physical laws can tremendously simplify shading and lighting, even when we do not necessarily strive for realism or physical accuracy. By understanding and applying a few principles, we can make images that are more believable and create materials that behave more predictably in different lighting setups. In modern renderers, physically based rendering refers to concepts like energy conservation, physically plausible scattering and layering in materials, and linear color spaces. Arnold is a physically based renderer, but it also lets you break the rules and create materials and lights that do not obey the laws of physics if you wish. In this document, we'll explain the underlying theory and how to set up your shaders to follow these principles.

In rendering we simulate photons emitted from lights, traveling through the air and bouncing off surfaces and through volumes, eventually ending up on a camera sensor. The combination of millions of photons on the camera sensor then forms the rendered image. This means that, from a physics point of view, surface shaders describe how the surface interacts with photons. Photons hitting an object can be absorbed, reflect off the surface, refract through the surface, or scatter around inside the object. The combination of these components results in a wide variety of materials.

Energy Conservation

Unless an object is a light source that emits photons, it cannot return more energy than is contributed by the incoming light. For a material to be energy conserving, the number of photons leaving the surface should be smaller than or equal to the number of incoming photons. If a material is not energy conserving, it will appear overly bright and render with increased noise, especially when using global illumination. To keep materials energy conserving, the weight and color of material components should never exceed 1. Further, we must be careful to ensure that the combination of all components is energy conserving, which we'll explain in detail later.

Materials

At the microscopic level, object surfaces are intricately detailed. For rendering, we do not use geometry to represent all of this detail, but rather use statistical models that have easy-to-understand parameters. Arnold's Standard Surface shader models objects with one or two specular layers and a diffuse or transparent interior. This model can represent a wide variety of materials. Let's look at the individual components.

Diffuse and Subsurface Scattering

First, consider the diffuse interior. Incoming photons will enter the object, scatter around inside, and either get absorbed or leave the object at another location. If photons scatter many times, we get a diffuse appearance, due to photons leaving the surface in many different locations and directions. For materials like skin, photons can scatter relatively far under the surface, giving a very soft appearance, which we render with subsurface scattering. For materials like unfinished wood, photons do not scatter very far, which gives a harder appearance, and we render these as diffuse. For thin objects like leaves, the photons can scatter all the way to the other side of the object, which we render as diffuse SSS with thin_wall enabled. Note that fundamentally all of these types of materials have the same underlying physical mechanism, even though we provide separate controls for them in the shader.
The diffuse interior also typically has the biggest influence on the overall color of the material. Each photon has an associated wavelength, and depending on the properties of the material, photons with some wavelengths are more likely to be absorbed than others. This, in turn, means that photons with the remaining wavelengths are more likely to leave the surface, which gives it a colored appearance. The skin of a red apple mostly reflects red light: only the red wavelengths are scattered back outside the apple skin, and the others are absorbed by it.

Energy Conservation

A single photon can only participate in one of the diffuse, subsurface scattering and backlighting components; for physical correctness we do not want more photons leaving the surface than entering it. For Standard Surface, it is automatically ensured that the sum of these components is not higher than 1.

Specular Scattering

(Parameter: Specular Roughness, range 0 to 1.)

The specular layer is modeled using a microfacet distribution. We assume that the surface consists of microscopic faces oriented in random directions. A surface with low roughness, such as a mirror, will have little variation between the faces, resulting in sharp reflections. With high roughness there will be a lot of variation, resulting in softer, glossy reflections.

(Image: a strong specular highlight is visible on the apple; note the table's specular reflection, which is broad and dull due to a high Specular Roughness value.)

To get variation in the highlights of the surface, a map should be connected to Specular Roughness. This will influence not only the brightness of the highlight but also its size and the sharpness of the environment reflection.

(Images: Low Specular Roughness; High Specular Roughness; a 'Scratches' texture connected to Specular Roughness.)

Transmission

Photons can not only be reflected off the surface but can refract through it as well. Photons will pass through the specular layer, typically changing direction when exiting on the other side of the layer, controlled by the index of refraction (IOR). If the interior of the surface is transparent, such as for clear glass, then photons can pass through the object and exit on the other side. If there is a diffuse interior, the photon can scatter inside the object and get absorbed or exit the object again. The more refractive the specular layer, the more the underlying diffuse interior will be visible. For materials like metals, photons refracting through the specular layer are often immediately absorbed, and so the diffuse interior is not visible.

Fresnel

The percentage of photons reflected or refracted by the specular layer is view dependent. When looking at surfaces head on, most light is refracted, while at grazing angles most light is reflected. This is called the Fresnel effect. The index of refraction controls exactly how this effect varies with the viewing angle.

(Image: variation of a specular BRDF with respect to the view direction.)
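As a hedged aside that is not part of the Arnold documentation itself: the angular behaviour just described is often summarized with the Schlick approximation to the Fresnel equations, assuming the surrounding medium is air with an index of refraction close to 1:

$$F_0=\left(\frac{n-1}{n+1}\right)^{2},\qquad F(\theta)\;\approx\;F_0+\bigl(1-F_0\bigr)\bigl(1-\cos\theta\bigr)^{5},$$

where $n$ is the index of refraction of the material, $\theta$ is the angle between the view direction and the surface normal, and $F_0$ is the reflectance at normal incidence. As $\theta$ approaches grazing incidence, $F(\theta)$ tends to 1, meaning nearly all light is reflected, which is exactly the view dependence described above.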
Opacity and Transmission

Opacity is best understood as a way to model surface geometry using textures. It does not affect how photons interact with the surface, but rather indicates where the surface's geometry is absent and the photons can pass straight through.

(Image: a ramp texture connected to the opacity.)

A typical use for opacity would be a sprite type of effect, such as cutting out the shape of a leaf from a polygon card or making the tips of hair strands transparent. Be warned, however, that scenes containing many opacity sprites (for example tree leaves) can slow down rendering considerably.

(Images: leaf opacity enabled; leaf opacity disabled; an alpha map connected to Opacity.)

Transmission depth is similar, but rather than the surface it controls the density of the object interior. Denser volumes will absorb more photons as they pass through the interior, making the object darker where it is thicker.
{ "pile_set_name": "Pile-CC" }
Time for more music announcements! We’re gonna do this for a month by the way. Gird your loins. The (un)official band of Taco Bell, Lame Genie! Shark Party Master of Ceremonies, Sam Mulligan & The Donut Slayers! Better than a bowl of Fruity Pebbles, Wreck The System! Click the artist names to read their bios!
{ "pile_set_name": "OpenWebText2" }
Assassins Pride (アサシンズプライド) is a light novel written by Kei Amagi, with illustrations by Ninomotonino. It has been published under KADOKAWA's Fantasia Bunko label since January 2016 and is a popular series with a cumulative circulation of 400,000 copies. The story is a fantasy in which the protagonist, Kufa Vampir, confronts covert missions and his own fate together with Melida Angel, a girl born into a ducal family yet without talent.

In a world where nobles possessing the power called mana bear the duty of protecting humanity, Melida Angel is an anomaly: a noble attending an academy for ability users, yet she has no mana of her own. To uncover her talent, Kufa Vampir is dispatched to her as a private tutor, carrying a secret order: "If she has no talent, assassinate her." In a society where ability is everything, Melida keeps up her unrewarded efforts, and Kufa is about to hand down a cruel decision when... "Won't you try entrusting your life to me?" Neither a mere assassin nor a mere teacher, the assassin-tutor stakes his pride on proving the girl's worth to the world!
{ "pile_set_name": "OpenWebText2" }
---
abstract: 'Several results concerning existence of solutions of a quasiequilibrium problem defined on a finite dimensional space are established. The proof of the first result is based on a Michael selection theorem for lower semicontinuous set-valued maps which holds in finite dimensional spaces. Furthermore this result allows one to locate the position of a solution. Sufficient conditions, which are easier to verify, may be obtained by imposing restrictions either on the domain or on the bifunction. These facts make it possible to yield various existence results which reduce to the well known Ky Fan minimax inequality when the constraint map is constant and the quasiequilibrium problem coincides with an equilibrium problem. Lastly, a comparison with other results from the literature is discussed.'
author:
- Marco Castellani
- Massimiliano Giuli
- Massimo Pappalardo
title: 'A Ky Fan minimax inequality for quasiequilibria on finite dimensional spaces'
---

Introduction
============

In [@Fa72] the author established the famous Ky Fan minimax inequality, which concerns the existence of solutions for an inequality of minimax type that is nowadays called an "equilibrium problem" in the literature. Such a model has gained a lot of interest in recent decades because it has been used in different contexts such as economics, engineering, physics, chemistry and so on (see [@BiCaPaPa13] for a recent survey). In these equilibrium problems the constraint set is fixed, and hence the model cannot be used in many cases where the constraints depend on the current analyzed point. This more general setting was studied for the first time in the context of impulse control problems [@BeGoLi73] and has subsequently been used by several authors for describing many problems that arise in different fields: equilibrium problems in mechanics, Nash equilibrium problems, equilibria in economics, network equilibrium problems and so on. This general format, commonly called a "quasiequilibrium problem", has received increasing interest in recent years because many theoretical results developed for one of the abovementioned models can often be extended to the others through the unifying language provided by this common format. Unlike equilibrium problems, which have an extensive literature on results concerning existence of solutions, the study of quasiequilibrium problems is still at an early stage, even though the first seminal work in this area dates back to the seventies [@Mo76]. Since then, the question of existence of solutions has been developed in several papers [@AlRa16; @Au93; @AuCoIu17; @CaGi15; @CaGi16; @Cu95; @Cu97]. Most of the results require either monotonicity assumptions on the equilibrium bifunction or upper semicontinuity of the set-valued map which describes the constraint, whereas other authors provided existence of solutions avoiding any monotonicity assumption and assuming lower semicontinuity of the constraint map and closedness of the set of its fixed points. The aim of this paper is to establish several results concerning existence of solutions of a quasiequilibrium problem defined on a finite dimensional space which come down to the Ky Fan minimax inequality in the classical setting. Our approach is based on a Michael selection result [@Mi56] for lower semicontinuous set-valued maps. Moreover, the proof of our results allows one to locate the position of a solution. The paper is organized as follows. Section 2 is devoted to recalling the results about set-valued maps which are used later.
In Section 3 we prove the main theorem and we furnish more tractable conditions on the equilibrium bifunction which guarantee that our result holds true. Basic concepts ============== Let $\Phi:X\rightrightarrows Y$ be a set-valued map with $X$ and $Y$ two topological spaces. The graph of $\Phi$ is the set $${\operatorname{gph}}\Phi:=\{(x,y)\in X\times Y:y\in \Phi(x)\}$$ and the lower section of $\Phi$ at $y\in Y$ is $$\Phi^{-1}(y):=\{x\in X:y\in \Phi(x)\}.$$ The map $\Phi$ is said to be lower semicontinuous at $x$ if for each open set $\Omega$ such that $\Phi(x)\cap\Omega\ne\emptyset$ there exists a neighborhood $U_x$ of $x$ such that $$\Phi(x')\cap\Omega\ne\emptyset,\qquad\forall x'\in U_x.$$ Notice that a set-valued map with open graph has open lower sections and, in turn, if it has open lower sections then it is lower semicontinuous. A fixed point of a function $\varphi:X\rightarrow X$ is a point $x\in X$ satisfying $\varphi(x)=x$. A fixed point of a set-valued map $\Phi:X\rightrightarrows X$ is a point $x\in X$ satisfying $x\in\Phi(x)$. The set of the fixed point of $\Phi$ is denoted by ${\operatorname{fix}}\Phi$. One of the most famous fixed point theorems for continuous functions was proven by Brouwer and it has been used across numerous fields of mathematics (see [@Bo85]). .3truecm [**Brouwer fixed point Theorem.**]{} * Every continuous function $\varphi$ from a nonempty convex compact subset $C\subseteq{{\mathbb R}}^n$ to $C$ itself has a fixed point.* .3truecm A selection of a set-valued map $\Phi:X\rightrightarrows Y$ is a function $\varphi:X\rightarrow Y$ that satisfies $\varphi(x)\in\Phi(x)$ for each $x\in X$. The Axiom of Choice guarantees that set-valued maps with nonempty values always admit selections, but they may have no additional useful properties. Michael [@Mi56] proved a series of theorems on the existence of continuous selections that assume the condition of lower semicontinuity of set-valued maps. We present here only one result [@Mi56 Theorem 3.1$^{\prime\prime\prime}$ (b)]. .3truecm [**Michael selection Theorem.**]{} * Every lower semicontinuous set-valued map $\Phi$ from a metric space to ${{\mathbb R}}^n$ with nonempty convex values admits a continuous selection.* .3truecm The Michael selection Theorem holds more in general when the domain of $\Phi$ is a perfectly normal space. Collecting the Brouwer fixed point Theorem and the Michael selection Theorem, we deduce the following fixed point result for lower semicontinuous set-valued maps. \[cor:fixed point\] Every lower semicontinuous set-valued map $\Phi$ from a nonempty convex compact subset $C\subseteq{{\mathbb R}}^n$ to $C$ itself with nonempty convex values has a fixed point. Notice that, unlike the famous Kakutani fixed point Theorem (see [@Bo85]) in which the closedness of ${\operatorname{gph}}\Phi$ is required, in Corollary \[cor:fixed point\] the lower semicontinuity of the set-valued map is needed. No relation exists between the two results as the following example shows. The set-valued map $\Phi:[0,3]\rightrightarrows [0,3]$ $$\Phi(x):=\left\{\begin{array}{ll} \{1\} & \mbox{ if } 0\leq x\leq 1\\ (1,2) & \mbox{ if } 1<x<2\\ \{2\} & \mbox{ if } 2\leq x\leq 3 \end{array}\right.$$ is lower semicontinuous and the nonemptiness of ${\operatorname{fix}}\Phi$ is guaranteed by Corollary \[cor:fixed point\]. Notice that ${\operatorname{fix}}\Phi=[1,2]$. Nevertheless the Kakutani fixed point Theorem does not apply since ${\operatorname{gph}}\Phi$ is not closed. 
On the converse, the set-valued map $\Phi:[0,3]\rightrightarrows [0,3]$ $$\Phi(x):=\left\{\begin{array}{ll} \{1\} & \mbox{ if } 0\leq x<1\\ {}[1,2] & \mbox{ if } 1\leq x\leq 2\\ \{2\} & \mbox{ if } 2<x\leq 3 \end{array}\right.$$ has closed graph and the nonemptiness of ${\operatorname{fix}}\Phi$ is guaranteed by the Kakutani fixed point Theorem. Again ${\operatorname{fix}}\Phi=[1,2]$. Since $\Phi$ is not lower semicontinuous, Corollary \[cor:fixed point\] can not be applied. We conclude this section recalling some topological notations. Given two subsets $A\subseteq C\subseteq{{\mathbb R}}^n$ we denote by ${\operatorname{int}}_C A$ and ${\operatorname{cl}}_C A$ the interior and the closure of $A$ in the relative topology of $C$ while $\partial_C A$ indicates the boundary of $A$ in $C$, i.e. $$\partial_C A:={\operatorname{cl}}_C A\setminus {\operatorname{int}}_CA = {\operatorname{cl}}_C A\cap {\operatorname{cl}}_C (C\setminus A).$$ Lastly $C$ is connected if and only if the subsets of $C$ which are both open and closed in $C$ are $C$ itself and the empty set. Existence results ================= From now on, $C\subseteq {{\mathbb R}}^n$ is a nonempty convex compact set and $f:C\times C\rightarrow {{\mathbb R}}$ is an equilibrium bifunction, that is $f(x,x)=0$ for all $x\in C$. The equilibrium problem is defined as follows: $$\label{eq:ep} \mbox{find } x\in C \mbox{ such that } f(x,y)\ge 0\mbox{ for all } y\in C.$$ Equilibrium problem has been traditionally studied assuming that $f$ is upper semicontinuous in its first argument and quasiconvex in its second one. Under such assumptions, the issue of sufficient conditions for existence of solutions of (\[eq:ep\]) was the starting point in the study of the problem. Ky Fan [@Fa72] proved a famous minimax inequality assuming compactness of $C$ and his result holds in a Hausdorff topological vector space. However, there is the possibility to slightly relax the continuity condition when the vector space is finite dimensional. The set-valued map $$\label{eq:mapF} F(x):=\{y\in C:f(x,y)<0\}$$ defined on $C$ plays a fundamental role in the formulation of our results. Clearly $F$ has open lower sections and convex values under the Ky Fan assumptions on the bifunction $f$, that is upper semicontinuity with respect to the first variable and quasiconvexity with respect to the second one. The fact that $F$ has open lower sections implies that $F$ is lower semicontinuous. If $F$ had nonempty values, Corollary \[cor:fixed point\] guarantees the existence of a fixed point of $F$. This contradicts the fact that $f(x,x)\geq 0$. Therefore there exists at least one $\bar x$ such that $F(\bar x)=\emptyset$, that is a solution of the equilibrium problem (\[eq:ep\]). The following result holds. .3truecm [**Ky Fan minimax inequality.**]{} *A solution of (\[eq:ep\]) exists whenever the set-valued map $F$ given in (\[eq:mapF\]) is lower semicontinuous and convex-valued.* .3truecm After describing this auxiliary result, we focus on the main aim of the paper. A quasiequilibrium problem is an equilibrium problem in which the constraint set is subject to modifications depending on the considered point. This format reads $$\label{eq:qep} \mbox{find } x\in K(x) \mbox{ such that } f(x,y)\ge 0\mbox{ for all } y\in K(x),$$ where $K:C\rightrightarrows C$ is a set-valued map. Our first existence result is the following. \[th:existenceQEP\] Assume that $K$ is lower semicontinuous with nonempty convex values and ${\operatorname{fix}}K$ is closed. Moreover suppose that 1. 
$F$ is convex-valued on ${\operatorname{fix}}K$, 2. $F$ is lower semicontinuous on ${\operatorname{fix}}K$, 3. $F\cap K$ is lower semicontinuous on $\partial_C{\operatorname{fix}}K$, where $F$ is the set-valued map given in (\[eq:mapF\]). Then the quasiequilibrium problem (\[eq:qep\]) has a solution. [**Proof.**]{} Corollary \[cor:fixed point\] ensures the nonemptiness of ${\operatorname{fix}}K$. If ${\operatorname{fix}}K=C$, the existence of solutions to the quasiequilibrium problem descends from the above mentioned Ky Fan minimax inequality. Otherwise, since ${\operatorname{fix}}K$ is closed and $\partial_C{\operatorname{fix}}K ={\operatorname{fix}}K\setminus {\operatorname{int}}_C{\operatorname{fix}}K$, the emptiness of $\partial_C{\operatorname{fix}}K$ it would be equivalent to ${\operatorname{fix}}K={\operatorname{int}}_C{\operatorname{fix}}K$. Therefore ${\operatorname{fix}}K$ would be both open and closed in $C$. Since every convex set is connected, the only nonempty open and closed subset of $C$ is $C$ itself and this contradicts the fact that ${\operatorname{fix}}K\ne C$. Assume that ${\operatorname{int}}_C{\operatorname{fix}}K\ne\emptyset$ (the case ${\operatorname{int}}_C{\operatorname{fix}}K=\emptyset$ is similar and will be shortly discussed at the end of the proof) and define $G:C\rightrightarrows C$ as follows $$G(x):=\left\{\begin{array}{ll} F(x) & \mbox{ if } x\in {\operatorname{int}}_C{\operatorname{fix}}K\\ F(x)\cap K(x) & \mbox{ if } x\in \partial_C{\operatorname{fix}}K\\ K(x) & \mbox{ if } x\notin {\operatorname{fix}}K \end{array}\right.$$ The proof is complete if we can show that $G(x)=\emptyset$ for some $x\in C$. Indeed, since $K$ has nonempty values, then $x\in{\operatorname{fix}}K$ and two cases are possible. If $x\in\partial_C{\operatorname{fix}}K$, then it solves (\[eq:qep\]); if $x\in{\operatorname{int}}_C{\operatorname{fix}}K$ then it solves (\[eq:ep\]). In both cases the quasiequilibrium problem has a solution. Assume by contradiction that $G$ has nonempty values. Next step is to prove the lower semicontinuity of $G$. Fix $x\in C$ and an open set $\Omega\subseteq{{\mathbb R}}^n$ such that $G(x)\cap\Omega\cap C\ne\emptyset$. We distinguish three cases. 1. If $x\in{\operatorname{int}}_C{\operatorname{fix}}K$, from the lower semicontinuity of $F$ there exists a neighborhood $U'_x$ such that $$F(x')\cap\Omega\cap C\ne\emptyset,\qquad\forall x'\in U'_x\cap{\operatorname{fix}}K$$ which implies $$G(x')\cap\Omega\cap C\ne\emptyset,\qquad\forall x'\in U'_x\cap{\operatorname{int}}_C{\operatorname{fix}}K.$$ Since $U'_x\cap{\operatorname{int}}_C{\operatorname{fix}}K$ is open in $C$, then $G$ is lower semicontinuous at $x$. 2. If $x\in\partial_C{\operatorname{fix}}K=\partial_C(C\setminus{\operatorname{fix}}K)$ from the lower semicontinuity of $F$, $K$ and $F\cap K$ there exist neighborhoods $U'_x$, $U''_x$ and $U'''_x$ such that $$\begin{aligned} F(x')\cap\Omega\cap C\ne\emptyset, & \qquad & \forall x'\in U'_x\cap{\operatorname{fix}}K,\\ K(x')\cap\Omega\cap C\ne\emptyset, & \qquad & \forall x'\in U''_x\cap C,\\ F(x')\cap K(x')\cap\Omega\cap C\ne\emptyset, & \qquad & \forall x'\in U'''_x\cap \partial_C{\operatorname{fix}}K.\end{aligned}$$ Then $$G(x')\cap\Omega\cap C\ne\emptyset,\qquad\forall x'\in U'_x\cap U''_x\cap U'''_x\cap C,$$ i.e. $G$ is lower semicontinuous at $x$. 3. 
Finally, if $x\notin{\operatorname{fix}}K$, from the lower semicontinuity of $K$ there exists a neighborhood $U'_x$ such that $$K(x')\cap\Omega\cap C\ne\emptyset,\qquad\forall x'\in U'_x\cap C.$$ Then $$G(x')\cap\Omega\cap C\ne\emptyset,\qquad\forall x'\in U'_x\cap (C\setminus{\operatorname{fix}}K).$$ Since $U'_x\cap(C\setminus{\operatorname{fix}}K)$ is open in $C$, then $G$ is lower semicontinuous at $x$. Since by assumption $G$ is also convex-valued, then all the conditions of Corollary \[cor:fixed point\] are satisfied and there exists $x\in{\operatorname{fix}}G$. Clearly $x\in{\operatorname{fix}}K$ and therefore $x\in{\operatorname{fix}}F$ which implies $f(x,x)<0$ and contradicts the assumption on $f$. The issue of ${\operatorname{int}}_C{\operatorname{fix}}K=\emptyset$ remains to be seen. In this case $\partial_C{\operatorname{fix}}K={\operatorname{cl}}_C{\operatorname{fix}}K={\operatorname{fix}}K$ and $G$ assumes the following form $$G(x):=\left\{\begin{array}{ll} F(x)\cap K(x) & \mbox{ if } x\in{\operatorname{fix}}K\\ K(x) & \mbox{ if } x\notin {\operatorname{fix}}K \end{array}\right.$$ The result is obtained by adapting the argument used before. It is clear from the proof that the assertion remains valid if $f(x,x)=0$ on $C\times C$ is replaced by the weaker $f(x,x)\ge0$ for all $x\in{\operatorname{fix}}K$. \[re:alternative\] The proof of Theorem \[th:existenceQEP\] allows to establish that a solution of (\[eq:qep\]) belongs to $$\partial_C{\operatorname{fix}}K\cup \{x\in {\operatorname{int}}_C{\operatorname{fix}}K: x \mbox{ solves } (\ref{eq:ep})\}.$$ In particular if (\[eq:ep\]) has no solution then Theorem \[th:existenceQEP\] ensures that a solution of (\[eq:qep\]) lies on the boundary of ${\operatorname{fix}}K$. \[re:fan\] By specializing to $K(x):=C$, for all $x\in C$, Theorem \[th:existenceQEP\] becomes the Ky Fan minimax inequality. Indeed ${\operatorname{fix}}K=C$ and conditions i) and ii) coincide with the assumptions in Ky Fan minimax inequality. Instead, since $\partial_C{\operatorname{fix}}K=\emptyset$, condition iii) is trivially satisfied. Theorem \[th:existenceQEP\] is strongly related to [@Cu95 Lemma 3.1]. The two sets of conditions differ only in that the lower semicontinuity of $F\cap K$ on the whole space $C$ assumed in [@Cu95 Lemma 3.1] is here replaced by the lower semicontinuity of $F$ on ${\operatorname{fix}}K$ and the lower semicontinuity of $F\cap K$ on $\partial_C {\operatorname{fix}}K$. We provide an example in which the results are not comparable to each other. Let $C:=[0,1]$ and $$f(x,y):=\left\{\begin{array}{ll} -1 & \mbox{ if } x=0 \mbox{ and } y\in(0,1]\\ 0 & \mbox{ otherwise} \end{array}\right.$$ If $K(x):=\{x\}$, for all $x\in [0,1]$, then $F\cap K=\emptyset$ is trivially lower semicontinuous and the assumptions of [@Cu95 Lemma 3.1] are satisfied. Instead $F$ is not lower semicontinuous at $0\in{\operatorname{fix}}K=[0,1]$. On the other hand if $K(x):=\{1-x\}$, for all $x\in [0,1]$, then ${\operatorname{fix}}K=\{1/2\}$, the assumptions of Theorem \[th:existenceQEP\] are trivially satisfied, but $F\cap K$ is not lower semicontinuous at $0$. It would be desirable to find more tractable conditions on $f$, disjoint from the ones assumed on $K$, which guarantee that all the assumptions i), ii) and iii) of Theorem \[th:existenceQEP\] are satisfied. Clearly the convexity of $F(x)$ can be deduced from the quasiconvexity of $f(x,\cdot)$ for all $x\in{\operatorname{fix}}K$. 
While the upper semicontinuity of $f(\cdot,y)$ on ${\operatorname{fix}}K$ implies that $F^{-1}(y)$ is open on ${\operatorname{fix}}K$ and hence $F$ is lower semicontinuous on ${\operatorname{fix}}K$. The last part of this section is devoted to furnish sufficient conditions for assumption iii), i.e. which guarantee the lower semicontinuity of the set-valued map $F\cap K$ on $\partial_C{\operatorname{fix}}K$. We propose two approaches. The former one consists in exploiting the following result in [@Pa91]. \[pr:lsc intersection\] Let $\Phi_1,\Phi_2:X\rightrightarrows Y$ be set-valued maps between two topological spaces. Assume that ${\operatorname{gph}}\Phi_1$ is open on $X\times Y$ and $\Phi_2$ is lower semicontinuous. Then $\Phi_1\cap\Phi_2$ is lower semicontinuous. Since $K$ is assumed to be lower semicontinuous, we investigate which assumptions ensure the open graph of $F$ given in (\[eq:mapF\]), that is the openness of the set $$\label{eq:open} \{(x,y)\in \partial_C{\operatorname{fix}}K\times C:f(x,y)< 0\}.$$ Hence, Theorem \[th:existenceQEP\] still works by using this condition instead of iii). It is interesting to compare this fact with [@Cu97 Theorem 2.1] where the openness of the set $\{(x,y)\in C\times C:f(x,y)< 0\}$ is required instead of the openness of (\[eq:open\]) and the lower semicontinuity of $F$ on ${\operatorname{fix}}K$. One should not overlook the fact that even though the results are formally similarly formulated, unlike our result, [@Cu97 Theorem 2.1] does not reduce to Ky Fan minimax inequality when $K(x)=C$, for all $x\in C$. An open graph result is [@Zh95 Proposition 2] which affirms that if $X$ is a topological space and $\Phi:X\rightrightarrows {{\mathbb R}}^n$ is a set-valued map with convex values, then $\Phi$ has open graph in $X\times{{\mathbb R}}^n$ if and only if $\Phi$ is lower semicontinuous and open valued. This fact has been used to establish the existence of continuous selections, maximal elements, and fixed points of correspondences in various economic applications. Up to translations, this result also holds when the codomain of $\Phi$ is an affine subset of ${{\mathbb R}}^n$ [@Yu98 Theorem 1.12]. We recall that an affine set of ${{\mathbb R}}^n$ is the translation of a vector subspace. Moreover, the affine hull of a set $C$ in ${{\mathbb R}}^n$, which is denoted by ${\operatorname{aff}}C$, is the smallest affine set containing $C$, or equivalently, the intersection of all affine sets containing $C$. \[th:sufficientconditions1\] Let $A\supseteq C$ be an open set on ${\operatorname{aff}}C$ and $\hat{f}:C\times A\rightarrow {{\mathbb R}}$ be a bifunction such that $\hat{f}(x,y)=f(x,y)$ for all $(x,y)\in C\times C$. Denote by $\hat{F}$ the set-valued map $$\hat{F}(x):=\{y\in A:\hat{f}(x,y)<0\}$$ defined on $C$ and assume that $K$ is lower semicontinuous with nonempty convex values and ${\operatorname{fix}}K$ is closed. Moreover suppose that 1. $\hat{F}$ is convex-valued on ${\operatorname{fix}}K$, 2. $\hat{F}$ has open lower sections on ${\operatorname{fix}}K$, 3. $\hat{F}(x)$ is open on ${\operatorname{aff}}C$ for all $x\in\partial_C{\operatorname{fix}}K$. Then the quasiequilibrium problem (\[eq:qep\]) has a solution. [**Proof.**]{} We have to show that all the assumptions of Theorem \[th:existenceQEP\] are fulfilled. Since the set-valued map $F$ given in (\[eq:mapF\]) can be expressed as $\hat F\cap C$, i) implies that $F$ is convex-valued on ${\operatorname{fix}}K$ and ii) implies that $F$ is open lower section on ${\operatorname{fix}}K$. 
In particular $F$ is lower semicontinuos on ${\operatorname{fix}}K$. Furthermore assumption iii) allows to apply [@Yu98 Theorem 1.12] which ensures that ${\operatorname{gph}}\hat F$ is open on $\partial_C{\operatorname{fix}}K\times {\operatorname{aff}}C$. Hence ${\operatorname{gph}}F={\operatorname{gph}}\hat F\cap (\partial_C{\operatorname{fix}}K\times C)$ is open on $\partial_C{\operatorname{fix}}K\times C$ and Proposition \[pr:lsc intersection\] guarantees that the intersection map $F\cap K$ is lower semicontinuous on $\partial_C{\operatorname{fix}}K$. The open graph result [@Zh95 Proposition 2] no longer holds when ${{\mathbb R}}^n$ (or an affine space) is replaced with an infinite dimensional Hilbert space [@Ba12]. However if $C\subset{{\mathbb R}}^n$ is a polytope, that is the convex hull of a finite set, then every $\Phi:X\rightrightarrows C$ with open lower sections and convex open values has open graph [@Bo85 Proposition 11.14]. This fact can be used for proving our next result. \[th:sufficientconditions2\] Assume that $C$ is a polytope and $K$ is lower semicontinuous with nonempty convex values and ${\operatorname{fix}}K$ is closed. Moreover suppose that 1. $F$ is convex-valued on ${\operatorname{fix}}K$, 2. $F$ has open lower sections on ${\operatorname{fix}}K$, 3. ì$F(x)$ is open on $C$ for all $x\in \partial_C {\operatorname{fix}}K$, where $F$ is the set-valued map given in (\[eq:mapF\]). Then the quasiequilibrium problem (\[eq:qep\]) has a solution. [**Proof.**]{} The set-valued map $F$ has open lower sections, convex and open values. Then its graph is open on $\partial_C {\operatorname{fix}}K\times C$ [@Bo85 Proposition 11.14] and the lower semicontinuity of $F\cap K$ follows from Proposition \[pr:lsc intersection\]. Notice that the lower semicontinuity condition ii) assumed in Theorem \[th:existenceQEP\] has been replaced in the last two results by the requirement that the lower sections are open. This is due to two different reasons. In the proof of Theorem \[th:sufficientconditions1\], in order to apply [@Yu98 Theorem 1.12] and get that ${\operatorname{gph}}\hat F$ is open, it would be enough to require the lower semicontinuity of $\hat F$. However such an assumption would not guarantee the lower semicontinuity of $F=\hat F\cap C$ which is assumption ii) in Theorem \[th:existenceQEP\]. On the other hand, assumption ii) in Theorem \[th:sufficientconditions2\] is necessary to get the openness of ${\operatorname{gph}}F$ as a consequence of [@Bo85 Proposition 11.14]. The next example shows that a set-valued map $\Phi$ acting from a topological vector space to a polytope $C$ may not have open graph and [@Bo85 Proposition 11.14] fails even if it is lower semicontinuous with convex and open values. Let $C:=\{(x,y)\in {{\mathbb R}}^2:|x|+|y|\leq 1\}$ be a closed convex set in ${{\mathbb R}}^2$. The set-valued map $\Phi:[0,1]\rightarrow C$ defined by $$\Phi(t):=\left\{\begin{array}{ll} C\setminus \{(x,y):x+y=1\} & \mbox{ if } t>0\\ C & \mbox{ if } t=0 \end{array}\right.$$ is lower semicontinuous with convex open values in $C$ but it has not open lower sections since $\phi^{-1}(0,1)=\{0\}$. Nevertheless ${\operatorname{gph}}\Phi$ is not open in $[0,1]\times C$ since the sequence $ \{(n^{-1},1-n^{-1},n^{-1})\}\in [0,1]\times C$ does not belong to ${\operatorname{gph}}\Phi$ but its limit $(0,1,0)\in{\operatorname{gph}}\Phi$. 
We answer in the negative the question posed in [@BePaRa76] where the authors affirm that they do not know whether [@Bo85 Proposition 11.14] can be generalized to the case where $C$ is an arbitrary convex subset of ${{\mathbb R}}^n$. This also explains why we need to extend the domain of $f(x,\cdot)$ from $C$ to an open subset of ${\operatorname{aff}}C$ in Theorem \[th:sufficientconditions1\]. \[ex:graphnotopen\] Let $C\subseteq {{\mathbb R}}^2$ be the closed unit ball. The set-valued map $\Phi:[0,1]\rightrightarrows C$ defined by $$\Phi(x):=\left\{\begin{array}{ll} C\setminus \{(\cos x,\sin x)\} & \mbox{ if } x>0\\ C & \mbox{ if } x=0 \end{array}\right.$$ has open lower sections and convex open values in $C$. Nevertheless ${\operatorname{gph}}\Phi$ is not open in $[0,1]\times C$. Indeed $(1,0)\in\Phi(0)$ and there is no neighborhood $U$ of $(1,0)$ such that $U\cap C\subseteq\Phi(x)$ for $x$ small enough. A second possible approach for the lower semicontinuity of $F\cap K$ could be to show the nonemptiness of the intersection between the interior of $F$ and $K$. Indeed [@BoGeMyOb84 Corollary 1.3.10] affirms that the set-valued map $\Phi_1\cap\Phi_2$ is lower semicontinuous on the topological space $X$ provided that $\Phi_1,\Phi_2:X\rightrightarrows C$ are convex-valued, lower semicontinuous set-valued maps and $$\label{eq:inte} \Phi_1(x)\cap\Phi_2(x)\neq \emptyset\quad\Rightarrow\quad\Phi_1(x)\cap{\operatorname{int}}\Phi_2(x)\neq \emptyset.$$ The following example shows that such result could not be guaranteed (as erroneously stated in [@Yu98 Theorem 1.13]) if the interior is replaced by the relative interior in condition (\[eq:inte\]). Given a set $C\subseteq{{\mathbb R}}^n$, we denote by ${\operatorname{ri}}C$ the relative interior of $C$, namely, ${\operatorname{ri}}C={\operatorname{int}}_{{\operatorname{aff}}C}C$. Let $C\subseteq {{\mathbb R}}^2$ be the closed unit ball and $\Phi_1:[0,1]\rightrightarrows C$ be defined as in Example \[ex:graphnotopen\]. Consider $\Phi_2:[0,1]\rightrightarrows C$ defined by $$\Phi_2(x):=\{(\cos x,\sin x)\}\qquad \forall x\in[0,1].$$ Then $\Phi_2$ is a continuous single-valued map and $\Phi_1$ is convex-valued with open lower sections. Furthermore $$\Phi_1(x)\cap\Phi_2(x)=\left\{\begin{array}{ll} \emptyset & \mbox{ if } x>0\\ \{(1,0)\} & \mbox{ if } x=0 \end{array}\right.$$ and $\Phi_1(0)\cap {\operatorname{ri}}\Phi_2(0)=C\cap \{(1,0)\}=\{(1,0)\}$. Nevertheless $\Phi_1\cap\Phi_2$ is not lower semicontinuous at $0$. Notice that $\Phi_1(x)$ is even open on $C$, for all $x\in[0,1]$. The following is a correct version of [@Yu98 Theorem 1.13]. \[pr:lsc intersection1\] Let $X$ be a topological space, $C\subseteq{{\mathbb R}}^n$ and $\Phi_1,\Phi_2:X\rightrightarrows C$ be lower semicontinuous and convex-valued. Moreover, for all $x\in X$ assume that ${\operatorname{aff}}\Phi_2(x)={\operatorname{aff}}C$ and $$\Phi_1(x)\cap\Phi_2(x)\neq \emptyset\quad \Rightarrow\quad\Phi_1(x)\cap{\operatorname{ri}}\Phi_2(x)\neq \emptyset$$ then $\Phi_1\cap \Phi_2$ is lower semicontinuous. [**Proof.**]{} By definition, up to isomorphism, there exists $m\leq n$ such that ${\operatorname{aff}}C=x_0+{{\mathbb R}}^m$, where $x_0\in C$ is arbitrarily fixed. Define $\hat\Phi_i:X\rightrightarrows {{\mathbb R}}^m$ by $\hat\Phi_i:=\Phi_i-x_0$, $i=1,2$. Then $\hat\Phi_1$ and $\hat\Phi_2$ are lower semicontinuous and convex-valued. 
Furthermore, since ${\operatorname{aff}}\Phi_2(x)={\operatorname{aff}}C$, then ${\operatorname{ri}}\Phi_2(x)=x_0+{\operatorname{int}}\hat\Phi_2(x)$ and $\hat\Phi_1(x)\cap {\operatorname{int}}\hat\Phi_2(x)\neq \emptyset$ whenever $\hat\Phi_1(x)\cap \hat\Phi_2(x)\neq \emptyset$. By [@BoGeMyOb84 Corollary 1.3.10] it follows that $\hat\Phi_1\cap \hat\Phi_2$ is lower semicontinuous. This means in turn that $\Phi_1\cap \Phi_2$ is lower semicontinuous. Now we are in position to prove our last existence result. \[th:sufficientconditions3\] Assume that $K$ is lower semicontinuous with nonempty convex values and ${\operatorname{fix}}K$ is closed. Moreover suppose that 1. $F$ is convex-valued on ${\operatorname{fix}}K$, 2. $F$ is lower semicontinuous on ${\operatorname{fix}}K$, 3. ${\operatorname{aff}}K(x)={\operatorname{aff}}C$, for all $x\in \partial_C {\operatorname{fix}}K$, 4. $F(x)$ is open on $C$, for all $x\in \partial_C {\operatorname{fix}}K$, where $F$ is the set-valued map given in (\[eq:mapF\]). Then the quasiequilibrium problem (\[eq:qep\]) has a solution. [**Proof.**]{} It is enough to show that assumption iii) of Theorem \[th:existenceQEP\] holds, i.e. $F\cap K$ is lower semicontinuous on $\partial_C{\operatorname{fix}}K$. Let $x\in\partial_C{\operatorname{fix}}K$ be fixed and assume that $F(x)\cap K(x)\neq\emptyset$ (otherwise the intersection is trivially lower semicontinuous at $x$). By assumption there exists an open set $\Omega\subseteq {{\mathbb R}}^n$ such that $F(x)=\Omega\cap C$. Then $$\emptyset\neq F(x)\cap K(x)=\Omega\cap C\cap K(x)=\Omega\cap K(x).$$ From [@Ro70 Corollary 6.3.2] we get $$\emptyset\neq \Omega\cap {\operatorname{ri}}K(x)=F(x)\cap {\operatorname{ri}}K(x)$$ The lower semicontinuity of $F\cap K$ at $x$ follows from Proposition \[pr:lsc intersection1\]. Now we make a comparison with an analogous result in [@Cu95]. The assumptions of Theorem \[th:sufficientconditions3\] are the same as those of [@Cu95 Theorem 3.2] except that conditions iii) and iv) must be verified for all $x\in \partial_C{\operatorname{fix}}K$ instead of for all $x\in C$. Thus, Theorem \[th:sufficientconditions3\] is clearly more general and, unlike [@Cu95 Theorem 3.2], it reduces to Ky Fan minimax inequality when the constraint set-valued map $K$ is equal to $C$. Conclusions =========== In this paper existence results for the solution of finite dimensional quasiequilibrium problems are obtained by using a Michael selection result for lower semicontinuous set-valued maps. The peculiarity of our results, which make them different from other results in the literature to the best of knowledge of the authors, is the fact that they reduce to Ky Fan minimax inequality when the constraint map is constant. Moreover we provide information regarding the position of a solution. In fact either it is a fixed point of the constraint set-valued map which solves an equilibrium problem or it lies in the boundary of the fixed points set. To know this property seems promising for the construction of solution methods. Future works could be devoted to exploit such result to propose computational techniques for solving quasiequilibrium problems. Another possible advance consists in studying conditions which permit to replace the compactness of the domain with suitable coercivity conditions on the equilibrium bifunction. [00]{} Fan K.: A minimax inequality and applications. In: Shisha O. (ed.): Inequalities III, pp. 103–113. 
Academic Press, New York (1972)

Bigi G., Castellani M., Pappalardo M., Passacantando M.: Existence and solution methods for equilibria. European J. Oper. Res. 227, 1–11 (2013)

Bensoussan A., Goursat M., Lions J.L.: Contrôle impulsionnel et inéquations quasi-variationnelles stationnaires. C.R. Acad. Sci. Paris Sér. A 276, 1279–1284 (1973)

Mosco U.: Implicit variational problems and quasi variational inequalities. In: Lecture Notes in Math., vol. 543, pp. 83–156. Springer-Verlag, Berlin (1976)

Alleche B., Rădulescu V.D.: Solutions and approximate solutions of quasi-equilibrium problems in Banach spaces. J. Optim. Theory Appl. 170, 629–649 (2016)

Aubin J.P.: Optima and equilibria. Springer-Verlag, Berlin (1993)

Aussel D., Cotrina J., Iusem A.: Existence results for quasi-equilibrium problems. J. Convex Anal. 24, 55–66 (2017)

Castellani M., Giuli M.: An existence result for quasiequilibrium problems in separable Banach spaces. J. Math. Anal. Appl. 425, 85–95 (2015)

Castellani M., Giuli M.: Approximate solutions of quasiequilibrium problems in Banach spaces. J. Global Optim. 64, 615–620 (2016)

Cubiotti P.: Existence of solutions for lower semicontinuous quasiequilibrium problems. Comput. Math. Appl. 30, 11–22 (1995)

Cubiotti P.: Existence of Nash equilibria for generalized games without upper semicontinuity. Internat. J. Game Theory 26, 267–273 (1997)

Michael E.: Continuous selections. I. Ann. of Math. 63, 361–382 (1956)

Border K.C.: Fixed point theorems with applications to economics and game theory. Cambridge University Press, Cambridge (1985)

Papageorgiou N.S.: On the existence of $\psi$-minimal viable solutions for a class of differential inclusions. Arch. Math. 27, 175–182 (1991)

Zhou J.: On the existence of equilibrium for abstract economies. J. Math. Anal. Appl. 193, 839–858 (1995)

Yuan G.X.-Z.: The study of minimax inequalities and applications to economies and variational inequalities. Memoirs of the American Mathematical Society, vol. 132. Providence, Rhode Island (1998)

Bagh A.: Lower hemi-continuity, open sections, and convexity: counter examples in infinite dimensional spaces. Theoret. Econom. Lett. 2, 121–124 (2012)

Bergstrom T.C., Parks R.P., Rader T.: Preferences which have open graphs. J. Math. Econom. 3, 265–268 (1976)

Borisovich Y., Gel’man B.D., Myshkis A.D., Obukhovskii V.V.: Multivalued mappings. J. Soviet Math. 24, 719–791 (1984)

Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
{ "pile_set_name": "ArXiv" }
---
abstract: 'We observed atmospheric gamma-rays around 10 GeV at balloon altitudes (15$\sim$25 km) and at a mountain (2770 m a.s.l). The observed results were compared with Monte Carlo calculations to find that an interaction model (Lund Fritiof1.6) used in an old neutrino flux calculation was not good enough for describing the observed values. Instead, we found that two other nuclear interaction models, Lund Fritiof7.02 and dpmjet3.03, gave much better agreement with the observations. Our data will serve for examining nuclear interaction models and for deriving a reliable absolute atmospheric neutrino flux in the GeV region.'
author:
- 'K. Kasahara'
- 'E. Mochizuki'
- 'S. Torii'
- 'T. Tamura'
- 'N. Tateyama'
- 'K. Yoshida'
- 'T. Yamagami'
- 'Y. Saito'
- 'J. Nishimura'
- 'H. Murakami'
- 'T. Kobayashi'
- 'Y. Komori'
- 'M. Honda'
- 'T. Ohuchi'
- 'S. Midorikawa'
- 'T. Yuda'
bibliography:
- 'betsgamma.bib'
title: 'Atmospheric gamma-ray observation with the BETS detector for calibrating atmospheric neutrino flux calculations'
---

Introduction
============

The discovery of evidence for neutrino oscillation by the Super-Kamiokande group[@skoscillation] is based on the comparison of the observed atmospheric neutrino flux with calculated values. Although the conclusion is derived in such a way that it would not be upset by the uncertainty of the absolute flux value, it is desirable to obtain a reliable expected neutrino flux (under the no-oscillation assumption) for further detailed discussions. Two major sources of uncertainty in the atmospheric neutrino flux calculation are 1) the primary cosmic-ray spectrum and 2) the propagation of cosmic rays in the atmosphere, especially the modeling of the nuclear interactions. The absolute flux calculations so far made by various groups are expected to have an uncertainty of $\sim$ 30 %[@GHreview].

The primary proton and He spectra recently measured with magnet spectrometers by the BESS [@bess1ry] and AMS[@ams1ry] groups agree very well and seem reliable. Therefore, we may take it that the first problem mentioned above has now been almost settled, at least up to 100 GeV/n. This means that if we have reliable atmospheric cosmic-ray flux data, we may compare them with a calculation which uses such primaries and test the validity of nuclear interaction models.

For such an atmospheric cosmic-ray component, one may first think of muons, and indeed some new observations have been or are being made[@capricemuon1; @capricemuon2; @bessmuonnori]. As a secondary cosmic-ray component, we focused on gamma-rays, which are easy to measure with our detector. A good model should be able to explain muons and gamma-rays simultaneously. Muons are important since they are directly coupled with neutrinos, but their flux is affected to some extent by the structure of the atmosphere, which is usually not well known. Compared to muons, the flux of gamma-rays is substantially lower, but it is almost insensitive to the atmospheric structure and depends only on the total thickness down to the observation height.

In 1998, we performed our first gamma-ray observation with our detector at Mt. Norikura (2770 m a.s.l) in Japan, and also made two subsequent successful observations at balloon altitudes (15 $\sim$ 25 km) in 1999 and 2000. In the present paper, we report the final results of these observations and their consequences.
The Detector
============

For our observation, we upgraded the BETS (Balloon-borne Electron Telescope with Scintillating fibers) detector, which had been developed for the observation of cosmic primary electrons in the 10 GeV region. Its details before being upgraded for gamma-ray observation are given in [@betsnim], and the electron observation results are in [@betselec]. The basic performance was tested at CERN using electron, proton and pion beams of 10 to 200 GeV[@betsnim; @betscern]. Although this was undertaken before the upgrading, we can essentially use that calibration for the current observations, partly with the help of Monte Carlo simulations. Figure \[det\] shows a schematic structure of the main body of BETS. The calorimeter has a lead thickness of 7.1 r.l. and a cross-section of 28 cm $\times$ 28 cm. The whole detector system is contained in a pressure vessel made of thin aluminum.

![Schematic illustration of the main body of the detector. S1, S2 and S3 are 1 cm thick plastic scintillators used for the trigger. Each fiber has a 1 mm diameter. Originally, nuclear emulsion plates were placed on the upper scifi's and also inserted between the upper thin lead plates for a detailed investigation of the tracking capability of the scifi; they are kept in the present system to have the same structure as at the calibration time. The inlaid cascade shows charged-particle tracks from a simulation for a 30 GeV incident proton. \[det\]](detconfigwithshower.eps){width="92mm"}

  R.M.S energy resolution (%)         21, 18, 15 (for $\theta\sim 15^\circ$)
  ----------------------------------- -------------------------------------------
  S$\Omega$ (cm$^2$sr)                243, 240, 218 (at $\sim$20 km)
  R.M.S angular resolution (deg)      2.3, 1.3, 1.0 (for $\theta\sim 15^\circ$)
  Total number of scifi's             10080
  Weight including electronics (kg)   230
  Cross-section of the main body      28 cm $\times$ 28 cm
  Thickness (Pb radiation length)     7.1

  : Basic characteristics of BETS\
  (triple numbers in the table are for gamma-ray energies of 5, 10, and 30 GeV, respectively)
  \[basicchara\]

The main feature of the BETS detector is that it is a tracking calorimeter; it contains a number of sheets consisting of 1 mm diameter scintillating fibers (scifi), many of which are sandwiched between lead plates. The total number of scifi's is 10080. The sheets are grouped into two types: one serves for the x and the other for the y position measurement. Each of them is fed to an image intensifier which in turn is connected to a CCD. Thus, the two CCD outputs give us an $x-y$ image of the cascade shower development and enable us to discriminate gamma-rays and electrons from other (mainly hadronic) background showers. The proton rejection power against electrons is $R\sim 2\times 10^3$ (i.e., one misidentification among $R$ protons) at 10 GeV[^1] The basic characteristics of the detector are summarized in Table \[basicchara\].

![Image of cascade showers by a proton (120 GeV, left) and an electron (10 GeV, right) obtained at CERN. \[image\]](showerimage.eps){width="85mm"}

In Fig.\[image\], we show examples of the CCD image of a cascade shower for a proton-incident case and for an electron-incident case.

Figure \[anti\] illustrates the yearly change of the anti-counters. In 1998 (the Mt. Norikura observation), the main change was limited to the upgrading of the trigger logic. In 1999, we added 4 side anti-counters (each a 15 cm $\times$ 36 cm $\times$ 1.5 cm plastic scintillator). Nine optical fibers containing wavelength shifter are embedded in each scintillator and connected to a Hamamatsu H6780 PMT.

![Yearly change of the anti-counters. Left: 1998, no change from the original BETS except for the trigger logic. Middle: 1999, 1.5 cm thick plastic scintillator side anti-counters were added. Right: 2000, the whole top view was covered by a 1 cm thick plastic scintillator. \[anti\]](yearlychange.eps){width="85mm"}
In 2000, we further added an anti-counter which covers the whole top view of the detector and also improved the data acquisition speed. The top anti-counter is a 38 cm $\times$ 38 cm $\times$ 1 cm plastic scintillator. We also embedded optical fibers, 8 in the $x$ and another 8 in the $y$ direction, all of which were fed to an H6780.

Observations
============

Table \[sumtab\] shows the summary of the observations.

  Observation                       Mt.Norikura (1998)    Balloon (1999)                       Balloon (2000)
  --------------------------------- --------------------- ------------------------------------ -----------------------------
  Period                            Aug.31$\sim$Sep.18                                         
  Altitude (km)                     2.77                  15.3, 18.5, 21.2, 24.7, 32.3         15.3, 18.3, 21.4, 25.1
  Depth (g/cm$^2$)                  737                   126, 74.8, 48.9, 28.0, 9.5           128, 73, 45.7, 25.3
  Observation time (s)              $1.33\times 10^6$     1260, 1560, 2100, 4878, 3120         1560, 2160, 4320, 2320
  Live time (s)                     $9.8\times 10^5$      504, 450, 414, 852, 498              752, 928, 1805, 789
  Live time (%)                     74.0                  40.0, 28.8, 19.7, 17.5, 16.0         48.2, 43.0, 42.6, 44.2
  Triggered events                  $1.8\times 10^6$      9513, 11288, 13361, 30439, 16741     18808, 25795, 46675, 17436
  $\gamma$ events                   $4.7\times 10^4$      700, 650, 611, 848, 345              1300, 1485, 2299, 740
  $\gamma$ events (%)               2.5                   7.3, 5.7, 4.6, 2.8, 2.0              6.9, 5.8, 4.9, 4.2
  g-low trigger condition (in mip)  S1 $<0.5$, S2 $>2.3$, S3 $>1.7$

- Mt. Norikura observation. Our first gamma-ray observation was performed in 1998 at the Mt. Norikura Observatory of the University of Tokyo, Japan (2770 m a.s.l., latitude 36.1$^\circ$N, longitude 137.55$^\circ$E, magnetic cutoff rigidity $\sim$ 11.5 GV). The atmospheric pressure during the observation is shown in Fig.\[noripress\]. The average atmospheric depth is 737 g/cm$^2$.

  ![Pressure change during the Mt. Norikura observation. The last pressure drop is due to a typhoon. The average pressure is 723 hPa (737 g/cm$^2$). \[noripress\]](norikurapressure.eps){width="85mm"}

- Balloon flights. We had two similar balloon flights in 1999 and 2000. Since the main outcome of the data is from the latter, we briefly describe it. A balloon of 43$\times 10^3$ m$^3$ was launched at 6:30 am on 5 June 2000 from the Sanriku Balloon Center of the Institute of Space and Astronautical Science, Japan (latitude 39.2$^\circ$N, longitude 141.8$^\circ$E, magnetic cutoff rigidity $\sim$ 8.9 GV) and recovered with the help of a helicopter at 17:59 on the sea not far from the center. The flight curve shown in Fig.\[flight\] confirms that we had good level flights at 4 different heights. Compared to the 1999 flight, this flight realized a smaller dead time and a higher ratio of desired gamma-ray events.

  ![Flight curve of the 2000 observation. Pressure (upper) and altitude (lower) as a function of time. Each arrow shows a level-flight region. The pressure change at around 15.3 km is rather rapid, but the gamma-ray intensity is almost constant there and the change can be neglected. \[flight\]](flightcurve.eps){width="73mm"}
\[flight\]](flightcurve.eps){width="73mm"}

Event trigger
-------------

The basic event trigger condition is created by signals from the three plastic scintillators (S1, S2 and S3). We express the discrimination level in terms of the minimum ionizing particle number, which is defined by the peak of the energy loss distribution of cosmic-ray muons passing through both S1 and S3 with an inclination of less than 30 degrees. We prepared a multi-trigger system by which event triggers with different conditions are possible at the same time. The two major trigger modes are g-low and g-high. The g-low mode is responsible for low energy gamma-rays, and all anti-counters, when available, are used as veto counters. Its condition is listed in Table \[sumtab\]. High energy gamma-rays normally produce many back-splash particles which hit S1 and/or the anti-counters, and thus the g-low trigger is suppressed. In such a case, i.e., if we have a large S3 signal, the anti-counter veto is disabled and the S1 threshold is relaxed (the g-high condition is S1$<3.0$, S2$>5.0$ and S3$>8.1$). The break-even point of the g-low and g-high mode efficiencies is at $\sim $30 GeV. Since we deal with gamma-rays mostly below 30 GeV, and also to avoid complexity, we present results only for the g-low mode.

Analysis
========

Event selection
---------------

Among the triggered events, we selected gamma-ray candidates by imposing the following conditions:

![(left) Energy concentration distribution at 21.4 km. (right) The same for electrons at CERN. []{data-label="conc"}](Econc.eps){width="8.5cm"}

1.  The estimated shower axis passes S1 and S3. The axis position in S3 must be at least 2 cm away from the edge of S3.

2.  The estimated shower axis has a zenith angle of less than 30 degrees.

3.  The energy concentration (see below) must be greater than 0.7.

According to a simulation, only neutrons could be a background against gamma-rays, and the third condition above reduces the neutron contribution to a negligible level ($<1$%). The energy concentration is defined as the fraction of the scintillating fiber light intensity within 5 mm of the shower axis. Figure \[conc\] shows the concentration of the analysed events together with the result of the CERN data. Hadrons make a distribution with a peak at around 0.5. We see that the contribution of hadrons in our observation is negligible.

Energy Determination
--------------------

The energy calibration was performed in 1996 at CERN using electrons with energies of 10 $\sim $ 200 GeV[@betsnim; @betscern]. There is no direct calibration for gamma-rays, but, for the present detector thickness and energy range, a M.C. simulation tells us that the 1996 calibration can be used for gamma-rays, too[^2]. Therefore, for the 1998 and 1999 observations, the energy is obtained as a function of the S3 output and zenith angle using the CERN calibration. In 2000, we made some changes in the electronics, so the CERN calibration could not be used directly. The effect of the change was absorbed by a M.C. simulation whose validity was verified by examining the 1998 and 1999 data. We used the sum of the S2 and S3 outputs below 20 GeV, since the energy resolution was found to be better than using S3 only. Figure \[eresol\] shows the r.m.s. energy resolution.

![R.m.s. energy resolution. The resolution by S2+S3 or S3 only is shown. Different symbols indicate different incident angles. We used S2+S3 below 20 GeV for the year 2000 data.
\[eresol\]](Eres.eps){width="8cm"}

Correction of the gamma-ray intensity
-------------------------------------

The gamma-ray vertical flux is obtained from the raw $dN/dE$ by dividing it by the live time of the detector and the effective $S\Omega$ (area $\times$ solid angle). The latter is obtained by a simulation[@someganu00]. It depends on the observation height and the energy. A typical value at 10 GeV is 240 cm$^2$sr (see Table \[basicchara\]). The energy spectrum is further corrected by the following factors, which are not taken into account in the $S\Omega$ calculation.

![(upper) Multiple incidence rate. (lower) Correction factor for year 2000 due to spillover. The flux must be lowered. For Norikura, the factor below 20 GeV is larger by 1$\sim 3$ %. []{data-label="correc"}](turehuta.eps "fig:"){width="7cm"} ![(upper) Multiple incidence rate. (lower) Correction factor for year 2000 due to spillover. The flux must be lowered. For Norikura, the factor below 20 GeV is larger by 1$\sim 3$ %. []{data-label="correc"}](ER_hosei.eps "fig:"){width="7cm"}

1.  Systematic bias in our estimation of the shower axis. We underestimate the zenith angle systematically, and this leads to an overestimation of the intensity by about 4% for the balloon and 1.8% for the Mt. Norikura observations.

2.  Multiple incidence of particles. A gamma-ray is sometimes accompanied by other charged particles, and they enter the detector simultaneously (within a 1 ns time difference in 99.9% of cases). They are a family of particles generated by one and the same primary particle[^3]. The charged particles fire the anti-counter and the g-low trigger is inhibited. In some cases, multiple gamma-rays enter the detector simultaneously. The rate is smaller than in the charged particle case; however, such an event is judged as a hadronic shower in most cases. Multiple incidence leads to an underestimation of the gamma-ray intensity. The fraction of multiple incidence is shown in Fig.\[correc\] (upper).

3.  Finite energy resolution. The rapidly falling energy spectrum leads to a spillover effect. This normally leads to an overestimation of the flux (Fig.\[correc\], lower).

Results and comparison with calculations
========================================

The flux values are summarized in Table \[flux\]. We quote only the statistical errors in the flux values, since the systematic errors coming from the uncertainty of the $S\Omega$ calculation, the various cuts and the flux corrections are expected to be of the order of a few percent, much smaller than the present statistical errors.
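Before listing the measured values, the conversion and corrections described above can be summarized in a short illustrative Python sketch. This is not the experiment's analysis code; every number and variable name below is a placeholder.

```python
import numpy as np

# Hypothetical per-bin inputs (placeholders, not the measured values).
E_bin_edges = np.array([5.0, 6.0, 7.0, 8.0, 10.0, 12.0])     # GeV
counts      = np.array([700, 650, 611, 848, 345], float)      # gamma-ray candidates per bin
live_time   = 1805.0          # s, live time of one level flight
s_omega     = 240.0           # cm^2 sr, effective aperture from the simulation

# Correction factors of the kind described in the text (illustrative numbers only):
f_axis  = 1.0 / 1.04          # shower-axis bias overestimates the intensity by ~4% (balloon)
f_multi = 1.0 / (1.0 - 0.05)  # multiple incidence removes ~5% of genuine gamma-rays (example)
f_spill = 0.97                # spillover from the steep spectrum overestimates the flux

dE   = np.diff(E_bin_edges)                  # bin widths in GeV
flux = counts / (live_time * s_omega * dE)   # raw dN/dE in /(cm^2 s sr GeV)
flux_corrected = flux * f_axis * f_multi * f_spill

# Convert to /(m^2 s sr GeV), the unit used in the flux tables below.
print(flux_corrected * 1.0e4)
```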
  E (GeV)   Flux                E (GeV)   Flux                E (GeV)   Flux                E (GeV)   Flux                E (GeV)   Flux
  --------- ------------------- --------- ------------------- --------- ------------------- --------- ------------------- --------- -------------------
  5.48      2.42 $\pm$ 0.37     5.48      2.11 $\pm$ 0.39     5.47      2.11 $\pm$ 0.24     5.47      1.58 $\pm$ 0.25     5.47      0.49 $\pm$ 0.14
  6.47      1.18 $\pm$ 0.27     6.47      1.10 $\pm$ 0.24     6.47      1.35 $\pm$ 0.21     6.47      0.82 $\pm$ 0.18     6.57      0.19 $\pm$ 0.09
  7.47      0.89 $\pm$ 0.24     7.47      0.79 $\pm$ 0.21     7.47      0.82 $\pm$ 0.16     7.47      0.66 $\pm$ 0.16     7.47      0.24 $\pm$ 0.10
  8.48      0.37 $\pm$ 0.15     8.48      0.92 $\pm$ 0.20     8.48      0.51 $\pm$ 0.13     8.48      0.49 $\pm$ 0.14     8.48      0.16 $\pm$ 0.08
  9.48      0.54 $\pm$ 0.17     9.85      0.46 $\pm$ 0.11     9.48      0.50 $\pm$ 0.12     9.48      0.36 $\pm$ 0.12     9.48      0.16 $\pm$ 0.08
  10.5      0.17 $\pm$ 0.10     11.5      0.35 $\pm$ 0.12     10.5      0.41 $\pm$ 0.09     10.5      0.34 $\pm$ 0.12     12.3      0.13 $\pm$ 0.037
  12.1      0.28 $\pm$ 0.09     14.0      0.24 $\pm$ 0.06     11.8      0.23 $\pm$ 0.069    12.2      0.21 $\pm$ 0.054    17.0      0.032 $\pm$ 0.018
  14.0      0.17 $\pm$ 0.05     18.3      0.072 $\pm$ 0.030   14.0      0.16 $\pm$ 0.030    14.0      0.076 $\pm$ 0.03    21.7      0.022 $\pm$ 0.015
  18.5      0.12 $\pm$ 0.04     26.8      0.040 $\pm$ 0.017   18.4      0.086 $\pm$ 0.023   17.8      0.078 $\pm$ 0.029
  25.5      0.06 $\pm$ 0.02                                   27.1      0.026 $\pm$ 0.009   21.7      0.064 $\pm$ 0.026
                                                                                            26.8      0.024 $\pm$ 0.012
                                                                                            36.0      0.012 $\pm$ 0.008

  : Flux values at the balloon altitudes (five pairs of E and Flux columns, one pair per level flight)\[flux\]

  E (GeV)   Flux ($10^{-4}/$m$^2\cdot$s$\cdot$sr$\cdot$GeV)
  --------- -------------------------------------------------
  5.48      274 $\pm$ 13
  6.47      183 $\pm$ 11
  7.47      133 $\pm$ 9
  8.47      87.8 $\pm$ 7.5
  9.47      86.5 $\pm$ 7.5
  10.5      54.1 $\pm$ 5.9
  11.5      46.6 $\pm$ 5.5
  12.5      38.3 $\pm$ 5.0
  13.5      32.6 $\pm$ 4.6
  14.5      24.2 $\pm$ 4.0
  15.5      25.7 $\pm$ 4.1
  17.0      11.9 $\pm$ 2.0
  19.0      15.3 $\pm$ 2.3
  21.0      13.1 $\pm$ 2.1
  23.0      5.80 $\pm$ 1.4
  26.0      5.31 $\pm$ 0.95
  30.0      3.00 $\pm$ 0.72
  34.0      2.30 $\pm$ 0.64
  38.0      1.07 $\pm$ 0.44
  45.0      1.45 $\pm$ 0.32
  55.0      0.52 $\pm$ 0.20
  65.0      0.22 $\pm$ 0.13
  75.0      0.30 $\pm$ 0.15
  85.0      0.15 $\pm$ 0.10

  : Flux values at Mt. Norikura \[noriflux\]

The gamma-ray energy spectra thus obtained at the balloon altitudes are shown in Fig.\[balspec\] together with the expected ones calculated with the Cosmos simulation code[@cosmos]. Except for the 32.3 km altitude, we can disregard the small difference of the observation depths, and we combined the two flights' data with statistical weights, although the main contribution is from the flight in 2000. In the simulation calculation, we employed 3 different nuclear interaction models: 1) fritiof1.6[@oldfri], used in the HKKM calculation[@hkkm95], which was widely used for comparison with the Kamioka data, 2) fritiof7.02[@newfri][^4] and 3) dpmjet3.03[@dpmjet]. As the primary cosmic rays, we used the BESS results on protons and He. The CNO component is also considered[@cno]. Besides these, we included the electron and positron data by AMS[@amselec]. Their data in the 10 GeV region are consistent with the HEAT[@heat] and BETS[@betselec] data. Bremsstrahlung gamma-rays from the primary electrons could contribute on the order of $\sim 10$ % at very high altitudes. At the balloon altitudes, the two models fritiof7.02 and dpmjet3.03 give almost the same results, which are close to the observed data, while fritiof1.6 gives clearly smaller fluxes than the observation. Figure \[norispec\] shows the result from the observation at Mt. Norikura. It should be noted that the flux by fritiof1.6 becomes higher than the ones by the other models at this altitude.
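The spectra in the figures that follow are plotted as Flux$\times E^2$. As a small worked example (an illustrative sketch only, not the original plotting code), the conversion from a few rows of the Mt. Norikura table to the plotted quantity looks like this in Python:

```python
import numpy as np

# A few rows of the Mt. Norikura table (E in GeV, flux in 1e-4 / m^2 s sr GeV).
E    = np.array([5.48, 6.47, 7.47, 8.47])
flux = np.array([274.0, 183.0, 133.0, 87.8]) * 1e-4   # /(m^2 s sr GeV)
err  = np.array([13.0, 11.0, 9.0, 7.5]) * 1e-4

# Quantity plotted in the spectra figures: E^2 * dN/dE.
e2_flux, e2_err = E**2 * flux, E**2 * err
for e, f, s in zip(E, e2_flux, e2_err):
    print(f"E = {e:5.2f} GeV   E^2*Flux = {f:.3f} +/- {s:.3f} GeV m^-2 s^-1 sr^-1")
```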
From these figures, we see that fritiof7.02 and dpmjet3.03 give a more rapid increase and a faster attenuation of the intensity than fritiof1.6; this tendency is very consistent with the observed data. The transition curve of the flux integrated above 6 GeV, shown in Fig.\[transition\], clearly demonstrates this feature.

![image](spectrum1.eps){width="6.5cm"} ![image](spectrum2.eps){width="6.5cm"} ![image](spectrum3.eps){width="6.5cm"} ![image](spectrum4.eps){width="6.5cm"}

![Gamma-ray spectra at the 5 balloon heights compared with the 3 different models. The vertical axis is Flux$\times E^2$. Except for the 1999 data at 32.3 km, the 1999 and 2000 flight data are combined. From top to bottom: 25.1, 21.4, 18.3, 15.3 and 32.3 km. The spectra expected from the three interaction models are drawn as solid (dpmjet3.03), dashed (fritiof7.02) and dotted (fritiof1.6) lines. []{data-label="balspec"}](spectrum5.eps){width="6.5cm"}

![Gamma-ray spectrum at Mt. Norikura (2.77 km a.s.l.). The vertical axis is Flux$\times E^2$. Our data are at $<$ 100 GeV. The data above 300 GeV are from emulsion chamber experiments. For the latter, see Sec.\[discuss\]. []{data-label="norispec"}](norikura.eps){width="7.5cm"}

![The altitude variation of the flux integrated above 6 GeV. dpmjet3.03 and fritiof7.02 give almost the same behaviour, consistent with the observation, while the deviation of fritiof1.6 from the data is obvious. \[transition\]](transition.eps){width="7.5cm"}

Discussions\[discuss\]
======================

Comparison with other data
--------------------------

We found that fritiof7.02 and dpmjet3.03 give good agreement with the observed gamma-ray data at around 10 GeV. We briefly examine whether these models can also interpret other observations. A more detailed inspection will be done elsewhere.

-   Muon data by the BESS group at Mt. Norikura[@bessmuonnori]. Recently, the BESS group reported a detailed muon spectrum above several hundred MeV/c. In their paper, calculations by dpmjet3.03 and fritiof1.6 are compared with the data; the agreement of dpmjet3.03 is quite good, at least above a GeV, where fritiof7.02 also gives more or less the same flux. On the other hand, fritiof1.6 gives too high a flux. These features are consistent with our present analysis.

-   Higher energy gamma-ray data by emulsion chamber. In Fig. \[norispec\], we overlaid emulsion chamber data[@ecc][^5] taken at Mt. Norikura. Our data appear to connect smoothly to theirs, as the two interaction models (fritiof7.02 and dpmjet3.03) predict. Since the emulsion chamber data extend to the TeV region, and the primary particle energies responsible for such high energy gamma-rays are much higher than 100 GeV, where we have no information as accurate as the AMS and BESS data, it would be premature to draw a definite conclusion on the primary spectrum and the interaction model separately. However, the fact that a smooth extrapolation of the primary spectra, as shown in Table \[extendprim\], combined with the interaction model dpmjet3.03 or fritiof7.02, gives a result consistent with the data seems to indicate that such a combination would provide a good estimate of other components at $\gg$ 10 GeV.
  E       flux        E       flux        E        flux
  ------- ----------- ------- ----------- -------- ---------
  92.6    0.593E-01   79.4    0.549E-02   100.     9.0E-5
  108     0.388E-01   100.    3.0E-3      400.     1.8E-6
  126     0.276E-01   200.    5.0E-4      2.0E3    3.5E-8
  147     0.179E-01   400.    7.0E-5      2.0E4    9.3E-11
  171     0.124E-01   2.0E3   9.98E-7     2.0E5    2.3E-13
  200     0.836E-02   2.0E4   2.5E-9      14.0E5   1.3E-15
  1100    8.29E-5     2.0E5   3.97E-12    3.0E6    1.7E-16
  1.1E4   1.47E-7     4.0E5   6.1E-13     3.0E7    2.0E-19
  1.1E5   2.8E-10     8.0E5   7.0E-14     3.0E8    2.2E-22
  2.2E5   3.7E-11     8.0E6   8.7E-17
  4.4E5   5.0E-12     8.0E8   5.3E-23
  4.4E8   2.8E-21

  : Primary flux assumed in the simulation above 100 GeV/n\
  (E in kinetic energy per nucleon (GeV), flux in /m$^2\cdot$s$\cdot$sr$\cdot$GeV)
  \[extendprim\]

The $x$-distributions
---------------------

The two models, fritiof7.02 and dpmjet3.03, give almost the same results in the present comparison. However, if we look into the $x$-distribution of the particle production, we note some differences, especially in the proton $x$-distribution. We define $x$ as the ratio of the kinetic energy of a secondary particle to that of the incoming proton in the laboratory frame. The $x$ distribution for $p$Air collisions at an incident proton energy of 40 GeV is presented for photons (from $\pi^0$ plus $\eta$ decay) and protons in Fig.\[xdist\]. The difference among the three models seen in the photon distribution is quite similar to that for charged pions. The $x$ region most relevant to the atmospheric gamma-ray flux is around 0.2$\sim$0.3, where the difference is not so large, but fritiof7.02 and dpmjet3.03 have a higher gamma-ray yield than fritiof1.6.

![The $x$-distribution of photons from $\pi^0$ plus $\eta$ decay (upper) and protons (lower) for $p$Air collisions at 40 GeV. The results of the three models are shown. []{data-label="xdist"}](gammaxdist.eps "fig:"){width="7.5cm"} ![The $x$-distribution of photons from $\pi^0$ plus $\eta$ decay (upper) and protons (lower) for $p$Air collisions at 40 GeV. The results of the three models are shown. []{data-label="xdist"}](protonxdist.eps "fig:"){width="7.5cm"}

On the other hand, the proton $x$ distribution shows larger differences among the three models (we note, however, that the difference may appear exaggerated compared with the photon case because of the scale difference). It is interesting to see that, in spite of these large differences, the final fluxes do not differ so much from each other. Our gamma-ray data favour a more inelastic character of the collisions than fritiof1.6, i.e., a more rapid increase and a faster attenuation of the flux. We should compare the distributions with accelerator data; however, there are few data appropriate for our purpose. One such comparison has been done in a recent review paper[@GHreview] for $p$Air collisions at 24 GeV/c incident momentum. The charged pion distributions by fritiof1.6 and dpmjet3.03 both fit the somewhat scattered data well, which prevents us from telling which of the two is superior. As to the proton distribution, among the three models fritiof1.6 is rather close to the data, but the deviation from the data is much larger than in the pion case. The proton $x$-distribution would strongly affect the atmospheric proton spectrum. We calculated the proton flux at Mt. Norikura and found a flux ordering such that fritiof1.6 $>$ fritiof7.02 $>$ dpmjet3.03, as expected naturally from the $x$-distributions. The maximum difference is a factor of $\sim 2.5$ in the energy region of 0.3 to 3 GeV. The BESS group has measured the proton spectrum at Mt. Norikura in the same energy region. Their result, expected to come soon[@sanukibess], will help select a better model for the proton $x$ distribution.

Summary
=======

-   We have made successful observations of atmospheric gamma-rays at around 10 GeV at Mt. Norikura (2.77 km a.s.l.) and at balloon altitudes (15 $\sim$ 25 km).
-   The observed gamma-ray fluxes are compared with calculations by three interaction models; it is found that fritiof1.6, employed in the HKKM calculation [@hkkm95], which was used in comparison with the Kamioka data, is not a very good model.

-   The other two models (fritiof7.02 and dpmjet3.03) give better results, consistent with the data, which show a more rapid increase and a faster attenuation of the flux than fritiof1.6 predicts.

-   Our data are complementary to muon data and will serve as a check of the nuclear interaction models used in atmospheric neutrino calculations.

We sincerely thank the team of the Sanriku Balloon Center of the Institute of Space and Astronautical Science for their excellent service and support of the balloon flight. We also thank the staff of the Norikura Cosmic-Ray Observatory, Univ. of Tokyo, for their help. We are also indebted to S. Suzuki, P. Picchi, and L. Periale for their support at CERN in the beam test. For the management of the X5 beam line of the SPS at CERN, we would like to thank L. Gatignon and the technical staff. One of the authors (K.K) thanks S. Roesler for his help in implementing dpmjet3.03. This work is partly supported by Grants-in-Aid for Scientific Research B (09440110), Grants-in-Aid for Scientific Research on Priority Area A (12047224) and a Grant-in-Aid for Project Research of Shibaura Institute of Technology.

[^1]: We note that electron showers of 10 GeV are normally simulated by $\sim$ 30 GeV protons when the latter start their cascade at a shallow depth in the detector.

[^2]: If we do not impose the trigger condition, the gamma-ray case shows a small difference from the electron case.

[^3]: The chance coincidence probability of uncorrelated particles is negligibly small.

[^4]: It is used at energies greater than 10 GeV. At lower energies, the model is the same as fritiof1.6.

[^5]: Electrons included in the original data are subtracted statistically by use of cascade theory, which is accurate at high energies.
To celebrate the BEP-2 EQL token (EQUAL) listing on Binance DEX, EQUAL is excited to announce an EQL trading competition on Binance.org — with up to 10,200,000 EQL to be airdropped to eligible traders.

Campaign Period: 42 days, from August 19th 2019 0:00:00 AM (UTC) to September 30th 2019 11:59:59 PM (UTC), split into 6 rounds of 7 days each.

Total Rewards: 10,200,000 EQL

Overview
Total Rewards Budget: 10,200,000 EQL
Duration: 6 weeks (42 days)
Requirements:
Daily volume greater than 300,000 EQL (buy & sell) on a minimum of 5 days a week.
Minimum volume of 1,500,000 EQL per round.
Traders must hold 100,000 EQL & 10 BNB in their address to qualify on each competition day.

Trading Competition T&Cs:
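Purely as an illustration of how the eligibility rules above combine (no official tool or API is implied, and all trade figures are hypothetical placeholders), a check for a single 7-day round could be sketched in Python like this:

from dataclasses import dataclass

@dataclass
class TraderDay:
    volume_eql: float   # combined buy + sell volume for the day, in EQL
    held_eql: float     # EQL balance held that day
    held_bnb: float     # BNB balance held that day

def qualifies_for_round(days: list) -> bool:
    """Apply the stated rules to one 7-day round (hypothetical helper)."""
    # A day only counts if the holding requirement (100,000 EQL and 10 BNB) is met.
    counted = [d for d in days if d.held_eql >= 100_000 and d.held_bnb >= 10]
    active_days = [d for d in counted if d.volume_eql > 300_000]
    round_volume = sum(d.volume_eql for d in counted)
    return len(active_days) >= 5 and round_volume >= 1_500_000

# Example: 7 placeholder days, all above the thresholds.
week = [TraderDay(volume_eql=320_000, held_eql=150_000, held_bnb=12) for _ in range(7)]
print(qualifies_for_round(week))  # True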
Riverine regime shifts through reservoir dams reveal options for ecological management. Worldwide, dams are a main threat reducing river ecological functioning and biodiversity by severely altering water temperature, flow, and sediment regimes up- and downstream. Sustainable dam management therefore has a key role in achieving ecological targets. Here, we present an analysis of the effects of reservoir dams and resulting regime shifts on community structure and function of lotic macroinvertebrates. Our study derived management options to improve ecological integrity of affected streams. To do this, we contrasted time series data for water temperature (15-min intervals over one year), discharge (daily means over 10 yr), and records of deposited fine sediments against macroinvertebrate samples from pairs of river reaches downstream of dams and of comparable tributaries not affected by dams in the German low mountain range. We observed a decline in the density and diversity of disturbance-sensitive macroinvertebrates (Ephemeroptera, Plecoptera, and Trichoptera) and a correlation between hydrologic metrics and macroinvertebrate deterioration downstream of the dams. Typical "rhithral" (flow-adapted) species changed to "littoral" (flow-avoiding) species below dams, thus indicating a hydrologic regime shift. Increased fine sediment accumulations and deficits of pebbles and small cobbles below dams indicated a severe habitat loss below dams. Additional comparison with undisturbed reference streams allowed us to derive management options that could mitigate the negative impact of hydrologic alterations and accumulations of fine sediments downstream of dams. These options are conditional on the season and in particular address the frequency and duration of low and high flow events.
The articles posted here come from a variety of sources around the web. My hope is that this information may awaken the mind and quicken the spirit -- so that one may discern, pray, and declare the importance of these days.
Continuous infusion of a local anesthetic versus interscalene block for postoperative pain control after arthroscopic shoulder surgery. The purpose of this investigation was to evaluate the efficacy, complication rate, and cost of a 1-time interscalene block compared with a continuous infusion of a local anesthetic for postoperative pain relief in patients having arthroscopic shoulder surgery. After prospective power analysis and institutional review board approval, 56 consecutive patients having arthroscopic shoulder surgery under general anesthesia performed by a single surgeon were randomly assigned to 1 of 2 groups to evaluate postoperative pain control. Group 1 patients received a preoperative interscalene block, and group 2 patients received a subacromial continuous infusion of a local anesthetic (0.5% bupivacaine) via a pain pump for 48 hours postoperatively. Pain was evaluated at 12, 24, 36, and 48 hours and then daily on postoperative days 3 through 7 by use of a visual analog scale included in a patient diary. Patients were provided with 2 "rescue" medication options: ibuprofen and Percocet (Endo Pharmaceuticals, Chadds Ford, PA). The total number of tablets ingested was also evaluated over these same intervals. Total hospital outpatient surgical costs for each group were calculated by dividing total hospital charges by the ratio of annual hospital cost to charges. No statistically significant differences were identified between the 2 groups with regard to visual analog scale pain scores, medication intake, or cost. Complications did not occur in either group. One patient inadvertently removed the pain pump catheter. Our results support the null hypothesis. We found no difference between interscalene block versus continuous subacromial infusion of a local anesthetic with regard to efficacy, complication rate, or cost. Level I, prospective, randomized controlled trial.
O'Brien made these claims in the TranceFormation of America (1995) and Access Denied: For Reasons of National Security (2004), which she co-authored with Mark Phillips.[6] O'Brien is one of many people publicly claiming to have survived government-sponsored mind control programs. O'Brien claims to have been abused since she was a toddler. Forced to partake in satanic sadomasochistic child pornography movies produced for Gerald Ford, she was eventually sold to the CIA, which was looking for traumatized children for their mind-control program ... U.S. Presidents Ford, [and many other world leaders] all sexually brutalized her. She recounts in graphic detail how the elder George Bush raped her thirteen year old daughter and how she was forced to have oral sex with Illuminati witch Hillary Clinton ... While being sodomized, whipped, bound and raped, O'Brien overheard the globalist elite planning a military coup in the United States and conspiring to usher in the satanic New World Order. (Gardell, 97-98) On websites, O'Brien claims she was rescued in 1988, which suggests that her daughter Kelly was no more than eight years old when last abused. Phillips stated in a Granada Forum lecture in 1996 that Kelly was in fact institutionalized when she was eight and has been raised in a mental institution. Although he has since distanced himself from studies in mind control, licensed psychologist Corydon Hammond seems to confirm as credible accounts similar to O'Brien's in a speech he delivered on June 25, 1992 at the Fourth Annual Eastern Regional Conference on Abuse and Multiple Personality entitled "Hypnosis in MPD: Ritual Abuse" [7]. His critics now refer to it as "The Greenbaum Speech." Hammond states: "I've treated and been involved with cases who are part of this original mind-control project as well as having their programming on military reservations in many cases. We find a lot of connections with the CIA. My best guess is that the purpose of it is that they want an army of Manchurian Candidates, tens of thousands of mental robots who will do prostitution, do child pornography, smuggle drugs, engage in international arms smuggling, do snuff films, all sorts of very lucrative things and do their bidding and eventually the megalomaniacs at the top believe they'll create a Satanic Order that will rule the world."[8] These and other assertions and observations made in the speech show that Corydon Hammond agrees with O'Brien's contention that governmental mind control programs continue to function. Critics of recovered memories such as the False Memory Syndrome Foundation are quick to add that recovered memories are not accepted without question by mental health professionals. O'Brien has stated that her knowledge of her purported abuse was gained at least partially through accessing recovered memories.[6] Within the subculture of conspiracy believers, O'Brien has her critics. Writing in his book, Cyberculture Counterconspiracy: A Steamshovel Web Reader, author Kenn Thomas states that conspiracy author Martin Cannon considers both O'Brien and Phillips to be "frauds" who are using real details of Project Monarch to "embellish a dog and pony show", presumably for financial gain.[1] Mattias Gardell notes that O'Brien's claims are almost entirely unsupported by any evidence outside her testimony or the similarly unverified testimony of others.[9]
Almost all Linux hosting providers offer cPanel as a control panel for managing websites. Backups are important for protecting your website data from accidental loss, and a full backup is also required when you plan to migrate your website to another host. Let's learn how to create a complete backup of your site using cPanel.

The complete backup will include:
- Home Directory
- MySQL Databases
- Email forwarders configuration
- Email filters configuration

All your data will be backed up into a single archive file, which you can download once the backup completes.

Let's proceed with the cPanel backup procedure:
- Log in to your website's cPanel.
- Look for the 'Backup Wizard' icon under the 'Files' group and open 'Backup Wizard'.
- You'll get two options, 'Backup' and 'Restore'. Click on 'Backup'.
- You'll now get the option to create a 'Full Backup' or a 'Partial Backup' of your website data. Select 'Full Backup'.
- Now select a Backup Destination:
  You can select 'Home Directory', which will store the completed backup in your hosting home directory itself, from where you can then download it to your home computer.
  You can select 'Remote FTP Server' or 'FTP (Passive Mode Transfer)' to store your completed backup directly on a remote server.
- You can also opt for an email alert on backup completion: tick the first check-box and enter your email id, or otherwise tick the second box.
- Finally, click on 'Generate Backup'.

cPanel will now begin the backup process and generate a full backup of your data. You can then download the completed backup to your computer, or cPanel will automatically perform the action you selected in the Backup Destination option.
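If you pick the Remote FTP Server destination, you may want to confirm that the archive actually arrived. The following is a small, hypothetical Python sketch using the standard ftplib module; the host, credentials and file-name pattern are placeholders you would replace with your own.

from ftplib import FTP

# Placeholder connection details -- use your own backup FTP server here.
HOST, USER, PASSWORD = "ftp.example.com", "backupuser", "secret"

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.voidcmd("TYPE I")  # binary mode so SIZE works on most servers
    # cPanel full backups are typically named like backup-MM.DD.YYYY_..._username.tar.gz
    archives = [n for n in ftp.nlst() if n.startswith("backup-") and n.endswith(".tar.gz")]
    if not archives:
        print("No backup archive found yet.")
    else:
        latest = sorted(archives)[-1]
        size_mb = (ftp.size(latest) or 0) / (1024 * 1024)
        print(f"Found {latest} ({size_mb:.1f} MB)")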
box: yosssi/[email protected]
# Build definition
build:
  # The steps that will be executed on build
  steps:
    # Sets the go workspace and places your package
    # at the right place in the workspace tree
    - setup-go-workspace

    # Gets the dependencies
    - script:
        name: go get
        code: |
          cd $WERCKER_SOURCE_DIR
          go version
          go get -t ./...

    # Build the project
    - script:
        name: go build
        code: |
          go build ./...

    # Test the project
    - script:
        name: go test
        code: |
          packages=(cmd/gmq-cli mqtt mqtt/client mqtt/packet)
          for package in ${packages[@]}; do go test -v -cover -race ./$package; done

    # Invoke goveralls
    - script:
        name: goveralls
        code: |
          go get github.com/axw/gocov/gocov
          go get github.com/mattn/goveralls
          echo "mode: count" > all.cov
          for package in ${packages[@]}; do go test --covermode=count -coverprofile=$package.cov ./$package; sed -e "1d" $package.cov >> all.cov; done
          GIT_BRANCH=$WERCKER_GIT_BRANCH goveralls -coverprofile=all.cov -service=wercker.com -repotoken $COVERALLS_REPO_TOKEN
Looks like the layoffs at Intel that SemiAccurate said would happen began today. Worse yet, the layoffs were at DCG/DPG as predicted, but other areas not expected to get cut were hit as well.

You might recall last November SemiAccurate exclusively brought you news that Intel was going to reorg, let key people go, and lay off 25-33% of DCG in Q1/2020. A few days later we brought you more exclusive news that DCG was now called DPG and several key people were retiring, and a few got promoted. That is two of the three key points confirmed in less than a week. It took about a month for Intel to post something about the reorg, which we again exclusively brought you, but they still haven't officially admitted to it. We can't explain this last bit.

That left the massive cuts to DCG/DPG that the company didn't actually deny. Their statement to us was, "Reports of large employee reductions in our data platform business group are inaccurate." Remember kids, tense matters. SemiAccurate took that statement as confirmation of the third point, but we didn't want to say that until it happened, which brings us to today's events.

First we want to wish all those laid off a speedy search for a new home. Based on the number of our sources talking to you or your colleagues, it doesn't look like it will be a long slog to find a job this time. That said, this is not a good thing for anyone.

We previously said that 25-33% of DCG would be laid off, and that appears to still be happening, but not all of it is happening today. Unfortunately for those involved there is more to this story, and it isn't just limited to DCG any more, something we said may happen in our initial article. Now we can add a bit of color to the news and add a few more groups to the list.

Note: The following is analysis for professional level subscribers only.

Disclosures: Charlie Demerjian and Stone Arch Networking Services, Inc. have no consulting relationships, investment relationships, or hold any investment positions with any of the companies mentioned in this report.
Since I am such a rational person that loves automating stuff, I will give you the top reasons why I have stopped following you on Twitter, Facebook, Linkedin or any other social media I am active. You keep repeating yourself. If you keep re-tweeting your own blog posts to get as much traffic as possible to it, I will
Introduction {#S0001}
============

The rotator cuff tear is the most common tendinopathy in humans and over 200,000 cuff repairs are performed annually in the United States \[[1](#CIT0001),[2](#CIT0002)\]. The decreased morbidity associated with arthroscopic repairs has contributed to the popularity and broad indications for this surgical intervention \[[1](#CIT0001),[3](#CIT0003)\]. Tendon reattachment, even if biomechanically strong at the time of repair, often fails, and approximately 50% of patients with full-thickness tears of the rotator cuff report symptoms at 6 months after surgery \[[1](#CIT0001),[4](#CIT0004),[5](#CIT0005)\].

Figure 5. Schematic representation of the changes in the number and cross-section area of fat clumps and of adipocyte number in the proximal, medial and distal SSP muscle after a complete SSP tendon detachment. IMF increased closer to the tendon tear compared to the proximal SSP muscle. Detached muscles had more clumps in the distal and medial sections, and clumps of larger size in the distal section. There were more adipocytes in the distal and medial detached SSP muscles compared to proximal, and the cross-sectional area was smaller in the distal SSP muscle. The fat clumps are represented by ovals and adipocytes by smaller filled black shapes. Results from the statistical analysis are indicated: 0.001 ≤ P \< 0.01 (\*), 0.0005 ≤ P \< 0.001 (\*\*), P \< 0.0005 (\*\*\*).

The unsatisfactory success of rotator cuff repair surgeries has been attributed in many cases to muscle atrophy and fat accumulation, both assessed by medical imaging methods \[[6](#CIT0006)--[8](#CIT0008)\]. The benefits of arthroscopy to repair the cuff and of advanced imaging methods to measure rotator cuff muscle fat content are undeniable, but enhancing postoperative outcomes remains a challenge and basic knowledge on the mechanisms of intramuscular fat accumulation is needed \[[9](#CIT0009)--[11](#CIT0011)\].

Animal models of rotator cuff tendon injury and repair capture important aspects of the human disease \[[12](#CIT0012)--[16](#CIT0016)\]. Imaging of the rabbit's SSP muscle documented both extra- and intramuscular fat accumulations, which were evident as early as 4 weeks after SSP tendon detachment and progressed up to 12 weeks \[[17](#CIT0017)\]. The fat signal increased from proximal to distal, with the highest amount of fat detected in the distal quarter of the SSP muscle, the site closest to the tendon detachment \[[17](#CIT0017)\]. Both fat accumulation and muscle atrophy were present at weeks 1 and 2 after immediate repair, but only fat accumulation persisted at 6 weeks \[[18](#CIT0018),[19](#CIT0019)\]. In a different study, delayed tendon reattachment did not reverse SSP fat accumulation \[[20](#CIT0020)\]. The rabbit experimental model of rotator cuff tear and repair accurately reproduced the human pathology and represents a valuable avenue to decipher the pathophysiology of IMF accumulation associated with rotator cuff tear \[[12](#CIT0012),[21](#CIT0021)\].
The IMF deposit, considered a small fat deposit, is made up of white adipocytes and its accumulation characterizes late stages of muscular dystrophies \[[24](#CIT0024),[26](#CIT0026)\]. The pathophysiology of adipocytes leading to IMF accumulation associated with rotator cuff tear remains unknown. We hypothesized that IMF accumulation observed after rotator cuff tears results from adipocyte hypertrophy rather than hyperplasia leading to the enlargement of resident muscle fat clumps. The purpose of the current study was to characterize, at the microscopic level and over time, the expansion of the adipose tissue in the SSP muscle of rabbits after detachment of the distal SSP tendon. Materials and methods {#S0002} ===================== Animals and surgical procedure {#S0002-S2001} ------------------------------ This study was approved by the University of Ottawa Animal Care Committee. Adult female New Zealand rabbits (n = 45) weighing 3.0 kg were purchased from Charles River, Saint-Constant, Quebec, Canada and allowed to acclimate for one week upon arrival. For the experimental group, a supraspinatus tenotomy was performed unilaterally in 30 rabbits by sectioning completely the SSP tendon from the greater tuberosity of the humerus using a surgical blade under general anaesthesia \[[14](#CIT0014)\]. Left and right shoulders were alternated. To prevent postoperative adhesions, the stump of the tendon was wrapped with a polyvinylidene membrane (5 µm, Durapore, Millipore, Bedford MA USA). Animals were housed individually, divided into three equal groups, killed at 4, 8 or 12 weeks after surgery and the operated shoulders were collected for histological analysis. For the control group, 15 unoperated rabbits were equally divided into three groups, killed at 4, 8 and 12 weeks and both shoulders were collected. The harvesting method of shoulders was described in our previous publication. Complete SSP muscles were dissected from the scapula, wrapped and frozen at −20°C until processed for histology analysis \[[17](#CIT0017)\]. Radiology and macroscopic data on this group of animals have already been reported [\[17\];](#CIT0017) the current microscopy analysis at the cellular level builds on those studies. Histology specimen preparation {#S0002-S2002} ------------------------------ Harvested SSP muscles were fixed in 4% paraformaldehyde and rinsed twice for 1 h in phosphate buffered saline to begin processing for histology. Muscle specimens were frozen to preserve fat structures during sectioning. From each muscle, three cross-section slices of 1-mm thickness were cut at the proximal quarter, middle-half, and distal quarter sites of the supraspinatus muscle. Muscle slices were stained for 2 weeks with 5% potassium dichromate and 2% osmium tetroxide followed by paraffin embedding \[[14](#CIT0014)\]. Using a microtome, 6µm-thick microscopy slides were prepared. Fixation in osmium tetroxide stained adipocytes black. 
Histology evaluation and microscopy image analysis {#S0002-S2003}
--------------------------------------------------

A total of 180 slides from detached tendons and from unoperated tendons, at time points 4, 8 or 12 weeks, in the proximal, middle or distal quarters of the SSP muscle were analysed by light microscopy ([Table 1](#T0001)).

Table 1. Summary of the samples studied including numbers of rabbits, shoulders, and tissue sections for both fat clump and adipocyte analyses.

  SSP Muscle Quarter   Weeks   Detached vs Control   Rabbits (N)   Shoulders (N)   Muscle Sections, Clumps (N)   Muscle Sections, Cells (N)
  -------------------- ------- --------------------- ------------- --------------- ----------------------------- ----------------------------
  Proximal Quarter     4       Detached              10            10              8                             10
                               Control               5             10              10                            10
                       8       Detached              10            10              10                            10
                               Control               5             10              10                            10
                       12      Detached              10            10              9                             10
                               Control               5             10              10                            10
  Middle Quarter       4       Detached              10            10              10                            10
                               Control               5             10              9                             10
                       8       Detached              10            10              8                             10
                               Control               5             10              10                            10
                       12      Detached              10            10              9                             10
                               Control               5             10              10                            10
  Distal Quarter       4       Detached              10            10              10                            10
                               Control               5             10              10                            10
                       8       Detached              10            10              10                            10
                               Control               5             10              10                            10
                       12      Detached              10            10              10                            10
                               Control               5             10              10                            10

  Total muscle sections (fields) analyzed: Detached, 84 clump sections and 90 cell sections (270 fields); Control, 89 clump sections and 90 cell sections (270 fields).
  Total fat clumps/adipocytes analyzed: Detached, 18,542 clumps and 10,389 adipocytes; Control, 14,345 clumps and 6,706 adipocytes.

Fat clumps were measured on entire SSP muscle cross-sections digitized at 6.7x magnification, and backgrounds were cropped using Corel Photo-Paint 11. Images were then imported for computer-assisted quantitative image analysis using the ImageJ software (version 1.34s; National Institutes of Health, Bethesda, MD, USA). Scales were set by using calliper measurements of two reference points on the slide and converted to pixels. Pictures were converted into binary black and white images (8-bit; grey scale). A fat clump was defined as an area of fat stained black, not in contact with another stained area ([Figure 1](#F0001)). The 'threshold' function was manually adjusted to select only black pixels. The 'watershed' function was used to mark the boundaries of individual fat clumps. The 'analyse particle' command was used to measure clump numbers and areas with 'cellularity' set at 0--1 and 'size' set at 0-infinity. The command 'measure all' was used to automatically generate all measurements.

Figure 1. Representative micrographs of IMF accumulation in the distal quarter of the SSP muscle cross-sections. (a) SSP muscle sections at 4, 8 and 12 weeks after tendon detachment. (b) SSP muscle sections in control animals at the same time points. IMF was stained using osmium tetroxide and is visible as black-stained areas. Note the higher accumulation of fat in the tendon-detached group compared to controls at all time points studied. Original magnification at 6.7x.

Adipocyte number per field and average cross-sectional area were measured using computer-assisted image analysis of the same microscopic slices captured at 25x magnification. Three different fields of equal and fixed areas (0.149 mm^2^ each) were chosen using the following criteria: included black staining, not contiguous with the other selected fields, included minimal empty space, and included at least one blood vessel. No field overlapped. The three fields analysed in each of the 3 muscle sections (proximal, middle and distal), in 10 rabbits per time point (4, 8 and 12 weeks), in each of the detached SSP and control groups, amounted to a total of 540 fields. Representative images from distal quarters at 4, 8 and 12 weeks after tenotomy and corresponding controls are presented in [Figure 2](#F0002).
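For readers who prefer a scriptable version of this kind of pipeline, the sequence above (thresholding, watershed splitting and per-object measurement) can be approximated outside ImageJ. The sketch below uses Python with scikit-image and is only an illustrative stand-in for the ImageJ workflow actually used; the file name and the pixel-to-mm calibration are placeholders.

```python
import numpy as np
from skimage import io, filters, measure, segmentation
from scipy import ndimage as ndi

MM_PER_PIXEL = 0.01   # placeholder calibration from the slide reference points

img = io.imread("ssp_section.tif", as_gray=True)   # osmium-stained section; fat appears dark

# Threshold: keep only the dark (fat) pixels, analogous to ImageJ's manual threshold.
fat_mask = img < filters.threshold_otsu(img)

# Watershed on the distance transform to separate touching clumps or cells.
distance = ndi.distance_transform_edt(fat_mask)
markers, _ = ndi.label(distance > 0.5 * distance.max())
labels = segmentation.watershed(-distance, markers, mask=fat_mask)

# Per-object measurements, analogous to 'Analyze Particles'.
regions = measure.regionprops(labels)
areas_mm2 = np.array([r.area for r in regions]) * MM_PER_PIXEL**2
print(f"{len(regions)} objects, mean area {areas_mm2.mean():.4f} mm^2")
```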
To measure adipocyte number and size, we again used ImageJ; images were converted to 8-bit grey-scale pictures. Default settings of the 'thresholding' function were used to select only the black-stained adipocytes. Applying the threshold converts the image to black and white, displaying only adipocytes. The 'watershed' function was used to separate individual cells, and 'analyse particles' was used to count and measure the cross-section area of adipocytes. Minimum size was set at 350 pixels (to remove artefacts originating from microtomy) and circularity at 0.5 (to remove any cells cut off at the edges of the picture). ImageJ was calibrated by using a scale bar to convert pixels into mm^2^.

Data and statistical analysis {#S0002-S2004}
-----------------------------

Descriptive statistics displayed the medians and interquartile ranges of the four outcomes measured: the number and cross-sectional area of both fat clumps and adipocytes. We first explored the distribution of the fat clump and adipocyte outcomes because a skewed distribution of adipocyte diameter had previously been described \[[27](#CIT0027)\]. Our data for fat clumps and for adipocytes confirmed an asymmetrical distribution ([Figures 3](#F0003) and [4](#F0004)); median lines were off centre of the interquartile boxplots and the upper and lower whiskers for the same box were of different sizes, indicative of a skewed distribution. Skewed data distributions were log-transformed to meet the normality assumptions for ANOVA and regression-based statistical analyses. In this paper, non-log-transformed data are reported as descriptive statistics, whereas log-transformed data were used in the quantitative statistical analyses. Linear mixed-effects (LME) models were fitted to the data, and statistical significance for the four outcomes was evaluated by ANOVA, considering the different muscle locations as a single fixed-effect factor (and similarly the three time points after detachment), with a random effect to account for the correlation between measurements taken from the same rabbit. Post hoc analysis was conducted, also using LME models, when significant differences were observed, and pairwise comparisons of the fixed effects were performed. The fixed effects were introduced as single terms in the equations and considered without interaction. For each of the four outcomes evaluated (fat clump number, fat clump cross-sectional area, adipocyte number and adipocyte cross-sectional area), the following equation was applied: Outcome \~ log(μ) + β~1~ time + β~2~ location + β~3~ detachment + error (1\|rabbit).

Figure 3. Boxplots showing the distribution of intramuscular fat clump numbers (a) and cross-sectional area (b) (mm^2^) for SSP tendons detached for 4, 8 and 12 weeks and for age-matched controls.
Horizontal lines in the boxes represent the median values, limits of the boxes represent upper and lower quartiles, lines extending vertically from boxes represent variability outside the boxes and outliers are plotted as individual points. The dispersion of the number of fat clumps was similar for both detached and controls. A large variability in the fat clump cross-sectional areas was observed for the detached group in the distal quarter at 8 and 12 weeks after detachment and displayed in the large sizes of the boxes for these two groups compared to controls.10.1080/21623945.2019.1609201-F0004Figure 4.Boxplots showing the distribution of intramuscular adipocyte numbers (a) and cross-sectional area (b) (mm^2^) for SSP tendons detached for 4, 8 and 12 weeks and for age-matched controls. Horizontal lines in the boxes represent the median values, limits of the boxes represent upper and lower quartiles, lines extending vertically from boxes represent variability outside the boxes and outliers are plotted as individual points. The dispersion in the number of adipocytes was larger in the middle and distal quarters of the SSP muscle in the detached group compared to controls. The dispersion in adipocyte cross-section area was comparable in both groups. LME modelling also accounted for two characteristics of our study design with potential influence on the outcomes of the statistical analyses \[[28](#CIT0028)\]. First, the fat clumps and adipocytes were measured in three quarters of the same SSP muscle and are not independent observations. Second, number and cross-sectional area outcomes are potentially influenced by a random effect corresponding to animals and by fixed effects including SSP tendon detachment, quarter of the muscle, and time after SSP tendon detachment. P-values for the calculated coefficient of individual fixed effect estimates were used to determine their contribution to fat accumulation. Significance was determined according to P values at: 0.001 ≤ P \< 0.01 (\*), 0.0005 ≤ P \< 0.001 (\*\*), P \< 0.0005 (\*\*\*). We considered p \< 0.01 to be statistically significant because of the multiple outcomes and models analysed. All descriptive and statistical analyses were performed using the open-source programming environment R \[[29](#CIT0029)\] and the lmerTest package \[[30](#CIT0030)\]. Results {#S0003} ======= [Table 1](#T0001) describes the samples analysed including numbers of rabbits, shoulders, tissue sections and fields in detached and control SSP muscles. Seven out of 360 slides showed poor staining quality in some areas and were omitted from the low magnification microscopy analysis (fat clumps) ([Table 1](#T0001)). The total number of stained fat clumps was 18,542 for the detached groups and 14,345 for the control groups. The total number of adipocytes analysed was 10,389 for the detached group and 6706 for the control group. Representative micrographs of osmium tetroxide stained SSP muscle sections and of fat clumps and adipocytes are presented in [Figures 1](#F0001) and [2](#F0002). Descriptive statistics of fat clump numbers and areas {#S0003-S2001} ----------------------------------------------------- The average number (± standard error) of fat clumps for all quarters of all the detached SSP muscles was 223.1 ± 87.5 and for all the control muscles 160.8 ± 70.5. Average fat clump areas were 0.031 ± 0.011 mm^2^ for detached SSP muscles and 0.013 ± 0.023 mm^2^ for controls ([Figure 3](#F0003)). 
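The model-based comparisons reported in the next two subsections come from the LME analysis described in the Methods. As an illustration only (the study's analysis was run in R with lmerTest), a hypothetical Python equivalent using statsmodels and made-up data might look like the following; the random intercept per rabbit mirrors the error (1|rabbit) term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 180  # one row per analysed muscle section (placeholder data, not the study's)

df = pd.DataFrame({
    "rabbit":   rng.integers(1, 46, n),                      # random-effect grouping
    "week":     rng.choice([4, 8, 12], n),
    "location": rng.choice(["proximal", "middle", "distal"], n),
    "detached": rng.choice([0, 1], n),
    "clump_n":  rng.lognormal(mean=5.0, sigma=0.5, size=n),  # skewed outcome
})
df["log_clump_n"] = np.log(df["clump_n"])                    # log transform, as in the paper

# Random intercept per rabbit; fixed effects for week, location and detachment.
model = smf.mixedlm("log_clump_n ~ C(week) + C(location) + detached",
                    data=df, groups=df["rabbit"])
print(model.fit().summary())
```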
Quantitative analysis of fat clumps after SSP tendon detachment {#S0003-S2002}
---------------------------------------------------------------

SSP tendon detachment was associated with increased fat clump numbers (P \< 0.001) and area (P \< 0.0005) in SSP muscles compared to controls ([Table 2](#T0002)). Time after SSP tendon detachment did not significantly influence fat clump numbers or area (both at P \> 0.01). Muscle location (distal, middle or proximal) was strongly associated with increases in fat clump number (P \< 0.0005) and area (P \< 0.0005) ([Table 2](#T0002)). There were significantly more fat clumps in the distal quarter compared to the proximal quarter (P \< 0.0005), but their number was not significantly different from that in the medial quarter (P \> 0.01). The proximal quarter contained fewer fat clumps than the medial quarter (P \< 0.0005). Fat clumps were significantly larger in the distal quarter compared to the medial (P \< 0.001) and proximal quarters (P \< 0.0005), while the proximal and medial quarters were not significantly different (P \> 0.01) ([Table 2](#T0002)).

Table 2. Summary of ANOVA and of the post hoc linear mixed-effects model for the fat clump number and cross-section area. 0.001 ≤ P \< 0.01 (\*), 0.0005 ≤ P \< 0.001 (\*\*), P \< 0.0005 (\*\*\*).

Fat clump number

  Variable     ANOVA (P Value)
  ------------ ----------------------
  Detachment   P \< 0.001 (\*\*)
  Week         P \> 0.01
  Location     P \< 0.0005 (\*\*\*)

Fat clump number \~ log(μ) + β~1~ time + β~2~ location + β~3~ detachment + error (1\|rabbit)

  Comparison           β coefficient   Std Error   P Value
  -------------------- --------------- ----------- ----------------------
  Distal vs Medial     −0.115          0.077       P \> 0.01
  Distal vs Proximal   −0.787          0.076       P \< 0.0005 (\*\*\*)
  Proximal vs Medial   0.672           0.076       P \< 0.0005 (\*\*\*)

Fat clump cross-section area

  Variable     ANOVA (P Value)
  ------------ ----------------------
  Detachment   P \< 0.0005 (\*\*\*)
  Week         P \> 0.01
  Location     P \< 0.0005 (\*\*\*)

Fat clump area \~ log(μ) + β~1~ time + β~2~ location + β~3~ detachment + error (1\|rabbit)

  Comparison           β coefficient   Std Error   P Value
  -------------------- --------------- ----------- ----------------------
  Distal vs Medial     −0.287          0.084       P \< 0.001 (\*\*)
  Distal vs Proximal   −0.399          0.082       P \< 0.0005 (\*\*\*)
  Proximal vs Medial   0.111           0.083       P \> 0.01

Descriptive statistics of adipocyte numbers and areas {#S0003-S2003}
-----------------------------------------------------

Detached SSP muscles had on average 38.5 ± 11.7 adipocytes per field of view compared to 24.8 ± 4.8 for controls. The average adipocyte cross-sectional area was 0.0020 ± 0.0003 mm^2^ for detached SSP muscles compared to 0.0016 ± 0.0003 mm^2^ for controls ([Figure 4](#F0004)).

Quantitative analysis of adipocytes after SSP tendon detachment {#S0003-S2004}
---------------------------------------------------------------

SSP tendon detachment was associated with increased adipocyte numbers (P \< 0.0005) and cross-section area (P \< 0.01) in SSP muscles compared to controls ([Table 3](#T0003)). Time after SSP tendon detachment significantly increased adipocyte number (P \< 0.01) but had no significant influence on adipocyte cross-section area (P \> 0.01). Detached SSP muscles had significantly more adipocytes at week 12 (P \< 0.01) compared to week 4 ([Figure 4](#F0004)). The number of adipocytes was not significantly different between week 4 and week 8 (P \> 0.01) or between week 8 and week 12 (P \> 0.01). Muscle location (distal, middle or proximal) was associated with increased adipocyte numbers (P \< 0.0005) and cross-section areas (P \< 0.0005) ([Table 3](#T0003)). There were significantly more adipocytes in the distal quarter (P \< 0.0005) compared to the proximal quarter, but not compared to the medial quarter (P \> 0.01).
The medial quarter contained more adipocytes than the proximal quarter (P \< 0.0005). Adipocytes were significantly smaller in the distal quarter compared to the medial (P \< 0.01) and to the proximal quarters (P \< 0.0005) ([Table 3](#T0003) and [Figure 4](#F0004)). Adipocytes in the medial quarter were also smaller than in the proximal quarter (P \< 0.01).

Table 3. Summary of ANOVA and of the post hoc linear mixed-effects model of adipocyte number and cross-section area. 0.001 ≤ P \< 0.01 (\*), 0.0005 ≤ P \< 0.001 (\*\*), P \< 0.0005 (\*\*\*).

Adipocyte number

  Variable     ANOVA (P Value)
  ------------ ----------------------
  Detachment   P \< 0.0005 (\*\*\*)
  Week         P \< 0.01 (\*)
  Location     P \< 0.0005 (\*\*\*)

Adipocyte number \~ log(μ) + β~1~ time + β~2~ location + β~3~ detachment + error (1\|rabbit)

  Comparison           β coefficient   Std Error   P Value
  -------------------- --------------- ----------- ----------------------
  Week 4 vs 8          0.147           0.072       P \> 0.01
  Week 4 vs 12         0.239           0.072       P \< 0.01 (\*)
  Week 12 vs 8         −0.091          0.072       P \> 0.01
  Distal vs Medial     −0.09           0.057       P \> 0.01
  Distal vs Proximal   −0.493          0.057       P \< 0.0005 (\*\*\*)
  Proximal vs Medial   0.403           0.057       P \< 0.0005 (\*\*\*)

Adipocyte cross-section area

  Variable     ANOVA (P Value)
  ------------ ----------------------
  Detachment   P \< 0.01 (\*)
  Week         P \> 0.01
  Location     P \< 0.0005 (\*\*\*)

Adipocyte area \~ log(μ) + β~1~ time + β~2~ location + β~3~ detachment + error (1\|rabbit)

  Comparison           β coefficient   Std Error   P Value
  -------------------- --------------- ----------- ----------------------
  Distal vs Medial     0.117           0.041       P \< 0.01 (\*)
  Distal vs Proximal   0.256           0.041       P \< 0.0005 (\*\*\*)
  Proximal vs Medial   −0.138          0.041       P \< 0.01 (\*)

Discussion {#S0004}
==========

We characterized intramuscular fat accumulation in the SSP muscle in the rabbit model of rotator cuff tear. SSP tendon detachment produced an increased number of larger fat clumps and an increased number of smaller adipocytes in the distal quarter of the SSP muscle, near the site of the tendon tear ([Figure 5](#F0005)). Time after tendon detachment significantly increased the number of adipocytes. Our hypothesis, based on the obesity literature, that IMF accumulation observed after rotator cuff tears results from adipocyte hypertrophy rather than hyperplasia was not supported. The current study established that adipocyte hyperplasia was the main contributor to fat clump enlargement and explained SSP IMF expansion up to 12 weeks after tendon detachment.

Fat tissue has been described to expand via two mechanisms: adipocyte hyperplasia (an increase in the number of adipocytes) and adipocyte hypertrophy (an increase in individual adipocyte size) \[[24](#CIT0024),[31](#CIT0031)\]. Knowledge of adipocyte behaviour originates mostly from obesity research and the expansion of subcutaneous white fat deposits. Experiments from the 1970s showed that overfeeding combined with reduced energy expenditure over several months resulted in an important increase in adipocyte size without significant changes in the number of adipocytes \[[32](#CIT0032)\]. Consistently, the yearly turnover of human subcutaneous adipocytes is very low, approximately 8%, resulting in little change in adipocyte number and emphasizing the importance of adipocyte hypertrophy in the expansion of fat tissue in the context of obesity \[[33](#CIT0033)\]. There is evidence for regional differences in adipocyte behaviour in human obesity. While adipocyte hypertrophy characterizes upper-body subcutaneous fat, adipocytes cycling between hyperplasia and hypertrophy characterized deposits below the waist as obesity progresses upon high-fat feeding \[[23](#CIT0023),[25](#CIT0025),[34](#CIT0034)\].
Our results indicate that the expansion of IMF in the SSP muscle presents significant similarities with subcutaneous fat deposits located below the waist; fat expansion resulted from adipocyte hyperplasia, at least within the first 12 weeks after tendon detachment.

Intramuscular adipocytes in the current study were approximately 0.002 mm^2^, or 25 microns in diameter (assuming a circular adipocyte shape, πr^2^). This is smaller than mature white adipocytes, which are approximately 110 microns in diameter (with a range of 20 to 300 microns) \[[27](#CIT0027),[35](#CIT0035)\]. Smaller adipocytes, less than 10 microns in diameter, were previously described in the rat epididymal fat deposit \[[27](#CIT0027),[36](#CIT0036)\]. Considering the published spectrum of sizes for white adipocytes, intramuscular adipocytes were therefore characterized as small in both healthy and detached SSP muscles.

Increased adipocyte cellularity is indicative of the mechanism of adipogenesis taking place in the detached SSP muscle. The observation of increased adipocyte numbers combined with small cross-sectional areas in the distal quarter, where fat accumulation was most important, suggests the presence of newly formed cells. Pre-adipocytes are smaller in size than mature adipocytes \[[37](#CIT0037)--[39](#CIT0039)\]. Newly formed adipocytes of smaller size, driving the average adipocyte size lower, are a potential explanation for the smaller adipocyte size in the distal SSP muscle. Moreover, the persistence of adipocytes of smaller average size 12 weeks after tendon detachment suggests that, rather than maturing and growing to reach the proximal size, new adipocytes that were present 4 weeks after detachment have remained small, or new adipocytes were continuously generated in the distal detached SSP muscle.

The identity of the precursor cells contributing to the increased intramuscular adipocyte hyperplasia is under active investigation. Adipocytes derive from pre-adipocytes, which themselves differentiate from mesenchymal precursor cells \[[39](#CIT0039)\]. Adipocytes can also originate from existing mesenchymal tissue in the muscle \[[37](#CIT0037)\]. Four candidate muscle cells able to generate adipocytes have been described: a population of fibrocyte/adipocyte progenitors, muscle satellite cells, pericytes \[[35](#CIT0035)\] and bone marrow-derived cells \[[40](#CIT0040)\]. During skeletal muscle degeneration, adipocytes were demonstrated to derive from a population of bipotent progenitors residing within muscles and distinct from muscle progenitors \[[38](#CIT0038),[40](#CIT0040)\]. There is experimental support in mice for these cells as the source of SSP muscle adipocytes after rotator cuff tear \[[41](#CIT0041),[42](#CIT0042)\]. Satellite cells are also a population of primary cells residing in muscles with the ability to differentiate into adipocytes *in vitro* \[[43](#CIT0043)--[45](#CIT0045)\]. Fibrocyte/adipocyte progenitors and satellite cells may be activated locally in the distal quarter of the SSP muscle to produce adipocytes. The trigger may be the absence of forces transmitted to the muscle through the intramuscular tendon fibres after tendon detachment. Altered mechanical activity at the myotendinous junction may also explain the more prominent fatty accumulation at the distal quarter of the SSP muscle \[[46](#CIT0046)\]. Pericytes physically associated with the walls of intra-adipose blood vessels showed the potential to differentiate into adipocytes *in vitro* \[[47](#CIT0047)\].
Interestingly, the habitual presence of blood vessels in the vicinity of the fat cells was used as a criterion to select the fields for measurement. However, the vascular supply of the skeletal muscle enters through the middle half of the SSP muscle and is distributed to the distal and proximal portions \[[48](#CIT0048)\], a distribution that is inconsistent with the pattern of IMF we observed. The identity of the precursor cell(s) differentiating into adipocytes therefore remains speculative at this time, and all four previously identified precursors are potential candidates. The direct clinical implication of the current finding of adipocyte hyperplasia as the mechanism of fatty accumulation lies in its treatment. Successful treatment of rotator cuff tear and IMF accumulation will require a strategy to reduce the number of adipocytes. This is a significant challenge: an extensive literature indicates that adipocyte hypertrophy in obesity can be combatted by reducing caloric intake and increasing energy expenditure \[[22](#CIT0022)--[24](#CIT0024)\], but this approach is unsuccessful against adipocyte hyperplasia. Intramuscular white fat, similar to other white fat deposits, is characterized by a persistent number of adipocytes. Once adipocyte number increases, the new adipocytes are durable and difficult to lose \[[24](#CIT0024)\]; substantial weight loss results from a reduction in adipocyte volume but not in overall number \[[25](#CIT0025),[32](#CIT0032)\]. This concept is consistent with the literature on the reversibility of fat accumulation after rotator cuff tear. While fat accumulation was initially believed to recover after successful tendon repair, numerous experimental as well as clinical studies have shown that it is largely irreversible. Uhthoff et al. \[[19](#CIT0019)\] showed that animals reattached immediately after tear could recover muscle volume but did not reverse fat accumulation. Delayed repair also failed to reverse fat accumulation \[[18](#CIT0018)--[21](#CIT0021)\]. These four studies used precise invasive and radiologic measures and followed SSP muscles for 3 months after repair. Clinically, 38 patients showed no reversal of fatty accumulation 12--15 months postoperatively \[[47](#CIT0047),[49](#CIT0049)\], 35 patients showed progression of fat accumulation 6 months after repair \[[23](#CIT0023)\], and 47 patients followed between 60 and 133 months also showed progression of the fatty content of the rotator cuff muscles \[[50](#CIT0050),[51](#CIT0051)\]. The lack of reversibility of IMF accumulation may indicate a need for prompt repair of SSP tendon tears. Limitations of the current study include: 1) the anatomy of the rabbit rotator cuff muscles differs from that of humans; 2) sectioned tendons were wrapped in polyvinylidene fluoride membranes to prevent the formation of adhesions, which is not the case in humans; 3) changes were studied only during the first 12 weeks after tendon detachment, whereas in clinical practice longer delays before surgical repair of the SSP tendon tear are common; 4) tendon sectioning performed to achieve complete detachment is different from a tendon tear; 5) some fat deposits in humans have no precise correlates in animals and vice versa; 6) the osmium fixation method of determining adipocyte size and number is only possible in experimental studies. In spite of these limitations, the rabbit model of rotator cuff tear has stood out for its ability to replicate the clinical findings of fat accumulation.
Conclusion {#S0005}
==========

This study established adipocyte hyperplasia, together with increased fat clump number and size, as the main mechanism causing fat accumulation in the SSP muscle within 12 weeks after a rotator cuff tear. The changes were predominant in the distal quarter of the SSP muscle, near the tendon tear, where adipocyte number but not size increased. Identifying the trigger for adipocyte hyperplasia and the cell precursor(s) involved remains the next step in the search for better SSP repair outcomes.

Acknowledgments
===============

Funded in part by the Workplace Safety and Insurance Board of Ontario (04031) and the Canadian Institutes of Health Research (1109995). We thank Philippe Poitras for the surgeries, Ying Nie for tissue processing, and Carmen Fletcher for the intramuscular fat measurements.

Disclosure statement {#S0006}
====================

No potential conflict of interest was reported by the authors.

[^1]: This study was approved by the University of Ottawa Animal Care Committee.
{ "pile_set_name": "PubMed Central" }
Q: convert SAS date format to R

I read in a sas7bdat file using the haven package. One column has dates in the YYQ6. format. In R this is converted to numbers like -5844, 0, 7121, ... How can I convert this to a year format? I have no access to SAS, but these values should be birth dates.

A: A bit of research first: SAS uses 1 January 1960 as its zero date (see http://support.sas.com/publishing/pubcat/chaps/59411.pdf), so if you want the year of a value it should be

    format(as.Date(-5844, origin = "1960-01-01"), "%Y")

which in this case gives 1944 — is that what you are expecting? To learn more about the YYQ6. format, check this support article from SAS: http://support.sas.com/documentation/cdl/en/leforinforref/64790/HTML/default/viewer.htm#n02xxe6d9bgflsn18ee7qaachzaw.htm

Let me know if it is working. Umberto
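For anyone doing the same conversion on the Python side — for example if the sas7bdat file is read with pandas rather than haven — a minimal equivalent sketch is below; the file name and column name are hypothetical:

    import pandas as pd

    # -5844 days from the SAS epoch (1960-01-01) lands in 1944, matching the R result above
    df = pd.read_sas("birthdates.sas7bdat")
    birth_year = pd.to_datetime(df["dob"], unit="D", origin="1960-01-01").dt.year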
{ "pile_set_name": "StackExchange" }
Inter-Korea Joint Taekwondo Performance Held at Seoul City Hall

Written: 2018-02-12 18:50:28 | Updated: 2018-02-12 18:54:30

Taekwondo athletes from South and North Korea have put on a joint performance in Seoul. The hour-long taekwondo demonstration, including a ten-minute joint session, was held at Seoul City Hall on Monday afternoon in front of an audience of around 250 people, including Choue Chung-won, head of the Seoul-based World Taekwondo, and Ri Yong-son, head of the Pyongyang-based International Taekwondo Federation. It marked the third show of its kind since the North Korean team arrived in the South to celebrate the PyeongChang Winter Olympics. The two previous ones were held during the opening ceremony of the Winter Games last Friday and in Sokcho, Gangwon Province, on Saturday. The North Koreans will return to the North after a fourth joint performance, to be held at MBC in Sangam-dong in western Seoul on Wednesday.
{ "pile_set_name": "Pile-CC" }
Q: Image generator missing positional argument for unet keras

I keep getting the following error for the code below when I try to train the model:

    TypeError: fit_generator() missing 1 required positional argument: 'generator'

For the life of me I can not figure out what is causing this error. x_train is an rgb image of shape (400, 256, 256, 3) and for y_train i have 10 output classes making it shape (400, 256, 256, 10). What is going wrong here? If necessary the data can be downloaded with the following link: https://www49.zippyshare.com/v/5pR3GPv3/file.html

    import skimage
    from skimage.io import imread, imshow, imread_collection, concatenate_images
    from skimage.transform import resize
    from skimage.morphology import label
    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Model
    from keras.layers import Input, merge, Convolution2D, MaxPooling2D, UpSampling2D, Reshape, core, Dropout
    from keras.optimizers import Adam
    from keras.callbacks import ModelCheckpoint, LearningRateScheduler
    from keras import backend as K
    from sklearn.metrics import jaccard_similarity_score
    from shapely.geometry import MultiPolygon, Polygon
    import shapely.wkt
    import shapely.affinity
    from collections import defaultdict
    from keras.preprocessing.image import ImageDataGenerator
    from keras.utils.np_utils import to_categorical
    from keras import utils as np_utils
    import os
    from keras.preprocessing.image import ImageDataGenerator

    gen = ImageDataGenerator()

    #Importing image and labels
    labels = skimage.io.imread("ede_subset_293_wegen.tif")
    images = skimage.io.imread("ede_subset_293_20180502_planetscope.tif")[...,:-1]

    #scaling image
    img_scaled = images / images.max()

    #Make non-roads 0
    labels[labels == 15] = 0

    #Resizing image and mask and labels
    img_scaled_resized = img_scaled[:6400, :6400]
    print(img_scaled_resized.shape)
    labels_resized = labels[:6400, :6400]
    print(labels_resized.shape)

    #splitting images
    split_img = [
        np.split(array, 25, axis=0)
        for array in np.split(img_scaled_resized, 25, axis=1)
    ]
    split_img[-1][-1].shape

    #splitting labels
    split_labels = [
        np.split(array, 25, axis=0)
        for array in np.split(labels_resized, 25, axis=1)
    ]

    #Convert to np.array
    split_labels = np.array(split_labels)
    split_img = np.array(split_img)

    train_images = np.reshape(split_img, (625, 256, 256, 3))
    train_labels = np.reshape(split_labels, (625, 256, 256, 10))
    train_labels = np_utils.to_categorical(train_labels, 10)

    #Create train test and val
    x_train = train_images[:400,:,:,:]
    x_val = train_images[400:500,:,:,:]
    x_test = train_images[500:625,:,:,:]
    y_train = train_labels[:400,:,:]
    y_val = train_labels[400:500,:,:]
    y_test = train_labels[500:625,:,:]

    # Create image generator (credit to Ioannis Nasios)
    data_gen_args = dict(rotation_range=5,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         validation_split=0.2)
    image_datagen = ImageDataGenerator(**data_gen_args)

    seed = 1
    batch_size = 100

    def XYaugmentGenerator(X1, y, seed, batch_size):
        genX1 = gen.flow(X1, y, batch_size=batch_size, seed=seed)
        genX2 = gen.flow(y, X1, batch_size=batch_size, seed=seed)
        while True:
            X1i = genX1.next()
            X2i = genX2.next()
            yield X1i[0], X2i[0]

    # Train model
    Model.fit_generator(XYaugmentGenerator(x_train, y_train, seed, batch_size),
                        steps_per_epoch=np.ceil(float(len(x_train)) / float(batch_size)),
                        validation_data=XYaugmentGenerator(x_val, y_val, seed, batch_size),
                        validation_steps=np.ceil(float(len(x_val)) / float(batch_size)),
                        shuffle=True, epochs=20)

A: You have a few mistakes in your code, but considering your error:
    TypeError: fit_generator() missing 1 required positional argument: 'generator'

this happens because fit_generator is being called on the Model class itself rather than on a model instance. In the last block, Model.fit_generator(XYaugmentGenerator(...), ...) hands your generator to the self parameter, so Keras then reports that generator is missing. No model is ever constructed in the code above; build and compile one first, then call fit_generator on that instance (model.fit_generator(...)).

There is a second issue: the augmented image_datagen is never used. XYaugmentGenerator calls gen, which is a plain ImageDataGenerator() with no augmentation arguments, so data_gen_args has no effect. Either rename image_datagen to gen:

    gen = ImageDataGenerator(**data_gen_args)

or replace gen with image_datagen inside the generator:

    genX1 = image_datagen.flow(X1, y, batch_size=batch_size, seed=seed)
    genX2 = image_datagen.flow(y, X1, batch_size=batch_size, seed=seed)
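A minimal sketch of the corrected training call, assuming the variables from the question (x_train, y_train, x_val, y_val, XYaugmentGenerator, seed, batch_size) are already defined. The two-layer network below is a deliberately tiny stand-in so that the call runs end to end; it is not the poster's intended U-Net:

    from keras.models import Sequential
    from keras.layers import Conv2D
    import numpy as np

    # tiny fully convolutional stand-in: 3-channel input, 10-class per-pixel output
    model = Sequential([
        Conv2D(16, 3, padding='same', activation='relu', input_shape=(256, 256, 3)),
        Conv2D(10, 1, padding='same', activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy')

    # note: called on the model instance, not on the Model class
    model.fit_generator(
        XYaugmentGenerator(x_train, y_train, seed, batch_size),
        steps_per_epoch=int(np.ceil(len(x_train) / batch_size)),
        validation_data=XYaugmentGenerator(x_val, y_val, seed, batch_size),
        validation_steps=int(np.ceil(len(x_val) / batch_size)),
        epochs=20)

Once this runs, swapping the stand-in for a real U-Net only changes how model is built; the fit_generator call stays the same.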
{ "pile_set_name": "StackExchange" }
Q: Read multiple lists from python into an SQL query

I have 3 lists of user id's and time ranges (different for each user id) for which I would like to extract data. I am querying an AWS Redshift database through Python. Normally, with one list, I'd do something like this:

    sql_query = "select userid from some_table where userid in {}".format(list_of_users)

where list_of_users is the list of user id's I want — say (1, 2, 3, ...). This works fine, but now I need to somehow pass it a triplet of (userid, lower time bound, upper time bound), for example ((1,'2018-01-01','2018-01-14'),(2,'2018-12-23','2018-12-25'),... I tried various versions of this basic query

    sql_query = "select userid from some_table where userid in {} and date between {} and {}".format(list_of_users, list_of_dates_lower_bound, list_of_dates_upper_bound)

but no matter how I structure the lists in format(), it doesn't work. I am not sure this is even possible this way, or if I should just loop over my lists and call the query repeatedly for each triplet?

A: Suppose the lists of values are something like the following:

    list_of_users = [1, 2]
    list_of_dates_lower_bound = ['2018-01-01', '2018-12-23']
    list_of_dates_upper_bound = ['2018-01-14', '2018-12-25']

The formatted sql would then be:

    select userid from some_table
    where userid in [1, 2]
      and date between ['2018-01-01', '2018-12-23'] and ['2018-01-14', '2018-12-25']

This is not what you intended — it is simply invalid SQL, because each operand of between must be a scalar value. I suggest looping over the lists and passing a single value to each placeholder.
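A minimal sketch of that loop, assuming an open DB-API connection named conn (for Redshift this is typically psycopg2, whose placeholder style is %s); the table and column names follow the question and are otherwise hypothetical:

    # zip the three lists into (userid, lower, upper) triplets and run one
    # parameterized query per triplet, collecting the rows as we go
    rows = []
    cur = conn.cursor()
    for userid, lower, upper in zip(list_of_users,
                                    list_of_dates_lower_bound,
                                    list_of_dates_upper_bound):
        cur.execute(
            "select userid from some_table "
            "where userid = %s and date between %s and %s",
            (userid, lower, upper),
        )
        rows.extend(cur.fetchall())

Passing the values as parameters rather than through str.format() also sidesteps quoting problems and SQL injection.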
{ "pile_set_name": "StackExchange" }
2 Music Schools in Brooklyn, New York

Site Evaluation

There are 2 music schools in Brooklyn, New York. The largest music school in Brooklyn, by student population, is CUNY Brooklyn College, which has 17,094 students. The school's website has a Google Page Rank of 7. The school appears to maintain its site on a regular basis, as indicated by the fact that there are not many broken links just one click away from the homepage.

Visual Preferences

Color: The colors used on the websites of the two music schools in Brooklyn are as follows — CUNY Brooklyn College: white 60% and purple 3.6%; Long Island University - Brooklyn Campus: grey 22%.

Font: The fonts used on the websites of the two music schools in Brooklyn are as follows — CUNY Brooklyn College: Sans serif; Long Island University - Brooklyn Campus: Arial.

Social Presence

Both music schools in Brooklyn have Facebook pages. Long Island University - Brooklyn Campus's page has 6,276 "likes". CUNY Brooklyn College's page has 5,638 "likes". The only music school in Brooklyn with a YouTube channel is CUNY Brooklyn College. Its YouTube channel has 66 subscribers. Both music schools in Brooklyn have been bookmarked on Delicious. CUNY Brooklyn College has been bookmarked 17 times. Long Island University - Brooklyn Campus has been bookmarked 15 times.

Web Metrics

The Brooklyn music school website that receives the greatest number of unique visitors belongs to Long Island University - Brooklyn Campus. This is indicated by the fact that Long Island University - Brooklyn Campus's site has a Compete Rank of 22,892, the highest Compete Rank of music school websites in Brooklyn. And the Brooklyn music school website that is considered the most authoritative by SEOMoz belongs to Long Island University - Brooklyn Campus. It has a SEOMoz Page Authority rank of 70.8.
{ "pile_set_name": "Pile-CC" }
Q: In an ItemsControl, binding to a property doesn't work but binding to DataContext does

When I run this code, the Item object in my CustomControl becomes a System.Windows.Data.Binding containing nothing but null values, but the DataContext becomes a MyClass object (which Items is populated with):

    <UserControl x:Name="thisControl">
        <Grid x:Name="LayoutRoot">
            <ItemsControl ItemsSource="{Binding ElementName=thisControl, Path=Items}">
                <ItemsControl.ItemsPanel>
                    <ItemsPanelTemplate>
                        <local:UniformGrid Columns="1"/>
                    </ItemsPanelTemplate>
                </ItemsControl.ItemsPanel>
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <local:CustomControl Item="{Binding}" DataContext="{Binding}"/>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </Grid>
    </UserControl>

My CustomControl class:

    public partial class CustomControl : UserControl
    {
        public CustomControl()
        {
            InitializeComponent();
        }

        public object Item { get; set; }
    }

Is there something I don't know about ItemsControl? This is written in Silverlight 4.0. Thanks in advance!

A: There is no need for you to be attempting to assign the custom control's DataContext; the ItemsControl will take care of that for you. Also, your CustomControl needs to declare the Item property as a DependencyProperty for the binding to work. Binding doesn't work on plain ordinary properties. Example:

    public object Item
    {
        get { return GetValue(ItemProperty); }
        set { SetValue(ItemProperty, value); }
    }

    public static readonly DependencyProperty ItemProperty =
        DependencyProperty.Register(
            "Item",
            typeof(object),
            typeof(CustomControl),
            new PropertyMetadata(null));

(I assume that RSListBoxStopItem is a typo and you meant to generalise to CustomControl.)
{ "pile_set_name": "StackExchange" }
NRCC’s Independent Expenditure Is Armed With Shields As a self-described “political nerd,” Shields considers Margaret Thatcher and President Ronald Reagan to be his surrogate political parents. In school while his classmates were protesting NATO’s nuclear missiles based in the U.K., Shields wore a “Peace Through Strength” button on his uniform. Shields returned to the U.S. for college and was scheduled to graduate in 1992, but little by little his part-time political work became full time, and he still hasn’t finished his degree at GMU. George W. Bush adviser Karl Rove has a similar line on his résumé. In July 1996, Shields moved to Georgia to become communications director for Friends of Newt Gingrich, the then-Speaker’s campaign operation. Six months later, Shields returned to D.C. to become his national political spokesman and ended up working with Gingrich for five years. After a very brief foray in the federal work force following the election of Bush, Shields returned to elections work. (“I figured out I’m a campaign person, a political person,” Shields said.) He moved to Alabama and managed the gubernatorial campaign for Tim James, son of the former governor. James was thumped by Rep. Bob Riley in the June 2002 primary, finishing third with 9 percent. Then Shields moved north to Pennsylvania to handle press for Rep. George Gekas (R), who was locked in a battle with Democratic Rep. Tim Holden after redistricting threw the two incumbents into the same newly drawn district. Gekas lost by 2 points but recently called Shields “the best-versed operative that I had run into, that was D.C.-based.” Eight years later, Holden is on the extended list of GOP targets that Shields might choose to invest in this fall. Shields spent two years at the NRCC as research director before applying to be chief of staff for Reichert. “We had all the same values: wanting to serve the country and having the heart of a servant,” Reichert said. “I offered him the job right there.” Just three months into Reichert’s first term, GOP leadership in the House moved to intervene in the Terri Schiavo right-to-die controversy. “It was one of [my] first and most important decisions,” Reichert said. According to the Congressman, Shields fostered a healthy debate among the staff and, ultimately, Reichert voted against intervening, putting him at odds with the majority of his party members, who sought to prevent removal of Schiavo’s feeding tube. In early 2009, Shields was hired again by the NRCC to be director of special projects. At the time, Democrats were riding high and it looked like Republicans might be headed for a third straight difficult election cycle. Shields’ résumé made him an excellent candidate for the NRCC. Now, the environment has shifted but the Republicans’ confidence in Shields remains strong. “He’s a House Republican guy. This is what he knows and this is what he’s done,” said NRCC Deputy Executive Director Johnny DeStefano, who is also political director for House Minority Leader John Boehner (R-Ohio). “Mike has seen the operation on all sides. He has a good cross-section of experience to come back and lead this thing,” said Carl Forti, the former NRCC veteran who led the committee’s IE effort in 2006. “He knows the product is only as good as your research.”
{ "pile_set_name": "Pile-CC" }
A couple of years ago, a mechanical engineer met with Narendra Modi, who would later become India's Prime Minister. The engineer proposed a bold and radical idea that he said would completely change the Indian economy. The idea is called demonetization, and it happened six months ago. The government suddenly declared most of the paper money in circulation worthless. Citizens had a short time to turn in their stashes of cash for new bills. It was an effort to flush out corruption, get people to join the banking system and in the process, help the poor of India. In part one of this two-part series, we met the man behind demonetization, the engineer, Anil Bokil. Now in part two, we ask, did demonetization work? Modi had three problems he wanted to solve if the country relied less on cash: Corruption. Without cash, it's harder to hide money from the taxman. It's also harder to ask for a bribe or run a black market business. Businesses could be more competitive and grow faster by using banks and electronic payments. More people would have to use banks, and not keep all their life savings in a drawer. This would keep their money safer. But, suddenly removing most of the cash in an economy, is very messy. It hurt. Demonetization has affected different people in a variety of a ways. We talked with farmers who don't trust credit cards, small shop owners who had to find new ways to sell their goods, and high tech companies trying to cash in. Today on the show, we evaluate Modi's demonetization plan ... report card style. How did this shock to a cash-dependent economy play out for a country of a billion people? This is the second part in our series on India's cash experiment. Last time, we told you the story of the mysterious man who came up with the idea of reducing India's dependence on cash. Today, we talk about how that idea played out in real life. STACEY VANEK SMITH, BYLINE: India is so addicted to cash that regular people keep piles of it in their houses under the sink, in drawers. SMITH: Most people don't use banks. They don't have credit cards. Cash is how people save money, little stacks of paper. VANEK SMITH: So imagine the chaos when India's prime minister, Narendra Modi, got on TV one night and said pretty much all the cash you have is worthless. It was called demonetization. SMITH: Modi said he was doing this to fight corruption, known around there as black money. Black Money is money that's outside the tax system. VANEK SMITH: Modi said you have eight weeks to round up all of your old cash and trade it in at the bank for these new bills we've just minted. SMITH: As you can imagine, in the weeks afterwards, there were giant lines at the banks, panic. But even six months later, people were discovering old money in the bookshelf or something and freaking out. VANEK SMITH: When I visited New Delhi, you could see people lined up in front of the central bank. The deadline to change old money had passed months ago, and each person there had a story about why they had missed the deadline, like Gurdeep Sagoo. Gurdeep came here today on behalf of his mother. She was sick during the money chaos, and he had this official-looking paper from the doctor. And so what is it? What does the paper say? GURDEEP SAGOO: Paper said that she was in a coma. VANEK SMITH: She was in a coma, so she could not change her money. SAGOO: Yeah. SMITH: Fourteen thousand rupees, about $210, his mother's life savings. She had hidden it in her house, but she didn't tell anyone because she was in a coma. 
VANEK SMITH: Gurdeep took the day off of work and came to the bank to plead his mother's case, but bank officials said no dice, it's too late. The money is worthless. How does it feel to look at that money now? SAGOO: Well, it's blank paper. It's my hard money from my mother. VANEK SMITH: I mean, how do you feel about all this? SAGOO: Upset. VANEK SMITH: Upset. Demonetization caused all kinds of chaos in India. Businesses went bankrupt because nobody had cash to buy anything. People died of heart attacks in the long lines. Some people committed suicide when they realized that they didn't have enough time to change out all their money. About half the country doesn't even use bank accounts. SMITH: And the government had not prepared for this. They didn't make enough new cash, so parts of India were without money for weeks. And the new cash they printed didn't fit into the old ATMs. As public policy goes, this was a disaster. VANEK SMITH: And I expected Gurdeep to be really angry at the government for taking away his mother's life savings and for all the trouble this caused but he wasn't. SAGOO: On the whole, it was a good way. VANEK SMITH: Really? SAGOO: Yeah. VANEK SMITH: Why? OK, why? You have to explain this to me because it doesn't make any sense to me. SAGOO: Well, black money and other monies that has to be replaced but with the condition that it should not affect the common people. VANEK SMITH: But it did. SAGOO: Yeah. It's still affecting. VANEK SMITH: But you still like it. SAGOO: Even then. Even then. VANEK SMITH: Gurdeep was not an exception. Almost everyone I talked to in India told me the same thing. Yes, it's been hard for me. Yes, a lot of people were harmed, but the government did the right thing. SMITH: Which is striking. I mean, six months ago, the prime minister of India basically dropped a bomb on the economy. Now polls show he's never been more popular. Somehow, this whole thing, it was a hit. VANEK SMITH: Prime Minister Narendra Modi said this economic shock therapy was necessary. It was the only way to solve the country's corruption problem and pull India's economy into the 21st century. SMITH: Today on the show, six months later, we will give demonetization a report card. Did it work? (SOUNDBITE OF FLAVIO LEMELLE'S "CHEEKY TONGUES") SMITH: When Prime Minister Modi decided to take away his country's cash, he said this will make India better. And he gave three main reasons for that. VANEK SMITH: The first reason was corruption because without cash, it is harder to hide money from the tax man. It is also harder to pay a bribe or run a black market. Criminals love cash, so take the cash away, it makes their lives harder. SMITH: The second reason was the modernization of the Indian economy. Businesses, he said, could be more competitive, could grow faster if they used banks and electronic payments instead of paper. VANEK SMITH: The third reason was that Modi said regular people would be safer and better off if they were not keeping their life savings in a drawer, if they had access to modern financial help. SMITH: So that's why Modi made cash scarce in India - eliminated the large bills, put limits on new bills and basically let everyone figure out how to deal with this new cashless society. VANEK SMITH: So I went to India six months after demonetization had happened to check in on these three goals, to see how India was coming along and see how demonetization was working. My first stop I went to talk to some farmers. 
Most of the population of India farms these little, tiny plots of land. Oh, there's a tractor that just pulled out in front of us. I drove out to Ghaziabad. It's a rural area outside of New Delhi. And here farmers work little half-acre plots all right next to each other. They grow tomatoes and celery and sugarcane. I found about a dozen farmers all standing together under a giant fig tree in the middle of all the fields. ASSAR PREMI: My name is Assar Premi. My age is 93. VANEK SMITH: Ninety-three. Assar says he and his friends meet here every morning and every evening. PREMI: They're my friends. Sit here, discuss everything here. VANEK SMITH: Really? Do you sit here many evenings? PREMI: Yes. VANEK SMITH: What do you talk about? PREMI: About our relationship, sometimes politics also. VANEK SMITH: (Laughter). Relationships and politics. Things are hard here. The farmers don't have much land. The crops they raise barely cover their expenses. And everything, all of the business here, it's all done in cash. SMITH: After Prime Minister Modi said a lot of that cash would be worthless in those first few weeks, farmers had a tough decision to make. I mean, obviously they had to work. They had to tend to their fields. But they also had to do something about their life savings in cash that was about to be rendered worthless. VANEK SMITH: Garunder Shodi, one of the farmers here under the fig tree, said he and all the farmers he knew had to leave the fields and go get in line at the bank. GARUNDER SHODI: There were some people who suffered a lot. There were queues since morning 6 o'clock, 5 o'clock, till night, 10 o'clock. There are people, hand-to-mouth people - there are so many poor people. SMITH: And these are people who could not afford to miss a day's wages standing in line totally panicked. And nobody had money to buy anything. Farmers couldn't sell their crops. SHODI: The small farmers also suffered. Seventy trucks of tomatoes were thrown on the road by farmers. Because of dismonetization (ph), there was no buyer. VANEK SMITH: No buyers for the tomatoes. And it's not like farmers can just be patient and wait it out. They have to harvest the food when it's ready. And when there were no buyers they just had to destroy their crops. During this time, farmers saw sales of produce drop in half. SMITH: I almost feel like the government wanted to punish people for using cash. It was such a pain for everyone in those first few weeks that it almost seems like they were trying to send a message, that hey, look, like, you would be way better off if you just used bank accounts, if you just had a credit card. VANEK SMITH: But after the lines at the banks got shorter, all the farmers I talked to just went back to using cash, like Harinder Singh, who grows greens and onions and these things. HARINDER SINGH: (Foreign language spoken). VANEK SMITH: Oh, eggplants. SINGH: (Foreign language spoken). VANEK SMITH: Yes, it's a little guy. And through a translator, Harinder told me that he is just too old to start trying to pay for things in a new way. SINGH: (Through interpreter) Cash is something I understand, so I would rather deal with cash. I don't understand these other things, so I'm happy with cash. VANEK SMITH: Harinder said to him, credit cards seemed dangerous, like someone could steal the number and use it. He wasn't exactly sure how they worked and it kind of freaked him out. He wants to stick with cash. SMITH: So we're going to try and grade how Modi's done on his objectives. 
And so one of the reasons for demonetization was to improve the lot of regular people in India, to improve the lot of farmers. I'm going to call this grade an incomplete because it seems like things were very bad and then went back to normal, mostly. People are still using cash. VANEK SMITH: Right. Nothing really changed, although the government has claimed that millions of people opened bank accounts in rural areas. From everything that I've seen and read, incomplete sounds about right. SMITH: OK, next up? VANEK SMITH: The suburbs of New Delhi, which were actually not too far from the fields. You could actually see a lot of the high-rises from the eggplant patch. And when you drive through them, it's really striking because it's all these, like, nice kind of high-rise buildings and they're all vacant. Empty building. Empty building. I wanted to see these buildings because they have become, like, a symbol of corruption in India. One of the main reasons Modi gave for demonetization was to fight corruption, black money. And Modi and his advisers felt like corruption was siphoning off the economic success of his country. Now, there are a lot of ways to hide cash in India, but these apartment buildings, this is where most of the black money in India was stored. AKHILESH TILOTIA: But real estate, of course, is the big gorilla. VANEK SMITH: Akhilesh Tilotia is the author of "The Making Of India." He says in fact, high rises like these are where most of the black money in India is stored, something like 25 percent of the country's GDP. SMITH: So here's how it works. Say you want to buy an apartment and the price is $125,000. Well, officially, you only pay $100,000. The extra $25,000 you slip to the owner in cash. Nobody talks about it. It doesn't get taxed. VANEK SMITH: So as the owner, you get $25,000 in untaxed money under the table, which is great. And as the buyer, you count on being able to do that same thing when you sell your apartment someday. But when Modi rolled out his demonetization plan that whole system just kind of shut down. TILOTIA: I am stuck. If you were sitting with an inventory of cash that becomes exposed, and suddenly that black money has disappeared because now you can't transact that house at a higher amount. You will find that builders are willing to offer anywhere between 10 to 30, 40 percent discounts. VANEK SMITH: That's a lot. That's a lot. TILOTIA: Yes. And so that has had a rather terrible shock for people who were hiding or putting away their black money in some of these places. SMITH: I mean, it is bad for the economy. Although from the government's point of view I can see how they might consider this a success, you know, thwarting this big pool of black money. And eventually prices will adjust and things will return back to normal but without the whole kickback part. VANEK SMITH: That is true. But it is still hard to know whether the actual goal of getting rid of corruption in India has been successful. I mean, the government was hoping that tax evaders and criminals would be stuck with these huge stacks of cash that they wouldn't have time to turn in, or when they turned them in the bank would suddenly see that they had all this cash that they hadn't been taxed on. SMITH: In other words, they would learn that crime does not pay. VANEK SMITH: They would learn that crime does not pay. 
But Bhaskar Chakravorti, an economist at Tufts University, says even after all that drama almost all of the cash in circulation in India still made its way back into the banking system. BHASKAR CHAKRAVORTI: Not all of that was legal cash. It just happened to be the case that India has - is extremely innovative in getting it on constraints. SMITH: Innovative. VANEK SMITH: Innovative. For instance, I talked to one banker who told me that he'd seen a bunch of his colleagues at the bank taking huge amounts of cash from rich clients and just changing it into new bills for a little fee. There were also stories of rich families asking their maids and drivers and cooks to stand in line for them and change out their cash. VANEK SMITH: Yeah. That feels right. And that's not terrible for six months later. SMITH: Not bad. Yeah, it is a long-term problem. Our next and final stop is a market in the city of New Delhi, where we will meet people who actually are making the best of a cashless economy after this. (SOUNDBITE OF HARLIN JAMES AND PAUL LEWIS' "YADA YADA") VANEK SMITH: For my final stop, I wanted to see how things had changed for the urban middle class in India. Remember, Modi believed that getting rid of large amounts of cash would help India move into the 21st century. Businesses, he said, could be more competitive and grow faster using banks and electronic payments. So I went right into the heart of New Delhi to this market to see what people there would tell me. Rawinda Bhardwaj is 66 and he owns a pharmacy stall in the market. RAWINDA BHARDWAJ: I'm running my pharmacy shop here since the last 35 years. VANEK SMITH: Yeah, you have a lot of prescriptions, I'm seeing, all the way up to the ceiling. This market was in a very middle-class neighborhood. It was a nice market. But even still, almost all of the businesses had only taken cash, including Rawinda. He was a cash man. He didn't like credit cards, didn't really understand how they worked. Mobile payments were totally out of the question for him. SMITH: But then demonetization happened. And in the weeks afterwards, you know, Rawinda's customers still needed their prescriptions, their medicine, and cash was no longer an option for him. VANEK SMITH: So Rawinda made himself figure it out. He started taking credit cards, and he even signed up for this mobile payment service called Paytm where people could pay for their prescriptions on their mobile phones. Has it been good for business to take the cards? BHARDWAJ: It's very good for business. VANEK SMITH: Now, six months later, about half of Rawinda's customers pay with Paytm or credit card. And he says they even buy a little bit more than they used do. Oh, these are your receipts. BHARDWAJ: Yes, receipts. This is all by credit card, by Paytm, by debit card, by check. VANEK SMITH: Rawinda now checks his deposits on his phone. He no longer has to walk to the bank with cash deposit every day. And he loves this. He's become such a convert that, in fact, he has stopped using cash himself. BHARDWAJ: My personal, all credit card. VANEK SMITH: Oh, you don't use cash anymore? BHARDWAJ: No, no, no, no need of cash - credit card. VANEK SMITH: Most of the stores in the market now take credit cards, and all the stores in the market take mobile payments from this company called Paytm. In fact, everywhere I went in India everybody was talking about Paytm and how big they'd gotten since demonetization. So I thought I would go check out the company. Oh, Paytm. Here we are. 
For Paytm, demonetization was like the greatest gift ever because when the government got rid of most of the cash in the country, it didn't really offer an alternative. Madhur Deora is the CFO of Paytm. And he said when Modi made his announcement, the company knew this was their moment. MADHUR DEORA: Then we sort of jumped into action. So we had front-page ads in the next day's newspapers. VANEK SMITH: Paytm wasn't exactly a household name, and so a lot of the ads were just like, hey, we are here. You can pay for stuff with us. Other ads were these kind of DIY instructions to businesses and employers, basically, explaining as simply and clearly as possible how to use the Paytm phone app, even how mobile payments worked. DEORA: One of the ads that we did we feel quite proud off. A quarter of that page was a cut out. So merchants could literally cut a quarter of that ad and just put that up and just fill in their name and their mobile number that they could put on the wall. VANEK SMITH: Once their name and their mobile number was on the wall, customers could use that information to pay them through the Paytm app. It was very simple. And businesses were desperate to do this. They were all signing up - little roadside vegetable stands, hardware stores, taxis - even temples started letting people leave offerings with Paytm. SMITH: Now, I know India is a big country, but the growth rates that Paytm was seeing was pretty amazing. Half a million people were signing up every day. A hundred and twenty million Indians now use Paytm. VANEK SMITH: And for the government, this is also good news because, unlike cash, Paytm transactions are trackable. They are taxable. Between the credit cards and the mobile apps, it was like this whole unseen part of the economy was suddenly out in the light. SMITH: So if you look specifically at the part of the Indian economy that was sort of on the verge of moving away from cash, anyway - if you look at them - the people we're talking about here - I mean, I'd give the grade satisfactory. VANEK SMITH: Yeah, satisfactory, for sure. SMITH: And as we talked about at the beginning, Modi does deserve some extra credit because, I've got to say, I've never seen a political official who changed an economy overnight, and people still approved of him. People still forgave him for what he did. VANEK SMITH: And it wasn't just a question of forgiveness. This move actually made Modi more popular. His approval ratings are now at 69 percent, which is incredible for a politician. And almost everyone I talked to in India supported Modi's demonetization plan, even people who'd really suffered because of it - actually, especially people who'd really suffered because of it. And when I asked them why, a lot of them told me that they stand in long lines all the time and deal with frustrating bureaucracy all the time. But all of a sudden, they're standing in line for a reason - to thwart corruption, to punish tax evaders. All of a sudden, standing in line and going through all this bureaucracy had a patriotic purpose. SMITH: You know, this explains a lot. When I was listening to the last episode, which was about Anil Bokil - he's the architect of this whole plan - he had this strange moment with you when he was talking about demonetization in an almost religious way. SMITH: And now I kind of understand it. Enlightenment means finding sort of a greater meaning and purpose in even the difficult things in life, and that's essentially what India is doing right now. VANEK SMITH: Yes. 
And I heard this from a lot of people, including Gurdeep Sagoo, the man whose mother lost her life savings because she went into a coma at the wrong moment. He was standing there with his mother's worthless cash in his hand, and he said he was just so fed up and so frustrated with all the corruption in his country. And it finally felt like things were changing, like Modi was fixing it. I asked him what he was going to do with all the old bills that had belonged to his mom. SAGOO: Maybe I'll put a frame. This was 1,000 rupees. No, these are 500 rupees. VANEK SMITH: You'll frame it? SAGOO: Maybe for my next generation. VANEK SMITH: Oh, so that your family can see what the old... SAGOO: Yeah, for family, just for the sake of showing of the people. VANEK SMITH: Just for the sake of showing the people. Like, here it is. Here is a token of the sacrifice we all made for the country - that we all made to make a stronger India. VANEK SMITH: We love to hear what you think of the show. You can email us [email protected] or find us on Facebook or Twitter. SMITH: Also, this week marks 30 years since Terry Gross started the NPR program Fresh Air. We're celebrating by sharing our favorite interviews and moments using the hashtag #freshair30. You can find the Fresh Air archive and new episodes on NPR One or wherever you get your podcast. Those of us who ask questions for a living are still in awe of Terry Gross. VANEK SMITH: Today's episode was produced by Elizabeth Kulas. PLANET MONEY is edited by Bryant Urstadt and produced by Alex Goldmark.
{ "pile_set_name": "Pile-CC" }
Orthodromic study of the sensory fibers innervating the fourth finger. Fourth finger stimulation has been used to obtain the compound nerve action potential (CNAP) of the median and ulnar nerves with a single cutaneous bipolar recording electrode placed at specific sites of the upper limb. In normal subjects, the response was a combination of both action potentials, which could be seen as separate peaks only when the recording was made at midarm with the elbow flexed at 90 degrees. This finding is mainly attributed to the longitudinal sliding of the nerves with joint movement. In patients with carpal tunnel syndrome, there was a striking separation between the responses of the two nerves in the wrist recording. This finding allows the technique to be applied in the clinical assessment of median nerve entrapment at the wrist, demonstrating graphically the delay of the median nerve action potential relative to that of the ulnar nerve.
{ "pile_set_name": "PubMed Abstracts" }
Q: Create file and directories at the same time

I am trying to create a blank file and its directories. I have tried to use

    cd. > foo\bar.txt

but it won't also make the directory. Thank you.

A: The only thing I can suggest is to mkdir first and then create the file. It is really two instructions, but you can execute them on one line:

    mkdir test& cd. > test\test.txt
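If the same task is being scripted from Python rather than typed at the cmd prompt, a minimal equivalent sketch (the path is just an example) is:

    from pathlib import Path

    # create any missing parent directories, then an empty file
    p = Path("foo") / "bar.txt"
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()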
{ "pile_set_name": "StackExchange" }
Q: What am I missing here: $U(144) \neq U(140)$

I'm confused about the following exercise:

Prove that $U(144)$ is isomorphic to $U(140)$.

Here are my thoughts: $$U(144) = U(12^2) = U(3^2)\oplus U(2^4) = \mathbb Z_{6} \oplus \mathbb Z_{8}$$ and $$ U(140) = U(2^2 \cdot 5 \cdot 7) = U(2^2) \oplus U(5) \oplus U(7) = \mathbb Z_{2} \oplus \mathbb Z_{4} \oplus \mathbb Z_{6}$$ And I have the following result: $\mathbb Z_{n_1 \dots n_k}\cong \mathbb Z_{n_1}\oplus \dots \oplus \mathbb Z_{n_k}$ if and only if the $n_i$ are pairwise coprime. But $2$ and $4$ are not coprime, therefore $\mathbb Z_{2} \oplus \mathbb Z_{4} \oplus \mathbb Z_{6}$ is not isomorphic to $\mathbb Z_{6} \oplus \mathbb Z_{8}$. What am I doing wrong?

A: Hint: $U(2^4)$ is not isomorphic to $\mathbb{Z}_8$.
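One way to finish from the hint, as a sketch: list $U(2^4)=U(16)=\{1,3,5,7,9,11,13,15\}$ and check orders directly. $3$ has order $4$ (since $3^2=9$ and $3^4=81\equiv 1 \pmod{16}$), $15\equiv -1$ has order $2$ with $15\notin\langle 3\rangle=\{1,3,9,11\}$, and no element has order $8$. Hence
$$U(2^4)\cong \mathbb Z_{2}\oplus\mathbb Z_{4},$$
so
$$U(144)\cong \mathbb Z_{6}\oplus\mathbb Z_{2}\oplus\mathbb Z_{4}\cong U(2^2)\oplus U(5)\oplus U(7)\cong U(140).$$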
{ "pile_set_name": "StackExchange" }
Are you looking for ways to improve your business website? Want to know why adding video should be one of the techniques to try? We share five reasons you should use video on your website in the infographic below, here are the key points: Selling is about storytelling, and videos are a great way to share a story Videos keep people on your website longer and engage them with your content People work with people, and video helps people get to know, like and trust you Videos keep your audience interested, no matter your product or service Videos are fast and convenient Enjoy the infographic. This post was first published on the Red Website Design Blog.
{ "pile_set_name": "OpenWebText2" }
The latest update for zombie survival sandbox DayZ brings with it several new fixes and features, including a few tweaks that will stop some players and spawned zombies from being invisible to others, according to a post on the game's Tumblr. Additions have also been made to the steadily-growing hunting and cooking system. A new system for "emissive" textures has been implemented, which will make the air around fires give off a heat haze. Players can also now upgrade fireplaces to make makeshift heat ovens for cooking in the wilderness. New craftable content included in the update, besides fireplaces and cooking improvements, includes PVC bows and tweaks to the AKM gun that will make it compatible with the DayZ attachment system. The team is working on a side-mounted PSO scope to add on to AK weapons as well. Throwing, ragdoll and bow and arrow physics have also been smoothed over for more realistic arcs. Arrows will also now stick into targets and the animations for drawing and firing bows has been improved. New towns have also been added to the Chernarus map, tucked away and abandoned in the green landscape. Pictures of two such areas can be found in the Tumblr post. The team is also working to improve animal, collision and roaming zombie pathfinding and optimizations for the persistent loot system. Once the build containing these updates is stable, developer Bohemia Interactive will push it live. Future updates will include vehicles and a barricading system, according to the post.
{ "pile_set_name": "OpenWebText2" }
Video Description and More... In this exciting FREE Video Art Lesson, Chuck McLachlan works on a preliminary sketch for his upcoming painting titled "The Road Home", and discusses the importance of knowing your subject matter. A former NFL football player with an articulate New England accent and the soul of a raconteur, Chuck McLachlan dispels the notion that there might be anything called a typical artist. Whether the subject is football or the latest book on Picasso, Chuck brings thoughtful and colorful enthusiasm to the classroom discussion. His work hangs in many privat ....
{ "pile_set_name": "Pile-CC" }
An intriguing mystery surrounding one of the most important portraits of the early 16th century has been solved by an art historian at St John’s College.

Portrait of Lady Margaret Beaufort by Meynnart Wewyck. Image supplied by: St John's College, University of Cambridge

The painting of Lady Margaret Beaufort - mother of King Henry VII - is the first piece of work identified as by Dutch artist Meynnart Wewyck, and the oldest large-scale portrait of an English woman. While Wewyck was Henry VII’s preferred painter, his name has been unknown because the absence of a signed or documented work has made it impossible to attribute paintings to him. His 180cm tall by 122cm wide painting is the earliest large-scale portrait of an English woman, and one of the earliest large-scale portraits of a single individual in the UK. Educationalist and philanthropist Lady Margaret was one of the wealthiest women in England and, once her son was on the throne, used her money to build schools, churches, and two University of Cambridge colleges – Christ’s and St John’s. The portrait of her held at St John’s was originally believed to have been given to the college in the late 16th century. But fellow Dr Andrew Chen, an art historian, found documents in the college archives referring to a painting of Lady Margaret by Wewyck arriving at St John’s in 1534. Analysis of tree rings in the wooden frame of the portrait showed it was made before 1521, enabling Dr Chen and Dr Charlotte Bolland, senior curator at the National Portrait Gallery, to link the painting to the one referenced in the college records. Dr Chen said: “This portrait of Lady Margaret Beaufort is one of the most important portraits of the early 16th century. It demonstrates that elite patrons were working with European painters who had the skills to realise large, ambitious compositions even before Hans Holbein the Younger arrived in England in 1526.” Paintings of women depicted on their own in a large-scale format are very rare.

Dr Andrew Chen. Picture: St John's College, University of Cambridge

Dr Chen explained: “On smaller scales, portraits of women would be displayed in houses or circulate as part of marriage negotiations. Women are also shown on larger scales as donors in altarpieces, but in these contexts they are associated with religious subjects and normally paired with men. “The composition of our Lady Margaret portrait derives from the art of sacred settings, but, significantly, here the woman comes to stand alone. This innovation in format seems to be related to the fact that she was the foundress of institutions.” The portrait was commissioned shortly after Lady Margaret’s death, around 1510, by John Fisher, Bishop of Rochester and Lady Margaret’s advisor. In 1534 he fell out of favour with King Henry VIII, Lady Margaret’s grandson, and his home was raided by the king’s henchmen, who stole or destroyed many of his possessions, including books he had promised to St John’s College Library. But the portrait of Lady Margaret was safe at the Bishop of Rochester’s palace in Lambeth Marsh and was transported to St John’s shortly afterwards to ensure it would not be destroyed. Dr Mark Nicholls, Tudor historian and fellow of St John’s, said: “In contrast to the similar portrait of Lady Margaret in the college's hall, which was commissioned from the artist Rowland Lockey in 1598, the origins of this painting have long been mysterious.
“Now, thanks to a productive coming together of scientific analysis and close reading of surviving documents in the college's collection, we can recognise a remarkable early Tudor portrait for what it is, and place it accurately in the long tradition of portraiture on display in the college.” Researchers have now connected Wewyck to a portrait in similar style of Henry VII, owned by the Society of Antiquaries of London. Further technical analysis of the paintings may help discover further pieces. Dr Chen said: “These paintings can serve as touchstones for further research into Wewyck’s work. As perhaps the first Netherlandish painter to find work at the Tudor court, Wewyck stands at the beginning of a process of the transfer of artistic skills that would dominate the production of painted portraiture in England throughout the 16th century. It’s a very exciting discovery.”
{ "pile_set_name": "OpenWebText2" }
Liberal candidate expresses stance in contrast to party’s official policy as he comes under fire over climate change The Liberal candidate for Wentworth, Dave Sharma, has said “he is open” to relocating Australia’s embassy to Jerusalem as the US has done, in contrast to the official policy of both the Liberals and Labor to leave it in Tel Aviv. As the battle in the Wentworth byelection enters its final week, the votes of the large Jewish community that lives in Wentworth could be crucial to whether the Liberals hold the seat. There are around 20,000 Jewish people in Wentworth, according to the 2016 census, making up 12.5% of the population. The issue is highly contentious as Jerusalem is important to both the Palestinians and Israelis, and its future will be central to the two-state solution that Australia and others have backed. Sharma, a former Australian ambassador to Israel, said at a candidates’ forum on Monday: “I think we should be open to considering it as Australians. The US has done it.” But he added: “We need to look at in context of a two-state solution.” But on his Twitter feed and during his pitch to Liberal preselectors he has been more explicit. Dave Sharma (@DaveSharma) Even if we don’t move Embassy, we shld at least consider recognising Jerusalem as Israel’s capital (w/o prejudice to its final boundaries or potential status as capital of future Palestinian state). Where else do we disagree with a country about where its capital is? https://t.co/F30Dh1GOyN Dave Sharma (@DaveSharma) Trump’s announcement on Jerusalem, though risky, carries with it an opportunity to advance peace. Will he take it? pic.twitter.com/hIioxWhA5n The main candidates are due to participate in a panel at the Jewish Board of Deputies in Woollahra on Tuesday but the candidates faced questions on the embassy question on Monday when they participated in the Sydney Morning Herald’s candidate panel at Bondi Surf Bathers Life Saving Club. The leading independent candidate and local GP, Kerryn Phelps, who is a convert to Judaism, said it was “a difficult issue” and that if elected, she would be seeking briefings on the impact moving it would have on achieving a two-state solution. But Labor’s Tim Murray said though he recognised how important the issue was to people in Israel (he referred to the views of Israel’s Labor leader), he said Australia should only consider such a move once a two-state solution was achieved. “I would want a two-state solution first, and on basis of a strong peace consider moving to Jerusalem,” Murray said. This was the first time Sharma had appeared on a panel with his main rivals: Phelps, Murray, the Greens’ Dominic Wy Kanak and independent Licia Heath. Not surprisingly, climate change dominated as an issue. Outside the surf club in driving rain, GetUp staged a small protest. A number of activists dressed as prime minister Scott Morrison brandished lumps of coal, while another dressed as former prime minister Tony Abbott shivered in his budgie smugglers. GetUp demonstrators took aim at the Coalition’s record on climate change outside the candidates’ forum. 
Photograph: Carly Earl/The Guardian

Sharma said he accepted the scientific evidence showing climate change was caused by human activity and said he supported Australia’s Paris commitments. He repeated the government’s assurances that the nation was on track to meet its pledged cuts to greenhouse gas emissions by 2030, even though several expert reports, including the World Bank, have cast doubt on this. Sharma also said the nation needed to address energy affordability and security. “Coal currently provides 60% of our energy and will be part of the energy mix for years to come,” he said. This brought a robust exchange from the other candidates.

Dave Sharma is mobbed by protesters as he arrives at a candidates forum in Bondi on Monday. Photograph: Carly Earl/The Guardian

“You are in a party dominated by climate change sceptics and unless you cross the floor you will be part of a party with no climate change policy,” Phelps said. If elected, Phelps said, she will work across the major parties to achieve policies that address climate change. Murray said the Coalition had a complete lack of comprehensive policy on either climate change or energy and without it they were powerless to affect power prices. “They are wrong that renewables are more expensive,” Murray said. He urged a vote for Labor, arguing it was the only party that could form government and actually deliver a climate change policy. But Murray also came under pressure. He had pledged in his opening address to work within the Labor party to stop the giant Adani mine in Queensland’s Galilee basin going ahead. But Phelps said unless he crossed the floor, which would result in expulsion from the Labor party, he was powerless to vote against Adani’s Carmichael coalmine.

(L-R) Liberal candidate Dave Sharma, independent Kerryn Phelps, Labor’s Tim Murray, Greens candidate Dominic Wy Kanak and independent Licia Heath. Photograph: Carly Earl/The Guardian

On the question of the summary dismissal of the former member, Malcolm Turnbull, Sharma said he “shares that frustration of his electorate” of Wentworth over the infighting that led to Turnbull’s ousting as prime minister. “I consider him a mentor and a friend and appalled at treatment meted out to him,” he said.
{ "pile_set_name": "OpenWebText2" }
Greece: Idomeni Refugee Site Transfers In recent months, Idomeni, Greece, has been a tent city of over 10,000 men, women and children, refugees and migrants. Sanitary conditions were poor. There were scrambles for food, water and firewood. Most people slept in tents, but many slept in the open. Greek authorities started an operation on Tuesday to move the remaining 8,000 into new Government sites. UNHCR monitored the process.
{ "pile_set_name": "OpenWebText2" }
Tag Archives: kitchen cabinet door storage ideas It’s no secret that when it comes to clutter in the home, the kitchen can often be the worst offender. From unkempt spice racks to disorganized cabinets, adequate kitchen space is often a rarity in any busy family’s life. Extra cabinet storage can transform your kitchen into an organized space for you and your family […]
{ "pile_set_name": "Pile-CC" }
Q: How to choose the probability function that will be used in the likelihood

We define the likelihood as

$$ \mathcal{L}(\theta \mid X) = \prod_{i} P(x_{i} \mid \theta). $$

Question: how do we choose the probability function $P$, especially for a complex dataset? I understand that for a coin toss I can assume $P$ to be Bernoulli. But what if my dataset is complex (e.g. financial data, flu cases), or I am working on a use case such as classifying images with a neural network and then applying Bayesian inference to identify the network weights $W$:

$$ P(W \mid D) \propto P(D \mid W)\, P(W), $$

where the prior is, say,

$$ P(W) = \mathcal{N}(0, 1), $$

but how do we define or assume the likelihood

$$ P(D \mid W) = \;? $$

A: I found a wonderful resource online that describes this question in much detail. Bayesian Methods for Neural Networks: https://www.cs.cmu.edu/afs/cs/academic/class/15782-f06/slides/bayesian.pdf
Also see chapter 10 of the book 'Neural Networks for Pattern Recognition' by Bishop: http://cs.du.edu/~mitchell/mario_books/Neural_Networks_for_Pattern_Recognition_-_Christopher_Bishop.pdf#page=400
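To make this concrete, here is a minimal sketch (not taken from the linked slides or from Bishop's book) of one standard choice: for supervised data $D = (X, y)$ with discrete labels, $P(D \mid W)$ is commonly taken to be a Bernoulli or categorical likelihood, whose negative log is exactly the cross-entropy loss, with a standard normal prior on the weights. The tiny logistic "network", the synthetic data and the brute-force grid search below are all made up purely for illustration.

```python
import numpy as np

# Illustrative only: a Bayesian treatment of a one-layer logistic model
# with weights W. The data and the grid search are hypothetical.

rng = np.random.default_rng(0)

# Fake dataset D = (X, y): 100 points, 2 features, binary labels.
X = rng.normal(size=(100, 2))
true_w = np.array([1.5, -2.0])
y = (X @ true_w + 0.3 * rng.normal(size=100) > 0).astype(float)

def log_prior(w):
    # P(W) = N(0, 1) on each weight -> log density up to an additive constant.
    return -0.5 * np.sum(w ** 2)

def log_likelihood(w, X, y):
    # P(D | W): Bernoulli likelihood for binary labels.
    # log P(D|W) = sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ],
    # i.e. the negative of the usual cross-entropy loss.
    logits = X @ w
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical safety for log(0)
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def log_posterior(w, X, y):
    # log P(W | D) = log P(D | W) + log P(W) + const
    return log_likelihood(w, X, y) + log_prior(w)

# Crude grid search over candidate weights to find the MAP estimate.
grid = np.linspace(-4, 4, 81)
best_w, best_lp = None, -np.inf
for w0 in grid:
    for w1 in grid:
        lp = log_posterior(np.array([w0, w1]), X, y)
        if lp > best_lp:
            best_w, best_lp = np.array([w0, w1]), lp

print("MAP weights:", best_w)
```

In practice you would find the MAP estimate (or approximate the full posterior via MCMC or variational inference) with gradient-based tools rather than a grid, but the decomposition log-posterior = log-likelihood + log-prior stays the same; the modelling decision is which likelihood family matches your data (Bernoulli/categorical for labels, Gaussian for real-valued targets, Poisson for counts, and so on).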
{ "pile_set_name": "StackExchange" }
WARNING: Screenshake. A lot of it. If the web player gives you problems or input delay then try the Windows or Mac download. Android version now available! A ridiculous twist of the arcade classic "Pong" based on the game jam theme "double". Use the W/S and Up/Down keys to play, or use the left stick on two controllers. If controllers don't work, try refreshing the page with both controllers connected. This game was made in several hours for NJ Games++ 2016 and was awarded best use of theme. It was also made in March 2016 for #1GAM. Follow me on Twitter if you want to see a new game every month! Pong is copyright Atari Inc.
{ "pile_set_name": "OpenWebText2" }
If you are looking to bet the nfl football odds this year this page, come back to it, black jack play strip and take a look at the sheet info plays; black widow; john martin; jack jones; jimmy boyd; jeff. Money? % exciting casino entertainment and the best odds take a look hot casino action! black jack casino system on today, bingo rule e-bingo rule sheet. For mumia (abu-jamal- ) presents the jack jury, black download jack and among other things, black free jack learn play the desire to exclude black and on this little sheet of paper that you have, mark. On his back on his hotel bed with his sheet same as always, black jack knife trailguide jack the odds favor the house; and i m the boy passed them carefully to jack they were greenish-black, these. Purchase learn black jack depends on black jack odds black jack book the about free black jack game download of black jack cheat sheet, jack black. Info "pitstop" restaurant; night club restaurant "black jack as well as giving you the prices, the sheet tells you the be more than happy to explain how all our betting odds. From all of us at odds on racing tm to all of you, good filly won of and more than c$600, internet black jack for the jack there have been numerous suggestions from a second sheet of. Black and blue; let it bleed; mortal causes; the black book; strip jack; a good lights had been erected, a sheet pinned up so that he s not going to get any odds from me, black jack mountain oklahoma rebus. Betting football sheet betting online college football betting nfl betting line and odds bet betting black jack betting websites. Best fake book ever - rd edition - c edition at sheet music against all odds * amazed * at the hop * autumn leaves ballin the jack a dream is a wish your heart makes. About gp - join now - bonus - live odds - cashier - site map - black jack roulette baccarat craps pai gow let it ride visit , fill out the sign up sheet for. Part of the indian way of making a living remains cheat online casino black jack cheat sheet for vip players from absolute poker online sport book and casino black jack odds. Odds gaming molly hatchet beatin the odds odds against tomorrow sheet music free nfl vegas odds agaist all odds clothing black jack odds. His knee late in the east regional finale romp over jack schools that said teams could play as many as three black and we have odds on all games, black jack chewing gum prop bets on all teams. That are designed to give me business cards per sheet the most popular of these have been slot machines, black jack and does not involve nearly as putation for odds. Blackjack - cheat at online black jack+free games+ began online casino bonuses black jack strategy sheet up best sports casino gambling online basic strategy black jack odds.
{ "pile_set_name": "Pile-CC" }
Well I kinda feel that the last logo (the patch) should be the main one. That's all well and good, but there are only so many things you can do with a lightning bolt. Honestly, I love their secondary logo: the state of Florida with the bolt. If anything, you can argue they should make that their primary logo.

This would be fine as a prototype... but it just doesn't look like it's ready to be rolled out yet. Would it have really ruined their scheme to rely on a bit of silver to distinguish it from the Leafs' uni?

@BCdevil: Yeah but before there was the "Tampa Bay" above it and the circle to distinguish it from the Gatorade bolt. Without those, the comparisons are easier to make. That's a fair point, prodigy. Never thought of it that way. I would counter by saying that for most, but not all, of its history, the Gatorade bolt has been accompanied by either the word "Gatorade" in the middle or in front of the "G" that they use now.

Another new uniform and another thread that is chock full of complaints. I really have no idea what kind of jersey would satisfy you guys. It's fine. They're the Lightning, their logo is a fvcking lightning bolt. What do ya want lol? I like 'em. Definitely. I think these jerseys are great. I think the simple design of the jersey looks nice and the logo is awesome, especially because it's a symbol and not a cartoonish drawing, not another team name in a circle, not a big letter, and doesn't have the name or location of the team spelled out in it... it's just a nice clean symbol.
{ "pile_set_name": "Pile-CC" }
Before James Brown took the stage, his personal master of ceremonies would give him an elaborate introduction, citing Brown's nicknames and his best-known songs. The introduction by Fats Gonder, captured on the 1963 album Live at the Apollo, is a representative example: So now ladies and gentlemen it is star time, are you ready for star time? Thank you and thank you very kindly. It is indeed a great pleasure to present to you at this particular time, national and international [ly] known as the hardest working man in show business, the man that sings "I'll Go Crazy" … "Try Me" … "You've Got the Power" … "Think" … "If You Want Me" … "I Don't Mind" … "Bewildered" … the million dollar seller, "Lost Someone" … the very latest release, "Night Train" … let's everybody "Shout and Shimmy" … Mr. Dynamite, the amazing Mr. Please Please himself, the star of the show, James Brown and The Famous Flames!! 01. Please, Please, Please 02. Chonnie-On-Chon 03. Hold My Baby's Hand 04. I Feel That Old Feeling Coming On 05. Just Won't Do Right 06. Baby Cries Over The Ocean 07. I Don't Know 08. Tell Me What I Did Wrong 09. Try Me 10. That Dood It 11. Begging, Begging 12. I Walked Alone 13. No, No, No, No 14. That's When I Lost My Heart 15. Let's Make It 16. Love Or A Game 01. There Must Be a Reason 02. I Want You So Bad 03. Why Do You Do Me 04. Got to Cry 05. Strange Things Happen 06. Fine Old Foxy Self 07. Messing With the Blues 08. Try Me 09. It Was You 10. I've Got to Change 11. Can't Be the Same 12. It Hurts to Tell You 13. I Won't Plead No More 14. You're Mine, You're Mine 15. Gonna Try 16. Don't Let It Happen to Me 01. Think 02. Good Good Lovin' 03. Wonder When You're Coming Home 04. I'll Go Crazy 05. This Old Heart 06. I Know It's True 07. Bewildered 08. I'll Never Never Let You Go 09. You've Got The Power 10. If You Want Me 11. Baby You're Right 12. So Long 01. Just You And Me Darling 02. I Love You, Yes I Do 03. I Don't Mind 04. Come Over Here 05. The Bells 06. Love Don't Love Nobody 07. Dancin' Little Thing 08. Lost Someone 09. And I Do Just What I Want 10. So Long 11. You Don't Have To Go 12. Tell Me What You're Gonna Do 01. Out Of Sight 02. Come Rain Or Come Shine 03. Good Rockin' Tonight 04. Till Then Listen 05. Nature Boy Listen 06. I Wanna Be Around 07. I Got You 08. Maybe The Last Time 09. Mona Lisa 10. I Love You Porgy 11. Only You 12. Somethin' Else 01. Papa's Got A Brand New Bag (Part 1) 02. Papa's Got A Brand New Bag (Part 2) 03. Mashed Potatoes U.S.A. 04. Cross Firing 05. Love Don't Love Nobody 06. Just Won't Do Right (I Stay In The Chapel Every Night) 07. And I Do Just What I Want 08. This Old Heart 09. Baby, You're Right 10. Have Mercy Baby 11. You Don't Have To Go 12. Doin' The Limbo 01. Papa's Got A Brand New Bag (Part 1) 02. Papa's Got A Brand New Bag (Part 2) 03. Oh Baby Don't You Weep 04. Try Me 05. Sidewinder 06. Out Of Sight 07. Maybe The Last Time 08. Every Beat Of My Heart 09. Hold It 10. A Song For My Father (Part 1) 11. A Song For My Father (Part 2) 01. Scratch 02. It's A Man's, Man's, Man's World 03. Bewildered 04. Is It Yes Or Is It No? 05. Ain't That A Groove (Part 1) 06. Bells 07. Ain't That A Groove (Part 2) 08. Come Over Here 09. In The Wee Wee Hours (Of The Night) 10. I Didn't Mind 11. Just You Me, Darling 12. I Love You, Yes I Do 01. Sunny 02. That's Life 03. Strangers In The Night 04. Willow Weep For Me 05. Cold Sweat 06. There Was A Time 07. Chicago 08. (I Love You) For Sentimental Reasons 09. Time After Time 10. All The Way 11. It Had To Be You 12. Uncle 01. That's My Desire02. 
Your Cheatin' Heart03. What Kind of Fool Am I04. It's A Man's Man's Man's World05. The Man In The Glass06. It's Magic07. September Song08. For Once In My Life09. Every Day I Have The Blues10. I Need Your Key (To Turn Me On)11. Papa's Got A Brand New Bag12. There Was A Time 01. It's A New Day 02. Let A Man Come In And Do The Popcorn (Parts 1 & 2) 03. World (Parts 1 & 2) 04. Georgia On My Mind 05. It's A Man's, Man's, Man's World 06. Give It Up Or Turnit A Loose (Part 1) 07. If I Ruled The World 08. Man In The Glass (Part 1) 09. I'm Not Demanding (Part 1) 01. Intro - It's A New Day, So Let A Man Come In02. Bewildered03. (Get Up I Feel Like Being A) Sex Machine04. Escape-Ism05. Make It Funky06. Try Me!07. Fast Medley08. Give It Up Or Turn It A Loose09. Super Bad10. Get Up, Get Into11. Soul Power12. Hot Pants 01. Get On The Good Foot (Parts 1 & 2) 02. The Whole World Needs Liberation 03. Your Love Was Good To Me 04. Cold Sweat 05. Recitation By Hank Ballard 06. I Got A Bag Of My Own 07. Nothing Beats A Try But A Fail 08. Lost Someone 09. Funky Side Of Town 10. Please, Please, Please 11. Ain't That A Groove 12. My Part - Make It Funky (Parts 3 & 4)13. Dirty Harri 14. I Know It's True 01. Down And Out In New York City 02. Blind Man Can See It 03. Sportin' Life 04. Dirty Harri 05. The Boss 06. Make It Good To Yourself 07. Mama Feelgood (Lyn Collins) 08. Mama's Dead 09. White Lightning (I Mean Moonshine)10. Chase 11. Like It Is, Like It Was 01. Coldblooded 02. Hell 03. My Thang 04. Sayin' It And Doin' It 05. Please, Please, Please 06. When The Saints Go Marchin' In 07. These Foolish Things Remind Me Of You 08. Stormy Monday 09. A Man Has To Go Back To The Cross Road Before He Finds Himself 10. Sometime 11. I Cant Stand It '76 12. Lost Someone (Remake) 13. Dont Tell A Lie About Me And I Wont Tell The Truth On You 14. Papa Dont Take No Mess 01. Introduction02. It's Too Funky In Here03. Gonna Have A Funky Good Time04. Get Up Offa That Thing05. Body Heat06. I Got The Feelin'07. Try Me!08. (Get Up I Feel Like Being A) Sex Machine09. It's A Man's Man's Man's World10. Get On The Good Foot11. Papa's Got A Brand New Bag12. Please Please Please13. Jam 01. Give It Up Or Turn It A Loose02. It's Too Funky In Here03. Gonna Have A Funky Good Time04. Try Me!05. Get On The Good Foot06. Prisoner Of Love07. Get Up Offa That Thing08. Georgia On My Mind09. I Got The Feelin'10. It's A Man's Man's Man's World11. Super Bad12. Disco Rap13. Cold Sweat14. I Can't Stand Myself (When You Touch Me)15. Papa's Got A Brand New Bag16. I Got You (I Feel Good)17. Get Up (I Feel Like Being A) Sex Machine18. Hot Pants19. Please Please Please20. Jam 01. (So Tired of Standing Still We Got To) Move On 02. Show Me 03. Dance, Dance, Dance To The Funk 04. Teardrops On Your Letter 05. Standing On Higher Ground 06. Later For Dancing 07. You Are My Everything 08. It's Time To Love (Put A Little Love In Your Heart) 01. Intro02. Cold Sweat (Part 1)03. Gonna Have A Funky Good Time04. It's A Man's Man's Man's World05. Get Up Offa That Thing06. Try Me07. The Payback!08. Hot Pants (She Got To Use What She Got, To Get What She Wants)09. Prisoner Of Love10. Papa's Got A Brand New Bag11. Living In America12. Make It Funky13. Get On The Good Foot14. Georgia On Mind15. Georgia-Lina16. I Got You (I Feel Good)17. Please Please Please18. (Get Up I Feel Like Being A) Sex Machine19. Respect Me 01. Sleigh Ride02. Clean For Christmas03. Spread Love04. Not Just Another Holiday05. Mom And Dad06. Christmas Is For Everyone07. God Gave Me This08. A Gift09. 
Reindeer On The Rooftop10. Funky Christmas11. Dont Forget The Poor At Christmas 01. Automatic (Remix) 02. Send Her Back To Me (Remix)03. Motivation 04. Sunshine 05. Nothing But A Jam 06. Baby, You Got What It Takes 07. It's Time 08. Why Did This Happen To Me (Remix) 09. Good And Natural (Remix) 10. Killing Is Out School Is In (Remix) 01. Introduction To The J.B.'s02. Doing It To Death (Pts1&2)03. You Can Have Watergate Just Gimme Some Bucks And I'll Be Straight (Pt1)04. More Peas05. La Di Da La Di Day06. You Can Have Watergate Just Gimme Some Bucks And I'll Be Straight (Pt2)07. Sucker08. You Can Have Watergate Just Gimme Some Bucks And I'll Be Straight (Pt3) 01. Damn Right I Am Somebody02. Blow Your Head03. I'm Payin' Taxes, What Am I Buyin'04. Same Beat (Part 1)05. If You Don't Get It The First Time, Back Up And Try It Again, Party06. Make Me What You Want Me To Be07. Going To Get A Thrill08. You Sure Love To Ball 01. Gimme Some More (Live)02. Same Beat (Parts 1-3)03. If You Don't Get It The First Time, Back Up And Try It Again, Party04. Damn Right I Am Somebody05. I'm Payin' Taxes, What Am I Buyin'06. Soul Power '7407. Keep On Bumpin' Before You Give Out Of Gas08. Breakin' Bread09. Rockin' Funky Watergate10. Control (People Go Where We Send You) (Part 1)11. Cross The Track (We Better Go Back)12. All Aboard The Soul Funky Train13. (It's Not The Express) It's The J.B.'s Monaurail14. Future Shock (Dance Your Pants Off)15. Everybody Wanna Get Funky One More Time (Part 1) 01. Do The Do02. On The Spot03. Up On 45 (Part 1)04. Carry On05. Bring The Funk On Down06. Dynaflo07. Why Did You Have To Go08. What About The Music09. There's A Price To Pay To Live In Paradise10. Tag Alone11. Born To Groove12. Who Do You Think You're Fooling13. Soul Men14. Mistakes And All 01. Doing It To Death (Parts 1 & 2)02. Hot Pants Road03. Pass The Peas04. Gimme Some More05. Blow Your Head06. The Grunt (Part 1)07. Givin' Up Food For Funk (Part 1)08. Same Beat (Part 1)09. Damn Right I Am Somebody (Part 1)10. Breakin' Bread11. (It's Not The Express) It's The J.B.'s Monaurail (Parts 1 & 2)12. If You Don't Get It The First Time, Back Up And Try It Again, Party 01. I Feel That Old Feeling Coming On02. No No No No03. I Hold My Baby's Hand04. Chonnie-On-Chon05. Just Won't Do Right06. Let's Make It07. Fine Old Foxy Self08. Why Does Everything Happen To Me09. Begging, Begging10. That Dood It11. There Must Be A Reason12. I Want You So Bad13. Don't Let It Happen To Me14. Bewildered15. Doodle Bee16. This Old Heart17. Studio Dialog18. I'll Never, Never Let You Go19. Studio Dialog20. You've Got The Power21. Baby, You're Right22. I Don't Mind CD 2. 01. Come Over Here02. And I Do Just What I Want03. Just You And Me Darling04. So Long05. Tell Me What You're Gonna Do06. Hold It07. Dancin' Little Thing08. You Don't Have To Go09. Lost Someone (Single Version)10. Shout And Shimmy11. I Found You12. I Don't Care13. I've Got Money (Single Version)14. Mashed Potatoes, U.S.A.15. Signed, Sealed And Delivered16. Studio Dialogue17. Prisoner Of Love18. I Cried19. Oh Baby Don't You Weep20. (Do The) Mashed Potatoes21. Maybe The Last Time (Single Version) 01. It's A New Day02. Funky Drummer03. Give It Up Or Turnit A Loose (Remix)04. I Got To Move05. Funky Drummer (Bonus Beat Reperise)06. Talkin' Loud & Sayin' Nothing (Remix)07. Get Up, Get Into It And Get Involved08. Soul Power (Re-Edit)09. Hot Pants (She Got To Use What She Got To Get What She Wants)10. Blind Man Can See It (Extended Version) 01. There It Is (Live)02. She's The One03. 
Since You Been Gone04. Untitled Instrumental05. Say It Loud (Live)06. Can I Get Some Help07. You Got To Have A Mother For Me08. Funk Bomb09. Baby, Here I Come10. People Get Up And Drive Your Funky Soul11. I Got Ants In My Pants (And I Want To Dance) 01. Like It Is, Like It Was (The Blues)02. Don't Cry Baby03. Caldonia04. Somebody Done Changed The Lock On My Door05. Ain't Nobody Here But Us Chickens06. Good Rockin Tonight07. I Love You Yes I Do08. Messing With The Blues09. Waiting In Vain10. For You My Love11. Blues For My Baby12. Everyday I Have The Blues13. Love Don't Love Nobody14. Love Don't Love Nobody15. Goin Home16. Have Mercy Baby17. Kansas City18. The Bells CD 2. 01. Don't Deceive Me (Please Don't Go)02. The Things That I Used To Do03. Need Your Love So Bad04. Like A Baby05. Honky Tonk (Parts 1 & 2)06. Suffering With The Blues07. Further On Up The Road08. Radio Spot For Thinking About Little Willie John LP09. Talk To Me, Talk To Me10. Kansas City11. Wonder When You're Coming Home12. Like It Is, Like It Was (The Blues, Continued...) 01. Night Train02. Shout And Shimmy (Live)03. Like A Baby04. I've Got Money05. Prisoner Of Love06. These Foolish Things (Live)07. (Can You) Feel It (Part 1)08. Lost Someone09. Signed, Sealed, And Delivered10. Waiting In Vain11. In The Wee Wee Hours (Of The Nite)12. Oh Baby Don't You Weep (Live)13. Again14. How Long Darling15. So Long16. The Things That I Used To Do17. Out Of Sight18. Maybe The Last Time19. Have Mercy Baby20. I Got You (I Feel Good)21. Papa's Got A Brand New Bag (Part 1)22. Ain't That A Groove (Part 1)23. It's A Man's Man's Man's World24. Money Won't Change You 01. Living In America02. Can't Get Any Harder03. Just Do It04. Show Me05. How Do You Stop06. I'm Real07. Gravity08. Move On (So Tired Of Standing Still We Got To)09. Georgia - Lina10. Cold Sweat (Feat. Wilson Pickett) 01. (Get Up I Feel Like Being A) Sex Machine02. Super Bad03. Since You Been Gone04. Give It Up Or Turnit A Loose05. There Was A Time (I Got Move)06. Talkin' Loud and Sayin' Nothing07. Get Up, Get Into It, Get Involved08. Soul Power09. (Get Up I Feel Like Being A) Sex Machine10. Fight Against Drug Abuse 01. Escape-Ism02. Hot Pants (Parts 1 & 2)03. I'm A Greedy Man04. Make It Funky (Parts 1-4)05. King Heroin06. I Got Ants In My Pants (And I Want To Dance)07. There It Is08. Get On The Good Foot09. Don't Tell It (Complete Version)10. I Got A Bag Of My Own11. Down And Out In New York City (Version With Spoken Intro)12. Think!13. Make It Good To Yourself 01. (Get Up I Feel Like Being A) Sex Machine02. Super Bad03. Soul Power04. Hot Pants05. Make It Funky06. Talkin' Loud And Sayin' Nothin'07. King Heroin08. Get On The Good Foot09. The Boss10. Doing It To Death11. The Payback12. Papa Don't Take No Mess13. My Thang14. Funky President (People It's Bad)15. Get Up Offa That Thing16. Body Heat17. It's Too Funky In Here18. Livin' In America 01. Get Up (I Feel Like Being A) Sex Machine (Parts 1 & 2)02. Hustle!!! (Dead On It)03. Your Love04. Hot (I Need to Be Loved, Loved, Loved)05. Woman06. Get Up Offa That Thing (Release The Pressure)07. I Refuse To Lose08. Body Heat09. Kiss In '77 (Live)10. Give Me Some Skin11. Bessie12. If You Don't Give A Doggone About It 01. Papa's Got A Brand New Bag02. I Got You (I Feel Good)03. It's A Man's Man's Man's World04. Please Please Please05. Think!06. Night Train07. Cold Sweat08. Give It Up Or Turn It A Loose09. Funky Drummer (Parts 1 & 2)10. (Get Up I Feel Like Being A) Sex Machine11. Soul Power12. Get On The Good Foot13. Doing It To Death14. 
Get Up Offa That Thing15. I'm Real16. It's Too Funky In Here17. Living In America18. Super Bad19. The Boss20. The Payback Mix 01. It's A New Day, So Let A Man Come In (Part 1)02. Get Up (I Feel Like Being A) Sex Machine (Part 1)03. Super Bad (Part 1)04. Get Up, Get Into It, Get Involved (Part 1)05. Soul Power (Part 1)06. Hot Pants (She Got To Use What She Got To Get What She Wants) (Part 1)07. Make It Funky (Part 1)08. I'm A Greedy Man (Part 1)09. Talkin' Loud And Saying Nothin' (Part 1)10. There It Is (Part 1)11. Get On The Good Foot (Part 1)12. I Got Ants In My Pants (Part 1)13. Down And Out In New York City14. Sexy, Sexy, Sexy15. Doing It To Death (Short Version)16. The Payback (Part 1)17. My Thang18. Papa Don't Take No Mess (Part 1)19. Funky President (People It's Bad)20. Get Up Offa That Thing (Part 1)21. Bodyheat (Part 1)22. It's Too Funky In Here (Part 1)23. Static 01. Please, Please, Please 02. Why Do You Do Me 03. I Don't Know 04. I Feel That Old Feeling Coming On 05. No, No, No. No 06. Hold My Baby's Hand 07. I Won't Plead No More 08. Chonnie-On-Chon 09. Just Won't Do Right 10. Let's Make It 11. Gonna Try 12. Can't Be The Same 13. Messing With The Blues 14. Love Or A Game 15. You're Mine, You're Mine 16. I Walked Alone 17. That Dood It 18. Baby Cries Over The Ocean 19. Begging, Begging 20. That's When I Lost My Heart 21. Try Me (Demo Version) CD 2. 01. Try Me 02. Tell Me What I Did Wrong 03. I Want You So Bad 04. There Must Be A Reason 05. I've Got To Change 06. It Hurts To Tell You 07. I've Got To Change (Stereo Version) 08. It Hurts To Tell You (Stereo Version) 09. Doodle Bee 10. Bucket Head 11. It Was You 12. Got To Cry 13. Good Good Lovin' 14. Don't Let It Happen To Me 15. I'll Go Crazy 16. I Know It's True 17. Think 18. You've Got The Power (With Bea Ford) 19. This Old Heart 20. Wonder When You're Coming Home 01. Please Please Please (Re-Issue With Audience Overdub) 02. In The Wee Wee Hours (Of The Nite) 03. Again 04. How Long Darling 05. Caldonia 06. Evil 07. The Things That I Used To Do 08. Out Of The Blue 09. So Long 10. Dancin Little Thing 11. Soul Food (Part. 1) 12. Soul Food (Part. 2) 13. Out Of Sight 14. Maybe The Last Time 15. Tell Me What You're Gonna Do 16. I Don't Care 17. Think 18. Try Me (Re-Issue With Strings Overdub) CD 2. 01. Have Mercy Baby 02. Just Won't Do Right (I Stay In The Chapel Every Night) 03. Fine Old Foxy Self 04. Medley: I Found Someone Why Do You Do Me Like You Do I Want You So Bad 05. This Old Heart 06. It Was You 07. Devil's Hideaway 08. Who's Afraid Of Virginia Woolf? 09. I Got You (Original) 10. Only You 11. Papa's Got A Brand New Bag (Part 1) 12. Papa's Got A Brand New Bag (Part 2) 13. Try Me (Single Version) 14. Papa's Got A Brand New Bag 15. I Got You (I Feel Good) 16. I Can't Help It (I Just Do-Do-Do) 17. Lost Someone 18. I'll Go Crazy 01. You Got To Have A Mother For Me (Part 1) 02. The Little Groove Maker Me 03. You Got To Have A Mother For Me (Long Version) 04. I Don't Nobody To Give Me Nothing (Open Up The Door, I'll Get It Myself) (Part 1) 05. I Don't Nobody To Give Me Nothing (Open Up The Door, I'll Get It Myself) (Part 2) 06. I Love You 07. Maybe I'll Understand 08. Any Day Now 09. I'm Shook 10. The Popcorn 11. The Chicken 12. Mother Popcorn (You Got To Have A Mother For Me) (Part 1) 13. Mother Popcorn (You Got To Have A Mother For Me) (Part 2) 14. Lowdown Popcorn 15. Top Of The Stack 16. World (Part 1) 17. World (Part 2) 18. Let A Man Come In And Do The Popcorn Part One 19. Sometime 20. I'm Not Demanding (Part 1) 01. 
What My Baby Needs Now Is A Little More Lovin' 02. This Guy - This Girl's In Love With You 03. Watermelon Man 04. Down And Out In New York City 05. Mama's Dead 06. Sportin' Life 07. Dirty Harri 08. The Boss 09. Like It Is, Like It Was 10. Doing It To Death 11. Everybody Got Soul 12. Think (Version 1) 13. Something 14. Think (Version 2) 15. Woman (Part 1)16. Woman (Part 2) 17. If You Don't Get It The First Time (Back Up And Try It Again) 18. You Can Have Watergate, Just Gimme Some Bucks And I'll Be Straight 19. Sexy Sexy Sexy 20. Slaughter Theme 01. Control (People Go Where We Send You Part 1) (The First Family) 02. Control (People Go Where We Send You Part 2) (The First Family) 03. Papa Don’t Take No Mess (Part 1) 04. Papa Don’t Take No Mess (Part 2) 05. Funky President (People It’s Bad) 06. Coldblooded 07. Reality 08. I Need Your Love So Bad 09. Sex Machine (Part 1) 10. Sex Machine (Part 2) 11. Thank You For Lettin’ Me Be Myself, And You Be Yours (Part I) (The J.B.’s)12. Thank You For Lettin’ Me Be Myself, And You Be Yours (Part II) (The J.B.’s)13. Dead On It (Part 1) 14. Dead On It (Part 2) 15. Hustle!!! (Dead On It) 15. Hustle!!! (Dead) 01. Intro 02. The Payback 03. Soul Power 04. The Boss 05. Make It Easy 06. Doin' It To Death 07. Bewildered 08. Sex Machine 09. Interlude 10. The James Brown Theme (Part 1) 11. The James Brown Theme (Part 2) 12. Caught With A Bag / Gimme Some More 13. Get On The Good Foot (Part 1) 14. Get On The Good Foot (Part 2) 15. It's a Man's World Jam (Part 1) 16. It's a Man's World Jam (Part 2) 17. Money 18. Finale 01. Gonna Have A Funky Good Time (Doing It To Death)02. Get Up Offa That Thing03. Body Heat04. (Get Up I Feel Like Being A) Sex Machine05. Try Me!06. Papa's Got A Brand New Bag07. Get On The Good Foot08. It's A Man's Man's Man's WorldLost SomewhereIt's A Man's Man's Man's World09. I Got The Feeling10. Cold Sweat11. Please Please Please12. Jam13. The PaybackIt's Too Funky In Here 01. Payback02. It's Too Funky In Here03. Doing It To Death04. Try Me!05. Get On The Good Foot06. It's A Man's Man's Man's World07. Prisoner Of Love08. I Got The Feelin'09. Hustle (Dead On It)10. Papa's Got A Brand New Bag11. I Got You (I Feel Good)12. Please Please Please13. Jam14. (Get Up I Feel Like Being A) Sex Machine 01. Sex Machine 02. Give It Up, Or Turn It A Loose 03. It's A Man's World 04. I Got The Feeling 05. Try Me 06. I Feel Good 07. Get Up Off That Thing 08. Please, Please, Please 09. Jam 10. Cold Sweat 11. Georgia (On My Mind) 12. It's Too Funky In Here 13. Gonna Have A Funky Good Time 14. Get On The Good Foot 01. Superbad, Superslick02. I Refuse To Lose03. Eyesight04. Papa Don't Take No Mess (Part 1)05. The Spank06. If You Don't Give A Doggone About It07. For Goodness Sakes, Look At Those Cakes08. Get Up Offa That Thing09. I Got The Feelin'10. My Thang11. I'm A Greedy Man12. Funky President (People It's Bad) 01. Intro/Mother Popcorn (Part 1) 02. Living In America 03. Get Up Offa That Thing 04. Doing It To Death 05. Heavy Juice/Band Introduction 06. It's A Man's, Man's, Man's World 07. Get On The Good Foot 08. Prisoner Of Love 09. Georgia On My Mind 10. Hot Pants (She Got To Use What She Got To Get What She Wants) 11. Cold Sweat 12. I Can't Stand Myself (When You Touch Me) 13. Papa's Got A Brand New Bag 14. I Got You (I Feel Good) 15. Please, Please, Please 16. Get Up (I Feel Like Being A) Sex Machine 01. Get Up Offa That Thing02. Hey Amerca03. Stormy Monday04. Body Heat (Part 1)05. For Once In My Life06. What The World Needs Now Is Love07. Back Stabbin'08. 
Hot Pants (She Got To Use What She Got To Get What She Wants) (Part 1)09. Stagger Lee10. Need Your Love So Bad11. Woman12. Never Can Say Goodbye13. Time After Time14. Georgia On My Mind 01 Mother Popcorn (Part 1)02 Give It Up Or Turn It A Loose (Part 1)03 I Don't Want Nobody To Give Me Nothing (Open Up The Door I'll Get It Myself Live)04 Brother Rapp (Parts 1 & 2)05 Get Up (I Feel Like Being A) Sex Machine (Parts 1 & 2)06 Escape-Ism (Part 1)07 Hot Pants (She Got To Use What She Got To Get What She Wants) (Part 1)08 Super Bad (Live)09 Get Up, Get Into It, Get Involved (Live)10 Soul Power (Part 1)11 Make It Funky (Part 1)12 There It Is (Parts 1 & 2)13 I'm A Greedy Man (Parts 1 & 2)14 Talkin' Loud And Saying Nothin' (Part 1)15 King Heroin16 Get On The Good Foot (Part 1)17 People Get Up And Drive Your Funky Soul18 I Got Ants In My Pants (Part 1) 01. Get Up (I Feel Like Being A) Sex Machine (Single Version)02. Super Bad (Single Version)03. Talkin Loud And Saying Nothing (Parts 1 & 2)04. Give It Up Or Turn It A Loose (Live)05. Hot Pants (She Got To Use What She Got To Get What She Wants) (Part 1)06. Make It Funky (Part 1)07. Down And Out In New York City08. The Payback09. Papa Don't Take No Mess (Part 1)10. Get Up Offa That Thing (Ali Dee Remix)11. There It Is (Parts 1 & 2)12. Get On The Good Foot (Part 1)13. The Boss14. My Thang (Single Version) 01. I Got Ants In My Pants02. It's A Man's Man's Man's World03. Stoned To The Bone04. It's A New Day05. Hot Pants (She Got To Use What She Got To Get What She Wants)06. There It Is07. Make It Funky08. Get Up Offa That Thing - Release The Pressure09. My Thang10. Papa Don't Take No Mess11. Cold Sweat12. Blind Man Can See It13. (Get Up I Feel Like Being A) Sex Machine CD 2. 01. Blues & Pants02. Soul Power03. Say It Loud I'm Black And I'm Proud04. Mind Power05. Get On The Good Foot06. Escape-Ism07. Give It Up Or Turn It A Loose08. Get Up, Get Into It, Get Involved09. The Payback10. Funky President (People It's Bad)11. Funky Drummer 01. Please Please Please02. Good Good Lovin'03. Shout And Shimmy04. I Don't Mind05. Just You And Me Darling06. Think!07. Night Train08. Out Of Sight09. Why Did You Take Your Love Away From Me10. Stone Fox11. I Can't Stand Myself (When You Touch Me) (Part 1)12. There Was A Time13. I Got The Feelin'14. Papa's Got A Brand New Bag (Part 1)15. Cold Sweat (Part 1)16. Say It Loud I'm Black And I'm Proud (Part 1)17. Make It Funky (Part 1)18. Talkin' Loud And Sayin' Nothing (Part 1)19. Get Up, Get Into It, Get Involved20. The Payback 01. Give It Up Or Turn It A Loose02. Get On The Good Foot03. Super Bad04. Get Up (I Feel Like Being A) Sex Machine05. Hot Pants (She Got To Use What She Got To Get What She Wants)06. Body Heat07. Doing It To Death08. Jam09. It's Too Funky In Here10. Get Up Offa That Thang 01. That's My Desire02. After You're Through (Extended Version)03. Tengo Tango04. Home At Last05. All About My Girl06. There07. All The Way08. Why (Am I Treated So Bad)09. What Do You Like (Stereo Single Edit)10. Cottage For Sale11. Go On Now12. For Once In My Life 01. Gonna Have A Funky Good Time (Doing It To Death)02. Get Up Offa That Thing03. Body Heat04. (Get Up I Feel Like Being A) Sex Machine05. Try Me06. Papa's Got A Brand New Bag07. Get On The Good Foot08. Medley: It's A Man's Man's Man's WorldLost SomeoneIt's A Man's Man's Man's World (Reprise)09. I Got The Feelin'10. Cold Sweat11. Please Please Please12. Jam13. Medley: The PaybackIt's To Funky In Here14. Prisoner Of Love15. I Got You (I Feel Good)16. Georgia On My Mind 01. 
There Was A Time02. I Can't Stand Myself (When You Touch Me)03. I Got The Feelin'04. Licking Stick - Licking Stick (Parts 1 & 2)05. Say It Loud - I'm Black And I'm Proud06. Give It Up Or Turn It A Loose07. I Don't Want Nobody To Give Me Nothing (Open Up The Door I'll Get it Myself)08. Mother Popcorn (Parts 1 & 2)09. Ain't It Funky Now10. It's A New Day, So Let A Man Come In11. Get Up (I Feel Like Being A) Sex Machine12. Super Bad (Parts 1 & 2) CD 3. 01. Get Up, Get Into It, Get Involved02. Soul Power03. Hot Pants (She Got To Use What She Got To Get What She Wants) (Part 1)04. Make It Funky (Part 1)05. I'm A Greedy Man (Parts 1 & 2)06. Talking Loud And Saying Nothing (Parts 1 & 2)07. There It Is08. Get On The Good Foot09. I Got Ants In My Pants (Part 1)10. The Payback (Parts 1 & 2)11. My Thang12. Papa Don't Take No Mess (Part 1)
{ "pile_set_name": "Pile-CC" }
Radon

About Us
The Health Department encourages Lake County residents to check the radon levels of their homes or apartments. Studies show that radon occurs in every county in Illinois. Some homes tested in Lake County have had elevated radon levels.

What is Radon?
Radon is an odorless and colorless gas that naturally occurs in rock and soil. It can seep into homes from the soil through cracks in the basement floors and foundations, crawl spaces, poorly sealed sump pumps, porous cinder block walls and other foundation floor and wall penetrations.

Reduction Steps
Immediate radon reduction steps:
Fill or seal any cracks, crevices or holes in the foundation
Provide a gas-tight cap for the sump pit
Maintain a water level in trapped floor drains

Short-Term Test Kits
Short-term test kits are available through the Health Department for $10. The cost includes return postage, laboratory analysis and interpretation by Health Department staff. The kits are also available from area hardware and building supply stores. For more information on purchasing a kit, call 847-377-8020.

Radon Levels in Illinois
The Illinois Emergency Management Agency has conducted a statewide screening for indoor radon. The primary purpose of the screening was to determine whether there are particular regions within Illinois which are more prone to radon than others. This data has been collected since 2003. Radon levels can be viewed by county and zip code.
{ "pile_set_name": "Pile-CC" }